Upload papers/2402/2402.08491.tex with huggingface_hub
\documentclass[11pt]{article}
\topmargin -20mm

\textheight 24truecm
\textwidth 16truecm
\oddsidemargin 5mm
\evensidemargin 5mm
\setlength\parskip{10pt}
\pagestyle{plain}

\usepackage{boxedminipage}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsthm}
\usepackage{t1enc}
\usepackage{cite}
\usepackage[dvipsnames]{xcolor}
\usepackage{svg}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{tikz}
\usepackage{float}
\usepackage{bbm}
\usepackage{url}
\usepackage{authblk}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}

\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\title{Deep Reinforcement Learning for Controlled Traversing of the Attractor Landscape of Boolean Models in the Context of Cellular Reprogramming}
\date{}

\author[1, 2]{Andrzej Mizera}
\author[1, 2]{Jakub Zarzycki}

\affil[1]{University of Warsaw}
\affil[2]{IDEAS NCBR}

\newcommand{\C}{\mathbb{C}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Z}{\mathbb{Z}}

\newcommand{\AM}[1]{\textcolor{blue}{#1}}


\begin{document}

\maketitle

\abstract{Cellular reprogramming can be used for both the prevention and cure of different diseases. However, the efficiency of discovering reprogramming strategies with classical wet-lab experiments is hindered by lengthy time commitments and high costs. In this study, we develop a~novel computational framework based on deep reinforcement learning that facilitates the identification of reprogramming strategies. For this aim, we formulate a~control problem in the context of cellular reprogramming for the frameworks of Boolean networks (BNs) and probabilistic Boolean networks (PBNs) under the asynchronous update mode. Furthermore, we introduce the notion of a~pseudo-attractor and a~procedure for the identification of pseudo-attractor states during training. Finally, we devise a~computational framework for solving the control problem, which we test on a~number of different models.}




\section{Introduction}

Complex diseases pose a~great challenge largely because genes and gene products operate within a~complex system --~the {\it gene regulatory network} (GRN). There is an~inherent dynamic behaviour emerging from the structural wiring of a~GRN: gene expression profiles, i.e., states of a~GRN, evolve in time to finally reach stable states referred to as {\it attractors}. Attractors correspond to cell types or cell fates~\cite{HEBI05}. During normal development of a~multi-cellular organism, not all attractors are manifested. Some of the `abnormal attractors', associated with diseases, become accessible by disturbance of the GRN's dynamics. This is seldom a~consequence of a~disruption in a~single gene, but rather arises as an~aftermath of GRN perturbations~\cite{Barabasi2011NetMed}. This could be cured by guiding cells to desired `healthy' attractors with experimental techniques of {\it cellular reprogramming}, i.e., the artificial changing of cell fate. Unfortunately, finding effective interventions that trigger desired changes using solely wet-lab experiments is difficult, costly, and requires lengthy time commitments. This motivates us to consider \emph{in-silico} approaches.

Although various computational frameworks are commonly used to model GRNs, the formalism of Boolean networks (BNs) and its extension, i.e., probabilistic Boolean networks (PBNs), have the advantage of being simple yet capable of capturing the important dynamic properties of the system under study. As such, they facilitate the modelling of large biological systems. This is especially relevant in the context of \emph{in-silico} identification of effective cellular reprogramming strategies, which requires large GRNs to be modelled.

Identification of cellular reprogramming strategies can be stated as a~control problem of BN and PBN models of GRNs. Although many BN/PBN control methods exist in the literature, the existing structure- and dynamics-based state-of-the-art computational techniques are limited to small and mid-size networks, i.e., of up to a~hundred genes or so, and usually require the systems to be decomposed in some way. This is often insufficient for cellular reprogramming considerations.

The issue of scalability can be addressed by devising new methods based on deep reinforcement learning (DRL) techniques, which have proved very successful in decision problems characterised by huge state-action spaces. To contribute to the realisation of this idea, we formulate a~control problem in the context of cellular reprogramming for the frameworks of BNs and PBNs under the asynchronous update mode. Furthermore, we introduce the notion of a~pseudo-attractor and a~procedure for identifying pseudo-attractor states during DRL agent training. Finally, these contributions allow us to devise a~DRL-based framework for solving the control problem. We consider our contributions a~relevant step towards achieving scalable control methods for large Boolean models of GRNs for identifying effective and efficient cellular reprogramming strategies.

The paper is structured as follows. Related work is discussed in Sec.~\ref{sec:rw}. Preliminaries are provided in Sec.~\ref{sec:preliminaries}. We formulate our control problem in the context of cellular reprogramming in Sec.~\ref{sec:control_problem} and devise our DRL-based control framework in Sec.~\ref{sec:framework}. The experiments performed to evaluate our framework and the obtained results are presented in Sec.~\ref{sec:experiments} and Sec.~\ref{sec:results}, respectively. Finally, we conclude our study in Sec.~\ref{sec:conclusions}.

\section{Related work}
\label{sec:rw}
\subsection{Dynamics-based approaches to GRN control}
Identification of proper control strategies for non-linear systems requires both the network structure and its dynamics~\cite{GR16}. Thus, we focus on dynamics-based and DRL-based methods for BN/PBN control. An~efficient method based on the `divide and conquer' strategy was proposed in~\cite{PSPM20} to solve the minimal {\em one-step source-target control} problem by using instantaneous, temporary, and permanent gene perturbations. The minimal {\em sequential source-target control} and the {\em target control} problems of BNs were considered in~\cite{SP20a} and~\cite{SP20b}, respectively. All these methods were implemented in the software tool CABEAN~\cite{SP21}.
Recently, semi-symbolic algorithms were proposed in~\cite{BBPS+23} to stabilise partially specified asynchronous BNs in states exhibiting specific traits.
In~\cite{Pauleve23BoNesis}, the control problem for the most permissive BN update mode is considered in the context of fixed points and minimal trap spaces.

\subsection{DRL-based approaches to GRN control}

The application of reinforcement learning to the control of GRNs was pioneered in~\cite{SPA13}, with a~focus on how to control GRNs by avoiding undesirable states in terms of the steady-state probabilities of PBNs. The main idea was to treat the time-series gene expression samples as a~sequence of experience tuples and use a~batch version of Q-Learning to produce an~approximated policy over the experience tuples.
Later, the BOAFQI-Sarsa method, which does not require time-series samples, was devised in~\cite{NCB18}. A~batch reinforcement learning method, mSFQI, was proposed in~\cite{NBC20} for control based on probabilities of gene activity profiles.
Recently, the study of~\cite{AYGV20} used a~Deep Q-Network with prioritised experience replay for the control of synchronous PBNs to drive the networks from a~particular state towards a~more desirable one.
Finally, a~DRL-based approximate solution to the control problem in synchronous PBNs was proposed in~\cite{MCSW22}. The proposed method finds a~control strategy from any network state to a~specified target attractor using a~Double Deep Q-Network model.


\section{Preliminaries}
\label{sec:preliminaries}

\subsection{Boolean and probabilistic Boolean networks}
Boolean networks are a~well-established framework for the modelling of GRNs. A~Boolean network consists of nodes that can be in one of two states and of functions describing how the individual nodes interact with each other. PBNs are an~extension of the formalism of BNs.

\begin{definition}(Boolean Network)
A~\emph{Boolean Network} is defined as a~pair $(V, F)$, where $V = \{x_1, x_2, \ldots, x_n\}$ is a~set of binary-valued nodes (also referred to as genes) and $F = \{f_1, f_2, \ldots, f_n\}$ is a~set of Boolean predictor functions, where $f_i(x_{i_1},x_{i_2}, \ldots, x_{i_k})$ defines the value of node $x_i$ depending on the values of the $k \leq n$ parent nodes $x_{i_1},x_{i_2},\ldots, x_{i_k}$ with $i_j \in [1..n]$ for $j\leq k$.
\end{definition}

Since interactions in biology are usually more complex, we need a~more general model of a~GRN. We achieve this by allowing each node to have multiple Boolean functions. Formally, probabilistic Boolean networks are defined as follows:
\begin{definition}(Probabilistic Boolean Network)
A~\emph{probabilistic Boolean network} is defined as a~pair $(V, \mathcal{F})$, where $V = \{x_1, x_2, \ldots, x_n\}$ is a~set of binary-valued nodes (also referred to as genes) and $\mathcal{F} = (F_1, F_2,\ldots, F_n)$ is a~list of sets. Each node $x_i \in V$, $i = 1, 2, \ldots, n$, has an~associated set $F_i \in \mathcal{F}$ of Boolean predictor functions: $F_i = \{f^i_1, f^i_2, \ldots, f^i_{l(i)}\}$, where $l(i)$ is the number of predictor functions of node $x_i$. Each $f^i_j \in F_i$ is a~Boolean function defined with respect to a~subset of $V$ referred to as the parent nodes of $f^i_j$ and denoted $\textrm{Pa}(f^i_j)$. For each node $x_i \in V$ there is a~probability distribution $\mathbf{c}^i = (c^i_1, c^i_2, \ldots, c^i_{l(i)})$ on $F_i$, where each predictor function $f^i_j \in F_i$ has an~associated selection probability denoted $c^i_j$; it holds that $\sum_{j=1}^{l(i)}c^i_j=1$.

A~PBN in which each node admits only one Boolean function is a~\emph{Boolean network}.
\end{definition}




\subsection{Network dynamics}

We define a~\emph{state} of a~BN/PBN as an~$n$-dimensional vector $\mathbf{s} \in \{0,1\}^n$, where the $i$-th element represents the state of gene $x_i$ for $i \in [1..n]$.
A~BN/PBN evolves in discrete time steps. It starts in an~initial state $\mathbf{s}_0$ and its state gets updated in every time step in accordance with the predictor functions.
In this study, we focus on asynchronous updating, which is preferable in the context of GRN modelling. Under the asynchronous scheme, a~single gene $x_i$ is selected and updated in accordance with its predictor function $f_i$ (BNs) or with one randomly selected from $F_i$ in accordance with $\mathbf{c}^i$ (PBNs).
The network dynamics can be depicted in the form of a~\emph{state transition graph}. Based on this concept, we can introduce the notion of a~BN/PBN attractor.
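
Before turning to the state transition graph, we illustrate the asynchronous update rule with a~minimal Python sketch (an~illustrative toy example only, not part of the pbn-STAC implementation); the two-gene PBN, its predictor functions, and their selection probabilities are arbitrary choices made here solely to make the update rule concrete.
\begin{verbatim}
import random

# Toy two-gene PBN: for each gene, a list of (predictor, selection probability).
# Each predictor maps the full state tuple to the new value of that gene.
PREDICTORS = [
    [(lambda s: s[0] and not s[1], 0.7), (lambda s: s[0], 0.3)],   # F_1
    [(lambda s: s[0] or s[1], 1.0)],                               # F_2
]

def asynchronous_step(state, rng=random):
    """One asynchronous update: pick a single gene uniformly at random,
    pick one of its predictors according to its selection probabilities,
    and recompute only that gene's value."""
    i = rng.randrange(len(state))
    functions, probabilities = zip(*PREDICTORS[i])
    f = rng.choices(functions, weights=probabilities, k=1)[0]
    new_state = list(state)
    new_state[i] = int(f(state))
    return tuple(new_state)

state = (1, 0)
for _ in range(5):
    state = asynchronous_step(state)
\end{verbatim}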

\begin{definition}(State Transition Graph (STG))
A~state transition graph of a~BN/PBN of $n$ genes under the asynchronous update mode is a~graph $G(S,\rightarrow)$, where $S = \{0,1\}^n$ is the set of all possible states and $\rightarrow$ is the set of directed edges such that a~directed edge from $s$ to $s'$, denoted $s \rightarrow s'$, is in $\rightarrow$ if and only if $s'$ can be obtained from $s$ by a~single asynchronous update.
\end{definition}




\begin{definition}(Attractor)
An~\emph{attractor} of a~BN/PBN is a~bottom strongly connected component in the STG of the network. A~\emph{fixed-point attractor} and a~\emph{multi-state attractor} are bottom strongly connected components consisting of a~single state or of more than one state, respectively.
\end{definition}


\begin{example}
\label{running}
We consider a~PBN of 4 genes $V=\{x_0, x_1, x_2, x_3\}$ regulated in accordance with the following Boolean functions:

\begin{center}
\begin{tabular}{ c }
$f_0^1(x_0) = x_0$ \\
$f_0^2(x_0, x_1, x_2, x_3) = x_0 \& \neg (x_0 \& \neg x_1 \& \neg x_2 \& x_3)$\\
$f_1^1(x_0, x_1) = \neg x_0 \& x_1$\\
$f_1^2(x_0, x_1, x_2, x_3) = \neg x_0 \& (x_1 | (x_2 \& x_3))$ \\
$f_2^1(x_0, x_1, x_2, x_3) = \neg x_0 \& (x_1 \& x_2 \& x_3)$\\
$f_2^2(x_0, x_1, x_2, x_3) = x_0 \& (\neg x_1 \& \neg x_2 \& \neg x_3)$ \\
$f_3^1(x_0, x_1, x_2, x_3) = \neg x_0 \& (x_1 | x_2 | x_3)$\\
$f_3^2(x_0, x_1, x_2, x_3) = \neg x_0 \& (x_1 | x_2 | x_3)$
\end{tabular}
\end{center}
Under the asynchronous update mode, the dynamics of the PBN is governed by the STG depicted in Fig.~\ref{fig:stg}.
\end{example}


\subsection{Reinforcement Learning}

The main task of reinforcement learning (RL) is to solve sequential decision problems by optimising a~cumulative reward function.
A~\emph{policy} is a~strategy that determines which action to take, and an~\emph{optimal policy} is one obtained by selecting the actions that maximise the future cumulative reward.
It can be obtained by solving the \emph{Bellman equation}, which expresses the relationship between the value of a~state and the expected future rewards:
$$V(s) = \max_a\Big[R_a(s, s') + \gamma \sum_{s'} P(s' \mid s,a) V(s')\Big],$$
where $V(s)$ is the value of state $s$, $R_a(s, s')$ is the immediate reward, $P(s' \mid s,a)$ is the transition probability to the next state $s'$, and $\gamma$ is the discount factor. The equation guides the RL agent's decision-making by considering both immediate rewards and the discounted value of future states, forming the basis for reinforcement learning algorithms.
To find an~approximate solution to the Bellman equation, the $Q$ function is considered, which is defined as the total discounted reward received after taking action $a$ in state $s$:
$$Q(s, a) = R_a(s, s') + \gamma \sum_{s'} P(s' \mid s, a) \max_{a'} Q(s', a').$$
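
In practice, when the transition probabilities are unknown, $Q$ can be estimated from sampled transitions. The following short Python sketch shows the standard tabular Q-learning update towards the Bellman target (a~generic textbook scheme given here only for illustration; it is not the learning algorithm used in pbn-STAC):
\begin{verbatim}
from collections import defaultdict

GAMMA = 0.95   # discount factor
ALPHA = 0.1    # learning rate

# Q-table: maps (state, action) pairs to estimated Q-values.
Q = defaultdict(float)

def q_learning_update(state, action, reward, next_state, actions):
    """One temporal-difference update towards the Bellman target
    r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
\end{verbatim}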

\subsection{Q function approximations}
\label{sec:Q-function_approx}

In the case of large state-action spaces, the $Q$ function values often cannot be determined exactly; therefore, they are approximated using DRL. It was shown that, as the agent explores the environment, this approximation converges to the true values of $Q$~\cite{rl}.
Under the assumption that the environment is stationary, i.e., the reward function and transition probabilities do not change in time, one can keep evaluating the agent on new states without affecting its ability to train itself~\cite{non_stationary_rl}.


\begin{figure}[t]
\centering
\scalebox{.8}{\begin{tikzpicture}[x=6cm,y=4cm] \tikzset{
e4c node/.style={circle,draw,minimum size=0.75cm,inner sep=0},
e4c edge/.style={sloped,above,font=\footnotesize}
}
\node[e4c node] (1) at (0.56, 0.8) {(0,0,0,1)};
\node[e4c node] (2) at (0.44, 0.48) {(0,0,1,0)};
\node[e4c node] (3) at (0.89, 0.62) {(0,0,1,1)};
\node[e4c node] (4) at (0.3 , 0.03) {(0,1,0,0)};
\node[fill=blue!30][e4c node] (5) at (0.7, 0.34) {(0,1,0,1)};
\node[fill=blue!30][e4c node] (8) at (0.2, 1.54) {(1,0,0,0)};
\node[e4c node] (7) at (1.00, 0.23) {(0,1,1,1)};
\node[e4c node] (6) at (0.71, -0.13) {(0,1,1,0)};
\node[e4c node] (9) at (0.56, 1.25) {(1,0,0,1)};
\node[fill=blue!30][e4c node] (10) at (0.55, 2.1) {(1,0,1,0)};
\node[e4c node] (11) at (0.88, 1.60) {(1,0,1,1)};
\node[e4c node] (12) at (0.15, 2.1) {(1,1,0,0)};
\node[e4c node] (13) at (0.60, 1.65) {(1,1,0,1)};
\node[e4c node] (14) at (0.54, 2.5) {(1,1,1,0)};
\node[e4c node] (15) at (0.91, 2.00) {(1,1,1,1)};
\node[fill=blue!30][e4c node] (16) at (0.10, 0.41) {(0,0,0,0)};


\path[->,draw,thick]
(2) edge[e4c edge] (3)
(1) edge[e4c edge] (2)
(1) edge[e4c edge] (5)
(2) edge[e4c edge] (16)
(4) edge[e4c edge] (5)
(4) edge[e4c edge] (16)
(3) edge[e4c edge] (1)
(3) edge[e4c edge] (7)
(15) edge[e4c edge] (11)
(15) edge[e4c edge] (13)
(15) edge[e4c edge] (14)
(14) edge[e4c edge] (10)
(14) edge[e4c edge] (12)
(13) edge[e4c edge] (9)
(13) edge[e4c edge] (12)
(12) edge[e4c edge] (8)
(12) edge[e4c edge] (10)
(11) edge[e4c edge] (9)
(11) edge[e4c edge] (10)
(10) edge[e4c edge, bend left=20] (8)
(9) edge[e4c edge] (1)
(6) edge[e4c edge] (7)
(6) edge[e4c edge] (5)
(6) edge[e4c edge] (2)
(7) edge[e4c edge] (5)
(9) edge[e4c edge] (8)
(8) edge[e4c edge, bend left=20] (10)
;
\end{tikzpicture}
}
\caption{STG of the PBN defined in Example~\ref{running} under the asynchronous update mode. Shaded states are the attractor states of the three attractors, i.e., two fixed-point attractors $A_1=\{(0,0,0,0)\}$ and $A_2=\{(0,1,0,1)\}$, and one multi-state attractor $A_3 = \{(1,0,0,0), (1,0,1,0)\}$.}
\label{fig:stg}
\end{figure}


\subsubsection{Branching Dueling Q-Network}
Different DRL-based approaches can be used for the approximation of the $Q$ function. In this study, we focus on the Branching Dueling Q-Network (BDQ) approach introduced in~\cite{bdq} as an~extension of another well-known approach, i.e., the Dueling Double Deep Q-Network (DDQN)~\cite{ddqn}.

BDQ deep neural network structures are designed to address complex and high-dimensional action spaces. Instead of using a~single output layer for all actions, BDQ has multiple branches, each responsible for a~specific subset of actions.
BDQ aims to enhance the scalability and sample efficiency of reinforcement learning algorithms in complex scenarios. It can be thought of as an~adaptation of the dueling network into the action branching architecture. The dueling architecture uses two separate artificial neural networks, i.e., the \emph{target network} for evaluation and the \emph{controller network} for the selection of actions. Its main benefits are that it avoids overestimating Q-values, can more rapidly identify action redundancies, and generalises more efficiently by learning a~general Q-value that is shared across many similar actions.
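
As an~illustration of the action-branching idea, the following Python sketch (assuming PyTorch; a~simplified sketch rather than the exact pbn-STAC architecture) builds a~network with a~shared trunk, a~state-value head, and one advantage head per gene, and combines them into per-branch Q-values:
\begin{verbatim}
import torch
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    """Shared trunk + one state-value head + one advantage head per action
    branch (here, one branch per gene with two choices: leave / flip)."""
    def __init__(self, n_genes, n_actions_per_branch=2, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_genes, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, n_actions_per_branch) for _ in range(n_genes)])

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)                               # (batch, 1)
        q_branches = []
        for adv_head in self.advantages:
            a = adv_head(h)                             # (batch, n_actions)
            # Dueling aggregation per branch: Q = V + (A - mean A).
            q_branches.append(v + a - a.mean(dim=-1, keepdim=True))
        return torch.stack(q_branches, dim=1)           # (batch, n_genes, n_actions)

net = BranchingDuelingQNet(n_genes=7)
q = net(torch.zeros(1, 7))   # Q-values for a single all-zero state
\end{verbatim}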



\section{Formulation of the control problem}
\label{sec:control_problem}

\subsection{Pseudo-attractors}
\label{sec:pseudo-attractors}


Unfortunately, obtaining the attractor landscape for a~large BN/PBN network, i.e., the family of all its attractors, is a~challenging problem by itself and one cannot expect to be in possession of this information in advance. Because our aim is to devise a~scalable computational framework for the control of large network models based on DRL, we need to be able to identify the BN/PBN attractors during training, i.e., during the exploration of the DRL environment. For this purpose, we first introduce the notion of a~\emph{pseudo-attractor}. Then, we proceed to define the problem of \emph{source-target attractor control}.

In general, identifying attractors of a~large Boolean network is a~computationally demanding task. Finding an~attractor with the shortest period is an~NP-hard problem~\cite{Akutsu2003}.
Moreover, in the case of classical PBNs, the fixed-point and limit-cycle attractors correspond to the irreducible sets of states in the underlying Markov chain~\cite{PBN02}. For large-size PBNs with different predictors for numerous individual genes, the limit-cycle attractors may be large, i.e., they may consist of many states. Nevertheless, the states of an~irreducible set are usually not revisited with the same probability. From the point of view of the control problem in the context of cellular reprogramming, only the frequently revisited states of an~attractor are the relevant ones since they correspond to phenotypical cellular states that are observable in the lab. This makes these states `recognisable' for the application of cellular reprogramming interventions in practice in accordance with the control strategy suggested by our computational framework. We refer to the subset of frequently revisited states of an~attractor as a~\emph{pseudo-attractor} associated with the attractor and define it formally as follows.

\begin{definition}[Pseudo-attractor]
\label{def:pseudo-attractor}
Let $A$ be an~attractor of a~PBN in the classical formulation, i.e., an~irreducible set of states of the Markov chain underlying the PBN. Let $n := |A|$ be the size of the attractor $A$ and let $\mathbb{P}_A$ be the unique stationary probability distribution on $A$. The \emph{pseudo-attractor} associated with $A$ is the maximal subset $PA \subseteq A$ such that for all $s \in PA$ it holds that $\mathbb{P}_A(s) \geq \frac{1}{n}$. The states of a~pseudo-attractor are referred to as \emph{pseudo-attractor states}.
\end{definition}
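
For instance, if an~attractor $A$ consists of $n=4$ states with stationary probabilities $0.5$, $0.3$, $0.1$, and $0.1$ (a~purely illustrative example, not one of the models studied later), then the threshold $\frac{1}{n}$ equals $0.25$ and the associated pseudo-attractor $PA$ consists of exactly the two states with probabilities $0.5$ and $0.3$.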

The correctness of the definition is guaranteed by the fact that the state space of the underlying Markov chain of a~PBN is finite and that the Markov chain restricted to the attractor states is irreducible. It is a~well-known fact that all states of a~finite and irreducible Markov chain are positive recurrent. In consequence, the attractor-restricted Markov chain has a~unique stationary distribution. Furthermore, for any PBN attractor there exists a~non-empty pseudo-attractor, as stated by the following observation.

\begin{observation}
\label{obs:existence}
Let $A$ be an~attractor of a~PBN. Then there exists a~pseudo-attractor $PA \subseteq A$ such that $|PA| \geq 1$.
\end{observation}
\begin{proof}
Let $n$ be the size of the attractor $A$, i.e., $n:=|A|$. Since the underlying Markov chain of the PBN restricted to $A$ is irreducible and positive recurrent, it has a~unique stationary distribution, which we denote $\mathbb{P}_A$. We proceed to show that there exists at least one state $s' \in A$ such that $\mathbb{P}_A(s') \geq \frac{1}{n}$. For this, let us assume that no such state exists. Then, we have that
$$
\sum_{s \in A}\mathbb{P}_A(s) < \sum_{s\in A}\frac{1}{n} = n\cdot\frac{1}{n} = 1.
$$
The left-hand side of the above inequality is strictly less than 1 and hence $\mathbb{P}_A$ is not a~probability distribution on $A$, which leads to a~contradiction. In consequence, $|PA| \geq 1$.
\end{proof}

\begin{observation}
\label{obs:uniform}
In the case of the uniform stationary distribution on an~attractor, the associated pseudo-attractor is equal to the attractor:
Let $A$ be an~attractor of a~PBN such that the unique stationary distribution of the underlying Markov chain of the PBN restricted to $A$ is uniform. Then, for the pseudo-attractor $PA$ associated with $A$ it holds that $PA=A$.
\end{observation}

\begin{proof}
Let $n$ be the size of the attractor $A$. By the assumption of uniformity of the stationary distribution $\mathbb{P}_A$, it holds that $\mathbb{P}_A(s) = \frac{1}{n}$ for each $s \in A$. Since the pseudo-attractor $PA$ is the maximal subset of $A$ such that $\mathbb{P}_A(s) \geq \frac{1}{n}$ for each $s \in PA$, it follows that $PA=A$.
\end{proof}


Finally, we argue that Def.~\ref{def:pseudo-attractor} of the pseudo-attractor straightforwardly extends to BNs under the asynchronous update mode and that Obs.~\ref{obs:existence} and Obs.~\ref{obs:uniform} remain valid in this case. Indeed, the asynchronous dynamics of a~BN restricted to a~multi-state attractor of the network is a~finite and irreducible Markov chain. Therefore, in the continuation, we use the notion of the pseudo-attractor both in the context of PBNs and BNs.

\subsection{Source-target attractor control}

With the biological context of cellular reprogramming in mind, we proceed to define our control problem for BN and PBN models of GRNs.
We start by providing the~definition of an~\emph{attractor-based control strategy}, also referred to as a~\emph{control strategy} for short. Then, we define \emph{Source-Target Attractor Control} and immediately follow with an~example. Note that in Def.~\ref{def:control_strategy} pseudo-attractor states are considered and not pseudo-attractors. This is due to the fact that the procedure that will be introduced in Sec.~\ref{sec:pas-procedure} identifies pseudo-attractor states but does not assign them to individual pseudo-attractors.
Note also that our definition of the source-target attractor control is a~generalisation of the `attractor-based sequential instantaneous control (ASI)' problem for BNs defined in~\cite{SP20a}, as our formulation of the control problem extends to the formalism of PBNs and to pseudo-attractor states. An~exact `divide-and-conquer'-type algorithm for solving the ASI problem for BNs was provided in~\cite{SP20a} and implemented in the software tool CABEAN~\cite{SP21}.

\begin{definition}(Attractor-based Control Strategy)
\label{def:control_strategy}
Given a~BN/PBN and a~pair of its source-target (pseudo-)attractors, an~attractor-based control strategy is a~sequence of interventions which drives the network dynamics from the source to the target (pseudo-)attractor. Interventions are understood as simultaneous flips (perturbations) of values for a~subset of genes in a~particular network state, and their application is limited to (pseudo-)attractor states.
We denote simultaneous interventions as sets, e.g., $\{x_1, x_3, x_7\}$, and strategies as lists of sets, e.g., $[\{x_1, x_7\}, \{x_2\}, \{x_2, x_4\}]$.
Furthermore, the \emph{length of a~control strategy} is defined as the number of interventions in the control sequence. We refer to an~attractor-based control strategy of the shortest length as the \emph{minimal attractor-based control strategy}.

\end{definition}

\begin{definition}[Source-Target Attractor Control]
Given a~BN/PBN and a~pair of source-target attractors or pseudo-attractor states, find a~minimal attractor-based control strategy.
\end{definition}

\begin{example}\label{ex:control}
The PBN from Example~\ref{running} may be controlled from state $(1,0,1,0)$ to $(0,0,0,0)$ by intervening on $x_0$ and allowing the PBN to evolve in accordance with its original dynamics:
$$(1,0,1,0) \xrightarrow{\text{flip}\ x_0} (0, 0, 1, 0) \xrightarrow{\text{evolution}} (0, 0, 0, 0).$$

However, the evolution is non-deterministic and the PBN may evolve to another attractor, see Fig.~\ref{fig:stg}:
$$(0, 0, 1, 0) \xrightarrow{\text{evolution}} (0, 1, 0, 1).$$

The only way to be sure to move to $(0,0,0,0)$ is to flip the genes $\{x_0, x_2\}$ either simultaneously, which gives a~strategy of length one, or one-by-one, which gives a~strategy of length two, i.e., $[\{x_2\}, \{x_0\}]$.

\end{example}


\section{DRL-based framework for the source-target attractor control}
\label{sec:framework}

We propose a~DRL-based computational framework, i.e., pbn-STAC, for solving the source-target attractor control problem. Since our control problem is to some extent similar to the one considered in~\cite{MCSW22} and our implementation is based on the implementation therein, we compare our framework assumptions and solutions to theirs during the presentation of pbn-STAC. In contrast to the synchronous PBN update mode in~\cite{MCSW22}, we consider the asynchronous update mode, which is commonly considered more appropriate for the modelling of biological systems. The approach of~\cite{MCSW22} allows DRL agents to apply control actions in any state of the PBN environment. Since our focus is on the modelling of cellular reprogramming, we believe that this approach may be hard to apply in experimental practice. It would require the ability to discern virtually all cellular states, including the transient ones, which is impossible with currently available experimental techniques. Since attractors correspond to cellular types or, more generally, to cellular phenotypic functional states, which are more easily observable in experimental practice, we allow our DRL agent to intervene only in (pseudo-)attractor states, in consistency with the control problem formulation in Sec.~\ref{sec:control_problem}.

In the control framework of~\cite{MCSW22}, an~action of the DRL agent can perturb at most one gene at a~time. However, for our formulation of the control problem this is too restrictive. We have encountered examples of source-target attractor pairs for which no control strategy consisting of such actions exists. Therefore, we need to relax this restriction. However, we do not want to intervene on too many genes at once as it would be rather pointless --~in the extreme case of allowing all genes to be perturbed at once, one could simply flip all of the unmatched gene values. Furthermore, such an~intervention would also be hard to realise or even be unworkable in real biological scenarios --~it is expensive and sometimes even impossible to intervene on many genes at once in the lab. Hence, we introduce a~parameter whose value defines an~upper limit for the number of genes that can be simultaneously perturbed. Based on experiments (data not shown), we set this value to three. This setting is sufficient for obtaining successful control strategies for all of our case studies, yet low enough not to trivialise the control problem. Of course, the value can be tuned to meet particular needs.

The DRL agent in~\cite{MCSW22} learns how to drive the network dynamics from any state to the specified target attractor. With the context of cellular reprogramming in mind, we consider in our framework only attractors as control sources and targets, with both of them specified. This models the process of transforming a~cell from one type into another. To be able to solve the source-target attractor control problem, we define the reward function $R_a(s,s')$ as:
$$R_a(s, s') = 1000 \cdot \mathbbm{1}_{TA}(s') - |a|,$$
where $\mathbbm{1}_{TA}$ is the indicator function of the target attractor and $|a|$ is the number of genes perturbed by applying action $a$.
The loss function is defined as the Mean Squared Error (MSE) between the predicted Q-values and the target Q-values, calculated using the Bellman equation.
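
In code, this reward can be expressed by a~function of the following form (a~minimal Python sketch mirroring the formula above; the bonus value of 1000 is the one stated in the text, while the data types are an~assumption):
\begin{verbatim}
def reward(next_state, action_genes, target_attractor_states):
    """R_a(s, s') = 1000 * 1_{TA}(s') - |a|: a large bonus for landing in
    the target attractor, minus the number of genes perturbed by the action."""
    in_target = 1 if next_state in target_attractor_states else 0
    return 1000 * in_target - len(action_genes)

# Example: flipping two genes and reaching a target attractor state.
target = {(0, 0, 0, 0)}
print(reward((0, 0, 0, 0), {"x_0", "x_2"}, target))   # 998
\end{verbatim}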

To train our DRL agent, in each episode we randomly choose a~source-target attractor pair and terminate the episode after 20 unsuccessful steps.
This approach, however, requires all the attractors to be known prior to training. For networks with small numbers of nodes, the attractors can be computed. However, as already mentioned, obtaining the list of all attractors for large networks is a~challenging problem by itself and one cannot expect the list to be available in advance. To address this issue, we have introduced the notion of a~pseudo-attractor in Def.~\ref{def:pseudo-attractor}. We now proceed to present a~procedure for detecting pseudo-attractor states, which is exploited by our framework for solving the control problem for large networks, i.e., ones for which information on attractors is missing.




\subsection{Pseudo-attractor states identification procedure}
\label{sec:pas-procedure}

Identification of pseudo-attractors is hindered in large-size PBN models. Nevertheless, pseudo-attractor states can be identified with simulations due to their property of being frequently revisited. We propose the following Pseudo-Attractor States Identification Procedure (PASIP), which consists of two steps executed in two phases: Step~I during PBN environment pre-processing and Step~II, with two cases referred to as Step~II-1 and Step~II-2, during DRL agent training.

\paragraph{\textbf{Pseudo-Attractor States Identification Procedure}}
\begin{enumerate}
\item[I] During environment pre-processing, a~pool of $k$ randomly selected initial states is considered, from which PBN simulations are started. Each PBN simulation is run for the initial $n_0=200$ time steps, which are discarded, i.e., the so-called burn-in period. Then, the simulation continues for $n_1=1000$ time steps during which the visits to individual states are counted. All states in which at least $5\%$ of the simulation time $n_1$ is spent are added to the list of pseudo-attractor states (a~code sketch of this step is provided after the procedure description).

\item[II] During training, the procedure discerns two cases:

\begin{itemize}

\item[II-1] The simulation of the PBN environment may enter a~fixed-point attractor not detected in Step~I. If the simulation gets stuck in a~particular state for $n_2=1000$ steps, the state is added to the list of pseudo-attractor states.

\item[II-2] During training, the simulation of the PBN environment may enter a~multi-state attractor that has not been detected in Step~I. For this reason, a~history of the most recently visited states is kept. When the history buffer reaches the size of $n_3=10000$ items, revisits for each state are counted and states revisited more than 15\% of the time are added to the list of pseudo-attractor states. If no such state exists, the history buffer is cleared and the procedure continues for another $n_3$ time steps. The new pseudo-attractor states are added provided no known pseudo-attractor state was reached. Otherwise, the history information is discarded.
\end{itemize}
\end{enumerate}
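
The following Python sketch illustrates Step~I (an~illustrative sketch only; it assumes a~generic \texttt{step} function performing one asynchronous PBN update, e.g., along the lines of the sketch given in the preliminaries):
\begin{verbatim}
from collections import Counter

def step_i_pseudo_attractor_states(step, initial_states,
                                    n0=200, n1=1000, threshold=0.05):
    """Step I of PASIP: from each initial state, discard a burn-in of n0
    updates, then count visits over n1 updates and keep states visited at
    least `threshold` of the time."""
    pseudo_attractor_states = set()
    for state in initial_states:
        for _ in range(n0):              # burn-in, visits are discarded
            state = step(state)
        visits = Counter()
        for _ in range(n1):
            state = step(state)
            visits[state] += 1
        pseudo_attractor_states |= {s for s, c in visits.items()
                                    if c >= threshold * n1}
    return pseudo_attractor_states
\end{verbatim}
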
Notice that the procedure allows us to identify the pseudo-attractor states, but does not allow us to assign them to individual pseudo-attractors. Therefore, when training a~DRL agent with the use of pseudo-attractor states, we consider the control strategies between all source-target pairs of pseudo-attractor states.
Recall that in our formulation of the control problem, the DRL agent is restricted to applying its actions only in PBN (pseudo-)attractor states. Therefore, in the case of large networks where no information on attractors is available, the environment pre-processing phase is important as it provides an~initial pool of pseudo-attractor states for the training.



When identifying pseudo-attractors, we do not know the size of the PBN attractor with which the pseudo-attractor is associated. Therefore, we cannot determine the exact probability threshold of Def.~\ref{def:pseudo-attractor} for identifying individual pseudo-attractors. The proposed procedure addresses this issue as follows. In Step~I, the chosen $5\%$ identification threshold enables the identification of a~pseudo-attractor that constitutes a~complete attractor, i.e., contains all its states, for attractors of size up to $20$ states, which follows from the following observation.

\begin{observation}
\label{obs:p-a_size_bounds}
For any PBN attractor $A$, the size of the associated pseudo-attractor $PA$ found by Step~I of the pseudo-attractor states identification procedure with a~$k\%$ identification threshold is tightly upper bounded by
\[
|PA| \leq
\begin{cases}
\frac{100}{k}-1, & 100 \text{ mod } k = 0 \text{ and } |A| > \frac{100}{k}, \\
\floor{\frac{100}{k}}, & \text{otherwise}.
\end{cases}
\]
\end{observation}
\begin{proof}
Each state included in $PA$ by Step~I is revisited for at least $\frac{k}{100}$ of the simulation time, i.e., in the limit it has stationary probability at least $\frac{k}{100}$. Since these probabilities sum up to at most $1$, it holds that $|PA| \cdot \frac{k}{100} \leq 1$ and hence $|PA| \leq \floor{\frac{100}{k}}$. Now assume that $100 \text{ mod } k = 0$ and $|A| > \frac{100}{k}$, and suppose that $|PA| = \frac{100}{k}$. Then the states of $PA$ would jointly account for the whole probability mass, forcing the stationary probability of each of the remaining $|A| - \frac{100}{k} \geq 1$ states of $A$ to be $0$, which contradicts the positive recurrence of all states of $A$. Hence, in this case $|PA| \leq \frac{100}{k}-1$.

The bound is tight. In the second case, an~attractor $A$ of size $\floor{\frac{100}{k}}$ with the uniform stationary distribution yields $PA = A$. In the first case, an~attractor $A$ of size $\frac{100}{k}+1$ in which $\frac{100}{k}-1$ states have stationary probability exactly $\frac{k}{100}$ and the remaining two states share the rest of the probability mass equally yields $|PA| = \frac{100}{k}-1$.
\end{proof}



In light of Def.~\ref{def:pseudo-attractor} and Obs.~\ref{obs:uniform}, the associated pseudo-attractor of an~attractor of size $20$ can be identified only if the stationary distribution on the attractor is uniform. If the attractor size is less than 20, it is still possible to include all attractor states in the pseudo-attractor even if the distribution is non-uniform. Notice that with decreasing size of the unknown attractor, our procedure allows more and more pronounced deviations from the uniform distribution while preserving the complete attractor detection capability, provided the stationary probabilities of all attractor states are above the threshold.

If an~attractor is of size larger than $20$ states, Step~I of our procedure with the $5\%$ identification threshold will identify the associated pseudo-attractor only if the stationary distribution is non-uniform, and the pseudo-attractor will contain only the most frequently revisited states. The maximum possible size of the identified pseudo-attractor in this case is $19$, which follows from Obs.~\ref{obs:p-a_size_bounds}.
This is a~desired property of our procedure as it keeps the number of pseudo-attractor states manageable, which has a~significant positive influence on stabilising the model training, as will be discussed below.


The environment pre-processing phase provides an~initial set of pseudo-attractor states.
The initial set is expanded in Step~II during the model training phase. Step~II-1 allows us to identify plausible fixed-point attractors. Step~II-2 enables the identification of plausible multi-state attractors. However, here the focus is on smaller attractors than in the case of Step~I: we classify states as pseudo-attractor states if they are revisited at least $15\%$ of the time, which corresponds to attractors of size up to $\floor{100/15} = 6$. This is to restrict the number of spurious pseudo-attractor states in order to stabilise model training, as explained next.




We have encountered an~issue related to the late discovery of pseudo-attractor states during training. As can be observed in Fig.~\ref{Fig:Data1}, the procedure may detect a~new pseudo-attractor state at any point in time, which destabilises training: a~new state is detected at around 90\,000 steps, which causes an~abrupt, significant increase of the average episode length. We propose a~remedy to this problem in Sec.~\ref{sec:expl_prob_boost}.
Our experiments with small networks, i.e., ones for which exact attractors could be computed, revealed that it is beneficial to underestimate the set of attractor states in Step~I of the procedure, as the missed ones are usually discovered later during the training phase.


For large networks, e.g., with hundreds of nodes, the set of pseudo-attractors may take a~long time to stabilise. Yet this approach provides us with the ability to process networks too big to be handled by traditional methods. The computations of pseudo-attractors can be parallelised in a~rather straightforward way to speed up the detection. Furthermore, the notion of pseudo-attractors can easily be generalised to other types of GRN models, e.g., PBNs with perturbation, which is yet another well-established GRN modelling framework.


\subsection{Exploration probability boost}
\label{sec:expl_prob_boost}

The approach of \cite{MCSW22} implements the $\varepsilon$-greedy policy in order to balance exploitation and exploration of the DRL agent during training. The $\varepsilon$-greedy policy introduces the \emph{exploration probability} $\varepsilon$: with probability $1-\varepsilon$ the agent follows the greedy policy, i.e., it selects the action $a^* = \arg\max_{a \in \mathcal{A}} Q(s, a)$, while with probability $\varepsilon$ it selects a~random action. We set the initial $\varepsilon$ value to 1 and linearly decrease it to 0.05 over the initial 3000 steps of training.

Combining the original $\varepsilon$-greedy policy with online pseudo-attractor states identification gives rise to unstable training. When trying to train the DRL agent for our control problem while identifying pseudo-attractor states during training, the stability issues discussed in Sec.~\ref{sec:pas-procedure} were observed. To alleviate this negative influence on training, we introduce the \emph{exploration probability boost} (EPB) to the $\varepsilon$-greedy policy. The idea of EPB is to increase the exploration probability $\varepsilon$ to $\max(\varepsilon, 0.3)$ after each discovery of a~new pseudo-attractor state, i.e., to boost it whenever the current value of $\varepsilon$ is less than 0.3. After the increase, the linear decrease to 0.05 follows with the rate of the initial decrease. As revealed by our experiments, this simple technique makes learning much more stable. This is illustrated in Fig.~\ref{Fig:Data2}, where the agent discovers new pseudo-attractor states at around the 150\,000-th training step: the use of the improved $\varepsilon$-greedy policy allowed us to significantly reduce the increase of the average episode length and resulted in a~quick return to the previously attained low value of the average episode length.
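
The $\varepsilon$ schedule with EPB can be summarised by the following Python sketch (a~simplified illustration; the constants are those stated above, while the function and variable names are ours):
\begin{verbatim}
EPS_START, EPS_MIN, EPS_BOOST = 1.0, 0.05, 0.3
DECAY_STEPS = 3000
DECAY_RATE = (EPS_START - EPS_MIN) / DECAY_STEPS   # linear decay per step

def next_epsilon(epsilon, new_pseudo_attractor_found):
    """Linearly decay epsilon towards EPS_MIN; on discovery of a new
    pseudo-attractor state, boost it back up to at least EPS_BOOST."""
    if new_pseudo_attractor_found:
        epsilon = max(epsilon, EPS_BOOST)
    return max(EPS_MIN, epsilon - DECAY_RATE)
\end{verbatim}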

\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{pbn70_v4_statistical_bdq.png}
\caption{Training without EPB.}
\label{Fig:Data1}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{bdq_stat_2_pbn70_cluster.png}
\caption{Training with EPB.}
\label{Fig:Data2}
\end{subfigure}
\caption{Examples of average episode lengths during training runs with and without EPB. New pseudo-attractor states are being identified during training.}
\end{figure}

\subsection{pbn-STAC implementation}
\label{sec:implementation}
We implement pbn-STAC as a~fork of gym-PBN~\cite{sota-gym}, an~environment for the modelling of PBNs, and pbn-rl~\cite{sota-pbn-rl-git}, a~suite of DRL experiments for a~different PBN control problem formulated in~\cite{MCSW22}.
In pbn-STAC, we have adapted the original code of gym-PBN and pbn-rl to our formulation of the PBN control problem, i.e., the source-target attractor control. First, we extend gym-PBN by adding the asynchronous PBN environment to it. Second, to allow for the simultaneous perturbation of a~combination of genes within a~DRL action, we replace the original DDQN architecture with the BDQ architecture~\cite{bdq}, which, contrary to DDQN, scales linearly with the dimension of the action space. The architecture of our BDQ network is depicted in Fig.~\ref{fig:BDQ_arch}.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\linewidth]{figures/BDQ.pdf}
\caption{Schematic illustration of the BDQ network architecture.}
\label{fig:BDQ_arch}
\end{figure}
Third, we implement the pseudo-attractor states identification procedure and the exploration probability boost technique. Finally, the framework takes as input a~source-target pair of attractors or pseudo-attractor states. In the case of a~multi-state target attractor, a~training episode is regarded as successful if any of the target attractor states is reached. For a~multi-state source attractor, we uniformly sample one of its states and set it as the initial state. In this way, different source attractor states are considered as initial during DRL agent training.
Our DRL-based framework for the source-target attractor control is made available via the dedicated pbn-STAC GitHub repository~\cite{git_gym_pbn}.

\section{Experiments}
\label{sec:experiments}

\subsection{BN and PBN models of GRNs}

\textbf{Melanoma models.} We infer BN and PBN environments of various sizes for the melanoma GRN using the gene expression data provided by Bittner \emph{et al.} in~\cite{bittner}. This is a~well-known dataset on melanoma, which is extensively studied in the literature, see, e.g.,~\cite{MCSW22,Bene2023,Du2023}.
To infer the BN/PBN structures, we follow the approach of~\cite{MCSW22} implemented in gym-PBN~\cite{MCSW22}. It is based on the coefficient of determination (COD), which is a~measure of how well the dependent variable can be predicted by a~model, a~perceptron in the case of~\cite{MCSW22}.


The original dataset of Bittner \emph{et al.} is quantised with the method of~\cite{MCSW22}. Then, the BN and PBN models of sizes 7, 10, and 28 are obtained from these data. The models are denoted BN-x or PBN-x, respectively, where x is the number of genes. To infer the predictors for the models, we set the number of predictors for each gene to 1 for BN models and to 3 for PBN models. For each gene, the algorithm selects the Boolean functions with the maximum COD values. For more details on the inference method, we refer to~\cite{MCSW22}.


\noindent\textbf{Case study of {\it B.~bronchiseptica}.}
We test our DRL-based control framework on an~existing model of a~real biological system, i.e., the network of immune response against infection with the respiratory bacterium \emph{Bordetella bronchiseptica}, which was originally proposed and verified against empirical findings in~\cite{BB}. The computational model, denoted IRBB-33, is an~asynchronous BN consisting of 33 genes.


\subsection{Performance evaluation methodology}
We evaluate the performance of pbn-STAC in solving the control problem formulated in Sec.~\ref{sec:control_problem} on BN and PBN models of melanoma of various sizes, i.e., incorporating 7, 10, and 28 genes. Moreover, we consider IRBB-33, the 33-gene BN model. The dynamics under the asynchronous update mode is considered for all models.
The evaluation consists of the agent interacting with the environment by taking actions, where an~action consists of flipping the values of a~particular subset of genes in an~attractor or pseudo-attractor state. We recover a~control strategy for a~given source-target pair learned by a~trained DRL agent by initialising the BN/PBN environment with the source and target and letting it run while applying the actions suggested by the DRL agent in the source and all the intermediate (pseudo-)attractor states encountered on the path from the source to the target.
To evaluate the performance of pbn-STAC on a~particular BN/PBN model, we recover control strategies for all possible ordered source-target pairs of the model's attractors or pseudo-attractor states.
For all the BN models of melanoma and IRBB-33, we are able to compute all their attractors and optimal control strategies for all pairs of attractors using the CABEAN software tool with the attractor-based sequential instantaneous source-target control (ASI) method. We use the information on the exact attractors and optimal control strategies for BN models as ground truth for the evaluation of pbn-STAC.

For PBN-7 and PBN-10, we compute the attractors with the NetworkX package~\cite{NetworkX}, which facilitates the analysis of complex networks in Python. Unfortunately, due to very large memory requirements, we are unable to obtain the attractors of the 28-gene PBN model of melanoma with this approach, so we consider pseudo-attractor states instead. The optimal-length control strategies for the PBN-7 and PBN-10 models are obtained by exhaustive search.
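
The attractors of a~small BN/PBN can be obtained from its STG as the bottom (terminal) strongly connected components, e.g., along the lines of the following Python sketch based on NetworkX (a~generic illustration of the idea rather than our exact implementation):
\begin{verbatim}
import networkx as nx

def attractors(stg: nx.DiGraph):
    """Return the attractors of a BN/PBN given its state transition graph:
    the bottom strongly connected components, i.e., the SCCs with no
    outgoing edges in the condensation of the STG."""
    condensation = nx.condensation(stg)          # DAG of SCCs
    return [set(condensation.nodes[c]["members"])
            for c in condensation.nodes
            if condensation.out_degree(c) == 0]  # no edges leaving the SCC
\end{verbatim}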

Notice that due to the non-deterministic nature of our environments, i.e., the asynchronous update mode, the results may vary between runs. Therefore, for each source-target pair, we repeat the run 10 times. For each recovered control strategy, we count its length and record whether the target attractor is reached. For a~given BN/PBN model, we report the percentage of successful control strategies found and the average length of the successful control strategies.
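
The evaluation loop can be sketched as follows (illustrative Python pseudocode; \texttt{env} and \texttt{agent} denote a~hypothetical asynchronous BN/PBN environment and a~trained DRL agent with the listed methods, which are not the exact pbn-STAC API):
\begin{verbatim}
def evaluate_pair(env, agent, source, target_states, repeats=10, max_steps=20):
    """Roll out a trained agent from a source (pseudo-)attractor state and
    record whether the target is reached and how many interventions it took."""
    lengths, successes = [], 0
    for _ in range(repeats):
        state, n_interventions = env.reset(initial_state=source), 0
        for _ in range(max_steps):
            action = agent.act(state)            # subset of genes to flip
            state = env.apply(state, action)     # flip genes, then let the PBN
            n_interventions += 1                 # evolve to a (pseudo-)attractor
            if state in target_states:
                successes += 1
                lengths.append(n_interventions)
                break
    return successes / repeats, lengths
\end{verbatim}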

\section{Results}
\label{sec:results}

\subsection{Identification of pseudo-attractor states}

We evaluate the performance of PASIP proposed in Sec.~\ref{sec:pas-procedure}. For this purpose, we run pbn-STAC with PASIP on the considered BN and PBN models. We present the obtained results in Tab.~\ref{tab:pas-procedure_res}. For each model, except the melanoma PBN-28 for which the exact attractors could not be obtained, we provide the information on the number of exact attractors, the total number of attractor states, and the total number of pseudo-attractor states identified with our procedure. We measure the precision of our approach, defined as $\textrm{TP}/(\textrm{TP}+\textrm{FP})$,
where $\textrm{TP}$ is the number of true positives, i.e., the number of pseudo-attractor states that are attractor states, and $\textrm{FP}$ is the number of false positives, i.e., the number of states identified as pseudo-attractor states which are not part of any of the network's attractors. We can conclude that for all cases in which the exact attractors are known, our procedure does not introduce any FPs. Moreover, it can identify the attractor states with $100\%$ precision in all but one case, i.e., the BN-28 network, which has 2412 fixed-point attractors and for which our procedure correctly identifies 1053 of them. This justifies our strong belief that running our procedure for a~longer time would result in a~considerably higher precision also in the case of BN-28. In summary, the presented results show that PASIP is reliable.






\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & \#Attr. & \#Attr. states & \#PA-states & Precision \\ [0.5ex]
\hline\hline
BN-7 & 6 & 6 & 6 & 100\% \\
\hline
BN-10 & 26 & 26 & 26 & 100\% \\
\hline
BN-28 & 2412 & 2412 & 1053 & 43.65\% \\
\hline
IRBB-33 & 3 & 3 & 3 & 100\% \\
\hline
PBN-7 & 4 & 4 & 4 & 100\% \\
\hline
PBN-10 & 6 & 6 & 6 & 100\% \\
\hline
PBN-28 & unknown & unknown & 14 & N/A \\
\hline
\end{tabular}
\caption{Comparison of the number of exact attractor states and the number of pseudo-attractor states identified by PASIP for various BN and PBN models. The fact that we were unable to obtain the exact attractors for the PBN-28 model is indicated with `unknown'. Attr. is short for attractor and PA stands for pseudo-attractor.}
\label{tab:pas-procedure_res}
\end{table}
537 |
+
|
538 |
+
|
539 |
+
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Model & \#Attractors & Optimal strategy & pbn-STAC \\
\hline\hline
BN-7 & 6 & 1.0 & 3.98 \\
\hline
BN-10 & 26 & 1.1 & 2.14 \\
\hline
BN-28 & 2412 & 1.1 & - \\
\hline
IRBB-33 & 3 & 1.0 & 9.2 \\
\hline
PBN-7 & 4 & 1.1 & 5.5 \\
\hline
PBN-10 & 6 & 1.2 & 15.2 \\
\hline
PBN-28 & unknown & unknown & 60.7 \\
\hline
\end{tabular}
\caption{Average lengths of pbn-STAC control strategies and of the optimal control strategies obtained with CABEAN (BNs) or exhaustive search (PBNs) over all source-target pairs of the individual models. `Unknown' indicates that we were unable to obtain the optimal strategy for the PBN-28 model; the dash for BN-28 indicates that the training of pbn-STAC on this model did not complete within our time limits.}
\label{tab:control_res}
\end{table}
\subsection{Control of BN models of melanoma}
We evaluate the ability of pbn-STAC to solve the control problem by comparing the obtained results to the optimal ASI control strategies computed with CABEAN.
As can be seen in Tab.~\ref{tab:control_res}, the strategies obtained with pbn-STAC for larger BN models tend to be longer on average than the optimal ones. However, the overhead is rather stable across the different models. We investigate the issue of longer control strategies further by computing a~histogram of control strategy lengths for the BN-7 model, provided in Fig.~\ref{fig:bn7}. In most cases the control strategies are short and close to the optimal ones. Nevertheless, there are a~few longer control strategies that give rise to the higher average values. These longer strategies arise because the interventions suggested by the trained DRL agent often place the system in a~so-called \emph{weak basin of attraction} of an~attractor, i.e., a~set of states from which the attractor is reachable but not guaranteed to be reached: due to the non-determinism arising from the asynchronous update mode, the dynamics can still lead the system to another attractor from these states. The strategies computed by CABEAN are optimal since they are obtained by considering the so-called \emph{strong basins of attraction}, i.e., sets of states from which only a~single attractor can be reached. Nevertheless, determining strong basins is challenging, if not infeasible, for large networks (see~\cite{PSPM20} for details). In light of this and the fact that pbn-STAC can handle larger networks, the obtained results can be considered reasonable.

Unfortunately, due to the huge number of attractors of the BN-28 model, the training of pbn-STAC on this model would need to run for a~much longer time and we did not manage to finish it within our time limits. Notice that our training procedure considers all ordered pairs of attractors. Handling such cases requires further research.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{PBN33_mixed_reward.png}
\caption{Mixed reward}\label{fig:pbn33_data1}
\end{subfigure}
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{PBN33_neg_reward_0.png}
\caption{Improved reward}\label{fig:pbn33_data2}
\end{subfigure}
\caption{Training of the DRL agent on the IRBB-33 environment with different reward schemes.}
\end{figure}
\subsection{Control of the IRBB-33 model}
In the case of the IRBB-33 network, we have to modify the reward scheme. As can be seen in Fig.~\ref{fig:pbn33_data1}, the reward scheme introduced in Sec.~\ref{sec:framework}, referred to as the \emph{mixed reward}, does not lead to any improvement of the average episode length over 200\,000 training steps. After trying different reward schemes for this network (data not shown), we found that the scheme
\begin{align*}
R_a(s, s') = -|a| + 100 \cdot (\mathbbm{1}_{TA}(s') - 1)
\end{align*}
improves the training of the DRL agent significantly, as can be seen in Fig.~\ref{fig:pbn33_data2}, where convergence is achieved within tens of thousands of steps.
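
A~minimal Python sketch of this reward is given below; the function name \texttt{irbb33\_reward} and the representation of actions as sets of perturbed nodes and of states as tuples are illustrative assumptions, not the exact interface of our implementation.
\begin{verbatim}
def irbb33_reward(action, next_state, target_states):
    # action:        set of nodes perturbed by the intervention a
    # next_state:    state s' reached after the intervention (tuple of 0/1)
    # target_states: pseudo-attractor states of the target attractor TA
    in_target = int(next_state in target_states)   # indicator 1_TA(s')
    # -|a| penalises large interventions; the second term is 0 when the
    # target is reached and -100 otherwise.
    return -len(action) + 100 * (in_target - 1)
\end{verbatim}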
\begin{figure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pbn7.pdf}
\caption{BN-7}
\label{fig:bn7}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pbn10.pdf}
\caption{BN-10}
\label{fig:bn10}
\end{subfigure}

\medskip
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pbn28.pdf}
\caption{PBN-28}
\label{fig:pbn28}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pbn33.pdf}
\caption{IRBB-33}
\label{fig:pbn33}
\end{subfigure}

\caption{Histograms of the control strategy lengths obtained with pbn-STAC for the BN-7, BN-10, PBN-28, and IRBB-33 models.}
\label{fig:model_tester_out}
\end{figure}
The average control strategy length obtained with pbn-STAC is $9.2$, as presented in Tab.~\ref{tab:control_res}. The length is again larger than in the optimal case, but the overhead is comparable to that observed for the BN models of melanoma. Again, as can be observed in Fig.~\ref{fig:pbn33}, in the majority of cases the strategies are of length one, which matches the optimal strategy. Unfortunately, there are a~few very long ones, which inflate the average value.
\subsection{Control of PBN models of melanoma}
We run the pbn-STAC control framework on the three PBN models of melanoma. For PBN-28, we are not able to compute the set of exact attractors, but we identify 14 pseudo-attractor states. Unfortunately, we cannot obtain the optimal control strategies for this network with exhaustive search.

As shown in Tab.~\ref{tab:control_res}, the control strategies found by pbn-STAC are on average longer than the optimal ones. Moreover, their lengths seem to increase with the size of the network faster than in the case of the BN models. Unfortunately, the optimal result is not available for the PBN-28 model to make a~comparison. Although the average length for this network is high, the distribution is heavily skewed, with a~long tail of longer control strategies, as can be seen in Fig.~\ref{fig:pbn28}. Nevertheless, once again the majority of the source-target pairs are controllable with very few interventions. This characteristic of the control strategies obtained with pbn-STAC is consistent across models of different types.
\section{Conclusions}
\label{sec:conclusions}
In this study, we formulated a~control problem for the BN and PBN frameworks under the asynchronous update mode that corresponds to the problem of identifying effective cellular reprogramming strategies. We developed and implemented a~DRL-based computational framework, pbn-STAC, that solves this control problem. It finds control strategies that drive a~network from the source to the target attractor by intervening only in other attractor states, which correspond to phenotypically functional cellular states that can be observed in the lab. Since identifying attractors of large BNs/PBNs is a~challenging problem by itself and we consider our framework a~contribution towards developing scalable control methods for large networks, we introduced the notion of a~pseudo-attractor and developed a~procedure that identifies pseudo-attractor states during DRL agent training. We evaluated the performance of pbn-STAC on a~number of networks of various sizes and on a~biological case study, and compared its solutions with the exact, optimal ones wherever possible.

The obtained results show the potential of the framework in terms of effectiveness and at the same time reveal some bottlenecks that need to be overcome to improve its performance. The major identified issue is related to the long tails of the distributions of the lengths of the strategies identified by pbn-STAC: many strategies have lengths close to the optimal ones, but a~few are very long, which negatively influences the average value. Addressing this problem would allow us to significantly improve the performance of pbn-STAC and make it effective on large models. We consider these developments and the evaluation of pbn-STAC on models of large sizes as future work.

Finally, we expect our framework to be rather straightforwardly adaptable to other types of PBNs, such as PBNs with perturbations, or to Probabilistic Boolean Control Networks.
\bibliographystyle{abbrv}
\bibliography{mybib}
\end{document}