# A Survey On Causal Discovery Methods For I.I.D. And Time Series Data

Uzma Hasan (uzmahasan@umbc.edu), Emam Hossain (emamh1@umbc.edu), Md Osman Gani (mogani@umbc.edu)

Causal AI Lab, Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, USA

Reviewed on OpenReview: https://openreview.net/forum?id=YdMrdhGx9y

## Abstract

The ability to understand causality from data is one of the major milestones of human-level intelligence. Causal Discovery (CD) algorithms can identify the cause-effect relationships among the variables of a system from related observational data under certain assumptions. Over the years, several methods have been developed, primarily based on the statistical properties of data, to uncover the underlying causal mechanism. In this study, we present an extensive discussion on the methods designed to perform causal discovery from both independent and identically distributed (I.I.D.) data and time series data. For this purpose, we first introduce the common terminologies used in the causal discovery literature and then provide a comprehensive discussion of the algorithms designed to identify causal relations in different settings. We further discuss some of the benchmark datasets available for evaluating algorithmic performance, off-the-shelf tools or software packages to perform causal discovery readily, and the common metrics used to evaluate these methods. We also evaluate some widely used causal discovery algorithms on multiple benchmark datasets and compare their performances. Finally, we conclude by discussing the research challenges and the applications of causal discovery algorithms in multiple areas of interest.

## 1 Introduction

The identification of the cause-effect relationships among the variables of a system from the corresponding data is called Causal Discovery (CD). A major part of causal analysis involves unfolding the *cause and effect* relationships among the entities in complex systems, which can help us build better solutions in health care, earth science, politics, business, education, and many other diverse areas (Peyrot (1996), Nogueira et al. (2021)). The *causal explanations*, precisely the causal factors obtained from a causal analysis, play an important role in decision-making and policy formulation, as well as in foreseeing the consequences of interventions without actually performing them. Causal discovery algorithms enable the *discovery of the underlying causal structure* given a set of observations. The underlying causal structure, also known as a causal graph (CG), is a representation of the cause-effect relationships between the variables in the data (Pearl (2009)). Causal graphs represent the causal relationships with directed arrows from the cause to the effect. Discovering the causal relations, and thereby estimating their effects, would enable us to better understand the underlying data generating mechanism (DGM) and take necessary interventional actions. However, traditional Artificial Intelligence (AI) applications rely solely on predictive models and often ignore causal knowledge. Systems without the knowledge of causal relationships often cannot make rational and informed decisions (Marwala (2015)). The result may be devastating when correlations are mistaken for causation.
This is because two variables can be highly correlated and yet have no causal influence on each other; a third variable, often called a latent confounder or hidden factor, may be causing both of them (see Figure 2 (a)). Thus, *embedding the knowledge of causal relationships* in black-box AI systems is important to improve their explainability and reliability (Dubois & Prade (2020), Ganguly et al. (2023)). In multiple fields such as healthcare, politics, economics, climate science, business, and education, the ability to understand causal relations can facilitate the formulation of better policies with a greater understanding of the data. The standard approach to discover cause-effect relationships is to perform randomized control trials (RCTs) (Sibbald & Roland (1998)). However, RCTs are often infeasible to conduct due to high costs and ethical concerns (Resnik (2008)). As a result, over the last few decades, researchers have developed a variety of methods to unravel causal relations from purely observational data (Glymour et al. (2019), Vowels et al. (2021)). These methods are often based on some assumptions about the data and the underlying mechanism. The *outcome* of any causal discovery method is a causal graph or a causal adjacency matrix where the cause and effect relations among the entities or variables are represented. The structure of a causal graph is often similar to a *directed acyclic graph (DAG)* where directed edges from one variable to another represent the cause-effect relationship between them. Figure 2 (b) represents a causal graph showing the factors that are responsible for causing cancer. This type of structural representation of the underlying data-generating mechanism is beneficial for understanding how the system entities interact with each other. There exists a wide range of approaches for performing causal discovery under different settings or assumptions. Some approaches are designed particularly for *independent and identically distributed (I.I.D.) data* (Spirtes et al. (2000b), Chickering (2002)), i.e. non-temporal data, while others are focused on *time series* data (Runge et al. (2019), Hyvärinen et al. (2010)), i.e. temporal data. Since in real-world settings both types of data are available in different problem domains, it is essential to have approaches to perform causal structure recovery from both of these. Recently, there has been a growing body of research that considers *prior knowledge* incorporation for recovering the causal relationships (Mooij et al. (2020), Hasan & Gani (2022), Hasan & Gani (2023)). Although there exist some surveys (see Table 1) on causal discovery approaches (Heinze-Deml et al. (2018), Glymour et al. (2019), Guo et al. (2020), Vowels et al. (2021), Assaad et al. (2022b)), none of these present a comprehensive review of the different approaches designed for structure recovery from both I.I.D. and time series data. Also, these surveys do not discuss the approaches that perform causal discovery in the presence of background knowledge. Hence, the goal of this survey is to provide an overview of the wide range of existing approaches for performing causal discovery from I.I.D. as well as time series data under different settings, and to introduce readers to the methods available in both domains.
We discuss prominent methods based on different approaches such as conditional independence (CI) testing, score function usage, functional causal models (FCMs), continuous optimization strategies, prior knowledge infusion, and miscellaneous ones. These methods primarily differ from each other based on the primary strategy they follow. Apart from introducing the different causal discovery approaches and algorithms for I.I.D. and time series data, we also discuss the different tools, metrics, and benchmark datasets used for performing CD, and the challenges and applications of CD in a wide range of areas.

![1_image_0.png](1_image_0.png)

Figure 1: Causal Discovery: Identification of a causal graph from data.

![2_image_0.png](2_image_0.png)

Figure 2: (a) Latent confounder L causes both variables S and C, and the association between S and C is denoted by ? which can be mistaken as causation. The graph in (b) is a causal graph depicting the causes and effects of cancer (Korb & Nicholson (2010)).

Table 1: Comparison among the existing surveys for causal discovery approaches. A discussion on the different approaches can be found in section 3 and section 4.

| Survey | Focused Approaches | I.I.D. Data | Time Series Data |
|---|---|---|---|
| Heinze-Deml et al. (2018) | Constraint, Score, Hybrid & FCM-based approaches. | ✓ | × |
| Glymour et al. (2019) | Traditional Constraint-based, Score-based, & FCM-based approaches. | ✓ | × |
| Guo et al. (2020) | Constraint-based, Score-based, & FCM-based approaches. | ✓ | × |
| Vowels et al. (2021) | Continuous Optimization-based approaches. | ✓ | × |
| Assaad et al. (2022b) | Constraint-based, Score-based, FCM-based, etc. approaches for time series data. | × | ✓ |
| This study | Constraint-based, Score-based, FCM-based, Hybrid-based, Continuous-Optimization-based, Prior-Knowledge-based, and Miscellaneous. | ✓ | ✓ |

To summarize, the structure of this paper is as follows: *First*, we provide a brief introduction to the common terminologies in the field of causal discovery (section 2). *Second*, we discuss the wide range of causal discovery approaches that exist for both I.I.D. (section 3) and time-series data (section 4). *Third*, we briefly overview the common evaluation metrics (section 5) and datasets (section 6) used for evaluating the causal discovery approaches, and report the performance comparison of some causal discovery approaches in section 7. *Fourth*, we list the different technologies and open-source software (section 8) available for performing causal discovery. *Fifth*, we discuss the challenges (section 9.1) and applications (section 9.2) of causal discovery in multiple areas such as healthcare, business, social science, economics, and so on. *Lastly*, we conclude by discussing the scopes of improvement in future causal discovery research, and the importance of causality in improving the existing predictive AI systems which can thereby impact informed and reliable decision-making in different areas of interest (section 10).

## 2 Preliminaries Of Causal Discovery

In this section, we briefly discuss the important terminologies and concepts that are widely used in causal discovery. Some common notations used to explain the terminologies are presented in Table 2.

Table 2: Common notations.

| Notation | Description |
|------------|----------------------------------------------------------------------|
| G | A graph, DAG, or ground-truth graph |
| G′ | An estimated graph |
| X, Y, Z, W | Observational variables |
| X − Y | An unoriented or undirected edge between X and Y |
| X → Y | A directed edge from X to Y where X is the cause and Y is the effect |
| X ̸→ Y | Absence of an edge or causal link between X and Y |
| X → Z ← Y | V-structure or collider where Z is the common child of X and Y |
| ⊥⊥ | Independence or d-separation |
| X ⊥⊥ Y \| Z | X is d-separated from Y given Z |

## 2.1 Graphical Models

A graph *G = (V, E)* consists of a set of vertices (nodes) V and a set of edges E where the edges represent the relationships among the vertices. Figure 3 (a) represents a graph G with vertices V = [X, Y, Z] and edges E = [(X, Y), (X, Z), (Z, Y)]. There can be different types of edges in a graph such as directed edges (→), undirected edges (−), bi-directed edges (↔), etc. (Colombo et al. (2012)). A graph that consists of only undirected edges (−) between the nodes, which represent their adjacencies, is called a *skeleton graph* SG. This type of graph is also known as an *undirected graph* (Figure 3 (b)). A graph that has a mixture of different types of edges is known as a *mixed graph* MG (Figure 3 (c)). A *path* p between two nodes X and Y is a sequence of edges beginning from X and ending at Y. A *cycle* c is a path that begins and ends at the same vertex. A graph with no cycle c is called an *acyclic graph*. A directed graph in which the edges have directions (→) and which has no cycle is called a *directed acyclic graph* (DAG). In a DAG G, a directed path from X to Y implies that X is an ancestor of Y, and Y is a descendant of X. The graph G in Figure 3 (a) is a DAG as it is acyclic and consists of directed edges.

![3_image_0.png](3_image_0.png)

Figure 3: (a) A graph G, (b) its *skeleton* graph SG, (c) a *mixed graph* MG with directed & undirected edges.

There can be different kinds of DAGs based on the type of edges they contain. A class of DAG known as *partially directed acyclic graph* (PDAG) contains both directed (→) and undirected (−) edges. The mixed graph of Figure 3 (c) is also a PDAG. A *completed PDAG* (CPDAG) consists of directed (→) edges that exist in every DAG G having the same conditional dependencies, and undirected (−) edges that are reversible in G. An extension of DAGs that retains many of the significant properties associated with DAGs is known as *ancestral graphs* (AGs). Two different DAGs may lead to the same ancestral graph (Richardson & Spirtes (2002a)). Often there are hidden confounders and selection biases in real-world data. Ancestral graphs can represent the data-generating mechanisms that may involve latent confounders and/or selection bias, without explicitly modeling the unobserved variables. There exist different types of ancestral graphs. A *maximal ancestral graph* (MAG) is a mixed graph that can have both directed (→) and bi-directed (↔) edges (Richardson & Spirtes (2002b)). A *partial ancestral graph* (PAG) can have four types of edges: directed (→), bi-directed (↔), partially directed (o→), and undirected (−) (Spirtes (2001)). That is, edges in a PAG can have three kinds of endpoints: −, o, or >. An ancestral graph without bi-directed edges (↔) is a DAG (Triantafillou & Tsamardinos (2016)).
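For readers who prefer a concrete representation, the minimal sketch below encodes the graph of Figure 3 (a) as an adjacency matrix, derives its skeleton, and verifies that it is acyclic. The encoding and the helper function are illustrative choices and not tied to any specific library.

```python
# A minimal sketch of encoding the DAG in Figure 3 (a) as an adjacency matrix,
# deriving its skeleton, and checking acyclicity. Choices here are illustrative.
import numpy as np

nodes = ["X", "Y", "Z"]
# A[i, j] = 1 means a directed edge nodes[i] -> nodes[j]
A = np.array([[0, 1, 1],   # X -> Y, X -> Z
              [0, 0, 0],   # Y has no outgoing edges
              [0, 1, 0]])  # Z -> Y

# Skeleton: keep the adjacencies, drop the edge orientations.
skeleton = ((A + A.T) > 0).astype(int)

def is_acyclic(adj):
    """Kahn's algorithm: a directed graph is acyclic iff its nodes can be removed
    by repeatedly deleting nodes that have no incoming edge."""
    remaining = list(range(adj.shape[0]))
    while remaining:
        sources = [j for j in remaining if adj[remaining, j].sum() == 0]
        if not sources:      # every remaining node has an incoming edge -> cycle
            return False
        remaining = [j for j in remaining if j not in sources]
    return True

print(is_acyclic(A))  # True: the graph of Figure 3 (a) is a DAG
```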
## 2.2 Causal Graphical Models

A *causal graphical model* (CGM) or *causal graph* (CG) is a DAG G that represents a joint probability distribution P over a set of random variables X = (X1, X2, . . . , Xd) where P is Markovian with respect to G. In a CGM, the nodes represent variables X, and the arrows represent causal relationships between them. The joint distribution P can be factorized as follows, where pa(xi, G) denotes the parents of xi in G.

$$P(x_{1},\ldots,x_{d})=\prod_{i=1}^{d}P(x_{i}\mid pa(x_{i},G))\qquad(1)$$

Causal graphs are often used to study the underlying data-generating mechanism in real-world problems. For any dataset D with variables X, causal graphs can encode the cause-effect relationships among the variables using directed edges (→) from the cause to the effect. Most of the time causal graphs take the form of a DAG. In Figure 3 (a), X is the cause that affects both Y and Z (i.e. Y ← X → Z). Also, Z is a cause of Y (i.e. Z → Y). The mechanism that enables the estimation of a causal graph G from a dataset D is called *causal discovery* (CD) (Figure 1). The outcome of any causal discovery algorithm is a causal graph G where the directed edges (→) represent the cause-and-effect relationship between the variables X in D. However, some approaches produce other forms of graphs (PDAGs, CPDAGs, ancestral graphs, etc.) as the output causal graph. Table 3 lists the output causal graphs of some common approaches which are discussed in section 3.

Table 3: List of some CD algorithms with their output causal graphs. A detailed discussion of the algorithms is in section 3. The cells with ✓ represent the type of graph produced by the corresponding algorithm.

| Algorithms | DAG | PDAG | CPDAG | MAG | PAG |
|--------------|-------|--------|---------|-------|-------|
| PC | | | ✓ | | |
| FCI | | | | | ✓ |
| RFCI | | | | | ✓ |
| GES | | | ✓ | | |
| GIES | ✓ | | | | |
| MMHC | ✓ | | | | |
| LiNGAM | ✓ | | | | |
| NOTEARS | ✓ | | | | |
| GSMAG | | | | ✓ | |

## 2.2.1 Key Structures In Causal Graphs

There are three fundamental *building blocks* (key structures) commonly observed in graphical models or causal graphs, namely, *Chain*, *Fork*, and *Collider*. Any graphical model consisting of at least three variables is composed of these key structures. We discuss these basic building blocks and their implications for dependency relationships below.

Definition 1 (Chain) *A chain X → Y → Z is a graphical structure or a configuration of three variables X, Y, and Z in graph G where X has a directed edge to Y and Y has a directed edge to Z (see Figure 4 (a)). Here, X causes Y and Y causes Z, and Y is called a mediator.*

Definition 2 (Fork) *A fork Y ← X → Z is a triple of variables X, Y, and Z where one variable is the common parent of the other two variables. In Figure 4 (b), the triple (X, Y, Z) is a fork where X is a common parent of Y and Z.*

Definition 3 (Collider/V-structure) *A v-structure or collider X → Z ← Y is a triple of variables X, Y, and Z where one variable is a common child of the other two variables which are non-adjacent. In Figure 4 (c), the triple (X, Y, Z) is a v-structure where Z is a common child of X and Y, but X and Y are non-adjacent in the graph.* Figure 4 (d) is also a collider with a descendant W.

![5_image_0.png](5_image_0.png)

Figure 4: Fundamental building blocks in causal graphical models.
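The dependency behaviour of these building blocks, formalized in the next subsection, can be checked directly on simulated data. The following minimal sketch (illustrative; the coefficients and sample size are arbitrary choices of ours) simulates the collider of Figure 4 (c) and shows that X and Y are marginally independent but become dependent once we condition on their common child Z.

```python
# A minimal simulation of the collider X -> Z <- Y from Figure 4 (c).
# The coefficients and sample size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                 # X and Y are generated independently
y = rng.normal(size=n)
z = x + y + 0.1 * rng.normal(size=n)   # Z is the common child of X and Y

def regress_out(a, c):
    """Residual of a after linearly regressing it on c."""
    slope, intercept = np.polyfit(c, a, 1)
    return a - (slope * c + intercept)

print(np.corrcoef(x, y)[0, 1])                                  # ~0: marginally independent
print(np.corrcoef(regress_out(x, z), regress_out(y, z))[0, 1])  # close to -1: dependent given Z
```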
## 2.2.2 Conditional Independence In Causal Graphs Testing for *conditional independence* (CI) between the variables is one of the most important techniques to find the causal relationships among the variables. Conditional independence between two variables X and Y results when they are independent of each other given a third variable Z (i.e. X ⊥⊥ Y | Z). In the case of causal discovery, CI testing allows deciding if any two variables are causally connected or disconnected. An important criterion for CI testing is the *d-separation* criterion which is formally defined below. Definition 4 (d-separation) (Pearl (1988)) A path p in G is blocked by a set of nodes N *if either* i. p contains a chain of nodes X → Y → Z or a fork X ← Y → Z such that the middle node Y *is in* N, ii. p contains a collider X → Y ← Z such that the collision node Y is not in N*, and no descendant of* Y is in N. If N *blocks every path between two nodes, then they are d-separated, conditional on N, and thus are independent conditional on N.* In *d-separation*, d stands for *directional*. The d-separation criterion provides a set of rules to check if two variables are independent when conditioned on a set of variables. The conditioning variable can be a single variable or a set of variables. However, two variables with a directed edge (→) between them are always dependent. The set of testable implications provided by *d-separation* can be benchmarked with the available data D. If a graph G might have been generated from a dataset D, then *d-separation* tells us which variables in G must be independent conditional on other variables. If every *d-separation* condition matches a conditional independence in data, then no further test can refute the model (Pearl (1988)). If there is at least one path between two variables that is unblocked, then they are *d-connected*. If two variables are d-connected, then they are most likely dependent (except intransitive cases) (Pearl (1988)). The d-separation or conditional independence between the variables in the **key structures** (Figure 4) or building blocks of causal graphs follow some rules which are discussed below: i. *Conditional Independence in Chains:* If there is only one unidirectional path between variables X and Z (Figure 4 (a)), and Y is any variable or set of variables that intercept that path, then X and Z are conditionally independent given Y , i.e. X ⊥⊥ Z | Y . ii. *Conditional Independence in Forks:* If a variable X is a common cause of variables Y and Z, and there is only one path between Y and Z, then Y and Z are independent conditional on X (i.e. Y ⊥⊥ Z | X) (Figure 4(b)). iii. *Conditional Independence in Colliders:* If a variable Z is the collision node between two variables X and Y (Figure 4(c)), and there is only one path between X and Y , then X and Y are unconditionally independent (i.e. X ⊥⊥ Y ). But, they become dependent when conditioned on Z or any descendants of Z (Figure 4(d)). ## 2.2.3 Markov Equivalence In Causal Graphs A set of causal graphs having the same set of conditional independencies is known as a Markov equivalence class (MEC). Two DAGs that are Markov equivalent have the *(i) same skeleton* (the underlying undirected graph) and (ii) *same v-structures (colliders)* (Verma & Pearl (2022)). That is, all DAGs in a MEC share the same edges, regardless of the direction of those edges, and the same colliders whose parents are not adjacent. *Chain* and *Fork* share the same independencies, hence, they belong to the same MEC (Figure 5). 
Definition 5 (Markov Blanket) *For any variable X, its Markov blanket (MB) is the set of variables such that X is independent of all other variables given MB.* The **members** in the Markov blanket of any variable will include all of its *parents, children, and spouses*.

![6_image_0.png](6_image_0.png)

Figure 5: Markov Equivalence in Chains and Fork.

Markov equivalence in different types of DAGs may vary. A *partial* DAG (PDAG), a.k.a. an essential graph (Perković et al. (2017)), can represent an equivalence class of DAGs. Each equivalence class of DAGs can be uniquely represented by a PDAG. A *completed* PDAG or CPDAG represents the union (over the set of edges) of Markov equivalent DAGs, and can uniquely represent an MEC (Malinsky & Spirtes (2016b)). More specifically, in a CPDAG, an undirected edge between any two nodes X and Y indicates that some DAG in the equivalence class contains the edge X → Y and some DAG may contain Y → X. Figure 6 shows a CPDAG and the DAGs (G and H) belonging to an equivalence class. Markov equivalence in the case of ancestral graphs works as follows. A *maximal ancestral graph* (MAG) represents a DAG where all hidden variables are marginalized out, and preserves all conditional independence relations among the variables that are true in the underlying DAG. That is, MAGs can model causality and conditional independencies in causally insufficient systems (Triantafillou & Tsamardinos (2016)). Partial ancestral graphs (PAGs) represent an equivalence class of MAGs where all common edge marks shared by all members in the class are displayed, and circles are shown for those marks that are not common. PAGs represent all of the observed d-separation relations in a DAG. Different PAGs that represent distinct equivalence classes of MAGs involve different sets of conditional independence constraints. An MEC of MAGs can be represented by a PAG (Malinsky & Spirtes (2016b)).

![6_image_1.png](6_image_1.png)

Figure 6: DAGs G and H belong to the same MEC. The leftmost graph is a CPDAG of G and H with an undirected edge (−) between X and Z, and the rest of the edges same as in G and H.

## 2.3 Structural Causal Models

Pearl (2009) defined a *class of models* for formalizing structural knowledge about the *data-generating process* known as the *structural causal models (SCMs)*. The SCMs are valuable tools for reasoning and decision-making in causal analysis since they are capable of representing the underlying causal story of data (Kaddour et al. (2022)).

Definition 6 (Structural Causal Model) *Pearl (2009); A structural causal model is a 4-tuple* M = ⟨U, V, F, P(u)⟩*, where*

i. U is a set of background variables (also called exogenous) that are determined by factors outside the model.

ii. V *is a set* {V1, V2, . . . , Vn} of endogenous variables that are determined by variables in the model, viz. variables in U ∪ V.

iii. F is a set of functions {f1, f2, . . . , fn} such that each fi *is a mapping from the respective domains of* Ui ∪ PAi to Vi, and the entire set F forms a mapping from U to V. In other words, each fi *assigns a value to the corresponding* Vi ∈ V, vi ← fi(pai, ui), for i = 1, 2, . . . , n.

iv. P(u) *is a probability function defined over the domain of* U.

Each SCM M is associated with a *causal graphical model* G that is a DAG, and a set of functions fi. Causation in SCMs can be interpreted as follows: a variable Y is directly caused by X if X is in the function f of Y.
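As a concrete illustration, the sketch below samples observational data from a three-variable SCM whose causal graph matches Figure 3 (a) (X → Y, X → Z, Z → Y). The linear functional forms and coefficients are illustrative assumptions of ours, not the exact functions fX, fY, fZ shown in Figure 7.

```python
# A minimal sketch of sampling observational data from a three-variable SCM.
# The linear functions and coefficients below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Exogenous (background) variables U, drawn from P(u)
u_x = rng.normal(size=n)
u_y = rng.normal(size=n)
u_z = rng.normal(size=n)

# Endogenous variables V, each assigned by its structural function f_i(pa_i, u_i)
x = u_x                       # f_X: X has no endogenous parents
z = 0.8 * x + u_z             # f_Z: X -> Z
y = 1.5 * x - 0.7 * z + u_y   # f_Y: X -> Y and Z -> Y

data = np.column_stack([x, y, z])  # an observational dataset drawn from the SCM
```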
In the SCM of Figure 7, X is a direct cause of Y as X appears in the function that assigns Y's value. That is, if a variable Y is the child of another variable X, then X is a direct cause of Y. In Figure 7, UX, UY and UZ are the exogenous variables; X, Y and Z are the endogenous variables; and fX, fY & fZ are the functions that assign values to the variables in the system. A variable is *an exogenous variable* if (i) it is an unobserved or unmeasured variable and (ii) it cannot be a descendant of any other variable. Every endogenous variable is a descendant of at least one exogenous variable.

![7_image_0.png](7_image_0.png)

Figure 7: A Structural Causal Model (SCM) with causal graph G and functions fX, fY and fZ which denote how the variables X, Y, and Z are generated respectively.

## 2.4 Causal Assumptions

Often, the available data provide only partial information about the underlying causal story. Hence, it is essential to make some assumptions about the world for performing causal discovery (Lee & Honavar (2020)). Following are the common assumptions usually made by causal discovery algorithms.

i. *Causal Markov Condition (CMC):* The causal Markov assumption states that a variable X is independent of every other variable (except its descendants) conditional on all of its direct causes (Scheines (1997)). That is, the CMC requires that every variable in the causal graph is independent of its non-descendants conditional on its parents (Malinsky & Spirtes (2016a)). In Figure 8, W is the only descendant of X. As per the CMC, X is independent of Z conditioned on its parent Y (X ⊥⊥ Z | Y).

![8_image_0.png](8_image_0.png)

Figure 8: Illustration of the causal Markov condition (CMC) among four variables.

ii. *Causal Faithfulness Condition (CFC):* The faithfulness assumption states that except for the variables that are d-separated in a DAG, all other variables are dependent. More specifically, for a set of variables V whose causal structure is represented by a DAG G, no conditional independence holds unless entailed by the causal Markov condition (Ramsey et al. (2012)). That is, the CFC, a.k.a. the Stability condition, is a converse principle of the CMC. The CFC can also be explained in terms of d-separation as follows: for every three disjoint sets of variables X, Y, and Z, if X and Y are not d-separated by Z in the causal DAG, then X and Y are not independent conditioned on Z (Ramsey et al. (2012)). The faithfulness assumption may fail in certain scenarios. For example, it fails whenever there exist two paths with equal and opposite effects between variables. It also fails in systems with deterministic relationships among variables, and when there is a failure of transitivity along a single path (Weinberger (2018)).

iii. *Causal Sufficiency:* The causal sufficiency assumption states that there exist no latent/hidden/unobserved confounders, and all the common causes are measured. Thus, the assumption of causal sufficiency is satisfied only when all the common causes of the measured variables are measured. This is a strong assumption as it restricts the search space of all possible DAGs that may be inferred. However, real-world datasets may have hidden confounders, which frequently causes the assumption to be violated in such scenarios. Algorithms that rely on the causal sufficiency assumption may observe degraded performance when it is violated.
The causal insufficiency in real-world datasets may be overcome by leveraging domain knowledge in the discovery pipeline. The CMC tends to fail for a causally insufficient set of variables.

iv. *Acyclicity:* It is the most common assumption, which states that *there are no cycles in a causal graph*. That is, a graph needs to be acyclic in order to be a causal graph. As per the acyclicity condition, there can be no directed path starting from a node and ending back at itself. This resembles the structure of a directed acyclic graph (DAG). A recent approach (Zheng et al. (2018)) has formulated a new function (Equation 2) to enforce the acyclicity constraint during causal discovery in continuous optimization settings. The weighted adjacency matrix W corresponds to a DAG if it satisfies the following condition, where ◦ is the Hadamard product, e^{W◦W} is the matrix exponential of W ◦ W, and d is the total number of vertices.

$$h(W)=\mathrm{tr}(e^{W\circ W})-d=0\qquad(2)$$

v. *Data Assumptions:* There can be different types of assumptions about the data. Data may have linear or nonlinear dependencies and can be continuous-valued or discrete-valued in nature. Data can be independent and identically distributed (I.I.D.) or the data distribution may shift with time (e.g. time-series data). Also, the data may belong to different noise distributions such as Gaussian, Gumbel, or Exponential noise. Occasionally, some other data assumptions such as the existence of selection bias, missing variables, hidden confounders, etc. are found. However, in this survey, we do not focus much on the methods with these assumptions.

## 3 Causal Discovery Algorithms For I.I.D. Data

Causal graphs are essential as they represent the underlying causal story embedded in the data. There are two very common approaches to recovering the causal structure from observational data: *i) Constraint-based* (Spirtes et al. (2000b), Spirtes (2001), Colombo et al. (2012)) and *ii) Score-based* (Chickering (2002)). Among the other types of approaches, *functional causal models (FCMs)-based* (Shimizu et al. (2006), Hoyer et al. (2008)) approaches and *hybrid* approaches (Tsamardinos et al. (2006)) are noteworthy. Recently, some *gradient-based* approaches have been proposed based on neural networks (Abiodun et al. (2018)) and a modified definition (Equation 2) of the acyclicity constraint (Zheng et al. (2018), Yu et al. (2019)). Other approaches include the ones that prioritize the use of *background knowledge* and provide ways to incorporate prior knowledge and experts' opinions into the search process (Wang et al. (2020); Sinha & Ramsey (2021)).

![9_image_0.png](9_image_0.png)

Figure 9: Taxonomy of some causal discovery approaches for I.I.D. data. The approaches are classified based on their core contribution or the primary strategy they adopt for causal structure recovery. The approaches that leverage prior knowledge are marked by an ∗ symbol. Some of the gradient-based optimization approaches that use a score function are indicated by a ⋄ symbol. They are primarily classified as gradient-based methods because of the use of gradient descent for optimization. However, they can be considered score-based methods too as they compute data likelihood scores on the way.

In this section, we provide an overview of the causal discovery algorithms for I.I.D. data based on the different types of approaches mentioned above. The algorithms primarily distinguish themselves from each other based on the core approach they follow to perform causal discovery.
We further discuss noteworthy similar approaches specialized for non-I.I.D. or time series data in section 4.

## 3.1 Constraint-Based

Testing for conditional independence (CI) is a core objective of constraint-based causal discovery approaches. Conditional independence tests can be used to recover the causal skeleton if the probability distribution of the observed data is faithful to the underlying causal graph (Marx & Vreeken (2019)). Thus, constraint-based approaches conduct CI tests between the variables to check for the presence or absence of edges. These approaches infer the conditional independencies within the data using the *d-separation criterion* to search for a DAG that entails these independencies, and detect which variables are d-separated and which are d-connected (Triantafillou & Tsamardinos (2016)). In Figure 10 (a), X is conditionally independent of Z given Y (i.e. X ⊥⊥ Z | Y), whereas in Figure 10 (b), X and Z are independent but are not conditionally independent given Y. Table 4 lists different types of CI tests used by constraint-based causal discovery approaches.

![9_image_1.png](9_image_1.png)

Figure 10: (a) X ⊥⊥ Z | Y and (b) X and Z are not conditionally independent given Y.

Table 4: Types of conditional independence (CI) tests. Please refer to the study Runge (2018) for a detailed discussion on CI tests.

| | Conditional Independence Test | Ref. |
|----|---------------------------------|--------|
| 1. | Conditional Distance Correlation (CDC) test | Wang et al. (2015) |
| 2. | Momentary Conditional Independence (MCI) | Runge et al. (2019) |
| 3. | Kernel-based CI test (KCIT) | Zhang et al. (2012) |
| 4. | Randomized Conditional Correlation Test (RCoT) | Strobl et al. (2019) |
| 5. | Generative Conditional Independence Test (GCIT) | Bellot & van der Schaar (2019) |
| 6. | Model-Powered CI test | Sen et al. (2017) |
| 7. | Randomized Conditional Independence Test (RCIT) | Strobl et al. (2019) |
| 8. | Kernel Conditional Independence Permutation Test | Doran et al. (2014) |
| 9. | Gaussian Processes and Distance Correlation-based (GPDC) | Rasmussen et al. (2006) |
| 10. | Conditional mutual information estimated with a k-nearest neighbor estimator (CMIknn) | Runge (2018) |

## 3.1.1 PC

The Peter-Clark (PC) algorithm (Spirtes et al. (2000b)) is one of the oldest constraint-based algorithms for causal discovery. To learn the underlying causal structure, this approach depends largely on conditional independence (CI) tests. This is because it is based on the concept that two statistically independent variables are not causally linked. The outcome of the PC algorithm is a CPDAG. It learns the CPDAG of the underlying DAG in three steps: *Step 1 - Skeleton identification, Step 2 - V-structures determination, and Step 3 - Edge orientations*. It starts with a fully connected undirected graph over all the variables in the dataset, then eliminates the edges between variables that are unconditionally or conditionally independent (skeleton identification), then finds and orients the v-structures or colliders (i.e. X → Y ← Z) based on the d-separation sets of node pairs, and finally orients the remaining edges based on two aspects: i) no new v-structures are created, and ii) no cycle is formed. The assumptions made by the PC algorithm include acyclicity, causal faithfulness, and causal sufficiency. It is computationally more feasible for sparse graphs.
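To make the skeleton-identification step concrete, the following minimal sketch (an illustration of ours, not the reference implementation of PC) runs Fisher-z partial-correlation CI tests over conditioning sets of increasing size and deletes an edge whenever independence cannot be rejected. The significance level and the maximum conditioning-set size are arbitrary illustrative choices.

```python
# A minimal, illustrative sketch of the skeleton-identification step of a
# PC-style algorithm using Fisher-z partial-correlation CI tests.
from itertools import combinations
import numpy as np
from scipy import stats

def fisher_z_pvalue(data, i, j, cond):
    """p-value for the null hypothesis that X_i and X_j are independent given X_cond."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))                  # Fisher's z-transform
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2 * (1 - stats.norm.cdf(stat))

def pc_skeleton(data, alpha=0.01, max_level=2):
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}     # start fully connected
    sepset = {}
    for level in range(max_level + 1):                   # conditioning sets of growing size
        for i in range(d):
            for j in sorted(adj[i]):
                if j <= i:
                    continue
                for cond in combinations(adj[i] - {j}, level):
                    if fisher_z_pvalue(data, i, j, cond) > alpha:  # independence not rejected
                        adj[i].discard(j)
                        adj[j].discard(i)
                        sepset[(i, j)] = set(cond)        # record the separating set
                        break
    return adj, sepset
```

The full PC algorithm then orients v-structures using the recorded separating sets and applies the edge-orientation rules described above.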
An implementation of this algorithm can be found in the CDT repository (https://github.com/ElementAI/causal_discovery_toolbox) and also in the gCastle toolbox (Zhang et al. (2021a)). A number of constraint-based approaches, namely FCI, RFCI, PCMCI, PC-stable, etc., use the PC algorithm as a backbone to perform the CI tests.

![10_image_0.png](10_image_0.png)

Figure 11: Step-by-step workflow of the PC (Spirtes et al. (2000b)) algorithm.

## 3.1.2 FCI

The Fast Causal Inference (FCI) algorithm (Spirtes et al. (2000a)) is a variant of the PC algorithm that can infer conditional independencies and learn causal relations in the presence of many arbitrary latent and selection variables. As a result, it is accurate in the large sample limit with a high probability even when there exist *hidden variables* and *selection bias* (Berk (1983)). The first step of the FCI algorithm is similar to the PC algorithm where it starts with a complete undirected graph to perform the skeleton determination. After that, it requires additional tests to learn the correct skeleton and has additional orientation rules. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the dataset. This can affect both the speed and the accuracy of the algorithm in the case of small data samples. To improve the algorithm, particularly in terms of speed, there exist different variants such as the RFCI (Colombo et al. (2012)) and the Anytime FCI (Spirtes (2001)) algorithms.

## 3.1.3 Anytime FCI

Anytime FCI (Spirtes (2001)) is a modified and faster version of the FCI (Spirtes et al. (2000a)) algorithm. The number of CI tests required by FCI makes it infeasible if the model has a large number of variables. Moreover, when FCI requires independence tests conditional on a large set of variables, the accuracy decreases for a small sample size. The outer loop of the FCI algorithm performs independence tests conditional on sets of increasing size. In the Anytime FCI algorithm, the authors showed that this outer loop can be stopped at any time during the execution, for any smaller conditioning-set size. As the number of variables in the conditioning set is reduced, Anytime FCI becomes much faster for large sample sizes. More importantly, it is also more reliable on limited samples since the statistical tests with the lowest power are discarded. To support the claim, the authors provided a proof for the change in FCI that guarantees sound results despite the interruption. The result of the interrupted Anytime FCI algorithm is still valid, but since it can answer fewer questions, the results could be less informative than if the algorithm were allowed to run uninterrupted.

## 3.1.4 RFCI

Really Fast Causal Inference (RFCI) (Colombo et al. (2012)) is a much faster variant of the traditional FCI for learning PAGs that uses fewer CI tests than FCI. Like FCI, RFCI does not assume that causal sufficiency holds. To ensure soundness, RFCI performs some additional tests before orienting v-structures and discriminating paths. It conditions only on subsets of the adjacency sets and, unlike FCI, avoids the CI tests given subsets of possible d-separation sets, which can become very large even for sparse graphs. As a result, the number of these additional tests and the size of their conditioning sets are small for sparse graphs, which makes RFCI much faster and more computationally feasible than FCI for high-dimensional sparse graphs.
Also, the lower computational complexity of RFCI leads to high-dimensional consistency results under weaker conditions than FCI.

## 3.1.5 FCI With Tiered Background Knowledge

Andrews et al. (2020) show that the Fast Causal Inference (FCI) algorithm (Spirtes et al. (2000a)) is sound and complete with tiered background knowledge (TBK). *Tiered background knowledge* means any knowledge where the variables may be partitioned into two or more mutually exclusive and exhaustive subsets among which there is a known causal order. Tiered background knowledge may arise in many different situations, including but not limited to instrumental variables, data from multiple contexts and interventions, and temporal data with contemporaneous confounding. The proof that FCI is complete with TBK suggests that the algorithm is able to find all of the causal relationships that are identifiable from tiered background knowledge and observational data under the typical assumptions.

## 3.1.6 PC-Stable

The independence tests in the original PC method are prone to errors when only a few samples are available. Additionally, because the graph is updated dynamically, maintaining or deleting an edge incorrectly will affect the neighbor sets of other nodes. As a result, the order in which the CI tests are run will affect the output graph. Although this order dependency is not a significant issue in low-dimensional settings, it is a severe problem in high-dimensional settings. To solve this problem, Colombo et al. (2014) suggested changing the original PC technique to produce a stable output skeleton that is independent of the variable ordering in the input dataset. This approach, known as the PC-stable algorithm, queries and maintains the neighbor (adjacent) sets of every node at each distinct level. Since the conditioning sets of the other nodes are unaffected by an edge deletion at one level, the outcome is independent of the variable ordering. The authors demonstrated that this updated version greatly outperforms the original algorithm in high-dimensional settings while maintaining the original algorithm's performance in low-dimensional settings. However, this modification lengthens the algorithm's runtime by requiring additional CI tests to be performed at each level. The R package pcalg contains the source code for PC-stable.

## 3.1.7 PKCL

Wang et al. (2020) proposed an algorithm, Prior-Knowledge-driven Local Causal Structure Learning (PKCL), to discover the underlying causal mechanism between *bone mineral density* (BMD) and its factors from clinical data. It first discovers the neighbors of the target variables and then detects the MaskingPCs to eliminate their effect. After that, it finds the spouses of the target variables utilizing the neighbor sets. This way the skeleton of the causal network is constructed. In the global stage, PKCL leverages the *Markov blanket* (MB) sets learned in the local stage to learn the global causal structure, in which prior knowledge is incorporated to guide the global learning phase. Specifically, it learns the causal direction between feature variables and target variables by combining constraint-based and score-based structure search methods. Also, in the learning phase, it automatically adds causal directions according to the available prior knowledge.

## 3.2 Score-Based

Score-based causal discovery algorithms search over the space of all possible DAGs to find the graph that best explains the data.
Typically, any score-based approach has two main components: *(i) a search strategy* to explore the possible search states or the space of candidate graphs G′, and *(ii) a score function* to assess the candidate causal graphs. The search strategy along with a score function helps to optimize the search over the space of all possible DAGs. More specifically, a score function S(G′, D) maps causal graphs G′ to a numerical score, based on how well G′ fits a given dataset D. A commonly used score function to select causal models is the *Bayesian Information Criterion (BIC)* (Schwarz (1978a)), which is defined below:

$${\mathcal{S}}(G^{\prime},D)=-2\,\mathrm{log}\,{\mathcal{L}}(G^{\prime},D)+k\,\mathrm{log}\,n,\qquad(3)$$

where n is the number of samples in D, k is the dimension of G′, and L is the maximum-likelihood function associated with the candidate graph G′. The lower the BIC score, the better the model. BDeu, BGe, MDL, etc. (listed in Table 5) are some of the other commonly used score functions. These objective functions are optimized through a heuristic search for model selection. After evaluating the quality of the candidate causal graphs using the score function, the score-based methods output one or more causal graphs that achieve the highest score (Huang et al. (2018b)). We discuss some of the well-known approaches in this category below.

![12_image_0.png](12_image_0.png)

Figure 12: General components of a score-based causal discovery approach.

## 3.2.1 GES

Greedy Equivalence Search (GES) (Chickering (2002)) is one of the oldest score-based causal discovery algorithms that performs a greedy search over the space of equivalence classes of DAGs. Each search state is represented by a CPDAG where insert and delete operators allow for single-edge additions and deletions respectively. Primarily, GES works in two phases: i) Forward Equivalence Search (FES), and ii) Backward Equivalence Search (BES). In the first phase, FES starts with an empty CPDAG (no-edge model), and greedily adds edges by taking into account every single-edge addition that could be performed to every DAG in the current equivalence class. After an edge modification is made to the current CPDAG, a score function is used to score the model. Only if the new score is better than the current score is the modification kept. When the forward phase reaches a local maximum, the second phase, BES, starts, where at each step it takes into account all single-edge deletions that might be allowed for all DAGs in the current equivalence class. The algorithm terminates once the local maximum is found in the second phase. Implementations of GES are available in the following Python packages: Causal Discovery Toolbox or CDT (Kalainathan & Goudet (2019)) and gCastle (Zhang et al. (2021a)). GES assumes that the score function is decomposable and can be expressed as a sum of the scores of individual nodes and their parents. A summary workflow of GES is shown in Figure 13.

![13_image_0.png](13_image_0.png)

Figure 13: Different stages in the GES algorithm.

## 3.2.2 FGS

Fast Greedy Search (FGS) (Ramsey (2015)) is another score-based method that is an optimized version of the GES algorithm (Chickering (2002)). This optimized algorithm is based on the faithfulness assumption and uses an alternative method to reduce scoring redundancy. An ascending list L is introduced which stores the score differences of arrows. After a thorough search, the first edge, e.g.,
X → Y, is inserted into the graph and the graph pattern is reverted. For variables that are adjacent to X or Y with positive score differences, new edges are added to L. This process repeats in the forward phase until L becomes empty. Then the backward phase starts, filling the list L and continuing until L is empty. The study considered an experiment where GES was able to search over 1,000 samples with 50,000 variables in 13 minutes using a 4-core processor and 16GB RAM computer. Following the new scoring method, FGS was able to complete the task with 1,000 samples on 1,000,000 variables for sparse models in 18 hours using a supercomputer having 40 processors and 384GB RAM at the Pittsburgh Supercomputing Center. The code for FGS is available on GitHub as a part of the Tetrad project: https://github.com/cmu-phil/tetrad.

## 3.2.3 SGES

Selective Greedy Equivalence Search (SGES) (Chickering & Meek (2015)) is another score-based causal discovery algorithm that is a restrictive variant of the GES algorithm (Chickering (2002)). By assuming a perfect generative distribution, SGES provides a polynomial performance guarantee yet maintains the asymptotic accuracy of GES. While doing this, it is possible to keep the algorithm's large-sample guarantees by ignoring all but a small fraction of the backward search operators that GES considers. In the forward phase, SGES uses a polynomial number of insert operation calls to the score function. The backward phase consists of only a subset of the delete operators of GES, including the consistent operators, to preserve GES's consistency over large samples. The authors demonstrated that, for a given set of graph-theoretic complexity features, such as maximum-clique size, the maximum number of parents, and v-width, the number of score evaluations by SGES can be polynomial in the number of nodes and exponential in these complexity measures.

Table 5: Some commonly used score functions for causal discovery. Please refer to the study Huang et al. (2018a) for a detailed discussion of the score functions.

| Score Function/Criterion | Ref. |
|---|---|
| Minimum description length (MDL) | Schwarz (1978b) |
| Bayesian information criterion (BIC) | Schwarz (1978a) |
| Akaike information criterion (AIC) | Akaike (1998) |
| Bayesian Dirichlet equivalence score (BDeu) | Buntine (1991) |
| Bayesian metric for Gaussian networks (BGe) | Geiger & Heckerman (1994) |
| Factorized normalized maximum likelihood (fNML) | Silander et al. (2008) |

## 3.2.4 RL-BIC

RL-BIC is a score-based approach that uses *Reinforcement Learning (RL)* and a BIC score to search for the DAG with the best reward (Zhu et al. (2019)). For data-to-graph conversion, it uses an encoder-decoder architecture that takes observational data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates a BIC score function and two penalty terms for enforcing acyclicity. The *actor-critic RL algorithm* is used as the *search strategy* and the final output is the causal graph that achieves the best reward among all the generated graphs. The approach is applicable to small and medium graphs of up to 30 nodes. However, dealing with large and very large graphs is still a challenge for it. The study mentions that future work involves developing a more efficient and effective score function since computing scores is much more time-consuming than training NNs. The original implementation of the approach is available at https://github.com/huawei-noah/trustworthyAI.

Figure 14: Components of the RL-BIC (Zhu et al. (2019)) approach.
![14_image_0.png](14_image_0.png)

## 3.2.5 A* Search

Xiang & Kim (2013) proposed a one-stage method for learning sparse network structures with continuous variables using the A* search algorithm with lasso in its scoring system. This method increased the computational effectiveness of popular exact methods based on dynamic programming. The study demonstrated how the proposed approach achieved comparable or better accuracy with significantly faster computation time when compared to two-stage approaches, including L1MB and SBN. Along with that, a heuristic approach was added that increased A* lasso's efficiency while maintaining the accuracy of the outcomes. In high-dimensional spaces, this is a promising approach for learning sparse Bayesian networks.

## 3.2.6 Triplet A*

Lu et al. (2021) use the *A* exhaustive search* (Yuan & Malone (2013)) combined with an optimal BIC score that requires milder assumptions on the data than conventional CD approaches to guarantee its asymptotic correctness. The optimal BIC score combined with the exhaustive search finds the MEC of the true DAG if and only if the true DAG satisfies the optimal BIC condition. To gain scalability, they also developed an approximation algorithm for complex large systems based on the A* method. This extended approach is named Triplet A*, which can scale up to more than 60 variables. The extended method is rather general and can be used to scale up other exhaustive search approaches as well. Triplet A* can particularly handle linear Gaussian and non-Gaussian networks. It works in the following way. Initially, it makes a guess about the parents and children of each variable. Then for each variable X and its neighbors (Y, Z), it forms a cluster consisting of X, Y, Z with their direct neighbors and runs an exhaustive search on each cluster. Lastly, it combines the results from all clusters. The study shows that empirically Triplet A* outperforms GES for large dense networks.

## 3.2.7 KCRL

Prior Knowledge-based Causal Discovery Framework with Reinforcement Learning, a.k.a. KCRL (Hasan & Gani (2022)), is a framework for causal discovery that utilizes prior knowledge as constraints and penalizes the search process for violation of these constraints. This utilization of background knowledge significantly improves performance by reducing the search space and enabling a faster convergence to the optimal causal structure. KCRL leverages reinforcement learning (RL) as the search strategy where the RL agent is penalized each time for the violation of any imposed knowledge constraints. In the KCRL framework (Figure 15), at first, the observational data is fed to an RL agent. Here, data-to-adjacency-matrix conversion is done using an encoder-decoder architecture which is a part of the RL agent. At every iteration, the agent produces an equivalent adjacency matrix of the causal graph. A comparator compares the generated adjacency matrix with the true causal edges in the prior knowledge matrix Pm, and thereby computes a penalty p for the violation of any ground-truth edges in the produced graph. Each generated graph is also scored using a standard scoring function such as BIC. A reward R is estimated as the sum of the BIC score SBIC, the penalty for acyclicity h(W), and the β-weighted prior knowledge penalty βp. Finally, the entire process halts when the stopping criterion Sc is reached, and the best-rewarded graph is the final output causal graph.
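The reward computation can be made concrete with a simplified, self-contained sketch. It is our illustration rather than the authors' implementation: the linear-Gaussian BIC, the encoding of the prior knowledge matrix (1 = required edge, 0 = forbidden edge, -1 = no information), and the penalty weight β = 2.0 are assumptions made for the example.

```python
# A simplified sketch of a KCRL-style reward for a candidate binary adjacency
# matrix W (our illustration, not the authors' implementation). It combines a
# linear-Gaussian BIC, the acyclicity term h(W) = tr(e^{W∘W}) - d of Equation 2,
# and a beta-weighted penalty for violating a prior knowledge matrix.
import numpy as np
from scipy.linalg import expm

def bic_linear_gaussian(W, data):
    """BIC (up to additive constants) of a linear-Gaussian model where
    W[i, j] = 1 marks variable i as a parent of variable j; assumes centered data."""
    n, d = data.shape
    log_lik, n_edges = 0.0, int(W.sum())
    for j in range(d):
        parents = np.flatnonzero(W[:, j])
        if len(parents):
            coef, *_ = np.linalg.lstsq(data[:, parents], data[:, j], rcond=None)
            resid = data[:, j] - data[:, parents] @ coef
        else:
            resid = data[:, j]
        log_lik += -0.5 * n * np.log(resid.var() + 1e-12)
    return -2 * log_lik + n_edges * np.log(n)

def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # W * W is the Hadamard product; zero iff W encodes a DAG

def knowledge_penalty(W, prior):
    required_missing = np.sum((prior == 1) & (W == 0))    # required edges that are absent
    forbidden_present = np.sum((prior == 0) & (W == 1))   # forbidden edges that are present
    return required_missing + forbidden_present

def reward(W, data, prior, beta=2.0):
    # R = S_BIC + beta * p + h(W) as described above; lower values indicate better
    # candidate graphs here, so an RL agent would minimize R (or maximize -R).
    return bic_linear_gaussian(W, data) + beta * knowledge_penalty(W, prior) + acyclicity(W)
```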
Although KCRL was originally designed for the healthcare domain, it can be used for causal discovery in any other domain where some prior knowledge is available. Code for KCRL is available at https://github.com/UzmaHasan/KCRL.

$$R=S_{BIC}+\beta p+h(W)\qquad(4)$$

![15_image_0.png](15_image_0.png)

Figure 15: The KCRL (Hasan & Gani (2022)) framework.

Another recent method called KGS (Hasan & Gani (2023)) leverages prior causal information such as the presence or absence of a causal edge to guide a greedy score-based causal discovery process towards a more restricted and accurate search space. It demonstrates how the search space, as well as the scoring of candidate graphs, can be reduced when different edge constraints are leveraged during a search over equivalence classes of causal networks. It concludes that any type of edge information is useful for improving the accuracy of graph discovery as well as the run time.

## 3.2.8 ILP-Based Structure Learning

Bartlett & Cussens (2017) looked into the application of integer linear programming (ILP) to the structure learning problem. To boost the effectiveness of ILP-based Bayesian network learning, they suggested adding auxiliary implied constraints. Experiments were conducted to determine the effect of each constraint on the optimization process. It was discovered that the most effective configuration of these constraints could significantly boost the effectiveness and speed of ILP-based Bayesian network learning. The study made a significant contribution to the field of structure learning and showed how well ILP can perform with such auxiliary, non-essential constraints.

## 3.3 Functional Causal Model-Based

Functional Causal Model (FCM)-based approaches describe the causal relationship between variables in a specific functional form. FCMs represent each variable as a function of its parents (direct causes) together with an independent noise term E (see Equation 5) (Zhang et al. (2015)). FCM-based methods can distinguish among different DAGs in the same equivalence class by imposing additional assumptions on the data distributions and/or function classes (Zhang et al. (2021b)).

$$X=f(PA_{X})+E\qquad(5)$$

![16_image_0.png](16_image_0.png)

Figure 16: A functional causal model (FCM) with four variables.

Some of the noteworthy FCM-based causal discovery approaches are discussed below.

## 3.3.1 LiNGAM

The Linear Non-Gaussian Acyclic Model (LiNGAM) aims to discover the causal structure from observational data under the assumptions that the data generating process is linear, there are no unobserved confounders, and the noises have non-Gaussian distributions with non-zero variances (Shimizu et al. (2006)). It uses the statistical method known as independent component analysis (ICA) (Comon (1994)), and states that when the assumption of **non-Gaussianity** is valid, the complete causal structure can be estimated. That is, the causal direction is identifiable if the variables have a linear relation and the noise (ε) distribution is non-Gaussian in nature. Figure 17 depicts three scenarios. When X and ε are Gaussian (case 1), the predictor and the regression residuals are independent of each other. For the other two cases, X and ε are non-Gaussian, and we see that for the regression in the anti-causal or backward direction (X given Y), the regression residual and the predictor are no longer independent.
That is, for the non-Gaussian cases, independence between the regression residual and the predictor occurs only for the correct causal direction. There are three properties of a LiNGAM. *First*, the variables xi = x1, x2, . . . , xn are arranged in a causal order k(i) such that the cause always precedes the effect. *Second*, each variable xi is assigned a value as per Equation 6, where ei is the noise/disturbance term and bij denotes the causal strength between xi and xj. *Third*, the exogenous noises ei follow non-Gaussian distributions with zero mean and non-zero variance, and are independent of each other, which implies that there is no hidden confounder. A Python implementation of the LiNGAM algorithm is available at https://github.com/cdt15/lingam as well as in the gCastle package (Zhang et al. (2021b)). Any standard ICA algorithm which can estimate independent components of many different distributions can be used in LiNGAM. However, the original implementation uses the FastICA (Hyvarinen (1999)) algorithm.

$$x_{i}=\sum_{k(j)