diff --git "a/deduped/dedup_0501.jsonl" "b/deduped/dedup_0501.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0501.jsonl"
@@ -0,0 +1,504 @@
+{"text": "Modern developmental biology relies heavily on the analysis of embryonic gene expression patterns. Investigators manually inspect hundreds or thousands of expression patterns to identify those that are spatially similar and to ultimately infer potential gene interactions. However, the rapid accumulation of gene expression pattern data over the last two decades, facilitated by high-throughput techniques, has produced a need for the development of efficient approaches for direct comparison of images, rather than their textual descriptions, to identify spatially similar expression patterns.The effectiveness of the Binary Feature Vector (BFV) and Invariant Moment Vector (IMV) based digital representations of the gene expression patterns in finding biologically meaningful patterns was compared for a small (226 images) and a large (1819 images) dataset. For each dataset, an ordered list of images, with respect to a query image, was generated to identify overlapping and similar gene expression patterns, in a manner comparable to what a developmental biologist might do. The results showed that the BFV representation consistently outperforms the IMV representation in finding biologically meaningful matches when spatial overlap of the gene expression pattern and the genes involved are considered. Furthermore, we explored the value of conducting image-content based searches in a dataset where individual expression components (or domains) of multi-domain expression patterns were also included separately. We found that this technique improves performance of both IMV and BFV based searches.We conclude that the BFV representation consistently produces a more extensive and better list of biologically useful patterns than the IMV representation. The high quality of results obtained scales well as the search database becomes larger, which encourages efforts to build automated image query and retrieval systems for spatial gene expression patterns. 
Drosophila melanogaster (the fruit fly) has been a canonical model animal for understanding this developmental process in the laboratory. The raw data from experiments consist of photographs of the Drosophila embryo showing a particular gene expression pattern revealed by a gene-specific probe in wildtype and mutant backgrounds. Manual, visual comparison of these spatial gene expression patterns is usually carried out to identify overlaps in gene expression and to infer interactions. With the rapid increase in the amount of available primary gene expression images, searchable textual descriptions of images have become available [10,11]. We previously proposed a binary coded bit stream pattern, the Binary Feature Vector (BFV), to represent gene expression pattern images. In this paper, we explore how a more sophisticated Invariant Moment Vector (IMV) based digital representation compares with it. Previously, we had examined the performance of the BFV representation for a limited dataset of early stage images; here we extended that analysis to a larger dataset. During these investigations, we also developed another measure of image-to-image similarity for the BFV representation. This measure is aimed at finding images that contain as much of the query image expression pattern as possible, but without penalizing for the presence of any expression outside the overlap region in the target image. In addition, we examined whether partitioning a multi-domain expression pattern into multiple BFV representations, each containing only one domain, yields a better result set. Recently, Peng and Myers have proposed related methods for comparing embryonic gene expression patterns directly from images. An image database of 226 gene expression pattern images was initially generated using data from the literature. In order to present comprehensible result sets in this paper, we have primarily discussed the findings from the dataset of 226 images and provided information on how those queries scaled when they were conducted for the larger dataset.
In general, our focus was to show the retrieval of biologically significant matches based on both the visual overlap of the spatial gene expression pattern and the genes associated with the pattern retrieved. Each image was standardized and the binary expression pattern extracted following the procedures described previously. The invariant moments (\u03c61 through \u03c67) and binary feature representations were stored in a database. We also calculated and stored the expression area (the count of the number of 1's in the binary feature represented image), the X and Y coordinates of the centroid, and the principal angle (\u03b8) for each extracted pattern. To quantify the similarity of gene expressions in two images, we computed two measures (SS, SC) based on the BFV representation (see equations 2 and 3 in Methods). SS is designed to find gene expression patterns with overall similarity to the query image, whereas SC is for finding images that contain as much of the query image expression pattern as possible without penalizing for the presence of any expression outside the overlap region in the target image. For a given pair of gene expression patterns (A and B), SS is the same irrespective of which image in the pair is the query image. That is, SS(A, B) = SS(B, A). This is not so for SC, because SC measures how much of the query gene expression pattern is contained in the target image. Therefore, SC(A, B) \u2260 SC(B, A). For the IMV representation, we computed one dissimilarity measure, D\u03c6. Results from D\u03c6 should be compared to those from SS, as both of these measurements do not depend on the reference image, i.e., D\u03c6(A, B) = D\u03c6(B, A), and both capture overall similarity or dissimilarity.
The BESTi matches based on the SS measure for the two representations are given in the figures. The top matches include the sloppy paired genes (slp1 and slp2) in a variety of genetic backgrounds or in combination with the head gap gene orthodentical (otd); all of these genes are essential for pattern formation in Drosophila head development. Separating multi-domain gene expression patterns into individual components was straightforward; we simply generated multiple images from the same initial image and included them in the target dataset. This resulted in 192 additional images in the database, all of which were components of the initial gene expression patterns. The images were separated into expression regions horizontally and/or vertically depending on the gene expression. For this new set of images, the IMV as well as BFV representations were re-calculated and the BESTi query constructed as above. Results of SS and IMV queries for this data set include tailless (tll), which is known to interact with slp1 in defining the embryonic head, as well as race (related to angiotensin converting enzyme), sog (short gastrulation) and eve (even-skipped), due to enhanced race expression in the anterior domain caused by a transgenic construct causing ectopic expression of sog. We also compared SS, SC and D\u03c6 in finding BESTi matches for a query pattern with multiple regions of expression: an eve pattern in a mutant background, with a middle stripe due to misexpressed sog using an eve stripe-2 enhancer. SS finds many images from the same paper as well as some images from other research articles with similar expression patterns. The results correctly include the expression pattern of eve. The D\u03c6 value calculated does not show a significant difference for the early stage embryos used in this study. The results using the SC based search are broadly similar to the SS results; however, as expected, there are significant differences between the two searches.
These searches are based on the complete expression pattern; this is referred to as the Binary Sequence Vector (BSV) in earlier work. The expression patterns are ordered by evaluating a set of difference values, DE, between the binary feature vectors of every possible pair of images in the dataset. DE was introduced previously and is defined as DE = Count(A XOR B)/Count(A OR B) \u00a0\u00a0\u00a0 (1) The term Count(A XOR B) corresponds to the number of pixels not spatially common to the two images and the term Count(A OR B) provides the normalizing factor, as it refers to the total number of stained pixels (expression area) depicted in either of the two images being compared. For simplicity, we use the one's complement of DE as a measure of similarity of gene expression patterns between two images; SS is given by the equation SS = (1 - DE). \u00a0\u00a0\u00a0 (2) SS quantifies the amount of similarity based on the overlap between two expression patterns. SS is equal to 1 when the two expression patterns are identical (DE = 0). We introduce a new similarity measure in this paper that does not penalize for any non-overlapping region. The measure SC quantifies the amount of similarity based on the containment of one expression pattern in the other, given by SC = Count(A AND B)/Count(A) \u00a0\u00a0\u00a0 (3) If the entire query image is contained within a result image found in the database, i.e., there is complete overlap with respect to the query image, SC is equal to 1. Note that SC(A, B) \u2260 SC(B, A), because the denominator corresponds to the gene expression area of the query image. Some methodologies of image analysis produce numeric descriptors that compensate for variations of scale, translation and rotation. In the following section, we describe the invariant moment analysis of gene expression data.
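The three BFV measures in Eqs. (1)-(3) reduce to pixel counts on the binary vectors. The following is a minimal sketch assuming each pattern is a flat list of 0/1 pixels of equal length; the function names and toy patterns are ours, not from the original system.

```python
# Sketch of the BFV similarity measures in Eqs. (1)-(3); illustrative only.

def d_e(a, b):
    """Normalized difference D_E = Count(A XOR B) / Count(A OR B)."""
    xor = sum(x ^ y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return xor / union if union else 0.0

def s_s(a, b):
    """Overall similarity S_S = 1 - D_E; symmetric in its arguments."""
    return 1.0 - d_e(a, b)

def s_c(query, target):
    """Containment similarity S_C = Count(A AND B) / Count(A);
    asymmetric: normalized by the query's expression area."""
    overlap = sum(x & y for x, y in zip(query, target))
    area = sum(query)
    return overlap / area if area else 0.0

# Toy 1-D "images": the query pattern is fully contained in the target.
query  = [0, 1, 1, 0, 0]
target = [0, 1, 1, 1, 0]
print(s_s(query, target))   # 2 overlapping of 3 union pixels -> 1 - 1/3
print(s_c(query, target))   # full containment of the query -> 1.0
print(s_c(target, query))   # asymmetry: only 2 of 3 target pixels covered
```

Note how SC equals 1 in one direction but not the other, which is exactly why the query image must be fixed when ranking by containment.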
Invariant moment calculations have been used in optical character recognition and other applications for many years. To calculate these invariant moment descriptors, the standardized binary image is treated as a two-dimensional function of the gene expression pattern, f, and its moments are computed as Mpq = \u222c x^p y^q f dx dy, \u00a0\u00a0\u00a0 (4) where Mpq is the two-dimensional moment of the function f. The order of the moment is defined as (p + q), where both p and q are non-negative integers. When implemented in a digital or discrete form, this equation becomes a double summation over the pixels. We then normalize for image translation using the coordinates of the center of gravity (centroid) of the area showing expression, calculated as M10/M00 and M01/M00. Discrete representations of the central moments are then defined accordingly, and a further normalization for variations in scale is implemented using an area-based normalization factor. From the normalized central moments, the values \u03c61 through \u03c67 are calculated; \u03c67 is a skew invariant used to distinguish mirror images. In the above, \u03c61 and \u03c62 are second order moments and \u03c63 through \u03c67 are third order moments. \u03c61 (the sum of the second order moments) may be thought of as the \"spread\" of the gene expression pattern, whereas the square root of \u03c62 (the difference of the second order moments) may be interpreted as the \"slenderness\" of the pattern. Moments \u03c63 through \u03c67 do not have any direct physical meaning, but include the spatial frequencies and ranges of the image. In order to provide a discriminator for image inversion (and rotation), sometimes called the \"6\", \"9\" problem, it has been suggested [42] that the principal angle be used. This is the angular displacement \u03b8 of the minimum rotational inertia line that passes through the centroid; it is calculated knowing that the line of minimum moment of inertia of f passes through the centroid with slope determined by \u03b8.
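The moment pipeline above (raw moments, centroid, central moments, scale normalization, then \u03c61 and \u03c62) can be sketched as follows; this is a generic Hu-moment computation under our own naming, not the authors' implementation.

```python
# Illustrative computation of phi1 and phi2 for a small binary image.

def raw_moment(img, p, q):
    # img: 2-D list of 0/1 values; M_pq = sum over pixels of x^p y^q f(x, y)
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def invariant_moments(img):
    m00 = raw_moment(img, 0, 0)           # expression area
    xc = raw_moment(img, 1, 0) / m00      # centroid x
    yc = raw_moment(img, 0, 1) / m00      # centroid y

    def mu(p, q):                         # central moment mu_pq
        return sum(((x - xc) ** p) * ((y - yc) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))

    def eta(p, q):                        # scale-normalized central moment
        return mu(p, q) / (m00 ** (1 + (p + q) / 2))

    phi1 = eta(2, 0) + eta(0, 2)                           # "spread"
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0]]
phi1, phi2 = invariant_moments(img)
```

Because the central moments subtract the centroid and eta divides by a power of the area, phi1 and phi2 are unchanged when the same blob is translated or uniformly rescaled within the image.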
The slope of the principal axis is called the principal angle, \u03b8. We can find the \u03b8 value at which the moment of inertia is minimum by differentiating the inertia expression with respect to \u03b8 and setting the result equal to zero; this yields an equation for \u03b8 in terms of the second order central moments. Using the condition |\u03b8| < 45\u00b0, one can distinguish the \"6\" from the \"9\" and rotationally similar gene expression patterns. In invariant moment analysis, our initial method of image comparison calculates the Euclidean distance between the images using all moments (\u03c61 through \u03c67) and combinations of these moments. For example, if the first two invariant moments are used, then the distance Dij between a pair of images i and j (i, j = 1, 2, ..., n) is computed from \u03c61 and the square root of \u03c62. This can be expanded to use all of the moment variables. Here, the Euclidean distance, D\u03c6, between any two images is calculated by summing, over the parameters j = 1, 2, ..., 7 used in the distance calculation, the squared differences of the corresponding moments of the two images. This assumes that all moments have the same dimensions or that they are dimensionless. Note that \u03c61 has the dimension of distance squared, while \u03c62 has the dimension of the fourth power of distance, thus requiring the square root function to equalize dimensions for comparable distance calculation purposes. In general, the greater the number of invariant moments used in the distance calculation, the more selective the ranking. We have also allowed for the use of the centroids and principal angle as a means of list limiting. Using this method, it is possible to rank each of the images in order of their similarity based on, for example, the first two invariant moments that have clear-cut physical meanings. Expansion to include additional moments or parameters can be performed in a number of ways. It is possible to add additional parameters to the distance calculation, making sure that each of the parameters has the same dimension.
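Ranking by D\u03c6 then reduces to sorting by Euclidean distance in moment space. A small sketch with invented moment vectors (here just two dimensionally-equalized components per image):

```python
import math

# Hypothetical ranking of database images by distance to a query in
# invariant-moment space; all vectors and image names are made up.
def d_phi(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

database = {
    "img_a": (0.21, 0.02),
    "img_b": (0.40, 0.10),
    "img_c": (0.22, 0.03),
}
query = (0.20, 0.02)
ranked = sorted(database, key=lambda k: d_phi(database[k], query))
```

The nearest image in moment space heads the list, mirroring the ordered result sets described in the text.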
SK originally conceived the project, developed the image distance measures based on the BFV representation, wrote an early version of the manuscript, and edited it until the final version. RG was responsible for writing new and using pre-existing programs to perform the image distance and parameter calculations, helped prepare the figures, searched the literature for gene expression data, maintained the database of gene expression pattern images, and helped in writing the manuscript. BVE provided the IMV method description, managed the day-to-day activities in the project, and did significant editing to produce the manuscript in the desired format for the journal. SP originally proposed the use of invariant moment vectors for biological image analysis, contributed significantly to the image distance and parameter calculations and provided critical feedback during the later stages of revision."}
+{"text": "Dense time series of metabolite concentrations or of the expression patterns of proteins may be available in the near future as a result of the rapid development of novel, high-throughput experimental techniques. Such time series implicitly contain valuable information about the connectivity and regulatory structure of the underlying metabolic or proteomic networks. The extraction of this information is a challenging task because it usually requires nonlinear estimation methods that involve iterative search algorithms. Priming these algorithms with high-quality initial guesses can greatly accelerate the search process. In this article, we propose to obtain such guesses by preprocessing the temporal profile data and fitting them preliminarily by multivariate linear regression.The results of a small-scale analysis indicate that the regression coefficients reflect the connectivity of the network quite well. Using the mathematical modeling framework of Biochemical Systems Theory (BST), we also show that the regression coefficients may be translated into constraints on the parameter values of the nonlinear BST model, thereby reducing the parameter search space considerably.The proposed method provides a good approach for obtaining a preliminary network structure from dense time series. This will be more valuable as the systems become larger, because preprocessing and effective priming can significantly limit the search space of parameters defining the network connectivity, thereby facilitating the nonlinear estimation task. The intriguing aspect of profiles is that they implicitly contain information about the dynamics and regulation of the pathway or network from which the data were obtained. 
The challenge for the mathematical modeler is thus to develop methods that extract this information and lead to insights about the underlying pathway or network. The rapid development of experimental tools like nuclear magnetic resonance (NMR), mass spectrometry (MS), tissue array analysis, phosphorylation of protein kinases, and fluorescence labeling combined with autoradiography on two-dimensional gels promises unprecedented, powerful strategies for the identification of the structure of metabolic and proteomic networks. What is common to these techniques is that they allow simultaneous measurements of multiple metabolites or proteins. At present, these types of measurements are in their infancy and typically limited to snapshots of many metabolites at one time point (e.g., with NMR or two-dimensional gels) or to time series of only a few selected variables. In simple cases, the extraction of information can be accomplished to some degree by direct observation and interpretation of the shape of profiles. For instance, assuming a pulse perturbation from a stable steady state, Vance et al. present guidelines based on the peaks (i.e., the maximum deviation from steady state) as well as the slopes of the traces at the initial phase of the response. Torralba et al. recently applied such an approach to an in vitro glycolytic system. Similarly, by studying a large number of perturbations, Samoilov et al. showed that information about the network can be extracted from temporal correlations among species. For larger and more complex systems, simple inspection of peaks and initial slopes is not feasible. Instead, the extraction of information from profiles requires two components. One is of a mathematical nature and consists of the need for a model structure that is believed to have the capability of capturing the dynamics of the underlying network structure with sufficient accuracy. The second is computational and consists of fitting this model to the observed data.
Given these two components along with profile data, the inference of a network is in principle a regression problem, where the aim is minimization of the distance between the model and the data. If a linear model is deemed appropriate for the given data, this process is indeed trivial, because it simply requires multivariate linear regression, which is straightforward even in high-dimensional cases. However, linear models are seldom valid as representations of biological data, and the alternative of a nonlinear model poses several taxing challenges. First, in contrast to linear models, there are infinite possibilities for nonlinear model structures. In specific cases, the subject area from which the data were obtained may suggest particular models, such as a logistic function for bacterial growth, but in a generic sense there are hardly any guidelines that would help with model selection. One strategy for tackling this problem is the use of canonical forms, which are nonlinear structures that conceptually resemble the unalterable linear systems models. Canonical models have in common that they always have the same mathematical structure, no matter what the application area is. They also have a number of desirable features, which include the ability to capture a wide variety of behaviors, minimal requirements for a priori information, clearly defined relationships between network characteristics and parameters, and greatly enhanced facility for customized analysis. The best-known examples of nonlinear canonical forms are Lotka-Volterra (LV) models and the power-law forms of Biochemical Systems Theory, namely Generalized Mass Action (GMA) systems and S-systems. The strict focus on two-component interactions in LV models has substantial mathematical advantages, but it has proven less convenient for the representation of metabolic pathways, where individual reaction steps depend on the substrate, but not necessarily on the product of the reaction, or are affected by more than two variables.
A simple example of the latter is a bi-substrate reaction that also depends on enzyme activity, a co-factor and possibly on inhibition or modulation by some other metabolite in the system. These types of processes have been modeled very successfully with GMA and S-systems. Between these two forms, the S-system representation has unique advantages for system identification from profiles, as was shown elsewhere. The inference of a nonlinear model structure from experimental data is in principle a straightforward \"inverse problem\" that should be solvable with a regression method that minimizes the residual error between model and data. In practice, however, this process is everything but trivial, as it is complicated by issues such as local minima, similar residual errors produced by quite different parameter combinations, mutual compensation among parameter values, and generally slow convergence. Nonlinear estimation methods have been studied for a long time, and while computational and algorithmic efficiency will continue to increase, the combinatorial explosion of the number of parameters in systems with increasingly more variables mandates that identification tasks be made easier if larger systems are to be identified. One important possibility, which we pursue here, is to prime the iterative search with high-quality starting conditions that are better than na\u00efve defaults. Clearly, if it is possible to identify parameter guesses that are relatively close to the true, yet unknown solution, the algorithm is less likely to get trapped in suboptimal local minima. We are proposing here to obtain such initial guesses by preprocessing the temporal profile data and fitting them preliminarily by straightforward multivariate linear regression. The underlying assumption is that the structural and regulatory connectivity of the network will be reflected, at least qualitatively, in the regression coefficients. D'haeseleer et al. explored a comparable linear modeling strategy for gene expression data.
Several other groups have recently begun to target network identification tasks with rather diverse strategies; Chevalier et al. and Diaz et al. proposed related estimation approaches, and other groups have used neural networks. The behavior of a biochemical network with n species can often be represented by a system of nonlinear differential equations of the generic form dX/dt = f(X; \u03bc), \u00a0\u00a0\u00a0 (1) where X is a vector of variables Xi, i = 1, ..., n, f is a vector of nonlinear functions fi, and \u03bc is a set of parameters. If the mathematical structure of the functions fi is known, the identification of the network consists of the numerical estimation of \u03bc. In addition to the challenges associated with nonlinear searches mentioned above, this estimation requires numerical integration of the differential equations in (1) at every step of the search. This is a costly process, requiring in excess of 95% of the total search time; if the differential equations are stiff, this percentage approaches 100% [17]. A simple remedy is to decouple the differential equations by replacing each time derivative with a slope estimated from the data. If the system consists of n differential equations, and if measurements are available at N time points, the decoupling leads to n \u00d7 N algebraic equations of the form Si(tk) = fi(X1(tk), ..., Xn(tk); \u03bcij), where the slopes Si(tk) are estimated from the data and the \u03bcij are the \"unknowns\" that need to be identified. It may be surprising at first that it is valid to decouple the tightly coupled system of nonlinear differential equations. Indeed, this is only justified for the purpose of parameter estimation, where the decoupled algebraic equations simply provide numerical values of variables (metabolites or proteins) and slopes at a finite set of discrete time points. The experimental measurements thus serve as the \"data points,\" while the parameters remain the quantities to be estimated. The quality of this decoupling approach is largely dependent on an efficient and accurate estimation of slopes from the data. Since the data must be expected to contain noise, this estimation is a priori not trivial. However, we have recently shown [39] that smoothing the time series before slope estimation makes the decoupling approach feasible. To obtain good initial guesses, we linearize the model f in Eq. (1) about one or several reference states.
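The decoupling step hinges on replacing each derivative with a slope estimated from the time series. A minimal sketch using central differences (the smoothing the authors apply to noisy data is omitted here; names and the toy profile are illustrative):

```python
# Sketch of slope estimation for the decoupling step; illustrative only.

def estimate_slopes(t, x):
    """Central-difference slopes at the interior time points."""
    return [(x[k + 1] - x[k - 1]) / (t[k + 1] - t[k - 1])
            for k in range(1, len(t) - 1)]

# Toy profile x(t) = t**2, whose true slope is 2t; central differences
# are exact for quadratics, so the estimates match 2t at t = 0.5, 1.0, 1.5.
t = [0.0, 0.5, 1.0, 1.5, 2.0]
x = [ti ** 2 for ti in t]
slopes = estimate_slopes(t, x)
```

Each estimated slope then replaces the left-hand side of one decoupled algebraic equation, so no numerical integration is needed during the fit.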
The smoothing and decoupling approach reduces the cost of finding a numerical solution of the estimation task considerably. Nonetheless, algorithmic issues associated with local minima and the lack of convergence persist and can only be ameliorated with good initial guesses. To this end, we linearize the model about one or several reference states. As long as the system stays close to the given reference state(s), this linearization is a suitable and valid approximation. We consider four options: (I) linearization of absolute deviations from steady state; (II) linearization of relative deviations from steady state; (III) piecewise linearization; and (IV) Lotka-Volterra linearization. Option (I) is based on deviations of the type zi = Xi - Xir, where Xir denotes the value at a reference state of choice. If the reference state is chosen at a stable steady state, the first-order Taylor approximation is given by dz/dt = Az, where A is the n \u00d7 n Jacobian with elements aij = (dfi / dXj) calculated at Xr. For option II, we define a new variable ui = zi/Xir. At a steady state, this yields the linear system du/dt = A'u, where A' is an n \u00d7 n matrix in which a'ij = (Xjr / Xir)\u00b7aij. A general concern regarding linearization procedures is the range of validity of sufficiently accurate representation, which is impossible to define generically. From an experimental point of view, the perturbations from steady state must be large enough to yield measurable responses. This may require that they be at the order of 10% or more. Depending on the nonlinearities in f, a perturbation of this magnitude may already lead to appreciable approximation errors. While this is a valid argument, it must be kept in mind that the purpose of this priming step is simply to detect the topological structure of connectivity and not necessarily to estimate precise values of interaction parameters. Simulations (see below) seem to indicate that this detection is indeed feasible in many cases, even if the deviations are relatively large.
In order to overcome the limitation of small perturbations, a piecewise linear regression (option III) may be a superior alternative. In this case, we subdivide the dataset into appropriate time intervals and linearize the system around a chosen state within each subset. Most reference states are now different from the steady state, with the consequence that Eq. (3) has a constant term ai0, which is equal to fi(Xr). The choice of subsets and operating points offers further options. In the analysis below, we use the locations of extreme values (maximum deviation from steady state) of the variables as the breakpoints between different subsets. Thus, a variable with a maximum and a later minimum has its time course divided into three subsets. The fourth alternative (option IV) is a Lotka-Volterra linearization. In a Lotka-Volterra model, the interaction between two species Xi and Xj is assumed to be proportional to the product XiXj. Furthermore, each equation may be divided by Xi, which is usually valid in biochemical and proteomic systems, because all quantities of interest are non-zero. Thus, the differentials are again replaced by estimated slopes, the slopes are divided by the corresponding variable at each time point, and fitting the nonlinear LV model to the time profiles becomes a matter of linear regression that does not even require the choice of a reference state. The quality of this procedure is thus solely dependent on the quality of the data and the ability of the LV model to capture the dynamics of the observed network. In each option, the quantities to be estimated (the vij's and wi's, or the \u03b1ij's) enter the regression linearly. The response variable is the rate of change of a metabolite, while the predictors are the concentrations of each metabolite in the network.
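The LV trick of dividing slopes by the corresponding variable can be illustrated with a one-species toy model: once the response is slope/X, the parameters enter linearly and can be read off by ordinary regression (here, exactly, from noise-free points). All numbers are invented:

```python
# One-species LV sketch: dX/dt = X * (w + v * X). Dividing the slope by X
# makes the model linear in (w, v), as used in linearization option IV.

def lv_response(x, w, v):
    return x * (w + v * x)          # LV right-hand side

w_true, v_true = 1.0, -0.5
xs = [0.5, 1.0, 1.5]
ys = [lv_response(x, w_true, v_true) / x for x in xs]  # slope / X

# ys is now exactly linear in x: ys = w + v * x, so (w, v) can be
# recovered from two points (with noisy data, least squares would be used).
v_est = (ys[1] - ys[0]) / (xs[1] - xs[0])
w_est = ys[0] - v_est * xs[0]
```

With multiple species, the same division yields one multivariate linear regression per variable, with predictors (1, X1, ..., Xn).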
The different linearization models (I-IV) differ in the transformations of the original datasets, which are summarized in the Table; for example, in option II the response variables are transformed as yi = Si/Xir, where Si is the estimated slope of Xi, and the predictor variables are transformed as xi = (Xi - Xir)/Xir. No matter which option is chosen, the next step of the analysis consists of subjecting all measured time traces to multivariate linear regression and solving for the regression coefficients. The result of the regression is a matrix of coefficients that indicate to what degree a metabolite Xj affects the dynamics (slope) of another metabolite Xi. In particular, a coefficient that is zero or close to zero signals that there is no significant effect of Xj on the slope of Xi. By the same token, a coefficient that is significantly different from zero suggests the presence of an effect, and its value tends to reflect the strength and direction of the interaction. In either case, the coefficients computed from the linear regression provide valuable insight into the connectivity of the network. Furthermore, the estimated coefficients provide constraints on the parameter values of the desired nonlinear model f. Indeed, if f consists of an S-system model, the coefficients estimated from the regression can be converted into combinations of S-system parameters, as is demonstrated in the following theoretical section and illustrated later with a specific example. The regression analysis thus yields coefficients that offer information on the connectivity of the network of interest. To determine the relationships between the regression coefficients and the parameters of the S-system, it is convenient to work backwards by computing the different types of linearizations discussed before for the particular case of S-system models. This derivation is simply a matter of applying Taylor's theorem.
It also provides clues about the parameter values of the underlying nonlinear network model. In the S-system formalism, the rate of change in each pool (variable) is represented as the difference between influx into the pool and efflux out of the pool. Each term is approximated by a product of power-law functions, so that the generic form of any S-system model is dXi/dt = \u03b1i \u220fj Xj^gij - \u03b2i \u220fj Xj^hij, where n is the number of state variables [14]. The exponents gij and hij are called kinetic orders and describe the quantitative effect of Xj on the production or degradation of Xi, respectively. A kinetic order of zero implies that the corresponding variable Xj does not have an effect on Xi. If the kinetic order is positive, the effect is activating or augmenting, and if it is negative, the effect is inhibiting. The multipliers \u03b1i and \u03b2i are rate constants that quantify the turnover rate of the production or degradation, respectively. If the Taylor linearization is performed at a steady state, the production term of the S-system model equals the degradation term. The absolute deviation of the first option, zi = Xi - Xis, where the subscript s denotes the value of the variable at steady state, then leads directly to a linear system whose coefficients are products of positive factors Fij and the kinetic-order differences cij = gij - hij. The Fij are always non-negative, while cij may be either positive or negative depending on the relationship between Xi and Xj. A common scenario is that a variable Xj influences either the production or degradation of variable Xi, but not both. In this case, a positive (negative) cij implies activation (inhibition) of production or inhibition (activation) of degradation. The special case of cij = 0 permits two possible interpretations: 1) gij = hij = 0, which implies that Xj has no effect on either production or degradation of Xi; or 2) gij = hij \u2260 0, which means that Xj has the same effect on both production and degradation of Xi.
The former case is the more likely, but there are examples where the latter may be true as well, and this is indeed the case in the small gene network shown in the Figure. Comparing the expression in Eq. (6) with the linear regression results, one sees immediately that each coefficient aij in Eq. (3) corresponds to the product of Fij and cij: aij = Fij cij. \u00a0\u00a0\u00a0 (7) Thus, once the regression has been performed and the coefficients aij have been estimated, the parameters of the corresponding S-system are constrained \u2013 though not fully determined \u2013 by Eq. (7). In particular, Eq. (7) does not allow a distinction between various combinations of gij and hij, as long as the two have the same difference. For instance, re-interpreting the regression coefficients as S-system parameters does not differentiate between the overall absence of effect of Xj on Xi (gij = hij = 0) and the same effect of Xj on both the production and degradation of Xi (gij = hij \u2260 0). This observation is related to the finding of Sorribas and Cascante that steady-state information alone does not suffice to identify all such parameter combinations. Relative deviations from steady state, ui = (Xi - Xis) / Xis, as in option II, are assessed in an analogous fashion. In this case one again obtains coefficients that are products of a positive factor Fi and the difference cij = gij - hij; the Fi are positive, while cij may be either positive or negative. The piecewise linear model for an S-system is easily derived as well; in this case Xjr denotes the value of the variable at the reference state. This case also includes the situation of a single approximation, which however is not necessarily based on a steady-state operating point. In the case of the Lotka-Volterra linearization, the correspondence between computed regression coefficients and S-system parameters is determined most easily by dividing the S-system equations by the corresponding Xi and then linearizing around an operating point. The resulting expressions become especially simple if this point is chosen as the steady state.
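Assuming the standard steady-state factorization Fij = Vi/Xjs (with Vi the steady-state flux through pool i, equal for production and degradation at steady state), Eq. (7) lets one turn an estimated regression coefficient into a constraint on the kinetic-order difference. A hypothetical sketch with invented numbers:

```python
# Constraint from Eq. (7): a_ij = F_ij * c_ij with c_ij = g_ij - h_ij.
# Here F_ij = V_i / X_js is one standard steady-state factorization;
# all numerical values below are made up for illustration.

def c_from_a(a_ij, flux_i, x_js):
    """Recover the kinetic-order difference c_ij = g_ij - h_ij."""
    return a_ij * x_js / flux_i

# Suppose regression estimated a_ij = -0.8, with V_i = 2.0 and X_js = 1.5:
c_ij = c_from_a(-0.8, 2.0, 1.5)   # constrains g_ij - h_ij to about -0.6
```

Note the constraint fixes only the difference g_ij - h_ij; any pair of kinetic orders with that difference satisfies Eq. (7), which is exactly the ambiguity discussed above.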
In the case of the Lotka-Volterra (LV) linearization, the correspondence between computed regression coefficients and S-system parameters is determined most easily by dividing the S-system equations by the corresponding Xi and then linearizing around an operating point. The resulting expressions become especially simple if this point is chosen as the steady state; in this case, the relationship between the parameters of the LV system and the S-system again involves only the differences cij = gij - hij.

We applied the methods described in the previous sections to simulated time profiles obtained from the small gene network in Figure . To generate time profiles, the system was implemented with the parameter values published by Hlavacek and Savageau.

Quasi as a pre-analysis, we examined the guidelines proposed by Vance et al. Indeed, the ordering of the extrema (i.e., the maximum deviations from steady state) of the various variables, both in time and size, is in accordance with their \"topological distance\" from the perturbed variable, and variables not directly affected by the perturbed variable have zero initial slopes. As an example, the effect of a perturbation in X3 is shown in Figure , with X1 and X4 reaching their maximal deviation from steady state before X2 and X5, suggesting that X1 and X4 precede X2 and X5 in the pathway. The value of the initial slope is different from zero for X1 and X4, implying that these variables are directly affected by X3, whereas X2 and X5 have zero initial slopes, suggesting that their responses are mediated through other variables. An exception is the effect of X2 on X3, which this pre-analysis does not reveal. This result is not surprising, because the effect of X2 is the same on both the production and degradation of X3, which leads to cancellation. It is noted that this analysis does not necessarily distinguish between transfer of mass and a positive modulation, because both result in a positive effect on a variable.
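Vance et al.'s initial-slope guideline can be checked mechanically: immediately after perturbing a single variable, dz/dt(0) = A z(0), so only variables directly affected by the perturbed one have non-zero initial slopes. A sketch with a hypothetical 5-variable Jacobian (the matrix below is illustrative, not the paper's network):

```python
import numpy as np

# Hypothetical Jacobian of a 5-variable network linearized at steady state;
# A[i, j] != 0 means Xj directly affects Xi.
A = np.array([
    [-1.0,  0.0,  0.8,  0.0,  0.0],   # X1 directly affected by X3
    [ 0.7, -1.0,  0.0,  0.0,  0.0],   # X2 affected only via X1
    [ 0.0,  0.0, -1.0,  0.0,  0.0],
    [ 0.0,  0.0,  0.6, -1.0,  0.0],   # X4 directly affected by X3
    [ 0.0,  0.0,  0.0,  0.9, -1.0],   # X5 affected only via X4
])

def direct_targets(A, perturbed, tol=1e-12):
    """Initial slopes after perturbing a single variable: dz/dt(0) = A @ z0.
    A non-zero initial slope indicates a direct effect."""
    z0 = np.zeros(A.shape[0])
    z0[perturbed] = 0.05          # 5% perturbation of the chosen variable
    slopes = A @ z0
    return {i for i, s in enumerate(slopes) if abs(s) > tol and i != perturbed}
```

With this Jacobian, perturbing X3 (index 2) yields non-zero initial slopes only for X1 and X4, matching the reasoning in the text.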
In a realistic situation, biological knowledge may exclude one of the two options, as in this case, where modulation is the only possibility for the effect of X3 on X1 and X4, because the former is a protein and the latter are RNA transcripts. For the mathematical model in the S-system form, this is not an issue, as both types of influence are included in the equations in the same way (as a positive kinetic order).

Maximal information about the network is obtained when every variable is perturbed sequentially. Experimentally, such perturbations could be implemented with modern methods such as RNA interference. While Vance's method works well in this simple noise-free system, it is not scalable to larger and more complex systems. The next step of our analysis is therefore regression according to the four options presented above, with a number of simulated datasets of the gene network that differ in the variable to be perturbed and the size of the perturbation. Because the illustration here uses a known model and artificial data, it is easy to compute the true regression coefficients through differentiation of the S-system model. These coefficients can be used as a reference for comparisons with coefficients computed from the entire time traces, which mimics the estimation process for (smoothed) actual data.

The results for three of the options can be summarized in the following three points, while the piecewise linear model will be discussed afterwards.

(1) The network connectivity is reflected in the values of the regression coefficients. The values of the estimated coefficients provide strong indication as to which variables have a significant influence on the dynamics of other variables. A comparison between computed and estimated coefficients is shown in Table . The coefficients that are in actuality zero (e.g., a12 and a24) are not estimated as exactly zero, but their values are at least one order of magnitude smaller than the coefficients that are in actuality not zero. The main exceptions are spurious coefficients in the equations for X3 and X4.
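The regression step itself reduces to multilinear least squares: slopes are regressed on deviations, so that row i of the estimated coefficient matrix approximates the aij for variable Xi. A minimal sketch with a hypothetical 3-variable Jacobian and exact, noise-free slopes (real data would first be smoothed):

```python
import numpy as np

# "True" Jacobian of a hypothetical 3-variable network.
A_true = np.array([[-1.0,  0.5,  0.0],
                   [ 0.0, -1.0,  0.4],
                   [ 0.3,  0.0, -1.0]])

# Simulate one perturbation experiment per variable (forward Euler).
h, n_steps = 0.01, 200
Z, S = [], []                      # deviations and slopes, one row per time point
for k in range(3):
    z = np.zeros(3)
    z[k] = 0.05                    # perturb one variable at a time
    for _ in range(n_steps):
        slope = A_true @ z
        Z.append(z.copy())
        S.append(slope)            # in practice slopes come from smoothed data
        z = z + h * slope
Z, S = np.array(Z), np.array(S)

# Multilinear regression: each column i of the solution solves S[:, i] ~ Z @ a_i,
# so the transpose recovers the coefficient matrix (aij).
A_est = np.linalg.lstsq(Z, S, rcond=None)[0].T
```

Because the simulated slopes here are exact, the regression recovers the Jacobian essentially to machine precision; with noisy, smoothed profiles the estimates degrade but, as discussed in the text, the zero and non-zero coefficients typically remain separated by about an order of magnitude.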
A possible explanation for X3 is that the effect of X2 is present in the nonlinear system, but not in the linear system, and thus the behavior of X3 must be explained by the other variables. Overall, of the 25 theoretically possible connections, 76% are correctly identified, while 24% are false positives.

(2) The different linear models give the same results. A comparison of the results of the three models reveals that the values of the regression coefficients are very similar.

(3) The greater the perturbation, the less accurate is the estimation of the regression coefficients. The deviation between the estimated and computed coefficients increases as the size of the perturbation increases (see Table ).

The piecewise linear model was obtained by dividing the whole dataset into three smaller subsets for each variable. The first interval contained the data points from t = 0 to the time of the first extreme value for a given variable. For the perturbed variable, the first limit point was given by the smallest of the limit points of the other variables. The second interval contained the data points from the first to the second extreme value (a minimum), while the third interval included the remaining data points. The midpoint of each interval was taken to be the reference state. The result of the piecewise linear regression for a 5% deviation in X3 is given in Table . The estimated coefficients for X3 in the two last subsets reflect the variable's connectivity to a much greater extent than the other linearization approaches do. As the reference state is different from the steady state, the effect of X2 is present in the linear system as well, and thus there is no compensation through the other variables. Another benefit is that the piecewise model tolerates larger perturbations: even for a two-fold perturbation, the fraction of correctly identified coefficients in the last subset is 84%.
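The interval construction for the piecewise model (cut the time course at its first two extrema and take each segment's midpoint as the reference state) can be sketched as follows; the damped-oscillation trajectory is purely illustrative:

```python
import numpy as np

def split_at_extrema(y, n_segments=3):
    """Split a smoothed time course at its first (n_segments - 1) interior
    extrema: segment 1 runs up to the first extremum, segment 2 to the
    second, and the last segment holds the remaining points. The midpoint
    index of each segment serves as that segment's reference state."""
    dy = np.diff(y)
    extrema = np.nonzero(dy[:-1] * dy[1:] < 0)[0] + 1   # indices of interior extrema
    cuts = list(extrema[: n_segments - 1])
    bounds = [0] + cuts + [len(y)]
    segments = [np.arange(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]
    references = [seg[len(seg) // 2] for seg in segments]
    return segments, references

# Damped oscillation as a stand-in for a perturbed variable's trajectory.
t = np.linspace(0.0, 6.0, 121)
y = np.exp(-0.5 * t) * np.cos(2.0 * t)
segments, refs = split_at_extrema(y)
```

Since the reference states of the second and third segments lie away from the steady state, regressions on those segments retain interactions that cancel exactly at the steady state, which is the benefit noted in the text.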
If we compare the results of all four linearized models, the degree of similarity may provide a measure of how reliable the estimated coefficients are, assuming that an interaction identified in all models is more reliable than an interaction identified in only one or a few of the models. Considering the piecewise linear model as three models, yielding a total of six models from one dataset, one may thus determine the most likely connectivity for the small gene network. The result is presented in Table . Among other interactions, the effect on X3 is identified correctly from comparing the six models, whereas the classification of the remaining four connections varies greatly among the different models, and it is therefore impossible to deduce a type of interaction with sufficient reliability.

In addition to reflecting the connectivity, the coefficients provide likely parameter ranges or likely constraints on parameter values of the true model. As an example, consider variable X1 and suppose that its dynamics are governed by X1, X3 and X5. If so, the linear model in Eq. (8) suggests constraints of the following type: the values of the variables at steady state are known, and the regression coefficients (aij) are taken from the model in Eq. (4). Because the kinetic orders may be positive or negative, and the cij may result from different combinations of gij's and hij's, it is not possible to deduce directly which exponent is greater than the other. However, in many cases one may have additional information on the system, which further limits the degrees of freedom.

Identifying the structure of metabolic or proteomic networks from time series is a task that most likely will require large, parallelized computational effort. The search space for the algorithms is typically of high dimension and unknown structure and very often contains numerous local minima. This generic and frequent problem may be ameliorated if the search algorithm is provided with good initial guesses and/or constraints on admissible parameter values.
Here, we have shown that linear regression may provide such information directly from the types of data to be expected from future experiments. For illustrative purposes, we used artificial data from a known network, but all methods are directly applicable to actual profile data and scalable to large systems. The coefficients estimated from the different regressions reflect the effect of one variable on another surprisingly well and thus provide a simple means of prescreening the connectivity of the network. In addition, the estimated coefficients provide constraints on the parameter values, if the alleged nonlinear model has the form of an S-system. To explore the pre-assessment of data as fully as feasible, we studied four linearization strategies: using an absolute deviation from steady state; a relative deviation from steady state; piecewise linearization; and Lotka-Volterra linearization. Interestingly, all models gave qualitatively similar results for the analyzed example, and this degree of similarity may provide a measure of how reliable the identified connections are. Specifically, of the 25 possible connections in the small gene network studied, 19 were identified correctly in at least 83% of the regression analyses.

A concern of any linearization approach is the validity of the linear approximation. However, as long as the perturbation from steady state remains relatively small, the estimated linear model is likely to be a good fit of the actual nonlinear model, at least qualitatively. This limitation may furthermore be alleviated by fitting the profile data in a piecewise linear fashion. As most reference states in this case are different from the steady state, this strategy has the added benefit that more of the true relationships within the nonlinear model are likely to be preserved.
As an alternative, one could explore the performance of the so-called \"log-linear\" model, which is linear in log-transformed variables. The Lotka-Volterra linearization did not perform as well as expected with regard to large perturbations. This may be a consequence of the particular example, which was originally in S-system form rather than in a form more conducive to the LV structure, which emphasizes interactions between pairs of variables. Since it is easy to perform the LV analysis along with the other regressions discussed here, it may be advisable to execute all four analyses.

The illustrative model used for testing the procedure consisted of a relatively small system with only five variables and relatively few interactions. Nonetheless, one should recall that this very system required substantial identification time in a direct estimation approach.

Competing interests: None declared.

Authors' contributions: SRV performed the analysis and prepared the results. JS developed and implemented the neural network for computation of slopes. EOV developed the basic ideas and directed the project.

Appendix. It was recently shown that good parameter estimates of S-system models from metabolic profiles might be obtained by training an artificial neural network (ANN) directly with the experimental data. The result of this training is a so-called universal function, which smoothes the data with predetermined precision and also allows the straightforward computation of slopes that can be used for network identification purposes. This appendix briefly outlines the procedure; details can be found in Almeida and in Almeida and Voit. Noise and sample size do not have a devastating effect on the results of the ANN method, as long as the true trend is well represented.

The use of the entire time course is in stark contrast to earlier methods of parameter estimation and structure identification in metabolic networks. Mendes and Kell applied one such earlier approach, and Chevalier and co-workers first fitted the measured data before estimating the system at selected points. A related approach
suggested

z(tk + 1) = z(tk) exp(h·A),     (A1)

where h is the step size. The problem is thereby reduced to a multilinear regression in which the matrix Φ = exp(h·A) is the output. Instead of estimating the slopes, the authors obtain the Jacobian directly from Φ, expanded in its Taylor series. This approach yields a faster convergence to the elements of the Jacobian than the one suggested by Chevalier et al., but it shares the reliance on local information. Our approach takes advantage of the entire time course and is therefore less sensitive to the particularities of assessing a system at a single point. The ANN itself does not provide much insight, because it is strictly a black-box model, but it is a valuable tool for controlling problems that are germane to any data analysis, namely noise, measurement inaccuracies, and missing data."}
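The matrix-exponential idea in Eq. (A1) can be sketched directly: regress z(tk+1) on z(tk) to estimate Φ = exp(h·A), then recover the Jacobian via a matrix logarithm. The 2×2 system below is hypothetical, and the eigendecomposition-based logarithm assumes Φ is diagonalizable with eigenvalues off the negative real axis:

```python
import numpy as np

# Hypothetical stable 2x2 Jacobian and step size.
A_true = np.array([[-1.0,  0.5],
                   [ 0.2, -0.8]])
h = 0.1

# Exact discrete propagator Phi = exp(h*A) via eigendecomposition.
w, V = np.linalg.eig(h * A_true)
Phi_true = (V * np.exp(w)) @ np.linalg.inv(V)

# Generate trajectories z_{k+1} = Phi z_k from two initial perturbations.
Z0, Z1 = [], []
for z in (np.array([0.05, 0.0]), np.array([0.0, 0.05])):
    for _ in range(50):
        Z0.append(z.copy())
        z = Phi_true @ z
        Z1.append(z.copy())
Z0, Z1 = np.array(Z0), np.array(Z1)

# Multilinear regression for Phi, then A = log(Phi)/h; the matrix logarithm
# is taken here by eigendecomposition (fine for diagonalizable Phi).
Phi_est = np.linalg.lstsq(Z0, Z1, rcond=None)[0].T
w, V = np.linalg.eig(Phi_est)
A_est = np.real((V * np.log(w)) @ np.linalg.inv(V)) / h
```

No slope estimation is needed: the regression targets are simply the next sampled states, which is the point of the formulation in Eq. (A1).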
+{"text": "Cluster analyses are used to analyze microarray time-course data for gene discovery and pattern recognition. However, in general, these methods do not take advantage of the fact that time is a continuous variable, and existing clustering methods often group biologically unrelated genes together. We propose a quadratic regression method for identification of differentially expressed genes and classification of genes based on their temporal expression profiles for non-cyclic short time-course microarray data. This method treats time as a continuous variable and therefore preserves actual time information. We applied this method to a microarray time-course study of gene expression at short time intervals following deafferentation of olfactory receptor neurons. Nine regression patterns were identified and shown to fit gene expression profiles better than k-means clusters. EASE analysis identified over-represented functional groups in each regression pattern and each k-means cluster, which further demonstrated that the regression method provided more biologically meaningful classifications of gene expression profiles than the k-means clustering method. Comparison with Peddada et al.'s order-restricted inference method showed that our method provides a different perspective on the temporal gene profiles. A reliability study indicates that the regression patterns have the highest reliabilities. Our results demonstrate that the proposed quadratic regression method improves gene discovery and pattern recognition for non-cyclic short time-course microarray data. With a freely accessible Excel macro, investigators can readily apply this method to their microarray data. Microarray time-course experiments allow researchers to explore the temporal expression profiles of thousands of genes simultaneously. The premise of pattern analysis is that genes sharing similar expression profiles might be functionally related or co-regulated.
In microarray time-course studies, the time dependency of gene expression levels is usually of primary interest. Since time can affect the gene expression levels, it is important to preserve time information in time-course data analysis. However, most methods for analyzing microarray time-course data treat time as a nominal variable rather than a continuous variable, and thus ignore the actual times at which the points were sampled. Peddada et al. (2003) proposed a method for gene selection and clustering using order-restricted inference, which preserves the ordering of time but still treats time as nominal.

In this paper, we propose a model-based approach, step-down quadratic regression, for gene identification and pattern recognition in non-cyclic short time-course microarray data. This approach takes time information into account because time is treated as a continuous variable. It is performed by initially fitting a quadratic regression model to each gene; a linear regression model is fitted to the gene if the quadratic term is determined to have no statistically significant relationship with time. Significance of gene differential expression and classification of gene expression patterns can be determined based on the relevant F-statistics and least-squares estimates. Major advantages of our approach are that it not only preserves the ordering of time but also utilizes the actual times at which the points were sampled; it identifies differentially expressed genes and classifies these genes based on their temporal expression profiles; and the temporal expression patterns discovered are readily understandable and biologically meaningful. A free Excel macro for applying this method is available.

We propose a step-down quadratic regression method for gene discovery and pattern recognition for non-cyclic short time-course microarray experiments.
The first step is to fit the following quadratic regression model to the jth gene:

yij = βj0 + βj1x + βj2x² + εij,     (1)

where yij denotes the expression of the jth gene at the ith replication, x denotes time, βj0 is the mean expression of the jth gene at x = 0, βj1 is the linear effect parameter of the jth gene, βj2 is the quadratic effect parameter of the jth gene, and εij is the random error associated with the expression of the jth gene at the ith replication, assumed to be independently normally distributed with mean 0 and constant variance. Two levels of significance, α0 and α1, need to be pre-specified, where α0 is recommended to be small to reduce the false positive rate in the gene discovery, and α1 less stringent to control pattern classification. α0 could be chosen using various multiple-testing p-value adjustment procedures, for example, the False Discovery Rate (FDR). The temporal patterns are then determined as follows.

1. If the overall model (1) p-value > α0, the jth gene is considered to have no significant differential expression over time. The expression pattern of the gene is \"flat\".

2. If the overall model (1) p-value ≤ α0, the jth gene is considered to have significant differential expression over time. The patterns are then determined based on the p-values obtained from F tests:

a. If both the p-value of the quadratic effect ≤ α1 and the p-value of the linear effect ≤ α1, the jth gene is considered to be significant in both the quadratic and linear terms. The expression pattern of the gene is \"quadratic-linear\".

b. If the p-value of the quadratic effect ≤ α1 and the p-value of the linear effect > α1, the jth gene is considered to be significant only in the quadratic term. The expression pattern of the gene is \"quadratic\".

c. If the p-value of the quadratic effect > α1, the jth gene is considered to be non-significant in the quadratic term. The quadratic term will be dropped and a linear regression model will be fitted to the gene:
yij = βj0 + βj1x + εij.     (2)

From fitting model (2):

• If the p-value of the linear effect ≤ α1, the jth gene is considered to be significant in the linear term. The expression pattern of the gene is \"linear\".

• If the p-value of the linear effect > α1, the jth gene is considered to be non-significant in the linear term. The expression pattern of the gene is \"flat\".

Each non-flat pattern describes the shape of the predicted signal over time, e.g., whether it increases faster then slower, or increases slower then faster. Peddada et al.'s UD2 profile contains genes that are first up-regulated then down-regulated with a maximum at the second time point, which would in general be classified as regression pattern QLCD; but it could also be classified as LD if the expression levels of all time points are close to a line, as QC if the expression profile is close to quadratic, or as QLCU if the expression levels of the last 4 time points are much closer together than those of the first time point. Similarly, Peddada et al.'s UD3 profile could be classified as regression patterns QC, QLCU, or QLCD. Peddada et al.'s method was applied to the same data for comparison (examples include Oazin, Grik5, and Ubl1).

ANOVA-protected k-means clustering was applied to the expression signals of 3834 present genes. Out of these 3834 genes, 770 were identified as differentially expressed over time by one-way ANOVA. These 770 genes were used for classification by k-means clustering with k = 9 and the Pearson correlation coefficient as the distance measure. The similarity of the temporal expression profiles of, e.g., Clu and D17H6S56E-5, and of Sfpi1 and Anxa2, is evident in the figures; once again, the regression method provides better classification. For example, although Adora2b clearly starts differential expression later than Psmb6, these two genes show similar upward regulation; they were classified into the same regression group, but into different k-means clusters.
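The step-down decision rules can be written out compactly. The sketch below uses drop-one F statistics and takes critical values as inputs in place of the α0/α1 p-value cut-offs (in a real analysis the p-values would come from the F distribution, e.g. via scipy.stats.f.sf); the layout of 5 time points with 3 replicates is only illustrative:

```python
import numpy as np

def _fit(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, np.sum((y - X @ b) ** 2)

def classify_gene(x, y, f_crit0, f_crit1):
    """Step-down quadratic regression pattern call for one gene.
    f_crit0/f_crit1 play the role of the alpha0/alpha1 cut-offs."""
    n = len(y)
    X2 = np.column_stack([np.ones_like(x), x, x * x])
    _, rss2 = _fit(X2, y)
    rss_null = np.sum((y - y.mean()) ** 2)
    # overall F test of the quadratic model (2 and n-3 df)
    F_model = ((rss_null - rss2) / 2.0) / (rss2 / (n - 3))
    if F_model <= f_crit0:
        return "flat"
    _, rss_no_quad = _fit(X2[:, :2], y)       # model without the x^2 term
    _, rss_no_lin = _fit(X2[:, [0, 2]], y)    # model without the x term
    F_quad = (rss_no_quad - rss2) / (rss2 / (n - 3))
    F_lin = (rss_no_lin - rss2) / (rss2 / (n - 3))
    if F_quad > f_crit1:
        return "quadratic-linear" if F_lin > f_crit1 else "quadratic"
    # step down: drop the quadratic term and refit the linear model (2)
    _, rss1 = _fit(X2[:, :2], y)
    F_lin1 = (rss_null - rss1) / (rss1 / (n - 2))
    return "linear" if F_lin1 > f_crit1 else "flat"

def reps(means, spread=0.1):
    # three replicates per time point, offset by -spread, 0, +spread
    return np.concatenate([[m - spread, m, m + spread] for m in means])
```

Because time enters as a numeric covariate, unevenly spaced sampling points are handled with no change to the code, which is precisely the advantage over treating time as a nominal factor.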
Based on the above analysis, our regression method is demonstrated to be more appropriate for the classification of temporal gene expression profiles than the k-means method. In order to make the regression patterns comparable with the k-means clusters, the quadratic regression method was applied to the 770 ANOVA-significant genes (Table ).

To further explore the effectiveness of the regression method on gene classification, EASE software was used to examine the potential relationship between the biological functions of the genes and their expression patterns. Kerr and Churchill (2001) introduced a bootstrap technique to assess the stability of clustering results, which we applied to assess the reliabilities of the regression patterns and the k-means clusters.

We investigated the false positive rate (gene-specific) of our method via a simulation study. The data were generated randomly from a standard normal distribution, containing expression signals of 10000 \"null\" genes, with 5 time points and 3 replications per time point per gene. 50 such datasets were generated. The regression approach was applied to each gene in each simulated dataset at α0 = 0.01, and the numbers of significant genes in each of the 50 datasets were obtained. The average proportion of significance is 1.01% with standard deviation 0.01%. This demonstrates that the false positive rate of the regression method is accurate, because 1% of 10000 genes would be expected to be significant at the 0.01 level by chance. The false positive rates of the regression patterns LU, LD, QC, and QV are all approximately equal to 1/6 of the average false positives, and those of QLCU, QLCD, QLVU, and QLVD are all approximately equal to 1/12 of the average false positives.

One could instead fit a 4th-order polynomial regression model to data with 5 time points. Such a model works similarly to connecting the mean at each time point and therefore provides a good fit to the data, with the largest R² and minimum Mean Squared Error compared with lower-order polynomials.
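The null-gene simulation can be reproduced in miniature. The sketch below generates standard-normal \"null\" genes (2000 rather than 10000, for speed), applies the overall F test of the quadratic model, and checks that roughly 1% exceed the tabulated F(2,12) upper 1% point (6.93, a standard-table value, assumed here rather than computed):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.repeat(np.arange(5.0), 3)            # 5 time points, 3 replicates
X = np.column_stack([np.ones_like(t), t, t * t])
H = X @ np.linalg.pinv(X)                   # hat matrix of the quadratic fit
n_genes, n, p = 2000, len(t), 3

Y = rng.standard_normal((n_genes, n))       # "null" genes: no time effect
fitted = Y @ H.T
rss_full = np.sum((Y - fitted) ** 2, axis=1)
rss_null = np.sum((Y - Y.mean(axis=1, keepdims=True)) ** 2, axis=1)
F = ((rss_null - rss_full) / 2) / (rss_full / (n - p))

F_CRIT_01 = 6.93                            # F(2, 12) upper 1% point (tables)
fp_rate = np.mean(F > F_CRIT_01)
```

Under the null, the overall F statistic follows F(2, 12), so the observed exceedance rate should fluctuate around the nominal 0.01, mirroring the 1.01% reported in the text.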
However, the purpose of pattern analysis is to cluster the data rather than to fit models, so the quadratic fit is useful even though the goodness of fit may not be great. Also, the use of high-order polynomials (higher than second order) should be avoided if possible.

The proposed step-down quadratic regression method is an effective statistical approach for gene discovery and pattern recognition. It utilizes the actual time information and provides biologically meaningful classification of temporal gene expression profiles. Furthermore, it does not require replication at each time point, which ANOVA-type methods do require. Also, this method can identify genes with subtle changes over time and therefore discover genes that might be undetectable by other methods, e.g., ANOVA-type methods. However, there are several limitations to this method. Firstly, it is designed to fit time-course data with a small number of time points; we recommend this method when there are 4 to 10 time points in the data. For an experiment with more time points, spline-type methods could be used. Secondly, one may need to reduce α0 to increase the stability of the regression patterns; α0 could be reduced using various multiple-testing p-value adjustment procedures, for example, Westfall and Young's step-down method.
The data were filtered at 0.99 confidence and normalized per chip using non-linear Normalize.quantiles or VSN normalization to remove array-to-array variability (Bioconductor Project). Genes with a p-value ≤ 0.05 and with 1.2-fold changes in both biological replicates of each duplex at a given time point were maintained for expression analysis. Genes identified as significantly modified by each of the 4 FAS siRNA duplexes were then overlapped using Venn diagrams to identify a set of genes modified by 3 or more of the siRNA treatments. Expression profiles of the core gene list were averaged, and genes modified by 1.5-fold were visually examined and ordered by hierarchical clustering using the Pearson correlation similarity measurement (GeneSpring).

Differentially expressed genes that changed over time in response to each of the four FAS siRNA duplexes were identified using two independent methods. In the first method, per-chip normalized Normalize.quantiles data were imported into the GeneSpring GX 7.3.1 software package and per-gene normalized to the appropriate biological control for each time point; genes that significantly changed over time in response to each FAS siRNA duplex were compared against control using a two-way ANOVA with a Benjamini & Hochberg FDR of 0.05. In the second method, VSN-normalized data were imported into the Bioconductor Timecourse package.

For pathway analysis, data from the 16,585 genes expressed in MDA-MB-435 cells were exported from GeneSpring, filtered for duplicate symbols and analyzed using GSEA software (Broad Institute) according to published methods. Gene sets were considered significant with a p-value < 0.05 and a multiple-hypothesis-testing FDR < 0.25; NES represents the enrichment of genes in the designated GSEA gene set, ranked according to the overrepresentation of genes at the top or bottom of the list, normalized to gene set size.
Abbreviations: FAS, fatty acid synthase; GSEA, Gene Set Enrichment Analysis; HLH, helix-loop-helix; siRNA, small interfering RNA; SREBP, sterol regulatory element-binding protein; TRAIL, TNF-related apoptosis-inducing ligand.

JS and LK conceived and designed the study. LK carried out the siRNA transfections, validated target knockdown, prepared RNA for BeadArray expression profiling, performed data analysis and drafted the manuscript. JS supervised the coordination of the study and participated in manuscript preparation. All authors read and approved the final manuscript.

Genes significantly modified by FAS siRNA duplex #1 (≥1.2 or ≤0.83 fold). The fold change values represent the average from 2 replicates of FAS siRNA duplex #1 compared to non-silencing control siRNA. Significance was determined using a p-value < 0.05, with a 1.2-fold change cutoff for both biological replicates at a given time point. Click here for file

Genes significantly modified by FAS siRNA duplex #2 (≥1.2 or ≤0.83 fold). The fold change values represent the average from 2 replicates of FAS siRNA duplex #2 compared to non-silencing control siRNA. Significance was determined using a p-value < 0.05, with a 1.2-fold change cutoff for both biological replicates at a given time point. Click here for file

Genes significantly modified by FAS siRNA duplex #3 (≥1.2 or ≤0.83 fold). The fold change values represent the average from 2 replicates of FAS siRNA duplex #3 compared to non-silencing control siRNA. Significance was determined using a p-value < 0.05, with a 1.2-fold change cutoff for both biological replicates at a given time point. Click here for file

Genes significantly modified by FAS siRNA duplex #4 (≥1.2 or ≤0.83 fold). The fold change values represent the average from 2 replicates of FAS siRNA duplex #4 compared to non-silencing control siRNA.
Significance was determined using a p-value of < 0.05, with a 1.2 fold change cutoff for both biological replicates at a given time point.Click here for fileFAS knockdown signature of 279 genes modified by 1.5 fold. The table displays genes modified by 1.5 fold compared to non-silencing control siRNA. The fold change values represent the average from at least 3 of the FAS siRNA duplexes.Click here for fileGSEA pathways down-regulated in response to knockdown of FAS. The table displays all pathways found to be down-regulated in response to FAS siRNA treatment compared to non-silencing control siRNA. Significance was determined using a nominal p-value < 0.05 or FDR < 0.250.Click here for fileGSEA pathways up-regulated in response to knockdown of FAS. The table displays all pathways found to be up-regulated in response to FAS siRNA treatment compared to non-silencing control siRNA. Significance was determined using a nominal p-value < 0.05 or FDR < 0.250.Click here for file"}
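The gene-filtering and overlap rules described in the methods (p ≤ 0.05 with at least a 1.2-fold change, up or down, in both replicates, then requiring a gene to be hit by at least 3 of the 4 duplexes) can be sketched as plain set logic; the gene names and numbers below are made up:

```python
from collections import Counter

def passes_filter(p_value, fold_changes, p_cut=0.05, fc_cut=1.2):
    """Keep a gene if it is significant and changes at least fc_cut-fold
    (up or down) in every biological replicate at the time point."""
    return p_value <= p_cut and all(
        fc >= fc_cut or fc <= 1.0 / fc_cut for fc in fold_changes
    )

def core_gene_list(duplex_hits, min_duplexes=3):
    """Genes called significant by at least `min_duplexes` of the siRNA
    duplexes (the Venn-diagram overlap step)."""
    counts = Counter(g for hits in duplex_hits for g in hits)
    return {g for g, c in counts.items() if c >= min_duplexes}

# Hypothetical per-duplex hit sets for four siRNA duplexes.
hits = [{"GENE_A", "GENE_B"}, {"GENE_A", "GENE_C"},
        {"GENE_A", "GENE_B"}, {"GENE_B", "GENE_D"}]
core = core_gene_list(hits)
```

Requiring the cutoff in both replicates, rather than in the averaged profile, guards against a single noisy replicate driving a call, which is presumably why the methods apply the fold-change filter per replicate.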
+{"text": "Proteins are the primary regulatory agents of transcription, even though mRNA expression data alone, from systems like DNA microarrays, are widely used. In addition, the regulation process in genetic systems is inherently non-linear in nature, and most studies employ a time-course analysis of mRNA expression. These considerations should be taken into account in the development of methods for the inference of regulatory interactions in genetic networks.

We use an S-system based model for the transcription and translation process, and we propose an optimization-based regulatory network inference approach that uses time-varying data from DNA microarray analysis. Currently, this seems to be the only model-based method that can be used for the analysis of time-course \"relative\" expressions (expression ratios). We perform an analysis of the dynamic behavior of the system when the number of experimental samples available is varied, when there are different levels of noise in the data, and when there are genes that are not considered by the experimenter. Our studies show that the principal factor affecting the ability of a method to infer interactions correctly is the similarity in the time profiles of some or all the genes: the less similar the profiles are to each other, the easier it is to infer the interactions. We propose a heuristic method for resolving networks and show that it displays reasonable performance on a synthetic network. Finally, we validate our approach using real experimental data for a chosen subset of genes involved in the sporulation cascade of Bacillus anthracis, and we show that the method captures most of the important known interactions between the chosen genes.

The performance of any inference method for regulatory interactions between genes depends on the noise in the data, the existence of unknown genes affecting the network genes, and the similarity in the time profiles of some or all genes.
Though subject to these issues, the inference method proposed in this paper should be useful because of its ability to infer important interactions, because it can be used with time-course DNA microarray data, and because it is based on a non-linear model of the process that explicitly accounts for the regulatory role of proteins. Inference of regulatory interactions in a genetic system provides fundamental biological knowledge, and significant efforts have been invested in the solution of this problem. The model used in this paper shares similarity with inference methods based on S-system models.

We consider a network of n genes, which are perturbed at some time before t = 0; from t = 0 onwards there are no external perturbations, and the mRNA and protein concentrations change continuously over time. The basic goal is to quantify the strengths of the regulatory interactions and the rate constants that best fit the dynamic model described by Equation (1) to a given set of time-course gene-expression data. Experimental methods like DNA microarrays typically measure the absolute value or the logarithm of gene (mRNA) expression ratios at discrete points in time; thus, the measured quantity for gene i at time tj is its log-expression ratio. Protein concentrations are not directly observable, unless an accurate proteomics technology is used; instead, the protein concentration for gene i at time t can be written in a form in which pi(0) is the initial concentration of protein i, and fi and hi are non-linear functions of δi and time, t, derived by substituting the splines fitted to the gene-expression data into the mass balance equations for the proteins.
The initial protein concentrations pi(0) and the reference states βi and δi can be obtained from the available half-lives of the mRNAs and proteins.

There are two main issues associated with computing the solution of the optimization problem described by Equations (3)-(8). First, the objective function in (3) is convex in the strengths of interactions and in the logarithm of the transcription rates, log(αi), but it is non-convex in the translation rate constants, γi, the initial protein concentrations, pi(0), and the reference mRNA expression states. Second, the parameters γi, pi(0) link all the genes together through the objective function, in the sense that if these three sets of parameters were known, then the resulting optimization problem would be convex in its unknowns and could be equivalently split into n sub-problems, one problem for each gene. Thus, instead of dealing with one mixed-integer optimization problem of O(n²) variables, we would have n mixed-integer problems with O(n) variables. The method then essentially repeats the two steps below for a given number of times (say Nl):

1. Fix the values of γi, pi(0), as determined either by an initial guess or from Step 2 below. Solve n (mixed-integer quadratic) optimization problems, one for each gene i, in the interaction strengths and the transcription rates log(αi), subject to the model constraints, where A is some large positive number used in the constraints.

2. Fix the interaction strengths and log(αi) determined from Step 1 and solve the optimization problem in the three sets of parameters γi, pi(0) and the reference states, subject to the same model constraints.
Our numerical studies suggest that the improvements attained by increasing the number of repetitions of the above two steps are marginal, i.e., a relatively small value for Nl may be good enough. Different initial guesses would potentially lead to different solutions, and the proposed method does not guarantee finding the globally optimal solutions. A procedure for reporting the best solution considers the network of interactions derived from all the collected solutions, i.e., those of similar optimum objective function values, accepting an interaction to be present if it is inferred to be present in the majority of the solutions. Since the set of optimal solutions can be considered as alternative networks that are consistent with the experimental data, an interaction can be considered physiologically significant if it occurred in the majority of the solutions. The applicability of any genetic network inference method is affected by a number of factors, including Nl (the number of iterations) and Nt (the number of discretization time points in the objective function) of the heuristic method, and missing genes from the analysis. The time-series data were obtained by the integration of the S-system of differential equations (1) in MATLAB, for different networks in which all parameters are known. This will allow us to correctly base our conclusions on globally optimal solutions. The degree of similarity between the different time profiles of mRNA expression is an important determinant of the amount of information present in the data. We studied three different types of networks that we labeled \"Low\", \"Medium\" and \"High\" according to the degree of similarity between the different profiles, which we quantified using the condition number of the matrix \u03a6 formed by the logarithm of protein concentrations at each time point. The time series of the logarithm of expression ratios are shown in the figures. The accuracy improved as the number of samples, Ns, used for the protein estimation increased, with the total number of positives equal to 30. 
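The majority-vote rule for reporting the best solution can be sketched as follows; the edge encoding (+1 inducing, -1 repressing, 0 absent) and the helper names are illustrative choices, not the authors' code:

```python
# Sketch of the majority-vote consensus: given several alternative
# solutions (interaction maps with entries +1 inducing, -1 repressing,
# 0 absent), keep an interaction only if the same call occurs in a
# majority of the solutions.

from collections import Counter

def consensus_network(solutions, threshold=None):
    """solutions: list of dicts mapping (regulator, target) -> -1/0/+1."""
    if threshold is None:
        threshold = len(solutions) // 2 + 1   # strict majority
    edges = set()
    for sol in solutions:
        edges.update(sol.keys())
    consensus = {}
    for edge in edges:
        votes = Counter(sol.get(edge, 0) for sol in solutions)
        sign, count = votes.most_common(1)[0]
        consensus[edge] = sign if count >= threshold else 0
    return consensus

sols = [
    {(1, 2): +1, (2, 3): -1, (3, 1): 0},
    {(1, 2): +1, (2, 3): -1, (3, 1): +1},
    {(1, 2): +1, (2, 3): 0,  (3, 1): 0},
]
net = consensus_network(sols)   # (1,2) kept as inducing; (3,1) absent
```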
The performance of the method is not very sensitive to Nt. Moreover, most of the interactions in the network appear to be quite robust to sampling frequency; for example, at least 50% of the interactions were correctly identified for all sample sizes. The DNA microarray technology tends to suppress the measured expression ratios, and so the larger expression ratios (>1) become larger while the smaller expression ratios (<1) become smaller. If all the interacting genes in a network are not considered for analysis by an inference method, then incorrect interactions are likely to be identified; if we remove genes that contribute to significant regulatory interactions, the number of incorrect identifications would increase. Finally, relatively small errors in the estimates of the half-lives of the mRNAs and proteins cause only a modest deterioration in the performance of the method. The algorithm was tested using data from the \"Low\" network with 1000 \"experimental\" samples. There was no noise in the data, but the parameters \u03b3i, pi(0) were now assumed to be unknown, i.e., the problem was non-convex. We found 7 solutions using the coordinate descent heuristic method starting from 7 random initial guesses, and all solutions converged to similar (O(1)) objective function values. The best solution was identified as the one whose interactions occurred in the majority of the solutions. The '4 out of 7' solution identified 11/30 interactions correctly, while the '5 out of 7' solution identified 10/30 interactions correctly. If the method identifying the interactions were random, then, since we have assumed a 10-gene network with 3 regulatory inputs for each gene, an interaction would be identified as inducing with probability 15%, repressing with probability 15% and absent with probability 70%. Therefore the average number of correct interactions identified would be 15% (4.5/30), suggesting that the heuristic method we are using is doing better than a random method. 
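The random-baseline figure quoted above can be checked directly. In the sketch below, the gene count, input count, and guessing probabilities come from the text, while everything else is illustrative; the simulated average should agree approximately with the 4.5/30 expectation:

```python
# Check of the random-guessing baseline: 30 true signed interactions,
# each guessed correctly with probability 0.15 (inducing 15%,
# repressing 15%, absent 70%), so 30 * 0.15 = 4.5 correct on average.

import random

def expected_correct(n_true=30, p_correct=0.15):
    return n_true * p_correct

def simulate(n_true=30, trials=5000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(n_true):
            true_sign = rng.choice([+1, -1])   # each true edge has a sign
            guess = rng.choices([+1, -1, 0], weights=[15, 15, 70])[0]
            if guess == true_sign:
                total += 1
    return total / trials

exact = expected_correct()   # 4.5 of 30, i.e. 15%
```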
The results here give a similar coverage, indicating that these 10/30 interactions are not just robust to noise but are also important in the sense that they are captured in all the solutions found. Note that we found that about 30% of the interactions could be identified with a large amount of noise, or when the parameters were unknown. To give an estimate of the time required to obtain a solution, it took about 15 hours to obtain 5 solutions, each running in parallel on a P4, 2 GHz, 1 GB RAM PC. Since the main emphasis of this study was not to obtain a computationally efficient method, only this estimate of the time taken for a solution is provided. Bacillus anthracis is an endospore-forming bacterium (a prokaryote) that is responsible for the anthrax disease. Under environmental-stress conditions, like most bacilli, it commits to sporulation via the bacillus endospore program. Mature spores can survive many extreme conditions, thus assuring species survival. When conditions are suitable, the endospore germinates and the organism then can begin to grow again. Using this approximation, the general solution to the protein mass-balance equation can be approximated by a sum of two terms, where the first term represents the homogeneous solution to the protein mass-balance differential equation and the second term the particular solution. The values for the coefficients bij and cij can be obtained in terms of the initial protein concentration, pi(t1) (t1 = 0), and \u03b3i by enforcing the continuity of the protein function across the break points. This can be verified by checking that the particular solution indeed satisfies the protein mass-balance equation. We can then show that Qi(t) and Rij(t) are defined in terms of the break points {tq}, for q in {1..j}, d in {1..4}, and \u03b4i. We can then evaluate Mi(t) and pi(t) for any time t in the interval. The error in the approximation of Mi(t) is O(T \u00d7 Ns^-4). 
The last equation is a function of time; therefore the integral of this function with respect to time t, over this time period, should also be zero. The above equation should hold for all the n genes in the system. Hence, the objective function can further be simplified by approximating the integral by a discrete summation, say at Nt points. In other words, we require that the mass balance equations are satisfied only at a finite number of points, as opposed to every time point in the period of observation. Note that this discrete summation can also be viewed as a trapezoidal rule-based approximation of the integral. RT proposed the identification method, did the analysis and prepared the manuscript. CJP assisted in the identification of the B. anthracis network, curated the experimental data for use by the identification algorithm, and provided invaluable biological insights and literature information. ETP and VH identified the general problem and provided the overall project direction. VH suggested and developed the modeling framework for the analysis and oversaw the detailed model development effort. ETP oversaw the manuscript preparation and editing, and provided guidance on biological issues and their interplay with computational issues. SM advised on and checked the model development and optimization formulations. All authors have read and approved the final manuscript. Additional derivations, data and results: this document has information on derivations, explanations and data related to the work in this paper. However, knowledge of this information is not crucial to understanding what is stated in the paper. For the interested reader, the paper refers to this material at appropriate places. Click here for file"}
+{"text": "Time series microarray experiments are widely used to study dynamical biological processes. Due to the cost of microarray experiments, and also in some cases the limited availability of biological material, about 80% of microarray time series experiments are short (3\u20138 time points). Previously, short time series gene expression data has mainly been analyzed using more general gene expression analysis tools not designed for the unique challenges and opportunities inherent in short time series gene expression data. We introduce the Short Time-series Expression Miner (STEM), the first software program specifically designed for the analysis of short time series microarray gene expression data. STEM implements unique methods to cluster, compare, and visualize such data. STEM also supports efficient and statistically rigorous biological interpretations of short time series data through its integration with the Gene Ontology. The unique algorithms STEM implements to cluster and compare short time series gene expression data, combined with its visualization capabilities and integration with the Gene Ontology, should make STEM useful in the analysis of data from a significant portion of all microarray studies. STEM is available for download for free to academic and non-profit users. Microarray time series gene expression experiments are widely used to study a range of biological processes such as the cell cycle and development. In this paper we introduce the Short Time-series Expression Miner (STEM), the first software application designed specifically for the analysis of short time series gene expression datasets (3\u20138 time points). Data from short time series gene expression experiments poses unique challenges. In these experiments thousands of genes are being profiled simultaneously while the number of time points is few. In such cases many genes will have the same expression pattern just by random chance. 
Furthermore, as with any time series experiment, there are usually few, if any, full time series repeats from which to gain statistical power. STEM uses a method of analysis that takes advantage of the number of genes being large and the number of time points being few to identify statistically significant temporal expression profiles and the genes associated with these profiles. The novel clustering algorithm which STEM implements for short time series expression data is briefly reviewed in the Implementation section. For a detailed discussion of the clustering algorithm, including experimental results on simulated data and on real biological data using GO, and a comparison with the k-means clustering algorithm, we refer the reader to the literature. To date, researchers analyzing short time series expression data relied mainly on two types of software. The first is general gene expression analysis software implementing methods which do not take advantage of the sequential information in time series data. The second is gene expression time series analysis software implementing methods primarily designed for longer time series. General methods for gene expression analysis that are frequently applied to time series expression data include popular clustering methods such as hierarchical clustering, k-means clustering, and self-organizing maps. STEM is freely available for download for non-commercial use. STEM is implemented entirely in Java and will work with any operating system supporting Java 1.4 or later. Portions of the interface of STEM are implemented using a third party library, the Java Piccolo toolkit from the University of Maryland. A user of STEM first specifies a tab delimited gene expression data file as input to STEM. 
Next, the user specifies a gene annotation source, and may adjust default parameters through the input interface shown in the figure. The novel clustering algorithm that STEM implements takes advantage of there being only a few time points in a dataset. The clustering algorithm first selects a set of distinct and representative temporal expression profiles (which we will refer to as model profiles from now on). These model profiles are selected independent of the data. The procedure for selecting the model profiles, and theoretical guarantees that the model profiles selected are representative and distinct, appear in the literature. Based on a reviewer's suggestion, STEM now also provides an implementation of the k-means clustering algorithm. A user thus has the option to compare directly within STEM results of STEM's novel clustering method with those produced using k-means. A user that still prefers the k-means clustering methodology for clustering short time series data, or is interested in using k-means to cluster other types of data for which the STEM clustering method does not apply, may still be interested in using STEM's implementation of k-means in order to leverage STEM's visualization capabilities and integration with GO. The results and discussion of STEM in this paper are presented using STEM's novel clustering method. A screenshot of the main interface window of STEM appears in the figure. The model overview screen is designed such that by default a user can visualize all profiles simultaneously, but as a result each profile box needs to be relatively small. At times, however, a user will be interested in focusing on a small subset of neighboring profiles. The interface of STEM supports zooming and panning on any portion of the model profiles overview screen. 
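The data-independent selection of distinct, representative model profiles described above can be sketched in the following spirit; the enumeration bound c, the profile count m, and the Euclidean distance are simplifying assumptions, not STEM's exact procedure:

```python
# Sketch of data-independent model profile selection: enumerate candidate
# profiles with bounded integer changes between time points, then greedily
# choose m mutually distant representatives.

from itertools import product

def candidate_profiles(n_timepoints, c):
    """All profiles starting at 0 with per-step changes in [-c, c]."""
    profiles = []
    for steps in product(range(-c, c + 1), repeat=n_timepoints - 1):
        prof = [0]
        for s in steps:
            prof.append(prof[-1] + s)
        profiles.append(tuple(prof))
    return profiles

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_model_profiles(candidates, m):
    """Greedy max-min selection of m distinct, representative profiles."""
    flat = (0,) * len(candidates[0])
    chosen = [flat]                      # seed with the flat profile
    while len(chosen) < m:
        best = max((p for p in candidates if p not in chosen),
                   key=lambda p: min(euclidean(p, q) for q in chosen))
        chosen.append(best)
    return chosen

cands = candidate_profiles(4, 1)   # 3**3 = 27 candidates for 4 time points
models = select_model_profiles(cands, 5)
```

Because the candidates and the selection depend only on the number of time points, the chosen profiles (and the regions of expression space they cover) stay fixed across datasets, which is the property the comparison features rely on.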
The ability to zoom and pan is powered by the open source Java libraries of Piccolo. Clicking on a profile box on the model profiles overview interface displays a window with detailed information about the profile; examples of such windows appear in the figures. The Gene Ontology (GO) is a structured vocabulary for describing biological processes, cellular components, and molecular functions of gene products. The integration with GO is designed to be simple for the user, comprehensive, and current. A user can select from a drop down menu on the main interface any of 35 gene annotation sources available from the Gene Ontology or the European Bioinformatics Institute. STEM implements two types of gene enrichments for a set of genes assigned to the same model temporal expression profile r. The default enrichment in STEM, and the method used in other software, is actual size based enrichment, in which the enrichment is computed using the hypergeometric distribution based on the number of genes in the set of interest. Formally, denote by N the total number of unique genes on the microarray, by m the total number of genes that are in the GO category of interest, and by s the number of genes assigned to profile r. Based on the hypergeometric distribution, the p-value of seeing v or more genes in the intersection of the category of interest and profile r can be computed as the tail sum over i from v to min(m, s) of C(m, i)C(N - m, s - i)/C(N, s). An advantage of the actual size enrichment is that it provides a means to externally validate a clustering algorithm, since the enrichment calculation makes no assumptions about how a set of genes was produced. Such a biological validation for the STEM clustering algorithm appears in the literature. Unlike other clustering algorithms, STEM's clustering algorithm also computes the expected number of genes matching a specific model profile. Formally, denote by se the expected size of profile r; then the p-value of seeing more than v genes belonging to both the category and profile r can be computed using the binomial distribution with parameters m and se/N. 
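The actual size based enrichment just defined can be written out directly from the hypergeometric tail; the counts below are made up for illustration, not data from the paper:

```python
# Actual-size enrichment via the hypergeometric tail.
# N = genes on the array, m = genes in the GO category,
# s = genes assigned to the profile, v = genes in both.

from math import comb

def hypergeom_pvalue(N, m, s, v):
    """P(X >= v) for X ~ Hypergeometric(N, m, s)."""
    total = comb(N, s)
    return sum(comb(m, i) * comb(N - m, s - i)
               for i in range(v, min(m, s) + 1)) / total

# Illustrative numbers: 1000 genes, a 50-gene category, a 40-gene
# profile, 10 genes in the overlap (expected overlap is only 2).
p = hypergeom_pvalue(N=1000, m=50, s=40, v=10)
```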
This leads to a new GO category enrichment p-value based on a profile's expected size. An advantage of expected size enrichment occurs in the case in which the genes of multiple independent processes happen to have the same temporal expression pattern. In this case a temporal expression pattern could be very significant in terms of the number of genes assigned versus expected, but no GO category will appear enriched under an actual size enrichment test. However, under an expected size enrichment test the GO categories could correctly be identified as being enriched. Expected size based enrichment is also useful for ordering temporal expression profiles to determine which are most relevant to a given GO category (see next subsection). As many GO categories are being tested simultaneously, it is necessary to correct p-values using a multiple hypothesis correction. STEM can correct p-values using the Bonferroni correction, or in the case of actual size enrichment also by using a randomization test. STEM's integration with GO is bidirectional. In addition to allowing a user to determine for a given model profile what GO terms are significantly enriched, STEM can also determine for a given GO category what model profiles were most enriched for genes in that category. Given a GO category, STEM ranks the profiles based on their p-value enrichment for that category. The profiles on the main interface can be reordered based on either the actual or expected size enrichment. A typical comparison question is: \"For the genes with a significant response X in experiment A, what significant responses did they have in experiment B?\". STEM uses the hypergeometric distribution to compute the significance of overlap between gene sets of model profiles of two experiments. Since the model profiles are defined independent of the data, the boundaries in expression space that they induce will remain the same between experiments. 
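A sketch of the expected size based test follows. Treating se/N as the per-gene success probability is our assumption here, since the text does not spell out the exact binomial parameterization STEM uses; the numbers are illustrative:

```python
# Expected-size enrichment sketch: with se the expected number of genes
# assigned to profile r, take se/N as the chance a category gene falls in
# the profile, and compute the tail of a Binomial(m, se/N).

from math import comb

def binomial_tail(m, p, v):
    """P(X > v) for X ~ Binomial(m, p)."""
    return sum(comb(m, i) * p ** i * (1 - p) ** (m - i)
               for i in range(v + 1, m + 1))

# Illustrative numbers: 50-gene category, expected profile size 40 out
# of 1000 genes, 10 genes observed in the overlap.
pval = binomial_tail(m=50, p=40 / 1000, v=10)
```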
In contrast, cluster boundaries from traditional, data driven clustering algorithms will change between experiments. STEM is thus able to detect significant sets of genes with the same expression profiles across experiments that might otherwise be missed if the clusters were defined differently across experiments. Furthermore, since the model profiles in STEM are also selected to be distinct and representative of all expression profiles, STEM will determine for all pairs of distinct expression patterns if there is a significant gene set intersection. If the clusters had been formed with a data driven clustering algorithm no such guarantee is possible. Many microarray studies include a comparison of the temporal response of genes between experimental conditions. For example, researchers have compared the temporal response of genes infected with a wildtype pathogen to those infected with a knockout mutant version of the pathogen, or the response of cells infected with wildtype Helicobacter pylori to the response when infected with the vacA- strain. The comparison showed that the response of the vacA- infected cells was similar to that of the wildtype infected cells. The profile pairs on the comparison interface can be rearranged based on the significance of the intersection or how different the expression profiles are as measured by the correlation coefficient. On the main model profiles overview screen a user can reorder all the model profiles from one experiment based on the enrichment for a set of genes assigned to a profile or set of profiles in the other experiment. A number of software packages implementing general methods for the analysis of gene expression data from multiple experiments have been used to analyze time series data. These include Cluster and EXPANDER. Unlike STEM, CAGED and GQL do not support comparing time series data sets. CAGED does not offer any GO analysis features, though it does have an automated report generation feature not available in STEM or GQL. 
GQL does provide support for determining GO enrichments for a cluster of genes. However, unlike STEM the support is not bidirectional; that is, there is no support for directly determining the temporal response of genes belonging to a GO category of interest. In terms of running time, STEM was the fastest when compared on the same real biological data; the comparison is summarized in the table. We have introduced STEM, a new software package for analyzing short time series expression data. The software can find statistically significant patterns from short time series microarray experiments and can compare data sets across experiments. STEM presents its analysis of the data in a highly visual and interactive manner, and the integration with GO allows for efficient biological interpretations of the data. Through an analysis of the Gene Expression Omnibus we have estimated that short time series expression data is represented in about a quarter of all microarray studies. While STEM was designed with time series data in mind, it only makes the assumption that experiments can naturally be sequentially ordered. Thus, STEM could also be used for other types of sequential experiments such as dose response and temperature response experiments. 
The unique automated analysis capabilities of STEM, combined with its visualization capabilities and integration with GO, should make STEM a software of choice for analyzing data from a significant portion of all microarray studies. Project name: STEM: Short Time-series Expression Miner. Project home page: . Operating system(s): platform independent. Programming language: Java. Other requirements: Java 1.4 or higher. License: non-commercial research use license. Any restrictions to use by non-academics: license needed for commercial use. Abbreviations: Cluster Analysis of Gene Expression Dynamics (CAGED); Extraction of Differential Gene Expression (EDGE); Gene Ontology (GO); Graphical Query Language (GQL); Short Time-series Expression Miner (STEM); Significance Analysis of Microarrays (SAM). JE and ZBJ both contributed to the design of STEM. JE implemented STEM. Both JE and ZBJ participated in the drafting and revising of the manuscript, and read and approved the final manuscript. STEM user manual: manual.pdf is a comprehensive user manual for STEM in PDF format. Click here for file"}
+{"text": "Reverse engineering cellular networks is currently one of the most challenging problems in systems biology. Dynamic Bayesian networks (DBNs) seem to be particularly suitable for inferring relationships between cellular variables from the analysis of time series measurements of mRNA or protein concentrations. As evaluating inference results on a real dataset is controversial, the use of simulated data has been proposed. However, DBN approaches that use continuous variables, thus avoiding the information loss associated with discretization, have not yet been extensively assessed, and most of the proposed approaches have dealt with linear Gaussian models.We propose a generalization of dynamic Gaussian networks to accommodate nonlinear dependencies between variables. As a benchmark dataset to test the new approach, we used data from a mathematical model of cell cycle control in budding yeast that realistically reproduces the complexity of a cellular system. We evaluated the ability of the networks to describe the dynamics of cellular systems and their precision in reconstructing the true underlying causal relationships between variables. We also tested the robustness of the results by analyzing the effect of noise on the data, and the impact of a different sampling time.The results confirmed that DBNs with Gaussian models can be effectively exploited for a first level analysis of data from complex cellular systems. The inferred models are parsimonious and have a satisfying goodness of fit. Furthermore, the networks not only offer a phenomenological description of the dynamics of cellular systems, but are also able to suggest hypotheses concerning the causal interactions between variables. The proposed nonlinear generalization of Gaussian models yielded models characterized by a slightly lower goodness of fit than the linear model, but a better ability to recover the true underlying connections between variables. 
Reverse engineering cellular networks is one of the most challenging problems in systems biology. Starting with the measurements of certain variables, such as gene expression or protein concentration values, an attempt is made to infer the control mechanisms of the cellular system generating the available data, i.e. the underlying network of connections between its components. As only time series measurements provide information concerning the dynamics of a cell's regulatory mechanisms, recent studies have concentrated on analyzing such data. The various reverse engineering methods proposed in the literature range from highly detailed models, such as those based on differential equations, to highly abstract models, such as Boolean networks. The former describe the molecular reactions taking place in a cell, and the latter represent cellular components as binary variables that are linked to each other by logical relationships. Dynamic Bayesian networks (DBNs) are probabilistic graphical models: at the qualitative level, the network is a directed graph in which, given a variable x, the parents of x are the variables that have a directed edge pointing to x. At the quantitative level, the dependence relationships are described by means of conditional probability distributions. Because of their probabilistic framework, BNs and DBNs can automatically take into account the variability of biological systems, as well as the possible presence of experimental noise in the data. However, while BNs only offer a static picture of the system, DBNs can show how variables regulate each other over time. For example, when analyzing gene expression data, BNs represent the expression level of each gene by a random variable, and infer a snapshot of the state of the cellular system at mRNA level. DBNs take this representation one step further, and represent the relationships between gene expression levels over time. Assuming a temporal dependency of order 1, one random variable is associated with the expression value of a gene at time t, and another with the expression level of the same gene at time t + 1. In this representation, the direction of dependencies is constrained by the time dimension, and so the parents are the variables at time t and the children are the variables at time t + 1. In this way, DBNs can also overcome the inability of BNs to represent feedback loops, due to the acyclicity constraint of the graph. This limitation makes BNs unsuitable for representing many biological systems in which feedback controls are a critical aspect of regulation. The existing methods for learning BNs from observations can be adapted to DBNs. The selection of the best network to represent the data is treated as a Bayesian model selection problem, with different networks being compared by their posterior probability. This score makes a compromise between the ability of the inferred model to describe the data (i.e. its goodness of fit) and the number of parameters used. In this way, a more complex model is preferred over a simpler one only if its fitting ability significantly improves. The sound statistical framework of DBNs also allows them to incorporate prior knowledge and handle the possible presence of missing data and hidden variables (representing unobserved factors) in a principled way. It is interesting to look at the best network model in terms of goodness of fit, and the best model in terms of its ability to recover the causal relationships between variables. The former is the model with the lowest RMSE (0.196), while the latter corresponds to \u03b1 = 1.8 and has RMSE = 0.227. Different \u03b1 values can thus slightly favor the goodness of fit or accuracy of the inferred models. Moreover, comparison of the linear regression and nonlinear model showed that the latter performs better at recovering the causal connections between variables. For example, the recall of the above model corresponding to \u03b1 = 1.8 is 43% higher than that of the linear model (0.336 vs. 0.235): this corresponds to 40 out of 119 true links recovered, instead of 28. In addition, its precision is 15% higher (0.367 vs. 0.318), thus leading to a 30% improvement in F (0.351 vs. 0.271). To test the robustness of the results, we added noise (quantified by a coefficient of variation, CV) to the simulated profiles. In particular, for every profile at each time point, we added a random variable extracted from a Gaussian distribution with zero mean and standard deviation \u03c3 equal to CV multiplied by the absolute value of the simulated expression value. Comparison of the results of the linear model and those in the noiseless case revealed slight variations in the parsimony of the inferred models (average number of parents), but a worse goodness of fit. As could be expected, the RMSE increased in proportion to the increasing levels of noise. In terms of the ability to recover true connections, recall slightly decreased in the case of CV = 0.05 and CV = 0.1, but significantly worsened in the case CV = 0.3. The precision for every CV value was less than in the noiseless case and, consequently, so was the F-measure. These results are summarized in the figures. The results of the nonlinear model showed that, in correspondence with every value of the parameter \u03b1, performance degraded with noise: the reduction in F is small for CV = {0.05, 0.1} and becomes 16% for CV = 0.3. Previous biological knowledge of the length of the cell cycle in the examined system allowed us to restrict our analysis to this time frame. We expected that the more time points we collected, the better the performance of our reverse engineering algorithm would be. However, experimental measurements are often expensive and/or difficult, and this is the main reason why biological temporal profiles usually contain few time points. Thus, it is interesting to make a quantitative assessment of the extent to which the performance of the algorithm depends on the sampling interval, in order to have some indications concerning the minimum number of time points necessary to obtain satisfactory results. Our previous analyses had always used a sampling interval of five minutes and so, once again using the simulated data from time 0 to 100 min (about one cell cycle length) in the case of wild-type cells, we sampled values at intervals s = {1, 2, 10} minutes, thus producing datasets with respectively {101, 51, 11} time points. 
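The evaluation metrics used throughout (recall, precision, and the synthetic F-measure) can be computed directly from the true and inferred edge sets; the toy networks below are illustrative:

```python
# Recall = fraction of true edges recovered; precision = fraction of
# inferred edges that are true; F = harmonic mean of the two.

def recall_precision_f(true_edges, inferred_edges):
    tp = len(true_edges & inferred_edges)
    recall = tp / len(true_edges) if true_edges else 0.0
    precision = tp / len(inferred_edges) if inferred_edges else 0.0
    f = (2 * recall * precision / (recall + precision)
         if recall + precision > 0 else 0.0)
    return recall, precision, f

true_net = {("A", "B"), ("B", "C"), ("C", "A"), ("A", "D")}
inferred = {("A", "B"), ("B", "C"), ("D", "A")}
r, p, f = recall_precision_f(true_net, inferred)   # r = 0.5, p = 2/3
```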
This enabled us to examine how the results vary with a larger or smaller number of points than our baseline of 21. The results for the linear regression model, considered together with those obtained with our baseline of s = 5, showed that the average number of parents decreases as s increases. This is probably due to the fact that the addition of parents does not significantly improve the fitting for higher values of s, and so the Bayesian score does not improve. The RMSE was very low at s = {1, 2}, and increased constantly as s increased, whereas recall and precision constantly decreased: F went from 44% at s = 1 to 27% at s = 5, and there was a sharp decline in performance at s = 10, when F was about 11%. The results are summarized in the figures. In the case of the nonlinear model, and considering each value of \u03b1, the average number of parents also decreased as s increased. The RMSE was very low at s = {1, 2}, and became higher with longer sampling intervals. Recall decreased as s increased, while precision was greatest at s = 5 (in most cases) or s = 2 (two cases). There thus seems to be a compromise between recall and precision. With \u03b1 < 2, F is more than 27% (most frequently more than 30%) at s = {1, 2, 5}, and becomes 17 \u2013 20% at s = 10. With \u03b1 \u2265 2, F is always more than 20%. The figure shows the results for \u03b1 = 0.8, the value with the best F (F = 0.359). The results thus show that the goodness of fit of the models worsens as the sampling interval increases, whereas the ability of the algorithm to recover the true causal connections between variables is best at s = 2 or s = 5. Moreover, the decline in performance at s = 10 is much less than in the case of the linear model. It is interesting to compare the results obtained in our study with those of published simulation studies of DBNs with discrete variables. The model used by Husmeier simulates expression time series for a network of nine connected genes, to which another 41 unconnected, randomly up- or down-regulated genes were added, for a total of 50 genes. The results were evaluated using sensitivity and complementary specificity (1-specificity) rather than recall and precision. The high sensitivity values obtained by the author in some of the trials performed must always be carefully assessed together with their complementary specificity because, as explained above, even very low values of complementary specificity can correspond to a significant number of false positives. For example, Husmeier himself stressed that the whole set of true connections can be recovered only at a complementary specificity of 75%, which corresponds to an extremely high number of spurious connections. Moreover, an example network shown by Husmeier in relation to only the nine connected nodes has a recall (sensitivity) of 36% and similar precision: these figures are comparable to those obtained in our study. Yu et al. used a linear model to generate data for 10 networks, each containing 20 nodes. Between eight and 12 nodes are connected in each network, whereas the others are unconnected and move in a random walk. Recall and imprecision (1-precision) are used to assess the algorithm's performance. 
The authors present most results for a much higher number of data points than ours (up to 2000), and show that it is possible to obtain high values of recall and precision only in the presence of more than one hundred points. With 2000 points, they obtained a recall of 90% and a precision of almost 100% (F = 95%); for 300 points, recall decreased to about 50 \u2013 55% and precision to 85% (F = 67%); for 100 points, recall was still about 50% and precision was similar (F = 50%); but with 25 points recall decreased to 30 \u2013 35% and precision to 10% (F = 16%). These recall and precision values show that the results obtained by Yu et al. for short time series are not significantly different from ours, although they simulated data using a simpler model. As mentioned above, our analysis concentrated on short time series because temporal microarray experiments do not typically contain more than a few tens of time points, and so the results of our study should better approximate the recall and precision obtainable when analyzing real gene expression data. Nonetheless, it is interesting to investigate the performance of our method in the presence of longer time series, and particularly interesting to compare our results with those of Yu et al., who kindly made the simulator they used to generate their data available to us. This simulator produces profiles with continuous values, which Yu et al. need to discretize in order to apply their DBN algorithm. As in their study, we simulated one dataset with 100 points and another with 300, using a sampling interval of 5. The temporal profiles of each variable were standardized before the analysis with our algorithm. For each of the 10 networks used by Yu et al., we ran our algorithm with \u03b1 = 1. For each number of time points (100 or 300), the values shown are averaged over the 10 datasets simulated in correspondence with the different network structures.
As can be seen, with both 100 and 300 time points, the recall obtained with Gaussian networks is greater than that reported by Yu et al., the precision is comparable or slightly lower, and the synthetic F-measure is always higher. In this case, the precision of the linear model outperforms that of the nonlinear model using hyperbolic tangent functions. This may be because the simulator used by Yu et al. is based on a dynamical system which is linear over a wide range of variable values. It can therefore be said that Gaussian networks seem to have advantages over discrete variable networks if a limited amount of data is available, as they do not suffer from information loss due to discretization and are more parsimonious than discrete models (Table 3). In the case of discrete models, the number of parameters required to describe the dependence of a variable on its parents depends on the number of possible combinations of the parent values: i.e. assuming a binary variable with three parents that can each have two possible values, 2^3 = 8 parameters are required. On the contrary, in Gaussian networks, each parent corresponds to one parameter in the regression equation, and so only three parameters are required for three parents. The reduction in the number of required parameters becomes more obvious as the number of parents or the number of discretization categories increases. In the nonlinear model, each parent enters the regression through the function f(x) = tanh(\u03b1x).
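The parameter-count argument above can be made concrete. A small sketch (the helper names are our own) counting the local parameters needed by a discrete node versus a Gaussian node:

```python
def discrete_param_count(n_parents, n_categories=2):
    """A discrete node needs one conditional distribution per combination
    of parent values: n_categories ** n_parents combinations."""
    return n_categories ** n_parents

def gaussian_param_count(n_parents):
    """A Gaussian node needs one regression coefficient per parent
    (intercept and variance aside)."""
    return n_parents
```

For a binary variable with three binary parents this gives 2**3 = 8 versus 3 parameters, matching the example in the text; the gap widens quickly with more parents or more discretization categories.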
In order to compare the performance of this approach with that of traditional linear Gaussian networks, we undertook a novel simulation study using data from a differential equation model proposed by Chen et al., which reproduces well the complexity and nonlinearity of cell cycle control mechanisms in budding yeast. The DBNs considered here are assumed to be time homogeneous [the transition probability p(Y(t + 1)|Y(t)) is independent of t] and first-order Markov [each variable at time t + 1 depends only on the variables at time t]. In the regression formulation of Equation (9), yi is the n \u00d7 1 vector of observations, \u03b2i is the vector of regression parameters, and Xi is the n \u00d7 (p(i) + 1) matrix of regressor values. For example, row t of Xi is (1, yi1t,..., yip(i)t) when the model in Equation (4) is used, and it is (1, tanh(\u03b1yi1t), tanh(\u03b1yi2t),..., tanh(\u03b1yip(i)t)) when the nonlinear model in Equation (5) is used. We use conjugate prior distributions for the parameters \u03b8hi, which consist of the precision \u03c4i and the vector of regression parameters \u03b2i. Conditionally on \u03c4i, the prior density of the parameter vector \u03b2i is assumed to be multivariate Gaussian with precision matrix proportional to Ri0, and we set Ri0 equal to the identity matrix. This choice represents the assumptions that all the variables are a priori independent and that, conditionally on \u03c4i, the regression parameters are independent. Moreover, we chose \u03bdi0 = 3 for the prior degrees of freedom. With this prior specification, it can be shown that the local marginal likelihood p(D|Mhi) is available in closed form, with \u03b1in = \u03bdi0/2 + n/2 and \u03bdin = \u03bdi0 + n. As parameter estimates, we consider the posterior expectations, e.g. E(\u03b2i|yi) = \u03b2in. By the likelihood modularity described with the factorization in Equation (8), it is possible to learn the network structure by searching for the local regression models with maximum marginal likelihood.
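The structure-learning step (choosing, for each variable, the parent set with maximum local marginal likelihood) can be sketched as a K2-style greedy search. This is a simplified sketch: the `score` callable stands in for the closed-form log marginal likelihood, and the names are our own, not the paper's exact routine:

```python
def greedy_parent_search(candidates, score, max_parents=None):
    """Greedily grow the parent set of one variable.

    At each step, add the candidate that most improves score(parents),
    where score is higher-is-better (e.g. a log marginal likelihood);
    stop when no single addition improves the score."""
    parents = frozenset()
    best = score(parents)
    while candidates - parents:
        if max_parents is not None and len(parents) >= max_parents:
            break
        # evaluate every one-parent extension of the current set
        trials = {c: score(parents | {c}) for c in candidates - parents}
        cand, s = max(trials.items(), key=lambda kv: kv[1])
        if s <= best:
            break
        parents, best = parents | {cand}, s
    return parents, best
```

In a DBN the candidate set is simply all variables at the previous time point, which is what makes this local, per-variable search tractable.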
In the context of DBNs, time imposes a natural restriction on the set of candidate parents for each variable, because the parents are constrained to be the variables at the previous time point. However, even with this restriction the space of possible parent sets is exponential in the number of candidate parents. To make the search feasible, we adapted the greedy search strategy originally implemented in the K2 algorithm. In order to reduce the risk of finding suboptimal models, we implemented a stepwise search: at each step, the old marginal likelihood is not only compared with the marginal likelihood of the model in which the parent that increases the likelihood most is added to the old parent set, but also with the marginal likelihood values of the models in which this new parent is added to the old parent set with one of the old parents removed. The search strategy is schematically illustrated in Figure . Abbreviations: DBN: dynamic Bayesian network; BN: Bayesian network; RMSE: root mean squared error; R: recall; P: precision; TP: true positives; FP: false positives; FN: false negatives; F: F-measure; TN: true negatives; CV: coefficient of variation. The authors declare that they have no competing interests. FF was responsible for the study: she conceived it together with RB, implemented the system, carried out the analyses, and wrote the manuscript. PS provided methodological support for all of the aspects related to the linear Gaussian network algorithm, and reviewed the paper. MFR supervised methodological and implementation aspects. RB contributed to the conception of the simulation study and the evaluation of the results, suggested the use of nonlinear Gaussian models, reviewed the paper, and supervised the work. All of the authors have read and approved the final manuscript."}
+{"text": "One type of DNA microarray experiment is discovery of gene expression patterns for a cell line undergoing a biological process over a series of time points. Two important issues with such an experiment are the number of time points, and the interval between them. In the absence of biological knowledge regarding appropriate values, it is natural to question whether the behaviour of progressively generated data may by itself determine a threshold beyond which further microarray experiments do not contribute to pattern discovery. Additionally, such a threshold implies a minimum number of microarray experiments, which is important given the cost of these experiments. We have developed a method for determining the minimum number of microarray experiments (i.e. time points) for temporal gene expression, assuming that the span between time points is given and the hierarchical clustering technique is used for gene expression pattern discovery. The key idea is a similarity measure for two clusterings which is expressed as a function of the data for progressive time points. While the experiments are underway, this function is evaluated. When the function reaches its maximum, it indicates that the set of experiments has reached a saturated state; therefore, further experiments do not contribute to the discrimination of patterns. The method has been verified with two previously published gene expression datasets. For both experiments, the number of time points determined with our method is less than in the published experiments. It is noted that the overall approach is applicable to other clustering techniques. Recent advances in microarray technologies have made such temporal experiments increasingly practical. An important feature of this category of microarray experiments is the dependency among gene expression data corresponding to different time points. An important issue is thus the specification of time points, including their number and the span between them.
In the absence of knowledge from the biologist about this specification, one naturally questions whether the behaviour of data generated from a progressive microarray experiment may by itself help determine a \"cut-off\", beyond which further microarray experiments do not contribute to the discrimination of gene expression patterns. Additionally, such a cut-off value implies the minimum number of microarray experiments, which is important because these experiments are costly in terms of time, reagents, and well-trained technicians. A clustering technique is typically used for discovering patterns in gene expression data. There are many clustering algorithms available in the literature for gene expression profiling, including hierarchical clustering and K-means. There appear to be only a few related studies in the literature; the most closely related work may be that of Hwang et al., which also considers whether an additional time point further contributes to the identification of patterns. The procedure compares two clusterings based on data over the first m - 1 and m time points. There are two key ideas in this paper. First, a statistics-based similarity measure for two clusterings produced with the hierarchical clustering technique is defined. Second, a procedure is developed for determining whether an experiment after time point m contributes to pattern discovery; the function c(m) is employed to measure the similarity of two clusterings based on expression data over the first m and m - 1 time points (see the \"Method\" section for the definition). Figures  show the behaviour of c(m) with respect to the number of time points m for the fibroblast dataset and for the cdc15 dataset, respectively. Correspondingly, Tables  list the values of c(m).
To evaluate the proposed method, a program implementing it was run on two datasets: the fibroblast dataset and the cdc15 dataset (see the \"Methods\" section for details about these datasets). It can be seen from the two corresponding figures that the c(m) values in both datasets initially increase monotonically with respect to the number of time points, reach a maximum, and then appear to fluctuate randomly thereafter. For the fibroblast dataset, the function c(m) reaches an initial maximum when data from the first 9 time points are used to cluster genes and then appears to fluctuate randomly when more data are added. Therefore, it is reasonable to claim that nine is the minimum number of time points necessary for clustering genes in the fibroblast experiment. This result matches very well with the fact that data from the first nine time points of the fibroblast dataset were collected over the first 16 hours after serum stimulation, and the period of cell division is 16 hours (see Table ). For the cdc15 dataset, c(m) reaches an initial maximum when data from the first 8 time points are used to cluster genes and then appears to fluctuate randomly when more data are added. Again it is reasonable to claim that eight is the minimum number of time points necessary for clustering genes in the cdc15 experiment. This result also correlates very well with the fact that data from the first eight time points were collected over the first 100 minutes after cdc15-based synchronization, and the period of cell division is about 100 minutes. The function d is defined to measure the similarity of two k-partition clusterings based on expression data over the first m and m - 1 time points, obtained at a proper level of the corresponding hierarchical clusterings (see the \"Method\" section for the definition). In the following, we examine the behaviour of d by setting k = 3, 4,\u22ef, 10, respectively.
This is very important as we want to understand how the number of clusters, k, could affect the results; specifically, whether the minimum number of microarray experiments obtained with c(m) is applicable to different partitions. Figure  shows the behaviour of d for the fibroblast experiment for the various values of k. When the fourth sample is added, the probability that any two gene pairs are clustering-invariant for the partition with 3 clusters is only about 60%, while the probability for the partition with 10 clusters is about 80%. It is found that the probabilities for all possible partitions increase as more data are added. For instance, when the seventh sample is added d reaches its initial maximum, and at this point the probability that any two gene pairs are clustering-invariant is about 95%. When the eighth sample is added d reaches a maximum for the first time, and at this point the probability that any two gene pairs are clustering-invariant is about 94%. For the other partitions, the corresponding values of d reach their initial maxima before the ninth sample is added. It is interesting to observe from the above discussion regarding the behaviour of d that fewer samples may be needed to obtain a k-partition when the number of clusters k is known as a priori information. This seems reasonable, as more clusters require more discriminant features (i.e. more samples). However, there may exist a kind of 'saturated' k, beyond which an increase of k will not call for more samples. For instance, in the case of the fibroblast experiment, such a saturated number of clusters is 7, as the same number of samples is required for numbers of clusters greater than 7. A similar situation can be observed for the cdc15 dataset. The method proposed in this paper to determine the minimum number of time points required in DNA microarray experiments for clustering genes has been shown to be effective by analyzing two previously published datasets: fibroblast and cdc15.
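The stopping rule described above (run experiments until c(m) reaches its initial maximum, after which further time points stop helping) can be sketched as follows; the helper name and list-based interface are our own illustration, not the authors' MATLAB code:

```python
def minimum_timepoints(c_values, first_m=3):
    """Given c(m) evaluated at m = first_m, first_m + 1, ..., return the
    smallest m at which c(m) attains its maximum (the 'initial maximum'
    beyond which values only fluctuate)."""
    best_m, best_c = first_m, c_values[0]
    for i, c in enumerate(c_values[1:], start=1):
        if c > best_c:
            best_m, best_c = first_m + i, c
    return best_m
```

Scanning with a strict `>` ensures that later ties or random fluctuations around the plateau do not push the answer past the first point at which the maximum is reached.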
These two datasets have temporal gene expression profiles with definite periods: about 16 hours for the fibroblast dataset and about 100 minutes for the cdc15 dataset. The periodic behaviours of these two datasets were observed in the originating experiments; specifically, the number of time points is 12 for the fibroblast dataset, while the number of points is 24 for the cdc15 dataset. With our method, we obtain the following numbers of microarray experiments: 10 for the fibroblast experiment and 9 for the cdc15 experiment. These minima imply a significant reduction of time points, especially for the cdc15 experiment. Our method combines hierarchical clustering with the \u03b3c measure for clustering similarity. Overall, our computational experiments have shown that such a combination appears to work well for applications, which is consistent with the result and conclusion obtained by Dougherty et al. In the similarity measure r, r = 1 means that genes g1 and g2 have a co-regulated response to a biological process in the same direction, and r = -1 means that genes g1 and g2 have a co-regulated response to a biological process in the opposite direction. When the gi offset values are set to the means of the expression profiles of genes g1 and g2, respectively, r becomes the Pearson correlation coefficient of genes g1 and g2. The Euclidean distance corresponds to the Minkowski distance with d = 2; alternatively, the distance measure 1 - r may be used. The n \u00d7 n matrix recording the merge level of every gene pair is called the index cophenetic matrix. To avoid negative probability, we adopt the following index: \u03b3c = p(same ordering | untied pairs) \u00a0\u00a0\u00a0 (4). Further, p(same ordering | untied pairs) + p(different ordering | untied pairs) = 1. The index \u03b3c has the range [0, 1], where the value 1 indicates a perfect agreement between two hierarchical clusterings.
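The index \u03b3c compares how two index cophenetic matrices order the gene pairs. A minimal sketch of this agreement over untied pairs (the dict-of-pairs representation and function name are our own simplification):

```python
from itertools import combinations

def gamma_similarity(C1, C2):
    """Fraction of untied pair-of-pairs ordered the same way by two
    cophenetic matrices, i.e. p(same ordering | untied pairs).

    C1, C2: dicts mapping a gene pair (i, j) to its merge level."""
    keys = list(C1)
    same = different = 0
    for p, q in combinations(keys, 2):
        d1 = C1[p] - C1[q]
        d2 = C2[p] - C2[q]
        if d1 == 0 or d2 == 0:          # tied in at least one clustering
            continue
        if (d1 > 0) == (d2 > 0):        # same ordering of the two pairs
            same += 1
        else:
            different += 1
    return same / (same + different) if same + different else 1.0
```

Identical clusterings give 1.0, and completely reversed merge orderings give 0.0, matching the stated range of the index.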
Hays provided further discussion of this index. Two hierarchical clusterings can also be compared through two particular partitions obtained from their dendrograms. To measure the similarity of the two partitions, we first define a new matrix, which is slightly different from the index cophenetic matrix, as follows: the D matrix of a partition with k clusters can be obtained from matrix C by setting D = 1 when C \u2264 n - k, and D = 2 otherwise. Likewise an index denoted by \u03b3D, similar to \u03b3c, can be defined, which has the same form of expression as Equation (5) except that C1 and C2 are replaced by D1 and D2. The higher \u03b3D, the more similar are the two partitions. The basic idea of our method is to measure how similar a clustering produced from gene expression data from the first m time points is to a clustering produced from gene expression data from the first m - 1 time points. Denote the index cophenetic matrix corresponding to the first m time points by Cm. Here the index m begins with 3, because the pattern discovery of gene expression from only 2 time points is trivial when data normalization methods are applied. We then define c(m) = \u03b3c(Cm, Cm-1), for any m \u2265 3 \u00a0\u00a0\u00a0 (6), where C2 is an arbitrary symmetrical matrix valued from the set {1,\u22ef,n - 1}, and \u03b3c is calculated from Equation (5). c(m) for m \u2265 3 is clearly a function of the number of time points, m. The larger c(m), the more similar are the two clusterings obtained from the first m time points and the first m - 1 time points. Therefore, the determination of the minimum number of microarray experiments corresponds to the determination of m* such that c(m*) is a maximum. The rationale for the idea described above is as follows: we assume that, given a set of gene expression data, the interesting patterns (clusters in this case) are inherently present. The characteristics of a particular pattern are described by the observed features of genes involved in a biological process under investigation.
The discriminant characteristics of the patterns are \"bounded\", such that there is a threshold beyond which any further observation will not add any value to the discrimination of patterns. In other words, such patterns can eventually be discovered in a limited number of experiments. Similarly, denote the index cophenetic matrix of a partition with k clusters from a hierarchical clustering based on gene expression data from the first m time points; we can then define the similarity function d based on \u03b3D, in the same way as c(m). The computational complexity of the method is analyzed as follows. For some given m and k, the cophenetic matrices Cm and their partition counterparts can be computed in time O(n^3) by a hierarchical clustering algorithm such as that proposed by Duda et al. Both c(m) and d can be computed in time O(n^2). Since the hierarchical clustering dominates for large n, the overall complexity is O(n^3). According to the similarity measure used, agglomerative hierarchical clustering techniques may further be classified into single linkage clustering, complete linkage clustering, or average linkage clustering. The proposed method was implemented in MATLAB, using average linkage hierarchical clustering and Euclidean distances. The data were pre-processed by the method proposed by Eisen et al. FXW proposed the idea of this paper, implemented the programs, and drafted the manuscripts. WJZ and AJK conceived of the study and helped to modify the manuscript. All authors read and approved the final manuscript."}
+{"text": "In practice many biological time series measurements, including gene microarrays, are conducted at time points that seem to be interesting in the biologist's opinion and not necessarily at fixed time intervals. In many circumstances we are interested in finding targets that are expressed periodically. To tackle the problems of uneven sampling and unknown type of noise in periodicity detection, we propose to use robust regression. The aim of this paper is to develop a general framework for robust periodicity detection and to review and rank different approaches by means of simulations. We also show the results for some real measurement data. The simulation results clearly show that when the sampling of time series gets more and more uneven, the methods that assume even sampling become unusable. We find that M-estimation provides a good compromise between robustness and computational efficiency. Since uneven sampling occurs often in biological measurements, the robust methods developed in this paper are expected to have many uses. The regression based formulation of the periodicity detection problem easily adapts to non-uniform sampling. Using robust regression helps to reject inconsistently behaving data points. The implementations are currently available for Matlab and will be made available for the users of R as well. More information can be found in the web-supplement. The detection of periodically behaving gene expression time series has been an area of enormous interest lately, as more and more microarray data are generated. A classical tool is Fisher's g-test. If the brackets denote the integer part of a rational number, then Fisher's g-statistic is defined as the maximum periodogram ordinate divided by the sum of all of the ordinates, the ordinates being taken at the q = [(N - 1)/2] Fourier frequencies; we can analytically find the p-values for the statistic under the Gaussian assumption.
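Fisher's g-statistic for an evenly sampled series can be sketched directly from the definition above (a pure-Python DFT; the helper name is our own):

```python
import cmath
import math

def fisher_g(y):
    """Maximum periodogram ordinate divided by the sum over the first
    q = (N - 1) // 2 Fourier frequencies (evenly sampled data)."""
    n = len(y)
    q = (n - 1) // 2
    ordinates = []
    for k in range(1, q + 1):
        # DFT coefficient at Fourier frequency 2*pi*k/n
        dft = sum(y[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        ordinates.append(abs(dft) ** 2 / n)
    return max(ordinates) / sum(ordinates)
```

A pure sinusoid at a Fourier frequency concentrates all power in one ordinate, driving g toward 1; white noise spreads the power out and keeps g small, which is what the analytical p-values quantify.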
where y is the measured time series, again omitting aN/2 (and the last column in the matrix) if N is odd. The matrices A1 and A2 are defined by the sinusoidal regressors at the frequencies of interest, and the \u03c3n are scaling factors. The scaling factors \u03c3n are chosen so that the resulting estimator is approximately 95% as efficient as the least squares estimator when applied to normally distributed data with no outliers. In particular, \u03c3n = 4.685\u00b7\u015d\u00b7sqrt(1 - hn), where \u015d = 1.4826 mad{ri} is the scaled median absolute deviation of the residuals from their median and hn = (X(XTX)-1XT)nn, i.e. the nth diagonal element of the \"hat\" matrix. The influence function \u03c8 (Tukey's biweight function in this case) is the derivative of \u03c1. The idea of the biweight M-estimator is to give zero weighting to those data points whose residuals are large compared to the estimated scale. Therefore, if the scale is estimated robustly as well, we expect the M-estimators to give good performance on data with different distributional characteristics. The least trimmed squares (LTS) regression takes a different approach: instead of considering complete sets, we choose subsets of the measured time points and use OLS for each of these subsets; we then choose the estimate that yields the smallest residual variance. Quantitatively, order the squared residuals of the linear model as (r2)1 \u2264 (r2)2 \u2264 ... \u2264 (r2)N; the LTS estimate minimizes the sum of the smallest squared residuals. An advantage of the LTS regression is that it can tolerate outliers in the predictor variables as well, whereas the M-estimators cannot. However, this is not critical in spectrum estimation, since the measurement time points usually contain no stochasticity. A fast implementation (FAST-LTS) has been introduced in the literature. We next consider the case where X and y are multivariate.
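As a concrete illustration of M-estimation with Tukey's biweight, here is a small IRLS (iteratively reweighted least squares) sketch for a straight-line fit. In the periodicity setting the regressors would be the sinusoidal columns of A1 and A2; all names here are our own, and the scale is simply re-estimated from the MAD of the residuals rather than using the full leverage-corrected scaling described above:

```python
from statistics import median

def tukey_weight(u, c=4.685):
    """Tukey's biweight: smooth down-weighting, exactly zero beyond c."""
    return (1 - (u / c) ** 2) ** 2 if abs(u) < c else 0.0

def robust_line_fit(x, y, iters=50):
    """IRLS fit of y = a + b*x; the scale is re-estimated each iteration
    as 1.4826 * MAD of the residuals, so gross outliers get zero weight."""
    w = [1.0] * len(x)
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        if sw == 0:
            break
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        b = sxy / sxx
        a = my - b * mx
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        m = median(r)
        s = 1.4826 * median(abs(ri - m) for ri in r)  # robust scale (MAD)
        if s == 0:                  # exact fit on the retained points
            break
        w = [tukey_weight(ri / s) for ri in r]
    return a, b
```

With one gross outlier in an otherwise clean linear series, the first (ordinary) pass gives the outlier a huge residual relative to the MAD scale, so it receives weight zero and the subsequent weighted fits recover the clean line.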
The minimum covariance determinant (MCD) regression method is a well-known robust multivariate method. It has similarities to the LTS regression in that it also considers subsets instead of complete sets in the estimation: the subset that yields the smallest covariance determinant is chosen for the estimation. For multivariate regression the model is formulated with y being q-variate, X being p-variate, and \u03b2 the matrix of regression coefficients. Since the standard LS estimates are based on the location vector and the covariance matrix (scatter), a logical way of robustifying the regression is to replace the location vector and the scatter matrix with robust alternatives, as the authors of the method propose. The Lomb-Scargle periodogram (see e.g. the literature) can be applied directly to unevenly sampled series. It frequently occurs that besides non-uniform sampling there are some measurement points in some genes that are missing although they are present in other genes, thus reducing the quality of the data. Since we do not use an analytical null hypothesis distribution for all the time series but rather simulate the distribution for each time series separately, these missing points are handled so that we fit the sinusoidals only to the time points that are present. The presented methods thus have no further need for missing value imputation. To take non-uniform sampling into account, we loosen the definition of the variable n in Equation (1) so that it does not need to be integer-valued. We perform a scaling of the measurement time points in the following way: denote the time points when the actual measurements have been made as a vector \u03c4. Then form the vector t to correspond to the new indices, i.e. we normalise the last time point to N - 1. Note that for any uniformly sampled time series Eq. (19) yields an integer-valued vector t. To find the corresponding real-time frequency for an angular frequency \u03c9, we consider two interpretations of \u03c9: the first one is in Equation (2) and the second one relates \u03c9 to the frequency of periodicity f through the sampling frequency Fs.
If sampling is not equidistant, we can approximate the average Fs as if the sampling were equidistant, with the help of the vectors \u03c4 and t: the quotient of corresponding elements is a constant, independent of the index n (as long as the index exists). MA carried out the implementation of the methods, performed the computations and mainly drafted the manuscript. HL helped in developing the statistical methods and co-drafted the manuscript. AG provided the microarray measurement data and performed the Gene Set Enrichment Analysis. IS and OY-H conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript."}
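The time-axis rescaling described above (Eq. 19: map measurement times to pseudo-indices so that the last point becomes N - 1) can be sketched as follows; the helper name is our own, and we assume the first time point maps to 0:

```python
def rescale_timepoints(tau):
    """Map measurement times tau to pseudo-indices t in [0, N - 1]; for
    uniform sampling this returns exactly the integers 0, 1, ..., N - 1."""
    t0, span = tau[0], tau[-1] - tau[0]
    n = len(tau)
    return [(ti - t0) * (n - 1) / span for ti in tau]
```

With these pseudo-indices the sinusoidal regression matrices can be built for unevenly sampled series, which is what lets the regression-based methods drop the even-sampling assumption.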
+{"text": "Gene expression microarray and other multiplex data hold promise for addressing the challenges of cellular complexity, refined diagnoses and the discovery of well-targeted treatments. A new approach to the construction and quantification of transcriptional regulatory networks (TRNs) is presented that integrates gene expression microarray data and cell modeling through information theory. Given a partial TRN and time series data, a probability density is constructed that is a functional of the time course of transcription factor (TF) thermodynamic activities at the site of gene control, and is a function of mRNA degradation and transcription rate coefficients, and equilibrium constants for TF/gene binding. An application to Escherichia coli is presented. Our approach yields more physicochemical information that complements the results of network structure delineation methods, and thereby can serve as an element of a comprehensive TRN discovery/quantification system. The most probable TF time courses and values of the aforementioned parameters are obtained by maximizing the probability obtained through entropy maximization. Observed time delays between mRNA expression and activity are accounted for implicitly since the time course of the activity of a TF is coupled by probability functional maximization, and is not assumed to be proportional to the expression level of the mRNA type that translates into the TF. This allows one to investigate post-translational and TF activation mechanisms of gene regulation. Accuracy and robustness of the method are evaluated. A kinetic formulation is used to facilitate the analysis of phenomena with a strongly dynamical character, while a physically-motivated regularization of the TF time course is found to overcome difficulties due to omnipresent noise and data sparsity that plague other methods of gene expression data analysis.
Multiplex time series data can be used for the construction of the network of cellular processes and the calibration of the associated physicochemical parameters. We have demonstrated these concepts in the context of gene regulation understood through the analysis of gene expression microarray time series data. Casting the approach in a probabilistic framework has allowed us to address the uncertainties in gene expression microarray data. Our approach was found to be robust to error in the gene expression microarray data and mistakes in a proposed TRN. Gene expression microarray and other multiplex data are becoming widely available. Kinetic cell models have been used for predicting cell behavior; unfortunately, many of their parameters are difficult to obtain. Time series experiments commonly involve monitoring a sample of cells over their cycle or during response to time-varying conditions in the extra-cellular medium, such as due to heat shock, transitions from aerobic to anaerobic conditions, from enriched to minimal growth media, or exposure to hormones or drugs. Other dynamical phenomena of interest involve behaviors in response to nuclear transplantation, fertilization or viral infection, as well as the time course of normal development, radiation, transitions to abnormality or drug resistance. Predicting these phenomena, and analyzing time series data on them, can be facilitated using kinetic approaches if the associated dynamic variability is to be explored. In contrast, steady-state approaches can only yield ratios of rate coefficients and not all coefficients independently. Nor can a steady-state approach capture autonomous oscillatory dynamics such as observed during transcription. To quantitatively understand the cell, we must account for the omnipresent uncertainty in observed data and in the structure of a cell model. Thus, a probabilistic framework is needed.
We suggest that the probability of interest is a function of the rate parameters and initial concentrations and a functional of the time course of the frontier variables for which we do not know the governing equations or experimental measurements. Since we know the time course of gene expression microarray data, in principle, some of the rate parameters, equilibrium constants, initial concentrations as well as the time profiles of the frontier variables are more likely to be consistent with it than others. A new approach to the construction and quantification of TRNs is presented here that integrates gene expression microarray time series data and cell modeling through information theory. Given a partial TRN and time series data, a probability density is constructed that is a functional of the time course of TF thermodynamic activities at the site of gene control, and is a function of mRNA degradation and transcription rate coefficients, and equilibrium constants for TF/gene binding. In an attempt to reduce the effect of measurement errors, gene expression microarray data is usually preprocessed via image analysis, statistical approaches and channel normalization before any biochemically viable information is derived. The goal of Boolean model analysis is to infer gene regulatory network structure. However, Boolean network models oversimplify gene expression by using a binary approximation wherein genes are considered either active or inactive. The interaction between genes is then represented by Boolean functions, and hence the state of a gene (active/inactive) is calculated using the state of the controlling genes. A regulatory network is then constructed by searching all possible Boolean functions until a network that best fits all the data is obtained.
While such approaches miss the subtler variation in the degree of gene activity, their computational efficiency allows them to be applied to large networks. In Bayesian networks, the expression level of every gene is specified by a random variable. Starting from an a priori gene regulatory network and gene expression data, and using Bayesian statistics, one can construct the conditional probability of the level of expression for each gene given the expression level of another gene that is assumed to regulate it. This conditional probability is then used to build a Bayesian network by keeping all edges (i.e. assumed regulatory interactions) that have a conditional probability higher than a threshold. Cluster analysis, Bayesian statistics, ICA and PCA classify genes into groups; genes that have similar expression profiles are assumed to be similarly regulated or to share the same biochemical functionality. However, they cannot uniquely predict the TRN, as they do not address the role of TFs mediating gene-gene interactions or the effect of external factors (e.g. carbon source or TF activators/deactivators such as hormones). Cluster analysis is based on statistical techniques wherein correlations are sought between the responses of genes. However, the coordination can be extremely complex and circuitous. Thus genes may be part of a multi-branch feedback loop involving several TFs made or activated/deactivated by proteins translated from other genes via a series of kinetic steps that can introduce time delays, which can easily mask some interactions or introduce spurious ones. Such effects are even more pronounced in light of noise in the observed expression profiles. Furthermore, for a given gene, there is no established correlation between mRNA expression and the level of protein it translates. NCA differs from other techniques in that the structure of the TRN is assumed to be known. A number of assumptions are made in NCA to arrive at the final steady-state model.
The approach presented here requires at least a part of the TRN. However, we place no restrictions on the structure of this network, use a kinetic model, construct synthetic gene expression microarray time series, apply a physically motivated regularization constraint for the time-dependence of TF activities that enhances robustness, place the entire computation in an information theory context so that the uncertainty can be assessed, and then analyze TRN structure and the associated physicochemical parameters. The latter include mRNA production and degradation rates and TF/gene binding constants. The use of a kinetic model also allows us to generalize our approach to proteomic and metabolic data either by themselves or with gene expression microarray data. In our transcription model, the activity of free RNA polymerase is assumed to be constant and henceforth is subsumed into the transcription rate coefficient. Finally, our methodology can be generalized by relaxing any of the above assumptions. For example, the gene control function Hi can be changed such that competitive binding and TF complexing are accounted for explicitly, which in effect will allow for OR logic. Although the extension of our transcription model to include competitive binding is crucial to accurately recover TF activity time courses, this level of description is out of the scope of this study. Further research is needed to obtain specific data on the molecular level about which TF binds to which binding site of a given gene. A probability density \u03c1 for the values of the set \u039b of model parameters and the time-dependence of T(t) is obtained through entropy maximization. Gene expression microarray data is fraught with inaccuracies. Much attention has been placed on minimizing systematic and random errors via quality screening, multi-spot/multi-slide analysis and averaging. Software carrying out these functions yields confidence intervals which are quantitative measures of errors in the experimental data.
Information theory was introduced as a method for assessing the uncertainty in the state of a system via an entropy measure. The development starts with the introduction of the entropy S, in which a sum over the set \u039b of model parameters and a functional integration over all time courses T(t) is indicated. The experimental data and model are introduced via a set of error measures labeled l = 1,2,..., each of whose averages constrains \u03c1 (Eq. 8). For cDNA microarray data, we construct \u03c1(T(t),\u039b) by maximizing S subject to the constraints (Eq. 8), normalization of \u03c1, qualitative information on the timescale over which T(t) can evolve, and other factors reflecting one's expertise. The result is a form for \u03c1 which implies that the \u039b, T(t) which are most probable yield the lowest error; also the T(t) obtained has no time dependence that is unphysically short. Other constraints could be introduced that allow one to assign higher probability to the range of parameter values \u039b that are near those one expects from experience. Given the inherently subjective nature of probability (in the absence of information, all states are taken to be equally likely), the information theory prescription yields a \u03c1 that is consistent with the level of our knowledge of the system. In our procedure, the resulting \u03c1 is then maximized with respect to \u039b and T(t) to determine their most probable values. Cell parameters that must be calibrated to attain a predictive model using our approach are those introduced in the previous section. Our analysis starts with preprocessed data; thus the predictions of our method, like those of other genome-wide microarray analysis methods, depend on the choice of the preprocessing procedure. Analysis of preprocessed microarray data can be placed in our framework by introducing an associated error measure EcDNA. Let Mi\u2113 be the microarray expression level for the i-th of Ng genes in the \u2113-th of Nmicro experiments (e.g. time slice), and let mi\u2113 = Mi\u2113/MiA with A being the initial time or the standard condition; EcDNA then measures the mismatch between the predicted and observed mi\u2113.
Here, obs indicates an observed value. Thus EcDNA is a function of the set of model parameters \u039b as contained in a cell model of intra-nuclear TF activities. Following the above information theory formulation we introduce the probability \u03c1 = \u03c1(T(t), \u039b), a functional of the time course T(t) and a function of \u039b. We construct \u03c1 by maximizing the entropy subject to estimates of the average error measures (here EcDNA) and other information. The number of time points is restricted due to cost. This fact and the high level of uncertainty in microarray data suggest that the probability functional method cannot yield a meaningful T(t) unless more information is known. In our formulation this is introduced via a homogenization constraint that eliminates unphysically short timescale variations in T(t) that the sparseness of the time series data would otherwise allow. In particular, we impose a smoothness constraint on T(t) for a time series run over the interval from 0 to tf, where tc is the shortest characteristic time over which we expect that T(t) can change appreciably. One can also use a steady-state approximation for information available about post-translational reactions to further constrain \u03c1: if a subnetwork of genes with stoichiometric matrix of processes dn is responsible for the production of Tn, then an associated error measure for these processes can be written, in which Ntimes is the number of discretized times at which the TF activity is computed, zn is the number of genes involved in the production of Tn, and \u03b1n is an equilibrium constant for the j-th gene responsible for the creation of the n-th TF. Maximization of the entropy with respect to \u03c1 gives the working form of the probability, where \u03b21, \u03b22, \u03c9 are Lagrange multipliers; the multipliers are determined by ensuring that the constraints are satisfied.
With this, the most probable values of \u039b, T, given the microarray data, are obtained by solving \u2202\u03c1/\u2202\u039b = 0 coupled to \u03b4\u03c1/\u03b4T = 0, a functional differential equation for T(t) that we solve numerically (see appendix A); \u039e is a normalization constant. A discussion of a symmetry rule that applies to the invertibility of microarray data is given in appendix B and implies the need for a minimal amount of regulatory information in order to obtain a unique network. Figure shows how an a priori TRN is used to infer TF activity time courses and TF/gene binding constants. To test our implementation of the approach described above, and to find its practical limitations, we used a model network that consists of 20 genes and 10 TFs; none of the 20 genes is assumed to code for any of the 10 TFs. The TRN is shown in Table . We generated TF time courses according to the following: Tn(t) = 1 - 0.5sin(\u03c5nt + \u03d5n), \u00a0\u00a0\u00a0 (13) where \u03c5n, \u03d5n are randomly chosen periods and phases. Then we created the synthetic time series microarray data using our transcription model, selecting 10 "data points" that are 500 seconds apart. In the following, we demonstrate the robustness of our approach in reconstructing T(t) despite mistakes in the regulatory network and noise in the microarray data, conditions commonly encountered in practice. Promoter sequence analysis can be used to determine the structure of the TRN based on likely binding sites. However, this approach is likely to suggest a large number of false positive interactions in the TRN. It is of interest to test whether our approach can filter the redundant nonzero entries in the control network. In our approach, if a TF is wrongly assumed to upregulate a gene, large fitted binding constants, i.e. QT >> 1, imply that this interaction is unlikely (redundant) as QT/(1 + QT) \u2248 1. A similar argument holds for wrongly assumed down regulation, as indicated by small binding constants (1/(1 + QT) \u2248 1). Therefore, our methodology filters out incorrect interactions by assigning large/small binding constants for up/down regulation. To check the vulnerability of our approach to such redundant interactions, we added random nonzero factors in the regulatory network, and obtained the "conjectured full regulatory network" shown in Table . Despite advances in the technology, microarray data has considerable levels of error. There have been few systematic analyses of microarray accuracy due to the many technology platforms available. These platforms have many technological variations which affect the accuracy and reproducibility of the measured expression levels of genes. Such variations are due to multiple techniques of making labeled material, various hybridization conditions, different microarray scanners and settings, etc. Yuen et al. estimated such errors for an E. coli cDNA microarray, and Novak et al. carried out an extensive study of Affymetrix GeneChip oligonucleotide arrays using either identical RNA samples or RNA from replicate cultures under similar biological conditions. In our implementation, we input raw microarray channel data and perform standard channel normalization. To mimic measurement error, noise is added to the synthetic data via mi(t) = mi(t) \u00d7 (1 + noise \u00d7 (2r - 1)), \u00a0\u00a0\u00a0 (14) where r is a random number between 0 and 1; noise \u00d7 100% represents the coefficient of variation or the percentage noise level in the microarray data. Often we rely on TF-gene interactions obtained from resources of questionable quality. Therefore, it is important that our algorithm be robust to potential sign mistakes in the TRN due to regulatory differences between the cell line of interest and that for which the network was constructed. In this case, we run our code in a discovery mode that searches for network mistakes. We first rank genes based on the mismatch between the predicted and observed microarray response, the highest ranked having the greatest mismatch. Then, we rank the TFs based on the rank and number of genes that they regulate. As the calculation progresses, we periodically check the genes whose mismatch is greater than Eaverage + a\u03c3, where Eaverage is the average mismatch, \u03c3 is the standard deviation of the gene mismatch and a is an empirical parameter. Once the genes satisfying this criterion are identified, we change the sign of the regulatory interaction for each of the highest ranked TFs (up/down). We also consider additional input that the user provides regarding confidence in each element of the TRN. At a given step in this process, we only change one sign per column of the TRN. After a few iterations, we monitor the mismatch behavior; if the sign change failed to improve the mismatch, we change the sign back. To test this algorithm, we took the TRN of Table as the starting point. To test our methodology on real data we used E. coli microarray data obtained for the carbon source transition from glucose to acetate media; details on the experimental conditions and the microarray procedure are provided in Ref. . The a priori regulatory network was taken from RegulonDB as modified with EcoCyc, assuming the nature of these interactions (up vs down) to be unknown; Brown and Callan have presented related results for E. coli. Despite the noise in the microarray data, our approach is still found to be robust. Regularization is important for discriminating between noise/data sparsity-related spurious oscillations and those arising from the nonlinear dynamics of transcription chemical kinetics.
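The synthetic-data construction of Eqs. 13-14 (sinusoidal TF activities sampled at 10 points 500 s apart, then perturbed with uniform multiplicative noise) might be sketched as follows; the frequency range used below is an assumption, since the text only states that periods and phases are chosen randomly.

```python
import math
import random

rng = random.Random(0)

def tf_course(n_points=10, dt=500.0):
    """Tn(t) = 1 - 0.5*sin(vn*t + pn) (Eq. 13) sampled at n_points times dt apart.
    The frequency range is illustrative; the text only says periods are random."""
    v = rng.uniform(2e-4, 2e-3)          # assumed angular-frequency range
    p = rng.uniform(0.0, 2.0 * math.pi)  # random phase
    return [1.0 - 0.5 * math.sin(v * k * dt + p) for k in range(n_points)]

def add_noise(m, noise):
    """Multiplicative noise m -> m*(1 + noise*(2r - 1)), r ~ U(0,1) (Eq. 14)."""
    return [x * (1.0 + noise * (2.0 * rng.random() - 1.0)) for x in m]

courses = [tf_course() for _ in range(10)]     # 10 TF activity time courses
noisy = [add_noise(c, 0.10) for c in courses]  # 10% noise level
```

With this construction the clean values stay in [0.5, 1.5], and a noise level of 0.10 perturbs each value by at most ±10%.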
A number of issues must be addressed in developing a TRN discovery strategy, as follows. Discovering the structure and quantifying the physical chemistry of the gene regulatory network and the underlying mechanism has the challenges that arise in any chemical kinetics problem. For example, the simple process A+B+C\u2192ABC can occur through the mechanism A+B\u2192AB, AB+C\u2192ABC or the other two permutations; to identify the actual mechanism, one must provide intermediate measurements on the dimers AB, BC and CA, rather than simply a measurement of the net rate of ABC production. Clearly, in a system with thousands of participating genes and hundreds of TFs, the resolution of the network and the detection of spurious interactions is a grand challenge of combinatorial magnitude. Essentially, all present network discovery methods suffer from this uniqueness difficulty due to the sparsity of available information. In this paper, we demonstrate how our method provides a way to augment an incomplete TRN and to identify inconsistencies in a proposed network based on microarray data. Similarly, the paradigm genes \u2192 mRNAs \u2192 proteins \u2192 TFs \u2192 genes ... is an oversimplification. While the omitted processes could readily be added to our formulation, it is clear that data in addition to gene expression microarray observations would be required to resolve them. Thus we take the perspective that the simple paradigm cited above can be adopted as a starting point if it is recognized that other processes are somehow mediating the network we quantify. For example, if some genes are repressed in one mammalian cell line by methylation, this will be reflected as a small transcriptional rate constant that our approach reveals.
When predicted expression levels for a given gene are found to be in poor agreement with observations, and assuming that the possibility that other TFs could be regulating that gene has already been explored, we consider this to imply that the simple paradigm has broken down and that other processes must be acting in a dynamical way to affect the gene expression time series. In light of the above, it is evident that network structure, physicochemical parameters and TF activity time courses cannot all be extracted from a single approach. Processes such as acetylation, methylation and phosphorylation, and the associated enzymes, play important roles in the wider set of pathways. Multiplex time series data holds great promise for the construction of the network of cellular processes and the calibration of the many associated physical chemical parameters. We have demonstrated these concepts in the context of transcription regulation understood through the analysis of microarray time series data. Casting the approach in a probabilistic framework has allowed us to address the uncertainties in microarray data. Our approach was found to be robust to error in the microarray data and mistakes in a proposed regulatory network. Our approach complements other methods when used as a part of a wider network discovery/quantification algorithm.
Given its robustness, its capacity to refine and quantify complex networks of cellular processes, and the potential for extension to other multiplex bioanalytical data, we believe that our approach has great potential in the pure and applied life sciences. Project name: KAryote Gene ANalyzer (KAGAN). Project home page: Operating system(s): Windows (2000 and later versions) or Linux. Programming languages: F77, php. Licence: a web interface that allows users to simulate their own data is available from the above site (free registration required). Restrictions to non-academics: None. AS, KT and PJO formulated the problem and developed the theoretical model framework. AS and KT carried out the development and implementation of the numerical algorithms. All authors participated in the writing of the manuscript, and have read and approved the manuscript. The numerical methods used for simulating the time evolution of mRNA populations and for solving the calibration inverse problem by determining TF time courses and model parameters are as follows. The latter parameters are sets of TF binding constants, maximum transcription rate constants kmax and mRNA degradation rate constants \u03bb. Fast and accurate solution of the ODE model is crucial for constructing thousands of gene expression levels to find the optimum model parameters in a practical time. For the i-th gene, the mRNA population Ri time evolution is computed using an implicit Euler method. The time step \u0394tn+1 used is adaptive and depends on the maximum component of the rate vector k. The microarray expression level at a given experimental time is predicted as the relative abundance of the mRNA populations to their reference state at the initial time. In solving \u2202\u03c1/\u2202\u039b = 0, a gradient steepest descent approach suffers from slow convergence. We overcome this via a combined steepest descent/simulated annealing approach. The key to efficiently solving the inverse problem cited above is to use an iterative alternating parameter approach.
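The implicit Euler step mentioned above has a closed form when transcription and degradation are linear in R, i.e. dR/dt = k - \u03bbR; the sketch below (with illustrative parameter values, not those of the paper, and a fixed rather than adaptive step) shows why the scheme is unconditionally stable and relaxes to the steady state k/\u03bb.

```python
def implicit_euler_mrna(r0, k, lam, dt, n_steps):
    """Implicit Euler for dR/dt = k - lam*R (transcription minus degradation).
    For this linear model the implicit step has the closed form
    R_{n+1} = (R_n + dt*k) / (1 + dt*lam), stable for any dt > 0."""
    r = r0
    traj = [r]
    for _ in range(n_steps):
        r = (r + dt * k) / (1.0 + dt * lam)
        traj.append(r)
    return traj

# Illustrative values: production k = 2, degradation lam = 1, steady state k/lam = 2.
traj = implicit_euler_mrna(r0=0.0, k=2.0, lam=1.0, dt=0.5, n_steps=200)
```

In the paper's full model the production term is itself a function of the TF activities through the gene control function, so each step requires evaluating that term at the new time level; the stability argument is the same.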
The calibration starts with minimization of the microarray error EcDNA with respect to the TF binding constants. In solving (Eq. 6), the model establishes an integral equation for the transcription rate ki(t). Solving for ki(t) at the given experimental microarray times yields a computationally efficient algebraic approach that allows the use of a simulated annealing algorithm to find ki over a grid of microarray experimental times, interpolating it as a continuous piecewise linear function. With (A.4) we can evaluate the above integral analytically; ki and (A.6) then give us a linear system that can be solved for the binding constants, where nij is the type of TF that binds to the j-th site on gene i. We find that, if the rate integral equation is accurately solved, then minimizing this error is equivalent to minimizing the microarray error. Applying simulated annealing enhances the likelihood that we get as close as needed to the global minimum. When the resulting solution fails by increasing the error due to numerical instability, we switch to a steepest descent scheme. For the TF activities we solve the discretized temporal regularization functional differential equations \u03b4\u03c1/\u03b4T = 0, with no-flux boundary conditions; for the j-th TF at the (n+1)-th iteration, one obtains a linear system for l = 1,\u22ef(Ntimes - 1), where \u03c9 is the regularization coefficient and \u0394s is chosen small enough by line search to assure that EcDNA is minimized. For the j-th set of equations, one must restrict the analysis to those genes regulated by that TF. The above linear system is efficiently solved using the Thomas algorithm for tridiagonal linear systems. The remaining parameters (mRNA degradation rates \u03bb and transcription limiting rates kmax) are found by a steepest descent based on EcDNA.
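The Thomas algorithm referred to above solves a tridiagonal system in O(n) by one forward-elimination sweep and one back-substitution sweep; a generic sketch (independent of the regularization equations here) is:

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d, where
    a = sub-diagonal (len n-1), b = diagonal (len n),
    c = super-diagonal (len n-1), d = right-hand side (len n)."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0] if n > 1 else 0.0
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution x = [1,2,3].
x = thomas([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0])
```

The algorithm assumes no pivoting is needed, which holds for the diagonally dominant systems produced by the regularized time-discretization.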
If only the microarray data were provided, in the absence of direct information and physical measurements on the binding constants and TF activities, it is clear that there is a degeneracy in the solution of this problem: many states in the parameter space yield a similar error. For example, if Qij is a solution then, for \u03b5 > 0, \u03b5Qij may fit equally well. Notably, H (Eq. 2) contains factors of the form xb/(1 + x), where b is (bij + 1)/2 and x is QijT; since x/(1 + x) = 1/(1 + 1/x), an up regulation with QijT is indistinguishable from a down regulation with its reciprocal. Unless bij is known for at least one gene, the inversion will allow two equally probable answers corresponding to bij = \u00b1 1. This implies that for each TF type n we must find at least one gene for which the nature of the regulation (up versus down) is known. This means that if the TRN is written as an Ng row by NTF column matrix, then at least one entry in each column must be known (see Table ). This is self-consistent since it eliminates the QT degeneracy cited above. For the general class of models used here, there is thus a TF up/down regulation symmetry that leads to a multiplicity in the determination of"}
+{"text": "Microarray compendia profile the expression of genes in a number of experimental conditions. Such data compendia are useful not only to group genes and conditions based on their similarity in overall expression over profiles but also to gain information on more subtle relations between genes and conditions. Getting a clear visual overview of all these patterns in a single easy-to-grasp representation is a useful preliminary analysis step: We propose to use for this purpose an advanced exploratory method, called multidimensional unfolding.We present a novel algorithm for multidimensional unfolding that overcomes both general problems and problems that are specific for the analysis of gene expression data sets. Applying the algorithm to two publicly available microarray compendia illustrates its power as a tool for exploratory data analysis: The unfolding analysis of a first data set resulted in a two-dimensional representation which clearly reveals temporal regulation patterns for the genes and a meaningful structure for the time points, while the analysis of a second data set showed the algorithm's ability to go beyond a mere identification of those genes that discriminate between different patient or tissue types.Multidimensional unfolding offers a useful tool for preliminary explorations of microarray data: By relying on an easy-to-grasp low-dimensional geometric framework, relations among genes, among conditions and between genes and conditions are simultaneously represented in an accessible way which may reveal interesting patterns in the data. An additional advantage of the method is that it can be applied to the raw data without necessitating the choice of suitable genewise transformations of the data. 
Depending on the biological question at hand, one may be interested in finding subsets of genes that can be clustered together based on similarities in their overall expression profile, or in finding subsets of conditions that can be grouped together based on similarities in their overall gene profile. Also more subtle relations between genes and conditions can be envisaged, such as biclusters of genes being co-expressed over a subset of conditions only (modules) or groups of genes being discriminative for subsets of conditions. However, the massive amount of information and relations present in the data poses a challenge for the data analyst: It is not trivial to know where to start looking for structure, and a priori choices can have the consequence that something is missed. For instance, many cluster algorithms require defining in advance the number of clusters to be searched for, a parameter which is difficult to guess. Therefore, having a rough idea of the most prominent patterns present in the data and the (unexpected) particularities, prior to performing a more profound analysis, may be most useful. Exploratory methods offer the possibility to reduce the data to a manageable amount of information, for example by means of a clustering of the individual elements into a small number of groups or by means of reducing them to a small number of dimensions. Often, such methods yield insightful graphical representations. Ideally, such representations should display genes and conditions simultaneously. From this perspective, multidimensional unfolding (MDU) seems a promising data exploration technique (for an introduction to MDU, see the introductory literature). Although theoretically suitable as a data exploration technique, current MDU algorithms cannot readily be applied due to problems of a general kind and problems that are specific to the case of microarray gene expression data.
As regards problems of a general kind: first, some algorithms do not converge to a local minimum and yield unstable results; second, in many cases MDU representations are not well interpretable due to a sticking together of a majority of gene and condition points, implying that they cannot be discriminated from one another. As regards problems that are specific for the case of gene expression data: first, existing MDU algorithms have not been designed for the analysis of data sets of the typical sizes of microarray data, as they require a large amount of memory; second, existing MDU algorithms also are computationally very intensive. To deal with these problems, in the present paper we propose a novel MDU algorithm. A subsequent application of it to two publicly available microarray datasets, each serving a different biological purpose, will demonstrate its exploratory power. Let the genes be indexed by i = 1 ... n, the conditions by j = 1 ... m, and the dimensions of the low-dimensional space by r = 1 ... p. Also, let E be the n \u00d7 m expression matrix for the n genes measured in m conditions, with eij representing the expression of gene i in condition j and ei the m-sized vector representing the expression profile of gene i. The purpose of a multidimensional unfolding of gene expression data is to find coordinates in a low-dimensional space, both for the genes and the conditions, in a way that the (Euclidean) distance of a gene point to a condition point is shorter the higher the gene is expressed in that condition. Note that MDU can be considered as an extension of multidimensional scaling (MDS) to the rectangular case. In order to maximize (1), the coordinate matrices X and Y have to be such that the distance vectors correlate as negatively as possible with the profiles, while positive correlations are to be avoided.
An important aspect of the optimization criterion is that any set of positive genewise linear transformations of the raw expression data E will yield the same optimal X, Y, because the correlation is insensitive to linear transformations; as such, tough questions about preprocessing, insofar as they pertain to gene-specific linear transformations, are bypassed. Criterion (1) is the average squared correlation between the expression profiles and the distances in the low-dimensional representation. Because higher expression levels are to correspond to shorter distances, the summation runs only over those genes for which there is a negative correlation between the expression levels and the Euclidean distances. To find X and Y that maximize (1), we reformulate this optimization problem to an equivalent one, namely minimizing (2) with respect to ai, X, and Y under the constraint that ai \u2265 0 for all i; see the appendix for a proof of the equivalence. For reasons given below, two more constraints are added to the optimization problem: First, the ai weights are bounded by an upper bound u (such that 0 \u2264 ai \u2264 u); and second, ||xi|| \u2264 1 for all i in a space centered at the point of gravity of the condition coordinates yj. Note that centering can be done without loss of generality. For the minimization of (2) (with var and std denoting variance and standard deviation respectively) with respect to the ai's, X, and Y under the constraints 0 \u2264 ai \u2264 u and ||xi|| \u2264 1, we propose the algorithm GENEFOLD. The update of the ai weights can be done on the basis of a closed form expression (see appendix). The update of the gene coordinates X under the constraint ||xi|| \u2264 1 for all i, as well as the update of the condition coordinates Y, are based on a numerical technique called iterative majorization.
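Criterion (1), the average squared correlation between each gene's expression profile and its Euclidean distances to the condition points (counting only genes with negative correlation), might be computed as in this sketch; the toy data at the bottom are invented for illustration.

```python
import math

def pearson(u, v):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

def mdu_fit(E, X, Y):
    """Average squared gene-wise correlation between expression profiles E
    and gene-to-condition distances; only negative correlations contribute."""
    total = 0.0
    for ei, xi in zip(E, X):
        d = [math.dist(xi, yj) for yj in Y]  # distances to all condition points
        r = pearson(ei, d)
        if r < 0:                            # shorter distance <-> higher expression
            total += r * r
    return total / len(E)

# Toy check: one gene at the origin, conditions on a line; expression falls
# off with distance, so the correlation is exactly -1 and the fit is 1.
E = [[3.0, 2.0, 1.0]]
X = [(0.0, 0.0)]
Y = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
fit = mdu_fit(E, X, Y)
```

GENEFOLD maximizes exactly this quantity, but does so indirectly by minimizing the equivalent weighted loss (2).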
GENEFOLD solves both the general MDU problems and the problems that are specific for gene expression data. With respect to the general problems: first, convergence is guaranteed because the proposed algorithm yields a non-increasing sequence of loss values for a function which is bounded below (by zero). Second, the problem of a lack of discrimination of the coordinates such that a majority of points stick together (which is known as the degeneracy problem in MDU) is solved by the constraints: 1) due to the restriction ||xi|| \u2264 1 in a space centered at the point of gravity of the condition coordinates, the gene points lie on the unit sphere, and 2) limiting ai to values smaller than or equal to u, with the value of u well chosen (see the appendix), pulls the variance of the distances di towards the variance of the distances on the unit sphere with uniformly distributed points. With regard to problems that are specific for the analysis of (large) gene expression data sets: first, GENEFOLD works on considerably smaller matrices than the (n + m) \u00d7 (n + m) matrices used in classical procedures for MDU; second, GENEFOLD does not rely on computationally intensive methods like matrix inverses. We applied multidimensional unfolding to two publicly available data sets, one situated in an experimental context and one in a clinical setting; the first data set is a time-course experiment. Because our unfolding algorithm relies on an iterative procedure with a non-convex solution space and a prespecified dimensionality, some consideration has to be given to the choice of a stopping rule, to the problem of local minima, and to the dimensionality of the configuration. With respect to the stopping rule, we chose to terminate the iterative procedure when the difference in loss between the current and previous solution was less than 10-5; our experience with this value is that it yields stable solutions in a reasonable amount of time. The problem of local minima was accounted for by restarting the algorithm 101 times, using 100 semi-rational starts and a rational start for the initial coordinates, the solution with the lowest loss being retained. The dimensionality of the configuration is determined by a comparison of loss values: For the one- up to five-dimensional solutions, the loss was respectively 0.55, 0.25, 0.19, 0.15, 0.11, which suggests a two-dimensional configuration (one dimension less results in a considerable increase in loss while more dimensions barely reduce the loss). For the two-dimensional configuration, a very good fit was obtained: The average genewise correlation between the distances and the raw data is -0.86. A visual representation of the solution is depicted in Figure . A striking feature of Figure is that the time points are organized in a circular way. Taking a look at the genes, we see that these, too, are organized in a circular way with a blank spot in the middle. Another feature to look at is the location of the majority of the genes: Most are located close to the earliest and latest time points, whereas only a few genes are located at intermediate time points. The expression of a gene at the different time points is reflected by the distances from this gene to the time points: The closer a time point is located to a gene, the higher the expression level or, conversely, the more distant a time point is from a gene, the lower the expression level. Note that for these data, we know that the expression at 0 hr corresponds to a neutral state, such that higher expression levels indicate induction and lower expression levels repression. This means that induction occurs at time points close by while repression occurs at distant time points.
For ease of interpretation, we used distinctive labels for genes with an induction peak and with a repression peak: Genes with an induction peak are those for which the difference in distance between the reference time point and the time point closest to the gene point is larger than the difference in distance between the reference time point and the time point furthest from the gene point. Remember further that the distances between gene and condition points inversely reflect the expression level. Consider, for example, the gene represented by the square numbered one in Figure . As illustrated by Figure , the configuration can be read as k periods falling together; Taguchi and Oono analyzed these data as well. Many applications of gene expression profiling can be found in clinical settings where genome-wide expression is measured for different patient or tissue groups. A challenge for the exploratory MDU tool within this setting may be to retrieve useful information beyond a mere identification of genes that optimally discriminate between the different groups. We consider gene expression data for 62 colon tissues, 40 of which are tumorous (colon adenocarcinoma) and 22 normal; again a good fit was obtained (genewise correlations \u2264 -0.80). Taking a look at the genes in Figure , for the genes closer to the normal tissues we found results consistent with those reported in the literature.
For example, in the first application, an intriguing clock-like structure for the time points was revealed, a pattern that has not been uncovered in a direct way up to present for these well-studied data; in the second application, the unfolding analysis revealed an intriguing subdivision of the cancer tissue groups, beyond a mere discrimination of normal and tumor tissues. A possible limitation of the unfolding approach as presented here, is that in case of a large number of heterogeneous conditions, low-dimensional configurations can be obtained that are mainly blurred due to the actual high-dimensional structure of the data. Also, a huge number of genes can result in a configuration that provides little insight into the data. A possible way to overcome this limitation could be the use of a hybrid approach that results in low-dimensional distance-based representations of clustered data. Such an approach, has already been proposed for multidimensional scaling and for We show the equivalence of minimizing loss function (2) and maximizing (1). Consider the loss function for one gene,ai which, under the constraint ai \u2265 0, reaches its minimum atEquation (3) is a plain quadratic form in r < 0, and at 0 if r \u2265 0. Substituting the optimal ai's in loss function (2) yieldsif Obviously minimizing (5) is equivalent to maximizing (1).ais to an upper bound u attracts the solution space to configurations with a positive lower bound on std(di), the spread of the distances, and this will be the more so the stronger the distances correlate with the expression profiles. 
A suitable value for u depends on the range of the coordinates: We propose to work in a reference space centered at the point of gravity of the condition coordinates and with ||xi|| \u2264 1 (such that the gene coordinates lie within the unit sphere).From Equation (4), it can be seen that subjecting the ai is set equal to u = (mv)-0.5 with v the variance of the Euclidean distance from a point i to points sampled uniformly in the unit sphere of dimensionality p with v calculated using Monte Carlo simulation. Using this upper bound for ai will, for a maximal (absolute) correlation r = -1, pull the variance of the distances towards v or larger values. The constrained update for ai becomes thenFor this reference space, the upper bound for KVD developed the algorithm and applied it to the two gene expression data sets discussed. IVM, KM, and KVD drafted the manuscript. KM and KE made substantial contributions to the biological background. WJH and IVM scrutinized the unfolding method and its application to gene expression data. All authors read and approved the final manuscript."}
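The Monte Carlo estimate of v described above (the variance of the Euclidean distance from a point to points sampled uniformly in the unit sphere), together with the resulting upper bound u = (mv)^-0.5, can be sketched as follows. This is a minimal illustration, not the authors' code; function names are ours, and m is used as it appears in the paper's formula.

```python
import numpy as np

def sample_unit_ball(n, p, rng):
    """Draw n points uniformly in the p-dimensional unit ball via rejection sampling."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0, size=p)
        if np.dot(x, x) <= 1.0:
            pts.append(x)
    return np.array(pts)

def distance_variance(point, n_samples=20000, p=2, seed=0):
    """Monte Carlo estimate of v: the variance of the Euclidean distance
    from `point` to points sampled uniformly in the unit sphere."""
    rng = np.random.default_rng(seed)
    samples = sample_unit_ball(n_samples, p, rng)
    d = np.linalg.norm(samples - point, axis=1)
    return d.var()

def upper_bound(m, v):
    """Upper bound u = (m * v) ** -0.5 on the a_i, per the paper's formula."""
    return (m * v) ** -0.5
```

For a point at the center of a 2-dimensional unit disk, the exact variance of the distance is 1/18, which the Monte Carlo estimate should approach.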
+{"text": "Circadian rhythms are prevalent in most organisms. Even the smallest disturbances in the orchestration of circadian gene expression patterns among different tissues can result in functional asynchrony at the organism level and may contribute to a wide range of physiologic disorders. It has been reported that as many as 5%\u201310% of transcribed genes in peripheral tissues follow a circadian expression pattern. We have conducted a comprehensive study of circadian gene expression on a large dataset representing three different peripheral tissues. The data have been produced in a large-scale microarray experiment covering replicate daily cycles in murine white and brown adipose tissues as well as in liver. We have applied three alternative algorithmic approaches to identify circadian oscillation in time series expression profiles. Analyses of our own data indicate that the expression of at least 7% to 21% of active genes in mouse liver and in white and brown adipose tissues follows a daily oscillatory pattern. Indeed, analysis of data from other laboratories suggests that the percentage of genes with an oscillatory pattern may approach 50% in the liver. For the rest of the genes, oscillation appears to be obscured by stochastic noise. Our phase classification and computer simulation studies based on multiple datasets indicate no detectable boundary between oscillating and non-oscillating fractions of genes. We conclude that greater attention should be given to the potential influence of circadian mechanisms on any biological pathway related to metabolism and obesity. The metabolism of living organisms changes over the twenty-four hour daily cycle in an oscillatory manner. This repeating pattern of \u201cpeak\u201d and \u201ctrough\u201d expression is known as a \u201ccircadian rhythm.\u201d We now know that the body's internal clock is controlled by a discrete group of genes. 
These important regulators are found in many different organs of the body, and they control expression of many other genes in the body. Using mice as an experimental animal, Ptitsyn and colleagues looked at the overall pattern of gene expression in fat tissues and the liver using three different mathematical tests. They present data indicating that the majority of active genes fluctuate rhythmically over a twenty-four hour period. This work suggests that future studies should pay close attention to the influence of the circadian rhythm in obesity and in fat metabolism. The circadian, or daily, rhythm is one of the most obvious and well-studied periodic processes in living organisms. Studies of transcriptional output in different tissues report that expression of approximately 5%\u201315% of all mammalian genes show a circadian oscillation . This ciWe have completed independent circadian studies in AKR/J mice acclimated to a 12 h light: 12 h dark cycle, harvesting sets of three to five mice at 4-h intervals . Totalg-test of periodogram, autocorrelation, and the permutation test. The Fisher's g-test estimates nonrandomness of the dominating frequency in the periodogram from the signal-to-noise ratio. In our case, the signal is a diurnal frequency, reflected by a specific peak in the periodogram, and the noise level is estimated from the height of all other frequencies represented in the periodogram , Per1, Per2, Per3, and Cry1, as well as the circadian output gene Dbp. Our qRT-PCR studies , then the periodogram exhibits a peak at that frequency with a high probability. Conversely, if the time series is a purely random process , then the plot of the periodogram against the Fourier frequencies approaches a straight line and p is the largest integer less than 1/x.where p-value distribution, and hence is adaptive to the actual data [To account for multiple testing problems, we employ the method of FDR as a multiple comparison procedure . 
This meual data .p-values p(1), p(2), . . . , p(G) with corresponding genes g(1), g(2), . . . , g(G), and apply the following algorithm:Consider the set of ordered q [It has been shown that this procedure controls the FDR at level q . This alq with excY = {x0,x1,x2, \u2026 xN\u22121}, in which technical variation approaches or even exceeds the amplitude of periodic expression. In a very short time series there is a significant probability to observe a periodicity due to stochastic reasons. However, the periodic change of the base expression level can still be identified in spite of the high noise level. Let YR be a random permutation of the time series Y and its corresponding periodogram IR(\u03c9). If the periodogram IY(\u03c9) contains a significant peak corresponding to a particular frequency this peak results from a particular order of observations in the series Y. A random permutation would preserve the same noise level, but not the periodicity. After DFT, a periodogram IR(\u03c9) represents only the peaks occurring by chance. To avoid random reinstitution of periodicity of length T (in this case circadian), we generate YR by multiple shuffling of randomly selected time points n!,1000) random permutations. Each permutated series YR is transformed to the frequency domain and a single peak of the periodogram IR(\u03c9) is stored. The p-value for the null hypothesis of random nature of a particular peak of periodogram can be estimated by comparing the stored IR(\u03c9) values to the observed I(\u03c9):The alternative test for significance of a particular (in our case circadian) periodicity among large numbers of gene expression profiles is based on the random permutation procedure. Consider a time series K is the number of permutated series YR for which the circadian peak of periodogram is higher or equal to that of the original time series Y. 
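The permutation test described above can be sketched as follows. This is a simplified illustration using plain random permutations; the paper's guarded shuffling, which avoids randomly reinstating periodicity of length T, is omitted, and function names are ours.

```python
import numpy as np

def circadian_peak(series, period=6):
    """Height of the periodogram at the circadian frequency.
    With N samples and `period` samples per day, the circadian
    frequency corresponds to index N // period of the DFT periodogram."""
    n = len(series)
    spec = np.abs(np.fft.rfft(series - np.mean(series))) ** 2 / n
    return spec[n // period]

def permutation_pvalue(series, period=6, n_perm=1000, seed=0):
    """p-value as described in the text: the fraction K / n_perm of
    permuted series Y_R whose circadian periodogram peak is greater
    than or equal to that of the original series Y."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    observed = circadian_peak(series, period)
    k = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(series)  # same noise level, periodicity destroyed
        if circadian_peak(shuffled, period) >= observed:
            k += 1
    return k / n_perm
```

A strongly rhythmic profile (a cosine with a six-datapoint period) should yield a small p-value, since almost no permutation concentrates as much power at the circadian frequency.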
A high p-value exceeding the threshold, for example 0.05, means that at least 5 out of 100 random permutations of the time series produce a periodogram with the same or a higher peak corresponding to the given periodicity. Low p-values indicate a significant difference between periodograms IR(\u03c9) preserving circadian periodicity and purely random periodograms with the same level of technical variation. For a given discrete time series Y = {x0,x1,x2, \u2026 xN\u22121}, the autocorrelation is simply the correlation of the expression profile against itself with a frame shift of k datapoints, with the shifted index defined as i + k if i + k < N and i + k \u2212 N otherwise. For each time series we calculate the maximum positive autocorrelation R(f) among all possible phase shifts f and use 0.05 significance cutoff values for the correlation coefficient. Time series that show significant autocorrelation R(f) with the lag f corresponding to one day (six datapoints) are considered circadianly expressed. We have assigned a phase to each expression time series by computing the cross-correlation, where x is a gene expression time series of N points, y is an artificially generated profile of an ideal cosine function, and p is the number of time points in a complete circadian cycle; for example, p = 6 time points in the Zvonic et al. dataset. Genes were sorted by the p-value estimated by one of the described algorithms. The heatmap was generated from the table of sorted time series expression profiles using Spotfire Decisionsite software (Spotfire). Figure S1. Figure S2: Phase is assigned to each expression profile based on the maximal correlation to an artificial cosinusoid profile with a given phase shift. Phase I starts with a peak value at time zero, thus there is a peak in the middle and a rise at the end. For other phases there are two red zones, corresponding to the peak expression values, spaced by dark or green areas. 
This pattern extends far beyond the 575 out of 12,486 genes reported in . Figure S3: Phase is assigned to each expression profile based on the maximal correlation to an artificial cosinusoid profile with a given phase shift. Phase I starts with a peak value at time zero, thus there is a peak in the middle and a rise at the end. For other phases there are two red zones, corresponding to the peak expression values, spaced by dark or green areas. This pattern is prominent across the absolute majority of expressed genes, not merely 10%\u201315% of each phase category. Figure S4: In all three tissues, the mean expression level (raw) is plotted on the abscissa (x-axis) and the corresponding p-value on the ordinate (y-axis). Protocol S1. Table S1: The relative abundance of KEGG biological pathways represented in the subset of transcripts for which oscillation is detected in all three tissues. Mapping to the KEGG database was performed using the DAVID online service (http://david.niaid.nih.gov/david). Table S2."}
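The autocorrelation test with a one-day lag and the cosine-template phase assignment described in this study can be sketched roughly as follows. This is a schematic re-implementation with circular index wrap-around (i + k − N when i + k ≥ N) as in the text; function names are ours.

```python
import numpy as np

def circular_autocorr(x, lag):
    """Correlation of an expression profile against itself with a
    frame shift of `lag` datapoints; indices wrap around (i + lag - N)."""
    x = np.asarray(x, dtype=float)
    shifted = np.roll(x, -lag)
    return np.corrcoef(x, shifted)[0, 1]

def assign_phase(x, period=6):
    """Assign a phase by maximal correlation with ideal cosine profiles
    shifted over one full circadian cycle (p = `period` time points/day)."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    best_phase, best_r = 0, -np.inf
    for phase in range(period):
        template = np.cos(2 * np.pi * (t - phase) / period)
        r = np.corrcoef(x, template)[0, 1]
        if r > best_r:
            best_phase, best_r = phase, r
    return best_phase, best_r
```

With six datapoints per day, a circadianly expressed profile should show a strong positive autocorrelation at lag 6, and its assigned phase should match the shift of its underlying cosine.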
+{"text": "Reverse engineering of a gene regulatory network with a large number of genes and a limited number of experimental data points is a computationally challenging task. In particular, reverse engineering using linear systems is an underdetermined and ill conditioned problem, i.e. the amount of microarray data is limited and the solution is very sensitive to noise in the data. Therefore, the reverse engineering of gene regulatory networks with a large number of genes and a limited number of data points requires a rigorous optimization algorithm.This study presents a novel algorithm for reverse engineering with linear systems. The proposed algorithm is a combination of orthogonal least squares, a second order derivative for network pruning, and Bayesian model comparison. In this study, the entire network is decomposed into a set of small networks that are defined as unit networks. The algorithm provides each unit network with P(D|Hi), which is used as a confidence level. A unit network with a higher P(D|Hi) has a higher confidence that the unit network is correctly elucidated. Thus, the proposed algorithm is able to locate true positive interactions using P(D|Hi), which is a unique property of the proposed algorithm. 1) Simulation results show that the algorithm can be used to elucidate gene regulatory networks using a limited number of experimental data points. 2) Simulation results also show that the algorithm is able to handle the problem with noisy data. 3) The experiment with yeast expression data shows that the proposed algorithm reliably elucidates known physical or genetic events. 4) The comparison experiments show that the algorithm performs more efficiently than the Sparse Bayesian Learning algorithm with noisy and limited data. High-throughput technologies such as DNA microarrays provide the opportunity to elucidate the underlying complex cellular networks. There are now many genome-wide expression data sets available. 
As an initial step, several computational clustering analyses have been applied to expression data sets to find sets of co-expressed and potentially co-regulated genes -5. As a In the reverse engineering of GRN, essential tasks are developing and comparing alternative GRN models to account for the data that are collected Figure . There aMicroarrays have been used to measure genome-wide expression patterns during the cell cycle of different eukaryotic and prokaryotic cells. The review paper of Cooper and Shedden presentsunderdetermined, which means that there is substantially greater number of genes than the number of measurements. Secondly, reverse engineering of GRN with linear systems is ill conditioned because small relative changes in design matrix E in Eq. 2 due to the noise make substantially large changes in the solution. Therefore, the reverse engineering algorithm named as Bayesian orthogonal least squares (BOLS) is developed to overcome these difficulties. The BOLS method is created by combining three techniques: 1) Orthogonal Least Squares method (OLS) T. Yi is a column matrix of expression levels for target genes, which is defined as Yi = T. E is a N-1 \u00d7 K design matrix, which is defined as E = and ei = T. ni is an Gaussian noises, which is defined as ni = T.where i = 1, 2,..., K. i(t)s in both Eq. 1 and 2 are same and noisy ones. Eq. 1 describes that the current expression levels ei(t)s are determined depending on the previous ones ei(t-1) and noises \u03bei(t-1) and the expression levels are evolved over the time based on the GRN. Hence, \u03bei(t) is a noise added into ei(t)s during the generation of synthetic or real expression levels based on DBN model. Once the \"noisy\" expression levels are available, we consider Eq. 2 for reverse engineering of GRN. Because the given expression levels ei(t)s in Eq. 2 are noisy, we should have a condition such that |Yi - Ewi| = |ni| > 0. 
If ni = 0, we will have over-fitting solutions, in which model fitting oscillates widely so as to fit the noise. Thus, we can say that ni is a noise related with \"data misfit\" or \"confidence interval\" on the best fit parameters. On the other hand, \u03bei(t) is a noise related with the \"generation\" of expression levels. If ni is modeled as zero-mean Gaussian noise with standard deviation \u03c3n, the probability of the data given the parameter wi isIt should be noted that the expression levels e\u03b2 = 1/\u03c3n2, ED = (Yi-Ewi)T(Yi-Ewi), and ZD = (2\u03c0/\u03b2)N/2. P is called the maximum likelihood. It is well known that maximum likelihood is underdetermined and ill conditioned problems. Thus, we are motivated to develop a novel strategy to overcome these problems.where K \u00d7 K graph matrix G = , with binary elementWe decompose the entire network into a set of small networks that are defined as unit networks. Each unit network consists of one particular target gene and its regulator genes. The unit network is used as input and output of BOLS algorithm. Figure i and node j if and only if gi, j = 1. For each edge, we store the information of unit network where it belongs. The information of these sets can be easily obtainable from all unit networks. Thus, the combination of all unit networks for creating whole GRN is easy and straightforward procedure.Furthermore, this matrix induces a GRN, in which nodes corresponds to genes and an edge joins nodes As briefly described in background section, reverse engineering with a linear system with limited data has to overcome two difficulties. In this section, we describe our efforts to overcome these challenges by developing BOLS algorithm.underdetermined when the number of parameters is larger than the number of available data points, so that standard least squares techniques break down. 
This issue can be solved with the OLS , XN-1 \u00d7 K = and UK \u00d7 K is a triangular matrix with 1's on the diagonal and 0's below the diagonal, that is,where ELet's say that w is the regression parameter inferred by E and g is the regression parameter inferred by X. It is noted that g and w satisfy the triangular systemThe computational procedure of Gram-Schmidt method is described asi and xj (i \u2260 j) are orthogonal to each other, the sum of square of Y is defined asBecause xi is defined asThe variance of YiTxi)gi2/N-1 is the variance of Yi which is contributed by the regressors and niTni/N-1 is the noise (or unexplained) variance of Yi. Hence, (xiTxi)gi2/N-1 is the increment to the variance of Yi contributed by wI, and the error reduction ratio only due to xi can be defined asIt is noticed that At the first step, for 1 \u2264 i < K, computeFindand selectth step where j \u2265 2, for 1 \u2264 i \u2264 K, i \u2260 i1,..., i \u2260 = ij-1, compute(2) At the jFindand selectmj = umj(ij), 1 \u2264 m < j.where usth step when(3) The OLS is terminated at the K\u03c1 < 1 is a chosen tolerance.where 0 <\u03c1 << 1 (\u03c1 = 1.0e-3 in this study). Then we reduce unnecessary parameters to deal with the ill-conditioned problem by using second order derivative for network pruning techniques and Bayesian model comparison framework. We can obtain the optimal solution by trading off between the complexity of the model and the data misfit T each column matrix in X. The regression parameters we want to infer are g = T. The log posterior probability of data D, given \u03b1 and \u03b2, can be derived [With a Bayesian frame we can c derived as,Most Probable. The evidence P(D|Hi) can be obtained if we marginalize the probability defined in Eq. 5 over the hyper-parameters \u03b1 and \u03b2. Before the estimation of the evidence P(D|Hi), we have to find the most probable value of the hyper-parameters \u03b1 and \u03b2. The differentiation of Eq. 
5 over \u03b1 and \u03b2 and the rearrangement gives formulae for the iterative re-estimation of \u03b1 and \u03b2 [where the subscript MP denotes \u03b1 and \u03b2 ,\u03b3 = N - \u03b1Trace(A-1), g = A-1XTY, N is number of data points, and K is number of variables (genes). To rank alternative structures (or complexities) of the model in the light of data set D, we evaluate the evidence by marginalizing the posterior probability P over \u03b1 and \u03b2,where \u03b1 and \u03b2. When the available prior information is minimal, the learning process is often started with an objective prior probability. This uninformative prior probability is referred to as \"vague prior\" for a parameter with a range from 0 to \u221e, which is a flat prior [\u03b1, \u03b2, Hi). The marginalization of P over \u03b1 and \u03b2 can be estimated using a flat prior and Gaussian integration [We have very little prior information about at prior . This pregration ,\u03c3\u03b1|D logand \u03c3\u03b2|D logare the error bars on log\u03b1 and log\u03b2, found by differentiating Eq. 5 twice:where In this study, we create a novel reverse engineering algorithm for linear systems with K number of genes using three techniques described above. The algorithm is run K times so that all genes in the data set are considered as a target gene at least once. The algorithm of BOLS for a unit-network construction is summarized as1. Set certain gene as the target gene and set remaining genes as regulator candidates as input.2. Over-fit the data to Eq. 2 using OLS.3. While the number of parameters is greater than 1.\u03b1 and \u03b2 with iterative re-estimation Eq. 6 and 7.3.1 Estimate i) for the current state network Hi with Eq. 8.3.2 Compute the P as an output unit network.4. Select the network with the maximum P(D|HCSK developed BOLS algorithm, performed experiments for evaluation of BOLS algorithm, and drafted and finalized manuscript.Supplementary Information. 
This description provides the comparison of performance between BOLS and SBL using the synthetic data generated by Rogers and Girolami [Comparing the performance between BOLS and SBL using the data set generated based on Rogers and Girolami's study \u2013 Girolami .Click here for file"}
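The Gram-Schmidt-based OLS forward selection with the error reduction ratio, used in the first stage of BOLS, can be sketched as follows. This is a schematic re-implementation, not the authors' code: at each step the candidate regressor whose orthogonalized column gives the largest error reduction ratio [err]_i = (x_i^T x_i) g_i^2 / (Y^T Y) is selected, and selection stops when 1 \u2212 \u03a3 err < \u03c1 (\u03c1 = 1.0e-3 as in the paper).

```python
import numpy as np

def ols_forward_select(E, Y, rho=1e-3):
    """Greedy OLS forward selection via Gram-Schmidt orthogonalization.
    Returns the indices of the selected regressor columns of E."""
    E = np.asarray(E, dtype=float)
    Y = np.asarray(Y, dtype=float).ravel()
    n, k = E.shape
    yty = Y @ Y
    selected, basis, err_sum = [], [], 0.0
    for _ in range(k):
        best_i, best_err, best_x = None, 0.0, None
        for i in range(k):
            if i in selected:
                continue
            x = E[:, i].copy()
            for b in basis:  # orthogonalize against already-selected regressors
                x -= (b @ E[:, i]) / (b @ b) * b
            xtx = x @ x
            if xtx < 1e-12:  # numerically dependent column
                continue
            g = (x @ Y) / xtx
            err = xtx * g * g / yty  # error reduction ratio for this candidate
            if err > best_err:
                best_i, best_err, best_x = i, err, x
        if best_i is None:
            break
        selected.append(best_i)
        basis.append(best_x)
        err_sum += best_err
        if 1.0 - err_sum < rho:  # tolerance reached; stop adding regressors
            break
    return selected
```

When the target profile is an exact linear combination of two columns, the procedure should pick exactly those two and stop, since the explained variance then reaches 1.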
+{"text": "Halobacterium under novel perturbations.The Inferelator, a method for deriving genome-wide transcriptional regulatory interactions, successfully predicted global expression in Halobacterium NRC-1. The Inferelator uses regression and variable selection to identify transcriptional influences on genes based on the integration of genome annotation and expression data. The learned network successfully predicted Halobacterium's global expression under novel perturbations with predictive power similar to that seen over training data. Several specific regulatory predictions were experimentally tested and verified.We present a method (the Inferelator) for deriving genome-wide transcriptional regulatory interactions, and apply the method to predict a large portion of the regulatory network of the archaeon Distilling regulatory networks from large genomic, proteomic and expression data sets is one of the most important mathematical problems in biology today. The development of accurate models of global regulatory networks is key to our understanding of a cell's dynamic behavior and its response to internal and external stimuli. Methods for inferring and modeling regulatory networks must strike a balance between model complexity (a model must be sufficiently complex to describe the system accurately) and the limitations of the available data .A major challenge is to distill, from large genome-wide data sets, a reduced set of factors describing the behavior of the system. The number of potential regulators, restricted here to transcription factors (TFs) and environmental factors, is often on the same order as the number of observations in current genome-wide expression data sets. Statistical methods offer the ability to enforce parsimonious selection of the most influential potential predictors of each gene's state. 
A further challenge in regulatory network modeling is the complexity of accounting for TF interactions and the interactions of TFs with environmental factors . A third challenge and practical consideration in network inference is that biology data sets are often heterogeneous mixes of equilibrium and kinetic (time series) measurements; both types of measurements can provide important supporting evidence for a given regulatory model if they are analyzed simultaneously. Last, but not least, is the challenge resulting from the fact that data-derived network models be predictive and not just descriptive; can one predict the system-wide response in differing genetic backgrounds, or when the system is confronted with novel stimulatory factors or novel combinations of perturbations?de novo, at the Boolean level [A significant body of work has been devoted to the modeling and learning of regulatory networks -3. In than level -11.Additive linear or generalized linear models take an intermediate approach, in terms of model complexity and robustness -15. SuchLearning and/or modeling of regulatory networks can be greatly aided by reducing the dimensionality of the search space before network inference. Two ways to approach this are limiting the number of regulators under consideration and grouping genes that are co-regulated into clusters. In the former case, candidates can be prioritized based on their functional role . In the latter case, gene expression clustering, or unsupervised learning of gene expression classes, is commonly applied. It is often incorrectly assumed that co-expressed genes correspond to co-regulated genes. However, for the purposes of learning regulatory networks it is desirable to cluster genes on the basis of co-regulation as opposed to simple co-expression. Furthermore, standard clustering procedures assume that co-regulated genes are co-expressed across all observed experimental conditions. 
Because genes are often regulated differently under different conditions, this assumption is likely to break down as the quantity and variety of data grow.cis-acting regulatory motifs in the regulatory regions of bicluster members; and the presence of highly connected subgraphs in metabolic [Biclustering was developed to address better the full complexity of finding co-regulated genes under multifactor control by grouping genes on the basis of coherence under subsets of observed conditions ,16-22. Wetabolic and funcetabolic -26. BecaHalobacterium NRC-1.Here we describe an algorithm, the Inferelator, that infers regulatory influences for genes and/or gene clusters from mRNA and/or protein expression levels. The method uses standard regression and model shrinkage (L1 shrinkage) techniques to select parsimonious, predictive models for the expression of a gene or cluster of genes as a function of the levels of TFs, environmental influences, and interactions between these factors . The proHalobacterium NRC-1. The Halobacterium genome contains 2,404 nonredundant genes, of which 124 are annotated to be known or putative TFs [We applied our method to the Halophilic archaeon tive TFs ,29. The tive TFs ,31. Sevetive TFs . Stronglcis-acting regulatory motifs in bicluster upstream sequences. Biclustering resulted in 300 biclusters covering 1,775 genes. 
An additional 159 genes, which exhibited significant change relative to the common reference across the data set, were determined by cMonkey to have unique expression patterns and were thus not included in biclusters; these 159 genes were inferred individually.The cMonkey method was applied to this data set to bicluster genes and conditions, on the basis of the gene expression data, a network of functional associations, and the occurrence and detection of Halobacterium NRC-1, and several TFs have their activity modulated by unobserved factors ; the regulatory relations for many genes are therefore not visible, given the current data set. Figure Halobacterium NRC-1 in Cytoscape, available as a Cytoscape/Gaggle web start [The regulatory network inference procedure was then performed on these 300 biclusters and 159 individual genes, resulting in a network containing 1,431 regulatory influences (network edges) of varying strength. Of these regulatory influences, 495 represent interactions between two TFs or between a TF and an environmental factor. We selected the null model for 21 biclusters , indicating that we are stringently excluding under-determined genes and biclusters from our network model. The ratio of data points to estimated parameters is approximately 67 . Our data set is not complete with respect to the full physiologic and environmental repertoire for eb start ,34.kaiC knockout strain, the influence of kaiC can be removed from the equation by setting its weight to zero). We discuss the predicted regulatory model for bicluster 76 further below.An example of the predicted regulation of a single bicluster, bicluster 76 . In this way we evaluated the predictive performance of the inferred network both on experiments in the training data set and on the 24 experiments in the independent test set (which we refer to as the newly collected data set). 
The expression level of a bicluster is predicted from the level of TFs and environmental factors that influence it in the network, at the prior time point (for time course conditions) or the current condition (for steady state conditions). The error estimates for the 300 biclusters and 159 single genes are shown in Figures We evaluated the ability of the inferred network model to predict the expression state of Although the majority of biclusters have new data RMS values well matched by the training set RMS values, there are also nine biclusters with RMS values significantly higher in the new data than in the training data. We were unable to identify any features of these outlying biclusters that distinguish them from other biclusters. We also investigated predictive performance for the 159 genes that were not included in biclusters by cMonkey. We found good predictive performance (over the new data as well as over the training data) for approximately half of these genes - a much lower success rate than seen for genes represented by biclusters. There are a number of possible explanations for this diminished ability to predict genes that also elude biclustering. Averaging expression levels over genes that are co-regulated within biclusters can be thought of as signal averaging, and thus single genes are more prone to both systematic and random error than bicluster expression levels. Another possible explanation is that these elusive genes are under the influence of TFs that interact with unobserved factors, such as metabolites. There are also about five conditions that we fail to predict well relative to the other 264 conditions and produces significantly more parsimonious models. They also show that models constrained to a single predictor per bicluster perform significantly worse over the new data . 
Finally, the additional tests show that our inclusion of interactions in the current model formulation improves predictive power .We also performed several tests to determine how well our model formulation and fitting procedure performed compared with three simplified formulations, as described in detail in Additional data file 1. Briefly, these additional tests show that our current formulation for temporal modeling is essential to the performance of this procedure are members of the LrpA/AsnC family, regulators that are widely distributed across bacterial and archaeal species [Halobacterium NRC-1 genes was, before this study, unknown. We predict that four of the trh proteins play a significant role in coordinating the expression of diverse cellular processes with competing transport processes. Figure trh3, trh4, trh5, and trh7. There is significant similarity in the functions represented by the biclusters regulated by each of the trh proteins, giving some indication that the learned influences have biologic significance. Moreover, each trh protein regulates a unique set of biclusters. Using the predicted subnetwork we can form highly directed hypotheses as to the regulation mediating the homeostatic balance of diverse functions in the cell. Our prediction for trh3, for example, is that it is a repressor of phosphate and amino acid uptake systems and that it is co-regulated with (and thus a possible activator of) diverse metabolic processes involving phosphate consumption. Trh3 thus appears to be key to Halobacterium NRC-1 phosphate homeostasis . Similar statements/hypotheses can be extracted from the learned network for other regulators of previously unknown function; in this way, the network represents a first step toward completing the annotation of the regulatory component of the proteome. Figure The species . 
Their sWe now briefly describe three cases in which predicted regulatory influences were supported by further experimentation.VNG1179C and VNG6193H - two regulators with putative metal-binding domains [VNG1179C and/or VNG6193H are transcriptional activators of yvgX (a member of bicluster 254). VNG1179C is a Lrp/AsnC family regulator that also contains a metal-binding TRASH domain [VNG1179C and yvgX (one of the proposed targets and known copper transporter) resulted in similar diminished growth in presence of Cu. Furthermore, recent microarray analysis confirmed that, unlike in the wild-type, yvgX transcript levels are not upregulated by Cu in the VNG1179C deleted strain. This lack of activation of yvgX in the VNG1179C deletion strain resulted in poor growth in presence of Cu for strains with a deletion in each of the two genes .We predict that bicluster 254, containing a putative Cu-transporting P1-type ATPase, is regulated by a group of correlated TFs containing domains . These rH domain ,36. StraSirR was previously described as a regulator involved in resistance to iron starvation in Staphylococcus epidermidis and Staphylococcus aureus. SirR is possibly a Mn and Fe dependent transcriptional regulator in several microbial systems and a homolog to dtxR [S. epidermidis sirR in the Halobacterium genome but the role of this protein in the Halobacterium regulatory circuit has not been determined. We predicted that sirR and kaiC are central regulators, involved in regulation of biclusters associated with Mn/Fe transport, such as bicluster 76 . Figure sirR, kaiC) for all conditions, including time series, equilibrium measurements, knockouts, and new data. Note that regulatory influences for this bicluster were inferred only using the 189 conditions that cMonkey included in this bicluster; excluded conditions were either low-variance or did not exhibit coherent expression for the genes in this bicluster. 
SirR mRNA profiles over all 268 original experimental conditions are positively correlated with transcript level changes in these three genes. However, upon deleting sirR, mRNA levels of these three genes increased in the presence of Mn, suggesting that SirR functions as a repressor in the presence of Mn, in apparent contrast to our prediction. In fact, a dual role in regulation has been observed for at least one protein in the family of regulators to which SirR belongs, which functions as an activator and repressor under low and high Mn conditions, respectively. The third case concerns the general transcription machinery: Halobacterium NRC-1 has multiple copies of key components of its general transcription machinery (TfbA to TfbG and TbpA to TbpF). Ongoing studies are directed at determining the degree to which these multiple copies of the general TFs are responsible for differential regulation of cellular processes. We predict that TfbF is an activator of ribosomal protein encoding genes. The ribosomal protein encoding genes are distributed in seven biclusters; all seven are predicted to be controlled by TfbF. This prediction was verified by measuring protein-DNA interactions for TfbF by ChIP-chip analysis, as part of a systems-wide study of Tfb and Tbp binding patterns throughout the genome. We have presented a system for inferring regulatory influences on a global scale from an integration of gene annotation and expression data. Many novel gene regulatory relationships are predicted, and in instances where a comparison can be made, the inferred regulatory interactions fit well with the results of further experimentation and with what was known about this organism before this study. The inferred network is predictive of dynamical and equilibrium global transcriptional regulation, and our estimate of prediction error by CV is sound; this predictive power was verified using 24 new microarray experiments. 
The approach shows promising results for the halophilic archaeon Halobacterium NRC-1. The algorithm generates what can be loosely referred to as a 'first approximation' to a gene regulatory network. The results of this method should not be interpreted as the definitive regulatory network but rather as a network that suggests (possibly indirect) regulatory interactions. For example, bat is a known auto-regulator and is found in a bicluster with genes that it is known to regulate; in general, the current method will perform poorly in similar cases of auto-regulation because it is not capable of resolving such cases, and neither is the data set used in this work appropriate for resolving such cases. Although this method is clearly a valuable first step, only by carrying out several tightly integrated cycles of experimental design and model refinement can we hope to determine accurately a comprehensive global regulatory network for even the smallest organisms. Knockouts and over-expression studies, which measure the dependence of a gene's expression value on genetically perturbed factors, are valuable in verifying causal dependencies. Another important future area of research will be the inclusion of ChIP-chip data (or other direct measurements of TF-promoter binding) in the model selection process. In the present study we opted not to investigate the predictive performance of our method on simulated data. RNA and protein expression data sets have complex error structures, including convolutions of systematic and random errors, the estimation of which is nontrivial. Real-world data sets are also far from ideal with respect to sampling (our Halobacterium data set contains time series with sampling rates that range from one sample per minute to one every four hours). Instead, we evaluated our prediction error using CV. We have not discussed the topology of the derived network. 
We expect such data sets to be available soon for several organisms. We assume that the expression level of a gene, or the mean expression level of a group of co-regulated genes, y, is influenced by the levels of N other factors in the system: X = (x1, x2,...,xN). In principle, an influencing factor can be of virtually any type. We consider factors for which we have measured levels under a wide range of conditions; in this work we use TF transcript levels and the levels of external stimuli as predictors, and gene and bicluster transcript levels as the response. Methods for selecting which of these factors are the most likely regulators, among all possible regulatory influence factors, are described below. The relation between y and X is given by the kinetic equation:

\u03c4 dy/dt = -y + g(\u03b2\u2022Z) \u00a0\u00a0\u00a0 (1)

Here, Z = (z1(X),...,zP(X)) is a set of functions of the regulatory factors X. The coefficients \u03b2j, for {j = 1,2,...,P}, describe the influence of each element of Z, with positive coefficients corresponding to inducers of transcription and negative coefficients to transcriptional repressors. The choice zj(X) = xj for all j gives purely additive regulation, \u03b2\u2022Z = \u03a3\u03b2jxj. The constant \u03c4 is the time constant of the level y in the absence of external determinants. To accommodate nonlinear regulatory responses, various functional forms can be adopted for the function g, called the 'nonlinearity' or 'activation' function for artificial neural networks, and the 'link' function in statistical modeling. The function g often takes the form of a sigmoidal, or logistic, activation function:

g(x) = 1/(1 + e^-x) \u00a0\u00a0\u00a0 (2)

This form has been used successfully in models of developmental biology. In this work we take g to be the identity:

g(x) = x \u00a0\u00a0\u00a0 (3)

Both Equations 2 and 3 allow for computationally efficient determination of \u03b2 and are compatible with L1 shrinkage (described below). In this study we use Equation 3 because it allows for simultaneous determination of \u03b2 at several values of the shrinkage parameter (LARS). 
The simplified kinetic description of Equation 1 encompasses essential elements to describe gene transcription, such as control by specific transcriptional activators (or repressors), activation kinetics, and transcript decay, while at the same time facilitating access to computationally efficient methods for searching among a combinatorially large number of possible regulators. To better understand specific details of regulation, it will almost certainly be required to follow up on specific regulatory hypotheses using more mechanistically detailed descriptions. The experimental conditions are classified either as belonging to a steady-state experiment or a time series experiment. In some cases, we refer to conditions as 'equilibrium' or 'steady-state' measurements out of convenience, but cannot know whether the system, in any strict sense, is at equilibrium; we imply only that an attempt was made to allow the system to reach equilibrium and that we have no knowledge of prior time-points within the same study. By a suitable reformulation of the kinetic equation (Equation 1) for each of these two data classes, we can combine both types of measurements into a single expression to fit the model parameters \u03b2 and \u03c4. In a steady state, dy/dt = 0 and Equation 1 reduces to the following:

y = g(\u03b2\u2022Zss) \u00a0\u00a0\u00a0 (4)

where Zss is the measured value of Z in the steady state. For time series measurements, taken at times t1,...,tM, Equation 1 may be approximated as follows:

(\u03c4/\u0394tm)(ym+1 - ym) + ym = g(\u03b2\u2022zm) \u00a0\u00a0\u00a0 (5)

where \u0394tm = tm+1 - tm is the time interval between consecutive measurements, and ym and zmj are, respectively, the measured values of y and zj at time tm. In this formulation, we place no requirements on the regularity of \u0394tm, and can readily use data with differing time intervals between measurements. 
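As a concrete illustration of how the steady-state and time-series reformulations share a right-hand side, the following Python sketch (our own synthetic example, not the authors' code; g is taken as the identity) stacks both kinds of measurements into a single regression:

```python
import numpy as np

# Hypothetical illustration of combining Equations 4 and 5 into one regression.
# With g the identity, both reduce to: response = Z @ beta.
rng = np.random.default_rng(0)
tau = 10.0                        # assumed time constant (arbitrary units)
beta_true = np.array([0.8, -0.5])

# Steady-state conditions: y = g(beta . Z_ss)   (Equation 4)
Z_ss = rng.normal(size=(20, 2))
y_ss = Z_ss @ beta_true

# Time-series conditions with uneven sampling intervals dt_m:
# (tau/dt_m)(y_{m+1} - y_m) + y_m = g(beta . z_m)   (Equation 5)
t = np.array([0.0, 1.0, 3.0, 7.0, 15.0, 31.0])
Z_ts = rng.normal(size=(len(t) - 1, 2))
y = np.zeros(len(t))
for m in range(len(t) - 1):
    dt = t[m + 1] - t[m]
    # forward-Euler update consistent with Equation 5
    y[m + 1] = y[m] + (dt / tau) * (Z_ts[m] @ beta_true - y[m])

# Left-hand sides of Equations 4 and 5 stacked into one response vector:
lhs_ts = (tau / np.diff(t)) * np.diff(y) + y[:-1]
response = np.concatenate([y_ss, lhs_ts])
design = np.vstack([Z_ss, Z_ts])

beta_hat, *_ = np.linalg.lstsq(design, response, rcond=None)
# beta_hat recovers beta_true = [0.8, -0.5]
```

Because the simulated data are noiseless, ordinary least squares on the stacked system recovers the generating coefficients exactly; with real data the same stacking feeds the L1-shrunken regression described below.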
It is important to note, however, that if sampling is performed at intervals that are longer than the time scales at which specific regulatory interactions act, those regulatory interactions will be missed in the data sampling and, correspondingly, by the model inference method. In most cases, we have little prior information on the regulation time scale, and hence use Equation 5 for all conditions that were sampled during a part of a time course. A possible limitation is that the inference procedure may misinterpret or miss entirely a regulatory interaction that actually occurs at a faster time scale. Under the stimuli we have considered, steady state is reached by six hours post-stimulation, and samples collected in that time range are therefore analyzed using Equation 4. In short, this method does not lessen the need for correct experimental design, but it facilitates using data with reasonable variation in sampling structure as well as the combination of data from different experiments. In comparing Equations 4 and 5, it can be seen that the right hand sides are identical, allowing for simultaneous model fitting using equilibrium and time series data. Taking together all steady-state measurements and time course measurements, the left hand sides of Equations 4 and 5 can be combined into a single response vector, allowing \u03b2 to be fit with one of the many available methodologies for multivariate regression. In regression terminology, the influencing factors, X, are referred to as regressors or predictors, whereas the functions Z specify what is often referred to as the 'design matrix'. The time constant \u03c4 can be determined iteratively as follows. Beginning with an initial guess for \u03c4, first find the regression solution for \u03b2 using the multivariate regression methods of L1 shrinkage (described below); second, solve for a new \u03c4 that minimizes the prediction error given g(\u03b2\u2022Z); and third, repeat the first two steps until convergence. 
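The iterative determination of tau can be sketched as an alternating minimization. In this illustration (our own, with synthetic noiseless data) ordinary least squares stands in for L1 shrinkage, and the tau step is solved in closed form:

```python
import numpy as np

# Alternating estimation of tau and beta, as described above (simplified).
rng = np.random.default_rng(7)
tau_true, beta_true = 8.0, np.array([1.0, -0.6])
t = np.linspace(0, 60, 25)
Z = rng.normal(size=(len(t) - 1, 2))

# simulate y with the discretized kinetics of Equation 5
y = np.zeros(len(t))
for m in range(len(t) - 1):
    dt = t[m + 1] - t[m]
    y[m + 1] = y[m] + (dt / tau_true) * (Z[m] @ beta_true - y[m])

w = np.diff(y) / np.diff(t)        # finite-difference slope dy/dt
tau = 1.0                          # initial guess
for _ in range(200):
    # step 1: regression solution for beta given tau (LHS of Equation 5)
    beta, *_ = np.linalg.lstsq(Z, tau * w + y[:-1], rcond=None)
    # step 2: tau minimizing the squared error given beta (1-D quadratic)
    tau = w @ (Z @ beta - y[:-1]) / (w @ w)
beta, *_ = np.linalg.lstsq(Z, tau * w + y[:-1], rcond=None)
```

Because the objective is jointly convex in (tau, beta) for this linearized model, the alternation converges to the generating values; with noisy data and L1 shrinkage the same loop structure applies.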
If available, results from independent experiments can be used to estimate \u03c4. The \u03b2s and \u03c4s for all biclusters (300 in this work) and genes (159 singleton genes in this work) constitute the full model for the regulatory network. With a smooth g, interactions between predictors enter only through higher-order terms of the Taylor expansion of g(\u03b2\u2022Z); combinatorial logic, a useful paradigm for describing many regulatory interactions, is thus only accommodated in a limited manner. More transparent encoding and approximation of interactions can be made by allowing functions in Z to be either the identity function of a single variable or the minimum of two variables. For example, the inner product of the design matrix and linear coefficients for two predictors that are participating in an interaction is:

\u03b2\u2022Z = \u03b21x1 + \u03b22x2 + \u03b23 min(x1, x2) \u00a0\u00a0\u00a0 (6)

If x1 and x2 represent the levels of components forming an obligate dimer that activates y (x1 AND x2 required for expression of y), we would expect to fit the model such that \u03b21 = 0, \u03b22 = 0, \u03b23 = 1. This encoding results in a linear interpolation of (a linearly smoothed approximation to) the desired Boolean function. We apply L1 shrinkage to the regression by constraining the sum of the absolute values of the coefficients, with shrinkage parameter t; in the limit t = 1 we have the ordinary least squares estimate for \u03b2 (\u03b2 = \u03b2ols). We determine the optimal value for the shrinkage parameter by minimizing the prediction error (as estimated by tenfold CV), selecting the value of t that is 1 standard deviation from the minimum on the CV error curve, resulting in a fairly conservative/parsimonious estimate of the shrinkage parameter. We then determine \u03b2, and use the magnitude of \u03b2 to rank the individual interactions by significance. Predictors that are highly correlated are pre-grouped before this search, because they are unresolvable by any data-driven method. The calculation takes less than 1 day on a single top-of-the-line workstation (3 GHz AMD Opteron). 
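A minimal sketch of the min() encoding of Equation 6 combined with L1-shrunken regression. This uses a plain coordinate-descent LASSO of our own for self-containedness, not the LARS implementation the authors used, and all data are synthetic:

```python
import numpy as np

# Design matrix with a min() interaction term (Equation 6), fit with L1 shrinkage.
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
# obligate-dimer logic: y needs both x1 AND x2, so only the min term is active
y = 1.0 * np.minimum(x1, x2) + rng.normal(scale=0.01, size=n)

Z = np.column_stack([x1, x2, np.minimum(x1, x2)])   # identity + min terms

def lasso_cd(Z, y, lam, n_iter=500):
    """Plain coordinate-descent LASSO (soft-thresholding updates)."""
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        for j in range(Z.shape[1]):
            r = y - Z @ beta + Z[:, j] * beta[j]    # partial residual
            rho = Z[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (Z[:, j] @ Z[:, j])
    return beta

beta = lasso_cd(Z, y, lam=0.5)
# ranking by |beta| keeps the min() term and shrinks x1, x2 alone toward zero
```

The shrinkage drives the redundant additive terms toward zero while retaining the interaction term, which is the behavior the ranking step above relies on.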
The calculation can easily be parallelized, which we have done (using PVM), and then runs in less than 1 hour on a modest cluster. Given a set of biclusters, the following algorithm may be applied:

for each (bicluster k) {
\u00a0\u00a0\u00a0for each (candidate predictor i) {
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0for each (candidate predictor pair including i) {
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0update list of best interactions
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}
\u00a0\u00a0\u00a0}
}

The total data set, described in full elsewhere, comprises 292 experimental conditions, including gene knockouts and samples taken at a range of times after stimulation and at steady state. By dual hybridization, each was compared with a reference condition that was identical in all 292 experiments, ensuring a high level of continuity throughout this large expression data set. The microarray slide comprises unique 70mer oligonucleotides spanning the 2,400 nonredundant genes encoded in the Halobacterium sp. NRC-1 genome. For each experimental condition, 8-16 replicates were assayed. Replicates include a reversal of the dyes (dye flip) used for labeling the RNA populations, to minimize bias in dye incorporation. The significance of change was estimated by a maximum likelihood method, and genes that never exhibited \u03bb values greater than 15 were removed and set aside as genes for which we observed no significant change. Of the 292 conditions, 24 had not been collected at the time that the model was fit and were thus not used in the training set. These 24 conditions were used as independent verification of the predictive power of the learned network model. The regulatory network and all data types used in the inference process can be visualized using the data integration and exploration tools Gaggle and Cytoscape, and can be accessed via a Cytoscape java web-start. The following additional data are included with the online version of this article: a Word document describing additional tests of individual components of our model formulation (Additional data file 1). R.B. 
conceived and initiated the project, developed and implemented the method and the resultant computer program, and wrote the manuscript. V.T. conceived and initiated the project, developed and implemented the method and the resultant computer program, and wrote the manuscript. D.J.R. assisted in development and implementation, and assisted with writing of the manuscript. P.S. assisted with visualization of the data/networks via the Gaggle. N.B. conceived and initiated the project, provided feedback on the quality of results, and initiated verification of results with further experimentation. M.F. performed experimental verification. L.H. provided guidance at project inception and assisted in writing the manuscript. Additional data file 1: additional tests of individual components of our model formulation."}
+{"text": "Elucidating the dynamic behaviour of genetic regulatory networks is one of the most significant challenges in systems biology. However, conventional quantitative predictions have been limited to small networks because publicly available transcriptome data has not been extensively applied to dynamic simulation. We present a microarray data-based semi-kinetic (MASK) method which facilitates the prediction of regulatory dynamics of genetic networks composed of recurrently appearing network motifs with reasonable accuracy. The MASK method allows the determination of model parameters, representing the contribution of regulators to transcription rate, from time-series microarray data. Using a virtual regulatory network and a Saccharomyces cerevisiae ribosomal protein gene module, we confirmed that a MASK model can predict expression profiles for various conditions as accurately as a conventional kinetic model. We have demonstrated the MASK method for the construction of dynamic simulation models of genetic networks from time-series microarray data, initial mRNA copy number and first-order degradation constants of mRNA. The quantitative accuracy of the MASK models has been confirmed, and the results indicated that this method enables the prediction of quantitative dynamics in genetic networks composed of commonly used network motifs, which cover a considerable fraction of the whole network. With the advent of high-throughput biotechnologies in the last decade of the 20th century, enormous amounts of data have been generated on intracellular molecules. An earlier study validated a kinetic model of the E. coli SOS module using green fluorescent protein (GFP) reporter plasmids. 
A MASK model of the Saccharomyces cerevisiae RP genes predicted temporal patterns of mRNA expression similar to both the training and test microarray data. A simulation experiment was performed to determine whether the yeast RP gene module model is capable of predicting data other than microarray data. Specifically, we attempted to calculate transcript levels of the RP genes in the fhl1\u0394 strain, as these had been measured recently. The MASK model reproduced the dynamic behaviour of the virtual network and the yeast genetic module with a sufficient degree of accuracy. Since the tested networks are examples of network motifs, including a SIM and a MIM, the coverage of the MASK method is as wide as the frequency of the motifs. This successful example of a MIM regulated by Rap1, Fhl1 and Gal4 supports the contention that the MASK method is capable of representing synergistic effects between multiple co-regulators, since it has been reported that Rap1 binding at promoters is required for Fhl1 binding. Microarray data sets for training a MASK model should preferably have prominent variations in expression level and many time points within a short time interval. These desirable features of microarray data restrain the representation space of a MASK model. If the time series expression profiles of both a regulator and its target gene are almost flat, it is obvious that their data points will not provide a significant regression line. For a meaningful regression, expression levels should be widely distributed by dynamic variations in transcription to provide sufficiently long confidence bands and magnitude of regulation (R). The basal-level transcription rate is hyperbolic with respect to RNA polymerase concentration. 
The rate constants Ks and ka represent the dissociation constant of RNA polymerase binding to DNA and the rate constant for RNA synthesis from the DNA-RNA polymerase complex, respectively (Eq. (1)). Rg(t) and Ri(t-\u03c4i) denote the R value of a target gene g at time t and that of the ith regulator at time t-\u03c4i, respectively. The term \u03c4i represents the time delay for transmitting the regulatory effect of the ith regulator to the target gene g. The coefficients a and bi are parameters which can be estimated from microarray data. Regulatory effects at the translational level are abstracted by the coefficients of the R terms, such as exponential parameters and time delays. These equations were implemented on E-Cell Simulation Environment version 3.1.102 for Linux (Fedora Core 2/i386). A multiple regression analysis of time series of R values provides the coefficients in Eq. (2). A time series of R values is obtained by following the data processing steps summarized in the figure, where x(t), \u0394t and kdeg denote the relative expression level at time point t, the time difference of the first and second time points, and the first-order degradation constant of mRNA, respectively. Taking the natural logarithm of Eq. (2), we obtain:

lnRg(t) = ln a + \u03a3i bi lnRi(t-\u03c4i) \u00a0\u00a0\u00a0 (4)

Note that multiple regression analysis of the time series of lnR yields an equation in the same form as Eq. (4). Thus, the least-squares estimates of ln a and bi were obtained via regression analysis of lnR time series data, which are readily calculable from time series microarray data via Eq. (3). Target genes with regression p-values of more than 0.05 were not included in the mathematical models. The length of the time delay, \u03c4i, was calculated using the local clustering method. The rate constant ka in Eq. (1) was determined for each gene to minimize the mean relative error between experimental data and predictions. 
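The log-linear regression step can be sketched as follows. We assume here (our reading, since the original equation image is not reproduced) that Eq. (2) has the power-law form Rg(t) = a * prod_i Ri(t-tau_i)**b_i, so that taking logs gives a linear model in ln a and the b_i; the data below are synthetic:

```python
import numpy as np

# Least-squares estimation of ln a and b_i from lnR time series (sketch).
rng = np.random.default_rng(2)
n_points = 24
lnR_regulators = rng.normal(size=(n_points, 2))   # ln Ri series for 2 regulators
ln_a_true, b_true = 0.5, np.array([1.2, -0.7])
lnR_target = ln_a_true + lnR_regulators @ b_true + rng.normal(scale=0.05, size=n_points)

# design matrix with an intercept column carrying ln a
design = np.column_stack([np.ones(n_points), lnR_regulators])
coef, *_ = np.linalg.lstsq(design, lnR_target, rcond=None)
ln_a_hat, b_hat = coef[0], coef[1:]
```

In practice one would also shift each regulator series by its estimated time delay tau_i before building the design matrix, and discard fits with regression p-values above 0.05, as described in the text.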
The mean relative error E of the two time series data sets was defined as follows:

E = (1/n)\u03a3i |Xi - Pi|/Xi

where Xi and Pi denote the expression levels at the ith time point of the experimental data and the predicted data, respectively. The symbol n represents the total number of time points. See Additional Text 3 for the detailed algorithm to calculate an optimal ka value for each gene. Yugi developed the mathematical aspects of the MASK method and supervised the implementation of this method and the validation experiments. Nakayama provided the concept of the MASK method and directed the project. Kojima contributed to the development of simulation models for validation experiments. Kitayama implemented the method in the E-Cell system, and Tomita is the project leader. Detailed derivations and algorithms: a procedure to adjust R values for regulators, a derivation of Eq. (3) and an algorithm to calculate the optimal ka are described in Additional Texts 1, 2 and 3, respectively."}
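One common reading of the mean relative error defined above (the absolute deviation normalized by the experimental value, averaged over the n time points) can be written directly:

```python
import numpy as np

def mean_relative_error(X, P):
    """Mean relative error E between experimental (X) and predicted (P) series."""
    X, P = np.asarray(X, float), np.asarray(P, float)
    return np.mean(np.abs(X - P) / X)

# tiny worked example with hypothetical expression levels
X = np.array([10.0, 20.0, 40.0])
P = np.array([11.0, 18.0, 40.0])
E = mean_relative_error(X, P)   # (0.1 + 0.1 + 0.0) / 3
```

Minimizing E over candidate ka values, gene by gene, is then a 1-D search over simulated-versus-measured profiles.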
+{"text": "Periodogram analysis of time series is widespread in biology. A new challenge in analyzing microarray time series data is to identify genes that are periodically expressed. This challenge arises because the observed time series usually exhibit non-idealities, such as noise, short length, and unevenly sampled time points. Most methods used in the literature operate on evenly sampled time series and are not suitable for unevenly sampled time series. For evenly sampled data, methods based on the classical Fourier periodogram are often used to detect periodically expressed genes. Recently, the Lomb-Scargle algorithm has been applied to unevenly sampled gene expression data for spectral estimation. However, since the Lomb-Scargle method assumes that there is a single stationary sinusoid wave with infinite support, it introduces spurious periodic components in the periodogram for data with a finite length. In this paper, we propose a new spectral estimation algorithm for unevenly sampled gene expression data. The new method is based on signal reconstruction in a shift-invariant signal space, where a direct spectral estimation procedure is developed using the B-spline basis. Experiments on simulated noisy gene expression profiles show that our algorithm is superior to the Lomb-Scargle algorithm and the classical Fourier periodogram based method in detecting periodically expressed genes. We have applied our algorithm to the Plasmodium falciparum and Yeast gene expression data and the results show that the algorithm is able to detect biologically meaningful periodically expressed genes. We have proposed an effective method for identifying periodic genes in unevenly sampled microarray time series gene expression data. The method can also be used as an effective tool for gene expression time series interpolation or resampling. 
Periodic phenomena are widely studied in biology and there are numerous biological applications where periodicities must be detected from experimental biological data. Because the measured data are often non-ideal, efficient algorithms are needed to extract as much information as possible. Spectral estimation has been a classical research topic in digital signal processing and has recently found important applications in DNA microarray time series data analysis. Many spectral estimation methods have been proposed in the past decades, including the modified periodogram, the autoregressive (AR) model, the MUSIC algorithm and the multitaper method. Recently, several methods for detecting periodic gene expression have been proposed. In this paper, we propose a new spectral estimation technique for unevenly sampled data. Our method models the signal in a shift-invariant signal space, for which many theories and algorithms are available. Our method is based on signal reconstruction in a shift-invariant signal space, where a direct spectral estimation procedure is developed using the B-spline basis. The details of the reconstruction algorithm and the power spectrum density (PSD) estimation are given in the Methods section. We first test our spectral estimation algorithm on simulated signals to compare the estimation accuracy with the Lomb-Scargle method. A cosine curve has been used to represent the ideal expression of a gene that goes from an \"on\" state, to an \"off\" state, and then back to \"on\". For a gene g with expression level observed at time ti, we denote the time series by Yg(ti), where i = 1,...,N and g = 1,...,G. Our test data consist of simulated time series data for the expression of G = 1000 genes, where 900 of them are random genes and 100 are noisy periodic genes. 
To obtain this dataset, each time series is first evenly sampled at 48 points, that is, ti = i for i = 1,...,48. Then, time points are randomly deleted from each time series to simulate the uneven sampling situation. In the second simulation test, we use the 100 simulated noisy periodic gene profiles and compare our method with the Lomb-Scargle approach in terms of errors in the dominant frequency. Finally, we use the entire 1000 simulated genes and the false discovery rate (FDR) gene selection strategy to test the accuracy and sensitivity of our method; the G-statistic is used for our method and the Lomb-Scargle significance test for the Lomb-Scargle method. We added artificial Gaussian noise with mean \u03bc = 0 and various SD values. We have tested our algorithm on the gene expression data of Plasmodium falciparum, which is one of the species that cause human malaria. The Plasmodium falciparum dataset was previously analyzed by Bozdech et al. We also analyzed yeast cell-cycle datasets: an \u03b1-factor-mediated G1 arrest, which covers approximately two cell-cycle periods with measurements at 7 min intervals for 119 min, for a total of 18 time points; a temperature-sensitive cdc15 mutation to induce a reversible M-phase arrest; a temperature-sensitive cdc28 mutation to arrest cells in G1 phase reversibly; and, finally, an elutriation synchronization to produce the elutriation dataset of 14 time points. As pointed out by Lichtenberg et al., the generally low coverage of the benchmark sets deserves scrutiny. A detailed investigation of the expression profiles in the benchmark sets shows that the generally low coverage is really due to the absence of periodicity in many of the profiles. In this paper, we have proposed a new spectral estimation algorithm based on a signal reconstruction technique for unevenly sampled data. 
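The simulated test set described above can be sketched as follows (amplitude, period and noise level are our placeholders, not the paper's exact settings):

```python
import numpy as np

# Noisy cosine profiles for periodic genes, white noise for random genes,
# evenly sampled at 48 points and then thinned at random to mimic uneven sampling.
rng = np.random.default_rng(3)
N, n_periodic, n_random = 48, 100, 900
t = np.arange(1, N + 1)                       # t_i = i, i = 1..48
period = 16.0                                 # placeholder period

periodic = np.cos(2 * np.pi * t / period) + rng.normal(scale=0.5, size=(n_periodic, N))
random_genes = rng.normal(size=(n_random, N))
genes = np.vstack([periodic, random_genes])   # G = 1000 profiles

# randomly delete time points from each profile to create uneven sampling
kept = [np.sort(rng.choice(N, size=36, replace=False)) for _ in range(len(genes))]
uneven = [(t[idx], genes[g, idx]) for g, idx in enumerate(kept)]
```

Each entry of `uneven` is a (time points, values) pair with a different random subset of the 48 original samples, which is the input format an uneven-sampling spectral estimator must handle.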
The advantage of our algorithm over the Lomb-Scargle spectral estimation method is that the new algorithm can effectively reduce the effects of noise and spurious oscillation components and therefore improve the estimation accuracy. Experiments on simulated signals and real gene expression data show that our method is effective in identifying periodically expressed genes. Finally, we remark that this paper focuses on the improvement of periodicity estimation accuracy using spectral analysis algorithms. Another important issue is the statistical significance of the periodicity of a time series; interested readers are referred to Chen and to Wichert et al. In the following, we first review existing work on signal analysis in the shift-invariant signal space, and then derive the new spectral estimation algorithm. Shannon's signal sampling and reconstruction theorem states that if f \u2208 B\u03a9 = {f \u2208 L2 : supp f\u0302 \u2282 [-A, A]} and 0 < 2TA \u2264 1, then

f(x) = \u03a3k f(kT) sinc((x - kT)/T), \u00a0\u00a0\u00a0 (1)

where supp f\u0302 = {x : f\u0302(x) \u2260 0} and sinc(x) = sin(\u03c0x)/(\u03c0x). Equation (1) shows that the space of a bandlimited signal is identical to the space spanned by the shifted sinc functions. Since the sinc function has an infinite support and slow decay, it is seldom adopted in real applications. To replace the sinc function by a more general generator \u03d5, we first introduce a signal space that is called the shift-invariant signal space:

V(\u03d5) = {\u03a3i ci \u03d5(x - i)},

where the coefficients {ci} are related to the choice of basis function, \u03d5. For f \u2208 V(\u03d5), we need to reconstruct the signal f from sampled values {f(xi)}, where {xi} is the sampled point set. If {xi} is an evenly sampled point set, this problem can be regarded as signal reconstruction in an evenly sampled space; otherwise, this is a signal reconstruction problem in an unevenly sampled space. Signal reconstruction in the shift-invariant space is an active research area and there are many mathematical theories and algorithms available. 
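As a quick numerical check of the Shannon reconstruction above, a 1 Hz cosine (band limit A = 1) sampled at T = 0.25 (so 2TA = 0.5 <= 1) is recovered by the truncated sinc sum; the truncation makes the recovery approximate rather than exact:

```python
import numpy as np

# Truncated sinc-series reconstruction of a bandlimited signal from its
# uniform samples (numpy's sinc is the normalized sinc, sin(pi x)/(pi x)).
T, A = 0.25, 1.0
k = np.arange(-400, 401)
samples = np.cos(2 * np.pi * A * k * T)        # f(kT)

x = np.linspace(-2, 2, 101)
recon = np.array([np.sum(samples * np.sinc((xi - k * T) / T)) for xi in x])
max_err = np.max(np.abs(recon - np.cos(2 * np.pi * A * x)))
```

The slow 1/x decay of the sinc tails is exactly why the error only shrinks as the truncation window grows, motivating the compactly supported B-spline generator used in the paper.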
In fact, the well-known autoregressive (AR) model can be regarded as a special case of signal reconstruction in the above signal space. For a given discrete data sequence x[n] for 0 \u2264 n \u2264 N - 1, the sample at time index n is approximated by a linear combination of the previous K samples in the sequence, based on the AR prediction model, which can be written as:

x\u0302[n] = \u03a3k ak x[n - k], k = 1,...,K,

where x\u0302[n] and e[n] = x[n] - x\u0302[n] represent the estimation of x[n] and the corresponding estimation error, respectively; see Yeung et al. for related work on gene expression time series. Grochenig showed that if f(x) \u2208 V(\u03d5) is defined on a finite interval, then f can be determined completely by a finite set of coefficients {ck}. In terms of the definition of the power spectrum density (PSD), we can obtain the corresponding estimation function according to Equation (7). The coefficients {ci} can be calculated by the least squares method according to the following steps. (1) Given sampling points {xj} and corresponding discrete function values y = (f(x1),...,f(xJ)), assume that J \u2265 Jmin = A2 - A1 + 2\u03a9 - 1 and that the truncated matrix T defined below is invertible; compute the matrices U = (Ujk) and T = (Tkl). (2) Compute c = T^-1 b with b = U*y, where U* denotes the adjoint of U and T^-1 is the inverse of T. Remarks: (1) the construction of matrix T has the advantage that T is a positive operator on \u21132(Z); (2) in the case of the B-spline, the matrix T is invertible. If the number of samples is fixed, then increasing the order N (which results in a smoother B-spline with larger support) ensures a smoother reconstruction; this fact can be used for signal denoising. In practice, some over-sampling is done for stability of reconstruction, and we increase the over-sampling rate by time-scaling the original signal support appropriately to a smaller interval. From Equation (11), we see that the periodogram decays more rapidly with increasing order, and for each dominant frequency fm we consider a frequency band around fm as fm's region of influence (ROI). 
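The least-squares coefficient computation can be sketched generically. Since the paper's U and T matrix notation is only partially legible here, this sketch solves the equivalent overdetermined system directly with a cardinal cubic B-spline basis on unevenly sampled data:

```python
import numpy as np

# Least-squares reconstruction in V(phi) with phi a cubic B-spline (sketch).
def cubic_bspline(x):
    """Cardinal cubic B-spline centred at 0, support (-2, 2)."""
    x = np.abs(x)
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = (4 - 6 * x[m1] ** 2 + 3 * x[m1] ** 3) / 6
    out[m2] = (2 - x[m2]) ** 3 / 6
    return out

rng = np.random.default_rng(4)
xs = np.sort(rng.uniform(0, 20, 60))          # uneven sample points
f = np.sin(2 * np.pi * xs / 10)               # smooth signal to recover
shifts = np.arange(-1, 22)                    # basis centres covering [0, 20]
U = cubic_bspline(xs[:, None] - shifts[None, :])   # design matrix
c, *_ = np.linalg.lstsq(U, f, rcond=None)     # coefficients c_k
err = np.max(np.abs(U @ c - f))               # residual at the sample points
```

The fitted coefficients c_k then feed the B-spline-based periodogram: the PSD is evaluated analytically from {c_k} rather than from the raw uneven samples.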
The power ratio S is assessed via the G-statistic, where p is the largest integer less than 1/x. Equation (14) is computed over the set of normalized Fourier frequencies \u03c9 = k/N0, where k = 0,1,...,N0/2. Equation (13) gives an exact p-value that allows one to test whether a gene expression profile behaves like a purely random white noise process or whether the maximum peak in the periodogram is significant. However, for gene expression profiles of short length, i.e., < 40 time points, the p-value given by Equation (13) has weak statistical power to determine the number of periodic genes. Here, we briefly review the Lomb-Scargle periodogram method. For a time series X(tj), where j = 1,2,...,N0, with zero mean, the normalized periodogram as a function of the angular frequency \u03c9 is defined as

PN(\u03c9) = (1/2\u03c3^2){[\u03a3j X(tj)cos \u03c9(tj - \u03c4)]^2/\u03a3j cos^2 \u03c9(tj - \u03c4) + [\u03a3j X(tj)sin \u03c9(tj - \u03c4)]^2/\u03a3j sin^2 \u03c9(tj - \u03c4)},

where \u03c3^2 is the variance of the series and \u03c4 is defined by the equation

tan(2\u03c9\u03c4) = \u03a3j sin(2\u03c9tj)/\u03a3j cos(2\u03c9tj).

The statistical significance of the periodic components detected in the periodogram can be evaluated using an exponential probability distribution test. Authors' contributions: AWCL worked on mathematical modelling, implementation, and microarray dataset analysis. JX worked on signal reconstruction theories and is responsible for the simulated signal experiments. SHW worked on signal reconstruction theories. DS provided biological insight to this project. HY initiated the project and worked on spectral estimation. All authors read and approved the final manuscript. Ranked list of 6875 profiles of the Plasmodium falciparum dataset based on the G-statistic: a ranked list of the 6875 profiles from the Plasmodium falciparum dataset based on their G-statistics. Ranked lists of 6178 profiles of the Yeast dataset based on the G-statistic in the alpha factor, cdc15, cdc28, and elutriation experiments: a ranked list of the 6178 yeast expression profiles in each of the four experiments based on their G-statistics. The higher the rank, the more periodic is the profile."}
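The Lomb-Scargle periodogram reviewed above translates directly into code; this is a plain implementation of the stated definition (synthetic data, not the paper's), with tau obtained from the tan(2*omega*tau) condition:

```python
import numpy as np

def lomb_scargle(t, x, omegas):
    """Normalized Lomb-Scargle periodogram P_N(omega) for uneven samples."""
    x = x - np.mean(x)
    var = np.var(x)
    pn = []
    for w in omegas:
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        pn.append(((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s)) / (2 * var))
    return np.array(pn)

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 48, 30))              # uneven time points
x = np.cos(2 * np.pi * t / 12) + rng.normal(scale=0.2, size=t.size)
omegas = 2 * np.pi * np.linspace(0.01, 0.2, 100)
P = lomb_scargle(t, x, omegas)
peak_freq = omegas[np.argmax(P)] / (2 * np.pi)   # near the true 1/12
```

The peak of P at the dominant frequency is what the exponential significance test is applied to.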
+{"text": "Periodic processes, such as the circadian rhythm, are important factors modulating and coordinating transcription of genes governing key metabolic pathways. Theoretically, even small fluctuations in the orchestration of circadian gene expression patterns among different tissues may result in functional asynchrony at the organism level and may contribute to a wide range of pathologic disorders. Identification of circadian expression patterns in time series data is important, but equally challenging. Microarray technology allows estimation of relative expression of thousands of genes at each time point. However, this estimation often lacks precision, and microarray experiments are prohibitively expensive, limiting the number of data points in a time series expression profile. The data produced in these experiments carry a high degree of stochastic variation, obscuring the periodic pattern, and a limited number of replicates, typically covering not more than two complete periods of oscillation. To address this issue, we have developed a simple, but effective, computational technique for the identification of a periodic pattern in relatively short time series, typical for microarray studies of circadian expression. This test is based on a random permutation of time points in order to estimate non-randomness of a periodogram. The permutated time test, or Pt-test, is able to detect oscillations within a given period in expression profiles dominated by a high degree of stochastic fluctuations or oscillations of different irrelevant frequencies. We have conducted a comprehensive study of circadian expression on a large data set produced at PBRC, representing three different peripheral murine tissues. We have also re-analyzed a number of similar time series data sets produced and published independently by other research groups over the past few years. 
The Permutated time test (Pt-test) is demonstrated to be effective for detection of periodicity in short time series typical for high-density microarray experiments. The software is a set of C++ programs available from the authors on an open-source basis. Circadian, or approximately daily, rhythm is one of the best-studied periodic processes in living organisms. Multiple studies in different tissues report that expression of approximately 5\u201315% of all genes shows circadian oscillations. Identification of periodically expressed genes presents a significant challenge. Most of the statistical methods for detection of periodicity in timeline data have been developed for much longer series than those available for computational analysis of gene expression. Production of long time-series microarray experiments is prohibitively expensive. Even a single time series of 12 time points with a minimal number of replicates costs thousands of dollars in microarray chips, equipment and labor. Among publicly available data sets, few extend beyond 24 time points or cover more than 2 complete cycles. High variability and low expression level for many genes are additional challenges, further obscuring the periodicity of the base expression. The current manuscript describes a permutated time test designed to address these challenges. Pt-test. To present this approach, consider a time series Y = x0, x1, x2, ...xN-1, in which technical variation approaches or even exceeds the amplitude of periodic expression. In a very short time series, stochastic noise often obscures periodicity. However, the periodic change of the base expression level can still be identified in spite of the high noise level. If the periodogram of the original time series IY(\u03c9) contains a significant peak corresponding to a particular frequency, this peak results from the particular order of observations in Y. A random permutation would preserve the same noise level, but not the periodicity. 
Our alternative test for the significance of a particular periodicity (in our case circadian) among large numbers of gene expression profiles is based on random permutation of time points, and is thus abbreviated as the Pt-test. Let YR be a random permutation of the time series Y; its corresponding periodogram is IR(\u03c9). After the DFT, the periodogram IR(\u03c9) represents only the peaks occurring by chance. It would retain the true periodic frequencies only if a permutation happened to preserve the period, for example if the rank of each point x in the permutated series YR equals xY \u00b1 n*p, where n is a natural number and p is the period corresponding to a significant peak in IY(\u03c9). To avoid random re-institution of periodicity, we generate YR by multiple shuffling of randomly selected time points xn \u21d4 xm, where |n - m| \u2260 p, i.e., each shuffle swaps time points from different phases. By comparing permutations with deliberately wiped-out periodicity to the original time series, we can estimate whether a particular order of observations (i.e., the time series) is important. For each gene expression profile we generate a series of random permutations. Each permutated series YR is transformed to the frequency domain and the single largest peak of its periodogram IR(\u03c9) is stored. The p-value for the null hypothesis that a particular peak of the periodogram arises by chance is estimated by comparing the stored IR(\u03c9) values to the observed IY(\u03c9): a p-value exceeding a threshold of, for example, 0.05 means that at least 5 out of 100 random permutations of the time series produce a periodogram with the same or a higher peak corresponding to the given periodicity. A low p-value indicates a significant difference between the periodogram IY(\u03c9), which preserves the circadian periodicity, and the periodograms IR(\u03c9) of randomly permutated series with the same level of technical variation. 
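The permutation scheme just described can be sketched as follows. This is an illustrative Python rendering, not the authors' C++ implementation; the number of permutations, the number of swap attempts, and the phase condition `(i - j) % period != 0` (a slightly stronger form of the text's |n - m| \u2260 p rule) are our assumptions.

```python
import math
import random

def peak_at(x, k):
    """Periodogram ordinate of series x at Fourier frequency k/N."""
    n = len(x)
    c = sum(v * math.cos(2 * math.pi * k * j / n) for j, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * k * j / n) for j, v in enumerate(x))
    return (c * c + s * s) / n

def pt_test(x, period, n_perm=200, rng=None):
    """Permutated-time test: fraction of phase-breaking shuffles whose
    periodogram peak at the tested frequency reaches the observed one."""
    rng = rng or random.Random(0)
    n = len(x)
    k = n // period                      # frequency index of the tested period
    observed = peak_at(x, k)
    hits = 0
    for _ in range(n_perm):
        y = list(x)
        for _ in range(3 * n):           # multiple random swaps
            i, j = rng.randrange(n), rng.randrange(n)
            if (i - j) % period != 0:    # only swap points in different phases
                y[i], y[j] = y[j], y[i]
        if peak_at(y, k) >= observed:
            hits += 1
    return hits / n_perm
```

A strongly periodic profile should yield a p-value near zero, since phase-breaking shuffles almost never reproduce the observed peak.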
This difference leads to rejection of the null hypothesis of a purely random nature of variation in the original time series Y. Algorithms to recognize a circadian oscillatory pattern in Storch et al. and Panda et al., as well as in many other papers, employ some form of permutation. The principal difference of our test is that while we, like others, generate permutations in the time domain, we then analyze the periodograms of the permutated time series in the frequency domain. Since we consider only one frequency, stochastic variations between time points have a reduced impact compared to time-domain methods. Fisher's g-test is based on the estimation of the signal-to-noise ratio from the periodogram of a time series (i.e., in the frequency domain). It is less effective on very short time series, such as those commonly found in functional genomics studies. In such time series, typically derived from a dozen or two microarray experiments, stochastic deviations can easily exceed the underlying baseline variation, which results in high peaks for irrelevant non-circadian frequencies and an overall poor resolution capability of the method. We have developed the Pt-test for short time series in order to combine the benefits of frequency-domain analysis with robust error rate estimation based on a permutation procedure. Permutation is a commonly used procedure for such error rate estimation. The simulated data set comprises categories with an increasing noise component \u03b5 and a decreasing signal-to-noise ratio. In the first 2 categories all three algorithms identify 100% of profiles as oscillating. In every other category our permutation test tends to find more oscillating expression profiles than Fisher's g-test and autocorrelation. The results of our analysis of the simulated data set with 3 different algorithms are presented on Figure To test the effectiveness of the new algorithm on real data we have re-analyzed three independent data sets. 
All of them represent murine liver time series over the equivalent of 2 complete daily cycles (48 hours). All three utilize Affymetrix GeneChips. However, the data were acquired in different years by 3 different research institutes in independently designed experiments. One data set, produced by GNF, contained 24 time points representing samples collected every 2 hours, while the other two contained 12 time points taken every 4 hours. Analysis of the locally produced murine liver data revealed more circadially oscillating genes than anticipated from previous publications. Another illustration of the performance and agreement between algorithms is presented on Figure We generate p ideal cosine curves (where p is the number of time points per one complete circadian period), each one with a phase shift of one time point (4 hours in the case of the PBRC data). We calculate a pair-wise correlation between the gene expression profile and each phase of the ideal cosine curve. The phase shift (lag) that produces the highest correlation coefficient is assumed to be the most likely phase of a particular profile. This method is superior to assigning the phase by the position of the zenith and nadir, as it takes into account ascending and descending trends in the whole data set rather than single points. Within each phase, expression profiles are sorted in ascending order of the p-value estimated by the Pt-test, depicted as gray scale in the first column. The next 2 columns, Fg and aC, represent p-values estimated through Fisher's g-test and autocorrelation analysis, respectively. Although they do not follow the same exact order, both these columns display patterns similar to the Pt column. The heatmap shows a characteristic pattern of two red zones (elevated gene expression) spaced by dark or green areas (reduced gene expression) prominent in all phases. In each phase group (i.e. 
expression profiles with the highest correlation to a cosine curve with the same phase shift), the pattern shows some deterioration close to the bottom, but never disappears completely. This test suggests that the permutation approach is likely to be a correct, although still conservative, estimation of the true number of circadially expressed genes in murine liver. We have apparently identified more periodically expressed genes than was previously reported by the authors. The results of the analysis are summarized in Figure Bmal1 and Clock were found to regulate glucose levels and thus play a significant role in the energy balance. This finding is consistent with the accepted view that circadian clocks are important in driving activity and feeding behavior in mammals. This may also explain the difference between the heatmaps presented on Figure It is likely that most genes following the circadian pattern are not linked directly to the basic circadian clock mechanism. Rather, they are modulated by downstream circadian effectors, some of which can be responsive to the supply of energy and macronutrients, which, in turn, is closely linked to the daily feeding pattern. Connections between basic circadian mechanisms and nutrient homeostasis have been demonstrated. In addition to circadian data, we have analyzed a different time-series microarray data set, recently published by Tu et al., covering Saccharomyces cerevisiae measured at 36 consecutive time points. The original authors report an oscillating pattern with a period of ~300 min in a large part of expressed genes. This ultradian oscillating pattern reflects temporal compartmentalization of components of the yeast metabolic cycle. Our analysis of the same data corroborates the prominence of oscillation reported in the original paper. However, our approach allows identification of an oscillating pattern in an even larger portion of expression profiles, consistent with all other experiments summarized in Figure 
From the analysis of both simulated and real data from independent sources, we conclude that the Pt-test consistently reveals more circadially oscillating genes in short time series than the other tested algorithms and is generally less sensitive to stochastic non-periodic noise. The drawback of this algorithm is that it is much more computationally demanding than autocorrelation or Fisher's g-test. However, its efficient C++ implementation allows processing of realistic amounts of data in a reasonable time on an average personal computer. Typically, analysis of a complete data set takes less than an hour, which is trivial compared to the time it takes to collect the data. The source code of the Pt-test procedure, as well as computationally efficient C++ implementations of the autocorrelation, DFT and Fisher's g-test procedures, are free and available from the authors upon request. The data were smoothed with a 3rd degree polynomial procedure and median-subtracted; for better compatibility, the same smoothing and median subtraction procedure has been applied to all other data sets. We have completed independent circadian studies in AKR/J mice acclimated to a 12 hr light: 12 hr dark cycle, harvesting sets of 3\u20135 mice at 4 hr intervals in duplicates over a 24 hr period. The simulated data set comprises noise categories i = 10, 20, ..., 100. To control for possible false positive oscillation, we have reproduced each category with the same noise component \u03b5 but with an absent baseline oscillation, i.e., amplitude \u03c6 = 0. The resulting data set is similar in number of profiles to the natural microarray data sets. Each profile consists of 12 data points, which is the same as the majority of published circadian microarray data sets, including two out of the three data sets used in this study. For smoothing we use the seven-point Savitzky-Golay algorithm. If the time series contains a periodic component at a certain frequency, then the periodogram exhibits a peak at that frequency with a high probability. 
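A minimal sketch of how such a simulated data set might be generated: a cosine baseline plus noise, with a matched amplitude-zero null category for each noise level. The noise levels, profile counts, and Gaussian noise model are placeholders of ours, since the exact simulation parameters are garbled in the source.

```python
import math
import random

def simulate_profile(n_points=12, period=6, amplitude=1.0, noise_sd=0.5, rng=None):
    """One simulated expression profile: cosine baseline plus Gaussian noise.
    amplitude=0 reproduces the noise-only (null) control profiles."""
    rng = rng or random.Random(0)
    return [amplitude * math.cos(2 * math.pi * t / period) + rng.gauss(0, noise_sd)
            for t in range(n_points)]

def simulate_dataset(profiles_per_level=100, noise_levels=(0.1, 0.5, 1.0, 2.0)):
    """Categories with a decreasing signal-to-noise ratio, each paired with
    a matching noise-only category (amplitude 0), as described in the text."""
    rng = random.Random(42)
    data = []
    for sd in noise_levels:
        for amp in (1.0, 0.0):                      # oscillating vs. null control
            for _ in range(profiles_per_level):
                data.append(simulate_profile(noise_sd=sd, amplitude=amp, rng=rng))
    return data
```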
Conversely, if the time series is a purely random process (a.k.a. \"white noise\"), then the plot of the periodogram against the Fourier frequencies approaches a straight line. This algorithm closely follows the guidelines recommended for analysis of periodicities in time-series microarray data. For a time series Y = x0, x1, x2, ...xN-1, the autocorrelation is simply the correlation of the expression profile against itself with a frame shift of k data points, where the shifted index f is defined as f = i + k if i + k < N and f = i + k - N otherwise. Transcription factors with \u03b1 > 0 are generally bound to the promoters of over-expressed genes, while factors with \u03b1 < 0 are bound to the promoters of under-expressed genes. If a transcription factor is an activator, then \u03b1 > 0 implies that the factor is active and \u03b1 < 0 implies that it is inactive. For repressors, the interpretation is the opposite: \u03b1 > 0 and \u03b1 < 0 correspond to the repressing transcription factor being inactive and active, respectively. For the sake of simplicity, we use a linear regression approach to estimate transcription factor activities. Our approach is similar to that used in Motif Regressor and REDUCE. Here, we are interested in the changes in transcription factor \u03b1-coefficients during the cell cycle of S. cerevisiae. Yeast cells were synchronized using yeast mating pheromone and the expression of all genes was profiled over two periods of the cell cycle. When these calculations are applied to a time series of expression profiles, we obtain the \u03b1-coefficients (\"activities\") of each transcription factor over time. We then investigated whether it was possible to model the temporal couplings of these factors in the form of ordinary differential equations. Such a model would enable us to predict which factors are affecting each other's \u03b1-coefficients across the different phases of the cell cycle. 
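The circular-shift autocorrelation defined above can be written compactly as follows (illustrative Python; the function name is ours). Because the shift is circular, the shifted series has the same mean and variance as the original, so dividing by the variance of x gives the Pearson correlation directly.

```python
def autocorrelation(x, k):
    """Correlation of a profile with itself shifted by k points, using the
    circular index f = i + k if i + k < N else i + k - N from the text."""
    n = len(x)
    shifted = [x[(i + k) % n] for i in range(n)]
    mx = sum(x) / n
    num = sum((a - mx) * (b - mx) for a, b in zip(x, shifted))
    den = sum((a - mx) ** 2 for a in x)   # equals the shifted series' variance too
    return num / den
```

For a profile with period p, the autocorrelation at lag k = p is 1, and at half a period it is -1.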
We were able to construct such a model by generating a time-translation matrix of transcription factor \u03b1-coefficients using least squares. We set out to reverse-engineer the interactions of the transcription factors comprising the core module of regulators of the S. cerevisiae cell cycle. The model captures the time-dependent \u03b1-coefficients of transcription factors and how they are coupled to control this important cellular program. The first step in our approach is to determine the \u03b1-coefficients of transcription factors during the cell cycle and to identify the factors that are the most significant regulators of the cell cycle. We assume that changes in expression of a specific gene depend on the product of the binding ratios of all the transcription factors that bind to its promoter, i.e., Ri = \u220fj bij^\u03b1j, where Ri is the ratio of the expression level of gene i in two conditions, the \u03b1-coefficient \u03b1j is a measure of the contribution of transcription factor j, bij is a constant giving the coupling between gene i and factor j, and N is the number of transcription factors in the model. In practice, we work with logarithms of expression ratios and logarithms of binding profiles so that (see Methods) one may solve for the \u03b1j by multiple linear regression. As discussed earlier, \u03b1j is a surrogate for the activity of transcription factor j: positive values indicate an active activator or inactive repressor, and negative values indicate an active repressor or inactive activator. Without prior knowledge of the nature of transcription factor j, we do not know from its \u03b1j whether that factor is active or inactive; rather (and of more concern as regards gene expression), we know that it tends to bind over-expressed (\u03b1j > 0) or under-expressed (\u03b1j < 0) genes. Of course, in some cases we do know whether a transcription factor is an activator or repressor and can therefore easily transform its \u03b1-coefficient to activity. 
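After taking logarithms, the model reduces to solving log Ri = \u2211j \u03b1j log bij by linear regression, which can be sketched on a tiny example as below. The paper uses MATLAB's robustfit (robust regression with standard errors); this plain normal-equations least-squares solver is a simplified stand-in, and all names and data here are ours.

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def alpha_coefficients(log_R, log_B):
    """Least-squares estimate of alpha in log R = log B @ alpha (normal equations)."""
    genes, nf = len(log_B), len(log_B[0])
    AtA = [[sum(log_B[g][i] * log_B[g][j] for g in range(genes)) for j in range(nf)]
           for i in range(nf)]
    Atb = [sum(log_B[g][i] * log_R[g] for g in range(genes)) for i in range(nf)]
    return solve_linear(AtA, Atb)
```

With noise-free synthetic data the true \u03b1-coefficients are recovered exactly.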
Nonetheless, for the remainder of this manuscript we focus on \u03b1-coefficients and not activities. For each factor we obtained a p-value, and factors were rank-ordered based on the number of time points in which they were significant. We filtered out factors that did not show periodic behavior. We identified four additional transcription factors \u2013 Bas1, Spt2, Ste12, and Yox1 \u2013 that are likely significant regulators of the cell cycle; stopping at four additional factors maintains a reasonable data-to-parameter ratio in the model. Details of the procedure used appear in the Methods section. We applied the above to model the \u03b1-coefficients of transcription factors during two periods of the yeast cell cycle. We concentrated on modelling the expression data that was obtained by using yeast mating pheromone to synchronize yeast cells, although we also present models based on data from synchronization using a temperature-sensitive cdc15 mutant and from elutriation. Having selected significant transcription factors and found their \u03b1-coefficients at each time point in the cell cycle, the second step in our approach is to model the system's dynamics. We accomplished this by computing a transition (or time-translation) matrix T that can be used to determine the \u03b1-coefficients of transcription factors at other time points from the \u03b1-coefficients at the current time point by matrix multiplication. Estimated using least squares, the matrix T permits inference of the network of couplings between transcription factors (see Methods). We chose to perform constrained least squares and require the estimate of T to have only non-negative entries. This choice was made for several reasons. First, it generates models that are more readily interpretable biologically. It is clear that a positive entry may be interpreted as the positive \u03b1-coefficient of transcription factor A increasing the \u03b1-coefficient of transcription factor B at the next time point, as one might expect for activators. 
A positive entry may also be seen as the negative \u03b1-coefficient of transcription factor A decreasing the \u03b1-coefficient of transcription factor B at the next time point, as one might expect for repressors. In contrast, a negative entry suggests that the negative \u03b1-coefficient of a factor increases the \u03b1-coefficient of another, or vice-versa, which we expect to be less likely over 7-minute intervals for both activators and repressors. A second reason to apply non-negativity constraints to the entries of the time-translation matrix is that an unconstrained matrix tends to have no zero entries. This leads to an undesirable model in which all transcription factors depend on all other factors, which we know not to be true. Finally, applying constraints to the time-translation matrix can reduce the number of parameters to be fit and improve the model's data-to-parameter ratio. We compared our model to the canonical model of the network of cell-cycle-regulating transcription factors (Figure ). To compare this model to the one generated using our approach, we started with the 8 canonical transcription factors. We computed the \u03b1-coefficients of these transcription factors in the cell cycle time course data; our model may select only a single one of these to best fit the data. The resulting models are quite similar to the canonical one. They show a flow of influence from Ace2, a component of the M/G1 complex, to the Swi4-Swi6-Mbp1 complex active in G1/S. The Swi4 subunit of this complex influences the \u03b1-coefficient of Fkh2, which in turn affects Ndd1 and subsequently Mcm1, both subunits of the G2/M complex. The cycle is then completed with interactions from Ndd1 and Mcm1 back to Ace2 and Swi5. All of these interactions recapitulate the basic flow of the canonical model, indicating strong support for this model using our approach. 
Nonetheless, we note that 8 of the 16 edges in Simon, et al.'s model are recapitulated by our approach. We next built a more general model by identifying in an unsupervised fashion which additional transcription factors most significantly regulate the cell cycle. As explained in the Methods section, for each of the 204 transcription factors we measure not only its \u03b1-coefficient at each time point, but also the significance of this estimate in the form of a p-value. We used an iterative approach, keeping only transcription factors that had a p-value below 0.1 and re-computing the \u03b1-coefficients and p-values for the reduced set of transcription factors. We retained the factors that were significant in the largest number of time points. Among these, we selected those that had periodic \u03b1-coefficients. Periodicity was determined using a discrete Fourier transform to compute the fraction of the power spectrum of the \u03b1-coefficients of a given transcription factor that lay within a range consistent with the measured time period of 1 cell cycle of S. cerevisiae (see Methods). Finally, we computed the transition matrix for the 12 factors. This gives our most complete model of transcription factor couplings in the yeast cell cycle, including the behavior of each factor, which we describe in terms of amplitude and phase (see Methods) and report in Table Many cellular programs, such as control of the cell cycle or the metabolic cycle, are regulated at the transcriptional level. Model building using the former approach, i.e., using sets of individual molecular coupling measurements, can severely limit the scope of the model. Such measurements are often only available for a small number of molecules. As a result of this limitation, most of these types of models capture couplings between proteins and not between transcription factors and their target genes, due to the larger number of parameters the latter would involve. 
Furthermore, these types of models are difficult to validate since they cannot be compared to systematically-collected comprehensive time series data covering all model variables. A previous dynamical model of the yeast cell cycle was constrained a priori to include only a small subset of molecules. However, since these models only account for the regulation of genes by transcription factors, they have not been used to predict the phenotype of cell cycle mutants, as such phenotypes do not usually involve mutations in transcriptional regulation. Utilizing the second paradigm, various methodologies have been developed to reconstruct the regulatory network of the yeast cell cycle from high-throughput data. These methods do not require a priori assumptions about which proteins should be included, as these may be determined directly from the data. In contrast to both of these approaches, we have presented a methodology for constructing dynamic models of the core modules of transcriptional regulation from high-throughput data sets. Our model describes the time-dependent \u03b1-coefficients of transcription factors during the cell cycle. As described above, the model was estimated using both expression and transcription factor binding data. As with other cell cycle models derived from high-throughput data, we are able to directly measure the fit of our model with respect to the data. An assumption of our model is that all factors act cooperatively at each promoter at some level. The \u03b1-coefficient of a given factor at a given time point is a measure of the extent of the global influence that factor has at that time. Cooperation between factors in our model is therefore manifested by correlation in the \u03b1-coefficients of two factors, as seen for example in Figure In conclusion, unlike other genome-wide models of cell cycle transcription, our approach focuses on the relationships between transcription factors and not on the identification of the regulators of individual genes. 
Since transcription factors are the key regulators of transcriptional programs, we believe this approach provides a more comprehensive view of how these processes are regulated temporally. Furthermore, although it is not possible to completely validate the resulting networks, the results are reasonable from a biological perspective and largely conform to previous models of transcriptional regulation of the yeast cell cycle. As a result, our methodology should reasonably reconstruct regulatory networks whenever expression time series and transcription factor binding data are both available. We have presented a methodology for describing the temporal regulation of the yeast cell cycle. Our approach allows us to first identify the key regulators of this process, recovering most of the known regulators as well as new ones for which there is some supporting evidence for their involvement. More importantly, our methodology allows us to synthesize the manner in which these transcription factors coordinately regulate the cell cycle. This approach goes beyond existing static models of transcription factor networks involved in cell cycle regulation by demonstrating how the estimated transcriptional activities of these proteins are coupled in time. For Saccharomyces cerevisiae, time series expression data for a variety of cellular programs are available. Taking a logarithm of both sides of Equation 1, we model the logarithm of the ratio of expression data as a linear combination of the logarithms of the transcription factor binding data: log Ri = \u2211j \u03b1j log bij, where Ri is the ratio of expression of gene i to control, \u03b1j is the regression coefficient of transcription factor j, and bij is the measured binding of transcription factor j to the promoter of gene i. We estimate the coefficients \u03b1j using multiple linear regression as implemented in the MATLAB function robustfit, a function that also returns an estimate of the standard error of each \u03b1j. 
The binding coefficients bij are from Harbison, et al., and the expression ratios Ri are from Spellman, et al. We pre-filtered the binding data by eliminating all transcription factors that had more than 10 missing values across all genes, leaving 181 factors. We next eliminated all genes that had more than 3 missing values across the 18 time points, leaving 4,033 genes. For each of the 181 transcription factors, robustfit computes the probability that each \u03b1j is non-zero. Roughly, a t-test is made for the ratio of each coefficient divided by its standard error. A detailed derivation of the p-value, which is beyond the scope of this manuscript, may be found in standard statistical texts [34]. We first perform a regression step and obtain a p-value for each of the \u03b1j. We then remove from the binding matrix bij all columns that are associated with transcription factor \u03b1-coefficients whose p-values do not pass the threshold. This procedure is repeated with the reduced binding matrix until all transcription factor \u03b1-coefficients have p-values below 0.1. The choice of 0.1 as a threshold is somewhat arbitrary, but it has only a minimal effect on the calculation; we have repeated the procedure with other p-value thresholds such as 0.01 and 0.001 and obtain similar lists of significant factors and \u03b1-coefficients. The p-values for the final \u03b1-coefficients of the factors selected by the iterative procedure are very significant (p < 10-10) in all cases. We model the dynamics of the system by determining the transition matrix T that predicts the \u03b1-coefficients at the next time point from the \u03b1-coefficients at the current time point: A(t+1) = TA(t). T is computed using least squares constrained to produce only non-negative entries via the MATLAB function lsqnonneg. 
From this matrix, we can infer the network of couplings between transcription factors by identifying the most significant interactions. In cases where two factors were linked by arcs in both directions, we display only the more significant arc. To analyze the dynamics of the time-translation matrix, we compute its eigensystem using the MATLAB function eig. Since we are interested in the long-term asymptotic behavior of our model, we focus on the two non-real conjugate eigenvalues whose magnitude is closest to unity and the two conjugate eigenvectors associated with them. The eigenvectors with eigenvalues smaller than these in magnitude decay exponentially in time compared to this mode and hence contribute nothing asymptotically. The spectrum of eigenvalues is shown in the Additional files. Using the polar form \u03bb = |\u03bb|e^(i\u03b8) for the dominant non-real eigenvalue of the time-translation matrix in the upper half-plane, the period of the associated mode is given by 2\u03c0\u0394t/\u03b8, where \u0394t is the time between successive points in the expression time series. Collect the eigenvectors of the time-translation matrix as the columns of a complex matrix V and let the complex matrix W be the matrix inverse of V. To find the long-term asymptotic phase and amplitude of each transcription factor, let z be the complex number that is the dot product of the row of W corresponding to one of the dominant non-real eigenvalues with the initial state, and let v be the column of V corresponding to this same eigenvalue. Then the asymptotic amplitudes and phases of the transcription factors are the polar forms of the successive entries of 2zv. A detailed derivation is provided in the Additional files. To identify the transcription factors that have periodic \u03b1-coefficients, we compute the discrete Fourier transform of the \u03b1-coefficient profile for each factor. 
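The period formula for the dominant mode can be checked with a few lines of Python (illustrative; the function name is ours, and the 7-minute sampling interval is taken from the text's description of the time-translation matrix):

```python
import cmath
import math

def mode_period(eigenvalue, dt_minutes=7.0):
    """Period of the oscillatory mode for a complex eigenvalue written in polar
    form lambda = |lambda| * exp(i*theta): period = 2*pi*dt / theta."""
    theta = abs(cmath.phase(eigenvalue))  # use the eigenvalue in the upper half-plane
    return 2 * math.pi * dt_minutes / theta
```

For example, an eigenvalue whose phase advances 1/9 of a full turn per 7-minute step corresponds to a 63-minute cycle.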
Taking the magnitudes of the Fourier components, we calculate the fraction of spectral power that falls in components 7, 8, 9, and 10, as these correspond to an interval of periods containing the expected period for a single cycle of the Spellman data. The calculations described in this manuscript were performed in roughly equal contributions by SC, SR, and MP. DH and NGJ provided essential comments and guidance. The manuscript was written by MP and edited by SC. Network layouts, Figure and Additional file drawings, and Additional File 6 were by SC. Transcription factor network of canonical cell cycle regulators as derived from the cdc15 time course: we show the non-zero entries in the model's time-translation matrix as directed arcs between transcription factors, and note the general similarity of the causal flow in this network to Figure Transcription factor network of canonical cell cycle regulators as derived from the elutriation time course: we show the non-zero entries in the model's time-translation matrix as directed arcs between transcription factors, and note the general similarity of the causal flow in this network to Figure \u03b1-coefficients of transcription factors regulating the cell cycle (a heat map); factors were further ranked based on the periodicity of their \u03b1-coefficients. Canonical factors are shown in red, significant non-canonical factors are shown in yellow, and other factors are shown in white. We see that the same set of 5 non-canonical factors is selected as most significant for all of the p-value thresholds. Transcription factor rankings used to select the 4 non-canonical transcription factors: factors were ranked based on the number of time points in which their \u03b1-coefficients were significant. Eigenvalues of the time-translation matrix of the 12-factor model: each application of the time-translation matrix moves time forward 7 minutes. Asymptotic phases and amplitudes. 
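The periodicity criterion, the fraction of DFT power falling in components 7 through 10, might be sketched as follows (illustrative pure Python, not the authors' code; the guard on short profiles is our addition):

```python
import math

def power_fraction(alphas, bins=(7, 8, 9, 10)):
    """Fraction of DFT spectral power of an alpha-coefficient profile that
    falls in the stated frequency components."""
    n = len(alphas)
    power = []
    for k in range(1, n // 2 + 1):
        c = sum(a * math.cos(2 * math.pi * k * t / n) for t, a in enumerate(alphas))
        s = sum(a * math.sin(2 * math.pi * k * t / n) for t, a in enumerate(alphas))
        power.append(c * c + s * s)
    total = sum(power) or 1.0
    # bins outside the computable range (for short profiles) are simply skipped
    return sum(power[k - 1] for k in bins if k - 1 < len(power)) / total
```

A profile that is a pure sinusoid at one of the selected components has a power fraction of essentially 1 and would be flagged as periodic.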
A derivation of the formulas used for the asymptotic phases and amplitudes of components of real systems governed by a general class of real time-translation matrices is given."}
+{"text": "There are some limitations associated with conventional clustering methods for short time-course gene expression data. The current algorithms require prior domain knowledge and do not incorporate information from replicates. Moreover, the results are not always easy to interpret biologically.We propose a novel algorithm for identifying a subset of genes sharing a significant temporal expression pattern when replicates are used. Our algorithm requires no prior knowledge, instead relying on an observed statistic which is based on the first and second order differences between adjacent time-points. Here, a pattern is predefined as the sequence of symbols indicating direction and the rate of change between time-points, and each gene is assigned to a cluster whose members share a similar pattern. We evaluated the performance of our algorithm to those of K-means, Self-Organizing Map and the Short Time-series Expression Miner methods.Assessments using simulated and real data show that our method outperformed aforementioned algorithms. Our approach is an appropriate solution for clustering short time-course microarray data with replicates. Time is an important factor in developmental biology, especially in dynamic genetics. For example, when a number of genes are differentially expressed under two or more conditions, it is often of great interest to know which changes are causal and which are not. When different conditions are represented by different time-points, it helps us to understand not only how a gene gets turned on or off, but also which gene-gene relationships are based on the lags in the changes. Novel genes have been identified by monitoring the transcription profiles during development or by loConventional time-series methods are not well suited to the analysis of microarray data. 
Since the number of observed time-points in a microarray experiment is usually very small, common methods such as auto-regressive (AR), moving-average (MA) or Fourier analysis modeling may not be applicable. Furthermore, these classic autocorrelation approaches generate bias when applied to short time-course data. In one earlier approach, an expression profile of N time-points is standardized and then transformed into a three-digit sequence with the aid of a tolerance factor. Clustering, which groups similar objects together, can identify potentially meaningful relationships between objects, and its results can often be visualized. Many other pattern-based and model-based clustering approaches have also been devised. As this wealth of approaches shows, a lot of effort has been put into developing clustering algorithms for gene clustering; however, they have some limitations. Geneticists still need a more intuitive and statistically sound methodology. To address this issue, we propose a difference-based clustering algorithm (DIB-C) for short time-course gene expression data. DIB-C discretizes a gene's profile into a symbolic pattern of the first- and second-order differences, representing direction and rate of change, respectively. Replicate and temporal order information from the input data are used in defining the clusters. DIB-C outputs a cross-sectional view of cluster hierarchies with varied cutoffs, shown in a 2-dimensional map for biological interpretation. The clustering procedure used by DIB-C is detailed in the Methods section. We now examine the limitations of standard clustering algorithms and explain how we addressed each of them in developing our algorithm. First, misleading or uninterpretable clusters can occur when one only considers the similarity of expression profiles, thereby disregarding discretization information.
An example from real data is shown in the corresponding figure. Second, the rate of change is ignored when delineating underlying patterns in traditional clustering algorithms (for example, CDC27 in the figure). Third, replicates are not fully utilized in the existing algorithms, a point raised by Kerr et al. Fourth, conventional clustering techniques, such as hierarchical clustering, tend to ignore the temporal order of the data, i.e., they treat the columns as interchangeable, whereas in time-course data the columns are not interchangeable. Fifth, template-based methods require prior knowledge to choose representative genes. In the yeast example, each gene was assigned to the nearest pre-chosen representative gene based on previous studies; Peddada et al. also proposed such an approach. Finally, visualization of clustering results is not always informative. K-means (KM) merely enumerates the list of genes, where each number signifies which cluster a gene belongs to. However, these are simply distinguishing symbols; adjacent numbers do not imply that two clusters are biologically related. The Self-Organizing Map (SOM) does a little better, as it displays clustering results in a 2-dimensional grid. We evaluated the performance of our algorithm DIB-C in comparison to the K-means, SOM and Short Time-series Expression Miner (STEM) methods using both simulated and real data. The real data was obtained from a study of pancreas gene expression in developing mice. For the simulated data, true clustering membership was used as knowledge external to the gene expression data, and the agreement between the true and the resulting cluster memberships was measured. The Adjusted Rand Index (ARI), a chance-corrected form of the Rand Index (the number of pairwise agreements divided by the total number of pairs), is defined over the contingency table of the two partitions, where i and j index the clusters and classes, respectively. Higher ARI values indicate more accurate clusters. The ARI is a more sensitive, generalized version of the original Rand Index and is used as our measure of comparison. K-means achieved the maximal ARI at 24 clusters, which was not the true cluster number.
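The ARI computation described above can be sketched as follows. This is a minimal implementation of the standard Hubert-Arabie adjusted Rand index, not code from the paper; the function name is illustrative:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(true_labels, pred_labels):
    """Hubert-Arabie adjusted Rand index between two partitions."""
    n = len(true_labels)
    # Contingency counts n_ij, plus row sums a_i and column sums b_j.
    pairs = Counter(zip(true_labels, pred_labels))
    a = Counter(true_labels)
    b = Counter(pred_labels)
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-agreement term
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions (under any relabeling) score 1.0, while partitions agreeing no better than chance score near or below 0, which is why the ARI is preferred over the raw Rand Index as a comparison measure.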
However, it is notable that DIB-C peaks at the actual cluster number. Unlike the other three methods, DIB-C outputs only in the neighborhood of the true cluster number because our algorithm refuses to separate insignificant changes. With the ARI measure, DIB-C showed better accuracy across the cluster numbers than did the other three methods. Under lower noise simulations of 1, 2, and 5%, the maximum ARI values were obtained by DIB-C at the true number of clusters (19), indicating that DIB-C has the highest accuracy of the compared methods. The APF measure was used to delineate the best overall clustering results. APF is the normalized proportion of eigenvalues for each cluster. Previous work used the square root of the second eigenvalue as an overall clustering quality index for microarray data. In this paper, eigenvalues are calculated from the within-cluster covariance matrix and assumed to be sorted in decreasing order, so that the first eigenvalue corresponds to the largest eigenvalue, the second eigenvalue corresponds to the second largest, and so on. From a dimension reduction perspective, the principal components of each resulting cluster lie in the directions of the axes of a constant-density multi-dimensional ellipsoid. With the APF measure, DIB-C had the largest value in the neighborhood of the true cluster number 19 under 1, 2, and 10% noise. A two-dimensional pattern map (as described in the Methods section) is shown to explain how the simulation data was generated, and the hierarchical layers of the simulated data are also shown. Following previous work, the molecular function aspect was used out of the three aspects of GO. After mapping the third-level GO IDs to our pancreas genes, a contingency table of 2,179 genes by 13,505 GO IDs was created. Then the total mutual information MIreal between the clustering result and all the GO IDs was computed.
Next, MIrandom for a clustering result was obtained after random swapping of genes in the original clusters. This procedure was repeated 3,000 times to get the corresponding MIrandom values. Then, we subtracted the mean of the MIrandom values from MIreal and divided by the standard deviation of the MIrandom values. This is a Z-score, interpreted as a standardized distance between the MI value obtained from clustering, after centering and scaling, and the MI values obtained by random assignments of genes to clusters. The higher the Z-score, the better the clustering result, because it indicates that the observed clustering result is further away from the distribution of the random clustering results. For real data, gene set enrichment using Gene Ontology (GO) annotation was used instead of ARI (used for the simulated data) since true cluster membership is not available. GO is a structured, controlled vocabulary for describing the roles of genes and gene products. The Z-score for the mutual information between clustering results and significant GO annotation for the real data decreases with an increase in cluster size, as noted by Gibbons et al. When significant Z-scores were considered, DIB-C gave higher and more stable Z-score values at 28 clusters; this was obtained from the pair of p-value cutoffs for significant first- and second-order differences. For the pancreas dataset, three representative threshold levels, including the 'optimal' result with 28 clusters (Z-score = 3.247), are applied to construct the corresponding three hierarchical layers; Level 2 had a Z-score of 2.557 at cluster number 28. An interactive version of this hierarchical layer can be found at the supplementary webpage. A null pattern (N, N, N, N, N) was found. The optimal clustering result from the last hierarchical layer is reconstructed as a two-dimensional pattern map for the pancreas data.
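The permutation Z-score procedure above can be sketched in a few lines. This is an illustrative implementation, not the paper's code: `stat_fn` stands in for the total mutual information between cluster labels and GO IDs, and the paper's 3,000 permutations would be passed as `n_perm`:

```python
import random
from statistics import mean, stdev

def permutation_z_score(stat_fn, labels, n_perm=3000, seed=0):
    """Z-score of an observed statistic against a null distribution
    built by randomly swapping genes among clusters."""
    rng = random.Random(seed)
    observed = stat_fn(labels)
    null = []
    for _ in range(n_perm):
        shuffled = list(labels)
        rng.shuffle(shuffled)  # random reassignment of genes to clusters
        null.append(stat_fn(shuffled))
    # Standardized distance of the observed value from the null.
    return (observed - mean(null)) / stdev(null)
```

A higher Z-score means the observed statistic lies further into the tail of the permutation null, i.e., the clustering captures more GO-related structure than random assignment.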
DIB-C is a novel clustering algorithm based on the first- and second-order differences of a gene expression matrix. Our algorithm has several advantages over previous clustering algorithms for short time-course data with replicates. First, DIB-C generates interpretable clusters through discretization. Instead of producing many unlabeled partitions, DIB-C offers self-explanatory clusters. The resulting pattern map is visualized using both horizontal (the first-order difference) and vertical (the second-order difference) structures. Each cluster has a label composed of symbols indicating increases or decreases, which have intuitive biological interpretations. Second, our algorithm deals with the rate of change: convex and concave categories are incorporated into the definition of the symbolic pattern. Hence, we can discriminate genes into further subgroups. Third, the identification power is increased by using both the mean and variance of replicates. Conventional algorithms blindly use averaged summary data from replicates. In this way, two average values with different variances are treated equally, thereby decreasing the sensitivity to non-random patterns. Fourth, temporal order is incorporated into the algorithm. Column-wise shuffling (i.e., re-ordering time-points) of input data would give a different output, which is not the case for K-means or SOM because they do not consider the order of input data points. Fifth, DIB-C requires no prior knowledge of representative genes. Even after the appropriate clustering algorithm is chosen, deciding the optimal number of clusters is very important. DIB-C overcomes this problem by exhaustively searching the threshold space in an efficient way. Also, DIB-C offers informative visualization: clusters are arranged so that closely related patterns are gathered together. Such a meta-structure approach is often needed in developmental and cancer biology. The null pattern (N, N, N, N, N) is just another symbolic pattern in our algorithm.
By excluding this null cluster, simultaneous filtering and clustering can be performed, unlike in other clustering algorithms. When a cluster has few members, the APF value tends to become large, since most eigenvalues of its within-cluster covariance matrix could be zero; Tseng et al. also criticized this kind of bias. To control for this potential bias of the APF measure, we assigned 10 members equally to each of the 19 clusters in the simulated data. APF values still had a tendency to increase as the cluster number decreased. Our algorithm is based on conceptual discretizations, such as increasing, decreasing, remaining flat, and convexity or concavity, that are used as basic building blocks to define a pattern and the cluster of genes sharing that pattern. This makes each cluster meaningful and interpretable. Although discretization may cause some loss of information, what investigators expect from time-course gene expression data may be simple statements such as: Which genes are expressed more rapidly over time? Which genes are expressed early and late but remain flat in the middle? Simply finding patterns and clusters may not be sufficient for biologists, and it is our belief that more effort is needed to relate computational analysis results to biological phenomena, even at the cost of losing some information. For the exhaustive search, DIB-C iterates |T|^2 \u00d7 (2p - 3) times, where |T| is the number of threshold values (see the Methods section) and p is the number of time-points. In practice, the investigator would not need to use as many threshold values as in this study, since there are multiple testing issues. A common choice of T would be T = {1 \u00d7 10^-5, 2 \u00d7 10^-5, ..., 9 \u00d7 10^-5, 1 \u00d7 10^-4, ..., 9 \u00d7 10^-4}, which has length 18. The runtime of our algorithm increases exponentially with the number of time-points. However, according to the survey by Ernst et al., most publicly available gene expression datasets have only a small number of time-points. DIB-C can be generalized for use on other types of ordinal data, including stress-response or drug-treatment data, although the example datasets presented here focus on time-course data. In the future, we plan to extend DIB-C to two-factor designs whose orders are in two directions. For example, the analysis of drug-induced gene expression has both a time order and a treatment-dose order. We could apply DIB-C after redefining the first- and second-order differences to take this into account. We synthesized a test set of expression data for 190 genes at four time-points and with eight replicates. The range of log-ratio values was . Based on 19 template genes representing 19 clusters, we generated ten member genes for each cluster with uniform noise, obtaining each such gene using a 190 by 4 condition matrix. For replicates, we added normal noise with standard deviation \u03c3 taken from {0.04, 0.08, 0.2, 0.4}, so the matrix was extended to 190 by 32. We also validated our algorithm with a real dataset involving pancreas gene expression in developing mice. There, lowess normalization was performed to remove spatial effects and to enable fair comparison across time-points. An outline of the algorithm is as follows. For each gene, a (moderated) t-statistic is obtained for each pair of adjacent time-points. If the initial number of time-points was p, we have a vector of (p - 1) t-statistics for each gene. Each t-statistic is then categorized into one of three symbols, I (Increase), D (Decrease) or N (No change), depending on the t-distribution and the predefined cutoff. This constitutes the first-order symbolic pattern vector of length (p - 1). Next, the difference of two adjacent t-statistics is calculated. Each difference is discretized into one of three symbols, V (conVex), A (concAve) or N (No change), depending on the distribution and the cutoff. These symbols constitute the second-order symbolic pattern vector of length (p - 2). Last, a symbolic pattern vector of length (p - 1) + (p - 2) = (2p - 3) is defined. Once the symbolic pattern is obtained, the gene is automatically assigned to this pattern, and the above procedure is repeated for each gene independently. There are two inputs for the proposed algorithm: the gene expression data Y = {ygjk} and the experimental design matrix Q = {qjl}n \u00d7 2. The first-order difference matrix Y(1) is computed between the jth and (j + 1)th time-point groups with their respective sample sizes, and is categorized into three symbols I (Increase), D (Decrease) or N (No change) to get the pattern matrix F = {fgj}n \u00d7 (p - 1), based on the critical value from the t-distribution, for g = 1, 2, ..., n and j = 1, 2, ..., p - 1, where dfgj is the empirically estimated degree of freedom. This step is a usual two-sample t-test. The second-order difference matrix Y(2) is discretized into three symbols V (conVex), A (concAve), or N (No change) to get the symbolic pattern matrix S = {sgj}n \u00d7 (p - 2). Here, the critical values were set empirically using the difference of t-distributions, say T'. To get the quantiles of T', two random samples of size 10,000 were generated from the t-distributions with degrees of freedom mj - 1 and mj+1 - 1. A directed acyclic graph (DAG) gives a representation of the multiple clustering results obtained at different threshold levels; each node represents a symbolic pattern. Once a final level is chosen, the clusters in that level are reorganized into a 2-dimensional pattern map. The columns represent the first-order difference, and the rows represent the second-order difference. Each cell represents one symbolic pattern and contains a detailed profile of all members, with error bars. JK conceived of the algorithm and carried out the data analysis.
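The symbol assignment described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes symmetric critical values on the t-statistics and labels a positive second-order difference as V (convex, accelerating upward) and a negative one as A (concave); the function and parameter names are illustrative:

```python
def symbolic_pattern(t_stats, crit1, crit2):
    """Map a vector of (p-1) adjacent-time-point t-statistics to the
    (2p-3)-symbol pattern: first-order I/D/N, then second-order V/A/N."""
    def first(t):
        if t > crit1:
            return 'I'   # significant increase
        if t < -crit1:
            return 'D'   # significant decrease
        return 'N'       # no change

    def second(d):
        if d > crit2:
            return 'V'   # conVex (slope increasing)
        if d < -crit2:
            return 'A'   # concAve (slope decreasing)
        return 'N'       # no change

    f = [first(t) for t in t_stats]                       # length p-1
    s = [second(t_stats[j + 1] - t_stats[j])              # length p-2
         for j in range(len(t_stats) - 1)]
    return ''.join(f + s)
```

Every gene mapping to the same string belongs to the same cluster, and the all-N string is the null pattern used for filtering.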
JHK oversaw the research and contributed to the evaluation procedure and the visualization. All authors contributed to preparation of the manuscript."}
+{"text": "Boolean network (BN) modeling is a commonly used method for constructing gene regulatory networks from time series microarray data. However, its major drawback is that its computation time is very high or often impractical to construct large-scale gene networks. We propose a variable selection method that are not only reduces BN computation times significantly but also obtains optimal network constructions by using chi-square statistics for testing the independence in contingency tables.Both the computation time and accuracy of the network structures estimated by the proposed method are compared with those of the original BN methods on simulated and real yeast cell cycle microarray gene expression data sets. Our results reveal that the proposed chi-square testing (CST)-based BN method significantly improves the computation time, while its ability to identify all the true network mechanisms was effectively the same as that of full-search BN methods. The proposed BN algorithm is approximately 70.8 and 7.6 times faster than the original BN algorithm when the error sizes of the Best-Fit Extension problem are 0 and 1, respectively. Further, the false positive error rate of the proposed CST-based BN algorithm tends to be less than that of the original BN.The CST-based BN method dramatically improves the computation time of the original BN algorithm. Therefore, it can efficiently infer large-scale gene regulatory network mechanisms. The advancement of high-throughput technologies, such as DNA chips, has enabled the study of interactions and regulations among genes on a genome-wide scale. Recently, many algorithms have been introduced to determine gene regulatory networks based on such high-throughput microarray data, including linear models ,2, BooleIn the linear modeling of a genetic network, the expression data is fitted using a regression model, where the change in expression levels is a response for all other genes . 
Bayesian network algorithms have limitations with regard to determining an important network structure because of their complex modeling strategies (with a large number of parameters to be estimated) and the long computation time required for searching all potential network structures on genome-wide expression data. These limitations of the Bayesian network may be overcome by the dynamic Bayesian network (DBN), which models the stochastic evolution of a set of random variables over time. Recently, studies on the hierarchical scale-free network in lower organisms have also been reported. Among these methods, the Boolean network (BN) is useful for constructing gene regulatory networks from high-throughput microarray data because it can monitor the dynamic behavior of complex systems based on the binarization of such massive expression profiling data. BN models were first introduced by Kauffman. A BN G is defined by a set of nodes V = {x1, ..., xn} and a set of Boolean functions F = {f1, ..., fn}. A Boolean function fi, where i = {1, ..., n}, with k specified input nodes (indegree) is assigned to node xi. The regulation of nodes is defined by F. More specifically, given the values of the nodes (V) at time t - 1, the Boolean functions are used to update these values at time t. The model system has been developed into a so-called Random BN model, and many algorithms and extensions, including probabilistic Boolean networks (PBNs), have been proposed for causal and rule-based network inference. Recently, several software packages, such as the random BN toolbox, have been developed for constructing BNs. The estimation of gene regulatory networks using the BN offers several advantages. First, the BN model effectively explains the dynamic behavior of living systems. However, existing inference algorithms must examine nCk combinations of input variables over m observations for a fixed indegree k, which drives their running time.
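The synchronous update rule defined above can be sketched in a few lines. The 3-node network below is hypothetical, chosen only to illustrate the mechanics; none of these functions come from the paper:

```python
def bn_step(state, functions):
    """Synchronous BN update: each node's value at time t is its Boolean
    function applied to the full state at time t-1."""
    return tuple(int(f(state)) for f in functions)

# Hypothetical network: x1' = x2 AND x3, x2' = NOT x1, x3' = x1 OR x2.
funcs = [
    lambda s: s[1] and s[2],
    lambda s: not s[0],
    lambda s: s[0] or s[1],
]
```

Iterating `bn_step` from an initial state produces the binary trajectories that inference algorithms such as the Best-Fit Extension method fit Boolean functions to.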
However, the BN has some drawbacks. One of the major drawbacks is that it requires extremely high computing times to construct reliable network structures. Therefore, most BN algorithms, such as REVEAL, can be used only with a small number of genes and a low indegree value. For higher indegree values, these algorithms should be accelerated through parallelization in order to increase the search efficiency in the solution space. The consistency problem works in time complexity O(m\u00b7n\u00b7poly(k)), and the improved consistency algorithm and the Best-Fit Extension problem also work in time complexity O(m\u00b7n\u00b7poly(k)); even so, they still scale poorly with n and k. Such high computing times are a major problem in the study of large-scale gene regulatory and gene interaction systems using BNs. In order to overcome the time complexity problem of the BN method, we propose a variable selection method based on the chi-square test (CST). With this selection, the time complexity of the CST-based BN is reduced to O(m\u00b7poly(k)) work per candidate combination, where n is the total number of genes, k is the indegree, m is the total number of time points, nj,1 is the number of first selected genes for the jth gene, and nij,2 is the number of second selected genes when the ith gene is selected in the first step. We have found that the dichotomization of the continuous gene expression values allows us to efficiently perform the independence test on a two-way contingency table for each pairwise network mechanism. We use the CST to identify genes that are associated with a target gene. A target gene would be expressed in accordance with a Boolean function of the selected genes. Since the genes have only two levels (0 and 1), we use 2 \u00d7 2 and 2 \u00d7 2 \u00d7 2 contingency tables to identify the relationships between two and three genes, respectively. The proposed method is used along with the Best-Fit Extension problem and is described in detail in the Methods section.
The proposed CST-based BN adopts the Best-Fit Extension problem, which is commonly used in the PBN, to effectively determine all possible relevant Boolean functions. In our method, the maximum indegree of networks is assumed to be three. We also focus on the Boolean functions that comprise three different literals (input genes in the Boolean function). Each literal is connected by the three Boolean operators NOT(\u00ac), AND(\u2227) and OR(\u2228); for example, f = X1 \u2227 \u00acX2 \u2228 X3. For our simulation study, an artificially generated network structure with a maximum indegree (k) of three was used. The network structure is composed of 27 Boolean functions that are randomly generated from 40 nodes. Forty sets of binary data are obtained sequentially from the network structure. Each data set has different initial states and seven time points. Since the data sets are generated from definite Boolean functions, the genes in the Boolean functions tend to have strong associations. The CST-based BN uses two thresholds \u03b11 and \u03b12, where \u03b11 is used for selecting variables for the main effect and \u03b12 for the conditional effect. A detailed description is provided in the Methods section. The smaller the values of \u03b11 and \u03b12, the stronger the association of the selected variables. In order to select the nodes that have strong associations with the target nodes, we used very small cutoff values, \u03b11 = 1 - e^-15 and \u03b12 = 1 - e^-15, for the CST-based BN when the noise level is 0.01% in a Boolean function. To address the multiple comparison problem, we compare the results of the original BN and the CST-based BN when the noise level of a Boolean function varies from 0.01% to 0.24%. The CST-based BN method yielded the same estimated Boolean functions as those obtained by the original BN method for various noise levels. However, there are large differences between the computing times.
In summary, the CST-based BN method was approximately 6.9 times faster than the original BN method. If the network had a larger number of nodes (n), the difference between the computing times of the two algorithms would be significantly higher. Depending on the choice of the values of \u03b11 and \u03b12, the proposed CST-based BN method may not be able to determine some Boolean functions that can be estimated by the original BN method. This may be attributed to missing essential variables due to the usage of extremely stringent cutoff values. Our variable selection method significantly improves the computing time of the BNs. However, the accuracy of our method should be assessed before comparing the computing times of the two methods. The improvement in the computing times primarily depends on the cutoff statistical significance levels. We define the error rate as the discrepancy between the sets of Boolean functions BFOriginalBN and BFCSTbasedBN estimated by the original BN and the CST-based BN, respectively. Three data sets with randomly selected 80, 100, and 120 genes were used to compute the error rate. We calculated the error rate for various values of \u03b11 and \u03b12; the results indicate that \u03b11 and \u03b12 should be greater than 1% and 2%, respectively, when the error size is zero. With these settings, the computing times of the CST-based BN were approximately 70.8 times (error size = 0) and 7.6 times (error size = 1) faster than those of the original BN method. In order to compare the computing times, we executed the BN program based on the Best-Fit Extension problem (written in C).
We also applied the CST-based BN to a subset of the yeast cell cycle data with 800 genes for demonstration. For the demonstration, we randomly selected a Boolean function from the estimated Boolean functions of each gene; the method used for this purpose was similar to that used in the PBN. For all 800 genes, the CST-based BN required approximately three days to construct the network structure. In order to estimate the total computation time of the original BN, we selected the first target gene and constructed the BN, which required approximately 37,011 s. Therefore, the total computation time for all 800 genes would be approximately 342 days. Hence, the computing time of the proposed CST-based BN method is approximately 114 times faster than that of the original BN method. Recent studies have examined the hierarchical scale-free properties of biological networks. In order to overcome the computational drawback of the BN, we proposed the variable selection method using the CST for the two-way and three-way contingency tables of Boolean count observations. This method reduces the computation times significantly; for example, for 120 genes, the computation time is approximately 70.8 times faster than that of the original BN method. As the total number of genes and the value of k increase, the improvement in the computation time is expected to be significantly greater relative to the original BN method. Also, the proposed method can be easily implemented with existing BN modeling algorithms, such as the PBNs, by efficiently selecting only the most relevant genes for determining the Boolean functions. This method is thus demonstrated to reduce the false positive rate, which is an important problem in network studies conducted on a genome-wide scale. In our method, the value of k is assumed to be three. However, it is possible to use a value of k greater than 3. Since our method uses the Best-Fit Extension problem, a gene can be controlled by more than one Boolean function.
Therefore, it appears that k = 3 provides a large number of Boolean functions that can model the gene regulatory network successfully. The proposed CST-based BN used a two-step discovery of 3-indegree Boolean functions because the predictive information of additional genes in such high-dimensional Boolean functions is mainly observed after considering, or conditioning on, the primary genes' effects. This strategy, in turn, resulted in much more efficient discovery of the most predictive high-dimensional Boolean functions in our BN modeling. The result of the proposed CST-based BN may be sensitive to the sample size n. When n is small, the contingency table may contain many cells with low or zero frequencies. To ensure that the expected value is not equal to zero, a continuity correction is applied by adding a small constant 0.1 to the observed frequency in each cell. The performance of the method also depends on the selected significance levels \u03b11 and \u03b12. Small values of \u03b1 cause the exclusion of essential Boolean functions and produce a high error rate. On the other hand, large values of \u03b1 cause the inclusion of most of the genes, thereby resulting in long computing times. For the yeast cell cycle data, we applied various combinations of \u03b11 and \u03b12. As shown in the results, \u03b11 and \u03b12 should be greater than 1% and 2%, respectively, when the error size of the Best-Fit Extension problem is zero. The improvement of the computing times using the CST-based BN will significantly increase the utility and applicability of the BN to the inference of various regulatory networks, particularly those based on current large-scale screening data such as microarrays. In order to apply the proposed variable selection method, we must first select the values of \u03b11 and \u03b12.
Therefore, for practical application, we suggest that \u03b11 = 1% and \u03b12 = 2% be used when the error size of the Best-Fit Extension problem is zero. We select the cutoff values such that the Boolean functions obtained by the CST-based BN are the same as those obtained by the original BN. However, we can use smaller cutoff values to reduce the number of false positive Boolean functions, because the original BN method tends to yield many false positive relationships, as shown in the simulation results. In addition, a more careful dichotomization is required for a more accurate biological interpretation of the network structure. For example, since microarray data have continuous expression values with a considerably large amount of information, the dichotomization may require the selection of an appropriate threshold value depending on the biological function of each gene. The next step would be to perform a biological evaluation of the selected network structure. However, the main focus of our study is to improve the computation times of the BN by using the CST. Our approach allows the application of the BN to genome-wide network construction and discovery. A future study will evaluate the accuracy of the BN and compare it with other network methods such as the Bayesian network and the hierarchical scale-free network. The proposed CST-based BN consists of two steps. The first step is to determine a pair of genes that are associated with each other. The second step is to determine a third gene that is conditionally associated with the pair of genes identified in the first step. Let n be the total number of genes. In the first step, 2 \u00d7 2 contingency tables are constructed from the dichotomized gene expression data. The pth row comprises the ith gene expression level at time t - 1, while the qth column of the table comprises the jth gene expression level at time t.
For the ith and jth genes, a 2 \u00d7 2 contingency table is constructed with four cells: {0, 0}, {0, 1}, {1, 0}, and {1, 1}, where {p, q} represent the ith gene expression level at time t - 1 and the jth gene expression level at time t, respectively. A CST statistic is then computed for testing the independence between the two genes. For multinomial sampling with probabilities {\u03c0pq} in the contingency table, the null hypothesis of independence is H0 : \u03c0pq = \u03c0p+\u03c0q+ (the ith gene at time t - 1 and the jth gene at time t are independent) for all p and q. The conventional Pearson's CST can be used to test H0 using the observed frequency Opq and the expected frequency Epq under H0. For the continuity correction, we add an arbitrarily small number a to each observed frequency in order to prevent Epq from becoming zero; we use a = 0.1 for the correction. Generally, {\u03c0p+} and {\u03c0q+} are unknown. The maximum likelihood (ML) estimates are the sample marginal proportions p+ = Op+/O++ and q+ = Oq+/O++, where O++ = \u03a3p\u03a3qOpq. Epq is estimated as Epq = O++ p+ q+ = Op+Oq+/O++. Therefore, the chi-square statistic is expressed as \u03c72 = \u03a3p\u03a3q (Opq - Epq)^2/Epq. Using this CST, the significant genes are selected by an appropriate selection criterion \u03b11. A further discussion on the appropriate choice of \u03b11 is provided in the Results section. Assume that the ith gene at time t - 1 is selected in the first step for the jth gene at time t. Then, a 2 \u00d7 2 \u00d7 2 contingency table can be constructed that consists of three genes: the ith and jth genes selected in the first step and an additional new gene h at time t - 1.
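The first-step statistic can be sketched as follows. This is a minimal implementation of Pearson's chi-square for a 2 x 2 table of Boolean counts with the a = 0.1 continuity correction described above; the function name is illustrative:

```python
def chi_square_2x2(table, a=0.1):
    """Pearson chi-square for a 2x2 contingency table of Boolean counts,
    adding a small constant a to every cell (continuity correction)."""
    O = [[table[p][q] + a for q in (0, 1)] for p in (0, 1)]
    total = O[0][0] + O[0][1] + O[1][0] + O[1][1]
    stat = 0.0
    for p in (0, 1):
        for q in (0, 1):
            # Expected count under independence: row total * column total / grand total.
            E = (O[p][0] + O[p][1]) * (O[0][q] + O[1][q]) / total
            stat += (O[p][q] - E) ** 2 / E
    return stat
```

A value near zero indicates independence between the two genes, while a value exceeding the chi-square critical value at the chosen \u03b11 flags the pair as associated.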
This contingency table consists of eight cells: {0,0,0}, {0,0,1}, {0,1,0}, {0,1,1}, {1,0,0}, {1,0,1}, {1,1,0}, and {1,1,1}, where {o, p, q} represent the hth gene expression level at time t - 1, the ith gene expression level at time t - 1, and the jth gene expression level at time t, respectively. For each given expression value of h, there are two 2 \u00d7 2 contingency tables for the i and j genes. We focus on the conditional independence test. The null hypothesis that the ith gene at time t - 1 and the jth gene at time t are conditionally independent, given the hth gene expression level at time t - 1, is H0 : \u03c0pq|o = \u03c0p+|o\u03c0+q|o for all p and q, where \u03c0..|o represents the conditional probability for the given o. We use the CST to test H0 using the observed frequency Oopq and the expected frequency Eopq under H0. We also add 0.1 to each observed frequency for the continuity correction. The ML estimates of \u03c0p+|o and \u03c0+q|o are the sample conditional proportions {\u03c0p+|o = Oop+/Oo++} and {\u03c0+q|o = Oo+q/Oo++}, respectively, where Oo++ = \u03a3p\u03a3qOopq. Epq|o is estimated as Epq|o = Oo++\u03c0p+|o\u03c0+q|o = Oop+Oo+q/Oo++. Then, the chi-square statistics are given by \u03c7o^2 = \u03a3p\u03a3q (Oopq - Eopq)^2/Eopq for the given expression value o = 0, 1. We assume that the ith and jth genes are not independent from the first step. We select the hth gene if at least one of the two CSTs in the second step is significant. The rationale for using this conditional independence test is that the expression level of h affects the association between the two genes i and j. The conditional test approach is very effective in the determination of a relationship among more than two genes. We select nj1 genes at time t - 1 that are associated with the jth gene at time t in the first step, where 1 \u2264 nj1 \u2264 n. If the ith gene is one of the nj1 genes, we select nij2 genes in the second step, where 1 \u2264 nij2 \u2264 n - 1 (excluding the ith gene).
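A minimal sketch of the two chi-square computations above, assuming dichotomized (0/1) expression values; the function and variable names are illustrative, not taken from the paper's software:

```python
def cst_statistic(table, a=0.1):
    """Pearson chi-square statistic for a 2 x 2 table of observed
    frequencies Opq, with the continuity correction from the text:
    a small constant `a` is added to every cell so that no expected
    frequency Epq becomes zero."""
    O = [[table[p][q] + a for q in (0, 1)] for p in (0, 1)]
    row = [O[p][0] + O[p][1] for p in (0, 1)]        # Op+
    col = [O[0][q] + O[1][q] for q in (0, 1)]        # O+q
    total = row[0] + row[1]                          # O++
    chi2 = 0.0
    for p in (0, 1):
        for q in (0, 1):
            E = row[p] * col[q] / total              # Epq = Op+ O+q / O++
            chi2 += (O[p][q] - E) ** 2 / E
    return chi2

def conditional_cst(table3, a=0.1):
    """Second-step test on a 2 x 2 x 2 table indexed [o][p][q]:
    one chi-square statistic per level o of the conditioning gene h.
    Gene h is selected if at least one statistic is significant."""
    return [cst_statistic(table3[o], a=a) for o in (0, 1)]
```

Comparing each returned statistic with the chi-square quantile for the chosen significance level (α1 in the first step) completes the test.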
Then, we consider all possible combinations of the selected genes (when k = 3): one gene is the ith gene selected in the first step and the other two genes are selected in the second step. These combinations are used instead of all possible combinations in the original BN algorithms. The time complexity of determining the Boolean functions for the jth gene is O(m\u00b7poly(k)), and the total time complexity of the proposed algorithm combines this cost with that of variable selection using the CST. As n increases, the time complexity of determining the Boolean functions dominates the time complexity of variable selection, because the former increases more rapidly than the latter. In the toy example, two Boolean functions \u2013 f1 = \u00acG3 \u2228 (\u00acG4 \u2227 \u00acG8) and f3 = \u00acG1 \u2228 (G2 \u2227 G6) \u2013 are true. To determine these functions, the original BN searches all possible combinations of genes at time t - 1 for every target gene at time t (8C3 \u00d7 8 = 448 when the indegree is three). We can reduce the BN combinations by using the proposed variable selection method. Target genes at time t that do not have any genes selected in the first step are excluded. The second step is performed only for the genes selected in the first step, in order to identify genes that are conditionally associated with the target gene. Each conditional test involves a target gene at time t, a gene at time t - 1 from the first step, and a new gene at time t - 1. The total number of possible combinations of the CST-based BN is 47; therefore, the CST-based BN is about 9.5 times faster than the original BN. Project name: Boolean networks for large-scale gene regulatory networks. Project home page: . Boolean functions for a toy example: R code for the true Boolean functions in the toy example. Data set for the toy example.
Binary data set of 18 time points \u00d7 8 genes generated by using the two Boolean functions. 27 Boolean functions for the simulation study: R code for the true functions in the simulation study. Data set for the simulation study: 43 data sets of 7 time points \u00d7 40 genes generated by using the 27 Boolean functions. Yeast cell cycle data: binary data sets of randomly selected 40\u2013120 genes. Each data set consists of time points (rows) \u00d7 number of genes (columns). The mean value was used as the dichotomization criterion."}
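The two true Boolean functions of the toy example can be simulated with a short sketch; the functions f1 and f3 are taken from the text, while the random update of the remaining six genes is an assumption here, since the source specifies only f1 and f3:

```python
import random

def f1(s):
    # f1 = not G3 or (not G4 and not G8); indices are 0-based: G3 -> s[2]
    return int((not s[2]) or ((not s[3]) and (not s[7])))

def f3(s):
    # f3 = not G1 or (G2 and G6)
    return int((not s[0]) or (s[1] and s[5]))

def simulate(n_steps=18, seed=0):
    """Generate a binary time series of 18 time points x 8 genes, as
    in the toy data set.  Genes without a specified function are
    resampled at random (an illustrative assumption)."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(8)]
    series = [state]
    for _ in range(n_steps - 1):
        nxt = [rng.randint(0, 1) for _ in range(8)]
        nxt[0] = f1(state)  # G1 at time t depends on the state at t - 1
        nxt[2] = f3(state)  # G3 at time t depends on the state at t - 1
        series.append(nxt)
        state = nxt
    return series
```

With indegree three, the original BN would examine 8C3 × 8 = 448 candidate combinations on such a series, while the CST-based variable selection reduces that number as described above.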
+{"text": "A central goal of molecular biology is to understand the regulatory mechanisms of gene transcription and protein synthesis. Because of their solid basis in statistics, which allows the stochastic aspects of gene expression and noisy measurements to be handled in a natural way, Bayesian networks appear attractive for inferring gene interaction structure from microarray experiment data. However, the basic formalism has some disadvantages, e.g. it is sometimes hard to distinguish between the origin and the target of an interaction. Two kinds of microarray experiments yield data particularly rich in information regarding the direction of interactions: time series and perturbation experiments. In order to handle them correctly, the basic formalism must be modified. For example, dynamic Bayesian networks (DBN) apply to time series microarray data. To our knowledge the DBN technique has not been applied in the context of perturbation experiments. We extend the framework of dynamic Bayesian networks in order to incorporate perturbations. Moreover, an exact algorithm for inferring an optimal network is proposed and a discretization method specialized for time series data from perturbation experiments is introduced. We apply our procedure to realistic simulation data. The results are compared with those obtained by standard DBN learning techniques. Moreover, the advantages of using an exact learning algorithm instead of heuristic methods are analyzed. We show that the quality of inferred networks dramatically improves when using data from perturbation experiments. We also conclude that the exact algorithm should be used when it is possible, i.e. when the considered set of genes is small enough. Since most genetic regulatory systems involve many components connected through complex networks of interactions, formal methods and computer tools for modeling and simulation are needed.
Therefore, various formalisms have been proposed to describe genetic regulatory systems, including Boolean networks and their generalizations, ordinary and partial differential equations, stochastic equations and Bayesian networks. A Bayesian network is a representation of a joint probability distribution over a set of random variables. It consists of two components: \u2022 a directed acyclic graph whose vertices correspond to random variables and whose edges indicate conditional dependence relations; \u2022 a family of conditional distributions for each variable, given its parents in the graph. Together, these two components determine a unique joint distribution. When applying Bayesian networks to genetic regulatory systems, vertices are identified with genes and their expression levels, edges indicate interactions between genes, and conditional distributions describe these interactions. Given a set of gene expression data, the learning techniques for Bayesian networks allow one to infer networks that match this set well. It should also be pointed out that the basic BN formalism has some major limitations. First, several networks with the same undirected graph structure but different directions of some edges may represent the same distribution. Hence, relying on expression levels only, the origin and the target of an interaction become indistinguishable. Second, the acyclicity constraint rules out feedback loops, which are essential in genetic networks. Third, the dynamics of a gene regulatory system is not taken into account. These limitations may be overcome by dynamic Bayesian networks (DBNs), which model the stochastic evolution of a set of random variables over time. In comparison with BNs, discrete time is introduced and conditional distributions are related to the values of parent variables at the previous time point.
Moreover, in DBNs the acyclicity constraint is relaxed. Given a set of time series of expression data, the learning techniques adapted from BNs allow one to infer dynamic networks that match well the temporal evolution contained in the series. A special treatment is required for experiments in which the expression of some genes was perturbed (e.g. knockout experiments). Since perturbations change the structure of interactions (regulation of affected genes is excluded), the learning techniques have to use the data selectively. It should also be pointed out that not every perturbation experiment may be realized in practice, since some perturbations of a regulatory mechanism may be lethal to the organism. On the other hand, data from perturbation experiments are particularly rich in information regarding causal relationships. Inferring networks from perturbed expression profiles by means of BNs has been investigated before; to our knowledge, however, the DBN technique has not been applied in the context of perturbation experiments. When analysing a learning procedure's efficiency, the procedure should be applied to data generated by a known network, which can then be compared with the inferred one. To this aim, most studies apply procedures to gene expression data and compare the inferred interactions with those found in the biological literature. The disadvantage of this approach is that our knowledge of the structures of real biological networks is far from complete, even in the most deeply investigated organisms. Although many interactions between genes are known, there are very few results stating the absence of some interactions. Thus no conclusion can be drawn from the fact that the procedure inferred an unknown interaction. This disadvantage is no longer present when data are generated from a mathematical model simulating real networks. However, a danger of this approach is that the employed model simplifies the real process, losing important biological features.
In that case, analysis of a learning procedure is aimed at its ability to infer an artificial model rather than real biological networks. Husmeier suggested the use of realistic simulations as a compromise. In the present study we generate data using a previously proposed model, in which t represents time, kx are the kinetic constants of the related reactions, [X] means concentration, pX, mX, X and X2 are the promoter, mRNA, protein and dimer of X, respectively, and X\u00b7Y stands for a transcription factor bound to a promoter. The system is composed of structures reported in the biological literature. The total time of each simulation is set to 5000 minutes. At time 1000 minutes the ligand is injected for 10 minutes, changing the expression levels of all genes. The influence of the injection on expression decays throughout the interval 1500\u20133200 minutes (depending on the gene), but at time 2400 minutes the system dynamics becomes similar to the initial one. All the equations and parameters of the model, as well as the MATLAB code to integrate it, are available in the supplementary materials. This model is chosen for two reasons. First, the differential equations formalism and the biological origin of the structure guarantee realistic simulations. Second, the small size of the system makes it possible to avoid the noise arising from the heuristic search methods that are necessary when dealing with large networks; such noise might distort the comparison of methods. Since genes G and K in the model are regulated by the same gene C and have the same kinetic constants, the trajectories of their concentrations are identical. Consequently, their contributions to the regulatory interactions are indistinguishable given the expression data. For this reason, these genes require a special treatment, as noted by Husmeier.
Summary of the algorithm: for each gene G, \u00a0\u00a0\u00a0choose all experiments with unperturbed expression of G; \u00a0\u00a0\u00a0for each potential parent set Pa of G, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute the local score for Pa and the chosen experiments; \u00a0\u00a0\u00a0choose the parent set of G yielding the optimal score; finally, compose the network from the chosen parent sets. Software for finding optimal DBNs is available in the supplementary materials. In the present section our exact algorithm is applied to the datasets from the model modified by introducing perturbations. The results are compared with those obtained from the basic model, as well as with those obtained by Bayesian learning with the Markov chain Monte Carlo (MCMC) method. In the first experiment we followed the procedure of Husmeier. The variability may be increased by allowing more discretization levels, but this can disturb the inference. The reason is that the BN formalism disregards the structure of the sets of possible values of random variables. For example, the information that the discretized expression level '0' is closer to the level '1' than to '2' is ignored. Consequently, the learning procedure treats the situation in which some configuration of regulators causes a regulon to assume the value '0' or '1' in the same way as the one in which it is caused to assume the value '0' or '2'. Our experiments with gene expression discretized into more than 3 levels do not improve the results (data not shown). The dataset used in the experiment was quite large: 10 series of 12 time points each gives 120 slices. On the other hand, the variability of the discretized expression levels is rather low, as is shown in Fig. . The disproportion between a large dataset and a low variability may be avoided by decreasing the number of samples. Hence we decided to choose for the next experiment 3 time points in equal-length intervals between 1100 and 1600 minutes.
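The per-gene parent-set selection summarized above can be sketched as follows. The `local_score` callback is a placeholder for whichever scoring function is actually used; the data layout of `experiments` is an illustrative assumption:

```python
from itertools import combinations

def learn_parent_sets(genes, experiments, local_score, max_indegree=3):
    """Exhaustive parent-set selection: for each gene, score every
    candidate parent set using only the experiments in which that
    gene is unperturbed, and keep the best-scoring set.
    `experiments` is a list of dicts holding the data slices and the
    set of genes perturbed in each experiment.  Self-loops are
    excluded, the variant reported as best in the text."""
    network = {}
    for g in genes:
        # use only experiments where g's own regulation is intact
        data = [e["data"] for e in experiments if g not in e["perturbed"]]
        candidates = [x for x in genes if x != g]
        best, best_score = (), float("-inf")
        for k in range(max_indegree + 1):
            for pa in combinations(candidates, k):
                s = local_score(g, pa, data)
                if s > best_score:
                    best, best_score = pa, s
        network[g] = best
    return network
```

Because every candidate set is scored, the result is exact rather than heuristic, which is feasible only while the gene set stays small, as the text argues.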
The accuracy significantly improved \u2013 the inferred network contains 7 edges corresponding to direct transcriptional regulation, 1 reflecting posttranscriptional regulation and 2 spurious edges (see Fig. ). Other time intervals were also tried, resulting in networks less accurate than the two above (data not shown), which confirms Husmeier's assertion of the low information content of signals from a system in equilibrium. Corresponding experiments were also executed for the overexpression data, as well as for both kinds of perturbed data together. The accuracy of the overexpression experiments does not match that of the knockout ones. However, it is worth pointing out that, unlike in the knockout data case, the edges A\u2192C, B\u2192C and E\u2192F were inferred correctly. The best results were obtained when both kinds of perturbations were used together, as shown in Fig. . We define the p-value of a network with k true edges out of m inferred ones to be the probability of finding at least k true edges when choosing m edges at random. According to the hypergeometric distribution, the probability of n successful selections out of m from a set of N true and M - N false edges is given by P(n) = (NCn)(M-NCm-n)/(MCm); consequently, the p-value of a network is defined as the sum of P(n) over n \u2265 k. M equals 102 for the full considered network or 92 for the network restricted to 9 genes (used for unperturbed data). The value of N depends on whether we allow direct transcriptional regulation only. P-values of the networks from Fig. are reported in the tables. The above considerations refer to inferring local interactions between genes, represented by particular edges. In order to analyse the ability to infer a global interaction system, one has to compare the score of the original network with the scores of other networks. Since it is impossible to compute the scores of all graphs (their number grows super-exponentially with the number of nodes n), we sampled 100 000 graphs at random. For each graph, edges were generated independently, each with the same probability.
The uniform distribution on the space of all graphs could be obtained by setting this probability to 1/2, but that would cause the scores of most graphs to be dominated by high penalties due to excessive structure. In order to get networks with scores close to the original one, we chose the probability resulting in an expected number of 12 edges in the graph (11 edges between different nodes in the cases of forbidden and forced self-connecting edges). Original and randomly generated graphs are available in the supplementary materials. The tables show that using perturbed data significantly improves the possibility of inferring the original network. The results obtained in the experiments with 3 time points are usually better than those in the experiment with 12 time points, but the differences between them are relatively small. Comparison of the values for particular variants of the algorithm shows that the best results are obtained when self-loops are forbidden, slightly worse when self-loops are permitted, and significantly worse when they are forced. The analysis of the best-scored networks with permitted self-loops leads to the conclusion that self-regulation of genes cannot be handled correctly within our framework and requires more refined methods. Therefore, with respect to our algorithm's variants, the best choice is to forbid self-loops. In the present paper the framework of dynamic Bayesian networks is extended in order to handle gene expression perturbations. A new discretization method specialized for datasets from time-series gene perturbation experiments is also introduced. Networks inferred from realistic simulation data by our method are compared with those obtained by DBN learning techniques. The comparison shows that application of our method substantially improves the quality of inference.
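The network p-value defined earlier (the probability of obtaining at least k true edges among m edges chosen at random from N true and M - N false ones) can be sketched directly from the hypergeometric tail; the function name is illustrative:

```python
from math import comb

def network_p_value(k, m, N, M):
    """p-value of a network with k true edges out of m inferred:
    the probability of getting at least k true edges when choosing
    m edges uniformly at random from a space of N true and M - N
    false edges (hypergeometric upper tail).  E.g. M = 102 for the
    full network considered in the text."""
    total = comb(M, m)
    return sum(comb(N, n) * comb(M - N, m - n)
               for n in range(k, min(m, N) + 1)) / total
```

Small p-values indicate that the inferred network recovers more true edges than random edge selection would.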
Moreover, our results lead to the suggestion that the exact algorithm should be applied when possible, i.e. when the set of genes is small enough. The reason is the high variability of the networks resulting from heuristics and their lower accuracy. Since self-regulating interactions appeared to be intractable by DBN learning techniques, we also suggest forbidding self-connecting edges in inferred networks. Our experiments show that this choice makes the learning procedure more sensitive to other interactions than it would be with self-loops permitted or forced. An important problem in designing time-series expression experiments is to determine sampling rates properly. Our experiments show that assuming too short a rate results in noisy expression profiles, just as when the samples are chosen from a system in equilibrium. Consequently, networks inferred from over-sampled datasets have low accuracy. The reason for this surprising finding is the Markovian assumption of DBNs, which states that the value of an expression profile at a particular time point depends only on the value of the profile at the preceding time point. It means that the sampling rate has to match the delay of the regulation processes. Most learning procedures working with time-series gene expression data make similar assumptions. This is unlike those working with independent expression profiles (e.g. BNs), which assume that the considered profiles represent steady states. ND designed the extension of the DBN framework incorporating perturbations, performed the statistical analysis and participated in executing the experiments. AG participated in the design of the study. AM implemented and tested the exact algorithm. BW implemented modifications concerning perturbations: to the system of differential equations of the regulatory network and to t"}
+{"text": "The detection and analysis of steady-state gene expression has become routine. Time-series microarrays are of growing interest to systems biologists for deciphering the dynamic nature and complex regulation of biosystems. Most temporal microarray data only contain a limited number of time points, giving rise to short-time-series data, which imposes challenges for traditional methods of extracting meaningful information. To obtain useful information from the wealth of short time-series data requires addressing the problems that arise due to limited sampling. Current efforts have shown promise in improving the analysis of short time-series microarray data, although challenges remain. This commentary addresses recent advances in methods for short time-series analysis, including simplification-based approaches and the integration of multi-source information. Nevertheless, further studies and development of computational methods are needed to provide practical solutions to fully exploit the potential of this data. Time-series microarrays capture multiple expression profiles at discrete time points of a continuous cellular process. These data can characterize the complex dynamics and regulation in the form of differential gene expression as a function of time. Numerous time-series microarray experiments have been performed to study such biological processes as the biological rhythms or circadian clock of Arabidopsis, flowering time, abiotic stress, disease progression, and drug responses. A significant challenge in dealing with time-series data comes from the limited sampling or number of time points taken, giving rise to short, sparse time-series data. In the growing pool of temporal microarray datasets, a typical time-series record has fewer than ten time points.
Limited sampling accentuates the difficulties associated with static or standard time-series analyses. First, problems arise due to high dimensionality accompanied by a small sample size, such as matrix singularity and model over-fitting in the analysis. Improving short time-series analysis requires addressing the problems that arise due to limited sampling. Recent efforts by investigators to overcome these difficulties include decreasing the complexity of continuous time-series data based on simplification strategies, and enriching the data with multi-source information. Simplification strategies reduce time-series data from continuous to discrete representations prior to analysis. These strategies usually transform the raw temporal profiles into a set of symbols describing the direction of change, that is, \"I (Increase), D (Decrease) or N (No change)\", and the rate of change, that is, \"V (conVex), A (concAve) or N (No change)\". Inevitably, information is lost through this simplification. Even so, such conceptual discretization helped achieve more interpretable and biologically meaningful clusters. Simplification methods have a side benefit of reducing the noise in the original data to some degree when decreasing the dimension of the time-series data, thus making the subsequent analysis more robust to noise. This was demonstrated by Sacchi et al. A key challenge with simplification strategies is how to pre-define a priori representative temporal trends or patterns of gene expression in the discretization step. Defining these patterns has largely depended on the expertise of the researchers; for example, Gerber et al. defined six temporal expression trends in terms of phase and direction (increase and decrease), and similar sets of basic trends (Increasing, Decreasing and Steady) have been defined over subintervals. However, such pre-defined patterns may not capture all the trends present in the data.
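A minimal sketch of this kind of symbolic discretization; the direction symbols I/D/N follow the text, while the threshold parameter and function name are illustrative assumptions:

```python
def discretize_direction(profile, threshold=0.5):
    """Map consecutive differences of an expression profile to the
    direction symbols used by simplification strategies:
    I (Increase), D (Decrease), N (No change)."""
    symbols = []
    for prev, curr in zip(profile, profile[1:]):
        delta = curr - prev
        if delta > threshold:
            symbols.append("I")
        elif delta < -threshold:
            symbols.append("D")
        else:
            symbols.append("N")
    return "".join(symbols)
```

Profiles that map to the same symbol string can then be grouped into the same qualitative cluster, which is what makes the discretized representation interpretable.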
Althougntervals may not ntervals .Incorporating multi-source information, including prior knowledge ,39, multDifferent types of prior knowledge have been used to improve the computational analysis of short time-series data. They include applying a prior noise distribution to the expression data . For exaSaccharomyces cerevisiae response to stress [Arabidopsis thaliana [2 on the physiology of A. thaliana [In addition, pre-defined gene sets involving specific pathways or functional categories have focused on pattern changes of sets of genes rather than individual genes and helped to enhance our understanding of cellular processes ,39. Simio stress , and to thaliana , respectthaliana , the effthaliana , and to thaliana . Similarthaliana . Combinithaliana , as wellthaliana and functhaliana under diA key challenge with integrating different datasets is the heterogeneity of the data, that is, each set may have a unique set of sampling rates, time-scales, cell types, and sample populations, as well as varying measurement noise levels, etc. The heterogeneity across the datasets increases the difficulty in extracting meaningful results. To maximize the usefulness and minimize the heterogeneity of the publicly available data, stricter standardization methods should be defined and imposed on procedures such as data collection and pre-processing. Indeed, standards such as MIAME (Minimum information about a microarray experiment), MIAPE (Minimum information about a preoteomics experiment), MSI (Metabolomics standards initiative), MIMIx (Minimum information required for reporting a molecular interaction experiment) have been proposed and implemented for presenting and exchanging gene expression , proteomIn summary, analysis of short time-series microarrays is still at an early stage. 
Most studies using short time-series data have applied methods that had been developed for static or long time-series microarray data, and which tend to perform poorly with limited temporal sampling. Current efforts, including simplification approaches and the integration of multi-source information, have shed promising light on improving the analysis of short time-series microarray data. Future studies could combine both of these strategies to simultaneously decrease the complexity of continuous time-series representations, yet minimize the information loss of the simplification-based approaches by increasing the information content of the data. Gene-module-level analysis could be a potential solution, in which the concept of modularity not only plays a central role in incorporating multi-source biological information, but also reflects a simplification strategy focusing on groups of genes rather than individual ones; gene-module-level analysis could thus efficiently combine both strategies. A recent study by Hirose et al. explored a strategy along these lines. Thus far, the predominant focus has still been on lower levels of analysis, such as detecting differentially expressed genes or clustering genes with similar temporal profiles, whereas few higher levels of analysis, i.e. network construction, have been reported. With the rapid growth in availability of short time-series data, more theoretical and technical studies are urgently needed to provide practical solutions to exploit fully the potential of this wealth of data."}
+{"text": "Time-course microarray experiments study the progress of gene expression along time across one or several experimental conditions. Most developed analysis methods focus on the clustering or the differential expression analysis of genes and do not integrate functional information. The assessment of the functional aspects of time-course transcriptomics data requires the use of approaches that exploit the activation dynamics of the functional categories to which genes are annotated. We present three novel methodologies for the functional assessment of time-course microarray data. i) maSigFun derives from the maSigPro method, a regression-based strategy to model time-dependent expression patterns and identify genes with differences across series. maSigFun fits a regression model for groups of genes labeled by a functional class and selects those categories which have a significant model. ii) PCA-maSigFun fits a PCA model to each functional-class-defined expression matrix to extract orthogonal patterns of expression change, which are then assessed for their fit to a time-dependent regression model. iii) ASCA-functional uses the ASCA model to rank genes according to their correlation to principal time expression patterns and assesses functional enrichment in a GSA fashion. We used simulated and experimental datasets to study these novel approaches. Results were compared to alternative methodologies. Synthetic and experimental data showed that the different methods are able to capture different aspects of the relationship between genes, functions and co-expression that are biologically meaningful. The methods should not be considered as competitive; rather, they provide different insights into the molecular and functional dynamic events taking place within the biological system under study. A significant number of statistical methods have been published as microarray time-course experiments have expanded, addressing the analysis of this type of data.
Microarray time-course experiments have gained popularity in recent years to address the study of biological phenomena where the dynamics of gene expression is of relevance. In contrast to classical control-case studies, where basically two conditions are compared, time-series experiments encompass investigations of diverse nature and complexity. Studies may relate to developmental processes with a large number of sampling points or to shorter experimental designs. Many of the developed analysis algorithms consider the clustering of serial data. Proposed strategies include the use of Gaussian mixed models and Bayesian models to model time series, and ad-hoc algorithms have been developed for short series; other approaches address multivariate settings or specific experimental designs. In all of these approaches, statistical analysis focuses on modeling gene expression and identifying those genes with a relevant variation pattern. This orientation, though valid and useful, solves only one (frequently the first) requirement for understanding transcriptomic changes from any kind of microarray experiment. In most cases, the analysis proceeds through the identification of cellular processes and functions which are represented by the gene selection, i.e. genes are identified by their functional role and the question is then which functional modifications can be derived from the observed gene regulation. The incorporation of functional information into data analysis is normally achieved by the use of functional annotation databases that define and assign function labels to known genes. The most widely used functional annotation scheme is the Gene Ontology (GO), which provides controlled vocabularies describing gene function. This strategy for the functional evaluation of differential gene expression has a number of limitations. In this paper we have set out to address the problem of the functional assessment of gene expression in time-series data in an alternative manner.
We have developed and tested three distinct strategies which respond differently to the various concerns mentioned above. The proposed methods derive from previous methodologies developed in our group for the analysis of short, multiple-series data which follow a gene-centric orientation, notably maSigPro. We have used both synthetic and experimental datasets to assess the different methods. Simulated data provides a means of understanding the working of the methodologies, while experimental data offers insights into the biological relevance of the strategies. Furthermore, we provide a comparison with other available methods. Algorithms were implemented in the R language and are available online. maSigFun derives from maSigPro, a regression-based approach for the analysis of multiple-series time-course microarray data. The adaptation of maSigPro to consider functional information \u2013 maSigFun \u2013 is quite straightforward: the regression model is not fitted gene-wise as in maSigPro, but to the data matrix composed of the expression values of all genes belonging to the functional class; thus one regression model is fitted to each functional category. In this approach, individual genes are considered as different observations of the expression profile of the class. As genes belonging to the same class may show different basal expression levels, and this may negatively influence the estimation of model parameters, expression data is standardized gene-wise to better capture the correlation structure within the functional group. After this transformation, statistical analysis proceeds as in regular maSigPro. In the second strategy, PCA-maSigFun, we consider that a functional block might display not only one but several patterns of coordinated gene expression. These distinct patterns are extracted by following a strategy similar to earlier proposals. In gene set analysis (GSA) approaches, genes are ranked by a t-statistic or a similar statistic.
Enrichment analysis is performed along this rank by assessing the differential distribution of each functional block along the ranked gene list. In the case of ASCA-functional, ASCA-genes is first applied to create PCA submodels associated with each experimental factor. Similarly to the previous method, the gene loadings on each component of each submodel are a measure of the similarity of each particular gene expression profile to the pattern depicted by that component of the submodel. Genes with high positive loadings mostly follow the pattern indicated by the component; genes with high negative loadings follow an opposite pattern, while genes with loadings close to zero do not resemble the behavior represented by the principal component. Badly modeled genes are identified in ASCA-genes by their high squared prediction error (SPE). Synthetic and experimental datasets were used to assess the proposed methods. Synthetic data was designed to depict different scenarios of co-expression, while the experimental sets reflect two microarray studies involving different probe sizes and biological systems. Two simulation studies were designed to evaluate the effect of class size and of the percentage of co-expressed genes on the identification of functional categories that change along the time course. Both studies use the same primary data structure: the hypothetical experiment contained two series (Control and Treatment) and three time points. Synthetic datasets consisted of a total of 10,000 genes in study A and several sizes in study B, distributed in 250 classes, of which 225 classes contain only flat genes and 25 classes include at least one differentially expressed gene. 
Modeled responsive genes follow one of four possible patterns of expression: 1) flat profile for control and continuous induction for treatment, 2) flat profile for control and continuous repression for treatment, 3) flat profile for control and transitory induction for treatment, and 4) flat profile for control and transitory repression for treatment. In each of the 25 classes with some non-flat genes only one of the four patterns is present, meaning that all changing genes in the class follow the same profile, have a positive correlation and can be regarded as \"co-expressed\". In each individual simulation, noise was introduced into the datasets by adding to the defined profiles random values drawn from a normal distribution. The first simulated study (A) analyzes how the percentage of co-expressed genes within the functional class affects the identification of the category. In this study, functional classes varied in size (number of genes), taking values of 5, 10, 30, 55 and 100. Seven different datasets were created in this study, each of them with a different percentage of co-expressed genes for all of the 25 non-all-flat classes present in the dataset. In the second simulated study (B) we evaluated the effect of the class size. Here, 4 \u00d7 3 datasets were created, each of them having a fixed value for the class size and a fixed value for the percentage of genes with change. For each method, the numbers of false positives (FP) and false negatives (FN), the sensitivity (proportion of positives which were correctly identified) and the specificity (proportion of negatives which were correctly identified) were computed. In all methods the significance threshold was set at a 0.05 false discovery rate (FDR). In the case of maSigFun, recall statistics were calculated at different values of the R2 parameter, since this was expected to have a great influence on the results. The R2, or goodness of fit, indicates how well the model fits the data and therefore reflects the coherence within the observations. 
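The recall statistics described above follow directly from the selected and truly changing class sets; a minimal sketch (illustrative class labels, not the authors' R code):

```python
# Sketch: recall statistics as used in the simulation studies. Given the set
# of truly changing classes and the set selected by a method, count false
# positives and false negatives and derive sensitivity (fraction of true
# positives recovered) and specificity (fraction of negatives correctly left
# unselected). `n_classes` is the total number of classes (250 here).

def recall_stats(selected, true_positive, n_classes):
    selected, true_positive = set(selected), set(true_positive)
    tp = len(selected & true_positive)
    fp = len(selected - true_positive)
    fn = len(true_positive - selected)
    tn = n_classes - tp - fp - fn
    return {"FP": fp, "FN": fn,
            "sensitivity": tp / len(true_positive),
            "specificity": tn / (tn + fp)}

# Toy example: 250 classes, 25 truly changing, a method selects 22 of them
# plus 3 flat classes.  FP=3, FN=3, sensitivity=0.88, specificity~0.987.
truth = set(range(25))
sel = set(range(22)) | {100, 101, 102}
stats = recall_stats(sel, truth, 250)
```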
Previous studies with maSigPro indicated that a cut-off value of 0.6 would be appropriate for the selection of differentially expressed genes in time-course microarray data. The in silico analysis revealed that the maSigFun methodology is sensitive at identifying functional classes with a high proportion of changing genes (70%) when a moderate R2 cut-off (0.4) is imposed. At higher R2 sensitivity drops, while the consequence of releasing the R2 filter (R2 = 0) was that functional classes with a low proportion of regulated genes (20%\u201330%) could also be selected. In most cases one or two GO-components were obtained per GO term, and only in very generic classes, such as translation or ribosome, up to 3 patterns of correlated behavior were extracted. maSigPro analysis on the matrix of these new functional variables resulted in the identification of 33 BP, 15 MF and 10 CC significant features. Functions associated with the first pattern include fatty acid metabolism and oxidation (-), cell adhesion (-), amino acid metabolism (-), translation, microtubule organization (+), endopeptidase inhibitor activity (-) and vesicular fraction (+). Functions associated with the second pattern include translation (+), negative regulation of cell proliferation (+), acute inflammatory response, xenobiotic metabolic process, signal transduction, biopolymer methylation (-), maintenance of localization (+), response to toxic compound (+), iron ion binding, exopeptidase activity (+), kinase activity (+), epoxide hydrolase activity (+) and ribosome. Finally, in the third pattern we found cation homeostasis (+), nitric oxide mediated signal transduction (+), copper ion binding (+) and lysosome (+). It is important to mention that, in most cases, only a subset of the genes annotated to each GO term showed significant contributions to the GO-component, indicating the predominant role of these genes in the determination of the pattern. 
In a few cases, corresponding to very general categories such as translation or ribosome, none of the annotated genes reached the threshold of significant contribution, but a continuum of signal was observed, which would indicate a small but coordinated gene activity within the class. Finally, in some cases, such as xenobiotic compound and acute-phase, genes were observed that display either a positive or a negative significant contribution to the component, which implies that coordination is present but with positively and negatively acting elements. For example, in the case of acute-phase, alpha-1-glycoprotein, a positive acute-phase protein, was found to have a significant contribution to the acute-phase GO-component pattern that represented gene expression activation with high bromobenzene (BB) at 24 h. Another three proteins, alpha-1-inhibitor, albumin and trypsin, are known as negative acute-phase proteins. The ASCA-functional method gave an intermediate result between the two previous approaches. Analysis by ASCA indicated three main independent patterns of variation within the transcriptomics signal. As in the other approaches, the first component, which collects 46% of the gene expression variability, represents the pattern of change (induction or repression) by high BB at 24 hours. The second experimental dataset comprised stress series (cold, salt and heat) plus one control series measured along 3 time points on the NSF potato 10 k chip. In general, the three different approaches behaved in a similar fashion as in the toxicogenomics dataset, although a much richer functional response was observed in this study. The major gene expression pattern within this dataset corresponds to the differential behavior of the cold and salt stresses with respect to the control and heat conditions. 
A differential regulation is observed between the two pairs of series already at 3 hours, peaking at 9 hours and maintained until the end of the experiment. maSigFun selected functional classes showing induction (+) or repression (-) of the class as a whole for the cold and salt stressors with respect to the control and heat conditions. Down-regulated processes included photosynthesis-related terms, fructose metabolism, cell-wall modification, lateral root morphogenesis and the reductive pentose-phosphate cycle. Up-regulated processes referred to protein turnover, response to hypoxia and glucose stimulus, multi-drug transport, the salicylic acid signaling pathway and diverse enzymatic activities. PCA-maSigFun again gave a much richer view of cellular processes (447 selected GO terms) and highlighted additional functions such as response to stress, chitinase activity, oxidoreductase activity, transmembrane transport, secretory pathway, jasmonic acid signaling and abscisic acid pathways, among many others. Finally, ASCA-functional analysis indicated that the major pattern of variability, comprising 57% of the variability contained in the dataset, was the difference between the cold and salt stresses on one hand and the heat and control conditions on the other. In the Arabidopsis IAA treatment study, maSigFun selected functional categories for which a time and dose effect was significant; for example, cyclopropane-fatty-acyl phospholipid synthase activity showed a down-regulation pattern. The PCA-maSigFun analysis of these data revealed, as expected, a more detailed picture of the functional aspects of auxin treatment. This method selected 92 functional classes, including growth-related terms (cell morphogenesis, cell-wall modification, regulation of meristem size, root hair elongation) and other regulatory and enzymatic activities such as transcription factor activity, ligase activity and protein serine/threonine phosphatase activity (early induction), and amino acid transporter, pectin esterase inhibitor activity, proteasome complex, oxidoreductase activity and beta-fructofuranosidase activity (late induction). 
Interestingly, PCA-maSigFun shows a regulation of the class response to water deprivation which corresponds to repression of plasma membrane aquaporin genes. In the class auxin:hydrogen symporter activity, contributions were positive for some genes and negative for other class members, At2g17500 and Atg76520. The ATH data was analyzed as two one-series datasets: one for the time component and one for the difference between low and high indole-acetic acid (treatment series). For the time series, the enriched classes detected included chloroplast, structural constituent of ribosome and membrane, while the treatment series solely detected the response to auxin stimulus as enriched. A closer look at the STEM results revealed that several GO terms did have significant single-test p-values but not when adjusted by FDR, and that significant clusters had related profiles. This suggests that the limitations of the method in reporting significant functional classes might be related to the corrections imposed by the multiple-testing scenario and/or by a functionally suboptimal data partitioning. By running STEM with default parameters on the one-series datasets, a number of significant genes and clusters were found in each case: 253 genes/10 clusters for the high bromobenzene series, 102 genes/3 clusters for the salt treatment, 10078 genes/6 clusters for the time series in the Arabidopsis study and 1971 genes/4 clusters for the treatment series in the ATH data. maSigFun fits one model per functional class and considers individual genes as observations of the values that time and treatment take for the functional class. The simulation studies indicated that only classes with a high proportion of coordinately changing genes (~70%) were readily detected by this method. The experimental datasets confirmed this tendency and also showed a bias in class selection towards those with a reduced number of annotated genes and a relatively high (~60%) inner correlation. This is not surprising, since large \u2013 and frequently more general \u2013 functional classes are more likely to include different regulation patterns and to capture more noise. 
The consequence is that this method is able to reveal specific cellular functionalities which are affected by the experimental conditions but may miss other interesting phenomena which are not so well defined by a one-block behavior of the functional class. This, which might be sufficient in some cases, may imply a partial result in others where a broader view of the transcriptional changes is sought. In the case of the toxicogenomics dataset, maSigFun analysis provided a clearly limited result. Although some detected functions such as heme oxygenase activity and bile acid transporter activity are key markers of the toxicological response, other relevant classes such as xenobiotic metabolic process, acute-phase response and epoxide hydrolase activity did not show up in this analysis. In the case of the abiotic stress study, however, maSigFun analysis did already provide quite an extensive functional view of the regulated processes, possibly due to the involvement of numerous specific enzymatic activities and cellular locations with a low number of annotated genes, and the more extensive transcriptional profiling (~10 k probes) of the potato dataset. On the contrary, for the ATH \u2013 IAA treatment study, this method only selected a few functional classes, although these were highly significant for the biological scenario under study. In all three datasets maSigFun selected specific terms, with a reduced number of annotated genes which were highly correlated. These results clearly reveal the detection capacity of the method and also show that it is applicable to datasets of different sizes. The understanding of the cellular and functional implications of global gene-expression changes measured through microarrays is in many cases the ultimate and most important goal of the biological experiments analyzed by this technology. When the experiment includes a time component, the data has a dynamic nature that needs to be incorporated into the functional analysis. 
The statistical approaches presented and evaluated in this study try to exploit this dynamic property from different perspectives and offer methods that explicitly focus on coordinated behaviors within the cellular functionality along the time span. This is in contrast to more traditional approaches that require a gene selection method and a partitioning algorithm before reaching the stage of functional assessment. maSigFun is, of the three algorithms proposed, the method that most strongly concentrates on co-expression. By fitting one regression model on the expression data gathered by each functional class, it follows that class members need to be highly correlated. It is also important to indicate that although PCA-maSigFun is not an enrichment method, it does not return just any functional class. Firstly, PCA assures that selected categories must contain a correlation structure above the level of noisy variance of each particular dataset and, secondly, the maSigPro analysis on the selected components means that these patterns can be fitted to a time-dependent model. In fact, in most of the selected functional terms the significant profile corresponded to the first component of the PCA analysis of the class (data not shown). This implies that the major function-dependent patterns of variation also corresponded to time-related events and consequently are consistent with the biological scenario investigated by the time-course experiment. A possible drawback of this method is the large size of the resulting selections. 
This means that browsing the analysis results could be time-consuming and that some overly general, low-informative classes may \"artificially\" enlarge the output. We partially solved this problem by including only non-annotation-redundant GO terms in the analysis (a GO term is considered annotation redundant if it has the same set of annotated genes as any of its child terms). Other options would be to filter results according to the GO structure or to group significant functional patterns by some clustering method. The last option was implemented in the PCA-maSigFun method and is included in the standard output. The above-mentioned aspect of the broader evaluation of the transcriptional response from a functional point of view is probably best addressed by the PCA-maSigFun method. In this strategy, sub-patterns of time-associated changes within each functional class are identified by PCA analysis followed by regression modeling on the principal components. PCA-maSigFun provided the largest GO term selection in both experimental datasets, and the simulated study indicated that the method is able to identify any functional group in which some correlation structure is present. The method should not be considered an enrichment analysis strategy, but rather a methodology to dissect and investigate how genes, functions and co-expression relate. This exercise can be very interesting in some cases, such as in the acute-phase example shown in the toxicogenomics section. Here, PCA-maSigFun clearly showed the correlation and anti-correlation relationships between acute-phase positive and negative genes, which would presumably result in an activation of the process. Another example of this was the class auxin:hydrogen symporter activity in the Arabidopsis data, where induction and repression of different membrane proteins were also observed. Methods that concentrate only on shared profiles would fail to identify these classes in which co-regulation is clearly present. 
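The annotation-redundancy filter described above can be sketched as follows; the toy GO structure and gene names are illustrative, not the paper's data:

```python
# Sketch: drop a GO term when it annotates exactly the same gene set as one
# of its child terms. `annotations` maps term -> set of annotated genes;
# `children` maps term -> list of child terms. Both are illustrative.

def non_redundant_terms(annotations, children):
    keep = []
    for term, genes in annotations.items():
        redundant = any(annotations.get(child) == genes
                        for child in children.get(term, ()))
        if not redundant:
            keep.append(term)
    return keep

annotations = {
    "response to stress": {"HSP104", "SSA3", "MSN2"},
    "response to heat":   {"HSP104", "SSA3", "MSN2"},  # same genes as parent
    "protein folding":    {"HSP104"},
}
children = {"response to stress": ["response to heat", "protein folding"]}
kept = non_redundant_terms(annotations, children)
# the parent "response to stress" is dropped as annotation redundant
```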
Possibly, recently-introduced term relationships in Gene Ontology (such as regulates_positively and regulates_negatively) would help to consider these situations more formally, but to our knowledge there are no functional assessment methods yet that incorporate these relationship descriptors. An intermediate result between the restricted view of maSigFun and the profusion of classes given by PCA-maSigFun is obtained by ASCA-functional. In contrast to the two previous methods, this strategy does not imply a transformation from a gene profile to a class profile, but simply ranks genes according to a pattern of variation and assesses functional enrichment along this rank. This pattern of variation is provided by the ASCA-genes model and, although in this work it is related to time series analysis, the method is generally applicable when more than two conditions are present in the study. In this sense ASCA-functional can be considered an extension of GSA to multi-class and time series data. Other adaptations of the GSA methodology propose the employment of diverse statistics, such as linear modeling and/or posterior probability, to measure the association of the gene expression with the phenotype. We can conclude that the methodologies presented in this paper are valuable and offer different approaches to study microarray time series data from a functional perspective. 
The methods should not be considered as competitive but as providing different insights into the molecular and functional events taking place within the biological system under study. d.e.g: differentially expressed genes; ANOVA: Analysis of Variance; PCA: Principal Component Analysis; SCA: Simultaneous Component Analysis; GSA: Gene Set Analysis; BB: Bromobenzene; GO: Gene Ontology; BP: Biological Process; MF: Molecular Function; CC: Cellular Component; FDR: False Discovery Rate. The authors declare that they have no competing interests. MJN helped to conceive the study, performed simulation studies and helped draft the manuscript; PS performed STEM analysis; ST performed FatiScan analysis; FG-G performed analysis with the developed algorithms on experimental datasets; JD contributed to the study design and provided research infrastructure; AF supervised statistical developments and helped draft the manuscript; AC conceived and coordinated the study, wrote the statistical algorithms and drafted the manuscript. Results of simulation studies for the three proposed methods. Click here for file. Gene Ontology term selection by the three proposed methods on three experimental datasets. Click here for file. Clustering results of STEM analysis on three experimental datasets. Click here for file. Gene Ontology term selection by the pair-wise methods on three experimental datasets. Click here for file. Selected gene expression patterns for the functional analysis of the Arabidopsis IAA treatment study. Click here for file"}
+{"text": "A gene regulatory module (GRM) is a set of genes that is regulated by the same set of transcription factors (TFs). By organizing the genome into GRMs, a living cell can coordinate the activities of many genes in response to various internal and external stimuli. Therefore, identifying GRMs is helpful for understanding gene regulation.Integrating transcription factor binding site (TFBS), mutant, ChIP-chip, and heat shock time series gene expression data, we develop a method, called Heat-Inducible Module Identification Algorithm (HIMIA), for reconstructing GRMs of yeast heat shock response. Unlike previous module inference tools which are static statistics-based methods, HIMIA is a dynamic system model-based method that utilizes the dynamic nature of time series gene expression data. HIMIA identifies 29 GRMs, which in total contain 182 heat-inducible genes regulated by 12 heat-responsive TFs. Using various types of published data, we validate the biological relevance of the identified GRMs. Our analysis suggests that different combinations of a fairly small number of heat-responsive TFs regulate a large number of genes involved in heat shock response and that there may exist crosstalk between heat shock response and other cellular processes. Using HIMIA, we identify 68 uncharacterized genes that may be involved in heat shock response and we also identify their plausible heat-responsive regulators. Furthermore, HIMIA is capable of assigning the regulatory roles of the TFs that regulate GRMs and Cst6, Hsf1, Msn2, Msn4, and Yap1 are found to be activators of several GRMs. In addition, HIMIA refines two clusters of genes involved in heat shock response and provides a better understanding of how the complex expression program of heat shock response is regulated. 
Finally, we show that HIMIA outperforms four current module inference tools, and we conduct two randomization tests to show that the output of HIMIA is statistically meaningful. HIMIA is effective for reconstructing GRMs of yeast heat shock response. Indeed, many of the reconstructed GRMs are in agreement with previous studies. Further, HIMIA predicts several interesting new modules and novel TF combinations. Our study shows that integrating multiple types of data is a powerful approach to studying complex biological systems. Single-cell organisms such as yeasts constantly face changing or even harsh environments, such as high temperature, that threaten their survival or, at least, prevent them from performing optimally. HIMIA proceeds through the following steps. Step 1: Construction of a high-confidence TF-promoter binding matrix B. In this matrix, bi,j = 1 if (1) the p-value for TFj to bind the promoter of gene i is \u2264 0.01 in the ChIP-chip data and the promoter of gene i contains one or more binding sites of TFj, or (2) the disruption of TFj results in a significant change of the expression of gene i and the promoter of gene i contains one or more binding sites of TFj. Otherwise, bi,j = 0. Step 2: Construction of a high-confidence TF-gene regulatory matrix C. In this matrix, ci,j = 1 if bi,j = 1 and if TFj is shown by the dynamic model to have a large regulatory effect on the expression of gene i (see Appendix for details). Otherwise, ci,j = 0. Step 3: Identification of heat-inducible genes and construction of their gene expression matrix E = [ep,q], where ep,q is the expression value of the p-th heat-inducible gene at time point q. A gene is called a heat-inducible gene if at least two time points of its gene expression profile measured under heat shock are induced by at least three-fold compared to that under the unstressed condition. 
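The heat-inducible gene definition in Step 3 translates directly into code; a minimal sketch with illustrative gene names and fold-change profiles, not the authors' implementation:

```python
# Sketch of Step 3: a gene is heat-inducible if at least two time points of
# its heat-shock profile show at least three-fold induction relative to the
# unstressed condition. Gene names and profiles below are illustrative.

def is_heat_inducible(fold_changes, min_points=2, min_fold=3.0):
    """True if >= min_points time points are induced >= min_fold."""
    return sum(1 for fc in fold_changes if fc >= min_fold) >= min_points

def expression_matrix(profiles):
    """Rows of E: time profiles of the genes that pass the filter."""
    return {g: p for g, p in profiles.items() if is_heat_inducible(p)}

profiles = {
    "HSP104": [1.2, 4.0, 6.5, 3.2],   # three induced time points -> kept
    "ACT1":   [1.0, 1.1, 0.9, 1.0],   # flat -> dropped
    "SSA3":   [0.8, 3.5, 2.9, 3.1],   # two induced time points -> kept
}
E = expression_matrix(profiles)       # rows for HSP104 and SSA3
```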
We then collect all the time profiles of the identified heat-inducible genes to form the matrix E. Step 4: Identification of heat-responsive TF sets. Let R = {TFu, TFv, TFw} be a TF set, G = {gene k | ck,u = ck,v = ck,w = 1} be the set of genes that are regulated by all the TFs in R, S be the set of heat-inducible genes in the yeast genome identified in Step 3, T = G \u2229 S be the set of heat-inducible genes that are regulated by all the TFs in R, and Y be the set of all genes in the yeast genome. Then the p-value for rejecting the null hypothesis (H0: R is not a heat-responsive TF set) is calculated as the hypergeometric tail probability of observing |T| or more heat-inducible genes among the |G| genes regulated by R, given |S| heat-inducible genes among the |Y| genes of the genome. The dynamic model describes each target gene's expression as y[t + 1] = d1\u00b7x1[t] + ... + dN\u00b7xN[t] + k - \u03bb\u00b7y[t] + \u03b5[t] (Equation (1)), where y[t] represents the expression profile of the target gene at time point t, N denotes the number of TFs that bind to the promoter of the target gene inferred from the TF-promoter binding matrix B, di indicates the regulatory ability of TFi, xi[t] represents the regulatory profile of TFi at time point t, k represents the basal level induced by RNA polymerase II, \u03bb indicates the degrading effect of the target gene's expression at the present time point y[t] on the target gene's expression at the next time point y[t + 1], and \u03b5[t] denotes the stochastic noise due to the modeling error and the measuring error of the target gene's expression profile. \u03b5[t] is assumed to be Gaussian noise with mean zero and unknown standard deviation \u03c3. The biological meaning of Equation (1) is that y[t + 1] is determined by the regulatory inputs of the N TFs (plus the basal level induced by RNA polymerase II) at the present time point and by -\u03bb\u00b7y[t] (the degradation effect of the target gene at the present time point). We model xi[t] (the regulatory profile of TFi at time point t) as a sigmoid function of zi[t] (the gene expression profile of TFi at time point t), since it has been shown that TF binding usually affects gene expression in a nonlinear fashion: below some level it has no effect, while above a certain level the effect may become saturated. This type of binding behavior can be modeled using a sigmoid function. 
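Putting the pieces together, the dynamic model and a sigmoid transform can be sketched as below. The logistic parameterization of the sigmoid and the plain least-squares solver are assumptions of this sketch (under Gaussian noise, least squares coincides with the ML estimate of theta described later); it is not the authors' implementation.

```python
import math
import random

# Sketch of the dynamic model (Equation (1)):
#     y[t+1] = d1*x1[t] + ... + dN*xN[t] + k - lam*y[t] + eps[t]
# where each x_i[t] is a sigmoid transform of the TF expression z_i[t].

def sigmoid_profile(z, Tr=1.0):
    A = sum(z) / len(z)                      # A_i: mean of the TF profile
    return [1.0 / (1.0 + math.exp(-Tr * (v - A))) for v in z]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def fit_theta(x_profiles, y):
    """Least-squares estimate of theta = (d1..dN, k, lam) using the
    regression vector phi[t] = [x1[t] .. xN[t], 1, -y[t]]."""
    Phi = [[xp[t] for xp in x_profiles] + [1.0, -y[t]]
           for t in range(len(y) - 1)]
    Y = y[1:]
    n = len(Phi[0])
    AtA = [[sum(row[i] * row[j] for row in Phi) for j in range(n)] for i in range(n)]
    AtY = [sum(row[i] * yy for row, yy in zip(Phi, Y)) for i in range(n)]
    return solve(AtA, AtY)

# Noise-free toy example with two TFs: the parameters are recovered.
random.seed(1)
M_pts = 30
z = [[random.uniform(-2.0, 2.0) for _ in range(M_pts)] for _ in range(2)]
x = [sigmoid_profile(zi) for zi in z]
d, k, lam = [2.0, 0.5], 0.3, 0.4
y = [0.0]
for t in range(M_pts - 1):
    y.append(d[0] * x[0][t] + d[1] * x[1][t] + k - lam * y[t])
theta = fit_theta(x, y)   # approximately [2.0, 0.5, 0.3, 0.4]
```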
The sigmoid transform takes the form xi[t] = 1/(1 + exp(-Tr\u00b7(zi[t] - Ai))), where Tr denotes the transition rate of the sigmoid function and Ai denotes the mean of the gene expression profile of TFi. We rewrite Equation (1) into the regression form y[t + 1] = \u03c6[t]\u03b8 + \u03b5[t] (Equation (3)), where \u03c6[t] = [x1[t] \u22ef xN[t] 1 - y[t]] denotes the regression vector and \u03b8 = [d1 \u22ef dN k \u03bb]T is the parameter vector. From the gene expression data under heat shock, it is easy to obtain the data pairs {xi[tv], y[tv]} for i \u2208 {1,2, \u22ef, N}, v \u2208 {1, 2, \u22ef, M}, where M is the number of time points of a target gene's expression profile. Equation (3) at different time points can be put together and, defining the notations Y, \u03a6 and e, written compactly as Y = \u03a6\u03b8 + e (Equation (4)). Since \u03b5[t] is assumed to be Gaussian, \u03b8 can be estimated by the maximum likelihood (ML) method. The p-value computed from the t-distribution is then adjusted by the Bonferroni correction to represent the true \u03b1 level in the multiple hypothesis testing. Then, TFi is said to be a true regulator of the target gene if the adjusted p-value padjusted \u2264 0.01. WSW developed the algorithm, performed the simulation and wrote the manuscript. WHL conceived the research topic, provided essential guidance and revised the manuscript. All authors read and approved the final manuscript. Additional file: Click here for file"}
+{"text": "Comparative analysis of genome-wide temporal gene expression data has a broad potential area of application, including evolutionary biology, developmental biology, and medicine. However, at large evolutionary distances, the construction of global alignments and the consequent comparison of the time-series data are difficult. The main reason is the accumulation of variability in expression profiles of orthologous genes in the course of evolution. We applied Pearson distance matrices, in combination with other noise-suppression techniques and data filtering, to improve alignments. This novel framework enhanced the capacity to capture the similarities between temporal gene expression datasets separated by large evolutionary distances. We aligned and compared the temporal gene expression data in budding (Saccharomyces cerevisiae) and fission (Schizosaccharomyces pombe) yeast, which are separated by more than ~400 Myr of evolution. We found that the global alignment (time warping) properly matched the duration of cell cycle phases in these distant organisms, which was measured in prior studies. At the same time, when applied to individual ortholog pairs, this alignment procedure revealed groups of genes with distinct alignments, different from the global alignment. Our alignment-based predictions of differences in the cell cycle phases between the two yeast species were in good agreement with the existing data, thus supporting the computational strategy adopted in this study. We propose that the existence of alternative alignments, specific to distinct groups of genes, suggests the presence of different synchronization modes between the two organisms and possible functional decoupling of particular physiological gene networks in the course of evolution. Alignment of time series data, or time warping, allows side-by-side comparison of orthologous gene expression on a relative time scale. 
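The alignment machinery underlying time warping can be illustrated with a minimal, textbook dynamic time warping (DTW) routine; this generic sketch is not the GT-Warp implementation used in the study:

```python
# Minimal dynamic time warping: align two expression time series by finding
# the minimum-cost monotone mapping between their time points.

def dtw(a, b, dist=lambda u, v: abs(u - v)):
    """Return (total cost, warping path) aligning series a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Trace back the optimal path of matched time-point pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return D[n][m], path[::-1]

# A slower series warps onto a faster one at zero cost.
cost, path = dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 2, 1, 0])   # cost == 0.0
```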
Traditionally, the yeast cell cycle has served as a model system to study regulation of periodic gene expression, replication, and cell division. Here, we tested several noise suppression techniques in order to optimize the global alignment between time series data from two species separated by ~400 million years of evolution, budding (Saccharomyces cerevisiae) and fission (Schizosaccharomyces pombe) yeasts. Two kinds of noise affect the comparison: the internal noise arises from desynchronization of the cell culture over time; the external noise is the result of differences in orthologous expression accumulated in evolution, or of differences in expression caused by experimental conditions, such as the selection of the cell culture synchronization method. In this perspective, problems connected with the alignment construction are largely problems related to noise reduction and noise overriding. The ability to judge the quality of alignments critically depends on the input data; data selection helps to find the least noisy and most reliable datasets. Therefore, all 70 pairwise combinations of the publicly available S. cerevisiae and S. pombe datasets were evaluated, and we selected datasets based on S. cerevisiae synchronized by \u03b1-factor and S. pombe synchronized either using a cdc25 temperature-sensitive mutant or elutriation. We adopted Pearson distance matrices to produce highly informative comparisons between time series and to cope with the external (evolutionary) noise (see Methods). The distance matrices revealed discernible periodic patterns similar to those observed in simulated periodic datasets. Based on the amplitude of gene expression in the course of the life cycle, all genes in the yeast genome can be conditionally separated into three groups: (i) cell-cycle dependent (oscillating), (ii) constitutively expressed, and (iii) inducible (not expressed or expressed constitutively in our datasets). Low-oscillating and constitutive genes contribute little or no information to the global alignment; moreover, their actual expression dynamics can be masked by the internal noise. 
Therefore, we removed the low-variant genes from the selected datasets to improve the sensitivity of the method. The two selected datasets were filtered using the SNR method to remove the low-variance profiles and smoothed using a Gaussian method to minimize the internal noise (see Methods). The SNR filtering reduced the initial number of orthologous profile pairs in the selected datasets to 2518 genes in S. pombe and 2169 genes in S. cerevisiae (see ortholog matching in Methods). Inspection of orthologous profile pairs in the aligned datasets revealed instances of both concordantly and discordantly expressed genes. 518 expression profiles, corresponding to approximately ~400 genes, had very similar or nearly identical expression phasing in both organisms. For example, a ribosomal protein of the small (40S) subunit displays concordant expression in the unaligned datasets as well; after time warping, this ribosomal protein has identical expression phasing in both yeast species. The majority of other ribosomal genes and genes involved in ribosome biogenesis also displayed identical phasing of expression in the aligned datasets. Along with the global alignment, alignments for each individual pair of orthologs from the selected datasets were constructed and explored as well. This analysis showed that the majority of concordantly expressed orthologous genes produced pairwise alignment paths similar to the global one and formed the largest time cluster, or the largest synchronization group. The largest S. cerevisiae time cluster was further examined by consequent clustering of the expression profiles and GO-term enrichment analysis. 
It has been found that the largest S. cerevisiae time cluster maintained substantial temporal synchrony in the course of evolution. The time warping, in combination with the path and profile clustering, allowed tracing synchronization for some housekeeping, structural and replication genes. However, analysis of regulatory components of the cell cycle, such as cyclins, revealed no such synchronization or other shared evolutionary trends (data not shown). One possible reason for this is the dramatic rewiring of regulatory pathways during evolution. Mathematical strategies described in the current work can be applied to comparative analysis of expression data in any pair of organisms separated by hundreds of millions of years of evolution. The following factors may limit the area of application: (i) variability in gene expression under different experimental conditions; (ii) strikingly different alternative alignment paths, specific to large groups of genes; (iii) distortion of gene expression profiles as the result of normalization and Gaussian smoothing. Data pre-processing included upsampling, Gaussian smoothing and Z-score normalization; Z-score normalization was done using standard methods. The original SNR filter, based on non-parametric statistics, was designed for the analysis of microarray time series data, which lacks biological replicates. Consider the local point-to-point variation \u0394x between the neighboring time points i and i+1 in the jth expression profile: the SNR score of a profile is the ratio of the variance of its point-to-point variation to the average variance of the point-to-point variation (noise) taken over all profiles of the entire data set, the latter acting as a pseudocount. One can then compute the value of the uncentered Pearson correlation r for a given kth pair of orthologous profiles a and b at the time points i and j by correlating two windows of n time points, centered on time points i and j correspondingly. Euclidean similarity matrices take into account only the levels of gene expression at a given time point; we found the windowed Pearson comparison more informative.
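The windowed uncentered Pearson comparison described above can be sketched as follows (a minimal illustration; the window half-width w and the plain-list data layout are assumptions for the example, not the paper's GT-Warp implementation):

```python
import math

def uncentered_pearson(a, b):
    # Uncentered Pearson correlation: sum(a*b) / sqrt(sum(a^2) * sum(b^2)).
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den > 0 else 0.0

def similarity_matrix(profile_a, profile_b, w=2):
    # Compare the window of 2*w+1 points centered on time point i of one
    # profile with the window centered on time point j of the other.
    n, m = len(profile_a), len(profile_b)
    sim = []
    for i in range(w, n - w):
        row = []
        for j in range(w, m - w):
            row.append(uncentered_pearson(profile_a[i - w:i + w + 1],
                                          profile_b[j - w:j + w + 1]))
        sim.append(row)
    return sim
```

In a periodic dataset, such a matrix shows diagonal stripes of high similarity, which is what makes it a useful substrate for alignment by time warping.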
Similarity between time point i from one dataset and time point j from the other, averaged over the M pairs of orthologous profiles, was computed as a standard Pearson distance. This procedure returns the agreement between segments of the two profiles, each of length n. The Pearson similarity matrices have higher sensitivity and produce better alignments [see Additional file]. The described method has been implemented in the software package GT-Warp. The package includes the following programs and utilities: \"AVF-filer\" and \"RZ-smooth\" are programs for low-level data filtering and processing; these include common methods such as Fourier analysis, ANOVA, F-test, Gaussian smoothing, resampling, and normalization. The AVF-filer program also includes the SNR method described above and the \"VF-stat\" utility for simulating the SNR score distribution in random data. The program \"Time-warp\" incorporates both the Euclidean and Pearson methods (see above) and generates global alignment matrices using the Kruscal-Liberman algorithm. S. cerevisiae expression profiles from each temporal cluster (or \"time cluster\") were clustered again, using the K-means clustering method, producing 10 sub-clusters within each of the 10 time clusters. Enrichment of gene ontology terms in the time clusters and subclusters was carried out using the GenMAPP 2.0 package. Alignment paths for individual profile pairs were generated using the Euclidean method (Gene-warp program). The paths were clustered using the K-means clustering method, producing 10 temporal clusters, using the Cluster 3.0 program with default parameter settings. UG and DP conceived the study and designed the research strategy. UG prepared yeast datasets for the analysis; DP developed algorithms and software and carried out computations. UG and DP analyzed results and wrote the manuscript. All authors read and approved the final manuscript. Additional file: Time warping of temporal gene expression data: algorithms and controls.
The file contains details of the signal-to-noise ratio (SNR) filtering algorithm and the time-warping algorithm, results of benchmarking versus existing methods and programs, and an assessment of program parameters.Click here for file"}
+{"text": "Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved precision, as compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data, eliminating the need for a fine-tuning parameter.
The evaluation results obtained from both synthetic and actual biological data sets show that the PMDL principle is effective in determining the MI threshold and that the developed algorithm improves the precision of gene regulatory network inference. Based on the sensitivity analysis of all tested cases, an optimal CMI threshold value has been identified. Finally, it was observed that the performance of the algorithms saturates at a certain data size. Recall (R) and precision (P) are used to evaluate the performance of inference algorithms. While different definitions exist, R is herein defined as eC/(eC+eM) and P as eC/(eC+eF), where eC denotes the edges that exist in both the true and the inferred network, eM the edges that exist in the true network but not in the inferred network, and eF the edges that do not exist in the true network but do exist in the inferred network. The proposed algorithm is compared with the network MDL algorithm reported by Zhao et al. on synthetic networks. For a specific size of the network, both algorithms are run 30 times each for different threshold values, and the averages of P and R are calculated. The algorithms are run for networks of 20, 30, 40 and 50 genes; the P vs. R curves for each of these networks with different threshold values are given in the corresponding figure. P of the proposed algorithm is higher but R is lower in most of the cases as compared to network MDL. The number of false edges is smaller in the proposed algorithm and, as most biologists are interested in true positives, our proposed algorithm is preferred over network MDL. The time series DNA microarray data from a published study was used. As mentioned earlier, pre-processing plays an important part in the reverse engineering process; as there were some missing values in the data, we pre-processed the data as described previously. The true biological network used for comparison purposes was derived from the yeast cell cycle pathway.
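The recall and precision definitions above translate directly into code; this small sketch treats networks as sets of directed edges (the tuple representation is an assumption for illustration):

```python
def precision_recall(true_edges, inferred_edges):
    # eC: edges in both networks; eM: missed (true only); eF: false (inferred only).
    true_edges, inferred_edges = set(true_edges), set(inferred_edges)
    e_c = true_edges & inferred_edges
    e_m = true_edges - inferred_edges
    e_f = inferred_edges - true_edges
    recall = len(e_c) / (len(e_c) + len(e_m)) if true_edges else 0.0
    precision = len(e_c) / (len(e_c) + len(e_f)) if inferred_edges else 0.0
    return precision, recall
```

Sweeping the MI threshold and recomputing these two numbers yields the P vs. R curves discussed in the text.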
Here we report the performance of our scheme based on different values of the user-specified threshold over synthetic networks. For threshold values of 0.15 and 0.2, a high precision (over 90% in most cases) was observed, but the recall for these thresholds was low (from 25% to 30%) as compared to a threshold value of 0.1, which had a fair recall (over 47%) and good precision (63% to 79%). The performance of the algorithm depends on three factors: the number of genes, the number of time points and, most importantly, the number of parents inferred for each gene by the algorithm. To see what role these factors play we looked into the time and space complexities of the algorithm. Step 4 of the algorithm iterates n^2*m times, where n is the number of genes and m is the number of time points; from line 5 to line 18 the algorithm iterates n^4 times; lines 15 and 16 iterate n^3*m times. Finally, from lines 20 to 31 the algorithm iterates n^3 times. Thus, the time complexity of the algorithm is \u0398(n^4 + n^3*m). From the time complexity it can be seen that if the number of genes is larger than the number of time points then the run time depends more on the number of genes, and if the number of time points is larger then the run time depends more on the number of time points. When it comes to space complexity, the conditional probability tables play a major role: if a gene has n parents then the conditional probability tables take 2^n units of space. Thus, the amount of memory needed by the algorithm depends on the number of parents inferred by the network. As the space complexity grows exponentially in the number of parents, the algorithm may run out of memory for a data set with as few as 50 genes, yet run in as little as 5 minutes for a data set with several hundred genes. There are two ways to overcome this limitation: 1. Restrict the number of parents, and 2.
Take the next smallest description length, instead of using the smallest one. The first approach guarantees results when the number of parents is restricted to small values, but this may lower the accuracy of the result. The second approach may take more time to run, but as we are not restricting the number of parents, the accuracy of the algorithm is not affected. We plan to perform some benchmarking studies on the above two approaches to see which one works better. A network and a data set with 75 time points were generated. Each of the algorithms was run 13 times, starting with the first 15 time points, with an increment of 5 time points for every subsequent run. For every run the values of precision and recall were computed. In network MDL the free parameter was set to 0.2 and in the PMDL algorithm the conditional mutual information threshold was set to 0.1. The plots for precision and recall are shown in the corresponding figures. For further analysis we considered the recall/precision ratio; saturation of performance was observed as the data size grew. We have proposed a new gene regulatory inference algorithm that implements the PMDL principle. The simulation results show that the PMDL principle is fair in determining the MI threshold. A remaining problem with the proposed algorithm is determining the CMI threshold. We have tested the sensitivity of the threshold and, based on the performance of our scheme, identified that the value of 0.1 is optimal for most synthetic networks. This was also true in the case of reverse engineering of gene regulatory networks from biological time series DNA microarray data. In synthetic network simulations the proposed algorithm produced fewer false edges compared to network MDL. A graph G represents a network where V denotes a set of genes and E denotes a set of regulatory relationships between genes. If gene x shares a regulatory relationship with gene y, then there exists an edge between x and y (x \u2192 y). Genes can have more than one regulator.
The notation P(x) is used to represent the set of genes that share regulatory relationships with gene x. For example, if gene x shares a regulatory relationship with y and z then P(x) = {y, z}. Also, every gene x is associated with a function f(x) which denotes the expression value of gene x as determined by the values of the genes in P(x). The network formulation is similar to the one used previously. Gene expression is affected by many environmental factors. Since it is not possible to incorporate all factors, the regulatory functions are assumed to be probabilistic. Also, the gene expression values are assumed discrete-valued and the probabilistic regulation functions are represented as look-up tables. If the expression levels are quantized to q levels and a gene x has n predecessors, then the look-up table has q^n rows and q columns, and every entry in the table corresponds to a conditional probability. Say we have a gene x which shares a regulatory relationship with two other genes y, z and the data is quantized to 2 levels; the look-up table is as in Table. If y and z are lowly expressed then the probability that x is also lowly expressed is 0.6. Entropy: entropy (H) is the measure of average uncertainty in a random variable. The entropy of a random variable X with probability mass function p(x) is defined as H(X) = \u2212\u2211x p(x) log p(x) (1). Mutual information: MI measures the amount of information that can be obtained about one random variable by observing another. Since MI by itself does not contain directional information, using an ad hoc time delay has been proposed in the past to overcome this issue. The gene system is assumed to be event driven, i.e. all the regulations are performed step by step and in each step all regulations happen only once.
Therefore, the latency parameter is set by default to a unit step. MI is defined in terms of entropies as I = H(X) + H(Y) \u2212 H(X, Y) (3). As stated earlier, this quantity does not contain directional information, and hence a time lag is introduced; the quantity after the time lag is estimated as I = H(Xt) + H(Yt+1) \u2212 H(Xt, Yt+1) (4). Conditional mutual information: a high MI indicates that there may be a direct or indirect relationship between the genes. CMI is the reduction in the uncertainty of X due to knowledge of Y when Z is given. CMI can be expressed in terms of entropies as I = H(X, Z) + H(Y, Z) \u2212 H(Z) \u2212 H(X, Y, Z) (6). Again, this quantity does not contain directional information. After introducing the time lag, the quantity is estimated as I = H(Xt, Zt) + H(Yt+1, Zt) \u2212 H(Zt) \u2212 H(Xt, Yt+1, Zt) (7). The proposed algorithm deals with quantized data. In general, it is assumed that the q-level quantization admits the alphabet A = {0, 1, \u2026, q \u2212 1}; the probability mass function is then estimated from the m samples s1, \u2026, sm as p(a) = (1/m) \u2211i 1(si = a) (9), where 1(.) is the indicator function. By substituting (9) in (1), entropies can be estimated, and these entropy estimates can be substituted in (3) and (6) to obtain the MI and CMI estimates. The description length of the two-part MDL principle involves calculation of the model length and the data length. As the length can vary for various models, the method is in danger of being biased towards the length of the model. We chose to implement the PMDL principle as it suits time series data. The description length for a model in PMDL is given by the accumulated code length \u2212\u2211t log p(Xt+1|Xt), where p(Xt+1|Xt) is the conditional probability or density function.
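The entropy-based, time-lagged MI and CMI estimates in equations (3)-(7) can be sketched for quantized series with a plug-in estimator as follows (function names are illustrative, not from the authors' implementation):

```python
import math
from collections import Counter

def entropy(*series):
    # Joint entropy (base 2) of one or more aligned, quantized series,
    # using the plug-in probability estimate p(a) = count(a) / m.
    m = len(series[0])
    counts = Counter(zip(*series))
    return -sum((c / m) * math.log2(c / m) for c in counts.values())

def lagged_mi(x, y):
    # I = H(X_t) + H(Y_{t+1}) - H(X_t, Y_{t+1}), cf. equation (4).
    xt, yt1 = x[:-1], y[1:]
    return entropy(xt) + entropy(yt1) - entropy(xt, yt1)

def lagged_cmi(x, y, z):
    # I = H(X_t,Z_t) + H(Y_{t+1},Z_t) - H(Z_t) - H(X_t,Y_{t+1},Z_t), cf. (7).
    xt, yt1, zt = x[:-1], y[1:], z[:-1]
    return entropy(xt, zt) + entropy(yt1, zt) - entropy(zt) - entropy(xt, yt1, zt)
```

When z carries no information, the lagged CMI reduces to the lagged MI, which matches the entropy identities above.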
We calculate the description length as the data length given above. A gene can take any value when transformed from one time point to another, due to the probabilistic nature of the network. The network is associated with a Markov chain, which is used to model state transitions; these states are represented as n-gene expression vectors Xt, and the transition probability p(Xt+1|Xt) can be derived from the look-up tables. The probability p is obtained from the look-up table associated with the vertex xi and is assumed to be time invariant. Each state transition brings new information, which is measured by the conditional entropy H(Xt+1|Xt) = \u2212 log(p(Xt+1|Xt)) (12). The total entropy for m given time-series sample points (X1, \u2026, Xm) follows from the Markov chain rule, H(X1, \u2026, Xm) = H(X1) + \u2211t H(Xt+1|Xt). As H(X1) is the same for all models, it is removed; thus the description length is \u2211t \u2212log(p(Xt+1|Xt)). Given the time series data, the data was first pre-processed, which involved filling missing values and quantizing the data. Then the MI matrix Mn\u00d7n was evaluated using (4). A connectivity matrix Cn\u00d7n was maintained which had two entries, 0 and 1: an entry of 0 indicates that no regulatory relationship exists between genes, while an entry of 1 at Cij indicates that gene i regulates gene j. The algorithm is given in the accompanying figure; if the MI for a connection falls below the threshold (Th), the connection is deleted. The authors declare that they have no competing interests. VC, CZ and PG1 developed the algorithm and implemented it on synthetic and biological data sets; an in-depth analysis of the results was also performed. EP, PG2 and YD coordinated the study. All authors read and approved the final manuscript."}
+{"text": "RNA was isolated from carboplatin- and control-treated 36M2 ovarian cancer cells at several time points, followed by oligonucleotide microarray hybridization. Carboplatin-induced changes in gene expression were assessed at the single-gene as well as at the pathway level. Clinical validation was performed in publicly available microarray datasets using disease-free and overall survival endpoints. We performed a time-course microarray experiment to define the transcriptional response to carboplatin in vitro. Time-course and pathway analyses identified 317 genes and 40 pathways (designated time-course and pathway signatures) deregulated following carboplatin exposure. Both types of signatures were validated in two separate platinum-treated ovarian and NSCLC cell lines using published microarray data. Expression of time-course and pathway signature genes distinguished between patients with unfavorable and favorable survival in two independent ovarian cancer datasets. Among the pathways most highly induced by carboplatin in vitro, the NRF2, NF-kB, and cytokine and inflammatory response pathways were also found to be upregulated prior to chemotherapy exposure in poor-prognosis tumors. Dynamic assessment of gene expression following carboplatin exposure in vitro can identify both genes and pathways that are correlated with clinical outcome. The functional relevance of this observation for better understanding the mechanisms of drug resistance in EOC will require further evaluation. Epithelial ovarian cancer (EOC) is the leading cause of cancer mortality from gynecologic malignancies. In an attempt to understand the mechanisms of chemotherapy resistance, several studies have used microarray technology to assess tumor gene expression at the time of diagnosis or at the time of subsequent recurrence. We reasoned that the transcriptional response to carboplatin in vitro might have clinical relevance and provide insights into the mechanisms of chemoresistance.
For this purpose, we employed time-course and pathway analysis approaches in order to capture changes in individual genes as well as in gene pathways. We treated the 36M2 cell line with varying concentrations (0.1 \u03bcM to 200 \u03bcM) of carboplatin for 24 hrs. Cells were then washed thoroughly with PBS and allowed to grow in fresh medium. Cell death was assessed for each carboplatin dosage at distinct time points using the previously described ELISA assay, according to the manufacturer's instructions. Doses of carboplatin up to 10 \u03bcM resulted in apoptotic cell death similar to control at all time points, while treatment with 200 \u03bcM was associated with significant apoptotic death at early time points. cDNA synthesis and hybridization on oligonucleotide microarrays containing approximately 54,700 transcripts were carried out using standard protocols. Microarray experiments were performed at the Beth Israel Deaconess Medical Center (BIDMC) Genomics Core. Time series analysis was performed using BRB-ArrayTools Version 3.6 [developed by Dr Richard Simon]; time series analysis is a regression analysis of time-course microarray data. We analyzed all predefined Biocarta pathways (obtained through the NCI public database) for differential expression between the carboplatin-treated and the vehicle-treated cells; 'Biocarta' is a trademark of Biocarta Inc. The statistical significance of differential expression for each pathway was estimated using the functional class scoring method. Publicly available gene expression data from the ovarian cancer cell line A2780 and the non-small cell lung cancer (NSCLC) cell line A549 were used for unsupervised hierarchical clustering with the average linkage method as implemented in BRB-ArrayTools Version 3.6.
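As a rough sketch of this kind of preprocessing for correlation-based hierarchical clustering (median-centering each gene across experiments and computing a 1 \u2212 Pearson distance), assuming plain Python structures rather than the BRB-ArrayTools implementation:

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def median_center(expression):
    # Subtract each gene's median across experiments (row-wise centering),
    # so clustering reflects relative rather than absolute expression.
    return {g: [v - median(vals) for v in vals] for g, vals in expression.items()}

def pearson_distance(a, b):
    # 1 - Pearson correlation, a common distance for expression clustering.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return 1.0 - cov / (va * vb) ** 0.5 if va > 0 and vb > 0 else 1.0
```

A distance matrix built this way can be fed to any agglomerative routine (e.g. average linkage) to reproduce the kind of unsupervised clustering described above.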
All genes were median-centered across the experiments. The gene signatures used for hierarchical clustering were mapped across different platforms (from U133 Plus 2.0 to U95Av2, and from U133 Plus 2.0 to U133A) using the Affymetrix 'best match' tool. Mapped time-course and pathway signatures were used without additional filtering for unsupervised hierarchical clustering. Additional file: B) Association between pathway signature and OS.Click here for file"}
+{"text": "The potyviruses sugarcane mosaic virus (SCMV) and maize dwarf mosaic virus (MDMV) are major pathogens of maize worldwide. Two loci, Scmv1 and Scmv2, have earlier been shown to confer complete resistance to SCMV. Custom-made microarrays containing previously identified SCMV resistance candidate genes and resistance gene analogs were utilised to investigate and validate gene expression and expression patterns of isogenic lines under pathogen infection, in order to obtain information about the molecular mechanisms involved in maize-potyvirus interactions. By employing time course microarray experiments we identified 68 significantly differentially expressed sequences within the different time points. The majority of differentially expressed genes differed between the near-isogenic line carrying the Scmv1 resistance locus at chromosome 6 and the other isogenic lines. Most differentially expressed genes in the SCMV experiment (75%) were identified one hour after virus inoculation, and about one quarter at multiple time points. Furthermore, most of the identified mapped genes were localised outside the Scmv QTL regions. Annotation revealed differential expression of promising pathogenesis-related candidate genes, validated by qRT-PCR, coding for metallothionein-like protein, S-adenosylmethionine synthetase, germin-like protein or 26S ribosomal RNA. Our study identified putative candidate genes and gene expression patterns related to resistance to SCMV. Moreover, our findings support the effectiveness and reliability of the combination of different expression profiling approaches for the identification and validation of candidate genes. Genes identified in this study represent possible future targets for manipulation of SCMV resistance in maize, a member of the Poaceae family. qRT-PCR was conducted with the One-Step QuantiTect SYBR Green kit. Raw intensity and background values generated by ArrayVision, version 8.0, were utilized for data analysis.
The main interest was to determine the expression patterns of pair-wise contrasts between genotypes at the same time point, whereas contrasts of a genotype at two different time points were of secondary interest. Locally weighted scatterplot smoothing (LOWESS) regression was performed to adjust for differences within an array. The following linear mixed model was fitted: yijkl = \u03bc + gi + tj + ak + dl + (g*t)ij + (g*d)il + eijkl, where yijkl is the log2-signal intensity, gi the fixed effect for genotype, tj the fixed effect for the time point, ak the random effect for the array, dl the fixed effect for the dye, (g*t)ij the genotype-by-time-point interaction and (g*d)il the genotype-by-dye interaction. The calculations were performed with the SAS System for Windows, Version 9.1. Pair-wise contrasts between different genotype*time combinations in the SCMV experiment were estimated, considering only contrasts between genotypes within one time point and contrasts of one genotype at different time points. The corresponding FDR-adjusted p-values and fold changes were determined. Least square means of genotype*time were calculated, i.e., the value of a certain genotype at a specific time point averaged over the other effects. The degrees of freedom for the tests were calculated according to the containment method (SAS Institute Inc. (1999): SAS/STAT User's Guide, Version 8. Cary, NC). For the MDMV data analysis the same linear model was fitted, but separate variance terms for mock control and normal data were specified. Blastn analysis in the TIGR Unique Gene Indices was performed in order to reveal the putative function of unknown sequences, using indices for Arabidopsis thaliana, barley, maize, rice, rye and wheat, with a cut-off e-value of 10.
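Full LOWESS regression is usually delegated to a statistics package; as a hedged stand-in for the within-array adjustment described above, the sketch below removes an intensity-dependent trend from log-ratios using a running median over intensity-ranked spots (the window size is an illustrative parameter, and this is only an approximation of LOWESS, not the analysis actually used):

```python
def normalize_within_array(log_ratios, intensities, window=5):
    # Sort spots by intensity, then subtract a running median of the
    # log-ratio in that order, as a simple stand-in for a LOWESS trend.
    order = sorted(range(len(log_ratios)), key=lambda i: intensities[i])
    half = window // 2
    corrected = [0.0] * len(log_ratios)
    for rank, idx in enumerate(order):
        lo = max(0, rank - half)
        hi = min(len(order), rank + half + 1)
        neighborhood = sorted(log_ratios[order[k]] for k in range(lo, hi))
        mid = len(neighborhood) // 2
        trend = (neighborhood[mid] if len(neighborhood) % 2
                 else 0.5 * (neighborhood[mid - 1] + neighborhood[mid]))
        corrected[idx] = log_ratios[idx] - trend
    return corrected
```

A constant dye bias, or one that drifts smoothly with intensity, is absorbed into the trend and removed, which is the purpose of the LOWESS step in two-colour arrays.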
Additional blastn analyses were performed in the MIPS and IRGSP databases to gain maximum information about the genes of interest. The significances of the numbers of genes between time points were calculated using the McNemar exact test. Relative expression rates of the target genes were calculated as the ratio (Etarget + 1)^\u0394Cttarget / (Eref + 1)^\u0394Ctref, where Etarget is the PCR efficiency for the target gene and Eref is the PCR efficiency for the endogenous reference. PCR efficiencies (E = 10^(-1/slope) \u2212 1) were derived from calibration data of serially diluted RNA: 100%, 50%, 10%, 5%, 1%, 0.5%, 0.1% and water. \u0394Cttarget and \u0394Ctref values were determined as described by Dilger et al. Abbreviations: cM: centimorgan; Cy3: cyanine 3; Cy5: cyanine 5; EST: expressed sequence tag; FDR: false discovery rate; GO: gene ontology; H2O2: hydrogen peroxide; IRGSP: International Rice Genome Sequencing Project; JGMV: Johnsongrass Mosaic Virus; MIPS: Munich Information Center for Protein Sequences; MDMV: Maize Dwarf Mosaic Virus; NIL: near-isogenic line; PCR: polymerase chain reaction; qRT-PCR: quantitative real-time polymerase chain reaction; QTL: quantitative trait locus; RGA: resistance gene analogue; RNA: ribonucleic acid; ROS: reactive oxygen species; SCMV: Sugarcane Mosaic Virus; SSH: suppression subtractive hybridization; SrMV: Sorghum Mosaic Virus; T1-T9: time points 1\u20139; TIGR: The Institute for Genomic Research; X2: Chi-square test. AU prepared cDNAs for spotting SCMV arrays, conducted inoculation and harvesting of maize plants with SCMV and MDMV viruses and carried out all microarray experiments; GD designed the SCMV array layout, performed cDNA quantification for spotting, fabricated SCMV arrays and submitted microarray data to ArrayExpress; BS undertook bioinformatic analysis of microarray data and was involved in outcome discussion; H-PP prepared the statistical design for microarray experiments, coordinated data analysis and
participated in the discussion of results; MX donated clones of resistance genes and resistance gene analogues; CRI coordinated and supervised the greenhouse design and plant inoculations with both viruses; GW and TL were project initiators and supervisors, participated in the discussion of all experimental parts of the project and in the preparation of the manuscript. All authors read and approved the final manuscript. SCMV within-time-point significantly differentially expressed sequences: File 1 illustrates the 65 significantly differentially expressed sequences identified within time points in the SCMV experiment; all information available for the genes is provided in the file. Click here for file. SCMV between-time-point significantly differentially expressed sequences: File 2 illustrates the 28 significantly differentially expressed sequences identified between time points in the SCMV experiment, and gives basic information about the genes retrieved from the analysis. Click here for file. Genes up-regulated in single MDMV time points: File 3 illustrates the percentages of all differentially expressed vs. significantly differentially expressed up-regulated genes identified in the between-time-point MDMV experiment for each of the four applied time points (including mock control). Click here for file. Genes expressed in the MDMV experiment based on their folds of change: File 4 illustrates the percentages of all differentially expressed vs. significantly differentially expressed genes in the between-time-point MDMV experiment, distributed according to their folds of change. Click here for file. MDMV between-time-point significantly differentially expressed sequences: File 5 illustrates the 2 significantly differentially expressed sequences identified between time points in the MDMV experiment, and gives basic information about the genes retrieved from the analysis. Click here for file. Vectors and primers for insert amplification: File 6 illustrates the 10 different E.
coli vectors and their primer sequences utilised in this study for the amplification of inserts to be spotted on the SCMV cDNA microarray.Click here for file"}
+{"text": "Clustering genes based on these profiles is important in discovering functionally related and co-regulated genes. Early clustering algorithms do not take advantage of the ordering in a time-course study, explicit use of which should allow more sensitive detection of genes that display a consistent pattern over time. Peddada et al. proposed an order-restricted inference approach. We propose a computationally efficient information criterion-based clustering algorithm, called ORICC, that also takes account of the ordering in time-course microarray experiments by embedding order-restricted inference into a model selection framework. Genes are assigned to the profile they best match, as determined by a newly proposed information criterion for order-restricted inference. In addition, we also developed a bootstrap procedure to assess ORICC's clustering reliability for every gene. Simulation studies show that the ORICC method is robust, always gives better clustering accuracy than Peddada's method and is hundreds of times faster. Under some scenarios, its accuracy is also better than some other existing clustering methods for short time-course microarray data, such as STEM and the method of Wang et al. Our ORICC algorithm, which takes advantage of the temporal ordering in time-course microarray experiments, provides good clustering accuracy and is meanwhile much faster than Peddada's method. Moreover, the clustering reliability for each gene can also be assessed, which is unavailable in Peddada's method. In a real data example, the ORICC algorithm identifies new and interesting genes that previous analyses failed to reveal. Here, we briefly present the order-restricted MLE under the simple order (3) and umbrella order (4) constraints, which are needed for the ORICC analysis in our simulation and real data example; for more general results, we refer to [45]. Suppose \u03bc = (\u03bc1, \u2026, \u03bcT)' and v = (v1, \u2026, vT)'; then the data log-likelihood is given by expression (11).
Let wt = nt/vt. When the variances v are known, the order-restricted MLE of \u03bc is obtained under the relevant constraint: (1) \u03bc1 \u2264 \u03bc2 \u2264 \u22ef \u2264 \u03bcT; (2) \u03bc1 \u2265 \u03bc2 \u2265 \u22ef \u2265 \u03bcT; (3) \u03bc1 \u2264 \u22ef \u2264 \u03bch \u2265 \u22ef \u2265 \u03bcT; (4) \u03bc1 \u2265 \u22ef \u2265 \u03bch \u2264 \u22ef \u2264 \u03bcT. The maximum log-likelihood under the \u03bb-th candidate profile is then obtained by plugging the corresponding order-restricted MLE into (11). If the variances v are unknown, we need to impose the assumption v1 = v2 = \u22ef = vn = v; in microarray data, this assumption is reasonable after the data are properly normalized. Under this situation, the order-restricted MLE of \u03bc can be obtained similarly to the known-variance case by letting wt = nt instead of wt = nt/vt, and the MLE of v is then the average squared deviation of the observations from the fitted means. Again, the maximum log-likelihood for the \u03bb-th candidate profile is obtained by plugging the corresponding order-restricted MLE into (12). We have implemented ORICC in an R program, which is available for download. NL, NS and BZ had the initial idea and initiated the study. NL and TL conducted the data analyses, created all tables and figures, and prepared the manuscript under the supervision of BZ. All authors read and approved the final manuscript. Gene clusters from the ORICC analysis of the breast cancer cell line data: this table presents gene clusters given by the ORICC algorithm, using ten candidate inequality profiles, for the breast cancer cell line microarray data.Click here for file"}
+{"text": "Current technologies have led to the availability of multiple genomic data types in sufficient quantity and quality to serve as a basis for automatic global network inference. Accordingly, there are currently a large variety of network inference methods that learn regulatory networks to varying degrees of detail. These methods have different strengths and weaknesses and thus can be complementary. However, combining different methods in a mutually reinforcing manner remains a challenge.We investigate how three scalable methods can be combined into a useful network inference pipeline. The first is a novel t-test\u2013based method that relies on a comprehensive steady-state knock-out dataset to rank regulatory interactions. The remaining two are previously published mutual information and ordinary differential equation based methods that use both time-series and steady-state data to rank regulatory interactions; the latter has the added advantage of also inferring dynamic models of gene regulation which can be used to predict the system's response to new perturbations. We demonstrate complementarity between this method and the two methods that take advantage of time-series data by combining the three into a pipeline whose ability to rank regulatory interactions is markedly improved compared to any single method alone. Moreover, the pipeline is able to accurately predict the response of the system to new conditions (in this case new double knock-out genetic perturbations). Our evaluation of the performance of multiple methods for network inference suggests avenues for future methods development and provides simple considerations for genomic experimental design. 
Our t-test based method proved powerful at ranking regulatory interactions, tying for first out of 19 teams in the DREAM4 in-silico network inference challenge. Our code is publicly available at http://err.bio.nyu.edu/inferelator/. Predicting how a cell will respond, at the molecular level, to environmental and genetic perturbations is a key problem in systems biology. Molecular regulatory systems-level responses are governed by several regulatory mechanisms including the underlying transcriptional regulatory network (RN). Recently, there has been an increase in the number of genome-wide datasets appropriate for large scale network inference, which has driven a large interest in methods for learning regulatory networks from these datasets. In general, the question of inferring a transcriptional RN can be posed in the following way: given a set of regulators (transcription factors - TFs) and a set of targets (genes), what are the regulatory relationships between the elements in these two sets? These relationships can be directed (e.g. gene A regulates gene B) or undirected (e.g. there is a regulatory relationship between gene A and gene B), and can have parameters describing the strength, confidence and/or kinetics of the regulatory interaction (depending on the method used). RN inference techniques use three main types of genome-wide data: 1) steady-state transcriptional profiling of the response to perturbations, 2) collections of time series observations following relevant perturbations, and 3) measurements of TF-DNA binding. Different types of RN inference methods produce RNs that vary in detail and comprehensiveness. One critical distinction is the scalability of any given method. Typically, methods that learn less detailed regulatory models scale to larger systems and data sizes than methods that learn more complex models. Another critical difference between methods is whether causal (directed) edges or undirected relationships are learned. 
Several current methods aim to learn dynamical parameters, such as TF-target activation rates and rates of degradation of gene products. Ideally, a computational biologist should choose the most detailed method that the data will support, as more detailed models can suggest more focused biological hypotheses and be used to model a system's behavior in ways that simple network models cannot. Given this constant need to balance the specific features of any given biological dataset with the capabilities of multiple RN inference algorithms, testing of RN inference methods using a variety of datasets is a critical field-wide activity. Several recent methods aim to do so by generating biologically meaningful datasets with a known underlying topology. To this end, the Dialogue for Reverse Engineering Assessments and Methods (DREAM) organizes blind assessments of network inference methods. It should be noted that biological systems present several advantages not relevant to the DREAM4 challenge. These advantages (not discussed here) are leveraged by integrative methods for learning modularity prior to inference. Several methods for detecting significant regulatory associations are based on similarity metrics derived from information theory, such as MI. One such approach was used to learn the Halobacterium salinarium transcriptional regulatory network, and was able to predict mRNA levels of 85% of the genes in the genome over new experimental conditions. Ordinary differential equation based methods for RN inference attempt to learn not only the topology of the network (i.e. \u201cwho regulates whom\u201d), but also the dynamical parameters associated with each regulatory interaction. Regulatory network models resulting from these methods can be used to predict the system-wide response to previously unseen conditions, future time-points, and the effects of removing system components. A drawback of these methods is that they generally require time-series data and more complete datasets than many alternative methods. 
ODE methods model the rate of change in the expression of a gene as a function of TFs (and other relevant effects) in the system. ODE based methods differ in their underlying functional forms, how the ODE system of equations is solved (coupled or uncoupled solution), and how prior knowledge and sparsity constraints are imposed on the overall inference procedure. For example, several methods have been proposed that use complex functional forms. Resampling refers to a broad class of statistical methods that are often used to assess confidence bounds on sample statistics by empirically generating distributions. Here we focus on which data types (time-series or steady-state), and which methods can be expected to perform best at either reconstructing network topology or predicting the response of the system to new perturbations. Our analysis suggests several simple considerations for determining the correct balance between time-series and steady-state data required for large-scale network inference, and how to use these distinct data types in a mutually reinforcing manner.The DREAM4 datasets consisted of both time-series and steady-state data, and participants were challenged to predict: 1) the topology of the network (as a ranked list of regulatory interactions), and 2) the response of the network to combinations of genetic perturbations (double knock-outs). We have participated in both challenges. For challenge 1 we used a relatively simple t-test based method, Median Corrected Z-scores (MCZ). For challenge 2, we were given new double knock-out perturbations and asked to predict the steady-state expression of all other genes in response to the perturbation. The accuracy of the prediction was evaluated by calculating the mean square error (MSE) between the actual and predicted expression of the genes. The underlying model for the expression data in DREAM4 was generated by stochastic differential equations. Each measurement can be thought of as the observation of only a few cells, as opposed to a population of cells. 
Accordingly, each measurement of wild-type expression is a noisy observation. We further improved our estimate of the population wild-type expression by taking the median across measurements, as in the earlier in-silico network challenge. However, for DREAM4 the noise for each gene was a function of the gene's expression (higher noise for higher expression), more accurately simulating the noise found in real microarray measurements. Thus, we used a method that takes into account a more biologically relevant gene-specific noise model to rank regulatory interactions. A natural way of identifying if a gene responds to a knock-out is to compute a z-score of its expression in the knock-out condition relative to the wild-type median. Previously, we have observed that the genetic knock-out data are especially informative for network inference. MCZ performed very well in reconstructing the topology of the network (i.e. ranking regulatory interactions based on confidence); however, it cannot be used to learn dynamical models of regulation (and hence cannot be used to make predictions of the system's response to double knock-outs). Additionally, it requires a very complete dataset to rank all possible regulatory interactions. Moreover, if a regulator is not highly expressed in the wild-type condition then the prediction of its targets using MCZ is not very reliable (in this dataset expression and activity seem to be correlated). tlCLR is a MI-based method that extends the original CLR algorithm to take advantage of time-series data. CLR has proven successful at recovering known E.coli regulatory associations as well as predicting novel interactions. However, CLR can only predict undirected regulatory interactions, and must rely on additional data to determine directionality. 
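The z-score idea behind MCZ can be sketched as follows (a simplified illustration, not the authors' exact implementation; the gene names and expression values are made up): for each knock-out of a regulator, every gene is scored by how far its expression falls from the wild-type median, in units of the wild-type standard deviation.

```python
import statistics

def mcz_scores(wt_samples, ko_expression):
    """Median-corrected z-scores for one knock-out experiment.

    wt_samples: dict gene -> list of wild-type expression replicates
    ko_expression: dict gene -> expression measured in the knock-out
    Returns dict gene -> |z|, usable as a confidence for the edge
    (knocked-out regulator -> gene).
    """
    scores = {}
    for gene, wt in wt_samples.items():
        med = statistics.median(wt)
        sd = statistics.stdev(wt)
        scores[gene] = abs(ko_expression[gene] - med) / sd
    return scores
```

A gene whose expression is many noise standard deviations away from the wild-type median is a likely target of the knocked-out regulator.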
By taking advantage of the temporal information available from time-series observations, we have shown that CLR can be extended, allowing us to infer directed regulatory interactions and improving overall performance. Mutual information is a metric of dependency between two random variables. We now describe dynamic-MI, which is motivated by our previous work on the Inferelator 1.0. The purpose of the next two steps is to separate the terms in (5) that involve the putative regulators (the explanatory variables) from the terms in (5) that involve the target (the response variable). We do so first for time-series data and then for steady-state data. For every gene pair, we construct a response variable from the estimated rate of change of the target's expression, and we pair this response variable with a corresponding explanatory variable built from the putative regulator's expression. For steady-state experiments, the derivative term is absent and the response variable is defined analogously. The resulting dynamic-MI score serves as a measure of confidence for a directed regulatory interaction between a pair of genes. MCZ, tlCLR and the Inferelator ODE approach comprise the core network inference methods on which our inference pipelines were built. We now present our four inference pipelines and how they were used to generate topology predictions for the DREAM4 competition. For pipeline 1 (MCZ), a ranked list of regulatory interactions is trivially obtained by using the z-score values directly. Pipelines 2 and 3 rank interactions by the confidence scores produced by tlCLR-Inferelator, and the dynamical parameters learned along the way can be used to simulate the network's response to new perturbations. The main challenge in combining these confidence scores is that they are not guaranteed to be on the same scale. Thus, we developed a single rank-based heuristic for merging them. The bonus-round challenge can be phrased as: given a simultaneous knock-out of two genes, predict the steady-state expression of all other genes. Because the models differ in predictive merit, we weighted the prediction of double knock-outs by the predictive merit of each model and computed the final double-KO predictions accordingly. The DREAM4 100-gene in-silico regulatory network competition asked participants to predict the topology of five networks. 
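The rank-based combination described above can be sketched as follows (an illustrative reconstruction, not the authors' exact heuristic; the score values are made up): each method's confidence scores are converted to ranks, and the ranks are averaged so that scores on different scales become comparable.

```python
def rank_combine(score_lists):
    """Combine confidence scores from several methods by average rank.

    score_lists: one list of scores per method, all over the same
    candidate regulatory interactions; higher score = more confident.
    Returns combined scores where higher = more confident overall.
    Ties are broken by the stable order of sorted().
    """
    n = len(score_lists[0])
    combined = [0.0] * n
    for scores in score_lists:
        # rank 1 = least confident ... n = most confident
        order = sorted(range(n), key=lambda i: scores[i])
        for rank, i in enumerate(order, start=1):
            combined[i] += rank / len(score_lists)
    return combined
```

Because only the orderings enter the combination, a method reporting z-scores and a method reporting mutual-information values can be merged without any rescaling.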
Predictions were made in the form of a list of regulatory interactions ranked in decreasing order by confidence. We evaluated the performance of four pipelines for learning regulatory networks, namely: MCZ, tlCLR-Inferelator, tlCLR-Inferelator+MCZ, and Resampling+MCZ. We developed these pipelines with a focus on combining results from multiple methods in a mutually reinforcing manner. In all four cases we evaluated the quality of the rankings of all possible regulatory interactions using the area under the precision-recall curve (AUPR), as this was the basis for the evaluation of performance in DREAM3 and DREAM4. For the main challenge in the DREAM4 100-gene competition, we submitted the results of MCZ as our ranked list of regulatory interactions. This method tied for first place (out of 19 teams). In the DREAM3 in-silico challenge all methods, including several similar to the ones we test herein, were found to perform significantly worse for networks with very high in-degree (targets regulated by many TFs; networks 3\u20135) and to be relatively insensitive, performance-wise, to the out-degree of TFs. We also used pipeline 3, which takes advantage of the time-series data, to predict topology and dynamical parameters for each network in a way that was more robust to the median expression of the regulators than methods that use solely genetic knock-out data. We submitted topology predictions and bonus-round (double knock-out) predictions generated by pipeline 3, and evaluated the topology predictions in terms of AUPR. Upon receiving the gold-standard networks we analyzed our ability to rank regulatory interactions using the different pipelines. Dissecting our performance in a gene-by-gene manner, we saw that there are instances when predictions made by pipeline 3 are more accurate than those made by MCZ. 
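The AUPR used for evaluation can be computed step-wise from a ranked edge list, as sketched below (a standard computation; the edges and gold standard are illustrative, and ties are assumed to be already broken):

```python
def aupr(ranked_edges, gold_standard):
    """Area under the precision-recall curve for a ranked edge list.

    ranked_edges: edges sorted from most to least confident.
    gold_standard: set of true edges.
    Accumulates precision at each true-positive hit, weighted by the
    recall increment (1 / number of true edges).
    """
    tp = 0
    area = 0.0
    for k, edge in enumerate(ranked_edges, start=1):
        if edge in gold_standard:
            tp += 1
            area += (tp / k) / len(gold_standard)
    return area
```

A perfect ranking (all true edges first) gives AUPR 1.0; a ranking that buries the true edges drives it toward the fraction of true edges in the candidate set.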
This is consistent with the performance of each of the methods as evaluated by AUPR. We submitted predictions of system-wide expression in the presence of double knock-outs for the DREAM4 bonus-round challenge. The predictions we submitted were based on initial conditions derived from wild-type expression levels. We believe that methods combining multiple data types, as well as methods that incorporate direct binding data, will continue to be fruitful avenues of future investigation."}
+{"text": "Advances in high-throughput genomic technologies, including microarrays, have demonstrated the potential to generate a tremendous amount of gene expression data for the entire genome. Deciphering transcriptional networks that convey information on intracluster correlations and intercluster connections of genes is a crucial analysis task in the post-sequence era. Most of the existing analysis methods for genome-wide gene expression profiles consist of several steps that often require human involvement based on experiential knowledge that is generally difficult to acquire and formalize. Moreover, large-scale datasets typically incur prohibitively expensive computation overhead and thus result in a long experiment-analysis research cycle.We propose a parallel computation-based random matrix theory approach to analyze the cross correlations of gene expression data in an entirely automatic and objective manner to eliminate the ambiguities and subjectivity inherent to human decisions. We apply the proposed approach to the publicly available human liver cancer data and yeast cycle data, and generate transcriptional networks that illustrate interacting functional modules. The experimental results conform accurately to those published in the previous literature.The correlations calculated from experimental measurements typically contain both \u201cgenuine\u201d and \u201crandom\u201d components. In the proposed approach, we remove the \u201crandom\u201d component by testing the statistics of the eigenvalues of the correlation matrix against a \u201cnull hypothesis\u201d \u2014 a truly random correlation matrix obtained from mutually uncorrelated expression data series. Our investigation into the components of deviating eigenvectors after varimax orthogonal rotation reveals distinct functional modules. 
The utilization of high performance computing resources, including the ScaLAPACK package, a supercomputer and a Linux PC cluster, in our implementations and experiments significantly reduces the amount of computation time that is otherwise needed on a single workstation. More importantly, the large distributed shared memory and parallel computing power allow us to process genomic datasets of enormous sizes. The rapid growth of genomic sequence data starting in the early 1980s has spurred the development of computational tools for DNA sequence similarity searches, structural predictions, and functional predictions. The emergence of high-throughput genomic technologies in the late 1990s has enabled the analysis of higher order cellular processes based on genome-wide expression profiles such as oligonucleotide or cDNA microarrays. A typical microarray dataset contains hundreds of sample points for thousands or tens of thousands of genes. A colossal amount of profound knowledge at the genome level is hidden inside such immense expression data. Individual genes are usually tested to identify differentially expressed genes at a given significance level. However, such point-level analysis does not address the full potential of genome-scale experiments. Nowadays genes can be grouped by their co-regulated expression waveforms in addition to sequence similarity and proximity on the chromosome as in gene content analysis. Genes ascribed to the same cluster are usually responsible for a specific physiological process or belong to the same molecular complex. 
Such transcriptome (mRNAs) datasets deliver new knowledge and provide revealing insight into the existing genome (genes) datasets, and can be used to guide proteome (proteins) and interactome research that aims to extract key biological features such as protein-protein interactions and subcellular localizations more accurately and efficiently.However, organizing genome-wide gene expression data into meaningful functional modules remains a great challenge. Many non-supervised and supervised computational techniques have been proposed to conjecture the cellular network based on microarray hybridization data. The widely employed techniques include Boolean network methods, differential equation-based network methods, Bayesian network methods, hierarchical clustering, K-means clustering, self-organizing map (SOM), and correlation-based association network methods.The Boolean network method is a coarse simplification of gene networks that determines each gene state as either 0 or 1 from the inputs of many other genes. There exists a wide range of microarray clustering and visualization tools available with statistical analysis support, including the affy, cclust, cluster, mcluster, hybridHclust and SOM packages from the R environment. We propose to develop a system that constructs and analyzes various aspects of transcriptional networks based on random matrix theory (RMT). The program in this work is implemented in C and MPI Fortran, and currently runs on a Linux cluster with eight nodes. We are now in the process of transitioning our system from the Linux cluster to supercomputers with thousands of compute nodes. The experimental datasets are extracted from two public project websites, namely the yeast cell cycle and human liver cancer projects. The yeast cell cycle data is one of the best known and most extensively evaluated microarray datasets. 
Since the structure of the network is quite well understood, we are able to evaluate our clustering results by referring to an extensive set of published works.The entire yeast genome is partitioned into a large number of functional modules sharing similar expression patterns. The large components of a deviating eigenvector computed from the Pearson correlation matrix are identified as gene members that belong to a specific functional module involved in a similar cellular pathway. We use the cclust package in the R environment to conduct K-means clustering, which repeatedly moves all cluster centers to the mean of their Voronoi sets. The distance between the cluster center and the data points is based on the Euclidean distance, and a polynomial learning rate is used. The major drawback of the K-means method is that a user must specify the number of clusters, which is usually unknown for unexplored microarray datasets. For experimental purposes, we set the cluster number to be 20 based on previous results we obtained from the RMT method. K-means is able to identify the protein biogenesis group with 114 genes; however, some closely related protein biogenesis genes are assigned to several other unrelated groups. The K-means algorithm tends to break down a coherent group with a large or medium number of gene members but lacks the capability of identifying small groups such as the histone group of 9 genes, which has been successfully identified by our RMT method.The hierarchical clustering method has been widely used by contemporary biologists to cluster microarray datasets. Groups of genes are nested at different levels of detail represented by a dendrogram. A user can choose either to build the hierarchical structure in a bottom-up or a top-down fashion. The bottom-up approach can identify small clusters but not large ones, while the top-down approach can easily discern a few large clusters. 
Chipman proposed a novel hybrid clustering method, which combines the two approaches. We characterize the patterns of gene expression in hepatocellular carcinoma (HCC) using the RMT method. There are about 20000 genes with more than 200 samples, including 97 primary HCC, 76 nontumor liver tissues, 7 benign liver tumor samples with 3 adenoma and 4 FNH, 7 metastatic cancers, and 9 HCC cell lines. We cluster the microarray data for both genes and samples. The liver samples are roughly divided into two major groups, namely the HCC tumor tissues and nontumor liver tissues, where a few HCC tumor samples are found in the nontumor cluster. Adenoma and FNH samples are dispersed within the HCC cluster. Metastatic colon cancer samples are identified as a single cluster due to their highly similar expression patterns. Two metastatic granulosa cell tumor samples are also grouped together. We observe that our method is also able to detect subclusters within a big cluster. For example, since tumor samples that are acquired from the same patient usually display similar expression patterns, 6 HCC samples from patient HK64 are grouped together as a subcluster within the HCC cluster; 5 samples from patient HK66 are found in the same subcluster; 3 samples from patient SF34 also appear in one subcluster. Our clustering results conform nicely to the results published in the literature. Most eigenvalues calculated from the correlation matrix R fall within the random-matrix range, with a few deviating from the upper (\u03bb+) and lower (\u03bb\u2212) bounds and conveying the true correlation information. This observation enables us to separate the real correlation from the randomness. This denoising process is necessary since microarray data is extremely undersampled and may introduce significant measurement noise. Interestingly, Kwapien et al. studied how such spectra depend on the sample size K. 
However, in practice, a mathematically large sample size K is not always feasible for most biological datasets due to the considerable time and material resources involved in biological experiments. We further compare the probability distributions of the eigenvalues and eigenvector components against their random-matrix counterparts. We consider the set of eigenvalues that deviate from the eigenvalue range of the random matrix as genuine correlation. The amount of variance contributed by each eigenvector (factor) is reflected by the proportion of its eigenvalue over the sum of all eigenvalues, based on principal component analysis (PCA). In other words, principal factors are responsible for the majority of variation within the data. Thus, only large eigenvalues and their corresponding eigenvectors are retained for further treatment and gene group analysis. The rest of the eigenstates contain either insignificant or noisy information. Deviating eigenvalues naturally lead us to the investigation of their corresponding deviating eigenvectors. There are N eigenvectors ui in total, i = 1\u2026N. Each eigenvector ui has N components corresponding to the N gene variables. All eigenvectors are perpendicular to each other and are normalized to length 1. The probability distributions of eigenvector components for different eigenvalues are plotted and compared against that of a random matrix, which follows a Gaussian distribution with zero mean and unit variance. The probability distribution of the components of eigenvectors whose eigenvalue \u03bbk lies in the bulk \u03bb\u2212 \u2264 \u03bbk \u2264 \u03bb+ for the human liver cancer data shows good agreement with the Gaussian distribution, as indicated by the upper panel of the figure. After acquiring a set of normalized eigenvectors, we transform the eigenvector components to loading factors by multiplying the vector components by the square root of the corresponding eigenvalue. Each eigenvector represents one factor leading to one gene cluster. 
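For a correlation matrix built from N mutually uncorrelated series of length K, random matrix theory predicts that the eigenvalues fall within the bulk [\u03bb\u2212, \u03bb+] with \u03bb\u00b1 = (1 \u00b1 \u221a(N/K))\u00b2. The resulting filtering step can be sketched in a few lines of Python (an illustration of the denoising idea; the paper's actual analysis runs ScaLAPACK on a cluster, and the eigenvalue list here is made up):

```python
import math

def rmt_bounds(n_genes, n_samples):
    """Marchenko-Pastur bounds for the eigenvalues of the correlation
    matrix of n_genes mutually uncorrelated variables observed over
    n_samples samples."""
    r = math.sqrt(n_genes / n_samples)
    return (1 - r) ** 2, (1 + r) ** 2

def deviating_eigenvalues(eigenvalues, n_genes, n_samples):
    """Keep only eigenvalues outside the random-matrix bulk; these
    are taken to carry the genuine correlation structure."""
    lo, hi = rmt_bounds(n_genes, n_samples)
    return [ev for ev in eigenvalues if ev < lo or ev > hi]
```

With N = 100 genes and K = 400 samples the bulk is [0.25, 2.25]; only eigenvalues outside this interval (and their eigenvectors) are retained for module analysis.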
A larger loading factor (we select 0.5 to be the cutoff point) indicates that the corresponding gene \u201cloads\u201d more on that eigenvector, or that the gene is more expression-dominating for that cluster. To simplify the eigenvector structure and make the interpretation of gene clusters easier and more reliable, we apply orthogonal rotation to the retained eigenvectors. Since the rotation is performed in the subspace of the entire set of eigenstates, the total variation explained by the newly rotated factors is always less than in the original space, but the variation within the subspace remains the same before and after rotation; only the partition of variation changes. We use the VARIMAX criterion, a widely used orthogonal rotation; a rotation matrix R can be determined to specify such a rotation, where \u03b8i,j is the rotation angle from old axis i to new axis j. The eigensignal zi(s) of a certain eigenvector ui under a sample series s is computed as the scalar product of the sample series with the eigenvector, as shown in Eq. 6. The stability of gene clustering based on our eigenstate analysis can be evaluated in terms of the variance of the total expression signal zi for eigenvector i among different samples and time series. The variances are directly associated with the corresponding eigenvalues, one of the most important properties of eigensignals (Eq. 7). Time-series gene expression data provides a temporal dimension to the knowledge space. However, most similarity measurement techniques, including Pearson correlation, usually consider the expression patterns of two genes under the same conditions or at the same time points; no time-lag analysis is taken into account. Thus, time-lagged co-expressed genes will not be correctly identified as the same group. In fact, a certain time lag usually exists before a transcription factor begins to influence the expression of some other genes because of the delay in the signaling mechanism. 
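The loading-factor step can be sketched as follows (illustrative Python; the 0.5 cutoff follows the text, while the eigenvector components and eigenvalue are made-up values):

```python
import math

def module_members(eigenvector, eigenvalue, cutoff=0.5):
    """Assign genes to the functional module of one eigenvector.

    The loading of gene g is component_g * sqrt(eigenvalue); genes
    whose absolute loading exceeds the cutoff dominate the cluster
    associated with this factor.
    """
    scale = math.sqrt(eigenvalue)
    return [g for g, comp in enumerate(eigenvector)
            if abs(comp * scale) > cutoff]
```

Scaling by the square root of the eigenvalue means that components of factors explaining more variance translate into larger loadings, so the same cutoff is comparable across factors.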
Such co-regulation behavior can be categorized as either up-regulation or down-regulation, namely the expression of a gene may either stimulate or inhibit the expression of other genes. It is our interest to capture and explore the time-lag relationships among all the genes in our future work. The cross-correlation method in Eq. 8 captures such relationships, where Corrk denotes the lag-k correlation of two genes x and y, xt is the expression signal of a known gene at time t, yt+k is the expression level of a gene at time t+k, N denotes the number of time points, and k represents the time delay/lag. The correlation will be large if the first expression signal leads the second, with the expression waveform shifted to the left of the second. To capture the discrete correlation of two gene samples with time-series expression data, we can utilize the discrete Fourier transform to compute the correlation, where X(f) and Y(f) are the discrete Fourier transforms of xt and yt, and the asterisk represents complex conjugation. In addition, after genes are clustered by time-lag analysis, one usually conducts upstream sequence analysis to identify consensus regulatory elements for those genes that are controlled by the same transcription factor. A single workstation is no longer fast and powerful enough to cope with emerging large-scale microarray datasets. High performance computing facilities have become indispensable tools for biologists to reduce the computing time and improve efficiency. In order to calculate the eigenvalues and eigenvectors of a large correlation matrix, we install ScaLAPACK on our Linux cluster. ScaLAPACK is an acronym for Scalable Linear Algebra Package, and is a library of high-performance linear algebra routines for distributed memory message-passing computers. The PDSYEVX routine is used to compute selected eigenvalues and eigenvectors of a real symmetric matrix by calling the recommended sequence of ScaLAPACK routines. 
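A direct (non-FFT) version of the lagged cross-correlation can be sketched as follows (illustrative Python; for long series the Fourier-transform route mentioned above is preferable, and the test signals are made up):

```python
def lagged_corr(x, y, k):
    """Pearson correlation between x[t] and y[t+k] over their overlap.

    A large value at lag k suggests that the expression of x leads
    that of y by k time points.
    """
    n = len(x) - k
    xs, ys = x[:n], y[k:k + n]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def best_lag(x, y, max_lag):
    """Lag k in [0, max_lag] maximizing the cross-correlation."""
    return max(range(max_lag + 1), key=lambda k: lagged_corr(x, y, k))
```

For two genes whose expression waveforms are identical up to a delay, the estimated best lag recovers that delay, which is the time-lag relationship the text aims to capture.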
We use a 2D block-cyclic data distribution for workload balance among all compute nodes to achieve performance and scalability. The size of the subblock dividing the symmetric correlation matrix is chosen to be 64 \u00d7 64, and the process grid configuration is set to 2\u00d74 for the eight-node cluster. With the help of parallel computing packages, we are able to finish some heavy computing tasks within a short period of time (about one hour) that might otherwise take days on a single workstation. Competing interests: The authors declare that they have no competing interests. MZ designed and implemented the RMT method on a single workstation. QW implemented the parallel version of the RMT method. MZ also analyzed the yeast cycle and human liver cancer microarray datasets."}
+{"text": "Circadian rhythm is a crucial factor in the orchestration of plant physiology, keeping it in synchrony with the daylight cycle. Previous studies have reported that up to 16% of the plant transcriptome is expressed with circadian periodicity. Our studies of mammalian gene expression revealed circadian baseline oscillation in nearly 100% of genes. Here we present a comprehensive analysis of periodicity in two independent data sets. Application of the advanced algorithms and analytic approaches already tested on animal data reveals oscillation in almost every gene of Arabidopsis thaliana. This study indicates an even more pervasive role of oscillation in the molecular physiology of plants than previously believed. Earlier studies have dramatically underestimated the prevalence of circadian oscillation in plant gene expression. In Arabidopsis thaliana, researchers have determined that a \"substantial\" part of the plant transcriptome follows a circadian rhythm. This estimation is based on microarray experiments that search for the genes following the circadian rhythm among the entire set of transcripts that is examined by the microarray. Early attempts to identify these genes employed two-color spotted arrays, resulting in a cumbersome experimental design, or tried to minimize expenses by increasing the time span between the sample collections. The latter produced data with a very low sampling rate, which obscured the oscillation pattern in all but a few of the least noisy genes. More recent studies have improved on this design. If a time series contains a periodic component at some frequency, then the periodogram exhibits a peak at that frequency with a high probability. 
Conversely, if the time series is a purely random process (a.k.a. \"white noise\"), then the plot of the periodogram against the Fourier frequencies approaches a straight line. The significance of the largest periodogram peak can be assessed with Fisher's g-test, whose exact null distribution is P(g > x) = \u2211j=1..p (\u22121)^(j\u22121) C(n, j)(1 \u2212 jx)^(n\u22121), where p is the largest integer less than 1/x. This algorithm closely follows the guidelines recommended for analysis of periodicities in time-series microarray data. For an expression profile Y = x0, x1, x2, \u2026, xN-1, the autocorrelation is simply the correlation of the profile against itself with a frame shift of k data points. Let Bij be the strength of the ith TF binding to the promoter of the jth gene, and let Ckj specify the expression state of the jth gene under condition k. For simplicity, we consider only binary states: \"up-regulated\" and \"not-up-regulated\", while it can be easily generalized to any number of states. In this paper, we refer to up-regulated and not-up-regulated genes as positive and negative genes, respectively. The strength of a TF binding to a promoter sequence is represented by the negative logarithm of the binding p-value. A training set contains a set of instances (genes), each of which is represented by a vector; the vector corresponding to the jth gene collects the binding strengths of all TFs to that gene's promoter. Once we have constructed the training set, we then learn a set of decision trees to classify gene expression states based on TF binding data. A decision tree is a rooted tree consisting of internal nodes and leaf nodes. Each internal node corresponds to a test of the binding of a selected TF to a gene, for example, \"can TF A bind to gene g?\". Each leaf is a prediction of the state of that gene, for example, \"gene g is up-regulated\". An internal node has two branches: the right branch is chosen when the test succeeds; and the left branch is chosen when it fails. 
Therefore, a path from the root to a leaf defines a possible regulatory rule, for example, \"if a gene can be bound by TF A and TF B, it will be up-regulated at time t\". As decision tree learning algorithms are typically greedy, they are not guaranteed to find the optimal tree. From decision trees, we extract regulatory rules and calculate a significance score for each rule. Only significant rules are retained. For a rule that appears in multiple decision trees corresponding to the same training data, the most significant p-value is taken. Furthermore, a regulatory rule may very often be discovered at multiple time points. The negative logarithm of the p-value of a rule at a given time point reflects the regulation strength of the rule at that time. Thus it is informative to plot the negative logarithm of the p-value of a rule as a function of time; such a plot is referred to as a rule profile. Finally, when two or more microarray time series are available for the same biological process, we combine the rule profiles learned from different time series. As different experiments may have different sampling rates, we approximate each rule profile with a spline interpolation, and combine the profiles for the same rule from different time series to construct a single integrated profile. In the last step, we identify for each rule the most probable experimental conditions under which it functions and the set of genes that it regulates, and organize this information into a transcriptional regulatory network. Gene expression during the yeast cell cycle has been measured with several different synchronization methods. We applied our method to three data sets obtained from the CDC28, CDC15 and \u03b1-factor synchronization methods. For example, genes bound by Ndd1 with a p-value less than e-2.47 and by Mcm1 with a p-value less than e-3.82 are up-regulated at the 70-minute time point. 
For simplicity, we omit the p-value thresholds of binding data in later discussions, and abbreviate the two rules as Ndd1 \u2229 Mcm1 and Ndd1 \u2229 Fkh1, respectively. It is worth noting, however, that the thresholds are learned automatically and may differ between rules. We then extracted regulatory rules from the trees by a depth-first search from the root node to all leaf nodes labeled as positive. A node was included in a rule only if its right branch was taken by the path. For example, we extracted these two rules from the 70-minute tree (Figure ). Each rule has some number of supporting genes in the training set, from which a p-value can be calculated. For example, the rule Ndd1 \u2229 Mcm1 in the 70-minute tree is supported by 18 positive and 1 negative genes out of a total of 41 positive and 416 negative genes. This corresponds to a p-value \u2248 10^-19. The most significant rule identified from the 20-minute time point is Mbp1 \u2229 Swi6 (p = 10^-19); the other three significant rules are Swi4 \u2229 Swi6 (p = 10^-9), Mbp1 \u2229 Dot6 (p = 10^-5) and Mbp1 \u2229 Ash1 (p = 10^-5). Other significant rules include Fkh1 \u2229 Fkh2 (p = 10^-6), Met4 \u2229 Met31 (p = 10^-4) and Ndd1 \u2229 Fkh1 (p = 10^-6). Rules identified for the 100-minute time point include the early G1 phase TFs Swi5 \u2229 Ace2 (p = 10^-5), as well as the late G1 phase TFs Mbp1 (p = 10^-20) and Swi4 (p = 10^-8). The above examples illustrate the ability of the single decision tree approach to identify the known TFs and to associate them with the appropriate cell cycle phases. However, as there might be other decision trees that explain the data equally well, we may have missed some interesting TFs or TF combinations. 
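The rule p-value can be reproduced with a stdlib hypergeometric tail computation, using the supporting-gene counts quoted above (18 positive and 1 negative supporting genes, out of 41 positive and 416 negative genes):

```python
from math import comb

def rule_p_value(m, n, total_pos, total_neg):
    """P(at least m positive genes when drawing m + n genes at random
    from total_pos positive and total_neg negative genes)."""
    N = total_pos + total_neg
    draw = m + n
    p = 0.0
    for i in range(m, draw + 1):
        if i <= total_pos and draw - i <= total_neg:
            p += comb(total_pos, i) * comb(total_neg, draw - i) / comb(N, draw)
    return p

# Rule Ndd1 AND Mcm1 at 70 minutes: 18 positive + 1 negative supporting
# genes, out of 41 positive and 416 negative genes in the training set.
p = rule_p_value(18, 1, 41, 416)
# p comes out on the order of 10^-19, consistent with the reported value.
```

The early `comb` terms dominate, so the sum is numerically stable even though the individual binomial coefficients are astronomically large Python integers.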
In this subsection, we show how an ensemble learning approach can be used to extract alternative regulatory rules, thus providing a more complete picture of the transcriptional regulation of the yeast cell cycle. Many machine learning approaches have been developed for learning tree ensembles. We repeated the learning method on the CDC15 and \u03b1-factor data sets, and the resulting regulatory rules are listed in Additional Files. Notably, rules learned from the \u03b1-factor data set involve novel transcription factors: Yap5 (p = 10^-10 at the 14-minute time point and 10^-8 at the 77-minute time point) and Gat3 (p = 10^-9 at the 14-minute time point and 10^-8 at the 77-minute time point). The roles of these two TFs in G1 are still unknown and may deserve further investigation. Later, we will introduce a method for combining the rules learned from the three data sets. A critical issue for classification algorithms is generalization: how well can a learned model be applied to new data that has not been seen by the learning algorithm? When the number of features is large, a classifier is often over-fitted, in that it performs very well on training data but poorly on unseen data. Therefore, it is important to evaluate the accuracy of a classifier on unseen data, which is typically done by a cross-validation procedure. In this work we used 10-fold cross-validation. The simplest measure is the overall accuracy (denoted A). However, this tends to underestimate the true error, especially when the ratio of positive and negative instances is skewed. For example, if there are 990 negative and 10 positive instances, simply predicting everything as negative will achieve 99% accuracy. Therefore, we compute the kappa statistic K to measure accuracy. K is a better estimation of the true classification accuracy, and is guaranteed to be no greater than A. 
Furthermore, it has been suggested that K < 0.4 indicates a poor classifier, K > 0.75 implies an excellent classifier, and values in between indicate a moderate one. For each learned decision tree, we extracted rules by following the branches from the root node to leaf nodes labeled as positive. A node was included in a rule only if its right branch was taken to reach the leaf node of the rule. For example, given a path \"Ndd1 \u2265 2.47 \u2229 Mcm1 < 3.82 \u2229 Fkh1 \u2265 3.44 \u21d2 Positive\", we will omit the second term and extract a rule \"Ndd1 \u2265 2.47 \u2229 Fkh1 \u2265 3.44 \u21d2 Positive\". The reason is that the biological meaning of the second term is ambiguous. We calculated a p-value for each rule with a hypergeometric distribution: for a rule supported by m positive and n negative genes (m > n), the p-value is the probability that we would select at least m positive genes if we randomly picked m + n genes. A rule was considered significant if its p-value was smaller than a preset threshold. We converted the time scale for the three expression data sets to a common scale, using a linear function T(s) = a * s + b, where s is the actual time in an experiment and T(s) is its converted time. The coefficients a, b were obtained from the literature: a = 0.70 and b = -1.58 for CDC15, and a = 1.37 and b = 5.71 for \u03b1-factor, meaning that the length of a cell cycle in CDC28 is 0.70-fold of the cell cycle length in CDC15 and 1.37-fold of that in \u03b1-factor, and the cell cycle in CDC28 starts 1.58 minutes earlier than in CDC15. We then approximated each rule profile with piecewise polynomial functions using the spline function in MATLAB. An integrated profile was obtained for each rule by summing its three splines from the CDC28, CDC15 and \u03b1-factor experiments. A rule was considered cell cycle dependent if its integrated profile has two peaks and the distance between the two peaks is approximately 80 \u2013 100 minutes. 
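The kappa statistic K mentioned above can be sketched with Cohen's kappa (an assumption: the text does not spell out the exact formula), reproducing the skewed 990/10 example where plain accuracy is misleading:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between labels and predictions."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement A
    # Expected agreement if predictions were made at random with the same marginals.
    classes = set(y_true) | set(y_pred)
    expected = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in classes
    )
    if expected == 1.0:
        return 0.0
    return (acc - expected) / (1 - expected)

# Skewed example from the text: 990 negative and 10 positive instances.
y_true = [0] * 990 + [1] * 10
y_pred = [0] * 1000  # trivially predict everything negative
kappa = cohens_kappa(y_true, y_pred)
# Accuracy A = 0.99, but kappa = 0: the trivial classifier is no better
# than chance, and kappa never exceeds A.
```

This makes concrete why K is a safer summary than A when positives are rare.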
The rules with notable cell cycle dependency are listed in an Additional File. The authors declare that they have no competing interests. JR and WZ conceived of the research. JR designed the study and carried out the computational analysis. JR wrote the paper and WZ helped with the manuscript preparation. YD and EJP helped to improve the algorithm and the manuscript. All authors read and approved the final manuscript. This PDF file contains all the significant regulatory rules learned from the CDC28 data set using the ensemble approach. Click here for file. This PDF file contains all the significant regulatory rules learned from the \u03b1-factor data set using the ensemble approach. Click here for file. This PDF file contains all the significant regulatory rules learned from the CDC15 data set using the ensemble approach. Click here for file. This PDF file contains the integrated rule profiles that show a cell-cycle dependency. Click here for file. This PDF file contains the integrated rule profiles that do not show a clear cell-cycle dependency. Click here for file"}
+{"text": "Gene expression time series array data has become a useful resource for investigating gene functions and the interactions between genes. However, the gene expression arrays are always mixed with noise, and many nonlinear regulatory relationships have been omitted in many linear models. Because of those practical limitations, inference of gene regulatory model from expression data is still far from satisfactory.\u03b8 = 55%, 18 relationships between genes are identified and transcriptional regulatory network is reconstructed. Results from previous studies demonstrate that most of gene relationships identified by SPM are correct.In this study, we present a model-based computational approach, Slice Pattern Model (SPM), to identify gene regulatory network from time series gene expression array data. In order to estimate performances of stability and reliability of our model, an artificial gene network is tested by the traditional linear model and SPM. SPM can handle the multiple transcriptional time lags and more accurately reconstruct the gene network. Using SPM, a 17 time-series gene expression data in yeast cell cycle is retrieved to reconstruct the regulatory network. Under the reliability threshold, With the help of pattern recognition and similarity analysis, the effect of noise has been limited in SPM method. At the same time, genetic algorithm is introduced to optimize parameters of gene network model, which is performed based on a statistic method in our experiments. The results of experiments demonstrate that the gene regulatory model reconstructed using SPM is more stable and reliable than those models coming from traditional linear model. Gene expression arrays, which measure mRNA expression levels of thousands of genes simultaneously, make it possible to understand the complexities of biological system. 
By using the gene expression array in a time series paradigm, we can study the effects of certain treatments, diseases, developmental stages and drug responses on gene expression. Moreover, the underlying gene regulatory networks can be reconstructed by collecting and analyzing expression array data. Therefore, identifying gene regulatory networks from gene-expression data is now an extremely active research field. In previous studies, the time-series data of gene expression arrays have been very useful for investigating regulatory interactions between genes. In this paper, a novel Slice Pattern Model (SPM) is proposed to identify gene regulatory networks from gene expression arrays mixed with noise data. It is a hybrid approach that combines a linear model and pattern recognition. In general, models have more variables than available data points. Therefore, a genetic algorithm (GA) is introduced to optimize the parameters of regulation in gene networks. The traditional linear model is defined as xi(tk+1) = \u03a3j wji xj(tk), where N is the number of genes in the gene network, xi denotes the expression level of gene i at time point tk+1, the weight wji indicates the influence of regulator gene j on gene i, T is the number of time points in the gene expression data, and \u0394t = tk+1 - tk represents the average time of the interaction response. Given a set of time-equidistant expression data, the weights wji can be solved using linear algebra when the number of data points is more than the number of variables. The task of identifying gene regulatory networks is to optimize the parameters and minimize the residual between the linear model and the gene expression data, as shown in Equation (2): min \u03a3i \u03a3k (xi(tk+1) - yi(tk+1))^2, where yi(tk+1) is the expression level of gene i at time point tk+1 in the gene expression data, and xi(tk+1) is that at time point tk+1 in the linear model. However, the linear model only considers interaction responses that take place between genes with one average time delay. 
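The weight-recovery step described above (solving for wji by linear algebra when data points outnumber variables) can be sketched on an invented two-gene system; the network weights and initial state below are toy values, not from the paper:

```python
def solve2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

# Invented 2-gene network: x(t+1) = W x(t), with known weights to recover.
W = [[0.5, -0.3],
     [0.2, 0.8]]
x = [[1.0, 2.0]]  # expression of the two genes at the first time point
for _ in range(6):  # simulate 6 further time points (more data than unknowns)
    prev = x[-1]
    x.append([sum(W[i][j] * prev[j] for j in range(2)) for i in range(2)])

# Recover each row of W by least squares (normal equations X^T X w = X^T y).
X = x[:-1]
recovered = []
for i in range(2):
    y = [row[i] for row in x[1:]]
    xtx = [[sum(r[p] * r[q] for r in X) for q in range(2)] for p in range(2)]
    xty = [sum(r[p] * yi for r, yi in zip(X, y)) for p in range(2)]
    recovered.append(solve2(xtx, xty))
# With noise-free data the recovered weights match W to machine precision;
# with real, noisy arrays this is exactly where the inference degrades.
```

The same normal-equations step generalizes to N genes; the paper's point is that noise and multiple time lags break this clean picture.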
In fact, some interactions between genes possibly take multiple transcription time lags, and the transcription time lags are variable for different regulatory relationships in gene networks. In order to overcome the limitations of the linear model, we propose a new method, the slice pattern model (SPM), to reconstruct gene regulatory networks from gene expression data mixed with noise. SPM is designed to identify a set of genes whose expression levels change not only at the next time point, but also at further time lags. Some regulatory interactions take place with more time lags; for example, the known relationship SWI4 \u2192 MBP1 shows significant statistical correlation when the transcriptional time lag is identified as three time units (three time units = 30 min). For the time-series expression data, the local regulation relationship is considered, and the gene expression data over multiple consecutive time points is divided into a series of slices. A gene expression profile G = (g1, g2,..., gT) represents a set of gene expression data over T data points. When a sliding window with size k slides on G from point g1 to gT-k+1, it will generate (T-k+1) slices for a gene. This operation is performed on each gene expression profile, and a total of N \u00d7 (T-k+1) slices are formed for a gene expression dataset with N genes. A matrix of expression slices is constructed according to the matrix of the gene expression dataset. For further analysis, the rank patterns of gene expression levels in each slice are extracted, and those slice patterns indicate the feature of a gene. Considering a slice S with k data points S = (s1, s2,..., sk), its pattern is P(S) = (RS(s1), RS(s2),..., RS(sk)), where RS(si) denotes the rank of si in S. Thus, each gene can be represented as a series of slice patterns. To describe the regulation of gene i, SPM uses the following formulation: xi(tk) = \u03a3j wji xj(tk - \u03c4j), summed over the L regulators of gene i, where \u03c4j is the time lag of the regulatory interaction between gene j and gene i, \u03c4j \u2264 \u03b7, and xi(tk) is the expression level of gene i at time tk. 
\u03b7 is the max time lag with biological meaning, and L is the size of the gene set which regulates gene i. Since real expression array data are usually mixed with noise, the comparison between two genes is always disturbed by noise. For the ranking pattern in each slice of our method, the Spearman rank correlation (SRC) is introduced to estimate the similarity between two patterns, which has been used to assist in measuring the similarity between two genes. The SRC score between two slice patterns S and S' is given by the following equation: SRC(S, S') = 1 - 6 \u03a3i di^2 / (k(k^2 - 1)), where di = RS(si) - RS'(s'i) and RS(si) is the rank of si in the profile. The SRC satisfies -1 \u2264 SRC \u2264 1 for all S, S'. The SRC score -1 represents the complete opposite of the two rank patterns. So we can identify the similarity between two patterns according to the SRC score. It is fit for handling the distinct fluctuations mixed in at single points, which take place by accident in a microarray experiment. Thus, identifying a gene regulatory network becomes a question of optimizing a set of parameters wji to maximize the SRC between SPM and the gene expression data, where Oi(j) is the j-th slice pattern of gene i in the gene expression data, and Si(j) is the j-th slice pattern of gene i in the model. For optimizing the parameters of the gene network to fit those gene slices, an improved genetic algorithm (GA) is introduced to optimize the model that SPM retrieved from the gene expression data. The genetic algorithm was formally introduced in the 1970s by John Holland, and has been used in many research fields as an optimization method. Since the number of genes N is always more than the number of time points T in most publicly available gene expression data sets, repeated modeling is needed to get a statistical result. 
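The SRC between two slice patterns follows the standard Spearman rank formula; a minimal sketch (assuming no ties within a slice, with invented slice values):

```python
def ranks(slice_vals):
    """Rank of each value within its slice (1 = smallest); assumes no ties."""
    order = sorted(range(len(slice_vals)), key=lambda i: slice_vals[i])
    r = [0] * len(slice_vals)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def src(s1, s2):
    """Spearman rank correlation between two slices of equal size k:
    SRC = 1 - 6 * sum(d_i^2) / (k * (k^2 - 1))."""
    k = len(s1)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(s1), ranks(s2)))
    return 1 - 6 * d2 / (k * (k * k - 1))

up = [0.1, 0.5, 0.9, 1.4]    # monotonically increasing slice
down = [2.0, 1.2, 0.7, 0.3]  # monotonically decreasing slice
# src(up, up) = 1.0 (identical rank patterns);
# src(up, down) = -1.0 (completely opposite rank patterns).
```

Because only ranks enter the score, a single wildly fluctuating measurement shifts the result far less than it would shift a Pearson correlation, which is the noise-robustness the text appeals to.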
The genetic algorithm is a stochastic algorithm, so the result of each GA run is not the same. In the current study, if a gene connection is present more often than the threshold value \u03b8 in repeated modeling, the connection is added into the final gene regulatory network, with the values of its parameters equal to the average of those in the repeated modeling. In this study, we test the performance of the linear model and the slice pattern model in an artificial gene network. Then, in order to evaluate the feasibility of SPM on real gene expression array data, a yeast cell cycle gene network with nine specific genes is reconstructed by SPM and verified by comparison with established relationships in previous investigations. For the artificial network, \u03b8 is set as 60%. We take an artificial gene network with known structure (Figure ). Firstly, the initial conditions and status (Table ) are set. A gene expression dataset, the yeast cell cycle time-series gene expression arrays obtained from Cho, is taken, with \u03b8 = 55%. In this study, the modeling process was run 20 times independently to reconstruct the gene network. The result is shown in the table. Our results confirm the observations of previous studies and further identify the details of the regulation relationships, such as active/inhibitive interactions with transcriptional lags. Some novel interactions reconstructed by SPM need to be studied further. ACE2 and SWI5 are transcription factors that function at the M/G1 boundary. The linear model gives a description of continuous expression data modeling, which reflects the property of gene expression levels tending to be continuous. Reconstruction of a gene regulatory network is a reverse-engineering problem of inferring all of the unknown parameters in the linear model from gene expression data. However, due to the limitations of experiments, such as multiple transcriptional time lags and the lack of data points, traditional linear models lead to misleading modeling. We showed the unreliability of the linear model when inferring gene networks with variable multiple transcriptional time lags. 
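The consensus step over repeated GA runs (keep a connection when it appears in more than a fraction theta of the runs, averaging its parameters) can be sketched as follows; the run results below are invented:

```python
def consensus_network(runs, theta):
    """Keep connections appearing in more than a fraction theta of the runs;
    each kept connection gets the average of its weights over those runs."""
    counts, sums = {}, {}
    for run in runs:  # each run maps (regulator, target) -> weight
        for edge, w in run.items():
            counts[edge] = counts.get(edge, 0) + 1
            sums[edge] = sums.get(edge, 0.0) + w
    n = len(runs)
    return {e: sums[e] / counts[e] for e, c in counts.items() if c / n > theta}

# Invented results of 4 independent GA runs.
runs = [
    {("SWI4", "MBP1"): 0.8, ("ACE2", "SWI5"): 0.1},
    {("SWI4", "MBP1"): 0.6},
    {("SWI4", "MBP1"): 0.7, ("ACE2", "SWI5"): 0.2},
    {("SWI4", "MBP1"): 0.9},
]
net = consensus_network(runs, theta=0.55)
# ("SWI4", "MBP1") appears in 4/4 runs -> kept with the averaged weight;
# ("ACE2", "SWI5") appears in only 2/4 runs -> dropped at theta = 55%.
```

Raising theta trades recall for reliability, which is why the paper reports different thresholds for the artificial (60%) and real (55%) networks.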
In fact, many studies have demonstrated that some interactions between genes take more than one unit of time lag, and the transcriptional lags are diverse. In our approach, the time lag is determined during modeling, and time lags far from biological meaning are removed. The features retrieved from the expression data may also reduce noise interference to a certain extent. For identifying gene regulatory networks, the parameters of the gene networks are optimized via a genetic algorithm (GA). The genetic operations are implemented differently from other methods: our approach reconstructs a model that has the optimal pattern matching to the expected slice patterns. From the analysis of the experiments discussed above, we suggest that pattern matching may enhance the performance of gene network modeling. According to the results of the experiment on the yeast cell cycle time-series gene expression data, three features of the resulting network model are notable. First, the stability of the gene regulatory model reconstructed using SPM is better than that of models coming from the traditional linear model. Second, SPM can determine not only the influence of a regulator on its target gene, but also the time lags of regulation. Finally, and most importantly, the reconstruction of the gene regulatory networks is automatic and requires no prior knowledge of the direction of regulation. SPM represents a general method for constructing regulatory networks from time series expression data. We present a model-based computational approach, the Slice Pattern Model (SPM), to identify gene regulatory networks from time series gene expression arrays. By testing the performance in an artificial gene network, we show that SPM can handle multiple transcriptional time lags and more accurately reconstruct gene networks than the traditional linear model. A yeast cell cycle gene expression data set with 17 time points is used to reconstruct the regulatory network. 
The results demonstrate that the gene regulatory model reconstructed by SPM is more stable and reliable than those models coming from traditional linear model.The authors declare that they have no competing interests.YW, GW, YB and YL contributed to the design of the study. GW, YB and YL designed and performed the computational modelling and drafted the manuscript. YW, HT, YYJ, YD and YL participated in coordination, discussions related to result interpretation and revision of the manuscript. All the authors read and approved the final manuscript.The procedure of slice pattern model.Click here for file"}
+{"text": "Time series methods are commonly used to detect disease outbreak signatures from varying respiratory-related diagnostic or syndromic data sources. Typically this involves two components: (i) Using time series methods to model the baseline background distribution (the time series process that is assumed to contain no outbreak signatures), (ii) Detecting outbreak signatures using filter-based time series methods.We consider time series models for chest radiograph data obtained from Midwest children's emergency departments. These models incorporate available covariate information such as patient visit counts and smoothed ambient temperature series, as well as time series dependencies on daily and weekly seasonal scales. Respiratory-related outbreak signature detection is based on filtering the one-step-ahead prediction errors obtained from the time series models for the respiratory-complaint background.Using simulation experiments based on a stochastic model for an anthrax attack, we illustrate the effect of the choice of filter and the statistical models upon radiograph-attributed outbreak signature detection.We demonstrate the importance of using seasonal autoregressive integrated average time series models (SARIMA) with covariates in the modeling of respiratory-related time series data. We find some homogeneity in the time series models for the respiratory-complaint backgrounds across the Midwest emergency departments studied. Our simulations show that the balance between specificity, sensitivity, and timeliness to detect an outbreak signature differs by the emergency department and the choice of filter. The linear and exponential filters provide a good balance. Well-known, as well as previously uncharacterized infections continue to (re)emerge around the globe. 
To avoid casualties from outbreaks of these infections and from the potential criminal uses of bioagents, surveillance systems are needed that have the capacity to identify such outbreaks accurately and rapidly. The accuracy and timeliness of biosurveillance systems rest on the ability to model the uncertainty, severity, and aberrancy of clinical symptoms that are likely to portend disease outbreaks as expressed through the data monitoring system. Among children, respiratory symptoms are an attractive target for surveillance. These are a prominent feature of many childhood epidemics and an early presentation of diseases like avian influenza, severe acute respiratory syndrome (SARS), and inhalational anthrax that have recently come to the public's attention. Unfortunately, respiratory complaints are also a feature of many common childhood illnesses, reducing the ability of biosurveillance systems to detect epidemics of greater public health concern. What is needed, therefore, is clinical information that is readily accessible and pre-processed in a manner that reflects the severity and aberrancy of respiratory symptoms. Using such data, discrimination between common childhood diseases and more serious respiratory epidemics would be possible. Chest radiographs (X-rays), because they are readily available and are generally ordered by clinicians to evaluate respiratory complaints that are atypical or severe, have the potential to act as such a bio-monitoring and validation tool. In addition, detection based on models of radiograph ordering can indicate when in-depth follow-up is needed, as may occur when ordering of radiographs by clinicians is excessive for a given time of the year. 
Such in-depth review of radiographs may confirm clinical suspicions of an emerging epidemic or signal the need to perform a targeted review of medical charts to identify anomalous findings or groupings of aberrant findings that might herald the early stages of a respiratory-related outbreak.In this article we consider time series methods for the modeling and detection of respiratory-related outbreak signatures based on chest radiograph ordering patterns from a number of pediatric emergency departments (EDs) located in the Midwestern region of the United States. These models include ambient temperature records collected in each city, as a covariate. We use the temperature series as a surrogate measure of the annual influenza season. Also, a patient visit count series is included in the models to account for variations between-EDs (like ED sizes) and within-EDs . Addressing the fact that the underlying process is neither \"independent or stationary\", our interest is to model the underlying \"respiratory-complaint background\", using the available covariates and significant temporal dependencies present in the data. Without modeling the spatial dependence directly, we investigate whether or not there is evidence of spatial homogeneity in the statistical models across cities. We use filter-based prediction methods to indicate evidence of respiratory-related outbreaks using chest radiograph data. We describe the form and function of various filters that are commonly used to detect outbreak signatures. Using a stochastic model for an anthrax attack, we assess the performance of these methods. 
Since there are no data that contain outbreak patterns, the use of a model is key to providing realistic outbreak patterns that can accurately be used to evaluate these statistical detection methods. Reis, Mandl, and others use time series methods to detect evidence of disease outbreaks at Boston Children's Hospital. Our manuscript is organized as follows: we start by summarizing the data of interest, and propose a statistical model for the ED data in each city. Using the growing literature on the subject, we outline a stochastic model for an anthrax outbreak along with a healthcare utilization model for simulating people entering the ED. We then describe the methodology and theory for detecting unexpected outbreak signatures in time series sources using filters. We examine the form and function of the filters used for detection. In the results section, we explore the time series models obtained for each Midwest ED included in our study. We assess these detection methods using a simulated anthrax attack, and we end with a discussion. The data of interest consist of daily counts of ED visits and chest radiographs taken between January 1st, 2003 and September 9th, 2004, in five metropolitan children's hospitals in the Midwest of the USA, supplemented with time series of daily average temperature, obtained from the Average Daily Temperature Archive at The University of Dayton. These series let us directly assess the effect of additive stochastic outbreak signatures upon the radiograph series. We now discuss two aspects of the statistical model for these daily chest radiograph counts: distribution and scale. 
Although the outcome variable of interest is counts, like other researchers in the field we model the data on a continuous scale. In our study, the first ten months of data were used as the training set, while the remaining ten months were used, as a test dataset, to evaluate the model and detection methods. Let {Rk,t} denote the number of chest radiographs for the ED in city k on day t. Since the goal is to predict the number of chest radiographs, for each city we fit a linear time series regression model, using the number of chest radiographs as the response, and the number of visits and temperature as predictor variables. We smooth the temperature series, because we believe that long-range temporal trends are more predictive of chest radiograph counts. Here {Vk,t} are the visit counts for city k and {Tk,t} is the smoothed time series of temperatures for city k, filtered by taking a thirty day moving average to remove intra-month variation. To complete the model we assume that {Xk,t} is a zero mean stationary time series (we discuss the consequences of this assumption in the Discussion), which we shall represent using a seasonal autoregressive integrated moving average process (SARIMA). The SARIMA model (defined in the Appendix) allows for simultaneous modeling of dependencies on both the daily as well as the weekly seasonal scales. To aid in the comparison of the dependencies across cities, the order of the time series model, as determined by choosing the autoregressive (pk and Pk) and moving average orders (qk and Qk), will be the same for each city k. We now propose a simple stochastic model for an inhalational anthrax outbreak, based on the work of Buckeridge, et al. and Brookmeyer, et al. It has two parts: 1. A stochastic model of infection and progression of the disease. 2. A model of health-care-utilization that, on a day-by-day basis, tracks the behavior of each infected individual. Any inhalational anthrax outbreak starts with the dispersion of the anthrax spores. 
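The thirty-day moving-average smoothing of the temperature covariate described above can be sketched as follows; the toy temperature series is invented, and the trailing-window treatment of the first month is an assumption:

```python
def moving_average(series, window=30):
    """Trailing moving average; the first window-1 points average whatever
    history is available, so the output has the same length as the input."""
    out = []
    for t in range(len(series)):
        lo = max(0, t - window + 1)
        out.append(sum(series[lo:t + 1]) / (t + 1 - lo))
    return out

# Toy daily temperatures: a sawtooth around 10 degrees, mimicking
# intra-month variation around a slowly moving seasonal level.
temps = [10 + (t % 7) - 3 for t in range(90)]
smoothed = moving_average(temps, window=30)
# After the first month the smoothed series hugs the period mean (10),
# so the intra-month variation is largely removed.
```

The smoothed series is what enters the regression as {Tk,t}, leaving the long-range trend as the predictive signal.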
Once spores are inhaled by a subject, in the incubation stage the spores either germinate or are cleared out of the lung. For the spores that germinate during incubation, the later stages of the disease are the prodromal and fulminant stages, followed by death. Buckeridge, et al. model the outbreak in a population, N say, of children. Instead of using the region-based approach of Buckeridge, et al., we use the individual-based infection scheme of Brookmeyer, et al. Let \u03b8 represent the hazard rate per unit time that a spore is cleared from the lung and \u03bb be the rate of germination. Suppose that each individual inhales a dose of D spores. Then, the probability that at least one spore germinates is called the attack rate (AR) and is calculated using a Poisson approximation. For a given attack rate, the probability F(t) that at least one of the D spores germinates within t days follows, and note the limit lim t \u2192 \u221e F(t) = AR. Based on a statistical analysis of an anthrax outbreak that occurred in Sverdlovsk, Russia, Brookmeyer, et al. estimate \u03b8. The value of \u03bb is not estimated in their data analysis \u2013 based on animal studies they found that the rate \u03bb lies between 5 \u00d7 10^-7 and 10^-5. Buckeridge, et al. propose probabilities of entering the ED during the prodromal stage of Pd on a weekday and Pw on weekends. At the fulminant stage the probability of entering the ED, Pf say, is larger than the probabilities in the prodromal stage. The reason is that at the fulminant stage, the anthrax symptoms are similar to those of a heart attack and therefore people enter the ED with higher probability. The differentiation between weekday and weekend is irrelevant at this stage. We suppose that a small percentage of people in the prodromal or fulminant stages are misdiagnosed and thus need to re-enter the system. 
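The Poisson-approximation quantities AR and F(t) can be sketched as follows. The functional forms below (AR = 1 - exp(-D*lam/(lam+theta)), with F(t) rising to AR as t grows) follow the clearance/germination competition just described, and both the exact forms and all parameter values should be treated as illustrative assumptions rather than quotations from the paper:

```python
import math

def attack_rate(dose, lam, theta):
    """P(at least one of `dose` spores germinates), Poisson approximation,
    where each spore germinates (rate lam) before being cleared (rate theta)
    with probability lam / (lam + theta)."""
    return 1 - math.exp(-dose * lam / (lam + theta))

def f_germination(t, dose, lam, theta):
    """P(at least one spore germinates within t days); increases with t
    and tends to the attack rate as t grows."""
    per_spore = (lam / (lam + theta)) * (1 - math.exp(-(lam + theta) * t))
    return 1 - math.exp(-dose * per_spore)

# Illustrative values (not from the paper): clearance theta = 0.07/day,
# germination lam = 1e-6/day, inhaled dose D = 10000 spores.
theta, lam, D = 0.07, 1e-6, 10000
ar = attack_rate(D, lam, theta)
# f_germination(t, ...) starts at 0, increases with t, and approaches ar.
```

The long incubation implied by small lam/(lam+theta) is exactly what delays ED arrivals and makes early detection hard.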
Also, we assume that people can potentially be misdiagnosed a maximum of two times during the same attack (10% in the first visit and 5% in the second visit). The probabilities of entering the ED after being misdiagnosed are increased by an additive factor, C, for every additional entry. Our model allows for a small probability of drop-out, to account for other ways of leaving the system. The health-utilization model could easily be extended to include, e.g., varying probabilities of entering the ED by stage/time, and/or more advanced ways to exit the system. Next, we describe a simplified health-utilization model for people entering the ED, based on ideas discussed in the literature. The main idea of filter-based methods is to create a detection process {Dk,t}, for each time point t, which is a weighted average of the diagnostic or syndromic data to be used for the detection of an outbreak signature. The weights that appear in {Dk,t} are defined using the form of the time series model and a filter, {al} say. Extreme positive values of the detection process at a time point indicate a possible outbreak signature. The definition of the detection processes has its origin in process control. A common parametric approach that we follow in this study is to first obtain the residual process by subtracting off the non-stationary part of the model, and then apply a filter {ck,j} that first decorrelates (whitens) this residual process using the one-step-ahead prediction errors and second filters this whitened series using {al} to yield the detection process {Dk,t}. Further details are given in the Appendix. 
To understand the detection process {Dk,t} for different choices of filter {al}, one should consider the filter involved in the calculation of {Dk,t} using the residual series, {Xk,t}. Using the results from the Appendix, at time point t the detection process is a filtering of the residuals by {fk,l}, the filter defined by the convolution of the filters {al} and {ck,l}. Hence, we conclude that the detection of outbreaks depends not only on the choice of filter, but also on the statistical properties of the time series model, which defines {ck,l}. We will now focus on the effect of {al} (we will investigate the effect of changing the time series model in the Simulations section). As Reis, et al. did, we examine four choices of {al}, each of which is a form of difference filter: the 1-day, 7-day, linear, and exponential filters. Each filter is an average of a number of days close to the current time point minus a weighted average of the remaining values in the past. The filters {fk,l} for the detection processes vary according to the amount of autocorrelation within each time series. As an illustration, Figure shows the {fk,l} filter obtained when m = 28 for each of the four {al} filters defined above, using a SARIMA model based on the Akron data (details of the SARIMA model are shown in the Results section). 1. When {al} is the 1-day filter, the {fk,l} filter consists of the current time point of the process minus a smaller, weighted, decaying average of the past values. The weekly seasonal terms in the SARIMA model are reflected in this filter. 2. When {al} is the 7-day filter, {fk,l} is a weighted average of the last 7 days (with most weight on the current day), minus a weighted decaying average of the remaining past days. The weekly seasonal terms are not as strong, compared to the filter that is the convolution with the 1-day filter. 3. When {al} is the linear filter, {fk,l} is an average of 6 days in the past. Far more weight is put on the current day relative to the previous 6 days. From this we subtract an average of values previous to the 6 days. Most of the weight from the second average comes from around days 8\u201310 in the past. 4. When {al} is the exponential filter, {fk,l} is a combination of the 4 most recent days. We subtract an average of the past values (mostly days 6\u201311 in the past). In Figure the thresholds, \u03c4k,0.03, are calculated using the normal approximation of the Appendix (equation 10), assuming the estimated innovation variance. After specifying a model for {Rk,t}, we have a regression model with time series errors that can be fit using standard maximum likelihood methods. We selected the order of the SARIMA model for the time series errors, {Xk,t}, as defined by (4) in the Appendix, using standard identification techniques based on the sample autocorrelation and partial autocorrelation functions. We set Pk = 1 and Qk = 1, so that the random seasonal component is a combination of an autoregressive and a moving average term, each of first order over a period of seven days. The seasonal component of the time series model corresponds to an evolution of a first order autoregressive process with a measurement error over the weeks. We fit the chest radiograph model given by (1) to the first ten months of data for each ED. The parameter estimates for each city are summarized in Table . In the simulations described in this section we used the following experimental design. For each city, we fit the regression time series model using the first ten months of data (training data). We used the second half of the data, from November 1st, 2003 onwards, and added outbreak-related counts to test for the detection of an outbreak signature (m = 28 days). In the absence of other information, we set Pw = 0.4 and Pf = 0.80 (entry probability at the fulminant stage). 
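As a concrete illustration of how a chosen filter {al} combines with the whitening filter {ck,l}, the sketch below builds two simple difference filters and convolves them. The exact filter weights used in the study are not recoverable from this excerpt, so the coefficients shown are plausible examples in the spirit of Reis, et al., not the paper's definitions.

```python
import numpy as np

def one_day_filter(m=28):
    # current day minus an equal-weight average of the previous m - 1 days
    a = np.full(m, -1.0 / (m - 1))
    a[0] = 1.0
    return a

def seven_day_filter(m=28):
    # average of the 7 most recent days minus the average of the rest
    a = np.full(m, -1.0 / (m - 7))
    a[:7] = 1.0 / 7.0
    return a

def detection_filter(a, c):
    # f_{k,l}: convolution of the difference filter {a_l} with the
    # whitening filter {c_{k,l}} implied by the time series model
    return np.convolve(a, c)

def detection_process(residuals, f):
    # D_{k,t}: filtering of the residual series {X_{k,t}}
    return np.convolve(residuals, f, mode="valid")
```

Both example filters sum to zero, so a constant shift in the series leaves the detection process unchanged; only departures from the recent baseline register.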
The daily drop-out probability was set at 0.05. We assumed that, with probability 0.9, infected persons entering the ED for the first time during a given outbreak receive a chest radiograph, i.e., 10% of the infected persons are misdiagnosed at the first visit. In a subsequent visit, 5% of the infected persons that re-enter the ED are misdiagnosed. The misdiagnosis additive factor, C, was assumed to be 0.05. We added the counts of ED visits and the subsequent number of chest radiographs generated from the healthcare utilization model on each day during the outbreak period. Let {Ok,t} denote the number of chest radiographs attributable to the anthrax attack on day t. Since each simulated outbreak will be different, we start by summarizing the distribution of {Ok,t}. Figure shows quantiles of this distribution over time t. Examining the progression of the quantiles over time allows us to explore the center and tails of the extra radiograph count distribution. The counts increase rapidly in the first week, stay fairly constant (except for the spikes) for a week, and then slowly drop to zero after the second week. The small spikes are due to the different probabilities of entry in the prodromal stage. Although the shapes at each quantile are similar, the magnitude and duration of the extra radiograph counts differ. The counts drop to zero at different time points for the different quantiles: after 3 weeks for the 0.025 quantile, after 4 weeks for the 0.5 quantile, and after 7 weeks for the 0.975 quantile. These patterns in the observed time-varying distribution of the outbreak signature are a strong motivation not to use deterministic outbreak patterns, as some authors have, since deterministic outbreak patterns do not give a realistic assessment of detection methods. We simulated an outbreak, as previously described, 500 times on 500 individuals using an attack rate of 0.5. 
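The per-person healthcare-utilization mechanics can be sketched as below. The weekday prodromal entry probability Pd is not fixed in this excerpt, so the value 0.5 is an assumption, as is treating the final day as the fulminant stage; the other values (Pw = 0.4, Pf = 0.8, drop-out 0.05, C = 0.05, misdiagnosis rates 10% and 5%) come from the text.

```python
import random

def ed_entries(prodromal_days=7, p_d=0.5, p_w=0.4, p_f=0.8,
               p_drop=0.05, C=0.05):
    # Follow one infected person: prodromal days use weekday/weekend entry
    # probabilities; the final (fulminant) day uses the larger p_f.
    # A misdiagnosed person re-enters with probability boosted by C,
    # and can be misdiagnosed at most twice (10% then 5%).
    entries, boost = 0, 0.0
    misdiag_probs = [0.10, 0.05]
    for day in range(prodromal_days + 1):
        if random.random() < p_drop:        # drops out of the system
            break
        if day == prodromal_days:           # fulminant stage (assumed last day)
            p = p_f
        elif day % 7 >= 5:                  # weekend
            p = p_w
        else:                               # weekday
            p = p_d
        if random.random() < min(1.0, p + boost):
            entries += 1
            if entries <= 2 and random.random() < misdiag_probs[entries - 1]:
                boost += C                  # misdiagnosed: will try to re-enter
            else:
                break                       # correctly diagnosed, exits
    return entries

random.seed(1)
counts = [ed_entries() for _ in range(1000)]
```

Summing such per-person entry counts day by day over the infected cohort yields the extra-count process added to the background series.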
The first day of the outbreak was randomly picked from a uniform distribution over the period December 5th, 2003 to June 22nd, 2004. We declare that there is evidence of an outbreak signature in the radiograph data at time t if the detection process, Dk,t, exceeds a given threshold, \u03c4k,\u03b1. Suppose that {Yk,t} is the process defined as the sum of {Ok,t}, the extra radiograph counts attributable to the outbreak, and the radiograph process {Rk,t} that follows model (1). Removing the non-stationary part due to the estimated effect of the covariates yields a residual series. By applying the filter to this series we obtain the detection process that we would observe in the presence of extra counts attributable to an outbreak. To examine the effect of the filter upon the outbreak signature we examine the standardized quantity gk,t for the filters that we defined previously. We set \u03b1 = 0.03, which corresponds to a false alarm rate of one per month. Then, the threshold, \u03c4k,\u03b1, is chosen by solving P(Dk,t > \u03c4k,\u03b1) = \u03b1 for \u03c4k,\u03b1. Either we can estimate this value from the data using the 1 - \u03b1 quantile of the {Dk,t} process of non-outbreak-based training data, or via a normal approximation, given by (10). There were some differences between the values of \u03c4k,\u03b1 for the two methods in the simulations we studied. We chose the normal approximation method as it tended to preserve the specificity across the filters and EDs that we considered. Scaling gk,t by \u03c4k,0.03, and using the same simulated radiograph realizations due to the outbreak, allows us to compare the filtered signals consistently across filters, based on the model fit to the Akron ED series. The filtered series will be different for each simulated outbreak pattern realization. There are some similarities in the shapes of the three quantiles presented in the figures. We calculated the actual specificity for values of \u03b1 in the range 0.01 to 0.10, in steps of 0.005. Figure shows the departure of the actual specificity from the nominal value (1 - \u03b1) for different values of \u03b1. 
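Under the normal approximation, the threshold is the upper-\u03b1 quantile of the (approximately Gaussian) detection process. A minimal sketch using Python's standard library (the helper names are ours, and treating the whitened prediction errors as uncorrelated is a simplifying assumption):

```python
import math
from statistics import NormalDist

def detection_sd(innovation_sd, f):
    # approximate standard deviation of D_{k,t} when the whitened
    # one-step prediction errors are treated as uncorrelated
    return innovation_sd * math.sqrt(sum(w * w for w in f))

def threshold(alpha, sd):
    # tau_{k,alpha}: solve P(D_{k,t} > tau) = alpha under normality
    return sd * NormalDist().inv_cdf(1.0 - alpha)
```

With \u03b1 = 0.03, the threshold sits at about 1.88 standard deviations of the detection process, so smaller \u03b1 (fewer false alarms) pushes the threshold up and lowers sensitivity.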
Curves above or below the horizontal dotted line at zero indicate departures from the nominal value. The Chicago series preserves the nominal value of the specificity (curves are clustered around the zero line). For the Minneapolis-St. Paul series the actual specificity is biased downwards, and for the other cities the departure from the nominal value changes as \u03b1 increases. The choice of filter affected the calibration. To understand the tradeoff between the specificity and the sensitivity we show the receiver operating characteristic (ROC) curves for each city in Figure . We calculated the proportion of true non-detects in the absence of an outbreak signature, based on the test data (the specificity), as well as the proportion of true detects during the outbreak (the sensitivity), averaging over the 500 simulated anthrax outbreaks, for each filter and city; the results are reported in Tables . We now compare the performance of our model, as defined by equation (1), with three other models, in order to investigate the effect of different covariates and time series components upon outbreak signature detection. Recall that {Rk,t} is the number of chest radiographs for the ED in city k on day t, {Vk,t} is the ED visit count, and {Tk,t} is the thirty day smoothed time series of temperature. For each day, t, and day of the week, d = 1, ..., 7, let Dd,t be an indicator function that is one if that day is the dth day of the week, and zero otherwise. The four models we compare are: 1. Our covariates plus SARIMA errors model: Rk,t = \u03b2k,0 + \u03b2V,k Vk,t + \u03b2T,k Tk,t + Xk,t, where {Xk,t} is the SARIMA model used in the Results section. 2. Covariates with autoregressive moving average (ARMA) errors model: the same regression, with the day-of-the-week indicators Dd,t included as fixed effects (covariates) instead of a random seasonal component, and {Xk,t} an ARMA model (equation (4), without the \u03a6k and \u0398k terms) with orders pk = 1 and qk = 1. 3. Covariates and no time series errors: the same regression as Model 2, where {Xk,t} is a mean zero white noise process. 4. No covariates and ARMA errors: Rk,t = \u03b2k,0 + Xk,t, where {Xk,t} is the same ARMA model as in Model 2. Figure shows the departure of the actual specificity from the nominal value (1 - \u03b1) for different values of \u03b1 for these four models. For illustration we use the linear filter, which represents a compromise between the actual specificity and sensitivity. Except for Columbus, Model 1 achieved a specificity closer to the nominal value across all EDs (Model 3 outperforms Model 1 in Columbus). Except in Chicago, Model 4 had the largest magnitude of bias. In terms of sensitivity, except for Chicago, Model 3 tended to outperform Model 2 for all filters in the cities we studied. This is counter-intuitive, as we would expect that a model that includes the significant ARMA time series component (Model 2) should outperform one that does not contain any time series components (Model 3). In practice there is a tradeoff between estimation and prediction (Table ). Our intention in this study was to find a flexible set of statistical models that could be applied across a number of emergency departments. We employ time series models that include covariates, such as patient visit counts and ambient temperature, as well as random seasonal terms. We use chest radiograph ordering data from the emergency departments of five regional Midwest children's hospitals to detect signatures of respiratory outbreaks. We include the visit count series as a covariate in the chest radiograph model to account for variations due to, for example, ED size, changes of staff within the ED, and even some seasonalities across the time period of interest. We use the temperature series as a surrogate measure of the influenza season, the colder months in the western hemisphere. This is a more accurate measure of the influenza season than using a fixed covariate such as a sinusoid. 
To reflect uncertainty in the variation of the influenza background over seasons, these models allow for randomness in the seasonal components. The use of random seasonal components is an advantage over traditional fixed effect models, since temporal patterns are not assumed to repeat in precisely the same way. Thus, signature detection capabilities are improved for the majority of EDs: sensitivity is higher. For increased accuracy and timeliness, our model should represent one component of an integrated detection system. Once a signal is triggered by any of our models, we recommend the use of clinical follow-up to corroborate or refute the emergence of a bona fide epidemic. For example, radiographs and medical charts will need to be reviewed to identify highly anomalous findings, or groupings of aberrant findings, that would be expected to be present at early stages of outbreaks. We believe that the approach utilized in this work will aid in this process and is more appropriate than models using fixed periodicities, which do not have the ability to capture the underlying variability across seasons. Of note, our study shows that there are similarities in the chest radiograph series from different EDs that can, for the most part, be described by similar time series models. Similarities of the time series model across EDs have a number of ramifications for the detection of outbreak signatures. First, by borrowing information across the different EDs we can build more complicated multivariate time series models, possibly involving the joint modeling of chest radiograph and visit counts across locations. Second, we could use these models to jointly detect outbreak signatures across large spatial regions; in this context, Diggle et al. use spatio-temporal methods for surveillance. Our study has a number of limitations. We assume that the resulting series {Xk,t}, after accounting for these covariates, is stationary. We also ignore the fact that an individual may have multiple ED visits. Furthermore, these data do not contain any known anthrax outbreaks. Instead, outbreak patterns are simulated using a simple stochastic model. Although more care needs to be taken when summarizing simulation results, we agree with Buckeridge, et al. and Brookmeyer, et al. that stochastic outbreak models give a more realistic assessment of detection methods than deterministic ones. We present in this paper a stochastic model of chest radiograph ordering patterns and temperature as an adjunct to a biosurveillance system for detecting emerging respiratory-related epidemics, focusing on a potentially high-impact public health hazard such as inhalational anthrax. We show that in time series analysis of respiratory-related data it is important to capture the seasonal effects that are present in such data, as well as to consider the influence of important covariates that can be easily obtained and incorporated into the models. We also demonstrate the importance, in assessing the sensitivity and specificity of these methods, of utilizing more realistic stochastic, rather than deterministic, models for outbreak patterns. We demonstrate spatial homogeneity in chest radiograph data across EDs and suggest ways in which these observations may be used to improve regional biosurveillance for (re)emergent infections. Regardless of the choice of filter, our simulations demonstrate that the specificity calculated using the training data varied across the five EDs studied (Figure ). The author(s) declare that they have no competing interests. Dr. Bonsu provided the motivation, scientific background, and data for this work. Under supervision, N. Kim developed the preliminary simulation model for an anthrax attack. Dr. Craigmile worked on the filtering theory used in the paper, and together with N. Kim and Dr. Fernandez analyzed the data, planned and carried out the simulations, and wrote the paper. Dr. Craigmile took the lead with writing. 
All authors jointly edited this work. In city k, we define a SARIMA (pk, dk, qk) \u00d7 (Pk, Dk, Qk) model with period of seasonality s for the time series {Xk,t}, using the notation of Section 6.5 of Brockwell and Davis. Letting B denote the backshift operator, defined by B^r Zk,t = B^{r-1} Zk,t-1 for r > 0, with B^0 Zk,t = Zk,t, we define Xk,t by \u03c6k(B) \u03a6k(B^s) (1 - B)^{dk} (1 - B^s)^{Dk} Xk,t = \u03b8k(B) \u0398k(B^s) \u03b5k,t. In this model the parameter s denotes the period of the seasonality. The characteristic polynomials \u03c6k(z) and \u03b8k(z) define an autoregressive moving average process on the unit time scale, whereas the terms \u03a6k(z) and \u0398k(z) define an autoregressive moving average process on a time scale of s units. Thus we can model dependencies simultaneously on two different time scales. It is customary to restrict to the class of causal time series models, requiring \u03c6k(z) \u2260 0 and \u03a6k(z) \u2260 0 for complex valued z such that |z| \u2264 1. Let \u03b3X,k(\u00b7) denote the autocovariance function of the stationary error component {Xk,t}. Suppose that the radiograph series {Rk,t} follows model (1), and consider only the last m values of the process, {Rk,t-1, ..., Rk,t-m}. Given the model parameters, by linearity of the prediction operator, the best linear one-step predictor of Rk,t is obtained from the one-step-ahead prediction for the SARIMA process, with coefficients {bk,1, ..., bk,m} obtained from the solution of the set of m linear equations \u2211_{j=1}^{m} bk,j \u03b3X,k(i - j) = \u03b3X,k(i), for i = 1, ..., m. The one-step prediction error process, {Ek,t}, is Ek,t = \u2211_{j=0}^{m} ck,j Xk,t-j, where we define ck,0 = 1 and ck,j = -bk,j for j = 1, ..., m. The process {Ek,t} is a filtering of {Xk,t}, and so if {Xk,t} is stationary then, by the linear time invariant filtering result for stationary processes, {Ek,t} is also a stationary process. The specificity, 1 - \u03b1, is the probability of a true negative; similarly, the sensitivity is defined to be the probability of a true positive. The threshold, \u03c4k,\u03b1, is chosen by solving P(Dk,t > \u03c4k,\u03b1) = \u03b1 for \u03c4k,\u03b1. 
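The m prediction equations can be solved numerically. The sketch below recovers the one-step coefficients {bk,j} from an autocovariance sequence and forms the whitening filter {ck,j}; the helper names are ours.

```python
import numpy as np

def one_step_coeffs(gamma, m):
    # Solve sum_j b_j * gamma(|i - j|) = gamma(i), i = 1..m, for the
    # best linear one-step predictor based on the last m values.
    # gamma[h] is the autocovariance at lag h (h = 0..m).
    G = np.array([[gamma[abs(i - j)] for j in range(m)] for i in range(m)])
    rhs = np.array([gamma[i] for i in range(1, m + 1)])
    return np.linalg.solve(G, rhs)

def whitening_filter(b):
    # c_0 = 1, c_j = -b_j: applying {c_j} to {X_t} yields the one-step
    # prediction errors {E_t}
    return np.concatenate(([1.0], -np.asarray(b)))
```

As a sanity check, for an AR(1) autocovariance gamma(h) proportional to phi^h, the solution puts all weight on the most recent value (b_1 = phi, the rest zero), as the Markov structure requires.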
Either we can estimate this value from the data using the 1 - \u03b1 quantile of the {Dk,t} process of non-outbreak-based training data, or, if {Xk,t} is Gaussian, then for large m the detection process is approximately normally distributed, so that P(Dk,t > \u03c4k,\u03b1) \u2248 P(Z > \u03c4k,\u03b1/sd(Dk,t)), where Z is a standard normal random variable. We declare evidence of an outbreak signature (test positive) at time t if Dk,t exceeds the threshold. The solution for the threshold under this approximation is \u03c4k,\u03b1 = sd(Dk,t) \u03a6^{-1}(1 - \u03b1), where \u03a6^{-1}(\u00b7) denotes the inverse cumulative distribution function for a standard normal random variable. The filter-based detection method requires knowledge of the \"true\" values of the model parameters. In our work, we replace the parameters with their maximum likelihood estimates.The pre-publication history for this paper can be accessed here:"}
+{"text": "The use of DNA microarrays opens up the possibility of measuring the expression levels of thousands of genes simultaneously under different conditions. Time-course experiments allow researchers to study the dynamics of gene interactions. The inference of genetic networks from such measures can give important insights for the understanding of a variety of biological problems. Most of the existing methods for genetic network reconstruction require many experimental data points, or can only be applied to the reconstruction of small subnetworks. Here we present a method that reduces the dimensionality of the dataset and then extracts the significant dynamic correlations among genes. The method requires a number of points achievable in common time-course experiments."}
+{"text": "Our task was to learn the networks from two types of data, namely gene expression profiles in deletion strains (the \u2018deletion data\u2019) and time series trajectories of gene expression after some initial perturbation (the \u2018perturbation data\u2019). In the course of developing the prediction method, we observed that the two types of data contained different and complementary information about the underlying network. In particular, deletion data allow for the detection of direct regulatory activities with strong responses upon the deletion of the regulator, while perturbation data provide richer information for the identification of weaker and more complex types of regulation. We applied different techniques to learn the regulation from the two types of data. For deletion data, we learned a noise model to distinguish real signals from random fluctuations using an iterative method. For perturbation data, we used differential equations to model the change of expression levels of a gene along the trajectories due to the regulation of other genes. We tried different models, and combined their predictions. The final predictions were obtained by merging the results from the two types of data. A comparison with the actual regulatory networks suggests that our approach is effective for networks with a range of different sizes. The success of the approach demonstrates the importance of integrating heterogeneous data in network reconstruction. We performed computational reconstruction of the in silico networks of the DREAM3 challenge. The expression of genes is tightly controlled by the regulatory machinery in the cell. A major part of this machinery involves regulator proteins such as transcription factors (TFs). Transcription regulation can be modeled as a directed network with each node representing a gene and the proteins that it encodes, and an edge from one node to another if the former is a regulator of the latter. 
In addition to the directionality, the edges are also signed, with a positive sign indicating a positive regulation (activation) and a negative sign indicating a negative regulation (suppression). Methods have been proposed for computationally reconstructing regulatory networks. One common approach is to use differential equations to model how the expression levels of genes change according to the abundance of their regulator proteins over time. In the resulting dataset, each data point measures the expression level of a gene in a specific condition at a certain time point. Each such observed value is determined by a mixture of different factors, including the previous expression level of the gene, the activity of its regulators, decay of mRNA transcripts, randomness, and measurement errors. The many entangled parameters make it difficult to reconstruct the regulatory network based on this type of data alone. To decode this kind of complex system, one strategy is to reduce it to a series of subsystems with manageable sizes by keeping the values of most parameters constant and varying only a small number of them. Genes can be deleted (knocked out) by mutagenesis or knocked down by RNA interference (RNAi). Thanks to the creation of large-scale deletion libraries, sophisticated computational methods have been developed in previous studies to use deletion data to infer regulatory networks. For example, Bayesian approaches have been used to model biological pathways and the effects of gene deletion. While deletion data are good for detecting simple, direct regulatory events, they may not be sufficient for decoding those that are more complicated. For example, if a gene is up-regulated by two TFs in the form of an OR circuit, so that the gene is expressed as long as one of the TFs is active, these edges in the regulatory network cannot be uncovered by single-gene deletion data. In such a scenario, traditional time course data could supplement the deletion data in detecting the missing edges. 
For instance, if at a certain time point both the TFs have a low abundance and the expression rate of the gene is observed to be impaired, this observation could potentially help reconstruct the OR circuit. As another example, if a regulator is normally not expressed, deleting its gene would not cause an observable effect on the expression of other genes. Yet if in a certain perturbation the expression of the regulator is induced by the external stimuli, its regulation of other genes could be detected. Therefore, the two types of data are complementary in reconstructing regulatory networks. In this study we demonstrate how they can be used in combination to improve network reconstruction. We first propose methods for predicting regulatory edges from each type of data, and then describe a meta-method for combining their predictions. Using a set of fifteen benchmark datasets, we show the effectiveness of our approach, which led our team to get the first place in the public challenge of the third Dialogue for Reverse Engineering Assessments and Methods (DREAM). We first formally define our problem of reconstructing regulatory networks. The target network is a directed network whose nodes are the genes. We use two types of data features: perturbation time series data and deletion data. Deletion data are further sub-divided into homozygous deletion and heterozygous deletion. In a perturbation time series dataset, an initial perturbation is performed at time 0, which sets the expression levels of each gene to a certain level. Then the regulatory system is allowed to adjust the internal state of the cell by up- and down-regulating genes according to the abundance of the TFs. The expression level of each gene is taken at subsequent time points. Thus, for each perturbation experiment, each gene is associated with a vector of real numbers that correspond to its expression level at different time points after the initial perturbation. 
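As a toy illustration of the differential-equation idea applied to such trajectories (not the exact models we fit), one can regress finite-difference derivative estimates on the expression levels; a large coefficient magnitude then suggests regulation, and its sign the regulation type. All names below are ours.

```python
import numpy as np

def fit_linear_ode(X, dt=1.0, ridge=1e-6):
    # Fit dx_i/dt ~ sum_j W[i, j] * x_j by ridge-regularized least
    # squares, using finite differences as derivative estimates.
    # X has shape (timepoints, genes); returns W with W[i, j] the
    # estimated effect of gene j on gene i.
    dX = np.diff(X, axis=0) / dt
    A = X[:-1]
    W = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ dX)
    return W.T
```

On a simulated two-gene linear system, the fit recovers a positive coefficient for an activating edge and a negative self-term for decay, which is the qualitative behavior we exploit when ranking candidate regulators.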
In a deletion dataset, a gene is deleted, and the resulting expression level of each gene at steady state is measured. By deleting each gene one by one, and adding the wild-type (no deletion) as control, each gene is associated with a vector of expression values, one for each deletion strain and one for the wild-type. We assume that both types of deletion data, as well as perturbation data, are available, although it is trivial to modify our algorithm by simply removing the corresponding subroutines if any type of data is missing. Our basic strategy is to learn the simple regulation cases from deletion data by using noise models, and to learn the more complex ones from perturbation data using differential equation models. We first describe the two kinds of models and how we learn the parameter values from data, then discuss our way to combine the two lists of predicted edges into a final list of predictions. We consider a simple noise model for deletion data: each data point is the superposition of the real signal and a reasonably small Gaussian noise independent of the gene and the time point. The Gaussian noise models the random nature of the biological system and the measurement error. 
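The noise model just described can be turned into a simple score: standardize the observed change against the noise level and take the two-sided Gaussian tail. This is a simplified, non-iterative sketch of the idea (in practice we estimated the noise level iteratively).

```python
from statistics import NormalDist

def regulation_prob(wild_type, deleted, sigma):
    # Both measurements carry independent N(0, sigma^2) noise, so their
    # difference has standard deviation sigma * sqrt(2).  Return the
    # probability that the observed change is not pure noise
    # (two-sided): 2 * Phi(|z|) - 1.
    z = abs(deleted - wild_type) / (sigma * 2 ** 0.5)
    return 2.0 * NormalDist().cdf(z) - 1.0
```

A change many noise standard deviations wide scores near 1, while a change comparable to the noise scores near 0.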
Based on this model, the larger the change in the expression of a gene upon the deletion of another gene, relative to the noise level, the more likely it is that the latter gene regulates the former. Notice that the regulation detected this way could be direct or indirect. We combined the predictions from the different models into a final ranked list in batches. Batch 1: all predictions with a high probability of regulation according to the noise model learned from homozygous deletion data. Batch 2: all predictions with an objective score two standard deviations below the average according to all types of differential equation models learned from perturbation data. Batch 3: all predictions with an objective score two standard deviations below the average according to all types of guided differential equation models learned from perturbation data, where the regulator sets contain regulators predicted in the previous batches, plus one extra potential regulator. Batch 4: as in batch 2, but requiring the predictions to be made by only one type of the differential equation models as opposed to all of them. Batch 5: as in batch 3, but requiring the predictions to be made by only one type of the differential equation models as opposed to all of them. Batch 6: all predictions with a probability of regulation larger than 0.95 according to both the noise models learned from homozygous and heterozygous deletion data, and having the same edge sign predicted by both models. Batch 7: all remaining gene pairs, with their ranks within the batch determined by their probability of regulation according to the noise model learned from homozygous deletion data. In general, we put the greatest confidence in the noise model learned from homozygous deletion data, as the signals from this kind of data are the clearest among the three types of data. We are also more confident in predictions that are consistently made, either by the different types of differential equation models (batches 2 and 3 over batches 4 and 5) or by the noise models learned from homozygous and heterozygous deletion data (batch 6). The benchmark datasets are in silico regulatory networks provided by Marbach et al. 
We used the algorithm described above to take part in the third Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM3). The predictions are compared against the actual edges in the networks by the DREAM organizer using four different metrics for evaluating the accuracy: AUPR, the area under the precision-recall curve; AUROC, the area under the receiver-operator characteristics curve; pAUPR, the p-value of AUPR based on the distribution of AUPR values in 100,000 random network link permutations; and pAUROC, the p-value of AUROC based on the distribution of AUROC values in 100,000 random network link permutations. While the statistics related to the ROC curve are commonly used to evaluate prediction results, those related to the PR curve can be more sensitive when the negative set is much larger than the positive set. These metrics are further aggregated into an overall p-value for each size, using the geometric mean of the five p-values from the five networks of that size, and finally into an overall score. In the evaluation by the DREAM organizer, edge signs (activation vs. suppression) are not considered. We note that our algorithm can actually detect edge signs. In the noise model, a regulation is determined to be an activation if the resulting expression is higher than the estimated wild-type expression, and a suppression otherwise. In the differential equation models, the sign of the fitted coefficient of a regulator indicates activation or suppression. The challenge of size 10 attracted 29 participating teams, the one of size 50 had 27 teams, and the one of size 100 had 22 teams. The large number of participants makes the challenge currently the largest benchmark for gene network reverse engineering. Our algorithm ended up in first place on all three network sizes. The complete set of performance scores for all teams can be found at the DREAM3 web site. We notice that in some cases our first predictions are already very close to the actual network. 
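AUROC has a convenient rank interpretation: it is the probability that a randomly chosen true edge is scored above a randomly chosen non-edge. A small self-contained sketch (names ours):

```python
def auroc(scores, labels):
    # Mann-Whitney identity: count pairwise wins of true edges (label 1)
    # over non-edges (label 0), with ties counted as half wins.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0, a reversed ranking 0.0, and a random ranking about 0.5; this is why AUROC alone can look flattering on sparse networks, motivating the PR-based metrics as well.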
The overall scores include 5.124 and 39.828. As hypothesized, the noise models learned from homozygous deletion data made very accurate predictions. In many cases, most actual edges were already predicted correctly in batch 1. Also, if an actual edge is not predicted in batch 1, it is likely missed by the subsequent batches as well. For instance, of the 173 actual edges in the Yeast3-size50 network, 100 are detected in batch 1, and among the remaining 73, only 21 are detected in batches 2 to 6. While the above results suggest the importance of the noise models learned from homozygous data, it is still not clear whether these models are indeed more effective than the other models. It could still be the case that the other models could make the same predictions as those in batch 1; it is just that, because these predictions had already been covered in batch 1, subsequent batches were not allowed to make them again. To verify whether this was the case, we swapped the order of the first two batches for the size 10 networks, so that the first batch is composed of predictions made by the differential equation models and the second batch is composed of predictions made by the noise model learned from homozygous deletion data and not covered by the first batch. This comparison reveals two interesting observations. First, as the noise models learned from deletion data gave higher accuracy than the differential equation models, our decision to use the former to make the first batch of predictions is justified. Second, while the differential equation models had lower accuracy, they made some small contributions to the prediction accuracy, as they made some unique correct predictions that were missed by the noise models. 
As discussed, these are probably indirect or more complex regulation events. To evaluate quantitatively the importance of the differential equation models, we use the hypergeometric distribution to compute the probability of having at least the observed number of correctly predicted regulation events in batches 2\u20136 by chance, given the total number of predictions in these batches. For example, for the Ecoli1-Size10 network, we compute the probability of having 3 correct predictions (in batches 2\u20136) out of the 4 missed by batch 1, when making 16 predictions out of 89 node pairs. Overall, in about half of the cases, the predictions made in batches 2\u20136 are significantly better than random at the 0.05 level. We observe that for networks with a large portion of real edges missed by batch 1 (such as Yeast3-Size100), the predictions of batches 2\u20136 are more significant. Our results thus suggest that the two types of models, based on two different types of data, are potentially capable of complementing each other and making some orthogonal contributions to the overall predictions. A second example is related to the Ecoli2-size10 network. We have also briefly studied whether the differential equation models can be improved by considering pairs of potential regulators instead of a single regulator at a time. For the five size-10 networks, we use the same algorithm as before, except that in batches 2\u20136 each model involves two potential regulators. The resulting AUC values for Ecoli1, Ecoli2, Yeast1, Yeast2 and Yeast3 are 0.887, 0.913, 0.943, 0.697 and 0.655, respectively. Our prediction results demonstrate the advantage of combining multiple types of data. While the perturbation data allow the learning of differential equation models that could capture complex interactions in the regulatory network, deletion data also facilitate the detection of some simple interactions using only very basic noise models. 
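The tail probability described above (e.g., at least 3 correct among the 4 edges missed by batch 1, when making 16 predictions out of 89 remaining node pairs) can be computed directly from the hypergeometric distribution:

```python
from math import comb

def hypergeom_tail(k, N, K, n):
    # P(X >= k) for X ~ Hypergeometric(N, K, n): drawing n predictions
    # from N candidate pairs of which K are true edges.
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

p = hypergeom_tail(3, 89, 4, 16)   # the Ecoli1-Size10 example above
```

A small value indicates that the extra batches contribute genuinely correct predictions rather than lucky guesses.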
As technological advancements are made rapidly, new data types are expected to appear from time to time. For method developers who try to improve existing prediction methods, besides deriving more advanced algorithms using the same data, it is also rewarding to investigate what kinds of information emerging data could provide, and how such information can be extracted to supplement existing methods. As mentioned earlier, in this study we did not attempt to address the issue of indirect regulation. Indeed we observed that indirect regulation is one of the factors that confounded our method and caused it to make some wrong predictions. We expect that in a complete network with thousands of nodes, long regulation chains are prevalent and the problem of indirect regulation would be more serious. It is therefore interesting to see whether filtering indirect regulation (for example by some existing techniques) could improve the predictions. In some previous work, more sophisticated noise models allowing for gene-specific and experiment-specific errors are proposed, with the aid of extra control experiments. In this study, we adopt an unsupervised learning setting, in compliance with the setup of the DREAM3 challenge. For organisms with some known regulation edges as domain knowledge, these edges can be used as training examples to train a supervised learner, or be used to transform the existing method into a semi-supervised one. One issue that we have not touched on is the computational cost. Using a high-end cluster, our predictions for networks of size 10, 50 and 100 took about 2 minutes, 13 hours, and 78 hours, respectively. While there is room for optimizing our code, fitting the differential equation models intrinsically requires a lot of computational power. Given that most correct predictions are made by the noise models, which only took a tiny portion of the computational time, it is possible, when working on complete networks, to trade off some accuracy for a much shorter running time. 
Alternatively, since a lot of the models are learned independently of each other, it is fairly straightforward to parallelize the computation and reduce the total running time by adding in extra machines."}
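The significance computation described in this record is a standard hypergeometric tail test. As an illustration (a minimal sketch, not the authors' code), the Ecoli1-Size10 example can be reproduced with only the Python standard library:

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance that, when n
    node pairs are predicted out of N, at least k of the K real edges are
    hit by luck alone."""
    total = comb(N, n)
    upper = min(K, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, upper + 1)) / total

# Ecoli1-Size10 example from the text: batches 2-6 recover 3 of the 4 edges
# missed by batch 1 while making 16 predictions over 89 remaining node pairs.
p = hypergeom_tail(N=89, K=4, n=16, k=3)
```

A value of `p` below 0.05 indicates that the batch 2\u20136 predictions beat random guessing at the significance level used in the text.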
+{"text": "We propose a Bayesian procedure to cluster temporal gene expression microarray profiles, based on a mixed-effect smoothing-spline model, and design a Gibbs sampler to sample from the desired posterior distribution. Our method can determine the cluster number automatically based on the Bayesian information criterion, and handle missing data easily. When applied to a microarray dataset on the budding yeast, our clustering algorithm provides biologically meaningful gene clusters according to a functional enrichment analysis. Microarray technology enables the scientist to measure the mRNA expression levels of thousands of genes simultaneously. For a particular species of interest, one can make microarray measurements under many different conditions and for different types of cells (if it is a multicellular organism). Genes' expression profiles under these conditions often give the scientist some clues on the biological roles of these genes. A group of genes with similar profiles are often \u201ccoregulated\u201d or participants of the same biological functions. When a series of microarray experiments are conducted sequentially during a biological process, we call the resulting dataset a \u201ctemporal\u201d microarray dataset, which can provide insights on the underlying biology and help decipher the dynamic gene regulatory network. Clustering genes with similar temporal profiles is a crucial first step to reveal potential relationships among the genes. Conventional clustering methods, such as the K-means and hierarchical clustering, do not take into consideration the correlation in the gene expression levels over time. Although it is possible to use a general multivariate Gaussian model to account for the correlation structure, such a model ignores the time order of the gene expressions. As evidenced in our example, the time factor is important in interpreting the results of gene expression clustering in temporal data. 
It is also possible to use an autoregression model to describe the gene expression time series, but such a model often requires stationarity, which is unlikely to hold in most temporal microarray data. Recently, nonparametric analysis of data in the form of curves, that is, functional data, has been the subject of active research. In this paper, we propose a Bayesian clustering method, which optimally combines the available information and provides a proper uncertainty measure for all estimated quantities. Our method is based on a mixture of mixed-effect smoothing-spline models. For each cluster, we model its mean profile as a smoothing spline function and describe its individual gene's variation by a parametric random effect. Based on the theory of reproducing-kernel Hilbert spaces, we represent the mean profile with a finite number of basis functions. Our method is not restricted to temporal microarray data, and can be applied to all curve clustering problems, especially for sparsely and irregularly sampled temporal data. Let the expression value of the ith gene at time t be yit. To accommodate missing data that occasionally occur in microarray experiments, we denote ti = (ti1, \u2026, tini)T and yi = (yi(ti1), \u2026, yi(tini))T, where ni is the number of measurements of the ith gene. Our mixed-effect smoothing spline model is yi = \u03bc(ti) + Zibi + \u03f5i, where \u03bc(ti) = (\u03bc(ti1), \u2026, \u03bc(tini))T is the cluster's mean profile, bi \u223c N(0, B) is the random effect to capture the intragene correlation, Zi is the known design matrix for the random effect, and \u03f5i \u223c N(0, \u03c32I) is the random error independent of the bi and of each other. By taking different forms of the Zi and bi vectors, we can accommodate different nonrandom effects. For example, when bi = bi0 and Zi = 1, the expression profile of the ith gene is parallel to the mean profile \u03bc; when bi = (bi0, bi1)T and Zi = (1, ti), the difference between the ith gene profile and the mean profile is a linear function in time. More complicated structures such as periodicity can be modeled by letting the Zi be the basis of a certain functional space. If \u03bc lies in a reproducing kernel Hilbert space \u210b \u2286 {\u03bc : M(\u03bc) < \u221e} in which M(\u03bc) is a square seminorm, we can represent \u03bc as \u03bc(t) = \u03a3\u03bd d\u03bd\u03d5\u03bd(t) + \u03a3s csRM(t, ts*), where {ts*} is a set consisting of all distinct {ti}, q is the number of {ts*}, and RM is the kernel of \u210b. The choice of M(\u03bc) = \u222b0a (d2\u03bc/dt2)2dt yields the cubic smoothing spline, whose kernel involves the truncated power (x)+ = max(x, 0). Let Si be the ni \u00d7 m matrix with (j, \u03bd)th entry \u03d5\u03bd(tij) and Ri be the ni \u00d7 q matrix with (j, s)th entry RM(tij, ts*). Substituting \u03bc(ti) = Sid + Ric and stacking y, S, R, Z, and \u03f5 similarly, we have the matrix representation y = Sd + Rc + Zb + \u03f5, with b \u223c N(0, B). The prior distributions for d, b, c, and \u03c32 lead to full conditional posteriors, used in our Gibbs sampler, with covariance matrices Vd, Vb, and Vc = (RTR/\u03c32 + I/\u03c42)\u22121, and residual sum of squares SSR = (y \u2212 Sd \u2212 Rc \u2212 Zb)T(y \u2212 Sd \u2212 Rc \u2212 Zb). Marginally, the expression profile of the ith gene has a Gaussian mixture distribution, in which \u03bck and \u03a3k = ZBkZT + \u03c32I are the mean and covariance matrix of the kth component. To ease the computation, we introduce a \u201clatent\u201d membership labeling variable. Since K is unknown, we used the Bayesian information criterion (BIC), BIC = \u22122 log L(\u03b8K) + lK log n, where MK is the current model with parameters \u03b8K and lK is the total number of parameters in our model. A small BIC score indicates the adequacy of the corresponding model. An alternative to our current approach is to use a Polya urn prior, which postulates that when a new member comes in, its a priori probability for joining an existing cluster of size mi is mi/(m + c), and for forming a new cluster of its own is c/(m + c), where m is the total number of existing members. This prior, however, favors unbalanced cluster configurations and may not be appropriate in our applications. To complete our Bayesian analysis, we employ the Dirichlet prior Di(\u03b11, \u2026, \u03b1K) for (p1, \u2026, pK), the cluster proportions. Thus, given the cluster indicator J, the posterior distribution of the p's is again a Dirichlet distribution. Given the initial values of dk, bk, ck, and Bk, where k = 1, \u2026, K, as well as \u03c32, we iterate the following conditional sampling steps: for i = 1, \u2026, n, draw a new membership label Ji from its conditional distribution; draw (p1, \u2026, pK) \u223c Di(\u03b11 + n1, \u2026, \u03b1K + nK), where nj is the number of genes in the jth cluster; and update the remaining parameters from their full conditionals. We analyzed a budding-yeast microarray dataset collected under aerobic condition in galactose medium. Under aerobic condition, the oxygen concentration was lowered gradually until oxygen was exhausted during a period of ten minutes. Microarray experiments were conducted at 14 time points under aerobic condition. A reference sample was obtained from a pooled RNA collected from all time points for hybridization. To study the oxygen-responsive gene network, Lai et al. used cDNA microarrays, and for the analysis they normalized the data. P-values based on hypergeometric distributions are reported. We found that 23 of the 31 clusters discovered have over-represented biological functions. Among them, estimated mean gene expression profiles of three clusters are given in the figures. In cluster A, which consists of 40 genes, the estimated mean expression goes up progressively as the oxygen level goes down, which suggests that the genes in this cluster were transiently upregulated in response to anaerobiosis. Accordingly, genes involved in stress response (P-value = 10\u22124) as well as cell rescue and defense are over-represented in this cluster. Furthermore, genes involved in the molecular functions of oxidoreductase and coproporphyrinogen oxidase are also present, which explains the upregulation of the gene expression levels. Moreover, ribosome biogenesis genes are also over-represented. 
These processes were affected by the oxygen level initially, but were quickly adjusted to high expression levels to keep the yeast alive. We have 92 genes in cluster B, where the estimated mean gene expression drops rapidly at the beginning and then goes up gradually. In this cluster, 34 genes are involved in protein synthesis, carbon utilization, and carbohydrate metabolism. The initial upregulation of gene expression under aerobic condition can be partly explained by the fact that the cell increases its energy uptake through carbon utilization as the oxygen level goes down; but as the oxygen level continues to drop, these processes are replaced by more energy-efficient processes, which drives the expression levels of these genes to be downregulated. In contrast to cluster B, cluster C (68 genes) consists of genes involved in galactose fermentation (function enrichment analysis). Conventional clustering methods do not take into consideration the correlation in the gene expression levels over time. Multivariate Gaussian models and time series analysis cannot model the time factor and correlation properly. These limitations can be readily overcome by the full Bayesian approach developed here. Although certain prior distributions and the related hyperparameters need to be input by the user, we found the clustering results rather robust to variations in such inputs. Moreover, our Bayesian clustering algorithm serves as a platform to incorporate more biological knowledge. Open source R code is available at www.stat.uiuc.edu/~pingma/BayesianFDAClust.htm."}
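The BIC-based choice of the cluster number can be illustrated with a toy sketch. The snippet below is not the paper's Gibbs sampler: it substitutes a simple hard-assignment clustering (farthest-point seeding plus k-means iterations) with i.i.d. Gaussian errors, just to show how the BIC penalty trades model fit against the number of parameters when selecting K.

```python
import math
import random

random.seed(0)

def cluster_means(profiles, K, iters=30):
    """Hard-assignment stand-in for the Gibbs sampler: estimate K mean
    time profiles (farthest-point seeding, then k-means iterations)."""
    centers = [profiles[0]]
    while len(centers) < K:
        centers.append(max(profiles, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(K)]
        for p in profiles:
            j = min(range(K), key=lambda k: sum(
                (a - b) ** 2 for a, b in zip(p, centers[k])))
            groups[j].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else c
                   for g, c in zip(groups, centers)]
    return centers

def bic(profiles, centers):
    """BIC = -2 * max log-likelihood + (#parameters) * log(#observations),
    assuming i.i.d. N(mu_k, sigma^2) errors around each cluster mean."""
    n, T = len(profiles), len(profiles[0])
    sse = sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in profiles)
    sigma2 = max(sse / (n * T), 1e-12)
    loglik = -0.5 * n * T * (math.log(2 * math.pi * sigma2) + 1)
    n_params = len(centers) * T + 1
    return -2 * loglik + n_params * math.log(n * T)

# toy data: 20 rising and 20 falling noisy profiles over 6 time points
profiles = [[t / 5 + random.gauss(0, 0.05) for t in range(6)] for _ in range(20)] \
         + [[1 - t / 5 + random.gauss(0, 0.05) for t in range(6)] for _ in range(20)]
scores = {K: bic(profiles, cluster_means(profiles, K)) for K in (1, 2, 3, 4)}
best_K = min(scores, key=scores.get)
```

The smallest BIC score picks the cluster number; with two well-separated groups the extra-parameter penalty makes K beyond 2 unattractive.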
+{"text": "Using yeast cell-cycle time-series gene expression data, we demonstrate that the predictive power of lead-lag R2 for the identification of co-regulated genes is significantly higher than that of standard similarity measures, thus allowing the selection of a large number of entirely new putatively co-regulated genes. Furthermore, the lead-lag metric can also be used to uncover the relationship between gene expression time-series and the dynamics of formation of multiple protein complexes. Remarkably, we found a high lead-lag R2 value among genes coding for a transient complex. Current methods for the identification of putatively co-regulated genes directly from gene expression time profiles are based on the similarity of the time profile. Such association metrics, despite their central role in gene network inference and machine learning, have largely ignored the impact of dynamics or variation in mRNA stability. Here we introduce a simple, but powerful, new similarity metric called lead-lag R2. We used the lead-lag R2 similarity measure to predict the presence of common transcription factors between gene pairs using an integrated dataset consisting of 13 yeast cell-cycles. The method was benchmarked against six well-established similarity measures and obtained the best true positive rate result, around 95%. We believe that the lead-lag analysis can be successfully used also to predict the presence of a common mechanism able to modulate the degradation rate of specific transcripts. Finally, we envisage the possibility to extend our analysis to different experimental conditions and organisms, thus providing a simple off-the-shelf computational tool to support the understanding of the transcriptional and post-transcriptional regulation layer and its role in many diseases, such as cancer. Microarrays provide snapshots of the transcriptional state of the cell at some point in time. 
Multiple snapshots can be taken sequentially in time, thus providing insight into the dynamics of change. Since genome-wide expression data report on the abundance of mRNA, not on the underlying activity of genes, we developed a novel method to relate the expression patterns of genes, detected in a time-series experiment, using a similarity measure that incorporates mRNA decay, called lead-lag R2. Gene expression is a highly regulated process composed of two fundamental biological events: synthesis and degradation. Transcription regulation is achieved by modulating the frequency of transcription initiation and, although the most studied, this event represents just the first of the many complex stages leading to a mature mRNA. Recent experimental work is beginning to shed light on the complex architecture underlying mRNA degradation pathways by identifying the factors and enzymes involved. Therefore, it is now widely accepted that the contribution of mRNA decay to the control of gene expression is not simply a biological waste-disposal system, but a key player in the temporal coordination of cellular functions. Moreover, a number of highly complex and sophisticated specific mechanisms have been identified. Co-expression is considered a good indication for co-regulation, and the standard association measure is the R2 value obtained from a linear regression model between two given gene expression time profiles denoted by Am(t) and Bm(t). Their co-varying degree is therefore measured as the fraction of the total variance explained by the regression Am(t)\u200a=\u200ac1Bm(t)+c2. Such coefficient, indicated in this paper as the simultaneous R2 of the corresponding gene pair, is the square of the Pearson correlation and takes values between 0 and 1. Recent technologies, such as microarrays, are able to provide measurements of mRNA abundance over time under different experimental conditions. 
In order to decipher the intricate regulatory network underlying the highly coordinated cell behavior, effective computational methods have been developed to take advantage of gene expression data. The basic idea underlying such methods stems from the experimental observation that genes are organized in groups showing similar time profiles. In order to infer the gene regulatory network, several laboratories have combined microarray data with protein-DNA interaction data, taking advantage of ChIP-on-chip experiments. The term \u201clead-lag\u201d has been taken from the field of control systems engineering, where the same relationship holds between the input and the output of the so-called \u201clead-lag compensator\u201d, which is the fundamental building block for the design of automatic control systems. The predictive power of the lead-lag R2 for the identification of co-regulated genes is significantly higher than that of standard similarity measures, allowing the selection of a large number of entirely new putatively co-regulated genes. Furthermore, the lead-lag metric can also be used to uncover the relationship between gene expression time-series and the formation of protein complexes. Here we focus on the development of a novel computational tool aiming to uncover co-regulated genes through transcriptional and post-transcriptional regulatory mechanisms. To this purpose, starting from the computational approach developed by Farina et al., we consider the case of two genes sharing a common transcriptional signal of different strength (i.e. different affinities of the transcription factor to promoter regions). A computer simulation of this situation is depicted in the figures: the R2 is maximal (R2\u200a=\u200a1). Indeed, the \u201cconverse\u201d situation is very different: the R2 will not be maximal (R2<1), as can be seen from differences in the normalized profiles. 
An example of such behavior can be seen by comparing the corresponding time profiles. Another important issue is that the differences of transcription rate regulation with respect to degradation rate regulation cannot be clearly seen by simply looking at the long term behavior of the response. Such \u201closs of correlation\u201d phenomenon due to differential stability regulation can be further understood by considering time-varying rates, resulting in a transient mRNA time profile. The scenario depicted above naturally leads to the possibility that co-regulation may involve both the transcriptional and post-transcriptional machinery. Therefore, a large variety of temporal profiles can be obtained by combining any of those mechanisms; we call the resulting coordinated relationship a lead-lag relationship. Such terminology is borrowed from the field of system and control engineering, where it refers to the basic building block for the realization of a regulatory device able to provide optimal properties to a given process, called a \u201clead-lag compensator\u201d. We accordingly define the lead-lag R2, able to incorporate in a single parameter such relationship and consequently potentially enhancing the predictive power of gene expression analysis for the identification of putatively co-regulated genes. In fact, we aim to study here the possibility that a high lead-lag R2 between expression time profiles of two given genes is a good indication for the presence of a common regulation mechanism. In this paper we consider a novel relationship between gene expression time profiles which includes also the possible presence of mRNA stability variations as a further mechanism to modulate transcript abundance over time. The lead-lag R2 is quantitatively defined by a linear multiple regression model among the two given gene expression time profiles Am(t) and Bm(t) and the areas under the curves until time t: the lead-lag R2 is the fraction of the total variance explained by this multiple regression model. 
Such coefficient is computed directly from at least 6 time points of gene expression data and takes values between 0 and 1. The rationale behind such new relationship stems from a simple mathematical model conceived to capture, from gene expression time series data, those genes which are co-regulated at the transcriptional level having an equal or different mRNA stability. The simultaneous relationship is a particular lead-lag relationship (with the additional regression coefficients set to zero), so that the magnitude of the lead-lag R2 is always larger than or equal to that of the simultaneous R2. In the following we will show that the magnitude of the increase from simultaneous R2 to lead-lag R2 is specific for each gene pair and that it is statistically correlated both to the presence of a common transcriptional signal and to differences between the half-lives. More details of the lead-lag R2 and its numerical computation are given below. It is worth noting that a lead-lag relationship is compatible with equal or different transcript stabilities. Consequently, we postulated that two given genes showing a lead-lag relationship are likely to be regulated by common transcription factors. To test this hypothesis, we selected a list of 1159 genes indicated as cell-cycle regulated in at least one out of six yeast genome-wide studies. The MacIsaac et al. dataset consists of a list of targets for 203 TFs using different conservative criteria. Among those available 203 TFs, we selected a p-value for binding of 0.001, obtaining a list of 3107 genes, containing 660 of the genes in the list of the cell cycle regulated ones. We then chose the 10 TFs widely recognized as having a fundamental role during the cell cycle. We computed the R2 for each gene pair in the dataset (N(N\u22121)/2 pairs, N\u200a=\u200a660) and considered as putatively co-regulated those pairs whose R2 values were over a threshold t_high and, as putatively non co-regulated, those pairs whose R2 values were below a threshold t_low. 
Gene pairs with scores between thresholds were not considered. In order to construct a ROC curve we used varying thresholds: as an upper threshold t_high for co-regulation we selected the value corresponding to percentiles p ranging from 50th to 90th with a step of 10 and, as a lower threshold t_low for non-coregulation, we selected the value corresponding to the \u201csymmetric\u201d percentile 100\u2212p. For each threshold we could compute true positives, true negatives, false positives and false negatives, and therefore construct a ROC curve: true positives are those pairs sharing a common transcription factor, whereas false positives are those pairs in the class without a common transcription factor. Analogously, true negatives and false negatives were computed within the class of gene pairs having R2 below the lower thresholds, corresponding to the percentiles from 50th to 10th with a step of \u221210 of the R2 distribution. Since the mathematical model used to define the lead-lag R2 links it to transcript stability, we repeated the same analysis using the simultaneous R2 as a similarity measure between two given genes. The simultaneous R2 also shows a mildly significant decrease of the mean values between the first and the last bin. To further evaluate the statistical significance of this analysis we computed the Z-score corresponding to 100000 randomizations of the half-life measurements. The results are shown in a scatterplot: the lead-lag R2 of gene pairs is statistically correlated to their half-life ratios. In fact, a high positive Z-score (about 5) corresponds to the highest half-life ratio bin and a negative Z-score (about \u22125) corresponds to the first half-life ratio bin. On the other hand, Z scores for the simultaneous R2 are all within the values \u22123 and 3 and therefore the observed difference of the mean values between the first and the last bin is not significantly affected by the randomizations. 
This scenario is consistent with the biological process underlying the mathematical model used to define the lead-lag R2, thus showing that our analysis well captures the effects of post-transcriptional control on gene expression time profiles during the cell-cycle. We next asked how well similarity measures predict co-regulation, i.e. the presence of a common transcription factor, from gene expression time profiles. As previously discussed, truly co-regulated genes do often display large differences of gene expression time profiles, e.g. peak shifts, delays or other kinds of nonlinear relationships. In this paragraph, we consider other similarity measures relevant to the analysis of gene expression data and compare their performances with those obtained using the lead-lag R2. In particular, we used 5 similarity measures other than the lead-lag: Spearman's rank, Kendall's tau, cosine, dynamic time-warped and time-delayed correlation, all squared to capture inverted relationships also. Spearman's rank, Kendall's tau and cosine correlation are the most common choices for the analysis of gene expression data in the presence of nonlinear relationships between time series, but they do not take into account the time ordering of data. By contrast, time-warped and time-delayed correlation have been specifically developed to analyze gene expression time profiles. The time-delayed correlation analysis has been proposed by Schmitt et al.; its R2 value is obtained by selecting the highest simultaneous R2 over all admissible time delays between profiles. The dynamic time-warped correlation has been recently used by Aach and Church; it obtains the R2 by selecting the highest simultaneous R2 over all the possible time warped paths. 
For any similarity measure, we performed the same analysis reported in a previous section using the same data, and the results are shown in the figures. The results presented so far have clearly shown that lead-lag correlation analysis outperforms the usual simultaneous correlation analysis (squared Pearson coefficient) for the prediction of co-regulation. First of all, the cosine correlation analysis produces the poorest performance, very close to a random choice, and therefore such similarity measure is not reported. The percentiles of the distribution for each of the R2 values considered in this paper are provided in the supporting information file. In this section we present some examples of \u201ctypical\u201d lead-lag relationships using the most recent yeast cell cycle data. The budding yeast cell cycle is characterized by consecutive waves of expression of key regulators such as cyclins and transcription factors. Cell Division Cycle 6 (CDC6) is a component of the pre-replicative complex essential for the initiation of DNA replication, normally expressed at the end of mitosis. It has a lead-lag relationship with ASH1. SWI5 encodes a key transcription factor that activates transcription of genes expressed at the M/G1 boundary and in G1 phase of the cell cycle. NCE102 is a non-classical export protein involved in an alternative clearance/detoxification pathway to eliminate damaged material. YOX1 is a transcription factor involved in the repression of ECB activity. All the above examples consist of pairs of genes that are under the control of the same transcription factor and that show differential mRNA stability values consistent with their lead-lag relationship. Moreover, it is worth noting the large differences in half-life values. As a control, we considered genes coding for proteins of the cytoplasmic ribosomal large subunit; ribosomal proteins are under the transcriptional control of IFH1/FHL1. The scatterplot of simultaneous vs. 
lead-lag R2 values shows that, whereas the POL/MCM pairs display high values of lead-lag R2 and low values of simultaneous R2, the control pairs POL/RIB and MCM/RIB display a very different pattern spread over a larger range, thus denoting the absence of any meaningful relationship, using gene expression time profiles during one cell cycle. The mean half-life of the POL group is 19\u00b16 min. Clusters of co-expressed (i.e. simultaneously correlated) profiles often provide clues for the presence of common transcription factors regulating both genes. Such computational analysis (known as \u201cclustering\u201d) is very useful since it allows the prediction of the underlying regulatory actions based exclusively on the available gene expression data obtained from a given experiment. The rationale behind such belief is a sort of a \u201cguilty by association\u201d approach: genes' products appearing and disappearing at the same time are likely to have some common transcriptional regulation. Nevertheless, it may well be the case that the same transcriptional signal regulating two (or more) genes may yield quite different outcomes on each transcript. In fact, a number of biological events following transcription may selectively affect cytoplasmic mRNA abundance, such as, for example, the activity of the enzymatic machinery involved in mRNA processing and degradation. In order to address this issue, we provided a novel computational methodology that, based exclusively on the available gene expression data, is able to effectively predict co-regulation even with variation in the dynamic response due to mRNA stability differences. 
Moreover, our approach also captures the relation of simultaneous or time-shifted co-expression, so that it provides a single integrative general index \u2013 the lead-lag R2 \u2013 able to uncover the presence of a common regulatory signal underlying gene expression time dynamics also at the post-transcriptional level. The expression of genes in the cell is to a large extent controlled at the level of mRNA accumulation. One key point in the analysis of gene expression dynamics is that mRNA abundance is determined by two regulated processes: transcription and degradation, both specifically affecting transcript levels. Computational analysis of genome-wide expression time series has identified many clusters of co-expressed genes. Our results clearly indicate that co-regulation is not generally equivalent to simultaneous expression. In order to test the validity of our approach on real data, we used yeast genome-wide cell-cycle expression time series obtained by several independent groups using different synchronization methods. In fact, by doing so, we could integrate the available cell cycle data and obtain a much more reliable aggregated dataset. We considered those gene pairs with the highest lead-lag R2. We believe that the same analysis can be successfully used to predict post-transcriptional regulation, i.e. the presence of a common mechanism able to stabilize or de-stabilize specific transcripts, as for the members of the PUF proteins family. The mRNA relative abundance time course data obtained from cell population experiments for genes A and B are denoted by Am and Bm, respectively. The simultaneous R2 is the usual squared Pearson correlation coefficient, which measures the fraction of the total variance explained by a linear fit between the two variables Am and Bm; \u03b7 accounts for intrinsic and extrinsic noise. The rationale behind the lead-lag R2 is the following. 
We considered two genes, A and B, subject to the same regulatory signal (promoter activity) \u2013 possibly of different strength \u2013 due to the presence at their promoters of the same TF complex in its active state. Moreover, we assumed that the change in mRNA levels due to the degradation rate could be reasonably well captured by first order rate kinetics. Am and Bm measure gene expression on a linear scale (fold induction), PX is the promoter activity time profile of the TF complex relative to gene X, \u03b1X is its maximal strength, kX is the degradation rate (kX\u200a=\u200alog(2)/t1/2) and \u03b7X accounts for intrinsic and extrinsic noise. In order to remove size effects, the common signal between the promoter activities of the two genes is indicated as p(t); c5 accounts for the integration constant. The lead-lag R2 is the fraction of the total variance explained by model (3). Note that the lead-lag R2 depends on the time order of the data, whereas the simultaneous R2 remains the same after a time shuffling of the data. Moreover, it is worth emphasizing that model (3) may well describe other biologically relevant mechanisms, such as time-shifted profiles, as shown in the supporting information file; the coefficients ci, which depend on the underlying model, will change accordingly. Any pair of time profiles satisfying model (3) will be said to have a lead-lag relationship, and a good fit to (3) can be obtained also in situations different from those assumed to derive it. 
This property is very useful since it provides flexibility in modeling different biological phenomena resulting from the presence of a common regulatory signal. The reason for the term \u201clead-lag\u201d is due to the fact that two signals satisfying model (3) also define the transfer function of a \u201clead-lag compensator\u201d widely used in control systems engineering. Assuming, for the sake of simplicity, signals devoid of linear trends and noise (c4\u200a=\u200ac5\u200a=\u200a\u03b4\u200a=\u200a0), model (3) in the Laplace domain shows that the transfer function between Am(s) and Bm(s) is that of a lead-lag compensator. The lead-lag R2 can be computed directly from gene expression data, and values near unity indicate that the model fits the available time series well. Given a gene expression time profile [mRNA](t) measured at times t1,\u2026,tN, we computed its time integral in two steps. First, we used a piecewise cubic Hermite interpolation formula to obtain, for each time interval, 4 more samples. Over the interpolated time series we computed the integral by using a 2-point closed Newton-Cotes formula. We considered the extended list of 1159 cell cycle regulated genes reported in the reference. We considered yeast cell cycle data measured by three independent groups. The datasets of the first group consist of genome-wide gene expression data during the yeast cell cycle using three different synchronization methods. We denoted as ELU the elutriation based dataset composed of one cell cycle, as ALPHA the pheromone \u03b1 arrest factor based dataset composed of two cell cycles, and as CDC15 the temperature sensitive CDC15 mutant based dataset composed of three cell cycles. Only two cell cycles of the CDC15 dataset could be used due to the large number of missing data. The dataset in Cho et al. was also used. For transcription factor targets we used the MacIsaac et al. dataset (p-value<0.001). The MacIsaac et al. dataset contained 660 of the 1159 cell cycle regulated genes. 
Therefore, we ended up with a list of 660 genes available for the subsequent computational analysis. We considered the main cell cycle TFs according to Bahler et al., and we used genome-wide half-life measurements of yeast transcripts from Wang et al. For each dataset, we computed the simultaneous and lead-lag R2 for all possible pairs among the N = 660 genes, that is, we computed these parameters for N(N\u22121)/2 = 217470 pairs. More precisely, the R2 values were computed for each cell cycle in each dataset, thus obtaining 13 values for each gene pair. The average dataset was constructed by computing the R2 values for each cycle and for each dataset, for a total of 13 cycles. The mean R2 value for each gene pair was obtained by computing the mean of the 13 available values. In case of missing data in the original dataset, computation of the mean R2 value was performed only when at least 8 out of 13 cycles were available. Such data were used to compute the diagram shown in the figure. Supporting information: Text S1 (0.10 MB DOC)."}
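The averaging rule for the 13 per-cycle R2 values, with the at-least-8-of-13 completeness requirement, amounts to a small missing-data-aware mean (function and variable names are illustrative):

```python
import numpy as np

def mean_r2(r2_values, min_cycles=8):
    """Mean R^2 over up to 13 per-cycle values for one gene pair; NaN
    unless at least `min_cycles` cycles are non-missing (8 of 13 in the
    text). Missing cycles are encoded as NaN."""
    v = np.asarray(r2_values, dtype=float)
    ok = ~np.isnan(v)
    return float(v[ok].mean()) if ok.sum() >= min_cycles else float('nan')
```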
+{"text": "Microarray gene expression time-course experiments provide the opportunity to observe the evolution of transcriptional programs that cells use to respond to internal and external stimuli. Most commonly used methods for identifying differentially expressed genes treat each time point as independent and ignore important correlations, including those within samples and between sampling times. Therefore they do not make full use of the information intrinsic to the data, leading to a loss of power. We present a flexible random-effects model that takes such correlations into account, improving our ability to detect genes that have sustained differential expression over more than one time point. By modeling the joint distribution of the samples that have been profiled across all time points, we gain sensitivity compared to a marginal analysis that examines each time point in isolation. We assign each gene a probability of differential expression using an empirical Bayes approach that reduces the effective number of parameters to be estimated. Based on results from theory, simulated data, and application to the genomic data presented here, we show that BETR has increased power to detect subtle differential expression in time-series data. The open-source R package betr is available through Bioconductor. BETR has also been incorporated in the freely-available, open-source MeV software tool available from http://www.tm4.org/mev.html. The analysis of microarray time-course data presents a number of challenges. First, microarray gene expression data has an inherent complexity due to its high dimensionality and hidden correlations driven by co-expression of genes in biological networks and other factors. Added to this is the fact that additional correlations exist between time points, but time-course sampling is often sparse and irregular due to experimental constraints.
Further, temporal processes governing gene expression in cells operate on a wide range of different time scales, making any sampling less than optimal for some applications. When analyzing microarray gene expression data in general, and time-course experiments in particular, two common goals are to identify genes with similar expression profiles (often using clustering approaches) and to identify those that are differentially expressed across conditions such as disease states. Most commonly used techniques are extensions of methods developed for static (non-time-course) experiments. They ignore the sequential nature of time-course data and the resulting time-dependent correlation structure. Analysis methods tailored to time-course data make use of this additional information, improving power to draw conclusions from the data. Several linear modeling approaches designed for non-time-course experiments can be applied to time-course data, and related approaches have been proposed by Efron et al. and others. An alternative approach involves fitting curves to the data. Here we present BETR, a novel technique to identify differentially expressed genes that overcomes many of the limitations of existing methods. Our approach explicitly uses the time-dependent structure of the data, employing an empirical Bayes procedure to stabilize estimates derived from the small sample sizes typical in microarray experiments. It is applicable to one- or two-color replicated microarray data, and can be used to detect differences between two conditions or changes from baseline in a single condition. In building a model for time-course data, we decompose the variability in our experimental measurements into its component parts, most importantly time effects, treatment effects and random technical and biological noise.
The aim of a two-group experiment is usually to identify genes with a treatment effect, as manifested in a significant difference between the groups' individual expression profiles, with \u03b4 representing the log-ratio between the treatment groups at each time point (Figure 1). Time-course data is a special case of repeated measures data with the distinguishing feature that the data points are ordered. A key result of this ordering is that correlation between time points is non-uniform. For example, it is often the case that measurements at consecutive time points, such as t1 and t2, are more highly correlated than those at non-consecutive time points, such as t1 and t4. BETR takes advantage of the time series structure of the data by allowing correlation between the magnitude of differential expression at different time points; values of \u03b4 that are close in time are likely to be more correlated than those with greater separation. The data from all genes is used jointly to estimate parameters representing the covariances between time points, as well as gene-specific error terms. After estimating the model variance parameters, we fit two models to each gene. The simpler model assumes identical mean profiles between conditions, while the second allows the mean profiles to differ. As an intuitive example of the benefit of taking the ordering of time points into account, consider a gene where the differential expression and random noise are of similar magnitude. When analyzing each time point independently the signal will often be masked by the background noise. In contrast, the true signal should have a detectable correlation across time points, making its identification possible. Let Ig be a Bernoulli (0 or 1) random variable indicating whether gene g is differentially expressed across conditions. Our interest is in estimating the probability of differential expression for each gene, given the data.
We will first describe BETR as applied to a two-group comparison using single channel (Affymetrix) gene expression data, denoting the two experimental conditions as treatment (Tx) and control (Co). At least two replicates are required in each experimental group, although the sample sizes (NTx and NCo) need not be balanced. Let Xgi denote the log transformed expression values for replicate i of gene g at the T time points. Within each group we define gene g's mean expression profile. The error terms egi are assumed to be multivariate normal, MVNT, random effects with a compound symmetry covariance structure. This relatively simple two-parameter covariance structure allows for within-replicate correlation between the errors at different time points, accounting for biological or experimental replicate effects in the case of repeated measurements on the same experimental unit. Our aim is to determine if there is a difference in expression between the groups, that is, whether \u03b4g = 0 at all time points, where we define \u03b4g to be the vector of log-ratios at each time point. A notable feature of BETR is that the parameters of interest \u03b4g are modeled as random effects rather than fixed effects for differentially expressed genes. In statistical models fixed effects are typically used for variables of interest, such as differences between experimental groups, or between time points. Random effects are usually used to account only for the remaining sources of variation that are not of interest, such as random technical measurement error or biological variability between subjects in the same treatment group. By modeling the \u03b4g as random effects, we are able to capture non-uniform correlation between them that arises from the time-course structure of the data. We define the indicator Ig to describe whether or not a gene is differentially expressed. For those genes without differential expression, \u03b4g is modeled as a mean zero fixed effect at all time points.
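A compound-symmetric covariance matrix of the kind assumed for the error terms egi has only two free parameters, however many time points there are. A minimal sketch:

```python
import numpy as np

def compound_symmetry(T, var, cov):
    """Compound-symmetric T x T covariance: one variance on the diagonal
    and one covariance everywhere off-diagonal -- two parameters
    regardless of the number of time points, as assumed for the BETR
    error terms."""
    S = np.full((T, T), cov, dtype=float)
    np.fill_diagonal(S, var)
    return S
```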
In the case of a differentially expressed gene (Ig = 1), we model \u03b4g as a random effect, allowing for non-zero log-ratios. By modeling the log-ratio as a non-zero realization of a random effect we allow correlation between the magnitude of differential expression at different time points. It follows that the distribution of the gene's data points, Yg, takes on different forms depending on whether the gene is differentially expressed or not. By considering which of the two distributions better fits the gene's data, we can make an inference about the probability of differential expression using Bayes' rule, where p represents the proportion of differentially expressed genes (see next section for estimation). We report as differentially expressed those genes whose probability of differential expression, P(Ig = 1|Yg), is greater than a user-defined threshold 1 - \u03b1. In the above model the proportion of differentially expressed genes, p, and the two components of variance, \u03a3Eg and \u03a3Dg, are unknown. \u03a3Eg represents the sample variance about the treatment group mean and is estimated using the pooled sample covariance of the two groups. When sample sizes are small we recommend constraining the structure of \u03a3Eg to be compound symmetric, requiring the estimation of only two parameters; the variance and covariance terms are obtained by averaging the diagonal and off-diagonal terms respectively. To further lessen the impact of small sample sizes the variance and covariance estimates are stabilized using the empirical Bayes shrinkage procedure of Smyth. The second covariance parameter, \u03a3Dg, relates to the primary quantity of interest, the magnitude of differential expression. Since we model the differential expression vector as correlated random effects with known mean 0, we can estimate \u03a3Dg by the sample covariance matrix SDg computed from Yg. The \u03a3Dg are then estimated by shrinking SDg toward a target covariance matrix.
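The Bayes'-rule step can be sketched as follows, under the simplifying assumption (ours, not necessarily BETR's exact likelihood) that the observed log-ratio vector is a single mean-zero multivariate normal draw with covariance Sigma_E under the null and Sigma_E + Sigma_D under the alternative:

```python
import numpy as np

def log_mvn(y, cov):
    """Log density of a mean-zero multivariate normal at y."""
    sign, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad)

def posterior_de(y, sigma_E, sigma_D, p):
    """Posterior probability of differential expression by Bayes' rule:
    covariance sigma_E under the null (I_g = 0) and sigma_E + sigma_D
    under the alternative (I_g = 1); p is the prior proportion of
    differentially expressed genes. Simplified sketch, not BETR's exact
    likelihood."""
    l0 = log_mvn(y, sigma_E)
    l1 = log_mvn(y, sigma_E + sigma_D)
    num = p * np.exp(l1)
    return num / (num + (1 - p) * np.exp(l0))
```

A gene with log-ratios far from zero pulls the posterior toward 1, while a flat profile pulls it below the prior p, exactly the comparison of the two candidate distributions described above.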
Since \u03a3Dg is non-zero only for differentially expressed genes, we base our target matrix only on the mean of SDg for those genes where the probability of differential expression, P(Ig = 1|Yg), is greater than the user-defined significance cutoff, 1 - \u03b1. The fraction of genes exceeding this cutoff is used to estimate p, the proportion of differentially expressed genes. The shrinkage follows the procedure of Smyth and Speed. In the parameter estimation procedure described above, estimation of the covariance parameters \u03a3Dg depends on knowledge of the Ig, which in turn depend on knowledge of the covariance parameters and the proportion, p, of differentially expressed genes. We therefore use an iterative procedure that alternates between updating the Ig estimates and the \u03a3Dg estimates until the process converges. The process starts with an initial default estimate of p and a rough gene ranking obtained for example by ANOVA. \u03a3Dg is estimated using an initial target covariance matrix derived from the mean SDg of the top ranked genes. Given the \u03a3Dg estimates we then obtain a rough first iteration estimate of the Ig; genes exceeding the cutoff 1 - \u03b1 are used to construct a new shrinkage target matrix allowing a second-iteration estimation of \u03a3Dg. We have found that this procedure converges. When using two color microarrays two samples are co-hybridized to each chip, and data is obtained in the form of expression ratios between conditions. Let Xgi represent the log ratio of expression of gene g at T time points for replicate i. We then express Yg, the average log ratio, similarly to the one color model above. The rest of the procedure is identical to that for the one-color case. To assess the performance of BETR we compared it to three established methods: the linear model approach with variance shrinkage implemented in the R/Bioconductor package limma, the spline-based EDGE method, and the MB-statistic.
Tuberculosis is a significant and growing public health problem, with an estimated two billion people infected worldwide and increasing concern about multi-drug resistant strains. Despite its prevalence, only about ten percent of people infected with Mycobacterium tuberculosis (MTB) progress to active disease. Host genetic factors that influence the outcome of TB infection have been identified both in humans and mouse models of the disease. The known factors only explain a fraction of the variability observed in the host reaction to infection. To better understand host genetic factors and their impact on the dynamics of infection response, we analyzed gene expression in a mouse model of TB infection using C57BL/6 and C3H.B6-sst1 mice, resistant and susceptible, respectively, to infection. Bone marrow-derived macrophages were extracted from three mice of each strain, primed with interferon gamma and then infected with MTB in vitro. RNA was extracted for gene expression analysis on Affymetrix 4302 microarrays at four time points: prior to infection (0 hours) and 6, 30 and 78 hours post infection. Details are given in the Additional File. Although simulated data may fail to capture all of the features of and correlations within gene expression data, it is useful for understanding the properties of a new analytical method. Simulated data has the advantage that we know the 'truth' and allows us to compare the performance of different methods, helping us broadly define the conditions under which particular methods are most suitable. A major difficulty in simulating microarray data sets lies in our lack of understanding of the true correlation structure of such data. This includes the correlation between genes and, in the case of time-course data, the correlation between successive time points. To address these concerns, we began with the TB data and randomly selected 2000 genes that were expressed above background in the C57BL/6 resistant strain, thus preserving some of the correlations in the real data.
To create data for the second condition, we then selected 100 genes, shifted their expression levels by 1.5- to 3-fold at 1 to 4 time points, and added random Gaussian noise with a mean of 1.5-fold to the expression levels for all 2000 genes. In order to identify the set of differentially expressed genes in an experiment, genes are ranked in order of decreasing evidence for differential expression and a cutoff is chosen that balances the numbers of false positives and false negatives. A receiver operating characteristic (ROC) curve plots the true positive rate as a function of the false positive rate as the cutoff is changed, and can be used to assess the performance of the ranking criteria. For simulated data, where it is known which genes are differentially expressed, ROC analysis is possible. Consequently, we analyzed each simulated dataset using limma, EDGE, the MB-statistic and BETR, and evaluated their ROC performance. In limma we fit an eight-coefficient linear model to model the two conditions and four time points. Genes were ranked according to the moderated F-test of the four between-strain contrasts. The corresponding results are shown in the figure. To test the hypothesis that BETR's advantage would be most pronounced in the case of noisy data with small but sustained effects, we characterized BETR's performance under a variety of different conditions, varying the duration of differential expression. In each case we chose a cutoff to achieve a false positive rate of 5% and assessed the power to correctly identify differentially expressed genes (the true positive rate). The results presented are the average of ten simulations. We estimate the power to detect differentially expressed genes as a function of the number of time points with differential expression, ranging from a short spike at a single time point to differential expression across all four time points (see figure).
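The ROC construction used in the simulation study, sweeping the cutoff down the ranked gene list, is a few lines of code (the scores and labels below are illustrative stand-ins for a method's gene ranking and the known simulation truth):

```python
def roc_curve_points(scores, labels):
    """ROC points (FPR, TPR) as the cutoff sweeps down the ranked gene
    list; labels: 1 = truly differentially expressed, 0 = not."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    P = sum(labels)          # number of truly differentially expressed genes
    N = len(labels) - P      # number of null genes
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))
    return pts
```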
The tuberculosis infection time-course experiment was analyzed with each of the four methods to detect differentially expressed genes. For BETR, limma and EDGE we chose a cutoff to obtain an estimated false discovery rate of 1%. Since there is currently no way to estimate false discovery rates with the MB-statistic, the size of this gene list was chosen, for the purposes of comparison, to be the average of the other methods. The union of the four lists contained 528 genes, of which only 146 were common to all methods. To investigate the differences between the results produced by the four methods we used Gene Set Enrichment Analysis (GSEA) to identify Gene Ontology terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways that show concordant differences between the strains. One example is the set of cell adhesion molecules shown in the figure. Based on simulated data, our proposed method, BETR, outperforms three commonly used techniques in the analysis of time-course data. This advantage is particularly noticeable for genes with a small but sustained differential expression signal. When the magnitude of differential expression is similar to that of the background noise, it is difficult to identify by examining each time point in isolation. These patterns of differential expression become easier to identify when the time series structure of the data is taken into account; a small, noisy signal becomes identifiable if it is sustained across several adjoining time points. While BETR has no advantage when the differential expression signal is transient, its relative performance improves as the signal is sustained over additional time points. This improvement is due to the fact that BETR accounts for the correlation between successive time points. The significance of this correlation increases with the number of differentially expressed time points.
Analysis of the mouse TB host-response data confirms that our method has greater power to detect such sustained differences in a real dataset. We identified a set of genes involved in cell homing during immune response that was not detected by the other methods. Several genes in this class respond with small expression changes whose significance is only apparent when their sustained nature is taken into account. These results suggest that poor control of tuberculosis infection is in part driven by deficient regulation of cell migration factors, resulting in poor granuloma formation and subsequent inability to limit bacterial growth. An inherent challenge in genomic data analysis is identifying effects that are robust yet subtle. Based on results from theory, simulated data, and application to the genomic data presented here, we expect BETR to outperform existing methods under these circumstances. This increased sensitivity has the potential to elucidate important biological themes that may otherwise go unobserved. TM and MA conceived of the statistical technique. MA developed and implemented the technique and drafted the manuscript. IK conceived of the mouse model microarray experiments and helped interpret the findings. JG performed the microarray experiments. JQ participated in the conception and coordination of the project and helped draft the manuscript. All authors read and approved the final manuscript. Supplementary methods and tables: this file includes additional information on the methods used and supplemental tables supporting the results presented in the paper."}
+{"text": "Microarrays are widely used for estimation of expression of thousands of genes in a biological sample. The resolution ability of this method is limited by the background noise. Low expressed genes are detected with insufficient reliability and expression of many genes is never detected at all. We have applied the principles of stochastic resonance to detect expression of genes from microarray signals below the background noise level. We report the periodic pattern detected in genes called \u201cAbsent\u201d by traditional analysis. The pattern is consistent with expression of the conventionally detected genes and specific to the tissue of origin. This effect is corroborated by the analysis of oscillating gene expression in mouse (M. musculus) and yeast (S. cerevisiae). Most genes usually considered silent are in fact expressed at a very low level. Stochastic resonance can be applied to detect changes in expression pattern of low-expressed genes as well as for the validation of the probe performance in microarrays. Microarrays have become a standard technique in biological research. From the beginning, the focus in microarray experiments has been on taking a simultaneous snapshot of a large number of genes, rather than exact measurement of expression level for a small number of selected genes. Over the years the technology has undergone a significant evolution, allowing a reliable identification of functional relations of co-expressed genes and a good estimation of expression level for particular genes. In 2006\u20132007 we published a series of papers characterizing the oscillating patterns of gene expression in metabolically active peripheral tissues in mice.
The circadian oscillation we reported extends far beyond the commonly accepted 10\u201315% of genes directly regulated by the circadian molecular clock. In the heat map plot, the fraction of genes with p<0.1 is lower and the expression profiles are generally noisier compared to analysis of the entire set of transcripts. In most studies such \u201csilent\u201d genes are excluded from further analysis at the early stages. The filtration criteria are usually more stringent, selecting only genes called \u201cPresent\u201d in at least half of all time points. A similar analysis in S. cerevisiae reveals a large group of genes (or rather transcripts interrogated by Affymetrix probe sets) that demonstrate a clear oscillating pattern consistent with that of highly expressed genes. Gene expression in yeast is dominated by metabolic oscillation in the respiratory cycle, and this rhythm is also observed in nearly 100% of all genes (transcripts). However, the effect of SR can be achieved even without a natural baseline oscillation. Oscillation can be generated by repetitive application of perturbation in a biological system. Periodicity does not have to be time-wise. Regular placement of replicate probes on a lattice can also be viewed as a periodic signal across the surface of a microarray, and in combination with background noise it can create the effect of SR. In this case application of SR would require a new specifically designed microarray as well as significant modification of the analysis pipeline, starting from the image analysis on and between the spots of attached probes. In all cases the algorithm for detection of signal is based on a test for periodicity in a series of measurements rather than a static comparison of signal and background noise levels. Why do the \u201cabsent\u201d genes, with expression pattern otherwise indistinguishable from the background noise, suddenly show signs of expression consistent with that of reliably detectable genes?
The explanation has been outlined in the abstract of the very first paper reporting the effect of SR. Using the SR methodology, the test for gene silence can be formulated as follows: in the presence of both a periodic signal with known frequency \u03c9 and stochastic noise, the null hypothesis (H0: gene Y is expressed) is equivalent to H0: the expression signal for gene Y is periodic with frequency \u03c9; H0 cannot be rejected if the test for periodicity is positive, and the alternative hypothesis (HA: gene Y is silent) is accepted if there is no evidence of oscillation with frequency \u03c9. There are a few available tests for periodicity; we suggest the Pt-test. Detection of extra-low gene expression has a few important implications. Long-term practice of using microarray and RT-PCR technology has created a perception that a gene for which the signal has the same intensity as ambient noise is not expressed. However, this fact relates to the resolution ability of the method rather than a real property of the gene. Using the principle of SR we greatly improve our ability to detect weak signals, but this method also has its limit. We observe expression of a large number of genes previously considered silent, but again this signal sinks into noise with no clear landmark separating expressed and silent fractions of genes. Could it be that the latter fraction does not exist and all genes are expressed, even at a minuscule rate? The entire concept of \u201csilent\u201d genes is created by our inability to detect extremely low transcript concentrations. There is no obvious landmark separating low-expressed genes and below-detection-threshold genes. Summing up the number of conventionally detected transcripts and transcripts detectable by SR leaves a very small fraction of truly silent gene candidates. This fraction also contains transcripts for which microarray probes are not performing as intended, which further reduces the number of potentially silent genes almost to none.
A recent publication has already demonstrated that most human protein-coding genes are primed for transcription initiation, including those for which no transcripts could be detected. We have completed independent circadian studies in AKR/J mice acclimated to a 12 hr light: 12 hr dark cycle, harvesting sets of 3\u20135 mice at 4 hr intervals in duplicates over a 24 hr period. We have also re-analyzed the time series data set provided by Dr. Tu and Dr. McKnight. Profiles have been smoothed by a 3rd degree polynomial procedure and median-subtracted; for smoothing we use a seven-point Savitzky-Golay algorithm. For better compatibility, the same smoothing and median subtraction procedure has been applied to all other data sets. For purposes of spectral analysis, consider a series of microarray expression values for gene g with N samples, and its periodogram I(\u03c9) evaluated at the Fourier frequencies in (0, \u03c0]. If the series contains a periodic component at some frequency, then the periodogram exhibits a peak at that frequency with a high probability. Conversely, if the time series is a purely random process (a.k.a. \u201cwhite noise\u201d), then the plot of the periodogram against the Fourier frequencies approaches a straight line. The significance of the observed periodicity can be estimated by Fisher's g-statistic, as recently recommended: g is the largest peak of the periodogram, I(k\u03c9), divided by the sum of all periodogram ordinates. Large values of g indicate a non-random periodicity. We calculate the p-value of the test under the null hypothesis with the exact distribution of g using the formula P = sum over j = 1,\u2026,b of (\u22121)^(j\u22121) C(n,j) (1 \u2212 jg)^(n\u22121), where n = [N/2] and b is the largest integer less than 1/g. This algorithm closely follows the guidelines recommended for analysis of periodicities in time-series microarray data. Given an expression profile Y = x0, x1, x2,\u2026, x(N\u22121), the autocorrelation is simply the correlation of the expression profile against itself with a frame shift of k data points.
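Fisher's exact g-test as described can be sketched in a few lines: the periodogram is evaluated at the Fourier frequencies, g is the largest ordinate over the sum, and the p-value uses the exact null distribution quoted above.

```python
import numpy as np
from math import comb

def fisher_g_test(x):
    """Fisher's exact g-test for a hidden periodicity. g is the largest
    periodogram ordinate divided by the sum over all Fourier
    frequencies; the p-value follows the exact null distribution in the
    text, with n = [N/2] and b taken as floor(1/g) (the text says the
    largest integer strictly less than 1/g, which differs only when 1/g
    is an integer)."""
    x = np.asarray(x, float)
    x = x - x.mean()
    N = len(x)
    n = N // 2
    spec = np.abs(np.fft.fft(x)[1:n + 1]) ** 2 / N  # periodogram, k = 1..n
    g = spec.max() / spec.sum()
    b = int(np.floor(1.0 / g))
    pval = sum((-1) ** (j - 1) * comb(n, j) * (1 - j * g) ** (n - 1)
               for j in range(1, b + 1))
    return g, min(max(pval, 0.0), 1.0)
```

For a profile dominated by a single sinusoid, g approaches 1 and the p-value collapses toward zero; for white noise the periodogram ordinates are comparable and g stays small.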
For the time shift f, defined as f = i+k if i+k < N and f = i+k\u2212N otherwise, the profile is compared against its circularly shifted copy. If the estimated delay \u03ba > 0, then this means that the activation of gene ga influences the activation of gene gb at a later time. In other terms there is a directed link \"from\" gene ga \"to\" gene gb; this is the way TimeDelay-ARACNE can recover directed edges. On the contrary, the ARACNE algorithm does not produce directed edges, as it corresponds to the case \u03ba = 0, and the mutual information is of course symmetric. We want to show direct gene interactions, so under the condition of a perfect choice of experimental time points the best time delay is one, because it allows to better capture direct interactions, while other delays ideally should evidence more indirect interactions. However, time points are usually not sharply calibrated to detect such information, so considering a few different delays can help in the task: if you consider a too long time delay you can see a correlation between gene a and gene c while losing gene b, which is regulated by a and regulates c, while a short time delay can be insufficient to evidence the connection between gene a and gene b; using some few delays we try to overcome the above problem. Other approaches, based for example on conditional mutual information, could also be considered. After the computation of the influence estimations, TimeDelay-ARACNE filters them using an appropriate threshold, I0, in order to retrieve only statistically significant edges. TimeDelay-ARACNE auto-sets this threshold using a stationary bootstrap on the time data. The bootstrap is a method for estimating the distribution of a given estimator or test statistic by resampling available data. The methods that are available for implementing the bootstrap, and the improvements in accuracy that it achieves in terms of asymptotic approximations, depend on whether the data are a random sample from a distribution or a time series. From the bootstrap replicates of the mutual information we estimate the mean \u03bc and the standard deviation \u03c3.
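The delayed-influence idea can be illustrated with a crude time-delayed mutual information estimate; TimeDelay-ARACNE uses a kernel density estimator, so the equal-width binning here is only a stand-in:

```python
import numpy as np
from collections import Counter

def delayed_mi(a, b, k, bins=3):
    """Mutual information between profile a at time t and profile b at
    time t + k (k > 0 probes whether a 'leads' b). Equal-width binning
    is a crude stand-in for the kernel density estimator the algorithm
    actually uses."""
    a = np.digitize(a, np.linspace(min(a), max(a), bins + 1)[1:-1])
    b = np.digitize(b, np.linspace(min(b), max(b), bins + 1)[1:-1])
    x, y = a[:len(a) - k], b[k:]   # pair a(t) with b(t + k)
    n = len(x)
    pxy = Counter(zip(x, y))
    px = Counter(x)
    py = Counter(y)
    # plug-in MI estimate: sum p(x,y) * log(p(x,y) / (p(x) p(y)))
    return sum(c / n * np.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())
```

If b is essentially a delayed copy of a, the delayed estimate at the true lag exceeds the simultaneous (k = 0) one, which is the asymmetry the algorithm exploits to orient edges.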
The threshold is then set as I0 = \u03bc + \u03b1\u00b7\u03c3 (see figure). In the last step TimeDelay-ARACNE uses the DPI twice. In particular the first DPI is useful to split three-node cliques (triangles) at a single time delay, whereas the second is applied after the computation of the time delay between each pair of genes as in (8). Just as in the standard ARACNE algorithm, three-gene loops are still possible on the basis of a tolerance parameter: triangles are maintained by the algorithm if the difference between the mutual information values of the three connections is below 15%. The computational performance of the TimeDelay-ARACNE algorithm is influenced by the number of genes, by the mutual information estimation algorithm and by the adopted scheme of bootstrapping for the estimation of the threshold parameter. In particular, if the network has n genes and t samples, we have to compute O(Kn^2) estimations of the mutual information between two vectors of samples having t elements or less, K being the maximum value of the parameter \u03ba. We adopt a kernel-based estimator of the density of data used in the computation of mutual information, based on a procedure available from http://cran.r-project.org/. It performs a smoothing of data and an interpolation on a grid of fixed dimensions; the procedure also performs an automatic selection of the kernel bandwidth, by choosing the bandwidth which (approximately) minimizes good-quality estimates of the mean integrated squared error. Therefore, each inner mutual information estimation just depends on t and on the size of the fixed grid, which in all our experiments we fixed at 100 \u00d7 100. The algorithms were developed in R and are available at the site http://bioinformatics.biogem.it.
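Setting I0 = mu + alpha*sigma from resampled profiles can be sketched as below; a plain within-profile permutation is used as a stand-in for the stationary bootstrap, and `mi_fn`, `alpha` and `seed` are illustrative parameters:

```python
import random
import statistics

def mi_threshold(profiles, mi_fn, n_boot=500, alpha=3.0, seed=0):
    """Threshold I0 = mu + alpha*sigma estimated from the mutual
    information of resampled profile pairs. A plain within-profile
    permutation stands in for the stationary bootstrap actually used by
    TimeDelay-ARACNE; `mi_fn` is any mutual information estimator."""
    rng = random.Random(seed)
    null_mi = []
    for _ in range(n_boot):
        a, b = rng.sample(profiles, 2)   # pick two distinct profiles
        a = list(a)
        rng.shuffle(a)                   # destroy any time structure
        null_mi.append(mi_fn(a, b))
    return statistics.mean(null_mi) + alpha * statistics.pstdev(null_mi)
```

Edges whose estimated influence falls below the returned I0 are then discarded as statistically insignificant.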
To give an idea of the computational time required by each network reconstruction: the estimation of the mutual information between two expression profiles of size from 10 to 100 points takes on average between 0.07 and 0.13 seconds on a standard platform. The whole procedure, apart from the bootstrapping required to estimate the threshold I0, requires less than 7 minutes on a network of 50 genes and 50 time points. Therefore the most computationally demanding step is the bootstrapping needed to compute the threshold I0. It consists in randomly permuting the dataset and then calculating the average mutual information and standard deviation of these random values. Depending on the number of samples in the bootstrap steps, the computational time changes; in all the reported experiments we used 500 bootstrap samples, which produces the reconstructed network of 50 genes and 50 time points in about 47 minutes. The software was implemented in R and can be downloaded at http://bioinformatics.biogem.it or by contacting the corresponding author. PZ designed the procedure and discussed the results, SM contributed to the implementation of the procedure and performed the elaboration on data, MC designed the procedure, proposed the biological problem and discussed the results. All authors contributed to the design of the whole work and to the writing of the manuscript. All authors read and approved the final manuscript."}
+{"text": "The evolution of high-throughput technologies that measure gene expression levels has created a data base for inferring GRNs, a process also known as quantitative gene regulatory network modelling. However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot, to date, be performed, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Presented is a comparison framework for assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established. Finding regulatory interactions between cell products is one of the most important objectives of Systems Biology and has stimulated considerable research efforts. One approach is to mathematically model the GRN and to find parameters of the model from available data. Once built, these models can be used to predict the behaviour of the organism under certain conditions, related to different treatments or diseases. Also, once the basic mechanisms of life are revealed, it has been postulated that it should be theoretically possible to create synthetic organisms. A large network can be simplified by, inter alia, analysing only subnetworks of the entire GRN. A useful approach is to combine levels of detail, in a top-down or bottom-up approach, i.e. 
to move from a coarse to a more detailed model or vice versa. In order to model a GRN, genes are viewed as variables that change their (expression) values in time. Depending on the type of variables used, methods can be classified as discrete or continuous, deterministic or stochastic, or as hybrid methods (using more than one type of variable). Two different approaches are distinguished in the literature: coarse-grained and fine-grained modelling. This paper concentrates on quantitative modelling of gene regulatory networks (GRNs) using DNA microarray data, as this is more informative than qualitative analysis of biological data. Although more sophisticated high-throughput technologies that may give more accurate results have been developed lately (such as next-generation sequencing), microarray data remain the most widely available. Previous (pair-wise) algorithm comparisons for the methods analysed here have been reported. In order to analyse the performance of evolutionary algorithms for model parameter inference, we have implemented seven different approaches and compared them on several datasets. These methods use different continuous fine-grained models to represent the GRN, and rely on EAs to find the model that best fits the experimental data. Further information on the implemented techniques can be found in the additional files. Generally, GRN models based on systems of differential equations express the change in the expression level of each gene in time as a function of the expression levels of the other genes. In S-Systems, αi and βi, the rate constants, represent the basal synthesis and degradation rate, and gij and hij, which indicate the strength of the influence of gene j on the synthesis and degradation of the product of gene i, are the kinetic orders. In real GRNs, it is, of course, possible that the expression level of a gene does not depend on the other genes, but only on its own concentration or that of metabolites or other external factors. 
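The S-System form referenced in the text as Equation 1 did not survive extraction; with the parameter names described above (αi, βi rate constants; gij, hij kinetic orders), its standard formulation reads:

```latex
\frac{\mathrm{d}x_i}{\mathrm{d}t}
  = \alpha_i \prod_{j=1}^{n} x_j^{\,g_{ij}}
  - \beta_i \prod_{j=1}^{n} x_j^{\,h_{ij}},
  \qquad i = 1, \dots, n
```

The first product is the synthesis term and the second the degradation term, each modulated by the expression levels of all n genes.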
Self-regulation is modelled by S-Systems (parameters gii and hii), and metabolite concentrations can also be introduced in the model when measurements are available. In the case of GRN modelling, the two terms in Equation 1 correspond, respectively, to synthesis and degradation, influenced by the other genes in the network. Because they are considered one of the most complete models for GRNs, S-Systems have received a lot of attention in the literature. Artificial neural networks (ANNs) are naturally-inspired models which mimic the activity of the animal nervous system. Two different ways of modelling GRNs with ANNs are common. The first one computes as the output of the ANN the change in expression value, with time, of one gene, while the other calculates the expression value itself at a certain moment in time. Inputs are the expression values of the regulators at the previous time point. The latter has been used here, for one of the methods implemented. Evolutionary algorithms maintain a population of solutions (individuals or chromosomes), which evolve over a number of generations. The goodness of each individual, i.e. its fitness, is given by a function defined for the specific optimisation problem. Evolution is performed using genetic operators that depend on the specific problem and encoding, e.g. (i) mutation, which modifies one solution from the population to obtain a new one, and (ii) crossover, which uses several parents to create a number of offspring. For each generation, a new set of solutions is produced from the previous population, either by replacing some parent individuals by children, or by performing fitness-based selection on all parents and children. The EA family includes the genetic algorithm (GA), evolution strategy (ES), genetic programming (GP), evolutionary programming (EP) and differential evolution (DE). Each maintains a population of solutions to the optimisation problem. 
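The second ANN variant, predicting a gene's expression value from the regulators' values at the previous time point, can be sketched as a single-hidden-layer network. The weight shapes and the tanh activation are illustrative assumptions; the paper does not specify this exact architecture:

```python
import numpy as np

def ann_predict(x_prev, W_in, b_hidden, w_out, b_out):
    """Predict one gene's expression at time t from the regulators'
    expression values at time t-1 (illustrative one-hidden-layer net)."""
    hidden = np.tanh(W_in @ x_prev + b_hidden)  # hidden-layer activations
    return float(w_out @ hidden + b_out)        # predicted expression level
```

In the first variant, the same network would instead output the rate of change of the gene's expression, to be integrated over time.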
Even given strict differences between the members of the EA family of methods, the distinction has become fuzzier with time, as new hybrid approaches have appeared, such as the real-encoded GA used in this paper. Although these are common features of EAs, they are also the elements that differentiate one type of EA from another. For instance, individuals of GAs are typically encoded as binary arrays, DE and ES use arrays of real numbers as an encoding for the solution, while GP evolves tree-encoded expressions. At the same time, these methods use different genetic operators, applied to the different encodings. The generic methodology of fitting a GRN model to data using EAs involves a given model, a set of data, and evolution of the model parameters. For a population of parameters, representing different models, genetic operators are applied and the fittest individuals in the population are selected for the next generation. Usually, in the case of GRNs, the fitness function is defined as the difference between the observed data and the output of the model. Since every model has its individual features, algorithm steps differ from one approach to another, but the main skeleton is usually preserved. Here, we have implemented and analysed seven such algorithms: CLGA, MOGA, GA+ES, DE+AIC, GLSDC, PEACE1 and GA+ANN. Some of these relate the fitness to additional criteria, for example to previous biological knowledge. Robustness-to-noise analysis is performed by keeping fixed the number of fitness evaluations and the other EA meta-parameters (e.g. mutation and crossover operators), and observing the decrease in fitness and solution quality. Scalability analysis involves increasing the number of fitness evaluations allowed and observing the quality of results obtained. The number of fitness evaluations was empirically chosen to allow the population to converge towards a stable fitness value. 
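The generic EA skeleton described above can be sketched as follows. The operator choices (uniform crossover, Gaussian mutation, truncation selection over parents plus children) and all constants are illustrative assumptions, not any of the seven methods compared:

```python
import random

def evolve(fitness, dim, pop_size=20, n_gen=100, sigma=0.1, seed=0):
    """Minimal real-encoded EA: a population of parameter vectors is
    evolved by crossover and mutation; 'fitness' is minimised (e.g. the
    squared error between observed data and the model's output)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(n_gen):
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]  # crossover
            child = [x + rng.gauss(0, sigma) for x in child]                # mutation
            children.append(child)
        # fitness-based selection over parents and children (elitist)
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]

# Example: fit a parameter vector minimising squared error to a target.
target = [0.3, -0.7, 1.2]
best = evolve(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)), dim=3)
```

Replacing the toy quadratic with the squared difference between a simulated and an observed time series turns this skeleton into a GRN parameter-fitting loop.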
In order to be able to evaluate our implementation on the chosen criteria, six datasets were used (see Table). Robustness to noise was tested on the synthetic data for the five-gene networks, to which 1%, 2%, 5% and 10% Gaussian noise was added to all values. The assumption of Gaussian noise has been used before in relation to gene expression data, and is adopted here. Ideally, in order to be able to build an S-System model, or to train an ANN, for a large-scale network, a large number of measurements (time points) is required. This number increases further when data are noisy. As evolutionary algorithms are stochastic in nature, multiple runs were performed for each experiment. Multi-objective analysis was performed over 20 runs for each algorithm. The methods analysing the entire system were applied seven times on each dataset, while those using the divide-and-conquer approach were run five times for each of the first five genes, resulting in 25 runs per dataset. The quantitative results for the algorithms are displayed using notched box plots. For a first analysis, we applied five algorithms to the five-gene synthetic dataset. Although methods using the S-System model display similar average performance, GA+ES and DE+AIC obtain the best parameters overall, while, in sensitivity terms, GLSDC has a higher value, indicating that the latter is more suitable for a quantitative analysis than the two former, which, despite finding parameter values close to the real ones, can miss smaller values. An important feature for inferential GRN algorithms, in a real biological setting, is robustness to noise. We have analysed the behaviour of the algorithms implemented on noisy datasets. The sensitivity and specificity criteria allow for a qualitative analysis of results. 
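The noise protocol above (adding x% Gaussian noise to all values) might look like the following sketch. Scaling the noise standard deviation by each measured value is our assumption; the paper does not give its exact formula:

```python
import numpy as np

def add_noise(expr, percent, seed=0):
    """Add zero-mean Gaussian noise whose standard deviation is the given
    percentage of each measured value (one common reading of 'x% noise')."""
    rng = np.random.default_rng(seed)
    expr = np.asarray(expr, dtype=float)
    return expr + rng.normal(0.0, (percent / 100.0) * np.abs(expr))
```

Applying it with `percent` set to 1, 2, 5 and 10 reproduces the four noise levels used in the robustness experiments.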
From the sensitivity point of view, the methods can be divided into three categories, with: (1) stable sensitivity values, (2) decreasing sensitivity with noise (GA+ES), and (3) increasing sensitivity with noise (PEACE1). Specificity values, on the other hand, decline with noise for all methods, which is explained by the fact that the algorithms concentrate on finding null interactions, so the number of true negatives discovered decreases with noise. However, the first two categories seem to exhibit significantly better behaviour than the third. This explains why PEACE1 achieved maximum sensitivity at maximum noise: a very small proportion of parameters were found to be null, so almost all genes were found to interact. This results in a large number of true positives, accompanied, however, by a very large number of false positives, which is not desired here. The quantitative perspective has been analysed using the two criteria in the corresponding figures. In conclusion, the ANN model and the GLSDC mechanism for controlling noise seem to give good quantitative results even with a high noise rate. The best sensitivity-specificity balance is achieved with GA+ANN, while GA+ES, DE+AIC and GLSDC exhibit the best qualitative behaviour with noise under the S-System model. Scalability analysis was performed on four synthetic datasets corresponding to four different networks: 10, 20, 30 and 50 genes. For these data, quantitative results are displayed using box plots. Due to the characteristically low connectivity of the networks, all methods analysed displayed good specificity. However, the sensitivity values tend to decrease as size increases, which indicates that, for larger networks, these methods tend to set more and more parameters to zero, so that more interactions are missed. However, the number of false positives remains small. 
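The sensitivity and specificity criteria used throughout this comparison can be computed as follows; treating a parameter as a "present" interaction when it is non-zero (up to a small epsilon) is our reading of the setup:

```python
import numpy as np

def sensitivity_specificity(inferred, true, eps=1e-8):
    """Qualitative evaluation of an inferred network against the known
    one. Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    inf = np.abs(np.asarray(inferred, dtype=float)) > eps
    tru = np.abs(np.asarray(true, dtype=float)) > eps
    tp = np.sum(inf & tru)    # interactions correctly recovered
    tn = np.sum(~inf & ~tru)  # null parameters correctly recovered
    fp = np.sum(inf & ~tru)
    fn = np.sum(~inf & tru)
    return tp / (tp + fn), tn / (tn + fp)
```

Under this definition, an algorithm that zeroes out too many parameters loses sensitivity, while one that connects everything (as PEACE1 does under heavy noise) loses specificity.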
GA+ANN maintains good qualitative performance up to 30 genes, while DE+AIC and GLSDC display good behaviour with the 10-gene dataset, but do less well as the size of the gene set increases. On the 50-gene dataset, all methods perform poorly with respect to the sensitivity values. In order to analyse the quantitative behaviour of the methods implemented, values for two criteria were provided: ability to reproduce the data, and parameter quality, measured as mean squared error per gene rather than per parameter. Given the similar connectivity of the four different networks (3 to 5), this offers a good measure of parameter quality that neither depends on the number of genes in the network, which would have been the case had we chosen the total squared error, nor is biased by the large number of null parameters usually discovered by the algorithms. GA+ANN achieves good parameter quality. For the 7-gene experiment, two of the known interactions have been consistently assigned parameters with the wrong sign, by all the methods, in multiple runs. This indicates noise interference, which explains lower values compared to the similar 6-gene experiment. GLSDC, however, seems to identify a number of interactions comparable to the 6-gene experiment, which confirms that it is more robust to noise than the others. GA+ES and PEACE1 also seem to correctly identify many interactions, but, because the simulated gene values are highly dependent on the rest of the network, they are unable to reproduce the experimental data. Note that, for some methods, the fittest individual identifies fewer interactions than the overall value, which confirms that good ability to reproduce data does not necessarily correspond to a model containing biologically relevant connections. Qualitative analysis indicates that, for the small networks, where all the genes are known to interact, the connections identified by the best-fitting methods are mostly correct. 
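One reading of the per-gene parameter-quality criterion described above (the exact normalisation is not given in the text, so this is an assumption) is the squared parameter error accumulated per gene, i.e. per row of the parameter matrix:

```python
import numpy as np

def per_gene_parameter_error(estimated, true):
    """Squared error of the estimated parameters accumulated per gene
    (one value per row), so the criterion does not grow with the total
    number of parameters in the network."""
    est = np.asarray(estimated, dtype=float)
    tru = np.asarray(true, dtype=float)
    return ((est - tru) ** 2).sum(axis=1)
```

Averaging these row values gives a single score that stays comparable across the 10-, 20-, 30- and 50-gene networks.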
Introducing more genes into the analysis triggers a different response from each method and gene analysed. In the first experiment, GLSDC finds a comparable percentage of interactions and adds fewer false interactions outside the PHO gene family (connections from SIC1 and APC/C). This suggests that, when the added nodes are not connected to the existing ones, the algorithm is better at finding correct qualitative interactions, although fit obviously suffers. GLSDC also finds many effects of the PHO genes on CLN2, but these are not biologically plausible. At the same time, when moving to the larger dataset, it correctly adds a positive effect from FUS3, which affects the gene through FAR1, but fails to identify the SBF complex (SWI4/6) as an activator. The fact that it does not succeed in identifying the main activation link explains its poor performance in reproducing the data. DE+AIC and GA+ANN preserve the connections from SWI4, SWI6 and CLN3 from one analysis to the other, but at the same time add some false connections to PHO80, PHO4 and APC/C. In the second experiment, where most of the new nodes are connected to the initial network, GA+ANN and DE+AIC perform better both in data fit and in validity of interactions. However, the number of false positives increases when moving to the larger dataset. All in all, the results indicate GA+ANN and DE+AIC as better choices when a continuous simulation of the system is required, with less concern for qualitative analysis of connections (i.e. a black-box approach). GLSDC seemed to identify correct interactions in most experiments but is not able to reproduce the data as well as the other two methods. 
The methods aiming to analyse all genes simultaneously, such as CLGA and MOGA, displayed very poor performance in reproducing the data, although they succeeded in qualitatively identifying some correct interactions for the small-scale datasets. In order to ensure the validity of our comparison, we performed twenty 100,000-fitness-call runs for each of the three algorithms; the results, summarised in the corresponding table, show a statistically significant improvement (p < 0.02) when comparing the single- with the multi-objective approaches. However, no significant improvement is introduced by fuzzy dominance selection in this case. A more general observation is that, if we perform two rankings of the 20 solutions obtained, results differ for all three methods. So, improved fitness does not necessarily mean better parameters. This indicates that some parameters may be more important than others, so that a slight change in the values of the more meaningful ones strongly influences the ability of the model to reproduce the data. Another argument for this is the observed difference between the robustness of kinetic orders and that of rate constants, which suggests that the latter can vary more without affecting goodness of fit too much. These observations also suggest that alternative models are possible, so that more precise discrimination is needed. In conclusion, we have shown that splitting the squared-error objective into smaller sub-objectives, for a multi-objective (MO) approach, significantly speeds up convergence for EAs. Nevertheless, after a large number of iterations, final results are comparable. This could be because the approach forces the algorithm to fit all parts of the time series at the same time, instead of allowing it to converge more slowly by improving only some of the objectives. This is an advantage, especially when dealing with large-dimension problems, as performing a very large number of iterations is not viable. 
This suggests that, even when analysing only one gene at a time, we can still split the time series into shorter parts to speed up convergence in a MO setting. Further analysis, investigating to what extent this objective division is useful and at what point the overhead becomes greater than the gain, would be valuable. Two different approaches for GRN model parameter inference are advocated in the literature: finding relations for the entire network simultaneously, or dividing the problem and analysing one gene at a time. The argument in favour of division found in the literature is increased scalability, due to the decrease in the number of parameters (linear instead of quadratic dependency on the number of genes in the network), and ease of solution evaluation, as only the time series for the current gene needs to be simulated. However, these arguments do not take into account the fact that this method has to be iterated for all genes, so, ultimately, the number of parameters and the number of simulated time series are the same. Also, when simulating one series at a time, the values of the rest of the genes are taken to be those of the experimental data. The effect of the current gene on the others is therefore not taken into account, and this can give the impression of finding a good solution when, in reality, the difference between the data and the simulation in a whole-system setting could be larger. This effect is exacerbated for real noisy data. In order to compensate for this disadvantage, a complete network analysis can be performed, to fine-tune the parameters obtained for each gene in each sub-problem. In order to avoid the resource problem and be able to scale up even when analysing the entire network simultaneously, parallelisation is clearly desirable. 
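The objective-splitting idea discussed above can be sketched as follows; dividing the series into equal-length segments is an illustrative assumption:

```python
def split_objectives(observed, simulated, n_parts=4):
    """Split the single squared-error objective into one sub-objective
    per segment of the time series, for use in a multi-objective EA.
    Returns one squared-error value per segment."""
    t = len(observed)
    bounds = [round(i * t / n_parts) for i in range(n_parts + 1)]
    return [
        sum((o - s) ** 2 for o, s in zip(observed[a:b], simulated[a:b]))
        for a, b in zip(bounds, bounds[1:])
    ]
```

A multi-objective EA would then select on the vector of segment errors (e.g. by dominance), forcing candidate models to fit all parts of the series at once rather than trading one region off against another.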
In a parallel setting, division loses its advantages, becoming less viable than the complete network analysis, which can be parallelised in a more convenient way, avoiding the simulation of only part of the network when evaluating individuals. During our experiments, division proved to be more useful when analysing real data, statistically significant differences being observed in one of the small-scale experiments. Nevertheless, in both of these experiments, probably due to noise, the two methods analysing the complete networks failed to reproduce the time series, even for a small number of genes. However, a more detailed analysis, in a multi-CPU setting, is required with respect to their behaviour with real microarray data. Although microarray data provide measurements for a large number of genes, the number of time points available is usually not enough for a quantitative analysis of the underlying GRN. A promising direction is therefore the incorporation of prior knowledge. For instance, previously known interactions could be introduced during initialisation, and such links maintained until the end of the optimisation. Similarly, binding affinities and gene sequence structure could boost performance for the algorithms. This type of knowledge has been used before with a Bayesian model. Producing long time-series experiments is very costly and not feasible for most laboratories. However, short series from different sources, but describing the same process, are available. Nevertheless, no efforts have been made to combine these for model inference. It is possible, by using adequate normalisation techniques, to combine these heterogeneous datasets and model the common features. The same gain could be obtained by fitting different replicates of the same experiment as separate time series. 
This should also increase the ability of algorithms to handle noise, as, by combining data with heterogeneous perturbations, over-fitting of the noise is reduced. This article presented a comparison of existing methods for inferring parameters of continuous models of gene regulation, based on DNA microarray data. We have implemented seven algorithms. The best-performing methods, GA+ANN and DE+AIC, were able to reproduce the data while identifying, at the same time, some of the known interactions in the data. GLSDC also identified known interactions, but had limited ability to reproduce the data. The two methods analysing the entire network simultaneously (GA+ES and PEACE1) failed to reproduce real data, which suggests that existing methods are not as yet capable of simulating the entire network in a real experimental setting, even when analysing small-scale systems. We have shown that splitting the evolutionary algorithm objective into smaller sub-objectives, for a multi-objective approach, speeds up convergence. This suggests that, even when analysing only one gene at a time, we can still split the time series into shorter parts. Furthermore, we believe that using multi-objective optimisation along with a hybrid approach can improve learning performance. Importantly, it should be noted that a parallel implementation of the evolutionary algorithms is necessary for scaling up. Additional types of data could be used in order to underpin relationships between genes. These include: (1) ChIP-chip data and binding affinities, which identify which proteins bind to which genes, indicating possible interactions; (2) knockout microarray experiments, which allow mutant behaviour to be analysed; (3) protein-protein interactions, which indicate groups of co-regulated genes; and (4) miRNA interference data, which indicate other causes for a gene to be under-expressed. 
These data can potentially be included in the evolutionary algorithm in a multi-objective setting, in order to speed up convergence. EA: evolutionary algorithm; ES: evolution strategy; GA: genetic algorithm; GRN: gene regulatory network; ANN: artificial neural network; MSE: mean squared error; MO: multi-objective; GA+ES: method nesting a genetic algorithm with an evolution strategy; AIC: Akaike's Information Criterion; DE+AIC: method using differential evolution as a search strategy, and AIC-based fitness; GLSDC: method using genetic local search; PEACE1: iterative algorithm based on GA; CLGA: classic GA; MOGA: multi-objective GA; GA+ANN: method using an ANN as a model and GA for parameter inference. AS implemented the methods in Java and applied them to the different datasets. All three authors participated in the design of the experiments, interpretation of results, and composition and drafting of the manuscript. All authors read and approved the final manuscript. Implemented evolutionary algorithms for gene regulatory network inference: this PDF file gives details on the 7 algorithms implemented and analysed here. Using the framework: this PDF file provides information on downloading and using the Java implementation for algorithm comparison. EvA2 framework: this archive contains the code and resources published by the EvA2 authors (minimum corresponding code); more details on how to use it can be found in the additional files. Algorithm implementation: this archive contains the code for the seven methods implemented (corresponding application code); more details on how to use it can be found in the additional files. Data sets: an archive containing the datasets used for the experiments presented here."}
+{"text": "One of the challenges in exploiting high-throughput measurement techniques such as microarrays is the conversion of the vast amounts of data obtained into relevant knowledge. Of particular importance is the identification of the intrinsic response of a transcriptional experiment and the characterization of the underlying dynamics. The proposed algorithm seeks to provide the researcher a summary of various aspects of the dynamic progression of a biological system, rather than that of individual genes. The approach is based on the identification of a smaller number of expression motifs that define the transcriptional state of the system, which quantifies the deviation of the cellular response from a control state in the presence of an external perturbation. The approach is demonstrated on a number of data sets, including a synthetic base case and four animal studies. The synthetic dataset is used to establish the response of the algorithm on a \u201cnull\u201d dataset, whereas the four experimental datasets represent a spectrum of possible time-course experiments, both in the degree of perturbation associated with the experiment and in temporal sampling strategy. This wide range of experimental datasets allows us to explore the performance of the proposed algorithm and determine its ability to identify relevant information. In this work, we present a computational approach which operates on high-throughput temporal gene expression data to assess the information content of the experiment, identify dynamic markers of important processes associated with the experimental perturbation, and summarize in a concise manner the evolution of the system over time with respect to the experimental perturbation. 
With the advent of microarray technologies for measuring genome-scale transcriptional responses, there has been a renewed interest in using computational methodologies to study systemic responses. For deciphering the dynamics of biological responses, temporal gene expression experiments record transcriptional changes over time with the goal of establishing a broader range of co-expression characteristics. In this paper we hypothesize that an emergent relation between genes may be an important feature denoting the biological relevance of a gene, by virtue of its being part of a coherent response. This hypothesis arises from a basic concept of systems biology, in which the response of an organism to an external stimulus is made up of the synchronized response of a group of genes. In this paper we extend an analysis presented earlier. We first provide a short overview of the approach, so that the reader can follow the discussion without delving into the algorithmic details, which are discussed extensively in later parts of the manuscript. The algorithm is an integrative clustering and selection algorithm. Rather than selecting genes based upon differential expression, the algorithm selects patterns (motifs) within the data, based upon the over-representation of each specific pattern and its contribution to the overall response of the system. The proposed algorithm consists of two primary steps: (i) a fine-grained clustering algorithm to identify an extensive list of putative clusters, and (ii) a selection operation to determine which of the clusters are representative of the underlying response. The fine-grained clustering, based on a symbolic transformation, allows for the identification of a large number of possible expression motifs. The selection process which follows identifies the subset of most critical and characteristic responses. The combinatorial selection of the informative subset of expression motifs is performed using a greedy and/or a global method. 
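The symbolic fine-grained clustering step might look like the following sketch, in the spirit of what is described above rather than the paper's exact scheme: each z-normalised profile is discretised into a string of level symbols, and profiles sharing a string form one putative motif. The three-level alphabet and shared bin edges are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

def expression_motifs(expr, n_bins=3):
    """Fine-grained clustering via a symbolic transformation: map each
    z-normalised expression profile to a symbol string and group genes
    whose strings are identical into one putative motif."""
    expr = np.asarray(expr, dtype=float)
    z = (expr - expr.mean(axis=1, keepdims=True)) / (
        expr.std(axis=1, keepdims=True) + 1e-12)
    edges = np.linspace(z.min(), z.max(), n_bins + 1)[1:-1]  # shared bin edges
    motifs = defaultdict(list)
    for gene, profile in enumerate(z):
        word = "".join(chr(ord("a") + int(np.digitize(v, edges))) for v in profile)
        motifs[word].append(gene)
    return dict(motifs)
```

Because the transformation is shape-based, genes with proportional profiles at very different absolute levels still land in the same motif.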
We have identified two metrics for quantifying deviations from homeostasis: a global metric, denoted by \u0394, and a time-dependent one, termed the transcriptional state and denoted by \u0394(t). Our underlying hypothesis is that only informative motifs should contribute to deviations of these metrics from homeostasis. In the case of the circadian dataset, the application of the greedy selection demonstrates that the incorporation of additional motifs past a certain maximum (24) does not introduce any new information, as indicated by the decrease in the transcriptional state. Associated with this maximum in the transcriptional state, max(\u0394(t)), are the plot of the transcriptional state \u0394(t) vs. time and the twenty-four clusters associated with the selection. Here, we are plotting the deviation of the set of informative genes from the control state at time 0. The interesting feature of this plot is that the transcriptional state appears to change in a periodic manner, with spikes present at 12 and 36 hours, accounting for a 24-hour periodicity within the data. Thus, while the proposed algorithm has utilized no specific information about the periodicity of the data, it was still possible to discern the underlying pattern. This is in contrast to the original analysis utilizing the Lomb-Scargle algorithm, which assumes periodicity. While not all of the selected clusters have a clear 24-hour periodicity, the majority do have a 24-hour oscillatory behavior. Beyond the maximum, as more motifs are added, there is a decrease in the deviation exhibited by the transcriptional state. 
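A minimal sketch of a time-dependent deviation metric in the spirit of Δ(t) follows, assuming the deviation is measured as the mean absolute difference of the informative genes from their control (time 0) profile; the paper's exact formula may differ:

```python
import numpy as np

def transcriptional_state(expr, informative_genes):
    """For the selected informative genes, quantify how far the system at
    each time point lies from the control state at time 0 (mean absolute
    deviation from the t=0 column, one value per time point)."""
    sub = np.asarray(expr, dtype=float)[informative_genes]
    return np.abs(sub - sub[:, :1]).mean(axis=0)
```

Plotting the returned vector against the sampling times gives the Δ(t)-vs-time curve discussed for each dataset.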
In the case of the acute administration of corticosteroids, the maximum deviation of the transcriptional state occurs at some intermediate number of motifs. In a similar fashion to the circadian dataset, the decrease in the value of the transcriptional state beyond this point is associated with the incorporation of motifs which are either very similar to ones already added, and thus add no information, or are sparsely populated and thus not significant. Under a chronic administration of corticosteroids, we identify a similar level of over-representation in the population dynamics as in the acute administration. While the level of correlation associated with this dataset is not as low as that of the acute corticosteroid dataset, it is evident that there exists a subset of motifs showing a significantly non-exponential characteristic. These motifs follow two distinct patterns. The first is an initial deviation away from the baseline state followed by a return to baseline, whereas the second represents a sustained up-regulated response. This response is unusual because it differs from those observed in the acute case, where all of the profiles had similar characteristics, albeit anti-correlated, with a few being up-regulated and the others down-regulated. With chronic administration, the responses appear very different: one of the profiles shows a diminished response despite a sustained input of corticosteroids, whereas other profiles show a sustained response due to the infusion. 
During the greedy selection process for this dataset, we see a response which is qualitatively similar to those of the acute corticosteroid case and the circadian dataset, in which a maximum is reached at an intermediate number of clusters (4), after which there is a decline. The transcriptional state for this drug administration shows a similar dynamic: an initial deviation and a slight return to baseline, before a second sustained response takes over, with a more pronounced two-stage effect in which there is an initial deviation away from homeostasis, a slight return, and then a secondary effect. Thus, the small decrease that was evident in the greedy selection was not an artifact of the data, but rather some intrinsic event. The burn injury dataset (GDS599) yielded 4 clusters with 281 probes under the greedy selection and 5 clusters with 307 genes under the optimal selection. The global dynamics exhibit a two-wave response, consistent with the notion of genetic reprogramming as previously documented in vivo and in vitro. The optimal selection was conducted upon a subset of 10 clusters, all of which had at least 49 genes within them. The profiles associated with the four clusters can be described as an early up-regulation event which returns back to a different state, two bi-phasic responses containing genes which spike at two different time points, and finally a late up-regulatory event. In contrast to the result of the greedy selection, the optimal selection shows a clear progression in the activation of different genes. 
In the optimal selection, our first cluster shows a bi-phasic response similar to the one selected under the greedy selection, whereas our other clusters essentially show spikes at different time points. This indicates a cascade of events occurring in sequence, with each spike marking a short period during which a specific stage of the cascade is active. Unlike the corticosteroid datasets, the optimal selection yielded some clusters which were qualitatively different from those of the greedy selection, specifically gene expression profiles which spiked at different time points. However, through the examination of the transcriptional state, we are able to draw a link between the two results. It is observed that the burn injury appears to produce an initial deviation as the organism responds to the original stimulus. Then, after the cessation of the initial response of the system, there is a slight return to the baseline at hour four. However, in both cases there is a massive secondary event which occurs that drives the system either to a new steady state, as suggested in the case of the optimal selection, or uncontrollably, as in the case of the greedy selection. Therefore, while the transcriptional state of the system from hours 0-8 appears to be consistent, the final response at 24 hours appears to be different. Because of these inconsistencies in the burn dataset, numerous questions arise: specifically, whether the inconsistencies between the two different selection methods represent an artifact within the algorithm itself, or whether there is some relationship between the two different results which, if resolvable, may be more indicative of the underlying biological response, as well as aid in understanding the nature of the differences between the selection techniques.
When evaluating the population dynamics of the different datasets, it was observed that the circadian dataset showed a population distribution very similar to that of the null dataset. However, when performing the greedy selection on the null dataset and the circadian dataset, we can see a clear difference between the progression of their transcriptional states. For the null dataset we see that the maximum deviation from the baseline occurs when a single cluster has been added; the incorporation of additional clusters, on the other hand, serves to decrease the deviation in the transcriptional state. Thus, it appears that the incorporation of additional patterns into an informative set only serves to add \u201cnoise\u201d into the system rather than additional information. Therefore, despite the fact that the population dynamics of the circadian dataset and the null dataset appear similar, this does not discount the possibility that significant coordination between the different clusters exists. The micro-clustering performed has allowed us to identify a large family of clusters. Depending on the nature of the datasets, we expect an over-representation of certain cluster sizes, indicative of coordinated responses that cannot be solely explained by random events. For the burn dataset, we see a simple trend, where a maximum is reached at an intermediate number of clusters, followed by a smooth decline after this point has been reached. For the two corticosteroid datasets, a similar response is observed, with a maximum reached at an intermediate number of clusters before the incorporation of additional clusters decreases the metric. From the response of the burn dataset as well as the responses of the two corticosteroid datasets, we hypothesize that the maximum number of informative motifs has indeed been obtained, and that after the maximum deviation has been reached there is a smooth reduction in the aggregate transcriptional state due to the addition of similar gene expression profiles.
This is seen in the greedy selection as well, in which after a certain point we see a consistent decline in the transcriptional state. In an alternative strategy, we examine the performance of the greedy selection and observe where the globally optimal motifs lie. One of the difficulties associated with the optimal selection of motifs lies in the combinatorial nature of the problem. Thus, even after eliminating a large number of motifs via their population, the combinatorial problem is not eliminated, only mitigated. Adding to this issue is the fact that the problem must be solved parametrically. Currently, we perform an exhaustive search on all possible combinations of m optimal motifs from a base population of M. For the burn dataset, there were only 10 motifs which were over-represented, so parametrically exploring all possible cluster sizes was possible. However, for the two corticosteroid datasets this was not the case; in both of the corticosteroid datasets, we solved the problem parametrically for m < 7. By plotting the progression of the cumulative transcriptional state for the burn dataset, this trend can be observed directly. One issue of concern for us is the reliance upon over-represented motifs when conducting the optimal selection. Because we are using an exhaustive enumeration of motifs, it is critical for us to identify a subset of possibly meaningful motifs. However, in the case of the circadian dataset, we are unable to identify over-represented motifs, and would thus have to run the selection upon all of the motifs in the dataset. This combinatorial problem has not been addressed in the current algorithm, but can be addressed by more complex heuristics that can be implemented in the future.
In the corticosteroid datasets, the dynamic response under the acute case mimicked the response predicted by the indirect effect model, in which the dynamic showed a time lag before the maximal activity was reached, and a decline as the drug was cleared from the system. In the chronic corticosteroid case, we saw profiles that corresponded well with the observations that some corticosteroid-responsive phenomena were transient, exhibiting a significant tolerance mechanism, whereas other corticosteroid-responsive phenomena, such as muscle wasting, were sustained. Finally, in the case of the burn dataset, our profiles appeared to illustrate the impact of a signaling cascade, with significant genes turning on and off in sequence, signaling the short-term progression of pathways the organism uses to respond to the severe injury. The proposed algorithm represents a different method for processing high-throughput temporal gene expression data. Rather than assessing the importance of a single gene on a case-by-case basis, we instead propose examining the importance of a specific pattern. Furthermore, the importance of this pattern is evaluated within the context of its contribution to an inherent underlying dynamic which is not known a priori. Associated with this underlying dynamic are dynamic signals whose activity shows important correlations with the underlying phenomena being investigated. We have structured a compendium of transcriptional responses in order to elucidate the insights of the overall approach. A synthetic dataset is created where values are drawn from a normal distribution in order to illustrate basic properties of the calculations.
In addition, four experimental datasets are evaluated; the raw data can be found in the Gene Expression Omnibus database. The first experimental dataset is listed under accession number GDS1629. The second dataset is listed under accession number GDS253. In this experiment, a significant, yet reversible, perturbation has occurred to the system, such that there should be a clear deviation away from the baseline case followed by a return to the control state. This case was selected to validate that the presence of a significant perturbation is visible, along with the non-randomness of the dataset. This dataset has the added advantage of having a well-characterized mechanism, which allows for the assessment of whether the temporal variations in the transcriptional state have meaning with respect to the underlying biological phenomenon. Given the number of time points associated with this dataset, this is the only dataset which was run with piecewise averaging. This dataset was run with a piecewise averaging of 2, such that adjacent points are averaged to obtain a single data point. Because 17 time points do not divide evenly by 2, this dataset was extrapolated to 18 time points, with the final time point occurring at 80 hours. The next dataset is listed under accession number GDS972. The final dataset which is evaluated is listed under accession number GDS599. This dataset represents a serious cutaneous burn administered to a rat over 30 percent of the skin. After the administration of the burn, the livers were harvested at 5 individual time points, and the gene expression data were obtained using the RG-U34A microarray. Unlike the corticosteroid datasets, in which there is a single reversible perturbation to the system, this final dataset represents the induction of a complex series of events in response to the severe injury.
Thus, this dataset will be used to investigate the ability of the algorithm to identify significant and salient changes within the system in response to a complex phenomenon. The preliminary step, i.e., the fine-grained clustering operation, divides the temporal expression data into a large number of clusters in which the similarity between the different expression profiles in a cluster is expected to be very high. As such, any clustering algorithm could in principle accomplish this first task; however, we have elected to explore the basic principles of the HOT SAX representation. In order to emphasize the role of the shape of the responses, the data are first z-score normalized such that all of the expression profiles are of the same magnitude. The set AB defines the so-called \u201calphabet\u201d, which is a set of symbols with cardinality equal to the number of equiprobable partitions of the Gaussian curve. Each gene expression profile i(t) is hashed to a unique identifier, Hi, and each such identifier corresponds to a cluster. By definition there are a finite number of hash values, which corresponds to the maximum possible number of clusters in the data. From (2) it is evident that the number of possible values for Hi, and therefore of clusters, is related to the length of the signal and the number of possible symbols into which the gene expression profile has been broken up. The use of the equiprobable distribution for the discretization step is important because signals will be assigned to the different clusters with equal probability, provided that the signals were randomly generated from a normal distribution. After a gene expression profile has been converted into a sequence of symbols, the sequence is converted into an integer through the use of an appropriate hashing function. Because of the underlying equiprobable distribution associated with HOT SAX, randomly generated expression profiles will be assigned different hash values with equal probability.
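The discretize-and-hash step described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the helper name `sax_hash` and the base-AB positional hash are assumptions (the text only requires that each symbol sequence map to a unique integer), and the sketch discretizes each time point directly, without the piecewise averaging mentioned earlier.

```python
import numpy as np
from scipy.stats import norm

def sax_hash(profile, ab=4):
    """Discretize a z-scored expression profile into `ab` equiprobable
    symbols and hash the symbol sequence to a single cluster identifier."""
    z = (profile - profile.mean()) / profile.std(ddof=0)
    # Breakpoints cutting the standard Gaussian into `ab` equal-probability bins.
    breakpoints = norm.ppf(np.arange(1, ab) / ab)
    symbols = np.searchsorted(breakpoints, z)          # values in 0..ab-1
    # Treat the symbol string as a base-`ab` integer: one hash per profile.
    return int(sum(s * ab**k for k, s in enumerate(symbols)))
```

Profiles with identical hash values fall into the same micro-cluster; because the bins are equiprobable under the Gaussian, a randomly generated profile is equally likely to land in any symbol at each time point.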
Because of the equi-probable assignment of hash values with respect to randomly generated data, the population of a given cluster can be modeled via a Poisson process. In the case where there exist approximately as many possible hash values as genes to be hashed, this Poisson distribution can be approximated by an exponential distribution. To evaluate whether a significant perturbation exists within the data, the hash-based clustering is run on the experimental data and a distribution of cluster membership is obtained. This is compared to a synthetic null dataset generated from random data with the same number of genes and time points as the experimental dataset. A standard permutation analysis is performed for estimating the statistical significance of the result: the p-value is calculated as the probability that a randomly generated dataset shows a lower R2 correlation to the exponential distribution than the real data, to establish the confidence that the data are non-random. We expect that if there is no significant stimulation within the experiment the p-value should be quite high, and if there is a significant stimulation within the system the p-value should be quite low, i.e., statistically significant, suggesting that the system deviates reliably from the hypothetical exponential distribution. The behavior of HOT SAX on randomly generated data thereby allows us to select the parameter AB in a systematic manner. In a real dataset, it is hypothesized that significant coordination will occur, and therefore the performance of the hashing operation should show deviations from the theoretical exponential distribution.
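A minimal sketch of this significance test follows, under two stated assumptions: the "R2 correlation to the exponential distribution" is approximated by the R2 of a straight-line fit to the log cluster-size histogram (an exponential population is linear on that scale), and the null is simulated by hashing genes uniformly at random into clusters, which is what the equiprobable alphabet implies for random data.

```python
import numpy as np

def r2_vs_exponential(sizes):
    """R^2 of a straight-line fit to the log histogram of cluster sizes;
    an exponential (geometric) size distribution is linear on this scale."""
    vals, counts = np.unique(sizes, return_counts=True)
    y = np.log(counts.astype(float))
    if y.var() == 0.0:
        return 1.0  # degenerate histogram: treat as a perfect fit
    slope, intercept = np.polyfit(vals, y, 1)
    resid = y - (slope * vals + intercept)
    return 1.0 - resid.var() / y.var()

def perturbation_pvalue(obs_sizes, n_genes, n_clusters, trials=200, seed=0):
    """Fraction of random (uniformly hashed) datasets whose exponential fit
    is *worse* than the observed one; a low value flags a real perturbation."""
    rng = np.random.default_rng(seed)
    r2_obs = r2_vs_exponential(obs_sizes)
    worse = 0
    for _ in range(trials):
        sizes = np.bincount(rng.integers(0, n_clusters, size=n_genes))
        if r2_vs_exponential(sizes[sizes > 0]) < r2_obs:
            worse += 1
    return worse / trials
```

A dataset with heavily over-populated clusters departs from the log-linear shape, so almost no random trial fits worse and the p-value is near zero.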
Thus, the HOT SAX algorithm should be run on a given dataset with AB varied parametrically, and the AB which corresponds to the lowest correlation to the null response will be chosen as the optimal AB. The majority of approaches for analyzing time-course gene expression data are based on the fundamental premise that over-populated motifs are indicative of significant events, and thus search for them as the main priority. In order to address these issues we introduce a term which we denote as the \u201ctranscriptional state\u201d. The transcriptional state is a metric which quantifies the deviation from a control. The control state can be arbitrarily defined, since we are interested in deviations and not absolute values. We assume that the \u201ccontrol\u201d state corresponds to time t\u200a=\u200a0, i.e., right before the systemic perturbation, if any. The baseline state is defined as the distribution of expression values of a set of genes at the control state. To quantify the deviation from this baseline state, the difference between the distribution at any future time t and the control state is evaluated. To do so, we make use of the Kolmogorov-Smirnov (KS) statistic; it has been shown that if one takes a large enough set of genes, the distribution of values is expected to follow the log-normal distribution. The scalar quantifying the difference between the two distributions at each time point is thus the KS distance between them. The sequence \u0394(t) is defined as the transcriptional state of the system, as it quantifies the level of transcriptional deviation from a control state. In order to evaluate a time-independent metric of the difference between the two distributions, any norm can be used on \u0394(t); the use of the L1 norm quantifies the deviation over all time points during the duration of the experimental protocol.
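The transcriptional state and its L1 aggregation can be sketched directly with SciPy's two-sample KS statistic; this is an illustrative reconstruction, with the function names chosen here for clarity.

```python
import numpy as np
from scipy.stats import ks_2samp

def transcriptional_state(expr):
    """expr: genes x timepoints matrix. Delta(t) is the KS distance between
    the expression distribution at time t and the t = 0 (control) one."""
    control = expr[:, 0]
    return np.array([ks_2samp(expr[:, t], control).statistic
                     for t in range(expr.shape[1])])

def aggregate_deviation(expr):
    """L1 norm of Delta(t): total deviation over the whole experiment."""
    return transcriptional_state(expr).sum()
```

By construction Delta(0) = 0, and a time point whose expression distribution is completely separated from the control reaches the maximum KS distance of 1.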
We opt for the simplest evaluation, using the L1 norm. Having defined a metric quantifying the deviation of the current state from a control, the selection step of the algorithm identifies a subset of motifs composed of genes whose transcriptional state is responsible for the maximum deviation from the control state. Two interesting properties of the transcriptional state are worth exploring further. The first relates to the changing character of the transcriptional dynamics, i.e., the deviation from the control, as more and more clusters are added. Based on the hypothesis that the totality of the transcriptome hides the informative components of the response, one should expect the deviation to peak for some intermediate subset of clusters. However, the identification of an informative subset of motifs represents a difficult combinatorial problem. Given that the number of possible motifs is AB^T, where T is the number of piecewise-averaged time points, the number of combinations that need to be evaluated is computationally intractable. To compensate for the combinatorial nature of the problem, we propose two different methods for carrying out the selection of informative motifs. The first method which we propose is the use of a greedy algorithm. The advantage of utilizing a greedy selection lies in the fact that the combinatorial nature of the problem is ignored, at the cost of finding a sub-optimal though possibly \u201cgood enough\u201d solution. For the second method, rather than evaluating all AB^T different motifs, we limit our evaluation to over-populated motifs. Thus, by limiting the superset to only the over-populated motifs, the number of combinations that must be evaluated is decreased to a more tractable number. To perform this reduction, we define an over-populated motif as a motif whose population is greater than would be expected if the HOT SAX hashing algorithm were performed upon a randomly generated dataset comprising the same number of probe sets and time points as the dataset being evaluated.
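The greedy variant can be sketched as below; it is a plausible reading of the text, not the authors' implementation. At each step it adds the cluster that most increases the aggregate KS-based deviation of the pooled genes, stopping when no cluster helps.

```python
import numpy as np
from scipy.stats import ks_2samp

def deviation(expr):
    """L1-over-time KS distance between each time point's expression
    distribution and the t = 0 control distribution."""
    control = expr[:, 0]
    return sum(ks_2samp(expr[:, t], control).statistic
               for t in range(1, expr.shape[1]))

def greedy_select(clusters):
    """clusters: list of (genes x time) arrays. Repeatedly add the cluster
    that most increases the aggregate deviation; stop when none helps."""
    chosen, best = [], 0.0
    remaining = list(range(len(clusters)))
    while remaining:
        scored = [(deviation(np.vstack([clusters[i] for i in chosen + [j]])), j)
                  for j in remaining]
        score, j = max(scored)
        if score <= best:
            break
        chosen.append(j); remaining.remove(j); best = score
    return chosen, best
```

This reproduces the behaviour described above: once the informative clusters are in, adding a flat or redundant cluster dilutes the pooled distribution and the deviation drops, so selection halts.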
An alternative method for selecting an optimal subset of motifs is to limit the set of motifs that will be considered: rather than considering the full superset of AB^T motifs, only the over-populated ones are evaluated. After the initial set of motifs has been filtered, we generate all possible combinations of motifs and evaluate the value of their transcriptional state; as in the greedy selection, the set of motifs which yields the maximum transcriptional state is identified as the informative subset. The advantage of utilizing this method is that, unlike with the greedy algorithm, we can be sure that the set of motifs is indeed optimal. However, while this filtering step eliminates a large number of combinations that need to be considered, it still requires the evaluation of a large number of possible combinations and is thus computationally expensive. Because the result of the HOT SAX algorithm itself depends upon the selection of the alphabet, AB, we further investigated how well the datasets correspond to the underlying exponential distribution as the value of AB is altered parametrically. Therefore, the previous evaluation of whether a dataset contains a significant perturbation was re-run by varying the AB parameter from 3 to 5, which are commonly used alphabet sizes, examining the R2 value of the corresponding fit to an exponential. It was observed that by setting the AB parameter to 3, one is able to maximize the deviation away from the exponential distribution for the corticosteroid datasets, whereas an AB parameter of 4 maximized the deviation away from the exponential distribution for the circadian and burn datasets. Furthermore, the response of the randomized datasets stayed at an R2 of 0.9 or higher under all alphabet sizes, thus establishing a cutoff for us to determine explicitly whether a dataset represents a significant perturbation or not.
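The exhaustive selection over the filtered motifs reduces to enumerating subsets, which can be sketched as follows; the deviation metric is passed in as a callable (e.g. the KS-based transcriptional state described in the text), and the `max_size` cap mirrors the parametric restriction to m < 7 used for the corticosteroid datasets.

```python
from itertools import combinations
import numpy as np

def optimal_select(clusters, deviation, max_size=5):
    """Exhaustively evaluate all subsets (up to max_size) of the
    over-populated clusters; return the subset with maximal deviation."""
    best_subset, best_score = (), -np.inf
    for m in range(1, max_size + 1):
        for subset in combinations(range(len(clusters)), m):
            score = deviation(np.vstack([clusters[i] for i in subset]))
            if score > best_score:
                best_subset, best_score = subset, score
    return list(best_subset), best_score
```

Unlike the greedy search, this guarantees a global optimum over the filtered motifs, at a cost of roughly sum over m of C(M, m) metric evaluations for M over-populated motifs.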
From this behavior, we hypothesized that the selection of the AB parameter should aim to maximize the observed deviation. Thus, to conduct the selection of informative motifs, we have elected to utilize the results from the parametric evaluation and select an AB of 3 for the corticosteroid datasets, and an AB of 4 for the circadian and burn datasets. Despite the fact that the circadian dataset does not illustrate any defining perturbation, the selection of an AB of 4 allows us to maintain consistency. For the optimal alphabet size, we then evaluate the population distribution of the individual datasets."}
+{"text": "In a time-course microarray experiment, the expression level for each gene is observed across a number of time-points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results from applying our method to a case study on Caenorhabditis elegans, for which the expression level depends on the dauer state, and discuss potential extensions. For each of the m genes, a time-trajectory function gkj is evaluated at t > 0; the parameter \u03bb is the smoothing parameter. Here gkj is estimated based on the expressions belonging to group k for gene j. For gene j, the hypotheses of interest are formulated as testing H0j: g1j(t) = ... = gKj(t) for all t > 0 versus the alternative that the group trajectories differ for some t > 0. Under the null, g will no longer depend on k. As such, under the null we will consider estimating a single time-trajectory for each gene. Note that under the null, the function gj is estimated based on all observations of gene j, regardless of group membership, by pooling within each time-point. A common set of spline basis functions is used for all m genes, and the unknown parameters for gene j are denoted by \u03b2kj.
As discussed in the methods, the estimate for group k is then obtained by minimizing the corresponding objective function. The testing procedure can be summarized as follows. 1. For each permutation sample b = 1, ..., B, compute a permutation replicate of the test statistic for each gene. 2. From the b-th permutation data, calculate the maximum test statistic across all genes. 3. Single-step procedure to control the FWER: (a) for gene j, calculate the adjusted p-value as the proportion of the B permutation replicates for which the maximum statistic exceeds the observed statistic for gene j; (b) for a specified FWER level \u03b1, consider gene j significant if its adjusted p-value is less than \u03b1. We will illustrate the method using the Caenorhabditis elegans dauer developmental data (K = 2). For notational brevity, we will refer to genes whose time-trajectory depends on the factor as prognostic, and non-prognostic otherwise. Similarly, we will refer to the genes whose corresponding FWER P-value is less than the given nominal level as significant, or non-significant otherwise. For all of these illustrations, the knots are placed at the 1/(p - 1), 2/(p - 1), ..., (p - 2)/(p - 1), 1 quantiles of the observed time points pooled across both samples; we set p = 4. Our method is developed within the framework of the R statistical environment, and the function rq from the quantreg package is used to fit the quantile regressions. We investigate the performance of our method by conducting an extensive simulation study, followed by a discussion of the application of our method to the case study. For the simulation study, the expressions will be generated from an outlier-contaminated additive error model in which \u03bckj(tl) denotes the time-trajectory function at time tl for group k. For non-prognostic genes, we will set \u03bcj1(t) = \u03bcj2(t) = 0, while for prognostic genes we will specify \u03bcj1(t) = 0 and \u03bcj2(t) = 1.5e-t respectively. The error terms are mutually independent and identically distributed according to a standard normal law. The second term in the model, aklij, represents the random outlier, which will assume a value of 4, 0, or -4 with probabilities \u03c0/2, 1 - \u03c0 and \u03c0/2 respectively. In the case of a normal law, the mean and median coincide.
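The single-step FWER adjustment above (a Westfall-Young-style max-T procedure) can be sketched as follows; the function name and array layout are illustrative, and in practice the observed data are usually included among the permutations.

```python
import numpy as np

def maxT_adjusted_pvalues(stats, perm_stats):
    """Single-step max-T adjustment.
    stats:      observed statistic per gene, shape (m,).
    perm_stats: statistics recomputed on B permuted datasets, shape (B, m).
    Adjusted p-value for gene j = fraction of permutations whose maximum
    statistic over all genes reaches the observed statistic for gene j."""
    max_per_perm = perm_stats.max(axis=1)                       # (B,)
    return (max_per_perm[:, None] >= stats[None, :]).mean(axis=0)
```

Comparing each gene against the permutation distribution of the *maximum* statistic is what provides strong control of the family-wise error rate.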
As such, in the absence of the outlier effect, the quantile function gkj(t) = \u03bckj(t) for all t > 0. For these simulations, we will adopt a design similar to the Caenorhabditis elegans dauer developmental data, by choosing 11 time points t = 0, 1, 2, 3, 4, 5, 6, 7, 8, 10, and 12. We will generate four replicates at each time-point from each group. To evaluate the FWER, we generate m = 200 non-prognostic genes. A block exchangeable correlation structure with correlation coefficient \u03c1 and block sizes of 10 is imposed on the measurement errors. The null distribution of the test statistic is approximated from B = 200 resampling (permutation or bootstrap) replicates. The empirical FWER is computed as the proportion of samples rejecting \u210d0 by our testing procedure at a two-sided FWER level of 0.05 among N = 200 simulations. Simulation results are reported in the corresponding table. As shown there, the empirical FWER of the parametric regression bootstrap is inflated when \u03c0 = 0.05, and by a factor of six when \u03c0 = 0.1. This can be explained by the fact that the parametric regression bootstrap is based on the assumption of homoscedasticity of the error terms. Under simulation model 1, the error terms, as the sum of the outlier and measurement error components, although no longer normally distributed, are identically distributed within and among all time-points and groups. Under simulation model 2, the error terms are no longer homoscedastic; as such, it is not surprising that the parametric bootstrap performs poorly there. To investigate the global power of this procedure, we generate 10 prognostic genes and 190 non-prognostic genes. A correlation structure similar to that of the FWER case is specified. The results for \u03c0 = 0.2 follow a trend similar to the results for \u03c0 = 0.05 and \u03c0 = 0.1 under simulation models 1 and 2.
We also evaluated different proportions: 20 prognostic genes (10%) and 180 non-prognostic genes (90%), and 5 prognostic genes (2.5%) and 195 non-prognostic genes (97.5%). These results follow a trend similar to the case of 10 prognostic genes (5%) and 190 non-prognostic genes (95%). We likewise evaluated the empirical power and FWER under simulation models 1 and 2. In this section, we summarize the results from applying our proposed method to the Caenorhabditis elegans dauer data. Wang and Kim use cDNA microarrays with m = 18,556 genes. For the experimental group, the worms are harvested at 0, 1.5, 2, 3, 4, 5, 6, 7, 8, 10, and 12 hours after feeding, with three to four replicates at each time-point. For the control group, worms are harvested at 0, 1, 2, 3, 4, 5, 6, 7, 8, 10, and 12 hours after feeding, with four replicates at each time-point. For this illustration, we have set the t = 1.5 time-point in the experimental group to t = 1. This data set is available for download from http://cmgm.stanford.edu/~kimlab/dauer/. For the case study based on the Wang and Kim data, the adjusted P-values are shown in the corresponding figure, and we provide a Venn diagram of the set of genes identified as significant at the 0.05 FWER level. To ignore vertical shifts, the K sample means for each group are computed and the algorithm is then applied to the centered versions of the observed expressions. Although the illustrations use K = 2, as shown in the methods section, the method can be extended to the case where K > 2. The method can also easily be extended to account for multiplicity by controlling a false-discovery rate (FDR): the unadjusted permutation P-value for gene j follows from the notation in the algorithm presented in the methods section, and FDR-adjusted P-values can then be computed based on these unadjusted P-values. We finally note that the method can also be applied in the one-sample case. In this setting, one is interested in identifying genes whose time-trajectories are time-dependent.
The marginal hypotheses are formulated as testing Hj : gj(t) = cj for all t > 0 versus Hj : gj(t) \u2260 cj for some t > 0, for a constant cj. As under the null all of the expressions are exchangeable, the null sampling distribution is generated by permuting the n expressions observed for gene j. For many regression problems, the target function to be estimated is the mean of the distribution of the outcome conditional on a set of co-variables. In a time-course microarray experiment, this would correspond to the mean of the expression profile over time. In this paper, we have proposed to estimate the conditional quantile, rather than the conditional mean, of the distribution of the outcome variable as a function of time. Specifically, we use the special case of the median. Consider the standard additive mean regression problem of the form Yi = g(ti) + \u03f5i, where g(t) is the conditional mean of Y at time t and \u03f5 is a mean-zero error term. One criterion that is often used to find an estimate of g is to minimize \u03a3i(Yi - g(ti))2. Restricting this optimization to the set of linear functions yields the standard least-squares estimate. Optimizing over the set of all \"smooth\" functions yields an estimator that interpolates the observations. As a balancing act between these two extremes, one may consider optimizing the penalized criterion \u03a3i(Yi - g(ti))2 + \u03bb\u222b(g''(t))2dt, where \u222b(g''(t))2dt is a so-called penalty term. In these discussions, we have assumed that any difference among the time trajectories, including vertical shifts, is biologically relevant and of interest. In some applications, one may want to ignore vertical shifts, as these may often be caused by batch effects, and primarily focus on genes for which there are actual differences among the time trends. The procedures we have discussed can be easily modified to accommodate this situation, for example by centering the expressions within each group before applying the algorithm.
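The robustness gained by replacing the squared loss with the absolute loss is easy to see in the simplest case of fitting a constant: the minimizer of the squared loss is the sample mean, while the minimizer of the absolute loss is the sample median. A tiny illustrative sketch (the numbers are made up):

```python
import numpy as np

# A well-behaved sample plus one gross outlier, e.g. a corrupted spot.
y = np.array([1.0, 1.1, 0.9, 1.05, 0.95])
y_outlier = np.append(y, 50.0)

# Squared-loss fit (the mean) is dragged toward the outlier;
# absolute-loss fit (the median) barely moves.
mean_fit = y_outlier.mean()
median_fit = np.median(y_outlier)
```

A single outlier shifts the mean from about 1 to above 9, while the median stays near 1, which is exactly the behaviour that motivates the median-regression approach of the paper.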
The amount of smoothing is determined by the parameter \u03bb \u2208 [0, \u221e). The estimation procedure used in this paper is based on a similar regularization approach, where the terms (yi - g(ti))2 are replaced by \u03c1(yi - g(ti)) and the penalty term \u222b(g''(t))2dt is replaced by \u222b|g''(t)|dt. We proposed a robust method for identifying genes whose time trajectories depend on a phenotypic or experimental factor. Furthermore, we proposed a multiple testing procedure to adjust for multiplicity. Our method is based on penalized median regression. IS proposed the research project. IS and KO performed the statistical analysis and wrote the manuscript. SLG and SHJ contributed to the research and critically revised the manuscript. SK conducted the biological interpretation of the statistical analysis results. All authors read and approved the final manuscript. Properties of 40 genes that are discovered only by the proposed method: the supplementary data provide the biological properties of these 40 genes. Click here for file"}
+{"text": "Time-course gene expression analysis has become important in recent developments due to the increasingly available experimental data. The detection of genes that are periodically expressed is an important step which allows us to study the regulatory mechanisms associated with the cell cycle. In this work, we present the Laplace periodogram, which employs the least absolute deviation criterion to provide a more robust detection of periodic gene expression in the presence of outliers. The Laplace periodogram is shown to perform comparably to existing methods for the Saccharomyces cerevisiae and Arabidopsis time-course datasets, and to outperform existing methods when outliers are present. Time-course gene expression data are often noisy due to the limitations of current technology, and may include outliers. These artifacts corrupt the available data and make the detection of periodicity difficult in many cases. The Laplace periodogram is shown to perform well both for data with and without the presence of outliers, and also for data that are non-uniformly sampled. The cell-division cycle is regulated by a complex interaction of a set of mechanisms which include genes such as cyclins and cyclin-dependent kinases (CDKs). These genes are known to be expressed periodically with respect to the cell-division cycle. For time series samples y1, ..., yN, the classical periodogram is computed for frequencies \u03c9 \u2208 (0, \u03c0). For the Fourier frequencies indexed by k = 1, 2, ..., the classical periodogram can alternatively be written as a least-squares regression of y on the regressor xt = [xt,1, xt,2]T, where xt,1 = cos(\u03c9t) and xt,2 = sin(\u03c9t). To solve for the LAD coefficients, we can convert the problem into a set of equations and constraints to be solved using linear programming. Let \u03b3 denote the coefficient vector, and write each residual as yt - xtT\u03b3 = ut - vt, where ut and vt are non-negative variables.
By setting \u03b3j = bj - cj, where bj and cj are non-negative variables, we can obtain the best L1 approximation by solving the resulting linear programming problem. To solve the LAD approximation for non-uniformly spaced samples, we follow the same steps to solve for the LAD coefficients, where [t1, t2, ..., tN] are the N non-uniformly sampled time instants. In this formulation, the LAD coefficients can be easily solved for using standard algorithms for linear programming problems. For our implementation in MATLAB, we used the LINPROG function in the Optimization Toolbox. In terms of computational time required to process the data, for experiment Alpha, which consists of 6075 genes with 18 samples each, the total time to compute 1000 permutations for the p-value analysis is approximately 24 hours on a Pentium Core 2 CPU at 2.66 GHz, which is similar to the amount of time taken by the M-estimator-based method, also implemented in MATLAB using the ROBUSTFIT function in the Statistics Toolbox. KL implemented the Laplace periodogram in MATLAB, performed the simulations and comparisons, and contributed to the writing of the draft. TL developed the Laplace periodogram in his earlier work. Both XW and TL conceived of the project and coordinated its implementation."}
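The LP formulation above translates directly into code. The sketch below uses SciPy's linprog rather than MATLAB's LINPROG, with variables ordered as [b, c, u, v] and the equality constraint Xb - Xc + u - v = y; the N/4 scaling of the squared coefficient norm follows the usual definition of the Laplace periodogram and should be treated as an assumption here.

```python
import numpy as np
from scipy.optimize import linprog

def lad_harmonic_fit(y, t, omega):
    """Least-absolute-deviation fit of y to [cos(omega t), sin(omega t)],
    posed as a linear program with gamma = b - c and residuals u - v,
    all variables non-negative, following the formulation in the text."""
    N = len(y)
    X = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    cost = np.concatenate([np.zeros(4), np.ones(2 * N)])   # minimize sum(u + v)
    A_eq = np.hstack([X, -X, np.eye(N), -np.eye(N)])       # X b - X c + u - v = y
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:2] - z[2:4]                                  # gamma = b - c

def laplace_periodogram(y, t, omega):
    """N/4 times the squared norm of the LAD coefficients at frequency omega."""
    g = lad_harmonic_fit(y, t, omega)
    return len(y) / 4.0 * float(g @ g)
```

Because t enters only through cos(omega t) and sin(omega t), the same code handles non-uniformly sampled time instants without modification.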
+{"text": "Macrophages are versatile immune cells that can detect a variety of pathogen-associated molecular patterns through their Toll-like receptors (TLRs). In response to microbial challenge, the TLR-stimulated macrophage undergoes an activation program controlled by a dynamically inducible transcriptional regulatory network. Mapping a complex mammalian transcriptional network poses significant challenges and requires the integration of multiple experimental data types. In this work, we inferred a transcriptional network underlying TLR-stimulated murine macrophage activation. Microarray-based expression profiling and transcription factor binding site motif scanning were used to infer a network of associations between transcription factor genes and clusters of co-expressed target genes. The time-lagged correlation was used to analyze temporal expression data in order to identify potential causal influences in the network. A novel statistical test was developed to assess the significance of the time-lagged correlation. Several associations in the resulting inferred network were validated using targeted ChIP-on-chip experiments. The network incorporates known regulators and gives insight into the transcriptional control of macrophage activation. Our analysis identified a novel regulator (TGIF1) that may have a role in macrophage activation. Macrophages play a vital role in host defense against infection by recognizing pathogens through pattern recognition receptors, such as the Toll-like receptors (TLRs), and mounting an immune response. Stimulation of TLRs initiates a complex transcriptional program in which induced transcription factor genes dynamically regulate downstream genes. 
Microarray-based transcriptional profiling has proved useful for mapping such transcriptional programs in simpler model organisms; however, mammalian systems present difficulties such as post-translational regulation of transcription factors, combinatorial gene regulation, and a paucity of available gene-knockout expression data. Additional evidence sources, such as DNA sequence-based identification of transcription factor binding sites, are needed. In this work, we computationally inferred a transcriptional network for TLR-stimulated murine macrophages. Our approach combined sequence scanning with time-course expression data in a probabilistic framework. Expression data were analyzed using the time-lagged correlation. A novel, unbiased method was developed to assess the significance of the time-lagged correlation. The inferred network of associations between transcription factor genes and co-expressed gene clusters was validated with targeted ChIP-on-chip experiments, and yielded insights into the macrophage activation program, including a potential novel regulator. Our general approach could be used to analyze other complex mammalian systems for which time-course expression data are available. Dynamic cellular processes, such as the response to a signaling event, are governed by complex transcriptional regulatory networks. These networks typically involve a large number of transcription factors (TFs) that are activated in different combinations in order to produce a particular cellular response. The macrophage, a vital cell type of the mammalian immune system, marshals a variety of phenotypic responses to pathogenic challenge, such as secretion of pro-inflammatory mediators, phagocytosis and antigen presentation, stimulation of mucus production, and adherence. 
In the innate immune system, the first line of defense against infection, the macrophage's Toll-like receptors (TLRs) play a crucial role by recognizing distinct pathogen-associated molecular patterns (PAMPs), such as flagellin, lipopeptides, or double-stranded RNA. Computational analysis of high-throughput experimental data is proving increasingly useful in the inference of transcriptional regulatory interaction networks. For the reasons described above, in order to computationally infer transcriptional regulatory interactions in a mammalian system, it is necessary to include additional sources of evidence (beyond expression data) to constrain or inform the transcriptional network model selection. Computational scanning of the promoter sequences of clusters of co-expressed genes for known transcription factor binding site (TFBS) motifs has proved particularly valuable when combined with global expression data. This work is concerned with using computational data integration to identify a set of core differentially expressed transcriptional regulators in the TLR-stimulated macrophage and, in the form of statistical associations, the clusters of co-expressed genes that they may regulate. The clusters are differentiated based on temporal and stimulus-specific activation, and in this sense, the inferred associations constitute a preliminary dynamic transcriptional network for the TLR-stimulated macrophage. To achieve this, we used a novel computational approach incorporating TFBS motif scanning and statistical inference based on time-course expression data across a diverse array of stimuli. Our approach involved four steps. (i) A set of genes was identified that were differentially expressed by wild-type macrophages under at least one TLR stimulation experiment. 
(ii) These genes were clustered based on their expression profiles across a wide range of conditions and strains, grouping genes based on the similarity of the timing and stimulus-dependence of their induction. Gene Ontology annotations were used to identify functional categories enriched within the gene clusters. (iii) Promoter sequences upstream of the genes within each cluster were scanned for a library of TFBS motifs, each recognized by at least one differentially expressed TF, to identify possible associations between TFs and gene clusters. (iv) Across eleven different time-course studies, dynamic expression profiles of TF genes and target genes were compared in order to identify possible causal influences between differentially expressed TF genes and clusters. Several techniques have been developed specifically for model inference from time-course expression data, notably dynamic Bayesian networks (DBNs). Our analysis also identified a novel candidate regulator (Tgif1), which was not previously known to play a role in macrophage activation. As a targeted experimental validation of the inferred network, two transcriptional regulators, p50 (a component of NFkB) and IRF1, were assayed for binding to cis-regulatory elements in LPS-stimulated macrophages using ChIP-on-chip, and were confirmed to bind the promoters of genes in four out of five predicted target clusters at significantly higher proportions than expected for a random set of TLR-responsive genes. By combining the promoter scanning-based evidence with the evidence obtained by the time-lagged correlation analysis of the expression data, we were able to identify a network of statistically significant associations between 36 TF genes and 27 co-expressed clusters. Overall, 63% of differentially expressed genes are included in the network. The network provided insights into the temporal organization of the transcriptional response and into combinations of TFs that may act as key regulators of macrophage activation. 
Finally, the analysis identified a potential transcriptional regulator, TGIF1. The TLR stimuli used include bacterial-associated (LPS, Pam3CSK4), viral-associated (poly I:C), and anti-viral (R848) stimuli. The differentially expressed TF genes include NF\u03baB family genes (Rel, Nfkb1), three AP1 family genes (Jun, Junb, Fos), two ATF family genes (Atf1, Atf3), six IRF family TF genes (Irf1/2/3/5/7/9), and STAT family genes (Stat1/3/4/5a). The 80 TF genes were taken to constitute the set of potential regulators in the TLR-stimulated macrophage network. To probe a diverse set of transcriptional responses of Toll-like receptor (TLR)-stimulated macrophages, primary bone marrow-derived macrophages (BMMs) from five mouse strains were stimulated and expression-profiled. Measurements were transformed so that the transformed values would all lie between \u22121 and 1, with zero indicating the intensity in the reference experiment. This technique, which we call the signed difference ratio (SDR), has previously proved useful in clustering genes based on temporal expression in a mammalian system. Each log2 intensity measurement y_pj, for probeset p and non-reference experiment j, was transformed to an SDR value x_pj, where Rj denotes the index of the global reference experiment. By construction, \u22121 \u2264 x_pj \u2264 1 for all p and j. A positive SDR value indicates higher expression than in the reference experiment, and a negative value indicates lower expression. 
The SDR-transformed log2 intensities of all 1,960 target genes across all 94 non-reference experiments were clustered using an unsupervised algorithm (K-means with Euclidean distance), with the number of clusters chosen using the Bayesian information criterion (BIC). Clustering was used to identify cohorts of genes that were co-expressed across the diverse set of TLR-stimulation experiments, based on the assumption that genes within a cluster are likely to share common regulators. The use of multiple stimuli (LPS, Pam3CSK4, poly I:C, and R848) enabled the segregation of these clusters based on the signal transduction pathway through which they are likely primarily regulated; some clusters also appear to be inducible through either pathway, from analysis of the LPS response in Ticam1(Lps2/Lps2) and Myd88(\u2212/\u2212) macrophages. Several clusters are enriched for cytokine genes: one cluster includes Tnf (TNFa) as well as Ccl3, Ccl4, Cxcl1, and Cxcl2; C25 includes the cytokines Cxcl10 and Il10; and C15 includes the interleukin cytokine genes Il1b, Il6, and Il12b. Cluster C24, enriched for signal transduction genes, also includes the important cytokine Ifnb1 (IFNb). The early-upregulated clusters, C24\u201328, show a high proportion of induced TFs and are enriched for TFs relative to the genome. Gene Ontology annotation information was used to identify GO term enrichments within the gene clusters. Noting the high proportion of induced TFs in early-upregulated clusters, we chose a signal processing technique, the time-lagged correlation (TLC), to assess potential transcriptional regulatory interactions using the time-course expression data. Let g1 denote a differentially expressed TF gene, and let g2 denote a differentially expressed gene. We wish to estimate our degree of confidence in the null hypothesis, that g1 does not transcriptionally regulate g2, given time-course expression data for both genes. 
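The cluster-number selection step can be illustrated with a minimal one-dimensional sketch. The text does not specify the exact BIC variant used, so the code below assumes an X-means-style spherical-Gaussian BIC; the 1-D simplification and all names are ours.

```python
import math
import random

def kmeans_1d(points, k, iters=100, seed=0):
    """Lloyd's algorithm in one dimension (Euclidean distance)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: (p - centers[c]) ** 2)].append(p)
        # keep the old center if a group empties out
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

def bic_score(points, centers, groups):
    """Spherical-Gaussian BIC (larger is better). Free parameters:
    k means, k-1 mixing weights, and one shared variance -> 2k total."""
    n, k = len(points), len(centers)
    sse = sum((p - centers[i]) ** 2 for i, g in enumerate(groups) for p in g)
    var = max(sse / max(n - k, 1), 1e-12)
    ll = (sum(len(g) * math.log(len(g) / n) for g in groups if g)
          - 0.5 * n * math.log(2 * math.pi * var) - 0.5 * (n - k))
    return ll - 0.5 * (2 * k) * math.log(n)
```

Evaluating `bic_score` for k = 1, 2, ... and keeping the maximum selects the cluster count; with two well-separated groups, k = 2 wins despite its larger parameter penalty.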
In the simplest case, the alternative hypothesis could be that g1 codes for a TF protein that binds the promoter of g2, thereby regulating its transcriptional activity. Let t be a fixed time lag for which the TLC between g1 and g2 is to be computed. Let T denote a set of discrete time points at which gene expression is measured, and let T\u2032 denote the set of time points T+t. Let X_T(g1) denote the vector of expression measurements of g1 at the time points T, and let X_T\u2032(g2) denote the measurements of g2 at times T\u2032 (which can be estimated by interpolation). The time-lagged correlation (TLC) coefficient between g1 and g2 with time lag t is defined as the Pearson correlation coefficient between X_T(g1) and X_T\u2032(g2). Because this definition does not depend on g1 being a TF, it can be applied to any gene pair, for example, to obtain a background distribution of TLC coefficients of gene pairs satisfying the null hypothesis. Two examples of a TF exhibiting a high time-lagged correlation with a target gene are Rel\u2192Nfkb1 and Irf7\u2192Stat1. For a TF gene g1 and a target gene g2, the overall transcriptional regulatory time delay tc (where \u201cc\u201d stands for the combined gene-gene delay) can be decomposed as a sum of two contributions, one of which accounts for translation of the TF and post-translational processing/translocation. Assessing the significance of an observed sequence of time-lagged correlations between two genes (as a function of the time lag) as an indicator of possible transcriptional regulation necessitates formulating our prior expectation for the time lag of a true transcriptional regulatory interaction. This prior distribution was discretized to the set of time lags for which the TLC was computed, to obtain an estimate of the discrete probability for observing a given optimal time lag, P(\u03c4|H\u03050). For each pair (g1,g2) for which the TLC approach was to be applied, an \u201coptimal time lag\u201d \u03b8 was selected, so that a single representative TLC could be obtained for the pair. 
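The TLC computation itself can be sketched as piecewise-linear interpolation followed by a Pearson correlation, restricted to time points whose lagged counterpart remains inside the measured range; function names here are illustrative, not the authors' code.

```python
def linear_interp(ts, xs, t):
    """Piecewise-linear interpolation of the sampled series (ts, xs) at time t."""
    if t <= ts[0]:
        return xs[0]
    if t >= ts[-1]:
        return xs[-1]
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return (1 - w) * xs[i] + w * xs[i + 1]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def time_lagged_correlation(ts, tf_expr, target_expr, lag):
    """Correlate the TF profile at times T with the target profile at T + lag,
    using only time points whose lagged counterpart stays inside the
    measured range (no extrapolation beyond the last time point)."""
    usable = [i for i, t in enumerate(ts) if t + lag <= ts[-1]]
    a = [tf_expr[i] for i in usable]
    b = [linear_interp(ts, target_expr, ts[i] + lag) for i in usable]
    return pearson(a, b)
```

For a target profile that is an exact time-shifted copy of the TF profile, the TLC at the true lag reaches 1 and exceeds the zero-lag correlation, which is the causal signature the method looks for.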
The set of time lags and the set of time-course experiments to use were selected according to a constraint (imposed to minimize interpolation error) that the target gene expression at maximum time lag must be interpolated from at least three measurements. Based on this constraint, and taking into account the expected precision at which the optimal time lag can be estimated, a discrete set of candidate time lags t was chosen; the squared TLC coefficient \u03c1\u00b2(t) was computed for each of the t values, for each pair of genes, using data from all eleven time-course experiments combined. It is not ideal to simply select the t at which \u03c1\u00b2(t) is maximal, as some studies have done. To see why, consider a pair of genes (h1,h2) satisfying the null hypothesis, and let tmax\u2261max(T), where T is the set of time points for a single time-course. In practice the expression of h2 cannot be extrapolated beyond tmax, so the effective number of data points for computing the TLC \u03c1\u00b2(t) is limited to the number of time points within T that are less than tmax\u2212t. Thus, the number of measurements that can be used to compute the TLC is t-dependent, and the distribution of TLCs for pairs of genes satisfying the null hypothesis depends on t. Therefore, one will more frequently observe (by chance) a TLC exceeding a given value simply by selecting the largest possible t. In addition, the high degree of synchronization within the transcriptional response, as well as the fact that all the SDR-transformed expression levels are zero at the initial time point, result in a second bias towards zero time lag. This effect is strengthened as the number of time points in the data set (per time-course) decreases. 
Therefore, selecting the optimal t to maximize \u03c1\u00b2(t) introduces an unwanted bias towards the smallest and largest t values investigated. To correct for this, a lag-specific significance \u03be was computed separately for each time lag t, using a large set H of gene pairs for which there is no direct transcriptional regulatory interaction (TRI). Using the distribution P(\u03c4|H\u03050) defined above, a probability ratio R(\u03c4) was computed as the ratio of the probability of the null hypothesis (that there is no direct TRI between g1 and g2) given the measured optimal time lag, to the marginal probability of the null hypothesis. Uncertainty due to the discretization of time lags leads to uncertainty in the estimation of R(\u03c4); however, the effect of this uncertainty on the cluster-combined P value is small, due to the fact that time lag estimation errors for genes in a cluster are not strongly correlated. The marginal probability P(\u03c4) was estimated from the optimal time lags of all gene pairs, and the marginal probability P(H0) was estimated from data in the literature. A combined score s was constructed, taking into account both the optimal time lag \u03b8 and the fractional lag-specific significance \u03be. Using the distribution of s scores for gene pairs satisfying the null hypothesis, the significance Ptlc of the association between g1 and g2 based on expression data can be computed for pairs (g1,g2), where g1 ranged over the set of 80 TFs, g2 ranged over the set of all 1,960 differentially expressed genes, and g1\u2260g2. These pairwise significances were combined into a P value for the cluster, Pexp. For each pair (f,C) of TF gene f and cluster C, a Fisher score Fexp was computed over the genes in C\\{f}, where C\\{f} means that if the TF gene f was a member of cluster C, the self-association Ptlc was excluded. For each cluster C, the number of degrees of freedom, denoted by d(C), was estimated using K-means clustering, and the d(C) values were used to obtain a TF-to-cluster P value, Pexp, using a \u03c72 test. The number of TF-to-cluster associations with Pexp\u226410\u22123 was 23. 
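The lag-dependence of the null distribution, which motivates computing the significance separately for each time lag, can be demonstrated numerically: for independent random profiles, larger lags leave fewer usable time points and therefore widen the null distribution of the TLC. A hypothetical sketch (unit-spaced time points and integer lags, so no interpolation is needed; names are ours):

```python
import math
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def tlc_null_quantile(n_points, lag, n_pairs=2000, q=0.95, seed=1):
    """Empirical q-quantile of |TLC| for independent random profiles on a
    unit-spaced time-course of n_points samples, at an integer lag.
    Only points t with t + lag still inside the time-course are usable,
    so larger lags use fewer points and widen the null distribution."""
    rng = random.Random(seed)
    usable = n_points - lag
    vals = []
    for _ in range(n_pairs):
        a = [rng.gauss(0, 1) for _ in range(n_points)]
        b = [rng.gauss(0, 1) for _ in range(n_points)]
        vals.append(abs(pearson(a[:usable], b[lag:lag + usable])))
    vals.sort()
    return vals[int(q * n_pairs) - 1]
```

A fixed TLC cutoff would therefore be far easier to exceed by chance at large lags, which is why a per-lag significance is needed.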
The differential expression levels for the strongest TF-to-cluster pairs in wild-type time-courses following stimulation by LPS, the distribution of Pexp over all TF-to-cluster pairs, and the estimated false discovery rate (FDR) were examined to estimate the overall significance (based on time-course expression data) of the association between a TF gene and a cluster. As a next step towards inferring a transcriptional network, enrichments of TFBS motifs were computed for individual gene clusters. A total of 150 position-weight matrices (PWMs) representing motifs recognized by murine TFs were selected from the TRANSFAC database. A motif was selected if it is recognized by at least one TF of which at least one component protein was differentially expressed in the expression dataset, ensuring that the TF had at least one expression profile that could be compared with target genes using the TLC. For each PWM, the fraction of genes with at least one above-threshold match within the promoter was computed, within a reference set of all genes detected as expressed within the TLR-stimulated macrophage, and within each co-expressed gene cluster. For each cluster C and position-weight matrix m, enrichment statistics were computed based on the fraction of genes in C possessing at least one match for m. For each pair (C,m) for which the fraction of genes containing a match for m within the cluster C was greater than in the reference set of genes, a P value was computed using Fisher's exact test. This P value represented the significance of the enrichment of matrix m within the promoters of cluster C, relative to the reference set of promoters (expressed genes). The strongest motif enrichments (P\u226410\u22122) were summarized in a matrix representation with the clusters grouped by expression similarity. 
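The motif-enrichment test is a one-tailed Fisher's exact test, i.e., the upper tail of the hypergeometric distribution (as also stated in the Methods). A small self-contained sketch (function name ours):

```python
from math import comb

def motif_enrichment_p(ref_size, ref_hits, cluster_size, cluster_hits):
    """One-tailed Fisher's exact test: the hypergeometric probability of
    drawing at least cluster_hits motif-containing genes in a sample of
    cluster_size genes, taken from ref_size reference genes of which
    ref_hits contain the motif."""
    total = comb(ref_size, cluster_size)
    hi = min(cluster_size, ref_hits)
    tail = sum(comb(ref_hits, k) * comb(ref_size - ref_hits, cluster_size - k)
               for k in range(cluster_hits, hi + 1))
    return tail / total
```

For example, seeing the motif in all 4 genes of a 4-gene cluster, when only 5 of 10 reference genes carry it, gives P = C(5,4)/C(10,4) = 5/210.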
To enable integration of the promoter scanning evidence with the time-lagged correlation evidence, PWMs that were enriched for matches within gene clusters were mapped to differentially expressed TF genes as follows. For each PWM m, a list of genes coding for TFs (or TF components) that bind the motif corresponding to m was obtained from a TRANSFAC-derived mapping. For each TF gene f and cluster C, a scanning P value, Pscan, was defined as the minimum over all Pscan for all matrices m that are associated with the TF gene f. Many associations reached Pscan\u226410\u22123, indicating a statistical power that is slightly higher than with the TLC-based evidence. To identify the set of all possible TF gene-to-target interactions consistent with motif scanning evidence, for each TFBS motif match within the promoter of a target gene, the time-lagged correlation was computed for all possible TF genes that map to the TFBS motif. The resulting list contains 54,253 pairs (f,g) of TF gene f and target gene g; examples include Rel associated with Icam1 and Cebpd associated with Il6. For each pair (f,C), where f is one of 80 TF genes and C is one of 32 gene clusters, a combined P value Pcomb was computed from the P values for the scanning and expression evidences, Pscan and Pexp. The P values were combined using Fisher's method, and a cutoff was selected so that the estimated false discovery rate did not exceed 0.025 (Pcomb\u22640.0248). Additionally, two filtering criteria were imposed: (i) Pscan\u22640.05, to ensure that there is a minimal enrichment of TFBS matches; and (ii) a cluster-average optimal time lag between f and C that was greater than 10 min, i.e., \u2329\u03b8\u232af,C\u226510 min. At this cutoff, no dependency between the evidences is evident. A total of 90 interactions, involving 36 TF genes and 27 clusters, were accepted based on the above criteria; multiple TF genes were often associated with a single cluster, due to the large number of TF gene families that map to a common motif. 
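Combining Pscan and Pexp by Fisher's method has a closed form: under the null, X = -2\u2211ln(p_i) follows a \u03c7\u00b2 distribution with 2k degrees of freedom, and for even degrees of freedom the upper tail is exp(-x/2)\u00b7\u2211_{i<k}(x/2)^i/i!. A sketch (our function name, not the paper's code):

```python
import math

def fisher_combine(pvals):
    """Fisher's method: under H0, X = -2 * sum(ln p_i) ~ chi-square with
    2k d.f.; for even d.f. the chi-square upper tail has the closed form
    exp(-x/2) * sum_{i<k} (x/2)^i / i!, evaluated here directly."""
    k = len(pvals)
    half_x = -sum(math.log(p) for p in pvals)  # x / 2
    return math.exp(-half_x) * sum(half_x ** i / math.factorial(i) for i in range(k))
```

With a single P value the method is the identity, and two moderately significant evidences combine into a stronger one, which is how a weak Pscan and a weak Pexp can still yield an accepted Pcomb.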
Overall network coverage was estimated by taking the fraction of differentially expressed genes that (i) are members of the 27 clusters in the network; and (ii) possess a match for a motif recognized by one or more of the TFs associated with the cluster. From this estimate the network contains 1,232 genes, or 63% of the 1,960 genes that are differentially expressed under TLR stimulation. To reveal patterns among TFs that may regulate multiple clusters, the connections between the 36 TFs and the 27 clusters in the inferred network were arranged in a matrix in which each row represents an induced TF and each column represents a cluster of differentially expressed genes. Some associations have weak time-lagged correlation evidence, but a very significant motif scanning P value. In contrast, the downregulated gene clusters and TF genes are not as stratified as the upregulated clusters in terms of the time of differential expression, and thus associations appear throughout the lower-right quadrant of the matrix. The network directly leads to hypotheses regarding TF regulation of clusters. For example, a statistical association between any of the TF genes Jun, Junb, or Fos and a cluster would suggest a hypothesis that the TF AP1 regulates that cluster. The network also recapitulates several known transcriptional regulatory interactions. First, the NF\u03baB component Rel is associated with C15, which is enriched for cytokines and contains many NF\u03baB targets including Nfkb1, Il6, and Il12b. Second, Jun, a component of AP1, is associated with C17, an induced gene cluster enriched for endosome-associated genes. At the scanning-only cutoff (Pscan\u22640.0033), the association between Nfkb1 and C17 would not have been detected, but by including the effect of the strong TLCs between Nfkb1 and C17 genes, an association between Nfkb1 and C17 was detected. 
As a second example, the network includes an association between the TF gene Irf1 and cluster C25; based on promoter scanning evidence alone, only a general association of the IRF family with the cluster would have been possible. TGIF1 was associated with C26, a cluster containing genes involved in immune response, ubiquitin cycle, and leukocyte activation. Specifically, C26 contains the cytokine genes Csf2 and Gm1960; Csf2 has TGIF1 motif matches in the region relative to the transcription start site, and Gm1960 also has three TGIF1 motif matches approximately 1.5 kbp upstream of the start site (best match score>0.95). In humans, TGIF1 is known to interact with several protein members of the SMAD/AP1 transcriptional complex. Furthermore, from microarray-based measurement (Affymetrix probeset 1422286_a_at), Tgif1 expression is \u223c2.4-fold reduced in unstimulated Ticam1(Lps2/Lps2) BMMs relative to wild-type (with no apparent effect in MyD88(\u2212/\u2212) BMMs relative to wild-type), suggesting that basal expression of Tgif1 is TRIF-dependent. Genome location analysis based on chromatin immunoprecipitation-on-chip (ChIP-on-chip) hybridization was used to validate five high-confidence associations in the transcriptional network, between NF\u03baB/p50 and clusters C13, C17 and C28; and between IRF1 and clusters C13 and C25. This validation consisted of demonstrating a statistical enrichment of ChIP-on-chip\u2013identified binding for a given TF among genes within a cluster with which the TF was associated through our computational method, as compared to randomly selected TLR-responding genes. A further association, between IRF1 and C30, had P<0.01 based on scanning, but was not significant based on Pcomb; consistent with the integrated analysis, C30 was not significantly enriched for IRF1 binding, based on the ChIP-on-chip assay. 
A custom-fabricated oligonucleotide microarray was used, with probes tiling up- and downstream of genes that were differentially expressed under TLR stimulation in a murine macrophage-like cell line. In this study we inferred a transcriptional network underlying dynamic TLR-stimulated activation of the murine macrophage. This network consists of statistical associations between differentially expressed transcription factor (TF) genes and co-expressed clusters of genes, each indicating a possible role for the associated TF in regulating the cluster. Such associations have proved useful for generating and prioritizing testable hypotheses regarding transcriptional regulation. The time-lagged correlation (TLC) was used to analyze temporal gene expression for TFs and gene clusters, and in addition to the correlation strength, the biological plausibility of the estimated optimal time lag was factored into the significance assessment for the TLC. This time lag is useful for distinguishing between genes that are linked by a regulatory interaction and genes that are merely co-expressed. The TLC is efficient to compute, and in general requires fewer measurements than methods that rely on estimating the joint probability distribution of the expression of two genes. The specific implementation of the TLC approach used in this study has two key advantages. First, by selecting the optimal time lag for a TF\u2013gene pair while accounting for the \u03c4 dependence of the null standard deviation \u03c3, the method combines both (i) the magnitude of the correlation, and (ii) the probability of observing the optimal time lag, into the significance of a pairwise association. Second, the probability distribution for time lags among true interactions was incorporated as a prior in the significance calculation. This enabled taking into account the biological plausibility of the time lag in computing the significance. This significance test for the TLC has not, to our knowledge, been previously reported. Even so, spuriously high TLC values can arise with limited data, as was observed for the TF gene Lmo2. 
In summary, with a limited expression dataset, a high-significance TLC by itself should not be regarded as sufficient evidence to infer a TF-to-target association, underscoring the importance of incorporating additional sources of evidence. With any network inference method based on pair-wise comparison of the expression profiles of a regulator and a possible target (including the TLC method), it is difficult to accurately resolve the multi-factorial control of a target gene. This is particularly true when the effect of one TF is simply to modulate (amplify or dampen) the time-varying influence of another TF on a target gene. Several additional mechanisms can confound or eliminate the correlation between the expression level of a TF gene and the chromatin-bound activity of the corresponding TF, including multimeric TF assembly from protein products of several genes, post-translational activation of the TF, dynamically regulated nuclear translocation, and dynamically regulated TF protein turnover. For example, in the case of ATF3, there is little correlation between differential expression and nuclear localization. The ChIP-on-chip P values are conservative estimates of the genome-wide binding enrichment, due to the fact that genes were selected for inclusion in the tiling array based on differential expression under LPS stimulation in a macrophage-like murine cell line (RAW 264.7). We note that for each of the two TFs assayed, two pairs were found to be enriched for binding based on ChIP-on-chip, but not based on the network analysis. 
Such false-negative predictions may be the result of binding sites sometimes occurring upstream of the 2 kbp region selected for TFBS motif scanning, the target TF being cross-linked to a DNA-bound co-regulator recognizing a different motif than the TF, or the TF recognizing a TFBS motif variant not represented in the motif database. In the present work, promoter sequence scanning was used to identify TFBS motifs enriched within co-expressed gene clusters. Due to the often one-to-many mapping between TFBS motifs and TFs, the scanning-based evidence often associates multiple candidate TFs with a gene cluster, of which perhaps a single TF may be the relevant regulator in the given condition. The TLC approach described here provides an objective statistical framework for evaluating the suitability of a proposed TF-to-target association based on a large set of time-course expression measurements. In particular, the approach enabled the preferential identification of TF-to-target associations for which the optimal time lag is biologically plausible, and the rejection of associations with a biologically implausible time lag. Four associations were validated using ChIP-on-chip assays, in which enriched binding of the relevant TF was shown among genes within the relevant cluster. The inferred transcriptional network resulting from our analysis associates at least one TF with 27 of the 32 clusters. The 27 clusters comprise 86% of all differentially expressed genes, with an overall network coverage of 63%. An average of 3.3 TF genes were associated with each cluster, which may reflect the prevalence of combinatorial control in the transcriptional network. The TFs implicated in the network are also highly interconnected at the level of protein-protein interactions, and interacting TFs are found to co-associate with clusters in the network. 
Many TFs known to play a role in macrophage activation were strongly associated with clusters in the inferred network. NF\u03baB and AP1 appear to be the most prolific activators in the network. EGR family members appear to be associated with early-induced clusters, and IRF family members are associated with later-induced clusters. In particular, the network associated specific TFs with immunologically important gene clusters. Finally, incorporating expression data enabled identifying a specific TF from among members of a large TF family recognizing a motif enriched within a target cluster; for example, the predicted interaction between IRF1 and C25 was validated by ChIP-on-chip. However, we note that more ChIP-on-chip data, with a variety of TF targets, would be required to quantitatively assess the performance of the combined network analysis compared to single-evidence analysis using sequence data or expression data alone. We note that by including in the analysis only TFBS motifs for which at least one associated TF gene was differentially expressed, the inferred network does not include TFs for which there is no transcript-level differential expression; this trade-off enabled network inference based on dual criteria of motif match enrichment and the estimated time lag prior probability. Work is in progress to extend the analysis to include all 208 TFBS motifs corresponding to TFs that are transcriptionally expressed in the TLR-stimulated macrophage. Another limitation related to sequence scanning is that the promoter sequence data set used is purely upstream of the annotated transcription start site (TSS); recent evidence suggests that some TFs may be equally likely localized downstream of the annotated TSS. In addition to recapitulating known regulators, the analysis identified a potential transcriptional regulator not previously known to play a direct role in TLR-stimulated macrophage activation, TGIF1. TGIF1 is a three-amino acid loop extension homeobox protein that acts as an obligate repressor through either direct binding to the retinoic acid responsive element on DNA, or through its interaction with SMAD2 in the TGF\u03b2 pathway. TGIF1 was associated with cluster C26 (P<10\u22122) and cluster C4 (P<10\u22122), and Tgif1 is strongly (11-fold) upregulated in murine macrophages in response to Streptococcus pyogenes infection; the TGIF1-associated cluster genes include Csf2 and Gm1960. The approach of combining promoter scanning-based evidence with expression dynamics-based evidence enabled more specific identification of the TF gene(s) regulating a cluster than would have been possible using promoter scanning alone. Time-course expression data allowed, in some cases, the disambiguation of which TF gene (out of a family of TF genes associated with a given TFBS motif) is the likely regulator of a cluster enriched for the corresponding TFBS motif. Inclusion of expression data provided a second source of evidence to indicate the relevance of a given TF gene for predicting the condition- and time-specific expression of a target gene cluster. In total, these results validate the strategy of computationally integrating two distinct large-scale data sources (expression and genomic sequence) to infer a murine macrophage transcriptional network. 
In a future study, additional sequence-based data sources, such as evolutionarily conserved elements in the All data were analyzed in MatLab unless otherwise stated. In all cases where Fisher's exact test was performed, the test was one-tailed, using the cumulative distribution function (CDF) of the hypergeometric distribution.5 cells/cm2 (1\u00d7106 cells per well in a 6-well dish) on tissue culture-treated plastic. On day 7 cells were stimulated with TLR agonists at the concentrations indicated in Mutant strains see were gen2 intensities Expression data were acquired from 216 microarray hybridization experiments comprising 95 combinations of strain, stimulus, and time point . For each probeset and for each of the wild-type TLR-stimulation time-course experiments, a differential expression test was performed using a spline-based multivariate regression method P value for the difference in the sum-squared residuals under the alternative and the null hypotheses. A fourth-order polynomial basis was used, with 1,000 iterations for the bootstrap resampling. For each time-course experiment, a separate P value threshold was selected based on a maximum Benjamini-Hochberg false discovery rate (FDR) Significance testing was performed using mean logA probeset selection algorithm was carried out to select a representative probeset for each gene, eliminating probesets that are annotated as cross-hybridizing to transcripts from different genes.2 intensity exceeding a fixed cutoff, in at least one replicate-combined experiment; (iii) it had a P value less than a fixed cutoff, for at least one experiment; and (iv) its probeset name did not contain \u201c_x_\u201d or \u201c_s_\u201d, and was not associated (by GeneID annotation) with transcripts of two distinct genes. 
Criterion (iv) was imposed in order to eliminate probesets containing probes that cross-hybridize to transcripts from different genes P value, across all non-reference experiments, was selected as the \u201crepresentative probeset\u201d for the GeneID (or GeneID list).Representative probesets from among the 45,037 probesets (excluding on-chip control probesets) on the Affymetrix Mouse GeneChip 430.2 were selected based on four criteria. A probeset was selected if and only if: (i) it possessed an Entrez GeneID in the Affymetrix probeset annotation database 2 intensity and P value, as summarized in 2 intensity cutoff of 6 was used, and a P value cutoff of 10\u22124 was used. The resulting number of representative probesets for target genes was 1,960. The complete list of the 1,960 target genes, and their expression measurements, are provided in 2 intensity cutoff of 6 and no filtering for differential expression. The 8,788 resulting genes were used as the reference set for applying Fisher's exact test to the promoter scanning results (see Promoter Scanning below). (iv) To generate the set of all genes represented by \u201c_at\u201d or \u201c_a_at\u201d probesets on the GeneChip, the algorithm was run with no filtering for minimum intensity or differential expression. 
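The Methods specify that every Fisher's exact test was one-tailed, computed from the hypergeometric CDF. As an illustration only (the study used MatLab; the function name here is hypothetical), the enrichment tail probability can be computed directly from binomial coefficients:

```python
from math import comb

def fisher_enrichment_p(k, n, K, N):
    """One-tailed Fisher's exact test for enrichment.

    Returns P(X >= k) for X ~ Hypergeometric(N, K, n), where
    N = reference set size, K = reference promoters with a motif match,
    n = cluster size, and k = cluster promoters with a match.
    """
    denom = comb(N, n)
    upper = min(K, n)
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, upper + 1)) / denom
```

For large N this direct summation is slow and numerically delicate; a production implementation would use a library hypergeometric survival function instead.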
This generated a list of 20,905 genes that constituted the genome-wide set used in the gene ontology enrichment analysis. A set of 388 position-weight matrices (PWMs) corresponding to murine TFs was obtained from the TRANSFAC Professional database version 10.3. To estimate the fraction of genes in the mouse genome that are TFs, a genome-wide list of 1,245 murine TF genes (and probable TF genes) was assembled by mapping a list of 1,800 human TF genes from the literature.

The values xpj of log2 intensity, where p indexes the probeset and j indexes the experiment, were clustered using a fast implementation of the K-means algorithm. The number of clusters K was chosen to minimize the Bayesian Information Criterion (BIC), written BIC(K), in which kp is the cluster to which the pth probeset is assigned, ckj is the jth coordinate of the centroid of the kth cluster in the SDR-transformed space of expression measurements, N\u200a=\u200a1,960 (the number of target genes), M\u200a=\u200a94 (the number of non-reference experiments), and \u03c32 is the average intra-cluster variance evaluated at K\u200a=\u200a3. The K-means clustering was carried out for integer values 18\u2264K\u226450, for 1,000 iterations at each value of K; the optimal clustering occurred at K\u200a=\u200a32.

Enrichment was tested for each pair of a GO term ID i and gene cluster C, using Fisher's exact test (under-occurrences of a GO term relative to the reference set were discarded). Any pairs in which less than 5% of the genes within C possess GO term ID i, or with a term level in the GO hierarchy less than 3, were discarded. The resulting 629 pairs were ordered by P value, and a P value cutoff was selected by demanding that the estimated false discovery rate be 0.02.
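The number of clusters K was chosen by minimizing the BIC over 18\u2264K\u226450. A minimal one-dimensional sketch of the idea (hypothetical names; the BIC here uses a fixed normalizing variance, whereas the paper normalizes by the intra-cluster variance at K\u200a=\u200a3):

```python
import random
from math import log

def kmeans_1d(xs, k, iters=100, seed=1):
    """Minimal 1-D Lloyd's algorithm; returns (assignments, centroids, rss)."""
    rng = random.Random(seed)
    cents = rng.sample(xs, k)
    assign = [0] * len(xs)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: (x - cents[j]) ** 2) for x in xs]
        for j in range(k):
            members = [x for x, a in zip(xs, assign) if a == j]
            if members:  # keep the old centroid if a cluster empties out
                cents[j] = sum(members) / len(members)
    rss = sum((x - cents[a]) ** 2 for x, a in zip(xs, assign))
    return assign, cents, rss

def bic(rss, n, k, sigma2, m=1):
    """Schematic BIC: data-fit term normalized by a reference variance sigma2,
    plus a complexity penalty for k centroids in m dimensions."""
    return rss / sigma2 + k * m * log(n)

def best_k(xs, k_range, sigma2):
    """Choose the K that minimizes the BIC over a candidate range."""
    return min(k_range, key=lambda k: bic(kmeans_1d(xs, k)[2], len(xs), k, sigma2))
```

With two well-separated groups of points, the penalty term makes K\u200a=\u200a2 the minimizer, mirroring how the 32-cluster optimum was selected.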
The resulting 460 GO term enrichments are shown in Table S8. GO annotations were obtained from the Jackson Laboratory Mouse Genome Informatics GO annotations. The list of 32 TLR-regulated murine cytokines was obtained by screening for all differentially expressed genes possessing an annotation for cytokine or chemokine activity, and by refining the list using NCBI PubMed searches to determine whether each gene is a cytokine.

To form the null distribution of time-lagged correlation, a set of non-TF genes was generated. From the set of 1,960 differentially expressed genes, a set Q of 484 genes was selected such that each gene: (i) does not correspond to a TRANSFAC transcription factor as described above; (ii) has at least two GO process and two GO function annotations; (iii) is not annotated as \u201cregulation of transcription, DNA-dependent\u201d (GO:0008015); (iv) does not have a gene name with the prefix \u201cZfp\u201d (zinc finger protein); and (v) is not listed among the 1,800 TF genes (see Selection of Transcription Factors). The time-lagged correlations between genes within this group were taken as the null distributions of time-lagged correlations, for the purpose of computing the P value of a time-lagged correlation between a TF and a gene (see Time-lagged Correlation below).

Given the time resolution of the expression data (the sampling interval t is 20 min), the set L of time lags was chosen to be 0\u201380 min (inclusive), at 10 min intervals. The precision at which the optimal time lag can be estimated, at |\u03c1\u03c4|\u22650.9, was determined to be \u00b15 min, based on simulated independent Gaussian noise added to the replicate-combined array data, with standard deviation given by the measured replicate standard deviation of the log2 intensity in each experiment. The upper limit of 80 min was selected to ensure that in each time-course with time points T, the target gene expression evaluated at time points {t+\u03c4|t\u2208T and t+\u03c4\u2264max(T)} would always be based on measurements from at least three time points.
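The 80-min upper limit guarantees that at least three shifted time points remain inside each time-course. A small sketch of that feasibility check (the function name is hypothetical):

```python
def admissible_lags(time_points, lags, min_points=3):
    """Return the lags tau for which at least `min_points` shifted times
    t + tau (with t in the time-course and t + tau <= max(T)) remain usable."""
    tmax = max(time_points)
    return [tau for tau in lags
            if sum(1 for t in time_points if t + tau <= tmax) >= min_points]
```

For a short time-course such as 0, 20, 40, 60 min, only lags up to 20 min leave three usable points, which is exactly the kind of constraint that fixed the 80-min cap for the longer time-courses used in the study.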
The conditional probability density P(\u03c4c|H\u03050) of the overall transcriptional time delay \u03c4c, for true interacting TF\u2013target gene pairs, was defined using the gamma distribution; under this density, the upper limit of 80 min for \u03c4 included approximately 97% of transcriptional delays.

The time-lagged correlation (TLC) was computed for all possible triples (f, g, \u03c4) of TF gene f, potential target gene g, and time lag \u03c4 \u2208 L. There were 80 TFs and 1,960 target genes. The TLC was computed as follows, for a given (fixed) time lag \u03c4. Let the vectors XT(f) and XT(g) represent the log2-transformed, SDR-normalized expression measurements for f and g in a time-course, where T is the set of time points, and let tmax\u2261max(T). Let \u03c4T\u2261{t\u2208T|t\u2264tmax\u2212\u03c4}, and let X\u03c4T(f) and X\u03c4T(g) denote the expression of f and g, respectively, at the times \u03c4T. We now define the set of shifted time points \u03c4T'\u2261\u03c4T+\u03c4\u200a=\u200a{t+\u03c4|t\u2208\u03c4T}. The expression values of g at the shifted time points were obtained by interpolation, yielding a combined vector for f and a combined vector for g. The TLC \u03c1\u03c4 was then computed using Equation 2, requiring (i) at least three time points in \u03c4T, and (ii) a minimum of three measurements contributing to the interpolated values.

To build the background (null) TLC distribution, the TLC was computed for all triples (h1, h2, \u03c4), where h1 and h2 are drawn from the set Q of non-TF genes (see Selection of Genes for Null Distribution above). The background distributions were constructed from the squared TLC values \u03c1\u03c42, using Gaussian kernel density estimation. The TLC was then analyzed for the set G of gene pairs (g1, g2), where g1 was drawn from the set of 80 TFs (see Selection of Transcription Factors above), g2 was drawn from the set of 1,960 differentially expressed (\u201ctarget\u201d) genes (see Probeset Selection above), and g1\u2260g2. For each pair, the time lag that maximized the selection criterion was denoted \u03b8, and pairs satisfying Ptlc\u226410\u22123 were considered significant. For each \u03c4 and each \u03c1\u03c4, the complementary CDF R(\u03c4) was computed using Equation 5.
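Equation 2 itself is not reproduced in this excerpt; the following is a sketch, under stated assumptions, of a time-lagged Pearson correlation with linear interpolation of the target at the shifted time points (helper names are hypothetical):

```python
def interp(t, times, values):
    """Piecewise-linear interpolation of (times, values) at t; times sorted."""
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 if t1 == t0 else v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def time_lagged_corr(times, tf, target, tau):
    """rho_tau: correlate the TF at times {t <= max(T) - tau} with the
    target linearly interpolated at the shifted times t + tau."""
    tmax = max(times)
    t_tau = [t for t in times if t <= tmax - tau]
    x = [tf[times.index(t)] for t in t_tau]
    y = [interp(t + tau, times, target) for t in t_tau]
    return pearson(x, y)
```

A target that is an exact 10-min-delayed copy of the TF profile reaches a TLC of 1.0 at tau = 10, and a lower value at tau = 0, which is the behavior the lag search exploits.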
The marginal probability P(H0) was estimated to be \u223c0.94 based on an analysis of a previously published transcriptional network. The discretized prior P(\u03c4|H\u03050) is shown in Figure S7. The probability ratio-based P value for (f, g), denoted by Ptlc, was computed according to Equation 7.

For each pair of TF gene f and gene cluster C (see Expression Clustering above), an overall F score, Fexp, was computed using Equation 8, combining the |C\\{f}| P values. Because the genes within a cluster are grouped by expression similarity, their TLCs with respect to f are not independent, even under the null hypothesis that f does not regulate any of the genes within the cluster. Thus, among a large collection of pairs satisfying the null hypothesis, the F scores Fexp will not be distributed according to the \u03c72 distribution with 2|C\\{f}| degrees of freedom. Instead, the number of intra-cluster degrees of freedom was computed for each cluster by clustering the SDR expression profiles of the genes within the cluster using the K-means algorithm. For a range of numbers k of sub-clusters, the BIC was computed, using the variance at k\u200a=\u200a3 for normalizing the bias term; the value of k at which the BIC was minimized was doubled to obtain the effective number of degrees of freedom, d(C), within each cluster. The average over all clusters was \u2329d(Ck)\u232ak\u200a=\u200a11.03, where Ck denotes the kth cluster. The \u03c72 test was applied with d(C) degrees of freedom, to obtain an overall combined, cumulative, TLC-based P value Pexp(f,C) for the association between f and C, where Fexp is defined in Equation 8 and \u03b3 is the incomplete gamma function. A second statistic, the average time lag \u2329\u03b8\u232a, was computed for each pair.

Of the 12,117 genes that were not expressed in any of the microarray experiments, 7,503 were mapped to UCSC promoter sequences. The 1,960 differentially expressed genes were mapped to 1,713 unique promoter sequences.
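Fexp combines the per-gene TLC P values in the style of Fisher's method, but the tail probability is evaluated with a reduced, cluster-specific number of degrees of freedom. A sketch using the closed-form chi-square tail for even degrees of freedom (the paper's effective d(C) need not be an even integer, so this is an illustration only; names are hypothetical):

```python
from math import log, exp, factorial

def fisher_score(pvals):
    """F = -2 * sum(ln p), the Fisher's-method combination statistic."""
    return -2.0 * sum(log(p) for p in pvals)

def chi2_sf_even_dof(x, dof):
    """Survival function of the chi-square distribution for even dof = 2n."""
    n = dof // 2
    return exp(-x / 2.0) * sum((x / 2.0) ** i / factorial(i) for i in range(n))

def combined_p(pvals, effective_dof=None):
    """Combine dependent P values: an effective dof below 2*len(pvals)
    accounts for correlation among co-clustered genes."""
    dof = effective_dof if effective_dof is not None else 2 * len(pvals)
    return chi2_sf_even_dof(fisher_score(pvals), dof)
```

Shrinking the degrees of freedom makes the combined P value more conservative in the right way: the same evidence counts for less when the underlying tests are correlated.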
Mouse position-weight matrices corresponding to the 80 differentially expressed TF genes were obtained from TRANSFAC Professional. Low-complexity repeats were masked from all promoter sequences prior to motif scanning, using RepeatMasker. For each matrix m, the number of promoter sequences in the reference set that had at least one above-threshold match was denoted by \u03bdexp(m), and the number of mapped reference promoters by \u03bcexp. For each cluster C, the mapped promoter sequences for the genes within the cluster (the number of which was denoted by \u03bc(C)) were scanned, and the number of sequences with at least one above-threshold match was denoted by \u03bd. For each matrix m and cluster C, a P value Pscan was computed from the values \u03bcexp, \u03bdexp(m), \u03bc(C), and \u03bd, using Fisher's exact test. Let \u03a6 denote the mapping between the 80 TF genes and subsets of the 150 TRANSFAC matrices; \u03a6(f) is the set of TRANSFAC matrices associated with the TF gene f. For each TF gene f and cluster C, a P value representing the association between f and C was computed as the minimum Pscan over the matrices in \u03a6(f). The values \u03bcexp, \u03bdexp(m), \u03bc(C), and \u03bd for all clusters are provided in Table S11.

For each pair of TF gene f and co-expressed gene cluster C, an overall combined P value, Pcomb, for the significance of the association between f and C based on both promoter scanning and expression time-course data, was computed using Fisher's method. Pairs (f, C) were selected satisfying the following criteria: (i) Pcomb \u22640.0248 (\u2212log10 Pcomb \u22651.61, where the P value cutoff was obtained using an FDR of 0.025); (ii) Pscan \u22640.05 (\u2212log10 Pscan \u22651.3); and (iii) \u2329\u03b8\u232af,C \u226510 min. Criterion (iii) was used to ensure that a pair would not be accepted based solely on a very low Pscan value; the average optimal time lag must be biologically plausible. A total of three TF-cluster associations that passed criteria (i) and (ii), but not criterion (iii), were rejected. A total of 90 TF-cluster associations were identified based on these criteria, involving 36 TF genes.
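The TF-cluster scanning P value takes the best-scoring matrix in \u03a6(f), and a pair is accepted only when all three thresholds hold. A sketch of both rules (function names, matrix identifiers, and data are hypothetical):

```python
def scan_p_for_tf(tf, phi, pscan_by_matrix):
    """P value for TF f vs. a cluster: the minimum enrichment P value
    over the TRANSFAC matrices Phi(f) associated with f."""
    return min(pscan_by_matrix[m] for m in phi[tf])

def accept_association(p_comb, p_scan, mean_lag_min,
                       p_comb_max=0.0248, p_scan_max=0.05, min_lag=10.0):
    """Acceptance rule for a (TF, cluster) pair: (i) combined P value,
    (ii) scanning P value, (iii) biologically plausible average time lag."""
    return bool(p_comb <= p_comb_max and p_scan <= p_scan_max
                and mean_lag_min >= min_lag)
```

Criterion (iii) is what rejects pairs whose evidence is carried entirely by a strong motif enrichment with an implausibly small expression delay.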
The out-degree of a TF gene f within the network was estimated by summing, over all clusters C for which the pair (f, C) was accepted, the product z\u00b7|C\\{f}|, where z is the fraction of genes within C that have at least one binding site for any matrix m \u2208 \u03a6(f).

For quantitative PCR, RNA was treated with DNAase (Ambion) and used as template for reverse transcription according to the manufacturers' instructions. qPCR was performed using an Applied Biosystems ABI 7900 HT; expression units were computed relative to a housekeeping gene.

For ChIP-on-chip analysis, if there were multiple significant probes within a 200 bp region, the combined statistical significance of the region was computed by performing a t-test in which the distribution of probe intensities within the 200 bp region is compared to a background region of probe intensities. For each identified chromosomal region, the annotated gene nearest to the region in the 5\u2032 direction was recorded, along with the distance to the nearest flanking gene. Significance testing of the enrichment of ChIP-on-chip binding among genes within a specific cluster was carried out using Fisher's exact test, with a background set consisting of all 520 differentially expressed mouse genes for which at least one probe on the array is located within 20 kbp upstream of the TSS. Five (f, C) pairs were selected for ChIP-on-chip validation based on several criteria: (1) the gene members of the cluster needed to be well-represented on the tiling array (at least 30% of the genes in the cluster must be represented on the ChIP-on-chip array); (2) a correlation between TF gene and cluster expression consistent with the known function (activator or repressor) of the TF; (3) the availability of a high-quality polyclonal murine antibody for a relevant TF protein; (4) demonstrated specificity of the antibody based on Western blot analysis; and (5) a successful ChIP assay for several known targets of the TF. Genome location was assayed using ChIP-on-chip hybridization.
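The out-degree estimate sums z\u00b7|C\\{f}| over the accepted clusters. A sketch of that tally (names and data are hypothetical):

```python
def estimated_out_degree(tf, accepted_clusters, has_site):
    """Estimated out-degree of TF gene `tf`: for each accepted cluster C,
    add z * |C \ {tf}|, where z is the fraction of genes in C with at
    least one binding-site match for a matrix associated with `tf`."""
    total = 0.0
    for cluster in accepted_clusters:
        members = [g for g in cluster if g != tf]
        if not members:
            continue
        z = sum(1 for g in cluster if has_site(tf, g)) / len(cluster)
        total += z * len(members)
    return total
```

Because z is a fraction, the estimate is fractional rather than a raw gene count, which is why the network shows a roughly 20-fold spread in out-degree rather than discrete jumps.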
A custom tiling array was used for the ChIP-on-chip assays. All microarray expression data from this study have been deposited into the ArrayExpress database.

Text S1: Mathematical Derivations. This document provides a complete mathematical description of the significance test used for the time-lagged correlation. In addition, it provides background information on the Gaussian kernel density estimation method and some key theorems supporting the derivation of the method. (0.23 MB PDF)

Figure S1: Selection of the number of clusters K used for K-means clustering. The cluster analysis was repeated for K varying between 18 and 50, with the BIC computed for each number of clusters. The optimal number of clusters, for which the BIC is minimized, was found to be K\u200a=\u200a32. The horizontal axis indicates the number of clusters. (0.15 MB TIF)

Figure S2: Differential expression profiles of gene clusters, in TLR-stimulated macrophages, across all microarray expression experiments. Each row represents an experiment, and each column represents a cluster. Clusters are displayed in the order that minimizes the sum of pairwise distances between adjacent clusters. (1.38 MB TIF)

Figure S3: Cluster-median differential expression profiles in wild-type macrophages stimulated with LPS show a diversity of time scales. Each data point shown is the median of the SDR-transformed (see Equation 1) differential expression levels of the genes within the indicated cluster, at the indicated time after stimulation. (0.34 MB TIF)

Figure S4: Cluster-median differential expression profiles in wild-type macrophages stimulated with Pam3CSK4 show a diversity of time scales. Each data point shown is the median of the SDR-transformed (see Equation 1) differential expression levels of the genes within the indicated cluster, at the indicated time after stimulation.
Cluster C26 shows sustained activation under this stimulus, as opposed to the case of stimulation with LPS.

Figure S5: Cluster-median differential expression profiles in wild-type macrophages stimulated with poly I:C show a diversity of time scales. Each data point shown is the median of the SDR-transformed (see Equation 1) differential expression levels of the genes within the indicated cluster, at the indicated time after stimulation. The core response clusters C27 and C28 are induced later in this time-course experiment than in the case of stimulation with LPS. (0.32 MB TIF)

Figure S6: Cluster-median differential expression profiles of wild-type macrophages stimulated with R848 show a diversity of time scales. Each data point shown is the median of the SDR-transformed (see Equation 1) differential expression levels of the genes within the indicated cluster, at the indicated time after stimulation. Cluster C26 shows sustained activation under this stimulus, as opposed to the case of stimulation with LPS. (0.34 MB TIF)

Figure S7: Discretized prior probability distribution P(\u03c4|\u223cH0) of observing an optimal time-lag \u03c4, for a gene pair that have a transcriptional regulatory interaction. Here, the symbol \u223cH0 denotes the complement of the null hypothesis, i.e., that there is a transcriptional regulatory interaction (this is denoted by an overbar in the main text). (0.26 MB TIF)

Figure S8: Histogram of time lag values that maximize the absolute time-lagged correlation coefficient, for randomly drawn pairs of non-transcription factor genes. The non-uniformity of the histogram shows the inherent bias in the standard method of selecting the optimal time lag, i.e., maximizing the absolute lagged correlation coefficient.
Time-lagged correlations could not be reliably estimated for time lags greater than 80 min, due to limited effective sample size for higher time lags. (0.22 MB TIF)

Figure S9: Time-lagged correlations in wild-type macrophages stimulated with LPS, for 38 pairs of transcription factor genes and gene clusters. The pairs all show high-significance time-lagged correlation based on the significance criterion Pexp \u2264 5\u00d710\u22123, and all satisfy the minimum average time lag criterion \u2329\u03b8\u232a \u2265 10 min. Differential expression levels are relative to wild-type unstimulated macrophages, with positive/negative values indicating upregulation/downregulation. The names of the TF gene and the correlated cluster are shown above each plot. The cluster expression level, shown in green, is the centroid from the K-means clustering algorithm. (0.52 MB TIF)

Figure S10: Combined plot showing (i) the histogram of \u2212log10 Pexp values for the significance of the time-lagged correlation, and (ii) the estimated false discovery rate, as a function of the \u2212log10 Pexp value. The Pexp values were computed for all possible pairs of transcription factor gene f and coexpressed gene cluster C. The histogram was generated using 40 bins. (0.27 MB TIF)

Figure S11: Histogram of positions of transcription factor binding site motif matches relative to the transcription start site. The median distance from the transcription start site is 537 bp. The density of motif matches can be seen to peak at \u221220 bp relative to the start site. (0.25 MB TIF)

Figure S12: Combined plot showing (i) the histogram of \u2212log10 Pscan values for enrichment of TFBS motifs within co-expressed gene clusters, and (ii) the estimated false discovery rate as a function of the \u2212log10 Pscan value.
The Pscan values were computed for all possible pairs of transcription factor gene f and cluster C, using the position-weight matrix associated with f that had the smallest enrichment P value for the promoters of the genes in cluster C. The histogram was generated using 40 bins. (0.28 MB TIF)

Figure S13: Combined evidence for each (f, C) pair. The solid line indicates the cutoff for the combined P value, at FDR\u200a=\u200a0.1. Data points to the lower left of the line have a Pcomb value smaller than the cutoff. (0.59 MB TIF)

Figure S14: The set of transcription factor genes has a 20-fold variation in out-degree (number of target genes) within the transcriptional network. (a) Estimated out-degree of transcription factor genes. The out-degree of a transcription factor gene is the number of genes estimated to be regulated by the transcription factor(s) associated with that TF gene. For each gene cluster with which a TF gene was associated, the number of genes within the cluster for which a motif match was found (corresponding to the TF gene) was tabulated. The number of target genes was summed over all clusters with which the TF was associated, based on the combined expression and promoter scanning data. (b) For each TF gene f implicated in the network, the minimum P value Pcomb of association with any cluster C was used as a measure of the overall significance of the association of the TF gene in the transcriptional network. Transcription factor genes are displayed in decreasing order of estimated out-degree (number of target genes). Transcription factors associated with larger clusters are seen to correlate with higher significances in the network, as a consequence of the sample-size dependence of the statistical tests used for the motif scanning and expression dynamics evidences. (0.51 MB TIF)

Figure S15:
Transcription factors involved in macrophage activation are highly interconnected in the protein interaction network, and the interacting TFs co-associate with clusters and share protein interactions. Nodes indicate TF genes whose transcript levels are differentially expressed in LPS-stimulated macrophages, and that are associated with the transcriptional network through the combination of scanning- and expression-based evidences. Node labels are gene names. A red node indicates upregulated gene expression under LPS, a green node indicates downregulation, and a purple node indicates transient up- and downregulation. A blue arc indicates that the human orthologs of the murine proteins associated with the murine TF genes connected by the arc have an interaction in the Human Protein Reference Database. A purple arrow indicates a known protein-DNA interaction between the source node's human ortholog protein and the promoter of the human ortholog of the gene indicated by the target node. Brown ellipses denote the core transcription factor complexes NF\u03baB and AP1. (0.64 MB TIF)

Figure S16: TGIF1 interacts with many members of the SMAD/AP-1 transcription complex. Shown here is a network diagram of 16 proteins that interact with the SMAD family of transcription factors SMAD1/2/3/6, the histone deacetylases HDAC1/2, and the TG-interacting factors TGIF1/2. Nodes indicate proteins, and a blue line between two nodes indicates that the human orthologs of the two proteins have an interaction in the Human Protein Reference Database (HPRD). (2.02 MB TIF)

Figure S17: Histogram of the cumulative density function of \u03c9|\u03c8, for the \u03c9 values for all sample points with \u03c8\u200a=\u200a80 min. Strict uniformity of this distribution (for each and every outcome \u03c8\u200a=\u200a\u03c4\u2208L) would imply that \u03c9 is totally independent of \u03c8. Here, conditioning on \u03c8 is seen not to introduce a significant bias in the distribution of \u03c9 values. (0.30 MB TIF)

Table S1: Summary of mutant mouse strains used in this study. Expression data from available mouse strains with mutations of known TLR signaling adapter molecules or known transcriptional regulators were included in the cluster analysis, in order to maximize the diversity of expression patterns in the data set used for clustering. Column 1 is the mutant strain name. Column 2 is the name of the molecule affected by the mutation. Column 3 gives the gene title. Column 4 briefly summarizes the relevance of the molecule in TLR-stimulated macrophages. (0.03 MB DOC)

Table S2: Stimuli used for macrophage gene expression experiments. Column 1 indicates the purified TLR agonist. Column 2 gives the description of the agonist. Column 3 indicates the receptor(s) that are stimulated by the agonist. Column 4 indicates the adapter molecule(s) associated with the receptor. Column 5 indicates the concentration used for in vitro stimulation of macrophages. (0.04 MB DOC)

Table S3: List of microarray experiments included in this study. Each row indicates a microarray experiment. Column 1 indicates the mouse strain, with \u201cWild-type\u201d indicating C57BL/6. Column 2 indicates the stimulus. Column 3 indicates the elapsed time post stimulation. Column 4 indicates the number of biological replicates combined in the experiment. Column 5 indicates whether the expression measurements for the experiment were used in identifying differentially expressed genes. Column 6 indicates if the experiment was used for the clustering analysis. Column 7 indicates if the experiment was used for time-lagged correlation (TLC) analysis.
The alternating shaded pattern for rows is used to visually distinguish between experiments from different genotypes. (0.21 MB DOC)

Table S4: Target genes with microarray expression data. This spreadsheet contains the replicate-combined probeset intensities for all 1,960 differentially expressed genes. Columns 10-104 provide the log2 intensity measurements of the probesets across all 95 microarray experiments; an earlier column gives the maximum log2 intensity observed across all experiments. (4.30 MB XLS)

Table S5: Differentially expressed transcription factor genes considered as possible regulators of co-expressed gene clusters in this study. Column 1 contains the gene symbol. Column 2 contains the NCBI Entrez GeneID for the gene. Column 3 contains the representative Affymetrix probeset selected for the gene. Column 4 contains the co-expressed gene cluster of which the transcription factor is a member. Column 5 contains the TRANSFAC position-weight matrices that are associated with the transcription factor (or TF component) coded for by this gene. (0.13 MB DOC)

Table S6: Summary of co-expressed gene clusters. Column 1 indicates the cluster name. Clusters were numbered in order of decreasing size. Column 2 indicates the number of genes in the cluster. Column 3 is a heat-map representation of the within-cluster median of the normalized differential expression intensity, over time, in wild-type macrophages stimulated with LPS. The color red indicates upregulation relative to wild-type unstimulated macrophages, and green indicates downregulation. Another column indicates the time for the within-cluster median log2 fold change to reach 25% of its extremal value. Column 6 lists the known cytokines and chemokines that are members of the indicated cluster. (0.13 MB DOC)

Table S7: The timing of induction of core response clusters C27 and C28 is adapter molecule-dependent. Column 1 indicates the stimulus. Column 2 indicates the microarray conditions compared, for example, fold-change (stimulated relative to unstimulated) in Myd88(\u2212/\u2212) macrophages vs. the fold-change in wild-type. Column 3 indicates the time post-stimulation. Columns 4 and 5 are the within-cluster medians of the log2 of the ratios for the condition comparison indicated in column 2, for the clusters C27 and C28, respectively. The data indicate that the early response of these clusters is largely dependent on the MyD88 signaling pathway, and that the later response (2 hours) is more strongly dependent on the TRIF signaling pathway. (0.03 MB DOC)

Table S8: Gene Ontology enrichments in co-expressed gene clusters. Column 1 indicates the cluster. Column 2 contains the Gene Ontology ID (GOID) for the GO term. Column 3 contains the GO term. Column 4 indicates the GO hierarchy to which the GO term belongs. Column 5 contains the \u2212log10 P value (significance) for the enrichment of the GO term in the indicated cluster. Column 6 contains the level of the GO term in the gene ontology hierarchy. Column 7 indicates the number of genes within the cluster that possess this GO term. Column 8 indicates the frequency at which this GO term appears in the set of all annotated genes in the genome. (0.54 MB XLS)

Table S9: Time-course macrophage stimulation microarray experiments used for time-lagged correlation analysis. Only time-course expression studies with a sufficient number of time points to admit time-lagged correlation analysis are shown. (0.04 MB DOC)

Table S10: Transcription factor genes with microarray expression data.
This spreadsheet contains microarray probeset intensities for all 80 differentially expressed transcription factor genes. Columns 11\u2013105 provide the log2 intensity measurements of the probesets across all 95 microarray experiments; an earlier column gives the maximum log2 intensity observed across all experiments. (0.21 MB XLS)

Table S11: Transcription factor binding site (TFBS) motif position-weight matrices, threshold scores, and number of matches for promoter TFBS motif searching. This spreadsheet contains the results from scanning the promoters of all genes in the reference set and in each co-expressed cluster for transcription factor binding site motifs from TRANSFAC, including the P values of the enrichments of the PWM matches within each of the 32 clusters. (0.15 MB XLS)

Table S12: Time-lagged correlation data for all gene pairs in which a motif associated with the TF gene was found to match within the promoter region of the target gene. Column 1 contains the transcription factor gene symbol. Column 2 contains the transcription factor gene's Affymetrix probeset ID. Column 3 contains the target gene symbol. Column 4 contains the target gene's Affymetrix probeset ID. Column 5 indicates the co-expressed gene cluster (1-32) of which the target gene is a member. Column 6 indicates the time-lagged correlation coefficient between the TF and the target genes, at the optimal time lag. Column 7 indicates the optimal time lag selected for the gene pair. Column 8 contains the score assigned to the motif match by MotifLocator. (7.84 MB XLS)

Table S13: Integrated evidence for predicted TF\u2013target gene pairs. Each row in the table shows integrated data sources for a specific gene target. Column 1 indicates the TF gene predicted to regulate the target cluster. Column 2 gives the probeset of the TF gene. Column 3 indicates the gene symbol of the target gene. Column 4 gives the target gene probeset. Column 5 gives the co-expressed cluster of which the target gene is a member. Column 6 gives the score for the best motif match for the indicated TF, within the promoter of the target gene. Column 7 indicates the P value Ptlc from time-lagged correlation. Column 8 indicates whether the gene's promoter region was represented on the promoter array. Column 9 indicates the ChIP-on-chip P value; a blank cell in this column indicates that no significant ChIP-on-chip binding was found. (0.05 MB XLS)

Table S14: ChIP-on-chip enrichment results for co-expressed gene clusters that are well-represented on the promoter array. Each row in the table gives results for the ChIP-on-chip assay for a particular cluster and a particular TF target, for all pairings of p50/NFKB1 and IRF1 with the nine clusters for which at least 30% of the member genes were represented on the tiling array. The first column indicates the TF target. The second column gives the cluster number. The third column gives the number of genes on the ChIP-on-chip array for which binding was observed upstream of the transcription start site. The fourth column gives the number of genes within the cluster that were represented on the ChIP-on-chip array. The fifth column gives the number of genes within the cluster that showed evidence of TF binding in the upstream region, in the ChIP-on-chip assay. The sixth column gives the fraction of genes in the cluster that are represented on the array. The seventh column gives the enrichment P value for the ChIP-on-chip hits within the cluster. (0.02 MB XLS)

Table S15: List of key materials and reagents. Column 1 indicates the type of material (mouse strain or stimulus reagent). Column 2 indicates the specific strain or reagent. For mutant mouse strains, the Mouse Genome Informatics accession number of the allele is provided. Column 3 indicates the source laboratory from which the mouse strain or reagent was obtained. (0.05 MB DOC)

Table S16: Selection cutoffs for representative probesets. Column 2 indicates the minimum log2 absolute probeset intensity that must have been recorded in at least one experiment, for the gene to be included in the selection described in Column 1. Column 3 indicates the false discovery rate used to determine the P value cutoffs for each of the seven time-course experiments used for differential expression testing.

Table S17: The total numbers of genes that possess gene ontology (GO) annotations, from each GO term hierarchy. Representative genes are selected from the set of annotated Affymetrix probesets as described in the Methods. (7.84 MB XLS)"}
+{"text": "Transcription factors (TFs) regulate downstream genes in response to environmental stresses in plants. Identification of TF target genes can provide insight on molecular mechanisms of stress response systems, which can lead to practical applications such as engineering crops that thrive in challenging environments. Despite various computational techniques that have been developed for identifying TF targets, it remains a challenge to make best use of available experimental data, especially from time-series transcriptome profiling data, for improving TF target identification.In this study, we used a novel approach that combined kinetic modelling of gene expression with a statistical meta-analysis to predict targets of 757 TFs using expression data of 14,905 genes in Arabidopsis exposed to different durations and types of abiotic stresses. Using a kinetic model for the time delay between the expression of a TF gene and its potential targets, we shifted a TF's expression profile to make an interacting pair coherent. We found that partitioning the expression data by tissue and developmental stage improved correlation between TFs and their targets. We identified consensus pairs of correlated profiles between a TF and all other genes among partitioned datasets. We applied this approach to predict novel targets of known TFs. Some of these putative targets were validated from the literature, for E2F's targets in particular, while others provide explicit genes as hypotheses for future studies.Our method provides a general framework for TF target prediction with consideration of the time lag between initiation of a TF and activation of its targets. The framework helps make significant inferences by reducing the effects of independent noises in different experiments and by identifying recurring regulatory relationships under various biological conditions. Our TF target predictions may shed some light on common regulatory networks in abiotic stress responses. 
Plants often respond and adapt to different environmental stresses, such as drought, cold and chemicals, through various transcriptional regulatory systems. Identification of TF targets is of particular interest in Arabidopsis thaliana, a model organism for plants. The relative target mRNA levels Tm' are stated in piecewise equations: on each time interval, the solution coefficients Ai and Bi can be obtained by solving sets of linear algebraic equations, and are functions of \u03b1i, \u00dfi, \u03b3, Kt and KP, subject to the contiguity restrictions Tmi'(t) = Tmi+1'(t) at t = ti for i = 1, \u2026, n - 1 (Equation (13)). After substituting Equation (12) into Equations (9), (10), (13) and (14), three parameters are involved in Equation (8) for each regulator-target pair: the target mRNA turnover rate Kt, the active regulator turnover rate KP, and \u03b3, which is equal to KactKtranRm,basal/Tm,basal. Kact represents the strength of the regulator protein's effect on the target gene; Ktran is the translation rate of the regulator mRNA. These lump together with the ratio of basal mRNA concentrations of regulator and target to form the parameter \u03b3, which determines the magnitude of the relative target mRNA level but not its shape. It is the parameters Kt and KP that determine the shape of the relative target mRNA level, such as how fast the target gene responds to the regulator. For gene expression experiments under stress conditions in plants, the kinetic model can be trained on known regulator-target pairs reported in the literature with a non-linear regression model, treating \u03b3 as a free model parameter (\u03b31 = n\u03b32 leads to Tm1' = nTm2' when the other parameters are kept the same in Equations (8), (9) and (10)). Therefore, only the two parameters Kt and KP are estimated from the non-linear regression model, and are used to predict other regulators and their targets in the plant stress response.
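The two-step kinetics described here (regulator mRNA to active regulator protein to target mRNA, with turnover rates Kt and KP and a magnitude-only factor \u03b3) can be sketched numerically. This is a minimal illustration, not the paper's closed-form piecewise solution: the forward-Euler integration, the pulse-shaped TF profile and all numeric values are assumptions for the example, and Kact/Ktran are lumped into gamma as the text describes.

```python
import numpy as np

def theoretical_target_profile(t, tf_mrna, Kt, KP, gamma=1.0):
    """Integrate a two-step kinetic model by forward Euler:
        active regulator: dR/dt = tf_mrna(t) - KP * R
        target mRNA:      dT/dt = gamma * R - Kt * T
    gamma scales the magnitude of the target profile but not its shape;
    Kt and KP set the shape, i.e. how fast the target responds (the lag).
    """
    grid = np.linspace(t[0], t[-1], 2000)
    dt = grid[1] - grid[0]
    tf = np.interp(grid, t, tf_mrna)          # piecewise-linear TF profile
    R = np.zeros_like(grid)
    T = np.zeros_like(grid)
    for i in range(1, grid.size):
        R[i] = R[i - 1] + dt * (tf[i - 1] - KP * R[i - 1])
        T[i] = T[i - 1] + dt * (gamma * R[i - 1] - Kt * T[i - 1])
    return np.interp(t, grid, T)              # sample back at the time points

# A transient TF pulse (hypothetical values): the predicted target profile
# peaks later, which is the time lag the model uses to shift TF profiles.
t = np.array([0.0, 0.5, 1.0, 3.0, 6.0, 12.0, 24.0])
tf = np.array([0.0, 2.0, 4.0, 1.0, 0.2, 0.0, 0.0])
target = theoretical_target_profile(t, tf, Kt=0.5, KP=1.0)
```

Because only the trend, not the absolute abundance, is expected to match a real target, such a theoretical profile is then compared to measured genes by correlation.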
The theoretical TF-target mRNA expression profiles are calculated for all the genes annotated as TFs and are substituted in place of the TFs' profiles during further computation of co-expression. The theoretical target profile of any TF, in terms of relative expression levels among different time points, is independent of the actual targets of that TF, as it is calculated solely from the kinetic model. According to the model, the theoretical target profile of a TF should match the profiles of its actual targets in the trend of expression, although not in the absolute abundance. With this assumption, we can use the Pearson correlation coefficient between the theoretical/shifted profile of a TF and the rest of the genes to find potential targets of this TF. We then used a statistical meta-analysis technique to identify recurring correlations across datasets. For a single dataset, the significance of the correlation r of a gene pair is assessed with the statistic T = r\u221a((n-2)/(1-r\u00b2)), where T is a t-random variable with n-2 degrees of freedom and n is the number of conditions of the gene expression profiles (Equation (15)). Since we assume that the datasets are obtained independently, we apply the inverse chi-square method and obtain the meta chi-square statistic \u03c7\u00b2 = -2 \u03a3 ln(Pi) (Equation (16)), where Pi is the p-value obtained from the ith data set for a given gene pair as defined in Equation (15). When there is no linear correlation between a gene pair in any of the multiple datasets, this statistic follows a chi-square distribution with 2n degrees of freedom, and hence the p-value for the meta-analysis can be obtained from that distribution (Equation (17)). We also calculate the significance level of the gene pair from the multiple datasets: the significance level of a gene pair is the count of datasets in which that gene pair has a significant correlation based on Equation (15). We used the meta p-value statistic (Equation (17)) combined with the significance level to rank potential targets for a TF. The meta p-value combined with the significance level and the Pearson correlation coefficient were used as co-expression statistics for finding putative targets for a TF.
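The combination of per-dataset correlation tests via the inverse chi-square method can be sketched in a self-contained way. The example p-values are hypothetical; the closed-form chi-square survival function for even degrees of freedom is used so that no statistics library is needed:

```python
import math

def corr_t_stat(r, n):
    """T = r*sqrt((n-2)/(1-r^2)) is t-distributed with n-2 degrees of
    freedom when a gene pair is uncorrelated (n = number of conditions)."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

def fisher_meta(pvals):
    """Inverse chi-square (Fisher) combination: X2 = -2 * sum(ln Pi)
    follows a chi-square distribution with 2n degrees of freedom under
    the null of no correlation in any dataset. For even degrees of
    freedom the survival function has the closed form
    exp(-x/2) * sum_{j<n} (x/2)^j / j!."""
    n = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    half = x / 2.0
    meta_p = math.exp(-half) * sum(half**j / math.factorial(j) for j in range(n))
    return x, meta_p

# Hypothetical per-dataset p-values for one TF-target pair in 3 partitions:
chi2, meta_p = fisher_meta([0.02, 0.01, 0.03])
```

Three individually modest p-values combine to a much smaller meta p-value, which is the intended effect of pooling independent evidence across partitioned datasets.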
For a single dataset (without partitioning of the microarray data), we ranked all the potential targets of a TF based on the Pearson correlation coefficient and selected targets such that the TF-target correlation > 0.75 (medium size network) or 0.70 (large size network). For multiple datasets, we ranked all TF-target pairs based on the number of individual p-values that are smaller than 0.01 across the multiple datasets; pairs that have the same number of significant p-values were ranked by the corresponding meta chi-square statistic defined in Equation (16). Here we used the meta chi-square instead of the meta p-value, since the meta p-values for many gene pairs are very close to zero and hard to distinguish computationally; meta chi-square and meta p-value result in the same order when the degrees of freedom are the same for each gene pair. In the end, a fixed number of TF-target pairs was selected based on the ranking. In the case of meta-analysis, the number of target genes for a TF was determined in one of two ways: (1) selecting a fixed number of targets from the top of the ranking (50 or 75), or (2) choosing targets from the top-ranked genes that show a significant correlation as a TF-target pair in at least a certain number of the microarray datasets used for meta-analysis. For example, we used a significance cutoff of 9 (out of 9 datasets) for the small network, a cutoff of 8 (out of 9) for the medium network and a cutoff of 7 (out of 9) for the large network. The second method worked better in general. The authors declare that they have no competing interests. JL and PL conceived the initial study and prepared relevant data and their preprocessing. GPS and DX designed the statistical method. PL and JL implemented the kinetic model. GPS and DX performed the data analyses.
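The two-level ranking just described (count of datasets with p < 0.01 first, meta chi-square as tie-breaker) can be sketched as follows; the TF name, gene names and p-values are made up for illustration:

```python
import math

# Hypothetical per-dataset p-values (as from Equation (15)) for three
# candidate targets of one TF across 3 partitioned datasets.
pairs = {
    ("TF_A", "g1"): [0.001, 0.004, 0.20],   # significant in 2 of 3 datasets
    ("TF_A", "g2"): [0.002, 0.008, 0.003],  # significant in 3 of 3
    ("TF_A", "g3"): [0.50, 0.90, 0.04],     # significant in 0 of 3 at 0.01
}

def rank_key(pvals, alpha=0.01):
    n_sig = sum(p < alpha for p in pvals)            # "significance level"
    chi2 = -2.0 * sum(math.log(p) for p in pvals)    # meta chi-square, Eq. (16)
    return (n_sig, chi2)                              # chi2 only breaks ties

ranked = sorted(pairs, key=lambda pair: rank_key(pairs[pair]), reverse=True)
```

Sorting on the chi-square rather than the meta p-value sidesteps the numerical underflow issue mentioned above while giving the same order for equal degrees of freedom.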
All authors wrote the manuscript. Additional files: a list of the identified 178 putative E2F-target genes; a complete list of putative targets for each TF; a list of the 27 different microarray experiments, of which 10 are time series."}
+{"text": "As an alternative to the frequently used \"reference design\" for two-channel microarrays, other designs have been proposed. These designs have been shown to be more profitable from a theoretical point of view (more replicates of the conditions of interest for the same number of arrays). However, the interpretation of the measurements is less straightforward and a reconstruction method is needed to convert the observed ratios into the genuine profile of interest (e.g. a time profile). The potential advantages of using these alternative designs thus largely depend on the success of the profile reconstruction. Therefore, we compared to what extent different linear models agree with each other in reconstructing expression ratios and corresponding time profiles from a complex design.On average the correlation between the estimated ratios was high, and all methods agreed with each other in predicting the same profile, especially for genes of which the expression profile showed a large variance across the different time points. Assessing the similarity in profile shape, it appears that the more similar the underlying principles of the methods (model and input data), the more similar their results. Methods with a dye effect seemed more robust against array failure. The influence of a different normalization was not drastic and was independent of the method used.Including a dye effect such as in the methods lmbr_dye, anovaFix and anovaMix compensates for residual dye related inconsistencies in the data and renders the results more robust against array failure. Including random effects requires more parameters to be estimated and is only advised when a design is used with a sufficient number of replicates. Because of this, we believe lmbr_dye, anovaFix and anovaMix are most appropriate for practical use. Microarray experiments have become an important tool for biological studies, allowing the quantification of thousands of mRNA levels simultaneously. 
They are being customarily applied in current molecular biology practice.In contrast to the Affymetrix based technology, in two-channel microarray assays mRNA extracted from two conditions is hybridised simultaneously on a given microarray. Which conditions to pair on the same array is a non-trivial issue and relates to the choice of the \"microarray design\". The most intuitively interpretable and frequently used design is the \"reference design\", in which a single, fixed reference condition is chosen against which all conditions are compared. Alternatively, other designs have been proposed (e.g. a loop design). From a theoretical point of view, these alternative designs usually offer, at the same cost, more balanced measurements in the number of replicates per condition than a common reference design. They are thus, based on theoretical issues, potentially more profitable. When focusing on profiling the changes in gene expression over time, the factor of interest is the time profile. Several profile reconstruction methods are available for complex designs. They all rely on linear models, and for the purpose of this study, we subdivided them into \"gene-specific\" and \"two-stage\" methods. Gene-specific profile reconstruction methods apply a linear model to each gene separately. The underlying linear model is usually designed only for reconstructing a specific gene profile from a complex design, not for normalizing the data. As a result, normalized log-ratios are used as input to these methods (see 'Methods'). Examples of these methods are described by Vinciotti et al. (2005) and Smyth et al. (2004) (Limma).So far, comparative studies have focused on the ability of different methods to reconstruct \"genes being differentially expressed\" from different two-color array based designs. We compared to what extent five existing profile reconstruction methods were able to reconstruct similar profiles from data obtained by two-channel microarrays using either a loop design or an interwoven design. We assessed similarities between the methods, their sensitivity towards using alternative normalizations and their robustness against array failure. A spike-in experiment was used to assess the accuracy of the ratio estimates.We compared to what extent the different methods agreed with each other in 1) estimating the changes in gene expression relative to the first time point (i.e. the log-ratios of each single time point and the first time point) and 2) estimating the overall gene-specific profile shapes. Results were evaluated using two test sets, each of which represents a different complex design.The first dataset was a time series experiment consisting of 6 time points measured on 9 arrays using an interwoven design (Figure ). The balance with respect to the dyes (present in the loop design) ensures that the effect of interest is not confounded with other sources of variation. In this study, the effect of interest corresponds to the time profile. The replication (as present in the interwoven design) improves the precision of the estimates and provides the essential degrees of freedom for error estimation.We first assessed to what extent the different methods agreed with each other in estimating similar log-ratios for each single gene at each single time point. To this end, we calculated the overall correlation per time point between the gene expression ratios estimated by each pair of two different methods. 
For this loop design, the ratio estimates T3/T1 and T4/T1 obtained by each of the different methods are overall more correlated than the estimates of T5/T1 and T6/T1, respectively (Table ). As can be expected, direct estimates, i.e. estimates of a ratio for which the measurements were assessed on the same array, agree more closely between methods. Changes in the input data and alterations in the underlying model (including a dye or random effect) are confounded in affecting the final result. Therefore, in order to assess in more detail the specific effect of including either a dye or a random effect in the model, we compared results between methods that share the same input data.To assess the influence of including a dye effect on profile estimation, we compared the results of the gene-specific methods with and without a dye effect. Some genes show little variation in expression over the time course, exhibiting a \"flat\" profile. We wondered whether removing such flat genes with a noisy profile would affect the similarity in profile estimation between the different methods. Indeed, because the cosine similarity with centering only measures the similarity in profile shape, regardless of its absolute expression level, the higher level of similarity we observe between the methods might be due to a high level of random correlation between the \"flat\" profiles. Therefore, we applied a filtering procedure by removing those genes for which the profile variance over the different time points was lower than a certain threshold. The similarity was assessed for any pair of profile estimates corresponding to the same gene if at least one of the two profiles passed the filter threshold (Table ).Overall, the results obtained with each of the different variance thresholds confirmed these observations. In practice, when performing a microarray experiment some arrays might fail, with their measurements falling below standard quality. When these bad measurements are removed from the analysis, the complete design and the results inferred from it will be affected. Here we evaluated this issue experimentally by simulating array defects. 
In a first experiment, the interwoven design (dataset 1) was considered as the original design without failure. We tested 9 different possible situations of failure, by each time removing a single array from the design, resulting in 9 reduced datasets. The same test was performed with the loop design (dataset 2).We compared, for each of the different profile reconstruction methods, the mean similarity between the ratios obtained either with the full dataset or with each of the reduced datasets (9 comparisons; Table ). For the interwoven design, it appeared that the methods including a dye effect were more robust against array failure. For the loop design, the situation was quite different (Table ). Note that overall, all methods seem to be more robust to array failure under the interwoven design than under the loop design. This is to be expected, as the interwoven design contains more replicates.In the previous section we compared profiles and ratio estimates obtained by the different methods after applying default normalization steps. However, other normalization strategies are possible, and could potentially affect the outcome. To assess the influence of using alternative normalization procedures, we compared profiles reconstructed from data normalized with 1) print tip Loess without an additional normalization step (the default setting for anovaMix and anovaFix as used throughout this paper), 2) print tip Loess with a scale-based normalization between arrays, and 3) print tip Loess with an additional quantile normalization step (the default for the gene-specific methods).So far we only assessed to what extent changes in the used methodologies or normalization steps affected the inferred profiles. This, however, does not give any information on the accuracy of the methods, i.e., which of these methods is able to best approximate the true time profiles. Assessing the accuracy is almost impossible, as usually the true underlying time profile is not known. However, datasets that contain external controls (spikes) could prove useful in this regard. 
Spikes are added to the hybridisation solution in known quantities, so that we have a clear view of their actual profile. In the following analysis, we used a publicly available spike-in experiment in an attempt to assess the accuracy of each of the profile reconstruction methods. As lmbr, lmbr_dye and limmaQual gave exactly the same results using this balanced design, we further assessed to what extent lmbr, anovaFix and anovaMix agreed with each other (Fig. ).In this study, we evaluated the performance of five methods based on linear models in estimating gene expression ratios and reconstructing time profiles from complex microarray experiments. From a theoretical viewpoint, two major differences can be distinguished between the methods selected for this study: 1) differences related to alterations in the input data: the selected two-stage methods make use of the log-intensity values while the gene-specific methods use log-ratios; 2) differences related to the model characteristics: some of the models include an explicit dye effect or an explicit random effect (anovaMix).In general we observed that gene-specific methods without dye effects behaved similarly to one another, as did two-stage models with a dye effect, while the agreement between the two groups was lower. Lmbr_dye (a gene-specific model with dye effect) is situated somewhere in between when the design is unbalanced with respect to the dyes. Indeed, the gene-specific models lmbr and limmaQual contain a combination of log-ratios plus an error term. However, when adding a dye effect to these models, as is the case for lmbr_dye, the formulations and estimations converge with those of the two-stage ANOVA models for unbalanced designs. Originally, the dye effect was added to these models by Vinciotti et al. (2005) and Wit et al. (2005). 
We observed that all five tested linear methods generated biased estimations, consistently overestimating changes in expression relative to a reference with low mRNA concentration. These results were independent of the method used (gene-specific or two-stage) and of the number of effects included in the model.On average the correlation between the estimated ratios was high, and all methods more or less agreed with each other in predicting the same profile. The similarity in profile estimation between the different methods improved with an increasing variance of the expression profiles.We observed that when dealing with unbalanced designs, including a dye effect, such as in the methods lmbr_dye, anovaFix and anovaMix, seems to compensate for residual dye related inconsistencies in the data. Adding a dye effect also renders the results more robust against array failure. Including random effects requires more parameters to be estimated and is only advised when a design is used with a sufficient number of replicates.In conclusion, because of their robustness against imbalances in the design and array failure, we believe lmbr_dye, anovaFix and anovaMix are most appropriate for practical use (given a sufficient number of replicates in the case of the latter).The first dataset used in this study was a temporal Xenopus tropicalis expression profiling experiment. The array used consisted of 2999 oligos of 50 mers, corresponding to 2898 unique X. tropicalis gene sequences and negative control spots. Each oligo was spotted in duplicate on each array in two separated grids. On each grid, oligonucleotides were spotted in 16 blocks of 14 \u00d7 14 spots. Pairs of duplicated oligos on the two grids corresponding to the same gene sequence were treated as replicates during analysis, corresponding to a total of 2999 different duplicated measurements (a few oligos were spotted multiple times on the arrays). MWG Biotech performed oligonucleotide design, synthesis and spotting. X. tropicalis gene sequences were derived from the assembly of public and in-house expressed sequence tags. The temporal expression of X. tropicalis during metamorphosis was profiled at 6 time points, using an experimental design consisting of 9 arrays. Each time point was measured three times, with alternating dyes as shown in Figure . From this original design, a second test set containing a smaller loop design was derived by picking the combination of five arrays that connect five time points in a single loop (Figure ). A publicly available spike-in experiment was also used.The microarray design used for the spike-in experiment was a common reference design, with a dye swap for each condition, and the concentrations of spikes ranged from 0 to 10,000 copies per cellular equivalent (cpc), assuming that the total RNA contained 1% poly(A) mRNA and that a cell contained on average 300,000 transcripts. This concentration range covered all biologically relevant transcript levels.10 \u03bcg of total RNA were used to prepare probes. Labeling was performed with the Invitrogen SuperScript\u2122 Indirect cDNA labeling system (using polyA and random hexamer primers) using the Amersham Cy3 or Cy5 monofunctional reactive dyes. Probe quality was assessed on an agarose minigel and quantified with a Nanodrop ND-1000 spectrophotometer. Dye quantities were equilibrated for hybridization by the amount of fluorescence per ng of cDNA. The arrays were hybridized for 20 h at 45\u00b0C according to the manufacturer's protocol (QMT ref). Washing was performed in 2\u00d7 SSC 0.1% SDS at 42\u00b0C for 5' and then twice at room temperature in 1\u00d7 SSC, 0.5\u00d7 SSC, each time for 5'. Arrays were scanned using a GenePix Axon scanner.The raw intensity data were used for further normalization. No background subtraction was performed. 
Data were log-transformed and the intensity-dependent dye or condition effects were removed by using a local linear fit (loess) on these log-transformed data (Printtiploess command with default settings as implemented in the limma BioConductor package). The corrected red (RCORR) and green (GCORR) channels were calculated from the Loess corrected log-ratios (MCORR) and mean absolute intensities (A) as follows: RCORR = (A + MCORR)/2, and GCORR = (A - MCORR)/2.For the gene-specific methods, Loess corrected log-ratios (per print tip) were subjected to an additional quantile normalization step. Available R implementations (BioConductor) of the different methods were used.Gene-specific profile reconstruction methods apply a linear model on each gene separately. The goal is to estimate, from the observed log-ratios, the true expression differences between the mRNA of interest and the reference mRNA. The presented models assume that the expression values have been appropriately pre-processed and normalized. The methods are:1) lmbr, the linear model described by Vinciotti et al. (2005): an observation yjk is the log-ratio of condition j and condition k. For each gene, a vector of n observations y can be represented as y = X\u03bc + \u03b5, where X is the design matrix defining the relationship between the values observed in the experiment and a set of independent parameters \u03bc.2) lmbr_dye, an extension of lmbr including a general dye effect: the previous model can be extended to include a gene-specific dye effect, y = X\u03bc + D\u03b4 + \u03b5, where D is a vector of n ones and \u03b4 is a constant value representing the gene-specific dye effect. Alternatively, one could write y = XD\u03bcD + \u03b5, where XD is the design matrix X with an extra column of ones. Note that in the case of dye-balanced designs, the addition of a dye effect will not yield any different estimators for the contrasts of interest. In a balanced design, each column of X will have an equal amount of 1's and -1's, i.e. the ith column of X, corresponding to the true expression difference \u03bci1, reflects how condition i was measured an equal number of times with both dyes. As such, the positive and negative influences of the dye effect will cancel each other out in the estimation of the true expression differences. The use of lmbr_dye will thus only render different results compared to lmbr when using it to analyze unbalanced experiments.In order to estimate all parameters, the matrix XD must be of full rank. If the column representing the dye effect is not linearly independent, the matrix is rank deficient. This situation occurs, for example, when an array is removed from the loop design used in this paper. In this case, there are an infinite number of possible least squares parameter estimates. Since we expect a single set of parameters, a constraint must be applied (this is done on the dye effect), in which case the true expression estimates are the same as for lmbr.Lmbr and lmbr_dye were implemented in the R language using the function 'lm' for linear least squares regression.3) limmaQual, the Limma model including array quality weights. The quality adjustment assigns a low weight to poor quality arrays, which can then still be included in the inference. The approach is based on the empirical reproducibility of the gene expression measures from replicated arrays, and it can be applied to any microarray experiment. The linear model is similar to the model described by Vinciotti et al. (2005), but the variance of the observations y includes the weight term; in this case the weighted least squares estimator of \u03bc is (X^T \u03a3^-1 X)^-1 X^T \u03a3^-1 y, where \u03a3 is the diagonal matrix of weights.The weights in the limmaQual model are the inverse of estimated array variances, down-weighting the observations of lower quality arrays that would otherwise decrease the power to detect differential expression. The method is restricted for use on data from experiments that include at least two degrees of freedom. 
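The contrast between lmbr and lmbr_dye can be made concrete with a toy dye-balanced design: as noted above, adding the column of ones for the dye effect leaves the contrast estimates unchanged when each condition is measured equally often with both dyes. This numpy sketch uses hypothetical, noise-free log-ratios for clarity:

```python
import numpy as np

# Four two-colour arrays measuring log-ratios in a loop with dye swaps.
# Rows = arrays (T2/T1, T1/T2, T3/T1, T1/T3); columns = contrasts (mu21, mu31).
X = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
mu_true = np.array([2.0, -1.0])       # true expression differences vs T1
delta_true = 0.3                      # a gene-specific dye effect
y = X @ mu_true + delta_true          # observed log-ratios (noise-free)

# lmbr: y = X mu + eps (no dye term)
mu_lmbr, *_ = np.linalg.lstsq(X, y, rcond=None)

# lmbr_dye: y = [X | 1] (mu, delta) + eps -- design matrix with a ones column
XD = np.column_stack([X, np.ones(len(y))])
est, *_ = np.linalg.lstsq(XD, y, rcond=None)
mu_dye, delta_hat = est[:2], est[2]
```

Because each column of X sums to zero in this balanced design, the dye effect cancels out of the lmbr contrasts, and both fits recover the same expression differences; only lmbr_dye additionally recovers the dye effect itself.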
When testing array failure in the case of the loop design, there is no array-level replication for two of the conditions, so the array quality weights cannot be estimated: the Limma function returns a perfect quality for all the arrays.The fit is by generalized least squares, allowing for correlation between duplicate spots or related arrays, implemented in an internal function (gls.series) of the Limma package.The two-stage methods are based on ANOVA models. They can normalize microarray data and provide estimates of gene expression levels that are corrected for potential confounding effects.Since the global methods are computationally time-consuming, we selected two-stage methods that apply a first stage on all data simultaneously and a second stage on a gene-by-gene level. These models use partially normalized data as input, as spot effects are explicitly incorporated. They return normalized absolute expression levels for each channel separately (i.e. no ratios), which can then be used to reconstruct the required time profile.4) anovaFix, two-stage ANOVA with fixed effects: we denote the loess-normalized log-intensity data by yijkgr, which represents the measurement observed on array i, labeled with dye j, representing time point k, from gene g and spot r. The first stage is the normalization model yijkgr = \u03bc + Ai + Dj + ADij + rijkgr, where the term \u03bc captures the overall mean. The other terms capture the overall effects due to arrays (A), dyes (D) and labelling reactions (AD). This step is called the \"normalization step\" and it accounts for systematic experimental effects that could bias inferences made on the data from the individual genes. The residual r of the first stage is the input for the second stage, which models the gene-specific effects: rijkgr = Gg + SGrg + DGjg + VGkg + \u03b5ijkgr. Here G captures the average effect of the gene. The SG effect captures the spot-gene variation and we used it instead of the more global AG array-gene effect. The use of this effect obviates the need for intensity ratios. 
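The two-stage idea can be sketched schematically: stage one removes the global array (A), dye (D) and array-dye (AD) effects from the log intensities, and the residuals are what stage two models per gene. This is only an illustration of the decomposition on random data, not MAANOVA's actual estimator (which fits spot-gene and variety-gene terms and, for anovaMix, uses REML):

```python
import numpy as np

rng = np.random.default_rng(1)
n_arrays, n_dyes, n_genes = 3, 2, 5
y = rng.normal(size=(n_arrays, n_dyes, n_genes))   # loess-normalized log intensities

# Stage 1 (normalization model): y = mu + A_i + D_j + AD_ij + residual
grand = y.mean()
A = y.mean(axis=(1, 2), keepdims=True) - grand             # array effects
D = y.mean(axis=(0, 2), keepdims=True) - grand             # dye effects
AD = y.mean(axis=2, keepdims=True) - grand - A - D         # labelling (array x dye)
resid = y - (grand + A + D + AD)                           # input to stage 2

# Stage 2 (gene models) then fits per-gene terms on these residuals, e.g.:
G = resid.mean(axis=(0, 1))     # average gene effect
DG = resid.mean(axis=0) - G     # dye-gene variation
```

After stage one, every array/dye combination has zero mean across genes, so the gene-level model in stage two is no longer confounded with the global experimental effects.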
DG captures the specific dye-gene variation, and VG (variety-gene) is the effect of interest: the effect due to the time point measured. The MAANOVA fixed model computes least squares estimators for the different effects.5) anovaMix, two-stage ANOVA with mixed effects: the model applied is exactly the same as for anovaFix, but in this case the SG effect is treated as a random variable, meaning that if the experiment were to be repeated, the random spot effects would not be exactly reproduced, but would be drawn from a hypothetical population. A mixed model, where some variables are treated as random, allows for including multiple sources of variation.We used the default method to solve the mixed model equation, the REML (restricted maximum likelihood) method. Duplicated spots were treated as independent measurements of the same gene. For the MAANOVA and Limma packages the option to do so is available; for lmbr and lmbr_dye, duplicated spots were taken into account by the design matrix.Applying the gene-specific methods mentioned above results in estimated differences in log-expression between a test and a reference condition, i.e. in log-ratios. To reconstruct a time profile from the different designs, the first time point was chosen as the reference. A gene-specific reconstructed profile thus consists of a vector which contains as entries the ratios of the measured expression level of that gene at each time point except the first, relative to its expression value at the first time point (for instance, for the loop design shown in Table , the ratios of time points 2-5 relative to the first). In contrast to the gene-specific methods, two-stage methods estimate the absolute gene expression level for each time point rather than log-ratios; in this case, the estimated absolute levels are converted into such ratios relative to the first time point.To assess the influence of using different methodologies on profile reconstruction, the following similarity measures were used to compare the consistency in reconstructing profiles for the same gene between the compared methods:1. 
Overall similarity in the estimated ratios: we assessed the similarity between the estimates of each single ratio of the time profile generated by two methods using the Pearson correlation. Since two-stage methods estimate gene expression levels (the variety-gene effect in the model) instead of log-ratios, we converted these absolute values into log-ratios by subtracting, from the absolute expression levels estimated for each of the conditions, the estimated level of the first time point (the reference).2. Profile shape similarity: the profile shape reflects the expression behaviour of a gene over time. For each single gene, we computed the mutual similarity between profile estimates obtained by any combination of two methods. To make the profiles consisting of log-ratios obtained by the gene-specific methods comparable with the profiles estimated by the two-stage methods, we extended the log-ratio profile by adding a zero as the first time point; this represents the contrast between the expression value of the first time point against itself in log-scale. The cosine similarity ranges between -1 and 1 (similar shape), with 0 being no correlation. The cosine similarity only considers the angle between the vectors, focusing on the shape of the profile. As a result, it ignores the magnitude of the ratios of the profiles, resulting in relatively high similarities for false positives.No variance normalization was performed on the profiles, to preserve their shape. Instead of normalizing by the variance, the profiles were filtered using the standard deviation.Constitutively expressed genes, or genes for which the expression did not significantly change over the conditions, were filtered by removing genes of which the variance in expression over the different conditions was lower than a fixed threshold. A pairwise similarity comparison was made for all profile estimates (corresponding to the same gene) that were above the filtering threshold in at least one of the two methods compared. 
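The profile-shape comparison and the variance filter can be sketched as follows. The profiles are hypothetical, the SD threshold is an arbitrary choice for the example, and this variant omits the centering mentioned elsewhere in the text:

```python
import numpy as np

def profile_shape_similarity(ratios_a, ratios_b):
    """Cosine similarity between two reconstructed time profiles of
    log-ratios relative to the first time point. A zero is prepended so
    profiles from gene-specific methods (log-ratios) and two-stage
    methods (converted levels) line up on the same time axis."""
    a = np.concatenate([[0.0], ratios_a])
    b = np.concatenate([[0.0], ratios_b])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def passes_filter(ratios, sd_threshold=0.2):
    """Standard-deviation filter removing 'flat' profiles whose apparent
    shape is mostly noise."""
    return float(np.std(ratios)) >= sd_threshold

method1 = np.array([0.8, 1.1, 0.9, 1.4, 1.2])     # T2/T1 .. T6/T1, one method
method2 = np.array([0.7, 1.2, 1.0, 1.3, 1.1])     # same gene, another method
flat    = np.array([0.01, -0.02, 0.02, 0.0, 0.01])
```

Because the cosine measure ignores magnitude, a flat noisy profile can still score high against another profile, which is exactly why the SD filter is applied before the pairwise comparison.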
Similar results were obtained when applying as a filter that all profile estimates had to be above the filtering threshold in both methods compared (data not shown).

AF performed the analysis and wrote the manuscript. RT and NP were responsible for the microarray hybridization. NP and GB critically read the draft. KE contributed to the analysis. KM coordinated the work and revised the manuscript. All authors read and approved the final manuscript.

Plot of corresponding ratios estimated by two linear methods. Comparison of corresponding ratios estimated by lmbr and anovaMix using the loop design. The line indicates the identity between both methods and most of the points are situated near this identity line.

Pairwise correlation between ratios estimated for the interwoven design. The table shows the pairwise correlation between ratios estimated by each pair of methods (columns 1 and 2) for the interwoven design. The ratios correspond to the change in expression compared to the first time point. The last column corresponds to the mean correlation of the 5 estimations.

Mean similarity between profiles using different filtering thresholds. Values in the table correspond to the similarity between any two methods, expressed as the mean profile similarity of the genes. Results are shown for both the interwoven and loop design using different filtering thresholds. Since the loop design is balanced with respect to the dyes, the results for lmbr and lmbr_dye were the same (see 'Methods' section), which is why they are not treated differently.
A) No filtering applied, similarity is assessed for all 2999 profile estimates; B) a filtering threshold (SD) is used on all profiles estimated by each of the methods, and a pairwise similarity comparison is made for all profile pairs (corresponding to the same gene) estimated by the two methods compared, for which at least one profile is above the filtering threshold.

Effect of array failure for the interwoven design. The table shows the effect of array failure in reconstructing profiles from an interwoven design. Profile similarities were assessed using the cosine similarity. The different methods for which the influence of the failure was assessed are represented in the columns. Each row shows the mean cosine similarity between the corresponding profiles estimated from the complete design and those obtained from a defective design (where one array was removed compared to the complete design). Mean: shows the overall mean similarity for a given method.

Effect of array failure for the loop design. The table shows the effect of array failure in reconstructing profiles from a loop design. Profile similarities were assessed using the cosine similarity. The different methods for which the influence of the failure was assessed are represented in the columns. Each row shows the mean cosine similarity between the corresponding profiles estimated from the complete design and those obtained from a defective design (where one array was removed compared to the complete design). Mean: shows the overall mean similarity for a given method."}
+{"text": "Time-course microarray experiments can produce useful data which can help in understanding the underlying dynamics of the system. Clustering is an important stage in microarray data analysis where the data is grouped together according to certain characteristics. The majority of clustering techniques are based on distance or visual similarity measures, which may not be suitable for clustering of temporal microarray data, where the sequential nature of time is important. We present a Granger causality based technique to cluster temporal microarray gene expression data, which measures the interdependence between two time-series by statistically testing whether one time-series can be used for forecasting the other. A gene-association matrix is constructed by testing temporal relationships between pairs of genes using the Granger causality test. The association matrix is further analyzed using a graph-theoretic technique to detect highly connected components representing interesting biological modules. We test our approach on synthesized datasets and real biological datasets obtained for Arabidopsis thaliana. We show the effectiveness of our approach by analyzing the results using the existing biological literature. We also report interesting structural properties of the association network commonly desired in any biological system. Our experiments on synthesized and real microarray datasets show that our approach produces encouraging results. The method is simple to implement and is statistically traceable at each step. The method can produce sets of functionally related genes which can be further used for reverse-engineering of gene circuits. Keeping in mind that the final goal of microarray data analysis is the identification of interactions between genes at the third level, the quest for this goal should ideally start when the data is being grouped together at the clustering stage.
One of the ultimate goals of all gene clustering algorithms is to discover the underlying gene pathways representing the biological processes. Genes lying in the same pathway are often activated or depressed simultaneously or sequentially upon receiving stimuli. The biological signal is typically transmitted through intermediate gene interactions due to physical or chemical activities. The simultaneous or sequential activation, or depression, is delineated by the underlying network connection patterns. In this paper, we present a novel approach for clustering of temporal microarray data based on the notion of temporal interaction between the genes. The temporal recording of gene expression provides an excellent opportunity to view the gene profiles with respect to time and helps in understanding the underlying causal processes driving the behavior of the genes and, in turn, of the system. Like any dynamical system, in a system with a temporal expression profile, time plays a crucial role in the way the system behaves. The primary hypothesis behind the approach presented in this paper is: the observed effect on any gene is due to some cause propagated over time. The observed expression of a gene could be due to the effect of other genes present in the system, which may be activating or inhibiting the gene under observation with different time-lags. In other words, we perceive the system as a set of interacting entities, where each entity is a stochastic process and the interactions are temporal activities taking place between pairs of processes.

Microarrays allow simultaneous measurement of thousands of genes in a short span of time. This provides abundant opportunities for scientists to detect and experimentally validate the hypotheses that the data might generate.
Microarray experiments have traditionally focused on measurement of gene expression at a single time point and are increasingly being applied to measure expression levels across multiple time points. Such time-course measurements can help in gaining insights into the dynamics of gene interactions. A functional module can be defined as a separate substructure of a network having a group of genes or their products that are related by physical or genetic interactions. In the graph-theoretic sense, a functional module can be represented by highly connected regions in a network, where functions are predicted using the connections in the graph, based on the assumption that genes which lie close to one another are more likely to have similar functions or to constitute gene complexes. A system with such behavior is a widely accepted concept in Economics and Neuroscience, where Granger proposed a statistical test for causality between time-series; we adopt this test in our work. There are many clustering techniques proposed for clustering of gene expression data. However, the majority of these techniques do not take into account the sequential nature of time-series data, and thus are inappropriate for clustering such datasets. The earlier proposed approaches can broadly be divided into three categories. 1. Point-wise distance based methods - group genes by minimizing an objective function based on a distance measure computed between gene pairs. The distance measure could be the Euclidean distance, mutual information, correlation, or their respective variants. 2. Feature based clustering methods - aim at detecting salient features and local or global shape characteristics of the expression profiles. As opposed to a distance based similarity measure, looking for general shape among the gene profiles can uncover more intricate relationships, such as time shifts and inversions in expression profiles; Ji and Tan proposed one such approach. 3. Model based clustering methods - shift the similarity emphasis from the data to the unknown model that describes the data.
Such methods are based on statistical mixture models which assume that the data is generated by a finite mixture of underlying probability distributions, with each component corresponding to a distinct cluster.

The method proposed in this paper for clustering of temporal gene expression data takes advantage of the essential behaviour of the Granger causality test, which determines whether one time-series is useful in forecasting another. The network obtained after applying the Granger causality test represents the association between gene pairs which pass the test. In order to detect potentially functionally related genes, we use a graph-theoretic technique to detect dense regions in the association network. Our approach shows that the detection of dense regions, in combination with the Granger causality test, plays an equally important role in the proposed clustering technique. The method is tested using both synthetic datasets and real datasets obtained to monitor senescence in Arabidopsis thaliana. To the best of our knowledge, this is a new approach to clustering of temporal gene-expression data which can be used for automated grouping of interesting genes from a large dataset.

We test our method on three sets of synthetic multivariate datasets. Each set represents a collection of stochastic processes in the form of time-series. We construct each set in such a way that the processes belonging to a set are interdependent, whereas the sets themselves are disjoint from each other. Dataset 1: Dataset 2: Dataset 3: In the above datasets, ϵi ~ N represents the uncorrelated random error associated with each process. In Dataset 1, x1 is the driving force for x2, x3 and x4 with time lags 2, 3 and 2 respectively. x4 further drives x5 and they both share a feedback loop. Similarly, in Dataset 2, x1 drives x2 with time lag 3 and x2 in turn drives x3; x1 and x3 together drive x4. Similarly, in Dataset 3, we have x1 driving x2; x2 drives x3 with lag 2 and x3 in turn drives x4. The process x5 is driven by x2 and x4 with time lags 2 and 1 respectively. In the end, x6 receives drives from x1, x5 and x3 with time lags 2, 1 and 3 respectively. The datasets are disjoint from each other due to different sources of initiation. The datasets show different arrangements of connections between the processes, which include feedback loops, low and high coefficients of drive between processes, multiple processes together driving a single process, and processes interacting with each other at different time lags.

We apply the Granger causality test to infer the interactions between the entities in each dataset. The test was implemented in Matlab and the source code is available on request from the first author. The standard critical value of α = 0.05 was chosen for the F-test to accept or reject the hypothesis. The causal hypothesis H0 was tested for each pair of processes (X, Y) in both directions, i.e., X causing Y and Y causing X. Since we are only interested in the presence of interaction between X and Y, we ignore the directionality of the causal influence and quantify the association between the pair with the higher of the causality values obtained from the two directions. If there is no causal relationship between the pair, the association between X and Y is quantified as zero. The networks obtained after computing the Granger causality and weighting the edges for all the synthetic datasets are shown in Figures ; the connections are simpler and more sparse in the case of Figure .

Having analyzed the individual datasets, we further investigate what happens when all three datasets are put together to form a bigger system of processes and the pairwise interactions between the processes are computed. We create a system of 15 entities where the first 5 entities represent the processes in Dataset 1, the entities from 6 to 9 represent the processes from Dataset 2, and the last 6 entities represent the processes in Dataset 3. We then test for Granger causality for all possible pairs of processes in the system and plot the interaction strengths in Figure : the x and y axes represent the 15 × 15 matrix of processes in the system, and the interaction strengths between the processes are shown on the z-axis. We can clearly see three different island-like structures in the graph, where entities 1 to 5 interact within themselves, 6 to 9 within themselves and 10 to 15 within themselves. The plot clearly shows that there is no cross talk between the entities across different sets even though they are present within the same system.

We next test our method on a real biological dataset obtained from an in-house microarray experiment designed to measure the expression levels of around 31,000 genes of the Arabidopsis thaliana plant. Arabidopsis (COL-0) was grown in a controlled environment at 20°C, 70% relative humidity, 250 μmol m-2 s-1 light intensity and 16 h day length. Leaf 7 was tagged on emergence and biological replicates were harvested both in the morning and in the evening (7 h and 14 h into the light period) at 2 day intervals until fully senescent. This resulted in 22 time point samples from before full expansion to senescence. RNA was isolated from 4 individual leaves as separate biological replicates using the TRIzol method (Invitrogen) followed by RNeasy column purification (Qiagen). RNA was amplified using MessageAmp II (Ambion) and then labeled with Cy3 or Cy5 using reverse transcriptase. Each amplified RNA sample was labeled twice with Cy3 and twice with Cy5, giving 4 technical replicates for each leaf sample.
Two Cy3 and Cy5 labelled samples were mixed in different combinations for hybridization to microarray slides. Microarrays (CATMA) carrying 31,000 Arabidopsis gene probes (constructed in house as described previously) were hybridized, and the slides were quantified using image analysis software (http://www.biodiscovery.com/). Individual text files quantifying the output for Cy3 and Cy5 were used in the further data analysis.

After testing our method on the synthesized datasets, we test it on the Arabidopsis data discussed above, using two samples of different sizes from the same dataset. We first test our method on a smaller sample of 85 genes belonging to three different categories of biological processes. This smaller sample helps us mimic the scenario shown by our synthetic model. The primary advantage of choosing the smaller dataset is that it helps us in minimizing the search space for ontological validation of clusters by mining on-line repositories, which may not be complete for all the genes. Later, we apply our technique on a larger dataset of 1800 genes and study the clusters obtained and the general structural properties of the network. For the smaller dataset, we selected 85 genes belonging to three different categories of biological processes according to the Gene Ontology (GO) database. We used the interface at http://www.arabidopsis.org/index.jsp to find the names of the genes which are experimentally confirmed to perform the above mentioned biological functions. It should be noted that this interface does not provide any p-value associated with the GO terms for the selected genes; this selection should be considered just as a weak indication of a gene performing the mentioned biological function. While verifying the results, we used another gene annotation tool (BinGO). Outlying profiles were filtered using a σ technique and discarded. We finally had a set of 30 genes responsible for circadian rhythm, 34 genes involved in the aging process and 21 genes participating in cell death, in total leading to a set of 85 genes (Figure ).
For each pair of genes (X, Y), we selected the maximum of the causality values for the directions X → Y and Y → X and assigned that value as the weight of the edge between X and Y. The temporal profiles of the genes were first adjusted by taking the first difference of successive time points to obtain stationary behavior, and the causality test was then applied to all pairs of genes in the system. A complete network with 85 genes has a total number of directed links equal to 2 × (85 choose 2) = 7140, so a multiple testing correction is applied. To further simplify the network, we applied a threshold corresponding to the 0.975 quantile of all the edge values to select the dominant edges in the network. The final network is presented in Figure . The clusters detected in the network were annotated against the GO database using the BinGO tool (Table ). The association network was further analyzed using the k-core score, which resulted in a number of different clusters (such as the subgraph in Figure ). We present some of the clusters we found in Figures .

We next applied our method on a larger dataset of 1800 genes selected according to their frequency profiles and ran the same analysis procedure. We computed certain network statistics to confirm that our network is not a randomly generated network and has the properties desired in a biological network. A total of 1353 nodes were present in the network after filtering out weaker edges. The total number of edges present in the network was 21,214, which is around 1.1% of the total possible directed edges in the network; this is an indication of sparseness, a common characteristic of biological networks. We calculated the degree distribution p(k) of the genes, measuring the probability that a given gene interacts with k other genes; Barabasi and Albert used the degree distribution to characterize scale-free networks. P is the number of partners shared between nodes i and j, that is, nodes that are neighbors of both i and j. The shared neighbors distribution gives the number of node pairs with P = k for k = 1, 2, 3, ....
The distribution again shows a power-law-like distribution, indicating the presence of motifs with large numbers of connected components in the network (Figure ). Closeness centrality is a measure of how fast information flows from a given node to other reachable nodes in the network. The closeness centrality C(i) of a node i is computed as the reciprocal of the average shortest path length: C(i) = 1 / avg_j L(i, j), where L(i, j) is the length of the shortest path between the two nodes i and j (Figure ). Another characteristic of interaction networks can be captured by calculating the topological coefficients. The topological coefficient, TC(k), is a relative measure for the extent to which a gene in the network shares interaction partners with other genes; the topological coefficients of our network are shown in Figure .

In order to compare our proposed method with some existing methods, we use the synthetic datasets and the smaller Arabidopsis dataset of 85 genes discussed in the earlier sections and in the Results and Discussion section. We apply two widely used techniques to establish association between the pairs of genes in the dataset: the association between genes is measured using a) the Pearson correlation coefficient and b) the Euclidean distance. First, we computed the correlation coefficients and the Euclidean distances for the node pairs in the synthetic datasets; the results are presented in Table . We then extend our comparison to the smaller Arabidopsis dataset of 85 genes. The small size and the knowledge about the functionality of the genes are the main advantages of using the smaller dataset for Arabidopsis. The small size of the dataset also allows us to present the results in an easy-to-view graphical format.
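A caveat of both static measures, elaborated later in the text, is that they are blind to the ordering of the time points: permuting both series with the same permutation leaves the value unchanged, so temporal precedence is invisible to them. A minimal sketch with made-up series:

```python
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation coefficient for two equal-length series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

x = [1.0, 3.0, 2.0, 5.0, 4.0]   # hypothetical expression series
y = [2.0, 4.0, 6.0, 8.0, 7.0]
order = [4, 2, 0, 3, 1]          # one arbitrary reshuffling of the time axis
xp = [x[i] for i in order]
yp = [y[i] for i in order]
# The correlation is unchanged because only the pairing matters, not time.
print(abs(pearson(x, y) - pearson(xp, yp)) < 1e-9)  # True
```

The same invariance holds for the Euclidean distance, which is a sum over per-time-point terms and is therefore also permutation-invariant.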
The genes in the dataset were arranged in an ordered fashion before computing the association between them, i.e., the first 30 genes in the dataset of 85 performed circadian rhythm related activity, the next 34 genes were associated with aging, and the last 21 genes participated in cell death (Figure ). In an ideal scenario, where the genes performing similar activity group together, we expect three distinct regions in Figures . To investigate further, we applied a threshold to keep the strongest edges in the graphs obtained from the association matrices. The criterion for choosing the threshold was the same as the one used before for the smaller Arabidopsis dataset. The filtered graphs were analyzed using the graph-theoretic technique with the same settings as used before. The correlation based associative graph resulted in two subgraphs, shown in Figure .

We have used a fresh and distinct approach to cluster temporal microarray gene expression data. One of the key questions that we have tried to address using this method is how some variables are useful for forecasting others. The proposed method facilitates a way to study such forecasting relationships between two variables. In other words, we are asking if a variable X can predict another variable Y; equivalently, whether X is exogenous in the time-series sense with respect to Y; or, expressed a third way, whether X is linearly informative about future Y. The basic idea behind this method is that if an event X causes another event Y, then X should precede Y in time. This is why our illustrative models are based on time, and within that time frame lags like t - 1, t - 2, ⋯ denote the temporal association within the processes.

In widely used pairwise association methods for clustering, like any form of correlation or distance based method, time is static: it does not play any role. The core of these methods relies on association rather than prediction. So if we re-order the sequence of observations for any pair of variables (X, Y), the association measure between them does not change. For example, let the original observations be X = {xt-1, xt-2, xt-3} and Y = {yt-1, yt-2, yt-3}, and let the association measure using correlation/distance for (X, Y) be C. After reordering the observations, let X' = {xt-3, xt-1, xt-2} and Y' = {yt-3, yt-1, yt-2}; the new association measure using correlation/distance for (X', Y') is C', where C = C'. Hence, this assumption is not suitable for dynamical systems, and this is the reason why the usual pairwise association methods can give us less reliable results than our method; a comparison between the two kinds of methods will therefore not be entirely fair. There has been some work in model based clustering methods based on Bayesian statistics where the dynamics of the profiles (modeled as regressive processes) have been used to create clusters for (X, Y) according to what describes the variables best.

In accordance with general equilibrium theory, economists assume that everything depends on everything else; hence the notion of causal relationships between different time-series arises. The idea of causality is related to the idea of succession in time, the cause always preceding the effect. Consider two processes X and Y, and let σ² be the variance of the corresponding forecast error. Granger's definition of causality between X and Y included three scenarios:

1. Granger Causality: Y is Granger causal to X if and only if the future values of X can be predicted better, i.e. with a lower variance, if the current and past values of Y are used.

2. Instantaneous Granger Causality: Y is instantaneously Granger causal to X if and only if the application of an optimal linear function leads to a better prediction of the future value of X, xt+1, if the future value of Y, yt+1, is used in addition to the current and past values of Y.

3. Feedback: Feedback between X and Y exists if X is causal to Y and Y is causal to X. Feedback is only defined for the case of simple causal relations because the direction of instantaneous causality cannot be determined without additional information or assumptions.

The bidirectional Granger causality can be tested in the context of linear regressive models. For a pairwise interaction between two variables, we use the autoregressive specification of a bivariate vector autoregression. Assume a particular autoregressive lag length p; we can then estimate the following unrestricted equation by ordinary least squares (OLS):

Xt = α1 Xt-1 + ⋯ + αp Xt-p + β1 Yt-1 + ⋯ + βp Yt-p + ut    (1)

where the right-hand side is the prediction of X at time t based on its own past values as well as the past values of Y, αi and βi are the weighting factors, and ut is the prediction error, with a variance that measures the strength of the prediction error. If all the weighting factors βi in Equation (1) are equal to zero, then we can conclude that Y does not contribute towards the prediction of X; but if any βi is not equal to zero, we will say that the past values of Y are contributing towards the prediction of the current X.
Therefore we can have two hypotheses:

H0: β1 = β2 = ⋯ = βp = 0 (Y does not Granger-cause X)
H1: at least one βi ≠ 0.

We can conduct an F-test of these hypotheses by estimating the following restricted equation using ordinary least squares:

Xt = γ1 Xt-1 + ⋯ + γp Xt-p + ϵt    (4)

where ϵt is the prediction error or residual. Let RSS1 and RSS0 be the sums of squared residuals of Equations (1) and (4), respectively, i.e.

RSS1 = Σt ut²  and  RSS0 = Σt ϵt².

The test statistic is

S = ((RSS0 - RSS1)/p) / (RSS1/(n - 2p - 1)).

If the test statistic S is greater than the specified critical value, we reject the null hypothesis that Y does not Granger-cause X.

The results are strongly dependent on the number of lags of the explanatory variables. To find a suitable lag value in Equations (1) and (4) we use the Akaike Information Criterion, where σ is the estimated noise covariance, m is the dimension of the stochastic process and n is the length of the data window used to estimate the model. The value of p which minimizes the AIC is chosen as the lag order.

We will use the test of Granger causality to establish association between gene pairs in our interaction network. If the test for causality passes in any direction, either from X → Y or from Y → X, we add an edge in the network. We are not interested in the direction of the edge, and the association network is not directional at all.

Even though most biological networks are sparse in their connectivity, the complexity of connections increases with the increasing number of nodes. A network of interacting entities can be readily modeled as a graph where the entities are represented by nodes and the associations between them as edges, and it is often argued that densely connected regions of such a graph correspond to functional units. Given a graph G = (V, E), where V and E are the sets of vertices and edges respectively, the density of the graph is based on the connectivity level and is defined as DG = |E|/|Emax|, where Emax is the total number of all possible edges in a complete graph G. The functioning of the method by Bader and Hogue can be understood in the following way.
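The pairwise Granger test described above (restricted versus unrestricted OLS, Equations (1) and (4)) can be sketched in a few lines. This is a hedged illustration on synthetic series, not the authors' Matlab implementation; for simplicity it omits the intercept and uses n - 2p denominator degrees of freedom:

```python
import random

def ols_rss(X, y):
    """Ordinary least squares via the normal equations (Gaussian
    elimination with partial pivoting); returns the residual sum of squares."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((y[t] - sum(X[t][j] * beta[j] for j in range(k))) ** 2 for t in range(n))

def granger_f(x, y, p):
    """F statistic for the null 'y does not Granger-cause x' at lag order p:
    the restricted model regresses x_t on its own p lags, the unrestricted
    model adds p lags of y."""
    n = len(x) - p
    rows_r = [[x[t - i] for i in range(1, p + 1)] for t in range(p, len(x))]
    rows_u = [r + [y[t - i] for i in range(1, p + 1)]
              for r, t in zip(rows_r, range(p, len(x)))]
    rss0 = ols_rss(rows_r, x[p:])
    rss1 = ols_rss(rows_u, x[p:])
    return ((rss0 - rss1) / p) / (rss1 / (n - 2 * p))

# Synthetic pair: eff follows drv with a lag of 2 plus a little noise.
random.seed(1)
drv = [random.gauss(0, 1) for _ in range(200)]
eff = [0.0, 0.0] + [0.9 * drv[t - 2] + 0.1 * random.gauss(0, 1) for t in range(2, 200)]
print(granger_f(eff, drv, 2) > granger_f(drv, eff, 2))  # True: drv drives eff
```

In the study's setting the larger of the two directional statistics would then be compared against the F critical value at α = 0.05 and, if significant, used as the (undirected) edge weight.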
A k-core is a graph of minimal degree k, i.e., ∀v ∈ V the degree of v ≥ k. The highest k-core of a graph is the central and most densely connected subgraph. The vertex weighting starts by weighing all the vertices based on their local network density, using the highest k-core of the vertex neighborhood: the highest k-core component gives us the highest k-core level, kmax, in the vertex neighborhood, and the final weight of the vertex is the product of kmax and the density of the corresponding highest k-core component. This type of weighting amplifies the weighting of heavily connected graph regions while removing the less connected graph regions, which are present in abundance.

Once the vertex weighting is done, the algorithm seeds a subgraph (complex) with the highest weighted vertex and moves outwards to include vertices in the neighborhood whose weight is greater than a given threshold. The algorithm propagates through the included neighbors and recursively checks the subsequent nodes. The process stops when no more nodes can be added to the complex, and is repeated for the next highest unseen weighted vertex in the network.

In the post-processing stage, the complexes which do not contain at least a 2-core (a graph with minimum degree 2) are filtered out. Finally, all the complexes in the network are scored and ranked. The complex score for a given subgraph GC = (VC, EC) is defined as the product of the density of the subgraph and the number of vertices (DC × |VC|). Other scoring schemes are also possible but were not tested in the original algorithm.

RK and CL conceived and designed the study. VBW performed the experiments. RK analyzed the data. All authors have read and approved the final manuscript."}
+{"text": "By combining information on the yeast transcription network and high-resolution time-series data with a series of factors, support is provided for the concept that dynamic cumulative regulation is a major principle of quantitative transcriptional control. The regulation of genes in multicellular organisms is generally achieved through the combinatorial activity of different transcription factors. However, the quantitative mechanisms of how a combination of transcription factors controls the expression of their target genes remain unknown.By using the information on the yeast transcription network and high-resolution time-series data, the combinatorial expression profiles of regulators that best correlate with the expression of their target genes are identified. We demonstrate that a number of factors, particularly time-shifts among the different regulators as well as conversion efficiencies of transcription factor mRNAs into functional binding regulators, play a key role in the quantification of target gene expression. By quantifying and integrating these factors, we have found a highly significant correlation between the combinatorial time-series expression profile of regulators and their target gene expression in 67.1% of the 161 known yeast three-regulator motifs and in 32.9% of 544 two-regulator motifs. For network motifs involved in the cell cycle, these percentages are much higher. Furthermore, the results have been verified with a high consistency in a second independent set of time-series data. Additional support comes from the finding that a high percentage of motifs again show a significant correlation in time-series data from stress-response studies.Our data strongly support the concept that dynamic cumulative regulation is a major principle of quantitative transcriptional control. 
The proposed concept might also apply to other organisms and could be relevant for a wide range of biotechnological applications in which quantitative gene regulation plays a role. One of the important elements of gene regulation is mediated by the binding of transcription factors to specific binding sites of promoters or other gene regulatory control regions. In eukaryotes, a combinatorial activity of specific transcription factors is generally responsible for the expression of genes in certain tissues, at specific times, or under specific environmental conditions -4. AlthoEukaryotic promoters usually contain several binding motifs representing multiple-regulator-to-single-target-gene network structure motifs (regulation modes). A multiple-regulator set may control several different target genes Figure , which aet al. [et al. [et al. [et al. [In order to obtain further insight into the potential quantitative mechanisms of target gene activation, use can be made of gene expression data and knowledge of the available transcriptional gene network of yeast -20. A nuet al. have stu [et al. , Griffin [et al. , and Gha [et al. . In theset al. [Plasmodium falciparum. In their study, strong time delays between mRNA and protein accumulation have been found, indicating the importance of this factor. The difference among these delays for individual genes encoding regulators, the difference among the time used for posttranslational modifications for different proteins, and other unknown differences will possibly cause a shift in the time at which the various regulators function. 
Therefore, we think another kind of potential time-shift exists among different transcription factors themselves, in addition to the well-studied delay from the time when transcription factors are expressed to the time when their corresponding target genes are induced or repressed. An alternative explanation could be the importance of time delays between the mRNA expression of genes and the accumulation of their corresponding proteins, as Le Roch et al. have systematically demonstrated. Many steps are involved in the conversion of mRNA from a transcription factor gene into an activated, fully functional, binding regulator. The efficiency of each of these steps can be expected to vary from transcription factor to transcription factor, although the precise mechanisms are still unknown. Different transcription factors have different mRNA turnover rates. Complex biological systems often display nonlinear dynamic behavior. This is probably also the case for the activation of target genes as a result of the combinatorial activity of different transcription factors. Nonlinear systems are computationally extremely difficult to handle. However, approximations with linear system analysis can be useful. For example, Liao's group has developed a linear method to infer transcription factor activities from expression data. To dissect the mechanisms of quantitative combinatorial gene regulation, we have considered all the factors mentioned above. By assuming a combinatorial mode of transcription factor activity as the principle of gene regulation in cases in which multiple regulators are known to control one specific target gene, and by integrating two kinds of time-shifts and conversion efficiencies, we have developed a strategy to study combinatorial gene regulation. Not only have we considered the delays from the time when transcription factors are expressed to the time when their corresponding target genes are induced or repressed, but, for the first time, we have also taken into account time-shifts among the regulators themselves. 
In general, one would expect a significant correlation between the expression profile of a regulator and its corresponding target gene. In our previous studies, we employed the Pearson correlation coefficient (PCC), but only rarely could a significant correlation (P value of 2.7E-3) between a single transcription factor and its target gene be found. We postulate that this lack of correlation might be a result of the regulation of individual target genes through the combinatorial activity of several regulators. The strategy (see Figure) is based on this postulate. We have addressed this problem by analyzing the time-series dataset of Cho et al. In all cases, to find optimal correlations, we have also integrated the well-known delay from the time when the regulators are expressed to the time when their target genes are expressed. However, we have not constrained the time when the target genes are expressed to be the same among different target genes in a given convergence mode. We have then included individual conversion efficiencies, limited to the non-negative range, in which both regulators simultaneously and cumulatively control the target gene, but without opposite activity between the two regulators. We have systematically tested the effect of all possible conversion efficiencies of individual regulators (non-negative) and of all possible time delays between the regulators and their target genes on the expression profiles of the regulators. These individual time-series profiles of the two regulators in the convergence mode have then been combined into a synthetic combinatorial time-series profile in an attempt to identify the combinatorial expression profile that best correlates with the expression of the target genes (see Figure). Using a significance threshold of P < 2.7E-3 between expression profiles of two genes, we find a significant correlation between the combinatorial profiles of two regulators and the profile of their target gene in 35 two-regulator motifs. This corresponds to 6.43%. 
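The search just described, combining two regulator profiles under candidate conversion efficiencies and time-shifts and scoring each candidate by its Pearson correlation with the target gene, can be sketched in a few lines. This is an illustrative reimplementation under assumptions, not the authors' program: the function names, the coarse efficiency grid and the small shift range are placeholders, and the expression profiles are assumed to be non-constant.

```python
from itertools import product

def pearson(x, y):
    # Plain Pearson correlation coefficient (PCC) of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_combination(r1, r2, target, max_shift=3,
                     effs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # Scan conversion efficiencies (c1, c2), the relative shift sh of
    # regulator 1 against regulator 2, and the delay d from the regulators to
    # the target; return the combinatorial profile best correlated with the
    # target. All three series are assumed to have the same length.
    best_r, best_params = -2.0, None
    for c1, c2, sh, d in product(effs, effs,
                                 range(max_shift + 1), range(1, max_shift + 1)):
        if c1 == 0 and c2 == 0:
            continue
        n = len(target) - sh - d
        if n < 3:
            continue  # too few overlapping time points to correlate
        combo = [c1 * r1[j + sh] + c2 * r2[j] for j in range(n)]
        r = pearson(combo, target[sh + d:sh + d + n])
        if r > best_r:
            best_r, best_params = r, (c1, c2, sh, d)
    return best_r, best_params
```

Because the PCC is scale-invariant, only the ratio of the two efficiencies is identifiable from the correlation alone; the grid above is therefore coarse by design.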
Allowing negative conversion efficiencies results in the detection of a significant correlation in additional two-regulator motifs (48 of 544 (8.82%)), indicating the existence of opposite regulation. However, 48/544 still represents only a small fraction of the gene regulatory motifs analyzed and indicates that other crucial factors might need to be taken into consideration. So far, the relative time-shifts among individual regulators have been neglected. Consequently, we have also considered this type of time-shift. Surprisingly, the number of gene regulatory structural motifs in which the combinatorial expression profile is now significantly correlated with a target gene sharply increases from 48 to 179 of 544, a substantial improvement from 8.82% to 32.9%. Details of results are provided in Additional data file 2. To assess the significance of these success percentages, we compared the original network with random networks using the t-test and the Wilcoxon matched-pairs signed-ranks test. Because we do not know whether the distributional assumption of normal-theory-based t-tests is satisfied in the distribution of the success percentage, we applied the Wilcoxon matched-pairs signed-ranks test (P \u2264 4.88E-4 for both two- and three-regulator motifs). The results show a significant difference between the success percentages of the original network and random networks. We have also found that the success percentage at each threshold in the original network is higher than that in random networks. Shifted cumulative regulation can be nicely demonstrated in the following example. In yeast, the transcription factors YML027W (YOX1) and YMR016C (SOK2) have been described to regulate the transcriptional expression of YOR039W (CKB2). The combinatorial profile of the two regulators correlates significantly with that of the target gene (CKB2). We also show the time-shifts and the conversion coefficients of the regulators derived for the regulation of CKB2 in the Figure. 
A given regulator might display some similarities in quantitatively controlling its different target genes. Therefore, we examined whether these similarities occur in our results. In our algorithm, the time when a given transcription factor begins to function is already constrained to an identically shifted time point among different target genes in the same convergence mode. Hence, the time-shifts among the two or three transcription factors are kept constant for different target genes in the same convergence mode. The algorithm itself thus first guarantees the consistency of time-shifts for a given regulator across different target genes within the same convergence mode. Within the entire transcription network known so far, there are a total of 78 regulators contributing to two-regulator motifs. Out of the 78 regulators, 34 regulators are involved in only one convergence mode, so the time-shifts of these 34 regulators are completely consistent among different target genes. Next, we asked whether the time-shifts of a given regulator in different convergence modes are concordant among different target genes, since one of the two regulators in a convergence mode might also be a regulator in other convergence modes. Because of computational explosion, we cannot constrain the time-shifts of a given regulator for all different target genes in the whole regulatory network to one shifted time point. Therefore, if an enriched distribution of shifted time points occurs in a short contiguous time window for a given regulator, the shifted time points of that regulator are consistent among different convergence modes. Indeed, for most regulators the shifted time points in different convergence modes mainly concentrate in one or two areas (P < 5E-2; Additional data file 1). Each area comprises a short (one to three time points) contiguous time window. The distribution pattern of shifted time points of a given regulator thus appears to be concentrative. 
Our algorithm also constrains the conversion efficiencies of a specific transcription factor among different target genes to an identical value in a given convergence mode. Therefore, to further assess the consistency of the conversion efficiencies of given regulators in the whole transcription network, we only need to check whether the conversion efficiencies of those regulators in different convergence modes distribute in a concentrative manner. Forty-four regulators are involved in more than one convergence mode. One regulator has the same conversion efficiency among different convergence modes. For each of 29 out of the remaining 43 regulators, the conversion efficiencies in different convergence modes mainly concentrate in one or two areas. Each area comprises a short (one to five points - only one regulator distributes in five points) contiguous conversion efficiency window. For example, in 22 out of the 25 modes, the conversion efficiencies of YOX1 only distribute in the short range 0-0.4. Therefore, the conversion efficiency of a given regulator is also quite consistent among different convergence modes and, hence, consistent among different target genes in the whole available transcription regulatory network analyzed. The consistency of a given regulator is further measured by the standard deviation of its time-shifts (conversion efficiencies) among the different convergence modes that the given regulator controls. We take the standard deviation of time-shifts (conversion efficiencies) of all the regulators across all the different convergence modes as background deviation. It turns out that 25 out of the 38 regulators show a smaller standard deviation of time-shifts than the background deviation. We have also observed that 29 out of the 43 transcription factors have a smaller standard deviation of conversion efficiencies than the background deviation. 
If, for the same multi-regulator transcriptional regulatory network motifs, the shifted cumulative regulation can also be found in another independent dataset, these results would corroborate our discoveries. For this purpose, we have utilized the high-resolution time-series yeast expression dataset of Spellman et al. In their data, a significant correlation can again be found in 59 of the two-regulator motifs (P < 0.53). These possibilities alone are not significant in terms of the chance to obtain these overlapping numbers. However, these tests alone cannot justify whether these overlapping significant motifs could easily be obtained by chance. We need to further evaluate whether the other aspects of these common significant motifs are consistent between the two experiments. One could expect that sometimes these overlapping numbers could be obtained by chance, although one could not expect that the accordance of the time-shift and the conversion efficiency between the two experiments in the common significant motifs could also be obtained by chance. Note that the consistency of the time-shift and the conversion efficiency between the two experiments is independent of the consistency in significance of the correlated scores of the motifs. We then examined whether these overlapping numbers of 21 and 16 could be obtained by chance. If we assume there are only 59 motifs showing a significant correlation in the whole population of 208 two-regulator motifs, the possibility to obtain 21 or more significant motifs by randomly taking 67 motifs is 0.31 (hypergeometric test). The possibility to obtain 16 or more significant three-regulator motifs can also be calculated by hypergeometric test (P < 2.52E-7). The total P value to obtain this number of overlapping significant motifs with a significant consistency in both time-shift and conversion efficiency is 2.61E-13. We therefore examined whether the time-shift and the conversion efficiency are significantly consistent between the two experiments. 
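The hypergeometric overlap calculation above can be reproduced exactly with a short stdlib computation. The function name is ours, not from the paper; for the two-regulator case (drawing 67 motifs from 208 of which 59 are significant and asking for 21 or more), the tail probability comes out near the reported 0.31.

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    # P(X >= k) when n motifs are drawn without replacement from a population
    # of N motifs of which K are significant. math.comb returns 0 when the
    # second argument exceeds the first, so infeasible terms vanish.
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Two-regulator overlap between the Cho and Spellman analyses:
p_overlap = hypergeom_tail(208, 59, 67, 21)
```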
Among the 42 regulators of the common significant two-regulator motifs, the difference in the time-shifts between the two experiments for 25 regulators is less than or equal to 2 time points. The binomial distribution test shows that the possibility to have a difference less than or equal to 2 time points in a concentrative way for 25 regulators among a total of 42 regulators is 3.35E-6. Therefore, even if one could obtain these 21 common significant motifs by chance, it is still very difficult to obtain a difference in time-shift less than or equal to 2 time points for 25 regulators between two experiments by chance. Furthermore, we tested whether the consistency of the conversion efficiency could be obtained by chance. Among the 42 regulators of the common significant motifs, the difference in the conversion efficiency between the two experiments for 19 regulators is less than or equal to 0.3. The binomial distribution test was used to examine the possibility that the difference between the two experiments in the conversion efficiency concentrates in the short contiguous window less than or equal to 0.3 for 19 regulators among 42 regulators; this possibility is likewise very small. 
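The binomial concentration test used here is a plain upper-tail binomial probability. A minimal sketch (function name ours): the null probability p that a single regulator falls inside the agreement window by chance must be supplied by the null model, for example the fraction of the possible shift range covered by a window of plus or minus 2 time points; the excerpt does not state the value the authors used, so no attempt is made to reproduce the 3.35E-6 figure.

```python
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    # regulators agree within the chosen window when each agrees
    # independently with probability p under the null.
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

For instance, binom_tail(42, 25, p) with any plausible p well below one half is already a very small number, in line with the text's conclusion.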
Taken together, even if one could obtain the overlapping numbers of significant motifs by chance, it is also very difficult to obtain a highly significant consistency between the two experiments in both time-shift and conversion efficiency by chance. Analogously, the possibilities to have consistency in both time-shift and conversion efficiency for the three-regulator motifs are also significant. For a substantial fraction of the significant two-regulator motifs and for 25 out of 38 (65.8%) three-regulator motifs, a significant correlation can be found in the Spellman dataset; these similarities indicate that shifted cumulative regulation is a major principle for multi-regulator transcriptional network structure motifs. In short, for both two- and three-regulator convergence motifs, it is very difficult to obtain this kind of observed overlap between the Spellman and Cho datasets by chance. These results have excluded the risk of overfitting. The feed-forward loop (FFL) has been found to be over-represented in various biological systems. The test results (P = 0.507) show that there is no significant difference between FFL and non-FFL two-regulator motifs in the Cho dataset. FFLs are also involved in 29 three-regulator motifs. A high percentage (21 out of 29 (72.4%); Additional data file 2) shows a significant correlation between the combinatorial expression of the regulators and the target gene expression. In 87 out of the 132 non-FFL three-regulator motifs (65.9%), a significant correlation has also been detected in the Cho dataset. The difference between the FFL and non-FFL groups in the three-regulator motifs is also not significant in the Cho dataset. Similarly, there is no significant difference between the FFL and non-FFL groups for the two-regulator motifs and three-regulator motifs from the Spellman dataset. Thus, even in the FFLs, shifted cumulative regulation is also a major principle. 
Hence, we have evaluated whether there is a significant difference between the FFL and non-FFL groups in terms of the frequency of shifted cumulative regulation. Among all of the 544 two-regulator motifs from the Cho dataset, 73 motifs are also FFLs. Of these 73 motifs, 27 (37.0%) show a significant correlation between the combinatorial expression profile of the regulators and the expression of the target gene. Among the 471 non-FFL two-regulator motifs, a significant correlation is found in 152 motifs. The Yates chi-square test has been used to determine the difference between the success frequencies of the FFL and non-FFL groups; the results show no significant difference. Although the first transcription factor can regulate the target gene twice, by a direct path and an indirect path, only the second regulator directly regulates the expression of the target gene in the indirect path. Therefore, the first regulator and the second regulator each directly regulate the target gene only once per se. This is the reason that the frequency of shifted cumulative regulation is similar in the groups of FFLs and non-FFLs. To examine whether the principle of shifted cumulative regulation only prevails in synchronized yeast cell cultures, we next performed a similar analysis under other conditions, such as stress responses. Because high-resolution time-series expression data with an equal sampling interval were required for this analysis, we chose only two conditions from the available data. The first one was originally used for studying the transcriptional response of steady-state yeast cultures to a low-level glucose pulse perturbation; the second concerned the response to H2O2 stress. Using the same P value cutoff, 141 out of the 557 two-regulator motifs show a significant correlation between the combinatorial expression of the regulators and the target gene expression. The data obtained under H2O2 stress include 453 two-regulator motifs. Among them, a significant correlation can be detected in 114 two-regulator motifs. 
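The Yates-corrected chi-square comparison of the FFL and non-FFL success frequencies can be reproduced without a statistics library, since the 1-degree-of-freedom survival function is erfc(sqrt(x/2)). The function name is ours; with the counts given above (27 of 73 FFL motifs and 152 of 471 non-FFL motifs significant), the computed p-value is close to the P = 0.507 quoted in the text for the Cho dataset.

```python
from math import erfc, sqrt

def yates_chi_square(a, b, c, d):
    # 2x2 table [[a, b], [c, d]]: here a, b are the significant and
    # non-significant counts in the FFL group, and c, d those in the
    # non-FFL group. Applies Yates' continuity correction (1 df).
    n = a + b + c + d
    num = n * max(0, abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    # Survival function of chi-square with 1 degree of freedom.
    return chi2, erfc(sqrt(chi2 / 2))

chi2, p = yates_chi_square(27, 46, 152, 319)
```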
Under the condition of low-level glucose pulse perturbation, 557 two-regulator motifs are included in the available regulatory network. These success percentages are higher in three-regulator motifs under both conditions, although relatively lower than those in the data used to study cell-cycle regulation. However, percentages of approximately 45% and 25% are still considered to be high at the systems level. Consequently, we can conclude that shifted cumulative regulation is also applicable to other conditions, rather than only being constrained to the synchronized yeast cell cultures, which were originally used to study cell-cycle regulation. Major efforts are currently directed toward the identification of the components of biological systems. These include the sequencing of whole genomes and the analysis of genome-wide expression profiles of transcripts or proteins in specific physiological or pathophysiological states. However, mere knowledge of the components is not sufficient to reveal the complexity of biological systems. We also need to understand the dynamics of the interactions between the individual components. In the work presented here, we have used genome-wide high-resolution time-series expression data from yeast in order to dissect quantitative gene regulation. A number of groups have tried to carry out genome-wide correlation analyses, for example, by using the PCC to identify relationships between regulators and their target genes; in general, little correlation has been found. We have hypothesized that this might be attributable to the finding that most genes are regulated through the combinatorial activity of more than one transcription factor. We have also considered potential differences in the conversion efficiencies between the transcription of individual regulators and their functional activity. Because many factors contribute to the conversion of a transcription factor transcript into a functional binding regulator, a coefficient representing this conversion efficiency has been integrated into our analysis. 
Such a conversion efficiency factor needs to be looked at as a comprehensive parameter, integrating factors such as differences in the translation efficiency from mRNA to protein, in the assembly efficiency from protein to regulator, in posttranslational activation (inhibition), and in the binding efficiency of the regulators to their binding motifs. We derive these conversion efficiencies by testing all possible conversion efficiencies of the transcription factors of the convergence mode in order to find the specific combination of conversion efficiencies to form a combinatorial expression profile of the transcription factors that best correlates with the expression of the target gene. A specific regulator can display different conversion efficiencies dependent on the specific convergence modes (see Table). The influence of time delays between regulators and target genes is a well-known phenomenon and is considered in all our calculations. In addition to time delays between regulators and target genes, time delays among regulators themselves might also be important. When we incorporate the influence of this second type of time delay, a further significant increase from 8.8% to 32.9% in identifying a significant correlation of regulators and target genes is obtained for the two-regulator motifs. For the three-regulator motifs, this percentage even increases to 67.1%. This dramatic increase demonstrates the extreme importance of the time-shift when different transcription factors begin to regulate the transcriptional expression of target genes. The time-shift among regulators is mainly attributed to the intrinsic asynchronous characteristics of activation/inhibition of genes or proteins. Possibly, these built-in characteristics of genes or proteins are also required for the delicate dynamic regulation of the genes or proteins. 
In fact, exquisite quantitative expression, rather than simple on-off expression, is also reported to be biologically functionally required in a recent study by Wan and Flavell. The proposed shifted cumulative model here assumes that the regulators control the target expression independently. However, in some cases, the regulators may form a heterodimer and then act on the promoter or other regulatory regions of the target gene, for example, SBF (SWI4-SWI6) and MBF (MBP1-SWI6). If this is the case, the independence assumption may not strictly hold for such motifs. Given regulators might exhibit some similarities in quantitatively regulating the transcription expression of their different target genes. We determined the consistency of the time-shifts and the conversion efficiencies of given regulators among different target genes in the same convergence modes by integrating a direct constraint in our algorithm. Our results also demonstrate that the time-shifts and the conversion efficiencies of given regulators are significantly consistent among their different target genes in different convergence modes. As discussed above, conversion efficiency is a comprehensive parameter, integrating factors such as differences in the translation efficiencies from mRNA to protein, in the assembly efficiencies from protein to regulator, in posttranslational activation (inhibition), and in the binding efficiencies of the regulators to their binding motifs. In general, for a given regulator, the translation efficiency from mRNA to protein is assumed to be independent of its target genes. However, other factors, such as assembly efficiencies from protein to regulator and posttranslational processes, may still be dissimilar because several different signaling pathways or mechanisms might possibly be involved in those processes for a given regulator. For example, MYC responds differently to different inputs from other factors and/or signals. 
The time-shift among the two or three regulators is the same for their different target genes in the same convergence mode. It is also reasonable that different target genes in different convergence modes show some differences in the time-shifts when a given regulator begins to function to control its target genes relative to the time when the mRNA of that regulator is expressed. One reason for this could be that the regulator may activate its different target genes at different times. The lower percentage of two-regulator motifs compared with three-regulator motifs showing a significant correlation between combinatorial expression profiles of regulators and the expression of target genes might be the result of incomplete knowledge concerning the structure of the underlying gene regulatory network. Among both two-regulator and three-regulator motifs, some parts cannot be well explained by our proposed cumulative regulation modes. One alternative possibility is that the utilization of promoter regulatory modes is condition- or environment-dependent. Stringent time control is an intrinsic property of the cell-cycle process. Therefore, the mechanism of shifted cumulative transcription factor activity might be a particularly prominent feature of regulatory motifs in the target genes involved in the cell cycle. Remarkably, a particularly high fraction of the 60 two-regulator motifs in which at least the target gene has been assigned to certain phases of the cell cycle shows a significant correlation. To validate the principle of shifted cumulative regulation in multi-regulator transcriptional regulatory network motifs, we have utilized another independent set of high-resolution time-series expression data published by Spellman et al. We have further found that shifted cumulative regulation is also dominant in FFLs. Many two-regulator motifs are actually FFLs. FFLs are also involved in some three-regulator motifs. 
Because, in FFLs, the first regulator can directly and indirectly regulate the target gene, this additional functional characteristic may affect the quantitative regulation mechanism of the target gene. However, the results show that the frequencies of identifying a significant correlation in the FFL and non-FFL groups are not significantly different. This is readily understandable because, although the first regulator can directly and indirectly regulate the target gene twice, the indirect pathway functions via the second regulator. Eventually, the first regulator and the second regulator each directly regulate the target only once. Therefore, no matter what form the middle processes take, the final results of FFL regulation are similar to those of normal convergence mode regulation. We want to note that the shifted cumulative regulation mode is also applicable in other conditions and not only in synchronized yeast cell cultures. The results demonstrate that a considerable fraction of two- and three-regulator motifs also shows a significant correlation under stress conditions, such as H2O2 stress and glucose pulse perturbation. The success percentages in the two studies under stress conditions are relatively lower than those in the data used to study the cell cycle. This may stem from the fact that some of the transcription factors are not regulated at the transcriptional level in response to stress. In this work, we provide a strategy to dissect the basic regulatory principles of multi-regulator transcriptional regulatory networks. Our results point to a dynamic quantitative linear combinatorial model of gene regulation. We confirm the results with high consistency in two independent high-resolution time-series datasets. In addition, a significant difference that exists between results obtained for real and randomly generated data strengthens the biological relevance of this observation. 
The success percentages of finding a significant correlation between the combinatorial expression profiles of regulators and their target gene expression among the studied motifs are even higher among regulatory network motifs involved in the cell-cycle process. We further demonstrate that the success frequencies of the shifted cumulative regulation mode are similar between the FFL and non-FFL groups. We also found that the shifted cumulative regulation mode is dominant under other stress conditions, rather than being restricted to datasets from cell-cycle studies. Taken together, our data strongly indicate that shifted cumulative regulation is a predominant principle underlying the quantitative gene regulatory mechanism of multi-regulator transcriptional regulatory network motifs. The model presented here provides evidence, for the first time, regarding the mechanism of the quantitative regulation of target genes by multiple transcription factors.In order to understand the mechanism of gene regulation, therefore, not only is it important to follow the expression profile of single transcription factors over time, but the expression of quantitative combinations of regulators over time should also be considered. This can be achieved only through high-resolution time-series measurements. We believe that the proposed strategy can also be utilized for understanding quantitative gene regulation in other organisms.Our strategy allows us to estimate the relative time when each of the different regulators in a specific motif begins to function. We can also estimate how much mRNA transcribed by a transcription factor gene is translated into a fully functional binding regulator. This strategy will become even more powerful with future improvements in our knowledge concerning the components of regulatory network structure and expression measurement technology. 
The proposed concept might be relevant for a wide range of biotechnological and biomedical applications in which quantitative gene regulation plays a role. It also provides a new perspective for experimental biologists to reveal the real quantitative multi-dimensional mechanisms of complex regulatory systems. To study the basic quantitative principles of gene regulation, we carried out a correlation analysis between combinatorial profiles of regulators and their target genes within regulatory network structure modes in which multiple regulators are known to control a specific target gene. For this purpose, we propose a shifted cumulative mode (see Figure) of gene regulation. A conversion efficiency C for the mRNA of each regulator gene is assigned for a given convergence mode. Numerically, for each regulator Ri in each motif, a constrained conversion efficiency Ci (-1 \u2264 Ci \u2264 1) is chosen. This is based upon the assumption that the probability that all the expressed mRNA of one regulator gene can be finally converted into the fully activated binding regulator is low, as discussed above. A negative Ci value for a regulator means it has a regulation function (activation or suppression) opposite to that (suppression or activation) of a regulator with a positive value. The subscripts i1 and i2 are used to represent the regulators 1 and 2, respectively. Assuming sh for the relative time-shift between the two regulators and Ri,j for the expression level of the regulator i at time point j, the expression level Ak,j of the combinatorial profile of the two regulators at time point j in the given motif k can be calculated as Ak,j = Ci1*Ri1,j + Ci2*Ri2,j+sh. The approach is implemented in computer programs (programs are available on request) for quantifying the shifted cumulative regulation of genes in large-scale high-resolution time-series gene-expression profiling data. 
The major original aspect of our method is the combination of expression profiles of the regulators by considering time delays among regulators and conversion efficiencies of regulators in network structure motifs. This is illustrated below for a two-regulator-to-single-target-gene motif with n successive time points. The performance with the present number of time points (40) is good and improves as the number of time point observations increases. It is anticipated that new technologies which are less expensive and include flexible design will give access to even richer time-series data. Gene regulatory networks are only one example of a more general biological pathway. Other applications include the study of an organism's metabolome or proteome over time. Analogous strategies could be applied in these areas."}
+{"text": "Large scale microarray experiments are becoming increasingly routine, particularly those which track a number of different cell lines through time. This time-course information provides valuable insight into the dynamic mechanisms underlying the biological processes being observed. However, proper statistical analysis of time-course data requires the use of more sophisticated tools and complex statistical models. Using the open source CRAN and Bioconductor repositories for R, we provide example analysis and protocol which illustrate a variety of methods that can be used to analyse time-course microarray data. In particular, we highlight how to construct appropriate contrasts to detect differentially expressed genes and how to generate plausible pathways from the data. A maintained version of the R commands can be found at http://www.mas.ncl.ac.uk/~ncsg3/microarray/. CRAN and Bioconductor are stable repositories that provide a wide variety of appropriate statistical tools to analyse time-course microarray data. A proper statistical analysis must, for example, handle the problems due to multiple comparisons, which are increased by catering for changing effects over time. In this case study, we demonstrate how to analyse time-course microarray data by investigating a data set on yeast. We discuss issues related to normalisation, extraction of probesets for specific species, chip quality and differential expression. We also discuss network inference in the Additional file. The yeast strains studied carry the cdc13-1 temperature sensitive mutation (in which telomere uncapping is induced by growth at temperatures above around 27\u00b0C). 
These replicates were sampled initially at 23\u00b0C and then at 1, 2, 3 and 4 hours after a shift to 30\u00b0C to induce telomere uncapping. The thirty resulting RNA samples were hybridised to Affymetrix yeast2 arrays. The microarray data are available in the ArrayExpress database. This analysis uses the Bioconductor packages affy, affyPLM, limma, and gcrma, which can be installed by using#From Bioconductor> source('http://www.bioconductor.org/biocLite.R')> biocLite(c('affy', 'affyPLM', 'limma', 'gcrma'))Bioconductor packages are updated regularly on the web and so users can easily update their currently installed packages by starting a new R session and then using> update.packages()See the Bioconductor website for further details. A list of packages used in this paper is given in the Additional file. The data used in this paper can be downloaded from ArrayExpress into R using the commands> library(ArrayExpress)> yeast.raw = ArrayExpress('E-MEXP-1551')Unfortunately, due to changes in the ArrayExpress website, the ArrayExpress package for Bioconductor 2.4 (the default version for R 2.9) produces an error and so we must use the package in Bioconductor 2.5 (the default version for R 2.10). Details for downloading the latest ArrayExpress package can be found in the Additional file. A brief description of the AffyBatch object yeast.raw can be obtained by using the print(yeast.raw) command:AffyBatch objectsize of arrays = 496 \u00d7 496 features (3163 kb)cdf = Yeast_2 (10928 affyids)number of samples = 30number of genes = 10928annotation = yeast2If the Affymetrix microarray data sets have been downloaded into a single directory, then the .cel files can be loaded into R using the ReadAffy command. Also available from ArrayExpress are the experimental conditions. 
However, some preprocessing is necessary:> ph = yeast.raw@phenoData> exp_fac = data.frame(data_order = 1:30,+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0strain = ph@data$Factor.Value.GENOTYPE.,+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0replicates = ph@data$Factor.Value.INDIVIDUAL.,+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0tps = ph@data$Factor.Value.TIME.)> levels(exp_fac$strain) = c('m', 'w')> exp_fac = with(exp_fac, exp_fac[order(strain, replicates, tps), ])> exp_fac$replicate = rep(1:3, each = 5, 2)> exp_facThe data frame exp_fac stores all the necessary information, such as strain, time and replicate, which are needed for the statistical analysis.Note that there are two yeast species on this chip, S. pombe and S. cerevisiae. Amongst the 10,928 probesets (with each probeset having 11 probe pairs), there are 5,900 S. cerevisiae probesets. Since the chip contains both S. cerevisiae and S. pombe, we first need to extract the S. cerevisiae data before normalisation. This can be done by filtering out the S. pombe data using the s_cerevisiae.msk file from the Affymetrix website (see the Additional file):> source('ExtractIDs.R')> c_df = ExtractIDs(probe_filter)We also need to restrict the view of yeast.raw to the x- and y-coordinates of the S. cerevisiae probesets in the cdf environment:#Get the raw dataset for S. 
cerevisiae only> library(affy)> library(yeast2probe)> source('RemoveProbes.R')> cleancdf = cleancdfname(yeast.raw@cdfName)> RemoveProbesNote that the commands in RemoveProbes.R are listed in the Additional file. The details of yeast.raw, obtained via print(yeast.raw), are nowAffyBatch objectsize of arrays = 496 \u00d7 496 features(3167 kb)cdf = Yeast_2(5900 affyids)number of samples = 30number of genes = 5900annotation = yeast2and the number of genes is 5,900 now that the S. pombe probesets have been removed.Before any formal statistical analysis, it is important to check for data quality. Initially, we might examine the perfect match and mismatch probe-level data to detect anomalies. Images of the first five arrays can be obtained using> op = par(mfrow = c(2, 3))> for(i in 1:5) {+ \u00a0\u00a0\u00a0d = exp_fac$data_order[i]+ \u00a0\u00a0\u00a0plot_title = paste('Array', d)+ \u00a0\u00a0\u00a0image(yeast.raw[, d], main = plot_title)+ }These commands produce the image shown in the Additional file. Another useful quality assessment tool is to examine density plots of the probe intensities. The commands> d = exp_fac$data_order[1:5]> hist(yeast.raw[, d], main = 'Log intensities')produce the image shown in the Additional file. Other exploratory data analysis techniques that should be carried out include MA plots, where two microarrays are compared and the log intensity difference for each probe on each gene is plotted against their average. It is also of interest to examine RNA degradation (see the Additional file), although ... There are a number of methods for normalising microarray data. Two of the most popular methods are GeneChip RMA (GCRMA) and Robust Multi-array Average (RMA); see [14,15]. Since we have thirty microarray data sets and believe that the levels of transcriptional activity are similar across strains, we will use the RMA normalisation method. This technique normalises across the set of hybridizations at the probe level. 
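At its core, the normalisation step of RMA is quantile normalisation across arrays. As a language-neutral illustration of that idea (the paper itself uses R's rma(); the matrix X and the function name below are invented for this sketch, not taken from the paper):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalise the columns of X (probes x arrays).

    Each array's sorted intensities are replaced by the mean of the
    sorted intensities across all arrays, so that every column ends
    up with an identical empirical distribution.
    """
    order = np.argsort(X, axis=0)                  # per-column sort order
    ranks = np.argsort(order, axis=0)              # rank of each entry in its column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)  # reference distribution
    return mean_sorted[ranks]

# Tiny toy matrix: 4 probes measured on 3 arrays.
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)
```

After normalisation, sorting any column of Xn yields the same reference distribution, which is the defining property of quantile normalisation. (Real RMA additionally performs background correction and probe-level summarisation, which this sketch omits.)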
The data can be normalised via> yeast.rma = rma(yeast.raw)> yeast.matrix = exprs(yeast.rma)[, exp_fac$data_order]> gnames = rownames(yeast.matrix)The expression profiles can also be easily obtained. The profile for the top ranked expression is found using> gene_positions = MB.2D$pos.HotellingT2[1]> gene_probes = gnames[gene_positions]> plotProfile(MB.2D, gnames = rownames(yeast.matrix))In the limma approach, each gene g is described by a linear model yg = X\u03b1g + \u03b5g, where the vector yg contains the expression values for gene g across the n arrays, X is a design matrix which describes key features of the experimental design used and \u03b1g is the coefficient vector for gene g. In the analysis studied here, the yeast data consists of data from n = 30 arrays. The entries in the columns of X depend on the experimental design used: there are two yeast strains (mutant and wild type), each measured at five separate time points, and we are interested in comparing the gene expressions between mutant and wild type strains over time. Thus we seek a linear model describing the ten strain \u00d7 time combinations by determining values for the ten coefficients in the coefficient vector \u03b1g. We will label these ten coefficients as m0, m60, m120, m180, m240, w0, w60, w120, w180, w240, where the first five coefficients represent the levels of the mutant strain at time points t = 0, 1, 2, 3, 4 and the remaining five coefficients are the equivalent versions for the wild type strain. Statistically speaking, the model has a single factor with ten levels. The design matrix X links these factors to the data in the arrays by having zero entries except when an array contributes an observation to a particular strain \u00d7 time combination. For example, array 26 measures the expression of the first wild type microarray at time t = 0 and so contributes an observation to level 'w0', the sixth strain \u00d7 time combination. Thus the entry in row 26, column 6 of the design matrix is 1. Further, the arrays are arranged in groups of three replicates. Thus the overall experimental structure (expt_structure below) has three arrays on level 'm0', then three arrays on 'm60', and so on. 
Setting up the factor levels and the design matrix is done in R by using> library(limma)> expt_structure = factor(colnames(yeast.matrix))#Construct the design matrix> X = model.matrix(~0 + expt_structure)> colnames(X) = c('m0', 'm60', 'm120', 'm180', 'm240', 'w0', 'w60', 'w120', 'w180', 'w240')and then the coefficient vector \u03b1g is estimated via the command> lm.fit = lmFit(yeast.matrix, X)Determining the differentially expressed genes amounts to studying contrasts of the various strain \u00d7 time levels, as described by a contrast matrix C. For these data, we are mainly interested in differences at the later time points, and so a possible set of contrasts to investigate is that of differences between the mutant and wild type strains at each time point. The limma package allows complete flexibility over the choice of contrasts; however, this necessarily includes an additional level of complexity. The values in the coefficient vector of contrasts, \u03b2g = CT\u03b1g for gene g, describe the size of the difference between strains at each time point. The relevant R commands are> mc = makeContrasts('m0-w0', 'm60-w60', 'm120-w120', 'm180-w180', 'm240-w240', levels = X)> c.fit = contrasts.fit(lm.fit, mc)> eb = eBayes(c.fit)The final command uses the eBayes function to produce moderated t-statistics which assess whether individual contrast values \u03b2gj are plausibly zero, corresponding to no significant evidence of a difference between strains at time point j. The moderated t-statistic is constructed using a shrinkage approach and so is not as sensitive as the standard t-statistic to small sample sizes. It also gives a moderated F-statistic which can be used to test whether all contrasts are zero simultaneously, that is, whether there is no difference between strains at all time points.There are a number of ways to rank the differentially expressed genes. 
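The design-matrix and contrast logic that model.matrix, makeContrasts and contrasts.fit implement can be sketched in a few lines of Python. The level names, replicate layout and the toy response y below are assumptions for illustration only, mirroring the ten strain x time levels with three replicates each, not the paper's actual data:

```python
import numpy as np

# Ten strain x time levels (mutant/wild type at five time points),
# three replicate arrays per level -> n = 30 arrays.
levels = [f"{s}{t}" for s in ("m", "w") for t in range(5)]
labels = [lev for lev in levels for _ in range(3)]      # one label per array

# Design matrix X: X[i, j] = 1 iff array i measures level j.
X = np.array([[1.0 if lab == lev else 0.0 for lev in levels] for lab in labels])

# Contrast matrix C: one column per time point, comparing mutant - wild type.
C = np.zeros((10, 5))
for t in range(5):
    C[t, t] = 1.0       # mutant coefficient at time t
    C[5 + t, t] = -1.0  # wild-type coefficient at time t

def contrast_values(y):
    """Least-squares fit alpha = argmin ||y - X alpha||, then beta = C' alpha."""
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
    return C.T @ alpha

# Toy gene: mutant expression is 2 units higher only at the fourth time point.
y = np.repeat([1, 1, 1, 3, 1, 1, 1, 1, 1, 1], 3).astype(float)
beta = contrast_values(y)
```

With this balanced design the fitted coefficients are simply the per-level means, so beta recovers the mutant-minus-wild-type difference at each time point (here, nonzero only at the fourth). limma adds empirical-Bayes moderation on top of exactly this linear-model machinery.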
For example, they can be ranked according to their log-fold change#see help(toptable) for more options> toptable(eb)or by using F-statistics> topTableF(eb)The advantage of using F-statistics over the log fold change is that the F-statistic takes into account variability and reproducibility, in addition to fold-change.Our analysis is based on a large number of statistical tests, and so we must correct for this multiple testing. In our example we use the (very) conservative Bonferroni correction since we have a large number of differentially expressed genes and the resulting corrected list is still long. Another common method of correcting for multiple testing is to use the false discovery rate (fdr); see ?p.adjust for further details. The following commands rank genes according to their (corrected) F-statistic p-value and annotate the output by indicating the direction of the change for each contrast for each gene: +1 for up-regulated expression (mutant type having higher expression than wild type at a particular time point), -1 for down-regulated expression and 0 for no significant change.> modFpvalue = p.adjust(eb$F.p.value, method = 'bonferroni')> indx = modFpvalue < 0.05> sig = modFpvalue[indx]#No. of sig. 
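For readers outside R, the behaviour of p.adjust's 'bonferroni' and 'fdr' (Benjamini-Hochberg) methods can be sketched as follows; the example p-values are invented:

```python
import numpy as np

def bonferroni(pvals):
    """Bonferroni adjustment: multiply each p-value by the number of tests, cap at 1."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * p.size, 1.0)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg fdr adjustment (step-up), as in R's p.adjust(method='fdr')."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adj = p[order] * m / np.arange(1, m + 1)        # p_(i) * m / i
    adj = np.minimum.accumulate(adj[::-1])[::-1]    # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

pv = [0.001, 0.01, 0.02, 0.04, 0.5]
```

Bonferroni controls the family-wise error rate and is the more conservative of the two: every adjusted value is at least as large as its fdr counterpart, which is why the paper notes it yields a shorter (but safer) gene list.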
differentially expressed genes> nsiggenes = length(sig)> results = decideTests(eb)> modF = eb$F> modFordered = order(modF, decreasing = TRUE)#Retrieve the significant probes and genes> c_rank_probe = c_df$probe[modFordered[1:nsiggenes]]> c_rank_genename = c_df$genename[modFordered[1:nsiggenes]]#Create a list and write to a file> updown = results[modFordered[1:nsiggenes], ]> write.table(cbind(c_rank_probe, c_rank_genename, updown),+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0file = 'updown.csv', sep = ',', row.names = FALSE, col.names = FALSE)Time-course plots for the top-ranked probesets can then be produced as follows.#Rank of Probesets, also output gene names> par(mfrow = c(2, 2), ask = TRUE, cex = 0.5)> for (i in 0:99){+ \u00a0\u00a0\u00a0indx = rank(modF) == nrow(yeast.matrix) - i+ \u00a0\u00a0\u00a0id = c_df$probe[indx]+ \u00a0\u00a0\u00a0name = c_df$genename[indx]+ \u00a0\u00a0\u00a0genetitle = paste(sprintf('%s', id), sprintf('%s', name), 'Rank =', i + 1)+ \u00a0\u00a0\u00a0exprs.row = yeast.matrix[indx, ]+ \u00a0\u00a0\u00a0plot(0, xlim = c(0, 4), ylim = range(exprs.row), ylab = 'Expression',+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0xlab = 'Time', main = genetitle, type = 'n')+ \u00a0\u00a0\u00a0for (j in 1:6){+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0pch_value = as.character(exp_fac$strain[5 * j])+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0points(0:4, exprs.row[(5 * j - 4):(5 * j)], type = 'b', pch = pch_value)+ \u00a0\u00a0\u00a0}+ }When interpreting rank orderings based on statistical significance, it is important to bear in mind that a statistically significant differential expression is not always biologically meaningful. The gene RNR2, for instance, is highly significant because of the low variation in its time course; however, the actual difference in expression levels between wild-type and mutant strains is relatively small. We address this problem in the next section. 
For example, Figure shows the expression profile of RNR2.Both packages have different strengths. One advantage of the timecourse package over the limma package is that it allows for correlation between repeated measurements on the same experimental unit, thereby reducing false positives and false negatives; these false positives/negatives are a significant problem when the variance-covariance matrix is poorly estimated. An advantage of the limma package is that it allows more flexibility by allowing users to construct different contrasts. In general we might expect both packages to produce fairly similar lists of, say, the top 100 probesets. In the analysis of the yeast data, we can determine the overlap of the top 100 probesets by using> N = 100> gene_positions = MB.2D$pos.HotellingT2[1:N]> tc_top_probes = gnames[gene_positions]> lm_top_probes = c_df$probe[modFordered[1:N]]> length(intersect(tc_top_probes, lm_top_probes))The result is a moderately large overlap of fifty-three probesets. We note that changing the ranking method in the limma package also yields results similar to those given by the timecourse library.When looking for \"interesting\" genes it can be helpful to restrict attention to those differentially expressed genes that are both statistically significant and of biological interest. This objective can be achieved by considering only significant genes which show, say, at least a two-fold change in their expression level. 
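The overlap computation above is simply an intersection of two top-N sets; a minimal language-neutral sketch, using hypothetical probe identifiers rather than the paper's actual rankings:

```python
def top_n_overlap(ranked_a, ranked_b, n=100):
    """Number of items shared by the top n entries of two ranked lists
    (e.g. a Hotelling-T2 ranking vs a moderated-F ranking)."""
    return len(set(ranked_a[:n]) & set(ranked_b[:n]))

# Hypothetical probe identifiers for illustration.
tc_top = ['p1', 'p2', 'p3', 'p4']   # ranking from method A
lm_top = ['p3', 'p9', 'p1', 'p8']   # ranking from method B
shared = top_n_overlap(tc_top, lm_top, n=4)
```

Here shared is 2 (probes p1 and p3 appear in both top-4 lists). Reporting the overlap as a fraction of N, 53/100 in the paper's case, gives a quick sense of how much two ranking criteria agree.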
This gene list is obtained using the following code (adapted from [ ])#Obtain the maximum fold change but keep the sign> maxfoldchange = function(foldchange)+ \u00a0\u00a0\u00a0foldchange[which.max(abs(foldchange))]> hfc = ordered_hfc ...> lpv = ordered_lpv ...#Average of WT> wt_means = apply(...) ...> colnames(m) = sort(unique(exp_fac$tps))The heatmap in Figure is created using heatmap.2 from the library gplots via the following code> library(gplots)#Cluster the top 50 genes> col = greenred(75)> heatmap.2(m, col = col, sepcolor = 'white', sepwidth = 0.05,+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0hclustfun = function(c){hclust(c, ...)},+ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0labRow = NA, cexCol = 1)The expression levels at t = 0 are very similar. However, as time progresses, groupings of genes appear whose levels are up-regulated (red) or down-regulated (green). Note that the intensity of the colour corresponds to the magnitude of the relative expression. Gene names appear on the right side of the figure and on the left side, the cluster dendrogram shows which genes have similar expression. The dendrogram suggests that there are perhaps six to ten clusters.Soft clustering methods have the advantage that a probe can be assigned to more than one cluster. Furthermore, it is possible to grade cluster membership within particular groupings. Soft clustering is considered more robust when dealing with noisy data; for more details see [21,22]. The Mfuzz package implements soft clustering using a fuzzy c-means algorithm. Analysing the data for c = 8 clusters is achieved by using> library(Mfuzz)> tmp_expr = new('ExpressionSet', exprs = m)> cl = mfuzz(tmp_expr, c = 8, m = 1.25)> mfuzz.plot(tmp_expr, cl = cl, new.window = FALSE)
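The maxfoldchange helper, which keeps the sign of the largest-magnitude fold change, and the "significant and at least two-fold" filter can be sketched in Python as follows; the thresholds and example values are illustrative, not the paper's:

```python
import numpy as np

def max_fold_change(fold_changes):
    """Largest-magnitude (log) fold change across time points, keeping its sign,
    mirroring R's foldchange[which.max(abs(foldchange))]."""
    fc = np.asarray(fold_changes, dtype=float)
    return fc[np.argmax(np.abs(fc))]

def significant_and_large(pvals_adj, fold_changes, alpha=0.05, min_abs_fc=1.0):
    """Flag genes that are both statistically significant (adjusted p < alpha)
    and show at least a two-fold change (|log2 fc| >= 1) at some time point."""
    fc = np.array([max_fold_change(row) for row in fold_changes])
    return (np.asarray(pvals_adj) < alpha) & (np.abs(fc) >= min_abs_fc)

# Toy example: two genes, two adjusted p-values, log2 fold changes over time.
fc_example = max_fold_change([0.5, -2.0, 1.5])                    # -> -2.0
flags = significant_and_large([0.01, 0.2], [[0.5, -2.0], [3.0, 0.1]])
```

Combining a significance cut with a magnitude cut is exactly what guards against the RNR2-style situation discussed above: highly significant but biologically small changes are filtered out.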
For example, if c is chosen to be too large then some clusters will appear sparse and this might suggest choosing a smaller value of c. Figure Of course, it is usually not clear how many clusters there are (or should be) within a dataset and so the sensitivity of conclusions to the choice of number of clusters (> cluster = 1> cl [[4]]cdc13-1 strains was expected to share features in common with responses to cell cycle progression, environmental stress, DNA damage and other types of telomere damage. The statistical analysis determined lists of probesets associated with genes involved in all of these processes. The techniques used focussed on making best use of the temporal information in time-course data. The use of cdc13-1 strains, which uncap telomeres quickly and synchronously, also allowed the identification of genes involved in the acute response to telomere damage. This case study has demonstrated the power of R/Bioconductor to analyse time-course microarray data. Whilst the statistical analysis of such data is still an active research area, this paper presents some of the cutting-edge tools that are available to the life science community. All software discussed in this article is free, with many of the packages being open-source and subject to on-going development.The response to telomere uncapping in The authors declare that they have no competing interests.AG conducted the microarray experiments. All authors participated in the analysis of the data and in the writing of the manuscript.Additional R commands and analysis. 1. R commands for extracting S. cerevisiae ids, removing unwanted probesets and converting probesets to genes. 2. R commands for genetic regulatory network inference. 3. A list of R packages used in this manuscript. 4. Additional figures.Click here for file"}
+{"text": "Modelling of time series data should not be an approximation of input data profiles, but rather be able to detect and evaluate dynamical changes in the time series data. Objective criteria that can be used to evaluate dynamical changes in data are therefore important to filter experimental noise and to enable extraction of unexpected, biologically important information.Here we demonstrate the effectiveness of a Markov model, named the Linear Dynamical System, to simulate the dynamics of a transcript or metabolite time series, and propose a probabilistic index that enables detection of time-sensitive changes. This method was applied to time series datasets from Bacillus subtilis and Arabidopsis thaliana grown under stress conditions; in the former, only gene expression was studied, whereas in the latter, both gene expression and metabolite accumulation were examined. Our method not only identified well-known changes in gene expression and metabolite accumulation, but also detected novel changes that are likely to be responsible for each stress response condition.This general approach can be applied to any time-series data profile from which one wishes to identify elements responsible for state transitions, such as rapid environmental adaptation by an organism. Biochemical systems in living cells are robust and flexible. Investigating the responses of cells (and organisms) to environmental changes typically requires a system-level analysis of the interactions between the various molecular elements that comprise the system. A key step to analyze system responses to environmental changes is identifying large state changes or \"transitions\". 
A statistical method that could detect such transitions would be a powerful analytical tool for finding important factors in large-scale profiles, such as variations in gene expression.Previous analyses of gene expression profiles have often made use of graphical models, such as Bayesian Networks [1,2] and Graphical ... models. In the Auto-Regressive model, the observations yt are recursively defined by the following equation:yt = Ayt-1 + \u03b5twhere yt is an observational vector of genes or metabolites at time t, A is an observational transition matrix, and \u03b5t is Gaussian noise. Because this model does not distinguish observational and inherent noises, identification of transition states becomes difficult in the presence of substantial noise.Here we propose an extension of the Auto-Regressive model, which we apply to time series datasets from Bacillus subtilis and Arabidopsis thaliana maintained under stress conditions.Our method has two steps. First, transition time points for each time series are detected using LDS, which mathematically distinguishes transitional fluctuations from experimental noise. The transition point is detected by the logarithm of the likelihood values. Here \"likelihood value\" means the generative probability of current data based on the condition of the past datasets. If this value is low, then the current data cannot be adequately explained by past datasets. In other words, a transition has occurred. In the second step, relevant factors such as genes and/or metabolites related to the transitions are extracted by Batch-Learning Self Organizing Mapping (BL-SOM) using changes in expression levels.LDS uses internal state variables in the generative model for cellular internal state changes. These internal states correspond to the compressed description of the observed biological system prior to adding noise factors.The total experimental dataset of the time series and the corresponding internal state are denoted by YT 1: = {y1, y2,..., yT} and XT 1: = {x1, x2,..., xT}, respectively. 
Each element in these vectors is defined for t = 1, 2,..., T, the measurement order of the time series, where D is the dimension of vector yt, representing the expression levels of D genes or metabolites, and N is the dimension of vector xt, representing internal states. To distinguish observational noise from true information on cellular transitions, we focus on two probability densities: the density between internal state variables p(xt|xt-1), and the density for evaluation of observational noise p(yt|xt). The proposed model is further defined as follows:xt = Wxt-1 + \u03b5tyt = Vxt + \u03b7twhere V is a D \u00d7 N observational matrix, W is an N \u00d7 N internal state transition matrix, the D-dimensional vector \u03b7t is observational noise, and the N-dimensional vector \u03b5t is transition noise. The vectors x1, \u03b5t, \u03b7t are generated according to Gaussian distributions, where Np is a probability density function when a p-dimensional probabilistic vector x obeys a Gaussian distribution whose mean vector is m and covariance matrix is \u03a3 (Equation 9).The next step is to define the relevant probability densities. 
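A toy simulation of the generative model defined by these equations (xt = W xt-1 + \u03b5t, yt = V xt + \u03b7t) may clarify the roles of W, V and the two noise terms. All dimensions and parameter values below are invented for illustration; they are not fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N internal states, D observed variables, T time points.
N, D, T = 2, 4, 50
W = np.array([[0.9, 0.1],
              [0.0, 0.8]])                 # internal state transition matrix (N x N)
V = rng.normal(size=(D, N))                # observational matrix (D x N)
Q = 0.05 * np.eye(N)                       # transition noise covariance (for eps_t)
R = 0.10 * np.eye(D)                       # observational noise covariance (for eta_t)

def simulate_lds(T):
    """Generate x_t = W x_{t-1} + eps_t and y_t = V x_t + eta_t."""
    xs, ys = [], []
    x = rng.multivariate_normal(np.zeros(N), np.eye(N))   # initial state x_1
    for _ in range(T):
        xs.append(x)
        ys.append(V @ x + rng.multivariate_normal(np.zeros(D), R))
        x = W @ x + rng.multivariate_normal(np.zeros(N), Q)
    return np.array(xs), np.array(ys)

X, Y = simulate_lds(T)
```

The key point the model exploits is the separation of the two noise sources: \u03b5t perturbs the hidden trajectory itself (inherent fluctuation), while \u03b7t only corrupts what is measured, which is what lets the LDS distinguish genuine state transitions from experimental noise.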
We assume that the observational and internal transition noises are both Gaussian, and therefore the relationship is a first-order Markov process (Equation 10).The model parameters of (4)\u2013(8) are collected in the parameter set \u03b8. Note that the model corresponds to a Kalman Filter when \u03b8 is known (see Methods section).The initial state x1 is defined as in Equation (12).From Equations (5) and (7), the following function is obtained:From Equations (4) and (8), the following function is obtained:Using these results, the following joint probability is obtained:The parameter optimization follows a standard EM algorithm (see Methods section).Using the resulting estimated parameter, the log-likelihood with respect to the present time point t when all time points are given is defined by Equation (16):This is calculated using the E-step formula (see Equation 23 in Methods) after parameter estimation using the Kalman filter.When the log-likelihood value log Lt becomes much lower than log Lt-1, then yt cannot be explained by Yt-1 1:, i.e., the cellular internal state has changed at time t. In this study, the point at which the log-likelihood value becomes relatively low between whole time points is defined as the state transition point. If the log-likelihood value remains low over a certain period, then the cells are changing their states continuously during that period. ... Bacillus subtilis ...Our results also suggested the presence of lipid metabolic responses: in the metabolite accumulation profile in roots, values corresponded to molecular species with various acyl groups, such as phosphatidylglycerol, phosphatidylethanolamine, phosphatidylcholine, phosphatidic acid, and sulfoquinovosyl diacylglycerol, as shown in Figure .A profile of sulfate accumulation was generated using capillary electrophoresis (Figure ). In comparison ...From these results, we hypothesize that sulfate is in an active form and is distributed throughout the plant at the transition time. 
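The transition-point rule described here, flagging time points where log Lt drops well below log Lt-1, can be sketched directly. The drop threshold and the example log-likelihood series are illustrative, not the paper's values:

```python
import numpy as np

def transition_points(log_lik, drop=2.0):
    """Indices t at which log L_t falls at least `drop` below log L_{t-1},
    i.e. where the current observation is poorly explained by past data."""
    ll = np.asarray(log_lik, dtype=float)
    return [t for t in range(1, ll.size) if ll[t] < ll[t - 1] - drop]

# Toy log-likelihood series: a sharp drop at t = 3 marks a candidate transition.
ll = [-1.0, -1.2, -1.1, -6.0, -5.8, -1.3]
```

Note that a sustained low stretch (here t = 3 and t = 4) triggers only one flag at its onset; per the text, a period during which log Lt remains low is interpreted as the cell changing state continuously, rather than as repeated discrete transitions.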
During this period, in order to maintain the intracellular environment, membrane lipids are temporarily degraded and re-synthesized after the transition. This suggestion is consistent with the reported decrease in lipids under conditions of sulfur starvation.In summary, by using a linear dynamical system, we have identified transition times in the adaptation processes of Bacillus subtilis and Arabidopsis thaliana to environmental stresses. By focusing on transition information based on a well-defined probabilistic index, we obtained novel observations on apoptosis in Bacillus subtilis and the regulation of lipid metabolism connected with sulfur-stress responses in Arabidopsis thaliana. As this approach uses probabilistic values to detect the transitions, the results are objectively supported without the risk of misinterpretation due to experimental noise. The results of this approach will enable us to more effectively design experiments specifically tailored for functional identification of genes and metabolites. By obtaining time series data with higher temporal resolution around the transition time points, we can obtain more precise information on the details of the responses. The strategy described here was successful in identifying a small number of candidate genes and metabolites from the vast number of genes and metabolites in comprehensive \"omics\" databases.The Bacillus subtilis time series data used in the present study were obtained from microarray analysis of cells sampled from 8 different experimental conditions. The data were produced using a two-colour fluorescence cDNA microarray that included 3100 Bacillus subtilis genes. The LB medium was developed to maximize cellular growth, and cells grown in this medium represented the control, unstressed population. 
In the initial phase of culture, the cell number increases by binary division \u2013 this is called the exponential growth phase, in contrast to the stationary phase where the cell number has reached equilibrium. Data were collected from cells grown at 37\u00b0C in LB medium; the total length of culture was 12 hr and sampling was performed at 8 time points. Other culture conditions were also used with the aim of inducing stress responses in the cells. Cells were grown in Minimum Glucose Medium (MGM) at 37\u00b0C for 13 hr and sampled at 8 time points. Glucose starvation (GS) was achieved by eliminating the sugar from MGM; the cells were cultured in this medium for 10 hr and were sampled at 5 time points. Phosphate starvation (PS) was achieved by eliminating phosphoric acid from the MGM; the cells were cultured in this medium for 11 hr, and were sampled at 6 time points. Some cells were grown in Competence Medium (CM), which increases the ability of the cells to ingest DNA from the external environment. The cells were grown in CM for 9 hr and were sampled at 5 time points. Some cells were grown in Competence-Sporulation medium (CSM) for 15 hr and were sampled at 13 time points. A second sporulation medium, Difco sporulation medium (DSM), was also used. Cells were grown in this medium for 12 hr and were sampled at 19 time points. We also used Difco Glucose Glutamine (DGG) medium, in which glucose and glutamine have been added to DSM medium in order to inhibit sporulation. The cells were grown for 9 hr in DGG and were sampled at 6 time points.The Arabidopsis thaliana data used in the present study were obtained from DNA microarray experiments and by Fourier-transform ion cyclotron resonance mass spectrometry (FT-ICR-MS), as previously described. In brief, Arabidopsis was cultured in sulfur-sufficient control medium for three weeks, transferred to control or sulfur-deprived medium, and cultured for up to one more week. 
Rosette leaves and roots were harvested at 3, 6, 12, 24, 48 and 168 hr after transfer, and subjected to transcriptome and metabolome analyses.Transcriptome data were obtained using the Agilent Arabidopsis 2 microarray, which carries 21,500 Arabidopsis genes. The data ...The test distribution is defined as q and is used to approximate the true posterior distribution. The Kullback-Leibler divergence takes the minimum value of 0 if the two distributions are equivalent.In Equation (17), maximization of the free energy with respect to q and \u03b8 is equal to the calculation of the maximum likelihood estimate \u03b8 with respect to YT 1:.The free energy is maximized using the Expectation-Maximization algorithm, consisting of the following steps:Step 1. Parameter set \u03b8 is initialized.Step 2. E-step (step 2.1) and M-step (step 2.2) are successively repeated until the free energy converges.Step 2.1. E-step: let k be a repeat loop index. By fixing the parameter \u03b8(k-1), the free energy is maximized with respect to q. According to Equation (17), the solution is Equation (18).After the fixation of parameter \u03b8, the calculations needed to evaluate Equation (18) are as follows:When both the data Yt-1 1: and the parameter of the prior distribution p of xt are given, the posterior distribution of xt, given the data Yt 1:, is obtained (Equation 19).Using (19), the prior distribution of xt+1, given the data Yt 1:, is obtained (Equation 20).By successively iterating Equations (19) and (20), 
This repeating method is called the Kalman Filter [By successively iterating Equations (19) and (20), n Filter .If all data are given, the following joint probability is obtained:P is given, the following distribution is obtained:If the parameter of p and p, which are necessary to calculate the value of Equation (18), are obtained.By successively iterating Equations (21) and (22), This repeating method is called the Kalman smoother .Using the Kalman smoother, the statistical values necessary for parameter estimation are obtained.p is given, the following likelihood is calculated:If Using (23), the log-likelihood is calculated asStep 2.2. M-step\u03b8 that will maximize F under the condition qx = qxk) :In this step, the value of The objective function to be maximized is defined aswhich is obtained by the following equation:\u03b8 is calculated that maximizes F.and the solution of parameter \u03b8 is then updated, and the process goes back to E-step.Parameter The author(s) declares that there are no competing interests.Bacillus experiments. KS proposed and supervised the study. All authors read and approved the final manuscript.RM designed the LDS method and carried out the computer simulations. SK designed the BL-SOM and carried out the computer experiments. MYH supplied the Arabidopsis dataset. MY analyzed FT-MS data with RM. NO supervised the"}
+{"text": "The status of a disease can be reflected by specific transcriptional profiles resulting from the induction or repression activity of a number of genes. Here, we propose a time-dependent diagnostic model to predict the effects of interferon and ribavirin treatment in HCV-infected patients by using time series microarray gene expression profiles from a published study.In the published study, 33 African-American (AA) and 36 Caucasian American (CA) patients with chronic HCV genotype 1 infection received pegylated interferon and ribavirin therapy for 28 days. The HG-U133A GeneChip containing 22283 probes was used to analyze the global gene expression in peripheral blood mononuclear cells (PBMC) of all the patients on days 0 (pretreatment), 1, 2, 7, 14, and 28. According to the decrease of HCV RNA levels on day 28, two categories of responses were defined: good and poor. A voting method based on Student's t test, the Wilcoxon test, the empirical Bayes test and significance analysis of microarrays was used to identify differentially expressed genes. A time-dependent diagnostic model based on the C4.5 decision tree was constructed to predict the treatment outcome. This model utilized the gene expression profiles not only before the treatment, but also during it. Leave-one-out cross validation was used to evaluate the performance of the model.The model could correctly predict all Caucasian American patients' treatment effects at a very early time point. The prediction accuracy for African-American patients reached 85.7%. In addition, thirty potential biomarkers which may play important roles in the response to interferon and ribavirin were identified.Our method provides a way of using time series gene expression profiling to predict the treatment effect of pegylated interferon and ribavirin therapy on HCV infected patients. Similar experimental and bioinformatical strategies may be used to improve treatment decisions for other chronic diseases. 
Chronic diseases such as infectious disease, cancer, and diabetes are among the most common and costly health problems. The therapy of a chronic disease often lasts for a long time, while the treatment effect may be questionable and the side effects may be serious. Hepatitis C virus (HCV) is one of the major causes of chronic hepatitis, cirrhosis, and hepatocellular carcinoma. The current recommended treatment for chronic HCV infection is the combination of pegylated alpha interferon (peginterferon) and the oral antiviral drug ribavirin given for 24 or 48 weeks, but the chance of inducing a sustained response is only 54%\u201356%. We analyzed a published time series microarray dataset from a virological study in which the effects of pegylated interferon and ribavirin on 33 African-American (AA) and 36 Caucasian American (CA) patients with chronic HCV infection were studied, and established a time-dependent diagnostic model on these data. Although the focus here is on how HCV infected patients respond to pegylated interferon treatment, the model described is generally applicable to other chronic diseases undergoing long-term treatment. The dataset is available in the Gene Expression Omnibus under accession number GSE7123. The initial data set consists of the gene expression profiles of 33 African-American and 36 Caucasian American patients with chronic HCV genotype 1 infection on day 0 (pretreatment) and days 1, 2, 7, 14, and 28 of pegylated interferon and ribavirin therapy. The HG-U133A GeneChip containing 22283 probes was used to analyze the global gene expression in peripheral blood mononuclear cells (PBMC) of the patients at each time point. For each patient the decrease of the HCV RNA level was calculated by subtracting the baseline level (before treatment) from the level on day 28. Good response was defined as a decrease of more than 1.4 log10 IU/ml in HCV RNA level, and poor response was defined as less than a 1.4 log10 IU/ml decline from the base level.
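The response definition above amounts to a simple thresholding rule on the day-28 viral-load decline. A minimal sketch with hypothetical viral-load numbers (the direction of subtraction here, baseline minus day 28, expresses the decline as a positive quantity; it is not the study's code):

```python
def response_label(baseline_log10, day28_log10, threshold=1.4):
    """Label treatment response from HCV RNA levels (log10 IU/ml):
    'good' if the decline from baseline to day 28 exceeds the 1.4 log10 threshold."""
    decline = baseline_log10 - day28_log10
    return "good" if decline > threshold else "poor"

# Hypothetical viral-load values, not patient data from the study:
print(response_label(6.5, 4.0), response_label(6.5, 5.5))  # good poor
```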
Only patients with gene expression data at all 6 time points were included in our analysis: 30 Caucasian Americans (CA), of whom 17 were good responders and 13 were poor, and 28 African-Americans (AA), of whom 19 were good responders and 9 were poor. The original time-series microarray data used in this work are from a study by Milton W. Taylor published in the Journal of Virology. First, we normalized the data of all 348 microarrays using the quantile method and log2-transformed them. Only probes that were present in at least 75% of the microarrays with log2 intensities greater than 7 were kept for further analysis. This resulted in a subset of approximately 13620 probe sets representing 9100 different genes. To avoid bias that may be created by a single feature-selecting statistical method, we constructed a voting method based on several methods including Student's t test, Wilcoxon test, empirical Bayes test and significance analysis of microarrays. The classifier used in our program at each time point was C4.5, a decision tree classification method, with leave-one-out cross validation. Each patient in the training set was regarded as an instance and the class label was the outcome of the treatment. At each time point, differentially expressed genes between the good and poor response groups were identified using the voting method described above as the marker probe sets of that time point. At the first time point (day 0), the features were that day's gene expression values of the marker probe sets; at each following time point during the treatment, the features were the combination of that day's gene expression values of the marker probe sets and the features of the previous time points. For example, the features at day 1 are the expression values of differentially expressed genes at day 1 and the expression values of differentially expressed genes at day 0. Every patient is assumed treatable until predicted as nontreatable by a classifier built on sufficient differentially expressed genes at some day.
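The voting scheme can be illustrated in miniature. This is not the authors' R code: the sketch uses only Welch's t statistic and a normal approximation to the Wilcoxon rank-sum statistic (two of the tests in the voting panel), flags a gene only when both tests vote for it (a 2-of-2 rule), and runs on made-up expression values:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def ranksum_z(a, b):
    """Normal approximation to the Wilcoxon rank-sum statistic (assumes no ties)."""
    combined = sorted(a + b)
    w = sum(combined.index(x) + 1 for x in a)  # rank sum of group a
    n1, n2 = len(a), len(b)
    mean_w = n1 * (n1 + n2 + 1) / 2
    sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mean_w) / sd_w

def vote_differential(good, poor, t_cut=2.0, z_cut=1.96):
    """Flag a gene only if both tests vote for it."""
    return abs(welch_t(good, poor)) > t_cut and abs(ranksum_z(good, poor)) > z_cut

# Hypothetical expression values for two probes across good/poor responders:
shifted = vote_differential([5.1, 5.0, 5.2, 4.9], [7.0, 7.1, 6.9, 7.2])
flat = vote_differential([5.0, 5.1, 4.9, 5.2], [5.05, 4.95, 5.15, 4.85])
print(shifted, flat)  # True False
```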
For each time point, if the number of differentially expressed genes was equal to or greater than 5, a C4.5 classifier was constructed at this time point; otherwise, the differentially expressed gene number at this time point was set to null and no C4.5 classifier was trained. This check step helps to avoid false negative decisions. At each round of leave-one-out cross-validation, we used the data of N-1 patients to build a model and then applied it to the left-out patient to predict his treatment outcome. If a patient was predicted as treatable by every time point's classifier, this was a positive prediction. If the final outcome according to the HCV RNA level decline was good response for this patient, this was a true positive prediction; otherwise, it was a false positive prediction. If a patient was predicted as nontreatable by one of the six classifiers, this was a negative prediction. That means this patient should be eliminated from the treatment, and the workflow for this patient stops at that time point. If the real outcome was poor response for this patient, this was a true negative prediction; otherwise, this was a false negative prediction. The prediction accuracy Q of leave-one-out cross-validation was calculated as Q = (tp + tn)/(tp + tn + fp + fn), where tp, tn, fp and fn stand for true positive, true negative, false positive and false negative, respectively.
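The accuracy Q defined above is the standard proportion of correct predictions. A one-function sketch; the counts in the example are illustrative, not the study's confusion matrix:

```python
def loocv_accuracy(tp, tn, fp, fn):
    """Leave-one-out cross-validation accuracy: Q = (tp + tn) / (tp + tn + fp + fn)."""
    total = tp + tn + fp + fn
    return (tp + tn) / total

# Hypothetical confusion counts for 30 patients:
print(round(loocv_accuracy(17, 12, 1, 0), 4))  # 0.9667
```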
Detailed information about this model, including processed microarray data, R code and results, can be found in the Additional files. To assess the biological relevance of the identified candidate biomarkers which were important for CA response prediction, we used PubGene to find relationships between these candidate biomarkers and IFN (interferon)/HCV (hepatitis C virus). PubGene is a tool that carries out automated extraction of explicit and implicit biomedical knowledge from publicly available gene and text databases to create a gene-to-gene co-citation network by automated analysis of titles and abstracts in MEDLINE records. The true power of time series microarray analysis does not come from the analysis of a single time point but rather from the analysis of a series of time points to identify a biomarker chain. The main idea of our model is to fully utilize gene expression profiles before and during treatment to predict the final treatment outcome. The time-dependent diagnostic results of all patients, AA patients and CA patients are shown in Figure . We found that if only static gene expression profiles before treatment were used, the prediction accuracy was rather low (data not shown). However, from the above results it occurred to us that the seemingly complicated models may actually be simplified to the day 1 classifier, depending only on gene expression profiles of a very early treatment time point. The leave-one-out cross-validation accuracy of CA patients based on the day 1 classifier (including day 0 and day 1 gene expression profiles) could achieve 100%, the same as the result using data of all the time points. With AA patients the accuracy dropped somewhat, but was still much better than when only using the pre-treatment gene expression profile.
The leave-one-out cross-validation results of all patients, AA patients and CA patients on day 1 are given in Table . As stated above, CA patients with HCV infection are more sensitive to the therapy of interferon and ribavirin, and after one day of treatment the outcome could be one hundred percent predicted. Using the feature selection methods described in the Methods section, we identified 30 differentially expressed genes or probes on day one between the 17 good-response CA patients and the 13 poor-response CA patients as the candidate biomarkers relevant to interferon therapy response. They are EIF3S5, HSPA9, ABLIM1, RPL4 (201154_x_at), MARCKS, HTRA2, SH2B3, KIAA0999, LCK, C8orf70, TTLL1, CD86, TUFT1, KLRK1, PARP1, KPNB1, NT5C2, RPL4 (211710_x_at), MRPS27, AOF2, HSD17B8, RBMX, TNFSF10, SMARCA4, C14orf122, KIAA0748, PCID2, DNAPTP6, TLE2 and CYFIP2. Their detailed probe information is provided in an Additional file. The time series expression profiles of two representative genes, TNFSF10 (tumor necrosis factor (ligand) superfamily, member 10) and KLRK1, are shown in Figure . To further evaluate whether these expression signatures are associated with therapeutic outcome (good or poor response), we conducted clustering of CA patients using the genes differentially expressed on day 1 between the CA groups of good and poor outcome (Figure ). The relationships between these candidate biomarkers and IFN (interferon)/HCV (hepatitis C virus) were explored using PubGene; the two literature networks are shown in an Additional file, as are the GO category enrichment analysis results. Additional files: (1) Processed microarray data, R code and results of the time-dependent diagnostic model (parts 1 and 2). (2) Detailed probe information of the thirty candidate biomarkers; the probe information comes from the original microarray probe set. (3) Dynamic expression graphics of the thirty candidate biomarkers; for each biomarker a graph of its expression levels in four groups of patients at all time points is given. (4) Literature networks of the thirty candidate biomarkers in relation to IFN (interferon)/HCV (hepatitis C virus); the six genes that have direct connections with both IFN and HCV are framed with blue boxes. (5) GO and KEGG category enrichment analyses of the thirty candidate biomarkers; thirty-seven enriched GO biological processes and ten GO molecular functions as well as one enriched KEGG pathway are shown (p < 0.01)."}
+{"text": "This paper addresses key biological problems and statistical issues in the analysis of large gene expression data sets that describe systemic temporal response cascades to therapeutic doses in multiple tissues, such as liver, skeletal muscle, and kidney from the same animals. Affymetrix U34A time course gene expression data are obtained from three different tissues: kidney, liver and muscle. Our goals are to assess the concordance of genes across tissues, to identify the common differentially expressed genes over time, and to examine the reproducibility of the findings by integrating the results through meta-analysis from multiple tissues, in order to gain a significant increase in the power of detecting differentially expressed genes over time and to characterize the differential responses of the three tissues to the drug. A Bayesian categorical model for estimating the proportions of the 'call' is used for pre-screening genes. A hierarchical Bayesian mixture model is further developed for the identification of differentially expressed genes across time and of dynamic clusters. The deviance information criterion is applied to determine the number of components for model comparison and selection. The Bayesian mixture model produces the gene-specific posterior probability of differential/non-differential expression and the 95% credible interval, which is the basis for our further Bayesian meta-inference. Meta-analysis is performed in order to identify commonly expressed genes from multiple tissues that may serve as ideal targets for novel treatment strategies and to integrate the results across separate studies. We have found common expressed genes in the three tissues. However, the up/down/no regulations of these common genes differ at different time points. Moreover, the most differentially expressed genes were found in the liver, then in the kidney, and then in muscle. 
Despite rapid advancements in statistical methods for gene expression microarray analysis, much more work is needed for multiple-source heterogeneous genomic data, such as multiple organisms/tissues, multiple platforms, multiple species, and even data spanning the transcriptome, genome, and proteome, in order to develop valid and dependable methods applicable to microarray data. The congruency of these different data sources requires a unified framework for combining the multiple sources and testing associations between them, thus obtaining a robust and integrated view. In the meantime, we may find surprising discrepancies between gene expressions given multiple sources of genomic data sets. Meta-analysis is a set of statistical procedures designed to integrate experimental and correlational results across independent studies that address a related set of research questions. So far a few studies have attempted to integrate gene expression data sets from different sources in order to yield a model for disease dynamics such as development and behavior. Ghosh et al. discussed the issues of combining the results across various studies using meta-analysis, including different experimental platforms, and Rhodes et al. carried out related meta-analyses of microarray studies. In our earlier studies we have provided detailed reviews of statistical methodologies for time-course gene expression analysis. Corticosteroids are a class of compounds that exhibit the most potent immunosuppressive and anti-inflammatory activities. These drugs are widely used in a variety of acute and chronic disease states, such as asthma, leukemia, and organ transplantation. Although their therapeutic effects result from regulation of immune system genes, many adverse events occur due to unwanted influence of the drug on other genes, primarily those involved in metabolic processes. 
Because drug activity requires a sequential series of events in order to elicit its effects, different genes may exhibit different expression profiles over time following the administration of a drug dose. The particular genes that are either up-regulated or down-regulated, in combination with specific time-course patterns and interactions with other genes, may be predictive of the ultimate outcome(s) of drug therapy. Therefore, it is important to improve our understanding of the time-dependent changes in gene expression and their interactions caused by corticosteroid therapy, in order to potentially discover the precise genes that may be the most important in producing favorable therapeutic outcomes versus those that may instigate negative, unwanted effects. Moreover, systemic phenomena such as blood pressure involve multiple genes in multiple tissues, and pathologies such as diabetes and hypertension are complex phenomena involving altered expression of multiple genes in several tissues. Our multiple-tissue/organ time-course Affymetrix data sets are from GeneChip\u00ae Rat Genome arrays (R_U34A). This is a pre-clinical study performed on experimental rats. Forty-eight animals received a single IV bolus 50 mg/kg dose of the anti-inflammatory drug methylprednisolone (MPL), and liver, muscle and kidney tissues were collected. MAS 5.0 was used for background subtraction, signal intensity normalization between arrays, and non-specific hybridization correction. In the Affymetrix software (MAS 5.0), each probe set in our data is assigned an \"average difference\" value corresponding to the expression level of the particular gene it represents. The calculated average difference is then used as the measure of expression levels and for data normalization throughout the data analysis. Expression ratios were formed by dividing the gene expression level at time ti by the gene expression level at time t0, where i represents the specific post-dose time-point and t0 represents baseline at time = 0 hours (i.e. the control group that did not receive drug). 
These ratios were subsequently natural-logarithmically transformed to produce normally distributed gene expression levels at each sampling time-point. To limit potential sources of non-biological variation, such as those introduced by experimental procedures, and to extract real biological variation regarding potentially important changes in gene expression due to MPL dosing, the following procedures were employed for data quality control and data analysis. To determine expression measures of probe sets from probe signals with the lowest data variance and bias, we performed one of the most popular probe set algorithms, MAS5 analysis. Total RNA was separately extracted from the liver samples from each animal and purified. The isolated RNA was then used to create biotinylated cRNA. Some of these oligonucleotide sequences were from different parts of the same gene (i.e. 5' vs. middle vs. 3' ends of the transcript), but for the most part, each probe identified a unique gene sequence. According to the Affymetrix Microarray Suite 4.0, an initial step was performed to classify signal values as Present (P), Marginal (M), or Absent (A) based on the intensity of the signal. A Bayesian categorical model is developed to estimate the proportions of the 'call' information of P, A and M, and Bayesian statistical analysis is conducted to filter genes according to the 'call' data by estimating the parameters under a multinomial distribution assumption. Genes that have fewer 'calls' of P than expected or more 'calls' of A than expected are excluded. The 'call' has three categories. Suppose the counts (n1, n2, n3) have a multinomial distribution with n = \u2211ni and parameters \u03c0 = (\u03c01, \u03c02, \u03c03)', where i = 1: Absent; i = 2: Marginal; i = 3: Present. Let {pi = ni/n} be the sample proportions. The likelihood is then proportional to \u03c01^n1 \u03c02^n2 \u03c03^n3. Computations of the marginal posterior distributions and their moments can be carried out by simulating samples from them via Markov chain Monte Carlo. 
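Under a conjugate Dirichlet prior the multinomial posterior is available in closed form, so the posterior means that the MCMC simulation approximates can be checked directly. A sketch with a uniform Dirichlet(1, 1, 1) prior and hypothetical call counts (the paper's actual prior choice is not stated here):

```python
def dirichlet_posterior_mean(counts, prior=(1.0, 1.0, 1.0)):
    """Posterior mean of multinomial proportions under a Dirichlet prior:
    E[pi_i | n] = (n_i + a_i) / (sum(n) + sum(a))."""
    post = [n + a for n, a in zip(counts, prior)]
    total = sum(post)
    return [p / total for p in post]

# Hypothetical (Absent, Marginal, Present) call counts for one gene on 52 arrays:
est = dirichlet_posterior_mean((10, 2, 40))
print([round(p, 3) for p in est])  # [0.2, 0.055, 0.745]
```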
A hierarchical Bayesian mixture model is developed to model the complex distributions of the gene expressions. Let xit be the expression value for the i'th gene at time t, i = 1,...,I, t = 1,...,T, and let \u03d5(.) denote the mixture density given the gene expression data, \u03d5(xit) = \u2211j \u03b8j fj(xit | \u03b1j), where the \u03b8j are component proportions, nonnegative quantities that sum to 1; C is the number of clusters, to be determined by the model selection criterion DIC discussed next; and fj(xit | \u03b1j) is each component density of the mixture. We denote the parameters in latent cluster j as \u03b1j = (\u03bcj, \u03c4j). The conjugate prior for \u03bcj is normal, and its parameters are chosen from various values to give broad distributions; here \u03b3j0 is an initial guess of the mean in cluster j, with \u03baj0 a prior sample size reflecting the strength of belief in the guess about the mean. The precision parameter in cluster j is given by \u03c4j = 1/\u03c3j2 with \u03c3j2 ~ InverseGamma, where vj is the guess of the prior degrees of freedom, typically vj = 2 or lower. The C clusters in (1) include both non-differentially expressed gene clusters (with \u03bc1 = 0) and clusters of genes with positive or negative expression patterns over time, which are declared differentially expressed clusters. These differentially expressed clusters can be further clustered based on their dynamic changes/patterns over all time points (either up- or down-regulated across different time points). For C components/clusters, the corresponding dynamic patterns can be represented with underlying latent variables: each latent variable Zit for gene i at time point t is discrete, taking value j = 1,...,C with probability \u03b8j; Zit ~ Categorical(\u03b8). 
\u03b8j can follow either a parametric prior such as a beta distribution or a Dirichlet distribution, or a non-parametric Dirichlet process prior; here the \u03b8j are assumed to follow the Dirichlet prior \u03b8j ~ Dirichlet(priorj). For the cluster of genes that are non-differentially expressed across time, we define the mean of this component as 0. Various initial values for the prior and hyper-prior distributions of the means and variances in the mixture model were tested in order to obtain model convergence. For example, when assigning uninformative inverse gamma distributions to these parameters, the models do not converge; they converged with Inverse Gamma(0.1, 0.1). The computations of marginal posterior distributions and their moments of all the parameters were conducted by an MCMC algorithm with Gibbs sampling; our WinBUGS code is provided as an additional file. Spiegelhalter et al. proposed the deviance information criterion (DIC), which we use for model comparison and selection. Our Bayesian mixture model produces the gene-specific posterior probability of differential expression and the 95% credible interval (which covers 95% of the posterior probability distribution), which is more informative than directly conducting hypothesis tests. Hypothesis tests require setting up the null/alternative hypothesis for each tested value of the gene expression, one at a time for the chosen model, which is less efficient than providing a confidence/credible interval; most existing works rely on such gene-by-gene tests. One important feature of the above hierarchical Bayesian mixture model is that it is appropriate for meta-analysis due to its ability to account for dependence among the genes and to pool the means or slopes (together with estimated standard errors) of dose-response curves into each cluster/component; it can thus summarize the concordance (intersection) among the tissues through the estimated posterior distributions as credible intervals. Therefore, our finite mixture model is used to calculate credible intervals for the differentially expressed genes for each tissue. 
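The Gibbs sampling scheme for a finite mixture can be sketched in toy form. The code below is not the paper's WinBUGS model: it assumes known unit variances, a Normal(0, 10^2) prior on the component means and a symmetric Dirichlet(1) prior on the weights, and alternates the three standard conditional draws (labels, weights, means):

```python
import math
import random

def gibbs_mixture(x, C=2, iters=200, seed=0):
    """Toy Gibbs sampler for a C-component normal mixture with known unit variance.
    Returns the sorted component means and the (unsorted) weights from the last sweep."""
    rng = random.Random(seed)
    mu = [rng.gauss(0.0, 1.0) for _ in range(C)]
    theta = [1.0 / C] * C
    z = [rng.randrange(C) for _ in x]
    for _ in range(iters):
        # 1) sample each latent label Z_i from its conditional categorical distribution
        for i, xi in enumerate(x):
            logp = [math.log(theta[j]) - 0.5 * (xi - mu[j]) ** 2 for j in range(C)]
            m = max(logp)
            w = [math.exp(lp - m) for lp in logp]
            u, acc = rng.random() * sum(w), 0.0
            for j in range(C):
                acc += w[j]
                if u <= acc:
                    z[i] = j
                    break
        # 2) sample weights: Dirichlet(1 + n_j) via normalized Gamma draws
        n = [z.count(j) for j in range(C)]
        g = [rng.gammavariate(1.0 + n[j], 1.0) for j in range(C)]
        total = sum(g)
        theta = [gj / total for gj in g]
        # 3) sample each mean from its conjugate normal posterior (prior variance 100)
        for j in range(C):
            sx = sum(xi for xi, zi in zip(x, z) if zi == j)
            post_var = 1.0 / (n[j] + 0.01)
            mu[j] = rng.gauss(post_var * sx, math.sqrt(post_var))
    return sorted(mu), theta

# Two well-separated hypothetical clusters around 0 and 5:
means, weights = gibbs_mixture([0.1, -0.2, 0.0, 0.3, 5.1, 4.9, 5.2, 4.8])
```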
The gene expression measures among tissues are different from one another. However, we have normalized and standardized the measurements to allow comparison between tissues. Furthermore, our meta-analysis of multiple tissue data is not simply based on combining gene expression measures across three separate studies, but on combining summaries, such as the estimated posterior distributions as credible intervals from our hierarchical mixture models. Conlon et al. showed that the probability integration model identified more true discovered genes and fewer true omitted genes than combining expression measures. Therefore, the hierarchical Bayesian mixture model is conducted for each tissue separately first. This indicates that the random samples are from three separate distributions, not a common population distribution, which is more accurate for approximating and estimating the parameters of the tissues. Then, instead of combining expression measures by including a separate parameter to model the inter-tissue variability for meta-analysis, we compared the resulting estimates of 95% credible intervals from our hierarchical Bayesian model, similar to Conlon et al. The Bayesian categorical model provided earlier was applied to estimate the proportions of the 'call' for pre-screening genes. After 1000 burn-in iterations of the MCMC algorithm, the estimated proportion for each 'call' category pi fluctuated around a value and the variations were stable and tiny, which showed convergence. The densities of the parameters are approximately normally distributed, which shows the appropriateness of the prior assumptions of the model. We then conducted another 1000 iterations to estimate the proportions of the 'call' of the three categories, Present (P), Marginal (M), or Absent (A); Table and Fig. report the results for the kidney data. All the pi are estimated with very small standard deviations, which shows the appropriateness of the model and of the estimates under the assumptions. Thus, we conducted the filtering step according to this result. All genes having a 'call' of A greater than 0.2526*n were excluded from our study, as well as genes having a 'call' of P less than 0.7224*n (n = 52). 2430 genes from the kidney data satisfied the above criteria and were retained in our study. The Hierarchical Bayesian Mixture (HBM) model was further applied for the identification of differentially expressed time-related genes and dynamic clusters. We allowed the number of components/clusters in the HBM to vary from 5 to 35. Based on DIC and convergence of MCMC, the best model had 15 components; this number of clusters showed the best convergence among all the models together with the smallest DIC value. We again conducted 1000 burn-in iterations via the MCMC algorithm, and the estimates converged well. We then processed another 1000 iterations to obtain the estimates of the parameters of the HBM (Fig. ). Similarly to the kidney data, we applied the Bayesian categorical model for estimating the proportions of the 'call' for pre-screening genes for both muscle and liver. For muscle and liver, \u03c72 = 12.814, df = 6, P = 0.0461, which indicates marginal evidence of association. The number of common expressed genes is relatively small. These findings can be further reevaluated by relaxing the stringent family-wise error rate to the false discovery rate. Results show that the up/down/no regulations of these common genes for the compared tissues at different time points are not associated and can be significantly different. To further examine the congruency and discrepancy between (1) kidney and muscle data, (2) kidney and liver data, and (3) muscle and liver data regarding the up/down/no regulation at given time points, the Cochran-Mantel-Haenszel test was applied; results are reported in Tables . As discussed earlier, we varied the number of mixture components from 5 to 35 for the three tissue data sets. Results show that 11-15 components provided the most precise and reliable results and also the smallest DIC. 
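Model choice by DIC, as used above, combines the posterior mean deviance with the effective number of parameters pD. A minimal sketch with made-up deviance traces for two candidate models:

```python
def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance information criterion: DIC = Dbar + pD, where pD = Dbar - D(theta_bar)
    and Dbar is the mean deviance over the MCMC draws."""
    dbar = sum(deviance_samples) / len(deviance_samples)
    pd = dbar - deviance_at_posterior_mean
    return dbar + pd, pd

# Hypothetical deviance traces for two candidate models; the smaller DIC wins.
dic_a, pd_a = dic([210.0, 212.0, 208.0, 214.0], 205.0)
dic_b, pd_b = dic([230.0, 231.0, 229.0, 230.0], 228.0)
print(dic_a, dic_b)  # 217.0 232.0
```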
For further comparisons and meta-analysis, we used 15 as the number of components for the final models for all three tissue sets. The gene expressions among tissues were different from one another, which violates the criteria of meta-analysis using pooled data. We therefore analyzed the data tissue by tissue using our proposed HBM and then compared the credible intervals of each gene's expression at 2 hrs and 6 hrs among the 6 common genes. Various prior distributions, hyper-priors and initial values were tested for sensitivity analysis and to ensure the convergence of MCMC. The prior models were generated from normal distributions with mean zero; the variances of the normal distributions were equivalent to that of the biological data. The hyper-prior distributions for the means of the mixture model, \u03bcj, followed conjugate normal priors \u03bcj ~ Normal, j = 1,...,C. The parameters of the prior on \u03bcj were chosen from various values to give broad distributions, for instance \u03b3j0 ~ Normal with \u03c32 ~ InverseGamma. Since we had no information on the precision (the inverse of \u03c3), small precisions were tested, such as 0.001, 0.01, 0.1 and 1, in order for the models to converge. For instance, when assigning uninformative inverse gamma distributions to these parameters, the models did not converge, but Inverse Gamma(0.1, 0.1) converged. Here, \u03b3j0 is an initial guess of the mean in cluster j, with \u03baj0 a prior sample size reflecting the strength of belief in the guess about the mean. The precision parameter in cluster j is given by \u03c4j = 1/\u03c3j2 with \u03c3j2 ~ InverseGamma, where vj is the guess of the prior degrees of freedom, typically vj = 2 or lower. Sensitivity analysis was also conducted by varying the prior distributions; we considered several prior models with various hyper-priors and initial values. 
We first generated the prior models for the means among groups in the mixture model and calculated Pearson correlation coefficients for pair-wise clusters given the above model (Fig. ). In this paper, we present methodology for identifying genes that are differentially expressed over time and for identifying common profiles across different tissue types, in order to address the inherent dependence between data observations when samples are collected in a time-ordered sequence, and also to increase the power of the analysis. We have presented both the Bayesian categorical model for estimating the proportions of the 'call', used for pre-screening genes, and the hierarchical Bayesian mixture model for identifying temporally differentially expressed genes and dynamic patterns. There are several advantages of the hierarchical Bayesian mixture model. First, the model clusters gene expressions into up-regulation groups, down-regulation groups and no-regulation groups according to the mean expression level and the credible interval of the posterior distributions of the gene expressions obtained for each mixture cluster. It provides probabilistic clustering in terms of the estimated posterior probabilities of component membership, which include the partitioning of the genes into C non-overlapping clusters, determined by the genes' highest estimated posterior probabilities of group membership. Second, it provides uncertainty estimates of all the parameters through more accurate posterior intervals of differentially expressed genes, versus less informative point estimates of p-values (hypothesis testing), which formed our basis for making inferences about a sub-group of time-related genes being differentially expressed (up- or down-regulated). 
Recall that providing a gene-based 95% credible interval is equivalent to controlling the family-wise error rate at significance level 0.05, which can easily be modified into controlling the false discovery rate to achieve less stringent results. Third, it provides the posterior distribution of the clustered data, as opposed to models relying on standard normal distribution assumptions that may not be valid, or on the empirical distribution from bootstrap or other resampling methods, which may have poor small-sample properties. Our model produces the clusters of non-differentially expressed genes and up/down-regulated genes, as well as low-variation clusters and outliers. The model clusters the outliers and noise into one or several groups (clusters), which may make the preprocessing step of excluding outliers before data analysis unnecessary in the future, although we have preprocessed our data in this paper for computational efficiency. Also, this model did not exclude the genes with expression profiles that had very low variation. These 'low-variation' genes may not provide any additional valid information about the time course of gene expression changes due to the drug effects, but they may be important for association with certain pathways, and their tiny fluctuations may reflect informative biological distinctions. We have observed that our Bayesian estimation methods can deal with complex clustering situations and identify clusters of irregular shape, unequal size, or different dispersion. Furthermore, our developed model can deal with irregularly spaced time intervals and provides solutions both for the identification of differentially expressed time-related genes and for dynamic clustering. 
One important feature of our developed finite mixture model is that it is appropriate for further comparison and meta-analysis due to its ability to account for dependence among the genes; it can thus summarize the concordance (intersection) among the tissues through the estimated posterior distributions as credible intervals. The resulting small standard deviations illustrate the precision of the resulting estimates. Our hierarchical Bayesian model can be generalized and applied to any other time course microarray experimental data, since it is based on the dynamic patterns over all time points, similar to other functional data analysis approaches. Last, but more importantly, recall that our model is hierarchical: by selecting different priors and hyper-priors we can also achieve shrinkage and automatic selection effects, to further produce credible intervals and posterior probabilities very close to 0 or 1. In this way no p-values, type I error control, or multiple-comparison corrections are needed. This is one of the major advantages of our hierarchical mixture models. Some recent theoretical work has shown that there are similarities/equivalences between building hierarchical Bayesian models with automatic shrinkage effects and designing optimization functions with L1, L2 or combined L1-L2 norms in statistical learning models, in order to achieve automatic selection effects and avoid multiple-testing issues. Both authors were involved in design, acquisition of data, analysis, interpretation of data, and manuscript preparation. Both authors have given final approval of the version to be published. Additional file: WinBUGS code (use WinBUGS software to view)."}
+{"text": "To predict gene expression is an important endeavour within computational systems biology. It can be a way to explore how drugs affect the system, as well as provide a framework for finding which genes are interrelated in a certain process. A practical problem, however, is how to assess and discriminate among the various algorithms which have been developed for this purpose. Therefore, the DREAM project issued, in 2008, a challenge for predicting gene expression values, and here we present the algorithm with the best performance. We developed an algorithm by exploring various regression schemes with different model selection procedures. It turns out that the most effective scheme is based on least squares, with a penalty term of a recently developed form called the \u201celastic net\u201d. Key components of the algorithm are the integration of expression data from experimental conditions other than those presented for the challenge, and the utilization of transcription factor binding data for guiding the inference process towards known interactions. Of importance is also a cross-validation procedure where each form of external data is used only to the extent that it increases the expected performance. Our algorithm proves both the possibility of extracting information from large-scale expression data for predicting gene levels, and the benefits of integrating different data sources to improve the inference. We believe the former is an important message to those still hesitating about the possibilities of computational approaches, while the latter is part of an important way forward for the future development of the field of computational systems biology. In the challenge, time-series expression data from Saccharomyces cerevisiae were presented. 
One was also allowed to utilize any public data available. The massive growth of high-throughput data within molecular biology during the last decade has sparked an interest in systems biology and generated a great variety of suggestions on how to infer knowledge from these data sets. That is, whether the data belong to the genomics, transcriptomics, proteomics or metabolomics domain, they still need to be structured before one can learn anything from them. Here, networks have proved to be a unifying language for different biological systems involving genes, proteins, metabolites and also small molecules. These networks, defined by protein-protein, protein-to-gene, metabolic interactions etc., determine cellular responses to input signals and govern cellular dynamics. Integration of data, which this challenge implicitly called upon, has been the subject of much attention recently; see for example the review by Hecker et al. The challenge for predicting gene expression provided by the DREAM project is of great importance for exploring the benefits and bottlenecks of the state-of-the-art algorithms in a fair competition. It represents a solution to the non-trivial problem of designing relevant challenges which at the same time address biologically and computationally interesting problems. From the DREAM web-site [wiki.c2b2.columbia.edu/dream/index.php/The_DREAM_Project, accessed October 10, 2008] we quote for the gene expression prediction challenge within DREAM3, concerning yeast (S. cerevisiae) after perturbation of the cells: 
The challenge is to predict the rank order of induction/repression of a small subset of genes in four strains: (i) a strain that is wild-type for all three transcription factors, (ii) a strain that is identical to the parental strain except that it has a deletion of the GAT1 gene (gat1\u0394), (iii) a strain that is identical to the parental strain except that it has a deletion of the GCN4 gene (gcn4\u0394), and (iv) a strain that is identical to the parental strain except that it has a deletion of the LEU3 gene (leu3\u0394). Expression levels were assayed separately in all four strains following the addition of 3-aminotriazole (3AT). 3AT is an inhibitor of an enzyme in the histidine biosynthesis pathway and, in the appropriate media (which is the case in these experiments), inhibition of the histidine biosynthetic pathway has the effect of starving the cells of this essential amino acid. Data from eight time points were obtained from 0 to 120 minutes; time t\u200a=\u200a0 means the absence of 3AT. The challenge: predict, for a set of 50 genes, the expression levels in the gat1\u0394 strain in the absence of 3-aminotriazole (t\u200a=\u200a0) and at 7 time points following the addition of 3AT. Absolute expression levels are not required or desired; instead, the fifty genes should be ranked according to induction or repression relative to the expression levels observed in the wild-type parental strain in the absence of 3AT. This challenge is biologically relevant, and the fact that a gold standard exists but is hidden makes the challenge objective and fair. Further, the probe names were given, which allows for data integration of publicly available experiments and a priori knowledge, making the challenge even more realistic in describing a situation which can occur in one's laboratory. 
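The required output format can be illustrated with a small sketch: rank genes by their log-ratio relative to the wild-type parental strain at t = 0. The gene names and values below are invented, not challenge data:

```python
import math

def rank_by_relative_change(expr, wt_baseline):
    """Rank 1 = most induced relative to the wild-type baseline,
    matching the challenge's relative-ranking format."""
    log_ratio = {g: math.log2(expr[g] / wt_baseline[g]) for g in expr}
    ordered = sorted(log_ratio, key=log_ratio.get, reverse=True)
    return {g: i + 1 for i, g in enumerate(ordered)}

# Hypothetical expression values for three genes at one time point:
mutant = {"HIS4": 8.0, "LEU1": 2.0, "ACT1": 4.0}
wt_t0  = {"HIS4": 2.0, "LEU1": 4.0, "ACT1": 4.0}
ranks = rank_by_relative_change(mutant, wt_t0)
# HIS4 induced 4-fold -> rank 1; ACT1 unchanged -> rank 2; LEU1 repressed -> rank 3
```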
However, the problem is somewhat different from the normal setting in systems biology, where the aim is not only to predict future experiments but also to obtain interpretable models from which we can gain an increased biological understanding. The goal of the DREAM challenge was to predict the order of the chosen 50 genes within the gat1\u0394 strain for the eight time points at which they were measured. All details about the algorithm we utilized, and how it was developed, can be found below. An observation when comparing with the training results is that the submitted prediction for the gat1\u0394 strain correlates less well with the gold standard than each of the series explored during the development of the algorithm. Neil Clarke points out in his referee report (published on-line accompanying this article) that he picked some of the genes to be predicted because of their surprising or non-trivial expression pattern in the gat1\u0394 strain. This fact, combined with the general observation that cross-validation often underestimates the error, may explain the difference. The results can be considered per time point or, alternatively, as the obtained rank correlations with the gold standard per gene. The importance of challenges such as DREAM lies to a large extent in their objectiveness. When an inference algorithm comes from the same laboratory as the one which has performed the assessment experiments, sometimes even in the same article, it is likely that the algorithm has been tuned to fit the expected outcome. This is most probably often overfitting, which then decreases performance on other data sets. Also, the value of this procedure as an assessment is questionable, since testing only a few of the predictions of the algorithm has a clear anecdotal flavor, especially when the researchers can choose by themselves which parts should be presented. 
By contrast, the DREAM challenges provide the community with workbenches where all are welcome to submit the predictions of their algorithms, and thereby get the opportunity to assess and compare them with the performance of others. No one knows the gold standard beforehand, and even if the evaluation data are limited, they are well defined, and no fine-tuning can be carried out. This makes a huge difference compared with the case mentioned above, where the same laboratory both performs the experiments and presents inference algorithms with alleged generalizability. However, this appreciated objectiveness and fairness of DREAM of course holds true only as long as the gold standard is hidden. As soon as it is revealed, one can start improving one's algorithm to better fit the expected outcome, but at the same time one takes the risk of exposing it to overfitting. Any \u201cimprovements\u201d at this stage must be very well motivated in order to make any sense at all. For example, for our algorithm, we could consider the possibility of using local fitting parameters instead of a global one for the prior, or of further pruning the model by choosing parameter values not at the cross-validation minimum but one standard deviation below, etc. For the reasons above, we refrain from such actions, though, and instead look forward to the next round of DREAM. The algorithm presented here represents one efficient way of predicting rankings of expression values. A key component in the development of the algorithm has been the inclusion of measurements not directly associated with the experimental condition for which the expression values should be predicted. Whether this inclusion has been of more expression data or of prior knowledge of TF-DNA bindings, a cross-validation scheme has helped us not to rely more on these measurements than the original data allow. This is denoted \u201csoft integration\u201d and forms a cornerstone of our work. 
The success of the algorithm clearly shows that prediction of expression levels is a possible task, even when the number of genes in the system exceeds the number of experiments 100-fold. Surprisingly, the inclusion of a priori knowledge of TF-DNA bindings did not improve the performance of the algorithm substantially. The reason for this needs more research to find out, since the quality of this kind of data is generally believed to be reasonably high. A hypothesis is that our choice to have just one global parameter for this prior limited its effect. (A comparison with k-nearest neighbors, KNN, was made after the gold standard was revealed.) We used a cross-validation scheme, considering the similarities per experiment, to discriminate among models. We hold one of the three time-series provided by DREAM out from the inference, and utilize the other two, and occasionally also other data sets, for finding the searched parameters. We then use data from the left-out strain to predict the expression values of the 50 searched genes for each time point in this series, rank them according to the predicted levels such that the highest expressed gene obtains rank number one, the second highest rank number two, etc., and calculate the Spearman rank correlation with the observed ranking of the same series. This is repeated three times, holding each of the provided time-series out one at a time. We end up with 24 different ranking lists for the 50 genes. For each target gene i, we have 15,509 free parameters to determine from 16 linear equations (one per time point). We explore three scenarios, detailed below. Before we start exploring various versions of the penalty term in (3), we try to simplify the model (2). The strategy is to primarily work with the DREAM data, in order to reduce the model. When this first reduction is obtained, we will utilize also other publicly available data in order to further strengthen the predictive power of our mathematical model. 
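The hold-out scheme above can be sketched as follows; the `fit`/`predict` stand-ins and the tiny series are hypothetical placeholders for the actual regression model:

```python
def spearman_rho(pred, obs):
    """Spearman rank correlation via the classical d^2 formula
    (distinct values assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(pred), ranks(obs)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def hold_out_scores(series, fit, predict):
    """Leave-one-series-out: fit on the other series, then score the
    predicted ranking of the held-out one at every time point."""
    scores = {}
    for name, held_out in series.items():
        train = {k: v for k, v in series.items() if k != name}
        model = fit(train)
        scores[name] = [spearman_rho(predict(model, t), observed)
                        for t, observed in enumerate(held_out)]
    return scores

# Hypothetical data: three strains, two time points, three genes.
series = {
    "wt":   [[2.0, 1.0, 3.0], [2.2, 1.1, 3.1]],
    "gcn4": [[2.1, 0.9, 3.2], [2.0, 1.2, 3.3]],
    "leu3": [[1.9, 1.0, 2.9], [2.1, 1.0, 3.0]],
}

# Stand-in model: average the first time point of the training series
# and predict that profile for every time point.
def fit(train):
    return [sum(s[0][g] for s in train.values()) / len(train) for g in range(3)]

def predict(model, t):
    return model

scores = hold_out_scores(series, fit, predict)
```

In the real setting, the scores over held-out series and time points are what decide which external data sources and penalty settings to keep.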
This first model selection is performed among the models with perfect fits, i.e., the ones where the terms of the first sum of absolute values in (3) are all zero. The scenarios are: (i) only expression values are included; (ii) both expression values and expression rates (derivatives) are included; (iii) only expression rates (derivatives) are included. For each scenario we pick the solution with zero value for the objective function and choose the coefficients accordingly; what then remains is the explicit form and value of the penalty term. We see that the highest values for the correlations are obtained when we only include the expression levels. Inclusion of the expression rates makes the result slightly worse, except for least squares, where the correlations are equal. However, with the same predictive power, we apply Occam's razor and prefer the simpler model. Using only the rates gives the least satisfying result of all. Therefore, in the sequel, we choose to discard all derivative terms and determine the parameters accordingly. The penalty term can take many different forms. A review, for least squares, of more classical forms such as Mallows' Cp, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), Minimum Description Length (MDL) etc. in the context of gene networks can be found in the literature. The elastic net penalty is a convex combination of the L1 and L2 norms, a compromise between the two goals of a good solution: to have good predictive power and to be interpretable. One way to improve the performance of the algorithm is to include more data. This is a challenging problem which is crucial for all kinds of large-scale inference problems. A straightforward way to include other types of expression sets is to extend the sum of squares in (4) over more data sets. We integrate two collections of expression sets, which reduces the number of possible genes to use as explanatory variables further, down to 4140 genes. 
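As a minimal sketch of elastic-net penalized least squares (a pure-Python coordinate descent on a toy problem, not the authors' R/Matlab implementation):

```python
def soft_threshold(z, g):
    return z - g if z > g else z + g if z < -g else 0.0

def elastic_net(X, y, lam1, lam2, sweeps=300):
    """Coordinate descent for
       1/2 ||y - Xw||^2 + lam1 * ||w||_1 + lam2/2 * ||w||_2^2."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            w[j] = soft_threshold(rho, lam1) / (
                sum(X[i][j] ** 2 for i in range(n)) + lam2)
    return w

# Toy data: y depends only on the first feature (y = 2 * x0).
X = [[1.0, 1.0], [2.0, 0.0], [3.0, 1.0], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
w_ols = elastic_net(X, y, lam1=0.0, lam2=0.0)   # reduces to least squares
w_en  = elastic_net(X, y, lam1=0.5, lam2=0.5)   # L1 part zeroes w[1]
```

With `lam1 = lam2 = 0` this reduces to ordinary least squares; the L1 part of the penalty zeroes out the redundant coefficient, which is the sparsity/interpretability side of the compromise described above.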
The final collection comprises: a set of 515 steady-state profiles from a collection of gene knock-out experiments, and a set of 256 time-series profiles, comprising a collection of time-series experiments downloaded from the NCBI Gene Expression Omnibus. However, the experimental conditions can vary a lot, and most of them are probably distant from the conditions we actually are interested in. It is therefore likely that these profiles have less impact on the actual problem than the primary profiles presented for the challenge. We therefore introduce an extra coefficient that weights their contribution. We also integrate prior knowledge on whether gene j, in a sense to be specified below, is co-regulated with gene i. By utilizing the cross-validation scheme, we integrate this prior knowledge in a soft way, i.e., we bias the search towards known relations but also allow the possibility of novel links. Explicitly, expression (5) is modified by the introduction of parameters penalizing genes believed not to be co-regulated with gene i: a high value of the parameter signals no correspondence between gene j and gene i, and thus discourages gene j as a predictor for gene i. The TF-DNA binding data are taken from Yeastract; two genes are considered co-regulated when a common TF binds upstream of both gene j and gene i. The reason why we focus on co-regulation rather than regulatory interactions is that the inference is based on transcript levels, and TFs are known to be expressed at a low level. Also, their activity is often determined by phosphorylation and other effects rather than by their amount. To summarize the discussion above, the objective function combines the weighted sums of squares with the penalty term; effectively, for each target gene, the fit is biased by the prior knowledge. Finally, from a computational point of view, we remark that all implementations and calculations have been performed on an ordinary laptop in the languages R and Matlab. That is, the complexity of the problem is not worse than that it can be handled in any laboratory."}
+{"text": "Biomolecular networks dynamically respond to stimuli and implement cellular function. Understanding these dynamic changes is the key challenge for cell biologists. As biomolecular networks grow in size and complexity, the model of a biomolecular network must become more rigorous to keep track of all the components and their interactions. In general this presents the need for computer simulation to manipulate and understand the biomolecular network model. In this paper, we present a novel method to model the regulatory system which executes a cellular function and can be represented as a biomolecular network. Our method consists of two steps. First, a novel scale-free network clustering approach is applied to the large-scale biomolecular network to obtain various sub-networks. Second, a state-space model is generated for the sub-networks and simulated to predict their behavior in the cellular context. The modeling results represent hypotheses that are tested against high-throughput data sets (microarrays and/or genetic screens) for both the natural system and perturbations. Notably, the dynamic modeling component of this method depends on the automated network structure generation of the first component and the sub-network clustering, which are both essential to make the solution tractable. Experimental results on time series gene expression data for the human cell cycle indicate our approach is promising for sub-network mining and simulation from large-scale biomolecular networks. Understanding the biomolecular network implementing a cellular function goes beyond the old dogma of \"one gene: one function\": only through comprehensive system understanding can we predict the impact of genetic variation in the population, design effective disease therapeutics, and evaluate the potential side-effects of therapies. 
As biomolecular networks grow in size and complexity, the model of a biomolecular network must become more rigorous to keep track of all the components and their interactions. In general this presents the need for computer simulation to manipulate and understand the biomolecular network model. However, a major challenge of modeling the dynamics of a biomolecular network is that conventional methods based on physical and chemical principles require data that are difficult to accurately and consistently measure using either conventional or high-throughput technologies, which characteristically yield noisy, semi-quantitative, and often relative data. We are in the era of holistic biology. Massive amounts of biological data await interpretation. This calls for formal modeling and computational methods. In this paper, we present a hybrid approach that combines data mining and state-space modeling to build and analyze the biomolecular network of a cellular process. Our method consists of two steps. First, a novel scale-free network clustering approach is applied to the large-scale biomolecular network to obtain various sub-networks. Second, a state-space model is generated for the sub-networks and simulated to predict their behavior in the cellular context. It integrates the process of obtaining network structure directly with state-space dynamic simulation, robust to qualitative (molecular biology) and noisy quantitative data, to iteratively test and refine hypothetical biomolecular networks. In the following, we review some related work in community structure analysis and biomolecular network modeling. The study of community structure in a network is closely related to graph partitioning in graph theory and computer science. It also has close ties to hierarchical clustering in sociology. 
Recently, a variety of approaches to state models have been implemented for gene and protein networks, including, among others, hidden Markov models and Bayesian networks. To evaluate the accuracy and feasibility of state-space biomolecular network modeling, we considered the gene network corresponding to a sub-network found using SNBuilder, the clustering algorithm proposed in this paper (the sub-network is shown in the accompanying figure). In this paper, we employ the human cell cycle gene expression data to construct the model. As the human cell cycle gene expression data are very noisy, some data preprocessing techniques are applied to the log-ratio gene expression data. Firstly, a filter is applied to the gene expression profiles one by one: at a given time point, the new expression value is the average of the three raw values at the immediately preceding, current, and immediately following time points. As the mean values and magnitudes for genes and microarrays mainly reflect the experimental procedure, the profiles are standardized accordingly. The matrix C is calculated by equation (5), and the expression profiles of the internal state variables are calculated and collected in matrix Z by formula (8). The control matrix B is determined such that it maximizes the ratio of the squared sum of the elements of CB corresponding to non-zeros in S to that of the elements of CB corresponding to zeros in S. As the human cell cycle gene expression data are collected at equally spaced time points, the least squares method for the linear regression problem is applied to determine the elements of matrix A in model (1). This experiment treats the Thy-Thy3 data as the training dataset and the Thy-Noc data as the testing dataset. The eigenvalues of matrix A are calculated as: -0.0715, 0.2479, 0.9018, 0.6749 \u00b1 0.3959i, 0.8125 \u00b1 0.2924i, 1.0396 \u00b1 0.1536i. All eigenvalues of matrix A except the last pair lie inside the unit circle in the complex plane, and the last pair is very close to the boundary of the unit circle. This means that the inferred network is almost stable and robust. 
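The state-space simulation and the unit-circle stability check described here can be sketched with a hypothetical 2-dimensional internal state; the matrices below are illustrative, not the fitted cell cycle model:

```python
import cmath
import math

def eig2x2(M):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

# Toy internal-state transition: a damped rotation (hypothetical values).
r, th = 0.95, 0.6
A = [[r * math.cos(th), -r * math.sin(th)],
     [r * math.sin(th),  r * math.cos(th)]]
C = [[1.0, 0.0],   # observation matrix mapping 2 internal states
     [0.0, 1.0],   # to 3 observed gene expression levels
     [1.0, 1.0]]

# Noise-free simulation: z(t+1) = A z(t), x(t) = C z(t)
z = [1.0, 0.0]
trajectory = []
for _ in range(5):
    trajectory.append(matvec(C, z))
    z = matvec(A, z)

l1, l2 = eig2x2(A)
stable = all(abs(l) < 1 for l in (l1, l2))   # inside the unit circle
periodic = l1.imag != 0                      # complex pair -> oscillation
```

Eigenvalue moduli below one give a stable (decaying) network, while a dominant conjugate complex pair produces the periodic behavior discussed next.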
Furthermore, the dominant eigenvalues of the inferred network are a pair of conjugate complex numbers, 1.0396 \u00b1 0.1536i, which implies that the network behaves periodically. The vector x(t) = [x1(t) \u22ef xn(t)]T consists of the observation variables of the system, and xi(t) represents the expression level of gene i at time point t, where n is the number of genes in the network. The vector z(t) = [z1(t) \u22ef zp(t)]T consists of the internal state variables of the system, and zi(t) represents the expression value of internal element (variable) i at time point t, which directly regulates gene expression, where p is the number of the internal state variables. The vector u(t) = [u1(t) \u22ef ur(t)]T represents the external input (control variable) of the internal state governing equation. The matrix A = [aij]p\u00d7p is the time translation matrix of the internal state variables, or the state transition matrix. It provides key information on the influences of the internal variables on each other. The matrix B = [bik]p\u00d7r is the control matrix. The entries of the matrix reflect the strength of a control variable on an internal variable. The matrix C = [cik]n\u00d7p is the observation matrix which transfers the information from the internal state variables to the observation variables. The entries of the matrix encode information on the influences of the internal regulatory elements on the genes. Finally, the vectors n1(t) and n2(t) stand for system noise and observation noise. In model (1) the upper equation is called the internal state governing equation while the lower one is called the observation equation. In terms of linear system theory, equation (1) is a state-space model. Let X be the gene expression data matrix with n rows and m columns, where n and m are the numbers of the genes and the measuring time points, respectively. The construction of model (1) using microarray gene expression data X may be divided into three phases. 
Phase one identifies the internal state variables and their expression matrix, and estimates the elements of observation matrix C; phase two defines the control variables; and phase three estimates the elements of matrices A and B. The internal states are latent variables in gene regulatory networks. They could be any unobserved molecules in the cell which participate in the process of gene regulation. From the biological viewpoint, it is reasonable to assume that the latent variables are some regulatory factors (proteins) or missed genes. Many statistical methods have been developed for identifying such latent variables. Here X is the n\u00d7m observation data matrix, each column of which is viewed as an observation sample; C is the n\u00d7p transformation matrix; z represents the expression profile of an internal state; and N is the n\u00d7m noise matrix consisting of m n-dimensional observation noise vectors. Assume that the sample mean is shifted to zero. The log-likelihood of the PPCA model is expressed in terms of D = CCT + \u03c32I, where \u03c32 is the variance of the observation noise, and S = X * X'/m is the sample covariance matrix. For a given number of internal variables, p, the global maximum log-likelihood of the PPCA model is calculated by equation (4), where \u03bbj are the first p largest eigenvalues of the sample covariance matrix S and Up is an n\u00d7p matrix, each column of which is a corresponding eigenvector of S. From Equation (4), the value of the maximum log-likelihood for the PPCA model increases with the number of internal state variables, p. Redundant internal state variables may result in a complicated model. Since PPCA has a solid probabilistic foundation, some statistics-based criteria such as BIC and AIC can be used to determine the number of internal state variables, where vp (=np+1) is the number of parameters (elements of matrix C) in the PPCA model. 
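The eigenvalue form of the maximum log-likelihood and the AIC-based choice of p can be sketched as follows, assuming the standard Tipping-Bishop expression for the PPCA likelihood and a hypothetical eigenvalue spectrum:

```python
import math

def ppca_max_loglik(eigvals, p, m):
    """Maximum PPCA log-likelihood given the sorted eigenvalues of the
    n x n sample covariance S and m samples (Tipping & Bishop form)."""
    n = len(eigvals)
    sigma2 = sum(eigvals[p:]) / (n - p)      # ML estimate of noise variance
    return -(m / 2.0) * (n * math.log(2 * math.pi)
                         + sum(math.log(l) for l in eigvals[:p])
                         + (n - p) * math.log(sigma2)
                         + n)

def select_p(eigvals, m):
    """Pick p by the simplified AIC logL(p) - v_p with v_p = n*p + 1;
    the model with the largest value wins."""
    n = len(eigvals)
    scores = {p: ppca_max_loglik(eigvals, p, m) - (n * p + 1)
              for p in range(1, n)}
    return max(scores, key=scores.get)

# Hypothetical spectrum with two dominant directions:
eigs = [6.0, 3.0, 0.2, 0.15, 0.1]
p_star = select_p(eigs, m=20)   # the penalized score peaks at p = 2
```

Note how the raw likelihood rises monotonically with p, as stated above, while the parameter penalty makes the score peak at the effective latent dimension.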
Since the term nm(log(2\u03c0)+1)/2 in Equation (6) is a constant for a given dataset, the calculation of the AIC can be simplified; by this definition, the model with the largest AIC is chosen. After the transformation matrix C is determined, the expression profiles of the internal variables, accumulated in matrix Z, can be calculated by formula (8). In state-space model (1), the control variables together with the current internal states determine the next internal states. From the viewpoint of biology, the overall expression level of all genes in the network affects the internal (hidden) variables. We therefore take u(t) = x(t) as the input of the internal state equation, and from model (1) it follows that the regulatory relationships among genes are quantitatively described through the matrix CB. On the other hand, using the algorithm SNBuilder, the sub-network can be represented by a graph, and the structure of matrix CB should be the same as that of matrix S: the (i, j)-th element of CB is nonzero (or zero) if the (i, j)-th element of S is nonzero (or zero). Considering that in reality the weak connections among genes may be ignored in the structure of the network, we reformulate the problem as follows: find a matrix B such that the squared sum over the elements of CB corresponding to non-zeros in S is much larger than that over the other elements. It is nontrivial to find such a control matrix B. With the calculated control matrix B and the profiles of the internal variables Z, one can estimate the parameters of the state transition matrix A in the internal state governing equation by minimizing the system noise n1(t). 
This is equivalent to minimizing a cost function in which the time-variant vector v(t) has the same dimensions as the internal state vector z(t + 1) and is calculated by a difference equation with the initial state value v(0) = z(t0) and the control values u(0),\u22ef,u(t). The matrix A can then be found using an optimization technique such as those in Chapter 10 of Press's text. A contains p\u00b2 unknown elements, while the matrix Z contains m\u00b7p known expression data points.

The transcriptome and the proteome have long been compared to gain insights on RNA turnover. In our study, a steadily low protein level was observed for cell cycle genes in general, highlighting further the discrepancy between the transcriptomic and proteomic levels of many of the genes that contained the motifs. We believe that the shapes of the time-course profiles of genes containing the RNA motifs can provide useful information regarding the pattern of their decay in G2-phase. We used standard statistical measures such as skewness and kurtosis of a time course to describe these shapes. For S. pombe, we took a 3-step approach. First, using genome-wide expression data from multiple experiments, we reconstructed a gene regulatory network based on 531 downstream target genes of 36 transcription factors that were identified to have strongly periodic activity during the cell cycle at the onset of M phase. Enriched binding motifs included forkhead (FoxF2, P < 10-10) and HMG box sites; many of these promoters actually contain both types of binding sites, in opposing-strand orientation, potentially indicative of combinatorial regulation. In S. cerevisiae, clb2 is regulated with the help of a chromatin-remodeling ATPase that re-positions nucleosomes in the clb2 promoter. An ideal window for exploring an interesting phase-specific sub-network is the early M phase, which is the onset of intense regulatory activity involved in mitosis. 
The regulation by multiple forkhead TFs of different pathways leading to mitosis is well studied. Intriguingly, no systematic study of HMG-box TFs in fission yeast is known, although it is the only TF family that has more regulatory members (60% more) present in S. pombe than in S. cerevisiae. SPBC19G7.04 is linked with well-known periodically expressed mitosis and cell division proteins such as klp6 and klp8 (kinesin microtubule motor proteins required for chromosome segregation), some of which also contained both forkhead and HMG-1 motifs in their upstream regulatory sequences (data not shown) and are potential candidates of associated regulation. While pointing out the gap in the S. pombe regulatory cascade as compared to S. cerevisiae, it was suggested that post-transcriptional regulation might play a major role in the much longer G2 phase of S. pombe. In S. cerevisiae, genes encoding factors involved in ribosomal RNA (rRNA) synthesis and ribosome assembly were among those having the least stable transcripts. Given this, it is plausible that similar regulation occurs in S. pombe. Transcript stability is often regulated by specific interactions between cis-elements in the 3' untranslated region (UTR) of mRNAs and hundreds of different RNA binding proteins (RBPs) in the cell. Using the S. cerevisiae orthologs of the 65 genes, we examined evidence for post-transcriptional regulation. Several of these genes (lcp5, rrp5, rrp9, utp14, esf2, utp6, utp16) encode small nucleolar U3 RNP (ribonucleoprotein) associated proteins that form parts of a complex involved in rRNA processing and ribosome biogenesis. Statistical analysis of the time course profiles of the 65 ribosome biogenesis genes containing the motifs indicated low RNA turnover. 
Despite the general pattern of their expression -- peaking early in G2 followed by fast decay by mid G2 -- there were exceptions. The cell cycle synchronization methods used in the expression data sets were elutriation (Elu) and Cdc25 block-release (Cdc25), and the experiments are referred to as Peng Cdc25 and Peng Elu; Oliva Elut1 & 2 and Oliva Cdc25; Rustici Cdc25 1 & 2 and Elu 1, 2 & 3. See the previously published work for more details. Expression data for S. pombe were taken from previously published sources, as were annotation and sequence data. Settings of \u03bb = 0.0 and threshold = 1 \u00d7 10-3, which controlled the sparseness and the complexity of the network respectively, were used to reconstruct the \"consistent\" interactions of the 36 TFs with significant activities and the 531 regulatory targets of the TFs, based on the five smaller Rustici time course experiments. The contribution of each experiment to the reconstruction of the gene network was weighted by the average signal to noise ratio. The time lag correlation between each TF and gene was computed using Spearman's rank correlation. Let \u03c6g denote the estimated phase angle of a gene and let T (in minutes) denote the estimated cell cycle period of a particular experiment. For the jth TF with intensity values f(xi), i = 1,2,..., m + 1, and a gene g with expression values yg, the most significant Spearman's rank correlation coefficient rjg is determined between f(xi), i = 1,2,..., m + 1 and the sub-vector of the full time course of the gene g, where m is the index of the time point corresponding to the first full period of the cell cycle for the gth gene, over values of the offset k, an integer in the \"inclusive\" range {-3,...,3}, to account for uncertainty in the determination of the true lag. The significance of each rjg is determined with a two-sided p-value based on 100,000 random permutations corresponding to the null hypothesis that the TF-gene pair is uncorrelated. For S. 
pombe, genes with known peak expression at phase transition points in the cell cycle (such as the protein-serine/threonine kinase hsk1, required for S phase initiation) were used as demarcations to assess the phase separation between the expression of a TF and the response of a gene. A pair was filtered out if its phase separation interval contained demarcation genes from multiple transition points or exceeded 120 degrees; if so, the p-value of that correlation was set to 1. This phase-specificity imposed a biological constraint on the TF-gene pairs, limiting the extent of a TF's regulatory influence based on its phase information. This was the first of several criteria applied to filter out spurious and non-significant TF-gene correlations. Second, within each of the ten experiments the p-value for every pair was subjected to multiple hypothesis testing with a q-value threshold of 0.05. Only a TF-gene pair with significant correlations in at least a third of the nine experiments qualified for NCA based on data from the tenth. Assuming a Binomial distribution model, the probability that a TF-gene correlation was significant in k \u2265 3 of the n = 9 experiments by chance alone is 0.0023. The Peng Cdc25 and Elu data sets were individually held out of the correlation analysis and reserved for separate runs of NCA. Phase coherence among multiple experiments: here we are not introducing any new methodology but describing previously published work. Given k cell cycle experiments, one can perform k pairs of circular regressions to align all the circles; using this principle, a methodology was developed previously, which we follow. To infer transcription factor activities (TFAs), we used the Network Component Analysis (NCA) program, with the interactions of the ith gene and jth TF input as a connectivity matrix; we used only the significant TF-gene correlations from the time lag correlation analysis for this purpose. 
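The lag-scanning correlation and the chance-occurrence filter can be sketched as below; the profiles are hypothetical, and the binomial tail is computed generically, since its value depends on the per-experiment significance rate assumed:

```python
from math import comb

def _ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation (distinct values assumed)."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(_ranks(x), _ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def best_lag_spearman(tf, gene, offsets=range(-3, 4)):
    """Scan offsets k in {-3,...,3}, pairing tf[t] with gene[t+k],
    and keep the strongest rank correlation (the lag-uncertainty scan)."""
    n = len(tf)
    best_r, best_k = 0.0, 0
    for k in offsets:
        if k >= 0:
            x, y = tf[:n - k], gene[k:]
        else:
            x, y = tf[-k:], gene[:k]
        r = spearman(x, y)
        if abs(r) > abs(best_r):
            best_r, best_k = r, k
    return best_r, best_k

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of reaching
    significance in at least k of n independent experiments."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical profiles: the gene echoes the TF two time points later.
tf_profile   = [1, 2, 4, 8, 7, 5, 3, 0]
gene_profile = [9, 6, 1, 2, 4, 8, 7, 5]
r, k = best_lag_spearman(tf_profile, gene_profile)   # strongest at k = 2
chance = binom_tail(9, 3, 0.05)   # >=3 of 9 at an assumed 0.05 rate
```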
In the connectivity matrix, +1 denotes activation, -1 inhibition and 0 no interaction. The Peng Cdc25 and Elu time course experiments were individually used as the gene expression data input to NCA. Briefly, NCA models gene expression as a function of TFA and the corresponding control strengths (CS) of a transcription factor target interaction. In log-log form the model is written in matrix notation as E = AP + \u0393, where Eij = log(gi(tj)/gi(t0)), gi(tj) is the expression of the i-th gene evaluated at time tj, A is the regulatory network of control strengths, where Aij denotes the control strength of transcription factor j on gene i (Aij = CSij), Pij = log(TFAi(tj)/TFAi(t0)) with TFAi(tj) being the i-th transcription factor activity evaluated at time tj, and \u0393 represents the noise from the DNA microarray experiment. The dimensions of E, A and P are (N \u00d7 M), (N \u00d7 L) and (L \u00d7 M), respectively, where N is the number of genes in the network, M is the number of data points or experiments conducted, and L is the number of transcription factors used in the analysis. TFAi(tj) and CSij are estimated using the regulation (indicator) matrix (equation 3), the gene expression data and the Expectation-Maximization (EM) algorithm with an epsilon = 1 \u00d7 10-6 for convergence. Using the Peng Cdc25 data, 39 TFs with TFAs on 784 targets were obtained, and using the Peng Elu data, 47 TFs with TFAs on 894 targets were obtained. The intersection of the results from the two NCAs led to the identification of 36 TFs with TFAs. Genes were called significant at level 0.05 for the null hypothesis that a gene's phases are uniformly distributed across the 10 experiments; the count increased to more than 2,100 genes when the experiment in which a gene deviated most from its median phase was excluded. Significant motifs were output for each tight cluster; no motif was output for a diffuse cluster.
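The decomposition E = AP, with the zero pattern of A fixed by the connectivity matrix, can be sketched with a simple alternating least squares stand-in. This is illustrative only: the published NCA uses an EM algorithm with a convergence epsilon of 1e-6, and the function name and toy dimensions here are ours.

```python
import numpy as np

def nca_als(E, Z, n_iter=200):
    """NCA-style decomposition E ~ A @ P where the zero pattern of the
    control-strength matrix A is fixed by the connectivity matrix Z
    (nonzero = regulation, 0 = no interaction). Alternating least
    squares as a stand-in for the published EM estimation."""
    N = E.shape[0]
    L = Z.shape[1]
    A = Z.astype(float)
    P = np.zeros((L, E.shape[1]))
    for _ in range(n_iter):
        # update TFA profiles P given A (ordinary least squares)
        P = np.linalg.lstsq(A, E, rcond=None)[0]
        # update each gene's control strengths on its own regulators only
        for i in range(N):
            idx = np.nonzero(Z[i])[0]
            if idx.size:
                A[i, idx] = np.linalg.lstsq(P[idx].T, E[i], rcond=None)[0]
    return A, P
```

Because only the entries of A allowed by Z are ever updated, the inferred network keeps the sparsity pattern of the input connectivity matrix.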
A motif clustering program was used to filter the redundant motifs. Periodicity of the log10(TFA) profiles was determined with Fisher's g-statistic. Additional files: (1) A dot text file containing the interactions between regulatory targets; the network can be opened and viewed with the GVedit function of Graphviz. (2) M and G1 phase regulatory modules: an XML file containing the S. pombe M and G1 cell cycle phase regulatory modules; the data can be opened and viewed with the GeneXPress software. (3) Supplemental materials containing the Bayesian co-clustering algorithm and supplemental figures and tables. (4) Cluster assignments: the assignment of each of the top 2000 S. pombe genes (those with highly consistent cell cycle phase characteristics) to a cluster using the Bayesian co-clustering algorithm. (5) The likelihood scores for the occurrence of RNA motifs in the 65 co-regulated genes from clusters #25 and #26. (6) The top 2000 S. pombe genes with the lowest circular variance. (7) TFA for the 36 TFs generated by NCA using the Peng Elu gene expression time course data."}
+{"text": "The central role of transcription factors (TFs) in higher eukaryotes has led to much interest in deciphering transcriptional regulatory interactions. Even in the best case, experimental identification of TF target genes is error prone, and has been shown to be improved by considering additional forms of evidence such as expression data. Previous expression based methods have not explicitly tried to associate TFs with their targets and therefore largely ignored the treatment specific and time dependent nature of transcription regulation. In this study we introduce CERMT, Covariance based Extraction of Regulatory targets using Multiple Time series. Using simulated and real data we show that using multiple expression time series, selecting treatments in which the TF responds, allowing time shifts between TFs and their targets and using covariance to identify highly responding genes appears to be a good strategy. We applied our method to published TF \u2013 target gene relationships determined using expression profiling on TF mutants and show that in most cases we obtain significant target gene enrichment, and in half of the cases this is sufficient to deliver a usable list of high-confidence target genes. CERMT could be immediately useful in refining possible target genes of candidate TFs using publicly available data, particularly for organisms lacking comprehensive TF binding data. In the future, we believe its incorporation with other forms of evidence may improve integrative genome-wide predictions of transcriptional networks. Recently, Heard and coworkers also proposed a clustering method that can use multiple gene expression time series, but with a different aim.
A prerequisite for incorporating temporal information into TF-target gene prediction is the selection of an appropriate dataset and the concomitant selection of a model organism. As will be discussed, the AtGenExpress consortium's stress series dataset for the model plant Arabidopsis thaliana was selected as the most technically and biologically appropriate. TF-target prediction in higher eukaryotes is particularly appealing because comprehensively mapping all TF-target gene interactions experimentally is unfeasible, meaning that such a method could have immediate biological application. We developed a method to identify potential TF-target genes as those responding strongly to the same stimuli as their controlling TF(s) in a coordinated temporal response. Initially, simulated data with predefined TF-target gene expression relationships showed that, by selecting treatments and incorporating temporal information, our algorithm can improve performance compared to conventional co-expression based methods. We then applied the method to identify known TF \u2013 target gene relationships, as experimentally determined using expression profiling on TF mutants. These data revealed that the method was useful for enriching targets for a diverse set of experimentally determined TF \u2013 target gene relationships. Furthermore, for half of the studied TFs, the enrichment of true targets among extracted genes was sufficient to obtain usable numbers of high-confidence target genes. By looking at a large set of annotated TFs, we also observed that the targets predicted using our approach are more enriched with both functional annotations and putative cis-elements compared to those obtained by conventional methods, indicating a higher biological relevance. We envisage our method could be immediately useful in narrowing the search for target genes of candidate TFs using publicly available data, either through direct prediction or by filtering data obtained by expression profiling of TF mutants.
This would be particularly applicable for organisms lacking comprehensive TF binding data. We show that considering other evidence has the potential to improve the method's performance and, in the future, we believe its incorporation into methods using multiple forms of evidence may improve integrative genome-wide predictions of transcriptional networks. In a simple scenario, assuming full transcriptional control of gene expression, the target genes will have the same characteristic expression pattern as the regulating TF itself, although possibly shifted forward in time. However, other genes can have a similar expression pattern as a direct response to an applied treatment even though they are unrelated in a regulatory sense. In order to separate such co-expression from the more interesting co-regulation, one has to look at many different time series of the same system exposed to different perturbations. Although the physical time frame is the same, the biological time frame might differ. For example, transcription is affected by temperature, and so the time shift between the TF and its target is likely to be different under high versus low temperature treatments. Ergo, the problem is two-fold: in order to make good predictions of plausible targets, it is necessary to, from the total set of considered treatments, I: pick the 'right' subset of treatments and II: introduce the 'right' time shift for these. Finally, the use of the correlation coefficient implies that the scale of changes in gene expression is irrelevant and only the shape matters. To overcome the background noise from the numerous untranscribed genes one usually applies some sort of variance threshold. Here, we instead experiment with using the covariance, which pays attention to both shape and magnitude, instead of correlation, and thus assume that big changes are more relevant than small changes.
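The covariance-versus-correlation argument above can be made concrete with a small sketch. This is our own illustration, not CERMT code: genes are ranked by the magnitude of their covariance with the TF profile, with an optional forward time shift of the targets.

```python
import numpy as np

def rank_by_covariance(tf_profile, expr, shift=0):
    """Rank genes by |covariance| with a TF profile, optionally shifting
    the targets 'shift' time points forward in time (illustrative;
    CERMT selects the shift per treatment). expr is genes x time."""
    t = tf_profile[: len(tf_profile) - shift] if shift else tf_profile
    x = expr[:, shift:] if shift else expr
    tc = t - np.mean(t)
    xc = x - x.mean(axis=1, keepdims=True)
    cov = xc @ tc / (len(tc) - 1)          # sample covariance per gene
    order = np.argsort(-np.abs(cov))       # strongest covariance first
    return order, cov
```

A gene that tracks the TF with large amplitude outranks a perfectly correlated gene of tiny amplitude, which is exactly the behavior the correlation coefficient would miss.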
A direct approach to utilizing such data for co-expression analysis is to concatenate the available time courses and compute correlations based on the constructed pseudo time series, as was done in the co-expression databases CSB.DB and ATTED-II. However, to summarize the previous section, the following assumptions lay the ground for our approach: \u2022 The expression of the true target genes is transcriptionally controlled by the investigated TF. \u2022 The TF-targets have a similar (covariant) response to the TF but may show a treatment dependent delay. \u2022 Genes can be part of more than one regulon, so any treatments in which the TF does not respond are also not informative. \u2022 Because not all genes are transcribed at the same time, the sought TF-targets will have higher variance than the bulk of genes. Given a set of gene expression time series and a TF of interest, the output of the proposed method is a cluster of co-expressed genes that, given the assumptions above, look like they are controlled by the TF of interest. Because the cluster is directly associated with a known TF, we will instead refer to it as a predicted regulon. To assess the plausibility of our assumptions about regulon characteristics, we counted the over-represented hexamers and functional annotations in the experimentally defined regulons. Six of the seven target pools that contained more than 30 genes had between 5 and 37 over-represented hexamers, and seven had between 3 and 8 over-represented functional annotations (see Additional file). From the simulated data, we could draw the conclusion that, if the proposed method is better than a simple correlation based measure using concatenated time series, then the improvement would be most apparent when there is a time shift between the TF and its regulon.
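The combinatorial treatment-selection problem can be illustrated with a toy greedy sketch. This is a stand-in of our own devising: the published method adds treatments while the cross-validated Q2 of a PLS model does not drop, whereas here overlap of top-covariance gene sets is used as a cheap proxy, and the function name and parameters are hypothetical.

```python
import numpy as np

def greedy_treatment_order(cov_by_treatment, top_q=0.1):
    """Toy greedy heuristic: start from the pair of treatments whose
    top-|covariance| gene sets overlap most, then add the remaining
    treatments in order of overlap with the running intersection.
    cov_by_treatment: treatments x genes array of TF-gene covariances."""
    C = np.abs(np.asarray(cov_by_treatment, float))
    k = max(1, int(C.shape[1] * top_q))
    tops = [set(np.argsort(-c)[:k]) for c in C]
    n = len(tops)
    # best initial pair by top-set overlap
    pairs = [(len(tops[i] & tops[j]), i, j)
             for i in range(n) for j in range(i + 1, n)]
    _, i, j = max(pairs)
    chosen, current = [i, j], tops[i] & tops[j]
    while len(chosen) < n:
        rest = [t for t in range(n) if t not in chosen]
        t = max(rest, key=lambda t: len(current & tops[t]))
        chosen.append(t)
        current &= tops[t]
    return chosen
```

In practice one would stop adding treatments once the overlap criterion deteriorates; here the full ordering is returned for clarity.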
Therefore, we ran the proposed algorithm, treating the TFs as both inducers and repressors, on each of the 1484 genes in our data set annotated in the MapMan software to bin 2 (Figure). We designed a method for extracting potential targets of known TFs using gene expression data in the form of multiple time series. The method provides a heuristic for solving the combinatorial problem of selecting informative treatments and appropriate time shifts between the TF and its targets. By maximizing the overlap in covariance between the TF and all other genes in two treatments and then systematically adding further treatments, we not only avoid the need for computationally expensive optimizations, but also increase the interpretability and quality of the predictions. Using existing experimental data on target associations for twelve TFs, the method showed higher performance than existing steady-state co-expression tools, but indicated that both methods could be complementary. This not only highlighted the utility of the method but also showed that the targets identified by mutant profiling under normal conditions indeed often are highly covariant with the associated TF in a treatment and time dependent fashion in the wild-type plant. The predicted regulons for unknown TFs also showed appealing properties in terms of enrichment of both annotations and upstream motifs. These results indicate that the described approach could be used both as a method for exploratory analysis of regulatory relationships of a particular TF, and as a means of obtaining high-confidence subsets from putative target genes identified by mutant profiling or other experimental techniques. Gene expression based techniques are especially useful for extracting potential targets when no information about the regulatory relationships is available. Such methods can therefore just as well be used for aiding hypothesis generation regarding regulatory properties of, e.g., metabolites. In order to investigate if there are more treatments for which (1) is high for the same genes, we order the remaining treatments according to (2), setting the first treatment to the artificial pseudo treatment. For the remaining treatments we measure the goodness-of-fit by estimating the change in predictive performance between the one-component partial least squares (PLS) regression model including the new treatment, y\" = X\"B + E\", versus the old one, y' = X'B + E', where B is the vector of regression coefficients and E the residual matrix. The predictive capability is measured by calculating the Q2 statistic, i.e. one minus the ratio of the prediction error sum of squares to the total sum of squares, using repeated five-fold cross-validation. Using Student's t-test we test the hypothesis H0 : Q2' > Q2\" (i.e. that including the next treatment led to a decrease in predictive performance) and only include the treatments where we fail to reject. Q2 will not increase when a treatment is added that requires high regression coefficients for genes that are unrelated to the TF in the other treatments, and it therefore provides a valid, albeit indirect, tool for deciding whether to leave treatments out. PLS is designed for developing models with strong predictive performance; although this is not our direct interest, it is suitable here, as it is desirable to find a set of genes of undefined size that are strongly related to a given TF for all treatments used. Our method does not allow for more than one time shift per treatment; therefore, we are only interested in targets responding in the same (induced) or opposite (repressed) direction as the TF. Hence, we modified the PLS algorithm slightly to set all negative or positive coefficients, respectively, to zero; according to H\u00f6skuldsson this does not invalidate the model. The Gap statistic has previously been proposed as a method for simultaneously choosing a suitable cluster size and assessing its statistical quality.
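The repeated five-fold cross-validated Q2 can be sketched as follows. This assumes the standard definition Q2 = 1 - PRESS/TSS and substitutes plain least squares for the one-component PLS model; the function name is ours.

```python
import numpy as np

def q2_cv(X, y, n_folds=5, n_repeats=10, seed=0):
    """Cross-validated predictive power, Q2 = 1 - PRESS / TSS, averaged
    over repeated 5-fold splits. Ordinary least squares stands in for
    the one-component PLS model of the published method."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(n)
        folds = np.array_split(idx, n_folds)
        press = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            press += np.sum((y[f] - X[f] @ beta) ** 2)
        scores.append(1.0 - press / np.sum((y - y.mean()) ** 2))
    return float(np.mean(scores))
```

A near-perfect linear relationship yields Q2 close to 1, while an unrelated response yields a low (often negative) Q2, which is what drives the decision to leave a treatment out.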
We then define the Gap statistic as the observed R2 minus the 95th percentile of the null-distribution, R2*. The recommended regulon size is given by the maximum of the Gap curve. Because we use the 95th percentile, a positive Gap curve can be directly translated to a significant regulon at the 5% confidence level. The gene expression x, at time point i \u2208 {1, 2,..., 7} for gene j \u2208 {1, 2,..., 10000}, in treatment k \u2208 {1, 2,..., 6}, was simulated in a naive way, where cj was one or zero depending on whether gene j was part of the planted regulon or not. The constant lk defined the planted lag for treatment k. The lag was either set to zero or allowed to vary between 1 and 2 time points. The parameters for the distribution of \u03c3y were picked to resemble a real-world dataset. To make the data more illustrative, the term pj,k yi-l,k in (9) was added, where pj,k was one or zero depending on whether or not gene j belonged to a 'masking' regulon in treatment k, a non-intersecting group of genes of the same size as the true regulon. Thus, in order to recover the hidden regulon it is necessary to combine information from different treatments. Simulating time series data using random normal deviates is naive in the sense that the different time points are independent of each other. For this particular application it is, however, acceptable, as the simplification only becomes detrimental when comparing methods that utilize the time series aspect of the data, which CERMT currently does not. The data from the abiotic stress series of the AtGenExpress project was downloaded and normalized. Throughout this study, we only considered the time shifts 0, 0.5 and 1 h, as further time shifts would result in relying on too few time points and unrealistically long transcriptional delays. The thresholds used for judging whether a TF responded to a treatment or not were set to the standard moderate-outlier threshold, Q0.75 + 1.5 \u00d7 IQR, i.e. the third quartile plus 1.5 times the inter-quartile range, applied to the distributions of the maximum responses and maximum deviations from the control, given the probes on the arrays with only insignificant expression signals as judged by the MAS5 algorithm. The enrichment of hexamers in predicted regulons was calculated by first building a dictionary with all possible hexamers, minus those that resembled the TATA-box, and counting their occurrences in the 500-base upstream regions of all considered genes. The obtained global distribution was then compared with that of the predicted regulon. P-values for over-representation were calculated using the hypergeometric distribution and FDR corrected. The calculation of annotational enrichments was based on the method proposed by Hannah et al. The R package contains all methods discussed in this paper and the part of the AtGenExpress data as it was used here. It is not organism specific, making it possible to apply CERMT to other species after collation of the appropriate gene expression time series. For Arabidopsis thaliana, we also provide the method as a web-service which allows the user to select the TF of interest and to extract and plot the suggested regulon using a fast but simplified version of the proposed algorithm. Project name: cermt. Project home page: . Operating systems: Platform independent. Programming language: R package with Java based web-interface. Licence: GPL v2. Any restrictions to use by non-academics: No. HR designed and implemented the methods and wrote the manuscript. DW implemented the web-service. JS provided essential mentoring and supervision to HR. MH initiated and supervised the project, performed the literature study and wrote the manuscript.
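The hypergeometric over-representation test with FDR correction described above can be sketched as follows (in Python rather than the R of the published package; the function names are ours).

```python
import numpy as np
from scipy.stats import hypergeom

def enrichment_p(hits_in_regulon, regulon_size, hits_in_genome, genome_size):
    """P(X >= observed) for over-representation of a feature (hexamer or
    annotation) in a regulon, under the hypergeometric null."""
    return hypergeom.sf(hits_in_regulon - 1, genome_size,
                        hits_in_genome, regulon_size)

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (the usual FDR correction)."""
    p = np.asarray(pvals, float)
    n = len(p)
    order = np.argsort(p)
    adj = np.empty(n)
    prev = 1.0
    for rank, i in zip(range(n, 0, -1), order[::-1]):
        prev = min(prev, p[i] * n / rank)   # enforce monotonicity
        adj[i] = prev
    return adj
```

For example, 10 hits of a hexamer present in 200 of 20,000 upstream regions, inside a regulon of only 50 genes, is far above the expected count of 0.5 and yields a vanishingly small p-value.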
All authors read and approved the final manuscript. Additional files: (1) Literature study of transcription factors and their targets considered in this study. We focused on TFs that have previously been implicated in stress responses, but as not all respond under the conditions used to generate the AtGenExpress dataset, some were excluded from our test set. Wherever multiple genes were knocked out or over-expressed, or where functional redundancy has been implicated, the average of those genes was used. (2) The numbers of over-represented hexamers and annotations (MapMan bins) in the experimentally defined regulons. With randomly chosen genes we would not expect any over-representation; a clear majority of the larger regulons have several over-represented hexamers and annotations. 'OX' indicates that the targets were found using over-expression, 'KO' using knock-out and 'ChIP' using ChIP-chip experiments."}
+{"text": "Gene expression data generated systematically in a given system over multiple time points provides a source of perturbation that can be leveraged to infer causal relationships among genes explaining network changes. Previously, we showed that food intake has a large impact on blood gene expression patterns and that these responses, either in terms of gene expression level or gene-gene connectivity, are strongly associated with metabolic diseases. In this study, we explored which genes drive the changes of gene expression patterns in response to time and food intake. We applied the Granger causality test and the dynamic Bayesian network to gene expression data generated from blood samples collected at multiple time points during the course of a day. The simulation result shows that combining many short time series together is as powerful for inferring Granger causality as using a single long time series. Using the Granger causality test, we identified genes that were supported as the most likely causal candidates for the coordinated temporal changes in the network. PER1 emerged as a key regulator of the blood transcriptional network, in which multiple biological processes are under circadian rhythm regulation. The fasted and fed dynamic Bayesian networks showed that over 72% of dynamic connections are self links. Finally, we show that different processes such as inflammation and lipid metabolism, which are disconnected in the static network, become dynamically linked in response to food intake, suggesting that increasing nutritional load leads to coordinate regulation of these biological processes. In conclusion, our results suggest that food intake has a profound impact on the dynamic co-regulation of multiple biological processes, such as metabolism, immune response, apoptosis and circadian rhythm. The results could have broader implications for the design of studies of disease association and drug response in clinical trials.
Peripheral blood is the most readily accessible human tissue for clinical studies and experimental research more generally. Large-scale molecular profiling technologies have enabled measurements of mRNA expression on the scale of whole genomes. Understanding the relationships between human blood gene expression profiles and clinical traits is extremely useful for inferring causal factors for human disease and for studying drug response. Biological pathways and the complex behaviors they induce are not static, but change dynamically in response to external factors such as intake/uptake of nutrients and administration of drugs. We employed a randomized, two-arm cross-over design to assess the effects of fasting and feeding on the dynamic changes of the blood transcriptional network. Our work has convincingly shown that feeding, or increasing nutritional load, affects the human circadian rhythm system, which connects to other biological processes including metabolic and immune responses. We believe this is a first step towards a more comprehensive population-based study that seeks to connect changes in the blood transcriptome to drug response, and to disease and biology more generally. Elucidating networks that define biological pathways underlying complex biological processes is an important goal of systems biology. Large-scale molecular profiling technologies have enabled measurements of mRNA and protein expression on the scale of whole genomes. As a result, understanding the relationships between genes and clinical traits, and inferring gene networks that better define biochemical pathways that drive biological processes, has become a major challenge in analyzing the large-scale data sets generated by these technologies.
The majority of published gene expression profiling experiments are carried out at a single pre-defined time point across all samples, with the implicit assumption that the steady state of the corresponding biological system is well approximated at that time point. The steady state in this context represents a baseline state of the system under study in which the system is least likely to change and has the least amount of variability due to environment. Biological pathways and the complex behaviors they induce are, however, dynamic, as has been shown in model systems such as E. coli and Drosophila. Coexpression networks are based on pair-wise gene-gene correlations of expression data, revealing functional modules in the network that elucidate pathways that drive core biological processes. A static Bayesian network (SBN) is a graphical model that encodes a joint probability distribution. The DBN is a popular approach in computer science, underlying for example the Kalman filter and the Hidden Markov Model (HMM) in voice recognition; it has been applied to E. coli and Drosophila, and here we apply it in vivo in humans. One challenge of applying the Granger causality test to human samples is how to generate long time series data. We overcame this problem by combining multiple short time series; our simulation results show that data combined from multiple short time series is as informative as a long time series. One challenge of applying the DBN to human samples is limited sample size. We tackled this problem by reconstructing the intra-slice structure from a large data set generated at static states, then reconstructing the inter-slice structure from the time series data. This analysis identified PER1 as the key regulator of the blood gene expression pattern, in which multiple biological processes were under circadian rhythm regulation. Furthermore, the genes under PER1 regulation in the fed network are enriched for obesity causal genes.
Finally, using the DBN, we show that over 72% of all inter-slice links are self links, and when the SBN and the DBN were compared, we found that different processes such as inflammation and lipid metabolism, which are disconnected during fasting, become dynamically linked in response to food intake. In the present study we have applied methods based on Granger causality and the DBN to a set of human blood gene expression profiles generated at multiple time points during the course of a day. The two-way or three-way ANOVA analysis defining time- and state-dependent gene expression signatures provides a meaningful way to characterize expression changes on a global scale in vivo. We have previously shown that over 80% of transcripts have significant inter-individual variances. Traditionally, a long time series is required to apply the Granger causality test; however, it is hard to obtain a long time series of human samples. The top causal gene in the fasted network is RNF144B, a putative ubiquitin-protein ligase that plays a role in mediating p53-dependent apoptosis. Genes under RNF144B regulation, including PTEN, are enriched for the GO biological process of negative regulation of cellular metabolic process. The top causal gene in the fed network is PER1, a transcription factor regulating the circadian clock, cell growth and apoptosis. The genes under PER1 regulation are enriched for genes correlated with the plasma concentration of triglyceride in the Icelandic Family Blood (IFB) cohort. PER1's downstream genes are involved in diverse biological processes: CREB5 in circadian rhythm, PTEN and P53INP2 in apoptosis, IL1R1, IL1RAP and TLR2 in inflammation response, FASN and ACSL1 in fatty acid metabolism and MVK in cholesterol biosynthesis. These results suggest that food intake interacts with circadian rhythm and that the circadian rhythm has impacts on many biological processes, as has been previously shown in mouse studies. Previous work measured circadian gene (PER1, PER2, PER3 etc.) mRNA expression rhythm in human peripheral blood cells and linked it to individuals' circadian phenotype. That PER1 is a top causal gene illustrates a potential mechanism of how CNS control and environmental influences can affect circadian rhythm gene expression, which in turn regulates a host of other biological functions. More specifically, circadian rhythm genes (PER1 in particular) play important roles in cell cycle regulation and cancer processes, and genes under PER1 control are involved in apoptosis and cell cycle regulation. There are more causal links inferred for the fasted time series than for the fed time series. The fasted network consists of many small subnetworks whereas the fed network consists of mainly two subnetworks. Associations of FTO with obesity were replicated in many populations, and seven obesity genes (ADA, BBS5, CBL, CCND3, FASN, FTO and SCARB1) overlapped with PER1's downstream genes in the fed network in blood. DNA variation, genetically modified animals, and chemical perturbations have all been used successfully to establish causal relationships between genes and phenotypes in mammalian systems. Here we have detailed the use of time series data in a human population to predict causal regulators using a Granger causality test and a DBN. Our Granger causality networks showed that multiple biological processes such as apoptosis, inflammation response and lipid metabolism are under circadian rhythm regulation, and obesity causal genes are under a circadian rhythm regulator. The time series data provided a path to go beyond the characterization of interesting patterns of expression and network differences associated with complex states (like fasting and feeding status), by allowing for the identification of putative causal regulators driving these differences.
While extensive experimental validation will be required to assess the full utility of the approach detailed in the present study, we believe these methods, and the characterizations of time and state dependent changes in gene expression and network topology, will motivate a need to integrate a time domain into gene expression experiments that aim to elucidate complex system behavior. Our data consist of many short time series from multiple individuals instead of a single long time series. Our approach for combining multiple short time series was based on the assumption that individual response slopes are similar. First, the population under study is relatively homogeneous: only males, of similar age, from the same population and ethnicity, and each individual consumed a meal of the same size and composition. Second, we reduced the individual specific variance by normalizing each individual's data against its own expression data at the first time point. This essentially reduces the number of parameters to fit in the model, at the cost of reducing the number of time points available to feed into the model. In contrast, if the population under study were genetically heterogeneous, we would treat the response slope differently for different individuals and would employ the mixed-effects model as suggested by Berhane and Thomas. Our implementation of the Granger causality test is a special form of DBN where there is no causal structure within a single time slice. There are also many variations of the Granger causality test, including stationary or non-stationary, and dynamic or time-invariant Granger causality tests. Our simple implementation of the Granger causality test identified the transcription factor PER1 as the main causal regulator in the fed time series. The intra-slice network (SBN) was reconstructed from an independent data set and is fixed in our current model of the DBN.
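The simplified, pooled first-order Granger test described above can be sketched as follows. This is our own minimal reading of the approach: lag-1 transitions from many individuals' short series are pooled (assuming similar response slopes, as the text does), and an F-test compares the AR(1) model of y with and without the lagged x term. The function name is hypothetical.

```python
import numpy as np
from scipy.stats import f as f_dist

def pooled_granger(x_series, y_series):
    """First-order Granger test of x -> y, pooling lag-1 transitions
    from many short time series (one per individual) as a stand-in for
    one long series. Returns the F statistic and p-value for adding
    x_{t-1} to the AR(1) model of y."""
    Y, Y1, X1 = [], [], []
    for x, y in zip(x_series, y_series):
        Y.extend(y[1:]); Y1.extend(y[:-1]); X1.extend(x[:-1])
    Y, Y1, X1 = map(np.asarray, (Y, Y1, X1))
    n = len(Y)
    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        r = Y - design @ beta
        return float(r @ r)
    ones = np.ones(n)
    rss_r = rss(np.column_stack([ones, Y1]))        # restricted: AR(1)
    rss_f = rss(np.column_stack([ones, Y1, X1]))    # full: + lagged x
    F = (rss_r - rss_f) / (rss_f / (n - 3))
    p = f_dist.sf(F, 1, n - 3)
    return F, p
```

With 40 individuals and 6 time points each, the pooled regression has 200 transitions, which is why many short series can substitute for a single long one.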
Even though the SBN was reconstructed using about 455 samples, there are still many uncertainties about the network structure and edge directions. Further research on using the SBN as a flexible prior for the intra-slice structure, rather than a fixed one, is warranted. Others have shown that the sampling interval is also an important parameter: when the sampling interval is small, the difference between data at consecutive time points will be small; in other words, the independent information added is small. Our time series simulation results are consistent with this. The networks in the feeding state are more highly interconnected, and it is well established that the circadian rhythm interacts with metabolism. Human peripheral blood is the most readily accessible human tissue for clinical studies. Our work on peripheral blood has demonstrated that feeding, or increasing nutritional load, affects the human circadian rhythm system, which becomes highly connected to other biological processes including metabolic and immune responses; and these effects can be observed in peripheral blood. We believe the results of the present work have broader implications for studies of drug response and for genetic and experimental studies on blood chemistry and vascular related clinical traits. Our results suggest that how blood networks respond to feeding is an important variable that may bring us closer to dissecting the underlying causes of obesity and associated disorders. Our results also provide a guideline on how much data is required for inferring causal relationships in human blood in future experiments. 40 healthy participants from an Icelandic company were recruited to participate in a randomized, two-arm, cross-over study to examine the effects of fasting and feeding on human blood gene expression. A total of 560 peripheral blood samples were collected from the 40 participants at 7 time points for each period of the study.
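A first-order bivariate VAR with a planted X-to-Y link, of the kind used in the Methods' simulations, can be generated as follows. The coefficient values here are our own illustrative choices, picked only to keep the series stationary (|a| < 1), not the parameters used in the paper.

```python
import numpy as np

def simulate_bvar(n_points, a=0.5, c=0.8, noise=1.0, seed=0):
    """Simulate a first-order bivariate VAR with a planted causal link
    X -> Y:  x_t = a*x_{t-1} + e,  y_t = a*y_{t-1} + c*x_{t-1} + e.
    Coefficients are illustrative (stationary since |a| < 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_points)
    y = np.zeros(n_points)
    for t in range(1, n_points):
        x[t] = a * x[t - 1] + noise * rng.normal()
        y[t] = a * y[t - 1] + c * x[t - 1] + noise * rng.normal()
    return x, y
```

In such data the lagged cross-correlation corr(x_{t-1}, y_t) is substantial, which is the signal the Granger test picks up.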
Significant inter-individual variation has been noted in human blood gene expression profiles. The time-series-based causality test was originally proposed by Wiener. Traditionally, the Granger causality test is applied to long time series; however, it is hard to collect long time-course data from human samples. Our data consist of many short time series from multiple individuals. There are several theoretical studies related to combining multiple time series in a general regression framework, including, for instance, that of Berhane and Thomas; our approach is a simplified version of the Berhane and Thomas approach. Under a first-order stationary BVAR model, a set of data was simulated with a known causal relationship, and the test of Granger causality was applied. For the 1000 time series simulated in this way, the p-values of the Granger causality test were computed. If only 6 time points are used, no Granger causality test is significant when the time series are considered independently. To estimate the false positive rate, we permuted the assignment of the 1000 time series generated above. Model selection normally requires comparing model residuals and statistics at different p-value thresholds. However, because of the small sample size (40) and the limited number of time points (6), we restricted our analyses to AR models with only first-order time dependency, similar to what has been done in previous studies. A bootstrapping procedure of re-sampling individuals with replacement was used. Each time, one subject was sampled from a pool of 40 individuals; a bootstrapped data set consisted of 40 sampled individuals (40\u00d76 data points). The same Granger causality test outlined above was applied to the re-sampled data, and the bootstrapping procedure was performed 100 times. 
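A minimal sketch of such a pooled first-order Granger test follows. The regression layout, the F-test form, and all simulation parameters are illustrative assumptions, not the study's exact implementation; the key idea is that each individual's short series contributes lagged (t-1, t) pairs to one pooled regression, and that individuals can then be resampled for the bootstrap.

```python
import numpy as np
from scipy import stats

def granger_p(x, y):
    # First-order Granger test of X -> Y, pooling lagged pairs across
    # individuals (assumes similar response slopes, as in the text).
    y_t   = y[:, 1:].ravel()          # responses y_t
    y_lag = y[:, :-1].ravel()         # own lag y_{t-1}
    x_lag = x[:, :-1].ravel()         # candidate cause x_{t-1}
    n = y_t.size
    Xr = np.column_stack([np.ones(n), y_lag])         # restricted AR(1)
    Xf = np.column_stack([np.ones(n), y_lag, x_lag])  # full model
    rss = lambda A: np.sum((y_t - A @ np.linalg.lstsq(A, y_t, rcond=None)[0]) ** 2)
    F = (rss(Xr) - rss(Xf)) / (rss(Xf) / (n - Xf.shape[1]))
    return stats.f.sf(F, 1, n - Xf.shape[1])          # p-value of X -> Y

# Toy simulation: 40 individuals, 6 time points, X drives Y at lag 1.
rng = np.random.default_rng(1)
x = rng.normal(size=(40, 6))
y = np.zeros_like(x)
y[:, 0] = rng.normal(size=40)
for t in range(1, 6):
    y[:, t] = 0.4 * y[:, t - 1] + 0.8 * x[:, t - 1] + 0.3 * rng.normal(size=40)

p_causal = granger_p(x, y)    # true causal direction: small p-value
p_reverse = granger_p(y, x)   # reverse direction: no real effect

# One bootstrap replicate: resample the 40 individuals with replacement
# (the text repeats this 100 times to build link confidence values).
idx = rng.integers(0, 40, size=40)
p_boot = granger_p(x[idx], y[idx])
```

Pooling the 40 individuals yields 200 lagged observations, which is why a first-order test can be significant here even though no single 6-point series would be.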
The link confidence value is the percentage of a link's p-values passing a multiple-testing-corrected threshold in the results of the 100 bootstrapping tests. 455 male samples in the IFB cohort were used. Figure S1: a montage display of independently simulated time series for X\u2192Y, for the two-slice dynamic Bayesian network, based on Equation 1; each time series consists of 240 time points (only the first 50 points are shown); blue lines are for X, and red lines are for Y. Figure S2: prediction accuracies of Granger causality for X\u2192Y using the simulated time series shown in Figure S1. Figure S3: the distributions of bootstrapping confidence values of links inferred in both fasted and fed Granger causality networks; (A) 80% of links in the fasted network have confidence values above 0.5; (B) 90% of links in the fed network have confidence values above 0.5. Figure S4: the out-degree distributions of both fasted and fed Granger causality networks exhibit scale-free properties; (A) the out-degree distribution for the fasted network; (B) the out-degree distribution for the fed network. Table S1: inferred causal links in the fasted blood Granger causal network. Table S2: inferred causal links in the fed blood Granger causal network. Table S3: inferred inter-slice causal links in the fasted blood dynamic Bayesian network. Table S4: inferred inter-slice causal links in the fed blood dynamic Bayesian network."}
+{"text": "MicroRNA (miRNA) target prediction is an important component in understanding gene regulation. One approach is computational: searching nucleotide sequences for miRNA complementary base pairing. An alternative approach explored in this paper is the use of gene expression profiles from time-series microarray experiments to aid in miRNA target prediction. This requires distinguishing genuine targets from genes that are secondarily down-regulated as part of the same regulatory module. We use a functional data analytic (FDA) approach, FDA being a subfield of statistics that extends standard multivariate techniques to datasets with predictor and/or response variables that are functional.In a miR-124 transfection experiment spanning 120 hours, for genes with measurably down-regulated mRNA, exploratory functional data analysis showed differences in expression profiles over time between directly and indirectly down-regulated genes, such as response latency and biphasic response for direct miRNA targets. For prediction, an FDA approach was shown to effectively classify direct miR-124 targets from time-series microarray data, providing better performance than multivariate approaches.Exploratory FDA analysis can reveal interesting aspects of dynamic microarray miRNA studies. Predictive FDA models can be applied where computational miRNA target predictors fail or are unreliable, e.g. when there is a lack of evolutionary conservation, and can provide posterior probabilities to provide additional confirmatory evidence to validate candidate miRNA targets computationally predicted using sequence information. This approach would be applicable to the investigation of other miRNAs and suggests that dynamic microarray studies at a higher time resolution could reveal further details on miRNA regulation. MicroRNAs (miRNAs) are a class of small non-coding RNAs that are found in both plants and animals. 
They are known to play important roles in gene regulatory networks by post-transcriptionally regulating the expression of other genes. These miRNAs target other transcripts by forming near-perfect or imperfect base pairings. Such formations silence genes either by mRNA cleavage or translational repression. Computational sequence-based methods have been developed to predict miRNA direct targets. As animal miRNAs have imperfect complementary base pairing to their targets and as they are short in length, most such approaches first search for a perfect 7-mer seed in the 5' end of miRNAs that matches their targets. Searching for such small motifs can lead to high false positive rates, and so additional tests such as conservation filters are typically applied. Although animal miRNAs were originally believed to primarily translationally suppress gene expression, they have also been found to lead to mRNA destabilization or degradation. MiRNA transfection microarray experiments capable of detecting such effects at the mRNA level of targets have shown that a large number of transcripts are down-regulated by over-expression of miRNAs. The mRNA expression changes after miRNA transfection could be a result of miRNAs directly targeting these messages (direct targets in the sequel), or the mRNA of genes could also be indirectly down-regulated if they are part of a miRNA-mediated regulatory module controlled by the direct miRNA targets (indirect targets in the sequel). Time-series microarray experiments that repeatedly measure gene expression simultaneously for multiple genes over a time period can capture temporal patterns and dependency of gene expression changes. A recent study by Wang and Wang applied such data to miRNA target identification, treating the time points as the feature index set T in a multivariate analysis method. 
For example, standard PCA can be extended to functional PCA (FPCA) as follows. In standard PCA, the eigenequation is V\u03be = \u03bb\u03be for the variance\u2013covariance matrix V = (1/n)X'X, where X is the (mean-centred) n \u00d7 p data matrix, with n samples and p features, and \u03bb and \u03be are the corresponding eigenvalue and eigenvector. In functional PCA, the eigenequation is generalized to V\u03be = \u03bb\u03be, where \u03be is now an eigenfunction and V is the covariance operator defined by Vx(s) = \u222bv(s,t)x(t)dt = \u27e8v(s,\u00b7), x\u27e9; that is, the covariance operator V is defined by using the covariance function v as the kernel of an integral transform. This formulation of the eigenequation in terms of inner products, \u27e8v(s,\u00b7), \u03be\u27e9 = \u03bb\u03be(s), can be applied to either multivariate or functional data, with the respective definition of inner product, where s \u2208 T for function domain T in the functional case, and index set T = 1,..., p in the multivariate case. In most cases, this expression can be computed quickly using only matrix expressions utilizing the sampled data points and a matrix of inner products between basis functions, avoiding explicit estimation of the integral. In this paper, we use FPCA for initial exploratory analysis and a nonparametric functional data analysis (NPFDA) approach for prediction. We analyzed the miR-124 transfection time-series microarray data previously published by Wang and Wang, retrieved from the GEO database. Genes with a log2 expression fold change (FC) of less than -0.5 for at least one time point were considered to be down-regulated genes, and we subsequently restricted our analysis to these down-regulated genes. We next identified genes showing substantial evidence for down-regulation of mRNAs. 
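The discretized version of the FPCA eigenequation above can be illustrated with a short sketch: sample smooth curves on a grid, approximate the covariance operator by a quadrature-weighted matrix, and take its eigendecomposition. The curves below are synthetic toy data with two known modes of variation, not the miR-124 profiles.

```python
import numpy as np

# Toy smoothed curves sampled on a regular grid (illustrative, not real data).
t = np.linspace(0.0, 120.0, 61)             # hours; spacing dt = 2
dt = t[1] - t[0]
rng = np.random.default_rng(2)
scores = rng.normal(size=(100, 2))
curves = (scores[:, [0]] * np.sin(2 * np.pi * t / 120)
          + 0.3 * scores[:, [1]] * np.cos(2 * np.pi * t / 120))

X = curves - curves.mean(axis=0)            # mean-centre the sample of curves
v = (X.T @ X) / len(X)                      # covariance function on the grid
# Quadrature-weighted discretization of (V xi)(s) = integral of v(s,u) xi(u) du:
eigvals, eigvecs = np.linalg.eigh(v * dt)
order = np.argsort(eigvals)[::-1]           # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
eigfuncs = eigvecs / np.sqrt(dt)            # normalize so integral of xi^2 = 1

explained = eigvals[:2] / eigvals.sum()     # variance explained by 2 modes
```

With this rescaling, the leading eigenfunction recovers the dominant sine-shaped mode and the eigenvalues approximate the functional variances, mirroring the matrix-based shortcut mentioned in the text.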
Genes with over 1.4-fold under-expression were used to construct the training sets. (1) Direct-target training set: down-regulated genes identified as direct targets by the sequence-based predictors. (2) Indirect-target training set: we selected the down-regulated genes that had annotated 3'UTRs but in which no target sites could be found by either predictor, and in which not even weak sites could be found using PITA prediction with relaxed parameters. From these genes, a set of size identical to the direct-target training set was randomly sampled to form the indirect-target training set. We also constructed two independent datasets, not used for training, to further evaluate our model. (1) Non-conserved direct-target test set: we combined the down-regulated genes that were predicted by either of the predictors, yet had no evidence of conservation. This set consisted of 424 genes and would presumably be enriched for direct targets. (2) Indirect-target test set: we constructed another independent dataset which met the same criteria as the indirect-target training set, but excluding genes that had been used for training the model. This set consisted of 372 genes. We used FPCA to perform exploratory data analysis. To fit a smooth function to the discrete sampled data, we used a set of 9 B-spline basis functions of order 4 (for cubic smoothing splines). Knots were located at the data points, and additional regularized smoothing (\u03bb = 0.01) was applied. For prediction, the NPFDA used B-spline basis functions (with 7 knots) to produce smooth first derivative estimates, with the class label as the response variable. Performance evaluation was by 10 \u00d7 stratified 10-fold cross-validation (CV). Major parameters were determined via nested CV separately for each fold; other parameter settings were set to their defaults or as appropriate for the data size. 
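The spline-smoothing step can be sketched with SciPy's general-purpose smoothing spline. This is only an illustration of the idea, not the authors' R implementation: the time points, measurements, and smoothing factor s (which plays a role analogous to the regularization parameter in the text) are invented.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# One gene's log2 fold change at a few sampled time points (values invented).
hours = np.array([0., 4., 8., 16., 24., 32., 48., 72., 120.])
log2fc = np.array([0., -0.2, -0.6, -1.1, -1.3, -1.2, -1.0, -0.9, -0.8])

# Cubic smoothing spline; s bounds the residual sum of squares, so a small s
# gives a close fit while still smoothing measurement noise.
spline = UnivariateSpline(hours, log2fc, k=3, s=0.05)

grid = np.linspace(0, 120, 121)
fitted = spline(grid)               # smooth expression curve
slope = spline.derivative()(grid)   # first derivative, as used by NPFDA
```

Evaluating the derivative spline is what gives NPFDA its smooth first-derivative features, rather than differencing the noisy raw measurements.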
To evaluate the discriminability of the classes, Receiver Operating Characteristic (ROC) curves were calculated, as well as the associated area under the ROC curve (AUC). R code implementing the analysis is available from the authors. In the miR-124 dataset of Wang and Wang, FPCA analysis reveals the modes of variation across the samples. The gene expression curves show a large variance in overall fold change in the first principal component of the functional PCA. A key preprocessing step in FDA is registration of the curves to remove unimportant amplitude or other variations. Prior to classification, therefore, each curve was standardized to have a mean (log2) fold change of -1. Thus, average fold change differences between the direct and indirect target sets were normalized away, and the shape of the response curves alone was effectively used to distinguish the direct targets from the indirect targets. Using curve shape information only would be expected to provide a more robust predictor than also relying on the absolute fold change as a feature, as the latter varies substantially between genes. An NPFDA discrimination model was trained on the direct and indirect target training sets (see Methods), and its performance was compared with linear discriminant analysis (LDA) as an example of a standard multivariate analysis; the results showed that the FDA approach outperformed LDA. To further validate the trained model, we also used it to classify data that was independent of the training set: the non-conserved direct-target test set and the indirect-target test set (see Methods). Although the true status of these data is not known, and so explicit accuracy and AUC results cannot be computed, we would expect the majority of the non-conserved test set to be true direct targets. However, without conservation information we would expect a proportion of false positives, i.e. indirect targets in actuality. 
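The registration step described above, standardizing each curve to a common mean log2 fold change of -1 so that only shape remains, reduces to a one-line transform. The toy curves below are synthetic stand-ins for the smoothed expression profiles.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic down-regulation curves: a shared shape with gene-specific amplitude
# and noise, sampled at 13 time points (all values invented for illustration).
shape = -np.sin(np.linspace(0, np.pi, 13))
curves = rng.uniform(0.5, 2.0, size=(50, 1)) * shape + 0.05 * rng.normal(size=(50, 13))

# Register: shift every curve so its mean log2 fold change is exactly -1.
registered = curves - curves.mean(axis=1, keepdims=True) - 1.0
```

After this shift, absolute fold change carries no information between genes, so any classifier trained on the registered curves must rely on curve shape alone.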
For the indirect-target test set, we would expect predominantly true indirect targets. We generated histograms of the posterior probabilities from the NPFDA in these datasets. Applying a posterior probability threshold of > 0.9 gives a set of 203 new non-conserved miRNA target predictions. In this study, we presented an FDA analysis of the differences in expression profiles between direct and indirect miRNA targets in a miR-124 miRNA transfection experiment. An exploratory FDA analysis showed differences in response latency, with direct targets showing an immediate down-regulation whereas indirect targets typically showed an approximately 32 hour delay. Also, direct miRNA target curves show evidence of a biphasic, two-component response, with an initial early decrease presumably due to direct effects of the miRNA on mRNA, and a later component matching the response of the indirect targets, possibly due to secondary effects. These time profile differences can be utilized for classification, and in the prediction analysis we showed that FDA can provide very good discrimination, substantially better than a standard multivariate analysis, as FDA utilizes the prior knowledge that the biological process of regulation generates a smoothly varying time profile. Such a predictive model would be especially useful in cases where computational approaches are less reliable, e.g. a lack of evolutionary conservation. Further, this approach can be used to provide additional confirmatory evidence (posterior probabilities) for computationally predicted miRNA targets and so improve computational miRNA target prediction. This approach would be applicable to the investigation of other miRNAs, and these results suggest that dynamic microarray studies at a higher time resolution could reveal further details on miRNA regulation. The authors declare that they have no competing interests. BJP and JW designed the study and analyzed the data. Both authors drafted, read and approved the final manuscript. 
BJP and JW contributed equally to this work."}
+{"text": "Recent discoveries in the field of somitogenesis have confirmed, for the most part, the feasibility of the clock and wavefront model. There are good candidates for both the clock and the wavefront. Nevertheless, the mechanisms through which the wavefront interacts with the clock to arrest the oscillations and induce somite formation have not yet been fully elucidated.In this work, we propose a gene regulatory network which is consistent with all known experimental facts in embryonic mice, and whose dynamic behaviour provides a potential explanation for the periodic aggregation of PSM cells into blocks: the first step leading to the formation of somites.To our knowledge, this is the first proposed mechanism that fully explains how a block of PSM cells can stop oscillating simultaneously, and how this process is repeated periodically, via the interaction of the segmentation clock and the determination front. Segmentation of the body axis is a basic characteristic of many animal species ranging from invertebrates to mammals. The vertebrate body is organized, along the antero-posterior (AP) axis, in a series of functionally equivalent units, each comprising a vertebra, its associated muscles, peripheral nerves, and blood vessels. These units originate from the earlier pattern of the embryonic somites, which are blocks of cells generated in a rhythmic fashion from the mesenchymal presomitic mesoderm (PSM). Several models of somitogenesis have been put forward; the most influential has been the clock and wavefront model of Cooke and Zeeman. The discovery in 1997 of an oscillatory expression of the gene c-hairy1 in the PSM of chick embryos provided the first clear molecular evidence for a segmentation clock. In 2001, Dubrulle et al. characterized a graded signal along the PSM, a candidate for the wavefront. It is generally proposed that the interaction between the segmentation clock and the gradient of signalling pathways specifies a segment in the anterior PSM. 
A crucial question in this scenario, however, is: \u201cHow is this interaction achieved?\u201d Aulehla and Herrmann, and more recently Goldbeter et al., have addressed this question, yet the mechanisms by which the segmentation clock interacts with the gradient of signalling pathways in the PSM remain a mystery, despite great advances in the last few years. Some recent papers have suggested models to explain how the somite clock can be stopped by the FGF or Wnt gradients. In this paper we approach this problem from a mathematical modelling perspective. Based upon extant experimental evidence on mice embryos, we propose a regulatory network involving two genes in the Notch and FGF/Wnt signalling pathways and show that its dynamic behaviour is sufficient to explain the rhythmic segmentation of PSM cells in this species. Several modelling studies support the claim that, given the involved regulatory mechanisms, the expression of one or more genes under the Notch regulatory pathway can oscillate spontaneously. They also suggest that the underlying mechanism is a simple negative feedback loop with relatively long time delays due to transcription and translation of the corresponding genes. Experimental data from the analysis of the regulation of Delta-like1 (Dll1), a Notch ligand strongly expressed in the PSM, indicate that Dll1 in the paraxial mesoderm and tail bud of mouse embryos is regulated by Wnt signalling. It has also been shown that Axin2 (which is up-regulated by Wnt) oscillates in mice embryos, and it has been proposed that these oscillations originate from the following negative feedback loop: protein Axin2 binds protein Dvl \u2014dishevelled, dsh homolog 1 (Drosophila)\u2014 and so decreases the concentration of the free form of this last protein, which is an activator of gene Axin2. However, the experimental evidence regarding the interaction between the Notch and FGF/Wnt signalling pathways is still scarce. 
Recently, experimental evidence has begun to clarify these interactions. In our models, W is up-regulated by the FGF and/or Wnt levels (this is accounted for by k, which is a monotone increasing function of the FGF and/or Wnt concentrations). Based on the above considerations we propose two gene regulatory networks, one of which is a simplified version of the other, and which are schematically represented in the figures. N and W are respectively proportional to the active protein levels resulting from the expression of genes N and W. The model equations have been normalized so that N and W are dimensionless and their maximum possible value is one, which they attain when the corresponding genes are maximally expressed. The degradation rates for proteins N and W are n\u03b3 and w\u03b3, respectively. nT and wT are the total time delays due to transcription, mRNA processing, and translation of both genes, and the notation [L]T indicates that all variables inside the square brackets are delayed a time T, e.g. [x(t)]T\u200a=\u200ax(t\u2212T). nnF, wnF, and nwF are functions representing the network regulatory interactions: nnF(N) accounts for the self inhibition of gene N, and so it must be a decreasing function; wnF(W) and nwF(N) respectively stand for the influence that proteins W and N have on the expression of genes N and W. Since these two interactions up-regulate the targeted genes, both functions are monotone increasing functions of their argument. 
Finally, as discussed above, k is an increasing function of the FGF and/or Wnt levels, and accounts for the assumed positive influence of these chemical species on either the expression level of gene W or the activity of the corresponding protein. The regulatory functions described above can be modelled with monotone decreasing and increasing Hill-type equations, in which the parameter \u03b5<1 represents transcriptional leakage. In the second network, wwF(W) accounts for the self-inhibition of gene W and is defined by an analogous decreasing Hill-type function. Monk reported degradation rates (in min\u22121) for different proteins of the Hes family and their corresponding mRNA; since the corresponding genes are in the Notch signalling pathway, we take our degradation rates from these estimates. For the time delays of genes Hes1 and Lfng (under the Notch pathway), Rodr\u00edguez-Gonz\u00e1lez et al. respectively estimated ranges of 11 to 33 min and 16 to 66 min, while for gene Axin2 (under the Wnt pathway) the estimated range is 45 to 116 min; we choose our time delays within these ranges. Parameter values for the regulatory functions and for the transcriptional leakage parameter \u03b5 were chosen accordingly. Given that many of the parameters were not estimated from reported experimental data, it is necessary to assess the robustness of the model results to variations in the parameter values. The models' time-delay differential equations were numerically solved using the software xppaut, which was also used to calculate the corresponding bifurcation diagrams. Since k\u22641, the TB conditions can be simulated by setting k\u200a=\u200a1. After doing so, we numerically solved the equations corresponding to both models; the results show that N and W oscillate with a period of about 2 hrs, and that the W oscillations are out of phase by half a cycle with respect to those of N. 
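The Hill-type regulatory functions with transcriptional leakage can be written compactly. The K, n, and leakage values below are placeholders chosen for illustration, not the paper's fitted parameters.

```python
import numpy as np

EPS = 0.05  # transcriptional leakage (placeholder value)

def F_decreasing(x, K=0.5, n=5):
    """Monotone decreasing Hill-type regulation with leakage, e.g. self-inhibition."""
    return EPS + (1.0 - EPS) * K**n / (K**n + x**n)

def F_increasing(x, K=0.5, n=5):
    """Monotone increasing Hill-type regulation, e.g. cross-activation."""
    return x**n / (K**n + x**n)

x = np.linspace(0.0, 1.0, 101)
repression = F_decreasing(x)   # falls from 1 toward the leakage floor EPS
activation = F_increasing(x)   # rises from 0 toward 1
```

The leakage term keeps repressed genes weakly expressed, and the Hill coefficient n sets the steepness of the switch, which the robustness analysis below shows must be fairly large for sustained oscillations.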
These two observations are in agreement with reported experimental results. It is known that various genes under the Notch and Wnt signalling pathways oscillate in cells located within the tail bud (TB), where high FGF and Wnt levels are found. In our models we assume that either the gene W is up-regulated or the corresponding protein is activated by FGF and/or Wnt, and that this interaction is accounted for by the function k. After carefully inspecting how variations of all the parameter values influence the oscillatory behaviour of the simplified network, we found the following. Given the degradation rate n\u03b3 for N, the Hill coefficient of the negative feedback regulatory function (nnF) must satisfy nnn\u22655 in order for the system to show sustained oscillations; larger values of n\u03b3 would allow the existence of a stable limit cycle (stationary oscillations) with smaller nnn values. The oscillation period is essentially determined by the parameters n\u03b3 and nT: the longer the time delay, nT, and the smaller the degradation rate, n\u03b3, the longer the oscillation period. Given that we consider n\u03b3 fixed, we picked the time delay necessary to have a 2 hrs cycling period; it is important to emphasize that the resulting time delay value lies within the range estimated in the section on parameter estimation. The phase shift between the N and W oscillations depends on the difference between the time delays nT and wT. For the N and W curves to be out of phase by half a cycle, wT must obey either of the following relations: 40 min\u2264wT\u226460 min or 150 min\u2264wT\u2264170 min, given nT\u224840 min. Finally, the larger the W degradation rate, w\u03b3, the larger the corresponding oscillation amplitude. 
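The requirement of a steep enough feedback and a long enough delay can be illustrated with the standard linear stability result for a scalar delayed negative feedback equation of the form dx/dt = -gamma*x - a*x(t-T): oscillations require the delayed gain a to exceed gamma and the delay to exceed a critical value. This is a textbook calculation, not taken from the paper, and the numbers are illustrative.

```python
import numpy as np

def critical_delay(gamma, gain):
    """Critical delay at the Hopf bifurcation of dx/dt = -gamma*x - gain*x(t-T),
    valid for gain > gamma > 0 (standard linear delay-equation result)."""
    omega = np.sqrt(gain**2 - gamma**2)      # oscillation frequency at onset
    return np.arccos(-gamma / gain) / omega  # delays beyond this destabilize

# Illustrative numbers: degradation gamma = 0.03 / min and a Hill slope
# giving an effective delayed gain of 0.15 / min at the fixed point.
tau_c = critical_delay(0.03, 0.15)   # roughly 12 min
```

With these illustrative rates, delays of the order of 40 min (the range estimated in the text) lie far beyond the critical delay, which is consistent with sustained oscillations.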
Here, we chose a w\u03b3 value such that the amplitudes of the W and N oscillations are comparable. We also analysed the influence that changes of the parameter values have on the dynamic behaviour of the second network. In this network there are two subsystems capable of generating sustained oscillations: the delayed negative feedback loop corresponding to the self-inhibition of gene N, and that corresponding to the self-inhibition of gene W. Given the degradation rate n\u03b3, the Hill coefficient nnn must be larger than 5 in order for the corresponding subsystem to oscillate spontaneously; similarly, given the value of w\u03b3, wwn\u22655 is required in order for the negative feedback loop associated to gene W to show a cyclic behaviour. If the parameters of the N and W negative feedback loops are set such that the N subsystem can, and the W cannot, oscillate spontaneously, the N subsystem can make the W subsystem oscillate whenever nwK<0.8 and wnK<0.5. Similarly, given the network symmetry, the W subsystem can cause the N subsystem to oscillate if nwK<0.5 and wnK<0.8. The oscillation frequency of the N and W subsystems (when they can oscillate spontaneously) largely depends on their respective degradation rates and time delays: the larger the time delay and the smaller the degradation rate, the longer the corresponding oscillation period. When both subsystems can oscillate by themselves and they are coupled, the phase shift between the N and W oscillations depends on the difference between the time delays nT and wT; for the N and W curves to be out of phase by half a cycle, wT must obey either of the following relations: 40 min\u2264wT\u226460 min or 150 min\u2264wT\u2264170 min, given nT\u224840 min. Decreasing k simulates a decrease of the FGF/Wnt levels. To better understand the system's stationary behaviour as a function of k, we calculated the bifurcation diagrams for both models. As the embryo grows the tail bud recedes and, while doing so, leaves some cells behind. 
After a cell leaves the TB, the FGF and Wnt levels in the surrounding medium start decreasing until they reach a given threshold and, according to the clock and wavefront model, arrest clock oscillations and promote somite formation. In our two proposed networks, decreasing k mimics this process. The bifurcation diagrams reveal that for k larger than k\u2033\u200a=\u200a0.76, only a stable limit cycle (stationary oscillatory state) exists, while for k below a second bifurcation value k\u2032 only a stable steady state remains. Parameters must be chosen appropriately (in particular wnK), because otherwise the oscillations damp out for k>k\u2032. From these bifurcation diagrams we can see that the two proposed models predict that once a PSM cell leaves the TB \u2014and so the FGF/Wnt levels start decreasing\u2014 it will keep oscillating with a more or less constant amplitude until k reaches the value k\u2032. After that, the oscillations will abruptly stop and the system will jump to the low N-induction-level stable steady state. This behaviour is consistent with the hypothesis of Aulehla and Pourqui\u00e9. We simulated this situation with a k value linearly decreasing in time, after remaining constant for a few hours. Note that in both cases the system still undergoes a few damped oscillations after k drops below the bifurcation value, k\u2032. The time delays in the gene regulatory pathway are the reason why the system does not stop cycling immediately at the bifurcation point, as one would expect from the bifurcation diagram alone. As noted above, the FGF and Wnt levels start decreasing right after a cell leaves (or is left behind by) the PSM, and continue to do so until they eventually reach a threshold value that triggers cycle arrest. In our simulations, the time at which k decays is delayed in proportion to how much later a given cell left the TB. In other words, if k(t) describes the time evolution of parameter k for the first cell, k(t-(i-1)\u0394T) is the corresponding function for the i-th cell, with \u0394T\u200a=\u200a20 min. We used the model corresponding to the second network for these simulations; because of these staggered decays, k reaches the bifurcation value k\u2032 at different stages of each cell's cycle. 
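The arrest mechanism can be illustrated with a single delayed negative feedback loop integrated by the Euler method. The equation form and all parameter values here are illustrative stand-ins for the paper's model (which was solved with xppaut): high k, as in the tail bud, sustains oscillations, while low k, as in the anterior PSM, kills them.

```python
import numpy as np

def simulate(k, t_end=3000.0, dt=0.1, tau=40.0, gamma=0.03, K=0.5, n=10):
    """Euler integration of dN/dt = gamma * (k * F(N(t - tau)) - N),
    where F is a decreasing Hill repression (all values illustrative)."""
    steps = round(t_end / dt)
    lag = round(tau / dt)
    N = np.empty(steps)
    N[:lag] = 0.1                               # constant history
    for i in range(lag, steps):
        F = K**n / (K**n + N[i - lag]**n)       # delayed self-repression
        N[i] = N[i - 1] + dt * gamma * (k * F - N[i - 1])
    return N

tail_bud = simulate(k=1.0)    # above the bifurcation: sustained oscillations
anterior = simulate(k=0.1)    # below the bifurcation: oscillations arrest

amp = lambda N: N[-5000:].max() - N[-5000:].min()   # amplitude, last 500 min
```

Comparing the two amplitudes shows the qualitative switch the bifurcation diagrams describe: a large limit-cycle amplitude at high k and a nearly constant steady state at low k.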
On the other hand, the cells repeat the overall behaviour in the next period, in accordance with the cyclic nature of the phenomenon. Finally, it is important to emphasize that most of the cells within one period stop oscillating at very similar times. This sort of discrete behaviour may explain the clustering of PSM cells into somites once oscillations stop. These processes can be visualized in the corresponding N vs. t curves. In the simulations described above we assumed that the interactions between the Wnt and Notch pathways are instantaneous; the parameters wnT and nwT represent additional time delays resulting from the non-instantaneous nature of these interactions. We repeated the simulations with various values of wnT and nwT. The results for wnT\u200a=\u200anwT\u200a=\u200a10 min are very similar to those obtained without the additional delays. We also tested wnT\u200a=\u200anwT\u200a=\u200a20, 30, 40 min and noted that, when the delays are equal to 20 min, many of the cells within one cycle still tend to stop oscillating at similar times, but not as markedly as before. We have assumed in our models that the interaction between the Wnt and Notch pathways is very rapid relative to the transcription, post-transcriptional modification, and translation processes. However, the genes N and W are most likely not directly connected, but affect each other through a series of intermediate steps that involve other chemical species. To account for this, we modified the model corresponding to the second network as described above. Recent discoveries in somitogenesis research have confirmed, for the most part, the basic veracity of the clock and wavefront model. We now know various genes that are expressed cyclically in the tail bud of vertebrate embryos, and at least three different substances whose expression levels vary along the PSM. Thus, there are good candidates for both the clock and the wavefront. Nevertheless, the mechanisms by which the wavefront interacts with the clock to arrest the oscillations and induce somite formation have not yet been fully elucidated. 
In this paper we have proposed two gene regulatory networks (one being a simplified version of the other) consistent with the known experimental facts in mice, and whose dynamic behaviour provides a potential explanation for the periodic aggregation of PSM cells into blocks: the first step leading to the formation of somites. In agreement with previous models, the oscillations in our networks originate from delayed negative feedback loops. We paid special attention to the estimation of the parameters in the two proposed models from experimental results. Given the scarcity of available data, we were only able to find rough estimates for the degradation rates and time delays. We carried out numerical parameter sensitivity analysis for the rest of the parameters in the two models to remedy this deficiency. Regarding the time delays in the negative feedback loops, we estimated them by adding the times necessary for gene transcription, to eliminate introns from the resulting mRNA, to shuttle the processed mRNA into the cytoplasm, and to translate it. There exist other possible contributions to these delays, such as post-translational modification of some enzymes or enzyme-to-enzyme interactions. Nevertheless, we expect them to be at least one order of magnitude shorter than the estimated delays (which are of the order of 50 min), and this justifies our not considering them explicitly. A careful analysis of the dynamic behaviour of the simplified network revealed that the N and W oscillations are out of phase by half a cycle. We explored the parameter space to test the robustness of the model results, and concluded that the previously described behaviour can be obtained with a wide range of parameter values, with orders of magnitude compatible with the experimental data. The second, more detailed network, in contrast to the simplified one, agrees with the results of Aulehla et al. 
In the second network, there are two coupled delayed negative feedback loops, one associated with each gene. It has been observed that the oscillation period of the somitogenesis clock gradually increases as the cells are displaced along the PSM; by suitably modulating k, we would be able to reproduce this observed increase of the oscillation period. We further tested the response of a set of cells distributed along the PSM to a decay of the FGF/Wnt levels starting just after each cell left the tail bud. For these simulations we used the second, more complex, network, and modelled k as linearly decreasing in time. According to our results, when k decreases below a given threshold, it arrests the segmentation clock oscillations in such a way that well defined groups of PSM cells stop cycling at roughly the same time. Very similar results were obtained when the model was modified to account for additional delays associated to the interactions between the Notch and Wnt pathways. If, as some people suspect, cycling arrest triggers the processes that eventually lead to the creation of a somite, then these results may explain the periodic formation of equally sized somites in mice embryos. It is important to emphasise that, in order for the PSM cells to stop cycling in a discrete fashion, the delayed negative feedback loop associated with gene N must be capable of generating sustained oscillations by itself. Otherwise, if the feedback loop of gene W oscillates by itself and makes the expression of gene N cycle, the cells along the PSM stop oscillating at almost uniformly distributed times. This result, together with the fact that the W feedback loop must generate sustained oscillations as well, allows us to assert that the spontaneous oscillation of both pathways is essential to the proper performance of the segmentation clock and its interaction with the determination front, in line with the proposal of Aulehla and Pourqui\u00e9. Movie S2: animation of the time evolution of 18 PSM cells after they leave the embryo tail bud, showing the N vs. t curves. 
To develop these animations we used the plotted data. (0.77 MB MOV) Click here for additional data file."}
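A delayed negative feedback loop of the kind the two networks are built from can generate sustained oscillations on its own once the delay is long enough. A minimal sketch, with illustrative parameter values (not fitted to the models in the paper):

```python
# Minimal sketch of a delayed negative feedback loop: a gene product
# represses its own production after a delay tau (here ~50 min, the order
# of magnitude estimated in the text). All other parameter values are
# illustrative, not fitted to the paper's models.
def simulate(tau=50.0, beta=1.0, p0=0.5, n=4, gamma=0.03, dt=0.1, t_end=2000.0):
    delay_steps = int(tau / dt)
    p = [0.1] * (delay_steps + 1)          # constant history before t = 0
    for _ in range(int(t_end / dt)):
        p_delayed = p[-delay_steps - 1]    # p(t - tau)
        production = beta / (1.0 + (p_delayed / p0) ** n)  # delayed repression
        p.append(p[-1] + dt * (production - gamma * p[-1]))  # Euler step
    return p

traj = simulate()
tail = traj[len(traj) // 2:]               # discard the initial transient
amplitude = max(tail) - min(tail)          # stays large: sustained oscillations
```

With a much shorter delay the same loop settles to a steady state, which is the distinction the text draws between self-sustained and driven oscillations.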
+{"text": "The ability to monitor changes in expression patterns over time, and to observe the emergence of coherent temporal responses using expression time series, is critical to advance our understanding of complex biological processes. Biclustering has been recognized as an effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms. The general biclustering problem is NP-hard. In the case of time series this problem is tractable, and efficient algorithms can be used. However, there is still a need for specialized applications able to take advantage of the temporal properties inherent to expression time series, both from a computational and a biological perspective.BiGGEsTS makes available state-of-the-art biclustering algorithms for analyzing expression time series. Gene Ontology (GO) annotations are used to assess the biological relevance of the biclusters. Methods for preprocessing expression time series and post-processing results are also included. The analysis is additionally supported by a visualization module capable of displaying informative representations of the data, including heatmaps, dendrograms, expression charts and graphs of enriched GO terms. We present a case study on the discovery of transcriptional regulatory modules in the response of Saccharomyces cerevisiae to heat stress.BiGGEsTS is a free open source graphical software tool for revealing local coexpression of genes in specific intervals of time, while integrating meaningful information on gene annotations. It is freely available at the project home page. Extracting relevant biological information from expression data provides important insights into the relations between genes participating in biological processes. 
This information can be used to identify co-regulated genes corresponding to transcriptional regulatory modules, thus contributing to the challenging goal of regulatory network inference. Processing expression data is time and resource consuming. In this context, the development of novel computational algorithms and tools for expression data analysis is primarily focused on efficiency and robustness. Clustering techniques have been extensively applied to both dimensions of expression matrices separately, focusing on either gene or sample expression patterns. However, many patterns are common to a subset of genes only in a specific subset of experimental conditions. In fact, our general understanding of cellular processes leads us to expect subsets of genes to be coexpressed only in certain experimental conditions, but to behave almost independently in others. These local expression patterns can only be discovered using biclustering techniques. BiGGEsTS (BiclusterinG Gene Expression Time Series) is a free and open source graphical application using state-of-the-art biclustering algorithms specifically developed for analyzing gene expression time series. The current version integrates the methods proposed by Zhang et al. and Madeira et al. Several applications are available for the analysis of gene expression data using clustering, typically offering k-means, self-organizing maps (SOMs), principal component analysis and support vector machines, together with filtering and normalization methods. Other tools, such as Expander and BicAT, also support biclustering: Expander performs clustering using k-means, SOMs and CLICK, while BicAT integrates biclustering algorithms including the Iterative Signature Algorithm, the Order-Preserving Submatrix algorithm, xMotif and BiMax. Few applications actually address the problem of analyzing time series, and they typically apply clustering. 
Exceptions, such as CAGED and TimeClust, offer clustering methods tailored to time series. This section describes the functionalities of BiGGEsTS from a biology/medical researcher's perspective, providing further insight into the underlying methods [see Additional file ]. The input of expression time series is straightforward (Figure ). Occasional errors may occur when measuring the abundance of mRNA in cells, leading to missing values, which are not always supported by biclustering algorithms. This can be addressed by filtering all genes with missing values, thus eliminating all rows with at least one missing value, which may also be regarded as a good strategy to reduce noise. However, when analyzing a small number of genes, removing some of them can lead to a significant reduction in the dimension of the dataset, potentially compromising further analysis. The tradeoff between the elimination of missing values and the dimension of the dataset is usually mitigated by establishing an upper bound for the percentage of missing values allowed per gene. Genes with percentages higher than a user-defined threshold are filtered. The remaining missing values must be filled. Systematic errors, on the other hand, affect every measurement and are associated with the differences between the experimental settings of each trial. Sources of this kind of error include the different incorporation efficiency of dyes, and the different scanning and processing parameters of distinct experiments. Normalization is a widely used technique which attempts to compensate for these systematic differences between time points and highlight the similarities and differences in the expression profiles. Additionally, a smoothing algorithm acts as a low-pass filter, attenuating the effect of outliers. 
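The missing-value strategy described above (an upper bound on the allowed percentage per gene, then filling the surviving gaps) can be sketched as follows; the 20% threshold and the fill-with-row-mean rule are illustrative choices, not BiGGEsTS defaults.

```python
# Sketch of per-gene missing-value filtering and filling, as described in the
# text. The 20% bound and the row-mean fill are illustrative assumptions.
def filter_and_fill(matrix, gene_names, max_missing_fraction=0.2):
    kept_names, kept_rows = [], []
    for name, row in zip(gene_names, matrix):
        missing = sum(1 for v in row if v is None)
        if missing / len(row) > max_missing_fraction:
            continue  # discard genes exceeding the allowed missing percentage
        present = [v for v in row if v is not None]
        mean = sum(present) / len(present)
        # fill the remaining missing values with the gene's mean expression
        kept_rows.append([mean if v is None else v for v in row])
        kept_names.append(name)
    return kept_names, kept_rows

genes = ["g1", "g2", "g3"]
data = [[1.0, 2.0, None, 4.0],      # 25% missing -> filtered out at a 20% bound
        [1.0, None, 3.0, 5.0],      # also 25% missing -> filtered out
        [2.0, 2.0, 2.0, 2.0]]       # complete -> kept
names, filled = filter_and_fill(data, genes)
print(names)  # ['g3']
```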
Depending on the biclustering algorithm, it may be necessary to discretize data, reducing the range of expression values to an adequate set of discrete values. Three biclustering algorithms are available: CCC-Biclustering, e-CCC-Biclustering and CC-TSB-Biclustering. The GO terms that annotate the genes in the dataset can be displayed in a table, where each row corresponds to a GO term and contains: the GO term ID, the term name, and the total number of genes annotated with it. In the case of a bicluster (Figure ), enriched terms are highlighted. Additionally, the list of genes annotated with each term is displayed in a popup window by clicking the corresponding row using the left button of the mouse. Term-for-term results can be used to generate tree structured graphs highlighting the enriched terms in each of the three GO ontologies (Figure ). BiGGEsTS is a software tool for analyzing gene expression time series using biclustering. It was designed to comply with the broad specifications of a software tool, essentially focused on user-friendliness, platform independence, modularity, reusability and efficiency. Additional material includes a case study describing how to use the software to discover transcriptional regulatory modules in a dataset containing the response of Saccharomyces cerevisiae to heat stress, reproducing previously published results [see Additional file ].\u2022 Project name: BiGGEsTS \u2013 BiclusterinG Gene Expression Time Series\u2022 Project home page: \u2022 Operating systems: Platform independent\u2022 Programming language: Java\u2022 Other requirements: Java 1.5 or higher, 1024 MB of RAM, Graphviz (in OSs other than Windows and Mac OS)\u2022 License: GNU GPL version 3 or higher. The authors declare that they have no competing interests. SCM implemented the techniques for preprocessing gene expression data, the biclustering algorithms and the post-processing approaches. JPG and SCM designed the software. 
SCM supervised the implementation of the software. JPG implemented the software and integrated the preprocessing, biclustering, and post-processing methods. Additionally, JPG produced the documentation, created the website, and wrote the first draft of the manuscript. All authors worked together in the final version of the manuscript. All authors read and approved the final manuscript.Multi-platform distribution of BiGGEsTS. A multi-platform distribution of BiGGEsTS. The archive biggests.zip contains a directory with several files, including installation files, the application, sample datasets and sessions, the Quickstart Guide to the software (\"BiGGEsTS Quickstart.pdf\") and a simple text file with installation instructions (\"readme.txt\"). In Windows (or Mac OS X) run the \"install.bat\" file, for installing the Graphviz dot application and the GO files, and then the \"biggests.bat\" (\"biggests.sh\") file, for running the software. For Linux and other operating systems, please install the Graphviz dot application first and edit the \"install.sh\" file to append the path of the dot executable file to the last line. Then run the \"install.sh\" script followed by \"biggests.sh\". Detailed instructions on how to install and use BiGGEsTS are also available in the Quickstart Guide [see Additional file Click here for fileBiGGEsTS Quickstart Guide. This document introduces users to BiGGEsTS, providing instructions on how to install and use this software, to analyze time series gene expression data using biclustering.Click here for fileSource code of the BiGGEsTS software. The source code of BiGGEsTS. The archive contains two directories, named \"biggests\" and \"smadeira\", inside a main directory, named \"src\". Each of the directories contained in \"biggests\" and \"smadeira\" contains the source files of the classes included in the packages identified by the same names. 
The Javadoc documentation of the source code is available at the official website. Click here for file. Sample expression dataset. This file contains a sample time series gene expression matrix corresponding to a short subset of a real dataset from Gasch et al., concerning the response of Saccharomyces cerevisiae to heat stress in 8 time points. The gasch_yeast_hs1_short.txt file is also included in the multi-platform distribution. To load this dataset into BiGGEsTS, run the software and use the \"Browse...\" button on the panel on the right to browse the file in the file system. Once it is found, press the \"Open\" button followed by the \"Load\" button. Click here for file. Archive of a sample BiGGEsTS session. This file contains a BiGGEsTS session with matrices and biclusters obtained by manipulating the time series gene expression data also provided as additional material [see Additional file ]. Click here for file. Case study: discovering transcriptional modules using BiGGEsTS. Case study describing how to use BiGGEsTS to discover transcriptional regulatory modules using the transcriptional response of Saccharomyces cerevisiae to heat stress. The results published in [ ] are reproduced. Click here for file"}
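As a toy illustration of why biclustering time series is tractable: discretizing each expression profile into a symbol string reduces the search for biclusters with contiguous time points to finding shared substrings. The sketch below shows only this underlying idea, not the actual CCC-Biclustering algorithm (which uses string processing structures such as suffix trees for efficiency); the alphabet and threshold are illustrative assumptions.

```python
from collections import defaultdict

def discretize(profile, threshold=0.5):
    # Map each transition between consecutive time points to a symbol:
    # U = up, D = down, N = no significant change (threshold is illustrative).
    symbols = []
    for prev, cur in zip(profile, profile[1:]):
        delta = cur - prev
        symbols.append("U" if delta > threshold else "D" if delta < -threshold else "N")
    return "".join(symbols)

def contiguous_biclusters(profiles, min_genes=2, min_cols=2):
    # Group genes whose symbol strings agree over the same contiguous
    # interval of time points (the defining property of this bicluster type).
    strings = {gene: discretize(p) for gene, p in profiles.items()}
    length = len(next(iter(strings.values())))
    found = defaultdict(set)
    for start in range(length):
        for end in range(start + min_cols, length + 1):
            for gene, s in strings.items():
                found[(start, end, s[start:end])].add(gene)
    return {k: genes for k, genes in found.items() if len(genes) >= min_genes}

profiles = {"g1": [0.0, 1.0, 2.0, 2.0],
            "g2": [5.0, 6.0, 7.0, 7.1],
            "g3": [3.0, 2.0, 1.0, 1.0]}
result = contiguous_biclusters(profiles)
print(sorted(result[(0, 3, "UUN")]))  # ['g1', 'g2']
```

Genes g1 and g2 share the up-up-flat pattern over the whole interval even though their absolute expression levels differ, which is exactly the kind of local coexpression the text describes.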
+{"text": "Excessive exposure to dietary fats is an important factor in the initiation of obesity and metabolic syndrome associated pathologies. The cellular processes associated with the onset and progression of diet-induced metabolic syndrome are insufficiently understood.To identify the mechanisms underlying the pathological changes associated with short and long-term exposure to excess dietary fat, hepatic gene expression of ApoE3Leiden mice fed chow and two types of high-fat (HF) diets was monitored using microarrays during a 16-week period. A functional characterization of 1663 HF-responsive genes reveals perturbations in lipid, cholesterol and oxidative metabolism, immune and inflammatory responses and stress-related pathways. The major changes in gene expression take place during the early (day 3) and late (week 12) phases of HF feeding. This is also associated with characteristic opposite regulation of many HF-affected pathways between these two phases. The most prominent switch occurs in the expression of inflammatory/immune pathways and lipogenic/adipogenic pathways . Transcriptional network analysis identifies NF-\u03baB, NEMO, Akt, PPAR\u03b3 and SREBP1 as the key controllers of these processes and suggests that direct regulatory interactions between these factors may govern the transition from early to late hepatic adaptation to HF feeding. This transition observed by hepatic gene expression analysis is confirmed by expression of inflammatory proteins in plasma and the late increase in hepatic triglyceride content. In addition, the genes most predictive of fat accumulation in liver during 16-week high-fat feeding period are uncovered by regression analysis of hepatic gene expression and triglyceride levels.The transition from an inflammatory to a steatotic transcriptional program, possibly driven by the reciprocal activation of NF-\u03baB and PPAR\u03b3 regulators, emerges as the principal signature of the hepatic adaptation to excess dietary fat. 
These findings may be of particular interest for devising new strategies aiming to prevent the progression of high-fat diet induced pathologies. Pathologies associated with metabolic syndrome, such as overweight and obesity, insulin resistance, hypertension, hyperlipidemia, non-alcoholic hepatic steatosis and diabetes, are becoming a health problem of epidemic proportions in Western societies. It is generally acknowledged that excessive exposure to dietary lipids disrupts the homeostasis of cellular metabolism and triggers an inflammatory response in adipose tissue. In this study, we have used Apolipoprotein E3-Leiden (ApoE3L) mice to investigate the effect of two types of standard laboratory high-fat (HF) diets on development of metabolic syndrome. ApoE3L transgenic mice are an established model for studying the effect of dietary interventions on hyperlipidemia, atherosclerosis and other diet-related pathologies. To investigate the processes associated with the high-fat (HF) induced metabolic stress and the progression of the metabolic syndrome, male ApoE3L mice were fed one of the three standard laboratory diets: chow (control diet), HFBT and HFP, for a period of 16 weeks. To focus on the molecular mechanisms underlying the development of metabolic syndrome, hepatic mRNA expression of HF- and chow-fed ApoE3L mice (n\u200a=\u200a150) was monitored using DNA microarrays over a period of 16 weeks. 
At day 0 and eight additional time-points mice were sacrificed, their livers were sampled and the RNA expression was analysed using NuGO Affymetrix mouse arrays. To assess temporal changes in hepatic gene expression over the 16-week period under the control and two high-fat diets, each of the time points per diet was compared to time-point day 0 in a pairwise fashion using the limma statistical package. In addition to overall temporal effects, comparing each of the time points to day 0 allowed assessment of the dynamics of transcriptional response by detecting the magnitude of the gene expression changes at each time point compared to the starting condition. In both HF conditions we observed three phases of hepatic transcriptional response characterized by local peaks in the number of differentially expressed genes: early (day 1 to week 1), mid (week 2 and week 4) and late (week 8 to week 16). After determination of the temporal effects of all three diets across the 16-week time-course, we focused on dissecting the HF-specific effects on the hepatic transcriptome. To analyse the effects of HF diets independently of changes occurring due to the animal age, we compared the gene expression of mice fed HFBT and HFP diets to those of mice fed the chow diet in each of the corresponding time-points. The pairwise comparisons were performed using the limma statistical package, and FDR q-value<0.1 was used as a threshold for significance. The numbers of identified differentially expressed genes in each diet and time-point are shown in . To compare gene expression patterns of identified HFBT and HFP DEGs across all conditions, expression profiles of genes that are changing in either of HF diets compared to chow diet per time-point were hierarchically clustered. Three distinct phases of hepatic transcriptional response to HF diets were observed by comparing each time-point to day 0. 
To identify which cellular processes are most affected by the hepatic exposure to excess dietary fat over the entire time-course, we first analyzed the overrepresented functional categories among the 1663 HF-responsive genes. To include prior biological knowledge in pathway analysis, both public and proprietary gene sets were used for Gene Set Enrichment Analysis (GSEA). An examination of the cluster heatmap reveals that the vast majority of the gene sets change the direction of their transcriptional regulation throughout the time-course. The first and largest module contains gene sets that are generally significantly upregulated during the early phase (day 3) and downregulated at the late phase of the HF-response. The most characteristic gene sets in module 1 are associated with inflammation and immune response and their regulation, such as Interleukin (IL)-1, IL-2, IL-3, IL-4, IL-5, IL-7 and IL-9 pathways, the CD40 pathway, antigen processing and presentation, T and B cell receptor signaling, natural killer (NK) cell cytotoxicity, leukocyte migration and Tumor necrosis factor (TNF), Nuclear factor-kappa B (NF-\u03baB) and Toll-like Receptor (TLR) signaling pathways. The second representative functional theme in module 1 is related to cell growth, proliferation and differentiation. Examples of these gene sets include cyclins and cell cycle regulators, G1 to S and G2 to M checkpoint controllers, the DNA replication reactome and Mitogen-activated protein (MAP) kinases, Epidermal growth factor (EGF) and Transforming growth factor beta (TGF-\u03b2) signaling pathways. Additionally, various gene sets related to cancer and development also follow the early induction/late repression expression pattern. In contrast to module 1, gene sets in the last module (5) are generally repressed during the early phase and significantly upregulated during mid and late phases. 
The most prominent functional characteristic of module 5 is the presence of the PPAR signaling pathway as well as many PPARs-regulated gene sets, including those associated with adipocyte differentiation, fatty acid oxidation and lipogenesis. The transcriptional activation of these gene sets during the mid phase and their amplification during the late phase implies an important role for PPARs in regulating the transition from short- to long-term effects of hepatic exposure to excess dietary fat. Additionally, hepatic activation of the genes involved in adipocyte differentiation and lipogenesis suggests that fat accumulation and adipogenic transformation likely take place in the liver after long-term exposure to a high-fat diet. Other aspects of the metabolic control, such as amino acid metabolism and the tricarboxylic acid cycle, are also upregulated during mid and late phases of the HF-response. Despite the large overall similarity in transcription response to HFBT and HFP diets, a comparison of pathway activities reveals specific differences between the two high-fat conditions. This is particularly evident in the gene sets clustered in the lowest part of module 3. Notably, the regulation of gene sets involved in energy metabolism is sensitive to variations in fat origin and/or specific compositions of HF diets. Specifically, a palm oil-based HF diet (HFP) causes a transient induction of the gene sets involved in energy metabolism at day 1 but shows attenuated induction of these gene sets in the late phase compared to a beef tallow-based HF diet (HFBT). Similar deviation in the gene set activity patterns between HFP and HFBT conditions is also visible in module 5. Finally, a small fraction of all the represented gene sets retains a constant transcriptional pattern throughout the time-course. 
These are represented in modules 2 and 4. The reciprocal transcriptional profiles of the pathways represented in modules 1 and 5 emerge as the principal signature of the transition from early to late hepatic transcription response to excess dietary fat. The coincidence of the repression of inflammatory, immune and cell proliferation pathways and the induction of metabolic, lipogenic and adipogenic pathways prompted the hypothesis that these events may be interdependent and their swap relevant for driving the transition from a stressed to a pathological hepatic state. The relationships between inflammatory and metabolic processes and the key regulators controlling them were further investigated using biological network analysis. To further explore the control of and the biological connectivity between the HF-responsive genes, the set of 1663 genes was used as the input for network analysis. The analysis identified several major regulators of the cellular response to high-fat diets appearing as the hubs in the resulting network: PPAR\u03b3, SREBF1 and SREBF2 \u2013 regulators of lipid, fatty acid and cholesterol metabolism, NF-\u03baB \u2013 regulator of immune response, and Akt \u2013 regulator of cell growth, proliferation and differentiation. Aside from the constant transcriptional shutdown of the SREBF2 local subnetwork that regulates cholesterol biosynthesis, the majority of network components show a characteristic swap in transcription response between the early (day 3) and late (week 12) phase of the time-course. PPAR\u03b3, SCD, SREBF1, ACOX1, CIDEC and CFD (adipsin) are repressed during the early phase and induced during the late phase of hepatic response to HF diets. The exception to the observed global repression of inflammatory and immune response in the late phase are the interleukin-1 pathway components and, to some extent, the acute phase reactants, which show statistically insignificant, low-grade re-induction at week 12 and week 8, respectively. 
This indicates that a modest fraction of the inflammatory response, likely mediated by the JNK/AP-1 pathway parallel to NF-\u03baB signaling, escapes the global repression. To further investigate relations between the network components that show the observed swap in transcription response and to characterize their associated functions in more detail, we examined \u201cFunction and disease\u201d categories that are overrepresented in the network. By overlaying the most significant categories over the network, the categories \u201chepatic steatosis\u201d and \u201cimmune response\u201d (p-value 5.85E-07) were identified as the best representatives of processes that are reciprocally regulated during the early and the late phase of the high-fat response. In addition to the largely reciprocal regulation, these processes are interconnected via a few key network components. It has been previously shown that PPAR\u03b3 antagonizes inflammatory responses by a transrepression of NF-\u03baB regulators and that its hepatic activation leads to the development of liver steatosis. PPAR\u03b3 is repressed during the early phase of the high-fat response; in contrast, at the mid and/or late phases, PPAR\u03b3 and its target genes are induced, while NEMO is simultaneously significantly repressed. The transition from the inflammatory (early) to the steatotic (late) transcriptional program in livers of ApoE3L mice observed by gene expression profiling is supported by the expression of inflammatory proteins in plasma and the liver triglyceride content. A series of inflammatory plasma proteins were quantified by multiplex immunoassay during the complete high-fat feeding time-course. ACOX1, SCD, PPAR\u03b3, CFD and CIDEC were re-discovered by the regression analysis. Also, matrix metallopeptidase 13 (MMP13), implicated in liver fibrosis, and ENTPD5, associated with hepatopathy and hepatocellular tumors, were identified by the regression approach, together with acyltransferase genes (CRAT, ACAA1b and ACAT), matrix endopeptidase related proteins and several genes of unknown function. 
The gene identified as the best predictor of hepatic steatosis by the regression analysis, butyrylcholinesterase (BCHE), has been mostly studied in the context of its effect on the brain's cholinergic system. BCHE may thus be a potential marker of interest for further investigation of the development of hepatosteatosis. The activation of the steatotic transcriptional program during the late phase of the high-fat feeding time-course observed by transcriptome analysis is consistent with the increased hepatic triglyceride content at the late time-points. Together, the inflammatory plasma proteome and hepatic triglyceride content support the findings obtained by mRNA expression analysis, further substantiating the proposed shift between the hepatic inflammatory and steatotic transcriptional programs during prolonged exposure to high-fat diets. In this study, we focused on investigating the molecular mechanisms underlying the onset and progression of the metabolic syndrome during a 16-week high-fat feeding time-course in ApoE3L transgenic mice. The changes in the hepatic transcriptome revealed that the adaptation to excess dietary fat proceeds in three phases: early, mid and late. The early (day 1 to week 1) and the late (week 8 to week 16) phases are characterized by the most prominent, and often reciprocal, peaks in gene expression changes. During the early phase, the initial sensing of the fat overload triggers the cellular stress response, characterized by activation of acute phase reactants and of the inflammatory and immune response. During the late phase, in contrast, lipogenic and adipogenic genes, including the PPAR\u03b3 gene itself, stearoyl-CoA desaturase 1 (SCD1) and CIDEC (Fsp27), as well as their regulators PPAR\u03b3 and SREBP1, are induced. The observed hepatic adaptation to excess dietary fat occurs similarly in the investigated beef tallow- (HFBT) and palm oil-based (HFP) diets. Nevertheless, specific differences in the gene expression response to these two diets do exist. 
This is particularly evident in the expression of pathways related to energy metabolism during the early and the long-term adaptation to high-fat feeding. In general, mice fed the HFP diet show a more pronounced transient induction of these pathways at the very beginning of the time course, but fail to activate them as efficiently as the mice fed the HFBT diet at the late phase of the time-course. The majority of pathways affected by the high-fat diets exhibit opposite regulation during the early and the late phase of the high-fat feeding time-course. This synchronous swap of the major functional signatures between the early and the late phase, and the fact that the key controllers of the reciprocally regulated processes are implicated in direct regulatory interactions, suggest that the repression of NEMO in the mid and the late phases of the HF response could be the most important regulatory event controlling the shut-down of the NF-\u03baB driven inflammatory response at the switch point between the stressed and the pathological hepatic state. In addition to NEMO, two other IKK related genes and the gene coding for the NF-\u03baB subunit RelB also show the characteristic early induction and late repression transcription mode, and have been previously reported to act synergistically. The dynamic functional landscape of the hepatic transcriptional adaptation to excess dietary fat during the 16-week time-course suggests a model in which sequential physiological changes underlie the transition from metabolic stress to metabolic syndrome. Finally, the identified central role of the PPAR\u03b3 and NEMO/NF-\u03baB regulators in coordinating the onset and progression of metabolic syndrome may have important implications in the treatment of the disease. 
Currently, PPAR\u03b3 ligands are used in clinics for their anti-inflammatory and insulin-sensitizing effects in diseases such as psoriasis, atherosclerosis, inflammatory bowel disease and type 2 diabetes. Animal experiments were approved by the Institutional Animal Care and Use Committee of the Netherlands Organization for Applied Scientific Research (TNO) and were in compliance with European Community specifications regarding the use of laboratory animals. The study involved 186 male ApolipoproteinE3-Leiden transgenic mice. Series of control experiments employing a metabolic cage setup showed that C57Bl/6 mice, the genetic background of the APOE3L mice, have isocaloric food intake when fed low fat, HFBT and HFP diets. The light cycles were identical for all animals. For mRNA expression profiling, six mice from each diet group were sacrificed at time points 0 days (chow only), 1 day, 3 days, 1, 2, 4, 8, 12 and 16 weeks; their livers were dissected after a 4 hour fasting period, snap frozen in liquid nitrogen and stored at \u221280\u00b0C until further processing. From three weeks prior to diet intervention onwards, all animals were fed a standard chow diet. At the beginning of the study, mice were divided into three groups: (1) control group fed chow diet, (2) group fed HFBT diet and (3) group fed HFP diet. Because the interest of the study was to assess the effects of high-fat diets under physiological conditions, animals were fed ad libitum. Total RNA was isolated using TRIzol reagent according to the manufacturer's instructions. RNA was treated with DNAse and purified using the SV total RNA isolation system. Concentrations and purity of RNA samples were determined on a NanoDrop ND-1000 spectrophotometer. RNA integrity was checked on an Agilent 2100 bioanalyzer with 6000 Nano Chips according to the manufacturer's instructions. 
RNA was considered suitable for array hybridization only if samples exhibited intact bands corresponding to the 18S and 28S ribosomal RNA subunits, displayed no chromosomal peaks or RNA degradation products, and had an RNA integrity number (RIN) >8. Applying this criterion, 142 RNA samples were used for hybridization to microarrays, including 5 to 6 biological replicate samples per diet, per time-point. RNA samples were hybridized to NuGO Affymetrix Mouse GeneChip arrays (NuGO_Mm1a520177) containing 23865 probesets, including 73 control probesets. Quality control of microarray data, normalization, differential expression analysis and Gene Set Enrichment Analysis were performed using the R statistical computing environment; supporting files are available at http://brainarray.mbni.med.umich.edu/Brainarray/Database/CustomCDF/CDF_download_v9.asp and http://nugo-r.bioinformatics.nl/NuGO_R.html. Quality control of the hybridized microarrays was performed using the simpleaffy and affyplm packages. Upon rigorous examination of the resulting diagnostic plots, 116 microarrays of supreme quality were taken for the further analysis. This resulted in the analysis of 3 to 6 biological replicate samples per diet, per time-point. Gene expression estimates were calculated using the GC-RMA library, employing the empirical Bayes approach for background correction followed by quantile normalization, using the custom MBNI CDF-file (MmNuGOMm1a520177 version 9.0.1). Differentially expressed genes between control and each of the treatment groups per time point, as well as between each of the time points and day 0, were identified using the limma package, applying linear models and moderated t-statistics that implement empirical Bayes regularization of standard errors. The t-test values of differential expression between control and each of the treatment groups per time point calculated using the limma package were used as the input for the PreRanked scoring method within the Gene Set Enrichment Analysis (GSEA). 
The gene sets collection included 880 gene sets compiled from MSigDB C2, Biocarta, Kyoto Encyclopedia of Genes and Genomes (KEGG) and GenMAPP databases, as well as expert curated gene sets. Detailed information about the gene sets used for GSEA analysis, including source websites, is available upon request. The gene set size filter resulted in filtering out 405 of the 880 gene sets. The number of permutations was set to 1000. Gene sets are considered significantly enriched at a false discovery rate (FDR) smaller than 10%. In total, 314 gene sets were identified as significantly enriched in at least 1 of 16 comparisons (HFBT vs. chow and HFP vs. chow per time-point). Normalized enrichment scores (NES) of significantly enriched pathways and the corresponding FDR q-values across all experimental conditions are available upon request. For hierarchical clustering and visualization of gene expression changes, log2 ratios of the 1663 high-fat responsive genes were used (HFBT and HFP divided by the median of chow per time point). To facilitate visualization of temporal trends, the starting time point was also included in the analysis. Temporal analysis of the gene expression profiles, where each of the HF groups is compared to chow per time point, is shown in Figures S2 to S5. Identification of overrepresented functional categories among the 1663 high-fat responsive genes and their grouping into functionally related clusters was performed using the DAVID Functional Annotation Clustering tool. The network analysis was generated through the use of Ingenuity Pathways Analysis (version date March 2008). For the microarray experiments described in this study, MIAME compliant protocols and datasets in Tab2MAGE format are accessible from the ArrayExpress microarray data repository. Plasma proteins were quantified by multiplex immunoassay measurements at Rules Based Medicine. The plasma antigen immunoassay panel included in the Rodent Multi-Analyte Profile was used for measurement of expression levels of 58 proteins (RodentMAP version 2 antigen panel). 
Of these, 47 proteins had sufficient detectability of the expression signals and were included in further analysis. Statistical significance of protein expression in HFBT and HFP fed mice compared to chow fed mice per time-point was assessed by t-test, with a p value of 0.05 as the threshold for significance. Liver lipid content was defined as total triglyceride content (mmol) per mg of protein. Extraction was performed using a modified Bligh and Dyer extraction protocol, optimized for steatotic liver material. Triglyceride content was measured enzymatically using the Roche TG kit. Protein content was measured by BCA analysis. Statistical significance of hepatic triglyceride content in HFBT and HFP fed mice compared to chow fed mice per time-point was assessed by t-test, with a p value of 0.01 as the threshold for significance. To determine the relation between gene expression data and triglyceride levels in the liver tissue, Random Forests regression was used through the randomForest package of the R statistical computing environment. Random Forests is an improved Classification and Regression Trees (CART) method. It grows many classification or regression trees, hence the name \u2018Forests\u2019. For quantitative outcomes the forest is made of regression trees, where the tree predictor is the mean value of the training set observations in each terminal leaf. In our application of Random Forests the outcomes are quantitative, therefore the regression algorithm rather than classification was used. Every tree is built using a deterministic algorithm, and the trees differ owing to two factors. First, at each node, a best split is chosen from a random subset of the predictors rather than all of them. Second, every tree is built using a bootstrap sample of the observations. The out-of-bag (OOB) data, about one-third of the observations, are then used to estimate the prediction accuracy. 
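The bootstrap-and-OOB scheme described here can be sketched in a few lines. The following is a minimal, illustrative Python analogue of the R randomForest workflow, not the authors' actual implementation; it bags depth-1 regression "stumps" rather than full trees, and all function names are ours:

```python
import random
from statistics import mean

def fit_stump(xs, ys):
    """Depth-1 regression tree on a single predictor: pick the split that
    minimizes squared error; each leaf predicts the mean of its observations."""
    best = (float("inf"), xs[0], mean(ys), mean(ys))
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        ml, mr = mean(left), mean(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x < t else mr

def oob_predictions(xs, ys, n_trees=200, seed=0):
    """Bag stumps on bootstrap samples; each observation is predicted by
    averaging only the trees for which it was out-of-bag (OOB)."""
    rng = random.Random(seed)
    n = len(xs)
    votes = [[] for _ in range(n)]
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        tree = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        for i in set(range(n)) - set(idx):          # OOB observations
            votes[i].append(tree(xs[i]))
    return [mean(v) if v else None for v in votes]
```

Because each OOB prediction comes only from trees grown without that observation, averaging the squared OOB errors gives the cross-validation-like accuracy estimate the text refers to, without a separate test set.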
Unlike other tree algorithms, no pruning (trimming of the fully grown tree) is involved. Each observation is assigned to a leaf, the terminal node of a tree, according to the order and values of the predictor variables. For a particular tree, predictions for observations are given only for the OOB data. The Random Forest predictor is computed by averaging the tree predictors over the trees for which the given observation was OOB. Because the prediction for an observation is based on trees grown without that observation, an idea akin to cross-validation, the estimated errors are unbiased and the data were not divided into test and training sets. Figure S1: Temporal changes in gene expression for HFBT, HFP and chow diet over the 16-week period compared to day 0. The number of statistically significant differentially expressed genes (DEG) identified by pairwise comparison of each time-point versus day 0 in chow, HFBT and HFP dietary conditions. A Venn diagram shows the overlap of the total (in any of the time points) number of DEGs in the three diets. (0.37 MB PDF) Figure S2: Temporal gene expression profiles; the results of Smoothing Spline Clustering analysis. (0.07 MB PDF) Figure S3: Temporal gene expression profiles; the results of Smoothing Spline Clustering analysis. (0.02 MB PDF) Figure S4: Temporal gene expression profiles; the results of Smoothing Spline Clustering analysis. (0.07 MB PDF) Figure S5: Temporal gene expression profiles; the results of Smoothing Spline Clustering analysis. (0.02 MB PDF) Table S1: Detailed overview of the numbers of differentially expressed genes in the HFBT vs. chow and HFP vs. chow comparisons. (0.05 MB DOC) Table S2: Annotation and statistics for 1663 high-fat responsive genes. 
The list of 1663 genes differentially expressed under the high-fat conditions, including detailed annotation of these genes, expression ratios and statistics (as determined by the limma package) in each of the experimental conditions. (3.10 MB XLS) Table S3: Overrepresentation analysis of functional categories among all, upregulated and downregulated HF-responsive genes. The complete results of the functional category overrepresentation analyses and their grouping into functionally related clusters, generated using the DAVID Functional Annotation Clustering tool. (1.89 MB XLS) Table S4: The macronutrient content and fatty acid composition of the chow, HFBT and HFP diets. (0.04 MB DOC)"}
+{"text": "One of the challenges with modeling the temporal progression of biological signals is dealing with the effect of noise and the limited number of replicates at each time point. Given the rising interest in utilizing predictive mathematical models to describe the biological response of an organism, or analyses such as clustering and gene ontology enrichment, it is important to determine whether the dynamic progression of the data has been accurately captured despite the limited number of replicates, such that one can have confidence that the results of the analysis capture the important salient dynamic features. By pre-selecting genes based upon quality before the identification of differential expression via an algorithm such as EDGE, it was found that the percentage of statistically enriched ontologies (p < .05) was improved. Furthermore, it was found that a majority of the genes found via the proposed technique were also selected via EDGE, though the reverse was not necessarily true. It was also found that the improvements offered by the proposed algorithm are anti-correlated with improvements in the various microarray platforms and the number of replicates. This is illustrated by the fact that newer arrays and experiments with more replicates show less improvement when the filtering for quality is run before the selection of differentially expressed genes. This suggests that the increase in the number of replicates, as well as improvements in array technologies, increases the confidence one has in the dynamics obtained from the experiment. We have developed an algorithm that quantifies the quality of a temporal biological signal rather than whether the signal illustrates a significant change over the experimental time course. 
Because the use of these temporal signals, whether in mathematical modeling or clustering, focuses upon the entire time series, it is necessary to develop a method to quantify and select for signals which conform to this ideal. By doing this, we have demonstrated a marked and consistent improvement in the results of a clustering exercise over multiple experiments, microarray platforms, and experimental designs. As biology has transformed from a descriptive to a quantitative field, there has been growing interest in creating mathematical models which describe the dynamic evolution of biological processes. Thus, rather than taking measurements pre vs. post perturbation, there has been growing interest in modeling the dynamic progression of biological responses. An example of a dynamic biological signal which is of interest to researchers is the change in mRNA gene expression level over time in response to external perturbations such as gene silencing, induction of disease states, or the administration of a drug/toxin. The standard procedure for obtaining the necessary information consists of taking a set of gene expression measurements at predetermined time points and reconstructing the dynamic signal. Due to the low signal to noise ratio associated with these experiments, replicates are taken in order to compensate for the lack of fidelity in the signals. However, because of issues such as cost in terms of time and money, oftentimes very few replicates are obtained at each time point. 
Therefore, while it may be relatively simple to determine whether the system changes in a measurable fashion during the time horizon of the experiment via statistical tests such as ANOVA, the t-test, EDGE, or SAM, the question is not how to select for biologically relevant signals, but rather how to select for the signals whose inherent expression dynamics, given the large biological variance and the limited number of replicates, are accurately captured by the ensemble average. Several methods exist for assessing the quality of a signal given a limited number of replicates, such as calculating the Signal-to-Noise Ratio (SNR). As a generalized method for assessing the quality of a single temporal signal given a specific number of replicates, we need to satisfy two intuitions: 1. The accuracy of the ensemble average increases as the number of replicates per time point increases. 2. The accuracy of the ensemble average increases as the coefficient of variance decreases at each time point. For all of the datasets, the p-value cutoff was selected at p < .05 for both the EDGE selection and the LOOCV Quality Assessment, though it is arguable whether such a threshold is appropriate given the number of genes present within the dataset. Looking at the results for the three datasets summarized in the corresponding table, the GDS972 chronic corticosteroid dataset shows the smallest improvement between filtering the dataset utilizing EDGE alone vs. utilizing EDGE along with the proposed LOOCV filtering algorithm. The pre-selection step via LOOCV yielded 2776 genes, of which EDGE determined that 2127 were differentially expressed. This is in contrast to running EDGE independently, in which 5344 genes were selected as being differentially expressed. In this instance, it would appear that filtering via LOOCV identifies a subset of genes in which over 75% of the genes show significant differential expression. 
In terms of the gene ontology enrichment, it is evident from the corresponding figure that the burn dataset (GDS599) showed the greatest improvement when the LOOCV algorithm was used as a pre-filtering step. Selection via EDGE alone identified 1292 genes as differentially expressed. Running the LOOCV algorithm by itself yielded 644 genes, out of which 396 were selected as differentially expressed under EDGE. Pre-filtering this dataset for quality before conducting selection for differential expression showed the greatest level of improvement because it contained the fewest replicates and was run on an older array. For the acute corticosteroid dataset (GDS253), after the initial filtering via LOOCV there were 898 genes, of which 820 were shown to be differentially expressed via EDGE. When conducting the selection via EDGE by itself, 2267 genes were denoted as differentially expressed. In general, for all of the datasets, the majority of genes which were selected as having their dynamics reliably measured also showed significant activation, though the reverse is not true. This is not surprising, because the LOOCV quality assessment requires the presence of a change, or lack of change, to be consistent for all data points, whereas techniques such as EDGE only attempt to detect a significant change over the time course of the experiment. However, with a sufficient number of replicates or an increase in the signal to noise ratio, both sets should be essentially the same, as seen in the dataset associated with chronic administration of corticosteroids. 
However, in the cases where the number of replicates is quite small or the system has a low inherent signal to noise ratio, the differences can be significant. Given that the number of significantly enriched ontologies appears to be reasonably constant whether we use the pre-filtering step or not, one might assume that the intersection of the significantly enriched ontologies between the two cases is quite high. However, from our results, this did not appear to be the case. Running the pre-filtering step along with EDGE yielded 55\u201360% commonality in terms of the significantly enriched ontologies identified for the two corticosteroid related datasets (GDS972 and GDS253). In the case of the burn dataset (GDS599), it was found that the commonality between the two sets changed from 60% when 2 clusters were used to less than 10% when 19 clusters were used. Furthermore, not all of the ontologies found after running LOOCV and EDGE were found to be enriched when EDGE was run by itself. Given the large percentage of genes which passed the LOOCV pre-filtering step that also showed differential expression, this result suggests that the enrichment of individual ontologies can be quite sensitive to the incorporation or removal of a few genes. The primary issue which this algorithm was developed to address is that a gene showing significant changes in its temporal expression does not mean that its expression profile has been measured in a way amenable to mathematical modeling. Given the difficulty in quantifying the accuracy of a given mathematical model, clustering was used as a surrogate. In the dataset obtained with a newer Affymetrix array and a higher number of replicates, we generally found that genes which showed significant activation were also very likely to have been accurately measured. However, for some of the older arrays, we found that this was not the case. 
In one case, we found that many of the genes which had been reported as having statistically significant changes in activity did not have profiles which were amenable to modeling. Aside from the simple explanation that such variability between replicates is due to biological variability, we hypothesize that other factors may play a role, and that by identifying these factors we can minimize the variations between samples. Such factors may include issues with the microarray itself, as evidenced by the minimal difference between the proposed LOOCV quality assessment metric and EDGE when utilizing the newer RAE230A array vs. the older RG-U34A arrays. Another factor which may play a role is imperfect synchronization between replicates, especially for genes with quick early responses. Due to uneven temporal sampling, one significant issue has arisen, specifically how to deal with samples which encompass a shorter time duration vs. samples that represent the response over a longer duration of time. For instance, in the case of the GDS253 dataset, the sampling rate ranges from 15 minutes to 24 hours. Thus, while the majority of the signal in terms of duration may have been well captured, the overall correlation coefficient may be low given the high variability in the early time points, as illustrated in the corresponding figure. The reason for this problem is that the algorithms essentially treat the data as a vector of values without time dependence. Essentially, the data points themselves are all given equal weight, whether they take place during a short period of time or encompass a greater period of time. Thus the correlation coefficient or clustering analysis may not always agree with one's judgment from visual inspection of the data. 
However, while the results of the algorithm may not agree with one's intuition when visually assessing the data, the fact that researchers have selected such an uneven sampling strategy means that the early dynamics may play just as important a role as the later dynamics, despite their transient effect. Therefore, while there exist algorithms that will normalize the data based upon time duration via techniques such as interpolation or curve fitting, they risk discounting the early dynamics that the uneven sampling strategy was presumably chosen to capture. While the intent of our algorithm was to identify a set of gene expression profiles whose temporal profiles are amenable to modeling, we assessed the effect of these high quality expression profiles through an analysis of clustering results. In our evaluation of clustering quality, we have established that our algorithm identifies a condensed set of genes which show a strong co-functionality/co-expression relationship, and rejects many genes which do not show co-functionality with the dominant biological processes owing to incorrect cluster assignments caused by ambiguities in the underlying signal. However, in many cases this specificity comes at the expense of generality, with some of the results exhibiting fewer total enriched gene ontologies. Thus, if one wanted to use this reduced set of genes as a representative population, one important question is whether this smaller set of reduced ontologies exhibits a more focused set of biological processes or whether a significant amount of information has been rejected. To make this assessment, we evaluate the set of genes that passed EDGE but were rejected by the LOOCV algorithm, and the set of genes that pass both filters. When this evaluation is performed upon our three datasets, we see an interesting result. In the set of differentially expressed genes which do not pass the LOOCV filter, the majority of enriched ontologies are the same as in the overall set of differentially expressed genes (~95% similarity). In the cases where additional ontologies are found in this set, very few of the additional biological processes have been previously associated with injury, inflammation, the immune response, or metabolism, which are hallmarks of burn injury or corticosteroid administration. This is in contrast to the set of differentially expressed genes which do pass the LOOCV filter, in which many additional biological processes were found to be over-represented; furthermore, these additional processes have been linked to our experimental perturbations (see the corresponding table). From this result, it appears that the set of genes rejected by the LOOCV filter is qualitatively similar to the original set of differentially expressed genes, in contrast to the set of differentially expressed genes which pass the LOOCV filter, which shows significant differences in the identified ontologies. By looking at the set of ontologies identified after LOOCV filtering but not present under selection via EDGE, it appears that LOOCV filtering is able to identify ontologies which predominantly relate to the biological processes associated with our experimental perturbations. However, because it is difficult to assess whether \"unrelated\" ontologies reflect an artifact of the selection/clustering/enrichment process, or part of important but previously uncharacterized processes, one strategy may be to utilize a union of the results obtained from EDGE filtering and the set of enriched ontologies found after the additional LOOCV filtering. Similar to the fact that LOOCV was designed as an addendum to gene selection algorithms to identify high quality temporal profiles for modeling, the additional ontologies that have been identified serve as an addendum to the original processes identified via the original gene selection process. Because we have shown that these additional processes are relevant, we see this as adding information to what has been previously identified. 
Thus, the LOOCV filtering step should not supplant results from EDGE or any other selection algorithm, but can be used to complement the original results. One of the primary motivations for creating a new way of performing gene selection is that, given inherent biological variability as well as deficiencies in measurement precision, the temporal evolution of a given piece of data may not be an accurate reflection of the underlying dynamics. Therefore, if one were to mathematically model a given dynamic response, one must be certain that the data is sufficiently precise. Given the difficulties in evaluating the change in utility between modeling accurately vs. inaccurately measured data, the effect upon ontology enrichment was used instead, and we found that in all cases there was an improvement in the overall quality of the clustering results, though with better data acquisition platforms and experimental designs this improvement was minimized. Though most of the analysis has been performed upon microarray data, this data was selected primarily for the ease with which it is possible to evaluate the improvement; the technique can be expanded to other data types to evaluate whether sufficient data has been obtained for modeling purposes. Thus, this technique can also be used with measurements such as ELISA or metabolite measurements over multiple time points, allowing the researcher to determine that a sufficient number of replicates has been obtained, or that more replicates are needed. To satisfy these constraints we propose utilizing a variation of the Leave One Out Cross Validation (LOOCV) technique. Though there are classes of mathematical models, such as b-splines or autoregressive (AR) models, that can be fitted to temporal data, these require a priori knowledge about the dynamics themselves. For instance, when utilizing b-splines one needs to specify the number of knots or control points to be used by the spline. In the case of AR models, the order of the model must be specified a priori. In both of these methods, the specification of these parameters will have a significant effect upon how the data is fitted by the model, and therefore a significant effect upon the estimation of how accurately the measured data reflects the underlying dynamics. Therefore, we seek a method which is independent of model parameters and depends only upon the confidence interval selected by the researcher. Ideally, we would like to predict whether utilizing an additional replicate for each time point would change the gene expression profile obtained. While we cannot predict the effect of having an additional replicate, we can simulate it by measuring the stability of the signal given n-1 replicates. Thus, treating the ensemble average of a temporal signal as the model, we are essentially evaluating whether a subset of the measured data reflects a similar underlying model. Because the algorithm evaluates a sub-sampled signal utilizing n-1 replicates, this is similar to LOOCV, in which one attempts to determine whether a given model can predict the occurrence of a data point which was not utilized in the original training. Rather than performing the standard LOOCV, in which a point is randomly removed from the dataset, we remove either the minimum or the maximum at each time point. A signal of length four thus has 2^4, or 16, possible sub-sampled signals; in general, a signal with length N will have 2^N possible sub-sampled signals. For example, a sub-sampled signal of length four could have the maximum data value removed at time points 1, 3, 4 and the minimum data value removed at time point 2. This signal would be compared to its inverse, generated by removing the minimum data value at time points 1, 3, 4 and the maximum at time point 2. By iterating through all possible sub-sampled signals, we can establish a lower bound on the quality of a given signal. 
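The min/max sub-sampling scheme can be sketched as follows: for each of the 2^N masks, drop either the maximum or the minimum replicate at each of the N time points, and score a gene by the worst Pearson correlation between a sub-sampled signal and its inverse. This is a minimal pure-Python illustration, with function names of our own choosing:

```python
import itertools
from statistics import mean

def subsampled_means(replicates, mask):
    """Ensemble average after removing the max (mask bit 1) or the min
    (mask bit 0) of the replicates at each time point."""
    means = []
    for reps, drop_max in zip(replicates, mask):
        reps = sorted(reps)
        kept = reps[:-1] if drop_max else reps[1:]
        means.append(mean(kept))
    return means

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0  # constant signals carry no information

def min_correlation(replicates):
    """Lower bound on signal quality: the worst correlation between any
    sub-sampled signal and its inverse, over all 2^N masks."""
    worst = 1.0
    for mask in itertools.product((0, 1), repeat=len(replicates)):
        inverse = tuple(1 - b for b in mask)
        r = pearson(subsampled_means(replicates, mask),
                    subsampled_means(replicates, inverse))
        worst = min(worst, r)
    return worst
```

A low-noise signal keeps a high minimum correlation under every mask, while a signal whose apparent trend depends on which replicate is dropped scores poorly.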
Given the small number of replicates per time point, we have elected to use this strategy as the method for assessing similarity. Given the ability to generate hypothetical gene expression profiles utilizing n-1 replicates, it is then necessary to quantify the difference between these hypothetical signals. To do so, we have utilized Pearson's correlation. Pearson's correlation was selected over other similarity measures because it is scale invariant, allowing the comparison of signals of different magnitude. One of the inherent requirements for utilizing Pearson's correlation to assess the similarity between two datasets is that they need to be linearly correlated; however, because we are assessing whether different sub-sampled signals capture the same underlying dynamic, it follows that the different sub-sampled signals will be linearly correlated. Furthermore, the use of Pearson's correlation is attractive because the correlation coefficient r can easily be converted into the test statistic t = r*sqrt((n-2)/(1-r^2)), which can then be converted into a p-value by utilizing the t-distribution. While it is possible to rank all of the genes in a given array by this correlation score, we can also calculate a p-value associated with this correlation. Since the hypothesis underlying this method is that the inherent variation is visible given the limited number of replicates and inter-sample variability, the null hypothesis is that there is no inherent variation in the data, and all the variability is due to noise. Thus, a p-value will be established by generating a synthetic population of signals with the same number of replicates per time point as the microarray dataset. The synthetic signals that form the basis of comparison will have the same dimensions as the original dataset, with the same number of replicates and time points, but the labels will be randomly permuted. 
The same LOOCV cross validation operation will be run on the synthetic data and the r-values computed for the synthetic signals. If a given dataset had 6 time points with 4 replicates each and the desired p-value was P < .05, 20 synthetic signals would be generated, each with 6 time points and 4 replicates, and the LOOCV operation performed on this synthetic set. Within this population of signals, the highest level of correlation shown by a randomly generated signal was .55. Thus, the selection of high quality genes at P < .05 would entail selecting genes whose minimum correlation between sub-sampled signals was greater than .55. While the proposed algorithm is applicable to biological signals in general, the data utilized here were obtained from temporal gene expression experiments via microarrays. These experiments are advantageous because they present a wide range of signal to noise ratios and numbers of replicates, as well as the ease with which it is possible to evaluate the biological relevance of the results in a quantitative fashion. The data used are all publicly available via the GEO database. 
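The label-permutation null described here can be sketched as follows. This is an illustrative simplification, not the authors' code: the quality score below correlates only the two extreme sub-sampled signals (drop-max everywhere vs. drop-min everywhere) rather than iterating over all masks, and all names and defaults are ours:

```python
import random
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def quality(replicates):
    # Simplified stand-in for the full LOOCV score: correlate the sub-sampled
    # signal dropping the max at each time point with the one dropping the min.
    lo = [mean(sorted(r)[:-1]) for r in replicates]
    hi = [mean(sorted(r)[1:]) for r in replicates]
    return pearson(lo, hi)

def permutation_threshold(replicates, n_perm=200, alpha=0.05, seed=0):
    # Null model: pool all measurements, reassign them randomly to time
    # points, and record each synthetic signal's quality score; the
    # (1 - alpha) quantile of these null scores is the selection cutoff.
    rng = random.Random(seed)
    pooled = [v for reps in replicates for v in reps]
    k = len(replicates[0])
    null_scores = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null_scores.append(quality([pooled[i * k:(i + 1) * k]
                                    for i in range(len(replicates))]))
    null_scores.sort()
    return null_scores[max(0, int((1 - alpha) * n_perm) - 1)]
```

A gene is then retained only if its observed quality score exceeds the threshold returned for its dataset's dimensions.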
Based upon our two initial guiding principles, that a lower coefficient of variance should lead to more accurately measured signals and that increasing the number of replicates should likewise lead to more accurately measured signals, we hypothesize that the burn dataset (GDS599) should show the greatest difference in clustering performance, due to its few replicates and its older experimental platform, while the dataset consisting of the chronic administration of corticosteroids should show the least difference, because of improvements in the microarray itself as well as the greater number of replicates. While the motivation behind utilizing this filtering technique was to improve the confidence one has in the mathematical models derived from temporal biological data, such confidence is difficult to quantify without conducting additional experiments to validate a generated mathematical model. However, because the data used in the evaluation consist of high throughput gene expression profiles, a surrogate metric can be used. For temporal gene expression profiles, one of the primary hypotheses is that groups of genes with similar temporal progressions of their gene expression profiles will have similar functionalities. This assessment quantifies the effect high quality signals have upon clustering. While there are numerous methods to assess the quality of a given clustering result, such as external clustering similarity, we have elected to assess ontology enrichment within each cluster via the hypergeometric distribution. The ontologies themselves are obtained from the Affymetrix annotations provided with each individual microarray. The hypergeometric distribution essentially calculates the probability that a subset of genes has been selected from an overall population. To evaluate the overall quality of a given enrichment, the metric will be the percentage of identified ontologies which have been selected as enriched. 
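The hypergeometric enrichment calculation can be written compactly. A minimal sketch (the function name and argument order are ours): given N genes on the array, K of which carry an ontology term, the p-value for observing k or more term-carrying genes in a cluster of n is the hypergeometric upper tail:

```python
from math import comb

def enrichment_p(N, K, n, k):
    """Hypergeometric upper tail P(X >= k): the probability of drawing at
    least k genes annotated with a given ontology term when sampling a
    cluster of n genes from an array of N genes, K of which carry the term."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

For example, a cluster of 100 genes containing 5 of the array's 50 term-carrying genes out of 10000 would be scored as `enrichment_p(10000, 50, 100, 5)`; an ontology is counted as enriched when this p-value falls below the chosen cutoff.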
It is hypothesized that if the clustering is more reliable, then there should be a lower number of ontologies which have been spuriously included due to ambiguities within the signals. In our case, we have elected to use the clustering package cluto. Given that the initial claim of the manuscript is that it is important to select not only for genes which show significant differential expression, but also for genes whose expression profiles are accurately measured, we have elected to compare the performance of the proposed LOOCV algorithm vs. EDGE, a standard method for selecting genes based upon differential expression. One of the difficulties with this assessment is that the evaluation of gene enrichment is dependent upon the number of clusters into which the data is partitioned. Determining the number of clusters is itself an open area of research, and thus it is difficult to determine the proper number of clusters present within the data. Therefore, instead of focusing upon the number of clusters present in the data, the evaluation will be conducted over a continuum of different cluster numbers. It is hypothesized that if the filtering has been successful, then the percentage of significantly enriched ontologies will be greater for any given cluster number. IPA designed the study; EY performed the analysis and conducted all calculations. All authors read and approved the final manuscript."}
+{"text": "In the limit that mRNAs are produced in short-lived bursts, binary thresholding of the FISH data provides a robust way of reconstructing dynamics. In this regime, prior knowledge of the type of dynamics \u2013 cycle versus switch \u2013 is generally required, and additional constraints, e.g., from triplet FISH measurements, may also be needed to fully constrain all parameters. As a demonstration, we apply the thresholding method to RNA FISH data obtained from single, unsynchronized cells of Saccharomyces cerevisiae. Our results support the existence of metabolic cycles and provide an estimate of global gene-expression noise. The approach to FISH data presented here can be applied in general to reconstruct dynamics from snapshots of pairs of correlated quantities including, for example, protein concentrations obtained from immunofluorescence assays. Recently, a novel approach has been developed to study gene expression in single cells with high time resolution using RNA Fluorescent In Situ Hybridization (FISH). The technique allows individual mRNAs to be counted with high accuracy in wild-type cells, but requires cells to be fixed; thus, each cell provides only a \u201csnapshot\u201d of gene expression. Here we show how and when RNA FISH data on pairs of genes can be used to reconstruct real-time dynamics from a collection of such snapshots. Using maximum-likelihood parameter estimation on synthetically generated, noisy FISH data, we show that dynamical programs of gene expression, such as cycles, can be reconstructed. RNA FISH is a single-cell technique that offers both high time-resolution and precise quantification of mRNA molecules, but requires fixed cells. We have explored how, when, and with what prior information FISH snapshots of pairs of genes can be used to accurately reconstruct gene-expression dynamics. The technique can be readily implemented, and is broadly applicable from bacteria to mammals. 
We lay out a principled and practical approach to extracting biological information from RNA FISH data to reveal new information about the dynamics of living organisms. Cells are well known to respond to external conditions by altering their gene expression. In recent years, many examples of altered gene expression programs have been revealed by population level studies, including microarray studies of yeast, mammalian, and bacterial cells. But many cells are also known to alter gene expression in ways that are heterogeneous across a cell population. Examples include the acquisition of competence for DNA uptake in Bacillus subtilis, induction of the lac operon in Escherichia coli depending on \u201cmemory\u201d of previous exposure to lactose and the presence of lactose permease, and the response of Saccharomyces cerevisiae (budding yeast) temperature-sensitive mutants to a shift to non-permissive temperature depending on the position of cells in their division cycle. Since population level studies are not well suited to reveal heterogeneous behavior, how can heterogeneous changes in gene expression be studied and quantified? Fluorescent reporter proteins have been used successfully to report on expression of a small number of genes, either via FACS analysis or fluorescence microscopy. However, the use of fluorescent reporters is generally limited to highly expressed genes, with time resolution severely limited by fluorescent protein maturation and the low turnover rates of the fluorescent marker. Moreover, construction of fluorescent reporters can be laborious and impractical for studies of large-scale transcriptional responses. 
A promising approach that has recently been used to study gene expression on a cell-by-cell basis is Fluorescence In Situ Hybridization (FISH), which allows several mRNA species to be measured simultaneously by employing probes with different fluorescent spectra. Cells of Saccharomyces cerevisiae have been synchronized in chemostats, and there is evidence that Saccharomyces undergoes metabolic oscillations outside the chemostat as well, as in the data of Silverman et al. Here, we present an approach to extracting information about the dynamics of gene expression from FISH data by considering correlations of expression between pairs of genes. Specifically, we show how Maximum Likelihood Estimation (MLE) can be used to reconstruct dynamics from such correlations. Importantly, the method we present here for inferring intracellular dynamics from data in the form of \u201csnapshots\u201d is quite general, relying only on measurements of pairs of quantities in single cells, with no requirement for exact counts. The method can therefore be applied with little modification in other contexts such as quantitative immunofluorescence or single-cell sequencing studies. We presume that production of mRNA transcripts is a stochastic process. Transcription factors bind to DNA at random times, with a probability that depends on other signals, and which can therefore also vary with time. Binding of one or more transcriptional activators, or unbinding of repressors, typically leads to production of a \u201cburst\u201d of mRNA transcripts. One can distinguish three regimes. In the first, many bursts contribute to the mRNA level, which therefore varies essentially continuously, e.g. over the cell cycle; we refer to this case as the continuous regime. The second regime is the opposite limit, where mRNA production is highly intermittent. Since the probabilities of the observed data do not sum to one over all models (i.e. sets of parameters), they are called \u201clikelihoods\u201d, and hence this approach to parameter inference is called Maximum Likelihood Estimation (MLE). 
Below, we demonstrate the practicality of the MLE approach using synthetically generated FISH data in both the continuous and bursty mRNA regimes. For each regime of mRNA expression, our approach consists of defining a class of possible dynamics, and choosing the one for which the observed data is most likely. Specifically, for a given set of model parameters, we calculate the probability of the observed data, and then ask for the particular set of parameters that maximizes this probability. We first consider the continuous regime, where many bursts typically contribute to the instantaneous mRNA number. To demonstrate the MLE algorithm, we reconstruct the dynamics of gene expression using synthetic FISH data for which the underlying dynamics is known, focusing on cyclic dynamics. We generate synthetic FISH data by first choosing the parameters in Eq. (4) for the oscillating mRNA levels. To test the accuracy of reconstruction of mRNA dynamics using our MLE approach, we generated a large number of parameter sets, each defining a trajectory of mean mRNA levels, and for each parameter set generated synthetic FISH data and then applied MLE to reconstruct the true dynamics. To quantify the accuracy of the MLE algorithm, we computed the reconstruction error. We now consider the bursty regime, where a cell will typically either have few (or no) mRNAs of a particular type, or the mRNAs present will come from a single recent burst of transcription. In this limit, the information provided by FISH is essentially binary: either mRNAs for a particular gene are present at significant levels, indicating a recent burst, or they are not. Additional information, such as triplet FISH, i.e. simultaneous measurement of three different mRNA types, leads to additional constraining terms. 
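As an illustration of the MLE idea in the continuous regime, here is a minimal Python sketch (not the authors' code: the gene parameters, cell count, and the grid marginalization over each cell's unknown cycle phase are our own invented simplifications). It reconstructs the relative phase between two cyclically expressed genes from simulated Poisson "snapshots". Note that with a single harmonic and unknown cycle direction, dphi and 2\u03c0 \u2212 dphi give identical likelihoods, echoing the point that some ambiguities require extra constraints.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

# Invented "true" dynamics: mean mRNA of two genes oscillates over the cycle.
a1, b1, a2, b2 = 20.0, 10.0, 15.0, 8.0
true_dphi = 1.3                       # relative phase between the genes (radians)

# Each fixed cell is a snapshot at an unknown, uniformly distributed cycle phase.
n_cells = 1000
theta = rng.uniform(0, 2 * np.pi, n_cells)
m1 = rng.poisson(a1 + b1 * np.cos(theta))
m2 = rng.poisson(a2 + b2 * np.cos(theta + true_dphi))

# Precompute log(k!) once per observed count (Poisson log-pmf normalization).
lgf1 = np.array([lgamma(k + 1.0) for k in m1])
lgf2 = np.array([lgamma(k + 1.0) for k in m2])

def loglik(dphi, grid=64):
    """Log-likelihood of all snapshots, marginalizing each cell's phase
    over a uniform grid (amplitudes assumed known in this toy example)."""
    phases = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    mu1 = a1 + b1 * np.cos(phases)
    mu2 = a2 + b2 * np.cos(phases + dphi)
    ll = (m1[:, None] * np.log(mu1) - mu1 - lgf1[:, None]
          + m2[:, None] * np.log(mu2) - mu2 - lgf2[:, None])
    return np.logaddexp.reduce(ll, axis=1).sum() - n_cells * np.log(grid)

# Grid-search MLE for the relative phase.
cand = np.linspace(0, 2 * np.pi, 120, endpoint=False)
est_dphi = cand[np.argmax([loglik(d) for d in cand])]
```

The estimate lands near either true_dphi or its reflection 2\u03c0 \u2212 true_dphi; breaking that tie would require extra information, e.g. a third gene.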
For simplicity, let us now consider only the lowest harmonic, as in the previous section. An important consideration in analyzing FISH data is that overall transcription rates may vary from cell to cell; indeed, measurements of gene-expression noise in single yeast cells at the protein level reveal such global variation. We now consider a model where the expression pattern can switch stochastically among a set of discrete states. For either cyclic or switching dynamics, maximum likelihood parameter estimation in the regime of bursty mRNA production requires the following steps: (1) estimating the mean burst probability and covariance from the FISH data, (2) determining the uncertainty of these estimates, and (3) obtaining the parameters for which the observed data is most likely. Taking an average over the FISH data provides an estimate of the mean burst probabilities. To estimate the uncertainty in the covariances, we first note that the true variance can itself only be estimated, e.g. from triplet FISH data. As discussed above, for bursty mRNA production the means and covariances alone cannot distinguish cyclic from switching dynamics. However, if one has prior evidence that gene expression is cyclic, maximum likelihood estimation can be usefully employed to reconstruct the dynamics (e.g. interval durations or branching ratios), and the MLE algorithm works well in this setting. A stochastic switch between 2 states implies a covariance matrix of rank 1, and therefore can be distinguished from cyclic dynamics, which leads to a minimum rank of 2. Still, one piece of additional information is required to reconstruct the dynamics. For example, it is sufficient to know the expression level of a single gene in one state. Here, we instead assume that the probability of each state can be obtained, e.g. via Expectation Maximization (EM). In principle, with enough FISH data it should be possible to reconstruct more than just the probability of observing a burst. 
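The first two steps of this recipe \u2013 binarizing the counts and estimating mean burst probabilities, pairwise covariances, and their uncertainties \u2013 can be sketched in Python on synthetic data (all counts, burst sizes, and probabilities below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Hidden burst state shared by genes 0 and 1; gene 2 bursts independently.
s = rng.random(n) < 0.4
counts = np.column_stack([
    rng.poisson(np.where(s, 20.0, 0.2)),                    # gene 0: bursts with s
    rng.poisson(np.where(s, 15.0, 0.2)),                    # gene 1: co-bursts
    rng.poisson(np.where(rng.random(n) < 0.5, 18.0, 0.2)),  # gene 2: independent
])

# Step (1): binarize well above 1 transcript to suppress promoter leakage,
# then estimate mean burst probabilities and pairwise covariances.
threshold = 5
b = counts >= threshold                      # (n_cells, n_genes) booleans
p = b.mean(axis=0)                           # mean burst probabilities
C = np.cov(b.astype(float), rowvar=False)    # covariance matrix of burst indicators

# Step (2): uncertainty of each burst-probability estimate (binomial SE).
se = np.sqrt(p * (1 - p) / n)
```

The co-bursting pair shows a large positive covariance C[0, 1], while C[0, 2] stays near zero; step (3) would then feed p and C into the likelihood maximization.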
For example, the entire distribution of mRNAs of each type in each switching state could be obtained using MLE. In the regime of bursty mRNA production, all of the information from FISH is contained in the mean burst probabilities and the covariance matrix, suggesting that Principal Component Analysis (PCA) could be usefully applied. For example, for a 2-state switch the covariance matrix has rank 1; thus, according to Eq. (15), PCA can be performed by diagonalizing the covariance matrix. In practice, elements of the PCA and MLE approaches can be usefully combined, with the main utility of PCA lying in this diagonalization. Saccharomyces cerevisiae grown in chemostats can undergo synchronized metabolic oscillations, as demonstrated in recent years by McKnight and coworkers. To analyze the dynamics, we first binarized the FISH data. We assumed that the dynamics is cyclic and considered the expansion of Eq. 6 up to the first harmonic. Such a model has 74 independent parameters for 25 genes. Moreover, the number of data points per pair of genes varies from 175 to 16032, with only 29 pairs having more than 2000 data points. Thus some of the correlations are well-characterized, but others are not. If only the 29 gene pairs with more than 2000 data points are considered, even a single-harmonic model is under-constrained. To circumvent this problem, we were guided by the observation, apparent from the data, that the genes fall into a small number of clusters. Next, the likelihood of all the observed FISH correlations was maximized with respect to the phases. To quantify our results statistically, we defined a measure for each cluster; its value (55%) is significant, but considerably smaller than the typical variation during a cycle seen in the chemostat studies. In general, Maximum Likelihood Estimation (MLE) requires finding the set of model parameters for which the observed data are most likely. 
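The rank argument can be checked numerically. In this hedged sketch (our own construction, not the paper's Eq. (15)), the state-averaged burst-probability covariance of a 2-state switch is rank 1 by construction, while a single-harmonic cycle gives rank 2:

```python
import numpy as np

G = 6                                   # number of genes (invented)
rng = np.random.default_rng(2)

# 2-state switch: burst probabilities pA, pB; cell in state A with probability q.
q = 0.3
pA, pB = rng.uniform(0.1, 0.9, G), rng.uniform(0.1, 0.9, G)
d = pA - pB
C_switch = q * (1 - q) * np.outer(d, d)     # covariance over states: rank 1

# Cycle (lowest harmonic): p_i(t) = a_i + b_i*cos(w t + phi_i), t uniform.
# Averaging cos(w t + phi_i)*cos(w t + phi_j) over t gives cos(phi_i - phi_j)/2,
# so C = (u u^T + v v^T)/2 with u = b*cos(phi), v = b*sin(phi): rank 2.
b = rng.uniform(0.05, 0.3, G)
phi = rng.uniform(0, 2 * np.pi, G)
C_cycle = 0.5 * b[:, None] * b[None, :] * np.cos(phi[:, None] - phi[None, :])

def rank(C, tol=1e-8):
    """Numerical rank via eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(C) > tol))

r_switch, r_cycle = rank(C_switch), rank(C_cycle)
```

Diagonalizing the covariance (PCA) thus reads off the minimal number of underlying components, which is what distinguishes a 2-state switch from a cycle.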
Finding the global maximum in the space of model parameters can be a challenging task, particularly as there may be many local maxima in which a search algorithm can get stuck. For synthetic FISH data in the regime of continuous mRNA production, we found that such local maxima occurred frequently. To find the global maximum in the continuous regime, we developed a heuristic algorithm that worked very well in practice to reconstruct simple cycles. One approach is to consider various initial parameter values, and to use a steepest-descent algorithm to find the local maximum of the likelihood. Then the global maximum (with the highest likelihood) could be chosen among the different solutions. However, in practice this procedure can be very time-consuming if initial conditions are chosen randomly. Here we propose two approaches to first compute estimates of the parameters, and then use these estimates to initiate the optimization protocol. In the first approach, we estimated the mean expression levels and amplitudes directly from the data and assigned initial phases accordingly. In the second approach, we tried to approximate the relative phases rather than their absolute values. Results of optimization using approaches (a) and (b) to set initial parameter values show that both work well in practice. The ability to count mRNA molecules in single cells of organisms such as Saccharomyces cerevisiae by Fluorescence In Situ Hybridization (FISH) opens the door to reconstructing gene-expression dynamics, but in the bursty regime reconstruction generally requires prior knowledge of the type of dynamics, e.g. cycle vs. switch, and, even so, will in some cases require additional inputs, such as triplet FISH data. In applying our approach, how should one choose among models to reconstruct gene dynamics? For example, when is it better to use multiple harmonics instead of a single harmonic to model a cycle? The answer depends on the type of data. We discuss first the regime of continuous mRNA production. For this case, a standard and reliable way to choose among models when fitting data is \u201cleave-one-out\u201d validation, which both rewards a good fit and punishes overfitting. 
In the leave-one-out approach, a model is selected and its parameters are optimized on the entire data set, but with one data point left out. The resulting parameterized model is then used to fit the neglected data point. The average fitting error, taken over all possible left-out data points, is a robust measure of the quality of the model. Among competing models, the one that minimizes this error can be selected as the better choice. In the regime of continuous mRNA production, leave-one-out validation can be applied within the MLE framework by using the log(likelihood) of the left-out data point in place of the fitting error. Among competing models, the one with the largest average log(likelihood) is the best choice.In contrast, finding the \u201cbest\u201d model for data in the bursty mRNA regime is generally an under-constrained problem. We showed explicitly that for many cases it is impossible in principle to distinguish among different types of models, or even to find a unique best set of parameters for a given model. Intuitively, reduction of bursty FISH data to pairwise covariances means that even as the number of FISH data points approaches infinity, the number of model constraints stays finite. So, for bursty FISH data inference alone cannot guide one in choosing the model, and one must also use common sense. Clearly, prior knowledge of the system under study should be used in selecting a model. In addition, a simple rule is that one should use models that are sufficiently parsimonious in parameters not to have degenerate solutions. For example, in analyzing FISH data on metabolic cycles, we chose the one-harmonic model because there were not enough low-noise covariances to constrain a two-harmonic model. More generally, it is advisable to choose a model with significantly more well-constrained data than parameters. 
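A minimal Python sketch of leave-one-out validation with held-out log(likelihood), here on a toy polynomial-regression stand-in rather than the paper's cycle models (the true curve, noise level, and candidate degrees are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 30)
sigma = 0.3                                    # assumed known noise level
y = 1.0 + 2.0 * t - 5.0 * t**2 + rng.normal(0, sigma, t.size)   # true degree: 2

def loo_loglik(degree):
    """Average Gaussian log-likelihood of each left-out point under a
    polynomial fit to the remaining points (one fit per held-out point)."""
    ll = []
    for i in range(t.size):
        mask = np.arange(t.size) != i
        coef = np.polyfit(t[mask], y[mask], degree)
        resid = y[i] - np.polyval(coef, t[i])
        ll.append(-0.5 * (resid / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2))
    return float(np.mean(ll))

scores = {d: loo_loglik(d) for d in range(1, 10)}
best = max(scores, key=scores.get)   # largest average held-out log(likelihood)
```

An underfitting degree-1 model and an overfitting degree-9 model both score worse than the true degree, which is exactly the reward/punish trade-off described above.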
If the model is barely constrained, the peak of likelihood will generally be close to flat in some directions in parameter space and the reconstruction will be poor. Reconstruction of gene-expression dynamics from FISH data presents multiple practical challenges. One important issue is noise in the measurement of mRNA levels. For the regime of continuous mRNA production, we have shown that sufficient data can compensate for both the noise inherent in gene expression and the noise arising from uncertainty in measurement. For the regime of bursty mRNA production, \u201cbinarizing\u201d the data into the presence or absence of a significant number of mRNA molecules substantially reduces the impact of measurement noise. A practical question here is the best threshold to use for binarizing the data. In many cases, the dynamics will be best reconstructed by setting the threshold well above 1 mRNA transcript; for example, in treating the data for metabolic cycles we chose the median expression level for each gene as its threshold. A higher threshold is less sensitive to measurement noise and to occasional transcripts produced by promoter leakage (better identification of true bursts, e.g. preventing blurring of boundaries between switching states), and a higher threshold also allows finer time resolution, as a given burst will remain above threshold for a shorter time. However, a higher threshold reduces the number of coinciding bursts in the data, requiring more overall FISH observations. An important related issue is the possibility of correlated noise in the transcription of different genes; an example of such noise is the observed global correlation among transcription rates in yeast. Another practical issue in reconstructing gene-expression dynamics from FISH measurements is that data may come in mixed forms. While these and other practical issues are important to consider, our successful reconstruction of yeast metabolic cycles using the FISH data of Silverman et al. demonstrates the practicality of the approach. The many advantages of FISH \u2013 absolute quantification, high time resolution, use of wild-type cells, ability to simultaneously measure multiple mRNA types, and broad application across species from bacteria to mammals \u2013 argue for its broad use in studying gene-expression dynamics. Figure S1: Average cluster activities Q_j(t), as defined in the text, taking into account the presence of global transcriptional noise. Table S1: Rank of covariance matrix and required number of additional constraints (obtained from triplet-FISH measurements or other sources) necessary for complete parameter inference in the regime of bursty mRNA production, for both cyclic and stochastic switching dynamics."}
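The time-resolution point can be made concrete with a small sketch (deterministic exponential decay after a single burst; the burst size and mRNA lifetime are arbitrary invented values): a higher binarization threshold keeps the signal "on" for a shorter window, analytically tau * log(m0 / threshold).

```python
import numpy as np

m0, tau = 40.0, 1.0                  # burst size and mRNA lifetime (arbitrary units)
t = np.linspace(0, 6, 601)
m = m0 * np.exp(-t / tau)            # mean decay after a burst at t = 0

def time_above(threshold):
    """Duration for which the post-burst mRNA level exceeds the threshold
    (analytically tau * log(m0 / threshold))."""
    above = m >= threshold
    return float(t[above][-1]) if above.any() else 0.0

t_low, t_high = time_above(2.0), time_above(20.0)   # low vs. high threshold
```

The higher threshold gives a much shorter "on" window (finer time resolution), at the cost of fewer coinciding above-threshold bursts across genes.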
+{"text": "Aspergillus fumigatus plays a critical role in mammalian and avian infections. Thus, the identification of its adaptation mechanism to higher temperature is very important for an efficient anti-fungal drug development as well as fundamental understanding of its pathogenesis. We explored the temporal transcription regulation structure of this pathogenic fungus under heat shock conditions using the time series microarray data reported by Nierman et al. .The thermotolerance of A. fumigatus shows that the heat shock proteins are strongly negatively associated with central metabolic pathway genes such as the tricarboxylic acid cycle (TCA cycle) and carbohydrate metabolism. It was 60 min and 120 min, respectively, after the growth temperature changes from 30\u00b0C to 37\u00b0C and 48\u00b0C that some of genes in TCA cycle were started to be upregulated. In these points, most of heat shock proteins showed lowest expression level after heat shocks. Among the heat shock proteins, the HSP30 (AFU6G06470), a single integral plasma membrane heat shock protein, presented most active role in transcription regulation structure in both heat shock conditions of 37\u00b0C and 48\u00b0C. The metabolic genes associated with multiple genes in the gene regulation network showed a tendency to have opposite expression patterns of heat shock proteins. The role of those metabolic genes was second regulator in the coherent feed-forward loop type of regulation structure having heat shock protein as its first regulator. This type of regulation structure might be very advantageous for the thermal adaptation of A. fumigatus under heat shock because a small amount of heat shock proteins can rapidly magnify their regulation effect on target genes. However, the coherent feed-forward loop type of regulation of heat shock proteins with metabolic genes became less frequent with increasing temperature. 
This might be the reason for the dramatic increase in the expression of heat shock proteins and in the number of heat shock response genes at heat shock of 48\u00b0C. We summarize the estimated transcription regulation structure of A. fumigatus obtained by a state space model with time series microarray data. We suggest for the first time that heat shock proteins might efficiently regulate metabolic genes using the coherent feed-forward loop type of regulation structure. This type of regulation structure would also be efficient for adjustment to other stresses requiring a rapid change of metabolic mode, as well as for thermal adaptation. We systemically analysed the thermal adaptation mechanism of A. fumigatus. Aspergillus fumigatus is both a primary and opportunistic pathogen as well as a major allergen associated with severe asthma and sinusitis. The importance of A. fumigatus in mammalian and avian infection depends basically on its thermotolerance; therefore, identifying its thermal adaptation mechanism is important. Nierman et al. examined the genome-wide transcriptional response of this fungus to heat shock. They suggested that host temperature alone (37\u00b0C) is insufficient to turn on many virulence-related genes because no known genes implicated in pathogenicity showed higher expression at 37\u00b0C than at 48\u00b0C. Lamarre et al. reported another genome-wide expression study of this fungus at 37\u00b0C with conidia of different ages. Thus, it remains unclear how the genes of A. fumigatus are transcriptionally regulated for its thermal adaptation. Several methods have been proposed for modelling gene networks, including Boolean networks and Bayesian networks. To overcome the limitation of microarray data consisting of a very short time series, i.e., 6 time points, Hirose et al. proposed a state space model applicable in such settings. The time series microarray data reported by Nierman et al. cover two heat shock conditions, i.e., shifts from 30\u00b0C to 37\u00b0C and 48\u00b0C, respectively. To analyze the heat shock response of A. fumigatus, the genes with over 11% of missing values in total observations were excluded (see Methods section for details). Thus, 4027 and 4771 genes out of the 9,516 genes on the array were first chosen in this study for heat shock of 37\u00b0C and 48\u00b0C, respectively. From these two gene lists, we selected the significantly differentially expressed genes showing over two-fold change in expression level and a p-value below 0.05 in a t-test for at least one time point. Here, we call these genes heat shock response genes. Finally, the number of heat shock response genes became 726 and 2200 for heat shock of 37\u00b0C and 48\u00b0C, respectively. Our study focused on these heat shock response genes. Eighty-two percent (596 genes) of the heat shock response genes at 37\u00b0C were also observed in the set of heat shock response genes at 48\u00b0C. Almost half of these common genes were involved in central metabolic pathways such as carbohydrate, amino acid and lipid metabolism. Each module is assumed to follow the states of a first-order Markov chain, and a gene may belong to more than one module. The regulation relationships between sub-modules are clearer than those between modules. The assignment of heat shock proteins to each sub-module shows good agreement with their expression patterns (see Methods section for details). The sub-module M1 containing heat shock proteins shows a strong positive association with module M2, which includes many pathway genes, in heat shock of 37\u00b0C, while the module M2 including many pathway genes in heat shock of 48\u00b0C shows strong positive self-regulation. This indicates that the regulation of metabolic genes by heat shock proteins might be stronger at heat shock of 37\u00b0C than at 48\u00b0C. The regulation relationships among modules are shown in the lower part of the figure. A. fumigatus under heat shock at 48\u00b0C might thus have a less dense metabolic network than under heat shock at 37\u00b0C. In our model, each element of the autoregressive coefficient matrix stands for the interaction strength of the corresponding gene pair (see Methods section for details). Thus, the higher its value, the stronger the interaction of the gene pair. We can construct a sparse biological network by choosing a threshold. In this study, we chose a threshold of 0.015 from the region showing almost constant edge density in both heat shock conditions of 37\u00b0C and 48\u00b0C. Nodes 1465 and 1808 appeared in both networks of 37\u00b0C and 48\u00b0C, while node 293 was observed only in the network of 48\u00b0C. The number of genes associated with node 1808, a single integral plasma membrane heat shock protein, dramatically increased in the network of 48\u00b0C. This suggests that node 1808 might have a more critical role at high temperature than at low temperature. We estimated the transcription regulation structure by mapping the identified transcriptional modules in the above section onto the gene-level networks via the first-order vector autoregression form (see Methods section for details). The edge density in the network of heat shock at 37\u00b0C is always higher than that at 48\u00b0C over the threshold range surveyed. Node 1808 in the network of 48\u00b0C shows possible regulation of trehalose production during heat shock; trehalose is known to accumulate to high levels under stress conditions. From the network, we can estimate possible regulation between heat shock response genes. For example, the regulation by node 561 in the network of 37\u00b0C towards other genes may be the result of indirect regulation through hidden intermediate genes. Unlike the network of 37\u00b0C, many transcription factors appeared in the network of 48\u00b0C, including nodes 909, 1060, 1208 and 1957. 
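The two-fold / t-test selection rule for heat shock response genes described above can be sketched in Python (the synthetic log2-ratio data and the single induced gene are invented; the two-sided p < 0.05 test with n_rep \u2212 1 = 3 degrees of freedom is implemented via the Student's t critical value |t| > 3.182 to avoid a SciPy dependency):

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes, n_time, n_rep = 200, 6, 4
T_CRIT = 3.182   # two-sided 5% critical value of Student's t with 3 df

# Synthetic log2 expression ratios (heat shock vs. 30C reference).
data = rng.normal(0, 0.3, (n_genes, n_time, n_rep))
data[0, 3:, :] += 3.0    # gene 0: 8-fold induced (log2(8) = 3) at late time points

def is_responder(g):
    """>2-fold change AND significant one-sample t-test (p < 0.05, two-sided)
    for at least one time point."""
    for tp in range(n_time):
        x = data[g, tp]
        mean = x.mean()
        tstat = mean / (x.std(ddof=1) / np.sqrt(n_rep))
        if 2.0 ** abs(mean) > 2.0 and abs(tstat) > T_CRIT:
            return True
    return False

responders = [g for g in range(n_genes) if is_responder(g)]
```

Requiring both a fold-change and a significance criterion is what keeps null genes out: a gene must be both strongly and reproducibly shifted at some time point.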
The absence of the heat shock transcription factor in the networks is due to its interaction values with other genes being lower than the threshold. Under both heat shock conditions, the heat shock transcription factor (AFU5G01900) showed upregulation, like most of the heat shock proteins, during the time course. AFU6G02280 (Asp F3) also appeared among the heat shock response genes and was upregulated at all time points except 120 min. Gene expression was examined at six time points, 0, 15, 30, 60, 120 and 180 min, after the shift of growth temperature from 30\u00b0C to 37\u00b0C and 48\u00b0C, respectively. Two pairs of dye-swap arrays, i.e., four arrays, were used for each time point. To remove biases that arise from the microarrays, we conducted a two-stage normalization, i.e., dye-swap normalization and lowess normalization; after dye-swap normalization, each time point has two array values. We evaluated BIC across state dimensions k = 1 to 9. However, the BIC curves were monotonically decreasing with increasing state dimension, which makes it difficult to determine the optimal dimension. Such a tendency becomes prominent when the number of samples is much smaller than the dimension of the data. Thus, we directly surveyed the regulation relationships between modules or sub-modules for each dimension using heat maps, such as the upper part of the figure, and chose k = 2 because the regulation relationships between sub-modules were most clearly shown at this dimension. This implies implicitly that the time series microarray data of heat shock response genes might be divided into four groups, because each component of the state variable is divided into two sub-modules which usually have opposite patterns. A clustering analysis using the R package mclust showed that the optimal cluster number for both datasets of 37\u00b0C and 48\u00b0C is four, where the highest BIC value is observed. In the state space model, vn and wn ~ Np are the system noise and the observation noise, respectively. 
The initial state vector x0 is assumed to be a Gaussian random vector with mean vector \u03bc0 and covariance matrix \u03a30. For the inference of the gene regulatory system underlying the observation data, the parameters \u03b8 = {H, F, Q, R, \u03bc0} \u2208 \u0398 were estimated by the EM algorithm with a fixed k. For the identifiability of the estimated model, we imposed constraints on the parameters, including an arbitrary sign on the elements of the first row of H; under these constraints the parameters reduce to \u03b8 = {H, F, R, \u03bc0}. The estimation of these parameters is limited when the length of the time series is very short, e.g., less than 10. This limitation can be overcome if replicates of time-course gene expression profiles are available. The dataset in each heat shock condition consists of six time points with six replicates (see pre-processing of time series microarray data in Methods section). Thus, we incorporate all replicates into the parameter estimation. For this, we assume that each of the replicated time-courses is independently and identically distributed according to the same state space model, with the l-th replicate and the corresponding state vector at time n denoted accordingly; the total number of replicates is denoted by m. Given this generative model, the parameter estimation amounts to maximizing the likelihood function l(\u03b8) over \u03b8. Noise-removed gene expression vectors are obtained in Rk with the projection matrix D \u2208 Rk \u00d7 p by transforming the observation model (equation 4) under the above constraints. If the state dimension k is less than p, the dimensionality of the noise-removed gene expression vectors is reduced by the projection matrix D, and this process discovers k modules of genes that are relevant to the temporal structure of gene expressions. 
We can choose groups of genes, i.e., modules, using the projection matrix D, whose element dij represents the contribution of the jth gene to the ith coordinate of the state variable. The magnitude |dij| for a fixed i can be used for selecting the genes belonging to the ith module from the gene list. That is, genes with high contribution to the ith coordinate of the state variable are top-ranked and selected as the ith module. A module can also be divided into two sub-modules by the signs of dij, and sub-module membership is determined using an approach similar to the module determination. The dynamic interactions between modules can be obtained through the system model (equation 3), because it describes the effects between coordinates of the state variable, i.e., F. For the estimation of the gene regulatory network using the parameters obtained above, we converted the SSM (equations 3 and 4) to a parsimonious representation of the first-order vector autoregressive form. During the parameter estimation process, the reduced-rank data (equation 6) are constructed such that they likely follow the first-order Markov process of equation 3. The autoregressive coefficient matrix is given by \u03a8 \u2261 DT\u039bFD, and the gene regulatory network was estimated based on \u03a8, which represents the magnitude of interactions between genes. JHD conceived the study, carried out data analysis, and wrote the manuscript. RY provided his valuable experience in SSM and reviewed the manuscript. SM supervised the application of SSM to the identification of the transcriptional network. All authors read and approved the final manuscript. The thermal response of each metabolic pathway in A. fumigatus: this document contains the number of heat shock response genes appearing in each metabolic pathway at 15, 30, 60, 120 and 180 min after heat shock. Average gene expression profiles of heat shock response genes. 
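A hedged Python sketch of this module/network recipe (random matrices rather than fitted ones, and we use a simplified Psi = D^T F D, dropping the paper's scaling matrix \u039b): rank genes by |dij| to form a module, split sub-modules by sign, and threshold |Psi| to obtain a sparse gene-gene network.

```python
import numpy as np

rng = np.random.default_rng(5)
p, k = 12, 2                           # genes and state dimension (invented)
D = rng.normal(0, 1, (k, p))           # projection matrix (state -> genes), stand-in
F = np.array([[0.8, 0.3],
              [-0.2, 0.7]])            # state transition matrix, stand-in

# Module 0 = genes with the largest |D[0, j]| (top 5 here);
# sub-modules by the sign of the contribution.
top = 5
module0 = np.argsort(-np.abs(D[0]))[:top]
sub_pos = [j for j in module0 if D[0, j] > 0]
sub_neg = [j for j in module0 if D[0, j] < 0]

# Gene-level interaction strengths via the first-order VAR form (simplified).
Psi = D.T @ F @ D
net = np.abs(Psi) > np.percentile(np.abs(Psi), 80)   # keep the strongest ~20% edges
```

In the paper the threshold (0.015) was chosen from the region of nearly constant edge density; the percentile cut here is just an illustrative substitute.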
This Excel file contains average gene expression profiles of heat shock response genes in the two heat shock conditions of 37\u00b0C and 48\u00b0C. The node index and its corresponding gene: this Excel file contains the gene information corresponding to each node in the network estimated by the state space model. BIC values for model-based clustering of gene expression profiles: this Excel file contains the BIC values obtained by model-based clustering with the R package mclust for the two heat shock datasets of 37\u00b0C and 48\u00b0C."}
+{"text": "Gene expression levels in a given cell can be influenced by different factors, namely pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. From methodological and computational point of view, analyzing high-dimensional time course microarray data requires very specific set of tools which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them and to estimate their expression profiles. The method is based on explicit expressions for calculations and, hence, very computationally efficient.The software package BATS presented here implements the methodology described above. It allows an user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5\u20136 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties which arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals and replicated or missing data.BATS is a free user-friendly software for the analysis of both simulated and real microarray time course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: Gene expression levels in biological systems can be influenced by different stimuli, e.g. pharmacological or medical treatments. The response is a dynamic process, usually different for different genes. 
One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. The recently developed microarray technology allows one to simultaneously monitor the expression levels of thousands of genes, thus providing a \"molecular picture\" of a biological system under study and a potential for describing the evolution of gene expression in time. However, this potential has not yet been fully exploited since there is still a shortage of statistical methods which take into account the temporal relationship between the samples in microarray analysis. In fact, most of the existing software packages essentially apply techniques designed for static data to time-course microarray data; this is the case, for example, for the SAM software package as well as for the package timecourse. All these methods can still be very useful when very short time course experiments have to be analyzed (up to about 4\u20135 time points); however, the shortcoming of these approaches is that they ignore the biological temporal structure of the data, producing results that are invariant under permutation of the time points. On the other hand, most classical time series or signal processing algorithms have rigid requirements on the data which microarray experiments rarely meet. The past few years saw new developments in the area of analysis of time-course microarray data. Here we present BATS, a user-friendly software package which implements a novel, truly functional and fully Bayesian approach designed specifically for such data. The software allows a user not only to automatically identify and rank differentially expressed genes, but also to estimate their expression profiles. The latter feature allows a user, for each differentially expressed gene, to visualize its response to the treatment in the course of time as a single smooth curve and, hence, to reveal important biological features that can be hidden in the raw data. 
The estimates of gene expression profiles are, in fact, more robust than the classical straight-line connection of the raw data and allow one to compare responses of genes to treatment at any arbitrary time point. The truly functional approach of BATS successfully manages various technical difficulties such as non-uniform sampling intervals and replicated or missing data. The data consist of records, for N genes, of the differences in gene expression levels between the sample of interest and a reference in the course of time. Each record is modeled as a noisy measurement of a function si(t) at a time point tj, measuring relative expression values on a time grid. In a 'one sample' problem, the number of time points is relatively small, with very few replications available at each time point (K), while the number N of genes is very large. The goal is first to identify the differentially expressed genes (si(t) \u2260 0), and then to evaluate the effect of the treatment. For each gene i, we expand its expression profile si(t) into a series over some standard orthonormal basis on [0, T], with coefficients l = 0,...,Li. Legendre polynomials and the Fourier basis, suitably rescaled and normalized on [0, T], are supported in the current version of BATS; the type of basis functions can be either Legendre or Fourier, with default choice Legendre. The maximum degree Lmax allowed in the expansion is an integer value, with default value [n/2] as a compromise between the goodness of fit and the variance of the estimate. The parameter \u03bb of the Poisson distribution truncated at Lmax has to be chosen in order to match the prior expected degree of the polynomial. Choosing appropriate parameters for the analysis of a particular data-set with BATS usually requires some preliminary knowledge of statistics and some level of expertise. 
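The basis expansion can be sketched as follows (illustrative only: BATS's actual estimator is Bayesian, whereas this sketch fits rescaled, L2-normalized Legendre polynomials on [0, T] by ordinary least squares; the profile, noise level, and maximum degree are invented):

```python
import numpy as np

T, L = 10.0, 3                          # time horizon and maximum degree (invented)
rng = np.random.default_rng(6)
t_obs = np.linspace(0, T, 12)
s_true = lambda t: 0.5 + 1.2 * (t / T) - 2.0 * (t / T) ** 2   # smooth true profile
y = s_true(t_obs) + rng.normal(0, 0.05, t_obs.size)           # noisy measurements

def phi(l, t):
    """Legendre polynomial of degree l, shifted to [0, T] and L2-normalized:
    with x = 2t/T - 1, int_0^T phi_l(t)^2 dt = 1."""
    x = 2 * np.asarray(t) / T - 1
    return np.sqrt((2 * l + 1) / T) * np.polynomial.legendre.Legendre.basis(l)(x)

# Design matrix over the basis, least-squares coefficients, smooth estimate.
B = np.column_stack([phi(l, t_obs) for l in range(L + 1)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
s_hat = lambda t: sum(c * phi(l, t) for l, c in enumerate(coef))
```

The fitted s_hat can then be evaluated at any time point, which is what lets smooth profile estimates be compared at arbitrary times rather than only at the sampled grid.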
However, a user who is not an expert in statistics should not be discouraged, since BATS provides default values for all parameters that can be used in most cases, and the parameters' sub-windows are hidden by default. If necessary, hidden windows can be opened in order to change the default values. A user can either let BATS estimate the global parameters \u03c00 and \u03c30, or enter their values manually by choosing the CUSTOM option (see Step 2 of the Algorithm). In the current version of BATS, estimation of the global parameters is based only on the Nc genes for which the complete set of M observations is available. If the default option STANDARD remains selected, the variances computed at the time points tj, j = 1,...,M, are treated as a sample from the distribution of \u03c3. If the user selects the alternative option CHOICE 1, he/she has to fix \u03b3, and the parameter b is then automatically evaluated by matching the mean of the distribution; with CHOICE 2 a user does not have to specify any parameters. The two options produce slightly different lists of genes and allow one to check the robustness of the selections. Once one of the three error models has been selected, the user chooses whether to estimate the degree Li by the posterior mean (option MEAN) or the posterior mode (option MAP) (Step 5 of the Algorithm) from the box ESTIMATION OF THE POLYNOMIAL DEGREE, and what procedure to use for testing which of the genes are differentially expressed from the box TEST PROCEDURE. In the latter, the default option BINOMIAL refers to the Binomial prior elicited on the number of alternative hypotheses, while the option TRUNCATED POISSON refers to the truncated Poisson prior. Expression profiles can be plotted for the top 'nfirst' genes according to ranking, either in 'Global scale' or in 'Auto scale'. 
We note that visual inspection of the profiles can be very useful for a quick assessment of the fit. Alternatively, expression profiles of individual genes can be generated later using the Utility \u2013 PLOT PROFILES. Once the necessary parameters have been defined, a user has to choose a Project name and launch the analysis. By default, for each run of the analysis, three files are generated in the folder Projects: a summary of the analysis _SR.txt, the ordered list of differentially expressed genes _GL.xls for Windows systems or _GL.txt for Linux or Mac OS X, and the estimated gene profiles _SH.xls for Windows systems or _SH.txt for Linux or Mac OS X. The dialog window shows intermediate results and stages of the algorithm during the execution of the analysis. The SIMULATIONS application enables an expert user to generate, analyze and save synthetic data. This feature can be useful for planning experimental design, for preliminary verification of whether BATS is a suitable tool for a given type of experiment, or for generating synthetic data which can be used for comparison with other statistical tools. This application can also be used to enhance understanding of some features of the proposed software. Simulations are indeed a typical tool for validation and comparison of statistical procedures, and they are also widely used in microarray analysis. Polynomials of degree zero are avoided, since a nonzero constant signal is questionable from a biological point of view. After that, for each gene i, a vector of coefficients ci is randomly sampled from a multivariate normal distribution whose variance \u03c32 is chosen by the user (on the basis of the user's experience and other available information). The matrix Qi is set to be diagonal, with \u03bdi sampled uniformly. 
A user can also choose the range from which the gene-specific variance is sampled: in the box SIGNAL TO NOISE RATIO RANGE the user can choose parameters a and b such that the signal-to-noise ratio lies in [a, b]. In the process of generating data-sets, a user also has to choose the total number of genes N. Synthetic data-sets are generated according to the model (1) by adding i.i.d. noise to the simulated profiles. Two types of noise distributions are supported in the current version of BATS: normal and Student T with at least 3 degrees of freedom. In order to make the results of several simulations comparable, Student noise is scaled to have the same variance \u03c32 as in the normal case. In addition, setting a threshold T in the box THRESHOLD FOR UNREALISTIC VALUES forces simulated values larger than T to be filtered out and replaced with \"missing values\", mimicking the pre-processing of real data where unrealistic values are eliminated. Simulated profiles can be plotted with PLOT PROFILES and visually inspected in order to assess their biological resemblance. In the Figure, an extra replicate was available at t = 2, 8, 16; the values of the other parameters were N = 8000, D = 600, Lmax = 6, \u03bb = 9, \u03bd = 0, \u03c3 = 0.3, with Student T(5) noise. It should be noticed that synthetic data can only provide basic suggestions about the performance of BATS, since real data often have a complex structure which is very hard to model precisely in mathematical terms. The simulation scheme is similar to one proposed earlier in the literature. Using the same simulation set-up, several data-sets can be created with several randomly generated sets of profiles si and several different noise realizations. Each synthetic data-set can be analyzed assuming any of MODEL 1, 2 or 3. Performance of the technique is automatically evaluated using the False Discovery Rate (FDR), the False Negative Rate (FNR), the numbers of correctly detected, not detected or misclassified genes, and some other standard measures. 
The results are automatically averaged in order to provide statistically relevant information which does not depend on a particular random realization. An output .txt file contains the results of the analysis, while the dialog window shows intermediate messages during computations. Several auxiliary procedures are collected in the UTILITIES menu. A filtering procedure creates a new BATS input data file containing the filtered data, while DATA BOX PLOTS can be used to compactly represent data for inspection before starting the analysis. After a series of analyses have been performed on the same data-set using different parameters or models, the utility COMPARE RESULTS allows a user to compare the resulting gene list files and, hence, investigate the robustness of the lists of differentially expressed genes. Two files are created by this option: a _common.xls (or _common.txt) file containing the intersection of all the selected lists and reporting for each gene the rank obtained in each analysis, and a _union.xls (or _union.txt) file containing all the genes present in at least one of the lists. The utility PLOT PROFILES provides an alternative way to visualize the data and the selected gene expression profiles. For this purpose, a user can choose whether to plot the raw data, the expression profiles for differentially expressed genes, or both. The input data-set needs to be loaded from the sub-window Select a raw data file, together with the name of the file which contains the estimated expression profiles resulting from a previous analysis, if the profiles of the differentially expressed genes need to be plotted. Then the list of all genes in the files is shown, and the user can select the genes of interest. Additionally, the user can choose some plotting options such as the color of the line or the type of the marker. The corresponding individual profiles are displayed sequentially, and the plots can be saved as image files. Finally, as a case study, BATS was applied to a time course experiment in which human breast cancer cells were stimulated with estrogen after being maintained for 4 days in steroid-free medium. 
RNA samples were extracted before the stimulation and after 1, 2, 4, 6, 8, 12, 16, 20, 24, 28 and 32 hours of stimulation. The cDNA microarray analysis was carried out with Human UniGEM V 2.0 glass arrays; for each time point at least two replicates were available. Complete data can be downloaded from the NCBI public gene expression data repository Gene Expression Omnibus (GEO Acc: GSE186). The data file 'Cicatiello_et_at.xls' contains the relative expression values, i.e. the E2-treated to control fluorescence intensity ratios. Data contained in the provided file have already been pre-processed, normalized and presented in the BATS input format. The data set was analyzed using MODELS 1, 2 and 3 and various combinations of parameters. Different outputs were then compared in order to look for genes common to all options of the analysis and for those which are selected only under a particular combination of parameters. After each analysis, the list of genes detected as differentially expressed was saved in a project_name_GL.xls file. After several runs of the analysis, the _GL.xls files were compared using the function COMPARE RESULTS in the UTILITY menu. In what follows, we report the results of the analysis with MODELS 1, 2 and 3 and various choices of \u03bb. Table reports the results with Lmax = 6, \u03bd = 0 and \u03bb ranging between 6 and 12. It is easy to see that the results are quite robust with respect to the number of detected genes, with smaller \u03bb providing larger lists. Using the function COMPARE RESULTS we discovered that the technique is also robust with respect to the list of genes declared differentially expressed: 574 genes were common to all 28 lists while 958 genes have been selected in at least one of the 28 lists. A more detailed discussion of the results of the analysis is provided in the original paper. The PLOT PROFILES function allows a user to visualize both raw data and estimated profiles. 
Figures show examples of raw data together with the estimated profiles. Next, for comparative purposes, we applied EDGE and timecourse to the same data-set. To be fair, we should mention that the functional statistical approach implemented in EDGE was originally designed for the \"two-sample\" problem. EDGE ranks genes via q-values, which we chose to be q = 0.05 and q = 0.1 in our analysis. Timecourse neither allows missing values nor suggests a specific procedure for treating them; moreover, it requires that each time point has the same number of replicates. Thus, in order to apply the method, we filtered out all the genes with missing observations and discarded the third observations which were available at time points t = 2, 8, 16. To be fair, we should mention that timecourse is designed for data where replicates are biologically meaningful; since that is not the case for this dataset, the timecourse package could not take advantage of the replicate identification. On the other hand, the information about the time measurements is not used by the timecourse method. Since the method only provides a rank-ordered list of genes (without any automatic cut-off point), we performed the comparisons taking the top 500 and 1000 genes in the resulting list. Since the EDGE software does not automatically account for missing values but only suggests a preliminary procedure (K-nearest-neighbors) for filling them in, we repeated the analysis both using this procedure and filtering out genes with missing values. Additionally, EDGE allows a user to choose the degree of the splines or the polynomials common to all genes. We carried out the analysis with different choices for the maximal degree of the polynomials and found that the results were robust with respect to those choices (we do not report these results here). 
To estimate the distribution of the statistics under the null hypothesis, EDGE uses a bootstrap approach, thus requiring a high computational effort and appropriate memory resources. We used 1000 permutations in our comparisons and discovered that the gene selections were robust with respect to different random seeds (only a few genes differed). In order to control the multiplicity error, EDGE uses q-values. Table summarizes the overlap among the gene lists produced by BATS, EDGE and timecourse. This paper describes BATS, a novel user-friendly statistical software package specifically designed for time course microarray data. In particular, BATS allows a user to analyze time series microarray experiments having possibly non-Gaussian errors and as few as 5\u20136 time points per gene, although a modest increase in the number of available time points will produce a significant improvement in the findings. The presence of replicated measurements is recommended, but not required. BATS is highly computationally efficient, since all calculations are based on analytic expressions. It automatically manages irregular experimental design issues, such as non-uniform sampling intervals and missing or replicated data. The method accounts for multiplicity of errors, and selects and ranks differentially expressed genes. An analysis of the human breast cancer data-set is provided as an illustration. Version 1.0 of BATS is designed for the 'one sample' problem. The extension of the statistical model to the 'two sample' case is currently under development, and its implementation will be added in future releases. The BATS software, user manual and illustrated examples can be downloaded from the BATS website. 1. Project Name: BATS (version 1.0) 2. Project home page: 3. Operating system(s): Windows, Linux, Mac OS X 4. Programming language: MATLAB 5. 
Other requirements: 512 MB RAM, 2.0 GHz Pentium 4 CPU, 300 MB free disk space on hard drive, MATLAB Component Runtime (available from the software web site). 6. License: BATS permissive license (derived from the MIT license). All authors participated in writing the code for the software package, developing the project website, writing the documentation, and writing the manuscript. All authors also read and approved the submitted manuscript."}
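BATS expands each expression profile in a rescaled Legendre basis with a bounded degree. The following Python sketch is illustrative only: BATS itself is MATLAB and computes Bayesian posterior estimates with a truncated-Poisson prior on the degree, whereas here a plain least-squares projection onto the basis is used, and all function names are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def fit_profile(t, y, T, Lmax):
    """Expand a time-course profile in Legendre polynomials rescaled
    from [0, T] to [-1, 1]; fitted here by ordinary least squares
    (a sketch of the basis, not the BATS Bayesian estimator)."""
    x = 2.0 * np.asarray(t, dtype=float) / T - 1.0
    return legendre.legfit(x, y, deg=Lmax)

def eval_profile(coef, t, T):
    """Evaluate the fitted expansion at arbitrary time points."""
    x = 2.0 * np.asarray(t, dtype=float) / T - 1.0
    return legendre.legval(x, coef)

# toy profile on a non-uniform grid: zero at t = 0 and t = 8, peak at t = 4
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
y = 0.5 * t * (8.0 - t)
coef = fit_profile(t, y, T=8.0, Lmax=3)
yhat = eval_profile(coef, t, T=8.0)   # reproduces the quadratic exactly
```

Because the estimated profile is a smooth curve, it can be evaluated between the sampled time points, which is what allows responses to treatment to be compared at arbitrary times and makes non-uniform sampling unproblematic.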
+{"text": "Gene clustering of periodic transcriptional profiles provides an opportunity to shed light on a variety of biological processes, but this technique relies critically upon robust modeling of the longitudinal covariance structure over time. We propose a statistical method for functional clustering of periodic gene expression by modeling the covariance matrix of serial measurements through a general autoregressive moving-average process of order (p, q). Through simulated data we show that, whenever it is necessary, employment of sophisticated covariance structures such as ARMA is crucial in order to obtain unbiased estimates of the mean structure parameters and increased precision of estimation. The methods were implemented on recently published time-course gene expression data in yeast, and the procedure was shown to effectively identify interesting periodic clusters in the dataset. The new approach will provide a powerful tool for understanding biological functions on a genomic scale. DNA microarray technologies are widely used to detect and understand genome-wide gene expression regulation and function; examples of temporal microarray experiments include studies of the cell cycle in yeast. Microarray experiments typically collect expression data on thousands of genes, and the high dimensionality of the data imposes statistical challenges. The statistical issues become even more pronounced when transitioning from static to time-course microarray data. There has been a long history of using parsimonious mathematical functions, e.g. the Fourier series, to describe periodic biological processes. An earlier approach proposed by Kim et al. modeled the covariance with an AR(1) structure. Unlike AR(1), the ARMA covariance matrix generally does not have closed form solutions for its inverse and determinant, which imposes challenges in parameter estimation and likelihood function evaluation; we use a recursive method to compute these quantities. The rest of the article is organized as follows. The model and the inference procedure are described in Section 2. 
Section 3 includes simulation studies that investigate the improvement in estimation accuracy and efficiency of the ARMA covariance structure over simpler alternatives. The mean model is a Fourier series approximation, which decomposes the periodic expression level into a sum of orthogonal sinusoidal terms. The entire set of unknown parameters is estimated within a mixture model. As a check, a pure-noise process was simulated and the model was used to identify periodic clusters; here the AIC and BIC selected three clusters, but the mean functions for the three clusters are all nearly zero compared to the standard deviation of the noise (approximately 0.3), as illustrated graphically. This methodology is applied to time course gene expression data published previously. To keep the model relatively parsimonious, a single covariance structure was used to model the collection of genes. Genes are assigned to a cluster if they have an estimated probability of 90% or greater of belonging to that cluster. The mean functions for the nine identified distinct clusters are depicted graphically. The analysis of the real data set did suggest some interesting clusters, and we performed a gene ontology (GO) analysis on the tight clusters along with clusters 1 and 2. The basic GO organization consisting of three major categories \u2013 \u201cbiological process\u201d, \u201ccellular component\u201d, and \u201cmolecular function\u201d \u2013 is considered in addition to a more specific GO classification. The most striking result is seen in cluster 4, where each of the ten genes in the network is categorized with GO:0003677, which represents \u201cDNA binding\u201d under the molecular function ontology. Other significant GO categories in cluster 4 include GO:0005634 (\u201cnucleus\u201d). The simulations demonstrate the bias incurred by a lower-order covariance model when the true intercorrelation structure of the time-dependent expression data is of a higher order. 
However, the ARMA covariance structure requires that gene expression be evaluated at equally spaced time points, which makes it inapplicable when the data are collected irregularly or at gene-specific time intervals. Moreover, accurate estimation and classification of gene expression profiles require a reasonable approximation of the assumed covariance model to the truth. The simulations also indicate that any parametric method can be non-robust and produce misleading results when deviation from the true covariance exists. Under these considerations, semi-parametric approaches arise as a promising alternative to the ARMA assumption in the current model. Since we usually would not expect periodic expression to exactly follow an ARMA process, the real data analysis was useful to see the effectiveness of the methods in practice. Both the AIC and BIC selected the AR(2) covariance structure, suggesting the flexibility in the ARMA parameters provides an improved fit over the more simplistic AR(1) covariance structure. The graphical views of the model fit impressively demonstrate the utility of the proposed method on real datasets. Details of the EM Algorithm are provided as supporting information. Text S1: Supporting information, EM algorithm (0.15 MB PDF)."}
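The clustering above uses a truncated Fourier series for each cluster's mean curve. A minimal sketch of that mean model, fitted here by ordinary least squares on a single profile; the paper instead estimates cluster means and the ARMA covariance jointly via an EM algorithm, so the function names and the plain least-squares fit are illustrative assumptions.

```python
import numpy as np

def fourier_design(t, period, K):
    """Design matrix for a truncated Fourier series with K harmonics:
    columns 1, cos(2*pi*k*t/period), sin(2*pi*k*t/period), k = 1..K."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        w = 2.0 * np.pi * k * t / period
        cols.append(np.cos(w))
        cols.append(np.sin(w))
    return np.column_stack(cols)

def fit_fourier(t, y, period, K):
    """Least-squares coefficients of the truncated Fourier mean curve."""
    X = fourier_design(t, period, K)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# toy periodic profile with one harmonic, on the equally spaced grid
# that the ARMA covariance requires
t = np.arange(0.0, 16.0)
period = 8.0
y = 1.0 + 2.0 * np.sin(2.0 * np.pi * t / period)
beta = fit_fourier(t, y, period, K=2)
```

The recovered coefficient vector isolates the intercept and the first-harmonic sine term, illustrating how the orthogonal sinusoidal terms separate the periodic components of a profile.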
+{"text": "Understanding gene expression and regulation is essential for understanding biological mechanisms. Because gene expression profiling has been widely used in basic biological research, especially in transcription regulation studies, we have developed GeneReg, an easy-to-use R package, to construct gene regulatory networks from time course gene expression profiling data. More importantly, this package can provide information about time delays between expression change in a regulator and that of its target genes. The R package GeneReg is based on time delay linear regression, which generates a model of the expression levels of regulators at a given time point against the expression levels of their target genes at a later time point. There are two parameters in the model: time delay and regulation coefficient. Time delay is the time lag during which an expression change of the regulator is transmitted to a change in target gene expression. The regulation coefficient expresses the regulation effect: a positive regulation coefficient indicates activation and a negative one indicates repression. GeneReg was implemented on a real Saccharomyces cerevisiae cell cycle dataset; more than thirty percent of the modeled regulations, based entirely on gene expression profiles, were found to be consistent with previous discoveries in known databases. GeneReg is an easy-to-use, simple, fast R package for gene regulatory network construction from short time course gene expression data. It may be applied to study time-related biological processes such as the cell cycle, cell differentiation, or causal inference. In general, the existing models for network inference can be grouped into three categories: logical models, continuous models and single-molecule level models. The most well known software implementation of information-theoretic approaches for gene network inference is ARACNE. Banjo is a representative gene network inference software package based on the Bayesian network formalism. 
Ordinary differential equation (ODE) based reverse-engineering algorithms relate changes in gene transcript concentration to each other and to an external perturbation. Reverse-engineering a network using ODEs requires selection of an ODE function and estimation of unknown parameters from gene expression data using some optimization technique. ODE-based approaches yield directed graphs and can be applied to both steady-state and time-series expression profiles, but they are often very complex and slow, and do not provide insight into the biological meaning of each parameter. To address the limitations of existing approaches for gene regulatory network construction from short time course gene expression data, we have developed an easy-to-use, simple, and fast R package: GeneReg. GeneReg is based on a time delay linear regression model, which is similar to Kim's ordinary differential equation model. The most important improvement of our model is that the time delay of each regulator can be exactly calculated, and different regulators can have different time delays. Time delay is an important concept in biological regulatory mechanisms, especially for transcription factors. As is known, a transcription factor can only regulate its target genes in protein form, but in microarray studies the measured abundance of a transcription factor is its mRNA expression level. The mRNA of a transcription factor must be translated into protein, and the protein then regulates the expression of downstream genes; there is thus a time delay from the generation of the transcription factor's mRNA to its actual regulatory action. Moreover, time delay is almost impossible to measure with traditional experiments, as many of the processes occurring during translation remain unclear and the factors which can affect them are unpredictable. 
The time delay calculated in our model provides a higher-level estimate of this important but otherwise unmeasurable biological variable. The biological meanings of the two parameters in our model are both clear and important for understanding gene regulatory mechanisms. As the linear model is simple and the forward selection optimization of parameters is easy to compute, GeneReg is much faster than similar software and can be used to decipher genome-wide gene regulatory networks. Our models do not require prior knowledge about regulatory mechanisms, although prior knowledge can be integrated; for example, if certain regulators are already known to regulate the target gene, they can be added into the model first. The model with time delay and regulation coefficient can be used to obtain qualitative insights about regulatory networks and to discover novel regulations. The linear model assumption may ignore some nonlinear regulations, but it allows a high level of abstraction and efficient inference of network structure and regulation functions. When higher resolution of detailed regulatory relationships is desired, the linear model can be replaced with more complex nonlinear models, such as mass action models or Hill models. GeneReg was implemented on a real Saccharomyces cerevisiae cell cycle dataset. The results were found to reflect known dynamic expression profiles, with 32.45% of the regulations modeled in wild type cells and 32.61% of regulations in cyclin mutant cells consistent with what can be found in the YEASTRACT and/or STRING databases. These are fairly good results, considering that our large scale gene network construction was based on only a single small time series gene expression dataset. 
\u0394t is the time delay between expression of transcription factors and expression of downstream genes, and can differ from gene to gene. Suppose we have a set of time course data covering time points T1, T2,..., Tk, and target gene g is regulated by n regulators tf1, tf2,..., tfn in a linear manner. The model is based on a linear regression of the expression levels of the regulators at time t - \u0394t against the expression level of their target gene at time t, which can be formulized as: g(tg) = a1*tf1(tg - \u0394t1) + a2*tf2(tg - \u0394t2) + ... + an*tfn(tg - \u0394tn), where g(tg) is the relative expression level of target gene g at time point tg with respect to the baseline, tfi(tg - \u0394ti) is the relative expression level of regulator tfi at time point tg - \u0394ti, \u0394ti is the time delay of tfi's regulation of gene g, and ai is the regression coefficient of tfi. Our method aims to select a set of possible regulators with certain time delays to estimate the dynamic expression pattern of a target gene. The method uses AIC (Akaike Information Criterion) model selection: AIC = 2*kp - 2*ln(L), where kp is the number of parameters in the statistical model and L is the maximized value of the likelihood function for the estimated model. AIC increases as the number of parameters increases and decreases as the residual variance decreases; a smaller AIC indicates a better model. In practical computation, it is time-consuming to compute the AIC statistics for all possible regression models, so a forward selection method was used. The model's explanatory ability is evaluated by the adjusted R2, an improved R2 that adjusts for the number of explanatory terms in a model: adjusted R2 = 1 - (1 - R2)*(ks - 1)/(ks - nr - 1), where nr is the number of regulators in the linear model and ks is the sample size. The computational procedure of our method can be summarized as follows: 1) Sorting the regulators based on their relevance to the target gene. The goal of this procedure is to filter out irrelevant regulators first and sort the remaining regulators according to their importance to the regulation. For each regulator, all possible time delays are traversed and the adjusted R2 of the best single-regulator regression model (the one with smallest AIC) is calculated. Regulators that do not meet a pre-specified adjusted R2 cutoff are considered irrelevant to the target gene and are filtered out. 
The remaining M regulators are sorted according to their smallest AICs. After this pre-evaluation procedure, a sorted regulator set S is provided; the index in S reflects the evaluation of each regulator. For example, if a < b, then tfa has a smaller AIC than tfb, and tfa is considered better than tfb. 2) Regulation model optimization with forward selection of regulators and time delays. Step one can only provide a list of relevant regulators sorted according to their importance to the regulation; it is still unknown which top regulators in the list should be selected to establish the time delay regression model. The best regulators are selected by testing all possible top regulator sets and choosing the regulator set that achieves the smallest AIC under forward selection. The forward selection procedure is illustrated in Figure. The initial regulator subset \u03a9s contains the regulators selected so far, and \u03a9t contains the candidate regulators still to be tested. Each regulator in \u03a9t is successively tested for addition to \u03a9s. During each test, the time delays of the regulators in \u03a9s are held the same, and all possible time delays of the regulator in \u03a9t with the smallest S index are traversed. If the addition decreases the AIC, the regulator is moved from \u03a9t to \u03a9s; otherwise it is removed from \u03a9t. In the next round, the new regulator in \u03a9t with the smallest S index is tested, until \u03a9t is empty. After all M - 1 remaining regulators have been tested for addition to the initial regulator subset, the adjusted R2 of the final optimized model is calculated. 
If the adjusted R2 meets the pre-specified criteria, the optimized time delay model can explain this target gene's expression pattern and is applicable. Otherwise, this target gene's expression pattern cannot be explained by our time delay linear regression model, and other more complex nonlinear models may be needed. To evaluate our approach on biological time course gene expression data, we applied the method to Saccharomyces cerevisiae cell cycle data publicly available at GEO (http://www.ncbi.nlm.nih.gov/geo) with accession number GSE8799. The dataset includes the gene expression profiles of wild type cells and cyclin mutant cells at 15 time points during two cell cycles, and each genotype has two replicates with different 15 time points on the life line. In our study, we merged the two replicates of each genotype based on their life lines, yielding 30 time points each for wild type cells and cyclin mutant cells. The 1271 periodic genes defined in Orlando's work were considered, and the candidate regulators were taken from http://www.yeastract.com/. First, the data were transformed to a log2 ratio scale: the expression level at the first time point was taken as baseline, and the gene expression level at each time point had the baseline subtracted to give the relative expression level. Then B spline interpolation was applied. Time delay linear models were then constructed based on the interpolated expression data and the candidate pool of regulators, with the adjusted R2 cutoffs for single regulator regression and multiple regulator regression set at 0.8 and 0.9, respectively. As an example, Figure shows the model for the HO gene in wild type cells. HO encodes an endonuclease responsible for initiating mating-type switching, a process in which MATa cells change to MATalpha cells or vice versa. This process is controlled by Swi4p-Swi6p, Swi5p, and Ash1p according to the Saccharomyces Genome Database (SGD). In our model, ASH1, TEC1, and SWI5 were found to be the most likely regulators of HO. 
Finally, the whole regulatory network was plotted based on the series of time delay linear models; see the additional files. To evaluate the performance of the gene regulatory networks constructed based on time delay linear models, we generated a reference network from the YEASTRACT and STRING databases. The prediction accuracies were calculated based on the data presented in the additional files. The above comparison was on the probe level (see Figure). The activity of transcription factors is an important factor in gene regulation. As many transcription factors are post-translationally controlled, their activity cannot always be observed directly by measuring changes in their mRNA expression level. Additionally, certain condition-specific regulators vary with different perturbations; for example, many regulatory relationships differ between the wild type and cyclin mutant cell networks. All of these issues affect the overlap of our network with those documented in the known databases. In general, there is no final or best network, only a group of possible networks that are nearly equally useful. To better understand the impact of time delay in network construction, we built networks without time delay in wild type cells and cyclin mutant cells by setting the max time delay parameter of our program to 0. When time delay was not considered, there were only 3050 and 2489 predicted regulations in wild type cells and cyclin mutant cells under the same criteria; 32.40% of predicted regulations in wild type cells and 31.94% in cyclin mutant cells were consistent with YEASTRACT or STRING. As shown in the additional files, eight well known transcription factors studied in Orlando's work were specifically examined. As expression profiling technology has grown in popularity, much effort has been devoted to building gene regulation networks based on the wealth of profiling data generated. 
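The time-delay regression at the core of the method regresses a target's expression at time t on a regulator's expression at t - \u0394t and scores candidate delays by AIC. A minimal single-regulator sketch in Python (GeneReg itself is an R package; the Gaussian AIC form, the function names, and the toy data here are assumptions):

```python
import numpy as np

def aic_ols(X, y):
    """AIC of an OLS fit under Gaussian errors: n*log(RSS/n) + 2*p."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    rss = max(rss, 1e-12)               # guard against a perfect fit
    return n * np.log(rss / n) + 2 * p, beta

def best_delay(reg, target, max_delay):
    """Traverse every delay 0..max_delay for a single regulator; the
    target at time t is regressed on the regulator at time t - delay."""
    best = (np.inf, 0)
    for d in range(max_delay + 1):
        x, y = reg[: len(reg) - d], target[d:]
        X = np.column_stack([np.ones_like(x), x])
        aic, _ = aic_ols(X, y)
        if aic < best[0]:
            best = (aic, d)
    return best                          # (AIC, delay)

# toy data: the target follows regulator `a` with a lag of 2 time points
rng = np.random.default_rng(0)
a = rng.normal(size=40)
target = np.empty(40)
target[2:] = 1.5 * a[:-2]
target[:2] = rng.normal(size=2)
aic, delay = best_delay(a, target, max_delay=5)
```

In the multi-regulator case the same AIC score drives the forward selection over regulator subsets described above, with each regulator keeping its own best delay.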
In this contribution, a new method is proposed not only to construct dynamic gene regulatory networks, but also to calculate the time delays between regulators and downstream genes. A time delay between transcription factor activation/repression and that of its target genes has long been suspected; our tool allows visualization of exact time delays, calculated from real in vivo data. Our approach can be applied to investigate important time-related biological processes, such as the cell cycle, cell differentiation and development. Similarly, such a method may be important for researchers studying the mechanisms of specific transcription factors, their pathways, and possible interventions for associated diseases. Time delay regression models can be easily constructed using the R package GeneReg, which is freely available from CRAN at http://cran.r-project.org/web/packages/GeneReg/index.html. (1) B spline interpolation is applied: data(wt.expr.data); wt.bspline.data <- ts.bspline(wt.expr.data, data.predict = 100). (2) A series of time delay linear models is constructed based on the interpolated expression data. 
Detailed explanation of each parameter in the following code can be found in the help document of the GeneReg package at the above website.

data(tf.list)
wt.models <- timedelay.lm.batch(wt.bspline.data, tf.list,
    single.adj.r.squared = 0.8, multiple.adj.r.squared = 0.9,
    maxdelay = ncol(wt.bspline.data)*0.1, min.coef = 0.25, max.coef = 4,
    output = T, topdf = T, xlab = 'Time point (lifeline)',
    ylab = 'Relative expression level (in log ratio)')

(3) The whole network was plotted based on the series of time delay linear models constructed in step (2), using plot.GeneReg.

\u2022 Project name: GeneReg\u2022 Project home page: http://cran.r-project.org/web/packages/GeneReg/index.html\u2022 Operating systems: Platform independent\u2022 Programming language: R\u2022 Other requirements: GeneReg depends on two other R packages: splines and igraph\u2022 License: LGPL\u2022 Any restriction to use by non-academics: none\u2022 Abbreviations: (ODE): ordinary differential equation; (AIC): Akaike Information Criterion; (ORF): open reading frame. The authors declare that they have no competing interests. TH, KT and ZQ carried out the study. TH and LL wrote the manuscript. LX and YL supervised the project. All authors read and approved the final manuscript. Additional files: time delay models in wild type cells (each file is a time delay model of one target gene); time delay models in cyclin mutant cells (each file is a time delay model of one target gene); time delay network of wild type cells (4717 predicted regulations); time delay network of cyclin mutant cells (3466 predicted regulations); collection of time delay models in wild type cells, in which each row represents one predicted regulation. The seven columns show regulator, target gene, regression coefficient, time delay, adjusted R2, consistency with the YEASTRACT database, and consistency with the STRING database. 
In these last two columns, 1 indicates consistency with the database, 0 indicates inconsistency when the regulator and target gene are included in the database, and NA indicates that the database does not include the regulator or target gene. Regulations consistent with either the YEASTRACT or STRING database (1 in either column) were counted as true, while regulations either consistent or inconsistent with at least one database (0 or 1 in either column) were counted as total regulations. Regulations with NA in both columns were excluded from the accuracy calculation. The collection of time delay models in cyclin mutant cells follows the same format: each row represents one predicted regulation, with the same seven columns and the same conventions as for the wild type cells. A zip file of GeneReg version 1.1.1 is provided; processed example data are contained within the package. R code for the analysis of wild type cells and cyclin mutant cells is also provided."}
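The accuracy bookkeeping just described (1 = consistent, 0 = covered but inconsistent, NA = not covered by either database) amounts to a small counting rule, sketched here in Python with names of our own choosing:

```python
def prediction_accuracy(rows):
    """Each row is (yeastract, string_db), where each entry is 1, 0, or
    None (NA). A regulation counts as true if either database confirms it;
    rows with NA in both databases are excluded from the calculation."""
    true_hits = total = 0
    for y, s in rows:
        if y is None and s is None:
            continue  # neither database covers this regulator/target pair
        total += 1
        if y == 1 or s == 1:
            true_hits += 1
    return true_hits / total if total else float('nan')
```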
+{"text": "Our experiments show the value of using spline representations for sparse time series. More significantly, they show that our time warping method provides more accurate alignments and classifications than previous standard alignment methods for time series. We present an approach for answering similarity queries about gene expression time series that is motivated by the task of characterizing the potential toxicity of various chemicals. Our approach involves two key aspects. First, our method employs a novel alignment algorithm based on time warping. Our time warping algorithm has several advantages over previous approaches. It allows the user to impose fairly strong biases on the form that the alignments can take, and it permits a type of local alignment in which the entirety of only one series has to be aligned. Second, our method employs a relaxed spline interpolation to predict expression responses for unmeasured time points, such that the spline does not necessarily exactly fit every observed point. We evaluate our approach using expression time series from the Edge database, which contains microarray observations collected from mouse liver tissue over the days following exposure to a variety of treatments. Our approach takes as input an unknown query series, consisting of several gene-expression measurements over time. It then picks out treatments from a database of known treatments that exhibit the most similar expression responses. This task is difficult because the data tends to be noisy, sparse in time, and measured at irregular intervals. We start by reconstructing the unobserved parts of the series using splines. We then align the given query to each database series so that the similarities in their expression responses are maximized. Our approach uses dynamic programming to find the best alignment of each pair of series. 
Unlike other methods, our approach allows alignments in which the end of one of the two series remains unaligned, if it appears that one series shows more of the expression response than the other. We finally return the best match(es) and alignment(s), in the hope that they will help with the query's eventual characterization and addition to the database. We are developing an approach to characterize chemicals and environmental conditions by comparing their effects on gene expression with those of well characterized treatments. We evaluate our approach in the context of the Edge database. Characterizing and comparing temporal gene expression responses is an important computational task for answering a variety of questions in biological studies. We present an approach for answering similarity queries about gene expression time series that is motivated by the task of characterizing the potential toxicity of various chemicals. Our approach is designed to handle the plethora of problems that arise in comparing gene expression time series, including sparsity, high-dimensionality, noise in the measurements, and the local distortions that can occur in similar time series. The task that we consider is motivated by the need for faster, more cost-efficient protocols for characterizing the potential toxicity of industrial chemicals. More than 80,000 chemicals are used commercially, and approximately 2,000 new ones are added each year. This number makes it impossible to properly assess the toxicity of each compound in a timely manner using conventional methods. However, the effects of toxic chemicals may often be predicted by how they influence global gene expression over time. By using microarrays, it is possible to measure the expression of thousands of genes simultaneously. 
It is likely that transcriptional profiles will soon become a standard component of toxicology assessment and government regulation of drugs and other chemicals. One resource for toxicology-related gene expression information is the Edge database, which contains expression profiles from mouse liver tissue following exposure to a variety of chemicals and physiological changes, which we refer to as treatments. Some of the treatments in Edge have been assayed as time series. We use the term observation to refer to the expression measurements made at a single time point in a treatment. The computational task that we consider is a similarity query against this database of treatments. There are several properties of the expression time series at hand that are important considerations for our work. Sparsity: As is the case with most time series characterizing gene expression, the Edge database has observations at only 9 times, and several of the series include only two points. High-dimensionality: Because the expression data we consider is measured via microarrays, each time \u201cpoint\u201d in our series lies in a high-dimensional space. For the experiments reported here, each time point represents expression levels for 1,600 genes. Non-uniform and irregular sampling: Given the sparsity of the time series, it is typically the case that they have been sampled at non-uniform time intervals. Moreover, the sampling times may vary for different time series. Noise: As is the case with all microarray data, the measurements involve a fair amount of noise due to technical issues in the process. Biological variability: Because a mouse model is used for the toxicology experiments we consider, there is also a component of biological variation that affects the data measured. 
Each microarray assays a sample from a different animal. These properties of the data result in several additional challenges for the task we consider. The time points present in a given query may not correspond to measured points in some or any of the time series in the database. Queries may be of variable size. Some queries may consist of only a single observation, whereas others may contain multiple time points. Additionally, queries may vary in their extent: some may span only a few hours whereas others include measurements taken over days. A given query and its best match in the database may differ in the amplitude, temporal offset, or temporal extent of their responses. For example, the expression profile represented by a query treatment may be similar to a database treatment except that the gene expression responses are attenuated, or occur later, or take place more slowly. A given query and its best match in the database may differ in that one of them shows more of the temporal evolution of the treatment responses. In other words, the query may be similar to a truncated version of the database series, or vice versa. To address these challenges, we have developed an approach based on dynamic time warping: a generative model that approaches the problem from a probabilistic perspective. In order to temporally align gene-expression time series using our model, we employ a novel method for local alignment in which the end of one series is unaligned. We refer to this case as shorting the alignment. This aspect of the approach is motivated by the consideration that one of the series may show more of the temporal response than the other. For example, one series may not have been measured for as long as the other. Another significant way in which our approach differs from standard time warping is that it is based on an explicit, generative model. 
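For reference, the standard dynamic time warping recurrence that the authors' generative model is contrasted with can be sketched as follows. This is textbook global DTW, not the segment-based, shortable alignment described in the text:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping: cost of the best monotone global
    alignment of sequences a and b under a squared-difference local cost."""
    INF = float('inf')
    n, m = len(a), len(b)
    # D[i][j] = best cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # a point may be matched, or either series may dwell on a point
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

The generative model discussed in the text replaces this pointwise cost with segment-level stretch and amplitude probabilities and does not force a global alignment.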
This model allows the user to explicitly encode costs/probabilities that characterize the likelihood of various types of differences in closely related time series. The most significant way in which our approach differs from standard time warping is that it enables the user to impose fairly strong biases on the form that the alignments can take. In particular, it allows alignments that partition the given time series into a small number of segments in which the changes from one time series to the other are fairly uniform. This is important given the sparsity, high-dimensionality, and noisiness of the time series being aligned. Our time warping approach differs in several substantial ways from the standard dynamic programming method. Unlike the standard approach, our method does not force the two series to be globally aligned. Instead, it permits a type of local alignment in which the end of one series may remain unaligned. We also investigate variations on spline interpolation in order to find an approach that results in accurate reconstructions of sparsely sampled time series. We find that we achieve more accurate interpolations when using higher order splines. Further, our experiments indicate that it is helpful to relax the splines' fit to the observed data, rather than potentially overfitting by exactly intercepting each observed data point. Related problems have been addressed in earlier work by our group, as well as by Lamb et al., Aach and Church, Bar-Joseph et al., and Listgarten et al. A related approach to aligning time series is proposed by Gaffney and Smyth. Another is correlation optimized warping (COW), devised by Nielsen et al. A further similar approach is generalized hidden Markov models, which directly evaluate the likelihood of segments of a sequence, instead of incrementally computing these likelihoods one sequence element at a time. 
Models of this type have been used for tasks such as gene finding. Our approach is also related to various probabilistic sequence models. In this section we detail our generative model for classifying and aligning time series, and present a dynamic programming algorithm that is able to find optimal alignments under this model. We also present a review of B-spline interpolation and discuss some useful variations of the method. We use spline interpolation to reconstruct unobserved microarray observations. Our approach to answering similarity queries involves three basic steps: (i) we use interpolation methods as a preprocessing step to reconstruct unobserved expression values from our sparse time series; (ii) we use our alignment method to find the highest scoring alignment of the query series to each treatment series in the database; (iii) we return the treatment from the database that is most similar to the query, and the calculated alignment between the two series. We have implemented all our algorithms in Java. The source code is available for download at http://www.biostat.wisc.edu/aasmith/catcode/. One challenge that arises when aligning a pair of expression time series is that the series may have been sampled at different time points. Moreover, the sampling may be sparse and occur at irregular intervals. To address these issues, we first use an interpolation method to reconstruct the unobserved parts of the time series before trying to align them. This interpolation step allows us to represent each time series by regularly spaced observations. We refer to the \u201cobservations\u201d which come from the interpolation, as opposed to measurement, as pseudo-observations. Although linear interpolation is a natural first approximation, other work has explored the use of B-splines to better reconstruct missing expression data. A B-spline is characterized by the order k of the spline and by its points of discontinuity, called knots. 
There are n basis functions, where b_{i,k} is the ith basis function of order k. The segments of kth-order basis splines have degree k-1, so a second-order B-spline consists of line segments, a third-order spline consists of quadratic segments, and so on. The splines are also continuous down to the (k-2)th derivative. The actual interpolating B-spline s inherits these properties. It is formally defined as s(t) = sum_i C_i b_{i,k}(t), where the weights C_i are known as control points; solving for them is a simple matter of solving linear equations when there are n points to interpolate. With fewer than n points, the problem is underconstrained and cannot be solved with such a large k. With more than n points, the problem is overconstrained and can only be solved in a least-squares sense. This is easy to do with standard linear algebra techniques. However, one must make sure that every basis function overlaps with at least one observation, or the matrix will be rank-deficient and the equations unsolvable. Unfortunately, B-splines have a tendency to overfit curves in data-impoverished conditions. Such reconstructions can show large oscillations in an attempt to exactly intercept every observed data point. This can be especially problematic with microarray data, which are already inherently noisy. The solution we use is to solve for the control points of a low-order spline, and then use those control points for a higher-order one. Such a spline will tend to fall within the convex hull created by the lower-order spline. We call these smoothing splines, and refer to B-splines solved with conventional methods as intercepting splines. Each possible alignment we consider partitions the series into m segments, where the ith segments of the two series correspond to one another. Our dynamic programming method tries to find a partitioning of the series that reveals the maximal similarity between them. As discussed earlier, we want to take into account that the nature of the relationship between the two series may vary in different segments. 
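The least-squares regime described above (more observations than basis functions, solvable with standard linear algebra) can be sketched with a direct Cox-de Boor implementation. The knot layout and function names here are our own illustrative choices, not the authors' code:

```python
import numpy as np

def bspline_basis(i, k, x, knots):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis function
    of order k (piecewise degree k-1) over the given knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (x - knots[i]) / d1 * bspline_basis(i, k - 1, x, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - x) / d2 * bspline_basis(i + 1, k - 1, x, knots)
    return left + right

def fit_bspline(times, values, knots, k=4):
    """Least-squares control points of an order-k B-spline: with more data
    points than basis functions the system is overconstrained, which is the
    relaxed (non-intercepting) regime discussed in the text."""
    n_bases = len(knots) - k
    B = np.array([[bspline_basis(i, k, t, knots) for i in range(n_bases)]
                  for t in times])
    coefs, *_ = np.linalg.lstsq(B, np.asarray(values, dtype=float), rcond=None)
    return coefs

def eval_bspline(coefs, x, knots, k=4):
    """Evaluate the fitted spline s(x) = sum_i C_i * b_{i,k}(x)."""
    return sum(c * bspline_basis(i, k, x, knots) for i, c in enumerate(coefs))
```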
For example, it may be the case that the first part of the expression response occurs more slowly in one treatment than in a similar treatment. Recall also that the segments do not have to cover the entirety of both series\u2014one of the series may be \u201cshorted.\u201d Each possible alignment we consider for two given time series (the query and the database series) partitions the series into segments. We use the term stretching to refer to distortions in the rate of some response, and the term amplitude to refer to distortions in the magnitude of the response. In addition, the alignment may be shorted so that the full query is aligned with only a partial database series. To determine the similarity between a query time series q and a particular database series d, we can calculate how likely it is that q is a somewhat distorted exemplar of the same process that resulted in d. In particular, we can think of a generative process that uses d to generate similar expression profiles. We can then ask how probable q looks under this generative process. Given this generative-process idea, we calculate the probability of a particular alignment of query q given database series d, where m is the number of segments in the alignment, q_i and d_i refer to the expression measurements for the ith query and database segments respectively, and s_i is the stretching value and a_i is the amplitude value for the ith segment. The location of each segment pair is assumed to be given here. 
P_m represents a probability distribution over the number of segments in an alignment, up to some maximum number M of allowed segments. P_s represents a probability distribution over possible stretching values for a pair of segments, P_a represents a probability distribution over possible amplitude values, and P_e represents a probability distribution over expression observations in the query series, given the database series and the stretching and amplitude parameters. To represent P_s, we use a discretized version of a distribution chosen because it is a variation of the log normal distribution that is symmetric around one, such that P(x) = P(1/x). Thus, for example, stretching some expression response by a factor of two is equiprobable to compressing it by a factor of two. This symmetry property means that it does not matter which series we consider to be the query and which we consider to be from the database. As we discuss in the next section, our dynamic programming algorithm only allows segments to begin and end at a limited number of points. Thus, our distribution is actually discretized so that probability mass is allocated only to possible stretching values, and then renormalized. We use a similar distribution to represent P_a, the distribution of amplitude values, since we also want to have P(x) = P(1/x) symmetry with these values. Thus a twofold increase in an expression response is treated as equiprobable to a twofold decrease. To calculate P_e, we transform our representation of d_i using the given stretching and amplitude values, and then ask how probable q_i appears when we use this transformed d_i series as a model. Let us first consider a simple case in which our time series have only one gene, and we are mapping only one point from the query segment q_i to the database segment d_i. Let t represent a time coordinate in the segment q_i, and let ql_i and qr_i denote the leftmost and rightmost time coordinates in the ith query segment. Let dl_i and dr_i denote the corresponding bounding time coordinates for the ith database segment. 
Then we can map a time coordinate t from segment q_i into the corresponding coordinate in d_i, where the stretching value s_i is defined by the ratio of the segment lengths. Our model for \u201cgenerating\u201d points in the query series from a point in the database series is a Gaussian centered at the database point. Let p represent the probability density function of this Gaussian, where \u03bc is the mean and \u03c3_e is the standard deviation of the Gaussian. We can then compute the probability of generating a query point q_i(t) located at time t. In other words, we center a Gaussian on the expression level at the mapped time coordinate in the database series, and ask how probable the scaled expression value from the query looks at that time coordinate. To generalize this calculation to multiple observations in the query series, we make the simplifying assumption that the observations are independent, where n_i is the number of query observations in segment i. Each of our observations represents measurements for hundreds of genes. We therefore generalize the description above by having p be a multidimensional Gaussian, with one dimension for each gene measured. In our current work, we treat the genes as independent of one another given the time point. Thus the covariance matrix for this Gaussian is zero on all of the off-diagonal terms. \u03c3_e represents variation in expression measurements that is due to technical and biological variability. Thus, we estimate the standard deviation for each gene by considering the variance in a sample that consists of all the replicated experiments in the database. In addition to considering the likelihood of the query series under the assumption that it exhibits a similar response to the given database series, we also consider its likelihood under a null model, in a manner analogous to the assumption of unrelated sequences in the derivation of substitution matrices for protein sequence alignment. 
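The per-segment emission term described above, a Gaussian centered on the amplitude-scaled database value at the linearly mapped time coordinate, can be sketched as follows (single-gene case; all helper names are ours, not the authors'):

```python
import math

def map_time(t, ql, qr, dl, dr):
    """Linearly map a time coordinate in the query segment [ql, qr]
    onto the database segment [dl, dr]."""
    return dl + (t - ql) * (dr - dl) / (qr - ql)

def segment_log_prob(query_pts, db_curve, ql, qr, dl, dr, amplitude, sigma):
    """Log probability of query observations (t, value) in one segment,
    under a Gaussian centered at amplitude * db_curve(mapped time).
    Observations are treated as independent, as in the text."""
    log_p = 0.0
    for t, v in query_pts:
        mu = amplitude * db_curve(map_time(t, ql, qr, dl, dr))
        log_p += (-0.5 * ((v - mu) / sigma) ** 2
                  - math.log(sigma * math.sqrt(2 * math.pi)))
    return log_p
```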
The notion of a null model here is one that generates alignments by randomly picking observations from the database to align with the query sequence. The rationale for using such a null model is analogous to the use of a model of unrelated sequences in sequence alignment. The value of a null model for our application is that it enables alignments of differing lengths, including shorted alignments, to be compared on an equal footing. Under our scoring function, which incorporates the null model, segments have a positive score only if the database series in that segment explains the corresponding segment from the query series better than the null model does. Let p represent the probability density function of a multidimensional Gaussian whose mean \u03bc_DB is the average expression level of the observations in the database, and whose standard deviation is \u03c3_e as before. We then estimate the probability of the ith segment of the query series under the null model using this Gaussian. Since our null model assumes that there is only a single segment with no amplitude change or stretching, we can compute the probability of the entire query series q directly. Putting together the terms above, we can score a given alignment based on the log of the likelihood ratio of the query series under the \u201cdatabase series\u201d model versus the query series under the null model. Up to now we have described this process in terms of using a database series to generate the query series. However, we want our alignment method to be symmetric so that it does not matter which series we consider to be the query and which we consider to be from the database. Due to the last two terms, this will not necessarily be the case using the scoring function defined above. 
Therefore, we modify the scoring function so that it also considers using the query series to generate the database series. Here the reverse emission term is calculated in an analogous manner to P_e, but the inverses of s_i and a_i are used to generate observations in the database series. Given a pair of time series, we do not know a priori which alignment is optimal. However, we can find the optimal alignment using dynamic programming. The following algorithm takes as input two time series, termed q and d, both of which are represented by regularly spaced observations (or interpolated pseudo-observations) of the gene expression values. In particular, given a segment pair (q_i, d_i), we can calculate its score as described above. \u03b3 represents the best score found with i segments that align the query subseries from time 0 to x with the database subseries from time 0 to y. As above, x and y must be selected from the given observations in the two series. The basic idea is that in order to determine \u03b3 for an alignment with i segments, we look through all \u03b3 values for alignments with i - 1 segments and extend each with one additional segment pair. In the VAR model written in companion form, the coefficient matrices vanish for l < 0 and J0 is an (r \u00d7 r) matrix of zeros. Notice that, if r = 1, we have the VAR(1) model and the asymptotic covariance simplifies accordingly; m is usually a (p \u00d7 1) vector of zeros. Under the null hypothesis, the statistic in (59) has a limiting \u03c72(d) distribution, where d = rank(C) gives the number of linear restrictions. This test is useful to identify, in a statistical sense, which gene (predictor variable) is Granger causing another gene (response variable). Here, two methods to estimate measurement error are proposed: one for when technical replicates are available, and another for when they are not. When technical replicates are available, measurement error estimation may be performed by applying a strategy extending the methods described by Dahlberg (1940), regressing one replicate on the other on the log scale, i.e., log(W') = f(log(W)) + \u03b51. Notice that the logarithm was used as a variance stabilizer (due to the high variance observed in microarray data). 
This is a common practice in microarray analysis. 1. Perform a non-linear regression, such as splines smoothing, between log(W') and log(W), i.e., log(W') = f(log(W)) + \u03b51; 2. Apply the splines smoothing again in the reverse direction, i.e., log(W) = g(log(W')) + \u03b52, for i = 1,..., m, where m is the number of spots in the microarray, also in the presence of heteroscedasticity; 3. Calculate the measurement error variance from the residuals of these smoothed fits. However, unfortunately, technical replicates are not always available. For this case, we have developed a strategy based on negative control probes and housekeeping genes frequently provided in commercial microarrays. Technically, housekeeping genes and negative controls should not change their expression levels. Therefore: 1. Let S be the set of all probes in the microarray and H be the set of housekeeping genes and negative controls; calculate the mean and variance for each probe of S and H. 2. Perform a splines smoothing in both sets of probes separately, i.e., var(H) = f(mean(H)) + \u03b51 and var(S\\{H}) = g(mean(S\\{H})) + \u03b52, where H is a matrix containing the expression values of each housekeeping gene and negative control in each row and S\\{H} is a matrix containing the expression values of the remaining set of probes in each row. The functions f and g may be represented by a linear combination of spline functions \u03d5j(\u00b7), where d is the number of knots used in the spline expansion. mean(H) and var(H) (or mean(S\\{H}) and var(S\\{H})) are vectors containing the mean and variance values of each row of H (or S\\{H}), respectively. In this step, the smoothed curves estimate the variance as a function of the mean expression level in each set. 3. Divide the smoothed curve obtained for H by the one obtained for S\\{H} to estimate the proportion of the total variance attributable to measurement error. In order to evaluate the behavior of both the standard and proposed methods, we have conducted two simulations with small, moderate and large sample sizes. Computations were performed in the R software. For each scenario, let x and y be gene expression values where one is interested in examining whether a certain gene xi is linearly correlated to gene y (q = 1), partialized by other genes. 
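For the replicate-based route, the classical Dahlberg estimator that the authors extend estimates the measurement error variance from paired technical replicates; a minimal sketch:

```python
def dahlberg_error_variance(rep1, rep2):
    """Dahlberg's estimator: with paired measurements z1, z2 of the same
    quantity on m spots, the measurement error variance is estimated as
    sum((z1 - z2)^2) / (2 * m)."""
    m = len(rep1)
    return sum((a - b) ** 2 for a, b in zip(rep1, rep2)) / (2 * m)
```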
In order to evaluate the performance of both the usual and corrected OLS methods, a controlled structure was defined in which the observed variables Xi and Y are the true variables contaminated by additive error: xi, \u03b5 and \u03f51 are normally distributed, where \u03b5 is the intrinsic biological random variation and \u03f51 is the measurement error. In the time series case, the data has some peculiarities which are not present in independent data. Time series data are known to be autocorrelated and also contemporaneously correlated (contemporaneous correlation between time series). Considering these characteristics, a structure similar to the one described in the previous section was designed, with Xt and Yt being gene expression time series, where one is interested in verifying whether a certain gene x_{i,t} Granger causes gene y_t (q = 1). This problem can be modeled by a VAR process of order one, with the observed variables Xt and Yt again defined as the true series plus normally distributed error terms: \u03b5 is the intrinsic biological random variation and \u03f51 is the measurement error. The standard and proposed OLS methods were also applied to lung cancer gene expression data. AF has made substantial contributions to the conception, design and implementation of the study, and has also been responsible for drafting the manuscript. AGP has made substantial contributions to the development of the methods. JRS has made contributions to data analysis. SM has discussed the results and critically revised the manuscript. All authors read and approved the final version of the manuscript. Here, we prove equation (29), i.e., the asymptotic variance of \u03b2 in the multivariate case with no correlated errors. Consider the following model: Xi and Yi are the observed vectors with dimensions p \u00d7 1 and q \u00d7 1, respectively, \u03b1 is the model intercept (q \u00d7 1), \u03b2 is a (q \u00d7 p) matrix of slope parameters, and \u03b5i is a white noise vector with mean zero and covariance matrix \u03a3\u03b5. 
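The attenuation effect that motivates correcting OLS for measurement error, namely that additive error on the predictor shrinks the slope toward zero by the factor var(x)/(var(x) + var(error)), can be checked with a small simulation. All names here, and the known-variance correction at the end, are our own illustrative choices, not the authors' estimator:

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    xc = x - x.mean()
    return float(np.dot(xc, y - y.mean()) / np.dot(xc, xc))

rng = np.random.default_rng(0)
n = 200_000
x_true = rng.normal(0.0, 1.0, n)             # true expression, var = 1
y = 0.8 * x_true + rng.normal(0.0, 0.5, n)   # intrinsic biological noise
x_obs = x_true + rng.normal(0.0, 1.0, n)     # additive measurement error, var = 1

naive = ols_slope(x_obs, y)        # attenuated toward 0.8 * 1/(1+1) = 0.4
corrected = naive * (1.0 + 1.0) / 1.0  # rescale by the known reliability ratio
```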
The joint distribution of \u03f5i1, \u03f5i2, \u03b5i and xi is multivariate normal. In this section, we investigate the asymptotic distribution of the proposed estimator. The proof idea, similar to that presented in the literature, has two parts. In the first, the elements of vec(\u03b2\u22a4) can be written as linear combinations of a vectorial mean. In the second one, we must demonstrate that this vectorial mean has an asymptotic normal distribution. Therefore, we need some auxiliary results for proving the asymptotic result, which are exposed in the two propositions below. Proposition 1: Under the model (61) and the conditions (62), the proposed estimator admits an expansion with remainder term bn, where Wi = (\u03b5i + \u03f5i2 - \u03b2\u03f5i1) \u2297 (xi - \u03bcx + \u03f5i1) - \u03a8, \u03a8 = (Iq \u2297 \u03b2\u22a4), and n bn is limited in probability when n diverges; this implies that the remainder of order n-1 goes to zero in probability when n increases. Proof: Define \u03b2k as the coefficients associated with the kth element of the vector yi. Then vec(\u03b2\u22a4) may be rewritten in terms of the observed variables, and it follows that the estimation error is a sum of terms Wki = (xi - \u03bcx + \u03f5i1)\u03d1ki - \u03a8k. Hence, it follows that the result holds with Wi = (\u03b5i + \u03f5i2 - \u03b2\u03f5i1) \u2297 (xi - \u03bcx + \u03f5i1) - \u03a8 and \u03a8 = (Iq \u2297 \u03b2\u22a4). Proposition 2: Under all conditions stated in this paper, the mean of the Wi is asymptotically normal. Proof: Notice that the expectation of Wi is equal to zero for all i. Then, defining xi = \u03b4\u22a4Wi, we have that E(xi) = 0 and Var(xi) = \u03b4\u22a4E(WiWi\u22a4)\u03b4, with Fi = (\u03b5i + \u03f5i2 - \u03b2\u03f5i1)(\u03b5i + \u03f5i2 - \u03b2\u03f5i1)\u22a4. 
Thus, using the fact that the random quantities have independent normal distributions, x1, ..., xn is an iid sequence and we can use the central limit theorem. That is, with V = \u03b4\u22a4T\u03b4 and \u03b4 \u2260 0r, the result follows by the Cramer-Wold device. Then, by Propositions (1) and (2), the asymptotic normality of the proposed estimator follows. Consider the following model: Zij = \u03bci + \u03f5ij, where Zij is the measure obtained in one experiment (microarray), i is the sample index, i = 1,..., m, m is the number of spots in the microarray, j is the replicate number, \u03bci is the unknown true value of the measure and \u03f5ij is the measurement error. Then, assume that E(\u03f5ij) = 0 and Var(\u03f5ij) = \u03b4\u03f5^2. Notice that the lower the standard deviation of the measurement error (\u03b4\u03f5), the lower the measurement error. Assuming that there is no bias (systematic error), one intuitive estimator for the measurement error variance follows from the differences between replicates, as proposed by Dahlberg."}
+{"text": "An important component of time course microarray studies is the identification of genes that demonstrate significant time-dependent variation in their expression levels. Until recently, available methods for performing such significance tests required replicates of individual time points. This paper describes a replicate-free method that was developed as part of a study of the estrous cycle in the rat mammary gland in which no replicate data was collected.A temporal test statistic is proposed that is based on the degree to which data are smoothed when fit by a spline function. An algorithm is presented that uses this test statistic together with a false discovery rate method to identify genes whose expression profiles exhibit significant temporal variation. The algorithm is tested on simulated data, and is compared with another recently published replicate-free method. The simulated data consists both of genes with known temporal dependencies, and genes from a null distribution. The proposed algorithm identifies a larger percentage of the time-dependent genes for a given false discovery rate. Use of the algorithm in a study of the estrous cycle in the rat mammary gland resulted in the identification of genes exhibiting distinct circadian variation. These results were confirmed in follow-up laboratory experiments.The proposed algorithm provides a new approach for identifying expression profiles with significant temporal variation without relying on replicates. When compared with a recently published algorithm on simulated data, the proposed algorithm appears to identify a larger percentage of time-dependent genes for a given false discovery rate. The development of the algorithm was instrumental in revealing the presence of circadian variation in the virgin rat mammary gland during the estrous cycle. In recent years, there has been considerable interest in using gene expression microarray data to study the dynamic behavior of cells. 
Microarrays allow researchers to take a \"snapshot\" of the state of a cell by measuring the mRNA expression levels of thousands of genes simultaneously. By taking multiple such \"snapshots\" at different times, one gains a dynamic picture of how expression levels change over time. A review article by Bar-Joseph gives an overview of this area. Based on this insight, we designed a microarray study of the estrous cycle of the rat mammary gland in which microarray data were collected at distinct time points without replicates. As part of this study, we developed an algorithm for identifying genes with significant temporal variation that does not rely on replicates. The algorithm is based on fitting the data with B-splines, which has a smoothing effect. A test statistic was developed that measures the magnitude of this smoothing effect relative to the overall variation in the data. This test statistic is then used in a false discovery rate procedure to identify significant genes. Our algorithm, referred to below as Method 1, has some notable similarities with a method recently proposed in . The first step is to define the \"spline collocation\" matrix S, which stores the values of the basis functions evaluated at the sample time points; the entries of S are defined by Smj = bm(tj). This representation is particularly convenient for fitting splines to a set of data points. The goal is to determine for each gene the spline coefficients which give the best approximation of the log-expression profile Yi in the least squares sense. The least-squares approximation is then calculated by the equation , where S+ denotes the Moore-Penrose pseudoinverse of S (see ), is the vector of interpolated function values, ϕi(t) is the approximating spline function for gene i, and Var denotes the variance. Intuitively, a small value of ρ corresponds to a large smoothing effect, which suggests that any long-term temporal trends are small relative to the overall variation in the log-expression levels.
In contrast, a large value of ρ indicates the presence of a meaningful temporal trend. To determine which genes demonstrate significant temporal variation, we use a false discovery rate procedure in conjunction with the test statistic ρ. To apply this procedure, it is necessary to approximate an expected distribution for ρ under the assumption that there is no temporal variation. This is accomplished by creating a permuted data set in such a way that columns that were originally close together become far apart: a permutation vector π is specified, which is simply a rearrangement of the integers 1, ..., T, and the permuted data set is defined by , where πj is the jth component of π. The ρ statistic is calculated for each \"gene\" in this permuted data set. The resulting values are then sorted, yielding an estimated null distribution. The estimated distribution is used to calculate p-values for each gene as follows: for gene i, the p-value is calculated by , where Li is the number of ρ values from the permuted data set that are larger than ρi. Finally, a false discovery rate procedure is applied . The simulation tests were performed as follows. In the p-value and FDR tests, data were generated using time points t = . In the p-value tests, we generated 9 data sets, each consisting of the log-expression values of 2000 genes generated from the following model: , where α and ω are parameters that control the magnitude and frequency of the time-dependent variation in the data, and N is the Gaussian distribution with mean 0 and variance 1. The null data set was generated using α = 0. The other 8 data sets were generated using all combinations of the parameter values α = 1 or 2 and ω = .1, .25, .5 and 1. In the FDR tests, we generated 2500 time-dependent genes using Equation (5), with the same combinations of α and ω that were used in the p-value tests, and 2500 genes from the null distribution, for a combined data set of 5000 simulated genes.
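A minimal sketch of the test statistic ρ, using a piecewise-linear (hat-function) basis as a stand-in for the paper's general B-spline basis; knot placement, random seeds and signal shapes are illustrative assumptions.

```python
import numpy as np

def collocation(t, knots):
    # Spline collocation matrix S with S[j, m] = b_m(t_j), here for a
    # piecewise-linear "hat" basis rather than the paper's B-splines.
    S = np.zeros((len(t), len(knots)))
    for m, c in enumerate(knots):
        l = knots[m - 1] if m > 0 else c - 1.0
        r = knots[m + 1] if m < len(knots) - 1 else c + 1.0
        S[:, m] = np.clip(np.minimum((t - l) / (c - l), (r - t) / (r - c)), 0.0, 1.0)
    return S

def rho(y, S):
    # Least-squares spline fit via the Moore-Penrose pseudoinverse S+,
    # then rho = Var(fitted values) / Var(data): small rho = strong smoothing.
    fit = S @ (np.linalg.pinv(S) @ y)
    return fit.var() / y.var()

t = np.linspace(0.0, 20.0, 33)
S = collocation(t, np.linspace(0.0, 20.0, 9))   # 9 evenly spaced knots
rng = np.random.default_rng(1)
noise = rng.normal(size=t.size)                            # null gene: no trend
signal = np.cos(0.5 * t) + 0.2 * rng.normal(size=t.size)   # slow trend + noise
print(rho(noise, S) < rho(signal, S))
```

Because the hat functions sum to one, the fitted values are a projection onto a subspace containing the constants, so ρ always lies in [0, 1]: most of the variance of pure noise is smoothed away, while a slow trend survives the fit.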
For each method, we determined for each gene the smallest fdr rate that would result in that gene being selected. By sorting the genes according to these threshold fdr rates, we could easily calculate the number of true discoveries and the number of false discoveries that would result from every possible fdr rate. These tests were performed for the same 8 combinations of α and ω. Additional tests used α = 1 and ω = 1, with T evenly-spaced time points ranging from tmin = 0 to tmax = 20; with this choice of parameters, the cosine function in (5) cycles roughly 4 times over the range of time points. Both methods were run on the simulated data set using M + 1 evenly spaced knots ranging from tmin = 0 to tmax = 20. For Method 1, the permutation vector π was chosen according to the procedure described in Section 2.1.3. For Method 2, a sample size of 500 was used for the bootstrapping procedure. For each run, we calculated the average of the estimated p-values for all genes in the data set. The tests were repeated for all combinations of T ∈ {9, 17, 25, 33, 49, 65}, K ∈ {2, 3, 4}, and M = (T - 1)/R for R ∈ {2, 4, 6, 8}. Note that the number of knots M + 1 used in each case depends on the number of time points T and the ratio R: since M = (T - 1)/R, smaller values of R correspond to larger numbers of knots. In the sensitivity tests, we generated 500 time-dependent genes from (5) with . μg of amplified IVT product was fragmented, and the quality of both the IVT product and the fragmentation product were assessed using the Agilent Bioanalyzer system. All samples passed and were subsequently hybridized to the Rat RAE_230 2.0 Affymetrix microarray chips. Hybridized chips were scanned, and data were collected and scaled to a target gene intensity of 175 using GeneChip Operating Software™ (GCOS) version 1.1 . Initial quality assessment of all scanned chips was also performed using GCOS v1.1.
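The permutation p-value (Li divided by the number of permuted ρ values) and a false discovery rate cut can be sketched as below. The Benjamini-Hochberg step-up rule is used here as a generic stand-in, since the paper's exact FDR procedure is cited rather than spelled out; all numbers are illustrative.

```python
import numpy as np

def perm_pvalues(rho_obs, rho_null):
    # p_i = L_i / N, where L_i counts permuted rho values >= rho_i
    rho_null = np.sort(rho_null)
    L = rho_null.size - np.searchsorted(rho_null, rho_obs, side="left")
    return L / rho_null.size

def bh_reject(p, fdr):
    # Benjamini-Hochberg step-up procedure at false discovery rate `fdr`
    order = np.argsort(p)
    thresh = fdr * np.arange(1, p.size + 1) / p.size
    below = np.flatnonzero(np.sort(p) <= thresh)
    k = below.max() + 1 if below.size else 0
    reject = np.zeros(p.size, dtype=bool)
    reject[order[:k]] = True
    return reject

p = np.array([0.001, 0.009, 0.04, 0.2, 0.9])
print(bh_reject(p, 0.05))  # rejects only the two smallest p-values
```

Sorting the genes by the smallest fdr at which each would be selected, as the text describes, is equivalent to scanning this cut over all fdr values.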
Compiled data in the form of 32 individual CEL files, the primary output of scanned Rat RAE_230 2.0 microarray chips, were imported to GeneSpring for analysis using the native probe-level GC-Robust Multi-array Average (GC-RMA) algorithm. Incomplete and ambiguous data were discarded, leaving samples at the following 31 time points (measured in hours after midnight of the first day of the estrous cycle): 7, 7.5, 10, 13, 13.2, 14.6, 16, 17.6, 31.6, 32.8, 34.5, 35.5, 37.1, 38.3, 41.6, 43, 57, 58, 59.8, 61.3, 62.5, 64.2, 65.6, 67.1, 79.3, 80.9, 82.5, 83.9, 85.3, 86.3, 88.1. (Time point 40 was discarded because the data appeared corrupted.) After processing, consensus expression levels were available for each gene at each time point. In order to minimize the possible adverse impact of low-level noise on the analysis, any gene that had a consensus expression level reading of less than 10 at any time point was deleted. This procedure resulted in a data set consisting of 21044 genes at 31 time points. Raw data are deposited in the National Center for Biotechnology Information Gene Expression Omnibus (GEO series GSE12289). The data analyzed in the estrous cycle study were generated as follows. Sprague-Dawley female rats were obtained from Taconic Farms as adult virgins 70 +/- 3 days of age. Vaginal lavages were performed daily to identify rats with regular 4-day estrous cycles, as previously described . Prior to applying our algorithm, we performed the following normalization procedure to the expression values: Yij = log Xij - μi, where Xij is the consensus expression level for gene i at the jth time point, and μi is the average value of log Xij for the ith gene. The algorithm fit the normalized data Yij using a 2nd-order (piecewise linear) spline defined with the following knot sequence: 7, 7, 17.6, 31.6, 43, 58, 67, 79.3, 88.1, 88.1.
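The normalization step - log-transforming the consensus levels and centering each gene by its mean log level - can be sketched as follows (natural log is assumed here; the paper does not state the base, and the array sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Consensus expression levels: 4 genes x 31 time points (all >= 10,
# since lower readings were deleted in the filtering step).
X = rng.uniform(10.0, 1000.0, size=(4, 31))

logX = np.log(X)
mu = logX.mean(axis=1, keepdims=True)  # mu_i: mean log level of gene i
Y = logX - mu                          # Y_ij = log X_ij - mu_i

print(np.allclose(Y.mean(axis=1), 0.0))  # each gene is centered
```

Centering removes gene-specific baseline levels so that the spline fit and the test statistic respond only to variation over time.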
These knots correspond to the first and last time point collected for each day. The null distribution of the test statistic was calculated using the permutation vector π = . Significant genes were identified using a false discovery rate of f = .25. The genes were clustered by applying a hierarchical clustering method to the spline coefficients calculated by equation (3). A complete linkage method was used, and the distance between two genes i and j was defined using the Euclidean distance . In the validation study, two genes were selected for further study: Per1 (period homolog 1) and BMal (brain-muscle-ARNT-like protein). For these two genes, quantitative real-time PCR was performed, as described in . SB and MN contributed the initial concept for this work and obtained funding for the experimental work. PS supervised the rat experiments and MR performed the RNA extraction, Affymetrix benchwork and initial microarray analysis using GeneSpring. Weston Porter's lab performed the quantitative real-time PCR analysis of the Per1 and Bmal genes. SB contributed the major mathematical analysis and the paper was written with additional insights from the other authors. All authors read and approved the final manuscript."}
+{"text": "Biological networks are highly dynamic in response to environmental and physiological cues. This variability is in contrast to conventional analyses of biological networks, which have overwhelmingly employed static graph models that stay constant over time to describe biological systems and their underlying molecular interactions.To overcome these limitations, we propose here a new statistical modelling framework, the ARTIVA formalism (Auto Regressive TIme VArying models), and an associated inferential procedure that allows us to learn temporally varying gene-regulation networks from biological time-course expression data. ARTIVA simultaneously infers the topology of a regulatory network and how it changes over time. It allows us to recover the chronology of regulatory associations for individual genes involved in a specific biological process .We demonstrate that the ARTIVA approach generates detailed insights into the function and dynamics of complex biological systems and exploits time-course data efficiently in systems biology. In particular, two biological scenarios are analyzed: the developmental stages of Drosophila melanogaster and the response of Saccharomyces cerevisiae to benomyl poisoning.ARTIVA does recover essential temporal dependencies in biological systems from transcriptional data, and provides a natural starting point to learn and investigate their dynamics in greater detail. Striking examples of dynamic biological systems include the development of Drosophila melanogaster - which is segmented into different life stages: embryogenesis, larva, pupa and adult - and the adaptation of cellular organisms (the yeast Saccharomyces cerevisiae for instance) to growth defects and cellular damage induced by environmental stresses. Because they are extensively studied, considerable large-scale functional screening data exist for these examples.
Molecular interactions and regulatory networks underlie the development and functioning of biological systems -3. But while a growing number of studies report detailed and time-resolved analyses of regulatory and signalling processes ,5, mapping them at a system level remains difficult. From available data, in-silico methods can generate hypotheses about biochemical and molecular mechanisms ,7 and guide further experimental work. In most existing analyses, however, the network topology, i.e. the sets of nodes and edges, stays constant over time. Inferring the temporal changes in biological networks is an important statistical challenge , but it has only recently begun to receive attention. Serious attempts to reconstruct dynamic networks whose topology changes with time started in 2005 ,10. Yoshida et al. employed a model based on a stochastic transition matrix that requires an estimation of many parameters. In the approach of Talih and Hengartner , the number of regulatory phases is fixed a priori and the network evolution is restricted to changing at most a single edge at a time. More recently (2007-2008), methods in which the number of distinct regulatory phases is determined a posteriori have been proposed. Fujita et al. proposed an ansatz which switches between a convex optimization approach for determining a suitable candidate graph structure and a dynamic programming algorithm for calculating the segmentation of the time into distinct phases, i.e. the sets of successive time-points for which the graph structure remains unchanged. This time, the number of phases is explicitly determined, but it requires that the graph structure is decomposable. Finally, Robinson and Hartemink proposed a related approach; another method, regime-SSM, is divided into two steps. The main idea is to first cluster the genes that share the same temporal phases before inferring, in a second step, the network topology describing the regulatory associations between genes within each cluster using an expectation-maximization (EM) algorithm.
The approaches cited so far produce global network topologies with global changes, meaning that all the genes of the network change their regulatory inputs simultaneously. In reality, however, we would rather expect that each gene (or at most a subset of genes) has its own characteristic regulatory pattern. To that end, Rao et al. developed such a strategy, and Ahmed and Xing introduced an approach that solves l1-regularized logistic regression problems via convex optimization techniques. ARTIVA likewise infers, for each gene, (i) the topology of the regulatory network, and (ii) how it changes over time. In order to strike a balance between model refinement and the amount of information available to infer the model parameters, the ARTIVA model delimits temporal segments for each gene where the influence factors and their weights can be assumed homogeneous. For that we use a combination of efficient and robust methods: dynamical Bayesian networks (DBN) to model directed regulatory interactions between genes and Reversible Jump MCMC for inferring simultaneously the times when the network changes and the resulting network topologies. We evaluate the performance of ARTIVA on simulated data and illustrate our approach in the context of two different biological systems. We start by analyzing a commonly used dataset related to the developmental stages of Drosophila melanogaster and demonstrate the utility of our approach by a comparative analysis of the ARTIVA results with the TESLA results . We then analyze the response of Saccharomyces cerevisiae to benomyl poisoning. This dataset represents an important challenge for the inference of time-varying networks since (i) the number of time-points is extremely small (only 5 time points) and (ii) the expression values combine measurements obtained in wild-type and knock-out yeast strains.
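The l1-regularized logistic regression formulation mentioned above can be illustrated with a generic proximal-gradient (soft-thresholding) loop; this is a didactic sketch with made-up data, not the implementation used by Ahmed and Xing, and the sparse weight vector stands in for the presence or absence of network edges.

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, lr=0.1, steps=500):
    # Proximal gradient (ISTA) for l1-penalized logistic regression:
    # minimize mean log-loss + lam * ||w||_1. Zeros in w = absent edges.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w = w - lr * (X.T @ (p - y)) / n       # gradient step on the smooth part
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
# Outcome depends only on the first two predictors.
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)
w = l1_logistic(X, y)
print(np.round(w, 2))
```

The l1 penalty drives the coefficients of irrelevant predictors toward exactly zero, which is what makes this family of methods attractive for recovering sparse regulatory structures.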
The biological relevance of the results obtained with ARTIVA is finally assessed using functional annotations and transcription factor binding information. The challenge of inferring time-varying structures of gene regulation networks is only starting to be addressed, and in this paper we present the ARTIVA algorithm (Auto Regressive TIme VArying models), which is particularly well-suited for addressing the issues raised above. Starting from time-course gene expression data, ARTIVA performs a gene-by-gene analysis and infers simultaneously the changepoints and the network topology within each phase. Bayesian networks have become a popular framework for representing regulatory networks ,16, as the corresponding graph can be drawn and the joint distribution of gene expression levels written down, where P(a|b) denotes the probability of a conditional on b. Because BNs aim to represent the joint probability distribution (in our case for the expression levels of p genes), the corresponding graphical representation is limited to graphs which contain no cycles. With time-course measurements, this limitation can be overcome by employing a Dynamical Bayesian Network (DBN) formalism , with variables Xi(t), 1 ≤ i ≤ p and 1 ≤ t ≤ n. The joint probability distribution over the expression levels of all genes and at all times, P(X1(1), ..., Xp(1), ..., X1(n), ..., Xp(n)), is then partitioned into a product of conditional probabilities of the Markov form: the expression level of gene i at time t depends on the expression levels of genes r, ..., s at time t - 1. Genes r, ..., s are called the 'parents' of gene i and denoted by Pai. By making the time dependence of expression levels explicit, loops and feedback interactions can be represented simply by requiring only that the expression of gene i at time t is independent of all other genes at the same time t. In conventional DBN inference approaches it is assumed that the conditional dependencies in Eqn. (1), and hence the set Pai, are independent of time t. Of course it is possible to allow Xi(t) to depend on expression levels Xr(t - τ) with τ > 1, i.e.
allow for higher order dependencies. For computational reasons, however, our analysis is restricted to first order Markov processes. Let p be the number of observed genes and n the number of time-points at which expression levels are measured for each gene. In this study, the discrete-time stochastic process X = {Xi(t); 1 ≤ i ≤ p, 1 ≤ t ≤ n} is considered, taking real values and describing the expression levels of the p genes at n time-points. We start by modelling the gene expression levels X(t) = (Xi(t))1≤i≤p at time t probabilistically by a vector-autoregressive process: X(t) = A(t)X(t - 1) + B(t) + ε(t), where ε(t) follows the multivariate normal distribution centered at 0 with diagonal covariance matrix Σ(t). Note that diagonality of Σ ensures that the process describing the temporal evolution of gene expression -- here a first order autoregressive process -- can be represented by a Directed Acyclic Graph (DAG) as in Figure , i.e. no edges between nodes at the same time, and where the edges from time t - 1 to time t are defined by the set of non-zero coefficients in matrix A(t) . Furthermore, the measurement noise of gene i does not affect the expression measurements of the other genes, so off-diagonal elements in Σ can be set to 0. Both A(t) = (aij(t))1≤i,j≤p -- which is the adjacency matrix of the gene regulatory network -- and B(t) = (bi(t))1≤i≤p -- which is the baseline gene expression that does not depend on the parent gene regulatory controls -- are allowed to vary explicitly with time. This could for example reflect switching on or off of regulatory interactions, e.g. in response to developmental, physiological or environmental signals. Between two successive changepoints the network remains unchanged. Assuming k changepoints, the changepoints are denoted by , and for each time t in the phase h of gene i the
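The time-varying autoregressive model described here, X(t) = A(t)X(t - 1) + B(t) + ε(t) with the adjacency matrix A switching at a changepoint, can be simulated with a small sketch; dimensions, coefficient values and the changepoint position are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, cp = 3, 20, 10            # genes, time-points, changepoint position

# Phase 1 adjacency: gene 2 regulates gene 1 (entry a[0, 1]).
A1 = np.array([[0.0, 0.8, 0.0],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
# Phase 2 adjacency: gene 3 regulates gene 1 instead (entry a[0, 2]).
A2 = np.array([[0.0, 0.0, 0.8],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
B = np.zeros(p)                  # baseline expression, constant here
sigma = 0.1                      # diagonal (independent) noise level

X = np.zeros((p, n))
X[:, 0] = rng.normal(0.0, 1.0, p)
for t in range(1, n):
    A = A1 if t < cp else A2     # the network switches at the changepoint
    X[:, t] = A @ X[:, t - 1] + B + rng.normal(0.0, sigma, p)
```

Inference runs this generative picture backwards: given X, recover the changepoint position and the non-zero entries of A within each phase.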
i-th row of matrix A(t) and the coefficient bi(t) are assumed constant. For phase h of gene i, the parents of gene i include every gene j such that the corresponding coefficient is non-zero, and the expression of gene i is modelled in a regression framework as , where Xj(t - 1) is the expression level of gene j at time t - 1. This defines a multiple changepoint regulatory network, with changepoint positions for each gene i and phase h. All non-zero coefficients link genes i and j, and hence are good indicators of putative biological interactions between those genes. We want to infer the autoregressive time-varying network model, which belongs to the overall parameter space that is the union of the parameter spaces of all phases delimited by the k changepoints. To sample over all of the system's parameters given the observed data x, we used a Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) procedure. The principle of RJ-MCMC lies in constructing a reversible Markov chain sampler that can jump between parameter subspaces of different dimensions, thus allowing the generation of an ergodic Markov chain whose equilibrium distribution is the desired posterior distribution ,22. The probability distribution of x(1) observed at time-point t = 1 is denoted by Pr(x(1)). From the hierarchical structure of the overall parameter space, the joint probability distribution over all parameters can thus be written as the product presented in Figure . For each gene i, we construct a RJ-MCMC sampler that directly samples from the joint distribution , where and are respectively the prior probabilities of the number of changepoints ki and of the changepoint position vector ξi for gene i, and where is the regression model for phase h of gene i.
Finally, is the likelihood of the expression levels of gene i observed during phase h, and is a realization of the Gaussian distribution defined in Equation (4). In order to reinforce sparsity of the network, and following multiple changepoint approaches involving RJ-MCMC ,24, we assume ki to be distributed a priori as a truncated Poisson random variable with mean λ and maximum . λ and Λ can be interpreted as the expected number of changepoints and parent variables, respectively. Following , λ and Λ are drawn according to a Gamma distribution: the shape parameter α and the scale parameter β are chosen so that the prior probability decreases when the numbers of changepoints or parents increase, following Andrieu and Doucet's RJ-MCMC procedure for regression model selection . Similarly, the prior probability for the number of parents is a truncated Poisson distribution. The changepoint positions ξh(i) are assumed to be uniformly distributed conditional on ki. For the noise variance, with an Inverse Gamma prior with shape parameter ν0/2 and scale parameter γ0/2 and setting ν0 = 1 and γ0 = 0.1, we are close to Jeffreys' vague prior ∝ 1/(σhi)2. Given the parent gene set, the design matrix is built so that its (j + 1)-th column contains the observed values of gene j; δ2 represents the expected signal-to-noise ratio and is sampled according to an Inverse Gamma distribution, where the symbol † denotes matrix transposition. A noticeable advantage of the model is that the marginalization over the regression parameters (θi, σi) in the posterior distribution is analytically tractable, and the acceptance probability depends on the network topology only. Four move types are used: birth of a new changepoint (B); death of an existing changepoint (D); shift of a changepoint to a different time-point (S); and update of the regression model defining the network topology within the phases (R). These moves occur with probabilities bk for B, dk for D, vk for S and wk for R, depending only on the current number of changepoints ki and satisfying bk + dk + vk + wk = 1.
The changepoint birth and death moves represent changes from, respectively, ki to ki + 1 phases and ki to ki - 1 phases. We impose d0 = v0 = 0 and . In order to traverse the parameter space of unknown dimension, we thus propose four different update moves within the different phases. Proposed shifts in changepoint positions are accepted using a standard Metropolis-Hastings step, while regression model updates within phases invoke a second RJ-MCMC criterion, which was adapted from the model selection approach of Andrieu and Doucet . As proposed there, the moves B, D, S and R allow the generation of samples from probability distributions defined on unions of spaces of different dimensions, for both the number ki of changepoints and the number of parent genes. Given the a priori probabilities, the ARTIVA algorithm produces posterior probability estimations over the algorithm iterations for changepoint vectors and network topologies. These posterior probabilities give a detailed picture of all the results and allow in-depth analyses of the entire regulatory network architecture. In this study we use, in complement to posterior probabilities, the Bayes factor, i.e. the ratio of the posterior odds of an hypothesis over its prior odds : an hypothesis is (i) not supported when it has a Bayes factor below 3, (ii) positively supported for a Bayes factor between 3 and 20 and (iii) strongly supported for a Bayes factor over 20. The performance of ARTIVA is evaluated on synthetic and real data (see the following section) by selecting the network structure according to the following procedure. For each gene i we first choose the number ki of changepoints having the greatest Bayes factor. Then the ki changepoint positions having the highest Bayes factors are selected, and for each resulting phase we finally compute the Bayes factor for the possible parent genes and choose the ones with a Bayes factor greater than 3. For the simulation study , two types of expression data were produced.
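The Bayes factor selection rule - the ratio of posterior to prior odds, with the 3 and 20 support thresholds quoted above - can be written directly; the probability values passed in below are illustrative.

```python
def bayes_factor(posterior_prob, prior_prob):
    # Ratio of posterior odds to prior odds for a hypothesis
    post_odds = posterior_prob / (1.0 - posterior_prob)
    prior_odds = prior_prob / (1.0 - prior_prob)
    return post_odds / prior_odds

def support(bf):
    # Thresholds used in the paper: < 3 none, 3-20 positive, > 20 strong
    if bf < 3:
        return "not supported"
    if bf <= 20:
        return "positively supported"
    return "strongly supported"

# A hypothesis moving from 50% prior belief to 90% posterior belief:
print(support(bayes_factor(0.9, 0.5)))  # Bayes factor 9: positively supported
```

Working with odds ratios rather than raw posterior probabilities makes the support measure insensitive to how much prior mass the hypothesis started with.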
The first type -- referred to as Wild-Type (WT) simulations -- matches the 'Drosophila life cycle' data. This dataset contains time-series expression data of several genes, and the algorithm must find the correlations between unknown parent genes and each target gene. The second type -- referred to as Knock-Out (KO) simulations -- is equivalent to the 'benomyl' dataset. This dataset only contains time-series expression data of target genes in different genetic contexts: wild-type and knock-out mutants for several transcription factors (TFs). The simulation procedure for a given target gene is also presented in detail in Additional file . For each phase h of gene i, the noise ei(t) is drawn from a Gaussian distribution . The simulations vary: • the quantity of noise in the data, for all phases; • the size of the temporal phases (phasesize = 1, 2, ..., 5, 12); • for WT simulations only, the number of potential parent genes. This last factor is not relevant for KO simulations because the potential parent genes are obviously restricted to the transcription factors for which KO data is being generated. Regardless of the number of potential parent genes, a maximum of 5 edges from parent genes to a target gene is allowed. The first microarray dataset -- referred to as the 'Drosophila life cycle' data -- was produced by Arbeitman et al. . It includes the embryonic , larval (10 time-points) and pupal (18 time-points) stages and the first 30 days of adulthood (8 time-points). Expression data were collected from the Gene Expression Omnibus database: http://www.ncbi.nlm.nih.gov/geo/. 4005 genes with consistent annotation are used for the analysis. Potential parent genes were restricted to genes with known transcriptional activity based on Gene Ontology information . The second dataset -- referred to as the 'benomyl' data -- monitors the response of Saccharomyces cerevisiae cells . Parallel experiments were conducted in different genetic contexts: the wild type strain and knock-out (KO) strains in which the genes coding for different transcription factors connected to drug response, YAP1, PDR1, PDR3, and YRR1, were deleted.
For each yeast strain, the measured expression values for 5 time-points were obtained from the website: http://www.biologie.ens.fr/lgmgml/publication/benomyl. We only considered genes that (i) showed significant changes in mRNA levels during the time-course analysis in the WT strain and (ii) had less than 20% of missing expression measurement data in the four KO strains. The resulting expression table comprised data for 78 genes. Computations were run with a 2.66 GHz Intel(R) Xeon(R) CPU and 4 G RAM. Results for the recovery of changepoints and models, i.e. the topology of the network within the phases, are presented in Table . Changepoint detection remains accurate up to a noise standard deviation of σi = 1. As noise increases further, the ability of the algorithm to recover changepoints decreases in terms of sensitivity, but still, the changepoint sensitivity remains greater than 70% when the noise standard deviation reaches σi = 1.2. The WT data was generated with r = 8 repeated measurements for each time point, whereas the KO data were simulated with only 4 repeated measurements for each time point. That is the reason why the changepoint sensitivity with KO simulations starts to decrease at a smaller noise standard deviation compared to WT simulations. Nevertheless, the changepoint sensitivity is still greater than 80% even when noise reaches σi = 0.8. The number of measurements for each phase also plays an important role for the changepoint detection sensitivity. Indeed, during a phase reduced to a single timepoint, there are only r repeated measurements to estimate the autoregressive models. Interestingly, the ARTIVA algorithm here succeeds in finding the correct dynamic networks with a sensitivity value of 79% for a phase size of 1, in both WT and KO simulations (default noise standard error σi = 0.5). With phases of size 2, the changepoint sensitivity is greater than 90%. For all noise levels considered here the changepoint PPV is greater than 95%; furthermore changepoint PPV appears to be stable and not to be affected by the phase size either.
Knock-out data are usually collected for a restricted number of knock-out genes, so the number of possible parents is limited. However, wild-type experiments give expression time series data for a large number of genes at once. The number of proposed parents increases the dimension of the model, and the estimation procedure accuracy is expected to degrade as the dimension increases. Here, the changepoint sensitivity obtained with ARTIVA is still 54% when the parent genes are chosen from among a set of 40 proposed parents. The changepoint sensitivity goes up to 81% when the number of potential parents is reduced to 10. The changepoint PPV is only slightly affected by the number of proposed parents: the PPV is still greater than 75% when the number of potential parents is 40. To evaluate the performance of our ARTIVA approach, simulations were run in order to assess the impact of three major factors on the algorithm's performance: noise in the data, minimal length of phases, and number of proposed parent genes . Sensitivity and Positive Predictive Value (PPV) were calculated for the detection of changepoints and of models; the edge detection results are shown in Table . Simulation studies such as the one performed here do, of course, provide only partial insights into an algorithm's performance and robustness. They are nevertheless essential to gain confidence in the performance of novel algorithms and to develop understanding of their likely limitations. Together these results serve to illustrate the robustness of the ARTIVA algorithm. In particular, ARTIVA can deal with some of the generic problems encountered in real experimental data. It still performs well when the noise standard error is on the order of the mean value of the regression coefficients, when the number of measurements per phase is reduced to 8 or when the number of possible parents reaches 20.
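The sensitivity and positive predictive value reported throughout the simulation study can be sketched as follows; matching inferred changepoints to true ones by exact position is a simplification here (the authors' matching tolerance is not specified), and the positions are illustrative.

```python
def sensitivity_ppv(true_cps, found_cps):
    # Sensitivity = TP / (TP + FN): fraction of true changepoints recovered.
    # PPV         = TP / (TP + FP): fraction of reported changepoints correct.
    true_cps, found_cps = set(true_cps), set(found_cps)
    tp = len(true_cps & found_cps)
    fn = len(true_cps - found_cps)
    fp = len(found_cps - true_cps)
    return tp / (tp + fn), tp / (tp + fp)

# Two of three true changepoints found, plus one spurious detection:
sens, ppv = sensitivity_ppv({10, 25, 40}, {10, 25, 33})
print(sens, ppv)  # both 2/3
```

High PPV with lower sensitivity, as observed for ARTIVA under heavy noise, means the algorithm misses some changepoints but rarely reports false ones.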
At some point, the ARTIVA algorithm misses some changepoints, but the PPV is still very large, meaning that we can have great confidence in the changepoints having a high posterior probability. In light of the simulation analysis, we then apply our method to the well-studied expression datasets produced by Arbeitman et al. , who measured the expression of D. melanogaster genes during a complete time-course of development. The ARTIVA algorithm is run for each gene for 50,000 iterations, looking for parental relationships with the 10 transcription factors for which gene expression profiles were most highly correlated over any successive 10 time-points (see Methods). Out of the 4005 analyzed genes, 1704 (42%) were found to be involved in the time-varying networks spanning the whole Drosophila life-cycle . Interestingly, 2583 changepoints were also identified. Their distribution over the time-points and with respect to the developmental stages is shown in Figure . For the comparison, TESLA requires two parameters: λ1, which is a sparsity coefficient, and λ2, which is a smoothness penalty coefficient. Several combinations of parameters were tested (data not shown), and we finally retained the average values presented by the authors in their simulation study, i.e. λ1 = 0.01, λ2 = 1. The TESLA analysis was run using the same subset of Drosophila genes used with ARTIVA, and the 2583 most significant temporal changes identified with TESLA are compared to the 2583 ARTIVA changepoints . In the benomyl analysis, expression in the first phase differs from that in the second phase (Bayes factor = 14.22); this explains the detection of two changepoints. The results obtained for all other clusters are combined to obtain a global view of the time-varying regulatory network involved in benomyl stress response . The first group comprises, for instance, genes encoding vacuolar transporters (YCF1). The middle group also contains an important proportion of Yap1 targets (87%), which act at the level of the plasma membrane (FLR1 and FRM2) or encode proteins involved in response to toxins .
Yap1 activity in the last group partially overlaps with the actions of Pdr1 and Pdr3. Most of the genes in this group have unknown functions, but some of them are still labelled in the YEASTRACT database as targets of Yap1 (74%), Pdr1 (32%) and Pdr3 (20%). Finally, YRR1 deserves a special mention. Unlike the genes that encode the transcription factors Yap1, Pdr1 and Pdr3, the YRR1 gene is transcriptionally activated during the benomyl response. As a consequence, ARTIVA identified YRR1 (i) as a Yap1 target whose expression was induced 4 minutes after benomyl addition to the cell growth culture and (ii) as a parent for the genes SNG1 and YLL056C at 10 minutes. Interestingly, these observations highlight a sequential activity of the Yap1 and Yrr1 transcription factors together with an overlap of their targets. The ARTIVA approach allows us to reverse engineer the temporally varying structure of transcriptional networks by inferring simultaneously the times at which the regulatory inputs of genes change and the nature of these incoming inputs. Our approach is computationally efficient and can exploit powerful search heuristics to scan the space of potential incoming edges. Compared to other methodologies recently proposed in the literature, ARTIVA has the major advantage of combining efficient and well-tried techniques (Bayesian networks and an RJ-MCMC sampler) in order to solve several related problems. First, with ARTIVA there is no need for prior information regarding either the number of regulatory phases or the number of regulatory interactions between parent and target genes. Starting from uninformative priors, the posterior distribution for the number of changepoints, their positions and the regulatory models within each recovered phase is obtained directly from the ARTIVA runs. Also, ARTIVA allows the detection of regulatory phases for individual genes. 
Finally, ARTIVA differs from many approaches, such as those based on the Bayesian Dirichlet Equivalent (BDE) score in a dynamic context. We demonstrate the performance of the ARTIVA algorithm by (i) applying it to simulated data (Table ) and (ii) performing a comparative analysis of the ARTIVA and TESLA results. The 'Drosophila life cycle' data is representative of data used for classical regulatory network inference: successive gene expression measurements spanning a given biological process - here Drosophila development - in order to detect potential regulatory interactions from gene expression profiles. This data is particularly suited for the inference of a temporally varying regulation network, since (i) the number of time-points is large and (ii) the transitions between the distinct stages of Drosophila development {Embryo (E), Larva (L), Pupa (P), Adult (A)} are well-described in the literature, e.g. arresting development in a given state by selectively knocking down transcription factors or targets at a given developmental stage. The two biological networks presented in this study (Figures and 4) are based on few time-points, and no replicate data points are available. To manage the lack of data, we cluster genes with concordant transcription profiles and analyze them jointly with ARTIVA. This cluster analysis was possible because the maximal intracluster variability did not exceed 0.2. Incorporating prior knowledge could also be effective, by replacing the uniform prior for the edges with a prior favouring edges that correspond to the experimentally identified interactions. Here we considered Drosophila development and the yeast stress response, mainly because those are processes in which transcriptional regulations are highly dynamic. However, when considering systems that evolve more smoothly, or in the case of datasets with a small number of time points, it would be interesting to incorporate a regularization scheme into ARTIVA in order to favour slight changes from one phase to the next one. 
Such an approach has already been initiated elsewhere. It would also be valuable to integrate additional data types, e.g. transcription factor activities or other proteomic measurements. As no particular constraint is imposed on the changepoint positions or on the succession of network topologies within phases, the ARTIVA model appears to be highly flexible. SL conceived and implemented the first version of the ARTIVA algorithm and drafted the manuscript. JB optimized the ARTIVA source code and performed simulations. FD provided help with data analysis. MPHS and GL contributed equally to this work. MPHS provided help with the model selection formalism and drafted the manuscript. GL designed the experiments, performed data analysis and drafted the manuscript. All the authors read and approved the final manuscript. Supplementary Text S1 - Priors illustration and complete mathematical description of the RJMCMC procedure and of the Bayes factor computation. Supplementary Figure S1 - Principle of the simulation study. Supplementary Dataset S1 - Full edge list of the inferred time varying networks of the 'benomyl' data. Supplementary Text S2 - Supplementary results related to the 'benomyl' analyses. Supplementary Figure S2 - Expression measurements for the 18 clusters used in the 'benomyl' analyses."}
+{"text": "Time series gene expression data analysis is used widely to study the dynamics of various cell processes. Most of the time series data available today consist of only a few time points, making the application of standard clustering techniques difficult. We developed two new algorithms, ASTRO and MiMeSR, that are capable of extracting biological patterns from short time series gene expression data. They are inspired by the rank order preserving framework and the minimum mean squared residue approach, respectively, but differ from previous approaches in that they take advantage of the relatively small number of time points in order to reduce the problem from NP-hard to linear. Tested on well-defined short time expression data, we found that our approaches are robust to noise, as well as to random patterns, and that they can correctly detect the temporal expression profile of relevant functional categories. Evaluation of our methods was performed using Gene Ontology (GO) annotations and chromatin immunoprecipitation (ChIP-chip) data. Our approaches generally outperform both standard clustering algorithms and algorithms designed specifically for clustering of short time series gene expression data. Both algorithms are available at . In MiMeSR, Z2 is calculated as Z2 = A - Z1, and MiMeSR then identifies the submatrix of Z2 with constant values on rows across all time points. This step is easily performed by identifying the set of rows of Z2 such that (max - min) < \u03b5 on each row. For simplicity and without loss of generality, let us consider an example of a synthetic gene expression matrix A with a coherent-values cluster in it, corresponding to rows r1, r3 and r4: a new matrix Z2 is generated whose rows r1, r3, and r4 correspond to the submatrix with constant values on rows. Note that the same cluster will be constructed by using any of the rows r1, r3, or r4. 
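The Z2 = A - Z1 step above can be sketched as follows, under the assumption (one plausible reading of the text) that every row of Z1 repeats a chosen reference row of A, so that the rows of Z2 with spread below \u03b5 are exactly the rows of A that differ from the reference by a constant shift:

```python
import numpy as np

def coherent_rows(A, ref, eps):
    """Rows of A that differ from row `ref` by an (approximately) constant
    shift: form Z1 by repeating A[ref], set Z2 = A - Z1, and keep the rows
    of Z2 whose spread (max - min) is below eps.  Illustrative sketch."""
    Z1 = np.tile(A[ref], (A.shape[0], 1))
    Z2 = A - Z1
    spread = Z2.max(axis=1) - Z2.min(axis=1)
    return np.flatnonzero(spread < eps)

A = np.array([[1.0, 2.0, 3.0],    # r1
              [5.0, 1.0, 4.0],    # r2
              [2.0, 3.0, 4.0],    # r3 = r1 + 1
              [0.5, 1.5, 2.5],    # r4 = r1 - 0.5
              [9.0, 9.0, 0.0]])   # r5
cluster = coherent_rows(A, ref=0, eps=1e-6)   # rows 0, 2, 3 (i.e. r1, r3, r4)
```

As the text notes, using r3 or r4 as the reference row recovers the same cluster, since the shift relation is symmetric within the cluster.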
Therefore, after a cluster has been identified, its rows are not further considered in the construction of new Z1 matrices. This approach is guaranteed to identify all submatrices with minimum mean squared residue across all time point experiments. Note that, since the operation Z2 = A - Z1 is performed using all the rows of A during each iteration, and since we seek the set of rows of Z2 such that (max - min) < \u03b5, a gene is allowed to belong to more than one cluster. The biological equivalent of this notion is that a gene may be involved in more than one genetic pathway or be regulated by more than one transcription factor. For practical reasons, it is important to assess the effect of the \u03b5 parameter on the clusters that are identified by MiMeSR. This can be done by sensitivity analysis, in which the parameter \u03b5 is perturbed and the results are compared. For this analysis, it is usually sufficient to consider one or two values above and below the originally selected value of \u03b5. Only clusters that are consistently identified by MiMeSR as \u03b5 varies should be retained for further examination. Note that the number of genes in these clusters may also change; the user therefore needs to determine a rule for dealing with genes that may be dropped from the clusters as \u03b5 changes. The most conservative approach would be to retain only the genes that remain in the clusters for all values of \u03b5 around its selected value. It can easily be shown that the overall complexity of MiMeSR is ~O(NMK), where K is the number of minimum mean squared residue clusters in A. Note that K corresponds to the maximum number of constant-columns matrices that can be constructed using the rows of A without identifying redundant clusters. PVB and ABT designed the study, analyzed the results and wrote the paper. KVB and TM designed the web server and contributed to the writing of the paper. Supplementary materials. 
The following additional data are available with the online version of this paper: an additional file describing ASTRO and MiMeSR."}
+{"text": "The vast quantities of gene expression profiling data produced in microarray studies, and the more precise quantitative PCR, are often not statistically analysed to their full potential. Previous studies have summarised gene expression profiles using simple descriptive statistics, basic analysis of variance (ANOVA) and the clustering of genes based on simple models fitted to their expression profiles over time. We report the novel application of statistical non-linear regression modelling techniques to describe the shapes of expression profiles for the fungus Agaricus bisporus, quantified by PCR, and for E. coli and Rattus norvegicus, using microarray technology. The use of parametric non-linear regression models provides a more precise description of expression profiles, reducing the \"noise\" of the raw data to produce a clear \"signal\" given by the fitted curve, and describing each profile with a small number of biologically interpretable parameters. This approach then allows the direct comparison and clustering of the shapes of response patterns between genes and potentially enables a greater exploration and interpretation of the biological processes driving gene expression. Quantitative reverse transcriptase PCR-derived time-course data of genes were modelled. This non-linear regression approach allowed the expression patterns for different genes to be compared in terms of curve shape, time of maximal transcript level and the decline and asymptotic response levels. Three distinct regulatory patterns were identified for the five genes studied. Applying the regression modelling approach to microarray-derived time course data allowed 11% of the Escherichia coli features to be fitted by an exponential function, and 25% of the Rattus norvegicus features could be described by the critical exponential model, all with statistical significance of p < 0.05. 
\"Split-line\" or \"broken-stick\" regression identified the initial time of gene up-regulation, enabling the classification of genes into those with primary and secondary responses. Five-day profiles were modelled using the biologically-oriented critical exponential curve, y(t) = A + (B + Ct)R^t + \u03b5. The statistical non-linear regression approaches presented in this study provide detailed, biologically oriented descriptions of individual gene expression profiles, using biologically variable data to generate a set of defining parameters. These approaches have application to the modelling and greater interpretation of profiles obtained across a wide range of platforms, such as microarrays. Through careful choice of appropriate model forms, such statistical regression approaches allow an improved comparison of gene expression profiles, and may provide an approach for the greater understanding of common regulatory mechanisms between genes. Various statistical approaches have been specifically developed to summarise the vast quantities of data that are produced in microarray studies. This paper aims to use standard statistical non-linear regression models to enhance the biological interpretation of individual gene expression profiles. Such regression models provide accessible methods to describe the shape of each gene expression profile as a function of time, thus providing an insight into the underlying processes rather than simply identifying significant differences. For example, non-linear models can be used to identify the time of a particular event in a gene expression profile, such as the time of rapid up- or down-regulation. Similarly, modelling transcript changes using parametric equations that allow biological interpretation can further allow the comparison or clustering of the shapes of the expression profiles based on biologically interpretable parameters. 
Such non-linear regression techniques are commonly used in agronomic studies to describe responses to a range of quantitative input variables, but are not commonly used in the examination of gene expression data. Fruiting bodies of Agaricus bisporus, the cultivated mushroom, are ideal for studying fungal morphogenesis as they are macroscopic, the tissues are clearly delineated, and the initiation of fruiting body morphogenesis is controlled environmentally. Differential screening and targeted gene cloning procedures have identified genes up-regulated post-harvest in A. bisporus fruiting bodies, based on Northern analysis. Hybridisations with dCTP-labelled probes and post-hybridisation washes were carried out using established protocols. The Agaricus bisporus 28S rRNA gene was used as a loading control as described previously. Total RNA, ~10 \u03bcg, from each sample was separated by formaldehyde agarose gel electrophoresis and immobilised onto nylon membranes as per established protocols. Comparisons of the transcript levels, as determined by Northern analysis scanning densitometry and qRT-PCR, were made for each gene by calculating correlation coefficients, and by fitting linear and exponential regression responses to explain the Northern analysis measurements in terms of those from the qRT-PCR. Within each experiment, two replicate Northern analysis measurements were paired with the qRT-PCR values obtained from the same replicate mushroom RNA extracts. Quantitative RT-PCR data of transcription levels for each gene were analysed using analysis of variance (ANOVA) for each experiment separately. Three replicate mushrooms were assayed at each time or for each tissue-by-time combination. Prior to analysis, the data were subjected to a logarithm (base 10) transformation to satisfy the ANOVA assumption of homogeneity of variance. 
The significance of the overall treatment effects was assessed using an F-test, and the significance of differences between individual treatment means was assessed by comparison with appropriate standard errors of differences (SEDs). Treatment differences noted in the text are significant at the 5% level unless stated otherwise. For the qRT-PCR data only, regression analyses were used to model the gene expression changes over time. 'Split-line' or 'broken-stick' regression analysis of transcription levels from 0\u201324 h was applied to estimate the time when the up-regulation of each gene commenced. The 'broken-stick' model consists of two linear regression segments fitted to distinct subsets of the data, with separate estimates of slope and intercept for each segment; in this case the first line segment was constrained to have a slope of zero. A sequence of models was fitted to the data for each gene, splitting the data set into two parts (time \u2264 x hours: time > x hours, for each of the observed values of x). The best model for each gene was chosen as the one with the minimum sum of residual sums of squares for the two regressions. The time point where the two lines crossed was postulated as the time when increased gene transcription began. The long-term gene transcript profiles (0\u20135 d) were modelled using the critical exponential curve (Equation 1), y(t) = A + (B + Ct)R^t + \u03b5, fitted to the log10-transformed data, where A, B, C and R are parameters, y is the gene expression response (log10 transformed), t is storage time, and \u03b5 represents the errors, assumed to follow a Normal distribution with mean zero and a constant variance. This form of curve was selected following an initial graphing of the responses, as it can be used to describe a rapidly increasing phase followed by a decline or plateau, and after assessment of how well it fitted the observed data compared with both the simpler exponential model and the more complex double exponential model. 
The parameters of this non-linear response can be interpreted in terms of a postulated mechanism driving the observed gene expression responses, in this case potentially quantifying the relationship between transcript synthesis and degradation. The fitted parameters, and hence the shapes of the fitted curves, were compared between genes using a parallel curves analysis, either constraining each parameter to be the same across all five genes, or allowing variation in the values taken by each parameter between genes. This analysis provides a basis for comparing a sequence of possible models, and assessment of the change in residual variance between models allows the most appropriate model for the observed data to be determined. The regression modelling approach was also applied to publicly-available, previously published microarray datasets from different organisms (E. coli and R. norvegicus). In the E. coli study, the gene soxS demonstrated an exponential-type response to the application of paraquat. To identify all genes with a similar exponential shape of response, an exponential function was fitted using the regression modelling approach to all gene expression profiles from this microarray study, and the significance of each fit was determined. Similarly, a number of genes from R. norvegicus liver tissue treated with corticosteroid displayed profiles appearing to follow the same critical exponential curve as fitted to the A. bisporus data (Equation 1). Supplementary figure: gene expression in (A) the 0\u201324 hr experiment, (B) the 0\u20135 day experiment, and (C) tissues over the 2 day experiment. 28S rRNA = loading control, CBP = cruciform DNA-binding protein, CYP II = cytochrome P450II, GHYD = glucuronyl hydrolase, GSYN = \u03b2 (1\u20136) glucan synthase, and RAFE = riboflavin aldehyde-forming enzyme. Comparison of gene expression responses measured using Northern analysis and qRT-PCR: correlation coefficient. 
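Fitting the critical exponential curve of Equation 1 can be reproduced with standard tools; a sketch using scipy.optimize.curve_fit on synthetic data (the parameter values and time grid are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def crit_exp(t, A, B, C, R):
    """Critical exponential model of Equation 1: y(t) = A + (B + C*t) * R**t."""
    return A + (B + C * t) * R ** t

rng = np.random.default_rng(1)
t = np.linspace(0.0, 120.0, 25)                     # hours (illustrative)
y = crit_exp(t, 4.0, -3.5, 0.1, 0.97) + rng.normal(0.0, 0.05, t.size)

# Bounding R to (0, 1) keeps R**t well-defined and matches a decaying term.
popt, _ = curve_fit(crit_exp, t, y, p0=[4.0, -3.0, 0.1, 0.95],
                    bounds=([-10, -10, -1, 0.5], [10, 10, 1, 0.999]))
A_hat, B_hat, C_hat, R_hat = popt
```

The fitted A gives the asymptotic level and R the decay rate, which is what makes the parameters biologically interpretable as the text argues.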
Summary of linear regression and exponential regression fits, and minimum and maximum values of gene expression as measured by qRT-PCR (GSYN = \u03b2 (1\u20136) glucan synthase, and RAFE = riboflavin aldehyde-forming enzyme). Relationships between Northern analysis response and qRT-PCR measurements: exponential regression curves showing the relationship between the Northern analysis response (y axis) and the qRT-PCR response (x axis) for all five genes and all three experiments. Column 1 is for the 0\u201324 hr experiment, column 2 is for the 0\u20135 day experiment, and column 3 is for the tissues over 2 day experiment. Each row is for a different gene: CBP = cruciform DNA-binding protein, CYP II = cytochrome P450II, GHYD = glucuronyl hydrolase, GSYN = \u03b2 (1\u20136) glucan synthase, and RAFE = riboflavin aldehyde-forming enzyme."}
+{"text": "Unlike LASSO, the Lp penalty (p < 1) is asymptotically unbiased and has oracle properties. It has been demonstrated that genes in a cell do not act independently: they interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient-based method for survival analysis with a group Lp (p < 1) penalty: we extend the Lp penalty for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross validation with training data only, and the prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting the survival times. Most current methods, however, are developed from a purely computational point of view, without utilizing any prior biological knowledge or information. Gene selection with survival outcome data in the statistical literature is mainly carried out within the penalized Cox or additive risk regression framework. The ROC curve for time t is ROC_t(q) = TP_t{[FP_t]^(-1)(q)} for q \u03f5 (0, 1), and the area under the ROC curve for time t is AUC(t) = \u222b ROC_t(q) dq, where TP_t and FP_t are the true and false positive rates at time t, respectively, and [FP_t]^(-1)(q) = inf_c{c : FP_t(c) \u2264 q}. 
ROC methods can be used to characterize the ability of a marker to distinguish cases at time t from controls at time t. However, in many applications no time t is identified a priori, and thus a global accuracy summary is defined by averaging over t: GAUCS = \u222b AUC(t) 2g(t)S(t) dt, where S(t) and g(t) are the survival and corresponding density functions, respectively. This quantity indicates the probability that the subject who died (the case) at the earlier time has a larger value of the marker. Assuming there are r clusters in the input covariates, our primary aim is to identify a small number of clusters associated with survival time. Mathematically, for each input xi \u03f5 \u211dm, we are given a decomposition of \u211dm as a product of r clusters, so that xi can be decomposed into r cluster components, i.e. xi = (xi1, ..., xir), where each xil is in general a vector. We define M(x) = wT x to be the risk score function and denote Mi = M(xi) for simplicity. Our goal is to encourage the sparsity of the vector w at the level of clusters; in particular, we want most of its multivariate components wl to be zero. The natural way to achieve this is to explore the combination of the Lp (0 \u2264 p \u2264 1) norm and the L2 norm. Since w is defined by clusters, we define a weighted group Lp norm, (\u2211l dl ||wl||2^p)^(1/p), where the L2 norm is used within every group and dl can be set to 1 if all clusters are equally important. Note that the group Lp norm equals L2 if r = 1 and dl = 1, and equals the ordinary Lp norm when r = m and dl = 1. We can then define the optimization problem of maximizing GAUCS under a group Lp penalty on w. The ideal situation is that M(xj) > M(xk), or wT(xj - xk) > 0, for every couple with corresponding times tj < tk; the empirical GAUCS therefore counts the pairs with Mj > Mk given tj < tk, using the indicator I(a > b) = 1 if a > b and 0 otherwise. Obviously, GAUCS is a measure of how well the marker ranks the patients' survival times. 
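The pairwise definition of GAUCS above can be computed empirically; a simplified sketch for fully observed death times (ties counted as one half, censoring handling omitted for brevity):

```python
def gaucs(scores, times):
    """Empirical global AUC summary: over all pairs where subject j dies
    before subject k, the fraction in which j has the larger risk score
    (ties count 1/2).  1 = perfect ranking, 0.5 = random marker."""
    num = den = 0.0
    n = len(times)
    for j in range(n):
        for k in range(n):
            if times[j] < times[k]:
                den += 1
                if scores[j] > scores[k]:
                    num += 1
                elif scores[j] == scores[k]:
                    num += 0.5
    return num / den if den else 0.5

g_perfect = gaucs([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])   # earlier death, higher score
g_ties = gaucs([1.0, 1.0, 1.0], [1.0, 2.0, 3.0])      # uninformative marker
```

With a perfectly concordant marker every usable pair scores 1 (GAUCS = 1); a constant marker scores 1/2 per pair (GAUCS = 0.5), matching the random-choice benchmark stated in the text.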
The perfect GAUCS = 1 indicates that the order of all patients' survival times is predicted correctly, and GAUCS = 0.5 corresponds to a completely random choice. One way to approximate the step function is with the sigmoid function; the resulting Equation (4) is a nonconvex function and can only be solved with the conjugate gradient method to find a local minimum. Based on the property that the arithmetic average is greater than the geometric average, we can instead maximize a log likelihood lower bound of Equation (4). Equation (5) is the maximum a posteriori (MAP) estimator of w with a Laplace prior, provided we treat the sigmoid function as the pair-wise probability, i.e. Pr(Mj > Mk) = \u03c3(wT(xj - xk)); here \u03bb is a penalty parameter controlling model complexity. When p = 1, Ep is a convex function and a global optimal solution is guaranteed. To maximize Ep, we need to find the first-order derivative. Since the group Lp norm with p \u2264 1 is not differentiable at |wl| = 0, a differentiable approximation of the group Lp norm is required; we propose a local quadratic approximation for |wl|^p based on convex duality and local variational methods. There are two tuning parameters, p for IGGAUCS and \u03bb for the GL1Cox method, the latter varied with a step size of 0.1, as the Lp penalty goes to zero much more quickly than L1. We suggest that a larger step size, such as 0.5, can be used for most applications, since the test GAUCS does not change dramatically with a small change of \u03bb. In the simulations, the triple variables x1 - x3, x4 - x6, x7 - x9, ..., x298 - x300 within each group are highly correlated with a common correlation \u03b3, and there are no correlations between groups. We set \u03b3 = 0.1 for weak correlation, \u03b3 = 0.5 for moderate, and \u03b3 = 0.9 for strong correlation in each triple group, and generate training and test data sets of sample size 100 for each \u03b3 from a normal distribution with the band correlation structure. 
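A sketch of the weighted group Lp norm, under the reading that reproduces the two limiting cases stated above (group Lp = L2 for a single group, group Lp = ordinary Lp for singleton groups); the outer exponent 1/p is an assumption consistent with those limits:

```python
import numpy as np

def group_lp(w, groups, p=0.5, d=None):
    """Weighted group Lp norm: (sum_l d_l * ||w_l||_2 ** p) ** (1/p).
    With one group and d_l = 1 this reduces to the L2 norm; with singleton
    groups it reduces to the ordinary Lp norm."""
    d = [1.0] * len(groups) if d is None else d
    s = sum(dl * np.linalg.norm(w[idx]) ** p for dl, idx in zip(d, groups))
    return s ** (1.0 / p)

w = np.array([3.0, 4.0, 0.0, 0.0, 5.0])
val = group_lp(w, [[0, 1], [2, 3], [4]], p=1.0)   # ||(3,4)|| + 0 + ||(5)|| = 10
whole = group_lp(w, [list(range(5))], p=0.7)      # reduces to ||w||_2
```

Note how the middle group contributes nothing: zeroing an entire cluster component wl is exactly the group-level sparsity the text aims for.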
We assume that the first three groups (9 covariates) are associated with survival, with the 9 coefficients set to w = [-2.9 2.1 2.4 1.6 -1.8 1.4 0.4 0.8 -0.5]T. With this setting, the 3 covariates in the first group have the strongest association with survival time and the 3 covariates in group 3 have the weakest. The survival time is generated with H = 100 exp(-wT x + \u03b5) and the Weibull distribution, and the censoring time is generated from 0.8*median(time) plus random noise; based on this setting, we expect about 25% - 35% censoring. To compare the performance of IGGAUCS and GL1Cox, we build the model on the training data set and evaluate it on the test data set. We repeat this procedure 100 times and use the time-independent GAUCS to assess predictive performance. We first perform simulation studies to evaluate how well the IGGAUCS procedure performs when the input data have a block structure. We focus on whether the important variable groups that are associated with survival outcomes can be selected using the IGGAUCS procedure and how well the model can be used for predicting the survival times of future patients. In our simulation studies, we simulate a data set with a sample size of 100 and 300 input variables in 100 groups (clusters). We compare the IGGAUCS and GL1Cox methods by the frequency with which each of the three relevant groups is selected under the different correlation structures, based on 100 replications. The results are in Table : IGGAUCS with p = 0.1 outperforms GL1Cox in that it identifies the true group structures more frequently under the different inner-group correlation structures. 
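The simulation design just described (correlated triples, survival times from H = 100 exp(-wT x + \u03b5), censoring at 0.8*median plus noise) can be sketched as follows; the specific noise distributions are illustrative assumptions where the text leaves them unspecified:

```python
import numpy as np

def simulate(n=100, n_groups=100, gamma=0.5, seed=0):
    """Block-correlated design: triples share correlation gamma, groups are
    independent; survival times from H = 100*exp(-w'x + eps); censoring at
    0.8*median(time) plus noise (noise laws assumed for illustration)."""
    rng = np.random.default_rng(seed)
    cov = np.full((3, 3), gamma)
    np.fill_diagonal(cov, 1.0)
    X = np.hstack([rng.multivariate_normal(np.zeros(3), cov, size=n)
                   for _ in range(n_groups)])          # n x 3*n_groups
    w = np.zeros(3 * n_groups)
    w[:9] = [-2.9, 2.1, 2.4, 1.6, -1.8, 1.4, 0.4, 0.8, -0.5]
    time = 100.0 * np.exp(-(X @ w) + rng.normal(size=n))
    censor = 0.8 * np.median(time) + rng.exponential(20.0, size=n)
    event = time <= censor                             # True = death observed
    return X, np.minimum(time, censor), event

X, t_obs, event = simulate()
```

Only the first three triples carry signal, so a group-selection method should recover exactly those 9 coordinates of w.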
Its performance is much better than that of GL1Cox regression when the inner correlation in a group is high (\u03b3 = 0.9) and the variables within a group have weak association with survival time. We next compare the performance of IGGAUCS and GL1Cox in parameter estimation; the results for each parameter under the different inner correlation structures are shown in Figure , where the left bar indicates the parameter estimated from GL1Cox, the middle bar is the true value of the parameter, and the right bar indicates the parameter estimated from IGGAUCS. We observe that both the GL1Cox and IGGAUCS methods estimated the signs of the parameters correctly for the first two groups; however, in group 3, with smaller coefficients, both methods could only estimate the sign of w8 correctly. Moreover, \u0175 estimated from IGGAUCS is much closer to the true w than that from GL1Cox, especially when the coefficients are larger. This indicates that the Lp (p = 0.1) penalty is less biased than the L1 penalty; the estimates from IGGAUCS are larger than those from GL1Cox under weak, moderate, and strong correlations. Finally, the test global AUC summaries (GAUCSs) of IGGAUCS and GL1Cox over 100 replications are shown in Table : IGGAUCS outperforms GL1Cox regression. This is reasonable, since our method, unlike Cox regression, which maximizes a partial log likelihood, directly maximizes the penalized GAUCS. One interesting result is that the test GAUCSs become smaller as the inner-group correlation coefficient \u03b3 increases from 0.1 to 0.9. To compare further, we also applied the gene harvesting method proposed by Hastie et al. (2001) and discussed by Segal (2006) to these data; its performance was worse than that of GL1Cox. One explanation is that a group is more heterogeneous when the correlations among its variables are weaker, so the group average does not provide a meaningful summary; moreover, we cannot identify the survival association of individual variables using gene harvesting. Follicular lymphoma is a common type of Non-Hodgkin Lymphoma (NHL). 
It is a slow-growing lymphoma that arises from B-cells, a type of white blood cell. It is also called an \"indolent\" or \"low-grade\" lymphoma for its slow nature, both in terms of its behavior and how it looks under the microscope. A study was conducted to predict the survival probability of patients from gene expression profiles of tumors at diagnosis. Fresh-frozen tumor biopsy specimens and clinical data were obtained from 191 untreated patients who had received a diagnosis of follicular lymphoma between 1974 and 2001. The median age of patients at diagnosis was 51 years (range 23 - 81) and the median follow-up time was 6.6 years (range less than 1.0 to 28.2); the median follow-up time among patients alive was 8.1 years. Four records with missing survival information were excluded from the analysis. Affymetrix U133A and U133B microarray gene chips were used to measure gene expression levels from RNA samples, and a log 2 transformation was applied to the Affymetrix measurements. The detailed experimental protocol can be found in Dave et al. 2004. The data set was normalized for each gene to have mean 0 and variance 1. Because the data set is very large and many genes have expression values that either do not change across samples or change randomly, we filter the genes by defining a correlation measure with GAUCS for each gene: R = |2 GAUCS - 1|, so that R = 1 when GAUCS = 0 or 1, and R = 0 when GAUCS = 0.5. We performed the permutation test 1000 times for R to identify 2150 probes associated with survival time. We then identified 49 candidate pathways with 5 or more genes using DAVID; there are 523 genes in total on the candidate pathways, and since a gene can be involved in more than one pathway, the number of distinct genes is a little less than 500. The 49 biological pathways are given in Table . The estimated wi and the corresponding genes on each pathway indicate the association strength and direction between genes and the survival time. 
Positive wi values indicate that patients with high expression levels die earlier, and negative wi values indicate that patients with relatively high expression levels live longer. The absolute value |wi| indicates the strength of association between survival time and the specific gene. Genes on each pathway, estimated parameters, and relevance counts are given in Table . Since different pathways may be selected in the cross-validation procedure, the relevance count concept was utilized. The eight KEGG pathways identified play an important role in patient survival, and they can be ranked by the average \u2211j |wj|/L, where L is the number of genes on a pathway, as shown in Table . Genes in red are highly expressed in patients with aggressive FL and genes in yellow are highly expressed in the earlier stage of FL cancers. Many important cancer-related genes are identified with our methods. For example, SOS1 acts on the RAS genes, which encode membrane-bound guanine nucleotide-binding proteins that function in the transduction of signals that control cell growth and differentiation. Binding of GTP activates RAS proteins, and subsequent hydrolysis of the bound GTP to GDP and phosphate inactivates signaling by these proteins. GTP binding can be catalyzed by guanine nucleotide exchange factors for RAS, and GTP hydrolysis can be accelerated by GTPase-activating proteins (GAPs). SOS1 plays a crucial role in the coupling of RTKs, and also of intracellular tyrosine kinases, to RAS activation. The deregulation of receptor tyrosine kinases (RTKs) or of intracellular tyrosine kinases coupled to RAS activation has been implicated in the development of a number of tumors, such as those in breast cancer, ovarian cancer and leukemia. Another gene, IL1B, encodes one of a group of related proteins made by leukocytes (white blood cells) and other cells in the body. 
IL1B, one form of IL1, is made mainly by one type of white blood cell, the macrophage, and helps another type of white blood cell, the lymphocyte, fight infections. It also helps leukocytes pass through blood vessel walls to sites of infection and causes fever by affecting areas of the brain that control body temperature. IL1B made in the laboratory is used as a biological response modifier to boost the immune system in cancer therapy, as shown in Figure. We propose group Lp-penalized global AUC summary (IGGAUCS) maximization methods for gene and pathway identification, and for survival prediction with right-censored survival data and high-dimensional gene expression profiles. We have demonstrated the applications of the proposed method with both simulation and the FL cancer data set. Empirical studies have shown that the proposed approach is able to identify a small number of pathways with good prediction performance. Unlike traditional statistical models, the proposed method naturally incorporates biological pathway information, and it is also different from the commonly used Gene Set Enrichment Analysis (GSEA) in that it simultaneously considers multiple pathways associated with survival phenotypes. Since a large amount of biological information on various aspects of systems and pathways is available in public databases, we are able to utilize this information in modeling genomic data and in identifying pathways, genes, and their interactions that might be related to patient survival. In this study, we have developed a novel iterative gradient algorithm for group Lp-penalized maximization. With comprehensive knowledge of pathways and mammalian biology, we can greatly reduce the hypothesis space. By knowing the pathways and the genes that belong to particular pathways, we can limit the number of genes and gene-gene interactions that need to be considered in modeling high-dimensional microarray data.
The proposed method can efficiently handle thousands of genes and hundreds of pathways, as shown in our analysis of the FL cancer data set. There are several directions for our future investigations. For instance, we may want to further investigate the sensitivity of the proposed methods to misspecification of the pathway information and misspecification of the model. We may also extend our method to incorporate gene-gene interactions and gene (pathway)-environment interactions. Even though we have only applied our methods to gene expression data, it is straightforward to extend them to SNP, miRNA, CGH, and other genomic data without much modification. The authors declare that they have no competing interests. ZL designed the method and drafted the manuscript. Both LSM and TH participated in manuscript preparation and revised the manuscript critically. LM provided important help with its biological contents. All authors read and approved the final manuscript."}
+{"text": "Due to the rapid data accumulation on pathogenesis and progression of chronic inflammation, there is an increasing demand for approaches to analyse the underlying regulatory networks. For example, rheumatoid arthritis (RA) is a chronic inflammatory disease, characterised by joint destruction and perpetuated by activated synovial fibroblasts (SFB). These abnormally express and/or secrete pro-inflammatory cytokines, collagens causing joint fibrosis, or tissue-degrading enzymes resulting in destruction of the extra-cellular matrix (ECM). We applied three methods to analyse ECM regulation: data discretisation to filter out noise and to reduce complexity, Boolean network construction to implement logic relationships, and formal concept analysis (FCA) for the formation of minimal, but complete rule sets from the data. First, we extracted literature information to develop an interaction network containing 18 genes representing ECM formation and destruction. Subsequently, we constructed an asynchronous Boolean network with biologically plausible time intervals for mRNA and protein production, secretion, and inactivation. Experimental gene expression data were obtained from SFB stimulated by TGF\u03b21 or by TNF\u03b1 and discretised thereafter. The Boolean functions of the initial network were improved iteratively by the comparison of the simulation runs to the experimental data and by exploitation of expert knowledge. This resulted in adapted networks for both cytokine stimulation conditions. The simulations were further analysed by the attribute exploration algorithm of FCA, integrating the observed time series in a fine-tuned and automated manner.
The resulting temporal rules yielded new contributions to controversially discussed aspects of fibroblast biology (e.g., considerable expression of TNF and MMP9 by fibroblasts upon stimulation) and corroborated previously known facts, but also revealed some discrepancies to literature knowledge. The newly developed method successfully and iteratively integrated expert knowledge at different steps, resulting in a promising solution for the in-depth understanding of regulatory pathways in disease dynamics. The knowledge base containing all the temporal rules may be queried to predict the functional consequences of observed or hypothetical gene expression disturbances. Furthermore, new hypotheses about gene relations were derived which await further experimental validation. Rheumatoid arthritis (RA) is characterised by chronic inflammation and destruction of multiple joints perpetuated by the synovial membrane (SM). A major component of the inflamed SM are activated, semi-transformed synovial fibroblasts (SFB). Central transcription factors (TFs) are involved as key players in RA pathogenesis. Due to the rapid accumulation of data about biological processes and molecular interrelationships, there is an increasing demand for approaches to analyse the underlying regulatory networks. For instance, a recent analysis of the mRNA expression profiles in synovial tissue from RA patients revealed inter-individual and gene-specific variances. Therefore, we developed a method for simulating the temporal behaviour of regulatory and signalling networks. It was used to create a gene regulatory network emulating ECM formation and destruction, based on literature information about SFB on the one hand and on experimental data on the other, which we obtained from TGF\u03b21- or TNF\u03b1-stimulated SFB. The final simulations were analysed by the attribute exploration algorithm of formal concept analysis (FCA), a mathematical discipline that has multiple applications in various domains such as knowledge representation, data mining, semantic web, or software engineering.
First FCA approaches related to the analysis of gene expression data have been published. Corresponding to the discrete approach of FCA, we applied Boolean network architecture for modelling. Boolean network models, first proposed by Kauffman et al., use binary variables to describe the state of each gene i, represented by a network node, as on or off. In biology, this concept is reflected by distinct expression thresholds which must be exceeded by each individual gene to initiate its cellular effects, including disease initiation and progression. A related method, developed for the cholesterol regulatory pathway in 33 species, eliminates spurious cycles in a synchronous Boolean network model. For our analysis we used a collection of 18 genes, which can be classified into five functional groups, sufficient to create a self-contained regulatory network of ECM maintenance: (1) structural proteins which are the target molecules (COL1A1 and COL1A2); (2) enzymes degrading them; (3) molecules that inhibit these proteases; (4) TFs and modulators acting as TFs, which are regulated by (5) the external signalling molecules TNF\u03b1 (TNF) and TGF\u03b21 (TGFB1). TGF\u03b21 (TGFB1) and TNF\u03b1 (TNF) are the only entities playing a dual role as both external signal molecules and target genes because of their introduction into the simulation as starting effectors. The resulting regulatory network is almost closed and represents the most important ECM network functions. Here, we imply that the receptors for the external signalling molecules are always available and functional in SFB. Note that those effects can be reflected simplistically by regulatory processes at the transcriptional level. However, activating SMADs such as SMAD3 and SMAD4 are also regulated by inhibitory members of the SMAD family (SMAD6 and SMAD7), which may counteract transcriptional activation and add an extra level of complexity.
In the case of SMAD3, we decided to subsume its influence under the SMAD4 effects because both are described to have nearly identical effects and act in concert. Moreover, we could not find well-defined information about SMAD3 regulation. Hence, we added an inducing influence of SMAD4 on MMP13 (at present only known for SMAD3) for keeping all the SMAD effects in the network. The components of the TF AP-1 (JUN, JUNB, JUND, FOS) determine its different regulatory activities. Another example for setting up the functions is the integration of: (i) the known auto-regulatory transcriptional activation of JUN; and (ii) the activation of JUN via SMAD4 (TNF\u03b1-independent) into the single Boolean function 5 (compare Table): JUN.out = (TNF AND JUN) OR (TGFB1 AND SMAD4). Based on the illustrated principles, the Boolean functions characterising formation and remodelling of the ECM were generated (Table). We analysed gene expression changes of SFB from patients with RA (3 patients) or OA (3 patients) following TGF\u03b21 and TNF\u03b1 stimulation (Table); only probe sets with a detection value of p \u2264 0.05 for any patient at any time point were considered. In Figure, COL1A1 and JUNB expression are shown to illustrate the TGF\u03b21 response in SFB, and the TNF\u03b1 response is illustrated by NFKB1 and MMP1 expression. SMAD7 expression data are also included for both treatments. The data and the respective diagrams for all genes and both treatments can be found in the additional file. Following pre-processing of the microarray data gained from U133 Plus 2.0 arrays, we extracted the data for probe sets related to our genes of interest (see Methods). The data are available in the GEO database (GSE13837). For comparison, we considered further public datasets: first, TGF\u03b21 treatment of murine cells, and second, TNF\u03b1 stimulation of endothelial cells. Following prolonged TGF\u03b21 treatment in murine cells, COL1A2, JUN, and TIMP1 gene expression increased, whereas FOS decreased. In contrast, FOS, JUN, and JUNB expression in HeLa cells rapidly increased following TNF\u03b1 stimulation. Unfortunately, no data about the protease genes were available in this dataset.
We developed a data discretisation method which appropriately captures biologically relevant differences in gene expression levels. The individual time profiles for each gene were separately discretised to the values 0 or 1 by k-means clustering, or set to a constant 0 or 1 if the differences of all log2 values (fold-changes) were less than 1 (see Methods). Simulations were generated using an asynchronous update scheme, assuming time intervals \u2013 approximately equal to 1 h time steps \u2013 as follows: transcription: 1 (NFKB1: 2), translation: 1, mRNA lifespan: 1, and protein lifespan: 2. The Boolean functions generated the transcriptional states according to the functional influence of proteins (stimuli or TF); translation and mRNA/protein degradation were computed from this output state according to the predefined intervals (see Principles of simulation in the Methods section). The starting conditions enable the model system to respond to the external stimulators TNF\u03b1 or TGF\u03b21 immediately. The simulations were performed over twelve time steps; however, we did not aim at an exact correspondence to the experimental observation time of twelve hours, but tried to adjust the simulated time courses to qualitative features such as early, intermediate or late up-regulation. Improving the Boolean functions accordingly, the initially applied literature-based information was completed by: (i) valid and specific experimental information; (ii) knowledge and experience of biological experts; and (iii) in some cases, a more focused and precise literature query. In several cases we removed the term AND (...) from the Boolean function term. This adjustment did not always change the simulation result, since, for example, TNF was always off following TGF\u03b21 stimulation (numbers of the Boolean functions (BF) affected: 1, 2, 3, 8, 9, 11, 12, and 14).
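A Boolean function such as JUN.out = (TNF AND JUN) OR (TGFB1 AND SMAD4) translates directly into code. A minimal sketch; only the JUN rule is taken from the text, and the surrounding dictionary framework is illustrative:

```python
# Boolean functions encoded as predicates over a state dict (gene -> 0/1).
# BF 5 from the text: TNF-dependent auto-regulation OR TGFB1/SMAD4 induction.
BOOLEAN_FUNCTIONS = {
    "JUN": lambda s: (s["TNF"] and s["JUN"]) or (s["TGFB1"] and s["SMAD4"]),
}

def transcription_output(state):
    """Evaluate every Boolean function on the current protein/stimulus state,
    yielding the transcriptional (mRNA) output state."""
    return {gene: int(bool(bf(state))) for gene, bf in BOOLEAN_FUNCTIONS.items()}
```

With this encoding, revising a function during model adaptation amounts to replacing one lambda, which keeps the iterative refinement described above cheap.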
1. In the case of TNF\u03b1 (or TGF\u03b21) stimulation, the production and secretion of TGF\u03b21 (or TNF\u03b1) by SFB should not contradict the influence of the abundant stimulating protein TNF\u03b1 (or TGF\u03b21). 2. Down-regulation of gene expression is an essential biological principle. For that reason we had to introduce a time-limited inactivation mechanism, which could not be derived from the literature because information regarding down-regulatory mechanisms is very restricted. Moreover, complex and variable mechanisms were hard to model, e.g.: (i) inactivation of the TF protein itself; (ii) a general shift in the composition of the TF AP-1, resulting in a reduced amount of TF enhancing JUN transcription; and (iii) binding/inactivation of JUN by other proteins. Therefore, a time-limited mRNA inactivation was introduced for JUN, JUNB, FOS, and ETS1. Accordingly, an inactivating rule was created: if these TFs are expressed at t > 0, they will be set to off at t+3 and afterwards (no. of BF affected: 18). In addition, at that step we included an inhibition of TGFB1/SMAD4 signalling-based target gene expression by integrating a SMAD4-inhibiting signal guaranteeing the subsequent inactivation of TGF\u03b21-related gene expression. JUND is constitutively expressed at an intermediate level, which is consistent with GEO (GSE1742 and GSE2624) and our own data, as well as with the literature. For NFKB1 transcription, an inhibitory effect was not implemented, since the activity of NF-\u03baB at the protein level is controlled by interaction with several IKB proteins, which were not included in the network. SMAD4 induction is not dependent on TGF\u03b21 stimulation, because it is constitutively expressed. However, without TGF\u03b21-mediated phosphorylation, SMAD4 is not activated at the protein level and shows no transcriptional activity, even though constitutively expressed.
Therefore, we amended the term SMAD4 to TGFB1 AND SMAD4 in order to represent the necessity of TGF\u03b21 for SMAD4 activation. 3. We considered the relation ETS1 AND NFKB1 for the target genes instead of ETS1 OR NFKB1, because regulation by NF-\u03baB seems to be dependent on ETS1, and the MMPs, for example, require both factors. 4. For JUND, the AND connection was changed to OR; the revision of this function prevents the absence of JUND expression following TGF\u03b21 stimulation. 6. A TF should not necessarily be required for its own expression (positive feedback); in the case of MMP1 expression by FOS, there were contradictory findings in the literature. Moreover, MMP1 would have been permanently down-regulated by TGF\u03b21 during the simulation; since the term OR TGFB1 was in obvious contradiction to the data of the present study, it was deleted. The same was done for the MMP13 function. 8. In the Boolean function for NFKB1, the absence of TNF\u03b1 stimulation had no decisive influence. For that reason, we changed NFKB1.out = TNF AND (ETS1 OR NFKB1) to NFKB1.out = TNF OR ETS1 OR NFKB1. In summary, we adjusted the set of BF obtained by adaptation to the gene expression data measured under two experimental conditions (TNF\u03b1 and TGF\u03b21 stimulation), in order to create an appropriate set of BF representing the existing knowledge about naturally occurring interrelationships as accurately as possible. States are defined by the value on or off for each gene, and transitions were computed by linking an occurring input state to an arbitrary future (output) state of the simulation or observation. The sets of all these transitions in the simulated time series were translated and merged into a single formal context, the basic tabular data structure of FCA. A rule such as GENE1.in.on \u2192 GENE2.out.on means: if gene1 is expressed, gene2 will always be up-regulated in the future, at all subsequent observation time points and simulation steps.
Due to this semantics, the implications neither depend on the correspondence of a simulation time step to a specific observation interval, nor on prior knowledge about time periods of direct or indirect transcriptional effects. The implications of the stem base are temporal rules expressing hypotheses about attributes of states or system dynamics, which are supported by pre-existing knowledge and by the analysed data. Within the large knowledge bases for TNF\u03b1 and TGF\u03b21 stimulation, the most frequent and simple temporal rules were considered and analysed for dependencies between stimuli, induced TFs, and their target genes. Regarding stimulation with TNF\u03b1, a coordinated down-regulation of the two TFs SMAD7 and ETS1 emerges, as indicated by the rules 33, 114, 135, 144, 157, and 186. Since an implication holds for the transitions between all temporally related states that actually have the attributes of the premise, such a rule has the meaning: in all simulated and observed states characterised by the absence of COL1A1 and ETS1, SMAD7 is also off. Rules 114, 135, 144, 157, and 186 indicate: if the TNF\u03b1-dependent genes are not induced (ETS1 as mediator), then simultaneously the expression of TGF\u03b21-dependent genes is enabled (SMAD7 is off). This suggests that TNF\u03b1 and TGF\u03b21 may act as antagonists in SFB, as described in the literature. The expression of NFKB1, which is also induced by TNF\u03b1, proceeds conversely to that of ETS1 and SMAD7, reflecting the different targets of NF-\u03baB and SMAD7. The antagonistic pattern of NFKB1 and SMAD7 appears indirectly in rule 33, where the two genes show up in the premise of a rule with high support: <150> (...) NFKB1.out.on, SMAD7.out.off \u2192 ETS1.out.off
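The semantics of such a temporal rule, namely that every transition satisfying the premise must also satisfy the conclusion, can be checked mechanically. A sketch over transitions encoded as attribute dicts (attribute names follow the GENE.in/out.on pattern of the text; the sample data are invented):

```python
def implication_holds(transitions, premise, conclusion):
    """True iff every transition that has all premise attributes
    also has all conclusion attributes (A -> B over the context)."""
    return all(
        all(t.get(b, False) for b in conclusion)
        for t in transitions
        if all(t.get(a, False) for a in premise)
    )
```

A transition violating the conclusion while matching the premise is exactly the counter-example an expert would supply during attribute exploration.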
Thus, rule 33 further suggests that the coordinated action of NF-\u03baB and ETS1 is turned off in states which are characterised by supplementary conditions such as SMAD7.out.off. Regarding this rule, it is interesting that ETS1 always acts in the same direction as NF-\u03baB: according to the network derived from the literature (Figure), we assumed a coordinated action (AND connective) for the positive regulation of ETS1, MMP1, MMP3, MMP9, and TNF, as well as for the inhibition of COL1A1 and COL1A2. The generated rules adequately reflect the major influence of the TF AP-1 in the system: the expression of prominent targets, such as COL1A1, MMP1, and MMP3, depends on JUN (rules 211 and 258) and/or FOS (rule 204), with JUN as the key player. These rules connect input and output states and thus their semantics is directly related to dynamics, as seen in rule 211: <87> TGFB1.in.on, TIMP1.in.on, ETS1.in.on, JUN.in.on \u2192 MMP1.out.on, making this strong statement: if ETS1 and JUN are on, MMP1 will always be up-regulated in the future (at least within the time frame of 12 hours). In fact, MMP1 is expressed simultaneously with or even before ETS1 and JUN: in the simulation, MMP1 was always on in the output state, and in the data from time point 2 h on. An exception can be found for the experimental results from OA sample OA3. Apparently, MMP1 and MMP3 are co-expressed, while MMP9 is expressed independently (rules 24 and 35). There is a contradiction between the simulation and the data: in the observed experimental time series, MMP13 is always off, whereas the Boolean network predicts an up-regulation similar to MMP1 and MMP3. This unexpected absence of predicted MMP13 expression may be an indication for a more complex regulation of MMP13 transcription, exceeding the already known regulatory interrelations. Therefore, the MMP13 promoter and further enhancer/repressor sequences should be targeted for a more pronounced structural and functional analysis. For MMP9, the simulation and the experimental data are in good agreement: the gene is off in most, but not all states.
However, the expression of MMP9 by (S)FB is discussed controversially in the literature. Several rules unanimously indicate the co-expression of the ECM-forming genes COL1A1 and COL1A2, but contradictory rules occur concerning their expression profile in comparison to the MMPs. COL1A1 and COL1A2 seem to be co-expressed with MMP1 (rules 90 and 176); for COL1A2, however, a certain co-expression with MMP9 is calculated as well (rules 76 and 77), which conflicts with the opposing expression of MMP1 and MMP9 (see above). Therefore, the expression of collagens does often, but not necessarily always, correlate with the expression of MMPs. This reflects the imbalance between MMP-dependent destruction and collagen-driven regeneration/fibrosis of ECM in the joints in inflammatory RA. The calculated knowledge base also contains a further unexpected correlation. According to rule 166: <94> FOS.in.off, TIMP1.in.on, SMAD7.out.off \u2192 TGFB1.in.on, MMP1.out.on, TGFB1.out.on and rule 188, the expression of MMP1 may also be induced in the absence of FOS, indicating that the regulation of MMP1 does not predominantly depend on FOS as proposed in the literature. For the stimulation with TGF\u03b21, we had a total number of 341 transitions. The SMADs play a major role for the expression of TGF\u03b21-dependent target genes, as reflected by various classes of rules containing SMAD4 and/or SMAD7, since SMAD4 was permanently on during simulation and experimental stimulation with TGF\u03b21 (exception: sample RA3 at time point 2 h). This also suggests an antagonistic behaviour of ETS1 and SMAD4. The expression of MMP9 is neither induced by SMAD4 nor by any other TF, indicating that MMP9 is not influenced by TGF\u03b21.
The fact that TGF\u03b21 obviously does not induce MMP9 (but other MMPs) agrees with findings reported previously. A further case of an antagonistic expression pattern was calculated for the MMPs and COL1A1, for example, in rule 54: <170> SMAD4.in.on, MMP3.out.off, MMP9.out.off, MMP13.out.off, ... \u2192 COL1A1.out.on Antagonistic expression profiles also can be observed for SMAD4 and other TFs, e.g., JUN and JUNB or ETS1. The variety of TF combinations found, even following the same stimulus, exceeds the possibilities of conventional TF studies because stimulation experiments are generally restricted to a selected set of readout parameters which are not able to reflect the multiplicity of different effects in the cell. Following stimulation with TGF\u03b21, interestingly, COL1A2 appears to be constitutively expressed since its status is always calculated as on (rule 1). Therefore, for the formation of collagen I, which contains COL1A1 and COL1A2 chains, COL1A1 expression seems to be the critical switch. TGF\u03b21 and TNF\u03b1 act via separate signal transduction pathways, leading to the expression and activation of different TFs. In general, ETS1 and NFKB1 are induced predominantly by TNF\u03b1, whereas SMAD expression depends on TGF\u03b21. JUN and FOS, however, strikingly respond to both stimuli. This defined pattern results in the expression of target genes with opposing roles. TGF\u03b21 positively regulates the enhanced formation of ECM components, whereas TNF\u03b1 is strongly involved in the expression of ECM-degrading enzymes.
This was the main reason for a discriminative revision of the BF for TNF\u03b1 and TGF\u03b21 (Tables). The calculated results impressively illustrate that TGF\u03b21 and TNF\u03b1 stimulation are mediated via ETS1 (BF3), NFKB1, or SMAD7, identifying ETS1- and NFKB1-associated pathways as the major TNF\u03b1-induced pro-inflammatory/pro-destructive signalling modules in rheumatic diseases, whereas TGF\u03b21-driven and SMAD7-related signalling appear to be prominently involved in fibrosis. The minimal rule set gave many new insights, and further queries can be addressed by accessing the TNF\u03b1 and TGF\u03b21 knowledge bases in one of two ways: (i) the Excel file containing the transition rules for structured searches within the rule sets; or (ii) the temporal rules in PROLOG format for automated querying. Our analyses also show that TNF\u03b1-induced signalling predominantly results in the activation of pro-inflammatory/pro-destructive modules; targeting these may therefore improve the efficiency and outcome of current anti-rheumatic therapy. Alternatively, such analyses may help to distinguish disease phases (inflammatory early versus burnt-out/fibrotic late) and to tailor anti-rheumatic treatment to the particular needs of the respective phase. Both the complexity of even relatively small networks such as our ECM network and the completeness of the attribute exploration algorithm led to a large number of temporal rules. However, high support of a rule (often correlated to its simplicity) can be used as an indicator for the most meaningful hypotheses about co-regulation, mutual exclusion, and/or temporal dependencies not only between single genes, but between small sets of genes. The fidelity of our rules was reinforced by the comparison of simulated and observed time series data, first manually, then automatically by the attribute exploration algorithm. Combining two well-developed algebraic, discrete and logical methods (Boolean network construction and FCA), it was possible to include human expert knowledge in all different phases, with the exception of the challenging data discretisation step.
On the one hand, data discretisation is an important tool to filter out noise and to reduce complexity, but on the other hand it inevitably causes loss of information. Additional method optimisations comprise strengthening the expert role on the one hand and up-scaling the network to medium size by supplementary automatisation on the other. Especially for a small set of interesting genes, an interactive attribute exploration is feasible to fortify the human expert. Using this procedure for the knowledge base construction, single rules can be validated manually or by a supporting computer program, or even new experiments can be suggested. Whereas we applied a strong validation criterion requiring rules to hold for all simulated and observed transitions, the expert could also reject rules below a threshold of support and confidence in the observed context, potentially reducing noise or eliminating measurement errors. In order to enhance computational efficiency, methods of rule selection could be integrated into the algorithm, based on association rule mining and \"iceberg concept lattices\" derived from a transition context as in the Table. In summary, the adapted Boolean network model reported here for the simulation of ECM formation and degradation in rheumatic diseases may represent a powerful tool aiding computational analyses of disease-related ECM remodelling and supporting a precise design of further experiments. Synovial membrane samples were obtained following tissue excision upon joint replacement/synovectomy from RA and OA patients. The preparation of primary SFB from RA and OA patients was performed as previously described; briefly, the cells were cultured in DMEM containing 10% FCS. At the end of the fourth passage, the SFB were washed in serum-free DMEM and then stimulated by 10 ng/ml TGF\u03b21 or TNF\u03b1 in serum-free DMEM for 0, 1, 2, 4, and 12 h. At each time point, the medium was removed and the cells were harvested after treatment with trypsin.
After washing with PBS, they were lysed with RLT buffer and frozen at -70\u00b0C. Total RNA was isolated using the RNeasy Kit (Qiagen) according to the supplier's recommendation. Analysis of gene expression was performed using U133 Plus 2.0 RNA microarrays (Affymetrix\u00ae, Santa Clara, CA, USA). Labelling of RNA probes, hybridisation, and washing were carried out according to the supplier's instructions. Microarrays were analysed by laser scanning (Hewlett-Packard Gene Scanner). Background-corrected signal intensities were determined and normalised using the MAS 5.0 software (Affymetrix\u00ae). For this purpose, arrays were grouped according to the respective stimulus. The arrays in each group were normalised using quantile normalisation. Since we were interested in regulatory interactions, the fold-change of the expression values was more important than absolute levels. Hence, we discretised individual time series separately. The discretised data served to verify or falsify the temporal dependencies predicted from the extracted literature knowledge. For that reason, we wanted to conserve as many effects on gene expression as possible and set weak criteria for up-regulation: if the highest fold-change between two arbitrary time points was larger than 1, then the time profile was discretised to 0 or 1 by k-means clustering. We set the constant value 0 if: (i) the highest fold-change between two arbitrary points in a time series was less than 1; (ii) the absolute expression value was below the threshold of 100 for one probe set; or (iii) the Affymetrix detection value p indicating the reliability of the measurement exceeded 0.05. In all other cases, the constant was set to 1. Applying these criteria, also individual values were set to 0 following clustering. Unfortunately, time-resolved data for gene expression events, mRNA, or protein half-life are scarce in the literature.
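The discretisation rules above can be sketched as follows; a simple 1-D two-means split stands in for the k-means step, and the thresholds (log2 fold-change 1, absolute level 100, detection p 0.05) are taken from the text:

```python
import numpy as np

def discretise(log2_series, raw_series, det_p):
    """Discretise one gene's time series to 0/1: if the largest pairwise log2
    fold-change exceeds 1, split the profile by a 1-D two-means clustering;
    otherwise return a constant profile, 0 when the signal is weak or
    unreliable, 1 otherwise."""
    x = np.asarray(log2_series, float)
    fold = x.max() - x.min()          # largest pairwise log2 fold-change
    if fold > 1:
        lo, hi = x.min(), x.max()     # initial cluster centres
        for _ in range(20):           # fixed-iteration 2-means refinement
            labels = (np.abs(x - hi) < np.abs(x - lo)).astype(int)
            if labels.all() or not labels.any():
                break
            lo, hi = x[labels == 0].mean(), x[labels == 1].mean()
        return labels
    unreliable = fold < 1 or max(raw_series) < 100 or det_p > 0.05
    return np.full(len(x), 0 if unreliable else 1, int)
```

Each probe-set profile is discretised independently, matching the per-gene treatment described above.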
Therefore, time steps were selected based on general expert knowledge and comparison of literature and experimental data, if available. For example, the duration of transcription was generally set to 1 time unit; for NF-\u03baB it was set to a doubled time period, reflecting its markedly prolonged response time before expression compared to the immediate early response transcription factors AP-1 and ETS1. Using the deterministic Boolean network, simulations were generated using an asynchronous update scheme based on the subsequent biologically-founded assumptions. In order to simulate the time courses more realistically, transcription and translation were separated, i.e., the left side of a Boolean function (output) was considered as mRNA and the right side as TF and/or stimulus (input). In summary, we selected the time steps as follows: transcription: 1 (NFKB1: 2), translation: 1, mRNA lifespan: 1, and protein lifespan: 2. Since TGF\u03b21 and TNF\u03b1 have to be released into the extracellular medium after translation, they were assumed to take effect three time units after induction. Nevertheless, we are aware that these intervals are not absolute durations, but their qualitative relationships are important. The starting conditions of the simulations were characterised by the initially observed, discretised states, and an initial state was introduced, for which the TFs were set to on. Supposing a steady state situation before starting the stimulation with TGF\u03b21 and TNF\u03b1, the protein levels at steps 0 and 1 were defined according to that of the corresponding mRNA, and, in addition, the respective stimulating protein was set to on. The simulations were performed over twelve time units, roughly corresponding to the twelve hour duration of the gene expression experiments. The observed and simulated states obsS and simS were characterised by the expression levels of each gene, i.e., by a subset of attributes M = E \u00d7 F, with entities or genes E, and the corresponding values F = {off, on}.
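A toy version of this delayed update scheme, assuming for simplicity one combined transcription-plus-translation delay of 2 steps and a protein lifespan of 2 steps (the per-process intervals of the text are collapsed here), with an invented two-gene network for illustration:

```python
def simulate(functions, init, steps=12, delay=2, protein_life=2):
    """Delayed Boolean updates: each firing of a gene's Boolean function
    schedules its protein to be present from step t+delay onward for
    protein_life steps. init marks proteins present at the start."""
    active = {g: set(range(protein_life)) if init.get(g) else set()
              for g in functions}
    history = []
    for t in range(steps):
        state = {g: int(t in active[g]) for g in functions}
        history.append(state)
        for g, f in functions.items():
            if f(state):  # transcription fires; protein appears after delay
                active[g].update(range(t + delay, t + delay + protein_life))
    return history
```

With gene A constitutively on and gene B induced by A, B's protein appears two steps into the run, mirroring the qualitative early/late behaviour the simulations were tuned to.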
A state can also be considered as a tuple 1, ..., fn) and attributes (the discretised gene expression levels in input and output states). Accordingly, the rows in Table 1in, ..., fnin, f1out, ..., fnoutf).We computed the transitive closure of these relations, since we were interested in all states emerging from a given one, within the observation or simulation time. The data of all time series related to one stimulus was assembled in the \"formal contexts\" simK. It generates a minimal set of implications: A \u2192 B, A, B \u2286 M \u00d7 {in, out}, which are valid in the formal context simK. An implication means that if any transition has all attributes a \u2208 A, then it also has all attributes b \u2208 B. An expert is asked to validate each implication. If s/he denies, a counter-example must be provided, i.e., a new transition of the context. If s/he accepts, the implication is added to the \"stem base\" of the context.Generally, we applied the interactive version of the attribute exploration algorithm to Ksim.simK, can be derived. In other words, the implications are valid regarding the knowledge formalised in the Boolean network and can also be checked by supplementary human expert knowledge or further literature research, e.g., for co-regulation of genes or possible or forbidden resulting states.As a result, a sound, complete, and non-redundant knowledge base is created, from which all implications, valid according to the semantics given by the enlarged formal context simK had to be valid for all transitions of the observed context obsK. This is equivalent to an exploration of the union of the two contexts, where every proposed implication is accepted. 
Thus, the resulting stem base was computed automatically with the Java tool Concept Explorer, which also supports expert-centred attribute exploration (programs available upon request). In this study, we compared the literature-based implications with those merely derived from the data and applied a strong criterion: the implications had to hold in both the simulated and the observed context. In the worst case, the running time of the attribute exploration algorithm depends exponentially on the size of the input data table (rows \u00d7 columns). The calculated transition rules were screened manually, focussing on the appearance and the temporal behaviour of the following features: (i) constitutive vs. induced gene expression; (ii) co-expression vs. divergent expression of mediators, TFs, and target genes; (iii) expression of mediators/transcription factors vs. expression of target genes; (iv) regulation of target gene expression based on the expression of different transcription factors; (v) expression of individual genes vs. expression of their functional groups; and (vi) discrepancies to the literature. Subsequently, the extracted rules were assessed with respect to biological coherence and relevance. Abbreviations: BF: Boolean function; ECM: extracellular matrix; FCA: formal concept analysis; MMP: matrix metalloprotease; OA: osteoarthritis; RA: rheumatoid arthritis; SFB: synovial fibroblasts; SM: synovial membrane; TF: transcription factor. JW performed the bioinformatics experiments and analyses, RH participated in data analyses, RH and DP performed the stimulation experiments, and DK performed the Affymetrix microarray experiments. RG participated in design and coordination of the study and helped with bioinformatics. RWK participated in design and coordination of the study, and participated in the data analyses. UG selected ECM molecules for analyses, did the literature search, and participated in the data analyses. JW, RH and UG wrote the manuscript, supported by RWK and RG.
All authors read and approved the final version of the manuscript. Literature used for the network construction. Each citation corresponds to one edge in the regulatory network. Click here for file. Cytoscape import file. Import this file into Cytoscape to analyse the gene regulatory network in more detail. It also includes external links for the genes and references cited to GenBank, Uniprot, and PubMed. Click here for file. Cytoscape import file. Open this file after importing the CYS file; the TNF\u03b1 expression data correspond to accession GSE2624. Click here for file. Discretised gene expression time courses. For the discretisation method, see Results and Discussion as well as Methods sections. Click here for file. Histograms of gene expression simulation. The simulations for TGF\u03b21 (blue) and TNF\u03b1 (red) were run for 12 time steps (x-axis) and for each initial state derived from the patients' data separately. A simulated expression of 100% (y-axis) means that in all six cases the gene was on. Click here for file. List of the top 500 occurring KN rules. Excel file containing the top 500 knowledge base rules valid for the simulations as well as for the data from stimulations with TGF\u03b21 and TNF\u03b1. Click here for file. Temporal rules after TGF\u03b21 stimulation. Complete lists, valid for the simulation and the observed time series. Click here for file. Temporal rules after TNF\u03b1 stimulation. Complete lists, valid for the simulation and the observed time series. Click here for file. Temporal rules after TGF\u03b21 stimulation in PROLOG format. E.g., for use with the XSB Logic Programming and Deductive Database system, supporting tabled resolution. Click here for file. Temporal rules after TNF\u03b1 stimulation in PROLOG format. E.g., for use with the XSB Logic Programming and Deductive Database system, supporting tabled resolution. Click here for file."}
+{"text": "We then show that when we analyse the genes identified by DTW4Omics as significantly associated with a marker for oxidative DNA damage (8-oxodG), through over-representation, an Oxidative Stress pathway is identified as the most over-represented pathway demonstrating that the genes found by DTW4Omics are biologically relevant. In contrast, when the positively correlated genes were similarly analysed, no pathways were found. The tool is implemented as an R Package and is available, along with a user guide from http://web.tgx.unimaas.nl/svn/public/dtw/.When studying time courses of biological measurements and comparing these to other measurements eg. gene expression and phenotypic endpoints, the analysis is complicated by the fact that although the associated elements may show the same patterns of behaviour, the changes do not occur simultaneously. In these cases standard correlation-based measures of similarity will fail to find significant associations. Dynamic time warping (DTW) is a technique which can be used in these situations to find the optimal match between two time courses, which may then be assessed for its significance. We implement DTW4Omics, a tool for performing DTW in R. This tool extends existing R scripts for DTW making them applicable for \u201comics\u201d datasets where thousands entities may need to be compared with a range of markers and endpoints. It includes facilities to estimate the significance of the matches between the supplied data, and provides a set of plots to enable the user to easily visualise the output. We illustrate the utility of this approach using a dataset linking the exposure of the colon carcinoma Caco-2 cell line to oxidative stress by hydrogen peroxide (H Time courses provide insight into patterns and sequential biological events, and therefore temporal studies are an important tool in biological research. The systems we study are not static, but change dynamically over time. 
A large amount of 'omics research is currently performed by taking samples at a single time point and seeking the significantly changed genes, proteins and/or metabolites. However, given the dynamic features of biological systems, we know the chosen time point will strongly influence the obtainable results. Therefore, studying the changes over time is crucial to obtain a fuller understanding of the system as a whole. However, when studying stressor-induced biological time courses in pharmacology or toxicology one can face difficulties with the interpretation of the results due to differences in the kinetics of time courses between biological measurables. Even when aspects of the system are related and therefore display similar patterns of change over time, we expect to see delays and differences in the speed of this change. Under these circumstances standard correlation analysis can often fail to find any association between the elements, and here DTW may be used. DTW works by finding the optimal alignment between time courses. One common application of dynamic time warping is in speech recognition, where it can automatically allow adjustment of the signal to cope with different speeds of speaking. In the biological domain DTW has been previously used for analysing gene expression data in several studies. Currently there is one tool for DTW aimed specifically at analysing 'omics data, in this case global gene expression: GenT\u03c7Warper. Since our DTW4Omics package can be generally applied to time-related omics data from any source, throughout this paper and the DTW4Omics package we use the general terms \u2018Endpoints\u2019 and \u2018Entities\u2019 to refer to the different time courses which are undergoing DTW. How DTW4Omics uses endpoints and entities is illustrated in the figures. DTW4Omics is implemented as an R package, and utilises the DTW package. The function for endpoint related DTW takes 6 parameters, 4 of which are essential.
A matrix of the entities measured, a matrix of the endpoints, a list of directory names where the results will be stored, and a list of the numerical values for the time points are required; optionally, the type of scaling to apply to the data before DTW and a maximum q-value allowed for an entity to be significant can also be given. The row names of the entity matrix are used as labels in the results files and plots. For each endpoint, the optimal matches for each entity are calculated with DTW. The order of the time points is then permuted (separately for entity and endpoint), to allow an estimation of the p-value. The p-values obtained for each entity are stored and Benjamini-Hochberg false discovery rate correction is applied to produce a list of significant entities. A set of jpeg image files is created storing plots of the original and warped time courses for every significant entity. Finally, there are two text files produced as output, one containing a list of the significant entities found, and the other containing a summary of the run, with details of the most significant entity found. A second function performs matched DTW, which allows pairs of profiles to be compared. This function takes two matrices of measurements of dimensions n1*t1 and n2*t2, where t1 and t2 are the numbers of time points from two experiments and n1 and n2 are the numbers of measured entities with a significant time course. It is not necessary that the same time points are measured in each system, or even that the same number of time points is measured, and similarly it may be that a different overall set of entities has been measured; however, the analysis will only be performed on the overlap as determined by the row labels. We used data from a previous study of the effects of H2O2 or menadione exposure on gene expression in the human colon carcinoma cell line Caco-2.
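The core computation above — a DTW distance plus a permutation-based p-value — can be sketched as follows. This is a plain dynamic-programming DTW with an absolute-difference cost, not the R dtw package that DTW4Omics actually wraps, and the two profiles are invented:

```python
import random

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) dynamic-programming DTW, absolute-difference cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def permutation_p_value(x, y, n_perm=200, seed=0):
    """Estimate significance by permuting the time point order, as described above."""
    rng = random.Random(seed)
    observed = dtw_distance(x, y)
    hits = 0
    for _ in range(n_perm):
        xp, yp = x[:], y[:]
        rng.shuffle(xp)          # permute entity and endpoint separately
        rng.shuffle(yp)
        if dtw_distance(xp, yp) <= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# A time-shifted copy of a profile aligns perfectly under DTW...
gene = [0, 1, 3, 6, 3, 1, 0, 0, 0]
endpoint = [0, 0, 0, 1, 3, 6, 3, 1, 0]   # same shape, delayed
assert dtw_distance(gene, endpoint) == 0.0
# ...and that alignment is rarely matched once the time order is destroyed.
assert permutation_p_value(gene, endpoint) < 0.05
```

The per-entity p-values produced this way are what the Benjamini-Hochberg correction is then applied to.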
This set was generated to investigate reactive oxygen species-induced oxidative stress in the colon, which is involved in inflammatory bowel diseases and suggested to be associated with colorectal cancer risk. The set includes changes in cell cycle parameters and measurements of 8-oxo-dG (8-oxo-2-deoxyguanosine), a marker for oxidative DNA damage, at each of 9 time points spread over 24 hours. First we filtered the genes to select those with no missing values and showing a significant time course, using a threshold of at least one time point having an average expression value of at least 2 standard deviations from the mean. This resulted in 1391 genes from the menadione treatment and 1292 genes from hydrogen peroxide exposure. We used the DTW4Omics tool, with unit variance scaling to rescale the genes and endpoints so that values would be comparable. We compared the lists of significant genes generated by DTW4Omics with those genes with a significant Pearson correlation to the endpoints. In order to validate the biological relevance of the genes found by DTW we took the genes significantly associated by DTW between menadione exposure and 8-oxo-dG and input them into ConsensusPathDB. This example clearly demonstrates the ability of DTW4Omics to highlight relevant genes through the alignment of time series data, in this case between genes and phenotypical markers, thus achieving a phenotypic anchoring of 'omics responses to oxidative stress. To test the matched entities functions we generated a set of simulated data. First we generated random time series for 11,000 entities; these time series were designed to be smooth, as opposed to a series of random values. The first 10,000 entities in the second set were generated by adding noise and time shifts to the first set. 10 levels of time shifting were used, ranging from 0\u20139 units of shift, and 10 levels of noise.
The final 1,000 entities in the second set were generated at random in the same manner as those in the first set. For a full description of the generation of these data, including matlab code, see Data S1. The results of these simulations were measured in two ways: firstly, in how many cases the entities would appear as significantly associated, given a standard FDR of 5%, and secondly, how often the correct pattern of time shifts would be detected. These results are shown in the accompanying tables. The high level of similarity detection when comparing two random series can be explained by the fact that the time series were generated to be smooth, and therefore are more likely to be matched to each other than when the time points are randomly permuted in the significance estimation process. This could indicate that when longer and smoother time series from real data are analysed through this tool, the p-values may be over-estimating the significance of the matches, although the method will still serve to rank the entities. Therefore, we suggest that with longer, smoother time series a stricter FDR of, for instance, 1% may be applied to take these false positive results into account. We also used the matched function on the oxidative stress dataset to find whether there is a set of genes which exhibited the same response pattern (perhaps at different delays) after exposure to each compound. Here we might expect to find genes linked to a common mechanism of toxicity. We found that no genes had significantly associated time courses between their expression after H2O2 and after menadione treatments. This matches the conclusions of the original paper. We introduced a new tool for analysing time-course data, DTW4Omics, which uses DTW to generate a list of genes whose time course may be minimally adjusted to obtain an optimal match to that of an endpoint.
Many of the genes found in our test were not significant using correlation analysis, and thus this tool is complementary to such an analysis; we recommend applying both approaches in combination. Further, the gene list generated through DTW could be associated with biologically relevant pathways, in contrast to the list generated from positively correlated genes alone. Through the presentation of simulated data we have seen that DTW can also be applied to recognise time courses which are matched between two elements in different datasets, and through exploring the consequences of adding noise and time shifts to this simulated data we have found that in high noise environments the detection of paired elements is likely to be more robust than the detection of the \u201ccorrect\u201d pattern of time shifts. Whilst we have presented transcriptomics data combined with phenotypic endpoints from non-omics technologies, this approach can clearly be applied much more widely as a method for integrating data from different sources. Data S1: Description of the generation of simulated time-series data, including matlab code. (DOC) Click here for additional data file."}
+{"text": "Based on gene expression profiles, genes can be partitioned into clusters, which might be associated with biological processes or functions, for example, cell cycle, circadian rhythm, and so forth. This paper proposes a novel clustering preprocessing strategy which combines clustering with spectral estimation techniques so that the time information present in time series gene expressions is fully exploited. By comparing the clustering results with a set of biologically annotated yeast cell-cycle genes, the proposed clustering strategy is corroborated to yield significantly different clusters from those created by the traditional expression-based schemes. The proposed technique is especially helpful in grouping genes participating in time-regulated processes. The fast evolving gene microarray technology has enabled simultaneous measurement of genome-wide gene expressions in terms of mRNA concentrations. There are two types of microarray data: time series and steady state. Time-series data are obtained by sequential measurements in temporal experiments, while steady-state data are produced by recording gene expressions from independent sources, for example, different individuals, tissues, experiments, and so forth. The high costs, ethical concerns, and implementation issues prevent from collecting large time-series data sets. Therefore, about 70% of the data sets are steady state [A cell is the basic unit of life, and each cell contains instructions necessary for its proper functioning. These instructions are encoded in the form of DNAs that are replicated and transmitted to its progeny when the cell divides. mRNAs are middle products in this process. They are transcribed from DNA segments (genes) and serve as the templates for protein translation. This conduit of information constitutes the dy state , and mosBased on microarray measurements, clustering methods have been exploited to partition genes into subsets. 
Members in each subset are assumed to share specific biological function or participate in the same molecular-level process. They are termed coexpressed genes and are supposed to be located closely in the underlying genetic regulatory networks. Eisen et al. applied hierarchical clustering to genome-wide expression data for this purpose. A straightforward application of clustering schemes will cause the loss of temporal information inherent in the time-series measurements. This shortcoming has been noticed in the literature; Ramoni et al., for example, designed a model-based clustering approach for gene expression dynamics. Based on time-series data, modern spectral density estimation methods have been exploited to identify periodically expressed genes. Assuming the cell cycle signal to be a single sinusoid, Spellman et al. and Whitfield et al. identified periodically expressed genes via Fourier analysis. The biological experiments generally output unequally spaced measurements. The change of sampling frequency is due to missing data and the fact that the measurements are usually event driven, that is, more observations are taken when certain biological events occur, and the measurement process is slowed down when the cell remains quiet. Therefore, an analysis based on unevenly sampled data is practically desired and technically more challenging. The harmonics exploited in the discrete Fourier transform (DFT) are no longer orthogonal in the presence of uneven sampling. Lomb and Scargle developed a periodogram suited to unevenly sampled data. This paper proposes a novel clustering preprocessing procedure which combines power spectral density analysis with clustering schemes. Given a set of microarray measurements, the power spectral density of each gene is first computed, then the spectral information is fed into the clustering schemes. The members within the same cluster will share similar spectral information, therefore they are supposed to participate in the same temporally regulated biological process. The assumptions underlying this statement rely on the following facts: if two genes X and Y are in the same cluster, their spectral densities are very close to each other; in the time domain, their gene expressions may just differ in their phases.
The phases are usually modeled to correspond to different stages of the same biological process, for example, the cell cycle or circadian rhythms. The proposed spectral-density-based clustering actually differentiates the following two cases. (1) Gene X's expression and gene Y's expression are uncorrelated in both the time and frequency domains. (2) Gene X's and gene Y's expressions are uncorrelated in the time domain, but gene X's expression is a time-shifted version of gene Y's expression. In the traditional clustering schemes, the distances are the same for the above two cases. However, in the proposed algorithm, the second case is favorable and presents a lower distance. Therefore, by exploiting the proposed algorithm, the genes participating in the same biological process are more likely to be grouped into the same cluster. The Lomb-Scargle periodogram serves as the spectral density estimation tool since it is computationally simple and possesses higher accuracy in the presence of unevenly measured and small-size gene expression data sets. The appropriate clustering method is determined based on intensive computer simulations. Three major clustering methods, hierarchical, K-means, and self-organizing map (SOM) schemes, are tested with different configurations. The spectra- and expression-based clusterings are compared with respect to their ability of grouping cell-cycle genes that have been experimentally verified. The differences between clusterings are recorded and compared in terms of information theoretic quantities. This section explains how to apply the Lomb-Scargle periodogram to time-series gene expressions. Next, the three clustering schemes, hierarchical, K-means, and self-organizing map (SOM), are briefly formulated. Afterward, we discuss how to validate the clusterings and make comparisons between them.
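The distinction between the two cases can be illustrated numerically: a time shift destroys Pearson correlation but leaves the power spectrum untouched, so a spectral distance treats phase-shifted genes as neighbours. A small sketch with synthetic signals, assuming NumPy is available:

```python
import numpy as np

# Gene X: a sinusoid with an 8-sample period over 4 full cycles.
t = np.arange(32)
x = np.sin(2 * np.pi * t / 8.0)
# Gene Y: the same profile circularly shifted by a quarter period.
y = np.roll(x, 2)

# Time-domain (Pearson) correlation is ~0 for a quarter-period shift,
# so an expression-based metric would not group X and Y together.
r = np.corrcoef(x, y)[0, 1]
assert abs(r) < 1e-6

# The power spectra are identical (a shift only changes the FFT's phase),
# so the spectral Euclidean distance between X and Y is ~0.
px = np.abs(np.fft.rfft(x)) ** 2
py = np.abs(np.fft.rfft(y)) ** 2
assert np.allclose(px, py)
```

This is exactly case (2) above: uncorrelated in time, indistinguishable in frequency, hence a lower distance under the proposed scheme.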
The notational convention is as follows: the matrices and vectors are in bold face, and scalars are represented in regular font. Most spectral analysis methods, for example, the Fourier transform and the traditional periodogram employed by Spellman et al. and Wichert et al., assume evenly spaced measurements. Given N measurements x(t1), ..., x(tN) taken at arbitrary times, the Lomb-Scargle periodogram estimates the power spectral density over a grid of probing frequencies; the number of probing frequencies and the highest probing frequency for unevenly sampled data can be chosen following the result proved by Eyer and Bartholdi. Notice further that the spectra at the front and rear halves of the frequency grid are symmetric since the microarray experiments output real values. The Lomb-Scargle periodogram represents an efficient solution for estimating the spectra of unevenly sampled data sets. Simulation results also verify its superior performance for biological data with small sample size and various unevenly sampled patterns. The obtained Lomb-Scargle power spectral density will be used as input to the clustering schemes as an alternative to the original gene expression measurements. Three clustering schemes, hierarchical, K-means, and self-organizing map (SOM), are used for testing this substitution. The hierarchical clustering represents the partitioning procedure that assumes the form of a tree, also known as the dendrogram. The bottom-up algorithm starts by treating each gene as a cluster. Then at each higher level, a new cluster is generated by joining the two closest clusters at the lower level. In order to quantize the distance between two gene profiles, different metrics have been proposed in the literature, as enumerated in the table. The correlation is the most popular metric and was exploited in Eisen's work. Based on the chosen metric, the distance between clusters can be defined through different linkage methods. The single linkage method actually constructs a minimal spanning tree, and it sometimes builds an undesirable long chain. The complete linkage method discourages the chaining effect and in each step increases the cluster diameter as little as possible. However, it assumes that the true clusters are compact.
Alternatively, the average linkage method makes a compromise and is usually the preferred method since it poses no assumption on the structure of clusters. The selection of distance metric and linkage method depends on the nature of the real data, and several clustering schemes were proposed to be tested at the same time so that each can capture different aspects of the data. The hierarchical clustering scheme can be formulated in terms of the pseudo code depicted in Algorithm 1. Algorithm 1 (hierarchical clustering): 1: input gene expressions or spectral densities; 2: initialize each gene as a singleton cluster; 3: while the number of clusters exceeds the desired number, 4: merge the two closest clusters and 5: relabel all existing clusters with integers; 6: output the resulting partition. The K-means clustering divides the genes into a prespecified number of clusters. Algorithm 2 (K-means clustering): 1: input gene expressions or spectral densities, and the desired number of clusters; 2: randomly create centroids; 3: assign each gene to its closest centroid; 4: while members in some clusters change, 5: recompute the centroids and 6: reassign each gene to the closest centroid; 7: output the resulting partition. The self-organizing map method is in essence based on a one-layer neural network. Algorithm 3 (SOM clustering): 1: input gene expressions or spectral densities, and the desired number of clusters; 2: randomly create centroids; 3: for a fixed number of iterations, 4: randomly select a gene profile, 5: find the closest centroid, and 6: update the centroids towards the selected profile with a learning rate that decays over time; 7: finally, assign each gene to its closest centroid. The three clustering schemes with inputs of either gene expressions or spectral densities are evaluated in two different ways: how well they group time-regulated genes, and whether they are significantly different from each other. Different criteria are defined based on information theoretic quantities. The clustering schemes can be validated by their ability to group genes that have been annotated to share similar biological functions or participate in the same biological process.
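The preprocessing pipeline — Lomb-Scargle spectra from unevenly sampled expressions, then a distance on the spectra that any of the three clustering schemes can use — can be sketched on synthetic profiles. Everything below (sampling times, periods, noise level) is invented for illustration, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 40, 30))              # deliberately uneven sampling times

def profile(period, phase):
    return np.sin(2 * np.pi * t / period + phase) + 0.02 * rng.normal(size=t.size)

# Two "processes": period-16 genes at three phases, and period-4 genes.
genes = [profile(16, p) for p in (0.0, 1.5, 3.0)] + [profile(4, p) for p in (0.0, 1.5, 3.0)]

# Angular probing frequencies (scipy's lombscargle expects rad/unit).
freqs = 2 * np.pi * np.linspace(0.01, 0.5, 60)
spectra = np.array([lombscargle(t, g - g.mean(), freqs) for g in genes])

# Under a Euclidean metric on spectra, genes sharing a period (one process,
# different phases) are mutually closer than genes from different processes;
# this is the distance matrix that hierarchical/K-means/SOM then operate on.
dist = lambda a, b: float(np.linalg.norm(a - b))
within = max(dist(spectra[0], spectra[1]), dist(spectra[3], spectra[5]))
across = min(dist(spectra[i], spectra[j]) for i in range(3) for j in range(3, 6))
assert within < across
```

The same six time-domain profiles would not cluster this way under a correlation metric, since the phase shifts decorrelate them.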
One of the most explored processes is the yeast cell cycle, for which genes have been mostly identified and their interactions have been proposed in the public database KEGG. It is desirable that genes with the same functions be integrated in as small a number of clusters as possible. Therefore, the smaller the joint entropy, the better the clustering. A straightforward performance metric combining both the clustering entropy and the joint entropy is the mutual information between the obtained clustering and the set of annotated genes, in the spirit of Gibbons and Roth, whereby higher mutual information indicates better agreement. Two clustering schemes create two different partitions of all the observed genes. A measure of the distance between two clusterings is highly valuable when the two schemes do not show a significant difference in their performance. Various metrics have been proposed to evaluate the difference between two clusterings, for example, those of Fowlkes and Mallows, and of Rand. Assume two different schemes produce two clusterings C and C'. Then, the variation of information (VI) is defined as VI(C, C') = H(C) + H(C') - 2I(C, C'), where H denotes the entropy of a clustering and I the mutual information between two clusterings. VI is upper bounded by the logarithm of the number of clustered genes. The performance of the proposed power spectrum-based scheme is illustrated through comparisons with three traditional expression-based clustering schemes: hierarchical, K-means, and self-organizing map (SOM). The comparisons are divided into two parts. In the first part, we evaluate their ability to group the cell-cycle involved genes, while the second part is devoted to illustrating the fact that the proposed schemes construct clusters that are significantly different from those created by the traditional schemes. These simulations were performed on the cdc15 data set published by Spellman et al., which contains unevenly sampled time-series gene expression measurements. Cell cycle has served as a research target in molecular biology for a long time since it plays a crucial role in cell division, and medically it underlies the development of cancer. Experimentally, 109 genes have been verified to participate in the cell-cycle process, and their interactions were recorded in the public database KEGG.
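The information-theoretic quantities used for validation and comparison follow directly from their definitions: entropy H of a partition, mutual information I between two partitions, and VI(C, C') = H(C) + H(C') - 2I(C, C'). A minimal sketch with illustrative labels (not the yeast partitions):

```python
from collections import Counter
from math import log

def entropy(labels):
    """Entropy of a clustering given as a list of cluster labels, one per gene."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def mutual_information(a, b):
    """Mutual information between two clusterings of the same genes."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    return sum((c / n) * log((c * n) / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

def variation_of_information(a, b):
    return entropy(a) + entropy(b) - 2 * mutual_information(a, b)

c1 = [0, 0, 0, 1, 1, 1]          # one clustering of six genes
c2 = [0, 0, 0, 1, 1, 1]          # an identical partition
c3 = [0, 1, 0, 1, 0, 1]          # a largely unrelated partition
assert abs(variation_of_information(c1, c2)) < 1e-12   # identical => VI = 0
assert variation_of_information(c1, c3) > 1.0          # dissimilar => large VI
```

VI = 0 exactly when the two partitions coincide, which is why it serves here as a distance for deciding whether spectra-based and expression-based clusterings genuinely differ.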
Among these, 104 genes are present in the analyzed data set. The clustering performance is represented by an information theoretic quantity, that is, mutual information, which is defined between the obtained partition of all measured genes and the set of 104 genes. Higher mutual information indicates that the 104 cell-cycle genes are closely integrated into only a few clusters, and most clusters are balanced in size. In other words, with the same number of clusters, the higher the mutual information, the better the performance. The proposed strategy is surely not constrained to detecting cell cycle genes. However, we have to confine our discussion to the cell cycle here because the available data set is suited for the purpose of cell cycle research. Besides, the cell cycle genes have been identified for a relatively long time with high confidence. The simulation results for hierarchical clustering are illustrated in the figures. Configured also with various distance metrics, the K-means algorithm was applied on both the spectral and original gene expression data. To avoid converging to local suboptimal solutions, all K-means clustering schemes were executed 5 times, and the best performance was reported. For clustering expression data, the correlation and cosine approaches are still the best choices, while for spectra-based schemes, the Euclidean and city-block approaches exceed the other schemes in most cases. Besides, the proposed scheme constructs a significantly different partition relative to traditional clustering strategies. When deploying the hierarchical or K-means clustering methods based on the spectral density, the Euclidean and city-block distance metrics appear to be more appealing than the cosine or correlation distance metrics. The proposed novel algorithm is valuable since it provides additional information about temporally regulated genetic processes, for example, the cell cycle."}
+{"text": "When growing budding yeast under continuous, nutrient-limited conditions, over half of yeast genes exhibit periodic expression patterns. Periodicity can also be observed in respiration, in the timing of cell division, as well as in various metabolite levels. Knowing the transcription factors involved in the yeast metabolic cycle is helpful for determining the cascade of regulatory events that cause these patterns.Transcription factor activities were estimated by linear regression using time series and genome-wide transcription factor binding data. Time-translation matrices were estimated using least squares and were used to model the interactions between the most significant transcription factors. The top transcription factors have functions involving respiration, cell cycle events, amino acid metabolism and glycolysis. Key regulators of transitions between phases of the yeast metabolic cycle appear to be Hap1, Hap4, Gcn4, Msn4, Swi6 and Adr1.Analysis of the phases at which transcription factor activities peak supports previous findings suggesting that the various cellular functions occur during specific phases of the yeast metabolic cycle. Saccharomyces cerevisiae) exhibit oscillatory dynamics in several cellular pathways, such as those involving the cell cycle, glucose metabolism, and respiration. Previous studies [et al. [et al. [Budding yeast cells , R/B and R/C . Each phase is associated with a characteristic change in dissolved oxygen levels in the yeast culture. During the Ox phase, oxygen levels drop drastically. In the R/B phase, oxygen levels increase, while in the R/C phase, the longest of the three, oxygen levels stay relatively constant. During the course of the experiment, the yeast culture is continuously infused with low levels of glucose, however glucose levels in the media are almost zero at all phases of the cycle; cells appear to adsorb and metabolize available glucose immediately. 
Analyses of microarray time series expression data revealed that ~57% of yeast genes exhibit periodic expression during the course of a metabolic cycle and cluster into one of the three superclusters, corresponding to the three phases of the yeast metabolic cycle. Gene expression in different clusters peaks at different phases, and many common metabolites also oscillate, indicating that there is a clear temporal separation between various cellular events. In the oxidative phase, oxygen is rapidly consumed in a burst of respiration. Genes whose expression peaks during this phase are highly expressed during a very narrow window of the yeast metabolic cycle. Functional and metabolome analysis indicates that in the Ox phase, oxidative phosphorylation is using up previously accumulated acetyl-CoA while ATP is being rapidly produced. The oxidative cluster is enriched for genes involving amino acid synthesis and ribosomes, indicating that cells are preparing for cell division. Genes involved in sulfur metabolism and RNA metabolism also show increased expression. During the Ox phase, ATP is abundant, and this is what enables the assembly of translation machinery for the next phase: the reductive/building phase. In the R/B phase, 40-50% of cells enter the cell cycle during each cycle of the yeast metabolic cycle. Finally, in the R/C phase, cells become dependent on non-respiratory modes of metabolism, and acetyl-CoA accumulates, which is a precursor to the upcoming respiratory Ox phase. The R/C cluster is enriched for genes involving fatty acid oxidation, glycolysis, stress-associated response and protein degradation, and this also includes genes involved in peroxisomal function, vacuoles and ubiquitination machinery. Little oxygen is being consumed, and dissolved oxygen levels continue to rise.
Altogether, cycles in metabolism, respiration and mitochondrial function are all important components of the yeast metabolic cycle. Analysis of intracellular concentrations of metabolites shows that many metabolites exhibit periodic oscillations during the yeast metabolic cycle, and some, such as NADP(H), may be important in the establishment and regulation of cycles. Time-series microarray data may be analyzed to determine the transcription factors that are most likely regulating the periodic genes. Other studies searched the promoters of periodic genes to find the most frequently occurring motifs and deduce the most significant transcription factors. Cokus et al. developed a regression method yielding \u03b1-coefficients, which capture whether the genes a transcription factor binds to are differentially expressed or not, assuming that the effects of other transcription factors are held constant. They are essentially a measure of transcription factor activity, and when calculated for each time point, one can find the \u03b1-coefficients (\"activities\") over time for each transcription factor. Transcription factor \u03b1-coefficient profiles can be further analyzed for periodicity, and treated as if they were time-series expression data. The approach of Cokus et al. works well for \u03b1-coefficient profiles that resemble a sinusoidal curve. However, it would not be as effective for studying expression profiles from the yeast metabolic cycle, because some clusters of genes contain a sharp spike or two peaks per cycle. In the model of Wolf et al., H2S is added to a glycolysis model; this model supports the importance of the sulfur metabolic pathway for establishing oscillations in the short-period yeast metabolic cycle. A further study could create a mathematical model that includes pathways involving heme synthesis to determine whether the Hap2/3/4/5 complex and regulation of heme biosynthesis are sufficient for inducing oscillations in the long-period yeast metabolic cycle.
Comparing the derived \u03b1-coefficient profiles of different transcription factors may suggest novel interactions or regulatory mechanisms. For example, Msn4 and Swi6 have \u03b1-coefficient profiles that are very similar in shape but opposite in sign, suggesting that these may form the connection between oxidative stress and delaying cell cycle initiation during stressful conditions. We speculate that many interesting connections may be discovered by comparing detailed \u03b1-coefficient profiles from other types of oscillating biological systems, and it would be especially interesting to use the methods of this study to analyze the short-period metabolic cycle. Determining how the yeast metabolic cycle is regulated may have implications for other biological cycles and studies on transcription factors as well. For example, heme concentrations oscillate as a function of the mammalian circadian cycle. The method for calculating transcription factor \u03b1-coefficients models the relative expression of each gene as a product of promoter binding degrees raised to the power of the corresponding transcription factor activities: Ri = c \u220fj bij^\u03b1j (Equation 1), where Ri is the relative expression level of gene i, bij is the degree to which transcription factor j binds to the promoter of gene i, and \u03b1j is the \u03b1-coefficient (\"activity\") of transcription factor j. The variable c is a residual constant that the binding factors get scaled by. Taking the logarithm of both sides, Equation 1 becomes: log Ri = log c + \u2211j \u03b1j log bij. Multiple linear regression then yields the \u03b1-coefficients and the regression residual, and these are calculated for each of the 36 time points. Transcription factors were given a periodicity score based on autocorrelation, which is the cross-correlation of a signal with itself at various time shifts. By randomly permuting the data and recalculating periodicity scores, it was found that the scores exceeded the threshold of 0.44 in only 0.023% of the random permutations, i.e. the threshold of 0.44 is highly significant. 
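The regression step described above can be sketched numerically. This is a minimal illustration only, not the authors' MATLAB robustfit pipeline: the sizes, binding matrix, activities and noise level below are all invented, and plain ordinary least squares stands in for robust regression. It fits the log-linear model at a single time point and recovers the hypothetical alpha-coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 genes, 5 transcription factors, one time point.
n_genes, n_tfs = 200, 5
log_b = rng.uniform(-2.0, 0.0, size=(n_genes, n_tfs))  # log binding degrees b_ij
alpha_true = np.array([1.5, -0.8, 0.0, 2.0, 0.3])      # "activities" to recover
log_c = 0.7                                            # residual constant
log_R = log_c + log_b @ alpha_true + rng.normal(0, 0.05, n_genes)

# Ordinary least squares for: log R_i = log c + sum_j alpha_j * log b_ij
X = np.column_stack([np.ones(n_genes), log_b])
coef, *_ = np.linalg.lstsq(X, log_R, rcond=None)
alpha_hat = coef[1:]  # estimated alpha-coefficients ("activities")
```

Repeating this fit at each of the 36 time points would give the activity profile of every transcription factor over time.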
We first calculated the raw, unscaled cross-correlation sequence of the \u03b1-coefficient curves to estimate the phase and amplitude of each transcription factor, in order to determine the phase of the metabolic cycle they peak in. Indirectly, this would also determine the order of the cellular events taking place in the yeast metabolic cycle. Sine waves were fitted to the time-dependent \u03b1-coefficient curves, following the decomposition approach of Lelandais et al. The phases of the peaks of the \u03b1-coefficients were also calculated: the data from the 36 time points was averaged to produce 12 time points for each transcription factor, representing the average of \u03b1-coefficients across the three cycles for which we have data. The maximum of the averaged \u03b1-coefficients was found for each transcription factor, and its location along the time axis was converted into radians. Transcription factors were clustered based on their time-dependent \u03b1-coefficient profiles using a hierarchical clustering method. Clustering was done using the absolute values of \u03b1-coefficients, because transcription factors in the same cluster may have opposite \u03b1-coefficient curves depending on whether they are activators or repressors. Temporal data for levels of dissolved oxygen in the yeast culture were obtained and included in order to compare transcription factor \u03b1-coefficients and oxygen levels side-by-side. Computing a transition matrix enables the prediction of transcription factor \u03b1-coefficients at one time point from the \u03b1-coefficients at the preceding time point by matrix multiplication: B = TA, where A and B are the matrices of \u03b1-coefficients excluding the last and first time points, respectively. This set of equations can be solved for T in the least-squares sense using the Moore-Penrose pseudoinverse of A. The transition matrix was computed both with and without a constraint producing non-negative entries. 
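The time-translation step can be sketched as a least-squares solve. This sketch assumes the matrix form B = TA described in the text; the number of factors, the stable dynamics matrix and the initial activities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dynamics: activities of 4 transcription factors, 36 time points.
n_tfs, n_times = 4, 36
T_true = rng.normal(size=(n_tfs, n_tfs))
T_true *= 0.95 / np.max(np.abs(np.linalg.eigvals(T_true)))  # keep dynamics stable

alpha = np.zeros((n_tfs, n_times))
alpha[:, 0] = rng.normal(size=n_tfs)
for t in range(1, n_times):
    alpha[:, t] = T_true @ alpha[:, t - 1]   # alpha(t) = T * alpha(t-1)

A = alpha[:, :-1]  # alpha-coefficients excluding the last time point
B = alpha[:, 1:]   # alpha-coefficients excluding the first time point

# Solve B = T A in the least-squares sense via the Moore-Penrose pseudoinverse.
T_est = B @ np.linalg.pinv(A)
```

Iterating `T_est` from the first time point's activities reproduces the verification procedure described in the text.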
The two time-translation matrices were verified for correctness in modeling the dynamical system by multiplying them with the \u03b1-coefficients of the first time point, and multiplying the resulting vectors with the time-translation matrices again for each successive time point. Model residuals were calculated for each time point by finding the difference between the mean activity of \u03b1-coefficients calculated using regression and the mean activity of \u03b1-coefficients calculated using matrix multiplication. To illustrate the network of transcription factors visually, the transition matrix was converted into a diagram such that the nodes represent transcription factors and edges correspond to the most significant entries in the translation matrix. If connections existed in both directions, only the more significant connection was considered. Algorithms for computing transcription factor \u03b1-coefficients and their autocorrelation functions, amplitudes and phases, and time-translation matrices were implemented in MATLAB. The network diagrams were also generated in MATLAB. \u03b1-coefficient curves were clustered using TimeClust, a MATLAB-based tool for clustering genes according to their temporal expression profiles. The calculations described in this manuscript were performed by AR. MP provided essential comments and guidance. The manuscript was written by AR and edited by MP. Network layouts, figures and tables were by AR. All authors read and approved the final manuscript. Time-translation matrix with no constraints. Shaded entries show significant interactions between transcription factors, with a significance threshold of 0.5. Entries shaded darker are positive values, lighter are negative values. Italics indicate that the interaction was not included in the graphical representation of the transition matrix, because an interaction with a greater magnitude exists in the opposite direction. Time-translation matrix with constraint to produce non-negative entries. 
Shaded entries show significant interactions between transcription factors, with a significance threshold of 0.5. Model residuals for two phases of the yeast metabolic cycle. Residuals were calculated from A) the transition matrix constrained for non-negative entries and B) the non-constrained transition matrix. Goodness of fit for multiple linear regression. Estimates of the square root of residual variance, \u03c3, are reported for each time point and were calculated by the MATLAB function robustfit in order to aggregate the residuals into a single measure of predictive power. First, a \u03c3 estimate (root-mean-square error) is calculated from ordinary least squares (\u03c3OLS), and a robust estimate of sigma (\u03c3robust) is also calculated. The final estimate of \u03c3 is the larger of \u03c3robust and a weighted average of \u03c3OLS and \u03c3robust. Note that \u03c3 is equal to the median absolute deviation (MAD) of the residuals from their median, scaled to make the estimate unbiased for the normal distribution: \u03c3 = MAD/0.6745. Also shown is the mean of the residuals at each time point. To put residuals on a comparable scale, they are \"studentized,\" that is, they are divided by an estimate of their standard deviation that is independent of their value. Autocorrelation function. MATLAB code for calculating the autocorrelation function of transcription factor \u03b1-coefficients."}
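The supplementary MATLAB autocorrelation routine is not reproduced here, but an autocorrelation-based periodicity score can be sketched in Python. This is our own simplified stand-in: it takes the largest normalized autocorrelation at a nonzero lag, and does not reproduce the permutation calibration behind the 0.44 threshold. The 36-point, period-12 profile is an invented example:

```python
import numpy as np

def periodicity_score(x):
    """Largest normalized autocorrelation at a nonzero lag (an
    illustrative stand-in for the paper's MATLAB routine)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1
    ac = ac / ac[0]                                    # normalize: lag 0 == 1
    return float(ac[1:].max())

# A periodic "activity" profile: three cycles of period 12 over 36 points,
# versus a non-periodic white-noise profile of the same length.
t = np.arange(36)
periodic = np.sin(2 * np.pi * t / 12)
noise = np.random.default_rng(2).normal(size=36)
```

A smoothly oscillating profile scores much higher than noise, which is the property the 0.44 threshold exploits.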
+{"text": "The vascular disease in-stent restenosis (ISR) is characterized by formation of neointima and adverse inward remodeling of the artery after injury by coronary stent implantation. We hypothesized that the analysis of gene expression in peripheral blood mononuclear cells (PBMCs) would demonstrate differences in transcript expression between individuals who develop ISR and those who do not. We determined and investigated PBMC gene expression of 358 patients undergoing an index procedure to treat de novo coronary artery lesions with bare metallic stents, using a novel time-varying intercept model to optimally assess the time course of gene expression across a series of blood samples. Validation analyses were conducted in an independent sample of 97 patients with similar time-course blood sampling and gene expression data. We identified 47 probesets with differential expression, of which 36 were validated upon independent replication testing. The genes identified have varied functions, including some related to cellular growth and metabolism, such as the NAB2 and LAMP genes. In a study of patients undergoing bare metallic stent implantation, we have identified and replicated differential gene expression in peripheral blood mononuclear cells, studied across a time series of blood samples. The genes identified suggest alterations in cellular growth and metabolism pathways, and these results provide the basis for further specific functional hypothesis generation and testing of the mechanisms of ISR. Cardiovascular disease is the leading cause of death in western countries and a major cause of morbidity and mortality world-wide. Studying coronary atherosclerotic disease (CAD) is challenging for several reasons, among them that it has substantial environmental and genetic components. 
Furthermore, despite the nearly universal presence of coronary atherosclerosis, particularly as individuals age, cardiovascular events such as acute coronary syndromes, sudden death and the need for revascularization therapy only occur in some individuals, highlighting the difficulty in precisely defining atherosclerosis phenotypes. In symptomatic patients, revascularization therapy is often required, and percutaneous intervention with balloon angioplasty and stent implantation is a cornerstone of therapy. In-stent restenosis (ISR) is a late complication of stent implantation in which inflammatory and proliferative responses to the vascular injury caused by angioplasty and stenting lead to neointimal hyperplasia within the stent and at its edges over the following weeks and months. Many of the same inflammatory and proliferative processes are activated in the development of atherosclerosis but occur over years or decades. ISR is characterized by proliferative responses to the vascular wound incurred as a result of stent implantation. In the analysis reported here, we apply a novel method to analyze time-course gene expression data to gene expression profiles of peripheral blood mononuclear cells (PBMCs) of patients enrolled in our study of ISR. The results of the discovery transcriptome analysis of the CardioGene Study were tested for replication in an independent sample of Icelandic patients. We ultimately identified and validated a set of 32 genes that are temporally differentially expressed in the blood of patients who develop ISR after stenting, compared to those who do not develop ISR, highlighting the importance of cellular growth pathways and identifying several biologic candidates for further mechanistic investigation. In the CardioGene Study, 312 patients were included. 
Gene expression profiling with DNA microarray technology is a popular tool to monitor the expression level of thousands of genes simultaneously and has been applied in cardiovascular research to detect patterns of gene expression indicative of underlying disease states; many such data sets are from experiments that measure gene expression over time. Early time-course RNA expression studies have focused on identifying clusters of genes with a similar pattern over time. To detect differential gene expression patterns over time, we used a time-varying intercept model which can account for differences in sampling intervals between patients. The term \"differential gene expression pattern over time\" can be interpreted in several ways to form suitable questions and, thus, the hypotheses are dependent on the particular experiment under consideration. We considered two related questions, where each roughly corresponds to the main effect of the group and the interaction between group and time points. Consider, for example, a gene with exactly the same expression pattern over time in both groups, but the first group has a higher expression than the second group, consistently over all time points. This is a gene that shows the main effect of the group. On the other hand, consider another gene with similar expression levels at two time points, 1 and 2, but this gene's expression increases from time 1 to 2 in one group and decreases in the other group. This group-by-time interaction cannot be examined by methods that only test the main effect of the group. The time-varying intercept model we used can detect both the main effect and the group-by-time interaction. This method, however, requires a large number of bootstrap resamples to evaluate the significance level of the difference between groups, which can be computationally challenging especially when a large number of genes are tested. 
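The two questions can be made concrete with a toy numerical example. This is not the authors' time-varying intercept model (which uses B-spline bases and bootstrap resampling); it simply contrasts group means at two time points, with all expression values invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50  # patients per group; all numbers here are hypothetical

# "Main effect" gene: group 1 is uniformly 1.0 higher at both time points.
main_g0 = np.column_stack([rng.normal(5.0, 0.2, n), rng.normal(5.5, 0.2, n)])
main_g1 = main_g0 + 1.0

# "Interaction" gene: equal at time 1, rises in group 0 but falls in group 1.
int_g0 = np.column_stack([rng.normal(5.0, 0.2, n), rng.normal(6.0, 0.2, n)])
int_g1 = np.column_stack([rng.normal(5.0, 0.2, n), rng.normal(4.0, 0.2, n)])

def effects(g0, g1):
    """Return (group main effect, group-by-time interaction) from
    per-patient expression at two time points (the two columns)."""
    main = g1.mean() - g0.mean()
    interaction = (g1[:, 1] - g1[:, 0]).mean() - (g0[:, 1] - g0[:, 0]).mean()
    return main, interaction
```

The first gene yields a large main effect and near-zero interaction; the second yields a large interaction, which a main-effect-only test would miss.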
Of the genes we identified, the most extensive prior literature in vascular disease was found for the NAB2 gene, which is also known as EGR1 binding protein 2. Early growth response (EGR) genes are transcription factors that are implicated in a wide variety of proliferative and differentiative processes, and NAB2 is expressed in vascular smooth muscle cells in response to injury. The LAMP2 gene product protects the lysosomal membrane from proteolytic enzymes within lysosomes and also functions as a receptor for proteins to be imported into lysosomes. Mutations in the LAMP2 gene have been identified in patients with hypertrophic cardiomyopathy. Some genes identified have no apparent role in vascular biology, such as VPS26, VPS41, SRP54, and RAD23B, and comparison to previously published reports in the literature does not show differential gene expression in other studies of restenosis, although those investigations were conducted primarily on vascular tissues or cultured vascular smooth muscle cells rather than peripheral blood. The VPS26 and VPS41 genes belong to a group of vacuolar protein sorting genes and may have a role in lysosome maintenance. SRP54 is a protein in the signal recognition particle, which directs secretory proteins to membranes as they emerge from the ribosome. RAD23B has a role in DNA (nucleotide excision) repair, and genetic variants in RAD23B have been associated with several solid tumors. Genetic variants in ACADM have been associated with medium-chain acyl-CoA dehydrogenase activity, but there is no known vascular implication of this disorder. Variants in PCMT1 and FOLR2 have been associated with neural tube defects, with no known vascular role. We compared the results of our study to previously published reports of transcriptome analysis in ISR. 
Our results were negative for replication of these studies, which focused primarily on vascular tissue samples in relatively small sample sets. In one study, peripheral blood total leukocyte gene expression was studied in 10 patients with ISR and atherectomy specimens. Vascular biopsy and atherectomy are performed infrequently as part of routine clinical care and would not support well-powered studies of vascular tissues. Tissue sampling over a time course is not clinically indicated or possible. Additionally, a large degree of intra-individual variability in gene expression was noted in these prior studies of vascular tissues, making replication testing critical, yet this cannot be done without access to additional tissue samples. In our study, we analyzed peripheral blood leukocytes, specifically focusing on the mononuclear fraction, which contains primarily B and T lymphocytes and monocytes. Although the analysis of peripheral blood cells would ideally be complemented by similar studies in vascular tissues, studying gene expression profiles in blood leukocytes is biologically relevant due to well-defined interactions with the arterial wall, particularly in the setting of vascular injury and repair as in ISR. To address the possibility of false positives identified with our statistical methods, we conducted replication analyses in the independent sample of deCODE samples and we conducted bootstrap resampling to assess significance of the findings. Through this sensitivity analysis, we demonstrated that the validation of 36 probe sets is not likely to be due to chance. Additional potential limitations of this study of ISR are the use of a clinical restenosis outcome, rather than an angiographic outcome, in which clinically silent ISR may have been missed, and the choice of tissue analyzed, as discussed. 
The CardioGene and deCODE cohorts differ in the incidence of ISR (16.7% in the CardioGene Study and 28.8% in deCODE), with the patients in the deCODE sample showing overall lower residual percent stenosis in the treated lesion after stent implantation. Also, the proportions with hyperlipidemia and diabetes differ. However, despite the differences in the cohorts, we find replication of a substantial proportion of the discovery findings. In summary, we have used a method to analyze gene expression in serial blood samples and identified a set of genes that show differential expression in the blood of patients who develop ISR after BMS implantation, compared to those who do not. These gene expression patterns of adverse vascular remodeling suggest possible hypotheses for the mechanisms of injury-induced remodeling observed in both ISR and CAD, since ISR is a niche phenotype occurring in a subset of patients with CAD. Further studies are needed to investigate the functional relevance of these genes and are warranted based upon the findings of this study. The CardioGene Study was an IRB-approved, prospective cohort study of 358 patients enrolled at the time of bare metal stent (BMS) implantation to treat de novo, previously untreated native coronary artery lesions at William Beaumont Hospital and the Mayo Clinic. Patients were followed for one year to determine ISR outcomes (Figure). Consecutive patients presenting to the cardiac catheterization laboratories of the clinical enrollment sites were approached for participation in the study. Follow-up clinical evaluation was performed via patient interview and review of all available medical records at 6 months and 12 months post-stent. In the study design of the time course of blood gene expression profiling, the follow-up times were set at 2 weeks and 6 months for the early and late follow-up. 
A window was set at each of the time points, with the early follow-up designated as any time 2-4 weeks post-stent and the late follow-up designated as any time between 5-7 months for patients seen at 6 months. If patients did not return for follow-up at the 6 month time point, an attempt was made to have the patient provide the late follow-up blood sample at 12 months post-stent, at the time when final clinical ascertainment for ISR was made. As a result of the use of time windows, we were able to increase ascertainment of follow-up blood samples, but the actual sample collection intervals after stent implantation were not precisely spaced. To analyze such data, we used the time-varying intercept model with a B-spline basis of dimension 2 based on only one knot at the median follow-up time (which was 14 days). Annotation used the DAVID/EASE software, which provides gene names and functional annotation of probe sets. BMS: bare metal stents; CAD: coronary atherosclerotic disease; DES: drug-eluting stents; ISR: in-stent restenosis; PBMC: peripheral blood mononuclear cells. The authors declare that they have no competing interests. Study design, CardioGene clinical enrollment and blood and data collection, deCODE clinical enrollment and data collection, RNA expression data generation, data analysis, and manuscript preparation were shared among the authors. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1755-8794/4/20/prepub Supplementary Table 1. Initial 46 probesets identified in the timecourse analysis of the CardioGene samples. Supplementary Table 2. Gene ontology annotation of the final 32 genes identified and validated. Supporting Materials. Statistical methods and lists of probe IDs discovered and replicated."}
+{"text": "A number of models and algorithms have been proposed in the past for gene regulatory network (GRN) inference; however, none of them address the effects of the size of time-series microarray expression data in terms of the number of time-points. In this paper, we study this problem by analyzing the behaviour of three algorithms based on information theory and dynamic Bayesian network (DBN) models. These algorithms were implemented on different sizes of data generated by synthetic networks. Experiments show that the inference accuracy of these algorithms reaches a saturation point after a specific data size brought about by a saturation in the pair-wise mutual information (MI) metric; hence there is a theoretical limit on the inference accuracy of information theory based schemes that depends on the number of time points of micro-array data used to infer GRNs. This illustrates the fact that MI might not be the best metric to use for GRN inference algorithms. To circumvent the limitations of the MI metric, we introduce a new method of computing time lags between any pair of genes and present the pair-wise time lagged Mutual Information (TLMI) and time lagged Conditional Mutual Information (TLCMI) metrics. Next we use these new metrics to propose novel GRN inference schemes which provide higher inference accuracy based on the precision and recall parameters. It was observed that beyond a certain number of time-points of micro-array data, the performance of the algorithms measured in terms of the recall-to-precision ratio saturated due to the saturation in the calculated pair-wise MI metric with increasing data size. The proposed algorithms were compared to existing approaches on four different biological networks. 
The resulting networks were evaluated based on the benchmark precision and recall metrics, and the results favour our approach. To alleviate the effects of data size on information theory based GRN inference algorithms, novel time lag based information theoretic approaches to infer gene regulatory networks have been proposed. The results show that the time lags of regulatory effects between any pair of genes play an important role in GRN inference schemes. A GRN is a complex set of highly interconnected processes that govern the rate at which different genes in a cell are expressed in time, space, and amplitude. Such a network is commonly represented by many pairs of proteins and genes, in which one protein/gene regulates the abundance and/or activity of another protein/gene. Reverse engineering of gene regulatory networks remains a major issue and area of interest in the field of bioinformatics and systems biology. A survey paper discusses the coarse grained properties of genetic networks. The MI and CMI metrics are central in establishing the relationships between genes in information theory models, as in algorithms such as ARACNE and REVEAL. Hence, in order to design a smart GRN inference algorithm, it is important to study the behaviour of these MI and CMI metrics on microarray data of various sizes. The MDL implementation of Zhao will henceforth be referred to as the network MDL algorithm. Another major disadvantage of information theory based models is that MI and CMI do not give directions between relationships. A unit time delay was assumed in our earlier PMDL implementation. Our major contributions in this paper can be summarized as follows: 1. We show that the performance of the inference algorithms saturates beyond a certain data size due to the saturation in the information theory metric mutual information. Note that we have only varied the data size in our experiments to understand the effects of regulatory time-lags between genes on the algorithms. 
The overall performance of the algorithms would also be affected by other factors, which might lead to other novel innovations that need to be considered in designing reverse-engineering schemes. This is however outside the scope of this paper. 2. A new way of computing time lags between any pair of genes is presented. Our scheme makes sure that time lags cannot be negative, and we argue that a more biologically pragmatic view is that a gene can affect another gene only when it is up-regulated. This assumption makes more sense in the Boolean network formalism of GRNs, where a gene can only be in two possible states: ON and OFF. 3. We introduce the time lagged Mutual Information (TLMI) and time lagged Conditional Mutual Information (TLCMI) quantities. 4. We present novel GRN inference schemes based on TLMI, TLCMI, MDL and PMDL principles that provide higher accuracy over the existing information theoretic methods. In this section, we first report the results of the existing inference schemes that were run on time-series micro-array data of varying size and illustrate that the performance of the methods saturates beyond a certain number of time points. We also report how the pair-wise MI metric saturates beyond a certain data size. We then present our new time lag computation scheme and the modified version of the network MDL algorithm wherein we replace the MI metric, which considers unit time delay, with the TLMI metric (considering a time-lag of \u03c4). We next present a modified version of the PMDL algorithm, by replacing the MI and CMI metrics with the TLMI and TLCMI metrics. Finally the results from the network MDL, PMDL and modified network MDL and PMDL algorithms are compared. Benchmark measures recall R and precision P are used to evaluate the performance of the algorithms. 
Although different definitions for recall and precision exist in the literature, in this paper R is defined as Ce/(Ce+Me) and P is defined as Ce/(Ce+Fe), where Ce denotes the edges that exist both in the true network and in the inferred network, Me are the edges that exist in the true network but not in the inferred network, and Fe are edges that do not exist in the true network but do exist in the inferred network. The performance evaluation of information theory and DBN based algorithms over different data sizes was carried out on random synthetic networks generated by the Genenetweaver tool. It was imperative for us to use synthetic data over time-series micro-array experimental data in this phase for the following reasons: \u2022 Very few experimental data sets have equal time intervals between experiments, and the data size is generally limited to around 20 time points. In our in silico runs, we wanted to keep equal time intervals between each time point such that we can understand the true effects of regulatory time-lags between genes on the inference accuracy. It is generally not possible to assign a single time-lag value to a gene-pair if the expression readings for each time point are not evenly spaced, as mentioned in Zou. \u2022 Also, the saturation in inference accuracy generally requires a larger data size (> 30 time points as shown later), and it would have been difficult to identify the role of MI in bringing about this theoretical limit on the accuracy of information theoretic schemes with a smaller biological data set (of ~20 time points). 
\u2022 It should be noted that the Genenetweaver software derives the in silico GRNs from the prior knowledge database of yeast (Saccharomyces cerevisiae), which contains 4441 genes and 12873 interactions. Thus, in order to create a sample GRN with 10 nodes, Genenetweaver clusters the yeast transcriptomic network into modules and chooses the module having the number of genes closest to the given input (in this case 10 genes) to create the in silico network. Each such module maps to a particular biological function, and this strategy essentially ensures that there is minimum cross-talk of this set of genes with the others in the yeast network, resulting in a higher efficacy of the inference algorithms that use them. The time series DNA microarray data from Spellman et al was used, with the same preprocessing steps as in earlier work. Four separate biological networks (as discussed later) used for comparison purposes were derived from the yeast cell cycle pathway. A network and data set with 75 time points was generated. For the PMDL algorithm, it was observed that the precision increased until 55 time points and, beyond that, the precision remains relatively stable for the two smaller tested networks (with 20 and 30 genes respectively). For the larger network (with 40 genes), the precision increased until 70 time points before saturation. The recall for the PMDL algorithm increased until 40 time points before saturation for each of the 3 tested networks. For the network MDL algorithm, it was observed that precision increased until 35 time points and fluctuated after that. The recall for the network MDL algorithm kept increasing for all the test cases with considerable fluctuations. For further analysis we considered the recall/precision ratio (Figure and Table). To understand the performance implications of the more conventional DBN approach on the number of time points, we conducted similar experiments with the DBN scheme developed by Zou. 
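The recall and precision bookkeeping behind these comparisons follows directly from the edge-set definitions of Ce, Me and Fe given earlier. A minimal sketch, with an invented 4-gene example (this is illustration only, not the paper's evaluation harness):

```python
def recall_precision(true_edges, inferred_edges):
    """Benchmark metrics from the text: R = Ce/(Ce+Me), P = Ce/(Ce+Fe).

    Edges are directed (regulator, target) pairs; this sketch assumes at
    least one true edge and one inferred edge so denominators are nonzero."""
    true_edges, inferred_edges = set(true_edges), set(inferred_edges)
    ce = len(true_edges & inferred_edges)  # edges in both networks
    me = len(true_edges - inferred_edges)  # missed edges
    fe = len(inferred_edges - true_edges)  # falsely inferred edges
    return ce / (ce + me), ce / (ce + fe)

# Hypothetical 4-gene example: 2 of 3 true edges recovered, plus 1 false edge.
true_net = [(1, 2), (2, 3), (3, 4)]
inferred_net = [(1, 2), (2, 3), (4, 1)]
```

Here both recall and precision come out to 2/3, and the recall/precision ratio studied in the text is their quotient.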
The performance saturation of the methods motivated us to study the behavior of the information theoretic quantities of entropy, conditional entropy and mutual information. For this set of experiments, biological synthetic data was created using the Genenetweaver tool. The plots show that the saturation in the methods was due to the saturation in the mutual information quantity, which goes close to zero even though the entropy increases in the network. This would conceptually mean that there is room to improve on the inference accuracy (due to the high entropy), yet the mutual information metric will not be able to point us in the right direction. Other information theoretic algorithms, like REVEAL, use the MI metric in a similar manner. The saturation in MI due to an increasing number of time points suggests that the MI should not be computed over the entire range of time points of micro-array data available from the experiments. GRNs are inherently time varying, and hence the pair-wise MI between any 2 genes needs to be computed over the time range where the first gene has a substantial regulatory effect on the other one. This can be best approximated by estimating the regulatory time-lags between each gene pair, and subsequently computing the MI between them for this particular time range. This concept was used to compute the time-lags between genes and the TLMI and TLCMI metrics as discussed in the Methods section. Note that the time lag computation concept initially proposed in Zou needed modification for our implementation. A network with 10 genes was derived from the yeast cell cycle and Spellman\u2019s data. We repeated the same process for two other biological networks with 11 and nine genes from the yeast cell cycle and Spellman\u2019s data. We also incorporated the proposed TLMI and TLCMI metrics in the PMDL based algorithm. 
A network with 14 genes was derived from the yeast cell cycle and Spellman\u2019s data. The performance of the PMDL algorithm depends on three factors: the number of genes, the number of time points and, most importantly, the number of parents inferred for each gene by the algorithm. To see what role these factors play we will look into the time and space complexities of the algorithm. A schematic for the PMDL algorithm is shown in the figure. Step 4 of the PMDL algorithm iterates n^2(m \u2212 \u03c4) times, where n is the number of genes, m is the number of time points, and \u03c4 is the time lag. From line 5 to line 18 the algorithm iterates n^4 times, and lines 15 and 16 of the algorithm iterate n^3(m \u2212 \u03c4) times. Finally, from lines 20 to 31 the algorithm iterates n^3 times. Thus the time complexity of the overall algorithm is \u0398(n^4 + n^3(m \u2212 \u03c4)). From the time complexity it can be seen that if the number of genes is larger than the number of time points then the run time of the PMDL algorithm depends on the number of genes, and if the number of time points is larger than the number of genes then the run time depends on the number of time points. In the worst case \u03c4 is zero for all genes and the algorithm runs in \u0398(n^4 + n^3m) time. In the MDL based implementation we discard lines 20 to 31 from the PMDL based implementation; the worst case time complexity is again \u0398(n^4 + n^3m). Our motivation lies in changing the MI metric to incorporate the effect of time lags, which can also model the loops between 2 genes. While implementing Zou\u2019s method of calculating time lags, we came across the problem of negative lags: for every pair of genes, when the initial expression change is not at the same time point, one of the two time lags turns out to be negative. We therefore propose time lags as the difference between the initial up-regulation of the first gene and the initial expression change of the second gene after the up-regulation of the first gene. 
This solves the issue of negative time lags, besides being biologically more relevant than the existing method of calculating time lags. In the figure, Ua and Ub indicate the initial up-regulation of genes A and B at time points two and three respectively. Ca and Cb indicate that time points six and three are the time points where the expression values of genes A and B changed after the initial up-regulation of genes B and A. The time lag between A and B is calculated as τ1 = Cb - Ua and the time lag between B and A is calculated as τ2 = Ca - Ub. In this example the time lag between A and B is one and the time lag between B and A is three. In biological networks the A ↔ B schema is quite common, hence Zou's time lag computation scheme needs to change to handle such cases. We also argue that a gene can affect another gene only when it is up-regulated (ON). Entropy, H, is the measure of the average uncertainty in a random variable. If pi is the probability of observing a particular symbol in a sequence, then the entropy is given as H = -Σi pi log2 pi. As the proposed algorithm quantizes the microarray data to two levels, a gene takes two values, 0 and 1, corresponding to the OFF and ON states respectively. In this case the entropy of a gene A is defined as H(A) = -p0 log2 p0 - p1 log2 p1, where p0 and p1 are the probabilities of observing gene A as 0 and 1 respectively over the sequence A. Sequence A contains the values taken by a gene in the time series data; thus if we have time series data of m time points then sequence A is of length m. In the MDL and PMDL schemes the entropy was computed for a sequence length of m-1 to simulate a default time-lag of 1, i.e., the entropy between A and B is computed with a unit lag. 
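The proposed time-lag scheme can be sketched in Python; this is a minimal illustration over binarized (0/1) expression sequences, with helper names of our own rather than from the paper:

```python
def time_lag(a, b):
    """Proposed time lag from regulator a to target b (binary 0/1 sequences).

    U_a: first time point where a switches ON (0 -> 1).
    C_b: first time point after U_a where b changes value.
    Returns C_b - U_a, or None if either event never occurs.
    """
    # find the initial up-regulation of the regulator
    try:
        ua = next(t for t in range(1, len(a)) if a[t] == 1 and a[t - 1] == 0)
    except StopIteration:
        return None
    # find the first expression change of the target after that point
    for t in range(ua + 1, len(b)):
        if b[t] != b[t - 1]:
            return t - ua
    return None
```

With the sequences of the figure's example (A up-regulated at time 2 and changing again at time 6, B up-regulated at time 3), `time_lag` returns 1 from A to B and 3 from B to A, and never a negative value.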
In order to implement the TLMI and TLCMI metrics, we need to compute the entropies between A and B over lag-adjusted sequences. The joint entropy between two sequences A and B, H(A,B), is defined as H(A,B) = -Σa,b p(a,b) log2 p(a,b), where (A,B) is considered to be a single vector-valued random variable; thus the joint entropy between two variables is an extension of entropy to a pair of sequences. As stated before, the proposed algorithm quantizes the microarray data into two levels; in this case the joint entropy between two sequences is defined over p0,0, p0,1, p1,0 and p1,1, the probabilities of observing both zeros, a zero and a one, a one and a zero, and both ones in sequences A and B respectively. Mutual Information (MI) measures the amount of information that can be obtained about one random variable by observing another one. MI in terms of entropies is defined as MI(A;B) = H(A) + H(B) - H(A,B). Conditional Mutual Information (CMI) is the reduction in the uncertainty of A due to knowledge of B when C is given; for sequences A, B and C it is defined as CMI(A;B|C) = H(A,C) + H(B,C) - H(C) - H(A,B,C). A high MI between two genes indicates a potential regulatory relationship. After implementing time lags, we no longer compute the entropy and joint entropy over the complete sequences of the genes as discussed before. If a time lag τ is computed between two genes A and B, we remove the last τ symbols of sequence A and the first τ symbols of sequence B to obtain reduced sequences (of length m-τ each) for A and B respectively. Computing the MI over these reduced sequences gives the TLMI. Note that TLMI is not a symmetric quantity, i.e. TLMI(A,B) ≠ TLMI(B,A). Considering a time lag τ between A and B, we compute the TLCMI by deleting the last τ symbols in sequences A and C and the first τ symbols in sequence B and computing the CMI of these reduced sequences. For n genes, two n×n matrices, viz. one connectivity matrix and one MI matrix, are stored. 
A time lag of one unit is assumed, thus the MI computations are not symmetric. Using every MI value as a threshold over the MI matrix, n^2 models are obtained. For every model the description length (model length + data length) is computed, and the model with the minimum description length is selected as the best model. Estimating the MI threshold is one of the major drawbacks of information theory based models. Zhao et al. first proposed a network MDL algorithm; it involved a fine tuning parameter for the model length, which also makes the MDL principle method somewhat arbitrary, and the PMDL algorithm was subsequently proposed to overcome this. In our experiments, the basic information theoretic metrics with unit time lag were replaced in the network MDL and PMDL algorithms with the TLMI and TLCMI metrics. The authors declare that they have no competing interests. VC, CZ and PG1 developed the algorithm, implemented it on synthetic and biological data sets, and performed an in-depth analysis of the results. EJP and PG2 coordinated the study."}
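The threshold-scanning step can be sketched as follows. This is an illustration only: it assumes a caller-supplied `desc_len` function scoring a candidate connectivity matrix, and does not reproduce the actual model-length and data-length terms of the MDL/PMDL formulations:

```python
def mdl_select(mi, desc_len):
    """MDL-style threshold selection over an n x n MI matrix (sketch).

    Every distinct MI value is tried as a threshold; each threshold yields a
    candidate model (edge i -> j iff mi[i][j] >= threshold), and the model
    with the minimum description length is returned as (score, thr, model).
    """
    n = len(mi)
    best = (float("inf"), None, None)
    for thr in {mi[i][j] for i in range(n) for j in range(n)}:
        model = [[1 if mi[i][j] >= thr else 0 for j in range(n)]
                 for i in range(n)]
        score = desc_len(model)
        if score < best[0]:
            best = (score, thr, model)
    return best
```

With a toy scoring function that rewards edges carrying high MI and penalizes the rest, the procedure picks the threshold separating the strong pairwise dependencies from the weak ones.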
+{"text": "Due to experimental constraints, most microarray observations are obtained through irregular sampling. In this paper three popular spectral analysis schemes, namely, Lomb-Scargle, Capon and missing-data amplitude and phase estimation (MAPES), are compared in terms of their ability and efficiency to recover periodically expressed genes. The functioning of eukaryotic cells is controlled by accurate timing of biological cycles, such as cell cycles and circadian rhythms. These are composed of an echelon of molecular events and checkpoints. At the transcription level, these events can be quantitatively observed by measuring the concentration of messenger RNA (mRNA), which is transcribed from DNA and serves as the template for synthesizing protein. To achieve this goal, in the microarray experiments, high-throughput gene chips are exploited to measure genome-wide gene expressions sequentially at discrete time points. These time series data have three characteristics. Firstly, most data sets are of small sample size, usually not more than 50 data points. Large sample sizes are not financially affordable due to the high cost of gene chips; also, the cell cultures lose their synchronization and render meaningless data after a period of time. Secondly, the data are usually unevenly sampled and have many time points missing. Thirdly, most data sets are customarily corrupted by experimental noise, and the resulting uncertainty should be addressed in a stochastic framework. Extensive genome-wide time course microarray experiments have been conducted on organisms such as Saccharomyces cerevisiae (budding yeast), human, and Drosophila melanogaster (fruit fly), and numerous studies have applied and compared spectral analysis methods on these data sets. Biological experiments generally output unequally spaced measurements. 
The major reasons are experimental constraints and event-driven observation. The rate of measurement is directly proportional to the occurrence of events. Therefore, an analysis based on unevenly sampled data is practically desired and technically more challenging. While modern spectral estimation methods for stationary processes with complete and evenly sampled data are well established, methods for irregularly sampled data are less mature. The remainder of this paper is structured as follows. In Section 2, we introduce the three spectral analysis methods, that is, Lomb-Scargle, Capon and MAPES. Hypothesis tests for periodicity detection and the corresponding multiple testing corrections are then presented, followed by the experimental materials. In this section, the Lomb-Scargle periodogram, Capon method, and MAPES approach are introduced and compared in terms of their features and implementation complexity. The detailed derivations are omitted. As a general notational convention, matrices and vectors are represented in bold characters, while scalars are denoted in regular fonts. The deployment of the Fourier transform and the traditional periodogram relies on evenly sampled data, which are projected on orthogonal sine and cosine harmonics. Uneven sampling ruins this orthogonality; hence Parseval's theorem fails, and there exists a power discrepancy between the time and frequency domains. When analyzing astronomical data, which in general are collected at uncontrollable observation times, Lomb found that a least-squares fit of sinusoids to the data yields a periodogram that naturally accommodates uneven sampling. For evenly sampled data with sampling interval Ts, the highest frequency, namely the Nyquist frequency, is 1/(2Ts); subsequent work proved that a meaningful frequency limit can also be defined for unevenly sampled data. The number of probing frequencies and the corresponding frequency grid are chosen accordingly. Notice further that the spectra on the front and rear halves of the frequency grid are symmetric since the microarray experiments output real values. The Lomb-Scargle periodogram represents an efficient solution for estimating the spectra of unevenly sampled data sets. 
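For concreteness, the classical Lomb-Scargle periodogram can be sketched in plain Python. This is an illustrative implementation (not the paper's code), normalized by the sample variance; the per-frequency phase shift tau restores the orthogonality of the sine and cosine terms that uneven sampling destroys:

```python
import math


def lomb_scargle(t, x, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data.

    t, x: observation times and values; freqs: frequencies (cycles per time
    unit). Returns the periodogram normalized by the sample variance, so that
    under white Gaussian noise each ordinate is roughly Exp(1) distributed.
    """
    n = len(t)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    xc = [v - mean for v in x]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # phase shift tau makes the sine/cosine components orthogonal
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w) if w != 0 else 0.0
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        ct = sum(xi * c for xi, c in zip(xc, cs))
        st = sum(xi * s for xi, s in zip(xc, sn))
        cc = sum(c * c for c in cs)
        ss = sum(s * s for s in sn)
        power.append(0.5 * (ct * ct / cc + st * st / ss) / var)
    return power
```

On an irregular time grid, a noiseless sinusoid of frequency f produces a clear peak at f, even though no two samples are equally spaced.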
Our simulation results also verify its superior performance for biological data with small sample size and various unevenly sampled patterns. The Capon method represents a modern power spectral estimation technique that yields better spectral resolution compared with the traditional periodogram. Note that we have not included a scaling factor in this spectrum estimate; however, the absence of this scaling factor does not affect the periodicity analysis for the genes, and therefore we neglect it. Recently, the Capon method has been updated to cope with the presence of irregular samples. The Capon method is slightly more computationally complex than the Lomb-Scargle periodogram, and it usually achieves a better performance in terms of resolution provided that there are sufficient samples. However, for highly corrupted biological data with small sample size, this is not true. Irregular sampling can be treated as a case of missing data as long as the sampling time tags share a greatest common divisor. This constraint is satisfied in most biological experiments and published data sets. The missing-data amplitude and phase estimation (MAPES) method is a nonparametric approach that estimates the missing samples and the spectrum jointly. Its derivation defines the data vector, the residual vector and its covariance matrix, and a set of auxiliary matrices from which the estimates for the spectral magnitude and phase are obtained; the detailed equations are omitted here. Finally, the MAPES power spectral density estimator is expressed in terms of these estimates. Based on the obtained power spectral density, each gene is to be classified as either a cyclic gene or a noncyclic one. The null hypothesis is usually formed to assume that the measurements are generated by a Gaussian noise stochastic process. A rejection of the null hypothesis can be based on a Fisher-type test. 
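As one concrete instance of such a test, Fisher's exact g-test on the largest periodogram ordinate can be computed as follows. This is a sketch of the standard test under the Gaussian white-noise null, not of any Lomb-Scargle-specific variant used in the paper:

```python
import math


def fisher_g_test(power):
    """Exact p-value for Fisher's g-statistic on a periodogram.

    power: periodogram ordinates at the probing frequencies. Under the
    Gaussian white-noise null, with g = max(power) / sum(power), the exact
    p-value is
        P(G > g) = sum_{j=1..floor(1/g)} (-1)^(j-1) C(n, j) (1 - j*g)^(n-1).
    """
    n = len(power)
    g = max(power) / sum(power)
    p = 0.0
    for j in range(1, int(1.0 / g) + 1):
        term = math.comb(n, j) * (1.0 - j * g) ** (n - 1)
        p += term if j % 2 == 1 else -term
    # clamp against floating-point drift
    return min(max(p, 0.0), 1.0)
```

A perfectly flat periodogram gives the least extreme g = 1/n and a p-value of 1, while a single dominant peak drives the p-value toward zero.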
For the Lomb-Scargle periodogram, an exact null distribution of the test statistic is available under the Gaussian noise hypothesis. In order to prevent the false positives from overwhelming the true positives, the multiple testing correction proposed in [26] is performed, where the numerator of the false discovery rate is an estimate of the number of false positives. Since periodic genes generally occupy only a small portion of all genes, this correction is essential. The in silico experiments are first performed on the Saccharomyces cerevisiae (budding yeast) data set, on which the Lomb-Scargle, Capon, and MAPES methods are compared. Then we proceed to analyze the Drosophila melanogaster (fruit fly) data set. Our Saccharomyces cerevisiae (budding yeast) data set was reported by Spellman et al., and the performance of the three schemes is evaluated on it. The literature has provided prior knowledge about the yeast cell cycle genes: Spellman et al. enumerated genes verified to be cell cycle regulated as well as genes corroborated not to be involved in the cell cycle. The comparison procedure is as follows: based on the given data set, the three schemes are run to preserve a prespecified number of genes. These genes are marked as cell cycle genes and are compared with two control gene sets, from which the number of positives is counted. If a preserved gene also exists in the gene set which has been verified to be cell cycle regulated, this hit is counted as a true positive. On the other hand, if the preserved gene appears in the gene set which has been corroborated to be not involved in the cell cycle, this hit is counted as a false positive. Notice that since we expect the non-cell cycle genes to be the majority of all measured genes, but the verified non-cell cycle genes are only a small portion of all genes, the false positives from verified non-cell cycle genes provide only a reference rather than significant knowledge of the false positives. Because the three algorithms perform similarly for all four data sets, only simulation outcomes for cdc15 are presented here to exemplify the general results. 
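The multiple testing correction can be illustrated with the standard Benjamini-Hochberg step-up procedure. This is an assumption made for illustration; the exact false-positive estimator used in [26] may differ:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the (sorted) indices of hypotheses rejected while controlling
    the false discovery rate at level alpha.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    # find the largest rank whose p-value sits under the BH line
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])
```

Because only a small fraction of genes is expected to be periodic, controlling the FDR rather than the per-test error rate keeps the reported gene list from being dominated by false positives.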
The cdc15 data set contains 24 time points, and the in silico results based on it are illustrated in the corresponding figures. To test the algorithm performance on highly corrupted data, two further in silico experiments are performed. Firstly, one third of all measurements is randomly set to be missing, and the results are organized accordingly. Above all, the Lomb-Scargle scheme always identifies the largest number of cell cycle genes that have been verified in previous biological experiments. Due to its simplicity, we recommend the use of this simplest method. Drosophila melanogaster (fruit fly) is selected as our research target because it is a well-studied, relatively simple organism with a short generation time and only 4 pairs of chromosomes. In addition, 75% of human diseases have their counterparts in fruit fly, and 50% of fruit fly proteins have their mammalian analogs. The in silico experiments are performed on the fruit fly data set published by Arbeitman et al. In Arbeitman's experiments, 75 sequential sampling points were observed, starting right after fertilization and through embryonic, larval, pupal, and early days of adulthood. The time series data during the embryonic stage are analyzed. The embryonic stage gives us insight into the developmental process, that is, how the fruit fly grows from a zygote to a complex organism with cell specialization. The embryonic data takes the instant of egg lay as the time origin, and 30 time points were sampled during this stage. The top 149 genes with the smallest significance values are reported in the supplementary materials. In order to measure a valid sample, the cell culture has to be synchronized; in other words, all cells within the culture should be homogeneous in all aspects, for example, cell size, DNA, RNA, protein, and other cellular contents, and should also mimic the unperturbed cell cycle. Cooper argued that such perfect synchronization of Saccharomyces cerevisiae cultures is difficult to attain in practice. 
Lomb-Scargle and Capon methods are computationally efficient, while MAPES involves extensive matrix calculations and an iterative expectation maximization (EM) step. Our in silico experiments revealed that the simplest method, Lomb-Scargle, outperforms the more sophisticated Capon and MAPES. Compared with the other two, the Lomb-Scargle method is able to identify more published cyclic genes. This discrepancy between methods is mainly attributed to the data features, such as the small sample size, the large proportion of missing samples, and samples highly corrupted by noise. In addition, the computational complexity incurred by MAPES for achieving high resolution is not justifiable in the context of gene microarray data. Thus, the computationally simpler methods are a better fit for small sample size scenarios. Three of the most representative spectral analysis methods, namely the Lomb-Scargle, Capon, and missing-data amplitude and phase estimation (MAPES) methods, are compared in terms of their performance for detecting periodically expressed genes in Drosophila melanogaster experiments. A list of 149 genes is identified as periodically expressed; their relation to the underlying biological processes is yet to be validated. Our future work also includes the development of a comprehensive time-frequency analysis framework for time series microarray data. The small sample size represents another great challenge. Besides, a cross-species study is also desired to examine the relations between fruit fly and Homo sapiens genes. The computational results also provide novel insights into the reported data."}
+{"text": "The immune response to viral infection is a temporal process, represented by a dynamic and complex network of gene and protein interactions. Here, we present a reverse engineering strategy aimed at capturing the temporal evolution of the underlying Gene Regulatory Networks (GRN). The proposed approach will be an enabling step towards comprehending the dynamic behavior of gene regulation circuitry and mapping the network structure transitions in response to pathogen stimuli. We applied the Time Varying Dynamic Bayesian Network (TV-DBN) method for reconstructing the gene regulatory interactions based on time series gene expression data for the mouse C57BL/6J inbred strain after infection with influenza A H1N1 (PR8) virus. Initially, 3500 differentially expressed genes were clustered with the use of the k-means algorithm. Next, the successive-in-time GRNs were built over the expression profiles of cluster centroids. Finally, the identified GRNs were examined with several topological metrics and available protein-protein and protein-DNA interaction data, transcription factor and KEGG pathway data. Our results elucidate the potential of the TV-DBN approach in providing valuable insights into the temporal rewiring of the lung transcriptome in response to H1N1 virus. Gene Regulatory Networks (GRNs) depict the functioning circuitry in organisms at the gene level and represent an abstract mapping of the more complicated biochemical network which includes other components such as proteins, metabolites, etc. Understanding GRNs can provide new ideas for treating complex diseases and offer novel candidate drug targets. It is now well established that the study of biological complexity has shifted from the gene level to interaction networks, and this shift from components to associated interactions has gained increasing interest in network biology. A commonly accepted top-down approach is to reverse engineer GRNs from experimental data generated by microarray technology. 
Early computational approaches for inferring GRNs from gene expression data employed classical methods. Boolean network modeling considers the gene expression to be in a binary state (either switched on or off) and displays, via a Boolean function, the impact of other genes on a specific target gene. Nevertheless, such discrete models have limitations. Recently, several techniques have been developed for the mathematical modeling of the dynamics of gene-gene interactions from time series expression data, such as differential equation based models and statistical models. Time Varying Dynamic Bayesian Networks (TV-DBNs) have previously been applied to time series data from D. melanogaster. The TV-DBNs offer the ability to overcome limitations of other approaches, like the structure learning algorithms for Dynamic Bayesian Networks, which assume a time-invariant network structure. Our study focuses on depicting the temporal dynamics of the lung transcriptome after perturbation of the biological system by an infection with influenza A virus. Intensive research has already been performed in analyzing the viral virulence factors and genetic host factors contributing to disease development and outcome. One important aspect of our research was to bring together clustering and inferring networks from time series data. From the computational point of view, the number of estimated relationships in the network is significantly reduced by defining relationships on the cluster level, thus reducing model complexity. Summarizing, the present reverse engineering approach consists of four steps: (1) data selection, (2) clustering for obtaining centroids, (3) parameter tuning and generation of Time Varying Dynamic Bayesian Networks based on the time series experimental expression profiles of cluster centroids and (4) evaluation of the resulting networks with respect to topological measures as well as with available biological knowledge. C57BL/6J mice were infected with a mouse-adapted influenza A virus (PR8), RNA was prepared from whole lungs and processed for hybridization on Agilent 4 × 44 k arrays. 
Three replicates, from three individually infected mice, were taken for each time point after infection and from three mock-infected mice (day 0). All experiments in mice were approved by an external committee and performed according to the national guidelines of the German animal welfare law (last amended by Article 20 of the Act of 9 December 2010, BGBl. I p. 1934). The protocol used in these experiments has been reviewed and approved by an ethics committee, the 'Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit, Oldenburg, Germany', according to the German animal welfare law (Permit Number: 33.9.42502-04-051/09). Preprocessing steps of the raw data comprised background correction and quantile normalization. Subsequently, we used the GEDI toolbox to inspect the global temporal patterns of the data. Clustering and gene network inference methods are usually developed independently. However, it is widely accepted that deep relationships exist between the two, and their implementation in a unified manner overcomes the limitations posed by each method. A challenging task in gene network reconstruction is that the number of genes is very large; hence network modeling based on a limited amount of data becomes too complex. The general opinion is that the amount of data required for GRN modeling increases approximately logarithmically with the number of genes. In particular, we applied the k-means clustering algorithm to the data with the cluster number ranging between 10 and 80. We selected this range so that the resulting cluster number is both indicative enough of the size of our dataset and not so large as to cause over-fitting, which leads to poor predictive power. We employed the Dunn index, a performance measure used for comparing different clustering results, in order to check the range of cluster numbers that gives dense and well separated clusters. 
This index is defined as the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance. As the intra-cluster distance, the sum of all distances to the respective centroid was calculated, while the inter-cluster distance was defined as the distance between centroids. According to the internal criterion of the index, clusters with high intra-cluster similarity and low inter-cluster similarity are more desirable. The maximal Dunn index score values were observed between 19 and 36 clusters, as can be seen in the corresponding figure. A Time Varying Dynamic Bayesian Network (TV-DBN) is a model of stochastic temporal processes based on Bayesian networks. Given a set of time series of the form {X_1, ..., X_T}, where t is a time in the time series and X_t is a vector of the values of p variables at time t, a TV-DBN models the relations as X_t = A^t X_{t-1} + noise, where A^t ∈ R^{p×p} is a matrix of coefficients that relate the values at time t-1 to those at time t. The non-zero elements of A^t form the edge set of the network for time t. The structure is estimated for each node i (i = 1...p) at each time point t* (t* = 1...T). In our experiments, each cluster was a variable of the model and its centroid gave the time series values. Thus, the resulting networks relate the expression levels of all clusters at the previous time point to the expression levels of each cluster at each time point. In order to calculate the network structures, it is assumed that they are sparse and vary smoothly across time; therefore successive networks are likely to share common edges. The problem of estimating the networks is decomposed into smaller, atomic optimizations, one for each node i at each time point t*: the i-th row of A^t* minimizes a kernel-weighted squared-error loss plus an ℓ1-regularization term, where λ is the parameter of the ℓ1-regularization term controlling the number of non-zero entries in the estimated A^t*, the observation weights are given by a Gaussian RBF kernel function with kernel bandwidth h, and the optimization is transformed further by scaling the covariates and response variables by these weights. Each entry of the row is then updated in turn while holding all other entries fixed. 
The kernel bandwidth h affects the contribution of temporally distant observations. A high value results in all observations contributing equally to each time point, while a small value narrows the effect to only the immediately previous time point. For our experiments, we selected h so that the weighting of observations 2 days away from each time point is higher than exp(-1). The optimization is then solved using the shooting algorithm, which iteratively updates one coefficient at a time. The ℓ1-regularization parameter λ affects the sparsity of the resulting networks and controls the tradeoff between the data fitting and the model complexity. In order to set the appropriate value of λ, we employed the Bayesian Information Criterion (BIC); based on the BIC, λ was set to 0.1. An implementation of the estimation algorithm was created in the Python programming language, using the NumPy and SciPy libraries. The current study proposes a systems biology approach to analyze the dynamic behavior of the lung transcriptome to H1N1 infection from stimulus-response data from perturbation experiments. This system can be regarded as a specific stimulus-induced perturbed biological system. In particular, we present an implementation of Time Varying Dynamic Bayesian Networks on time series gene expression data of the murine C57BL/6J inbred strain after infection with H1N1 (PR8) virus. Our reverse engineering approach combines clustering techniques and network inference methods in order to map the dynamic gene regulatory kinships occurring at various time points after infection, thus displaying the response of the lung transcriptome after an environmental stimulus. However, the low time resolution of the data imposed significant constraints on analysis and modeling. Therefore, we performed our analysis by defining the regulatory effects on the cluster level in order to achieve some kind of dimensionality reduction. 
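The per-node optimization solved by the shooting algorithm can be sketched as a kernel-weighted lasso, updating one coefficient of a row of A^t* at a time while holding the others fixed. This is a pure-Python illustration of the technique, not the NumPy/SciPy implementation used in the study:

```python
def soft_threshold(z, g):
    """Soft-thresholding operator arising from the l1 penalty."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0


def shooting_lasso(X, y, w, lam, iters=200):
    """Weighted lasso via the shooting (coordinate descent) algorithm.

    Minimizes sum_t w[t] * (y[t] - X[t].a)^2 + lam * ||a||_1.
    X: rows of lagged expression vectors, y: target expression values,
    w: per-observation kernel weights (e.g. from a Gaussian RBF kernel).
    """
    n, p = len(X), len(X[0])
    a = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with coefficient j excluded
            r = [y[t] - sum(a[k] * X[t][k] for k in range(p) if k != j)
                 for t in range(n)]
            num = 2.0 * sum(w[t] * X[t][j] * r[t] for t in range(n))
            den = 2.0 * sum(w[t] * X[t][j] ** 2 for t in range(n))
            a[j] = soft_threshold(num, lam) / den if den else 0.0
    return a
```

With a small λ the recovered row is close to the generating coefficients, and irrelevant parents are driven to (near) zero, which is exactly what yields a sparse edge set per time point.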
The resulting five TV-DBNs, each one representing the GRN at a specific time point (day p.i.), were evaluated with topological metrics as well as with available interactome data. Also, we checked whether known gene-to-gene relationships could be retrieved from our cluster based approach. The first goal in our analysis was to explore the topological characteristics of the five TV-DBNs. Thus, we conducted local topology analysis in order to identify hub or bottleneck clusters/nodes that could serve as the key regulators at every time point. For this purpose we used the Hubba server and calculated the hubs and bottlenecks of the networks. We characterized the top outdegree clusters as the significant 'regulators', whereas the top indegree clusters were considered the significant 'regulatee' clusters. As seen, the majority of outdegree clusters are immune response related in terms of KEGG pathways. Next, we examined the networks against known protein-protein and protein-DNA interactions in order to display the ability of the TV-DBN approach in monitoring the dynamic presence or absence of these interactions over the time course. For this purpose, we downloaded the mouse datasets from the InnateDB database and selected all interaction groups with members included in our dataset. Furthermore, we accumulated transcription factor (TF) data from the TFCat database, a curated catalogue of mouse and human TFs; examples in our networks include Irf7 in cluster 17, Irf1 in cluster 29 and Bmi1 in cluster 33. A representative example is cluster 17, which includes, in addition to Irf7, many other interferon-induced genes like Ifit1, Ifit2, Ifit3 and Ifi44, and interacts bidirectionally with cluster 9, which encompasses a great proportion of interferon-induced genes like Ifi205, Tgtp, Igtp, Irgm, Ifih1 and Isg20. This observation is consistent with the established role of Irf7 as an important protective host response during infection. Irf7 induces the α- and β-interferons, which, in turn, regulate the expression of the interferon-induced genes. Another example is cluster 33, which includes Atf3 and regulates, in all time shifts except for day 1, cluster 18 which contains Ifng. Other studies have shown that Atf3 is recruited to transactivate the Ifng promoter during early Th1 differentiation. 
Our method reconstructs networks that represent the regulatory effect of a set A of co-expressed genes (the regulator) over another set B of co-expressed genes (the regulatees) at a specific time point. On the gene level, we expect to find the regulators of a gene belonging to cluster B in the gene pool of cluster A. Thus, moving forward in our analysis, we checked whether the TV-DBN approach can recover known gene-to-gene interactions from the derived cluster relationships, and we reveal the dynamics of these interactions by displaying the exact time points of their occurrence. One example is the RIG-I-like receptor signaling pathway. A foreign RNA is recognized by a family of cytosolic RNA helicases termed RIG-I-like receptors (RLRs). The RLR proteins include Rig-I, Mda5, and Lgp2, which recognize viral nucleic acids and recruit specific intracellular adaptor proteins to initiate signaling pathways that lead to the synthesis of type I interferon and other inflammatory cytokines, which are important for eliminating viruses. The TV-DBN recovered the 'regulator-regulatee' roles on the gene level. In particular, 25 genes (out of the 70 included in the pathway) are included in our dataset, and TV-DBN managed to successfully recover all known interactions that are represented in the KEGG database. For example, the TV-DBN algorithm captured the interactions between Ddx58 (cluster 10) and Isg15 (cluster 17), between Ddx58 (cluster 10) and Trim25 (cluster 32), between Irf7 (cluster 17) and Ifna2 (cluster 21), Ifna4 (cluster 34), Ifnab (cluster 19), Ifna12 (cluster 21) and Ifnb1 (cluster 32), and between Mapk8 (cluster 27) and Mapk9 (cluster 12) with Tnf (cluster 10). Nevertheless, one should bear in mind that the time spacing between gene expression measurements, as recorded in our present data set, is fairly large in comparison to the real time at which these interactions occur. 
Therefore, the current cluster-based networks provide only a very coarse representation of the regulatory effects, which could be refined by higher time sampling. Our networks explicitly depict the cluster inter-relationships at every time serial snapshot. Another important example is the Toll-like receptor signaling pathway. Toll-like receptors (TLRs) are responsible for detecting microbial pathogens and initiating innate immune responses. Upon recognition of the pathogens, TLRs stimulate the rapid activation of innate immunity and induce the production of proinflammatory cytokines and the upregulation of costimulatory molecules. In particular, interactions between Tlr1 (cluster 15), Tlr2 (cluster 8) and Tlr6 (cluster 14), between Tlr7 (cluster 11) and Myd88 (cluster 29), as well as between Pik3r3 (cluster 33) and Akt3 (cluster 8) are observed until day 3 p.i., whereas interactions between Ifnb1 (cluster 32) and Ifnar2 (cluster 12) and among Stat1 (cluster 9), Cxcl10 (cluster 17) and Cxcl9 (cluster 18) are observed until day 5 p.i. Nlrp3, a member of the NOD-like receptor family, is activated after influenza virus infection. Nlrp3 forms a complex, called the inflammasome, with apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) and caspase-1. The complex of Nlrp3 and ASC is necessary for converting pro-Il1b, pro-Il18 and pro-Il33 into mature cytokines. Il1b and Il18 are potent pro-inflammatory cytokines, and Il33 promotes immune responses mediated by Th2 cells. Our TV-DBNs identified interactions between Mapk3 (cluster 26), Ccl5 (cluster 32) and Tnf (cluster 10) as well as between Mapk8 (cluster 27) and Mapk9 (cluster 12) with Il6 (cluster 24) in the first two days, while the interaction between Casp1 (cluster 14) and Il1b (cluster 32) was traced in days 4 and 5 p.i. 
It is worth mentioning that the recovered known gene-gene relationships of our cluster-based approach can offer biologists novel hypotheses about the involvement of other genes whose functional role is still unknown, yet which belong to the same clusters where the gene-gene interactions were detected. Finally, we zoomed into the dynamics of the NOD-like receptor signaling pathway, where 18 out of the 58 members are included in our dataset. Recently, it was shown that activation of this pathway after influenza infection depends on caspase-1. Using the TV-DBN method on large scale expression data after an external perturbation of a biological system, such as an infection of the lung with a virus, our proposed approach contributed towards obtaining a deeper understanding of the dynamic changes at the molecular level. We succeeded in detecting several gene-gene interactions known to be important in the early host response. In the near future, more refined network structures will be provided and hidden aspects of the innate immune system will be revealed upon availability of more densely sampled time series gene expression data. Thus, the dynamically reconstructed GRNs will be available for monitoring H1N1 disease development and outcome. The authors declare that they have no competing interests. KD conceived of the study, implemented the algorithms, did the interpretation of the results and drafted the manuscript. CT and GP implemented the algorithms and drafted the manuscript. CP contributed to the analysis of the raw data and interpretation of the results. EW contributed to the interpretation of the results. KNS designed the flowchart of the computational aspects of the study and co-ordinated the implementation of the algorithms. KS contributed to the writing of the manuscript and interpretation of results. AB conceived of the study, participated in its design and co-ordination. All authors read and approved the final manuscript. Gene members of 35 clusters. 
List of gene members for the 35 clusters (with Entrez gene IDs and a short description per gene). Biological Process GO enrichment analysis of the 35 clusters: we examined the derived 35 clusters with respect to biological process GO terms using the DAVID Bioinformatics Resources functional annotation tool. PPI/protein-DNA interaction data: we downloaded InnateDB protein-protein interaction (PPI) and protein-DNA interaction data and isolated all interaction groups with members included in our dataset. Transcription factors: we downloaded all known and candidate transcription factors (TFs) from the TFCat database; this table displays all TFs included in our dataset and the cluster in which each is located."}
+{"text": "Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computational tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulated data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as the R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. The method was also applied to expression time series from the malaria mosquito (Anopheles gambiae) and the fruit fly (Drosophila melanogaster). 
To further test the utility of the DTW-S in uncovering novel biological features, we studied time shift patterns in human and rhesus macaque brain development in two distinct regions: prefrontal cortex and cerebellum. For this, we used a published expression dataset containing time series measured in the dorsolateral prefrontal cortex of 23 humans and 26 rhesus macaques, and in the cerebellum of 22 humans and 24 rhesus macaques. Cluster1 genes showing similar expression patterns in cortex and cerebellum (Cl1-group1) were enriched in white matter ([HT] p = 0.007) and annotated by GO as \"cellular lipid metabolic process\" ([HT] p = 0.007). By contrast, Cluster1 genes showing different expression patterns in cortex and cerebellum (Cl1-group2) were enriched in gray matter ([HT] p = 0.0004) and annotated by GO as \"nervous system development\" ([HT] p = 0.05) and mature oligodendrocytes ([HT] p = 0.03), while genes showing different expression profiles (Cl2-group1) were enriched in gray matter ([HT] p = 0.006). Thus, our results revealed an interesting biological phenomenon: within one species, ontogenetic profiles are shared between the prefrontal cortex and cerebellum for genes expressed in white matter, but distinct for genes expressed in gray matter. Importantly, however, the time shift between the human and rhesus macaque ontogenetic profiles is perfectly synchronized for both white and gray matter genes. On an organismal level this observation might not be surprising. Changes in the rate of ontogenesis might be expected to operate on the brain as a whole, leading to a synchronized delay in white and gray matter development in humans compared to rhesus macaques. Our results confirm that, on the gene expression level, such synchronization can indeed be observed. Overall, time shift measurements correlated positively between the prefrontal cortex and cerebellum. Comparisons of developmental patterns across closely related species are playing an increasingly important role in extracting meaningful information from biological time series. 
By modifying the dynamic time warping algorithm (DTW), we have designed an effective tool for time series alignment (DTW-S). Our simulation results show that this method is effective in calculating the time shift between two time series, even when the proportion of noise is 20-30% of the total variance. Furthermore, this method performs well for expression profiles containing both recurrent and non-recurrent changes, and can estimate variation in the amplitude and direction of the time shift. When we applied our method to a published gene expression dataset of human, chimpanzee and rhesus macaque brain development and maturation, we obtained robust and reproducible time shift estimates consistent with previous observations. Applying our method to a gene expression dataset of human and rhesus macaque brain development and aging, we found that genes showing a synchronized time shift between the species, in the prefrontal cortex and cerebellum, do not always follow the same expression profiles in the two brain regions. Notably, genes showing both a synchronized time shift between human and macaque ontogenetic trajectories and synchronized expression patterns in the prefrontal cortex and cerebellum were preferentially expressed in brain white matter. By contrast, genes showing a synchronized time shift between human and macaque ontogenetic trajectories, but different expression patterns in the prefrontal cortex and cerebellum, were preferentially expressed in brain gray matter. The development of the DTW-S algorithm, freely available as the R package \"TimeShift\", should facilitate the application of this approach to further studies. Taken together, these two examples demonstrate that the combination of gene expression time series profiles with ontogenetic time shift estimates provides additional information revealing the biological properties of the investigated system. 
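The alignment core that DTW-S builds on is the standard dynamic time warping recursion. As a minimal illustrative sketch (not the DTW-S implementation itself, which adds interpolated time points and significance testing), using absolute difference as the local cost:

```python
import numpy as np

def dtw_align(x, y):
    """Classic dynamic time warping: minimal cumulative alignment cost and
    the corresponding warping path between two non-empty series."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])  # local distance
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match
                                 cost[i - 1, j],      # stretch y
                                 cost[i, j - 1])      # stretch x
    # Backtrack from the end to recover the optimal warping path
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return cost[n, m], path[::-1]
```

A time shift estimate can then be read off the path as the offset between aligned indices at each time point; DTW-S additionally asks how often a shift of that size arises in simulated series to assign it a significance.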
The DTW-S algorithm is freely available as the R package \"TimeShift\". YY came up with the general idea and developed the algorithm and applications; SN wrote the code for the DTW-S algorithm; AGHX developed the R package; YPPC, MV, MS and PK supervised the analyses; LT provided helpful suggestions; YY, MS and PK wrote the manuscript. All authors read and approved the final manuscript. YPPC, MV, MS and PK jointly supervised this study. Supplementary figures: includes all additional figures mentioned in this paper."}
+{"text": "Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic cycle (IDC). However, the cell population gradually loses synchrony during the experiment. As a result, the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making it easier to analyze and interpret the data. We hypothesize that reconstructed gene expression patterns represent better temporally resolved expression profiles that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC: protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite and schizont developmental stages of the P. falciparum IDC. In our theoretical framework, a few protein kinases have high probability rankings, and could potentially be involved in regulating these developmental transitions. By analyzing publicly available P. falciparum microarray data sets, several protein kinases are predicted to play a significant role in the P. falciparum IDC. Earlier experiments have indeed confirmed that several of these kinases are involved in this process. 
Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. Malaria is caused by Plasmodium species, of which P. falciparum is responsible for the majority of human fatalities. The disease is transmitted when an infected mosquito bites a person and injects sporozoites that migrate to and develop in the liver before merozoites are released into the bloodstream and invade red blood cells (RBCs). Let {f_i(\u2113)} be the intrinsic gene expression pattern of protein i over one complete life span, where \u2113 denotes the cell age of the infected RBC (iRBC) in hours. Since f_i(\u2113) represents the common pattern shared by individual iRBCs, the expression profile of an individual iRBC is assumed to be f_i(\u2113re), where \u2113re denotes the cell age rescaled according to the normalized life span. Let f_S(t) denote the number of fast-growing iRBCs which reach the end of their life span at time t, and let f_R(t) be the number of RBCs which are infected by these iRBCs at time t. We assume that f_R(t) is proportional to f_S(t): f_R(t) = a_f f_S(t), where the invasion factor a_f stands for the average number of fresh RBCs that will be infected by one schizont after the invasion period. Due to the diversity of growth rates, a few iRBCs can reach the late schizont stage early; as a result, those fast-growing iRBCs can infect additional fresh RBCs. The numbers of iRBCs that start and end their life span at time t are denoted R(t) and f_S(t), respectively. Given the probability density function of the normalized life span, f_S(t) can be derived from (5), and substituting (6) into (4) yields the expression (7) for f_R(t). Let N(t) denote the total number of iRBCs at time t. 
N(t) consists of 3 parts: the late schizonts that infect fresh RBCs during the infection period and have not yet burst at time t, the first generation of iRBCs (infected by late schizonts around the infection period) that have not yet reached the end of their life span at time t, and the second generation of iRBCs (infected by fast-growing iRBCs) that have not yet reached the end of their life span at time t. Let S(t) stand for the number of late schizonts that burst at time t, let R(t) denote the number of RBCs infected by late schizonts at time t, and let f_R(t) be the number of RBCs which are infected by fast-growing iRBCs at time t. The expressions for R(t) and f_R(t) are given by (1) and (7), respectively; the expression for S(t) is derived in (25). Let N(\u2113re, t) stand for the number of iRBCs that reach the rescaled cell age \u2113re at time t. In other words, for a given time t, N(\u2113re, t) indicates the distribution of iRBCs over a complete life span, and N(t) equals the sum of N(\u2113re, t) over all \u2113re. N(\u2113re, t) can be expanded from (8) and derived from (10); by substituting the expression (7) for f_R(t) into (11), N(\u2113re, t) can be written as in (12). Let e_i(t) denote the observed expression level of protein i at time t. As discussed earlier, N(\u2113re, t) denotes the distribution of iRBCs over the rescaled cell age \u2113re. Therefore, the observed expression level e_i(t) can be written as an integral over one complete life span of the iRBCs. The gene expression levels obtained in microarray experiments are aggregates across many individual iRBCs; this superposition across iRBCs is modeled by means of a linear system. We represent the continuous functions f_i and N as series of discrete points {f_i(1), f_i(2), ..., f_i(T)} and {N(1, t), N(2, t), ..., N(T, t)}, respectively, so that e_i(t) can be approximated as the sum e_i(t) \u2248 N(1, t) f_i(1) + N(2, t) f_i(2) + ... + N(T, t) f_i(T), and the resulting observed expression level e_i(t) is a series of discrete values. 
In microarray experiments, gene expression levels are measured at a series of discrete time points; in the dataset HB3, for example, the expression levels are measured every hour over a period of 48 hours. Therefore, a linear system Ax = b can be derived based on (14). The observation matrix A can be calculated by means of equation (12): the element of A at row t and column \u2113re denotes the number of iRBCs that reach the rescaled cell age \u2113re at time point t. The constant vector b stands for the gene expression levels observed in the microarray experiment, and the unknown vector x is the intrinsic gene expression pattern of an individual cell. For each protein i, the observed expression level e_i is modeled as the superposition of the expression levels f_i of individual iRBCs, so x can be found by solving the discrete linear inverse problem (15). To solve (15), we minimize an objective function containing the squared error (Ax - b)\u22a4(Ax - b). The intrinsic gene expression pattern of iRBCs is assumed to be a smooth curve, so we also include the squared differences between adjacent entries of x as a smoothness term, and we require x \u2265 0 because expression levels are positive. We solve the resulting problem (16) numerically by means of the function fmincon in the Optimization Toolbox of Matlab. As we discussed earlier, an increased gene expression level before a cell stage transition is regarded as a sign that the corresponding protein kinase is involved in that stage transition. We denote the likelihood that protein kinase i is involved in regulating the stage transition s by L_i(s), and let T_s stand for the time point when stage transition s occurs. The expression for the likelihood L_i(s) is derived in the following. 
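The smoothed, non-negative least-squares step can be illustrated outside Matlab as well. Below is a minimal Python sketch (the paper itself uses fmincon); it folds the smoothness penalty into an augmented non-negative least-squares problem, with `lam` a hypothetical regularization weight:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(A, b, lam=1.0):
    """Recover x >= 0 minimizing ||Ax - b||^2 + lam * ||Dx||^2, where D is
    the first-difference operator that penalizes non-smooth solutions."""
    T = A.shape[1]
    # First-difference matrix: (Dx)[k] = x[k + 1] - x[k]
    D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]
    # Fold the smoothness penalty into an augmented least-squares problem,
    # then solve under the non-negativity constraint
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    b_aug = np.concatenate([b, np.zeros(T - 1)])
    x, _ = nnls(A_aug, b_aug)
    return x
```

With `lam = 0` and an identity observation matrix the blur-free pattern is returned unchanged; larger `lam` trades data fit for smoothness, mimicking the smooth-curve assumption on the intrinsic pattern.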
Protein kinases are often regulated at translational and post-translational levels; these may also play critical roles in regulating parasite development during these stages and could provide new opportunities for antimalarial drug development. Interestingly, in the trophozoite-schizont and schizont-ring analyses, a number of protein kinases previously established to be essential, and in some cases implicated in directly impacting the transition, emerge with the highest probability rankings. Indeed, the results of our analyses are supported by previously published experimental data confirming the involvement of several protein kinases in regulating parasite biology at or between developmental stage transitions in the IDC. Our results indicate that experimentally investigating the function of other putative protein kinases not previously studied but ranked highly in our analysis may provide new insights into P. falciparum biology. This study proposes a new methodology to reconstruct intrinsic expression patterns from microarray data. We derive a linear system that relates the microarray data to the expression patterns. By solving the corresponding linear inverse problem, the expression patterns are reconstructed. The experiments conducted on synthetic data suggest that the proposed method can reliably reconstruct the expression pattern, even though both signal noise and missing data points contaminate the microarray data. The authors declare that they have no competing interests. Z.W. and J.D. prepared the first draft of the manuscript. J.N. proposed the initial idea, and helped with the data analysis and interpretation. Z.W., J.D., and J.C. conducted the theoretical analysis in this study. The numerical experiments were carried out by Z.W. 
All authors read and approved the final manuscript. Protein kinases prioritized in terms of their likelihood of being involved in the stage transition from ring to trophozoite. Protein kinases prioritized in terms of their likelihood of being involved in the stage transition from trophozoite to schizont. Protein kinases prioritized in terms of their likelihood of being involved in the stage transition from schizont to ring."}
+{"text": "The aneurysm clip impact-compression model of spinal cord injury (SCI) is a standard injury model in animals that closely mimics the primary mechanism of most human injuries: acute impact and persisting compression. Its histo-pathological and behavioural outcomes are extensively similar to human SCI. To understand the distinct molecular events underlying this injury model we analyzed global mRNA abundance changes during the acute, subacute and chronic stages of a moderate to severe injury to the rat spinal cord.Time-series expression analyses resulted in clustering of the majority of deregulated transcripts into eight statistically significant expression profiles. Systematic application of Gene Ontology (GO) enrichment pathway analysis allowed inference of biological processes participating in SCI pathology. Temporal analysis identified events specific to and common between acute, subacute and chronic time-points. Processes common to all phases of injury include blood coagulation, cellular extravasation, leukocyte cell-cell adhesion, the integrin-mediated signaling pathway, cytokine production and secretion, neutrophil chemotaxis, phagocytosis, response to hypoxia and reactive oxygen species, angiogenesis, apoptosis, inflammatory processes and ossification. Importantly, various elements of adaptive and induced innate immune responses span, not only the acute and subacute phases, but also persist throughout the chronic phase of SCI. Induced innate responses, such as Toll-like receptor signaling, are more active during the acute phase but persist throughout the chronic phase. 
However, adaptive immune response processes such as B and T cell activation, proliferation, and migration, T cell differentiation, B and T cell receptor-mediated signaling, and B cell- and immunoglobulin-mediated immune responses become more significant during the chronic phase. This analysis showed that, surprisingly, the diverse series of molecular events that occur in the acute and subacute stages persist into the chronic stage of SCI. The strong agreement between our results and previous findings suggests that our analytical approach will be useful in revealing other biological processes and genes contributing to SCI pathology. Human spinal cord injury (SCI), often the result of both impact and varying degrees of compression, is initially a primary mechanical tissue and cell injury, but further develops into a cascade of complex secondary damage. Amongst the injury models are the weight drop and clip compression models. Various SCI injury models have been characterized by examining the primary injury and the secondary injuries (permeability, ischemia, edema, apoptosis, glutamate excitotoxicity, inflammation, demyelination, axonal degeneration, reactive gliosis, and scar tissue formation) to the spinal cord tissue using low- and high-resolution microscopy and immunohistochemical methods. Our lab has successfully used the clip compression injury model to injure the rat spinal cord at the thoracic level with consistent and reliable results; both acute and chronic SCI in rats have been characterized using this model. Analysis and filtering of the resulting file of 31,042 ProbeSets revealed that 10,791 ProbeSet IDs had no annotations, i.e. no Entrez IDs or official gene symbols, and these were flagged out. This reduced the number of workable ProbeSet IDs to 20,251. In addition, there were duplicate or multiple ProbeSet IDs which represented a single gene. 
Conversely, there were ProbeSet IDs with multiple annotations (EntrezID/Gene Symbol) due to sequence identity across more than one gene segment in the genome. This issue could not be easily resolved as the level of uniqueness of the oligonucleotide sequence is not high enough to allow annotation to one gene exclusively. This feature requires manual curation of the data based on Affymetrix instructions to use the latest annotation, which is also the most relevant. Taking the above two features into consideration results in 14,324 gene symbols on the GeneChip RG230 2.0 array. The resulting data file still contains the ProbeSet IDs that have \u201cLOC\u201d or \u201cRGD\u201d identifiers instead of actual gene symbols. These identifiers are applied to genes that are less well characterized and usually belong to similar or orthologous proteins in other species. They may also belong to non-coding regions of the genome. Software platforms developed for GO enrichment and pathway analysis rarely map the LOC and RGD identifiers. In the original Affymetrix GeneChip 230 2.0 array annotation file, the number of LOC and RGD annotated ProbeSets sets are 1163 and 1135, respectively. The same issue of duplicate/multiple entries also applies to these ProbeSets, hence the numbers of \u201cLOC\u201d and \u201cRGD\u201d identifiers in the final output file with 14,324 entries were less: 939 for the LOC and 829 for the RGD identifiers. This means that the total number of annotated ProbeSets in the Affymetrix GeneChip Rat Genome 230 2.0 array annotation file that were mapped to known gene candidates was 12,557, which is equivalent to 62.4% and 71.2% of the total number of genes annotated and listed in the Rat Genome Database (RGD) and European Bioinformatics Institute (EBI) association files, respectively.We used a divisive hierarchical clustering algorithm (DIANA) to identify the strongest trends within the dataset, in all pair-wise comparisons (see Methods). 
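One common way to handle the unannotated and duplicated ProbeSets described above is to drop rows without a gene symbol and collapse multiple ProbeSets per gene to a single value, e.g. the median. A pandas sketch with hypothetical IDs and fold change values:

```python
import pandas as pd

# Hypothetical expression table: one row per ProbeSet, annotated with a gene symbol
df = pd.DataFrame({
    "probeset": ["1367452_at", "1367453_at", "1367453_x_at", "1398912_at"],
    "symbol":   ["Tnf",        "Il6",        "Il6",          None],
    "log2_fc":  [2.1,          1.6,          1.8,            3.0],
})

# Drop unannotated ProbeSets (no gene symbol), then collapse multiple
# ProbeSets per gene into one value using the median
annotated = df.dropna(subset=["symbol"])
per_gene = annotated.groupby("symbol")["log2_fc"].median()
print(per_gene)
```

The row with no symbol plays the role of the flagged-out ProbeSets; the two Il6 rows illustrate the duplicate-ProbeSet case.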
The results were visualized using the Heatplus package of BioConductor. Data normalization and expression/signal value determination resulted in a list of all 31,099 ProbeSets, their fold change values relative to sham (log2 scale), and associated ANOVA t test p-values across the time points. Volcano plots of the corresponding fold change values against transformed (\u2212log10) p-values were generated for every time point. Thus, filtering the data at higher fold change values automatically targets transcripts with smaller t test p-values. We also examined the number of ProbeSets with marginal ANOVA p-values. To explore our data at the gene level, additional analysis and filtering was performed on the resulting file of 31,042 ProbeSets as mentioned earlier. In order to finalize the gene set data at the different time-points for functional analysis, those transcripts with ANOVA t test p-values \u2265 0.05 were removed from the initial list and the resulting data were analyzed using STEM, so that fold change values for genes with multiple ProbeSets are averaged based on the median values. We next examined the nature of deregulated transcripts at different time-points relative to each other. As our data were collected at different time-points, we performed time-series expression profile clustering to search for common temporal expression patterns. To allow clustering at a reasonable number of possible model profiles, the \u201cmodel profiles\u201d parameter of the \u201cSTEM clustering method\u201d was set to 50 and 2 was selected as the \u201cmaximum unit change between time points\u201d. To facilitate interpretation of our data in the context of previous microarray studies, we used a cut-off of 1.5 fold change (up and down), as has been previously reported. 
To simplify the graphical presentation of the data, fold changes in expression values for all genes associated with only the statistically significant profiles were averaged and plotted against the post-injury observation time points. Some profiles show a (log2 scale) fold change reduction in transcript levels on day 3 compared to day 1. For profiles 44 and 41, this bi-phasic pattern of gene expression is further followed by an escalation of gene expression, which peaks at day 7 and stabilizes from day 14 onward. The second cluster only includes profile 6, which is essentially the mirror of profile 44 and comprises down-regulated genes, with a surprising but more complex pattern of gene expression, most notably during the 24\u201372\u00a0hours post-injury. In summary, the following conclusions can be drawn from the cluster analysis of transcripts, both at the ProbeSet and gene level, following clip-compression injury of the spinal cord in rats:\u2013\u2003Major molecular events after introduction of clip-compression injury occur immediately and up to 72\u00a0hours post-injury.\u2013\u2003For many transcripts a bi-phasic pattern of gene expression is observed, possibly due to switching mechanisms acting between day 1 and day 3, a shift in the cellular origin of deregulated transcripts, or the type of response elicited, resulting in chronic deregulation of many genes. Therefore, for many transcripts, the late up- or down-regulations seem to be distinct from the early response.\u2013\u2003The early events seem to stabilize for most transcripts by 1\u00a0week post-injury, i.e. no more dramatic global changes in the average gene expression are observed and the levels of expression remain relatively constant. 
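The combined significance and fold-change filter used throughout (ANOVA t test p \u2264 0.05 together with a \u2265 1.5-fold cut-off) can be sketched in a few lines of NumPy; the arrays below are hypothetical:

```python
import numpy as np

# Hypothetical per-transcript statistics: log2 fold changes relative to sham
# and ANOVA t test p-values
log2_fc = np.array([0.2, 1.1, -0.9, 2.3, -0.1])
p_vals  = np.array([0.30, 0.01, 0.04, 0.001, 0.60])

# Keep transcripts with p <= 0.05 and at least 1.5-fold change in either
# direction; 1.5-fold on the linear scale is log2(1.5), about 0.585
mask = (p_vals <= 0.05) & (np.abs(log2_fc) >= np.log2(1.5))
deregulated = np.flatnonzero(mask)
print(deregulated)
```

Because small p-values and large fold changes co-occur in volcano plots, raising the fold-change cut-off tends to select the low-p-value transcripts automatically, as the text notes.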
We found that about 70-75% of deregulated transcripts were annotated for all three domains of GO in reference to the RGD association file, whereas the association file from EBI only annotated 55-65% (data not shown). This implies that a minimum of 25-30% of significantly deregulated transcripts are not annotated in any gene ontology association file and thus are not considered for analysis regardless of the type of software platform used to perform GO enrichment analysis. Therefore, due to its more extensive annotation coverage, GO enrichment analysis in this study was performed in reference to the RGD association file. Gene Ontology (GO) enrichment analysis was preferred as the method of choice for functional analysis of the list of deregulated genes as it is based on a controlled vocabulary of terms at all three domains of \u201cBiological Process\u201d (BP), \u201cMolecular Function\u201d (MF) and \u201cCellular Compartment\u201d (CC). Initially, gene association files from RGD or EBI were analyzed for the number of rat genes that are annotated at each of the three domains and compared with the list of significantly (ANOVA t test p\u2009\u2264\u20090.05) deregulated genes (fold change \u2265 1.0 and 1.5) at each time point. Transcripts deregulated across all time-points were considered and only one missing value was permitted. Thus, deregulated transcripts at 1\u20134 fold change values in at least one time point were separately subjected to GO Biological Process (BP) enrichment analysis, and the number of GO terms was plotted as a function of fold change in expression, which also significantly reduces the number of deregulated transcripts included in the analysis (data not shown). However, examining the number of terms at lower p-values remarkably reduced the number of GO terms, although similar trends across different fold change values were observed. In the GO tree hierarchy, the terms Biological Process, Molecular Function, and Cellular Component are at level 1. 
Therefore, more general parent terms are at the top of the hierarchy with lower GO level values, and higher GO level values are assigned to more specific child terms. Unless more than one parent is assigned, the GO level can be considered a constant value for each term. As GO level values refer to the position of the enriched terms in the GO hierarchy tree, they can define the specificity or granularity of a given GO term and thus are a valuable parameter for term prioritization and for inferring biological meaning from GO enrichment analysis. Temporal analysis of gene expression may imply analysis of gene lists in either a time-series and/or a time-point fashion. Although STEM has been designed for time-series expression profiling prior to GO enrichment, it can also be used for time-point GO enrichment analysis. In the time-series approach, clustering by STEM produces significant expression profiles followed by enrichment analysis of the list of genes in each expression profile. The complication with time-series analysis is that not all transcripts have accepted ANOVA t test p-values (e.g. p \u2264 0.05) and thus the insignificant expression values must be removed from the original data prior to STEM analysis. To resolve the issue of many transcripts with missing values across all time-points, STEM offers the option to set the missing value parameter. However, depending on the selected value, this may ultimately reduce the total number of deregulated genes included in the functional analysis. In the time-point approach, however, the input file is the list of genes that belong to a specific time-point, in which case the number of missing values is not an issue. In this study, the time-point GO enrichment analysis was employed to discover common up- and down-regulated biological processes across the time-points as well as processes possibly unique to each time-point. The output GO terms were used for inter-relationship analysis and/or visualized as a scatter plot or interactive graph using REViGO. 
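The over-representation test underlying GO enrichment tools is, in its simplest form, a one-sided hypergeometric test (DAVID itself reports a modified Fisher exact score, so this is an illustrative simplification):

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_term, n_genome, n_selected, n_hits):
    """One-sided over-representation p-value: the probability of seeing at
    least n_hits genes from a GO term of size n_term when n_selected genes
    are drawn from a genome of n_genome annotated genes."""
    return hypergeom.sf(n_hits - 1, n_genome, n_term, n_selected)

# A term covering the whole genome can never be enriched (p = 1), while a
# small term fully recovered in a small gene list gives a tiny p-value
print(go_enrichment_p(10, 10, 5, 5))
print(go_enrichment_p(5, 100, 10, 5))
```

The dependence on the size of the annotated background is why the choice between the RGD and EBI association files, discussed above, directly affects which terms reach significance.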
Based on the results obtained from analysis of the effects of the fold change, p-value cutoff and GO level criteria, the pool of deregulated transcripts throughout all time-points was analyzed by setting the GO level at different values with the intention of obtaining more specific categories. The enrichment analysis at a p-value cutoff of 10^-4 on transcripts with a minimum of 1.5 fold change in expression (ANOVA t test p\u2009\u2264\u20090.05) resulted in a significant reduction of the number of enriched GO terms to 329 at GO level 3 and higher. Within this collection of enriched GO terms, there are 267 terms whose GO levels are 5 and higher. The 329 and 267 terms, along with their p-values, were further summarized independently by the REViGO reduction analysis tool, which condenses the GO description by removing redundant terms. We next analyzed the temporal pattern of each GO term in a time-point fashion in order to examine the order of events after SCI. To accomplish this, we made multiple comparisons of the enriched GO terms obtained for deregulated transcripts (ANOVA t test p-value\u2009\u2264\u20090.05) at individual time-points. Both positive and negative regulation of apoptosis are significantly enriched, which indicates that the injured cells struggle for survival. However, activation of apoptosis seems to be more predominant than its suppression, as the positive regulation of apoptosis becomes activated earlier than the negative regulation and its peak of activity is on day 1 post-injury, although it stays continuously up-regulated up to 1\u00a0week post-injury. In contrast, the only significant activity of negative regulation of apoptosis (p\u2009\u2264\u20090.00001) is on day 3. Pathway databases such as KEGG and WikiPathways are amongst the currently available platforms for pathway analysis. 
Functional analysis of microarray data is a challenging task, as the result of the initial analysis is only the fold change values representing deregulation in the expression of thousands of transcripts. There are different approaches to analyzing the results of a microarray experiment in order to make efficient biological inferences. Various platforms share a common feature in that they perform an overrepresentation analysis on the list of deregulated genes and statistically test whether the pool of up- and/or down-regulated transcripts is significantly enriched compared to the list of genes previously annotated to be part of a defined Biological Process, Molecular Function or Cellular Component, as is the case with GO enrichment, or to a certain metabolic or signaling pathway, as in pathway analysis platforms. Various pathway analyses are currently in practice for microarray data analysis and there are different approaches to accomplish this; KEGG pathway, WikiPathways and related resources are amongst the currently available platforms for pathway analysis. A recent analysis showed that among these pathway databases there is a low level of consistency, comprehensiveness and compatibility. An advantage in using a controlled vocabulary of gene function such as GO on the SCI microarray data comes from the challenging nature of such analysis, due to the inherent complexity of the spinal cord tissue and also the type and level of injury itself. Spinal cord tissue is composed of an array of highly specialized neurons, astrocytes, oligodendrocytes, microglia, and pericytes. The cord also contains another specialized and complex structure whose permeability is highly compromised upon injury. The supply of blood and nutrients is crucial for normal functioning of neural cells. It is well-documented that an early and progressive development of hemorrhage is a common feature of all experimental models of SCI, and this includes the clip-compression model.
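The overrepresentation test described above is typically computed as a hypergeometric (one-sided Fisher) tail probability. The sketch below, with made-up gene counts, is our illustration of that standard test, not code from the study.

```python
from math import comb

def hypergeom_tail(k, n, K, N):
    """P(X >= k): probability of drawing at least k category genes when
    sampling n genes from a universe of N genes, K of which are
    annotated to the category (e.g. a GO term)."""
    total = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / total

# Toy example: 40 of 10000 annotated genes belong to a term;
# a list of 200 deregulated genes contains 8 of them.
p = hypergeom_tail(8, 200, 40, 10000)
assert p < 0.05  # the term would be called enriched at p <= 0.05
```

With many terms tested at once, the resulting p-values would then be corrected for multiple testing (e.g. Bonferroni, as set in the STEM parameters described later).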
The results of our microarray data analysis clearly confirm the outcome of the primary impact and persistent compression injury to the spinal cord: disruption of the vasculature and hemorrhage are the major and initial results of the primary injury. Our data indicate that representative genes in the blood coagulation cascade are up-regulated (Figure A). The GO enrichment analysis identified another 30 coagulation-related genes whose transcripts were up-regulated throughout the course of the study. Amongst these were regulatory proteins with anticoagulant properties such as tissue factor pathway inhibitor 2 (TFPI), which is released by endothelial cells and binds factor VIIa complexes, inhibiting them from generating factor Xa. TFPI thereby regulates the extrinsic coagulation pathway. Additionally, we found that thrombomodulin (THBD) transcripts were elevated upon SCI up to 2 weeks post-injury. THBD binds thrombin and promotes its interaction with protein C. The resulting complex inactivates factors VIIIa and Va. Elevated levels of these regulatory proteins indicate the importance of endogenous signaling mechanisms that limit excessive spreading of clot formation. A serious side effect of hemorrhage is the infiltration of blood components such as hemoglobin and fibrinogen into the spinal cord tissue, which have been shown to be toxic to CNS tissue. Along with the blood coagulation cascade, a concomitant increase in the complement activation system is observed, whose temporal pattern is not the same as that of blood coagulation but rather develops in a more delayed fashion. The blood coagulation cascade peak of activity is on day 7 post-injury, but it stays up-regulated until 8 weeks. Complement activation, however, is turned on with a lag time in the first few days, with activity increasing at later time points in the experiment. C1S, for example, is down-regulated one day after injury; however, it returns to normal values by day 3 and is further up-regulated by day 7, remaining at higher than normal levels even at day 56 post-injury (data not shown). C1S catalyzes the cleavage of C4 into C4a and C4b, leading to formation of the active C4b2a complex (C3 convertase), whose main function is to cleave parental C3 into C3a and C3b. As shown, the mRNA levels of C1qa, C1qb, C1qc, Cfd and Cr1l are increased relative to sham un-injured animals. The transcript level of Factor H (CFH), a negative regulator of the alternative pathway of complement activation, is decreased after injury but fluctuates back to higher than normal levels by day 7 post-injury. The elevated level of CFH in our study is in agreement with previous reports that complement inhibitor proteins such as factor H were expressed at elevated levels on neurons and oligodendrocytes after SCI in rats. Using inhibitor approaches, both the classical and lectin pathways of complement activation have been shown to participate in SCI pathology. The decrease in local blood-flow leads to ischemic-hypoxic damage to the spinal cord tissue. Ischemia generally leads to a decrease in cytoplasmic levels of ATP, cellular swelling through malfunctioning of Na/K ATPases, and the mitochondrial membrane permeability transition. Hif-1a induction and activation under hypoxic conditions induces NF-kB and its inhibitor at the same time. The inflammatory response to injury is initiated within minutes after SCI. The synthesis of IL-1B in neurons was shown to be dependent on the NALP1 inflammasome. Toll-like receptor signaling is initiated after pattern recognition receptors (PRRs) detect pathogen-associated molecular patterns (PAMPs) or danger-associated molecular patterns (DAMPs), which are endogenously generated from tissue and cellular damage.
It is now thought that for induction of the innate immune response, two signals are required: the first from Toll-like receptors (TLRs) and the second from Nod-like receptors (NLRs). NLRs are responsible for the processing of pro-interleukin-1B to IL-1B and of pro-IL-18 to IL-18. Both IL-1 and IL-18, produced during the first phase of inflammation mediated through the two-signal model of TLRs and NLRs, can induce the cellular and humoral modes of the adaptive immune response. IL-18 affects natural killer (NK) cells, monocytes, dendritic cells, T cells, and B cells, thereby regulating not only the innate but also the adaptive immune responses. It has been shown that autoantibodies are generated and detected in patients with chronic SCI. Microarray expression profiling was used to investigate the temporal changes in the transcriptome of the injured spinal cord in rats. Using GO enrichment analysis we show that it is possible to analyze the fold change in the expression of thousands of genes and obtain an overall picture of the processes involved. Through thorough analysis of the detected expression profiles, significant biological processes and events such as the response to hypoxia and to reactive oxygen species were identified as early events after the injury. We found that both the induced innate and the adaptive immune responses are strongly and significantly up-regulated, each with relevant sub-categories and deregulated genes. The induced innate immune response may be classified as an acute to subacute type of response, whereas the adaptive immune response and antibody production can be categorized as a late response. The biphasic expression pattern identified in many immune-response-related genes implies that both resident spinal cord cell types and infiltrating blood cells may participate in cytokine and chemokine production and the general inflammatory response.
Our approach of analyzing the fold change in the mRNA levels of many deregulated genes using microarray technology indicates that, with careful and systematic analysis of the data, it is possible to reliably delineate the processes involved in injury and recovery and to establish hypotheses for further analysis and intervention strategies. All experimental protocols were approved by the animal care committee of the University Health Network in accordance with the policies established in the guide to the care and use of experimental animals prepared by the Canadian Council of Animal Care. Female Wistar rats were used for this study. Injuries by the aneurysm clip method were performed as previously described. Rats were sacrificed at 1, 3, 7, 14 and 56 days after injury, and a 5 mm sample of the spinal cord containing the epicenter of the injured tissue was extracted for RNA analysis. Total RNA from each individual sample was extracted using TRIzol reagent. RNeasy mini spin columns were used for purification of total RNA molecules larger than 200 bp, which excludes smaller RNAs such as miRNAs. RNA quality was assessed with a 2100 Bioanalyzer (Agilent). cRNA for microarray hybridization was prepared from 5 µg of starting RNA using the protocol supplied by Affymetrix. cRNA was hybridized to GeneChip Rat Genome 230 2.0 arrays at the Centre for Applied Genomics (The Hospital for Sick Children, Toronto, Canada). Primary data sets were saved in a MIAME-compliant format and uploaded to GEO (series GSE45006). Data analysis was performed in R with the Affy package (v1.12.2) in BioConductor. The resulting gene set data with fold change and associated ANOVA t test p-values were analyzed by the Short Time-Series Expression Miner (STEM) (discussed below), which allows the temporal expression patterns to be examined and extracted from the pool of up- and down-regulated transcripts across all time-points. Alternatively, individual time-point data were analyzed separately for up- and down-regulated genes, protein classes and signaling pathways. Both approaches were combined with functional analysis of transcripts using gene ontology (GO) enrichment. We used the non-parametric clustering algorithm of STEM, which is specifically designed to analyze short time-series expression data. Deregulated transcripts with ANOVA t test p-values ≤ 0.05 and fold change values > 1.5 were analyzed by the GO enrichment analysis module of STEM. Temporal analysis of the list of deregulated genes was performed using both time-series and time-point approaches. Due to the more comprehensive gene coverage of the RGD annotation data source file, the enrichment analysis was performed with reference to the RGD association file. For GO analysis of the various expression profiles, we applied the annotations of the "Biological Process" (BP) domain and the minimum expression fold change was set to different values from zero. Other parameters were set as follows: "minimum GO level" to different values from 3 to 20, "minimum number of genes" to 5, and "multiple hypothesis correction method for actual size based enrichment" to Bonferroni. STEM also offers the option to run the GO enrichment analysis at different GO tree levels, which allows limiting the results to more specific terms in the directed acyclic graph (DAG) structure of the gene ontology hierarchy. In this study, the time-point GO enrichment analysis was also employed to discover common up- and down-regulated biological processes across the time-points as well as possible unique processes to each time-point.
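STEM's assignment of each gene's time course to the closest predefined model profile can be illustrated with a small sketch. The profile shapes, gene series and the use of Pearson correlation as the similarity measure are our assumptions for illustration; STEM additionally assesses each profile's significance by permutation, which is omitted here.

```python
# Sketch of STEM-style profile assignment: each gene's time course is
# assigned to the most similar of a set of predefined model profiles.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def assign_profiles(genes, profiles):
    """Map each gene to the profile with highest Pearson correlation."""
    return {g: max(profiles, key=lambda p: pearson(series, profiles[p]))
            for g, series in genes.items()}

# Model profiles over 5 time points (shapes are illustrative only)
profiles = {"up": [0, 1, 2, 3, 4], "down": [0, -1, -2, -3, -4],
            "transient": [0, 2, 4, 2, 0]}
genes = {"geneA": [0.1, 0.9, 2.1, 2.9, 4.2],
         "geneB": [0.0, 1.8, 3.9, 2.2, 0.1]}
assigned = assign_profiles(genes, profiles)
# geneA tracks the "up" profile, geneB the "transient" profile
```

The gene list collected under each significant profile would then be passed to the GO enrichment module, as described in the text.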
The output GO terms were used for inter-relationship analysis and visualization with a Venn diagram tool, and/or visualized as a scatter plot or interactive graph using REViGO. STEM is a statistical technique based on unsupervised clustering to find cluster centroids, followed by assignment of genes using distance classifications, with statistical analysis using enrichment-based techniques. The biological significance of a set of genes can then be assessed by GO enrichment analysis. The authors declare that they have no competing interests. EF and SK designed the experiment and conducted the animal surgery and RNA sample collection. PCB analyzed the microarray data using BioConductor. SSM participated in the gene set data analysis. MC conceived the methodology for gene set data analysis, carried out the data mining, expression profiling, GO enrichment analysis, the follow-up temporal analysis of the enriched terms and drafted the manuscript. MGF supervised all aspects of this work. All authors read and approved the final manuscript. Time-Series Clustering of Microarray Gene Set Data Using STEM. The expression data were loaded onto the STEM platform and distinct temporal expression profiles were generated, which differentiate between real and random patterns. Profiles are numbered from 0 to 49. Each box corresponds to a model expression profile. Significant expression profiles are highlighted in color to represent a statistically significant number of assigned genes; their p-values are ordered from 0 up to 5.0E-3. The model profile is colored black while the gene expression patterns for each gene within the cluster are colored in red. Clusters with similar colors show similar patterns. A zero time point was added to all expression profiles to serve as the control value. Genes are assigned to the most closely matching profile by statistical analysis. The X-axis represents days after injury when sampling was performed and the Y-axis denotes fold increase or decrease in expression on a log2 scale. Every tick mark on the Y-axis corresponds to a one-log2 change in expression relative to sham. The filtering criterion was set to 1.5 fold."}
+{"text": "Understanding the complex regulatory networks underlying development and evolution of multi-cellular organisms is a major problem in biology. Computational models can be used as tools to extract the regulatory structure and dynamics of such networks from gene expression data. This approach is called reverse engineering. It has been successfully applied to many gene networks in various biological systems. However, to reconstitute the structure and non-linear dynamics of a developmental gene network in its spatial context remains a considerable challenge. Here, we address this challenge using a case study: the gap gene network involved in segment determination during early development of Drosophila melanogaster. A major problem for reverse-engineering pattern-forming networks is the significant amount of time and effort required to acquire and quantify spatial gene expression data. We have developed a simplified data processing pipeline that considerably increases the throughput of the method, but results in data of reduced accuracy compared to those previously used for gap gene network inference. We demonstrate that we can infer the correct network structure using our reduced data set, and investigate minimal data requirements for successful reverse engineering. Our results show that timing and position of expression domain boundaries are the crucial features for determining regulatory network structure from data, while it is less important to precisely measure expression levels. Based on this, we define minimal data requirements for gap gene network inference. Our results demonstrate the feasibility of reverse-engineering with much reduced experimental effort. This enables more widespread use of the method in different developmental contexts and organisms. Such systematic application of data-driven models to real-world networks has enormous potential. Only the quantitative investigation of a large number of developmental gene regulatory networks will allow us to discover whether there are rules or regularities governing development and evolution of complex multi-cellular organisms. To better understand multi-cellular organisms we need a better and more systematic understanding of the complex regulatory networks that govern their development and evolution. However, this problem is far from trivial. Regulatory networks involve many factors interacting in a non-linear manner, which makes it difficult to study them without the help of computers. Here, we investigate a computational method, reverse engineering, which allows us to reconstitute real-world regulatory networks in silico. As a case study, we investigate the gap gene network involved in determining the position of body segments during early development of Drosophila. We visualise spatial gap gene expression patterns using in situ hybridisation and microscopy. The resulting embryo images are quantified to measure the position of expression domain boundaries. We then use computational models as tools to extract regulatory information from the data. We investigate what kind, and how much, data are required for successful network inference. Our results reveal that much less effort is required for reverse-engineering networks than previously thought. This opens the possibility of investigating a large number of developmental networks using this approach, which in turn will lead to a more general understanding of the rules and principles underlying development in animals and plants. Elucidating the regulatory structure and dynamics of gene networks is a major objective in biology.
The inference of regulatory networks from gene expression data is known as reverse engineering. Our research focuses on how developmental gene regulatory networks produce spatial patterns in multi-cellular organisms, and how these patterns evolve through changes in the underlying structure of the network. Our case study is the gap gene system of Drosophila melanogaster (reviewed in the literature), comprising the trunk gap genes hunchback (hb), Krüppel (Kr), giant (gt) and knirps (kni), which all encode transcription factors. Gap genes are involved in establishing the segmented body plan of the animal. They are active during a very early period of Drosophila development, called the blastoderm stage, which occurs before the onset of gastrulation. At this stage, the embryo consists of a multi-nucleate syncytium allowing transcription factors to diffuse through the tissue. Gap genes are expressed in broad, overlapping domains along the embryo's major, or antero-posterior (A–P), axis. They are regulated by long-range gradients of transcription factors encoded by the maternal co-ordinate genes bicoid (bcd), hunchback (hb), and caudal (cad), and are repressed by the terminal gap genes tailless (tll) and huckebein (hkb) in the pole regions of the embryo. Maternal co-ordinate and gap genes form the first two tiers of the segmentation gene hierarchy in Drosophila. Together they regulate pair-rule and segment-polarity genes, the latter forming a molecular pre-pattern that leads to morphological segmentation at later stages of development.
Previous reverse-engineering studies of the gap gene network were based on quantitative expression data obtained by visualising the distribution of gap gene products. Stained embryos were imaged using confocal laser-scanning microscopy, and the resulting expression profiles were quantified using a processing pipeline that includes image segmentation to identify nuclei, time classification, removal of non-specific background staining, data registration to remove embryo-to-embryo variability, and data integration during the earliest stages of expression. These previous reverse-engineering studies have yielded many new insights into gap gene regulation, which would have been difficult to obtain by experimental approaches alone. An early pioneering study predicted a co-operative effect between the maternal factors Bcd and Hb on the regulation of gap gene expression. It would be extremely interesting to apply the gene circuit method to other developmental systems. In our view, reverse engineering has tremendous potential for the study of gene regulatory networks in development and evolution. For instance, gene circuits could be used to reconstruct homologous developmental regulatory networks across a range of species, to compare their regulatory structure and dynamical behaviour. Despite this potential, the application of dynamic, non-linear reverse-engineering approaches beyond gap genes in Drosophila has been very limited. The main reason for this, we suspect, is the following: collection of high-quality data sets, such as the spatio-temporal profiling of gap genes described above, is costly both in terms of time and resources. It is clearly the bottleneck of the approach. Protocols based on immunofluorescence require antibodies, which are difficult and expensive to obtain. Confocal microscopy is time-consuming and laborious, since a large number of embryo images need to be scanned. Moreover, while protocols for data acquisition and quantification work efficiently in Drosophila, their application to less well-established experimental models is not trivial. In particular, it is often difficult to adapt fluorescent staining protocols to non-model species. Thus, in order to make the gene circuit method more widely applicable, and hence useful for the study of developmental gene regulatory networks, it is imperative that we simplify the method. We address an important question which applies to reverse-engineering approaches in general: how much, and what kind of, data are required to successfully infer a gene regulatory network? Answering this question in the context of the gap genes will allow us to minimise the cost of data acquisition and processing. This, in turn, will decrease the barrier for applying reverse-engineering methodology to other developmental systems, many of which are similar in kind and complexity to the gap gene network. The quality of a gene circuit model depends directly on the quality of the data it was fit to. What matters most in this regard is the timing and position of expression domain boundaries with respect to each other. The relative level of expression in each domain is less crucial. For instance, early gap gene circuit models did not capture the formation of the abdominal kni domain correctly, and showed a related defect in the hb domain. This defect is no longer present in more recent models based on data with the abdominal kni domain positioned accurately, while still only measuring relative levels of protein concentration. We demonstrate how mRNA expression data derived from a colorimetric (enzymatic) protocol for in situ hybridisation can be used to infer the regulatory structure and dynamics of the gap gene network. We compare our results with those obtained in previous studies based on protein expression data, and show that they predict equivalent regulatory mechanisms that are consistent with experimental evidence.
In addition, we show that our simplified data set can be reduced even further while still yielding correct predictions. In this way, we define a set of minimal requirements for the successful inference of gap gene regulatory network structure and dynamics. These minimal requirements suggest that the adapted gene circuit method can be applied to a variety of developmental systems with a reasonable amount of effort. Such wider application of reverse-engineering methods will enable us to carry out systematic and comparative analyses of developmental gene regulatory networks. In this study, we present a simplified reverse-engineering protocol and apply it to a new, quantitative data set of gap gene mRNA expression in Drosophila. cDNA clones were ordered from the Drosophila Genomics Resource Center and used to make riboprobes labelled with DIG and/or FITC. Wild-type blastoderm-stage Drosophila embryos were collected after 4 hrs of egg laying and stained with a colorimetric in situ hybridisation protocol adapted from published methods. An overview of image acquisition and processing steps is shown in the accompanying figure. Only laterally oriented embryos were selected for processing. Gene expression patterns were extracted from embryo images as follows. Binary masks covering the whole embryo are calculated using a sequence of image segmentation steps on the DIC image. For each expression boundary, two control points (x0, y0) and (x2, y2) were determined that indicate the beginning and end of the boundary, where staining levels approach background and maximum levels respectively. A middle, third control point was automatically calculated from the other two points by taking the average for x and locating the corresponding expression level y. These control points (x0 to x2, y0 to y2) were used as anchor points for cubic splines with fixed zero-derivatives at their end knots.
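The boundary-fitting step just described can be sketched as follows. The zero end-derivatives, the averaged middle x and the 0-to-1 normalisation follow the text; the middle-knot tangent (a simple secant slope), the `y_at` sampling callback and the toy numbers are our assumptions for illustration.

```python
# Cubic interpolant through three control points (x0,y0), (x1,y1), (x2,y2)
# with zero derivatives fixed at the two end knots, then normalised so
# the fitted boundary runs from 0 (background) to 1 (maximum).
def boundary_spline(x0, y0, x2, y2, y_at):
    x1 = (x0 + x2) / 2.0          # middle control point: average x ...
    y1 = y_at(x1)                 # ... and the expression level there
    xs, ys = [x0, x1, x2], [y0, y1, y2]
    ms = [0.0, (y2 - y0) / (x2 - x0), 0.0]   # zero slope at end knots

    def spline(x):
        i = 0 if x <= x1 else 1   # pick the cubic Hermite segment
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*ys[i] + h10*h*ms[i] + h01*ys[i+1] + h11*h*ms[i+1]

    # normalise: 0 at the starting point, 1 at the end point
    return lambda x: (spline(x) - y0) / (y2 - y0)

# Toy boundary rising from background level 10 to plateau level 90
f = boundary_spline(0.0, 10.0, 1.0, 90.0, y_at=lambda x: 50.0)
assert abs(f(0.0)) < 1e-9 and abs(f(1.0) - 1.0) < 1e-9
```

Because the end-knot derivatives are clamped to zero, the fitted curve flattens out where staining approaches background and maximum levels, matching the intent described in the text.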
Finally, splines were normalised such that the expression level at the starting point was 0 and the expression level at the end point was 1. Integrated time-series of gene expression were prepared as follows. Embryos were staged into separate cleavage cycles based on nuclear density and the number of nuclei in images showing DAPI nuclear counter-stains, and double stains (hb and Kr) were used to verify the relative spatial order of gap gene expression domains (data not shown). The following post-processing steps had to be applied to our data to make them suitable for model fitting and comparison. (3) We scaled our expression data along the time axis by a second-degree spline with a peak of expression during early C14A, to capture the gradual accumulation and degradation (during late C14A) of gap gene mRNA; the scaling factor runs from its starting value (t = 0.0 min), through 1.0 at around T5 (t = 48.0 min), to 0.7 at gastrulation time (t = 71.1 min), which is the final time point. (4) We multiplied our mRNA expression data by a constant factor of 200. This makes the scale of both mRNA and protein data match as closely as possible, and therefore facilitates comparison to models obtained with Drosophila protein data. Fitting models using a weighted least squares (WLS) protocol (see below) requires a weight for each data point indicating its associated variation; each data value v has a corresponding weight. This proportionality of variation with expression level reflects the fact that gap domains (showing high levels of expression) show more variation than those regions of the embryo in which a gene is not expressed.
As our mRNA expression data do not provide such information, weights were created from normalised, integrated mRNA expression data according to a formula that makes each weight proportional to the local expression level. Gene circuits used for model fitting require external inputs based on data for the maternal gradients (Bcd and Cad) as well as the terminal gap genes (Tll and Hkb), i.e. tll and hkb expression patterns over time. Depending on the scenario we wanted to test, expression patterns of the external inputs were either taken from data or replaced by artificial profiles. The concentration of gap gene product a in nucleus i over time t is governed by an ordinary differential equation (ODE) combining regulated synthesis, decay and diffusion: dv_i^a/dt = R_a g(u_i^a) − lambda_a v_i^a + D_a(n) [(v_{i−1}^a − v_i^a) + (v_{i+1}^a − v_i^a)], where n defines the number of previous divisions and g is the sigmoid regulation-expression function applied to the total regulatory input u_i^a. The matrices W and E define the interactions between, respectively, the trunk gap genes themselves, and between the external inputs and the trunk gap genes. The elements of these matrices, w_ab and e_am, are called regulatory weights. These weights define the effect of b on a (or of m on a), which can be (1) positive (activating gene product synthesis), (2) negative (inhibiting synthesis), or (3) (close to) zero (no regulatory interaction). h_a is a threshold parameter representing uniformly distributed maternal factors. During mitosis, gene product synthesis is set to zero. After mitosis follows division, which is instantaneous. At division, gene product concentrations are copied equally to both daughter nuclei. Finally, diffusion is implemented with no-flux boundary conditions. We use gene circuits for model fitting (reverse engineering) as described previously. The models run from the blastoderm stage (t = 0 min), when gap proteins reach detectable levels, to the onset of gastrulation and the end of C14A (t = 71.100 min). We use the same division schedule as in earlier work, covering cleavage cycles C12 (t = −6.200 min) and C13 (t = 10.550 min), using the same temporal scaling scheme as described in the previous section.
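The gene circuit equation described above can be sketched as a right-hand-side function. Parameter names follow the text where possible (W: gap-gap weights, E: external-input weights, h: thresholds); the rate constants R, lambda_ and D are generic placeholders, and the logistic form of the sigmoid is our assumption for illustration.

```python
import math

def sigmoid(u):
    """Regulation-expression function mapping regulatory input to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-u))

def dva_dt(a, i, v, m, W, E, h, R, lambda_, D):
    """Rate of change of gap gene product a in nucleus i.
    v[b][i]: gap gene product b in nucleus i; m[e][i]: external input e.
    Terms: regulated synthesis, linear decay, diffusion to neighbours."""
    u = (sum(W[a][b] * v[b][i] for b in range(len(v)))
         + sum(E[a][e] * m[e][i] for e in range(len(m)))
         + h[a])
    n = len(v[a])
    left = v[a][i - 1] if i > 0 else v[a][i]        # no-flux boundaries
    right = v[a][i + 1] if i < n - 1 else v[a][i]
    return (R[a] * sigmoid(u)
            - lambda_[a] * v[a][i]
            + D[a] * (left + right - 2.0 * v[a][i]))

# Single-nucleus sanity check: no regulators, threshold 0 -> synthesis R/2
assert abs(dva_dt(0, 0, [[0.0]], [], [[0.0]], [[]], [0.0],
                  [2.0], [1.0], [0.0]) - 1.0) < 1e-12
```

In a full gene circuit this right-hand side would be evaluated for every gene and nucleus and handed to an ODE solver, with synthesis switched off during mitosis and concentrations copied to daughter nuclei at division, as described in the text.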
Initial conditions of the external inputs were taken from the data. The time classes of C14A correspond to the following time points: T1, t = 24.225; T2, t = 30.475; T3, t = 36.725; T4, t = 42.975; T5, t = 49.225; T6, t = 55.475; T7, t = 61.725; T8, t = 67.975 min. Optimisation runs were performed at the Barcelona Supercomputing Centre (http://www.bsc.es). Per optimisation run we used 50 processor cores for an average duration of about 7 hours. We follow a reverse engineering protocol as described previously, with T the set of time points at which expression data is available, c(n)N the number of nuclei after n divisions, and the expression of gene a at nucleus i and time t as derived from the experiments. Simulated Annealing requires that candidate solutions have an associated cost (or energy) function that is minimised during the optimisation. We adapt the cost function from previous work: the sum, over genes, nuclei and time points, of the squared differences between model output and data, with each squared residual divided by its weight when weights are used (WLS). In E we fix interactions of Hkb to zero, with the exception of Hkb→hb. Furthermore, we take h_a = −2.5 for all gap genes. Based on previous studies using gap gene circuit models, we fix certain parameters without negatively affecting the quality of the fits. To report the goodness of a fit, we use the root mean square (RMS), defined as the square root of the mean squared residual over all data points. Statistical analysis of parameter estimates was performed as described previously; reverse engineering results in a vector of estimates, and J is the Jacobian matrix of model outputs with respect to these estimates. Image processing made use of ImageJ (http://rsbweb.nih.gov/ij). Intermediate processing steps and domain boundary positions were stored in a MySQL database, with a web interface (SuperFly) developed by the CRG Bioinformatics Core Facility. SuperFly is available online at: http://superfly.crg.es. We used scripts written in Python, Perl and R for the preparation of integrated data sets, for the generation of artificial external inputs, and for analysis of gene circuit models. Code for numerical solution and optimisation of gene circuits by pLSA was written using the GNU Scientific Library (www.gnu.org/software/gsl), the Sundials ODE Solver Library, and Open MPI (www.open-mpi.org).
For numerical integration of ODEs, we use an implicit variable-order, adaptive-stepsize, multi-step method; a band-direct solver calculates the set of equations that is generated at each integration step. Image processing and extraction of expression domain boundaries were performed using a custom-made processing pipeline with a graphical user interface developed in Java. We obtained mRNA expression data for hb, gt, Kr, and kni produced with a colorimetric (enzymatic) in situ hybridisation protocol (columns 1–4 on the left), in comparison to protein expression data for Gt, Kni and the pair-rule protein Even-skipped (Eve) from the FlyEx database. The data cover the period from when gap proteins start to accumulate to the end of C14A, when gastrulation starts. They span the trunk region of the embryo from 35 to 87% A–P position. This region is located slightly more anteriorly and is somewhat smaller than that used previously for protein models. We performed 150 fitting runs each with ordinary least-squares (OLS) and weighted least-squares (WLS) cost functions, fit to our full mRNA data set. Because of potential artefacts caused by overfitting, one cannot simply select those circuits with the lowest residual scores for further analysis. Instead, we inspected all solutions visually to detect obvious patterning defects. The external inputs interact with gap genes in the same manner for both data sets. Apart from these minor differences, however, there is significant agreement between all three sets of gene circuits. This similarity in expression patterns is reflected in the parameter values of our models. Distributions of estimated parameter values for regulatory weights of WLS-mRNA and WLS-protein solutions are shown as scatter plots. These include the repression of hb by Kr, which is in accordance with the experimental literature. Estimated auto-activation weights for Kr and gt are of no functional significance, since auto-regulation is dispensable for correct gap gene expression. We analysed the differences between mRNA- and protein-based gene circuits in detail.
Surprisingly, one of the differences is an improvement: our mRNA circuits predict repression of kni by Kr, Kni not to interact with gt, and Gt to repress hb, while protein-based circuits predict no interaction of Kr on kni, and activation in the other two cases, which poses a problem for the postulated mechanism. Still, our mRNA circuits exhibit correct shifting dynamics of posterior gap gene expression. A regulatory weight is classified as repression (less than −0.005), no interaction (between −0.005 and 0.005), or activation (greater than 0.005). An interaction is weakly determinable if its confidence interval intersects two of these categories but excludes the third. There are two different ways to calculate these confidence intervals: dependent intervals tend to underestimate the extent of the confidence region, while independent intervals have the tendency to overestimate it. Threshold parameters (h) and certain regulatory weights are fixed during optimisation. Under specific conditions, gap gene circuits fit to protein data can yield parameter estimates which are very well determined. To distinguish between these three possibilities, we performed several series of optimisation runs. First, we used the original protein data set with approximated variances that only depend on expression level. This mechanism is recovered extremely robustly: it is present in all selected solutions. In addition, fits with artificial maternal gradients show reduced presence of gt auto-activation.
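The determinability categories described above translate directly into interval logic. The sketch below uses the ±0.005 category boundaries from the text; the function names and the toy intervals are ours.

```python
# Classify an estimated regulatory weight by its confidence interval:
# determinable if the whole interval falls into one category
# (repression < -0.005, no interaction within +/- 0.005, activation > 0.005),
# weakly determinable if it intersects two categories but excludes the
# third, and non-determinable otherwise.
EPS = 0.005

def classify(lo, hi):
    cats = set()
    # which of the three categories does the interval [lo, hi] intersect?
    if lo < -EPS:
        cats.add("repression")
    if hi > EPS:
        cats.add("activation")
    if lo <= EPS and hi >= -EPS:
        cats.add("no interaction")
    if len(cats) == 1:
        return "determinable: " + cats.pop()
    if len(cats) == 2:
        return "weakly determinable"
    return "non-determinable"

assert classify(0.01, 0.09) == "determinable: activation"
assert classify(-0.001, 0.02) == "weakly determinable"
assert classify(-0.1, 0.1) == "non-determinable"
```

Whether dependent or independent confidence intervals are used only changes the `lo` and `hi` bounds fed into such a classification, which is why the two interval types can shift an interaction between the weakly determinable and non-determinable classes.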
The variability in these results can be explained by the fact that auto-activation is not essential for positioning gap domains in gene circuit models. With respect to auto-activation, we observe that all mRNA-based circuits behaved similarly, with the exception of the gene circuits from fits with artificial maternal gradients, an interaction which is likely to be indirect and therefore not supported by experimental evidence. To analyse the presence or absence of the domain shift mechanism, we examined the interaction between hb and Kr; although it does not contribute to gap domain shifts, hb is repressed by Kr in the majority of cases. Interestingly, recovery of this interaction is rescued in those fits in which both maternal and terminal external inputs were replaced by \u2018artificial\u2019 patterns. Earlier studies using reverse-engineering approaches were based on quantitative protein expression data. On the other hand, this type of mRNA data can be acquired and processed within a fraction of the time and effort it takes to obtain high-quality protein-based expression patterns: instead of years, it now takes only a few months to establish quantitative expression profiles for a complete set of genes involved in a developmental process. Our approach avoids having to raise antibodies against regulators, which is both technically challenging and expensive. It uses a robust colorimetric (enzymatic) staining protocol instead of fluorescence. It avoids laborious scanning of embryo images using confocal microscopy (which is not available everywhere). In addition, we show that far fewer embryos need to be processed and stained, and fewer developmental stages need to be represented in the data, than previously thought. Finally, our approach simplifies data processing, further reducing the effort required for data quantification. Does the fact that we still recover the correct gap gene network using mRNA data imply that our method lacks specificity? Would it infer the same network with any kind of data?
The following evidence demonstrates that the answer to these questions is a clear no, and suggests minimal conditions that the expression data must meet for inference to be specific and consistent. First of all, gap gene circuit models fail to correctly predict gap gene expression in the head region of the embryo (anterior of 35% A\u2013P position). In contrast, we have mentioned earlier that accurate measurements of absolute expression levels do not seem to be crucial for correct network inference. The kni domain was placed too far anterior in those data, leading to an exaggerated overlap with the central Kr domain and an artefactual gap between the domains of kni and hb in the posterior of the embryo. Reduced data sets were sufficient to recover a correct regulatory network; this is probably due to the fact that gap domains shift and develop smoothly over time. Our investigation of minimal data requirements for model fitting is of an empirical nature. It should be corroborated and extended in the future by more systematic and rigorous approaches based on methods for optimal experimental design. Most interactions are clearly supported (or excluded) based on experimental evidence. Again, the reverse-engineering method is most powerful when used in conjunction with complementary experimental approaches. But what about systems which have been less well studied? By minimising the amount of quantitative data required for reverse-engineering a developmental gene regulatory network, we have removed a major bottleneck for applying the method more widely. Still, this method is unlikely to be scalable to systems that are orders of magnitude larger than the one studied here. Microscopy and image acquisition remain labour-intensive, and our quantification pipeline still requires a series of manual interventions, such as positioning the splines that are fit to expression boundaries, or time classification of embryos.
It remains a major challenge to fully automate these steps. Therefore, the effort required to quantify hundreds or thousands of spatial gene expression patterns is still considerable, even if robust and fast methods are used. Figure S1: (A) Our data quantification method results in normalised mRNA expression data. All boundaries are approximated by splines, and all domains have the same standard expression level. (B) Relative mRNA concentrations are scaled along the A\u2013P axis to reflect higher expression levels in the middle of the embryo. (C) Relative concentrations are also scaled over time, following the basic biological assumption that mRNA levels are low at the start of C13, peak at C14-T5, and have diminished by C14-T7. (D) To facilitate comparison of the protein and mRNA gene circuits, concentration levels are scaled by a factor of 200. Horizontal axes represent A\u2013P position, where 0% is the anterior pole. Relative mRNA concentrations are in arbitrary units (au). C13 is cleavage cycle 13; T3/7 represent time classes during C14A. Time progresses downwards. Our models only include the trunk region of the embryo, ranging from 35% to 87% (grey background in D). (PDF) Figure S2: Model fitting solutions selected for further analysis. Histograms show root-mean-square (RMS) scores for solutions obtained using OLS (A) and WLS (B) cost functions. White bars constitute the histogram of 150 total runs performed in each setting. Blue bars indicate solutions selected for further analysis: 10 for OLS and 52 for WLS. Note that selection of sub-optimal runs in (A) indicates over-fitting of the data. This effect is greatly reduced in (B). See main text for details. (PDF) Figure S3: OLS mRNA fits: model output versus data. Plots show model output (solid lines) versus expression data (dashed lines). All 10 selected gene circuits fit with OLS to mRNA data are shown. Horizontal axes represent A\u2013P position, where 0% is the anterior pole.
Relative mRNA concentrations are in arbitrary units (au). T1/3/5/7 represent time classes during C14A. Time progresses downwards. (PDF) Figure S4: Data sets based on reduced numbers of boundaries. The first column shows the full data set at three time classes (T1/4/7) during C14A. This reference data set is also represented by dashed lines in the middle and right columns to facilitate comparison. The two other columns show reduced data sets based on 60% (middle) and 20% (right) of randomly selected individual boundaries. Increasing deviations from the full data set can be observed as the number of boundaries is reduced. Horizontal axes represent A\u2013P position, where 0% is the anterior pole. Only the trunk region of the embryo included in our models is shown. Relative mRNA concentrations are in arbitrary units (au). Time progresses downwards. (PDF) Figure S5: Examples of mRNA gene expression data sets with reduced numbers of time classes. The first column shows the full data set, followed by three columns of reduced data. We randomly selected time classes in the range T1\u2013T7 for elimination, which maintains the starting (C13) and end point (T8) of the time series. Random selection of time classes was performed 5 times, resulting in 5\u00d720 optimisation runs for each category. Horizontal axes represent A\u2013P position, where 0% is the anterior pole. Only the trunk region of the embryo included in our models is shown. Relative mRNA concentrations are in arbitrary units (au). (PDF) Figure S6: Measured and artificial maternal inputs. Plots show a comparison of protein-based expression (data) and approximated, artificial expression profiles (art) for the maternal gradients of Bcd and Cad. (A) Quantitative Bcd data (solid) was approximated using a (time-invariant) exponential function (dashed line).
(B) Cad data (solid) was approximated using splines. (PDF) Table S1: Drosophila mRNA vs. protein expression: boundary shifts. This table shows shifts in boundary positions from a starting point at T1 or, failing that, from the earliest appearance of the boundary domain. Numbers are in % A\u2013P position. mRNA expression is shown in black, protein expression in grey. (\u2212) indicates a shift to the anterior, (+) a shift to the posterior. (PDF) Table S2: Drosophila mRNA versus protein expression: domain width. This table shows domain widths for trunk gap genes. Numbers are given in % egg length. mRNA expression is shown in black, protein expression in grey. (PDF) Text S1: Common patterning defects in gap gene circuits. (PDF) Text S2: Drosophila segmentation gene expression. (PDF) Text S3: Genetic interconnectivity matrices. (PDF) Text S4: Dependent and independent parameter confidence intervals. (PDF)"}
+{"text": "Observation of gene expression changes implying gene regulation in repeated time course experiments has become more and more important. However, there is no effective method that can handle this kind of data. For instance, in a clinical/biological progression like inflammatory response or cancer formation, a great number of differentially expressed genes at different time points can be identified through a large-scale microarray approach. For each repeated experiment with different samples, converting the microarray datasets into transactional databases of significant singleton genes at each time point would allow sequential patterns implying gene regulations to be identified. Although traditional sequential pattern mining methods have been successfully proposed and widely applied to interesting topics, such as mining customer purchasing sequences from a transactional database, to our knowledge these methods are not suitable for such biological datasets, because every transaction in the converted database may contain too many items/genes. In this paper, we propose a new algorithm called CTGR-Span to efficiently mine biologically meaningful CTGR-SPs (cross-timepoint gene regulation sequential patterns) even on larger datasets where traditional algorithms are infeasible. The CTGR-Span includes several biologically designed parameters based on the characteristics of gene regulation. We perform an optimal parameter tuning process using a GO enrichment analysis to make the resulting CTGR-SPs more biologically meaningful. The proposed method was evaluated with two publicly available human time course microarray datasets, and it outperformed the traditional methods in terms of execution efficiency. After evaluation against previous literature, the resulting patterns also strongly correlated with the experimental backgrounds of the datasets used in this study.
We postulate that biologists can benefit from our new algorithm, since the patterns implying gene regulations could provide further insight into the mechanisms of novel gene regulations during a biological or clinical progression. The Java source code, program tutorial and other related materials used in this program are available at http://websystem.csie.ncku.edu.tw/CTGR-Span.rar. Over the past decade, studies on time course data have become increasingly important, since most clinical/biological events, such as infection-related chronic/acute inflammatory responses and drug treatment responses, unfold over time. Sequential pattern mining is one of the most important topics in the field of data mining, especially for database systems. The fundamental meaning of a sequential pattern refers to a set of singleton frequent items/differentially expressed genes that are followed by another set of items/differentially expressed genes in the time-stamp-ordered transactions. Therefore, once potential gene regulations occur over a period of time, they can be identified by mining such sequential patterns from a database converted from the dataset. Referring to previous studies, several parental algorithms using different computational designs, such as AprioriAll, SPADE and PrefixSpan, have been proposed; the apriori-like (level-wise) GSP, the pattern-growth-based Prefix-growth, and DELISP build on the downward closure property. When mining closed sequential patterns, the shorter patterns are eliminated from the final resulting patterns; for this purpose, newer algorithms incorporating constraints, such as CTSP, CloSpan and WSpan (which mines weighted sequential patterns from a transactional database), as well as MAGIIC, were then developed. We therefore propose an efficient CTGR-Span with some biologically designed parameters to solve the issue mentioned above by mining CTGR-SPs.
The CTGR-Span ensures that all of the resulting patterns imply gene regulations that take place across different time points during the course of biological observation. The method is an extended and improved version of our previous paper presented at the 2012 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). The most important changes include: first, we designed a new optimal parameter tuning procedure for the proposed algorithm to ideally determine suitable conditions for pattern mining. The procedure has the merit that there is no need to additionally compute the standard deviation of time intervals in a time course dataset. Based on this design, we then compared our method with two representative sequential pattern mining algorithms, namely GSP and PrefixSpan, in execution efficiency and effectiveness. The resulting patterns were validated using a manual literature survey and an automatic Gene Ontology enrichment analysis. The rest of this paper is organized as follows. The proposed method and materials for analysis are described in Methods. In Results and Discussion, we give the experimental results of the proposed method on two time course gene expression datasets. Concluding remarks are given in Conclusions. In this section, we introduce how to efficiently discover CTGR-SPs from a time course microarray dataset through 3 main parts: i) an introduction to the experimental background of the 2 input microarray datasets, ii) how to convert a numeric dataset into a transactional database, and iii) the kernel of the CTGR-Span and its required biologically designed arguments. For GSE6377, McDunn et al. attempted to detect 8,793 transcriptional changes in 11 ventilator-associated pneumonia patients' leukocytes across 10 time points. For the other dataset, GSE11342, Taylor et al.
monitored 22,283 gene expression changes in peripheral blood monocytes of 20 hepatitis C virus infected patients across the first 10 weeks after treatment with Peg-interferon alfa-2b plus ribavirin. We tested the method presented in this paper using the same input datasets as our previous works. In brief, consider an example with genes G1 to G3 over 4 time points TP1 to TP4 with a fixed interval (1 day); the experimental design is performed in 3 patients. The first time point of this example is regarded as a baseline for deriving the significant items at each time point. All of the values are divided by those at the first time point, and the divided values can be presented in a fold-change matrix. Genes exceeding a fold-change threshold are further defined as the significant genes. Suppose that the threshold is set as 1.5; only the eligible significant genes are preserved as new items. If, for instance, significant items for G1, down-regulated G2 and down-regulated G3 occur at the second time point, they will be presented within the same parentheses (transaction). In this example, a set of 3 time-ordered transactions for each patient is called a sequence. Sequential patterns can be mined directly from a transactional database if the data are discrete; the microarray probe/gene expression values therefore need to be discretized into singleton items within every transaction. However, the content of the converted transactional databases will be affected by different threshold settings. In this study, the threshold of GSE6377 is set as 1.03 and the threshold of GSE11342 is set as 1.5, based on the same criteria used for the original datasets. Since the CTGR-Span is designed in a pattern-growth-based manner for mining CTGR-SPs, we will present the kernel procedure and meanwhile show the main differences between the traditional pattern-growth-based methods and ours using a readily understood example.
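The discretization step just described (divide by the baseline time point, keep genes passing a fold-change threshold as up/down-regulated items) can be sketched as follows. The function name is ours, and the symmetric cutoff for down-regulation (fold change <= 1/threshold) is an assumption, since the text only specifies the up-regulation threshold explicitly:

```python
def to_transactions(expr, gene_names, threshold=1.5):
    """Convert one patient's raw expression profile into a sequence of
    transactions of significant up/down-regulated gene items.

    expr: list of per-time-point value lists; expr[0] is the baseline
    time point. Fold change = value / baseline. A gene becomes item
    'G+' when fold change >= threshold and 'G-' when fold change <=
    1/threshold (the symmetric down-regulation cutoff is our
    assumption; names are illustrative, not the paper's code).
    """
    baseline = expr[0]
    sequence = []
    for values in expr[1:]:
        txn = set()  # one transaction per post-baseline time point
        for gene, value, base in zip(gene_names, values, baseline):
            fc = value / base
            if fc >= threshold:
                txn.add(gene + "+")
            elif fc <= 1.0 / threshold:
                txn.add(gene + "-")
        sequence.append(txn)
    return sequence

seq = to_transactions([[1.0, 1.0, 1.0],
                       [2.0, 1.0, 0.5],
                       [1.6, 0.4, 1.0]], ["G1", "G2", "G3"], threshold=1.5)
print(seq)  # [{'G1+', 'G3-'}, {'G1+', 'G2-'}]
```

For the paper's datasets the threshold would be 1.03 (GSE6377) or 1.5 (GSE11342).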
Finally, we present several extra biologically designed parameters for obtaining more biologically meaningful CTGR-SPs. The main strength of the CTGR-Span is that it overcomes the problem that the transactions contain too many items/significant genes. According to our design, it also has several advantages: i) the items within transactions do not need to be sorted in advance, ii) the mining results are not affected by different sorting types, and iii) more meaningful sequential patterns implying gene regulations in biology can be discovered relative to the traditional sequential pattern mining algorithms. A sample database S of sequences containing 4 patients' transactions is shown in the corresponding table, with items denoted Gn+/-. In this example, we set a minimum support (minSupp) of 50%, which means that if an item occurs in at least 2 different individual sequences (each patient has its own sequence), we call it a frequent item and further use it to generate CTGR-SPs in a prefix-projection-based manner. Step 1: Find length-1 CTGR-SPs. After scanning S, the frequent items of length-1 can be identified, since they appear in over one half of the sequences; these 3 frequent items are regarded as the length-1 CTGR-SPs. Step 2: Divide the search space. Each item within the set of length-1 CTGR-SPs is individually considered as a prefix to find its postfixes, which are also frequent in S. Step 3: Find subsets of CTGR-SPs. For each identified prefix, the subsets of CTGR-SPs can be identified in a depth-first-search-based manner in the prefix-projected databases. First, for the proposed method, the prefixes within the length-1 CTGR-SPs are shown in the left-most column. Only the subsequences prefixed with the first occurrence of the prefixes and starting from the next transaction will be presented in the projected databases.
As an example, for the prefix <(G1+)> contained in the sequence <(G1+G4-)1(G3+)2(G2-G3+)4(G5+)5> of patient 2, the postfix <(G3+)2(G2-G3+)4(G5+)5> will be listed in the projected database for mining longer CTGR-SPs. According to the same principle, the sequences in S containing <(G1+)> are projected to form the <(G1+)>-projected database, which consists of 4 candidate postfixes: <(G2-G3+)2(G3+)3>, <(G3+)2(G2-G3+)4(G5+)5>, <(G2-G3+)3> and <(G2-G3+)3>. Then, by scanning the <(G1+)>-projected database once, the length-2 CTGR-SPs having prefix <(G1+)> can be identified, including <(G1+)(G2-)>: 4 (<(G1+)(G2-)> appears 4 times) and <(G1+)(G3+)>: 4. The CTGR-SPs longer than length-2 can be further generated from the current length-2 CTGR-SPs. After constructing their respective projected databases, the <(G1+)(G2-)>-projected database consists of two candidate postfixes: <(G3+)3> and <(G5+)5>. However, both <(G3+)> and <(G5+)> appear only once over the sequences involved in the <(G1+)(G2-)>-projected database, which is lower than the minSupp (50%); hence, further mining of the <(G1+)(G2-)>-projected database is terminated. On the other hand, recursively mining patterns from the <(G1+)(G3+)>-projected database, which contains two candidate postfixes, <(G3+)3> and <(G2-G3+)4(G5+)5>, returns one eligible postfix, forming a length-3 CTGR-SP <(G1+)(G3+)(G3+)>. Finally, according to the same criteria, we can find the remaining CTGR-SPs prefixed with the other frequent items by constructing their corresponding projected databases. To aid understanding of the above 3 steps, here we extend an example shown in Table II of our previous conference paper. One out of four traditional sequential patterns contains the simultaneous items G2- and G3+, which do not imply a gene regulation over a time period but rather a frequent itemset.
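The three-step procedure (find frequent length-1 items, divide the search space by prefix, recursively mine the prefix-projected databases) can be sketched compactly. This is an illustrative simplification, not the authors' implementation: it keeps the restriction that each pattern element is a single item from one time point, but omits the minTSupp, SWS and maxTC parameters:

```python
from collections import defaultdict

def mine(sequences, min_sup):
    """PrefixSpan-style prefix-projection miner, restricted (as in the
    text) to patterns with one item per time point.

    sequences: list of sequences, each a time-ordered list of item-sets
    (one set per transaction). min_sup: absolute sequence count.
    Illustrative sketch only; omits the biological parameters.
    """
    results = []

    def grow(prefix, projected):
        # projected: (sequence_index, start_transaction) pointers
        counts = defaultdict(set)
        for si, start in projected:
            for txn in sequences[si][start:]:
                for item in txn:
                    counts[item].add(si)   # support = distinct sequences
        for item, seqs in counts.items():
            if len(seqs) >= min_sup:
                pattern = prefix + [item]
                results.append(pattern)
                # Project at the FIRST occurrence of `item`, starting
                # from the next transaction (pseudo-projection).
                nxt = []
                for si, start in projected:
                    for pos in range(start, len(sequences[si])):
                        if item in sequences[si][pos]:
                            nxt.append((si, pos + 1))
                            break
                grow(pattern, nxt)

    grow([], [(i, 0) for i in range(len(sequences))])
    return results

patients = [
    [{"G1+"}, {"G2-", "G3+"}],
    [{"G1+"}, {"G3+"}],
    [{"G2-"}],
    [{"G1+"}, {"G2-"}],
]
found = mine(patients, 2)
```

On this toy database the miner returns the three frequent length-1 patterns plus <(G1+)(G2-)> and <(G1+)(G3+)>, and never emits simultaneous items within one pattern element.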
Although the pattern could be disassembled into \"(G1+) \u2192 (G2-)\" and \"(G1+) \u2192 (G3+)\", these overlap with the other explored sequential patterns, including the traditional length-2 sequential patterns <(G1+)(G2-)> and <(G1+)(G3+)>. Therefore, many redundant patterns may be identified by the traditional methods. This thorny problem can be avoided by mining CTGR-SPs, which elucidates why the CTGR-Span is more efficient and useful than the traditional pattern-growth-based methods. After mining all of the sequential patterns, the traditional patterns marked with an asterisk will not be discovered by our proposed method, since they contain simultaneous items at the same time point. As stated above, we have introduced the main differences between the traditional methods and our proposed method. We now describe how to make the patterns more biologically meaningful. In addition to the inherent parameter minSupp used for mining traditional patterns, we additionally introduce 3 parameters to the CTGR-Span: minimum timepoint support (minTSupp), sliding window size (SWS) and maximum time constraint (maxTC), in order to mine more meaningful sequential patterns in gene regulation based on biological characteristics. The fundamental definitions of these parameters were given in Section II, MATERIALS AND METHODS, of our previous conference paper. minTSupp (minimum timepoint support): after converting the input microarray datasets into transactional datasets, thousands of items are contained in each transaction. The average lengths of the transactions in both input datasets, as functions of varying minTSupp, are shown in the corresponding figure. SWS (sliding window size):
Mining sequential patterns implying gene regulations across fixed time points may render the resulting patterns inadequate, because the response times among a set of genes under transcriptional regulation are not identical. The sliding window size (SWS) parameter flexibly allows patterns to contain items derived from the same or different time points. Consider an example extended from the length-1 CTGR-SPs with the SWS set as 1: once the time intervals between the transactions contained in the length-1-projected databases and the prefixes do not exceed 1 (SWS = 1), the transaction-involved items and the prefixes may actually take place at the same time point. In this case, the gene items involved in prime-symbol-marked transactions indicate that they occur with the prefixes at the same time point, even if all of them originate from different time points. maxTC (maximum time constraint): normally, cells need to react quickly to resist adverse environmental changes, so massive short-term gene regulation should be more pronounced within a cellular signalling transduction. In this regard, when setting smaller values of the parameter maxTC, a pattern containing two gene items with a big time gap will not be generated. When maxTC is set as 1, the possible postfixes for generating length-2 CTGR-SPs are only checked up to the transactions marked with a prime symbol. We evaluated the proposed CTGR-Span on two time course gene expression datasets. Because performing the program with different parameter values would yield diverse results, all of the parameters used in this study are tuned according to the biological backgrounds of the datasets. By introducing the tuned parameter values to the CTGR-Span, the resultant CTGR-SPs are then evaluated against previous literature and a GO enrichment analysis to reveal their reliability in biology.
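One way to operationalise the two time parameters just described is sketched below. The interpretation and the function are ours, not the paper's code: items whose timestamps fall within the sliding window of the current group are merged into one effective time point, and an occurrence whose consecutive gap exceeds maxTC is rejected:

```python
def group_and_check(times, sws=0, max_tc=float("inf")):
    """Apply the two time parameters to one occurrence of a pattern.

    times: non-empty, sorted timestamps of the pattern's items.
    Items within `sws` of the first item of the current group count as
    one effective time point (sliding window size); any gap between
    consecutive items larger than `max_tc` (maximum time constraint)
    invalidates the occurrence. Illustrative interpretation only.
    Returns (valid, groups).
    """
    groups, current = [], [times[0]]
    valid = True
    for t_prev, t_next in zip(times, times[1:]):
        if t_next - t_prev > max_tc:    # gap too large: reject occurrence
            valid = False
        if t_next - current[0] <= sws:  # within window: same effective point
            current.append(t_next)
        else:
            groups.append(current)
            current = [t_next]
    groups.append(current)
    return valid, groups

ok, grouped = group_and_check([0, 1, 4, 5], sws=1, max_tc=3)
print(ok, grouped)  # True [[0, 1], [4, 5]]
```

With max_tc=2 instead, the 3-day gap between days 1 and 4 would invalidate the same occurrence.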
Meanwhile, in terms of performance, the execution efficiency of the traditional methods and our proposed method will also be examined in this study. In addition to the inherent parameter minSupp of the traditional methods, we additionally introduced 3 parameters, minTSupp, SWS and maxTC, to the CTGR-Span. However, two questions might arise: how should most biologists set these parameter values, and are these parameters useful for mining gene regulations? In this section, we performed an optimal parameter tuning process to obtain a general rule for setting the parameters without additionally calculating the standard deviations of the time intervals of a dataset in advance. Determining which setting is more suitable involves a trade-off: higher parameter values allow fewer patterns to be mined, but lower parameter values dramatically increase the number of marginal patterns. Both quantity and quality of the resultant patterns need to be taken into account in this work. In the first dataset (GSE6377), McDunn et al. have shown that as the ventilator-associated pneumonia (VAP) patients recovered from critical illness complicated by acute infection, the general trajectory (riboleukogram) converged, consistent with an immune attractor. For GSE11342, Taylor et al. identified 85 significantly up/down-regulated genes involved in the immune response in the peripheral blood monocytes of hepatitis C patients during the first 10 weeks of treatment with Peg-interferon alfa-2b plus ribavirin. We therefore examined whether the CTGR-SPs involving at least two genes under the tuned conditions are relevant to the corresponding biological manifestations (inflammatory response in GSE6377 and immune response in GSE11342).
We focused on the longest CTGR-SPs containing at least two gene items, because the longer patterns not only contain more significant gene items but also carry more information about consecutive gene regulation, according to the original design of the algorithm. The testing results are presented as -log values in the tables. A very high number of patterns might be difficult for most biologists to work with. First of all, if the same significant gene items occur too frequently during a time period, they may be similar to housekeeping genes (HGs); moreover, the significant patterns should occur as frequently as possible in a group of patients. For these two reasons, we tested minTSupp and minSupp first. In spite of these limitations, we could still successfully obtain a suitable common condition for the two datasets when minTSupp and minSupp were set as 100% and 95%, respectively. Once the values of minTSupp and minSupp had been decided, we subsequently tested all possible values of maxTC in both datasets. The maxTC was set from the largest single time interval, 2 days (21-19) in GSE6377 and 28 days (70-42) in GSE11342, up to the values which included most transactions bracketing the maximal time interval, 10 days (21-11) in GSE6377 and 67 days (70-3) in GSE11342. For each dataset, the maxTC was increased by the first minimum time interval, 1 day (1-0) in GSE6377 and 3 days (3-0) in GSE11342, to ensure that all possible conditions would be tested. According to the same criteria mentioned in the above paragraph, there was a suitable common condition for the two datasets when the values of maxTC were set as \u221e days. After fixing minTSupp, minSupp and maxTC, we tested the SWS. The SWS in both datasets was set from 0 up to the values which included most transactions bracketing the maximal time interval, 10 days in GSE6377 and 66 days in GSE11342; the values of SWS were also increased with a fixed interval.
Then, we could successfully observe a suitable common condition when the value of SWS was set as 3 days. The tables also demonstrate that these suitable common conditions depended on neither rule number nor rule length. Incorporating biological domain knowledge into the parameter designs may greatly benefit the discovery of CTGR-SPs with potential gene regulations. Therefore, these optimal parameter values can be considered default settings for most biologists, even those with no prior experience. Finally, we fixed the tuned parameter values and compared the CTGR-Span with traditional sequential pattern mining algorithms such as GSP and PrefixSpan in terms of execution efficiency. To achieve a fair comparison, we performed GSP, PrefixSpan and CTGR-Span with the same parameter settings on both input datasets. The resultant patterns and execution times show that the CTGR-Span needed only several hours even in the worst case, where minSupp was set as 70%. The resulting patterns correlated with the inflammatory response in VAP patients (GSE6377) and the immune response after drug treatment in hepatitis C patients (GSE11342). In this section, we attempted to further address whether these patterns contain potential genes/regulations which have not yet been reported in the literature. After performing the optimal parameter tuning process, we scrutinized and evaluated the genes derived from the longest CTGR-SPs of the two input datasets using a manual literature survey, for example the CAV1+-prefixed CTGR-SPs: <(CAV1+)(GNG7+)(EIF2D+)>, <(CAV1+)(GNG7+)(FTSJ2+)>, <(CAV1+)(GNG7+)(NR2E1-)> and <(CAV1+)(GNG7+)(TMOD3-)>. The CAV1+ and GNG7+ items can be individually grouped and presented as a single item in the table. After the evaluating process, 78% (54/69 hits) of the evaluated genes were supported by previous literature.
In this study, our proposed CTGR-Span overcomes the flaws of the traditional sequential pattern mining methods: although the transactional databases converted from large-scale time course microarray gene expression datasets have too many items/significant genes within every transaction, the gene regulations over a period of time can still be efficiently identified, and the CTGR-Span runs dramatically faster than the traditional methods. In addition to the improvement in execution times, we incorporated the characteristics of gene regulation into the parameter designs and further used a GO enrichment analysis to make the CTGR-SPs more biologically meaningful. After evaluation against previous literature, the identified patterns correlate very well with the experimental backgrounds of the two input datasets. Therefore, we postulate that our approach could provide more biological insight into the underlying mechanisms of certain biological or clinical progressions, and that it could readily be applied to other research topics of interest. Abbreviations: CTGR-Span: Cross-Timepoint Gene Regulation Sequential pattern mining; CTGR-SPs: Cross-Timepoint Gene Regulation Sequential Patterns; minTSupp: minimum timepoint support; minSupp: minimum support; SWS: sliding window size; maxTC: maximum time constraint; GO: gene ontology. The authors declare that they have no competing interests. CPC, YCL, YLT and VST conceived and designed the entire experiments. CPC carried out the computational studies, performed the statistical analysis and drafted the manuscript. YCL participated in the data interpretations and helped to draft the manuscript. YLT carried out the experiments. VST obtained funding and made critical study supervision. All authors read and approved the final manuscript. Supplementary file: characteristics of mined sequential patterns (minSupp = 70~100% and minTSupp = 70%~90%)."}
+{"text": "Osteoarthritis (OA) is the most common form of arthritis and has multiple risk factors including joint injury. The purpose of this study was to characterize the histologic development of OA in a mouse model where OA is induced by destabilization of the medial meniscus (DMM model) and to identify genes regulated during different stages of the disease, using RNA isolated from the joint \u201corgan\u201d and analyzed using microarrays. Histologic changes seen in OA, including articular cartilage lesions and osteophytes, were present in the medial tibial plateaus of the DMM knees beginning at the earliest (2 week) time point and became progressively more severe by 16 weeks. 427 probe sets (371 genes) from the microarrays passed consistency and significance filters. There was an initial up-regulation at 2 and 4 weeks of genes involved in morphogenesis, differentiation, and development, including growth factor and matrix genes, as well as transcription factors including Atf2, Creb3l1, and Erg. Most genes were off or down-regulated at 8 weeks with the most highly down-regulated genes involved in cell division and the cytoskeleton. Gene expression increased at 16 weeks, in particular extracellular matrix genes including Prelp, Col3a1 and fibromodulin. Immunostaining revealed the presence of these three proteins in cartilage and soft tissues including ligaments as well as in the fibrocartilage covering osteophytes. The results support a phasic development of OA with early matrix remodeling and transcriptional activity followed by a more quiescent period that is not maintained. This implies that the response to an OA intervention will depend on the timing of the intervention. The quiescent period at 8 weeks may be due to the maturation of the osteophytes which are thought to temporarily stabilize the joint. 
Osteoarthritis (OA) affects over 27 million people in the United States and is similarly prevalent across the globe, making it the most common cause of chronic disability in adults. Studies of human OA are limited by its multifactorial nature, the lack of tissue that can be obtained at various stages of the disease and difficulties in defining disease onset. Basic mechanistic studies often are performed with tissue obtained at the time of joint replacement, which represents end-stage disease. These studies have focused largely on changes present in the articular cartilage; however, OA is a condition that affects the joint as an organ, including not only the cartilage but also subchondral bone, synovium, ligaments and, in the knee, the meniscus. We used the destabilization of the medial meniscus (DMM) model, in which OA is surgically induced by transection of the medial meniscotibial ligament. In this model, OA results from altered joint biomechanics, and the pathologic changes of cartilage destruction, subchondral bone thickening, and osteophyte formation are similar to the changes seen in human OA. The present study was designed as a time-course experiment to evaluate changes in gene expression during the development of OA and to compare these changes to the histologic progression of the disease. A comparison of gene expression at the 8 week time point in young adult and older adult mice has recently been reported. Male C57BL/6 mice (n\u200a=\u200a129) were purchased from Charles River Inc. and were 12 weeks of age at the time of DMM surgery or sham surgery performed as a control. Animal use was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Wake Forest School of Medicine Animal Care and Use Committee (protocol #A09-622). All surgery was performed under isoflurane anesthesia, and all efforts were made to minimize suffering.
Butorphanol (2.5 mg/kg) was administered in the event of pain. Details of animal housing and induction of OA have been recently published. Hind limbs from both the operated side and the contralateral side of each animal were dissected and processed for histological analysis. The details of fixation, processing, and sectioning, as well as the histological analysis, were as recently published. Immunohistochemistry was performed on sections from 3 DMM and 4 sham control mice at the 16-week time point in order to examine tissue distribution of type III collagen, fibromodulin and proline-arginine-rich end leucine-rich repeat protein (Prelp). Antigen retrieval and blocking steps were as recently described. Knee joint tissue from the medial joint compartment was obtained from the time 0 group and from the operated hind limbs at the other time points and processed for RNA isolation as described. RNA was pooled prior to microarray analysis such that 3 randomly selected samples from each surgical group and time point were pooled to create each biological replicate. Because 9 mice were used for each experimental group, a total of three biological replicates per group were analyzed using the Affymetrix Mouse Genome 430 2.0 oligonucleotide arrays as described. For each probe set, the log2 of the signal intensity and the Affymetrix detection p-value were obtained. Because some genes had more than one probe set, we provide results for both numbers of genes and probe sets when the numbers differed. The complete dataset has been provided to the Gene Expression Omnibus (GEO) repository (accession number GSE41342). Relative gene expression in the samples collected from the sham-operated control animals was calculated to identify those genes that changed as the animals aged over the 16 week time course independent of DMM-induced OA.
For this analysis, sham gene expression was calculated as the signal log ratio (SLR) between the average time 0 intensity (baseline un-operated controls) and each sham replicate pool for each time point. The average of the time 0 intensity was used to ensure a consistent baseline, and each biological replicate of the sham arrays was analyzed individually to allow biological consistency to be measured in the analysis. The SLR is the log2 of the fold change, so an SLR of 0.5\u200a=\u200a1.4-fold. After SLR calculations, each probe set had 12 associated SLR values, which together are referred to as the \u201csham time course\u201d. Microarrays were imaged and the resulting data normalized using systematic variation normalization (SVN) as previously described. Relative gene expression changes for the DMM time course were calculated to identify those genes that changed over time due to the DMM-induced changes in the joints. For this analysis, the sham results were used as a time-matched control (sham 2 weeks used as control for DMM 2 weeks etc.). Sham replicate signal intensities for each time point were averaged to ensure a consistent baseline for each replicate. DMM gene expression was then calculated as the time-matched signal log ratio of the average sham intensity to each DMM replicate for each time point, where each DMM replicate was analyzed individually. After SLR calculations, each probe set had 11 associated SLR values, which are referred to as the \u201cDMM time course\u201d. Relative gene expression for the sham time course and DMM time course was filtered for differentially expressed genes with consistent expression changes over time as previously described. Consensus cluster identification has been added to SC2ATmd\u2019s functionality since its original release. This was implemented by converting the consensus matrix to an adjacency matrix and then using MATLAB\u2019s depth first search algorithm to identify consensus clusters.
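The SLR arithmetic used throughout this analysis can be sketched in Python. The function names and example intensities are hypothetical, and log2(sample/baseline) is one common sign convention in which up-regulation is positive:

```python
import math

def signal_log_ratio(baseline, sample):
    """SLR = log2 fold change of a sample relative to its baseline.

    Positive values indicate up-regulation, negative values
    down-regulation (matching the convention described for the
    supplementary files).
    """
    return math.log2(sample / baseline)

def fold_change(slr):
    """Invert an SLR back to a linear fold change."""
    return 2.0 ** slr

# An SLR of 0.5 corresponds to about a 1.4-fold change, as stated
# in the text (2 ** 0.5 = 1.414...).
slr = signal_log_ratio(100.0, 141.42)
print(round(slr, 2), round(fold_change(slr), 1))  # 0.5 1.4
```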
A figure of merit (FOM) analysis was run to determine the optimal number of clusters present in each replicate data set and to identify which clustering algorithm formed the most homogeneous clusters with respect to Euclidean distance. Filtered DMM time course gene expression data were clustered using the updated consensus clustering option provided by version 3 of SC2ATmd. Annotation enrichment analysis was performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID). Real-time PCR was performed using samples of RNA from the same pools used for the microarrays as previously described. Analysis of the microarray data is presented above. The histological data and the real time PCR data were found to be normally distributed. Histological data from both the lateral and medial tibial plateaus were evaluated using ANOVAs and repeated measures analyses using SPSS version 17.0. The real time PCR results at each individual time point for sham controls and DMM animals were analyzed using t tests with StatView 5.0 software. The sham operated joints and the contralateral control demonstrated minimal to no pathological changes over the 16 week time course. Among the earliest changes observed in the DMM joints at 2 weeks after surgery were abaxial osteophytes in the medial tibial plateau. Osteophytes were primarily cartilaginous at 2 weeks, were ossified at 4 weeks, and were composed of trabecular bone that contained marrow spaces occupied by hematopoietic tissue at 8 and 16 weeks. A total of 406 probe sets were significantly and consistently regulated over time in the sham joints; a complete list of these genes is provided in File S1. A small subset of 25 genes remained up-regulated over the time course. The time course results for changes in expression in the DMM joints relative to time-matched sham joints revealed 371 genes (427 probe sets) that were significantly and consistently regulated over the three biological replicates.
These results were much different from the time course results noted in the sham joints, consistent with changes in gene expression due to induction of OA by the DMM surgery. An overlap analysis revealed 94 genes (97 probe sets) that were in common between the DMM/sham and sham/baseline control datasets. The 94 genes in common between DMM/sham and sham/baseline were analyzed by DAVID, which returned 10 significant annotation groups. Consensus clustering of genes with altered expression after DMM surgery identified 27 clusters with 2 or more probe sets and 38 genes (39 probe sets) that were classified as outliers (singletons). The complete list of the genes found in each cluster and the DAVID analysis for the clusters are provided in the supplementary files. Only clusters 5, 7, 10 and 12 exhibited significant down-regulation at specific time points. Cluster 5 contained genes down-regulated at week 4; cluster 10, genes down-regulated at weeks 4 and 8; and clusters 7 and 12, genes down-regulated at week 8. Cluster 5 genes were not associated with any significantly over-represented annotations, but this cluster of 27 genes (32 probe sets) included Cytl1 (cytokine-like 1), which has been shown to be expressed during chondrogenesis and to promote chondrocyte differentiation through stimulation of Sox9 transcriptional activity. Significant up-regulation of gene expression at 16 weeks was seen in clusters 15 and 16, both of which also exhibited high up-regulation at week 4. Cluster 15 consisted of 7 probe sets covering three genes: Prelp, Col3a1, and Fmod (fibromodulin). Prelp and fibromodulin were also present in the DMM/sham-Sham/baseline overlap gene list reported above, where they were found to be highly down-regulated at 16 weeks in the sham joints; however, these genes were highly up-regulated in the DMM joints. Cluster 16 was comprised of four probe sets, representing 3 genes: Col14a1, Bgn (biglycan), and Fbln7 (fibulin 7), all associated with the extracellular matrix.
Col14a1 was identified in a microarray analysis of human tissue. Genes in clusters 3 and 11 displayed temporal profiles of up-regulation only at 4 weeks, with little significant change in expression from control at any other time point. Cluster 3 contained 45 genes (48 probe sets), with significant annotations that included regulation of transcription, DNA binding, regulation of RNA metabolic process, zinc ion binding, and transcription. Potential genes of interest were NF\u03baB activating protein, Atf2 and Erg. Cluster 11 was composed of 11 genes, which exhibited no significantly over-represented functions. One of the genes in this cluster was Asb13, an ankyrin repeat and SOCS box-containing protein. Clusters 1, 2, 4, 9, and 14 contained genes that exhibited strong up-regulation at 4 weeks, with less up-regulation at 2 weeks. Cluster 1 contained 48 genes (54 probe sets) representing genes whose significant annotations include embryonic organ morphogenesis, positive regulation of cell differentiation and developmental process. This cluster included several genes thought to be involved in joint biology and/or OA, including syndecan 4, Bmp-1, and Timp-2. Cluster 2 contained 39 genes (52 probe sets) that were also up-regulated at 2 and 4 weeks. The cluster included over-represented annotations for collagen, signal peptide, extracellular region, extracellular matrix, cell adhesion and glycosaminoglycan binding. Genes of potential interest in this cluster included collagen genes such as Col5a1 and Col16a1, growth factor-related genes such as Igf1 and Egfr, MMP-14, and biglycan. Clusters 4, 9, and 14 contained 33, 16 and 8 genes, respectively. DAVID did not identify any functional annotations over-represented in any of these clusters. Cluster 4 genes included two glutathione peroxidases, Gpx7 and Gpx8, perhaps suggesting some redox-related event. Clusters 6, 8, and 13 also displayed the most up-regulation at week 4, but to a lesser extent than clusters 1, 2, 4, 9, and 14.
Cluster 6 included 23 genes (24 probe sets) whose DAVID-identified over-represented annotations included signal peptide, glycoprotein, cell adhesion, serine/threonine kinase signaling pathway, skeletal system development and growth factor activity. Genes of particular interest included TGF\u03b22 and TGF\u03b23, latent TGF\u03b2 binding protein and Ddr2. Cluster 8 contained 19 genes for which the significant gene annotations included cell adhesion; it contained COMP and Mmp13, which are thought to be important markers of the OA process. Cluster 13 included 9 genes for which there were no significantly over-represented functions. One of these, Enpp3, a pyrophosphatase, has been previously associated with osteoarthritis. Overall, the analysis of gene clusters based on the metrics of both shape and magnitude of the time profile provided insight into the temporal process of osteoarthritis development. Only a few genes were down-regulated as part of this process, and those were down-regulated mainly at 4 and/or 8 weeks. Many more genes were up-regulated, and most of the up-regulated genes displayed up-regulation at 4 weeks, with lesser up-regulation at 2 and 16 weeks. The phasic process was clearly visualized in these clusters, with the gene expression at 8 weeks being significantly different from gene expression at the other time points. This difference was consistent across all replicates. Based on the results of the consensus clustering, we chose to immunostain for type III collagen, fibromodulin, and Prelp, in order to determine which tissues might be providing the signals noted by gene expression at week 16. A total of 3 DMM and 4 sham control mice were studied and representative images are shown. We selected a set of 16 genes to evaluate using RT-PCR based on genes tested in our previous study. Both the histologic and gene expression analyses support a phasic process for the development of OA in this model.
The early phases at 2 and 4 weeks after DMM surgery were the most active in terms of gene expression, at a time when the earliest cartilage lesions in the medial tibial plateau were very mild but abaxial chondrophytes had already formed. The chondrophytes matured to osteophytes as the articular cartilage lesions became more severe but, interestingly, this was accompanied by a significant decline in overall gene expression at the 8 week time point. In addition to the differences in overall gene expression, the findings from cluster analysis showed that each time point studied had a unique gene signature, which also supports a phasic process. Perhaps most striking was the decline in gene expression at 8 weeks after surgery. There was a significant down-regulation of genes regulating cell proliferation, while matrix remodeling genes that were up-regulated at 2 and 4 weeks were mostly off, suggesting that a quiescent phase had been reached. However, at 16 weeks this pattern had changed and was accompanied by significant cartilage loss in the medial tibial plateau, while cartilage damage and chondrocyte death were beginning to be evident in the lateral tibial plateau along with the appearance of lateral axial osteophytes. Because RNA for the gene arrays was only isolated from the medial side of the joint, the changes in gene expression observed at 16 weeks were not due to the disease progressing laterally but rather reflect more advanced disease medially. The lack of time points between 4 and 8 weeks and between 8 and 16 weeks prevents any conclusions about the length of the quiescent phase. The significant expression of Prelp, Col3a1, and fibromodulin at 16 weeks suggests that a matrix repair or a \u201cwound healing response\u201d was active. At 16 weeks, the articular cartilage from the DMM group had severe lesions in the medial tibial plateau with some loss of cartilage.
This suggested that the signal at 16 weeks for these genes may have come from either early changes on the femoral side, since some femoral tissue was included for RNA extraction, or, more likely, from other joint structures such as the meniscus, synovium, ligaments and subchondral bone. Immunostaining revealed that ligaments and the fibrocartilage found over osteophytes were common locations for all three proteins. These results are consistent with previous studies that examined the location and function of these proteins. Prelp is thought to play a role in matrix organization through interactions with collagen, as does the small leucine-rich proteoglycan fibromodulin. Increased expression of type III collagen has been previously noted in human OA cartilage. The phasic changes in gene expression noted in the DMM knees were not the result of aging of the animals over the 16 week time course. Changes in gene expression were noted when we compared the 12 week old animals at baseline (un-operated) and the sham controls over the 16 week time course, but these differed significantly from the changes in gene expression noted in the DMM relative to sham joints. This was seen in the overlap analysis, where the changes over time in expression in the DMM joints were most often in the opposite direction to changes over time in the sham joints. This was most evident for Prelp and fibromodulin, which were significantly down-regulated at week 16 in the sham controls but increased in the DMM joints. Interestingly, immunopositivity for both of these proteins was notably stronger and more extensive in fibrocartilage over osteophytes than in articular or meniscal cartilage. In addition, immunopositivity for Prelp was increased in degenerative cartilage matrix compared with normal cartilage. In the sham time course analysis, most genes exhibited an up-regulation of expression at 2 weeks followed by a progressive decline out to 16 weeks, compared to time zero.
Many of these genes were involved in morphogenesis and development, suggesting the animals were more actively growing at the start of the study, after which growth likely slowed. Importantly, the progressive increase in subchondral bone area and thickness were the only histologic measures that changed with time in all groups. We have recently reported a comparison of the 8 week time point results included in the present study with results from the same time point in animals that were 12 months old at the time of DMM surgery. A phasic progression of OA was suggested by a previous study in humans where radiographic progression was related to changes in serial measures of serum COMP, as an OA biomarker. There are important limitations to the present study. Because RNA was not isolated from individual tissues, it is not possible to determine which specific tissue contributed to the changes in gene expression. Also, if a gene is expressed in a single tissue, pooling of tissues will reduce the signal to a lower level than what would have been observed in the individual tissue. That is the likely explanation for the overall lower levels of gene expression observed in the present study when compared to work using a single tissue. Based on the immunostaining for type III collagen, fibromodulin, and Prelp, as well as immunostaining for IL-33, periostin, and CCL21 reported in our previous study, multiple joint tissues are likely to have contributed to the observed expression changes. Further studies will be needed to determine the importance of these findings for human OA. An important implication of the results is that the degree of response to an intervention given during a phasic process will depend on the timing of the intervention. Pre-clinical studies using animal models most often start the intervention at the same time or just after the start of the OA process, while in human trials participants are likely to be at various stages of the disease process when the intervention is initiated.
Finding markers of various disease stages in OA could be used to direct targeted therapy to the proper phase of the disease when the target is most active. Table S1: Filtering results for the sham time course (DOCX). Table S2: Filtering results for the DMM time course (DOCX). File S1: Excel file of 406 sham filtered probe sets and DAVID results; results are presented as signal log ratio (SLR), which is the log2 of the fold change, so an SLR of 0.5\u200a=\u200a1.4-fold; a positive number indicates up-regulation and a negative number down-regulation (XLSX). File S2: Excel file of 427 DMM filtered probe sets and DAVID results; results are presented as signal log ratio as described for File S1 (XLSX). File S3: Excel file of 97 common probe sets and DAVID results (XLSX)."}
+{"text": "In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, these approaches cannot detect important interactions of the other type. The S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all existing S-System based modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision.
The experiments carried out on two well-known real-life networks, namely IRMA and the SOS DNA repair network in Escherichia coli, show a significant improvement compared with other state-of-the-art approaches for GRN modeling. The availability of genome-wide expression data has significantly increased interest in systems biology, in particular in reverse-engineering gene regulatory networks (GRNs). While static expression data allows the learning of only the network structure, i.e., transcription factor (TF) and target gene interactions, time-course data allows the modeling of detailed system dynamics over time. In our view, amongst the different ways of classifying modeling approaches, differential equation (DE) based methods model the system in continuous time. Of the several linear and non-linear types of DE models employed for reconstructing GRNs, the S-System model has gained popularity recently. In the proposed model, each regulatory interaction is associated with a delay parameter in [0, \u03c4max], where \u03c4max is the maximum possible delay in the considered network. Note that there are two sets of time-delay parameters, one for the production part and one for the degradation part of the S-System, collected in two delay matrices with entries \u03c4ij, i,j = 1\u2026N. For both these matrices, the ith gene is affected by its own and other genes\u2019 concentration levels at their corresponding delays. If a delay \u03c4ij, corresponding to an interaction (gij/hij), is 0, we have an instantaneous interaction (provided that there is a regulation between genes i and j), whereas a non-zero value of \u03c4ij gives a delayed interaction. Thus, the proposed Time-Delayed S-System (TDSS) model is capable of capturing both time-delayed and instantaneous genetic interactions in GRNs. For the ith sub-problem, corresponding to the production and degradation rates of the ith gene, the 2N\u2009+\u20092-parameter set \u03a9i\u2009=\u2009{\u03b1i,\u03b2i,{gij,hij}j\u2009=\u20091\u2026N} needs to be estimated.
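As a sketch of the model dynamics (the original display equations did not survive extraction; the form below is the standard S-System extended with per-interaction delays, so the notation may differ from the authors' equations):

```latex
\frac{dx_i(t)}{dt}
  = \alpha_i \prod_{j=1}^{N} x_j\!\left(t-\tau^{g}_{ij}\right)^{g_{ij}}
  - \beta_i  \prod_{j=1}^{N} x_j\!\left(t-\tau^{h}_{ij}\right)^{h_{ij}},
\qquad \tau^{g}_{ij},\, \tau^{h}_{ij} \in [0,\, \tau_{\max}]
```

Setting all delays to zero recovers the classical (instantaneous) S-System, consistent with the remark in the text that a zero \tau_{ij} yields an instantaneous interaction.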
In the Time-Delayed S-System model, apart from these parameters, we also have to estimate the 2N time-delay parameters for each gene, extending the 2N\u2009+\u20092-parameter set accordingly. The maximum delay (\u03c4max) permissible for the system is set by considering the common regulation time scale, and the initial portion of the time series within \u03c4max is designated as history information, which reduces the available sample size for training. It should be noted that the step size for RK4 integration is set at a small value, allowing the numerical integration to capture the system dynamics accurately. Again, we use linear spline interpolation to generate a continuous history profile. For the Time-Delayed S-System model, we adapt the traditional RK4 method for delay differential equations (DDE), taking the time delay parameters into account; a detailed description of the modified RK4 is presented in Sec. 2.3 of the supplementary document. Due to the intractable nature of the optimization problem, S-System parameter learning is commonly carried out via evolutionary computation (EC), namely Genetic Algorithm (GA) or Differential Evolution (DE). Recently, DE and its variants, such as trigonometric differential evolution (TDE), have been used extensively because of their versatility. To address various limitations of the regularized squared relative error of Eqn. (4) presented in Sec. 2, we propose a novel fitness function referred to as the adaptive squared relative error (ASRE). In the ASRE, ri is the total number of actual regulators, Bi is a balancing factor used to maintain the desired balance between the two terms of the ASRE, and Ci is the penalty factor for the ith gene, with I and J being the maximum and minimum in-degree, respectively. Note that in our formulation, ri and I are restricted to be smaller than or equal to N, since a transcription factor generally does not affect both its target gene\u2019s production and degradation simultaneously.
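The per-gene in-degree penalty Ci just described can be sketched in Python. Eqn. (13) itself is not reproduced in the text, so the exact power-law form, the exponent value, and the helper name asre_penalty below are illustrative assumptions:

```python
def asre_penalty(in_degree, j_min, i_max, gamma=2.5):
    """Sketch of the per-gene penalty factor C_i.

    Genes whose in-degree lies inside the permitted [j_min, i_max]
    region are not penalized (C_i = 1); genes outside the region are
    penalized by a power-law term that grows with the distance from
    the nearest boundary. gamma = 2.5 is an assumed scale-free
    exponent, not a value taken from the paper.
    """
    if j_min <= in_degree <= i_max:
        return 1.0
    # distance from the nearest boundary of the permitted region
    dist = j_min - in_degree if in_degree < j_min else in_degree - i_max
    return float((1 + dist) ** gamma)

# Inside the region the penalty is neutral; outside it grows with
# the violation, discouraging overly dense regulator sets.
print(asre_penalty(3, 1, 5))                          # 1.0
print(asre_penalty(7, 1, 5) < asre_penalty(9, 1, 5))  # True
```

In the full criterion such a factor would multiply the squared-relative-error term for gene i, so a dense candidate regulator set is admitted only when it buys a correspondingly large accuracy gain.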
In our ASRE criterion, in contrast to a fixed weighting factor c as in Eqn. (4), the penalty factor Ci takes the form of an inverse power-law function. This is motivated by the fact that biological networks often have a scale-free structure, in which the node connectivity degree x is distributed according to a power-law distribution, P(x) \u221d x-\u03b3, with a scaling parameter \u03b3 reported for various networks in nature, society and technology. Genes whose in-degree falls within the min-max in-degree region are not penalized (Ci\u2009=\u20091), while genes falling out of this region are penalized according to an inverse power-law term; see Sec. 2.4.2 and 2.4.3 in the supplementary document. (i) Adaptive regulator set size: our algorithm adaptively and continually adjusts the values of the min-max in-degree region. Initially, we set J\u2009=\u20090 and I to a value less than or equal to N based on the size of the network. Then, for every l generations, we examine the smallest and largest in-degree within the population respectively and set these as new values for J and I. (ii) Adaptive balancing factor Bi: the balancing factor Bi is included in Eqn. (12) to dynamically balance the terms corresponding to the network accuracy and the model complexity. For the first tens of generations, we set the value of Bi to zero, i.e., we emphasize network quality first. This allows the algorithm to quickly improve the network accuracy as there are no constraints on complexity. We allow the algorithm to proceed in this manner either until a fixed number ne of generations has been executed or until the squared relative error is smaller than a specified threshold \u03b3i. When the individuals in the population achieve stability and improved accuracy, the value of Bi is updated as follows: from the top 50% of individuals in the population, we calculate the average network accuracy ANA (first term of Eqn. (12)) and the average model complexity AMC (second term of Eqn.
(12), i.e., 2N/(2N-ri)), then set Bi\u2009=\u2009ANA/AMC. With this, the effect of the network accuracy is maintained in \u2018balance\u2019 with the model complexity. Next, we replace the worst 50% of individuals with randomly initialized individuals, and the optimization continues with the value of Bi computed as above. While our preliminary studies reported earlier also used adaptation of I and J, that implementation was rather ad hoc and had a static weight factor. The proposed model evaluation criteria represented by Eqn. (12) and Eqn. (13) are thus novel and perform systematic adaptation of I and J while also simultaneously carrying out adaptive balancing of network complexity and accuracy. The proposed TDSS model is evaluated experimentally using both synthetic and real-life networks, including the SOS DNA repair network in Escherichia coli containing 8 genes. As the model parameters increase quadratically with the network size, large-scale modeling with S-System based models remains a long-standing challenge. For this reason, like previous research on the S-System, we focus on small and medium networks. Four well-known performance measures, sensitivity (Sn), specificity (Sp), precision (Pr) and F-score (F), have been applied for network evaluation. Comparisons were made against the methods with executable code available, namely ALG and REGARD, and against regulations of lexA reported in earlier studies, several of which were described as time-delayed. For the 6-gene (8-gene) SOS network, the proposed TDSS method was successful in inferring 8 (9) \u201ctrue\u201d+\u201cnovel\u201d regulations, including 4 (5) regulations which were reported as time-delayed. These results indicate the presence of possible delayed regulations in the network. All the inferred true regulations are shown in the accompanying table. The search space is constrained by the minimum (J) and maximum (I) in-degrees, as explained in the proposed Methods section. The adaptation of I and J narrows down the search space significantly and speeds up convergence. Finally, we consider the issue of computational time.
We have compared the timing of TDSS with two other S-System based approaches, namely REGARD and ALG. Time-delayed regulations are an inherent characteristic of all biological networks. While there have been some recent efforts using the Bayesian network (BN) approach to simultaneously model time-delayed and instantaneous interactions, the current state-of-the-art S-System approaches cannot model time-delayed interactions. In this paper, we have proposed a novel method to incorporate time-delayed interactions in the existing S-System modeling approaches for reverse engineering genetic networks. The proposed Time-Delayed S-System (TDSS) model is capable of simultaneously representing both instantaneous and time-delayed regulations. Apart from the kinetic order and rate constant parameters as in traditional S-System models, additional parameters for the time delays are necessary for a full description of TDSS. To make the optimization effective and efficient in the increased parameter space, we proposed a novel objective function based on the sparse and scale-free nature of genetic networks. The inference method was also redesigned, based on systematic adaptation of the max and min in-degrees for gene cardinality, and systematic balancing between time response accuracy and network complexity during the optimization process. The RK4 numerical integration technique has also been suitably adapted for TDSS. Investigations carried out on small and medium synthetic networks with various levels of noise, as well as on two real-life genetic networks, show that our approach correctly captures the time-delayed interactions and outperforms other existing S-System based methods. The authors declare that they have no competing interests. ARC, MC and NXV developed the concepts and drafted the manuscript. ARC developed the algorithms and carried out the experiments. MC and NXV suggested the biological data and experiments and provided biological insights on the results.
MC provided overall supervision, direction and leadership to the research. All authors read and approved the final manuscript. Supplementary Document."}
+{"text": "A common approach for time series gene expression data analysis includes the clustering of genes with similar expression patterns throughout time. Clustered gene expression profiles point to the joint contribution of groups of genes to a particular cellular process. However, since genes belong to intricate networks, other features, besides comparable expression patterns, should provide additional information for the identification of functionally similar genes.In this study we perform gene clustering through the identification of Granger causality between and within sets of time series gene expression data. Granger causality is based on the idea that the cause of an event cannot come after its consequence.This kind of analysis can be used as a complementary approach for functional clustering, wherein genes would be clustered not solely based on their expression similarity but on their topological proximity built according to the intensity of Granger causality among them. Gene network analysis of complex datasets, such as DNA microarray results, aims to identify relevant structures that help the understanding of a certain phenotype or condition. These networks comprise hundreds to thousands of genes that may interact, generating intricate structures. Consequently, pinpointing genes or sets of genes that play a crucial role becomes a complicated task.Common analyses explore gene-gene level relationships and generate broad networks. Although this is a valuable approach, genes might interact more intensely with a few members of the network, and the identification of these so-called sub-networks should lead to a better comprehension of the entire regulatory process. Several in silico methodologies are available for the identification of sub-networks, or clusters, within a given dataset. 
Functional clustering was initially proposed in the neuroscience literature to group time series by their interactions rather than by their similarity alone, and the concept of Granger causality has recently been extended to sets of time series and applied to gene expression data. Granger causality identification is a potential approach for the detection of possible interactions in a data-driven framework couched in terms of temporal precedence. The main idea is that temporal precedence does not imply, but may help to identify, causal relationships, since a cause never occurs after its effect.A formal definition of Granger causality for sets of time series can be given as follows. Granger causality for sets of time series: Suppose that \u2111t is a set containing all relevant information available up to and including time point t. Let Xt, Yt and Zt be sets of time series containing p, m and n time series, respectively, where Yt and Zt are disjoint subsets of Xt, i.e., each time series only belongs to one set, and thus p \u2265 m + n. Let Yt(h|\u2111t) be the optimal h-step predictor of the set of m time series Yt from the time point t, based on the information in \u2111t. The forecast MSE of this linear predictor will be denoted by \u03a9Y(h|\u2111t). The set of n time series Zt is said to Granger-cause the set of m time series Yt if \u03a9Y(h|\u2111t) < \u03a9Y(h|\u2111t excluding the past and present of Zt) for at least one h, where the second term uses the set containing all relevant information except for the information in the past and present of Zt. In other words, if Yt can be predicted more accurately when the information in Zt is taken into account, then Zt is said to be Granger-causal for Yt. For the linear case, Granger causality can be identified through \u03c1, the largest correlation calculated by Canonical Correlation Analysis (CCA). In order to simplify both notation and concepts, only the identification of Granger causality for sets of time series in an Autoregressive process of order one is presented. Generalizations for higher orders are straightforward.There are numerous definitions for clusters in networks in the literature. 
A usual approach for network clustering when the structure of the graph is known is spectral clustering. However, for time series data the graph structure is not directly observed. In order to overcome this limitation, we developed a framework to cluster genes by their topological proximity using the time series gene expression information. We developed concepts of distance and degree for sets of time series based on Granger causality, and combined them with a modified spectral clustering algorithm. The procedures are detailed below. Given a set of p time series and a definition of similarity wij\u2009\u2265\u20090 between all pairs of data points, we build a similarity graph G\u2009=\u2009(V,E). Each vertex vi in this graph represents a time series gene expression profile, and each weight wij quantifies the Granger causality between the corresponding time series. In other words, wij\u2009>\u20090 represents existence of Granger causality between time series i and j, and wij\u2009=\u20090 represents Granger non-causality. The problem of clustering can now be reformulated using the similarity graph: we want to find a partition of the graph such that there is less Granger causality between different groups and more Granger causality within the groups. Let G\u2009=\u2009(V,E) be an undirected graph with vertex set V\u2009=\u2009{v1,\u2026,vp} (where each vertex represents one time series) and weighted edge set E. In the following we assume that the graph G is weighted, that is, each edge between two vertices vi and vj carries a non-negative weight wij\u2009\u2265\u20090. The weighted adjacency matrix of the graph is the matrix W\u2009=\u2009(wij); i,j\u2009=\u20091,\u2026,p. If wij\u2009=\u20090, this means that the vertices vi and vj are not connected by an edge. As G is undirected, we require wij\u2009=\u2009wji. Therefore, in terms of Granger causality, wij can be derived from the distance between two time series. Distance between two (sets of) time series Xti and Xtj: d(Xti,Xtj)\u2009=\u20091\u2009\u2212\u2009\u03c1, where \u03c1 is the largest canonical correlation between Xti and Xtj. Notice that this distance is symmetric in Xti and Xtj. Moreover, notice that the CCA is the Pearson correlation after dimension reduction; therefore this definition generalizes the usual correlation-based distance used in gene expression analysis. 
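As an illustration, the distance d = 1 − ρ needs only a few lines of linear algebra; the sketch below (function names are ours, numpy only, autoregressive order one) obtains ρ as the largest singular value of the whitened cross-covariance between the past of one set of series and the present of the other:

```python
import numpy as np

def cca_largest_corr(X, Y):
    """Largest canonical correlation between the columns of X (T x m)
    and Y (T x n), via the SVD of the whitened cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxx, Syy = X.T @ X / len(X), Y.T @ Y / len(Y)
    Sxy = X.T @ Y / len(X)
    # With L L^T = S, multiplying by L^{-1} whitens each block.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False).max()

def granger_distance(X, Y, lag=1):
    """d = 1 - rho, where rho is the largest canonical correlation between
    the past of X and the present of Y (autoregressive order `lag`)."""
    return 1.0 - cca_largest_corr(X[:-lag], Y[lag:])
```

The adjacency weight wij of the similarity graph can then be derived from this distance, as described above.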
Another necessary concept is the idea of the degree of a time series (vertex vi), which can be defined as the sum of its in-degree and out-degree, where in-degree and out-degree are the total weights of the edges that enter and leave vi, respectively. Therefore, the degree of vertex vi contains the total information flow passing through vertex vi. Notice that in-degree and out-degree represent the total information flow that \u201centers\u201d and \u201cleaves\u201d the vertex vi in the regulatory network. The algorithm, based on spectral clustering, is as follows. Input: the p time series and the number k of sub-networks to construct. Step 1: Let W be the (p\u2009\u00d7\u2009p) symmetric weighted adjacency matrix. Step 2: Compute the non-normalized (p\u2009\u00d7\u2009p) Laplacian matrix L\u2009=\u2009D\u2009\u2212\u2009W, where D is the (p\u2009\u00d7\u2009p) diagonal matrix with the degrees d1,\u2026,dp on its diagonal. Step 3: Compute the first k eigenvectors e1,\u2026,ek of L. Step 4: Let U\u2009\u2208\u2009\u211cp\u00d7k be the matrix containing the vectors {e1,\u2026,ek} as columns. Step 5: For i\u2009=\u20091,\u2026,p, let yi\u2009\u2208\u2009\u211ck be the vector corresponding to the ith row of U. Step 6: Cluster the points (yi)i=1,\u2026,p\u2009\u2208\u2009\u211ck with the k-means algorithm into clusters {X1,\u2026,Xk}. For k-means, one may select a large number of initial values to achieve (or to get closer to) the global optimum configuration. In our simulations, we generated 100 different initial values. Output: sub-networks {X1,\u2026,Xk}. Notice that this clustering approach does not infer the entire structure of the network.The method presented so far describes a framework for clustering genes (time series) using their topological proximity in terms of Granger causality, for a given number of sub-networks k. The choice of the number of sub-networks k is often difficult, depending on what the researcher is interested in. 
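Steps 1-6 above can be sketched directly in code; a minimal numpy illustration (function names are ours; a plain hand-rolled k-means is restarted from many initial values, as in the text):

```python
import numpy as np

def _kmeans(U, k, rng, iters=100):
    """A plain k-means on the rows of U; returns labels and inertia."""
    centers = U[rng.choice(len(U), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(axis=0)
    return labels, ((U - centers[labels]) ** 2).sum()

def spectral_subnetworks(W, k, n_restarts=100):
    """Steps 1-6: W is the (p x p) symmetric weighted adjacency matrix."""
    d = W.sum(axis=1)                  # degrees d_1, ..., d_p
    L = np.diag(d) - W                 # non-normalized Laplacian (Step 2)
    _, eigvecs = np.linalg.eigh(L)     # eigh sorts eigenvalues ascending
    U = eigvecs[:, :k]                 # first k eigenvectors as columns
    best, best_inertia = None, np.inf
    for seed in range(n_restarts):     # many initial values, keep the best
        labels, inertia = _kmeans(U, k, np.random.default_rng(seed))
        if inertia < best_inertia:
            best, best_inertia = labels, inertia
    return best
```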
In our specific problem, one is interested in identifying the clusters presenting dense connectivity within a cluster and sparse connectivity between clusters. Now, the challenge consists in determining the optimum number of sub-networks. In order to determine the most appropriate number of clusters in this specific context, we used a variant of the silhouette method. Let us first define the cluster index s(i) in the case of dissimilarities. Take any time series i in the network and let A be the sub-network to which it has been assigned. When sub-network A contains other time series apart from i, we can compute a(i), the average dissimilarity of i to all other time series of A. Let us now consider any sub-network C which is different from A and compute d(i,C), the average dissimilarity of i to all time series of C. After computing d(i,C) for all sub-networks C\u2009\u2260\u2009A, we take the smallest of those numbers and denote it by b(i). The sub-network B for which this minimum value is attained is very useful to know: it is the best alternative cluster for the time series in the network. Note that the construction of b(i) depends on the availability of other sub-networks apart from A, thus it is necessary to assume that there is more than one sub-network k within a given network. After calculating a(i) and b(i), the cluster index s(i) can be obtained by combining them as follows: s(i)\u2009=\u2009(b(i)\u2009\u2212\u2009a(i))/max{a(i), b(i)}. After calculating \u22121\u2009\u2264\u2009s(i)\u2009\u2264\u20091 for each time series i, three situations can be distinguished: s(i)\u2009\u2248\u20091, s(i)\u2009\u2248\u20090 or s(i)\u2009\u2248\u2009\u22121. For the cluster index s(i) to be close to one we require a(i)\u2009\u226a\u2009b(i). As a(i) is a measure of how dissimilar i is to its own sub-network, a small value means it is well matched. Furthermore, a large b(i) implies that i is badly matched to its neighboring sub-network. Thus, a cluster index s(i) close to one means that the gene is appropriately clustered. If s(i) is close to negative one, then by the same logic we see that the gene is badly clustered, and s(i) near zero means that the gene is on the border of two sub-networks. 
In other words, the cluster index s(i) can be interpreted as the fitness of the time series i within its sub-network. Indeed, from the above definition we easily see that \u22121\u2009\u2264\u2009s(i)\u2009\u2264\u20091. The average cluster index of a sub-network is a measure of how tightly grouped all the genes in the sub-network are. Thus, the average cluster index s of the entire dataset is a measure of how appropriately the genes have been clustered from a topological point of view and in terms of Granger causality. The average cluster index s of the entire dataset is computed for each number of clusters k. When the number of identified sub-networks is equal to or lower than the adequate number of sub-networks, the cluster index values are very similar. However, when the number of identified sub-networks becomes higher than the adequate number of sub-networks, the cluster index value s decreases abruptly. This is due to the fact that one of the highly connected sub-networks is split into two new sub-networks. Notice that these two new sub-networks present high connectivity between them because they are, in fact, only one sub-network. In order to illustrate this event, see the Figures: s decreases substantially, since the Granger causality between clusters increases and the Granger causality within clusters decreases. The breakpoint where the value s decreases abruptly can be used to determine the adequate number of sub-networks. In fact, this can be visually identified by analyzing the breakpoint in the plot, similarly to the standard elbow method used in k-means. 
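The cluster index above is straightforward to compute from a dissimilarity matrix; a minimal sketch (the function name is ours; singleton clusters are assigned s(i) = 0, a convention we adopt that is not stated in the text):

```python
import numpy as np

def cluster_index(D, labels):
    """Average s(i) = (b(i) - a(i)) / max{a(i), b(i)} from a (p x p)
    dissimilarity matrix D and an integer cluster label per time series."""
    p = len(D)
    s = np.zeros(p)
    for i in range(p):
        own = labels == labels[i]
        own[i] = False
        if not own.any():          # singleton cluster: s(i) = 0 (our convention)
            continue
        a = D[i, own].mean()       # a(i): mean dissimilarity within own cluster
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()
```

A higher average index indicates a tighter, better-separated clustering, as described above.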
However, if one wants to determine the breakpoint in an objective manner, this can be done by adjusting two linear regressions, one with the first q dots and another with the remaining dots, thus identifying the breakpoint q that minimizes the total sum of squared errors, for i\u2009=\u20091,\u2026,20. In order to illustrate an application of the proposed approach, a dataset collected from HeLa cells was used (http://genome-www.stanford.edu/Human-CellCycle/HeLa/). In order to evaluate our proposed approach, we chose to analyze the same gene set examined in the Figure. In order to study the properties of the proposed functional clustering method and to check its consistency, we performed four simulations with distinct network characteristics in terms of structure and Granger causality. The Tables report the percentage of false positive edges where there is indeed no Granger causality. In fact, where there is no Granger causality, the rate of false positives was controlled at 5%, and where there is Granger causality, the number of identified edges is clearly higher than where there is no Granger causality. By applying the method described in section Functional clustering to the biological dataset, the optimum number of sub-networks was identified as three. Notice in the Figure that modulation of NF-\u03baB or STAT3, and alterations in p53 status, have each been reported to affect cell survival individually. The presence of the three genes in the same cluster is in agreement with a recent study which examined the hypothesis that alterations in a signal network involving NF-\u03baB, STAT3 and p53 could modulate expression of proapoptotic BAX and antiapoptotic BCL-XL proteins, promoting cell survival. In that study, modulation of NF-\u03baB or STAT3 induced a greater increase in the BAX/BCL-XL ratio than modulation of these transcription factors individually. As discussed earlier in this paper, this is a situation in which similar patterns of gene expression are not sufficient to comprehend the biological process. 
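The two-regression breakpoint criterion described earlier can be sketched as follows (the function name is ours; every candidate split q is scored by the combined sum of squared errors of the two line fits):

```python
import numpy as np

def find_breakpoint(s_values):
    """Fit one line to the first q points and another to the rest, for every
    candidate q; return the q minimizing the total sum of squared errors."""
    def sse(y):
        x = np.arange(len(y))
        coef = np.polyfit(x, y, 1)                   # ordinary least squares line
        return float(((np.polyval(coef, x) - y) ** 2).sum())
    n = len(s_values)
    errors = {q: sse(s_values[:q]) + sse(s_values[q:]) for q in range(2, n - 1)}
    return min(errors, key=errors.get)
```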
Once clusters were obtained, the cluster-cluster network (Figure) was modeled. The previous work reported a signal network involving NF-\u03baB, p53, and STAT3 that modulates cell survival. Here, cluster 1 groups not only NF-\u03baB, p53, and STAT3, but also the functionally associated gene BCL-XL, the NF-\u03baB regulator A20 and the targets IAP and i\u03baB\u03b1. The presence of NF-\u03baB and fibroblast growth factors (FGFs) and receptors (FGFRs) in the same cluster is also in agreement with the previous work. Members of the FGF family and NF-\u03baB have been shown to interact in various contexts and, despite distinct roles, are involved in cell proliferation, migration and survival. Even though MCL-1 and P21 play important roles in cell survival, and BAI1 is transcriptionally regulated by P53, the analysis run here clustered them separately from the P53-containing cluster. This result suggests that, in the context of this dataset, their interaction is stronger with genes such as c-JUN, also functionally related to cell survival, proto-oncogene MET and tumor suppressor MASPIN, for instance. Also worth noticing is the interaction of this cluster with the two members of cluster 3: FGF5 and FOP. Like the other members of the FGF family grouped in cluster 2, FGF5 is involved in cell survival activities, while FOP was originally discovered as a fusion partner with FGFR1 in oncoproteins that give rise to stem cell myeloproliferative disorders. It would be interesting to identify specific details regarding the intensity and direction of the information flow within this cluster for a clearer understanding of their relationship in the context of cell cycle progression. Fujita et al. suggested the identification of Granger causality between sets of time series, the concept on which the approach presented here is built. Krishna et al. proposed functional clustering based on expression similarity; in contrast, the clustering proposed here is based on topological proximity in terms of Granger causality. A disadvantage of our method is that it cannot be applied to very large datasets. 
The larger the number of time series (genes), or the higher the order of the autoregressive process to be analyzed, the higher the chance of generating non-invertible covariance matrices in the calculation of distance (definition 2) and degree (definition 4) between clusters. We believe that this drawback can be overcome through sparse canonical correlation analysis, which was recently proposed. We only analyzed the autoregressive process of order one because gene expression time series data, possibly due to experimental limitations, are typically not long. However, if one is interested in analyzing greater orders, one minus the maximum canonical correlation value among all the tested autoregressive orders can be used as the distance measure between two time series.The clustering algorithm used here is based on the well-known spectral clustering. Although results were satisfactory, other graph clustering methods may be used; the normalized cuts algorithm, for instance. Finally, which biological processes underlie the correlations observed in time series datasets remains a difficult question to be answered. Studies suggest that correlated genes may belong to common pathways or present the same biological function. However, it is also known that methods based exclusively on correlation cannot reconstruct entire gene networks. Further studies in the field of systems biology might be able to answer this question in the future.We propose a time series clustering approach based on Granger causality and a method to determine the number of clusters that best fits the data. This method consists of (1) the definition of degree and distance, usually used in graph theory but now generalized for time series data analysis in terms of Granger causality; (2) a clustering algorithm based on spectral clustering; and (3) a criterion to determine the number of clusters. 
We demonstrate, by simulations, that our approach is consistent even when the number of genes is greater than the time series\u2019 length.We believe that this approach can be useful to understand how gene expression time series relate to each other, and therefore help in the functional interpretation of data.The authors declare that they have no competing interests.AF has made substantial contributions to the conception and design of the study, analysis and interpretation of data. KK, AGP and JRS contributed to the analysis and interpretation of mathematical results. PS contributed to the analysis and interpretation of biological data. AF and PS have been involved in drafting of the manuscript. SM directed the work. All authors read and approved the final manuscript."}
+{"text": "The microarray technique allows the simultaneous measurement of the expression levels of thousands of mRNAs, such as those of the Cyanobacterium Synechocystis. By mining these data one can identify the dynamics of the gene expression time series. The detection of genes that are periodically expressed is an important step that allows us to study the regulatory mechanisms associated with the circadian cycle. The problem of finding periodicity in biological time series poses many challenges, because the observed time series usually exhibit non-idealities, such as noise, short length, outliers and unevenly sampled time points. Consequently, the method for finding periodicity should preferably be robust against such anomalies in the data. In this paper, we propose a general and robust procedure for identifying genes with a periodic signature at a given significance level. This identification method is based on autoregressive models and information theory. By using simulated data we show that the suggested method is capable of identifying rhythmic profiles even in the presence of noise and when the number of data points is small. Through our analysis, we uncover the circadian rhythmic patterns underlying the gene expression profiles from Cyanobacterium Synechocystis that are responsive to light stimulus. Physiological states of a living organism change as time goes by, forming a sequence of patterns that repeat themselves periodically or nearly periodically, such as the circadian rhythm. Circadian rhythms are endogenous self-sustaining oscillations that are regulated by a pacemaker composed of one or more biochemical oscillators. Data produced in microarray experiments carry a high degree of stochastic variation, obscuring the periodic pattern. Furthermore, microarray experiments are expensive, limiting the number of data points in a time series expression profile. 
The identification of the circadian expression pattern in time series data is challenging, because the measured data are often non-ideal, and efficient algorithms are needed to extract as much information as possible. Based on time series data, various approaches have been proposed to identify periodically expressed genes, for example by Wichert et al. To address this issue, a new computational technique for the identification of periodic patterns in relatively short time series is introduced. By combining the autoregressive model, maximum entropy methods and information theory concepts, a novel inference approach is proposed to effectively screen out periodically expressed genes. The technique was applied to simulated expression profiles and to experimental data. We found 164 genes that are periodically expressed in gene expression data sets from two independent cyanobacteria cultures. In order to calibrate our technique we analyzed four synthetic time series contaminated with different levels of additive noise, and studied the impact of the noise level on the characterization of the oscillatory profiles. Furthermore, we also noted that the sampling frequency impacts the analysis: better sampling (more data points per period) contributes to improving the characterization, in agreement with previous findings. After identifying the genes with an oscillatory expression, our aim is to identify genes that could be associated with the circadian clock machinery (CCM) that generates the circadian oscillations. In this sense we use the following hypotheses: (i) the expression profile must be circadian in both replicates; (ii) the dynamics of the CCM genes must be preserved through the two biological replicates; (iii) as replicates are synchronized, we expect the genes of the CCM to have a small phase shift between both replicates. With these working hypotheses, from the 164 oscillatory genes we select those whose model-based distance (Eq. 
9) between replicates is shorter than 0.6 and the phase shift is smaller than 0.6. The selected genes fall into functional categories including photosynthesis and respiration, and regulatory functions. We presented an alternative method for the identification of oscillatory genes in microarray time series gene expression data. This approach uses both autoregressive models and the MaxEnt approach to single out the actual periodic genes existing in a microarray data set. We have also used a dynamics-based distance between two temporal expression profiles to compare the dynamics between two replicates. We found that the dynamic features extracted by the modeling procedure of the core clock genes are a signature through biological replicates. For a fixed amount of data points, for example 12 data points as a typical time series, our results suggest the following prescriptions: (i) at high noise level, to record two periods (48 h time series length) sampled every 4 h; (ii) at low noise level, to record one period (24 h time series length) sampled every 2 h. A notable issue with the regulation of cyanobacterial metabolism is the circadian rhythm of many processes. In conclusion, we propose that our procedure is a promising statistical tool for finding oscillatory expressed genes of any period in a microarray data set. The code of our procedure is freely available, as a Mathematica script, upon request. We also used the MaxEnt approach and information theory (IT): following the central tenets of the Maximum Entropy Principle (MEP), i.e., choosing the least biased a priori distribution, the idea is to interpret the data set probabilistically. After modeling we will focus our attention on the model-based distance between two dynamics. From the information theory viewpoint, the amount of uncertainty of a probability distribution is measured by its entropy. Associated with the entropy function there is a divergence measure, also known as the Kullback-Leibler distance. In identifying periodic genes, the analysis of significance becomes mandatory after AC estimation. 
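The significance analysis can be illustrated with a generic sketch. Note that the code below uses a simple periodogram-peak permutation test as a stand-in for the authors' AR/MaxEnt procedure, and the function name is ours: shuffling the time ordering destroys any periodic structure while preserving the value distribution, which gives a null reference for the observed spectral concentration.

```python
import numpy as np

def periodicity_pvalue(x, n_perm=2000, seed=0):
    """Permutation test for periodicity: is the dominant periodogram peak
    of x larger than expected under the null of no periodic structure?"""
    rng = np.random.default_rng(seed)
    def peak_fraction(y):
        power = np.abs(np.fft.rfft(y - y.mean()))[1:] ** 2   # drop DC term
        return power.max() / power.sum()
    observed = peak_fraction(x)
    exceed = sum(peak_fraction(rng.permutation(x)) >= observed
                 for _ in range(n_perm))
    return (1 + exceed) / (1 + n_perm)   # add-one smoothed p-value
```

For example, 12 samples taken every 4 h over 48 h of a clean sinusoid with a 24 h period yield a small p-value, whereas white noise does not.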
The usual approach consists in specifying a well-defined null hypothesis. The second step is to compute the AC of this process from the original data. Finally, we test the null hypothesis against the observations. In this article, our null hypothesis is that the time series corresponding to a given profile is not periodic, i.e., that the associated AC do not show a periodic signature. Cyanobacterium Synechocystis chromosomal genes were measured simultaneously, over two circadian cycle periods (48 h), at 4 h intervals. RNA samples were isolated from two independent cyanobacterial cultures. Each RNA sample was used for three independent microarray experiments. Thus, a maximum of six data points per gene was obtained for each time point. Each biological replicate was treated independently with the same procedure until the final step of the cycling genes' characterization of their rhythmicity and phase. Spots meeting any of the following criteria were flagged and not used for the analysis: (i) the GenePix Pro did not find the spot area automatically, (ii) the net signal intensity was below the threshold used by Kucho et al. As simulated data we constructed four synthetic time series, corresponding to different oscillatory patterns, that are 96 h long and were sampled every 4 h (see Figure S1). Figure S1: Four synthetic oscillatory time series used to simulate expression profiles. A (dark gray down-triangle) is a saw-teeth signal, B (grey up-triangle) is a square-step signal, C (light grey circle) and D (black square) are sinusoidal time series. A, B and D have a 24 h period, while C has a 48 h period. Each signal was contaminated with 10 realizations of additive noise to generate 10 noisy time series, which were used to test the performance of the method to identify oscillatory behavior in short and noisy time series. (EPS) Table S1: All oscillatory genes found. 
164 genes exhibiting cycling behavior in both experiments, and their functional categories according to KEGG (http://genome.kazusa.or.jp/cyanobase/Synechocystis/genes/category.txt). (PDF)"}
+{"text": "Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method\u2019s capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method\u2019s ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors\u2019 website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/. Gene expression time course experiments have been used to investigate fundamental biological processes which are often dynamic in nature. 
For example, the cell cycle and cell signalling have been studied with such experiments. Many computational approaches to time-series analysis are not always well suited to gene expression data, where missing measurements are common and time points may not be spaced regularly. In many conventional time-series models, such as state-space models, there is an implicit assumption of regularly spaced samples. Furthermore, existing time-series models do not necessarily capture the structure of gene expression data. Many gene expression time-series are performed with multiple biological replicates: the crude method of simply averaging the replicates may be discarding interesting information. It is also unclear what to do when the replicates are not sampled at the same times. There is a need for a temporal model which deals with the replicate structure. Our proposed model is based upon two important ideas: Gaussian process (GP) regression allows for parsimonious temporal inference, whilst a hierarchical structure accounts for covariance between biological replicates. Additional layers can be added to the hierarchy to model more structure in the data. For example, in a data fusion application, a layer of hierarchy can be used to account for differences between gene expression measurement platforms; or in a clustering application, a hierarchical layer can be added to account for temporal covariance of genes within a cluster. We propose hierarchical Gaussian processes to deal with structure in the data. We provide an introduction to the idea, deriving a novel covariance function which accounts for structure. The idea is simple to implement yet highly effective as we demonstrate on several problems. A hierarchical GP model could easily be integrated with existing GP-based applications, allowing them to properly account for replicate structure. GPs have been successfully applied to the analysis of gene expression time series by several authors. 
In a further contribution, we manipulate the marginal likelihood expression for the hierarchical GP model for the case where each part of the structure is sampled at the same time, leading to an expression with reduced computational complexity. This situation is most likely to occur during clustering of genes, which must all be measured simultaneously using high throughput methods. Short time series are prevalent in gene-expression data sets, and our GP-based model is well suited to them. Unlike other time-series based approaches, GPs are not restricted to data which has been sampled at evenly spaced time points. The model therefore removes any restriction on temporal sampling: it can be totally irregular and differ between replicates. This also allows our method to deal with both randomly and systematically missing data. We show how the model can be used for data fusion where the temporal sampling differs between experiments. Hierarchical models are an important idea in Bayesian statistics, allowing structured data to be modelled naturally. An alternative to our method has been proposed in which the replicates are assumed to be shifted in the time of data collection, and the time-shift in each replicate is estimated. In our model, the times are assumed correct whilst the shift is assumed to occur in the expression. Our model has significant computational advantages, since we can marginalise the shifts in expression analytically under the GP framework, whilst the alternative method requires explicit estimation of each shift. Clustering expression data while modelling within-cluster variance is one of the primary applications of our model. Gaussian processes (GPs) have been used extensively in a variety of regression problems, and have been applied to gene expression time-series by several authors. We briefly review GP regression here: starting with a prior directly over functions, we update the distribution in light of observed data, moving to a posterior distribution. Using standard results for Gaussian distributions, regression involves only some simple linear algebra. 
To perform regression using GPs, we adopt a Bayesian approach. The GP prior is fully specified by two functions, a mean function m(t) and a covariance function k(t,t\u2032). For practicality, a zero-mean function is often assumed; throughout this work, the squared-exponential covariance function will be used: k(t,t\u2032)\u2009=\u2009\u03b1 exp{\u2212\u03b3(t\u2212t\u2032)2}. The parameters control the amplitude (\u03b1) and relative length-scale (\u03b3) of the functions. Under this covariance, points which are close in time are highly correlated, whilst points which are distant (t\u2212t\u2032 is large) are less correlated. Our choice of covariance function represents a prior belief that the underlying functions are smooth. Other covariance functions could be selected using a model-selection procedure. Regression can be performed by using the marginal and conditional properties of multivariate Gaussian distributions. Supposing we have observations f of a function at times t, and wish to predict the values of that function at times t\u22c6, which we denote f\u22c6: the joint probability of f and f\u22c6 is Gaussian, with zero mean and a covariance matrix built from the blocks Kt,t, Kt,t\u22c6, Kt\u22c6,t and Kt\u22c6,t\u22c6, where the covariance matrix Kt,t has elements derived from the covariance function k, such that the (i,j)th element of Kt,t is given by k(ti,tj). Consistency of the GP means that it is not necessary to consider the values of the function where we do not have data: these values are trivially marginalised. To perform regression, the conditional property of the multivariate Gaussian gives: p(f\u22c6|f)\u2009=\u2009N(Kt\u22c6,tKt,t\u22121f, Kt\u22c6,t\u22c6\u2009\u2212\u2009Kt\u22c6,tKt,t\u22121Kt,t\u22c6). (3) In practice we observe a measurement vector y which is a noise corrupted version of f. Assuming Gaussian noise it is possible to write p(y|f)\u2009=\u2009N(f, \u03b2I), where \u03b2 is the variance of the noise and I the appropriately sized identity matrix, and then marginalise the variable f. 
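The conditional identity (3), with the noise term folded into the training covariance, amounts to a few lines of linear algebra; a minimal sketch (function names are ours, squared-exponential covariance as in the text):

```python
import numpy as np

def sq_exp(t1, t2, alpha, gamma):
    """k(t, t') = alpha * exp(-gamma * (t - t')^2)."""
    return alpha * np.exp(-gamma * (t1[:, None] - t2[None, :]) ** 2)

def gp_predict(t, y, t_star, alpha=1.0, gamma=1.0, beta=0.1):
    """Posterior mean and covariance of f(t_star), given noisy observations
    y = f(t) + noise of variance beta, via the conditional Gaussian identity."""
    K = sq_exp(t, t, alpha, gamma) + beta * np.eye(len(t))
    Ks = sq_exp(t_star, t, alpha, gamma)
    Kss = sq_exp(t_star, t_star, alpha, gamma)
    mean = Ks @ np.linalg.solve(K, y)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```

The hyper-parameters \u03b1, \u03b2 and \u03b3 are fixed here for illustration; in the approach described in the text they are optimised against the marginal likelihood.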
In practice, we are presented with a measurement vector y, and regression follows from the conditional property similarly to (3). Equivalently, one can consider y to be observations of the Gaussian process y(t), whose mean function is the Gaussian process f(t), and whose covariance function includes the additive noise term. Gaussian process regression is a Bayesian method: we move from a prior over functions to a posterior, and a significant attraction of the method is that this occurs in closed form as (3). However, we must still deal with the hyper-parameters of the covariance function. Here, we make use of the usual technique, type-II maximum likelihood: collecting the hyper-parameters \u03b1, \u03b2 and \u03b3 into a vector \u03b8, we use gradient methods to optimise p(y|\u03b8) with respect to \u03b8, which depends on \u03b8 through the covariance matrix Kt,t. Gene expression time-series may be collected in multiple replicates, to account for biological variation. The idea is that there exists some common trend, present in all replicates, which we wish to identify, and the measurements made of each replicate vary due to biological differences as well as technical noise. We shall use the notation ynr to denote the vector of measurements of gene expression of the nth gene in the rth biological replicate; these measurements were made at times which we collect into a vector tnr, and the data for the nth gene are denoted Yn. Our proposed methodology mimics the structure of the data, directly modelling the underlying time-series as well as the biological variation, and accounting for (uncorrelated) measurement noise. First consider a time-series model of a single gene. To combine replicates of a particular gene's time-series, we use a Bayesian hierarchical approach: the underlying expression profile of the nth gene, gn(t), is presumed to be drawn from a zero-mean GP with covariance kg, whilst the expression profile of a particular replicate, fnr(t), is drawn from a GP whose mean is gn(t). 
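The joint covariance implied by this two-layer hierarchy can be assembled directly: observations within a replicate have covariance kg + kf (plus noise), whilst observations in different replicates share only kg. The sketch below is our own illustration, assuming squared-exponential covariances for both layers; the function names and parameter values are not from the paper.

```python
import numpy as np

def sq_exp(t, tp, alpha, gamma):
    """Squared-exponential covariance evaluated on two time vectors."""
    d = np.asarray(t)[:, None] - np.asarray(tp)[None, :]
    return alpha * np.exp(-gamma * d ** 2)

def replicate_covariance(times, theta_g, theta_f, beta):
    """Joint covariance of the stacked replicate observations of one gene.
    times: one time vector per replicate; theta_g / theta_f: (alpha, gamma)
    for k_g and k_f; beta: noise variance. Within a replicate the covariance
    is k_g + k_f + noise; across replicates it is k_g alone."""
    R = len(times)
    blocks = [[None] * R for _ in range(R)]
    for r in range(R):
        for s in range(R):
            K = sq_exp(times[r], times[s], *theta_g)  # shared gene-level term
            if r == s:                                # same replicate: add k_f + noise
                K = K + sq_exp(times[r], times[s], *theta_f) \
                      + beta * np.eye(len(times[r]))
            blocks[r][s] = K
    return np.block(blocks)

# Two replicates observed on different, irregular time grids.
times = [np.array([0.0, 2.0, 4.0]), np.array([1.0, 3.0])]
Sigma = replicate_covariance(times, (1.0, 0.5), (0.3, 0.5), 0.05)
```

Because the replicates need not share a time grid, this construction also covers the irregular-sampling setting emphasised earlier.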
The two covariance functions kg and kf may in general be different: we have used the squared exponential function for both, with independent parameters. This simple model is illustrated in the corresponding figure. The elegance of the hierarchical approach lies in its linearity: it is simple to show that two points on the function fnr(t) are jointly Gaussian distributed with zero mean and covariance kg + kf, whilst two points in separate replicates are jointly distributed with covariance kg. Thus, given a set of Nn replicates of gene expression time-series for a particular gene, the data Yn are jointly Gaussian with a covariance matrix \u03a3n, where \u03b8 represents the parameters of the covariance functions kg and kf, and the block of \u03a3n corresponding to replicate r contains both covariance terms. In order to make inferences about the functions gn(t) and fnr(t), the covariances between the data Yn and the functions are required; using the superscripted ynr(t) to denote the element of ynr observed at time t, these follow from the same covariance functions. Inferences about the functions can then be made using the standard methods described above, and the hyper-parameters of the covariance functions can be optimised. Fitting a hierarchical model to a set of replicates can be used as a diagnostic tool: by examining the maximum-likelihood values of the covariance function parameters, we can assess how noisy the experiment is, and how similar the replicates are. In the corresponding figure, the first pane of each row shows the inferred underlying function gn(t), and subsequent panes show the inferred functions for each replicate, fnr(t). These examples demonstrate different behaviours of the time-series which are captured by the model. For the first example, the model attributes 87% of the data's variation to the underlying function gn(t), and only 6% each to replicate variance and noise, i.e. the model 'recognised' the similarity of the replicates. 
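The variance decompositions quoted here (e.g. 87%/6%/6%) follow from comparing the fitted amplitude hyper-parameters. A hypothetical sketch of this diagnostic, and of the filtering ratio used below to select the example genes, is given here; the function and argument names are ours, not the authors'.

```python
def variance_fractions(alpha_g, alpha_f, beta):
    """Fraction of the data's variance attributed to the shared function
    g_n(t), the replicate-level functions f_nr(t), and noise, computed from
    the fitted signal amplitudes (alpha_g, alpha_f) and noise variance beta."""
    total = alpha_g + alpha_f + beta
    return alpha_g / total, alpha_f / total, beta / total

def replicate_agreement(alpha_g, alpha_f, beta):
    """Ranking score used for filtering genes: signal variance over
    replicate-plus-noise variance. High values indicate similar replicates."""
    return alpha_g / (alpha_f + beta)

# Example: a gene whose fitted amplitudes put ~87% of variance in g_n(t).
g_frac, f_frac, noise_frac = variance_fractions(8.7, 0.6, 0.7)
```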
For the first gene, CG18135, the replicates are quite similar, and most deviation of the data from gn(t) is attributed to noise. The gene represented by the middle row is AP-47, and it can be seen that there is considerable replicate variance: although each replicate follows a similar pattern, the pattern is 'amplified' differently in the replicates. Here, the model attributes 60% of the data's variance to the function gn(t) and 34% to fnr(t), with 6% to noise. For the gene represented by the bottom row of the figure, the proportions attributed to gn(t), fnr(t) and noise are 55%, 36% and 8% respectively. The model recognises the differences in the replicates, but uses a long length scale for fnr(t): the detailed pattern of the time-series is captured entirely by gn(t), and fnr(t) is used to account for amplitude shifts between replicates. Note that these cannot be simply 'normalised out' because not all replicates cover the same temporal region. These genes were selected using a simple filtering procedure: the model was fitted independently to each gene on a microarray, and the genes were ranked according to the ratio of signal variance (a hyperparameter of kg) and replicate-plus-noise variance (hyperparameters from kf). In many cases, gene expression time-series may have more structure than simply biological replicates. For example, we could incorporate previous studies in a hierarchical fashion. In general, suppose that there is some underlying function gn(t) which models the general gene expression activity for the nth gene. Subsequently, we define the functions eni(t) for each experiment which we want to model, and finally fnir(t) for the rth replicate in the ith experiment. With every layer of the hierarchy, we have introduced new parameters corresponding to the covariance function for that layer. Note that the hierarchy can be extended arbitrarily to represent the structure of the data. 
For example, we might want to model biological variation where the lineage is known, or to model inter-species variation, or to build a hierarchy which reflects the phylogenetic relationship between species. Clustering of gene expression time-series is often performed with a view to finding groups of co-regulated or associated genes. The central assumption is that genes which are involved in the same biological processes will be expressed together: they share an underlying time-series. In order to model a group of genes as defined by a cluster, the hierarchical model is extended to a three-layer hierarchy across the cluster, individual genes and replicates. All genes in the ith cluster are presumed to share an underlying profile hi(t); subsequently each gene follows a profile gn(t), and each replicate of that gene follows a profile fnr(t). The mean of each level in the hierarchy is given by the level above, so the data Yi in cluster i are jointly Gaussian, with a covariance matrix \u03a3i structured such that the block corresponding to two distinct genes n and n\u2032 is given by the cluster-level covariance. The diagonal blocks of \u03a3i are themselves block-structured, reflecting the double hierarchy in the model. The computational complexity of this model grows cubically as the size of the cluster increases, which is an undesirable property. To reduce the computational load, it is possible to exploit a known property of the data: in each array all genes are measured simultaneously, although we allow different times for each replicate. Denote by Kh the covariance matrix formed by evaluating kh on the grid of observed times, and by \u03a3n a covariance matrix structured as (8), modelling the variance of a single gene; the marginal likelihood can then be written in a form whose cost is governed by D, the number of distinct time points, rather than by Ni, the number of genes in the cluster (see appendix for a derivation). 
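Under this structure, the cluster covariance and the Gaussian log marginal likelihood can be sketched as follows. This is the naive construction, cubic in the total number of observations (the efficient form mentioned above is derived in the appendix); the names and parameter values are illustrative, and for brevity each gene has a single replicate observed on a common time grid t.

```python
import numpy as np

def sq_exp(t, alpha, gamma):
    d = t[:, None] - t[None, :]
    return alpha * np.exp(-gamma * d ** 2)

def cluster_covariance(t, n_genes, th_h, th_g, th_f, beta):
    """Three-layer covariance: off-diagonal gene blocks contain only the
    cluster-level K_h; diagonal blocks add the gene- and replicate-level
    terms plus observation noise."""
    D = len(t)
    Kh = sq_exp(t, *th_h)
    Sn = sq_exp(t, *th_g) + sq_exp(t, *th_f) + beta * np.eye(D)
    Sigma = np.tile(Kh, (n_genes, n_genes))
    for n in range(n_genes):
        Sigma[n * D:(n + 1) * D, n * D:(n + 1) * D] += Sn
    return Sigma

def log_marginal_likelihood(y, Sigma):
    """log N(y | 0, Sigma), evaluated via a Cholesky factorisation."""
    L = np.linalg.cholesky(Sigma)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ a - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2 * np.pi))

t = np.linspace(0, 10, 4)
Sigma = cluster_covariance(t, 3, (1.0, 0.1), (0.5, 0.1), (0.2, 0.1), 0.05)
```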
This expression reduces the computational complexity of the model from cubic in the total number of observations to a cost governed by the number of distinct time points. An example of this model is shown in the corresponding figure, where the cluster-level function h(t), shown in the bottom-left pane, has a single wide peak at around 15 hours; all of the functions gn(t) (leftmost column) show a similar pattern, though the functions are each 'distorted' a little, with the width of the peak varying from gene to gene. Similarly, each replicate shows a similar pattern to the mean function for the corresponding gene, with smaller variations. The bottom row shows the predictive density for a new gene within the cluster. To use our model for clustering, the partitioning of genes into clusters needs to be inferred. Dunson proposed a related approachb; Heller and Ghahramani showed how clustering can be driven by greedy maximisation of a marginal likelihood, and Cooke et al. applied such an approach to gene expression data. The algorithm is depicted in Algorithm 1, and works in a 'bottom-up' fashion: starting with each gene in an individual cluster, the clusters are merged by greedily selecting the merge which maximises the log marginal likelihood of the data. Once no more merges are available to improve the overall marginal likelihood, the hyper-parameters are optimised, and the procedure is repeated with the new covariance function in an EM fashion. The data used below come from a recent study of Drosophila development in which gene expression was measured in replicate over time. For all the missing data experiments, the covariance function hyper-parameters were set to the maximum likelihood value using gradient-based numerical optimisation. Whilst we show that the hierarchical GP has better performance than the GP in all cases, it does not require any extra computation; all experiments took only a few minutes on a desktop PC. We remove data systematically, effectively removing entire microarrays from the experiment and predicting what was on them. Most missing data imputation methods cannot handle this type of missing data, highlighting an advantage of our method. 
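The bottom-up merging of Algorithm 1 can be sketched generically, with the GP marginal likelihood abstracted behind a scoring function. The toy scorer below (a squared-error fit with a fixed per-cluster cost) merely stands in for the hierarchical GP log marginal likelihood; all names and values are illustrative.

```python
def greedy_merge(items, log_ml):
    """Bottom-up greedy clustering: start with singleton clusters and
    repeatedly apply the merge that most increases the summed score
    log_ml(cluster); stop when no merge gives an improvement."""
    clusters = [(x,) for x in items]
    while len(clusters) > 1:
        best, best_gain = None, 0.0
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                merged = clusters[a] + clusters[b]
                gain = log_ml(merged) - log_ml(clusters[a]) - log_ml(clusters[b])
                if gain > best_gain:
                    best, best_gain = (a, b), gain
        if best is None:   # no merge improves the overall score
            break
        a, b = best
        merged = clusters[a] + clusters[b]
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]
    return clusters

# Toy stand-in for the log marginal likelihood: reward tight clusters,
# charge a fixed cost per cluster.
def toy_log_ml(c):
    mean = sum(c) / len(c)
    return -10.0 * sum((x - mean) ** 2 for x in c) - 1.0

result = greedy_merge([0.0, 0.1, 5.0, 5.1], toy_log_ml)
```

In the full algorithm this greedy pass alternates with hyper-parameter optimisation in an EM fashion, as described above.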
This experiment also validates our assertion that it is important to include the replicate structure in modelling microarray time-series, and that simply averaging the data on a time-point basis is not satisfactory. The imputation of missing data is a straightforward method for validation of our model; in this Section, we remove data and attempt to impute them. (Algorithm 1: cluster replicated gene expression time-series, summarises the clustering procedure described above.) Whilst systematically missing data are not common in the laboratory, this test does examine the performance of our model in some potentially interesting applications. For example, we may wish to predict the future gene expression levels of a patient given the time series observed in other patients. The results of imputing missing data are compared with the simple but oft-used technique of averaging the replicates, using both the mean and median of the non-missing replicates. The method is also compared with a simple GP model which does not account for replicate structure. We investigated the effectiveness of our algorithm using varying amounts of missing data, removing 1, 5, 10 and then 20 of the 56 microarrays at random. Each experiment was repeated 10 times with different randomisations; for each we computed the RMSE (root mean square error) averaged over all missing arrays and over all genes. The mean RMSE and two standard deviations as measured over the randomisations are shown in the corresponding table. Our proposed model is novel in the sense that it can impute entire missing arrays, as above. Most missing data algorithms assume randomly missing data and use correlation between genes for imputation. To compare our algorithm with those from the literature, we randomly removed 100 values from the Melanogaster dataset and measured the error on imputation. For comparison, we also used two popular methods, K-nearest-neighbour (KNN) and Bayesian PCA. Gene expression experiments usually contain many types of effect aside from the one under study. 
In this case, the data include cross-sectional effects which arise from array-specific and sample-specific causes, and are not due to the underlying time-series. These are treated as noise by our model, whereas PCA and KNN make no distinction between longitudinal and cross-sectional variance and will freely impute these effects; this is illustrated in the corresponding figure. To test the methods' abilities to impute the replicable part of the signal, we tested the imputed values of the three methods against the median value for the missing time-point, averaged across replicates. We measured the RMSE over the 100 imputations, and repeated the experiment 10 times with different randomisations. The mean RMSE (over randomisations) and two standard deviations are shown in the first column of the corresponding table. Another way to investigate the ability of the model to deal with missing data is to examine the difference between the model as inferred with some missing points and that inferred with all the data; the results of doing so are shown in the second column of the table. The model is largely insensitive to missing data: because it can borrow statistical strength from other replicates, small amounts of data missing at random make little difference to the model. The data under investigation are sampled at two-hour intervals. To improve our knowledge of the system, it is possible to perform data fusion with existing data sets; we demonstrate this with two previous studies. We constructed a hierarchical model across the three experiments, and across replicates within the experiments. The data were considered on a gene-by-gene basis, and the model was optimised for each gene. An example gene (Acer) is shown in the corresponding figure. In order to investigate the usefulness of our model in a clustering task, we first selected 300 dynamically differentially expressed genes using a previously described filtering method. 
We computed the marginal likelihood using our hierarchical model and a simpler GP model without replicate-specific or gene-specific variance; this model, which simply fits a GP to the lumped data, is similar to a previously proposed method. For further comparison, we used the R package Mclust. In order to validate the different clusterings, we use the biological homogeneity index (BHI), which assesses how biologically homogeneous the clusters are with respect to functional annotation. From the corresponding table, it can be seen that the HGP method improves the biological homogeneity for all three GO categories. By directly comparing with the standard GP method, we have demonstrated that the improvement in clustering performance is not due simply to the clustering methodology or the GP correlations, which give the GP method a small improvement over Mclust, but to the hierarchical structure of intra-cluster variance, which allows genes and replicates to differ in a temporally-correlated fashion. We have presented a method based on hierarchically structured GPs, which are a practical and flexible framework for modelling replicated time-series. The framework has a wide range of applications, and can be extended to various data structures besides biological replications. The method performed well in several tasks, including missing data imputation and clustering. We have shown that the method performs particularly well in missing data imputation, and that small amounts of randomly missing data have only a minor effect on the model. Biological validation through the BHI confirms the importance of modelling intra-cluster variance in a hierarchical fashion. Above we showed how fitting the simplest of our proposed models can lead to a quantitative assessment of how biological replications are behaving, as well as illustrating how our method deals with different types of replicate variation. Of course, if the replicate variance is truly independent, e.g. if only technical variation is present, then we recover standard GP regression. 
In this case the hierarchical approach requires the inclusion of an extra parameter, but we find that the additional computational complexity is negligible. A problem with standard GP regression is that the computational complexity grows cubically with the number of data. We have presented a method which exploits the condition that all genes in a cluster are measured at the same time-points in order to significantly reduce the computational complexity and make our clustering algorithm applicable to large data sets. We note that the complexity of the clustering algorithm scales poorly with the number of genes: the initial step of the algorithm must compare the likelihood of merging every pair. To address this, randomised versions of the same algorithm can be adopted. Whilst we have demonstrated that our model is useful in several applications, we envisage a number of extensions. For example, our model could be used for data fusion of microarray data with high-throughput sequencing data, or the hierarchical structure could be used in models of pathway activity. Although we have only used simple GP models within our hierarchical structure, the idea can be applied to more complex GP models, such as those proposed to model gene interactions. a Other noise distributions are possible, but break the conjugacy of the model and thus complicate inference. b Note that hierarchical in this sense means a hierarchical partitioning of the genes, distinct from our Bayesian hierarchical model applied within the cluster. The expression for the marginal likelihood of a cluster of genes as given in equation (15) can be derived by considering the values of the underlying function h(t) at the time points t, which we denote h; the model (for a single cluster) can then be written as a prior over h together with a likelihood for the data associated with each gene in the cluster, conditioned on this latent variable. 
The objective here is to marginalise (integrate out) the latent variable to achieve a tractable expression. Expanding equation (16), performing some re-arrangement and completing the square inside the exponent (where we have defined \u039b for brevity), and substituting \u039b back into (18), leads to the expression given in (15). The authors declare that they have no competing interests. JH designed the studies, implemented the algorithms in python and drafted the manuscript. MR and NDL assisted in design and analysis of the experiments. All authors read and approved the final manuscript."}
+{"text": "Modeling dynamic regulatory networks is a major challenge since much of the protein-DNA interaction data available is static. The Dynamic Regulatory Events Miner (DREM) uses a Hidden Markov Model-based approach to integrate this static interaction data with time series gene expression, leading to models that can determine when transcription factors (TFs) activate genes and what genes they regulate. DREM has been used successfully in diverse areas of biological research. However, several issues were not addressed by the original version. DREM 2.0 is a comprehensive software package for reconstructing dynamic regulatory networks that supports interactive graphical or batch mode. Version 2.0 introduces a set of new features that are unique in comparison with other software packages. First, we provide static interaction data for additional species. Second, DREM 2.0 now accepts continuous binding values, and we added a new method to utilize TF expression levels when searching for dynamic models. Third, we added support for discriminative motif discovery, which is particularly powerful for species with limited experimental interaction data. Finally, we improved the visualization to support the new features. Combined, these changes improve the ability of DREM 2.0 to accurately recover dynamic regulatory networks and make it much easier to use for analyzing such networks in several species with varying degrees of interaction information. DREM 2.0 provides a unique framework for constructing and visualizing dynamic regulatory networks. DREM 2.0 can be downloaded from: www.sb.cs.cmu.edu/drem. Modeling gene regulatory networks (GRNs) is a key challenge when studying development and disease progression. These networks are dynamic, with different (overlapping) sets of transcription factors activating genes at different points in time or developmental stages. 
Reconstructing the dynamics of these networks is a non-trivial task that requires the integration of datasets from different types of genome-wide assays. Several methods have been proposed for reconstructing GRNs. While several studies measure time series expression data, the available protein-DNA interaction data is almost always static (either from sequence motifs or from ChIP-chip or ChIP-Seq experiments). This creates a major computational challenge when attempting to integrate these dynamic and static datasets. Several methods have been suggested for clustering time series expression data. A number of methods have also been suggested for addressing these integration issues, though most of them were targeted at specific input datasets and did not offer any software to support their general use; for example, one method was developed specifically for expression data from C. elegans embryos. A different way of formulating the problem is to decompose the gene expression data into TF activity and TF affinity values for each expressed gene, as suggested by Network Component Analysis. One of the first approaches to construct networks that change over time while still incorporating the ordering of time series data used dynamic Bayesian networks. To provide a general method that can be widely applied to reconstructing dynamic regulatory networks, DREM was presented and applied to E. coli as well as to studies of development, stem cells and disease. While DREM has been successfully used for multiple species, so far each group using it had to obtain its own protein-DNA interaction data. Since such data is often dispersed among several databases, websites and publications, this step was a major hurdle to using DREM. Other features not supported in the original DREM version included the integration of motif discovery, the ability to utilize dynamic ChIP binding data, and the use of TF expression levels. DREM 2.0 is implemented entirely in Java and will work with any operating system supporting Java 1.5 or later. 
Portions of the interface of DREM 2.0 are implemented using third party libraries, including the Java Piccolo toolkit from the University of Maryland. The underlying Input-Output Hidden Markov Model (IOHMM) learning can now accommodate dynamic input data for each time point in the following way: the transition probabilities for the IOHMM are derived from a logistic regression classifier that uses the protein-DNA interaction data as supervised input and utilizes them to classify genes into diverging paths at a split node in the model. In the new version the nodes in the input layer can be dynamic, and thus the function can depend on input from the specific time point it is associated with (see the corresponding figure). Users input their time series expression data by using the graphical user interface (GUI). As a case study, time series data for asbestos treatment of human lung cancer cells was downloaded, and log2 ratios of exposed versus control were computed as input to DREM 2.0. We supply human binding predictions and have added additional high throughput protein-DNA interaction datasets for human as well. With these additions DREM 2.0 now supports most of the well-studied organisms, facilitating much wider use of the method (see the corresponding table). DREM 2.0 utilizes time series expression data and static interaction data, which is often condition-independent; the original version of DREM only supported binary interaction values. The original version of DREM also did not use any information regarding the expression levels of the TFs predicted to regulate split nodes. The underlying reason for this was the fact that many TFs are post-transcriptionally regulated, and relying on their expression to determine activity may lead to missing important TFs. In the new version, we still maintain the ability to identify TFs that are only post-transcriptionally regulated. However, we have added a new computational module that allows the method to utilize expression information for those TFs that are transcriptionally regulated. 
For each TF, its binding prior is elevated based on the TF's expression level using a logistic function. Thus, active TFs have a stronger prior of being selected as regulators. During learning, DREM assigns genes to paths in the network model at split nodes, and the genes assigned to each path can be used by DREM 2.0 to discriminatively search for motifs. The predicted DNA motifs can be matched against known motif databases using STAMP. As an example, a motif matching the TCF12 gene, which was recently implicated in lung cancer patients, was recovered; after grouping TFs from the same family, 10 of the 24 TFs identified in the original run of DREM for this split were found in the DECOD-derived set. DREM 2.0 now supports continuous binding values, which can be derived from experimental data or from computational binding predictions. In addition, DREM 2.0 also supports temporal binding data. While most interaction data is still static, dynamic binding data is becoming available: recent studies have shown that TFs may alter their binding behavior depending on the time point. The ability to incorporate temporal binding data allows DREM to reduce false positive assignments by only assigning TFs that are active at that time point (based on that time point's binding data). This in turn can both help identify co-regulators for which only computational predictions exist and also lead to the identification of different waves of transcriptional regulation, where the same TFs activate different sets of genes at different time points. We also ran DREM 2.0 without using the static protein-DNA interaction information, recovering paths enriched for GO categories such as cellular response to stress and positive regulation of cell death; this is similar to several clustering methods that have been suggested for time series data (corresponding figure, panel B). While several methods can be used to reconstruct GRNs using time series expression data, most such methods either rely only on the expression data itself or result in static networks that do not consider the ordering of the time points. 
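The logistic elevation of binding priors described above can be sketched as follows. This is our own illustrative reading, not DREM 2.0's actual code: the function and parameter names (base_prior, midpoint, steepness) are hypothetical, and the real scaling is described in the Supplementary Methods.

```python
import math

def scaled_binding_prior(base_prior, tf_expression, midpoint=0.0, steepness=1.0):
    """Elevate a TF's binding prior with a logistic function of its expression
    level: lowly expressed TFs keep roughly their base prior, while highly
    expressed TFs approach a prior of 1 (hypothetical parameterisation)."""
    weight = 1.0 / (1.0 + math.exp(-steepness * (tf_expression - midpoint)))
    return base_prior + (1.0 - base_prior) * weight
```

The monotone mapping keeps the prior within [base_prior, 1), so a TF is never penalised below its interaction-data-derived prior, matching the stated goal of not missing post-transcriptionally regulated TFs.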
DREM provides not only an alternative to these methods but also a rich GUI, and as such it has been used by several groups in multiple species. Although here we used both treatment and control time series, DREM can also be used with only the treatment time series by taking the log fold change with respect to time point 0. The new version eases the application to several species by directly supplying protein-DNA interaction data and incorporating de-novo discriminative motif discovery. In addition we have made other improvements, including the ability to utilize and view the expression levels of the TFs and to use dynamic protein-DNA interaction data. Combined, we believe that these improvements will make DREM 2.0 a more widely used software package for the reconstruction of dynamic GRNs.
· Project name: DREM
· Project homepage: www.sb.cs.cmu.edu/drem
· Operating system(s): Platform independent
· Other requirements: Java 1.5 or higher
· License: Free to academics/non-profit
· Any restrictions to use by non-academics: License needed
Abbreviations: DREM, Dynamic Regulatory Events Miner; TF, Transcription factor; GRN, Gene regulatory network; DBN, Dynamic Bayesian network; ChIP, Chromatin immunoprecipitation; IOHMM, Input-output hidden Markov model; GUI, Graphical user interface; GO, Gene Ontology; MGD, Mouse Genome Database; HGNC, HUGO Gene Nomenclature Committee; RNA-Seq, Next generation sequencing of messenger RNAs. The authors declare that they have no competing interests. MHS, WED, AG, SZ designed and implemented the new version. MHS, AG, SZ, JE performed the data collection and analysis. ZBJ supervised the work. MHS and ZBJ wrote the manuscript. All authors read and approved the final manuscript. Marcel H. Schulz and William E. Devanny are joint first authors. Work supported in part by NIH grant 1RO1 GM085022. DREM 2.0 Manual. 
The Manual for using the DREM 2.0 software, with details of all parameters and the different dialogs in the GUI. Supplementary Methods: additional description for DREM 2.0 of TF expression level scaling, data collection for the protein-DNA binding data sets, and the analysis with DECOD on an unannotated split node."}
+{"text": "We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN, which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a sequential list-version of the subspace pursuit reconstruction algorithm, and to estimate the direction of gene interactions via Granger-type elimination. The method is conceptually simple and computationally efficient, and it allows for dealing with noisy measurements. Its performance as a stand-alone platform without biological side-information was tested on simulated networks, on the synthetic IRMA network in Saccharomyces cerevisiae, and on data pertaining to the human HeLa cell network and the SOS network in E. coli. The results produced by CaSPIAN are compared to the results of several related algorithms, demonstrating significant improvements in inference accuracy of documented interactions. These findings highlight the importance of Granger causality techniques for reducing the number of false-positives, as well as the influence of noise and sampling period on the accuracy of the estimates. In addition, the performance of the method was tested in conjunction with biological side information in the form of sparse \u201cscaffold networks\u201d, to which new edges were added using available RNA-seq or microarray data. These biological priors aid in increasing the sensitivity and precision of the algorithm in the small sample regime. Gene regulatory networks, protein-protein interaction networks, chemical signaling and metabolic networks are all governed by causal relationships between their agents that determine their functional roles. 
Discovering causal relationships through experiments is a daunting task due to the technical precision and output volumes required from the experiments, and due to the large number of interconnected and dynamically varying components of the system. It is therefore of great importance to develop a precise analytical framework for quantifying causal connections between genes in order to elucidate the gene interactome based on limited and noisy experimental data. Statistically inferred interactions may be used to guide the experimental design process, helping with further refinement of the modeling framework. One way to detect if a gene causally influences another gene is to monitor whether changes in the expression of the potential regulator gene affect the expression of the target gene in the presence of at least one additional network component. Sparsity of gene regulatory networks was exploited in a number of different inference frameworks, including transcription factor interaction analysis. An alternative to the Lasso approach is a greedy compressive sensing framework, which overcomes some of the shortcomings of Lasso while still utilizing the sparsity of the network. Compressive sensing (CS) is a dimensionality reduction technique with widespread applications in signal processing and optimization theory. We introduce the notion of causal compressive sensing and design new greedy list-reconstruction algorithms for inference of causal gene interactions; as part of the process, we generate two sparse models for each potential interaction pattern and infer causality by comparing the residual errors of the models using statistical methods. Furthermore, in CS, the most difficult task consists of finding the support (i.e. the nonzero entries) of a sparse signal. This is accomplished by inferring the subspace in which the vector of observations lies. 
As a result, the complicated process of choosing the regularization coefficient in Lasso is substituted by the more natural task of choosing a \u201cconsistency\u201d level between the vector of observations and its representation in the estimated subspace. The CS approach has not been widely used for gene regulatory network inference; to the best of the authors' knowledge, the few prior methods that employed it captured only non-causal interactions among the genes, and used CS only as a preprocessing step: the obtained CS results were combined with extensive prior biological information, and the gene interactions were inferred through supervised learning performed by AdaBoost. It is worth mentioning that AdaBoost and similar boosters are highly susceptible to random classification noise, thereby limiting their applications in biological data analysis. Motivated by recent advances in CS theory and its application in practice, we propose two causal CS inference approaches in this manuscript. In order to infer causality, we apply these approaches to two different combinations of gene expression profiles shifted in time, one of which contains a potential regulator and another which does not. Each dataset gives rise to a certain representation error, and one may infer the level of influence of genes on each other based on the differences in the representation errors and by using the F-test. At the core of our computational method is the subspace pursuit (SP) algorithm. The main finding of our analysis indicates that causal compressive sensing can infer a relatively large fraction of causal gene interactions with very small false-positive rates when applied to small and moderate size networks. This finding is supported by simulated data, synthetic data from the IRMA network, and biological data. Compressive sensing (CS) is a technique for sampling and reconstructing sparse signals, i.e. 
signals that can be represented using only a small number of nonzero coefficients in a suitable basis. Assume that one is interested in recovering such a sparse vector from a small number of linear measurements. It is known that exact recovery is possible under appropriate conditions on the sensing matrix, although verifying these conditions may be difficult in practice. We next introduce the List-SP method, a modification of the SP algorithm. For a given target gene, the restricted model excludes the expression profile of the potential regulator, while the unrestricted model includes it. Only a few algorithms using signal sparsity were successfully integrated into causal inference models. In most cases, causality and co-integration are assessed via Augmented Dickey Fuller (ADF) tests or F-tests on the residual. In a nutshell, these tests determine the significance of the change in the value of the variance of the residuals between the two models. The output of the procedure is a collection of candidate genes, which contains both genes that causally influence the target and spurious candidates. In what follows, the CS-Granger causality approach is applied on gene expression data sampled at a small number of time instances. There are two main issues to be addressed in this context: how to discover linear relationships between expression profiles that may be correlated with each other and how to adapt the sensing matrix accordingly. In order to combine CS techniques, in particular List-SP, with Granger causality, we assume that the gene expressions may be modeled via a linear regression. In addition, we assume that different realizations of the model are available through different experiments. These realizations are used to form a vector regression model for the gene expressions. More precisely, assume that one is interested in finding the directed graph corresponding to the causal relationships of a target gene; in this case, the observation vector is formed from the expression of the target gene, and the regressors are taken from previous time-points. In addition, we did not include the expression profile of the target gene itself among the regressors. Note that the known recovery conditions are sufficient conditions for recovery and may not be necessary for the algorithm to work. Our results, similar to the results pertaining to face recognition, confirm this observation. The sensing matrix is formed using the expression profiles of all the genes other than the target. List-SP infers gene interactions based on Algorithm 1. In order to identify which genes in the set of candidates causally influence the target, we use Algorithm 2. In many practical situations, some directed edges in the network are known in advance, due to extensive experimental confirmations of their existence. In such cases, one can leverage this side-information to improve the performance of CaSPIAN, especially when the number of time points available for inference is small. The resulting procedure is summarized in Algorithm 3. Note that this approach of including the given side-information may not be optimal: one may try to include the information of existing edges into the subspace selection process of the SP method directly, at each of its iterations. Unfortunately, this approach may be computationally more demanding than using pseudo-inversion followed by List-SP and will not be discussed in this paper. The gene expression profiles are noisy and sampled at only a few time points. In order to overcome these problems, we use a different method to form the sensing matrix. Two normalizations are performed on the vectors of gene expressions prior to running the algorithms. CaSPIAN has two input parameters, the sparsity level and the significance value of the statistical test. We evaluated the performance of the proposed algorithms with respect to the choice of different parameters, such as the sparsity level. In order to evaluate the effect of different parameters on the performance of the algorithms, we employed synthetic (simulated) networks. Synthetic networks have tightly controlled design parameters, such as the maximum and minimum degree, degree distribution, gene expressions' dynamics, noise level, and sampling frequency. Hence, they allow for accurate assessment of the effect of different parameters on the performance of reverse engineering methods. On the other hand, synthetic network models usually lack a number of complex features of biological networks that may be hard to model or unknown to the designer. As a result, a fair comparison of different reverse engineering methods requires using biological networks. 
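Since subspace pursuit is the workhorse of the method, a minimal sketch may help. The following is the standard SP iteration of Dai and Milenkovic on a toy noiseless problem, not the List-SP variant with list output and statistical post-processing described above:

```python
import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=30):
    """Textbook subspace pursuit: keep a running size-K support
    estimate, merge it with the K columns most correlated with the
    current residual, solve least squares on the merged set, and
    retain the K largest coefficients."""
    n = Phi.shape[1]
    support = np.argsort(np.abs(Phi.T @ y))[-K:]
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
        merged = np.union1d(support, np.argsort(np.abs(Phi.T @ resid))[-K:])
        coef_m, *_ = np.linalg.lstsq(Phi[:, merged], y, rcond=None)
        new_support = merged[np.argsort(np.abs(coef_m))[-K:]]
        if set(new_support) == set(support):
            break
        support = new_support
    x = np.zeros(n)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x[support] = coef
    return x, np.sort(support)

# noiseless toy problem: 3-sparse signal, 30 random measurements
rng = np.random.default_rng(1)
Phi = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
x_hat, support = subspace_pursuit(Phi, Phi @ x_true, K=3)
```

In the gene-network setting, the columns of `Phi` would be (normalized) lagged expression profiles and the recovered support would be the candidate regulator set.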
Consequently, we used the IRMA network in our comparisons. We also used the IRMA and human HeLa cell networks to discuss the effect of side-information on the performance of Algorithm 3. In particular, we addressed the effect of employing a known correct subnetwork on the performance of CaSPIAN; in addition, we addressed the effect of using an incorrect subnetwork on the performance of this algorithm. In our comparisons, we used four standard evaluation measures, among them \u201csensitivity\u201d and \u201cprecision\u201d. As described earlier, we used synthetic networks to evaluate the performance of CaSPIAN with respect to different parameters. The constructed synthetic networks follow a model representing a modification of the Erd\u00f6s-R\u00e9nyi model with controlled degree distributions and with additional features that allow its dynamics to converge to a steady-state. A detailed description of the model, which we subsequently refer to as the synthetic network, is given in the supporting information. We start by evaluating the performance of SP, List-SP, and CaSPIAN for different values of the input parameters (Figures S1\u2013S7 in the supporting information). We also evaluated the performance of the CaSPIAN algorithm when the number of genes is larger than the number of available time-points. Since CaSPIAN is based on a compressive sensing approach, in principle one may use this method when the number of genes is significantly larger than the number of measurements. The impediment to the direct use of CaSPIAN on large biological networks is associated with noise in the measurements and the sampling irregularities of experimental data. We evaluated CaSPIAN and related algorithms in this regime as well (Figure S9 in the supporting information). The results discussed up to this point were based on uniform sampling strategies for the synthetic network, i.e., on datasets obtained by measuring gene expressions at uniformly spaced times. 
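The "sensitivity" and "precision" measures mentioned above reduce to simple set operations on directed edge sets. A small illustration (the edge sets below are made up for the example and do not reflect the real IRMA topology):

```python
def edge_metrics(inferred, gold):
    """Sensitivity (recall) and precision of an inferred set of
    directed edges with respect to a gold-standard network."""
    inferred, gold = set(inferred), set(gold)
    tp = len(inferred & gold)                      # true positives
    sensitivity = tp / len(gold) if gold else 0.0
    precision = tp / len(inferred) if inferred else 0.0
    return sensitivity, precision

# illustrative edges over the IRMA gene names (invented topology)
gold = {("CBF1", "GAL4"), ("GAL4", "SWI5"), ("SWI5", "GAL80"), ("GAL80", "ASH1")}
found = {("CBF1", "GAL4"), ("GAL4", "SWI5"), ("ASH1", "CBF1")}
sens, prec = edge_metrics(found, gold)   # 2 of 4 recovered, 2 of 3 correct
```

A low significance value pushes the algorithm toward fewer but more reliable edges (higher precision, lower sensitivity), which is exactly the trade-off the evaluation in the text is probing.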
Since in some available datasets measurements were generated using non-uniformly spaced time-points, we examined how the performance of our greedy algorithms is affected by the sampling strategy. For this purpose, we resampled all gene expression profiles at non-uniformly spaced time-points. The construction of the underlying networks is discussed in the supporting information. In order to evaluate the performance of the proposed algorithms in the presence of noise, we considered noisy versions of the synthetic datasets (Figures S15\u2013S22 in the supporting information). The opposite behavior can be observed with respect to the precision of these algorithms: while the sensitivity decreases with increasing noise, the precision of CaSPIAN remains high. Next, we focus on the performance of these algorithms in the under-sampled regime in the presence of white Gaussian noise (Figures S23 and S24 in the supporting information). In applications where finding correct edges is more important than finding all edges, a conservative choice of the significance value is preferable; on the other hand, when both sensitivity and precision are equally important, one should use an intermediate value. These findings demonstrate that changing the significance value allows trading off sensitivity against precision. One can also take advantage of the edges recovered using different values of this parameter. As discussed at the beginning of this section, we used in vivo networks to compare the performance of CaSPIAN with other reverse-engineering algorithms. The results are discussed in what follows. We focus next on a network model that we believe to be a benchmark standard, given that it shares many features of synthetic networks \u2013 such as precise design parameters and controllability \u2013 while being part of a biological network in a living organism. The network in question is the IRMA (in vivo reverse-engineering and modeling assessment) network, a synthetic network of five genes, CBF1, GAL4, SWI5, GAL80, and ASH1, embedded in Saccharomyces cerevisiae (yeast). The IRMA network has a fixed topology, and is constructed in such a way that its constituent genes are not regulated by other yeast genes. This network was introduced in earlier work. For our analysis, we used the time-series gene expressions of \u201cswitch-on\u201d experiments, measured via quantitative real-time PCR (q-PCR) at regular time intervals; we used the average of the gene expressions of the five experiments. We first ran CaSPIAN on this dataset. As CaSPIAN is based on compressive sensing methods which allow for inference in the under-sampled regime, and as most inference problems operate in the under-sampled regime, it is instructive to test the performance of the method on a larger network. For this purpose, the performance of CaSPIAN is contrasted to that of the Lasso and truncating Lasso (Tlasso) approaches on gene expression data for E. coli from the Many Microbe Microarrays Database (M3D), focusing on the SOS network genes dinI, lexA, recA, recF, rpoD, rpoH, rpoS, ssb, and umuC/D, as described in previous work. The results for E. coli are available in Table S4 of the supporting information. To test the under-sampled regime, we randomly selected a set of genes and used only a subset of the available gene expression data; all the links found using CaSPIAN in this setting are correct links. We ran List-SP (Algorithm 1) on the same data. For Tlasso, we used the implementation available at http://www.biostat.washington.edu/~ashojaie, with the error-based method for choosing the regularization coefficient; among the detected interactions is an edge from umuC/D to lexA, whose multiplicity ratio (MR) is reported in the supporting information. We ran Lasso and Tlasso on the same set of genes; both methods are combinations of Granger causality with different versions of a Lasso penalty. We also evaluated the improvement/change in the performance of CaSPIAN given correct a priori side-information. Without side-information, CaSPIAN infers a spurious edge from SWI5 to GAL4. However, if the information that an edge exists from CBF1 to GAL4 is provided a priori, CaSPIAN correctly concludes that no edge exists from SWI5 to GAL4. In order to evaluate the performance of Algorithm 3, we used the averaged expressions of the genes in the IRMA network corresponding to the \u201cswitch-on\u201d experiments. 
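For contrast with the baselines above, a Lasso-based Granger selection can be sketched with a plain coordinate-descent Lasso: nonzero coefficients on lagged profiles mark candidate regulators. This is a generic illustration on invented data, not the Lasso/Tlasso implementations referenced in the text:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=300):
    """Plain coordinate-descent Lasso minimizing
    (1/(2n)) * ||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            partial = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ partial
            beta[j] = np.sign(z) * max(abs(z) - alpha * n, 0.0) / col_sq[j]
    return beta

# toy network: gene 0 is driven by the past of genes 2 and 5
rng = np.random.default_rng(2)
T, G = 40, 8
expr = rng.normal(size=(T, G))
for t in range(1, T):
    expr[t, 0] = 0.9 * expr[t - 1, 2] - 0.7 * expr[t - 1, 5] + 0.05 * rng.normal()

y = expr[1:, 0]            # target gene, shifted in time
lagged = expr[:-1, 1:]     # lagged profiles of genes 1..7
beta = lasso_cd(lagged, y, alpha=0.05)
regulators = [j + 1 for j, b in enumerate(beta) if abs(b) > 1e-3]
```

The regularization coefficient `alpha` plays the role discussed earlier: it must be tuned, which is precisely the step the CS approach replaces with a consistency level.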
We applied Algorithm 3 with the parameter values described above. In order to address the second question, we considered the human HeLa cell network. Since the exact regulatory network of HeLa cells is not completely known, we only address the first question raised in the introduction of the section. The candidate set for CDKN3 cannot be reduced, since the in-degree of this gene in the known regulatory network is small. The CaSPIAN approach for network inference represents a new attempt to connect the fields of compressive sensing, causal inference and bioinformatics. Compressive sensing (CS) is a technique for sampling and recovering sparse signals using optimization and/or greedy approaches. In spite of its widespread applications in different areas such as signal processing, image and video processing, and communications, its utility has not yet been fully explored in the field of bioinformatics and gene network inference. Another line of work explored in the literature integrates the sparsity of gene networks using some version of the LASSO penalty. CaSPIAN is a reverse engineering algorithm that is based on the List-SP algorithm: a low-complexity greedy search method, which scans the expression profiles for low-dimensional subspaces. Although the computational advantage of the method may not be of great relevance for current small network inference problems, with the ever-increasing volumes of data, this issue will become important in the near future. Furthermore, CaSPIAN does not require fine parameter tuning or supervised learning methods, as was shown using both synthetic and real biological data. In addition, in Algorithm 3 we provided a version of CaSPIAN that incorporates biological side-information in the form of scaffolding subnetworks of accurate edges. Our analysis shows that correct edges may significantly improve the performance of CaSPIAN. 
Given that wrong side-information may severely deteriorate the performance of the inference methods, it may be advisable to use prior information with caution, for example, only if the links were experimentally verified using different techniques by at least several different research labs. We also evaluated the influence of different parameters such as network topology, degree distribution, size, number of measurements, sampling method, noise, and the values of the input parameters. In addition, we compared the performance of CaSPIAN with that of other algorithms using real biological (in vivo) networks and showed that in many cases CaSPIAN outperforms other algorithms in spite of their extensive use of resources and side-information and their high complexity. Although we provided extensive comparisons with other models and illustrated that under almost all performance criteria used by the reverse engineering community CaSPIAN outperforms these methods, we cannot argue that there exists one \u201coptimal approach\u201d for all problems. As an example, we did not test the performance criteria used in the DREAM2 challenge, where teams were asked to provide rank-ordered lists of edges believed to exist in the network, according to their reliability, truncated to the top-ranked entries. The implementation of these algorithms in Matlab will be offered upon request; please contact emad2@illinois.edu. Appendix S1: CaSPIAN applied to synthetic networks. (PDF) Appendix S2: Tables of multiplicity ratios (MRs) for the E. coli SOS network. (PDF)"}
+{"text": "Transcription factors (TFs) often trigger developmental decisions, yet their transcripts are often only moderately regulated and thus not easily detected by conventional statistics on expression data. Here we present a method that allows determining such genes based on trajectory analysis of time-resolved transcriptome data. As a proof of principle, we have analysed apical stem cells of filamentous moss (Physcomitrella patens) protonemata that develop from leaflets upon their detachment from the plant. By our novel correlation analysis of the post-detachment transcriptome kinetics we predict five out of 1,058 TFs to be involved in the signaling leading to the establishment of pluripotency. Among the predicted regulators is the basic helix-loop-helix TF PpRSL1, which we show to be involved in the establishment of apical stem cells in P. patens. Our methodology is expected to aid analysis of key players of developmental decisions in complex plant and animal systems. The juvenile form of the moss Physcomitrella patens is the filamentous protonema. Each filament extends by tip growth through unequal division of the apical stem cell. Although P. patens leaflet cell transdifferentiation has been studied, the dynamic interplay of gene expression and transcription factor (TF) regulation during this decision process remains poorly understood. One viable approach to elucidate the sequential activation of signaling processes involved in cell (de-)differentiation is the use of time-resolved microarray data. Indeed, processes in animal cells that evolve on time scales of hours to days, like differentiation, have been shown to exhibit a strong correlation between transcriptome kinetics, de novo protein synthesis and long-term cell behavior. Plant cells differentiate from stem cells into specialized tissue cells and back to cope with varying environmental cues. 
Although transcriptome actions are important, the transcriptome response of a cell provides only an incomplete picture of cellular events. Many fast processes are preferentially regulated on the proteome level. Among those are protein modification or the regulation of TF activity. Therefore, the actual mechanisms leading to the observed transcriptome responses might remain elusive if considering the strongest responding genes alone. This behavior is probably rooted in the topology of the underlying gene and protein regulatory networks. Comparing gene expression in healthy versus diseased specimens, it was found that the change in gene expression as well as expression variability correlates with network connectivity, i.e. the number of interaction partners of the encoded proteins. To capture the coordinated response of a cell, the concept of cellular attractors has been suggested, similar to complex networks settling on a small number of stable configurations. Here, we study the formation of P. patens apical stem cells upon leaflet detachment. Using time-resolved microarray expression profiling, we identify key signaling components and mechanisms of an important plant developmental process by means of multidimensional scaling analysis of gene expression time series data in combination with a correlation-based analysis of global transcriptome-response behavior. The approach singles out genes with weak to moderate differential expression that control the developmental progression and its underlying transcriptional control. In this way, we predict and experimentally verify TFs critically involved in (or excluded from) the cellular decision of P. patens leaflet cells to undergo transdifferentiation into apical stem cells. Validation of loss-of-function mutants demonstrates for two examples that the prediction holds true. In this study, we apply these systems-theoretic ideas for the first time in plant cells to elucidate this developmental decision. To study the time-resolved development of apical stem cells, we detached leaflets of P. patens gametophores, isolated RNA 0\u201396 h after detachment (a.d.), and subjected it to microarray analysis. We used the P. patens ortholog of the polycomb group protein FERTILIZATION INDEPENDENT ENDOSPERM (FIE) as a marker for newly reprogrammed stem cells, together with CYCD;1 (Phypa_226408). Given the kinetics of the FIE transcript (Phypa_61985) and of CYCD;1, the RNA derived from the mixed cell population can be employed to detect expression profiles of such a subset. Circadian rhythms can play an influential role in gene expression dynamics of plants. P. patens protoplast transdifferentiation into tip-growing protonemata has been conducted in vitro under different start conditions. While in our study leaflets were detached from precultured gametophores, exerting primarily wounding and drought stress, protoplastation obviously represents quite a different stress scenario. Moreover, the culture media and conditions are different, most notably with regard to the presence of ammonium tartrate in the other study. To assess the biological significance of gene regulation from detachment to eventual cell division and filamentous growth, we performed conditional hypergeometric testing for Gene Ontology (GO) bias per time point. The importance of D-type cyclin and cyclin-dependent kinase A activation for transdifferentiation has recently been shown. Due to the long time scale of observation (four days), a strong correlation between transcriptome response and phenotype is expected. Therefore, we defined a temporal state space trajectory of the transcriptome from its Pearson correlation and mutual information with respect to the 0 h time points. Characterizing the kinetics of these TF genes, we find that both their start and their time to peak expression precede that of other genes. One of the predicted TFs is related to the Arabidopsis thaliana ROOT HAIR DEFECTIVE 6 (RHD6) gene, a positive regulator of root hair development; the sequence similarity between A. thaliana RHD6/RSL1 and P. 
patens RSL1/2 is clearly evident. However, in our analysis PpRSL2 is not predicted to be involved in the cellular decision, due to the fact that it does not show predominant activation in the early phase. One of the predicted bHLH TFs has previously been described; we therefore analysed the corresponding single and double knockout mutants (\u0394rsl1, \u0394rsl2 and \u0394rsl1/2) with regard to leaflet regeneration. The number of filaments per leaflet was lower in the \u0394rsl1 and \u0394rsl1/2 mutants as compared to the wildtype and \u0394rsl2, while no significant difference from the wildtype was observed for \u0394rsl2. PpRSL1 is expressed early during the reprogramming, before the cells are transformed into apical cells and start their tip growth. Similar to this, the AtRHD6 gene is expressed in hair cells of the root before they grow the protuberances that are known as root hairs. Other tip-growing cells of A. thaliana, pollen tubes, are not affected by down-regulation of AtRHD6. The A. thaliana protein has apparently been specialized during evolution, while the P. patens protein has kept a more general functionality, although it is still able to complement the Atrhd6 phenotype. Our approach singled out TFs putatively involved in the transdifferentiation process, which was confirmed in the case of PpRSL1. Namely, constitutive expression of PpRSL1 and 2 in P. patens leads to massive generation of rhizoids from gametophore tissue. In both A. thaliana and P. patens, only the double knockouts of the respective close paralog pairs severely affect root hair, respectively caulonema and rhizoid development, demonstrating that both paralogs are involved in positively regulating these cell types to start tip growing. It may well be that this is achieved by the two proteins acting as a heterodimer, as bHLH TFs are required to act as dimers. In P. patens, however, it is tempting to speculate that instead of PpRSL2, which we have shown not to contribute to this process, the other bHLH subfamily member identified by us (Phypa_165670) will form a heterodimer with PpRSL1 in order to achieve this particular regulation. Our strategy consists of (i) defining genes that follow the global transcriptome response and are moderately regulated and (ii) isolating TFs among this population that display peak expression in the critical phase. The present analysis of transcriptome trajectories singled out several TFs, predominantly from the bHLH family, putatively involved in transdifferentiation. For brachycyte induction, abscisic acid (ABA) was added to freshly homogenized protonemal liquid culture set to 100 mg*L\u22121 dry weight. Observation was carried out after two weeks and formed brachycytes were scored as apical, subapical or side-branching, respectively. Six repetitions were carried out and the total number of scored brachycytes was: WT 315, rsl1 222, rsl2 280, rsl1/2 161. For the number of brachycytes formed, 100 \u00b5L of culture per line was scored in three independent replicates. Detached leaflets were observed under a stereo microscope 72 h after detachment (a.d.) in order to determine the number of filaments per leaflet. Four to six repetitions with 10 leaflets each were carried out. For setup and long-term observation, refer to the supporting information. Isolation of total RNA was performed using the RNeasy Plant Micro Kit with an on-column DNase treatment following the manufacturers\u2019 protocol. 
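The temporal state-space trajectory introduced above (Pearson correlation and mutual information of each time point with respect to the 0 h reference) can be sketched in a few lines. This is a toy illustration with invented data; the bin count for the histogram-based mutual information estimate is arbitrary:

```python
import numpy as np

def transcriptome_trajectory(expr):
    """expr: genes x time-points matrix of (log) expression values.
    For every time point t_i, compute the Pearson correlation and a
    histogram-based mutual information estimate between the expression
    vector at t_i and the 0 h reference, tracing a 2-D state-space
    trajectory of the whole transcriptome."""
    ref = expr[:, 0]
    traj = []
    for i in range(expr.shape[1]):
        v = expr[:, i]
        r = np.corrcoef(ref, v)[0, 1]
        # mutual information from a discretized joint distribution
        joint, _, _ = np.histogram2d(ref, v, bins=8)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        mi = float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
        traj.append((r, mi))
    return np.array(traj)

# toy transcriptome: 400 genes drifting away from the 0 h state
rng = np.random.default_rng(5)
base = rng.normal(size=400)
expr = np.column_stack(
    [base + rng.normal(scale=0.2 * i, size=400) for i in range(6)])
traj = transcriptome_trajectory(expr)
```

As the transcriptome moves away from its initial state, both the correlation and the mutual information with the 0 h reference decay, which is the behavior the trajectory analysis exploits.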
Using the MessageAmp II aRNA Kit, 100 ng total RNA were amplified to yield sense-strand amplified RNA (aRNA), which was reverse transcribed into first-strand cDNA using SuperScript III and random hexamer oligonucleotides. Microarray experiments were carried out as previously described. EMBOSS (http://emboss.sourceforge.net/) was used for the design of gene-specific oligonucleotides. Dynamic gene response was estimated from a multi-dimensional scaling analysis (MDS), which projects the distance matrix containing the Euclidean distances between all gene kinetics onto a two-dimensional space. To calculate the projection, retaining the original distances as closely as possible, we applied the HiT-MDS algorithm; P-values for the dynamic response of a gene were derived from its location in the MDS projection. Statistical analyses were performed in R (http://www.R-project.org). Gene Ontology (GO) analysis was performed using the GOstats library from R; the P. patens GO annotation was used. For the mutual information, the distributions p0(x) and pi(y) are estimated by discretizing the gene expression deviation, and the significance of I is assessed using 100 random permutations. To trace the Pearson correlation r and mutual information I between time points t0 and ti, we generated the vectors V0 and Vi by sampling gene sets of a given size from the whole transcriptome and taking the average over 1,000 repeats. For each of these gene groups we calculated a subset trajectory and compared each trajectory to the whole transcriptome trajectory of the same sampling size using the Euclidean distance as well as the Pearson correlation. The microarray data can be accessed from ArrayExpress under the accession number E-MTAB-915. Figure S1: Expression kinetics of individual samples, as well as their mean (shown in red). (PDF) Figure S2: p-value histogram of transcriptome response and multi-dimensional scaling of the gene expression profiles up to 96 h a.d. 
A, p-value histogram for dynamic transcriptome response scores evaluated from the MDS analysis and fitting with a bivariate skew normal distribution, resulting in a long-tailed distribution. The barplot insert shows the log2 fold change dynamics of two previously known markers for apical stem cell differentiation, the transcriptional regulator PpFIE (Phypa_61985) as well as Cyclin D;1 (Phypa_226408). B, Multi-dimensional scaling (MDS) analysis of transcriptome response to leaflet detachment using 17,128 genes, labelled by P. patens gene IDs. The positions of the five predicted TFs as well as the weakly regulated PpRSL2 (165193) are additionally indicated by red squares. The red dashed lines mark curves of equal probability density by fitting a bivariate skew normal distribution to the point distribution. (PDF) Figure S3: Gene Ontology analysis of transcriptome response after leaflet detachment. Biased biological process GO categories for significantly regulated genes at 1\u201396 h a.d. Gene fold expression profiles were fit to a skew-t distribution and considered significantly up- or down-regulated for a p-value <0.05. GO analysis was done using a conditional hypergeometric test from the R Bioconductor package GOstats, using as background 11,283 genes having a GO annotation. GO categories were considered significant with a cutoff p-value <0.05. The p-values in the plot are log-transformed and color-coded in a range from 0 to 5. Highlighted categories are mentioned in the text. (PDF) Figure S4: Protonemal filaments occasionally emerge from leaflets of slowly dehydrated gametophores. Gametophores were slowly dehydrated over several weeks in petri dishes devoid of covering laboratory film. Under these conditions, occasional emergence of protonemal filaments from leaflets of dehydrated gametophores can be seen (arrows). (PDF) Figure S5: Transcription factors among the top ranked genes. 
TFs putatively involved in the development of apical stem cells in P. patens; all 28 annotated TFs among the top-ranked genes are listed. (PDF) Figure S6: Phylogenetic tree of bHLH transcription factors. Gene family tree of part of the plant bHLH proteins, centered on the RSL subclade (members shown in red). The tree was calculated based on bHLH protein domains using Bayesian inference as previously described; P. patens and A. thaliana bHLH proteins are shown. Besides proteins from these two organisms, OsBP-5 (CAD32238) from O. sativa was included, as it belongs to this subfamily. The two P. patens bHLH TFs detected by their early peaking upon leaflet detachment are PpRSL1 and Phypa_165670 (marked in blue), while PpRSL2 was shown not to be involved in apical stem cell formation (see text). (PDF) Figure S7: Timeline of transcriptional activation after detachment. Mean TF activity per time point; the three response intervals are denoted, as well as TFs (shown by family assignment) significantly differentially regulated in the respective interval. Mean fold change denotes the sum of log2 fold change values of all annotated TFs in the P. patens genome. The error bars denote the standard deviation of the mean fold change. (PDF) Figure S8: Long-term observation of detached leaflets. Petri dish of a leaflet detachment/transdifferentiation experiment, 62 d a.d., demonstrating the lack of rhizoids in the double mutant (upper right) and that no severe differences in long-term gametophore growth are visible. (PDF) Figure S9: Regression analysis of gene subset Euclidean distance. The Euclidean distances of ordered (dark red lines) and randomized (grey lines) gene subsets for different gene set sizes and all gene sets combined have been fitted with two different polynomial regression models. Plot titles denote the respective gene set size. 
A full regression model allows separate fitting of the randomized and ordered gene sets, depicted by the black and orange dashed lines, respectively. A reduced model (red dashed line) fits both types of gene subsets simultaneously. The p-values of the ANOVA comparison of the two models confirm the differences between the full and reduced model. Thus, the local minimum of the Euclidean distance between moderately regulated genes and the global transcriptome trajectory is indeed a result of using an ordered list of genes. (PDF) Figure S10: Skew-t distribution fit to gene fold expression. A, B, Normal, robust and skew-t distribution fits to the log2 gene fold change at 1 h (A) and 96 h (B). C, D, Comparison of goodness-of-fit of the fitting distributions: quantile-quantile plots of the sample distributions. The skew-t distribution shows the best fit to the distributions, in particular with respect to the outliers at low/high quantiles. Abbreviations: SD: standard deviation, IQR: interquartile range. (PDF) Figure S11: Euclidean distance and Pearson correlation of gene subsets and whole transcriptome trajectory. A, Euclidean distance and C, Pearson correlation of ranked gene subsets with respect to the whole genome trajectory. Both measures have their absolute maximum around gene rank 1,500. B, Projection of the Euclidean distance and Pearson correlation showing the simultaneous extremum of both measures for genes ranked between 1,000\u20131,500. The dark and light shaded areas in A and C mark the cutoff for significantly regulated genes from the MDS and transcriptome trajectory at rank 299 and 1,500, respectively. The cross in B marks the rank 1,500. 
The arrows in A and C depict the location of the five predicted TFs and of the non-involved paralog. (PDF) Table S1: Gene Ontology categories of biological processes being under-represented among the significantly regulated genes, as detected from the MDS analysis. Altogether, 11,283 genes were annotated and used as background. GO categories were considered significant with a cutoff of p<0.05. (PDF) Table S2: Oligonucleotides used for realtime PCR. (PDF) Table S3: qPCR validation. Log2-fold expression values relative to time point zero are shown. (PDF) Table S4: Brachycyte formation upon ABA treatment. (PDF)"}
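The MDS step described in the preceding record can be illustrated with classical (Torgerson) MDS, which embeds a distance matrix via double-centering and an eigendecomposition. The record's analysis uses the HiT-MDS algorithm, which optimizes a related stress criterion iteratively, so this is only an approximate stand-in:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: double-center the squared distance
    matrix and use the scaled top eigenvectors as coordinates, so that
    the pairwise distances are preserved as closely as possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dim]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# toy "gene kinetics": 50 genes observed at 13 time points
rng = np.random.default_rng(3)
kin = rng.normal(size=(50, 13))
D = np.linalg.norm(kin[:, None, :] - kin[None, :, :], axis=-1)
coords = classical_mds(D, dim=2)          # one 2-D point per gene
```

Genes with similar kinetics end up close together in the 2-D map, which is what makes the projection usable for scoring dynamic responses.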
+{"text": "One of the fundamental problems in time course gene expression data analysis is to identify genes associated with a biological process or a particular stimulus of interest, like a treatment or virus infection. Most of the existing methods for this problem are designed for data with longitudinal replicates. But in reality, many time course gene expression experiments have no replicates or only a small number of independent replicates. We focus on the case without replicates and propose a new method for identifying differentially expressed genes by incorporating functional principal component analysis (FPCA) into a hypothesis testing framework. The data-driven eigenfunctions allow a flexible and parsimonious representation of time course gene expression trajectories, leaving more degrees of freedom for the inference compared to that using a prespecified basis. Moreover, the information of all genes is borrowed for individual gene inferences. The proposed approach turns out to be more powerful in identifying time course differentially expressed genes compared to the existing methods. The improved performance is demonstrated through simulation studies and a real data application to the Saccharomyces cerevisiae cell cycle data. Time course microarray and RNA-seq experiments are increasingly used to study biological phenomena that evolve in a temporal fashion. Unlike the static experiment which captures only a snapshot of the gene expression, the time course experiment monitors the gene expression levels over several time points in a biological process, allowing investigators to study dynamic behaviors of the genes. One goal of such experiments is to identify genes associated with a biological process of interest or a particular stimulus, like a therapeutic treatment or virus infection. 
The differentially expressed genes can be defined as genes with expressions changed significantly with respect to time or across multiple conditions. The time course gene expression data typically exhibit features such as high dimensionality, short time course, few or no replicates, missing values, large measurement errors, and correlations between observations over time. Many multivariate techniques, for example SAM and ANOVA, have been applied to the analysis of such data. Functional data analysis approaches view the expression profile of each gene as a smooth function of time, and the time course measurements are collected as discrete observations from the function that are contaminated by noisy signals. Many of the existing functional methods are designed for time course data with longitudinal replicates. Time course data with longitudinal replicates are costly and rather rare in reality. Many of the published time course data have no replicates or only a small number of independent replicates. In this work, we propose a unified approach to model the gene profiles using the techniques of FPCA, and to identify differentially expressed genes in both single group and multiple group tests. Our methodology is motivated by gene expression data without replicates, so we will focus on this case in this paper, although our method can also be easily adapted to accommodate data with replicates. We argue that our method can improve the power in identifying differentially expressed genes compared to existing methods. First, using the eigen-basis enables a parsimonious modeling of the gene expression curves, so we have more degrees of freedom for the inference than that using a pre-specified basis. 
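The eigen-basis idea can be sketched by pooling all genes to estimate one mean and covariance, and then using the leading eigenvectors of the covariance as a data-driven basis. This is a bare-bones FPCA on a dense common time grid with invented data; it omits the smoothing and measurement-error corrections a full implementation needs:

```python
import numpy as np

def fpca(X, n_components=3):
    """Functional PCA on a genes x time-points matrix: estimate the
    mean function and covariance by pooling all genes (borrowing
    strength), then use the leading eigenvectors of the covariance as
    the basis. Returns the mean, the eigenfunctions (time x
    components), the per-gene scores, and the fraction of variance
    explained (FVE) per component."""
    mu = X.mean(axis=0)
    Xc = X - mu
    G = Xc.T @ Xc / X.shape[0]          # pooled covariance on the time grid
    vals, vecs = np.linalg.eigh(G)
    order = np.argsort(vals)[::-1][:n_components]
    phi = vecs[:, order]
    scores = Xc @ phi
    fve = vals[order] / vals.sum()
    return mu, phi, scores, fve

# toy data: each gene is a random combination of two smooth curves
t = np.linspace(0, 1, 21)
rng = np.random.default_rng(6)
n = 300
curves = (rng.normal(size=(n, 1)) * np.sin(2 * np.pi * t)
          + rng.normal(size=(n, 1)) * np.cos(2 * np.pi * t)
          + 0.05 * rng.normal(size=(n, 21)))
mu, phi, scores, fve = fpca(curves, n_components=2)
```

Because the true signals live in a two-dimensional function space, two eigenfunctions capture almost all of the variance, illustrating why the data-driven basis is more parsimonious than a generic pre-specified one.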
Moreover, we propose to estimate the expression curve of a gene by borrowing strength across all the genes, which leads to a more powerful inference than using the information of one gene only. We illustrate the method with the Saccharomyces cerevisiae cell cycle data. Consider a stochastic process X with mean function \u03bc(t) = E(X(t)) and covariance function G(s, t) = cov(X(s), X(t)). Under mild conditions, we can assume that X possesses a Karhunen-Lo\u00e8ve representation. When the observation times differ across samples, we divide the time domain into small bins so that at least one observation falls into each bin for each group, and we then permute the gene expression data within each bin instead of at each time point. This extended approach is further illustrated with the yeast cell cycle data in the next section. We again use a permutation test to obtain the null distribution of the test statistic. Because the statistic is a signal-to-noise ratio, genes with small \u201csignals\u201d could still have large statistics due to small noises. In addition, the expression curves estimated by gene-specific B-spline smoothing tend to be under-smoothed in order to stabilize the variance. We next apply our method to identify genes with different expression patterns in the wild type and cyclin mutant cells, restricting this analysis to the 1275 periodic genes identified previously. We apply the FPCA method to the wild type, the cyclin mutant and the combined samples and obtain the estimates of their eigenfunctions. Since the observation times are different for the wild type and the cyclin mutant, we cannot use the usual permutation. Instead, we divide the lifeline domain into small bins of length 10 each; within each bin, there is at least one observation from each of the wild type and cyclin mutant groups. The observations are permuted within each bin, using the permutation strategy for data with replicates as described in Section \u201cMultiple group case\u201d. With B=10,000 permutations, we identified 883 genes at an FDR of 0.01, of which 631 are included in the 835 genes identified previously. In the simulation studies, the coefficients \u03beil are i.i.d. 
normal r.v.\u2019s with mean 0 and variance \u03bbl, with \u03bb1=4, \u03bb2=2 and \u03bb3=1. For each gene, the expression profiles are simulated at the same time points as the wild type yeast data, so there are 21 observations within the observation interval. The noise in the observed data is simulated as i.i.d. normal r.v.\u2019s with mean 0 and variance \u03c32=0.01. In the single group case, the non-differential genes follow the null model. In the two group case, j=1,2, and the number of observations is Ki1=15 for the \u201cwild\u201d group and Ki2=22 for the \u201cmutant\u201d group, i=1,\u2026,n. The sampling times are the same as those in the yeast cell cycle data. For non-differentially expressed genes, we let \u03b3ijl=0. The data are generated from model (2), where n=1000 genes and the proportion of differential genes is set to \u03a00=0.05, 0.2 and 0.5, respectively. The number of principal components in our method is selected so that the fraction of variation explained (FVE) exceeds 90%; this criterion selects the correct number of components over 90% of the time under all simulation scenarios. We also tried the method proposed by [15]. When smoothing the covariance function, the bandwidth is chosen by generalized cross-validation (GCV). The overall shapes of the estimated covariance and eigenfunctions are quite stable over a range of bandwidth values in our numerical examples. The smoothing parameter may affect the power of the inference procedure, but a detailed investigation of this problem is beyond the scope of this paper; interested readers are referred to the cited references. The authors declare that they have no competing interests. SW proposed the method, performed the statistical analysis and wrote the manuscript. 
HW supported the research project and revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript."}
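The bin-wise permutation scheme described above (shuffling group labels only within small time bins when the two groups are observed at different time points) can be sketched as follows. This is a simplified stand-in: it uses a plain difference-in-means statistic rather than the paper's FPCA-based statistic, and the function name, bin width and toy data are invented for illustration.

```python
import numpy as np

def binwise_perm_pvalue(times, values, groups, bin_width, stat, B=1000, seed=0):
    """Permutation p-value where group labels are shuffled only within time
    bins, so groups observed at different time points are compared locally."""
    rng = np.random.default_rng(seed)
    bins = np.floor(np.asarray(times) / bin_width)
    observed = abs(stat(values, groups))
    exceed = 0
    for _ in range(B):
        perm = groups.copy()
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            perm[idx] = rng.permutation(perm[idx])   # shuffle labels in-bin
        exceed += abs(stat(values, perm)) >= observed
    return (exceed + 1) / (B + 1)                    # permutation p-value

# toy gene: "mutant" samples (group 1) shifted upward by 2 units
rng = np.random.default_rng(1)
times = np.concatenate([np.arange(0, 100, 7), np.arange(0, 100, 5)])
groups = np.array([0] * 15 + [1] * 20)
values = rng.normal(size=35) + 2.0 * (groups == 1)
diff = lambda v, g: v[g == 1].mean() - v[g == 0].mean()
p = binwise_perm_pvalue(times, values, groups, bin_width=10, stat=diff, B=500)
```

Permuting within bins preserves the time structure of each sample while still breaking the association between group label and expression level under the null.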
+{"text": "The radiation bystander effect is an important component of the overall biological response of tissues and organisms to ionizing radiation, but the signaling mechanisms between irradiated and non-irradiated bystander cells are not fully understood. In this study, we measured a time-series of gene expression after \u03b1-particle irradiation and applied the Feature Based Partitioning around medoids Algorithm (FBPA), a new clustering method suitable for sparse time series, to identify signaling modules that act in concert in the response to direct irradiation and bystander signaling. We compared our results with those of an alternate clustering method, Short Time series Expression Miner (STEM).While computational evaluations of both clustering results were similar, FBPA provided more biological insight. After irradiation, gene clusters were enriched for signal transduction, cell cycle/cell death and inflammation/immunity processes; but only FBPA separated clusters by function. In bystanders, gene clusters were enriched for cell communication/motility, signal transduction and inflammation processes; but biological functions did not separate as clearly with either clustering method as they did in irradiated samples. Network analysis confirmed p53 and NF-\u03baB transcription factor-regulated gene clusters in irradiated and bystander cells and suggested novel regulators, such as KDM5B/JARID1B (lysine (K)-specific demethylase 5B) and HDACs (histone deacetylases), which could epigenetically coordinate gene expression after irradiation.In this study, we have shown that a new time series clustering method, FBPA, can provide new leads to the mechanisms regulating the dynamic cellular response to radiation. The findings implicate epigenetic control of gene expression in addition to transcription factor networks. Radon is the largest component of natural background radiation in the United States, and exposure is a risk factor for lung cancer. 
Comparison of epidemiological studies of uranium miners exposed to high levels of radon with studies of domestic exposures suggests that lower doses may be proportionately more dangerous than extrapolation from high doses would predict. This has resulted in the addition of a correction factor to domestic radon risk estimates, although the biological basis for this correction is not well understood. The radiation bystander effect is the response of cells in contact with or in the vicinity of irradiated cells. Many endpoints have been measured in bystander cells, including sister chromatid exchanges, micronuclei, apoptosis, terminal differentiation, mutation and gene expression changes. Overall, radiation effects at the tissue and organism levels are complicated to understand because they occur at different levels of biological organization, from chromosomal damage to metabolic pathways. It is important to understand not only the physiological and DNA damage effects of radiation on cells but also the global inflammatory and stress responses of cells and tissues. For instance, irradiated fibroblasts are known to promote tumor formation in neighboring epithelial cells by altering the tumor microenvironment. The choice of methodology is a crucial issue in the use of clustering methods to examine structure in a given data set. It is important to choose and/or devise a methodology appropriate for the given data. Time series data are often analyzed using standard clustering algorithms such as hierarchical clustering, k-means and self-organizing maps. Here, we examine the data using two non-parametric clustering algorithms. The first is the Short Time series Expression Miner (STEM) algorithm and software developed by Ernst et al., where all genes are clustered into one of a set of pre-defined patterns based on transformation of gene profiles into \"units of change\". 
The idea of feature selection was first used in the context of clustering large time series data for dimension reduction, where the term dimension refers to the number of time points that describe the series. In these cases, a few well chosen statistics describing the dynamics of the series, such as serial correlation, skewness and kurtosis, were used to summarize the data. FBPA, in contrast, sufficiently describes the time course by performing dimension augmentation using biologically relevant features, thus avoiding interpolation/extrapolation; as such, the unit of the analysis is the time course itself, and not the expression measurements obtained at each time point. Because FBPA clusters all genes, it preserves information and renders unnecessary the notion of cluster significance. The use of biologically relevant features, together with the sufficient description of the time course, tends to produce clusters with focused biology. This study addressed the question: can we extract information about regulation of genes in irradiated and bystander cells from closely coordinated temporal gene expression profiles? To do this we evaluated STEM and FBPA in both treatment conditions and assessed the results of both methodologies using computational measures as well as biological enrichment. To measure cluster tightness, we used homogeneity, and to measure cluster separation and structure we used the average silhouette; both are described in detail in the Methods section. To compare agreement among the various clustering methods, we used the Rand index. We also curated a manual clustering using a subset of the data to compare clustering methods. We then assessed the biological implications of temporal clustering in both treatments and by both clustering methods, using gene ontology and pathway tools. 
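The Rand index mentioned above, used here to compare the agreement between two clusterings of the same genes, has a very compact definition. A minimal implementation (the function name is ours, not from the paper's software):

```python
from itertools import combinations

def rand_index(a, b):
    """Rand index: the fraction of object pairs on which two clusterings
    agree (paired together in both, or separated in both)."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0 (identical partitions)
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # low agreement, 1/3
```

Note that the index is label-invariant: relabeling the clusters of either partition does not change it, which is what makes it suitable for comparing independently derived clusterings.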
Gene ontology analyses using the PANTHER tool showed that FBPA tended to cluster genes with related functions together and separated different biological processes into distinct clusters. This suggested that the features selected to describe the gene expression curves for FBPA analysis were more relevant to the underlying biological signaling than the parameters used in STEM. Network analysis using the Ingenuity Pathway Analysis (IPA) tool was also applied to the clusters enriched in related biological processes to identify potential hubs regulating specific aspects of the radiation and bystander responses. The overall picture of biological networks in irradiated versus bystander cells analyzed by FBPA clustering showed that temporal curves of gene expression after irradiation can be clearly differentiated into focused biological clusters. In comparison, bystander gene expression suggested that there is a general stress and inflammatory response in bystanders that can overshadow specific signaling networks. Some important and novel regulatory processes were suggested by the FBPA clustering approach, however, and we predicted the possible epigenetic regulation of the metallothionein gene family after irradiation and in neighboring bystanders as a novel finding in our study.IL1\u03b2, IL6, IL8 and IL33) and chemokine ligand genes . The other pattern, a response of cell cycle and DNA damage genes reaching maximum at 4-6 hours after treatment, was more pronounced in irradiated samples at six time points, with four biological replicates of each condition. The data (GEO accession number GSE21059) were background corrected but not normalized in order to preserve dependence across time points. We have previously reported on the analysis of data at the 4-hour time point . We alsoTo evaluate the quality of clustering between methods, we manually curated clusters. 
Of 80 possible microarray profiles confirmed by qRT-PCR, 67 were selected, on the basis of pattern and known pathway information, as distinct and were grouped into seven clusters: no early peak; no change; two peaks and two dips; two peaks and two dips with a shallow second dip; two peaks and one dip with a low magnitude first peak; two peaks and one dip with a high magnitude first peak; and down at 4 hours. The graphs in Additional File We next used the STEM platform to cluster temporal profiles of gene expression in cells exposed to irradiation. After examining several combinations of input parameters, we found results to be relatively consistent across input parameters and selected results from c = 3 and m = 50 for further analysis of the irradiated data, where c indicates units of change and m, the number of candidate profiles. This run significantly clustered 174 out of the 238 cases . STEM Cluster 2 mapped to FBPA Clusters 1 and 3 (41.7% and 50% match respectively). STEM Cluster 3 mapped partially to FBPA Clusters 1 and 2 (45.1% match and 41.9% match respectively). FBPA Cluster 4, however, did not match any of the STEM clusters. Also, genes showing down regulation, represented in STEM Cluster 5, were included in FBPA Clusters 1 and 2 . Because the features selected for clustering (see Methods) did not emphasize magnitude of expression but rather rates of change, the down-regulated genes did not cluster separately in FBPA. Interestingly, all significant STEM clusters showed some degree of mapping to the largest FBPA cluster, Cluster 1.In order to compare the two clustering methods on a related cellular response, we applied STEM and FBPA to gene expression curves after bystander exposure to radiation. We discuss the results of clustering bystander responding genes using the STEM platform first. We selected the results from c = 3 and m = 100 for analysis of bystander gene expression. Again, results were relatively consistent across input parameters. 
These parameters resulted in significant clustering of 160 out of the 238 cases (67.2%). Figure The expression curves of the 238 genes in bystander cells were also clustered using FBPA. Again, to determine the optimal number of clusters, we used the gap statistic. We examined k = 3 and 5, which both showed near zero inequalities. Average homogeneity was found to be 2.376 and average silhouette was 0.372 for k = 5 Table , indicatThe FBPA clusters are shown in Figure Comparing the bystander FBPA clusters to STEM clusters, STEM Cluster 1 mapped well to FBPA Cluster 2 (71.9% match). STEM Clusters 2, 3, and 5 mapped relatively well to FBPA Cluster 1 . As noted above, many of the gene expression curves assigned to STEM Clusters 2, 3, 5 and 6 showed a generally similar pattern. STEM Cluster 6, however, mapped most closely to FBPA Cluster 2 (50% match). STEM Cluster 4 mapped partially to FBPA Clusters 2 and 4 (48.4% and 38.7% respectively), while FBPA Clusters 3 and 5 did not match any of the STEM clusters well.After performing clustering on the microarray and qRT-PCR data using the STEM software and the FBPA approach, we used the Rand index to compare the agreement of methods. The Rand index table Table indicateWe next analyzed individual clusters using biology-based approaches that facilitate understanding biologically relevant responses. The first approach was an ontology-based analysis using the PANTHER database . We firs3) and Cluster 2 was significantly enriched in genes related to immunity and cell defense mechanisms . Network analysis suggested that these groups of genes are probably related or responsive to the p53 family of cell cycle regulators and to NF-\u03baB transcriptional regulation, respectively and NF-\u03baB cascade genes were significantly clustered. 
Surprisingly, biological functions were clearly separated among the first three clusters, suggesting distinct biological functionality with only one significantly enriched biological process, NF-\u03baB cascade, in common between Clusters 1 and 3. Generally, we found a cell signaling cluster, a cell cycle/cell death cluster, and a cell-mediated immunity cluster. Cluster 4, with only 6 genes, gave no significant results.We then analyzed clusters from FBPA k = 4) for the 238 directly irradiated gene expression curves gene iso-forms, in addition to MT2, MT3 and MT4 genes [Metallothioneins are modulators of metal toxicity and important mediators of oxidative damage with a specific role in radical scavenging after radiation exposure . StudiesT4 genes , furtherMT1E, MT1F, MT1 H and MT1L mRNAs to be highly responsive to levels of KDM5B. Overexpression of KDM5B was shown to repress gene expression and RNAi-mediated knockdown of KDM5B increased expression of all the above metallothionein genes.KDM5B (JARID1B), which can act as a transcriptional repressor by de-methylating histone H3 lysine residues bound to promoters , has beeHistone deacetylases (HDAC1 and HDAC2), have also been shown to regulate metallothionein gene expression ,47. The Using western blot analysis, we found that protein levels of KDM5B, HDAC1 and HDAC2 were all decreased an hour after exposure Figure , precediINK4A locus [The participation of trans-activating factors, such as transcription factors and co-activators that affect gene expression at promoter regions, in the radiation response is well known. However, the potential contributions of DNA topology changes and other epigenetic effects exerted by non-coding RNA, DNA methylation and histone modification are not as well studied in the radiation response. There is some evidence for epigenetic mechanisms such as DNA hypo-methylation after radiation exposure but litt4A locus . 
Our stuFAS, TNFRSF10C, TNFRSF10B, MYBL1 and MDM2 were gene members in this category. STEM clustering in bystanders suggested only one biologically significant cluster with minimal biological findings in other clusters. This suggests that although this method can group genes into visually tight patterns, the algorithm is \"blind\" to functionally related genes that could be clustered together with more descriptive features, such as those used in FBPA. Network analysis of the six clusters confirmed that p53 and NF-\u03baB family members were potential upstream regulators of gene expression in most of the STEM bystander clusters.STEM clustering of the bystander data for the 238 genes yielded 6 significant clusters Figure with uni-6). In Cluster 2, which was visually a tight cluster by pattern and magnitude of change and cell surface mediated signaling processes , both of which were also significantly enriched in Cluster 3. The most over-represented processes in Cluster 4 genes were granulocyte mediated immunity , NF-\u03baB and cytokine and chemokine signaling .We also applied the same analyses to the FBPA (k = 5) clusters of the 238 bystander gene profiles Figure . Again we Figure , other oGADD45A and SAT1 genes in Cluster 2 . Although there were some common processes between FBPA clusters, the gene ontology enrichment showed clear delineation of biological information. Related biological functions were focused in specific clusters, suggesting that features used in FBPA captured relevant biological details of the gene expression response curves. In radiation gene response, three out of four clusters gave distinct functional groups: a cell signaling cluster Figure , a cell By contrast, STEM resulted in only one cluster with biologically significant functions for both treatment conditions: irradiated Cluster 3 Figure , and bysThere were also consistencies between the clustering methods used. 
For example, cell cycle control processes were not over-represented in any clusters generated by FBPA or STEM in the bystander gene response, whereas, stress response, inflammation and cellular defense mechanisms were strongly implicated in the bystander gene expression response. Cell death, on the other hand, was a significant category in both STEM Clusters 1 and 2 Table and in FThe objective of this study was to summarize and cluster time series gene expression in irradiated and bystander fibroblasts to uncover novel biologically relevant information. We applied a new clustering algorithm, FBPA, which used relevant features to cluster data. These features summarized the gene expression profiles and accounted for dependence over time. This method was devised specifically for sparse time series where model-fitting is not realistic. It is broadly applicable to other data sets. It does not require measurements to be taken at the same time points and can handle missing values. FBPA is scalable to a large number of genes, only restricted by processing capacity.We compared FBPA to STEM, another popular clustering algorithm for short time series. While the two methods were comparable when using computational measures of evaluation, FBPA outperformed STEM in finding biologically meaningful clusters in both the irradiated and bystander cases. We believe this is because of the use of biologically relevant features that explain the data well and an emphasis on parsimony as opposed to strictly computational methods that do not address these factors.Additionally, we compared the temporal response of mRNA to 0.5 Gy \u03b1-particle irradiation and in-contact neighboring bystander cells and confirmed trends in gene regulation. More interestingly, we were able to extract new information from the clustering results that predicted upstream regulators of gene expression not previously suggested by class comparison and ontology methods. 
Our analysis suggested a candidate novel gene regulatory mechanism involving histone modifications at promoter regions of metallothionein genes by KDM5B (lysine demethylase) and HDACs (histone deacetylases). Further studies on the role of these epigenetic mechanisms and the induction of metallothionein genes in response to \u03b1-particle irradiation will be required to understand the roles of these new players in the radiation response.In conclusion, this study achieved the objective of extracting biological insights from quantitative data after grouping it into clusters and identifying novel processes in the precise regulation of individual biological molecules as a result of radiation. In this study, we addressed only mRNA level changes and it will be interesting to see if parallel measurements of \"omic\" data at other levels such as chromatin immunoprecipitation-array (ChIP-chip) information, proteomic and metabolomic data may be analyzed simultaneously using feature based clustering methods. Also, in this study we limited the analyses to genes shown to be differentially regulated at four hours, as a test set for the clustering methodology. We found that FBPA clustering can sort gene expression responses and subsequent biological enrichment of clusters can reveal new knowledge based on this sorting method. When this method is applied to the complete set of differentially regulated genes in the time series, it will also help us more fully understand the involvement of pathways that can affect cell and tissue integrity after exposure to radiation.4He ions (125 keV per micron) as simulated \u03b1-particles using the track segment mode of the 5.5-MV Singletron accelerator at the Radiological Research Accelerator Facility of Columbia University. 
Four independent experiments were conducted, and each was performed in parallel with irradiated, bystander and sham-irradiated samples derived from a sub-cultivated pool of IMR-90 cells that were seeded from a single cryo-vial.Early passage (population doubling < 35) IMR-90 human lung fibroblasts were sub-cultured in Dulbecco's modified Eagle's medium (Gibco) and Ham's F10 medium in a 1:1 mixture plus 15% fetal bovine serum. Mylar-bottomed culture dishes were prepared as described previously . An inneDirectly irradiated (outer dish) and bystander (inner dish) cells were separated at 30 minutes, 1, 2, 4, 6 and 24 hours after exposure, and RNA was isolated from the exposed cultures and from time-matched sham-irradiated controls using Ribopure . All RNA samples had RNA integrity numbers >9.0 and 260 Each sample was hybridized to an Agilent Whole Human Genome Oligo Microarray (G4112F) using the Agilent one-color workflow as previously described . The extGenes were selected for clustering based on 4-hour gene expression analyses performed in an earlier study . In thatThe High-Capacity cDNA Archive Kit was used to prepare cDNA from total RNA. A custom low-density TaqMan array was designed using validated assays. Gene expression assay information is in Additional File T method as described previously [PPIA) and ubiquitin C (UBC) gene expression levels. We used qRT-PCR measurements of 40 genes across the entire time course and used the median of ratios to control at each time point to generate heatmaps. BRB-ArrayTools was used to generate a heat map visualizing the median logarithmically transformed expression ratios for all four replicates generated by both microarray and qRT-PCR to compare gene expression across time and between measurement methods. qRT-PCR expression data are provided in Additional File Relative fold-inductions were calculated by the \u0394\u0394Ceviously using SDeviously to the sWe used two clustering methods to cluster the data. 
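The \u0394\u0394Ct fold-change computation used above for the qRT-PCR data is a standard two-step normalization; a minimal sketch (function name and example Ct values are ours):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Delta-delta-Ct: normalize the target gene's Ct to a reference gene
    in each sample, then express treated relative to control as 2^-ddCt."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_treated - d_control)

# target crosses threshold 2 cycles earlier in treated -> 4-fold induction
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```

In the study, the reference Ct would come from the geometric combination of the PPIA and UBC housekeeping genes; the single reference value here is a simplification.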
The STEM algorithm and software, described below, was developed by Ernst et al. For the purpose of both clustering algorithms, expression measurements for a given gene, g, and replicate, r, in irradiated (A) and bystander (B) samples were represented as a function of control (C) expression, as x_igr = log2(A_igr/C_igr) or x_igr = log2(B_igr/C_igr), where i = 1, 2, ..., n, n is the number of time points used, x_igr indicates the relative expression measurement for irradiated or bystander genes at time point i, A_igr is the unlogged expression in the irradiated sample at time point i, and B_igr is the unlogged expression in the bystander sample at time point i. We used x_igr for both alpha and bystander expression here because the methods were agnostic to the particular treatment being considered. Representing the data as a ratio was possible because of the paired nature of the data. Irradiated data and bystander data were clustered separately for the microarray data but together for the smaller qRT-PCR data set. First, we used the STEM (Short Time series Expression Miner) algorithm and software (http://www.cs.cmu.edu/~jernst/st/). Briefly, a set of model profiles based on units of change, c, was defined. For example, if c = 2 then, between successive time points, a gene can go up either one or two units, stay the same, or go down one or two units. The clustering system may also define one unit differently for different genes. Thus, the number of possible profiles for n time points is (2c+1)^(n-1). From these possible expression profiles, a set of candidate profiles of user-defined size m was chosen such that the minimum distance between any two profiles was maximized. Each gene was assigned to the closest profile using a Pearson correlation based distance metric. To determine the significance level for a given cluster, a permutation based test was used to quantify the expected number of genes that would be assigned to each profile if the data were generated at random. 
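The size of STEM's profile space can be checked by direct enumeration. The sketch below is ours, not part of the STEM software; it only illustrates why there are (2c+1)^(n-1) possible profiles:

```python
from itertools import product

def stem_profiles(c, n):
    """Enumerate STEM-style model profiles: starting at 0, a profile moves
    by an integer in {-c, ..., +c} between successive time points, giving
    (2c + 1) ** (n - 1) possible profiles over n time points."""
    profiles = []
    for steps in product(range(-c, c + 1), repeat=n - 1):
        level, prof = 0, [0]
        for s in steps:
            level += s
            prof.append(level)
        profiles.append(tuple(prof))
    return profiles

profs = stem_profiles(c=2, n=4)
print(len(profs))  # -> 125, i.e. (2*2 + 1) ** (4 - 1)
```

Because this space grows geometrically with the number of time points, STEM works with a much smaller user-defined set of m candidate profiles chosen to be maximally spread out.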
Therefore, while all genes were clustered, not every gene was in a significant cluster. The number of clusters, k, was determined via the gap statistic, choosing the smallest k such that gap(k) >= gap(k+1) - s_{k+1}, where s_k is the estimate of the standard deviation of the gap. However, we examined all \"elbow points\" on the graphs and presented those that provided the best results in terms of separation of clusters and the homogeneity metric. In general, cluster validity can be assessed on three bases: within-method metrics, between-method metrics and cluster significance. First, within-method metrics were used: homogeneity, which measures cluster tightness as the average distance, under a distance function D, between each gene gi and the other genes in its assigned cluster; and the silhouette width, ranging from -1 to 1, which was measured for each gene by comparing the average distance to all the elements in its assigned cluster with that of the closest other cluster. An average silhouette width over 0.5 suggested a strong structure, 0.25-0.5 suggested a reasonable structure, and <0.25 suggested no substantial structure. Second, between-method metrics were used to evaluate cluster agreement; here, we validated findings between the two methods as well as between each method and the manually curated clustering, using the Rand index. Third, cluster significance methods focus on the likelihood that the cluster structure has not been formed by chance. A fundamental difference between the above two clustering algorithms was that STEM pre-determines cluster patterns and, while it assigned all genes to clusters, it only designated some clusters as significant. Cluster significance was determined by a permutation based test, used to quantify the expected number of genes that would be assigned to each profile if the data were generated at random. In this way, the STEM algorithm measured cluster likelihood. We did not provide this for FBPA. 
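The average silhouette width described above can be sketched directly from its definition. This is an illustrative implementation of the standard silhouette, not the paper's code; it assumes Euclidean distance and that every cluster has at least two members.

```python
import numpy as np

def average_silhouette(X, labels):
    """Average silhouette width: for each point, a = mean distance to its
    own cluster, b = mean distance to the closest other cluster, and
    s = (b - a) / max(a, b); values near 1 indicate strong structure.
    Assumes every cluster has at least two members."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n, s = len(X), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()
        b = min(D[i, labels == k].mean() for k in set(labels) if k != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# two well-separated pairs -> average silhouette near 1 (strong structure)
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
```

On this toy data the good labeling scores close to 1 (a "strong structure" by the thresholds quoted above), while a labeling that splits the pairs scores below zero.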
The within-method silhouette and homogeneity metrics allowed us to look \"under the hood\" at individual clusters and make inferences on them. Given the caveat that these validation metrics are guidelines, ultimately subject to biological validation of patterns in gene expression, we felt that this approach was reasonable in the exploratory data analysis framework. It is also worth mentioning here that the significant clusters determined by STEM did not necessarily imply biologically significant clusters.We used qRT-PCR confirmed genes as a smaller subset of genes to assess between method clustering. Because of the small number of genes used, the 80 irradiated and bystander curves were clustered together. After examining results for various parameter combinations using STEM, we found that results were relatively consistent around the choice of c. Smaller values of c resulted in fewer genes being clustered. Thus, we selected c = 3 and m = 25 for further analysis. This run clustered 57 out of the 80 cases (71%). The Rand Index to the manually curated clustering was 0.486 for the directly irradiated cases and 0.483 for the bystander cases, indicating average similarity to the manually curated standard. Here we see the STEM algorithm shows more noise. This is potentially because we chose a higher value for the units of change (c = 3) but a lower number of pre-defined profiles (m = 25). We did this to significantly cluster more genes, but the cost is higher noise in the resulting profiles. Nevertheless, the clusters did show distinct patterns.To confirm results, we also clustered the median expression curves generated by qRT-PCR using FBPA. Again, because of the small number of genes confirmed by PCR, we clustered irradiated and bystander genes together and used the results to measure agreement only. Using the gap statistic method and plot, we examined k = 3 and k = 8 further. 
Based on within-method evaluation, we determined to use 8 clusters, which showed both better separation in terms of the average silhouette and better homogeneity. For k = 3, the average homogeneity was 3.969 and the average silhouette was 0.385. For k = 8, we had an average homogeneity of 2.345 and an average silhouette of 0.402. Because reasonable structure was found with k = 8, we chose this clustering. The Rand Index to the manually curated standard was 0.607 for the directly irradiated cases and 0.661 for the bystander cases, indicating good similarity.Following the separate clustering analysis of irradiated and bystander gene expression curves, we imported the gene sets from each cluster into PANTHER . The gen\u00ae Systems, http://www.ingenuity.com) to analyze network interactions between the genes. We applied pathway analysis as a complementary method of biological analysis of the gene groups generated by clustering. This approach allowed us to visualize potential interactions between the members of clusters, and to look for common upstream regulators. We applied very specific criteria, limiting our analyses to relationship type \"expression/transcription\" and molecule type \"only upstream transcriptional regulators of genes,\" to each cluster of genes one by one. In clusters dominated by down-regulated genes, we also queried potential coordinated targeting by microRNA species that can suppress mRNA levels of more than one gene.The sets of genes after clustering were also separately imported into Ingenuity Pathways Analysis (IPA) (Ingenuityhttp://rsb.info.nih.gov/ij/), background corrected and normalized to actin levels, then compared to time matched controls.For protein isolation directly irradiated (outer dish) and bystander (inner dish) cells were separated and trypsinized at specified times after irradiation. Cells were collected, washed and lysed in 25% glycerol, 40 mM HEPES at pH 7.5, 1 mM DTT, 0.35 M NaCl, 0.5% NP-40 and Protease inhibitor mixture . 
Protein concentrations were determined using the bicinchoninic acid method (Pierce) and measured using the Nanodrop-1000 spectrophotometer (Thermo Scientific). 50 micrograms of protein was used for western analysis and separated on 4-12% Tris-Glycine gradient polyacrylamide gels. Primary antibodies were from Abcam: HDAC1 (cat# ab46985), HDAC2 (cat# ab12169), and KDM5B (cat# ab50958), and from Chemicon: actin (cat# mab1501). Secondary antibodies were conjugated to horseradish peroxidase and signals were detected using enhanced chemiluminescence. Relevant bands were quantified by densitometry using Image J (NIH, http://rsb.info.nih.gov/ij/), background corrected and normalized to actin levels, then compared to time-matched controls.

Abbreviations: FBPA: feature-based partitioning around medoids algorithm; STEM: short time series expression miner; NF-\u03baB: nuclear factor kappa B; HDAC: histone deacetylase; BEIR: Biological Effects of Ionizing Radiation; PANTHER: protein analysis through evolutionary relationships; qRT-PCR: quantitative reverse-transcription polymerase chain reaction; GEO: Gene Expression Omnibus; FDR: false discovery rate; GO: gene ontology.

SAG and AS drafted the manuscript. SAG designed the study and performed all biological experiments and analyses. MM and AS carried out the cluster analyses and developed the supporting statistical methods. SAA and MM conceived of the study and participated in the design of experiments and the writing of the manuscript. 
All authors read and approved the final manuscript.

Additional files: Manually curated clustering (pdf); STEM clustering on irradiation gene response (MS Excel); FBPA clustering on irradiation gene response (MS Excel); STEM clustering on bystander gene response (MS Excel); FBPA clustering on bystander gene response (MS Excel); Metallothionein expression levels in irradiated and bystander cells (pdf); Microarray data from irradiated and bystander cells (MS Excel); qRT-PCR data from irradiated and bystander cells (MS Excel)."}
+{"text": "Biological processes occur on a vast range of time scales, and many of them occur concurrently. As a result, system-wide measurements of gene expression have the potential to capture many of these processes simultaneously. The challenge, however, is to separate these processes and time scales in the data. In many cases the number of processes and their time scales is unknown. This issue is particularly relevant to developmental biologists, who are interested in processes such as growth, segmentation and differentiation, which can all take place simultaneously, but on different time scales. We introduce a flexible and statistically rigorous method for detecting different time scales in time-series gene expression data, by identifying expression patterns that are temporally shifted between replicate datasets. We apply our approach to a Saccharomyces cerevisiae cell-cycle dataset and an Arabidopsis thaliana root developmental dataset. In both datasets our method successfully detects processes operating on several different time scales. Furthermore, we show that many of these time scales can be associated with particular biological functions. The spatiotemporal modules identified by our method suggest the presence of multiple biological processes, acting at distinct time scales, in both the Arabidopsis root and yeast. Using similar large-scale expression datasets, the identification of biological processes acting at multiple time scales in many organisms is now possible.

Biological processes in living organisms occur on a vast range of time scales, from 10... The advent of high-throughput technologies has given biologists the ability to measure system-wide gene expression, potentially capturing many of these processes at once. As a result, one of the major challenges of biological data analysis is the separation of these processes and their time scales. In many cases it is not even known how many processes underlie the measured signal or what their respective time scales are. These questions are particularly relevant to the field of developmental biology. Developmental studies focus on systems such as animal embryos or plant organs, in which processes such as growth, segmentation and differentiation can all take place simultaneously but on different time scales, complicating the interpretation of expression data.

Here we introduce a method for detecting the presence of different time scales in time-series gene expression data. Our approach is based on two assumptions that hold for many data sets of this type. First, that at least two replicate time-series measurements have been produced. Second, that there is at least one time-dependent process for which the time scale is known. This known process allows us to synchronize the replicates, and is most often the time scale on which the data was measured. We detect additional time scales by searching for temporal expression patterns that are similar in the two replicates but occur at different times. In other words, these patterns are shifted Figure .

As an example of an applicable dataset, consider a gene expression measurement time-series with two replicates that is used to study the cell cycle. Both replicates are synchronized in order to start at the same point in the cell cycle. Now let us suppose there is a second time-dependent process that is not affected by the synchronization (i.e. not the cell cycle). The two time-series of cellular snapshots provided by the replicates will now catch this second process in different temporal states. 
However, to ensure that we are observing the same process in two different states, rather than a signal corrupted by noise, the two snapshots have to be shifted versions of each other with a high degree of similarity, which is why our significance analysis has to incorporate the temporal shift in an explicit manner. Our approach is somewhat analogous to an astronomical device called the blink comparator, which reveals objects that move or vary by rapidly alternating between two images of the same region of the sky taken at different times.

We apply our approach to detect time scales in two datasets. The first is a Saccharomyces cerevisiae cell-cycle dataset, and the second is an Arabidopsis thaliana root developmental dataset. Using our method, we detect processes operating on several distinct time scales in both.

To validate our method we chose to analyze a dataset for which there was a known separation between the time scale of the experiment and the time scale of a biological process of interest. The dataset we chose was a recent synchrony/release time-series microarray dataset from the yeast Saccharomyces cerevisiae measuring gene expression through the cell cycle (GEO Accession: GSE8799).

In the synchrony/release protocol used by the study, a population of cells is synchronized to early G1 phase. The cells are subsequently released to progress through the cell cycle, during which a time-series of microarray measurements is made. Thus, the measured time scale in the dataset is chronological time (minutes since release from the synchronization event). However, as the kinetics of release from synchronization always varies from experiment to experiment, replicates reach a given cell-cycle phase at slightly different chronological times.

The dataset itself consists of two replicate synchrony/release time-series experiments, each with 15 Affymetrix Yeast 2.0 microarray measurements taken at 16-minute intervals after the first sample. In this dataset, sampling in each replicate began at slightly different times (30 minutes and 38 minutes). 
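One simple way to make such offset replicates directly comparable, as done in the next step of the analysis, is to fit a piecewise-linear interpolant to each replicate and resample both on a common grid. A minimal sketch (function and variable names ours; the sample values are hypothetical):

```python
def resample_linear(times, values, new_times):
    """Piecewise-linear interpolation of one time series onto a new grid.
    Every point of new_times must lie within [times[0], times[-1]]."""
    out = []
    for t in new_times:
        k = 0
        # advance to the segment [times[k], times[k+1]] containing t
        while k + 2 < len(times) and times[k + 1] <= t:
            k += 1
        t0, t1 = times[k], times[k + 1]
        v0, v1 = values[k], values[k + 1]
        w = (t - t0) / (t1 - t0)
        out.append(v0 + w * (v1 - v0))
    return out

# replicates sampled every 16 min but starting at 30 vs 38 min,
# resampled on a shared 8-min grid of 28 points, as in the study
rep1_times = [30 + 16 * i for i in range(15)]
rep2_times = [38 + 16 * i for i in range(15)]
common = [38 + 8 * i for i in range(28)]
```

Note that the common grid ends at 254 minutes, the last point covered by both replicates, which is why 28 samples per replicate result.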
As our method requires directly comparable data, simple linear splines were fit to each replicate and sampled at 8-minute intervals starting at 38 minutes after release, with a total of 28 samples per replicate. We analyzed the 5742 probes in the resulting dataset for significant shifts.

We expected to find a large group of shifted genes related to the cell cycle, given the known time scale separation in this data. We therefore tested if any of the gene sets, defined by maximal shift, were related to the cell cycle. Of the 1274 cell-cycle regulated probes identified by the original study, most were found in the Shift+8 gene set, with no other gene sets showing any enrichment for cell-cycle related terms.

Only two other gene sets, Shift+0 and Shift+16, have any enriched terms. The Shift+0 set is enriched for terms having to do with general biological processes such as growth and RNA processing, which are generally not associated with the cell cycle. This enrichment in the Shift+0 group is not surprising, as upon release from synchrony, cells would be expected to activate programs associated with growth and cellular reorganization. Thus these processes would be operating on the chronological time scale of the synchronization. The GO term analysis revealed that the Shift+16 set was slightly enriched for terms related to catabolic process. The biological relevance of this enrichment remains to be tested, but it suggests an additional, as yet uncharacterized, time scale.

The identification of biologically coherent sets of shifted genes strongly suggests that our method is able to successfully detect the presence of processes occurring on multiple unrelated time scales. Furthermore, by analyzing the identified genes associated with those shifts, we were able to correctly identify the other major process, associated with Shift+8, as the cell cycle.

The Arabidopsis root is an excellent model system for studying development due to its simple physical structure. For this dataset we measure shifts relative to the 1st section of the 13-section time-course and use developmental time as the known time scale. 
In the root, the majority of new cells are born at the root apex from a stem cell population that surrounds the quiescent center (QC). Cell types are constrained within files, and with each new cell division, an older cell is successively displaced distal to the stem cell population. A cell's developmental time line can therefore be tracked along the root's longitudinal axis. In the work of Brady et al., two replicate developmental time-courses were generated by sectioning roots along this axis.

We analyzed each of the 22746 genes in the root dataset for shifts of -6 to +6 sections. We used a maximal shift threshold of \u00b1 6, as larger shifts would result in an overlap window covering less than half the developmental time points. The distribution of maximally significant shifts for the 5992 genes with shifts at or below a p-value of 0.01 is shown in Figure . Genes that display shifted expression profiles in vertebrates are known to regulate processes such as somitogenesis.

The observation that genes associated with a shifted profile are enriched in single cell types suggests that spatially regulated transcriptional modules may exist. We next attempted to determine if genes associated with a particular shifted profile also showed strong temporal regulation in addition to spatial regulation. First, to systematically separate out these cell-type-specific modules within each shifted gene set, we hierarchically clustered the individual genes on their cell type expression and cut the resulting tree into clusters. One of the resulting clusters shows highly significant enrichment (p = 3.1e-39). These genes also show enriched expression in xylem cells in the meristematic zone (p = 3.2e-63) and display a time shift within the meristematic zone, but the cluster also contains genes whose expression is enriched in phloem cells, phloem companion cells and phloem pole pericycle cells Figure .

Biological processes on multiple time scales occur during the development and morphogenesis of embryos, tissues and organs. 
Using time series microarray expression data in replicate, we have developed a method that identifies a number of temporal scales in addition to the time scale being measured. This method was able to identify these time scales in two different organisms, suggesting that it is an organism-independent method.

Given the number of genes in high-throughput datasets, the computational efficiency of any data analysis method is critically important. By converting the data to rank permutations (see Methods), we can use uniformly distributed random permutations as a null model. As a result, our method is able to use a continuous Gaussian distribution for p(\u03b3i) as a close approximation to the real (discrete) probability density function of \u03b3i values. Using a Monte Carlo simulation over uniformly random permutations we confirmed that this continuous distribution is an accurate approximation (data not shown). Note that, since the Gaussian distribution extends below \u03b3i = 0, the p-value given by the Gaussian distribution is in fact an upper bound on the true p-value for small (i.e. the most significant) \u03b3i, which means that the true p-value lies even lower. Therefore, our method provides an efficient, accurate, and conservative method for determining the significance of shifts in high-throughput datasets. Previous work on time-shifted expression data has addressed related questions.

In our analysis of the yeast cell-cycle dataset, it is not a coincidence that the cell cycle process was identified in the Shift+8 group and that the original study adjusted the sampling times by eight minutes in the second biological replicate. In the original study, the authors employed a model designed to use auxiliary budding index data to specifically analyze the kinetics of populations released from synchrony/release experiments. They used this model to align the sampling times of the two replicates.

Numerous biological processes have been identified in plants that occur on multiple time scales ranging from seconds to hours (circadian rhythms). In the root, however, the full spectrum of biological processes that act on multiple time scales has likely not been described, due to a lack of knowledge of the time scales that these processes are acting on. Our rigorous method is able to utilize the gene expression dataset measuring expression through root developmental time in Arabidopsis thaliana to identify numerous spatiotemporal transcriptional modules acting in separate time scales. Each spatiotemporal module demonstrates a strong conservation of biological association occurring during root development. Interestingly, the strongest observed associations are linked to genes expressed during the process of lateral root initiation.

Lateral root initiation occurs at regular intervals within pericycle cells located at the xylem pole, suggesting cell-cell communication between xylem and pericycle cells.

Our analysis uses p-values in two separate places, which should not be confused. Firstly, they are employed in the form of the significance thresholds of p < 0.01 (for Arabidopsis) and p < 0.001 (for yeast), which are thresholds for a given shift class, and ensure that each such class contains only a small proportion of false positives. These thresholds are picked for technical reasons, and are therefore inevitably somewhat arbitrary. The second role of p-values is in the subsequent GO enrichment analysis for each class, where they measure the biological significance of the classification. The extremely small p-values we find in this context demonstrate that the shift classification is indeed biologically meaningful. 
In principle this method can be generalized to the case of three or more replicates, by choosing shifts in a higher-dimensional space. Note that the rapid growth of the volume of this space with m is likely to limit the feasibility of this generalization for m larger than three or four.

For all identified, uncharacterized modules in both yeast and Arabidopsis, further studies are needed to determine the relevant time scale of the observed shifts, and the nature of these shifts. Do these shifts act as part of signaling pathways that operate on the scale of seconds, as part of metabolic rhythms, or on some other, as yet unidentified, time scale?

Consider a pair of replicate datasets with M probesets or genes and N datapoints, which we write as M \u00d7 N matrices d1ij and d2ij. We convert these series to rank permutations for each probeset, resulting in two new matrices \u03c01ij and \u03c02ij. As an example with N = 4, consider a row in d1 reading '0.3, 0.5, 0.6, 0.2', for which the corresponding rank permutation in \u03c01 would be '3, 2, 1, 4', since 0.6 is the highest value, 0.5 the second highest, and so on. Thus each row of \u03c01 and \u03c02 contains a permutation of length N. In the real data ties are highly unlikely due to the high resolution of the measurements, and can be broken randomly if necessary.

The conversion to permutations simplifies the null model considerably, which makes it straightforward to measure the significance of the similarity between the replicates in a computationally efficient way. Rank permutations also form the basis of other correlation measures, such as Spearman's rank correlation.

We then calculate a measure of similarity \u03b3i of two rank profiles for a given shift s, defined separately for s \u2265 0 and for s < 0 over the window in which the shifted profiles overlap. For uniformly random permutations (which will occur for any i.i.d. randomly distributed data) we expect \u03b3i to follow a Gaussian distribution with mean \u03bc and standard deviation \u03c3. The p-value for a given \u03b3i is given by the probability that the same value of \u03b3i, or an even rarer value, occurs by chance. 
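The rank conversion described above, including the worked '0.3, 0.5, 0.6, 0.2' example, can be sketched as follows (function name ours):

```python
def rank_permutation(row):
    """Rank of each value within its row, with rank 1 for the highest
    value. Ties are broken by position; the paper breaks them randomly."""
    order = sorted(range(len(row)), key=lambda i: -row[i])
    ranks = [0] * len(row)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks
```

Applied to the example row, this reproduces the permutation '3, 2, 1, 4' quoted in the text.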
Hence the p-value based on the Gaussian distribution, for \u03b3i < \u03bc, is given by the corresponding Gaussian tail probability. These expressions follow from the mean and standard deviation of the difference between two uniformly distributed continuous random variables. We have confirmed the accuracy of these distributions using computer simulations (see Discussion). We are only interested in similarity, i.e. in cases where \u03b3i < \u03bc. Hence we can calculate a p-value for every gene i and every shift s under this condition. We can then record, for each gene, the shift at which the maximally significant p-value occurs, and the p-value itself. Note that the most significant shift between two replicates does not necessarily correspond to the shift with the lowest value of \u03b3i itself.

All probesets that display a shift with a significance value of p < 0.01 were selected for further analysis. AGI identifiers were assigned to probesets using data from the 2008-5-29 Affy_ATH1_array_elements file. The ChipEnrich program was modified by including genes enriched in individual root cell types as identified by previous work.

DAO performed most of the data analysis and participated in the conception and design of this study, as well as the writing of the manuscript. SMB primarily contributed to the conception and design of this study, and participated in the data analysis and writing of the manuscript. TMAF participated in the study's coordination and design. PNB contributed to the coordination and conception of this project. SEA was primarily involved in the design of this study and the writing of the manuscript. All authors read and approved the final manuscript.

Clustering of shift profiles identifies spatiotemporally regulated modules of genes. A figure showing clustering of shift profiles identifies spatiotemporally regulated modules of genes for shifts of +2, +3, +4, -2 and -5. 
For all shifts, relative expression by marker line is visualized in the left heatmap, and relative expression by longitudinal section in the two roots is visualized in the right heatmaps. The relative expression scale is visualized on the right. If clusters with greater than ten members were identified, these are indicated on the left side of each heatmap."}
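The shift-detection machinery described in the Methods above can be sketched end to end. The exact formula for \u03b3i did not survive extraction here, so this sketch takes \u03b3 as the sum of squared rank differences over the overlap window (an assumption; the paper's definition may differ), and it estimates the p-value by direct Monte Carlo over random permutations instead of the paper's closed-form Gaussian approximation:

```python
import random

def gamma(perm1, perm2, s):
    """Dissimilarity of two rank profiles at shift s: sum of squared rank
    differences over the overlapping window (smaller = more similar)."""
    n = len(perm1)
    if s >= 0:
        pairs = [(perm1[t + s], perm2[t]) for t in range(n - s)]
    else:
        pairs = [(perm1[t], perm2[t - s]) for t in range(n + s)]
    return sum((a - b) ** 2 for a, b in pairs)

def shift_p_value(perm1, perm2, s, trials=2000, seed=0):
    """Monte Carlo p-value: the chance that an independent random pair of
    permutations scores a gamma at least as small as the observed one."""
    rng = random.Random(seed)
    n = len(perm1)
    observed = gamma(perm1, perm2, s)
    hits = 0
    for _ in range(trials):
        a = list(range(1, n + 1))
        b = list(range(1, n + 1))
        rng.shuffle(a)
        rng.shuffle(b)
        if gamma(a, b, s) <= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # add-one avoids a zero p-value
```

Scanning shift_p_value over a range of s for each gene, and keeping the most significant shift, mirrors the per-gene procedure described above.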
+{"text": "Robustness in biological networks can be regarded as an important feature of living systems. A system maintains its functions against internal and external perturbations, leading to topological changes in the network with varying delays. To understand the flexibility of biological networks, we propose a novel approach to analyze time-dependent networks, based on the framework of network completion, which aims to make the minimum amount of modifications to a given network so that the resulting network is most consistent with the observed data. We have developed a novel network completion method for time-varying networks by extending our previous method for the completion of stationary networks. In particular, we introduce a double dynamic programming technique to identify change time points and required modifications. Although this extended method allows us to guarantee the optimality of the solution, it has relatively low computational efficiency. In order to resolve this difficulty, we developed a heuristic method for speeding up the calculation of minimum least-squares errors. We demonstrate the effectiveness of our proposed methods through computational experiments using synthetic data and real microarray gene expression data. The results indicate that our methods exhibit good performance in terms of completing and inferring gene association networks with time-varying structures.

Computational analysis of gene regulatory networks is an important topic in systems biology. A gene regulatory network is a collection of genes and their correlations and causal interactions. It is often represented as a directed graph in which the nodes correspond to genes and the edges correspond to regulatory relationships between two genes. Gene regulatory networks play important roles in cells. For example, gene regulatory networks maintain organisms through protein production, response to the external environment, and control of cell division processes. 
Therefore, deciphering gene regulatory network structures is important for understanding cellular systems, and might also be useful for the prediction of adverse effects of new drugs and the detection of target genes for the development of new drugs. In order to infer gene regulatory networks, various kinds of data have been used, such as gene expression profiles (particularly mRNA expression profiles), chromatin immunoprecipitation (ChIP)-chip data for transcription factor binding information, DNA-protein interaction data, and protein-protein interaction data.

These network models assume that the topology of the network does not change through time, whereas the real gene regulatory network in the cell might dynamically change its structure depending on time, the effects of certain shocks, and so forth. Therefore, many reverse engineering tools have recently been proposed that can reconstruct time-varying biological networks based on time-series gene expression data; Yoshida et al. developed one such approach.

As mentioned above, there have been many studies and attempts to analyze both time-independent and time-dependent networks from time-series expression data; however, gene regulatory systems in living organisms are so complicated that any mathematical model has limitations, and there is not yet a standard or established method for inference, even for time-independent networks. One of the possible reasons is that there exists an insufficient number of high-quality time-series datasets to reconstruct the dynamic behavior of the network. In other words, it is difficult to reveal a correct or nearly correct network based on a small amount of data that includes some noise. 
Hence, in our recent study, we proposed a new approach for the analysis of time-independent networks, called network completion. In this paper, we present two novel methods for the completion and inference of time-varying networks using dynamic programming and least-squares fitting (DPLSQ): DPLSQ-TV and a heuristic variant, DPLSQ-HS.

In this section, we present DPLSQ-TV, a DP-based method for the completion of a time-varying network. We assume that there exist m time points, which are divided into B + 1 intervals, where B indicates the number of change points. A different network is associated with each interval. We assume that the set of genes does not change; therefore, only the edge set changes according to the time interval. Let V = {v1,\u2026, vn} be the set of genes. Let E be the initial set of directed edges, and let E0, E1,\u2026, EB be the sets of directed edges, where Ei denotes the edge set for the ith interval.

Then, the problem is defined as follows: given an initial network G consisting of n genes, N time series datasets, each of which consists of m time points for n genes, and the positive integers h, k, and B, infer B change points and complete the initial network G by adding k edges and deleting h edges in total such that the total least-squares error is minimized. This results in the sets of edges E0, E1,\u2026, EB at the corresponding time intervals (see Figure ). It is to be noted that if the initial network is empty (E = \u2205), the problem corresponds to the inference of a time-varying network.

The dynamics of each node vi are determined by the following differential equation, where xi corresponds to the expression value of node vi, \u03c9 denotes random noise, and vi1,\u2026, vih are incoming nodes to vi. 
The second and third terms of the right-hand side of the equation represent the linear and nonlinear effects on node vi, respectively. For each node vi, we compute the least-squares error between the observed expression values yi(t), which correspond to the xi(t) in the mathematical model, and the values predicted by the model, where yi(t) is the observed expression value of gene vi at time t, and vi1, vi2,\u2026, vih are tentative incoming nodes to node vi. Incoming nodes to each node are determined so that the sum of these errors over all nodes is minimized, under the constraint that the total number of edges is equal to the specified number. In order to minimize the sum of least-squares errors for all genes along with determining the incoming nodes and corresponding parameters, DP is applied. Readers are referred to our previous work for the details.

In this subsection, we present our proposed method for network completion of time-varying networks by the addition of edges, and extend this to the general case in the following subsection. For simplicity, we assume that the set of nodes V and the set of initial edges E are given. Let the current set of incoming nodes to vi be {vi1,\u2026, vid}. We define the least squares error for vi during the time period between p and q as the minimum, over the model parameters, of the squared deviation between model and observation, where yi(t) denotes the observed expression value of gene vi at time t. The parameters needed to attain this minimum value can be computed by a standard least squares fitting method.

Let e\u2212(vi) = {vj1,\u2026, vjd} be the set of initial incoming nodes to vi. Let \u03c3kj,j+ denote the minimum least squares error when adding kj edges to the jth node during the time from p to q, where the added nodes vjl must be selected from V \u2212 vj \u2212 e\u2212(vj). 
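The per-node least-squares step can be sketched as follows. This is a deliberately simplified, linear-only version of the fit (the paper's model also includes a nonlinear term), solving the normal equations by Gaussian elimination; all names are ours:

```python
def fit_node_lsq(y_next, X):
    """Least-squares fit of y[t] ~ a0 + sum_j a_j * x_j[t] for one node,
    via the normal equations with partial pivoting.
    X: list of regressor time series; y_next: target series (same length).
    Returns (coefficients, residual sum of squares)."""
    m = len(y_next)
    cols = [[1.0] * m] + [list(x) for x in X]   # design matrix columns
    p = len(cols)
    # normal equations A a = b
    A = [[sum(cols[i][t] * cols[j][t] for t in range(m)) for j in range(p)]
         for i in range(p)]
    b = [sum(cols[i][t] * y_next[t] for t in range(m)) for i in range(p)]
    # forward elimination
    for k in range(p):
        piv = max(range(k, p), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, p):
            f = A[r][k] / A[k][k]
            for c in range(k, p):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    # back substitution
    a = [0.0] * p
    for k in range(p - 1, -1, -1):
        a[k] = (b[k] - sum(A[k][c] * a[c] for c in range(k + 1, p))) / A[k][k]
    rss = sum((y_next[t] - sum(a[i] * cols[i][t] for i in range(p))) ** 2
              for t in range(m))
    return a, rss
```

The returned residual sum of squares plays the role of the per-node error that the completion procedure sums and minimizes over candidate incoming-node sets.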
In order to avoid combinatorial explosion, we constrain the maximum kj to be a small constant, K, and let \u03c3kj,j+ = +\u221e for kj > K or kj + |e\u2212(vj)| \u2265 n.

Because this subsection considers network completion by the addition of edges, let c0 = 1 and cB+1 \u2212 1 = m, where c1,\u2026, cB are the change points. Then, the problem is stated as the minimization of the total least squares error over the change points and added edges. Here, we define D+ as the corresponding DP table; the entries of D+ can be computed by the following DP algorithm. It is to be noted that D+ is determined uniquely regardless of the ordering of nodes in the network, and the correctness of this DP algorithm follows by induction over the intervals.

Next, we define E+ analogously for the case cb+1 \u2212 1 = q; E+ can be computed by the following DP algorithm. E+ and the corresponding DP procedure are the methodologically novel points of this work, compared with our previous work.

Let \u03c3hj,kj,j denote the minimum least squares error for the time period between p and q when adding kj edges to e\u2212(vj) and deleting hj edges from e\u2212(vj), where the added and deleted edges must be disjoint. We constrain the maximum kj and hj to the small constants K and H. We let \u03c3hj,kj,j = +\u221e if kj > K, hj > H, kj \u2212 hj + |e\u2212(vj)| \u2265 n, or kj \u2212 hj + |e\u2212(vj)| < 0 holds. Then, the problem is stated as before, and the tables D and E are defined analogously. As in the previous subsection, D can be computed by DP, and E can be computed by the following DP algorithm.

In this subsection, we analyze the time complexity of DPLSQ-TV. Since completion by the addition of edges and completion by the deletion of edges are special cases of completion by the addition and deletion of edges, we focus on completion by the addition and deletion of edges. First, we analyze the time complexity required per least squares fitting. It is known that least squares fitting for a linear system can be done in O(mp2 + p3) time, where m is the number of data points and p is the number of parameters. In our proposed method, we assume that the maximum indegree is bounded by a constant, and the numbers of added and deleted edges in a given network are bounded by the constants K and H, respectively. In this case, the time complexity for least squares fitting can be estimated as O(m).

Next, we analyze the time complexity required for computing the \u03c3hj,kj,j. The total time required to compute \u03c3hj,kj,j for a fixed time period is O(mnK+1), and over all periods it is O(m3nK+1), because p and q are O(m).

Next, we analyze the time complexity required for computing the Ds. In this computation, we note that the size of table D is O(m2n3). Furthermore, in order to compute the minimum value for each entry in the DP procedure, we need to examine (H + 1)(K + 1) combinations, which is O(1). Hence, the time complexity for Ds is O(m2n3).

Next, we analyze the time complexity required for computing Es. We note that the size of table E is O(mn2), where we assume that B is a constant. Since the number of combinations for computing the minimum value using DP is O(mn) per entry, the computation time required for computing Es is O(m2n3). Hence, if we use N time series datasets, each of which consists of m points, the total time complexity becomes O(Nm3nK+1 + m2n3). Although this complexity is not small, it is allowable in practice if K \u2264 2 and n and m are not too large. Indeed, in the computational experiments below we used K = 2.

Although our algorithm DPLSQ-TV is guaranteed to find an optimal solution in polynomial time, the degree of the polynomial is not low, preventing the method from being applied to the completion of large-scale networks. Therefore, we propose a heuristic algorithm, DPLSQ-HS, to significantly improve the computational efficiency by relaxing the optimality condition. The reason why DPLSQ-TV requires a large amount of CPU time is that the least squares errors are calculated for each node by considering all possible combinations of incoming nodes and taking the minimum value of these. 
In order to significantly improve the computational efficiency, we introduce an upper limit on the number of combinations of incoming nodes. Although DPLSQ-HS does not guarantee an optimal solution, it allows us to speed up the calculation of the minimum least squares in the case of adding edges. A schematic illustration of the least squares computation is given in Figure . For each node vi, we maintain the M combinations of k incoming nodes with the M lowest errors at the kth step. Let Sik denote the set of M combinations computed at the kth step. At the kth step, for each combination {vi1,\u2026, vik\u22121} \u2208 Si1 \u222a Si2 \u222a\u22ef\u222a Sik\u22121, where i1 < i2 < \u22ef < ik\u22121, a new node is appended as the kth incoming node to vi. The calculated least squares errors are sorted, the M lowest values are selected, and the corresponding combinations are stored in Sik.

Although DPLSQ-HS can be applied to the addition and deletion of edges, we consider only additions of edges as modification operations in this subsection. DPLSQ-HS contributes to reducing the time complexity by imposing restrictions on the number of combinations of incoming nodes to each node. The following is the description of the algorithm to compute \u03c3k,i+ in DPLSQ-HS, where \u03c3k,i+ does not necessarily attain the minimum value and the meaning of \u201cstep\u201d is different from that in DPLSQ-TV: for each period [p, q], repeat the steps above, and let \u03c3k,i+ be the minimum least squares error among these top M combinations. The other parts of the algorithm are the same as in DPLSQ-TV.

In this subsection, we analyze the time complexity of DPLSQ-HS. Since DPLSQ-HS can be applied to additions and deletions of edges, we consider the time complexity of completion for adding and deleting edges. In our proposed method, we assume that the numbers of added and deleted edges in a given network are, respectively, bounded by the constants K and H. In this case, the time complexity for least squares fitting can be estimated as O(m).

As for the time complexity of computing \u03c3hj,kj,j, we assume that edges are added only to the nodes in the top M of the sorted list. Therefore, the number of combinations for the addition of kj edges, which is bounded by a constant K, is O(MK). It is well known that the sorting of n data items can be done in O(nlog\u2061n) time. Based on these assumptions, the time required for the computation of \u03c3hj,kj,j is O(mnlog\u2061n) for a fixed time period, and O(m3nlog\u2061n) in total, because p and q are O(m).

Furthermore, for the time complexity required for computing Ds and Es, the calculation process is the same as that in DPLSQ-TV. Therefore, the computation time for both Ds and Es is O(m2n3), as described above. If we use N time series datasets, each of which consists of m points, the time complexity becomes O(Nm3nlog\u2061n + m2n3). DPLSQ-HS has a lower time complexity than DPLSQ-TV, because O(m3nlog\u2061n) is much smaller than O(m3nK+1). Indeed, as shown in the computational experiments, DPLSQ-HS runs considerably faster.

We performed computational experiments using both artificial data and real data. All experiments were performed on a PC with an Intel Core(TM)2 Quad CPU (3.0\u2009GHz). We employed the liblsq library (http://www2.nict.go.jp/aeri/sts/stmg/K5/VSSP/install_lsq.html) for the least squares fitting method. In the artificial data model, \u03c9 is random noise taken uniformly at random from a fixed interval. For the artificial generation of the observed data yi(t), we used a model in which oi is a constant denoting the level of observation errors and \u03f5 is random noise taken uniformly at random from a fixed interval. 
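The top-M pruning that distinguishes DPLSQ-HS from DPLSQ-TV can be illustrated with a generic beam search over candidate regulator sets. This is a toy sketch, not the paper's algorithm: the function names and the error function are ours, and the error oracle stands in for the per-node least-squares fit:

```python
def beam_select_regulators(candidates, error_of, K, M):
    """Beam search over regulator sets: at step k, extend each surviving
    (k-1)-set by one later candidate, score all extensions, and keep only
    the M lowest-error sets. error_of(subset) returns a fitting error.
    Returns the best set of size <= K and its error."""
    beam = [((), error_of(()))]
    best = beam[0]
    for _ in range(K):
        scored = {}
        for subset, _ in beam:
            last = max((candidates.index(v) for v in subset), default=-1)
            for j in range(last + 1, len(candidates)):
                new = subset + (candidates[j],)
                if new not in scored:
                    scored[new] = error_of(new)
        if not scored:
            break
        beam = sorted(scored.items(), key=lambda kv: kv[1])[:M]
        if beam[0][1] < best[1]:
            best = beam[0]
    return set(best[0]), best[1]
```

Restricting each step to the M best partial combinations is exactly what replaces the exhaustive enumeration of incoming-node sets in DPLSQ-TV, trading optimality for speed.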
We demonstrate that our proposed methods can determine change time points quite accurately when the network structure changes. We employed the structure of the real biological network WNT5A [26] as the initial network G. As for the time series data, we generated an original dataset with change time points c1 = 10 and c2 = 20 by merging three datasets for G1, G2, and G3. Since the use of time series data beginning from only one set of initial values easily resulted in numerical calculation errors, we generated additional time series data beginning from 200 sets of initial values that were obtained by slightly perturbing the original data. Under the above model, we conducted computational experiments by DPLSQ-TV in which the initial network G was modified by randomly adding k0 edges and deleting h0 edges per node, resulting in G1, G2, and G3; additionally, we also conducted DPLSQ-HS experiments in which the initial network G was modified by randomly adding k0 edges per node, using the default values of k0 = h0 = 1. We evaluated the performance of these methods by measuring the accuracy of modified edges, the time point errors for time intervals, and the computational time for completion (CPU time). Furthermore, in order to examine how CPU time changes as the size of the network grows, we generated networks with 20 genes, 30 genes, and 40 genes by making 2, 3, and 4 copies of the original networks. We took the average time point errors, accuracies, and CPU time over 10 random modifications with several values of oi. In addition, we performed computational experiments on DPLSQ-TV and DPLSQ-HS using 60 genes, where additional time series data beginning from 100 sets (in place of 200 sets) of initial values were used, and G1, G2, and G3 were obtained by addition and deletion of edges. However, DPLSQ-TV took too long (more than 1000 sec.
per execution) and thus the result could not be included in the comparison. As for the time series data, we generated an original dataset with 30 time points including two change points. Ei and Ei\u2032 are, respectively, the sets of edges in the original network and the completed network in each time interval. The accuracy is 1 if all the added and deleted edges are correct and 0 if none of the added and deleted edges are correct. If we regard a correctly added or deleted edge as a true positive, \u2211i=0B(|Ei| \u2212 |Ei\u2229Ei\u2032|) corresponds to the number of false positives and h + k + \u2211i=0B(|Ei\u2229Ei\u2032| \u2212 |Ei|) corresponds to the number of true positives. The time point error is the average difference between the original and estimated values for the change time points, where ci\u2032 are the estimated change points. As for the computation time, we show the average CPU time. The results of the two methods are shown in the tables, although DPLSQ-TV took more than 1000 seconds per execution for n = 60 and thus its result could not be included (oi = 0.1). Therefore, the applicability of DPLSQ-HS is also limited in terms of accuracy, although it may still be useful for networks with n = 60 if the purpose is to identify change time points. It is also observed that DPLSQ-HS worked reasonably fast even for n = 20. The variances for DPLSQ-TV were 0.00602 and 0.00446 for oi = 0.3 and oi = 0.5, respectively. The variances for DPLSQ-HS were 0.01188 and 0.00732 for oi = 0.3 and oi = 0.5, respectively. This result suggests that DPLSQ-HS is less stable than DPLSQ-TV. However, the variances of DPLSQ-HS were less than twice those of DPLSQ-TV, which suggests that DPLSQ-HS retains some stability. Since DPLSQ-HS is a heuristic method, its results may be greatly influenced by the data.
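The true/false positive counts defined above can be computed directly from the edge sets. A small sketch, with h and k the total numbers of deleted and added edges as in the text:

```python
def edge_counts(original, completed, h, k):
    """Count true/false positives of network completion, following the
    definitions in the text: `original` and `completed` are lists of edge
    sets E_i and E_i' over the time intervals, and h and k are the total
    numbers of deleted and added edges."""
    # False positives: sum over intervals of |E_i| - |E_i ∩ E_i'|.
    fp = sum(len(E) - len(E & Ep) for E, Ep in zip(original, completed))
    # True positives: h + k + sum over intervals of |E_i ∩ E_i'| - |E_i|.
    tp = h + k + sum(len(E & Ep) - len(E) for E, Ep in zip(original, completed))
    return tp, fp
```

Representing edges as tuples in Python sets makes the interval-wise intersections one-liners.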
Therefore, we evaluated the stability of DPLSQ-HS by comparing the variance of its accuracy with that of DPLSQ-TV. In order to examine the effects of the number of change points B and the maximum numbers of added and deleted edges per node, K and H, on the least squares error, we performed computational experiments varying these parameters (one experiment per parameter setting). The resulting least squares errors for DPLSQ-TV were 5.495, 7.016, 7.875, 3.886, and 3.799 for the respective parameter settings. It is seen that the use of larger K and H resulted in smaller least squares errors; it is reasonable that more parameters result in a better least squares fit. However, the use of larger B did not result in smaller least squares errors, possibly because the addition of unnecessary change points increases the error if enough edges are not added. It is to be noted that, although the least squares errors are reduced, the use of larger K and H is not always appropriate because it requires much longer CPU time and may cause overfitting. We also compared our results with those obtained by the ARTIVA algorithm, using data from the life cycle of D. melanogaster and the cell cycle of S. cerevisiae. We examined the two proposed methods for the inference of change time points using gene expression microarray data and compared our results with those obtained using the ARTIVA algorithm. We applied our methods to two real gene expression datasets, with the underlying network structures extracted from the KEGG database. The first microarray dataset is the gene expression time series collected by Spellman et al., measured during the cell cycle of S. cerevisiae; we used 10 genes from this data with 71 time points. The second microarray dataset is the gene expression time series from the experiments by Arbeitman et al., measured during the D. melanogaster life cycle; we used 30 genes selected from this microarray data with 67 time points, which include three change time points. K = 3 and H = 0 were used with the S.
cerevisiae dataset and K = 2 and H = 0 were used with the D. melanogaster dataset. In this computational analysis, for both microarray datasets, we generated 200 datasets obtained by slightly perturbing the original data in order to avoid numerical calculation errors. Since the correct time-varying networks are not known, we only evaluated the time point errors and the average CPU time, where ci are the values of the change points in the original data and ci\u2032 are the estimated values. In the experimental analysis with the S. cerevisiae data, there seems to be almost no difference between the results of DPLSQ-TV and DPLSQ-HS with respect to the change time points, and both can correctly identify the time points where the network topology changes. For the D. melanogaster data, similar results are observed except for the change point i = 3. From the point of view of computational time, DPLSQ-HS performs significantly better than DPLSQ-TV; DPLSQ-HS runs about 46 times faster than DPLSQ-TV. Therefore, DPLSQ-HS allows us to significantly decrease the computational time. These results suggest that, in many cases, we can expect DPLSQ-HS to find a near-optimal solution, at least for change time points, while also speeding up the calculation. The results are shown in the tables. Furthermore, for the ARTIVA analysis, we employed both the above-mentioned S. cerevisiae and D. melanogaster microarray datasets, which consist of 71 measurements of 10 genes and 67 measurements of 30 genes, respectively, and tried to identify the change time points. Computational experiments on ARTIVA were performed under the same computational environment as that used for our methods. In this validation of the D. melanogaster data using the ARTIVA algorithm, it has been observed that the time intervals 18-19, 31\u201333, 41\u201343, and 59\u201361 contain more than 40% of all change points.
In order to compare with the ARTIVA results, we attempted to identify four change points using our proposed methods. The results of the comparative experiment using the D. melanogaster microarray data are shown in the table, where ci are the three change time points in the original data. Although DPLSQ-HS identified change time points similar to those identified by ARTIVA, the results of ARTIVA appear to be slightly better. This suggests that the ARTIVA algorithm shows slightly better performance with respect to the inference of change points than our proposed methods. However, ARTIVA does not determine change time positions but rather time intervals at which the network topology might change. Therefore, DPLSQ-HS is better suited for identifying change time positions at all time points."}
+{"text": "Complete transcriptional regulatory network inference is a huge challenge because of the complexity of the network and sparsity of available data. One approach to make it more manageable is to focus on the inference of context-specific networks involving a few interacting transcription factors (TFs) and all of their target genes. We present a computational framework for Bayesian statistical inference of target genes of multiple interacting TFs from high-throughput gene expression time-series data. We use ordinary differential equation models that describe transcription of target genes taking into account combinatorial regulation. The method consists of a training and a prediction phase. During the training phase we infer the unobserved TF protein concentrations on a subnetwork of approximately known regulatory structure. During the prediction phase we apply Bayesian model selection on a genome-wide scale and score all alternative regulatory structures for each target gene. We use our methodology to identify targets of five TFs regulating Drosophila melanogaster mesoderm development. We find that confident predicted links between TFs and targets are significantly enriched for supporting ChIP-chip binding events and annotated TF-gene interactions. Our method statistically significantly outperforms existing alternatives. Our results show that it is possible to infer regulatory links between multiple interacting TFs and their target genes even from a single relatively short time series and in the presence of unmodelled confounders and unreliable prior knowledge on training network connectivity. Introducing data from several different experimental perturbations significantly increases the accuracy. A major challenge for computational systems biology is the inference of gene regulatory networks (GRNs) from high-throughput data such as gene expression time-series. 
An experimental approach to identifying TF targets might involve the design of mutant strains with the TF perturbed and differences in the gene expression of all putative targets analyzed. Many computational methods have been introduced to infer or \u201creverse engineer\u201d GRNs from time-series expression data. The target identification methods of Barenco et al. and Honkela et al. are based on learning TF activities from a set of known targets. In this contribution we show that combining the idea of a training set of known targets with a non-linear regulation model can provide a very effective method for target identification. A distinguishing feature of our work is the use of a well-characterised (but not error-free) subnetwork which is used to learn protein activities for the regulating TFs of interest. The key ingredients of our approach are: i) the use of ODEs to model transcription, translation and mRNA/protein decay, ii) a known set of TFs that regulate transcription, and iii) data-driven inference of the model parameters and network structures by using a fully Bayesian statistical method. Consider the following dynamical models for the time-evolution of mRNA and TF protein abundances driven by gene transcription and TF protein translation: the transcription model relates the target gene mRNA concentrations, mj(t), to the regulator TF protein activities pi(t), and the translation model then relates the TF protein activities to the corresponding TF mRNA levels fi(t). This ODE model ties together the target gene mRNA concentration mj(t), the TF protein activities and the response function G(\u00b7) (see Methods). The equation also models mRNA degradation with rate dj, while bj represents a basal production rate and sj is a sensitivity parameter. The response function takes a sigmoidal form that non-linearly transforms the TF activities so that saturation effects are taken into account and the TFs can competitively or cooperatively activate or repress transcription. The response function is parameterised by \u03b8j, which determine the network structure and regulation model coefficients. 
These parameters include weights that can effectively model nth order reactions, thus approximating the effect of, for example, TF dimerisation. Similarly, the translation equation explains the production rate of the active TF protein as a function of its mRNA while accounting for protein degradation with rate \u03b4i. We assume that the main rate-limiting step in the production of active TF protein is transcription; thus the TF activity can be considered equivalent to the TF protein concentration. This is thought to be a reasonable assumption for the TFs in the Drosophila embryonic developmental system considered later, but in other systems TF activity may instead be controlled post-translationally. In the transcription equation, the TFs can jointly modulate the mRNA production rate of a target gene through the response function. In general, the TF protein activities, pi(t), will be difficult or impossible to measure. These continuous-time profiles must be inferred along with the parameters \u03b8j, dj, bj, sj and \u03b4i. Importantly, some individual parameters in \u03b8j quantify the interactions between TFs and genes, and the estimation of their values allows us to infer the network structure, i.e. to identify the subset of TFs that regulate the transcription of each gene. The full continuous-time mRNA functions mj(t) and fi(t) are also unobserved. A typical set-up is that we have noisy observations of these functions obtained at a set of discrete time points through gene expression analysis. Fitting the dynamical models to a biological system is carried out in the following two phases. 1. Training phase: we use the dynamical models to estimate the TF activities by using a small set of training genes. The approximate structure of this sub-network is assumed to be given, so that for these genes the regulating TFs are known to some degree. All other model parameters are unknown and are inferred from the data. In this phase both the transcription model and the translation model are used to estimate the TFs. 
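The transcription ODE described above can be illustrated with a simple forward-Euler integration. The logistic form of G and all parameter values below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def G(x):
    # Sigmoidal response function (logistic form assumed for illustration).
    return 1.0 / (1.0 + np.exp(-x))

def simulate_mrna(p, w, w0, b, s, d, m0, dt):
    """Forward-Euler integration of the transcription ODE
        dm/dt = b + s * G(w . p(t) + w0) - d * m
    where p is a (T, I) array of TF activities sampled every dt, w are the
    interaction weights, w0 the bias, b the basal rate, s the sensitivity
    and d the mRNA degradation rate."""
    m = np.empty(len(p))
    m[0] = m0
    for t in range(1, len(p)):
        dm = b + s * G(w @ p[t - 1] + w0) - d * m[t - 1]
        m[t] = m[t - 1] + dt * dm
    return m
```

With all weights zero the production term is constant, so m(t) relaxes toward a steady state set by the basal and degradation rates, which is a quick sanity check on the dynamics.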
Observations associated with both the mRNA of the training genes and the TF mRNAs are required. The training phase could be carried out without the translation model in cases where TF protein activity is regulated by post-translational modification. Extensive experimentation with artificial data reveals that, when appropriate, combining a translation model with TF mRNA observations greatly aids in estimation of the TF activities. 2. Prediction phase: once the TF activities have been estimated, each test gene (for which the regulating TFs are unknown) is processed independently and the parameters (\u03b8\u2217, d\u2217, b\u2217, s\u2217) are inferred. Here, only the transcription model is needed while the translation model is irrelevant. This phase is applied on a genome-wide scale and aims to identify the regulating TFs for each test gene. We consider an artificial gene network involving the TFs ANT, BEE, CAR and UNK affecting our system. We simulated data associated with two experimental conditions. The data are short, unevenly sampled time-series of 10 time points. In our first experimental condition there is considerable overlap between the TF concentrations of ANT and BEE, as shown in the figure. The purpose of our experiment with simulated data is to predict the set of regulating TFs for each gene using artificially generated mRNA measurements. Since the ground-truth network links are known, we can make a rigorous assessment of the ability of the model to identify the target genes of each of the TFs, as well as an assessment of the ability to predict non-regulation. The modelling is split into two distinct phases as described in the previous section. In the training phase, 30 genes with approximately known connectivity were used for learning the TF profiles. Specifically, to make the training phase more realistic we added 15% noise to the ground-truth network links in these 30 training genes. 
This resulted in 16 links between TFs and genes being changed, so that some of these links falsely became active and others were removed (i.e., active links became inactive). Notice that this noise in the network links adds an extra model mismatch in addition to the presence of the UNK TF, which is not part of the model. In the prediction phase these profiles were used to rank other potential targets of the TFs from the remaining 1000 genes. Full details on how the data have been generated are given in Methods, while the dataset is provided together with software that is available online. To assess the predictive ability of the model with respect to the amount of information present in the data, we consider three experiments. In the first experiment only data from the first experimental condition are used, in the second experiment only data from the second experimental condition are used, while in the third experiment all data from both conditions are considered. In the figures, the coloured solid lines show the estimated means and the shaded areas represent 95% posterior credible regions around the estimated means. Plots showing how the model fits the mRNA data in the training phase are presented in the Additional file. Here, we assume the synthetic mRNA data are produced by a single experimental condition, i.e. either the first or the second condition mentioned earlier. When considering the first condition, the true TF profiles for ANT, BEE, CAR and UNK are shown in the left plot of the figure. From the above experiment we can conclude that it is rather difficult to accurately predict network links between TFs and genes from experimental data obtained under conditions that do not sufficiently disambiguate the functionality of the TFs during the transcription process. 
Roughly speaking, the \u201csimilarity\u201d of some TFs causes the observed mRNA data to be well explained by alternative hypotheses associated with the presence/absence of these similar TFs, and makes it hard to statistically identify which of those TFs were actually driving the regulation process. We now consider a second series of observed mRNA measurements associated with an alternative simulated experimental condition comprising a perturbation of the biological system that better disambiguates the two (previously overlapping) TFs in terms of their influence on gene transcription. We first use only these new data instead of the data associated with the first experimental condition. This alternative perturbation changes significantly the protein activity for BEE, as shown on the left plot in the figure. In our third experiment we fit the models using all data from both experimental conditions. Including data from both experimental conditions allows for a more confident estimation of the TF profiles; this can be seen by contrasting the second to fourth plots in the first row of the figure. Furthermore, we obtain a significant increase in the predictive performance when identifying network links, as the green coloured ROC curves show. In this section we apply our method to a dataset of three independently repeated time-series of 12 time points collected hourly throughout Drosophila melanogaster embryogenesis in wild-type embryos. We study five TFs regulating Drosophila mesoderm development: Tinman (TIN), Biniou (BIN), Twist (TWI), Bagpipe (BAP) and Myocyte enhancer factor 2 (MEF2). Once the TF activities have been estimated, we use the model to predict the regulator TFs for a set of 6003 test genes which exclude the 92 genes used in the training phase. A web-based browser that displays how the model fits the mRNA data of test genes is available online. 
The confident predictions are significantly enriched for supporting evidence (p < 0.01 or less in all cases, using the tail probability of a hypergeometric distribution) and clearly outperform the maximum likelihood baseline and the Inferelator. We also carried out empirical bootstrap tests for each pair-wise comparison of methods, which confirm that the proposed methods statistically significantly outperform the other methods in most cases. We also compute enrichment p-values for these comparisons (p < 0.01, except p < 0.05 for regression with 6003 top predictions). Because of frequent non-functional binding, it makes sense to complement ChIP-chip binding evidence with independent functional evidence. A similar evaluation for DroID validation is shown in the corresponding figures. The protein degradation rates and the corresponding protein half-life estimates from the model are presented in the table. It may be thought that a typical short time-series expression dataset contains only very limited information about the structure of a GRN, as observed in a meta-analysis of methods proposed in the DREAM 2 competition. Many methods for the inference of GRNs from gene expression data require much more data, and data from a much greater diversity of experiments, than we consider here. However, for the Drosophila data we assumed knowledge of a well-characterised sub-network of the GRN, which is used to learn the TF activity profiles during the training phase, and in the present application we restrict ourselves to models of activation. Our results demonstrate improved performance over the Inferelator, but it should be acknowledged that we are solving a more restricted class of problem. Our method is also much more computationally demanding, as it marginalises over functional degrees of freedom. The data used here are very limited and therefore one must accept that the method will make many false predictions. To improve accuracy, predictions based on the analysis of expression data can be combined with evidence from complementary sources to identify a confident regulatory network structure. 
For example, for Drosophila we were able to show that the method makes predictions which are significantly enriched for TF and TF-pair binding identified using ChIP-chip experiments on the same system. Our method works by fitting and scoring differential equation models of transcriptional regulation. Initially we use the model to infer the temporal pattern of TF protein activity given a small subnetwork of mostly known structure. Subsequently we score alternative target gene regulation models to make genome-wide target predictions. By using a fully Bayesian procedure we are able to automatically balance model complexity with data fit when scoring alternative models. Our method is readily parallelizable in the prediction phase, making it a practical tool for genome-wide network inference. On artificial data we showed that our method is able to cope with the existence of unknown regulating TFs that are not modelled, and we showed that data from more diverse experimental conditions can help disambiguate between TFs that have similar profiles in a single condition. However, as our Drosophila example shows, even a single wild-type time course can be highly informative about the underlying regulatory network if the TFs of interest are changing over time. By combining the model predictions with other independent sources of evidence, e.g. from ChIP and spatial expression patterns, it will be possible to identify a confident condition-specific regulatory network. We have introduced a computational approach for genome-wide inference of the targets of multiple regulating TFs given time-series gene expression data. 
Using a time course measuring changes in wild-type expression during the embryonic development of Drosophila, software and a web-based browser displaying the results of the experiment are both available online. The transcription and translation equations are ordinary differential equations (ODEs) having the general form given at the beginning of the Results section. The response function G(\u00b7) non-linearly transforms the TF protein activities. The interaction weights wj form an I-dimensional real-valued vector quantifying the interactions between the jth target gene and the I TFs. These interaction weights quantify the network links so that when wji = 0 the link between the jth gene and the ith TF is absent; when wji is negative or positive, the TF acts as a repressor or activator, respectively. wj0 is a real-valued bias parameter. The set of scalar parameters \u03b8j in the response function G(\u00b7) is defined to be \u03b8j = {wj, wj0}. Since the transcription ODE model is linear with respect to mj(t), it can be solved explicitly as shown in the supplementary information. The above transcription ODE model generalizes previous single-TF models that were used to estimate the concentration function of a single latent TF. While a fixed form of G(\u00b7) was considered in all our experiments, our algorithms could easily be adapted to handle different forms for G(\u00b7). The parameters {\u03b8j, dj, bj, sj, \u03b4i} are model parameters in the ODEs which need to be estimated under the constraint that {dj, bj, sj, \u03b4i} attain non-negative real values, while \u03b8j = {wj, wj0} can attain both positive and negative real values. When we search for TFs that act only as activators, wj is constrained to be non-negative. Furthermore, the simple linear translation equation can be solved explicitly as shown in the supplementary information. 
A more detailed description of the ODE models is given in section 2 of the supplementary information. The dynamical models contain a set of unknown quantities: the transcription model parameters for each of the J target genes and the unobserved TF protein activities. For each gene j we also have a binary vector xj, where xji = 1 indicates the presence of the link between the gene and the ith TF, while xji = 0 indicates the absence of the link. Prior distributions are assigned to all unknown quantities. The prior over each protein activity pi(t) was defined through the translation ODE and the placement of a suitable prior on the TF mRNA function, fi(t), through the use of Gaussian processes. A more detailed description of the training modelling phase is given in section 3 of the supplementary information. The prediction phase involves independently processing each test gene and probabilistically predicting its regulating TFs. Let \u2217 denote a test gene, so that y\u2217 is the associated vector of observed mRNA measurements. This gene can be regulated by any combination of the I TFs, so the connectivity vector x\u2217 can take 2I possible values. To infer the network links, it suffices to compute the posterior probability for each value of the discrete random variable x\u2217. Using Bayes\u2019 rule, this probability is proportional to the predictive density p(y\u2217|x\u2217, Y) times the prior probability p(x\u2217|Y), where Y denotes the data used in the training modelling phase. To obtain the above, we need to compute the predictive density p(y\u2217|x\u2217, Y) for any possible value of x\u2217, together with the associated probabilities p(x\u2217|Y). While p(x\u2217|Y) could be computed from the frequencies of the known connectivity vectors in the training genes, this is unreliable since the small set of training genes may not be representative of the prior distribution of links between TFs and genes. 
Therefore, we set these probabilities to uniform values, so that the posterior probability in Equation (1) becomes proportional to the predictive density value p(y\u2217|x\u2217, Y). This latter quantity is intractable since it requires an integration over the parameters. We approximate it using a novel fast approximation to a marginal likelihood, described in detail in section 4.1 of the supplementary information, that follows ideas similar to Chib\u2019s approximation. Given the estimated probabilities, we report: \u00b7 Maximum a posteriori (MAP) network configuration: the most probable setting of x\u2217. \u00b7 Marginal probability of a single link: the posterior probability with which the link between the test gene and the ith TF is present. Similarly, we can compute the marginal probability for a pair of links. A more detailed description of the prediction modelling phase is given in section 4 of the Supplementary Information. The baseline method, which was used in the experiments on Drosophila, follows exactly the same structure as the Bayesian approach with the following two differences. Firstly, the model parameters (such as kinetic parameters in the ODEs) were not treated in a Bayesian manner; instead they were obtained by maximum likelihood, which provides point estimates. Secondly, each protein function, pi(t), was deterministically estimated by the translation ODE model, setting the driving TF mRNA function, fi(t), to a piece-wise linear interpolation computed from the TF mRNA observations. Apart from the above differences, prediction using the baseline method is done exactly analogously to the Bayesian case. For the regression method applied to Drosophila there are five TFs, and hence we can enumerate all 32 possible regression models and select the best model using cross-validation. We compared our method against Inferelator 1.1, downloaded from http://err.bio.nyu.edu/inferelator/. This is the most recent version for which source code is available and which can be easily used for new data. 
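The prediction-phase scoring can be sketched as an enumeration of all 2I connectivity vectors under a uniform prior; `log_marglik` below is an assumed stand-in for the paper's Chib-style marginal-likelihood approximation:

```python
from itertools import product
import math

def posterior_over_links(I, log_marglik):
    """Score all 2**I regulator combinations for one test gene. With a
    uniform prior p(x*), the posterior is proportional to the (approximate)
    marginal likelihood p(y*|x*); `log_marglik` is an assumed callback."""
    configs = list(product([0, 1], repeat=I))
    logps = [log_marglik(x) for x in configs]
    mx = max(logps)                      # subtract max for numerical stability
    ws = [math.exp(lp - mx) for lp in logps]
    Z = sum(ws)
    post = {x: w / Z for x, w in zip(configs, ws)}
    # MAP network configuration: the most probable setting of x*.
    map_config = max(post, key=post.get)
    # Marginal probability that TF i regulates the gene.
    marginals = [sum(p for x, p in post.items() if x[i] == 1) for i in range(I)]
    return post, map_config, marginals
```

With I = 5 as in the application above, only 32 models are scored per gene, and each gene is independent, which is why this phase parallelizes trivially.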
We set each gene in its own cluster but otherwise used the default settings. We interpreted the maximum of the absolute values |\u03b2i| of all weights corresponding to a specific regulator, alone or in combination with another, as the counterpart of the posterior probability for ranking the predictions. For pairs, the corresponding value was computed from \u03b21 and \u03b22, the weights of the components (x1, x2) of the pair, and \u03b23, the weight of the interaction term. The maximum |\u03b2i| was also used in the DREAM3 challenge submission of the Inferelator team. The training set was constructed, as previously described, from the 310 cis-regulatory modules (CRMs) collected in the ChIP training set. We downloaded the TF-gene interaction database from DroID. Genes with no interactions in the database were excluded from the validation to avoid possible problems due to annotation incompatibilities. To generate TF mRNA observations, noise was added to the value fi(tk); negative values were truncated to zero. We generated synthetic mRNA time-series data that correspond to 1030 target genes and four transcription factors: ANT, BEE, CAR and UNK. The TF activities are depicted in the first column of the figure, with model parameters (\u03b8j, dj, bj, sj) selected as follows. Each interaction weight wji for the TFs ANT, BEE and CAR was selected from a distribution under which, with some probability, wji = 0 and the ith TF does not regulate the jth gene. This procedure generates random sets of regulating TFs so that, on average, each target gene has approximately two regulating TFs. Each bias parameter wj0 was drawn from a Gaussian, and the kinetic parameters (dj, bj, sj) plus an initial condition parameter aj (see supplementary information) were selected randomly from an empirical distribution obtained by applying the dynamical models to the 6095 genes (the 92 training genes plus the 6003 test genes) in the Drosophila data. This was done to obtain kinetic parameters that produce realistic mRNA profiles that closely resemble real gene expression data. 
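The link-sampling scheme can be sketched as follows. Since the exact distributions are elided in this excerpt, the nonzero-link probability of 2/3 (giving ~2 regulators out of the 3 modelled TFs per gene on average) and the standard-normal weight and bias distributions are assumptions:

```python
import numpy as np

def sample_weights(n_genes, n_tfs=3, p_reg=2/3, seed=0):
    """Sample interaction weights so that each gene has ~2 regulators out of
    n_tfs TFs on average. The nonzero-weight and bias distributions below are
    illustrative assumptions, not the paper's exact choices."""
    rng = np.random.default_rng(seed)
    active = rng.random((n_genes, n_tfs)) < p_reg      # which links exist
    # Zero weight means the i-th TF does not regulate the j-th gene.
    w = np.where(active, rng.normal(0.0, 1.0, (n_genes, n_tfs)), 0.0)
    w0 = rng.normal(0.0, 1.0, n_genes)                 # bias parameters
    return w, w0
```

Drawing the zero/nonzero pattern first and the weight magnitudes second keeps the expected number of regulators per gene at p_reg * n_tfs by construction.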
Summaries of the values of these parameters are given in the table. To generate mRNA observations for the target genes, we simulated the transcription ODE given the known TF activities and model parameters, and added noise to the value mj(tk); negative values were truncated to zero. ChIP: Chromatin immunoprecipitation; CRM: cis-regulatory module; DREAM: Dialogue for Reverse Engineering Assessments and Methods; GRN: Gene Regulatory Network; MAP: Maximum a posteriori; MCMC: Markov chain Monte Carlo; MSE: Mean squared error; ODE: Ordinary differential equation; ROC: Receiver operating characteristic; TF: Transcription factor. The authors declare that they have no competing interests. MT developed the MCMC methodology and performed the simulations together with AH. AH and MT developed the validation method. MT, AH, NL and MR designed the MCMC method. MT, AH, ML and MR were involved in drafting the manuscript. All authors read and approved its final version. Supplementary Information: a more detailed technical description of the methods and supplementary figures. Posterior probabilities of alternative regulation models for Drosophila."}
+{"text": "Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships, with any span of time between genes, from the various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for cell-cycle study, and the results were compared against several related approaches. The outcomes have shown that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series datasets was performed. In this case, the experimentation has exhibited the soundness and scalability of the new method, which inferred highly-related, statistically-significant gene associations. In summary, a novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available datasets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. 
The genome encodes thousands of genes whose products enable cell survival and numerous cellular functions. The amount and the temporal pattern in which these products appear in the cell are crucial to the processes of life. Gene Regulatory Networks (GRNs) govern the levels of these gene products. A GRN is the collection of molecular species and their interactions, which together control gene product abundance [2]. Innovations in experimental methods have enabled large-scale studies that allow parallel genome-wide gene expression measurements of the products of thousands of genes at a given time, under a given set of conditions and for several cells/tissues of interest. This technology, called DNA microarray, introduces a variety of data analysis issues that are not present in traditional molecular biology. Over the past few years, several statistical and artificial intelligence techniques have been proposed to carry out the reverse engineering of GRNs from monitoring and analyzing gene expression data [6]. Another important aspect to be considered, when dealing with this biological problem, is the manner in which the temporal patterns of a GRN are captured, as has been noted in other studies [9]. In this paper, a new machine-learning approach for the inference of time-lagged rules from time series gene expression data is assessed. The discovered relationships, which represent potential interactions between genes, may be used to predict the gene expression states of a gene in terms of the gene expression values of other genes; in this way, a putative GRN may then be reconstructed by applying and combining these rules. The approach offers several relevant and distinguishing features in relation to most of the existing methods. First of all, the gene expression value discretization criterion performed in this work is neither arbitrary nor uniform. Secondly, it can infer rules with multiple time-delays. 
Also, the results can be easily interpreted since the rules are derived from schemes that classify the different regulation states. As well, the algorithm can infer the relationships between genes automatically from multiple microarray time series data. Finally, the new method is capable of processing large-scale datasets in order to perform genome-wide studies. The rest of the paper is organized as follows: in the next subsection, several machine learning techniques available in the literature for GRN inference are overviewed. Then, the underlying methodology and the main characteristics of the new algorithm are presented. Next, two experimental phases are described: the first one consists of a detailed comparison with several related methods; the second one contains a performance analysis of the method on a genome-wide scale. Finally, some conclusions are put forward. As mentioned above, several statistical and artificial intelligence techniques have been proposed in order to reconstruct a GRN from gene expression data. In this section, some of the approaches from the area of machine learning are summarized; for a more detailed review please refer to [3,6]. Clustering techniques are among the most used computational strategies for analyzing microarrays [16]. The threshold-detection procedure operates on the sorted list L of the expression values of a gene i: search for the element e at which the classification error is the minimum, and return (L[e]+L[e+1])/2 as the TDT of the gene i. The consensus process is governed by a parameter, RCA, which specifies the minimum proportion of datasets in which a rule must predict well in order to be returned by the algorithm as a potential relationship. This parameter does not impose any order of importance among the datasets and thus, all of them have the same weight in the consensus process. Thereby, for example, if the algorithm is executed with 10 time series datasets and the RCA parameter is set to 0.60, it means that the rules returned by the algorithm predict well in at least any 6 datasets, no matter which ones. 
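The RCA consensus described above can be sketched in a few lines. The following is a hypothetical illustration (function and variable names are ours, not from the paper), assuming a rule's per-dataset performance has already been evaluated:

```python
def passes_consensus(per_dataset_ok, rca):
    """Keep a rule only if it predicts well in at least rca * n of the n
    datasets, regardless of which ones (no dataset outweighs another)."""
    return sum(per_dataset_ok) >= rca * len(per_dataset_ok)

# With 10 datasets and RCA = 0.60, support from any 6 datasets suffices.
support = [True, True, False, True, True, False, True, False, True, False]
print(passes_consensus(support, 0.60))  # True  (6 of 10 >= 6)
print(passes_consensus(support, 0.75))  # False (6 of 10 <  7.5)
```

Because only the count matters, the check is symmetric in the datasets, which is exactly the "no order of importance" property stated above.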
In this sense, and in order to set this parameter, the researchers must take into account the number of datasets available and, following the previous example, a question like this should be answered: is it enough evidence of a feasible regulatory relationship for a rule to be supported by at least 6 of 10 (RCA = 0.60) datasets? The answer will naturally depend on the biological nature of the experiments and on the criterion of the researcher. This type of consensus process does not limit the number of microarray datasets employed in the inference process. Thus, the following question might arise: is it necessary to assess the rules in all the microarray datasets? A straightforward answer is no; this is the purpose of the RCA parameter introduced on the consensus process. The expression values of a gene X are mapped to the values -1 and 1 using a threshold function \u03b4(X, i). Note that the expression value required by a regulator gene r to activate (or inhibit) a gene i1 is not necessarily the same value required by the same gene r to activate (or inhibit) a gene i2. For this reason, a more flexible and dynamic threshold-selection policy that calculates a specific regulation threshold for each pair of genes is applied in GRNCOP2, as it was previously employed in GRNCOP. Both algorithms were evaluated on Saccharomyces cerevisiae cell-culture time series, with reference interactions derived from Yeastnet [33]. The precision and score metrics achieved by both algorithms in each of the 56 runs are analyzed w.r.t. the Coverage Percentage of the Combinatorial Search Space (namely CP-CSS), i.e., the percentage of associations returned by the methods in relation to all possible gene pair-wise combinations. The outcomes show that GRNCOP2 outperforms GRNCOP in several of the proposed metrics, whereas both algorithms perform significantly above the random selection, as expected. In particular, while GRNCOP2 is on average more precise and more specific than GRNCOP, the latter recovers on average a larger number of the \"relevant interactions\" (i.e. it is more sensitive). These results may be explained by the fact that GRNCOP actually recovers on average twice the amount of the associations obtained by GRNCOP2. 
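A minimal sketch of the per-gene threshold and two-state discretization described above. The index e and the error criterion that selects it are simplified here, and all names are ours:

```python
def midpoint_threshold(values, e):
    """Regulation threshold taken as the midpoint between two consecutive
    sorted expression values, (L[e] + L[e+1]) / 2. GRNCOP2 chooses e per
    gene pair by minimizing a classification error (omitted here)."""
    L = sorted(values)
    return (L[e] + L[e + 1]) / 2.0

def discretize(values, threshold):
    """Map expression values to the two regulation states -1 and 1."""
    return [1 if v > threshold else -1 for v in values]

expr = [0.2, 1.4, 0.9, 2.1, 0.5]
t = midpoint_threshold(expr, 2)       # midpoint of 0.9 and 1.4
print(round(t, 2))                    # 1.15
print(discretize(expr, t))            # [-1, 1, -1, 1, -1]
```

Because the threshold is recomputed per gene pair, the same regulator can have different activation levels for different targets, which is the flexibility argued for above.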
However, the values in Table show the rules found by GRNCOP2 to be more precise and with higher scores than those found by GRNCOP. This is particularly relevant since this behavior evidences the improvements achieved by the modifications to the inference algorithm previously detailed. The third observation has to do with the different shapes in the distribution of the points of both methods in the figures. Along with the high number of associations discussed above, it seems that GRNCOP shows less variation in the precision and score values achieved w.r.t. those obtained by GRNCOP2. However, this can be explained by the fact that GRNCOP is almost insensitive to variations of its SCP parameter over the values employed in this comparison. In addition, the specificity (sensitivity) achieved by the former is higher. Moreover, although GRNCOP is able to recover almost 80% of the relevant associations in all the cases, this is due to the large number of interactions returned by the algorithm, as was previously discussed. Therefore, these results show that, in this case study, GRNCOP2 performs better than GRNCOP, and that the modifications proposed in the new methodology improve the inference process, since the results are more relevant in terms of the precision, sensitivity, specificity and score metrics. Finally, it is also important to analyze the behavior of both algorithms in relation to the RCA parameter. With an RCA of 1, the rules found by the three approaches were filtered in accordance with the accuracy reported: only the rules which achieved an accuracy of at least 0.75 on the three datasets were selected for this study. In this case, an Accuracy of 0.75, an SCP of 0.95, an RCA of 1 and W = 0 were employed. Table also reports a second comparison in which W was set to 5 and the simultaneous rules were removed. 
This parameterization has been established as follows: W = 0 denotes that GRNCOP2 will only perform the search of the simultaneous rules; W = 5 denotes that the search will be carried out upon five units of time-delay; RCA = 1 says that the rules must predict well in all the datasets; SCP = 0.95 aims to obtain rules of the cases -3, -2, 2 and 3 with high TP (TN) rates; and the Accuracy = 0.75 is intended to represent the same level of accuracy of the other methods, although this is not necessarily true due to the different criteria employed in each algorithm for the evaluation of the rules. Note that the resulting values of the metrics are reported in Table . In this section, the performance of GRNCOP2 in terms of the proposed metrics w.r.t. other algorithms described in the literature is compared. This comparison is limited to the results reported in [11,12]. GRNCOP2 recovers as many relevant interactions (sensitivity values) as Li et al. with the same level of precision. Although these results are not conclusive in the determination of the best method, since they are limited to only one case of study at one level of accuracy, they provide insight regarding the real performance of the proposed approach. In this sense, these observations clearly indicate that GRNCOP2 is a method capable of inferring relevant interactions with high levels of precision that other methods of the literature are unable to find. As shown, GRNCOP2 performs equally or better at this level of accuracy in terms of almost all the proposed metrics. The differences w.r.t. the referential methods are more evident in the case of the simultaneous rules. It is also well known that in budding yeast the G1 cyclins regulate CLB proteolysis. This is consistent with the inhibitory relationships inferred between G1- and G2-specific genes: +/-CLB1 0\u2192 -/+CLN2, +/-CLB6 0\u2192 -/+CLB1 and +/-CLB2 0\u2192 -/+CLN2. In particular, the last rule was only inferred by GRNCOP2. 
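To make the W parameterization concrete, here is a hypothetical sketch (our names) of how the accuracy of a single time-lagged rule +/-A w\u2192 +/-B could be evaluated on one discretized series: the state of A at time t should predict the state of B at time t + w.

```python
def rule_accuracy(states_a, states_b, w, sign=1):
    """Fraction of time points where the state of A at t predicts the
    state of B at t + w; sign=+1 for +/-A w-> +/-B, and sign=-1 for the
    inhibitory form +/-A w-> -/+B."""
    n = len(states_a) - w
    hits = sum(1 for a, b in zip(states_a[:n], states_b[w:]) if b == sign * a)
    return hits / n

a = [1, 1, -1, -1, 1, 1]
b = [1, 1, 1, -1, -1, 1]                   # roughly a delayed copy of a
print(rule_accuracy(a, b, w=1))            # 1.0: a perfect one-step rule
print(round(rule_accuracy(a, b, w=0), 2))  # 0.67: the simultaneous rule fits worse
```

With W = 5, a search of this kind would be repeated for w = 0..5; a rule is then reported only when its accuracy clears the Accuracy threshold in at least an RCA fraction of the datasets.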
The reader is referred to Measday et al. [43] for additional details. With regard to SIC1, it is well known that this gene is an inhibitor of CLB complexes, and that it is active during the G1 phase - together with CLB5 and CLB6 - inhibiting CLB1 and CLB2. This knowledge is reflected by the rule +/-SIC1 0\u2192 +/-CLB5 inferred by GRNCOP2. CDC20 and SWI5 are transcribed later in the S/G2 phase, which is captured by the rule +/-SWI5 0\u2192 +/-CDC20. This rule was not detected by the methods compared with GRNCOP2. Printz et al. reported that CLB2 stimulates the synthesis of CDC20. This feature is captured by the rule +/-CLB2 0\u2192 +/-CDC20. The protein SWI4 is a component of the SBF complex, which controls the expression of genes during phase G1. This is consistent with the effect of SWI4 on the genes expressed in the G1 phase, as represented by the rule +/-SWI4 0\u2192 +/-CLB5. These observations offer evidence of the biological relevance of the association rules inferred by GRNCOP2. Finally, the opposite behavior between G1- and G2-specific genes - as evidenced by rules obtained from the analysis of simultaneous time-points of the microarray datasets - turns into similar activation patterns when some time-delay is considered, as a consequence of the pattern comparison through the different cellular phases. This is the case of the following rules: +/-CLB1 3\u2192 +/-CLB5, +/-CLB1 3\u2192 +/-CLB6, +/-CLB2 3\u2192 +/-CLB5, +/-CLB2 3\u2192 +/-CLB6, +/-CLB5 4\u2192 +/-CLB2 and +/-CLN2 4\u2192 +/-CLB2. In a similar way, when GRNCOP2 compares the activation patterns of genes with high expression levels during the G1 phase with the expression pattern of these same genes during the G2 phase, some opposite and logical relationships emerge: +/-CLB5 3\u2192 -/+CLB6, +/-CLB6 3\u2192 -/+CLB5, +/-CLN1 2\u2192 -/+CLB6, +/-CLN2 2\u2192 -/+CLB6, +/-CLN2 3\u2192 -/+CLB6 and +/-SWI4 4\u2192 -/+CLB5. Take for example the rule +/-SWI4 4\u2192 -/+CLB5, which has a contradictory interaction with the rule +/-SWI4 0\u2192 +/-CLB5; Figure illustrates this case. 
In other words, if GRNCOP2 matches the behavior of a G1-cycling gene in G1 phase with the behavior of a G2-cycling gene in G2 phase, a positive correlation is inferred, as in the rules discussed above. Apart from the previous analysis, it is necessary to clarify that we do not claim that the rules inferred by GRNCOP2 always represent confident regulatory associations between genes. We think that our rule-extraction approach can be useful for the identification of some promising hypotheses, whose corroboration by biological experiments will always be mandatory in order to obtain curated new knowledge. In addition to this, it should be clear that important known interactions will not be found by GRNCOP2 (or by any other data-driven approach) if the microarray data does not contain correlations among the genes involved in such relations in the time-lags being analyzed. The aim of this study is to show the usefulness and capability of GRNCOP2 in genome-wide studies. To account for this, we have applied the proposed algorithm to several microarray time series datasets [50-54] for the Saccharomyces cerevisiae organism, downloaded from the Gene Expression Omnibus (GEO) database. In order to perform rule inferences from these datasets, a few preliminary steps were performed. Since the list of genes reported in each dataset slightly differs from the other datasets, we have selected those genes that have been measured in all the studies. Moreover, this list was filtered according to those genes of the benchmarking databases described before. This results in a final list of 5245 yeast genes on which this study was focused. 
Additionally, the samples of some datasets [51,53] required further preprocessing. For this analysis, 63 runs of the GRNCOP2 algorithm were performed, which result from varying the Accuracy parameter from 0.70 to 1 with increments of 0.05 and the RCA parameter from 0.60 to 1 with increments of 0.05. Only the rules with a span of up to 4 time-delay units (W = 4) were inferred, since we consider that this value is appropriate (regarding its magnitude) to assess the genome-wide scalability of the algorithm. However, in order to obtain meaningful time-lagged relationships between genes, researchers are encouraged to follow the recommendation given by (5), considering their hypotheses about the time-delayed regulations that may be present in the experiments. The SCP parameter was fixed at 0.95 following the suggested criterion, as the objective is to analyze the behavior of the algorithm while varying the proportion of datasets that support the rules. Each run took 30 min of execution on a six-core processor with 8 GB of RAM. As regards the results, Figure shows the precision and score metrics on the reference sets and the number of associations achieved by GRNCOP2 in each run. The points of the upper-right corner of the figures (where the Accuracy and the RCA parameters get closer to 1) are omitted since the algorithm was unable to obtain any rule with those parameter values. The details of each run are available in the additional file. In contrast with its predecessor, the method presented in this article is a new algorithm that constitutes a relevant evolution of the previous method due to the challenges imposed by the proposed improvements. The new algorithm incorporates novel features such as the inference of rules with multiple time-delays over an unlimited number of time series datasets, and improvements over the whole inference process. This last feature was demonstrated by the fact that the results achieved by GRNCOP2 are significantly better than those obtained by the previous version. 
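The 63 runs follow directly from the two parameter grids; a quick sketch confirming the run count (grid values as stated in the text, variable names ours):

```python
# Accuracy swept from 0.70 to 1.00 and RCA from 0.60 to 1.00, step 0.05.
accuracy_values = [round(0.70 + 0.05 * i, 2) for i in range(7)]  # 0.70 .. 1.00
rca_values = [round(0.60 + 0.05 * i, 2) for i in range(9)]       # 0.60 .. 1.00
runs = [(a, r) for a in accuracy_values for r in rca_values]
print(len(runs))  # 63 runs, one per (Accuracy, RCA) pair
```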
Likewise, the relevance of the new method became more evident since the scores achieved by GRNCOP2 were superior to those obtained by other related algorithms in terms of the proposed metrics. In addition, the relationships inferred by GRNCOP2 proved to be biologically relevant. Moreover, it was able to obtain new potential interactions between genes, consistent with previous biological knowledge, that were not discovered by the other methods. Additionally, the ability of GRNCOP2 to perform genome-wide studies was assessed. In this regard, a study was performed over several genome-wide time series datasets, for which the proper functioning of the algorithm in terms of the proposed metrics was discussed. Also, an ontological analysis showed that the results were significant in biological terms, since the genes of the discovered sub-networks were found to be highly related in statistical terms. However, this study does not claim that the data-driven machine learning approach proposed in this paper is sufficient to infer biologically meaningful regulatory networks. Nevertheless, this tool offers significant evidence to aid scientists in exploring and identifying biologically relevant associations, whose assessment by biological experiments is obligatory in order to achieve curated new knowledge. CAG designed and programmed the algorithm GRNCOP2, conducted the computational experiments, proposed and contrasted performance metrics among the different methods, and drafted the manuscript. JAC participated in the design and coordination of the study and strongly contributed to improving the draft of the manuscript. IP is the author of GRNCOP (the ancestor of GRNCOP2), designed and coordinated the study, and performed and wrote the biological relevance analysis of the association rules inferred for cycling yeast genes. All authors read and approved the final manuscript. Individual values of the metrics for each run of GRNCOP2 and GRNCOP. 
The individual results of each run of both algorithms, measured in terms of the precision, sensitivity, specificity and score metrics regarding the reference sets, are depicted in the table of the file. This is the information used in the comparison of GRNCOP2 and GRNCOP in subsection A of the comparative study. Individual values of the metrics for each run of GRNCOP2 in the genome-wide study. The individual results of each run of the GRNCOP2 algorithm, measured in terms of the precision, sensitivity, specificity and score metrics regarding the reference sets, are depicted in the table of the file. This is the information used in the discussion about the performance of GRNCOP2 in the genome-wide study. Rules of the GRN corresponding to Figure 6. The rules obtained for the genome-wide study with an Accuracy, RCA, SCP, and W of 0.75, 0.75, 0.95 and 4, respectively, are reported in a tab-separated-value file. The last two columns indicate the Accuracy and RCA achieved for each rule."}
+{"text": "Temporal analysis of genome-wide data can provide insights into the underlying mechanism of the biological processes in two ways. First, grouping the temporal data provides a richer, more robust representation of the underlying processes that are co-regulated. The net result is a significant dimensional reduction of the genome-wide array data into a smaller set of vocabularies for bioinformatics analysis. Second, the computed set of time-course vocabularies can be interrogated for a potential causal network that can shed light on the underlying interactions. The method is coupled with an experiment for investigating responses to high doses of ionizing radiation with and without a small priming dose. From a computational perspective, inference of a causal network can rapidly become computationally intractable with the increasing number of variables. Additionally, from a bioinformatics perspective, larger networks always hinder interpretation. Therefore, our method focuses on inferring the simplest network that is computationally tractable and interpretable. The method first reduces the number of temporal variables through consensus clustering to reveal a small set of temporal templates. It then enforces simplicity in the network configuration through the sparsity constraint, which is further regularized by requiring continuity between consecutive time points. We present intermediate results for each computational step, and apply our method to a time-course transcriptome dataset for a cell line receiving a challenge dose of ionizing radiation with and without a prior priming dose. Our analyses indicate that (i) the priming dose increases the diversity of the computed templates, thus increasing the network complexity; (ii) as a result of the priming dose, there are a number of unique templates with delayed and oscillatory profiles; and (iii) radiation-induced stress responses are enriched through pathway and subnetwork studies. 
Biological systems often operate as networks of interacting components that are highly regulated. We provide an analysis of clustered temporal profiles, followed by an interpretation of the causal networks. In the notation below, i represents the template number, adaptive represents those samples receiving the priming dose prior to the challenge dose, and challenge represents those that only receive the challenge dose. The initial sets of gene expression data for the treatment groups with and without the priming dose are reduced to 682 and 527 genes, respectively, in accordance with the policy outlined in the Method section. These genes have highly variable expression values across different time points. Consensus clustering of filtered transcript data indicates there are 8 clusters that correspond to samples receiving the priming dose versus the 5 clusters for samples that do not receive the priming dose. We used Pathway Studio to analyze the computed clusters through pathway and subnetwork enrichment analysis for identical and differential responses. A comparison of the 2 hour time point shows the enrichment of CD43 (for regulating immune function) and TP53 pathways in the adaptive and challenged response treatment groups, respectively. These analyses suggest that every template in the challenge group has a profile similar to those in the treatment group with the priming dose, and in one case, pathway enrichment has been limited only to the immune function activation. Several templates show similar profiles between the two groups, including a delayed response at the 4 hour time point as well as an oscillatory profile, while others are dissimilar. In the inferred causal networks, the final active templates are largely enriched by the identical temporal signatures. From the perspective of a strict gene expression, the fold changes are generally low and appear to be stochastic as a result of ionizing radiation. 
This observation is consistent with previous literature. The method has been validated on synthetic data and then applied to transcriptome data that has been collected from a cell strain, which was exposed to 2 Gy ionizing radiation with and without the priming dose of 10 cGy applied 4 hours prior to the higher dose of radiation. Bioinformatics analyses revealed that the computed templates without the priming dose are a subset of those that received the priming dose. Furthermore, the adaptive response group included templates with delayed activations and oscillatory behavior. It is clear that the priming dose has induced a significant amount of diversity in how the networks are modulated. In both treatment groups, the initial active templates of the causal networks are highly enriched by the down-regulation of the cell cycle machinery. However, in the case of the adaptive causal network, the network is also modulated by the up-regulation of the inflammatory processes. On the other hand, with the exception of EGR-1, the network is poorly enriched at the late stages for the treatment group that does not receive a priming dose. It has been suggested that both EGR-1 and p53 are essential for mediating radiation-induced apoptosis. Another way to examine experimental data is through enrichment of cellular processes. Initially, both treatment groups are enriched by DNA double strand repair, apoptosis, and cell cycle processes. However, the group receiving the priming dose is also enriched with single strand base excision and mismatch DNA repair. Within the group receiving the priming dose, these processes are modulated with chromatin remodeling. Transcripts whose maximum fold change is less than 0.5 are eliminated; the net result is a significant reduction in the number of candidate transcripts, with those having similar temporal profiles being grouped together. 
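The two preprocessing steps, variation filtering and consensus clustering, can be sketched as follows. This is a simplified illustration under our own naming: the 0.5 cutoff and the 1000-run, 0.8-rate subsampling come from the text, while the k-means step is abstracted into a pluggable cluster_fn (here a toy rising-vs-falling labeler):

```python
import random

def filter_low_variation(expr, min_change=0.5):
    """Step 1: drop transcripts whose maximum change across time points
    is below min_change, keeping only highly variable candidates."""
    return {g: p for g, p in expr.items() if max(p) - min(p) >= min_change}

def consensus_matrix(profiles, cluster_fn, runs=1000, rate=0.8, seed=0):
    """Step 2: repeatedly cluster a random `rate` fraction of the profiles
    and record how often each pair lands in the same cluster, normalised
    by how often the pair was sampled together."""
    rng = random.Random(seed)
    n = len(profiles)
    together = [[0] * n for _ in range(n)]
    sampled = [[0] * n for _ in range(n)]
    for _ in range(runs):
        idx = rng.sample(range(n), int(rate * n))
        labels = cluster_fn([profiles[i] for i in idx])
        for a, i in enumerate(idx):
            for b, j in enumerate(idx):
                sampled[i][j] += 1
                together[i][j] += labels[a] == labels[b]
    return [[together[i][j] / sampled[i][j] if sampled[i][j] else 0.0
             for j in range(n)] for i in range(n)]

# Toy stand-in for the k-means step: label profiles as rising or falling.
rising_or_falling = lambda ps: [int(p[-1] >= p[0]) for p in ps]

expr = {"g1": [0, 1, 2], "g2": [0, 2, 4], "g3": [4, 2, 0], "g4": [0.1, 0.2, 0.15]}
kept = filter_low_variation(expr)          # g4 dropped (change 0.1 < 0.5)
cm = consensus_matrix(list(kept.values()), rising_or_falling, runs=200)
print(sorted(kept))        # ['g1', 'g2', 'g3']
print(cm[0][1], cm[0][2])  # 1.0 0.0
```

A block-diagonal consensus matrix, with pairwise values near 1 inside blocks and near 0 between them, is the stability signal used to pick the number of clusters.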
The basic assumption is that co-regulated transcripts have a similar biological basis; grouping them is thus a step towards significant dimensionality reduction through clustering and categorization. Currently, there is an abundance of literature available on the clustering of time-varying expression data, including methods based on predefined templates and on k-means; consensus clustering improves robustness and facilitates visualizing of computed clusters for quality control. In our implementation, the clustering algorithm is based on k-means, where the distance measure is one minus the sample correlation. The clustering procedure is repeated for 1000 runs, and each run is performed on randomly sampled genes with a sampling rate of 0.8. The optimal number of clusters is determined by examining the clustering stability and similarity matrix. The first step of our protocol is to eliminate transcripts with little variation, i.e., those whose maximum fold change falls below the cutoff. Suppose we have clustered genes into K groups, and let T be the number of time frames. By concatenating the representative pattern of the K groups, we can obtain a K-by-T data matrix. We wish to infer causal relationships among the K groups, i.e., whether the expression level of group-i at time t will have an impact on the expression level of group-j. More systematically, we use the matrix equation V(:, t+1) = A_t V(:, t) to encode the causal relationships, where V(:, t) denotes the t-th column of V and a non-zero entry A_t(i, j) indicates that the expression level of group-j at time t affects the expression level of group-i at time t+1; a positive entry corresponds to a positive influence. However, in practice, we are confronted with the big challenge of processing limited data while estimating the large parameter space of the transition matrices. To alleviate the lack of samples in the inference problem, we propose the following extensions: (i) A temporal regression framework for the time course data: instead of using only one column, a window of columns around the current centering frame t is used in the regression. (ii) The coefficient matrices should vary smoothly with time: if t1 and t2 are in close proximity to each other (according to a predefined range), then the corresponding coefficient matrices should be close to each other; otherwise, they are allowed to be far apart. 
In practice, one can have either a hard indicator or a soft weighting function to encode this temporal coherence. (iii) The coefficient matrices are sparse: we note that each transition matrix is expected to contain only a few non-zero entries. By combining the three terms, we have the optimization problem (4), which regresses the data against the representative pattern of the i-th group of genes, as discussed. However, in some cases, genes in the same cluster still have a certain level of variation, and using their average pattern for regression might lead to loss of information. To solve this problem and be able to fully utilize available patterns, we randomly sample genes from each group as the representative, obtaining N data matrices; each data matrix will lead to one objective term, as specified in (4). We will then sum up the objective terms associated with all the data matrices as the ultimate objective function. We can sample as many times as needed, i.e., injecting more constraints to the optimization problem, using certain heuristics for sampling. For example, given K groups and a number of candidate genes per k-th group, the total number of different data matrices can be very large. As we previously indicated, a critical issue is the low sample size given the high dimensionality of the parameter space. Thus, to improve robustness and stability of the solution, we adopt the policy of using individual transcripts as replicates, as they have similar signatures within a clustered group; in practice, one can compute a representative expression profile for each group. There are two ways to obtain the solution: (i) concatenate the columns of the T-1 transition matrices to form a single vector of unknowns; when K is very large, we apply the Nystrom low-rank approximation by sampling only a subset of the rows/columns of the Hessian matrix, which allows us to maintain enough energy in the eigen-spectrum of the reconstructed Hessian. The second approach is more memory efficient, but may require many cycling updates to converge. The evaluation of regularization parameters is based on the BIC of the candidate models. 
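The three terms combined in (4) can be written out explicitly. Below is a hypothetical sketch with our own names, where V is a list of T state vectors (one per time point) and A a list of T-1 K-by-K transition matrices:

```python
def objective(V, A, lam_sparse, lam_smooth):
    """Squared error of the regression v_{t+1} ~ A_t v_t, plus an L1
    penalty enforcing sparsity of each A_t and a quadratic penalty tying
    consecutive coefficient matrices together (temporal coherence)."""
    K = len(V[0])
    fit = sum((V[t + 1][i] - sum(A[t][i][j] * V[t][j] for j in range(K))) ** 2
              for t in range(len(A)) for i in range(K))
    sparsity = sum(abs(x) for At in A for row in At for x in row)
    smooth = sum((A[t + 1][i][j] - A[t][i][j]) ** 2
                 for t in range(len(A) - 1) for i in range(K) for j in range(K))
    return fit + lam_sparse * sparsity + lam_smooth * smooth

V = [[1, 0], [0, 1], [1, 1]]                 # K = 2 groups, T = 3 time points
A = [[[0, 1], [1, 0]], [[1, 0], [1, 1]]]     # two transition matrices
print(objective(V, A, lam_sparse=0.1, lam_smooth=0.5))  # 3.0
```

Minimizing this objective over A (rather than just evaluating it, as here) is the inference step; candidate values of the two regularization weights would then be compared via BIC, as described above.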
The transition matrices, t\u200a=\u200a1,2,..,T, is designed as sparse k-by-k matrices with roughly 10% non-zero entries in the range of . Furthermore, adjacent matrices are designed to be similar with random perturbation of 10 entries in t\u200a=\u200a1,2,..,T. This is the same policy, used in We validated the network inference against a set of synthetic data. In the example, shown in t is the log-likelihood of the model at time t; (ii) With respect to network recovery, Ahmed and Xing Figure S1Visualization of the consensus matrix of N\u200a=\u200a2,3,\u2026,9 clusters for the adaptive dose.(TIF)Click here for additional data file.Figure S2Consensus CDF for the consensus matrix of N\u200a=\u200a2,3,\u2026,9 clusters for the adaptive dose as shown in (TIF)Click here for additional data file.Figure S3Visualization of the consensus matrix of N\u200a=\u200a2,3,\u2026,9 clusters for the adaptive dose.(TIF)Click here for additional data file.Figure S4Consensus CDF for the consensus matrix of N\u200a=\u200a2,3,\u2026,9 clusters for the challenge dose as shown in (TIF)Click here for additional data file.Figure S5NUSE plot of the microarray data.(TIF)Click here for additional data file.Figure S6An example of validation data in the top row with positive (red) and negative (green) values in the transition matrices. The bottom row shows inferred matrices through application of the computational method.(TIF)Click here for additional data file.Text S1Quality control for microarray data.(DOCX)Click here for additional data file.Text S2Selection of the number of clusters from consensus clustering.(DOCX)Click here for additional data file."}
+{"text": "The qualitative description of the corresponding processes is therefore important for a better understanding of essential biological mechanisms. However, wet lab experiments targeted at the discovery of the regulatory interplay between transcription factors and binding sites are expensive. We propose a new, purely computational method for finding putative associations between transcription factors and motifs. This method is based on a linear model that combines sequence information with expression data. We present various methods for model parameter estimation and show, via experiments on simulated data, that these methods are reliable. Finally, we examine the performance of this model on biological data and conclude that it can indeed be used to discover meaningful associations. The developed software is available as a web tool and Scilab source code. Regulation of gene expression is one of the most important areas of contemporary biological research. Of all the known mechanisms behind gene regulation, perhaps the most important one is the regulation of transcription by transcription factors. Transcription factors (TFs) are proteins which bind to certain short sequences (motifs) in the regulatory regions of genes. This can induce or suppress the transcription of these genes into mRNA and thus affect their expression as proteins. The binding motifs for many transcription factors are not yet known and are difficult to establish by direct in vivo or in vitro experiments. Therefore, discovery of regulatory relations between the transcription factors and the genes that they regulate forms a major challenge. This work concerns the in silico discovery of putative associations between transcription factors and motifs from microarray gene expression and DNA sequence data. 
Due to the overwhelming availability of this kind of data, as well as the computational simplicity of the proposed approach, our methodology can be used as a cheap and easy way to generate hypotheses concerning the networks of transcriptional regulatory control. Our experiments confirm that the generated hypotheses are biologically and statistically meaningful. In this work, we present a novel computational method for this purpose. The idea to combine data about gene expression and promoter sequences for studying transcriptional regulation is not new. The main assumption behind all such methods is the premise that co-expression implies co-regulation, i.e., genes with similar gene expression profiles must be controlled by the same regulatory mechanisms. Another compelling alternative is to avoid the clustering step and reconstruct gene regulation networks by modeling expression values directly. The two major approaches here are probabilistic graphical models and predictive models. Methods of the first kind typically discretize the data to reduce the effect of noise and then find a graphical model (mainly a Bayesian network) that provides the most coherent explanation for the data. Methods of the second kind use supervised machine learning techniques to infer a predictive model for gene expression values. The G\u200a=\u200aMAT model presented in this work falls into the category of predictive models, taking its inspiration from GeneClass. The coefficients of the model can be estimated using a variety of approaches known from classical statistics, such as least squares or regularized least squares regression. Being a simple linear model, the method is statistically more reliable than the more complex tree-based models of GeneClass and BDTree. Additionally, it does not require data discretization and can be implemented with better efficiency. This makes G\u200a=\u200aMAT a somewhat better alternative to the former approaches. 
We also provide implementations of our methods in SciLab and as a Python web application (see the supplementary website) for others to test and use. Although an exact definition of a gene can be argued about, here by genes we refer to the protein-coding regions of the DNA. More precisely, we divide genes in two non-overlapping classes: transcription factors (TFs) and target genes. The class of transcription factors consists of all genes that correspond to actual or putative transcription factors. The class of target genes (in the following referred to simply as genes) consists of all the remaining genes. The abundances of the two classes are collected into a gene expression matrix and a TF expression matrix. The simplest way to quantify abundance of TFs and target genes is through mRNA expression levels. These levels can be measured using a variety of microarray-based experimental techniques. Each experiment measures the expression levels of thousands, if not all, of the genes in the cell simultaneously. Typically, a single study is comprised of several microarray experiments that are collected into a single dataset. As a second data source, we consider motif presence in promoter regions, summarised in a motif matrix. A motif is a generalized representation of a binding site: a short region on the DNA, characterised by its sequence. Commonly, motifs are represented as fixed strings, strings with mismatches, position weight matrices or hidden Markov models. Ideally, regulation would be modeled using the protein expression levels of TFs, rather than their mRNA expression. Indeed, the TF proteins are involved in DNA binding and influence the target gene mRNA expression. However, current technology does not provide cheap methods for measuring expression levels of binding factors directly. Instead, we assume the microarray-measured mRNA expression levels to be a reasonable approximation for TF protein abundance. 
The assumption sweeps under the carpet the issues of translation regulation, splicing, post-translational modifications, as well as the inertia of the whole process. Nonetheless, this assumption is rather common and often implicit in other similar methods. Although the amount of data is sufficient for statistical analysis, there are also some inherent limitations. First, our model actually quantifies the effect of transcription factors on gene expression; therefore, ideally, we would like the TF matrix to contain protein-level measurements. Finally, it is worth mentioning that although public repositories of microarray data contain hundreds of normalized data sets, each data set having a hundred or so microarray experiments concerning a single study, the different datasets cannot be combined easily. The differences in microarray protocols, cell cultures and laboratory conditions used in different studies make it difficult, if not impossible, to unify different datasets reliably. In this section, we present and justify a new type of linear model for characterising gene expression. Our model is based on three simplifying assumptions about the transcriptional regulation process. Firstly, we assume that gene expression is controlled only by transcription factors; in particular, the target gene expression values in each experiment are determined by the TF expression values in the same experiment. Secondly, we assume that transcription factors perform their functions by binding to certain motifs on the promoters of the target genes and that the effect of each transcription factor is proportional to the number of matches of its bound motifs; therefore, there must exist a single function (a growth curve model) mapping TF expression to regulatory effect. Thirdly, we assume that we can approximate the actual prediction function linearly. Now we can easily recast the equation (3) into the more compact matrix form G\u200a=\u200aMAT. It is important to understand that the G\u200a=\u200aMAT model is only a crude approximation of the true biological processes taking place within the cell and in practice, all the three assumptions can be violated. 
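The compact matrix form can be sketched numerically. In this illustrative snippet, the symbol roles (G: genes x experiments, M: motif match counts, genes x motifs, A: unknown TF-motif coefficients, motifs x TFs, T: TF expression, TFs x experiments) follow from the model's name; the dimensions and noise level are arbitrary assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_motifs, n_tfs, n_exp = 50, 8, 6, 20

M = rng.integers(0, 3, size=(n_genes, n_motifs)).astype(float)  # motif match counts per promoter
A = rng.normal(size=(n_motifs, n_tfs))                          # TF-motif association coefficients
T = rng.normal(size=(n_tfs, n_exp))                             # TF mRNA expression per experiment
G = M @ A @ T + 0.1 * rng.normal(size=(n_genes, n_exp))         # observed gene expression + noise
```

Estimation then amounts to recovering A from the observed G, M and T.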
For instance, the gene expression is not entirely controlled by transcription factors. In reality, various other factors also influence transcriptional regulation. Neither is the effect of TFs on transcription instantaneous. Nevertheless, as long as the primary effect of TFs is significantly stronger than the other influences, we can neglect them. In particular, in the following sections, we show both theoretically and experimentally that if the unknown regulatory influence is additive and independent from the effect of TFs, then the model coefficients can still be estimated reliably. Secondly, note that an identical motif combination in promoters does not always guarantee identical expression. Processes like DNA methylation and protein phosphorylation can interfere with binding; also, the strength and location of the binding site might be of importance. Nevertheless, according to our current knowledge the second assumption is still a rather viable approximation. The third assumption of linearity is the most questionable. We can regard the linearisation (3) as a result of the first-order Taylor approximation of the predictor function. Some of these secondary effects can be corrected by adding new terms into the G\u200a=\u200aMAT model. For instance, if a certain chemical compound is known to have significant impact on gene transcription, we can add its expression level to the G\u200a=\u200aMAT model as a predictor. Similarly, if a certain pair of TFs is known to act synergistically, we can explicitly incorporate in the model the product of their expression values. Finally, if the expression data is a time series, we can introduce a time lag in the model by adding delayed signals as rows of the TF expression matrix. Next, we present a number of methods for parameter estimation for the G\u200a=\u200aMAT model. 
Our main emphasis is on the reliable detection of nonzero model coefficients; details are given in the supplementary text. The most natural way of approaching the estimation problem is to search for a parameter matrix A that gives the least squares fit, i.e., minimises the Frobenius norm of the residual G \u2212 MAT; if several such matrices exist, we choose the minimum-norm fit: a solution with the smallest norm. The following two theorems describe the general solution to the problem (5) and provide sufficient and necessary conditions when the solution is unique. Theorem 1. All solutions to the problem (5) can be computed as A = M^+ G T^+ + (I \u2212 M^+ M) Z1 + Z2 (I \u2212 T T^+), where ^+ denotes the Moore-Penrose pseudoinverse of a matrix, I denotes a properly-sized identity matrix and Z1 and Z2 are any two properly-sized matrices. The minimum norm solution to the problem (5) can be computed as A = M^+ G T^+. Theorem 2. The problem (5) has a unique solution if and only if the columns of M and the rows of T are linearly independent. The corresponding solution can be computed as A = (M^T M)^{-1} M^T G T^T (T T^T)^{-1}, where ^T denotes matrix transposition. The solution to the least squares regression problem can be computed with reasonable efficiency; namely, the time complexity of the computation depends linearly on the number of genes. Often, one can improve the stability of estimates by proper preprocessing of the data. The same is true for the G\u200a=\u200aMAT model: informally, row- and column-wise centering of the matrices leads to the centered least squares method. Least squares estimates are reliable only if the number of data points is much larger than the number of parameters. In many cases, the expression data we have does not satisfy this premise and we have to use regularization, controlled by a regularization parameter, to stabilize the estimates. 
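The pseudoinverse-based estimates described above are straightforward to compute with NumPy. This is a minimal sketch under assumed dimensions; in the noise-free, full-rank case the minimum-norm fit recovers the true coefficient matrix exactly, which the snippet illustrates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_motifs, n_tfs, n_exp = 40, 5, 4, 30

M = rng.normal(size=(n_genes, n_motifs))   # columns linearly independent with probability 1
A = rng.normal(size=(n_motifs, n_tfs))     # "true" coefficients to recover
T = rng.normal(size=(n_tfs, n_exp))        # rows linearly independent with probability 1
G = M @ A @ T                              # noise-free observations for illustration

# Minimum-norm least squares fit: A_min = M^+ G T^+
A_min = np.linalg.pinv(M) @ G @ np.linalg.pinv(T)
```

When the columns of M and the rows of T are linearly independent, this is also the unique least squares solution.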
The idea of regularization is to enforce the solution with the smallest possible parameter values by penalizing the Frobenius norm of the parameter matrix; this approach is known as ridge regression and is controlled by regularization parameters. We define centered ridge regression as ridge regression applied to the properly centered matrices. Unfortunately, the closed analytical solution for the problem (10) most probably cannot be expressed in terms of elementary algebraic operations on matrices. Another common method of regularization is to penalize the (entry-wise) L1 norm of the parameter matrix, which leads to sparse regression. The corresponding estimate can be computed either with an iterative thresholding technique, parameterised by a step size, or with the Least Angle Regression (LARS) algorithm (GMAT-LARS). The G\u200a=\u200aMAT model can also be interpreted as a generative probabilistic model, where the measurements of all TFs in a given experiment are random variables. It is then possible to establish a connection between variable covariances and the unknown parameters. Theorem 3. Assume that the random variables satisfy the condition (16). If the variables are not constant and are pairwise independent from the other random variables, then the model coefficients can be expressed through their covariances. Note that the pairwise independence assumption is rather mild and is likely to be satisfied in many data sets. Hence, we can estimate the coefficients from sample covariances. The computation of a single coefficient with this method requires a covariance computation involving the whole expression matrix. For all methods described above, we must separately decide which inferred coefficients are significantly different from zero. To this end, we compute a z-score for each coefficient. In practice, we obtain the z-score estimate by shuffling the values of the data. To demonstrate and assess the applicability of the model to biological data we first of all applied it to a dataset of yeast (Saccharomyces cerevisiae) microarray measurements at different phases of the cell cycle. 
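The shuffling-based z-score can be sketched as a permutation test. This is an assumption-laden illustration, not the paper's exact procedure: here the null distribution is obtained by permuting each TF's expression values across experiments, and `gmat_fit` stands in for any of the estimators above (the function names are ours):

```python
import numpy as np

def gmat_fit(G, M, T):
    """Minimum-norm least squares estimate of the coefficient matrix."""
    return np.linalg.pinv(M) @ G @ np.linalg.pinv(T)

def zscores(G, M, T, n_perm=200, seed=0):
    """z-score of each coefficient against a permutation null distribution."""
    rng = np.random.default_rng(seed)
    a = gmat_fit(G, M, T)
    null = np.empty((n_perm,) + a.shape)
    for i in range(n_perm):
        # shuffle each TF's expression values independently across experiments
        null[i] = gmat_fit(G, M, rng.permuted(T, axis=1))
    return (a - null.mean(axis=0)) / (null.std(axis=0) + 1e-12)

# Toy data with a single strong TF-motif association at position (0, 0)
rng = np.random.default_rng(2)
M = rng.normal(size=(30, 4))
T = rng.normal(size=(3, 25))
A_true = np.zeros((4, 3)); A_true[0, 0] = 5.0
G = M @ A_true @ T
z = zscores(G, M, T, n_perm=100)
```

Coefficients whose |z| exceeds a chosen cutoff would then be reported as significant.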
We combined this data with the Transfac motif matches in the 800bp upstream genomic sequences obtained from the SGD website to get the motif matrix. Another strong indication in favor of the biological meaningfulness of the results was provided by a split-set experiment. If a method were overly sensitive to noise, its output would vary abruptly over different datasets even if all of them captured the same biological processes. Such behaviour would significantly reduce the practical applicability of any method. To detect such instability, we divided the Spellman dataset experiment-wise into two non-intersecting parts of 40 and 37 experiments and used our methods to find and rank TF-motif pairs for both data sets. Depending on the chosen inference parameters, the overlap between the top tens of these lists was from 3 to 4 elements \u2013 a result which is significantly better than random. We also generated artificial data, trying to keep the statistical characteristics of the generated data as close as possible to the Spellman dataset, and attempted to estimate the coefficient matrix from it. Introducing latent motifs allows to theoretically \u201cfix\u201d the predictive performance, leaving the model parameters and their interpretation intact. Indeed, we already noted the fact that 38 motifs are not enough to linearly explain the variance of 5766 genes. Experiments on artificial data allowed us to compare the parameter estimation performance of the different methods. We generated reasonably noisy datasets, estimated the parameters using different methods, ordered the model coefficients according to their estimated values and assessed the ROC AUC score of such ordering; the resulting scores are reported. As explained and illustrated in the previous sections, the G\u200a=\u200aMAT analysis can be used to discover putative associations between motifs and transcription factors. However, this is not the only task that can be addressed using the G\u200a=\u200aMAT model. 
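The ROC AUC of a coefficient ordering can be computed without any plotting, via the rank-sum (Mann-Whitney) identity. A minimal sketch, assuming coefficients are ranked by absolute value against a known true nonzero pattern (as in the artificial-data experiments); ties are ignored, which is fine for continuous scores:

```python
import numpy as np

def roc_auc(scores, truth):
    """AUC of ranking coefficients by |score| against the true nonzero pattern."""
    scores = np.abs(np.ravel(scores))
    truth = np.ravel(truth).astype(bool)
    order = np.argsort(scores)                       # ascending by |score|
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)     # rank 1 = smallest score
    n_pos, n_neg = truth.sum(), (~truth).sum()
    # Mann-Whitney U statistic normalised to [0, 1]
    return (ranks[truth].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 1.0 means every true coefficient outranks every spurious one; 0.5 is chance level.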
In this section, we present a number of examples demonstrating various other applications of the G\u200a=\u200aMAT analysis in practical settings. The detailed results of all the experiments are available via the supplementary web tool. The most obvious application for the G\u200a=\u200aMAT model is the discovery of putative TF-motif associations from gene expression and motif presence data. An example of such analysis has already been presented in section \u201cModel Performance\u201d. However, quite often the discovered associations are rather indirect and require extensive biological knowledge to be verified. The results are easier to interpret if we consider the top-scoring TFs and the top-scoring motifs as two separate lists. These lists contain TFs and motifs that are specific to the processes measured in the microarray data. Such an approach was taken in the work of Middendorf et al., whose GeneClass algorithm learns an alternating decision tree. The GeneClass algorithm is reported to predict expression values quite well, but its main use is the ranking of the most influential TF-motif pairs. In their paper, the authors apply this algorithm to a yeast stress response dataset. They observe that the TFs and the motifs in the top-scoring pairs are indeed known to be related to stress response. We applied the G\u200a=\u200aMAT model on the same dataset and observed similar results. Unfortunately, it was not possible to obtain exactly the same data as the one that was used in the GeneClass experiments due to a minor, but unrecoverable error in the supplementary materials of the GeneClass paper. However, following the instructions provided in the paper, we reconstructed a similar dataset consisting of microarray data. We applied the G\u200a=\u200aMAT model on the dataset and examined the top-scoring coefficients of the model. In general, the exact ranking of the coefficients varied depending on the chosen G\u200a=\u200aMAT estimation method and its parameters. 
Nonetheless, a certain small set of TFs and motifs consistently occupied the top-scoring positions. This is rather similar to the situation in the GeneClass paper, where the exact ranking varied depending on the scoring algorithm, yet several TFs were consistently present in the top. USV1 coincides with the top-scoring regulator obtained by GeneClass. The remaining regulators differ from those reported by GeneClass, yet we believe our list to make no less sense. Indeed, the discovered TFs and motifs are known to be involved in processes related to stress response. The RSF2 gene is known to be involved in glycerol-based growth and respiration. The SHP1 gene has been predicted to have a role in stress response. The MSN1 gene is known to be involved in hyperosmotic stress. The MIG1 regulator is known to repress the transcription of genes that are responsible for sugar utilization. The ABF1 gene encodes a multifunctional regulator thought to be particularly involved in different chromatin-related events. The HSP12 heat shock gene also appears among the top-scoring results. The gene HSF1 occupies several top-scoring positions in the G\u200a=\u200aMAT correlation-based results. As several of the microarray experiments were measuring the response of yeast to heat shock, this result makes sense. Other G\u200a=\u200aMAT estimates produced different, but still meaningful results, for instance, the heat shock factor motif. So far, we used a rather small set of well-known motifs and aimed at identifying the most influential out of these. Alternatively, we can use a large set of motifs encompassing all the substrings of a given length. Finding the most influential out of that set is equivalent to identifying biologically meaningful sequences in DNA \u2013 a task known as motif discovery. A good overview of motif discovery methods and applications is provided in the literature. An approach similar to the G\u200a=\u200aMAT model has already been used for motif discovery in the work of Bussemaker et al. 
We considered all possible 7-mers of letters {A,T,C,G} and matched them on the promoters (800bp upstream sequences) of the yeast genes. The motif corresponding to the largest coefficient of the least squares estimate was AAATCTT. This does not differ much from the two top-scoring results of the REDUCE algorithm: AAAATTT and AAATTTT. Also interesting was the top-scoring motif of the G\u200a=\u200aMAT correlation-based estimate, CGATGAG. This motif is the fourth highest on the REDUCE result list. Notably, both motifs have also been discovered from the same data by various other studies. Automated assignment of relevant Gene Ontology (GO) annotations to genes is an important problem and a popular research direction in contemporary computational biology. In all our previous experiments, the values of model parameters could be interpreted as the strength of a TF-motif association; in this application, the interpretation changes: a high coefficient now associates a TF with a GO term. We used the Spellman dataset, described in section \u201cPerformance on the Spellman Dataset\u201d, for this experiment. The KAR4 gene is associated to the terms \u201cconjugation with cellular fusion\u201d and \u201cmating projection tip\u201d. Both terms are related to the mating process, and the KAR4 gene is actually known to be involved in this process. In fact, its current true annotation is \u201ckaryogamy during conjugation with cellular fusion\u201d. This result was obtained with ridge regression. Also, note that we can regard the obtained result as two separate lists, as we did in section \u201cDiscovering Process-specific TFs and Motifs\u201d. In this case, the list of top-scoring GO terms represents the important processes that were measured in the expression data. Efficient computational analysis of microarray data as well as the discovery of putative associations between transcription factors and DNA binding sites are issues of prominent importance in bioinformatics. We proposed a statistical model to address these problems. 
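Building the exhaustive 7-mer motif matrix amounts to counting every overlapping window in each promoter. A minimal, dependency-free sketch (the function name and the toy sequences are ours; real input would be the 800bp upstream sequences):

```python
from itertools import product

def kmer_matrix(promoters, k=7):
    """Count occurrences of every k-mer over {A,C,G,T} in each promoter sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]   # 4^k columns
    index = {m: j for j, m in enumerate(kmers)}
    counts = [[0] * len(kmers) for _ in promoters]
    for i, seq in enumerate(promoters):
        for s in range(len(seq) - k + 1):
            j = index.get(seq[s:s + k])
            if j is not None:          # skip windows containing N or other symbols
                counts[i][j] += 1
    return counts, kmers
```

With k\u200a=\u200a7 this gives 16384 candidate motifs; the resulting count matrix plays the role of the motif matrix in the model, and the largest fitted coefficients point at putative motifs.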
Our method can detect potential DNA-binding candidates together with the binding sites that might participate in the regulatory processes.In particular, we studied the applicability of the model to biological data. Experiments on both real and artificial data demonstrated that our model is not predictive, but purely descriptive. That is, the prediction error of the model is very large, but the estimated parameters are still reliable and biologically meaningful. For instance, we have shown that associations discovered using our model from the well-known Spellman microarray dataset correspond to known indirect relations between transcription factors and motifs. Additionally, we illustrated how the G\u200a=\u200aMAT model can be applied in several other contexts besides the discovery of TF-motif associations. We demonstrated how the G\u200a=\u200aMAT model can be applied for the discovery of process-specific TFs and motifs, for motif discovery and for GO annotation.Text S1Supplementary detailed mathematical development and analysis of the method.(0.36 MB PDF)"}
+{"text": "Housekeeping genes (HKGs) generally have fundamental functions in basic biochemical processes in organisms, and usually have relatively steady expression levels across various tissues. They play an important role in the normalization of microarray technology. Using Fourier analysis we transformed gene expression time-series from a Hela cell cycle gene expression dataset into Fourier spectra, and designed an effective computational method for discriminating between HKGs and non-HKGs using the support vector machine (SVM) supervised learning algorithm which can extract significant features of the spectra, providing a basis for identifying specific gene expression patterns. Using our method we identified 510 human HKGs, and then validated them by comparison with two independent sets of tissue expression profiles. Results showed that our predicted HKG set is more reliable than three previously identified sets of HKGs. A housekeeping gene (HKG) is typically a constitutive gene which is required for the maintenance of basic cellular functions, and generally has a steady expression level across various tissues through all phases of cell development irrespective of environmental conditions. This makes HKGs excellent controls for the normalization of Gene Chip technology, and allows the sample quality and consistency of sample quantity on chips to be assessed Eisenberg et al. However, even if there was strong agreement on these defining features of HKGs, these characteristics by nature are not powerful or sufficient enough to decisively discriminate between HKG and non-HKG genes. Thus, at present there is no effectual algorithm for reliably predicting HKGs.Existence of natural bio-rhythms implies that HKGs, which are constitutively expressed in all cell types and phases, may have certain expression frequency patterns. These spectral features can be extracted using harmonic analysis of gene expression time series and used for predicting HKGs. 
Here, in order to develop a method for discriminating HKGs on the basis of expression features, we introduced the discrete Fourier transform of finite-length time series. Fourier analysis requires data with a long series length and high sampling density. Unfortunately, this requirement is much too rigorous for most standard biochemical experiments. In addition, the length of a time series is not easily extended; for example, cells synchronized by serum starvation gradually lose their phase coincidence after several cycles of cell division, thus causing the Gauss distribution to broaden. If cells continue to divide in an unsynchronized manner, cell cycle phases will totally vanish and information from an extended time series will be meaningless. To satisfy these requirements, we selected a set of human Hela cell gene expression time-series (http://genome-www.stanford.edu/Human-CellCycle/HeLa/), each with 47 sampling points which were spaced 1 hour apart, covering three cell cycles. It is almost inevitable that there will be some missing data points in a gene expression time series. Here, we eliminated series which had successive missing points or three or more separated missing points, since non-uniform sampling is problematic in Fourier analysis. Series that had one or two separated missing points were interpolated with piecewise cubic Hermite interpolation, a relatively conservative algorithm which does not overshoot and introduces less oscillation. Generally speaking, these time series were not stationary, i.e. their mean values varied with time. In order to uncover the periodical components of the data by Fourier analysis, we eliminated trends and seasonal components using the least squares method with five variation bases, transforming the time series into at least a first order stationary series. 
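The detrending step can be sketched as an ordinary least squares fit against a small set of basis functions. The paper names five variation bases including a logarithm term but their exact forms are not given here, so the particular basis below (constant, t, t^2, t^3, log t) is an assumption for illustration; the residual is the detrended, roughly stationary series:

```python
import numpy as np

t = np.arange(1, 48, dtype=float)                   # 47 hourly samples, t = 1..47
x = 2.0 + 0.05 * t + np.sin(2 * np.pi * t / 16)     # toy series: linear trend + 16 h cycle

# Five illustrative basis functions, including a logarithm term
B = np.column_stack([np.ones_like(t), t, t**2, t**3, np.log(t)])
coef, *_ = np.linalg.lstsq(B, x, rcond=None)        # least squares trend fit
residual = x - B @ coef                              # detrended series for Fourier analysis
```

Because the constant function is in the basis, the residual has (numerically) zero mean, a minimal requirement for first-order stationarity.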
The principle of variation used to fit the series with variation bases was to minimize the grand total square errors. Taking a series with p time points as a vector with p components, we chose five base functions. The logarithm term was derived from the Frobenius method for second order differential equations, which implies that the gene expression time series were continuous and did not contain singularities within the time intervals we concentrated on. Frequency analysis before and after data pre-processing showed the maintenance and enhancement of periodical components in the residual series. Discrete Fourier transform (DFT) was first applied to time series that had been made stationary in order to enhance the gene expression frequency components of the spectrum. As the time series all contain 47 time points, each separated by 1 hour intervals, we obtained 24 terms from the frequency spectra obtained by applying DFT: X_k = sum_{n=0}^{46} x_n exp(-2*pi*i*k*n/47), for k = 0, ..., 23. Libsvm (http://www.csie.ntu.edu.tw/~cjlin/libsvm) was used here to distinguish between the genes, taking the 24 effective frequency components obtained by Fourier transformation as features. The Gaussian radial basis function (RBF) kernel was adopted, with an appropriately chosen penalty parameter. In order to test whether the frequency components of the time series obtained were characteristic features which could be used to distinguish HKGs from non-HKGs, we used a supervised statistical learning method, the Support Vector Machine (SVM). The SVM performed classification by constructing a hyperplane that optimally separates the data into the two categories of HKGs and non-HKGs. 
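The 47-point-to-24-component step follows directly from the real-input DFT: a real series of length N yields floor(N/2)+1 non-redundant frequency bins, and 47//2+1\u200a=\u200a24. A minimal sketch with a toy 16-hour periodic signal (the signal itself is an assumption for illustration):

```python
import numpy as np

t = np.arange(47, dtype=float)               # 47 hourly samples
x = np.sin(2 * np.pi * t / 16)               # toy component with a 16 h period

spectrum = np.abs(np.fft.rfft(x))            # 47 real samples -> 24 frequency magnitudes
peak = int(spectrum.argmax())                # bin k counts cycles per series; 47/16 is about 2.9
```

These 24 magnitudes are exactly the per-gene feature vector handed to the SVM.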
The goal of SVM modeling was to find the optimal hyperplane that separates clusters of time series in such a way that cases of the HKG category are on one side of the plane and cases of the non-HKG category are on the other side of the plane. Libsvm was downloaded from its project website. Annotation data were obtained from the Gene Ontology website, and human hg18 conservation data for 28 vertebrate genomes (phastCons28way) were also used. For a brief summary of the entire process, please see part 1 of Text S1. Since HKGs are genes that commonly have stable expression levels at all growth stages in all organisms, there should be conceivable differences in periodic expression features between HKGs and non-HKGs. For this reason we hypothesized that frequency spectrum features could be used to discriminate between HKGs and non-HKGs. Here, we used Whitfield et al.'s Hela cell dataset which contains the time expression series of 41508 probes. Spectral analysis was performed with the Discrete Fourier Transform (DFT), and periodical features were identified and extracted from the frequency statistics obtained using the SVM (see the Methods section). We performed 4096 (2^12) rounds of classification. The proportion of probes that had high counts in the set of putative HKGs that overlapped with one or two of the published HKG sets was much greater than the proportion of possible non-HKGs, once again showing the validity of frequency features. 299 genes from the 805 putative HKG genes were selected as HKGs in this way, using 3328 counts as the minimum cut-off point for selection. 53 genes from the non-HKG set were also selected, since each of them was counted as an HKG more than 4085 times. As discussed above, the overlap between the HKG collections published by Warrington et al. and others was lower than anticipated. Our prediction results were evaluated against two sets of tissue expression profiles. The median CVs of the two tissue expression profiles are shown in the figures. We performed a gene ontology analysis to classify the predicted HKGs on the basis of their function. 
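The Gaussian RBF kernel that underlies this SVM is simple to compute on the 24-dimensional spectra. A minimal NumPy sketch (the gamma value and matrix shapes are arbitrary assumptions; in practice libsvm computes this internally):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq = (X**2).sum(axis=1)[:, None] + (Y**2).sum(axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clamp tiny negative fp round-off
```

Each entry measures similarity between two spectra; the SVM's separating hyperplane lives in the feature space this kernel implicitly defines.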
HKGs and non-HKGs differ in several statistical quantities such as CG content and SSR density. However, these features are parameters posteriorly derived from statistical induction, and are therefore not suitable for use in quantitative classification. Such statistical induction is naturally incomplete because sampling processes have unavoidable limitations which tend to result in the choice of different collections of samples being used to address the same problem, and thus in sharply different conclusions. For example, Zhu et al. (2008), and Eisenberg and Levanon have quite different, even opposite, opinions about whether the ESTs of HKGs are compact. Thus, with respect to classification, it is not appropriate to use these statistical quantities as features of high significance and consistency. Classification using our HKG definition and Fourier analysis avoids the use of parameters based on statistical hypotheses. Results from such classifications can be verified by other statistical measures such as differences in tissue expression levels, which are independent of statistical learning and modeling, making the classification more rational. Some research has shown that expression levels of housekeeping genes may vary depending on experimental conditions. Fourier analysis is an approach which takes advantage of pattern recognition to remove noise from microarray data. A requirement of the DFT method used here is that the data from time series should be steady. The Fourier series expansion is a mathematical description of the physical fact that every linear periodic phenomenon can be expressed by a series of simple harmonic modes. The Fourier coefficient is a weighted mean over the whole time domain, i.e. Fourier analysis shows the properties of an entire time series, instead of being restricted to a small segment. 
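The tissue-level validation rests on the coefficient of variation of each gene across tissues; HKGs should have small CVs. A minimal sketch assuming CV\u200a=\u200astandard deviation divided by mean per gene (the paper's exact normalization of the tissue profiles is not specified here):

```python
import numpy as np

def cv(expr):
    """Coefficient of variation per gene across tissues (rows: genes, cols: tissues)."""
    expr = np.asarray(expr, dtype=float)
    return expr.std(axis=1, ddof=1) / expr.mean(axis=1)
```

Comparing the median CV of a predicted HKG set against the genome-wide median then gives a learning-independent sanity check of the classification.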
It can therefore describe the local features of a time series only asymptotically. Several studies have already extracted frequency features from expression time series of cell cycle data using Fourier analysis; the frequency features were further analyzed by functional clustering methods and genes were classified according to different expression patterns across the stages in the cell cycle. Short-time Fourier analysis and wavelet analysis, which consider both time and frequency, can deal with frequencies changing over time, as in the work of Kim et al. We picked two distinct thresholds for the selection of putative HKG and non-HKG sets. We reasoned that genes in the putative HKG set of the three published datasets are more likely to be HKGs, while those in the non-HKG set are less likely to be HKGs, and thus chose a relatively loose threshold (3328 counts) for genes in the putative HKG set. In fact, a stricter threshold would make the CV of the selected set smaller, but more false negatives would result. We set a much stricter threshold for the non-HKG set (4085 counts), since the relative proportion of suspect HKGs was much greater than that of non-HKGs from about 4085 counts. Some genes from the putative HKG set were rejected by our procedure. For example, TUBB3 was annotated as an HKG in the Eisenberg set, but in fact it is a microtubule element expressed exclusively in neurons, commonly used to identify neurons in nervous tissue. The score for TUBB3 with our prediction method was 2287, below the HKG threshold. In the same way, TUBB scored 0 and was also below the HKG threshold. ILF2 encodes a 45 kDa subunit of NFAT (nuclear factor of activated T-cells), a transcription factor required for T-cell expression of the interleukin 2 gene, that is probably only expressed in T-cells and may not be an HKG. CES2 (carboxylesterase 2), expressed in the intestine and liver, is a major intestinal enzyme and functions in intestinal drug clearance. 
It is tissue-specific rather than housekeeping, and was also rejected by our method. On the other hand, in the non-HKG set, ATG9A scored 4093 and was selected as an HKG (cf. Yamada et al.). Here we have proposed an HKG prediction method using spectral analysis of gene expression time-series data. Our method has proved effective and we have predicted 510 HKGs using Hela cell cycle data, including 54 genes not present in previously reported HKG sets. Our predicted HKG set was then validated using two independent tissue expression profiles. This method will be further verified when more time series data providing in-depth coverage of a sufficiently long time period become available. Figure S1: Organization of training and testing sets used by SVM. Details of the supervised statistical learning process; three selected sets are used in learning and testing to determine whether the frequency features can be used to recognize HKGs. (TIF) Figure S2: An overall distribution of CVs. A comparison of the CVs for our predicted HKGs and all the 15,261 genes in the tissue expression profiles that overlapped with the Hela cell gene expression dataset, which suggests that CV is an appropriate parameter for evaluating HKGs. (TIFF) Table S1 (XLS) Text S1 (DOC)"}
+{"text": "DREAM4 Challenge, that our algorithm outperforms non-clustering methods in many cases (7 out of 25) with fewer samples, rarely underperforming (1 out of 25), and often selects a non-clustering model if it better describes the data. Source code (GNU Octave) for BAyesian Clustering Over Networks (BACON) and sample data are available at: http://code.google.com/p/bacon-for-genetic-networks.Inferring gene regulatory networks from expression data is difficult, but it is common and often useful. Most network problems are under-determined\u2013there are more parameters than data points\u2013and therefore data or parameter set reduction is often necessary. Correlation between variables in the model also contributes to confound network coefficient inference. In this paper, we present an algorithm that uses integrated, probabilistic clustering to ease the problems of under-determination and correlated variables within a fully Bayesian framework. Specifically, ours is a dynamic Bayesian network with integrated Gaussian mixture clustering, which we fit using variational Bayesian methods. We show, using public, simulated time-course data sets from the Of particular interest are the results of each of the algorithms when applied to the DREAM4 In Silico Network Challenge data sets, which includes data types such as \u201cknock-out\u201d, \u201cknock-down\u201d, and time-series data among the sub-challenges. See DREAM challenges.Inferring gene regulatory networks from high-throughput gene expression data is a difficult task, in particular because of the high number of genes relative to the number of data points, and also because of the random noise that is present in measurement. Over the last several years, many new methods have been developed to address this problem; a nice review of these can be found in Though Dynamic Bayesian networks (DBNs) are typically some variation of the basic linear modelG1DBN and is available as an R package from CRANVBSSM, as in the review. 
Causal structure identification (CSI) in non-linear dynamical systems (NDSs) avoids the restriction of linearity when determining network structure. The G1DBN and VBSSM algorithms performed well on the DREAM4 data sets, as did the CSI algorithm. Though these results are convincing, there is still room for improvement, and the discussion of optimal methods remains open; in fact, the body of research in the area of gene expression time-series analysis continues to grow quickly. In order to successfully cluster time-series data, we need to utilize the stronger dependencies between data in consecutive time points relative to more distant time points. Quite often, researchers are interested in expression patterns across time; previous work has shown that gene clustering can infer biological meaning, whether co-expression, co-regulation, involvement in particular biological processes, or some other effect. Such information may also be valuable in inferring genetic regulatory networks. The algorithm presented here is a variational Bayesian hybrid of a DBN and a Gaussian mixture clustering algorithm, both of which have been shown to infer meaningful solutions to their respective problems; we call it BAyesian Clustering Over Networks (BACON). BACON is built specifically to simultaneously consider multiple data sets based on the same network, such that for each data set, expression states are inferred independently, but cluster membership and regulatory dynamics are assumed to be constant for all data from the given network, regardless of the particular data set. It achieves this by potentially reducing\u2013in a fully Bayesian manner\u2013the parameter space and helping solve the problem of solution identifiability in under-determined, noisy data models such as are common in gene expression analysis, even in data sets like DREAM4, where no clusters were explicitly included. 
This gives more accurate results than a heuristic combination of interaction rankings based on the various time-series for each of the DREAM4 networks. In this paper, we describe a fully Bayesian model of gene cluster interaction, and we demonstrate that probabilistic gene clustering in conjunction with a dynamic Bayesian network can aid in the inference of gene regulatory networks. We introduce an algorithm called BACON, a variational Bayesian algorithm that combines a Gaussian mixture clustering model with a DBN. However, before we give the specific formulation of our model, it may be helpful first to look at a simple case where integrated clustering can help infer gene regulatory networks, even if no \u201ctrue\u201d clusters are present. Assume, as an illustration, that we have three genes, X, Y, and Z, and that we have time-series expression data for each of them; the observed expression levels follow a simple linear model on the three variables, where all interaction coefficients except those onto Z are negligible. Given these assumptions, the interaction coefficients can be inferred from the data. Under some conditions, such inference works quite well, but if the expression profiles for X and Y are highly correlated (or negatively correlated), then the determinant of the relevant design matrix approaches zero and the coefficient estimates become unstable. Here, we propose that clustering genes and inferring the dynamics of the clusters can help avoid the case in which highly correlated gene profiles inhibit interaction inference. In our example, if genes X and Y have highly correlated expression profiles, then for weak priors the precision estimate is poorly conditioned, whereas the merged cluster profile is not. It may seem, at first, that passing along inferred interaction coefficients to all cluster members would create many false positives. 
However, if clusters include\u2013by definition\u2013highly correlated expression profiles, then if a cluster appears to be a good potential regulator of a gene, all of the cluster's members must also have profiles that generally indicate potential regulation, and in the absence of clustering, it would be difficult to identify the best interaction parameters. This is true whether or not any or all of the concerned genes are actually verifiable regulators, and thus clustering together correlated expression profiles\u2013regardless of the biological meaning of the clustered genes\u2013could improve inference. For instance, in our example, the presence of gene Y (if highly correlated with X) adversely affects the identification of X as a regulator of Z, a problem that can be avoided if X and Y are treated as members of the same cluster. In a data set with hundreds of genes, the chance of having at least one pair of highly correlated expression profiles is rather large. Of course, we must be careful in our construction of clusters and their dynamics, but as we show, Bayesian inference provides the means to select a number of clusters, to assign cluster membership, and to estimate cluster interaction parameters in an optimal way. 
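The correlated-regulator effect described above can be sketched numerically. This is an illustrative toy of ours, not the paper's BACON implementation: we assume a made-up pair of nearly collinear profiles X and Y and show that merging them into a single cluster profile turns an ill-conditioned least-squares problem into a stable one.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20                                  # time points, as in a DREAM4 series
x = rng.normal(size=T)
y = x + 1e-3 * rng.normal(size=T)       # Y is almost perfectly correlated with X
z = 0.5 * x + 0.5 * y + 0.01 * rng.normal(size=T)  # Z regulated by both

# Design matrix with X and Y as separate regulators: nearly singular Gram matrix.
D_sep = np.column_stack([x, y])
cond_sep = np.linalg.cond(D_sep.T @ D_sep)

# Design matrix with X and Y merged into a single cluster-mean profile.
D_clu = ((x + y) / 2).reshape(-1, 1)
cond_clu = np.linalg.cond(D_clu.T @ D_clu)

print(f"condition number, separate regulators: {cond_sep:.2e}")
print(f"condition number, clustered regulator: {cond_clu:.2e}")

# The clustered design is far better conditioned, so the inferred
# cluster->Z coefficient (shared by X and Y) is stable, close to 1.0 here.
beta = np.linalg.lstsq(D_clu, z, rcond=None)[0]
print(f"inferred cluster->Z coefficient: {beta[0]:.3f}")
```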
We describe this below. For multiple time-course data sets from the same gene regulatory network, as we have in the DREAM4 Challenge data sets used in this paper, we infer all of the parameters separately for each of the series, except for the dynamics parameters, which are shared across series. To estimate the parameters of our model, we use a variational Bayesian algorithm analogous to those described in previous work; in short, the algorithm estimates the posterior parameter distribution. For the DREAM4 data, we fit the model using 10 starts with randomized initial parameter values and with a range of cluster numbers less than or equal to the number of genes in the data set; we conclude that both versions of BACON give satisfactory results for these data sets. For each of the five data sets, each corresponding to a single gene regulatory network, we inferred the network using all available time-series (five each) and used the inferred interactions and the known gold standard to calculate the area under the receiver operating characteristic (AUROC) curve and the area under the precision-recall (AUPR) curve. The DREAM4 time-series data are not typical; a single time-series with 20 time points is somewhat uncommon in practice (most experiments have 10 or fewer time points), and five independent time-series for the same gene network would be extremely rare. Thus, we subsequently consider each of the time-series individually, in order to see if an even more under-determined problem (only 20 data points for each of the 10 genes instead of 100) favors the model version with clustering. We evaluated the BACON model both with and without clustering. In most cases, BACON with clustering performed identically to the version without, but in seven cases, the version with clustering gave higher scores for both AUROC and AUPR. In only one case did the without-clustering version outperform the with-clustering version in both AUROC and AUPR. 
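The AUROC/AUPR scoring used to compare inferred interaction rankings against a gold standard can be sketched as follows. These are our own minimal versions, not the authors' evaluation code, and the edge scores and labels are hypothetical.

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def aupr(scores, labels):
    """Area under the precision-recall curve (step-wise summation)."""
    order = np.argsort(-scores)
    labels = labels[order]
    tp = np.cumsum(labels == 1)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / (labels == 1).sum()
    # accumulate precision weighted by each step in recall
    steps = np.diff(np.concatenate([[0.0], recall]))
    return float((precision * steps).sum())

# toy example: 6 candidate edges, the 2 true ones ranked at the top
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.2, 0.05])
labels = np.array([1,   0,   1,   0,   0,   0])
print(auroc(scores, labels), aupr(scores, labels))  # both 1.0: perfect ranking
```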
These tallies are summarized in the Results. In many cases, the with-clustering and without-clustering scores were identical\u2013i.e. 10 clusters was optimal\u2013but in several other cases, fewer clusters gave a higher marginal likelihood score, and the corresponding AUROC and AUPR were indeed better more often than not. Specifically, for 15 of the 25 time-series the marginal likelihood was highest at 10 clusters (effectively no clustering), while for the remaining 10 it favored fewer clusters. Inferring gene regulatory networks from expression data is not usually easy, but it is common and often useful. Because of the under-determined nature of the problem\u2013there are more parameters than data points\u2013some reduction of the parameter set is often necessary in order to reach any meaningful conclusion at all. Sometimes, we can accomplish this through heuristic methods and decisions about which data are more important prior to the main statistical analysis. Other times, this is not desirable. In this paper, we have presented a probabilistic model of time-series gene expression with an integrated, theoretically sound method of parameter space reduction. We have described its implementation and use, including a simple analytically-tractable example in which clustering is advantageous to network inference even if no \u201ctrue\u201d clusters exist, and even if we are not at all concerned with cluster membership. Many of the expectations we had for the Bayesian model turned out to be true. In particular, we expected the model to favor clustering mainly in data sets with few samples; in fact, the model preferred (via the likelihood function) not to cluster when we included all data for each network, but elected to cluster for 10 of the 25 separate time series (20 samples each). Likewise, because of the under-determined nature of network inference, we also expected the clustering model to perform better than a model without clustering when there are fewer samples. 
This also proved true; of the 10 time-series for which the model's marginal likelihood was highest for fewer than 10 clusters, seven were indeed better than without clustering (when comparing both AUROC and AUPR scores), and only one proved worse. We believe that probabilistic clustering could be very useful in gene network inference, though there are disadvantages. For one, the computational time is generally much higher when clustering, because model fits must be done for a range of possible cluster numbers. For the purposes of this paper, in addition to the 10 random starts for the non-clustering model version, we did 10 random starts for each of the cluster quantities we wished to consider. Of course, the algorithm is much faster for smaller cluster numbers, as the size of the parameter of primary interest, the interaction/transition matrix, varies with the square of the number of clusters. It would likely be beneficial, in the case of very large data sets, to use a sequential or iterative search over the number of clusters, rather than the exhaustive search used here, but we leave that for a future publication. In summary, we have shown that there are benefits to be had by clustering genes as part of a network inference algorithm. The potential for significant correlation among genes is high in typical time-series data sets, particularly those with few samples. The algorithm we have presented here, which we call BAyesian Clustering Over Networks (BACON), can help avoid the negative consequences of inter-gene correlation for the purposes of network inference. In our tests, the algorithm outperformed its non-clustering version in 7 out of 25 time-series from the DREAM4 Challenge, underperforming only once, and most often electing to disregard clusters when the data did not support them. Therefore, we feel that there are significant benefits to using probabilistic clustering to aid in the inference of gene regulatory networks. Source code (GNU Octave), more information about the software, and sample data can be found at: http://code.google.com/p/bacon-for-genetic-networks."}
+{"text": "Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped into subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets. Clustering is considered the most important unsupervised learning problem. The clustering of time series data is particularly advantageous in exploratory data analysis and summary generation. Time series clustering is also a preprocessing step in either another time series mining task or as part of a complex system. Researchers have shown that well-known conventional algorithms for the clustering of static data, such as partitional and hierarchical clustering, generate clusters with acceptable structural quality and consistency and are partially efficient in terms of execution time and accuracy. Partitional algorithms such as k-Means and k-Medoids are very fast compared with hierarchical clustering, and fuzzy versions of these (Fuzzy c-Means and Fuzzy c-Medoids) have also been applied. The clustering of time series data can be broadly classified into conventional approaches and hybrid approaches. Conventional approaches employed in the clustering of time series data are typically partitioning, hierarchical, or model-based algorithms. 
In hierarchical clustering, a nested hierarchy of similar objects is constructed based on a pairwise distance matrix. Partitioning algorithms, such as k-Means and k-Medoids, are among the most widely used for time series clustering; model-based algorithms have also been applied, although only a few approaches address time series data directly. Aside from all of these conventional approaches, some new articles emphasize the enhancement of algorithms and present customized models for time series data clustering. One of the latest works is an article by Lai et al., who designed a two-level clustering approach. Another recent work proposed a network-based model whose construction has complexity O(n2), which is rather high; as a result, those authors attempt to reduce the search area by data preclustering (using k-Means) and limit the search to within each cluster only, to reduce the cost of network creation. However, generating the network itself remains costly, rendering the method inapplicable to large datasets. Additionally, the solution to the challenge of generating prototypes via k-Means when the triangle is used as a distance measure is unclear. In this study, the low quality problem in existing works is addressed by the proposal of a new Two-step Time series Clustering (TTC) algorithm, which has a reasonable complexity. In the first step of the model, all the time series data are segmented into subclusters. Each subcluster is represented by a prototype generated based on the time series affinity factor. In the second step, the prototypes are combined to construct the ultimate clusters. To evaluate the accuracy of the proposed model, TTC is tested extensively using published time series datasets from diverse domains. This model is shown to be more accurate than existing works and overcomes the limitations of conventional clustering algorithms in determining clusters of time series data that are similar in shape. 
With TTC, the clustering of time series data based on similarity in shape does not require calculation of the exact distances among all the time series data in a dataset; instead, accurate clusters can be obtained using prototypes of similar time series data. The rest of this paper is organized as follows. The key terms used in this study are presented in this section. The objects in the dataset related to the problem at hand are time series data of similar lengths. A time series Fi = {f1,\u2026, ft,\u2026, fn} is an ordered set of numbers that indicates the temporal characteristics of an object at any time t of its total track life T. Given a dataset of N objects, D = {F1, F2,\u2026, FN}, where Fi is a time series, the unsupervised partitioning of D into C = {C1, C2,\u2026, Ck} such that homogenous time series data are grouped together based on similarity in shape is called time series clustering. Ci is then called a cluster, where D = \u22c3i=1kCi and Ci\u2229Cj = \u2205 for i \u2260 j. Similarity in time: the similarity between two time series data is based on the similarity in each time step. Similarity in shape: the similarity between two time series is based on the similarities between their subsequences or their common trends regardless of time of occurrence. A subcluster SCi is a set of individual time series data that are similar in time and are represented by a single prototype; time series data are attached to a subcluster based on their affinity to it. Thus, V = {SC1, SC2,\u2026, SCi,\u2026, SCM} is the set of all subclusters, where k < M << N. The affinity of a time series Fx with a subcluster SCi is defined as the average similarity a(Fx, SCi) = (1/|SCi|)\u2211Fy\u2208SCi Axy, where Axy is the similarity between time series Fx and Fy and |SCi| is the number of time series data that exist in the subcluster SCi. 
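The affinity definition above can be sketched in a few lines; the similarity matrix A and the subcluster membership below are toy values of ours, not data from the paper.

```python
import numpy as np

def affinity(x, members, A):
    """Average similarity A[x, y] of series x over the members of a subcluster."""
    return sum(A[x, y] for y in members) / len(members)

# hypothetical 3-series similarity matrix (1.0 on the diagonal)
A = np.array([[1.00, 0.90, 0.25],
              [0.90, 1.00, 0.75],
              [0.25, 0.75, 1.00]])
subcluster = [0, 1]                 # series 0 and 1 are similar in time
print(affinity(2, subcluster, A))   # (0.25 + 0.75) / 2 = 0.5
```

A low value here would place series 2 into a new subcluster rather than this one.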
This value is used to distinguish the time series data that have a low affinity by placing them into a new subcluster. Each subcluster is represented by a prototype Ri = {r1,\u2026, rx,\u2026, rn}, which represents the most typical time point values of the finite set of time series data in subcluster SCi. The prototype of each subcluster is constructed with regard to the affinity of each time series with the subcluster, and is itself a time series similar in time to the subcluster's members. Time series clustering relies highly on a distance measure, and several distance measures have been proposed in the literature. However, the Euclidean distance (ED) captures only similarity in time, whereas DTW captures similarity in shape. In contrast to ED, which proposes one-to-one matching between time points, DTW is a one-to-many measurement. DTW is a generalization of ED that solves the local shift problem in the time series data being compared. Using this definition, time series clusters with similar patterns of change are constructed regardless of time points, for example, to cluster share prices related to different companies that have a common stock pattern independent of its time of occurrence. DTW \u201cwarps\u201d the time axis to achieve the best alignment between data points within the series. Dynamic programming is generally used to effectively determine the warping path. However, warping causes a scalability problem that requires quadratic computation, which is a huge challenge for DTW; TTC therefore applies DTW only between subcluster prototypes, followed by k-Medoids clustering. The detailed description of the proposed algorithm is presented in this section. According to the steps above, the activities of the TTC are explained in the following sections. 
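As a concrete sketch of the similarity-in-time comparison discussed above: TTC first standardizes each series with z-scores and then takes pairwise Euclidean distances. The data below are toy values of ours, chosen so that two series with the same shape at different scales collapse to distance zero.

```python
import numpy as np

def z_normalize(F):
    """Standardize each row (series) to zero mean and unit variance."""
    F = np.asarray(F, dtype=float)
    return (F - F.mean(axis=1, keepdims=True)) / F.std(axis=1, keepdims=True)

def euclidean_matrix(F):
    """A'[i, j] = sqrt(sum_t (F[i, t] - F[j, t])^2) for all series pairs."""
    diff = F[:, None, :] - F[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

D = z_normalize([[1, 2, 3, 4],
                 [2, 4, 6, 8],      # same shape as series 0, different scale
                 [4, 3, 2, 1]])     # reversed trend
Aprime = euclidean_matrix(D)
print(np.round(Aprime, 3))
# Series 0 and 1 collapse to the same normalized profile (distance 0),
# while the reversed series 2 stays far from both.
```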
The main objective of this TTC step is to reduce the size of the dataset by defining a prototype for each group of very similar time series data, which significantly decreases the complexity of TTC. The time series data are first standardized using z-score normalization: given Fi = {f1, .., ft, .., fn}, a time series with n data points, each point is transformed as ft\u2032 = (ft \u2212 \u03bci)/sdi, where \u03bci is the arithmetic mean of data points f1 through fn and sdi is the standard deviation of all the data points in the given time series. Subsequently, all the data are clustered as a whole based on similarity in time. An N-by-N similarity matrix (AN\u00d7N) is computed, where Aij is the similarity between time series Fi and time series Fj; ED is used as the dissimilarity measure to calculate the similarity (similarity in time) between time series data. AN\u00d7N\u2032 is the corresponding pairwise distance matrix, where Aij\u2032 is the Euclidean distance between Fi and Fj, defined as Aij\u2032 = sqrt(\u2211t=1n(fit \u2212 fjt)2). In this step, the affinity search technique concept of CAST is borrowed. Given AN\u00d7N, the algorithm proceeds by adding and removing time series data from a subcluster based on a threshold affinity value between 0 and 1, as defined by the user. This parameter, \u03b1, specifies what is considered significantly similar and controls the number and sizes of the produced subclusters. After a subcluster is formed, CAST deletes the low affinity objects from the subcluster. The process of adding to and removing from a subcluster is performed consecutively until no further changes occur in the subcluster, and a new subcluster is then started. To compare subcluster prototypes in the second step, an n-by-n matrix Z is constructed for each pair of prototypes Rx and Ry, where Zi,j = disED(rxi, ryj) and disED is the Euclidean distance. 
Given W = {w1, w2, \u2026, wu} as a set of warping paths, where each warping path is a set of matrix cells defining a traversal of Z, the DTW between the two prototypes Rx and Ry is the warping path that minimizes the cumulative distance between Rx and Ry, subject to the boundary conditions that the path starts at cell (1, 1) and ends at cell (n, n), and to monotonicity and continuity constraints: each step advances each index by at most 1 and never moves backwards. The dissimilarity between subclusters is therefore calculated by applying DTW to their prototypes and is stored in a pairwise dissimilarity matrix. Given the pairwise dissimilarity matrix, different schemes can be used for clustering; k-Medoids, which has been shown to be effective in the time series clustering domain, is selected. The experiments on the proposed model are conducted with one synthetic dataset and 12 real-world datasets obtained from the UCR Time Series Data Mining Archive, in various domains and sizes. The well-known three-class Cylinder-Bell-Funnel (CBF) dataset is used as the synthetic dataset in the experiments on TTC with large datasets. The CBF dataset is an artificial dataset that has temporal domain properties and was originally proposed by Saito in 2000; it has been used in numerous works. In general, evaluating extracted clusters (patterns) is not easy in the absence of data labels. However, the agreement between a clustering result C and ground truth G can be estimated by counting pairs of objects: |TP| (True Positive) is the number of pairs that belong to the same class in G (ground truth) and are clustered together in C, and |TN| (True Negative) is the number of pairs that neither belong to the same class in G nor are clustered together in C. The two types of clustering error are |FN| (False Negative), the number of pairs that belong to one class in G but are not clustered together in C, and |FP| (False Positive), the number of pairs that do not belong to one class in G (dissimilar time series) but are clustered together in C. 
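The prototype-to-prototype DTW described above reduces to a small dynamic program. This is a textbook sketch with absolute difference as the local cost, not the paper's implementation, and the example series are ours.

```python
import numpy as np

def dtw(Rx, Ry):
    """Cumulative DTW cost between two sequences via dynamic programming."""
    n, m = len(Rx), len(Ry)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            z = abs(Rx[i - 1] - Ry[j - 1])   # local distance Z[i, j]
            # monotonic, continuous steps: match, insertion, or deletion
            cost[i, j] = z + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]

a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 1, 0, 0]       # same shape as a, shifted by one step
print(dtw(a, b))             # 0.0: DTW absorbs the local shift
print(np.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))))  # 2.0: ED does not
```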
The Rand Index is a popular quality measure for evaluating a clustering result against ground truth: RI = (|TP| + |TN|)/(|TP| + |TN| + |FP| + |FN|). It has values ranging from 0 to 1, where 1 corresponds to the case wherein ground truth and clustering result are identical and 0 corresponds to the case wherein they are completely different. For the entropy-based measure, given a cluster Cj, the class distribution of the data is computed as the probability Pr(Gi | Cj) that an instance in Cj belongs to class Gi, where Pr(Gi | Cj) = |Cj\u2229Gi|/|Cj|; the normalized entropy of Cj is computed from this class distribution. ConEntropy is the converse of Entropy, defined such that ConEntropy is 1 when the ground truth and the clustering result are identical, and overall ConEntropy is the sum of the individual cluster entropies weighted by the size of each cluster. The baseline for comparison is k-Medoids. The reader may wonder why DTW is not used on the raw series to compare the results. In simple terms, the use of DTW would not be a fair comparison because DTW is not practically feasible in the real world as a result of its very high complexity. The complexity of k-Medoids is O(Ik(N \u2212 k)2) and the complexity of DTW between each pair of time series is O(n2), where N is the number of time series data, k is the number of clusters, I is the number of iterations required for convergence, and n is the length of the time series. Therefore, the total computation of k-Medoids with DTW is O(Ik(N \u2212 k)2 \u00b7 n2). Moreover, N(N \u2212 1)/2 distance calculations are required for the confusion matrix alone (needed in clustering), so the cost of the distance matrix alone (not the entire clustering process) equals N(N \u2212 1)n2/2, which is very high. For example, given N = 1000 and n = 152 in a dataset, the number of instruction executions is 11,540,448,000. 
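The pair-counting Rand Index, and the distance-matrix operation count quoted above, can be checked with a short sketch; the label vectors are toy examples of ours.

```python
from itertools import combinations

def rand_index(truth, clustering):
    """Fraction of object pairs on which the two partitions agree."""
    agree = total = 0
    for i, j in combinations(range(len(truth)), 2):
        same_g = truth[i] == truth[j]
        same_c = clustering[i] == clustering[j]
        agree += (same_g == same_c)      # counts both |TP| and |TN| pairs
        total += 1
    return agree / total

print(rand_index([0, 0, 1, 1], [0, 0, 1, 1]))   # identical partitions -> 1.0
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))   # heavy disagreement -> low

# distance-matrix cost: N(N-1)/2 pairs, n^2 operations per DTW comparison
N, n = 1000, 152
print(N * (N - 1) * n ** 2 // 2)                # 11,540,448,000, as quoted
```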
However, using TTC for the same process requires approximately 177,084,440 executions, because the process operates on a fraction of the entire dataset (reduction factor = 0.1). As a fair comparison in the subsequent experiment, the raw time series data are also represented by a representation method, because time series data are represented by a representation method prior to clustering in the majority of previous literature; numerous studies focusing on the representation or dimensionality reduction of time series data have been conducted. Because the accuracy of PAA itself depends on the number of segmentations, the mean of three accuracies for each dataset is calculated as the average accuracy of k-Medoids. TTC is thus compared with k-Medoids with regard to raw time series data and to time series data represented by PAA. As expected, the comparison of TTC with k-Medoids (ED) and k-Medoids (PAA-ED) shows that TTC is more accurate on most of the datasets. TTC outperforms k-Medoids (ED) because ED cannot handle the local shifts in time series data, which decreases the accuracy of the final clusters. Furthermore, TTC is more accurate than conventional k-Medoids on represented time series, that is, k-Medoids (PAA-ED). Although several outliers and noises in raw time series data are handled in the time series represented by PAA, the proposed algorithm, namely TTC, remains superior to k-Medoids (PAA-ED) because of its shift-handling mechanism. The result shows that improved clustering quality is obtainable without reducing the time series dimension by using the prototypes of very similar time series data. This result is proof of the researchers' claim that the TTC model can outperform conventional algorithms using either raw time series data or dimensionality reduction approaches. As mentioned in related works, one of the novel works close to the proposed model in this study is the two-level approach proposed by Lai et al. 
called 2LTSC. As mentioned, a high-resolution time series is used in the TTC model, which is superior to the dimensionality-reduced time series used in 2LTSC. As a result, the quality of TTC is increased after clustering occurs in the second level. The subclusters are merged in the second step of TTC, which causes the generated cluster structure to be more similar to the ground truth. Another study that performed clustering in more than one step is that of Zhang et al., which uses a graph-based approach. As the results show, TTC is superior to the graph-based algorithm in some datasets. The graph-based approach notably requires the generation of a nearest-neighbor graph, which is costly. However, the graph-based approach can be advantageous in datasets where similarity in time is essentially very important, such as the Coffee dataset. To evaluate the first step of TTC, a parameter is first defined as the reduction factor, Rfactor, and the error rate Erate of the subclusters is calculated based on the number of items in the same subcluster that belong to the same class (ground truth). 
Given G = {G1, G2,\u2026, GM} as ground truth classes and V = {SC1, SC2,\u2026, SCi,\u2026, SCM} as the subclusters generated by CAST, the subcluster SCi is assigned to the class most frequently found in it in order to compute its error rate with respect to G. The error rate of this assignment is then measured by counting the number of misclassified time series data and dividing the result by the number of time series data in the subcluster: with max\u2061(|SCi\u2229Gj|) denoting the number of items in subcluster SCi that belong to its majority class, the error rate of subcluster SCi is Erate(SCi) = 1 \u2212 max\u2061(|SCi\u2229Gj|)/|SCi|, where M is the number of subclusters determined by CAST and the size of subcluster SCi is shown by |SCi|. Given N as the size of the dataset, the overall error rate of the reduction step can be expressed as a weighted sum of the individual subcluster error rates. At a reduction factor Rfactor = 0.23, the effectiveness of TTC is not significantly reduced; that is, the error rate is less than 0.05. To confirm the effectiveness of TTC further, some experiments are conducted on large synthetic datasets. For this purpose, up to 8,000 CBF time series are generated, and the average accuracy of TTC with regard to different CBF data sizes is reported. As the results show, the quality of TTC is superior to that of the other algorithms, reaching 90% in most cases. We illustrated the advantages of using some time series data as prototypes to cluster time series data based on similarity in shape. We proposed a two-step clustering approach and showed its usage. The results obtained by applying TTC to different datasets were evaluated extensively. TTC can be applied to a large time series dataset to generate accurate clusters. In the experiments with various datasets, different evaluation methods were used to show that TTC outperforms other conventional and hybrid clustering approaches. Currently, we are working on a multistep approach, which is very scalable in the clustering of very large time series datasets. This approach will be performed as an anytime split-and-merge algorithm to present early results to the user and thus improve the clusters."}
+{"text": "Pluripotent stem cells are uniquely capable of differentiating into somatic cell derivatives of all three germ lineages, therefore holding tremendous promise for developmental biology studies and regenerative medicine therapies. Although temporal patterns of phenotypic gene expression have been relatively well characterized during the course of differentiation, coincident patterns of endogenous extracellular matrix (ECM) and growth factor expression that accompany pluripotent stem cell differentiation remain much less well-defined. Thus, the objective of this study was to examine the global dynamic profiles of ECM and growth factor genes associated with early stages of pluripotent mouse embryonic stem cell (ESC) differentiation. Gene expression analysis of ECM and growth factors by ESCs differentiating as embryoid bodies for up to 14 days was assessed using PCR arrays , and the results were examined using a variety of data mining methods. As expected, decreases in the expression of genes regulating pluripotent stem cell fate preceded subsequent increases in morphogen expression associated with differentiation. Pathway analysis generated solely from ECM and growth factor gene expression highlighted morphogenic cell processes within the embryoid bodies, such as cell growth, migration, and intercellular signaling, that are required for primitive tissue and organ developmental events. In addition, systems analysis of ECM and growth factor gene expression alone identified intracellular molecules and signaling pathways involved in the progression of pluripotent stem cell differentiation that were not contained within the array data set. Overall, these studies represent a novel framework to dissect the complex, dynamic nature of the extracellular biochemical milieu of stem cell microenvironments that regulate pluripotent cell fate decisions and morphogenesis. 
For three decades, embryonic stem cells (ESCs) have been used as a model of mammalian developmental morphogenesis in order to define and characterize mechanisms of self-renewal and differentiation of pluripotent cells. In vitro, ESCs are commonly induced to differentiate via the spontaneous assembly of cell aggregates in suspension referred to as \u201cembryoid bodies\u201d (EBs). Differentiation can be directed in vitro to specific cell phenotypes by precisely controlling the biochemical composition of the cell microenvironment; this strategy, often inspired by principles of developmental biology, has been successfully employed to direct pluripotent stem cell differentiation to certain cell types. Exogenous administration of ECM and growth factor molecules in different combinations and sequences has been used to direct the differentiation of stem cells. The objective of this study was to globally assess the dynamics of ECM and growth factor expression associated with the differentiation of ESCs within the EB microenvironment using PCR arrays for gene expression and pathway analyses. Gene expression analysis of EB differentiation focused on ECM components, including cell adhesion molecules, matricellular proteins, integrins, and proteases, as well as growth factors, including members of the BMP, FGF, transforming growth factor \u03b2 (TGF\u03b2), and interleukin (IL) families. Gene expression profiles were contrasted using hierarchical clustering, k-means clustering, and statistical mapping to identify different global patterns of expression, as well as shared profiles of independent molecules; the combination of these approaches enabled the identification of groups of molecules exhibiting either coincident or divergent expression patterns. Subsequent pathway analyses highlighted key signaling pathways capable of acting on transcription factors regulating ESC phenotype at different stages of differentiation, reconstructed solely from ECM and growth factor gene expression data. 
Characterizing the dynamic relationships between ECM/growth factor expression and EB differentiation using the quantitative analytical framework described provides new insights into the composition of the extracellular microenvironment regulating pluripotent stem cell biology and associated with early morphogenic differentiation events. Mouse embryonic stem cells were initially expanded on a feeder layer of mouse embryonic fibroblasts and subsequently cultured feeder-free on 0.1% gelatin-coated polystyrene cell culture dishes (Corning) with Dulbecco's modified eagle medium (Mediatech), supplemented with 15% fetal bovine serum (HyClone), 2 mM L-glutamine (Mediatech), 1\u00d7 MEM non-essential amino acid solution (Mediatech), antibiotic/antimycotics (Mediatech), 0.1 mM \u03b2-mercaptoethanol, and 10\u00b3 U/mL leukemia inhibitory factor (LIF) (ESGRO). ES cells were passaged every two to three days before reaching \u223c70% confluence. To initiate EB culture, adherent ESCs were detached from the gelatin-coated dishes using 0.05% Trypsin/0.53 mM EDTA (Mediatech), and 100 mm bacteriological grade polystyrene Petri dishes (Corning) were inoculated with 10 mL of a suspension of ESCs (4\u00d710\u2075 cells/mL) in differentiation media (ESC media without LIF). EB suspension cultures were maintained on rotary orbital shakers at 40 rpm at 37\u00b0C in 5% CO2 for the entire duration of suspension culture. Previous work from our lab has demonstrated that rotary orbital suspension culture methods result in greater yields of homogeneous populations of EBs. EB morphology was monitored daily by phase microscopy for up to 14 days of differentiation using a TE2000 microscope (Nikon) and a Spot Flex camera. For histological analysis, EBs collected at different stages of differentiation were fixed with 10% formalin for 30 minutes and embedded in Histogel\u00ae (Richard-Allan Scientific).
Histogel\u00ae-embedded samples were dehydrated through a series of alcohol and xylene rinses prior to paraffin embedding. Sections of EB samples (5 \u00b5m each) were obtained using a Microm HM 355S rotary microtome and stained with hematoxylin and eosin (H&E) using a Leica AutoStainer XL. Stained slides were coverslipped using low viscosity mounting medium and imaged on a Nikon 80i microscope equipped with a SPOT Flex Camera (Diagnostic Instruments). RNA was extracted from undifferentiated ESCs and EBs at days 4, 7, 10, and 14 of differentiation (n\u22653 for each sample) using the RNeasy Mini Kit (Qiagen). Complementary DNA was reverse transcribed from 1 \u00b5g of total RNA using the iScript cDNA synthesis kit (Bio-Rad), and real-time RT-PCR was performed using SYBR Green technology with a MyiQ cycler (Bio-Rad). Beacon Designer software was used to design forward and reverse primers for pluripotency and differentiation markers as well as for the housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (Gapdh), which were each independently validated with appropriate positive cell controls. Relative levels of pluripotent gene expression were calculated compared to undifferentiated ESC samples and normalized to Gapdh expression levels using the \u0394\u0394Ct method. Genomic DNA was eliminated by mixing 0.5 \u00b5g RNA with 5\u00d7 gDNA Elimination Buffer and RNase-free water before being incubated at 42\u00b0C for 5 minutes. The RT cocktail was prepared and added to the elimination buffer mixture. Each cDNA sample was synthesized in an iCycler Thermal Cycler (Bio-Rad) and diluted with RNase-free water after synthesis was complete. RT-PCR was performed by first preparing the experimental cocktail and then equally distributing the cocktail (25 \u00b5L) into all of the individual wells of the PCR 96-well array (Mouse Extracellular Matrix and Adhesion Molecules array or Mouse Growth Factors array).
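The \u0394\u0394Ct fold-change calculation described above can be sketched as follows; this is a minimal illustration of the general method, and the Ct values used in the example are hypothetical, not taken from the study:

```python
def fold_change_ddct(ct_gene_eb, ct_gapdh_eb, ct_gene_esc, ct_gapdh_esc):
    """Relative expression by the 2^(-ddCt) method.

    dCt normalizes the gene of interest to Gapdh within each sample;
    ddCt compares a differentiating EB sample to the undifferentiated
    ESC control. Returns the fold change relative to ESCs.
    """
    d_ct_eb = ct_gene_eb - ct_gapdh_eb      # within-sample normalization
    d_ct_esc = ct_gene_esc - ct_gapdh_esc   # control normalization
    dd_ct = d_ct_eb - d_ct_esc
    return 2.0 ** (-dd_ct)

# A gene whose Ct drops by 2 cycles relative to Gapdh vs. the ESC control
# is ~4-fold upregulated (hypothetical numbers).
fold = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
```

Because Ct is logarithmic in template amount, each unit decrease in ddCt corresponds to a two-fold increase in relative expression.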
Only one gene, secreted phosphoprotein 1 (Spp1), overlapped between both arrays besides the housekeeping genes and internal controls. Each array was tightly sealed with optical thin-wall 8-cap strips and amplified in a MyiQ cycler (Bio-Rad) with a two-step cycling program. Fold changes in gene expression were analyzed using the \u0394\u0394Ct method of quantitation, whereby samples of EBs from different time points were compared relative to undifferentiated ESC values after individual sample values were normalized to internal Gapdh levels. For the SuperArray PCR arrays, gene expression differences by differentiating EBs were calculated as fold-change increases or decreases at the different time points examined compared to ESCs, using Gapdh as the normalization gene as described above. Initially, the results of the independent ECM and Growth Factor PCR arrays were separately analyzed by Genesis (Release 1.7.5) array analysis software. Two-dimensional hierarchical clustering of the log2-transformed data sets was performed across the different genes and time points using Euclidean distance and average linkage clustering. The clustering results were represented visually by a heat map dendrogram, with green indicating decreased expression and red indicating increased expression relative to undifferentiated ESCs. The relative color intensity values corresponding to the magnitude of fold change (either an increase or a decrease) were set between \u22127.0 and 7.0 to provide a distinct color range for all log-transformed magnitudes. Prior to all further analysis, the ECM and Growth Factor array data sets were combined and examined together. The average fold-change values of ESCs and EBs for each gene from the entire time course were analyzed using k-means clustering analysis in Genesis software (version 1.7.5).
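The hierarchical clustering step (Euclidean distance, average linkage on log2 fold-change values) was performed in Genesis; an equivalent computation can be sketched with SciPy, assuming a toy log2 fold-change matrix invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: genes; columns: log2 fold change vs. undifferentiated ESCs at
# days 4, 7, 10, and 14. Values are hypothetical.
log2_fc = np.array([
    [ 1.0,  2.0,  3.0,  4.0],   # steadily upregulated
    [ 0.9,  2.1,  2.8,  4.2],   # similar up trend
    [-1.0, -2.0, -3.0, -4.0],   # steadily downregulated
    [-1.1, -1.9, -3.2, -3.8],   # similar down trend
])

# Average-linkage clustering on Euclidean distances, as used for the heat map.
Z = linkage(log2_fc, method="average", metric="euclidean")

# Cut the dendrogram into two flat clusters (up- vs. down-regulated genes).
labels = fcluster(Z, t=2, criterion="maxclust")
```

Clustering the transpose of `log2_fc` the same way gives the second (time point) dimension of a two-dimensional analysis.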
In order to determine the optimal number of k-means clusters that sufficiently captured the distinct profiles of the entire data set, the cluster number was varied between 4 and 20 and evaluated for a maximum of 300 iterations. Analysis with fewer clusters (4\u20138) did not distinguish different patterns of expression as clearly, whereas larger numbers of clusters (>12) yielded some independent groups with as few as 1\u20132 genes; therefore, subsequent k-means analysis was performed using 12 clusters. Significance testing was conducted using SYSTAT (Version 12) software. For individual genes, expression fold-change comparisons across time points were evaluated using a one-way analysis of variance (ANOVA) with subsequent post-hoc Tukey analysis to determine significance (p<0.05). Significant differences in expression fold-change between consecutive time points were depicted with a branch schematic for all genes from the array, with increasing or decreasing slopes representing positive or negative fold differences, respectively, while non-significant differences were represented as horizontal lines. Pathway analysis of genes that exhibited significant changes over time was performed using Ingenuity Pathway Analysis (IPA) to examine the biological functions and signaling pathways that were implicated in EB development. Functions were tested for significance (p<0.01) using the Benjamini-Hochberg (B\u2013H) multiple testing correction method to account for false positives and were then ranked in decreasing order of significance. Based on published literature reports, IPA generated networks that included interactions between the focused sets of array molecules as well as other molecules present in the IPA database. Each of the networks generated by IPA, which included up to 25 focus genes, was assigned a relative score reflecting the probability that any given gene in a particular network was present by chance; higher scores indicate a lower likelihood.
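The Benjamini-Hochberg step-up correction used to control false positives in the function rankings can be sketched as a generic implementation (not the IPA-internal one); the example p-values are invented:

```python
def benjamini_hochberg(pvalues):
    """Return B-H adjusted p-values (FDR), preserving the input order.

    The step-up procedure multiplies the i-th smallest p-value by m/i
    and enforces monotonicity from the largest rank downward.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Traverse from the largest to the smallest rank, clamping so that
    # adjusted values never exceed the one at the next-higher rank.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvalues[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```

Genes (or functions) are then declared significant when their adjusted value falls below the chosen threshold, e.g. 0.01 as in the text.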
For each time point, fold-changes (\u221246.25 to \u22121 and 1 to +513.58) were filtered in IPA using a minimum 2-fold change threshold; from these genes, a list of \u201cfocus\u201d genes was generated that contained molecules present in IPA's knowledge database (78% of eligible molecules). The top biological functions and networks for each time point were assessed based on the resulting eligible genes from the array (\u201cfocus genes\u201d) and IPA's database containing gene associations; biological functions were tested for significance as described above. The time course of EB differentiation was examined by morphology and phenotypic markers prior to performing semi-global gene expression analysis. Using rotary orbital suspension culture, the formation of EBs and maintenance of the EB population remained relatively uniform over the time course examined. Expression of pluripotency markers decreased over the course of differentiation; conversely, a differentiation marker was expressed at significantly increased levels by day 14 compared to all other time points examined, and mesoderm differentiation markers increased over time (data not shown), thus further confirming the expected time course of differentiation. The coincident decrease in pluripotency and increase in germ lineage marker expression, as well as the EB morphological changes that occurred over 14 days of suspension culture, were consistent with previous studies from our laboratory. In order to visualize the global gene expression patterns exhibited by EBs and to identify any subsets of molecules undergoing coincident expression changes over time, two-dimensional hierarchical clustering was performed independently for each array; the expression of one group of genes in the Extracellular Matrix and Adhesion Molecules array consistently increased between each of the different time points examined.
Although hierarchical and k-means clustering analysis enabled the identification of subgroups of genes that were similarly expressed over the course of differentiation, the correlative relationships were based upon the magnitudes of fold-change relative to the starting state (ESCs), but did not account for statistical changes occurring between each of the discrete time points examined. Parallel independent ANOVA analysis for each gene was therefore performed to evaluate significant changes in gene expression over time. In general, genes that did not change significantly included Fgf2, matrix metalloproteinase 2 (Mmp2), and the housekeeping genes heat shock protein 90 kDa alpha, class B member 1 (Hsp90ab1) and \u03b2-actin (Actb); in the previous analyses, hierarchical and k-means clustering results similarly indicated that expression of both Hsp90ab1 and Actb remained relatively unchanged. Fibroblast growth factors (6/17 genes), interleukins (10/10 genes), and matrix metalloproteinases (4/12 genes) all appeared within the group of genes that did not change significantly between consecutive time points. Network analysis was performed using only the genes that changed significantly over the course of differentiation and exhibited more than two-fold expression changes compared to undifferentiated ESCs. Using the IPA database, biological functions reflected by these genes were generated and ranked (only the 10 highest- and 10 lowest-ranked functions are shown). The top-ranked networks at each time point centered on fibronectin 1 (Fn1; day 4), Mmp9 (day 7), Vegfa (day 10), and insulin-like growth factor 1 (day 14). At day 4, many of the molecules included in the top network were related to the p53 transcription factor, which acts to suppress the pluripotent marker Nanog and thereby induce differentiation; however, several growth factors that act on p53 in the day 4 network were not present at day 7, indicating an emergence of different roles for growth factors by day 7.
Along with the greater number of genes significantly increasing over time, the number of genes included in the networks at later time points also grew, such that the top networks for days 10 and 14 contained nuclear factors, including Fli1, TATA box binding protein-associated factor 4B (Taf4b), and cell division cycle 73 (Cdc73), linked to extracellular factors, but their connectivity was low (2\u20133 connections) compared to the number of connections formed to p53 at days 4 and 7. This increase in the number of nuclear factors present in the day 10 network is suggestive of fewer commonly shared nuclear targets by the population of differentiating cells, which is consistent with the onset of diverging cell phenotypes within EBs to different germ lineages. Overall, the physical and biological network connections generated using statistically significant array data highlight the ability of ECM and growth factor expression patterns alone to elucidate global trends in cell growth and differentiation as a function of time. The networks generated from the gene expression data highlighted the transition of the differentiating ESCs away from a pluripotent state. Through this analysis, functional relationships based on unrelated previous studies emerge from the data that allow for new hypothesis generation and testing regarding exogenous molecules capable of affecting ESC differentiation. Complementary information provided from different forms of gene expression analysis yields an overall more comprehensive characterization of the temporal dynamics of EB differentiation than individual analyses alone. Some molecules, such as Emilin1 and Vegfa, appear consistently throughout the time course and change their molecular interactions with time, while others are unique to certain stages of differentiation. As an illustrative example, clustering alone did not identify Fn1 as an important molecule in the examined EB system, due to the fact that it did not exhibit large fold changes over the time course examined.
Further analysis using parallel ANOVAs indicates that Fn1 expression increases significantly by day 14, although it was present in the top network at day 4 with a single connection. Analysis also displayed a relationship between Spock1 and Col18A1, a hub gene not included in the arrays but highly connected to several ECM and growth factor genes contained in the arrays, as well as to other highly connected genes (p53 and Fn1). The aforementioned examples of Fn1 and Spock1 demonstrate that a combination of analytical tools can identify potential key regulators in the differentiation system that would have otherwise been overlooked by single forms of analysis alone. Just as each individual analysis lends insight when examining specific molecules, the range of analyses presented is critical for extracting a more global perspective of EB morphogenesis from the array data. Researchers often rely on a single method for assessing large data sets, typically either hierarchical clustering or pathway analysis, which can potentially result in overlooking molecules important to the system; for example, performing clustering alone did not result in the identification of molecules such as Postn and Vtn that emerged from the other analyses. Extracellular factors, including matrix molecules and growth factors, which are typically not highlighted as much as transcription factors when examining differentiation patterns, exhibited different dynamic signatures during EB differentiation that accompany changes in ESCs, such as decreased pluripotency and the onset of tissue morphogenesis. Semi-global transcriptional network analysis of extracellular factors expressed by pluripotent cells in dynamic environments highlighted the potential importance of such endogenous molecules and their utility in assessing the temporal shifts in tissue morphogenesis of the system.
In order to analyze the wide spectrum of matrix molecules in this study, multiple methods were used that highlighted differences in expression patterns through clustering tools and focused on the relationships between the molecules via network analysis. Subsets of genes with diverging expression values (i.e. groups that exhibited opposing signatures over time) emerged from k-means and statistical analyses. The simultaneous increase and decrease of different sets of molecules is likely necessary for the onset of many differentiation events, and the identification of these sub-groups could be critical for further understanding coincident cell phenotype specification. This study also demonstrates the potential caveats of examining individual molecules in a one-dimensional fashion and neglecting the global systems view that contributes considerably to the fundamental understanding of emergent biology. In contrast, the combination of clustering/statistical analyses with network mapping provides a multi-faceted approach that enables a more in-depth understanding of the dynamic nature and processes of the EB microenvironment. Overall, these insights provide a better understanding of the transcriptional changes for ECM molecules and growth factors that accompany embryonic morphogenesis and thereby suggest novel routes for engineering the differentiation of ESCs to specific lineages within EBs. Figure S1. Embryoid differentiation. (A\u2013D) Histological examination of EB differentiation with H&E staining. (E) Gene expression of Oct-4, a marker of pluripotency, decreases at later time points of EB differentiation. ANOVA: # p<0.05 compared to ESCs (day 0), \u2020 p<0.05 compared to days 4 and 7. (DOCX) Table S1. Genes in each hierarchical cluster. Hierarchical clustering identified five gene groups based on relative expression over time.
Clusters I and IV increased over time, clusters II and V decreased over time, and cluster III maintained relatively unchanged gene expression levels over time. (DOCX) Table S2. Genes in each k-means plot. K-means analysis defines temporal gene expression, refining patterns of expression and separating hierarchical clusters. Clusters B\u2013E correspond to clusters II and V; clusters F\u2013I correspond to clusters I and IV; clusters J\u2013M correspond to cluster III. (DOCX) Table S3. Genes separated by ANOVA analysis. List of genes that change statistically over time (4C\u2013F) as well as those that do not change statistically over time (4B). (DOCX)"}
+{"text": "Some strategies reverse engineer regulatory interactions from experimental data, while others identify functional regulatory units (modules) under the assumption that biological systems yield a modular organization. Most modular studies focus on network structure and static properties, ignoring that gene regulation is largely driven by stimulus-response behavior. Expression time series are key to gain insight into dynamics, but have been insufficiently explored by current methods, which often (1) apply generic algorithms unsuited for expression analysis over time, due to inability to maintain the chronology of events or incorporate time dependency; (2) ignore local patterns, abundant in most interesting cases of transcriptional activity; (3) neglect physical binding or lack automatic association of regulators, focusing mainly on expression patterns; or (4) limit the discovery to a predefined number of modules. We propose Regulatory Snapshots, an integrative mining approach to identify regulatory modules over time by combining transcriptional control with response, while overcoming the above challenges. Temporal biclustering is first used to reveal transcriptional modules composed of genes showing coherent expression profiles over time. Personalized ranking is then applied to prioritize prominent regulators targeting the modules at each time point using a network of documented regulatory associations and the expression data. Custom graphics are finally depicted to expose the regulatory activity in a module at consecutive time points (snapshots). Regulatory Snapshots successfully unraveled modules underlying yeast response to heat shock and human epithelial-to-mesenchymal transition, based on regulations documented in the YEASTRACT and JASPAR databases, respectively, and available expression data. Regulatory players involved in functionally enriched processes related to these biological events were identified. 
Ranking scores further suggested the ability to discern the primary role of a gene (target or regulator). A prototype is available at: http://kdbio.inesc-id.pt/software/regulatorysnapshots. Gene regulation is the major orchestrator of cellular activity, directing the creation of proteins designed to participate in every biological process. Considerable effort has been undertaken to unveil regulatory mechanisms and advance the knowledge on complex system responses and dysregulation events leading to medical conditions. In particular, transcription has been extensively studied for its essential role in gene regulation, determining which genes should be transcribed into mRNA and influencing their expression rates. Explaining the translation of a biochemical stimulus into a cellular outcome is however a challenging task. One of the reasons is that most transcriptional responses result from a concerted action of multiple transcription factors (TFs). Regulatory players are often involved in diverse pathways simultaneously or over time. Additionally, mechanisms such as dynamic feedback loops add layers of complexity, as they generate intricate responses with transient gene products frequently found in regulatory cascades. Individual regulatory associations have been actively predicted using diverse techniques, from ChIP-chip experiments to automated assessment of TF-binding site affinity. Transcriptional responses have also been investigated looking for significant changes and patterns. Nonetheless, research has been progressively evolving toward the study of organisms from a systemic standpoint, and the next endeavour is to assemble heterogeneous elementary data into functional representations of regulatory networks, considering both control and behavior, to support the modeling and prediction of the system\u2019s responses to specific conditions. Available computational solutions usually fit into one of two groups. Reverse-engineering, also termed network inference, regards the system as a mathematical function with parameters.
Models such as Bayesian networks or differential equations are fitted to the experimental data using learning algorithms. Several authors have instead addressed the module identification problem. Most of the strategies reviewed herein rely on clustering techniques to unravel transcriptional trends, searching for global patterns. It has been often recognized that clusters are not able to describe the complex nature of transcriptional response, as genes tend to behave coherently only in specific time frames and may be involved in different functional groups over time. We propose Regulatory Snapshots, a computational framework to identify regulatory modules from expression time series and regulatory associations. First, we unravel sets of genes exhibiting coherent expression profiles using a state-of-the-art temporal biclustering method, CCC-Biclustering. Advantages of this framework include the ability to: (1) combine mechanics, as documented evidence of regulation defining the network topology (prior knowledge), and dynamics, as transcriptional responses yielded by expression; (2) search for local patterns, known to prevail in transcriptional response; (3) capture coordinated activity through algorithms specifically incorporating the temporal dimension; (4) identify relevant TFs relying on whole-network analysis, where transcriptional behavior is seen as a result of intricate system connectivity, rather than the direct action of a few players; (5) visually expose the variation of regulatory interaction relevance over time. We assess the effectiveness of Regulatory Snapshots to identify regulatory modules underlying Saccharomyces cerevisiae\u2019s response to heat shock and human epithelial-to-mesenchymal transition. A prototype is available at: http://kdbio.inesc-id.pt/software/regulatorysnapshots. In this section we describe Regulatory Snapshots, an integrative mining approach to identify regulatory modules over time.
We define a regulatory module as a group of genes exhibiting coherent transcriptional activity in a given time frame and sharing a common set of regulators. In this context, we propose to discover and characterize regulatory modules involved in specific transcriptional responses in two steps. First, the real-valued expression levels in a matrix M' are discretized to a set of symbols, \u03a3, representing distinct activation levels in a new matrix M. Any discretization is eligible; in this work, we followed the original approach, transforming M' into M using the symbols D, N, and U, meaning down-trend, no-trend, and up-trend, respectively. A CCC-Bicluster is a subset of rows (genes) I and a subset of contiguous columns (time points) of M with a coherent expression pattern; it is maximal if adding a row, or adding a symbol to the beginning or end of the expression pattern, induces changes in I. CCC-Biclusters pertaining to a single row are biologically uninteresting and are thus discarded. In order to find all maximal CCC-Biclusters, CCC-Biclustering first performs a simple alphabet transformation to append the column number to each symbol in the discretized matrix, and then builds a generalized suffix tree over the transformed rows. Right-maximality of an internal node v is guaranteed by generalized suffix tree construction; left-maximality of an internal node v is guaranteed when, among other conditions, v has no incoming suffix links. CCC-Biclustering uses efficient techniques to find these nodes in linear time. Relevant TFs are then identified and prioritized through the application of personalized ranking (a Personalized PageRank-style procedure) to a network of regulatory associations. Such a network can be described as a directed graph N whose set of nodes, V, is composed of regulators and target genes, while the set of edges, E, includes the regulatory associations between elements in V. Let A and D denote the adjacency and diagonal (out-degree) matrices of N, respectively, where A(u,v) is the weight of the edge between regulator u and target gene v. Given a set of initial target genes, or seeds, S, R is the set of transcription factors regulating the genes in S. Given the directed graph N, the transition probability matrix W of a typical random walk on N is defined as W = D^-1 A, so that each edge weight is normalized against the sum of weights of the outgoing edges of its source node.
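The discretization and alphabet-transformation steps can be sketched as below. The variation threshold is an assumption introduced for illustration, and the actual CCC-Biclustering implementation may discretize differently:

```python
def discretize_row(expr, threshold=0.3):
    """Map each transition between consecutive time points to a symbol:
    U (up-trend), D (down-trend) or N (no-trend), depending on whether
    the change exceeds a threshold (the threshold is assumed here)."""
    symbols = []
    for t in range(1, len(expr)):
        delta = expr[t] - expr[t - 1]
        if delta > threshold:
            symbols.append("U")
        elif delta < -threshold:
            symbols.append("D")
        else:
            symbols.append("N")
    return symbols

def alphabet_transform(symbols):
    """Append the column (time point) number to each symbol, so that
    identical patterns only match when aligned at the same contiguous
    columns of the discretized matrix."""
    return [s + str(t + 1) for t, s in enumerate(symbols)]

row = discretize_row([0.0, 1.0, 1.1, 0.2])   # -> ['U', 'N', 'D']
tagged = alphabet_transform(row)             # -> ['U1', 'N2', 'D3']
```

After the transformation, a generalized suffix tree built over the tagged rows exposes maximal sets of genes sharing a pattern over contiguous time points as internal nodes.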
Given a preference vector p0 and a non-negative heat diffusion coefficient t to control the rate of diffusion and the preference for closer or farther regulations, the ranking vector pt is given by the discrete heat kernel approximation pt = e^(-t) \u2211_{k=0..Z} (t^k / k!) W^k p0, where Z is the number of iterations. We have previously relied on a related technique based on the heat kernel rank. The preference vector p0 contains the expression values of the target genes in S: p0(u) = e(u) if u belongs to S, and 0 otherwise, where e(u) is the expression value of gene u. To be able to reach the regulators from the targets, the signal is propagated through the regulatory network by traversing the regulations in reverse direction, or equivalently using the transpose of the network graph N. The procedure is repeatedly applied to generate one ranking per time point. Coherent temporal transcriptional responses can be studied using current software tools. In this section, we investigate the regulatory modules obtained using Regulatory Snapshots in two case studies. The first study concerns Saccharomyces cerevisiae\u2019s response to heat shock upon exposure to 37\u00b0C. It focuses primarily on the biological soundness of the top transcription factors and their relevance over time, output by Regulatory Snapshots, for biclusters whose value has been previously confirmed. We analyzed time series expression data from yeast cells upon exposure to heat shock, measured at five time points over a one hour period. For each bicluster, a p-value was computed under the null hypothesis that a similar pattern would occur by chance in an expression matrix of equal size; we filtered out biclusters with a p-value above the 1 percent level and biclusters overlapping with a Jaccard similarity larger than 25 percent. Functional enrichment was assessed through a p-value based on the hypergeometric distribution. We considered highly significant all GO terms with a Bonferroni corrected p-value lower than 0.01.
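The ranking step can be sketched numerically, assuming the discrete heat-kernel form above with the random-walk transition matrix built on the reversed graph so that seed expression flows from targets back to their regulators (the direction convention and the toy network are assumptions for illustration):

```python
import numpy as np
from math import exp

def heat_kernel_rank(adj, p0, t=1.0, Z=10):
    """Approximate pt = e^(-t) * sum_{k=0..Z} (t^k / k!) W^k p0 on the
    reversed network, where W is the row-normalized (random-walk)
    transition matrix of the reversed graph."""
    rev = adj.T.astype(float)                 # reverse edges: target -> regulator
    deg = rev.sum(axis=1, keepdims=True)
    W = np.divide(rev, deg, out=np.zeros_like(rev), where=deg != 0)
    term = p0.astype(float).copy()            # k = 0 term of the series
    acc = term.copy()
    for k in range(1, Z + 1):
        term = (t / k) * (W.T @ term)         # builds t^k/k! * W^k p0 iteratively
        acc += term
    return exp(-t) * acc

# Toy network: nodes 0 and 3 are TFs; nodes 1 and 2 are target genes.
# Edges (regulator -> target): 0->1, 0->2, 3->2.
A = np.zeros((4, 4))
A[0, 1] = A[0, 2] = A[3, 2] = 1.0

# Seed vector: expression values of the bicluster's target genes.
p0 = np.array([0.0, 1.0, 1.0, 0.0])

scores = heat_kernel_rank(A, p0)
# The TF regulating both targets (node 0) outranks the single-target TF (node 3).
```

Repeating the computation with the expression values measured at each time point yields one regulator ranking per time point, as in the snapshots.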
CCC-Biclustering was applied to the expression data, reporting 167 CCC-Biclusters with coherent responses. Six of the resulting biclusters, describing transcriptional upregulation and downregulation patterns, have been previously subject to biological analysis; these were selected based on: (1) bicluster size; (2) p-value; (3) presence of abrupt variations; and (4) interestingness of the expression pattern, including evidence of anti-correlation between the two biclusters. TFRank was applied to propagate the normalized expression levels of the genes in each bicluster through the transpose of the graph and identify the most relevant TFs at each measured time point. Bicluster 39 includes genes whose expression was abruptly upregulated during the first 5 minutes of exposure to heat, followed by residual variation between 5\u2019 and 30\u2019 and a large decrease in the last 30 minutes. Many players with potentially relevant roles, such as Sfp1p, were found in the top 20 ranked regulators. We finally analyzed the variation of transcription factor relevance over time output by Regulatory Snapshots. We focused on four of the most varying TFs in terms of relevance scores, from those included within the top 30 and appearing among the 20 best ranked in at least one of the time points: Mig1p and Rim101p for bicluster 39, and Hcm1p and Arr1p for bicluster 151. Contrary to many available approaches, Regulatory Snapshots does not make assumptions of consistency between the expression of regulators and targets. Regulators may thus be selected even if their expression was not measured, as in the case of Aft1p.
Notably as well, most regulators at the top of the ranking were not included in the biclusters and effectively yielded distinct patterns from their targets, namely Crz1p, Msn2p, Rpn4p, Sfp1p, and Yap1p for bicluster 39, and Abf1p, Leu3p, Met4p, Ste12p, and Yox1p for bicluster 151. For the second case study, we built a human regulatory network as before, applying a p-value cutoff to the binding affinities. We obtained biclusters and calculated the overrepresentation of Gene Ontology annotations following the same procedure used for the yeast dataset. Post-processing involved filtering out biclusters containing less than 50 genes or less than 5 time points, and sorting in descending order of the number of highly significant Gene Ontology terms. Some transcription factors were considered most relevant either at early time points (4h/8h) or at 16h (FEV and HNF1B). While the relevance on the first interval seems to be influenced by the upregulation of the genes encoding these transcription factors, the latter is more likely due to the drastic change exhibited by the target genes in the bicluster, as variations in relevance and expression were not consistent at those time points. Notably, FOXA1 is a known negative regulator of epithelial-to-mesenchymal transition, and all three transcription factors are involved in the cell differentiation, organ morphogenesis and development characteristic of EMT. Five transcription factors, IRF2, SRY, CREB1, NKX3-1 and NFIL3, appeared in the top 10 exclusively in rankings obtained for bicluster 5536. Genes IRF2 and SRY, related to cell proliferation and cell differentiation, respectively, were considered most relevant at the first and last time points of the bicluster time frame (4h/8h and 72h), eventually relating to before and after the cellular reprogramming during EMT. The remaining regulators, CREB1, NKX3-1 and NFIL3, exhibited a steady increase in relevance between 4h and 16h. This variation was inversely proportional to the changes observed in the expression level of the genes in the transcriptional module, which could indicate a repressor control exerted by the three factors upon these targets.
Functionally, the roles of CREB1, NKX3-1 and NFIL3 in the regulation of cell cycle, circadian rhythm and organism growth are consistent with the annotations yielded by the target genes and with the expression evidence of growth arrest experienced by the cells during EMT. In bicluster 4499, NHLH1 arose as a relevant player. This transcription factor possesses documented interactions with major regulators of EMT, such as TCF3, and with several genes encoding cysteine-rich proteins containing LIM domains, of which CSRP3 is probably the most relevant. CSRP3 is also involved in the regulation of cellular calcium ion concentrations affecting the cadherins, important mediators of cell-cell adhesion and cytoskeleton organization. Similarly to the case study in yeast, we obtained rankings of regulators for each of the five EMT-related transcriptional modules (biclusters) at every time point. In this case, the input for TFRank consisted of the expression levels measured for genes undergoing EMT. Available tools for the identification of regulatory modules can differ significantly in input data, definition of module, relationships within and between modules, and output. Systematic comparisons are thus either unfeasible, or likely to be performed in such terms that will favor a particular method to the detriment of the others. In this section, we compare Regulatory Snapshots with a recent contribution to regulatory network inference, namely Physical Module Networks (PMN). Simultaneous optimization of transcriptional control and response, performed by PMN and MN, seems theoretically preferable to the strategy of Regulatory Snapshots, which first groups genes into modules based exclusively on expression and then identifies regulators through integrated analysis. Nevertheless, Regulatory Snapshots showed very good performance with minimal guidance.
Its strength lies in its prior search for temporal expression patterns, which delivers more specific and functionally coherent modules per se than other available clustering approaches. Likewise, the PMN formulation restricts the configuration of the regulatory pathways underlying a particular transcriptional response. Typically a single path is selected per module, consisting of an indirect regulator linked by a physical interaction pathway to a direct regulator exerting transcriptional control upon the consistently expressed genes. One drawback of this scheme is that it ignores that gene response is more likely the result of a combined effect of multiple regulatory players and pathways than the isolated action of a given transcription factor. In essence, PMN has been shown to perform well using data previously isolated relative to a particular biological process. Not surprisingly, both methods lack full characterization of the dynamic nature of gene regulation. It is known that only a subset of the regulatory interactions in the network underlying a particular transcriptional response are in fact involved in the biological process under study, and that the group of active interactions changes over time, as more specific tasks occurring in the cell start and finish. Not only does this increase the complexity of the problem, but little or no large-scale experimental information exists on the dynamics of interactions. PMN regards the network as static and identifies the part which best describes the behavior of the genes at all time points. In this regard, PMN analysis outputs a single network topology, in which the temporal dimension is lost. Regulatory Snapshots performs an analysis per time point, generating a list of transcription factors ranked according to a measure of relevance of those regulators relative to the response observed at that time point.
In this context, we put forward a novel way to interpret dynamics and highlight the variation of transcriptional control over time. In a subsequent step, personalized ranking is applied to determine the most relevant regulators exerting control upon the genes in a module at each time point (TFRank). We used Regulatory Snapshots to study Saccharomyces cerevisiae's response to heat shock and human epithelial-to-mesenchymal transition. In both case studies, the targets in the regulatory modules were found to yield coherent transcriptional profiles and functional properties. Results further confirmed the successful identification of TFs known to participate in the regulation of the modules. Additional TFs unraveled by Regulatory Snapshots carried annotations consistent either with the biological process under study or with functional annotations enriched for the set of target genes. Some snapshots revealed coincident variations in the relevance of prominent TFs and the expression of their target genes in regulatory modules. In addition, we observed that the relevant TFs could be identified even though they did not exhibit expression coherence with their targets. Regulatory Snapshots thus proved effective in enabling temporal exploration of regulatory networks and suitable for highlighting their dynamic properties. In particular, the underlying ranking scores suggested an inherent ability to discern the primary role of a given gene at each time point, whether TF or target. Ultimately, the fact that results output by a largely automated approach with minimal guidance could be confirmed by prior knowledge supports the value of this integrative contribution to the study of regulatory networks over time, through the identification of regulatory modules using expression time series and regulatory associations. Several directions arise for future research.
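The per-time-point ranking step can be illustrated with a minimal personalized-PageRank-style sketch (a simplification in the spirit of TFRank; the toy network, damping value, and function names are assumptions, not the published implementation):

```python
import numpy as np

def personalized_rank(edges, n, prior, damping=0.85, n_iter=100):
    """Walk the reversed TF->target edges so that relevance flows from
    expression-weighted targets back to their regulators."""
    M = np.zeros((n, n))
    for tf, target in edges:
        M[tf, target] = 1.0          # reversed edge: target column -> TF row
    out = M.sum(axis=0)
    for v in range(n):
        if out[v] > 0:
            M[:, v] /= out[v]        # column-stochastic where possible
    r = prior.copy()
    for _ in range(n_iter):
        r = damping * (M @ r) + (1 - damping) * prior
        r /= r.sum()                 # dangling TF nodes leak mass; renormalize
    return r

# Toy network: nodes 0=TF1, 1=TF2, 2=geneA, 3=geneB; TF1 regulates both genes.
edges = [(0, 2), (0, 3), (1, 3)]
prior = np.array([0.0, 0.0, 0.5, 0.5])   # expression-derived target weights
scores = personalized_rank(edges, 4, prior)
```

Re-running this with a different `prior` at each time point (e.g., weights proportional to the expression change of each target at that time point) yields one regulator ranking per snapshot, which is the spirit of the approach described above.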
It is known that consistent expression profiles are not a sufficient guarantee of co-regulation."}
+{"text": "Here we argue that recent extensions to principal component analysis called STATIS and dual-STATIS can be used to determine the compromise on which classical techniques for data analysis, such as clustering and term over-enrichment, can be subsequently applied. In addition, we illustrate that STATIS and dual-STATIS facilitate interpretations of a publicly available transcriptomics data set capturing the time-resolved response of Arabidopsis thaliana to changing light and/or temperature conditions. We demonstrate that STATIS and dual-STATIS can be used not only to identify the components of a biological system whose behavior is similarly affected by the perturbation, but also to specify the extent to which each dimension of the data tables reflects the perturbation. These findings ultimately provide insights into the components and pathways which could be under tight control in plant systems. With the advent of high-throughput technologies for data acquisition from different components of a given biological system, generation of hypotheses and biological interpretations based on multivariate data sets become increasingly important. These technologies allow for simultaneous gathering of data from the same biological components under different perturbations, including genotypic variation and/or changes in conditions, resulting in so-called multiple data tables. Moreover, these data tables are obtained over a well-chosen time domain to capture the dynamics of the response of the biological system to the perturbation. The computational problem we address in this study is, first, to derive a single data table, referred to as a compromise, from the multiple data tables. The investigated environmental conditions are usually not independent in the sense that some, but not necessarily all, of the controlled parameters are varied in the process of generating a pair of data tables.
Therefore, there is an increasing need for the development and application of multivariate statistical techniques which account for the inherent dependence between data tables while capturing what is common, i.e., preserved, across them. High-throughput technologies are routinely applied to obtain a snapshot of plant systems operating under a given environmental condition. The resulting multivariate data sets, gathered from the same set of biological entities under various conditions, require the development of methods for simultaneous analysis of multiple data tables. The compromise is obtained from the eigenvalue decomposition of a specially constructed matrix. Since we consider the case where no supervised information is available about the gene labels, it is then possible to apply classical unsupervised learning techniques to the resulting compromise. The approach presented in this study may be regarded as an instance of the multi-way unsupervised learning problem, which requires decomposition of a multidimensional data table and the eigenvalue decomposition of the corresponding covariance matrix. Since their early application to microarray gene expression data, such analyses have had to contend with the number of genes being large relative to the number of samples \u2013 a setting typically arising in systems biology studies. In such a setting, the classical methods for covariance matrix estimation are highly unstable since the covariance matrix is likely singular. However, recent theoretical advances, some of which take advantage of the special types of data, facilitate more reliable estimates and allow their usage in conjunction with the methods applied in this study. In the experiment considered here, one or both environmental parameters were changed.
This resulted in the following seven (combinations of) environmental conditions: (i) 4\u00b0C and darkness (abbreviated as 4-D), (ii) 21\u00b0C and darkness (21-D), (iii) 32\u00b0C and darkness (32-D), (iv) 4\u00b0C and 85\u2009\u03bcEm\u22122s\u22121 (4-L), (v) 21\u00b0C and 75\u2009\u03bcEm\u22122s\u22121 (21-LL), (vi) 21\u00b0C and 300\u2009\u03bcEm\u22122s\u22121 (21-HL), and (vii) 32\u00b0C and 150\u2009\u03bcEm\u22122s\u22121 . Together with the set of plants kept at the original conditions (21-L), eight different conditions were considered, as illustrated in Figure. We consider a recently obtained transcriptomics data set from this experiment. As a starting point, we used the 15,089 genes analyzed in the original study after normalization and filtering of the microarray data using standard methods. The data set was subsequently filtered for genes which exhibit a fold change of at least two with respect to the first time point (0\u2009min, corresponding to the control condition) in at least two time points for any condition, and a coefficient of variation within the considered time domain of at least one. These are reasonable criteria to remove less informative, and possibly non-differentially expressed, genes. This strategy was used as there is only one replicate for each time point, precluding application of more rigorous statistics. The preprocessing step resulted in the identification of 2,276 genes which were used in the analysis based on STATIS and dual-STATIS. Consider K data tables X1, \u2026, XK corresponding to the investigated conditions, and let them be gathered as blocks of a matrix X\u2009=\u2009[X1|\u2026|XK]. Moreover, let each one of these tables include observations on n genes, corresponding to the rows, over t time points, given by the columns. Furthermore, we will denote by the entry in row i and column j of table l the expression of the ith gene in the jth time point of the lth condition. In addition, each data table is preprocessed, i.e., centered and normalized according to the recommendation of Abdi et al., where a\u2009=\u2009[\u03b111T|\u2026|\u03b1K1T].
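The gene-filtering criteria described above (fold change of at least two relative to the 0 min time point in at least two time points for some condition, and a coefficient of variation of at least one) can be sketched as follows; applying the CV criterion per condition is an assumption of this sketch, and the function name and toy values are illustrative:

```python
import numpy as np

def keep_gene(profiles, fc_cutoff=2.0, min_points=2, cv_cutoff=1.0):
    """profiles: condition -> 1-D expression series, first entry = 0 min.
    Returns True if the gene passes both filters (linear scale assumed)."""
    series = [np.asarray(p, dtype=float) for p in profiles.values()]
    fc_ok = any(
        np.sum((x[1:] >= fc_cutoff * x[0]) | (x[1:] <= x[0] / fc_cutoff)) >= min_points
        for x in series
    )
    cv_ok = any(x.std() / x.mean() >= cv_cutoff for x in series)
    return fc_ok and cv_ok

flat = {"21-L": [10.0, 10.0, 10.0, 10.0]}         # invariant: removed
responsive = {"4-D": [1.0, 10.0, 0.1, 0.2]}       # responsive: retained
```

Applying such a filter to all 15,089 genes would yield the reduced set carried into the STATIS and dual-STATIS analyses.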
The loadings can then be computed accordingly. In the second step, the generalized eigenvalue decomposition of the compromise S is obtained, and the compromise factor scores, F, are computed from S. The partial factor scores for each table i, 1\u2009\u2264\u2009i\u2009\u2264\u2009K, are obtained by projecting that table onto the compromise space, and the loadings of the variables from table i are given analogously. The contribution of the ith row to the component b is given by the ratio of the squared factor score to the corresponding eigenvalue, ctr_{i,b} = f_{i,b}^2 / \u03bb_b, where \u03bb_b is the bth largest eigenvalue from the decomposition of the compromise S and f_{i,b} denotes the factor score of the ith observation for the bth dimension. One can observe that the low-light condition, 21-LL, as well as 21-D, both at control temperature, have the strongest effect in the derivation of the compromise. This indicates that the transcriptomics changes under normal temperature regimes coupled with darkness/low-light conditions are characteristic for the entire data set, thus superimposing the changes in the remaining combinations of conditions. Moreover, the light stress (21-HL) and cold stress (4-L) conditions have the lowest influence within the complete dataset, reflected by their low weights. Similar to classical PCA, analysis based on STATIS allows describing the contributions of tables, variables, and observations. Subsequent investigation of the table interstructure, i.e., the contribution of the tables to the first, second, and third principal components, supports this view. Furthermore, the first principal component in the interstructure characterizes the dominant transcriptional response within the complete dataset, as it corresponds to the largest eigenvalue, which is proportional to the variance explained. The strong effect on the covariance structure of transcripts in all datasets under the control temperature condition can be attributed to the experimental design; namely, out of eight environmental conditions, four were conducted at different light regimes and the same temperature of 21\u00b0C.
Moreover, the second strongest effect, corresponding to the contributions to the second principal component, is the contrast of cold and heat stress, while the third strongest influence is ascribed to the diurnal cycle. The pairwise congruence between tables, quantified by the RV coefficient, can be used to further assess the similarities between the tables. Figure shows these coefficients as a heat map (heatmap.2 in the gplots R package). It becomes apparent that the type of conditions, in all considered combinations of perturbations, has an effect on the clustering: primarily, similar temperature conditions cluster together (as derived by the cluster dendrogram) in comparison to conditions in the same light regime. Under control temperature, the secondary effect of light intensity can be seen by the co-clustering of the low-light/dark and high-light/normal light conditions. Apart from the contributions of individual tables to the compromise, the analysis by STATIS allows the characterization of each of the eight experimental conditions independently of the time factor, which is not possible from the loadings of a classical PCA applied to the same dataset. In summary, these results demonstrate the advantages of the multi-table analysis based on STATIS as compared to classical PCA. In classical PCA, the distinct dimensions of the investigated data tables, corresponding to experimental factors, cannot be separately investigated. The reason is that the individual table structure gets lost, since each combination of environmental condition and time point, e.g., 21-L 60\u2009min, serves as a variable. Contributions are shown on the first (x-axis) and second (y-axis) principal components, and arrows between two successive time points illustrate the sequential progression of the contributions over time within the compromise space defined by the first two principal components.
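The pairwise congruence coefficient used here (commonly known as the RV coefficient) can be computed as the cosine similarity between the cross-product matrices of two tables sharing the same rows; this is a generic sketch under the assumption that the tables have already been centered and normalized, not the exact code used in the study:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two preprocessed tables with the same rows:
    cosine similarity between their cross-product (genes x genes) matrices."""
    Sx = X @ X.T
    Sy = Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

# Toy table: 3 genes x 2 time points, columns centered.
X = np.array([[1.0, -1.0], [0.5, 0.5], [-1.5, 0.5]])
Y = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, -1.0]])
rv_xy = rv_coefficient(X, Y)
```

Evaluating the coefficient for all pairs of condition tables produces exactly the kind of symmetric similarity matrix that is rendered as a clustered heat map in the text.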
While all the trajectories of time points >0\u2009min show substantial contribution to either the first or second principal component \u2013 clearly illustrating the ongoing transcriptional adjustments of the Arabidopsis transcriptome \u2013 three observations are of particular interest. The analysis of the contributions of variables reflects the influence of the particular time point with respect to the overall transcriptional adjustments of Arabidopsis in response to the applied stresses. (1) Even under the control condition, 21-L, we observed strong changes and varying influence on the compromise of the similarity structure in expression levels; one example is the expression of the central oscillator gene TOC1. (2) The contribution of the final measurement is similar to that of the early time points, indicating a conserved covariance structure of Arabidopsis' transcript profiles in the beginning and final measurements. As the last time point almost comprises a complete diurnal cycle, this could be explained by the circadian rhythm, which is self-sustained despite the continuous light (McClung). (3) Temperature (y-axis) has an effect on the early-mid time points of all three darkness conditions: the higher the temperature, the faster the early-mid time points exhibit a positive contribution on the second principal component. Particularly, the 1280\u2009min time point in cold, 4-D, is the only time point exhibiting a positive contribution, which again can be attributed to the previously mentioned attenuation effect of low temperature, whereas the exclusive positive contribution of time points in heat stress and darkness, 32-D, displays the effects of a synergistic stress response. Finally, control temperature at darkness, 21-D, displays an intermediate profile.
These observations corroborate the previous findings that darkness clearly leads to dramatic differences in the responses to temperature, but these are not as striking as the effect of temperature on the response to darkness. STATIS allows not only the analysis of the contributions of tables and variables, but also the analysis of the contributions of observations to the principal components. In particular, for the purpose of visualization, contributions of individual observations, rows in the matrix (cf. Materials and Methods), are used to identify significant contributions at a significance level of 1%. The genes found to contribute significantly to either the first or second principal component showed an over-enrichment of bins involved in sugar metabolism, for instance, starch synthesis and degradation (darkness) as a result of nutrient limitation, e.g., bin 2.1.2 \u201cmajor CHO metabolism.synthesis.starch\u201d and 2.2.2 \u201cmajor CHO metabolism.degradation.starch.\u201d The metabolism of minor sugars, such as members of the raffinose (cold) family and trehalose (darkness), which are considered to act as signals under stress or energy deprivation, is also affected (bins 3.1 and 3.2). High-light (21-HL) results in over-enrichment of bins involved in \u201cPS.photorespiration\u201d and detoxification, such as \u201cmisc.glutathione.\u201d Following this response, as a result of both diurnal changes and light/dark contrasts, the carbon/nitrogen (C/N) metabolism is clearly affected.
This response is reflected by the over-enrichment of MapMan bins associated with photosynthesis-related processes, such as \u201cAmino acid metabolism\u201d and \u201cProtein degradation.\u201d Next to the C/N balance disruption, under limited nutritional conditions, like darkness, remobilization of carbon and nitrogen can occur through the secondary metabolism, as reflected by the over-enrichment of the bins involved in \u201csecondary metabolism.phenylpropanoid,\u201d \u201csecondary metabolism.glucosinolates,\u201d and \u201csecondary metabolism.flavonoids.\u201d Additionally, bins 14.15 \u201cS-assimilation.AKN\u201d and \u201cmisc.nitrilases\u201d are over-enriched as a result of early changes in the abundance of the metabolite O-acetylserine (OAS), which is in accordance with existing results analyzing changes of OAS levels within this data set, as well as providing further evidence that OAS is a signaling molecule aside from its role in sulfur assimilation. The previously performed three steps of a STATIS analysis can be complemented by a dual-STATIS analysis. The dual-STATIS analysis yielded high RV coefficients (within the range of 0.83\u20130.94), indicating that the covariance of time points is similar under all experimental conditions and suggesting a similar temporal progression of transcriptional regulation. Moreover, the contribution of time points, obtained by projecting the corresponding rows onto the principal components of the compromise, displays an almost perfect cycle. With the exception of some intermediate time points (\u223c120\u2013280\u2009min) as well as the last time point (1280\u2009min), all observations contribute equally to either the first or second principal component, as indicated by their equidistant displacement from the origin. Such a representation could be explained by a gradually progressing system under tight control of the diurnal rhythm.
Interestingly, these results contrast with the anticipated stronger contributions of early time points as a result of fast system-wide adaptations to environmental changes. The obtained findings pointed out that the control condition, 21-L, exhibits a considerable effect with respect to the contribution of conditions as well as time points. In addition, early time points, e.g., 5, 10, 20, 40, and 60\u2009min, are grouped together in a non-sequential manner, further separated from successive time points, indicating the system's response to perturbation. This is additionally illustrated by considering the length of the trajectories between early time points in Figure. By performing an analysis based on dual-STATIS on the transposed and normalized data set, in which observations correspond to time points, bins such as \u201cmajor CHO metabolism.synthesis.starch.starch synthase,\u201d \u201cmajor CHO metabolism.synthesis.starch.starch branching,\u201d and \u201cmajor CHO metabolism.degradation.sucrose.hexokinase\u201d characterize the response in greater detail. Here, the effects of darkness and high-light conditions reciprocally lead to degradation of starch stored transiently in the chloroplasts or to an increase of its synthesis as a primary product of photosynthesis in leaves. Biological systems are increasingly profiled under internal and/or external perturbations. Descriptive and inferential statistical analysis of such multidimensional data sets, usually including numbers of biochemical entities comparatively larger than the number of observations, necessitates the development and application of novel methods. The principal requirement for such methods is that they provide the possibility for simultaneous analysis of all or some of the multidimensional data sets, while inferring the biochemical entities which contribute most to the resulting observations.
We note that the compromise is derived as a linear combination of the covariance matrices corresponding to the individual data sets, with coefficients obtained from the eigenvalue decomposition of a special matrix capturing the congruence for all pairs of data sets. Moreover, since STATIS can be seen as a form of generalized SVD, of which PCA is a very special instance, the contribution of each row and/or column of each data set to the derived compromise can be investigated through several projections. In addition, classical statistical techniques, such as bootstrapping and jackknifing, can be used to infer which of the contributions are statistically significant. The methods used in this study, namely STATIS and dual-STATIS, derive a single table, referred to as a compromise, which captures information common to the investigated multiple data sets, here obtained from Arabidopsis thaliana under combinations of growth conditions. Analysis based on the coefficient of congruence, i.e., the RV coefficient, for a pair of tables is in agreement with established biological knowledge regarding the influence of mild vs. strong perturbations on the level of transcripts, with temperature perturbation having a larger effect on the similarity of the data sets in comparison to the modulation of light. In addition, our detailed examination of time points indicates circadian/diurnal effects in light conditions, while a starvation response is pronounced in dark conditions. Finally, we demonstrated that dual-STATIS can be integrated with GSEA to determine the gene functions most affected by the considered conditions by using projections onto the compromise, itself capturing the similarities across the data sets.
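The construction described above, a compromise formed as a weighted sum of per-table cross-product matrices with weights taken from the leading eigenvector of the pairwise-congruence matrix, can be sketched as follows (a simplification: STATIS additionally applies observation masses and normalization steps omitted here, and the function name is illustrative):

```python
import numpy as np

def statis_compromise(tables):
    """Weights = leading eigenvector of the pairwise RV matrix, normalized
    to sum to one; compromise = weighted sum of cross-product matrices."""
    S = [X @ X.T for X in tables]
    K = len(S)
    C = np.empty((K, K))
    for i in range(K):
        for j in range(K):
            C[i, j] = np.trace(S[i] @ S[j]) / np.sqrt(
                np.trace(S[i] @ S[i]) * np.trace(S[j] @ S[j]))
    _, vecs = np.linalg.eigh(C)       # eigh: ascending order, last column leads
    alpha = np.abs(vecs[:, -1])       # Perron eigenvector is sign-ambiguous
    alpha /= alpha.sum()
    return sum(a * Sk for a, Sk in zip(alpha, S)), alpha

tables = [np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])] * 2  # two identical toy tables
compromise, alpha = statis_compromise(tables)
```

For identical tables the weights are uniform and the compromise coincides with each table's cross-product matrix; for heterogeneous tables, tables more congruent with the rest receive larger weights, which is exactly the behavior discussed for the 21-LL and 21-D conditions above.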
As with PCA, the choice of which components are to be used plays an important role in the interpretation of the data. Here we illustrated the usage of STATIS in the analysis of time-resolved transcriptomics data sets obtained from Arabidopsis thaliana. Our novel analysis of transcriptomics data sets based on STATIS and dual-STATIS raises some issues which could likely be addressed through modification of the applied methods. For instance, the usage of the covariance matrix does not allow consideration of the time domain implicitly present in time-resolved data, nor is it suitable for categorical variables. To this end, an extension of STATIS, called DISTATIS, may facilitate the treatment of other distance measures in creating the compromise, which will be explored in a future study. Finally, one may consider extensions of the illustrated methods so as to allow generation of a compromise data table. Clearly, moving away from a compromise on the level of the similarity structure within each data set may bring the modified approach closer to multi-way data analysis. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Pseudoperonospora cubensis is the causal agent of downy mildew on cucurbits, and at present, no effective resistance to this pathogen is available in cultivated cucumber (Cucumis sativus). To better understand the host response to a virulent pathogen, we performed expression profiling throughout a time course of a compatible interaction using whole transcriptome sequencing. As described herein, we were able to detect the expression of 15,286 cucumber genes, of which 14,476 were expressed throughout the infection process from 1 day post-inoculation (dpi) to 8 dpi. A large number of genes, 1,612 to 3,286, were differentially expressed in pair-wise comparisons between time points. We observed the rapid induction of key defense-related genes, including catalases, chitinases, lipoxygenases, peroxidases, and protease inhibitors within 1 dpi, suggesting detection of the pathogen by the host. Co-expression network analyses revealed transcriptional networks with distinct patterns of expression, including down-regulation at 2 dpi of known defense response genes, suggesting coordinated suppression of host responses by the pathogen. Comparative analyses of cucumber gene expression patterns with those of orthologous Arabidopsis thaliana genes following challenge with Hyaloperonospora arabidopsidis revealed correlated expression patterns of single copy orthologs, suggesting that these two dicot hosts have similar transcriptional responses to related pathogens. In total, the work described herein presents an in-depth analysis of the interplay between host susceptibility and pathogen virulence in an agriculturally important pathosystem. Cucumber (Cucumis sativus L.) is an economically important vegetable crop cultivated in over 80 countries, with more than 66 million tons produced annually for both fresh use and processing (http://faostats.fao.org).
Cucumber has been utilized extensively as a model system to study sex determination. Cucumber has seven chromosomes, whereas other Cucumis spp., such as melon (Cucumis melo), have 12 chromosomes, making interspecific breeding difficult, if not impossible. As such, advances in breeding for important agronomic traits such as increased yield, fruit quality, and disease resistance are slow. Cucumber production is hindered by diseases caused by bacterial (e.g., Pseudomonas syringae pv. lachrymans), viral, fungal, and oomycete pathogens, among them Ps. cubensis, the causal agent of cucurbit downy mildew, which threatens cucumber production in nearly 80 countries and causes severe economic losses. Ps. cubensis is an obligate, biotrophic oomycete pathogen with a host range limited to the Cucurbitaceae, and has been responsible for substantial economic losses in recent years. Resistance has long relied on the dm1 gene, which has been incorporated into most commercial cucumber germplasm; however, the identity of the dm1 locus, as well as its functional role in resistance signaling, remains undefined. In addition to widespread incorporation of dm1, breeding for Ps. cubensis resistance has focused mainly on genes from melon and Cucumis hardwickii. Large-scale screening trials to identify tolerant germplasm are in progress, but have yet to identify a source of complete resistance to Ps. cubensis. While genetically conferred host resistance is the ideal means of disease control in crop species, it has become ineffective in controlling cucurbit downy mildew, particularly in the U.S., following the introduction of a new, more virulent pathotype of Ps. cubensis. Here, we used whole transcriptome sequencing to identify genes, pathways, and systems that are altered during a compatible interaction. Using this technology, we performed deep profiling of both the host and pathogen transcriptomes throughout the infection time course, continuing through 8 dpi, with up to 3,286 genes differentially expressed between time points.
Comparative analyses revealed correlated gene expression patterns in cucumber and Arabidopsis thaliana leaves infected with downy mildew, suggesting orthologous host responses in these two dicotyledonous hosts. Through co-expression network analyses, modules of temporal-specific transcriptional networks were constructed that provide a framework to connect transcription factors with defense response genes. Next generation sequencing of the transcriptome (mRNA-Seq) permits deep, robust assessments of transcript abundances and transcript structure. Disease progression in C. sativus cv. \u2018Vlaspik\u2019 was monitored at 1, 2, 3, 4, 6, and 8 dpi. Symptoms of Ps. cubensis infection were apparent at 1 dpi, in the form of water soaking on the abaxial leaf surface at the inoculation site. Sequencing reads were derived from both the host (C. sativus) and the pathogen (Ps. cubensis). At the early time points, nearly all of the reads obtained were of host origin, which is consistent with our microscopy analysis revealing limited pathogen biomass. However, as we are surveying a susceptible interaction, pathogen biomass increases throughout the time course and, consequently, pathogen transcripts increase in the total read pool in the later time points. We also evaluated how thoroughly our deep read coverage of the libraries sampled the C. sativus transcriptome. Randomly selected subsets of reads, 5 to 30 million, from the total read pool were used to evaluate the effect of sampling depth on gene expression detection. The simulation demonstrates a positive relationship between sampling depth and numbers of expressed genes at lower sequencing depths (5 to 20 million reads). With correlation coefficients (R2) ranging from 0.97 to 0.98 for biological replicates of each time point, nearly all genes fell along the diagonal of plots, indicating no major variation and providing evidence for high reproducibility of biological replicates. Over the infection period, a total of 14,476 C. sativus genes were expressed. Previous expression studies in cucurbits have examined Cucumis melo (melon) in response to abiotic stress, the C. sativus line \u2018IL57\u2019 with a high level of resistance to downy mildew, and the response of C. sativus to early infection stages of Ps. cubensis, including zoospore encystment, appressorium formation, and penetration via stomata. Interestingly, C. sativus defense-related genes are expressed within 1 dpi of Ps. cubensis inoculation, which, based on our correlation analysis, likely corresponds to the similar symptoms observed. Correlation values for time point comparisons ranged from 0.78 to 0.93, with tight clustering readily apparent, revealing patterns that highlight the extent of transcriptional diversity underlying early (1 dpi), intermediate (2, 3, and 4 dpi), and advanced (6 and 8 dpi) stages of disease progression and the corresponding responses in host gene expression; from these data, mechanisms through which pathogenicity occurs may be inferred. To compare the responses of C. sativus and A. thaliana (the latter challenged with H. arabidopsidis), we identified single copy orthologous genes in both plant genomes and analyzed their expression patterns. A total of 7,396 clusters of single copy genes from both species were identified by clustering 23,248 and 27,416 representative protein coding genes from C. sativus and A. thaliana, respectively. Data from a previous microarray-based expression profiling of the A. thaliana-H. arabidopsidis interaction was compared with our mRNA-Seq-based expression data. The H. arabidopsidis infection time points were 0, 0.5, 2, 4, and 6 dpi, similar to the 1\u20138 dpi time points assayed in this study. Spearman rank correlation coefficients (SCCs) of log2 expression values were calculated between the single copy orthologs at all time points in the two datasets; between 2,136 and 3,446 gene pairs were included in the pair-wise comparisons. Among the six comparisons between similar time points, the SCC values ranged from 0.10 to 0.72. One gene of note, associated with p-coumaric acid, was shown to be rapidly induced in C. sativus in response to abiotic stress. Using Weighted Gene Correlation Network Analysis (WGCNA), modules of co-expressed genes were identified; modules F, G, H, I, J, and K all have discrete time points where genes are up-regulated compared to the other sampled time points. In summary, the work described herein represents the first genome-scale analysis of the cucumber-downy mildew interaction, in which we catalog gene expression throughout an 8 day infection period and identify differentially expressed genes that could be correlated with pathogen growth and development in planta. With expression profiles for nearly 15,000 genes during a compatible interaction, we have new insight into molecular events at the host-pathogen interface, including a suite of defense response-related genes that are down-regulated early upon infection and transcriptional networks that respond in a temporal manner throughout the infection cycle. Most intriguingly, these networks include transcription factors and genes of no known function, which may have a role in the host-pathogen interaction. C. sativus cv. \u2018Vlaspik\u2019 was grown in growth chambers maintained at 22\u00b0C with a 12 h light/dark photoperiod. Ps. cubensis MSU-1 was maintained as previously described, and plants were inoculated with a 10^5 sporangia/ml solution. After inoculation, plants were kept at 100% relative humidity for 24 hours in the dark. Plants were returned to growth chambers for disease progression. Samples from two biological replicates were collected at 1, 2, 3, 4, 6, and 8 dpi at the site of inoculation using a #3 cork borer. Additionally, a mock-inoculated sample (sterile distilled H2O), which was inoculated as described above and kept at 100% relative humidity for 24 hours in the dark, was collected at 1 dpi. Samples were frozen immediately in liquid nitrogen and stored at \u221280\u00b0C until use. Total RNA was extracted and treated with DNase per the manufacturer's instructions. RNA concentration and quality were determined using the Bioanalyzer 2100.
The mRNA-Seq sample preparation was done using the Illumina mRNA-Seq kit according to the manufacturer's protocol. Parallel sequencing was performed using an Illumina Genome Analyzer II at the Research Technology Support Facility (RTSF) at Michigan State University. Each library within a biological experiment was barcoded (six different barcodes for the 6 time-points), pooled, and run on multiple lanes. Two biological replicates of each time-point were sequenced multiple times and single-end reads between 35 and 42 bp were generated. Reads from both biological replicates were pooled for determining expression abundances. The mock-inoculated sample library was made as described above, but not barcoded and run in a single lane. The C. sativus genome annotation (version 2) from the Cucurbit Genomics Database (http://www.icugi.org/cgi-bin/ICuGI/misc/download.cgi) was used, in which a representative isoform, the gene model with the longest CDS, was used to estimate expression of each gene; a total of 23,248 of 25,600 gene models were used. All other isoforms were discarded from the annotation set. The minimum and maximum intron length was set to 5 and 50,000 bp, respectively; all other parameters were set to the default values. mRNA-Seq reads obtained from Illumina Pipeline version 1.3 were quality evaluated on the Illumina purity filter, percent low quality reads, and distribution of phred-like scores at each cycle. Reads were deposited in the National Center for Biotechnology Information Sequence Read Archive under accession number SRP009350. Reads in FASTQ format were aligned to the C. sativus genome. Correlations were calculated in R (http://cran.r-project.org/), in which all log2 FPKM values less than zero were set to zero. Only tests significant at p = 0.05 are shown. The correlation values were clustered with hierarchical clustering using a Pearson correlation distance metric with average linkage and depicted as a heat map. 
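The correlation step described here was done in R; an equivalent sketch in Python (using scipy, with made-up expression values standing in for the real single-copy ortholog data) looks like this:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical expression values for six single-copy ortholog pairs at one
# matched time point (C. sativus mRNA-Seq FPKM vs. A. thaliana intensities).
cucumber = np.log2(np.array([12.0, 3.5, 0.2, 40.0, 7.7, 1.1]))
arabidopsis = np.log2(np.array([9.0, 2.1, 0.5, 55.0, 6.3, 0.9]))

# As in the text, log2 values below zero are set to zero.
cucumber = np.clip(cucumber, 0.0, None)
arabidopsis = np.clip(arabidopsis, 0.0, None)

# Spearman rank correlation coefficient (SCC) and its p-value.
rho, pval = spearmanr(cucumber, arabidopsis)
if pval <= 0.05:  # only significant tests are reported
    print(f"SCC = {rho:.2f} (p = {pval:.4f})")
```

In the actual analysis this would be repeated for each of the pair-wise time point comparisons, keeping only tests significant at p = 0.05.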
For each node, bootstrap support values were calculated from 1000 replicates using Multiple Experiment Viewer Software (MeV) v4.5. Normalized gene expression levels were calculated using Cufflinks v0.9.3. Comparative analyses of host gene expression responses during a compatible interaction with the model species A. thaliana utilized microarray-based gene expression data from an H. arabidopsidis-A. thaliana time course experiment, which profiled A. thaliana genes expressed in response to H. arabidopsidis infection at 0, 0.5, 2, 4, and 6 dpi using the ATH1 Affymetrix platform. The probe intensity values were downloaded from Gene Expression Omnibus. For the mRNA-Seq to microarray comparative analysis only single copy orthologous genes were considered for further analyses. The FPKM and intensity values were log2 transformed, and Spearman rank correlations of the single copy orthologs in both hosts were calculated in R (http://cran.r-project.org/). Only tests significant at p = 0.05 are shown. Differential expression analysis used the Cuffdiff program within Cufflinks version 0.9.3. Functional annotations for all C. sativus genes were generated from BLAST searches of the UniProt databases (Uniref100) and Pfam domain searches (ftp://ftp.sanger.ac.uk/pub/databases/Pfam/Tools/). C. sativus sequences were functionally annotated based on the best possible UniRef sequence match using a minimum E value cutoff of 1E-5. If there was no UniRef sequence match, functional annotations were assigned using Pfam domains. Transcription factors were annotated based on PFAM domain assignment. WGCNA analyses were run in R (http://cran.r-project.org/). All gene FPKM expression values were log2 transformed and any transformed FPKM value less than 1 was converted to zero. Genes without variation across the mock sample and 6 time points were filtered out using a Coefficient of Variance (CV = \u03c3/\u03bc) cutoff of 0.4. The \u03b2 and treecut parameters for WGCNA were 13 and 0.9, respectively. All other parameters were used with their default values. 
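The pre-WGCNA variance filter can be sketched as follows (hypothetical FPKM rows; the small offset added before the log2 transform is our own assumption to guard against log2(0), not something stated in the text):

```python
import numpy as np

def filter_low_variation(fpkm, cv_cutoff=0.4):
    """Log2-transform FPKM values, zero out transformed values below 1,
    then drop genes whose Coefficient of Variance (CV = sigma/mu) across
    the mock sample and time points falls below the cutoff."""
    logged = np.log2(fpkm + 1e-9)  # tiny offset guards log2(0); our assumption
    logged[logged < 1] = 0.0
    mu = logged.mean(axis=1)
    sigma = logged.std(axis=1)
    cv = np.divide(sigma, mu, out=np.zeros_like(mu), where=mu > 0)
    return logged[cv >= cv_cutoff]

# Rows: genes; columns: mock sample plus 6 time points (hypothetical values).
fpkm = np.array([
    [5.0, 5.1, 4.9, 5.0, 5.2, 5.0, 4.8],     # flat gene: filtered out
    [1.0, 2.0, 8.0, 32.0, 64.0, 16.0, 4.0],  # responsive gene: kept
])
kept = filter_low_variation(fpkm)
print(kept.shape[0], "gene(s) retained")  # prints: 1 gene(s) retained
```

Only the genes passing this filter would then be handed to WGCNA for module detection.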
Eigengenes were calculated for each gene module. Gene modules of highly correlated genes were identified using the WGCNA method. Figure S1: Concordance of expression values in two biological replicates of Cucumis sativus during infection by Pseudoperonospora cubensis. Reads from different time points were mapped to the C. sativus genome using Bowtie version 0.12.5 with the C. sativus genome annotations. For each time point, log2 transformed FPKM values of an equal number of genes from both replicates are plotted. R2, correlation coefficient; dpi, days post-inoculation. (PDF) Figure S2: Trend plots for all 11 modules. All 11 modules generated using WGCNA are shown (Modules A through K). The number of genes in each module is shown in parentheses. (PDF) Table S1: List of Cucumis sativus genes expressed following infection with Pseudoperonospora cubensis. Gene ID, expression values at different time points, and their functional annotation are shown. (XLS) Table S2: List of the top 20 genes highly expressed at different time points and control, their FPKM (fragments per kilobase pair of exon model per million fragments mapped) values and putative function as determined by BLASTX searches against UniRef100 and Pfam protein family and domain search. (XLS) Table S3: Spearman rank correlations of expression values between single copy orthologous genes in Cucumis sativus and Arabidopsis thaliana following infection with a compatible oomycete pathogen. (XLS) Table S4: List of genes differentially expressed at different time points along with their expression values (FPKM) and functional annotation. 
Differential expression analysis was conducted using the cuffdiff program in Cufflinks version 0.9.3. (XLS) Table S5: List of modules (Module A through K) with their corresponding module name, gene ID, and putative function as determined by BLASTX searches against UniRef100 and transcription factor-related Pfam domains. Transcription factor-related Pfam domains were identified using Pfam domain assignment. Expression values as represented by fragments per kilobase pair of exon model per million fragments mapped (FPKM) were calculated using Cufflinks. (XLS)"}
+{"text": "Dynamic Bayesian Networks (DBNs) are widely used in regulatory network structure inference with gene expression data. Current methods assume that the underlying stochastic processes that generate the gene expression data are stationary. The assumption is not realistic in certain applications where the intrinsic regulatory networks are subject to changes for adapting to internal or external stimuli.In this paper we investigate a novel non-stationary DBNs method with a potential regulator detection technique and a flexible lag choosing mechanism. We apply the approach to gene regulatory network inference on three non-stationary time series data sets. For the Macrophages and Arabidopsis data sets with the reference networks, our method shows better network structure prediction accuracy. For the Drosophila data set, our approach converges faster and shows a better prediction accuracy on transition times. In addition, our reconstructed regulatory networks on the Drosophila data not only share many similarities with the predictions of the work of other researchers but also provide much new structural information for further investigation.Compared with recently proposed non-stationary DBNs methods, our approach has better structure prediction accuracy. By detecting potential regulators, our method reduces the size of the search space, and hence may speed up the convergence of MCMC sampling. Recently, non-stationary Bayesian network models have attracted significant research interest in modeling gene expression data. In non-stationary Bayesian networks, we assume that the underlying stochastic process that generates the gene expression data may change over time. Non-stationary Bayesian networks have an advantage over conventional methods in applications where the intrinsic regulatory networks are subject to changes for adapting to internal or external stimuli. 
For example, gene expression profiles may go through dramatic changes in different development stages, or in response to internal or external stimuli. Recent work on non-stationary Bayesian networks can be found in the literature; some previous approaches fix \u03c4 = 1, which leads to a relatively low accuracy of prediction on network re-construction. With well-defined probabilistic semantics and the capability to handle hidden variables, Dynamic Bayesian Networks are an attractive framework for this problem; the early work of applying BNs to analyzing expression data can also be found in the literature. Bayesian networks (BNs) are a special case of probabilistic graphic models. A static BN is defined by a directed acyclic graph G and a complete joint probability distribution of its nodes P(X). The graph G = {X, E} contains a set of variables X = {X1, \u2026, Xn}, and a set of directed edges E, defining the causal relations between variables. With a directed acyclic graph, the joint distribution of random variables X = {X1, \u2026, Xn} is decomposed as P(X) = \u220fi P(Xi|\u03a0i), where \u03a0i are the parents of the node (variable) Xi. The topology of Bayesian networks must be a directed acyclic graph and hence cannot be used to model the case where two genes may be regulators of each other. As an extension of BNs to model time series data, Dynamic Bayesian Networks (DBNs) lift the limitation of the directed acyclic graph by incorporating time in constructing Bayesian networks. Given an observed time series data D spanning T time points, the structure learning problem of DBNs is equal to maximizing the posterior probability of the network structure G. By Bayes\u2019 rule, this posterior is proportional to the product of likelihood and prior, P(G|D) \u221d P(D|G)P(G) (Equation 1). The current application of DBNs to gene expression data assumes that the underlying stochastic process generating the data is stationary. Here we provide a new approach to capture the structural dynamics of non-stationary data. We assume the time series gene expression profile is subdivided into m segments; in each segment, there is one graph Gi : 1 \u2264 i \u2264 m that dominates the segment. 
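The factorisation of the joint distribution over parent sets can be illustrated with a toy three-gene network (X1 regulating X2 and X3); all probability values below are made up for the example:

```python
# Toy illustration of the BN factorisation P(X) = prod_i P(Xi | Pi_i)
# for binary gene states. Parents: Pi_2 = Pi_3 = {X1}; Pi_1 is empty.
p_x1 = {0: 0.6, 1: 0.4}                # P(X1)
p_x2_given_x1 = {0: {0: 0.9, 1: 0.1},  # P(X2 | X1)
                 1: {0: 0.3, 1: 0.7}}
p_x3_given_x1 = {0: {0: 0.8, 1: 0.2},  # P(X3 | X1)
                 1: {0: 0.5, 1: 0.5}}

def joint(x1, x2, x3):
    # One factor per node, each conditioned only on its parents.
    return p_x1[x1] * p_x2_given_x1[x1][x2] * p_x3_given_x1[x1][x3]

# The factorised joint still sums to one over all 2^3 states.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 10))  # prints 1.0
```

Structure learning scores candidate graphs by how well such factorisations explain the observed data, which is what the posterior in Equation 1 formalises.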
Given a sequence of network structures GT = (G1, \u2026, Gm), the posterior probability in Equation 1 is replaced by Equation 2. A further model parameter is \u03c4, the time delay between causes and effects in the time series data. Most previous work set \u03c4 = 1 for modeling a first-order Markov chain. However, evidence shows that a higher-order Markov chain might better model gene expression data and biological networks; for instance, the relationships our method recovers for genes such as msp-300 and mef2 are consistent with models proposed in earlier work. In this paper we introduced a new non-stationary DBNs method and applied our approach on three time series microarray gene expression data sets. Our new DBNs method uses a systematic way to determine potential regulators and takes a flexible lag choosing mechanism. Our experimental study demonstrated that compared with recently proposed non-stationary DBNs methods, our approach has better structure prediction accuracy. By detecting potential regulators, our method reduces the size of the search space, and hence may speed up the convergence of MCMC sampling.The authors declare that they have no competing interests.YJ developed methods, implemented the software, and drafted the manuscript. JH was responsible for all aspects of the project, and helped revise the manuscript."}
+{"text": "It bridges the gap between high-resolution and high-throughput image processing methods, enabling us to quantify graded expression patterns along the antero-posterior axis of the embryo in an efficient and straightforward manner. Our method is based on a robust enzymatic (colorimetric) in situ hybridisation protocol and rapid data acquisition through wide-field microscopy. Data processing consists of image segmentation, profile extraction, and determination of expression domain boundary positions using a spline approximation. It results in sets of measured boundaries sorted by gene and developmental time point, which are analysed in terms of expression variability or spatio-temporal dynamics. Our method yields integrated time series of spatial gene expression, which can be used to reverse-engineer developmental gene regulatory networks across species. It is easily adaptable to other processes and species, enabling the in silico reconstitution of gene regulatory networks in a wide range of developmental contexts. Understanding the function and evolution of developmental regulatory networks requires the characterisation and quantification of spatio-temporal gene expression patterns across a range of systems and species. However, most high-throughput methods to measure the dynamics of gene expression do not preserve the detailed spatial information needed in this context. For this reason, quantification methods based on image bioinformatics have become increasingly important over the past few years. Most available approaches in this field either focus on the detailed and accurate quantification of a small set of gene expression patterns, or attempt high-throughput analysis of spatial expression through binary pattern extraction and large-scale analysis of the resulting datasets. 
Here we present a robust, \u201cmedium-throughput\u201d pipeline to process in situ hybridisation data. In order to achieve this, we need to map and compare spatio-temporal patterns of gene expression across different species. With the advent of high-throughput methodology, the scale at which we can generate expression data has increased dramatically. RNA-seq, DNA microarrays, and quantitative PCR are among the best known methods used for this purpose. However, none of these \u2018omics\u2019 approaches provides detailed spatial information, which is crucial in this context. Therefore, quantitative techniques based on image bioinformatics are required. We use whole mount in situ hybridisation (WMISH) to quantify the expression patterns of segmentation genes in early embryos of different fly species (Diptera). These genes form a regulatory network, which determines the basic body plan of the animal by creating a segmental pre-pattern of periodic gene expression along the antero-posterior (A\u2013P) embryonic axis. Various methods have been developed to process and analyse images that result from ISH (or antibody staining) experiments in embryos of the vinegar fly, Drosophila melanogaster. They range from the analysis of high-resolution data based on fluorescent staining protocols for a relatively small set of genes, to high-throughput analyses of binary expression patterns. In this paper, we present our own image processing and quantification pipeline, which may be characterised as a \u201cmedium-throughput\u201d technique. It is designed for quantification of spatial expression data from a small set of genes, yet for multiple species. Compared to the high-resolution methods mentioned above, it increases robustness and versatility of the experimental protocol by using enzymatic (colorimetric) instead of fluorescent techniques (the latter being difficult to apply in non-model organisms). The speed of image acquisition and data quantification is also increased by using wide-field microscopy combined with a simplified and efficient processing pipeline. 
Compared to the high-throughput methods mentioned above, our method allows for the measurement of graded spatio-temporal expression profiles along the A\u2013P axis, rather than \u2018on/off\u2019 characterization and classification of 2D expression patterns on the surface of the embryo. Measurement of graded expression levels is crucial for our attempts at reverse-engineering pattern forming networks. A schematic overview of our processing workflow is shown in the accompanying figure. We process embryos of Drosophila and three other species of flies: the scuttle fly Megaselia abdita, the moth midge Clogmia albipunctata, and the hover fly Episyrphus balteatus. Currently, we are analysing the position of gene expression boundaries. Gene expression data are acquired by means of whole-embryo enzymatic (colorimetric) in situ hybridisation. Images taken of each embryo comprise (A) a differential interference contrast (DIC) image, (B) a bright-field image, (C) a fluorescent image of the nuclear counter-stain, and (D) a high-resolution membrane image. Processing steps for image segmentation are described below (http://rsbweb.nih.gov/ij has detailed descriptions of ImageJ methods, macros, and plug-ins). The following sequence of image segmentation operations is applied to the 10x DIC images (image A) in order to identify the embryo outline, and to separate the embryo from the background:
1. Combine the RGB channels into a single channel by means of the ImageJ ImageConverter.convertToGray8 method.
2. Apply a gamma correction with the ImageJ ImageProcessor.gamma method. The gamma value can be adjusted manually (see Results).
3. Invert the gray scale image to get a dark background and light foreground with the ImageJ ImageProcessor.invert method.
4. Find edges by applying the ImageJ ImageProcessor.findEdges method. It uses a Sobel edge detector.
5. Convert to black-and-white by using the ImageJ ImageProcessor.threshold method with the threshold parameter set to 6. All pixels with values lower than or equal to 6 are set to 0, while all the rest are set to 255. Pixels with a value of 255 are considered to be part of a \u2018blob\u2019.
6. Perform two dilations using the ImageJ ImageProcessor.dilate method.
7. Remove blobs that are touching the image border, using the ImageJ ImageProcessor.killBorderBlobs method.
8. Perform two additional dilations using the ImageJ ImageProcessor.dilate method.
9. Fill holes: any black areas enclosed by white pixels (255) are set to white.
10. Remove blobs that are touching the image border, as the dilations in step 8 could have generated new ones.
11. If more than one blob is present, remove the supernumerary blobs that have an area smaller than a certain threshold value. The threshold is determined by dividing the total image area by Beta, where Beta is set to an empirically determined default value of 13.0. This value can be modified in the user interface. 1.0/Beta gives the maximum blob size that should be considered non-embryo as a fraction of the image area. The result of this processing step should be an image with a single blob that segments the embryo from the background.
12. To smoothen the edges of the mask, we apply a Gaussian filter with an accuracy of 1e-3 and a standard deviation of 31 pixels along the x- and y-axis. The ImageJ \u2018Gaussian Blur\u2019 plug-in is used.
13. Convert the image back to black-and-white using the ImageJ ImageProcessor.threshold method, with the threshold parameter set to 145. If more than one blob is present after thresholding, remove supernumerary blobs as indicated in step 11. This repeated blob removal operation is required because small artifactual blobs occasionally appear after blurring or thresholding.
The result of these processing steps is an image that isolates the embryo from the background: the embryo mask. Finally, we rotate the embryo mask to have the A\u2013P axis placed horizontally. We have used an ImageJ plug-in called \u2018Orientation\u2019 to calculate the rotation angle with reference to the horizontal axis (in degrees). We then rotate the image by that angle in the opposite direction, using the ImageJ \u2018Rotate\u2019 plug-in. 
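The masking sequence above can be approximated outside ImageJ as well. The following Python sketch (using scipy.ndimage rather than the ImageJ classes, with the border-blob removal steps omitted for brevity and a synthetic image standing in for a real DIC micrograph) illustrates the edge-detect, dilate, fill, largest-blob, blur, and re-threshold logic:

```python
import numpy as np
from scipy import ndimage as ndi

def embryo_mask(gray, edge_thresh=6, blur_sigma=31, final_thresh=145):
    """Illustrative mask pipeline: Sobel edges -> threshold -> dilation ->
    hole filling -> largest-blob selection -> Gaussian smoothing ->
    re-threshold. Parameter values follow the text; the function is a sketch,
    not the ImageJ implementation."""
    sx = ndi.sobel(gray.astype(float), axis=0)   # Sobel gradients
    sy = ndi.sobel(gray.astype(float), axis=1)   # (the findEdges step)
    edges = np.hypot(sx, sy)
    mask = edges > edge_thresh                   # binarise
    mask = ndi.binary_dilation(mask, iterations=2)
    mask = ndi.binary_fill_holes(mask)
    labels, n = ndi.label(mask)                  # keep largest blob only
    if n > 1:
        sizes = ndi.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    smooth = ndi.gaussian_filter(mask.astype(float) * 255, sigma=blur_sigma)
    return smooth > final_thresh                 # back to black-and-white

# Synthetic test image: a bright ellipse ("embryo") on a dark background.
yy, xx = np.mgrid[0:200, 0:300]
gray = np.where(((xx - 150) / 100) ** 2 + ((yy - 100) / 50) ** 2 < 1, 200, 20)
mask = embryo_mask(gray)
print(mask.sum() > 0)  # prints True
```

The smoothing-then-rethresholding stage is what gives the mask its rounded outline, mirroring steps 12 and 13 above.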
Afterwards, the embryo may be flipped manually in the horizontal and/or vertical direction such that in the resulting dataset all embryos have the anterior facing left, and dorsal facing up. We automatically crop the image to the size of the embryo mask by finding the rotated embryo mask and applying the ImageJ ImageProcessor.crop method, and rotate and crop DIC (image A), bright-field (B), and nuclear images (C) accordingly. We then calculate the skeleton of the embryo mask, along which expression profiles are extracted. We manually determine the spatial boundaries of gene expression domains in the extracted profiles by approximating them with cubic splines that have their end knots clamped to a zero first derivative. The end knots are placed at (x0, y0) and (x2, y2), respectively. These knots correspond to the outer and inner edges of a boundary: x0 marks the point at which signal can be distinguished from background noise, and x2 marks a position at which a high level of expression is attained, representative of expression levels in the interior of a domain. Automatically, a knot is added at (x1, y1), with x1 = |x2\u2212x0|/2 and y1 equal to the gene expression intensity at position x1. As mentioned above, the spline is constrained by requiring the first derivatives at both end knots to be zero (hence \u2018clamped\u2019 cubic spline). Per gene, each boundary is labelled with an integer identifier number, and whether it represents the anterior or posterior boundary of an expression domain. This enables us to compare homologous boundaries between different embryos and species, to group them according to the developmental age of an embryo, and to track expression domains over time (see Results). Embryos are staged by cleavage cycle; in Drosophila these are named C1 to C14A. Staging by cleavage cycle is achieved by counting the number of nuclei present in the fluorescent image of the DAPI nuclear counter-stain. Time classification is carried out manually, after the creation of the embryo mask. 
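The clamped spline construction can be sketched with scipy (hypothetical knot values; we place the middle knot halfway between x0 and x2, which is one reading of the |x2 - x0|/2 definition above):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical boundary: signal rises from background (x0) to a plateau (x2),
# positions given as fractions of embryo length along the A-P axis.
x0, y0 = 0.40, 0.05   # outer edge: signal first distinguishable from noise
x2, y2 = 0.50, 0.90   # inner edge: high expression, domain interior
x1 = (x0 + x2) / 2.0  # middle knot, halfway between the end knots
y1 = 0.45             # expression intensity measured at x1 (made up here)

# 'Clamped' cubic spline: first derivative forced to zero at both end knots.
spline = CubicSpline([x0, x1, x2], [y0, y1, y2],
                     bc_type=((1, 0.0), (1, 0.0)))

# The zero slopes mean the curve levels off at the outer and inner limits.
print(abs(float(spline(x0, 1))) < 1e-8, abs(float(spline(x2, 1))) < 1e-8)
```

`spline(x, 1)` evaluates the first derivative, so the printed checks confirm the clamping; the fitted `x1` midpoint is what the space-time plots described below would track per boundary.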
Staged embryos are inspected visually using the FlyAGE program. In order to visualise the expression data, we developed two tools for plotting expression domains over space and/or time. In both cases, we may select one or more gene stains (e.g. even-skipped (eve), hunchback (hb), knirps (kni), nubbin (nub), orthodenticle (otd)), or enzymatic protein stains (e.g. Caudal (Cad)) to plot. Variability plots show boundaries extracted from individual embryos, and the corresponding median boundary per time class. The median boundary is calculated by taking the median starting and end points from the data set. These plots give insight into embryo-to-embryo variability by visualizing the distribution of boundary positions along the A\u2013P axis. They show A\u2013P position on the x-axis, and normalised signal intensity on the y-axis. Space-time plots display the mid-points (x1) of the median slopes over time, with the A\u2013P position on the x-, and developmental time on the y-axis (flowing downwards). Such plots allow us to observe and assess trends in the data over time. We designed and implemented our workflow as a graphical user interface, named FlyGUI, in Java (http://www.java.com), using classes from the ImageJ packages (http://rsbweb.nih.gov/ij). It uses a MySQL database (http://www.mysql.com) for storage and retrieval of the data and processing settings. Source code and precompiled executables (jar files) for the Linux operating system have been made available on https://subversion.assembla.com/svn/flygui/. The workflow consists of (1) adding images, (2) creating the embryo mask, (3) extracting expression profiles, and (4) extracting slopes to identify boundaries. Each of these steps corresponds to a separate tab of the FlyGUI. For some embryos, for example M. abdita stained enzymatically for kni and Kr transcripts, the value of Beta must be increased to prevent the embryo from being considered an artifact and therefore from being removed. Clicking the \u2018Run\u2019 button starts the process of computing the embryo mask, and inserts basic data on the mask generation process into the database using default settings. In an output window at the bottom of the screen, messages are displayed that report the progress of the mask computation. Before embryos are imported into the database they must be correctly orientated and annotated. In the \u2018Mask Details\u2019 tab, a list of processed embryos is shown. The embryo mask, DIC, nuclear, and bright-field images can be re-orientated if necessary by flipping them vertically and horizontally using the \u2018V Flip\u2019 and \u2018H Flip\u2019 buttons. This manual flipping step is necessary since automated orientation of embryos in A\u2013P and D\u2013V directions is unreliable (see also above). Next, embryos and masks need to be annotated. Additional information on each embryo is organised under 4 headings: \u2018Embryo Data\u2019, \u2018Mask\u2019, \u2018Aging\u2019, and \u2018Staining Data\u2019. We discuss each in turn below. \u2018Embryo Data\u2019 (e.g. the genotype) are added using the drop-down menus on the right of the screen. \u2018Mask\u2019 quality is assessed by eye and the fit rated as either \u2018good\u2019, \u2018ok\u2019, or \u2018not good\u2019 using the drop-down menu. Again a comment box is present to record any observations that may not fit the controlled vocabulary. \u2018Aging\u2019: embryo age is assessed based on the number of nuclei visible in the nuclear counterstain; for D. melanogaster, an external viewer (http://projects.gnome.org/eog) is used to display the high-resolution membrane image. Once the time class has been determined, it can be selected from the drop-down menu. \u2018Staining Data\u2019 record which genes were stained, making it possible to view embryos stained for particular genes at specific time classes. 
FlyAGE is used after annotation of the whole batch of embryos is complete. In the \u2018Extract Profile\u2019 tab, expression profiles are extracted along the embryo skeleton; the result shown here is for kni in the purple channel. The \u2018Slopes\u2019 tab of FlyGUI implements boundary extraction. Boundaries (slopes) are added by clicking once on the peak of the expression profile, and once on the position where the levels of expression can no longer be detected. Each left mouse click positions a blue bar overlapping the expression profile and the bright-field image. To make boundary placement consistent we rely on (1) the use of multiple embryos per time point to establish the median position of an expression domain boundary, and (2) a set of guidelines for the user for determining boundary positions. These guidelines are as follows: a gene expression domain boundary has an upper and lower limit of expression intensity (expression level), where the upper one is placed where the gene expression signal levels off, and the lower limit is placed where signal is no longer distinguishable from background. Finally, we ensure that the positions we select in the profile graph agree well with the expression boundaries visible in the bright-field image. For example, selecting \u2018kni\u2019 as the gene, \u2018all\u2019 for the slopes, and time classes \u2018C14_T1, T4, T6 and T8\u2019 (marking the corresponding tick boxes on the right) displays all slopes for the gene \u2018kni\u2019 from four different species of dipterans. The workflow is designed for simplicity and ease of use. Although it is largely automated, it still requires a few manual processing steps, such as flipping embryo orientation, time classification, and boundary extraction using splines. This is because automatic orientation of embryo masks is notoriously difficult due to significant embryo shape variation, and a lack of pronounced asymmetry along both A\u2013P and D\u2013V axes, while the high and uneven background in bright-field or DIC images makes automatic recognition of boundaries non-trivial. 
Resolving these issues remains a major challenge for future work. Another challenge is the measurement of relative expression levels from enzymatically stained embryos. We currently only measure boundary positions, but not relative expression levels of different domains. One reason for this is that our current protocol involves potentially non-linear amplification of the signal, and it is difficult to detect and avoid saturation in the opaque precipitate of the NBT/BCIP stain. Another reason is the large variability in background staining and illumination in the images we use, which makes it difficult to separate signal from noise. Further research will be required to solve these problems. Furthermore, we are exploring possibilities of processing data from more species (dipteran and non-dipteran), and data sets created by other research groups. This raises the challenge of creating embryo masks from non-standardised data, meaning that our method should be made more robust when dealing with lower quality images, and embryos imaged in different manners. This should be achievable by using alternative segmentation algorithms, such as those based on machine learning strategies proposed in the literature. Finally, our method can be easily adapted to any system in which graded gene expression patterns need to be measured along a well-defined axis. Examples of such systems are the D\u2013V system and wing imaginal disk of Drosophila (refs). With appropriate modifications to image segmentation and axis identification algorithms, our profile and boundary extraction methods could also be applied in these contexts. In this paper, we have presented a robust, \u201cmedium-throughput\u201d method for measuring the position of graded gene expression domain boundaries along the A\u2013P axis of different dipteran embryos. 
Our method fills a gap between previously published methods that either provide very precise, high-resolution measurements of expression levels across time and space, or enable the high-throughput extraction, analysis, and comparison of expression patterns from thousands of embryo images in genome-wide expression databases. We have used this method to measure the spatio-temporal dynamics of maternal and gap gene expression in four different dipteran species. We have shown elsewhere that the resulting integrated data sets can be used to analyse and reverse-engineer the structure and dynamics of the gap gene network."}
+{"text": "Nowadays, it is possible to collect expression levels of a set of genes from a set of biological samples during a series of time points. Such data have three dimensions: gene-sample-time (GST). Thus they are called 3D microarray gene expression data. To take advantage of the 3D data collected, and to fully understand the biological knowledge hidden in the GST data, novel subspace clustering algorithms have to be developed to effectively address the biological problem in the corresponding space.We developed a subspace clustering algorithm called Order Preserving Triclustering (OPTricluster), for 3D short time-series data mining. OPTricluster is able to identify 3D clusters with coherent evolution from a given 3D dataset using a combinatorial approach on the sample dimension, and the order preserving (OP) concept on the time dimension. The fusion of the two methodologies allows one to study similarities and differences between samples in terms of their temporal expression profile. OPTricluster has been successfully applied to four case studies: immune response in mice infected by malaria (Plasmodium chabaudi), systemic acquired resistance in Arabidopsis thaliana, similarities and differences between inner and outer cotyledon in Brassica napus during seed development, and Brassica napus whole seed development. These studies showed that OPTricluster is robust to noise and is able to detect the similarities and differences between biological samples.Our analysis showed that OPTricluster generally outperforms other well known clustering algorithms such as TRICLUSTER, gTRICLUSTER and K-means; it is robust to noise and can effectively mine the biological knowledge hidden in 3D short time-series gene expression data. Clustering of co-expressed genes has been an active data mining topic, and has advanced in parallel with the development of microarray technology. 
There iPioneering clustering algorithms such as K-means , HierarcAlthough subspace clustering algorithms are biologically more meaningful than full space clustering algorithms, the identification of full space clusters are less costly compared to subspace clusters. In fact, most subspace clustering algorithms have been shown to be NP-complete , thus madistances over either all or only a subset of the dimensions is divided into E equal intervals: = , where e = bb0 + e\u03b4, and e = 1 to E. Finally, a new expression level of the corresponding gene in the considered sample at the given time point is obtained using Equation 3.The first step of OPTricluster which is in fact optional consists of performing the gene expression data quantization. This is due to the fact that we are not only working with noisy data, but also DNA experimental data contains missing values. Many techniques are available in the literature to deal with noise through data quantization and to recover missing values by imputation . OPTriclnml fof the corresponding gene n gin the given sample m sand at a given time points l tfalls in the interval = [nmlr], in which every row along the time-dimension for a given sample is a vector of the ranks of the corresponding expression values in A, in an increasing or decreasing order. For example, if the expression levels of gene n gin sample m salong the time-dimension is nmf(T) = at \u03b4 \u22641.0, then, the corresponding row in the rank matrix would be nmr(T) = [1-3] in the increasing order. The ranking matrix of the example of an order preserving matrix defined by Equation 2 above will be:OPTricluster then uses the quantized 3D gene expression matrix to generate a 3D rank expression matrix in the second step. A 3D rank expression matrix is an 0.5, 3, 0.5] would be . 
Note that if two or more entries have the same value, they are given the same ranking; this approach allows the identification of constant patterns. Without loss of generality, one can easily see from this example (Equation 4) that the rows of the ranking matrix of an order preserving matrix will always be identical. In fact, this property is exploited below for the identification of order preserving patterns from a 3D short time series gene expression matrix. As defined above, with a set of genes G = {g_1, ..., g_n, ..., g_N}, a set of biological samples S = {s_1, ..., s_m, ..., s_M} and a series of time points T = {t_1, ..., t_l, ..., t_L}, we define the biological sample space \u03a9 as the set of all possible combinations of the samples, and \u0393 their number. That is, if S = {s_1, s_2, s_3} as in the figure, then \u03a9 = {{s_1, s_2, s_3}, {s_1, s_2}, {s_1, s_3}, {s_2, s_3}, {s_1}, {s_2}, {s_3}} and \u0393 = 7. Recall that r_n corresponds to a row vector of the ranks of the nth gene, in sample s and across the entire time series T. Hence, for each combination \u03a9_i \u2208 \u03a9, the exact number h_i of distinct order preserving triclusters that can be found in the 3D dataset is the number of distinct 2D r_n matrices of its corresponding 3D ranked matrix R. Thus, the set of 3D distinct order preserving patterns, V, can be identified by considering R as a set of 2D matrices r_n, that is, R = {r_1, r_2, ..., r_n, ..., r_N}, and identifying all distinct r_n in it. 
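The sample space \u03a9 above is just the set of non-empty subsets of S, so for M samples \u0393 = 2^M - 1 (7 for the three-sample example). A minimal sketch of the enumeration; `sample_combinations` is an illustrative name:

```python
from itertools import combinations

def sample_combinations(samples):
    """All non-empty subsets of the sample set (the space Omega).
    For M samples there are Gamma = 2**M - 1 combinations."""
    omega = []
    for size in range(len(samples), 0, -1):   # largest subsets first
        omega.extend(combinations(samples, size))
    return omega

omega = sample_combinations(["s1", "s2", "s3"])
print(len(omega))   # Gamma = 7
```

Listing larger subsets first mirrors the text's ordering of \u03a9, where the full sample set comes before its proper subsets.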
In its third step, OPTricluster identifies the set of distinct 3D coherent patterns that can be found in the 3D gene expression matrix. From the above definitions, one can easily show that the exact number \u039b of order preserving triclusters in the 3D gene expression matrix is the sum over all sample combinations of the number of distinct patterns, \u039b = \u2211_i h_i, where h_i is the number of distinct 2D rank matrices r_n corresponding to each \u03a9_i \u2208 \u03a9 as defined above. Once the exact number of distinct 3D order preserving patterns has been identified, for each \u03a9_i \u2208 \u03a9, OPTricluster assigns each gene to one of the h_i groups by comparing each distinct pattern v_k of V (V: set of 3D distinct order preserving patterns identified from the previous step) to r_n, and assigns gene g_n to the order preserving tricluster C{k} each time r_n = v_k. This approach is guaranteed to identify all order preserving triclusters of size I \u00d7 J \u00d7 K, with I_min \u2264 I \u2264 N, J_min \u2264 J \u2264 M, and K = L, where I_min and J_min are the minimum numbers of genes and samples in a tricluster, respectively. Since the goal of the OPTricluster algorithm is to study the similarities and differences between samples in terms of the expression profile of all genes, I_min and J_min should be set to 1. In this case, the algorithm will identify all the conserved clusters and perform comparison between them at the single sample and single gene level in the divergent patterns identification step, as explained below. The 3D procedure as presented above identifies sets of genes that behave similarly (same OP patterns) across the subsets of samples considered. The sets of divergent patterns can be easily derived from the sets of conserved ones using Equation 5, where C{p} = {I_p, J_p, K_p} and C{q} = {I_q, J_q, K_q} are two conserved triclusters (similar OP patterns). Basically, Equation 5 identifies sets of genes that are co-expressed in the subset of samples in C{p}, but split and co-expressed differently in one or more samples in C{q}. 
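The assignment step, grouping genes whose rank rows coincide in every sample of a subset, can be sketched as follows. The `rank_rows` mapping from (gene, sample) to a rank tuple is an assumed data layout for illustration, not the paper's implementation:

```python
from collections import defaultdict

def op_triclusters(rank_rows, genes, sample_subset, min_genes=1):
    """Group genes whose rank vectors are identical in every sample of
    the subset: each distinct pattern v_k defines one order preserving
    tricluster C{k}.  rank_rows[(gene, sample)] is the rank tuple of
    that gene's time profile in that sample (hypothetical layout)."""
    clusters = defaultdict(list)
    for g in genes:
        patterns = tuple(rank_rows[(g, s)] for s in sample_subset)
        if len(set(patterns)) == 1:   # same OP pattern in every sample
            clusters[patterns[0]].append(g)
    return {k: v for k, v in clusters.items() if len(v) >= min_genes}

rank_rows = {
    ("g1", "s1"): (1, 2, 3), ("g1", "s2"): (1, 2, 3),
    ("g2", "s1"): (1, 2, 3), ("g2", "s2"): (1, 2, 3),
    ("g3", "s1"): (3, 2, 1), ("g3", "s2"): (1, 2, 3),
}
print(op_triclusters(rank_rows, ["g1", "g2", "g3"], ["s1", "s2"]))
# {(1, 2, 3): ['g1', 'g2']}
```

Gene g3 is excluded from the {s1, s2} tricluster because its ranking differs between the two samples, which is precisely the kind of divergence Equation 5 then isolates.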
For example, if clusters C{p} and C{q} have the same OP patterns, and if C{p} = {{g_1, g_2, g_3, g_4}, {s_1, s_2}, {t_1, t_2, t_3}} and C{q} = {{g_1, g_2}, {s_1, s_2, s_3}, {t_1, t_2, t_3}}, then D_pq = {{g_3, g_4}, {s_3}, {t_1, t_2, t_3}}, meaning that genes {g_3, g_4} have different behaviour in {s_3} compared to {s_1, s_2}. The computational burden of this step is reduced because only triclusters with the same OP or ranking patterns are compared. This is due to the fact that ranking patterns are associated with the expression profile and are unique to each cluster for a given subset of samples. The statistical significance of each identified tricluster with I genes and J samples is assessed by computing the tail probability that a random dataset of size N \u00d7 M \u00d7 L will contain an order preserving tricluster with I or more genes and J or more samples in it. In principle, the probabilistic description of the reference 3D random matrix would be that of the observed noise in the microarray experiment. For L time points that are independent and identically distributed according to the uniform distribution, the probability that a random gene-sample supports a given cluster is equal to one over the number of possible time point permutations, or 1/L!. Since the genes and samples are assumed to be independent, the probability of having at least I genes and J samples in the cluster is the I-tail of the binomial(N, 1/L!) distribution. As there are L! ways to choose an OP tricluster of size L, the following expression Z is an upper bound on the probability of having a tricluster of size L with I or more genes and J samples. 
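The bound can be computed directly. This is a hedged sketch: following the text, it multiplies the binomial I-tail by the L! possible time orderings, and it assumes (my assumption, since the garbled source does not pin this down) that a random gene matches a fixed ordering in all J samples with probability p = (1/L!)^J; `op_significance_bound` is an illustrative name.

```python
from math import comb, factorial

def op_significance_bound(N, L, I, J):
    """Upper bound Z on the probability that a random N x M x L dataset
    contains an OP tricluster with >= I genes over J samples.
    Assumption: a random gene supports a fixed time-order in all J
    samples independently, so p = (1/L!)**J."""
    p = (1.0 / factorial(L)) ** J
    tail = sum(comb(N, i) * p**i * (1 - p) ** (N - i)
               for i in range(I, N + 1))   # binomial I-tail
    return factorial(L) * tail             # union bound over L! orderings

z = op_significance_bound(N=1000, L=6, I=20, J=2)
print(z < 1e-6)   # a 20-gene cluster over 6 time points is highly significant
```

As the text notes, the best tricluster is the one with the smallest Z; any Z below the desired significance level marks the tricluster as unlikely to arise by chance.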
We use this bound to estimate the significance of any given tricluster of size L with I genes and J samples. As long as that upper bound probability is smaller than any desired significance level, the identified tricluster in the real 3D gene expression matrix will be statistically significant. The best tricluster is therefore the one with the largest statistical significance, i.e., the one with the smallest Z. The overall complexity of the triclustering algorithm is O(N\u0393\u039b). Recall that the 3D short time-series gene expression data A is an N \u00d7 M \u00d7 L matrix. The 3D rank matrix can be identified within O(NML). The set of distinct 3D patterns can be identified within O(N\u0393). Finally, the set of coherent conserved triclusters can be identified within O(N\u0393\u039b). In all, the complexity of the triclustering algorithm is O(NML) + O(N\u0393) + O(N\u0393\u039b), which is O(N(ML + \u0393 + \u0393\u039b)). Note that the complexity for identifying the sets of divergent patterns from convergent ones is negligible. Since \u0393\u039b > \u0393 and \u0393\u039b > ML, the overall time complexity is O(N\u0393\u039b). The authors declare that they have no competing interests. Performed the Arabidopsis and Brassica biological experiments and contributed the data: HS, PF, YH, JZ, DH, and AC. Analyzed the data: ABT, SP, ZL, and YP. Wrote the first draft manuscript: ABT. Commented on the manuscript with important intellectual contributions: SP, FF, PF, JZ, ZL, AC and YP. Revised the manuscript: ABT and YP. Designed the algorithm and performed clustering: ABT. Implemented the algorithm in Matlab and Java: ABT. Conceived and designed experiments: ABT, SP, and YP. Additional files: the OPTricluster Java package; the OPTricluster user manual; expression profile of NPR1 in different samples (the x-axis corresponds to the time point experiments, the y-axis to the expression level in Log2). 
Each curve corresponds to a sample. Statistics of differences between inner and outer cotyledons: the x-axis corresponds to the combination of time points, the y-axis to the number of genes. GO analysis of the 11 clusters in whole seed development of Brassica napus (Gene Ontology analysis of whole seed)."}
+{"text": "A variety of high-throughput methods have made it possible to generate detailed temporal expression data for a single gene or large numbers of genes. Common methods for analysis of these large data sets can be problematic. One challenge is the comparison of temporal expression data obtained from different growth conditions where the patterns of expression may be shifted in time. We propose the use of wavelet analysis to transform the data obtained under different growth conditions to permit comparison of expression patterns from experiments that have time shifts or delays. We demonstrate this approach using detailed temporal data for a single bacterial gene obtained under 72 different growth conditions. This general strategy can be applied in the analysis of data sets of thousands of genes under different conditions."}
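One simple way to make time-shifted profiles comparable, in the spirit of the wavelet strategy above, is to correlate their smoothed wavelet coefficients over candidate shifts. This is a minimal sketch using a single-level Haar transform; the choice of wavelet, the one-level decomposition, and the alignment-by-correlation step are illustrative assumptions, not the paper's exact procedure, and `haar_approx`/`best_alignment` are hypothetical names.

```python
import numpy as np

def haar_approx(signal):
    """One level of the Haar discrete wavelet transform; returns the
    approximation (smoothed, half-length) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = np.append(s, s[-1])       # pad odd-length series
    return (s[0::2] + s[1::2]) / np.sqrt(2)

def best_alignment(x, y, max_shift):
    """Estimate the delay between two expression profiles by correlating
    their Haar approximation coefficients over candidate shifts."""
    ax = haar_approx(x)
    best_shift, best_r = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        r = np.corrcoef(ax, haar_approx(np.roll(y, shift)))[0, 1]
        if r > best_r:
            best_shift, best_r = shift, r
    return best_shift, best_r

# Two copies of the same periodic expression pattern, one delayed 5 samples
t = np.linspace(0, 4 * np.pi, 64, endpoint=False)
profile_a = np.sin(t)
profile_b = np.roll(profile_a, 5)
shift, r = best_alignment(profile_a, profile_b, max_shift=8)
print(shift, round(r, 2))   # -5 1.0
```

Working on smoothed coefficients rather than the raw series makes the comparison less sensitive to measurement noise, which is the practical motivation for the wavelet step.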
+{"text": "Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of the samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/). In recent years various multivariate analysis techniques have been developed for inferring causal relations among time series. 
Although many of them have previously proved their power on analysing economic and neurophysiological data, the unique nature of gene expression time series, typically large-scale and small-sample, poses a challenge to these techniques. On the other hand, gene expression dynamics are important, since they directly reveal the active components within the cell over time, indicating gene regulatory relationships at the transcriptional level. Therefore, a lot of time and effort has been spent on developing tools that suit the need for expression time series analysis. We define a causal relation as a target at the current time having directed dependence on a regulator at the past time, when conditioned on the rest of the regulators. Inferring such relations has been attempted by several groups. For example, a directed network inference approach, termed the shrinkage vector autoregressive method (SVAR), was proposed by Opgen-Rhein and Strimmer. In this paper, we describe some of the most commonly used multivariate inference techniques for large-scale gene regulatory network reconstruction. We demonstrate that the proposed directed partial correlation (DPC) algorithm is an efficient and effective solution to causal/regulatory network inferences on small-sample, large-scale gene expression data. The comprehensive analysis of the experimental results not only reveals good accuracy of the proposed DPC method in large-scale prediction, but also gives much insight into all methods under evaluation. In essence, partial correlation, which is able to test conditional independence on multivariate Gaussian data, is used as the mathematical foundation for establishing direct interactions among genes. For example, variable b may be highly correlated with c only because of the causal effects from a; Pearson correlation would report this indirect association, whereas partial correlation, conditioning on a, would not. Although a shrinkage estimate of partial correlation makes estimation feasible for small samples, an immediate difficulty in assessing a network inference method lies in the fact that current biological knowledge is far from sufficient to provide a clear picture. 
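The partial-correlation backbone can be sketched with the standard precision-matrix identity rho_ij = -P_ij / sqrt(P_ii * P_jj), where P is the inverse covariance matrix. This is a minimal illustration: a small ridge term stands in for the proper shrinkage estimator the text refers to, and the a -> b -> c chain is a toy example of the indirect-correlation problem described above.

```python
import numpy as np

def partial_correlation(data, ridge=1e-3):
    """Partial correlation matrix from a samples-by-genes array, via
    the inverse covariance (precision) matrix.  The ridge term is a
    crude stand-in for shrinkage when samples are few."""
    cov = np.cov(data, rowvar=False)
    precision = np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# A chain a -> b -> c: a and c correlate only through b, so their
# partial correlation (conditioned on b) should be near zero.
rng = np.random.default_rng(0)
a = rng.normal(size=2000)
b = a + 0.3 * rng.normal(size=2000)
c = b + 0.3 * rng.normal(size=2000)
pcor = partial_correlation(np.column_stack([a, b, c]))
print(abs(pcor[0, 2]) < 0.1, pcor[0, 1] > 0.5)
```

The direct edges (a, b) and (b, c) keep large partial correlations while the indirect pair (a, c) is conditioned away, which is exactly the property DPC builds on before adding Granger-style directionality.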
A reasonable validation process involves the use of real biological datasets, in addition to synthetic datasets which provide both ground truth and unlimited sample length. Under a broad set of assumptions, if datasets of various sample sizes and numbers of variables can be produced, an inference method can then be tested extensively, especially against its sensitivity to dimensionality. We adopt this validation process, but specifically note that, since most of the methods are probabilistic, selecting cutoffs to represent one resulting network may introduce false positives. Hence it is desirable to compare different methods with their direct output \u2013 the network probability matrix in which a coefficient denotes the probability of interaction between two genes. We benchmark the methods using GeneNetWeaver, the simulator behind the DREAM in silico challenges. DREAM is a community effort to assess reverse engineering algorithms. Benchmarking using GeneNetWeaver datasets should provide strong evidence of the power of a network inference algorithm in a controlled environment. Specifically, we discuss the statistical properties of transcriptional networks and their impacts on the performance of an algorithm in the comparative evaluation. In addition, we discuss model assumptions for different inference methods. The question is to what extent the model assumptions influence the confidence of the inference outcome. The rest of this paper is organised as follows. In the second section, we present the technical details of the three existing algorithms, together with the proposed algorithm for directed regulatory network inference. Then benchmarking using datasets of various sizes generated by GeneNetWeaver is presented. The experiments are designed to give a thorough evaluation of the proposed algorithm and to compare the four algorithms in a coherent manner. The reported results on simulated data indicate superior performance of the proposed algorithm both in terms of accuracy and efficiency. 
For the biological dataset, detailed analysis of the results suggests that DPC uncovers more biologically relevant regulatory relationships than the competing method SVAR. In this section, we hope to shed some light on the nature of different inference techniques, their advantages and inherent problems. First the autoregressive models are presented; they form the theoretical ground for most of the existing methods in comparison. Then we describe the technical details for the three representative existing methods, with notes on their capabilities in gene expression analysis. Next, the proposed DPC method is formulated. These technical details provide us with a strong foundation for later discussions of experimental results. Suppose the time series of interest is x(t). In the multivariate case, let X(t) = (x_1(t), ..., x_N(t))^T denote the expression of N genes at time t; a vector autoregressive model of order p, VAR(p), writes X(t) = A_1 X(t-1) + ... + A_p X(t-p) + \u03b5(t), where the coefficient matrices A_k capture the directed, time-lagged dependencies and \u03b5(t) is a noise term. Although the VAR model has been widely used in economics and neuroscience, it has its own limitations when small samples are encountered. An effective shrinkage estimation procedure was developed for learning VAR models from small sample data; the covariance matrix would otherwise be ill-conditioned, given the large number of variables. To provide a more appropriate dataset for the assessment of these methods, the simulations in this paper represent even more difficult problems for the inference methods. Specifically, for knockout and knockdown experiments, a simulation may see the change of expression profiles of very few genes while others remain constant. For this to be a standalone test, we then select the datasets based on the variance in the dataset and only use datasets with high variations. We simulate networks of size 50, 100, 200, and 500 genes with four types of perturbations. The time series are all simulated from time point 1 to 100, but measured with 21 time points or 100 time points to form two datasets with different time series lengths. Experimental noise is modeled by simulating noise in microarrays, which is a mix of normal and log normal noise. 
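The VAR(1) backbone described above can be fitted by plain least squares when the series is long enough, which also makes clear why shrinkage becomes necessary for short series: the fit needs more time points than variables. A minimal sketch with a known toy transition matrix; `fit_var1` is an illustrative helper, not any package's API.

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of a VAR(1) model X(t) = A X(t-1) + e(t).
    X is a (time x variables) array; returns the coefficient matrix A,
    where A[i, j] is the influence of variable j on variable i."""
    past, present = X[:-1], X[1:]
    B, *_ = np.linalg.lstsq(past, present, rcond=None)
    return B.T

# Toy system with a known transition matrix
A_true = np.array([[0.9, 0.0], [0.5, 0.2]])
rng = np.random.default_rng(1)
X = np.zeros((500, 2))
for t in range(1, 500):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.normal(size=2)
A_hat = fit_var1(X)
print(np.round(A_hat, 1))
```

With 500 time points the estimate recovers A_true closely; with the 21-point series used in the benchmarks, this plain fit would be badly ill-conditioned, which is the gap shrinkage (SVAR) and partial-correlation approaches (DPC) are designed to close.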
Then the data is normalized after the experimental noise is added. With 4 network sizes, 4 types of perturbations, 2 time series lengths, and 5 simulations for each setup, there are altogether 160 datasets. Four multivariate time series inference algorithms as described above are evaluated in this experiment. Their ways of inferring the final network vary and each requires fine tuning of the parameters, which could be subjective for large-scale experiments. To eliminate any subjective elements and enable a fair comparison, we decided to compare directly on their preliminary output, the network probability matrices. For clarity, the related symbols for each probability matrix are listed in the algorithms' technical details. For the inferred network probability matrices, we compute their true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) at a given threshold. This procedure was repeated 500 times for each test statistic and variance scenario to obtain Receiver Operator Characteristic (ROC) curves. While the area under the ROC curve (AUC) provides a quantitative measurement of the average performance of a method, the maximum F-score summarizes the best achievable precision-recall trade-off. Apart from these metrics, we also base our evaluation on the consumed computation time and the true positive rate at a 0.2 false positive rate, since usually a low false positive rate is preferred. All three metrics are used for assessment in the simulation experiments. For DPC and SVAR, we plot their experimental results together so that they can be compared with respect to individual simulations. Then the average results on part of the datasets for each of the four algorithms are shown in separate plots. This is because GC-VAR can only be applied on datasets with 100 time points and 50 genes, since it requires long time series. Performance varies across perturbation types; this may be because some perturbations have only few downstream effects. When the regulators are not perturbed, the real relationships between them and their downstream targets cannot be found. 
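Comparing methods on their probability matrices rather than thresholded networks amounts to computing a threshold-free AUC against the ground-truth adjacency. A minimal sketch using the Mann-Whitney formulation of AUC (the probability that a true edge outranks a non-edge); `roc_auc` is an illustrative name and the toy matrices are made up.

```python
import numpy as np

def roc_auc(prob_matrix, truth_matrix):
    """ROC-AUC for an inferred network probability matrix against a
    ground-truth adjacency matrix, with self-edges excluded."""
    n = prob_matrix.shape[0]
    mask = ~np.eye(n, dtype=bool)                 # ignore the diagonal
    scores, labels = prob_matrix[mask], truth_matrix[mask].astype(bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (true edge, non-edge) pairs ranked correctly,
    # counting ties as half a win.
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))

truth = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
probs = np.array([[0.0, 0.9, 0.2], [0.1, 0.0, 0.8], [0.3, 0.1, 0.0]])
print(roc_auc(probs, truth))   # 1.0: every true edge outranks every non-edge
```

Because the score is computed over all thresholds at once, no subjective cutoff enters the comparison, which is the fairness argument made in the text.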
Effectively, a significant proportion of the variation in the data is a result of experimental noise. Hence for some inference methods, it is difficult to pick up the true signals amidst noise. In DREAM, an in silico challenge provides four perturbations, each of 10 simulations, to infer a network. However, for benchmarking in this paper we use a single simulation for one type of perturbation as input. As a result, better performance in DREAM can be expected for the methods in comparison. Nevertheless, for comparative purposes, these simulation assessments undoubtedly yield benchmarking results on a fair ground, i.e. not biased towards any model assumptions. For the biological evaluation we use an Arabidopsis thaliana (L.) Heynh. dataset on starch metabolism in Arabidopsis. Two replicates consist of measurements at 11 time points of uneven time intervals to capture the periods immediately after the transitions from dark (light) to light (dark). Samples were firstly taken at the end of the light period, then at 1, 2, 4, 8, and 12 hr of darkness and at 1, 2, 4, 8, and 12 hr of light. During the day, starch is synthesised to serve as an intermediate store of carbon fixed during photosynthesis when rates of production exceed the export rates of the chloroplasts. During the night, starch formed and stored within the chloroplasts during the day is metabolized to maltose and glucose and exported from the chloroplast. These exported breakdown products are then used as sources of energy for plant growth and metabolism as well as being sent to sink tissues where starch can be re-synthesised for more long term storage in specialised storage organelles called amyloplasts. Biclustering was applied to the inferred networks; 257 biclusters were found for SVAR with quality scores of 87\u2013940. GO enrichment analysis is performed for transcription factor binding elements in target promoters. 
60 cases of promoter enrichment were observed for DPC biclusters while 44 cases were identified in the SVAR biclusters. Among the genes in DPC bicluster number 190, ISA3 (At4g09020), PHS1 (At4g29320) and SEX1/GWD1 (At1g10760) are known to be part of the starch degradation pathway operating in chloroplasts. The tight grouping of these genes within the large background subset of genes indicates that DPC appears to be identifying target genes which are potentially co-regulated and involved in the same biological pathway. A larger bicluster (number 222), containing 43 members identified within the SVAR data, was also found to contain 3 of the 5 known genes and the two putative pathway members identified in the DPC bicluster number 190. From the validation results it appears that DPC generates more biologically meaningful results than SVAR. One of the significantly enriched motifs in bicluster 190, motif M00958, suggests a shared regulatory element. Besides analysing the biclustering results, we also looked directly for known regulators. 41 of the 800 genes are known transcription factors in Arabidopsis. Since a row of coefficients in the probability matrix represents probabilities of one gene regulating other genes, the sum of this row should be proportional to the probability of this gene being a regulator. A Welch-Satterthwaite test is used to assess whether a transcription factor's row sum is significantly elevated. The 41 transcription factors are then tested individually for their roles in the probability matrices of DPC and SVAR in the same way. Results of the test are provided in the supplementary tables. This paper reviews some recent advances in multivariate time series inference of gene expression data. It then reports a new method, Directed Partial Correlation (DPC), for efficient and effective large-scale network inference. Experiments on both simulated and biological data are designed to investigate the properties of the proposed method and existing methods. From the experimental results, superior performance of the proposed DPC method is observed when compared to three other inference methods. 
When analyzing simulated datasets, DPC can pick up the true signal and reveal the underlying relationships. SVAR is the most efficient, but less effective than DPC in most of the cases. For the biological dataset, DPC appears to give more biologically meaningful results than SVAR. These results provide good evidence that DPC is suitable for the scenario of expression time series analysis. Additionally, we should be aware that high-throughput data often lacks the specificity for accurate inference of regulatory relationships. Therefore, the network inference result can either be examined in a modular fashion as in this paper, or combined with other data sources or biological knowledge to address complex biological problems. In summary, the proposed DPC algorithm has excellent performance with large numbers of variables. Its efficiency in learning among hundreds of variables is mainly due to the fact that the computation is based on partial correlation instead of model fitting. DPC has the potential of being extended to applications on static data, such as cancer expression profiles, for learning the data structure. 
With time series data, the time lag should be carefully selected based on the user's understanding of the dataset, in order to reveal the information embedded in time lags. Figure S1: A significantly enriched motif in LHY targets as determined by DPC in network module/bicluster 190 (TIF). Table S1: GO enrichment for DPC biclusters, Bonferroni adjusted (XLS). Table S2: GO enrichment for SVAR biclusters, Bonferroni adjusted (XLS). Table S3: Transcription factor enrichment for DPC biclusters (XLS). Table S4: Transcription factor enrichment for SVAR biclusters (XLS). Table S5: Transcription regulators ranking by SVAR (TXT). Table S6: Transcription regulator ranking by DPC(1) (TXT)."}
+{"text": "Principal-oscillation-pattern (POP) analysis is a multivariate and systematic technique for identifying the dynamic characteristics of a system from time-series data. In this study, we demonstrate the first application of POP analysis to genome-wide time-series gene-expression data. We use POP analysis to infer oscillation patterns in gene expression. Typically, a genomic system matrix cannot be directly estimated because the number of genes is usually much larger than the number of time points in a genomic study. Thus, we first identify the POPs of the eigen-genomic system that consists of the first few significant eigengenes obtained by singular value decomposition. By using the linear relationship between eigengenes and genes, we then infer the POPs of the genes. Both simulation data and real-world data are used in this study to demonstrate the applicability of POP analysis to genomic data. We show that POP analysis not only compares favorably with experiments and existing computational methods, but that it also provides complementary information relative to other approaches. Genes whose expression varies differentially and periodically over the cell cycle have been identified by both experimental and computational methods. Typically, the dynamics of a genomic system are too complicated to be known explicitly. In POP analysis, a complex system is linearized using a set of first order ordinary differential equations (ODEs). These ODEs correspond to the state equation in systems theory; their parameters can be inferred from data. The state equation with perturbations has been applied to model gene expression. However, genome-wide gene-expression data sets normally have a limited number of time samples. Since the number of time samples is much fewer than the number of genes, estimation of the genomic system matrix is underdetermined. 
In order to solve this problem, we use the idea of dimensionality reduction to construct an eigen-genomic system that consists of significant eigengenes calculated from the singular value decomposition (SVD). We evaluate the applicability of POP analysis to genomic systems using both simulation and real-world datasets. Using simulation data, we check the capability of POP analysis to recover the oscillation amplitudes and phases defined by the simulation parameters. Using real-world data, we compare POP analysis with both the results of experiments and existing computational methods. We model gene expression data from a system point of view; i.e., the genome-wide time-series gene-expression data X(t) evolves according to dX(t)/dt = BX(t), where the system matrix B encapsulates the dynamic characteristics of the genomic system. Estimating the genomic system matrix B directly is underdetermined, so we instead calculate the eigen-genomic system matrix. The singular value decomposition of the data yields the eigengenes, and we retain the first r significant eigengenes. By Equation (1) and Equation (2), the eigen-genomic system matrix is obtained, and the relationship between the genomic system matrix and the eigen-genomic system matrix is linear. The eigen-genomic system equation is given by Equation (3). 
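The dimensionality-reduction step can be sketched as a truncated SVD that keeps the first r eigengenes. This is an illustrative implementation of the general idea, not the paper's code; `eigengenes` is a hypothetical helper and the synthetic two-mode data is made up.

```python
import numpy as np

def eigengenes(X, r):
    """Reduce a genes x time matrix to its first r eigengenes via SVD.
    Returns the (r x time) eigengene series (right singular vectors
    weighted by singular values) and the (genes x r) loadings, so the
    eigen-genomic system keeps the dominant temporal modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[:r, None] * Vt[:r], U[:, :r]

# 4000 genes built from 2 temporal modes + noise are captured by r = 2
rng = np.random.default_rng(0)
t = np.arange(18)
modes = np.vstack([np.cos(2 * np.pi * t / 9), np.sin(2 * np.pi * t / 9)])
X = rng.normal(size=(4000, 2)) @ modes + 0.01 * rng.normal(size=(4000, 18))
Y, loadings = eigengenes(X, r=2)
print(Y.shape, loadings.shape)   # (2, 18) (4000, 2)
```

Because X is approximately the product of the loadings and the eigengene series, the linear map back from eigengenes to genes is exactly what later lets gene-level POPs be inferred from the low-dimensional system.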
After discretizing it into a difference equation, we obtain Y(t + \u03c4) = MY(t), where \u03c4 is the time interval between measurement time points and M is the discrete propagator of the eigen-genomic system, which we estimate from the eigengene expression time series. The eigen-genomic system matrix is then analyzed through its eigen-decomposition. For a complex conjugate pair of eigenvalues, after summing the terms of the complex conjugate eigenvectors in Equation (7), their sum defines a pair of patterns that drive the cosine and sine of an oscillation with angular frequency \u03c9. By Equation (2), the relationship between gene expression and eigengene expression is linear, so gene expression oscillates with the same frequency \u03c9; i.e., the period of the oscillation is 2\u03c0/\u03c9. From a system point of view, the oscillation part of a genomic system is a periodic process, and for an individual gene the POP analysis yields an oscillation amplitude and phase. We now perform a series of analyses to determine the strengths and limitations of POP analysis as applied to gene expression data: Can POP analysis recover known periodic features of a genomic system? To address this question, we apply POP analysis to simulation data. At the system level, we check if the period of the POPs recovers the one defined in the simulated oscillation process. At the gene level, we check if the amplitudes of the POPs are highly positively correlated with the simulated amplitudes of the oscillating strengths, and likewise if the phases of the POPs match the simulated phases. What is the sensitivity of POP analysis, i.e., for genes that have been experimentally verified as being periodically expressed, does POP analysis identify them as periodically expressed? 
To address this question, we examine the results of POP analysis on genes that are experimentally considered to be periodic based on previously reported experimental investigations of gene expression across the cell cycle. What is the specificity of POP analysis, i.e., does POP analysis falsely identify genes as periodic that are not actually periodically expressed? To address this question, we examine the results of POP analysis on genes that have never been identified by either previous experiments or existing computational methods as periodic across the cell cycle. Can POP analysis identify genes that are likely to be periodically expressed but that were missed in previously reported experiments? To address this question, we examine annotations. Can POP analysis identify genes that are unlikely to be periodically expressed across the cycle, yet were previously reported as such by previous experiments? To address this question, we examine annotations. How does POP analysis compare with existing computational methods for identifying periodically expressed genes? To address this question, we evaluate the results of POP analysis relative to existing computational methods. We simulate the time series expression of each gene using a first order differential equation that is widely used in modeling gene-expression data, driven by an oscillation process meant to mimic \u03b1-factor-based synchronization. We set the angular frequency \u03c9 = 2\u03c0/30, which corresponds to a 30 minute period of the oscillation process. The simulated phase is drawn uniformly over one full cycle, and the simulated amplitude varies from gene to gene. Therefore, using Equation (14), we obtain simulated expressions of 4000 genes at 0, 7, 14, ..., 119 minutes. 
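The system-level check above, recovering a 30-minute period from series sampled every 7 minutes, can be sketched end to end. This is a hedged illustration of the POP recipe (one-step least-squares fit of the propagator, then eigenvalue angles), not the paper's exact estimator; `pop_period` is a hypothetical helper.

```python
import numpy as np

def pop_period(Y, tau):
    """Recover the oscillation period from an r x T eigengene array.
    A one-step least-squares fit gives the propagator M of the
    discretized system Y(t + tau) = M Y(t); a complex eigenvalue
    lambda = |lambda| exp(i*omega*tau) implies a period of 2*pi/omega."""
    B, *_ = np.linalg.lstsq(Y[:, :-1].T, Y[:, 1:].T, rcond=None)
    eigvals = np.linalg.eigvals(B.T)          # spectrum of M
    omega = np.abs(np.angle(eigvals)).max() / tau
    return 2 * np.pi / omega

# A 30-minute oscillation sampled at t = 0, 7, ..., 119 minutes,
# matching the sampling scheme described above.
t = np.arange(0, 120, 7.0)
Y = np.vstack([np.cos(2 * np.pi * t / 30), np.sin(2 * np.pi * t / 30)])
print(round(pop_period(Y, tau=7.0), 1))   # ~30.0
```

On this noiseless toy system the propagator is an exact rotation, so the eigenvalue angle returns the simulated period; with noisy data the same fit recovers it approximately, which is the system-level test performed in the text.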
These time points were selected to match those of a widely used budding-yeast (Saccharomyces cerevisiae) gene-expression dataset with \u03b1-factor-based synchronization, in which samples were measured at t = 0, 7, 14, \u2026, 119 minutes, covering two cell cycles of around 120 minutes with an equal time sampling interval of 7 minutes. We apply POP analysis to this widely studied budding-yeast dataset, which has a strong oscillating process with a period matching the length of the cell cycle. Thus, we investigate whether the mean POP amplitude of genes that are known to be periodically expressed across the cell cycle is higher than the mean POP amplitude of other genes. For the dataset with \u03b1-factor-based synchronization, there are 344 cell cycle genes identified by experimental methods. The mean of the POP amplitudes of these 344 \u2018cell cycle\u2019 genes (0.12) is significantly higher than the mean POP amplitude of the remaining genes. Conversely, some genes that were experimentally labeled \u2018cell cycle\u2019 receive low POP amplitudes; we examine the MIPS annotations of 21 such genes to evaluate the merits of the POP analysis for these genes. We find that MIPS does not annotate 16 out of these 21 genes (\u223c76%) as \u2018cell cycle\u2019 genes, which is consistent with the results of the POP analysis. These results indicate that experiments may mistakenly identify some genes as \u2018cell cycle\u2019 genes that can be correctly recognized as aperiodic with the assistance of POP analysis. A high POP amplitude indicates that a gene's expression is strongly periodic across the cell cycle. Thus far in our analyses, we have avoided applying arbitrary thresholds and have evaluated the POP approach based on comparison of distributions. However, in practical application of POP analysis to gene expression data, it would be valuable to apply a fixed threshold to denote some genes as \u2018periodic\u2019 and others as \u2018aperiodic\u2019. 
While such a threshold is necessarily data-set specific, we present a general approach for selecting an appropriate cutoff. For the dataset with \u03b1-factor-based synchronization, we plot the cumulative distributions of the POP amplitudes for comparison with existing computational methods. We first compare POP with existing computational methods by analyzing the overlaps of the sets of genes identified by the different methods, and likewise the overlaps with the set of genes identified in reports of experiments. Then, we analyze the potential benefits of combining POP with existing methods. Each of the computational methods identifies a substantial number of genes as periodic that are not flagged by POP nor were reported as such from experiments, while a fraction of the flagged genes are also noted as such in reports of experiments. If we consider the 454 genes flagged by Spellman's method which are not flagged by POP analysis, the percentage that are also experimentally identified as \u2018cell cycle\u2019 drops to 18%. On the other hand, if we consider the 345 genes for which both Spellman and POP analysis indicate that they are likely to be periodically expressed, the percentage that are also experimentally identified as \u2018cell cycle\u2019 increases to 40%. Similar results are obtained when other computational methods are considered. Figure S1: The POPs, N-dimensional vectors, drive the cosine and sine, respectively, of the oscillation part with angular frequency \u03c9; the oscillation part of an N-dimensional genomic system starts from the initial state (DOC). Figure S2: Scatter plot and Pearson correlation of POP amplitudes vs. simulated amplitudes; the Pearson correlation between the gene expression amplitudes defined in the simulation and the amplitudes recovered by POP analysis is 0.99 (DOC). Figure S3: Scatter plot and Pearson correlation of POP phases vs. simulated phases. 
There is a high Pearson correlation between the gene expression phases defined in the simulation and the phases recovered by POP analysis (rho\u200a=\u200a0.96). (DOC) Figure S4: \u2018Cell cycle\u2019 genes identified by experiment vs. POP vs. existing methods. Venn diagrams show overlaps of \u2018cell cycle\u2019 genes identified by previous experiments, POP analysis and each existing computational method. (DOC) Table S1: Eigenvalues and POP period of the simulated genomic system. (DOC) Table S2: Eigenvalues and POP period of the genomic system of the budding yeast with \u03b1-factor-based synchronization. (DOC) Table S3: POP amplitudes and phases of all 4598 genes. (XLS) Table S4: \u2018Cell cycle\u2019 genes identified by POP, experiment, MIPS and existing computational methods. (XLS) Table S5: Overlap in percentage of experimentally identified \u2018cell cycle\u2019 genes identified by POP and existing methods. (XLS) Table S6: POP amplitudes and phases of all 4774 genes in Dataset alpha 30, http://labs.fhcrc.org/breeden/cellcycle/. (XLS)"}
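In outline, the POP decomposition discussed above can be sketched in a few lines: fit a linear propagator to the expression time series by least squares and read the period, per-gene amplitudes, and phases off its leading complex eigenpair. This is a minimal sketch on synthetic data, not the authors' implementation; the function name and the toy three-gene example are our own.

```python
import numpy as np

def pop_analysis(X, dt):
    """Minimal POP sketch: fit x(t+dt) = A x(t) by least squares, then take
    the period, per-gene amplitudes and phases from the leading complex
    eigenpair of the propagator A.

    X  : (genes x timepoints) expression matrix
    dt : sampling interval (e.g. 7 minutes)
    """
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)                 # least-squares propagator
    lam, vecs = np.linalg.eig(A)
    osc = np.where(np.angle(lam) > 1e-9)[0]     # oscillatory modes only
    k = osc[np.argmax(np.abs(lam[osc]))]        # leading oscillatory mode
    period = 2 * np.pi * dt / np.angle(lam[k])
    mode = vecs[:, k]
    amp = np.abs(mode) / np.abs(mode).max()     # relative POP amplitudes
    return period, amp, np.angle(mode)

# Toy data mimicking the sampling above: two sinusoidal genes with relative
# amplitudes 1.0 and 0.5 plus one flat gene, t = 0, 7, ..., 119 minutes.
t = np.arange(0.0, 120.0, 7.0)
X = np.vstack([np.sin(2 * np.pi * t / 120),
               0.5 * np.sin(2 * np.pi * t / 120 + 1.0),
               np.full_like(t, 0.3)])
period, amp, phase = pop_analysis(X, 7.0)
```

The eigenvalue's complex angle gives the per-step rotation, so the recovered period is dt·2π/angle; on this noise-free toy data it comes back as ~120 minutes, with the flat gene's amplitude near zero.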
+{"text": "H1N1 influenza viruses were responsible for the 1918 pandemic that caused millions of deaths worldwide and the 2009 pandemic that caused approximately twenty thousand deaths. The cellular response to such virus infections involves extensive genetic reprogramming resulting in an antiviral state that is critical to infection control. Identifying the underlying transcriptional network driving these changes, and how this program is altered by virally-encoded immune antagonists, is a fundamental challenge in systems immunology.in vitro with seasonal H1N1 influenza A/New Caledonia/20/1999. To provide a mechanistic explanation for the timing of gene expression changes over the first 12 hours post-infection, we developed a statistically rigorous enrichment approach integrating genome-wide expression kinetics and time-dependent promoter analysis. Our approach, TIme-Dependent Activity Linker (TIDAL), generates a regulatory network that connects transcription factors associated with each temporal phase of the response into a coherent linked cascade. TIDAL infers 12 transcription factors and 32 regulatory connections that drive the antiviral response to influenza. To demonstrate the generality of this approach, TIDAL was also used to generate a network for the DC response to measles infection. The software implementation of TIDAL is freely available at http://tsb.mssm.edu/primeportal/?q=tidal_prog.Genome-wide gene expression patterns were measured in human monocyte-derived dendritic cells (DCs) infected We apply TIDAL to reconstruct the transcriptional programs activated in monocyte-derived human dendritic cells in response to influenza and measles infections. The application of this time-centric network reconstruction method in each case produces a single transcriptional cascade that recapitulates the known biology of the response with high precision and recall, in addition to identifying potentially novel antiviral factors. 
The ability to reconstruct antiviral networks with TIDAL enables comparative analysis of antiviral responses, such as the differences between pandemic and seasonal influenza infections. Pathogenic viruses, such as influenza and measles, subvert normal immune functioning through the expression of immune antagonists, such as the influenza NS1 protein. These antagonists differ between viral strains, and are crucial components of viral pathogenicity. Determining how these antagonists interact with the host immune system would be aided by knowledge of the genetic regulatory network that operates in response to infection. Recently, multiple studies have begun to define the cellular response to bacterial and viral pathogens in specific cell types; our approach focuses on uncovering dynamic transcriptional regulatory programs, and we apply it to antiviral responses. In contrast to other approaches, we consider initial up-regulation time as the main criterion to identify genes with common regulatory control logic. TIDAL is an integrative method, relying on expression data and promoter binding site information conserved across species for inference of regulatory relationships. Note that since each individual data type can be incomplete or error-prone, integrative methods provide more robust and accurate results by drawing on multiple lines of evidence and requiring consistency between several heterogeneous sources of data. While genome-wide identification of direct TF targets can be done experimentally using techniques such as ChIP-Seq, a statistically rigorous enrichment approach (see Methods) is applied here to infer the activity of transcription factors. Of the 49 TRANSFAC matrices annotated to genes that are up-regulated at some point in the time-series, our analysis identifies 15 as having a role in the response. Genes are grouped according to the time of their first detected up-regulation in the microarray. 
We validate the inferred activity profiles through a complementary computational analysis: it has been observed that the location of functional cis-regulatory binding sites relative to the transcription start site (TSS) is non-random. Inspection of the location plots shows that the shift in location during times of peak TF activity is towards the TSS, as expected (see Methods for details). Having identified the set of TFs driving the antiviral response to influenza, we next seek to explain how each of the individual TFs becomes up-regulated by connecting these factors into a coherent network. We initially consider all TF pairs such that a binding site of one factor is located in the promoter region of the other, based on our promoter analysis. We filter these potential network links in two steps. First, we define a time-window for each TF's activity based on when its mRNA is up-regulated and when any of its associated TRANSFAC matrices shows significant activity, and retain only regulator-target connections where the target is up-regulated within the regulator's inferred activity window. Since TF binding site locations are correlated with the TF activity profiles, limiting links to each regulator's activity window is well motivated. To visualize the inferred network, we order nodes vertically based on their up-regulation times; links between nodes indicate predicted regulatory relationships. Having filtered the links by TF activity windows and having limited each node to the three most likely regulators, we obtain an influenza antiviral network that contains 12 TF nodes and 32 regulatory links. Recall is similar for the two methods. However, DREM predicts NFkB and STAT activity to regulate the latter stages of the response (via the V$STAT_Q6 and V$NFKB_Q6_01 matrices). In contrast, TIDAL predicts their activity early in the response, which is consistent with known biology. 
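The two-step link filtering described above can be made concrete with a small sketch. The function name, the toy gene names, and the use of earliest activity as the tie-breaking "most likely" criterion are our own assumptions, not TIDAL's actual code: a candidate regulator->target link survives only if the target is first up-regulated inside the regulator's activity window, and each target keeps at most three regulators.

```python
# Hypothetical sketch of activity-window link filtering (not TIDAL itself).

def filter_links(candidates, up_time, window, max_regulators=3):
    """candidates : (regulator, target) pairs from the promoter scan
    up_time    : {gene: time-point of first up-regulation}
    window     : {TF: (start, end) inferred activity window}
    Keeps a link only if the target's up-regulation time falls inside the
    regulator's window; caps each target at max_regulators, here ranked by
    earliest activity as a stand-in for an enrichment-based ranking."""
    kept = {}
    for reg, tgt in candidates:
        start, end = window[reg]
        if start <= up_time[tgt] <= end:
            kept.setdefault(tgt, []).append(reg)
    return {t: sorted(regs, key=lambda r: window[r][0])[:max_regulators]
            for t, regs in kept.items()}

# Toy cascade: A (active t=1-3) -> B, B (t=2-5) -> C, C (t=4-7) -> D.
up_time = {'A': 1, 'B': 2, 'C': 4, 'D': 6}
window = {'A': (1, 3), 'B': (2, 5), 'C': (4, 7)}
candidates = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
network = filter_links(candidates, up_time, window)
# A->C and B->D are dropped: their targets fall outside the windows.
```

Links from earlier to later nodes in the resulting dictionary correspond to the forward links of the visualization described below.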
In this study, we describe the Time-Dependent Activity Linker (TIDAL), a bioinformatics method for antiviral network inference that integrates a statistically rigorous enrichment approach with genome-wide expression kinetics and time-dependent promoter analysis. We use TIDAL, in combination with new experiments, to define the regulatory networks that operate in DCs infected with seasonal H1N1 influenza virus and measles virus. DCs provide a crucial link between virus detection and adaptive immunity, and the ultimate success of antiviral responses depends on the early signaling and maturation elicited in these cells. The DC response to virus infection depends on the activation of multiple pathways, which is carried out by a complex genetic regulatory program. This program can be altered by viral immune antagonists, such as the NS1 protein of influenza. The networks produced by TIDAL consist of a set of transcription factors, their temporal activity profiles, and specific regulatory relationships. We apply TIDAL to reconstruct the transcriptional network mediating the antiviral program in human DCs in response to influenza and measles infections. TIDAL reliably captures known elements common to many antiviral responses, including the key roles of the NFkB, IRF and STAT family transcription factors. When compared with DREM, another state-of-the-art method for identifying response regulators, TIDAL has similar recall, but higher precision. In fact, all of the TFs identified by TIDAL in the influenza network have previously been connected to the immune response, and all of the TFs are also up-regulated in DCs during other antiviral responses. Visualization of the dynamic transcriptional networks produced by TIDAL presents several challenges. 
Moreover, our time-centric approach is generally applicable to understanding the immune response to other pathogens. Such reconstructions of transcriptional networks underlying the immune response across infections and cell types, coupled with the ability to compare them, will provide critical insight into the host immune response and viral antagonism. In summary, we have developed TIDAL, a new method for constructing transcriptional regulatory networks from time-series transcriptional profiling data. Application of our integrative analysis to data from the influenza and measles responses enables us to identify the underlying regulatory network structure, along with potentially novel antiviral transcription factors. Importantly, TIDAL provides specific hypotheses that can be validated experimentally. These hypotheses take the form of: transcription factor A regulates target gene B at time T through the binding site S located at position P in the promoter region. All human research protocols for this work have been reviewed and approved by the IRB of the Mount Sinai School of Medicine. Monocyte-derived DCs were obtained from healthy human blood donors following a standard protocol described elsewhere. The recent seasonal H1N1 influenza virus A/New Caledonia/20/1999 (NC) was kindly provided by Dr. Peter Palese and grown in 10- to 11-day-old embryonated chicken eggs as described previously. Cells were homogenized by using QIAshredder microcentrifuge spin-columns, and RNA was isolated from cells using the Qiagen Micro RNeasy Plus kit following the manufacturer's protocol. RNA quality was assayed by determination of the RNA integrity number using the 2100 Bioanalyzer (Agilent). Differential expression was assessed by LIMMA (q < 0.05). In analyses where a more inclusive gene universe was used, it was defined as the set of genes from all time-points that met criteria 1 and 3 (minimum observed intensity and significant change by LIMMA). This definition provided an expanded set to allow for more power in statistical enrichment tests, but ensured that all genes exhibited some changes over the response. 
RNA samples were processed and hybridized to the HumanHT-12 v4 Expression BeadChip Kit by the Mount Sinai Genomics Institute following the manufacturer's instructions, and raw expression data were output by the Illumina GenomeStudio software. Microarray data are available through the Gene Expression Omnibus (GEO) Database, accession number GSE41067. The data were log-transformed and quantile normalized, and differential expression analysis was performed using BioConductor software packages in R. Using the UCSC Genome Bioinformatics site, we downloaded the transcription start site (TSS) data for all human RefSeq genes, defined by the January 2010 refGene table. We defined G, called the foreground set, to be the set of genes first up-regulated at a particular time-point, and T to be the set of all genes in the dataset with binding sites for a given TRANSFAC matrix M. The set T is determined by choosing an appropriate sequence match cutoff for matrix M; in further filtering the genes with matches to M, we included only those with a conserved binding site in an orthologous mouse and chimp gene (as described above). The background set B, which served to catalog the expected distribution of binding sites for M and dictate whether the observations in the foreground set G are unusual, is computed as the set difference between the gene universe U and the foreground set G under consideration. Note that the background set changed slightly depending upon the time slice under analysis. Next we found how many of the genes in the foreground and the background contained binding sites for the matrix M; that defined subsets of the foreground and the background as intersections of G and B, respectively, with T. Defining g as the number of foreground genes with binding sites for M, we used the hypergeometric distribution to compute the P-value of observing at least g binding sites in the foreground set. 
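The foreground/background enrichment test just described is a standard one-sided hypergeometric test, which can be sketched directly from the set definitions. The function name and the toy counts (a 20-gene universe) are our own illustration, not the study's code:

```python
from math import comb

def enrichment_pvalue(universe, targets, foreground):
    """One-sided hypergeometric P-value that the foreground set G is
    enriched for genes carrying binding sites for a matrix M.

    universe   : U, all genes under consideration
    targets    : T, genes with (conserved) binding sites for M
    foreground : G, genes first up-regulated at this time-point
    """
    N, K, n = len(universe), len(targets & universe), len(foreground)
    g = len(foreground & targets)          # site-carrying genes in G
    # P(X >= g) for X ~ Hypergeometric(N, K, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(g, min(K, n) + 1)) / comb(N, n)

# Hypothetical counts: 20-gene universe, 8 genes with sites for M,
# 5 genes first up-regulated at this time-point, 4 of them with sites.
U = {f'g{i}' for i in range(20)}
T = {f'g{i}' for i in range(8)}
G = {'g0', 'g1', 'g2', 'g3', 'g10'}
p = enrichment_pvalue(U, T, G)
```

On these counts the P-value is about 0.058, i.e. borderline; in the study such per-matrix, per-time-point P-values are then FDR-corrected before a matrix is called active.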
We computed such P-values for every TRANSFAC matrix mapped to a gene found to be up-regulated somewhere in the time-course at each time-point, and retained those passing an FDR-corrected threshold of 0.05. The TF inference was thus based on statistical enrichment of putative TF targets. To connect the individual TFs into a regulatory network, we placed each gene represented by one of the enriched matrices at the time it is first determined to be differentially expressed. For every TF included in the network, we defined an activity window: the consecutive stretch during which the gene is up-regulated in the microarray, possibly extended to include all time-points of enrichment. Once the activity window had been identified, the transcription factor was connected explicitly to all nodes in the network with binding sites for the factor, and implicitly to all other non-TF targets, placed within its activity window. Connections from a node placed earlier in time to nodes in later time-points were referred to as forward links; the reverse is true for back links. To choose the most likely regulators, for each TF Ri we considered the time interval during which Ri is significantly enriched. Only weak binding (KD > 300nM) was observed for a DNA duplex mutant that served as negative control and altered the srbA binding motif from ATCATACGAT to ATATAACATA. Furthermore, high affinity SrbA161-267 binding was observed as well with putative binding sites that had only one mismatch compared to known binding sites in S. pombe. Kinetic binding responses on duplexes encoding hapX (-1340 to -1331) and hemA (-527 to -518) promoter regions fit with KD values of 4.6nM and 4.2nM, respectively. Additionally, we identified two sites with weak binding in hapX and hemA, respectively. Taken together, SrbA has high-affinity binding capacity to binding sites in hapX, hemA and srbA. 
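For intuition about what these KD values mean, a simple 1:1 (Langmuir) equilibrium model — our assumption here, though a standard one for interpreting kinetic binding fits — converts an affinity directly into fractional occupancy of the site at a given protein concentration:

```python
def fractional_occupancy(conc_nM, kd_nM):
    """Equilibrium fraction of DNA duplex bound at a given protein
    concentration under an assumed 1:1 Langmuir binding model:
    theta = [P] / ([P] + KD)."""
    return conc_nM / (conc_nM + kd_nM)

# With KD = 4.6 nM (the high-affinity hapX site), half the duplex is bound
# at 4.6 nM protein and ~91% at a ten-fold excess over KD, whereas a weak
# site with KD > 300 nM is barely occupied at low-nanomolar concentrations.
half_bound = fractional_occupancy(4.6, 4.6)
weak_site = fractional_occupancy(4.6, 300.0)
```

This is why the >60-fold difference in KD between the intact and mutated motifs translates into an essentially on/off contrast in occupancy at physiologically plausible concentrations.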
Together with our predicted network and the Northern blot analyses, these results motivated a scan for further putative SrbA binding sites in the A. fumigatus genome, allowing up to 2 mismatches in the variable region. Among the genes carrying such sites, overrepresented functional categories include \"Lipid, fatty acid and isoprenoid metabolism\". This adds evidence to the hypothesis that SrbA links iron concentration and fatty acid metabolism. The refined network model consists of four target genes for SrbA. In this study, we propose a modelling approach based on a set of ordinary differential equations. Even though this approach fits the measured time series data well, it has the drawback that it is inappropriate for large-scale modelling. In general, a large number of genes being part of a model leads to a large number of parameters to be identified, which may result in over-fitting of the data. Our modelling approach aims at inferring a sparse network (i.e. many parameters are zero) and makes use of resampling techniques where the data are perturbed in a random manner (see chapter \"Network prediction\", points 4 and 6). Both measures help to prevent over-fitting. Furthermore, we restrict the number of genes, thereby leading to a smaller number of parameters to be identified. The selection of the genes that are included in the model is directed by experimental findings. However, some additional genes which might be involved in iron homeostasis are not included in the model. Furthermore, there are a number of so far uncharacterised genes which might play a role. The clustering of the expression data helps to get an idea about those genes. One gene of each cluster in the proposed regulatory model could be thought of as a cluster representative. In this way, regulatory interactions inferred by our model might be transferred to other pairs of genes belonging to the respective clusters. 
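The core of such ODE-based inference can be illustrated on simulated data: discretize dx/dt = Wx and recover the interaction matrix W from finite differences by least squares. This is a deliberately stripped-down sketch with an invented 3-gene interaction matrix; the study's actual approach additionally uses prior knowledge, sparsity constraints, and resampling, which are essential on real, noisy expression data.

```python
import numpy as np

# Hypothetical sparse interaction matrix: gene 2 activates gene 1,
# gene 1 activates gene 3, and all three genes self-degrade.
W_true = np.array([[-0.5, 0.8, 0.0],
                   [0.0, -0.4, 0.0],
                   [0.9, 0.0, -0.6]])

dt, steps = 0.05, 200
x = np.empty((3, steps + 1))
x[:, 0] = [1.0, 0.5, 0.8]
for k in range(steps):                        # forward-Euler "expression" series
    x[:, k + 1] = x[:, k] + dt * (W_true @ x[:, k])

dxdt = (x[:, 1:] - x[:, :-1]) / dt            # finite-difference derivatives
# Solve dxdt = W_hat @ x row-wise by least squares (no regularization here)
W_hat = np.linalg.lstsq(x[:, :-1].T, dxdt.T, rcond=None)[0].T
```

On noise-free data the plain least-squares fit recovers W exactly, including its zero entries; the resampling and prior-knowledge scores described in the text are what keep this step from over-fitting when the measurements are few and noisy.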
With the knowledge of co-expression patterns and the regulatory influences proposed by our model, it might be possible to obtain an idea about the function of so far uncharacterised genes. The modelling strategy makes use of prior knowledge. The cross-validation procedure helps to prevent the model from adapting too much to the given knowledge. However, the prior knowledge incorporated into the model could be changed. In general, we could add more interactions when making use of knowledge based on other organisms. It remains to find out which organisms could be used for this task, i.e. what the maximal evolutionary distance of an organism used as a prior knowledge source is. This also relates to the question of how we chose the local scores for each interaction. The scoring scheme used in this study was already successfully applied. For example, mirB is activated by PacC under alkaline conditions in A. nidulans; such findings from related species could extend the prior knowledge. A recent study shows that sidL is expressed independently of SreA. The applied scoring scheme assigns the highest score to prior knowledge based on Northern blots. The rationale behind this is that Northern blots are not high-throughput experiments, and thus we believe these experiments give strong evidence that the respective regulatory interactions exist. However, the Northern blots were performed at steady state (24 hours after adding iron). On the other hand, the expression data used focused on early effects after the change to iron-replete conditions (10 to 240 minutes). This might explain some discrepancies between the Northern blot data and the proposed models. The data revealed a remarkable sequence similarity between the SrbA binding sites of S. pombe and A. fumigatus. An interesting future task will be to identify further SrbA target genes and to analyse whether the defined binding sites are conserved throughout other fungal species. All high-affinity binding sites of S. pombe and A. fumigatus show a conserved C at the third position. While the existence of the conserved AT can be explained by the fact that the human SrbA ortholog physically interacts with these nucleotides, the reason for the conservation of the C remains to be elucidated. Here, we report the first SrbA binding sites in A. fumigatus. Wet lab experiments proved that the proposed modelling approach allows the prediction of novel biologically relevant interactions. Results of the latest experiments were used to refine the predicted model. Taken together, this underlines that using prior knowledge during network inference improves the prediction quality of the reverse engineering. Together with the experimental results, we identified a new iron homeostasis regulatory network based on the amount of metabolically available iron. Furthermore, we found that SrbA physically interacts with its predicted target genes via specific DNA-binding and identified the SrbA binding site in A. fumigatus. This study demonstrates how the Systems Biology circle is carried out, i.e. how experimental work and modelling iteratively interact, in order to gain understanding of a biological system. Analysing gene expression time series data and using a modelling approach based on a set of differential equations, we were able to predict new regulatory interactions controlling iron homeostasis. A comparable model was previously built for C. albicans during an experimental infection; it was based on a similar modelling strategy, i.e. it also exploits gene expression data and uses a set of prior knowledge. In the case of C. albicans it remains unclear which of the predicted regulatory interactions is exclusively based on limited iron. In contrast, for A. fumigatus we do not know which of the (predicted) interactions play a role in an in vivo infection process. Further experiments will focus on time series expression of A. fumigatus in an infection and on expression data of C. albicans under in vitro iron limitation. 
It will be interesting to figure out whether C. albicans also regulates iron homeostasis based on the amount of metabolically available iron. This will give us the opportunity to compare the regulation of iron homeostasis in both important fungal pathogens. Together with the growing amount of available expression data for both fungi, we will be able to expand our models to other important processes, thus making A. fumigatus and C. albicans model organisms for fungal infections. In a previous study, we predicted a regulatory network concerning iron acquisition by the fungal pathogen C. albicans. HH and RG directed the study. JL carried out the analyses of the data, performed the modelling, and wrote the manuscript supported by the coauthors. HH assisted in choosing candidate genes for the model. EF and JL scanned for the SrbA binding sites. PH designed and performed the experiments for the binding site. HH, PH and AB assisted in the interpretation of the results. All authors read and approved the final manuscript. Table S1 - Imputation. The whole genome expression data (wild-type and mutant) include 20.4% missing values. Since clustering and network inference need complete observations, we imputed those missing values following a similar approach applied by Albrecht et al., based on K-nearest-neighbour imputation. SrbA binding site: experimental details and results of real-time in vitro binding analysis of SrbA. Figure S3 - Cluster validity index. Validity indices for partitions based on two up to twenty clusters are shown. The maximum denotes the best partition. Table S4 - Cluster annotation. This table shows to which cluster each gene belongs. Functional annotations and GO annotations are given. Table S5 - Overrepresented functional categories for each cluster. The file consists of 20 sheets. 
For each cluster, significantly (p < 0.01) overrepresented functional categories for FunCat levels 1-4 and Gene Ontology are presented. Table S6 - Inferred interaction network based on final prior knowledge. The table summarises results of the inferred network based on the final prior knowledge list. It gives the number of resamplings during random perturbation of the time series data and during the cross-validation of prior knowledge. Table S7 - SrbA regulon. The table lists A. fumigatus genes having the experimentally validated SrbA binding sites in their regulatory region."}
+{"text": "Changes in the nature of gene interactions during development help explain the robustness of early development and the basis for developmental evolution. Strongylocentrotus purpuratus. We find extensive variation in gene expression in this network throughout development in a natural population, some of which has a heritable genetic basis. Switch-like regulatory interactions predominate during early development, buffer expression variation, and may promote the accumulation of cryptic genetic variation affecting early stages. Regulatory interactions during later development are typically more sensitive (linear), allowing variation in expression to affect downstream target genes. Variation in skeletal morphology is associated primarily with expression variation of a few, primarily structural, genes at terminal positions within the network. These results indicate that the position and properties of gene interactions within a network can have important evolutionary consequences independent of their immediate regulatory role.Regulatory interactions buffer development against genetic and environmental perturbations, but adaptation requires phenotypes to change. We investigated the relationship between robustness and evolvability within the gene regulatory network underlying development of the larval skeleton in the sea urchin Animal development is highly stereotypic in the face of changing environmental conditions and individual genetic differences. At the same time, developmental processes can evolve rapidly, suggesting that selectable genetic variation is hidden beneath this apparent stability. To better understand the relationship between these seemingly opposed properties of robustness and evolvability, we measured how natural variation in gene expression propagates across a network of interacting genes underlying early development in sea urchins. 
We found that gene interactions are not equal across development: expression variation is well buffered during early development by the use of on/off switch-like (rather than continuously tunable) regulatory mechanisms, while during later development it has a greater impact on the expression of downstream target genes and on morphology. Using a breeding design, we were able to detect a substantial genetic component to the observed variation in gene expression. Interestingly, the degree of genetic contribution was greatest during early development and specifically at points of switch-like regulation, suggesting that the properties of developmental gene regulatory networks that underlie robustness also promote the accumulation of genetic variation that could seed adaptation. The process of development is a balancing act in an unpredictable world: it is often remarkably resilient, producing consistent phenotypes in the face of new mutations and environmental perturbations, but it is also adaptable, allowing for the evolution of novel phenotypic traits in response to environmental change. The first property is essential for the survival of individuals and the second for the persistence of populations. Yet these two critical aspects of development seem diametrically opposed, since buffering stabilizes phenotypes while adaptation permanently changes them. Key to understanding both buffering and adaptation is measuring how, and to what extent, variation in developmental gene function impacts downstream phenotypes. In this study, we examine three pertinent questions for understanding the relationship between developmental buffering and evolvability: How much variation in the expression of developmental regulatory genes exists within a natural population? What impact does this variation in gene expression have on downstream genes within a regulatory network? 
And finally, how does expression variation during development influence the morphological phenotypes that lie at the interface between organism and environment and are therefore potential targets of natural selection? To address these questions, we measured naturally occurring variation in gene expression within a well-defined developmental gene regulatory network, as well as the impact of expression variation on downstream gene expression and on morphology. The network of >100 genes that we examined spans the regulatory genes known to operate in this process, and natural populations of S. purpuratus harbor high levels of genetic diversity. Three additional features make this gene regulatory network particularly useful for studying the consequences of variation in gene expression during development. First, the network spans the major phases of development: from the unfertilized egg, through embryonic patterning and cell fate specification, to morphogenesis and terminal cellular differentiation. It thus offers an opportunity to examine how changes in the expression of regulatory genes influence phenotypes throughout development. Second, the network includes all of the regulatory genes and many of the structural genes known to be involved in the formation of the larval skeleton, a discrete and readily imaged three-dimensional structure. We measured variation in gene expression throughout the gene regulatory network at multiple developmental time points in crosses of outbred individuals of S. purpuratus collected from a natural population. We also measured the phenotypic impact of this variation, both on downstream gene expression and on the resulting morphology of the larval skeleton. 
While several previous studies have examined correlations between natural variation in gene expression and ecologically significant traits for single genes, here we consider variation across an entire developmental gene regulatory network. In order to examine the extent and consequences of variation in gene expression within the gene regulatory network, we set up a 6\u00d76 cross using outbred parents derived from the same wild population. We raised the 36 families as replicated cultures in a randomized design within a growth chamber at the Duke University Phytotron and sampled individuals from each culture at seven time points during development. Variation in developmental gene expression within a population can arise from many sources, including genetic differences and non-genetic parental influences such as egg quality, which contribute to resemblance among relatives, as well as from environmental influences and stochastic processes, which do not. Because the cultures we analyzed were derived from a controlled cross with known parents (the NCII breeding design), we were able to estimate the magnitude of genetic and non-genetic parental contributions, based on correlations in expression levels among related individuals relative to the population as a whole. In the NCII design, male and female contributions each provide a direct estimate of the additive (heritable) genetic contribution to gene expression variation. We observed pervasive parent-of-origin effects on gene expression throughout development. The expression levels of most genes in the network (72/74) showed significant paternal and/or maternal effects at one or more of the time points sampled. 
For most of these genes, we could ascribe a direct contribution from genetic variation at multiple time points. For example, expression of Nkx2.2, which encodes a transcription factor, was delayed in offspring of one female relative to other families despite reaching typical levels later (p\u200a=\u200a4\u00d710\u22125). Correlations among relatives shifted from lower values at the first time point to higher correlations at later time points, consistent with predominately zygotic transcription after the first time point. Zygotic transcription occurs at very low levels during the first few cell cycles after fertilization. Furthermore, the magnitude of parental contributions differed across time points (p\u200a=\u200a2.9\u00d710\u22125, \u03c72\u200a=\u200a61.21, df\u200a=\u200a6), suggesting that the zygotic component of genetic influences on variation in gene expression changes throughout development and that genetic contributions to gene expression are at least as great during early development as they are during morphogenesis. Importantly, we obtain qualitatively similar results when we examine only statistically significant paternal effects or when we look at mean parental contributions for genes only at the times in which they are known from prior research (see above) to be involved in direct regulatory interactions (protein:DNA and protein:protein). The magnitude of parental influences on variation in gene expression clearly changed during development. Taken together, these results suggest that this wild population harbors substantial amounts of genetic variation that influence gene expression during even the earliest stages of embryonic and larval development. This finding is consistent with the high levels of genetic variation in known cis-regulatory sequences that previous studies have documented in wild populations of S. purpuratus, including SNPs within experimentally validated transcription factor binding sites regulating the expression of FoxB, Endo16, and SM50. 
We restricted these analyses to time points when each interaction is known to occur. The information in this database has been painstakingly compiled, and represents arguably the most complete picture of a developmental gene regulatory network to date. In order to understand the impact that variation in the expression of regulatory genes has on downstream targets, we first examined correlation coefficients (r2) between pairs of genes for which there is experimental evidence of a direct regulatory interaction. However, the strength of these correlations changed dramatically during development. During the first two time points, r2 values were no greater, on average, for interacting genes (whether via protein:DNA or protein:protein interactions) than for non-interacting pairs. Importantly, correlations among pairs of genes expressed in the same tissue were often negative and were not, on average, greater than those between random pairs of genes. Changes in correlations over development are thus unlikely to stem from differences in tissue composition or developmental rates among broods. As expected, correlations over all stages were, on average, significantly stronger between genes that are known to interact than between genes with no known regulatory interactions. Interestingly, the relative proportion of sensitive and insensitive edges changed over time: the proportion of sensitive edges increased substantially from early to later stages of development when considering zygotic transcription only. The first time point marks an exception; however, at this time a large proportion of the transcripts present are still maternally derived. In order to account for changes over developmental time in average paternal effects and in the proportion of insensitive interactions, we repeated the analysis incorporating the effects of developmental time point in the model, and the result was still significant (p\u200a=\u200a0.048). 
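The distinction between sensitive and insensitive edges can be made concrete with a toy classifier: compute r2 between regulator and target expression across families and threshold it. The 0.5 cutoff and the example values are illustrative only, not the study's calibrated criterion.

```python
import numpy as np

def edge_sensitivity(reg_expr, tgt_expr, r2_cutoff=0.5):
    """Classify a regulator->target edge from expression across families:
    high r^2 means upstream variation propagates downstream ('sensitive',
    linear); low r^2 means it is buffered ('insensitive', switch-like).
    The cutoff is an illustrative assumption."""
    r = np.corrcoef(reg_expr, tgt_expr)[0, 1]
    return ("sensitive" if r * r >= r2_cutoff else "insensitive"), r * r

reg = np.arange(1.0, 9.0)                       # regulator levels, 8 families
linear_tgt = 3.0 * reg + 1.0                    # target tracks the regulator
plateau_tgt = np.array([5.1, 4.9, 5.0, 5.2,     # target flat: regulator range
                        4.8, 5.0, 5.1, 4.9])    # sits on a switch's plateau
lin_label, lin_r2 = edge_sensitivity(reg, linear_tgt)
sat_label, sat_r2 = edge_sensitivity(reg, plateau_tgt)
```

The buffered case is not a step across the observed range (a step correlates strongly with a monotone regulator) but a regulator whose natural variation sits entirely on the saturated plateau of the switch, so the target barely responds.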
Thus, insensitive regulatory interactions may contribute to buffering and may influence the distribution of genetic variation across the network. Insensitive regulatory interactions may allow more genetic variation to accumulate within the population by buffering downstream phenotypes from the consequences of mutations impacting the expression of upstream genes. To test this possibility, we compared the size of paternal effects, the most conservative estimator of genetic influences, between genes downstream of the two types of interactions. While gene expression is an important intermediate phenotype, the ultimate products of developmental gene regulatory networks are the morphological and physiological traits upon which natural selection directly acts. To better understand how natural variation in gene expression within the network influences ecologically relevant traits, we next examined associations between variation in gene expression throughout the network and variation in the morphology of the larval skeleton, because the size and shape of the larval skeleton is closely associated with feeding rates and survivorship. After measuring the skeletons of larvae from each culture at time point 7 using established morphological landmarks, we tested for associations between skeletal measures and the expression of each gene, and identified eight associated genes. Six of these reside within the skeletogenic subnetwork: three (SM30-E, Msp130, SM50) encode abundant protein components of the biomineral matrix of the skeleton; one (C-lectin) encodes one of the most abundant proteins in the cells that secrete the skeletal matrix; and two (FoxB and Hex) encode transcription factors that are direct activators of the four structural genes just mentioned. The remaining two genes, Dkk and Su(H), are regulators of Wnt and Notch signaling, respectively. Because the majority of associated genes (five of eight) are expressed exclusively within skeletogenic cells, correlations between their expression and skeletal morphology could be explained by differences in the number of skeletogenic cells among families. For two reasons, this is unlikely to be the case.
First, as mentioned above, correlations in the expression of skeletogenic cell genes among families are no greater than background, and are often negative. Second, transplant experiments that artificially increase skeletogenic cell number by more than 2-fold, which is far outside the normal range of variation, have no measurable impact on the size or shape of the larval skeleton. These results indicate that: (1) natural variation in the expression level of several genes within the network has an impact on an ecologically important structure in the larva; (2) the genes with the largest impact are located at the termini of this gene regulatory network; and (3) these genes primarily encode cell type-specific structural proteins and their immediate regulators. Analyses focused on single gene associations (previous section) may overlook cases in which an upstream regulator affects a morphological trait through its influence on other genes. To investigate this possibility, we carried out a two block partial-least squares (2B-PLS) analysis (p = 0.002; permutation). A large weight in this association (49%) was attributable to the first pair of factors, and 84% of the weighting in the skeletal factor of this first pair was associated with the body rod length and thus with the overall length of larvae. Among the most heavily weighted components of the first gene expression axis were regulatory genes at intermediate time points, including Pmar (time points 2 and 3), which encodes a transcription factor critical in skeletogenic cell fate specification, as well as SM50 and SM27, both of which encode components of the skeletal matrix with primary functions later in development. A further seven of these top 10% expression measures corresponded to later stages of development.
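The kind of two block partial-least squares decomposition described above can be sketched numerically. In this sketch the matrices, their sizes, and all variable names are hypothetical stand-ins for the study's gene expression and skeletal measurement blocks, not the actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 36 cultures x (gene-time point) expression block,
# and 36 cultures x 8 skeletal measures.
expr = rng.normal(size=(36, 50))
skel = rng.normal(size=(36, 8))

# Center each block; 2B-PLS operates on the cross-block covariance.
X = expr - expr.mean(axis=0)
Y = skel - skel.mean(axis=0)

# SVD of the cross-covariance matrix yields paired factors: columns of U
# weight the expression block, columns of Vt.T weight the skeletal block.
C = X.T @ Y / (X.shape[0] - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared singular values give the relative weight of each factor pair,
# e.g. the share of covariation carried by the first pair.
weights = s**2 / np.sum(s**2)
```

With real data, the loadings in `U[:, 0]` would identify the gene-time point measures most strongly tied to the dominant axis of skeletal covariation, analogous to the first factor pair discussed in the text.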
Among these top 10% expression measures, only two genes expressed later in development were included: SM30-E (time points 5 and 6), a second gene that was also associated with skeletal size in our PCA-based analysis (previous paragraph), and FoxO (time point 6), which encodes a transcription factor that regulates the epithelial-to-mesenchymal transition that skeletogenic cells undergo prior to commencing biomineralization. Next, we examined the influence of gene expression on multidimensional variation in skeletal morphology by calculating the total contribution of each expression measure as the weighted sum of its loadings on each of the six 2B-PLS factors. The 11 gene-time point combinations that comprised the upper tail of the distribution (top 5% of scores) were again heavily weighted towards the top and bottom of the network. Thus, the 2B-PLS analyses indicate that the strongest gene expression-morphology associations are concentrated bimodally, in very early development and in terminal differentiation. Since skeletal variation and many of the genes uncovered by our correlation analyses are both influenced by maternal effects, the correlations we observed between them could be due to covariation with a common maternal influence. To test this possibility, we sought evidence of a statistically significant correlation between gene expression and a principal component of skeletal variation by including a maternal parental term as an additional factor in our linear models.
For any gene-time point expression measures that remain significant predictors of skeletal variation even after accounting for maternal effects, we can reject the hypothesis that maternal effects are the sole factor influencing correlations between gene expression and skeletal variation. For four genes (FoxO, SM30-E, Msp130, and C-lectin) we found significant support (p<0.01) for non-maternal co-variances at time points 4 and/or 5. The magnitude and timing of genetic influences on gene expression showed striking differences across development. Given the substantial genetic variation segregating in this population: (1) much of it must be buffered so as to avoid adversely affecting later developmental processes, (2) it may influence quantitative variation in ecologically significant organismal traits such as morphology, and (3) it may form the basis for future adaptations. Regulatory interactions in S. purpuratus show either switch-like (insensitive) or quantitative (sensitive) behavior, and genetic variation appears smaller in genes downstream of switch-like regulatory interactions than in those upstream of quantitative (sensitive) interactions. Our results further suggest that genetic variation in the natural population is not distributed randomly across the gene network, but is instead a by-product of the change in developmental mechanisms from cell fate specification in the early embryo to morphogenesis and growth during later stages. We hypothesize that by masking variation during early development, switch-like regulatory interactions may allow cryptic genetic variation to accumulate over evolutionary time at nodes that operate early within the network. Cryptic genetic variation is evolutionarily significant, because mutation or stress can unmask it, sometimes with dramatic phenotypic consequences. The larval skeleton of sea urchins plays critical roles in feeding, defense, and orientation. Correlations between the first set of genes and skeletal morphology are unlikely to represent a causal relationship that is mediated through transcription of the network genes we examined.
Support for this inference comes from the lack of correlation between the expression of genes in the middle portion of the network and skeletal morphology. In contrast, the second set of genes are likely candidates for directly contributing to variation in the larval skeleton. All are expressed during formation of the skeleton, most are expressed exclusively within skeletogenic cells, and most encode protein components of the biomineral matrix itself. That many of these genes change expression in response to targeted chemical manipulations that alter the size and structure of the larval skeleton further supports this interpretation. Terminal genes may influence morphological variation simply because fewer molecular events separate them from the trait of interest than is the case for genes that operate farther upstream, providing less opportunity for buffering. Other studies have also found that the expression of terminal genes can influence an organismal trait, as with abdominal pigmentation in Drosophila. In light of the number of regulatory interactions that appear to be buffered in this network, associations between genes encoding transcription factors such as FoxO and skeletal morphology become all the more surprising. Unlike structural genes such as SM30-E, the impacts of changes in transcription factor expression are necessarily indirect, and it is worth considering why some transcription factors, but not others, quantitatively influence morphology. Differences in molecular mechanisms of action may provide important clues.
For instance, forkhead proteins, including FoxO, often act as pioneer transcription factors, binding directly to condensed chromatin and setting the stage for lineage-specific transcription by recruiting chromatin remodelers and additional transcription factors. Several elegant studies of adaptation in wild populations have traced ecologically important phenotypes to changes in the transcriptional regulation of a single gene during development. C-lectin, FoxO, and SM30-E are the strongest candidates among the genes we examined for contributing to adaptive changes in the size and shape of the larval skeleton in S. purpuratus. Additional candidates are FoxB, Msp130, and SM50, though with somewhat weaker support. Most of the remaining genes we assayed show no clear correlation with skeletal morphology and seem less likely to contribute. Five of the six candidates are terminal differentiation genes and the other is a transcription factor expressed during differentiation, suggesting that network position is a significant factor in the genetics of adaptation for the larval skeleton. For a gene to contribute to adaptation, it must harbor variants that influence its function, and this variation must be associated with an ecologically significant organismal trait. Based on these criteria, S. purpuratus should be able to evolve adaptively. First, this species has an enormous effective population size, non-assortative mating, and extensive gene flow. Second, wild populations of S. purpuratus harbor substantial genetic variation (0.5%-3%) in noncoding regions of the genome. In addition, several independent lines of evidence suggest that the larval skeleton of S. purpuratus is ecologically important and subject to selection. Adaptation requires a fourth component that is more difficult to assess, namely the ecological circumstances that favor phenotypic change.
As mentioned earlier, the skeleton plays several important roles in larval feeding, defense, buoyancy, and swimming, making it a likely target of selection. Our results reinforce the idea that the position of a gene within a regulatory network has important evolutionary consequences. The adult sea urchins used for the cross were collected during a single SCUBA dive from a population in the Santa Barbara Channel, Santa Barbara, California (US). Individuals were shipped overnight to Durham, North Carolina and held in aquaria containing artificial sea water at 12°C for <48 h prior to spawning. Gametes were obtained from similarly sized individuals (a proxy for similar age) and fertilization was carried out following standard procedures. The 36 pools of zygotes generated by our cross were reared in replicate, and all 72 cultures were sampled at seven developmental time points: 10, 18, 24, 28, 38, 45, and 90 h post-fertilization. These time points span very early embryogenesis through a free-swimming larva capable of feeding. All 72 RNA extractions were carried out using the Qiagen RNeasy 96 kit (Qiagen), quantified on a NanoDrop (Thermo Scientific), and adjusted to between 10 and 100 ng/µl with water. RNA integrity was checked in ten samples using an Agilent 2100 Bioanalyzer. None of the samples showed evidence of RNA degradation. Genomic DNA extractions were carried out using the Qiagen DNeasy mini kit (Qiagen), DNA quantity measured on a NanoDrop, and adjusted to between 20 and 100 ng/µl with buffer AE. We used Illumina's DASL platform to measure gene expression, with probes designed from annotations in SpBase (www.spbase.org). Where possible, we validated sequences from the sea urchin genome sequence against targeted sequencing efforts available in GenBank. We chose three to six probes with Illumina final scores >0.8 (App version 6.4.1.0.0.0:2.0.0) for each gene. Illumina recommends using three probes per gene to improve measurement precision.
We included more probes when possible so that we could identify poorly performing probes on the basis of the correlations of all probes targeting the same transcript. We worked with Illumina to design a custom DASL assay containing 384 probes that targeted exons of 77 genes, based on annotations from SpBase. The full set of probe sequences and the genes they target are available upon request. DASL assays were carried out on the Illumina BeadStation by the Duke Genotyping Core Facility at the Duke Institute for Genome Sciences & Policy. RNA and gDNA samples, used for quality control, were processed and run separately. Raw bead-level data were recorded directly instead of passing through the Illumina BeadStudio software. At each stage of development, gene expression measures were normalized to the expression of RBM8A following correction for background fluorescence. To facilitate comparisons among genes, we normalized trait variances by the square of the mean expression of each gene over all cultures.
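The variance normalization just described can be sketched as follows; the expression values below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical expression of one gene across 72 cultures (arbitrary units).
expression = rng.lognormal(mean=3.0, sigma=0.3, size=72)

# Scale the variance by the squared mean so that genes expressed at very
# different absolute levels can be compared on a common footing.
normalized_variance = np.var(expression) / np.mean(expression) ** 2

# The square root of this quantity is the coefficient of variation (CV),
# interpretable as percent change relative to the mean.
cv = np.sqrt(normalized_variance)
```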
The square root of this quantity is thus the coefficient of variation, an easily interpretable quantity in terms of percent change relative to the mean, and is recommended for comparing variances among traits. In our analyses of this gene regulatory network, we benefitted from extensive prior work by other labs, much of which has been compiled into a Biotapestry database (http://sugp.caltech.edu/endomes/). One node in this network is “FvMo1,2,3,” which represents three members of the flavin-containing monooxygenase gene family that are co-expressed in developing pigment cells and are strongly suspected to be co-regulated. We used the SpFmo2 gene as a proxy for the activity of this cluster, in part because it had been used as a proxy in recent analyses of the network (downstream of Gcm and GataE). In order to design probes for the DASL assay, we needed to assign each gene represented in the network to an annotated gene in the sea urchin genome project (SpBase.org). We measured the size and shape of the larval skeleton at time point 7 (90 h post-fertilization) by marking eight established morphological landmarks in three dimensions. We analyzed correlations between interacting genes in two ways. With the first method, we asked whether r² values between directly interacting genes were, on average, stronger than those between active genes with no known regulatory interactions. To address this question, we compared the average r² values of all interacting genes at each time point to the average of all non-interacting genes. To test for statistical significance, we compared the observed results to 10,000 permutations of the data by randomizing the edges but keeping the overall network topology intact. With the second method, we asked a slightly different question: How does the qualitative nature of regulatory interactions change over development?
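The first of these two comparisons can be sketched as follows; the r² matrix, gene count, and edge list are synthetic stand-ins for the measured correlations and the curated network edges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: r2[i, j] holds the squared correlation between genes
# i and j across cultures, and `edges` lists known regulatory interactions.
n_genes = 20
r2 = rng.uniform(size=(n_genes, n_genes))
r2 = (r2 + r2.T) / 2
edges = [(0, 1), (0, 2), (3, 4), (5, 6), (7, 8)]

def mean_edge_r2(edge_list):
    return np.mean([r2[i, j] for i, j in edge_list])

observed = mean_edge_r2(edges)

# Null distribution: relabel the nodes at random, which rewires the edges
# onto random gene pairs while keeping the network topology intact.
n_perm = 10_000
null = np.empty(n_perm)
for k in range(n_perm):
    perm = rng.permutation(n_genes)
    null[k] = mean_edge_r2([(perm[i], perm[j]) for i, j in edges])

# One-sided p-value: how often random rewirings match the observed mean r2.
p_value = np.mean(null >= observed)
```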
We classified each edge (regulatory interaction) in our curated network representation into one of two types: (1) switch-like or “insensitive” interactions, in which a downstream gene is insensitive to quantitative variation in the upstream gene, and (2) “sensitive” interactions, in which there is a statistically significant (p<0.01) relationship between variation in an upstream gene and its downstream targets as assessed by standard linear regression. Both sets of analyses were conducted with and without repressive regulatory interactions, with no impact on the statistical significance of the results. Correlations between the expression levels of interacting genes thus vary both quantitatively and qualitatively. Correlations among genes expressed in the same tissue were not, on average, greater than background (p>0.2, 1,000 permutations), a result that holds when the analysis is restricted to genes expressed in one, and only one, tissue per time point. For thoroughness, we also conducted this analysis using correlations among breeding values for the nine non-overlapping domains/tissues of expression at each time point in the Biotapestry database (covering time points 1-6), from which we extracted information about network topology. Breeding values were estimated for S. purpuratus individuals of approximately the same age, as described more fully in the supplemental methods. A requirement for the operation of natural selection is the presence of additive genetic variance, that is, genetic variation that has a significant average effect on a phenotype across a range of environments and genetic backgrounds. One metric of additive genetic variance is four times the paternal or maternal covariance among half-sibs, namely individuals that share one parent but not the other. In this experiment, these parental contributions were estimated using a North Carolina II breeding design with six outbred males and six outbred females. Parental effects in the NCII design are typically estimated using ANOVA methods.
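The sensitive/insensitive edge classification described above can be sketched as follows. The data are synthetic, and a permutation p-value on the correlation stands in here for the parametric linear-regression test used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical expression of an upstream regulator and two downstream
# targets across the 72 cultures at one time point.
upstream = rng.normal(size=72)
target_responsive = 2.0 * upstream + rng.normal(scale=0.5, size=72)
target_flat = rng.normal(size=72)

def classify_edge(x, y, alpha=0.01, n_perm=2000):
    """Label an edge 'sensitive' when variation in the upstream gene
    significantly predicts the downstream gene (permutation p-value)."""
    obs = abs(np.corrcoef(x, y)[0, 1])
    null = np.array([abs(np.corrcoef(rng.permutation(x), y)[0, 1])
                     for _ in range(n_perm)])
    p = (1 + np.sum(null >= obs)) / (1 + n_perm)
    return "sensitive" if p < alpha else "insensitive"

label = classify_edge(upstream, target_responsive)
# A target uncoupled from its regulator will typically come out insensitive.
label_flat = classify_edge(upstream, target_flat)
```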
However, ANOVA methods are not well suited for estimating error terms or significance in the face of missing data. When analyzing gene expression data, we therefore converted the standard mixed-effect linear model underlying the NCII design into a Bayesian hierarchical mixed-effect model by adding priors on the genetic and residual variances and fitting the model with a Gibbs sampler, implemented in the MCMCglmm package in R. With an NCII design, it is theoretically possible to estimate the genetic covariances between genes. Owing to the relatively small size of our breeding design, however, we cannot accurately estimate genetic covariances between individual pairs of genes. We can, however, compare the full set of genetic covariances (the G-matrix) between time points. One measure of genetic constraint in this case is the variance of the eigenvalues of the G-matrix at each time point: high variance indicates that the G-matrix is constrained primarily in one or a few directions, while low variance is indicative of a relative lack of constraint. Via permutation tests, we can see that the G-matrices at each time point are more structured than random. Data are available from the Dryad Digital Repository. Figure S1: (A) Mean levels of additive genetic variation [2×] at each developmental time point. Time points 1 and 2 show a slightly increased level of additive genetic variation relative to later time points that is highly significant (p = 3.48×10⁻⁸). The difference remains significant under a number of measures of additive genetic variation. (EPS) Figure S2: Edge correlations by tissue by time. Plotted are histograms of the observed pairwise correlations between all genes expressed in each of the nine tissues as annotated in the Biotapestry database.
Each page represents a different time point (1-6) with the correlations over all annotated genes plotted in the upper right. The blue bar notes 0 and the green bar the mean pairwise correlation in that tissue. If fewer than two genes were expressed in a tissue at a time point, the histogram is blank with a single green bar at zero. Following conventions in the Biotapestry database, abbreviations for regions are: Abo, aboral ectoderm; E, endoderm; EC, ectoderm; EM, endomesoderm; M, mesoderm; MAT, maternal; OE, oral ectoderm; P, primary mesenchyme/skeletogenic cell lineage; VE, vegetal. Numbers following these abbreviations refer to time points 1-6 of the present study. (PDF) Figure S3: Breeding value correlations by tissue by time. Plotted are histograms of the observed pairwise correlations between breeding values for genes expressed in each of the nine tissues as annotated in the Biotapestry database. Each page represents a different time point (1-6) with the correlations over all annotated genes plotted in the upper right. The blue bar notes 0 and the green bar the average pairwise correlation in that tissue. Following conventions in the Biotapestry database, abbreviations for regions are: Abo, aboral ectoderm; E, endoderm; EC, ectoderm; EM, endomesoderm; M, mesoderm; MAT, maternal; OE, oral ectoderm; P, primary mesenchyme/skeletogenic cell lineage; VE, vegetal. Numbers following these abbreviations refer to time points 1-6 of the present study. (PDF) Figure S4: Morphological landmarks. Pluteus larva (~90 h post-fertilization) with the larval skeleton visible. Red dots represent standard morphological landmarks. (EPS) Figure S5: Examples of bead masks. These were used to control for spatial artifacts in the raw bead data. (EPS) Figure S6: Distribution of number of beads used.
Each expression measurement in each sample is generated from the average expression estimated by a number of beads attached to the same probe. (EPS) Figure S7: Example MA plots. These show the dye-specific biases associated with the two dyes used to measure gene expression. (EPS) Figure S8: Corrections to the distribution of the intensities of the different background-control beads. (A) The distribution of background intensities by control bead before correction. (B) The distribution of background intensities by control bead after correction. (EPS) Figure S9: The distribution of “expression” levels for DASL measurements applied to genomic DNA. See the supplemental methods for details on how certain probes were removed on the basis of the divergence of their “expression.” (EPS) Table S1: Single gene estimates of parental effects. For each gene at each time point, summary statistics are provided for the output of our generalized linear model. This includes the mean, mode, median, and 90% credible intervals for male, female, interaction, and residual effects, as well as DIC scores for each effect. The columns 'male.sig', 'female.sig', and 'male.female.sig' represent, respectively, whether or not the male, female, or interaction effects were significantly greater than zero based on permutations when using a REML-based approach (ASReml) for estimating parental effects. The columns 'mean' and 'var' list the unscaled mean and variance for each gene at each time point. (TXT) Table S2: Summary of a linear model describing the relationship between variance and time point. The intercept is forced to (0,0). As a result, the estimates are the mean of the total variance at each time point.
Importantly, there is no relationship between variance levels and the fraction of sensitive edges. (DOC) Table S3: The relative weightings of each skeletal measure in the first three principal components of skeletal variation. These three axes explain 55.1%, 20.4%, and 13.4% of the total between-culture variation in skeletal morphology. (DOC) Table S4: Male, female, and interaction contributions to between-family variation in the principal components of skeletal variation. ** indicates p<0.01 using a standard likelihood ratio test. (DOC) Table S5: Single gene correlations with morphology. This table provides the full set of correlations between each gene at each time point and the first three principal components of larval skeletal variation, as measured at time point 7. (TXT) Table S6: The weights of each skeletal measure's contribution to the six vectors summarizing skeletal variation produced by the two block partial least-squares analysis. (DOC) Table S7: Each of the six pairs of vectors found by the two block partial least-squares analysis contributes a different amount to the total correlation between gene expression and skeletal variation. The relative weightings of the six pairs of vectors are described by the eigenvalues given in this table. (DOC) Table S8: The top 10% of gene expression contributions to the first 2B-PLS axis. The column “Gene_Cluster_Time” gives the gene name followed by the cluster number. (DOC) Table S9: By combining the gene expression contributions to each of the six axes, weighted by the eigenvalue corresponding to each axis, we get a measure of the overall contribution of each gene-time expression measure to the overall relationship between gene expression and skeletal variation.
Listed are the top 5% of the total weighted contributions of expression measurements to the overall correlation between gene expression and skeletal variation. (DOC) Table S10: Edge activity. This table describes each edge extracted from the Biotapestry database, the times at which the interaction occurs, the region in which it takes place, and whether the interaction is activating or repressive. The times correspond to the time points of the present study. (TXT) Text S1: Supplemental methods. (PDF)"}
+{"text": "Surprisal analysis is a thermodynamic-like, molecular-level approach that identifies the biological constraints that prevent the entropy from reaching its maximum. To examine the significance of altered gene expression levels in tumorigenesis, we apply surprisal analysis to the WI-38 model through its precancerous states. The constraints identified by the analysis are transcription patterns underlying the process of transformation. Each pattern highlights the role of a group of genes that act coherently to define a transformed phenotype. We identify a major transcription pattern that represents a contraction of signaling networks accompanied by induction of cellular proliferation and protein metabolism, which is essential for full transformation. In addition, a more minor, 'tumor signature' transcription pattern completes the transformation process. The variation with time of the importance of each transcription pattern is determined. Midway through the transformation, at the stage when cells switch from a slow to a fast growth rate, the major transcription pattern undergoes a total inversion of its weight, while the more minor pattern does not contribute before that stage. A similar network reorganization occurs in two very different cellular transformation models: the WI-38 and the cervical cancer HF1 models. Our results suggest that despite differences in the lists of transcripts expressed in different cancer models, the rationale of the network reorganization remains essentially the same. Deciphering regulatory events that drive malignant transformation represents a major challenge for systems biology. Here, we analyzed the genome-wide transcription profiling of an in vitro cellular system in which cells were induced to transform to a cancerous phenotype through intermediate states. Cells evolving towards a malignant state exhibit changes in gene expression that do away with pathways that interfere with proliferation.
In this study we use a physically motivated method of gene expression analysis based on the proposition that the process of gene expression is subject to the same quantitative laws as inanimate nonequilibrium systems in physics and chemistry. This allows us to apply a thermodynamic-like approach in which entropy is a physical quantity and not only a statistical measure of dispersion. Our purpose is similar to that of earlier studies of groupings of genes. The information-theoretic analysis that we use is called surprisal analysis. The biggest extent of deviation from the maximal entropy defines the major transcription pattern that occurs during the process of transformation. We also define minor transcription patterns that participate in the establishment of cancer. The major pattern is important throughout, while more minor patterns typically contribute significantly only early or only later on. We will specifically emphasize the stages during the cellular transformation when the role of an expression pattern undergoes an 'inversion' in its significance. By 'inversion' we refer to a time course where genes that were highly expressed at the stage before undergo a change to being under-expressed in the stage after, and vice versa. The technical mathematical details are spelled out in the Additional file. The time points are labeled T = 1,2,...,12. It is important to note that different trajectories of the transformation process go through different time points. For example, we will compare the three trajectories 1-5-7-8-9, 1-5-7-8-11 and 1-5-7-8-10-12, which share a common process up to and including point 8. These are all continuous processes where each time point follows the preceding one, and we will refer to such a sequence of stages as a trajectory.
An opposite example is the trajectory 1-5-6-7, which cannot be considered continuous, since time point 7 does not experimentally follow point 6. We analyze the changes in the gene expression levels during the process of carcinogenesis in the thoroughly studied cellular model WI-38, developed by one of us. The novelty and power of our approach lies in our ability to identify the major and minor gene expression patterns at each stage (= time point) during the transformation. Moreover, this analysis identifies the necessary and sufficient transcription patterns that define the cellular transformation. Additionally, our analysis identifies new networks that participate and contribute significantly to the establishment of the different phenotypes during the transformation. The patterns identified by the present study are further examined by comparison to the results of the original analysis of the WI-38 system. The model developed in the Rotter lab uses fibroblasts, while in an earlier study we used the cervical cancer HF1 model. The presentation of the results follows two lines. One is the discussion of the information-theoretic tool, which utilizes gene expression levels to identify transcription patterns and to determine their contribution to the cancer transformation process at each stage. The complementary development is the biological interpretation of the calculated patterns and their weights. The indices α, α = 1,2,..., label the different independent patterns. For each pattern we determine its importance: λ_α(T) is the value (= the importance) of the contribution of pattern α at time point T. We shall show that at any stage there are very few important patterns. The validation of this statement, as well as the determination of the transcription patterns, is quantitative. The information-theoretic, thermodynamic-like approach derives the logarithm of the expression level of each gene.
For gene i at time point T we obtain equation (1) for the expression level:

ln X_i(T) = -λ_0 G_i0 - Σ_α λ_α(T) G_iα   (1)

where G_iα is the weight of gene i in pattern α. This section summarizes the essence of the information-theory-based method used for the analysis of mRNA arrays, as described in detail in the Additional file. λ_α(T) is the extent of reduction of the entropy due to the particular gene transcription pattern α. Due to the presence of the constraints, represented in equation (1) by the action of expressed genes, the system does not reach a steady state. The first term on the right side of the equation is the time-invariant part of the gene expression level for the particular transformation process. Typically this term makes a non-negligible contribution. According to the theory, this term is the gene expression level at maximal entropy. The time-varying transcription patterns are represented by the terms in the sum. It is these terms that reduce the magnitude of the entropy. We have an accurate representation of the transformation process when the measured left hand side of equation (1) is close in value to the theoretical representation on the right hand side. By making a least-squares match between the two sides of equation (1) we obtain the numbers G_iα and λ_α(T), with the necessary mathematical details provided in the Additional file. G_iα is determined by our analysis for each value of α (see Additional file); the G_iα are not a function of time. Only the weight λ_α(T) of a transcription pattern varies with time. This is the background necessary for the analysis to be implemented below. We next proceed to make some additional points about the theory. By the end of the thermodynamic-like analysis we associate the deviations from the steady state with a set of transcription patterns.
Note that in our approach each pattern is permanent, not varying in time; only the list of coefficients changes. A technical point is that the theory expresses the weight of a pattern, that is its importance, as a product of two factors: λ_α(T) = ω_α P_αT (see the Additional file), where ω_α is the time independent weight of transcription pattern α while P_αT is the fractional weight of the contribution of pattern α at time T. This is not just a technical matter, because it shows that a transcription pattern can have a lower absolute weight ω_α and yet its time-dependent weight can change significantly at some stage of the transformation. For notational reasons it is convenient to introduce a 'zeroth' pattern by writing the steady state term with a weight λ_0; unlike the other λ's, by its definition λ_0 does not depend on time. In our computation we allow λ_0 to vary and use its theoretical constancy as a check and a numerical validation of the results. The time invariant part is computed from the data of the experiments of Rotter et al. [20,21]. The weights λ_α(T) and the transcription patterns G_iα are determined from the measured values of the expression levels X_i(T) of the different genes, where i is the index of a gene, at the different times T. It follows from the technical discussion in the Additional file that there is an upper value for the number of different transcription patterns that can be identified from the data. The result (1) was first derived in the context of selectivity of energy requirements and specificity of energy disposal of chemical reactions [28]. Using the zeroth pattern together with all the patterns α = 1, 2, ..., A-1 is an exact representation of the data, so at any time point the A λ's, λ_0, λ_1, ..., λ_{A-1}, fully suffice to recover the data, noise and all. 
The surprising result is that, as we shall see, in practice one major transcription pattern often accounts for the measured expression levels at the different times along a particular trajectory. Say that there are A time points in that trajectory. When we use all A transcription patterns, the information theoretic expression (1) reproduces the observed expression levels exactly. A multiplier λ_α(T) is associated with each constraint α at each time point T. The value of the multiplier is determined by the value of the constraint at that time T. We here determine the value by fitting equation (1) to the data, and we interpret λ_α(T), the value of the multiplier α at time T, as the contribution of transcription pattern α at that time. We make the least squares fit of the experimentally measured side to the theory derived side of equation (1) using the matrix technique of singular value decomposition (SVD). This provides a set of conjugate eigenvectors that define both the weights G_iα of gene i in pattern α and the weight λ_α(T) of pattern α at time T. The distinct eigenvectors are orthogonal, and this ensures the independence of the patterns. The functional form (1) is derived by imposing constraints that prevent the entropy of the distribution of gene expressions from being fully maximal; the details are provided in the Additional file. The interpretation of λ_α(T) is directly seen when we compute the change in gene expression between two time points T and T':

ln X_i(T') - ln X_i(T) = -Σ_{α≥1} [λ_α(T') - λ_α(T)] G_iα = -Σ_{α≥1} ω_α (P_αT' - P_αT) G_iα.   (2)

A change can occur because the fractional weight P_αT changes significantly, but it helps when the absolute weight ω_α is large. Also worth noting is that the changes in the fractional weights and in the absolute weight appear in the exponents of the levels of gene expressions, particularly when the fractional weights change sign, see Figure . Equation (2) plays a key role in the quantitative evaluation of the biological implications of the results of surprisal analysis as reported below. 
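As a concrete illustration of the SVD step just described, here is a minimal numpy sketch. This is my own toy construction, not the authors' code: the matrix lnX, its size, and the variable names are all invented for illustration.

```python
import numpy as np

# Toy sketch of the SVD fit of equation (1): rows are genes i, columns are
# time points T, entries are ln X_i(T). All names and sizes are illustrative.
rng = np.random.default_rng(0)
lnX = rng.normal(loc=5.0, scale=1.0, size=(50, 6))  # 50 genes, 6 time points

# SVD: lnX = U @ diag(S) @ Vt.  Column U[:, a] plays the role of the gene
# weights G_ia of pattern a, and lam[a, T] = S[a] * Vt[a, T] plays the role
# of the weight lambda_a(T) of pattern a at time T.
U, S, Vt = np.linalg.svd(lnX, full_matrices=False)
G = U
lam = S[:, None] * Vt

# Using all patterns reproduces the data exactly (noise and all), as the
# text notes; truncating to the largest singular values keeps only the
# dominant transcription patterns.
assert np.allclose(G @ lam, lnX)
dominant = G[:, :2] @ lam[:2, :]        # two-pattern approximation
print(np.abs(dominant - lnX).max())     # small but nonzero residual
```

The distinct singular vectors are orthogonal by construction, which mirrors the independence of the patterns mentioned in the text.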
Specifically, equation (2) highlights the quantitative aspects of changes in the levels of gene expressions: changes in expression patterns primarily require that the fractional weight of a pattern change between two time points. In this study we examined the WI-38/hTERT cell system, which was guided to develop into a fully transformed cell, beginning with the normal WI-38 immortalized non-transformed fibroblasts. Cells underwent molecular manipulations such as hTERT insertion, many doublings, depression of p53 function and the insertion of oncogenic H-ras, as depicted in Figure . The expression levels X_i(T) were measured for a series of 12 time points, as shown in Figure . Biological categories that were significantly over-represented in pattern α between two time points, as defined by an EASE score < 0.05, are shown in the Additional file. The model system was studied using the Human Genome Focus Array with 8500 verified human genes [20,21]. Since this system did not develop continuously from one point to the next, we divided it into several trajectories representing the various possible processes. The expression levels were measured in duplicate for each time point in the trajectory (Figure ). Using the data reported by Milyavsky et al., we implemented the analysis in which λ_α(T) represents the importance of the contribution of gene expression pattern α at time T. The trajectory 1-5-7-8-10-12 can be calculated with α = 0, 1, ..., 5. The α = 0 term is the time invariant gene expression pattern term and the five others are the varying patterns; we rank them in order of decreasing weight, so that consecutive terms in the sum in equation (1) make decreasing contributions. Figure shows λ_α(T) for the 3 constraints (or gene expression patterns) contributing most to the process of transformation, as a function of time, with α = 1 the dominant one; recall that λ_α(T) = ω_α P_αT. 
A weight of near zero at some values of time means that while the absolute weight ω_α is not necessarily small, the fractional weight P_αT is small at those time points. A pattern with λ_α(T) ≈ 0 at such time points does not lower the entropy; from the point of view of the expression levels of genes, a pattern with a very low weight does not contribute to the gene expression levels at that time, see equation (1) and Figure . The weight λ_0(T) should not depend on the time T of measurement, since λ_0(T) is the measure of the contribution of the time invariant gene expression pattern of the steady state, α = 0. The nearly exact constancy of the fitted value of λ_0(T) at different times is a validation of the concept of a time-invariant contribution, see equation (1). Pattern α = 0 is the pattern at the maximum of the entropy without time dependent constraints; it is the expression pattern in the limit of the steady state. As expected from basic considerations, not all trajectories lead to the same cell fate. Different trajectories therefore have different secular fates and can differ in their λ_0 values, as shown in the Additional file. We rank the varying transcription patterns by their importance, with α = 1 being the dominant one, where importance is measured by the size of the absolute weight ω_α. The smaller the value of ω_α, the more likely it is that the fit is not to the real data but to the noise. So at a given point in time we have more confidence in the biological significance of those transcription patterns with larger weights. Even so, we will see that the fourth transcription pattern is very important at early times. 
Digital results for the weights λ_α(T) in the different trajectories are given in the Additional file. The steady state term λ_0 G_i0 plus the other five time-varying transcription patterns exactly reproduces the measured levels of all gene expression. If we use fewer patterns in the expansion (1) we get a quite acceptable approximation when the dominant constraints are used. To highlight this point we show in Figure the approximation using only the patterns α = 1, 2. The analysis of individual expression patterns shows that they can undergo an 'inversion' in their importance. Inversion means that genes that had high expression levels at the previously measured point now go down, while genes that had a low level go up at the present time point. Examined at the level of a particular pattern, the change is an outright inversion: the logarithm of the expression level changes sign, or equivalently the exponent of the expression level changes sign, as shown graphically in Figure . The results of the analysis of different trajectories are shown in Figure for pattern α = 1 of trajectory 1-5-7-8-10-12. Among the over-represented categories with induced expression we find transcripts participating mostly in protein biosynthesis and the metabolism of DNA and RNA. Note that these transcripts are limited to the induced categories and do not appear in the over-represented reduced categories, as judged from mRNA levels. Transcription pattern α = 1 in the trajectory 1-5-7-8-11 possesses the same dysregulated transcription patterns as in 1-5-7-8-10-12, except for the cell death category, which is reduced significantly at point 11 (Table S3). Thus the major transcription pattern, the one with the biggest impact on the process of transformation in these two trajectories, shows similar changes in gene expression levels. 
Reduction in signal transduction in the trajectories 1-5-7-8-11 and 1-5-7-8-10-12 is highly correlated with the enhanced rate of proliferation of the late stages that was measured experimentally. Analysis of the trajectory 1-5-7-8-9 reveals that the major changes in the transcription patterns of the cells in this route of transformation are different from those of the previous two trajectories, 1-5-7-8-11 and 1-5-7-8-10-12. The long evolution of hTERT immortalized cells without opening the system (to H-RAS induction or p53 inactivation) leads to similar main changes, like reduction in development processes and induction of tRNA and rRNA metabolism and protein biosynthesis; the voraciously energy consuming category of signal transduction, however, behaves differently. We compared the values of the 5 continuous routes, 1-5-7-8, 1-5-7-8-9, 1-5-7-8-10, 1-5-7-8-11 and 1-5-7-8-10-12, that branch out at point 8. The major transcription pattern of the 1-5-7-8, 1-5-7-8-10 and 1-5-7-8-11 routes included the reduced signal transduction category and induced protein metabolism. Reduction in signal transduction and induction of protein metabolism are found to be H-RAS/p53 independent, but play an important role in cellular transformation. Since these alterations appear in the 1-5-7-8, 1-5-7-8-10, 1-5-7-8-11 and 1-5-7-8-10-12 trajectories, we suggest that point 8 exhibits the appropriate cellular context for future oncogenic transformations. 
The trajectory ending at time point 9 represents a different route, where numerous cell divisions, which occurred between points 8 and 9, caused many additional alterations that would not necessarily lead to a cancerous phenotype. Our purpose here is to characterize the major transcription pattern of each trajectory. The α = 1 transcription patterns of the trajectory 1-5-7-8 and of 1-5-7-8-10-12 have the largest number of overlapping categories (see Tables S2 and S6). The cell proliferation category in trajectory 1-5-7-8-10 is among the over-represented down regulated groups; analysis of the trajectory 1-5-7 reveals similar results. The analysis of the trajectories 1-5-7-8-9 and 1-5-7-8-11 reveals that the transcription pattern α = 3 identifies the particular changes that occurred between point 8 and the last point of the trajectories 1-5-7-8-9, 1-5-7-8-11 and 1-5-7-8-10-12; this transcription pattern onsets between point 8 and points 9, 10 and 12. The Additional file contains: Table S1, providing the results of surprisal analysis in digital form (p.5); additional supplementary figures; a validation section (p.11); results of the analysis of the minor transcription patterns (p.13); and lists of transcripts participating in the different transcription patterns, given as supplemental tables S2-S19 (pp.14-36)."}
+{"text": "Time-course gene expression data, such as yeast cell cycle data, may exhibit periodic patterns of expression. To cluster such data, Fourier series approximations of periodic gene expression are currently used, but they have been found inadequate to model the complexity of time-course data, partly because they ignore the dependence between the expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of available models in the literature and propose a new mixture model with autoregressive random effects of the first order for the clustering of time-course gene expression profiles. Simulations and real examples are given to demonstrate the usefulness of the proposed models. We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models by providing more reliable and robust clustering of time-course data. Our model provides superior results when genetic profiles are correlated, and gives comparable results when the correlation between the gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. Our new model, an extension of the EMMIX-WIRE procedure, is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for the correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters, which enables improved modelling and consequent clustering of time-course data. 
DNA microarray analysis has emerged as a leading technology to enhance our understanding of gene regulation and function in cellular mechanism controls on a genomic scale. This technology has advanced to unravel the genetic machinery of biological rhythms by collecting massive gene-expression data in a time course. Time-course gene expression data, such as yeast cell cycle data, appear to be periodic. Various computational models have been developed for gene clustering based on cross-sectional microarray data, as well as for time-course data. Finite mixture models have been widely used for clustering; however, when the conditions that (1) there are no replications on any particular entity specifically identified as such and (2) all the observations on the entities are independent of one another are violated, multivariate normal mixture models may not be adequate. For example, condition (2) will not hold for the clustering of gene profiles, since not all the genes are independently distributed, and condition (1) will generally not hold either, as the gene profiles may be measured over time or on technical replicates. While this correlated structure can be incorporated into the normal mixture model by appropriate specification of the component-covariance matrices, it is difficult to fit the model under such specifications; for example, the M-step may not exist in closed form. Accordingly, Ng et al. have developed the EMMIX-WIRE procedure (EM-based MIXture analysis With Random Effects) to handle the clustering of correlated data that may be replicated. They adopted a mixture of linear mixed models to specify the correlation structure between the variables and to allow for correlations among the observations. It also enables covariate information to be incorporated into the clustering process. Fourier series approximations have been used to model periodic gene expression, leading to the detection of periodic signals in various organisms including yeast and human cells [20,21]. The kth order Fourier series expansion is given as

g_k(t) = a_0 + Σ_{j=1..k} [a_j cos(2πjt/ω) + b_j sin(2πjt/ω)],   (1)

where a_0 is the average value of g_k(t). 
The other coefficients a_k and b_k are the amplitude coefficients that determine the times at which the gene achieves peak and trough expression levels, respectively, and ω is the period of the signal of gene expression. While the time-dependent expression value of a gene can be adequately modelled by a Fourier series approximation of the first three orders, recent studies [14] demonstrate the need to also model the correlation structure of the data. The EMMIX-WIRE procedure of Ng et al. is developed for this purpose. The paper is organized as follows: we first present the development of the extension of the EMMIX-WIRE model to incorporate AR(1) random effects, which are fitted under the EM framework. Then in the following section we conduct a simulation study and the data analysis with three real yeast cell datasets. In the last section some discussion is provided. The technical details of the derivations are provided in the Additional file. We let X denote the design matrix and β the associated vector of regression coefficients for the fixed effects. In the specification of the mixture of mixed linear components, as adopted by Ng et al., the vector y_j for the jth gene, conditional on its membership of the hth component of the mixture, is expressed as

y_j = X β_h + Z_1 u_jh + Z_2 v_h + ε_jh,   (2)

where β_h is a (2k+1) vector containing the unknown parameters a_0, a_1, ..., a_k, b_1, ..., b_k (see (1)), u_jh and v_h are the random effects, and m is the number of time points. In (2), Z_1 and Z_2 are m×m identity matrices. Without loss of generality, we assume ε_jh and v_h to be independent and normally distributed, independent of u_jh. To further account for the time dependent random gene effects, a first-order autoregressive correlation structure is adopted for the gene profiles, so that u_jh follows a N(0, θ²A(ρ)) distribution (3), where A(ρ) is the m×m matrix with (r, s) entry ρ^|r-s| (4). The inverse of A(ρ) can be expressed in closed form in terms of the matrices I, J, and K, all m×m. 
Specifically, I is the identity matrix; J has its sub-diagonal entries ones and zeros elsewhere, and K takes on the value 1 at the first and last element of its principal diagonal and zeros elsewhere, giving

A(ρ)^{-1} = [(1 + ρ²) I - ρ (J + J^T) - ρ² K] / (1 - ρ²).   (5)

The expressions (4) and (5) are needed in the derivation of the maximum likelihood estimates of the parameters. The assumptions (2) and (3) imply that our new model assumes an autocorrelation covariance structure under which measurements at each time point have a larger variance compared to the model of Kim et al. In the context of mixture models, we consider the g-component mixture with probability density function (pdf) f(y_j; Ψ) = Σ_{h=1..g} π_h f_h(y_j), where f_h is the component pdf of the multivariate normal distribution with mean vector X β_h and the covariance matrix implied by (2) and (3). The vector of unknown parameters is denoted by Ψ and can be estimated by maximum likelihood via the EM algorithm. In the EM framework adopted here, the observed data vector y = (y_1^T, ..., y_n^T)^T is augmented by the unobservable component labels z_1, z_2, ..., z_n of y_1, y_2, ..., y_n, where z_j is the g-dimensional vector with hth element z_jh, which is equal to 1 if y_j comes from the hth component of the mixture and is zero otherwise. These unobservable values are considered to be missing data and are included in the so-called complete-data vector. Finally, we take the random effect vectors u_jh and v_h to be missing and include them too in the complete-data vector. The complete-data log-likelihood l_c is then the sum of four terms, l_c = l_1 + l_2 + l_3 + l_4, where l_1 is the logarithm of the probability of the component labels z_jh, l_2 is the logarithm of the density function of y conditional on u_jh, v_h, and z_jh = 1, and l_3 and l_4 are the logarithms of the density functions of u and v, respectively, given z_jh = 1. Because the parameters separate in l_c, the above decomposition implies that each of l_1, l_2, l_3, and l_4 can be maximized separately. 
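The closed-form inverse of the AR(1) correlation matrix is easy to verify numerically. The following is my own small numpy check (not the authors' code); m and rho are arbitrary choices, and I build the symmetric J + J^T directly.

```python
import numpy as np

# AR(1) correlation matrix A(rho) with (r, s) entry rho^|r-s|, and its
# closed-form inverse built from I, J, K as described in the text
# (I: identity; J + J^T: ones on the sub- and super-diagonals; K: ones at
# the first and last diagonal entries). Sizes and rho are arbitrary.
m, rho = 6, 0.6
idx = np.arange(m)
A = rho ** np.abs(idx[:, None] - idx[None, :])

I = np.eye(m)
JJt = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
K = np.zeros((m, m))
K[0, 0] = K[-1, -1] = 1.0

# A(rho)^{-1} = [(1 + rho^2) I - rho (J + J^T) - rho^2 K] / (1 - rho^2)
A_inv = ((1 + rho**2) * I - rho * JJt - rho**2 * K) / (1 - rho**2)
assert np.allclose(A @ A_inv, np.eye(m))
```

The resulting inverse is tridiagonal, which is what makes the maximum likelihood derivations tractable.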
To maximize the complete-data log likelihood, the EM algorithm proceeds iteratively until the difference between successive values of the log likelihood is less than some specified threshold. All major derivations are given in the Additional file. To illustrate the performance of the proposed model, we present a simulation study based on synthetic time-course data. In the simulation we consider an autocorrelation dependence for the periodic expressions, with values of θ² corresponding to low and high autocorrelation among the periodic gene expressions, and compare our model to that of Kim et al. We also assume that Ω and D are diagonal matrices, where the common diagonal elements are represented by σ² and d², respectively. There are three clusters of genes, with periods 6, 10, and 16, respectively. There are 24 measurements at time points 0, 1, ..., 23, and the first order Fourier expansion is adopted in the simulation models. Parameters and simulation results are listed in Tables . The stopping threshold was 0.00001, with a maximum of 1000 iterations; for our model, we started from the true partition. Specifically, we first investigate the performance of our new extended EMMIX-WIRE model and that of Kim et al. when the gene profiles are correlated. The estimates of a_0, a_1, b_1, θ², ρ, and σ² in the proposed model are approximately unbiased, except for d², which is slightly underestimated. In contrast, the model of Kim et al. fails to do so. We then compare our model with that of Kim et al. in the setting d² = 0, where gene expressions are independent; the results are presented in Tables . Lastly, we generate the data from the model of Kim et al.; our model again provides unbiased estimates for all parameters, in contrast to the model of Kim et al. The first example considers the yeast cell cycle data analysed recently by Wong et al. 
For this dataset, the estimates of d² in clusters 1 and 4 (Table ) are large and are greater than the corresponding estimates of θ², indicating coregulation in these two clusters. If we ignore such within-cluster coregulation, we obtain Rand Indices similar to those for the model of Kim et al. The second example is the subset of 384 genes from the yeast cell cycle data in Cho et al., corresponding to the five phases of the cell cycle; here the estimates of d² are all very small compared to the estimates of θ². Each gene is assigned a “phase”. We call each “phase” a “Main Group”. There are five “Main Groups” in this dataset, namely early G1, late G1, S, G2, and M. We compare and assess the cluster quality against this external criterion (the 5 phases). The raw data are log transformed and normalized by columns and rows (Figure ). With the third example, we demonstrate how the proposed method can be adopted to cluster a large number of yeast genes of which only a small proportion shows periodicity. The original dataset consists of more than 6000 genes, where the yeast cells were sampled at 7 min intervals for 119 min, with a total of 18 time points after synchronization. The new mixture model with AR(1) random effects and Fourier series approximations was fitted to the periodic gene expression data with the number of clusters from g=1 to g=20, where the cell cycle period ω=63 was determined using a global grid search method described in the Discussion section. The optimal number of clusters was determined using the Bayesian Information Criterion (BIC) of Schwarz [13,28]. The clusters obtained were compared with the findings by Spellman et al. We have presented a new mixture model with AR(1) random effects for the clustering of time-course gene expression profiles. 
Our new model involves three elements that play important roles in modelling time-course periodic expression data, namely: (a) the Fourier expansion, which models the periodic patterns; (b) the autocorrelation variance structure, which accounts for the autocorrelation among the observations at different time points; and (c) the cluster-specific random effects, which incorporate the coregulation within the clusters. In particular, the latter two elements, corresponding to the correlations between time points and between genes, are crucial for reliable and accurate clustering of time-course data. We have demonstrated in the simulation and real examples that the accuracy of clustering is improved if the autocorrelation among the time dependent gene expression profiles is accounted for along the time points; this is also demonstrated in Kim et al. Furthermore, as an additional empirical comparison, we applied a simple k-means clustering procedure to all the simulated and real datasets considered in the paper. We found that the k-means procedure gave a higher error rate and a smaller adjusted Rand index (both with higher variability), especially when the correlation between genetic profiles is not small. For example, the mean (SE) of the error rate and adjusted Rand index obtained for the k-means procedure for the model in Table are 0.045 (0.018) and 0.876 (0.041), respectively; the adjusted Rand index is smaller than for the two methods that are based on the EMMIX-WIRE model. Using the k-means procedure, the error rate and adjusted Rand index are 0.404 and 0.442, respectively, for the Yeast 2 dataset. This error rate is the highest among the methods considered in the paper, while the adjusted Rand index is comparable to the other methods. With the complete yeast dataset, the results obtained using the k-means procedure are somewhat different from those for our method. 
For example, we have identified a majority of yeast genes (81%) in cluster 5 which show a typical M/G1 phase, while the corresponding cluster obtained by the k-means procedure contains only 69% of genes with an M/G1 phase. Moreover, the clusters obtained by the k-means procedure for the non-periodic genes are very different from those presented in Figure . For the purpose of comparison, the periods of the signal of gene expression are assumed to be known in the simulation study and applications to real data. In practice, there are several ways to estimate the periods for each cluster [13,14,20]. Let S be the space with typical element (a vector) T representing the component periods, where each ω_h can take all possible values (grid points). For example, for the yeast cell cycle data, the possible periods are 60, 61, ..., 90. Then for each fixed T, we estimate the parameters as if the periods for each component were known. Finally, we compare the log likelihoods and choose the result with the highest log likelihood as the final one. Since this is very slow if there are too many elements in S, when we have no prior information about the periods we recommend using other methods to obtain the periods, such as the weighted least-squares estimation approach considered in the literature. The proposed model is very flexible through the different specifications of design matrices or model options as originally available in Ng et al. For example, one option allows that v = 0 and that the random effects u be autocorrelated for each gene. Furthermore, when both random effects u and v are assumed to be zero, we have a normal mixture of regression models. In the program we have developed, there are many options and parameters for users to specify the models they want to use in addition to the models we list in our paper. For example, the developed program is applicable to clustering time-course gene expression profiles that are not periodic (see Figure ). 
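The grid search over candidate periods can be sketched as follows. This is a toy single-profile version under a Gaussian error model (where maximizing the log likelihood reduces to minimizing the residual sum of squares); all data, names, and grid values here are invented for illustration.

```python
import numpy as np

# Toy version of the period grid search: fit a first-order Fourier mean for
# each candidate period and keep the period with the lowest residual sum of
# squares (equivalently, the highest Gaussian log likelihood).
rng = np.random.default_rng(2)
t = np.arange(24, dtype=float)
true_period = 10.0
y = 1.0 + 2.0 * np.cos(2 * np.pi * t / true_period) + rng.normal(0, 0.1, t.size)

def rss_for_period(period):
    # Design matrix for a first-order Fourier expansion with this period.
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

grid = np.arange(6.0, 17.0)   # candidate periods (grid points)
best = min(grid, key=rss_for_period)
print(best)
```

With many grid points or many mixture components, the exhaustive search becomes slow, which is why the text recommends other period estimators when no prior information is available.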
Our new extended EMMIX-WIRE model is more reliable and robust for clustering time-course data because it postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters, which enables improved modelling and consequent clustering of time-course data. An R program is available on request from the corresponding author. The authors declare that they have no competing interests. All authors contributed to the production of the manuscript. SKN and GJM directed the research. KW wrote the R program and analysed the simulated and real data. All authors read and approved the final manuscript. Supplementary file: bmcbioinf-supp-2012.pdf. Supplementary file for code and data: supp2.zip."}
+{"text": "We develop a new regression algorithm, cMIKANA, for inference of gene regulatory networks from combinations of steady-state and time-series gene expression data. Using simulated gene expression datasets to assess the accuracy of reconstructing gene regulatory networks, we show that steady-state and time-series data sets can successfully be combined to identify gene regulatory interactions using the new algorithm. Inferring gene networks from combined data sets was found to be advantageous when using noisy measurements collected with either lower sampling rates or a limited number of experimental replicates. We illustrate our method by applying it to a microarray gene expression dataset from human umbilical vein endothelial cells (HUVECs) which combines time series data from treatment with growth factor TNF and steady state data from siRNA knockdown treatments. Our results suggest that the combination of steady-state and time-series datasets may provide better prediction of RNA-to-RNA interactions, and may also reveal biological features that cannot be identified from dynamic or steady state information alone. Finally, we consider the experimental design of genomics experiments for gene regulatory network inference and show that network inference can be improved by incorporating steady-state measurements with time-series data. Determining gene regulatory network structure from gene expression data is one of the most challenging problems in molecular systems biology. Microarray technologies, as well as other newer approaches such as RNA-seq, have been widely used to generate quantitative gene expression data. Typically, experiments measure gene expression following perturbation of target genes (for example following RNAi-mediated gene knock-down or gene deletion), following treatment of cells with a drug or other molecule, or following a change to the cellular environment. 
Measurements of gene expression are typically conducted at a single time-point, or during successive time-points, after some perturbation. These data are termed steady-state data and time-series data, respectively. Both types have been used for network inference. Steady-state and time-series data can both provide valuable information about the topology, or ‘wiring diagram’, and dynamics of the gene regulatory network. Compared with steady-state data, time-series data are thought to be more useful for revealing directional interactions that indicate the cause-and-effect relationships among genes. A wide variety of computational algorithms and approaches have been brought to bear on the inference problem from steady-state data, including Bayesian networks. In this work we focus on regression algorithms, in which gene networks are modelled using ordinary differential equations (ODEs). In this study we present a regression-based algorithm in which steady-state and time-series datasets can be combined for gene network inference. We base our algorithm on the MIKANA algorithm, which uses a model selection approach for inference of biochemical network models. MIKANA has previously been shown to successfully infer network structures from steady-state and temporal data sets; comparisons with other gene network inference methods were performed by Hurley et al. In this study we compare the performance of three different versions of MIKANA, a regression-based ODE model for gene regulatory network inference. Steady-state MIKANA (ssMIKANA) infers networks from steady-state gene expression data sets. Time-series MIKANA (tsMIKANA) infers networks from temporal gene expression data. A new algorithm, called combined MIKANA (cMIKANA), is developed here to infer gene networks from combined time-series and steady-state data sets. 
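To make the regression idea concrete, here is a minimal toy sketch of combining the two data types in one least squares problem. This is my illustration of the general ODE-regression idea, not the MIKANA/cMIKANA implementation; the linear model, the known inputs U, the noise-free Euler simulation, and all sizes are simplifying assumptions.

```python
import numpy as np

# Linear ODE model dx/dt = A x + u.  Time-series rows supply finite-difference
# estimates of dx/dt; steady-state rows supply equations 0 = A x_ss + u, i.e.
# A x_ss = -u.  Stacking both gives one regression problem for A.
rng = np.random.default_rng(1)
n = 5
A_true = -np.eye(n)
A_true[0, 1] = 0.8
A_true[2, 0] = -0.5

# Noise-free time series by Euler integration (for clarity of the sketch).
dt, steps = 0.05, 200
X = [rng.normal(1.0, 0.1, n)]
for _ in range(steps):
    X.append(X[-1] + dt * (A_true @ X[-1]))
X = np.array(X)
dXdt = np.diff(X, axis=0) / dt

# Steady states of perturbation experiments with known inputs u.
U = rng.normal(0.0, 1.0, (20, n))
X_ss = np.array([-np.linalg.solve(A_true, u) for u in U])

# Combined least squares: solve design @ A^T = target in one shot.
design = np.vstack([X[:-1], X_ss])
target = np.vstack([dXdt, -U])
A_hat = np.linalg.lstsq(design, target, rcond=None)[0].T
assert np.allclose(A_hat, A_true, atol=1e-6)
```

In this idealized noise-free setting the stacked system is exact; the algorithms discussed in the text additionally perform model selection to keep the inferred networks sparse.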
These algorithms and the development of cMIKANA are discussed in detail below. The in silico experiments for simulating gene expression data were designed to mimic the microarray experiments performed previously. To determine whether combining steady-state and time-series data can provide better prediction of gene regulatory interactions, we assessed the performance of network inference with steady-state, time-series and combined datasets by comparing candidate networks inferred from 100-gene simulated datasets against the synthetic networks used to simulate the data. Although both tsMIKANA and cMIKANA were sensitive to noise, cMIKANA demonstrated better performance compared with tsMIKANA in terms of higher sensitivity while retaining low FDR. We note that at very high noise levels (>20%) FDR increases dramatically for time-series data methods. We have not included any smoothing step in the tsMIKANA or cMIKANA algorithms, and therefore at very high noise levels the differences-based calculation of time derivatives suffers significantly. This may be improved in a straightforward manner by including a data-smoothing step for time-series data. Next we compared networks inferred from multiple time-series data sets with networks generated from a single time series combined with steady-state data sets, with the same overall number of experimental measurements. Overall, these results show that steady-state and time-series data sets can be combined for network inference, and that combining steady-state data and time-series data can raise the sensitivity score for networks identified from time-series data (more true positive edges are identified) while not penalizing the networks by raising the false discovery rate. ODE-based network inference approaches assign directional edges from either steady-state data or time-series measurements. However, assignment of correct edge direction is thought to be improved using temporal information. 
We next sought to determine what effect combining steady-state data with temporal data might have on correct assignment of edge direction. We generated time-series and steady-state data sets from 50 separate simulations for 100-gene networks. Each of these networks had the same connectivity, and 10% noise was added to each data point. We scored the networks inferred using the ssMIKANA, tsMIKANA and cMIKANA algorithms for the number of directed edges shared with the networks used to simulate the data. We also scored the number of directed edges shared for the inferred networks in which the direction of each edge was reversed. Finally, the figure shows that by combining steady-state with time-series data, networks identified using cMIKANA have approximately the same proportion of correctly identified directed edges. These results confirm that temporal measurements provide directional information for identifying cause-and-effect relationships among genes, and that incorporation of steady-state data does not appear to deteriorate identification of edge directionality. We also applied the three algorithms to microarray data from a study of human umbilical vein endothelial cells (HUVECs) by Hurley et al. To illustrate the similarities and differences between the three resulting networks, we performed an RNA-to-RNA edge-wise comparison between all three networks. To establish whether the network identified using cMIKANA was simply the addition of the ssMIKANA and tsMIKANA network models, we compared the cMIKANA network to the union of the ssMIKANA and tsMIKANA networks. 94 out of 319 interactions (\u223c30% overlap) in the union network were found to overlap with interactions inferred from the combined data set using cMIKANA. Three hub genes (ID1, FOS and CFB) overlapped between the ssMIKANA and tsMIKANA network models. These hubs were highly enriched for the regulation of transcription from the Pol II promoter (GO: 0006357 with a Bayes factor of 7) according to the GATHER web tool. FOS was also a hub in the cMIKANA network. 
Moreover, two other hubs, IL15 (in the top 10 for ssMIKANA) and HIVEP2 (in the top 10 for tsMIKANA), were also found in the cMIKANA network. \u2018Hub\u2019 genes in regulatory networks are genes with high out-degree, which influence the expression of many other genes. To determine the potential biological significance underlying each of the inferred network models, we ranked genes by out-degree (i.e. the number of target genes in the inferred network) from the three different models. To determine the biological function of these genes, we then used the GATHER web tool to perform a functional enrichment analysis of the hubs in each network model by comparing the hubs to the Gene Ontology (GO) database. This work has focused on identifying gene regulatory interactions from combinations of steady-state and temporal gene expression data. In real biological regulatory networks, \u2018steady-state\u2019 data points can be measured either from perturbation experiments, such as the knock-down \u2018disruptant\u2019 data studied here, or from clinical measurements of patients. The measurement of \u2018steady state\u2019 is relative to the experimental time scale and the temporal processes that are observed. In reality, however, it is uncertain whether biological data are ever collected at a genuine steady state of the system. The cost and practical complexity of genomic experiments typically limits the number of time-series measurements in a given study to a few time points and a small number of experimental replicates. This constrains the temporal information available for identifying regulatory interactions between genes. We have shown that by combining steady-state data with time-series measurements for network inference it is possible to increase sensitivity without raising the false discovery rate. Furthermore, we have shown that the addition of steady-state data does not reduce the ability to correctly determine edge direction in the inferred network. 
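The out-degree ranking used to nominate hub genes can be sketched as follows (a minimal illustration over a directed edge list; the example edges are hypothetical):

```python
from collections import Counter

def rank_hubs(edges, top=10):
    """Rank genes by out-degree, i.e. the number of target genes each
    regulator points to in a directed (parent, child) edge list."""
    out_degree = Counter(parent for parent, child in edges)
    return [gene for gene, _ in out_degree.most_common(top)]
```

Applied to each inferred network model, the top-ranked genes are the candidate hubs that would then be passed to an enrichment tool such as GATHER.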
Based on the results we achieved from simulated datasets, we tested our new approach using a steady-state siRNA disruptant microarray dataset and a temporal-response-to-TNF-perturbation microarray dataset from HUVECs. Our analyses of the networks inferred from these two datasets, separately and in combination, suggest that biologically plausible hub genes were identified in each of the different networks, even though there was relatively low overlap between the hubs and edges identified in the networks determined from the combined data set and those identified from the time-series and steady-state data individually. There are several reasons why the overlap may be low. Using the steady-state formulation of the network inference model (ssMIKANA), a gene whose expression profile has high variation across samples and is highly correlated with the expression profile of the \u2018regulated\u2019 gene is more likely to be selected as a regulator of that gene in the network model. An edge in this model implies that the variation in the abundance of the regulator RNA can explain (some of) the variance in the abundance of the regulated gene across the experimental samples. Using the temporal formulation of the model (tsMIKANA), however, an edge indicates that the variation in the abundance of a regulating RNA affects the rate of change in abundance of the regulated gene. Another potential reason is that regression methods will select one member of a highly correlated set of genes as a regulator, but different methods may select a different member of the same set of highly correlated genes. Greater overlap may therefore be found by preprocessing the data to cluster together gene sets which are highly correlated across all experimental measurements, and to use a single representative from each such highly correlated set for network analysis. 
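The suggested preprocessing, collapsing each highly correlated gene set to a single representative, could be sketched as follows (a greedy illustration under an assumed correlation threshold, not a procedure used in the paper):

```python
import numpy as np

def collapse_correlated(expr, names, r_thresh=0.95):
    """Greedily group genes whose expression profiles correlate above
    r_thresh across all measurements and keep one representative each.

    expr: genes x samples array; names: gene labels; r_thresh is an
    assumed cutoff.
    """
    C = np.corrcoef(np.asarray(expr, float))  # gene-gene correlation matrix
    reps, assigned = [], set()
    for i, name in enumerate(names):
        if i in assigned:
            continue
        group = [j for j in range(len(names))
                 if j not in assigned and C[i, j] >= r_thresh]
        assigned.update(group)
        reps.append(name)  # gene i stands in for its correlated group
    return reps
```

Network inference would then run on the representatives only, sidestepping the arbitrary choice among near-identical regulators described above.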
Despite these reservations, the inference from the combination of the two datasets using cMIKANA recapitulated 33% of interactions in tsMIKANA and 22% of interactions in ssMIKANA. This suggests that the cMIKANA network does not simply represent the union of the tsMIKANA and ssMIKANA networks, but may identify regulatory interactions that were not evident in either the siRNA disruptant network or the TNF time course network alone. In conclusion, we have developed an ODE regression model for reverse engineering, called cMIKANA, to identify gene regulatory networks from gene perturbation measurements combining steady-state and temporal gene expression data. The combined use of time-series and steady-state data outperformed inference from time-series data only at moderate noise levels. Although different types of genomics experiment measurements may describe different aspects of the regulation underlying the system, our results suggest that combining steady-state and temporal measurements can improve the prediction of gene regulatory interactions and may reveal regulatory information that cannot be observed from either steady-state or time-series data alone. Our results also suggest a potentially cost-efficient approach that incorporates steady-state measurements into time-series data sets to improve the design of genomics experiments for gene regulatory network inference. Gene regulatory networks are usually modeled as a graph of connected nodes, in which nodes represent genes and edges represent interactions between genes. We use an ODE formulation as a model for reverse engineering the gene regulatory networks, and as a simulation model to generate synthetic gene expression data sets with which we test our methods. The network simulation, synthetic data generation, network inference and algorithm validation were executed using the computational framework developed by Hurley et al. MIKANA uses an iterative model selection technique, first proposed by Judd and Mees. 
The underlying model for MIKANA uses a set of ODEs, one for each gene, describing gene regulation as a function of the level of expression of the other genes in the network, using a linear summation of weighted basis functions (Equation (1.1)). A nonlinear basis function derived from the Hill activation function is also used (Equation (1.2)). Combining Equations (1.1) and (1.2), the general form of the ODE-based MIKANA can be obtained. Following Wildenhain and Crampin and Srividhya et al., time derivatives are estimated from time-series measurements collected at successive time points. For the time-series microarray experiments considered here, the perturbation is set to zero for tsMIKANA as we assume that the perturbation is to the initial conditions and is not sustained. Noticing that the right-hand sides of Equations (1.5) and (1.6) are identical, we can write the regression model in a form such that steady-state data and time-series measurements can be combined for model fitting simultaneously. Using the nonlinear basis function above, we obtain the regression for cMIKANA. Synthetic network structures were generated with an out-degree distribution P(k)\u223ck\u2212r: node pairs a and b are selected at random and an edge is assigned from a to b accordingly, with average degree avk \u200a=\u200a 3 and exponent r \u200a=\u200a 0.65. We adapted the simulation environment described by Wildenhain and Crampin. Given a specified network structure, gene regulatory behaviour can be simulated using the set of ODEs, including the regulatory effects of positive and negative interactions on each gene. To examine the three variants of the MIKANA network inference approach, two types of numerical experiments were designed for simulating steady-state datasets and time-series datasets of gene expression in response to external stimuli. The numerical experiments were designed in accordance with the microarray studies on HUVECs described in Hurley et al. 
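The idea of stacking steady-state and time-series rows into one regression can be illustrated on a purely linear toy system dx/dt = Ax + u (the network, sizes and noise-free setting are assumptions for illustration; the Hill-type basis functions of the actual model are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 8, 0.05

# Hypothetical sparse "true" network for the linear ODE dx/dt = A x + u
A = np.where(rng.random((n, n)) < 0.2, rng.normal(0.0, 0.4, (n, n)), 0.0)
np.fill_diagonal(A, -1.0)                 # self-degradation
lam = np.linalg.eigvals(A).real.max()
if lam > -0.2:                            # enforce stability if needed
    A -= (lam + 0.5) * np.eye(n)

# Time-series rows: Euler-simulate one trajectory, then regress the
# finite differences (x[k+1] - x[k]) / dt on x[k]
traj = [rng.normal(0.0, 1.0, n)]
for _ in range(100):
    traj.append(traj[-1] + dt * (A @ traj[-1]))
traj = np.array(traj)
X_ts = traj[:-1]
Y_ts = (traj[1:] - traj[:-1]) / dt

# Steady-state rows: a sustained unit perturbation u to each gene gives
# A x_ss + u = 0, so the regression target at x_ss is -u
U = np.eye(n)
X_ss = np.array([-np.linalg.solve(A, u) for u in U])
Y_ss = -U

# Combined fit: stack both kinds of rows into one least-squares problem
X = np.vstack([X_ts, X_ss])
Y = np.vstack([Y_ts, Y_ss])
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

With noise-free data both row types are consistent with the same coefficient matrix, so the stacked regression recovers A; this shared form is exactly what makes Equations (1.5) and (1.6) combinable.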
Two assumptions were made for simulating these experiments: (1) the system is originally at a steady state, which is considered as the reference state for the experiments; (2) the initial concentration of mRNA of a target gene in siRNA knockdown experiments, or the mRNA abundances of all genes in a temporal experiment, are changed once the external stimulus is applied. The simulation has two steps: (i) reference state generation and (ii) microarray experiment simulation. The simulation is executed by MATLAB code, which is available from the authors upon request. Numerical experiments are initiated with all variables at the reference steady state. In order to generate numerical data, the parameters in Equations (1.8) and (1.9) were randomly selected from uniform distributions. The range of each distribution was determined empirically in order to achieve a stable model with sufficient variation in the simulated data across time. For each knockdown experiment a different gene is perturbed \u2013 a target gene is selected and knock-down is simulated by holding the corresponding variable at a fixed reduced level. To simulate steady-state datasets generated from a series of siRNA knock-down experiments, the procedure is repeated for each targeted gene in turn, with the remaining N-1 ODEs solved to find the new steady state, generating a set of steady-state data. To capture the variation in gene expression in response to broader external stimuli, for example the cellular response to an inflammatory trigger, we simulated experiments in which all genes in the network were initially perturbed. For each experiment, a set of perturbation coefficients was drawn. To generate time-series data, gene expression values were collected at sequential time points as the system evolved to a new steady state. 
The process was repeated for each experiment. To assess the effects of noise on the performance of an ODE-based algorithm, we modelled noisy gene expression data by adding noise to the simulated measurements. To score the performance of network inference, inferred networks were compared to the synthetic networks used to generate the data by calculating Sensitivity and False Discovery Rate (FDR). In this work, we have used two experimental microarray datasets in HUVECs: siRNA disruptant and TNF time course microarray datasets. These two microarray datasets were prepared previously. To prepare the siRNA disruptant microarray dataset, Hurley et al. had selected 400 siRNA targets, including transcription factors, signalling molecules, receptors and ligands that are related to endothelial cell biology. HUVECs were perturbed by siRNA treatment against each of the selected target RNAs. The global variations in transcript abundance resulting from the siRNA-mediated knockdowns were then measured by CodeLink UniSet Human 20K Bioarray microarrays. These data are publicly available from the Gene Expression Omnibus (GEO) database with accession number GSE27869. The siRNA disruptant microarray dataset used in this study is a small subset of the data containing 400 samples for 379 RNAs that were specifically selected for their relevance to Rel/NFkB transcription factors, as described previously. To prepare the TNF time-series microarray dataset, HUVECs had been treated with the pro-inflammatory growth factor TNF for 24 hours. Samples were then harvested at 0, 1, 1.5, 2, 3, 4, 5 and 6 hours after treatment and were prepared in triplicate. In each of the three replicates, transcript abundance was measured by CodeLink UniSet microarrays at each time point. These data are publicly available from the GEO database with accession number GSE27870. 
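The Sensitivity and FDR scores used throughout can be sketched as set operations on directed edge lists (a minimal illustration; the edge tuples are hypothetical):

```python
def score_network(true_edges, inferred_edges):
    """Sensitivity = TP/(TP+FN); FDR = FP/(TP+FP), over directed edges."""
    true_set, inferred_set = set(true_edges), set(inferred_edges)
    tp = len(true_set & inferred_set)   # correctly inferred edges
    fp = len(inferred_set - true_set)   # spurious edges
    fn = len(true_set - inferred_set)   # missed edges
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, fdr
```

Scoring the reversed networks from the edge-direction experiment above only requires mapping each inferred edge (a, b) to (b, a) before calling the function.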
The TNF-treated time course dataset used in this study is a subset of the data containing 234 differentially expressed RNAs identified by Hurley et al. Table S1: A list of 50 RNAs that were collected in both the siRNA disruptant and TNF time course datasets. (XLSX) Table S2: Interactions inferred from the siRNA disruptant data of 50 RNAs using ssMIKANA. 130 interactions were identified by ssMIKANA. Each interaction is defined by a regulator gene (parent) pointing to a target gene (child) with an interaction coefficient showing the strength of the interaction. (XLSX) Table S3: Interactions inferred from the TNF time course data of 50 RNAs using tsMIKANA. 204 interactions were identified by tsMIKANA. Each interaction is defined by a regulator gene (parent) pointing to a target gene (child) with an interaction coefficient showing the strength of the interaction. (XLSX) Table S4: Interactions inferred from the combined dataset of 50 RNAs using cMIKANA. 738 interactions were identified by cMIKANA from the combination of the siRNA disruptant and the TNF time course data of 50 RNAs. Each interaction is defined by a regulator gene (parent) pointing to a target gene (child) with an interaction coefficient showing the strength of the interaction."}
+{"text": "Phase transition widely exists in the biological world, for example in the transformation of cell cycle phases, cell differentiation stages, and disease development. Such a nonlinear phenomenon is considered as the conversion of a biological system from one phenotype/state to another. Studies on the molecular mechanisms of biological phase transition have attracted much attention, in particular on different genotypes (or expression variations) in a specific phase, but with less focus on cascade changes of genes' functions (or system state) during the phase shift or transition process. However, it is a fundamental but important mission to trace the temporal characteristics of a biological system during a specific phase transition process, which can offer clues for understanding the dynamic behaviors of living organisms. By overcoming the hurdles of traditional time segmentation and temporal biclustering methods, a causal process model (CPM) is proposed in the present work to study biological phase transition in a systematic manner: first, we make gene-specific segmentations of time-course expression data by developing a new boundary gene estimation scheme, and then infer functional cascade dynamics by constructing a temporal block network. After computational validation on synthetic data, CPM was used to analyze the well-known Yeast cell cycle data. It was found that the dynamics of the boundary genes are periodic and consistent with the phases of the cell cycle, and the temporal block network indeed demonstrates a meaningful cascade structure of the enriched biological functions. In addition, we further studied protein modules based on the temporal block network, which reflect temporal features in different cycles. All of these results demonstrate that CPM is effective and efficient compared with traditional methods, and is able to elucidate essential regulatory mechanisms of a biological system even with complicated nonlinear phase transitions. 
In the biological world, a phase transition can be defined as the transformation of a biological system from one phenotype or state to another, where different phenotypes can be mapped to distinct states. For example, the cell cycle is known to have four distinct phases: G1, S, G2 and M; cell differentiation contains different stages such as cell proliferation, growth arrest and mature differentiation; and cancer development mainly involves three steps: mutation, promotion and invasion. Obviously, analysing those biological phase transitions will offer valuable clues for understanding life and its dynamics. Therefore, a fundamental but important question is how to trace the temporal characteristics or dynamics of a biological system during a particular phase transition process. The study of the molecular mechanisms of biological phase transition has attracted much attention. As is well known, one gene generally has multiple roles in biological processes, but which role it plays at a specific time is still unclear. Thus, identifying a gene functional group or module, which is composed of cooperative genes in biological processes or pathways, can reveal the functional specificity of individual genes or network modules. On the other hand, nowadays there is rich information on biological processes. The rapid accumulation of temporal gene expression data provides us with the opportunity to unveil the mechanisms of dynamic processes behind phenotype changes. In particular, a recent work shows that a temporal dynamical model has the ability to detect the presence and absence of stage/phase-specific biological processes in the Yeast cell cycle and metabolic cycle. 
In summary, the construction of our causal process model (CPM) includes three steps. First, we identify specific biclusters with linear patterns, and assemble them into temporal blocks representing a group of genes and their time segmentations. Then, each temporal block is refined by conducting functional enrichment analysis. Finally, we infer the sequential or cascade relations between temporal blocks by a graphical model among two groups of genes. Through various experiments, we demonstrate the effect of our method on gene-specific temporal segmentation. In particular, on the Yeast cell cycle data, we show that the phase division based on CPM is more efficient and effective than the segmentation based on the traditional CCC-biclustering method. An error-bounded biclustering method (EBB: Error-Bounded Biclustering) is used to enumerate so-called error-bounded linear patterns, e.g. the traditional shifting pattern and scaling pattern. Unlike traditional time segmentation methods requiring the same division of a time period for all genes, the biclustering proceeds by (1) discretizing the raw data matrix to a 0-1 matrix by a referred element in the data matrix and a given error bound; (2) building a suffix tree based on the 0-1 sequences encoded by rows in the above 0-1 matrix, where '0' represents a left child node and '1' represents a right child node; and (3) identifying the deepest right-only node in the suffix tree as a potential bicluster with an error-bounded linear pattern. In fact, CCC-biclustering is also an exhaustive method. As is well known, biclusters represent similar expression behaviors of a group of genes at the same time points. However, our temporal block gathers those genes with cooperative expression change during a specific time period, i.e. it finds those genes which simultaneously obtain or lose similar expression with their partner genes. 
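Step (1) and the row encoding that feeds step (2) can be sketched as follows (a simplified reading of the discretization; the reference element and error bound are assumed inputs):

```python
import numpy as np

def discretize(D, ref, eps):
    """Step (1): mark each matrix entry 1 when it lies within the error
    bound eps of the referred element ref, else 0."""
    return (np.abs(np.asarray(D, float) - ref) <= eps).astype(int)

def row_codes(B):
    """Encode each 0-1 row as a string; these are the sequences from which
    the suffix tree of step (2) is built ('0' left child, '1' right)."""
    return ["".join(str(v) for v in row) for row in B]
```

Step (3) would then walk the suffix tree over these codes looking for the deepest right-only nodes.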
Qualitatively, a temporal block is a sub-matrix of the original data that covers as many complete biclusters as possible but splits as few known biclusters as possible. According to the following concepts and definitions, the genes on the so-called temporal boundary are used to divide the whole data matrix into different sub-matrices named temporal blocks. Definition 1: Given a data matrix D = {dm,n}m\u2208I,n\u2208J and a set of gene expression patterns as biclusters, a gene g in I is on the temporal boundary at time point t in J only when its R value is larger than a given threshold \u03b8 with default value one, where R is calculated as formula (1). All boundary genes at every time point constitute a boundary set {BG(t) = {g|R >\u03b8, g \u2208 I}}t\u2208J. Definition 2: Given a data matrix D = {dm,n}m\u2208I,n\u2208J and its boundary set BG, the temporal block Bi = {Gi, Ti|Gi \u2286 I, Ti \u2286 J} should satisfy conditions (a)-(d) together with: (e) \u2200G \u2286 Gi, T \u2282 Ti, does not satisfy conditions (a)-(d); (f) \u2200G \u2286 I - Gi, T = Ti, does not satisfy conditions (a)-(d). For convenience, consider an example: genes (g1, g2, g3, g4, g5, g6) might have coherent expression on a set of time points. In order to reflect the different gene reorganization events happening at time points t2 and t3, these genes are divided into two temporal blocks during the co-expression period. This is just the over-division phenomenon in biclustering study, which can supply a multi-granularity model for overlapping patterns. Like temporal segmentation, CPM gives a non-overlapping division of the whole data. It means that one gene within one time point belongs to at most one temporal block, although this gene can belong to a different temporal block at a different time, i.e. one temporal block cannot cover any other one in CPM. 
Definition 3: Given a data matrix D = {dm,n}m\u2208I,n\u2208J and its temporal block Bi = {Gi, Ti|Gi \u2286 I, Ti \u2286 J}, the corresponding expanded temporal block satisfies a correlation condition: it additionally contains genes whose Cx,y with block genes exceeds p, where Cx,y represents the Pearson correlation coefficient between the expression profiles of two genes during the time period, and p is a threshold with a default value of 0.8. Therefore, the temporal blocks are useful for giving a global scheme of the data division, and the expanded temporal blocks are suitable for reflecting the local properties of large data. In order to extract the cascade dynamics of temporal blocks representing the sequential order of biological processes, there is a need to build a directed network among different temporal blocks whose qualitative connections are evaluated by the partial correlation. Definition 4: Given three gene expression profiles or vectors X, Y and Z, the partial correlation between X and Y under condition Z is calculated as CXY\u00b7Z = (CX,Y - CX,Z CY,Z)/sqrt((1 - CX,Z^2)(1 - CY,Z^2)), where C.,. represents the Pearson correlation coefficient. Definition 5: Given two temporal blocks B1 = {G1, T1} and B2 = {G2, T2} whose time periods are sequential, these two blocks have a link with direction from B1 to B2. The link strength between their referred gene expression profiles in the time period can be calculated from these partial correlations. This strength measurement indicates the potential partial relation from genes in a source block B1 to genes in a target block B2. It requires that the gene X in a source can directly interact with gene Y in a target, or be indirectly related to Y without the conduction from other target genes. When the link strength is larger than a threshold with default value 0.9, the connected temporal blocks are thought to have a significant causal relation. The execution program (CPM) for temporal blocks can be accessed from http://www.sysbio.ac.cn/cb/chenlab/software.htm. Based on the links (edges) with strengths (weights) among temporal blocks (nodes), the temporal block network (TBN) is constructed for deep analysis of dynamic biological processes. 
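Definition 4 matches the standard first-order partial correlation; since the exact formula in the source is elided, the textbook form can be written out as follows:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two 1-D profiles."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    (Cxy - Cxz*Cyz) / sqrt((1 - Cxz^2) * (1 - Cyz^2))."""
    cxy, cxz, cyz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (cxy - cxz * cyz) / np.sqrt((1 - cxz**2) * (1 - cyz**2))
```

If x and y co-vary only because both track z, the plain correlation stays high while the partial correlation collapses towards zero, which is what lets the temporal block network discount indirect relations.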
There are different characteristics between the proposed temporal blocks and traditional biclusters. Due to the module-in-focus property of biclustering, biclusters always overlap with each other and are smaller than the original data. For instance, in the matrix of the figure above, the temporal block containing genes g5 and g6 is one without either a disorder period or an asynchronous ending period, while another temporal block covers a disorder period, because its genes are disordered at some time points, and an asynchronous ending period, because its genes end at different time points. Furthermore, the time cost of CPM is mainly in the computation of temporal block construction by temporal bicluster mining, which is similar to CCC-biclustering with a polynomial time complexity. First of all, we analyzed CPM on synthetic data using a simple but typical strategy adopted in previous studies. Then, we analyzed CPM on the Yeast Cell Cycle data from the \u03b1-factor synchronization experiment of Spellman et al. After filtering genes (with P-values based on the F-distribution and a significance threshold of 0.05), the remaining data, denoted as YCC, with 730 genes and 18 time points, was used for further analysis. Again, we used different error bounds in {0.05, 0.1, 0.15, 0.2, 0.25} and a minimum bicluster size of 10*5 to build CPMs on the YCC data for extensive evaluations. As described before, the boundary genes can be used to trace the role-change events of a group of genes, and their number would increase greatly at a time point around the alternation of phases. In order to further confirm the efficiency of the proposed (EBB) bicluster-based segmentation method compared with other bicluster-based methods, we used temporal biclusters produced by CCC-biclustering as an alternative input; CPM then identified the correlation between the last phases of the first cell cycle and the initial phase of the latter cell cycle. 
This means that CPM can also identify the phases' relations belonging to inter-cell-cycle transitions, and has the ability to infer cascade dynamics of biological functions, such as biological processes, across multiple cell cycles. Note that the previous temporal dynamic model needs multiple datasets to deduce causal relations between biological processes. \u2022 In all temporal blocks, only TB82 spans both cell cycles, suggesting that some of its functions will hold before entering the next phase or cell cycle. \u2022 As the temporal dynamical model strongly shows the similarity of the two cell cycles after \u03b1-factor handling, CPM can distinguish them through TB39 and TB20 in the figure, and through TB82 again. This supports the need for and importance of novel temporal blocks across neighbouring functional periods, which are modelled by the gene-specific temporal segmentation integrated in CPM. The 1st cell cycle related temporal block TB39 covers the former three phases with time points 0-8 and has 12 genes expanded to 432. On the other hand, the 2nd cell cycle related temporal block TB20 covers the latter three phases with time points 9-17 and has 42 genes expanded to 400. For those two expanded gene sets, the significant phase (cell cycle)-related biological processes and pathways are listed in the table: the 1st and 2nd cell cycle related genes have shown several different biological processes annotated in GO, and the 1st cell cycle related genes are frequently observed in biological pathways annotated in KEGG and Reactome. Therefore, the two cell cycles after \u03b1-factor treatment can be thought of as two super-phases with distinct dynamical properties, which is helpful for understanding the cascade dynamics of complicated biological procedures across multiple phases or cycles. We further used independent Yeast gene expression datasets which also cover two cell cycles after the \u03b1-factor handling. 
They were downloaded from NCBI GEO with id GDS2318. In order to re-validate the cell cycle specificity of gene expression in these two temporal blocks, we used the genes in them to conduct hierarchical clustering with appropriate distance measurements. The genes in TB39 can correctly classify almost all time points into two cell cycles, disregarding the effect of potential circular expression profiles in cell cycles. According to the figure, the genes in TB20 also have good performance in clustering time points from different cell cycles. Considering the existence of missing expression values (filled with zero) for genes in the other independent datasets, we only analyzed the molecular network behind such cell cycle specificity on our main YCC dataset, in the next subsection. We next studied the \u03b1-factor treatment through the rewired structures of the protein interaction network (PIN). Given a cell cycle related temporal block TB, we had a group of genes G and obtained the interactions of these genes' encoding proteins from the STRING database, together with Yeast protein subcellular localization information. For each time period {Ti}i=1,2, we calculated the Pearson correlation coefficient of each pair of interacting proteins; combining the proteins and interactions with weights (or correlations), we extracted a PIN conducted co-expression network (PCCN). Thus, we used the genes in TB39 and TB20 with their expression profiles during the two cell cycles to build four PCCNs, so that each cell cycle related temporal block has a rewired PCCN in each actual cell cycle. In the figure, one group of coloured nodes represents genes in TB39; the genes represented by nodes in dark blue belong to TB20; while genes represented by nodes in blue belong to the overlap of these two cell cycle related temporal blocks. Each interaction edge goes from light & thin to dark & thick as its absolute weight (or correlation) increases. 
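The PCCN construction, weighting each protein-protein interaction by the Pearson correlation of the two genes' profiles over a given time period, can be sketched as follows (the gene names and data are hypothetical):

```python
import numpy as np

def build_pccn(expr, interactions, period):
    """Weight each (a, b) interaction by the Pearson correlation of the
    two genes' expression profiles restricted to time period [lo, hi)."""
    lo, hi = period
    return {(a, b): float(np.corrcoef(expr[a][lo:hi], expr[b][lo:hi])[0, 1])
            for a, b in interactions}
```

Building this once per cell cycle for the genes of TB39 and TB20 yields the four rewired networks compared here; the edge weight magnitude then drives the thin-to-thick rendering.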
By network visualization with Cytoscape, we easily identified protein modules, denoted C1, C2 and C3 in the figure; among them, C1 is a Nucleosomal protein complex extracted from the information of Yeast protein complexes. It is interesting that three different changes of the network rewiring profile correspond to the specificities of proteins in the cell cycle related temporal blocks. \u2022 For proteins in TB39, they are densely connected to module C3 in just the first cell cycle but not the second one; while C3 always has fewer contacts with proteins in TB20, so that the presence and absence of the relation with module C3 would be a temporal trace for functional specificity in the first cell cycle. \u2022 For proteins in TB39 or TB20, they present strict interactions with module C2 in the first cell cycle but lose such a relation in the following cell cycle. This means that, in our mathematical model, TB39 mainly captures the presence of the relation with C2 while TB20 tends to mine the disappearance of the relation with the same module. \u2022 Dissimilar from the above two conditions, protein complex C1 strengthens its relation with proteins in TB20 in just the second cell cycle but not the first one. Hence, the varying relation with protein complex C1 can be a candidate temporal trace for functional specificity in the second cell cycle. Therefore, attractively, protein interaction modules and their relations with other proteins can be thought of as dynamical markers of cell cycles in phase transitions. The proposed temporal blocks with the causal process model are indeed effective in efficiently uncovering the molecular basis of a biological transition. To overcome the drawbacks of traditional time segmentation and temporal biclustering methods, the causal process model (CPM) was proposed to study biological phase transitions in a systematic way. 
The experimental results validated that CPM can effectively identify gene-specific temporal segmentations by developing a boundary gene estimation scheme, and efficiently infer the potential cascade dynamics of biological processes by constructing a temporal block network. CPM has not only identified the phase-specific dynamic biological processes which were found by the traditional dynamic temporal model, but also revealed cell cycle specific rewiring of the protein interaction network which was missed in previous studies. All in all, along with improvements in bicluster enumeration and sparse causal network inference, the proposed CPM can both detect unknown phase transitions in real biological systems and identify candidate functional cascade dynamics with temporal traces during the transformation of a biological system. The authors declare that they have no competing interests. TZ and LC conceived the research. TZ performed the study. LC supervised the project. TZ drafted a version of the manuscript. All authors wrote and approved the manuscript."}
+{"text": "Inference of gene-regulatory networks (GRNs) is important for understanding the behaviour and potential treatment of biological systems. Knowledge about GRNs gained from transcriptome analysis can be increased by multiple experiments and/or multiple stimuli. Since GRNs are complex and dynamical, appropriate methods and algorithms are needed for constructing models describing these dynamics. Algorithms based on heuristic approaches reduce the effort of parameter identification and the computation time. The NetGenerator V2.0 algorithm, a heuristic for network inference, is proposed and described. It automatically generates a system of differential equations modelling the structure and dynamics of the network based on time-resolved gene expression data. In contrast to a previous version, the inference considers multi-stimuli multi-experiment data and contains different methods for integrating prior knowledge. The resulting significant changes in the algorithmic procedures are explained in detail. NetGenerator is applied to relevant benchmark examples evaluating the inference for data from experiments with different stimuli. Also, the underlying GRN of chondrogenic differentiation, a real-world multi-stimulus problem, is inferred and analysed. NetGenerator is able to determine the structure and parameters of GRNs and their dynamics. The new features of the algorithm extend the range of possible experimental set-ups, results and biological interpretations. Based upon benchmarks, the algorithm provides good results in terms of specificity, sensitivity, efficiency and model fit. The adaptation of biological systems to external and environmental stimuli usually triggers a complex interaction network of intracellular biochemical components. That includes changes in gene expression at both the mRNA and protein level. 
Considering a certain stimulus as an input signal to the system and mRNA or protein levels as outputs, the connecting network may include interactions between signal transduction intermediates, transcription factors and target genes. Generally, the term \u201cgene-regulatory network\u201d (GRN) summarises genetic dependencies, which describe the influence on gene expression by transcriptional regulation. The inference (elucidation) of GRNs is important for understanding intracellular processes and for potential manipulation of the system either by specific gene mutations, knock-downs or by treatment of the cells with drugs, e.g. for medical purposes. Towards a full understanding in terms of a complete network, partial models of the network give intermediate results which help to refine the knowledge and to design new experiments. Still, many gene-regulated cellular functions, e.g. stem cell differentiation, depend on more than one stimulus and the cross-talk within the GRN. Earlier versions of the algorithm have been applied, for example, to Candida albicans and to the analysis of the Aspergillus fumigatus infection process. In the present article, we propose NetGenerator V2.0, an extended version of the algorithm which enables the use of multi-stimuli multi-experiment data, thus increasing the number of addressable biological questions. This causes significant changes in the algorithmic procedures, especially the processing of this kind of data as well as the structure and parameter optimisation. Also, some other updated features will be outlined, for example the different modes of prior knowledge integration, further knowledge-based procedures, options of graphical outputs, changed non-linear modelling and re-implementation in the programming language / statistical computing environment R. The successful application of the novel NetGenerator will be shown by inference of relevant multi-stimuli multi-experiment benchmark examples, namely systems with a different degree of cross-talk. 
Two aspects will be assessed: (i) reproduction of the benchmark systems (data and structure) and (ii) refinement / extension of a network structure by combination of different experimental data. Furthermore, the applicability of NetGenerator to a real-world problem is presented: after describing necessary data pre-processing steps, the underlying GRN of chondrogenic differentiation of human mesenchymal stem cells, a process influenced by the two stimuli TGF-beta1 and BMP2, is inferred. In the following subsections the necessary background knowledge and methodology of the NetGenerator algorithm is described. In comparison to previous publications, this includes new, updated and more detailed algorithmic procedures. First, the motivation and the goals are defined by considering the biological data. Necessary steps of data pre-processing are also explained within this subsection. Subsequently, ordinary differential equations and some of their properties are presented as a means for modelling the dynamics of gene-regulatory networks. Then the heuristic approach of the algorithm is explained, including the structure and parameter identification (here: optimisation-based determination). The next important topic will be the consideration of prior knowledge, followed by a subsection about the numerical simulation as well as the representation of modelling and graphical results. Finally, some important options and their influence on the algorithm are presented. Gene expression time series data as required by NetGenerator are typically derived from microarray measurements. Before starting the network inference, raw microarray data have to be processed in a series of steps. The three main steps are displayed in the Figure. Microarray pre-processing applies multiple procedures to remove non-biological noise from the data and to estimate gene expression levels. 
Custom probe-sets, as assembled in earlier work, are used for this purpose. LIMMA (moderated t-statistics) operates on contrast terms, defined by subtracting the control group at each time point. LIMMA returns a ranked table for all genes containing columns for gene name, fold-change and adjusted p-values. Differentially expressed genes are selected by a combination of an adjusted p-value cut-off and a fold-change criterion. Gene selection (\u201cfiltering\u201d) is the important second step of processing, since reliable network inference is only feasible for a sufficient number of measurements per gene. This number may differ between the experiments e\u2009=\u20091,\u2026,E. The dimensions of the data are therefore determined by Te, the number of experimental time points, N, the number of time series, and M, the number of inputs. Furthermore, NetGenerator provides the option of introducing additional artificial data points by cubic spline interpolation. Time series standardisation is the last processing step, including centering and scaling of each time series. The centering procedure subtracts the original initial value at the starting time point from all values such that the transformed time series starts from zero. In the subsequent scaling procedure each time series is divided by its maximum across all provided experimental data. This leads to gene-wise scaled data and gene expression time series varying within \u22121 and 1. The resulting data provided to the NetGenerator algorithm are stored in the corresponding data structures. The NetGenerator algorithm infers dynamical models of GRNs. Their general non-linear dynamics can be described by a set of first-order time-invariant ordinary differential equations (ODEs) with initial conditions over a given time range, t being the independent variable, which is dropped in further equations. 
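As a rough sketch of the centering and scaling step described above (assuming, for simplicity, a single time points \u00d7 genes matrix rather than data spread over multiple experiments):

```python
import numpy as np

def standardise_time_series(data):
    """Gene-wise standardisation as described for NetGenerator:
    centre each time series by subtracting its initial value, then
    scale by the maximum absolute value so that every series varies
    within [-1, 1]."""
    data = np.asarray(data, dtype=float)      # shape: (time points, genes)
    centred = data - data[0, :]               # every series now starts at zero
    scale = np.max(np.abs(centred), axis=0)   # gene-wise maximum deviation
    scale[scale == 0] = 1.0                   # avoid division by zero
    return centred / scale

# hypothetical raw data: two genes measured at four time points
raw = np.array([[1.0, 5.0],
                [2.0, 4.0],
                [3.0, 3.0],
                [5.0, 1.0]])
scaled = standardise_time_series(raw)
print(scaled[:, 0])  # first gene starts at 0 and peaks at 1
```

The real algorithm takes the maximum across all experiments of a gene; the single-matrix case above illustrates the same transformation.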
The description is valid for a certain time range starting at t0 from the initial conditions of the state variables, with N being the number of state variables, M the number of inputs and P the number of parameters. Even though NetGenerator has a non-linear modelling option, the core mechanisms are based on linear modelling. Under the assumption that most networks can be considered linear and time-invariant, the differential equation system in (1) can be modified, resulting in the linear state-space equation system with the system or interaction matrix containing the elements ai,j, i,j\u2009\u2208\u2009N. These elements are of particular interest because they describe the dynamics and the coupling of state variables: activation (ai,j\u2009>\u20090) or inhibition (ai,j\u2009<\u20090) of state variable xi by xj does not change the value of the originating state variable xj. Under the assumption that the behaviour of a GRN is sufficiently described by indirect transcriptional events and not by a conversion of material, this linear description is considered appropriate. In the ith iteration step of the outer loop, already i\u2009\u2212\u20091 time series have been included in the model as sub-models, and N\u2009\u2212\u2009i + 1 time series remain to be included. The ith state equation (sub-model) contains connections from state variables, with Ni being the indices of state-state connections including the self-regulatory term ai,ixi, and connections from inputs, with Mi being the indices of input-state connections for the considered state variable xi. That means that only the parameters of sub-models have to be identified. Since |Ni|\u2009\u2264\u2009N and |Mi|\u2009\u2264\u2009M, the inner loop starts with basic models for the remaining time series containing only self-regulation, one input term as well as connections from \u201cfix\u201d prior knowledge if available, see the respective subsection. Those basic structures can be extended by further incoming connections (\u201cgrowing\u201d) from already included sub-models and further inputs. 
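A minimal simulation of such a linear state-space model can be sketched as follows; the 3-gene interaction matrix, the input coupling and the step stimulus are hypothetical illustrations, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 3-gene network: the interaction matrix A encodes
# activation (a_ij > 0) and inhibition (a_ij < 0); B couples the
# external stimulus u(t), here a unit step, into the system.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -1.0,  0.0],
              [ 0.0, -0.5, -1.0]])
B = np.array([[1.0], [0.0], [0.0]])

def u(t):
    return np.array([1.0])          # step input switched on at t = 0

def rhs(t, x):
    # linear state-space equation: dx/dt = A x + B u(t)
    return A @ x + B @ u(t)

sol = solve_ivp(rhs, (0.0, 10.0), y0=np.zeros(3), t_eval=np.linspace(0, 10, 50))
print(sol.y[:, -1])  # approaches the steady state [1, 0.8, -0.4]
```

Gene 1 is driven by the stimulus, activates gene 2, which in turn inhibits gene 3; the negative diagonal entries act as degradation terms.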
Every structural change requires a parameter identification of the active connections with respect to the considered time series, as will be explained later in the corresponding subsection. For every different set of parameters the resulting model needs to be simulated, that is, the numerical solution of an initial value problem has to be found, as will be described later in another subsection. Since the algorithm aims at a low number of parameters, i.e. small |Ni| and |Mi|, the basic sub-model which reproduces one of the remaining time series best is chosen for further improvement, for details see Figure:\u2022\u201cGrowing\u201d: further connections added\u2022\u201cHigher Order\u201d: increase sub-model order\u2022\u201cPruning\u201d: connections removedIn the improvement step \u201cgrowing\u201d is not restricted to connections from time series that are already included in the model. For describing the influence of time series that have not yet been included as sub-models, the corresponding measured and interpolated data are used as inputs. Those connections form global feedbacks in the final model. The increase of the dynamical order within the description of a time series is realised by an rth-order integrator chain, i.e. r\u22121 additional equations or intermediate state variables, allowing for reproduction of processes with more complex time courses. It should be emphasised that by applying this approach the number of parameters is not increased, but on the other hand the number of state variables becomes larger than the number of time series data. In that case only the state variable with the highest order in such a sub-model is to be compared to time series data. 
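The \u201cgrowing\u201d step can be illustrated with a toy greedy selection on derivative data. This is only a schematic stand-in for NetGenerator's actual structure optimisation; the function name, the candidate scoring by least-squares residual and the stopping threshold are all assumptions:

```python
import numpy as np

def grow_submodel(dx_i, x_i, X_others, u, max_conn=2, tol=0.05):
    """Greedy 'growing' sketch: start from self-regulation plus one input
    term and repeatedly add the candidate regulator that most reduces the
    least-squares residual, stopping when the improvement is small.
    dx_i: estimated derivative of the target series; X_others: columns are
    candidate regulator series; u: the input series."""
    active = []                                   # indices of added regulators
    def residual(cols):
        F = np.column_stack([x_i, u] + [X_others[:, j] for j in cols])
        _, res, _, _ = np.linalg.lstsq(F, dx_i, rcond=None)
        return res[0] if res.size else 0.0
    best = residual(active)
    while len(active) < max_conn:
        cand = [(residual(active + [j]), j)
                for j in range(X_others.shape[1]) if j not in active]
        new_best, j = min(cand)
        if best - new_best < tol * best:          # no significant improvement
            break
        active.append(j)
        best = new_best
    return active

# toy data: the derivative depends on the series itself, the input and regulator 1
rng = np.random.default_rng(0)
x_i = rng.normal(size=100)
X = rng.normal(size=(100, 3))
u = rng.normal(size=100)
dx = -0.5 * x_i + 0.3 * u + 0.9 * X[:, 1] + 0.01 * rng.normal(size=100)
print(grow_submodel(dx, x_i, X, u))  # regulator 1 is selected first
```

NetGenerator additionally refits the parameters by simulating the model after each structural change; the residual shortcut above only mimics the candidate ranking.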
Still, for the sake of simplicity all following algorithmic procedures are described for first-order sub-models. In this way the dynamics of a certain sub-model are described by a final system matrix. In terms of the iterative process of including sub-models, the different elements of the system matrix describe forward, local feedback and global feedback connections: elements below the main diagonal become forward connections, whereas the main diagonal elements represent self-regulation. All the previously described structural procedures and the corresponding parameter identification are controlled by a-priori defined settings and options of the algorithm. Some of them balance network complexity against the error between measurement and simulation. For example, additional connections are rejected if they do not improve the objective function value to a significant extent, while on the other hand connections are removed only if they do not worsen the result significantly. Further important options of the algorithm are explained in the respective subsection. The parameter values of an active sub-model are identified by a non-linear optimisation, minimising the error between simulated and measured time series data of multiple experiments. The initial parameters required for this optimisation are obtained by a linear regression: for one specific first-order state variable xi, equation (3) can be rewritten such that the unknown parameters satisfying the measured data in an optimal way can be determined by linear regression. Microarray time series data. hMSC from bone marrow were commercially obtained (Lonza) and cultured as described previously, with dexamethasone, 10 ng/mL TGF-beta1 and, if applicable, 50 ng/mL BMP2 added to induce differentiation. Time-dependent gene expression was studied under three experimental conditions: (i) following treatment with TGF-beta1 (\u201cT\u201d), (ii) following treatment with TGF-beta1 + BMP2 (\u201cTB\u201d) and (iii) untreated hMSC as a control. 
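The two-step idea (linear regression on derivatives for initial parameters, then non-linear optimisation of the simulation error) can be sketched for a single hypothetical sub-model dx/dt = a\u00b7x + b\u00b7u with a step input; the toy data and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy single-gene sub-model dx/dt = a*x + b*u with a unit step input u(t) = 1.
t = np.linspace(0, 5, 30)
a_true, b_true = -1.2, 0.9
x_meas = b_true / -a_true * (1 - np.exp(a_true * t))   # analytic step response

# Step 1: linear regression on numerical derivatives gives an initial guess.
dx = np.gradient(x_meas, t)
F = np.column_stack([x_meas, np.ones_like(t)])          # columns: x, u
theta0, *_ = np.linalg.lstsq(F, dx, rcond=None)

# Step 2: non-linear optimisation of the error between simulation and data.
def sim_error(theta):
    a, b = theta
    sol = solve_ivp(lambda s, x: a * x + b, (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0] - x_meas

fit = least_squares(sim_error, theta0)
print(theta0, fit.x)   # regression guess, then refined (a, b) near (-1.2, 0.9)
```

The regression avoids starting the non-linear optimiser from arbitrary values, which is the role it plays inside NetGenerator as well.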
At 10 different time points after addition of the stimuli, RNA was isolated from three technical replicates per time point and measured on Affymetrix HG-U133a microarrays. Pre-processing and filtering. Raw microarray data was pre-processed as described in the corresponding sub-section. This included the use of custom chip definition files and the subsequent processing steps. Modelling a small-scale GRN using microarray data requires adequate filtering of genes. We tested all genes for differential expression, used functional annotation and expert knowledge. Differentially expressed genes were identified for both experiments by computing adjusted p-values using LIMMA. All genes with an adjusted p-value less than 10\u221210 and an absolute fold-change greater than 2 for any time point were considered significant. Using those criteria, 192 genes were found to be differentially expressed compared to control as well as between \u201cT\u201d and \u201cTB\u201d. Subsequently, we selected from this group 10 annotated transcription factors and associated 5 of them with our investigated process. Those genes may be involved in promoter-dependent regulation, which is important for binding site predictions. Furthermore, we added COL2A1, ACAN and COL10A1, all three essential marker genes of chondrocyte differentiation, which encode essential structural proteins of the extracellular matrix. Prior knowledge. Prior knowledge was taken into account as described in the corresponding sub-section. Gene interactions were retrieved from the Pathway Studio ResNet Mammalian database. We obtained 6 gene-gene and 5 input-gene regulatory interactions. Gene-gene interactions were passed as flexible prior knowledge to NetGenerator. Input-gene interactions were not integrated. Additionally, potential gene interactions were determined by binding site predictions. 
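The combined selection criterion (adjusted p-value cut-off plus fold-change) can be illustrated as follows. The Benjamini-Hochberg adjustment here is a simple stand-in for the moderated statistics that LIMMA computes, and the numbers are made up:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (a stand-in for LIMMA's
    moderated statistics)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty(n)
    out[order] = np.clip(adj, 0, 1)
    return out

def select_genes(pvals, fold_changes, p_cut=1e-10, fc_cut=2.0):
    """Keep genes with adjusted p-value below p_cut AND absolute
    fold-change above fc_cut, mirroring the combined criterion."""
    adj = benjamini_hochberg(pvals)
    return np.where((adj < p_cut) & (np.abs(fold_changes) > fc_cut))[0]

pvals = np.array([1e-15, 1e-12, 0.2, 1e-14])
fc    = np.array([3.0,   1.5,   4.0, -2.5])
print(select_genes(pvals, fc))   # genes 0 and 3 pass both criteria
```

Gene 1 fails the fold-change criterion and gene 2 fails the p-value criterion, so neither would enter the inference, just as described for the 192-gene selection.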
For this purpose, we obtained PWMs for SOX9, MEF2C and MSX1 from the Transfac database and promoter sequences 1000 bp upstream from the transcription start site. Both PWMs and sequences were loaded into matrix-scan from RSAT, which was performed with default options and organism-specific estimation of background nucleotide frequencies. The resulting significant binding sites are listed in the table of the Additional file. Network inference of multi-stimuli (TGF-beta1 and BMP2) multi-experiment data. After pre-processing, the input and time series data of the microarray experiments were passed to NetGenerator for automatic network inference. According to the experimental set-up, the available data sets describe two experiments: only TGF-beta1 stimulation (\u201cT\u201d) and TGF-beta1 + BMP2 stimulation (\u201cTB\u201d). This is mirrored by the two distinct input data matrices, both describing the respective stimuli by step functions. Model evaluation and validation. The inference results of the chondrogenic system, the GRN and the graphical comparison of time series, are displayed in the Figures; connections were scored by the percentage of models in which they occurred. For validation, we performed resampling, which is based on random perturbation of the time series data with a Gaussian noise component. Network interpretation. SOX9 exhibits a central role in this chondrogenic network and is activated by both TGF-beta1 and BMP2. This indicates a complementary effect of both stimuli on the expression of SOX9. Activated SOX9 drives the expression of its target genes COL2A1, ACAN and COL10A1. MSX1 was also connected to COL10A1 in the inferred network. The NetGenerator algorithm for automatic network inference from multi-input multi-experiment time series data and prior knowledge, described in the methods section, will be classified and distinguished from other methods in the next sub-section. Therefore, its properties will be reviewed and justified, showing advantages and disadvantages relative to other approaches. 
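The resampling validation described above can be sketched generically: perturb the data with Gaussian noise, re-run inference, and count how often each connection reappears. The noise level, the toy correlation-based \u201cinference\u201d and all thresholds are illustrative assumptions:

```python
import numpy as np

def connection_frequencies(data, infer, n_resamples=100, noise_sd=0.05, seed=0):
    """Resampling sketch: perturb the time series with Gaussian noise,
    re-run the user-supplied inference function `infer`, and report how
    often each connection re-appears. High-frequency connections are
    taken as more reliable."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_resamples):
        noisy = data + rng.normal(scale=noise_sd, size=data.shape)
        for edge in infer(noisy):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_resamples for e, c in counts.items()}

# toy 'inference': connect gene j -> i when their series correlate strongly
def toy_infer(data, r_cut=0.9):
    c = np.corrcoef(data.T)
    n = c.shape[0]
    return [(j, i) for i in range(n) for j in range(n)
            if i != j and abs(c[i, j]) > r_cut]

t = np.linspace(0, 1, 50)
data = np.column_stack([np.sin(6 * t), np.sin(6 * t) + 0.01, np.cos(6 * t)])
freq = connection_frequencies(data, toy_infer)
print(freq.get((0, 1), 0.0))   # the near-identical pair is found almost always
```

Any inference routine with the same signature could be plugged in place of the toy correlation rule; the frequency ranking is what carries over to the paper's validation.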
Our discussion contains a wide spectrum of other methods, but will only go into detail for the ones closely related to NetGenerator. Also, further specifications of NetGenerator will be summarised without a detailed comparison to other methods. Good review articles on methods for automatic inference of GRNs can be found in the literature. Very often, as in the case of the core elements of NetGenerator, GRNs are based on linear modelling, i.e. the behaviour of one variable depends on a linear combination of other variables. Still, the method can be a combination of either a probabilistic or a deterministic approach as well as algebraic correlation modelling (equation systems) or dynamic modelling. In the case of probabilistic modelling, which is especially covered by static and dynamic Bayesian networks (see the aforementioned review articles), the inference is based on the application of probability distributions to describe the uncertainties or noise inherent in GRNs. Beside the differences in the mathematical approach, probabilistic modelling includes the determination of statistical parameters and therefore generally requires more data replicates in comparison to deterministic modelling approaches such as NetGenerator. Deterministic linear modelling has also been applied to automatic network inference; in contrast to the large number of possible parameters (N2 + M\u00b7N) in the model description (2), only a small number of parameters has to be determined. In contrast to all these methods, we propose the NetGenerator algorithm dealing with the problem of data number and sparsity in a different way. The algorithm does not infer the network structure and parameters in one go. Instead we applied a heuristic approach of explicit structure optimisation, which iteratively generates a system of sparsely coupled sub-models. In that way, the GRN property of possessing more or less hierarchical input-to-output structures is reproduced. Thus, only the parameters of sub-models describing one time series have to be determined. 
A major drawback of regression-based solutions of linear differential (difference) equation systems is the necessity of applying numerical derivatives of small-sample-size and noisy data, which have a strong influence on the resulting network and modelled dynamics. NetGenerator uses a different solution, whereby the regression just provides initial parameters for a non-linear optimisation of an objective function of the least-squares type. Overall, the final dynamic network can be obtained with a lower computational effort. In the resampling analysis, (i) the majority of connections, especially the ones of prior knowledge and predicted binding sites, occur with a high frequency, which can be considered a measure of reliability, and (ii) the ranking of the frequencies can be used in interpreting the results with regard to biological hypotheses. Overall, this shows the importance of an extension of NetGenerator to deal with multiple data sets. The means to integrate prior knowledge (fix and flexible) into the network inference is a distinctive feature of the extended NetGenerator algorithm, achieved by modifying the objective function. This feature can reduce the complexity of the structure optimisation, although it strongly depends on the origin and quality of the given knowledge. For our example of chondrogenic differentiation, we exemplarily showed network inference using flexible prior knowledge about regulatory interactions extracted from a database (Pathway Studio). The graphical evaluation of the inferred network showed very good reproduction of the proposed prior knowledge. Further predicted connections could be associated with potential regulatory binding sites generated from sequence data. Apart from the linear modelling presented in detail, the ability of NetGenerator to infer a non-linear model has been mentioned as a further option. 
The additional sigmoid function describing saturation in gene expression has been proven successful before. Besides the many advantages and possible application areas, there are minor restrictions of NetGenerator: it should be applied to pre-processed data without high correlations, it infers networks from measured time series data, and due to the heuristic approach it cannot be proven that the global solution was found. The latter can be improved by decreasing the influence of noisy data using a bootstrap (resampling) approach, see the chondrogenesis example. We presented the novel NetGenerator algorithm for automatic inference of GRNs, which applies multi-stimuli multi-experiment time series data and biological prior knowledge, resulting in dynamical models of differential equation systems. This heuristic approach combines network structure and parameter optimisation of coupled sub-models and takes into account the biological properties of those networks: indirect transcriptional events for information propagation, a limited number of connections and mostly hierarchical structures. The analysis of benchmark examples showed a good reproduction of the networks and emphasised the biological relevance of inferred networks with a different degree of cross-talk. The ability to infer a real-world example based on multi-stimuli multi-experiment data was shown by application of NetGenerator to a system of growth factor-induced chondrogenesis. The authors declare that they have no competing interests. MW and SGH drafted the manuscript. SGH and DD contributed to the development and programming of the NetGenerator algorithm and software as well as to the mathematical and modelling background. MW and RG contributed to data processing, application of NetGenerator to examples, statistical evaluation and the biological interpretation. SV contributed to the generation of the benchmark systems and their artificial data. 
EJvZ contributed to experimental set-ups, measurements and biological interpretation of the chondrogenic investigation. All authors read and approved the final manuscript. Figure: \u201cLimited cross-talk\u201d example, time courses. Comparison of the \u201climited cross-talk\u201d (LCT) network time courses. Each panel displays the results of one gene: the simulated time course (solid line), interpolated measurements (dashed line) and the measured time series (dots) for both data sets (Experiment1 and Experiment2). Figure: \u201cNo cross-talk\u201d example, time courses. Comparison of the \u201cno cross-talk\u201d (NCT) network time courses. Each panel displays the results of one gene: the simulated time course (solid line), interpolated measurements (dashed line) and the measured time series (dots) for both data sets (Experiment1 and Experiment2). Table: Results of RSAT. Results of the RSAT matrix-scan tool using Transfac PWMs and genomic DNA sequences from Ensembl. Each row represents a predicted binding site with Transfac motif (\u201cPWM\u201d), target gene, start and end coordinates, the matched sequence, match score (\u201cWeight\u201d) and associated p-value. Figure: Chondrogenesis system, time courses. Comparison of the chondrogenesis system time courses. Each panel displays the results of one gene: the simulated time course (solid line), interpolated measurements (dashed line) and the measured time series (dots) for both data sets (\u201cT\u201d and \u201cTB\u201d)."}
+{"text": "We focus here on a publicly available DNA microarray time series, representing the transcriptome of Drosophila across evolution from the embryonic to the adult stage. This paper lies in the context of modeling the evolution of gene expression away from stationary states, for example in systems subject to external perturbations or during the development of an organism. We base our analysis on experimental data and proceed in a top-down approach, where we start from data on a system's transcriptome and deduce rules and models from it without a priori assumptions. In the first step, genes were clustered on the basis of similarity of their expression profiles, measured by a translation-invariant and scale-invariant distance that proved appropriate for detecting transitions between development stages. Average profiles representing each cluster were computed and their time evolution was analyzed using coupled differential equations. A linear and several non-linear model structures involving a transcription and a degradation term were tested. The parameters were identified in three steps: determination of the strongest connections between genes, optimization of the parameters defining these connections, and elimination of the unnecessary parameters using various reduction schemes. Different solutions were compared on the basis of their abilities to reproduce the data, to keep realistic gene expression levels when extrapolated in time, to show the biologically expected robustness with respect to parameter variations, and to contain as few parameters as possible. We showed that the linear model did very well in reproducing the data with few parameters, but was not sufficiently robust and yielded unrealistic values upon extrapolation in time. In contrast, the non-linear models all reached the latter two objectives, but some were unable to reproduce the data. 
A family of non-linear models, constructed from the exponential of linear combinations of expression levels, reached all the objectives. It defined networks with a mean number of connections equal to two, when restricted to the embryonic time series, and equal to five for the full time series. These networks were compared with experimental data about gene-transcription factor and protein-protein interactions. The non-uniqueness of the solutions was discussed in the context of plasticity and cluster versus single-gene networks. In the field of gene expression, DNA microarray techniques provide the simultaneous expression levels of many--sometimes all--genes in a cell sample, usually relative to those in a reference sample. These data have been widely used to model the dynamics of gene expression networks. However, such inverse problems are typically degenerate, admitting multiple solutions. A first possibility to handle the degeneracy of the solutions is to use a priori knowledge about the gene expression network, so as to limit the solution space. We take here a different approach, using biology-based constraints, and ask whether it could also reduce degeneracy. One biological constraint considered here is the robustness of the solutions with respect to parameter variations. Another biological constraint is related to the stability of the solutions when extrapolated in time. Even though the available data usually cover only a part of the system's life, it is reasonable to assume that the expression levels continue to be of the same order of magnitude, never becoming unrealistically large or negative. 
The same property is expected to be built into the model: the solutions must take realistic values until the next perturbation, development stage, or the end of the organism's life. We analyze in this paper the effects of adding these biological constraints on the modeled dynamics of gene expression, particularly in the framework of the development of an organism. More specifically, we investigate whether these constraints limit the choice of the model structure and/or its parameter values. We use Drosophila as model organism, and model the time evolution of its transcriptome using coupled, linear and non-linear, differential equations. With the DNA microarray technique one measures the intensities \u03bcI(\u03c4) emitted by the fluorophores attached to the mRNAs, labeled here by \u03bc, which are extracted from a specific sample taken at a given time \u03c4 and are hybridized to their complementary sequence attached to a microarray. These intensities are usually expressed relative to the intensity measured in a reference sample, which defines the normalized intensities \u03bcX(\u03c4). Time series are obtained when considering the sample at N different time points \u03c4i. We made here the common assumption that the RNA concentrations and fluorescence intensities are proportional, i.e. that \u03bcX(\u03c4) represents the RNA concentration up to a gene-dependent scaling factor. The label \u03bc will refer indistinguishably to the gene product--RNA or protein--or the gene wherein the gene product is encoded. We use here a DNA microarray time series of male Drosophila melanogaster; it contains data for Drosophila of all ages. We considered here on the one hand the complete time series of 67 time points, and on the other hand the part of the time series covering the embryonic phase, which contains the 31 first time points. It is technically impossible to model the evolution of the expression levels of thousands of genes, given the few data points available. 
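The normalization of raw intensities by a reference sample described above can be illustrated with made-up numbers:

```python
import numpy as np

# Hypothetical raw intensities for two genes at three time points,
# plus the reference-sample intensities for the same two genes.
I     = np.array([[120., 300.],
                  [240., 150.],
                  [480.,  75.]])
I_ref = np.array([120., 300.])

X = I / I_ref            # normalized intensity: RNA level relative to reference
log_ratio = np.log2(X)   # log-ratios are the usual working quantity
print(X[:, 0])           # gene 1 doubles at each time point: [1, 2, 4]
```

Under the proportionality assumption, X(\u03c4) tracks the RNA concentration up to the gene-dependent scaling factor, which is why only relative changes are interpreted.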
Moreover, even if we had a sufficient number of time points to ensure parameter identification, the problem would be degenerate, in that multiple solutions with almost the same ability to reproduce the data would exist. Indeed, many of the gene expression profiles are very similar and are thus basically indistinguishable without additional information. We therefore cluster the gene expression profiles into a limited number of distinct classes. The clustering is performed on the basis of a least-square distance D between two profiles \u03bcX(\u03c4) and \u03bdX(\u03c4) that satisfies translation invariance and scale invariance, which proved appropriate for microarray data. With these constraints of scale and translation invariance and the usual symmetry constraint, D is shown to reduce to the Euclidean distance between the profiles standardized in terms of their mean and standard deviation \u03c2, where the sign that minimizes D is chosen in Eq. 2. Based on this distance, the gene expression profiles are clustered using a hierarchical, tree-like algorithm. It starts by considering each gene as a class on its own. It then joins, at each step, the two classes for which the average distance D between any pairs of profiles from the two classes is minimum. It stops when all genes are in the same class. This clustering tree is then cut at a certain level by putting a threshold on the maximum number of classes, denoted C. The choice of this threshold is always a subjective matter and depends on the aim of the clustering. Here, the number of classes must be sufficiently low to ensure that they are manageable for modeling purposes. Moreover, to have a meaningful classification, the distances between profiles within each cluster must be sufficiently low and those between profiles of different clusters sufficiently high. Each of the C clusters labeled by c is represented by the average profile of its members. The system of differential equations that correctly models the evolution of gene expression across development stages is not known, and even less is known about equations that can model gene clusters. 
We therefore test several model structures. Assuming the system to be autonomous, we consider structures composed of a transcription term and a degradation term, where t is the real, continuous time and the dot means the derivative with respect to t. The transcription term describes the production of the gene products of cluster c, basically through the binding of transcription factors, which either activate or repress genes in this cluster. The positively defined degradation function describes the decay of the gene products of cluster c, or their removal from the system. Note that this general model, which is deterministic, represents the average behavior of the system, which is stochastic. Five model structures are studied. The first is the linear model. The other four model structures are non-linear. They resemble models that have been developed to describe the diauxie of Escherichia coli subject to a glucose-lactose shift. The first of these contains the parameters cdA, cdB, c\u03c1, c\u03b3 \u2265 0, and its degradation factor is considered to be constant. The parameters cdA weigh the effect of activators on the expression of genes from cluster c, whereas cdB weigh the effect of repressors. The transcription term is thus proportional to the product of the probability that an activator is bound to the promoter and the probability that it is not bound to a repressor. It is obtained by making the approximation that the expression of a gene can be activated or repressed by a single protein, and does not require protein complexes or cascades of interacting proteins. Another assumption is that the form of the dynamic equations remains the same for individual genes and for gene clusters. The degradation term can also be considered as dependent on gene expression levels. As in the model describing diauxie, gene products can lengthen (e.g. through stabilizing complexes) or shorten (e.g. through degradation by proteases) their period of activity. 
The parameters K_cd give the influence (stabilizing or destabilizing according to their sign) of gene product d on gene product c. The above expression for the degradation term may also be used for the transcription term, while keeping the degradation factor constant; this yields the fourth model structure. Finally, the last model structure we considered has the same expression for both the transcription term and the degradation factor. To manage the large number of parameters and the non-linearity of the equations, we used a two-stage procedure for parameter identification. The first stage consists of reproducing the derivatives of the gene expression levels rather than the gene expression levels themselves. This entails considering the expression levels and their derivatives as independent variables and reducing the identification to an algebraic problem, in which the functions \u03b6_c to be minimized, which compare the estimated and measured t-derivatives of the gene expression profiles, are decoupled; they read as in Eq. 10. The t-derivatives of the measured gene expression profiles, referred to as J_c, are estimated with the cubic smoothing spline routine csaps in Matlab. The procedure used to identify the parameters and the network connectivity is inspired by earlier work; the connectivity q is defined as the average number of connections that end at a node of the network. The number of parameters defining a connection depends on the model structure. In a first step, q is considered identical for all nodes, that is, an identical number of gene classes regulates each gene class. We start by putting q = 1 and test, for each gene cluster c, all possible connections one by one. The identification of the parameters that define each connection and minimize \u03b6_c is first performed using the global optimization algorithm Direct, followed by the local optimization algorithm fmincon of Matlab. For each cluster c, the connection for which \u03b6_c is minimum is kept. This procedure is repeated for q = 2 up to q = C. 
Note that each time a connection is added, the parameters defining the previously fixed connections are freed and reoptimized. In the second stage, the parameters that maintain the network defined in the previous stage and minimize the difference between measured and estimated profiles, rather than their derivatives, are identified. More precisely, we start with the connections determined in the q = 1 solution of the first stage, with the parameters initialized either to the values of this solution or to zero, whichever minimizes the standard deviation \u03c3 defined in Eq. 11. We then free the parameters and optimize them using the fmincon optimization algorithm of Matlab, so as to minimize the function \u03c3; the estimate of the gene expression profiles is now based on the profiles themselves rather than on the derivatives J_c only. We then repeat this procedure for the q = 2 up to q = C solutions obtained in the first stage, freeing the parameters and identifying them by minimizing the function (11); the initial values of the parameters are chosen to be those obtained for the q-1 identification, with the newly added parameters set to zero. In practice, we do not continue this procedure up to q = C, but stop it when the value of \u03c3 stops decreasing significantly, thus when no additional connection significantly improves the quality of the data reproduction. The next step consists of eliminating unnecessary parameters among the M_cd, A_cd, B_cd, L_cd, and K_cd that appear in Eqs. (5-9). We require that at least one connection per gene class be kept. We proceed by dropping one parameter at a time, according to different criteria detailed in what follows. Note that we also tried to drop several parameters at the same time, but the results were worse. 
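The bottom-up connection search of the first stage can be sketched as follows. This is a deliberately simplified, matching-pursuit-style stand-in: each new connection is a linear term whose coefficient is fit on the current residual of the derivative profile, whereas the paper re-optimizes all parameters jointly with Direct and fmincon. All names here are hypothetical.

```python
# Hypothetical sketch of the greedy connection search for one cluster:
# starting from one connection, regulators are added one at a time, keeping
# at each step the candidate that most reduces the squared error between the
# measured derivative dXc and the linear prediction.
def greedy_connections(dXc, X, q_max):
    """dXc: derivative profile of cluster c; X: dict of candidate regulator
    profiles; returns a list of (regulator, coefficient) pairs."""
    residual = list(dXc)
    chosen = []
    for _ in range(q_max):
        best = None
        for name, prof in X.items():
            if any(name == n for n, _ in chosen):
                continue  # each regulator enters at most once
            denom = sum(v * v for v in prof)
            if denom == 0:
                continue
            # least-squares coefficient of this regulator on the residual
            w = sum(r * v for r, v in zip(residual, prof)) / denom
            err = sum((r - w * v) ** 2 for r, v in zip(residual, prof))
            if best is None or err < best[0]:
                best = (err, name, w)
        if best is None:
            break
        _, name, w = best
        chosen.append((name, w))
        residual = [r - w * v for r, v in zip(residual, X[name])]
    return chosen
```

In the paper the search additionally stops once the derivative deviation no longer decreases significantly; here the caller simply fixes q_max.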
The reduction procedure was stopped when the value of \u03c3 exceeded 0.5, as the measured and estimated gene expression profiles then started to differ too much. Several reduction procedures were tested. They consist of eliminating at each iteration: 1) the parameter of smallest absolute value (this procedure will be referred to as \u03a8v); 2) the parameter which, when dropped, leads to the smallest increase of \u03c3 (\u03a8\u03c3); 3) the parameter that is most sensitive to a perturbation; 4) the least sensitive parameter in the Fisher sense; 5) the most sensitive parameter in the Fisher sense. To determine the most or the least sensitive parameter in the Fisher sense, we compute the Fisher information matrix F, with entries F_ij, i, j = 1,...,p, where p is the total number of parameters. The parameter i to be eliminated is the one that is correlated with at least one other parameter j and corresponds to the minimum value of F_ii (in point 4) or its maximum value (in point 5). After a parameter is eliminated, the remaining parameters are optimized again using the local optimization algorithm fmincon, and the elimination procedure is then reiterated. Note that reductions 1, 2 and 4 are standard procedures, whereas reductions 3 and 5 attempt to eliminate parameters that are sensitive to perturbations. Four criteria were used to evaluate the quality of the estimated profiles: 1) the number of remaining parameters; 2) the standard deviation \u03c3 between estimated and experimental profiles, defined in Eq. 11; 3) the robustness of the solution with respect to perturbations of its parameters, which is estimated by adding to each parameter in turn \u00b1 1% of its value, determining which perturbation leads to the largest deviation between measured and estimated expression levels, and computing the value of the standard deviation obtained with this perturbed parameter, denoted \u03c3_pert; 4) the stability of the solution, evaluated by extrapolating the estimated profiles up to a time \u03c4_end and computing the difference between the average value of the estimated gene expression levels over the measuring period and the extrapolated level. The time \u03c4_end corresponds to 3 times the measured time span and at most the Drosophila life time, i.e. 80 days. The 4,028 gene expression profiles across Drosophila development, presented in section 2.a, are classified using a translation-invariant and scale-invariant distance measure and a hierarchical tree-like classification procedure, as detailed in section 2.b. To obtain the final classes, we cut the clustering tree by putting a threshold on the maximum number of classes, so as to ensure that the distances between profiles within each cluster are sufficiently low, that the distances between profiles of different clusters are sufficiently high, and that the number of classes is sufficiently low to allow the identification of the models' parameters on the basis of the available data points. Taking these criteria into account, we took the number of classes C to be equal to 12 for the full time series and 10 for the embryonic stage. The clusters for the embryonic stage and for the complete time series are shown in the Additional file. The clustering is based on the expression levels \u03bcX(\u03c4) defined in Eq. 1 rather than their logarithms, because these levels are the naturally appearing functions in our differential equation model given in Eq. 4. Note that our clustering procedure is based on a distance measure between profiles that is adapted to our modeling purposes, and that it differs from previously described ones. The first stage of parameter identification is a bottom-up procedure devised to fix the gene expression network. It starts with a single connection per cluster and ends with a constant number of connections q per cluster, as described in section 2.d. This algebraic procedure attempts to minimize \u03b6, i.e. the standard deviation between the time derivatives of estimated and experimental gene expression profiles (Eq. 10), and is stopped when \u03b6 does not significantly decrease any more. The results are given in the corresponding figures. The value of \u03b6, for the same connectivity q, is higher for the embryonic stage than for the full time series. This is due to the fact that in the embryonic stage the profiles are less smooth and thus the derivatives are higher than in the full time series. Some model structures show \u03b6-values remaining almost constant as the connectivity increases; the best structure minimizes \u03b6 reasonably well. Having fixed the gene expression network, the second step consists of identifying the parameters that minimize \u03c3, i.e. the standard deviation between estimated and measured gene expression profiles (Eq. 11), rather than their time derivatives. The results confirm those obtained in the previous step; the linear model structure m_lin has by far the fewest parameters, and, interestingly, two of the non-linear structures behave similarly. Based on these results, we determined the minimum connectivity q_m that must be considered to yield a fair reproduction of the expression profiles, and beyond which the reproduction does not significantly improve. By visual inspection of the corresponding figures, q_m = 3 for the embryonic stage and q_m = 7 for the full time series. 
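The parameter-reduction schemes introduced in section 2.f can be sketched in simplified form. The example below follows the \u03a8v scheme: drop the parameter of smallest absolute value, re-optimize, and stop when \u03c3 exceeds 0.5. The functions refit and sigma are hypothetical caller-supplied stand-ins for the fmincon re-optimization and the deviation of Eq. 11.

```python
# Hypothetical sketch of the Psi_v reduction scheme: repeatedly drop the
# parameter of smallest absolute value, re-optimize the rest, and stop when
# the deviation exceeds the 0.5 threshold quoted in the text.
def reduce_parameters(params, refit, sigma, threshold=0.5):
    """params: dict name -> value; refit re-optimizes a parameter set;
    sigma evaluates the deviation between measured and estimated profiles.
    Returns the reduced parameter set."""
    current = dict(params)
    while len(current) > 1:  # keep at least one parameter (cf. one connection per class)
        smallest = min(current, key=lambda k: abs(current[k]))
        trial = {k: v for k, v in current.items() if k != smallest}
        trial = refit(trial)          # re-optimize the remaining parameters
        if sigma(trial) > threshold:  # data reproduction degrades too much
            break
        current = trial
    return current
```

The \u03a8\u03c3 variant differs only in the choice of which parameter to try dropping: instead of the smallest absolute value, it tries each parameter and keeps the drop that increases sigma the least.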
Clearly, the network needs more connections to describe reliably the complete time series than just the embryonic stage. The solutions obtained with these values of q_m are evaluated according to the criteria listed in section 2.g. As seen in the corresponding tables, the solutions reach \u03c3-values below 0.5, and the linear model m_lin achieves this with the smallest number of parameters. This result would a priori push towards the selection of the linear model m_lin and the rejection of the non-linear models. However, when perturbing the parameters by \u00b1 1%, the linear model appears to be by far the least robust. This is particularly visible for the full time series, where the perturbed \u03c3 values are 300 times larger than the unperturbed ones; the non-linear structures behave much better in this respect. Furthermore, the stability of the solutions, which is evaluated by extrapolating the estimated profiles in time (see Eq. 13), also favors the non-linear structures. The principal criterion that the solutions have to fulfill is the reproduction of the data. However, to ensure biological significance, they must moreover be reasonably robust against parameter perturbations. Note that network models involving individual genes for which a few specific parameters are sensitive to perturbations may not be immediately disqualified. However, we do not work here with networks of individual genes, but rather with clusters containing hundreds of genes. Therefore, if a parameter that represents the strength of the interaction between these groups of genes is sensitive to perturbations, it is not one but a large number of genes that deviate from their intended expression profiles. We thus require our models to be robust with respect to all (tested) parameter variations. Hence we can conclude that the linear model m_lin is unsuitable, as it is non-robust and non-stable; only the three non-linear structures remain. 
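The \u00b1 1% robustness check described above can be sketched as follows; sigma_of is a hypothetical stand-in for re-simulating the model and computing the deviation of Eq. 11 for a given parameter set.

```python
# Hypothetical sketch of the robustness criterion: perturb each parameter in
# turn by +/- 1% of its value and record the worst resulting deviation
# between measured and estimated profiles (sigma_pert in the text).
def sigma_pert(params, sigma_of):
    """params: dict name -> value; sigma_of evaluates the deviation for a
    parameter set; returns the worst-case deviation over all perturbations."""
    worst = 0.0
    for name, value in params.items():
        for factor in (1.01, 0.99):
            perturbed = dict(params)
            perturbed[name] = value * factor
            worst = max(worst, sigma_of(perturbed))
    return worst
```

A solution whose worst-case deviation explodes under such tiny perturbations, as reported for the linear model, fails the robustness criterion even if its unperturbed fit is excellent.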
In the identification stage described in sections 3.b-c, we determined the number of connections q_m per gene necessary to minimize \u03c3. However, some of the genes probably require fewer connections than others, and some of the connections fewer parameters. To identify the parameters that may be dropped without altering the data reproduction too much, we applied the five different reduction schemes described in section 2.f. Three of them are commonly used schemes: \u03a8v drops the parameters of smallest absolute value; \u03a8\u03c3, the parameters that increase \u03c3 the least; and the Fisher-based scheme drops the least sensitive parameters in the Fisher sense. The two remaining schemes aim at selecting solutions that are the most robust with respect to variations of the parameters: they drop parameters that are the most sensitive to infinitesimal and finite parameter variations, respectively. Note that the parameters are dropped one by one in all these schemes; the scores reached when dropping several parameters simultaneously are not as good. We compared the quality of the data reproduction (\u03c3), the robustness (\u03c3_pert) and the stability upon extrapolation (\u03c7) of the reduced solutions obtained by the five different reduction procedures applied to the three remaining model structures. The procedure \u03a8\u03c3, which drops the parameters that increase \u03c3 the least, leads almost invariably to the solutions that best reproduce the data. The procedure \u03a8v, which drops the parameters of smallest absolute value, does also very well in this respect, whereas the other three procedures generally perform less well. Surprisingly, the reduced solutions obtained by the procedures \u03a8\u03c3 and \u03a8v are also robust against perturbations and stable upon extrapolation, usually even more so than the solutions obtained with the procedures that are nevertheless designed to select robust solutions. The commonly used Fisher matrix-based procedure does not outperform \u03a8\u03c3 and \u03a8v for any of the criteria considered, and is of the same order as the perturbation-based procedures. Note that, although some of these procedures are a priori quite different, they can yield similar results in some cases. 
However, it has to be remembered that they have a common part: they both drop correlated parameters, which may explain the similarity. Among the reduced solutions, the best are those that satisfy the criterion \u03c3 \u2264 0.5 for every cluster c and have: the lowest value of the deviation between estimated and experimental profiles, \u03c3; the lowest value of this deviation after perturbation of the parameters, \u03c3_pert; the lowest value of the extrapolated expression level, \u03c7; and the lowest number of parameters, p. The best solutions so selected are indicated in the corresponding table. For the embryonic time series, even the non-reduced solution obtained with one of the structures already satisfies these criteria, and therefore no reductions are indicated in the table. The most reduced of these best solutions have an average of two connections per node for the embryonic time series, and five connections per node for the full time series. The embryonic gene expression network is thus much sparser than the network of the full time series. This reflects the fact that the embryonic profiles are much simpler to reproduce than those of the full series. Indeed, the four development stages of Drosophila show different gene expression profiles, where the transition from one stage to the next is encoded by abrupt changes. For the embryonic time series, the best solutions are obtained with \u03a8\u03c3 and \u03a8v; for the full time series, they are obtained with \u03a8v. One of the candidate model structures, which had previously been developed to describe prokaryotic systems, did not fit the Drosophila gene expression profiles and was rejected. Two reasons can be invoked to explain why this biology-based structure did not work. The first is that it was developed for prokaryotes, where the transcriptional and translational regulation machineries are much simpler. 
For instance, one single repressor (activator) is able to repress (activate) gene expression in such systems, whereas in eukaryotes large biomolecular complexes are usually required. The second reason is that this transcription term is physical for gene expression networks involving single genes, but not for gene clusters. Some arguments have been presented to justify the use of this model structure for gene clusters, but they did not hold here. The three remaining non-linear model structures include a transcription term and/or a degradation factor that is constructed from ratios of exponential terms; these m_exp structures are much more flexible and encode the possibility that gene regulation is driven by biomolecular complexes. The three model structures considered differ by the constancy of the transcription term or degradation factor, and all reproduced the data very well. In addition to fair data reproduction, the biologically crucial properties that make the m_exp family of model structures adequate for modeling Drosophila gene expression across development are their generally robust behavior against parameter variation and their large stability upon extrapolating the solutions in time. These structures were therefore selected for further analysis. To get rid of the unnecessary parameters and connections in the m_exp model structures, several reduction procedures were defined and applied. The two simplest procedures, \u03a8v and \u03a8\u03c3, where the former amounts to dropping the parameters of smallest absolute value and the latter to keeping the parameters that increase \u03c3 the least, gave in general the best results in terms of data reproduction, robustness against parameter perturbations and stability upon extrapolation in time. The common procedure of dropping the least sensitive parameters in the Fisher sense, i.e. the most robust with respect to infinitesimal parameter variations, was in general less efficient. We can conclude that the m_exp structures, the two-step parameter identification procedure developed here, and the two reduction schemes \u03a8v and \u03a8\u03c3 are all together appropriate for modeling the Drosophila gene expression across development. We finally selected the best reduced solutions, for the embryonic stage and the full time series. These solutions turned out to have all required characteristics: good data reproduction, robustness against parameter variations, stability when extrapolating in time, and a reasonably low number of parameters. Note that parameter reduction does not have the general tendency of increasing the robustness and stability of the non-linear models (see the Additional file). Although overfitting of the models' parameters can never be totally excluded in the absence of thorough cross validation, our reduction procedure is designed to avoid this problem. Indeed, the original solutions, which might suffer from overfitting, are reduced until their \u03c3 values start to exceed a threshold value, above which the correct reproduction of the data is no longer guaranteed. The parameters of the most reduced solutions can thus be assumed not to be overfitted. Note also that the number of parameters of the reduced solutions is much smaller than the number of data points. However, our results suffer from an important drawback, namely that many gene expression networks and parameter values can be found which have approximately the same performance in terms of our different criteria and cannot be ranked on the basis of the available data. We would like to emphasize that the biological constraints we have introduced, namely the robustness against parameter variations and the stability of the solutions upon extrapolation in time, limit the possible model structures and parameter ranges, and thus partially, but not completely, lift the degeneracy. Without additional data, it is impossible to determine which of the networks is the most realistic. 
The inclusion of other types of data and subsequent analysis of whether this renders the solution unique will be the focus of future research. The application of our approach to gene expression across the development of other organisms, or to systems subject to external perturbations such as stress, will also lead to relevant insights. Notwithstanding the nonuniqueness of our predicted cluster networks, they compare favorably with experimental data. Indeed, focusing on the well-studied gene subset involved in muscle development, we observed that many of the partners of the experimentally identified transcription factor-gene and protein-protein interactions are members of the same gene cluster. In many other cases, the clusters these partners belong to are connected in the predicted networks. These results will be thoroughly analyzed and confirmed in further studies on the basis of experimental data on other gene subsets. It can also be argued that the non-uniqueness of the network is actually a correct result, due to the inherent plasticity of gene expression networks, where the same external perturbation can lead to different gene expression responses. The authors declare that they have no competing interests. AH, JA and MR designed the research and analyzed the results. AH performed the modeling. MR wrote the paper. All authors read and approved the final manuscript. Supplementary material is provided as an additional file."}
+{"text": "Interpreting gene expression profiles often involves statistical analysis of large numbers of differentially expressed genes, isoforms, and alternative splicing events, at either static or dynamic scales. Reduced sequencing costs have made dense time-series analysis of gene expression using RNA-seq feasible; however, statistical methods for temporal RNA-seq data are poorly developed. Here we review current methods for identifying temporal changes in gene expression using RNA-seq, which are largely limited to static pairwise comparisons of time points and fail to account for temporal dependencies in gene expression patterns. We also review the few recently developed methods specific to temporal dynamics in RNA-seq; the application and development of such methods is ongoing, but still in its infancy. We further cover microarray-specific temporal methods and transcriptome studies in the early digital technologies that bridged traditional microarrays and the new RNA-seq. The first type of data analysis is simple pairwise group comparison from a static sampling experiment, where samples are collected from distinct biological groups without respect to time. In this setting, there are typically two-group comparisons and multi-group comparisons (more than two groups). The second type of experimental design in gene expression profiling is a temporal experiment with or without replicates, where samples are collected over a time window to characterize the temporal dynamic spectrum and the underlying developmental or progressive biological mechanisms. This category also includes large-scale longitudinal data with repeated measurements of gene expression profiles, such as temporal dynamic studies of disease progression and age-related psychiatric data. 
The primary goal of whole-transcriptome analysis is to identify, characterize, and catalog all the transcripts expressed within a specific cell or multiple tissues, either at a given static stage or across dynamic time-varying stages. As with microarrays, there are two major distinct types of experimental design for the identification of differentially expressed genes in RNA-seq. For the first type of data, several methods have been proposed; for example, a Fisher's exact test procedure has been proposed as a pooling method. When the read counts of expression are relatively large, methods based on transformation and standard normal Gaussian approximations fit very well; when expression levels have small read counts, such approximate asymptotic approaches, which transform discrete counts into continuous variables, are less accurate. The main drawback of pooling methods, the GLM, or the LIMMA approach for time-series RNA-seq data is that even if the labels of the samples from one time point to another are reordered, the results will be identical. In the second type of experimental design in RNA-seq, the dynamic processes are more likely to be related to the underlying mechanisms of disease progression and developmental processes. Nevertheless, the application and development of methods for RNA-seq data have mostly focused on static two-group or multi-group comparisons, whereas more complicated experimental design settings such as temporal dynamics have not been addressed. Due to the lack of methods to precisely analyze temporal dynamics, the existing alternative and intuitive solutions are: (I) pairwise methods, (II) pooling analysis, which runs all samples simultaneously in one model, and (III) clustering analysis, which groups co-expressed similar patterns. 
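To illustrate the pooling idea at its simplest, the sketch below computes a one-sided Fisher's exact test for a single gene's read counts in two conditions (gene reads vs. remaining library reads, as a hypothetical 2x2 table). Production tools use more elaborate count models; this is only the textbook hypergeometric computation.

```python
# Hypothetical sketch: one-sided Fisher's exact test on a 2x2 count table
# [[a, b], [c, d]], computing P(X >= a) under the hypergeometric null.
from math import comb

def fisher_one_sided(a, b, c, d):
    n = a + b + c + d
    row1, col1 = a + b, a + c
    total = comb(n, row1)
    p = 0.0
    # sum hypergeometric probabilities for tables at least as extreme as observed
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / total
    return p
```

As the surrounding text notes, such a test is label-order blind: permuting time point labels leaves every pairwise table, and hence every p-value, unchanged.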
Mainly, there are three different types of time course experiment: (1) a single-series time course, to explore a single temporal transient pattern; (2) a multi-series time course, to simultaneously explore differences in expression levels among biological conditions (vertically) and expression patterns over time within each condition (the horizontal temporal trajectory). Visualized this way, consecutive patterns of vertical differences across conditions can be shown in temporal fashion, to test whether at least one pair of the multiple conditions differs over a series of time points. This multi-series time course design has been widely used in stimulus-response time-series experiments to identify altered expression of multiple responses to different stimulus conditions. (3) The third type of time course is periodicity, including cell-cyclic and circadian rhythmic patterns. Periodic time course data can also incorporate multiple factors at a time, as in microarray factorial cell-cycle time-series data. Most recently published studies of time course experiments in RNA-seq are inconclusive from a statistical viewpoint for inferring temporal patterns. With the popularity of such rich systematic resources and reduced costs of profiling gene expression, it is clear that complicated experiments with multiple factors and parameters, e.g., transcriptomic experiments over a time window, lie in the coming years. To address this type of data, formal statistical tests for temporal analysis will be more advantageous for understanding the causes and consequences of various human diseases observed over time in clinical applications or developmental processes. 
This manuscript comprehensively reviews statistical methodologies, focusing on a variety of dynamic time-series expression profiles in RNA-seq, and outlines important questions in the field, speculating on how these statistical methods can analyze and interpret time-series expression profiles in the context of human disease progression and the accompanying evolutionary implications. Comparisons have considered both a large sample size (n = 12) and a small sample size (n = 5). Unfortunately, most current RNA-seq experiments have at most three replicates in common use. Although the cost of sequencing has dropped substantially, and experimental designs with appropriate replicates, depending on the hypotheses of the studies, have been suggested in terms of power and reliable outcomes, there are still very few biological replicates available in RNA-seq experiments, either static or dynamic. It is now well known that a developmental transcriptome is highly dynamic and that a previous gene expression profile can affect the subsequent one, which in turn can influence the pattern of differential expression (DEX). Currently, one of the most popular computational bioinformatics tools in this field for identifying gene- and transcript-level expression is Cufflinks. In summary, the initial statistical techniques, as indirect temporal dynamic methods, provided a first glimpse into the identification of DEX in RNA-seq in terms of simple pairwise comparisons and multi-group/factor comparisons. However, interest is growing in a variety of species and cell types, and scientific questions are being asked about developmental stages, cell-cyclic, and circadian rhythmic regulation; RNA transcriptome studies to identify temporally differentially expressed (TDE) genes involve single-series, multi-series, and periodic designs. Various time course experimental designs are gradually revealing relevant biological mechanisms and functional pathways in temporal dynamic processes, as in conventional microarray. 
Therefore, the development and application of statistical methodologies to uncover and capture TDE across dynamic time points is both timely and critical in RNA-seq. This is a comprehensive review of static and dynamic algorithms, including oscillations in the cell-cycle as well as computational modeling in traditional technologies. The field is advancing so rapidly that a brief review cannot include all of the work done in the past 5 years; this is a sampling of a few highlights of statistical methodologies for differential analysis, from conventional hybridization-based array experiments, which provide a continuous fluorescent-intensity gene-expression profile, to the attractive new RNA-seq technology, with count-based measurements from tag sequences, in both static and dynamic time course experimental design settings. Time course cDNA microarray experiments have been widely used to study temporal profiles of cell dynamics from a genomic perspective and to discover the associated gene regulatory relationships. The output gene expression matrix in a hybridization-based microarray is basically a large-scale dataset filled with numbers related to the fluorescent signal intensity between each gene and a sample on the chip. These raw data are pre-processed in a lower-level analysis, including background detection, normalization, probe set summarization, and filtering out genes or outlier samples according to specific criteria. For instance, the coefficient of variation is used to filter out genes whose value is less than the user-set cut-off, and missing values between genes and samples are estimated by imputation methods. Potential outlier samples with bad quality control measures are discarded beforehand. Some exploratory approaches, such as box-plots, mean-difference plots, correlation plots, and principal component analysis, can be performed as visual diagnostic measures for outliers. 
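The coefficient-of-variation filter mentioned above can be sketched in a few lines; the cut-off value and the dict-of-lists layout are illustrative choices, not a standard.

```python
# Illustrative coefficient-of-variation (CV) filter: keep genes whose
# CV = std / mean across samples meets a user-set cut-off.
from math import sqrt

def cv_filter(expr, cutoff=0.2):
    """expr: dict gene -> list of expression values; returns the kept genes."""
    kept = {}
    for gene, values in expr.items():
        m = sum(values) / len(values)
        if m == 0:
            continue  # CV undefined for zero mean; drop the gene
        sd = sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
        if sd / m >= cutoff:
            kept[gene] = values
    return kept
```

Genes with nearly constant expression across samples fall below the cut-off and are removed before downstream analysis.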
After such exploratory analyses, genes and samples that are of good quality and are more meaningful targets for further exploration are kept for downstream analyses, including identification of DEX in static or dynamic cellular responses over time. Once a certain set of genes is identified as being of interest for further investigation, clustering of genes with similarly co-expressed regulation patterns and analysis of pathways and gene ontology can be performed to identify biologically meaningful information and biologically related, interacting gene regulatory networks, e.g., in disease with respect to a normal control sample at a static time point, across a time series or developmental stages, or covering expression that is periodically regulated across cell-cycles. When replicated time course microarray data are available, various statistical approaches and modifications, such as ANOVA, are employed. When characterizing temporal features in time course data, there are some drawbacks to merely considering clustering methods, in that they make no explicit use of the replicate information and they use all the slides or the means of the replicates. That is, clustering does not reveal which genes are differentially expressed among different experimental conditions, nor the temporal patterns of such DEX sets. Another limitation of gene clustering methods for time course data is that clustering does not rank individual genes based on the magnitude of change in expression levels over time, which researchers frequently want to investigate. 
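For replicated time course data, the ANOVA approach mentioned above reduces, per gene, to a one-way F statistic across time points. A minimal sketch, with made-up replicate lists as input:

```python
# Illustrative per-gene one-way ANOVA F statistic: groups is a list of
# replicate lists, one list per time point. A large F suggests that the
# mean expression differs across time points relative to replicate noise.
def one_way_F(groups):
    k = len(groups)                       # number of time points
    n = sum(len(g) for g in groups)       # total number of measurements
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Unlike clustering, this yields a per-gene score that can be ranked and thresholded, which is exactly the capability the clustering-only analyses lack.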
Large microarray data sets with 20K-40K genes are very common; however, clustering methods are not suited to handling such large input files and may not provide clear group patterns due to the inevitable noise contained in microarray experiments. Thus, in analog microarray-based experiment platforms, diverse strategies were developed and applied in the literature to address different aspects of biological questions in time course gene expression data. For many applications of the new digital RNA-seq technology, which has been rapidly replacing array-based experiments in transcriptome gene expression profiling, however, we are not able to simply plug in methods used for microarrays, as RNA-seq quantifies expression levels discretely and carries somewhat different biases from the experiments and from the normalization strategies used to adjust for artifacts. More sophisticated RNA-seq-specific algorithms and software tools are particularly important in analyzing the various RNA-seq applications accompanying the wave of data produced by this fast-moving technology. Statistical methodologies currently lack solutions for analyzing time course experiments as well as other types of outcomes, and significant efforts must be undertaken on the statistical and computational methodology front. Sophisticated, tailor-made data analysis approaches will likely play a key role in fully realizing the power to interpret the whole story of time-series RNA-seq transcriptomes in next-generation sequencing technologies. Since the pioneering work on methods for identifying periodically expressed genes as a function of the cell-cycle or circadian rhythm in microarrays, various organisms have been studied at a genome-wide level in yeasts, plants, and mammals. However, the majority of applications have focused on ODE approaches. 
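A minimal way to score periodicity, in the spirit of the Fourier-based methods for cell-cycle and circadian genes discussed above, is the magnitude of a single DFT component at a candidate period. This sketch is illustrative and assumes evenly spaced samples; real methods handle noise, missing values, and significance testing.

```python
# Illustrative periodicity score: magnitude of the discrete Fourier component
# at a given period (in sampling units) after removing the mean. Values near
# the oscillation amplitude suggest cyclic expression at that period.
from math import cos, sin, pi, hypot

def periodicity_score(profile, period):
    n = len(profile)
    m = sum(profile) / n
    re = sum((x - m) * cos(2 * pi * t / period) for t, x in enumerate(profile))
    im = sum((x - m) * sin(2 * pi * t / period) for t, x in enumerate(profile))
    return hypot(re, im) * 2 / n
```

A flat profile scores zero, while a pure cosine with unit amplitude at the tested period scores one when the series covers whole periods.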
It has been shown, from efforts at modeling the identification of clock genes in periodic time course microarrays, that many current computational methods cannot be directly applied to RNA-seq data because of its quantification and data type. Therefore, efficient methods should be established to analyze periodic temporal dynamics by capturing the linear or non-linear dynamic behaviors of gene modules and interactions during different stages or successive time points in systems biology. Identified biological disruptions or abnormal patterns of clock genes in the timing system are being further investigated as impairments of metabolic regulation. In RNA-seq, a short time-series typically comprises k = 4\u201310 time points and a longer time-series 11\u201320 time points, very often irregularly spaced. Although a microarray platform has relatively more samples than RNA-seq, a few replications with n \u2264 5 has occasionally been the common standard. Gene expression values at different time points may be inherently correlated, especially if a common reference design is used or a common pool of cells is sampled. Among digital technologies, SAGE and CAGE have been popular tools. The main difference from microarrays is that they provide tag-based expression level quantification: digital measurement enables the quantification of the expression levels of novel genes and alternatively spliced transcripts without prior knowledge. cDNA microarrays have been successfully applied in transcriptome studies, but they provide partial information about abundance based on fluorescent intensity, whereas the expression level in SAGE is quantified by a short sequence-tagging enzyme that gives rise to 15-bp tags to uniquely identify a transcript. Some studies associated with the identification of altered temporal or/and spatial expression patterns were initially investigated using digital expression. 
But proper methods to facilitate the temporal analysis of the large-volume data generated by digital technologies have been poorly addressed compared to the comprehensive gene expression microarray approach, which has been the most commonly used technology so far. The next-generation sequencing currently revolutionizing the transcriptome field thus presents advantages and great potential over previous technologies by allowing for more in-depth studies. Detailed reviews of methods for identifying DEX in the early digital technologies are available in the literature. The newly emerging RNA-seq technology enables the study of the transcriptional programs of various types of time-series experimental designs underlying the development and evolution of complex organisms. A more robust and precise methodology is needed to fully understand the complex temporal patterns that bear on biological questions in developmental systems, and so to better understand the causal relationship between gene expression and time. This is not a trivial problem, and it is difficult to study without sophisticated statistical methods. The identification and characterization of RNA-seq-specific time-series expression profiles has been a long-standing challenge. As an alternative, static methods, limited but intuitive, have been applied to RNA-seq time-series data. Accordingly, the development of methodologies for RNA-seq time-series data has emerged as a new frontier in this field, following the statistical methods that detect changes of expression in pairwise comparisons. Prior to exploring and discussing dynamic methods in depth, an understanding of the pros and cons of the different static methods is critical for enumerating the specific benefits of the temporal aspects and the future direction of applying robust new dynamic methods to time-series RNA-seq data in a given biological context. 
The advantages and limitations of the existing static methods for the identification of DEX were comprehensively reviewed in the previous sections. The important question of how many time points (data points) and replicates should be used to identify a particular temporal expression pattern might be raised at the beginning of the experiment to increase power and efficiency, since the cost of sequencing time-series data is not insignificant. Beyond such ad hoc approaches, our group is currently developing a statistical method to handle factorial time course experiments (multi-series time courses) as well as the identification of periodically regulated genes. Another interesting topic to be addressed in RNA-seq is genetic regulatory networks at the level of alternative splicing (AS) diversity as well as gene-level quantification, and, more recently, small sample settings or large-scale longitudinal data with repeated measurements in clinical applications. For example, one may be interested in identifying temporally differentially expressed genes characterized by different enzyme effects in stimulus-response experiments on human diseases. For time course data with multiple parameters at distinct levels of experimental or biological factors, such as different tissues, strains, or drug treatments, ANOVA and LIMMA have generally been applied to microarray experiments that are relatively large compared to RNA-seq. To date, however, multi-series time course RNA-seq assays, i.e., factorial time course experiments, have not been explored from a statistical methodology perspective, even though such studies play a pivotal role in revealing temporal mechanisms of expression in disease-specific target genes under the stimulus of drug treatments. Thus, RNA-seq-specific methodologies for identifying temporal expression in stimulus-response experimental settings will offer a range of specialized applications for gene therapeutic effects that have not been available with conventional approaches. 
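The ANOVA-style treatment of time as a factor mentioned above can be sketched in a few lines. This is a minimal illustration with hypothetical replicate values, not the ANOVA/LIMMA pipelines cited in the text:

```python
import statistics

def anova_f(groups):
    """One-way ANOVA F-statistic with time points as factor levels and
    replicates as observations (a minimal sketch of the idea)."""
    k = len(groups)                      # number of time points
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-time-point sum of squares (the time effect)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    # within-time-point sum of squares (replicate noise)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical expression of one gene: 3 time points, 3 replicates each
flat_gene = [[5.1, 5.0, 4.9], [5.0, 5.2, 5.1], [5.1, 4.9, 5.0]]
dex_gene  = [[2.0, 2.1, 1.9], [5.0, 5.2, 5.1], [8.1, 7.9, 8.0]]
print(anova_f(flat_gene) < anova_f(dex_gene))  # True: a real time effect dominates
```

A large F-ratio flags a gene whose mean expression shifts across time points relative to its replicate noise, which is exactly the ranking information the text notes clustering cannot provide.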
Characterization of temporal dynamics at the level of AS diversity will soon be a very promising and prominent research area, examining transcript variants individually rather than a unified gene-level quantification. In addition, to study cell-cycle or circadian rhythmic variation with periodicity, an appropriate statistical methodology must be selected to identify significantly cell-cycle-regulated genes of the genome in a given organism. Currently there are no effective methods for defining the subset of predominantly periodically expressed genes in genome-wide RNA-seq analysis from a factorial periodic time course experiment with multiple conditions at a time. From a DEX perspective, as with steady-static methods, a normalization procedure must be incorporated prior to the detection of temporal differences, to adjust for the various artifacts of the experiments. Although intra-samples at a static time point are assumed to be independent, the challenge in making distinct intra- and inter-samples comparable is to incorporate the fact that a gene\u2019s expression at time t is dependent on its expression at the previous time point t-1. This remains elusive because normalization methods all assume that samples are independently distributed, which is not true of the time-series reality. It has become apparent that reliable detection of DEX between two different or multiple groups at a static stage (or time point) is the key to understanding complex biological functions and to identifying known and novel disease-specific genes between distinct groups such as those that lead to various types of tumors. 
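The normalization caveat above can be made concrete. Below is a hedged sketch of a median-of-ratios size-factor calculation (in the spirit of DESeq-style normalization, not the exact published procedure); note that, exactly as the text warns, it treats every sample as independent and ignores the dependence of time t on time t-1:

```python
import statistics

def size_factors(counts):
    """Median-of-ratios size factors (a DESeq-style sketch).
    counts[sample][gene] are raw read counts, all positive here for
    simplicity. This treats samples as independent, the very assumption
    the text flags as questionable for consecutive time points."""
    n_genes = len(counts[0])
    # pseudo-reference sample: per-gene geometric mean across samples
    ref = [statistics.geometric_mean([s[g] for s in counts])
           for g in range(n_genes)]
    # each sample's factor: median of its gene-wise ratios to the reference
    return [statistics.median(s[g] / ref[g] for g in range(n_genes))
            for s in counts]

# hypothetical counts: the second sample is sequenced twice as deeply
factors = size_factors([[10, 20, 30, 40], [20, 40, 60, 80]])
print(factors)  # the ratio between the two factors is 2
```

Dividing each sample's counts by its factor removes sequencing-depth differences, but carries no notion of temporal ordering.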
For now, the importance of detecting DEX in dynamic ways, to provide practical solutions for comprehensively exploiting temporal RNA-seq data, is underscored by the growing popularity of system-level time-series experimental studies characterizing the temporal orchestration of behaviors as a function of time in the gene regulation of complex biosystems. Through DEX based on ranking individual genes, numerous candidate genes with disease effects in aging or developmental progress can be detected. However, the exact mechanisms underlying the influence of these genes, and the relationships between individual genes in temporal regulation, must be examined further. Analysis of time-series RNA-seq data is still at an immature stage in terms of developing and applying methods to decipher the complexity of a series of observations over time. Most RNA-seq time-series studies so far have applied methods originally developed for static expression profiles, without respect to time, using simple pairwise comparisons, or have applied microarray time-series methods after a variance-stabilizing transformation. The identification and analysis of static gene expression profiles in RNA-seq have become routine, and the rapid growth of time-series studies in developmental biology and clinical disease processes raises issues for traditional methods when analyzing dynamic system mechanisms. In general, the limited numbers of time points and replications in experimental designs are due to the expense of sequencing and the limited number of available biological RNA samples. This difficulty in obtaining a proper number of samples across many time points and biological replicates has prevented statistical methodologists from establishing standards for time-series analyses, given the large dimensionality and small number of observations. 
The correspondingly large number of parameters causes matrix singularity, over-fitting, and misleading results with high false discovery rates. To overcome this, short time-series data can be handled by borrowing meaningful information across genes with shrinkage methods; by incorporating prior information on representative temporal patterns, or on the sequential correlation of gene expression between consecutive time points, in Bayesian approaches; by incorporating prior information from other genomic levels such as ChIP-seq and methylation data; and by collecting meta time-series data, although challenges regarding the heterogeneity of such data remain. Advanced, powerful statistical methods for identifying temporal expression in a variety of RNA-seq time course data will be useful both for wet-lab biologists/clinicians and for computational/statistical analysts who want to understand altered gene expression over time, to gain insights into biological systems, and to obtain rich information from the data. Analyses of temporal gene expression patterns may soon yield improvements in the diagnosis and treatment of tumor progression, neurodegenerative brain diseases, and other human diseases with a time window and experimental treatment conditions in gene expression profiling. This comprehensive discussion provides the first systematic review of methodologies for identifying dynamic expression profiles in time-series RNA-seq and forms a foundation for future genetic, genomic, and developmental evolutionary studies related to human disease and health. 
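As one illustration of "borrowing information across genes", here is a minimal sketch of moderated variance estimation in the spirit of empirical Bayes shrinkage (as in limma-like methods); the pooled prior and the prior degrees of freedom d_prior are crude fixed choices here, whereas real methods estimate them from the data:

```python
import statistics

def shrink_variances(raw_vars, d_gene, d_prior):
    """Moderated variances: each gene's variance is pulled toward a pooled
    prior variance, weighted by the respective degrees of freedom.
    A sketch of the shrinkage idea, not any published estimator."""
    s2_prior = statistics.mean(raw_vars)  # crude pooled estimate
    return [(d_gene * s2 + d_prior * s2_prior) / (d_gene + d_prior)
            for s2 in raw_vars]

# per-gene sample variances from very few replicates are unstable
raw = [0.01, 0.20, 1.50]
mod = shrink_variances(raw, d_gene=2, d_prior=4)
print(mod)  # extreme values are pulled toward the pooled variance 0.57
```

Shrinking unstable per-gene variances toward a common value stabilizes test statistics when only a handful of replicates per time point are available.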
Understanding dynamic transcriptomes is crucial to understanding the mechanisms of cell differentiation and ultimately providing therapeutic immune-system solutions, or to characterizing cell signaling and mitochondrial dynamics in neurodegenerative diseases such as Alzheimer\u2019s and Parkinson\u2019s from gene expression profiles, although RNA-seq has not yet been extensively utilized to characterize these diseases. The pros and cons of several static methods for identifying differentially expressed genes are listed here, including simple pairwise comparison and multi-group comparison, and the limitations of such methods in discerning temporal and spatial transcript structure and in analyzing the dynamics of transcriptome complexity are presented. Dynamic methodologies, including the periodic time course proposed here, will thus provide critical insights, from simple short time courses to retrospective studies of disease patients according to clinical characteristics. The approaches can be applied to disease-related time course RNA-seq transcriptome data or to other count-featured -omics data, such as the dynamic initiation and progression of the immune response in a given patient and the complexity and dynamics of the human brain. In microarray, a longitudinal breast cancer study identifying gene expression profiles compared between enriched cell populations and whole bone marrow from normal volunteers and from breast cancer patients after neoadjuvant chemotherapy treatment could be an ideal example of defining the correlation of disease status, response to treatment, and survival over the course of the study. 
Through what is currently the most attractive technology, RNA-seq, the identification of temporal changes in gene expression will provide a potential avenue for future studies of genetics, genomics, and systems biology in developmental processes and in time-dependent observational data such as aging, Parkinson\u2019s, or Alzheimer\u2019s disease, and for screening patients with tumor progression and dynamic infectious disease using time course experiments in various genomes. More specifically, techniques are described here for identifying temporal patterns that take into account the autocorrelation and Markovian assumption that a time-series random variable typically exhibits a high level of sequential correlation, with current expression levels dependent on past expression profiles. The more informative dynamic methodologies, considering the correlation structure across time points to identify temporal patterns, will be widely used given the advantages of RNA-seq ahead. To overcome the general lack of dependence assumptions in existing methods, and given the small number of observations in RNA-seq settings, statistically rigorous and validated approaches for time course data will lead to the discovery of dynamic response markers in gene expression profiles by accounting for proper assumptions, robustness, and biological interpretation in the gene functional pathways of systems biology at the most perturbed time point. 
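The Markovian/autocorrelation assumption described above can be quantified with a sample lag-1 autocorrelation; a minimal sketch with hypothetical expression series:

```python
import statistics

def lag1_autocorr(series):
    """Sample lag-1 autocorrelation: how strongly expression at time t
    tracks expression at time t-1 (the sequential dependence in the text)."""
    m = statistics.mean(series)
    num = sum((series[t] - m) * (series[t - 1] - m)
              for t in range(1, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

smooth = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2]   # gradual temporal trend
noisy = [1, 6, 2, 5, 1, 6, 2, 5, 1, 6]    # alternating, no smooth trend
print(lag1_autocorr(smooth) > lag1_autocorr(noisy))  # True
```

A value near 1 supports modeling the series with a first-order dependence (each time point conditioned on the previous one); values near 0 suggest the static, independence-based methods lose little.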
Moving on to temporal dynamic analysis, this review describes the first systematic and comprehensive treatment of identification methods for static and temporal dynamic patterns in RNA-seq transcriptomic data, spanning array-based experiments and the very beginnings of sequencing-based experiments. The comprehensive enterprise of DEX analysis across a class of high-throughput technologies was discussed by highlighting the identification of DEX in static and temporal dynamics in RNA-seq time-series, which has not been explored in statistical modeling approaches to identifying temporal expression, implying quantitative biological scenarios for the biomedical research community. To make this connection, this review is intended to guide the choice and use of a suitable method in a given study and to lead to a significant paradigm shift in RNA-seq methodology. It can further advance our understanding of hypotheses directing the mathematical relationship between systems-biological behavior and statistical modeling, because all existing static methods are restricted from direct application to time-series dynamic data without adaptations of modeling and estimation. Without appropriate statistical measures for temporal analysis, almost any static approach will yield significant genes, including a large number of false discoveries. Furthermore, by incorporating the advantages of RNA-seq in a parallel manner, AS events and allele-specific expression will also be of value in addressing the consequences of mis-regulation of splicing and alleles in spatial and temporal time-series expression profiling in the context of human disease development. As an important extension of these issues, a mathematical graphical model is being developed for biological systems over time, addressing how to infer the temporal dynamics of underlying networks from RNA-seq time-series data. 
For example, neuronal migration during development of the cerebral cortex requires particular exon-skipping events in a given transcript, and identification of the specific transcripts that undergo a developmentally induced AS switch in migrating neurons will play a critical role in defining the dynamics of expression profiles and the intimate connections between the regulation of AS and development. Sunghee Oh conceived the review and wrote the manuscript; Sunghee Oh, Seongho Song, Nupur Dasgupta, and Gregory Grabowski contributed to critical discussion of the mathematical approaches and biological aspects of transcriptome dynamics. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Aedes aegypti is arguably the most studied of all mosquito species in the laboratory and is the primary vector of both Dengue and Yellow Fever flaviviruses in the field. A large number of transcriptional studies have been made in the species, and these usually report transcript quantities observed at a certain age or stage of development. However, circadian oscillation is an important characteristic of gene expression in many animals and plants, modulating both their physiology and behavior. Circadian gene expression in mosquito species has been previously reported, but only for a few genes directly involved in the function of the molecular clock. Herein we analyze the transcription profiles of 21,494 messenger RNAs using an Ae. aegypti Agilent\u00ae microarray. Transcripts were quantified in adult female heads at 24 hours and then again at 72 hours and eight subsequent time points spaced four hours apart. We document circadian rhythms in multiple molecular pathways essential for growth, development, immune response, and detoxification/pesticide resistance. Circadian rhythms were also noted in ribosomal protein genes used for normalization in reverse-transcribed PCR (RT-PCR) to determine transcript abundance. We report pervasive oscillations and intricate synchronization patterns relevant to all known biological pathways. These results argue strongly that transcriptional analyses either need to be made over time periods, rather than confined to a single time point or development stage, or that exceptional care needs to be taken to synchronize all mosquitoes to be analyzed and compared among treatment groups. Circadian behavioral and expression patterns in Aedes aegypti (L), including sugar-feeding patterns, have been previously reported in the field and in the laboratory, together with their entrainment and periods. The best-characterized insect circadian clock is that of Drosophila melanogaster, in which a number of genes have been implicated in the control of circadian rhythms. 
Transcriptional analyses of these genes suggest a mechanism whereby transcriptional negative feedback loops control cyclic expression. Timeless and Period oscillate in the same phase, with TFIIH slightly ahead of dCLK and Cycle. Among all pathways composed of oscillating genes, basic energy metabolism is particularly significant. The first example of a coordinated rhythmic pattern in a biological pathway is presented in the accompanying figure; several of these patterns are similar to those reported in Drosophila, while others contrast with them, and we confirm observations previously made in Drosophila. The existence of multiple circadian oscillators has been proposed in A. aegypti mosquitoes. These oscillators are linked and work in synchrony, but can be temporarily or permanently uncoupled by changing environmental conditions or by mutations, giving rise to the behavioral patterns reported in previous publications. If a time series contains a periodic signal at a given frequency, then the periodogram exhibits a peak at that frequency with a high probability. Conversely, if the time series is a purely random process (a.k.a \"white noise\"), then the plot of the periodogram against the Fourier frequencies approaches a straight line. This algorithm closely follows the guidelines recommended for analysis of periodicities in time-series microarray data. For an expression profile Y = x0, x1, x2, ...xN-1, the autocorrelation is simply the correlation of the profile against itself with a frame shift of k data points, using the shifted index f defined as f = i + k if i + k < N and f = i + k - N otherwise. A heatmap was constructed to represent gene expression profiles in 72-104 hour old female mosquito heads, wherein all twelve timepoints are represented as columns along the abscissa. Each line on the ordinate corresponds to the M-values for a particular gene feature and appears as shades of red to black to green: bright red indicates large positive M-values (M24 > Mt, where t = 1,2,...,12), black indicates M24 \u2243 Mt, and bright green indicates large negative M-values (M24 < Mt). Within each group, the most periodic profiles are at the top and the least periodic profiles are at the bottom. 
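The periodogram behavior described above can be illustrated directly. In this hedged, pure-Python sketch with synthetic data, twelve samples spaced four hours apart span 48 hours, so a 24-hour rhythm completes exactly two cycles and should peak at the second Fourier frequency:

```python
import cmath

def periodogram(x):
    """Schuster periodogram at the Fourier frequencies k/N: a periodic
    signal yields a sharp peak at its frequency, while white noise gives
    the roughly flat (straight-line) spectrum noted in the text."""
    n = len(x)
    mean = sum(x) / n
    spec = []
    for k in range(1, n // 2 + 1):
        s = sum((x[t] - mean) * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

# a clean 24 h rhythm sampled every 4 h over 48 h (two full cycles)
rhythm = [cmath.cos(2 * cmath.pi * 2 * t / 12).real for t in range(12)]
spec = periodogram(rhythm)
print(spec.index(max(spec)) + 1)  # 2
```

The peak index directly identifies the dominant period (48 h / 2 = 24 h), which is how a periodogram test singles out circadian transcripts.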
All expression profiles were tested for periodicity by the autocorrelation test and sorted in order of decreasing correlation between an early time and a second time 24 hours later. Additional explanation of the algorithm for generating the gene expression heatmap is given in Supplemental Figure 1 (Adobe PDF file), with supporting data in Supplemental Table 1 and Supplemental Table 2 (MS Excel file). Supplemental Table 1 reports the experiment design for sample collection with date, time, pooling and replication information. Supplemental Table 2 reports the results of straight application of the periodicity tests to the reconstructed 48 h expression profiles (see Methods). Supplemental Figure 1 illustrates the process of generating the circadian expression heat map."}
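The frame-shifted autocorrelation with wraparound described in the Methods above (f = i + k if i + k < N, else f = i + k - N) can be sketched as follows; the rhythm series is hypothetical:

```python
def circular_autocorr(x, k):
    """Correlation of an expression profile with itself shifted by k
    points, wrapping around the end of the series as in the Methods:
    f = i + k if i + k < N, else f = i + k - N."""
    n = len(x)
    mean = sum(x) / n
    shifted = [x[(i + k) % n] for i in range(n)]
    num = sum((x[i] - mean) * (shifted[i] - mean) for i in range(n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# twelve 4 h samples: a 24 h rhythm repeats after a shift of 6 points,
# so the autocorrelation at that lag is exactly 1
rhythm = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1]
print(circular_autocorr(rhythm, 6))  # 1.0
```

Sorting genes by this correlation between an early time point and the point 24 hours later is precisely the ordering used for the heatmap rows.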
+{"text": "Identifying modules from time series biological data helps us understand biological functionalities of a group of proteins/genes interacting together and how responses of these proteins/genes dynamically change with respect to time. With rapid acquisition of time series biological data from different laboratories or databases, new challenges are posed for the identification task and powerful methods which are able to detect modules with integrative analysis are urgently called for. To accomplish such integrative analysis, we assemble multiple time series biological data into a higher-order form, e.g., a gene \u00d7 condition \u00d7 time tensor. It is interesting and useful to develop methods to identify modules from this tensor.In this paper, we present MultiFacTV, a new method to find modules from higher-order time series biological data. This method employs a tensor factorization objective function where a time-related total variation regularization term is incorporated. According to factorization results, MultiFacTV extracts modules that are composed of some genes, conditions and time-points. We have performed MultiFacTV on synthetic datasets and the results have shown that MultiFacTV outperforms existing methods EDISA and Metafac. Moreover, we have applied MultiFacTV to Arabidopsis thaliana root(shoot) tissue dataset represented as a gene\u00d7condition\u00d7time tensor of size 2395 \u00d7 9 \u00d7 6(3454 \u00d7 8 \u00d7 6), to Yeast dataset and Homo sapiens dataset represented as tensors of sizes 4425 \u00d7 6 \u00d7 6 and 2920\u00d714\u00d79 respectively. The results have shown that MultiFacTV indeed identifies some interesting modules in these datasets, which have been validated and explained by Gene Ontology analysis with DAVID or other analysis.Experimental results on both synthetic datasets and real datasets show that the proposed MultiFacTV is effective in identifying modules for higher-order time series biological data. 
It provides, compared to traditional non-integrative analysis methods, a more comprehensive and better view of biological processes, since modules composed of more than two types of biological variables can be identified and analyzed. Identification of biological modules plays a key role in bioinformatics because it can reveal interesting groups of proteins/genes with strong interactions, which may be related to biological functionalities. In the literature, many methods have been proposed for this purpose; one popular way is to make use of clustering algorithms. With the rapid acquisition of biological experiments from different laboratories or studies based on different databases, many higher-order biological data representing interactions between more than two types of variables can be obtained. For instance, researchers in different laboratories may be interested in analysing gene co-expression networks under different stimuli, each of which is represented as a gene\u00d7gene matrix. Integrating these matrices results in higher-order biological data, namely a gene\u00d7gene\u00d7stimulus tensor, and finding module patterns from such data tends to offer a better view of the underlying biological structures. Therefore, powerful methods which are able to detect modules with integrative analysis are urgently called for. One existing framework discovers recurrent heavy subgraphs from multiple weighted networks represented as a 3D tensor, i.e., gene \u00d7 gene \u00d7 network; in that framework, a tensor objective function is proposed and solved, the solution of which helps to discover a recurrent heavy subgraph. Time series expression data may likewise be collected under multiple conditions, for example from patients after IFN-\u03b2 injection. Joining such data together, we can form a higher-order time series tensor, e.g., a gene\u00d7condition\u00d7time tensor. Identifying modules of genes, conditions and time-points from such tensor data could offer us a better understanding of the corresponding biological processes. For example, Supper et al. 
proposed the EDISA algorithm by extending the 2D iterative signature algorithm to extract and analyze such modules. For a vector x = (x1, ..., xn)T, let x+ = {i | xi > 0, 1 \u2264 i \u2264 n} and x- = {i | xi < 0, 1 \u2264 i \u2264 n}; \u03b1 > 0 is a regularization parameter. Clearly, U and V do not have nonnegativity constraints because we allow negative entries in the tensor. MultiFacTV seeks matrices U, V and W that minimize the objective function in (1). As there are three unknown matrices, we need to solve them in an iterative fashion, i.e., splitting the optimization into three subproblems with one unknown matrix in each, and then solving them iteratively until convergence. Therefore we have three subproblems for MultiFacTV as follows. Subproblem 1: Fix V and W, and solve U by minimizing the objective function in (1). In this subproblem the objective reduces to a least-squares problem with coefficient matrix F = (W \u0298 V)T, which yields a closed-form solution for U. Subproblem 2: Fix U and W, and solve V by minimizing the objective function in (1). Here the objective reduces analogously with F = (W \u0298 U)T, yielding the solution for V. Subproblem 3: Fix U and V, and solve W by minimizing the objective function in (1). Here F = (V \u0298 U)T; in order to solve for W in (6), we introduce two (n3 - 1) \u00d7 K auxiliary matrices P and Q and adopt the strategy of the Alternating Direction Method of Multipliers (ADMM). In the synthetic experiments, competing methods were tuned with 0.1 as the increasing step, and the best parameter values were used to produce the final results; for MultiFacTV, we used \u03c41 = \u03c42 = 1.0 and \u03c43 = 0.75. All results were evaluated based on the Fscore and NMI by considering the discovered modules and the \"ground truth\" modules. As a comparison, we performed EDISA and MetaFac as well. For MetaFac and MultiFacTV, we examined sensitivity to \u03b1, plotting MultiFacTV\u2019s performance against \u03b1 in the corresponding figure. 
We see from this figure that its performance does not change significantly as parameter \u03b1 changes from 1 to 20, and the best result is yielded when \u03b1 = 10; therefore we used this value for \u03b1 in the experiments, with the comparison results summarized in the table. Before comparing the performance of MultiFacTV and the other two algorithms, we first demonstrated the convergence of the proposed MultiFacTV and how its performance changes with parameter tuning; MultiFacTV was run with K = 40, \u03b1 = 10, \u03c41 = \u03c42 = 1.0 and \u03c43 = 0.75 on each tensor. In this experiment, MultiFacTV was applied to Arabidopsis thaliana data to explore biological module patterns therein. The data recorded the time-series genomic expression of the root/shoot tissue in Arabidopsis thaliana under different abiotic stresses. For the genomic expression data of root tissue, we constructed a gene\u00d7condition\u00d7time tensor. Next we present some biological modules discovered from each of these tensors (data). To validate these modules, we associate them to functional annotation terms with DAVID analysis; p-values are also given to demonstrate the statistical significance of these functional terms. Interesting genomic modules in root tissue: Some interesting biological modules detected from root tissue by MultiFacTV are given in the following. 1. Cold-osmotic modules. Using DAVID, the first module is associated to cold- and osmotic-related functional terms (p-values: 3.9 \u00d7 10^-4, 7.1 \u00d7 10^-4 and 1.2 \u00d7 10^-3 respectively), and the second module is associated to \"response to osmotic stress\", \"response to temperature stimulus\" and \"response to cold\". These facts confirm that both modules play key roles in the response to cold and osmotic stresses. 2. Salt module. Salt-related functional terms (p-values: 2.6 \u00d7 10^-9 and 5.0 \u00d7 10^-3 respectively) are mapped to this module, which manifests that it indeed functions under salt stress. 3. Heat module. This module is associated to heat-shock-related functional terms with p-values 1.1 \u00d7 10^-55 and 1.3 \u00d7 10^-43 respectively. 
We obtained a module participating in the response to heat shock, shown in the corresponding figure. 4. Uvb-wound modules. We obtained two modules responding to UV-B light and wound stresses, shown in the corresponding figures. The first is associated to UV-B- and wound-related functional terms (p-values: 1.4 \u00d7 10^-8, 2.8 \u00d7 10^-2 and 1.2 \u00d7 10^-2 respectively), and the second to related terms (p-values: 1.4 \u00d7 10^-4, 1.4 \u00d7 10^-4 and 8.7 \u00d7 10^-3). Clearly, both modules indeed participate in the response to UV-B light and wound stresses. Interesting genomic modules in shoot tissue: In the following, we show two interesting genomic modules in shoot tissue output by the proposed MultiFacTV. 1. Salt-oxidative-drought module. We found a module participating in the response to salt, oxidative and drought stresses, associated to corresponding functional terms (p-values: 7.8 \u00d7 10^-2, 3.1 \u00d7 10^-2 and 4.1 \u00d7 10^-2 respectively). This suggests that the module is significant and indeed has biological functionalities related to salt and oxidative stresses. 2. Wound module. We obtained a module participating in the response to wound stress, associated to wound-related functional terms (p-values: 2.4 \u00d7 10^-4 and 2.5 \u00d7 10^-2 respectively); this suggests the module identified by MultiFacTV indeed has wound-related biological functionalities. Next, a Yeast dataset was considered, recording multiple time series genomic expression of Saccharomyces cerevisiae under different environmental changes: H2O2, 1mM menadione, 2.5mM DTT (dithiothreitol), 1.5mM diamide and 1M sorbitol. Since different time-points were adopted to record the expression under different environmental stresses in the original data, we preprocessed this data by selecting 6 time-points, i.e., 10min, 20min, 30min, 40min, 60min and 80min. Missing time-points were handled by linear interpolation of the two closest available time-points. Other missing values were replaced with the average expression value at the corresponding time-point. 
As a result, we constructed a gene\u00d7condition\u00d7time tensor and performed the proposed MultiFacTV with K = 20, \u03b1 = 10, \u03c41 = \u03c43 = 0.75 and \u03c42 = 0.85 on this Yeast dataset, which recorded multiple time series of genomic expression of yeast under environmental changes, to explore interesting module patterns. Next we present and analyze some interesting module patterns identified by the proposed MultiFacTV. 1. H2O2-menadione modules. It has been reported that yeast responds similarly to H2O2 stress and menadione stress, despite that they are supposed to result in different reactive oxygen species. The MultiFacTV obtains similar findings, and we present the genomic expression of two modules of this kind. Module 1 is associated to functional terms with p-values 1.9 \u00d7 10^-2, 7.8 \u00d7 10^-2 and 6.1 \u00d7 10^-2 respectively, and module 2 is functionally associated to \"glucose catabolic process\", \"hexose catabolic process\" and \"monosaccharide catabolic process\". All these terms may be related to biological processes induced by the oxidative and reductive reactions taking place in the cells. 2. Heat shock modules. We obtained two interesting modules responding to heat shock in yeast, with the genomic expression of both shown in the corresponding figures. One module (p-values: 2.5 \u00d7 10^-15 and 3.5 \u00d7 10^-23 respectively) is annotated to functional terms like \"protein catabolic process\" and \"cellular macromolecule catabolic process\"; this can be interpreted by the fact that heat shock usually leads to protein unfolding. The other module is annotated to ribonucleoprotein-related terms (p-values: 8.6 \u00d7 10^-24 and 7.0 \u00d7 10^-7); this may be because the protein unfolding induces concurrent ribonucleoprotein complex biogenesis. Finally, we considered data from multiple sclerosis patients after IFN-\u03b2 injection treatment. We represented this data as a nonnegative gene\u00d7patient\u00d7time tensor of size 2920 \u00d7 14 \u00d7 9, i.e., there were 2920 genes, 14 patients {A, B, C, D, E, F, G, H, I, J, K, L, M, N} and 9 time-points. 
The MultiFacTV was performed with τ1 = τ2 = 0.5, τ3 = 0.75, α = 10 and K = 40. We applied the proposed MultiFacTV to the Homo sapiens dataset to explore biological modules, and as a result we found many interesting modules responding to IFN-β treatment, similar to previous reports.

With the 40 modules identified, we constructed a binary matrix M of size 14 × 40 representing the membership of each patient in the modules, where m_{i,j} = 1 if the i-th patient was associated with the j-th module and m_{i,j} = 0 otherwise. In this way, each of the 14 patients was represented as a 1 × 40 binary vector. Subsequently, we clustered the 14 patients using the k-means algorithm; the resulting clusters were {A, B, C, D}, {E, F, G, H}, {J, K, L, M} and {I, N}. This grouping may reflect differences among the patients in their disease histories or progressions. We believe this result will be beneficial to the design of personalized medicine for patients with multiple sclerosis.

As more and more time series biological data are accumulated in different laboratories and databases, identification of modules by integrative analysis becomes an important and urgent task. One way to accomplish such integrative analysis is to assemble multiple time series biological datasets into a tensor. In this paper, we have proposed the MultiFacTV method, which extends the tensor factorization objective by introducing a time-related total-variation regularization term, to detect modules in such higher-order time series biological data. We have applied MultiFacTV to synthetic datasets and to the Arabidopsis, Yeast and Homo sapiens datasets to test its performance. The results show that MultiFacTV indeed reveals interesting module patterns.
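The patient-grouping step described above (binary module-membership vectors clustered by k-means) can be sketched as follows; `kmeans` here is a minimal Lloyd's-algorithm stand-in with deterministic initialization, written for illustration, not the clustering code used in the paper:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal Lloyd's k-means on the rows of X; deterministic
    initialization from the first k rows (enough for a sketch)."""
    X = np.asarray(X, dtype=float)
    centers = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Squared Euclidean distance of every row to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Applied to a patients × modules binary membership matrix, patients with similar module membership end up in the same cluster, mirroring the grouping of the 14 patients into four clusters.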
We have validated these findings with DAVID analysis and other analyses.

In this paper, we assume that the multiple time series genomic expression datasets have the same size, i.e., the same number of genes and the same number of time points, so that they can be joined into a tensor. In some cases, the data may be of different sizes; for example, the original Yeast dataset has different numbers of time points under the different stresses.

The authors declare that they have no competing interests.

XL, MN and YY participated in designing the algorithm and in drafting and revising the manuscript. XL participated in implementing the algorithm and performing the experiments. QW participated in the discussions of the experimental results. All authors have read and approved the final version of this manuscript."}
+{"text": "A computational model of gene expression was applied to a novel test set of microarray time series measurements to reveal regulatory interactions between transcriptional regulators, represented by 45 sigma factors, and the genes expressed during germination of the prokaryote Streptomyces coelicolor. Using microarrays, the first 5.5 h of the process was recorded at 13 time points, which provided a database of gene expression time series on a genome-wide scale. Computational modeling of the kinetic relations between the sigma factors, individual genes and genes clustered according to the similarity of their expression kinetics identified kinetically plausible sigma factor-controlled networks. Using genome sequence annotations, functional groups of genes that were predominantly controlled by specific sigma factors were identified. Using external binding data complementing the modeling approach, specific genes involved in the control of the studied process were identified and their functions suggested.

The expansion of high-throughput techniques in recent years has increased the potential to infer new biological knowledge from existing data and has also increased the demands on computational approaches to decipher large quantitative data sets. One of the primary challenges of computational systems biology lies in inferring gene regulatory networks from time series expression data. A typical source of genome-wide information is gene expression data obtained from microarrays, which can be used for inference of transcriptional control networks. In this article, we focus on identifying the potential target genes of transcription regulators using a reverse-engineering transcriptional model based on the relationship between regulator expression profiles and the expression of their target genes.
Models of this type are based on the assumption that the dynamics of the regulator at the protein level is correlated with its dynamics at the transcript level; several studies have addressed and further developed this approach.

In bacteria, the initiation of transcription depends mainly on sigma factors, which are proteins (regulators) that are able to recognize and bind, in the form of an RNA polymerase holoenzyme, to a specific gene promoter region (target gene) and guide RNA polymerase to start transcription. Therefore, sigma factors are essential nodes in the gene regulatory networks that govern further interactions and processes in the cell. A crucial task in inferring gene regulatory networks in bacteria is the recognition of the target genes of sigma factors. Experimentally, the physical interaction between sigma factors and gene promoter sequences is verified by chromatin immunoprecipitation methods (ChIP-chip and ChIP-Seq). It was shown, however, that such static binding information may also include silent binding events that do not directly enhance transcription.

Streptomyces species are Gram-positive soil bacteria that are widely studied for two primary reasons. First, they are important natural producers of diverse antibiotics and biologically active compounds. Second, due to their complex developmental life cycle, Streptomyces serve as model organisms for fundamental cell development studies.

Compared with Escherichia coli (7 sigma factors) or Bacillus subtilis (18 sigma factors), Streptomyces has an enormous capacity for regulation: more than 60 sigma factors are predicted to exist in the genome. The complexity of regulation in Streptomyces has fascinated researchers for decades, and current systems approaches applied on a global genomic scale, such as transcriptomics and chromatin immunoprecipitation methods, contribute to its understanding. Several S. coelicolor sigma factors have been functionally characterized.
For example, the principal sigma factor HrdB (SCO5820) represents the primary housekeeping regulator. Similar to the primary sigma factor, and also closely related in promoter recognition, are three sigma factors, HrdA (SCO2465), HrdC (SCO0895) and HrdD (SCO3202); however, these three factors have been reported to be non-essential for exponential growth.

In this study, we applied a numerical model of gene expression kinetics to identify potential sigma factor target genes in germinating S. coelicolor. The resulting relationships between sigma factors and regulated genes, or groups of genes, were interpreted in a biological manner and compared with published data.

The details of S. coelicolor A3(2) M145 spore cultivation and growth were published in our previous work. The RNA was stored in water at -20°C. To break the cells, we used a FastPrep-24 machine in which the spores were mechanically disrupted in tubes containing zirconium sand, two 4-mm glass beads and 500 µl of lysis buffer.

The Log2Ratio values from each array were centered to ensure that the medians and the median absolute deviations of all the array distributions were equal. The centering was performed by subtracting the Log2Ratio median value of the array from each Log2Ratio measurement on the array and dividing by the median absolute deviation. To eliminate array outliers, we filtered out the 0.02 quantiles of the least and the most intensive Log2Ratio values. The normalized Log2Ratios were then exponentiated to return the values to the original scale. This type of normalization was chosen because the distribution of the Log2Ratio among experiments and time points did not show any dependence across samples or over time.
This normalization is commonly used for data with this kind of distribution. The experiment included 37 arrays from 13 distinct time points of germination.

Time series of relative mRNA concentrations ('gene expression profiles') were obtained by averaging the normalized Ratios across biological replicates at specific time points and across all the gene replicate spots on the array. Before averaging, outliers among the gene replicates at individual time points were filtered using the Q-test (for 3-9 inputs) and the Pierce test (for >10 inputs).

The filtering caused a few profiles to have no value at certain time points. These zero values were examined to determine whether they occurred between two non-zero time points; in such cases, the missing value was linearly interpolated (performed for ~100 of 7115 profiles).

To eliminate profiles with overall low expression during germination, we analyzed the microarray sample channel signals (Cy3 labeling). The idea was to minimize the influence of gene profiles whose microarray signal originated from experimental errors exceeding the purely technical limits for eliminating signals below the background. Thus, for each gene, the overall expression level was specified by computing the median of the sample channel microarray signal across all microarray replicates at all time points. Profiles whose overall expression level was below the first quartile value (563) of all computed medians were filtered out.

Coefficients of variation (CV) were computed for all genes at each time point. From the entire set of genes, only profiles that met the criterion of having a CV of <0.47 for at least eight time points (out of 13) were chosen. The resulting set contained 3317 genes, which were further used to infer the kinetic cluster core profiles.
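A sketch of the per-array centering and the CV-based filtering described above, with hypothetical helper names (`normalize_array`, `keep_profile`) and assuming a replicate-by-time matrix per gene; this is an illustration, not the authors' pipeline:

```python
import numpy as np

def normalize_array(log2_ratios):
    """Center one array: subtract the Log2Ratio median and divide by
    the median absolute deviation (MAD), so every array ends up with
    median 0 and MAD 1."""
    med = np.median(log2_ratios)
    mad = np.median(np.abs(log2_ratios - med))
    return (log2_ratios - med) / mad

def keep_profile(replicates, cv_cut=0.47, min_points=8):
    """replicates: replicate x time matrix for one gene.  Keep the
    gene if the coefficient of variation across replicates is below
    cv_cut for at least min_points time points."""
    cv = replicates.std(axis=0, ddof=1) / replicates.mean(axis=0)
    return int((cv < cv_cut).sum()) >= min_points
```

With the thresholds used in the text (CV < 0.47 at eight or more of the 13 time points), genes with noisy replicate measurements are excluded before clustering.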
The set of genes with a relatively small CV was grouped according to the similarity of their expression profiles; the grouping was conducted in the same manner as in our previous work. For each cluster, we then defined a cluster core as the group of profiles that appeared in one cluster in at least 50% of the repeated runs.

To roughly estimate the optimal number of clusters (n), the clustering procedure was recomputed for different numbers of clusters (n = 30-70), and a jackknife method was also applied. The jackknife was based on systematic recomputation of the clusters while omitting a small fraction of random observations (1.5%). The within-cluster sums of point-to-centroid distances (J) were examined as a function of n (data not shown); the point on the curve where a significant local change of J occurred indicated the potentially optimal n for the given data set. The optimal number of clusters for the data set was n = 42.

For all 42 kinetic core clusters, the average profile (core profile) of each core cluster was calculated. The core profiles were then used as inputs for the modeling procedures and as seeds for the classification of profiles into 42 groups according to their correlation with the core profiles.
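The cluster-core idea (profiles that stay together in at least 50% of repeated clustering runs) can be sketched as follows; `co_clustering_rate` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def co_clustering_rate(label_runs):
    """label_runs: runs x genes matrix of cluster labels from repeated
    clustering runs.  Entry (i, j) of the result is the fraction of
    runs in which genes i and j fell into the same cluster; a cluster
    core can then be read off as the genes co-clustered with the
    cluster's members in >= 50% of the runs."""
    label_runs = np.asarray(label_runs)
    n_runs = label_runs.shape[0]
    same = np.zeros((label_runs.shape[1],) * 2)
    for labels in label_runs:
        same += labels[:, None] == labels[None, :]
    return same / n_runs
```

Thresholding this matrix at 0.5 gives the stable gene pairs from which core profiles (averages of stable members) can be computed.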
We used the kinetic model of gene expression suggested by Vohradsky, in which the expression rate of the i-th gene is described by

dy_i/dt = k1_i / (1 + exp(-(w_i R_j + b_i))) - k2_i y_i,

where y_i represents the transcript concentration of the regulated i-th gene, and R_j is the transcript concentration of the j-th sigma factor (regulator), whose effect is modulated by the parameter w_i. The parameter b_i corresponds to a reaction delay, and the incremental expression is diminished by the rate of transcript degradation described by the term k2_i y_i. The model parameters k1_i, k2_i, b_i and w_i are derived from experimental expression data, specifically microarray data in this study.

To fit a measured gene expression time series, the model parameters for the i-th gene were optimized to fit the measured expression profile, using the measured expression profile of the j-th sigma factor R_j, by minimizing an objective function in which c_i represents the Pearson correlation coefficient calculated for the pair of measured and computed expression profiles y_i. The parameters k1_i, k2_i and w_i were forced to remain positive to reflect their biological nature.

Simulated annealing was used for the optimization. Initial parameters were preset to random values, and then the optimization procedure was performed. For each examined relationship (sigma factor vs. target gene, sigma factor vs. another sigma factor, or sigma factor vs. the typical representative profile of a kinetic cluster, i.e., the core profile), the optimization procedure was completed 15 times with diverse initial parameter values, and the optimal set of parameters was established as the set with the lowest value of the objective function.

The criterion for the goodness of fit between the measured profile and the computed profile y_i was the Pearson correlation coefficient. When the correlation coefficient exceeded a predefined value, an interaction between the sigma factor and the gene was considered possible. For the modeling of regulations where no prior knowledge was available ('Results and Discussion'), the Pearson correlation coefficient was required to be >0.8. For the modeling of interactions found by the ChIP-chip experiment or in the literature ('Results and Discussion'), the requirement on the Pearson coefficient was arbitrarily set to 0.65 to obtain all possible and even less correlated interactions.
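As an illustration only (not the authors' code), the sigmoid production/degradation model and a crude fitting loop can be sketched in Python. The paper uses simulated annealing; here it is replaced by a plain random search under stated bounds, and the helper names `simulate` and `fit` are assumptions:

```python
import numpy as np

def simulate(params, R, t):
    """Integrate dy/dt = k1/(1 + exp(-(w*R + b))) - k2*y with forward
    Euler on the measured time grid t; params = (k1, k2, w, b, y0)."""
    k1, k2, w, b, y0 = params
    y = np.empty_like(t, dtype=float)
    y[0] = y0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dydt = k1 / (1.0 + np.exp(-(w * R[i - 1] + b))) - k2 * y[i - 1]
        y[i] = y[i - 1] + dt * dydt
    return y

def fit(y_obs, R, t, n_restarts=2000, seed=0):
    """Crude random-search stand-in for the simulated annealing used
    in the paper; k1, k2 and w are kept positive, as in the text."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_restarts):
        p = np.array([rng.uniform(0, 5),    # k1 > 0
                      rng.uniform(0, 2),    # k2 > 0
                      rng.uniform(0, 5),    # w  > 0
                      rng.uniform(-5, 5),   # b (delay, unconstrained)
                      y_obs[0]])            # start from the first measurement
        err = ((simulate(p, R, t) - y_obs) ** 2).sum()
        if err < best_err:
            best, best_err = p, err
    return best, best_err
```

In the paper the optimization is repeated 15 times per sigma factor-target pair and the fit is finally judged by the Pearson correlation between measured and computed profiles; the random search above only illustrates the parameter-estimation idea.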
We did not have a statistical criterion for the selection of the threshold value of the coefficients; their choice was made arbitrarily, to include all possible interactions where some prior knowledge was available, and after visual inspection of the results.

The open-source software Gephi (https://gephi.org/) was used for network visualization.

The S. coelicolor spores evolved during the examined period of 5.5 h from the dormant stage to cells with germ tubes. To understand transcriptional regulation during germination in S. coelicolor, we used a transcriptome-based approach in a time-dependent manner. During the monitored 5.5 h, RNA samples were collected at 30-min intervals. The RNA sample collected from dormant spores was set as the initial time point (T Dorm), followed by the RNA sample obtained after heat-shock treatment (T0), and continued by samples T0.5-T5.5 obtained at 30-min intervals. In total, we obtained RNA samples from 13 time points. For each time point, RNA was isolated from three independent cultures (from two for time points 4 and 7). The mRNA expression levels were measured by microarray; in total, the entire experimental set contained 37 microarrays.

The term 'gene expression profile' used throughout the text refers to the normalized Ratio signals described in the 'Methods', which recorded temporal changes in mRNA expression kinetics. The arrangement of the microarray experiment enhanced measurement accuracy but did not provide information regarding the absolute expression levels of individual genes, due to the various hybridization levels of the reference caused by the diverse probes on the microarray. Assuming that an equal amount of mRNA was always loaded onto the microarray chip, we used only the sample channel signals (Cy3) to estimate the absolute expression levels of individual genes.
Although this approach led to increased variance of the averaged expression values, the expression kinetics of the sample channel and the kinetics of the normalized Ratios remained highly correlated (data not shown), indicating that the overall kinetic trends were similar for both types of data. The overall expression levels based on the sample channel signals were calculated ('Methods') for each gene. The distribution (log base 2) of the overall expression levels was approximately lognormal. Expressed sigma factors encoded in the S. coelicolor genome were detected, and the overall expression levels of the most transcribed sigma factors were determined.

In reality, genes controlled in the same way share the same kinetic profile pattern. Instead of computing one-to-one interactions, it is therefore possible to compute interactions between sigma factors and characteristic gene profiles based on the kinetic profile common to a group of genes, without loss of generality. With this assumption in mind, we identified kinetic clusters of genes having a common expression profile and modeled, on a global scale, the interactions between all 45 sigma factors and the characteristic kinetic profiles of the clusters ('Results and Discussion'). The relation between the kinetic clusters and operons was also considered.

To gain more detailed insight, the individual one-by-one strategy was applied solely to identify target genes of the sigma factors HrdD ('Results and Discussion') and SigR ('Results and Discussion'), which were selected for the following reasons: HrdD represented the most expressed sigma factor in the experiment, and for SigR we were able to incorporate static ChIP-chip binding data.

To compute all the potential regulatory combinations of the 45 highly expressed sigma factors and 7115 expressed genes, we would have to analyze 320,175 sigma factor-target gene combinations.
Therefore, keeping in mind that genes controlled in the same way have the same expression profile, instead of investigating one-by-one gene regulatory interactions we analyzed the transcriptome on a global scale, working with typical kinetic trends characteristic for a group/cluster of genes with 'similar' expression profiles that, from the kinetic point of view, may be regulated in the same way.

Each typical trend (defined by the core profile of the kinetic cluster, 'Methods') was tested as a possible target expression profile of each of the 45 studied sigma factors. To identify the typical kinetic representatives of the target genes, we first selected a subset of gene profiles with low CV within the experimental repeats, to eliminate the influence of profiles with higher measurement errors ('Methods'). Among the selected subset of 3317 genes with low CV, 42 different kinetic groups were identified. The core profiles were determined as the average profile of the most frequently occurring members of the particular group in the repeated runs of clustering ('Methods').

For each sigma factor-core profile pair, the kinetic model was fitted. The sigma factors HrdA, HrdC and HrdD, whose functions are still unknown, were suggested to control gene kinetic clusters 12 and 37 (HrdA), 3 (HrdC) and 8 (HrdD). In addition to several known sigma factors and extracytoplasmic function subfamily sigma factors, we proposed possible regulatory activity for many other sigma factors whose functions have not been previously identified.

Each gene was categorized into a functional or a potential functional class. The idea was to identify significantly overrepresented gene functional groups in the kinetic clusters, and thus characterize individual clusters.
Knowing the sigma factors that control a kinetic cluster, specific cell metabolic processes could then be characterized as controlled by individual sigma factors. Under the generally accepted assumption that co-expressed genes characterize a specific functional group, we examined the gene members of the kinetic clusters proposed in the previous paragraphs for their membership in different metabolic groups, according to the database annotating the genome.

All highly expressed genes were assigned to kinetic clusters based on the value of the correlation coefficient between the given gene expression profile and the core profile of each cluster (Supplementary Table S2). The coefficient can be understood as a level of membership of the gene in the particular cluster: the higher the coefficient, the more kinetically similar the gene profile was to the cluster core profile, and therefore the proposed regulation by the sigma factor had a higher probability compared with genes with lower correlation coefficients.

Evidently, the choice of the criterion value may significantly affect the gene composition of the clusters, and therefore the functional characteristics of the groups. Hence, for the enrichment of functional groups in the kinetic clusters, we tested four levels of the criterion. The number of genes assigned to the clusters differed with different correlation criteria, but the changes in significantly overrepresented functional groups were minor (data not shown). Therefore, in contrast with the initial assumption, the value of the correlation coefficient was not crucial for the resulting functional characteristics of the clusters. As a selection criterion, the correlation coefficient was set to ≥0.8 (3586 gene profiles assigned to kinetic clusters).

Among the SigR target genes suggested by ChIP-chip, kinetic modeling confirmed regulation for functional groups including transcriptional regulators (15%), thiol homeostasis (13%), oxidoreductases (11%) and cofactor metabolism (11%).
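The correlation-based assignment of genes to kinetic clusters could look roughly like this (the helper name `assign_to_clusters` and the unassigned marker -1 are assumptions for the sketch):

```python
import numpy as np

def assign_to_clusters(profiles, cores, min_corr=0.8):
    """Assign each gene profile to the kinetic cluster whose core
    profile it correlates with best; genes whose best Pearson
    correlation is below min_corr stay unassigned (-1)."""
    assignments = []
    for p in profiles:
        corrs = [np.corrcoef(p, c)[0, 1] for c in cores]
        best = int(np.argmax(corrs))
        assignments.append(best if corrs[best] >= min_corr else -1)
    return np.array(assignments)
```

Raising or lowering `min_corr` reproduces the effect discussed above: the cluster sizes change, while (per the authors' observation) the overrepresented functional groups change only little.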
Regulation of genes with other functions was confirmed for only a few (under 9%). Our approach did not confirm any regulation of genes involved in energy metabolism, and identified only one target gene from the lipid metabolism group and three from the modulation of ribosomal constituents or translation group, although these groups represent 23% of the target genes identified in the original ChIP-chip-based approach. The full list of kinetically plausible SigR target genes in germination is shown in Supplementary Table S3. An example of the expression profiles of suggested SigR target genes is provided in Supplementary Figure S2.

The agreement between the ChIP-chip results and the kinetic modeling was observed for approximately one-third of the ChIP-chip-suggested target gene set. Generally, the kinetically confirmed SigR transcriptional controls belong to genes with 'special' functions in redox homeostasis, regulation or protein quality control, rather than 'basal' metabolism functions such as energy metabolism, lipid metabolism and ribosome constituents/translation.

When interpreting the results of our experiments, note that the expression of SigR was not 'arbitrarily' enhanced by adding diamide or any other chemical compound inducing oxidative stress, unlike in all previous studies, where the expression of SigR and its regulon was induced by diamide. Additionally, this study used cells undergoing germination, whereas previous studies used cells at later life phases. Considering that sigma factor activity and the selection of promoters depend on growth conditions and developmental stage, the inconsistencies between our findings and the previous studies can originate both from a difference in SigR transcriptional control under 'normal' physiological growth conditions (germination) versus induced thiol-oxidative stress, and from silent binding events.

During germination in S.
coelicolor, the SigR regulon exhibited a similar response to that observed under thiol-oxidative stress, which has not been reported before. We can speculate that a probable trigger of the stress that consequently induced SigR expression was the high content of aggregated and/or misfolded proteins present in the dormant cell from the sporulation phase. These suggestions are in agreement with previous observations by Kalifidas et al.

HrdD was also proposed to control the overrepresented regulation/defined family's functional group from cluster 8. Moreover, 78% of the proposed HrdD target genes also belonged to cluster 8, suggesting an agreement between the global and the individual gene analysis for this regulator. Among the proposed HrdD target genes were dnaJ (SCO3669), hspR (SCO3668), coding an autoregulatory repressor protein, and clpB (SCO3661), coding an ATP-dependent protease. In the list of HrdD target genes (Supplementary Table S4), the trivial regulations are marked with a hash. For seven of these genes, the possibility of the existence of common regulators should be considered; for these genes, we previously identified possible regulation by SigR ('Results and Discussion'), and previous binding experiments were found (http://dbscr.hgc.jp/index.html).

We also investigated, from the kinetic perspective using the computational model, the transcriptional regulation of an ssgA-like gene encoding a sporulation protein controlling septum site initiation and DNA segregation in spores, of sigL (SCO7278) and its putative anti-sigma factor SCO7277, and of cwgA (SCO6179) and dagA (SCO3471) as target genes of SigE. CwgA is the first gene of the cwg operon involved in the biosynthesis of cell wall glycans. Comparison with published data is complicated by the fact that the available studies describe distinct biological phenomena under different experimental conditions and during different life phases of the organism.
In this work, a large experimental data set containing thousands of relatively densely sampled gene temporal expression profiles was created. The generated data set can serve as source material both for further computational analyses using time series expression data and for consequent experimental studies.

We analyzed the whole-genome transcriptome dynamics during the transition phase of S. coelicolor from dormancy to vegetative growth. Using computational modeling, we identified the target genes and gene kinetic clusters of 29 sigma factors (out of the 45 studied) and suggested potential transcriptional regulatory networks that are controlled by these sigma factors.

Specifically, we chose germination because it has not been studied sufficiently, although it influences further development of the cell. Germination also represents a system with a well-defined origin, which is suitable for numerical kinetic modeling applications. We showed, together with previous studies on computational kinetic modeling, that such models can reveal kinetically plausible regulatory interactions.

From the analysis of the functional groups of the target genes, we identified sigma factors that are probable regulators of the basic metabolic processes activated during S. coelicolor germination. For a single sigma factor, SigR, the kinetic data were complemented with a ChIP-chip experiment, and the results were compared with the published data. We suggest a particular role for the alternative principal sigma factor HrdD, whose expression at the mRNA level was extremely high during germination in comparison with all the other genes.

We are aware that gene expression control in Streptomyces is more complicated than the presented model in its current form can describe. The existence of anti-sigma factors or even anti-anti-sigma factors documents the complex nature of the transcriptional control, making inference of the control networks in this organism complicated. In principle, two strategies for inference of gene expression control networks can be used.
The first, traditional strategy inspects sigma factors individually and experimentally searches for their targets. Such an approach has produced most of the knowledge about Streptomyces gene expression control available so far. However, the predominantly used gene deletion, with consequent pairwise comparison of mutant and wild-type strains, is complicated by the inability to distinguish between direct and transmitted control, which, to be resolved, requires additional sets of experiments, complicating their interpretation. The existence of >60 sigma factors in S. coelicolor and thousands of potential target genes generates hundreds of thousands of potential regulator-target gene interactions. Their complete experimental inspection using traditional methods is virtually impossible; from such a pool, only a few have been picked by other researchers for detailed experimental inspection. To reconstruct the networks on a global scale from such individual results, often obtained independently and sometimes under different experimental conditions, is therefore difficult or even impossible, and deriving their quantitative features from such data is completely unfeasible.

Computational modeling using complex parallel kinetic data can help to overcome this hurdle, making it possible to retain the parallel nature of the data and to keep the consistency given by one experimental setup common to the whole data set. Incorporating extra information, such as DNA binding data, into the model, as done here for SigR, can contribute even more to the network inference by excluding those interactions that are physically impossible, thus making the model-based predictions more accurate.

We see the contribution of this article in providing an overview of the gene expression networks active during the studied process rather than in giving ultimate answers concerning individual interactions.
Our approach reduced thousands of potential regulatory interactions to tens that can be experimentally verified, and gave a global outlook on the system level of control, providing a complex picture of the whole system. Last, but not least, the model presented here allows simulating the kinetics of gene expression and provides the possibility of performing virtual experiments, which can, again, point to additional experiments. Such an iterative process will lead to the creation of a functional model of the gene expression control network that can be used to gain deeper insight into the dynamic properties of gene expression control of the studied process and the topology of the network.

We are convinced that such a systems-level approach, combining prior knowledge with computational modeling, can identify, from a global perspective, regulatory networks controlling cellular processes, not only during germination and not only in S. coelicolor but also in other organisms.

Supplementary Data are available at NAR Online.

Czech Science Foundation [P302-11-0229 to J.V.]; Grant Agency of the Charles University [17409 to E.S.]. Funding for open access charge: Publication cost grant from the Academy of Sciences of the Czech Republic.

Conflict of interest statement. None declared."}
+{"text": "The analysis of gene expression from time series underpins many biological studies. Two basic forms of analysis recur for data of this type: removing inactive (quiet) genes from the study and determining which genes are differentially expressed. Often these analysis stages are applied disregarding the fact that the data are drawn from a time series. In this paper we propose a simple model for accounting for the underlying temporal nature of the data, based on a Gaussian process.

We review Gaussian process (GP) regression for estimating the continuous trajectories underlying gene expression time series. We present a simple approach which can be used to filter quiet genes or, for the case of time series in the form of expression ratios, to quantify differential expression. We assess via ROC curves the rankings produced by our regression framework and compare them to a recently proposed hierarchical Bayesian model for the analysis of gene expression time series (BATS). We compare on both simulated and experimental data, showing that the proposed approach considerably outperforms the current state of the art.

Gaussian processes offer an attractive trade-off between efficiency and usability for the analysis of microarray time series. The Gaussian process framework offers a natural way of handling biological replicates and missing values, and provides confidence intervals along the estimated curves of gene expression. Therefore, we believe Gaussian processes should be a standard tool in the analysis of gene expression time series.

With the decreasing cost of gene expression microarrays, time series experiments have become commonplace, giving a far broader picture of the gene regulation process. Early studies of this data often focused on a single point in time which biologists assumed to be critical along the gene regulation process after the perturbation.
Such time series are often irregularly sampled and may involve differing numbers of replicates at each time point. This contrasts with static experiments, i.e. gene expression measured at a single time point, which treat time as an additional experimental factor. In related work, a reverse-engineering algorithm (TSNI: time-series network identification) was developed to infer the direct targets of TRP63; in that setting the known direct targets provide a noisy ground truth. We pre-process the data with the robust multi-array average (RMA) expression measure.

Notice in eq. (7) how the structure of the covariance implies that choosing a different feature space Φ results in a different K_y. Whatever K_y is, it must satisfy the following requirements to be a valid covariance matrix of the GP:

• Kolmogorov consistency, which is satisfied when K_ij = K(x_i, x_j) for some covariance function K, such that all possible K_y are positive semidefinite (y^T K_y y ≥ 0).

• Exchangeability, which is satisfied when the data are i.i.d. It means that the order in which the data become available has no impact on the marginal distribution, hence there is no need to hold out data from the training set for validation purposes.

More formally, a Gaussian process is a stochastic process (or collection of random variables) over a feature space, such that the distribution p(y(x_1), y(x_2), ..., y(x_n)) of a function y(x), for any finite set of points {x_1, x_2, ..., x_n} mapped to that space, is Gaussian, and such that any of these Gaussian distributions is Kolmogorov consistent.

If we remove the noise term from K_y in eq. (7), we can have noiseless predictions of f(x) rather than y(x) = f(x) + ε. However, when dealing with finite parameter spaces, K_f may be ill-conditioned (cf. the SE derivation section), so the noise term guarantees that K_y will have full rank (and an inverse). Having said that, we now formulate the GP prior over the latent function values f by rewriting eq.
(8) asIf we remove the mean function and the covariance function respectively arewhere the In this paper we only use the univariate version of the squared-exponential (SE) kernel. But before embarking on its analysis, the reader should be aware of the existing wide variety of kernel families, and potential combinations of them. A comprehensive review of the literature on covariance functions is found in .ill-conditioned covariance matrix. In the case of a finite parametric model (as in eq. (1)), f Kcan have at most as many non-zero eigenvalues as the number of parameters in the model. Hence for any problem of any given size, the matrix is non-invertible. Ensuring Kf is not ill-conditioned, involves adding the diagonal noise term to the covariance. In an infinite-dimensional feature space, one would not have to worry about this issue as the features are integrated out and the covariance between datapoints is no longer expressed in terms of the features but by a covariance function. As demonstrated in and , with an example of a one-dimensional dataset, we express the covariance matrix Kf in terms of the features \u03a6In the GP definition section we mentioned the possibility of an radial basis functions and integrating with respect to their centers h, eq. (12) becomesthen by considering a feature space defined by signal variance univariate squared-exponential (SE) covariance function. The noisy univariate SE kernel -- the one used in this paper iswhere one ends up with a smooth (infinitely differentiable) function on an infinite-dimensional space of features. Taking the constant out front as a stationary kernel, i.e. it is a function of d = i x- j xwhich makes it translation invariant in time. ij \u03b4is the Kronecker delta function which is unity when i = j and zero otherwise and l2 is the characteristic lengthscale which specifies the distance beyond which any two inputs become uncorrelated. 
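As an illustration, the noisy univariate SE kernel just described can be sketched numerically. This is a hypothetical NumPy fragment with arbitrary hyperparameter values, not the MATLAB/R implementation the paper distributes:

```python
import numpy as np

def se_cov(x1, x2, sf2=1.0, l2=1.0, sn2=0.1):
    """Noisy univariate squared-exponential covariance:
    k(xi, xj) = sf2 * exp(-(xi - xj)^2 / (2 * l2)) + sn2 * delta_ij."""
    d = x1[:, None] - x2[None, :]
    K = sf2 * np.exp(-d**2 / (2.0 * l2))
    if x1 is x2:
        # isotropic noise enters only on the diagonal of K(X, X)
        K += sn2 * np.eye(len(x1))
    return K

x = np.array([0.0, 1.0, 2.0])
K = se_cov(x, x)
```

The added diagonal noise term echoes the full-rank argument above: it lifts every eigenvalue by sn2, so the matrix is guaranteed to be invertible.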
In effect, the lengthscale l^2 governs the amount that f varies along the input (time). A small lengthscale means that f varies rapidly along time, and a very large lengthscale means that f behaves almost as a constant function (see the figures). The lengthscale is learned by hyperparameter adaptation, as described in a following section. Other adapted hyperparameters include the signal variance \u03c3f^2 and the noise variance \u03c3n^2; in the noisy case, their adaptation can give different explanations of the latent function that generates the data. One can also combine covariance functions as long as the result is positive-definite; examples of valid combined covariance functions include the sum and the convolution of two covariance functions. In fact, eq. (14) is the sum of the SE kernel and the covariance function of isotropic Gaussian noise. To interpolate the trajectory of gene expression at non-sampled time-points, we wish to predict the function value f* at a new input (non-sampled time-point) x*, given the knowledge of the function estimates f at the known time-points x. The joint distribution of f and f* is Gaussian, hence the conditional distribution p(f* | f) is also Gaussian. In this section we consider predictions using noisy observations; the noise is Gaussian too, so the form of the noisy conditional distribution does not differ. By Bayes' rule we obtain eq. (15), where the Gaussian process prior over the noisy observations is given by eq. (16). We start by defining the mean function and the covariance vector k* between a new time-point x* and each of the i-th known time-points, where i = 1..N. For every new time-point a new vector k* is concatenated as an additional row and column to the covariance matrix KC, and the scalar k(x*, x*) is appended with every new k* added to KC. By considering a zero mean function and eq. (19), the joint distribution from eq. (15) can be computed. Finally, to derive the predictive mean and covariance of the posterior distribution from eq. (15) we use the standard Gaussian identities. 
These are the predictive equations for GP regression at a single new time-point; they can be generalised easily for the prediction of function values at multiple new time-points by augmenting k* with more columns, one for each new time-point x*. Given the SE covariance function, one can learn the hyperparameters from the data by optimising the log-marginal likelihood (LML) function of the GP. In general, a non-parametric model such as the GP can employ a variety of kernel families whose hyperparameters can be adapted with respect to the underlying intensity and frequency of the local signal structure, and interpolate it in a probabilistic fashion (i.e. while quantifying the uncertainty of prediction). The SE kernel allows one to give intuitive interpretations of the adapted hyperparameters, especially for one-dimensional data such as a gene expression time-series. In the context of GP models the marginal likelihood results from the marginalisation over the function values f; the GP prior p(f | x) is given in eq. (9) and the likelihood is a factorised Gaussian. We notice that the log-marginal likelihood is explicitly conditioned on \u03b8 (the hyperparameters) to emphasise that it is a function of the hyperparameters through Kf. To optimise the marginal likelihood we take the partial derivatives of the LML with respect to the hyperparameters and apply scaled conjugate gradients. Overall, the LML function of the Gaussian process offers a good fitness-complexity trade-off without the need for additional regularisation. 
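The predictive equations described above can be sketched in a few lines. This is a minimal illustrative fragment (zero-mean GP, SE kernel, fixed rather than optimised hyperparameters; the function name is hypothetical and not from the gptk package):

```python
import numpy as np

def gp_predict(x, y, xstar, sf2=1.0, l2=1.0, sn2=0.1):
    """GP posterior mean and variance of the latent f at new inputs xstar,
    for a zero-mean prior with a univariate SE kernel."""
    def k(a, b):
        return sf2 * np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * l2))
    Ky = k(x, x) + sn2 * np.eye(len(x))   # noisy covariance: full rank
    ks = k(x, xstar)                      # cross-covariance k*
    alpha = np.linalg.solve(Ky, y)
    mean = ks.T @ alpha                   # predictive mean
    # predictive variance of the noise-free latent function f*
    var = sf2 - np.sum(ks * np.linalg.solve(Ky, ks), axis=0)
    return mean, var
```

Far from the data the mean reverts to the zero prior and the variance to the signal variance, which matches the role of the lengthscale described earlier.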
Optionally, one can use multiple initialisation points focusing on different non-infinite lengthscales to deal with the multiple local optima along the lengthscale axis, and pick the best solution (maximum LML). The source code for the GP regression framework is available as MATLAB code at http://staffwww.dcs.shef.ac.uk/people/N.Lawrence/gp/ and as a package for the R statistical computing language at http://cran.r-project.org/web/packages/gptk/. The routines for the estimation and ranking of the gene expression time-series are available upon request for both languages. The time needed to analyse the 22690 profiles in the experimental data, with only the basic two initialisation points of hyperparameters, is about 30 minutes on a desktop running Ubuntu 10.04 with a dual-core CPU at 2.8 GHz and 3.2 GiB of memory. AAK designed and implemented the computational analysis and ranking scheme presented here, assessed the various methods and drafted the manuscript. NDL pre-processed the experimental data and wrote the original Gaussian process toolkit for MATLAB, and AAK rewrote it for the R statistical language. Both AAK and NDL participated in interpreting the results and revising the manuscript. All authors read and approved the final manuscript."}
+{"text": "Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the considered 21 measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank and symbol based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has shown to be valuable in reducing the number of false positive interactions. 
On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study.Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices. Rank correlation coefficients measure a different type of relationship than the product moment correlation coefficient but, like the previous correlation measures, they are also defined in the interval [-1, 1]. Spearman's correlation is based on the rank distribution of the expression values; in the absence of ties it equals the product moment correlation of the ranks, \u03c1 = 1 - 6 \u2211i (R(xi) - R(yi))^2 / (n(n^2 - 1)), where R(x) is the rank of x. Another rank correlation is Kendall's \u03c4 = (nc - nd) / (n(n - 1)/2), with nc being the number of concordant pairs, and nd the number of discordant pairs of the rank sets. It is common to regard the rank correlation coefficients as alternatives to Pearson's coefficient, since they could either reduce the amount of calculation or make the coefficient less sensitive to non-normality of distributions. Nevertheless, they quantify different types of association. Unlike most of the measures discussed here, the correlation measures do not only provide information about whether two genes are interacting, but also whether it is an activating or repressing relationship. 
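The two rank correlations can be sketched as follows; this is a toy fragment that assumes no ties in the expression values, and it is not the implementation used in the study:

```python
import numpy as np
from itertools import combinations

def ranks(v):
    # 1-based rank of each value (no ties assumed in this sketch)
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    return r

def spearman(x, y):
    # Spearman: 1 - 6 * sum d_i^2 / (n (n^2 - 1)), d_i = rank difference
    d = ranks(x) - ranks(y)
    n = len(x)
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

def kendall(x, y):
    # Kendall: (n_c - n_d) / (n (n - 1) / 2) over all pairs
    n = len(x)
    nc = nd = 0
    for i, j in combinations(range(n), 2):
        s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
        if s > 0:
            nc += 1
        elif s < 0:
            nd += 1
    return (nc - nd) / (n * (n - 1) / 2)
```

Both return 1 for perfectly concordant series and -1 for perfectly reversed ones, independently of the actual expression magnitudes, which is why they are robust to non-normality.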
As the latter information is outside of the interest of the current study, only the absolute value (respectively the square) of the correlation coefficient is taken into account. In addition to the correlations, information-theoretic measures such as the mutual information (MI) are also defined using random variables as the relevant representation for expression time series. One of the considered scoring schemes operates on time-shifted series: each pair of series is compared for shifts of up to shiftN steps, resulting in a vector of measure values over the shifts (if a value is not significant, the corresponding entry is set equal to 0), and the score is given by the largest significant value. In case the largest significant value of the measure is obtained for a negative shift, the regulatory direction from the first to the second gene is kept, while the opposite direction is preserved if the largest significant value is obtained for a positive shift. Furthermore, both regulatory directions are kept if the maximum arises for a shift of zero or for multiple opposed shift values, or in the case when no significant value exists. The scoring scheme aims at providing a hint on the directionality, because the absolute values of the calculated correlations on the delayed time series are rather biased as the data sets are quite short. In the next step of Algorithm 1, the information regarding the directionality is combined with the weight of interaction inferred from a particular measure (cij = wij \u00b7 fij). Hence, the weights for the unlikely direction are set to zero in order to break symmetries, and thus reduce the number of false positives. Finally, all edges with cij < \u03c4 are removed, where \u03c4 is a particular threshold. We test this scoring scheme using S\u03bc (Spearman) for pairs of the shifted expression series, where the significance level was set to \u03b1 = 0.01 and only absolute values of correlation larger than 0.9 have been taken into account. The choice of the measure to infer the weights in the first step of Algorithm 1 is independent of that and includes here the mean of sequence similarity and mutual information of symbols, as well as Spearman's and Pearson's correlation. Furthermore, this scoring scheme is applied in addition to (or after) another scoring scheme. It is important to note that, in contrast to the previously described modifications of the algorithm, the scoring scheme we propose here allows one to investigate the directionality also when symmetric measures are considered. Most of the measures used to infer the degree of interaction between pairs of genes, such as correlations or the mutual information, are symmetric. Hence, when applied in symmetric algorithms they are not able to unravel the regulatory dependences, since these measures do not distinguish the direction of the interaction. Thus, we introduce an asymmetric weighting based on topological aspects, for the complete set of pairwise weights obtained from a particular measure, and implement it according to Algorithm 1. In particular, we compute a matrix of weights, where the columns represent the genes which are regulated, and the rows stand for the genes which regulate other genes. The scoring value is then calculated by dividing each row entry by the sum of the corresponding column values. ROC analysis is a tool for visualizing, organizing, and selecting classifiers based on their performance in terms of a cost/benefit analysis. For this purpose the ROC space is defined by the false positive rate, fpr, and the true positive rate, tpr, which depict the relative trade-offs between true positives tp (benefits) and false positives fp (costs); the F-measure (F) is defined as the harmonic mean of precision and recall. An overview of the important quantities in ROC analysis is given in the accompanying table. In order to rank the performance of the different similarity measures and scoring schemes, we evaluate to which extent each of them accurately reconstructs the underlying network of regulatory interactions. 
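Returning to the asymmetric weighting scheme above (each weight divided by the sum of its column, with rows as regulators and columns as regulated genes), a minimal sketch might look as follows; the function name is illustrative and `tau` stands for the edge-removal threshold \u03c4 of Algorithm 1:

```python
import numpy as np

def asymmetric_weighting(W, tau=0.0):
    """Rows of W = regulating genes, columns = regulated genes.
    Each entry is divided by its column sum, so a target gene whose
    incoming weight is dominated by one regulator scores high for it."""
    col_sums = W.sum(axis=0, keepdims=True)
    C = np.divide(W, col_sums, out=np.zeros_like(W, dtype=float),
                  where=col_sums > 0)
    C[C < tau] = 0.0  # remove edges below the threshold tau
    return C
```

Because the normalisation differs per column, the resulting score matrix is asymmetric even when the input weights are symmetric, which is exactly what breaks the directionality ambiguity of symmetric measures.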
To this end, we use the receiver operating characteristic (ROC). While discrete classifiers lead to just a single point in the ROC space, ranked reconstructions yield full ROC curves, which we summarise by the area under the ROC curve (AUC) or the YOUDEN index (YOUDEN = max(tpr - fpr)). Another tool complementing the ROC analysis is the precision/recall graph (PvsR), which is based on the comparison between the true edges and the inferred ones. Hence, it highlights the precision of the reconstruction, and does not suffer from the typically large number of false positives in a gene regulatory network reconstruction. We thus give the summary statistic using the area under the precision-recall curve (AUC(PvsR)) as well as the ROC curve. An efficient implementation of the ROC analysis is provided by the R-package \"ROCR\". All ROC curves are evaluated with respect to the underlying GRN, which is a directed graph. As several of the scoring schemes/measures do not distinguish whether the regulation is directed from gene i to gene j or vice versa, some of the false positives will follow from the missing information on the directionality. However, since the network under study is a sparse one, these additional false positives barely carry any weight. The authors declare that they have no competing interests. All authors participated in the selection of the measures and algorithms included in this investigation. SD and ZN have chosen the test data set (parameters). SD implemented and evaluated the measures and scoring schemes. All authors participated in the comparison, interpretation and discussion of the results. SD and AK drafted the manuscript, ZN participated in structuring and formulating of the draft. ZN and JK provided feedback. All authors wrote, read and approved the final version of the manuscript. Supplement Figures. Figure 1: Performance of the identity scoring scheme using different measures operating on vectors, in terms of the ROC curves, where the false positive rate (fpr) vs. the true positive rate (tpr) is plotted. 
The results shown here are obtained from the Euclidean distance (EC\u03bc), the Ls norm (L\u03bc) and the Manhattan distance (MA\u03bc), as well as from the dynamic time warping (W\u03bc) with the step patterns symmetric1, symmetric2 and asymmetric. Figure 2: ROC curves obtained for the ID scoring scheme using the simple, conditional and partial Pearson correlation, and for the CCIR (also represented as a distance), in frames of the identity scoring scheme. Figure 5: The ROC curves obtained for the simple, conditional and partial Granger causality index; further panels show ROC curves of the Granger and partial Granger causality and the effect of (spline) interpolation. Figure 9: Artefacts introduced in the reconstruction procedure by interpolation of short, coarsely sampled time series. The left panel shows the corresponding ROC curves in the noise-free case for 10 points equally sampled in time, whereas the right panel presents the same results for 10 points, unequally sampled. The unequal sampling in time is the same as in Figure 8. Figure 10: ROC curves for selected measures and algorithms obtained in the noise-free case, using unequally sampled data without interpolation. The sampling is the same as in the previous two figures, including the following data points of a simulated series of 100 points: 1|2|3|6|9|15|25|39|63|99. Figure 11: ROC curves obtained from the reconstruction of an E. coli network of 100 genes, a S.cerevisiae network of 100 genes and an E. coli network of 200 genes. (a)-(i) show the results using various similarity measures together with the ID scoring scheme: (a) Euclidean distance EC\u03bc, (b) Manhattan distance MA\u03bc, (c) Ls norm L\u03bc, (d) Kendall's rank correlation K\u03bc, (e) Pearson correlation \u03bcP, (f) conditional Pearson correlation; together with these, results for the (j) MRNET, (k) CLR, and (l) ARACNE scoring schemes are shown. 
Figure 12: Summary statistics for the top-ranked measures/scoring schemes for increasing noise intensities (noise level 0.5). Similar approaches are grouped together. The first group in cyan refers to the different measures applied together with the ID scoring scheme. The green stands for the CLR scoring scheme, the orange for the MRNET, yellow refers to the ARACNE, magenta to the AWE and violet stands for the TS. Furthermore, blue groups together all measures applied with a combination of scoring schemes. Figure 13: Summary statistics ((a), (c) and (e) area under the ROC curve, as well as (b), (d) and (f) YOUDEN index) for the top-ranked measures/scoring schemes as a function of the noise intensity for varying lengths of the time series. The results in (a) and (b) are obtained from 8 time points, those in (c) and (d) from 10 time points, and those in (e) and (f) from 20 time points. Figure 14: (a) Illustration of the network and its degree distribution for 100 genes in E. coli. Here and in the following figures p(k) is the frequency of nodes with total degree k, p_in(k) is the frequency of nodes with an in-degree k, and p_out(k) is the frequency of nodes with an out-degree k. Furthermore, the network and its degree distribution for (b) 100 genes in S.cerevisiae, and (c) 200 genes in E. coli are shown. Click here for file"}
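The YOUDEN summary statistic used above (YOUDEN = max(tpr - fpr)) can be computed from a ranking of candidate edges against the true adjacency, as in this illustrative fragment (a stand-in sketch, not the R-package \"ROCR\" the authors actually used):

```python
import numpy as np

def youden_index(scores, labels):
    """Max over cut-offs of (tpr - fpr), for scores ranking candidate
    edges and labels in {0, 1} marking the true edges."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    P = y.sum()
    N = len(y) - P
    tpr = np.cumsum(y) / P        # true positive rate at each cut-off
    fpr = np.cumsum(1 - y) / N    # false positive rate at each cut-off
    return float(np.max(tpr - fpr))
```

A perfect ranking (all true edges scored above all false ones) gives 1, and a ranking no better than chance stays near 0, matching the use of the index as a single-number summary of a ROC curve.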
+{"text": "Microarray technology has produced a huge body of time-course gene expression data and will continue to produce more. Such gene expression data has been proved useful in genomic disease diagnosis and drug design. The challenge is how to uncover useful information from such data by proper analysis methods such as significance analysis and clustering analysis. Many statistic-based significance analysis methods and distance/correlation-based clustering analysis methods have been applied to time-course expression data. However, these techniques are unable to account for the dynamics of such data. It is the dynamics that characterizes such data and that should be considered in analysis of such data. In this paper, we employ a nonlinear model to analyse time-course gene expression data. We firstly develop an efficient method for estimating the parameters in the nonlinear model. Then we utilize this model to perform the significance analysis of individually differentially expressed genes and clustering analysis of a set of gene expression profiles. The verification with two synthetic datasets shows that our developed significance analysis method and cluster analysis method outperform some existing methods. The application to one real-life biological dataset illustrates that the analysis results of our developed methods are in agreement with the existing results. Saccharomyces cerevisiae be parameters of model which minimize\u03b8k, \u2009k = 1,\u2026, K}. In this study, it is assumed that a time-course gene expression dataset is a collection of time series which belongs to several clusters and time series in each cluster can be described by model with difof model for the kth cluster, parameters \u03b8k = can be estimated asCk| represents the number of time series in cluster Ck, \u2211k=1K|Ck| = N. 
According to the parameter estimation method proposed in the previous section for single time-course expression profiles, this study employs the following relocation-iteration algorithm to estimate the parameters such that the cost function is minimised: (1) select an initial partition for the given number of clusters K; (2) iterate (s = 1, 2,\u2026): (a) estimate the parameters \u0398s based on the current partition by using eq. (19); (b) generate a new partition by assigning each sequence x to the cluster k whose model fits it best; (3) stop if the improvement of the cost function is below a given tolerance. In 2(a), \u0398s = {\u03b8ks, 1 \u2264 k \u2264 K} represents the parameters estimated from the cost function at iteration s, while in 2(b), the parameters \u03b1js, \u03c9js, ajs, bjs, cjs, and djs represent the parameters of model j at iteration s. For the evaluation, a true positive is a time-course gene expression profile identified as such; a false positive is a noisy gene expression profile identified as time-course; a true negative is a noisy gene expression profile identified as such; and a false negative is a time-course gene expression profile identified as noisy. In this section, we use two synthetic datasets to evaluate our proposed significance analysis method and cluster analysis method, respectively. To evaluate the significance analysis method, we generate one synthetic dataset that consists of 2000 noisy gene expression profiles and 2000 time-course profiles based on the model. The sensitivity and the specificity depend on the thresholds which determine whether an expression profile is time-course or noisy. In general, as the threshold varies, the sensitivity increases while the specificity decreases, and vice versa. However, a good method is expected to have both high sensitivity and specificity. 
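The relocation-iteration scheme in steps (1)-(3) can be sketched as follows. Note that this toy version substitutes a simple polynomial trend, fitted by linear least squares, for the paper's nonlinear model, so it illustrates only the alternation between parameter estimation and relocation; all names are hypothetical:

```python
import numpy as np

def relocation_iteration(X, t, K, n_iter=50, tol=1e-8, seed=0):
    """Alternate (a) per-cluster least-squares model fitting and
    (b) reassignment of each series to the best-fitting cluster,
    stopping when the cost improvement falls below tol."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(X))  # (1) initial partition
    B = np.vander(t, 3)                       # design matrix: quadratic trend
    prev_cost = np.inf
    for _ in range(n_iter):                   # (2) iterate
        coeffs = []
        for k in range(K):                    # (a) estimate cluster models
            Xk = X[labels == k]
            if len(Xk) == 0:
                coeffs.append(np.zeros(B.shape[1]))
                continue
            coeffs.append(np.linalg.lstsq(B, Xk.mean(axis=0), rcond=None)[0])
        fits = np.stack([B @ c for c in coeffs])              # K fitted curves
        resid = ((X[:, None, :] - fits[None, :, :])**2).sum(axis=2)
        labels = resid.argmin(axis=1)         # (b) relocate each series
        cost = resid.min(axis=1).sum()
        if prev_cost - cost < tol:            # (3) stop on small improvement
            break
        prev_cost = cost
    return labels, cost
```

Like k-means, the procedure monotonically decreases the cost but may settle in a local optimum, which motivates the multiple random initial partitions discussed next.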
Given N objects, the r-cluster partition U = {u1,\u2026, ur} and the s-cluster partition V = {v1,\u2026, vs}, the adjusted Rand index (ARI) is defined from the contingency counts, where T is the number of pairs of the N objects, nij is the number of objects that are both in clusters ui and vj, i = 1,\u2026, r, j = 1,\u2026, s, ni. is the number of objects in cluster ui, and n.j is the number of objects in cluster vj. From these definitions, we have (22) ARI = (\u2211ij C(nij, 2) - \u2211i C(ni., 2) \u2211j C(n.j, 2) / T) / ((\u2211i C(ni., 2) + \u2211j C(n.j, 2)) / 2 - \u2211i C(ni., 2) \u2211j C(n.j, 2) / T), where C(n, 2) denotes the number of pairs out of n objects. To evaluate our proposed cluster analysis method, another synthetic dataset consisting of six clusters is generated from the model. We compare \u201calgorithm with random initial,\u201d \u201calgorithm with k-means initial,\u201d and \u201ck-means\u201d over several different numbers of clusters, where \u201calgorithm with random initial\u201d means our proposed clustering algorithm with a randomly chosen initial partition, \u201calgorithm with k-means initial\u201d means our proposed clustering algorithm with the k-means result as the initial partition, and \u201ck-means\u201d is an algorithm coded in the MATLAB software for the k-means clustering method. As the results of clustering are sensitive to the initial partition, we run our proposed clustering algorithm and the competing clustering algorithms 30 times on the synthetic dataset and calculate the average ARI (AARI) for each algorithm; those values of AARI are also listed. The performance of our algorithm with the k-means result as initial partition is comparable with that of k-means, which indicates that after k-means falls into a local optimum, our algorithm cannot escape from that local optimum and thus inherits the drawbacks of k-means. This also suggests that our developed algorithm should be combined with randomly chosen initial partitions. The real-life dataset contains the expression measurements of yeast (Saccharomyces cerevisiae) at 18 equally spaced time points in the alpha-synchronized experiment. The original dataset is publicly available at http://genome-www.stanford.edu. Genes with missing data are excluded in this study. 
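The ARI defined above can be computed directly from the contingency counts nij; a small self-contained sketch (an illustration, not the paper's MATLAB code):

```python
import numpy as np
from math import comb

def adjusted_rand_index(u, v):
    """ARI of two labelings u, v via the contingency table n_ij."""
    us, vs = np.unique(u), np.unique(v)
    n = len(u)
    nij = np.array([[np.sum((u == a) & (v == b)) for b in vs] for a in us])
    sum_ij = sum(comb(int(x), 2) for x in nij.ravel())
    sum_i = sum(comb(int(x), 2) for x in nij.sum(axis=1))   # row sums n_i.
    sum_j = sum(comb(int(x), 2) for x in nij.sum(axis=0))   # column sums n_.j
    T = comb(n, 2)                                          # total pairs
    expected = sum_i * sum_j / T
    max_index = 0.5 * (sum_i + sum_j)
    return (sum_ij - expected) / (max_index - expected)
```

The index is 1 for identical partitions (up to label permutation) and fluctuates around 0 for partitions agreeing no better than chance, which makes averaging it over 30 runs (the AARI above) meaningful.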
The resultant dataset contains the expression profiles of 4489 genes. In this section, we apply our proposed significance analysis and cluster analysis methods to this real-life gene expression dataset collected from the alpha-synchronized experiment, in order to study the alpha-synchronized cell division cycle process. We first apply our proposed significance analysis method to this dataset and set the significance level \u03b3 = 0.1. As a result, 846 genes are determined to be involved in the alpha-synchronized cell division cycle process, while the other 3643 genes are determined to be noise with respect to this process. The estimated \u03b1k are negative numbers, which is reasonable: as the cell division cycle is a stable biological system, after a perturbation such as the alpha synchronization the system tends to its stable attractor, and therefore the degradation rate represented by \u03b1k should be negative. Furthermore, the values of the model parameters ak and bk determine the importance of the periodic components. The absolute value of ak and bk is the largest for Cluster 3, while the absolute value of the parameter \u03b1k is small for Cluster 3; this indicates that the 17 genes in Cluster 3 are periodically expressed in this process. The absolute value of ak and bk is the second largest for Cluster 5, while the absolute value of the parameter \u03b1k is very large for Cluster 5; as a result, the gene expression profiles in Cluster 5 are quickly degrading while hardly displaying periodicity. In this paper, we have presented a significance analysis method and a cluster analysis method for time-course gene expression profiles. In these methods, time-course gene expression profiles are modeled by a nonlinear model, which is a generalization of several existing models. To estimate the parameters, which is key to both developed methods, we propose a two-step linear least squares method. 
One synthetic dataset has been employed to verify our developed significance analysis method in terms of sensitivity and specificity, while another synthetic dataset has been employed to verify our developed cluster analysis method in terms of AARI. The results have shown that both of our developed methods outperform some existing methods. The application to one real-life biological dataset illustrates that the analysis results of our developed methods are in agreement with the existing results. The reconstruction of gene regulatory networks from time-course gene expression data is a very important issue in systems biology. Obvious"}
+{"text": "Conventional identification methods for gene regulatory networks (GRNs) have overwhelmingly adopted static topology models, which remains unchanged over time to represent the underlying molecular interactions of a biological system. However, GRNs are dynamic in response to physiological and environmental changes. Although there is a rich literature in modeling static or temporally invariant networks, how to systematically recover these temporally changing networks remains a major and significant pressing challenge. The purpose of this study is to suggest a two-step strategy that recovers time-varying GRNs.Drosophila Melanogaster show that the proposed change point detection procedure is also able to work effectively in real world applications and the change point estimation accuracy exceeds other existing approaches, which means the suggested strategy may also be helpful in solving actual GRN reconstruction problem.It is suggested in this paper to utilize a switching auto-regressive model to describe the dynamics of time-varying GRNs, and a two-step strategy is proposed to recover the structure of time-varying GRNs. In the first step, the change points are detected by a Kalman-filter based method. The observed time series are divided into several segments using these detection results; and each time series segment belonging to two successive demarcating change points is associated with an individual static regulatory network. In the second step, conditional network structure identification methods are used to reconstruct the topology for each time interval. This two-step strategy efficiently decouples the change point detection problem and the topology inference problem. Simulation results show that the proposed strategy can detect the change points precisely and recover each individual topology structure effectively. 
Moreover, computation results with the developmental data of Identifying causal relationships of a gene regulatory network (GRN) is one of the fundamental problems in understanding cell behaviors. For most conventional identification methods, it is generally assumed the topological structure is constant over time. Based on this assumption, various models and methods have been proposed, such as Boolean networks Drosophila Melanogaster, which is segmented into different life stages: embryogenesis, larva, pupa and adult Saccharomyces cerevisiae exhibit dramatic topological changes and hub transience during a temporal cellular process and in response to diverse stimuli Recent research results, however, show that GRNs are dynamic in response to physiological and environmental changes. For instance, an example of such time-varying regulatory network can be provided by the development of the fruitfly To identify time-varying GRNs, some special methods have been proposed recently. A machine learning method called TESLA is presented in Drosophila Melanogaster during a complete time-course of development, and computation results show that the proposed change point detection procedure has ability to work effectively in real world applications and the change point positioning accuracy exceeds other existing approaches, which means the suggested strategy may also be helpful in solving actual GRN reconstruction problem.The purpose of this study is to suggest a two-step strategy that recovers time-varying GRNs. In this paper, the model for time-varying GRNs is adopted as the switching auto regressive model. Consequently, in the first step, on the basis of a relation between the Kalman filter and recursive least squares (RLS) estimation, it is shown that a stochastic process can be constructed which is white if and only if the time series expression data are generated by the same sub-regulatory network. 
Based on this observation, a procedure is developed to detect change points of a time-varying GRN. The observed time series are divided into several segments using these detection results; and each time series segment belonging to two successive demarcating change points is associated with an individual static regulatory network. And then, in the second step, conditional network structure identification methods are used to reconstruct the topology for each time interval. In summary, in the suggested time-varying GRN identification strategy, the problem of identifying a time-varying regulatory network is transformed into that of identifying multiple single static regulatory networks. To solve the latter is much easier than to solve the former. For the performance evaluation, we use both time series data generated by a synthetic time-varying GRN and time series data provided by the DREAM3 challenge, and simulation results confirm that the proposed strategy can detect the change points precisely and recover each individual topological structure effectively. For a real data application, our proposed strategy is applied to both the In Silico data and the developmental data of Drosophila Melanogaster. Variations of the estimation performance with respect to the parameters of the suggested method will also be reported. Besides, some concluding remarks are given about the characteristics of the suggested method, as well as some future works worthy of further effort. Finally, an appendix is included. The rest of this paper is organized as follows. At first, the problem discussed in this paper and some mathematical preliminary results are given; then the change point estimation algorithm is illustrated and the two-step strategy is derived. Afterwards, the proposed estimation strategy is assessed using both synthetic and real data. The following notations are adopted in this paper. 
Generally speaking, a basic model for a time-varying GRN consists of a sequence of static sub-networks, each governing the dynamics over one time interval. It has been pointed out that over the course of a cellular process, such as a cell cycle or an immune response, there may exist multiple underlying themes that determine the functionalities of each molecule and their relationships to each other, and such themes are dynamic and stochastic. The change points between themes are not known a priori in this type of experiment and, by extension, change points are not always known a priori in general. Therefore, it is assumed that the change points are unknown and their values need to be estimated, as in some literature on the identification of time-varying GRNs. In general, normal biological tissues will undergo morphologic changes when subjected to some external stimuli, such as ionizing radiation; in fact, much literature has studied radiation tolerance. Based on the discussion above, the time-varying GRN identification problem discussed in this paper is as follows: given a series of gene expression vectors, estimate the change points and the topological structure of each sub-network. It is well known that the computation procedure of recursive least squares (RLS) parametric estimation for AR models possesses the same form as that of Kalman filtering; starting from an a priori unbiased estimate of the parameters of a linear time invariant (LTI) AR system, the RLS recursion coincides with the Kalman filter recursion. In the time-varying GRN identification, a common situation is that the available knowledge about the actual network topology is nothing but its gene expression data. In order to develop the change points detection procedure, it appears appropriate to investigate at first whether or not there exist some detectable stochastic differences between gene expression data generated by the same sub-network and those generated by more than one sub-network. If the answer is positive, then a change of these stochastic properties reflects a switch between two sub-networks. In other words, a statistic can be constructed for estimating change points of a time-varying GRN.
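The innovation-whiteness idea can be illustrated with a deliberately simple scalar sketch (our own construction, not the paper's exact procedure): fit an AR(1) model on an early window, then monitor the one-step prediction residuals. While the data come from the fitted sub-network the residuals stay white with a stable variance; a switch of sub-network inflates it. All values here (coefficients 0.9 and -0.5, window sizes, the 2.5 threshold) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar illustration: one "gene" whose autoregressive coefficient
# switches at t = 150, mimicking a switch between two sub-networks.
T, cp = 300, 150
x = np.zeros(T)
for t in range(1, T):
    a = 0.9 if t < cp else -0.5          # sub-network 1 vs sub-network 2
    x[t] = a * x[t - 1] + 0.1 * rng.standard_normal()

# Fit an AR(1) model on an initial window assumed to precede any change, then
# monitor the one-step prediction residuals (the innovation process).
w = 80
a_hat = (x[1:w] @ x[:w - 1]) / (x[:w - 1] @ x[:w - 1])
resid = x[1:] - a_hat * x[:-1]
sigma2 = np.var(resid[:w - 1])

# While the data come from the fitted sub-network, the innovations stay white
# with variance near sigma2; after the switch their variance inflates, so a
# rolling-variance test localizes the change point (threshold 2.5 is ad hoc).
win = 30
roll_var = np.array([resid[t - win:t].var() for t in range(win, len(resid))])
detected = int(np.argmax(roll_var > 2.5 * sigma2)) + win
```

Because the detector waits until most of a rolling window lies past the switch, the estimate lags the true change point by a fraction of the window length.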
Based on these considerations, the stochastic properties of the innovation process of the network are analyzed. Suppose that the gene expression time series data are generated by the switching model above; then it can be shown that the innovation process is white if and only if the data are generated by the same sub-network. A proof of this theorem is given in the appendix. Corollary 1 makes it clear that change point estimation for a time-varying GRN can be reduced to testing the whiteness of the innovation process, and, based on the conditions of Corollary 1, a detection statistic is defined; the results of Corollary 2 are helpful in detecting the change points in actual applications. An attractive property of the change point detection procedure is that its computational complexity does not depend on the number of change points. Moreover, it is also worthwhile to point out that in this detection procedure, neither a prior distribution on the number of change points nor knowledge about the change time instants is required; i.e., our change point detection procedure does not require a prior distribution on the structure of a GRN, which is the major difference from previously proposed methods. With the change point estimates obtained in the above subsection, the two-step strategy is as follows: first, estimate the change points using the change point detection procedure; second, for each time series segment, infer the causal relationships by conventional network structure identification methods, such as IOTA. In summary, the suggested two-step strategy decouples the change point detection problem and the topology inference problem; that is, the problem of identifying a time-varying regulatory network is transformed into that of identifying multiple single static regulatory networks, and the latter is much easier to solve than the former. When a priori information about the network topology is available, it is also desirable to jointly learn the static networks across time segments, and a feasible method for this is also available. It should be noted that the number of biological experimental time series samples is very limited.
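Once change points are available, the second step reduces to ordinary static-network identification on each segment. A minimal sketch with a hypothetical two-gene switching system (the matrices A1 and A2, the change point, and the noise level are made up for illustration), fitting each segment's VAR(1) coefficient matrix by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-gene system that switches static networks at t = 100.
A1 = np.array([[0.8, 0.0], [0.4, 0.5]])   # sub-network 1: gene 0 drives gene 1
A2 = np.array([[0.8, -0.4], [0.0, 0.5]])  # sub-network 2: gene 1 inhibits gene 0
T, cp = 200, 100
x = np.zeros((T, 2))
x[0] = rng.standard_normal(2)
for t in range(1, T):
    A = A1 if t < cp else A2
    x[t] = A @ x[t - 1] + 0.05 * rng.standard_normal(2)

def fit_var1(seg):
    """Least squares VAR(1) coefficient matrix for one segment."""
    Y, Z = seg[1:], seg[:-1]
    return np.linalg.lstsq(Z, Y, rcond=None)[0].T

# Step two of the strategy: with the change point known, each segment is an
# ordinary static-network identification problem.
A1_hat = fit_var1(x[:cp])
A2_hat = fit_var1(x[cp:])
```

Each recovered coefficient matrix is close to the one that generated its segment, which is exactly the decoupling the text describes.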
If some a priori knowledge exists, then we can use it to set the initial values. In order to evaluate the properties of the suggested two-step strategy, gene expression time series data are generated by a synthetic dynamic network. This simulated dynamic network includes two sub-networks, in which an edge indicates a regulation from gene l to gene k. In the second step, we apply a recent identification method named the inner composition alignment (IOTA). In systems biology, predictions are compared with the actual network structure using the following two different metrics in topology prediction accuracy evaluations. AUPR: the area under the precision-recall curve. AUROC: the area under the receiver operating characteristic curve. We choose the significance level \u03b1 as 0.05, independently simulate this dynamic network 500 times, and summarize the change point estimation results; similar experiments are also conducted on the DREAM3 in silico size10 challenge data. The above simulation is an academic case, in which the experimental environment is very close to the fundamental assumption of the method. The development of Drosophila Melanogaster has been well-studied in different aspects, and cDNA microarrays were used to analyze the RNA expression levels of Drosophila Melanogaster genes during a complete time-course of development over 66 sequential time periods, including the embryonic period (30 samples), the larval period (10 samples), the pupal period (18 samples) and the first 30 days of adulthood (8 samples). In addition, a major morphological change relates to a modification of transcriptional regulations during the first 0 to 6.5 hours of embryonic development, which consists of 12 samples. Therefore, the actual change instants ought to be 13, 31, 41, and 59.
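The two evaluation metrics named above can be computed directly from edge scores and gold-standard labels. A small self-contained sketch, using the pairwise-comparison form of AUROC and the average-precision form of AUPR (one common variant of the area under the precision-recall curve; challenge evaluations may use a trapezoidal variant instead):

```python
def auroc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen true
    edge is scored above a randomly chosen non-edge (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def aupr(scores, labels):
    """Area under the precision-recall curve (average precision form):
    mean of the precision values at the rank of each true edge."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, precisions = 0, []
    for k, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            precisions.append(tp / k)
    return sum(precisions) / len(precisions)
```

For instance, for scores [0.9, 0.8, 0.4, 0.2, 0.1] with labels [1, 0, 1, 0, 0], both metrics evaluate to 5/6.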
Here, we use a sub-dataset of this developmental data, containing the following 11 genes: \u2018actn\u2019,\u2018eve\u2019,\u2018gfl\u2019,\u2018mhc\u2019,\u2018mlc1\u2019,\u2018msp300\u2019,\u2018myo61f\u2019,\u2018prm\u2019,\u2018sls\u2019,\u2018twi\u2019, and \u2018up\u2019. To apply our change point detection procedure to the gene expression data, we first estimate the noise covariance matrix. Based on the resulting change point estimates, the developmental data of Drosophila Melanogaster are divided into five segments. For each segment, we use the well-studied LASSO model in statistics to infer the causal relationships. The results on the developmental data of Drosophila Melanogaster show that the suggested two-step strategy may also be helpful in solving actual GRN reconstruction problems. Apart from the developmental data of Drosophila Melanogaster, there exist some other real microarray compendia. In particular, the more recent DREAM5 network inference challenge offers some alternative real microarray compendia, which can be found on the web site at http://wiki.c2b2.columbia.edu/dream/index.php/D5c4. Since change points are not known a priori in general, we use the suggested change point detection algorithm to check whether there is a topological change in response to the external interference P19. Based on the gold standard of Network4 suggested by the DREAM project organizers, we select three sub-networks. In this way, although it is not clear whether there is a biological significance for these sub-networks, it can be guaranteed that the system matrix satisfies the requirements of our model. The network is from S. cerevisiae; the external interference P19 is phenelzine treatment; and the de-anonymized gene names are listed as follows: G15: YKL043W, G20: YJR147W, G21: YER045C, G45: YMR016C, G47: YNL167C, G61: YLR131C, G76: YJR060W, G87: YHR206W, G95: YNL314W, G101: YGL162W, G111: YOR028C, G112: YER111C, G152: YLR182W, G212: YDL106C, G213: YIL130W, G224: YEL009C, G273: YDR259C, and G319: YPR104C. Finally, some information about the dataset can be found in the referenced material. In this paper, we consider the time-varying GRN identification problem. The switching auto-regressive model is used to approximate the regulatory model for time-varying GRNs, and a two-step strategy is proposed to recover the topological structure. In the first step, on the basis of a relation between the Kalman filter and recursive least squares estimation, it is shown that the innovation process is white if and only if the time series expression data are generated by the same sub-regulatory network. Based on this observation, a procedure is developed to detect change points of a time-varying GRN. The observed time series are divided into several segments based on these detection results, and each time series segment between two successive demarcating change points is associated with an individual static regulatory network. Therefore, in the second step, the causal relationship inference problem for each time interval can resort to conventional network structure identification methods, such as IOTA and LASSO. Computation results with the developmental data of Drosophila Melanogaster show that the suggested strategy may also be helpful in solving actual GRN reconstruction problems. The main difficulty of the time-varying GRN identification problem is that it includes a classification problem in which each data point must be associated with the most suitable sub-network. The more precise the data classification is, the better the identification results are.
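The per-segment LASSO inference mentioned above amounts to a sparse regression of each gene on all genes at the previous time point within the segment. A hedged sketch using iterative soft-thresholding (ISTA); the data, the penalty lam, and the solver choice are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical segment data: the target gene's expression at time t regressed
# on p candidate regulators at time t-1; only genes 1 and 4 truly regulate it.
p, n = 10, 200
X = rng.standard_normal((n, p))              # regulator expressions at t-1
beta_true = np.zeros(p)
beta_true[[1, 4]] = [0.8, -0.6]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by iterative soft-thresholding."""
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n            # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

b = lasso_ista(X, y)
regulators = np.flatnonzero(np.abs(b) > 1e-8)   # inferred regulator set
```

The L1 penalty drives the coefficients of non-regulators exactly to zero, so the support of b is read off directly as the inferred edge set for that gene.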
The proposed two-step strategy efficiently estimates the change points, which results in the decoupling of the change point detection problem and the topology inference problem. Hence, the problem of identifying a time-varying regulatory model is transformed into that of identifying multiple single static regulatory models, which means that the proposed two-step strategy can ease the difficulty of recovering a time-varying GRN. Simulation results show that the proposed strategy can detect the change points precisely and recover each individual topological structure effectively, and computation results with the developmental data of Drosophila Melanogaster confirm its usefulness on real data. Under our two-step strategy architecture, recovering a static GRN from time series is the most basic problem. However, this problem has not yet been solved completely and efficaciously; therefore, the most urgent problem is how to utilize gene expression time series data to obtain a static network structure with high accuracy. Text S1 Appendix: Proof of Theorem 1 (PDF)."}
+{"text": "In nonlinear dynamic systems, synchrony through oscillation and frequency modulation is a general control strategy to coordinate multiple modules in response to external signals. Conversely, the synchrony information can be utilized to infer interaction. Increasing evidence suggests that frequency modulation is also common in transcription regulation.In this study, we investigate the potential of phase locking analysis, a technique to study the synchrony patterns, in the transcription network modeling of time course gene expression data. Using the yeast cell cycle data, we show that significant phase locking exists between transcription factors and their targets, between gene pairs with prior evidence of physical or genetic interactions, and among cell cycle genes. When compared with simple correlation we found that the phase locking metric can identify gene pairs that interact with each other more efficiently. In addition, it can automatically address issues of arbitrary time lags or different dynamic time scales in different genes, without the need for alignment. Interestingly, many of the phase locked gene pairs exhibit higher order than 1:1 locking, and significant phase lags with respect to each other. Based on these findings we propose a new phase locking metric for network reconstruction using time course gene expression data. We show that it is efficient at identifying network modules of focused biological themes that are important to cell cycle regulation.Our result demonstrates the potential of phase locking analysis in transcription network modeling. It also suggests the importance of understanding the dynamics underlying the gene expression patterns. A major goal of systems biology is to integrate biological functions of individual genes in terms of their interactions. 
Time course gene expression profiling, which can capture the global transcriptional responses to signals during a biological process of interest, offers a major data source to achieve this goal. In network modeling of gene expression data, assessing pair-wise relationships is often a starting point. In the early days, the correlation coefficient and the Euclidean distance were commonly used; later, a local clustering approach based on optimal pair-wise alignment through dynamic programming was proposed. The phase lags between transcription factors (TFs) and their targets show no consistent pattern; this is not surprising, as it is the protein of the TF that interacts directly with the target gene transcription, not the expression of the TF transcript. Based on these findings, we constructed interaction networks and revealed that genes with higher network degrees are more likely to be essential genes. Utilizing the phase-locking index based topological overlapping matrix, we further investigated the modular structures in the network. We showed that genes forming network modules are more likely to be essential genes than scattered genes in the network, and members of the same module tend to be involved in the same biological functions and processes. In view of the importance of the frequency domain signal in transcription regulation, we believe that the phase locking analysis can potentially lead to new network modeling approaches and help to understand the dynamic designs of the intracellular signaling networks. Yeast cell cycle gene expression data were downloaded from the Yeast Cell Cycle project at Stanford University (http://genome-www.stanford.edu/cellcycle/data/rawdata/). These studies profiled expression changes in 6178 genes at about 20 time points under each condition following alpha factor arrest (18 time points from 0-119 minutes), elutriation ELU (14 time points from 0-390 minutes), and arrest of a cdc15 (24 time points from 10-290 minutes) and a cdc28 (28 time points from 0-160 minutes) temperature sensitive mutant. Genes annotated in the Gene Ontology (http://www.geneontology.org/) to be involved in the biological process of cell cycle are termed cell cycle genes in this study. Simon et al studied the transcription regulation of yeast genes by 9 cell cycle regulating transcription factors (TF): Fkh1, Fkh2, Ndd1, Mcm1, Ace2, Swi5, Mbp1, Swi4, and Swi6, using the ChIP-chip technology. For each gene i, the Z-score of the binding significance sig across the 9 TFs is also determined to examine binding specificity. We constructed a positive control target set for each TF that consists of the genes with sig > 3 (significant binding) and Z-score > 1.5 (the binding is specific). The number of targets for each TF ranges from 18-54 for the alpha factor arrest data set, 12-50 for the cdc15 dataset, 1-21 for the cdc28 dataset, and 19-65 for the ELU data set. A negative control non-target set is constructed for each TF that includes all genes with sig < 1 (p > 0.1). This set consists of over 3,000 genes for each TF in the alpha factor and cdc15 datasets, over 875 for each TF in the cdc28 dataset, and over 4,000 for each TF in the ELU dataset. We adopt the analytic signal concept to derive the instantaneous phase. For a time series s(t), its Hilbert transformation is H[s](t) = (1/\u03c0) PV \u222b s(t')/(t - t') dt', where PV stands for the Cauchy Principal Value of the integration. The instantaneous phase \u03c6(t) of the analytic signal s(t) + iH[s](t) is thus uniquely determined. Since \u03c6(t) calculated this way can be sensitive to low-frequency trends, the time series were detrended before the transformation. For two time series with instantaneous phases \u03c6i(t) and \u03c6j(t), their cyclic relative phase is determined by \u03a8(t) = (\u03c6i(t) - \u03c6j(t)) mod 2\u03c0. For perfect locking, \u03a8 = \u03a80 is a constant; on a Poincare phase map this will be represented by a stable fixed point. For noisy time series the phase difference is in general not a constant but distributes around \u03a80, i.e., |\u03a8 - \u03a80| stays bounded. A supplementary file lists the gene pairs selected with the thresholds 0.77, \u03bb1,1 < 0.2, and r < 0.45."}
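The analytic-signal construction just described is easy to reproduce numerically: the Hilbert transform can be computed via the FFT, and a 1:1 phase locking index can be taken as the mean resultant length of the cyclic relative phase (the n:m generalization would use n*phi_i - m*phi_j in place of the plain difference). A sketch with synthetic signals; the frequencies and the 1-radian lag are made-up illustrations:

```python
import numpy as np

def analytic_signal(s):
    """Analytic signal s + i*H[s] via the FFT (assumes even length)."""
    n = len(s)
    S = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0      # keep DC and Nyquist
    h[1:n // 2] = 2.0           # double positive frequencies, drop negative ones
    return np.fft.ifft(S * h)

def phase_locking_index(a, b):
    """|<exp(i(phi_a - phi_b))>|: 1 for perfect 1:1 locking, near 0 otherwise."""
    pa = np.angle(analytic_signal(a))
    pb = np.angle(analytic_signal(b))
    return abs(np.exp(1j * (pa - pb)).mean())

t = np.linspace(0, 10, 1024, endpoint=False)
x = np.cos(2 * np.pi * 1.0 * t)           # a periodically expressed gene
y = np.cos(2 * np.pi * 1.0 * t - 1.0)     # same period, lagged by 1 radian
z = np.cos(2 * np.pi * 3.3 * t)           # unrelated rhythm

# The circular mean of the relative phase recovers the phase lag itself.
lag = np.angle(np.mean(np.exp(1j * (np.angle(analytic_signal(x))
                                    - np.angle(analytic_signal(y))))))
```

Here phase_locking_index(x, y) is essentially 1 with lag about 1.0 radian, while phase_locking_index(x, z) is near 0, mirroring the locked versus unlocked cases discussed in the text.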
+{"text": "In the analysis of effects by cell treatment such as drug dosing, identifying changes on gene network structures between normal and treated cells is a key task. A possible way for identifying the changes is to compare structures of networks estimated from data on normal and treated cells separately. However, this approach usually fails to estimate accurate gene networks due to the limited length of time series data and measurement noise. Thus, approaches that identify changes on regulations by using time series data on both conditions in an efficient manner are demanded.We propose a new statistical approach that is based on the state space representation of the vector autoregressive model and estimates gene networks on two different conditions in order to identify changes on regulations between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of regulations on each condition. The use of the hidden binary variables enables an efficient data usage; data on both conditions are used for commonly existing regulations, while for condition specific regulations corresponding data are only applied. Also, the similarity of networks on two conditions is automatically considered from the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches the configuration of the binary variables maximizing the marginal likelihood.For the performance evaluation, we use time series data from two topologically similar synthetic networks, and confirm that our proposed approach estimates commonly existing regulations as well as changes on regulations with higher coverage and precision than other existing approaches in almost all the experimental settings. 
For a real data application, our proposed approach is applied to time series data from normal Human lung cells and Human lung cells treated by stimulating EGF-receptors and dosing an anticancer drug termed Gefitinib. In the treated lung cells, a cancer cell condition is simulated by the stimulation of EGF-receptors, but the effect would be counteracted due to the selective inhibition of EGF-receptors by Gefitinib. However, gene expression profiles are actually different between the conditions, and the genes related to the identified changes are considered as possible off-targets of Gefitinib. From the synthetically generated time series data, our proposed approach can identify changes on regulations more accurately than existing methods. By applying the proposed approach to the time series data on normal and treated Human lung cells, candidates of off-target genes of Gefitinib are found. According to the published clinical information, one of the genes can be related to a factor of interstitial pneumonia, which is known as a side effect of Gefitinib. Gene network estimation from time series gene expression data is a key task for elucidating cellular systems. Thus far, a wide variety of approaches have been proposed based on the vector autoregressive (VAR) model, the state space model, and dynamic Bayesian networks. A possible way for finding changes on regulations is to estimate networks from two data sets separately and then compare their structures. However, due to the limited length of time series data and unignorable measurement noise, networks are estimated with high error rates, and the estimation errors cause serious failures in identifying changes on regulations. Thus, approaches using two time series data in an efficient manner are strongly demanded. Also, widely used statistical methods such as the VAR model and dynamic Bayesian network assume equally spaced time points in time series data.
However, observed time points in usually available time series data are not equally spaced, and handling such data is required. We propose a new statistical model that estimates gene networks on two different conditions in order to identify changes on regulations between the conditions. As the basis of the proposed model, we employ the state space representation of the VAR model (VAR-SSM), in which observation noise between the measured gene expressions and the true gene expressions is considered in the observation model, and gene regulations between true gene expressions are considered in the system model. The hidden binary variables are estimated by searching the configuration of binary variables that maximizes the marginal likelihood of the model. However, searching the optimal configuration is computationally intractable; thus, as an alternative approach, we derive a new variational annealing method based on the deterministic annealing in order to search the configuration efficiently. For the performance evaluation, we generate two regulatory networks in such a way that most of the regulations commonly exist and some exist only in one of the networks. We then apply our proposed approach and existing VAR model based and dynamic Bayesian network based approaches to two equally spaced time series data sets drawn separately from the generated networks. From the comparisons of true positive rates and false positive rates of these approaches, we confirm the effectiveness of our approach. We also generate unequally spaced time series data from these networks, and show that our approach works correctly on unequally spaced time series data while the performance of the existing approaches assuming equally spaced time points is drastically worsened. Our proposed approach is used to analyze changes on regulations in gene networks between normal Human lung cells and Human lung cells treated by stimulating EGF-receptors and dosing an anticancer drug termed Gefitinib. A lung cancer condition is simulated by the stimulation of EGF-receptors in the treated cells.
Since Gefitinib is known as a selective inhibitor of EGF-receptors, the stimulation of EGF-receptors would be counteracted by Gefitinib, and hence the treated cells are expected to be in the same condition as normal cells. However, gene expression profiles from normal and treated cells are actually different, and off-targets of Gefitinib causing unexpected positive or negative effects are implied. We focus on genes with changes on regulations between the networks estimated by our approach and find possible off-target genes of Gefitinib. According to the published clinical information, one of the possible off-target genes is suggested as one of the factors of interstitial pneumonia, which is known as a side effect of Gefitinib. Given gene expression profile vectors of p genes during T time points {y1, ..., yT}, the first order vector autoregressive (VAR(1)) model at time point t is given by yt = Ayt-1 + \u03b5t, where A is a p \u00d7 p autoregressive coefficient matrix, and \u03b5t is noise at time t following a normal distribution. The (i, j)th element of A, Aij, indicates a temporal regulation from the jth gene to the ith gene, and if Aij \u2260 0, regulation from the jth gene to the ith gene is considered. By examining whether Aij is zero or not for all i and j, a gene network is constructed. Since equally spaced time points are assumed in the VAR model, it has difficulty in handling unequally spaced time series data. Let xt be the hidden variable vector representing true gene expression at time t over the T time points. The system model is given as the VAR model of xt: xt = Axt-1 + \u03b7t, where \u03b7t is the system noise normally distributed with mean 0 and variance H = diag(h1, ..., hp). The observation model represents the measurement error between observed gene expression yt and true gene expression xt at each observed time point: yt = xt + \u03c1t, where \u03c1t is the observation noise normally distributed with mean 0 and variance R = diag(r1, ..., rp). Unequally spaced time series data are handled by ignoring the observation model at non-observed time points. In the following, the superscript c denotes the condition (c = 1 or 2).
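The way a state space model handles unobserved time points can be sketched with a scalar Kalman filter that simply skips the measurement update wherever no observation exists; the dimensions and noise levels below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal scalar sketch: state x_t = a*x_{t-1} + eta_t (system model),
# observation y_t = x_t + rho_t (observation model).  Time points with no
# measurement are marked NaN and the update is skipped there, which is how
# unequally spaced sampling is accommodated.
a, h, r = 0.9, 0.1 ** 2, 0.3 ** 2
T = 120
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(h) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)
y[rng.random(T) < 0.4] = np.nan          # roughly 40% of time points unobserved

def kalman_filter(y, a, h, r, m0=0.0, p0=1.0):
    m, p, means = m0, p0, []
    for obs in y:
        m, p = a * m, a * a * p + h      # predict through the system model
        if not np.isnan(obs):            # update only where a measurement exists
            k = p / (p + r)
            m, p = m + k * (obs - m), (1 - k) * p
        means.append(m)
    return np.array(means)

xhat = kalman_filter(y, a, h, r)
```

On the observed time points the filtered estimate is markedly closer to the true state than the raw noisy measurements, while the skipped updates bridge the unobserved gaps through the system model alone.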
We also let E(c) be a binary matrix for condition c whose (i, j)th element E(c)ij takes one if a regulation from gene j to gene i exists on condition c and zero otherwise, i.e., the presence of regulations is controlled by E(c). The model is defined over Y, X, \u0398, and E, where Y, X, \u0398, and E are respectively the sets of observations, hidden variables, parameters, and binary variables, and the prior distribution of E(c) is assumed to be factorized over its elements. Here, zij is a parameter for the potential function of the binary variables, and the prior for Aij is specified with parameters \u03b10 and \u03b11 controlling the shrinkage of the coefficients A, where Fij is a binary variable that takes 1 if a regulation from gene j to gene i exists in condition 1 or 2. \u03b11 is set to a large value, while \u03b10 is set smaller than \u03b11; from the design of the prior for Aij, if a regulation from j to i exists in condition 1 or 2, the weaker shrinkage is applied. The priors for hi and ri are specified with shape parameters u0 and v0 and inverse scaling parameters k0 and l0. Under the assumption that a small number of regulations change between the two conditions, we design the prior distribution for the binary variables on condition c = 1 or 2 so that, if zij is small, changes on the regulation from gene j to gene i are penalized, and we place a prior on zij given by a beta distribution with parameters \u03b6i0 and \u03b6i1, where B(\u00b7) is the beta function. In the graphical representation of the model, the observed data yt are represented by gray nodes. For the parameter estimation, we search the configuration of E maximizing the marginal likelihood in Equation (1). However, finding the optimal configuration of E is computationally intractable, and heuristic approaches such as the EM algorithm and the variational method are used in practice. Here, we use the variational annealing, an extension of the deterministic annealing for discrete variables. In the deterministic annealing, the optimization problem is solved while gradually changing the temperature according to some schedule, and a maximum likelihood estimator is obtained as in the EM algorithm.
Let E, X, and \u0398 be the binary variables, unobserved variables, and parameters, respectively, and consider searching for the E maximizing the marginal likelihood in Equation (2). The maximum of the marginal likelihood on E is bounded as in Equation (3), where \u03c4 is called the temperature and the equality holds for \u03c4 \u2192 +0. Hereafter, the integral range of E is omitted if no confusion occurs. Let Q(E) be a normalized non-negative function, i.e., Q(E) \u2265 0 and \u222b Q(E)dE = 1. From the Gibbs inequality, the right side of Equation (3) is also bounded, and Q(E) is considered as an approximation function of the posterior of E. Assuming the additional factorization with normalized non-negative functions Q(X) and Q(\u0398), we have the lower bound in Equation (4). Thus, as an approximation of the E maximizing Equation (2), we try to find the Q(E) maximizing the lower bound in Equation (4). Since higher values of the posterior are weighted more in the approximation by Q(E) for \u03c4 < 1, the better approximation is expected for the higher values; this property in the limiting case is shown in Proposition 1. As in the variational method and the EM algorithm, the maximum of the lower bound in Equation (4) is searched by hill climbing from a high temperature \u03c4 > 1. In the hill climbing, Q(X), Q(\u0398), and Q(E) are alternately updated. By gradually converging \u03c4 to 0, a local optimum of the lower bound and the corresponding Q(E), Q(X), and Q(\u0398) are obtained. As alternatives to the variational annealing, we may consider the variational method, where X is the set of hidden variables and E is handled as a set of parameters to be maximized, and the EM algorithm. We show the effectiveness of the variational annealing compared to the variational method and the EM algorithm under the following conditions: P is factorized element-wise over E and each factor is given as a binomial distribution, where Ei is the ith element of E. If the factorization P = Q(X)Q(\u0398)Q(E) is assumed in the calculation of the variational method, arg maxE Q(E) is not the optimal solution of Equation (2) in general.
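The role of the temperature can be seen on a toy discrete problem (made-up numbers, not the paper's model): raising an unnormalized posterior over binary configurations to the power 1/tau gives a nearly uniform distribution at high temperature, and a distribution concentrated on the maximizing configuration as tau approaches 0, which is why cooling while hill climbing can reach the maximizer.

```python
# Unnormalized posterior over 2-bit configurations E (made-up numbers).
post = {(0, 0): 0.30, (0, 1): 0.35, (1, 0): 0.20, (1, 1): 0.15}

def tempered(post, tau):
    """Q(E) proportional to post(E)**(1/tau), normalized over configurations."""
    pmax = max(post.values())            # normalize by the max to avoid underflow
    w = {e: (p / pmax) ** (1.0 / tau) for e, p in post.items()}
    z = sum(w.values())
    return {e: v / z for e, v in w.items()}

tau = 4.0
while tau > 1e-3:
    q = tempered(post, tau)              # at high tau, q is nearly uniform
    tau /= 1.05                          # cooling schedule, as in the text
best = max(q, key=q.get)                 # mass concentrates on the maximizer
```

In this toy case the annealed distribution ends up putting essentially all its mass on the configuration (0, 1), the maximizer of the posterior, whereas at tau = 100 every configuration has weight close to 1/4.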
In the EM algorithm, E is allowed to move around a p-dimensional continuous space and the estimate is then mapped to the p-dimensional binary space {0,1}p, but such a mapping is not guaranteed to provide the optimal configuration. In the following, we prove a proposition in order to show that the variational annealing can give the optimal solution of Equation (2) even if the factorization is assumed. Proposition 1: suppose that P is factorized element-wise over E and each factor is given as a binomial distribution, and let Q(Ei) be the function maximizing the lower bound of the variational annealing for \u03c4 \u2192 +0; then, the set of Ei \u2208 {0,1} maximizing Q(Ei) is the optimal solution of Equation (2). That is, if the factorization is satisfied and the optimal Q functions are found, the variational annealing is guaranteed to provide the optimal solution of Equation (2) while the variational method and the EM algorithm are not. Although the factorization is not a generally satisfied property, the factorization is often assumed in approaches based on the variational method, and the assumption usually works as a good approximation. Thus, the variational annealing is expected to provide the better performance than the variational method and the EM algorithm even if the factorization is not satisfied exactly. For the proof of the proposition, see Section 1 in the Additional file. In the variational annealing on the proposed model, we calculate the Q functions for hidden variables X, parameters \u0398, and binary variables E iteratively while cooling the temperature \u03c4 to zero gradually at each iteration cycle. In the following, we show the calculation procedures of Q(X), Q(\u0398), and Q(E) on the proposed model as the variational E-step, variational M-step, and variational A-step, respectively; more details of the procedures are given in the Additional file. We denote the expectation of x with a probability distribution Q(y) as \u2329x\u232aQ(y). The parameters of Q(X) are the mean of xt, the variance of xt, and the cross time variance of xt-1 and xt.
These parameters can be calculated via the variational Kalman filter by using the following terms expected with Q(\u0398)Q(E): \u2329E(c)\u232a, \u2329A\u232a, \u2329H-1(A \u25cb E(c))\u232a, and \u2329(A \u25cb E(c))'H-1(A \u25cb E(c))\u232a. From Q(X), expectations of the terms required in the other steps are calculated. Q(\u0398) is factorized into \u220fiQ(Ai|hi)Q(hi)Q(ri)\u220fjQ(zij), where Ai is the ith row vector of A; from the design of the proposed model, Q(Ai|hi), Q(hi), Q(ri), and Q(zij) take standard parametric forms, and their parameters are calculated using the expectations obtained in the variational E-step. For the calculation of Q(E), we assume that Q(E) is factorized over its elements, and each factor is updated while fixing the factors for k \u2260 j; a few iterations are enough for the convergence. The proposed model contains u0, k0, v0, l0, \u03b6i0, \u03b6i1, \u03b10, and \u03b11 as hyperparameters. \u03b10 and \u03b11 should satisfy \u03b10 <\u03b11 as in the model setting, but this condition can be violated in the update step of the variational method. Thus, we select \u03b10 and \u03b11 by cross validation, and update the other hyperparameters as in the variational method. We first consider the update of the hyperparameters u0, k0, v0, l0, \u03b6i0, and \u03b6i1 so as to increase the lower bound of the marginal probability: u0 and k0 are updated by maximizing the corresponding term of the lower bound, v0 and l0 are updated in a similar manner to u0 and k0, and \u03b6i0 and \u03b6i1 are updated by solving the corresponding stationary conditions. For the selection of \u03b10 and \u03b11, we set \u03b11 to some large value and select \u03b10 by a leave one out cross validation procedure: for condition c \u2208 {1, 2} and each time point, we leave the corresponding observation out of y, use the remaining data to train the model, calculate the squared sum of residuals, and select the \u03b10 that minimizes it. The procedures for estimating parameters in the proposed model are summarized as follows: 1. Set \u03c4 to some large value; also set \u03b10 to a small value and \u03b11 to a large value satisfying that \u03b10 <\u03b11. 2. Initialize other hyperparameters and hidden variables. 3. Perform the following procedures: (a) calculate the variational M-step; (b) update the hyperparameters; (c) calculate the variational E-step; (d) calculate the variational A-step; (e) go back to step (a) until some convergence criterion is satisfied. 4. Divide \u03c4 by some value > 1 such as 1.05. 5. Go back to step 3 if \u03c4 is larger than some very small value > 0. In our setting, \u03b11 is set to 1,000, and \u03c4 and the other hyperparameters are initialized to fixed default values. For the evaluation of the proposed approach, we generate two linear regulatory network models G1 and G2 with similar topological structures, based on a linear regulatory network model G0. G0 is prepared in the following manner: (i) a scale free network of 100 nodes and 150 edges is generated; (ii) edge directions are assigned randomly; (iii) autoloop edges are added to root nodes of the directed network; and (iv) AR coefficients for the directed edges are chosen randomly from {-0.9, -0.8, -0.7, -0.6, -0.5, 0.5, 0.6, 0.7, 0.8, 0.9}. We then generate G1 and G2 from G0 as follows: (i) autoloop edges and 70% of non-autoloop edges in G0 are used for commonly existing edges in G1 and G2; and (ii) the other 30% of non-autoloop edges are randomly assigned as either G1 or G2 specific edges. Note that AR coefficients on edges of G1 and G2 are preserved, i.e., if a regulation from gene j to gene i exists in G1 or G2, then its coefficient is the same as that of the regulation from gene j to gene i in G0. In the figure of the generated networks, commonly existing regulations, G1 specific regulations, and G2 specific regulations are represented with black, red, and green arrows, respectively. From each of G1 and G2, we obtain two equally spaced time series data sets of 25 time points and 50 time points.
For the system noise, normally distributed values with mean 0 and standard deviation 1 are used; for the observation noise, normally distributed values with mean 0 and standard deviation 0.1 or 1 are used. The signal-to-noise ratio for system noise with standard deviation 1 and observation noise with standard deviation 0.1 is 0.03 dB, i.e., the signal is a bit stronger than the noise. On the other hand, for observation noise with standard deviation 1, the signal-to-noise ratio is -0.26 dB; in this condition the noise is stronger than the signal, and the noise level is quite high. A regulation from gene j to gene i on condition c is considered to exist if the corresponding estimated hidden indicator supports its presence. We first compare the performance of the proposed approach with that of an approach that is based on the proposed model but uses the EM algorithm instead of variational annealing, using the equally spaced time series data of 50 and 25 time points with system noise of standard deviation 1 and observation noise of standard deviation 0.1. From this comparison, we verify the effectiveness of variational annealing relative to the EM algorithm. The results are summarized in the table. The results of the proposed approach contain more true positives than those of the EM algorithm based approach, except for identifying changes in regulations at 50 time points. For identifying changes in regulations, the EM algorithm based approach estimates slightly more true positives than the proposed approach, but the difference is small enough to be ignored. On the other hand, the results of the EM algorithm based approach contain more false positives than those of the proposed approach, and hence the precision of the EM algorithm results is worse than that of the proposed approach.
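For reference, the decibel values quoted above follow the standard definition of a signal-to-noise ratio in dB. The helper below is a generic sketch: the paper does not spell out how the signal power was measured, so both powers are taken as inputs.

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
    0 dB means equal power; positive values mean the signal dominates."""
    return 10.0 * math.log10(signal_power / noise_power)
```

Under this convention, an SNR of 0.03 dB corresponds to nearly equal signal and noise power with the signal marginally stronger, and -0.26 dB to noise slightly stronger than the signal, consistent with the qualitative description in the text.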
Therefore, the effectiveness of the variational annealing is confirmed in the computational experiment as well. We also employ the elastic net based VAR model approach (ENet) and G1DBN for comparison. ENet1 and G1DBN1 estimate G1 and G2 independently, i.e., for the estimation of G1, only the time series data from G1 are used, while ENet2 and G1DBN2 assume that G1 and G2 have the same network structure and estimate one network from the two time series data together. Thus, ENet2 and G1DBN2 use more data samples than ENet1 and G1DBN1 for network estimation, but changes in regulations between G1 and G2 are not considered. For the selection of hyperparameters in ENet1 and ENet2, AICc is used; for α1 and α2 of G1DBN1 and G1DBN2, the setting α1 = 0.1 and α2 = 0.0059 considered in the original study is used. For the comparison of these approaches, we focus on the following two points: the number of correctly estimated regulations and the number of correctly estimated changes in regulations. The former is usually considered for evaluating the performance of gene network estimation methods. The numbers of true positives and false negatives of the estimated regulations are summarized in the table; the regulations in the true network models G1 and G2 number 305 in total. For the latter point, we consider the estimated regulations existing in only one of the two estimated networks as changed regulations, and check whether they correctly exist only in the corresponding true network. The numbers of true positives and false negatives of the estimated changes in regulations, together with the precisions, are summarized in the table; the changed regulations between G1 and G2 number 47, i.e., the number of true positives in this case is at most 47. The results are averaged over ten data sets.
One may find it strange that false positives appear for ENet1 and G1DBN1 in the table of changed regulations: if a regulation exists in both G1 and G2 but is estimated only for G1, it is counted as a changed regulation, although it is not counted as a false positive in the table of estimated regulations. In order to show the performance on unequally spaced time series data, we generate unequally spaced time series data with 25 and 50 observed time points. For the data with 25 observed time points, we first generate an equally spaced time series of 40 time points and divide it into three blocks of 15, 10, and 15 time points. We then remove time points in the following manner: no time point is removed in the first block; one of every two time points is removed in the second block; and two of every three time points are removed in the third block. The figure illustrates the resulting sampling. We also consider time series data with a high noise level: system noise with standard deviation 1 and observation noise with standard deviation 1. The results for this case are summarized in the tables. We apply the proposed approach to two time series microarray gene expression data sets, from normal human small airway epithelial cells (SAECs) and from SAECs treated by stimulating EGF receptors and dosing an anticancer drug termed Gefitinib. EGF receptors are often overexpressed in lung cancer cells such as tumoral SAECs, and a lung cancer condition is simulated in the treated SAECs by stimulating EGF receptors. Since Gefitinib is known as a selective inhibitor of EGF receptors, the stimulation of EGF receptors should be counteracted by Gefitinib, and the condition of the treated SAECs should in theory be the same as that of normal SAECs. However, since some gene expression patterns differ between the two conditions in practice, some unknown effects of Gefitinib may be involved in this phenomenon.
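The thinning scheme for the 25-point unequally spaced series (40 points split into 15/10/15 blocks) can be sketched as follows. The removal ratios are from the text; the exact phase of the removals within each block is an assumption, since only the ratios are stated.

```python
def unequally_spaced_indices(n_points=40, blocks=(15, 10, 15)):
    """Thin an equally spaced grid as described in the text: keep every
    point in block 1, one of every two in block 2 (one of two removed),
    and one of every three in block 3 (two of three removed)."""
    assert sum(blocks) == n_points
    b1, b2, b3 = blocks
    kept = list(range(b1))                                   # block 1: keep all 15
    kept += [b1 + i for i in range(b2) if i % 2 == 0]        # block 2: keep 5 of 10
    kept += [b1 + b2 + i for i in range(b3) if i % 3 == 0]   # block 3: keep 5 of 15
    return kept
```

With the default blocks this keeps 15 + 5 + 5 = 25 of the 40 time points, matching the 25 observed time points described above; the spacing grows from one grid step in the first block to three in the last.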
Thus, we focus on changed regulations between the gene networks estimated from the gene expression data in these two conditions, in order to gain insights into the unknown effects of Gefitinib. For gene set selection, we first screen 500 genes from the ranking of the gene list sorted by coefficient of variation. The networks estimated from the time series gene expression data in normal SAECs and treated SAECs are summarized in the figure. We also focus on several other genes related to changes in regulations between normal and treated SAECs. LIF, leukemia inhibitory factor, is known to affect cell growth and development; it has been reported that LIF prolongs the cell cycle of stem cells in acute myelogenous leukemia lines. Gefitinib is also known to be effective for acute myelogenous leukemia via Syk, which is an off-target gene of Gefitinib. In addition, Heikema et al. reported that the human Siglecs-14, -15, and -16 interact with transmembrane adaptor proteins containing the immunoreceptor tyrosine-based activation motif, such as DAP12. Although the stimulation of EGF receptors in the treated SAECs is considered to be counteracted by Gefitinib, the expression of some genes may still be affected by the stimulation under practical conditions. HAS3 is related to the synthesis of the unbranched glycosaminoglycan hyaluronic acid and is reported to be up-regulated by EGF. We proposed a new computational model that is based on VAR-SSM, estimates gene networks from time series data under normal and treated conditions, and identifies regulations changed by the treatment. Unlike many existing gene network estimation approaches that assume equally spaced time points, our approach can handle unequally spaced time series data. The efficient use of time series data is achieved by representing the presence of regulations under each condition with hidden binary variables.
Since finding the optimal configuration of the hidden binary variables in the proposed model is computationally intractable, we derive an extended variational annealing method as an alternative for addressing this problem. In the Monte Carlo experiments, we use equally and unequally spaced time series data from two synthetically generated regulatory networks whose structures differ in several regulations, and verify the effectiveness of the proposed model, compared with existing methods, in estimating both the regulations and the changes in regulations between the two conditions. As a real data application, we use the proposed approach to analyze two time series data sets from normal SAECs and from SAECs treated by stimulating EGF receptors and dosing Gefitinib. Among the genes related to regulations changed by the treatment, we find possible off-target genes of Gefitinib, and one of these genes is suggested to be related to a factor in interstitial pneumonia, which is known as a side effect of Gefitinib. In this study, we consider changes in regulations between two conditions, but the proposed approach can be extended to identify changes among more than two conditions. The authors declare that they have no competing interests. KK, SI, RY, and SM designed the approach to identify the changes in regulations caused by the cell treatment of SAECs. KK, SI, and AF contributed to the statistical modeling for the approach and devised the details of the methodologies for estimating the proposed model. MY and NG carried out the microarray experiments measuring the time series gene expression data in normal and treated SAECs. A proof of Proposition 1 and more details on the procedures of variational annealing for the proposed model are described in the additional file. Click here for file"}
+{"text": "Computational gene regulation models provide a means for scientists to draw biological inferences from time-course gene expression data. Based on the state-space approach, we developed a new modeling tool for inferring gene regulatory networks, called time-delayed Gene Regulatory Networks (tdGRNs). tdGRN takes time-delayed regulatory relationships into consideration when developing the model. In addition, a priori biological knowledge from genome-wide location analysis is incorporated into the structure of the gene regulatory network. tdGRN is evaluated on both an artificial data set and a published gene expression data set. It not only determines regulatory relationships that are known to exist but also uncovers potential new ones. The results indicate that the proposed tool is effective in inferring gene regulatory relationships with time delay. tdGRN is complementary to existing methods for inferring gene regulatory networks; the novel part of the proposed tool is that it is able to infer time-delayed regulatory relationships. Microarray technology allows researchers to study expression profiles of thousands of genes simultaneously. One of the ultimate goals of measuring expression data is to reverse engineer the internal structure and function of the transcriptional regulation network that governs, for example, the development of an organism or the response of the organism to changes in the external environment. Some of these investigations also entail measurement of gene expression over a time course after perturbing the organism. This is usually achieved by measuring changes in gene expression levels over time in response to an initial stimulation such as environmental pressure or drug addition. The data collected from time-course experiments are subjected to cluster analysis to identify patterns of expression triggered by the perturbation.
A gene network derived by the above clustering methods is often represented as a wiring diagram. Cluster analysis groups genes with similar time-based expression patterns and infers shared regulatory control of the genes. The clustering result allows one to find the part-to-part correspondences between genes. The extents of gene-gene interactions are captured by heuristic distances generated by the analysis. The network diagram produced provides insights into the underlying molecular interaction network structure. Conventional clustering methods, however, have two major limitations, which several groups have sought to overcome; Wu et al., among others, proposed a state-space model for this purpose. The above state-space models do not account for time-delayed regulatory relationships. Recently, state-space models with time delays have been proposed to account for the effects of missing data and complex time delay relationships; in earlier work we developed such a state-space model with time delay to model yeast cell-cycle data. These existing state-space modeling techniques do not incorporate the structure of gene regulatory networks derived from biological knowledge. Alternatively, Li et al. have published an approach that does incorporate such knowledge. To complement these existing methods, we have developed a new modeling tool called tdGRN for inferring time-delayed gene regulatory networks. tdGRN generates a state space-based model into which time delays and ChIP-on-chip data are incorporated to infer a biologically more meaningful network. A more extensive treatment of tdGRN and the use of state-space modelling with time-series microarray data can be found in the thesis of Koh. The tdGRN approach consists of three parts. First, we implement a state-space model which incorporates multiple time delays. Secondly, we incorporate ChIP-on-chip data for determining network connectivity for both nonreplicated and replicated data. This involves replacing Rangel's bootstrap confidence intervals (derived from highly replicated data) for identifying gene-gene interactions with a substitute.
Finally, the networks generated from the new model are visualized using techniques from the literature. We consider the expression profile of a regulator as an input function to the system. Over the time period of the experiment, the observations are collected into a p-dimensional vector, following the state-space formulation of Rangel et al.; the model order and the input delays are determined using Akaike's Information Criterion. The model was implemented as a MATLAB program. tdGRN uses various functions from MATLAB's Control System and System Identification toolboxes. The n4sid and aic functions are used for system identification, system stability, and delay analysis. The n4sid function implements the Numerical Algorithms for State Space Subspace System Identification (N4SID) proposed by Van Overschee and De Moor; the estimated models are checked for stability, and only stable models are retained. In a simple one-to-one regulatory relation, the regulation of a gene is highly related to its transcription factor (TF). In other words, residual regulation by other factors can be treated as hidden variables, that is, missing data. Therefore, a single-input and single-output (SISO) model (TF versus gene or TF versus TF) can be used to describe the input and output signals. The SISO model can be applied to identify network motifs such as feed-forward loops, multi-component loops, and single-input motifs as described by Lee et al. Another example of a network motif is the regulation of CLB2, a G2/M-cyclin gene, and the transcription factor Swi4 by Mcm1, which Lee et al. illustrate as an example. The time delays are expressed in minutes for Spellman's data. A SISO model may not work well when multiple regulators show significant regulation of a target gene. The presence of two or more regulators increases the model complexity. In addition, some studies have shown that different gene pairs have different time delays for gene regulation.
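A minimal simulation of a delayed SISO linear state-space relation, of the kind discussed above, might look like the sketch below. The matrices A, B, and C are scalars purely for illustration, and inputs before the start of the series are assumed to be zero; none of this reflects the actual n4sid parameterization.

```python
def simulate_siso_delay(u, A=0.5, B=1.0, C=1.0, delay=2, x0=0.0):
    """Simulate x[t+1] = A*x[t] + B*u[t-delay], y[t] = C*x[t],
    where u is the regulator (input) profile and y the target gene
    (output) profile; u[t] is taken as 0 for t < 0."""
    x, y = x0, []
    for t in range(len(u)):
        y.append(C * x)
        u_del = u[t - delay] if t - delay >= 0 else 0.0
        x = A * x + B * u_del
    return y
```

Driving the system with an impulse shows how the output response is shifted by the input delay plus one state-update step, which is the behavior the delay analysis has to recover from data.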
Therefore, a multiple-input and single-output (MISO) model is used instead; the figure illustrates this formulation. Recall that the q × p matrix in the model captures the regulatory relationship between the inputs and the outputs. The maximum number of input channels allowed in the model depends on the complexity of the motif structure and the time delay of each input channel. A greater number of available time points is required to model a more complicated network structure. Also, given a grossly limited number of time points, each additional unit of time delay reduces the number of available points to train a model and therefore reduces the reliability of the model. Consider an extreme case where a factor F regulates a gene G with 9 units of time delay. If there are only 10 time points, the regulatory relationship cannot be modeled, since the data will show little or no evidence of regulation. In the case of Spellman's yeast microarray data (18 time points), tdGRN can compute a stable system for a maximum of four inputs and four input delays. In general, the maximum number of input channels is determined by trial and error and varies depending on the complexity of the network. Rangel et al. constructed network connectivity from bootstrap confidence intervals over highly replicated data; here we model the regulation of n TGs with time delay. In this paper, we present a three-step solution (tdGRN) such that network connectivity is based on, but not limited by, genome-wide location analysis results. First, the data is partitioned into two groups: transcription factors (TFs) and target genes (TGs). Each TF is a possible regulator of another TF and/or TG. Secondly, using the n4sid function, tdGRN creates an initial set of network connections based on the location analysis results. All the TF versus TF and TF versus TG regulatory relations derived at this stage are screened for potential corresponding state-space models. Only the potential regulatory relations which satisfy the goodness-of-fit criteria are recorded and subjected to the next round of analysis.
For each TF, tdGRN records the optimized parameters: the initial state, the number of time delays, and the number of states (variables), which reflects the complexity of the regulations. In the third step, tdGRN performs an additional round of network connection screening based on the regulation parameters generated in the second step. For example, if a transcription factor F regulates another regulator, the latter's connections are screened again using the recorded parameters. In addition, tdGRN generates a network output file that can be directly imported into Cytoscape for network visualization. Two data sets are used in this study. First, an artificial data set is created to validate the model; several methods have been proposed in the literature to create appropriate artificial gene expression data. The artificial data consists of data streams of 2 regulators, R1 and R2, and 9 target genes, G1 to G9. The second data set used in this study consists of 800 expression profiles of alpha factor-based yeast cell-cycle genes studied by Spellman et al. In this study, it is assumed that (1) the experimental time points capture biologically significant changes, but (2) there exist effects of hidden variables in the biological system that cannot be measured in a gene expression profiling experiment, for example, missing data for mRNA degradation. In the following, we first describe the output of modeling the artificial data and the lessons learned in the modeling process. Then we present the results of modeling the yeast cell-cycle expression data; the global regulatory network diagram is presented, as well as a detailed analysis of G1- and B-type cyclins. Finally, we illustrate the capability of tdGRN in selecting the most feasible regulatory mechanism from multiple models. To demonstrate the difference between the SISO and MISO models, we first apply only SISO to network prediction on the artificial data.
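The delay screening that tdGRN performs with Akaike's Information Criterion (via MATLAB's aic function) can be approximated in a standalone sketch. The one-parameter least-squares fit and the AIC form n·log(RSS/n) + 2k below are illustrative simplifications of the full state-space fit, not the paper's implementation.

```python
import math

def fit_delay_aic(u, y, max_delay=5):
    """For each candidate delay d, fit y[t] ~ b * u[t-d] by least squares
    and score the fit with AIC = n*log(RSS/n) + 2*k (k = 1 parameter);
    return (best_delay, fitted_coefficient)."""
    best = None
    for d in range(max_delay + 1):
        pairs = [(u[t - d], y[t]) for t in range(d, len(y))]
        sxx = sum(ux * ux for ux, _ in pairs)
        sxy = sum(ux * yt for ux, yt in pairs)
        b = sxy / sxx
        rss = sum((yt - b * ux) ** 2 for ux, yt in pairs)
        n = len(pairs)
        aic = n * math.log(max(rss / n, 1e-12)) + 2 * 1  # floor avoids log(0)
        if best is None or aic < best[0]:
            best = (aic, d, b)
    return best[1], best[2]
```

On a synthetic series where the target is an exact lag-2 copy of the regulator scaled by 0.8, the screen recovers both the delay and the coefficient, which is the behavior the second-stage screening relies on.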
The two regulators, R1 and R2, are expected to connect to the target genes, G1 to G9, as described in the table. The results show that the SISO model predicts the one-to-one regulations 100% correctly but not the many-to-one regulations; for many-to-one regulations, the SISO model detects 5 out of 6. The transcription factor binding data were from the study of Young's lab, and a relaxed selection threshold was applied. We applied tdGRN to the 301 cell-cycle regulated genes identified above. It predicted the regulation models of 93 genes, or approximately 31% of the total input genes. The results are tabulated and shown in Supplementary Table 2. On a Pentium III 800 MHz computer, the total run time for tdGRN to analyze the 301 genes was approximately 90 minutes. Almost half of the 93 genes are regulated in the G1 phase and about 25% are regulated in the G2/M phase. Compared to the 301 input genes, this represents a minor increase in the percentage of genes regulated in the G1 phase (36% to 44%) and a slight decrease for the M/G1 phase (17% to 12%). The differential success rates in modeling G1- and M/G1-regulated genes may be due to the differences in the number of TFs from each phase: no M/G1-specific transcription factor was used in this study, whereas there were three G1-activated TFs. Among the nine transcription factors, Swi4, Swi6, and Mbp1 are known to play important roles in G1 and late G1 phase gene regulation. We examined more closely the regulation models of 3 G1-cyclins and 5 B-type G2/M-cyclins; these two sets comprise all the CLN and CLB cyclins in the data set (CLB3 was not present). The CLN and CLB cyclins were selected due to their important roles in cell-cycle regulation and their relatively well-studied regulatory mechanisms. The figure shows the resulting network. The tdGRN technique uncovered a network of 15 nodes with 30 edges; 21 of the 30 edges have known regulatory relationships. The average model fitness is 67%.
A tabulated output is provided in the table. The tdGRN technique uncovered a regulatory relationship between Swi6 and CLN2 (with order = 2 and delay = 2) that is not reported in the location analysis results (see Supplementary Table 1). As mentioned in the previous section, Swi4 and Swi6 encode a heterodimer complex, SBF. It has been shown that SBF induces CLN2 transcription in the late G1 phase. Using the SISO model, we demonstrated that Swi4 and Swi6 regulate CLN2 with input delays of 0 and 2, respectively; the fitness of the corresponding models is 65% and 61%, respectively. We applied tdGRN-MISO to these data in an attempt to improve the model of CLN2 gene expression, and tdGRN-MISO produces 4 possible models. The SBF and MBF complexes promote the G1 to S phase transition, Mcm1 regulates late G2 and some M/G1 genes, and Ndd1 functions at the G2/M phase. A feed-forward loop (FFL) is a recurring network motif in S. cerevisiae, and Mangan and Alon suggest that FFLs can act as sign-sensitive delay elements. We applied tdGRN-MISO to the four FFL motifs identified by the SISO model for CLB2 regulation. We hypothesize that if an FFL is present, one would expect the master and secondary regulators to work in a collaborative manner. That is, the unexplained variation seen in the principal TF's regulation can be elucidated by the feed-forward regulation of the secondary TF, and vice versa. On the other hand, if the FFL is inactive, or if only one of the two regulators works, then the model will not be improved by tdGRN-MISO and the percent fitness of the model will remain roughly the same or become worse. The output of tdGRN-MISO is tabulated in the table. Transcription is a very complex process that entails assembly of multiprotein complexes and enzymatic reactions, and the ultimate transcript output also depends on temporal factors that are not amenable to accurate analysis. In gene-gene interactions and in multigenic interactions (networks), the temporal aspects have appreciable biological consequences, but these causal factors are not readily deciphered.
Delineating all these in terms of reverse engineering a genetic system requires collections of large and replicated data sets commensurate with the complexity of the system and its components, and also requires the computational power to analyze the data. Both can present difficulties, considering the inherent complexity. Against this backdrop is the adaptation of models that have originally been used in reverse engineering physical systems. The state-space model is one such method. It has the advantage of taking the dynamic changes of gene expression into consideration, unlike static approaches such as hierarchical clustering. Transcriptomics studies have generated the most extensive datasets in genomics. Microarray analysis is being used increasingly to determine the expression patterns of tens of thousands of genes simultaneously. When the expression pattern of the same genes under two or more intracellular conditions is determined, there is a potential opportunity to discern gene-gene connectivity with respect to the changing internal environment. However, microarray data only measure the steady-state levels of the RNA product, and all other factors, such as the levels of DNA-binding factors, are hidden variables. A pertinent question here is: \"what is the impact of the delay in making the product of Gene 1 on Gene 2, if Gene 1 is impacting the transcription of Gene 2?\" In this regard, the model developed in this study is useful. Microarray measurements are ratios (sample expression/reference expression); that is, one measures the changes in expression with respect to a common reference instead of absolute expression. Yeast cell-cycle regulated genes demonstrate a periodic pattern. Among the five state-space or Bayesian network solutions referred to in this work are the models published by Sung et al. and Li et al. On the other hand, the biggest challenge in quantitative modeling is the inherent noise in the expression data.
Especially when a gene is expressed at a low level, a low signal-to-noise ratio causes an inaccurate measurement of fold-change. This will in turn affect the ability of quantitative models to learn the network structure and to achieve good model fitness. In this study, the average model fitness for the yeast expression data is 67%. A ChIP-on-chip experiment, in the context of our work, answers the question: what are the potential targets of a given TF? The evidence of in vivo protein-DNA interactions can help biologists to uncover regulatory network structure. Analysis of the time taken by a gene to reach its full expression level (peak time) provides insights into when a gene is maximally expressed during the cell cycle. The understanding of gene expression timelines is useful for associating a time factor with the physiological changes in cells. However, the duration for a gene to reach its peak expression in a cell cycle is not enough on its own to constitute the full picture of gene regulation. For example, the transcription factor complex SBF (Swi4 + Swi6) regulates CLN1 and CLN2 transcription in the late G1 phase and drives the transition into S phase. The peak times for Swi4 and Swi6 are 13% and 37%, respectively. The peak times for the SBF-regulated genes CLN1 and CLN2 are 25% and 23%, respectively. One component of the SBF regulator, Swi6, reaches its peak time later than both CLN1 and CLN2. This shows that peak time analysis does not convey information on how genes are regulated. One may hypothesize that Swi4 is the rate-determining factor in the regulation of the cyclins and that the G1 cyclins quickly reach their peak expression at about 25% after Swi4 reaches its peak at 13%. Our modeling results support this assumption (refer to Section 3). The Swi4 and Swi6 transcription factors regulate CLN2 transcription in a combinatorial manner.
The percent fitness of the Swi4+Swi6→CLN2 model is better than that of the two separate single-input and single-output models. Interestingly, our modeling results also suggest that CLN2 is regulated by both Swi4 and Swi6, while CLN1 is regulated only by Swi4. This could be the result of a relatively weaker role of Swi6 in cyclin regulation, as Partridge et al. have shown that the MCB core elements of both CLN1 and CLN2 depend primarily on SWI4. tdGRN uses location analysis results to help identify the TF and target gene pairs. This significantly reduces the risk of overfitting by filtering out unrelated inputs. In addition, Akaike's Information Criterion is applied to penalize overly complex models. We have developed a new modeling tool, tdGRN, for determining prospective gene regulation models from time-series gene-expression data. The tool has been demonstrated on artificial data and yeast cell-cycle gene-expression data. Using the yeast microarray data, we have illustrated that our model can help identify regulatory relations with multiple time delays. The model complements ChIP-on-chip results by predicting the most probable gene regulatory relationships between transcription factors and their target genes. The tool also identifies previously unknown regulatory relationships. For example, in the regulation of G1- and B-type cyclins, tdGRN uncovers 30 regulatory relationships in a network with 15 nodes, 9 of which are novel findings; the existing literature contains support for these novel findings. The tdGRN tool uses genome-wide location analysis data to reveal the primary network structure. Additional regulatory relationships can be determined by the goodness-of-fit of alternate models. It should be interesting to compare this method to the learning-by-modification method developed by Sung et al."}
+{"text": "Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points, and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic patterning allows for extensive comparison and identification of high impact knockdown targets. We present a new visualization approach for perturbation screens with multiple phenotypic outcomes. The novel functionality implemented in Arena3D enables effective understanding and comparison of temporal patterns within morphological layers, to help with the system-wide analysis of dynamic processes. Arena3D is available free of charge for academics as a downloadable standalone application from: http://arena3d.org/. Mapping the phenome in the context of dynamic genetic factors is becoming one of the main interests of biology nowadays. There is an increasing amount of data originating from time-resolved imaging experiments on RNA interference screens, synthetic lethality or other systemic perturbations. The phenotypic landscape reflects the robustness of the underlying genetic networks, and understanding it should help in elucidating the rewiring of genetic circuits. The dynamic factor in biological systems adds another dimension of complexity and plays a major role in understanding the process; the common approach of excessively simplifying the dynamic factor therefore results in a potentially critical loss of understanding. Visualization tools can greatly enhance the ability to perceive this type of complex data. Arena3D has been previously developed as a visualization and analysis platform for the display and understanding of connections between different data types of biological information. While different tools for visualizing time course data, gene expression and network clusters already exist, e.g. VistaClara and GENeVis, no existing tool brings the network, phenotypic and temporal information together. Java http://www.java.com/ and Java3D libraries http://java3d.java.net/ are required for running Arena3D on any operating system, and Macintosh users should also install the JOGL libraries http://opengl.j3d.org/.
Simple API implementation for plug-in development is planned for the future. The source code is available for download for users that wish to customize their analysis. Arena3D was implemented using Java (JDK 1.6) and Java3D (1.6.1 API); the JFreeChart library is used for chart display. The nodes are colored according to the associated values of the respective biological entities on a yellow-blue color scale, with grey representing absolute zero. The conversion of the values to the scale is calculated such that the colors map from yellow to blue over the interval [minValue, maxValue], where minValue is the absolute minimal value that any node may have throughout the time course for the respective layer and maxValue is the absolute maximal one. The gradient colors can be customized by the user, and the option of using other colorblind-safe gradients is also offered. The scale is mapped separately for each layer, as there may be cases where the parameters measured for different layers of information are not comparable in magnitude or units of measurement. Caution should therefore be taken when interpreting results from comparisons among different layers based on color alone. To compute and graphically display correlations between the time-resolved vectors associated with each node, the Pearson correlation is used. Only correlations with a certain p-value are displayed; by default, correlations with a p-value of 0.05 will be shown. The significance of the correlation is assessed according to the Pearson product-moment correlation coefficient (PMCC) table of critical values, which describes the minimal Pearson correlation coefficient values for a certain level of significance depending on the number of degrees of freedom. Importantly, for this correlation measure the data is assumed to be normally distributed. As a non-parametric alternative to the Pearson correlation calculation, the Spearman rank correlation is also available for the user (results not shown).
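The per-layer linear mapping of node values onto the yellow-blue gradient described above can be sketched as follows. The specific RGB endpoints and the grey used here are assumptions for illustration; Arena3D's actual gradient values are not given in the text.

```python
def value_to_color(v, min_value, max_value,
                   low=(255, 255, 0), high=(0, 0, 255), zero=(128, 128, 128)):
    """Map v in [min_value, max_value] linearly onto a yellow-to-blue
    RGB gradient, computed per layer; absolute zero is shown in grey,
    as described in the text."""
    if v == 0:
        return zero
    t = (v - min_value) / (max_value - min_value)   # 0 at min, 1 at max
    t = min(max(t, 0.0), 1.0)
    return tuple(round(l + t * (h - l)) for l, h in zip(low, high))
```

Because min_value and max_value are taken per layer, the same color can denote very different magnitudes on different layers, which is why the text cautions against cross-layer comparisons based on color alone.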
This is a better measure for cases when the data is not normally distributed. The significance of the Spearman correlation r is assessed using the statistic t = r*sqrt((n-2)/(1-r^2)), which follows a Student's t distribution with n-2 degrees of freedom under the null hypothesis, where n is the number of time points in the series. Among the genes highlighted by connecting them throughout the layers, and situated at the periphery of the ESC network, is rnf2 (ring finger protein belonging to the Polycomb group [Ensembl:ENSG00000121481]). Gene incenp is scored as highly influential for the polylobed phenotype according to scoring scheme (a) and not as much for the same phenotype according to scoring scheme (b). On the other hand, the suppression of gene ranbp3 (a RAN binding protein [Ensembl:ENSG00000031823]) receives a high score for the polylobed morphology under the latter scoring scheme and a lower score under the former. The timeline of variation for the two genes is obtained by clicking on the respective nodes, which reveals the line chart for those genes across all phenotypic layers. Here one can see that in fact both genes have a high signal for the polylobed phenotype. Since gene ranbp3 has a lower average than incenp, it did not score high by the averaging scheme, but its signal is captured by the second scheme, which manages to balance out some of the noise. This shows that similarity scoring performs well in identifying global patterns in the data, especially in the context of a high number of samples, and the two scoring schemes are best used complementarily. As highlighted in Figure , genetic pleiotropy and locus heterogeneity are two phenomena that contribute to making the landscape of genotype-phenotype relations progressively intricate. We have shown how this tool can be used in phenotypic profile classification, as well as in multigene trait prediction from the genotype. 
The functionality of Arena3D can provide the basis for identifying both rare and prevalent phenotypes and their underlying signalling networks, components of which may be used as markers for diseases. One of the main assets of this tool is the interactive analysis of temporal data: it enables the discovery of global patterns, but also of time patterns for individual genes of interest, given small to medium datasets with a few or many time points. The advantage is that one can also focus on a particular time point that stands out as exhibiting interesting behavior of genes/proteins and look deeper into the reasons for this. This approach thus allows for a better understanding of the role time plays within the biological process. It is becoming increasingly important that the analysis of networks and pathways switch from a global to a time-resolved, tissue-specific view, as there are essential differences encountered at this level. Project name: Arena3D. Project home page: http://arena3d.org/. Operating system(s): platform independent. Programming language: Java, Java3D. Other requirements: Java 1.6 (or higher). License: Arena3D is available free of charge for academic use. Any restrictions to use by non-academics: commercial users should contact the authors. Competing interests: the authors declare that they have no competing interests. MS implemented the time course data analysis functionality for Arena3D and applied it to biological data. GAP was the first developer of the software and contributed to the design and the implementation of the software. JA helped in analyzing and addressing specific visual concepts implemented by MS and GAP. RS conceived the concept and the main idea of Arena3D. He designed, coordinated and supervised the project. All authors drafted, read and approved the final manuscript. Files for testing in Arena3D. 
Input files in the Arena3D format, for users wishing to test the examples discussed in the paper directly with the software. The archive contains 3 files: ESC_core_network_timeseries_forARENA3D.txt - data for case study 1; mitotic_genes_all_timeseries_forARENA3D.txt - data for case study 2, all genes, 50 time points; mitotic_genes_subset_timeseries_forARENA3D.txt - data for case study 3, defined subset of genes, 90 time points. Click here for file. The ESC core network. List of the genes involved in the ESC core network along with their description from Ensembl release 63, as described previously. Click here for file. Time series values for genes in the ESC core network. List of the genes involved in the ESC core network along with their associated time series values for the four levels: histone acetylation, chromatin binding, mRNA and protein levels for the 3 days of the experiment. Click here for file. List of potentially interesting mitotic genes. Subset of genes involved in cell division, chosen according to the targets discussed in previous work, along with their descriptions. Click here for file. Time series values for potentially interesting mitotic genes. Subset of genes involved in cell division, chosen according to the targets discussed in previous work, along with their associated time series values. Click here for file"}
+{"text": "The molecular mechanisms governing vertebrate appendage regeneration remain poorly understood. Uncovering these mechanisms may lead to novel therapies aimed at alleviating human disfigurement and visible loss of function following injury. Here, we explore tadpole tail regeneration in Xenopus tropicalis, a diploid frog with a sequenced genome. We found that, like the traditionally used Xenopus laevis, the Xenopus tropicalis tadpole has the capacity to regenerate its tail following amputation, including its spinal cord, muscle, and major blood vessels. We examined gene expression using the Xenopus tropicalis Affymetrix genome array during three phases of regeneration, uncovering more than 1,000 genes that are significantly modulated during tail regeneration. Target validation, using RT-qPCR followed by gene ontology (GO) analysis, revealed a dynamic regulation of genes involved in the inflammatory response, intracellular metabolism, and energy regulation. Meta-analyses of the array data and validation by RT-qPCR and in situ hybridization uncovered a subset of genes upregulated during the early and intermediate phases of regeneration that are involved in the generation of NADP/H, suggesting that these pathways may be important for proper tail regeneration. The Xenopus tropicalis tadpole is thus a powerful model to elucidate the genetic mechanisms of vertebrate appendage regeneration. We have produced a novel and substantial microarray data set examining gene expression during vertebrate appendage regeneration. Humans have a limited capacity to regenerate, and thus, severe injuries result in unsightly scarring, loss of function and disfigurement (reviewed in ). Some vertebrates, however, regenerate lost appendages, and the Xenopus tadpole tail regeneration model has emerged as a powerful system for the study of vertebrate appendage regeneration. Given this, we sought to examine gene expression changes in the regenerating Xenopus tropicalis tadpole tail in a genome-wide fashion. 
In particular, we sought to create a gene expression data set to serve as a resource in identifying the genes and processes involved in tail regeneration of this species. Although a similar study has been done previously in X. laevis, we chose to pursue this study in X. tropicalis, since this system contains more advanced genomic resources. A list of biological process and molecular function GO terms (and their respective RefSeq protein IDs) was downloaded from DAVID, and intracellular metabolic processes possessing 10 or more genes in the array data set were examined. Expression profiling was performed by analyzing an array data set that included the most significant target for each RefSeq protein ID based on q-value. Primers and probes are shown in Table . Tail tissues were collected in biological triplicate at 0 h, 6 h, 12 h, 24 h, 36 h, 48 h, and 72 h post amputation and placed into RNAlater (Qiagen). Tissue was homogenized with syringe and QIAshredder (Qiagen), total RNA was extracted using Qiagen's RNA Easy Mini kit, and cDNA was synthesized using Applied Biosystem's High Capacity cDNA kit. Taqman assays were generated to target exon-exon boundaries and qPCR reactions were run with Taqman Fast Gene Expression Master Mix on a StepOne+ qPCR machine (Applied Biosystems). Expression values were calculated using the \u0394\u0394Ct method, with the reference condition set to 1 relative expression unit. ESTs: expressed sequence tags; PCA: principal component analysis; GO: gene ontology; hpa: hours post amputation. NRL performed most experiments, prepared data and the figures, and co-wrote the manuscript. YC assisted with immunohistochemistry, whole-mount in situ hybridization, imaging, nucleic acid purification and RT-qPCR. BB assisted with immunohistochemistry and RL carried out sectioning. RP assisted with imaging. TJM and LF generated the mTie-2::eGFP X. tropicalis line. MG performed the array gene annotations. LAHZ performed the array bioinformatic analysis and clustering. EA guided the project and co-wrote the manuscript. All authors read and approved the final manuscript. Figure S1 - A transgenic Xenopus tropicalis line that expresses eGFP in its vasculature. (A) A series of merged confocal images from a transgenic Xenopus tropicalis tadpole expressing eGFP under the control of the murine Tie-2 promoter; the four red boxes show the heart (B), brain and associated vasculature (C), the vasculature that surrounds the eye (D), and the vasculature in the tail (E). The single confocal slice in (E) depicts the dorsal lateral anastomosing vessel (l), spinal cord (s), dorsal aorta (d), posterior cardinal vein (p), epithelial vasculature (e), and intersomitic vessels (i). Click here for file. Movie S1 - Movie showing circulating, eGFP+ cells in the mTie-2::eGFP X. tropicalis transgenic line. Click here for file. Figure S2 - An improved array gene annotation. (A-B) The graphics show the increase in annotation rate from the company provided annotation (A) and our improved annotation (B). The number of annotated probe sets is represented by the area of the squares. Click here for file. Figure S3 - Sequential gene expression changes of all gene targets in array dataset. The graphic maps the expression profiles of all 16059 RefSeq genes in the array data set. The area of the circles represents the number of genes in each respective expression level change group. Each subsequent node represents the transition of a set of genes from one array time point to the next (T0h - T6h - T24h - T60h). Between nodes, red lines represent a positive fold change of over two-fold between array time points, while blue lines represent a negative fold change of over two-fold between array time points, and black lines represent a fold change between positive 2 and negative 2. In the end, this graphic allows one to track the expression level changes of all gene targets in the array. 
For example, from the T0h to T6h array, 2351 of the 16059 targets had an over two-fold increase in expression (indicated by the red line). Of these 2351 targets, 77 then had another over two-fold increase in the T6h to T24h array (indicated by the red line). Of these 77 targets, only 2 targets had an over two-fold increase in the T24h to T60h array (indicated by the red line). Click here for file. Figure S4 - Comparison of X. tropicalis microarray data to X. laevis macroarray. The graphic plots the expression level changes reported in a previous X. laevis cDNA macroarray (y-axis) versus the X. tropicalis data of this report (x-axis). The graphic was made by plotting the log2 expression level changes of the 47 targets from the X. laevis cDNA macroarray that were also measured in our X. tropicalis array data. There are two comparisons shown on the graph: a comparison between the expression level changes of X. laevis D3/D0 post-amputation and X. tropicalis T60h/T0h (blue circles), and a comparison between the expression level changes of X. laevis D1.5/D0 post-amputation and X. tropicalis T24h/T0h data (red squares). Click here for file. Figure S5 - Log2 expression profiles of intracellular metabolic processes. The graphic shows the average log2 expression profiles of all 155 intracellular metabolic processes present in the array data. By ranking the processes by their T6h vs T0h expression level change, the 1st, 2nd, 3rd, 4th, and 5th quintiles of the 155 intracellular metabolic processes are colored red, orange, green, blue, and purple respectively. Click here for file"}
+{"text": "Microarrays have been useful in understanding various biological processes by allowing the simultaneous study of the expression of thousands of genes. However, the analysis of microarray data is a challenging task. One of the key problems in microarray analysis is the classification of unknown expression profiles. Specifically, the often large number of non-informative genes on the microarray adversely affects the performance and efficiency of classification algorithms. Furthermore, the skewed ratio of samples to variables poses a risk of overfitting. Thus, in this context, feature selection methods become crucial to select relevant genes and, hence, improve classification accuracy. In this study, we investigated feature selection methods based on gene expression profiles and protein interactions. We found that in our setup, the addition of protein interaction information did not contribute to any significant improvement of the classification results. Furthermore, we developed a novel feature selection method that relies exclusively on observed gene expression changes in microarray experiments, which we call \u201crelative Signal-to-Noise ratio\u201d (rSNR). More precisely, the rSNR ranks genes based on their specificity to an experimental condition, by comparing intrinsic variation, i.e. variation in gene expression within an experimental condition, with extrinsic variation, i.e. variation in gene expression across experimental conditions. Genes with low variation within an experimental condition of interest and high variation across experimental conditions are ranked higher, and help in improving classification accuracy. We compared different feature selection methods on two time-series microarray datasets and one static microarray dataset. We found that the rSNR performed generally better than the other methods. 
DNA microarrays can be classified into static experiments, where a snapshot of gene expression in different samples is measured, and time series experiments, where a temporal process is measured over a period. While static experiments may reveal genes that are expressed under specific conditions, time series experiments may help in determining the temporal profiles of the genes expressed under a specific condition, as well as interactions between them. An interesting problem in microarray analysis is the classification of unknown expression profiles with the goal of assigning them to one or many predefined classes. Such classes represent various phenotypes, for example, diseases. Moreover, classifying microarray data by cross-comparing microarray data from different laboratories and phenotypes could be helpful not only to identify unknown samples, but to reveal obscure associations between complex phenotypes, such as shared pathogenic pathways among different diseases. Such approaches have been made more feasible in recent years with the availability of large database repositories of high throughput gene expression data, such as the Gene Expression Omnibus (GEO). Typical classification tasks include distinguishing diseased versus healthy samples, or pluripotent versus non-pluripotent cells. Many studies have shown that integrating microarray data with additional biological information improves classification accuracy. For example, Bar-Joseph et al. discuss how protein-DNA binding data and protein interaction data can be used to constrain the number of hypotheses that can explain a specific expression pattern. Signal-to-noise (SNR) ratios have been extensively used in various fields. 
In image processing, the SNR is defined as the mean of the variable being measured divided by its standard deviation. For two-class gene expression data, the SNR of a gene is commonly defined as SNR = (\u03bc1 - \u03bc2) / (\u03c31 + \u03c32), where \u03bc1 and \u03bc2 are the mean expression values for class 1 and class 2 respectively, while \u03c31 and \u03c32 are the standard deviations for class 1 and class 2 respectively. In this case, the standard deviation represents noise and other interference in comparison to the mean. The reciprocal of the SNR is known as the coefficient of variation (CV), which has been widely applied as a quality control and validation method for the analysis of microarray assays. Here, we show that identifying biologically relevant features substantially improves microarray classification. First, we explore the addition of protein interaction information as a means to select features specific to particular experimental conditions and improve microarray classification. We found that, in the form presented here, the addition of protein interaction information resulted in no improvement in classification. Second, we introduce a novel feature selection method based on the SNR, which we call the \u201crelative signal-to-noise ratio\u201d (rSNR). Given a microarray dataset comprising various experimental conditions, the rSNR is a feature ranking method that ranks genes based on their specificity to a given experimental condition, by comparing variation in gene expression within that particular experimental condition with variation in gene expression across other experimental conditions. Basically, the rSNR can be expressed as a quotient of SNRs or CVs, and in practice gives higher rank to genes with high expression values and low standard deviations in the experimental condition of interest. We tested this and other feature selection methods on two time-series microarray datasets and one static microarray dataset. The rSNR method performed generally better than other feature selection methods, and its application substantially improved classification accuracy. 
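The two-class SNR and the coefficient of variation described above can be written compactly. A minimal Python sketch with made-up expression values (population standard deviations are used here; this is an illustration, not the paper's code):

```python
import statistics

def snr(class1, class2):
    """Two-class signal-to-noise ratio: (mu1 - mu2) / (sigma1 + sigma2)."""
    mu1, mu2 = statistics.mean(class1), statistics.mean(class2)
    sd1, sd2 = statistics.pstdev(class1), statistics.pstdev(class2)
    return (mu1 - mu2) / (sd1 + sd2)

def cv(values):
    """Coefficient of variation, sigma / mu: the reciprocal of the
    single-sample SNR (mean divided by standard deviation)."""
    return statistics.pstdev(values) / statistics.mean(values)
```

A gene whose two class means differ by much more than the combined spread (e.g. `snr([8, 10, 12], [2, 4, 6])`, roughly 1.84) would rank above one whose classes overlap.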
Our results also suggest that the rSNR rank could be used to reduce the number of genes representing a microarray experiment in a database, hence making searches across the entire database more efficient. To classify time course microarray experiments we adopted a nearest neighbor approach based on Pearson correlation. Training and test data consisted of gene expression profiles from two or more different time points. When considering test datasets comprising more than two time points, we first split the data into test subsets consisting of pairs of consecutive time points. We then evaluated the classification on each of these subsets, and decided the final classification of the entire test data by majority voting. With the goal of improving classification performance we examined different data features. First, we evaluated different manners in which gene expression profiles from two different time points can be combined into what we call transition profiles. Then, we incorporated protein interaction information from the STRING database into the definition of such transition profiles. Finally, we compared different feature selection approaches to extract the genes that are the most relevant to specific biological conditions. In the following, we adopt the notation of Hafemeister et al., distinguishing: the expression value of gene g of experiment i at time t; the time series of expression values of gene g of experiment i; the vector of expression values of all genes at time t of experiment i, also called the expression profile at time t of experiment i; the expression values of gene g for all experiments i across all time points t; and the expression values of experiment i for all genes g across all time points t. Arabidopsis thaliana stress response (\u201cAtGenExpress\u201d). We trained and tested our method on the two time-series microarray datasets used by Hafemeister et al. We downloaded the AtGenExpress dataset from the TAIR database http://www.arabidopsis.org/; it contains expression measurements of Arabidopsis thaliana when exposed to various stress treatments. The dataset has 232 samples comprising 9 stress treatments and 2 tissue types (root and shoot) at 8 time points (T\u200a=\u200a8), with 2 replicates at each time point. In total, there are 18 unique combinations of stress treatments and tissue types to which we will refer as experimental conditions (N\u200a=\u200a18). We will refer to the variables in the experiment, i.e., stress treatment and tissue type, as \u201cexperimental factors\u201d. The original dataset contained expression values for 22810 genes. Hafemeister et al. reduced it to 2074 genes by applying a 2-fold-change filter (G\u200a=\u200a2074). The EDGE dataset, which we obtained directly from its original author, contains measurements at 12 time points (T\u200a=\u200a12), with replicates ranging from 1 to 40 at different time points. In total, there are 11 unique combinations of toxin treatments and dosage levels, to which we will refer as experimental conditions. We will refer to the variables in the experiment, i.e., toxin treatment and dosage level, as \u201cexperimental factors\u201d. The dataset contains expression values for 1600 genes (G\u200a=\u200a1600). Since our validation framework requires samples from at least 4 time points, we discarded experimental conditions with fewer than 4 time points. This resulted in a reduced EDGE dataset with 6 unique experimental conditions (N\u200a=\u200a6). All our analyses are based on this reduced dataset. The third dataset comprises microarray experiments grouped into 3 experimental conditions (N\u200a=\u200a3), based on the associated diseases. Each microarray experiment consists of arrays or expression profiles which can be categorized as either \u201cnormal\u201d or \u201cdiseased\u201d (T\u200a=\u200a2). Each microarray experiment consisted of a unique set of genes. In order to create a single dataset with around 3000 common genes, we excluded 5 microarray experiments. The reduced dataset contains 3378 genes (G\u200a=\u200a3378). The expression values in the reduced dataset were quantile normalized. We obtained this dataset from Jesse M. 
Engreitz. We obtained protein interaction information from STRING, a database of known and predicted protein-protein (and protein-gene) interactions http://string-db.org/. As test/training data we used pairs of expression profiles from two time points of the same experimental condition. We converted each of these pairs of expression profiles into a vector of expression values to which we refer as a transition profile. We used two different kinds of transition profiles: Differential transition profile (DTP). The DTP measures the change in expression value for all genes between two time points t_x and t_y. The DTP of an experimental condition i for a pair of time points t_x and t_y is calculated as DTP_i(t_x, t_y) = E_i(t_y) - E_i(t_x), where E_i(t_x) and E_i(t_y) are the expression profiles of experimental condition i at time points t_x and t_y respectively. Mean transition profile (MTP). The mean transition profile is the mean expression value for all genes between two time points. The MTP of an experimental condition i for a pair of time points t_x and t_y is calculated as MTP_i(t_x, t_y) = (E_i(t_x) + E_i(t_y)) / 2. To obtain a transition profile, two expression profiles are combined into a single vector of expression values. Hence, we also compared transition profiles with single gene expression profiles: Time point expression profile. Here, we based the classification on individual gene expression profiles. To make the comparison with the classification based on pairs of gene expression profiles fair, the individual gene expression profiles were taken from a set of two profiles (see subsection below). We evaluated the performance of our method using the above-mentioned expression and transition profiles on the AtGenExpress and EDGE datasets. We classified gene expression and transition profiles according to the 1-nearest neighbor rule (1NN). Similarity between expression or transition profiles was evaluated using the Pearson correlation coefficient. Thus, for a given transition profile selected as test data, we computed, pairwise, the Pearson correlation coefficient between it and all the transition profiles in the training data. 
Then, we examined the Pearson correlation coefficient of each pair, and labeled the test data with the experimental condition of the transition profile in the training data for which we obtained the highest Pearson correlation coefficient. To make the comparison between expression and transition profiles fair, in the case of time point expression profiles, the test data consisted of two expression profiles. We computed, pairwise, the Pearson correlation coefficient between all the expression profiles in the test data and all the expression profiles in the training data. Finally, the test data were labeled with the experimental condition of the expression profile in the training data for which we obtained the highest Pearson correlation coefficient.We evaluated the predictive power of our method for different choices of parameters and features using leave-one-out cross-validation. In the following, we assume that pairs of time-points are sorted by time in ascending order.AtGenExpress and EDGE datasets. The test dataset was generated by randomly selecting 2 time points from a given experimental condition. For these 2 time points, we used one of the replicates as test dataset, and excluded all other replicates from the training and test datasets. The training dataset consisted of all the remaining time points for this experimental condition together with all the time points for other experimental conditions. We generated ten such test/training datasets for each experimental condition, and repeated this random sub-sampling validation procedure 30 times for AtGenExpress, and 100 times for EDGE.Engreitz dataset. This dataset contains 27 microarray experiments categorized into 3 experimental conditions. Each microarray experiment has expression profiles in \u201cnormal\u201d and \u201cdiseased\u201d states. We treated \u201cnormal\u201d as time point 0 and \u201cdiseased\u201d as time point 1. 
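The transition profiles and the 1NN labeling rule described above can be sketched in a few lines. This minimal Python illustration uses hypothetical condition names and omits the replicate handling and majority voting detailed in the text:

```python
import math

def dtp(e_x, e_y):
    """Differential transition profile: per-gene change in expression
    between the two time points, E(t_y) - E(t_x)."""
    return [b - a for a, b in zip(e_x, e_y)]

def mtp(e_x, e_y):
    """Mean transition profile: per-gene mean of the two time points."""
    return [(a + b) / 2 for a, b in zip(e_x, e_y)]

def pearson(u, v):
    """Pearson correlation coefficient between two profiles."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u)) * math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / den

def classify_1nn(test_profile, training):
    """1-nearest-neighbour rule: label the test profile with the
    condition of the most correlated training profile."""
    return max(training, key=lambda cond: pearson(test_profile, training[cond]))
```

For example, a test transition profile is labeled with whichever training condition's profile correlates with it most strongly.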
For the cross validation, we selected one microarray experiment (2 time points) as test data, and the rest as training data. This process was repeated so that each microarray experiment was selected as test data exactly once. If the predicted experimental condition for the test data is the same experimental condition from which the test data had been taken, then the classification is correct; otherwise, the classification is incorrect. We used the accuracy to evaluate the classification results for different methods, parameters, and features: accuracy = (number of correctly classified test datasets) / (total number of test datasets). For the Engreitz dataset, in addition to the accuracy we used the area under the Receiver Operating Characteristic (ROC) curve (AUC). Each of the 27 microarray experiments was used to query the remaining 26 microarray experiments exactly once in order to determine whether they correspond to the same experimental condition. Thus, given a query microarray experiment, we computed 26 correlation coefficients. Then, we defined a cut-off on those correlation coefficients. All microarray experiments for which we obtained correlation coefficients higher than the cut-off were classified as \u201cpositives\u201d. Out of these positive microarray experiments, those that indeed corresponded to the experimental condition of the query microarray experiment were considered \u201ctrue positives\u201d (TP); those corresponding to a different experimental condition were considered \u201cfalse positives\u201d (FP). We then computed the true positive rate (TPR) and false positive rate (FPR). The ROC curve represents the TPR as a function of the FPR for different cut-off values. The AUC reported is the average computed for all 27 microarray experiments taken as query. In the STRING data http://string-db.org/, a link l implies a protein interaction between genes g_x and g_y. 
O_{.,l,.} is an N\u00d7T matrix containing the link expression values of link l for all experiments and time points. We also investigated the effect of adding protein interaction information on microarray classification. We retrieved this information from the protein interaction database STRING. We compared the classification performance of classifiers relying on gene, linked gene and link datasets. Microarray experiments are intrinsically noisy in that they involve a very large number of genes, most of which exhibit irrelevant variation. We therefore compared several feature selection strategies: STRING-based link selection. Our link datasets included only those links representing interactions between genes present on the microarray and with a confidence score in STRING greater than a given cut-off, which was selected from {0, 250, 500, 750, 900}. STRING-based gene selection. Our linked genes datasets included only those genes present on the microarray and for which there is an interaction in STRING with a confidence score greater than a given cut-off. As for the STRING-based link selection, cut-off scores were selected from {0, 250, 500, 750, 900}. It follows that for each cut-off score, the link and linked genes datasets included information on exactly the same genes. Random selection. Randomly selected genes were used as controls. For each STRING cut-off score, we counted the number of genes present in the linked genes dataset, and randomly selected the same number of genes to create a control dataset. Leave-one-out cross-validation was performed on this control dataset. The process of creating a dataset from randomly selected genes was repeated 25 times for each STRING cut-off score. For each experimental condition k, the rSNR of a gene g is computed from the mean and standard deviation of g in the positive set and in the negative set, defined below. The rSNR ranks genes according to their association with a given experimental condition. 
In regards to our classification problem, this means that when the transition profile of a test data is compared with the transition profile of an experimental condition in the training data, only those genes that were found to be relevant for that experimental condition will be used to assess the similarity between the two profiles. In a cross-validation framework, the rSNR gene rank is computed based exclusively on the training data. Hence, the sets of relevant genes are also based exclusively on the training data, and will be different for each cross-validation fold. In order to rank the genes based on their rSNR, for each experimental condition we first define a positive and a negative set in the training data. Basically, the positive set is the training data for the experimental condition of interest, while the negative set comprises the remaining training data not involving any of the experimental factors defining the positive set. For example, let us assume that we are computing the rSNR for the experimental condition involving the experimental factors cold and root (\u201ccold-root\u201d) in the AtGenExpress dataset. In this case, the positive set is the training data for the experimental condition of interest (\u201ccold-root\u201d), while the negative set is the training data involving neither \u201ccold\u201d nor \u201croot\u201d. Other experimental conditions sharing experimental factors with the positive set are excluded from the rSNR calculation. The definition of the positive and the negative set is exemplified in Figure . Let \u03bc_pos and \u03c3_pos denote the mean and standard deviation of the expression of gene g under the experimental condition of interest k (the positive set), and \u03bc_neg and \u03c3_neg the mean and standard deviation of the expression of gene g across all experimental conditions in the negative set. Finally, we define the rSNR for experimental condition k and gene g as rSNR(k, g) = (\u03bc_pos/\u03c3_pos) / (\u03bc_neg/\u03c3_neg). The rSNR can also be interpreted as the ratio between two coefficients of variation: rSNR(k, g) = CV_neg / CV_pos, where CV = \u03c3/\u03bc. The rSNR calculation described above is repeated for all experimental conditions in the training data. 
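Reading the rSNR as the ratio of the two coefficients of variation mentioned above (negative-set CV over positive-set CV, so that a high, stable signal in the condition of interest combined with variable signal elsewhere scores highly), a minimal Python sketch with made-up values follows; this assumes the CV-ratio interpretation and is not the authors' implementation:

```python
import statistics

def rsnr(pos_values, neg_values):
    """rSNR for one gene, computed as a ratio of coefficients of
    variation: CV(negative set) / CV(positive set). Assumes the
    CV-ratio reading of the definition; population std. deviations."""
    cv_pos = statistics.pstdev(pos_values) / statistics.mean(pos_values)
    cv_neg = statistics.pstdev(neg_values) / statistics.mean(neg_values)
    return cv_neg / cv_pos

def rank_genes(positive, negative):
    """Rank gene names by descending rSNR; positive/negative map each
    gene to its expression values in the positive/negative set."""
    return sorted(positive, key=lambda g: rsnr(positive[g], negative[g]), reverse=True)
```

A gene that is stably high in the condition of interest but variable in the negative set ranks above one with the opposite behavior.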
Hence, for each experimental condition, we obtain a list of genes with their corresponding rSNR scores, which is then used to select relevant genes. After feature selection, each experimental condition in the training data is represented by a separate list of relevant genes and their corresponding expression values. Subsequently, when test data are compared with a given experimental condition (e.g. \u201ccold-root\u201d), only the genes that were found to be relevant to that particular experimental condition (\u201ccold-root\u201d) are used for similarity score calculation. This process is repeated for each combination of test data and experimental condition in the training data. For a fair comparison with the results based on the link and linked genes datasets, we selected the same number of genes based on their rSNR scores, as obtained for each STRING cut-off score (see previous subsection). We compared the performance of the rSNR with two standard feature selection methods. Like the rSNR, these feature selection methods rank genes and select genes based on the ranking: Signal-to-noise ratio: SNR = (\u03bc1 - \u03bc2) / (\u03c31 + \u03c32), where \u03bc1 and \u03bc2 are the mean expression values for class 1 and class 2 respectively, while \u03c31 and \u03c32 are the standard deviations for class 1 and class 2 respectively. Welch\u2019s t-test: Welch\u2019s t-test is an adaptation of Student\u2019s t-test to be used on two samples with unequal variance: t = (\u03bc1 - \u03bc2) / sqrt(\u03c31^2/n1 + \u03c32^2/n2), where \u03bci, \u03c3i and ni are the mean, standard deviation and sample size of class i, respectively. We began by evaluating the performance of our 1NN classifier on the entire AtGenExpress (2074 genes) and EDGE (1600 genes) datasets. These datasets had been previously investigated by Hafemeister et al. in the context of time-series microarray classification. We set out to improve the classification of microarray time series by various means. We evaluated how gene expression profiles from two different time points can be combined, yielding transition profiles. 
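Welch's t statistic, used as one of the baseline rankings above, can be sketched as follows; this is an illustration with made-up values, not the study's code, and `rank_by` is a hypothetical helper that works with any two-class score:

```python
import math
import statistics

def welch_t(class1, class2):
    """Welch's t statistic for two samples with unequal variances:
    t = (mu1 - mu2) / sqrt(s1^2/n1 + s2^2/n2), using sample variances."""
    n1, n2 = len(class1), len(class2)
    mu1, mu2 = statistics.mean(class1), statistics.mean(class2)
    v1, v2 = statistics.variance(class1), statistics.variance(class2)
    return (mu1 - mu2) / math.sqrt(v1 / n1 + v2 / n2)

def rank_by(score, pos, neg):
    """Rank genes by the absolute value of a two-class score
    (e.g. welch_t, or an SNR function)."""
    return sorted(pos, key=lambda g: abs(score(pos[g], neg[g])), reverse=True)
```

A gene whose class means are well separated relative to the pooled per-class variance receives a large |t| and ranks near the top.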
We incorporated protein interaction information into the definition of such transition profiles, and selected genes based on the same information. Finally, we compared different feature selection approaches, and developed the rSNR method to extract the genes that are most relevant to a specific biological condition. We evaluated our method on the AtGenExpress and the EDGE datasets using leave-one-out cross-validation. First, we applied our method to the entire datasets. We compared the performance of the method using different gene expression and transition profiles, as well as with previous work. Next, we evaluated different feature selection methods with the aim of improving classification performance. We compared the performance of gene- and link-based methods, using randomly selected genes as controls (see Methods). To render all feature selection methods comparable, the classification decision was always based on the same number of genes, independently of the feature selection method. The results are shown in the corresponding figure. The STRING database is not a process-specific protein interaction database; a process-specific database, e.g., a database of interactions involved in the stress response of Arabidopsis, would probably help in selecting interactions specific to each experimental condition in AtGenExpress. Some genes that might be relevant for the experimental conditions under study are not present in STRING, and, hence, information provided by these genes is lost. It is disputable whether genes with more links in STRING are more important, or simply more extensively studied. The rSNR feature selection method exhibited the best overall performance, achieving higher accuracy values compared to randomly selected genes for both datasets. Additionally, applying the rSNR feature selection method generally resulted in a significantly higher accuracy as compared to the other feature selection methods. 
A detailed analysis of the rSNR method based on the AtGenExpress dataset is presented in the following section. To better understand the rSNR method, we systematically decreased the number of features in the dataset by uniformly removing 200 genes in a step-wise fashion, and compared the performance of the rSNR with control datasets containing the same number of randomly selected genes. For each experimental condition in the training dataset, we created a gene list, sorted in ascending order (from bottom to top) according to the genes' rSNR scores. For the rSNR to constitute a reliable feature selection method in the context of microarray classification, the genes at the top of the list should be relevant to the specific experimental condition of interest. On the other hand, the bottom of the list should contain genes considered to be noise. To verify that this is indeed the case, we selected 200 genes from different sections of the rSNR-based sorted gene list, and performed cross-validation, reporting average accuracy values. As a control, we used 200 randomly selected genes. We also compared the performance of the rSNR with Significance Analysis of Microarrays (SAM; see Text S1). In addition to testing our methods on time-series datasets, we applied them to the static microarray dataset from Engreitz et al., who classified microarray experiments into three disease types, achieving an area under the ROC curve (AUC) of 0.729. As discussed in the Methods section, the application of our method required the modification of the original dataset. Hence, direct comparison with the results of Engreitz et al. is not appropriate. Even randomly selected genes perform extremely well at classifying these data. We first showed that the nearest neighbor method performs comparably to the method developed by Hafemeister et al. 
We found that for the purpose of classification, mean expression profiles describe time-series transitions based on microarray experiments better than differential expression profiles and single time point expression profiles.We used biological information as a feature selection criterion wherein a selected feature is known to involve a protein interaction. The source of this biological information was protein interaction information from the STRING database. We investigated the performance of different methods for reducing the high dimensionality of microarray data based on such information. We found that, compared to simple expression profiles, the addition of information on interactions does not provide a clear advantage in the classification of microarray experiments. Moreover, the application of feature selection methods relying on interactions resulted in performances comparable to those obtained with randomly selected genes. Alternative forms of including such information, or the use of interactions from more specific databases, describing the process of interest, might be of advantage.Finally, we proposed a novel method for feature selection that we called the relative Signal to Noise Ratio (rSNR). The rSNR gives a score to each gene based on its relevance for each experimental condition. This score can then be used to select relevant genes. We showed that the performance of classifiers based on genes with low rSNR scores is substantially worse than that of classifiers based on genes with high rSNR scores. This result indicates that the genes relevant for classification of the experimental condition of interest rank high, in contrast to irrelevant genes, which may be considered noise. Due to its simplicity, our method is particularly attractive for database searches. In a preprocessing step, the microarray datasets for different experimental conditions in the database can be summarized using only the most relevant genes. 
Hence, when a query microarray is searched against the entire database, instead of comparing the expression profiles of all genes, only the expression profiles of genes relevant to an experimental condition need to be compared. Both in terms of memory and performance, such a procedure would demand relatively few computational resources. Figure S1: Example of cross-validation when all time points of an experimental condition are selected as test data (TIF). Text S1: Results of the comparison between rSNR and Significance Analysis of Microarrays (SAM) (PDF)."}
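The condition-specific nearest-neighbor comparison described above can be sketched as follows. This is a hedged illustration, not the authors' implementation; the data structures and names are assumptions.

```python
import numpy as np

def similarity(test_profile, train_profile, relevant_idx):
    """Negative Euclidean distance restricted to one condition's relevant genes."""
    diff = test_profile[relevant_idx] - train_profile[relevant_idx]
    return -float(np.sqrt(np.sum(diff * diff)))

def classify_1nn(test_profile, training):
    """training: list of (condition_label, profile, relevant_idx) triples.
    Each comparison uses only the genes deemed relevant for that condition,
    mirroring the per-condition summaries proposed for database searches."""
    label, _, _ = max(training,
                      key=lambda c: similarity(test_profile, c[1], c[2]))
    return label
```

Because only the pre-selected relevant genes enter each comparison, the database needs to store just those genes per condition, which is the memory/performance advantage argued for in the text.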
+{"text": "Typical analyses of time-series gene expression data, such as clustering or graphical models, cannot distinguish between early and later drug-responsive gene targets in cancer cells. However, these genes would represent good candidate biomarkers. We propose a new model - the dynamic time order network - to distinguish and connect early and later drug-responsive gene targets. This network is constructed based on an integrated differential equation. Spline regression is applied for accurate modeling of the time variation of gene expression. Then a likelihood ratio test is implemented to infer the time order of any gene expression pair. One application of the model is the discovery of estrogen response biomarkers. For this purpose, we focused on genes whose responses are late when breast cancer cells are treated with estradiol (E2). Our approach has been validated by successfully finding time order relations between genes of the cell cycle system. More notably, we found late response genes potentially interesting as biomarkers of E2 treatment. Breast cancer represents a major public health issue, since it comprises 22.9% of all cancers in women and is an important cause of death. Biomarkers often refer to proteins measured in the blood whose concentrations reflect the presence or the severity of a disease. In the case of estrogen treatment, biomarkers can be seen as parameters reflecting the effects of the drug on the patient. Biomarkers for hormone therapy of breast cancer are not well developed. For instance, although tamoxifen's pharmacological mechanism is well known, its clinical biomarker is not well established yet. Understanding the cascade of the estrogen signaling pathway is the key to studying potential biomarkers. Gene expression-based biomarker discovery has demonstrated its efficiency for breast cancer. 
Unfortunately, standard methods might fail to reveal key biomarkers, since they do not take into account the temporal aspect of gene expression and the complex network of gene regulation. To tackle this issue, the analysis of time-series data through dynamic networks represents an efficient alternative. Late response genes might represent relevant biomarkers because they are more stable over time. Our approach relies on this biological aspect of biomarker discovery. To identify late response genes, we propose a new model based on a dynamic time order network (DTON). The model interpretation is simple and intuitive: it reflects which genes are expressed at early times and which ones at late times after the hormone treatment. The DTON is constructed based on an integrated differential equation. Spline regression is applied for accurate modeling of the time variation of gene expression. A likelihood ratio test is implemented to infer the time order of any gene expression pair. The advantages of this modeling approach are numerous: (i) closed-form expressions of the ODEs, (ii) accurate modeling of the time-series data by using spline regression and by integrating differential equations, and (iii) model learning involving simple, fast-to-compute regressions with only a few parameters to estimate. The method has been validated by successfully finding time order relations between genes of the cell cycle system. Most importantly, we found late response genes as candidate biomarkers of E2 treatment. This paper is organized as follows. The Materials and methods section first describes the experiments and data preprocessing. Late response genes are defined and discussed. Then the dynamic time order network and its model learning are presented, describing how dynamic time order relations between genes are inferred through a likelihood ratio test. The next section illustrates our method on real data. 
Our model is validated with the well-known cell cycle system. Late response genes of E2 treatment are discovered. Finally, the last section concludes and points out promising perspectives. G0-G1 synchronized cells were treated with 10^-8 M of 17\u03b2-estradiol (E2). Then RNA was extracted from the cells before (0) or after 1, 2, 4, 6, 8, 12, 16, 20, 24, 28 and 32 hours of stimulation. For more details, the reader is referred to the original study. The time interval of our gene expression data is [0, 32] hours. We divide the function f_i into three parts using two knots, at 5.333 hours and 17.333 hours. The decomposition of the cubic function using knots is presented in an Additional file. Letting \u03b2_i denote the vector of spline coefficients and t the vector of time points, the distribution of y_i is written as in Equation 5, where j refers to the first, second or third interval. In our study, we have 12 different time points t \u2208 {0, 1, 2, 4, 6, 8, 12, 16, 20, 24, 28, 32}, and their associated LCRs for gene G_i form the vector y_i. Based on Equation 5, the likelihood for the NCSR model of gene G_i for this set of 12 independently and identically distributed (i.i.d.) samples is given by Equation 6. The coefficients \u03b2_ij are learned by maximizing the likelihood in Equation 6 under constraints, yielding the fitted curve for gene G_i over the whole time interval from 0 to 32 hours. Based on Equation 2, the dynamic time order relationship between two genes can be learned using a multiple linear regression in which the response variable y_it is the LCR of gene G_i at time t and the predictor variables are integrals of the cubic functions at time t. For a predictor variable, the integral F_i of the piecewise cubic functions f_i1, f_i2 and f_i3 is calculated as in Equation 11. We apply the model in Equation 10 to every pair of genes to determine whether there is a dynamic time order relation between them. The pairwise regression models for two genes G1 and G2 are given in Equations 12 and 13: letting F_i(t) be the integral of f_i(t) obtained from Equation 11 and evaluated at each time point t, the predictor variable is X = F(t). 
Thus, in Equations 12 and 13, the values of y_i (left-hand side) come from the data and the values of X (right-hand side) result from the integration of the NCSR functions. For the pair of genes G1 and G2, the model in Equation 12 represents the dynamic time order relation G2 \u2192 G1 and the model in Equation 13 represents the dynamic time order relation G1 \u2192 G2. Pairwise regressions are then computed for all pairs of genes and the log-likelihoods are calculated. After determining the time order relationships, an adjacency matrix is constructed whose weights are the previously computed log-likelihood differences. In the matrix, for a couple of genes, only the positive log-likelihood difference value is kept and the negative (symmetric) log-likelihood difference value is set to 0. This adjacency matrix represents the complete directed graph of time order relationships. When the network is small (fewer than one hundred nodes), it is interesting to keep as much information as possible about time order relations. The best strategy in this case is to fine-tune a threshold used to remove non-significant edges. For this purpose, a simple and efficient approach is the use of the median or other quantiles of the distribution of log-likelihood difference values. Then a simplification step is used to remove redundant edges. For instance, when one observes A \u2192 B and B \u2192 C, then A \u2192 C is considered redundant and is removed. For graph drawing, Sugiyama's algorithm provides a suitable layered layout. When the network is huge, such as the genome-wide network from the microarray data, the previous approach cannot be used. The reason is that a low threshold value will create a highly connected network which is too complex to manipulate and to visualize, whereas a high threshold value will lead to a graph with many connected components from which it will only be possible to infer time orders between connected genes. 
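The pairwise comparison of integrated-spline regressions described above can be sketched as follows. This is a simplified illustration in Python, not the article's R code: it uses a plain interpolating cubic spline rather than the NCSR with two knots, and a Gaussian OLS log-likelihood; all names are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# The 12 sampling times used in the study.
T = np.array([0, 1, 2, 4, 6, 8, 12, 16, 20, 24, 28, 32], dtype=float)

def integral_predictor(y):
    """Fit a cubic spline to a gene's LCR profile and evaluate its running
    integral F(t) at every sampling time (a simplification of the NCSR fit)."""
    return CubicSpline(T, y).antiderivative()(T)

def gaussian_loglik(y, x):
    """Log-likelihood of an ordinary least squares fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(y), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.mean((y - X @ beta) ** 2)
    return -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1)

def time_order_score(y1, y2):
    """Positive score favours G2 -> G1; negative favours G1 -> G2."""
    return (gaussian_loglik(y1, integral_predictor(y2))
            - gaussian_loglik(y2, integral_predictor(y1)))
```

Computing `time_order_score` for every gene pair and keeping only the positive direction yields exactly the weighted adjacency matrix of log-likelihood differences described in the text.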
To tackle this issue, we compute the so-called maximum weight spanning tree (MWST). This graph presents several advantages: (i) its tree shape is a very simple structure, easy to manipulate and visualize, and (ii) every node is connected by a path, so that we can access the time order relation between each pair of genes. Besides, the MWST can be quickly computed in O(n^2 log n) through Prim's algorithm. The dynamic time order network (DTON) has a biological interpretation; it is illustrated in the corresponding figure. Our learning method is implemented in R, and the R source code is available on request. For graph drawing and display, the software Tulip (http://tulip.labri.fr/TulipDrupal/) was used. It is a user-friendly tool able to deal with about one million nodes. For validation, we considered well-known cell cycle genes: cyclin A2 (CCNA2), cyclin B1 (CCNB1), cyclin B2 (CCNB2), cyclin D1 (CCND1), cyclin D3 (CCND3), cyclin E1 (CCNE1), cyclin E2 (CCNE2), cyclin-dependent kinase 1 (CDK1), cyclin-dependent kinase 2 (CDK2), cyclin-dependent kinase 4 (CDK4) and cyclin-dependent kinase 6 (CDK6). Regressions have been computed for all pairs of genes. Then, the network of cell cycle genes has been computed by thresholding using the median of the log-likelihood differences. After simplification, the inferred network is composed of 27 time order relations. It is depicted in the corresponding figure. For genome-wide network modeling, an MWST has been constructed from all pairwise regressions on the 5003 genes. The network is depicted in the corresponding figure. To characterize expression trends, we clustered the profiles and chose the number k of clusters, obtaining k = 5. However, when we looked at the clusters, we were unable to identify any cluster corresponding to late response genes. We thus tried higher values of k. With k = 20, we were able to more accurately distinguish different trends in gene expression. We then selected late response genes in two steps: (i) we first selected candidate genes, and (ii) from these selected genes, we only kept those whose absolute LCR values for the last time points 20, 24, 28 and 32 are significantly different from 0. 
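The MWST simplification described above can be sketched with Prim's algorithm. This is an illustrative Python version, not the article's R implementation; for clarity it uses a naive O(n^3) scan rather than the O(n^2 log n) heap-based variant mentioned in the text.

```python
import numpy as np

def max_weight_spanning_tree(W):
    """Prim's algorithm on a symmetric n x n weight matrix (e.g. the
    log-likelihood differences). Returns the MWST as (i, j, weight) edges."""
    n = W.shape[0]
    in_tree, remaining, edges = [0], set(range(1, n)), []
    while remaining:
        # Greedily pick the heaviest edge crossing the cut (tree, remaining).
        i, j = max(((a, b) for a in in_tree for b in remaining),
                   key=lambda e: W[e[0], e[1]])
        edges.append((i, j, float(W[i, j])))
        in_tree.append(j)
        remaining.remove(j)
    return edges
```

Because the result is a tree, every gene stays reachable from every other, which is the property the text relies on for reading time order relations off the genome-wide network.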
Profiles of the selected genes are depicted in the corresponding figure. The identification of late response genes does not represent a well-studied issue; most notably, no dedicated method has been developed for this purpose. Nevertheless, we tried to compare our method with standard approaches in gene expression analysis: agglomerative hierarchical clustering (AHC) and t-tests. On the one hand, AHC is a widely used tool to cluster gene expression profiles. After computing the AHC, we used the silhouette criterion to determine the optimum number of clusters. We obtained clusters, but none corresponded specifically to late response genes. Several of the identified genes are listed in a breast cancer gene database (http://www.itb.cnr.it/breastcancer//index.html). WDR51A is found to be associated with breast cancer in The Human Protein Atlas (http://www.proteinatlas.org/). We also searched the literature to determine whether the late response genes identified with our method could be good candidate biomarkers. Since biomarkers are molecules that are observed in cancer patients but not in healthy people, they are likely to be genes overexpressed after E2 treatment. Among the overexpressed hubs of the network, CALB2, PDZK1, MT2A and FANCD2 are well known in the literature as diagnostic markers of breast cancer and E2 response. Based on experiments carried out on time-series gene expression data, our dynamic time order network has been shown to efficiently distinguish and connect early and late response genes. First, our model faithfully reproduced the cell cycle temporal system: of the 27 time order relations inferred, 89% correspond to the state-of-the-art network, 11% cannot be checked, but none are false. Second, our approach has been successfully applied at the genome-wide level. The learning method was able to process five thousand genes, and the network simplification through the maximum weight spanning tree provided a graphical display of the huge network. Most notably, several incoming-edge hubs showing very high connectivity have been discovered. All these hubs showed late gene response profiles. 
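The second filtering step, keeping genes whose late-time LCRs differ significantly from 0, can be sketched as follows. This is a hedged stand-in using a one-sample t-test; the article's exact test and data layout are not specified here, so the dictionary format and names are assumptions.

```python
import numpy as np
from scipy import stats

LATE_TIMES = [20, 24, 28, 32]  # the last four sampling times, in hours

def is_late_response(lcr_by_time, alpha=0.05):
    """lcr_by_time: dict mapping a time point to replicate LCR values.
    Flags a gene whose late-time LCRs differ significantly from 0
    (one-sample t-test; a stand-in for the article's exact test)."""
    late = np.concatenate([np.atleast_1d(lcr_by_time[t]) for t in LATE_TIMES])
    _, p = stats.ttest_1samp(late, popmean=0.0)
    return bool(p < alpha)
```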
Regarding those which are overexpressed over time, they have been reported as biomarkers of breast cancer and E2 response in the literature and in databases. The comparison of results with other approaches is not straightforward, since our method is the only one dedicated to identifying late response genes. When compared with standard methods in gene expression analysis, our approach yielded specific results, contrary to agglomerative hierarchical clustering. Moreover, it does not need any complex thresholding, as a t-test strategy does. It is worth noting that all genes identified with the DTON showed late responses, while this is not the case with the t-test strategy. Besides, our approach is based on the comparison of gene expression integrals combined with cubic spline regression, thus offering an accurate assessment of time order relations. The discovery of biomarkers is one application of our model. The distinction between early and late response genes is also an important application in developmental biology, where understanding the temporal aspect of gene expression is a key issue, for instance in cell differentiation. For the moment, we have mainly focused on the identification of late response genes. Another graph model might be more efficient than the MWST, which tends to display incoming-edge hubs, for pointing out early response genes. The authors declare that they have no competing interests. PZ and RM both wrote the paper. PZ, RM, YX, KH and LL conceived the dynamic time order network. PZ and RM carried out the implementation and the experiments. LL, TH, KN and YL designed the study and participated in its coordination. All authors read and approved the final version of the manuscript. Additional files: Decomposition of the cubic function using knots. Solving of the parameters \u03b2_i1 and \u03b2_i3. Matrix T*. Likelihood computation of the regression for the time order determination."}
+{"text": "Protein interaction networks (PINs) are known to be useful for detecting protein complexes. However, most available PINs are static, and cannot reflect the dynamic changes in real networks. At present, some researchers have tried to construct dynamic networks by incorporating time-course (dynamic) gene expression data with PINs. However, inevitable background noise exists in gene expression arrays, which can degrade the quality of dynamic networks. Therefore, contaminated gene expression data need to be filtered out before further data integration and analysis. Firstly, we adopt a dynamic model-based method to filter noisy data from dynamic expression profiles. Then a new method is proposed for identifying active proteins from dynamic gene expression profiles. An active protein at a time point is defined as a protein whose corresponding gene's expression level at that time point is higher than a threshold determined by a threshold function involving the standard deviation. Furthermore, a noise-filtered active protein interaction network (NF-APIN) is constructed. To demonstrate the efficiency of our method, we detect protein complexes from the NF-APIN and compare them with those from other dynamic PINs. A dynamic model-based method can effectively filter out noise in dynamic gene expression data. Our method of computing a threshold for determining the active time points of noise-filtered genes makes the dynamic construction more accurate and provides a high-quality framework for network analysis, such as protein complex prediction. Proteomics is one of the most exciting frontiers in life science. It has become one of the hottest research topics in systematically analyzing and comprehensively understanding proteins through the study of protein structures, functions, and interactions. Most research on biological networks has been focused on static networks. 
The static PINs, in which interactions accumulated under different conditions and at different time points, cannot reflect the real dynamic PIN in the cell, and this has a certain influence on the accuracy of protein complex prediction. In reality, cellular systems are highly dynamic and responsive to cues from the environment. A related approach applies the 3\u03c3 principle to compute an active threshold for each gene based on its gene expression profile. As a result, each gene has its own active threshold, and a protein is active when its expression levels exceed its active threshold. With the notation that \u00b5 and \u03c3 are the mean and the standard deviation of a gene's expression levels, respectively, the choice of the term (\u00b5 + 3\u03c3) is based on the fact that the probability of the range between \u00b13\u03c3 in a normal distribution is more than 99%. Recall that each gene has its own threshold, which is the point of difference from Tang's method. Although time-course gene expression data provide a dynamic snapshot of most of the genes involved in a biological development process and may lead to a better understanding of cellular function, not all genes on a microarray are related to the biological process of interest. In addition, dynamic gene expression data are often contaminated by various noises or \"noisy\" genes. After the contaminated gene expression data are filtered out, in this paper we use a function of the mean and the standard deviation to compute a threshold for determining the active time points of the noise-filtered genes (proteins). Then we construct a noise-filtered active protein interaction network (NF-APIN) of yeast. To evaluate our method, we compare the performance of MCL on NF-APIN, TC-PIN and APPIN. In this section, we first introduce the time-dependent model, the time-independent model and the statistical F-test. 
Second, we introduce our strategies to filter out contaminated gene expression data and to deduce the active time points for each protein based on its gene expression data. Last, we construct a noise-filtered active protein interaction network (NF-APIN) based on the active information extracted from the gene expression profiles and the static PIN. Let x = {x_1, ..., x_m, ..., x_M} be a time series of observation values at equally-spaced time points from a dynamic system. In an AR (autoregressive) model, x_m depends on the past p (< m) time points. The time-dependent relationships can be modeled by an AR model of order p, denoted by AR(p), which is a linear function of the values of the previous p observations plus a term representing possible errors, i.e., x_m = \u03b2_0 + \u03b2_1 x_{m-1} + ... + \u03b2_p x_{m-p} + \u03b5_m, where \u03b2_i are the autoregressive coefficients and \u03b5_m are random errors, which independently and identically follow a normal distribution with mean 0 and variance \u03c3^2. The system of Model (1) can be rewritten in matrix form as Model (2). If rank(X) = p + 1 holds, it has been proved that the maximum likelihood estimates of \u03b2 and \u03c3^2 exist in closed form. In Model (2), the condition for rank(X) = p + 1 is M - p \u2265 p + 1, i.e., p \u2264 (M - 1)/2. For a group of observation values which are not produced by the dynamic system under consideration, but are noisy (random) data, one can simply model them by a constant number plus random errors. Let x = {x_1, ..., x_m, ..., x_M} be a series of time-independent (random) observations; in agreement with Model (2), the last (M - p) observations can be modeled by x_m = \u03b2_0 + \u03b5_m, where \u03b2_0 is a constant number and \u03b5_m are random errors which follow a normal distribution independent of time with mean 0 and variance \u03c3^2. The maximum likelihood estimates of \u03b2_0 and \u03c3^2 are the sample mean and the mean squared deviation, respectively, and the maximum value of the likelihood follows accordingly. The time-independent model corresponds to a (p + 1)-dimensional coefficient vector whose first component is \u03b2_0 and whose remaining components satisfy \u03b2_i = 0. 
Actually, the time-independent model is also an autoregressive model, with order zero, and can be viewed as Model (1) with the constraints \u03b2_i = 0 for i = 1, ..., p. These constraints can be rewritten in matrix form as Constraints (12). The likelihood ratio \u039b of Model (7) to Model (1) is given by Formula (13). As Model (7) can be viewed as the regression Model (1) with the Constraints (12), and the maximum likelihood method is used to obtain the estimates, by the likelihood ratio principle, if \u039b is small, the observation series x = {x_1, ..., x_m, ..., x_M} is more likely time-dependent than time-independent. Although \u039b in Formula (13) is not a convenient test statistic, it has been proved that a monotone transformation of \u039b follows an F distribution with appropriate degrees of freedom when Model (7) is true for a series of observations. When F is very large, and thus the p-value is very small, Model (7) is rejected, i.e., the observation series x = {x_1, ..., x_m, ..., x_M} is time-dependent. From Formula (14), one can calculate the probability that a series of observations is not time-independent. As the regression order in Model (1) is unknown, the p-values are calculated by Formula (14) for all possible orders p (1 \u2264 p \u2264 (M - 1)/2). The proposed method calls a gene significantly expressed (time-dependent) if one of these p-values calculated from its expression profile is smaller than a user-preset threshold value. Gene expression profiles are divided into two categories by using the time-dependent model and the time-independent model described above as the first step. A gene expression profile is time-dependent if it can be best modeled by a non-zero-order AR equation, while a gene expression profile is time-independent if it can be best modeled by a zero-order AR equation. A gene is considered to be noise if its expression data are time-independent and its mean is very small. Thus, the definition of \"small\" is very important. 
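The time-dependence test described above can be sketched as follows. This Python sketch uses the textbook nested-model F-test of the constant (order-0) model against AR(p); the article's exact statistic is its Formula (14), and all names here are assumptions.

```python
import numpy as np
from scipy import stats

def time_dependence_pvalue(x, p):
    """Nested-model F-test of the constant (order-0) model against AR(p).
    A small p-value suggests the series is time-dependent."""
    x = np.asarray(x, dtype=float)
    M = len(x)
    y = x[p:]                                   # the last M - p observations
    # Design matrix: intercept plus the p lagged values for each observation.
    X = np.column_stack([np.ones(M - p)] +
                        [x[p - i:M - i] for i in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_full = np.sum((y - X @ beta) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    df1, df2 = p, (M - p) - (p + 1)
    F = ((rss_null - rss_full) / df1) / (rss_full / df2)
    return float(1.0 - stats.f.cdf(F, df1, df2))
```

As in the text, one would evaluate this for all admissible orders p up to (M - 1)/2 and call the gene time-dependent if any p-value falls below the preset threshold.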
Our strategy is as follows: first, all genes classified as time-independent are sorted in ascending order by the mean of their expression values. Then, given a threshold value, a gene is considered to be noise if the mean of its expression values is less than the threshold. In this study, the genes with the lowest 15% of means are judged to be noisy genes; as a result, the mean threshold is set to 0.5. The reason for the choice of 0.5 will be further discussed in the section on the effect of the coefficient selection later in this paper. Usually, a threshold in a gene expression array is used to differentiate falsely expressed genes (noise) from truly expressed genes, as in Tang's method. For each gene, \u00b5 and \u03c3 are the mean and standard deviation of its expression values. If the fluctuation of the expression values is high, corresponding to a high value of \u03c3, the threshold \u00b5 + 3\u03c3 may be greater than all of the gene's expression values; in other words, some proteins with high fluctuation would be filtered out. In this subsection, we describe a way to determine whether a protein at a time point is active based on the dynamic expression levels of the corresponding gene. Our threshold function is Active_threshold = \u00b5 + k \u00d7 \u03c3. The threshold obtained with k \u00d7 \u03c3 (k < 3) is less than that obtained with the parameter 3\u03c3, so more gene expression profiles will be retained. The Active_threshold is calculated by Formula (17) for all possible values of k (0 \u2264 k \u2264 3). In this paper, the value of the coefficient k is selected as 2.5; the reason for selecting 2.5 will be discussed in the section on the effect of the coefficient k selection later in this paper. Three standard deviations include about 99% of all observations; in other words, in the normal case, fewer than 1% of the time points may be active. Two proteins that interact in the static PIN may not interact with each other all the time in a dynamic network, because they may not always be active at the same time. 
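The activity rule above can be sketched in a few lines. This is an illustrative Python version under the assumption that Active_threshold = \u00b5 + k\u03c3, as reconstructed from the surrounding text; the function names are hypothetical.

```python
import numpy as np

def active_threshold(expr, k=2.5):
    """Active_threshold = mu + k * sigma over one gene's expression profile
    (k = 2.5 is the value selected in the study)."""
    expr = np.asarray(expr, dtype=float)
    return expr.mean() + k * expr.std()

def active_time_points(expr, k=2.5):
    """Boolean mask of the time points at which the protein is active."""
    expr = np.asarray(expr, dtype=float)
    return expr > active_threshold(expr, k)
```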
A dynamic network aims at reflecting the dynamic interactions between proteins, which change with time and condition. The dynamic interactions are determined by the dynamics of protein activity. If the expression level of a gene is over its active threshold at a time point, the corresponding protein is regarded as active at that time point. For each time point, if two proteins that interact with each other in the static PPI network are active at the same time point, the proteins and their interaction form a part of the NF-APIN at that time point. The process is repeated until the NF-APIN is created. In this section, we first construct an NF-APIN. Then we compare the efficiency of three dynamic networks: NF-APIN, APPIN and TC-PIN. Protein interactions of many species are available, particularly in the model organism Saccharomyces cerevisiae (a species of yeast). Since the relationship between the proteins and genes of yeast is almost a one-to-one mapping, there is no need to consider different combinations of exons, and since the genome of yeast has been well understood, the gene expression array of yeast can provide a comprehensive view of protein expression. Therefore, we construct an NF-APIN of yeast. The genome-wide set of PPIs of yeast was downloaded from DIP. In our experiments, the order p is up to 6 in the AR model and the p-value threshold is set to 0.01 in the F-test. 19.4% of the gene expression profiles are time-dependent and 80.6% are time-independent. About 15% of the time-independent genes are identified as noisy genes because of their small means. The Active_threshold is calculated by Formula (17); among all possible values of k (0 \u2264 k \u2264 3), k is selected as 2.5 in our experiment. 
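The per-time-point construction described above can be sketched as follows. This is a minimal Python illustration, not the authors' code; the input formats are assumptions.

```python
def dynamic_network(static_ppi, active_times):
    """static_ppi: iterable of (protein_a, protein_b) pairs.
    active_times: dict mapping protein -> set of active time points.
    An edge survives at a time point only if BOTH endpoints are active
    then; the result maps each time point to its subnetwork's edge list."""
    nets = {}
    for a, b in static_ppi:
        shared = active_times.get(a, set()) & active_times.get(b, set())
        for t in shared:
            nets.setdefault(t, []).append((a, b))
    return nets
```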
Since many proteins are not active at the same time point, resulting in a small subset of efficacious interactions at each time point, the subnetworks in the NF-APIN contain 646 nodes and 1101 edges on average, while those in the APPIN contain 776 nodes and 1281 edges on average and those in the TC-PIN contain 3558 nodes and 16961 edges on average. Compared with the APPIN and the TC-PIN, the average numbers of nodes and edges of the subnetworks of the TC-PIN are about 5.5 times and 15.4 times those of the NF-APIN, and the average numbers of nodes and edges of the subnetworks of the NF-APIN are a little less than those of the APPIN. In this paper, we assume that the coefficient k is greater than or equal to 0 and less than or equal to 3. To investigate the effect of the values of k, we conduct experiments with various values of k and of the mean threshold. Specifically, the coefficient k ranges from 0 to 3 with an increment of 0.1 and the mean ranges from 0 to 1 with an increment of 0.1 too. As the coefficient k rises, the Active_threshold also rises. With the increasing number of filtered genes, the number of new functional modules also increases. As shown in the corresponding figure, the f-measure of MCL achieves an optimal result at a fixed coefficient k; in other words, the number of noisy genes filtered out reaches an optimum. At the same time, it can be seen that when the value of the coefficient k is in the vicinity of 2.5, the f-measure of MCL achieves an optimal result. Therefore, 2.5 is chosen as the coefficient k in our study. The Markov Cluster algorithm (MCL) is a graph clustering algorithm based on flow simulation, which we apply to each network to detect protein complexes. A predicted complex is counted as a match to a known complex when their overlap score satisfies OS \u2265 0.2. Other metrics, such as sensitivity, specificity and f-measure, are also computed when OS \u2265 0.2. The sensitivity of MCL achieved on the NF-APIN is much higher than that on the TC-PIN, while it is slightly less than that on the APPIN. Remarkably, the specificity and f-measure of MCL achieved on the NF-APIN are much better than those achieved on the other two dynamic networks. 
In our experiments, the p-values were calculated against the annotations from SGD (http://www.yeastgenome.org). A predicted protein complex is considered significant if its p-value is ≤ 0.01. To evaluate the efficiency of the different dynamic networks, we analyze the functional enrichment of the protein complexes predicted from the three networks. Since predicted complexes whose p-value exceeds 0.01 are considered to have little biological significance, the table reports the p-values in five different intervals. Separately, posterior probabilities p(φi > 0 | y) or p(φi < 0 | y) for i = 1,…, n are computed using MCMC; these indicate the significance of differential expression for each gene. To identify altered gene expression across a time series, the AR(1) model is applied to each gene and inference is carried out on these posteriors. Here y ~ NB means that y has a negative binomial probability function with E(y) = μ and variance μ + μ²/k; the parameter k⁻¹ is called the dispersion parameter. A more compelling methodological goal is to infer temporal dynamics when we have replicates within a time point, and it is straightforward to establish a negative binomial model with AR(1) structure. The data comprise T time points, C different biological conditions, and L replicates. As RNA-seq experiments generally have small sample sizes, the identification of statistically significant temporally differentially expressed (DE) genes may have limited power; some studies also stress the importance of replication in microarray studies, which have inherent variability. The data array (yijcl) has I genes, J time points, C conditions, and L replicates. This algorithm makes the Markovian assumption that the expression level at the current time depends only on that at the most recent time. We use hidden states to represent a change in expression levels between different biological conditions. Thus, this framework allows us to detect TDE genes and facilitates the calculation of the posterior probabilities of all possible TDE patterns.
For instance, with three time points, this method can estimate the posterior probability of a pattern such as EE-DE-EE, where EE stands for equally expressed and DE for differentially expressed. The main interest is to identify the relationships among the C class-latent mean expression values for each gene g at each time point T = t, denoted μgt1, μgt2,…, μgtC. The primary goal of the HMM in a time course experiment with multiple conditions is to infer all potential relationships among the conditions. In the simplest case with two biological conditions the outcome is binary (EE/DE); for a more complicated design with more than two conditions, suppose the biological conditions correspond to different tissues, hereafter tissues A, B, C, and D. Correspondingly, there are 4 expression profiles μgtA, μgtB, μgtC, and μgtD, and 15 possible expression pattern states. At each time point T = t, we want to estimate the probability of each hidden state given the transition matrix A, the initial probability distribution π0, and the unobserved hidden state at time T = t; estimation is done by the EM algorithm as described and implemented in the original HMM paper. Parametric empirical models (PEM) based on the GP and NBD sampling distributions of y are considered here. We consider a Bayesian HMM to analyze factorial time course RNA-seq data. Our model follows the seminal work of Yuan and Kendziorski, which chains the hidden states over time. In the GP model, for two biological conditions at each time point, the two marginal distributions of the hidden states are given by the equations shown in Yuan et al. for the microarray application.
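The count of 15 pattern states for four conditions is exactly the number of ways to partition the condition means {A, B, C, D} into equality classes (the Bell number B4 = 15), and the two-condition case gives the binary EE/DE outcome (B2 = 2). A small, dependency-free sketch that enumerates these partitions:

```python
def partitions(items):
    """Enumerate all set partitions of `items`; each partition is one
    possible pattern of equality among the condition means mu_gtA..."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing block, or into a new block
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

print(sum(1 for _ in partitions(["A", "B", "C", "D"])))  # 15
```

Each enumerated partition corresponds to one hidden state whose posterior probability the HMM estimates at every time point.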
The underlying distributions and the joint predictive density (JPD) for discrete count data are incorporated to infer the posterior probability distributions. If πi represents the proportion of TDE genes at time t, then the marginal distribution of the data is of mixture type. Under the Poisson emission, f0t(y | λt) = λt^y exp(−λt)/y! for y ≥ 0, and λt follows a conjugate prior with gamma distribution parameters, shape parameter αt and rate parameter βt; thus three parameters θt need to be estimated for a given gene. For the GP model, the Markov chain is assumed to be homogeneous and the marginal distribution of xgt is the finite mixture ∑i=1..d πi fit. We assume a one-step, first-order correlated time series structure, so that the HMM has Poisson-distributed state-dependent emissions. The goal of this algorithm is to identify the set of genes that are TDE in a combination of time series and four different biological conditions, for example distinct tissue types. To address the utility of HMMs in time course RNA-seq experiments with multiple different tissues, we exploit a parametric hierarchical empirical Bayes model with GP (data without replication) and NBD (data with replications) with a beta prior, as a well-modified Bayesian approach. A related approach uses F-statistics to test whether the residuals derived from fitting a smoother for gene A are reduced when the measurements of another gene B are incorporated into the equation. If all the coefficients for the measurements of gene B are zero under the null hypothesis, then there is no statistically significant Granger causality between the trajectories for genes A and B. The concept of Granger causality between two distinct SETIs assumes that the data at the current time point affect the data at the succeeding time point. To quantify dependency, kij is the correlation coefficient between the expression profiles among all possible pairs.
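The Poisson emission with a conjugate Gamma(shape α, rate β) prior on λ has a closed-form marginal, a negative binomial with success probability β/(β+1), whose mean equals the prior mean α/β. A numerical sanity check of that marginal (illustrative code, not the paper's implementation):

```python
import math

def nb_pmf(y, alpha, beta):
    """Marginal of y, where y | lam ~ Poisson(lam) and
    lam ~ Gamma(shape=alpha, rate=beta): negative binomial with
    p = beta / (beta + 1), computed in log space for stability."""
    p = beta / (beta + 1.0)
    log_pmf = (math.lgamma(alpha + y) - math.lgamma(alpha)
               - math.lgamma(y + 1)
               + alpha * math.log(p) + y * math.log(1.0 - p))
    return math.exp(log_pmf)

# the marginal mean equals the prior mean of lambda, alpha / beta
marginal_mean = sum(y * nb_pmf(y, 2.0, 0.5) for y in range(200))
print(round(marginal_mean, 6))  # 4.0
```

This is the same Gamma-Poisson hierarchy that yields the negative binomial variance μ + μ²/k used for replicated data.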
The null distribution was estimated from the correlations between gene i and all other genes. Similarly, each pair of trajectories, corresponding to two gene expression levels, is explored with another dependency metric score; the notation is described below for a given pair of gene expression profiles. Following Ma et al. and Barker et al., we propose a biologically motivated approach to measure the relationship between two different genes based on their temporal expression profiles in RNA-seq. Ma et al. proposed lagged coexpression analysis to capture the scenario in which gene B shows a delayed response to gene A, so that the profile of gene B is correlated with the time-delayed profile of gene A. In this section, we describe the pairwise methods that we consider in our comparisons with the methods discussed above, which explicitly model the time-dependent nature of the data. For comparison with our dynamic methods, we examined several popular static methods, including Fisher's exact test for simple two-sample comparisons and the log-linear model for multigroup comparison, which can also be applied to RNA-seq time series data in temporal analysis, intuitively but with limitations. (i) The P value for TDE of each gene is computed with the Audic-Claverie statistic. For the DE analyses, we first employed pairwise condition comparison methods on the digital measures at a given static status, without respect to time. A union set of all possible pairwise comparisons using these static techniques is taken to identify temporal dynamics in relatively small experiments, where a single sample per time point and very few time points are contained in the experimental design. The Audic-Claverie statistic is based on p(y | x) over read counts y in one sample in one given group, informed by the read counts x in the other, under the null hypothesis that the read counts are generated identically and independently from an unknown Poisson distribution.
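The lagged coexpression idea, correlating gene A's profile with gene B's profile shifted by a few time points, can be sketched directly (toy profiles, not the paper's data):

```python
def lagged_corr(a, b, lag):
    """Pearson correlation between profile `a` and profile `b`
    shifted earlier by `lag` time points, capturing a delayed
    response of gene B to gene A (Ma et al.-style sketch)."""
    a2, b2 = a[:len(a) - lag], b[lag:]
    n = len(a2)
    ma, mb = sum(a2) / n, sum(b2) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a2, b2))
    va = sum((x - ma) ** 2 for x in a2)
    vb = sum((y - mb) ** 2 for y in b2)
    return cov / (va * vb) ** 0.5

# gene B repeats gene A's pulse one time point later
gene_a = [1, 2, 3, 4, 3, 2, 1, 0]
gene_b = [0, 1, 2, 3, 4, 3, 2, 1]
print(lagged_corr(gene_a, gene_b, 1))  # ~1.0
```

At lag 0 the two profiles are only partially correlated; at lag 1 the delayed response is recovered exactly, which is the signal lagged coexpression is designed to detect.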
p(y | x) is computed as an infinite mixture of all possible Poisson distributions, with mixing proportions equal to the posteriors under a flat prior over λ. When the two libraries in a given Solexa/Illumina RNA-seq experiment are of the same size, the Audic-Claverie statistic is a closed-form function of x and y. Among the static methods, we apply a log-linear model with the Poisson link function and a likelihood ratio test; in the model, the time factor, the biological condition factors, and their interaction terms are included. LIMMA (linear models for microarray data), whose F-statistics under the linear model setting are implemented in an R package, is also applied to time series RNA-seq read count data after a variance-stabilizing transformation. Although such static algorithms have demonstrated successful identification of temporally expressed genes to some degree over the past four years and in our study, false discoveries can be introduced in any temporal dynamic analysis with static methods because of violations of the Markovian assumptions frequently revealed in time series expression profiles. As the cost of sequencing continues to decline, there is an urgent need for more sophisticated statistical methodologies with power to identify temporal expression, and for the characterization of temporal dynamics to assess isoform diversity within a gene in future investigations of time series RNA-seq. Ideally, it is critical to have a model that appropriately represents the observed data, since interpretation of a model that does not contain valuable information is useless. For this important purpose, our dynamic methods are compared to these static methods by evaluating the overlap in the number of differentially expressed genes in real data sets.
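For equal library sizes, the infinite Poisson mixture collapses to the closed form p(y | x) = C(x+y, y) / 2^(x+y+1). A direct implementation (illustrative, not the original authors' code):

```python
from math import comb

def audic_claverie(y, x):
    """Audic-Claverie probability p(y | x) for two libraries of equal
    size: p(y | x) = C(x + y, y) / 2**(x + y + 1)."""
    return comb(x + y, y) / 2 ** (x + y + 1)

print(audic_claverie(0, 0))  # 0.5
```

As a check, for fixed x the values p(y | x) over all y form a proper probability distribution (a negative binomial with success probability 1/2), which is what makes tail sums usable as P values.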
Pooling methods: as with ANOVA for microarrays, the log-linear model and linear models for microarray data (LIMMA), after a variance-stabilizing transformation to allow multigroup and multifactor comparisons, can be applied by including a time variable as the main factor in the model. There are several types of time series in RNA-seq. The first is factorial time series data that include at least two biological conditions to be compared at a given time point and have as many developmental patterns over time as there are conditions. The second type of time series has a single condition and a corresponding developmental stage. A third type, periodic time series, has two subtypes: circadian rhythmic data and cell cycle data. In this study, we formulate the statistical framework for identifying temporal changes in RNA-seq time series for the first two types of data; the periodic datasets are reviewed in depth, with discrete Fourier transformation and other methods, in a separate review manuscript. The sheep study comprised two groups (group I, n = 5; group II, n = 6), defined by a standard healing system and a delayed healing system. Thus, the authors considered two treatments: a standard healing system and treatment with an unstable external fixator leading to delayed bone healing. While the standard bone healing system was investigated in a 3 mm tibial osteotomy model stabilized with a medially mounted rigid external fixator, delayed healing was investigated in a 3 mm tibial osteotomy model stabilized with a medially mounted rotationally unstable external fixator. For each treatment, RNA-seq data were collected at 4 time points: 7, 11, 14, and 21 days, with 5-6 individuals' DNA samples pooled together at each time point.
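The pooling idea, treating time as a factor in a count model and testing it with a likelihood ratio, can be illustrated with a closed-form Poisson test (a toy stand-in for the edgeR/LIMMA fits used in the paper; the saturated-vs-null comparison below is an assumption for illustration):

```python
import math

def poisson_lrt_time(counts_by_time):
    """Likelihood-ratio statistic for a time effect on Poisson counts:
    the null model fits one common rate, the alternative one rate per
    time point; 2*(llAlt - llNull) is approximately chi-square with
    T - 1 degrees of freedom. Counts must have positive group means."""
    def loglik(counts, lam):
        return sum(y * math.log(lam) - lam - math.lgamma(y + 1)
                   for y in counts)
    all_counts = [y for grp in counts_by_time for y in grp]
    lam_null = sum(all_counts) / len(all_counts)
    ll_null = loglik(all_counts, lam_null)
    ll_alt = sum(loglik(grp, sum(grp) / len(grp))
                 for grp in counts_by_time)
    return 2 * (ll_alt - ll_null)

# flat gene vs. gene whose counts jump at the second time point
print(poisson_lrt_time([[5, 6, 5], [5, 6, 5]]))    # ~0
print(poisson_lrt_time([[5, 6, 5], [20, 22, 21]]))  # large
```

A gene with identical counts at every time point yields a statistic near zero, while a jump in expression produces a large statistic; the full models add condition factors and interaction terms in the same spirit.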
In their differential expression analysis, they used the pooled samples from 5~6 lanes of animal samples at each time point, and Audic-Claverie statistics were performed across the 4 samples over 4 time points by taking a union set of all possible pairwise comparisons using static methods. We reanalyzed their sheep time series data using three dynamic methods to identify TDE genes. We consider this published RNA-seq time series dataset from a sheep model for delayed bone healing; in Jager et al., surgery was conducted as described in the original reports. The maize leaf data were sampled from four segments: (1) basal (1 cm above the leaf three ligule), (2) transitional (1 cm below the leaf two ligule), (3) maturing (4 cm above the leaf two ligule), and (4) mature (1 cm below the leaf three tip). Thus, maize leaf data with different developmental stages were generated from mRNA isolated from four developmental zones: the basal zone, transitional zone, maturing zone, and mature zone. In the differential expression analysis, they simply applied a chi-squared static method and K-means clustering, neither of which takes time dependency into account; all samples are assumed to be independent. This maize leaf time series dataset is reanalyzed with the proposed methods in this study. We applied two single-biological-condition time series datasets that are of interest for exploring developmental transient patterns during a time period, rather than timing-difference patterns incorporated with multiple conditions at a time. One is a time series experimental design composed of eight stages during early zebrafish development (embryogenesis). In their data, transcripts such as 119921123 and B6DXC7 are of low expression levels, which we were not able to detect with static methods. In the second dataset for our study, we reanalyzed the maize leaf transcriptome data to identify TDE genes with static and dynamic methods and compared the two.
In their paper, they investigated the leaf development gradient in time series gene expression data at successive stages and identified a gradient of gene expression from base to tip (basal > transitional > maturing > mature) among a total of 25,800 annotated genes. In the differential analysis of the time series RNA-seq data, they used the method proposed in Marioni et al. together with K-means clustering and showed eighteen clusters along the four developmental zones. To compare the gene sets detected by our dynamic methods with their gene lists, the dynamic methods SETI and the AR(1) model were applied again in this study, and all temporally differentially expressed genes are presented in Supplementary Tables 1 and 3, where the filtered gene sets tested for differential expression contain 5273 and 12,322 temporal dynamic transcripts out of 42,399 transcripts through the SETI and AR(1) model, respectively. On the basis of significant temporal expression, we compared the dynamic methods to the static methods, which were used in the original paper without accounting for the correlated data structure. As the third real data application, to identify temporal dynamics we reanalyzed the zebrafish embryonic transcriptome, focusing specifically on the identification and characterization of temporally differential expression using the statistical evolutionary trajectory index and the autoregressive time-lagged AR(1) model. We furthermore implemented both methods to rank temporal genes by statistical significance. As a consequence of the resampling-based procedures and the posterior probabilities of autocorrelation, the gene-by-gene approach made it possible to order temporal genes by the two dynamic methods and to identify genes associated with cotemporal dynamics. To investigate such paired temporal dynamics, we examined the relationships between genes using bivariate identification methods. The Glass's d score is reported in Supplementary Table 5.
Likewise, the statistical evolutionary trajectory index with statistical significance for the zebrafish data is given in Supplementary Table 2, where genes were filtered by a coefficient of variation (CV) criterion, leaving 12,034 genes. Overall, both methods show more robustness at low and moderate expression levels than existing parametric static methods, indicating that our methods achieve relative improvements in identifying temporal genes, and the AR(1) model makes more sensitive TDE calls than the SETI resampling procedure in the two real data applications. Here, we examined how different the results obtained by the dynamic time series methods are. For simple pairwise static methods, we employed Audic-Claverie statistics and Fisher's exact test, as these two methods have been widely used in previous studies. They showed highly concordant results on other RNA-seq datasets compared to DEGseq, DESeq, edgeR, and baySeq (data not shown). In the differential analysis with simple pairwise methods, we took a union set over all pairwise comparisons across the time period and among the different biological conditions, as these methods only test two-sample comparisons, and we confirmed the results against those of the original papers. For pooling static methods, LIMMA, the log-linear model, and the edgeR R package with glmFit were carried out to identify TDE genes. To compare with the above static methods, we employed the three dynamic methods described in the previous sections. The results are shown in the figures. For the sheep data, the authors applied the Audic-Claverie method to the normalized expression values (RPKM) to compare later time points to the reference time point (day 7) in both groups. After all pairwise comparisons, they combined the sets of differentially expressed genes, with 884 genes detected in total from 24,325 mappable genes. Based on these 884 genes, they performed hierarchical clustering to identify gene clusters.
Each cluster was then subjected to gene ontology analysis to find significant biological functions. The differential analysis performed in the original paper is based on a static differential analysis method. We reanalyzed their sheep factorial time course experiment data to identify TDE genes over time through the dynamic methods HMM, SETI, and the AR(1) model, which account for the correlated time-dependency structure. The HMM identifies temporal patterns by classifying DE/EE at each time point via posterior probabilities, whereas SETI, with statistical significance from a permutation resampling procedure, and the AR(1) model, with a gamma-Poisson Bayesian assumption on count data, are applied within each single biological condition separately. The results obtained by these dynamic methods were compared with those of the static methods: the simple pairwise methods, Audic-Claverie statistics and Fisher's exact test, and the pooling static methods, glmFit in edgeR, LIMMA, and the log-linear model, as shown in the figures. In systems developmental biology, where characterizing the complexity of various time course data leads to inference of temporal dynamic patterns from the transcriptome, we are often not really interested in exactly how only a single gene is temporally differentially expressed at a particular time point or period. That knowledge would neither answer how biological networks work in the temporal dynamics of gene regulation nor enable prediction of cooperative sets of genes occurring under biological conditions across time points. It is well known that genes work collaboratively in a structured biological network; these biological phenomena underscore the importance of taking multivariate techniques into account when modeling temporal dynamic gene expression.
Since it is not known beforehand which gene features are connected to each other, we sought to define informative relationships between individual gene patterns in order to identify the many relevant classes of dynamic temporal gene expression patterns. We explored highly correlated relationships between the temporal gene sets detected by the bivariate dynamic methods. Pairs of trajectories were further investigated to explore the coupled, coordinated relationships between different temporal patterns based on the three dependency metrics. Once temporal dynamics in the gene-by-gene tests and in gene-to-gene interactions were determined, the resulting temporal gene expression sets, detected by ranking individual analyses and by multivariate approaches, respectively, were further explored to reveal temporal relationships underlying biological processes based on gene ontology and functional network/pathway analysis. Sheep gene symbols (21,865 genes) were converted into human gene symbols (15,343) using BioMart in R. For the simulations, data were generated under different values of the autocorrelation parameter (φ). We generated data for equally expressed genes by sampling the time series process parameters (w) of a gene from an invertible Gaussian ARIMA process with φ = 0, and data for differentially expressed genes across time points by the same procedure with φ = 0.1, 0.25, 0.5, 0.75, and 0.9, respectively. After the time series process, regression effects and autocorrelation parameters were simulated for 1000 genes; 4 simulated datasets were generated by varying the number of time points and replicates per time point, nT = 5 and 10 and nR = 3 and 5, to compute the P value, FDR, and credible interval of each gene for the static and dynamic methods, which were compared to gold standards to obtain true discovery rates in our simulated datasets. We show in simulation studies, for validation and evaluation, that the dynamic methods outperform approaches that do not explicitly address the time series nature of the data.
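The simulated trajectories above can be sketched as Gaussian AR(1) draws, where φ = 0 mimics an equally expressed gene and larger φ injects the temporal correlation the dynamic methods are designed to detect (a minimal generator, not the paper's ARIMA machinery):

```python
import random

def simulate_ar1(phi, n_points, sigma=1.0, seed=0):
    """Gaussian AR(1) trajectory: x_t = phi * x_{t-1} + e_t with
    e_t ~ N(0, sigma^2). phi = 0 gives white noise (an EE gene);
    phi near 1 gives strong temporal dependence (a TDE-like gene)."""
    rng = random.Random(seed)
    x = [rng.gauss(0, sigma)]
    for _ in range(n_points - 1):
        x.append(phi * x[-1] + rng.gauss(0, sigma))
    return x

noise_gene = simulate_ar1(0.0, 10)    # no autocorrelation
signal_gene = simulate_ar1(0.9, 10)   # strong autocorrelation
```

Generating 1000 such genes with φ drawn from {0, 0.1, 0.25, 0.5, 0.75, 0.9} reproduces the structure of the gold-standard datasets used for the recall and precision comparisons.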
We evaluated the performance of the dynamic methods with simulation studies in which the temporal features are known in advance as gold-standard TDE gene lists. The gold-standard gene lists contain the complete information needed to mimic an RNA-seq time series profile, recording whether each gene is differentially expressed (DE) or equally expressed (EE) over time, and serve as the reference set against which the results of both the dynamic and static methods are compared in terms of recall and precision. To this end, we generated simulated RNA-seq datasets with expression profiles representing nondifferentially expressed and differentially expressed genes over a series of time points, using different values of the autocorrelation parameter. Supplementary Table 1: Temporally differentially expressed gene sets detected by the statistical evolutionary trajectory index (SETI), where the Benjamini-Hochberg FDR is controlled at < 0.05, in the maize data. Supplementary Table 2: Temporally differentially expressed gene sets detected by the statistical evolutionary trajectory index (SETI), where the Benjamini-Hochberg FDR is controlled at < 0.05, in the zebrafish data. Supplementary Table 3: Temporally differentially expressed gene sets detected by the Poisson AR(1) model in the maize data. Supplementary Table 4: Temporally differentially expressed gene sets detected by the Poisson AR(1) model in the zebrafish data. Click here for additional data file."}
+{"text": "The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data that are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements. The widespread adoption in systems biology of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products in response to some set of experimental conditions has created a paradox. On one hand these techniques produce such large amounts of data that researchers often struggle to find meaningful and statistically significant patterns amongst the large amounts of noise. 
On the other hand, since the cost of producing one of these comprehensive measurements is relatively high, researchers are often limited as to the numbers of samples that they can afford to assay in their experimental design, even if the cost of collecting the sample material is relatively low. This may mean limiting the number of time points in a time course experiment, the number of different treatment levels, the number of biological or technical replicates, or all of the above. If the end goal is one of network inference or predictive model development, the scarcity of the measurements can lead to vastly under-determined systems. In order to design the most useful possible experiment, a biologist needs information about the most dynamic regions of the system in response to each independent variable (e.g. time duration or treatment levels). Usually this is provided from the biologist's domain knowledge and/or pilot experiments. However, a more efficient approach would be possible if the most dynamic or uncertain regions of the system could be predicted quantitatively. This would enable more measurements to be concentrated within the most dynamic or uncertain regions and fewer within the less dynamic or more certain regions. Uncertainty in a system can result from error or noise in those measurements that do exist, it can result from a lack of measurements in certain regions of the system, and it can result from intrinsic dynamics of the system in certain regions. Most statistical methods focus on estimating the uncertainty due to error in the existing measurements. 
There are also a number of methods to deal with uncertainty in model inference caused by noisy data, mostly using either an automated or manual cyclic refinement of model parameters through parameter perturbation or “tuning”. Many algorithms have been proposed for missing value imputation. Along with missing value imputation algorithms, algorithms for uncertainty quantification and minimization have been developed in a variety of fields, for example by Lermusiaux. Another domain where this idea has been explored is that of robotic vision, where images of a 3D surface are taken and then certain regions of the surface are re-sampled at a higher resolution in order to decrease the level of uncertainty in those regions, as in Huang and Qian. Here, we describe a conceptually similar approach that may be used in conjunction with gene expression experiments, or other experiments where the cost of collecting samples is substantially less than the cost of assaying them. We introduce a novel algorithm to quantify the uncertainty in the unmeasured regions of gene expression time course experiments that is based on our (BT's) experience as a biologist regarding the dynamics of biological systems. The algorithm enables an experimental strategy in which the most uncertain regions are targeted for further measurement. In any time course experiment, the region between measurements carries a certain degree of uncertainty, bounded by the measurements on either side. After the boundaries containing the complete set of plausible interpolations have been established, the next piece of information required is the likelihood distribution of those interpolations within the boundaries. To obtain a useful approximation of the many possible interpolations within the boundaries, two randomly distributed “guide points” were considered to exist within each interval between measurement time points and between the upper and lower boundaries. 
New interpolations were defined that connected all measured points and guide points. Using this approach, the likelihood distribution of plausible interpolations at any time point could be calculated by two methods. For clarity, the two methods are described in the following sections under the temporary assumption that the measured points have zero error; in the final two sections, a further elaboration to account for measurement error is introduced. While the complete set of interpolations could be infinite, our algorithm uses a simple, biologically plausible assumption to place reasonable bounds on the set of interpolations, namely that no novel regulatory event occurs within any interval unless there is experimental evidence for it. Mathematically, we define this to mean that the total number of inflexion points in the actual path of the system through the measured points is assumed not to exceed the number of inflexion points implied by the original set of measurements. This assumption is an instance of Occam's razor. Guide points were distributed randomly along the time axis and randomly along the y-axis, constrained by the interpolation boundaries. The number of inflexion points of the interpolation passing through the real measurements and the guide points was then calculated. If the number of inflexion points of the new interpolation was greater than that of the original, the interpolation was discarded. If the interpolation was accepted, then the positions of all the guide points were recorded. This process was repeated until a predetermined number of passing interpolations had been found. The distributions of the accepted guide points in the y-direction at each point on the time axis thus approximated the distribution of all possible plausible interpolations passing through each time point. Because of the geometric relationships among neighboring guide points and measured points, the distributions of plausible interpolations were markedly non-uniform and non-normal. 
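The accept/reject loop above can be sketched compactly. The sketch simplifies in two labeled ways: it drops a single guide point into each interval (the paper uses two) and uses one global y-range in place of the per-interval interpolation boundaries.

```python
import random

def inflexion_count(ys):
    """Inflexion points of a piecewise-linear path: sign changes in
    the successive differences of slopes (zero curvature is ignored)."""
    slopes = [b - a for a, b in zip(ys, ys[1:])]
    curv = [b - a for a, b in zip(slopes, slopes[1:])]
    signs = [c for c in curv if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def sample_interpolations(measured, lo, hi, n_accept, seed=0,
                          max_tries=100000):
    """Monte Carlo sketch: scatter one random guide point per interval
    and keep the interpolation only if it introduces no inflexion
    points beyond those implied by the measurements themselves."""
    rng = random.Random(seed)
    base = inflexion_count(measured)
    accepted, tries = [], 0
    while len(accepted) < n_accept and tries < max_tries:
        tries += 1
        path = []
        for a in measured[:-1]:
            path += [a, rng.uniform(lo, hi)]
        path.append(measured[-1])
        if inflexion_count(path) <= base:
            accepted.append(path)
    return accepted
```

The y-values of the accepted guide points at each position then approximate the likelihood distribution of plausible interpolations there, which is exactly what makes the acceptance rate (and hence the runtime) degrade as the number of measured points grows.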
This approach proved computationally expensive because the time required to discover sufficient passing interpolations to accurately calculate the likelihood distributions increased exponentially with the number of measured points. In order to create a computationally less expensive algorithm than the Monte Carlo method, we developed a method to calculate the likelihood distribution of interpolations directly from the geometric relationships among the measured points and guide points. The Direct method is based on exactly the same premises, but calculates the likelihood that plausible curves pass through a given location directly, rather than finding the likelihood by trial and error. Given a set of measured points, the distributions are computed via the method outlined above using numerical integration. Three probability values are defined for each interval: a probability conditional on the points to the right, a probability conditional on the points to the left, and a combined probability. Each is calculated by the same scheme: for every combination of midpoints, a candidate configuration is retained only if its total number of inflexion points does not exceed that of the measured path, and the resulting values are then normalized. Finally, the algorithm combines the left- and right-conditional values to calculate the likelihood at each location. As the algorithm uses the position of the measured points to define the boundaries of plausible interpolations, the error in these measurements must also be taken into account. To do so, the algorithm uses defined confidence limits of each measurement (we have used 99%) to define a plausible range of values for that point. 
Since each measured point carries error, the boundary and distribution calculations must span the plausible range of each measurement. In order to compensate for the added computational cost of this modification, which if implemented fully would grow multiplicatively with the number of measured points, the algorithm samples rather than enumerates: for each set of midpoints spanning the confidence interval for a measurement, Latin Hypercube Sampling is employed, which reduces the added computational cost of accounting for measurement error to a modest factor. In order to evaluate the accuracy of the algorithm in predicting where new measurements may lie, including the calculated likelihood distributions, we looked for gene expression time course experiments with two qualities. First, we needed datasets that contained a relatively large number of measurements; this would allow us to provide the algorithm with only a fraction of the measurements and then measure how well the values of the omitted measurements corresponded to the likelihood distributions calculated by the algorithm. Second, in order to evaluate the effectiveness of the algorithm at accounting for measurement error, we needed a dataset that provided multiple replicates for each measurement, so that confidence intervals could be assigned for each measurement. Our first selected dataset comes from recently published time series data on the yeast IRMA gene network. The boundaries are shown for one gene (ASH1) in the switch-on study, and the graphs of the boundaries for the remaining genes can be found in the supplemental data. The 95% confidence intervals of the true locations of the measured points are also shown; the graphs for the remaining genes can likewise be found in the supplemental data. A proportion of the measured points fell within the 20% likelihood envelope, and 83% (5/6) fell within the 75% and the 95% likelihood envelopes, even ignoring the error in the new measurements. We used RT-PCR data provided by the authors. 
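Latin Hypercube Sampling, mentioned above as the cost-reduction device, stratifies each dimension into equal bins and uses every bin exactly once, so a handful of samples still covers each measurement's confidence interval evenly. A minimal, dependency-free sketch (not the paper's implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample in the unit cube [0, 1]^d: each
    dimension is split into n_samples equal strata, and each stratum
    is visited exactly once per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        # one uniform draw inside each stratum
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return list(zip(*cols))

points = latin_hypercube(10, 2, seed=3)
```

Rescaling each coordinate onto a measurement's confidence interval gives the stratified midpoint sets the Direct method iterates over, at a fraction of the cost of a full grid.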
Next we applied the algorithm to time course expression data from a study of the nonsense-mediated mRNA decay (NMD) pathway, which is responsible for the rapid decay of transcripts that contain a premature stop codon. We analyzed the time course expression data for 4 genes (YIL164C, YIL165C, YIL167W, and YIL168W) from a yeast strain containing a mutation in a gene of the NMD pathway; the authors identified these genes as potential targets of NMD. We generated expression values from the raw chip data by first correcting for optical noise using the GC-RMA algorithm. Results for one of the genes (YIL168W) are shown. Before generating the interpolation likelihood distributions we removed a subset of the measured points. Finally, in order to test the algorithm on a high-throughput dataset, we applied the algorithm to the entire NMD mutant array series. As before, expression values were generated from the raw chip data by first correcting for optical noise using GC-RMA, and before generating the interpolation likelihood distributions we again removed a subset of the measured points. The ability to quantify which gaps in a dataset contribute the most to the uncertainty in the knowledge about a particular system provides a powerful tool for planning future experimentation, especially when the cost of the future experiments is high. By utilizing a simple but powerful assumption about the plausible behaviour of biological systems, we have created an algorithm that quantifies the uncertainty created by gaps in biological datasets in a probabilistic fashion, including an intuitive graphical representation. We have also shown its use in estimating the likelihood distribution of interpolations for a genome-wide microarray dataset. Several approaches can be used to estimate the uncertainty of the whole system at each time point. For example, the range of a particular quantile (e.g.
95%) for each gene at each time point could be used, or if it was considered desirable to take the size of the gap between measured time points into account, the area of the likelihood envelope between two measured points could be used. The range (or envelope area) of a particular gene of interest could be used to target a new measurement, or the ranges for all (or selected) genes in the system could be combined (e.g. by multiplication) to calculate an aggregate uncertainty. If time points were being considered that fell between measured points or guide points, then a simple interpolation (e.g. linear or cubic spline) could be used to estimate the distribution at the point of interest. Unlike sensitivity analysis techniques, which attempt to measure the sensitivity of models to changes in parameter values or initial conditions, our algorithm quantifies the uncertainty arising from the data themselves. One limitation of our algorithm is the assumption that in order for an interpolation to be \u201cbiologically plausible\u201d it must not introduce new, unmeasured regulatory events into the system, which is an instance of Occam's razor.
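The targeting options just described (quantile range per time point, envelope area over an interval, simple interpolation between sampled points) can be sketched as follows; the function names are ours:

```python
def envelope_area(times, lower, upper):
    """Trapezoid-rule area between the lower and upper bounds of a
    likelihood envelope: a crude aggregate uncertainty score that also
    reflects the size of the gap between measured time points."""
    area = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        area += 0.5 * ((upper[i] - lower[i]) + (upper[i + 1] - lower[i + 1])) * dt
    return area

def interpolate_range(times, lower, upper, t):
    """Width of the envelope at a time t between sampled points, by
    linear interpolation (the simplest option suggested in the text)."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            f = (t - times[i]) / (times[i + 1] - times[i])
            lo = lower[i] + f * (lower[i + 1] - lower[i])
            hi = upper[i] + f * (upper[i + 1] - upper[i])
            return hi - lo
    raise ValueError("t outside the sampled range")

times = [0.0, 1.0, 2.0]
low, up = [0.0, 0.0, 0.0], [1.0, 3.0, 1.0]
area = envelope_area(times, low, up)            # aggregate uncertainty over [0, 2]
width = interpolate_range(times, low, up, 0.5)  # envelope width at t = 0.5
```

Per-gene scores like these could then be multiplied across genes, as suggested above, to rank candidate time points for new measurements.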
In cases where the initial measurements have been spaced far apart, there is obviously an increased likelihood that this assumption might be invalid, and biologists should take this into account in planning future experiments. For biologists interested in \u201ctake-home messages\u201d from this study, two are immediately evident: (i) since the predictions of the system are affected by the reliability of the measurements, the more replicates of the measured points the better (good advice in any context); (ii) intervals that contain an inflexion point are usually the most uncertain, and thus the best place for new measurements, because of the additional freedom in plausible paths this allows. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements. This methodology should have wide applications outside of biology as well. Our approach can benefit any application that uses continuous sets of measurements (e.g. time course studies), where the system under question can be expected to be constrained in a predictable fashion, and where it is desirable to quantify the uncertainty in the intervals between measurements. The Castor analysis software developed to calculate the likelihood distributions is available under the GNU GPL from: http://vmd.vbi.vt.edu/download/software/index.php (System Requirements: Mac OS X 10.5 or higher). Data S1: IRMA Switch-Off Data. Results of the analysis for all five genes of the IRMA \u201cSwitch-Off\u201d dataset. (XLSX) Data S2: IRMA Switch-On Data.
Results of the analysis for all five genes of the IRMA \u201cSwitch-On\u201d dataset. (XLSX) Data S3: Nonsense-Mediated Decay Data. Results of the analysis for the Nonsense-Mediated Decay dataset. (XLSX) Data S4: Extended Nonsense-Mediated Decay Data. Results of the analysis for the Extended Nonsense-Mediated Decay dataset. (XLSX)"}
+{"text": "Motivation: The analysis and mechanistic modelling of time series gene expression data provided by techniques such as microarrays, NanoString, reverse transcription\u2013polymerase chain reaction and advanced sequencing are invaluable for developing an understanding of the variation in key biological processes. We address this by proposing the estimation of a flexible dynamic model, which decouples temporal synthesis and degradation of mRNA and, hence, allows for transcriptional activity to switch between different states.Results: The model is flexible enough to capture a variety of observed transcriptional dynamics, including oscillatory behaviour, in a way that is compatible with the demands imposed by the quality, time-resolution and quantity of the data. We show that the timing and number of switch events in transcriptional activity can be estimated alongside individual gene mRNA stability with the help of a Bayesian reversible jump Markov chain Monte Carlo algorithm. To demonstrate the methodology, we focus on modelling the wild-type behaviour of a selection of 200 circadian genes of the model plant Arabidopsis thaliana. The results support the idea that using a mechanistic model to identify transcriptional switch points is likely to strongly contribute to efforts in elucidating and understanding key biological processes, such as transcription and degradation.Contact:B.F.Finkenstadt@Warwick.ac.ukSupplementary information:Supplementary data are available at Bioinformatics online. One of the archetypal challenges of systems biology is the task of uncovering the network of interactions between genes and proteins using data such as that coming from high-throughput genome-wide technologies or multi-parameter imaging. 
Time series gene expression data from techniques such as NanoString, reverse transcription\u2013polymerase chain reaction, microarrays or advanced sequencing are particularly valuable for addressing such tasks, especially if the system can be perturbed in an informative way. Such data can also be used to get genome-wide understanding of the variation in key biological processes, such as transcription and degradation. In many cases, one is concerned with better-understood systems, such as the circadian clock or cell cycle, where relatively sophisticated models exist. In these cases, it is of interest to uncover both new connections and deeper details of the regulatory interactions. However, when studying systems where there is a much lower density of understanding, one is relatively satisfied with gaining information on the likelihood of the existence of a regulatory interaction or the importance of a regulatory mechanism. Almost all examples study the response dynamics when systems are subjected to perturbations, such as drug dosing. We propose a model that addresses these issues and at the same time can be effectively fitted to data with sufficient computational efficiency to enable one to handle many genes. It is based on a simple dynamical model of mRNA synthesis and degradation, where transcriptional activity can \u2018switch\u2019 between an arbitrary number of states. The timing and number of transitions, or \u2018switches\u2019, can be estimated efficiently alongside mRNA stability with the help of a reversible jump Markov chain Monte Carlo (RJMCMC) estimation algorithm (see Supplementary Section \u2018Simulation study\u2019). To demonstrate the methodology and its potential further uses, we focus on modelling the wild-type behaviour of a set of 200 chosen oscillatory expressed genes of the model plant Arabidopsis thaliana. The approach allows us to investigate whether genes with similar switch event times also have correlated promoter motifs.
Furthermore, we introduce a Bayesian hierarchical approach to pool data from several experiments and present results for estimation of mRNA stability. The example datasets consist of time series from three experiments of varying timescales and sampling regimes under some mock treatment conditions. Each experiment originally consists of >30 000 probes [www.catma.org], which map >25 000 genes from the TAIR9 genome annotation [www.arabidopsis.org]. Here, we focus on a subset of 200 oscillatory genes (chosen according to their correlation to a sine function for the expression data from E1). The set includes a number of \u2018core\u2019 circadian clock genes. A list of the 200 genes can be found in Supplementary Table S1. The structure of the article is as follows. We first introduce the modelling approach and estimation algorithm. The performance of the algorithm has been studied extensively for artificial data. L is the length of the time interval over which gene expression is observed. An increase or decrease in the transcription rate constitutes a switch in the model. Neither the location of the switch-times nor the number of switches k is known, and both need to be estimated along with the kinetic parameters of the model. Solving the linear ODE for each linear regime and iteratively inserting the final state of a previous regime as the initial condition of the next regime, one can derive a general solution indexed by the number of switches k; the case of no interior switch points corresponds to a single regime, and the general case has k switching points. R denotes the number of replicate time series, each with T observations for a given experimental setting. Note that the notation that each replicate has T observations is only used for simplicity. It will be obvious how to allow for a different number of observations per replicate.
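The general solution referred to above was lost in extraction; it can be reconstructed for the stated model of piecewise-constant transcription with first-order degradation. The notation below (regimes j = 0,...,k with synthesis rates \u03b2_j, degradation rate \u03b4 and switch-times s_j) is ours, not necessarily the authors':

```latex
% within regime j the dynamics are linear:
\frac{dm}{dt} = \beta_j - \delta\, m(t), \qquad s_j \le t < s_{j+1}, \quad j = 0,\dots,k,
% solving each regime and carrying m(s_j) forward as the initial condition:
m(t) = \frac{\beta_j}{\delta} + \Big( m(s_j) - \frac{\beta_j}{\delta} \Big)\, e^{-\delta (t - s_j)},
\qquad s_j \le t < s_{j+1}.
```

With no interior switch points (k = 0) this reduces to a simple exponential approach to the steady state \u03b2_0/\u03b4.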
Assuming that the residuals between the ODE solution and the observed expression values are normally distributed, the likelihood of the data can be written down. For given switch-times and degradation rates, we have originally devised a complete Bayesian regression approach to the model. The prior on the number of switch points k can be specified by a Poisson distribution, and we use a vague gamma prior for the precision. With regard to the switch points, we adapted existing reversible jump specifications and classify three possible moves: (i) movement of a randomly chosen existing switch-point; (ii) addition of a switch and (iii) deletion of a switch, each proposed with a probability controlled by some constant. For the position change in (i), we randomly chose a switch-time from the k existing switches and a candidate value in [0, L] given the constraints imposed by the neighbouring switch-times, so the proposed value will lie in some interval. One of the example genes plays a central role in the A.thaliana clock, and it has been shown to be induced before dusk and to have a peak of expression at dawn. The posterior densities can be estimated from the RJMCMC traces of 75 K iterations, after the first 25 K iterations are discarded as burn-in. The computational time for 100 K iterations was on average 128 s on a 2.8 GHz computer. Fitting this model is thus computationally feasible for thousands of genes and can be easily parallelized. Convergence for gene expression datasets from E1 is usually achieved after 5 K iterations. We observe that the parameter estimation is invariant of the mode of the switch (increase or decrease in transcription rate), that multiple switch points must have at least one observation between them to be reasonably estimated and that a higher sampling frequency is generally more informative for estimation than a larger number of replicate samples.
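A deterministic sketch of the switch model described above (piecewise-constant transcription, first-order degradation); this is our own illustrative code, not the authors' sampler:

```python
from math import exp

def switch_model(t, switch_times, betas, delta, m0):
    """mRNA level m(t) under piecewise-constant transcription.

    switch_times: interior switch points s_1 < ... < s_k
    betas: transcription rate in each of the k+1 regimes
    delta: degradation rate; m0: initial mRNA level at t = 0.
    Within a regime dm/dt = beta - delta*m, so
    m(t) = beta/delta + (m(s_j) - beta/delta) * exp(-delta*(t - s_j)),
    carrying the end state of each regime into the next one.
    """
    s, m = 0.0, m0
    for j, beta in enumerate(betas):
        end = switch_times[j] if j < len(switch_times) else float("inf")
        if t <= end:
            return beta / delta + (m - beta / delta) * exp(-delta * (t - s))
        # advance to the regime boundary; its state seeds the next regime
        m = beta / delta + (m - beta / delta) * exp(-delta * (end - s))
        s = end
```

An RJMCMC sampler would propose moves on switch_times and betas and score each proposal against the observed time series via this solution.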
To demonstrate further use of this approach, we now present case studies referring to the 200 example circadian time series. To gain a systematic understanding of how the estimation algorithm performs for time series of varying sampling frequencies and noise levels, we generate synthetic datasets for chosen kinetic parameter values, using data from E1 to obtain realistic sample sizes and noise levels. The full study benchmarking the performance of the model can be found in the Supplementary Material. A common aim of gene expression analysis is to identify potential common regulatory mechanisms between groups of genes through clustering gene expression and enrichment of semantic similarity, such as Gene Ontology. Clustering is performed in all cases by applying the affinity propagation algorithm to the similarity matrix (see Supplementary Tables S2 and S3 for motifs). These motifs can be grouped by sequence similarity into three broad classes of promoter motifs, and a similarity matrix is generated from co-occurrence of motifs between each pair of genes, which can then be linearly combined with each of the three similarity scores and clustered. A commonly explored hypothesis is that correlated gene expression patterns will also have correlated promoter structure and regulation mechanisms. In practice, such correlations have yet to be confirmed. By combining our analysis of switch-time similarity with promoter motif data we ask whether our approach can shed more light on this issue. We investigate how frequently certain listed motifs are encountered in clusters of genes. If a motif has a high frequency for the genes in a given cluster then it is more likely that the corresponding transcription factor binding sites are key for the regulation of those genes. This could give us an indication of which genes may be turned on or off by the same transcription factors.
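The linear combination of similarity matrices described above can be sketched as follows (plain nested lists rather than arrays; the names are ours):

```python
def combine_similarities(mats, weights):
    """Linearly combine several gene-gene similarity matrices
    (e.g. motif co-occurrence and switch-time similarity) into one
    matrix for clustering. mats is a list of n x n matrices; weights
    summing to 1 give a convex combination."""
    n = len(mats[0])
    combined = [[0.0] * n for _ in range(n)]
    for mat, w in zip(mats, weights):
        for i in range(n):
            for j in range(n):
                combined[i][j] += w * mat[i][j]
    return combined

# equal-weight blend of two toy 2x2 similarity matrices
combined = combine_similarities(
    [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]]],
    [0.5, 0.5],
)
```

The combined matrix could then be handed to an affinity propagation implementation, e.g. scikit-learn's AffinityPropagation with affinity='precomputed'.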
Using position-specific scoring matrix (PSSM) data from the TRANSFAC database, one could obtain a clustering based on motif co-occurrence alone, which does not yield any temporal correlation in the resulting expression profiles. On the other hand, clustering only the expression time series brings up some correlated promoter structure when using switch-times, rather than overall expression. However, the approach is most informative when both the motif co-occurrence and time series information are combined, in that we are able to identify strong correlations in promoter structure together with temporal separation of profiles between clusters. We also wish to incorporate informative prior information from the study by Narsai et al., pooling information over genes and experiments via Bayesian hierarchical modelling. The Narsai et al. estimate has an approximate range in half-life of 1.5\u20132.25 h, and our posterior estimates are broadly in a similar range. The estimated joint distribution summarizes the variability of the three experiments and provides a theoretically rigorous summary statistic of the degradation rate (which cannot be achieved by averaging over the independent results). Estimates from the individual models show a range of estimates from 1.3 to >2 h. Results for E2 and E3 are more variable, probably because they cover shorter timescales of 17.5 and 6 h, whereas E1, covering two circadian cycles, provides more precise estimates despite a smaller sampling frequency. An interesting observation is the difference between the E1 estimate and the E2, E3 and Narsai et al. estimates. There are a number of potential reasons for the difference, given the experiments were performed over different time intervals and in different laboratories. However, it may also be related to the light conditions, as E1 is the only experiment incorporating two 8 h dark periods.
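The half-life figures quoted above relate to degradation rates through first-order decay, t_half = ln(2)/delta; a two-line converter (ours, for illustration):

```python
from math import log

def half_life(delta):
    """Half-life (h) of an mRNA decaying at first-order rate delta (1/h)."""
    return log(2) / delta

def degradation_rate(t_half):
    """Inverse conversion: degradation rate (1/h) from half-life (h)."""
    return log(2) / t_half
```

On this conversion, the quoted 1.5-2.25 h half-life range corresponds to degradation rates of roughly 0.31-0.46 per hour.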
Light is a key driver in the A.thaliana circadian clock, and a recent study has suggested a light-specific degradation rate for CCA1, a core partner of LHY. For further comparison with the Narsai et al. results, genes were grouped into five broad mRNA stability groups based on half-life, as used by Narsai et al. However, despite this overlap, there is also considerable variability in degradation rates between the experiments, which may be natural variability or due to the experiments being carried out under different conditions. The aim of this article is to present a novel approach for identifying timing of transcriptional activity from time series mRNA expression data. The model introduced here consists of a piecewise linear simple ODE model of mRNA dynamics, which can be fitted efficiently with a RJMCMC sampler to estimate gene-specific parameters, i.e. mRNA stability and the number and times of switches in transcriptional activity. Estimation and performance of the algorithm is investigated for synthetic data of varying sampling frequencies and noise levels in a simulation study. With the example of time series microarray data from 200 circadian genes, further directions are explored exploiting different aspects of the model output, namely using the timing of the switches as a basis for clustering, which, when combined with promoter motif data, seems to identify more significant groups of motifs than simple profile clustering with promoter motif data, potentially implying a stronger correlation with regulatory mechanisms. We also explore the potential for the estimation of mRNA degradation rates. Usually, degradation rate studies involve treatment with a transcriptional inhibitor, such as actinomycin D, or a translational inhibitor, such as cycloheximide. It is not clear whether such inhibition is ever achieved fully and whether such treatments have undesired side-effects on degradation; they may, therefore, impact on estimated rates in unpredictable ways.
The model introduced in this study has several advantages over a transcription inhibition study. The primary advantage is that a specific experiment does not have to be designed and performed, often at great cost in time and resources, to produce a suitable dataset for degradation estimation, effectively allowing recycling of existing datasets, further increasing their potential scientific value. As only free-running mRNA expression dynamics are required, potential side-effects introduced by using a chemical inhibitor can be avoided. We demonstrate how to pool data from several experiments in a theoretically rigorous way with a Bayesian hierarchical model. Degradation estimates can easily be obtained for suitably resolved time series and can be compared between different experimental conditions. As the number of large high-resolution gene expression time series datasets publicly available is likely to increase with the development of cheaper and faster high-throughput technologies, new methods are required to analyse these data. The model proposed here is mechanistic, yet flexible and rich enough to capture a wide range of expression dynamics observed in mRNA time series data, from steady-state behaviour to oscillatory expression. At the same time, it is simple enough to be estimated with feasible computational time for thousands of genes. Using a mechanistic model to identify transcriptional switch points is likely to strongly contribute to efforts in elucidating and understanding regulatory interactions within transcriptional networks."}
+{"text": "Available DNA microarray time series that record gene expression along the developmental stages of multicellular eukaryotes, or in unicellular organisms subject to external perturbations such as stress and diauxie, are analyzed. By pairwise comparison of the gene expression profiles on the basis of a translation-invariant and scale-invariant distance measure corresponding to least-rectangle regression, it is shown that peaks in the average distance values are noticeable and are localized around specific time points. These points systematically coincide with the transition points between developmental phases or just follow the external perturbations. This approach can thus be used to identify automatically, from microarray time series alone, the presence of external perturbations or the succession of developmental stages in arbitrary cell systems. Moreover, our results show that there is a striking similarity between the gene expression responses to these two kinds of events. In contrast, the cell cycle does not involve a perturbation-like phase, but rather continuous gene expression remodeling. Similar analyses were conducted using three other standard distance measures, showing that the one we introduced was superior. Based on these findings, we set up an adapted clustering method that uses this distance measure and classifies the genes on the basis of their expression profiles within each developmental stage or between perturbation phases. In higher eukaryotes, the life span is separated into discrete developmental phases that start from the embryonic phase and end with the adult phase, and are in some organisms separated by other stages such as larval and pupal stages. On the other hand, the gene expression levels of an organism evolve with time and this time evolution can be inferred from appropriate DNA microarray time series.
The question we ask here is: can we infer the limits of the developmental phases from the gene expression profiles alone; in other words, is there a sudden change in behavior that is discernible in the profiles? Furthermore, both unicellular and multicellular organisms may be subject to external perturbations, which trigger a specific gene expression response. Abrupt temperature changes, oxidative stress or the addition of particular molecules are examples of such perturbations. A change in the amount of nutrients is another example. Bacteria for instance are usually able to grow on different kinds of sugars, but need to exhaust their preferred sugar before using the others, in a phenomenon called diauxie. The second question we ask here is whether we can also infer solely from the gene expression profiles the exact time point where the cells are subject to such external perturbations. The corollary question is whether this response appears to be different from that for successive developmental stages. The possibility of detecting the limits of the developmental stages of higher eukaryotes from the gene expression profiles is analyzed here on the basis of model organisms for which long enough microarray time series are available, i.e. sea squirt, vinegar fly, silkworm and mouse. The detection of external perturbations is performed on several E. coli DNA time series subject to heat, cold and oxidative stress and to glucose-lactose diauxie. The approach is simple: the shapes of the gene expression profiles are compared over a few successive time points, and regions of large changes are identified as regions where developmental stage modifications or external perturbations occur. This approach leads us to design an appropriate clustering procedure, which consists of dividing profiles into subprofiles at the time points where sudden changes in the expression levels occur, and to group genes in the same class when they have similar subprofiles.
DNA microarray time series yield estimates of the concentrations of all or a subset of the RNAs present in a given cell sample at N different time points ti. These RNAs, labeled by \u03bc, may be mRNAs or miRNAs. Their concentrations are estimated by converting them into cRNAs or cDNAs, labeling these by fluorophores and measuring the fluorescence intensities. In what follows, \u03bc will refer indistinguishably to the RNA or the gene from which it is transcribed. DNA microarray time series that monitor the different developmental stages of multicellular eukaryotes and possess a sufficient number of time points per stage are available for the vinegar fly Drosophila melanogaster, the urochordate Ciona intestinalis, the silkworm Bombyx mori and the mouse Mus musculus. The Drosophila melanogaster time series, the Ciona intestinalis time series and two oligonucleotide-based time series of the mouse Mus musculus were considered. The silkworm Bombyx mori undergoes four distinctive main developmental stages, defined as embryo, larva, pupa, and adult moth, which are monitored by a DNA microarray series of 41\u201342 time points. Note that in several of the above listed series the cell samples were taken indistinguishably from any part of the organism and thus represent an average of the gene expression levels in the different tissues. In these cases, the measurements thus mix the dependencies of the expression levels on the organism's developmental stage and on the cell's host tissue. DNA microarray time series that monitor the response of gene expression levels upon perturbations have been considered for E.
coli. A first kind of external perturbation is glucose\u2013lactose diauxie, which is monitored in Escherichia coli through a whole-genome DNA array time series monitoring the expression profiles of 4,400 genes. Other kinds of environmental fluctuations, in particular cold, heat and oxidative stress, were also studied by DNA microarray time series in Escherichia coli. The gene expression levels along the cell cycle have been monitored in the yeast Saccharomyces cerevisiae by three DNA microarray time series, in which the cells were synchronized by three independent methods: \u03b1-factor arrest, elutriation, and arrest of a cdc15 temperature-sensitive mutant. These series profile more than 6,000 genes. The hypothesis we test here is that the limits of the developmental stages of higher eukaryotes appear in the gene expression profiles as regions where the expression levels undergo some kind of change. Similarly, the expression levels are also expected to undergo modifications in response to stress or other external perturbations. The kind of change that is expected to occur in such particular regions is not obvious a priori. Expression levels generally vary over time (except in stationary phases), often even in the absence of perturbations of any kind. We therefore do not search for changes in the expression levels of each gene individually. Rather, we choose to compare the profiles of the different genes, and detect time intervals where the variety of profiles is larger than on the average. Such a phenomenon could indeed be indicative of an uncoordinated response of the expression patterns to some general perturbation. To detect such a response, an appropriate distance measure between segments of gene profiles must be defined. An important point is that this measure must be insensitive to the sampling frequency of time points. Indeed, this frequency depends on the experimental setup and is generally different according to the developmental stages. Its effect must therefore be factored out.
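The approach described in words here, comparing gene subprofiles over a few successive time points and looking for peaks in their average pairwise distance, can be illustrated as follows. The paper's least-rectangle distance (eqs 3-6) is not reproduced, so a plain Euclidean segment distance stands in for it:

```python
from itertools import combinations

def segment_distance(a, b):
    """Stand-in distance between two equal-length profile segments
    (Euclidean; the paper's least-rectangle measure would go here)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def average_segment_distance(profiles, n):
    """For each window of n consecutive time points, average the
    pairwise distances between all gene subprofiles in that window.
    Peaks in the returned series flag candidate stage transitions or
    perturbation responses."""
    T = len(profiles[0])
    out = []
    for start in range(T - n + 1):
        segs = [p[start:start + n] for p in profiles]
        dists = [segment_distance(a, b) for a, b in combinations(segs, 2)]
        out.append(sum(dists) / len(dists))
    return out

# toy data: an uncoordinated jump at time index 3 raises windows 1-3
profiles = [[0, 0, 0, 5, 0, 0],
            [0, 0, 0, -5, 0, 0],
            [0, 0, 0, 0, 0, 0]]
peaks = average_segment_distance(profiles, 3)
```

Windows containing the divergent time point score highest, which is the signal the method reads as a stage transition or perturbation.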
We test here four different distances to measure the similarity between gene profiles, which are all independent of the sampling frequency. They are described below. One of them is the Euclidean distance ED between two regions of the profiles of genes i and j. To illustrate the clustering method, we apply it to the embryonic, larval and pupal stages of one of the model organisms. The inter-cluster values of the average distance are significantly higher than the corresponding intra-cluster values, which indicates the reliability of the clustering. Note that the smaller values for the larval stage are due to the fact that the profiles in this stage exhibit much smaller variations. This classification leads us to describe the expression profile of each gene as a succession of representative subprofiles, one for each development stage or period between successive perturbation phases. Each representative subprofile represents a given cluster and corresponds to the average of all members of the cluster. The least-rectangle distance between expression profile segments, which is given by eqs (3\u20136) and is translation-invariant and scale-invariant with scaling dimension 1/2, appears to be a relevant measure for detecting perturbation or developmental phases from expression profiles. It allows the identification, on the basis of raw expression data alone, of time points where important phenomena take place, which lead to drastic rewiring of the gene expression network. Note that these expression data may involve all the genes in a system or a relevant subset, correspond to mRNA or miRNA, and come from cells of a specific tissue or a mixture of different cell types. Other distance measures have been tested, but turned out to be unable to detect developmental and perturbation stages. In particular, the Pearson distance and a variant of the least-rectangle distance with scaling dimension zero yielded basically constant values of the average segment distance.
Note that measures that even implicitly depend on the sampling frequency can sometimes appear to be very efficient for detecting the limits of development stages. However, they have to be rejected, as they just demonstrate that the sampling frequency is often different in the different development stages. Another approach that we have tested consists of approximating the profiles by P polynomials of a fixed degree d, where P is equal to the number of stages and d is chosen between 1 and 5. Requiring that they do not overlap, cover the complete profile and present a minimal deviation from the profile identifies the optimal connection points between the P polynomials. These points are then compared with the changes in development stages or with the perturbation points. All these methods sometimes give positive results for certain systems and for certain stages, but never systematically. We tested yet other approaches, but without success. These include the estimation of parameters that measure the changes in each gene profile separately, such as the maximum difference in expression level at neighboring time points, and the sum of the Euclidean distances between the expression levels at successive time points in an interval [ti, tj] divided by the distance between the expression levels at ti and tj. Our method, based on the average segment distance RD, appears thus as the only one suited to identify automatically, without prior knowledge, the time points where abrupt transcription network rewiring occurs, which corresponds to the passage to the next developmental stage or to a strong external perturbation. Note that for our method to be applicable, a sufficient number of time points must be available for each phase. Note also that the optimal length n of the profile segments that allows the best detection of the transitions between the phases depends on the number of time points. When many time points are available for each phase, the peaks appear more clearly for values of n equal to 5 or 6; when fewer time points are available per phase, smaller segment lengths are needed.
In such cases, n-values of 3 or 4 must be considered. An interesting result is that we cannot distinguish, from the DNA microarray expression data, the response to an external perturbation from the succession of developmental stages. The only difference that can be noted is that the distance peak follows the perturbation whereas it usually occurs at the same time as, or slightly before, the changes in developmental stage. These results are consistent with the idea that some (unknown) external or internal perturbation affects the gene expression network at the end of each developmental stage, and triggers it towards the next stage. It has been argued that a cell system approaches a fixed point, a limit cycle or another type of attractor at specific moments of its life."}
+{"text": "The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture model hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. In the last decade, the development of a variety of techniques, such as microarray-based techniques, has enabled instant measurement of the expression of up to thousands of genes. 
The use of gene expression profiling allows clinical diagnosis to be made on a molecular level, thereby facilitating the choice of treatment based on the patients' genetic traits. In the past few years, gene expression experiments have been limited to static analysis. Although static analyses are appropriate for many cases, they are less appropriate for longer-term follow-up. Over the past several years, a few methods for classifying time series expression data have been reported, including an exhaustive search strategy to identify genes for a Bayesian classifier. Motivated by the limitations of current classification methods for time series data and the good performance of network integration approaches in static gene expression, our aim in this study was to classify time series gene expression via integration of biological networks. To our knowledge, this study is the first attempt at integration of biological networks for classification of time series data. We emphasize that integration approaches in static analysis cannot be directly applied to time series data. In order to reduce the effect of the measurement noise, we have introduced a hybrid model of hidden Markov model/Gaussian mixture model (HMM/GMM) into our approach, converting the original gene expression value, which contains noise, into a discrete gene state that represents the qualitative assessment of the gene expression level. The HMM/GMM hybrid model also takes into consideration time-dependence, which is a special property of time series data. Instead of using single gene markers that may be function-related, we regard genes that show a similar expression pattern as a whole (bicluster), with the hypothesis that these genes may share a common biological function. Because genes sharing an expression pattern are likely, but not certain, to be function-related, we integrated biological networks into our approach, weighting a bicluster according to its connected genes in the network. 
The more closely connected the genes, the more likely the genes share a common biological function, and the higher the weight. Every sample (patient) is denoted as a point in the bicluster space. Similarity between samples (distance between points in the space) can be calculated as the weighted sum of bicluster similarity. We classified a sample based on its similarity to other samples. Here, we investigated the classification of Multiple Sclerosis (MS) patients with respect to their response to interferon beta (IFN-β) treatment. MS is one of the most prevalent autoimmune disorders, and treatment with IFN-β is widely applied to reduce the intensity and frequency of symptoms. Nevertheless, almost 50% of patients do not respond to the therapy (data are available at http://home.ustc.edu.cn/~lwqian/PPI-SVM-KNN/PPI.html). The prediction process primarily consists of 4 steps. We tested our approach on 2 different sets of time series microarray expression data, from Baranzini and Goertsches. The protein-protein interaction network was obtained from a public human PPI database (HPRD). The HMM/GMM hybrid model is a classical method in speech recognition. A standard continuous HMM is characterized by the following elements: (1) N, the number of hidden states; (2) A = {aij}, the state transition probability distribution, where aij is the transition probability from state i to j; (3) B = {bj(x)}, the emission probability distribution, where bj(x) is the emission probability of observing x in state j; and (4) π = {πi}, the initial state distribution, where πi is the start point probability of state i. An HMM is often simply notated as λ = (A, B, π). In the HMM/GMM hybrid, the emission distribution bj(x) is a mixture of Gaussians with mixture weights cji, where gji is the component Gaussian function with expected value μji and standard deviation σji. 
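The elements of λ = (A, B, π) can be made concrete with a short sketch. This is an illustrative forward-algorithm computation with Gaussian-mixture emissions, not the paper's trained model; all parameter values below are invented for the example, and the EM training step is not reproduced.

```python
import numpy as np

# Illustrative continuous HMM lambda = (A, B, pi) with Gaussian-mixture
# emissions.  All numbers are invented, not taken from the paper.
N = 3                                      # hidden expression-level states
pi = np.array([0.5, 0.3, 0.2])             # initial state distribution
A = np.array([[0.7, 0.2, 0.1],             # a_ij: transition state i -> j
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
c     = np.array([[0.6, 0.4], [0.5, 0.5], [0.7, 0.3]])    # mixture weights c_ji
mu    = np.array([[-1.0, -0.5], [0.0, 0.2], [0.8, 1.2]])  # component means mu_ji
sigma = np.array([[0.3, 0.5], [0.4, 0.4], [0.3, 0.6]])    # component std devs

def b(j, x):
    """Emission probability b_j(x): a mixture of Gaussians for state j."""
    g = np.exp(-0.5 * ((x - mu[j]) / sigma[j]) ** 2) / (sigma[j] * np.sqrt(2 * np.pi))
    return float(np.dot(c[j], g))

def forward_loglik(obs):
    """Log-likelihood of an observation sequence under lambda,
    computed with the scaled forward algorithm."""
    alpha = pi * np.array([b(j, obs[0]) for j in range(N)])
    s = alpha.sum(); alpha = alpha / s; loglik = np.log(s)
    for x in obs[1:]:
        alpha = (alpha @ A) * np.array([b(j, x) for j in range(N)])
        s = alpha.sum(); alpha = alpha / s; loglik += np.log(s)
    return float(loglik)
```

In the paper's pipeline, the trained model is used to map each noisy expression value to a discrete state; here the forward pass simply illustrates how A, the mixture emissions and π interact.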
The HMM/GMM hybrid model is a specific form of the HMM, in which the emission probability distribution is modeled by a mixture of Gaussian functions, where μji and σji denote the mean and standard deviation of expression values in the jth bin of gene i, respectively. When the initialization process is complete, the model parameters are trained by the EM algorithm. We assume that the expression values of each gene are generated by a Markov process, an assumption widely used by many existing methods.

Algorithm 1: QL_Biclustering
Input: ml, mo, gene state matrix
Output: biclusters
1: for each gene state sequence do
2:  Construct all corresponding suffix strings and add them to SA.
3: end for
4: Sort all suffix strings in SA by the MSD (most significant digit) radix sort method.
5: for every 2 adjacent suffix strings do
6:  Compute the longest common prefix and store its length in LCP_Length.
7: end for
8: for each distinct value i in LCP_Length, from smallest to largest, do
9:  for each occurrence j of i in LCP_Length do
10:   pos = position of the jth occurrence of i in LCP_Length
11:   if i < ml then continue
12:   l = the smallest index such that the suffix strings l−1 … pos share a common prefix of length i
13:   r = the largest index such that the suffix strings pos … r share a common prefix of length i
14:   Mark pos as processed: Blcp[pos] = 1
15:   if (r − l + 1) < mo then continue
16:   if l > 1 and LCP_Length[l − 1] == i then continue
17:   Report bicluster: gene set G = Gene[l … r]; J (time points T and corresponding expression states S) = the first i symbols of the shared prefix
18:  end for
19: end for

Previous approaches to classifying time series expression data have been limited to analysis of genome-wide expression profiles. Gene markers selected by these approaches may be function-related and hence contain redundant information, leading to degradation of the overall classification performance. Accordingly, it is more effective to regard function-related genes as a whole. We, therefore, extracted biclusters from the gene state sequences of each patient. 
Biological processes start and finish in a contiguous, but unknown, period, leading to increased activity of sets of genes that can be identified as biclusters with continuous time points. Several authors have previously pointed out the importance of biclusters and their relevance in the identification of biological processes. Biclustering algorithms identify groups of genes that show similar expression patterns under a specific subset of the experimental conditions. Here, a new biclustering algorithm, named QL_Biclustering, is proposed to extract biclusters from the gene state matrix obtained in the previous step. In order to differentiate a specific time point, a transformation is introduced, appending a time point to each gene state. For example, given the gene state sequence of a certain gene at each of the first 3 time points, S = {3, 2, 1}, the transformed state sequence is J = {13, 22, 31}. The QL_Biclustering algorithm (Algorithm 1) is based on suffix strings and the longest common prefix. The input is the gene state matrix, ml (the minimum number of continuous time points), and mo (the minimum number of genes). The output is all biclusters satisfying the user's requirements. The algorithm traverses all the values in LCP_Length from the smallest to the largest. For each occurrence of each distinct value in LCP_Length, QL_Biclustering proceeds as follows. At step 10, the jth occurrence position of value i in LCP_Length is pinpointed and stored in pos. The number of time points of the current bicluster is ensured to be not less than ml at step 11. At steps 12–13, the algorithm finds the interval whose suffix strings share a common prefix of length i. Any bicluster with fewer than mo genes is ignored at step 15. Step 16 verifies whether the candidate bicluster can be extended to the left. 
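The output of QL_Biclustering can be illustrated with a brute-force equivalent: enumerate every contiguous time window, group genes by their state pattern over that window, and keep only maximal groups. This is a quadratic sketch of what the suffix-sorting algorithm computes in linear time, not the algorithm itself; the gene names and states are invented.

```python
from collections import defaultdict

def biclusters(states, ml, mo):
    """Enumerate maximal biclusters: groups of at least mo genes sharing an
    identical state pattern over at least ml contiguous time points.
    Brute-force equivalent of QL_Biclustering's output (quadratic, for
    illustration only)."""
    n_time = len(next(iter(states.values())))
    found = []
    for start in range(n_time):
        for length in range(ml, n_time - start + 1):
            groups = defaultdict(list)
            for gene, seq in states.items():
                groups[tuple(seq[start:start + length])].append(gene)
            for pattern, genes in groups.items():
                if len(genes) >= mo:
                    found.append((start, length, tuple(sorted(genes)), pattern))
    # keep only biclusters not contained in a strictly larger one
    maximal = []
    for b in found:
        contained = any(
            b is not o
            and set(b[2]) <= set(o[2])
            and o[0] <= b[0] and b[0] + b[1] <= o[0] + o[1]
            and (b[1] < o[1] or set(b[2]) < set(o[2]))
            for o in found)
        if not contained:
            maximal.append(b)
    return maximal
```

For example, with states {"g1": [3, 2, 1, 1], "g2": [3, 2, 1, 2], "g3": [1, 1, 2, 2]}, ml = 2 and mo = 2, the only maximal bicluster is genes g1 and g2 sharing pattern (3, 2, 1) over time points 0–2; the shorter shared windows are discarded as non-maximal, mirroring step 16 of Algorithm 1.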
If it can, the candidate bicluster is discarded because it is not a maximal bicluster. The QL_Biclustering algorithm is simple but efficient in terms of both time and space, and is linear in the size of the gene state matrix. In the previous step, we extracted biclusters in which genes showed a similar expression pattern. Genes in a bicluster obtained according to their gene expression values are supposed, but not certain, to be function-related. Here, we integrate the PPI network to weight each bicluster. It is known that the distance between genes that are regulated by the same transcription factors in a PPI network is 2, because they have common upstream factors. The distance between genes that belong to a protein complex in a PPI network is 1, as genes represented in a complex are adjacent. For each bicluster B, its preference (PPIScore) is scored as a function of the number of network interactions between each gene i in B and the other genes in B. The Jaccard Index J(Bi, Bj) between 2 biclusters Bi and Bj is J(Bi, Bj) = |Bi∩Bj| / |Bi∪Bj|, where |Bi∩Bj| = |Gi∩Gj| × |Ji∩Jj|, |Bi∪Bj| = |Bi| + |Bj| − |Bi∩Bj|, |Bi| = |Gi| × |Ji|, and |Bj| = |Gj| × |Jj|. Given 2 patients, P1 and P2, we first normalize the PPIScore of all biclusters in P1 and P2, respectively. Then, the similarity between the 2 patients (PPISim) is calculated as the normalized, PPIScore-weighted sum of the Jaccard indices between the biclusters of P1 and those of P2. 
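The Jaccard index between biclusters, and one plausible normalized form of the patient similarity, can be sketched as follows. The exact PPISim formula from the paper is not reproduced here; the cosine-style normalization below is an assumption, chosen so that self-similarity is 1 and the measure is symmetric, consistent with the properties stated in the text.

```python
import math

def jaccard(b1, b2):
    """Jaccard index between biclusters b = (genes, timepoints):
    |B1 n B2| = |G1 n G2| * |J1 n J2|, |B| = |G| * |J|."""
    (G1, J1), (G2, J2) = b1, b2
    inter = len(set(G1) & set(G2)) * len(set(J1) & set(J2))
    union = len(G1) * len(J1) + len(G2) * len(J2) - inter
    return inter / union if union else 0.0

def ppisim(p1, p2):
    """Hypothetical patient similarity: PPIScore-weighted sum of pairwise
    bicluster Jaccard indices, normalized cosine-style (an assumption, not
    the paper's exact formula).  A patient is a list of
    (normalized_ppiscore, bicluster) pairs."""
    def k(a, b):
        return sum(wa * wb * jaccard(ba, bb) for wa, ba in a for wb, bb in b)
    return k(p1, p2) / math.sqrt(k(p1, p1) * k(p2, p2))
```

With this normalization, ppisim(P, P) = 1 and ppisim(P1, P2) = ppisim(P2, P1) hold by construction.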
Here, B1i and B2j denote the ith bicluster in P1 and the jth bicluster in P2, respectively. The above formula has 2 properties, which are crucial to the following classifier design: (1) PPISim(P, P) = 1; (2) PPISim(P1, P2) = PPISim(P2, P1). An SVM is then combined with a KNN classifier as follows.

Algorithm 2: PPI-SVM-KNN
Input: C, K
Output: predictClass
for each test patient P do
 Calculate the similarity between all patients in the training set and the test patient P, and rank those training patients by the similarity measure; add the top K patients to kPatients.
 if the label of all the patients in kPatients is class_i then
  predictClass = class_i
 else
  Compute the similarity matrix PPISim among the patients in kPatients.
  Train a linear SVM with slack variables, using PPISim as the kernel, and predict the label of the test patient P.
 end if
end for

The model of the linear SVM with slack variables is min (1/2)‖w‖² + C Σi ξi, subject to yi(w·xi + b) ≥ 1 − ξi and ξi ≥ 0, where X is the matrix of training samples, y is the vector of corresponding labels, and n is the number of samples. PPISim is calculated as described above and used as the kernel. The label of a new sample P can be predicted as the sign of Σ{i: αi ≠ 0} αi yi PPISim(xi, P) + b, where the sum runs over the support vectors (αi ≠ 0). The parameter C can be viewed as a way to control “softness”: it trades off between maximizing the margin and fitting the training data (minimizing the error). The larger the value of C, the less likely the classifier is to misclassify samples in the training set. The parameter K specifies the classifier's dependence on the choice of the number of neighbors. When K is small, the algorithm behaves like a straightforward KNN classifier. When K is large enough, our method is purely an SVM. A commonly used strategy to evaluate classification performance is the k-fold cross validation (CV) scheme. 
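The KNN stage of Algorithm 2 can be sketched as below. To keep the sketch dependency-free, the SVM fallback is replaced by a similarity-weighted vote; the paper trains a linear SVM on the PPISim kernel at that point, which this stand-in does not reproduce.

```python
def ppi_svm_knn_predict(sim_to_train, labels, K):
    """Sketch of the KNN stage of PPI-SVM-KNN.  sim_to_train[i] is
    PPISim(test, train_i).  If the top-K neighbours agree, their label is
    returned directly (as in Algorithm 2); otherwise a similarity-weighted
    vote is used as a stand-in for the paper's PPISim-kernel SVM."""
    ranked = sorted(range(len(labels)), key=lambda i: -sim_to_train[i])[:K]
    top_labels = {labels[i] for i in ranked}
    if len(top_labels) == 1:          # unanimous neighbourhood
        return top_labels.pop()
    scores = {}                        # fallback: weighted vote
    for i in ranked:
        scores[labels[i]] = scores.get(labels[i], 0.0) + sim_to_train[i]
    return max(scores, key=scores.get)
```

The unanimity shortcut mirrors the `if the label of all the patients in kPatients is class_i` branch; only disagreeing neighbourhoods reach the second stage.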
In this work, we use 10 repetitions of 4-fold CV, which was also used in previous approaches. In order to reduce the effect of the measurement noise, we introduced an HMM/GMM hybrid model into our approach, converting the original gene expression value, which contains noise, into a discrete gene state that represents the qualitative assessment of gene expression level. The HMM/GMM hybrid model also takes into consideration time-dependence, which is a special property of time series data. We compared the HMM/GMM hybrid model with many other discretization methods, including Average, Mid-Range, Max-X%Max, Top X%, and EFP. Instead of using single gene markers that may be function-related, leading to degradation of the classification result, the proposed approach regards function-related genes as a whole through biclustering. The biclustering procedure identifies groups of genes that show similar expression patterns over a contiguous segment of time points, which takes the time course information into consideration as well. On average, the number of biclusters inferred from the Baranzini dataset is 24 per patient; the number of biclusters inferred from the Goertsches dataset is 64 per patient. We randomly selected bicluster examples from each of the two datasets, which are shown in the figures. Many works on static gene expression have demonstrated the benefit of integrating biological networks. We next investigated the contribution of integrating a PPI network in our method. In the proposed method, we regarded genes that had a similar expression pattern (bicluster) as a whole. Because genes showing a similar expression pattern are likely, but not certain, to be function-related, we integrated a PPI network into our approach, weighting a bicluster according to its genes' connections in the network. The more closely connected the genes, the more likely the genes share a common biological function, and the higher the weight. We compared the results of integrating a PPI network (PPI-SVM-KNN) to those obtained when no PPI network was integrated (SVM-KNN). 
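The evaluation scheme above (10 repetitions of 4-fold CV) amounts to generating repeated random index partitions. A minimal stdlib sketch, with the seed and helper name being our own additions:

```python
import random

def repeated_kfold(n, k=4, repeats=10, seed=0):
    """Index splits for repeated k-fold cross-validation, matching the
    10 x 4-fold scheme described in the text.  Yields
    (train_indices, test_indices) pairs; seed is illustrative."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)                       # fresh partition per repeat
        folds = [idx[i::k] for i in range(k)]  # k near-equal folds
        for f in range(k):
            test = folds[f]
            train = [i for g in range(k) if g != f for i in folds[g]]
            yield train, test
```

Each repetition reshuffles the patients, so every patient appears in the test fold exactly once per repetition and 10 times overall.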
As the results show, integration improved performance. PPI-SVM-KNN employs the HMM/GMM hybrid model to explore the time-dependence property of the data and integrates PPI networks, favoring those biclusters that are more likely to share a common biological function, resulting in clear advantages in the classification of time series gene expression. We next demonstrated its advantages with respect to the following criteria: Sensitivity, Accuracy, Precision, Recall and F-measure. We first checked the influence of parameters C and K on classification performance. The parameter C of PPI-SVM-KNN trades off between maximizing the margin and fitting the training data (minimizing the error). The larger the value of C, the less likely the classifier is to misclassify samples in the training set. The parameter K specifies the classifier's dependence on the choice of the number of neighbors. When K is small, the algorithm behaves like a straightforward KNN classifier. When K is large enough, our method is purely an SVM. Because the prediction accuracy per se is insufficient to comprehensively measure the quality of a classifier, other commonly used performance criteria, such as Recall, Precision, and F-measure, were also evaluated. Considering that accurate prediction from early-stage data is of great significance in clinical diagnosis, we evaluated the influence of the number of measurements on classification performance and compared it with other methods. We repeated our classification experiment considering only the first n measurements. As expected, classification performance increases as the number of measurements grows. In comparison with other methods, our approach achieved better performance on early-stage data. Personalized medicine, the use of marker-assisted diagnosis and targeted therapies derived from an individual's molecular profile, is evolving in the pharmaceutical industry and medical practice, and it is likely to affect many aspects of society. Here, we developed a novel method to predict MS patients' response to IFN-β, a step toward personalized medicine. 
The proposed approach is based on the idea of integrating two hierarchies of the life system: gene and protein. For this specific MS prediction problem, we did not design a feature selection component; the 70 genes were selected by Baranzini. In this paper, we presented a sound and reliable methodology for the classification of clinical time series data, based on the novel idea of integrating biological networks. Admittedly, there are a few points that we would like to improve in the near future. For example, during the process of data discretization (the HMM/GMM hybrid model), time course information of the data is, more or less, lost. However, our method outperformed prior approaches that do not consider biological networks with regard to various performance criteria. Compared with other approaches, our method achieved more stability in prediction across 2 different datasets. Moreover, the proposed method achieved high accordance in prognosis prediction across independent time series datasets. Finally, we found that our approach achieved better prediction performance on early-stage data, implying that our method has great potential in clinical prediction. All the results on the 2 independent datasets indicate that integration of the network into the classification of time series significantly improves prediction performance, similar to what recent research groups have demonstrated for static gene expression.

Figure S1. The selected binary protein-protein interaction network. Each node represents a gene. Each edge represents a protein-protein interaction, i.e. there exists an interaction between the two proteins encoded by the two genes the edge connects. (PDF)

Figure S2. Biclustering process. Biclusters were extracted from (A) the gene state matrix. In order to differentiate a specific time point, a transformation is introduced, appending a time point to each gene state (B). 
Biclusters extracted from the gene state matrix are shown in (C). (PDF)

Figure S3. Functional enrichment analysis of genes in each bicluster. The function of genes in each bicluster was analyzed. (A) is the result for the Baranzini dataset; (B) is the result for the Goertsches dataset. In (A) and (B), the horizontal axis represents the range of function counts; the left vertical axis represents the number of biclusters; the right vertical axis represents the cumulative frequency. The height of each bar represents the number of biclusters. The x-axis label of each bar represents function counts (e.g. the leftmost bar of (A) indicates that nearly 30% of biclusters are associated with 1–50 functions). The line represents the cumulative frequency of the corresponding bar. (PDF)

Table S1. Precision, Recall and F-measure of different discretization methods on the Baranzini and Goertsches datasets: average (AVG) and standard deviation (SD). (PDF)

Table S2. Patient similarity of different discretization methods on the Baranzini and Goertsches datasets. (PDF)

Table S3. Function enrichment analysis of the bicluster examples on the Baranzini and Goertsches datasets. 
Gene functions with p-value < 0.05 are selected here. (PDF)

Table S4. Precision, Recall and F-measure for integration versus non-integration of the PPI network on the Baranzini and Goertsches datasets: average (AVG) and standard deviation (SD). (PDF)

Table S5. Precision, Recall and F-measure of PPI-SVM-KNN as K varies from 3 to 9: average (AVG) and standard deviation (SD). (PDF)

Table S6. Precision, Recall and F-measure of PPI-SVM-KNN as C varies from 0.1 to 1000: average (AVG) and standard deviation (SD). (PDF)

Table S7. Precision, Recall and F-measure of the distinct approaches as the number of measurements changes: average (AVG) and standard deviation (SD). (PDF)

Supplemental Material S1. 1. Complexity analysis of the proposed biclustering algorithm and its comparison with CCC-Biclustering; 2. Function enrichment analysis of biclusters; 3. Kernel validation. (PDF)"}
+{"text": "Timing common and specific modulators of disease progression is crucial for treatment, but the understanding of the underlying complex system of interactions is limited. While attempts at elucidating this experimentally have produced enormous amounts of phenotypic data, tools that are able to visualize and analyze them are scarce, and the insight obtained from the data is often unsatisfactory. Linking and visualizing processes from genes to phenotypes and back, in a temporal context, remains a challenge in systems biology. We introduce PhenoTimer, a 2D/3D visualization tool for the mapping of time-resolved phenotypic links in a genetic context. It uses a novel visualization approach for relations between morphological defects, pathways or diseases, to enable fast pattern discovery and hypothesis generation. We illustrate its capabilities of tracing dynamic motifs on cell cycle datasets that explore the phenotypic order of events upon perturbations of the system, transcriptional activity programs and their connection to disease. By using this tool we are able to fine-grain regulatory programs for individual time points of the cell cycle and better understand which patterns arise when these programs fail. We also illustrate a way to identify common mechanisms of misregulation in diseases and drug abuse. From the subcellular to the population level, dynamic phenomena resulting from a combination of genetic and environmental factors shape diversity in wide arrays of phenotypes. The connection between the genotype and the phenotype is of great interest to researchers, as it can give clues about healthy and perturbed states, eventually leading to the differential treatment of diseases. Placing this in a temporal context enables us to better understand developmental features and triggers of disease onset and progression. In recent years, high-throughput sequencing technology has delivered increasing amounts of data for this type of study. 
Alongside, time-focused visualization has been the object of different tools, like GATE and VistaClara. We introduce PhenoTimer, an open source tool for the visualization of time-driven phenotypic relationships in a genetic context. By using a novel combination of 2D/3D temporal projection displays and 2D network visualization, it enables the dynamic capturing of key points of biological processes. Temporal gene-phenotype connections can be analyzed in an interactive manner for link discovery and hypothesis generation. PhenoTimer is available for download as a standalone application from http://phenotimer.org/, along with the source code, the files used for the examples in this paper and other sample files for testing. The Java Runtime Environment version 1.6 or higher (http://www.java.com/) is needed to run the tool. Mac users should also install the JOGL libraries (http://opengl.j3d.org/). PhenoTimer was developed using Processing 1.5.1 (http://processing.org/), a Java-based environment with OpenGL integration. It runs on Mac OSX, Windows and some Linux environments. PhenoTimer is open source and freely available for academic use under the GNU GPL v3.0 license. PhenoTimer uses 2D and 3D temporal projections to track connections between different phenotypes. These connections underline common genetic factors through time. The purpose is to explore the genetic-phenotypic space from a different perspective: how are two phenotypes similar, how do they relate to each other and what common genetic mechanisms govern the two biological processes? It also looks at how phenotypic traits can evolve successively from previous traits and how networks come into play in this progression. The main novelty of this tool consists in visualizing connections between phenotypes in the form of arcs linking the respective phenotypes for every time point or for a time interval. 
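The arc encoding can be sketched geometrically: a semicircle between two phenotype lane positions whose peak height (3D) or width (2D) scales with the number of genes behind the connection. The function below is a hypothetical illustration of that mapping, not PhenoTimer code.

```python
import math

def arc_points(x1, x2, n_genes, unit_height=1.0, steps=20):
    """Hypothetical sketch of an arc between two phenotype lanes at x1 and
    x2.  The peak height is proportional to the number of genes involved
    in the connection, mirroring the encoding described in the text."""
    cx, r = (x1 + x2) / 2.0, abs(x2 - x1) / 2.0   # arc center and half-span
    h = n_genes * unit_height                      # peak height ~ gene count
    pts = []
    for s in range(steps + 1):
        t = math.pi * s / steps                    # sweep 0 .. pi
        pts.append((cx - r * math.cos(t), h * math.sin(t)))
    return pts
```

Feeding the resulting point list to any 2D canvas (Processing, matplotlib, SVG) draws the arc; directionality would be encoded separately as color, as the tool does.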
We use 3D, linear 2D and circular 2D projections to represent these arcs; heat map and line plot views are also available. The "connection", represented as an arc, can be defined to suit the particular biological question under investigation. For instance, a "connection" between two phenotypes can indicate that these phenotypes are the result of disrupting some genes involved in the same pathway or process, or it can highlight a transition between these two phenotypes for at least one genetic event. The color of the arcs codes for the directionality of the connection where needed (as in the case of transitions from one phenotype to the other), corresponding to the end phenotype. In case there is no directionality associated with the link, all connections can be set to a single color. The height (for 3D) or the width (for 2D) of the arc is proportional to the number of genes or gene-linked events for which that connection appears at that particular time point. The reason for offering three types of depiction for the phenotypic connections is that, depending on the size and content of the dataset, one or the other visual representation may prove more useful in detecting patterns in the data. The height of arcs in the 3D view is better distinguishable than the width in the 2D layout in the case of overlapping arcs, thus acting as a more efficient indicator of the number of genes involved in the relationship. However, 3D layouts have been shown to be misleading, with the major issues being occlusion and perspective distortion. Some of the design choices are similar to those described by others. To minimize clutter, we reorder the phenotypic lanes for optimal viewing of connections. 
To this purpose, we use an agglomerative hierarchical clustering algorithm to maximize the visibility of connections. Connections from time point t to time point t+x are shown, where x ∈ {0, …, n} is the time offset, n being the total number of time points. Thresholds can be set for phenotypes and the time offset, in order to filter the dataset for relationships of interest or at different time intervals. Protein–protein interaction, metabolic and other types of networks can be loaded from a file or from the STRING database and visualized. Additionally, right-clicking a gene/protein name within the network will provide links to several databases, such as UniProt and Ensembl. The tool is interactive, with zoom-in, zoom-out, pan and rotate capabilities. Filters for specific genes or phenotypes can be set. There are several color schemes available for use (from http://colorbrewer2.org/), including color-blind schemes and single color display. The input consists of a special-format space-delimited file, as described at http://phenotimer.org/tutorial.html. More details on the workflow and functionality of the tool can be found in the online tutorial. PhenoTimer performs best on Mac OSX and Windows. Compatibility issues of Processing in Linux may impair the performance of PhenoTimer in this environment. For reasons of CPU load and physical visualization limits, it is recommended not to visualize datasets that exceed the following dimensions: a few thousand genes × 50 phenotypes × 100 time points. The main memory usage limitation is the number of phenotypes. Visualization of large-scale datasets is crucial for the understanding of key regulatory factors and a better elucidation of biological processes. Time-resolved data adds an extra challenge that tools currently available to biologists hardly meet. In this section, we demonstrate how to use PhenoTimer on time-resolved multiple-phenotype data for quick pattern identification and generation of new hypotheses. One of the best examples of highly time-regulated processes is the cell cycle. 
Intensely studied, it still poses interesting questions for biologists, with implications for senescence processes and disease. We illustrate an investigation of phenotypic patterns throughout the cell cycle as they arise from perturbations of the system. The data come from a whole-genome siRNA knockdown study of genes essential for cell division, as described previously. We use PhenoTimer to visualize how the cell populations transition from one phenotype to the other upon knockdown of selected genes. We only represent transitions to the most prominent phenotypes at every time point. We apply thresholds to the phenotypic scores assigned to every suppression event, according to the values mentioned in the original study. We can easily observe patterns. Besides the common ones, one can also identify less common transition patterns: “mitotic delay” to “dynamic” at large time intervals, fairly uniformly distributed throughout the time course; “mitotic delay” to “large”, more frequent towards the end; “mitotic delay” to “grape”; or “grape” to “large”. By comparing these, we can quickly identify the more prevalent and the rarer phenotypes. We also get an overview of the timing in the cell cycle when a particular transition can occur. The rarer transitions occur at longer time intervals, indicating that they are more likely slow and final transitions, as opposed to the transitions to polylobed and apoptosis, which are more homogeneous throughout the time course and thus more frequent and faster. In this section, we show how PhenoTimer could facilitate the discovery of potential links between diseases. We investigated the impact of peak transcription events throughout the cell cycle on different types of cancer. For this, we used transcription profiles of 600 essential genes that periodically fire throughout the progression of the cell cycle. 
One such gene is VEGFC [ENSG00000150630], a growth factor active in angiogenesis and endothelial cell growth. The network it is involved in, derived from STRING, is also shown in the figure. VEGFC relates to all the other periodic genes enriched in at least one pathway disrupted in cancer. It mostly connects through genetic interactions or co-expression to the rest of the network and is commonly enriched in cancer pathways with its directly linked partners, E2F2 and NFκB1, both peaking after the G1 phase. It is possible that the disruption of the links with VEGFC upon malfunctioning of either one or the other of these two proteins plays a role as a tumor-triggering factor. This analysis indicates that the regulation of most cancers might involve very similar mechanisms for the replication of genetic material, but the errors of cell division leading to disease may be cancer type-specific. This example of mapping transcription events to pathways involved in cancer provides an idea of how PhenoTimer might be used for similar studies. While this model was rather naive, we are confident of the potential of this tool to provide new insights about common mechanisms of disease regulation and progression, given a more complex context. Drugs of abuse act on the brain reward system and employ similar mechanisms to generate addiction. The impact on human health makes this an intense topic of research. Elucidating the genes and pathways commonly affected by several drugs can help us better understand the downstream effects of drug intake, as well as identify potential side effects of drug combinations. We looked at transcriptome alterations in the mouse striatum upon acute administration of six addictive drugs: nicotine, ethanol, cocaine, methamphetamine, heroin and morphine. 
We used detailed time course profiles of gene expression, as described previously, for this analysis. From the set of 42 genes identified as drug-responsive in the paper, we looked at how the genes with transcription values in the lower and upper quartiles are commonly regulated by pairs of drugs. We term these genes relatively “lowly” or “highly expressed” within the group. The first observation is that mechanisms of action are very similar among drugs of abuse. For the genes whose expression is in the lower quartile, we notice an increase with time in the number of genes commonly affected by pairs of drugs. An inverse relation is observed for the upper-quartile genes, as there are more of these genes influenced by the same two drugs at the beginning compared to the end of the time course. This suggests that drugs of abuse impact many of the differentially expressed genes by lowering their expression rather than enhancing it. The gene networks allow us to identify the stable and the variable elements involved in the drug connections. At every time point, a connection between two genes means that their expression is influenced by the same drug(s). The thickness of the link corresponds to the number of common drug influences. The yellow nodes form the core gene network that stays constant throughout the time course. Among the lowly expressed genes, many core components were related to proliferation; the highly expressed genes, on the other hand, were predominantly involved in phosphorylation-mediated signalling and transcription events. The orange (lower quartile) and green (upper quartile) nodes are variable elements, i.e. the genes that appear and disappear in the network at different time points. These include major regulators of transcription and cell division, as well as genes involved in the hypoxia response. 
The varTnfrsf25, a member of the tumor necrosis factor receptor superfamily [ENSG00000215788], is uniquely downregulated only by ethanol and heroin after 8 hours. Heat map and line plot analysis for this gene shows that the downregulation is a very slight effect and would have been difficult to capture otherwise . The gene specific only for that pair of drugs at a time point is circled in the same color in the network below. For the genes in the lower quartile, we notice that gene therwise . The twoll death , while Sresponse . This suFosb for cocaine and methamphetamine (both psychostimulants), which share effects on Egr2 also with nicotine; and Polr3e for ethanol and methamphetamine. Morphine has the same effects on Sgk1 as ethanol or heroin after 4 hours.Other specific genes include Differences between drug effects are clearer if we look at the time evolution of drug and gene connections for stimulants and depressants , as depicted in The 42 true positive genes found in the study were classified in the paper into four subclasses according to their gene expression patterns: A , B1 (reward learning and drug dependence), B2 (drug dependence) and B3 (anti neurotoxic response) . We lookThe similarities identified in the action of different drugs reveal an uneven pattern of regulation within a single drug class or a gene group, and several similarities between different classes. This further expands on the complexity of addiction mechanisms. This example also highlights the potential of using PhenoTimer to identify synergistic effects of drugs, which could have implications for designing drug therapies.We have shown how PhenoTimer can help with the better understanding of phenotypic relations by connecting back to the genetic background and by embedding time information. 
The tool has proven to be useful in fast patterning of phenotypic transition profiles within the cell cycle upon perturbation, as well as in the identification of similarities in drug action and potential novel links between diseases.Compared to similar software for time-resolved data, PhenoTimer introduces several new features. To our knowledge, no other tool is currently available for specifically mapping connections between phenotypes in the form of arc projections. Furthermore, these relationships and their genetic determinants can be tracked dynamically, along with network and functionality information. The alternation between different 2D and 3D views allows a detailed inspection of the data, and global patterns can be easily detected. In short, PhenoTimer enables an interactive exploration of multidimensional phenotypic screens for global trends and single time point details, in an adaptable manner that allows the integration of dynamic network information.We anticipate that the use of this tool is not limited to the examples we have shown, but its features are suited to a whole range of biological data. In general, it is relevant for the analysis of interesting subsets of high-throughput expression or imaging screens, as well as other experiments. As a further application, PhenoTimer could also be used for the visualization and analysis of spatiotemporal programs encoded within and among chromosomes. Identifying common points in the progression of various diseases could provide new strategies for combined treatment and drug repurposing. Visualization of common variations among bacterial populations in the human gut through time would give us further insight into the dynamics of the gut microbiota. Additionally, it can be used for quality control of different replicates in an experiment or for comparison between different tissues. 
We thus envision phenotypic patterning of processes from the smallest to the largest scale, for a global view of time-dependent regulation and rhythms governing life.Figure S1The different visualization modes in PhenoTimer.(A) 3D arc view. Connections between phenotypes are represented as arcs, the color encoding directionality. The height of the arc corresponds to the number of genes involved in the connection. Bar charts with values for every time point can be loaded as well. (B) 2D arc view. Connections are represented as 2-dimensional arcs, with the same color coding as in (A). The thickness of the arc corresponds in this case to the number of genes. (C) Circular view. The connections between phenotypes are visualized in a circular manner for every time point. (D) Heat map view. The gene-associated values are visualized for each phenotype in a separate heat map for every time point. The user can choose the clustering method. The heat maps are expanded upon hovering and can be individually analyzed. (E) Line plot view. The gene-associated values are visualized as timeline plots for every phenotype. The graphs are expanded upon mouse hovering.(TIF)Click here for additional data file.Figure S2Details of PhenoTimer graphical user interface.(A) Part of the canvas where the different 2D/3D graphical representations are drawn. (B) Part of the canvas where the different 2D network representations are drawn. (C) Controls for setting thresholds for phenotypic values. One can set new value ranges by dragging the sliders. (D) Slider controller for moving through time. Pressing the key \u201ct\u201d allows switching between visualizing connections for a single time point and for all time points up to the current one. (E) Slider that allows setting the time interval for arc display. (F) Controller for changing the unit height (in 3D) or width (in 2D) of the arcs, for better emphasis of visualized data. 
(G) Slider that allows the changing of the arc transparency, for optimized visualization (default is 20%). (H) Pop-up that indicates the action that can be taken using the corresponding slider.(TIF)Click here for additional data file.Figure S3PhenoTimer workflow.The experimental data, coming from medium- or high-throughput gene expression or imaging screens for which time-lapse recordings have been made, is formatted into a special input file similar to the one in the top panel, parsable by PhenoTimer. This file is then loaded into PhenoTimer for processing. At this point the tool already produces visual output, but one might wish to first set thresholds for gene-associated values for each phenotype; otherwise all phenotypes might appear connected. After this step, one is ready to visualize the data in different view modes and integrate network information (bottom panel).(TIF)Click here for additional data file.Figure S4Single phenotype transition plots, as produced by PhenoTimer.Each plot visualizes only transitions to and from phenotype \u201cpolylobed\u201d (A), \u201capoptosis\u201d (B), \u201cgrape\u201d (C) and \u201clarge\u201d (D), respectively. Prevalent phenotypes (A and B) are clearly distinguishable from rarer ones (C and D). This holds even when considering only transitions towards the phenotype of interest, depicted in purple (polylobed), green (apoptosis), blue (grape) or red (large).(TIF)Click here for additional data file.Figure S5Timeline of molecular functions enriched for genes essential for cell division.The gradient highlights the number of genes whose silencing causes transitions at a particular time point and that are enriched for the respective molecular function. 
The plot was produced in R.(TIF)Click here for additional data file.Figure S6The hypothesized network of synchronously activated genes or proteins involved in the same pathway.The nodes correspond to silenced genes and the genes are connected if they show the exact same phenotypic succession events upon knockdown. The genes are colored according to the first phenotype shown in the cells upon knockdown. Out of all interactions hypothesized, 62.4% have been validated from the literature using GeneMania, with the following distribution: co-expression 64.24%, physical interactions 14.68%, genetic interactions 11.16%, co-localization 5.46%, predicted 4.37%, shared protein domains 0.09%. The networks were visualized using Cytoscape.(TIF)Click here for additional data file.Figure S7Connections from the literature between genes of four hypothesized interactive modules.The cells where these genes are knocked down adopt a binuclear phenotype after: (A) 16.5 hours; (B) 15 hours; (C) 15.5 hours; (D) 26 hours. The networks were retrieved from GeneMania.(TIF)Click here for additional data file.Figure S8K-means clustering reveals 4 clusters of genes with similar phenotypic succession profiles.The clustering for the first two principal components is displayed. The clustering was performed on the vectors of the most prevalent phenotype assigned at each time point for every gene. The clustering and plotting were performed in R.(TIF)Click here for additional data file.Figure S9The network of genes affected in cancer that are also periodically transcribed throughout the cell cycle.The genes that relate to more than one type of cancer are highlighted. Circle colors indicate the different cancer types where genes are enriched. Genes highlighted in this way are the ones involved in the connections visualized using PhenoTimer. In general, genes affected in the same cancer types interact physically or genetically. 
The network was retrieved from GeneMania and further edited in Cytoscape.(TIF)Click here for additional data file.Figure S10Mechanistic similarities of stimulants and depressants.Action upon genes with transcription profiles in the lower quartile (top) and upper quartile (bottom) ranges is depicted. Two drugs are connected if treatment with either of them results in similar levels of gene expression for at least one gene. The thickness (2D) or height (3D) of the arcs corresponds to the number of genes commonly affected by two drugs. The networks connect genes that respond to the same drug(s). The thickness of the edges corresponds to the number of drugs to which the pair of genes is responsive. Orange nodes (top) and green nodes (bottom) are variable elements, while yellow nodes correspond to the core gene network that stays the same throughout the time course. The plots have been obtained using PhenoTimer and then combined and annotated to emphasize different aspects of the analysis.(TIF)Click here for additional data file.Figure S11Drug similarities in action on different gene subclasses.Two drugs are connected if they act similarly on at least one common gene, the thickness of the links indicating how many genes are influenced by that pair of drugs. To the right, the networks of the genes corresponding to the different subclasses have been retrieved from GeneMania. The genes highlighted in red appear in drug pair connections. 
The circular plots have been obtained individually using PhenoTimer and then combined and annotated to emphasize different aspects of the analysis.(TIF)Click here for additional data file.Table S1Comparison of the different view modes of PhenoTimer.The table lists the comparative strengths and weaknesses of the different graphical representations used in PhenoTimer.(DOC)Click here for additional data file.Table S2Example of an input file loadable into PhenoTimer.The first column specifies the gene names, the second column the phenotypes and the subsequent columns list the gene-associated values at each time point. The fields must be separated by white space.(DOC)Click here for additional data file.Table S3Example of network files loadable into PhenoTimer.(a) Input file containing the GO enrichment specifications: the columns must specify the GO identifiers, the corresponding descriptions, the p-values of the enrichment and the genes that are enriched for each category, separated by \u201c|\u201d. (b) Along with the enrichment file, an interaction file should also be loaded into PhenoTimer, specifying the interaction partners in the network, one pair per line. The format is the same for other types of networks. All these are tab-separated fields.(DOC)Click here for additional data file.Table S4GO enrichment for the network of hypothesized synchronous genes.The table lists the molecular functions of all the genes in the mitotic progression dataset whose knockdown causes identical phenotypic successions to at least one other gene.(DOC)Click here for additional data file.Table S5Quartile calculations for the measured transcriptional levels upon drug intake.The table lists the quartiles of the normalized and log2-transformed mRNA abundance measured for each drug treatment. The lower (25%) and upper (75%) quartile values are used as thresholds for subsequent visualization and analysis.(DOC)Click here for additional data file.Table S6The functionality description of core network and variable genes similarly regulated by drugs.Lowly and highly expressed genes are defined as before. The descriptions were taken from UniProt.(DOC)Click here for additional data file."}
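Table S2 describes a whitespace-separated input layout: gene name, phenotype, then one gene-associated value per time point. A minimal parser for rows in that layout might look like the following Python sketch; the function name and record structure are illustrative, not PhenoTimer's own API:

```python
def parse_phenotimer_input(lines):
    """Parse rows in the whitespace-separated layout described for Table S2:
    gene name, phenotype, then one gene-associated value per time point.
    Hypothetical parser; field order follows the table description."""
    records = []
    for line in lines:
        fields = line.split()
        if len(fields) < 3:
            continue  # skip malformed or empty rows
        records.append({
            "gene": fields[0],
            "phenotype": fields[1],
            "values": [float(v) for v in fields[2:]],
        })
    return records

# Toy rows for illustration only
rows = ["geneA polylobed 0.1 0.4 0.9", "geneB apoptosis 0.0 0.2 0.7"]
recs = parse_phenotimer_input(rows)
```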
+{"text": "Post-genomic molecular biology has resulted in an explosion of data, providing measurements for large numbers of genes, proteins and metabolites. Time series experiments have become increasingly common, necessitating the development of novel analysis tools that capture the resulting data structure. Outlier measurements at one or more time points present a significant challenge, while potentially valuable replicate information is often ignored by existing techniques.We present a generative model-based Bayesian hierarchical clustering algorithm for microarray time series that employs Gaussian process regression to capture the structure of the data. By using a mixture model likelihood, our method permits a small proportion of the data to be modelled as outlier measurements, and adopts an empirical Bayes approach which uses replicate observations to inform a prior distribution of the noise variance. The method automatically learns the optimum number of clusters and can incorporate non-uniformly sampled time points. Using a wide variety of experimental data sets, we show that our algorithm consistently yields higher quality and more biologically meaningful clusters than current state-of-the-art methodologies. We highlight the importance of modelling outlier values by demonstrating that noisy genes can be grouped with other genes of similar biological function. We demonstrate the importance of including replicate information, which we find enables the discrimination of additional distinct expression profiles.http://www.bioconductor.org/packages/release/bioc/html/BHC.html?pagewanted=all.By incorporating outlier measurements and replicate values, this clustering algorithm for time series microarray data provides a step towards a better treatment of the noise inherent in measurements from high-throughput genomic technologies. 
Timeseries BHC is available as part of the R package 'BHC' (version 1.5), which is available for download from Bioconductor (version 2.9 and above). Post-genomic molecular biology has resulted in an explosion of typically high dimensional, structured data from new technologies for transcriptomics, proteomics and metabolomics. Often this data measures readouts from large sets of genes, proteins or metabolites over a time course rather than at a single time point. Most biological time series aim to capture information about processes which vary over time, and temporal changes in the transcription program are often apparent. Standard similarity measures assume that observations are independent and identically distributed (iid) and are invariant with respect to the order of the observations. If the order of observations in two sequences is permuted, their correlation or Euclidean distance will not change. However, this does not hold for time series, where each observation depends on its past, and gene expression levels at adjacent time points exhibit correlation. This was demonstrated in the classic paper of Eisen et al. Let y be the N = G \u00d7 T observations in a cluster of G genes, where the {yg,t} are time series of {1,..., T} time points. Each gene is normalised to have mean 0 and standard deviation 1 across time points. The prior of f is given for fixed values of \u03b8\u03a3, such that P(f|\u03b8\u03a3) = N(0, K). It follows that the likelihood function for f is P(y|f) = N(f, \u03c3\u00b2I), where I is the N \u00d7 N identity matrix. The marginal likelihood of the data, y, is then P(y|\u03b8\u03a3) = N(0, K + \u03c3\u00b2I), where K describes the relationship between the values of the function, f, at different time points and must be positive semi-definite to be valid. We have implemented both the squared exponential and cubic spline covariance functions into BHC. 
In BHC we have implemented the squared-exponential covariance function KSE, which is a widely-used choice for K: KSE(ti, tj) = \u03c3f\u00b2 exp(-(ti - tj)\u00b2/(2l\u00b2)) + \u03c3n\u00b2\u03b4ij, where \u03b4ij is the Kronecker delta function and ti and tj are two time points for f. The covariance function encodes our assumptions about the underlying signal in the data. For example, in KSE the hyperparameter l is, intuitively, how far along the input time axis must be travelled between stationary points. As the inputs become closer in time, the value of KSE increases and tends to unity, meaning these values of f are more closely correlated. This encodes the intuition that our time series are smoothly-varying, once we have accounted for noise. We have also implemented the cubic spline covariance function, KCS, to facilitate comparison with the clustering method of Heard et al., with KCS(ti, tj) = \u03c3f\u00b2(v\u00b3/3 + v\u00b2|ti - tj|/2), where v = min(ti, tj). KCS only has two hyperparameters. For each cluster, we learn the hyperparameters \u03b8\u03a3 from the data y using a gradient ascent method. We want to use the replicate information to inform the value of the noise variance. The total noise variance is estimated from the spread of the replicate values about their means, where G is the number of genes in the cluster, R is the number of replicates per observation, T is the number of time points in the time series and {yr,g,t} is the set of replicates for an observation. It is these averages of the replicate values that are clustered. A Gamma prior is placed on the noise variance; (\u03b1 - 1)/\u03b2 is the modal value of the Gamma distribution, and the hyperparameters \u03b1 and \u03b2 are chosen to give a weakly informative prior, with \u2126 chosen to be 100. This reflects our prior knowledge that the noise variance should be consistent with the replicate-based estimate. The hyperparameters are then optimised with respect to the posterior P(\u03b8|y) using a gradient ascent method. The partial gradient of the log marginal likelihood with respect to a hyperparameter \u03b8j is (1/2) tr((\u03b3\u03b3^T - K^-1) \u2202K/\u2202\u03b8j), where \u03b3 = K^-1 y, \u2202K/\u2202\u03b8j is a matrix of element-wise derivatives and 'tr' denotes the trace of the matrix. 
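The squared-exponential covariance can be computed directly from its formula. A minimal pure-Python sketch, with illustrative hyperparameter names (sig_f, length, sig_n are assumptions, not the package's own identifiers):

```python
import math

def k_se(t, sig_f=1.0, length=1.0, sig_n=0.1):
    """Squared-exponential covariance matrix over time points t, plus a
    noise term on the diagonal (the Kronecker delta term):
        K[i][j] = sig_f^2 * exp(-(t_i - t_j)^2 / (2 l^2)) + sig_n^2 * delta_ij
    Hyperparameter names are illustrative, not the package's own."""
    return [[sig_f**2 * math.exp(-(ti - tj)**2 / (2.0 * length**2))
             + (sig_n**2 if i == j else 0.0)
             for j, tj in enumerate(t)]
            for i, ti in enumerate(t)]

# Nearby time points are more strongly correlated than distant ones
K = k_se([0.0, 1.0, 2.0], sig_n=0.0)
```

With sig_n=0 and unit length scale, K[0][1] equals exp(-0.5), larger than K[0][2], reflecting the smoothness assumption described in the text.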
In the case of the remaining hyperparameters, a flat prior, P(\u03b8j), is assumed, and therefore the corresponding partial gradients contain only the trace term above. If replicate information is not required to be included in BHC, a flat prior is also assumed for the noise variance. We have so far considered the total noise in microarray measurements to have a Gaussian distribution. However, despite averaging replicate values, microarray data typically contain some outliers that are not well modelled by the Gaussian noise distribution used for the majority of the data. Kuss et al. introduced a mixture likelihood P(y|f), where the likelihood functions for observations with regular noise have a Gaussian distribution, and a likelihood function of a different form is assumed for the outlier measurements. Stegle et al. used such an approach for gene expression data. We simplify our notation to denote the observation for gene g and time point t as yn. Following the reasoning in Kuss et al., we assume a probability b that this value, yn, was generated by an unknown likelihood function, Po, producing outlier measurements, and a probability a = 1 - b that yn is a regular value, which was generated by a Gaussian likelihood function, Pr. This mixture likelihood function is therefore P(yn|f) = a Pr(yn|f) + b Po(yn). The expression for the marginal likelihood then becomes an integral over f of the product of these mixture likelihoods. Multiplying out the likelihood function product would result in 2^N terms. In the case that Po is a conjugate distribution to Pr, evaluation of this integral would be analytically solvable, but computationally intractable for large numbers of observations. However, if the proportion of outlier measurements is small, this series can be approximated. Making the following simplifications to notation, An = Pr(yn|f) and Bn = Po(yn), the term with coefficient a^N represents the case where no observations are outliers. Terms with coefficient a^(N-1)b represent the case that a single observation is an outlier. Terms with b\u00b2 or higher order in their coefficients represent the case that two or more observations are outliers. 
Since b is small, these terms are considered to represent events unlikely to occur and are disregarded. Our first order approximation thus considers at most one datum at a time as an outlier; higher order approximations would incur a disproportionate computational burden. The likelihood function for the outlier terms, Bn, is modelled as the same constant function for all measurements, B = 1/Range, where Range is the difference between the highest and lowest observations in the data set. When the An represent Gaussian distributions, it follows that each first-order term is the Gaussian marginal likelihood of the N - 1 observations excluding the nth observation, with Kn the corresponding covariance matrix. The proportion of outliers, 1 - a, is calculated to optimise the marginal likelihood P(y|\u03b8). Simplifying the notation, such that P(y|\u03b8) \u2248 a^N V1 + a^(N-1)(1 - a) V2, we have amax = (1 - N)V2/N(V1 - V2) as the value of a giving the highest value for P(y|\u03b8); if 0 < amax < 1, this value of a is used. For d > 1, GlobalMIT* still admits the same complexity as the first order GlobalMIT, under an assumption that restricts the candidate parent configurations for each node. Our proposed algorithms are implemented within the Matlab/C++ GlobalMIT+ toolbox, freely available as online supplementary material. BANJO also supports multi-threading, whereas BNFinder does not. While we could have run all algorithms with only a single thread for a \u201cfair\u201d comparison in terms of run-time, our objective in carrying out the experiments this way is to highlight the capability and benefit of parallelization of GlobalMIT+. The 1-thread execution time would be roughly three to five times longer in our observation. As for parameter setting, BNFinder was run with default settings, while BANJO was run with 6 threads, simulated annealing+random move as the search engine, and its run-time was set to either that required by GlobalMIT+ or at least 10 minutes, whichever was longer. 
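The closed-form maximiser of the first-order approximation can be sketched as follows; the function name is illustrative, and the boundary handling (returning a=1, i.e. no outliers, when the interior critical point does not apply) is an assumption consistent with the text:

```python
def optimal_inlier_fraction(V1, V2, N):
    """Maximiser of the first-order approximation described in the text,
        P(y|theta) ~= a^N V1 + a^(N-1) (1 - a) V2,
    with interior critical point a_max = (1 - N) V2 / (N (V1 - V2)).
    When V1 >= V2 the approximation is increasing in a on (0, 1], so the
    maximum sits at a = 1 (no outliers). Illustrative helper only."""
    if V1 >= V2:
        return 1.0
    a_max = (1 - N) * V2 / (N * (V1 - V2))
    return min(a_max, 1.0)

# Toy likelihood values, for illustration only
a1 = optimal_inlier_fraction(0.05, 1.0, 10)  # interior maximiser, 18/19
a2 = optimal_inlier_fraction(2.0, 1.0, 10)   # V1 >= V2: a = 1, no outliers
```

Setting the derivative of a^N V1 + a^(N-1)(1-a) V2 to zero reproduces the a_max expression quoted in the text.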
GlobalMIT+ has two parameters, namely the significance level \u03b1, to control the trade-off between goodness-of-fit and network complexity, and the DBN order d. Adjusting \u03b1 will affect the sensitivity and precision of the discovered network, very much like its effect on the Type-I and Type-II error of the mutual information test of independence. De Campos suggested that a high value of \u03b1 may be necessary to avoid overly penalizing network complexity. Thus, in our experiments we set \u03b1=0.999 for Ne<100 and \u03b1=0.9999 otherwise. The choice of a suitable DBN order d, on the other hand, is both species-specific and data-specific, in particular with respect to the data sampling rate. For example, in mammals, the transcriptional regulatory time delay can be from several minutes to several tens of minutes, and is composed of two components: the TF translation/post-translational processing/translocation time (\u223c10.5\u00b14 mins), and the target gene transcription and post-transcription processing time (\u223c20\u221240 mins). The faster the sampling rate, the larger the d value needed to cover the same time delay. It is also noted that increasing d will decrease the number of effective data points available for learning. In our experiments, we experimentally set d from 1 to several time units, depending upon the sampling rate. Whenever necessary, gene expression data were discretized using 3-state quantile discretization. We study the E. coli SOS system, which involves lexA, recA and more than 30 other genes they directly regulate. In normal conditions, LexA binds to the promoter regions of these genes and acts as a master repressor. When the DNA is damaged, the RecA protein senses the damage and triggers LexA autocleavage. A drop in LexA level leads to de-repression of the SOS genes. When DNA repair completes, RecA stops mediating LexA autocleavage, LexA accumulates and represses the SOS genes again. 
We used the expression data gathered in the original study for 8 genes, uvrD, lexA, umuD, recA, uvrA, uvrY, ruvA and polB, to reconstruct the interactions between these genes. The data set contains 4 time series, each of 50 observations taken at 6-minute intervals, under two UV exposition levels. Since the dynamics of each gene in all time series are similar, we can take the mean value of these time series as input to the algorithms. Thus, the input data consist of 8 genes \u00d7 50 observations. GlobalMIT+ and BNFinder require only a few seconds, while BANJO was executed for 10 minutes with 6 threads in parallel. GlobalMIT+ (d=1) and BNFinder (BDe & MDL) all returned the same network, with ruvA being disconnected. Overall, this structure closely reflects the SOS network, in which the lexA/recA compound acts as a hub that controls the other genes. BANJO returned a network with additional edges around umuD and uvrD/uvrA. Note that the ruvA gene is also disconnected in BANJO\u2019s recovered network. When testing with higher orders, GlobalMIT+ discovered a similar hub structure, with the most complete network discovered at d=6. In the next, synthetic, experiment the interaction order between nodes in adjacent levels was assumed to be one; to create data for this DBN model, we simply shifted forward the expression profiles of the 2nd-, 3rd- and 4th-level nodes by 1, 2 and 3 time units respectively. We generated ten time series of 125 observations, then for each N\u2208{25,50,75,100,125} we took the first N observations of these series for testing. Since the network structure in this experiment is known in advance by design, we can calculate the true positive (TP), false positive (FP) and false negative (FN) edges. 
We study a glucose homeostasis network of 35 genes and 52 interactions, first proposed by Le et al. The mean\u00b1standard deviation values for the performance metrics, namely sensitivity (=TP/(TP+FN)), precision (=TP/(TP+FP)) and runtime, over the 10 time series for all algorithms were recorded. The precision of the deterministic algorithms improved as N increased. On the other hand, BANJO achieved a slightly better sensitivity, but at the cost of a significantly lower precision. This observation is in concordance with our earlier experiment on the E. coli SOS network, in which BANJO also learned many more edges than GlobalMIT+ and BNFinder. This result also highlights the major advantage of deterministic global optimization based approaches over a stochastic global optimization based method such as BANJO. Wherever applicable, these methods never get stuck in local minima, and are able to deliver consistent and high quality results. Of course, BANJO on the other hand is the choice for very large datasets where deterministic methods are computationally infeasible. It is noted that we have omitted BNFinder+BDe in this experiment. The reason is that this algorithm becomes too expensive even for this medium network. For example, at N=25, BNFinder+BDe requires around 1 minute. The execution time quickly increases to 1206\u00b1167 mins at N=50. And at N=75, we could not even complete analyzing the first of the 10 datasets: the execution was abandoned after 3 days, with BNFinder+BDe having learnt the parents for only 2 nodes. As for higher-order DBN learning algorithms, both GlobalMIT+ and GlobalMIT* (with d=3) achieve significantly better sensitivity compared to first-order DBN learning algorithms. The improved sensitivity is mainly credited to the ability of these algorithms to cover all the possible time-delayed interactions between the genes. More specifically, at N=125, GlobalMIT* discovers on average 16.9 high-order interactions, i.e., 43% of the total high-order interactions. 
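The sensitivity and precision computations defined above are straightforward to reproduce. A minimal sketch with hypothetical edge sets (the gene names echo the SOS example, but the edges are invented):

```python
def edge_metrics(true_edges, inferred_edges):
    """Sensitivity (=TP/(TP+FN)) and precision (=TP/(TP+FP)) for a
    recovered network, with directed edges as (parent, child) pairs.
    Minimal sketch of the evaluation described in the text."""
    true_edges, inferred_edges = set(true_edges), set(inferred_edges)
    tp = len(true_edges & inferred_edges)
    fp = len(inferred_edges - true_edges)
    fn = len(true_edges - inferred_edges)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, precision

# Hypothetical ground-truth and recovered edge sets, for illustration only
truth = {("lexA", "uvrD"), ("lexA", "umuD"), ("recA", "lexA")}
found = {("lexA", "uvrD"), ("recA", "lexA"), ("uvrA", "polB")}
sens, prec = edge_metrics(truth, found)
```

Here two of three true edges are recovered and one inferred edge is spurious, giving sensitivity and precision of 2/3 each.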
Meanwhile, BANJO and BNFinder+MDL only recover on average 5.5 (14%) and 4.6 (12%) high-order interactions respectively. It is also noticeable from this experiment that GlobalMIT* delivered results almost identical to GlobalMIT+ but with a much shorter time, comparable to the 1st-order GlobalMIT. This section presents our analysis on a large scale cyanobacterial network. Cyanobacteria are the only prokaryotes that are capable of photosynthesis, and in recent years have received increasing interest due to their potential for biofuel production. We study Cyanothece sp. 51142, hereafter Cyanothece, a unicellular cyanobacterial strain that is involved not only in photosynthesis but also in nitrogen fixation in the same cell. As a byproduct of nitrogen fixation, Cyanothece has been recently shown to produce biohydrogen at very high rates that are several fold higher than previously described hydrogen-producing photosynthetic microbes. We used transcriptomic data from the original study and ran the GlobalMIT* version, with order d=3. The inferred network exhibits an apparent scale-free structure; in scale-free networks the node degree follows a power-law distribution with scaling exponent \u03b3 typically between 2 and 3 for various networks in nature, society and technology. The scale-free property is thought to be a key organization feature of cellular networks, as supported by recent analysis on model organisms such as S. cerevisiae and C. elegans. To formalize this observation, we fit the node degree (counting both in- and out-degree) in the GlobalMIT* inferred network to the power-law distribution using the method of maximum likelihood (ML). The ML estimate for \u03b3 in this network is 2.24, falling well within the typical range, and the data follow the P(x)=x\u22122.24 curve closely. In order to verify that the scale-free structure is not merely an artefact of the inference algorithm, we test GlobalMIT* with the same parameters on the same microarray data set, but with every gene expression profile randomly shuffled. 
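The maximum-likelihood fit of the scaling exponent can be sketched with the standard continuous-case estimator (Clauset et al. style); this generic helper is illustrative, not the authors' code:

```python
import math

def ml_powerlaw_exponent(degrees, x_min=1.0):
    """Continuous maximum-likelihood estimate of the power-law scaling
    exponent: gamma_hat = 1 + n / sum(ln(x_i / x_min)), over all values
    x_i >= x_min. Generic sketch of the ML fit described in the text."""
    xs = [d for d in degrees if d >= x_min]
    n = len(xs)
    return 1.0 + n / sum(math.log(x / x_min) for x in xs)

# Toy node degrees, for illustration only
gamma = ml_powerlaw_exponent([1, 1, 2, 3, 5, 8, 21])
```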
The resulting network from the shuffled data can likewise be fit to a power law; the ML estimate for \u03b3 in this network is, interestingly, 2.25, very close to that of the GlobalMIT* network. We next tested BNFinder and BANJO on this data set. BNFinder+BDe was abandoned after 3 days of execution without finishing. BNFinder+MDL on the other hand is relatively fast, requiring only 4 mins. BANJO was run with 6 threads for 1 h. We next perform functional enrichment analysis for the top hubs in each network. For this purpose, we gathered annotation data for Cyanothece sp. 51142 from Cyanobase. Cyanobacteria in general, and Cyanothece in particular, are not very well annotated. For example, to date, nearly half of the genes of Synechocystis sp. PCC 6803, the best studied cyanobacterium, remain unannotated. Therefore, we supplemented Cyanobase annotation with homology search using the Blast2GO software suite. A corrected p-value of less than 0.05 is considered significant. Following these procedures, of the top 20 hubs in the GlobalMIT* network, 10 were found to be significantly enriched in major Cyanothece cellular processes, such as nitrogen fixation, photosynthesis and other closely related pathways. Cyanothece strains thrive in marine environments, and in addition to carbon fixation through photosynthesis, these bacteria can also perform nitrogen fixation by reducing atmospheric dinitrogen to ammonia. Since the nitrogenase enzyme is highly sensitive to oxygen, Cyanothece temporally separates these processes within the same cell, so that oxygenic photosynthesis occurs during the day and nitrogen fixation during the night. These are thus the major Cyanothece cellular processes, and this is reflected clearly in the GlobalMIT* reconstructed network. Upon inspecting the BNFinder+MDL network, 6 out of the top 20 hubs were found to be significantly enriched, also in major relevant cellular processes. 
It is noted that while GlobalMIT* shows the most hubs, BNFinder+MDL manages to recover several hubs with significantly better corrected p-values. In particular, 3 hubs for nitrogen fixation, proton transport and ribosome were recovered with significantly smaller corrected p-values. However, as opposed to GlobalMIT*, other important functional hubs for photosynthesis and photosystems I & II were missing. BANJO on the other hand produced a relatively poor result, with only 1 out of the 20 top hubs turning out to be significantly enriched, and not related to any major cellular pathway. The overall results suggest that both GlobalMIT* and BNFinder+MDL successfully reconstructed biologically plausible network structures, i.e., scale-free with a reasonable scaling parameter value, and with functionally enriched modules relevant to the wet-lab experimental condition under study. GlobalMIT* managed to produce more enriched hubs, as a result of the higher order DBN model employed and the improved MIT scoring metric. BANJO, on the other hand, generally failed to produce a plausible network structure. This experimental result thus highlights the advantage of the deterministic global optimization approach, as employed by GlobalMIT* and BNFinder+MDL, versus the stochastic global optimization approach employed by BANJO. In this paper, we have introduced GlobalMIT+ and GlobalMIT*, two DBN-based algorithms for reconstructing gene regulatory networks. The GlobalMIT suite makes use of the recently introduced MIT scoring metric, which is built upon solid principles of information theory, having competitive performance compared against the other traditional scoring metrics such as BIC/MDL and BDe. 
In this work, we have further shown that MIT possesses another very useful characteristic in that when placed into a deterministic global optimization framework, its complexity is very reasonable. As theoretically shown and experimentally verified, GlobalMIT exhibits a much lower complexity compared to the BDe-based algorithm, i.e., BNFinder+BDe, and is comparable with the MDL-based algorithm, i.e., BNFinder+MDL. GlobalMIT+/* are also designed to learn high-order variable time delayed genetic interactions that are common to biological systems. Furthermore, the GlobalMIT* variant has the capability of reconstructing relatively large-scale networks. As shown in our experiments, GlobalMIT+/* are able to reconstruct genetic networks with biologically plausible structure and enriched submodules significantly better than the alternative DBN-based approaches. Our current and future study of GlobalMIT+/* mainly focuses on the application of these newly developed algorithms to elucidate the gene regulatory networks of Cyanothece, Synechocystis and Synechococcus, amongst other cyanobacteria strains having high potential for biofuel production and carbon sequestration. The authors declare that they have no competing interests. NXV developed the algorithms and carried out the experiments. MC provided overall supervision and leadership to the research. NXV and MC drafted the manuscript. RC and PPW suggested the biological data and provided biological insights. All authors read and approved the final manuscript. GlobalMIT+.zip \u2014 The GlobalMIT+ toolbox: implementation of the proposed algorithms in Matlab and C++, together with the user\u2019s guide. Click here for file. Supplementary Material for Gene Regulatory Network Modeling via Global Optimization of High-Order Dynamic Bayesian Network. Click here for file."}
+{"text": "We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which is available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/. Many scientific disciplines are becoming data intensive. These subjects require the development of new and innovative statistical algorithms to fully utilise these data. Time series clustering methods in particular have become popular in many disciplines, for example for clustering stocks with different price dynamics in finance. Molecular biology is one such subject. New and increasingly affordable measurement technologies such as microarrays have led to an explosion of high-quality data for transcriptomics, proteomics and metabolomics. 
These data are generally high-dimensional and are often time-courses rather than single time point measurements. It is well-established that clustering genes on the basis of expression time series profiles can identify genes that are likely to be co-regulated by the same transcription factors. These statistical methods often provide superior results to standard clustering algorithms, at the cost of a much greater computational load. This limits the size of data set to which a given method can be applied in a given fixed time frame. Fast implementations of the best statistical methods are therefore highly valuable. The Bayesian Hierarchical Clustering (BHC) algorithm has proven a highly successful tool for the clustering of microarray data. The principal downside of the BHC algorithm is its run-time, in particular its scaling with the number of items clustered. This can be addressed via randomised algorithms. In this paper, we apply this approach to the time series BHC algorithm, testing it on both synthetic data and real S. cerevisiae microarray data. To demonstrate the effectiveness of the randomised BHC algorithm, we first test its performance on a realistic synthetic data set, constructed from several realisations of a generative model. Given that for these synthetic data we know the ground truth clustering partition, we use the adjusted Rand index as our performance metric. We also consider how the run-time varies as a function of the total number of genes analysed, noting an interesting effect for the lowest value of the subset-size parameter. It is also important to validate the randomised algorithm on real microarray data; to do this, we use a subset of a previously published data set. As a performance metric, we choose the Biological Homogeneity Index (BHI). We note some interesting differences between the algorithms on these data. We have presented a randomised algorithm for the BHC clustering method. 
The randomised algorithm is statistically well-motivated and leads to a number of concrete conclusions. The randomised BHC algorithm can be used to obtain a substantial speed-up over the greedy BHC algorithm. Substantial speed-up can be obtained at only small cost to the statistical performance of the method. The overall computational complexity of the randomised BHC algorithm is low enough that the randomised BHC time series algorithm can be used on data sets of well over 1000 genes. Use of the randomised BHC algorithm requires the user to set a value of the subset-size parameter. The randomised time series BHC algorithm is available as part of the R package BHC, which is available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/. In this section, we provide a mathematical overview of the time series BHC algorithm; greater detail can be found in the original publications. For the purposes of the BHC algorithm, a complete dendrogram is constructed, with at each step the most likely merger being made. This allows us to see the log-probability of mergers in the whole dendrogram, even when this value is very small. To determine the likely number of clusters, given the data, we then cut the dendrogram wherever the probability of merger falls below 0.5 (i.e. non-merger is more likely). The BHC algorithm provides a lower bound on the DP marginal likelihood. Gaussian processes define priors over the space of functions, making them highly suited for use as non-linear regression models. 
This is highly valuable for microarray time series. For the time series BHC model, we model an observation at a given time point as a draw from a Gaussian process with additive iid Gaussian noise. Time series BHC implements either the squared exponential or cubic spline covariance functions. In this paper, we restrict our attention to the default choice of squared exponential covariance. To speed up the time series BHC, we implement a randomised BHC algorithm. Throughout this paper we will refer to the top of the dendrogram; this is the highest level of the dendrogram, where the whole set of genes is split into two subsets. For reasonably balanced trees, the top levels should be well-defined even using only a random subset of the genes. From this idea, we can define the following randomised algorithm. Select a random subset of the genes. Run BHC on this subset. Filter the remaining genes into the two top-level branches. Including the original subset, this partitions the full gene set in two. Now recurse for the gene subsets in each branch, until each subset is small enough to cluster directly. In effect, we are using estimates of the higher levels of the tree to subdivide the genes so that it is not necessary to compute many of the potential low-level merge probabilities. The covariance functions of the Gaussian processes used in this paper are characterised by a small number of hyperparameters. These hyperparameters are learned for each potential merger using the BFGS quasi-Newton method. This merge-by-merge optimisation allows each cluster to have different hyperparameter values, allowing for example for clusters with different intrinsic noise levels and time series with different characteristic length scales. We assume in this paper that each time series is sampled at the same set of time points. This leads to a block structure in the covariance matrix, which can be utilised to greatly accelerate the computation of the Gaussian process marginal likelihood. The computational complexity of BHC is dominated by inversion of the covariance matrix. 
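The subsample, split, filter and recurse procedure described above can be sketched as follows. Here top_split and side_of are hypothetical callables standing in for the greedy BHC top-level split and the probabilistic branch-filtering step; the real algorithm uses merge probabilities for both.

```python
import random

def randomised_cluster(genes, m, top_split, side_of):
    """Recursive skeleton of the randomised algorithm: cluster a random
    subset of m genes, take the top-level split of its dendrogram,
    filter every remaining gene into one of the two branches, and
    recurse until a subset is small enough to cluster directly."""
    if len(genes) <= m:
        return [list(genes)]
    subset = random.sample(list(genes), m)
    left, right = top_split(subset)   # top of the subset's dendrogram
    for g in genes:
        if g not in subset:
            (left if side_of(g, left, right) else right).append(g)
    return (randomised_cluster(left, m, top_split, side_of) +
            randomised_cluster(right, m, top_split, side_of))
```

With any top_split that returns two non-empty branches, each recursive call sees strictly fewer genes, so the recursion terminates, and most low-level merge probabilities are never evaluated.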
Considering the case of a group of time series sharing the same time points, we also note that this is equivalent to a Bayesian analysis using a standard multivariate Gaussian. Indeed, considering the task in this way may be a simpler way of doing so and is certainly a useful way of gaining additional insights into the workings of the model. When proposed merges have constant cost, the randomised algorithm has a favourable overall complexity. For the time series BHC algorithm, however, the merges do not have constant cost: for a given node, the cost of a proposed merge depends on the number of time series being merged."}
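The shared-time-point block structure mentioned above can be exploited as in the sketch below: one Cholesky factorisation of the T x T covariance serves every series in a group. This is a simplified stand-in (the series are treated as independent zero-mean GP draws under a squared exponential covariance plus iid noise, and the parameter names are illustrative), not the full BHC cluster model, which couples the series through a shared latent function.

```python
import numpy as np

def sq_exp(t, signal_var, length_scale, noise_var):
    """Squared exponential covariance over time points t,
    plus iid observation noise on the diagonal."""
    d = t[:, None] - t[None, :]
    K = signal_var * np.exp(-0.5 * (d / length_scale) ** 2)
    return K + noise_var * np.eye(len(t))

def gp_log_marginal(Y, t, signal_var=1.0, length_scale=1.0, noise_var=0.1):
    """Log marginal likelihood of n series (rows of Y), all observed at
    the same time points t. One Cholesky factor of the T x T covariance
    is reused for every row, instead of factorising an nT x nT matrix."""
    K = sq_exp(t, signal_var, length_scale, noise_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, Y.T)              # T x n whitened data
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    n, T = Y.shape
    return -0.5 * np.sum(alpha ** 2) - 0.5 * n * (logdet + T * np.log(2 * np.pi))
```

Because the factorisation cost is O(T^3) once per group rather than per series, this is exactly the kind of saving the block structure buys.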
+{"text": "Accurate inference of causal gene regulatory networks from gene expression data is an open bioinformatics challenge. Gene interactions are dynamical processes and consequently we can expect that the effect of any regulation action occurs after a certain temporal lag. However, such a lag is unknown a priori, and temporal aspects require specific inference algorithms. In this paper we aim to assess the impact of taking temporal aspects into consideration on the final accuracy of the inference procedure. In particular we will compare the accuracy of static algorithms, where no dynamic aspect is considered, to that of fixed lag and adaptive lag algorithms in three inference tasks from microarray expression data. Experimental results show that network inference algorithms that take dynamics into account perform consistently better than static ones, once the considered lags are properly chosen. However, no individual algorithm stands out in all three inference tasks, and the challenging nature of network inference tasks is evidenced, as a large number of the assessed algorithms do not perform better than random. The measurement of gene expression levels, using microarrays or other high-throughput technologies, makes it possible to infer statistical dependencies between the expression of two genes. Some of these dependencies can be seen as a result of causal interactions, as the expression of a gene can influence the future expression of another gene. Several methods have been proposed to infer gene regulatory interactions from measured gene expression levels. Some of them are static, in the sense that they do not take temporal aspects into consideration, while others are designed in order to learn the dynamical aspects of the dependencies. Since gene interactions are not instantaneous, we expect that temporal aspects should shed light on the causal dependencies between genes. 
In other terms, if two genes are part of a regulatory interaction, their expression levels over time are expected to be correlated with a certain lag, and the time order is expected to elucidate the respective promoter/target roles. However, such a lag is unfortunately unknown a priori and should be properly learned from data. While dynamic approaches may appear more powerful than static ones because of their temporal representation, they are also more sensitive to the accuracy of the adopted lag. In machine learning jargon, this is known as a bias/variance trade-off. The adoption of temporal dynamic models makes the learner less biased but necessarily more exposed to high variance. In spite of this intuition, and although there are some comparisons between dynamic and static methods in the literature on gene regulatory networks, these are not systematic or extensive. Fixed lag models are dynamic approaches which restrict dependencies to run from variables at time t to variables at times later than t. Adaptive lag models are dynamic approaches which include an automatic estimation of a temporal lag for each pair of genes, e.g., by maximizing some dependence score. In order to make a fair comparison, all the assessed approaches (static and dynamic) are causal, in the sense that they infer directed interactions. For this reason, we propose in this paper an experimental setting to assess the role of dynamics on the accuracy of the inferred regulatory network. To this aim, we compare a number of state-of-the-art static and dynamic approaches on three challenging inference tasks. As state-of-the-art static approaches, we consider Bayesian networks and Gaussian graphical models (GGM). The first outcome of the study is that dynamic models perform consistently better than static ones. The second outcome is an interesting insight on the most probable interaction lag between gene expressions. Our results suggest that this lag can take values in the range of a few hours, and that temporal network inference models should be adjusted to incorporate this information.
In the next section we present the assessed network inference algorithms; the third section describes the experimental setting and is followed by the results and discussion. Q: Which kinds of biological networks have been inferred in the paper? A: 500 gene regulatory networks of 5 nodes were inferred for three species (E.coli, yeast, fruit fly). Networks were inferred from time series gene expression datasets. Q: How was the performance of each method scored? A: An AUPRC value was assigned to each inferred network. The AUPRC values of the 500 networks predicted by an inference method were averaged, and this value was used to score that method. Q: What are the main results described in the paper? A: The general performance of state-of-the-art network inference methods on the proposed task is weak. However, methods that take into account temporal information tend to perform better than static, non-temporal methods. The performance of temporal methods is expected to depend on the temporal sampling interval and on the sample size of the used time series. This fact is confirmed in our experiments, and we infer general conclusions on the proper use of temporal network inference methods. Two families of network inference algorithms, static and dynamic, are considered in this study and will be discussed in the following section (see the Table). Static network inference models do not take into account any information related to the temporal nature of the gene expression data. Two well-known examples are Bayesian networks and GGM. A Bayesian network is a graphical representation, by a directed acyclic graph, of a multivariate probability distribution, where nodes denote variables and edges denote variable dependencies. Under the faithfulness assumption for the probability distribution, there exists a bijective mapping between the conditional independencies of variables in the distribution and topological properties (d-separation) in the graph. 
The main advantages of a Bayesian network representation are its sparsity, the ease of interpretation and the availability of several inference algorithms. For further references on the estimation of Bayesian networks from biological data see Needham et al. A GGM is an undirected graph, where the presence of an edge indicates a non-zero partial correlation between two nodes given all the others. For each edge we assigned a score equal to 1 minus the respective p-value. In what follows, p is the number of genes and Xt is used to denote the value of the variable X at time t. We will distinguish dynamic models according to the approach used to define the lag between variables. A vector autoregressive model of order lmax (VAR(lmax)) models each gene Xt, at time t, as a linear function of all the genes at times t \u2212 l, where l = 1,.., lmax. A lag-one model is a VAR model where lmax is set to 1, linking the value of a node Xt \u2212 1 to a node Yt. The coefficients \u03b2 in (1) can be estimated by the Ordinary Least Squares algorithm (OLS), provided that there are enough samples. Alternatively, \u03b2 can be returned by a regularization algorithm, such as the Lasso. In our study we assessed three lag-one models, two of them penalty-constrained implementations of VAR(1) models, and one of them an implementation of a DBN. They are described below. Another fixed lag model is the Dynamic Bayesian Network (DBN). DBNs are modifications of Bayesian networks to model time series: each gene is represented by different nodes, at different time points. VAR1 + lars models the data from a VAR(1) perspective: a variable Xti is regressed using all the variables lagged by one time point: Xt \u2212 1j, j = 1 \u2026 p. As with the Lasso, a penalty term proportional to the L1 norm of the regressor coefficients is added to the model. The coefficients of the model are estimated using the lars algorithm, which computes the full path of solutions as the penalty decreases to 0 (corresponding to the OLS solution). 
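The lag-one regression just described can be sketched as follows, with plain OLS standing in for the penalised lars/Lasso estimation (the function name is illustrative):

```python
import numpy as np

def var1_scores(X):
    """Fit a lag-one vector autoregression X[t] ~ X[t-1] @ B by ordinary
    least squares, where rows of X are time points and columns are genes.
    The returned matrix S has S[i, j] scoring the directed interaction
    from gene j to gene i. (OLS stands in for the penalised lars/Lasso
    fit used in the text; no sparsity is enforced here.)"""
    past, present = X[:-1], X[1:]
    B, *_ = np.linalg.lstsq(past, present, rcond=None)
    return B.T
```

On data simulated from a known VAR(1) process, the recovered coefficient matrix approaches the true one as the series length grows.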
Using lars, for each gene we computed the coefficients of its predictors at the points (in the lasso path) where the coefficient of a predictor becomes non-zero and enters the model. We then computed the average of the coefficients of each predictor variable and used it as the directed dependence score between the predictor and the target gene. Our second implementation uses the R package simone. Every gene is assigned to the group of hubs or to the group of leaves, from an initial estimation. This initial estimation is done by computing a matrix of coefficients using the standard Lasso, and then grouping genes into hubs or leaves according to the L1 norm of the respective rows in the estimated coefficients matrix. The regressors are assigned one of two different weights, one for hubs and the other for leaves, which multiply the respective coefficients before they are used in the calculation of the penalty term. The idea behind this implementation is that interactions coming from hubs (transcription factors) should be less penalized than interactions coming from leaves. Simone returns a list of inferred networks for different values of the penalty weights. In the experiments here reported, we defined the score for an interaction as the number of times the interaction is associated with a non-zero coefficient in all the returned networks. The third lag-one model is implemented in the R package G1DBN. In its first step, each potential interaction is tested and a p-value is returned; the maximum of these p-values is considered as a score for the respective interaction. A threshold \u03b11 is defined, and edges with scores lower than it are selected. The second step of the algorithm starts with this graph and removes more edges: for each gene, the regression coefficient of the gene toward each of its parents, given all the other parents, is calculated. Each of these coefficients is assigned a p-value, in an analogous way as in the first step. A new threshold \u03b12 is defined, and only edges with p-values lower than \u03b12 are kept. 
In our experiments, we defined \u03b11 = 0.7, as it was the value used in the method's original proposal. We used several values for \u03b12, and for each of them an adjacency matrix was returned, with the estimated p-values for each possible interaction. For each interaction, 1 minus the average of the respective final p-values was used as the final score. Adaptive lag models are models where each possible interaction is assigned a score which is a function of an estimated temporal lag that hypothetically characterizes the interaction. The lag between two genes X and Y is estimated as the one which maximizes some dependence score, where the parameter lmax is the maximum allowed lag. The adaptive lag methods implemented are based on the measure of mutual information between two variables X and Y. Time-Delay ARACNE uses the data processing inequality to break up fully connected triplets. A binary adjacency matrix, indicating the predicted interactions, is returned. We defined various values for the threshold and obtained different adjacency matrices. Each interaction is assigned a score equal to the number of times the interaction has been predicted in the returned adjacency matrices. The parameter lmax was set to 6 time points. The time-lagged MRNET is the dynamic extension of the MRNET algorithm, which is based on the mRMR feature selection method. The first gene selected, for a target gene Y, is the one that has the highest mutual information toward the target gene. The next gene to be selected, Xj, is defined as the one which maximizes the mRMR score uj \u2212 rj, where uj represents the relevance of Xj toward Y and rj represents the redundancy of Xj with the previously selected genes in S. This process is repeated for all genes. To any pair of genes the MRNET algorithm assigns a score which is equal to the maximum between two mRMR scores: the mRMR score of the first when the second is the target, and the mRMR score of the second when the first is the target. 
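The adaptive lag estimation described above, choosing the lag in 1..lmax that maximises a dependence score, with mutual information computed from the Pearson correlation as I = -0.5 ln(1 - rho^2), can be sketched as:

```python
import numpy as np

def best_lag(x, y, lmax):
    """Pick the lag l in 1..lmax maximising the dependence between
    x(t) and y(t+l). As in the implementations described in the text,
    mutual information is estimated from the Pearson correlation via
    the Gaussian formula I = -0.5 * ln(1 - rho^2)."""
    def mi(a, b):
        rho = np.corrcoef(a, b)[0, 1]
        return -0.5 * np.log(1.0 - min(rho * rho, 0.999999))  # cap guards log(0)
    scores = {l: mi(x[:-l], y[l:]) for l in range(1, lmax + 1)}
    return max(scores, key=scores.get)
```

Note how shortening the overlapping segments to n - l samples at lag l is exactly the sample-size loss discussed later for high lmax values.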
The time-lagged MRNET is a modification of the MRNET inference algorithm. In the implementations here described, the mutual information used by the time-lagged MRNET and the time-lagged CLR was estimated with the Pearson correlation. The value for the maximum allowed lag parameter was varied in the experiments. The gene expression datasets were obtained from the Inferelator database repository, and include a Drosophila melanogaster time series of length 22 h. Only strong evidence interactions were selected. From these adjacency matrices, we generated small regulatory networks, containing only genes whose expression levels are measured in the respective dataset. For each dataset, 500 sub-networks of 5 nodes were randomly generated. Using the algorithms in the way that was described in the previous section, we obtained, for each algorithm and network, a square matrix of scores for all possible directed interactions (entry (i, j) represents the score of the interaction from gene i to gene j). For any pair of genes, only one interaction was kept, corresponding to the strongest direction. To assess the performance of an algorithm on a given network we used the AUPRC. Interactions were incrementally selected (from the highest to the lowest ranked), and at each selection, precision and recall values were computed. We assigned to each recall its highest associated precision. The AUPRC was estimated as the average precision, over all values of recall. For each algorithm and dataset, we averaged the AUPRC obtained for the 500 networks. The random baseline was estimated as the expected average AUPRC of a random ranking, on all networks (see Figure). Adjacency matrices with documented interactions for the three different species were obtained from Gallo et al. and Gama-Castro et al. A method was considered significantly better than random when the corresponding p-value was lower than 0.05. Of particular interest are the differences relative to the random ranking of interactions. Relative to the dataset Fly, dynamic models clearly outperform static models, which do not perform better than random. 
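The AUPRC computation described above (incremental selection from highest to lowest ranked scores, keeping the highest precision observed at each recall level, then averaging precision over the recall levels) can be sketched as:

```python
def auprc(scores, truth):
    """Area under the precision-recall curve, following the procedure
    in the text: sweep interactions from highest to lowest score,
    record precision and recall at each step, keep the highest
    precision at each recall level, and average over recall levels."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(truth)
    best = {}          # recall level -> highest precision seen there
    tp = 0
    for rank, i in enumerate(order, start=1):
        tp += truth[i]
        recall, precision = tp / pos, tp / rank
        best[recall] = max(best.get(recall, 0.0), precision)
    return sum(best.values()) / len(best)
```

A perfect ranking (all true edges first) scores 1.0; reversing the ranking drives the score toward the random baseline.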
In the dataset E.coli, the best performers are the time-lagged MRNET and the time-lagged CLR when lmax is set to 18 time points (corresponding to 3 h). Fixed lag models and static models perform similarly, with only one method performing better than random (VAR1+lars). Relative to the dataset Yeast, the best performers are G1DBN and Time-Delay ARACNE, and they are the only ones with a performance significantly better than random. As a control procedure, the ordering of the time points in the datasets was randomized, and the dynamic network inference methods were rerun (static models do not depend on the ordering of the samples). As expected, on all occasions the performance drops to the random level. The average AUPRC values for each algorithm and dataset can be seen in the Figure. On the dataset E.coli only three methods are better than random, and on the dataset Yeast there are only two. On the dataset Fly no static method performs better than random. The performance of some methods can thus be poor. This may be a result of the low number of samples of the datasets, or of the way the networks are generated and assessed, using gene regulatory interactions as a ground truth that may not be adequate, or representative of the interactions that are regulating gene expression. The best performers on all datasets are dynamic models. This suggests that incorporating temporal information is beneficial to the inference of gene regulatory interactions. On all datasets, static models do not perform better than random. The fact that the assessed dynamic models are computationally simpler than the static algorithms (particularly the ones estimating Bayesian networks) is another reason to prefer dynamic models over static ones when inferring networks from time series. If we estimate lagged dependencies over a long and unrealistic range of lags, it may happen that some genes that do not interact are eventually found to be correlated at some lag. 
This may be the reason behind the decrease in performance when lmax is set to high values. On the dataset Fly, a long time series where each time point corresponds to 30 min, setting lmax to high values decreases the performance. On the dataset E.coli, setting lmax to 18 time points greatly improves the performance. Here, 18 time points correspond to 3 h. This number may be an indication of the true range of values of gene interaction lags. Relative to the dataset Yeast, the performance decrease that is seen when setting lmax to 18 time points is likely to be a result of the fact that this dataset is composed of only 25 points. The number of samples used to estimate dependencies between genes varies from n to n \u2212 lmax, where n is the number of samples in the dataset. On datasets of a low n, setting lmax to a high value may greatly reduce the number of samples used in the estimations, and if this number is too low, the variance of the algorithm increases, which causes the estimation of high correlations between genes that in reality do not interact. This may be happening in the case of the dataset Yeast, of 25 time points: when lmax is 90 min, the number of points used is only 7. If we compare with the dataset E.coli, when lmax is set to the maximum of 180 min, the number of samples used is still 14. For the dataset Fly, the number of samples used at the maximum lmax of 9 h is 27. The performance of fixed lag models (lag being one time point) should be influenced by the interval length of the time series. These models should perform, relative to static models, better on time series with interval lengths similar to the true lags of interactions. It can be seen that fixed lag models perform consistently better than static models on the dataset Fly. The same cannot be said regarding the other two datasets, where static and fixed lag models perform similarly. 
This may indicate that fixed lag models, with lag equal to one, are more appropriate for modelling time series with a relatively high temporal step, in the order of 30 min, than for modelling time series of shorter steps. Some points can be drawn from the results presented. On the dataset Fly, performance decreases when lmax is set to 9 h. We suggest that this is due to the fact that, when setting lmax to such a high value, some interaction lags are estimated to be unrealistically high. This is confirmed in the Figure. On the dataset E.coli, there is a large proportion of interaction lags estimated to be between 130 and 180 min. The fact that there is a great performance increase when lmax is set to 180 min suggests that maybe some interactions are characterized by these large lag values. However, it is possible that these high estimated lag values are a result of a decrease in the number of samples used to estimate the lagged dependencies. This phenomenon is certainly happening in the dataset Yeast, where the number of samples used to estimate dependencies reduces to 25% of the time series length when lmax is set to 90 min, increasing the variance of the algorithm. Adaptive lag algorithms are based on the estimation of lags between pairs of genes. These should reflect in some way the true lags of the interactions. The estimated lag distributions are shown in the Figures. Only three gene expression datasets were used, each with its own distinct characteristics. Further validation of the results here presented should be made using other datasets, preferably with a higher number of samples, as they become more available to bio-statisticians. The inference of regulatory interactions was done on networks of 5 genes. All things being equal, the network inference models here presented will return lower AUPRC scores if the number of genes increases and the ratio of true edges to possible edges decreases: the inference task becomes more challenging. 
Network inference was assessed using interactions reported in the literature, which means some true interactions may be missing, and some reported interactions may be biologically nonexistent in the used datasets. The differences in the results obtained in the datasets are likely due to the characteristics of the time series, such as the temporal interval. Regarding the dynamic models, the advantage of the considered fixed lag models is that they directly estimate conditional dependencies, instead of being based on pairwise dependencies, as the considered adaptive lag models are. On the other hand, the advantage of the adaptive lag models is that they can potentially infer interactions characterized by higher and variable lags. Their performance depends on the maximum allowed lag, lmax, and care should be taken when defining this parameter: if it is set to an unrealistically high value, in the range of many hours, eventually interactions will be estimated at that range, hurting the network inference performance (we argue that this is seen in the results regarding the dataset Fly). If lmax is set to be equal to a high fraction of the length of the time series, lagged dependencies between genes will be estimated with a small number of samples, increasing the variance of the algorithm and decreasing its performance (this is seen in the results regarding the dataset Yeast). Relative to the lag of regulatory gene interactions, the fact that lag-one models (the fixed lag models) perform, compared with static models, better on a dataset with a temporal interval of 30 min than on datasets with lower temporal intervals (10 and 5 min) suggests that the range of lags of gene interactions is likely to be closer to 30 min than to 10 or 5 min. The experimental results also suggest that there may exist gene interactions characterized by a longer lag, in the order of a couple of hours. 
As a general set of rules, we conclude from the experiments here reported that dynamic methods should be used to predict interactions in time series; fixed lag methods should be used when the interval scale is high (30 min to hours); and adaptive lag methods should be used when the maximum allowed lag is set to high values (in the order of a couple of hours) and, in order to prevent excessive algorithm variance, the number of samples minus the maximum allowed lag is still high. Results obtained using three different datasets show that dynamic models perform better on the inference of gene regulatory interactions from time series than static models such as Bayesian networks. This is explained by the inclusion of beneficial temporal information. Nevertheless, the overall performance of the assessed models is poor: only three and two models outperformed random in the E.coli and Yeast datasets, respectively. Miguel Lopes designed and implemented the experimental run, and contributed to the writing of the paper. Gianluca Bontempi supervised the study and contributed to the writing of the paper. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
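The random baseline referred to in these experiments (the expected score of a random ranking of interactions) can be estimated by Monte Carlo. The sketch below uses a plain average-precision sweep rather than the papers' exact AUPRC interpolation, so the numbers are indicative only:

```python
import random

def average_precision(ranked_truth):
    """Precision averaged over the positive hits of a ranked 0/1 list."""
    tp, total = 0, 0.0
    for rank, t in enumerate(ranked_truth, start=1):
        if t:
            tp += 1
            total += tp / rank
    return total / tp if tp else 0.0

def random_baseline(n_edges, n_true, trials=2000, seed=0):
    """Monte-Carlo estimate of the expected score of a random ranking,
    i.e. the baseline the assessed inference methods must beat."""
    rng = random.Random(seed)
    truth = [1] * n_true + [0] * (n_edges - n_true)
    total = 0.0
    for _ in range(trials):
        rng.shuffle(truth)
        total += average_precision(truth)
    return total / trials
```

For the 5-node networks used in the study (20 possible directed edges), a random ranking already scores around the fraction of true edges, which is why several assessed methods fail to clear it.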
+{"text": "Reverse engineering of gene regulatory networks has been an intensively studied topic in bioinformatics, since it constitutes an intermediate step from explorative to causative gene expression analysis. Many methods have been proposed in recent years, leading to a wide range of mathematical approaches. In practice, different mathematical approaches will generate different resulting network structures; thus, it is very important for users to assess the performance of these algorithms. We have conducted a comparative study with six different reverse engineering methods, including relevance networks, neural networks, and Bayesian networks. Our approach consists of the generation of defined benchmark data, the analysis of these data with the different methods, and the assessment of algorithmic performance by statistical analyses. Performance was judged across network sizes and noise levels. The results of the comparative study highlight the neural network approach as the best performing method among those under study. Deciphering the complex structure of transcriptional regulation of gene expression by means of computational methods is a challenging task that has emerged in the last decades. Large-scale experiments, not only gene expression measurements from microarrays but also promoter sequence searches for transcription factor binding sites and investigations of protein-DNA interactions, have spawned various computational approaches to infer the underlying gene regulatory networks (GRNs). Identifying interactions yields an understanding of the topology of GRNs and, ultimately, of the molecular role of each gene. On the basis of such networks, computer models of cellular systems are set up and in silico experiments can be performed to test hypotheses and generate predictions on different states of these networks. Furthermore, an investigation of the system behavior under different conditions is possible. 
The basic assumption of most reverse engineering algorithms is that causality of transcriptional regulation can be inferred from changes in mRNA expression profiles. One is interested in identifying the regulatory components of the expression of each gene. Transcription factors bind to specific parts of DNA in the promoter region of a gene and, thus, affect the transcription of the gene. They can activate, enhance, or inhibit the transcription. Changes in abundances of transcription factors cause changes in the amount of transcripts of their target genes. This process is highly complex, and interactions between transcription factors result in a more interwoven regulatory network. Besides the transcription factor level, transcriptional regulation can be affected as well on DNA and mRNA levels, for example, by chemical and structural modifications of DNA or by blocking the translation of mRNAs by microRNAs. Several reverse engineering methods have been proposed in recent years which are based on different mathematical models, such as Boolean networks and linear models. In order to perform a comparative study we have chosen six reverse engineering methods proposed in the literature based on different mathematical models. We were interested in applications for the analysis of time series. The methods should be freely downloadable, easy to use, and have only a few parameters to adjust. We included two relevance network methods, among them the application ARACNe by Basso et al., which is based on mutual information. Artificial data has been used because validation and comparison of performances of algorithms have to be accomplished under controlled conditions. It would have been desirable to include experimentally determined gold standard networks that represent the knowledge of all interactions validated by single or multiple experiments. Unfortunately, there are not enough gold standard networks and appropriate experimental data available for a large comparative study.
For such a study one needs a sufficiently large amount of data of different sizes and different types, that is, steady state or time series, from different experiments, for example, overexpression, perturbation, or knockdown experiments. Therefore we performed in silico experiments to obtain the required data for our performance tests. Quackenbush pointed out the importance of such artificial benchmark data. An artificial data generator has to be independent of the reverse engineering algorithms to avoid a bias in the test results. In addition, the underlying artificial GRN of a data generator has to capture certain features of real biological networks, such as the scale-free property. For this study we used the web application GeNGe for the generation of artificial networks and data. Having specified artificial networks, the computed and the true networks can be compared and algorithmic performance can be assessed with statistical measures. We used various measures in this study, such as sensitivity, specificity, precision, a distance measure, receiver operator characteristic (ROC) curves, and the area under ROC curves (AUCs). By means of these measures we characterized the reverse engineering method performances. It is shown that the sensitivity, specificity, and precision of all analyzed methods are low under the conditions of this study. Averaged over all results, the neural network approach shows the best performance. In contrast, the Bayesian network approaches identified only a few interactions correctly. We tested different sets of data, including different sizes and noise levels, to highlight the conditions for better performance of each method. A variety of reverse engineering methods has been proposed in recent years. Usually a computational method is based on a mathematical model with a set of parameters. These model-specific parameters have to be fitted to experimental data. The models vary from a more abstract to a very detailed description of gene regulation.
They can be static or dynamic, continuous or discrete, linear or nonlinear, deterministic or stochastic. An appropriate learning technique has to be chosen for each model to find the best fitting network and parameters by analyzing the data. Besides these model-driven approaches, for example, followed by Bayesian networks and neural networks, there are statistical approaches to identify gene regulations, for example, relevance networks. For this study we have chosen reverse engineering applications which belong to one of the following classes: relevance networks, graphical Gaussian models, Bayesian networks, or neural networks. In this section we will give an overview of the basic models and discuss the applications we used. All software can be downloaded or obtained from the algorithm developers. An overview is given in Table. Methods based on relevance networks are statistical approaches that identify dependencies or similarities between genes across their expression profiles. They do not incorporate a specific model of gene regulation. In a first step correlation is calculated for each pair of genes based on different measures, such as Pearson correlation, Spearman correlation, and mutual information. The widely used Pearson correlation indicates the strength of a linear relationship between the genes. In contrast, Spearman's rank correlation, like mutual information, can also detect nonlinear correlations. It is assumed that a nonzero correlation value implies a biological relationship between the corresponding genes. The algorithm ARACNe developed by Basso et al. uses the mutual information measure. An inferred network from a relevance network method is undirected by nature. Furthermore, statistical independence of each data sample is assumed, that is, measurements of gene expression at different time points are assumed to be independent. This assumption ignores the dependencies between time points.
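The relevance-network step described above, pairwise correlation followed by thresholding, can be sketched in a few lines. This is a generic illustration, not ARACNe itself (which uses mutual information and prunes indirect edges); numpy is assumed and the threshold value is made up:

```python
import numpy as np

def relevance_network(X, threshold=0.8):
    """Pairwise Pearson-correlation relevance network.

    X: (genes x samples) expression matrix.
    Returns a boolean adjacency matrix (undirected, no self-loops); an edge
    is predicted when |correlation| exceeds the threshold. The threshold
    here is illustrative, not a value used in the study.
    """
    corr = np.corrcoef(X)            # gene-by-gene Pearson correlation
    adj = np.abs(corr) > threshold   # keep strong (anti-)correlations
    np.fill_diagonal(adj, False)     # drop self-correlations
    return adj

# toy data: gene 1 closely tracks gene 0, gene 2 is independent noise
rng = np.random.default_rng(0)
g0 = rng.normal(size=50)
X = np.vstack([g0, g0 + 0.1 * rng.normal(size=50), rng.normal(size=50)])
adj = relevance_network(X)
```

Spearman correlation, mentioned in the text as more robust, would simply rank-transform each row of `X` before calling `np.corrcoef`.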
Nevertheless, we applied these methods on simulated time series data to study the predictive power of these approaches. Graphical Gaussian models are frequently used to describe gene association networks. They are undirected probabilistic graphical models that allow to distinguish direct from indirect interactions. Graphical Gaussian models behave similarly to the widely used Bayesian networks. They provide conditional independence relations between each gene pair. But in contrast to Bayesian networks, graphical Gaussian models do not infer causality of a regulation. Graphical Gaussian models use partial correlation conditioned on all remaining genes in the network as a measure of conditional independence. Under the assumption of a multivariate normal distribution of the data, the partial correlation matrix is related to the inverse of the covariance matrix of the data. Therefore the covariance matrix has to be estimated from the given data and to be inverted. From that the partial correlations can be determined. Afterwards a statistical significance test of each nonzero partial correlation is employed. We used the graphical Gaussian implementation GeneNet by Sch\u00e4fer and Strimmer. A neural network can be considered as a model for gene regulation where each node in the network is associated with a particular gene. The value of the node is the corresponding gene expression value. A directed edge between nodes represents a regulatory interaction with a certain strength indicated by the edge weight. The parameters of the model are the weights of the time-discrete neural network. A learning strategy for the parameters is the Backpropagation through time (BPTT) algorithm described by Werbos. A Bayesian network is fitted to the data by varying the parameters of the model, which represent the topology and a family of conditional probability distributions. In contrast to other models, nodes represent random variables and edges conditional dependence relations between these random variables.
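The partial-correlation computation described above can be sketched as follows. This is the textbook relation between partial correlations and the inverse covariance (precision) matrix; GeneNet's shrinkage covariance estimator and significance testing are omitted, and the variable names are mine:

```python
import numpy as np

def partial_correlations(X):
    """Partial correlations of each gene pair conditioned on all remaining genes.

    X: (genes x samples) expression matrix. Uses the standard relation
    pcor_ij = -P_ij / sqrt(P_ii * P_jj), with P the inverse of the sample
    covariance matrix. GeneNet additionally applies a shrinkage estimator of
    the covariance, which this plain sketch omits.
    """
    P = np.linalg.inv(np.cov(X))   # precision matrix
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)     # rescale off-diagonal entries
    np.fill_diagonal(pcor, 1.0)    # convention: 1.0 on the diagonal
    return pcor
```

On a chain x -> y -> z, the marginal correlation of x and z is high, but their partial correlation given y is near zero, which is exactly the "direct versus indirect" distinction the text describes.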
A dynamic Bayesian network is an unfolded static Bayesian network over discrete time steps. Assuming that nodes are only dependent on direct parents in the previous time layer, the joint probability distribution of a dynamic Bayesian network can be factorized as P(X(1), ..., X(T)) = P(X(1)) prod_{t=2..T} prod_i P(X_i(t) | Pa_i(t-1)), where Pa_i denotes the parents of node i in the previous time layer. For discrete random variables the conditional probability distributions can be multinomial. With such a distribution nonlinear regulations can be modeled, but a discretization of continuous data is needed. The number of parameters in such a model increases exponentially with the number of parents per node. Therefore, this number is often restricted by a maximum. We used the program package Banjo by Yu et al.; see Yu et al. for more details. A further reverse engineering approach is a state space model. These constitute a class of dynamic Bayesian networks where it is assumed that the observed measurements depend on some hidden state variables. These hidden variables capture the information of unmeasured variables or effects, such as regulating proteins, genes excluded from the experiments, degradations, external signals, or biological noise. A state space model is proposed by Sch\u00e4fer and Strimmer. For the comparative study of reverse engineering methods we generated a large amount of expression profiles from various GRNs and different datasets. We performed in silico perturbation experiments by varying the initial conditions randomly within the network and data generator GeNGe. In a first step we generated random scale-free networks in GeNGe to obtain GRNs of different sizes. Directed scale-free networks are generated in GeNGe with an algorithm proposed by Bollob\u00e1s et al. In the GRN models we used the logic described by Schilstra and Nehaniv. A regulation strength is assigned to each interaction.
Time series of mRNAs are obtained by first drawing randomly the initial concentrations of each component of the model from a normal distribution with the steady state value of this component as mean and a given standard deviation. To simulate experimental errors we added Gaussian noise with different coefficients of variation (cvs) to each expression value in a final step of data generation. The mean of the Gaussian distribution is the unperturbed value. The cv represents the level of noise. We investigated the impact of different numbers of time series of mRNAs and noise levels on the reconstruction results. For this study we generated randomly five networks of sizes 5, 10, 20, and 30 nodes each. For each network we simulated 5, 10, 20, 30, and 50 time series by repeating the simulation accordingly with different initial values, as described above. For a network of size ten and ten time series, the data matrix contains 500 values. The definitions in (11) and (12) then reduce to the usual definitions of sensitivity and specificity, respectively; the modified measures are denoted accordingly. To obtain a single value measure for one result we calculated the combined distance measure defined in (7). Rather than selecting an arbitrary threshold for discretizing the resulting matrices it is convenient to use the curves of sensitivity versus specificity or precision versus recall over a range of thresholds. We accomplished a systematic evaluation of the performances of six different reverse engineering applications using artificial gene expression data. In the program package ParCorA there are seven correlation measures implemented, including Pearson and Spearman correlation of different orders, which we all used.
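The edge-wise validation measures described above can be sketched as follows. This is a plain illustration with my own variable names; the single-value summary shown is the common Euclidean distance to the ideal point sensitivity = specificity = 1, which may differ from the paper's definition (7):

```python
import numpy as np

def network_measures(pred, true):
    """Sensitivity, specificity, precision and a combined distance measure.

    pred, true: boolean adjacency matrices of the same shape.
    Self-loops (diagonal entries) are ignored.
    """
    off = ~np.eye(true.shape[0], dtype=bool)   # off-diagonal mask
    tp = np.sum(pred & true & off)             # correctly predicted edges
    fp = np.sum(pred & ~true & off)            # spurious edges
    fn = np.sum(~pred & true & off)            # missed edges
    tn = np.sum(~pred & ~true & off)           # correctly predicted non-edges
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    # one common single-value summary: distance to the ideal point (1, 1)
    dist = np.sqrt((1 - sens) ** 2 + (1 - spec) ** 2)
    return sens, spec, prec, dist
```

Sweeping the discretization threshold and recording (sensitivity, 1 - specificity) pairs yields the ROC curves, and hence the AUCs, that the study reports.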
600 datasets, with different numbers of genes, dataset sizes, and noise levels, were analyzed by each of the total twelve applications. For all relevance network methods, the graphical Gaussian model, and the neural network we determined an optimized threshold for discretization of the results considering all datasets. The thresholds are listed in Table. The averaged reconstruction performances over all datasets with regard to different validation measures are given in Table. None of the reconstruction methods outperforms all other methods. Further, no method is capable of reconstructing the entire true network structure for all datasets. In particular sensitivity and precision are low for all methods. A low precision means that among the predicted regulations, there are only a few true regulations. In the study the precision is always lower than 0.3. This is due to the fact that several input datasets carry a high error level. For example, the input data includes time series with noise up to 50%. Banjo found the fewest true undirected links. Though the distance measure for Banjo changes only slightly, it remains at a very large value. This indicates a poor reconstruction performance. SSM shows a similar behavior. Among all partial Pearson correlation methods, the 2nd order outperforms the others. It has a slightly better performance measure under all conditions. This is similar to the 2nd order of partial Spearman correlation, which shows the best performance in all plots; its distance measure is always below that of the best partial Pearson correlation. The comparative study shows that the performances of the reverse engineering methods tested here are still not good enough for practical applications with large networks. Sensitivity, specificity, and precision are always low. Some methods predict only few gene interactions, such as DBN, indicated by a low sensitivity and, in contrast, other methods identify many false regulations, such as the correlation measures.
We tested different sets of data, including different sizes and noise levels, to highlight the conditions for better performance of each method. DBN performs poorly on all datasets. Under no condition of the study does it show an appropriate performance. The specificity is always very large, but with a very low sensitivity. Only very few regulations were identified and the performance does not improve with larger datasets. It is known that Banjo requires large datasets for better performances. The neural network approach shows the best results among all methods tested. It has a balance between true positives and true zeros. This is due to the appropriately chosen threshold for the postprocess discretization. Nevertheless, NN predicts many regulations and many of them are incorrect, that is, it has many false regulations. Even with a large number of datasets, a complete reconstruction is not possible. The assumption of statistical independence of each time point measurement is not satisfied, although ARACNE does not perform all that badly. With larger datasets the performance increases, but decreases, as expected, with noisy data. The Spearman correlation is a nonparametric measure for correlation. It does not make any assumption about the probability distribution of the data. In this study it outperforms the Pearson correlation, which can only detect linear relationships. It seems that the rank correlation is more appropriate for analyzing time series data because of its robustness against noisy data. A crucial point in the determination of the distance measures is the chosen threshold for discretization of the resulting matrices of continuous real values. The optimal thresholds are shown in Table. Some aspects have not been addressed in this study and can be investigated further. It would be interesting to see the performances for larger network sizes (more than 50 nodes). Some methods, such as GGM, should then show increased performances.
Further, many applications are not suitable for analyzing such large datasets. For these methods a reduction of the dimension of the data has to be performed in order to obtain datasets of appropriate sizes. Different reduction methods can be investigated for that. Moreover, it would be interesting to see the impact of missing data on the reconstruction results, since real experiments often do not include all genes in the dataset. It is shown that the reliable reconstruction of a whole GRN is still an ambitious goal and needs further progress. For that, the quality and quantity of gene expression measurements have to be improved as well as the performance of current or new algorithms. Benchmarks with realistic artificial data have to identify those methods which show the best results under different conditions."}
+{"text": "Here, Self-Adaptive Differential Evolution, a versatile and robust Evolutionary Algorithm, is used as the learning paradigm. A gene regulatory network is an abstract mapping of gene regulations in living cells that can help to predict the system behavior of living organisms. Such prediction capability can potentially lead to the development of improved diagnostic tests and therapeutics. DNA microarrays, which measure the expression level of thousands of genes in parallel, constitute the numeric seed for the inference of gene regulatory networks. In this paper, we have proposed a new approach for inferring gene regulatory networks from time-series gene expression data and have proved its strength in finding the correct regulations. The strength of this work has also been verified by analyzing the real expression dataset of the SOS DNA repair system in Escherichia coli, where it has succeeded in finding more correct and reasonable regulations as compared to various existing works. To assess the potency of the proposed work, a well known nonlinear synthetic network has been used. The reconstruction method has inferred this synthetic network topology and the associated regulatory parameters with high accuracy from both the noise-free and noisy time-series data. For validation purposes, the proposed approach is also applied to the simulated expression dataset of cAMP oscillations in Dictyostelium discoideum. By the proposed approach, the gene interaction networks have been inferred in an efficient manner from the synthetic data, the simulated cAMP oscillation expression data, and the real expression data. The computational time of this approach is also considerably smaller, which makes it more suitable for larger network reconstruction. Thus the proposed approach can serve as a starting point for future research in the associated area. Gene Regulatory Networks (GRNs) are the functioning circuitry in living organisms at the gene level.
A GRN is regarded as an abstract mapping of the more complicated biochemical network which includes other components such as proteins, metabolites, etc. The purpose of a GRN is to represent the regulation rules underlying gene expression. Understanding GRNs can provide new ideas for treating complex diseases and breakthroughs for designing new drugs. Expression data are obtained by microarray technology. Using this method, expression levels of thousands of genes can be measured simultaneously, as they change over time and are affected by different stimuli. Thereby, it is possible to obtain a global view of the dynamic interaction among genes. But it is a greatly challenging problem to discover these networks of interacting genes that generate the fluctuations in the gene expression levels. Inference of GRNs based on microarray data is referred to as reverse engineering. A widely used nonlinear formalism is the S-system, dXi/dt = \u03b1i \u03a0j Xj^gi,j - \u03b2i \u03a0j Xj^hi,j, where Xi is the expression level of gene-i. The exponential parameters gi,j and hi,j are the interactive effect of Xj on Xi, which are also referred to as kinetic orders. Here, n is the number of genes or system components, and the model is defined by the parameter set (\u03b1, \u03b2, g, h). But the major disadvantage of the S-system is the large number of parameters to be estimated: 2n(n + 1). Because the number of S-system parameters is proportional to the square of the number of network components, the algorithms must simultaneously estimate a large number of S-system parameters if they are to be used to infer large-scale network systems containing many network components. Thus, the regression task becomes difficult and time consuming as there is a large parameter space to be optimized with this nonlinear formalism. This is why inference algorithms based on the S-system model have only been applied to small-scale networks. So, to overcome the problems regarding nonlinear models, in this research we have considered a linear model. These models are very simple and can be applied to very large-scale networks.
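A minimal numerical illustration of the S-system formalism, dXi/dt = \u03b1i \u03a0j Xj^gi,j - \u03b2i \u03a0j Xj^hi,j, using simple Euler integration; the two-gene parameter values below are invented for illustration and are not from the paper:

```python
import numpy as np

def s_system_step(X, alpha, beta, g, h, dt=0.01):
    """One Euler step of the S-system ODE for n genes.

    dX_i/dt = alpha_i * prod_j X_j**g[i,j] - beta_i * prod_j X_j**h[i,j]
    """
    prod_g = np.prod(X ** g, axis=1)   # production terms
    prod_h = np.prod(X ** h, axis=1)   # degradation terms
    return X + dt * (alpha * prod_g - beta * prod_h)

# invented 2-gene system: gene 0 activates gene 1, both self-degrade
alpha = np.array([1.0, 1.0])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, 0.0], [0.5, 0.0]])   # gene 1's production depends on gene 0
h = np.array([[1.0, 0.0], [0.0, 1.0]])   # first-order self-degradation
X = np.array([0.5, 0.5])
for _ in range(2000):
    X = s_system_step(X, alpha, beta, g, h)
```

With n genes, each gene contributes one alpha, one beta, n kinetic orders g and n kinetic orders h, which is where the 2n(n + 1) parameter count in the text comes from.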
In the case of all linear models, the regulatory interactions among the genes are represented by a weight matrix, W, where each row of W represents all the regulatory inputs for a specific gene. There are mainly two different types of models that lie under this category. These models differ by means of their ability to handle nonlinearity. The gene network inference problem based on the S-system model is defined as an estimation problem of the S-system parameters for extracting the relationships among genes more clearly, that is, constructing a gene network which resembles the true interactions of the genes in a genome. As both quantity and quality of experimental data improve, we aim at a more biologically plausible, faithful reconstruction of the target network. The parameter ranges of \u03b1 and \u03b2 are set to their respective intervals, and likewise for \u03d5 and \u03c9. For all the individuals of the population, initially F = 0.5 and CF = 0.9 are considered. Later on, due to the self-adaptation, the inference method automatically adjusts these control parameters for each individual. Here, the method has been experimented with a population size of 200. The termination criterion for the algorithm is the maximum number of generations to run, and the maximum generation number is set to 10,000. In accordance with the linear time-variant approach, an individual is represented by a candidate set of parameters {\u03b1, \u03b2, \u03c9, \u03d5}. Our algorithm has been implemented in the C programming language. The time required for solving a typical run of the associated GRN problem is approximately 4.0 minutes on a PC with a 2.4 GHz Intel Pentium IV processor and 512 MB of RAM. The program has been run with the same experimental setup for 10 runs. The set of parameters {\u03b1, \u03b2, \u03c9, \u03d5} estimated by our algorithm on noise-free data sets in a typical run is shown in Table. Using these parameters the weight matrix W is obtained by employing Equations 2 and 3.
The W matrix provides information about relationships among genes and can be used to construct the underlying gene expression network. The inferred W matrix from the noise-free time-series data is shown in Table. The sensitivity (Sn) and the specificity (Sp) averaged over 10 runs are 1 and 0.846, respectively. The MSE between the time-series produced by the underlying model and the observed time-series data as defined by Equation 5 is on average 10^-4. This result demonstrates the strength of the proposed framework in inferring an artificial gene network from noise-free time-series data. The real challenge lies in the capability of the inference algorithm to construct the network from noisy data. We have therefore also analyzed the performance of our proposed approach by conducting the experiments with sets of 5% and 10% noisy time series data. In all the cases discussed, the current work requires 4.16 minutes to predict the GRN, which is a very small time on the above-mentioned PC configuration. As the target model we select the same network used in the previous experiment with the same target parameter set. Data points were generated using the same sets of initial expression values used in the previous experiment. Two different experiments have been conducted, with 10 different 5% and 10% noisy data sets each. Like before, 11 sampling points from each of these 10 time-series data sets have been used for optimization. Both of these experiments are also conducted with 10 runs using the setup described in the previous section. In a typical run on the 5% noisy data, the inferred weight matrix W is presented in Table; Sn and Sp are 1 and 0.769, respectively. The MSE is approximately 0.56. The weight matrix W inferred in a typical run using the 10% noisy time-series data is presented in Table; Sn and Sp averaged over all 10 runs are 0.916 and 0.667, respectively. The MSE is approximately 2.75, considering the average of all such values over 10 runs.
The observed and estimated time dynamics of all 5 genes obtained in this case are shown in Figure. Dictyostelium discoideum cells signal each other by emitting stable oscillations of cAMP at the beginning of the aggregation phase of their development. The oscillations continue during chemotaxis towards higher gradients of cAMP concentration as D. discoideum aggregate to survive. In 1998 a model for the biomolecular network underlying the stable oscillations in cAMP was postulated by Laub and Loomis. The parameter ranges of \u03b1 and \u03b2 are set as before; for both \u03d5 and \u03c9, the parameter range is set likewise. The above 7 genes are monitored with 10 instants in 5 data sets. That is, having 10 measurements in each data set, there exist 10 \u00d7 5 = 50 sampling points for each gene. In total 10 runs have been carried out to assure the statistical significance of the search method. All of the other experimental conditions are the same as those used in the experiments discussed in the previous section. The expression data of this cAMP oscillation have been generated using Loomis's model. Here, we have simulated noise-free and 5% noisy cAMP oscillation expression data. In different runs of these experiments, different regulations are predicted. So the Z-score (average/standard-deviation), a statistical approach, has been applied to analyze which regulations are more significant and less diverse than others. Only those regulations whose Z-score is above the threshold th = 1.0 have been considered; the value of this threshold was set empirically. In a typical run, the inferred weight matrix W from the noise-free oscillation data is presented in Table; Sn and Sp averaged over all 10 runs are 1.0 and 0.811, respectively. The MSE is approximately 0.051622, considering the average of all such values over 10 runs. The observed and estimated oscillations of all 7 genes obtained by the underlying method are shown in Figure. The inferred weight matrix W from the 5% noisy oscillation data is presented in Table; Sn and Sp are 1.0 and 0.771, respectively.
The MSE is approximately 1.31961 averaged over 10 runs. Along with correct prediction, the main feature of the proposed approach is its low computational time. It requires 2.46 minutes to infer the GRN of the cAMP oscillation on the above-mentioned PC configuration. This is very small compared to many other works. We next consider the SOS DNA repair system in Escherichia coli, as shown in Figure. In this system the repressor protein LexA represses the SOS genes, including the lexA and recA genes. This repression is done by its binding to the interaction sites in the promoter regions of these genes. When DNA damage occurs, one of the SOS proteins, RecA, acts as a sensor. By binding to single-stranded DNA, it becomes activated, senses the damage and mediates LexA autocleavage. The drop in LexA levels, in turn, halts the repression of the SOS genes and activates them. When the damage has been repaired, the level of activated RecA drops and it stops mediating LexA autocleavage. LexA in turn accumulates and represses the SOS genes, and the cell returns to its initial state. The parameter ranges of \u03b1 and \u03b2 are set as before, whereas \u03c9 and \u03d5 are set to the same values as used in the previous experiments. In total 10 runs have been carried out to assure the statistical significance of the search method. All of the other experimental conditions are the same as those used in the experiments discussed in the previous section. In this research, only 6 genes have been chosen from Alon's experiment data, as also analyzed by Cho et al. and Kimura et al., and the threshold is th = 1.0. The corresponding SOS network structure inferred by the proposed approach is shown in Figure. As the input data is from an actual microarray experiment, there is noise present in it. Nobody knows how much noise is inherent in these data, and it may have had an influence on the inference algorithm. In this research, due to this noise level, the results have been much dispersed. In different runs of the experiment, different regulations have been predicted.
Only those regulations have been considered whose Z-score value is above the threshold Z. The regulations of lexA to uvrD, lexA, umuD, recA and uvrA have been successfully identified. The regulation of lexA by recA has also been correctly identified. Regulation of lexA by umuD is also known. The regulation of umuD by recA, inferred by the proposed method, also appears to be reasonable, as it is contained in a network now known. The inferred interactions also include recA \u22a3 uvrA. The observed and estimated time dynamics of all 6 genes in the SOS system obtained by the underlying method are shown in Figure. The proposed method required considerably fewer generations to infer this network than the method proposed in earlier work (Table). Since it is difficult for the proposed method to control the number of regulations inferred, the obtained networks would generally contain a number of false positive regulations. To encourage biologists to use the results obtained from this method, it is necessary to ensure the reliability of the inferred regulations which are unknown. In a future work, the proposed method will be modified for this purpose. The performance of the proposed framework makes it more applicable to the problem of reverse engineering of gene networks. Amongst reverse engineering approaches the linear time-variant model is of particular interest as it is capable of discovering the nonlinear relationships among genes like other nonlinear formalisms while dealing with noisy gene expression data. To infer an optimized network structure, here DE is used as the optimization engine. The proposed framework has been first verified by synthetic data and then its effectiveness is confirmed by analyzing simulated cAMP oscillation data and the real time series expression data of the SOS DNA repair system. In the real network analysis, the present work has succeeded in finding several reasonable regulations as compared to the other existing methods.
In all the cases, even with the presence of noise, the current work has inferred almost all the correct regulations. Thus, along with some future enhancements, this work can boost systems biology research. For a gene regulatory network consisting of n genes, the mathematical formalism of this model is given by the equation Zi(t) = \u03a3j Wi,j(t) Xj(t), where Zi(t) is the total regulatory input to gene-i and Xi(t) is the level of expression of gene-i at time t. The W matrix provides information about relationships among genes and can be used to construct the underlying gene expression network. The weight coefficient Wi,j indicates the strength of the influence of gene-j on the regulation of gene-i and is the respective element of the transition matrix W. A positive value of Wi,j means gene-j is inducing gene-i whereas a negative value is an indication of repression. On the other hand, a zero value in W indicates that gene-j does not influence the transcription of gene-i. Equation 2 shows that Wi,j is a time-varying function. Here, Wi,j(t) can be written as a finite Fourier series, Wi,j(t) = \u03b2i,j + \u03b1i,j sin(\u03c9t + \u03d5i,j), where \u03b1i,j, \u03c9, \u03d5i,j and \u03b2i,j are the constants to be determined for i = 1, 2, ..., n and j = 1, 2, ..., n. These values are the model parameters. Thus the linear time-variant model is defined by the parameter set {\u03b1, \u03b2, \u03c9, \u03d5}. In Equation 3, \u03b2i,j represents the linear part of the interactions and the sinusoidal term approximates the nonlinear terms in the interactions. The response of gene-i to the regulatory input is the expression level at time t + 1, i.e., Xi(t + 1). Thus, the value of Xi(t + 1) is obtained by normalizing Zi using the \"squashing\" function Xi(t + 1) = 1/(1 + e^-Zi(t)), so that the value of Xi(t + 1) lies between 0 and 1. The set of parameter values \u03b1, \u03b2, \u03c9 and \u03d5, in many cases, will not be uniquely determined. The reason behind this is that it is highly possible for other sets of parameter values to show a similar time-course.
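The linear time-variant update described above can be sketched directly. This is my reading of the text's description (the single-sinusoid Fourier form and the logistic squashing function are reconstructions, and the toy weights below are invented):

```python
import numpy as np

def w_matrix(t, alpha, beta, omega, phi):
    """Time-varying weights W_ij(t) = beta_ij + alpha_ij * sin(omega*t + phi_ij)."""
    return beta + alpha * np.sin(omega * t + phi)

def step(X, t, alpha, beta, omega, phi):
    """One update: Z_i(t) = sum_j W_ij(t) X_j(t); X_i(t+1) = 1/(1 + exp(-Z_i))."""
    Z = w_matrix(t, alpha, beta, omega, phi) @ X   # total regulatory input
    return 1.0 / (1.0 + np.exp(-Z))                # squashing keeps X in (0, 1)

# toy 2-gene example with the sinusoidal part switched off (alpha = 0)
alpha = np.zeros((2, 2))
phi = np.zeros((2, 2))
omega = 1.0
beta = np.array([[0.0, 2.0], [-2.0, 0.0]])   # gene 1 induces gene 0; gene 0 represses gene 1
X = np.array([0.5, 0.5])
Xn = step(X, 0.0, alpha, beta, omega, phi)
```

With alpha set to zero the model reduces to the purely linear (time-invariant) case; nonzero alpha lets the effective weights oscillate over time, which is how the model approximates nonlinear interactions.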
Therefore, even if one set of parameter values that matches the observed time-series is obtained, this set may be only one of the best candidates that explains the observed time-series values. The main strategy is to explore and exploit these candidates within the huge search space of parameter values. For searching an optimal set of parameters for gene networks, the most commonly used fitness evaluation criterion is to compute the difference between the calculated expression levels and the observed experimental dynamics. This is termed the Mean Squared Error (MSE), which was introduced by Tominaga et al. In the current study, the gene network estimation problem has been formulated as a function optimization problem based on a linear time-variant approach. In an optimization problem, one is more interested in finding the network that best fits the experimental data in the exploration of the search space. However, when adequate time-series expression values of relevant genes are given, a set of parameter values can be estimated. The fitness is MSE = \u03a3k \u03a3i \u03a3t (Xk,i,exp(t) - Xk,i,cal(t))^2, where Xk,i,exp(t) and Xk,i,cal(t) represent the experimentally observed and numerically computed expression level of gene-i in the k-th data set at time t, respectively. Because of nonlinearity in gene regulations and the curse of dimensionality, a genetic algorithm can be applied to approximate the optimal solution in the solution space and can learn the network structure. Here, as the GA, the self-adaptive version of the Differential Evolution (DE) method has been used. Differential Evolution (DE) is a simple population-based, stochastic search algorithm for global optimization created by Ken Price and Rainer Storn. DE maintains NP individuals as a population for each generation, where NP is the total number of individuals in a population and G denotes the generation. The initial vector population is generally chosen randomly between the lower bounds (Xi,low) and upper bounds (Xi,high) defined for each variable Xi to cover the entire parameter space. These bounds are specified by the user according to the nature of the problem.
NP does not change during the optimization process. The initial vector population is generally chosen randomly between the lower and upper bounds so as to cover the whole search space. The algorithm is given as follows: 1. Initialize the population PG covering the whole search space with candidate solutions, where G = 1 and each individual with parameters {\u03b1, \u03b2, \u03c9, \u03d5} is randomly generated. 2. Evaluate the fitness value of each individual using equation 5. 3. Generate the next generation population PG+1 with candidate solutions using self-adaptive DE: (a) Choose a target individual Xi, G, i = 1, 2, ..., NP. (b) Calculate the new CR and F for each individual using equations 7 and 8. (c) Randomly choose 3 different population members; employing these selected members, a new individual Vi, G+1 is generated using the mutation operation of DE. (d) Do crossover of Vi, G+1 with the target individual Xi, G to get the trial individual Ui, G; if any parameter of the trial individual falls outside the defined bounds, it is reset within the bounds. (e) Compare the fitness values of the trial and the target individual and keep the better one in the next generation. (f) Go to step 3(a) and continue the whole process until the size of the next generation population is equal to NP. (g) Replace the current population by the next generation population. 4. If there have been 10,000 evaluations, then stop; otherwise set G = G + 1 and go to step 3. Note that this algorithm runs for 10,000 generations; the number of generations can be larger, but according to our investigation throughout this research work, the convergence rate is then almost the same as that obtained with 10,000 generations. The authors declare that they have no competing interests. MK has implemented the proposed algorithm and performed the experiments. MK and NN have proposed the approach. NN and HI supervised the whole work. All authors have read and approved the manuscript."}
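As a concrete illustration of the loop above, here is a compact sketch of classic DE/rand/1/bin. This is not the paper's self-adaptive variant: CR and F are fixed here, the fitness is a toy squared-error stand-in for the MSE of equation 5, and all names are ours.

```python
import random

def differential_evolution(fitness, bounds, NP=20, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: mutation, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Step 1: initialize the population uniformly within the user-specified bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(NP):
            # Step 3(c): pick three distinct members other than the target.
            r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # Step 3(d): binomial crossover with the target individual.
            jrand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # Reset out-of-bounds parameters by clipping to the bounds.
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            # Step 3(e): greedy selection keeps the better of trial and target.
            f_trial = fitness(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy MSE-style fitness: squared distance to a known parameter vector.
target = [0.5, -0.3, 1.2]
mse = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
best_x, best_f = differential_evolution(mse, [(-2.0, 2.0)] * 3)
```

The greedy per-individual selection is what keeps the population size NP constant across generations, matching step 3(f)-(g) of the listing.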
+{"text": "We tackle the problem of completing and inferring genetic networks under stationary conditions from static data, where network completion means making the minimum amount of modifications to an initial network so that the completed network is most consistent with the expression data, with addition and deletion of edges as the basic modification operations. For this problem, we present a new method for network completion using dynamic programming and least-squares fitting. This method can find an optimal solution in polynomial time if the maximum indegree of the network is bounded by a constant. We evaluate the effectiveness of our method through computational experiments using synthetic data. Furthermore, we demonstrate that our proposed method can distinguish the differences between two types of genetic networks under stationary conditions from lung cancer and normal gene expression data. Estimation of genetic interactions from gene expression microarray data is an interesting and important issue in bioinformatics. There are two kinds of gene expression data: time series data and non-time-series data. To estimate the dynamics of gene regulatory networks, such as cell cycle and life cycle processes, various mathematical models and methods have been proposed using time series data. Since the number of observed time points in time series data is usually small, these methods suffer from low accuracy. On the other hand, a large number of non-time-series data are available, for example, samples from normal people and patients with various types of diseases. 
Although these data are not necessarily static, we may regard them as static data because they are averaged over a large number of cells in rather steady states. For inference of genetic networks, various reverse engineering methods have been proposed, including methods based on Boolean networks and Bayesian networks. In recent years, there have been several studies and attempts at network completion, not only for biological networks but also for social networks and web graphs. Different from network inference, we assume in network completion that a certain type of prototype network is given, which can be obtained by using existing knowledge. Kim and Leskovec addressed the network completion problem for social networks. Independently, Akutsu et al. proposed DPLSQ, a network completion method for biological networks based on dynamic programming and least-squares fitting. In this study, we propose a novel method, DPLSQ-SS (DPLSQ for Static Samples), for completing and inferring a network using static gene expression data, based on DPLSQ. The purpose of this study is twofold: first, to complete and infer gene networks from static expression profiles instead of time series data and, secondly, to investigate the relationship between different kinds of inferred networks under different conditions. Static data typically consist of expression levels of genes measured at a single time point but for a large number of samples. As discussed in the beginning of this section, these types of data can be regarded as gene expression measurements in a stationary phase. Many static microarray data are publicly available, in particular cancer microarray data with relatively large numbers of tumor and normal samples. Therefore, it may be possible to estimate and investigate differences between cancer and normal networks. The basic strategy of DPLSQ-SS is the same as that of DPLSQ: least-squares fitting is used for parameter estimation and dynamic programming is used for minimizing the sum of least-squares errors when adding and deleting edges. 
In order to cope with static data, we modified the error function to be minimized. Although the idea is simple, it brings wider applicability because a large number of static gene expression data are available. We demonstrate the effectiveness of DPLSQ-SS through computational experiments using synthetic data and gene expression data for lung cancer and normal samples. We also perform a computational comparison of DPLSQ-SS as an inference method with some state-of-the-art tools using synthetic data. Let G = (V, E) denote a given network, where V and E are the sets of nodes and directed edges, including loops, respectively. In this graph G, each node corresponds to a gene and each edge represents a direct regulation between two genes. We let n denote the number of genes and let V = {v1,\u2026, vn}. For each node vi, e\u2212(vi) and deg\u2061\u2212(vi), respectively, denote the set of incoming edges to vi and the number of incoming edges to vi. The purpose of network completion in this study is to modify a given network by making the minimum number of modifications so that the resulting network is most consistent with the observed data. Here we assume additions and deletions of edges as modification operations. We employ least-squares fitting for the parameter estimation and dynamic programming for identifying the structure of the network. In the following we explain the algorithm of the proposed method. The expression value of node vi is determined by the following equation, where vi1,\u2026, vih are the incoming nodes to vi, xi corresponds to the expression value of the ith gene, and \u03c9 denotes random noise. The second and third terms of the right-hand side of the equation represent linear and nonlinear effects on node vi, respectively. We assume that static expression data \u2329y1(s), y2(s),\u2026, yn(s)\u232a, s = 1,\u2026, m, are given, where m is the number of samples and yi(s) denotes the expression value of node vi in the sth sample. 
The parameters can be estimated by minimizing the following objective function using a standard least-squares fitting method. Once the objective function is determined, the completion procedure is the same as that for DPLSQ: the task is to add k edges in total so that the sum of least-squares errors is minimized. We let \u03c3kj,j+ denote the minimum least-squares error when adding kj edges to the jth node, where the added nodes vjl must be selected from V \u2212 vj \u2212 e\u2212(vj). In order to avoid combinatorial explosion, we constrain the maximum kj to be a small constant K and let \u03c3kj,j+ = +\u221e for kj > K or kj + deg\u2061\u2212(vj) \u2265 n. We then define D+ as the minimum total least-squares error, whose entries can be computed by a dynamic programming algorithm. It is to be noted that D+ is determined uniquely regardless of the ordering of nodes in the network. The above dynamic programming procedure can be modified for addition and deletion of edges. We let \u03c3kj,hj,j denote the minimum least-squares error when adding kj edges to e\u2212(vj) and deleting hj edges from e\u2212(vj), where added and deleted edges must be disjoint. As before, we constrain kj and hj to be small constants K and H, respectively, and let \u03c3kj,hj,j = +\u221e if kj > K, hj > H, kj \u2212 hj + deg\u2061\u2212(vj) \u2265 n, or kj \u2212 hj + deg\u2061\u2212(vj) < 0 holds. The problem is then stated in terms of D, defined analogously. We will also discuss the computational complexity of DPLSQ-SS. Since completion by addition of edges is a special case, we only analyze completion by addition and deletion of edges. It is known that least-squares fitting for a linear system can be done in O(mp2 + p3) time, where m is the number of samples and p is the number of parameters. In our proposed method, we assume that the maximum indegree in a given network and the number of parameters are bounded by constants. 
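As a small illustration of the least-squares step, the following sketch fits the constant, linear, and nonlinear coefficients of a hypothetical two-regulator rule from m static samples. The rule and all values are invented for illustration; `np.linalg.lstsq` stands in for whatever least-squares routine is used.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200                                    # number of static samples
x1, x2 = rng.uniform(-1, 1, (2, m))        # expression levels of two regulators

# Hypothetical regulation rule: constant + linear term + one nonlinear (product) term.
a, b, c = 0.5, 1.5, -2.0
y = a + b * x1 + c * x1 * x2 + rng.normal(0, 0.01, m)   # noisy target expression

# Design matrix with constant, linear, and nonlinear columns; solving the
# least-squares problem for p parameters takes O(mp^2 + p^3) time.
A = np.column_stack([np.ones(m), x1, x1 * x2])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
sse = ((A @ theta - y) ** 2).sum()         # the per-node least-squares error
```

The recovered `theta` approximates (a, b, c), and `sse` is the quantity that the dynamic programming stage sums and minimizes over nodes.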
In this case, the time complexity per least-squares fitting can be estimated as O(m). Next we analyze the time complexity required for computation of \u03c3kj,hj,j and D. The time complexity required for computation of the \u03c3kj,hj,js is O(mnK+1) and that for D is O(n3); therefore, the total time complexity for DPLSQ-SS is O(mnK+1 + n3), which is reasonable if K \u2264 2 and n is not too large. If the maximum indegree of the initial network is not bounded by a constant, the time complexity per least-squares fitting increases to O(mn4 + n6) and the number of combinations to be examined per node increases to O(nH+K), as discussed previously. In this case, the total time complexity is O(nH+K+1 \u00b7 (mn4 + n6)), which suggests that network completion should not start with dense networks but with sparse networks. To evaluate the effectiveness of DPLSQ-SS, we performed two types of computational experiments using both synthetic data and real expression data. All experiments were performed on a PC with an Intel Core 2 Quad CPU (3.0 GHz). We employed the liblsq library (http://www2.nict.go.jp/aeri/sts/stmg/K5/VSSP/install_lsq.html) for a least-squares fitting method. We employed nonlinear equations as gene regulation rules between genes, where xi corresponds to the expression value of the ith gene. Since it is difficult to generate static data by numerical simulations, we manually made nonlinear equations with obvious solutions as the synthetic network topology and regarded each solution as static data for one sample. For example, one example network consists of 3 genes and 4 edges, including self-loops; solving its set of equations yields four solutions. Under the above model, we examined DPLSQ-SS for network inference, using synthetic data generated as described above and letting E = \u2205 in the initial network. It should be noted that we let the upper bounds for the number of added and deleted edges per node be K = 2 and H = 0, respectively. 
Furthermore, in order to examine how the CPU time changes with respect to the size of the network, we made synthetic networks with 10 and 20 nodes by making nonlinear equations with the corresponding number of variables. Since the number of added edges was always equal to the number of edges in the original network, we evaluated the performance of DPLSQ-SS by means of the averaged accuracy, defined as the ratio of the number of correctly inferred edges to the number of edges in the original network, and the averaged computational time over 5 modified networks. We extracted the top M edges from the inferred results and regarded {vi, vj} as a correct edge if either (vi, vj) or (vj, vi) was included in the edge set of the original network. We employed datasets generated in the same way as for DPLSQ-SS and default parameter settings for both tools, and evaluated the results by the ratio of correctly inferred edges and averaged CPU time. One aim of the DREAM project is to provide benchmark data on real and simulated expression data for network inference. This challenge includes several editions, where GNW has been developed to generate genetic network motifs and simulated expression data. In this evaluation, we used the DREAM4 challenge, which is divided into three subchallenges called InSilico_Size10, InSilico_Size100, and InSilico_Size100_Multifactorial, each consisting of five networks. In this subsection, we evaluate the effectiveness of DPLSQ-SS and perform a comparison with other methods in order to obtain an unbiased evaluation; only wild-type data were used (m = 1), although perturbed data were also available. Although ARACNE was better than DPLSQ-SS in four cases, DPLSQ-SS was better than ARACNE in one case. DPLSQ-SS was better than GeneNet in four cases and was comparable to GeneNet in one case. 
This result suggests that although DPLSQ-SS is not necessarily the best for simulated data in DREAM4, it has reasonable performance when very few samples are given. We validated the performance using the InSilico_Size10 subchallenge, consisting of gold standard 10-gene networks and simulated expression data generated under different conditions. Since only one set of wild-type data, which corresponds to static data, is provided for each network and it is not enough for inference, we generated 500 static data sets by randomly perturbing each data point. We also examined network completion using synthetic data, adding k edges to and deleting h edges from an original network. In this experiment, we adopted the nonlinear equations described above. The accuracy is defined in terms of Eorg and Ecmp, the sets of edges in the original network and the completed network, respectively; this value takes 1 if all added and deleted edges are correct and 0 if all added and deleted edges are incorrect. For each (k, h), we took the averaged accuracy and CPU time for completing the network over 5 modifications for 10- and 20-gene networks, where we used the default values of K = H = 2. To avoid numerical calculation errors, we also generated an additional 400 data sets for each of the static solutions by adding random numbers uniformly distributed between \u22120.5 and 0.5. We assess the DPLSQ-SS performance in terms of the accuracy of modified edges and the computational time for network completion. The accuracy was reasonable for the examined combinations of k and h except for k = h = 5. It is also seen that the CPU time increases rapidly when applied to networks with 20 genes. In comparison with the CPU time for network inference by DPLSQ-SS, there seems to be a significant difference even if n equals 10. In this study, we used the default values of K = 2 and H = 2 for network completion, whereas K = 2 and H = 0 were used for network inference. Moreover, the number of modified edges for network inference is much larger than that for network completion. 
However, the latter procedure requires more CPU time than the former procedure. This result suggests that the time complexity of DPLSQ-SS depends not so much on the number of modified edges, k and h, but much more on the values of K and H, as indicated in the complexity analysis above. We also examined DPLSQ-SS for inference of gene networks from static data under multiple conditions. The aim of this experiment is to identify different static gene networks under different conditions and investigate the differences in these network topologies. We focus on the genetic network related to lung cancer and employed a partial network which contains the RB/E2F pathway in human small cell lung cancer from the KEGG database. We consulted RefGene (http://refgene.com/) for gene symbols and annotations and employed the resulting network as the original network. In this experiment, K = 2, H = 0, and k = 13 were used. In order to avoid numerical calculation errors, we also generated an additional 5 data sets for each expression value by adding random numbers uniformly distributed between \u22120.5 and 0.5. As for the static expression data, we employed lung cancer microarray data obtained by Beer et al. We also compared DPLSQ-SS with ARACNE and GeneNet using these real data; the results are shown in the accompanying figures. Although the accuracy of DPLSQ-SS is not high for real data, there are significant differences between the cancer and normal networks. The inferred normal network indicated the existence of the RB/E2F pathway involved in the regulation of E2F activity. It is observed that the tumor suppressor gene P15 regulated CDK4 activity and E2F was under the regulation of both CDK4 and CCND1. On the other hand, in the inferred tumor network, we found no significant correlations between genes in the RB/E2F pathway. Instead, we discovered the regulation of CCND1 and deregulation of E2F activity. It has been reported that overexpression of CDK4/6 and CCND1 and deregulated E2F could contribute to cancer progression. 
In this study, we addressed the problem of completing and inferring gene networks under stationary conditions from static gene expression data. In our approach, we defined network completion as making the minimum amount of modifications to an initial network so that the inferred network is most consistent with the gene expression data. The aim of this study is (1) to complete genetic networks using static data and (2) to investigate the differences between two types of gene networks under different conditions. In order to achieve our goal, we proposed a novel method called DPLSQ-SS for network completion and network inference based on dynamic programming and least-squares fitting. This method works in polynomial time if the maximum indegree is bounded by a constant. We demonstrated the effectiveness of DPLSQ-SS through computational experiments using synthetic data and real data. In particular, we tried to infer the normal and lung cancer networks from static gene microarray data. In the results using synthetic data, DPLSQ-SS showed relatively good performance in comparison to other existing methods. In the results using microarray data from normal and lung cancer samples, it is seen that this method allows us to distinguish the differences between gene networks under different conditions. Each \u03c3kj,hj,j can be computed independently of the other \u03c3kj,hj,js; therefore, parallel implementation of DPLSQ-SS is also important future work. Although we have focused on completion and inference of gene regulatory networks, completion and inference of large-scale protein-protein or ChIP-chip/seq interaction networks are also important. Since the proposed method is only applicable to gene regulatory networks, extension and application of DPLSQ-SS to these networks should be studied in future work. There is some room for extending DPLSQ-SS. 
For example, we employed here simple nonlinear equations as gene regulation rules, but they can be replaced by more complex types of nonlinear equations. Although DPLSQ-SS works in polynomial time, the degree of the polynomial is not low, which prevents the method from being applied to completion of large networks. However, DPLSQ-SS is highly parallelizable."}
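The dynamic programming stage described above can be sketched as follows, assuming the per-node error table sigma (minimum least-squares error for each node and each number of added edges, up to K) has already been computed by least-squares fitting. For simplicity only edge additions are shown, and all names are ours.

```python
import math

def complete_by_addition(sigma, k_total, K):
    """DP over nodes: D[j][k] = min total error using nodes 1..j with k added edges.
    sigma[j][kj] = min least-squares error at node j when adding kj edges (kj <= K)."""
    n = len(sigma)
    INF = math.inf
    D = [[INF] * (k_total + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    choice = {}
    for j in range(1, n + 1):
        for k in range(k_total + 1):
            # Try giving kj of the k available edge additions to node j.
            for kj in range(min(K, k) + 1):
                cand = D[j - 1][k - kj] + sigma[j - 1][kj]
                if cand < D[j][k]:
                    D[j][k] = cand
                    choice[(j, k)] = kj
    # Trace back the optimal per-node edge-addition counts.
    ks, k = [], k_total
    for j in range(n, 0, -1):
        kj = choice[(j, k)]
        ks.append(kj)
        k -= kj
    return D[n][k_total], list(reversed(ks))

# Toy 3-node error table: sigma[j][kj] for kj = 0, 1, 2 (values invented).
sigma = [[3.0, 1.0, 0.5], [2.0, 0.2, 0.1], [1.0, 0.9, 0.8]]
total, ks = complete_by_addition(sigma, k_total=2, K=2)
```

Because the per-node errors are independent, the table D is filled in O(n * k_total * K) time once sigma is available, and the result does not depend on the node ordering, matching the text.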
+{"text": "Finding genes that share similar expression patterns across samples is an important question that is frequently asked in high-throughput microarray studies. Traditional clustering algorithms such as K-means clustering and hierarchical clustering base gene clustering directly on the observed measurements and do not take into account the specific experimental design under which the microarray data were collected. A new model-based clustering method, the clustering of regression models method, takes into account the specific design of the microarray study and bases the clustering on how genes are related to sample covariates. It can find useful gene clusters for studies with complicated designs such as replicated time course studies. In this paper, we applied the clustering of regression models method to data from a time course study of yeast on two genotypes, wild type and YOX1 mutant, each with two technical replicates, and compared the clustering results with K-means clustering. We identified gene clusters that have similar expression patterns in wild type yeast, two of which were missed by K-means clustering. We further identified gene clusters whose expression patterns were changed in YOX1 mutant yeast compared to wild type yeast. The clustering of regression models method can be a valuable tool for identifying genes that are coordinately transcribed by a common mechanism. Clustering is a useful tool to look for unknown groupings of objects, and it has been widely applied to gene expression data. A number of analytic methods have been applied to the problem of gene clustering. They can largely be classified into two categories: (1) algorithmic clustering methods, such as K-means clustering and hierarchical clustering; and (2) model-based clustering methods, including the clustering of regression models (CORM) method, which employs regression to model gene expression and clusters genes based on the relationship between expression levels and sample covariates. 
Our contributions in this paper are as follows: (1) we illustrate the methodologic advantages of the CORM method over K-means clustering, (2) we demonstrate the application of the CLMM method to gene expression data collected under the LWR design, using a yeast time course dataset measured for two yeast cell lines each with two technical replicates, and (3) we show empirical evidence of CLMM\u2019s benefits compared to K-means through a comparison of the clustering results for the yeast data \u2013 two clusters were uniquely identified by CLMM but missed by K-means and a spurious cluster was picked up by K-means and avoided by CLMM. Given a set of objects, K-means clustering seeks a partition of all objects into K groups to minimize the total within-group sum of squared Euclidean distances. Let ygi denote the expression level for gene g and sample i, yg = (yg1,\u2026, ygm)T the vector of expression levels for gene g for sample 1 through sample m, G the number of genes, and K the number of clusters. Let ug denote the cluster membership for gene g. It has been pointed out that K-means is equivalent to assuming a multivariate normal mixture model with component distributions having the same scalar covariance matrix and equal mixture proportions, and then fitting the model using an EM algorithm to maximize the classification likelihood. In the model underlying K-means, \u03f5g is the vector of measurement errors, I is an identity matrix, and ug is a random variable on {1,\u2026, K} with probabilities \u03c0k = 1/K. Cluster memberships are considered as missing data in the EM algorithm: the cluster-assigning step corresponds to the E-step and the cluster-center-recalculating step to the M-step. For the problem of differential expression analysis, the regression modeling framework has been employed to characterize systematic variation in the expression profile of each gene and distinguish it from random variation. 
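The K-means/EM correspondence noted above can be made concrete. In the sketch below (our own minimal implementation, not the authors' code), the hard assignment step plays the role of the E-step and the center recomputation plays the role of the M-step of a classification EM for a spherical Gaussian mixture with equal proportions.

```python
import numpy as np

def kmeans_cem(Y, K, iters=50, seed=0):
    """K-means as classification EM for a spherical Gaussian mixture:
    E-step assigns each profile to the nearest center, M-step recomputes centers."""
    rng = np.random.default_rng(seed)
    centers = Y[rng.choice(len(Y), size=K, replace=False)]
    u = np.zeros(len(Y), dtype=int)
    for _ in range(iters):
        # E-step: hard assignment by squared Euclidean distance.
        d2 = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        u = d2.argmin(axis=1)
        # M-step: each center becomes the mean of its assigned profiles.
        for k in range(K):
            if (u == k).any():
                centers[k] = Y[u == k].mean(axis=0)
    return u, centers

# Two well-separated toy "gene profiles" groups.
Y = np.array([[0, 0], [0.1, 0], [0, 0.1], [10, 10], [10.1, 10], [10, 10.1]])
u, centers = kmeans_cem(Y, 2)
```

Note that genes in the same cluster act as replicates for estimating each center, which is exactly the sample-specific-expectation feature the text attributes to K-means.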
Differential expression is identified by contrasting expression levels measured under different experimental conditions or by identifying dependencies on concomitantly measured covariates. The resulting estimated regression models can provide an accurate and precise description of expression profiles. Similarly, the regression model framework can be used for the problem of gene clustering: systematic variation is separated from random variation and gene clustering is based solely on the systematic part of the variation. We call this the clustering of regression models (CORM) method. Let Xgi (ngi\u2009\u00d7\u2009p) denote the design matrix for gene g and sample i, F\u03b2k,\u03bek the conditional distribution of genes in cluster k given the covariates with parameters \u03b2k and \u03bek, \u03b2k (p\u2009\u00d7\u20091) the vector of regression coefficients, and \u03bc the regression function. In the model underlying CORM, ug is a random variable on {1,\u2026, K} with probabilities \u03c01,\u2026, \u03c0K. Complete specification of the CORM modeling framework requires identification of the error structure (parameterized by \u03be), which depends on the form of the regression model. The specific form of the regression model used for CORM is flexible. For example, it can be the linear model, the linear mixed model, the nonlinear model, or the nonparametric regression model. Its choice should depend on the experimental design and the scientific question. The EM algorithm can be used to fit the CORM model. Both K-means and CORM are partitional clustering methods, which concern the problem of the optimal partitioning of a given set of objects into a prespecified number of mutually exclusive and exhaustive clusters. However, the two methods base clustering on different features of a gene. The feature of interest for K-means is the vector of sample-specific expectations for a gene. 
For each sample-specific expectation, the sample size is 1 and genes in the same cluster are used as replicates for its estimation. K-means does not make any assumption on the relationship between the expected expression level and the covariates and is \u2018model-free\u2019 in this respect. The feature of interest for CORM is the vector of regression parameters shared by samples for a gene. It separates systematic variation from random variation and increases clustering precision, especially when the sample size is large. (a) Selection of genes. A microarray provides measurements on thousands of genes, but it is common to select a small subset (tens to hundreds) of genes to cluster, especially for partitional clustering. One reason to select a subset is to keep the computation manageable. Another reason is to try to exclude uninformative genes to prevent them from deteriorating the clustering. For K-means, however, \u2018uninformative\u2019 is not well defined. One might select the most variable genes. However, that strategy does not distinguish genes with large signal from genes with large noise when including genes; nor does it distinguish genes with small signal from genes with small noise when excluding genes. With CORM, informative genes can be selected using a per-gene regression model and a significance cutoff appropriately adjusted for multiplicity. (b) Characterization and interpretation of clusters. After clustering genes, it is useful to determine the cluster signatures for the identified clusters. Often they are set to be the cluster centers. CORM clusters can be identified by their regression coefficients and have a specific interpretation depending on the experimental design. For example, we can tell whether a gene cluster tends to be up-regulated or down-regulated comparing diseased samples to normal samples. 
The interpretability of CORM clusters allows a more interpretable comparison of gene clusters identified in different data sets with similar experimental designs \u2013 not only can the clustering of genes be compared but also the characteristics of the clusters. (c) Application of clusters. The average of the genes in the same cluster has been proposed to act as a predictor for sample classification. CORM provides an alternative clustering method for scenarios where K-means has limitations. For example, while applicable to both CS data and LNR data, K-means does not distinguish the two experimental designs. K-means cannot naturally handle LWR data \u2013 profiles of a gene need to be averaged or connected first. K-means might not use all information in the data; for example, in a longitudinal study, it considers time points to be exchangeable and ignores their ordering and correlation. Unlike K-means, CORM can naturally deal with missing values for any gene or sample (under the assumption of missing at random) as well as imbalanced experimental designs. Moreover, CORM can easily incorporate technical replicates together with biological replicates in a hierarchical manner. The gains of CORM depend on the truth of the regression model and its robustness to model misspecification. Ideally, the design of an experiment determines the gene-related feature available for clustering and hence informs the parameterization of the regression model for CORM. Experimental design should be chosen to produce the feature that most likely reflects the biological clusters of interest. 
For example, a longitudinal design can be used to find clusters of genes that behave similarly across time, while a cross-sectional design can be used to find clusters of genes that behave similarly across different levels of a covariate. To study the regulation of the cell cycle in yeast, we studied gene expression across the cell cycle for both wild type (WT) yeast and single mutant (SM) yeast with the YOX1 gene knocked out. Alpha factor was used to synchronize the cells. We focus here on a set of 256 cell-cycle-regulated genes. The primary goal of our analysis is to cluster genes that have similar expression patterns among WT yeast. As a secondary goal, we clustered genes using both WT yeast and SM yeast to identify genes whose expression patterns are changed by the mutation. Unlike K-means clustering, CLMM can explicitly accommodate both the replication and the sample covariate (mutation status). In addition, CLMM can naturally deal with the imbalanced experimental design: WT had one bad time point at 105 min and SM had three at 25 min, 40 min, and 55 min, due to technical problems. These bad time points were removed from the cluster analysis. There was also missing data: 41 measures belonging to 17 genes for WT data and 17 measures belonging to 17 genes for SM data were clearly outliers based on signal strength compared to other time points; these arise from technical failures of the measurement procedure rather than reflecting true biological variation that should be modeled (see Additional file). CLMM was applied to cluster the 256 genes using WT data. The design matrix for fixed effects was the B-spline basis for time 0-120 min with 7 equally spaced knots. The number of knots was set to 7 to allow flexible modeling of the expression profiles while avoiding overfitting. Within a reasonable range, the clustering results were not sensitive to the number of knots for the B-spline basis. Determining the number of clusters is still an open question and has been under active ongoing research. 
This is particularly the case for time course data with a small number of replicates, as they do not allow the application of bootstrapping-based methods such as the bootstrapped maximum volume measure. We did model checking by plotting the model residuals and the Best Linear Unbiased Predictions (BLUPs). We also compared the clustering using both WT and SM data with that using WT data only (Table). To summarize, both K-means and CORM are useful tools for clustering genes using expression data. K-means makes no assumption about the relationship between expression levels and sample covariates; it is intuitive and has produced reasonable results in applications. CORM is implemented in the R package CORM at the R CRAN. Gene clustering for time course data has been under active research over the past decade. The authors declare that they have no competing interests. LXQ and SGS conceived the study. LB collected the yeast time course data. LXQ carried out the analysis of the yeast data. LXQ, LB, and SGS interpreted the data and wrote the manuscript. All authors read and approved the final manuscript. Supplementary materials."}
+{"text": "There has been a growing interest in identifying context-specific active protein-protein interaction (PPI) subnetworks through integration of PPI and time course gene expression data. However, the interaction dynamics during the biological process under study have not been sufficiently considered previously. Here we propose a topology-phase locking (TopoPL) based scoring metric for identifying active PPI subnetworks from time series expression data. First, the temporal coordination in gene expression changes is evaluated through phase locking analysis; the results are subsequently integrated with PPI to define an activity score for each PPI subnetwork, based on individual member expression as well as topological characteristics of the PPI network and of the expression temporal coordination network; lastly, the subnetworks with the top scores in the whole PPI network are identified through simulated annealing search. Application of TopoPL to simulated data and to the yeast cell cycle data showed that it can more sensitively identify biologically meaningful subnetworks than the method that only utilizes the static PPI topology, or the additive scoring method. Using TopoPL we identified a core subnetwork with 49 genes important to the yeast cell cycle. Interestingly, this core contains a protein complex known to be related to the arrangement of ribosome subunits that exhibits extremely high gene expression synchronization. Inclusion of interaction dynamics is important to the identification of relevant gene networks. Life is a transient dynamic phenomenon. Biological functions and phenotypic traits, including disease traits, stem from the interactions across multiple scales in the living system. 
Therefore, characterizing the condition-dependent interactions and emergent dynamics is important for identifying the elements relevant to a given biological process. Recently, a number of computational methods have been developed to identify condition-specific protein-protein interaction (PPI) subnetworks through integration of generic PPI data and condition-specific gene expression data. Understanding cellular physiology from a dynamic and systems perspective is very important and valuable, as demonstrated by these studies and many others. In this study we investigate the application of an idea rooted in statistical physics and non-linear dynamics to characterize the state of gene interaction networks and use it to identify relevant subnetworks. We regard active subnetworks as those showing a high degree of differential expression, and high synchrony in expression changes among their members. Phase locking analysis is utilized to evaluate expression synchrony and to capture the dynamic interaction structure. Recently we found that the phase locking metric can identify interacting gene pairs more efficiently than correlation. Previously, we proposed a Pathway Connectivity Index (PCI) to represent the activity of pre-defined pathways, such as those defined in KEGG and Biocarta. PCI utilizes expression information of all genes in a pathway, as well as the topological properties of its interaction networks; its advantages have been demonstrated previously. The network used for simulation was obtained from Cytoscape (http://cytoscape.org/); there are 331 genes and 361 interactions in this network. Within it, we randomly selected subnetworks at three different sizes n as condition-responsive. In each responsive subnetwork, m% of genes are defined to be active.
The significance values of active genes were assigned randomly, with the top … The yeast PPI network was downloaded from MIPS (http://mips.helmholtz-muenchen.de/genre/proj/yeast) as edited by the Gerstein Lab (http://www.gersteinlab.org/proj/bottleneck/mips.txt). Simulation utilized the sample expression data gal80R given in Cytoscape. The true positive rate was defined to be the number of successful identifications divided by the size of the predefined network n. The false positive rate was estimated as the number of false identifications divided by the size of the identified subnetwork. The F score is a measure of a test's accuracy; it considers both the precision and the sensitivity of the test: F = 2 × precision × sensitivity / (precision + sensitivity). We used the average sensitivity, specificity and F score to measure the performance of TopoPL. The performance is also evaluated with the Receiver Operating Characteristic (ROC) curve, a plot of the true positive rate against the false positive rate. Gene expression data were downloaded from EMBL's Huber group (http://www.ebi.ac.uk/huber-srv/scercycle/). It is a time course study of the yeast cell cycle, where cells were arrested using alpha factor or cdc28. The alpha factor dataset contains 41 time points and the cdc28 dataset contains 44 time points, both at 5-minute resolution. These datasets provide strand-specific profiles of temporal expression during the mitotic cell cycle of S. cerevisiae, monitored for more than three complete cell divisions. The details of the definitions and steps of the phase locking analysis were described in our previous work and are briefly summarized here. For a signal s(t), its Hilbert transform is given by s~(t) = (1/π) PV ∫ s(τ)/(t − τ) dτ, where PV stands for the Cauchy Principal Value of the integration. The corresponding analytical signal can then be constructed by ζ(t) = s(t) + i·s~(t) = A(t)e^{iφ(t)}, where φ(t) is the instantaneous phase. If two time series interact with each other, there will be rhythmic adjustment resulting in phase locking; in a perfect locking regime the phase difference stays constant. The phase locking index λ offers a new measure to infer potential interaction between gene pairs. For each gene i, the EDGE software was used to obtain the significance of differential expression of gene i in sample s.
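The phase-locking construction just described lends itself to a short numerical sketch. The code below is our illustration, not the authors' implementation: it builds the analytic signal by zeroing negative frequencies of a naive O(n²) DFT (an FFT would be used in practice) and computes a locking index λ = |⟨e^{iΔφ}⟩|, which equals 1 for perfectly locked series and is near 0 for unrelated ones.

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform (O(n^2)); fine for short profiles.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def analytic_signal(s):
    """z(t) = s(t) + i*H[s](t): keep DC/Nyquist, double positive frequencies,
    zero negative ones (discrete analogue of the principal-value integral)."""
    n = len(s)
    S = dft(s)
    for j in range(n):
        if j == 0 or (n % 2 == 0 and j == n // 2):
            pass                      # DC and Nyquist stay as-is
        elif j < (n + 1) // 2:
            S[j] *= 2                 # positive frequencies
        else:
            S[j] = 0                  # negative frequencies
    return idft(S)

def phase_locking_index(x, y):
    """lambda = |<exp(i(phi_x - phi_y))>|: 1 = perfect locking, ~0 = none."""
    px = [cmath.phase(z) for z in analytic_signal(x)]
    py = [cmath.phase(z) for z in analytic_signal(y)]
    return abs(sum(cmath.exp(1j * (a - b)) for a, b in zip(px, py)) / len(px))

t = [4 * math.pi * k / 64 for k in range(64)]
lam_locked = phase_locking_index([math.sin(u) for u in t],
                                 [math.sin(u + 0.7) for u in t])  # ~1.0
lam_drift = phase_locking_index([math.sin(u) for u in t],
                                [math.sin(3 * u) for u in t])     # ~0.0
```

A constant phase lag (same frequency) yields λ ≈ 1, while a drifting phase difference (different frequencies) averages the unit phasors out to nearly 0, which is exactly the property the scoring metric exploits.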
This metric is an improved version of the PCI that we previously proposed to identify active pathways from gene expression data; there again we demonstrated the advantage of incorporating network structural information. We implemented the searching procedure based on simulated annealing. The pseudocode of the algorithm is described below. Input: the entire network. Output: the subnetwork with the highest score. Steps: initialize each node with its expression significance score; for i = 1 to N, do: calculate the current temperature; exit the loop if …; randomly pick a node; if … Using the simulated yeast gene expression data, we compared TopoPL with two other methods: 1) the commonly used network scoring method that sums the significance levels of all genes in the network (hereafter referred to as the Additive scoring method; see Eq. (7) in Methods); and 2) TAPPA, a metric that we previously proposed in our TAPPA software package (see Eq. (8) in Methods), at three states of activity. Though the three methods have similar sensitivity, the precision of TopoPL is higher. F scores showed that TopoPL performs better than TAPPA and Additive. The ROC curves also indicate that TopoPL performs better than the other two approaches, with the highest Area Under Curve (AUC), as shown in Figure …. After 100,000 iterations, we used GO analysis (http://www.bioconductor.org) to investigate how well the identified subnetwork captured the relevant functional modules. It has been demonstrated that hub genes and high-betweenness genes play important roles in gene networks (Table 2). Dsn1 has …; TPK1 has …; NOP15 is … (p = 0.11), and that of all genes in yeast.
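The simulated-annealing search outlined in the pseudocode above can be sketched generically. This is our illustration, not the TopoPL code: it toggles one node's membership per step under a geometric cooling schedule, and a simple additive node score stands in for the TopoPL scoring metric.

```python
import math
import random

def anneal_subnetwork(nodes, score_fn, n_iter=5000, t0=1.0, t_end=0.01, seed=42):
    """Search for a high-scoring node subset: start from a random membership,
    toggle one node at a time, always accept improvements, and accept
    worsening moves with probability exp(delta / T) at temperature T."""
    rng = random.Random(seed)
    member = {v: rng.random() < 0.5 for v in nodes}
    best = dict(member)
    cur = score_fn([v for v in nodes if member[v]])
    best_score = cur
    for i in range(n_iter):
        temp = t0 * (t_end / t0) ** (i / n_iter)   # geometric cooling schedule
        v = rng.choice(nodes)
        member[v] = not member[v]                  # toggle membership of one node
        new = score_fn([v for v in nodes if member[v]])
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
            if cur > best_score:
                best_score, best = cur, dict(member)
        else:
            member[v] = not member[v]              # reject: undo the toggle
    return best_score, [v for v in nodes if best[v]]

# Toy example with additive significance scores; the optimum keeps the
# positive-score nodes {a, b, d} for a total score of 4.0.
sig = {"a": 2.0, "b": 1.5, "c": -1.0, "d": 0.5, "e": -2.0}
score, subnet = anneal_subnetwork(list(sig), lambda s: sum(sig[v] for v in s))
```

Because `score_fn` is a parameter, the same search skeleton works for the Additive score, TAPPA, or a topology-aware score; only the scoring function changes.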
These results suggest that degree and betweenness can be utilized to further improve the performance of functional gene module identification. The top 30 high-degree and high-betweenness nodes from the identified subnetwork and their interactions are presented in Figure …. We investigated the distribution of the phase locking index within the identified subnetwork; clearly, on average there is a higher degree of phase locking in it than in the whole PPI network (Figure …). We further examined the highly synchronized regions in the network core (Figure …). In protein complexes, the core components, which consist of two or more proteins that are present in most complex isoforms, are often regarded as functional units, as they show a surprisingly high degree of functional, essentiality, and localization homogeneity. Interestingly, all six genes are annotated with GO:0042254, which is defined as \"A cellular process that results in the biosynthesis of constituent macromolecules, assembly, and arrangement of constituent parts of ribosome subunits; includes transport to the sites of protein synthesis\". We have found that genes regulated by the same transcription factors are likely to be highly synchronized. Here, to identify shared transcription factor binding sites (TFBS) among the genes in the identified subnetwork, we used oPOSSUM (http://opossum.cisreg.ca/oPOSSUM3/). FKH1 and MCM1 are well-studied cell-cycle-related transcription factors. TOD6 … There are … overlapping genes in them. In contrast, there are only 87 (~17%) overlapping genes with the Additive method, and 145 (~29%) with TAPPA. This indicates that incorporating network structural and dynamic information can generate robust results. The TopoPL scoring method with a simulated annealing search was proposed in this study to identify active subnetworks during a biological process by integrating PPI with dynamic expression data. It incorporates both structural and dynamic information of gene interactions.
When applied to the simulated data and to the yeast cell cycle data, it yielded more consistent results across different experiments, and predicted more meaningful active network modules, than two alternative scoring methods that ignore either the network dynamics, or both the dynamics and the structure. The authors declare that they have no competing interests. SG and XW designed the study. SG wrote the algorithms, performed the analysis, and created the figures and tables. SG and XW wrote the manuscript, and read and approved the final version of the manuscript."}
+{"text": "Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions as well as permit studies on the effect of a genetic perturbation.We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks.NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. 
Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM. Gene expression microarrays are increasingly used to determine transcriptional regulation in response to a genetic or environmental perturbation. Often the inference is presented as a static network of genes that are activated or repressed by relevant transcription factors, similar to a wiring diagram of electrical circuits. However, … Conventional methods of time series analysis cannot be applied to this problem because only a small number of observations (gene expression data) from different time points are available relative to the number of variables (gene interaction strengths). Additionally, … The PPI network of Saccharomyces cerevisiae is arguably the most well-constructed, with a high level of confidence. Intuitively, while Γ_e = 0 for an edge directly incident to one of the knocked-out genes, the perturbation gradually damps out with distance from the knocked-out gene, and for an edge e far away from one of the knocked-out genes Γ_e ≈ 1. In the experiments, a value of … was used; h_i denotes the distance of node i from the knocked-out node. We compute the damping factor for strains s ∈ {1, ..., S} and observation time points t ∈ {1, ..., T}. The quantities known a priori are the number of classes H, the set of edges E, the number of strains S and the number of observation points T. The generative process corresponding to our model is given in Table …. We now present a unifying view of the NETGEM model as a generative probabilistic model for gene expression data: for each edge e and time t, we generate a functional category solely responsible for the change of interaction strength. The key principle underlying the model is that the interaction dynamics are governed by the functional categories; in particular, the hidden variables are {Q_h, α_{e,h}} for each edge.
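The damping behaviour described above (Γ_e = 0 for an edge touching the knockout, Γ_e ≈ 1 far away) can be illustrated with a toy form. The exponential shape below is our assumption for illustration only, since the paper's exact expression is not reproduced here; a breadth-first search supplies each node's distance h_i from the knocked-out node, and an edge uses the smaller of its endpoints' distances.

```python
from collections import deque

def distances_from(knockout, adj):
    """Breadth-first search: h_i = graph distance of node i from the knockout."""
    h = {knockout: 0}
    queue = deque([knockout])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in h:
                h[v] = h[u] + 1
                queue.append(v)
    return h

def edge_damping(edge, h, delta=0.5):
    """Toy damping factor Gamma = 1 - delta**min_h: exactly 0 for an edge
    incident to the knockout, approaching 1 with distance. The functional
    form is our assumption, not NETGEM's."""
    min_h = min(h[edge[0]], h[edge[1]])
    return 1.0 - delta ** min_h

# Path graph a-b-c-d with gene 'a' knocked out:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
h = distances_from("a", adj)
g_near = edge_damping(("a", "b"), h)   # 0.0: incident to the knockout
g_far = edge_damping(("c", "d"), h)    # 0.75: two hops away
```

Any monotone function with these two boundary behaviours would serve; the point is only that perturbation effects attenuate with graph distance.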
The hyper-parameters Λ and Θ are chosen as non-informative priors, and an Expectation Maximization (EM) procedure is used for inference. The EM inference procedure in our model estimates the most probable sequence of states for each edge e ∈ E over the simulation time period. However, many edges have few changes, and we need to select the interactions having temporally significant dynamics based on a suitable metric. The dynamics of the interaction strength w_e(t) on an edge e are governed by the Markov transition probability matrix Q_e in our model. Therefore, we characterize the evolution of interaction strengths as significant or not significant under the hypotheses: H0: the interactions w_e(t) on an edge e ∈ E are not significant if the transition probability matrix Q_e has more than P% of its mass on the diagonal; H1: the interactions w_e(t) on an edge e ∈ E are significant if the transition probability matrix Q_e has at most P% of its mass on the diagonal. Here tr(Q_e) denotes the trace of the Markov transition probability matrix Q_e for edge e. The choice of an appropriate test statistic is a non-trivial matter. For example, the weights {-2, -1, 0, 1, 2} on an edge e correspond to increasing degrees of positive correlation between the expression data for genes g and g'. Therefore, a change from w = -2 (strongly repressing) to w = +2 (strongly activating) is more significant than a change from w = -2 to 0 (uncorrelated). Further, the transition probabilities Q_e for the edges are dependent on the functional category transition probabilities Q_h. We use the test statistic T_s(e), which measures the degree of change exhibited by the edge (interaction) e, where T is the total number of observations and w_e(t) is the interaction strength of edge e at time t.
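The trace-based H0/H1 screen and a degree-of-change statistic can be sketched as follows. Since the exact formula for T_s(e) is garbled in this extraction, the mean-absolute-change form below is our assumption for illustration, not NETGEM's definition; the diagonal-mass check follows the trace criterion stated above.

```python
def edge_change_score(w):
    """Plausible sketch of a change statistic T_s(e): mean absolute change in
    interaction strength between consecutive time points. (Assumed form; the
    paper's exact expression is not reproduced here.)"""
    T = len(w)
    return sum(abs(w[t + 1] - w[t]) for t in range(T - 1)) / (T - 1)

def diagonal_mass(Q):
    """Fraction of transition-matrix mass on the diagonal, tr(Q)/W for a
    W-state row-stochastic matrix Q; high values mean mostly static dynamics
    (class H0), low values mean frequent state changes (class H1)."""
    W = len(Q)
    return sum(Q[i][i] for i in range(W)) / W

static_edge = [0, 0, 0, 0, 0, 0, 0, 0]        # never changes state
dynamic_edge = [-2, -1, 0, 1, 2, 1, 0, -1]    # changes at every step
s_static = edge_change_score(static_edge)     # 0.0
s_dynamic = edge_change_score(dynamic_edge)   # 1.0
dm = diagonal_mass([[0.9, 0.05, 0.05],
                    [0.1, 0.8, 0.1],
                    [0.05, 0.05, 0.9]])       # ~0.87: a mostly static chain
```

The two quantities move in opposite directions, matching the remark in the text that the change score generally decreases as tr(Q_e) grows.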
We define the change score C_h for each functional category h, where T_s(·) is given as above. The decision rule based on T_s(e) is: d0: the interaction for edge e does not have temporally significant dynamics if T_s(e) < s*; d1: the interaction for edge e has temporally significant dynamics if T_s(e) ≥ s*, where s* is the critical value. T_s(e) is a random quantity depending on the Markov transition probability matrix Q_e describing the evolution of the interaction strengths w_e(t) defined on edge e. However, the change score T_s(e) generally decreases with increasing trace tr(Q_e); please see the supplementary material and Figure …. We evaluated NETGEM on a synthetic graph constructed using the Erdos-Renyi model, consisting of |V| = 1000 nodes, which represent the genes, and |E| = 5961 randomly generated edges, which represent the gene interactions. Each edge e ∈ E can have one of the states {-2, -1, 0, 1, 2}. We used H = 200 functional categories which govern the behaviour of the evolution of the gene expression. Nodes were randomly assigned to (multiple) functional categories, reflecting the empirical distribution of functional categories for genes in the MIPS database. The matrices Q_h were sampled randomly from two classes, H0: tr(Q_h) > 0.5W and H1: tr(Q_h) ≤ 0.5W. The interaction dynamics for each edge (Q_e) depend on the functional categories (in terms of Q_h) that influence the nodes of the edge, as in eqn. (4). • The first model incorporates the functional categories, which impact the evolution of interaction strengths as specified in eqn. (4) and eqn. (5). • The second model assumes the evolution of the interaction strengths for an edge is independent of other edges; the transition probability matrix Q_e is sampled from H0: tr(Q_e) > 0.5W or H1: tr(Q_e) ≤ 0.5W. We simulate the dynamics at time points t ∈ {1, ..., 8} for each of the two models by generating interaction strengths for each edge e by sampling from a Markov chain with random starting state and transition probability Q_e chosen according to the model. At each time instant, we generate the observation for each node x ∈ V in {-1, 1} based on the interactions at time t as in eqn.
(1) using Gibbs sampling. This number of observation points (T = 8) is small for generic statistical inference techniques, reflecting the size of gene expression datasets commonly available [,28]. The inference in our model is done to learn the evolution dynamics of the functional categories based on the interactions of multiple genes which have the corresponding functional classification. Based on the inferred interactions, we compute the change scores for each edge e and functional category h as in eqn. (13) and eqn. (14) respectively. The change score allows us to classify the edge as belonging to class H0 or H1 based on the choice of critical score s*. If we choose the critical value s* to be very high, most of the edges or functional categories will be classified as insignificant (belonging to class H0), as their test statistics fall below s*. Conversely, if we choose the critical value s* to be 0, most of the edges or functional categories will be classified as significant (belonging to class H1), as their test statistics exceed s*. The Area Under Curve (AUC) values for functional categories and edges are 0.9484 and 0.7208 respectively, which reflects the increased accuracy in determining the dynamic functional categories compared to edges. We note that the multiple edges corresponding to a functional category improve the detection of significant functional categories even under short time periods. If the gene expressions were truly independent of the function of the genes, one would observe fairly low accuracy in predicting the most dynamic functional categories (Additional File …). We characterize the sensitivity of the test statistic using the Receiver Operating Characteristic (ROC) curve, for edges e ∈ E and functional categories h ∈ H, based on different critical values of the test statistic in eqn.
(13) and eqn. (14). Further, the strain damping allows the incorporation of multiple gene expression datasets which have been generated under slightly different conditions. In effect, we learn the dynamics of functional categories from multiple instances of short time series which are not i.i.d. but are strongly related through the functional classification. This reduces the variance in the results, partly alleviating the problem of inference in short time series. One could instead estimate the transition probability Q in eqn. (1) and eqn. (2) directly using a simple HMM having a large state space; however, the exponential state space makes such an approach impractical. For the purpose of study, we present a small experiment which shows that incorporating the functional information is beneficial in terms of computational requirements as well as accuracy. This section presents inference results using our model for identifying interaction dynamics from two gene expression datasets in yeast. We are interested in the edges which show considerable activity during the measured time points. Towards this end, we compute the change score T_s(e) as in eqn. (13), fit an exponential distribution to it, and consider the weights falling in the top 5% (p = 0.05) tail of the distribution. The interactions between the nodes were visualized as a graph in the Cytoscape environment. Analysis of the Sfp1 knockout strain shows that our model is able to successfully integrate multiple strains via strain damping. It also implicates many new actively interacting genes which have an important role in the biological conditions corresponding to the gene expression datasets; these can be used as test candidates in future biological experiments. Our results show that the model correctly identifies known interactions. For example, it discovers the gradual transition from positive to negative interaction strength in edges between carbohydrate metabolism and protein synthesis genes.
Moreover, it detects abrupt changes in the interaction patterns. The gene expression dataset in the following experiments is normalized independently for each strain, by subtracting from the gene expression of node v at time t in strain s the corresponding reference value. Here w_e = +2, w_e = -2 and w_e = 0 mean that the gene expressions for the genes of edge e are strongly positively correlated, strongly negatively correlated, or uncorrelated, respectively. Similarly, an interaction of w_e = +1 or w_e = -1 indicates weak positive or negative correlation between the expressions of the genes of edge e. We chose the set of weights accordingly. The data in this experiment capture the changes in gene expression during the gradual transition from glucose to ammonia as the growth-limiting nutrient. Genes that had already been grouped into eight clusters were considered, relative to t = 0.0 h; please see the supplementary material. Our model identified a momentary repression in the synthesis of ribosomes at t = 9.6 hours, when the growth limitation was exerted by ammonia. These genes were constitutively active during all other time points, as expected, because ribosome biosynthesis is an essential cellular process. The temporary arrest in ribosome biosynthesis was attributed to the control exerted by the Sfp1 transcription factor. An important aspect of NETGEM is its ability to capture the dynamics in response to a perturbation in the network. The model allows identifying the significantly changed interactions in response to the deletion of a gene. In this experiment, we evaluate the effect of deleting a key transcription factor, Sfp1. Interactions with p ≤ 0.05 were considered for subsequent analysis. In this manner, we identified 171 interactions among the genes that were already identified to have been differentially expressed between REF and MUT. Please see the supplementary material and eqn. (11).
This ensures that interactions further from the point of perturbation in the network are affected to a lesser degree than those closer to it. The effect of damping is very sensitive to the network, and in the network we considered in this study, a majority of the edges appear to be relatively unaffected by the perturbation. After assessing the effect of perturbation for our network, we identified temporal changes in the interactions in (a) the REF strain and (b) the MUT strain independently, as well as (c) JOINT, where the inference is based on both strains combined, relative to t = 0 min. We observed some overlap in the actively interacting genes between REF and MUT. Many of these genes were hexose transporters and those responsible for pH homeostasis. The genes that are common to the strains indicate that they are not responsive to the mutation. NETGEM was able to identify the temporal interactions that are most sensitive to the mutation. These interactions predominantly occurred between ribosome biosynthesis and amino acid metabolism. The results concur with the known role of Sfp1 in coordinating metabolism with ribosome biosynthesis and serve as an independent validation of the accuracy of the damping model incorporated in NETGEM. These interactions were identified by considering gene expression profiles in REF and MUT, using the damping model. Indeed, the functional classification of the genes between which interactions change significantly indicates that the Sfp1 transcription factor has widespread control over coordinating ribosome biosynthesis, pH homeostasis, transport of proteins and drugs, etc. (Table …). Please see Additional File …. Using more sophisticated conditional probability models p(w_s(t)|w_0(t)) would involve more parameters to be learnt from the limited amount of experimental data available to model interaction dynamics. NETGEM is a systematic model that relates temporal changes in gene expression data to the dynamics of interactions in the context of a regulatory network.
We believe that NETGEM achieves an optimal balance between model complexity and the data requirement, while allowing ample flexibility to adjust the parameters. The framework of the model will also inherently facilitate analyzing the effect of a perturbation in the network. For a given regulatory network and gene expression data, NETGEM was able to identify time-sensitive interactions in the network and determine their strength. It was able to deduce the most active functional categories that interacted. In addition, NETGEM uses a damping feature that models the effect of a network perturbation by localizing more activity around the point of perturbation. These three novel features reflect NETGEM's advantage over many other time-series models that have been developed recently. Of particular interest is its ability to capture abrupt changes in the interaction patterns. For example, NETGEM identified a momentary arrest in ribosome biosynthesis during the transition of the growth-limiting nutrient from glucose to ammonia (Experiment 1). We identified many actively interacting genes that were implicated to play an important role in the biological conditions from which we obtained the data. This lends promise that new insights obtained from using NETGEM are also physiologically relevant. Given that the inputs to NETGEM are the topology of the network and the temporal variation of the nodes, it is evident that this methodology has widespread applications in analyzing network dynamics, beyond biological systems. There is thus a trade-off between model sophistication and data availability. VJ, CB, DD and GNV designed the study. GNV provided the experimental data and critically revised the biological findings of the method. VJ performed the analysis. All authors interpreted the results. VJ and GNV wrote the manuscript. All authors read, edited and approved the final manuscript. Supplementary material.
This file contains the supplementary material accompanying this manuscript. Cytoscape visualization data: a zip archive containing the Cytoscape attribute files and the figures corresponding to Experiment 1 and Experiment 2. See the included README file for more details."}
+{"text": "Time course microarray profiles examine the expression of genes over a time domain. They are necessary in order to determine the complete set of genes that are dynamically expressed under given conditions, and to determine the interaction between these genes. Because of cost and resource issues, most time series datasets contain less than 9 points and there are few tools available geared towards the analysis of this type of data.To this end, we introduce a platform for Processing Expression of Short Time Series (PESTS). It was designed with a focus on usability and interpretability of analyses for the researcher. As such, it implements several standard techniques for comparability as well as visualization functions. However, it is designed specifically for the unique methods we have developed for significance analysis, multiple test correction and clustering of short time series data. The central tenet of these methods is the use of biologically relevant features for analysis. Features summarize short gene expression profiles, inherently incorporate dependence across time, and allow for both full description of the examined curve and missing data points.http://www.mailman.columbia.edu/academic-departments/biostatistics/research-service/software-development.PESTS is fully generalizable to other types of time series analyses. PESTS implements novel methods as well as several standard techniques for comparability and visualization functions. These features and functionality make PESTS a valuable resource for a researcher's toolkit. PESTS is available to download for free to academic and non-profit users at A frequent goal of high-throughput biological studies, in general, and microarray studies, in particular, is the identification of genes that show differential expression between phenotypes (e.g. cancer vs. no cancer). 
Microarray experiments are used in a wide variety of studies to understand the mechanisms governing variation in complex traits, for example …. Typical characteristics of microarray time course data are: 1) sparsity, in terms of both the number of replicates per sample and the number of time points per replicate, and 2) irregularly spaced time points. Although there have been temporal microarray studies with as many as 80 time points, almost all are much shorter. In fact, Ernst et al. (2005) [5] found …. The purpose of this paper is to introduce the Processing Expression of Short Time Series (PESTS) platform, designed for the complete analysis of short time series gene expression datasets. PESTS provides a set of methods targeted to the analysis of sparse and irregularly-spaced time course microarray expression data, making minimal assumptions about the underlying process that generated the data. It is designed specifically for the unique methods we have developed for significance analysis, multiple test correction and clustering of short time series data. Although PESTS was specifically designed for short microarray time series analyses, it is generalizable to other, longer time series analyses. Together with its implementation of several standard techniques and its visualization capabilities, users may find PESTS to be a useful tool for time series data analysis with or without PESTS-specific algorithms. Much of the work on significance analysis of time series expression experiments uses methods originally developed for static or uncorrelated data. The fundamental principle behind the time series methods developed for PESTS is to appropriately use expression profiles and dependence across time points to determine salient genes and gain biological insight about them while accounting for sparsity in the data.
Instead of using model-based techniques, which do account for time dependence but generally tend to be inappropriate in cases of sparsity, PESTS methods summarize profiles using an innovative set of features. Features summarize short gene expression profiles, inherently incorporate dependence across time, and allow for both full description of the examined curve and missing data points. They are based on the structural characteristics of the time course data and reflect a clear link with subject-matter considerations, capturing the \"global picture\" of an admittedly short time horizon of expression. In the case of short time series, features are used as a dimension augmentation technique. By contrast, this algorithm could also be extended to longer time series through the use of features which provide dimension reduction, such as autocorrelation functions, skewness, kurtosis, etc., as well as the descriptive features presented here. These biologically relevant features or curve summarization measures are then used for significance analysis or clustering. We provide brief summaries of these methods in the context of the interface description next; further information can be found in [,18]. In this paper, we will discuss details of the PESTS platform as well as give brief overviews of the relevant methodologies used and their evaluation. First, we give implementation details and briefly discuss data requirements for using the platform. Then we give an overview of the interface, as well as the implemented visualization tools. Lastly, we compare the platform to other available resources for both significance analysis of time course data and clustering. For a given analysis, there are i = 1, ..., I treatments and r_i = 1, ..., R_i replicates for each treatment. Additionally, for paired data, for i ≠ j, R_i = R_j, but this may or may not be the case for unpaired data.
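Two simple descriptive features of the kind discussed above, signed area under the curve and overall slope, can be computed even for sparse, irregularly spaced profiles. The function below is our illustrative sketch, not PESTS code; it drops missing points (represented as `None`) so a profile with gaps still yields a full feature vector.

```python
def curve_features(times, values):
    """Summarize a short, possibly irregular expression profile by two
    features: signed area under the curve (trapezoid rule over the observed
    points) and the overall slope of a least-squares line."""
    pairs = [(t, v) for t, v in zip(times, values) if v is not None]
    ts = [t for t, _ in pairs]
    vs = [v for _, v in pairs]
    # Trapezoid rule handles unequal spacing between consecutive time points.
    area = sum((t2 - t1) * (v1 + v2) / 2
               for (t1, v1), (t2, v2) in zip(pairs, pairs[1:]))
    n = len(pairs)
    tbar = sum(ts) / n
    vbar = sum(vs) / n
    slope = (sum((t - tbar) * (v - vbar) for t, v in pairs)
             / sum((t - tbar) ** 2 for t in ts))
    return {"auc": area, "slope": slope}

# Irregularly spaced profile with one missing measurement:
feats = curve_features([0, 2, 3, 7, 12], [0.0, 1.0, None, 2.0, 4.0])
```

Feature vectors like this one can then feed a standard significance test or a clustering step, which is the dimension-augmentation role the text describes.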
In either case, time points of measurement may not be the same for a treatment i and replicate r_i. The focus of this work is on time series data that is both sparse and irregularly spaced. Thus, the methods presented are implicitly tailored to these data characteristics. Here, we note our other guiding principles. First, the interface is designed for both paired and unpaired data. For significance analysis, the data must have more than one treatment, allowing for comparison. While paired data has the same number of replicates per treatment by definition, unpaired data is not required to. Furthermore, any given replicate can have measurements taken at different time points. In other words, the numbers of treatments, replicates and time points may all vary across a given analysis. PESTS requires Java 6 or later (http://www.java.com) and will work with any operating system supporting it. Advantages to using Java for this platform are that it is flexible, freely distributed, provides comprehensive graphical interface capabilities, implementations are platform independent, and the use of an interface does not require expertise in any programming language, statistical or otherwise, for the user. Further, Java is well-suited to memory management tasks, critical in data-intensive analyses such as microarray analyses. Because of the large open-source community, many implementations of methods found in standard statistical packages were available to us for development. However, we do note some limitations in this area, so some methods were implemented from scratch - most notably, the clustering algorithms. Several third party libraries were used to support the application. The Java Statistical Classes (JSC) package (http://www.jsc.nildram.co.uk/) was used for some of the standard statistical computations. Foxtrot (http://foxtrot.sourceforge.net/) was used for thread management. JFreeChart (http://www.jfree.org/jfreechart/) provided implementations for plot rendering.
The JExcel API (http://jexcelapi.sourceforge.net/) was used to generate Excel spreadsheets for saving results. Lastly, EaSynth (http://www.easynth.com/) was used for the look and feel of the application. The main PESTS interface is structured as a portal of available functionality for processing and viewing data. This screen is shown in Figure . PESTS requires as input two files. The first file is a tab-delimited file of gene expression data. The second file is a tab-delimited label file of the associated metadata for the arrays. The expression data file includes unique probe identifiers, optional gene symbols, and data values. An example is given in Figure . The label file contains the covariate information, which indicates the treatment, replicate number, and time point of each array, as well as a specifier indicating whether the data is paired. An example is given in Figure . Once the files are loaded, users are then able to view the raw expression data and plot the data as in Figure . After this, the main portal activates the multiple test correction button and the view significance analysis results button. The estimation of the number m0 of non-differentially expressed genes is addressed here and used to adjust the measure of the false discovery rate (FDR) for each gene. We present two methods for m0 estimation. The first uses a p-value plot of Np vs. 1\u2212p, where p is the p-value (from 0 to 1) and Np is defined as the number of p-values (across the entire dataset) that are greater than p. The p-value plot was first suggested by Schweder and Spjotvoll (1982) to determine m0, using the R2 coefficient, which describes how well a straight line is fitted. More details can be found in [18]. The multiple test correction screen is shown in Figure . It displays the estimated m0 and the corresponding estimates for sensitivity, specificity, and false discovery rate for various levels of significance. 
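The p-value plot method described above can be sketched as follows: for large p, where most remaining genes are non-differential, Np is approximately linear in 1 - p with slope m0, so a least-squares line through the origin yields the estimate. The cutoff of 0.5 below is an illustrative assumption, not the rule PESTS uses.

```python
# Illustrative m0 estimation from the p-value plot of Schweder &
# Spjotvoll (1982): plot Np (the number of p-values exceeding p)
# against 1 - p; for large p the points fall on a line through the
# origin with slope approximately m0.

def estimate_m0(pvalues, cutoff=0.5):
    """Estimate the number m0 of non-differentially expressed genes."""
    ps = sorted(pvalues)
    n = len(ps)
    xs, ys = [], []
    for p in ps:
        if p < cutoff:
            continue
        np_count = sum(1 for q in ps if q > p)  # Np for this p
        xs.append(1.0 - p)
        ys.append(float(np_count))
    # Least-squares slope of a line through the origin: Np ~ m0 * (1 - p).
    denom = sum(x * x for x in xs)
    if denom == 0.0:
        return float(n)  # degenerate case: nothing above the cutoff
    return sum(x * y for x, y in zip(xs, ys)) / denom
```

On purely null (uniform) p-values the estimate equals the total number of genes, and adding a block of very small (differential) p-values leaves the estimate at the null count, which is the behaviour the method relies on.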
These can be used to determine an appropriate threshold of significance for a particular dataset and an estimated m0. The right panel is a graphing panel that can show the ROC plot, the p-value plot, or the CDF plot. Finally, the user can perform cluster analysis. The clustering screen is shown in Figure . Clustering is performed over a range of values of k, choosing the optimal k as the clustering with the highest average silhouette. Clustering results can be viewed as shown in Figure . There are few software platforms available for the purposes of short time-series data analysis. In terms of both significance analysis and clustering, PESTS is the only platform we are aware of that does both. EDGE performs m0 estimation to improve FDR calculations. Given that this method requires model-fitting, it may be more suitable to longer time series or data sets with many replicates, which allow for accurate estimation of model parameters. Similarly, maSigPro is a two-regression step approach targeted to determining differences in time course expression over multiple treatments of the data. The reliance on model fitting with a specific functional form for the time element and a two-step regression strategy suggests limitations, similar to those met in other model-based approaches, when applied to short time series. Additionally, maSigPro does not perform m0 estimation. SAM is an R-based Excel plugin tool. It is similar to PESTS in that its time series method uses features such as the signed AUC or slope across time points, and it uses the SAM test for significance. SAM also performs m0 estimation for multiple test correction. However, using PESTS, other standard tests of significance can be applied using information about the data distribution. Furthermore, the PESTS interface allows more flexibility and usability in time point selection. A user would need to modify the input files in order to look at different periods of time with any of these platforms. 
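A minimal sketch of silhouette-based selection of the number of clusters k: cluster the gene-level feature vectors for each candidate k and keep the k with the highest average silhouette width. The naive k-means below is only a stand-in for the clustering algorithms actually implemented in PESTS.

```python
# Choose k by average silhouette: run a clustering for each candidate
# k and keep the k whose labeling has the highest average silhouette.
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=50, seed=0):
    """Naive Lloyd's algorithm with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # empty clusters keep their old center
                centers[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return labels

def avg_silhouette(points, labels):
    total = 0.0
    for i, p in enumerate(points):
        same = [q for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        others = set(labels) - {labels[i]}
        if not same or not others:
            continue  # silhouette of a singleton cluster counts as 0
        a = sum(dist(p, q) for q in same) / len(same)
        b = min(sum(dist(p, q) for j, q in enumerate(points)
                    if labels[j] == l) / labels.count(l)
                for l in others)
        total += (b - a) / max(a, b)
    return total / len(points)

def best_k(points, k_range):
    return max(k_range, key=lambda k: avg_silhouette(points, kmeans(points, k)))
```

For two well-separated groups of profiles, the silhouette peaks at k = 2 and drops when either group is split further.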
Both EDGE and SAM use asymptotic m0 estimation methods, which are useful but may not be optimal in certain datasets. Additionally, PESTS provides information about the sensitivity and specificity to aid the user in selecting a reasonable threshold for significance. It also provides methods for outlier detection and removal. Genes with outliers are removed from testing, increasing the reliability of results. For identifying differentially expressed genes, the available options are Significance Analysis of Microarrays (SAM), Extraction of Differential Gene Expression (EDGE), and maSigPro. For clustering, there are several more options, including Order Restricted Inference for Ordered Gene Expression data (ORIOGEN). In this paper, we have introduced PESTS, a software platform for the analysis of time course data. It is designed specifically for the unique methods we have developed for significance analysis, multiple test correction, and clustering of short time series data. The central tenet of these methods is the use of biologically relevant features for analysis, which summarize gene expression profiles and inherently incorporate the dependence across time. It is fully generalizable to other types of time series analyses. PESTS was designed with a focus on usability and interpretability of analyses for the researcher. As such, it also implements several standard techniques for comparability, as well as visualization functions. 
These features and functionality make PESTS a valuable resource for a researcher's toolkit. Project name: PESTS (Processing Expression of Short Time Series). Project home page: http://www.mailman.columbia.edu/academic-departments/biostatistics/research-service/software-development. Operating system(s): Platform independent. Programming language: Java. Other requirements: Java 6 or higher. License: non-commercial research use license. Any restrictions to use by non-academics: license needed for commercial use. ASTRO: Analysis of Short Time-series using Rank Order preservation; CAGED: Cluster Analysis of Gene Expression Dynamics; EDGE: Extraction of Differential Gene Expression; GQL: Graphical Query Language; ORIOGEN: Order Restricted Inference for Ordered Gene Expression data; PESTS: Processing Expression of Short Time Series; STEM: Short Time-series Expression Miner; SAM: Significance Analysis of Microarrays. AS and MM both contributed to the design of PESTS as well as the implemented algorithms. AS implemented PESTS. AS and MM both participated in the drafting of the manuscript. Both authors read and approved the final manuscript."}
+{"text": "We use detailed, quantitative datasets of gap gene mRNA and protein expression to solve and fit a model of post-transcriptional regulation, and establish its structural and practical identifiability. Our results demonstrate that post-transcriptional regulation is not required for patterning in this system, but is necessary for proper control of protein levels. Our work demonstrates that the uniqueness and specificity of a fitted model can be rigorously determined in the context of spatio-temporal pattern formation. This greatly increases the potential of reverse engineering for the study of development and other, similarly complex, biological processes. Systems biology proceeds through repeated cycles of experiment and modeling. One way to implement this is reverse engineering, where models are fit to data to infer and analyse regulatory mechanisms. This requires rigorous methods to determine whether model parameters can be properly identified. Applying such methods in a complex biological context remains challenging. We use reverse engineering to study post-transcriptional regulation in pattern formation. As a case study, we analyse expression of the gap genes Kr\u00fcppel, knirps, and giant of Drosophila melanogaster, involved in segment determination during early embryogenesis. Rigorous fitting requires us to establish whether our models provide a robust and unique solution. We demonstrate, for the first time, that this can be done in the context of a complex spatio-temporal regulatory system. This is an important methodological advance for reverse-engineering developmental processes. Our results indicate that post-transcriptional regulation is not required for pattern formation, but is necessary for proper regulation of gap protein levels. 
Specifically, we predict that translation rates must be tuned for rapid early accumulation, and protein stability must be increased for persistence of high protein levels at late stages of gap gene expression. The analysis of pattern-forming gene networks is largely focussed on transcriptional regulation. However, post-transcriptional events, such as translation and regulation of protein stability, also play important roles in the establishment of protein expression patterns and levels. In this study, we use a reverse-engineering approach\u2014fitting mathematical models to quantitative expression data\u2014to analyse post-transcriptional regulation of the gap genes. Systems biology is characterised by the tight integration of experiments and computational modeling. One way to achieve such integration is through reverse-engineering approaches, where dynamical models of regulatory or biochemical reaction networks are fit to quantitative data. Structural (a priori) parameter identifiability analysis can be used to examine whether the problem has a non-trivial solution at all, while practical (a posteriori) parameter identifiability analysis tells us whether estimated parameter values are significant and reliable. While this approach has great potential for the investigation of complex biological regulatory systems, it also poses considerable challenges. So far, unfortunately, the application of these powerful methods to gain specific and novel biological insights has been limited. This is due both to the complexity of most real-world biological regulatory systems and the nature of the data used in reverse-engineering studies. Most of these studies use models based on large systems of coupled non-linear differential equations. This makes it challenging to apply structural identifiability analysis. Moreover, model fitting is generally computationally intensive due to the significant number of parameters to be estimated. This renders rigorous practical identifiability analysis extremely time consuming. 
And finally, high-throughput datasets used for model fitting often exhibit high levels of measurement error, combined with low numbers of replicates. Under these circumstances, it is difficult to accurately assess data variance, which is required for both practical identifiability analysis and optimal experimental design. For all these reasons, reverse-engineering studies often proceed on an empirical basis, without being able to rigorously establish parameter identifiability or the suitability of the datasets and models used. Here, we present a reverse-engineering study which combines model fitting by global optimisation strategies with rigorous structural and practical identifiability analysis. We apply this methodology to a complex regulatory problem: the dynamics of spatio-temporal pattern formation in the early embryo of the vinegar fly Drosophila melanogaster. The biological question we are addressing is the importance of post-transcriptional regulation in animal development. While many studies of pattern formation focus on differential transcriptional regulation of genes, other levels of regulation also shape expression patterns. Classic examples in Drosophila are hunchback (hb) and caudal (cad): mRNAs derived from those genes are distributed uniformly, while their proteins form steep concentration gradients with antero-posterior polarity. The genes studied here are the trunk gap genes Kr\u00fcppel (Kr), knirps (kni), and giant (gt). All these genes encode transcription factors, and are expressed in broad, overlapping domains along the embryo's antero-posterior (A\u2013P) axis. Gap genes respond to activating transcriptional regulatory inputs from long-range maternal morphogen gradients\u2014such as Bicoid (Bcd), Hb, and Caudal (Cad)\u2014as well as repressive inputs from the terminal gap genes tailless (tll) and huckebein (hkb). 
In addition, there is extensive repressive gap gene cross-regulation, which is required for the correct dynamic positioning, maintenance, and sharpening of each gap gene expression domain. In this study, we investigate the role of post-transcriptional regulation within the context of a well established experimental model system: the gap genes involved in segment determination during the blastoderm stage of early Drosophila development. The advantages of using the gap gene network for our case study are twofold. The first advantage is that gap gene patterning is relatively simple and tractable compared to other developmental processes. It essentially occurs in one dimension, along the A\u2013P axis of the embryo. No significant tissue rearrangements or growth are involved. Diffusion is not yet limited by cell membranes, as the embryo is still syncytial at this stage. In addition, all three genes considered here have a very compact structure, with only one or two short introns, and none of them exhibits any sign of alternative splicing. The second advantage is that the gap gene system is exceptionally well understood. All genes involved in segment determination have been identified, and their interactions have been characterised at the genetic and molecular level. Previous models of this system use protein concentrations as model observables. Therefore, these models implicitly assume that post-transcriptional regulation is not required to explain the patterns formed by gap genes. This assumption is not unreasonable, given the similarity of gap mRNA and protein patterns, and the fact that such simplified models can reproduce gap protein patterns to a high degree of accuracy and temporal resolution. We model Kr, kni, and gt separately. Each model is then fitted to protein expression data, using mRNA patterns as external inputs, or boundary conditions. If our models are able to reproduce gap protein patterns correctly, we can conclude that no post-transcriptional regulation is required for the expression of the gap genes considered here. 
If our models fail to fit, however, we will be able to identify those expression features that do rely on post-transcriptional regulatory processes. As mentioned above, we test this hypothesis using a reverse-engineering approach. This is achieved by formulating a model which incorporates the simple assumption that gap protein patterns reflect those of their respective mRNAs a given amount of time earlier in development. We have created a quantitative mRNA expression dataset with high spatial and temporal resolution for the trunk gap genes Kr\u00fcppel (Kr), knirps (kni), and giant (gt), which spans the entire duration of the blastoderm stage in the early embryo of Drosophila melanogaster. This contrasts with previously published semi-quantitative gap gene mRNA data, based on colorimetric (enzymatic) staining protocols, wide-field microscopy, and an efficient but simple data processing pipeline (http://urchin.spbcas.ru/flyex). Expression domains of Kr, kni, and gt shift towards the anterior of the embryo over time. Our data show higher levels of variability in mRNA compared to protein data at a majority of sampled data points. Moreover, we observe a high level of fluctuations between time points in our mRNA data, indicating increased levels of experimental noise. This may be due to the harsher treatment of embryos for in situ hybridisation compared to antibody staining. The fact that mRNA and protein patterns are similar, but show a delay in dynamics with regard to each other, raises the non-trivial question whether protein patterns simply reflect earlier mRNA levels, or whether spatially and temporally specific post-transcriptional regulation is required to account for the observed distribution of proteins. We use a reverse-engineering approach to distinguish between these two alternative possibilities. In this approach, we test the hypothesis that no post-transcriptional regulation is required by fitting a simple dynamical model to data. 
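The hypothesis above can be illustrated with a toy version of such a linear delay model: protein in each nucleus is produced in proportion to the local mRNA level a delay tau earlier, decays linearly, and diffuses between neighbouring nuclei. All parameter names here are illustrative assumptions; this is a sketch of the modelling idea, not the authors' implementation.

```python
# Toy linear production-delay model on a 1-D row of nuclei:
#   dP_i/dt = r * m_i(t - tau) - lam * P_i + d * (P_{i-1} - 2 P_i + P_{i+1})
# integrated with explicit Euler steps. Parameters r (production),
# lam (decay), d (diffusion), tau (delay) are illustrative.

def simulate(mrna, r, lam, d, tau, dt, steps):
    """Simulate protein levels; `mrna(t)` returns one value per nucleus.

    Before t = 0 the mRNA pattern at t = 0 is used, so the delay is
    well-defined from the start of the simulation.
    """
    n = len(mrna(0.0))
    p = [0.0] * n
    for s in range(steps):
        t = s * dt
        m = mrna(max(t - tau, 0.0))  # delayed mRNA input
        new = []
        for i in range(n):
            left = p[i - 1] if i > 0 else p[i]      # zero-flux boundary
            right = p[i + 1] if i < n - 1 else p[i]  # zero-flux boundary
            lap = left - 2.0 * p[i] + right
            new.append(p[i] + dt * (r * m[i] - lam * p[i] + d * lap))
        p = new
    return p
```

With a constant, uniform mRNA input, the protein profile relaxes towards the balance point r/lam in every nucleus, which is a quick sanity check on the implementation.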
This model incorporates the following assumptions: protein is produced in proportion to mRNA levels a given delay earlier, and is subject to linear decay and diffusion, with no spatially or temporally specific post-transcriptional regulation. Our reverse-engineering approach can only give us quantitative and specific insights into the problem of post-transcriptional regulation if it is fit to data in a manner which is as rigorous and reproducible as possible. As a first step, this requires us to determine whether the model is formulated in a way such that the fitting procedure has a unique solution. Since our model is feed-forward and linear, this can be achieved using structural (a priori) parameter identifiability analysis. This analysis is performed under an ideal scenario where noise-free, time-continuous experimental data are assumed to be available, and the objective is to answer the question whether under those ideal conditions the parameters can be given unique values. There are three possible outcomes: (1) the model is structurally globally identifiable (s.g.i.) if all parameters can be given unique values; (2) it is structurally locally identifiable if parameter values are unique only within a local neighbourhood of parameter space; and (3) it is structurally unidentifiable if some parameters can take an infinite number of values. Kni optimization runs converged reproducibly, with a residual score of around 11.1\u201311.2. Expression patterns resulting from Gt optimization runs exhibit similar properties as those of Kni, with a slightly lower residual error. Parameter estimates from different runs varied only minimally between solutions. Confidence intervals were calculated by practical (a posteriori) parameter identifiability analysis: the extent of the confidence ellipsoid along a parameter axis specifies the dependent confidence interval, while the projection of the ellipsoid region onto the parameter axis specifies the independent confidence interval. Independent confidence intervals typically overestimate the uncertainty in parameters, while dependent confidence intervals underestimate it. If both confidence intervals turn out to be similar and small, a parameter can be considered well determined. Confidence intervals, as calculated by linear approximation, were mostly within a reasonable range. As for gt, however, confidence intervals for the production delay are still very substantial for both Kr and kni. Again, this is to be expected, since production delay can be mimicked to some degree by low production rates. 
In contrast, we found that diffusion rates are largely independent of other model parameters, except for slight negative correlations with some other parameters. Correlation coefficients between parameters can be calculated from the covariance matrix. While computationally efficient, the linear identifiability analysis described above can lead to serious artifacts or biases in the estimation of confidence intervals due to its simplifying assumptions. Therefore, we validated its results by using the computationally much more expensive approach of bootstrapping. Distributions of parameter estimates obtained by bootstrapping are well-behaved, with the exception of Kr, which shows two distinct clusters in parameter space. The cause of this bimodal distribution remains unclear. One of the clusters (568 solutions) has implausibly small values for some of its parameters. None of these irregularities observed in parameter distributions seriously affects our ability to compute confidence intervals. This was done by determining the 95-percentile range for each parameter separately. The resulting confidence intervals and the initial guess are shown in Figure . Some notable exceptions apply. First, most confidence intervals based on bootstrapping are clearly asymmetric around the estimated optimal values. This asymmetry reflects a non-linear dependence of the optimisation problem on parameter values, which cannot be captured by confidence intervals calculated from linear approximation. Second, the size of some confidence intervals differs between the two approaches. Nevertheless, correlation matrices calculated by linear approximation or bootstrapping are very consistent. The anisotropic shape of parameter distributions resulting from bootstrapping reveals strong positive correlations between rate parameters for production and decay. In this study, we have used a reverse-engineering approach to test whether post-transcriptional regulation is required for the correct expression of gap protein domains. For this purpose, we have created a high-resolution quantitative dataset of mRNA expression patterns for the gap genes Kr, kni, and gt covering the entire blastoderm stage. Comparison of gap mRNA and protein expression data indicates that both are remarkably similar, although features in the mRNA data emerge a few minutes earlier than those of the corresponding protein patterns. Results of our model fits confirm this general impression: the timing and position of gap protein domains can be explained largely by a simple linear delay model, which assumes that protein patterns correspond to mRNA patterns a given amount of time in the past. Based on this, we conclude that post-transcriptional regulation is not essential for gap gene mediated pattern formation. This result confirms a widely held assumption by the Drosophila research community that had never been put to a rigorous test. On the other hand, our results reveal surprising and significant differences between mRNA and protein levels. In particular, our models fail to correctly reproduce both early dynamics of expression initiation (for Kni and Gt) and late maintenance of protein levels. One last aspect of post-transcriptional regulation that requires our attention is the protein production delay predicted by our models. While the kni transcription unit is short, its paralogue knirps-related (knrl) contains a long intron which results in a primary RNA of about 23 kb. Its limited length allows kni to become expressed early, at cleavage cycle 13. In contrast, cytoplasmic mRNA of knrl only appears around mid cleavage cycle 14A, about half an hour later. The second aspect of the production delay is important in the context of the transient nature of gap gene patterning. While it has been shown that mRNA and protein levels of a gene converge at steady state, gap gene expression is transient and does not settle into such a state during the blastoderm stage. 
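The bootstrapping procedure used for validation can be sketched on a toy one-parameter model: each data point is resampled from a normal distribution with its estimated standard deviation (points without variance estimates are left unrandomised, and negative concentrations are clamped to zero, as described for the expression data), the model is refit, and the 95-percentile range of the refitted parameter gives the confidence interval. The linear model y = a * x below is a deliberately simple stand-in for the full delay model.

```python
# Parametric bootstrap sketch of the confidence-interval procedure.
import random

def fit_rate(xs, ys):
    """Least-squares estimate of a in the toy model y = a * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_ci(xs, ys, sds, n_boot=2000, seed=1):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Resample each observation; points with sd=None are left
        # unrandomised, negative concentrations are clamped to zero.
        sample = [max(rng.gauss(y, sd), 0.0) if sd is not None else y
                  for y, sd in zip(ys, sds)]
        estimates.append(fit_rate(xs, sample))
    estimates.sort()
    lo = estimates[int(0.025 * n_boot)]
    hi = estimates[int(0.975 * n_boot) - 1]
    return lo, hi
```

Sorting the bootstrap estimates and reading off the 2.5% and 97.5% quantiles naturally produces the asymmetric intervals mentioned in the text, since no symmetry around the optimum is imposed.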
These delays, many minutes long, are short compared to the overall duration of development, but not negligible on the time scale of gap gene patterning. Finally, production delays that are on the same order of magnitude as the time scale of pattern formation can lead to severe alterations of the transient dynamic behavior of the system. For example, delays can greatly increase the time it takes for the system to reach its steady state. As in the case of delays, our models yield predictions of rate parameter values that are plausible, informative, and experimentally testable. Predicted decay rates imply gap protein half lives that lie within a physiologically plausible range. Our simple model of post-transcriptional regulation is limited in several important ways. We have explicitly refrained from implementing particular post-transcriptional regulatory mechanisms due to the absence of specific experimental evidence at this point. Our main aim in this current study was to first establish whether any post-transcriptional regulation is necessary for gap gene regulation. Our results clearly show that such regulation is required for the proper level, but not timing and position, of gap gene expression. Future investigations will combine experimental and data-driven modeling approaches to extend the model, and render it more mechanistically accurate. Another limitation concerns the coarse-grained nature of our production delay. It summarizes contributions by transcriptional elongation, mRNA processing and splicing, nuclear export, and translation. Such coarse-graining means that the individual contributions cannot be disentangled. The last, and most important, limitation of our current approach is that transcriptional and post-transcriptional regulatory processes involved in gap gene patterning are still implemented in different models. It is our aim to synthesise both of these stages into a regulatory network model featuring realistic production delays. We expect that such a model would solve several important issues. 
For instance, current gene network models still fail to reproduce the early regulatory and expression dynamics\u2014based on regulatory inputs from maternal gradients only\u2014in an accurate and biologically plausible manner. At a more general scale, we have provided a proof of principle that rigorous model fitting and parameter identifiability analysis are possible in the context of the complex regulation of animal development. We hope that this will enable a more widespread and rigorous application of reverse-engineering approaches to problems of biological pattern formation. In our view, this constitutes an important methodological advance, which is crucial to apply the considerable potential of quantitative reverse-engineering strategies for our understanding of development. Blastoderm stage embryos of Drosophila melanogaster (raised at 25\u00b0C) were collected 1\u20134 hrs after egg laying. Embryos were fixed and stained using FITC- or DIG-labeled riboprobes (kni), plus polyclonal antiserum against Even-Skipped (Eve). Data processing and quantification methods are described elsewhere in detail (http://urchin.spbcas.ru/flyex). The modelled regions are Kr: 25.5\u201388.5%, kni: 32.5\u201388.5%, and gt: 32.5\u201395.5% A\u2013P position (where 0% is the anterior pole). The basic objects of our model represent nuclei plus their associated surrounding cytoplasm (energids). The state variables of the model describe the concentration of intra-nuclear gap protein within each energid. Change in gap protein concentration across time and space is described by a system of ordinary differential equations (ODEs). We solve these systems of ODEs numerically using an implementation of the MATLAB ODE solvers. Structural parameter identifiability analysis was performed using the Laplace transform based approach. Numerical solutions are produced for time points C10\u2013C13, and T1\u2013T8 within cleavage cycle 14A. 
We then calculate a weighted sum of squared differences between model output and data. Correlations among model parameters can be calculated based on the covariance matrix around a given optimum. For bootstrapping, we resampled each of our protein expression data points. Data points for which no variance estimates were available were not randomised. Resulting sampled expression profiles were corrected by setting negative concentration values to zero. For Kr, which shows a bimodal parameter distribution, we only considered solutions from one of the two clusters. Figure S1: Quantification of Kr mRNA data. Each panel represents a time class (T1\u2013T8) in C14A showing an example embryo image (top), un-registered expression profiles (middle), and integrated expression patterns (bottom). Embryo images show lateral views: anterior is to the left, dorsal up. Graphs plot relative mRNA concentration against A\u2013P position. Expression profiles consider only the central 10% strip along the dorso-ventral axis. Green profiles in middle panels were extracted from embryos shown in images above. Lightly shaded background in lower panels represents the region of the embryo considered in our models. (PDF) Figure S2: Quantification of kni mRNA data. Panels are organised as in Figure S1; red profiles in middle panels were extracted from embryos shown in images above. (PDF) Figure S3: Quantification of gt mRNA data. 
Panels are organised as in Figure S1; blue profiles in middle panels were extracted from embryos shown in images above. (PDF) Figure S4: Positional variability in gap domain features. This figure shows standard deviations for the position of characteristic features of the central Kr domain (left), the abdominal kni domain (center), and the posterior gt domain (right). (PDF) Figure S5: Parameter distributions of 100 pLSA optimisation runs. This figure shows illustrative examples of scatter plots for parameter values: estimates for Kr are shown in green, for kni in red, and for gt in blue. (PDF) Figure S6: Parameter estimation with fixed delays, testing whether we can determine the value of the delay parameters by fitting the gt model with the delay fixed. (PDF) Table S1: Comparison of domain position and width between mRNA and protein data for the central domain of Kr (green), the abdominal domain of kni (red), and the posterior domain of gt (blue). (PDF) Text S1: Testing significance of mRNA decay during late C14 using the two-sided Kolmogorov-Smirnov test. (PDF) Text S2: Structural identifiability analysis. (PDF) Text S3: Comparison of global optimization solvers. (PDF) Text S4: Sensitivity of parameter estimates to mRNA data. (PDF)"}
+{"text": "We consider the problem of network completion, which is to make the minimum amount of modifications to a given network so that the resulting network is most consistent with the observed data. We employ here a certain type of differential equations as gene regulation rules in a genetic network, gene expression time series data as observed data, and deletions and additions of edges as basic modification operations. In addition, we assume that the numbers of deleted and added edges are specified. For this problem, we present a novel method using dynamic programming and least-squares fitting and show that it outputs a network with the minimum sum of squared errors in polynomial time if the maximum indegree of the network is bounded by a constant. We also perform computational experiments using both artificially generated and real gene expression time series data. Analysis of biological networks is one of the central research topics in computational systems biology. In particular, extensive studies have been done on inference of genetic networks using gene expression time series data, and a number of computational methods have been proposed, which include methods based on Boolean networks and Bayesian networks, among others. One of the possible reasons for the difficulty of inference is that the amount of available high-quality gene expression time series data is still not enough, and thus it is intrinsically difficult to infer the correct or nearly correct network from such a small amount of data. Therefore, it is reasonable to try to develop another approach. For that purpose, we proposed an approach called network completion. In this paper, we propose a novel method, DPLSQ, for completing genetic networks using gene expression time series data. Different from our previous studies, we employ least-squares fitting in combination with dynamic programming. In order to examine the effectiveness of DPLSQ, we perform computational experiments using artificially generated data. 
We also make computational comparison of DPLSQ as an inference method with other existing tools using artificial data. Furthermore, we perform computational experiments on DPLSQ using real cell cycle expression data of Saccharomyces cerevisiae. The purpose of network completion is to modify a given network with the minimum number of modifications so that the resulting network is most consistent with the observed data. In this paper, we consider additions and deletions of edges as modification operations. In the following, G = (V, E) denotes a given network, where V and E are the sets of nodes and directed edges respectively; each node corresponds to a gene, and each edge represents some direct regulation between two genes. Self-loops are not allowed in E, although it is possible to modify the method so that self-loops are allowed. In this paper, n denotes the number of genes and we let V = {v1,\u2026, vn}. For each node vi, e\u2212(vi) and deg\u2212(vi), respectively, denote the set of incoming edges to vi and the number of incoming edges to vi, defined as follows: e\u2212(vi) = {(vj, vi) | (vj, vi) \u2208 E} and deg\u2212(vi) = |e\u2212(vi)|. DPLSQ consists of two parts: (i) parameter estimation and (ii) network structure inference. We employ least-squares fitting for the former part and dynamic programming for the latter part. Furthermore, there are three variants on the latter part: (a) completion by addition of edges, (b) completion by deletion of edges, and (c) completion by addition and deletion of edges. Although the last case includes the first and second cases, we explain all of these for the sake of simplicity of explanation. The expression dynamics of each node vi is determined by a differential equation of the form dxi(t)/dt = a0i + \u2211j aji xij(t) + \u2211j<k aj,ki xij(t) xik(t) + \u03c9, where vi1,\u2026, vih are incoming nodes to vi, xi corresponds to the expression value of the ith gene, and \u03c9 denotes a random noise. The second and third terms of the right-hand side of the equation represent linear and nonlinear effects to node vi, respectively. We assume that time series data yi(t), which correspond to xi(t), are given for t = 0, 1,\u2026, m. 
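Because the right-hand side of the equation above is linear in the parameters, they can be estimated by ordinary least squares once dx_i/dt is approximated by finite differences. The pure-Python sketch below (normal equations solved by Gaussian elimination) illustrates this estimation step; it is not the authors' implementation.

```python
# Least-squares estimation of a0, a_j, and a_{j,k} for one node:
# build a design matrix with a constant, linear, and pairwise-product
# terms, and solve the normal equations (R^T R) a = R^T y.
from itertools import combinations

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_params(series, i, incoming, dt=1.0):
    """Fit dx_i/dt ~ a0 + sum_j a_j x_j + sum_{j<k} a_jk x_j x_k.

    `series[t][g]` is the expression of gene g at time point t;
    `incoming` lists the regulators of gene i. Derivatives are
    approximated by forward differences.
    """
    rows, ys = [], []
    for t in range(len(series) - 1):
        x = series[t]
        feats = [1.0] + [x[j] for j in incoming] \
            + [x[j] * x[k] for j, k in combinations(incoming, 2)]
        rows.append(feats)
        ys.append((series[t + 1][i] - series[t][i]) / dt)
    p = len(rows[0])
    AtA = [[sum(r[u] * r[v] for r in rows) for v in range(p)] for u in range(p)]
    Aty = [sum(r[u] * y for r, y in zip(rows, ys)) for u in range(p)]
    return solve(AtA, Aty)  # [a0, a_j..., a_jk...]
```

On noise-free synthetic data generated from the model itself, the true coefficients are recovered to machine precision, which is a useful correctness check before fitting real expression data.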
Then, we can use the standard least-squares fitting method to estimate the parameters ajis and aj,kis. In applying the least-squares fitting method, we minimize the objective function given by the sum of squared errors. In this subsection, we consider the problem of adding k edges in total so that the sum of least-squares errors is minimized. Let \u03c3kj,j+ denote the minimum least-squares error when adding kj edges to the jth node, where each added edge vjl must be selected from V \u2212 vj \u2212 e\u2212(vj). In order to avoid combinatorial explosion, we constrain the maximum kj to be a small constant K and let \u03c3kj,j+ = +\u221e for kj > K or kj + deg\u2212(vj) \u2265 n. Then, the problem is stated as the minimization of the total error over all assignments of the kj s summing to k, and we define D+ as the corresponding objective value. D+ can be computed by the following dynamic programming algorithm, and D+ is determined uniquely regardless of the ordering of nodes in the network. The correctness of this dynamic programming algorithm can be seen from the recurrence satisfied by the entries of the table. In the above, we considered network completion by addition of edges. Here, we consider the problem of deleting h edges in total so that the sum of least-squares errors is minimized. Let \u03c3hj,j\u2212 denote the minimum least-squares error when deleting hj edges from the set e\u2212(vj) of incoming edges to vj. As in the addition case, we constrain the maximum hj to be a small constant H and let \u03c3hj,j\u2212 = +\u221e if hj > H or deg\u2212(vj) \u2212 hj < 0. Then, the problem is stated analogously, with objective value D\u2212 defined in the same manner. We can combine the above two methods into network completion by addition and deletion of edges. Let \u03c3hj,kj,j denote the minimum least-squares error when deleting hj edges from e\u2212(vj) and adding kj edges to e\u2212(vj), where deleted and added edges must be disjoint. We constrain the maximum hj and kj to be small constants H and K. We let \u03c3hj,kj,j = +\u221e if hj > H, kj > K, kj \u2212 hj + deg\u2212(vj) \u2265 n, or kj \u2212 hj + deg\u2212(vj) < 0 holds. 
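The per-node least-squares fit described above can be sketched in a few lines. This is a simplified stand-in, not the paper's code: it keeps only the constant and linear regulator terms of the model (the nonlinear pairwise terms are omitted) and approximates the derivative by finite differences; the function name is ours.

```python
import numpy as np

def node_lsq_error(X, j, regulators, dt=1.0):
    """Least-squares error for node j given a candidate regulator set.

    The derivative of x_j, approximated by finite differences, is
    regressed on a constant plus linear terms of the regulators.
    X is a (time points x genes) expression matrix.
    """
    y = (X[1:, j] - X[:-1, j]) / dt                    # approximate dx_j/dt
    A = np.column_stack([np.ones(len(y))] +
                        [X[:-1, r] for r in regulators])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # standard LSQ fit
    return float(np.sum((A @ coef - y) ** 2))
```

With the true regulator supplied, the residual error drops to (numerically) zero, which is the quantity \u03c3 that the dynamic programming step minimizes over regulator sets.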
Then, the problem is stated analogously, with objective value D defined accordingly. In this subsection, we analyze the time complexity of DPLSQ. Since completion by addition of edges and completion by deletion of edges are special cases of completion by addition and deletion of edges, we focus on completion by addition and deletion of edges. First, we analyze the time complexity required per least-squares fitting. It is known that least-squares fitting for linear systems can be done in O(mp^2 + p^3) time where m is the number of data points and p is the number of parameters. Since our model has O(n^2) parameters, the time complexity is O(mn^4 + n^6). However, if we can assume that the maximum indegree in a given network is bounded by a constant, the number of parameters is bounded by a constant, where we have already assumed that H and K are constants. In this case, the time complexity for least-squares fitting can be estimated as O(m). Next, we analyze the time complexity required for computing \u03c3hj,kj,j. In this computation, we need to examine combinations of deletions of hj edges and additions of kj edges. Since hj and kj are, respectively, bounded by constants H and K, the number of combinations is O(n^(H+K)). Therefore, the computation time required per \u03c3hj,kj,j is O(n^(H+K)(mn^4 + n^6)) including the time for least-squares fitting. Since we need to compute \u03c3hj,kj,j for H \u00d7 K \u00d7 n combinations, the total time required for computation of \u03c3hj,kj,js is O(n^(H+K+1)(mn^4 + n^6)). Finally, we analyze the time complexity required for computing Ds. We note that the size of table D is O(n^3), where we are assuming that h and k are O(n). In order to compute the minimum value for each entry in the dynamic programming procedure, we need to examine (H + 1)(K + 1) combinations, which is O(1). Therefore, the computation time required for computing Ds is O(n^3). Since this value is clearly smaller than the one for \u03c3hj,kj,js, the total time complexity is O(n^(H+K+1)(mn^4 + n^6)). Although this value is too high, it can be significantly reduced if we can assume that the maximum degree of an input network is bounded by a constant. In this case, the least-squares fitting can be done in O(m) time per execution. Furthermore, the number of combinations of deleting at most hj edges is bounded by a constant. Therefore, the time complexity required for computing \u03c3hj,kj,js is reduced to O(mn^(K+1)). Since the time complexity for computing Ds remains O(n^3), the total time complexity is O(mn^(K+1) + n^3), which is practical if K \u2264 2 and n is not too large. We performed computational experiments using both artificial data and real data. All experiments on DPLSQ were performed on a PC with an Intel Core i7-2630QM CPU (2.00\u2009GHz) with 8\u2009GB RAM running under Cygwin on Windows 7. We employed the liblsq library (http://www2.nict.go.jp/aeri/sts/stmg/K5/VSSP/install_lsq.html) for least-squares fitting. For artificial generation of observed data for node vi with h input nodes, we considered the following model, where ajis and aj,kis are constants selected uniformly at random from fixed intervals. The reason why the domain of aj,kis is smaller than that for ajis is that non-linear terms are not considered as strong as linear terms. It should also be noted that bi\u03c9 is a stochastic term, where bi is a constant and \u03c9 is a random noise taken uniformly at random from a fixed interval. We employed the structure of the real biological network named WNT5A. For artificial generation of observed data yi(t), we added observation noise, where oi is a constant denoting the level of observation errors and \u03f5 is a random noise taken uniformly at random from a fixed interval. Since the use of time series data beginning from only one set of initial values easily resulted in overfitting, we generated time series data beginning from 20 sets of initial values taken uniformly at random, where the number of time points for each set was set to 10 and \u0394t = 0.2 was used as the period between the consecutive two time points. Therefore, 20 sets of time series data, each of which consisted of 10 time points, were used per trial. 
It is to be noted that in our preliminary experiments, the use of too small \u0394t resulted in too small changes of expression values whereas the use of large \u0394t resulted in divergence of time series data. Therefore, after some trials, \u0394t = 0.2 was selected and used throughout the paper. We examined several values of oi. The original network was modified by adding h edges and deleting k edges, and the resulting network was given as an initial network. Under the above model, we evaluated the performance of the method in terms of the accuracy of the modified edges and the success rate. The accuracy is defined via Eorig and Ecmpl, the sets of edges in the original network and the completed network, respectively. This value takes 1 if all deleted and added edges are correct and 0 if none of the deleted and added edges is correct. For each setting, we took the average accuracy over a combination of 10 parameters and 10 random modifications. The success rate is the frequency of the trials in which the original network was correctly obtained by network completion. The result is shown in the corresponding table. No clear trend can be observed on a relationship between h, k values and the accuracies. This is reasonable because we evaluated the result in terms of the accuracy per deleted/added edge. On the other hand, it is seen that the success rate decreases considerably as h and k increase or the observation error level increases. This dependence on h and k is reasonable because the probability of having at least one wrong edge increases as the number of edges to be deleted and added increases. As for the computation time, the CPU time for each trial was within a few seconds, where we used the default values of H = K = 3. Although these default values were larger than h and k here, this did not cause any effects on the accuracy or the success rate. How to choose H and K is not a trivial problem. As discussed above, large values cannot be used for H or K because of the time complexity issue. Therefore, it might be better in practice to examine several combinations of small values of H and K and select the best result, although how to determine the best result is left as another issue. We also examined DPLSQ for network inference, using artificially generated time series data. Since the method was applied to inference, we let E = \u2205 in the initial network, H = 0, K = 3, and k = 30. It is to be noted that deg\u2212(vi) = 3 holds for all nodes vi in the WNT5A network. Furthermore, in order to examine how CPU time changes as the size of the network grows, we made networks with 30 genes and 50 genes (with k = 90 and k = 150) by making 3 and 5 copies of the original network, respectively. In this case, we used the same network and dynamics model as previously mentioned but with randomly regenerated parameters (ajis and aj,kis), where time series data were generated as before. Since the number of added edges was always equal to the number of edges in the original network, we evaluated the results by the average accuracy, which was defined as the ratio of the number of correctly inferred edges to the number of edges in the correct network. We examined observation error levels of 0.1, 0.3, 0.5, and 0.7, for each of which we took the average over 10 trials using randomly generated different parameter values. An undirected edge was counted as correct if either of its directed versions was included in the edge set of the original network. Since both tools output undirected edges along with their significance values (or their probabilities), we selected the top-ranked edges for comparison. The results are shown in the corresponding tables. As for computation time, both methods were much faster than DPLSQ. Even for the case of N = 50, each of ARACNE and GeneNet worked in less than a few seconds per trial. Therefore, DPLSQ does not have merits on practical computation time. We also examined DPLSQ for inference of genetic networks using real gene expression data. 
Since there is no gold standard on genetic networks and thus we cannot know the correct answers, we did not compare it with the existing methods. We employed a part of the cell cycle network of Saccharomyces cerevisiae extracted from the KEGG database. K = 3 and k = 25 were used. As for time series data of gene expression, we employed four sets of time series data that were reported previously. Since the total number of edges in both the original network and the inferred networks is 25 and there exist 9 \u00d7 10 = 90 possible edges (excluding self-loops), the expected number of correct edges under random selection is roughly estimated as 25 \u00d7 25/90 \u2248 6.9. In this paper, we have proposed a network completion method, DPLSQ, using dynamic programming and least-squares fitting based on our previously proposed methodology of network completion. It should also be noted that the optimality of the solution is not guaranteed in most of the existing methods for inference of genetic networks, whereas it is guaranteed in DPLSQ if it is applied to inference of a genetic network with a bounded maximum indegree. Of course, the objective function is different from existing ones, and thus this property does not necessarily mean that DPLSQ is superior to existing methods in real applications. Indeed, the result using real gene expression data is consistent with this caution."}
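The dynamic programming step for completion by addition of edges can be illustrated with a toy sketch. This is an assumption-laden simplification, not DPLSQ itself: the per-node error keeps only linear terms, the helper names (`lsq_error`, `complete_by_addition`, `E_in`) are ours, and the DP table is collapsed to a dictionary keyed by the number of additions used so far.

```python
import itertools
import numpy as np

def lsq_error(X, j, regulators):
    # Simplified per-node least-squares error (constant + linear terms only).
    y = X[1:, j] - X[:-1, j]
    A = np.column_stack([np.ones(len(y))] + [X[:-1, r] for r in regulators])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ coef - y) ** 2))

def complete_by_addition(X, E_in, k, K=2):
    """Distribute k edge additions over nodes to minimize total error.

    sigma[j][kj] is the best error when adding kj edges to node j
    (at most K per node, as in the text); D accumulates the optimum
    over nodes by dynamic programming.
    """
    n = X.shape[1]
    sigma = []
    for j in range(n):
        cands = [v for v in range(n) if v != j and v not in E_in[j]]
        sigma.append({
            kj: min((lsq_error(X, j, E_in[j] + list(c))
                     for c in itertools.combinations(cands, kj)),
                    default=np.inf)
            for kj in range(K + 1)})
    D = {0: 0.0}              # D[k'] = best total error using k' additions
    for j in range(n):
        nxt = {}
        for used, err in D.items():
            for kj, e in sigma[j].items():
                if used + kj <= k and np.isfinite(e):
                    nxt[used + kj] = min(nxt.get(used + kj, np.inf), err + e)
        D = nxt
    return D.get(k, np.inf)
```

Because each node's candidate sets are enumerated independently and only the count of additions is carried across nodes, the answer does not depend on the node ordering, mirroring the uniqueness property stated for D+.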
+{"text": "Towards a reliable identification of the onset in time of a cancer phenotype, changes in transcription levels in cell models were tested. Surprisal analysis, an information-theoretic approach grounded in thermodynamics, was used to characterize the expression level of mRNAs as time changed. Surprisal analysis provides a very compact representation for the measured expression levels of many thousands of mRNAs in terms of very few - three, four - transcription patterns. The patterns, each a collection of transcripts that respond together, can be assigned a definite biological phenotypic role. We identify a transcription pattern that is a clear marker of eventual malignancy. The weight of each transcription pattern is determined by surprisal analysis. The weight of this pattern changes with time; it is never strictly zero but it is very low at early times and then rises rather suddenly. We suggest that the low weights at early time points are primarily due to experimental noise. We develop the necessary formalism to determine at what point in time the value of that pattern becomes reliable. Beyond the point in time when a pattern is deemed reliable, the data shows that the pattern remains reliable. We suggest that this allows a determination of the presence of a cancer forewarning. We apply the same formalism to the weight of the transcription patterns that account for healthy cell pathways, such as apoptosis, that need to be switched off in cancer cells. We show that their weight eventually falls below the threshold. Lastly, we discuss patient heterogeneity as an additional source of fluctuation and show how to incorporate it within the developed formalism. 
Monitoring the changing expression levels of mRNAs, and more recently miRNAs, is a widely used approach to follow cell state; clustering methods are commonly applied to such data. Our paper provides both the basic theory and two illustrative applications to data from the laboratories of Varda Rotter and coworkers. From cell lines we proceed to human patient cells in renal cancer using the data reported by Stickel et al. The mathematical details are given in section S1 of the Supporting Information. We outline the theory that we developed and applied and we provide more details around the working results. In particular, the most practical form of the results is fully discussed. The notation used is that of surprisal analysis and this is introduced first. The role of patient variability is presented last. Mathematical details, including those elements of Singular Value Decomposition, SVD, that are special to our application, are referred to the Supporting Information. The expression level of transcript i at time t is given by the procedure of maximal entropy as a fold change compared to the base line. The fold difference is known as the surprisal. Surprisal analysis is the act of fitting of the surprisal by a sum of terms, one term per constraint, for each transcript i at each time t. The best fit is sought by varying the values of the Lagrange multipliers. When expression levels are quantitated, for example via a microarray, the data is measured several times. The readings of the expression level of transcript i in different replicas are typically not quite the same. A t test is usually employed to reject such readings that differ too much between different replicas. But even for those results that are kept after this test, the different replicas do not quite yield the same level for a given transcript. This is the experimental error that we are discussing. The variability of different readings implies that the fitted values of the Lagrange multipliers will vary. It is the magnitude of this variation that we are after. The operational procedure that we will follow is to fit the Lagrange multipliers to the mean of the level of expression, mean over replicas. 
What we seek is the error bar on the value of each Lagrange multiplier. Surprisal analysis consists in essence of the fitting of the Lagrange multipliers: at each time t the importance of each term in the sum in the surprisal is determined by the value of the Lagrange multiplier. At each time t we find which constraints are important. One can state the conclusion about which constraint is important also in information theoretic terms: the value of the Lagrange multiplier is exactly by how much the constraint lowers the entropy. The first step in determining how many constraints are informative is to note that there can be no more than T, where T is the number of measured time points. In general T is much smaller than the number N of transcripts. Even so, it is shown that with T\u22121 constraints and a baseline one can reproduce the N input values exactly. It is the numerically perfect fit that the T\u22121 Lagrange multipliers provide that is the source of the issue we address in this paper. There is invariably some noise in the measured expression levels. So with T\u22121 Lagrange multipliers we fit both the real data and the noise. In this paper we estimate at what point the Lagrange multipliers begin to fit the noise. The criterion we employ is direct: a Lagrange multiplier provides no additional information if its value is zero. So in the presence of noise, when there is an error range associated with each Lagrange multiplier, a Lagrange multiplier provides no new information when zero is a possible value. If zero lies within the error range of a multiplier at time t, then it is not informative at that time. The remainder of the paper is how to determine the error bound on a Lagrange multiplier. In the maximum entropy formalism the numerical value of the Lagrange multipliers is determined by the mean value of the constraints. 
In terms of the time-independent variables, the time dependence of the mean value is due to the expression levels. Following Alhassid and Levine, the errors in the measured levels induce a covariance matrix M for the fitted Lagrange multipliers. The relevant input is a (time-dependent) fold error that is summed over all expression levels; it is scaled so that, for example, it equals 0.1 to represent an experimental error of 10%. If the fold error is about the same for all transcripts then the bound takes a particularly simple form. When SVD is used to compute the surprisal, the practical error bound involves N, the number of measured transcripts, and \u03b5, the root mean square error. At any time t it is necessary that the error bound is low enough, and at later times we have that it remains low. There is a complementary situation for such phenotypes that are important in healthy cells and whose role gradually diminishes. For these we need to reverse the directions of the inequalities. A source of noise that requires a separate discussion is when the data is not from a cell culture but represents an average over M different patients; for each patient m, we define the corresponding multipliers. In the first application, the second constraint was identified as the tumor signature pattern; the third constraint is also examined. When using SVD it is a matter of notational convenience to represent the steady state as the leading term. The second example is the HPV-16 cancer model of Levitzki et al. Lastly we consider the additional 'noise' due to patient variability. For each patient we can compute the Lagrange multipliers and their error due to noise in the measurements. Such results are shown for each diseased patient separately, using the renal cancer data of Stevanovi\u0107 et al. We analyzed transcription level changes over time in premalignant cell models and in cancer patients. A transcription pattern that is not expressed in healthy patients was seen in diseased patients. In early stage cell cultures an absent pattern was shown to become informative at later times. 
These later times were still well before a cancer phenotype could be identified. Expressed or not expressed were judged on the basis of a conservative criterion based on an upper bound on the error in the weight of the transcription pattern. On both pragmatic and information theoretic grounds it was argued that if the bound on the fractional error is below unity, the data warrants the conclusion that the phenotype is expressed. This suggests that with additional experience it could be possible to offer an earlier than currently possible diagnostics. Supporting Information S1 (PDF)."}
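The SVD route to surprisal analysis sketched above can be made concrete in a few lines. This is a minimal illustration under our own naming, not the authors' code: it decomposes the log of a positive expression matrix into transcription patterns times time-dependent multipliers, and applies the zero-in-the-error-range criterion for informativeness.

```python
import numpy as np

def surprisal_analysis(X):
    """Fit ln(expression) as a sum of patterns via SVD.

    X: (transcripts x time points) array of positive expression levels.
    Returns G (columns = transcription patterns) and lam (rows =
    time-dependent Lagrange multipliers lambda_alpha(t)) such that
    ln X[i, t] = sum_alpha G[i, alpha] * lam[alpha, t].
    """
    U, s, Vt = np.linalg.svd(np.log(X), full_matrices=False)
    return U, np.diag(s) @ Vt

def is_informative(lam_value, error_bound):
    # A multiplier is informative only if zero lies outside its error range.
    return abs(lam_value) > error_bound
```

With T time points the decomposition yields at most T multipliers, matching the counting argument in the text; the leading term typically plays the role of the (steady-state) baseline.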
+{"text": "The transition from the vegetative to reproductive development is a critical event in the plant life cycle. The accurate prediction of flowering time in elite germplasm is important for decisions in maize breeding programs and best agronomic practices. The understanding of the genetic control of flowering time in maize has significantly advanced in the past decade. Through comparative genomics, mutant analysis, genetic analysis and QTL cloning, and transgenic approaches, more than 30 flowering time candidate genes in maize have been revealed and the relationships among these genes have been partially uncovered. Based on the knowledge of the flowering time candidate genes, a conceptual gene regulatory network model for the genetic control of flowering time in maize is proposed. To demonstrate the potential of the proposed gene regulatory network model, a first attempt was made to develop a dynamic gene network model to predict flowering time of maize genotypes varying for specific genes. The dynamic gene network model is composed of four genes and was built on the basis of gene expression dynamics of the two late flowering mutants. The model was evaluated against the phenotypic data of the id1 dlf1 double mutant and the ZMM4 overexpressed transgenic lines. The model provides a working example that leverages knowledge from model organisms for the utilization of maize genomic information to predict a whole plant trait phenotype, flowering time, of maize genotypes. More than 100 flowering time genes in Arabidopsis thaliana have been defined in the past decades. Genetic Regulatory Network (GRN) models for flowering time control in Arabidopsis have been developed and often presented in graphical form. Flowering time is a major adaptive trait in plants and an important selection criterion in plant breeding. Flowering time genes identified in Arabidopsis and rice (Oryza sativa L.) are conserved in maize (Zea mays L.). 
Through comparative genomics, mutant analysis, genetic analysis and Quantitative Trait Locus (QTL) mapping and cloning, and transgenic approaches, more than 30 flowering time candidate genes have been identified in maize. The understanding of the genetic control of flowering time in maize has advanced significantly in recent years, especially after the completion of the maize genome sequence. Despite these advances in molecular mechanisms, a synthesis in the form of a GRN in maize is lacking. The limited understanding of the genetic controls of flowering time in maize in the past decades led to the development of quantitative empirical models that use environmental and genotypic rather than genomic information to predict the floral transition and the timing of pollen shedding and silking in maize. Heat units or growing degree days to shedding and to silking are examples of empirical models widely used to synchronize shedding and silking events in seed production. Empirical models such as the heat unit model have limitations to predict flowering time for novel genotypes. The advancement in the understanding of the genetic control of flowering time in maize, the availability of GRNs for model organisms, and the conservation of the main components of these GRNs across species suggest the opportunity to build upon models developed for Arabidopsis and rice. The purpose of this paper is to develop a simple model that will serve as a foundation for Dynamic Gene Network (DGN) modeling of the vegetative to reproductive transition in maize. The overall objectives of this study are: (1) to develop a conceptual model in the form of a GRN of flowering time control in maize, (2) to translate the conceptual GRN model into a quantitative DGN model, and (3) to demonstrate and evaluate the prediction of flowering time of maize genotypes varying for specific genes. First, a GRN is proposed based on a synthesis of the literature for flowering time candidate genes and their interactions. 
Second, a quantitative DGN model is described. Third, the DGN model is evaluated against field experimental data for flowering time of novel genotypes created from allelic variation for specific genes and from expression of transgenes. A segregating population (id1/+ dlf1/+) for dlf1 and id1 mutant alleles was constructed by crossing the heterozygous dlf1/+ plants to the heterozygous id1-m1/+ plants in the B73 genetic background. Heterozygous plants were identified by the PCR genotyping method. The offspring included the id1 and dlf1 single mutants and the id1 dlf1 double mutant. Leaf tissues of the offspring plants were taken around the V8\u2013V10 stage for genotyping. The genotypes of individual plants were confirmed by PCR genotyping. Construction of the ZMM4 transgenic lines in the B73, the dlf1 and id1 single mutant genetic backgrounds was described in detail by Danilevskaya et al. For total leaf number (TLN) observations, plants of different genotypes grown in field conditions at the Pioneer Johnston research farm were tagged. The fifth leaf and the tenth leaf, and sometimes the fifteenth leaf, of the tagged plants were identified by cutting the respective leaf tips during the first half of the growing season. TLN observations of all tagged plants were obtained at or after flowering. TLN data for plants with the same genetic composition were combined and the mean and standard error statistics were estimated. Plants of the id1 and dlf1 mutants, the Gaspe Flint landrace, and the B73 inbred for tissue sampling were grown in a greenhouse at 25\u00b0C under 16-h day length. The V-stages were determined based on the topmost liguled leaf. Tissue samples of shoot apices were taken from the emergence stage for the Gaspe Flint landrace or the V1\u2013V3 stage for the mutants and the B73 inbred until about one week after flowering. The intervals between two sampling times and the total number of sampling times were determined by genotypes and developmental stages. 
Total RNA was isolated with TRIzol Reagent in combination with Phase Lock gel. The ZMM4 mRNA expression levels were measured by the GenomeLab GeXP analysis system at Althea Technologies. The raw RNA expression data were normalized against \u03b1-tubulin as the internal control within the same reaction. More details were described in Danilevskaya et al. In Arabidopsis, more than 100 flowering time genes have been characterized. Following the organization of the Arabidopsis GRN, maize flowering time candidate genes were organized by pathways. The photoperiod pathway includes components such as ZmCCT and ZCN8, and stabilization and activation of the CO protein by light. The Arabidopsis CO-FT module is conserved in long-day plants, such as barley (Hordeum vulgare), wheat (Triticum aestivum) and poplar, as well as in Pharbitis nil. The maize genes ID1 and ZmLD have been cloned and characterized at the molecular level, and may function in the autonomous pathway to positively regulate flowering time. The miR156 gene family is repressed by a developmental regulation factor produced in leaf primordia, whereas miR172 promotes flowering by down-regulating ZmRAP2.7. DWARF8 and DWARF9 encode homologs of the Arabidopsis GIBBERELLIC ACID INSENSITIVE (GAI) gene. Studies show that DWARF8 is associated with the variation in flowering time in temperate inbred lines, and the dwarf9-1 mutant exhibits a late flowering phenotype in maize while the same allele in transgenic Arabidopsis lines causes the opposite phenotype. KNOTTED1 (KN1) is an endogenous plant growth regulator that affects both growth and development. Maize counterparts of LFY and AP1 in Arabidopsis, and additional floral integrators, include DLF1, ZMM4, ZmRAP2.7, ZFL1, ZCN2, and ZAP1. DLF1 encodes a bZIP protein that mediates floral inductive signals at the shoot apical meristem in maize. Loss-of-function dlf1 mutant flowers late, indicating that DLF1 promotes the floral transition. 
Gene transcript expression analysis reveals that DLF1 transcript increases and peaks at the floral transition, which indicates that DLF1 is involved in a positive feedback loop to promote the floral transition and, as in Arabidopsis, to activate downstream floral organ identity genes, such as ZMM4. ZmRAP2.7 is a negative regulator of flowering time in maize. VGT1 functions as a cis-regulatory element of the ZmRAP2.7 gene by down-regulating its mRNA transcript abundance. VGT1 is mapped to chromosome arm 8L in the cross of the Gaspe Flint landrace and the N28 inbred. The Gaspe Flint allele reduces the flowering time, number of leaves, and plant height in the N28 background. Based on the regulatory relationships shown in the simplified GRN for maize, four key genes were selected to develop a DGN model to predict the floral transition time in maize. The proposed DGN model includes an ordinary differential equation and can simulate the ZMM4 mRNA expression pattern, which in turn is associated with the floral transition time of maize genotypes varying for specific genes. ID1 regulates ZMM4 expression through two paths: 1) the DLF1-dependent path via regulation of the ZCN8 protein movement through the phloem and 2) the direct autonomous path. Because the ZCN8 and DLF1 proteins combine to form a protein complex to regulate the ZMM4 expression, the interaction between ZCN8 and ID1 can be substituted by a term that represents the interaction between ID1 and DLF1. The model includes two regulatory terms to account for the effect of ID1 alone on flowering time and the combined effect that results from the interaction between ID1 and DLF1. The regulatory relationship between VGT1 and ZMM4 is through ZmRAP2.7. The double suppression relationship can be substituted by a positive term only involving VGT1. 
As discussed earlier, there is a plausible positive feedback mechanism that governs the regulation of the ZMM4 gene expression before the floral transition and generates the exponential ZMM4 mRNA expression pattern. A feedback term involving ZMM4 mRNA expression is included in the model as a parsimonious approach to describe the observed growth pattern of the ZMM4 mRNA transcript. All the regulatory relationships shown in the GRN are thereby represented in the model. In the final form of the DGN model, the ZMM4 mRNA expression level (mZMM4) was directly associated with the floral transition status (FTS) of the genotypes under investigation. ID1, DLF1, and VGT1 stand for allele status of the ID1, DLF1, and VGT1 genetic elements of each genotype; SSEg, SSEp, and SSE stand for sum of squared errors for gene expression data, phenotypic data, and the sum of both, respectively. The Euler integration method was employed to numerically integrate the differential equation model in Eq. 1. The time step used in the numerical integration was 0.01 d. The mZMM4 initial value was set to zero to reflect the negligible size of the plant at t\u200a=\u200a0.0 d. The Nelder-Mead downhill simplex method was used to estimate the parameter values. The level of ID1 is assumed to be 1; the parameter \u03b12 stands for the impact of the ID1 and DLF1 combination while the parameter \u03b13 stands for the impact of VGT1 alone. The parameter \u03b2 is considered as the basal synthesis rate of the ZMM4 gene. By comparing the values of the coefficients, it is evident there is a large impact of VGT1 relative to the impact of the DLF1 and ID1 combination and the impact of ID1 alone. The parameter \u03c9 has a positive value that indicates a positive feedback loop reinforced by other integrators at a switching point before the floral transition. The parameter \u03b11 is a scaling factor that influences the size of the gene effects, including the basal synthesis of the ZMM4 gene, relative to the effect of the positive feedback loop. 
Thus, the smaller value of the parameter \u03b11 relative to that of the parameter \u03c9 indicates the strong effect of the positive feedback loop. The predicted and observed ZMM4 mRNA expression patterns match with each other well as parameterized above. Because the UBI:ZMM4PRO transgenic lines overexpressed ZMM4 cDNA by means of the maize constitutive ubiquitin promoter, significantly higher levels of the ZMM4 mRNA transcript are expected than in non-transgenic lines. To accommodate the constitutive expression of the transgenic ZMM4 gene, a conservative assumption was made to predict DTI for the UBI:ZMM4PRO transgenic lines. The coefficient \u03b2, the ZMM4 basal synthesis rate (Eq. 1), was multiplied by 2 to represent the expression of two copies of the ZMM4 gene, the native and the transgenic copies. Predictions for all genotypes except for the UBI:ZMM4PRO transgenic lines agreed well with observations; the correlation between the predicted DTI and the observed TLN was high (R2\u200a=\u200a0.86 and R2\u200a=\u200a0.87). Assuming that the ZMM4 gene regulates the floral transition through the timing of the transition but not the rate of the leaf initiation, the predicted DTI of the UBI:ZMM4PRO transgenic lines should randomly scatter around the fitted line; however, the UBI:ZMM4PRO transgenic lines are under the fitted line. This implies the model overestimated DTI for the UBI:ZMM4PRO transgenic lines. We attribute this less accurate predicted result to an inadequate assumption about the expression level of the transgenic ZMM4 gene under the maize ubiquitin promoter. Multiplying the coefficient \u03b2 by 2 most likely underestimated the expression level of the transgenic ZMM4 gene in the transgenic plants, thus increasing the predicted DTI. Additionally, this study shows the value of leveraging knowledge from Arabidopsis to organize knowledge and thoughts in a crop species such as maize. Despite different biological processes among species and processes being missing altogether in maize, the network topology identified in Arabidopsis provided fundamental insights to organize the knowledge created for maize. 
The conceptual GRN model provides the basic knowledge to conduct a rudimentary quantitative modeling exercise. The resulting DGN model is a step forward relative to current empirical models utilized to predict flowering time in maize. The performance of the simple model is encouraging and suggests there is an opportunity to develop quantitative models that transparently map genes and their effects to whole plant phenotypes. Numerous paths can be foreseen to advance this quantitative model, with disparate objectives: from simply advancing our understanding of flowering time in maize, to the study of the emergent properties of GRN models, to facilitation of gene discovery, maize breeding, and transgenic product development. This paper proposes a synthesis of our current knowledge of genetic determinants of flowering time in maize in the form of a GRN. This model can serve as a foundation to build upon as new genetic knowledge becomes available and to guide future studies. The process of model building demonstrated a realized opportunity that leveraged learning and networks created for Arabidopsis."}
+{"text": "Two-level gene regulatory networks consist of the transcription factors (TFs) in the top level and their regulated genes in the second level. The expression profiles of the regulated genes are the observed high-throughput data given by experiments such as microarrays. The activity profiles of the TFs are treated as hidden variables, as is the connectivity matrix that indicates the regulatory relationships of TFs with their regulated genes. Factor analysis (FA), as well as other methods such as the network component algorithm, has been suggested for reconstructing gene regulatory networks and for predicting TF activities. These methods have been applied to E. coli and yeast data with the assumption that these datasets consist of identically and independently distributed samples. Thus, the main drawback of these algorithms is that they ignore any time correlation existing within the TF profiles. In this paper, we extend previously studied FA algorithms to include time correlation within the transcription factors. At the same time, we consider connectivity matrices that are sparse in order to capture the sparsity present in gene regulatory networks. The TF activity profiles obtained by this approach are significantly smoother than profiles from previous FA algorithms. The periodicities in profiles from yeast expression data become prominent in our reconstruction. Moreover, the strength of the correlation between time points is estimated and can be used to assess the suitability of the experimental time interval. The number of transcription factors is believed to be much smaller than the number of regulated genes. Moreover, most genes are known to be regulated by only a very restricted number of transcription factors. This induces a sparse connectivity matrix for the representation of the connections between the TFs and the regulated genes. Microarray experiments measure the expression level of thousands of genes simultaneously. 
Unfortunately, a similar method that would allow us to measure simultaneously the abundance or activities of a large number of proteins that act as TFs is not yet available, although some progress has been made with measurements of protein abundance by flow cytometry. Genes are transcribed into mRNAs, which in turn are translated into proteins. Some of these proteins activate or inhibit, as transcription factors (TFs), the transcription of a number of other genes, creating a complex gene regulatory network. Factor analysis (FA) is often used as a dimensionality reduction approach, assuming that the large number of observed variables becomes uncorrelated given a much smaller number of hidden variables called factors. Some of the advantages of FA over principal component analysis are the incorporation of independent additive measurement errors on the observed variables, the identification of an underlying structure, and the assignment of the factors as defined entities (in our case, transcription factors). Finally, in contrast to independent component analysis, the factors are not assumed to be statistically independent. In a recent paper, we examined FA algorithms with sparse connectivity matrices. A serious concern regarding the currently applied FA algorithms, as well as the NCA and plsgenomics algorithms, is the lack of incorporation of any time information provided by the experiments. Indeed, most available microarray data, such as the E. coli, yeast, and Arabidopsis data, are obtained from time series experiments. Unfortunately, the time correlation present within the TFs is ignored in the above algorithms. Time information can act as a smoothing constraint on the TF profiles and thus can improve the reconstruction process. As in our previous paper, here we are still concerned with sparse connectivity matrices, but we also aim to include time correlation within the factors. For this purpose, we extend the algorithm of Fokou\u00e9 and Titterington to handle temporal correlation. 
Other algorithms, such as linear dynamic systems or Kalman filter models, have also been suggested for estimation of the parameters of a time series model with hidden states; Ghahramani and Hinton presented an EM algorithm for such models, and Barenco et al. reconstructed transcription factor activity from expression data. In this paper, we show how to incorporate time information in the factor analysis approach. Factor analysis is attractive, since it is one of the most straightforward ways to link hidden transcription factor activities to observed outputs without knowledge of the connectivity. However, time series information is ignored in all the methods discussed in our previous paper. Here, we explore an extension to factor analysis that integrates time series correlation. Since some data might show very little correlation or none at all, we estimate the posterior distribution of the strength of correlation of TF activities from one time point to the next. This information is useful in several respects, as we show for gene expression data from E. coli and from yeast. In this section, we describe the general factor analysis model and how time correlation information enters the model. We discuss how the model incorporates the sparsity of the connectivity matrix and discuss identifiability problems associated with factor analysis. We assume that the number of factors is much smaller than the number of observed variables. The loadings matrix, or connectivity matrix, has entries indicating the relationship between a TF and a gene. FA models assume that the error terms are independent and additive. In Section 5, we discuss in detail the prior and posterior probabilities of the parameters. The key difference between this and previously published work lies in the covariance structure of the factors: a between-factors, or spatial, covariance matrix captures the correlation between the factors, while a within-factors, or temporal, covariance matrix captures the correlation across time points. 
We assign 1 to the diagonal entries of this matrix so that it can be treated as a correlation matrix. A t-distribution prior is assigned to each row of the loadings matrix. Most genes are regulated by a small number of transcription factors, which implies that the connectivity matrix is sparse. There is an identifiability problem associated with (1): the loadings matrix and the factors are determined only up to an orthogonal transformation. Orthogonal transformations also include permutations of the factors which, in the case of regulatory networks, limit our ability to assign factors to known TFs. Sabatti and James constrain the structure of the loadings matrix to address this problem. In the FA algorithm, we utilise a Gibbs sampling algorithm in order to estimate the unknown parameters. In each iteration of the algorithm, it is possible that factors change sign or change position in the factors matrix. This is a widely known problem in FA. Here, we suggest the use of the Hungarian matching algorithm to identify factors that have changed sign or position in the factors matrix. We analysed two gene-expression datasets, one from E. coli and one from yeast, both from time series experiments. We have previously analysed the E. coli dataset using FA algorithms. We also analysed the influence of the prior on the reconstruction process by providing prior loadings matrices that are denser than the Kao connectivity matrix; that is, we ran the FA algorithm with decreasing numbers of given zero positions in the prior loadings matrix. We tested priors where 75, 50, 25, or 5 entries per TF were fixed to zero. These positions were chosen randomly out of the approximately 75 zero positions that are present for each TF in the Kao connectivity matrix. We found that even with 50 zero entries out of the approximately 75 that are present in the Kao connectivity matrix, the FA algorithm can reconstruct the same TF activity profiles equally well without having an identifiability problem regarding the labelling of the factors. 
However, as the number of constraints to zero in the prior loadings matrix is reduced further, assigning factors to TFs becomes more difficult, although the reconstruction of the TF profiles is still acceptable. This result is interesting from a biological perspective, since it shows the importance of integrating prior information regarding the system under consideration even if the available information is limited. Negative experimental results showing that there is no connection between a TF and a gene are even more informative than positive experiments for this kind of analysis, since they restrict the structure of the connectivity matrix very effectively. We also analysed the E. coli expression data without any prior information; the reconstructed profiles strongly resemble the profiles obtained when prior information is provided. Given the E. coli dataset, we performed another test to assess the reconstruction process based on a sample with larger time intervals: we removed every other time point starting from the second, so that the new dataset consists of 12 time points instead of 24. The correlation and reconstruction results for this subsample suggest that the experimental sampling frequency could have been reduced to about half the time points without too much loss of accuracy. Of course, if the correlation coefficient dropped much further, that would indicate that a shorter time interval is required for an accurate reconstruction of the factor profiles. Finally, a reconstruction with half the time points was also performed without any prior information regarding the underlying gene regulatory network. We also apply the proposed algorithm to gene expression data from yeast, a dataset of cell-cycle time series experiments. We analyse each time series dataset separately, given the same connectivity matrix used in previous work. The estimated yeast TF profiles are not as smooth as the E. coli profiles from the previous section. 
However, they are still smoother than the profiles estimated by the GNCA algorithm or the static FA algorithms. A possible explanation could be the smaller correlation coefficients, in comparison to E. coli, that are estimated for each time series: 0.68 for the elutriation dataset and 0.56 for a second of the cell-cycle time series. For completeness, we also measured the correlation of the constructed TF profiles with their corresponding expression profiles without finding any significant match. This result is consistent with that of Boulesteix and Strimmer. For both E. coli and yeast, it seems that a fair amount of correlation is present in the data, as might have been expected. Consequently, the ragged profiles resulting from a simple factor analysis without time constraints seem to be artifacts; it is simply unlikely that activity levels fluctuate as wildly as seen for some genes. The profiles reconstructed incorporating the estimated correlation certainly look smoother, and it is likely that they are closer to the true activity profiles than the ragged reconstructions by simple factor analysis methods. For the E. coli dataset, we demonstrated the effect of the time interval on time correlation and reconstructed profiles. Finally, we showed that the reconstruction process is reliable even with little or no information regarding the connectivity matrix. However, some form of prior knowledge is necessary to obtain smoother profiles or to match unknown factors to known TFs. We presented a factor analysis algorithm that incorporates temporal correlation within each factor vector and also learns sparse factor loadings matrices. It is quite plausible that the underlying regulatory networks are quite sparse, and reconstruction algorithms based on this principle have received increasing interest recently. There is also increasing interest in time series experiments, since they are able to capture the dynamics of biological processes. 
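The two-level generative model underlying this factor analysis (observed expression equals a sparse loadings, or connectivity, matrix times hidden TF activities plus independent noise) can be sketched as follows; the dimensions, sparsity level, and noise scale are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy generative sketch of the two-level model: a sparse loadings
# (connectivity) matrix links a few hidden TF activity profiles to many
# observed gene expression profiles. All sizes are illustrative.
rng = np.random.default_rng(4)
G, K, T = 30, 3, 20                  # genes, TFs (factors), time points
Lambda = rng.standard_normal((G, K))
mask = rng.random((G, K)) < 0.2      # each gene regulated by few TFs
Lambda *= mask                       # enforce a sparse connectivity matrix
F = rng.standard_normal((K, T))      # hidden TF activity profiles
X = Lambda @ F + 0.1 * rng.standard_normal((G, T))  # observed expression
```

Reconstruction then amounts to recovering Lambda and F from X alone, which is only possible up to the permutation and sign ambiguities discussed in the text.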
The algorithm presented here utilises both sparsity and time correlation within a Bayesian framework. We estimated the distribution of time correlation coefficients for gene expression data from both organisms. One drawback of the GNCA and FA algorithms is that they only model linear relationships between the TFs and the regulated genes, an assumption examined in the PhD thesis by Pournara. Here, we extend our earlier methods in order to include both a between-factors (spatial) covariance matrix and a within-factors (temporal) covariance matrix; the posterior probability of the factors is derived from these priors together with the likelihood. The convenient conjugate prior, an inverted Wishart distribution, is chosen for the spatial covariance matrix of the factors. General case: the temporal correlation matrix can be given the same inverted Wishart treatment, so that its posterior distribution is also an inverted Wishart distribution; however, this approach greatly increases the number of parameters to be estimated. First-order Markov structure: if we instead assume that the correlation matrix follows a first-order Markov structure, its entries decay geometrically with the time lag, its determinant and inverse have simple closed forms, and only a single correlation parameter needs to be estimated. Independent Gaussian priors are assigned to each element of the loadings matrix, and a convenient Gamma prior is placed on the noise precisions. A Gibbs sampler algorithm (Algorithm 1) is used to estimate the unknown parameters. We first initialise the loadings and noise covariance matrices. Then, we iterate through steps 1 and 2, sampling from the posterior conditional distributions of the parameters. In the analysed datasets, convergence was reached within the first 100 samples. However, we discarded the first 3000 samples in order to ensure convergence and collected another 500 samples for the analysis. Algorithm 1: Gibbs sampler algorithm for factor analysis. Step 1: sample the factors and the covariance parameters. Step 2: sample the loadings matrix and the noise variances. To match factors across iterations, we use the Hungarian algorithm, which solves assignment problems in polynomial time. 
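The first-order Markov temporal correlation matrix described above can be built directly; its closed-form determinant and tridiagonal inverse are what keep this parameterization cheap. The value of rho below is an arbitrary illustration.

```python
import numpy as np

# Sketch of a first-order Markov (AR(1)) temporal correlation matrix over
# T time points: unit diagonal, entries decaying geometrically with lag.
def ar1_correlation(T, rho):
    idx = np.arange(T)
    return rho ** np.abs(idx[:, None] - idx[None, :])

T, rho = 5, 0.6                           # illustrative values only
R = ar1_correlation(T, rho)
# Closed-form determinant (1 - rho^2)^(T-1); the inverse is tridiagonal,
# so only the single parameter rho needs to be estimated.
det_closed_form = (1 - rho ** 2) ** (T - 1)
```

As rho approaches 0 the matrix tends to the identity (no temporal smoothing), which matches the text's remark that some datasets may show little or no correlation.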
The sign of the factors can change during the Gibbs sampling algorithm. Moreover, two factors can alternate position in the factors matrix. These are two widely known problems with factor analysis. In order to overcome these problems, we suggest the use of the Hungarian algorithm by Kuhn, applied to a cost matrix whose dimensions equal the number of factors and which measures the agreement between the factors of the current iteration and a reference set. We also applied this matching procedure when analysing the E. coli dataset. The choice of prior hyperparameters is very important in a Bayesian framework. We choose uninformative priors for most of the parameters."}
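The factor matching step can be sketched with SciPy's implementation of the Hungarian algorithm; the correlation-based cost matrix below is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch: align sampled factors to a reference set, undoing permutations
# and sign flips via the Hungarian algorithm on a |correlation| cost.
def match_factors(f_ref, f_sample):
    """f_ref, f_sample: (K, T) arrays of K factor profiles over T points."""
    K = f_ref.shape[0]
    corr = np.corrcoef(f_ref, f_sample)[:K, K:]        # K x K cross-block
    rows, cols = linear_sum_assignment(-np.abs(corr))  # maximise |corr|
    signs = np.sign(corr[rows, cols])                  # detect sign flips
    return f_sample[cols] * signs[:, None]

rng = np.random.default_rng(0)
f_ref = rng.standard_normal((3, 24))
f_perm = -f_ref[[2, 0, 1]]            # permuted and sign-flipped sample
aligned = match_factors(f_ref, f_perm)
```

Because `linear_sum_assignment` minimises total cost, negating the absolute correlations turns the matching into a maximisation of agreement, and the recovered signs undo any flips before the sample is accumulated.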
+{"text": "Biological networks are constantly subjected to random perturbations, and efficient feedback and compensatory mechanisms exist to maintain their stability. There is an increased interest in building gene regulatory networks (GRNs) from temporal gene expression data because of their numerous applications in life sciences. However, because of the limited number of time points at which gene expressions can be gathered in practice, computational techniques of building GRN often lead to inaccuracies and instabilities. This paper investigates the stability of sparse auto-regressive models of building GRN from gene expression data. Criteria for evaluating the stability of estimating GRN structure are proposed. Using these criteria, the stability of the multivariate vector autoregressive (MVAR) methods of building GRN (ridge, lasso, and elastic-net) was studied by simulating temporal gene expression datasets on scale-free topologies, as well as on real data gathered over the HeLa cell cycle. The effects of the number of time points on the stability of constructing GRN were also investigated. When the number of time points is relatively low compared to the size of the network, both accuracy and stability are adversely affected; at least as many time points as there are genes in the network are needed to achieve decent accuracy and stability. Our results on synthetic data indicate that the stability of the lasso and elastic-net MVAR methods is comparable, and their accuracies are much higher than those of the ridge MVAR. As the size of the network grows, the number of time points required to achieve acceptable accuracy and stability becomes much smaller relative to the number of genes in the network. The effects of false negatives are easier to correct by increasing the number of time points than those due to false positives. 
Application to a HeLa cell-cycle gene expression dataset shows that biologically stable GRN can be obtained by introducing perturbations to the data. Accuracy and stability of building GRN are crucial for the investigation of gene regulation. Sparse MVAR techniques such as lasso and elastic-net provide accurate and stable methods for building GRN, even networks of small size. The effect of false negatives is corrected much more easily with an increased number of time points than that of false positives. With real data, we demonstrate how stable networks can be derived by introducing random perturbations to the data. Biological networks are constantly perturbed randomly, and efficient feedback and compensatory mechanisms exist to withstand such instabilities. Constructing gene regulatory networks (GRN) from time-series gene expression data plays a vital role in understanding complex biological mechanisms and in the development of novel drugs. Though microarrays allow measurement of thousands of genes simultaneously, gene expressions in practice can be gathered only over a few time points due to the high cost and time involved and the limitations of the experiments. This makes building GRN an inherently ill-posed problem in practice, leaving such networks unstable and irreproducible. Moreover, the variable and complex nature of biological sources, together with measurement noise and artifacts, adds to the challenges of constructing accurate and stable GRN. A wide range of techniques for inferring GRN from microarray datasets has been proposed in the literature. The linear multivariate vector autoregression (MVAR) provides a simple and efficient technique to estimate regulatory relationships among genes. However, because the number of time points is small compared to the number of genes in gene expression datasets, several penalized MVAR techniques using regularization have been proposed. 
Stability means that the network construction is robust to changes of network topology and parameters, and biological and instrumental noise. In this paper, we first introduce novel criteria for evaluating stability of building GRN at the level of connections and networks. MVAR models of ridge, lasso, and elastic-net penalty are evaluated with respect to their accuracies and stabilities by using synthetic gene expression datasets. In particular, we investigate how many time points of gene expressions are needed for a network of given topology and size. Using a real data set gathered in HeLa cell cycle T a matrix of size I \u00d7 I of regression coefficients, and t\u03b5 = the corresponding innovations. The multivariate model can be written in standard multivariate vector autoregressive (MVAR) form:where In vector form, (2) becomes:t-th row of Y, Z, and E, are ty, tz, and t\u03b5, respectively, and there are T \u2013 1 samples; Y is a (T \u2013 1) \u00d7 I matrix, Z a (T \u2013 1) \u00d7 I matrix, \u03b2 a I \u00d7 I matrix, and E a (T \u2013 1) \u00d7 I matrix. MVAR coefficients are estimated using standard least squares as:where T <= 2.14\u2022License: GNU GPL\u2022Any restrictions to use by non-academics: None\u2022The authors declare that they have no competing interests.KDB and LC conceived the study and developed the model. KDB implemented the model, conducted the case studies and statistical analyses and wrote the manuscript. PP helped in the design and implementation of the package. MA conducted the biological validation experiments and analyses and helped write the manuscript. OT and CC took part in several discussions related to the model. DI took part in several discussions regarding the biological data. All authors read and approved the final manuscript.waveTiling package vignette.Package vignettecontaining detailed information on how to perform a transcriptomeanalysis using a wavelet-based functional model with the waveTilingpackage. 
The data set of case study 1 (leaf development data) is used in the vignette. Click here for file. Methods for biological validation: detailed information about the gene set enrichment and qRT-PCR analysis for case study 1 (leaf development data). Click here for file."}
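The penalized MVAR estimation described above (Y = Z\u03b2 + E with a sparsity-inducing penalty) can be sketched with one lasso regression per target gene. The network size, edge weights, and penalty value below are toy assumptions, not the paper's experimental settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy sketch: recover a sparse MVAR coefficient matrix beta, fitting one
# penalized regression per target gene. True network and alpha are
# illustrative choices only.
rng = np.random.default_rng(1)
I, T = 5, 200                        # genes, time points
beta_true = np.zeros((I, I))
beta_true[0, 1] = 0.8                # hypothetical edge: gene 1 -> gene 0
beta_true[2, 4] = -0.7               # hypothetical edge: gene 4 -> gene 2
y = np.zeros((T, I))
y[0] = rng.standard_normal(I)
for t in range(1, T):
    y[t] = y[t - 1] @ beta_true.T + rng.standard_normal(I)

Z, Y = y[:-1], y[1:]                 # lagged and current (T-1) x I matrices
beta_hat = np.zeros((I, I))
for i in range(I):                   # lasso regression for each target gene
    beta_hat[i] = Lasso(alpha=0.1, fit_intercept=False).fit(Z, Y[:, i]).coef_
```

The L1 penalty drives most spurious coefficients exactly to zero, which is what makes the recovered network sparse; shrinking T toward I in this sketch illustrates the instability the paper studies.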
+{"text": "Genome-wide time-series data provide a rich set of information for discovering gene regulatory relationships. As genome-wide data for mammalian systems are being generated, it is critical to develop network inference methods that can handle tens of thousands of genes efficiently, provide a systematic framework for the integration of multiple data sources, and yield robust, accurate and compact gene-to-gene relationships.g-prior to guide the search for candidate regulators. Our method is highly computationally efficient, thus addressing the scalability issue with network inference. The method is implemented as the ScanBMA function in the networkBMA Bioconductor software package.We developed and applied ScanBMA, a Bayesian inference method that incorporates external information to improve the accuracy of the inferred network. In particular, we developed a new strategy to efficiently search the model space, applied data transformations to reduce the effect of spurious relationships, and adopted the We compared ScanBMA to other popular methods using time series yeast data as well as time-series simulated data from the DREAM competition. We found that ScanBMA produced more compact networks with a greater proportion of true positives than the competing methods. Specifically, ScanBMA generally produced more favorable areas under the Receiver-Operating Characteristic and Precision-Recall curves than other regression-based methods and mutual-information based methods. In addition, ScanBMA is competitive with other network inference methods in terms of running time. Also, the number of actual regulators for a particular gene is only a small fraction of the number of possible regulators.Identifying gene regulatory networks is an important problem in biology. There have recently been many advances in this area in terms of tools for collecting and analyzing large-scale genomics data. 
Many of these datasets, from microarrays and next generation sequencing, quantify the expression levels of all genes in a given genome. Genome-wide time-series data, in principle, allow reverse engineering of the gene regulatory relationships by studying the temporal patterns of regulators and target genes. However, this can be a difficult problem due to the large number of genes. A popular method for inferring gene regulatory networks from time series data uses Dynamic Bayesian Networks (DBN). Ordinary differential equations (ODE) are alternative methods for constructing networks. Another class of methods is based on regression models in which parent nodes (regulators) are inferred for each target gene. Vector autoregressive models have been proposed for inferring causal links between genes. Often this takes the form of a model selection problem, and methods such as the Least Absolute Shrinkage and Selection Operator (LASSO) and the elastic net have been applied. Mutual information methods have also been used extensively on genetics data. We present a new approach using Bayesian Model Averaging (BMA) for variable selection from time-series gene expression data. Our new method offers the following advances over our previous work: \u2022\u00a0We develop a new algorithm called ScanBMA that searches the model space more efficiently and thoroughly than previous algorithms. It is much faster than previous implementations of BMA for a large number of predictors, resulting in running time comparable to that of LASSO. It allows inference for networks of thousands of genes to be completed in minutes on a standard laptop computer. \u2022\u00a0We transform the time-series data to reduce spurious correlations. 
Specifically, we remove the effect of a gene on itself by subtracting the mean expression level for each gene at each time point and then using the residuals from a regression of its expression at the current time point on its expression at the previous time point. \u2022\u00a0We use Zellner\u2019s g-prior for the model parameters. Using the g-prior to compute posterior probabilities out-performs our previous effort using the Bayesian Information Criterion (BIC). \u2022\u00a0We address the scalability of network inference methods. Our new implementation produces running times of minutes compared to hours or even days for some competing methods, thus offering substantial improvements. We also carried out extensive empirical studies of our new method. Specifically, we compared ScanBMA to other network construction methods in the literature, namely LASSO and the mutual information methods MRNET (Maximum Relevance/Minimum Redundancy) and CLR, using yeast data and data from the DREAM4 challenge. For the yeast dataset, we found that our method outperformed competitors and previous analyses in recovering regulatory relationships previously reported in the literature. For the DREAM4 data, for which no prior information was available, our method performed comparably to other methods, while producing more compact networks. Finally, the ScanBMA algorithm presents a substantial improvement in running time over previous implementations of BMA. The method is implemented as the ScanBMA function in the networkBMA Bioconductor software package. In ScanBMA, network inference is formulated as a series of variable selection problems in which parent nodes (regulators) are inferred for each target gene. The BMA framework accounts for model uncertainty in variable selection by averaging over the posterior distributions from multiple models, weighted by their posterior model probabilities. 
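The two-step transformation just described can be sketched for a single gene. The array shape (strains by time points) is an assumption, and the lag-1 regression below is fit without an intercept since the data are already centered, which is a simplifying choice for this sketch.

```python
import numpy as np

# Sketch of the two-step transformation for one gene: (1) subtract the
# per-timepoint mean across strains, (2) take residuals from regressing
# the gene on itself at the previous time point. Toy data; shapes assumed.
def transform_gene(x):
    """x: (S, T) expression of one gene across S strains, T time points."""
    centered = x - x.mean(axis=0, keepdims=True)   # step 1: time-adjust
    prev = centered[:, :-1].ravel()                # lag-1 values, S*(T-1)
    curr = centered[:, 1:].ravel()                 # current values
    alpha = prev @ curr / (prev @ prev)            # least-squares slope
    return centered[:, 1:] - alpha * centered[:, :-1]  # step 2: residuals

rng = np.random.default_rng(2)
x = rng.standard_normal((4, 10))                   # 4 strains, 10 points
r = transform_gene(x)
```

Step 1 removes common trajectories (such as the drug-response spike mentioned in the text), and step 2 removes each gene's autocorrelation, leaving residuals in which cross-gene correlations are more likely to reflect genuine regulation.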
We previously developed a supervised framework to integrate external data sources, including co-expression, genome-wide binding, sequence polymorphism, physical interaction, genetic interaction, and literature curation data; the resulting prior probabilities of regulatory relationships are used, together with the g-prior, in the analysis below. The parameter nvar controls the number of top regulators used in the regression step of each target gene. We have performed empirical studies of the effect of nvar and of its optimal value. Before the regression step, we apply a univariate measure (such as R-squared or BIC) to rank candidate regulators for each target gene using these prior probabilities of regulatory relationships. A number of metrics have been used to evaluate the quality of inferred networks. We focus on a few that compare the inferred network with a gold-standard network of true edges. One measure that we use is the precision of the inferred network, equal to the number of true positives divided by the total number of edges in the inferred network. Precision is important to researchers because an experiment to verify relationships identified in exploratory work can be expensive. Thus, the more confident we can be when identifying relationships, the better. In light of this, we also look at the area under the precision-recall curve (AUPRC). This gives a more comprehensive view of network quality and does not require that a threshold be chosen for the posterior probability of an edge or for the number of edges included. We also look at the area under the ROC curve (AUROC), which is widely used to assess the quality of networks. Due to incomplete knowledge in real data, we use a partial assessment based on the YEASTRACT database. Under this assessment, ScanBMA with nvar\u2009=\u200920 had a precision of 0.39, much higher than any other method, including our previous method, iBMA. 
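The three evaluation metrics just defined can be illustrated on a toy set of scored edges; the numbers below are made up for demonstration and are not results from the paper.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Toy evaluation of an inferred network: flatten candidate edges into
# vectors, treat scores as posterior edge probabilities, and compare
# against a gold standard.
truth = np.array([1, 0, 1, 0, 0, 1, 0, 0])    # gold-standard edges
scores = np.array([0.9, 0.6, 0.7, 0.4, 0.1, 0.5, 0.3, 0.05])

predicted = scores >= 0.5                      # threshold for precision
precision = (predicted & (truth == 1)).sum() / predicted.sum()
auprc = average_precision_score(truth, scores)  # area under PR curve
auroc = roc_auc_score(truth, scores)            # area under ROC curve
# precision = 0.75 here: 3 of the 4 predicted edges are true.
```

Note that precision depends on the chosen threshold, whereas AUPRC and AUROC summarise quality over all thresholds, which is exactly the distinction the text draws.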
The g-prior reduces the number of false positives, while the data transformation substantially reduces the size of the inferred network. As the table shows, nvar\u2009=\u200920 performs better than nvar\u2009=\u20093556. The precision-recall curves for the different methods on the yeast data are shown in the corresponding figure. When analyzing gene networks where the number of nodes is in the thousands, computation time can be an important consideration. We compared ScanBMA with the other methods on the yeast data by running each method on 20 target genes under controlled conditions to find the average cpu time per gene. ScanBMA with nvar\u2009=\u20093556, i.e. considering all other genes whose expression varied as potential regulators, is within a factor of 3 of LASSO, the fastest of the other methods. Some of the mutual information methods, on the other hand, are much slower, with MRNET taking about 50 times longer than ScanBMA, particularly when nvar is large. Dynamic Bayesian network methods were not included in this comparison because they analyze the entire network at once and do not scale well enough to be run feasibly on large network datasets such as the yeast data. For the smaller simulated networks, we also compared a dynamic Bayesian network method, implemented in the ebdbnet R package, to the other methods. The precision-recall curves from the various methods are shown for the first of the five 10-gene networks in the corresponding figure, and the results of the methods for the DREAM4 100-gene networks are summarized in the corresponding table. We have introduced a new algorithm, ScanBMA, to search the model space efficiently. Our method infers compact networks with higher precision than the competing methods we have assessed, important features for further analysis in searching for new regulatory relationships. We have presented a Bayesian Model Averaging method for inferring gene regulatory networks from time series data. 
It incorporates external information in a principled way via the prior edge probabilities, transforms the data to reduce spurious correlations, and uses Zellner\u2019s g-prior for the model parameters. We found that our method outperformed previous methods as well as LASSO and mutual information methods on yeast time-series data. In addition, our method performed comparably to competing methods, including Dynamic Bayesian Networks, on simulated data from the DREAM4 challenge, even in the absence of prior information. The networks from ScanBMA are also similar in size to the target networks. In regression-based methods for network inference, we infer regulators (parent nodes) for each target gene. Hence, network inference can be formulated as a series of variable selection problems. We use the following multiple linear regression model for network inference from time-series data: Xi,t,s\u2009=\u2009\u03b20i\u2009+\u2009\u2211h\u2208H\u03b2hiXh,t-1,s\u2009+\u2009\u03b5i,t,s, where Xi,t,s is the expression level for gene i at time t for strain or replicate s, H is the set of potential regulators, and i\u2009=\u20091,\u2026,n, t\u2009=\u20092,\u2026,T and s\u2009=\u20091,\u2026,S. We are particularly interested in whether \u03b2hi\u2009\u2260\u20090, indicating a regulatory relationship from gene h to gene i. The posterior probability that \u03b2hi\u2009\u2260\u20090, also called the posterior inclusion probability of \u03b2hi, is the sum of the posterior probabilities of all models in which regulator h appears. One difficulty is that for gene network time series data there are typically far more potential regulators than observations. To address this problem, as well as to take into account uncertainty in model selection, we use BMA to obtain the posterior probability that each regulator is in the model. 
MCMC approaches have been applied where the number of potential regulators is large. One concern when identifying regulatory relationships among genes is that there is a great deal of variation in gene expression levels that does not come from these interactions. For example, in the yeast data, many of the genes experience a sharp change in expression level over the first few time points caused by the application of the drug. This common trajectory is not important for inference and can produce many large correlations that do not correspond to actual interactions. In addition, we have found that removing the effect of a gene on itself can improve inference. By doing this we are removing excess variation in order to gain accuracy in inferring relationships. In light of these observations, we transform the data in two steps to remove these extra sources of variation. First, we use time-adjusted data by subtracting the mean expression level for each gene i at timepoint t across strains: X̃_{i,t,s} = X_{i,t,s} − (1/S) Σ_{s'} X_{i,t,s'}. Second, we take the residuals from regressing the gene on itself at the previous timepoint: X*_{i,t,s} = X̃_{i,t,s} − α_i X̃_{i,t-1,s}, where α_i is the regression coefficient in the simple linear regression model of the expression level of gene i at time t on its expression level at time t - 1, estimated from all S(T - 1) relevant observations for gene i. An important feature of our method is a way to incorporate external information. The Bayesian approach requires the specification of prior distributions and so includes this directly. BMA requires two types of prior information: the prior edge probability π_hi that potential regulator h regulates target gene i, for each h and i, and the prior distribution of the parameter vector for each model considered. For the prior edge probability π_hi, we considered two different prior distributions. The first is based on an empirical finding of Guelzim et al. and sets π_hi = 2.76/6000 for all h and i. 
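Returning to the data transformation for a moment, the two steps described above can be sketched on a toy gene-by-strain-by-time structure; the helper names and data are ours, not the paper's.

```python
# Step 1: subtract, for each gene and timepoint, the mean across strains.
# Step 2: replace each series by the residuals from regressing the gene on
# itself at the previous timepoint (slope alpha_i pooled over all S*(T-1)
# observations; after step 1 the data are mean-centered, so no intercept).

def time_adjust(expr):
    out = {}
    for i, strains in expr.items():
        S, T = len(strains), len(strains[0])
        means = [sum(strains[s][t] for s in range(S)) / S for t in range(T)]
        out[i] = [[strains[s][t] - means[t] for t in range(T)]
                  for s in range(S)]
    return out

def remove_self_effect(expr):
    out = {}
    for i, strains in expr.items():
        pairs = [(series[t - 1], series[t])
                 for series in strains for t in range(1, len(series))]
        sxx = sum(x * x for x, _ in pairs)
        alpha = sum(x * y for x, y in pairs) / sxx if sxx else 0.0
        out[i] = [[series[t] - alpha * series[t - 1]
                   for t in range(1, len(series))] for series in strains]
    return out

# Invented toy data: one gene, two strains, three timepoints.
expr = {"g1": [[1.0, 2.0, 4.0], [3.0, 6.0, 12.0]]}
adjusted = time_adjust(expr)
residuals = remove_self_effect(adjusted)
print(adjusted["g1"], residuals["g1"])
```

In this toy case the two strains are exact mirror images after centering and each doubles at every step, so the self-regression removes all remaining variation.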
This prior distribution does not incorporate any gene-specific information. We call it the Guelzim prior. The second prior edge distribution we consider is that of Lo et al., which uses external data sources to derive a π_hi specific to each regulator-target pair. We refer to this as the informative prior. Integration of multiple information sources has been shown to be beneficial in network construction. As a prior for the model parameters, corresponding to the strengths of the relationships, we use Zellner's g-prior, under which the coefficients of model Mk have prior covariance proportional to g(X_k′X_k)^{-1}, where X_k is the design matrix for model Mk and g > 0 controls the prior variance of the regression parameters. This prior yields an analytic form for the posterior model probabilities Pr(Mk | D), in which M0 is the null model with no regulators, dk is the number of regulators present in model Mk, and R²_k is the R² value for Mk. This is an alternative to using BIC to approximate the posterior model probabilities, as has been done previously. The parameter g controls the expected size of the regression parameters β_hi. It should be at least 1, otherwise the individual β_hi's would be expected to be nearly indistinguishable from the noise even under ideal conditions. Also, the effective number of data points in the prior is n/g. Using g = n corresponds to a unit information prior and yields similar results to using the BIC approximation. Raftery argued for choosing g in the range 1 ≤ g ≤ n. We estimate g from the data by maximum marginal likelihood, where the likelihood is summed over the model space. We do this using an EM algorithm in which the missing data are the indicators γ_hi specifying whether regulator gene h is in the model for target gene i. First, we run BMA with a selected value of g, and from this we obtain the posterior probabilities of the models used. 
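A hedged sketch of the analytic posterior: the expression below is one standard form of the g-prior Bayes factor against the null model, stated in terms of the quantities n, dk and R²_k above; the candidate models, their priors and R² values are invented for illustration.

```python
import math

def log_gprior_bf(n, d_k, r2_k, g):
    # Log Bayes factor of model Mk (d_k regulators, coefficient of
    # determination r2_k) against the null model M0, under one standard
    # form of the g-prior (an assumption, not quoted from the paper):
    # BF = (1+g)^((n-d_k-1)/2) * (1 + g*(1-R2_k))^(-(n-1)/2).
    return (0.5 * (n - d_k - 1) * math.log1p(g)
            - 0.5 * (n - 1) * math.log1p(g * (1.0 - r2_k)))

def posterior_model_probs(models, n, g):
    # models: list of (prior_prob, d_k, r2_k) triples; returns normalized
    # posteriors Pr(Mk | D), computed in log space for numerical stability.
    logs = [math.log(p) + log_gprior_bf(n, d, r2, g) for p, d, r2 in models]
    m = max(logs)
    w = [math.exp(x - m) for x in logs]
    z = sum(w)
    return [x / z for x in w]

# Three invented models on n = 20 observations: the null model, a
# 1-regulator model with a good fit, and a 3-regulator model fitting only
# slightly better -- the extra regulators are penalized.
models = [(0.50, 0, 0.00), (0.25, 1, 0.80), (0.25, 3, 0.82)]
probs = posterior_model_probs(models, n=20, g=20)
print([round(p, 3) for p in probs])
```

The choice g = 20 = n here mirrors the unit information prior mentioned in the text; smaller g spreads the posterior mass more evenly across models.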
We then maximize the marginal likelihood with respect to g. We use the new value of g in the next iteration of ScanBMA. We do this until convergence. To find the models to be included in the BMA summation, we use the Occam's window principle, according to which models that are substantially worse than the best model found (by more than a factor of C in posterior probability) are discarded. We use C = 100, based on an extensive review of conventional standards of scientific evidence. To find the models in Occam's window, we propose a new, fast algorithm called ScanBMA. Because ScanBMA does not average over every model in the model space, the algorithm yields posterior inclusion probabilities of either 100% or 0% for many regulators. These extreme posterior inclusion probabilities are only approximations, and we can refine them. To estimate the posterior inclusion probability of a predictor X_h with approximate posterior probability 0%, we compute O_{h,i}, the ratio of the posterior probability of the best model to that of the best model with the predictor X_h added. Then the approximate posterior inclusion probability of predictor h is (1 + O_{h,i})^{-1}, which will be greater than 0%. Similarly, to estimate the posterior inclusion probability of a predictor with approximate posterior probability 100%, we use the corresponding ratio against the best model with the predictor removed, which yields a value less than 100%. This post-processing step yields a unique ordering among the predictors, which is useful in evaluation. A variant of ScanBMA that we found to greatly improve computational efficiency without degrading performance is to restrict the number, nvar, of potential regulators h considered for each target gene i to those with the highest prior probabilities, π_hi. Guelzim et al. found that yeast genes have a limited number of regulators; we set nvar = 20, which leaves some margin over the Guelzim maximum. We compare this with considering all genes with observed variation in expression as potential regulators, which amounts to setting nvar = 3556. 
To validate our method, we applied it to time-series data from a gene expression experiment on yeast as well as simulated time-series datasets from the DREAM4 competition. The yeast data come from an experiment on 97 strains of yeast crossed from two parent strains. The yeast data can be represented as a three-dimensional array consisting of 3,556 genes, 97 segregants and 6 time points. There are no replicates in these data; however, the segregant axis and the time axis capture the genetic and temporal variations. These yeast data are publicly available from ArrayExpress with accession number E-MTAB-412. The simulated data come from the DREAM4 In Silico Network Challenge. Specifically, the size 10 DREAM data consist of 10 genes, 21 time points and 5 replicates. The size 100 DREAM data consist of 100 genes, 21 time points, and 10 replicates. Since our focus is on time-series data, we did not use the other data sources provided by the DREAM4 challenge. In particular, the DREAM4 challenge provided the results of simulated gene knock-out experiments for all genes, and in fact the winning entry in the competition used only the knock-out data and ignored the time series data. In practice, however, such knock-out data are rarely available. To evaluate the performance of our method, we compared it with LASSO as well as mutual information methods. We used the implementation of LASSO in the glmnet package in R, with the penalty parameter λ chosen by cross-validation. Mutual information methods have been used extensively in identifying relationships among genes; we used the implementations in the minet package in R. We have also investigated the possibility of using Dynamic Bayesian Networks (DBN). Methods based on DBNs have been implemented in several R packages, among them the GeneNet, ebdbnet and G1DBN packages; we used the ebdbnet package for the much smaller DREAM4 networks. 
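Stepping back to the model search itself, the Occam's window rule and the BMA posterior inclusion probabilities described earlier can be sketched as follows; the model set and posterior values are invented, and ScanBMA's actual search strategy is more elaborate.

```python
# Occam's window: keep only models within a factor C in posterior
# probability of the best model found (C = 100 in the text), then compute
# each regulator's inclusion probability from the kept models.

def occams_window(models, C=100.0):
    best = max(models.values())
    return {m: p for m, p in models.items() if p >= best / C}

def inclusion_probs(models):
    # Pr(regulator h in model | D) = sum of the normalized posterior
    # probabilities of the kept models that contain h.
    z = sum(models.values())
    regulators = set().union(*models) if models else set()
    return {h: sum(p for m, p in models.items() if h in m) / z
            for h in regulators}

# Invented model set: each model is a set of regulators with an
# unnormalized posterior probability.
models = {
    frozenset({"A"}): 1.0,
    frozenset({"A", "B"}): 0.4,
    frozenset({"B"}): 0.005,   # more than 100x worse than the best: dropped
}
kept = occams_window(models)
probs = inclusion_probs(kept)
print(probs["A"], round(probs["B"], 3))
```

Note that regulator A comes out at exactly 100% because every kept model contains it, which is precisely the situation the (1 + O_{h,i})^{-1} refinement step described above is designed to soften.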
The YEASTRACT database is a literature-curated repository of regulatory associations between transcription factors and target genes in yeast. The yeast time series data are publicly available from ArrayExpress (http://www.ebi.ac.uk/arrayexpress) with accession number E-MTAB-412, and the DREAM4 data from http://wiki.c2b2.columbia.edu/dream/index.php?title=D4c2. ARACNE: Algorithm for the reconstruction of accurate cellular networks; BMA: Bayesian model averaging; CLR: Context likelihood or relatedness; DBN: Dynamic Bayesian network; DREAM: Dialogue for reverse engineering assessments and methods; LASSO: Least absolute shrinkage and selection operator; MRNET: Maximum relevance/minimum redundancy; PRC: Precision recall curve; ROC: Receiver operating characteristic; TF: Transcription factor; YEASTRACT: Yeast search for transcriptional regulators And consensus tracking. The authors declare that they have no competing interests. WCY participated in method development, implemented the ScanBMA method, carried out empirical studies comparing ScanBMA to other competing methods, and drafted the manuscript. AER designed the ScanBMA algorithm, designed the empirical studies and edited the manuscript. KYY identified the datasets, participated in the design of the empirical studies, and assisted in manuscript preparation. All authors read and approved the final manuscript."}
+{"text": "Saccharomyces cerevisiae. The medium-scale network, derived from published genome-wide location data, consists of 21 transcription factors that regulate one another through 31 directed edges. The expression levels of the individual transcription factors were modeled using mass balance ordinary differential equations with a sigmoidal production function. Each equation includes a production rate, a degradation rate, weights that denote the magnitude and type of influence of the connected transcription factors (activation or repression), and a threshold of expression. The inverse problem of determining model parameters from observed data is our primary interest. We fit the differential equation model to published microarray data using a penalized nonlinear least squares approach. Model predictions fit the experimental data well, within the 95\u00a0% confidence interval. Tests of the model using randomized initial guesses and model-generated data also lend confidence to the fit. The results have revealed activation and repression relationships between the transcription factors. Sensitivity analysis indicates that the model is most sensitive to changes in the production rate parameters, weights, and thresholds of Yap1, Rox1, and Yap6, which form a densely connected core in the network. The modeling results newly suggest that Rap1, Fhl1, Msn4, Rph1, and Hsf1 play an important role in regulating the early response to cold shock in yeast. Our results demonstrate that estimation for a large number of parameters can be successfully performed for nonlinear dynamic gene regulatory networks using sparse, noisy microarray data.We investigated the dynamics of a gene regulatory network controlling the cold shock response in budding yeast, The online version of this article (doi:10.1007/s11538-015-0092-6) contains supplementary material, which is available to authorized users. 
All organisms must respond to changes and stresses in their environment to survive and reproduce. Such environmental stresses include changes in nutrient or oxygen availability, changes in osmolarity, salinity, or pH, the presence of reactive oxygen species or other damaging agents, and sudden or large changes in temperature, either an increase (heat shock) or decrease (cold shock). Organisms respond to environmental stresses through characteristic programs of gene expression. Among the most interesting and challenging problems in understanding this environmental stress response is the dynamic behavior of gene expression networks within the cell. The careful regulation of these networks is a fundamental activity of the organism. In this paper, we discuss the development and application of a dynamical systems model for regulation of gene expression during the early response to cold shock in budding yeast. Our choice of Saccharomyces cerevisiae and cold shock is motivated by a number of factors. These yeast have been studied extensively, especially their response to heat shock, which occurs through the induction of heat shock proteins. We model the strengths of the regulatory relationships of controlling genes on targets in a complex feedback network of 21 genes (nodes) and 31 regulatory relationships (edges). We then develop parameter estimation techniques for extracting rate parameter information from time course microarray data obtained from cold shock experiments to infer the direction (activation or repression) and magnitude of influence that regulatory transcription factors have on their target genes. Other models of this type have been developed on relatively simple small gene circuits. Three questions guide this work: (1) which transcription factors control the early response to cold shock? (2) what is the extent of ESR pathway overlap? (3) which part of the early transcriptional response to cold shock is due to indirect effects of other transcription factors? 
To approach these questions, we need complementary types of high-throughput genomic data, the tools of mathematical biology, and the perspective of systems biology. Unlike the response to heat shock and other environmental stresses, the transcriptional response to cold shock has been relatively less well studied in yeast. The previous studies that exist have revealed that the response varies depending on the temperature and the length of time spent at the cold temperature; the cold shock response occurs at low temperatures, in a range beginning near 10 °C. Gene information was drawn from the Saccharomyces Genome Database (http://www.yeastgenome.org), and the network structure itself is pictured in the corresponding figure. A great deal of research has focused on the empirical identification of the network structure from microarray or other genomic data. An established method called genome-wide location analysis, which uses chromatin immunoprecipitation with epitope-tagged transcription factors followed by hybridization to DNA microarrays spotted with intergenic sequences (ChIP-chip), has determined the relationships between transcription factors and the target genes they regulate on a global scale in budding yeast. The network contains 4-node chains originating at CIN5, MAC1, PHD1, SKN7, and YAP1. Most nodes have a single input or are part of a simple regulatory chain, but several participate in complex feedforward motifs. Furthermore, there appear to be two distinct subnetworks. Our goal is to infer the directions and the influence magnitudes of the regulatory relationships. However, we first describe in more detail the nature of the microarray data that we will use to infer parameters in the model. The data come from Saccharomyces cerevisiae strain BY4743 grown at cold shock temperatures. We used a t statistic to determine whether each average expression change was significant, with a p value based on the t statistic. We should note that the variability and the small number of replicates make for tests that are not very powerful. 
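The mass-balance model described in the abstract, with a sigmoidal production function and linear degradation, can be illustrated by a short forward-Euler simulation. The two-gene circuit and every parameter value below are invented for illustration; this is not the fitted 21-gene cold shock network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate(x0, P, d, w, b, dt=0.01, steps=1000):
    # Forward-Euler integration of the mass-balance equations
    #   dx_i/dt = P_i * sigmoid(sum_j w[i][j] * x_j - b_i) - d_i * x_i,
    # with production rate P_i, degradation rate d_i, weights w[i][j]
    # (positive = activation, negative = repression) and threshold b_i.
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        dx = [P[i] * sigmoid(sum(w[i][j] * x[j] for j in range(n)) - b[i])
              - d[i] * x[i] for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

# Invented 2-gene circuit: gene 0 activates gene 1; gene 1 represses gene 0.
P = [1.0, 1.0]            # production rates
d = [0.5, 0.5]            # degradation rates
w = [[0.0, -2.0],
     [3.0,  0.0]]         # weight matrix
b = [0.0, 1.0]            # expression thresholds
x = simulate([0.1, 0.1], P, d, w, b)
print([round(v, 3) for v in x])
```

Because production is bounded by P_i and degradation is linear, each expression level stays between 0 and P_i/d_i; the inverse problem in the paper is to recover P, d, w and b from observed trajectories like these.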
We are grateful to Babette Schade for providing the complete microarray dataset for wild type yeast subjected to cold shock as published in Schade et al. The table lists p values for the 21 genes in our network at several p value cut-offs. Notably, only nine genes in the network show significant changes in gene expression at the chosen cut-off. Gene regulation can be modeled with a wide variety of mathematical structures at many levels of resolution, as Schlitt and Brazma review. Taking a step in that direction, we build a model of gene regulation that adds the dynamics of transcription factor production onto their interaction network. Research along these lines has applied differential equation structures. We compared the weights, thresholds, and production rates obtained from three different penalty levels. We conducted a number of additional computations to explore the quality of these estimates. First, we compared the estimated parameter values for several of the L-curve runs. We see that the magnitudes of the parameters are different, but that the trends and patterns agree across runs. In a second test, we randomized the initial guesses for the iterative optimization scheme. We ran the minimization routine using 10 different initial guesses for each individual parameter. In the cases of the weights and thresholds, we sampled from a standard normal distribution, and for the production rates (which must be nonnegative), we multiplied the optimal production rates by a normal with mean 1 and standard deviation 0.03, truncating to 0 if negative. As a final test of the estimation routine's accuracy, we performed some tests using model-generated data. 
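The randomized initial-guess scheme just described can be sketched directly; the 31 weights and 21 thresholds match the network dimensions, but the "optimal" production rates below are placeholders, not values from the paper.

```python
import random

# Weights and thresholds are drawn from a standard normal; each production
# rate is the optimal rate times a normal with mean 1 and standard
# deviation 0.03, truncated at 0 so rates stay nonnegative.

def random_initial_guess(n_weights, n_thresholds, optimal_rates, rng):
    weights = [rng.gauss(0.0, 1.0) for _ in range(n_weights)]
    thresholds = [rng.gauss(0.0, 1.0) for _ in range(n_thresholds)]
    rates = [max(0.0, r * rng.gauss(1.0, 0.03)) for r in optimal_rates]
    return weights, thresholds, rates

rng = random.Random(0)                      # fixed seed for reproducibility
optimal_rates = [0.8, 1.2, 0.5]             # placeholder production rates
guesses = [random_initial_guess(31, 21, optimal_rates, rng)
           for _ in range(10)]              # 10 restarts, as in the text
print(len(guesses), len(guesses[0][0]), len(guesses[0][1]))
```

Each tuple would seed one run of the penalized nonlinear least squares minimization; agreement of the fits across restarts is what lends confidence to the estimates.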
We used the parameters in the corresponding table for these tests. Since we have no a priori knowledge concerning the quality of the model or the parameter values, we cannot say with certainty that our fit, as detailed in the figures, represents a global optimum. A final topic of interest along these lines is that of the sensitivity matrix. As discussed earlier, some interesting patterns can be detected. A heat map image of the sensitivity matrix is dominated by the production rates, and the image itself is not very illuminating. The eigenvectors in the image are ordered in terms of largest to smallest eigenvalues. First, we note that Eigenvector 22 involves the state equation of NRG1. The sensitivity is strongest with respect to the weight of SKN7 controlling NRG1, slightly dependent on the self-control of NRG1, with opposite sign sensitivity for the net threshold and the production rate. Eigenvector 23 shows a complex connection of sensitivities in the ROX1, YAP1, and YAP6 dynamics. The weights corresponding to the indices 19-22 and 24-31 are the controlling weights for the dynamics of ROX1, YAP1, and YAP6, while indices 43, 45, and 46 correspond to the net thresholds in those three genes. To interpret these sensitivities, we note that YAP1, ROX1, and YAP6 form a densely connected core in one sub-network of relationships. The magnitudes of the parametric uncertainties, as measured through the covariance, are large enough to preclude the use of this approach in extracting the network topology from data at this coarse level of time resolution, so the techniques described herein must be used in conjunction with other methods, either statistical clustering approaches or additional experiments, to identify the network connections. 
We are confident, however, in the utility of this approach to refine the dynamics and directionality of a candidate regulatory graph, which should have general applicability to other biological problems where time course gene expression data are available. We studied a network in S. cerevisiae for which we had three questions: (1) which transcription factors control the early response to cold shock in S. cerevisiae? (2) what is the extent of ESR pathway overlap? (3) which part of the transcriptional response to cold shock is due to indirect effects of other transcription factors? First, the Schade et al."}
+{"text": "Time-course gene expression experiments are useful tools for exploring biological processes. In this type of experiments, gene expression changes are monitored along time. Unfortunately, replication of time series is still costly and usually long time course do not have replicates. Many approaches have been proposed to deal with this data structure, but none of them in the field of pathway analysis. Pathway analyses have acquired great relevance for helping the interpretation of gene expression data. Several methods have been proposed to this aim: from the classical enrichment to the more complex topological analysis that gains power from the topology of the pathway. None of them were devised to identify temporal variations in time course data.timeClip, a topology based pathway analysis specifically tailored to long time series without replicates. timeClip combines dimension reduction techniques and graph decomposition theory to explore and identify the portion of pathways that is most time-dependent. In the first step, timeClip selects the time-dependent pathways; in the second step, the most time dependent portions of these pathways are highlighted. We used timeClip on simulated data and on a benchmark dataset regarding mouse muscle regeneration model. Our approach shows good performance on different simulated settings. On the real dataset, we identify 76 time-dependent pathways, most of which known to be involved in the regeneration process. Focusing on the 'mTOR signaling pathway' we highlight the timing of key processes of the muscle regeneration: from the early pathway activation through growth factor signals to the late burst of protein production needed for the fiber regeneration.Here we present timeClip represents a new improvement in the field of time-dependent pathway analysis. It allows to isolate and dissect pathways characterized by time-dependent components. 
Furthermore, using timeClip on a mouse muscle regeneration dataset we were able to characterize the process of muscle fiber regeneration with its correct timing. Time course gene expression experiments are widely used to study the dynamics of biological processes. Usually, the main goal of such experiments is to identify genes modulated along a biological process or after a system perturbation (such as drug treatments or genetic modifications). However, time course data are costly and usually long time series have few or no replicates. In this context a differentially expressed gene can be defined as a gene with the expression profile changing significantly along time and/or across multiple conditions. Several statistical models have been proposed to account for clusters and differential expression in the context of time series. A pathway is a complex structure comprising chemical compounds mediating interactions and different types of gene groups (e.g. protein complexes or gene families) that are usually represented as single nodes but whose measures are not available using gene expression data. However, after appropriate biologically-driven conversion, a biological pathway can be represented as a gene-only graph. Taking advantage of the structure of the graph, Massa et al. used Gaussian graphical models, and Martini et al. proposed CliPPER, based on a two-step empirical approach. In the first step, it selects pathways with covariance matrices and/or means significantly different between experimental conditions, dealing with the p >> n case; in the second step, it identifies the sub-paths most associated with the phenotype. We build on this approach with timeClip, to deal with long time course data without replicates. Specifically, timeClip combines principal component analysis, regression models and graph decomposition to explore temporal variations across and within pathways. 
Moreover, timeClip implements an easy and effective visualization of the dynamics of the pathways. Pathway analysis is mainly tailored to two-group comparisons and few efforts have been dedicated to the time course design. Here, we propose a modification of CliPPER, called timeClip. On simulated datasets, timeClip shows good performance in terms of power, specificity and sensitivity. Using real data on mouse muscle regeneration, we obtain 76 time-dependent pathways, most of which are known to be involved in the regeneration process. A critical step in the field of topology based pathway analyses is the availability and the quality of the pathway topology. Our group has recently developed graphite, a Bioconductor package for the storage, interpretation and conversion of pathway topology to gene-only networks. graphite discriminates between different types of biological gene groups and propagates gene connections through chemical compounds. Specifically, protein complexes are expanded into a clique, while gene families are expanded without connections among them. The graphite Bioconductor package is limited to human, so here we build a dedicated graphite package for mouse KEGG pathways. This package is available at http://romualdi.bio.unipd.it/wp-uploads/2013/10/graphite.mmusculus_0.99.2.tar.gz. timeClip resembles the two-step approach of CliPPER. In the first step, the whole pathway is explored for its temporal variation. If the pathway is defined as time-dependent, in the second step, timeClip decomposes the pathway into a junction tree and highlights the portion most dependent on time. A general schema of the approach is summarized in the corresponding figure. A pathway is composed of multiple genes, so to reduce the dimension of a whole pathway or of a portion of it, we use principal component analysis. Then the first principal component is explored for temporal variation. A vast number of techniques exist for analyzing regularly sampled time series. 
Unfortunately, the irregular sampling of the values (a common practice in biology) makes direct use of such estimation techniques impossible. To avoid the well known biases associated with the most common approach for irregularly sampled time series, based on transforming unevenly-spaced data into equally spaced observations using some form of interpolation, here we propose to use a regression model combining a polynomial trend and a continuous-time Gaussian autoregressive process of order 1 (AR(1)). Let X be the n \u00d7 t normalized log-transformed gene expression matrix with genes on the rows and experiments on the columns, and let P be a pathway with p genes. Then, on the transpose of the submatrix of X corresponding to the genes of P, we perform principal component analysis (PCA). We used both the classical (R package stats) and the robust (rrcov R package) version of PCA. This yields p principal components; in this way, the first PCs summarize the temporal variation of the genes in pathway P (if present). A related method, PCA-maSigFun, likewise uses principal component analysis, to identify temporally-homogeneous groups of genes within the pathway. Then, for irregularly sampled time series we assume that our signal is Z(t) = p(t) + \u2208(t), where p(t) is a deterministic function, hereafter called \"trend\", and \u2208(t) is the realization of a stationary stochastic process with mean zero. Extensive exploratory analysis suggests that a reasonable choice for the trend component is a polynomial of degree 2 in t, i.e., p(t) = \u03b20 + \u03b21 t + \u03b22 t\u00b2, with \u03b21 capturing existing temporal behaviors and \u03b22 correcting for potential non-linearities. Moreover, we assume that \u2208(t) follows a continuous-time Gaussian autoregressive process of order 1. The model is fitted using generalized least squares (as implemented in the nlme R package). 
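A pared-down sketch of step 1: summarize the genes of a pathway by their first principal component across time points, then fit the quadratic trend p(t) = β0 + β1 t + β2 t² and inspect β1. Two simplifications to note: the trend is fitted here by ordinary least squares rather than by generalized least squares with AR(1) errors (nlme's gls), and the data are synthetic.

```python
def first_pc(rows):
    # First principal component scores over time, by power iteration on the
    # (time x time) cross-product of the row-centered genes-by-times matrix.
    centered = [[v - sum(r) / len(r) for v in r] for r in rows]
    T = len(centered[0])
    cov = [[sum(r[a] * r[b] for r in centered) for b in range(T)]
           for a in range(T)]
    v = [float(i) for i in range(1, T + 1)]   # non-constant start vector
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(T)) for a in range(T)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def ols_quadratic(ts, ys):
    # Solve the 3x3 normal equations for y = b0 + b1*t + b2*t^2
    # by Gaussian elimination with partial pivoting.
    X = [[1.0, t, t * t] for t in ts]
    A = [[sum(row[a] * row[b] for row in X) for b in range(3)]
         for a in range(3)]
    c = [sum(X[i][a] * ys[i] for i in range(len(ts))) for a in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for k in range(col, 3):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        b[r] = (c[r] - sum(A[r][k] * b[k] for k in range(r + 1, 3))) / A[r][r]
    return b

# Irregularly sampled times; every gene in this toy pathway rises with time.
ts = [0.0, 0.5, 1.0, 2.0, 4.0, 7.0]
pathway = [[0.1 * t + 0.05 * g for t in ts] for g in range(5)]
pc1 = first_pc(pathway)
b0, b1, b2 = ols_quadratic(ts, pc1)
print(round(b1, 3))
```

In timeClip the significance of β1 (via a t-test under the GLS fit) decides whether the pathway is time-dependent; here the shared rising trend shows up directly as a positive β1 on the first PC.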
The representative p-value of pathway P, P_p, is then taken to be the p-value of the test of nullity of \u03b21 (obtained by a t-test as implemented in the gls function of the nlme R package). Bonferroni correction is used to adjust p-values for multiple tests. We also evaluated the possibility of fitting the polynomial regression not only on the first PC, but also on a few additional PCs. Pathways declared as time-dependent in step 1 are then moralized, triangulated and decomposed into a junction tree. Briefly, moralization inserts an undirected edge between two nodes that have a child in common and then eliminates directions on the edges; triangulation inserts edges in the moralized graph so that all cycles of size \u2265 4 have chords, where a chord is defined as an edge connecting two non-adjacent nodes of a cycle. A clique in the triangulated graph is a complete subgraph having all its vertices joined by an edge, while a junction tree is a hyper-tree having cliques as nodes and satisfying the running intersection property, according to which, for any two cliques C1 and C2, every clique on the path connecting C1 and C2 contains C1 \u2229 C2. A clique k of pathway P (k = 1,…,K) is composed of a subset of the genes in P; to the columns of X corresponding to the genes of clique k we apply the same approach as described in step 1: PCA transformation and then a linear model with polynomial trend and autoregressive process of order 1 on the first PCs. The p-value of clique k in pathway P is the p-value of the \u03b21 of the polynomial regression. Finally, the best time-dependent paths within a pathway P, j = 1,…,J, are identified using a relevance measure. Briefly, the lower the p-value of a clique in the path, the higher its contribution to the score; in case of a gap the score is penalized. The final score of a path is the maximum value reached by the score along the path. 
Then, the score is normalized for the path length; this quantity is called relevance. As far as we know, there are no existing tools using a similar strategy. As final results, for each time-dependent pathway, we report a list of relevant paths, ranked according to their relevance. Currently, step 2 is the most innovative feature of timeClip. As some paths may be declared time-dependent by timeClip step 2 simply as a consequence of type I errors in timeClip step 1, we used a simulation to evaluate the percentage of false positives under the null hypothesis and to estimate the statistical power in different scenarios. Given a pathway P and its graph structure G, for 1,000 runs we randomly generate a gene expression matrix X (n \u00d7 t) from a multivariate normal distribution with zero mean and variance \u2211, where \u2211 belongs to S+(G), the set of symmetric positive definite matrices with null elements corresponding to the missing edges of G. In this case, gene expression profiles are time independent. Then, for each run we calculate P_p. Under this scenario, at the nominal level \u03b1 = 0.05 we expect a number of rejections around 5%. We repeat the simulation for different values of n and t. In order to be sure that the model was able to identify time-dependency coming from different models, we simulate data using polynomial models, autoregressive models of order 1 and a combination of both. Then, the power is estimated for irregularly and regularly sampled time points. Given a pathway P and its graph structure G, for 1,000 runs we randomly generate a gene expression matrix X ((n \u2212 s) \u00d7 t) without temporal variation, and simulate the remaining s profiles using polynomial models, autoregressive models of order 1 and the combination of both. The coefficients \u03b1* are independently generated from a uniform distribution, and the \u03c6i are generated so as to achieve stationarity. In this way, we simulate expression profiles with different degrees of temporal variations. 
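Returning to step 2 for a moment, the path score described above can be sketched as follows; the significance threshold, the −log10(p) contribution and the unit gap penalty are illustrative choices of ours, not timeClip's exact weighting.

```python
import math

def path_relevance(pvalues, alpha=0.05, gap_penalty=1.0):
    # Running score along a path of cliques: lower p-values contribute
    # more, non-significant cliques ("gaps") subtract a penalty; the path
    # score is the maximum the running sum attains, normalized by length.
    running, best = 0.0, 0.0
    for p in pvalues:
        if p <= alpha:
            running += -math.log10(p)   # lower p => larger contribution
        else:
            running -= gap_penalty      # penalize gaps in the path
        best = max(best, running)
    return best / len(pvalues)          # normalize for path length

# Invented clique p-values along two candidate paths.
strong = [1e-4, 1e-3, 0.01]                 # consistently time-dependent
gappy = [1e-4, 0.5, 0.5, 0.5, 0.5, 0.01]    # same signal diluted by gaps
print(round(path_relevance(strong), 3), round(path_relevance(gappy), 3))
```

The normalization by length is what lets a short, consistently significant path outrank a long path whose signal is interrupted by non-significant cliques.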
Then, for each run we calculate P_p (see Section Step 1: exploring the whole pathway). Under this scenario, the number of rejections estimates the statistical power. We repeat the simulation for different combinations of \u03c6, n, s and t. The benchmark dataset used (GSE469) is the mouse muscle regeneration time course. timeClip is implemented as an R package available from the authors. The package allows the user to analyze equally and non-equally spaced time series according to the user setting. To get better insights into the temporal activation of the different portions of the pathway, we develop a new way of visualization using the Cytoscape software: timeClip exports to Cytoscape the structure of the junction tree, where each time-dependent clique has a pie chart that represents the time trend. Specifically, the pie is divided into as many slices as the number of time points in the dataset. Each slice in the pie is colored (from green to red) according to the scores of the first principal component: the higher the value, the stronger the activation of a clique in a specific time point (red color), and vice versa (green). To summarize, timeClip is a two-step approach to perform topological pathway analysis for time course gene expression data, specifically tailored to long time series without replicates. The tables report the false positive results for different n and t for the irregularly and regularly sampled time points, respectively. The average false positive percentage for each t and n is always limited to ~4-5%, with the exception of small time series (t = 5) and equally spaced time points where it is slightly higher. Thus, we can conclude that, in general, for long time series we have an excellent control of type I error even with exceptionally low sample sizes. The power results are reported for n = 30 and different t and s for equally and not-equally spaced time points, respectively. Here, the genes with temporal variation are simulated using different models (s/3 with polynomial models, s/3 with autoregressive models of order 1 and s/3 with the combination of both). 
As expected, the power increases with the increase of t and s: the longer the time course and the higher the number of time-dependent genes s within the pathway, the higher the power. Specifically, when the time course is short (t = 10-20) the maximum power reaches 60%, while with long time series (t = 30) the power is above 80%. Moreover, it is worth noting that increasing the number of time-dependent genes does not significantly affect the power level. The greater impact that the number of time points has on statistical power with respect to the number of time-dependent genes can be explained by the presence of two steps in our strategy: i) a data reduction step (with PCA on genes within pathways) and ii) a model-fitting step of the reduced variables on time points. PCA is an efficient method to detect variance components in the data. Thus, even in case of a small number of time-dependent genes, the first PC is able to capture the time trend when present. On the other hand, once the trend is captured, the goodness of fit of the regression model increases by increasing the number of time points. The use of robust PCA does not change the performance of the method substantially (data not shown). Comparing step 1 results for equally and not equally spaced time points, we obtain an overlap of 70%. This high degree of overlap makes us confident about the reliability of our approach. We summarized the results in a heat map. In the early-intermediate pathway group, we can see the effects of the early signal secretion: in fact, the group contains pathways like 'mTOR signaling pathway', 'VEGF signaling pathway', 'Insulin signaling pathway' and other metabolic pathways like 'Ether lipid metabolism' and 'Citrate cycle (TCA cycle)'. 
Globally, these pathways indicate that the regeneration progress has begun.'mTOR signaling pathway', probably the most important pathway in muscle regeneration, on one side sustains VEGF signaling and on the other promotes the protein production needed for clonal expansion of the myoblasts, their growth and fusion. In particular, mTOR integrates growth factor signaling with a variety of signals from nutrients (amino acid metabolism activates the mTOR pathway) and cellular energy status . The eneIntermediate-late activation pathways mainly comprise pathways involved in inflammatory responses like 'B and T Cell receptor signaling pathway', 'Toll-like receptor signaling pathway', 'Adipocytokine signaling pathway' and 'Leukocyte transendothelial migration'. Recent discoveries reveal complex interactions between skeletal muscle and the immune system that regulate all phases of the muscle regeneration . MoreoveThis group also contains pathways involved in signaling transduction like 'HIF-1 signaling pathway'. HIF-1 has recently been demonstrated to be essential for skeletal muscle regeneration in mice . In facttimeClip step 2 deeply investigates the timing of activation of different portions of the pathway.In step 1, we are able to see only the strongest signals, and the pathway name alone does not always reflect the activity of the pathway. To tackle the complexity of the pathway, in the second step we focused on the Akt-mammalian target of rapamycin (mTOR) signaling pathway. It regulates a plethora of signals: cell growth, VEGF signaling pathway, autophagy; and its action is related to other pathways known to be involved in muscle regeneration like the Insulin signaling pathway and the MAPK signaling pathway .The junction tree of mTOR signaling pathway Figure starts wst to the 21st clique and contains 16 cliques. The second and the third path share a big portion with the first one.
This big portion goes from clique 1 to clique 13 , while PCA-maSigFun returned 59 significant KEGG pathways (p \u2264 0.05). 26 out of 59 (44%) pathways are in common with timeClip step 1 results. Indeed, both methods retrieve the mTOR signaling pathway; however, PCA-maSigFun did not call the HIF signaling pathway significant, although it seems to be closely related to muscle regeneration [PCA-maSigFun specific pathways (15 out of 33) referred to metabolic processes like Inositol phosphate metabolism, Pyruvate metabolism, Tyrosine metabolism, Glycerolipid metabolism. The remaining pathways are highly heterogeneous and comprise Acute myeloid leukemia, Bladder cancer, Melanoma, Pancreatic cancer.In this section we compare posed by . Step 2,ng tool. proposedneration . Most ofPathway analysis is a useful and widely used statistical approach to test groups of genes between two or more biological conditions. Although many efforts have been dedicated to implementing novel gene set analyses in multivariate and topological contexts, few of them deal with time course experiments. Time course experiments are used to monitor the dynamics of biological processes under physiological conditions or after perturbations.timeClip, an empirical two-step approach specifically tailored to long time course gene expression data without replicates. Using simulated data, timeClip shows good performance in terms of controlling the type I error and power. Furthermore, we successfully identify most of the key pathways involved in the early, middle and late phases of the skeletal muscle regeneration process. A visualization tool has also been implemented to tackle the dynamics of the transcriptome.In this context there is a clear trade-off between the number of time points and the number of replicates.
In general, if the goal of the study is the identification of time-dependency, long time courses are required at the expense of replicates; on the other hand, if the goal is the characterization of a short term response, a large number of replicates for each time point is required to increase statistical power. In general, there are few long time series datasets and in our opinion this is partly due to the experimental costs but also to the lack of effective methods to study and interpret the results. Here, we present The authors declare that they have no competing interests.CR defined the proposed concept. PM developed the proposed methods and performed the analysis on gene expression data. GS and EC developed the computational infrastructure for pathway topology retrieval. CR and MC supervised the work and wrote the paper. SC participated in the biological discussion of the results. All Authors read and approved the final manuscript.Additional tables. This file contains additional tables mentioned in the text (pdf format).Click here for fileFigure 2 Heat map values. This file contains the values used to create the heat map in Figure 2. In column 2, the pathways that are also called significant by PCA-maSigFun with an alpha <= 0.05 are marked with \"*\". In addition, p-values and adjusted p-values (Bonferroni) for timeClip are shown in columns 3 and 4 (tab delimited format).Click here for fileActivation of the HIF-1 signaling pathway. KEGG representation of the HIF-1 signaling pathway. Genes of the 37 clique long path colored in cyan.Click here for file"}
+{"text": "It is well accepted that genes are simultaneously involved in multiple biological processes and that genes are coordinated over the duration of such events. Unfortunately, clustering methodologies that group genes for the purpose of novel gene discovery fail to acknowledge the dynamic nature of biological processes and provide static clusters, even when the expression of genes is assessed across time or developmental stages. By taking advantage of techniques and theories from time frequency analysis, periodic gene expression profiles are dynamically clustered based on the assumption that different spectral frequencies characterize different biological processes. A two-step cluster validation approach is proposed to statistically estimate both the optimal number of clusters and to distinguish significant clusters from noise. The resulting clusters reveal coordinated coexpressed genes. This novel dynamic clustering approach has broad applicability to a vast range of sequential data scenarios where the order of the series is of interest. Microarray and next-generation sequencing (RNA-seq) technologies enable researchers to study any genomewide transcriptome at coordinated and varying stages. Since biological processes are time varying , they ma Functional discovery is a common goal of clustering gene expression data. In fact, the functionality of genes can be inferred if their expression patterns, or profiles, are similar to genes of known function. There are published clustering methods that include into the analysis the duration of the experimental stages, or the staged dependence structure of gene expression. The results from these approaches are certainly more informative and realistic than groupings that are gained from static clustering methods , but their results are limited in interpretation. 
The seminal work from Luan and Li is a gooA variety of subspace clustering methodologies have attempted to address the time-dependent nature of transcriptome experiments through biclustering , or plaiq-clusters, where q is the time length of a bicluster , that can have different time lengths, but genes in the same cluster must have the same durations over time, even though time lags exist among the genes. Song et al. . The optimal number of clusters is estimated as follows: (1) perform hierarchical clustering on the observed data and obtain its merge distance set M0; (2) randomly choose a reference data set; (3) perform hierarchical clustering on the random data set from Step (2), and obtain a new merge distance set; (4) repeat Steps (2)-(3) M times; for each possible number of clusters k, the 95th percentile of the M merge distances, dk*, serves as a threshold, and the 95% threshold curve is constructed as M* = {dk*}; (5) compare M0 with M*; the largest k which satisfies dk > dk* is the optimal number of clusters k0. Since some gene expression data can be quite noisy, an additional step in cluster validation necessitates differentiating the noise cluster from meaningful clusters. For an object i, its silhouette width is defined as s(i) = (b(i) - a(i))/max{a(i), b(i)}, where a(i) is the average dissimilarity of object i to other objects in the same cluster and b(i) is the average dissimilarity of object i to objects in its nearest neighbor cluster. The range of the silhouette width is [-1, 1]. The average silhouette width of objects in a cluster represents the quality of the cluster in terms of compactness and separation. The average silhouette of a noise cluster (if it exists) should be low, while for statistically significant clusters it should be high. A noise cluster differs from statistically significant clusters in terms of compactness and separation. The objects in a noise cluster are scattered, while the objects in a statistically significant cluster, which are similar to the tight clusters in , are denk0 clusters, a uniform reference distribution is employed to generate independently located objects upon which the hierarchical clustering operates.
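The silhouette width used above to separate the noise cluster from meaningful clusters can be computed as in this minimal sketch; Euclidean dissimilarity and the toy data are assumptions for illustration:

```python
import numpy as np

def silhouette_widths(X, labels):
    """Silhouette width s(i) = (b(i) - a(i)) / max(a(i), b(i)) per object,
    where a(i) is the mean distance to the object's own cluster and b(i)
    the mean distance to the nearest other cluster."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    s = np.empty(n)
    for i in range(n):
        own = labels == labels[i]
        a = D[i, own & (np.arange(n) != i)].mean() if own.sum() > 1 else 0.0
        b = min(D[i, labels == c].mean() for c in set(labels) - {labels[i]})
        s[i] = 0.0 if max(a, b) == 0 else (b - a) / max(a, b)
    return s

# Two well-separated toy clusters: the average silhouette is close to 1,
# while a scattered noise cluster would score near 0 or below.
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
labels = [0, 0, 0, 1, 1, 1]
avg = silhouette_widths(X, labels).mean()
```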
The silhouettes are obtained for the reference data for the same cluster number k0 as follows . The noise level varies from 0 to 1.0, in increments of 0.10. In spectral analysis, particularly in fast Fourier transformations, the length of a signal is usually a power of two. We consider 64 time points that equally partition the sample space. Signal decomposition is influenced by many parameters/factors, including frequency, frequency difference , time length of signal, ratio of the amplitude of the components, and noise level. Because of the limitation of displaying multiple factor effects simultaneously, a power study of signal decomposition on two-component signals is performed for investigating the effect of frequency, frequency difference, amplitude ratio, and noise. Additional simulation studies can be found in An's paper . In general, \u2211i=1qui is the total time duration of the q true components, \u2211j=1mvj is the total time duration of the m estimated components, and \u2211k=1cwk is the total time duration of the c overlap components between the true and estimated components. For each parameter combination, the overall power of the signal decomposition is the average power for decomposing 1000 signals. The power of decomposing each gene expression time series is defined as The performance of the dynamic clustering approach coupled with the proposed validation method is investigated using the discovery index, which reflects the effect of both the signal decomposition and the clustering. It is a considerable challenge to display the discovery index for a set of data across all possible combinations of parameters. Fortunately, it is possible to assess the effect due to noise in the discovery index while holding the other parameter settings fixed. The performance of the dynamic clustering is assessed via fixed parameters over increasing noise. Further, the effect of time, when other parameter settings are fixed, is also assessed.
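The duration sums used in the power definition above (total true, total estimated, and total overlap durations) can be computed as in this sketch; the example intervals and the normalization by the total true duration are assumptions for illustration, since the exact power formula is not recoverable from the text:

```python
def total_length(intervals):
    """Sum of interval durations, intervals given as (start, end) pairs."""
    return sum(e - s for s, e in intervals)

def overlap_length(true_ivs, est_ivs):
    """Total duration over which true and estimated components coincide."""
    return sum(max(0.0, min(te, ee) - max(ts, es))
               for ts, te in true_ivs
               for es, ee in est_ivs)

# Hypothetical components on a 64-point grid: the true component spans
# [0, 40] and [50, 64]; the decomposition recovered [5, 40] and [50, 60].
true_ivs = [(0.0, 40.0), (50.0, 64.0)]
est_ivs = [(5.0, 40.0), (50.0, 60.0)]

# One plausible normalization (an assumption, not the paper's formula):
# overlap duration divided by total true duration.
power = overlap_length(true_ivs, est_ivs) / total_length(true_ivs)
```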
Relying on the previous simulation, we use 140 simulated genes to illustrate the dynamic nature of the simulated time series. t and the phase shifts \u03c6j\u2009\u2009 are simulated as in the previous power study. There are 20, 40, and 80 genes in groups 1, 2, and 3, respectively. Each gene expression profile contains two components whose frequencies are 0.1\u2009Hz, 0.4\u2009Hz, and 0.8\u2009Hz. Each pair of genes from different groups shares one common component. Sixty-four time points equally partition the sample space . Noise level varies at 0.5, 1, and 1.5. Time series expression data for 140 genes in three groups are simulated as follows:The discovery index is calculated for each scenario, and the average discovery index-calculated for each of the 1000 simulated data sets . Figure The components in the previous simulation are simulated under va Three significant clusters are detected by cluster validation. The dynamic property of the clusters is displayed in Although we relied on the 140 genes from our earlier simulation to demonstrate the performance of the proposed method, it is worth noting that performance improves as the number of genes increases. Specifically, increasing gene number allows our algorithm to more accurately identify the noise cluster, thus separating the gene cluster(s) of interest more precisely. While our simulation studies demonstrate that our approach is able to both capture meaningful signals from very noisy data and group them very well, we cannot compare our method with existing methods simply bPlasmodium falciparum) [Plasmodium falciparum expression data from [ Many microarray experiments have been conducted for the purpose of understanding complex dynamic biological processes and gene function of cell cycle \u201340. Applata from . Plasmodium falciparum has a complex life cycle. Its genome is sequenced and has over 5,000 cell-cycle genes; 530 of them are annotated into 14 functional groups [P. 
falciparum data focus on either detecting periodic genes [http://dx.doi.org/10.1371/journal.pbio.0000005.st002. The missing data (for time points 23 and 29) are imputed by the k-nearest neighbor algorithm [k = 12. Between the mosquito vector and human host, l groups . A more ic genes , 42 or sic genes , 44. Dynlgorithm with k = Using the continuous wavelet transformation and ridge extraction, 530 time series gene expression profiles are decomposed into a set of 1,019 component signals whose frequencies are centralized for the purpose of calculation. Hierarchical clustering using CoCo similarity is employed. Two significant clusters and one noise cluster are detected . The numIn the signal decomposition, the phase or phase shift can be obtained for each component. P. falciparum data can be summarized in terms of the number of genes at each time point , the number of genes in a cluster may vary across time. The dynamic property of clusters from the me point . A nonpaP. falciparum data have not been studied ; therefore, the expression pattern in As illustrated in e \u2212 19) for the genes appearing in Cluster 1. The genes involved in both clusters with significant GO terms are listed in We employed the web-accessible programs DAVID andThere are quite a few genes involved in multiple (independent) processes. For example, we find that the processes \u201camino acid activation\u201d and \u201ctRNA metabolic process\u201d share 19 genes in common, while these two processes have no ancestor-child relationship. There is only one gene in Cluster 2 that is not in Cluster 1. This gene is involved in \u201cglycolysis.\u201d As a point of future research, we noticed that phase information is often used to cluster cell cycle gene expression profiles , 44, 51. When the application is gene expression, methods from signal processing have proven successful in decomposing the nature of complicated time series that contain multiple component signals.
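The k-nearest neighbor imputation mentioned above can be sketched as follows; this is a simplified version of the KNNimpute idea (gene-wise Euclidean distance over commonly observed columns), not the exact implementation used for the P. falciparum data:

```python
import math

def knn_impute(rows, k=12):
    """Fill None entries of each row with the average of the k most
    similar rows (Euclidean distance over commonly observed columns)
    that have a value in the missing column."""
    filled = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is not None:
                continue
            cands = []
            for i2, other in enumerate(rows):
                if i2 == i or other[j] is None:
                    continue
                common = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if common:
                    d = math.sqrt(sum((a - b) ** 2 for a, b in common))
                    cands.append((d, other[j]))
            nearest = sorted(cands)[:k]
            if nearest:
                filled[i][j] = sum(v2 for _, v2 in nearest) / len(nearest)
    return filled

# Toy profiles: gene 0 is missing its third time point; its two nearest
# neighbors suggest a value near 3.
rows = [[1.0, 2.0, None, 4.0],
        [1.1, 2.1, 3.1, 4.1],
        [0.9, 1.9, 2.9, 3.9],
        [9.0, 9.0, 9.0, 9.0]]
out = knn_impute(rows, k=2)
```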
A two-step cluster validation is proposed to statistically determine the optimal number of clusters and to select the statistically significant clusters. To our knowledge, there are no clustering approaches that provide unique gene sets at different time points . A simulation study demonstrates the benefits of our approach by showing that it is able to capture meaningful signals and separate them, even for very noisy data. Finally, we understand and acknowledge that it would be useful and encouraging to compare our method with other existing methods. Unfortunately, this is not possible simply because no information about time is contained in the clusters that are obtained by other methods. Time information is the critical component for both determining the clusters obtained by our method and calculating the \u201cdynamic index\u201d formula which measures the clustering performance.The proposed method focuses on clustering periodic time series by considering the spectral frequencies that are decomposed and extracted from periodic data. Beyond the spectral frequency, phase information is obtained as well in the signal decomposition. In fact, clustering only the spectral frequency of time series may not be sufficient to understand very complicated biological processes. In other words, even though two genes involved in the same biological process have the same spectral frequencies, they may play different roles in the process. For example, one gene may serve as a regulator for the other. Studying the phase relationship between genes may help understand such regulation and is a point of future research. Further, consider two genes that participate in different phases of a cellular process. A phase study is certainly necessary after clustering the spectral frequencies. 
It is worth noting that we cannot change the order of studies, since genes with different spectral frequency must belong to different biological processes, and genes with different phases may or may not belong to the same process. Therefore, within each cluster of spectral frequency, genes can be subclustered according to their component phases so that the gene relationship may be revealed in greater detail . Genes involved in multiple biological processes (simultaneously) may play a major role in one process while playing a minor role in another process. The importance of a gene in multiple processes has potential for further investigation. Since the energy of a time series is proportional to its amplitude squared, the importance of a gene in a process can be measured using the squared amplitude of its corresponding component. Based on this, and due to the fact that genes may participate in different processes at different time, the dynamic importance of genes in biological processes can be established. As a point of future research, if three features of periodic time series, namely, spectral frequency, phase shift, and amplitude, are all included, as well as the time information of components, the complex dynamic biological processes may be better understood .Because the proposed dynamic clustering process is time dependent, an appreciation for the number of time points that are recommended for the method is a necessary discussion. Since the main feature of a periodic time series is spectral frequency, frequency detection is highly reliant on the sampling rate. Therefore, the minimal or recommended number of time points is related to the nature of biological processes/clusters in which the genes are involved. According to the Sampling Theorem , a signaFinally, since spectral frequencies are extracted from periodic time series, the time points occur at equally spaced intervals. 
For periodic time series with unevenly spaced points, evenly spaced time points can be artificially created by imputing missing data. Thus, our proposed approach is applicable, but some information may be lost due to data imputation. Further, since the proposed two-stage approach is designed for periodic data, for data that are not periodic, the signal decomposition approach is not applicable in the data preparation step. However, if some other characteristic can be defined and extracted from the nonperiodic time series, the second step of the proposed approach remains applicable. Dynamic clustering is a two-step cluster validation that is able to differentiate meaningful clusters from noisy clusters. The results from our approach provide insight into the dynamic association among time-limited coexpressed genes that might otherwise go undetected by current clustering approaches. Clustering and gene network inference are both known to help in predicting the biological functions of genes or unraveling the mechanisms involved in biological processes. Usually clustering and gene network methods are developed independently. Specifically, in the gene networks, the challenge is to deal with a large number of genes, but when clustering, the clusters are assumed independent. In actuality, gene network procedures and gene clustering procedures cover each other's shortcomings . As such"}
+{"text": "Motivation: Gene expression profiling using RNA-seq is a powerful technique for screening RNA species\u2019 landscapes and their dynamics in an unbiased way. While several advanced methods exist for differential expression analysis of RNA-seq data, proper tools to analyze RNA-seq time-course data have not been proposed.Results: In this study, we use RNA-seq to measure gene expression during the early human T helper 17 (Th17) cell differentiation and T-cell activation (Th0). To quantify Th17-specific gene expression dynamics, we present a novel statistical methodology, DyNB, for analyzing time-course RNA-seq data. We use non-parametric Gaussian processes to model temporal correlation in gene expression and combine that with a negative binomial likelihood for the count data. To account for experiment-specific biases in gene expression dynamics, such as differences in cell differentiation efficiencies, we propose a method to rescale the dynamics between replicated measurements. We develop an MCMC sampling method to make inference of differential expression dynamics between conditions. DyNB identifies several known and novel genes involved in Th17 differentiation. Analysis of differentiation efficiencies revealed consistent patterns in gene expression dynamics between different cultures. We use qRT-PCR to validate differential expression and differentiation efficiencies for selected genes. Comparison of the results with those obtained via traditional timepoint-wise analysis shows that time-course analysis together with time rescaling between cultures identifies differentially expressed genes which would not otherwise be detected.Availability: An implementation of the proposed computational methods will be available at http://research.ics.aalto.fi/csb/software/Contact:tarmo.aijo@aalto.fi or harri.lahdesmaki@aalto.fiSupplementary information:Supplementary data are available at Bioinformatics online. 
To quantify expressions of known genes, a common approach is to count the reads which are aligned to different genes. The discrete nature of count data led researchers to model the sequencing data using a Poisson distribution cell lineage has been a focus of great research interest r is a predefined number of failures and the probability of success is p. We will parameterize the negative binomial distribution with mean p as a function of \u03bc and rM replicates Let us write the mean of the negative binomial distribution as a function of a random process i-th replicate, which can be achieved by constraining We also consider a situation where we possess a priori knowledge that the biological replicates are differentiating in different time scales. In this study, we assume that the different time scales between biological replicates can be modeled as Supplementary Figure S1 using the plate notation.The statistical dependencies of the variables in our model are depicted in The variance for the negative binomial distribution is estimated using the approach described in f.To account for different sequence depths of the samples , we make the read counts between different RNA-seq runs comparable by scaling factors, which are estimated using the procedure presented in F. In this case, the integral in f, k and \u03b8By using the likelihood in Require: Initialize: for i = 0 to N \u2013 1 do Sample: Sample: Sample: Sample: if then else end if end for For the integration in 2, i.e. 1 is set empirically to account for large differences in gene expression counts y between low- and high-expressed genes . Again, in defining the mean m and \u03c31, we should take into account the large range of different read count magnitudes; thus they are defined separately for each of the genes.
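The mean-dispersion parameterization of the negative binomial distribution referred to above (mean mu, a predefined number of failures r, giving variance mu + mu^2/r) can be sketched as follows; the numerical values are illustrative:

```python
from math import lgamma, log, exp

def nb_logpmf(y, mu, r):
    """Log-probability of count y under a negative binomial distribution
    parameterized by its mean mu and dispersion r (number of failures),
    so that the variance equals mu + mu**2 / r."""
    return (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
            + r * log(r / (r + mu)) + y * log(mu / (r + mu)))

mu, r = 50.0, 5.0
ys = range(2000)
probs = [exp(nb_logpmf(y, mu, r)) for y in ys]
total = sum(probs)                             # should be ~1
mean = sum(y * p for y, p in zip(ys, probs))   # should be ~mu
```

As r grows the extra-Poisson variance term mu^2/r vanishes and the distribution approaches a Poisson with the same mean, which is why the negative binomial is the usual choice for overdispersed count data.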
The mean vector is defined as We assign the uniform prior distribution for the hyperparameter \u03b8 scaling A. The pak is a uniform distribution, where the probabilities for the three allowed transitions, i.e. +4 h, \u22124 h and 0 h, are 1/3. For the proposal distribution K is defined by the inputs X and the hyperparameters In our implementation, we use a truncated normal distribution as the proposal distribution Using the accepted samples k whose posterior we also get directly from the MCMC chain. Moreover, the estimated marginal likelihoods are used for model selection purposes as we will see in the next section. The convergence of the chains was assessed using the potential scale reduction factors as described in Supplementary Figure S2.Another variable whose posterior distribution we are interested in is In this study we want to answer the question whether a gene is differentially expressed between different conditions, namely Th0 and Th17 lineages, and we assume we have replicated time series measurements from these two lineages. From now on we consider only two conditions but the same methodology can be easily generalized for any number of conditions of the specific Th0 and Th17 models , respectively. We constrained the effects of scaling to be discrete, i.e. from \u221232 to +32 h at the end of the time series (72 h) in 4 h steps. To demonstrate the method's applicability for estimating differentiation efficiencies, we carried out a simulation study. Using IL17A as a template profile, we generated two time series (2nd and 3rd replicate) with a similar behavior and a third one (the first replicate) which is a delayed version of the two, i.e. the timepoint 72 h corresponds to 48 h.
The method correctly inferred that the first replicate is delayed compared with the other two replicates as depicted in To study variable differentiation efficiencies in IL17 genes in an unbiased manner, we repeated the analysis but now taking into account the possibility of different time scales between the replicates. The model with time scaling allows the samples to be decelerated/accelerated relative to each other, so that their scaled behavior is similar. We fixed the time scale of the second sample and allowed the other two samples to be accelerated or decelerated independently of each other using the transformation IL17A, IL17F and RORC are depicted in IL17A is delayed over 24 h at 72 h in the first Th17 sample. As expected, uncertainty of the estimates, especially at the end of the time series, increases due to the time scaling. For the marker genes IL17A and IL17F, however, we notice that the time scaling is able to improve the model fit. To validate our observation of different time scaling, we performed a kinetic assay of IL17A, IL17F and RORC mRNA levels throughout the early Th17 differentiation using qRT-PCR in the same biological samples as the RNA-seq (Supplementary Table S1). Note that because time scaling (i.e. differentiation efficiency) is a replicate-specific random effect we need to use the same samples for qRT-PCR validation. These assays confirmed our conclusions: expression of IL17F and IL17A was delayed in the first and third series, while expression of RORC behaved similarly across the samples.The results with time scaling for the marker genes Next we wanted to confirm the presence of different time scaling by studying the posterior distribution of the time scaling genome-wide by repeating the analysis for all expressed genes (i.e. at least one read in Th0 and Th17 samples). To detect differentially expressed genes between the Th0 and Th17 lineages, we used the following criteria: (i) BF > 10, i.e. strong evidence for P-values).
In this case, 500 genes overlap between temporal and timepoint-wise analysis. The overall agreement between the two methods is demonstrated by the hypergeometric test of gene set overlap (P < 1 e\u221216).In order to study advantages and disadvantages of our temporal analysis, we carried out a differential expression analysis at the individual timepoints using DESeq tool for comparison purposes. For each timepoint we call a gene differentially expressed if multiple testing corrected (Benjamini\u2013Hochberg method) Next we wanted to see how the overlap between temporal and timepoint-wise analysis changes when we consider separately the top 698 genes that are identified by DESeq exactly at one, two, three, or four timepoints. The number of genes belonging to each class is shown in Supplementary Fig. S3A), which closely resembles the posterior distribution of time-scaling parameters obtained from the application of DyNB showed the weakest level of agreement with the other methods as depicted in Supplementary Figure S3C.We also compared DyNB (with and without the informative delay prior) with the next-maSigPro . InteresISG20 has similar behavior as the IL17A gene, i.e. it is induced between the last two timepoints (48 and 72 h) but the activation is delayed in the first replicate. ISG20 has been reported to have a role in Th17 cells (Supplementary Figure S4 shows two representative genes, KIF11 and MAP1B, which are detected by the timepoint-wise analysis, but not by the temporal analysis implemented in DyNB. 
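The hypergeometric test of gene-set overlap used above can be sketched as follows; the universe size and the list sizes in the example are hypothetical choices for illustration, not the exact counts from this study:

```python
from math import comb

def hypergeom_overlap_pvalue(N, n1, n2, k):
    """One-sided hypergeometric test: P(X >= k), where X is the overlap
    of two gene sets of sizes n1 and n2 drawn from a universe of N genes."""
    upper = min(n1, n2)
    num = sum(comb(n1, x) * comb(N - n1, n2 - x)
              for x in range(k, upper + 1))
    return num / comb(N, n2)

# Hypothetical numbers: 500 shared genes between lists of 698 and 900
# genes out of a 20000-gene universe; the expected overlap by chance is
# only ~31, so the p-value is vanishingly small.
p = hypergeom_overlap_pvalue(20000, 698, 900, 500)
```

Exact integer arithmetic keeps the tail sum stable even for large universes, which is preferable to floating-point factorials for this kind of enrichment test.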
Temporal analysis together with the possibility to account for variable differentiation efficiencies can filter out those genes for which the replicated Th0 and Th17 profiles are seemingly similar and thus likely false positives.Three representative examples detected by DyNB, but not identified by DESeq from timepoint-wise analysis with the aforementioned criteria, are shown in We presented the first statistical method, DyNB, to study RNA-seq dynamics together with a method to correct for, or detect, different time scales between RNA-seq time-series datasets. DyNB is compared with a commonly used method, DESeq that relies on the same statistical assumptions but analyzes data from each timepoints separately and, therefore, ignores correlations between timepoints. As expected, the comparison showed that the agreement between the methods is high but at the same time temporal modeling approach has some benefits. The most notable advantage is the possibility to take into account different differentiation efficiencies between biological replicates. Indeed, many experimental systems in cell development and differentiation display subtle kinetic differences between replicates, which are not necessarily apparent until large-scale transcriptomics data are obtained. This method might critically help improve the interpretation of such experiments. Concerning future improvements, the proposed straightforward MCMC sampling scheme might lead to inefficient sampling if more parameters are marginalized. In those cases, sampling could be improved by using more elegant samplers, such as elliptical sampling (Our results show that a temporal analysis can bring insights into analysis of differentiation processes and help in the analysis of time-series datasets. We demonstrated applicability of DyNB by applying it to time series RNA-seq data from Th17 and Th0 lineages and identified novel Th17-specific genes. 
We used qRT-PCR to validate our computational predictions of sample-specific time scales. For example, by taking into account differences in differentiation efficiencies, we can identify a more complete set of differentially expressed genes. In turn, this improves our ability to discern subtle changes in regulatory pathways and broaden the scope of targets available for intervention."}
+{"text": "Mycobacterium tuberculosis (Mtb) that affects millions of people worldwide. The majority of individuals who are exposed to Mtb develop latent infections, in which an immunological response to Mtb antigens is present but there is no clinical evidence of disease. Because currently available tests cannot differentiate latent individuals who are at low risk from those who are highly susceptible to developing active disease, there is considerable interest in the identification of diagnostic biomarkers that can predict reactivation of latent TB. We present results from our analysis of a controlled longitudinal experiment in which a group of rhesus macaques were exposed to a low dose of Mtb to study their progression to latent infection or active disease. Subsets of the animals were then euthanized at scheduled time points, and granulomas taken from their lungs were assayed for gene expression using microarrays. The clinical profiles associated with the animals following Mtb exposure revealed considerable variability, and we developed models for the disease trajectory for each subject using a Bayesian hierarchical B-spline approach. Disease severity estimates were derived from these fitted curves and included as covariates in linear models to identify genes significantly associated with disease progression. Our results demonstrate that the incorporation of clinical data increases the value of information extracted from the expression profiles and contributes to the identification of predictive biomarkers for TB susceptibility.Tuberculosis (TB) is an infectious disease caused by the bacteria Mycobacterium tuberculosis (Mtb) that affects millions of people worldwide. While a small fraction of individuals who are exposed to Mtb either completely clear the infection or develop active disease, the majority enter a state of latency, in which an immunological response to Mtb antigens is present but there is no clinical evidence of infection. 
However, there is not a clear delineation between latent and active infection, with levels of bacterial activity encompassing a latency spectrum that varies considerably among individuals. One way to study the course of Mtb infection from initial exposure to latency is through controlled time-course experiments. Non-human primates (NHPs), such as rhesus or cynomolgus macaques, display a continuum of TB infection which cannot be reproduced in other animal models. We use piecewise polynomials to estimate the trajectory of the unobserved disease progression (de Boor). We restrict z(t) to the family of non-negative cubic polynomials on [0, \u03c41] and [\u03c41, \u03c42], where 0 < \u03c41 < \u03c42, with continuous second derivatives globally on [0, \u03c42]. Then z(t) can be expressed as a linear combination of a set of B-spline basis functions, z(t) = c0B0,3(t) + c1B1,3(t) + c2B2,3(t) + c3B3,3(t) + c4B4,3(t), where {c0, c1, c2, c3, c4} is a set of non-negative coefficients and {B0,3(t), B1,3(t), B2,3(t), B3,3(t), B4,3(t)} is the set of cubic B-spline basis functions. For each subject, \u03c42 is the duration of time from exposure to euthanasia and is known, while \u03c41 is an interior transition point (or \u201cknot\u201d) that is estimated. We use the notation z(t|\u03b8), where \u03b8 = (c0, c1, c2, c3, c4, \u03c41). The four variables WT, TMP, CRP, and CXR were measured longitudinally for the ns = 17 experimental subjects. To reduce the high degree of fluctuation in CRP measurements, we discretized the observations to fall into one of three ordered groups based on previous clinical observations: Level 1 for CRP = 0, Level 2 for 0 < CRP \u2264 10, and Level 3 for CRP > 10. Let yvil be the value of the vth variable for the ith subject at the time point tvil, where v = 1, 2, 3, 4, i = 1, 2, \u00b7 \u00b7 \u00b7, ns, and l = 1, 2, \u00b7 \u00b7 \u00b7, nvi indexes the time points observed for the corresponding subject and variable. At the initial time of Mtb exposure for each subject, we set yi0 = {100, 0, 0, 0}, denoting the baseline state for each subject i. 
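The clamped cubic B-spline construction above, with one interior knot tau1 on [0, tau2] and five basis functions, can be sketched numerically with SciPy. The coefficient and knot values below are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np
from scipy.interpolate import BSpline

def disease_trajectory(coeffs, tau1, tau2):
    """Build z(t) = sum_j c_j B_{j,3}(t) as a clamped cubic B-spline with
    one interior knot tau1 on [0, tau2]; five coefficients -> five basis
    functions, so the knot vector has length 5 + 3 + 1 = 9."""
    assert 0 < tau1 < tau2
    knots = np.array([0, 0, 0, 0, tau1, tau2, tau2, tau2, tau2], float)
    return BSpline(knots, np.asarray(coeffs, float), k=3)

# c0 = 0 enforces the pre-exposure disease state z(0) = 0;
# the remaining non-negative coefficients are hypothetical.
z = disease_trajectory([0.0, 0.5, 1.2, 1.8, 2.0], tau1=8.0, tau2=20.0)
print(float(z(0.0)), float(z(10.0)))
```

Because the left boundary knot is repeated four times, z(0) equals c0 exactly, which is why fixing c0 = 0 encodes the baseline disease state.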
Let Y = {yvil} be the array of all observations for all subjects. Let \u03b8i = (ci0, ci1, ci2, ci3, ci4, \u03c4i1) be the parameter vector for subject i. The disease progression function for the ith subject, zi(t) = z(t|\u03b8i), is defined on the interval [0, \u03c4i2], in which \u03c4i2 is set to be the final time point, corresponding to the last measurement for any of the four clinical covariates. We assume that the pre-exposure disease state z(0|\u03b8i) = 0, so we set ci0 = 0. Therefore, among the components of \u03b8i, we need only estimate ci1, ci2, ci3, ci4, \u03c4i1. We define a hierarchical model for the relationship between the observed clinical variables and the unobserved disease state, i = 1, 2, \u2026, ns: in Equations (1) and (2), the WT and TMP observations yvil, v = 1, 2, are normally distributed and independent for each subject and variable, while Equations (3) and (4) are proportional odds models for the ordinal CRP and CXR levels. We note that the slopes and the spline coefficients are not individually identifiable in our model since they only appear as a product. However, this is simply a matter of scaling and does not impact our ability to compare estimated disease trajectories across subjects. The intercepts b10, b20, \u03b20, \u03b21, \u03b10, \u03b11, \u03b12 in Equations (1\u20134) determine the distribution of the corresponding clinical variables given the disease state is 0. Since initial values for WT and TMP were constant for all subjects at 100 and 0, respectively, the intercept terms b10, b20 were also fixed. Intercepts for the CRP and CXR probabilities at disease state 0 were also set to reflect the fact that pre-exposure measurements for these variables were fixed at 0 for all subjects. Values for the vector {\u03b20, \u03b21, \u03b10, \u03b11, \u03b12} were set to be {5, 9, 5, 9, 13}, corresponding to initial state probabilities concentrated at the baseline levels. The remaining parameters are {b, \u03c32, \u03b81, \u00b7 \u00b7 \u00b7, \u03b8ns}, where b = (b11, b21, b31, b41), \u03c32 = (\u03c321, \u03c322), and \u03b8i = (ci1, ci2, ci3, ci4, \u03c4i1) for i = 1, 2, \u00b7 \u00b7 \u00b7, ns. Equations (1\u20134) specify that the WT and TMP values (v = 1, 2) are normally distributed while the CRP and CXR values (v = 3, 4) have multinomial distributions. 
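A minimal simulation of this observation model is sketched below. Only the fixed intercepts (b10 = 100, b20 = 0) and the CRP cut-points (beta0 = 5, beta1 = 9) come from the text; the slope and variance values, and the exact linear form of the means, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
expit = lambda x: 1.0 / (1.0 + np.exp(-x))  # inverse logit

def sample_clinical(z, b11=-0.5, b21=0.3, s1=2.0, s2=0.3,
                    beta=(5.0, 9.0), b31=1.0):
    """Draw one set of clinical observations given disease state z."""
    wt = rng.normal(100 + b11 * z, s1)   # body weight index, Eq. (1)
    tmp = rng.normal(0 + b21 * z, s2)    # temperature change, Eq. (2)
    # proportional-odds model for the 3-level CRP variable, Eq. (3):
    # P(CRP <= level k | z) = expit(beta_k - b31 * z)
    cum = np.array([expit(b - b31 * z) for b in beta] + [1.0])
    probs = np.diff(np.concatenate([[0.0], cum]))
    crp = rng.choice([1, 2, 3], p=probs)
    return wt, tmp, crp, probs

wt, tmp, crp, probs = sample_clinical(z=0.0)
# at z = 0, P(CRP level 1) = expit(5) ~ 0.993, consistent with the
# near-certain baseline state described in the text
```

Note how the cut-point value 5 makes the baseline level almost certain at disease state 0, which matches the stated rationale for fixing those intercepts.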
Using the conditional independence of the clinical variables, the joint likelihood function of the parameters, L, is just the product of the marginal likelihoods. We set prior distributions for the remaining parameters to reflect our biological intuition and imposed boundary constraints to account for the lack of identifiability associated with our model. As we expect that increasing disease severity should be associated with weight loss, elevated body temperature, and/or higher levels of CRP and CXR, we use truncated diffuse normal priors NI[a,b] on the slopes b11, b21, b31, and b41, where NI[a,b] denotes a normal distribution restricted to the interval [a, b]. These priors, which correspond to the scaled normal distribution with support on the interval between the mean and one standard deviation to its right or left, place higher weight on values close to 0 but are not highly restrictive. For the variance parameters, prior distributions were motivated by preliminary inspection of the observed variability in the data and bounds were set to constrain the estimates to a reasonable range of values: 1/\u03c321 and 1/\u03c322 were given uniform priors. Priors for the basis function parameters were defined by a uniform prior on \u03c4i1 and cij ~ NI[0,+\u221e) for j \u2208 {1, 2, 3, 4}. We estimated the parameters by sampling from the posterior distribution using MCMC as implemented in WinBUGS, retaining T = 5000 points from the posterior distribution. To assess the method, we considered four representative disease trajectories: (1) progression to severe illness; (2) asymptomatic infection; (3) recovery following acute illness; and (4) gradual progression to moderate illness. Given a specified disease trajectory, we generated 100 simulated datasets of weekly clinical observations according to Equations (1\u20134). At each time point, WT and TMP observations were randomly generated from the normal distributions determined by Equations (1) and (2), and the categorical CRP and CXR observations were taken to be the level with the maximum probability determined by Equations (3) and (4). We set the intercepts to the fixed values described above. 
Due to the large number of samples in our study, we reduced our computational requirements by using a burn-in period of 50 iterations and then retaining every third sample of the following 1000 iterations. For each simulated dataset, we sampled from the posterior distributions of the parameters (ci1, ci2, ci3, ci4, \u03c4i1) via MCMC. The empirical median of the posterior distribution for each parameter was chosen as its estimate, and these were then used to generate the estimated trajectories of the disease state for the four cases. Our two-channel microarray data included expression levels of 44,449 gene probes for 17 subjects, arranged in a loop design that included other control samples. We analyzed the two channels individually and normalized the arrays using the Bioconductor packages LIMMA and OLIN (Smyth). To focus our attention on genes that would likely be associated with disease progression, we removed probes that showed insufficient variability across the subjects. Letting ugi denote the log2 expression level for the gth gene probe of the ith subject, we used the following filtering criterion: exclude the gth gene probe from the analysis if (maxi ugi \u2212 mini ugi) < 1.5. In this setting, we fit gene-wise models as described in Smyth. Using the J clinical features Z1, Z2, \u00b7 \u00b7 \u00b7, ZJ as predictors of the expression profiles for the G probes remaining after the filtering step, we fit models of the form ug = X\u03b2g + \u03b5g for g = 1, 2, \u00b7 \u00b7 \u00b7, G, where ug = (ug1, \u00b7 \u00b7 \u00b7, ugS)T \u2208 \u211dS, X is the S \u00d7 (J + 1) design matrix, and \u03b2g = (\u03bcg, \u03b1g1, \u00b7 \u00b7 \u00b7, \u03b1gJ)T. The intercept \u03bcg is the mean expression level for the gth probe across the subjects and \u03b1g1, \u03b1g2, \u00b7 \u00b7 \u00b7, \u03b1gJ are the slopes of the J clinical features for the gth probe. A probe was considered significantly associated with the covariate combination Z1, Z2, \u00b7 \u00b7 \u00b7, ZJ if the p-values of the individual t-tests for the J covariates were all less than 0.05 and the p-value of the overall model F test of H0: \u03b1g1 = \u03b1g2 = \u00b7 \u00b7 \u00b7 = \u03b1gJ = 0 was less than 0.01. 
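The filtering rule and the per-probe linear model can be sketched with plain least squares on simulated data (the study itself uses limma's moderated gene-wise models; all dimensions and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
S, G, J = 17, 200, 2                    # subjects, probes, clinical features
U = rng.normal(8, 1.2, size=(G, S))     # log2 expression, probes x subjects
Z = rng.normal(size=(S, J))             # clinical covariates per subject

# Filtering: drop probe g if max_i u_gi - min_i u_gi < 1.5
keep = (U.max(axis=1) - U.min(axis=1)) >= 1.5
U = U[keep]

# Per-probe model u_g = X beta_g + eps with design X = [1, Z],
# solved for all retained probes at once.
X = np.column_stack([np.ones(S), Z])            # S x (J + 1)
beta, *_ = np.linalg.lstsq(X, U.T, rcond=None)  # (J + 1) x n_kept
print(beta.shape)
```

Each column of `beta` holds the intercept (mean expression) and the J covariate slopes for one retained probe, mirroring the role of mu_g and alpha_g1, ..., alpha_gJ above.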
Given a set of competing models, the model with the smallest overall p-value for which all coefficients met the \u03b1 = 0.05 significance level was selected as the best model for that probe. Probe-level results were aggregated to the set of unique Agilent probe IDs. Because pairs of replicate probes occasionally were best fit by models with differing parameterizations, the statistical significance of the pooled coefficients was calculated by taking the geometric mean of the observed p-values, and we reset any model coefficients that were not associated with aggregate p-values of 0.05 or less to be equal to 0. Hierarchical clustering was performed in R to classify the unique probe IDs with respect to the scaled matrix of fitted model coefficients, using the Euclidean distance and Ward's minimum variance method. Agilent probe IDs that were significantly associated with a fitted clinical model (aggregate p-value of 0.05 or less) were mapped to annotated genes or genomic regions using the DAVID Gene ID conversion tool. In our simulation study, zi(t) is the true trajectory and \u1e91i(t) is any one of the 100 estimated trajectories for the ith case, where i = 1, 2, 3, 4. We expect that there exists a scaling constant \u03b3 such that \u03b3 zi(t) \u2248 \u1e91i(t) for all i, and the right panel of the figure plots \u1e91i(t) against the scaled true trajectory \u03b3 zi(t). 
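The clustering step (column-scaled coefficients, Euclidean distance, Ward's minimum variance method, cut into 6 clusters) corresponds to the following sketch; the coefficient matrix here is random stand-in data, not the fitted coefficients:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
coefs = rng.normal(size=(50, 3))                 # fitted coefficients per probe ID
scaled = (coefs - coefs.mean(0)) / coefs.std(0)  # column-wise scaling

# Ward's minimum variance method on Euclidean distances
Zlink = linkage(scaled, method="ward", metric="euclidean")

# cut the dendrogram into (at most) 6 clusters, as chosen by inspection
labels = fcluster(Zlink, t=6, criterion="maxclust")
print(labels.shape)
```

In the study the rows would be the 4864 unique probe IDs and the columns the estimated model coefficients; `labels` then assigns each probe ID to one of the six clusters discussed below.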
The results from our simulation study are shown in the figure. In each case, our simulations faithfully recovered the underlying trajectory, thereby demonstrating the ability of our approach to accurately estimate model parameters in practice. For the experimental subjects, we summarized each fitted trajectory by four features, shown in the left panel of the figure: total duration T is defined by the time from initial exposure to euthanasia, final severity S is the value from the fitted disease trajectory at the final time point, onset time O is the time when the subject initially reaches a specified severity level (which here was taken as 1.4 based on estimated severity scores associated with symptomatic illness), and the maximum severity M is the highest value attained for a given trajectory. While all parameters were statistically significant, the estimated trajectories, as shown in the right panel of the figure, varied considerably across subjects. After preliminary filtering, 14,504 candidate gene probes were retained for further investigation. Not surprisingly, preliminary analysis indicated that the most useful predictors in the table were the total duration T and final severity S. For each of these probes, the best-fitting regression model was chosen from among models containing all subsets of linear and quadratic terms for T as well as a linear term for S. A total of 9130 probes were significantly associated with at least one of these models, and 8453 of these corresponded to annotated genes or genomic regions as identified by the DAVID Gene ID conversion tool. The probe level results were then aggregated to represent a set of 4864 unique Agilent probe IDs. As shown in the table, the majority of these were associated with time T, with the remaining 7% associated with either severity S alone or with both S and T. Hierarchical clustering was performed on the scaled set of estimated model parameters to determine subsets of probe IDs that had the most similar characteristics with respect to their fitted models. Following visual inspection, we determined that 6 clusters were sufficient to adequately classify the results, as shown in the figure. 
The largest cluster (which we denote Cluster 1) contained 2165 probes which mapped to 1950 distinct gene IDs. These expression profiles were predominantly characterized by quadratic increases in gene expression over time T (with linear increases in the remaining few). Cluster 2 contained 102 probes that displayed a parabolic trend, initially increasing and then decreasing in expression over time. These mapped to 100 distinct gene IDs. Cluster 3 included probes whose expression significantly increased as a function of severity score S. Approximately half of these probes were also significantly associated with time T, either as a quadratic or linear term. Cluster 4 was the second largest and exclusively included probes whose expression decreased as a quadratic function of time T. Cluster 5 was the smallest of the clusters, containing 74 probes mapping to 73 unique gene IDs. Expression profiles for probes in this cluster were all negatively associated with disease severity S, and the majority of these were also significantly associated with time T. Finally, Cluster 6 included probes whose expression decreased linearly as a function of T. The set of all unique gene IDs included in the six clusters was imported into the DAVID bioinformatics tool suite and analyzed for functional annotation. We found that this subset was significantly enriched for 181 GO Biological Process (BP) terms, 50 Cellular Component (CC) terms, 7 GO Molecular Function (MF) terms, 8 KEGG Pathways, 77 Swiss-Prot Protein Information Resource (SP-PIR) Keywords (www.uniprot.org/docs/keywlist), and 5 UniProt Sequence Annotation (UP-SEQ) Features. For example, as shown in the figure, one keyword enrichment was significant at p = 0.005, while 14 of the 15 gene IDs associated with the keyword \u201cUbiquinone\u201d were contained in Cluster 4. 
To avoid redundancy, we present results for the 56 statistically significant SP-PIR keywords that were unambiguously defined in the Swiss-Prot controlled vocabulary of keywords. Several of these keywords were severity-associated: over 10% of the gene IDs mapped to the keywords \u201cactivator,\u201d \u201cchemotaxis,\u201d \u201cchromosomal rearrangement,\u201d \u201ccytokine,\u201d \u201celectron transport,\u201d \u201cendosome,\u201d and \u201cSH3 domain\u201d were included in Clusters 3 or 5. While relatively few genes were significantly down-regulated with increased severity, an interesting inclusion was the chemokine receptor CXCR5. Recent studies have demonstrated that CXCR5 activity is essential for the TB immune response (Gopal et al.). To put our results in a larger context, we compared our transcription profiles with those identified in recent studies seeking to identify TB expression biomarkers. In 2010, researchers published a list of 393 transcripts that were found to effectively discriminate between cases of active and latent TB in blood samples (Berry et al.). Comparing this list to the genes we found to be significantly associated with T and/or S, we found an overlap of 106 genes and applied hierarchical clustering to the expression profiles associated with this subset; the results are shown in the figure. As our experimental data clearly demonstrate, the progression from Mtb exposure to the development of latent infection is a far from uniform process. Even in controlled experiments such as ours, reactions vary considerably and the clinical response is difficult to predict. Therefore, the analysis of gene expression profiles to understand the development of latent infection will be of limited value unless such variation is taken into account. The majority of clinico-genomic modeling efforts to date have emphasized the aggregation of clinical and genetic data in the prediction of binary disease outcomes. 
However, for infectious diseases such as TB that are associated with a spectrum of conditions, such an approach is unlikely to illuminate the subtle variations in genetic function that might predispose one individual to develop a more severe infection than another. Despite the obvious benefits of incorporating clinical data in this setting, little work has been done to facilitate such analyses by effectively aggregating a set of disparate longitudinal clinical measurements in a practical and intuitive way. To contribute to this important area of research, we have applied Bayesian hierarchical B-spline models to the estimation of disease trajectories. Our fitted estimates provide helpful summaries of the clinical profiles of each of our subjects and enable the direct incorporation of aspects of the individual disease progressions in a quantitative form. Furthermore, the modeling of continuous disease trajectories offers a great deal of analytical flexibility. While in our particular study we concluded that the duration of time post-exposure and the estimated severity of disease at the time of euthanasia were the most important explanatory factors for variation in gene expression, one might imagine that in other settings different aspects of an estimated disease trajectory would be more predictive. For example, the frequency of bouts of acute illness might be more relevant for some conditions, while for others the time to recovery might be of particular interest. As an alternative to predicting disease outcomes, we have focused our attention on the incorporation of clinical profiles in the identification of biomarkers associated with observed disease severity. Our results demonstrate that, even with a fairly limited set of subjects, our approach can identify key genes that have been shown to be factors in TB prognosis. 
This illustrates the potential of such integrated analyses for not only TB, but for a variety of complicated diseases in which subjects are monitored over time. While controlled experiments such as ours are, of course, limited to the laboratory setting, the ability to incorporate longitudinal clinical profiles in the analysis of gene expression data from human subjects is certainly an option in many observational studies and clinical trials. The estimation of individual disease trajectories in such studies would not only enable significant improvements in both the sensitivity and specificity of biomarker identification beyond current approaches, but would also provide insights into personalized treatment strategies.Qingyang Luo developed and implemented the models, analyzed the clinical and gene expression data, and contributed to the manuscript. Smriti Mehra infected the animals and collected samples. Nadia A. Golden produced the microarray data. Deepak Kaushal directed the primate experiments and contributed to the manuscript. Michelle R. Lacey directed the modeling project, performed the bioinformatics analysis, and revised and finalized the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Mus musculus dendritic cells incubated with Candida albicans and proofed our method by predicting previously verified PHIs as robust interactions.Inference of inter-species gene regulatory networks based on gene expression data is an important computational method to predict pathogen-host interactions (PHIs). Both the experimental setup and the nature of PHIs exhibit certain characteristics. First, besides an environmental change, the battle between pathogen and host leads to a constantly changing environment and thus complex gene expression patterns. Second, there might be a delay until one of the organisms reacts. Third, toward later time points only one organism may survive leading to missing gene expression data of the other organism. Here, we account for PHI characteristics by extending NetGenerator, a network inference tool that predicts gene regulatory networks from gene expression time series data. We tested multiple modeling scenarios regarding the stimuli functions of the interaction network based on a benchmark example. We show that modeling perturbation of a PHI network by multiple stimuli better represents the underlying biological phenomena. Furthermore, we utilized the benchmark example to test the influence of missing data points on the inference performance. Our results suggest that PHI network inference with missing data is possible, but we recommend to provide complete time series data. Finally, we extended the NetGenerator tool to incorporate gene- and time point specific variances, because complex PHIs may lead to high variance in expression data. Sample variances are directly considered in the objective function of NetGenerator and indirectly by testing the robustness of interactions based on variance dependent disturbance of gene expression values. We evaluated the method of variance incorporation on dual RNA sequencing (RNA-Seq) data of Organisms need to constantly adapt to environmental changes. 
On a molecular level, this is mediated by complex signaling cascades, which transmit the signal to the cell nucleus. Transcription factors bind to their target genes, which consequently leads to a change in gene expression. This way, biological systems adapt to new environmental conditions. In most cases, the underlying networks are unknown. This is especially interesting for interacting organisms, such as pathogen and host. Both the experimental setup and the nature of PHIs exhibit certain characteristics: (i) pathogen and host are in a battle leading to constantly changing conditions, (ii) a change in gene expression is triggered by new environmental conditions and the response of one organism might initiate faster or persist longer than the response of the other organism, and (iii) two different organisms interact and eventually one survives, which can lead to missing data time points. The immune system of the host is permanently active to recognize and eliminate infectious microorganisms. As a first line of defense, components of the innate immune system such as the complement system, immune cells, and antimicrobial peptides recognize pathogen-associated molecular patterns (PAMPs). In contrast, pathogens developed many strategies to evade these mechanisms. They can shield microbe-associated cell surface proteins, mimic host surfaces or secrete proteases degrading host immune proteins. In dual RNA-Seq experiments, the multiplicity of infection (MOI) has to be considered. A high MOI results in more pathogenic RNA, but may also lead to a faster and stronger host response and less clinical relevance. The number of reads required to achieve a good genome coverage in both species has to be estimated in advance. The number of reads needs to be calculated for the least abundant species based on the intended fold coverage, transcriptome size and read length. The total number of reads can be estimated through the ratio of the amount of extracted pathogen and host RNA. Dual RNA-Seq has, for example, been applied to the plant Sorghum bicolor and the pathogenic fungus Bipolaris sorghicola; see also Pittman et al. 
Engstr\u00f6m et al. systematically compared alignment programs for RNA-Seq data. In the following, we describe the NetGenerator tool and its application to predict PHI networks. NetGenerator requires logarithmic fold changes (logFCs) of gene expression time series data that can be obtained by various technologies, such as RNA-Seq or microarrays. Furthermore, the user of NetGenerator has to provide at least one input stimulus representing the external signal leading to a change in gene expression. Also, prior knowledge can be provided by the user to support the inference process (Figure). We generated a benchmark example to evaluate the influence of different stimuli and missing data on the inference performance (see Data and Methods). The benchmark comprised six data points of seven genes and two stimuli (Figure), so that 2^63 network topologies were possible, not even including the interaction sign. Prior knowledge data sets were generated as well. In an inter-species network, one stimulus can represent the influence of the pathogen on the host and vice versa (Weber et al.). In a first test (Test-1), a single stimulus set to a value of 1 was given. In a second test (Test-2), an additional stimulus set to 0 until 30 min and set to 1 afterwards was given. We evaluated the influence of missing data on the performance based on the benchmark example, prior knowledge data sets and two given stimuli as in Test-2 (Figure). We included data of one additional time point (165 min) for host genes, but additional data for pathogen genes were not given (Test-3). Thereby, we demonstrated the applicability of the extended NetGenerator version to data with missing values. We set the time point in such a way that an additional data point covering the onset of the host response was provided, and observed a noticeable increase of the F-measure (Figure). NetGenerator requires complete data for the last time point. In case of missing measurements at the end of the time range for a subset of candidate genes, their values must be obtained in a different way and provided by the user. Here, we set the last time point to its preceding value (Test-4). We found slightly greater F-measures for Test-2 in comparison to Test-4, independent of the amount of given prior knowledge. 
We observed a maximal difference between the F-measures of Test-2 and Test-4 (0.02) given four, six and eight prior knowledge interactions (Figure). Various differential expression analysis tools are available that calculate fold changes from multiple replicates. However, fold changes alone cannot reflect the degree of gene- and time point specific variances. This variance might be high especially regarding complex biological systems such as PHIs, where cells from two species constantly interact and change the environment. Fortunately, biological variances can be considered in the network inference process to obtain robust predictions. For this purpose, we extended and applied NetGenerator to incorporate variances within the algorithm and in an outer robustness analysis. The extended NetGenerator algorithm was applied to one of the first published dual RNA-Seq data sets; within the extended objective function, less weight is given to data points with higher standard deviation. We predicted a GRN from these data, in which the standard deviations of the logFCs reached a maximum of 3.49 (Mta2 at 30 min). A stimulus that becomes active only after a delay initiates an information flow through signaling cascades, aiming to model a later onset of the host transcriptional response. Another possible scenario would be to provide a stimulus function representing a slow increase of the influence. We found that multiple stimuli functions improve network inference results significantly. Therefore, we recommend providing two or more stimuli functions for inter-species network inference. One option to model the stimulus representing the influence of the host on the pathogen is a constant function. Therewith, the stimulus is active from time point zero onwards and models an early pathogen transcriptional response. More options for stimuli functions are possible when real experiments are carried out. For example, the number of differentially expressed host and pathogen genes can be determined for every time point and translated into stimuli functions. This can be done by scaling the number of DEGs to a range from zero to one. 
Additional measurements, e.g., cytokine release or cell contacts, can also be used as a basis for stimuli functions. Of particular interest is the growth curve of the pathogen, which we recommend measuring and integrating into the stimuli functions. Nevertheless, many biological events trigger responses, of which not all can be integrated in the network inference. Optionally, the user of NetGenerator can provide prior knowledge about interactions of candidate genes. This is strongly recommended to reduce the search space resulting from the large number of possible interactions. Besides the time series data, stimuli functions and prior knowledge have to be provided by the user. Stimuli are factors that (directly or indirectly) cause changes in gene expression. It is assumed that stimuli are not influenced by genes or their products, at least in the experimental setup. Nevertheless, stimuli values may evolve over time. The inferred network model is described by a system of first order linear differential equations of the form \u1e8b(t) = A x(t) + B u(t): the change of gene expression \u1e8b is influenced by other genes and by the stimuli u. While interactions between genes are described by the N \u00d7 N system matrix A, the influence of stimuli is represented by the N \u00d7 M input matrix B, where N is the number of genes and M is the number of inputs. The inference procedure determines the elements of these matrices, i.e., the parameters \u03b8 of the model, by an iterative heuristic including structure and parameter optimization. In each iteration step, the algorithm includes a submodel which matches the available time series data best. The parameters of the ith submodel are determined by minimizing an objective function Ji, whose first term describes the error between measured and simulated data and whose second term evaluates the integration of prior knowledge; minimizing this function corresponds to a Maximum Likelihood Estimator (MLE). NetGenerator was extended to account for missing data values. 
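The model class xdot = A x + B u can be simulated directly; the matrices below are illustrative (not an inferred network), combining a constant stimulus with a step stimulus that switches on at 30 min, analogous to Test-2:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, M = 3, 2
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -1.0,  0.0],
              [ 0.0, -0.5, -1.0]])   # hypothetical gene-gene interactions
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])           # hypothetical stimulus -> gene influence

def u(t):
    # stimulus 1: constant (early response); stimulus 2: step at t = 30 min
    return np.array([1.0, 0.0 if t < 30 else 1.0])

def rhs(t, x):
    return A @ x + B @ u(t)

# logFCs start at zero (no change relative to control at t = 0)
sol = solve_ivp(rhs, (0, 120), np.zeros(N), t_eval=np.linspace(0, 120, 7))
print(sol.y.shape)
```

Negative off-diagonal entries of `A` act as inhibiting edges and positive ones as activating edges; the negative diagonal provides the degradation that keeps the system stable.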
Now, NetGenerator accepts missing values at intermediate time points, provided by the user as \u201cNA.\u201d Internally, the time vector of the respective output is adjusted and interpolation is carried out based on existing measurement data. During inference, both simulation and objective function (Equation 4) can process that information of missing and replaced values. The objective function Ji,output (Equation 3) was extended by additional weighting factors, which are the reciprocal variances 1/\u03c32 of the replicated data: each squared deviation between a measured and a simulated value is divided by the corresponding variance. Therefore, the variances \u03c32 of the logFCs became additional input arguments to NetGenerator. Larger variances decrease the objective function value, which effectively allows for a larger error between associated measured and simulated values in comparison to measurements of smaller variance. Moreover, variances are considered in an outer robustness analysis by predicting GRNs based on disturbed logFCs. To simulate the measurement process, we sampled three replicates of Gaussian distributed logFCs and determined their mean. This resulted in a noisy logFC for each candidate gene and time point used as input for the extended NetGenerator. We repeated this process 500 times. For better visualization of the robustness analysis results we introduced the bubble map (Figure), showing the robustness scores together with the total objective function value J = \u2211i Ji, the sum over the values of each time series (Equation 2). The robustness score Si,j evaluating the interaction of gene j and gene i is calculated from the predicted networks, with Jk being the objective function value of the kth predicted GRN and ai,j,k being the corresponding element of its interaction matrix. The robustness score Si,j of gene j interacting with gene i is illustrated by the bubble size in column j and row i, distinguishing activating from inhibiting (blue) interactions. Note that the diagonal represents autoregulations. 
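The reciprocal-variance weighting and the resampling step of the robustness analysis can be sketched as follows (values are illustrative, and NetGenerator's full objective also contains a prior-knowledge term not shown here):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.array([0, 30, 60, 90, 120], float)
meas = np.array([0.0, 0.4, 1.1, 1.5, 1.6])    # measured logFCs of one gene
sigma = np.array([0.05, 0.2, 0.6, 0.3, 0.1])  # time-point specific std devs

def j_output(sim, meas, sigma):
    # reciprocal-variance weighting: noisier time points contribute less
    return np.sum((sim - meas) ** 2 / sigma ** 2)

sim = meas + 0.1                              # some simulated trajectory
print(j_output(sim, meas, sigma))

# outer robustness analysis: 500 disturbed inputs, each the mean of three
# Gaussian replicates drawn around the measured logFCs
noisy_inputs = [meas + rng.normal(0, sigma, size=(3, len(t))).mean(axis=0)
                for _ in range(500)]
```

Each element of `noisy_inputs` would be fed to the inference once; edges that survive across most of the 500 runs receive high robustness scores.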
Exact robustness scores, depending on how frequently an edge was predicted, and the corresponding objective function values of the predicted GRNs are available as additional output. Both the extended version of the objective function and the robustness analysis require variances derived from data. The gene- and time point specific variance \u03c32 of a logFC can be obtained by error propagation from the variances of the infection (i) and control (c) samples. The respective standard deviations \u03c3i,j of all genes and time points can be obtained by taking the square root of the variances. Given only few replicates, standard deviations can be high, leading to the prediction of diverse GRNs. In that case, the standard deviations need to be scaled to a maximal value \u03c3max. We constructed a benchmark system composed of differential equations representing the logFC time series data of three pathogen genes, four host genes and two stimuli. The network topology included 21 directed, signed edges representing interactions. Common biological motifs like feed forward loops and feedback loops are integrated, too. Based on this topology we set up a system of differential equations and simulated this model with the R-package deSolve, adding Gaussian noise (mean = 0, \u03c3 = 0.01) to generate the benchmark data set. Likewise, we generated prior knowledge data sets of four, six and eight interactions, in which 50% of the provided interactions is part of the true topology and 50% is unspecific. To evaluate predicted GRNs we computed statistical measures that compare the known topology to the predicted topology. Sensitivity (SE), specificity (SP), precision (PR) and F-measure (FM) are calculated taking true positives (TP), false positives not part of the known topology (FPn), false positives with wrong sign (FPs), true negatives (TN) and false negatives (FN) into account. Three biological replicates were generated at 0, 30, 60, 90, 120 min after infection. 
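One plausible reading of these measures, assuming both kinds of false positives (FPn and FPs) count against precision and specificity and that the F-measure is the harmonic mean of precision and sensitivity, is:

```python
def grn_metrics(tp, fp_n, fp_s, tn, fn):
    """Sensitivity, specificity, precision and F-measure for a predicted,
    signed topology; fp_n = edges absent from the known topology,
    fp_s = edges present but predicted with the wrong sign (assumed
    convention: both are treated as false positives)."""
    fp = fp_n + fp_s
    se = tp / (tp + fn)            # sensitivity (recall)
    sp = tn / (tn + fp)            # specificity
    pr = tp / (tp + fp)            # precision
    fm = 2 * pr * se / (pr + se)   # F-measure
    return se, sp, pr, fm

# illustrative counts for a 21-edge benchmark topology
se, sp, pr, fm = grn_metrics(tp=18, fp_n=2, fp_s=1, tn=38, fn=3)
print(round(fm, 3))  # 0.857
```

With these counts precision and sensitivity coincide (18/21), so the F-measure equals them; in general it penalizes an imbalance between the two.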
Differential expression analysis was carried out with DESeq. This work was supported by the CRC/Transregio 124 \u201cPathogenic fungi and their human host: Networks of interaction,\u201d subproject B3 (Sylvie Schulze) and subproject INF (J\u00f6rg Linde). Sebastian G. Henkel and Dominik Driesch are supported within the Virtual Liver Network funded by the German Federal Ministry of Education and Research. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The reconstruction of gene regulatory networks from time course microarray data can help us comprehensively understand the biological system and discover the pathogenesis of cancer and other diseases. But how to correctly and efficiently decipher the gene regulatory network from high-throughput gene expression data is a big challenge due to the relatively small number of observations and the curse of dimensionality. Computational biologists have developed many statistical inference and machine learning algorithms to analyze the microarray data. In previous studies, the correctness of an inferred regulatory network was manually checked by comparison with public databases or an existing model. In this work, we present a novel procedure to automatically infer and verify gene regulatory networks from time series expression data. The dynamic Bayesian network, a statistical inference algorithm, is first used to infer an optimal network from time series microarray data of S. cerevisiae; then, a weighted symbolic model checker is applied to automatically verify or falsify the inferred network by checking desired temporal logic formulas abstracted from experiments or public databases. Our studies show that marrying a statistical inference algorithm with a model checking technique provides a more efficient way to automatically infer and verify gene regulatory networks from time series expression data than previous studies. Advances in DNA microarray technology and next-generation sequencing techniques have revolutionized molecular biology, making it possible for biologists to measure and collect thousands of genes' expression levels simultaneously, efficiently and precisely in one experiment. Computational analysis of genome-wide transcriptomics data will help us understand the regulatory components and mechanisms underlying some diseases. 
This explosively growing amount of high-dimensional gene expression data can be divided into two types: static and time series. The static expression data are assumed to be independently and identically distributed (IID), and many statistical inference algorithms have been developed to analyze them. The time series gene expression data can provide abundant information regarding the dynamic and temporal behaviors of a biological system, which cannot be handled by the static Bayesian network method; the dynamic Bayesian network (DBN) was proposed to address this. Model checking determines whether a model M satisfies a desired temporal logic formula \u03c8, denoted by M \u255e \u03c8. Our previous work proposed and applied different model checking techniques, including statistical model checking, to systems with some 10^100 possible states. Without verification or validation, the inferred regulatory networks cannot help us correctly understand the mechanism in the cell cycle. Another limitation in the previous studies is that the correctness or verification of the inferred networks is manually checked by comparing with public databases or existing/known models. This verification procedure is only good for small and already-known network \"inference\" and \"verification\". However, since signaling pathways and regulatory networks are complex due to the excessive number of components and interactions, it is neither realistic nor efficient to use traditional methods to manually verify or analyze large networks. An intelligent verification technique called model checking, including symbolic, statistical and probabilistic variants, has been developed to formally analyze such systems. In this work, we proposed a novel inference and verification procedure, which marries the dynamic Bayesian network inference algorithm with a powerful model checking technique, to analyze time course microarray data. We will first briefly introduce the Bayesian network inference with Java objects (Banjo) method. The time series gene expression data of p genes measured at n different time points can be described by an n \u00d7 p matrix X. 
If Xi = (Xi1,..., Xip)T is defined as a random variable vector (at time i), then xi = (xi1,..., xip)T corresponds to the values of a vector of p genes' expression measured at time i = 1, 2,..., n; that is, xij represents an observed value of the random variable Xij (the jth gene's expression measured at time i). We adopt some conventions used in Kim et al.'s work. Probabilistic graphical models describe each node in the network by a random variable, and a directed edge represents a conditional dependence between two variables. Therefore, a gene regulatory network can be graphically represented by a joint distribution of all random variables over time. Since the random variable vector Xi is time dependent, the dynamic Bayesian network assumes that variables at time i are dependent on those at time i \u2212 1 only, which is illustrated in Figure; the joint distribution of the n \u00d7 p random variables (or n vectors of random variables) can then be written as a product of conditional distributions. We use Par(Xij) to denote the parents of gene j (at time i), and also assume each gene (node) at time i is influenced by itself and its parent genes (nodes) at time i \u2212 1 only. Therefore, the conditional probability distribution can be expressed accordingly. First, the data D are discretized into k levels {l1,..., lk} using either quantile (qk) or interval (ik) discretization methods. Second, a scoring metric such as BDe is used to evaluate candidate network structures G. BDe assumes D is a multinomial sample, that is, D|\u0398 \u223c Multinomial(\u0398). BDe also assumes the parameters \u0398 are globally and locally independent and the priors of \u0398, denoted by \u03c0, follow a Dirichlet distribution with a hyperparameter vector \u039b, that is, \u0398|G \u223c Dirichlet(\u039b), which is a conjugate prior of the multinomial distribution. The optimal network is selected according to the BDe scores, which are dependent on P(D|G). Next, the space of possible networks is searched for the optimal ones. 
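The quantile (qk) and interval (ik) discretizations mentioned above can be illustrated roughly as follows; this is a simplified sketch of the general idea, not Banjo's actual implementation, and the function names and bin conventions are invented.

```python
def interval_discretize(values, k):
    """ik: split the observed range [min, max] into k equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    if width == 0:
        return [0 for _ in values]
    # the maximum value falls into the last bin
    return [min(int((v - lo) / width), k - 1) for v in values]

def quantile_discretize(values, k):
    """qk: choose bin boundaries so bins hold (roughly) equal counts."""
    ranked = sorted(values)
    n = len(values)
    bounds = [ranked[max(n * j // k - 1, 0)] for j in range(1, k)]
    return [sum(v > b for b in bounds) for v in values]

expr = [0.1, 0.4, 0.35, 0.8, 0.95, 0.05]   # toy expression profile
levels_i2 = interval_discretize(expr, 2)   # -> [0, 0, 0, 1, 1, 0]
levels_q2 = quantile_discretize(expr, 2)   # -> [0, 1, 0, 1, 1, 0]
```

Note how the two schemes can disagree (here on the value 0.4): equal-width bins follow the range of the data, while quantile bins follow its distribution, which is one reason the inferred optimal network is sensitive to the discretization choice.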
Banjo allows two different search strategies, including the greedy search and the simulated annealing algorithm proposed by Heckerman et al. It can report the n directed networks with the highest scores, and it can also retain and average some highest scoring networks to produce a weighted consensus network. The BDe score function is based on the assumption that the microarray data form a multinomial sample. Bayesian network inference with Java objects (Banjo) can also compute an influence score, which is based on: the probability that gene Xti takes a value of k given that its parent gene Par(Xti) takes a value of j; the cumulative distribution function that Xti takes a value less than or equal to k given that its parent gene takes a value of j; and a predefined voting system. If there is a high probability for the gene Xti to take a larger value when its parent's value increases, then the voting system assigns a positive influence score. The dynamic Bayesian network implemented with Banjo can infer the high-scoring gene regulatory networks based on the BDe metrics; however, this algorithm is sensitive to the data discretization methods. Moreover, in many cases, the inferred optimal network might not be a correct one based on different scoring functions. Which model is closest to the truth in the biological system? Previous studies validate the inferred network through manual comparison with public databases or known models. The manual verification method is not realistic for large or unknown network verification. The most innovative aspect of the proposed procedure in Figure is the automated verification step. A network or model can be described as a Kripke structure M = (S, s0, R, L), representing a finite-state concurrent system with the initial state s0 \u2208 S, a state transition relation R, and a labeling function L. Given a model or concurrent system, we expect it to satisfy some desired property. So, model checking, a formal verification technique, determines whether M satisfies the desired property, which is expressed as a temporal logic formula \u03c8, denoted by M \u255e \u03c8. 
During formal verification, model checkers can search the state space of the concurrent system exhaustively to find all states that satisfy the formula \u03c8. If the property is satisfied, the model checker will output \"True\"; else, it will output \"False\" with a counterexample sequence that falsifies \u03c8. Model checking of hardware and software systems has been very successful in the past three decades. Recently, we proposed different model checkers to formally investigate the complex signal transduction networks in cancer cells. The desired properties describing some existing wet-lab experimental results or phenomena are expressed in a high-level, expressive language - Computation Tree Logic (CTL). A CTL formula \u03c8 is composed of path quantifiers, which describe the branching structure in the computation tree unwound from the root: A (for all paths), E (there exists some path); temporal operators describing properties on a path through the tree: X (next time), F (in the future), G (globally), U (until), R (release); and Boolean logic connectives (| (or), & (and), \u2192 (implies)). In a CTL formula, a temporal operator must be immediately preceded by a path quantifier. We use the combinations (AX, EX, AG, EG, AF, EF) to construct CTL formulas for the verification of gene regulatory networks. For example, AG\u03c6 means \u03c6 is globally true on all paths; EF\u03c6 means \u03c6 holds at some state in the future on some path. More interesting CTL operators and formulas have been discussed in Clarke et al.'s book. Given a Kripke structure M, the state formula and the path formula are represented by \u03c8 and \u03c6 respectively in CTL syntax, and a path \u03c0 is defined as an infinite sequence of states, \u03c0 = s0, s1,..., where s0 is an initial state. We use \u03c0i to denote the suffix of \u03c0 starting at state si, and M, \u03c0 \u255e \u03c6 denotes that the path \u03c0 satisfies the path formula \u03c6. 
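As a toy illustration of the AG and EF operators, both can be evaluated on a small hand-written state graph by fixpoint computation. This is a didactic sketch of the standard CTL labelling idea, unrelated to any specific model checker; the states, transitions and labels below are invented.

```python
# Toy Kripke structure: transition relation R and atomic propositions per state.
R = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s0"], "s3": ["s3"]}
labels = {"s0": {"p"}, "s1": {"q"}, "s2": {"p", "q"}, "s3": {"q"}}

def sat_EF(target):
    """States satisfying EF target: some path reaches a target state
    (least fixpoint: keep adding predecessors until stable)."""
    sat = {s for s in R if target(s)}
    changed = True
    while changed:
        changed = False
        for s, succs in R.items():
            if s not in sat and any(t in sat for t in succs):
                sat.add(s)
                changed = True
    return sat

def sat_AG(target):
    """States satisfying AG target: target holds along every path
    (greatest fixpoint: keep removing states with a bad successor)."""
    sat = {s for s in R if target(s)}
    changed = True
    while changed:
        changed = False
        for s in list(sat):
            if not all(t in sat for t in R[s]):
                sat.discard(s)
                changed = True
    return sat

ef_p = sat_EF(lambda s: "p" in labels[s])          # s1 and s3 can never reach p
ag_labelled = sat_AG(lambda s: len(labels[s]) > 0) # every state stays labelled
```

Real symbolic checkers compute the same fixpoints over BDD-encoded state sets rather than explicit dictionaries, which is what makes very large state spaces tractable.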
The semantics of CTL have been defined in Clarke et al.'s book. Below, we propose a weighted symbolic model checking method. The value of the influence score calculated by Banjo ranges from \u22121 to 1. For example, the CTL formula \"AG(X2 = 1 \u2192 AF(X4 = \u22121))\" means that overexpressed X2 will eventually inhibit X4's expression on all paths. The weighted SMV model checker will automatically verify all the CTL formulas (encoded by SPEC), and find the best model which satisfies all or most of the properties based on existing experimental evidence. In this section, we will apply the dynamic Bayesian network inference and weighted symbolic model checking methods proposed in Figure. The time series microarray data of Saccharomyces cerevisiae collected by Spellman et al. has been widely used in previous studies. The Banjo tool will be applied to infer an optimal network using the i2 interval discretization (two discrete states) method in Banjo. The weighted symbolic model checker will then be applied to formally verify or falsify this optimal network. We will first infer and verify a small network of the MAPK signaling pathway, which plays an important role in the cell cycle. We focused on the subnetwork around Fus3, which contains 8 genes, while Dig1/2 denotes the mean value of Dig1 and Dig2 in our analysis. In Table and the corresponding Figures, the results of the i2 and q2 discretizations are compared. Next, we will apply our methods to infer and verify a cell cycle subnetwork; partial pathway information has been registered in KEGG. Similar to the MAPK pathway inference, we will use the mean values for some genes from the same family in the data analysis. The evaluation of the i2 \"optimal\" network is summarized in Table. A comprehensive understanding of the signaling pathways or gene regulatory networks will advance our knowledge in molecular biology. Network reconstruction from high-dimensional microarray data can help researchers to investigate the crosstalk of different pathways and develop effective multi-gene targeted treatments for some diseases, e.g., cancer and neurodegenerative diseases. 
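The model-selection idea described above, verifying a set of desired properties and keeping the model that satisfies most of them, can be sketched generically. This is a hypothetical illustration: the real weighted SMV checker evaluates CTL formulas over state machines, whereas here the "properties" are plain Python predicates over toy edge sets.

```python
def rank_models(models, properties):
    """Score each candidate model by the fraction of desired properties it
    satisfies and return (name, score) pairs, best first."""
    scores = {name: sum(p(m) for p in properties) / len(properties)
              for name, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy candidates: networks represented as sets of directed edges.
models = {"net_a": {("X2", "X4")},
          "net_b": {("X2", "X4"), ("X1", "X2")}}
# Toy "properties": predicates standing in for CTL formulas such as
# AG(X2 = 1 -> AF(X4 = -1)).
props = [lambda m: ("X2", "X4") in m,
         lambda m: ("X1", "X2") in m]
ranking = rank_models(models, props)   # net_b satisfies both properties
```

The key design point carried over from the paper is graded rather than binary acceptance: a model that fails one property is not discarded outright but ranked below models that satisfy more of the experimental evidence.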
Previous studies developed different statistical inference algorithms to reconstruct regulatory networks from expression data. The authors declare that they have no competing interests. HG proposed the study; HG, JK, KD prepared the computational code; HG, JK, KD, XL, SH analyzed the results; HG wrote the manuscript. All authors approved the final manuscript."}
+{"text": "Network inference from gene expression data is a typical approach to reconstruct gene regulatory networks. During chondrogenic differentiation of human mesenchymal stem cells (hMSCs), a complex transcriptional network is active and regulates the temporal differentiation progress. As modulators of transcriptional regulation, microRNAs (miRNAs) play a critical role in stem cell differentiation. Integrated network inference aims at determining interrelations between miRNAs and mRNAs on the basis of expression data as well as miRNA target predictions. We applied the NetGenerator tool in order to infer an integrated gene regulatory network. Time series experiments were performed to measure mRNA and miRNA abundances of TGF-beta1+BMP2 stimulated hMSCs. Network nodes were identified by analysing temporal expression changes, miRNA target gene predictions, time series correlation and literature knowledge. Network inference was performed using NetGenerator to reconstruct a dynamical regulatory model based on the measured data and prior knowledge. The resulting model is robust against noise and shows an optimal trade-off between fitting precision and inclusion of prior knowledge. It predicts the influence of miRNAs on the expression of chondrogenic marker genes and therefore proposes novel regulatory relations in differentiation control. By analysing the inferred network, we identified a previously unknown regulatory effect of miR-524-5p on the expression of the transcription factor SOX9 and the chondrogenic marker genes COL2A1, ACAN and COL10A1. Genome-wide exploration of miRNA-mRNA regulatory relationships is a reasonable approach to identify miRNAs which have so far not been associated with the investigated differentiation process. The NetGenerator tool is able to identify valid gene regulatory networks on the basis of miRNA and mRNA time series data. 
Modelling of gene regulatory networks (GRNs) has become a widely used computational approach in systems biology and has been demonstrated, for example, for E. coli and fungi. In this study, we focus on the involvement of microRNAs (miRNAs) in the gene regulation of human mesenchymal stem cells (hMSCs) which differentiate towards chondrocytes. Therefore, we provide a biological background about hMSCs, characteristics and function of miRNAs and modelling approaches which integrate miRNA regulation. HMSCs are multi-potent adult stem cells, which have the capacity to differentiate into multiple cell types, such as chondrocytes, osteoblasts and adipocytes. MiRNAs are short (\u223c 22 nucleotides), noncoding RNA molecules, which are able to bind to complementary sequences in target mRNAs, thereby repressing translation or inducing degradation of mRNA molecules. Network inference approaches have considered the emerging knowledge about miRNA-dependent regulation by taking account of interactions between miRNAs and mRNAs. Such approaches utilise miRNA target predictions as well as miRNA and mRNA expression data. Consideration of post-transcriptional gene regulation has been contributing to the extension and refinement of GRNs. This new feature has promoted the analysis of dependencies between miRNAs and target genes; for example, tools like MAGIA and MMIA, among others, have been developed. This study aims to present the inference of miRNA regulation as a novel application for the previously published NetGenerator V2.0 tool. The focus of this work is integrated network inference based on mRNA and miRNA time series data and prior knowledge. We analysed a dataset which contained mRNA and miRNA microarray measurements of cultured hMSCs in pellet cultures after stimulation with growth factors TGF-beta1 and BMP2. Both factors are known to induce the process of chondrogenesis. 
As reported in the literature, there are prominent chondrogenesis marker genes such as COL2A1, ACAN (aggrecan) and COL10A1, whose expression level indicates the progress of differentiation. In summary, the applied multi-step selection procedure resulted in a set of 11 network components, including 4 miRNAs, 4 transcription factor genes and 3 chondrogenic marker genes coding for components of the extracellular matrix. The NetGenerator tool was applied to infer a system of linear ordinary differential equations, which describes a network of regulatory interactions between the components and the influence of the external stimulus (TGF-beta1+BMP2). The general model structure and the utilised optimisation approach are explained in the Methods section. Input data for the tool comprised time series data and prior knowledge about potential regulatory interactions between the components. Time series data were extracted from the available miRNA and mRNA microarray datasets, averaged across replicates at each time point, centered and scaled by their maximum absolute value (see Methods). The resulting time series matrix has 9 rows (time points) and 11 columns (nodes). For each node type ((1) miRNA, (2) transcription factor gene, (3) marker/target gene), prior knowledge regarding the typical biological function was derived as follows: (1) miRNAs primarily function by degradation of their target mRNAs; therefore, predicted miRNA-target connections entered the prior knowledge as inhibitory. The final model can be analysed connection by connection, by identifying regulator and target nodes for each of them. This promotes the understanding of the model, particularly of the mechanism that enables miRNAs to interfere with transcriptional regulation in order to control the differentiation process. As seen in the final model, the input stimulus (TGF-beta1+BMP2) inhibits the expression of 3 miRNAs and activates miR-500, which is in turn suppressed by TRPS1. 
Consequently, the negative effect of downregulated miRNAs on their target genes is attenuated, which leads to the activation of the transcription factor genes SOX9, MEF2C, TRPS1 and SATB2. SOX9, the main regulatory factor in chondrogenesis, is inhibited by miR-524-5p in the model. To summarise, the involvement of transcription factor genes is a central part of the model. The model integrates transcriptional regulation (by transcription factor genes) and post-transcriptional regulation (by miRNAs) and thereby displays the interrelationship between miRNAs and transcription factors. Since all four investigated miRNAs are ultimately downregulated, the model proposes the suppression of miRNA activity, which gives rise to the activation of the transcriptional regulators of chondrogenic differentiation such as SOX9. The model comprises miRNAs acting on different stages of the differentiation process including early proliferation and late hypertrophic stages. The downregulation of miR-524-5p provides an interesting explanation about how chondrogenic differentiation might be modulated on the level of post-transcriptional mRNA interference. Furthermore, we found expression of miR-524-5p to be differently regulated during osteogenic and adipogenic hMSC differentiation. Therefore, we performed overexpression experiments of mir-524-5p in hMSCs to validate whether chondrogenesis is impaired in this case. Changes in chondrogenic differentiation were measured based on the expression of specific marker genes. For this, hMSCs were transfected with lentivirus harbouring the mir-524-5p coding sequence, while a non-related murine Jnk RNAi lentivirus was used as a negative control. Then, hMSCs were allowed to differentiate for 14 days into chondrocytes, after which the expression of chondrogenic marker genes was measured by using qPCR. 
Relative expression of marker genes of transfected cells was compared to the negative control (incomplete: differentiation in culture medium without any growth factors added) and a positive control (TGFB1+BMP2: medium containing TGF-beta1 and BMP2), in which differentiation occurs in culture medium but without lentiviral transduction. A multi-step selection procedure in conjunction with statistical criteria was applied; four miRNAs were found to be potentially involved in the regulation of chondrogenesis. To infer an integrated network of miRNA and gene regulation, we presented a novel application of the NetGenerator tool, i.e. its capability to integrate miRNA and mRNA time series data into a single network inference. The good quality of the resulting network model with regard to complexity, data fit and robustness underlines the tool\u2019s utility to infer the post-transcriptional level of gene regulation. Analysis of the network resulted in hypotheses and additional experiments which verified model predictions by showing that miR-524-5p can affect the expression of the central transcription factor gene SOX9 and differentiation marker genes. Therefore, this work demonstrated how dynamic modelling of miRNA regulation can enhance the understanding of a specific biological process and lead to the discovery of new regulatory interactions. Studies were performed with hMSCs from multiple donors, including 5F0138, 5F0138 and 1F1061. For chondrogenic differentiation, hMSCs were trypsinised and 2.5x10^5 cells pelleted in a 10 ml round bottom tube for 10 min at 250xg. 
Cell pellets were subsequently cultured for 21 days in chondrogenic differentiation medium, consisting of proliferation medium supplemented with 6.25 \u00b5g/ml insulin, 6.25 \u00b5g/ml transferrin, 6.25 ng/ml sodium selenite, 5.35 \u00b5g/ml linoleic acid, 400 \u00b5g/ml proline, 1 mg/ml sodium pyruvate, 10^-7 M dexamethasone, 50 \u00b5g/ml sodium L-ascorbate, in the absence (incomplete or control) or presence of 10 ng/ml recombinant TGF-beta1 in combination with 50 ng/ml recombinant human BMP2 (TGF-beta1+BMP2). Growth factors were obtained from R&D Systems. Human mesenchymal stem cells (hMSCs), harvested from normal human bone marrow, were purchased from Lonza at passage 2. Cells were tested by the manufacturer and were found to be positive by flow cytometry for expression of CD105, CD166, CD29 and CD44 and negative for CD14, CD34 and CD45. We confirmed multipotency of all donor batches based on in vitro osteo-, chondro- and adipogenic differentiation capacity. RNA was isolated according to the protocol provided by the manufacturer (Invitrogen). For each sample, 5 \u00b5g of RNA was used for miRNA profiling. Hybridisation and profiling were performed using Exiqon capture probe sets spotted on Schott Nexterion Hi-Sense E glass slides. Affymetrix Human Genome U133A (HG-U133A) microarrays were employed in triplicate experiments at 9 time points. Further experimental details can be found in the original publication. Relative expression was computed as 2^(\u2212Ct)\u00b710^6 for the marker gene divided by 2^(\u2212Ct)\u00b710^6 for RPS27a; data are presented as a fraction of RPS27a expression, and all qPCRs were performed in duplicates. Total RNA was isolated with a kit according to the manufacturer\u2019s instructions. The isolated total RNA (\u2248100 ng) was then used as a template in a 20 \u00b5l reverse transcriptase reaction using superscript reverse transcriptase from Invitrogen according to manufacturer\u2019s instructions, using random hexamers to prime the reaction. The following cycling conditions were used: 10 min at 20\u00b0C, 45 min at 42\u00b0C and 10 min at 94\u00b0C. 
The resulting cDNA solution was diluted 5x by adding 80 \u00b5l water. qPCR of chondrogenic markers was performed using the following human primers: COL2A1, COL10A1 (forward: 5\u2019-AAAGCTGCCAAGGCACCAT-3\u2019 and reverse: 5\u2019-AGGATACTAGCAGCAAAAAGGGTATT-3\u2019), ACAN (forward: 5\u2019-GACAGAGGGACACGTCATATGC-3\u2019 and reverse: 5\u2019-CGGGAAGTGGCGGTAACA-3\u2019) and SOX9 (forward: 5\u2019-GCAAGCTCTGGAGACTTCTGAAC-3\u2019 and reverse: 5\u2019-ACTTGTAATCCGGGTGGTCCTT-3\u2019); expression values were normalised and corrected using the RPS27a housekeeping gene (forward: 5\u2019-GTTAAGCTGGCTGTCCTGAAA-3\u2019 and reverse: 5\u2019-CATCAGAAGGGCACTCTCG-3\u2019). Relative expression was calculated as 2^(\u2212Ct)\u00b710^6 for the marker gene divided by 2^(\u2212Ct)\u00b710^6 for RPS27a. Microarray data pre-processing and network inference were entirely performed in the statistical programming environment R using Bioconductor. Data from mRNA microarray experiments were pre-processed using the customised chip definition package \u201cgahgu133a\u201d and the robust multi-array average (RMA) procedures. For the miRNA data, first, mean signal values were extracted for each of the measured miRNAs. Secondly, quantile normalisation was applied, which is provided by the RMA package. This led to logarithmised miRNA expression estimates for 1,023 miRNAs. In contrast to mRNA microarray data, there can be multiple probe-sets representing the same miRNA. MiRNA selection was merely based on a 2-fold-change criterion, due to the low replicate number in the miRNA dataset (2 replicates per time point). We applied the LIMMA package of the Bioconductor software suite to the mRNA data. Time series standardisation is a pre-processing step required by the NetGenerator tool. Network inference was performed using the NetGenerator tool, which models gene regulation by a system of ordinary differential equations (Equation 1). 
The dynamic change of expression xi of component i is described by the sum of weighted gene expressions of N genes and the weighted input u(t), which is a stepwise constant function representing the external stimulus (e.g. TGF-beta1+BMP2). The values of xi can be interpreted as standardised expression changes of component i between stimulated and non-stimulated (control) state, which serves as a reference point. Regulatory interactions are modelled by the interaction parameters ai,j and the input parameters bi. A positive parameter value denotes an activating connection, a negative value denotes an inhibitory connection and the value zero denotes no connection. Consequently, the GRN structure is determined by the model\u2019s interaction parameters, which have to be identified by the NetGenerator algorithm. The algorithm\u2019s central part is a heuristic algorithm, which performs network structure and parameter optimisation. Structure optimisation applies the principle of sparseness. Iterative development of sparse sub-models explicitly restricts the number of identified connections. In each development step, parameter optimisation is applied to obtain interaction and input parameter values. The resulting model contains a minimal number of parameters that is necessary to obtain a good fit between simulated model and measured time series. A more detailed description of the algorithm can be found in the NetGenerator publications. NetGenerator also allows for integration of additional information about regulation between the components, referred to as prior knowledge. As this knowledge is independent of the time series data, it represents valuable additional input for the network inference. NetGenerator is capable of using prior knowledge during the structure optimisation process, while also dealing with contradictions between prior knowledge and time series data. 
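The linear ODE model of Equation 1 can be sketched with simple Euler integration. This is an illustrative sketch: the integration scheme, matrix values and stimulus below are invented for demonstration and are not the inferred hMSC parameters.

```python
def simulate(A, b, u, x0, dt=0.01, t_end=5.0):
    """Euler integration of dx_i/dt = sum_j A[i][j]*x_j + b[i]*u(t).

    A - interaction matrix (positive entry: activation, negative entry:
        inhibition, zero: no connection); b - input parameters coupling
        the stimulus to each node; u - stepwise-constant stimulus.
    """
    x = list(x0)
    n = len(x)
    t = 0.0
    while t < t_end:
        dx = [sum(A[i][j] * x[j] for j in range(n)) + b[i] * u(t)
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        t += dt
    return x

# Invented 2-node cascade: the stimulus activates node 0, node 0
# activates node 1, and both decay (negative self-interactions).
A = [[-1.0, 0.0],
     [0.8, -1.0]]
b = [1.0, 0.0]
stimulus = lambda t: 1.0          # switched on for the whole simulation
x_final = simulate(A, b, stimulus, x0=[0.0, 0.0])
```

Because the connection structure lives entirely in the zero/non-zero pattern of A, enforcing sparseness during optimisation directly limits the number of inferred edges, which is the design choice NetGenerator exploits.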
Knowledge is provided in the form of an interaction matrix which contains values assigned to particular connections, coded in the following way: no connection (0), activation (10), inhibition (-10), activation or inhibition (1) or not available (NA). NetGenerator provides a flexible integration mode which ignores prior knowledge in case the model fit is worsened. Since NetGenerator contains a heuristic core, it depends on the setting of configuration parameters. The central parameter \u201callowedError\u201d controls the permitted total deviation between simulated and measured data for each time series. To achieve an optimal result, we performed a series of network inference runs varying the value of this parameter, resulting in ten models. For the overexpression experiments, hMSCs were transfected with the pMIRNA vector containing mir-524-5p premature DNA sequences (purchased from System Biosciences). Lentiviruses were added in various concentrations in addition to 1 mg/L polybrene (Millipore). The transfected cells were incubated for 2-3 days to allow for lentiviral integration and expression of the introduced transgenes. Transfected hMSCs were grown as pellets (by centrifugation) in high-glucose DMEM supplemented with 100 U/ml penicillin, 100 \u00b5g/ml streptomycin, 1% L-glutamate, 6,25 \u00b5g/ml insulin, 6,25 ng/ml sodium selenite, 6,25 \u00b5g/ml transferrin, 5,35 \u00b5g/ml linoleic acid, 400 \u00b5g/ml proline, 1% pyruvate, 100 nM dexamethasone, 50 \u00b5g/ml sodium ascorbate and 1,25 mg/ml bovine albumin (listed compounds from Sigma). This medium will be further referred to as incomplete medium. Differentiation experiments were performed using incomplete medium in the presence or absence of 10 ng/ml TGF-beta1 and 50 ng/ml BMP2 (both purchased from R&D Systems). 
Differentiation of hMSC chondrogenic pellets was allowed for 14 days. hMSCs were maintained in DMEM medium supplemented with 10% FBS, 1% pyruvate, 1% L-glutamine, 100 U/ml penicillin and 100 \u00b5g/ml streptomycin and incubated at 37\u00b0C in a humidified atmosphere containing 7,5% CO2. The authors declare that they have no competing interests. MW and AS drafted the manuscript. MW performed pre-processing and modelling of the network. PK contributed to the network component selection. AS and EJvZ contributed to experimental set-ups, measurements and biological interpretation of the network. All authors read and approved the final manuscript. Additional files: (1) Expression data of network components: non-standardised expression differences between stimulation and control for the corresponding 11 network components. (2) Table of connection frequencies from model validation: the network model connections and their relative frequencies in the model validation. (3) Time series of miR-524-5p expression: miR-524-5p expression after chondrogenic, osteogenic and adipogenic stimulation of human mesenchymal stem cells. (4) Expression data for network validation: expression data of the validation experiments which investigated the impact of miR-524-5p overexpression on the expression level of SOX9, COL2A1, COL10A1 and ACAN."}
+{"text": "Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental \u2018sweet spot\u2019 by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-)process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the \u2018sweet spot\u2019 for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. In transcriptomics research, design for experimentation that carefully considers biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints, or lie outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability. Design for experimentation plays an important role in transcriptomics research. There are several aspects that need to be considered: biological, technological, statistical and practical. 
Statistical principles for experimental design are well established. This holds especially true for time-series experiments, for instance in toxicogenomics exposure studies. Any experimental design space in a mechanism-oriented transcriptomics experiment usually has multiple axes, e.g. time, dose, space, etc., and is essentially limitless. An in-vitro and an in-vivo case study are presented, emphasizing the broad applicability of the approach. Here, we provide a proof-of-concept range finding protocol for transcriptomics by determining the optimal dose and time ranges for studying several specific cellular processes in response to UV exposure. We show the value of executing a transcriptome-wide range-finding test before designing an in-depth transcriptomics study, as was previously suggested. This study was agreed upon by the Animal Experimentation Ethical Committee of the RIVM in Bilthoven, the Netherlands under permit number 201200128. Animal handling in this study was carried out in accordance with relevant Dutch national legislation, including the 1997 Dutch Act on Animal Experimentation. Biopsies were taken under isoflurane anesthesia; at the end of the study animals were euthanized by cervical dislocation, and all efforts were made to minimize suffering. MEFs were plated at 3.5\u00d710^5 cells per 6-cm plate. Six hours later, when almost all cells are in G1 phase, cells were UV-irradiated (doses in J/m2). Control samples were mock treated. At various time points after treatment, MEFs were rinsed with phosphate-buffered saline and collected in 350 \u00b5l RLT buffer (RNeasy mini kit). An overview of all samples is shown in the corresponding table. Primary Mouse Embryonic Fibroblasts (MEFs) were isolated from E13.5 embryos in a C57BL/6 background (backcrossed for more than F8 generations). MEFs were cultured in Dulbecco's modified Eagle medium containing 10% fetal bovine serum, 1% nonessential amino acids, penicillin (0.6 \u00b5g/ml), and streptomycin (1 \u00b5g/ml) at 37\u00b0C and under 5% CO2, 3% O2 conditions. 
The experiment was performed with early-passage MEFs (prior to passage five). MEFs were expanded and plated at 3.5\u00d710^5 cells per 6-cm plate. Six hours later, when almost all cells were in G1 phase, cells were UV-C irradiated at various doses. Control samples were mock treated. At various time points after treatment, MEFs were rinsed with phosphate-buffered saline and collected in 350 \u00b5l RLT buffer (RNeasy mini kit). An overview of all samples is given in the supplementary tables. All mice were males of 7\u201310 weeks of age, backcrossed at least 10 times into the SKH hairless strain. Mice received normal feed and water ad libitum. Mice were UV-B exposed at various doses in a chamber containing Philips TL12 lamps. Control mice were mock treated. At various time points after treatment, both treated and untreated mice were anaesthetized by isoflurane and 1.5 mm biopsies were sampled from the back by punching a half-moon shape on folded skin. Biopsies were immediately snap frozen in liquid nitrogen and stored at \u221280\u00b0C until further processing. Total RNA was isolated as previously described. Per RNA sample, 200 ng total RNA was amplified according to the Agilent LRILAK kit manual (Agilent Technologies). Amino-allyl modified nucleotides were incorporated during the aRNA synthesis (0.75 mM rCTP, 0.75 mM AA-rCTP; TriLink Biotechnologies). Synthesized aRNA was purified with the E.Z.N.A. MicroElute RNA Clean Up Kit (Omega Bio-Tek). Each individual aRNA test sample was labeled with Cy3, and a reference sample, made by pooling equimolar amounts of RNA from either all in-vitro or all in-vivo test samples, was labeled with Cy5. 5 \u00b5g of aRNA was dried down and dissolved in 50 mM carbonate buffer pH 8.5. Individual vials of Cy3/Cy5 from the mono-reactive dye packs were dissolved in 200 \u00b5l DMSO. To each sample, 10 \u00b5l of the appropriate CyDye dissolved in DMSO was added and the mixture was incubated for 1 h. Reactions were quenched with the addition of 5 \u00b5l 4 M hydroxylamine (Sigma-Aldrich). The labeled aRNA was purified with the E.Z.N.A. MicroElute RNA Clean Up Kit. The yields of aRNA and CyDye incorporation were measured on the NanoDrop ND-1000. 
For the in-vivo experiment, 3 samples failed the quality requirements and were discarded from further processing. Gene expression levels of the mouse samples were analyzed with a 12\u00d7135 k microarray containing 24,302 genes based on NCBI-GeneID. For both the in-vitro and the in-vivo experiment, each hybridization mixture was made up from a 1.1 \u00b5g test (Cy3) and 1.1 \u00b5g reference (Cy5) sample. Samples were dried and dissolved in 2 \u00b5l water. The hybridization cocktail was made according to the manufacturer's instructions, and 5.2 \u00b5l from this mix was added to each sample. The samples were incubated for 5 min at 65\u00b0C and 5 min at 42\u00b0C prior to loading. Hybridization samples were loaded onto the microarrays and hybridized for 18 hours at 42\u00b0C with the Roche NimbleGen Hybridization System 4. Afterwards, the slides were washed according to the Roche NimbleGen Arrays User's Guide \u2013 Gene Expression Arrays Version 5.0 and scanned in an ozone-free room with a DNA microarray scanner G2565CA (Agilent Technologies). Feature extraction was performed with NimbleScan v2.5 (Roche NimbleGen). The array data have been deposited in NCBI's Gene Expression Omnibus (GEO) and are accessible through GEO Series accession numbers GSE50930 for the in-vitro experiment and GSE51348 for the in-vivo experiment. The quality of the microarray data was assessed via multiple quality-control checks, i.e. visual inspection of the scans, testing against criteria for foreground and background signals, testing for consistent performance of the labeling dyes, checking for spatial effects through pseudo-color plots, and inspection of pre- and post-normalized data with box plots, ratio-intensity (RI) plots and PCA plots. 
All arrays passed the minimal criteria for quality assessment of the microarray data and were used in the analyses. Handling, analysis and visualization of all data was performed in R (http://cran.r-project.org/) using the Bioconductor (http://www.bioconductor.org/) packages limma and maanova. Between-array normalization was performed on only the normalized Cy3-sample data, through summarization of the intensity values of the probes in an NCBI-GeneID probe set. A mixed linear model was fitted on the in-vivo data to correct for the effects of the individual mice on the gene expression levels. These normalized expression values were used for the generation of log2 ratios of zero-dose points in time compared to the dose and time point zero. These log2 ratios were used to filter out the genes with a log2 FC >1 in the zero-dose range from the whole data set. The filtered data sets were used to make log2 ratios for the dose points per time point compared to the zero-dose point of that time point. These log2 ratios were tested for enrichment against 64 manually selected gene sets. Dose-response relations for each gene per gene set were generated from the filtered data sets and tested using Pearson correlation. Absolute correlations >0.8 were marked as relevant. Genes with no correlations >0.8 in any of the time points were removed from further analysis. For the in-vitro study, the oxygen level under which the MEFs were cultured was lowered from 21% to 3%. To fix the in-vivo constant experimental parameters, we performed a number of tests that determined the optimal biopsy punch diameter size and RNA isolation protocol. We defined an in-vitro and an in-vivo experimental design space by two variable parameters: UV dose and recovery-time-after-exposure. Before looking for the location of the \u2018sweet spot\u2019 for follow-up UV experiments in a design space defined by these variable parameters, we also fixed the constant parameters. In-vitro UV-C exposed MEFs, as well as in-vivo UV-B exposed mouse skin, were used. 
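The two filtering steps, removing genes that drift in the zero-dose series and keeping genes with a strong dose-response correlation, can be sketched as follows. The array layout, function names and thresholds are illustrative assumptions, not the authors' actual pipeline (which was run in R with limma and maanova).

```python
import numpy as np

def filter_time_drifters(log2_zero_dose, max_fc=1.0):
    """Flag genes whose zero-dose series drifts by more than `max_fc`
    log2 units from t = 0, i.e. genes that respond to time alone.
    log2_zero_dose: array of shape (genes, time points)."""
    drift = log2_zero_dose - log2_zero_dose[:, [0]]
    return np.all(np.abs(drift) <= max_fc, axis=1)      # True = keep

def dose_response_hits(log2_ratios, doses, min_r=0.8):
    """Mark genes showing a strong (anti-)correlation with dose at any
    time point. log2_ratios: shape (genes, time points, doses), log2
    ratios versus the zero-dose sample of the same time point."""
    genes, times, _ = log2_ratios.shape
    hits = np.zeros(genes, dtype=bool)
    for g in range(genes):
        for t in range(times):
            y = log2_ratios[g, t]
            if np.std(y) > 0 and abs(np.corrcoef(doses, y)[0, 1]) > min_r:
                hits[g] = True
    return hits
```

Genes failing either filter would then be dropped before the gene-set enrichment step.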
The range-finding approach that we suggest here aims to explore the design space at a high density without replicate sampling. The in-vitro range-finding experiment consisted of 48 samples from synchronized MEF cell cultures grown under a 3% oxygen level, after exposure to various doses of UV-C and different recovery periods. Earlier work showed a transient arrest response peaking around 12 hours and returning to normal 24 hours after irradiation. In addition, the apoptosis response is shown to occur from 8\u201312 hours, peaking around 16\u201320 hours and continuing to increase until 48 hours after irradiation. We enriched the early time points since gene expression often precedes these phenomena and because we observed a clear change at t\u200a=\u200a6 in previous gene expression studies. The in-vivo time range was chosen based on previous in-vivo experiments in p53+/+ (hairless) mice, which showed p53 accumulation and apoptosis at 6 and at 24 hours after UV exposure. The selection of in-vitro dose points was based on experiments that studied the effect of different in-vitro doses on p53 accumulation, phosphorylation and apoptosis. Based on earlier gene expression range studies we observed that 20 J/m2 was high (no cell growth). Lower doses were selected since gene expression is more sensitive than endpoint apoptosis/cell survival. The in-vivo dose point selections were based on the minimal erythemal dose for p53+/+ mice, which was 900 J/m2; accumulation of p53 is observed 24 hours after UV exposure at a dose of 300 J/m2. In the in-vitro experiment, unexposed cell growth causes an approximately six-fold increase in total RNA yield per culture plate between the first and last time point. Here, a clear dose-related effect was present: at low UV-C doses there is a slight additional increase in total RNA yield as compared to the unexposed control sample, and the increase in total RNA yield is reduced until the point of arrest at the highest UV-C dose. 
In both experiments the IVT amplification was started with the same amount of RNA, effectively normalizing the existing differences in total RNA yields. Three RNA samples in the in-vitro experiment did not amplify properly and were excluded from further analysis. A large number of differentially expressed genes (DEGs) is an indication of a perturbation range that induces non-specific responses. As such, we determined the DEGs in both experiments. Log2 fold change (FC) ratios of normalized gene-expression values compared to the dose and recovery time point zero were generated for each point in our experimental design spaces. In the in-vitro experiment a rising number of DEGs over time is to be expected, as cells continue to grow and as such their environment, to which they respond, changes. In the in-vivo experiment this phenomenon is less prominent, but still present. By screening the design spaces, all occurring cellular processes can in principle be evaluated. An advantageous and more focused way is to explore an experimental design space using prior knowledge. This can be done by determining whether the genes known to be involved in a cellular process of interest have a changed expression as compared to the onset of the experiment. To substantiate this search, we explored the design spaces by performing gene-set enrichment analyses for 64 predefined, expert-selected, cellular processes and pathways retrieved from various commercial or freely available databases such as KEGG, BioCarta, Metacore, Ingenuity, and so on. In the in-vitro experiment, for instance, a set of known genes (n\u200a=\u200a45) of the nucleotide excision repair (NER) pathway showed to be most active at one specific spot in the design space: 6 hours and a 2.25 J/m2 UV-C dose. The dose-response relation of each gene is an important indicator for the involvement of a gene in the response to an exposure. 
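A gene-set enrichment test of the kind used over the design space can be sketched as a one-sided hypergeometric test. The function names and the exact test are illustrative assumptions; the authors used dedicated gene-set testing packages rather than this code.

```python
from math import comb

def hypergeom_sf(k, M, n, N):
    """P(X >= k) when drawing N genes (the DEGs) from a universe of M
    genes containing n gene-set members: the enrichment p-value."""
    total = comb(M, N)
    upper = min(n, N)
    return sum(comb(n, i) * comb(M - n, N - i) for i in range(k, upper + 1)) / total

def enrichment_p(degs, gene_set, universe):
    """Enrichment of `gene_set` among the DEGs, relative to `universe`."""
    degs, universe = set(degs), set(universe)
    gene_set = set(gene_set) & universe
    k = len(degs & gene_set)                       # observed overlap
    return hypergeom_sf(k, len(universe), len(gene_set), len(degs & universe))
```

Evaluating this p-value for each gene set at every (dose, time) grid point gives the enrichment landscape from which candidate sweet spots are read.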
Each gene has a dose-response curve at each time point. The dose-response correlations were determined per time point for each gene in a gene set that was still present in the filtered data. (Anti-)correlations >0.8 indicate a strong relation between the exposure and the gene expression. Our pragmatic approach can be summarized as: 1) frame a specific biological question with an associated gene-set; 2) define a wide-ranged sample space and sampling scheme without replication; 3) obtain transcriptomics data and estimate DEGs; 4) restrict the experimental design space by eliminating areas with potentially stress-related and noise-related DEGs; 5) determine the experimental \u2018sweet spot(s)\u2019 for the biological question using the selected gene set in gene-set enrichment analysis and in dose-response correlation analysis for each point in the design space. We found the locations in the in-vivo experimental design space where NER occurs to be much later in time than those associated with extrinsic apoptosis. This does not mean that an experiment outside such ranges is not suitable for transcriptomics research. Often the argument of costs is employed as an excuse not to perform range-finding studies. The small-scale range-finding studies proposed here are relatively cheap, as no replications are used, and the costs for these types of experiments are still decreasing. In fact, this approach could in essence also be done by quantitative PCR using only the genes of interest. Vice versa, one could argue that an experiment executed at a less informative range will be more costly in the end. Of course, the number of samples used here might be equal to the size of a simple study and might not be cost-effective in comparison; again, a qPCR-based approach might be more interesting in this case. 
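Step 5, locating the sweet spot on the dose \u00d7 time grid, can be sketched as below. Combining an enrichment p-value grid with per-cell counts of strongly dose-correlated genes in exactly this way is our illustrative assumption, not the authors' published procedure.

```python
import numpy as np

def sweet_spot(enrich_p, corr_hits, alpha=0.05):
    """Return the (time, dose) grid cell where the gene set is enriched
    (p < alpha) and the largest number of its genes show a strong
    dose-response correlation; None if no cell is enriched.
    enrich_p, corr_hits: arrays of shape (times, doses)."""
    score = np.where(np.asarray(enrich_p) < alpha, np.asarray(corr_hits), -1)
    t, d = np.unravel_index(np.argmax(score), score.shape)
    return (int(t), int(d)) if score[t, d] >= 0 else None
```

Running this once per gene set yields one candidate spot per cellular process, matching the observation that each (sub-)process is active at its own location in the design space.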
Also, these small studies are often part of a larger research strategy, for which it will be useful to run small-scale range-finding studies upfront. Previous studies also recommended the use of range finding for the selection of experiment sampling points for transcriptomics. Given the clear nature of the results presented here and the common-sense rationale behind them, we anticipate that range-finding tests will become general practice in transcriptomics experimentation in the near future. To support this, we suggest that the results of all range-finding studies are deposited in world-wide data repositories, so other researchers can use these data to determine, for their specific cellular process of interest, where they should design their experiment. At the same time it could provide means by which researchers can see in which, perhaps unexpected, experimental setups, i.e. cell states or responses to perturbations, their favorite cellular process or gene is involved. Obviously, this only applies if the experimental setup is (near) identical. As such, range finding might even lead to more experimental standardization of specific research domains. Figure S1. Effect of UV exposure on mRNA yield. Relative mRNA yields for all in-vitro (A) and in-vivo (B) experimental samples compared to the mRNA yield of the t\u200a=\u200a0 sample in each experiment. The RNA yields from a previous in-vitro UV exposure experiment are presented as reference (A). (PDF) Figure S2. Number of differentially expressed genes over time. Profile plots over time of the number of DEGs for each dose in both experiments. Figure S3. Cellular process specific responses in the in-vitro experiment design space. 
The potential sweet spots in the in-vitro range-finding experiment diagrams for all 64 tested gene sets (same set-up as above). (PDF) Figure S4. Cellular process specific responses in the in-vivo experiment design space. The potential sweet spots in the in-vivo range-finding experiment diagrams for all 64 tested gene sets (same set-up as above). (PDF) Table S1. RNA isolation metrics. Detailed RNA sample information for the in-vitro experiment (A) and the in-vivo experiment (B). (PDF) Table S2. Differentially expressed genes. The numbers of differentially expressed genes found in both experiments, compared to time-point 0. Table S3. In-vitro examples of dose-response correlations of individual genes per time point. Dose-response correlations per time point of individual genes that belong to the gene sets KEGG nucleotide excision repair, KEGG cell cycle, KEGG extrinsic apoptosis and IARC p53 responsive elements, in the in-vitro experiment. (PDF) Table S4. In-vivo examples of dose-response correlations of individual genes per time point. Dose-response correlations per time point of individual genes that belong to the gene sets KEGG nucleotide excision repair, KEGG cell cycle, KEGG extrinsic apoptosis and IARC p53 responsive elements, in the in-vivo experiment. (PDF) Table S5. Overview of GeneIDs per gene set used for gene-set testing. (XLSX) Data S1. Multiple longitudinal biopsies sampling in individual mice. (PDF)"}
+{"text": "Dynamic gene-regulatory networks are complex, since the interaction patterns between their components mean that it is impossible to study parts of the network in separation. This holistic character of gene-regulatory networks poses a real challenge to any type of modelling. Graphical models are a class of models that connect the network with the conditional independence relationships between random variables. By interpreting these random variables as gene activities and the conditional independence relationships as functional non-relatedness, graphical models have been used to describe gene-regulatory networks. Whereas the literature has focused on static networks, most time-course experiments are designed in order to tease out temporal changes in the underlying network. It is typically reasonable to assume that changes in genomic networks are few, because biological systems tend to be stable. We introduce a new model for estimating slow changes in dynamic gene-regulatory networks, which is suitable for high-dimensional data, e.g. time-course microarray data. Our aim is to estimate a dynamically changing genomic network based on temporal activity measurements of the genes in the network. Our method is based on a penalized likelihood approach. A single microarray experiment provides a snapshot of the expression of many genes simultaneously under a particular condition. Gene expression is a temporal process, in which different genes are required and synthesized for different functions and under different conditions. Even under stable conditions, due to the continuous degradation of proteins, mRNA is transcribed continuously and new proteins are generated. This process is highly regulated. In many cases, the expression programme is initiated by the activation of a few transcription factors, which in turn activate many other genes that react in response to the newly arisen conditions. 
Transcription factors are proteins that bind to specific DNA sequences, thereby controlling the flow of genetic information from DNA to mRNA. For example, when cells are faced with a new external environment, such as starvation or infection, such a programme is initiated. In this paper, we propose a graphical model for describing temporal interaction patterns between genes. Graphical models explore conditional independence relationships between random variables. They can be divided into directed graphical models, e.g. Bayesian networks, and undirected graphical models. The class of Gaussian graphical models (GGM) has been particularly popular. The main advantage of GGMs is that the precision matrix, i.e. the inverse of the covariance matrix, can be used to \"read off\" the conditional independence relationships between the random variables. The literature on estimating the inverse covariance matrix goes back to the early work on covariance selection. Regulatory elements in genetic networks are highly structured. In order to guarantee an appropriate response to a particular change in the environment, most gene interactions are highly specific. The detailed molecular structure of genes and gene products is responsible for this level of specificity. Another biological requirement is that gene regulation is fast in reacting to changes in the environment. Heat shocks should almost instantaneously result in an adaptive response from the yeast cell. From this point of view signals should be able to travel fast through the gene regulatory network: the network should have a small-world property. Consequently, most gene regulatory networks are sparse small-world graphs. If the expression of the genes can be assumed to be normally distributed, then this means that most of the elements in the precision matrix are equal to zero. 
A standard approach in statistical modelling to identify zeroes in the precision matrix is the backward stepwise selection method, which starts by removing the least significant edges from a fully connected graph, and continues removing edges until all remaining edges are significant according to individual partial correlation tests. A conservative simultaneous testing procedure has also been proposed, and several authors have since introduced penalized models and estimation procedures for the precision matrix. In this paper we propose a model to estimate slowly changing dynamic graphs using changes in the inverse covariance matrix over time. The new method is suitable for studying high-dimensional time-course gene activity data. In order to solve the penalized maximum likelihood problem, we take advantage of an efficient log-determinant solver. The rest of this paper is organized as follows. The next section gives a description of our motivating example and a brief overview of Gaussian graphical models. In Methods, we describe the slowly changing dynamic network model and its estimation. In Results, we show the results of a simulation study and apply our method to the time-course T-cell dataset. Finally, we discuss the advantages of our method and point out further directions for development. T-cells are white blood cells that play a central role in cell-mediated immunity. Activation of T-cells occurs through the simultaneous engagement of the T-cell receptor and a costimulatory molecule, like CD28 or ICOS. Both are required for the production of an immune response. Although certain things are known about the structure of the T-cell pathway, its timing and its precise structure are still unknown. 
The signalling pathways downstream from activation usually engage the PI3K pathway and recruit PH-domain-containing signaling molecules, like PDK1, that are essential for the activation of PKC-\u03b8, and eventual IL-2 production. Two cDNA microarray experiments were performed to collect gene expression levels for analyzing T-cell activation. Human T-cells from the Jurkat cell line were cultured in a laboratory. When the culture reached a density of 10^6 cells/ml, the cells were treated with two stimuli, a calcium ionophore and the PKC activator phorbol ester PMA. This stimulation of the T-cells resulted in their activation. Gene expression levels for 88 genes were collected across 10 time points: the first one just before T-cell activation, at a nominal time point 0, and 9 time points at 2, 4, 6, 8, 18, 24, 32, 48, 72 hours after T-cell activation. In the first experiment the microarray was divided such that 34 sub-arrays were obtained. Each of these 34 sub-arrays contained the strands of the 88 genes under investigation. Strands are the complementary bases for the mRNA, which is the single-stranded transcribed copy of the DNA. In the second microarray experiment the microarray was divided into 10 sub-arrays. Each of these 10 sub-arrays contained the strands of the same 88 genes. Both microarray experiments used 10 different slides to collect the 10 temporal measurements. The experiment is described in detail elsewhere. Two further preprocessing steps were conducted to obtain the final dataset. At this point we assume that the 44 sub-array replicates are independent samples and that the temporal replicates across these sub-arrays are functionally dependent replicates. These two assumptions result in a dataset of 44 independent replicates across 57 genes and 10 time points. In this section, we describe the model that we adopt in order to study the underlying time-varying genomic network for the T-cell data. We argue that time-course datasets should be analyzed in a way that is sensitive to the underlying biology. 
If one does not use a model that is able to describe a time-varying network, there would have been little point in performing a time-course experiment. The bioinformatic tools should be adjusted to the needs of the biologist, who wants to infer particular aspects of the system. In this section, we first introduce a general graphical model. Secondly, we extend this model to the slowly changing graphical lasso model. Finally, we describe the computational details of performing penalized maximum likelihood. A graphical model is a tuple (G, \u2119), where G = (V, E) is a graph with edge set E that describes the conditional independence relationships of the probability measure \u2119 on the vertices V. This means that one can use the graph G to read off the functional relationships between the random variables associated with the vertices. In particular, for any triple (A, B, S) of disjoint subsets of V such that S separates A from B in G, we have for Y ~ \u2119 that Y_A is conditionally independent of Y_B given Y_S. If \u2119 is a multivariate Gaussian distribution, the tuple constitutes a Gaussian graphical model or covariance selection model. Let \u0398 = \u03a3^-1 be the precision or concentration matrix; then \u0398 contains all conditional independence information for the Gaussian graphical model. In particular, given the set E^c = {(i, j) | \u03b8_ij = 0}, a multivariate normal probability density f(y) can be factorized as a product of functions which do not jointly depend on y_i and y_j when (i, j) \u2208 E^c. Given a set of n observations y_1,..., y_n on the Gaussian graphical model, the log-likelihood can be written as l(\u03bc, \u0398) = (n/2) log det \u0398 - (n/2) tr(S\u0398) - (np/2) log(2\u03c0), where S is the sample covariance matrix about \u03bc. The MLE of \u03bc is the sample mean, irrespective of the number of observations and the underlying graph G. For the MLE of \u0398, the story is more complicated. For the complete graph, the maximum likelihood estimate is not uniquely defined when the number of observations is less than the number of vertices, n < |V|. This situation is really common for experiments to infer genomic networks. 
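The Gaussian graphical model log-likelihood can be evaluated directly from the precision matrix. This numpy sketch is our own illustration of the l(\u03bc, \u0398) expression above, not code from the paper.

```python
import numpy as np

def ggm_loglik(Y, mu, Theta):
    """Log-likelihood of the n rows of Y under a Gaussian graphical
    model with mean mu and precision matrix Theta:
    l = (n/2) * (log det Theta - tr(S Theta) - p * log(2*pi)),
    where S is the sample covariance taken about mu."""
    n, p = Y.shape
    D = Y - mu
    S = D.T @ D / n                       # covariance about mu, not the sample mean
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return 0.5 * n * (logdet - np.trace(S @ Theta) - p * np.log(2 * np.pi))
```

With Theta equal to the identity and mu = 0 this reduces to the sum of independent standard-normal log-densities, which is a convenient sanity check.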
On the other hand, a gene-regulatory network is typically sparse, which means that the number of links is small with respect to the possible number of connections. This may mean that \u0398 is estimable with respect to the underlying true sparse graph, G. The only problem is that we don't know which sparse graph that is. Therefore, we impose an additional sparsity constraint on the estimate. A graph G = (V, E) is said to be sparse if |E| = O(|V|), where |V| is the number of vertices and |E| is the number of edges; a graph G is said to be dense if |E| = O(|V|^2). By constraining the estimate to satisfy an l1 bound, ||\u0398||1 \u2264 c, where typically we do not penalize the diagonal of the precision matrix, we obtain a sparse estimate of G. Sparsity of the genomic network is not only our current best knowledge of the gene-regulatory system, but coincidentally it is also computationally useful. In this section, we introduce the concept of a dynamic Gaussian graphical model, which extends the static Gaussian graphical model that was introduced in the previous section. We first formally define a dynamic graph G = (V, E). Consider a set of genes \u0393 = {\u03b31,..., \u03b3p} and a set of time points \u03c4 = {t1,..., tT} where these genes were observed. We define the vertices of the dynamic graph as the Cartesian product of the genes and time points, V = \u0393 \u00d7 \u03c4. Therefore, a vertex in this graph is an element (g, t). The edges are some subset of the Cartesian product of vertices, E \u2282 V \u00d7 V. An element of E will be written as {(g, s), (h, t)}. With each vertex we associate a random variable Y_{g,s}, which represents the amount of gene activity of gene g at time s. 
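The l1-constrained precision estimate is the graphical lasso. The proximal-gradient sketch below is a didactic illustration of that idea, not the solver used in the paper; the fixed step size, iteration count and tolerance are ad hoc assumptions.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the proximal operator of t*|.|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def glasso_ista(S, lam, step=0.1, iters=500):
    """Proximal-gradient sketch of the graphical lasso:
    minimize -log det(Theta) + tr(S Theta) + lam * ||offdiag(Theta)||_1."""
    p = S.shape[0]
    Theta = np.eye(p)
    for _ in range(iters):
        grad = S - np.linalg.inv(Theta)        # gradient of the smooth part
        Z = Theta - step * grad
        T_new = soft(Z, step * lam)
        np.fill_diagonal(T_new, np.diag(Z))    # diagonal is not penalized
        T_new = (T_new + T_new.T) / 2
        if np.linalg.eigvalsh(T_new)[0] > 1e-8:   # keep the iterate positive definite
            Theta = T_new
        else:
            step /= 2
    return Theta
```

On data whose true precision matrix has structural zeros, the soft-thresholding step drives the corresponding estimates to (near) zero, which is exactly the sparse-graph recovery motivated above.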
With the above ingredients, we can now define a dynamic Gaussian graphical model as the tuple (G, \u2119), where G = (V, E) is a dynamic graph and \u2119 is a multivariate Gaussian distribution with mean \u03bc and inverse covariance matrix \u0398 that is compatible with the conditional independence relationships described in the edge set E. In principle, the ordering of the vertices is arbitrary. For interpretation purposes, it helps to sort the vertices by time points and within time points by genes. This results in a natural partition {N_l | l = 0,..., T - 1} of the inverse covariance matrix \u0398, where the block N_l collects the entries linking vertices l time points apart. As the full dynamic Gaussian graphical model is still heavily parameterized, we consider the autoregressive model of order k, which assumes that genes are conditionally uncorrelated for time lags larger than k. In practice, we typically consider k = 1 or k = 2, which from an interpretational point of view are most interesting. It is important to note that the autoregressive Gaussian graphical model is directly associated with a particular network structure G, which represents the conditional dependence graph of the random variables associated with the vertices of the graph. The two features we will assume particularly relevant for a gene network are its sparsity and its persistence. DNA, RNA and proteins are very specific molecules that are capable of interacting, typically, with only a very limited number of other molecules. This means that a genetic network is highly structured and selective, and, therefore, characterized by a high degree of sparsity. As genetic interactions depend very much on the basic molecular structure of their constitutive parts, the potential of various genes to interact will typically not change over time, unless particular regime changes in the cell affect its thermodynamic properties. Interactions in the dynamic network G therefore tend to persist over time. 
We will show in this subsection how we can incorporate these two ideas, sparsity and persistence of the network, in the inferential objective function by means of a penalized likelihood function. The main question this paper wants to answer is how to infer a meaningful biological dynamic network from noisy data on the nodes, such as, e.g., RNA-seq values or protein levels. Not only do we want to infer a sparse network G, but also one whose structure changes slowly between adjacent time points. Suppose we have n observations y1,..., yn, each coming from the autoregressive order-k dynamic Gaussian graphical model. In the T-cell experiment, we assume we have 44 observations from the 57 \u00d7 10 dimensional autoregressive Gaussian graphical model of order k = 1. Given two tuning parameters \u03c11 and \u03c12, we define a slowly changing dynamic network as the solution of the penalized maximum likelihood of the autoregressive order-k dynamic Gaussian graphical model, subject to the sparsity constraint A(\u0398) = ||\u0398||1 \u2264 \u03c11 and a persistence constraint C(\u0398) \u2264 \u03c12 on the changes in the network coefficients between consecutive time points. Whereas the first constraint induces a generally sparse dynamic network, the second constraint penalizes large changes in the network coefficients, thereby inducing a slowly changing or persistent network through time. Therefore, the penalty parameters are directly related to the zero and persistence structure of the estimate. To deal with the two inequality constraints, the method introduces two sets of slack variables, and the constrained optimization problem is rewritten in Lagrangian form. Solving the resulting penalized maximization problem is an active field of research in optimization; we use a log-determinant proximal point approximation method, in which the multipliers \u03bb1 and \u03bb2 are functions of \u03c11 and \u03c12, respectively. 
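A minimal sketch of this doubly penalized objective is given below. The lag-block layout (N_0, ..., N_k) and the choice to apply the persistence penalty to consecutive same-time diagonal blocks are our illustrative assumptions; the paper's actual formulation is solved with LogDetPPA rather than evaluated like this.

```python
import numpy as np

def ar_precision(blocks, T):
    """Assemble the (p*T) x (p*T) precision matrix of an order-k
    autoregressive dynamic GGM from lag blocks N_0, ..., N_k; blocks
    at lags greater than k are structurally zero."""
    p, k = blocks[0].shape[0], len(blocks) - 1
    Theta = np.zeros((p * T, p * T))
    for s in range(T):
        for t in range(T):
            l = abs(s - t)
            if l <= k:
                B = blocks[l] if s <= t else blocks[l].T
                Theta[s*p:(s+1)*p, t*p:(t+1)*p] = B
    return Theta

def penalized_objective(Theta, S, lam1, lam2, p, T):
    """-log det + trace term, plus an l1 sparsity penalty on the
    off-diagonal and a persistence penalty on changes between the
    diagonal blocks of consecutive time points."""
    sign, logdet = np.linalg.slogdet(Theta)
    smooth = -logdet + np.trace(S @ Theta)
    off = Theta - np.diag(np.diag(Theta))
    sparsity = lam1 * np.abs(off).sum()
    diag_blocks = [Theta[t*p:(t+1)*p, t*p:(t+1)*p] for t in range(T)]
    persistence = lam2 * sum(np.abs(diag_blocks[t+1] - diag_blocks[t]).sum()
                             for t in range(T - 1))
    return smooth + sparsity + persistence
```

When the same lag-0 block is reused at every time point, the persistence term vanishes, which is the sense in which small lam2-penalized changes encode a slowly changing network.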
In this format, the optimization can be solved directly by LogDetPPA. The non-negative tuning parameters \u03bb1 and \u03bb2 effectively determine the sparsity and the persistence of the network through time, respectively. Selecting these tuning parameters is a form of model selection. Depending on the interests of the user, which can be maximizing posterior model probability or minimizing prediction error, either a BIC-type criterion or an AIC-type criterion is proposed. We consider a grid of values and minimize information criterion scores such as AIC, AICc, and BIC. Then we use stability selection to select a more stable graph. Example: T-cell. We consider a subset of the T-cell data to illustrate the performance of the autoregressive Gaussian graphical model approach with a slowly changing network penalty. Only 4 genes and 2 time points were considered, with \u03bb1 = 0.01 and \u03bb2 = 0.1. In this section we also compare the dynamic network inference method with other methods proposed in the literature to estimate networks, such as GeneNet, ebdbNet, and the whole class of methods based on the graphical lasso, including the sglasso R-package and SparseTSCGM. We simulate data from a network along six time points that is affected by a regime change between time points 3 and 4, with n = 100 observations, and report the results of the methods described above. Due to the large number of links, GeneNet by default corrects for multiple testing, resulting in a very sparse graph. In fact, it merely detects seven edges throughout the whole time-course, when correcting at the 0.9 local FDR rate. For the simulation study, we consider four different scenarios with a varying number of genes p \u2208 {20, 40, 60, 80}, each with n = 50 observations across T = 3 time points. For each scenario we simulate 100 datasets from a multivariate normal distribution with \u03bc equal to zero and \u03a3 equal to the inverse of a precision matrix \u0398. 
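Tuning-parameter selection over a grid can be sketched with a BIC-type score. Counting each nonzero upper-triangular entry of the precision matrix as one free parameter is a simplifying assumption, and `select_model` is a hypothetical helper, not part of the authors' software.

```python
import numpy as np

def bic_score(Y, Theta):
    """BIC for a zero-mean GGM: -2 * loglik + log(n) * (#edges), where
    edges are the nonzero upper off-diagonal entries of Theta."""
    n, p = Y.shape
    S = Y.T @ Y / n
    sign, logdet = np.linalg.slogdet(Theta)
    loglik = 0.5 * n * (logdet - np.trace(S @ Theta) - p * np.log(2 * np.pi))
    edges = np.count_nonzero(np.triu(Theta, 1))
    return -2.0 * loglik + np.log(n) * edges

def select_model(Y, candidates):
    """Return the candidate precision matrix with the lowest BIC, e.g.
    the fits obtained over a grid of (lambda1, lambda2) values."""
    return min(candidates, key=lambda Theta: bic_score(Y, Theta))
```

An AIC-type score would replace log(n) with 2 (AICc adds a further small-sample correction), which tends to select denser graphs than BIC.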
The structure of the graph slowly changes across time and observations are conditionally independent for time lags greater than one. Note that in all four scenarios the number of replicates n is fewer than the number of random variables pT. This simulation study shows the performance of the autoregressive Gaussian graphical model of order one, reporting the F1 score over 100 simulations. We use the corrected and normal AIC, as well as the BIC, to select the tuning parameters in the models. The corrected AIC adds an additional penalty to account for the small number of observations. These results show that the slowly changing autoregressive Gaussian graphical model is very reliable even with small numbers of observations, and that it can be used for real applications when few changes between time points are present, using any type of model selection method. We apply the autoregressive Gaussian graphical model of order one to the human T-cell dataset. We assume that genes which are two time points apart, i.e. Ys,t and Ys,t+2, are conditionally independent given the intervening observations. This means that the edge set for networks at lag 2, i.e. N2, is an empty set. The bottom right figure shows the changes between the two time points. It shows, for example, that initially MCL1, a pro-survival BCL2 family member, is a highly connected node in the T-cell network. It is known that SCF(FBW7) regulates cellular apoptosis by targeting MCL1 for ubiquitylation and destruction, which is consistent with this finding. Many time-course genomic experiments are performed in order to discover certain regime changes that may be taking place during that period. Under these circumstances, representing genomic interactions by means of a static graph can be misleading. Certainly, it would fail to detect any changes in the topology of the network. 
We propose a sparse dynamic graphical model to infer the underlying slowly changing network. One of the major contributions is that this methodology is capable of providing fast inference about the dynamic network structure in moderately large networks; until now, even sparse static inference could be painstakingly slow and would typically lack obvious interpretation. We applied the method to a human T-cell dataset to study the developmental aspects of the sparse genomic interactions. One result, backed up by recent research, is that MCL1 is targeted early on and thereby loses its connections to the rest of the genomic network. Once a graph has been estimated and changes have been evaluated, other questions on how to analyze time-evolving networks might be posed. Does the network retain certain graph properties as it grows and evolves? Does the graph undergo a phase transition, in which its behaviour suddenly changes? In answering these questions it is of interest to have a diagnostic tool for tracking graph properties and noting anomalies and graph characteristics of interest; a useful tool for this purpose is ADAGE. The authors declare that they have no competing interests. Both authors have contributed equally to this manuscript."}
+{"text": "Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high-frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed with a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, so the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter-developed cells lagged behind that in cells treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis.
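The notion of treating each time point's transcriptome as a single phenotype and splitting the series at large expression jumps can be sketched as follows. The Euclidean distance measure and the gap threshold are illustrative assumptions, not the authors' exact procedure, and the toy series is invented.

```python
import math

def group_time_points(profiles, gap_factor=2.0):
    """Split an ordered series of transcriptome vectors into groups wherever the
    distance between consecutive time points greatly exceeds the median step."""
    steps = [math.dist(a, b) for a, b in zip(profiles, profiles[1:])]
    median_step = sorted(steps)[len(steps) // 2]
    groups, current = [], [0]
    for i, step in enumerate(steps, start=1):
        if step > gap_factor * median_step:  # a "gap": many genes change rapidly
            groups.append(current)
            current = []
        current.append(i)
    groups.append(current)
    return groups

# Toy series of 2-gene "transcriptomes": two gradual regimes and one large shift.
series = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.0)]
groups = group_time_points(series)
```

Under this sketch, the returned groups correspond to stretches of gradual molecular change, and the boundaries between them to the rapid phenotype shifts described above.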
Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs, that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression. The online version of this article (doi:10.1186/s12864-015-1491-7) contains supplementary material, which is available to authorized users. D. discoideum exhibits a developmental program unique among model organisms, and the complete time series data sets are accessioned at the NCBI Gene Expression Omnibus. Further analyses of these data may focus on identifying additional co-regulated transcriptional regulators and target genes. The enhanced temporal resolution of these data provides more informative transcription profiles than previous studies; clustering of genes by expression pattern may yield improved hypotheses regarding shared regulation and function. As future studies examine the phenotypic consequences of transcription factor knockout mutations, as well as the binding specificity of important transcriptional regulators, these data will serve as a critical reference point for inferring regulatory interactions. To facilitate exploration by the larger community, user-friendly analysis tools for these data are available at dictyExpress; the data may also be viewed and analyzed using the dictyExpress toolkit [www.dictyExpress.org]. The additional material is provided as Additional files."}
+{"text": "Evidence is accumulating that perturbation of early life microbial colonization of the gut induces long-lasting adverse health effects in individuals. Understanding the mechanisms behind these effects will facilitate modulation of intestinal health. The objective of this study was to identify biological processes involved in these long lasting effects and the (molecular) factors that regulate them. We used an antibiotic and the same antibiotic in combination with stress on piglets as an early life perturbation. Then we used host gene expression data from the gut (jejunum) tissue and community-scale analysis of gut microbiota from the same location of the gut, at three different time-points to gauge the reaction to the perturbation. We analysed the data by a new combination of existing tools. First, we analysed the data in two dimensions, treatment and time, with quadratic regression analysis. Then we applied network-based data integration approaches to find correlations between host gene expression and the resident microbial species.The use of a new combination of data analysis tools allowed us to identify significant long-lasting differences in jejunal gene expression patterns resulting from the early life perturbations. In addition, we were able to identify potential key gene regulators (hubs) for these long-lasting effects. Furthermore, data integration also showed that there are a handful of bacterial groups that were associated with temporal changes in gene expression.The applied systems-biology approach allowed us to take the first steps in unravelling biological processes involved in long lasting effects in the gut due to early life perturbations. 
The observed data are consistent with the hypothesis that these long-lasting effects are due to differences in the programming of the gut immune system as induced by the temporary early life changes in the composition and/or diversity of microbiota in the gut. The online version of this article (doi:10.1186/s12864-015-1733-8) contains supplementary material, which is available to authorized users. Evidence is accumulating that perturbations of the early life colonization of the gastro-intestinal (GI) tract by microbes induce long-lasting health effects in individuals [2]. Gut microbiota play an important role in modulating diverse gastrointestinal functions, ranging from enzymatic digestion to modulation of immune responses [10]. The effect of the use of antibiotics on the physiology of the host is believed to be due to the primary effect of antibiotics on the loss of, or change in, bacterial (sub)populations, especially in the GI tract. The first objective of this study was to develop a workflow that can be used to analyse biological data in two dimensions simultaneously, over time and between treatments. The second objective was to apply this workflow to two types of gut-related data measured in an experiment in which pigs were exposed to an early life perturbation, followed by effect measurements at three different time-points later in life. Using these methods we endeavour to identify gut system components that contribute to the induction and maintenance of long-lasting effects of early life perturbations, and to find major host- and microbe-related components that propagate or regulate these long-term effects; such components may form potential targets for modulating early life events that affect later life immunological performance. We used piglets as a model, with exposure to an antibiotic and/or stress on day four after birth as the perturbations.
We measured whole-genome gene expression profiles of intestinal tissues and generated community-scale data on the composition and diversity of microbiota at three different time-points, taken during different stages of the life-span of pigs: neonate, adolescent and full-grown adult. In order to take into account the extreme changes in the physiology and morphology of the animal, we simultaneously studied the effect of the treatment and of time. Once the gene expression and microbiota data were analysed separately, we integrated both to obtain information on their possible interactions. The animal experiment consisted of one control group and two treatment groups, each consisting of 48 piglets derived from 16 different sows (TOPIGS20 (GY \u00d7 NL)). Each litter contained 4 piglets of each treatment and control group. Litter-mates stayed with their sow until weaning at day 25. After weaning, the same number of piglets of each treatment and control group were housed together in pens; all pens were located in the same compartment. The first treatment group (Tr1) was given a dose of Tulathromycin on the 4th day after birth and then left undisturbed until the time of tissue sampling after sacrifice. The same dose of the antibiotic was given to the second treatment group (Tr2) on the 4th day after birth, but these piglets were also subjected to stressful conditions on the same day. The stress was in the form of handling of the piglets, which is common practice in intensive pig husbandry systems. The control group (Ctrl) was not disturbed for the entirety of the experiment until the time of sacrifice for sampling. Sampling was done at three time-points: day 8 after birth, day 55 and day 176. On each of these days, 16 piglets from each treatment group, derived from 16 different sows, were sacrificed and samples were collected for microarray and microbiota analysis. Further details of the experiment have been described elsewhere.
For transcriptome analysis, jejunal tissue scrapings were taken and RNA was extracted from the samples for microarray analysis as described in [28]. Background correction and quantile normalisation were performed on the microarray data (GSE53170) using the R package LIMMA from Bioconductor. Probes below a set percentile of intensities were removed, which resulted in 25,915 genes from 66 microarray samples. Details of the applied analysis pipeline are given in Additional file. To identify genes with significant expression profile differences over time between the experimental groups, we used maSigPro from Bioconductor; from the resulting lists we derived the genes unique to Tr1vsCtrl (OnlyTr1), the genes unique to Tr2vsCtrl (OnlyTr2), and the overlapping genes (Tr1&Tr2). GO (Gene Ontology) enrichment analysis was performed with topGO, with further information on gene function obtained from www.genecards.org. Functional interaction networks among groups of genes selected from the data were built using the Cytoscape Reactome FI plugin. Microbial populations from the jejunum were analysed using the PITChip.2 microarray, with data processing via http://microbiome.github.io/ and a MySQL database, as described by Rajilic-Stojanovic. The mixOmics R package was used to integrate the microbiota and gene expression data into three correlation networks, OnlyTr1, OnlyTr2 and Tr1&Tr2; the first two networks were based on six variables (treatment and control at three time-points) and the third one on nine variables (two treatments and the control at three time-points). The resulting undirected network connects the microbial groups and the genes that have absolute correlation values above 0.8. A positive high edge weight hints at a positive regulatory relationship, while a low (negative) weight indicates a possible repression relationship. Networks were visualised using Cytoscape 3.1.0. Enrichment analysis was done on the gene neighbours of the bacterial groups via topGO; Fisher\u2019s test was applied to the results, and only terms with p-values below 0.01 were further analysed.
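The correlation-threshold step of the integration can be sketched as follows. This is a pure-Python stand-in for the mixOmics-based analysis; the gene and microbe names and profiles are illustrative only.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_edges(genes, microbes, threshold=0.8):
    """Return (gene, microbe, r) edges with |r| above the threshold.
    Positive weights hint at co-regulation, negative at possible repression."""
    edges = []
    for g, gv in genes.items():
        for m, mv in microbes.items():
            r = pearson(gv, mv)
            if abs(r) > threshold:
                edges.append((g, m, round(r, 3)))
    return edges

# Toy profiles over six conditions (e.g. treatment/control at three time-points).
genes = {"geneA": [1, 2, 3, 4, 5, 6], "geneB": [2, 1, 4, 2, 5, 1]}
microbes = {"Faecalibacterium": [6, 5, 4, 3, 2, 1]}
edges = correlation_edges(genes, microbes)
```

The retained edges form the undirected gene-microbe network that is then laid out in Cytoscape.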
The functionality of the bacterial groups was determined with the help of experts in the field. For gene expression analysis, the temporal profiles of the samples from each treatment group were compared with those of the control group. We used quadratic regression analysis to obtain two gene lists: one for Tr1vsCtrl (1643 genes) and another one for Tr2vsCtrl (1562 genes); both lists are provided in Additional file. The OnlyTr1 and OnlyTr2 gene lists contain genes with long-lasting differences between treatment and control, while the Tr1&Tr2 list contains 91\u00a0% of the genes that have significant long-term differences. The three gene lists, OnlyTr1, Tr1&Tr2 and OnlyTr2, thus consist of genes whose expression profiles over time are significantly different in the treatment compared to the control group, and these lists were used for the follow-up analyses. On these three lists, functional analysis was done using GO enrichment analysis for biological processes with topGO; a summary of the results is presented in Fig. In order to verify the results of the topGO analysis and to get insight into networks of genes potentially involved in the induction and maintenance of the long-lasting effects, a network-based analysis was used: the Reactome Functional Interaction (FI) database was used to build three functional interaction networks, one for each list of genes. In each of the explored gene sets, about 30\u00a0% of the genes were represented in the FI networks. More than 50\u00a0% of the nodes had significant long-term differences from the control, with the Tr1&Tr2 FI network having the biggest fraction (94\u00a0%). Within each of the FI networks we identified several topological modules with significant biological function: 9 modules in the OnlyTr1 network, 10 modules in the Tr1&Tr2 network, and 6 modules in the OnlyTr2 network; the modules are depicted by different colours in Fig., and information on the various topological parameters of the three networks can be found in the Additional file. The results from topGO show that the OnlyTr1 and Tr1&Tr2 genes are mainly involved in innate and adaptive immune processes, whereas the OnlyTr2 list is dominated by genes involved in developmental processes. Compared to the topGO analysis, the enrichment analysis from Reactome FI provided more detailed functional information: it showed that the genes in all three lists are involved in immune processes, with dominance of adaptive immune processes in OnlyTr1 and innate immune processes in OnlyTr2, while the overlapping gene list Tr1&Tr2 included immune signalling functions, interferon and interleukin genes, which are speculated to contribute to both types of immunity [62]. The network analysis provided insight into genes with potential high-level regulatory activity, i.e. genes in the network with a high number of connections (edges). A list of these potential high-level regulators, or hubs, and a gist of their known biological functions is given in Table. In the OnlyTr1 FI network, all the hubs have long-term differences in expression between the treatment and control based on the \u03b23 regression coefficient. The hubs in all three networks can be roughly assigned to three functional categories: immune, cell cycle or proliferation, and genes involved in ubiquitination. Two genes are not part of these three clusters, MAPK14 and RPS3, the latter being an important component of the ribosome; MAP kinases act as an integration point for multiple biochemical signals, are involved in a wide variety of cellular processes such as proliferation, differentiation and development, and are activated by environmental stresses or cytokines. All the hubs in OnlyTr1 are either immune or cell cycle/proliferation related genes, which is reflected in the positioning of these genes in the network.
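The quadratic screening of temporal profiles can be sketched as follows. This is a minimal numpy illustration of the idea behind the regression step, not the maSigPro package itself, and the toy expression values are invented.

```python
import numpy as np

def quadratic_profile_difference(times, treatment, control):
    """Fit y = b2*t^2 + b1*t + b0 to each group and return the difference of
    the fitted coefficients (highest degree first); large differences flag
    genes whose temporal profile diverges between treatment and control."""
    return np.polyfit(times, treatment, 2) - np.polyfit(times, control, 2)

times = np.array([8.0, 55.0, 176.0])    # the three sampling days
control = np.array([1.0, 1.0, 1.0])     # toy gene: flat in the control
treatment = np.array([1.0, 2.0, 6.0])   # curving upward in the treatment
diff = quadratic_profile_difference(times, treatment, control)
```

In the actual analysis the significance of such coefficient differences is assessed per gene, yielding the Tr1vsCtrl and Tr2vsCtrl lists described above.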
They are found in the two modules related to immunity, T-cell co-stimulation and leukocyte migration, and in the module for proteins that perform the conjugation and ligation of ubiquitin to other proteins. In the Tr1&Tr2 network, these hubs are in the module for protein ubiquitination, which has the highest number of hubs; other hubs in this network are from four modules as shown in Fig. In the OnlyTr2 network, there are two hubs, UBA52 and STAT1. The relative contribution of the microbiota was filtered as described above. Some microbial groups, such as Brachyspira, show different contributions in different conditions, but some groups, like Bacillus et rel., show major differences only at one time point and one treatment (Additional file). By performing statistical integration of both the microbiota and host gene expression datasets (mixOmics R package), we tracked changes in jejunal gene expression that follow the changes in microbial populations as determined at the same location in the gut. The changes reflect the similarity or dissimilarity of a pair of data points across time. The resulting similarity matrix was used to connect the microbial groups and the expression of genes into a network. The first network, OnlyTr1, was built with 498 genes in 6 conditions and 46 level-2 microbial groups in the same conditions. The second network (Tr1&Tr2) was built with 1145 genes and the same 46 microbial groups; the conditions were the control and both Tr1 and Tr2 at the 3 time-points, giving rise to 9 conditions. The third network (OnlyTr2) again had 6 conditions (Ctrl and Tr2 at 3 time-points), with 417 genes and the same 46 bacterial groups. The networks represent the correlation between the microbiota and the gene expression data and are shown in Fig. Three bacterial groups had a consistently large number of gene neighbours: Eubacterium et rel., Faecalibacterium et rel. and Ruminococcus bromii.
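Hub identification by node degree can be sketched as follows. The edge list here is invented for illustration; the real hubs were read off the Reactome FI networks.

```python
from collections import defaultdict

def find_hubs(edges, top=2):
    """Count the degree of every node in an undirected edge list and return
    the `top` most-connected nodes as (node, degree) pairs, ties broken by name."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree.items(), key=lambda kv: (-kv[1], kv[0]))[:top]

# Toy functional-interaction network; the gene names are illustrative only.
edges = [("MAPK14", "RPS3"), ("MAPK14", "STAT1"), ("MAPK14", "UBA52"),
         ("UBA52", "STAT1"), ("UBA52", "RPS3")]
hubs = find_hubs(edges)
```

A degree cut-off applied to such a ranking gives the hub lists summarized in the table.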
These three groups also share several gene neighbours in all three networks and are quite central in the networks, as indicated by the network parameters; the absolute correlation between these groups and their gene neighbours was 0.8 to 0.9, as is apparent from the visualization of the networks in Fig. In each correlation network, 20\u00a0% of the genes are also found in the FI networks from gene expression data alone. A separate topGO analysis shows that the genes in each of the correlation networks are enriched for GO terms related to four broad categories of biological processes: metabolic processes, transport of substances, translation, and immune processes. In the OnlyTr1 and Tr1&Tr2 correlation networks the genes are mostly negatively correlated with the microbial groups, but in the OnlyTr2 correlation network these genes are mostly (60\u00a0%) positively correlated with the microbial groups. More than 50\u00a0% of the genes in the three networks have significant long-term differences from the control. In this paper we describe a workflow and a set of methods capable of analysing and integrating different types of data in two dimensions, treatment and time. These methods were used for the identification of gut system components that contribute to the induction and maintenance of long-lasting effects in the GI tract as induced by perturbations at a young age. To this end we combined multiple freely available tools and advanced statistical analytical tools. We used temporal gene expression patterns, obtained using microarray technology, to identify genes and biological processes that are affected over time by the early life perturbation with antibiotic and/or stress treatments.
In addition, we used community-scale analysis of gut microbiota from the same location of the gut to identify changes in microbiota profiles over time due to the perturbations. With this approach we have taken the first steps in unravelling the genomic and microbial networks that contribute to long-lasting responses to early life perturbations in the gut. The results reveal that there are significant long-lasting differences in the system of the GI tract between the different perturbation groups, mainly at the gene expression level. The data presented are consistent with the hypothesis that the observed long-lasting effects on gene expression are most probably due to differences in the programming of the gut immune system as induced by the temporary early life changes in the composition and/or diversity of microbiota in the gut. Furthermore, we were able to identify potential key regulator genes (hubs) for the long-lasting effects, and we have identified microbial groups that are potentially associated with the observed changes in intestinal gene expression. In the following sections of this Discussion we explain the rationale behind choosing these methods for our particular biological question. Intestinal gene expression and microbiota profiles were generated at three different time-points: day 8 (neonate), day 55 (young adult), and day 176 (adult).
Day 8 was chosen because it is known that immediately after perturbations such as those used here, changes occur in the pattern of microbial colonization of the gut as well as in GI tract mucosal gene expression [28, 34]. Intestinal bacterial composition and gene expression patterns change over the lifetime of an animal due to changes in nutritional, environmental and physiological factors [66, 67]. For the analysis of data derived from three time-points and two treatments, a regression analysis is better suited than the pairwise comparison with linear models, PCOA or hierarchical clustering that was done in earlier work by Schokker et al. [30]. In OnlyTr1 and OnlyTr2, more than half of the nodes have a significant long-lasting expression pattern difference in the treatment versus the control; this holds for most of the nodes in the Tr1&Tr2 FI network, which is representative of the action of the antibiotic, revealing that most of the long-lasting effects are due to the antibiotic in the treatment. These genes with long-lasting differences are spread over all the modules, which indicates that all the biological processes in the network are affected. The animal experiment was approved by the ethical committee \u201cLelystad\u201d (2011077.b), in accordance with the Dutch regulations on animal experiments."}
+{"text": "We present a detailed characterization of Escherichia coli growth and starvation over a time-course lasting two weeks. We have measured multiple cellular components, including RNA and proteins at deep genomic coverage, as well as lipid modifications and flux through central metabolism. Our study focuses on the physiological response of E. coli in stationary phase as a result of being starved for glucose, not on the genetic adaptation of E. coli to utilize alternative nutrients. In our analysis, we have taken advantage of the temporal correlations within and among RNA and protein abundances to identify systematic trends in gene regulation. Specifically, we have developed a general computational strategy for classifying expression-profile time courses into distinct categories in an unbiased manner. We have also developed, from dynamic models of gene expression, a framework to characterize protein degradation patterns based on the observed temporal relationships between mRNA and protein abundances. By comparing and contrasting our transcriptomic and proteomic data, we have identified several broad physiological trends in the E. coli starvation response. Strikingly, mRNAs are widely down-regulated in response to glucose starvation, presumably as a strategy for reducing new protein synthesis. By contrast, protein abundances display more varied responses. The abundances of many proteins involved in energy-intensive processes mirror the corresponding mRNA profiles, while proteins involved in nutrient metabolism remain abundant even though their corresponding mRNAs are down-regulated. How do bacteria regulate their cellular physiology in response to starvation? Here, we present a detailed characterization of E. coli: we exhaustively monitored changes in cellular components, such as RNA and protein abundances, over time, and subsequently compared and contrasted these measurements using novel computational approaches we developed specifically for analyzing gene-expression time-course data.
Using these approaches, we could identify systematic trends in the E. coli starvation response. In particular, we found that cells systematically limit mRNA and protein production, degrade proteins involved in energy-intensive processes, and maintain or increase the amount of proteins involved in energy production. Thus, the bacteria assume a cellular state in which their ongoing energy use is limited while they are poised to take advantage of any nutrients that may become available. Bacteria frequently experience starvation conditions in their natural environments, yet how they modify their physiology in response to these conditions remains poorly understood. Here, we performed a detailed, two-week starvation experiment in E. coli, following the cells as they transition from exponential growth to starvation and eventually cease dividing as the limiting glucose becomes exhausted; cultures were grown in minimal medium (DM) with limiting glucose. For graphs of OD600 and colony-forming units (CFU), cultures were grown separately from the main batches used for harvesting cells, but under identical conditions. The OD600 (absorbance at 600 nm) of a sample removed from the culture at each time point was measured relative to a sterile DM500 glucose blank. These samples were also diluted in sterile saline and plated on DM agar supplemented with 0.2 g/l glucose. After incubation at 37\u00b0C for 24 h, colonies on these plates were counted to determine CFUs. Total RNA was isolated from cell pellets using the RNAsnap method and resuspended in H2O. Each sample was then DNase treated and purified using the on-column method for the Zymo Clean & Concentrator-25 (Zymo Research). RNA concentrations were determined throughout the purification using a Qubit 2.0 fluorometer (Life Technologies). DNase-treated total RNA (\u22645 \u03bcg) was then processed with the Gram-negative bacteria Ribo-Zero rRNA removal kit (Epicentre). After rRNA depletion, each sample was ethanol precipitated and resuspended in H2O again. A fraction of the RNA was then fragmented to ~250 bp using the NEBNext Magnesium RNA Fragmentation Module (New England Biolabs). After fragmentation, RNA was ethanol precipitated, resuspended in 20 \u03bcl ultra-pure water, and phosphorylated using T4 PNK (New England Biolabs). After another ethanol precipitation cleanup step, sequencing library preparation was performed using the NEBNext Small RNA Library Prep Set for Illumina, Multiplex Compatible (New England Biolabs). Samples were ethanol precipitated again after library preparation and separated on a 4% agarose gel. All DNA fragments greater than 100 bp were excised from the gel and isolated using the Zymoclean Gel DNA Recovery kit (Zymo Research). Libraries were sequenced using an Illumina HiSeq 2500 at the Genomic Sequencing and Analysis Facility (GSAF) at the University of Texas at Austin to generate 2\u00d7101-base paired-end reads. For RNA-seq analysis, we implemented a custom analysis pipeline using the REL606 Escherichia coli B genome (GenBank:NC_012967.1) as the reference sequence; the raw data and the full pipeline are available at https://github.com/wilkelab/AG3C_starvation_tc_RNAseq. For proteomics, E. coli cell pellets were resuspended in 50 mM Tris-HCl pH 8.0, 10 mM DTT. 2,2,2-Trifluoroethanol (Sigma) was added to 50% (v/v) final concentration and samples were incubated at 56\u00b0C for 45 min. Following incubation, iodoacetamide was added to a concentration of 25 mM and samples were incubated at room temperature in the dark for 30 min. Samples were diluted 10-fold with 2 mM CaCl2, 50 mM Tris-HCl, pH 8.0, and digested with trypsin (Pierce) at 37\u00b0C for 5 h. Digestion was quenched by adding formic acid to 1% (v/v). Tryptic peptides were filtered through Amicon Ultra 30 kD spin filtration columns and bound, washed, and eluted from HyperSep C18 SpinTips (Thermo Scientific).
Eluted peptides were dried by speed-vac and resuspended in Buffer C for analysis by LC-MS/MS. For LC-MS/MS analysis, peptides were subjected to separation by C18 reverse phase chromatography on a Dionex Ultimate 3000 RSLCnano UHPLC system (Thermo Scientific). Peptides were loaded onto an Acclaim C18 PepMap RSLC column and eluted using a 5\u201340% acetonitrile gradient over 250 min at a 300 nl/min flow rate. Eluted peptides were directly injected into an Orbitrap Elite mass spectrometer (Thermo Scientific) by nano-electrospray and subjected to data-dependent tandem mass spectrometry, with full precursor ion scans (MS1) collected at 60,000 resolution. Monoisotopic precursor selection and charge-state screening were enabled, with ions of charge >+1 selected for collision-induced dissociation (CID). Up to 20 fragmentation scans (MS2) were collected per MS1. Dynamic exclusion was active, with 45 s exclusion for ions selected twice within a 30 s window. Spectra were searched against an E. coli strain REL606 protein sequence database including common contaminant proteins. Fully-tryptic peptides were considered, with up to two missed cleavages. Tolerances of 10 ppm (MS1) and 0.5 Da (MS2), carbamidomethylation of cysteine as a static modification, and oxidized methionine as a dynamic modification were used. High-confidence peptide-spectral matches (PSMs) were filtered at <1% false discovery rate as determined by Percolator. Flux ratios were obtained from the samples grown with 13C-labeled glucose, using methods previously described [29]. Cell samples were derivatized with N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide with 1% tert-butyldimethyl-chlorosilane (v/v); vials were capped and baked at 85\u00b0C for 2 h, and samples were analyzed within 2 days of derivatization. Analysis of derivatized samples was performed on a Shimadzu QP2010 Plus GC-MS with autosampler.
The GC-MS protocol included: 1 mL of sample injected in 1:10 split mode at 230\u00b0C; an oven gradient of 160\u00b0C for 1 min, a ramp to 310\u00b0C at 20\u00b0C/min, and a hold at 310\u00b0C for 0.5 min; the flow rate was 1 mL/min in helium. A total of five runs were performed for each sample: a blank injection of DMF to waste, a blank injection of DMF to the column, and three technical replicates of each vial. Flux inference was performed using the FiatFlux software as described [30]. Lipid A and phospholipids were isolated from bacterial pellets containing 3\u20139 \u00d7 10^9 cells. Pellets were resuspended in 5 ml of 1:2:0.8 chloroform:methanol:water for 20 min and spun at 10,000\u00d7g for 10 minutes. Pellets containing lipid A were further purified by the Bligh/Dyer method, and phospholipid preparation and mass analysis were performed, as previously described. We analyzed raw counts from the proteomics and RNA-seq experiments as follows. Initially, proteins with low counts (<10) over the entire duration of the time course were filtered out. Each time point was then normalized to the read depth. Only proteins with a fold change of \u22651.5 were considered for further analysis. Protein profiles were then normalized to the maximum value of the given protein's time course. To estimate absolute protein abundance we made use of the APEX normalization method. To analyze relative RNA levels, raw RNA read counts per gene (ignoring rRNAs) were normalized within each sample using DESeq; genes differentially expressed (p < 0.05) at at least one time point were retained for further analysis. To compare absolute RNA abundances within a single time point, raw RNA counts were normalized by gene length. Finally, normalized RNA and protein profiles, both relative and absolute, were averaged across all three biological replicates. To identify groups of co-regulated profiles we used the k-means clustering algorithm, with the number of protein clusters set to 25 and the number of RNA clusters set to 15.
Clustering of protein profiles was performed using the python library scipy. To compare relative protein profiles with the integral of their relative transcript levels, we integrated each of the transcript profiles, from the initial time to each additional time point, using the trapezoidal method implemented by the python library numpy. We used a piecewise continuous curve to fit both RNA and protein profiles. This curve was defined by seven free parameters: four free time parameters and three free amplitude parameters. To fit the profiles we used a custom implementation of a differential evolution (DE) algorithm, minimizing for each profile i an error of the form Ei = Σj [(di(tj) − si(tj)) / σi(tj)]², where di(tj), σi(tj), and si(tj) are the average of all experimental repeats of protein (or mRNA) i at time tj, the standard deviation of the experiments of the protein (or mRNA) i at time tj, and the average of the ensemble simulations i at time tj, respectively. Scaling by the standard deviation places a relatively lower weight on data points with relatively larger errors for a given protein or mRNA. We found t1 to reliably represent the time to first inflection, the sum of t2, t3, and t4 to be a decent proxy for how long it took an RNA/protein to reach a steady state after entering a starved state, and we could reliably sort the behavior into four categories based upon the amplitude parameters. The four categories were up-regulated, down-regulated, transiently up-regulated, and transiently down-regulated. Genes that were up (or down) regulated were those genes that increased (or decreased) at some point during the time course and did not decrease (or increase) at some later time. Genes that were transiently up (or down) regulated were those genes that increased (or decreased) at some point during the time course but decreased (or increased) at some later time. The sorting into categories was aided by our estimate of the distribution of parameters that allowed for a good fit within the population of fits.
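The σ-weighted fitting described above can be sketched with a minimal differential-evolution loop. The step-shaped model, bounds and tolerances below are illustrative stand-ins for the authors' seven-parameter piecewise curve:

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_error(params, t, d, sigma, model):
    """Chi-square-style cost: deviations from the data scaled by the error."""
    s = model(params, t)
    return np.sum(((d - s) / sigma) ** 2)

def step_model(params, t):
    """Toy piecewise-constant profile: level a before t1, level b after."""
    a, b, t1 = params
    return np.where(t < t1, a, b)

def diff_evolution(cost, bounds, pop_size=30, gens=200, F=0.7, CR=0.9):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((pop_size, len(bounds))) * (hi - lo)
    scores = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + F * (b - c), lo, hi)       # mutation
            mask = rng.random(len(bounds)) < CR            # binomial crossover
            trial = np.where(mask, trial, pop[i])
            s = cost(trial)
            if s < scores[i]:                              # greedy selection
                pop[i], scores[i] = trial, s
    return pop[scores.argmin()]

t = np.arange(10.0)
true = step_model([1.0, 3.0, 5.0], t)
sigma = np.full_like(t, 0.1)
d = true + rng.normal(0, 0.05, size=t.shape)
best = diff_evolution(lambda p: weighted_error(p, t, d, sigma, step_model),
                      bounds=[(0, 5), (0, 5), (1, 9)])
```

Being gradient-free and population-based, the DE loop tolerates the discontinuous time parameters that would defeat a gradient optimizer.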
A fit was considered good if it was on average (across the time course) one standard deviation or less away from the experimental average. Some of the profiles may be slightly over-fit by our curve (e.g. profiles that are up-regulated or down-regulated once during the time course without further modulation of expression). Thus care needs to be exercised in the interpretation of some of the parameters. We used the DAVID database (david.abcc.ncifcrf.gov) to perform Gene Ontology term enrichment on each subset of sorted genes: up-regulated, down-regulated, transiently up-regulated, and transiently down-regulated. Specifically we made use of DAVID's API, instead of the web interface, to generate the GO-enrichment through a python script. GO terms were clustered based upon genes in a given term to reduce redundancy in the returned results. KEGG pathway annotations were likewise retrieved using the DAVID database API. The protein levels within each returned KEGG pathway were then averaged to see if there was any consistent response across the entire pathway. Those KEGG terms that gave inconsistent responses across proteins in that pathway returned a relatively flat average and were filtered out. All of the scripts used to perform the above analysis can be downloaded at https://github.com/marcottelab/AG3C_starvation_tc. Raw data are available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.hj6mr. Raw Illumina read data and processed files of read counts per gene and normalized expression levels per gene have been deposited in the NCBI GEO database (accession GSE67402).
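The four-way qualitative sorting used before the GO enrichment can be sketched as follows; the tolerance and the extremum-based test are illustrative choices, not the authors' exact amplitude criteria:

```python
def classify_profile(y, tol=0.1):
    """Sort a normalized time course into one of four qualitative categories.

    Follows the text's definitions: 'up-regulated' increases and never falls
    back substantially; 'transiently up-regulated' increases then decreases;
    likewise for 'down'. `tol` is an illustrative amplitude threshold.
    """
    start, end = y[0], y[-1]
    hi, lo = max(y), min(y)
    rose = hi - start > tol           # increased at some point
    fell = start - lo > tol           # decreased at some point
    if rose and hi - end > tol:
        return "transiently up-regulated"
    if fell and end - lo > tol:
        return "transiently down-regulated"
    if rose:
        return "up-regulated"
    if fell:
        return "down-regulated"
    return "flat"
```

For example, `classify_profile([0.0, 1.0, 0.0])` returns `"transiently up-regulated"`.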
S1 Table. (CSV)
S2 Table. (CSV)
S3 Table. For each gene, we list the common name, the entrez ID, the full-length gene name, the predicted function for uncharacterized genes, and the behavioral group each gene was sorted into. (CSV)
S4 Table. For each gene, we list the common name, the entrez ID, the full-length gene name, the predicted function for uncharacterized genes, and the behavioral group each gene was sorted into. (CSV)
S1 Fig. Scatter plot between biological replicates 1 and 2 and 1 and 3 along with their associated Spearman correlation coefficients. P-values for all correlations are <10^−100. (TIFF)
S2 Fig. Overlap between protein IDs comparing the first three time points, 3–5 hrs, where cells and protein concentrations are roughly at steady state. The high overlap between time points indicates very reproducible protein IDs. (TIFF)
S3 Fig. For each time point the fraction of total RNA reads in the RNA-seq results that mapped to tRNA (orange), rRNA (green), mRNA (red), or other noncoding RNA (purple) are shown. (A) RNA fractions for each total RNA sample that was processed without the rRNA depletion step. (B) RNA fractions for rRNA-depleted samples. Each bar represents an individual biological repeat. In some samples the rRNA depletion was not as successful as in others. Any residual rRNA counts were disregarded before analyzing relative RNA expression levels. (TIFF)
S4 Fig. Both the RNA and protein levels were scaled to their respective averages across all RNAs or proteins for each time point and then log transformed. All P values were <10^−43. (TIFF)
S5 Fig. We grouped RNA and protein time courses based on general qualitative behaviors. After entry to stationary phase, RNA and/or protein can be shut off, turned on, transiently activated, or transiently repressed. (A) To sort the profiles, a piecewise continuous curve was fit to the data. The parameter t0 represents the time at which we start to collect data at 3 h into growth. The curve was fit using a differential evolution fitting algorithm that was gradient free and population based, allowing for a range of possible parameter sets that can explain our data given the experimental error. (B-E) Four random examples of measured RNA time courses averaged across 3 biological replicates (green circles) with their standard deviations (green bars) along with the corresponding fits (blue). The blue bars represent the standard deviation of the range of fits that agree with our data. Both experimental time courses and fits were normalized by the average of the time course. (F) Most of the RNAs began to change between 6–8 h, when the cells began to be starved. This is demonstrated by the histogram of t1, the time to the first inflection point. (TIFF)
S6 Fig. (A-C) mRNA distributions of t1, time to first inflection (A), t2 + t3 + t4, the time between the first inflection and the time the profile levels off (B), and t1 + t2 + t3 + t4, the total time until a given profile levels off (C). (D-F) As (A-C), but for protein profiles. (TIFF)
S7 Fig. Flux ratios were computed via the FiatFlux software from GC-MS derived 13C constraints. As FiatFlux considers each time point as an integral from the start of the experiment, this analysis allowed us to determine whether later time points during growth changed the overall central metabolic flux splits that were estimated from earlier time points. Flux ratios for (A) SER from GLY, (B) OYR from MAL upper branch, (C) PEP through TK upper branch, (D) PEP through PPP upper branch, (E) PEP from OAA, (F) OAA from PEP, (G) P5P from G6P lower branch, (H) E4P through TK, and (I) GLY through serine. (TIFF)
S1 File. (ZIP)
S2 File. (ZIP)
S3 File. (ZIP)
S4 File. (ZIP)"}
+{"text": "Most studies of epidemic detection focus on their start and rarely on the whole signal or the end of the epidemic. In some cases, it may be necessary to retrospectively identify outbreak signals from surveillance data. Our study aims at evaluating the ability of change point analysis (CPA) methods to locate the whole disease outbreak signal. We will compare our approach with the results coming from experts' signal inspections, considered as the gold standard method. We simulated 840 time series, each of which includes an epidemic-free baseline (7 options) and a type of epidemic (4 options). We tested the ability of 4 CPA methods and of expert inspection to identify the simulated outbreaks. We evaluated the performances using metrics including delay, accuracy, bias, sensitivity, specificity and Bayesian probability of correct classification (PCC). A minimum of 15 h was required for the experts to analyze the 840 curves, against a maximum of 25 min for a CPA algorithm. The Kernel algorithm was the most effective overall in terms of accuracy, bias and global decision (PCC = 0.904, compared to a PCC of 0.848 for human expert review). For the aim of retrospectively identifying the start and end of a disease outbreak, in the absence of human resources available to do this work, we recommend using the Kernel change point model. In case of experts' availability, we also suggest supplementing the human expertise with a CPA, especially when the signal-noise difference is below 0. The online version of this article (doi:10.1186/s12911-016-0271-x) contains supplementary material, which is available to authorized users. The US Centers for Disease Control and Prevention (CDC) define an epidemic as \"the occurrence of more cases of disease than expected in a given area or among a specific group of people over a particular period of time\".
Our study aims at evaluating the ability of change point analysis (CPA) methods to identify the beginning and ending dates of a disease outbreak from weekly counts. We will compare our approach with the results coming from experts' signal inspections, considered by many as the gold standard method. Historically, identifying a whole outbreak signal in surveillance data has relied more on human judgment, for example through review by a committee of experts, than on statistical methods. Given a time series {x1, x2,…, xn} measured with an index of time τ ∈ {1, 2,…, n}, a change point is a time index where a structural change occurs in the data. Following several authors working on the use of CPA in the context of surveillance data of infectious disease counts, a change point model is a model assuming the existence of at least one change point and partitioning the data into disjoint segments (with parameters similar within each segment and different between segments). This analysis is recursively repeated for each segment, allowing the detection of multiple changes. Usually the state change occurs at an unknown time index τ. This is why the problem is formulated as both a change point detection (detection formulation: "How many changes occur during the time series?") and a change point estimation (location formulation: "When do they occur within the time series?"). Let {x1, x2,…, xn} be a time series of independent random variables and θi, i = 1,…, n the corresponding structure parameters.
The change-point analysis of the time series consists of the following two steps:

1) Decide between

H0: θ1 = ⋯ = θk = ⋯ = θτ = ⋯ = θn (no change point)

and

H1: θ1 = ⋯ = θk = α ≠ θk+1 = ⋯ = θτ = β ≠ θτ+1 = ⋯ = θn = γ (change points)

where 1 < k < τ < n, α, β, γ are unknown, and k and τ represent the start and end dates of the outbreak. A change point is detected when H0 is rejected.

2) Estimate k and τ from the sample {x1, x2,…, xn}, if H1 is true.

The estimation step is carried out only when the null hypothesis of no change point is rejected. The number of change points and their respective unknown positions then have to be estimated. Depending on the underlying model used to solve the change point problem, we may distinguish parametric (based on maximum likelihood), non-parametric, and Bayesian change point models. All the experiments were performed using R 3.1.1. To evaluate precisely the ability of change point analysis methods to identify a whole outbreak time curve, we need to build a gold standard by perfectly controlling when the outbreak starts in the time series and when it ends. In the absence of a consensual gold standard defined in the real world, only simulated data allow this evaluation.
For the realism of the generated outbreak curve, as proposed by Jackson, we used a real published Norovirus outbreak curve (n = 12). Due to the use of a probabilistic process to generate this curve, the resulting duration of the simulated outbreak can be shorter than the original one, in particular for outbreaks with a small number of cases (n = 10). Seven levels of baselines were generated, corresponding to expected daily incidences of 0, 1, 3, 5, 10, 20, 30, with 0, 0.25, 2.25, 5, 15, 50, 100 as associated variances. The baseline duration was 72 days for the Norovirus. For each incidence level we randomly generated 200 baselines according to a Gaussian law. With an objective of result reproducibility, we chose to build, for each combination of outbreak magnitude and baseline level of daily incidence, a data set of 30 time series. Each time series was created by adding a randomly selected outbreak among the 100 to a randomly selected baseline among the 200. To fully control the beginning and ending dates of the outbreak within the time series, we systematically added the outbreak after the first 30 days of baseline and kept 30 days after the outbreak end. Finally, 840 time series (4 outbreak curves × 7 baseline levels × 30 replicates) in 28 data sets were produced for evaluating the algorithms. The building process was slightly different for the human experts. To avoid a learning process, we chose to randomly place the epidemic period in each series, with the constraint of keeping at least 10 days of baseline before the outbreak beginning and 10 days of baseline after the outbreak ending. Taking into account the workload, we built 28 data sets of 2 time series only. However, we randomly reordered the 56 resulting time series presented to each expert to control for a possible ordering effect. We enlisted 15 experts who have at least 1 year's experience in daily disease surveillance.
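The construction of one simulated series (a Gaussian baseline with a controlled outbreak added after 30 baseline days) can be sketched as below; the outbreak curve used here is a hypothetical stand-in for the resampled Norovirus curve:

```python
import numpy as np

rng = np.random.default_rng(42)

# Baseline daily-incidence levels and their variances, as given in the text.
LEVELS = {0: 0.0, 1: 0.25, 3: 2.25, 5: 5.0, 10: 15.0, 20: 50.0, 30: 100.0}

def simulate_series(mean_incidence, outbreak, lead=30, tail=30):
    """Build one series: `lead` baseline days, the outbreak added on top of the
    baseline, then `tail` baseline days. Counts are rounded and floored at 0."""
    var = LEVELS[mean_incidence]
    n = lead + len(outbreak) + tail
    base = rng.normal(mean_incidence, np.sqrt(var), size=n)   # Gaussian baseline
    base = np.clip(np.rint(base), 0, None)
    series = base.copy()
    series[lead:lead + len(outbreak)] += outbreak             # inject the signal
    start, end = lead, lead + len(outbreak) - 1               # gold-standard dates
    return series, start, end

outbreak = np.array([1, 3, 7, 12, 8, 4, 2, 1])   # hypothetical epidemic curve
series, start, end = simulate_series(5, outbreak)
```

Because the injection position is fixed, the true start and end dates are known exactly, which is what makes the simulated series usable as a gold standard.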
To allow comparison with the algorithms, we gave the experts the information that one and only one outbreak was present in each time series. Each expert independently evaluated 56 time series. The time series were presented in both graphical and numerical format. As proposed by Chen (2011), we can hypothesize that xτ follows a Poisson distribution. Even if infectious disease data are often overdispersed, this distribution hypothesis may be kept, as is usually done in disease surveillance. This CPA is detailed in Harchaoui et al. Let {x1, x2,…, xn} be a time series of independent random variables. We can define a kernel function K and split the series into three successive segments S1, S2, S3:

S1 = {x1, x2,…, xi} with i observations: pre-epidemic
S2 = {xτ1+1, xτ1+2,…, xj} with (j−i) observations: epidemic
S3 = {xτ2+1, xτ2+2,…, xn} with (n−j) observations: post-epidemic

In a second step we can define a kernel Fisher discriminant ratio (KFDR), which measures the heterogeneity between the successive segments; computing it amounts to calculating a Fisher test statistic comparing S1, S2 and S3. The detailed method is presented in Additional file. A kernel function is required to calculate this KFDR. In the literature many kernel functions exist, but we chose for this work a simple linear kernel function k(x, y) = xy. Tests against the epidemic alternative H1: "a change occurs in the baseline" with a non-parametric technique are proposed in Yao (1993). Let L1 be the law of the random variable X on the first segment (pre-epidemic), L2 the law on the second segment (epidemic) and L3 the law on the last segment (post-epidemic). The Kruskal-Wallis test is used to check the following hypotheses:

H0: L1 = L2 = L3.
H1: Laws L1, L2, and L3 are not identical.
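The Kruskal-Wallis comparison of the three segment laws can be sketched as follows; this is a bare-bones H statistic without tie correction, not the R implementation used in the study:

```python
import numpy as np

def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H statistic for k groups (no tie correction; a sketch).

    Under H0 (all segments share one law), H approximately follows a
    chi-square distribution with k-1 degrees of freedom."""
    pooled = np.concatenate(groups)
    order = pooled.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(pooled) + 1)   # rank 1 = smallest value
    n_total = len(pooled)
    h = 0.0
    i = 0
    for g in groups:
        r = ranks[i:i + len(g)]
        h += len(g) * (r.mean() - (n_total + 1) / 2) ** 2
        i += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h

# Three segments: pre-epidemic, epidemic (higher counts), post-epidemic.
s1 = np.array([2, 3, 1, 2, 2, 3, 1, 2])
s2 = np.array([9, 12, 15, 11, 10, 14, 13, 9])
s3 = np.array([2, 1, 3, 2, 2, 1, 3, 2])
H = kruskal_wallis_H(s1, s2, s3)
```

Here H far exceeds the chi-square critical value with 2 degrees of freedom at the 5 % level (about 5.99), so H0: L1 = L2 = L3 would be rejected.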
The detailed method is presented in Additional file. The Bayesian model used here is the one introduced in 1998 by Siddhartha Chib. Chib's approach uses a hidden state variable (St) that takes discrete values from 1 to the total number of hidden regimes (m) in the series. Each discrete value indicates the kind of data-generating regime at each time unit (τ). This approach allows reproducing the epidemic and non-epidemic latent regimes that generate the disease surveillance time series Xn = {x1,x2,…,xn}, with n observations. This Bayesian approach ensures some flexibility by using a limited number of dependent variables, while keeping the capacity of managing multiple change points. Let xt be the number of events at time t and St the hidden state (or regime) at t. The distribution of xt given Xt-1 = {x1,x2,…,xt-1} depends on the transition parameters (transition kernel) ξt, whose values {θ1,θ2,…,θm} change at unknown dates {τ1, τ2,…,τm-1}. In Chib's model, the transition of hidden states is constrained to move forward by a non-ergodic Markov chain that makes the regime changes irreversible. The discrete variable St is modeled by a Markov chain process with transition probability matrix P, without the possibility of returning to a previous state.
The detailed method is presented in Additional file. Data analysis was done with the MCMCpoissonChange function provided by MCMCpack. We evaluated the algorithms using several metrics. The beginning detection delay (d1), measured in days, is equal to the absolute value of the difference between the start date according to the algorithm and the real start date. The ending detection delay (d2), measured in days, is equal to the absolute value of the difference between the end date according to the algorithm and the real end date. The specificity (Sp) expresses the capability to consider a day as non-epidemic while it is really not epidemic. The sensitivity (Se) expresses the capability to consider a day as an outbreak day while it is really epidemic. For the last metric we selected the Bayesian probability of correct classification. Our detection problem consists in deciding which is the true binary state of a day in the time series (baseline or outbreak), given the binary result of the algorithm for this day. If the 2 possible realizations for a day are noted H0 (non-epidemic day) and H1 (epidemic day), with prior probabilities P0 and P1, the Bayesian risk associated with this binary detection problem is then R = Σi,j Cij P(Hi/Hj) Pj, where the Cij are the costs associated to each possibility and the P(Hi/Hj) are the conditional probabilities for each realization. If C00 = C11 = 1 and C01 = C10 = 0, the Bayesian risk is the binary Bayesian probability of correct classification (PCC), or the probability of exact decision. To account for the influence of the outbreak and baseline sizes in the evaluation results we used the difference between the signal (outbreak) and the noise (baseline), and not the usual signal-to-noise ratio. This was required because of the existence of null baselines in the datasets.
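Given a gold-standard outbreak interval and a detected interval, the per-series metrics defined above (d1, d2, Se, Sp and the unit-cost PCC) can be computed as:

```python
def evaluate_detection(true_start, true_end, det_start, det_end, n_days):
    """Compute the per-series metrics: delays d1/d2 in days, per-day
    sensitivity and specificity, and the probability of correct
    classification with C00 = C11 = 1 and C01 = C10 = 0."""
    d1 = abs(det_start - true_start)
    d2 = abs(det_end - true_end)
    truth = [true_start <= d <= true_end for d in range(n_days)]   # real status
    guess = [det_start <= d <= det_end for d in range(n_days)]     # algorithm
    tp = sum(t and g for t, g in zip(truth, guess))
    tn = sum((not t) and (not g) for t, g in zip(truth, guess))
    se = tp / sum(truth)                       # epidemic days found
    sp = tn / (n_days - sum(truth))            # baseline days left alone
    pcc = (tp + tn) / n_days                   # = P(H1/H1)P1 + P(H0/H0)P0
    return d1, d2, se, sp, pcc

# True outbreak on days 30-37 of a 68-day series, detected as days 32-39.
d1, d2, se, sp, pcc = evaluate_detection(30, 37, 32, 39, 68)
```

With empirical day frequencies as priors, the unit-cost Bayesian risk reduces to the fraction of correctly classified days, which is why PCC is computed as (TP + TN) / n.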
This signal-to-noise difference (SND) is defined as the difference between the number of outbreak cases and the number of baseline cases during the outbreak period. A positive SND corresponds to a higher number of cases in the outbreak than in the baseline during the outbreak period, and a negative one to the opposite. Across all algorithms, the baseline and outbreak sizes affect accuracy and dispersion (Table). Larger outbreak sizes decrease the delays d1 and d2, while the inverse is true for larger mean baseline counts. Delay d1 (d2) goes from 13.4 to 2.4 days (11.7 to 2.8 days) when the outbreak size grows from 10 to 100 cases, and from 0.5 to 20.9 days (0.5 to 21.1 days) when the baseline level grows from 0 to 30. The precision of the dates increases with the outbreak size (16.5 to 1.1) and decreases when the baseline level grows (16.3 to 0.4). The Kernel algorithm provides the most precise and least biased global results according to the d1 and d2 errors across the 840 time series, whatever the outbreak size and the baseline level (Fig.). No point is below the first diagonal with a 4-day offset because this area corresponds to the impossible case of an outbreak beginning after its ending. The maximum likelihood and the kernel algorithms show their results mainly along this diagonal. This reveals that these algorithms try to find the shortest outbreak possible while exploring the time series. In contrast, the experts' results are mainly grouped around the true values and do not show a specific alignment along the first diagonal. The top left quadrant in each figure corresponds to situations where the whole real outbreak is included by excess within the detected outbreak, meaning that the algorithm makes an error not only for d1 but also for d2. This is most frequent for the Kruskal-Wallis (26 % of the results) and Bayes (18 %) algorithms, less frequent for the kernel and the Max-Likelihood (16 %) algorithms, and least frequent for the experts (10 %).
The cumulative standard deviation (used as a proxy of the cumulative dispersion of the date detection) according to SND shows that the accuracy decreases with SND (Fig.). The algorithm sensitivity and specificity are highly influenced by the outbreak size and the baseline level. The PCC can be considered as the probability for an algorithm to provide the right decision for all days in a time series. A PCC equal to 1 corresponds to perfect status identification for the whole time series. Kernel performs globally better (with a PCC = 0.904) than Max-Likelihood (PCC = 0.903), Kruskal-Wallis (PCC = 0.884), Bayes (PCC = 0.862) and Human (PCC = 0.848). Detailed results are presented in Fig. On average, an expert could process 56 curves per hour. An average of 15 h is then required to manually process all 840 curves. In contrast, the slowest algorithm handles the whole set in 24 min 37 s, with a minimum of 3 min 18 s for the Max-Likelihood algorithm. Concerning the time-consuming aspect, a recent study by Debin needed 69 experts (recruited among 288 eligible) to assess 34 time series, and obtaining a consensus took approximately 3 months (without including the time needed for recruiting the experts). Although data-processing time is hardware-configuration dependent, we note for comparison purposes that experts need at least 15 h to assess 840 curves, in contrast with the maximum of 25 min and minimum of 3 min 18 s of processing time for the CPA models. Taking into account the workload, the small number of skilled experts, and the probable volunteer fatigue as solicitations increase, it is clear that expert review can only be an occasional solution for research purposes and is unusable in a routine context. Considering the reliability of the different methods (human and statistical), we tried to identify their limits.
Our approach was to control the characteristics of the time series submitted to the different methods of outbreak identification by injecting controlled signals coming from real outbreaks into well-known baselines. Among the factors most influencing the evaluation results, we showed that the excess of cases in the time series linked to the epidemics, represented by the SND, is the most pertinent factor and emphasizes the interaction between the outbreak size and the baseline level. Concerning accuracy, each algorithm can be characterized by a drop-out point, associated with an important accuracy decrease and dispersion increase, which is easy to materialize by the cumulative dispersion. Experts outperform the algorithms in terms of PCC for low levels of baseline, but this ability quickly declines when SND decreases. Kernel and Maximum-Likelihood are the most accurate and least biased algorithms when the SND decreases below 0. All other algorithms lose their capacities as soon as SND reaches 10. Kernel and Maximum-Likelihood can be considered, whatever the baseline level and outbreak size, as the best algorithms for providing a correct classification of the days in the time series according to their real status. For all algorithms, we observed a disposition to delay the detection of the first outbreak day and to anticipate the detection of the last outbreak day, probably in association with the slope of the outbreak curve. Among the differences observed between experts and algorithms, we noticed that when an expert does not find the outbreak signal he has a propensity to include the whole time series as the outbreak signal (increasing his sensitivity). Another individual variability observed in the expert population, which increases the result dispersion, is the way the instructions are understood and assimilated.
For example, considering the instruction "End date for the outbreak is the last day always included in the epidemic signal but before a return to a normal situation" and the 0-level baseline, some experts included in the outbreak signal the first baseline days (0 cases) after the ending day, while at the same time considering outbreak cases occurring ahead of time as baseline noise and excluding them from the outbreak, a phenomenon also suggested by Wagner. We also observed some limitations of both experts and CPA when the outbreak starts during a low level of the baseline fluctuation, making the identification of the true beginning date impossible. Our study shows that nonparametric and parametric methods are more accurate and less biased than Bayesian CPA. Kass-Hout et al. found the same result. All this work has been done on detrended signals, without seasonality or autocorrelation. However, by using a 3-state model, we already integrate the possibility of a trend, as this allows the baseline to have different levels before and after the outbreak. Despite the numerous advantages of the CPA algorithms, three main problems can be raised. Firstly, they are unable to detect unique aberrations in time series, such as a single-peak outbreak, which is still easy to identify by an expert and usually best detected by usual time-aberration detection methods. This is a reason why we recommend that CPA models should be compared to threshold-based methods in future studies, as for example the Moving Epidemic Method. In conclusion, for the aim of retrospectively identifying the start and end of a disease outbreak, in the absence of human resources available to do this work, we recommend using the Kernel change point model.
And in case of experts' availability, we also suggest supplementing the human expertise with this kind of technique, especially when the SND is below 0. Data are simulated from a real outbreak of Norovirus already published. Additional file 1: Supplementary material. (DOCX 1794 kb)"}
+{"text": "Exploratory analysis of multi-dimensional high-throughput datasets, such as microarray gene expression time series, may be instrumental in understanding the genetic programs underlying numerous biological processes. In such datasets, variations in the gene expression profiles are usually observed across replicates and time points. Thus mining the temporal expression patterns in such multi-dimensional datasets may not only provide insights into the key biological processes governing organs to grow and develop but also facilitate the understanding of the underlying complex gene regulatory circuits. In this work we have developed an evolutionary multi-objective optimization for our previously introduced triclustering algorithm δ-TRIMAX. Its aim is to make optimal use of δ-TRIMAX in extracting groups of co-expressed genes from time series gene expression data, or from any 3D gene expression dataset, by adding the powerful capabilities of an evolutionary algorithm to retrieve overlapping triclusters. We have compared the performance of our newly developed algorithm, EMOA-δ-TRIMAX, with that of other existing triclustering approaches using four artificial datasets and three real-life datasets. Moreover, we have analyzed the results of our algorithm on one of these real-life datasets monitoring the differentiation of human induced pluripotent stem cells (hiPSC) into mature cardiomyocytes. For each group of co-expressed genes belonging to one tricluster, we identified key genes by computing their membership values within the tricluster. It turned out that to a very high percentage, these key genes were significantly enriched in Gene Ontology categories or KEGG pathways that fitted very well to the biological context of cardiomyocyte differentiation. EMOA-δ-TRIMAX has proven instrumental in identifying groups of genes in transcriptomic data sets that represent the functional categories constituting the biological process under study.
The executable file can be found at http://www.bioinf.med.uni-goettingen.de/fileadmin/download/EMOA-delta-TRIMAX.tar.gz. The online version of this article (doi:10.1186/s12859-015-0635-8) contains supplementary material, which is available to authorized users. One of the main aims of functional genomics is to understand the dynamic features encoded in the genome such as the regulation of gene activities. It often refers to high-throughput approaches devised to gain a complete picture of all genes of an organism in one experiment. Several steps, such as transcription, RNA splicing and translation, are involved in the process of gene expression, which is subject to a great many regulatory mechanisms. Analysis of such gene expression data provides enormous leverage to understand the principles of cellular systems, disease mechanisms, molecular networks etc. Genes having similar expression profiles are frequently found to be regulated by similar mechanisms. Previous studies elucidated the impact of highly connected intra-modular hub genes on such regulations. We previously developed δ-TRIMAX, introducing a novel mean squared residue score (MSR) to mine a 3D gene expression dataset; each tricluster must have an MSR score below a threshold δ. Tricluster (M): a tricluster can be defined as a sub-matrix M with i ∈ I, j ∈ J, k ∈ K. Sub-matrix M represents a subset of genes (I) that have similar expression profiles over a subset of samples (J) during a subset of time points (K). First, we have applied the proposed algorithm to an artificial dataset containing 1000 genes, 5 samples and 4 time points. We have then embedded 3 perfect shifting triclusters (standard deviation (σ) = 0) of size 100 × 4 × 4, 80 × 4 × 4 and 60 × 4 × 4 into the dataset. In the next step, we have implanted 3 noisy triclusters with different levels of noise into the synthetic dataset.
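The mean squared residue for a candidate tricluster can be sketched in numpy, assuming the usual 3D extension of the Cheng-Church residue (r_ijk = x_ijk − m_iJK − m_IjK − m_IJk + 2·m_IJK); a perfect shifting tricluster then scores exactly 0:

```python
import numpy as np

def msr(sub):
    """Mean squared residue of a genes x samples x time-points submatrix,
    using the additive 3D residue r_ijk = x_ijk - m_iJK - m_IjK - m_IJk
    + 2 * m_IJK (assumed form of the delta-TRIMAX score)."""
    m_iJK = sub.mean(axis=(1, 2), keepdims=True)   # per-gene mean
    m_IjK = sub.mean(axis=(0, 2), keepdims=True)   # per-sample mean
    m_IJk = sub.mean(axis=(0, 1), keepdims=True)   # per-time-point mean
    m_IJK = sub.mean()                             # grand mean
    residue = sub - m_iJK - m_IjK - m_IJk + 2 * m_IJK
    return float((residue ** 2).mean())

# A perfect shifting tricluster (x_ijk = a_i + b_j + c_k) has MSR = 0.
a = np.array([0.0, 1.0, 5.0])[:, None, None]
b = np.array([0.0, 2.0])[None, :, None]
c = np.array([0.0, 3.0, 4.0, 7.0])[None, None, :]
perfect = a + b + c
noisy = perfect.copy()
noisy[0, 0, 0] += 1.0   # any deviation from additivity raises the score
```

Any submatrix with MSR below the threshold δ would qualify as a candidate tricluster under this score.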
We have then embedded 3 perfect shifting triclusters (standard deviation (σ) = 0) of size 50 × 3 × 3, 50 × 3 × 3 and 50 × 3 × 3 into the dataset. In the next step, we have added different levels of noise to the synthetic dataset. Moreover, we have generated another artificial dataset which contains 200 genes, 10 replicates and 10 time points, and implanted 3 perfect shifting triclusters into it. Afterwards, we have generated artificial datasets of size 200 (genes) × 10 (replicates) × 20 (time points), 200 (genes) × 10 (replicates) × 25 (time points) and 200 (genes) × 10 (replicates) × 30 (time points), in which we have embedded 3 perfect shifting triclusters of size 30 × 3 × 8, 30 × 3 × 6 and 30 × 3 × 4. In order to show the performance of the algorithm for datasets containing missing values, we have randomly deleted 0.5 %, 1 %, 1.5 % and 2 % of all elements of one artificial dataset of size 200 × 10 × 20 containing three triclusters of size 30 × 3 × 8, 30 × 3 × 6 and 30 × 3 × 4. In this work, this previously published dataset has only been used for comparing the performance of the proposed algorithm with that of the other existing triclustering algorithms, since one of the algorithms we wanted to compare our approach with, OPTricluster, can only be efficiently applied to a short time series gene expression dataset and thus was not suitable to be used for dataset 2 (see below).
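Implanting a shifting tricluster into a random background, and deleting a fraction of entries to emulate missing values, can be sketched as follows (an illustration, not the authors' generator):

```python
import numpy as np

rng = np.random.default_rng(7)

def implant_shifting_tricluster(shape, rows, cols, deps, base=0.0):
    """Random background with a shifting (additive) tricluster implanted at
    the given gene/replicate/time-point index sets."""
    data = rng.normal(0.0, 1.0, size=shape) * 10
    a = rng.normal(size=len(rows))[:, None, None]   # gene offsets
    b = rng.normal(size=len(cols))[None, :, None]   # replicate offsets
    c = rng.normal(size=len(deps))[None, None, :]   # time-point offsets
    data[np.ix_(rows, cols, deps)] = base + a + b + c
    return data

def delete_fraction(data, frac):
    """Randomly set roughly a fraction `frac` of all elements to NaN."""
    out = data.astype(float).copy()
    mask = rng.random(out.shape) < frac
    out[mask] = np.nan
    return out

data = implant_shifting_tricluster((200, 10, 20), range(30), range(3), range(8))
noisy = delete_fraction(data, 0.02)   # ~2 % missing values, as in the last setting
```

The implanted block is purely additive (x_ijk = a_i + b_j + c_k), which is exactly the "perfect shifting tricluster" pattern with σ = 0 described in the text.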
Datasets: one dataset contains 48803 Illumina HumanWG-6 v3.0 probe ids, 3 replicates and 12 time points (GSE35671). Another experiment was carried out to study the dynamics of expression profiles of 54675 Affymetrix human genome U133 plus 2.0 probe ids in response to IFN-beta-1b treatment across four time points over 6 patients (GSE46280). To evaluate the performance of the proposed algorithm on the artificial datasets described above (2.6.1), we have used the affirmation score, where Tim is the set of implanted triclusters and Tres represents the set of triclusters extracted by any triclustering algorithm; the score SM\u2217 measures how well Tres recovers Tim. The value of SM\u2217 ranges from 0 to 1; if Tres=Tim, then the affirmation score is 1. To estimate the threshold \u03b4, we have first clustered the genes over all time points and then the time points over the subset of genes for each gene cluster in each sample plane using the k-means algorithm. Taking a randomly selected sample plane, we have computed the MSR score of the submatrix of each gene and time-point cluster and repeated this procedure 100 times. We have then taken the lowest value as the value of \u03b4. Although it is possible to minimize the MSR score without introducing the threshold parameter \u03b4, minimizing MSR without any threshold may either yield small triclusters which may not provide any biologically meaningful information or produce large triclusters which may contain genes and/or samples and/or time points lying far apart in the feature space. Thus, using a threshold parameter \u03b4 balances the size and quality of the resultant triclusters. The value of \u03bb has been experimentally set to maximize the speed of the proposed algorithm and minimize the risk of falling into a local optimum.
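The affirmation score equation itself did not survive extraction; one common formulation for such recovery scores (following bicluster match scores in the style of Preli\u0107 et al., an assumption here rather than the paper's exact equation) averages, over implanted triclusters, the best Jaccard overlap with any recovered tricluster, treating each tricluster as a set of (gene, sample, time point) cells:

```python
def jaccard(a, b):
    """Jaccard overlap of two cell sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def affirmation_score(implanted, recovered):
    """Average, over implanted triclusters, of the best Jaccard overlap
    with any recovered tricluster; equals 1 when every implanted
    tricluster is recovered exactly."""
    if not implanted:
        return 0.0
    return sum(max((jaccard(t, r) for r in recovered), default=0.0)
               for t in implanted) / len(implanted)
```

Under this formulation the score is 1 exactly when Tres reproduces Tim, matching the property stated in the text.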
The values of \u03b4 and \u03bb used to run the proposed algorithm and our previously proposed \u03b4-TRIMAX algorithm on the artificial datasets are listed in the corresponding tables; the results show that EMOA-\u03b4-TRIMAX can effectively deal with datasets having different numbers of time points. The affirmation score was also used to compare the performance of the proposed algorithm with that of the other triclustering algorithms and one biclustering algorithm. Moreover, we compared the performance of the proposed algorithm with that of the existing ones in terms of CPU time, as shown in the corresponding figure. In order to show the robustness of the proposed algorithm, we have used the artificial datasets 1 and 2 with different levels of noise described above (2.6.1). For each of these datasets, we have run the proposed algorithm 20 times and reported the standard deviations of the affirmation scores obtained after each run in the corresponding table. As a data preprocessing step, we have used the robust multi-array average (RMA) method to normalize the datasets. The values of the input parameters \u03bb and \u03b4 of the EMOA-\u03b4-TRIMAX and our previously proposed \u03b4-TRIMAX algorithms for each of the real-life datasets were chosen according to our criteria explained in section \u2018Results on an artificial dataset\u2019. As using default parameter values may often produce poor results, the input parameters of the other algorithms were tuned in order to obtain better results on each of the real-life datasets. In order to show the convergence of solutions towards the Pareto optimal front around its center region, we have computed minSum values in each generation, where \u03a8 denotes the current population and f1, f2 and f3 correspond to the objective functions defined earlier. We have applied our proposed algorithm to the three aforementioned real-life datasets and compared its performance with that of the other triclustering algorithms.
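The minSum equation was lost in extraction; a plausible reading (an assumption, not the paper's verbatim definition) is the smallest summed objective value over the current population \u03a8, tracked per generation:

```python
def min_sum(population, f1, f2, f3):
    """minSum of the current population: the smallest summed value of the
    three objective functions, tracked per generation to monitor
    convergence toward the center of the Pareto front."""
    return min(f1(x) + f2(x) + f3(x) for x in population)
```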
For this comparison, we have computed a Tricluster Diffusion (TD) score and a Statistical Difference from Background (SDB) score. The TD score of the ith tricluster relates MSRi and Volumei, where MSRi and Volumei stand for the mean squared residue score and the volume of the ith tricluster, respectively. The volume of the ith tricluster can be defined as (|Ii|\u2217|Ji|\u2217|Ki|), where |Ii|, |Ji| and |Ki| represent the number of genes, samples and time points of the ith tricluster, respectively. A lower TD score represents better quality of triclusters. EMOA-\u03b4-TRIMAX yields triclusters having lower TD scores than those produced by the other algorithms for each of the three datasets. The statistical difference from background (SDB) score signifies whether a set of n triclusters is statistically different from the background data matrix. In the SDB score, n is the total number of triclusters extracted by the algorithm, MSRi represents the mean squared residue of the ith tricluster retrieved by the algorithm, and RMSRj stands for the mean squared residue of the jth random tricluster having the same number of genes, experimental samples and time points as the ith resultant tricluster; a higher value of the numerator indicates a better quality of the resultant tricluster. In our study we have set r, the number of random triclusters, to 100. OPTricluster cannot be applied to dataset 2 as it effectively mines only short time series gene expression data having approximately 3-8 time points. The proposed algorithm outperforms our previously proposed \u03b4-TRIMAX algorithm in the case of dataset 1, dataset 2 and dataset 3. KEGG pathway enrichments have been found for each resultant tricluster for datasets 1, 2 and 3. To compare the performance of our proposed algorithm with that of the other algorithms using KEGG pathway enrichment, we used a hit score, computed from a tricluster T and its enriched KEGG pathway term i, where |T| is the total number of genes in tricluster T.
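The TD score can be sketched directly as MSR over volume (consistent with "a lower TD score represents better quality"), and the SDB comparison against r random triclusters of identical dimensions can be approximated as a difference of means; since the exact SDB equation is not recoverable from the text, the form below is an assumption:

```python
import numpy as np

def _msr(sub):
    """Mean squared residue of a 3D submatrix (delta-TRIMAX-style residue)."""
    res = (sub - sub.mean(axis=(1, 2), keepdims=True)
               - sub.mean(axis=(0, 2), keepdims=True)
               - sub.mean(axis=(0, 1), keepdims=True) + 2 * sub.mean())
    return float((res ** 2).mean())

def td_score(msr, n_genes, n_samples, n_times):
    """Tricluster Diffusion: MSR divided by volume; lower is better."""
    return msr / (n_genes * n_samples * n_times)

def sdb_score(data, triclusters, r=100, rng=None):
    """For each tricluster, compare the mean MSR of r random triclusters of
    identical dimensions with the tricluster's own MSR; average the
    differences, so a larger value means the triclusters stand out more
    clearly from the background matrix."""
    rng = rng or np.random.default_rng(0)
    G, S, T = data.shape
    total = 0.0
    for genes, samples, times in triclusters:
        rand = [
            _msr(data[np.ix_(rng.choice(G, len(genes), replace=False),
                             rng.choice(S, len(samples), replace=False),
                             rng.choice(T, len(times), replace=False))])
            for _ in range(r)
        ]
        total += np.mean(rand) - _msr(data[np.ix_(genes, samples, times)])
    return total / len(triclusters)
```

A coherent (low-MSR) tricluster embedded in noisy background data yields a clearly positive SDB value under this sketch.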
A higher hit score signifies that more genes in T participate in a canonical pathway. Enriched terms were found for most resultant triclusters (96 % and 94 % of all resultant triclusters for datasets 2 and 3, respectively). We used an analogous hit score to compare the algorithms using transcription factor binding site (TFBS) enrichment, computed from a tricluster T and its enriched TRANSFAC matrix i, where |T| is the total number of genes in tricluster T. A higher hit score here signifies that more genes in T are regulated by a common transcription factor. EMOA-\u03b4-TRIMAX yields more significant triclusters than the other algorithms in terms of hit scores computed from the enriched KEGG pathways and TRANSFAC matrices for each of the real-life datasets, because a higher percentage of triclusters obtained by the proposed algorithm have a smaller p-value than those produced by the other algorithms. Particularly striking is the inverse trend of hit scores in the TFBS enrichment observed with EMOA-\u03b4-TRIMAX, which has by far the largest population at the lowest p-values, compared with the other algorithms, where an increasing number of clusters is found with increasing p-values. We computed Hit(KEGG) and Hit(TF) for each resultant tricluster using KEGG pathway and TFBS enrichment results, respectively. For each tricluster (T) we generated 100 random gene lists having the same size as the tricluster. The hit scores for each randomly generated gene list were computed using KEGG pathway and TFBS enrichment results. As a final step, we have applied the non-parametric Mann-Whitney-Wilcoxon test to compute the significance of the difference between these two sets of hit scores in terms of p-values. Time series microarray experiments are performed to measure the expression profiles of genes at a set of time points. At each time point, the experiments are often repeated a certain number of times, which in turn yields the expression profiles of the genes over a set of biological replicates at each time point.
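The hit score described above reduces to a simple fraction, the share of a tricluster's genes annotated to one of its enriched terms (a KEGG pathway or a TRANSFAC matrix):

```python
def hit_score(tricluster_genes, term_genes):
    """Fraction of a tricluster's genes annotated to one of its enriched
    terms (a KEGG pathway or a TRANSFAC matrix). Higher means more of
    the tricluster shares the pathway or the common regulator."""
    t = set(tricluster_genes)
    return len(t & set(term_genes)) / len(t)
```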
Though the expression profiles of these biological replicates are measured at the same time point with the experimental setup unchanged, peculiarities in the experimental protocol or physiological variation of the population may cause disparities in the expression profiles of technical or biological replicates, respectively. Thus, grouping those replicates which exhibit similar expression profiles might play an important role in identifying replicates that behave similarly. This enables us to retrieve biologically meaningful information from these samples rather than leveling effects by forcing together samples exhibiting dissimilar expression profiles. Here, we have tried to unravel why not all replicates always appear as members of each resultant tricluster (see the corresponding figure). To detect key genes, we have first represented each tricluster by its eigen-gene and then computed the Pearson correlation coefficient between each gene of the tricluster and its eigen-gene. We then ranked the probe-ids in descending order of Pearson correlation coefficient. We consistently observed that the genes corresponding to, for instance, the 10 top-most probe-ids exhibited clear functional characteristics with relevance for cardiac development when mapped to GO biological processes (GOBPs) or metabolic pathways (from KEGG). Therefore, we considered them as \u201ckey genes\u201d of that tricluster. Usually, no similarly clear categorizations were found for all the genes of one tricluster. For instance, if we perform a biological process enrichment test using all genes of tricluster 64, we would not find biological processes like S-adenosylhomocysteine or lipoprotein metabolic processes among the enriched ones. Overall, EMOA-\u03b4-TRIMAX outperforms the other algorithms when applied to the four synthetic datasets as well as the three real-life datasets used in this work.
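The eigen-gene ranking can be sketched with NumPy's SVD: the text says the eigen-gene is obtained via singular value decomposition, and one standard convention (as in WGCNA) takes the first right-singular vector of the row-centered gene \u00d7 condition matrix; the sign fix and the function name are our additions:

```python
import numpy as np

def key_genes(expr, gene_ids, top=10):
    """Rank genes of a tricluster by Pearson correlation with the
    tricluster's eigen-gene (first right-singular vector of the
    row-centered gene x condition matrix) and return the top ones."""
    centered = expr - expr.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigen_gene = vt[0]
    # resolve the sign ambiguity of the singular vector
    if np.corrcoef(expr.mean(axis=0), eigen_gene)[0, 1] < 0:
        eigen_gene = -eigen_gene
    cors = np.array([np.corrcoef(row, eigen_gene)[0, 1] for row in expr])
    order = np.argsort(cors)[::-1]
    return [gene_ids[i] for i in order[:top]]
```

Genes tightly following the tricluster's dominant expression trend rank first; a gene unrelated to that trend falls to the bottom.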
In this work, we have shown that EMOA-\u03b4-TRIMAX, the improved version of our previously proposed triclustering algorithm \u03b4-TRIMAX, outperforms existing approaches. Moreover, after retrieving groups of co-expressed and co-regulated genes over a subset of samples and across a subset of time points from a microarray gene expression dataset of hiPSC-derived cardiomyocyte differentiation, we have used the singular value decomposition method to detect tricluster key genes, most of which have already been shown or inferred to play instrumental roles in cardiac development. Thus, the other identified key genes can be hypothesized to be meaningful in this context as well, which needs to be experimentally validated. Furthermore, the enrichment analysis for the identified key genes of each tricluster yielded not only a set of biological processes associated with stem cell differentiation into cardiomyocytes but also a set of metabolic processes, the majority of which are known to play crucial roles in preventing cardiac diseases. Thus, the identified metabolic processes can provide insights into potential therapeutic strategies for the treatment of cardiovascular diseases. Moreover, the triclusters for which the identified key genes are found to be involved in heart development may help to unravel regulatory mechanisms during different stages of cardiomyocyte development."}
+{"text": "Monitoring healing progression after a musculoskeletal injury typically involves different types of imaging, but these approaches suffer from several disadvantages. Isolating and profiling transcripts from the injured site would abrogate these shortcomings and provide enumerative insights into the regenerative potential of an individual\u2019s muscle after injury. In this study, a traumatic injury was administered to a mouse model and healing progression was examined from 3 hours to 1 month using high-throughput RNA-Sequencing (RNA-Seq). Comprehensive dissection of the genome-wide datasets revealed the injured site to be a dynamic, heterogeneous environment composed of multiple cell types and thousands of genes undergoing significant expression changes in highly regulated networks. Four independent approaches were used to determine the set of genes, isoforms, and genetic pathways most characteristic of different time points post-injury, and two novel approaches were developed to classify injured tissues at different time points. These results highlight the possibility to quantitatively track healing progression. Lower-limb musculoskeletal injuries (LLMIs) are common amongst athletes and military personnel. Methods to unambiguously determine injury state and healing progression can inform effective treatment decisions and rehabilitative strategies, as well as prevent premature return-to-activity, lowering the risk of reinjury. Current approaches for gauging injury severity and healing progress have primarily focused on three-dimensional imaging, whereas transcript profiling can capture in-vivo transcriptional patterns from small tissue samples. Herein, a traumatic injury was administered to the tibialis anterior of a young, healthy mouse model and the tissue was extracted at different times ranging from 3 hours to 672 hours (1 month).
A portion of the tissue (<5 mg) was then processed, mRNA extracted, and polyadenylated RNA fractions were prepared into strand-specific sequencing libraries. The libraries were then sequenced and analyzed, and the adaptive transcriptional responses and their associated pathways were integrated to construct a comprehensive view of the injury and healing progression through time. RNAs extracted from the injured muscle serve as excellent candidates for monitoring injury severity and permit quantitative insights. With the goal of developing an unbiased method for tracking healing progression, the generated datasets were utilized as training data and twelve additional RNA-Seq datasets were generated where the time points were blinded to act as test data. Two traditional bioinformatics approaches (principal component analysis (PCA) and support vector machine (SVM) multiclass classification) and two novel gene signature methods were used to determine the set of genes and isoforms most characteristic of healing state post-injury. The PCA, SVM, and time point signatures methods were then re-applied to the datasets at the pathway level to identify pathways that were differentially activated over time. Ultimately, the time point signatures method enabled accurate classification of 10 out of the 12 test samples, and led to the identification of 370 pathways with activation levels that varied significantly (p < 0.05) at various time points after injury. As traumatic LLMIs present variable healing trajectories for different individuals, using an unbiased transcript-based tracking approach at both the gene and pathway level can enable quantitative classification of injury severity as well as inform therapeutic efficacy. Administration of a freeze injury to the tibialis anterior (TA) muscle of ten-week-old C57BL/6 mice yielded highly reproducible expression profiles across replicates (R2 > 0.95).
The derived mRNAs were also validated using quantitative PCR (qPCR) for multiple genes (n = 25) through three time points and biological replicates from different samples, and showed excellent agreement with the RNA-Seq data. cDNA was produced and sequencing libraries were prepared; each library was then deep sequenced, mapped to the mouse reference genome (mm9) using TopHat and assembled with Cufflinks. To obtain comprehensive, unbiased views of the different gene expression programs and pathways associated with the injury, genes with expression changes as identified by CuffDiff 2 were examined. Comparison of the injured samples against the controls for the early time points showed a variety of gene expression profiles, including several pro-inflammatory programs. A significant fraction of the upregulated genes in the early and middle time periods can be ascribed to invading immune cells. Migrating fibroblasts play an essential role in tissue remodeling after muscle injury, through production of new extracellular matrix (ECM) components and development of a phenotype that contracts the surrounding matrix. The changes to the physical microenvironment and cytokines from resident and invading cells, along with muscle regulatory factors and numerous transcription factors, shape the regenerative response. Summation of the different transcriptional networks for all of the time points shows the injury site is a complex environment with multiple cell types executing a wide variety of functions. As the observed gene expression dynamics displayed excellent agreement with previous muscle injury studies and showed unique temporal kinetics, an unbiased bioinformatics strategy for tracking healing progression using gene expression data after an LLMI was developed. The previously generated 68 datasets were utilized as training data and twelve additional RNA-Seq datasets were generated where the time points were blinded to act as test data.
The twelve test datasets corresponded to three different time points and represented a mix of injured and uninjured samples. Four methods were developed for evaluation of the test datasets: Support Vector Machines (SVM), Principal Component Analysis (PCA), and two time point signatures methods. A neural network approach was considered, but insufficient training datasets were available for proper training. An SVM classifier was developed and the best performance was obtained when the data was filtered to include all significant genes (see Methods). The weighted SVM calls from each pair of classifiers were summed, and the time point with the highest number of weighted votes was designated as the final classification call. The performance of the SVM classifier is illustrated in the corresponding figure. Principal component analysis (PCA) was performed on both the training and test datasets. The PCA results suggest several additional rules for evaluating the confidence of sample classification decisions. For both the PCA and SVM approaches, samples at 3 h and 504 h cluster near the controls. Similarly, adjacent time points are near to each other in the multi-dimensional feature space (in the case of PCA) and kernel space (in the case of SVM). In the time point-weighted signatures method, genes that underwent a dynamic change as a result of the injury were assigned a score for each time point (see Methods). The time point-weighted signatures method assigns normalized weights to the magnitude of the fold change between the injured and control samples and uses the calculated weights to match gene profiles between training and blinded samples (see Methods). Using this approach, specific genes whose changes in expression are useful for classifying a blinded time point could be determined.
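The one-vs-one weighted voting described for the SVM classifier can be sketched as below. A minimal Pegasos-style linear SVM stands in for the actual (unspecified) SVM implementation, and weighting each pairwise call by the absolute decision value is our assumption about how the "weighted votes" were formed:

```python
import numpy as np
from itertools import combinations

def train_linear_svm(X, y, epochs=300, lam=0.01):
    """Minimal Pegasos-style linear SVM; y must be in {-1, +1}.
    The bias is folded in via a constant-1 feature."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xa.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(Xa, y):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)          # regularization shrinkage
            if yi * (xi @ w) < 1:           # hinge-loss violation
                w += eta * yi * xi
    return w

def one_vs_one_vote(X_train, y_train, x_test):
    """Train one classifier per pair of time points, weight each call by the
    absolute decision value, and return the label with the largest summed vote."""
    votes = {}
    for a, b in combinations(sorted(set(y_train)), 2):
        mask = np.isin(y_train, [a, b])
        w = train_linear_svm(X_train[mask],
                             np.where(y_train[mask] == a, 1.0, -1.0))
        score = np.append(x_test, 1.0) @ w
        call = a if score > 0 else b
        votes[call] = votes.get(call, 0.0) + abs(score)
    return max(votes, key=votes.get)
```

With K time points this trains K(K-1)/2 pairwise classifiers; a blinded sample's label is the one accumulating the most margin-weighted wins.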
Examples of the representative genes include TCDD-Inducible Poly(ADP-Ribose) Polymerase (Tiparp) at 3 h, FOS-Like Antigen 1 (Fosl1) at 10 h, Nicotinamide Riboside Kinase 2 (Nmrk2) at 24 h, Insulin-like Growth Factor 2 Receptor (Igf2r) at 48 h, Interferon-Induced Protein 35 (Ifi35) at 72 h, Phosphoglucomutase 5 (Pgm5) at 168 h, Collagen Type VI Alpha 6 (Col6a6) at 336 h, and Myosin Light Chain 10 (Myl10) at 672 h. These genes had a high fold change for a single time point, and low fold changes of injured versus control for each of the other time points. Genes such as these were then selected to differentiate between time points because pairwise profile comparisons at adjacent time points are of greater interest than the global expression profile of a gene across multiple time points. The overall performance of the four sample classification methods is illustrated in the corresponding figure. The replicates at 168 h correlated well with each other (R2 = 0.88), indicating the observed variation at 168 h is biologically representative. Further inspection of the genes from the 168 h time point that contributed the highest loadings to the overall variance showed enrichments in several pathways such as angiogenesis, ECM remodeling, immune response and endocytosis. A comparison with qPCR for multiple genes (n = 25) and biological replicates from different samples was also performed. The four classification methods gave inconsistent results for the injured 168 h samples, suggesting that cells at that time may be undergoing transition states that lead to high variability between replicate samples. Though the injured 168 h samples were challenging to classify, the successful classification of multiple uninjured control samples, injured 3 h, and injured 10 h datasets advances the overarching goal of identifying easily accessible biomarkers for healing status and early triage.
The ability to classify a small volume of tissue, such as a muscle biopsy from a fine-needle aspirate, to the correct post-injury time point serves as a step toward the eventual goal of translating molecular ontology networks into quantitative diagnostics. The time point classification analysis with the PCA, SVM, and time point signature methods was repeated at the pathway level to identify whether gene pathways could differentiate time points after injury with greater accuracy than at the gene level. Generally, analysis at the pathway level carries less statistical power than analysis at the gene level, as pathway expression values are the mean of the expression values of individual genes, and consequently have a greater amount of noise. The SVM approach (results not shown) led to a number of samples misclassified at the pathway level. The Time Point-Specific and Time Point-Weighted signatures methods at the pathway level were able to classify 10 of the 12 test samples correctly. Multiple gene expression programs and pathways have been linked to muscle repair and regeneration, each of which acts with differing kinetics and degrees of activation for different individuals. This feature, in conjunction with limitations of available imaging tools, has prevented quantitative classification of healing progression for individuals who sustain an LLMI. Herein, high-resolution RNA-Seq was utilized to track the different stages of healing after a traumatic injury and, using this approach, multiple types of cellular programs that confer different properties to the muscle repair and regeneration system were able to be monitored through time. In contrast to imaging modalities that provide low resolution and little information on the various gene expression programs, accurate classification of uninjured and injured tissues was carried out without a priori knowledge.
Several different bioinformatics classification methods were utilized to dissect the genomic datasets, and metrics of performance for each schema were assessed. This methodology may help clarify or further enable diagnosis of how a given patient is progressing towards healing after a traumatic injury, as well as enable a clinician to determine the relative timing of different muscle repair and regeneration networks that potentiate a return-to-activity decision. Moreover, the approach can be coupled to guide treatments and evaluate therapeutic efficacies. The RNA-Seq results demonstrated that the injury site is highly dynamic, with multiple gene expression programs contributing to healing progression, including several with antagonistic behavior. To gain further insight into the transcriptional networks, pathway analysis was performed and showed the networks progressively migrating from a pro-inflammatory protective state in the early period after the injury to an anti-inflammatory, supportive state in the middle and late time periods. The observed networks are consistent with previous studies and highlight the possibility to quantitatively track healing progression via transcript profiling using high-throughput sequencing. To test the robustness of the generated datasets and the ability to classify a given sample, additional datasets were generated and the corresponding time points after the injury were blinded. Four different bioinformatics techniques were then utilized to classify the blinded samples and the performance of the classification schemas was quantified. The novel time point signature approaches outperformed the SVM and PCA classifiers, and the difference in performance of the two time point signatures approaches suggests the need for an optimized model to weight the dot products across pairwise sample comparisons.
The model for the time point-weighted signatures method performed better because it accounted for FPKM expression levels for different genes in addition to the number of time points for which a given gene exhibited a fold change. A future direction of research might aim to improve this model via a grid search algorithm designed to determine the optimal set of weights for a given gene profile. The four classification methods gave inconsistent results for the injured 168 h samples, suggesting that cells at that time may be undergoing transition states that lead to high variability between replicate samples. The variability from these states makes a single profile more difficult to determine for later stages of muscle repair and regeneration. To further develop predictive power for these time points, single cell transcriptomic profiling could be employed. All experimental protocols were approved by the USARIEM Institutional Animal Care and Use Committee (IACUC). Male C57BL/6J mice were obtained from The Jackson Laboratory. Mice were housed one per cage in the USARIEM animal facility at a constant Ta = 24 \u00b1 1 \u00b0C, 50% relative humidity, with a 12 h/12 h (0600-1800 h) light/dark cycle. Standard laboratory rodent chow and water were provided ad libitum. Cages were supplied with Alpha-dri and cob blend bedding for nesting and enrichment and plastic houses for warmth and comfort. Food intake and body mass were recorded daily. Mice were cared for in accordance with the Guide for the Care and Use of Laboratory Animals in a facility accredited by the Association for the Assessment and Accreditation of Laboratory Animal Care (AAALAC). Prior to administration of the freeze injury, mice were anesthetized with a combination of fentanyl (0.33 mg/kg), droperidol (16.7 mg/kg), and diazepam (5 mg/kg). The TA muscle was exposed via a 1 cm long incision in the aseptically prepared skin overlying the TA muscle.
Freeze injury was performed in the left hind limb; the non-injured contralateral leg served as one control. Freeze injury was induced by applying a 6 mm diameter steel probe to the belly of the TA muscle (directly below the incision site) for 10 seconds. Following injury, the skin incision was closed using 6-0 plain gut absorbable suture. The analgesic buprenorphine (0.1 mg/kg SQ) was administered using a 25-27 gauge needle prior to recovery from anesthesia. Mice were euthanized at each time point post-injury via CO2 inhalation (2 liters/min), thoracotomy and exsanguination. TA muscles were removed from the injured and contralateral limbs, weighed, and a portion of the tissue was homogenized in Trizol, snap frozen in liquid N2, and stored at -80 \u00b0C. Total RNA was isolated from the homogenized tissue in Trizol using the miRNeasy Mini Kit (Qiagen) as per the manufacturer\u2019s instructions. RNA concentration and integrity were measured with a Nanodrop spectrophotometer (Nanodrop 2000c) and Bioanalyzer (Agilent 2100). If a sample did not pass the quality threshold (RIN > 7), it was omitted from further processing. This quality check resulted in several time points that only had two tissues, such as the 3 hour and 168 hour time points, or three tissues, such as the 48, 336, 504 and 672 hour time points. At least 1 \u03bcg of isolated total RNA was used to produce strand-specific cDNA libraries using the Truseq protocol, as per the manufacturer\u2019s instructions and as previously described. The RNA-seq datasets were separated into a training dataset and a test dataset.
The training datasets consisted of 37 control samples, two injured 3 h samples, five injured 10 h samples, four injured 24 h samples, three injured 48 h samples, four injured 72 h samples, two injured 168 h samples, three injured 336 h samples, three injured 504 h samples, and four injured 672 h samples. RNA-Seq reads were aligned to the reference mouse genome (mm9) using the TopHat aligner, producing alignments in BAM format. All analysis was performed using the mm9 mouse assembly and annotation as reference; the UCSC mm9 build was utilized with Cufflinks and Cuffmerge. The aligned reads were then analyzed with the Cufflinks software suite (v2.1.1). The Cufflinks tool was first used to assemble transcripts for each replicate and time point. Separate assemblies were generated for injured and control conditions. Next, Cuffmerge was applied to the assembled transcripts to create a single merged transcriptome annotation for each condition (injured or control). Third, Cuffdiff was used to find differentially expressed genes and isoforms across time points and conditions, as well as to detect differential splicing and alternative promoter usage. Cuffdiff was executed using the merged transcriptome assembly along with BAM files from the TopHat tool for each individual replicate. Last, the CummeRBund R package was used to compute statistics on differentially expressed genes and isoforms. All reference information for Mus musculus was downloaded from the Illumina iGenome site: http://cufflinks.cbcb.umd.edu/igenomes.html. Replicates that failed the correlation-based quality filter described below (R2 \u2265 0.95) were excluded from the analysis. Replicates were then merged into aggregate gene and isoform expression values. The median FPKM was computed across each set of replicates. If this value was 0, the aggregate FPKM for the time point/condition was set to 0. Otherwise, the mean FPKM was computed across the replicates.
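The replicate-merging rule (zero if the median FPKM is zero, otherwise the mean) translates directly into a small helper:

```python
import numpy as np

def aggregate_replicates(fpkm_values):
    """Merge replicate FPKMs for one time point/condition: 0 when the
    median across replicates is 0, otherwise the mean."""
    vals = np.asarray(fpkm_values, dtype=float)
    return 0.0 if np.median(vals) == 0 else float(vals.mean())
```

The median guard keeps a single spurious non-zero replicate from creating an aggregate signal for a gene that is effectively unexpressed.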
The data was further filtered to limit the analysis to genes and isoforms with a significant change in expression. To meet this requirement, a gene/isoform was required to exhibit FPKM \u2265 1 at one or more of the 9 time points with a q value less than 0.05. Additionally, the gene/isoform must have undergone a two-fold (or higher) change in FPKM at one or more time points. These criteria resulted in 5,668 significant genes, as described in the \u201cGlobal Transcriptional Dynamics\u201d section, as well as 7,258 significant isoforms. Gene and isoform FPKM values derived via the Cufflinks analysis were filtered to remove uninformative replicates. A pairwise Pearson correlation was computed across replicates for a time point (3 h - 672 h) for each condition; any replicate that did not correlate with all other replicates at R2 \u2265 0.95 was excluded. Filtered replicates were analyzed with the standalone version of the GSEA tool. Differentially expressed genes, as found by the Cuffdiff analysis described above, were grouped by time point. These gene groups were then analyzed with the DAVID Functional Annotation Tool. The MISO software (0.5.2) was used to identify genes that underwent alternative splicing. A two-fold change in FPKM between an injured time point sample and the corresponding control, as well as a minimum FPKM of 0.5 at one of the 10 time points, was used as the criterion to determine differential expression. As an alternative filtering technique, only genes that were differentially expressed at a single time point were used for the analysis. In the third filtering approach, isoforms rather than genes were examined, and an initial set of 30,564 isoforms was filtered to a set of 7,541 isoforms with differential expression. In a fourth approach, this set of 7,541 isoforms was further filtered to a set of 1,908 isoforms that exhibited differential expression at only a single time point.
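The significance filter can be sketched as a per-gene predicate; reading "two-fold (or higher) change" as a fold change of injured/control that is \u2265 2 or \u2264 0.5 is our assumption:

```python
def is_significant(fpkm_by_tp, qval_by_tp, fold_changes):
    """Keep a gene/isoform if it reaches FPKM >= 1 at some time point with
    q < 0.05 AND shows a >= two-fold change (in either direction) at one
    or more time points."""
    expressed = any(f >= 1 and q < 0.05
                    for f, q in zip(fpkm_by_tp, qval_by_tp))
    changed = any(fc >= 2 or fc <= 0.5 for fc in fold_changes)
    return expressed and changed
```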
For each of these filtering approaches, the four classification techniques discussed below were applied to both the raw FPKM values in the set of genes/isoforms and to the log2 fold change of injured/control. For each of the four methods, the RNA-seq data was filtered in several ways to determine the optimal subset of data to use for correctly predicting the unknown time point. The \u201cone vs. one\u201d and \u201cone vs. all\u201d approaches were used to train a cascade of support vector machines to classify the blinded RNA-Seq time points, with each call weighted by the margin assigned by the support vector machine. The weighted SVM calls from each pair of classifiers were summed, and the time point with the highest number of weighted votes was designated as the final classification call. Principal component analysis was performed on the data using the Statistics Toolbox in MATLAB (v.2013a). Replicates for each time point in the training dataset were combined. If the median FPKM value across the replicates was zero, the combined FPKM was set to 0. Otherwise, replicate FPKM values were averaged for each time point. Two approaches were implemented to characterize the training data expression profiles. In the time point-weighted signatures method, any gene with FPKM greater than or equal to 0.5 and a log fold change in expression greater than 2 was assigned a score equal to one over the number of time points where the gene exhibited a significant change in expression. The number of time points with a significant change in expression was calculated separately for upregulated (injured/control) and downregulated genes. For example, if a gene was upregulated at 3 h and 10 h after injury and downregulated 504 h after injury, the weighting would be \u00bd for the upregulated time points and 1 for the downregulated time point. The gene expression profile of a test dataset was then compared to each of the profiles for the training dataset.
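The count-based weighting illustrated by the \u00bd / 1 example above translates to a small helper (the function name and representation are ours); `change_flags` maps each time point to +1 for upregulation and -1 for downregulation:

```python
def per_gene_weights(change_flags):
    """Each changed time point gets a weight of one over the number of
    time points changed in the same direction, computed separately for
    up- and downregulation (as in the 1/2 and 1 example)."""
    ups = [tp for tp, d in change_flags.items() if d > 0]
    downs = [tp for tp, d in change_flags.items() if d < 0]
    weights = {}
    for tp in ups:
        weights[tp] = 1.0 / len(ups)
    for tp in downs:
        weights[tp] = 1.0 / len(downs)
    return weights
```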
A match score of zero was initialized for each comparison against a training dataset. In the case of a match, the score was incremented by the upregulated or downregulated weight of the gene. If a gene did not exhibit a significant expression fold change in either dataset, the match score was incremented by 0.5. This number was determined via least squares regression as the optimal weight to assign in the case of no fold change, based on the composite profile of the training data. Ultimately, the test dataset was classified as the time point with the highest match score in the training data. The time point-weighted signatures method differs from the time point-specific signature method in how weights are assigned to the dot product of gene expression vectors. Rather than weighting a gene in a training dataset based on the number of time points at which the gene was upregulated or downregulated, weights were assigned based on the magnitude of the fold change between the injured and control samples. The maximum fold change in expression for the gene across the nine time points was determined. The fold change for all remaining time points (injured/control) was computed and normalized as a fraction of the maximum fold change. This normalized value was used to weight a match between a training and blinded sample at the gene of interest. A set of genes was next identified for which the training data profiles were upregulated with a fold change greater than 2 or downregulated with a fold change of less than \u22122 at a single injured time point. The genes were assigned a score of 1 (if upregulated) or \u22121 (if downregulated) at that time point, and a score of 0 at all other time points. This produced a matrix of scores. Subsequently, a similar matrix was computed for the testing datasets. Analysis was restricted to the subset of genes with a significant fold change at a single time point, as observed in the training data.
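The match-score logic described above might be sketched like this (the data structures are our own simplification of the method):

```python
# Sketch of the match score: a matching direction ("up"/"down") adds the
# gene's weight (1 / number of significant time points), and "no significant
# change in either dataset" adds 0.5, the least-squares-derived weight.

def match_score(train_calls, test_calls, weights):
    """train_calls/test_calls map gene -> 'up', 'down' or None;
    weights map (gene, direction) -> 1/(# significant time points)."""
    score = 0.0
    for gene, train_dir in train_calls.items():
        test_dir = test_calls.get(gene)
        if train_dir is None and test_dir is None:
            score += 0.5  # no-change weight from the least squares fit
        elif train_dir is not None and train_dir == test_dir:
            score += weights[(gene, train_dir)]
    return score
```

The blinded sample is then assigned to the training time point with the highest score.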
Time points with a positive fold change greater than two were assigned a score of 1; time points with a negative fold change were assigned a score of \u22121; and the remaining time points were assigned a score of zero. The element-wise difference between the training and test score matrices was computed, and the time point in the test data with the smallest absolute distance from the training data was selected. The time point-specific signatures method was utilized to identify the set of genes most responsible for differences between gene expression profiles across time points. In this method, genes with the highest weights are differentially expressed at only one of the nine time points, and are consequently most informative for distinguishing which time point a blinded sample belongs to. Mus musculus pathways were selected from version 4.0 of the Broad Molecular Signatures Database (MSigDB). In total, 1,320 pathways were examined. The mean pathway expression score was calculated by taking the mean FPKM expression value for all member genes in the pathway. For each pathway, a two-tailed t test was performed to determine the Z-score across the 9 time points; 1,189 pathways with Z-scores greater than 2 or less than \u22122 were identified. To find the pathways that account for the majority of the variance between the time points, principal component analysis was performed on the pathway Z-scores following the approach described above for gene-level analysis, and the pathways with the highest loadings for each time point were identified and plotted. The normalized time point signatures algorithm was applied to the significant pathways. Pathways with a Z-score less than \u22122 or greater than 2, in conjunction with a fold change greater than 2 at one or more time points, were identified.
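A minimal sketch of the score-matrix comparison (the +1/-1/0 coding and absolute-distance selection follow the description above; names are illustrative):

```python
# Sketch: genes are scored +1/-1/0 per time point, and the blinded sample is
# assigned to the training time point whose score column is closest in
# absolute (L1) distance.

def closest_time_point(train_matrix, test_scores):
    """train_matrix: dict time_point -> list of per-gene scores (+1/-1/0);
    test_scores: list of per-gene scores for the blinded sample."""
    def l1(col):
        return sum(abs(a - b) for a, b in zip(col, test_scores))
    return min(train_matrix, key=lambda tp: l1(train_matrix[tp]))
```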
The 370 pathways that met these criteria are illustrated in the accompanying figure. The normalized FPKM values computed by Cufflinks were used to generate heatmaps of upregulated and downregulated genes for the early (3\u2009h \u2013 24\u2009h), middle (48\u2009h \u2013 168\u2009h) and late (336\u2009h \u2013 672\u2009h) phases. The data were filtered to include only genes with FPKM >1 at one of the time points, and significant genes were identified as those with a fold change of 2 or higher at one of the time points in the early, middle, or late phase. The log2 (fold change) values were clustered into heatmaps using the R heatmap.2 library. For histological analysis of damage to the tibialis anterior muscle, an additional cohort of C57BL/6J mice (n\u2009=\u20093 for each injury time point and sham) were administered Evans blue dye (EBD) (1% weight/volume solution to yield 1\u2009mg EBD/10\u2009g of body weight) intraperitoneally two hours prior to experimental injury or sham treatment. At each collection time point, muscles from injured and non-injured limbs were collected for analysis as described above. The removed tibialis anterior muscles were fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) for 2\u2009hours and then washed three times in ice-cold PBS. The fixed samples were placed in cryomolds (Tissue-Tek) and covered in optimal cutting temperature (OCT) compound. The OCT-covered samples were then frozen in semi-frozen isopentane and stored at \u221280\u2009\u00b0C until processing. Muscle samples were cut into 8-\u03bcm-thick sections on a cryostat and adhered to slides. Each section was stained with skeletal muscle actin antibodies, goat polyclonal anti-rabbit IgG conjugated with fluorescein isothiocyanate (FITC), and the nuclear stain 4\u2032,6-diamidino-2-phenylindole (DAPI). Sections were visualized on an LSM 700 microscope. EBD was detected with laser excitation at a wavelength of 540\u2009nm and emission collection at 590\u2009nm.
EBD-positive cells were counted at 20X magnification. The percentage of EBD-positive area was calculated by dividing the area of EBD staining by the total skeletal muscle area, as defined by skeletal muscle actin staining. This fraction of skeletal muscle staining positive for EBD was used as the percentage of muscle that was injured. How to cite this article: Aguilar, C. A. et al. In vivo Monitoring of Transcriptional Dynamics After Lower-Limb Muscle Injury Enables Quantitative Classification of Healing. Sci. Rep. 5, 13885; doi: 10.1038/srep13885 (2015)."}
+{"text": "Circadian oscillation in baseline gene expression plays an important role in the regulation of multiple cellular processes. Most of the knowledge of circadian gene expression is based on studies measuring gene expression over time. Our ability to dissect molecular events in time is determined by the sampling frequency of such experiments. However, the real peaks of gene activity can be at any time on or between the time points at which samples are collected. Thus, some genes with a peak activity near an observation point have their phase of oscillation detected with better precision than those which peak between observation time points. Separating genes for which we can confidently identify peak activity from ambiguous genes can improve the analysis of time series gene expression. In this study we propose a new statistical method to quantify the phase confidence of circadian genes. The numerical performance of the proposed method has been tested using three real gene expression data sets. Analysis of periodic patterns is an essential part of many studies of gene expression involving timeline sampling or targeting of rhythmically expressed genes. Recent publications report a large proportion of the entire transcriptome oscillating in a circadian (i.e. approximately daily) rhythm. The bootstrap can be used to obtain standard errors for estimators, confidence intervals for unknown parameters, and p-values for test statistics under a null hypothesis. Several bootstrap methods have been proposed for time series data. The maximum entropy bootstrap first sorts the original data while storing the ordering index, generates points in the maximum entropy bootstrap interval, then calculates sample quantiles of the maximum entropy density at the generated points and sorts them. Using the ordering index of step 1, the sorted sample is reordered; this process preserves the dependence relationships among observations in the original data. Steps 2 to 6 are repeated many times; in our analysis we use R = 999.
A complete simulated example illustrating each step of the algorithm can be found in the supplementary material. Let \u03c4 denote the test statistic computed for a particular sample. We then calculate a bootstrapped p-value in the situation where large values of \u03c4 support the alternative hypothesis. The process of calculating the p-value consists of the following steps. First, specify a way to generate bootstrap samples that resemble the real data while satisfying the null hypothesis H0; in our case we use the Maximum Entropy Bootstrap Algorithm, and we let MEBA denote this bootstrap data-generating process. Second, using MEBA, generate R = 999 bootstrap samples indexed by j, and from each of them compute a bootstrap test statistic \u03c4j*. Third, compute the bootstrapped p-value as p = (1 + #{j : \u03c4j* \u2265 \u03c4})/(R + 1), rather than the classical formula. Finally, reject the null hypothesis H0 if p \u2264 \u03b1, where \u03b1 is a given constant satisfying 0 < \u03b1 < 1; in general we take \u03b1 = 0.05. This algorithm is used to assess the significance of the correlation between a gene expression time series and one of the cosine waves with peak time parameter t. For the bootstrap percentile confidence interval, let k be the largest integer \u2264 (R + 1)\u03b1. Then we define the empirical \u03b1 and (1 \u2212 \u03b1) quantiles by the kth and (R + 1 \u2212 k)th ordered values; for R = 999 and \u03b1 = 2.5% these are the 25th and 975th ordered elements. We have now all the pieces needed to accomplish the phase confidence analysis.
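The bootstrapped p-value computation can be sketched as follows; the resampler below is an ordinary with-replacement bootstrap stand-in, not the maximum entropy bootstrap the paper actually uses:

```python
# Sketch of p = (1 + #{tau_j* >= tau}) / (R + 1): generate R bootstrap
# samples, recompute the statistic on each, and count exceedances.

import random

def draw_with_replacement(x, rng):
    """Stand-in resampler (ordinary bootstrap, not MEBA)."""
    return [rng.choice(x) for _ in x]

def bootstrap_p_value(x, statistic, resample, R=999, seed=0):
    rng = random.Random(seed)
    t_obs = statistic(x)
    exceed = sum(statistic(resample(x, rng)) >= t_obs for _ in range(R))
    return (1 + exceed) / (R + 1)
```

With a constant sample every resampled statistic equals the observed one, so the p-value is exactly 1.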
Algorithm 1 summarizes the details of the proposed approach.
Algorithm 1: Confidence in phase definition for periodicity in gene expression time series
Data: \u03c7 = {x1, \u2026, xn}: n realizations of a gene expression time series, the number of replications R, and a confidence level \u03b1.
Result: Bootstrapped p-value, bootstrap percentile confidence interval.
1 for b \u2190 1 to R do
2 \u2003 using the maximum entropy bootstrap algorithm, generate a bootstrap sample \u03c7b*;
3 \u2003 calculate the maximum correlation;
4 \u2003 estimate the peak time;
5 calculate the bootstrapped p-value;
6 if p-value \u2264 \u03b1 then
7 \u2003 the gene is considered circadian;
8 calculate the bootstrap percentile confidence interval;
9 if there exists i \u2208 {0, \u2026, 5} such that the confidence interval is contained in Gi then
10 \u2003 the gene is assigned to the phase group Gi, where the groups Gj are defined in the text.
Periodicity itself can also be detected with, e.g., Fisher\u2019s g-test, an autocorrelation test or a permutation test. The method was tested on three real data sets, including Brown Adipose Tissue (BAT) and Liver. Each individual data set contains more than 22,000 gene expression profiles, and each profile consists of 12 time points at 4-h intervals. Gene expression has a quiet period at which the overall gene expression activity is lower compared to all other times of the day. Results of phase classification are summarized in the accompanying tables. However, it is even more important that our method can be applied to increase precision of observation in many studies involving timeline observation of gene expression. The sampling frequency still imposes a limitation on our ability to separate molecular events (such as peaks of gene expression) in time. To know the time of peak expression more precisely, the experiment has to be repeated with a higher number of time points. However, with our method we can refine the existing data: for the groups peaking at a certain time we can be confident (at a selected confidence level) that certain genes peak at that time, and filter out genes peaking sometime between our observation time points.
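The bootstrap percentile interval used in Algorithm 1 reduces to picking two order statistics; a sketch under the k = floor((R + 1) * alpha) convention described earlier:

```python
# Sketch of the percentile interval: with R = 999 and alpha = 2.5%, the
# endpoints are the 25th and 975th ordered bootstrap estimates.

import math

def percentile_interval(boot_estimates, alpha=0.025):
    s = sorted(boot_estimates)
    R = len(s)
    k = math.floor((R + 1) * alpha)
    return s[k - 1], s[R - k]  # kth and (R + 1 - k)th ordered values
```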
This confidence is essential for functional annotation of co-expressed genes and can be critical in the analysis of perturbation of gene activity in reaction to environment or medication. We compare the proposed method with some competing algorithms, namely Fisher\u2019s g-test, the permutation test, and JTK-CYCLE. All methods were run on an i7 at 3.40 GHz; the permutation test is implemented in C++. In this paper, we are interested in genes that may have a peak expression coinciding with, or near, one of the observation points. We approximate their expression profiles by an ideal cosine wave with period T = 24h. For the data sets used in this paper, the measurement times are t \u2208 {0,4,8, \u2026, 44}. Since we are interested in the first peak expression time, the possible time points to be considered are t \u2208 {0,4,8, \u2026, 20}. If we solve the equations \ud835\udc9e\u03c6(t) = 0 for t \u2208 {0,4,8, \u2026, 20}, we obtain \u03c6 \u2208 {0, \u03c0/3, 2\u03c0/3, \u03c0, 4\u03c0/3, 5\u03c0/3}; this explains the use of \u03c0/3 as the resolution of the estimated phase. For a given k we can generate 2k cosine waves; we take k = 30, which generates 60 cosine waves. Results are given for R \u2208 {9,99,999} bootstrap replications. Like any method based on resampling, the proposed method can be computationally expensive, because it involves fitting the same statistical method a large number of times on different replications of the original data. The average CPU time increases with the number of generated cosine waves and the number of bootstrap replications: the g-test is fastest, followed by JTK-CYCLE and then the proposed method (one replication). We note here that the computing performance of the proposed method can be enhanced considerably (see Remark 4).
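The candidate phase grid follows directly from the sampling scheme; a small sketch (assuming, as above, period T = 24 h and 4-h sampling):

```python
# A cosine of period T peaking exactly at sample time t has phase
# phi = 2*pi*t/T; with T = 24 and 4-h sampling over the first period this
# yields the six phases {0, pi/3, ..., 5*pi/3} quoted in the text.

import math

def candidate_phases(T=24, step=4):
    return [2 * math.pi * t / T for t in range(0, T, step)]
```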
Our method is not developed for detecting all the circadian genes; rather it detects, with high confidence, the circadian genes for which the peak time (the phase) is near one of the time points, estimates this phase, and constructs a confidence interval for it. This explains the small number of circadian genes detected by our method compared to the competitors.
Remark 2. This experimental design is rather typical for circadian biology. Some experiments collect samples at different intervals, such as 3h or, rarely, every 2h. A higher sampling frequency improves resolution ability, but costs a lot more and is harder to implement.
Remark 3. Gene expression profiles are analyzed independently, thus it is possible that a researcher may find few or no genes confidently peaking at a given time. In fact, in the data sets on which we tested the method, gene expression has a quiet period at which relatively few genes are active.
Remark 4. We note that the computational performance of the proposed method can be enhanced. In fact, if we avoid loops in the R script that process one element per iteration, and instead use the apply family of functions that process whole rows, columns, or lists, the computing time is reduced significantly. In this case we need just 0.001 second to run the method for one replication."}
+{"text": "Gene expression time-course experiments make it possible to study the dynamics of transcriptomic changes in cells exposed to different stimuli. However, most approaches for the reconstruction of gene association networks (GANs) do not propose prior-selection approaches tailored to time-course transcriptome data. Here, we present a workflow for the identification of GANs from time-course data using prior selection of genes differentially expressed over time, identified by natural cubic spline regression modeling (NCSRM). The workflow comprises three major steps: 1) the identification of differentially expressed genes from time-course expression data by employing NCSRM, 2) the use of regularized dynamic partial correlation as implemented in GeneNet to infer GANs from differentially expressed genes and 3) the identification and functional characterization of the key nodes in the reconstructed networks. The approach was applied to a time-resolved transcriptome data set of radiation-perturbed cell culture models of non-tumor cells with normal and increased radiation sensitivity. NCSRM detected significantly more genes than another commonly used method for time-course transcriptome analysis (BETR). While most genes detected with BETR were also detected with NCSRM, the false-detection rate of NCSRM was low (3%). The GANs reconstructed from genes detected with NCSRM showed a better overlap with the interactome network Reactome than GANs derived from BETR-detected genes. After exposure to 1 Gy, the normal sensitive cells showed only a sparse response compared to cells with increased sensitivity, which exhibited a strong response mainly of genes related to the senescence pathway. After exposure to 10 Gy, the response of the normal sensitive cells was mainly associated with senescence and that of cells with increased sensitivity with apoptosis.
We discuss these results in a clinical context and underline the impact of senescence-associated pathways in the acute radiation response of normal cells. The workflow of this novel approach is implemented in the open-source Bioconductor R-package splineTimeR. In general terms, the expression of genes can be studied from a static or a temporal point of view. Static microarray experiments allow measuring gene expression responses only at one single time point. Therefore, data obtained from those experiments can be considered as more or less randomly taken snapshots of the molecular phenotype of a cell. However, biological processes are dynamic, and thus the expression of a gene is a function of time. Compared to static microarray data, the analysis of time-course data introduces a number of new challenges. First, the experimental costs for the generation of data as well as the computational costs increase with the number of introduced time points. Second, hidden correlation caused by co-expression of genes makes the data linearly dependent. Several different algorithms have been suggested to analyze gene time-course microarray data with regard to differential expression in two or more biological groups (e.g. exposed to radiation vs. non-exposed). Nevertheless, many different approaches, including cluster analysis, are currently in use. The aim of the present study was to identify and compare signaling pathways involved in the radiation responses of normal cells differing in their radiation sensitivity that could be used to modulate cell sensitivity to ionizing radiation.
For this, we propose an approach that combines the detection of genes differentially expressed over time, based on statistics determined by natural cubic spline regression (NCSRM), with the subsequent reconstruction of gene association networks. Most exploratory gene expression studies focus only on the identification of differentially expressed genes, treating them as independent events, and do not seek to study the interplay of the identified genes. This makes it difficult to tell which genes are part of the interaction network causal of the studied phenotype and which are the most \u201cimportant\u201d with regard to the context of the investigation. The approach presented here combines the identification of differentially expressed genes with the reconstruction of possible associations between them. Further analysis of the identified GANs then allows hypothesizing which genes may play a crucial role in the investigated processes. This should markedly increase the likelihood of finding meaningful results from an initial observation and help to understand the underlying molecular mechanisms. We applied our workflow to time-course transcriptome data of two normal and well-characterized lymphoblastoid cell lines with normal (20037\u2013200) and increased (4060\u2013200) radiation sensitivity, in order to identify molecular mechanisms and potential key players responsible for different radiation responses. The schematic workflow of the presented novel approach for time-course gene expression data analysis is presented in the accompanying figure. A fraction of the probes was removed due to low expression levels with non-detectable signal intensities, as described in the Methods (Table 1). Pathway enrichment analysis was performed on differentially expressed genes to identify over-represented biological pathways.
The analysis of genes identified with NCSRM revealed 634 and 964 significantly enriched pathways for the cells with increased radiation sensitivity after 1 Gy and 10 Gy irradiation, respectively, and 758 pathways for the normal sensitive cell line after 10 Gy irradiation. For the seven differentially expressed genes of the cell line with normal radiation sensitivity after a 1 Gy dose of irradiation we did not find any significantly enriched pathways. A summary of the pathway enrichment results can be found in the supplementary material. None of the edge probabilities calculated for the seven differentially expressed genes in the cell line with normal radiation sensitivity after 1 Gy irradiation exceeded the considered significance threshold, and hence no network was obtained. For the remaining conditions we were able to obtain association networks, as presented in the accompanying figures. The combined topological centrality measure was used to characterize the biological importance of nodes (genes) in the reconstructed association networks; the 5% highest-ranked genes are listed in the supplementary material. In order to assess the false positive rate, spline regression based differential analyses between technical replicates of each treatment condition and cell line were performed. Here, we can state that the null hypothesis of no differential expression is true for all genes. The q*-level of 0.05 for the Benjamini-Hochberg method then also controls the FWER at an alpha-level of 0.05 (type I error). The evaluation of the two networks derived after 1 Gy irradiation of the cell line with increased sensitivity showed that the network reconstructed with the differentially expressed genes determined using BETR did not contain significantly more common edges than random networks (p = 0.529), whereas the network reconstructed with the differentially expressed genes determined by NCSRM did (p = 0.048).
The networks derived after 10 Gy irradiation of the cell line with increased sensitivity and after 10 Gy irradiation of the normal sensitive cell line contained significantly more edges in common with the Reactome network than random networks did, for both methods. The success of tumor radiation therapy predominantly depends on the total applied radiation dose, but also on the tolerance of the tumor-surrounding normal tissues to radiation. Toxicity towards radiation, which varies greatly on an individual level due to inherited susceptibility, is one of the most important limiting factors for dose escalation in radiooncology treatment. Here, we conducted a time-resolved transcriptome analysis of radiation-perturbed cell culture models of non-tumor cells with normal and with increased radiation sensitivity in order to work out the molecular phenotype of radiation sensitivity in normal cells. Moreover, we present an innovative approach for the identification of GANs from time-course perturbation transcriptome data. The approach comprises three major steps: 1) the identification of differentially expressed genes from time-course gene expression data by employing a natural cubic spline regression model (NCSRM); 2) the use of a regularized dynamic partial correlation method to infer gene association networks from the differentially expressed genes; 3) the identification and functional characterization of the key nodes (hubs) in the reconstructed gene dependency network. The fitted spline model can be evaluated at arbitrary time points in silico and thereby potentially add value to existing data. Incomplete time-course data, e.g. due to the exclusion of samples for technical reasons, which often create major problems for the estimation of a model, are also suitable for fitting the spline regression model as long as enough data points remain in the data set.
This is especially valuable when data for certain time points, derived from a very limited sample source, have been excluded from a time-course data set and cannot be generated again. Our proposed method for the detection of differentially expressed genes over time is based on NCSRM with a small number of basis functions. A relatively low number of basis functions generally results in a good fit of the data and, at the same time, reduces the complexity of the fitted models. Treating time in the model as a continuous variable, the non-linear behavior of gene expression was approximated by spline curves fitted to the experimental time-course data. Considering temporal changes in gene expression as continuous curves and not as single time points greatly decreases the dimensionality of the data and thereby decreases computational cost. In addition, the proposed NCSRM does not require identical sampling time points for the compared treatment conditions. Furthermore, no biological replicates are needed. Therefore, the method is applicable to data generated according to a tailored time-course differential expression study design and to data that were not specifically generated for time-course differential expression analysis, e.g. existing or previously generated data from clinical samples. Thus, the adaptation of the method to differential expression analysis offers the potential to reanalyze existing data and address new questions (https://www.bioconductor.org). Since gene expression is dynamic not only in the treatment group but also in the control group, the inclusion of the time-course control data greatly improves the ability to detect truly differentially expressed genes, as the gene expression values are not referred to a single time point with static gene expression levels only. Comparing a treatment group to time point zero does not provide a proper control over the entire time course, although it is widely practiced.
Amongst a panel, the two lymphoblastoid cell lines that differed with regard to radiation sensitivity after irradiation with 10 Gy were chosen for the present analyses. Concerning qualitative differences in the transcriptomic response of normal sensitive cells and cells with increased sensitivity after treatment with 1 Gy and 10 Gy, pathway enrichment analysis was performed. Differentially expressed genes identified for all considered treatment conditions, except for the normal sensitive cells after exposure to 1 Gy radiation, showed statistically significant enrichment of pathways, most of which were in agreement with known radiation responses such as DNA repair, cell cycle regulation, oxidative stress response or pathways related to apoptosis. Subsequently, we identified the network hubs (i.e. the most important genes) of the GANs by combining three network centrality measures: degree, closeness and shortest-path betweenness. In order to get functional insights into the reconstructed GANs, the top 5% most important nodes were identified after ranking with the combined centrality measure and mapped to the pathways from the interactome database Reactome. A different outcome was observed after irradiation with 10 Gy. For the radiation sensitive cells, three out of the ten top pathways were linked to apoptotic processes with the genes BBC3, BCL2 and TP53 as key players, whereas for the normal sensitive cell line we mainly observed the induction of senescence-related pathways. This indicates that different doses are necessary to induce a similar response in the two cell lines. The activation of senescence genes is a damage response mechanism which stably arrests proliferating cells and protects them from apoptotic cell death.
Although the senescence-associated pathways were not among the most important ones for the treatment condition 10 Gy/increased sensitivity, they were significantly enriched in the GANs of the three conditions 1 Gy/increased sensitivity, 10 Gy/increased sensitivity and 10 Gy/normal sensitivity. All differentially expressed genes related to senescence-associated pathways are shown in the supplementary material. The CDKN1A gene was identified as one of the most important key players linked to the identified senescence-associated pathways for both the 1 Gy/sensitive and 10 Gy/normal treatment conditions. For both conditions the expression of CDKN1A was up-regulated at all considered time points. CDKN1A is a well-known damage response gene for which an aberrant transcriptional response has been associated with abnormal sensitivity to ionizing radiation. LMNB1 is another gene, associated with senescence, that we identified as a response hub gene after irradiation of the sensitive cell line with a 1 Gy radiation dose. Although the LMNB1 gene was not identified as a hub gene in the GAN of the 10 Gy/normal treatment condition, it was still differentially expressed. For both treatment conditions we observed significant downregulation of this gene 24 hours after irradiation. Shah et al. (2013) suggested that downregulation of LMNB1 in senescence is a key trigger of chromatin changes affecting gene expression. Another potential therapeutic candidate associated with senescence that was identified for the 10 Gy/normal sensitivity treatment condition was MRE11A, for which cell culture data suggest that treatment of cells with Mre11 siRNA increases radiation sensitivity and reduces heat-induced radiosensitization. The spline regression based differential analyses between technical replicates were performed in order to estimate the extent of random fluctuations of gene expression values.
The detected 3% rejections of the overall null hypothesis of no differential gene expression are in accordance with the alpha-level of 5% of the familywise error rate (FWER) and can be considered as false positives. On the other hand, this shows that type I error due to technical variation is covered by the model and test assumptions. In order to validate the previously mentioned biological results obtained with NCSRM, we performed the differential expression analysis with another established method for time-course data analysis called BETR. Prospectively, we suggest and plan a detailed in silico and in vitro analysis of the interactions in the proposed gene association networks in order to add meaningful knowledge to the mechanism of radiosensitivity at the experimental level. This novel knowledge has the potential to improve cancer radiation therapy by preventing or lowering the acute responses of normal cells resulting from radiation therapy. The results add novel information to the understanding of mechanisms that are involved in the radiation response of human cells, with the potential to improve tumor radiotherapy. Besides, the presented workflow is not limited to the presented study, but may be applied in other fields with different biological questions to be addressed. The software is provided as the R-package \u201csplineTimeR\u201d and is freely available via the Bioconductor project at http://www.bioconductor.org. Experiments were conducted with two monoclonal lymphoblastoid Epstein-Barr virus-immortalized cell lines (LCL) obtained from young lung cancer patients of the LUCY study (LUng Cancer in Young) that differ in radiosensitivity, as tested with Trypan Blue and WST-1 assays. The cells were seeded in 75 cm2 flasks in RPMI 1640 medium (Biochrom) supplemented with 10% fetal calf serum. Mycoplasma contamination was routinely tested using luminescence-based assays. Irradiation was performed at a dose rate of 0.49\u2009Gy/min.
Samples were collected 0.25, 0.5, 1, 2, 4, 8 and 24 hours after sham or actual irradiation. Between the times of collection, cells were kept in the incubator. Collected cells were washed with PBS and frozen at -80\u00b0C. Total RNA was isolated from frozen cell pellets obtained from two independent experiments using the AllPrep DNA/RNA/miRNA Universal Kit (Qiagen), including a DNase digestion step, according to the manufacturer's protocol. The concentration of RNA was quantified with a Qubit 2.0 Fluorometer (Life Technologies), and integrity was determined using a Bioanalyzer 2100 (Agilent Technologies). RNA samples with an RNA integrity number (RIN) greater than 7 were of sufficient quality to be used in subsequent RNA microarray analysis. Transcriptional profiling was performed using SurePrint G3 Human Gene Expression 8x60k V2 microarrays according to the manufacturer\u2019s protocol; 75 ng of total RNA was labeled using the Low Input Quick Amp Labeling Kit. Raw gene expression data were extracted as text files with the Feature Extraction software 11.0.1.1 (Agilent Technologies). The expression microarray data were uploaded to ArrayExpress (www.ebi.ac.uk/arrayexpress/) and the data set is available under the accession number E-MTAB-4829. All data analysis was conducted using the R statistical platform. Gene annotation was based on the NCBI Reference Sequence database (http://www.ncbi.nlm.nih.gov/refseq/). In the model below, b0, b1, \u2026, bm are the spline coefficients in the control group and d0, d1, \u2026, dm are differential spline coefficients between the control and the irradiated group; B1(t-t0), B2(t-t0), \u2026, Bm(t-t0) are the spline base functions and t0 is the time of the first measurement. For x = 0, y = ycontrol and for x = 1, y = yirradiated.
For three degrees of freedom (df = 3), m = 3. A natural cubic spline regression model (NCSRM) with three degrees of freedom for an experimental two-way design with one treatment factor and time as a continuous variable was fitted to the experimental time-course data. The mathematical model is defined by the following eq (1): y = b0 + b1*B1(t-t0) + \u2026 + bm*Bm(t-t0) + x*(d0 + d1*B1(t-t0) + \u2026 + dm*Bm(t-t0)) (1). Depending on the number of degrees of freedom, two boundary knots and df-1 interior knots are specified. The interior knots were chosen at values corresponding to equally sized quantiles of the sampling time from both compared groups. For example, for df = 3 the interior knots correspond to the 0.33- and 0.66-quantiles. The spline function is cubic on each interval defined by the knots, continuous at each knot, and has continuous first and second derivatives. The time-course differential gene expression analyses were conducted between irradiated and control (sham-irradiated) cells. Analyses were performed on the normalized gene expression data using NCSRM with three degrees of freedom. The splines were fitted to the real time-course expression data for each gene separately according to eq (1). An example of the spline regression model fitted to the measured time-course data for one selected gene is shown in the figure. Time-dependent differential expression of a gene between the irradiated and corresponding control cells was determined by the application of empirical Bayes moderated F-statistics on the differential spline coefficients d0, d1, \u2026, dm. Additionally, in order to assess the false positive rate, we applied differential gene expression analysis using NCSRM between two technical replicates for all treatment groups.
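The two-group design of eq (1) can be sketched as follows. This is a minimal illustration, not the splineTimeR implementation: the truncated-power natural cubic spline basis, knot placement, and simulated data are assumptions made for the example.

```python
import numpy as np

# Sketch of the NCSRM design of eq (1), assuming a truncated-power
# natural cubic spline basis; data and coefficients are synthetic.
def natural_cubic_basis(t, knots):
    """Natural cubic spline basis with columns [t, N1(t), ..., N_{K-2}(t)]."""
    t = np.asarray(t, dtype=float)
    K = len(knots)
    def d(k):  # truncated cubic terms; the basis is linear beyond the boundary knots
        return (np.maximum(t - knots[k], 0.0) ** 3
                - np.maximum(t - knots[K - 1], 0.0) ** 3) / (knots[K - 1] - knots[k])
    return np.column_stack([t] + [d(k) - d(K - 2) for k in range(K - 2)])

times = np.array([0.25, 0.5, 1, 2, 4, 8, 24])   # sampling times (hours)
t = np.tile(times, 2)
x = np.repeat([0, 1], len(times))               # 0 = control, 1 = irradiated

# Boundary knots at the extremes, interior knots at the 0.33/0.66 quantiles (df = 3).
knots = np.quantile(times, [0.0, 0.33, 0.66, 1.0])
B = natural_cubic_basis(t - times.min(), knots - times.min())

# Design matrix: b0 + spline terms (b) + x * (d0 + differential spline terms (d)).
X = np.column_stack([np.ones(t.size), B, x, x[:, None] * B])

# Simulate expression from known coefficients and refit by least squares.
rng = np.random.default_rng(0)
beta_true = np.array([1.0, 0.05, -0.02, 0.01, 0.5, 0.1, -0.05, 0.02])
y = X @ beta_true + rng.normal(0, 0.01, t.size)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fit_error = float(np.max(np.abs(X @ coef - X @ beta_true)))
print(fit_error)
```

In this parameterization, testing whether the differential coefficients (the last four entries of `coef`) are jointly zero corresponds to testing for time-dependent differential expression between the groups.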
Because only two technical replicates were generated for each time point and treatment, we could not use the same approach to assess the technical variability for the BETR method, as it requires at least two replicates in each compared group. Differentially expressed genes were subjected to gene association network reconstruction from time-course data using a regularized dynamic partial correlation method. Graph topological analyses based on centrality measures were applied in order to determine the importance of each node in the reconstructed association networks. The Reactome pathway database was used to conduct the pathway enrichment analysis in order to further investigate the functions of the selected sets of differentially expressed genes. Since we decided to use the set of genes that appeared to be differentially expressed, we assessed the performance of the herein used NCSRM approach in comparison to the BETR approach implemented in the R/Bioconductor package betr. To assess the quality of the de novo reconstructed gene association networks (GANs), we developed a novel method that compares the interactions in the reconstructed network to the experimentally validated interactions present in the a priori known Reactome interaction network. For this purpose we used the Reactome reference network, consisting of protein-protein interaction pairs stored in the Reactome database (http://www.reactome.org/pages/download-data/). For the comparison, sub-networks of the reconstructed networks consisting only of genes overlapping with the Reactome network were built. The number of common edges between these two sub-networks was determined and referred to the total number of edges in the reconstructed network (percentage of common edges in the reconstructed network).
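The edge-overlap comparison described above can be sketched as follows; the gene names and edge lists are hypothetical placeholders, not data from the study.

```python
# Hypothetical undirected edge lists, stored as frozensets of gene names.
def edge_set(pairs):
    return {frozenset(p) for p in pairs}

reconstructed = edge_set([("TP53", "MDM2"), ("TP53", "CDKN1A"),
                          ("ATM", "TP53"), ("GADD45A", "CDKN1A")])
reference = edge_set([("TP53", "MDM2"), ("CDKN1A", "TP53"), ("ATM", "CHEK2")])

# Restrict both networks to the shared gene set, then score the overlap
# relative to the total number of edges in the reconstructed network.
shared = {g for e in reconstructed for g in e} & {g for e in reference for g in e}
sub_recon = {e for e in reconstructed if e <= shared}
sub_ref = {e for e in reference if e <= shared}
common = sub_recon & sub_ref
pct = 100.0 * len(common) / len(reconstructed)
print(len(common), round(pct, 1))
```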
Further, a permutation test was performed to assess whether the number of common edges in the reconstructed network was significantly higher than in randomized networks with the same genes. Random networks were generated by permutation of the node names in the network, while preserving the reconstructed sub-network topology. After each permutation (n = 1000) the number of common edges with the reference Reactome sub-network was determined. The reconstructed network was considered significantly better than random if more than 90% of the random sub-networks contained fewer edges in common with the Reactome network than the reconstructed sub-network. All networks reconstructed with the genes determined as differentially expressed by the herein presented spline regression method and the BETR method were evaluated. S1 File. All obtained gene association networks are provided as R-objects of type igraph. (RDATA) S1 Table. Table includes differentially expressed genes identified by the spline regression and BETR methods. Additionally, a list of overlapping differentially expressed genes between both methods is included. (XLSX) S2 Table. Four lists of significantly enriched pathways correspond to each used treatment condition. Lists include the total numbers of known genes in the pathways, numbers of differentially expressed genes that belong to a single pathway (matches), percentages of differentially expressed genes in comparison to the total number of known genes in the pathway (% match), p-values, FDRs and names of pathway-related differentially expressed genes. (XLSX) S3 Table. Lists of the 5% highest ranked genes from the reconstructed gene association networks using the spline regression and BETR methods.
Overlap represents the common most important genes identified in networks from the compared methods. (XLSX) S4 Table. Lists include names of pathways together with names of mapped most important genes. (XLSX) S5 Table. Table presents the names of significantly enriched (FDR<0.05) senescence-associated pathways with corresponding differentially expressed genes for all treatment conditions. (XLSX)"}
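The node-label permutation test described for the network evaluation above can be sketched as follows; the edge lists and node names are made-up toy data, and the empirical p-value convention is one plausible choice, not the authors' exact implementation.

```python
import random

# Permutation test sketch: relabel nodes of the reconstructed sub-network
# (preserving its topology) and count edges shared with the reference network.
def common_edges(edges_a, edges_b):
    return len(edges_a & edges_b)

def permutation_pvalue(recon_edges, ref_edges, nodes, n_perm=1000, seed=1):
    observed = common_edges(recon_edges, ref_edges)
    rng = random.Random(seed)
    hits = 0  # permutations with at least as many common edges as observed
    nodes = list(nodes)
    for _ in range(n_perm):
        shuffled = nodes[:]
        rng.shuffle(shuffled)
        relabel = dict(zip(nodes, shuffled))
        permuted = {frozenset(relabel[g] for g in e) for e in recon_edges}
        if common_edges(permuted, ref_edges) >= observed:
            hits += 1
    return observed, hits / n_perm

nodes = [f"g{i}" for i in range(12)]
ref = {frozenset(p) for p in [("g0", "g1"), ("g1", "g2"), ("g2", "g3"),
                              ("g3", "g4"), ("g4", "g5")]}
recon = {frozenset(p) for p in [("g0", "g1"), ("g1", "g2"),
                                ("g2", "g3"), ("g7", "g8")]}
obs, p = permutation_pvalue(recon, ref, nodes)
print(obs, p)
```

Under the criterion in the text, the reconstructed network would be called better than random when fewer than 10% of permutations reach the observed overlap (p below 0.10).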
+{"text": "Influenza viruses present major challenges to public health, as evidenced by the 2009 influenza pandemic. Highly pathogenic influenza virus infections generally coincide with early, high levels of inflammatory cytokines that some studies have suggested may be regulated in a strain-dependent manner. However, a comprehensive characterization of the complex dynamics of the inflammatory response induced by virulent influenza strains is lacking. Here, we applied gene co-expression and nonlinear regression analysis to time-course microarray data developed from influenza-infected mouse lung to create mathematical models of the host inflammatory response. We found that the dynamics of inflammation-associated gene expression are regulated by an ultrasensitive-like mechanism in which low levels of virus induce minimal gene expression but expression is strongly induced once a threshold virus titer is exceeded. Cytokine assays confirmed that the production of several key inflammatory cytokines, such as interleukin 6 and monocyte chemotactic protein 1, exhibits ultrasensitive behavior. A systematic exploration of the pathways regulating the inflammation-associated gene response suggests that the molecular origins of this ultrasensitive response mechanism lie within the branch of the Toll-like receptor pathway that regulates STAT1 phosphorylation. This study provides the first evidence of an ultrasensitive mechanism regulating influenza virus-induced inflammation in whole lungs and provides insight into how different virus strains can induce distinct temporal inflammation response profiles. The approach developed here should facilitate the construction of gene regulatory models of other infectious diseases. Vaccines suffice for protecting public health against seasonal influenza viruses, but when unexpected strains appear against which the vaccine does not confer protection, alternative treatments are necessary.
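The threshold-gated (ultrasensitive) relationship described above can be illustrated with a Hill function: below the threshold titer the response is minimal, and it rises steeply once the threshold is crossed. The threshold `K`, Hill coefficient `n`, and titer values below are arbitrary illustrative choices, not parameters fitted in the study.

```python
# Illustration of ultrasensitivity with a Hill function; K and n are made up.
def inflammatory_response(virus_titer, K=1e4, n=6, max_response=1.0):
    """Fraction of maximal inflammatory gene expression at a given virus titer."""
    return max_response * virus_titer**n / (K**n + virus_titer**n)

# Tenfold below the threshold the response is negligible; at the threshold it is
# half-maximal; tenfold above it is near-maximal.
low, at, high = (inflammatory_response(t) for t in (1e3, 1e4, 1e5))
print(low, at, high)
```

A large Hill coefficient is what makes the response switch-like; with n = 1 the same function would instead rise gradually over several orders of magnitude of titer.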
In this work, we used gene expression and virus growth data from influenza-infected mice to determine how moderate and deadly influenza viruses may invoke unique inflammatory responses and the role these responses play in disease pathology. We found that the relationship between virus growth and the inflammatory response for all viruses tested can be characterized by an ultrasensitive response in which the inflammatory response is gated until a threshold concentration of virus is exceeded in the lung, after which strong inflammatory gene expression and cytokine production occurs. This finding challenges the notion that deadly influenza viruses invoke unique cytokine and inflammatory responses and provides additional evidence that pathology is regulated by virus load, albeit in a highly nonlinear fashion. These findings suggest immunomodulatory treatments could focus on altering inflammatory response dynamics to improve disease pathology. Invading pathogens induce acute inflammation when molecular signatures are detected by pattern recognition receptors (PRRs) expressed on tissue-resident immune cells and non-immune cell types. PRR ligation triggers innate immune responses and leads to the induction of inflammatory and antiviral gene expression, which together function to limit pathogen growth, activate the adaptive immune response, and ultimately resolve the infection. The model was then fit to the scaled data. Panel (a) shows the time points selected as training data from each infection data set (indicated by the orange dots); and panel (b) shows the number of data points available for different ranges of the virus titer (top) and the segmented model's fit to the training data (bottom). In panel (c), we used the residuals to compare the SLM\u2019s accuracy to that of a simple linear model. The line is the running average and the shaded region is the 95% confidence interval of the mean.
The mean of the residuals of the segmented model was always near zero for the full range of virus titers, and the segmented model thus provides a better fit than the linear model. (TIF) S3 Fig. To determine whether the threshold model could predict N1 module gene expression, mice were infected with 103 PFU of the H5N1 virus and lung tissues were harvested for virus titration and gene expression microarray analysis at the same time points used for the original experiment (3 mice per infection group and time point). For each time point, the mean and standard deviation of the virus titer is shown. (TIF) S4 Fig. (a) Compares the correlation between all 1,021 N1 gene transcripts and the eigengene (referred to as the kME) in the original gene expression data (105 PFU infection condition, referred to as \u2018105 PFU\u2019) and the 103 PFU infection condition (referred to as \u2018103 PFU\u2019). More than 90% of the genes exhibited a kME > 0.9. (b) The WGCNA algorithm was repeated with all differentially expressed genes from the 103 PFU H5N1 virus infection, and then Fisher's exact test was used to identify modules enriched for N1 genes. Of the modules in the newly constructed co-expression network, only one module was enriched with the N1 transcripts.
Specifically, of the 826 genes originally assigned to N1, 528 were differentially expressed and assigned to the same module in the 103 PFU infection condition, as illustrated by the Venn diagram. Two procedures were applied to determine whether the N1 genes that we identified as co-expressed in the original clustering analysis were also co-expressed in tissue from mice infected with 103 PFU. (TIF) S5 Fig. The H5N1-103 PFU eigengene was constructed using the same set of probes assigned to the N1 module from the 105 PFU infection condition and scaled; the segmented linear model trained on the 105 PFU H1N1, pH1N1, and H5N1 data was then used to predict H5N1-103 PFU scaled eigengene values. Panel (a) shows a comparison of the predicted (black) and actual eigengenes, with error bars illustrating the standard deviation of the eigengene and of the log10 of the virus titer. Panel (b) shows how the prediction residuals are distributed over time. For each time point, individual data points (black points) are shown, as well as the average (red points) and standard deviation (gray bars). The greatest deviations occurred at d5 and d7, which was expected as the model was designed only to predict the onset of gene activation and the peak gene expression (peak of the eigengene). On days 5 and 7 post-infection, both the virus titers and gene expression have already peaked and are declining. Panel (c) shows how the prediction residuals are distributed across the spectrum of observed virus titers, as compared to a linear model directly fit to the H5N1-103 PFU N1 eigengene. Individual residuals are indicated by points, and the running average and the 95% confidence intervals are shown by the colored lines and the gray shading, respectively.
As for the 105 PFU infection condition, the segmented model performed well across the entire virus titer spectrum, and was significantly better than a linear model fitted directly to the data. (TIF) S6 Fig. As described in the Methods, cytokine protein levels were measured in lung tissues from mice infected with 105 PFU of H1N1, pH1N1, and H5N1 and compared with those in lung tissues from mock-infected animals. This heat map illustrates protein expression values for non-N1-associated cytokines (one assayed cytokine was not detected at any time point and is not included), with a blue-to-yellow scale indicating expression levels (see the key to the right of the panel). The module to which the protein's mRNA transcript was assigned during clustering is shown on the right hand side of the heat map (proteins whose gene transcripts were not DE are labeled \u2018NA\u2019), and the average virus titers are shown below the heat map (red indicates that titers exceeded the threshold concentration predicted by the segmented linear model). While IL-18 and leukemia inhibitory factor (LIF) exhibited protein expression patterns consistent with N1 module behavior, the transcripts mapping to these proteins were not differentially expressed and were excluded from the co-expression network construction. (TIF) S7 Fig. As described in the Methods, mice were infected with 105 PFU of H1N1, pH1N1, or H5N1 virus to determine total and phosphorylated levels of transcription factors by means of immunoblotting. Here, virus titers (with standard deviation indicated by gray bars) for each infection condition are shown in panel (a), and immunoblot results are shown in panels (b) and (c). For immunoblot analyses, relative protein concentrations were determined by calculating the ratio of the gray intensity of the measured protein relative to the gray intensity of actin in each tissue sample.
A linear model was used to compare the mean RSI of each protein at each time point to the mean RSI measured in uninfected animals (referred to as \u2018na\u00efve\u2019), and a significant difference was defined as having a false discovery rate (FDR) < 0.05. The mean RSI of total IRF3, IRF7, and I\u03baB\u03b1 did not significantly deviate from the na\u00efve data, but pSTAT1, pIRF3 and total STAT1 significantly differed from the na\u00efve data at several time points. (TIF) S8 Fig. As described in the Methods, mice were infected with 105 PFU of H1N1, pH1N1, or H5N1 virus to determine IFN protein concentrations. The protein concentration of IFN-\u03b1/\u03b2 proteins for each replicate at each time point is shown. (TIF) S9 Fig. As described in the Methods, mice were infected with 105 PFU of H1N1, pH1N1, or H5N1 virus. Representative images of Western blots of lung homogenates from 3 mice per infection per time point are shown. (TIF) S10 Fig. The intramodular correlation (kME, or the correlation between each transcript and the eigengene) can be used to assess clustering quality. We show boxplots to illustrate the distribution of the kMEs for all transcripts belonging to the N1 and N2 modules. (TIF) S11 Fig. The N1 module presented in this work was identified by developing an unsigned co-expression network and then focusing on the kME- and kME+ submodules. Therefore, the N1 kME+ set of genes should also cluster when developing a signed co-expression network. We confirmed this by repeating the WGCNA procedure for a signed network (power = 14). Of the N1 kME+ transcripts, 93.2% were assigned to the same cluster in the signed co-expression network (the new module is referred to as N1signed). Furthermore, 90.9% of the N1signed transcripts were N1 kME+ transcripts. We then confirmed that the same eigengene dynamics were observed. The scaled difference of the eigengene (SDE) of the N1signed module is shown versus time (a) and versus virus titers (b). (TIF) S1 File. For each module, we report the top enriched annotation clusters as well as various measurements of enrichment (david.abcc.ncifcrf.gov). S2 File. For each module, we report the top enriched annotation clusters as well as various measurements of enrichment (david.abcc.ncifcrf.gov). S3 File. (XLSX) S1 Table. For each transcript we provide the Entrez Gene ID, the gene symbol, the module it was assigned to and the correlation between the transcript's expression dynamics and the expression dynamics of its assigned eigengene (kME). Module N0 is the set of transcripts which the algorithm did not identify as co-expressed. (XLSX) S2 Table. Each kME+ or kME- submodule was analyzed in ToppCluster for enriched domain, GO biological process, GO molecular function, GO cellular component, mouse phenotype, pathway or transcription factor binding site annotations. Columns A-D provide information on the annotation, including the category to which the annotation belongs, the database-specific annotation ID and the full title of the annotation. Columns D\u2014AT contain the enrichment score for each annotation in each submodule. The enrichment score is the -log10 of the false discovery rate [FDR]-adjusted p value. A threshold enrichment score of 2 (FDR-adjusted p < 0.01) was required to be considered as significantly enriched. (XLSX) S3 Table. The probes assigned to module N2 were analyzed in CTen\u2014a platform for identifying the genomic signatures of select cell types in microarray data.
The enrichment score reported for each cell type is the -log10 of the false-discovery adjusted p value. (XLSX) S4 Table. Arrays were developed from the lungs of mice infected with H1N1, pH1N1, or HPAI virus at 14 time points. One array from the HPAI-infected data set was removed due to quality concerns. For each transcript, we provide annotation information, the module to which the transcript was assigned, and the kME (the Pearson correlation coefficient between the transcript's expression and the eigengene of the module to which it was assigned); the log2 fold change in the transcript\u2019s expression across all arrays used in the study is illustrated by a heatmap. (XLSX) S5 Table. (XLSX) S6 Table. (XLSX)"}
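The eigengene and kME quantities used throughout these legends can be sketched as follows. This is a minimal PCA-based sketch on synthetic data (a single simulated module with one shared temporal pattern), not the WGCNA implementation.

```python
import numpy as np

# Sketch: the module "eigengene" is the first principal component of the
# module's standardized expression matrix (genes x samples), and kME is each
# transcript's Pearson correlation with that eigengene. Data are synthetic.
rng = np.random.default_rng(42)
n_genes, n_samples = 50, 14
trend = np.sin(np.linspace(0, np.pi, n_samples))          # shared temporal pattern
expr = np.outer(rng.uniform(0.5, 1.5, n_genes), trend)    # per-gene amplitudes
expr += rng.normal(0, 0.02, expr.shape)                   # measurement noise

# Standardize each gene, then take the first right singular vector as the eigengene.
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(z, full_matrices=False)
eigengene = vt[0]

# kME: correlation of each gene with the eigengene (sign of the singular
# vector is arbitrary, so we look at magnitudes).
kme = np.array([np.corrcoef(g, eigengene)[0, 1] for g in z])
min_kme = float(np.min(np.abs(kme)))
print(min_kme)
```

Because every simulated gene follows the same trend with modest noise, all |kME| values come out high, mirroring the "more than 90% of the genes exhibited a kME > 0.9" observation for a tightly co-expressed module.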
+{"text": "Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also successfully estimates cell lineage more accurately than a previous method, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator for differentiation, as well as clusters in a correlation network which are not detected with conventional correlation analysis. We conclude that we develop a stochastic process-based method SCOUP to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and available at https://github.com/hmatsu1226/SCOUP. The online version of this article (doi:10.1186/s12859-016-1109-3) contains supplementary material, which is available to authorized users.
Conventional analyses of bulk cells, such as bulk transcriptome analyses, are based on the averaged data of an ensemble of cells and cannot reveal the states of individual cells. Therefore, such analyses cannot distinguish cell types, due to the effect of averaging across all cells in a sample, unless each cell lineage is divided in advance by using prior knowledge such as marker genes. Additionally, a bulk transcriptome during differentiation is usually an ensemble of cells at different degrees of differentiation, and information regarding changes in cellular state is smeared. Accordingly, accurate investigation of gene expression dynamics and regulatory relationships among genes during differentiation is difficult. With the advent of single-cell technologies, such as single-cell RNA-seq, quantification of the comprehensive states of individual cells is possible. To fully analyze single-cell expression data during differentiation, novel computational methods are necessary. Wanderlust is a pioneering method. However, pseudo-time estimation and cell lineage estimation based on dimension reduction have several problems. For example, interpreting the biological meaning of the path in reduced space is difficult. Additionally, the position in reduced space is affected by noise and gene expression that is irrelevant to differentiation, and the results can therefore change significantly in a subsequent analysis. Moreover, deterministic approaches, such as applications of MST in reduced space, cannot quantify the subtle differences among cells and are inadequate to estimate the lineages of cells at an early stage of bifurcation, which are important for analyzing cell fate decisions. Hence, we developed another approach based on stochastic processes. In this research, we developed a novel method, SCOUP. SCOUP describes the dynamics of gene expression throughout differentiation directly, including pseudo-time and cell fate of individual cells.
SCOUP is based on the Ornstein\u2013Uhlenbeck (OU) process, which represents a variable moving toward an attractor with Brownian motion. In the case of differentiation, an attractor is regarded as a stable expression pattern of a gene after differentiation, and hence an OU process is appropriate to describe expression dynamics throughout differentiation. Because OU processes assume only a single attractor and cannot represent multi-lineage differentiation, we extend the typical OU process into a mixture OU process by representing the cell fate of each cell and lineage-specific expression patterns with latent values and different attractors, respectively. We compared the accuracy of pseudo-time estimates from SCOUP with those of previous methods using time-series scqPCR and scRNA-seq, and SCOUP was superior to previous methods in almost all conditions. We also evaluated the cell lineage estimation using scqPCR data in which cells exhibit multi-lineage differentiation. SCOUP successfully estimated cell lineage more accurately than Monocle, especially for cells at an early stage of bifurcation. In addition, SCOUP represents each gene expression dynamic directly and can be applied to various downstream analyses. As an example, we developed a novel correlation calculation method for elucidating regulatory relationships among genes. We normalized data based on the optimized parameters in our model, which assumes conditional independence among genes, and calculated correlations within the normalized data; this method detected covariance that cannot be explained by the model alone. We applied this method to scRNA-seq data and detected a candidate key regulator for differentiation, as well as clusters in a correlation network which were not detected with conventional correlation analysis. We proposed a novel theoretical and computational method, SCOUP, to analyze single-cell data.
The theoretical basis of SCOUP will be useful not only for pseudo-time and cell lineage estimation, but also for various biological analyses such as gene regulatory network inference. In particular, SCOUP can represent continuous-time stochastic dynamics and is suited for analyzing time-series data. As the number of single-cell data with high temporal resolution is increasing, computational methods for analyzing such data are becoming more important. Thus, SCOUP is a promising approach for further single-cell analysis and bioinformatics method development. Let Xt be an OU process. Xt satisfies the following stochastic differential equation: dXt = \u03b1(\u03b8 - Xt)dt + \u03c3 dWt, where \u03b1, \u03b8, \u03c3, and Wt denote the strength of relaxation toward the attractor, the value of the attractor, the strength of noise, and \u201cwhite noise,\u201d respectively. In other words, Xt is a variable moving toward the attractor \u03b8 with Brownian motion. If the initial value is given by X0, the value at time t (Xt) satisfies the following normal distribution: Xt ~ N(\u03b8 + (X0 - \u03b8)exp(-\u03b1t), \u03c3^2(1 - exp(-2\u03b1t))/(2\u03b1)). We developed a probabilistic model for single lineage differentiation. The likelihood P(E|\u03a6), where E is the expression data of all cells and genes and \u03a6 is the set of parameters, is the product of cell probabilities. Each cell has a degree of differentiation progression parameter tc. Although genes interact with each other and a multivariate OU process can be more appropriate to describe all gene expression dynamics, a multivariate OU process requires more computational and analytical complexity. Therefore, we assume that each gene follows its own OU process independently and has parameters \u03b1g, \u03b8g. Despite the above assumption, we can infer the regulatory relationship between genes by calculating the covariance that is not explained by the gene-independent model (as explained in the section on \u201cCorrelation between genes\u201d). Thus, a cell probability is the product of gene expression probabilities P(Ecg), where Ecg is the expression data of gene g in cell c.
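The OU transition distribution quoted above can be checked numerically. This is a minimal Euler-Maruyama sketch with arbitrary illustrative parameter values: the sample mean and variance of many simulated paths at time t should match the closed-form N(\u03b8 + (X0 - \u03b8)exp(-\u03b1t), \u03c3^2(1 - exp(-2\u03b1t))/(2\u03b1)).

```python
import numpy as np

# Euler-Maruyama simulation of dXt = alpha*(theta - Xt)dt + sigma*dWt,
# compared against the closed-form transition mean and variance.
rng = np.random.default_rng(7)
alpha, theta, sigma, x0, t_end = 1.5, 2.0, 0.6, 0.0, 1.0   # illustrative values
n_steps, n_paths = 500, 20000
dt = t_end / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += alpha * (theta - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

mean_exact = theta + (x0 - theta) * np.exp(-alpha * t_end)
var_exact = sigma**2 * (1 - np.exp(-2 * alpha * t_end)) / (2 * alpha)
mean_err = abs(float(x.mean()) - float(mean_exact))
var_err = abs(float(x.var()) - float(var_exact))
print(round(mean_err, 4), round(var_err, 4))
```

The relaxation rate \u03b1 controls how quickly the process forgets X0 and settles around the attractor \u03b8; as t grows the variance approaches its stationary value \u03c3^2/(2\u03b1).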
Hereinafter, we denote the number of cells, the number of genes, the cell index, and the gene index as C, G, c, and g, respectively, with \u03a6={\u03a6g|g=1,\u2026,G} and T={tc|c=1,\u2026,C}. Scg is the expression of gene g in cell c at t=0, and Pou is a probability distribution based on an OU process, described by a normal distribution. P(Scg) is the initial distribution of a gene and is given by a normal distribution with parameters \u03bcg0 and \u03c3g0. Although optimization of these initial parameters is possible, a fully differentiated state may be regarded as an initial state and pseudo-time may be inferred in the reverse order of differentiation. In this way, deciding the direction of differentiation without knowledge of the initial condition is difficult. Moreover, the expression data of progenitor cells are available in many experimental studies. Therefore, we assume that Pou can be described as above. Like a continuous Markov model for nucleotide evolution, Xcg={Xcgs|s=0,\u2026,N} represents a path such that Xcg0 and XcgN satisfy Scg and Ecg, respectively. In other words, Xcgs corresponds to the variable at time stc/N. In this model, we assume Xcg0 is fixed and consider Xcg as Xcg\u2208{Xcgs|s=1,\u2026,N} for simplicity. Accordingly, we consider the likelihood of Xcg, which is regarded as a multivariate normal distribution with an (N\u22121)\u00d7(N\u22121) precision matrix, and we must therefore calculate the inverse of the matrix to obtain the variance\u2013covariance matrix. Although we cannot use numerical methods to solve the inverse of the precision matrix, because we consider N in the limit of infinity, we can solve for the inverse matrix analytically by using the tridiagonal property of the precision matrix. The added noise is a uniform random number from 0 to \u03b5.
To evaluate the estimated pseudo-time in many conditions, we constructed a dataset (Kouno\u2019s data (1)) as follows. We added noise to the raw expression data to investigate the effect of noise on pseudo-time estimation. We produced 20 replicates for each \u03b5 (noise level), and validated the pseudo-time of each method for each noise level. We also constructed a second dataset to validate lineage estimation by adding 45 pseudogenes that exhibit various expression patterns among lineages. We initially selected 60 cells randomly from 120 cells at a given time point. The expression of each pseudogene g\u2032 in the selected cells is set equal to the raw expression. The initial distribution C0 is the set of 0-h cells and |C0| is the number of 0-h cells. To validate the lineage estimation in real data, we used a dataset produced by Moignard\u2019s group, in which cells were sorted into Flk1+GFP+ cells (4SG) or Flk1+GFP\u2212 cells (4SFG\u2212). We used the expression profiles of HF, 4SG, and 4SFG\u2212 and investigated whether SCOUP and Monocle can classify 4SG and 4SFG\u2212 using only their expression profiles. We randomly selected 1000 cells because Monocle did not seem to work correctly for a large number of cells, and this procedure left 364 HF cells, 360 4SG cells, and 276 4SFG\u2212 cells. The initial distribution was calculated from HF cells in the same way as for Kouno\u2019s data. We also investigated the stimulation time-series single-cell RNA-seq dataset for primary mouse bone-marrow-derived dendritic cells that was produced by Shalek\u2019s group. To evaluate the accuracy of pseudo-time estimated from each method, we regarded experimental time as genuine time and calculated the rate of inconsistency between pseudo-time and experimental time.
By using the accuracy measure of TSCAN as a reference, we evaluated the inconsistency by calculating the rate of cell pairs whose pseudo-time ordering was inconsistent with experimental-time ordering, and we defined the pseudo-time inconsistency score (PIS) as follows: PIS = \u03a3c<c\u2032 I((ec \u2212 ec\u2032)(tc \u2212 tc\u2032) < 0) / (C(C\u22121)/2), where ec and tc are respectively the experimental time and pseudo-time of cell c, and I(\u00b7) is the indicator function. In Kendall\u2019s \u03c4, the numerator is the number of pairs that vary concordantly between the two time series minus the number that vary discordantly; under the null hypothesis, the rank ordering of time series values is random. For a dataset generated from this null model, the p-values should be uniformly distributed from 0 to 1, exclusive: the highest Kendall\u2019s \u03c4 out of N tests should have a p-value of 1/(N + 1), the second highest test statistic has a p-value of 2/(N + 1), and the ith highest test statistic has a p-value of i/(N + 1); the p-values should be a linear function of the ranks. Restated, JTK_CYCLE computes \u03c4 values for all the reference time series against the signal of interest and then performs a selection step for the lowest p-value, which we refer to here as the \u201cinitial\u201d p-value. This selection step biases the p-values; the Bonferroni correction compensates by multiplying the p-values by the number of hypothesis tests being performed. The FWER is the probability that there is at least one false positive for a given threshold. Therefore, a threshold of 0.01 means that there is a 1% chance that the list of time series with a Bonferroni-adjusted p-value below 0.01 contains a false positive. This method, while rigorous, is overly conservative and overcompensates for the bias that comes from selecting the lowest p-value. The p-values are also adjusted such that there is no change in ordering: if the originally lowest p-value is adjusted so that it is higher than the originally second lowest p-value, the lowest p-value takes the value of the adjusted second p-value so that the ordering is not violated.
The same holds for the relationship between the second and the third lowest p-values, and so forth. While the Benjamini-Hochberg procedure is a reasonable approach to multiple hypothesis testing in general, it does not account for the selection step in JTK_CYCLE; it is thus still overly conservative. As in the original method, we focus on positive correlations and compute one-sided p-values. Though p-values, Benjamini-Hochberg adjusted p-values, and empirical p-values represent different quantities, they are all at least as conservative as the \u201ctrue\u201d p-values in the range that we are examining. These have been adjusted on the basis of correcting for multiple hypothesis testing across different waveforms for a single time series; when we compare different time series to each other, we have to correct again for multiple hypothesis testing, this time across time series. To do this we use the Benjamini-Hochberg correction, as in the original implementation of the method. Our searches cover 12 phases and 11 asymmetries; the added hypotheses for searching across asymmetries result in a larger selection bias when choosing the highest \u03c4 value. The first simulated dataset contains rhythmic time series of four waveforms (sine, ramp, impulse, and step), as well as an equal number of time series consisting of Gaussian noise. We compare all the methods on these data via the area under the receiver operating characteristic curve (AUROC), a measure of the sensitivity and specificity of the rhythm detection methods that does not depend on the proportions of positives and negatives in the dataset. The second simulated dataset contains 10% rhythmic time series of triangle waveform with uniformly distributed phases and asymmetries and 90% time series consisting solely of Gaussian noise.
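The ordering-preserving adjustment described above is the standard Benjamini-Hochberg step-up rule: scanning from the largest raw p-value down, each adjusted value is capped by the one above it, so the original ranking is never violated. A stand-alone sketch (not the JTK_CYCLE source code):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values with the monotonicity
    constraint described in the text."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # walk from the largest p-value down to the smallest (step-up)
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = min(running_min, 1.0)
    return adjusted
```

The cumulative minimum is exactly the rule in the text: an adjusted value that would overtake the next-ranked one is pulled down to it.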
We use it to further assess the importance of considering asymmetric waveforms, and we explore how multiple hypothesis correction impacts the results when the true positives represent a relatively small fraction of the simulated time series, as we expect to be the case in genome-wide studies. To construct the first dataset described immediately above, we generated time series for each of the four waveforms. The AUROC can be computed either from the test statistics or from the p-values, which we use for the latter three methods. Although the AUROC for JTK_CYCLE can in principle be computed directly from the Kendall\u2019s \u03c4 statistic, we include the multiple hypothesis testing correction because it impacts the TPR and FPR in practice; in particular, aggressive correction can lead to a loss of rank information because p-values must be less than or equal to 1. We scored each method by computing the area under the receiver operating characteristic curve (AUROC). The receiver operating characteristic (ROC) curve plots the true positive rate (TPR) as a function of the false positive rate (FPR) as the threshold for calling a time series a positive is varied. The TPR and FPR are, respectively, the fractions of the 10,000 simulated and Gaussian noise time series determined to be rhythmic at a given threshold, and the threshold is varied over the entire range of false positive scores, such that the FPR ranges from 0 to 1. The AUROC is the integral of this curve; perfect classifiers have an AUROC of 1.0, while random classifiers have an AUROC of 0.5. An advantage of the AUROC as a metric is that it does not depend on the proportions of positives and negatives in the dataset, because the TPR and FPR are calculated separately, i.e., they are normalized by the total number of positives and negatives, respectively.
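Because the AUROC equals the probability that a randomly chosen positive outscores a randomly chosen negative, integrating the ROC curve can be replaced by direct pair counting; a minimal sketch (names are illustrative):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the fraction of positive/negative pairs in which the
    positive receives the higher score (ties count half). Equivalent
    to integrating TPR over FPR as the threshold is varied."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfect classifier gives 1.0 and a random scorer averages 0.5, matching the interpretation in the text.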
For stable persistence, the cyclohedron test, and address reduction, we calculated the AUROC from the test statistics themselves as opposed to the p-values. The performance of the different methods at 50% noise is shown in the corresponding figure. While the empirical calculation approximates the null model well, it does not fully prevent multiple hypothesis testing from weakening the ability to identify rhythmic time series. Therefore, we do not sample phases and asymmetries more densely than the resolution of the data; we break this rule in one case, noted below. JTK_CYCLE with Benjamini-Hochberg correction (JTK_BH) has AUROC values that are in between those of the original JTK_CYCLE with Bonferroni correction (JTK) and empirical JTK_CYCLE. This is to be expected, since the Benjamini-Hochberg method is more conservative than the empirical method but less conservative than the Bonferroni method. An additional detail is that the original JTK_CYCLE here uses a cosine as the reference waveform, in comparison to the triangle used by the other JTK_CYCLE methods. The methods that use the triangle waveform do not do significantly worse than the methods that use the cosine waveform in any of the cases, justifying the use of the triangle waveform for rhythm detection. The cyclohedron test, address reduction, and stable persistence fail to improve as the number of replicates increases, and perform worse at low sampling rates. For example, a sine wave sampled at 4 time points per period for multiple periods has extrema at every other time point. Because the cyclohedron test, address reduction, and stable persistence are essentially tests of monotonicity, they fail to detect the sparse periodic pattern in such data.
In fact, sparsely sampled data sometimes result in scores consistently lower than expected by chance, leading to AUROC values less than 0.5 for these methods on some datasets. In summary, we find that all the methods tested can identify rhythmic expression patterns when the sampling density, replicate number, and signal-to-noise ratio are high. If data are sparse or noisy, however, method choice can significantly impact rhythm detection. In such cases, we find that ANOVA, F24, and JTK_CYCLE consistently better distinguish true and false positives. Empirical JTK_CYCLE outperforms ANOVA, F24, and original JTK_CYCLE for sine and ramp waveforms, but ANOVA performs better for impulse waveforms. The total number of samples required for an experiment is the product of the number of time points and the number of replicates. Consequently, it is important to consider how best to apportion resources. To optimize the performance of ANOVA, it is best to maximize the number of replicates at the expense of the number of time points, which is not surprising given the importance of accurately estimating the variance in this test. For JTK_CYCLE and F24, the choice is less clear, but greater improvement is obtained with replicate increases in the case of the step and impulse waveforms. Pseudo-replicates, by construction, can lead to p-value underestimates if not corrected. We found that pseudo-replicates improved the performance mainly of ANOVA when the replicate number was low; in particular, they allowed ANOVA to be applied, and to give good results, in the single-replicate case. We generated rhythmic time series both without asymmetry and with a uniform sampling of possible asymmetries (every 2 h from 2 to 22 h). In both cases, phases were uniformly distributed over the possible discrete values. We added Gaussian noise with a standard deviation of either 25% or 50% of the amplitude of the time series, as previously described.
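The pseudo-replicate construction (described in the supplementary figures as linear interpolation between the two neighbouring time points, wrapping around the 24 h cycle) can be sketched as follows for uniformly spaced samples; the function name is illustrative:

```python
def pseudo_replicates(series):
    """One pseudo-replicate per time point: the linear interpolation
    between the neighbouring time points, which for uniform spacing is
    their midpoint. Indices wrap around (modulo the series length) to
    mimic the 'modulo 24 h' wrap used for circadian data."""
    n = len(series)
    return [(series[(i - 1) % n] + series[(i + 1) % n]) / 2.0
            for i in range(n)]
```

Each pseudo-replicate can then be treated as an extra replicate at its time point, which is what lets variance-based tests such as ANOVA run on single-replicate data.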
We tested these data against the empirical JTK_CYCLE method with various asymmetries, as well as original JTK_CYCLE, ANOVA, F24, and Benjamini-Hochberg adjusted JTK_CYCLE with various asymmetries for comparison. In all cases, the JTK_CYCLE methods used the triangle waveform as the reference waveform, as it was the waveform used to generate the data. We used p-values less than 0.05 as a reasonable threshold under these conditions. We follow Ceriani et al., making use of Pdp1, a known cycling gene; this method is equivalent to treating measurements from the same zeitgeber time point as replicates. For probes that corresponded to the same gene, we chose the probe with the highest mean expression value to use in the analysis. This reduced 14,010 probes to 11,625 genes. All of the measurements in the contributing studies are at intervals of 4 h. Time points for circadian LD time series are referenced as zeitgeber time points (ZT); the beginning of the light period is ZT0. Under 12 hours of light and 12 hours of dark, ZT24 is the equivalent of ZT0. Three studies sampled at ZT0, 4, 8, 12, 16, and 20, and the fourth (Ueda) sampled at a different set of time points. The metadataset has many places where the values are not available (NA). To avoid the need to recalculate the null distribution for every pattern of NAs in the data for empirical JTK_CYCLE (there were 5005 unique NA permutations in the data), the NAs were replaced by random noise drawn from a Gaussian distribution with mean and standard deviation matching those of the data on the whole. While this adds noise to the time series, it should not have a large effect given that each time series has 57 points. To mitigate the impact of this procedure on our study, however, time series that had more than half their points as NA were discarded from the dataset, leaving 9,313 out of 11,625 genes.
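The NA-handling rule just described (Gaussian substitution matched to the observed mean and standard deviation, with series that are more than half NA discarded) can be sketched as below; this is an illustrative reimplementation, not the published pipeline:

```python
import random
import statistics

def substitute_nas(series, rng=None):
    """Replace NA values (None) with Gaussian noise whose mean and
    standard deviation match the observed values; return None for
    series with more than half their points missing (discarded)."""
    rng = rng or random.Random(0)
    observed = [x for x in series if x is not None]
    if len(observed) * 2 < len(series):
        return None  # more than half NA: discard the series
    mu = statistics.mean(observed)
    sigma = statistics.stdev(observed) if len(observed) > 1 else 0.0
    return [x if x is not None else rng.gauss(mu, sigma) for x in series]
```

Substituting noise rather than imputing a smooth value avoids spuriously strengthening rhythmic signals while keeping a single null distribution usable for all series.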
We consistently used the dataset resulting from these preprocessing steps for all our analyses to ensure that comparisons between methods were fair; where comparisons with and without NA substitution were possible, we found that NA substitution led to slight increases in cycling numbers in all cases except ANOVA (114 vs. 101). However, these differences did not change any of the ontological results (discussed below). To evaluate the methods against genes for which we know the rhythmicity a priori, we compared the p-values for 6 positive examples and 4 negative examples. At a p-value cutoff of 0.05, the number of genes and the overlap between methods can be seen in the corresponding figures. Having again established F24, ANOVA, and JTK_CYCLE as the better methods, we now apply them to the full dataset. We plotted the Bonferroni-adjusted p-values against the empirical p-values: with asymmetry search, the empirical p-values are significantly lower than the Bonferroni-adjusted p-values, a pattern that is less pronounced without asymmetry search. Interestingly, among the JTK_CYCLE methods without asymmetry search, the Bonferroni and Benjamini-Hochberg methods identified more genes than the empirical method did. For JTK_CYCLE without asymmetry search, there were only 6 hypotheses tested per gene time series (for each of the 6 phases searched), for which the Bonferroni and Benjamini-Hochberg corrections across waveforms are very slight. For JTK_CYCLE with asymmetry search every 4 h, the number of hypotheses tested becomes 30, for 6 different phases paired with 5 different asymmetries, which results in a more stringent correction by the Bonferroni and Benjamini-Hochberg methods. As experimental sampling rates and sampling densities enable more extensive searching of phases, periods, and asymmetries, we expect the advantage of empirical JTK_CYCLE relative to the original formulation to grow, because the Bonferroni correction strongly penalizes adding hypothesis tests.
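The 6-phase-by-5-asymmetry hypothesis grid, with an asymmetric triangle as the reference waveform, can be sketched as below. The exact parameter conventions (peak value 1.0 at `phase`, rise time equal to `asymmetry`) are assumptions made for illustration:

```python
def asymmetric_triangle(t, period=24.0, phase=0.0, asymmetry=12.0):
    """Asymmetric triangle reference waveform: peak of 1.0 at `phase`,
    a linear fall over (period - asymmetry) hours, then a linear rise
    over `asymmetry` hours back to the next peak. asymmetry equal to
    period / 2 recovers the symmetric triangle."""
    x = (t - phase) % period       # hours elapsed since the last peak
    fall = period - asymmetry
    if x < fall:
        return 1.0 - x / fall      # falling limb
    return (x - fall) / asymmetry  # rising limb

# 6 phases x 5 asymmetries = 30 hypotheses per gene, as in the text
HYPOTHESES = [(p, a) for p in range(0, 24, 4) for a in (4, 8, 12, 16, 20)]
```

Each (phase, asymmetry) pair yields one reference series to correlate against a gene's time series, which is where the 30-fold multiple-testing burden comes from.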
Provided that sufficient permutations are performed, empirical JTK_CYCLE provides the more robust identification of rhythmic genes. Another way of viewing this difference between the inclusion and exclusion of asymmetry search is by examining the joint distributions of the adjusted p-values from the two searches (the upper left quadrant of the corresponding scatter plot): searching only at asymmetries of 8 and 16 h excludes genes that have extreme asymmetries. We examined the effect of searching multiple asymmetries with empirical JTK_CYCLE further, and we also examined how our results depended on using a triangle vs. a cosine for the reference waveform. Of the genes identified as cycling by Keegan et al., 169 remained after pre-processing to remove time series with more than half of their values as NAs. Of those 169, 111 had Benjamini-Hochberg adjusted p-values less than 0.05 for the empirical JTK_CYCLE with asymmetry search by 4 h (eJTK_aby4); thus 58 genes that were identified as cycling by Keegan et al. were not identified by eJTK_aby4. We compared the cycling genes identified by Keegan et al. with those identified by eJTK_aby4. Keegan et al. identified genes as cycling primarily on the basis of scoring well (p < 0.05) on several tests following pre-screening by ANOVA. While there appears to be a weak relation between the number of tests passed and the p-value from eJTK_aby4, there is not a clear pattern that would enable one to predict the cycling genes common to both Keegan et al. and eJTK_aby4. We also examined the amplitudes of the genes identified by Keegan et al., organized by whether they are identified as cycling by eJTK_aby4 as well. The genes identified by Keegan et al. but not by eJTK_aby4 tend to have larger maximum amplitudes than the ones identified by both. The ANOVA pre-screening in Keegan et al. can account for this difference; our results with empirical JTK_CYCLE suggest that there are many cycling genes with lower amplitudes. We also examined the asymmetries, as determined by eJTK_aby4, of the genes identified by Keegan et al. as cycling. A large number of genes identified by Keegan et al., but not by eJTK_aby4, have an asymmetry of 16 h.
The bias in the earlier study may reflect the fact that one of the tests that Keegan et al. employ is based on correlation with the gene per, which has an asymmetry of 16 h. More generally, Keegan et al. fail to identify 231 genes as cycling that eJTK_aby4 identifies with Benjamini-Hochberg adjusted p-values below 0.05. Of these 231, 82 have Benjamini-Hochberg adjusted p-values below 0.01, 65 have values below 0.005, and 16 have values below 0.001. In addition to comparing our results to those of Keegan et al., an earlier analysis, we also compared them to those of Wijnen et al. Whereas 31 genes that Keegan et al. identified as rhythmic were removed by the empirical JTK_CYCLE pre-processing, 57 genes that were identified by Wijnen et al. were removed by that pre-processing due to more than half their time points being NA; these genes are \u201cunassigned\u201d in our comparison. Wijnen et al. and eJTK_aby4 jointly identified 120 genes as rhythmic, of which Keegan et al. identified 59 as well. Wijnen et al. uniquely identified 177 genes as rhythmic, whereas eJTK_aby4 uniquely identified 167 genes. We also compared the asymmetry distributions for all the genes. As a first step toward validating the new genes that eJTK_aby4 exclusively identified as rhythmic, we examined the literature for references that independently suggest that these genes are cycling. Specifically, for each gene, we identified the references in FlyBase (http://flybase.org) that mention the gene; of those references, those that have the term \u201ccircadian\u201d in their title or abstract were identified. The gene cbt was previously unidentified as having rhythmic expression; its average time series from the metadataset can be seen in the corresponding figure. cbt is a metal-ion binding transcription factor downstream of the JNK cascade and is involved in morphogenesis.
twins is another Clk-regulated gene that was uniquely identified by eJTK_aby4 as rhythmic; like cbt, it could have been missed by previous methods. These genes, though previously unidentified as rhythmic, are strong candidates for having roles in circadian regulation and processes, based on our identification of them as rhythmic and on the work of Kadener et al. and Abruzzi et al. This warrants further experimental studies of these genes in a circadian context, as well as of the other genes that we have identified. Among these references, there were some that discussed several genes; Kadener et al., for example, assayed Clk-regulated genes (p < 0.016). cbt is also referenced by another study that discusses several genes identified as rhythmic by eJTK_aby4: Fujikawa et al. studied genes differentially expressed in D. melanogaster following 24 h of starvation. 16 of these genes are not mentioned by the original five papers but are identified as rhythmic by eJTK_aby4, which has less than a 0.3% probability of occurring by chance (Fisher\u2019s Exact Test, unadjusted p < 0.003). Fujikawa et al. refer to several genes from the circadian dataset papers that also appear in their lists of differentially expressed genes, but they do not associate rhythmic behavior with all the genes that they describe. In addition to the gene cbt, Fujikawa et al. reference other genes that were previously unidentified as rhythmic: Esterase-Q and 1,4-Alpha-Glucan Branching Enzyme. Both have asymmetries of 16 h, which is also outside the range of standard symmetric-waveform detection. Also recently, Zielinski et al. reviewed and compared rhythm-detection methods, including on simulated data. We were able to expand JTK_CYCLE to search for asymmetric waveforms without degrading sensitivity because we empirically calculate p-values, which yields much more accurate significance estimates than the Bonferroni correction employed in the original formulation of JTK_CYCLE.
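The "probability of occurring by chance" quoted for the starvation-gene overlap is a one-tailed Fisher's exact test, i.e., a hypergeometric tail probability. A generic stdlib sketch (the counts in the test below are illustrative, not the paper's actual margins):

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    """One-sided over-representation p-value: the probability of
    drawing at least k annotated genes when n genes are sampled
    without replacement from a universe of N genes that contains
    K annotated genes. This is the one-tailed Fisher's exact test
    commonly used for gene-set enrichment."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

`math.comb` returns 0 when the lower argument exceeds the upper one, so impossible table cells contribute nothing to the sum.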
In this paper, we compare methods for detecting rhythmic time series in genome-wide expression data. With regard to experimental design, we find that increasing the number of replicates is more important than increasing the sampling density for achieving greater sensitivity. A key aspect of our study is that we improve the estimation of p-values in JTK_CYCLE; this enables control of the false discovery rate and testing of waveforms beyond sinusoidal ones, for both simulated data and a circadian metadataset.

S1 Fig. The time series used for this example was a 24 h sine wave sampled every 2 h for 1 period (no replicates); noise was added at 25% of the amplitude. (A) Convergence of the mean and variance estimates, used to parameterize the Gamma distribution, as a function of the number of permutations performed, for testing the 24 h period. (B) The cumulative distributions obtained by random permutation fit to the Gamma distribution, as shown by their proximity to the diagonal (black). Shown are fits for testing a 24 h period, plus a 4 h period and a 48 h period. For these fits, 100 permutations were used.

S2 Fig. The correlations between triangle and cosine waveforms are compared for time series of different lengths for three different correlation metrics: Pearson, Spearman, and Kendall. Correlations can range from \u22121 (completely anti-correlated) to +1 (completely correlated).

S3 Fig. Layout and abbreviations are the same as in the referenced figure.

S4 Fig. Layout and abbreviations are the same as in the referenced figure.

S5 Fig. (A) A pseudo-replicate (\u2299) for time ti (indicated by the arrow) is obtained by linearly interpolating between time points ti\u22121 and ti+1 (dashed line).
(B) Repeating this procedure for each time point (modulo 24 h) generates a new time series (\u2299 symbols).

S6 Fig. We compare performance with (Interp) and without pseudo-replicates for the first simulated dataset with 50% noise.

S7 Fig. Simulated data with rhythmic time series without asymmetry (A) or with evenly distributed asymmetry (B) were tested with different methods. The vertical axis shows the Matthews Correlation Coefficient (MCC) for different p-value cutoffs (FDR) along the x-axis. These data are with 25% noise, but the effects of Benjamini-Hochberg correction are significantly greater at 50% noise (not shown). The method abbreviations are the same as in the main text.

S8 Fig. Simulated data with rhythmic time series without asymmetry or with evenly distributed asymmetry were tested with different asymmetries. The cumulative histograms are plotted before (A and B) and after (C and D) Benjamini-Hochberg multiple hypothesis correction across time series. The vertical axis shows the number of genes with a p-value (P) (A and B) or false discovery rate (C and D) below or equal to a significance threshold, shown on the x-axis. These data are with 25% noise, but the effects of Benjamini-Hochberg correction are significantly greater at 50% noise (not shown). The legend in A applies to B, C, and D as well. The rightmost point on the horizontal axis is 0.2. eJTK_aby2: asymmetries sampled every 2 h, from 2 h to 22 h; eJTK_aby4: asymmetries sampled every 4 h, from 4 h to 20 h; eJTK_a04-12-20: asymmetries sampled at 4 h, 12 h, and 20 h; eJTK_a08-16: asymmetries sampled at 8 h and 16 h; eJTK: no asymmetry search.

S9 Fig. The positive examples are known cycling genes per, tim, vri, Pdp1, cry, and Clk. The negative examples are known non-cycling genes cam, RpL32, cyc, and dco.
As plotted, large values for the positive examples and small values for the negative examples are desirable. The magenta line marks a p-value of 0.05 (\u2212log10 0.05 = 1.3). Since 2 \u00d7 10^6 permutations were used to generate the empirical JTK_CYCLE p-values, they cannot be lower than 5 \u00d7 10^\u22127. Abbreviations are the same as in the main text.

S10 Fig.

S11 Fig. (A) The number of genes with a Benjamini-Hochberg adjusted p-value (FDR) below 0.05 (blue) and 0.20 (red) are shown. (B) A comparison of the intersection and union of genes identified as rhythmic with Benjamini-Hochberg adjusted p-values less than 0.05 for the different methods. Abbreviations are the same as in the main text.

S12 Fig. Points represent genes, colored by the asymmetries estimated by the asymmetry search by 4 h. The black vertical and horizontal lines mark an FDR of 0.05 (\u2212log10 0.05 \u2248 1.30). Genes to the right of the vertical line pass the threshold cutoff for eJTK_aby4, while genes above the horizontal line pass the threshold cutoff for eJTK with asymmetry search of 8 and 16 h. Genes that are above the horizontal line but left of the vertical line barely pass the threshold and have asymmetries in the range of 8 to 16 h. Genes that are right of the vertical line but below the horizontal line pass the threshold much more significantly than the previously mentioned genes and have asymmetries that are more extreme.

S13 Fig. A comparison of genes identified as rhythmic with Benjamini-Hochberg adjusted p-values less than 0.05 (blue bars) or 0.20 (red bars) for empirical JTK_CYCLE without asymmetry search (eJTK) and empirical JTK_CYCLE with asymmetry search of 4, 8, 12, 16, and 20 h (eJTK_aby4), calculated with a reference waveform of a triangle (no prefix) or a cosine (prefix \u201ccos\u201d).
(A) The number of genes with a Benjamini-Hochberg adjusted p-value below 0.05 (blue) and 0.20 (red) are shown. (B) A comparison of the intersection and union of genes identified as rhythmic with Benjamini-Hochberg adjusted p-values less than 0.05 for the different methods.

S14 Fig. Annotation terms identified as enriched by DAVID share many similarities and were therefore grouped. The number of annotation terms enriched in the genes discovered by each method is shown in grey shading and red numbers, for empirical JTK_CYCLE methods with and without asymmetry search and with a triangle or a cosine as the reference waveform. The annotation terms displayed are enriched with Benjamini-Hochberg adjusted p-values below 0.05.

S15 Fig. (A) All the genes shown passed the ANOVA pre-screen, but only the green ones are identified as cycling by Keegan et al. Higher \u2212log10 p-values are more significant than lower ones; the horizontal black line indicates a Benjamini-Hochberg adjusted p-value for eJTK_aby4 of 0.05. (B) All the genes shown were identified as cycling by Keegan et al. The mean and variance of the genes identified as cycling by both Keegan et al. and eJTK_aby4 (blue) are 4.34 and 0.54, respectively. The mean and variance of the genes identified as cycling by Keegan et al. but not eJTK_aby4 (red) are 4.75 and 0.56, respectively. (C) All the genes shown were identified as cycling by Keegan et al.; the asymmetry of the genes was determined by eJTK_aby4.

S16 Fig. For each gene, the references in FlyBase (http://flybase.org) that mention the gene were identified. The genes identified by eJTK_by4, Keegan et al., and Wijnen et al.
are shown in a histogram with stacked bars colored to represent the genes being cited by references with \u201ccircadian\u201d in the title or abstract, genes cited in the original five dataset papers, or neither. While there are more genes uniquely identified by Wijnen et al., there are more total genes identified by eJTK_by4, as well as more genes that are cited in papers that have \u201ccircadian\u201d in their title or abstract. (A) Comparison of genes identified as rhythmic by Keegan et al., Wijnen et al., and eJTK_by4.

S17 Fig.

S18 Fig. Peak expression (phase) of these genes is mainly in the light period. (A) Z-scored gene expression of genes from the metadataset involved in glutathione metabolism, averaged across 24 h and interpolated to every 2 h. (B) Phase and asymmetry distribution of the genes from the metadataset involved in glutathione metabolism.

S19 Fig. Peak expression (phase) of these genes is distributed over 24 h. (A) Z-scored gene expression of genes from the metadataset involved in oxidation reduction, averaged across 24 h and interpolated to every 2 h. Black indicates time points where data were not available (NA). (B) Phase and asymmetry distribution of the genes from the metadataset involved in oxidation reduction.

S20 Fig. Peak expression (phase) of these genes is distributed over 24 h. (A) Z-scored gene expression of genes from the metadataset involved in alternative splicing, averaged across 24 h and interpolated to every 2 h.
(B) Phase and asymmetry distribution of the genes from the metadataset involved in alternative splicing.

S1 Data.

S2 Data. The method results provided are all for the metadataset after pre-processing: eJTK_aby4, eJTK_a12, JTK_BF_aby4, JTK_BF_a12, cos_eJTK_aby4, cos_eJTK_a12, ANOVA, and F24.

S3 Data. The method results provided are all for the metadataset after pre-processing: JTK_BH_aby4, JTK_BH_a12 (these two come with DAVID results), eJTK_a04-12-20, and eJTK_a08-16."}
+{"text": "Inter-individual variation in regulatory circuits controlling gene expression is a powerful source of functional information. The study of associations among genetic variants and gene expression provides important insights about cell circuitry but cannot specify whether and when potential variants dynamically alter their genetic effect during the course of response. Here we develop a computational procedure that captures temporal changes in genetic effects, and apply it to analyze transcription during inhibition of the TOR signaling pathway in segregating yeast cells. We found a high-order coordination of gene modules: sets of genes co-associated with the same genetic variant and sharing a common temporal genetic effect pattern. The temporal genetic effects of some modules represented a single state-transitioning pattern; for example, at 10\u201330 minutes following stimulation, genetic effects in the phosphate utilization module attained a characteristic transition to a new steady state. In contrast, another module showed an impulse pattern of genetic effects; for example, in the poor nitrogen sources utilization module, a spike up of a genetic effect at 10\u201320 minutes following stimulation reflected inter-individual variation in the timing (rather than magnitude) of response. Our analysis suggests that the same mechanism typically leads to both inter-individual variation and the temporal genetic effect pattern in a module. Our methodology provides a quantitative genetic approach to studying the molecular mechanisms that shape dynamic changes in transcriptional responses. Our findings suggest that associations of genes with the same genetic variant often occur via the same timing of state transition in genetic effects. Furthermore, the results uncover a previously unknown variant whose impulse-like temporal genetic effect suggests a novel molecular function for determining the timing rather than the magnitude of response.
Our results show that steady-state association studies miss important genetic information, and demonstrate the power of DyVER to render a comprehensive map of dynamic changes in genetic effects. Genetic variation is postulated to play a major role in transcriptional responses to stimulation. Such a process involves two inter-related dynamic processes: first, the time-dependent changes in gene expression, and second, the time-dependent changes in genetic effects. Although the dynamics of gene expression has been extensively investigated, the dynamics of genetic effects remains poorly understood. Here we develop DyVER, a method that combines genotyping with time-series gene expression data to uncover the timing of transitions in the magnitude of genetic effects. We examine gene expression in yeast segregants during the rapamycin response, finding several distinct ways in which the magnitude of genetic effects changes over time. These include impulse-like and sustained transitions in genetic effects, acting both in cis and in trans. Two recent studies have demonstrated that genetic effects on longitudinal gene expression data might be either stable \u2013 where the genetic effect is similar at all time points \u2013 or follow a linear-like genetic effect pattern. Such a mechanism can be revealed even when additional mechanisms are acting in parallel. The linear genetic effect pattern, in contrast, lacks sharp alterations and therefore does not specify finely-timed information about regulatory mechanisms. We therefore developed DyVER, a statistical framework to predict genetic variants and study their dynamic changes in genetic effect sizes. DyVER was mainly designed to achieve an accurate detection of non-linear genetic effects (Fig. 1C). DyVER differs from extant genetic approaches in several aspects. First, some existing methods construct a full model of the response curve across individuals; their number of parameters therefore increases with the number of time points.
DyVER, in contrast, uses a compact two-state parameterization, described below. Here we report on the use of DyVER to investigate temporal gene responses at six time points after stimulation with the TOR inhibitor rapamycin and across genotyped yeast segregants. We devised a new method, DyVER, to identify genetic variants that underlie the expression of genes and their particular dynamic effect patterns. DyVER takes as input the measured transcription response of a gene over several consecutive time points following stimulation and across a cohort, as well as a set of potential genetic variants and their genotyping (Methods; Fig. 2A). (1) It first calculates the observed effect of the variant, namely the difference in gene response between strains carrying the two distinct alleles, at each time point. (2) It then assigns each time point one of two effect levels: a high effect (the high-effect state), or a lower (such as zero) effect, or possibly an opposite effect (denoted the low-effect state). Several previous methods have employed a two-state model, although not in a dynamic or a genetic-effect context. For example, for a variant v acting during a late time interval, DyVER successfully infers the correct effect pattern low\u2192low\u2192high\u2192high for the correct variant v, as it attains the highest likelihood score. (3) DyVER calculates the statistical significance of association for each genetic variant based on a likelihood ratio score that takes as input the inferred temporal two-state model. The score combines (i) the probability of the observed effects given a certain temporal two-state model, and (ii) the probability of a temporal two-state model, which may use a penalty factor to prioritize two-state models with a lower number of transitions between states, assuming dependencies among consecutive time points. In the absence of a penalty, the order of time points is irrelevant, and the predicted two-state model can therefore be viewed as a partition of an unordered group of time points into two sub-groups. The DyVER score exploits this partition for a different parameterization of the (unordered) time points in each of the two states.
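Step 1, the per-time-point observed effect, is just a difference of allele-group means; a sketch with an illustrative data layout (`expression[strain][time]`, binary genotypes):

```python
import statistics

def observed_effects(expression, genotypes):
    """Per-time-point observed effect of a variant: the difference
    between the mean response of strains carrying allele 1 and the
    mean response of strains carrying allele 0. Assumes both allele
    groups are non-empty; names and layout are illustrative."""
    n_times = len(expression[0])
    effects = []
    for t in range(n_times):
        a0 = [expression[s][t] for s, g in enumerate(genotypes) if g == 0]
        a1 = [expression[s][t] for s, g in enumerate(genotypes) if g == 1]
        effects.append(statistics.mean(a1) - statistics.mean(a0))
    return effects
```

The resulting effect series (not the raw expression) is what the two-state model is fitted to, which is why the method is insensitive to the overall magnitude of the transcription response.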
The addition of the penalty factor makes it possible to avoid an overfitted two-state model that is then given as input to the next step, hence further improving the DyVER score's performance.Overall, step 1 allows DyVER to focus on dynamics in genetic effects regardless of the magnitude of transcription response, whereas the discrete modeling in step 2 allows detecting any sequence of spikes up or down in genetic effects. The two-state model from step 2 enhances the performance of the DyVER score (step 3) by allowing a separate parameterization for each of the states. Specifically, to infer an optimal temporal two-state model, DyVER uses a two-state hidden Markov process where the observed effects are treated as the outcome of a sequence of hidden high-effect and low-effect states step 2; Fig. 2C.P value score. The second method builds on dimension reduction using principal component analysis (PCA): Given T time points for each strain as input, it first reduces the T-dimensionality of the data into a single dimension by projecting each strain onto the first principal component. Next, it applies an ANOVA test on this one-dimensional data P values were Bonferroni-corrected for the testing of multiple genetic variants. The quality of predicted variants were evaluated using the accuracy metric, defined as the tradeoff between the sensitivity and specificity of revealing genetic variants across different significance cutoffs. The accuracy metric ranges between 0 and 1 for poor and excellent performance, respectively (Methods).We compared DyVER's performance to that of five alternative methods. 
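The two-state hidden Markov inference described above (step 2) can be illustrated with standard Viterbi decoding over a two-state Gaussian HMM, where a small switch probability plays the role of the transition penalty. This is a minimal sketch under assumed parameter values; DyVER's actual parameterization and fitting are more involved:

```python
import math

def viterbi_two_state(effects, mu_low, mu_high, sigma, p_switch):
    """Decode the most likely low/high effect-state sequence for a series of
    observed effects under a two-state Gaussian HMM. p_switch is the per-step
    probability of switching states (a small value penalizes extra
    transitions). Returns a string such as 'LLLHHH'."""
    def log_emit(x, mu):
        # log density of a Gaussian with mean mu and standard deviation sigma
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    mus = (mu_low, mu_high)
    log_stay, log_switch = math.log(1.0 - p_switch), math.log(p_switch)
    # dp[s]: best log-probability of any state path ending in state s
    dp = [math.log(0.5) + log_emit(effects[0], mus[s]) for s in (0, 1)]
    backptr = []
    for x in effects[1:]:
        step, new_dp = [], []
        for s in (0, 1):
            scores = [dp[r] + (log_stay if r == s else log_switch) for r in (0, 1)]
            best = 0 if scores[0] >= scores[1] else 1
            step.append(best)
            new_dp.append(scores[best] + log_emit(x, mus[s]))
        dp = new_dp
        backptr.append(step)
    state = 0 if dp[0] >= dp[1] else 1
    path = [state]
    for step in reversed(backptr):
        state = step[state]
        path.append(state)
    return ''.join('LH'[s] for s in reversed(path))
```

With a clear late-onset effect, the decoder recovers a single low-to-high transition; raising p_switch toward 0.5 removes the penalty and the decoding degenerates into a per-time-point classification.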
In the first method, the most na\u00efve approach, an ANOVA test is applied at each time point independently and the predicted genetic variant is the one with the most significant ANOVA P value. To characterize DyVER's ability to reveal dynamic genetic variants and distinguish their effect patterns, we generated synthetic collections of genes that are associated with genetic variants over time; these included impulse and multiple-pulse (complex) patterns based on the product of two sigmoid functions. A single synthetic \u2018collection\u2019 consisted of 500 genes, 300 of them associated with a genetic variant over time, with two characteristic parameters: (i) the number of time points, and (ii) the effect size. In a complete synthetic \u2018dataset\u2019 we generated 72 collections for various numbers of time points and effect size values. Overall, four synthetic datasets were generated in this study, each consisting of a different key class of dynamic effect patterns (see Methods; Figure S1A and B). Results were similar for varying effect sizes and for an additional synthetic dataset that is based on prototypical effects in C. elegans. Furthermore, although DyVER's accuracy is reduced in the case of missing data, it is still notably high in comparison to alternative methods. Taken together, our results indicated that DyVER performs well on a broad range of genetic effect patterns. DyVER showed good accuracy in all non-linear dynamic effect patterns. The short-impulse dataset consisted of a high-effect state of short duration, whereas the long-impulse dataset consisted of a high-effect steady state of long duration (fifteen time points) (Figure S4B). For example, for high-effect sizes (0.625), the sensitivity of DyVER is 0.7 and 1 with short and long impulses, respectively. The sensitivity of PCA, in contrast, is respectively 0.47 and 1 with short and long impulses for the same effect size. Thus, even when genetic variants acted during short time intervals, DyVER still performed relatively well. 
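The impulse-shaped effect curves used for the synthetic data are described as the product of two sigmoid functions. A minimal sketch; the parameter names and the steepness value are our illustrative choices, not those of the original simulations:

```python
import math

def impulse_effect(t, onset, offset, height, steepness=2.0):
    """Impulse-shaped genetic effect curve: the product of a rising and a
    falling sigmoid, giving an effect that switches on near `onset` and
    off near `offset` (parameter names are illustrative)."""
    rise = 1.0 / (1.0 + math.exp(-steepness * (t - onset)))
    fall = 1.0 / (1.0 + math.exp(steepness * (t - offset)))
    return height * rise * fall
```

Multiple-pulse (complex) patterns follow by summing several such impulses with different onset/offset pairs; a sustained pattern is the limit where the falling sigmoid never engages within the sampled window.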
This was unlike the alternative methods, whose performances were drastically reduced even for relatively high-effect sizes.The performance of both DyVER and the alternative methods declined when applied on a short impulse compared to a long impulse of genetic effects, but notably, the performance reduction was lowest with DyVER , and calculated it both for the case of stringent (exact) matching or flexible (non-exact) matching between the true and inferred models (Methods). In both cases, we found that DyVER performs well in predicting two-state models, where the flexible case outperforms the stringent case, as expected. For example, using single state-transitioning patterns with nine time points, effect size 0.75, significance cutoff 0.001 and the absence of penalty (probability of transition 0.5), the stringent and flexible error rates are 0.41 and 0.33, respectively . The error rate increased with decreasing penalty and 0.5 , stringent error rates are 0.32 and 0.41, respectively). As expected, error rates rose when a higher statistical significance cutoff (0.05) was used, whereas the gap between the error rates for different significance cutoffs remained relatively constant when the penalty increased. Results obtained for other effect sizes were similar.DyVER predicts a temporal two-state model, which may provide insights concerning the timing of changes in genetic effects Fig. 2C.S4), and that these performance can be even enhanced by the addition of a penalty component . These results hold when the complexity of dynamic effect patterns is relatively low, as in the case of genetic effects in biological data .Collectively, our results indicated that DyVER outperforms extant methods even in the absence of penalty and the presence of missing data (Methods) Table S1, Methods) and 105 of them showed non-linear genetic effect patterns (Table S2). 
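The stringent and flexible rules for matching a predicted temporal two-state model against the true one can be expressed compactly. This sketch uses our own 'L'/'H' string encoding of a model, one character per time point:

```python
def match_two_state(true_model, predicted_model):
    """Match a predicted temporal two-state model against the gold standard.
    'stringent' requires an exact match; 'flexible' requires the same
    sequence of state transitions but tolerates incorrect transition timing."""
    def collapse(states):
        # 'LLHHHL' -> 'LHL': keep only the order of states, not their timing
        out = []
        for s in states:
            if not out or out[-1] != s:
                out.append(s)
        return ''.join(out)

    return {
        'stringent': true_model == predicted_model,
        'flexible': collapse(true_model) == collapse(predicted_model),
    }
```

For example, a prediction that places the single low-to-high transition one time point early fails the stringent rule but passes the flexible one, which is why the flexible error rate is bounded above by the stringent error rate.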
Correlations among genetic effects of consecutive time points were much larger than correlations between non-consecutive time points [P value <10\u221215 (Wilcoxon test)], justifying our \u2018memoryless\u2019 Markov assumption that the next time point is mainly dependent on the current time point . The 105 genes carrying non-linear effect patterns were partitioned into groups based on their predicted two-state pattern (Table S1); seven two-state pattern groups (C1\u2013C7) were created, each including at least two genes .We applied DyVER in an unbiased manner to the available dataset of 95 yeast segregants that were stimulated by rapamycin and profiled at six time points demonstrate a sustained genetic effect with a state transition at 0\u201310 and 0\u201320 minutes after rapamycin stimulation, respectively and were under-represented . Our findings of rare complex patterns in yeast parallel similar observations in the mouse ; Yet, the particular shape of effect patterns may differ between biological systems .The partition revealed three prototypical non-linear genetic effect patterns Fig. 4A,trans-acting variants that arise from this analysis. Using DyVER's predictions we organized the genes into six co-association modules, each containing a group of (at least two) genes with the same trans-associated variant . Functional enrichment strongly related all six modules with specific biochemical pathways. For example, the entire module no. 3 consists of genes that play a role in uptake of phosphate (Pi) from extracellular sources and its accumulation in vacuoles . The module's validated causal gene is PHO84, a high-affinity phosphate transporter that carries a missense mutation in one of the parental strains We next explored the pleiotropic , Figure S9C). Overall, we found four modules with over-represented patterns of single state-transitioning at specific time points and one sub-module of an impulse effect pattern (no. 5-II). 
The observed coordination of temporal genetic effects does not necessarily reflect a coordination of transcription responses . In previous reports, baseline expression levels were used to identify eight genetic variants underlying similar modules (Table S2), but the coordinated temporal genetic effects and the timing of upward or downward spikes of genetic effects were not characterized.Next we examined whether module genes show characteristic temporal effect patterns. On analyzing the modules we found that modules nos. 1, 3, 4, 5-I and 5-II relate to a specific prototypic temporal genetic effect pattern, whereas the remaining two modules (nos. 2 and 6) are more general and show several distinct patterns Fig. 5A.5E): The trans-associated causal gene of module no. 1 (IRA2) attains a sustained-like pattern of gene expression that resembles the temporal genetic effect pattern of its target genes (left). The cis-associated causal genes in modules nos. 3 and 4 (PHO84 and GPA1) exhibit drastic changes in their transcription response at the same time point at which there is a (downward or upward) spike in the genetic effect of their target genes .A plausible explanation for the \u2018shared variant, shared temporal genetic effect pattern\u2019 hypothesis is that the same molecular mechanism underlies both inter-individual variation and the dynamics of genetic effects. In such cases, the dynamic pattern of effect is an attribute of the underlying regulatory mechanism (rather than of the target genes), probably owing to temporal changes in the influence or activity of the regulatory mechanism. This hypothesis is further supported by the consistency in the timing of state transitions in module genes and their underlying (known) causal genes demonstrates the ability of our method to reveal novel associations acting on the timing of response and affecting an entire cellular pathway ,6. 
The impulse pattern reflects a difference in the timing of initiation of response among the strains carrying the RM and BY alleles in Chr2: 533\u2013562 kb. For example, strains carrying the BY allele showed early up-regulation of DAL80 in response to rapamycin, which was already detected at 10 minutes after stimulation. The RM-carrying strains, in contrast, showed a clear delay in response to rapamycin, but all strains reached a similar expression level by 30 minutes after stimulation (Fig. 6D). Several genes in the linkage interval have temporal transcription profiles that match the expected early impulse of high genetic effect, the promoter of five genes is bound by nitrogen-related transcription factors, and several genes (RPB5, CNS1, ADH5, RTC2) were previously reported in nitrogen-related cellular processes. These criteria therefore suggest that RPB5 and CNS1 are two leading candidates in module 5-II. In this work we present the DyVER computational algorithm for identifying genetic variants that lead to dynamic changes in genetic effects. DyVER was tailored to identify abrupt changes in the levels of genetic effects, which may provide valuable information about the timing of alterations in the particular regulatory mechanisms interacting with the underlying genetic variant. Its advantage likely stems from (i) a focus on genetic effects rather than on modeling the original phenotype values, and (ii) the prior knowledge about the separation of the time points into two distinct groups that differ in their observed effects, thus allowing a different parameterization for each of these groups. 
In comparison with other approaches, DyVER attained the most accurate identification of non-linear genetic effect patterns, even in the absence of penalty; however, it still cannot be applied on non-synchronous data. DyVER focuses on inbred strains, which are common in genetic studies due to several major advantages: first, inbred strains enable controlled stimulations, and second, they avoid major challenges that are common in human genetics, including haplotype analysis, rare variants and uncontrolled variables. Future extensions may generalize the method to the heterozygous case, possibly by calculating genetic effects between each pair of genotypes (rather than between the only two possible genotypes as in the homozygous case), requiring one or two additional Gaussians within each of the model states. Second, the use of our probabilistic model leads to several limitations: the number of states must be specified in advance; we only capture correlations between sequential time points but cannot capture higher-order correlations among time points; and we generally assume that the probability of a time point is independent of the probabilities of its neighboring time points. Future improvements may handle more than two states and employ a more sophisticated probabilistic graphical model. Building on the DyVER approach, we analyzed temporal gene expression patterns following rapamycin treatment in yeast segregants. Our analysis identified 105 genes exhibiting significant non-linear genetic effects over time; 56 of them are well-established associations, and the remaining genes are new candidates for future experimental investigations (e.g., Fig. 4B and Figure S12). This organization is substantially different from previous studies. The application of DyVER in yeast provided several novel insights that were mainly attained due to DyVER's unique capability to classify associations based on their optimized temporal effect patterns. 
First, we use the temporal effect pattern to automatically organize the genes into clusters based on their predicted patterns . Furthermore, it may be possible to discriminate between two genetic variants differing in their dynamic over time, even when these variants are co-localized at a nearby genomic position . The observed effects were calculated only after this transformation.high-effect state (\u2018H\u2019), reflecting the observed presence of a high genetic effect, or the low-effect state (\u2018L\u2019), reflecting either small effect, an opposite effect, or the absence of effect. We denote by t-th time point sequence of states is H or L. We model the dependencies among time points as a first-order Markov chain, assuming penalty: Our digital model assumes that at each time point a genetic variant may attain one of two states: either the The lower the probability of a temporal two-state modelinitial state probabilityThe overall Assuming H or the L state), the probability of measuring such observed effect values isprobability of observed effects. In both states H and L the effect is modeled with mean Gaussian noise of H state and L state. Notably, the parameters for all time points in the same state are shared, opening the way to a large number of time points without increasing the number of model parameters.Assuming that the temporal two-state model of a variant S, model parameters likelihood function for a candidate gene is:Collectively, given a temporal two-state model For each candidate variant Notably, the model can be viewed as a two-state Hidden Markov Model (HMM) Na\u00efvely, the maximal likelihood score DyVER score. The significance level of the score is evaluated by repeatedly permuting the labels of strains. We calculate an empirical P value, defined as the proportion of permutation tests for which the DyVER score is larger than the observed (non-permutated) score. 
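The permutation test described above, an empirical P value obtained by repeatedly permuting strain labels, can be sketched as follows. The generic `score_fn` here is a stand-in for the DyVER likelihood-ratio score, and all names are ours:

```python
import random

def empirical_p_value(score_fn, expr, genotype, n_perm=1000, seed=0):
    """Empirical P value by strain-label permutation: the proportion of
    permutations whose association score exceeds the observed (non-permuted)
    score. score_fn takes (expr, genotype) and returns a score where larger
    means a stronger association."""
    rng = random.Random(seed)
    observed = score_fn(expr, genotype)
    labels = list(genotype)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(labels)  # break the strain-to-genotype pairing
        if score_fn(expr, labels) > observed:
            exceed += 1
    return exceed / n_perm
```

In practice an add-one ("pseudo-count") correction is often applied so the estimate never reaches an impossible P of exactly zero; the plain proportion is shown here to match the definition in the text.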
Figure S13 indicates that the DyVER score P value is indeed well-calibrated using a Q-Q plot analysis. Reported predicted association(s) are those with significant DyVER scores.The null hypothesis Figure S14) \u2013 have not been used throughout this study since they are prone to attaining Notably, in standard genetic methods, the null model's input dataset is similar for different variants. For example, the null model of a regression analysis utilizes the same data values but with variant-specific predictors; the null model of an ANOVA test is variant-independent. In the same sense, the null model of the DyVER score is based on dynamic association score may test the significance of the difference between the observed effects of the high-effect and the low-effect states . The partition into high-effect and low-effect states is determined by the predicted temporal two-state model , and since our scoring scheme relies only on these measures, thus the input data in each time point may consists of a different population of strains with a different population size. In particular, as long as the input data consists of multiple synchronous strains in each time point, DyVER allows missing data without a requirement for data imputation. DyVER's executable and source code, including an option for an additional imputation based on flanking time points of the same strain, can be downloaded from csgi.tau.ac.il/dyver.To generate synthetic data we first generated 50 strains carrying 100 genetic variants, sampling one of the two alleles with equal probabilities. A single synthetic collection consists of 500 genes, of which 300 are associated with a certain variant over T time points. Overall, for a single dataset we generated 72 collections, constructed for all combinations of eight possible \u2018effect sizes\u2019 and nine different numbers of time points (ranging between 3 and 27). 
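The genotype side of the synthetic-data setup described above (50 strains, 100 variants, alleles drawn with equal probabilities) can be sketched as follows; the sizes are the ones stated in the text, while the 0/1 allele coding and function name are ours:

```python
import random

def simulate_genotypes(n_strains=50, n_variants=100, seed=0):
    """Simulate a homozygous genotype matrix: for each strain and each
    variant, sample one of two alleles (coded 0/1) with equal probability."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_variants)]
            for _ in range(n_strains)]
```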
In all cases, the low-effect state represents the absence of effect we generated a different collection of synthetic sustained dataset as follows: we first generated the temporal two-state model by sampling from the corresponding distribution .For the purpose of comparing predicted to gold-standard temporal two-state models . For the \u2018expression dynamics\u2019 method, we used the model t for strain j carrying genotype i, i and lme4 R package. In all cases above, an F-test was used to test the model. For the more sophisticated \u2018detailed dynamics\u2019 method, we use the longGWAS R package that is part of its original publication The compared methods were implemented as follows. In the \u2018na\u00efve\u2019 method we assumed a simple fixed effect model on each time point independently, P values were Bonferroni-corrected for multiple variants). To quantify the ability to correctly predict such genetic variants, we define the accuracy measure. Genes are split into two groups: one contains genes that are associated with a genetic variant, and the other contains the remaining, non-associated genes. A mapping method may provide a negative prediction , or alternatively, a positive prediction of either the correct variant or an incorrect variant. We define true positives as associated genes whose correct genetic variant is predicted with a significant P value. True negatives are non-associated genes that were not significantly associated with any variant. False negatives are associated genes that were not significantly associated with any variant. Finally, false positives are defined as erroneous significant predictions as a result of two possible scenarios, either a non-associated gene that is wrongly predicted to be associated with a certain variant, or alternatively, an associated gene whose predicted variant is incorrect. 
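The outcome definitions above can be written out directly; note that, unlike a standard classification setting, a significant prediction of the wrong variant counts as a false positive. A minimal sketch with our own naming:

```python
def classify_prediction(true_variant, predicted_variant, significant):
    """Outcome category for one gene, following the definitions above.
    true_variant      : index of the truly associated variant, or None for a
                        non-associated gene
    predicted_variant : index of the predicted variant (only meaningful when
                        the prediction is significant)
    significant       : whether the prediction passed the P-value cutoff
    """
    if true_variant is None:
        # non-associated gene: any significant prediction is a false positive
        return 'FP' if significant else 'TN'
    if not significant:
        return 'FN'
    # associated gene: a significant but incorrect variant also counts as FP
    return 'TP' if predicted_variant == true_variant else 'FP'
```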
We adopt the standard formulations for sensitivity and specificity. For each synthetic dataset, DyVER was applied to predict a genetic variant using the DyVER score. Notably, using a standard sensitivity definition, sensitivity should increase with higher P value thresholds. In contrast, using our definition, sensitivity depends on the particular predicted variant. Thus, even with a very high P value threshold and many affected genes, the sensitivity of a random algorithm might remain close to zero. The accuracy therefore ranges between 0 (for a random prediction) and 1 (for a perfect prediction). Similarly to a standard \u2018Receiver Operating Characteristic\u2019 (ROC) analysis, we can plot the sensitivity against 1-specificity across different significance cutoffs. Finally, to quantify the ability of DyVER to correctly predict the temporal two-state model, we define the two-state pattern error rate (shortened to error rate) as the number of wrongly predicted temporal two-state models expressed as a proportion of the total number of (significant) correctly identified variants. We test two different rules for matching between the simulated and predicted model: in the stringent case, we require a fully correct two-state model, and in the flexible case, we require correct transitions between states but allow incorrect timing of transition. P values were first Bonferroni-corrected for multiple variants; the corrected DyVER score P values were then controlled for multiple testing of genes (FDR 6%). We then further filtered the genes based on the dynamic association score (FDR 15%). In total, out of 2700 genes, we obtained 351 (13%) predicted associations and 145 (5.3%) predicted dynamic associations (based on the dynamic association score). Next, we further removed 40 genes carrying linear-like patterns, based on strong correlation with a linear model (r>0.95) and more than 5% change in genetic effect in any two consecutive time points (Table S1). 
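The standard sensitivity and specificity formulations referred to above reduce to simple ratios over the per-gene outcome labels; a minimal sketch:

```python
def sensitivity_specificity(outcomes):
    """Standard sensitivity and specificity from a list of per-gene outcome
    labels ('TP', 'TN', 'FP', 'FN'):
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    tp = outcomes.count('TP')
    tn = outcomes.count('TN')
    fp = outcomes.count('FP')
    fn = outcomes.count('FN')
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```

Sweeping the significance cutoff and recording (sensitivity, 1-specificity) pairs yields the ROC-like curve described in the text.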
The partition into groups was generated automatically according to DyVER's predicted two-state model. We applied DyVER to genotyping data and gene expression data that were monitored during six time points following exposure to rapamycin in 95 yeast segregants and their two parental yeast strains: BY4716 (BY) and RM11-1a (RM) (Figure S6B). In addition, we applied DyVER to genotyping data and log gene expression data of 403 genes that were monitored using a meso-scaled technology during three time points following exposure to lipopolysaccharide in 45 mouse BXD strains. Figure S1 Comparative performance analysis on synthetic data. Scatter plots for various performance measures (y-axis) of six alternative mapping methods (color coded) over genes that were measured in different numbers of time points, or over genes of different effect sizes. Shown are different patterns of genetic effects (linear and complex sub-panels, among others). In A,C, shown is the accuracy measure, whereas in B,D, presented are the sensitivity and specificity measures, exemplified for DyVER against the PCA method (chosen since its accuracy is the best among the compared methods). Plots A,C indicate that for the non-linear genetic effect patterns, DyVER has an advantage in accuracy over existing approaches. The sensitivity and specificity tradeoff that leads to this accuracy advantage is demonstrated in B and D.(EPS)Click here for additional data file.Figure S2 Comparative performance analysis on nematode-based synthetic data. Shown is accuracy (y-axis) for several compared approaches (color coded) using different patterns of genetic effects that were built based on the effect curves from Francesconi et al. (2014) (Methods). Results are shown over synthetic genes that were measured at six time points and of different genetic effect sizes (x-axis). 
The plot demonstrates that DyVER has an advantage over the compared approaches on biologically relevant synthetic data.(EPS)Click here for additional data file.Figure S3 DyVER's performance analysis using incomplete data. The plot depicts the accuracy measures (y-axis) for the DyVER method across various percentages of missing data and for the compared methods in the case of complete data (x-axis). The results are shown for a single state-transitioning (sustained) pattern of genetic effects, over genes that were measured at nine time points and genetic effect size 0.5. The complete data consist of the same 50 strains in each time point, whereas in the missing-data input, each time point consists of a different (in some cases overlapping) list of strains. The plot indicates the high accuracy of DyVER, even in the case of missing data.(EPS)Click here for additional data file.Figure S4 Comparison of performance on short-impulse and long-impulse synthetic data. (A) Performance measures (y-axis) for different effect sizes (x-axis). The results presented are for all genes consisting of 27 time points with either short impulses or long impulses. The plot depicts six alternative mapping methods (color coded). (B) The fraction of performance reduction for short-impulse data compared to long-impulse data (y-axis) for different effect sizes (x-axis). Results are shown for both accuracy (left) and sensitivity (right) measures, and are omitted when the accuracy or sensitivity in long-impulse data is low. The plots indicate that the fraction of performance reduction is much lower in the case of the DyVER algorithm than in the alternative methods, providing evidence for the good performance of DyVER in the case of short-duration genetic effects.(EPS)Click here for additional data file.Figure S5 Effect pattern error rates. Two-state pattern error rates using a stringent (A) or a flexible (B) matching (y-axis) for a model penalty ranging between 0.01 and 0.5. 
Performances were evaluated for a single state-transitioning effect pattern dataset of nine time points. Results are shown for effect size 0.75 and using both stringent (red dashed line) and relaxed (red solid line) cutoff P values, as well as using random predictions (gray dashed line). As expected, in all cases, the higher the penalty, the lower the error rate.(EPS)Click here for additional data file.Figure S6Non-linear genetic effect patterns. Percentages of dynamically-associated genes predicted by DyVER (y-axis) across different non-linear genetic effect patterns (x-axis). Results are presented for analysis of real data (blue dots) and reshuffled data (box plots for 1000 repeats). Presented are results in yeast response to rapamycin , and in three-time-point data in mice strains . Notably, although the percentages are similar in yeast and mouse, their total number of genes drastically differ and therefore the actual number of identified genes is different .(EPS)Click here for additional data file.Figure S7High correlations among genetic effects of consecutive time points. (A) Presented is a correlation matrix of genetic effects between every pair of time points . The matrix indicates that correlation among consecutive time points is higher than correlation among non-consecutive time points. (B) The distribution of mismatches between genetic effects of consecutive time points and between genetic effects of non-consecutive time points . Depicted are three plot representing the fraction (y axis) of mismatch values (x axis) across all 105 non-linear dynamic associations (Table S1); top, middle and bottom panels represent ti \u200a=\u200a10, 20 and 30 minutes after rapamycin treatment, respectively. 
To calculate mismatch between genetic effects of two candidate time points, we first calculated a regression model relating genetic effects at a certain time point ti (dependent variables) to genetic effects in the following time points ti+1 or ti+2 (independent variables). Mismatch values are defined as the residuals of this regression. The plots indicate that mismatches between consecutive time points are lower than mismatches between non-consecutive time points. (C) A heat map of the relations between genetic effects at 10 minutes (y axis), genetic effects at 20 minutes (x axis), and genetic effects at 30 minutes after rapamycin treatment. Each cell represents a 2D bin consisting of all genes with genetic effects in a defined range at time points 10 and 20 minutes after treatment. 2D bins are colored based on their average genetic effect at 30 minutes after treatment (empty 2D bins are colored white). The heat map demonstrates that genetic effect at 30 minutes after treatment is linked to genetic effect in its nearby time point (20 min) but not to an earlier time point (10 min), consistent with DyVER's \u2018memoryless\u2019 Markov model.(EPS)Click here for additional data file.Figure S8 Non-mutually exclusive classes of temporal effect patterns in yeast and nematode. Comparison between the fraction of genes (y-axis) that were classified into five non-mutually exclusive classes (x-axis) of the yeast dataset and the C. elegans dataset. Classification categories are as in Francesconi et al. (2014).(EPS)Click here for additional data file.Figure S9 Phosphate (Pi) acquisition and storage module no. 3. (A) Module no. 3 genes (pink) in the context of the phosphate acquisition and storage pathway (adapted from Ref. 29). The pathway shows the extracellular conversion of phosphate monoester into phosphate, phosphate transport into the cytoplasm, and deposition of phosphate into storage vacuoles. (B) The genomic interval underlying module no. 
3, residing in Chr13: 28 kb. Shown are DyVER scores (y-axis) across the genomic positions in chromosome 13 (x-axis) for the genes in module no. 3 (color coded). The position of the known causal variants in PHO84 is marked below; all remaining genes are trans-associated. Genetic effects of the module and a representative gene are depicted in . (C) Genetic effects, relative to non-stimulated genetic effects for different trans-associated genes from B (color coded) at six time points (x-axis).(EPS)Click here for additional data file.Figure S10Genetic effects and transcription responses of co-associated genes in module no. 4. (A) Genetic effects, relative to non-stimulated genetic effects for the co-associated genes in module no. 4 (color coded) at six time points (x-axis). (B) Averaged transcription response, relative to non-stimulated transcription response for module no. 4 genes (color coded) at six time points (x-axis). Notably, the module genes share a similar genetic effect pattern (A), even though they do not share a similar transcription response pattern (B).(EPS)Click here for additional data file.Figure S11Identifying the genes likely underlying the nitrogen-regulated module no. 5 - II. (A) Potential causal genes underlying the nitrogen-regulated module no. 5-II (column 1), genetic linkage interval at chromosome 2 (column 2). Amino acid differences between the RM and BY strains are reported in column 3. The table presents three selection criteria: First, by reporting genes whose temporal transcription profiles fit the expected impulse effect pattern . Second, by reporting those genes that are significantly bound by nitrogen-related transcription factors . Finally, by reporting functionally-related genes . Shown are all genes selected by at least one criterion. Notably, RPB5 and CNS1 were selected by all three criteria (marked in B) Averaged temporal gene expression profiles of the genes in the linkage interval of module no. 
5-II (rows) following rapamycin stimulation (columns). Genes showing a specific high expression level at ten minutes following stimulation are marked with arrows and listed in A. Plot (C) demonstrates the agreement between the averaged transcription response of RPB5 and CNS1 (solid lines) and the averaged relative genetic effect pattern in module no. 5-II during time points (x-axis). (D) Shown is a \u2013log P value of transcription factor binding data for the genes in the linkage interval of module no. 5-II. The two transcription factors, DAL80 (top) and GCN4 (bottom), are known as key regulators of the nitrogen and amino acid pathways. A threshold corresponding to the level of binding in known nitrogen-related genes (black) is indicated by a dashed horizontal line. Genes with a similar or higher binding \u2013log P value are listed in A.(EPS)Click here for additional data file.Figure S12 DyVER's predicted two-state model for dynamic genes in yeast following rapamycin treatment. Shown is a table of cluster identifiers (column 1) and their number of genes (column 2). The partition was generated automatically according to DyVER's predicted two-state model. The model for each cluster is shown either as a sequence of 'L' and 'H' states (column 3) or in a cartoon visualization (column 4). The pattern in columns 3 and 4 is shown for increasing time points from left to right. For example, the LLLHHH pattern indicates a high genetic effect only at 30\u201350 minutes after rapamycin treatment. 
(B) An overall distribution (box-plot) of GC values across all genes in the dataset. As expected, the distribution of genomic control values is centered in genomic control \u200a=\u200a1. Plots A and B were generated using a synthetic dataset of 500 genes that were measured at nine time points using single state-transitioning (sustained) pattern with genetic effect size \u200a=\u200a0.5.(EPS)Click here for additional data file.Figure S14Three possible formulations of the DyVER's likelihood ratio test. (A) A table presenting the three formulations (rows); including the name of the approach (column 1), its likelihood ratio formulation and parameters (column 2) and the degrees of freedom that should be used for a \u03c72\u2212 approximation of P values (column 3). In all cases, the null hypothesis is an absence of an effect and the alternative hypothesis is the presence of an effect. The formulation of the DyVER score is specified in line no. 1. The additional formulations I and II are focused only on the B,C) Comparative performance analysis on synthetic data. Scatter plots for the accuracy measure (y-axis) of different methods (color coded), including (i) the five existing approaches (implemented as detailed in Methods) and (ii) three formulations of the DyVER's likelihood ratio tests as specified in A . Results are shown over synthetic genes with a single state-transitioning (sustained) pattern of genetic effects; the genes were measured in different numbers of time points or different effect sizes , as presented in (EPS)Click here for additional data file.Figure S15Performance analysis of the PCA approach on synthetic data. Scatter plots for the accuracy measure (y-axis) of three possible PCA-based methods over synthetic genes with a single state-transitioning (sustained) pattern of genetic effects; the genes were measured in different numbers of time points (x-axis) for genetic effect size 0.5. 
Different line types indicate the results for PC1, PC2 and PC3, respectively. The plot demonstrates that the accuracy attained by the first component is the best among the consecutive components.(EPS)Click here for additional data file.Table S1DyVER's predicted associated genes in yeast following rapamycin treatment. Shown are 145 gene symbols (column 1), their genomic position (column 2), the genomic position of their associated genetic variant (column 3) and whether the association is in cis or in trans (column 4). Column 5 provides information about the predicted two-state model of the association. The timeline (0\u201350 minutes) is ordered from left to right. For example, the LLLHHH pattern indicates a high genetic effect only at 30\u201350 minutes after rapamycin treatment.(DOC)Click here for additional data file.Table S2A comparison between previously reported genetic variants and DyVER's predictions. The table presents the genomic position of all previously reported genetic variants (column 1), their known causal gene (column 2) and the particular condition in which the genetic variant was identified (column 3). DyVER's predictions are presented in columns 4\u20135: column 4 provides the non-linear genes significantly associated with the variant, whereas column 5 indicates the corresponding module number. *Non-significant DyVER score. References for known causal genes: 1Smith and Kruglyak 2008, 2Perlstein et al. 2007, 3Brem et al. 2005, 4Yvert et al. 2003, 5Brem et al. 2002 and Gaisne et al. 1999.(DOC)Click here for additional data file."}
+{"text": "Rift Valley fever virus (RVFV), a negative-stranded RNA virus, is the etiological agent of the vector-borne zoonotic disease, Rift Valley fever (RVF). In both humans and livestock, protective immunity can be achieved through vaccination. Earlier and more recent vaccine trials in cattle and sheep demonstrated a strong neutralizing antibody and total IgG response induced by the RVF vaccine, authentic recombinant MP-12 (arMP-12). From previous work, protective immunity in vaccinated sheep and cattle normally develops from 7 to 21 days after inoculation with arMP-12. While the serology and protective response induced by arMP-12 have been studied, little attention has been paid to the underlying molecular and genetic events occurring prior to the serologic immune response. To address this, we isolated RNA from whole blood of vaccinated calves over a time course of 21 days before and after vaccination with arMP-12. The time course RNAs were sequenced by RNASeq and bioinformatically analyzed. Our results revealed time-dependent activation or repression of numerous gene ontologies and pathways related to the vaccine induced immune response and its regulation. Additional bioinformatic analyses identified a correlative relationship between specific host immune response genes and protective immunity prior to the detection of protective serum neutralizing antibody responses. These results contribute an important proof of concept for identifying molecular and genetic components underlying the immune response to RVF vaccination and protection prior to serologic detection. RVFV is transmitted by mosquitoes of the genera Aedes, Culex, and Anopheles. Aedes has expanded from tropical regions in south China and Indian Ocean Islands to other tropical and temperate zones in all continents, due to the passive transport of viable eggs and subsequent adaptation. RVFV is a segmented, negative-stranded RNA virus and the causative agent of the vector-borne zoonotic disease, Rift Valley fever (RVF).
The initial outbreak of RVF occurred in 1931 in the Rift Valley of Kenya in sheep, cattle and humans. The virus is spread by mosquito vectors, and its biology and outbreak history have been reviewed elsewhere. Vaccine trials demonstrated a strong neutralizing antibody response (PRNT80), and a strong long-term immune response measured by a RVFV antigen-specific IgG enzyme-linked immunosorbent assay (ELISA), induced by authentic recombinant MP-12 (arMP-12) in both pregnant sheep and cattle. While susceptibility of na\u00efve individuals to RVFV infection is high, protection can be achieved by humoral immune responses or colostrum. Recent studies assessing vaccine protection have gone beyond the traditional serologic analysis and employed transcriptomic sequencing techniques and bioinformatic analyses to better characterize and understand the molecular and genetic underpinnings of the vaccine response. A seminal work from Querec et al. assessed the transcriptome of CD8+ T and B cells from humans vaccinated with the Yellow Fever vaccine YF-17D by microarray and identified a gene expression pattern correlating to seroconversion. In cattle vaccinated against Mycobacterium bovis, protection was highly correlated with IFN-gamma and IL-22 expression. Following these approaches, we analyzed the global transcriptome of PBMCs from cattle vaccinated with arMP-12 against RVFV before and after vaccination, followed by a bioinformatic correlative analysis between perturbed cellular pathways and serologic analysis for protective immunity. To more completely utilize the data, we analyzed the resulting RNASeq sequences by: 1) a standard day-by-day analysis oriented from the day of vaccination, or 2) a time-shifted orientation in which the time points were oriented around the day that each calf had protective 1:80 serum neutralizing antibody titers.
For both of these orientations, sequence data from each time point were analyzed by standard differential gene expression analyses and by dynamic Bayesian analyses. To further our understanding of the genetic and cellular mechanisms underlying vaccination and seroconversion in arMP-12 vaccinated livestock, we hypothesized that bioinformatic analysis of the RNASeq-sequenced transcriptome of peripheral blood mononuclear cells (PBMCs) would identify gene transcripts highly correlated with protective seroconversion as measured by PRNT80. Retrospective RNASeq analysis was performed on PBMCs taken from five healthy 4\u20136 month old Bos taurus steer calves as described previously. The calves were vaccinated with 10^5 PFU of authentic recombinant MP-12 (arMP-12) virus in 1.0 ml of phosphate buffered saline (Sigma). Serum neutralizing antibody titers (PRNT80) were determined as previously described. The PRNT80 values used in this work were calculated and reported to assess the immunogenicity of the arMP-12 vaccine to produce neutralizing antibodies in cattle. Whole blood was collected from vaccinated cattle at days 0\u20137, 10, 14 and 21 and immediately mixed at a 3:5 (blood:buffer) ratio with RNALater (Invitrogen). Samples were stored at \u201320\u00b0C until processing. To purify RNA, frozen samples were thawed at 37\u00b0C, centrifuged at 2000 \u00d7 g, and the supernatant discarded. The remaining cell pellets were subjected to two rounds of treatment with Red Blood Cell Lysing Buffer per the manufacturer\u2019s instructions (Sigma). RNA from the resulting cell pellet was initially extracted with Trizol (Invitrogen) followed by treatment with the RNeasy Kit with on-column DNase digestion (Qiagen). Purified RNA was quantified by spectrophotometry on a NanoDrop and by electrophoresis with a BioAnalyzer 2100.
Only samples with an RNA Integrity Number (RIN) greater than 8.0 were used for RNASeq analysis. All RNASeq sequencing was performed at the Texas A&M Genetics and Bioinformatics Center following the manufacturer\u2019s instructions. cDNA was generated from RNA samples of sufficient quantity and quality with the Ovation RNA-Seq System, sheared by focused ultrasound and made into barcoded (multiplexed) libraries with the Encore Rapid Library System. Samples were quantified, diluted, pooled, re-analyzed with a Bioanalyzer and loaded onto a Genome Analyzer II high-throughput sequencer. Sequence data (fastq format) were transferred to Seralogix LLC for bioinformatic analysis. All of the original data are available in the Gene Expression Omnibus at the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov/geo/), Accession #GSE71417. The gene reads count table was imported into a database for further processing in a computational pipeline where DESeq was employed. To identify differentially expressed genes, GO terms, or pathways correlated to the PRNT80 response, we related early time points to later time points at the onset of protective immunity. The SWC procedure measures the temporal trajectory correlation of pathway or GO term Bayesian z-scores and their associated gene Bayesian z-scores, for an incremental series of three time points, against a fixed PRNT80 temporal response signature that is also comprised of three time points surrounding the PRNT80 serum neutralization point. We elected to use three time points as our sliding window to better capture the early temporal dynamics of pathway/GO events that may have a direct correlation to the later serum neutralization temporal signature. Correlation coefficients (R) were computed for sets of overlapping successive pathway, GO and gene scores across all time points, represented by the sliding time window sets, against the fixed PRNT80 temporal response at times -1, 0, 1 pre-serum neutralization, which represents the onset of protective immunity.
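The windowing just described can be sketched as follows; this is an illustrative Python reimplementation (the published procedure was written in MATLAB), with a window width of three time points and our own function names:

```python
from math import sqrt

def pearson_r(x, y):
    # Plain Pearson correlation coefficient of two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sliding_window_correlation(trajectory, signature, width=3):
    """Correlate every length-`width` window of a pathway/GO/gene
    z-score trajectory against the fixed PRNT80 response signature
    (itself `width` time points long). Returns one R per window."""
    return [pearson_r(trajectory[i:i + width], signature)
            for i in range(len(trajectory) - width + 1)]
```

Each returned R corresponds to one sliding time-window set; strongly positive or negative values flag candidate early correlates of the later serum-neutralization signature.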
Additional details of the sliding window correlation procedure and the definition of pre-serum neutralization time points are described in the supporting information. We developed the sliding window correlation (SWC) procedure (written in MATLAB) to identify correlates of the PRNT80 protective immune response from the highly correlated gene expression found in a selected set of corresponding SWC-correlated pathways and GO terms. Ideally, we sought to identify early patterns in the pathway and transcriptomic host responses that can be used to predict protective immunity. We therefore developed a dynamic Bayesian network (DBN) modeling approach; this is a supervised learning method that can uniquely deal with time-course data and is described in greater detail in the supporting information. Healthy, 4\u20136 month old Bos taurus heifer and steer calves were used in the present study as described previously. RNA from whole blood was isolated from cattle prior to and following vaccination. Plaque reduction neutralization titers (PRNT80) serve as an accurate indicator of protection against wild-type RVFV and are commonly used in determining RVFV vaccine effectiveness. During the time course collection of our samples, we sampled most frequently on the days immediately following vaccination, as well as at regular intervals out to 3 weeks post vaccination. This repeated sampling early in the experiment allowed for a more detailed analysis of the gene expression profiles present just after vaccination, in contrast to analysis of the transcriptome from a single time point post-inoculation, and provided for improved orientation of transcriptome data to previously reported serologic data. Differential gene expression was determined in conjunction with DESeq and resulted in a time-course profile of significantly perturbed up- and down-regulated genes.
Differentially perturbed pathways included the phosphatidylinositol signaling system, apoptosis, the Wnt signaling pathway, the VEGF signaling pathway, the Jak-STAT signaling pathway and the ErbB signaling pathway. The host senses the vaccine viral RNA and initiates the innate antiviral immune response, an essential precursor for launching an effective adaptive immune response. Viral life cycle and viral transcription are highly repressed GO terms, with 72 genes being significantly down-regulated in the early phase. The majority of these genes encode ribosomal proteins. The viral process GO terms are associated with processes by which a viral gene is converted into a mature gene product or products (proteins or RNA). This includes viral transcription, processing to produce a mature RNA product, and viral translation, indicating viral replication. The dominant genes involved in these processes included 54 genes encoding a family of ribosomal proteins. These important GO terms and associated genes are listed in the supporting information. The DBGGA process scored 4354 biological process GO terms (restricted to GO terms having between 5\u2013300 observed genes within a given term). Within this set of GO terms, there were 6818 uniquely scored genes. The summary results of activated and repressed GO terms and their associated gene sets are shown in the supporting information. Repressed GO terms in the early phase included antibacterial humoral response, positive regulation of B cell receptor signaling, positive regulation of macrophage cytokine production, and negative regulation of MHC class II biosynthetic processes. Interestingly, of these repressed GO terms only the positive regulation of B cell receptor signaling becomes significantly activated in the later phase.
Also in the early phase, we observed the strong activation of such biological processes as interleukin-1beta, -17, and -18 production, regulation of Fc receptor mediated stimulatory signaling, intrinsic apoptotic signaling by p53 class mediator, negative regulation of leukocyte mediated cytotoxicity, regulation of cytokine production, clathrin-mediated endocytosis, and positive regulation of natural killer cell differentiation. In the later phase, activated processes included T cell activation, differentiation, and receptor signaling, regulation of T-helper cell differentiation, B cell activation, regulation and receptor signaling, regulation of memory T cell differentiation, T-cell differentiation, T-helper 1 type immune response, immunoglobulin mediated immune response, and regulation of T cell cytokine production. The peak of gene perturbation for calves #74 and 76 was D10 after inoculation. The time-shifted data showed a steady increase of both up- and down-regulated genes from \u20136 pSN until \u20132 pSN, after which there was a rapid drop in the number of perturbed genes for the remaining time points. The summary results of activated and repressed pathways and the up-regulated and down-regulated genes meeting a |Bayesian z-score| > 2.24 are shown in the supporting information. Phosphatidylinositol signaling was significantly perturbed, as were co-stimulatory receptors, cytokine receptors, and chemokine receptors. The majority of significantly modulated pathways was repressed, especially in the Early Phase.
The phosphatidylinositol signaling system, Jak-STAT signaling and ErbB signaling pathways were identified as up-regulated by both analyses, with response to interferon-beta (GO:0035456), negative regulation of viral genome replication (GO:0045071), positive regulation of T cell cytokine production (GO:0002726), and positive regulation of interleukin-1 beta production (GO:0032731) reversing and becoming repressed at later time points. As before, the DBGGA process scored 4354 biological process GO terms (restricted to GO terms having between 5\u2013300 observed genes within the term). Within this set of GO terms, there were 6818 uniquely scored genes. The summary results of activated and repressed GO term gene sets, only for GO terms of root \u201cbiological_process\u201d, are provided in the supporting information. This emphasis on pathways associated with the Type-I interferons may serve as an important factor in distinguishing vaccinated and infected animals. During a vaccine trial with mutagenized or gene deletion strains of the virus in mice lacking interferon (IFN)\u2013alpha/beta or \u2013gamma receptors, vaccine strains elicited strong IFN\u2013alpha and \u2013beta responses while the WT control strain failed to elicit the same response. In comparing the DBGGA GO term analysis between standard time and time-shifted data, there is a typical down-regulation of GO terms and genes, with the most perturbation at D10 for the standard time orientation. With the time-shifted orientation, down-regulated terms included viral transcription, several pathways involved with interferon alpha or beta production, and interleukin-1 production and response. As before, this analysis resulted in a large cohort of perturbed pathways. Under the time-shifted protocol, the overall number of perturbed pathways was decreased in both the Early and Late Phases, but immunologically relevant pathways were still detected.
Of note was the down-regulation of viral transcription, protein processing, and life cycle pathways and the up-regulation of pathways associated with interleukin-1 and the response to interferon alpha. In the Early Phase, the standard time data identified numerous perturbed pathways associated with the immune response. We chose the PRNT80 time points of days \u20131, 0, and 1 pSN to represent the onset and establishment of protective immunity, and applied the SWC approach to pathways, GO terms, and genes (see the corresponding figures). The SWC analysis included only the canonical pathways, with the exclusion of metabolic pathways. Likewise for the Gene Ontology GO terms, we applied the SWC technique to obtain the terms most highly correlated with PRNT80; the results are listed by the sliding window time points. Highly correlated terms included viral life cycle (GO:0019058) and viral release from host cell (GO:0019076). Additionally, we observed early indicators of the host immune response with the correlation of PRNT80 with biological processes such as cellular response to interleukin-4 (GO:0071353), interleukin-1 beta production (GO:0032611), negative regulation of B cell activation (GO:0050869), and type I interferon signaling pathway (GO:0060337). The genes correlated in the earliest windows were associated with the viral life cycle, SRP-dependent co-translational protein targeting to membrane, viral protein processing, viral release from host cell, and the ribosome pathway; accordingly, only a few of these gene types were represented in the table. Interestingly, the ribosomal genes were among the highest SWC-correlated relationships occurring at the earliest time window. Also of note, many of the genes in the first two time windows are negatively correlated, but interestingly, the genes PIK3R5, SOCS5, PLCE1, and GHR were positively correlated. The majority of genes in the remaining time windows was positively correlated.
It was also observed that the later time windows had a reduced number of correlated genes and a weaker significance of correlation. To narrow the search space for genes that were candidate biomarkers of protective immunity, we selected the sets of genes for pathways and GO terms having significant correlations from 19 relevant pathways and processes. The SWC correlates to the PRNT80 response were insufficient on their own for building a predictive model, since such a model should be constructed based on biologically relevant relationships and cascades of events. There were other relationships both upstream and downstream from these genes that may have critical roles in defining transcriptional events leading to a protective immune response. It was this set of biologically related genes of the innate immune response that was of prime interest, and hence was used to create a DBN model. We employed a two-step approach to constructing a predictive DBN. The first step employed a novel approach to learning a gene regulatory network (GRN) structure that is described in more detail in the supporting information. The SWC analysis identified a set of genes associated with certain pathways and GO terms correlated to PRNT80. These pathways included the Toll-like receptor, Jak-STAT signaling, RIG-I-like signaling, Fc gamma R-mediated phagocytosis, Calcium signaling, and Antigen processing and presentation pathways, selected ribosomal genes from the viral life cycle, and genes with type I interferon activation. A set of genes was selected using an a priori probability distribution method for combining prior biological knowledge from multiple sources. Expression of specific genes having learned relationships downstream of TLR4 was found to include MYD88, MAP2K7, IL1B, FOS, and JUN, where FOS and JUN form the well-known transcription factor complex AP-1.
TLR-mediated signaling pathways predominantly signal through interferon regulatory factors (IRF) as well as Nuclear Factor-kappa B (NFKB) and AP-1, eliciting the induction of the interferon type-1 response and the expression of inflammatory cytokines. In support of interferon activation, GRN learning identified IFNAR1/2 and IFNGR1/2 as influential regulators in the GRN model. These genes encode proteins that function as antiviral factors. We found IFNAR2 to be a positive SWC correlate to PRNT80, and through our GRN learning, we found evidence supporting an activation relationship with STAT1 and a STAT1 relationship with the cytokines CISH and SOCS2. The GRN method used a Bayesian model consensus scheme employing the Markov Chain Monte Carlo (MCMC) Metropolis-Hastings algorithm. It incorporates a gene expression data-derived proposal matrix and prior biological knowledge from multiple sources that included KEGG, REACTOME, BioGRID, DIP, MINT, GO, and JASPAR. IL12RB1, JAK2 and SOCS5 were positive SWC correlates to the PRNT80 response. Also found by GRN learning was the gene SYK, which is critical to innate and adaptive immunity, having strong evidence of regulating the downstream gene PIK3R3, a positive SWC correlate to the PRNT80 response. Also, the RIG-I-like receptor related genes, IFIH1 and DDX58, were identified by the GRN model as strong regulators. However, these genes were not found to be SWC correlates, but were significantly expressed and indirectly linked to other SWC correlate genes such as TBKBP1 and IL8. The IFIH1 and DDX58 genes are known to encode proteins of the RIG-I-like receptor family that function as pattern recognition receptors sensing viral nucleic acids or other viral/vaccine products. In addition, we found other type I interferon associated GRN model genes that included MYD88, JAK2, SOCS2/5, CCL5, IL8, IL15, IL12RB1, and IRF9.
The genes MYD88, SOCS2, IL8 and IL15 were negative SWC correlates to the PRNT80 response. The genes ERBB4 and ADORA2B were found to be important receptors and perhaps novel to the correlated response to the RVF vaccine. ERBB4 encodes a protein that is a receptor for neuregulins and EGF family members and is known to regulate cell proliferation, differentiation, migration and apoptosis. ADORA2B encodes a G-protein receptor and is known to inhibit monocyte and macrophage functions and stimulate mast cell mediator release. GRN learning found ERBB4 to have strong relationships with PLCB1 and with PLCE1, which is a positive SWC correlate. PLCE1 belongs to the phospholipase family involved in the cascade of intracellular responses that result in cell growth and differentiation. GRN learning associated ADORA2B with the downstream genes GNAL, ADCY7, PRKX and ATP2B2, which are positive SWC correlates to the PRNT80 response, suggesting a novel role as correlates to protective immunity. Vaccine viral replication and expression were strongly indicated in the early phase post immunization. Numerous genes encoding ribosomal proteins were SWC correlates to PRNT80. We chose four of the most highly correlated to include in the DBN model, namely RPS14, RPL14, RPL29 and RPLP2. Other selected genes included CD8B, a CD8 antigen found on most T lymphocytes, and CTSB, a gene encoding a lysosomal cysteine proteinase believed to participate in intracellular degradation of proteins. The description of these genes and their function is provided in the supporting information. In the later times, where the time begins to overlap with the onset of the serologic protective response, there were fewer SWC correlates to PRNT80 available as data time points for training the model to be predictive for these later PRNT80 time point responses.
The DBN network is illustrated in the supporting information; the PRNT80 node represents a continuous variable having a Gaussian distribution. Employing the majority of the genes described above, the DBN model was constructed to represent three early phase time points pSN, and included the later PRNT80 response at later time points. With a limited sample size, we could only train and test the predictive response with gene expression evidence from five biological replicates at three distinct time points. Cross-validation and blind prediction were not feasible and are planned for future research validation efforts. We implemented a t-test statistic to test when an inferred PRNT80 value was determined to be statistically outside the experimental (true) mean population (p \u2264 0.05) for use in sensitivity analysis. Expression data for the true gene sets were used for the true positive test cases, which consisted of individual time point data sets of the three time points and five biological replicates. A randomly selected set of gene expressions was used to create a test set representing the true negative test cases. The predicted PRNT80 plot is provided in the supporting information. The goal of the DBN model was to determine if gene expression evidence at earlier time points could robustly predict/infer a protective PRNT80. This approach has revealed transcriptomic signatures underlying the host response to vaccination with arMP-12, as well as a distinct set of genes that would be expected to be predictive of a protective immunological response. Understanding the mechanisms underlying the development of protective immunity against RVFV, and RVF vaccination, will be important for the development of more effective vaccines against RVF. Host genome-wide transcript abundance and signaling pathway profiling provide a means to identify changes in gene expression and biological function occurring immediately following vaccination that may play a role in the development of protective immunity.
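The t-test screen described above (deciding when an inferred PRNT80 value falls statistically outside the experimental mean at p \u2264 0.05) can be sketched as a one-sample t statistic compared against a fixed critical value; with five biological replicates (df = 4), the two-tailed 5% critical value is approximately 2.776. The helper below is an illustrative reading of the procedure, not the authors' code:

```python
from math import sqrt
from statistics import mean, stdev

# Two-tailed critical value of Student's t for df = 4, alpha = 0.05.
T_CRIT_DF4 = 2.776

def outside_population(inferred_value, replicate_values, t_crit=T_CRIT_DF4):
    """One-sample t test sketch: is `inferred_value` statistically
    outside the experimental (true) mean population estimated from the
    biological replicates? Returns True when |t| exceeds t_crit."""
    n = len(replicate_values)
    m = mean(replicate_values)
    s = stdev(replicate_values)          # sample standard deviation
    t = (inferred_value - m) / (s / sqrt(n))
    return abs(t) > t_crit
```

In the sensitivity analysis described here, true-positive cases would be expected to fall inside the population and randomly generated negatives outside it.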
Here, we tested the hypothesis that bioinformatic analysis of the deeply sequenced peripheral blood mononuclear cell (PBMC) transcriptome would identify gene transcripts highly correlated with protective seroconversion as measured by PRNT80. There were 1587 unique genes differentially regulated following immunization by arMP-12, of which 678 were uniquely up-regulated compared to 909 uniquely down-regulated, consistent with other previously reported gene expression results in the mouse model infected with RVFV. Immediately following vaccine inoculation, it appears that the vaccine viral components have successfully invaded the host cells, triggering the induction of a family of pleiotropic cytokines known as the IFNs (interferons), as evidenced by the activation of type I interferon signaling and the significant perturbation of the interferon stimulated genes (ISGs) IFI16, MX1, IFIH1, IFI44, IFNAR2, IFITM1/2, IRF2BP2, IL12RB1, IRF1, IRF3 and IRF9 within the first 5 days post immunization. IFN interactions with their receptors induce a set of IFN-stimulated genes that inhibit viral replication and increase the lytic potential of natural killer (NK) cells. At later time points, we observed the triggering/activation of T cells and B cells, suggesting that cell-mediated immunity was beginning as early as day 3 post vaccination. A number of genes were expressed that are associated with T cell activation and signaling, including PIK3R1, PIK3R3, CHUK, ICOS, NFkappaB2, RUNX2, AKT2, CCR2, CD48, RELA, STAT5A, TNFSF14, TLR4, CD72, ITFG2, NR1D1, and PTPN6. Interestingly, the majority of these genes were significantly modulated at days 5 and 6 and became less expressed at day 7 and beyond. Complementary to our work, a recent study using a deletion mutant of arMP-12, arMP12-deltaNSm21/384, showed rapid progression of protective antibodies after vaccination and after challenge with the wild-type strain ZH501.
This rapid progression is consistent with the PRNT80 onset of protective immunity. The primary objective of this retrospective study was to identify a set of genes for predicting protective immunity. The selection of the genes was based on a systems biology top-down approach in which we narrowed our focus to genes within pathways and GO terms that were significantly perturbed and had SWC correlates to the PRNT80 response. Novel in this approach was the fact that we employed a sliding time window to find trajectory responses at earlier time points that have a high positive or negative correlation to the PRNT80 response trajectory at the time points just prior to and at the point of protective seroconversion. An additional novel step was then employed to learn the gene regulatory network (GRN) from the selected set of genes. This step allowed us to further identify genes that had strong regulatory relationships among other upstream and downstream genes. The SWC correlates and the GRN relationships were used to select the genes used in constructing and training the predictive DBN model. The model performance was quite good: when tested with gene expression profiles from Time -5, -4, or -3 pSN, the model was successful in predicting a future PRNT80 value at Time \u20131, 0, and 1 pSN. In summary, the model had an overall sensitivity of 93% and specificity of 100%. While the limited number of replicates restricts our ability to fully test the robustness of this model, it does allow us to assess our approach to developing models predicting protective immunity.
Finally, the combination of SWC correlates to PRNT80 and the learned GRN relationships defined the final gene set for the predictive DBN model. Supporting information files S1\u2013S20 Tables (XLSX) and S1\u2013S2 Texts (DOCX) are available as additional data files."}
+{"text": "The regulation of gene expression by transcription factors is a key determinant of cellular phenotypes. Deciphering genome-wide networks that capture which transcription factors regulate which genes is one of the major efforts towards understanding and accurate modeling of living systems. However, reverse-engineering the network from gene expression profiles remains a challenge, because the data are noisy, high dimensional and sparse, and the regulation is often obscured by indirect connections. We introduce a gene regulatory network inference algorithm, ENNET, which reverse-engineers networks of transcriptional regulation from a variety of expression profiles with superior accuracy compared to state-of-the-art methods. The proposed method relies on the boosting of regression stumps combined with a relative variable importance measure for the initial scoring of transcription factors with respect to each gene. Then, we propose a technique for using a distribution of the initial scores and information about knockouts to refine the predictions. We evaluated the proposed method on the DREAM3, DREAM4 and DREAM5 data sets and achieved higher accuracy than the winners of those competitions and other established methods. Superior accuracy achieved on the three different benchmark data sets shows that ENNET is a top contender in the task of network inference. It is a versatile method that uses information about which gene was knocked out in which experiment if it is available, but remains the top performer even without such information. ENNET is available for download from https://github.com/slawekj/ennet under the GNU GPLv3 license. Regulation of gene expression is a key driver of adaptation of living systems to changes in the environment and to external stimuli. Abnormalities in this highly coordinated process underlie many pathologies.
At the transcription level, the control of the amount of mRNA transcripts involves epigenetic factors such as DNA methylation and, in eukaryotes, chromatin remodeling. But the key role in both prokaryotes and eukaryotes is played by transcription factors (TFs), that is, proteins that can bind to DNA in the regulatory regions of specific genes and act as repressors or inducers of their expression. Many interactions between transcription factors and the genes they regulate have been discovered through traditional molecular biology experiments. With the introduction of high-throughput experimental techniques for measuring gene expression, such as DNA microarrays and RNA-Seq, the goal moved to reverse-engineering genome-wide gene regulatory networks (GRNs). High throughput techniques allow for collecting genome-wide snapshots of gene expression across different experiments, such as diverse treatments or other perturbations to cells. One group of existing methods describes a GRN as a system of ordinary differential equations: the rate of change in expression of a transcript is given by a function of the concentration levels of transcription factors that regulate it. Network inference then includes two steps: a selection of a model and an estimation of its parameters. Popular models imply linear functions a priori; Bayesian formulations have also been proposed. Other approaches are motivated from statistics and information theory. TwixTwir uses a double t-test to score transcriptional regulations. The null-mutant z-score algorithm scores interactions from knockout expression data, and the \u03b72 method and its improvements offer further alternatives; another method focuses on eliminating indirect connections. Recently, machine-learning theory has been used to formulate the network inference problem as a series of supervised gene selection procedures, where each gene in turn is designated as the target output.
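The supervised per-gene formulation can be made concrete with a toy version of boosting regression stumps, the base learner underlying the proposed method: each round fits the stump that best reduces the squared-error residual, and each feature (candidate TF) accumulates the error reduction from stumps splitting on it as a relative importance score. This sketch uses our own simplifications (squared-error loss, exhaustive threshold search, no subsampling) and is not the ENNET implementation:

```python
def boost_stumps_importance(X, y, n_rounds=50, shrinkage=0.1):
    """Gradient boosting of regression stumps on rows X and target y.
    Returns, per feature (candidate TF), the fraction of total
    squared-error reduction attributable to stumps splitting on it --
    a simple relative variable-importance score."""
    n, p = len(X), len(X[0])
    pred = [0.0] * n
    importance = [0.0] * p
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        sse_before = sum(r * r for r in resid)
        best = None  # (gain, feature, threshold, left_mean, right_mean)
        for j in range(p):
            values = sorted(set(row[j] for row in X))
            for t in values[:-1]:
                left = [r for row, r in zip(X, resid) if row[j] <= t]
                right = [r for row, r in zip(X, resid) if row[j] > t]
                ml = sum(left) / len(left)
                mr = sum(right) / len(right)
                sse_after = (sum((r - ml) ** 2 for r in left)
                             + sum((r - mr) ** 2 for r in right))
                gain = sse_before - sse_after
                if best is None or gain > best[0]:
                    best = (gain, j, t, ml, mr)
        gain, j, t, ml, mr = best
        importance[j] += gain
        # Shrunken update of the additive model with the chosen stump.
        for i, row in enumerate(X):
            pred[i] += shrinkage * (ml if row[j] <= t else mr)
    total = sum(importance) or 1.0
    return [w / total for w in importance]
```

Run per target gene, such importance scores yield one row of candidate-regulator weights; the full method additionally rescales them and folds in knockout information.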
One example is MRNET, which applies the maximum relevance/minimum redundancy feature selection principle. In this paper, we propose a method that combines gradient boosting with regression stumps, augmented with statistical re-estimation procedures for prioritizing a selected subset of edges based on results from the machine-learning models. We evaluated our method on the DREAM3, DREAM4 and DREAM5 network inference data sets, and achieved results that in all cases were better than the currently available methods. The proposed algorithm returns a directed graph of regulatory interactions between P genes in the form of a weighted adjacency matrix V, where vi,j represents the regulation of gene j by gene i. As an input, it takes gene expression data from a set of experiments, together with the meta-data describing the conditions of the experiments, including which genes were knocked out. Usually, the raw expression data need to be pre-processed before any inference method can be applied to reverse-engineer a GRN. Pre-processing has a range of meanings; here it is regarded as a process of reducing variations or artifacts which are not of biological origin. It is especially important when the expression is measured with multiple high-density microarrays. The network inference process relies heavily on the type of expression data provided as an input. Two main groups of expression profiles are: those with known, and those with unknown, initial perturbation state of the expression of genes in the underlying network of regulatory interactions. For example, knockout and knockdown data are provided with additional meta-data describing which genes were initially perturbed in each experiment. On the other hand, multifactorial and time series data are usually expression profiles of an unknown initial state of genes. 
Wildtype, knockout, knockdown, and multifactorial data describe the expression of initially perturbed genes, which are however in a steady state at the time of measurement, whereas time series data describe the dynamics of the expression levels of initially perturbed genes. The types of data available in popular benchmark data sets are summarized in Table\u00a0. The variability of possible input scenarios poses a problem of representing and analyzing expression data. Here, we operate on an N\u00d7P expression matrix E, where ei,j is the expression value of the j-th gene in the i-th sample. Columns of matrix E correspond to genes, rows correspond to experiments. We also define a binary perturbation matrix K, where ki,j is a binary value corresponding to the j-th gene in the i-th sample, just like in the matrix E. If ki,j is equal to 1, it means that the j-th gene is known to be initially perturbed, for example knocked out, in the i-th experiment. Otherwise ki,j is equal to 0. If no information is available about knockouts, all values are set to 0. We decompose the problem of inferring the network of regulatory interactions targeting all P genes into P independent subproblems. In each subproblem incoming edges from transcription factors to a single gene transcript are discovered. For the k-th decomposed subproblem we create a target expression vector Yk and a feature expression matrix Xk\u2212. Columns of the Xk\u2212 matrix constitute a set of possible transcription factors. Vector Yk corresponds to the expression of the transcript which is possibly regulated by transcription factors from Xk\u2212. In a single gene selection problem we decide which TFs contribute to the target gene expression across all the valid experiments. Columns of Xk\u2212 correspond to all the possible TFs, but if a target gene k is also a transcription factor, it is excluded from Xk\u2212. We do not consider a situation in which a transcription factor would have a regulatory interaction with itself. When building the target vector Yk corresponding to the k-th target gene, k\u2208{1,...,P}, we consider all the experiments valid except the ones in which the k-th gene was initially perturbed, as specified in the perturbation matrix K. We reason that the expression value of the k-th gene in those experiments is not determined by its TFs, but by the external perturbation. Each row in the Yk vector is aligned with a corresponding row in the Xk\u2212 matrix. In order to assess all the possible interactions we need to solve a gene selection problem for each target gene. For example, if a regulatory network consists of four genes (P=4), we need to solve four gene selection problems. In the k-th problem, k\u2208{1,2,3,4}, we find which TFs regulate the k-th target gene. In other words, we calculate the k-th column of the output adjacency matrix V. Once the target gene expression vector Yk and the TF expression matrix Xk\u2212 are created for each gene k, we solve each k-th gene selection problem independently, in the following way. We search for the subset of columns in Xk\u2212 that are related to the target vector Yk by an unknown function fk, as shown in Equation 1: Yk = fk(Xk\u2212) + \u03b5k, where \u03b5k is a random noise. A function fk represents a pattern of regulatory interactions that drive the expression of the k-th gene. We want fk to rely only on a small number of genes acting as transcription factors, those that are the true regulators of gene k. Essentially, this is a feature selection or a gene selection task: the objective is to model Yk with an optimal small set of important predictor variables, i.e., a subset of columns of the Xk\u2212 matrix. A more relaxed objective of the gene selection is the variable ranking, where the relative relevance for all input columns of the Xk\u2212 matrix is obtained with respect to the target vector Yk. 
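The construction of the per-gene subproblems described above can be made concrete in a few lines of NumPy; this is an illustrative sketch only (function and argument names are mine, not from the ENNET implementation):

```python
import numpy as np

def make_subproblem(E, K, tf_idx, k):
    """Build the target vector Yk and feature matrix Xk- for target gene k.

    E      : (N, P) expression matrix, rows = experiments, columns = genes
    K      : (N, P) binary perturbation matrix (1 = gene knocked out)
    tf_idx : indices of genes that are known transcription factors
    k      : index of the target gene
    """
    # Drop experiments in which gene k itself was perturbed: its level
    # there reflects the external perturbation, not its regulators.
    valid = K[:, k] == 0
    # A TF is never considered its own regulator.
    feats = [j for j in tf_idx if j != k]
    Yk = E[valid, k]
    Xk = E[np.ix_(np.where(valid)[0], feats)]
    return Yk, Xk, feats
```

Solving all P subproblems then amounts to calling this helper for every `k` and binding the resulting score vectors into the adjacency matrix column-wise.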
The higher a specific column is in that ranking, the higher the confidence that a corresponding TF is in a regulatory interaction with the target gene k. Our solution to the variable ranking involves ensemble learning. We use an iterative regression method, which in each iteration chooses one transcription factor based on an optimality criterion, and adds it to the non-linear regression ensemble. The main body of our method, presented in Figure\u00a0, initializes f0 to be an optimal constant model, without selecting any transcription factor. In other words, f0 is initialized to an average of Yk. At each next t-th step the algorithm creates an updated model ft, by fitting a base learner ht and adding it to the previous model ft\u22121. The base learner is fitted to a sample of pseudo-residuals, with respect to a sample of transcription factors, and thus is expected to reduce the error of the model. Pseudo-residuals are re-calculated at the beginning of each iteration with respect to the current approximation ft. As a base learner, we use regression stumps, which select a single TF that best fits the pseudo-residuals. A regression stump ht(x) partitions the expression values x of a candidate TF into two disjoint regions Rt1 and Rt2, with constant responses \u03b3t1 and \u03b3t2, respectively, for those regions, as shown in Equation 2: ht(x) = \u03b3t1I(x\u2208Rt1) + \u03b3t2I(x\u2208Rt2), where I is the indicator function returning the numerical 1 for the logical true, and the numerical 0 for the logical false. Regions Rt1, Rt2 are induced such that the least-squares improvement criterion, shown in Equation 3, is maximized: it2 = wt1wt2/(wt1+wt2)\u00b7(\u03b3t1\u2212\u03b3t2)2, where wt1, wt2 are proportional to the number of observations in regions Rt1, Rt2 respectively, and \u03b3t1, \u03b3t2 are the corresponding response means. That is, \u03b3t1 is the average of the values from the vector of pseudo-residuals for those samples where an expression of the chosen TF falls into the region Rt1. The value of \u03b3t2 is defined in an analogous way. The averages \u03b3t1 and \u03b3t2 are used as the regression output values for regions Rt1 and Rt2, respectively, as shown in Equation 2. The criterion in Equation 3 is evaluated for each TF, and the transcription factor with the highest improvement is selected. In each t-th step, we only use a random portion of rows and columns of Xk\u2212, sampled according to the observation sampling rate ss and the TF sampling rate sf. The procedure outlined above creates a non-linear regression model of the target gene expression based on the expression of transcription factors. However, in the network inference, we are interested not in the regression model as a whole, but only in the selected transcription factors. In each t-th step of the ENNET algorithm, only one TF is selected as the optimal predictor. The details of the regression model can be used to rank the selected TFs by their importance. Specifically, if a transcription factor \u03c6t is selected in an iteration t, its improvement it2 is recorded as the importance score of the \u03c6t-th TF. If the same TF is selected multiple times at different iterations, its final importance score is a sum of the individual scores. In the training of the regression model, the parameter \u03bd, known as a shrinkage factor, is used to scale the contribution of each base learner by a factor \u03bd\u2208(0,1] when it is added to the current approximation. In other words, \u03bd controls the learning rate of the boosting procedure. Shrinkage techniques are also commonly used in neural networks. Smaller values of \u03bd result in a larger training risk for the same number of iterations T. However, it has been found that smaller values of \u03bd reduce the test error, and require correspondingly larger values of T, which results in a higher computational overhead. There is a trade-off between these two parameters. Once the solutions of the independent gene selection problems are calculated, we compose the adjacency matrix V representing a graph of inferred regulatory interactions. 
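The boosting loop just described, with the least-squares improvement criterion, the sampling rates ss and sf, the shrinkage \u03bd, and the accumulation of importance scores, can be sketched as below. This is not the authors' implementation: it is a compact illustration assuming squared loss, and the naive split search is quadratic per column rather than the O(N)-per-stump search on pre-sorted data that the text reports.

```python
import numpy as np

def ennet_like_rank(Y, X, T=200, ss=1.0, sf=0.3, nu=0.001, rng=None):
    """Rank candidate TFs (columns of X) for one target gene by boosting
    regression stumps on pseudo-residuals; returns an importance vector."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    pred = np.full(n, Y.mean())            # f0: optimal constant model
    importance = np.zeros(p)
    for _ in range(T):
        rows = rng.choice(n, size=int(ss * n), replace=True)   # bootstrap rows
        cols = rng.choice(p, size=max(1, int(sf * p)), replace=False)
        r = Y[rows] - pred[rows]           # pseudo-residuals (squared loss)
        best = (0.0, None, 0.0, 0.0, 0.0)  # (improvement, col, split, g1, g2)
        for j in cols:
            x = X[rows, j]
            for s in np.unique(x)[:-1]:    # candidate split points
                left = x <= s
                w1, w2 = left.sum(), (~left).sum()
                if w1 == 0 or w2 == 0:
                    continue
                g1, g2 = r[left].mean(), r[~left].mean()
                # least-squares improvement: w1*w2/(w1+w2) * (g1-g2)^2
                imp = w1 * w2 / (w1 + w2) * (g1 - g2) ** 2
                if imp > best[0]:
                    best = (imp, j, s, g1, g2)
        imp, j, s, g1, g2 = best
        if j is None:
            break
        importance[j] += imp               # accumulate score of the chosen TF
        pred += nu * np.where(X[:, j] <= s, g1, g2)  # shrunk stump update
    return importance
```

With a strong single regulator in the data, the importance vector concentrates on that column, which is exactly the ranking behavior the inference step relies on.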
Each of the solutions constitutes a single column-vector, therefore we obtain the adjacency matrix V by binding all the partial solutions column-wise. Note that the V matrix is built column-wise, i.e., a single column of V contains the relative importance scores of all the transcription factors, averaged over all the base learners, with respect to a single target transcript. On the other hand, a single row of V contains the relative importance scores of one transcription factor with respect to all the target transcripts, calculated independently in the different subproblems of the proposed inference method. Then we apply a re-evaluation algorithm to achieve an improved final result. The first step does not require any additional data other than the previously calculated adjacency matrix V. It exploits the variance of edge scores in the rows of V, i.e., edges outgoing from a single transcription factor, as a measure of the effect of transcriptional regulation. We score transcription factors based on their effects on multiple targets. We assume that the effect of transcriptional regulation on a directly regulated transcript is stronger than the effect of the regulation on indirectly regulated transcripts, e.g. transcripts regulated through another transcription factor. Otherwise, knocking out a single gene in a strongly connected component in a network of regulatory interactions would cause the same rate of perturbation of the expression level of all the transcripts in that component. We reason that if a transcription factor regulates many target transcripts, e.g. if a transcription factor is a hub node, the variance in the row of V corresponding to that transcription factor is elevated and therefore indicates an important transcription factor. As a measure of that effect we use the previously calculated adjacency matrix V and multiply each row of the V matrix by its variance. The updated matrix V1 is given by Equation 4: v1i,j = vi,j\u00b7Var(vi), where Var(vi) denotes the variance of the i-th row of V. The second step of refining the network requires knockout expression data. We reason that direct regulation of a transcript by a transcription factor would lead to a distinct signature in the expression data if the transcription factor was knocked out. A similar reasoning gave foundations for the null-mutant z-score method. This step produces a matrix V2, which is an update to the already derived adjacency matrix V1, as shown in Equation 5: v2i,j = v1i,j\u00b7|e\u03b1(i),j\u2212\u03b2(j)|/\u03c3j, where \u03b1(i) is the experiment in which the i-th gene was knocked out, as defined by the K matrix, \u03b2(j) is the mean expression value of the j-th transcript, and \u03c3j is the standard deviation of the expression value of that transcript in all the knockout experiments. A considerable attention has been devoted in recent years to the problem of evaluating performance of the inference methods on adequate benchmarks. The gold standards are either in vivo networks of model organisms, such as E. coli and S. cerevisiae, or artificially simulated in silico networks. The problem with in vivo benchmark networks is the fact that experimentally confirmed pathways can never be assumed complete, regardless of how well the model organism is studied. Such networks are assembled from known transcriptional interactions with strong experimental support. As a consequence, gold standard networks are expected to have few false positives. However, they contain only a subset of the true interactions, i.e., they are likely to contain many false negatives. For this reason, artificially simulated in silico networks are most commonly used to evaluate network inference methods. Simulators aim to reproduce properties of in vivo networks, such as modularity and the occurrence of network motifs. Here, we used several popular benchmark GRNs to evaluate the accuracy of our proposed algorithm and compare it with the other inference methods. 
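The two refinement passes (the Equation 4 variance weighting and the Equation 5 knockout z-score update) admit a minimal NumPy sketch. This is one plausible reading of the partially garbled definitions, with the mean and standard deviation of each transcript taken over all experiments, and the helper name is mine:

```python
import numpy as np

def refine(V, E, K):
    """Refine an inferred adjacency matrix V (rows = TFs, cols = targets).

    Step 1: weight each TF's outgoing edges by the variance of its row,
    promoting TFs with strongly differentiated effects (hub-like nodes).
    Step 2: for a TF i knocked out in some experiment, rescale edge (i, j)
    by how far gene j's expression in that experiment deviates from its
    typical level (a z-score-like signature of the knockout).
    """
    V1 = V * V.var(axis=1, keepdims=True)     # Equation 4-style row weighting
    V2 = V1.copy()
    mu = E.mean(axis=0)                       # per-transcript mean (assumption)
    sd = E.std(axis=0) + 1e-12                # per-transcript std, guarded
    for i in range(V.shape[0]):
        ko = np.where(K[:, i] == 1)[0]        # experiments knocking out TF i
        if ko.size:
            z = np.abs(E[ko[0]] - mu) / sd    # knockout deviation signature
            V2[i] *= z
    return V2
```

When no knockout meta-data is available (K all zeros), the second pass is a no-op and only the variance weighting applies, matching the claim that the method still works without knockout information.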
The data sets we used come from Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenges and are summarized in Table\u00a0. We assessed the performance of the proposed inference algorithm on large, universally recognized benchmark networks of 100 and more genes, and compared it to the state-of-the-art methods, among them methods from the minet R package and GENIE3. We summarize the results of running the different inference methods in Figure\u00a0. The DREAM3 challenge featured in silico networks and expression data simulated using GeneNetWeaver software. Benchmark networks were derived as subnetworks of a system of regulatory interactions from known model organisms: E. coli and S. cerevisiae. In this study we focus on the DREAM3 size 100 subchallenge, as the largest of the DREAM3 suite. The results of all the competing methods, except those that are aimed at multifactorial problems, are summarized in Table\u00a0. The DREAM4 challenge also featured in silico networks derived from E. coli and S. cerevisiae. In the DREAM4 size 100 subchallenge all the data types listed in Table\u00a0 were available. Three benchmark networks in DREAM5 were different: only one of them was simulated in silico, the two other sets of expression data were measured in real experiments in vivo. Like in all DREAM challenges, in silico expression data were simulated using the open-source GeneNetWeaver simulator. ENNET achieved the best scores for the in silico network, and the best Overall Score, as well as the best individual AUROC scores for all the networks. Clearly all the participating methods achieved better scores for an in silico network than for either one of the in vivo networks. ENNET shows better in vivo results than the other methods in terms of the area under the ROC curve. Still, predictions for in vivo expression profiles show a low overall accuracy. One of the reasons for the poor performance of the inference methods on such expression profiles is the fact that experimentally confirmed pathways, and consequently gold standards derived from them, cannot be assumed complete, regardless of how well the model organism is known. Additionally, there are regulators of gene expression other than transcription factors, such as miRNA and siRNA. As shown in this study, in silico expression profiles provide enough information to confidently reverse-engineer their underlying structure, whereas in vivo data hide a much more complex system of regulatory interactions. Computational complexity of ENNET depends mainly on the computational complexity of the regression stump base learner, which is used in the main loop of the algorithm. As shown in Figure\u00a0, the main loop is executed T times for each k-th target gene, k\u2208{1,...,P}. Given a sorted input, a regression stump is O(PN) complex. We sort the expression matrix in O(PN logN) time. All the other instructions in the main loop of ENNET are at most O(N). The computational complexity of the whole method is thus O(PN logN+TP2N+TPN). Because, in practice, the dominating part of the sum is TP2N, we report a final computational complexity of ENNET as O(TP2N), and compare it to the other inference methods in Table\u00a0. The biggest data we provided as input in our tests were in vivo expression profiles of S. cerevisiae from the DREAM 5 challenge. These are genome-wide expression profiles of 5950 genes (333 of them are known transcription factors) measured in 536 experiments. It took 113 minutes and 30 seconds to calculate the network on a standard desktop workstation with one Intel\u00ae Core\u2122 i7-870 processor with 4 cores and two threads per core and 16 GB RAM. However, it took only 16 minutes and 40 seconds to calculate the same network on a machine with four AMD Opteron\u2122 6282 SE processors, each with 8 cores and two threads per core and 256 GB RAM. All the data sets from the DREAM 3 and the DREAM 4 challenges were considerably smaller, up to 100 genes. 
It took less than one minute to calculate each of these networks on a desktop machine. When implementing the ENNET algorithm we took advantage of the fact that the gene selection problems are independent of each other. Our implementation of the algorithm is able to calculate them in parallel if multiple processing units are available. The user can choose from a variety of parallel backends, including the multicore package for a single computer and parallelization based on the Message Passing Interface for a cluster of computers. The ENNET algorithm is controlled by four parameters: the two sampling rates ss and sf, the number of iterations T and the learning rate \u03bd. The sampling rate of samples ss and the sampling rate of transcription factors sf govern the level of randomness when selecting, respectively, rows and columns of the expression matrix to fit a regression model. The default choice of the value of ss is 1, i.e., we select with replacement a bootstrap sample of observations of the same size as the original training set at each iteration. Because some observations are selected more than once, around 0.37 of random training samples are out of bag in each iteration. It is more difficult to choose an optimal value of sf, which governs how many transcription factors are used to fit each base learner. Setting this parameter to a low value forces ENNET to score transcription factors even if their improvement criterion, as shown in Equation 3, would not have promoted them in a pure greedy search, i.e., sf=1. However, if the chance of selecting a true transcription factor as a feature is too low, ENNET will suffer from selecting random genes as true regulators. We trained ENNET on a grid of sampling rates (ss,sf)\u2208{0.1,0.3,0.5,0.7,1}\u00d7{0.1,0.3,0.5,0.7,1} with fixed \u03bd=0.001, T=5000. For each specific set of parameters we analyzed an average 5-fold cross-validated loss over all the observations. We further analyze our approach with respect to one of the challenges: DREAM4 size 100, as shown in Figure\u00a0. The highlighted points correspond to ss=1, sf=0.3, \u03bd=0.001, T=5000. The area under the Precision-Recall curve and the area under the ROC curve are two different measures of the accuracy of an inferred network, and their behavior is well preserved across the five networks: for each separate network we observe that AUPR and AUROC decrease as a function of the average loss. As the Overall Score is closely related to AUPR and AUROC, the results shown in Figure\u00a0 support choosing the parameters that minimize the cross-validated loss. As ENNET uses boosting, it needs a careful tuning of the number of iterations T and the learning rate \u03bd. It has been shown that the parameters T and \u03bd are closely coupled. Usually the best prediction results are achieved when \u03bd is fixed to a small positive number, e.g. \u03bd\u22640.001, and the optimal value of T is found in a process of cross-validation. As described above, we reason that the choice of parameters which gives a low average loss on a cross-validated test set leads to an accurate network prediction. Therefore in Figure\u00a0 we plot the average loss for T\u2208{1,...,5000} and different values of \u03bd\u2208{0.001,0.005,0.01,0.05,0.1}, with fixed ss=1, sf=0.3. Each of the lines shows how much ENNET overtrains the data for a given T and \u03bd. Finally, the optimal choice of parameters for the DREAM4 size 100 challenge is ss=1, sf=0.3, T=5000, \u03bd=0.001. Following the same practice, we used this default set of parameters: ss=1, sf=0.3, T=5000, \u03bd=0.001 to evaluate the ENNET algorithm on all the benchmark networks using ground truth, i.e., for calculating the Overall Score and comparing it to the other algorithms. Because ENNET uses random sampling of samples and features at each iteration of the main loop, with the default parameters ss=1, sf=0.3, T=5000, \u03bd=0.001 we expect numerous random resamplings, and therefore we need to know if a GRN calculated by ENNET is stable between different executions. We applied ENNET to the 5 networks that form the DREAM 4 size 100 benchmark, repeating the inference calculations independently ten times for each network. Then, for each network, we calculated a Spearman\u2019s rank correlation between all pairs among the ten independent runs. The lowest correlation coefficient we obtained was \u03c1>0.975, with p-value <2.2e\u221216, indicating that the networks that result from independent runs are very similar. This shows that ENNET, despite being a randomized algorithm, finds a stable solution to the inference problem. We have proposed the ENNET algorithm for reverse-engineering of Gene Regulatory Networks. ENNET uses a variety of types of expression data as an input, and shows robust performance across different benchmark networks. Moreover, it does not assume any specific model of a regulatory interaction and does not require fine-tuning of its parameters, i.e., we define a default set of parameters which promises accurate predictions for future networks. Nevertheless, together with the algorithm, we propose a procedure for tuning the parameters of ENNET towards minimizing empirical loss. Processing genome-scale expression profiles is feasible with ENNET: including up to a few hundred transcription factors, and up to a few thousand regulated genes. As shown in this study, the proposed method compares favorably to the state-of-the-art algorithms on the universally recognized benchmark data sets. The authors declare that they have no competing interests. JS and TA conceived the method and drafted the manuscript. JS implemented the method and ran the experiments. JS and TA read and approved the final manuscript."}
+{"text": "As time series experiments in higher eukaryotes usually obtain data from different individuals collected at different time points, a time series sample itself is not equivalent to a true biological replicate but is, rather, a combination of several biological replicates. The analysis of expression data derived from a time series sample is therefore often performed with a low number of replicates due to budget limitations or limitations in sample availability. In addition, most algorithms developed to identify specific patterns in time series datasets do not consider biological variation in samples collected at the same conditions. Using artificial time course datasets, we show that resampling considerably improves the accuracy of transcripts identified as rhythmic. In particular, the number of false positives can be greatly reduced while at the same time the number of true positives can be maintained in the range of other methods currently used to determine rhythmically expressed genes. The resampling approach described here therefore increases the accuracy of time series expression data analysis and furthermore emphasizes the importance of biological replicates in identifying oscillating genes. Resampling can be used for any time series expression dataset as long as the samples are acquired from independent individuals at each time point. The online version of this article (doi:10.1186/s12859-014-0352-8) contains supplementary material, which is available to authorized users. Most time series currently have a very limited number of biological replicates. This makes it difficult to identify genes that truly show time-dependent expression patterns (true positives) and genes that just seem to have similar patterns due to biological variance. 
The biological variance is likely to be relatively high, especially when samples are collected from higher eukaryotes, because animals and plants are usually sampled from different individuals to avoid perturbation artifacts during sampling. Thus, in most time course experiments, the samples at each time point are usually from different individuals, resulting in a high biological variance among samples. This is the main reason why sufficient numbers of replicates are necessary. Many organisms have an endogenous clock, known as a circadian clock, to coordinate daily activities. The output of the circadian clock has a period of approximately 24\u00a0h; for example, the body temperature and sleep-wake cycle in humans, leaf movement in Mimosa, and flower opening in night-blooming jasmine all show 24\u00a0h diurnal rhythms under light/dark cycles and approx. 24\u00a0h rhythms under constant conditions. Diurnal rhythms in transcript accumulation can be described in mathematical terms, including period, phase, and amplitude. There are several algorithms to detect such rhythms in expression data, among them ARSER, HAYSTACK, and those implemented in the BIODARE web service (http://www.biodare.ed.ac.uk/). Using these algorithms we show the influence of replicates and resampling on the accuracy of predictions of rhythmically expressed genes. Although we perform the analysis to identify oscillating genes in circadian expression datasets, the resampling method can be similarly used to improve the detection of other time-dependent expression patterns as long as the samples are collected from different individuals at the specific time points. The determination of oscillating genes is a binary classification. There are only two possible outcomes: either a gene is rhythmically expressed or it is not. The accuracy of this classification can be estimated by a confusion matrix. 
There are four fundamental members of the matrix: true positives (expression profiles correctly classified as periodic), false negatives (expression profiles incorrectly classified as non-periodic), true negatives (expression profiles correctly classified as non-periodic), and false positives (expression profiles incorrectly classified as periodic). As the number of true negatives and false negatives can be directly calculated from the total number of oscillating and non-oscillating genes and the number of true- and false-positive genes identified, we only analyzed true and false positives in our calculations. The total number of oscillating and non-oscillating genes was set to 8400 in our simulated datasets. The differences in free running period between different individuals under constant conditions were simulated by generating a dataset that contained 36 time courses that differed in period according to published standard deviations for individual cells. Three initial datasets were generated by randomly selecting one time point from each simulated time course. These initial datasets were in addition averaged to generate a fourth, averaged time course. True and false positives were then calculated for ARSER, HAYSTACK and using BIODARE. From BIODARE we initially tested all implemented algorithms but found that FFT-NLLS performed best, confirming the observations from Zielinski et al. In the averaged time courses the number of true positives detected by ARSER is in most cases slightly higher than in the individual replicates, but this again comes at the cost of a higher number of false positives. HAYSTACK and BIODARE FFT-NLLS show similar performance, but HAYSTACK has more difficulty detecting oscillating genes in ODE-based simulations. 
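The confusion-matrix bookkeeping described above, with TN and FN derived from the totals, is simple to make concrete; a small sketch, with the function name my own:

```python
def confusion_counts(detected, true_oscillating, n_total=8400):
    """Given the set of transcripts called rhythmic and the ground-truth
    set, derive all four confusion-matrix members; TN and FN follow
    directly from the totals, as noted in the text."""
    detected, truth = set(detected), set(true_oscillating)
    tp = len(detected & truth)          # correctly called periodic
    fp = len(detected - truth)          # incorrectly called periodic
    fn = len(truth) - tp                # periodic transcripts missed
    tn = n_total - len(truth) - fp      # the rest, correctly left out
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}
```

This is why the analysis below only needs to track true and false positives: the other two counts are fully determined once the totals are fixed.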
As detected false positive genes can be experimentally quite costly in follow-up studies, we wanted to improve the accuracy of the prediction without increasing the number of replicates or time points required, as this too would be experimentally costly if not infeasible. We hypothesized that transcripts identified several times in resampled datasets contain more true positive and fewer false positive transcripts. To test this hypothesis, we generated 36 resampled datasets and identified oscillating genes by the ARSER and HAYSTACK algorithms in each resampled dataset. Subsequently, we calculated the consensus of detected oscillating genes in these 36 resampled datasets. A consensus of 10 means that the genes were detected in at least 10 out of the 36 resampled datasets. The consensus graphs for the analysis performed with ARSER and HAYSTACK are shown in Figures\u00a0. We next analyzed the influence of the number of resampled datasets on the detection of true and false positive oscillating transcripts. For the dataset from Na et al., 147 transcripts were identified. As the analysis resulted in a larger number of oscillating transcripts we used our simulated LL datasets to analyze how the number of replicates influences the number of true and false negatives and thus the accuracy of the detection of oscillating transcripts. To do so we initially simulated 72 datasets. Those were used to generate the different numbers of initial replicated datasets. The analysis showed that the number of oscillating transcripts detected for a full overlap between all replicates is decreasing with the number of replicates. As there is no way to determine whether an algorithm can distinguish true oscillating transcripts (true positives) from non-oscillating transcripts in a real gene expression dataset, we generated artificial time series to analyze the performance of different algorithms. 
The artificially generated time series contained the expression values of 8400 transcripts. To generate periodic patterns for synchronized datasets (LD dataset), we used the formula by Yang and Su. Desynchronized individuals under constant conditions (for example constant light (LL)) were simulated using the above formula but with a fixed period that was randomly selected for each of the 36 initially simulated time courses. The periods were normally distributed with a mean of 25\u00a0hours and a standard deviation of 3\u00a0h according to published experimental data. For more realistic circadian simulations we used the ODE model of the mammalian circadian oscillator by Leloup et al. The simulations described above were repeated 36 times for each type of data. If not described otherwise we generated 3 initial datasets from these time courses by randomly selecting once from each original simulation to generate a new 4\u00a0hour interval time course. This mimics the sampling procedure from different individuals in real experiments. Python scripts used to generate time series and initial datasets are provided as Additional file . q-values were calculated for multiple comparisons and the output was filtered; only those transcripts with a q-value smaller than 0.05 were considered in the analysis. Recently, Yang and Su developed the algorithm ARSER, which combines frequency domain and time domain analyses. HAYSTACK is available at http://haystack.mocklerlab.org/. The HAYSTACK algorithm compares gene expression profiles with predefined cycling patterns. Different cutoffs are used to detect oscillating patterns in gene expression. The most important parameter is the correlation coefficient. A higher value means a higher correlation between the experimental data and the predefined models. A coefficient of +1 indicates perfect positive correlation. Other cutoff values are the fold change and p-value, and these values are used to achieve statistical significance. 
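The simulation of free-running (LL) individuals can be sketched as below. Only the period distribution N(25 h, 3 h) and the 12-point, 4 h sampling grid come from the text; the cosine waveform, unit amplitude, and random phases are illustrative assumptions, and the function name is mine:

```python
import numpy as np

def simulate_ll_courses(n_courses=36, n_genes=100, rng=None):
    """Simulate free-running (constant-light) time courses: each individual
    gets its own period drawn from N(25 h, 3 h); every gene is a cosine
    with a random phase, sampled at 12 points over 48 h."""
    rng = rng or np.random.default_rng(1)
    t = np.arange(0, 48, 4.0)                       # 12 samples, 4 h interval
    periods = rng.normal(25.0, 3.0, size=n_courses) # one period per individual
    phases = rng.uniform(0, 2 * np.pi, size=n_genes)
    courses = np.empty((n_courses, n_genes, t.size))
    for i, per in enumerate(periods):
        courses[i] = np.cos(2 * np.pi * t / per + phases[:, None])
    return courses  # shape: (individual, gene, time point)
```

Because each simulated individual runs at its own period, averaging or naively mixing the courses blurs the rhythm, which is exactly the situation the resampling analysis probes.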
The HAYSTACK algorithm searches for at least six different patterns, including \u201casymmetric,\u201d \u201crigid,\u201d \u201cspike,\u201d \u201ccosine,\u201d \u201csine,\u201d and \u201cbox-like\u201d patterns. The models that most successfully identify rhythmically expressed genes are \u201ccosine\u201d and \u201cspike.\u201d To exclude the possibility that our results depend on the chosen algorithm, the analysis was repeated with the HAYSTACK algorithm and with BIODARE; the implemented algorithms are described elsewhere . Shortly, the q-value threshold was set to 0.05. The HAYSTACK algorithm was used with the following parameters: p-value\u2009=\u20090.05; fold change\u2009=\u20092.0; correlation cutoff\u2009=\u20090.8; and background cutoff\u2009=\u20090.01. Using the detected oscillating transcripts, the consensus among the 36 resampled datasets was calculated. The artificially generated time series dataset consists of 12 time points at 4\u00a0h sampling intervals, representing 48\u00a0h of observation. To generate resampled datasets, expression values of each gene were randomly selected from (if not stated otherwise) three initial replicate time series, and the values were combined to generate the new resampled dataset. Each expression value has an equal probability of selection, and the time points are treated independently of one another. If not stated otherwise, the procedure was repeated 36 times, and we created 36 different resampled datasets (python script provided as Additional file
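As a minimal illustration (our own sketch, not the authors' Additional-file scripts), the resampling of replicate time courses and the consensus count over the resampled datasets can be written as:

```python
import random
from collections import Counter

def resample_dataset(replicates, rng=random):
    """Build one resampled dataset: for every gene, pick the expression
    value at each time point from one of the replicate series, chosen
    with equal probability and independently per time point."""
    n_points = len(next(iter(replicates[0].values())))
    return {g: [rng.choice(replicates)[g][t] for t in range(n_points)]
            for g in replicates[0]}

def consensus_genes(detections, min_count):
    """Genes detected (e.g. by ARSER or HAYSTACK) in at least
    `min_count` of the resampled datasets."""
    counts = Counter(g for genes in detections for g in set(genes))
    return {g for g, c in counts.items() if c >= min_count}

# Toy data: 2 genes, 3 replicates, 4 time points (the study itself used
# 12 time points at 4 h intervals and 36 resampled datasets).
reps = [{"g1": [1, 2, 3, 4], "g2": [5, 6, 7, 8]},
        {"g1": [1.1, 2.1, 3.1, 4.1], "g2": [5.1, 6.1, 7.1, 8.1]},
        {"g1": [0.9, 1.9, 2.9, 3.9], "g2": [4.9, 5.9, 6.9, 7.9]}]
resampled = [resample_dataset(reps) for _ in range(36)]
assert len(resampled) == 36 and len(resampled[0]["g1"]) == 4

# A consensus of 3 keeps only genes detected in all 3 toy detection sets.
detections = [{"gA", "gB"}, {"gA", "gC"}, {"gA", "gB"}]
assert consensus_genes(detections, 3) == {"gA"}
```

All names here are ours; the point is only that each time point of a resampled series is drawn independently from the replicates, and the consensus is a simple detection count across resampled datasets.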
+{"text": "Considering the roles of protein complexes in many biological processes in the cell, detection of protein complexes from available protein-protein interaction (PPI) networks is a key challenge in the post-genome era. Despite the high dynamicity of cellular systems and the dynamic interactions between proteins in a cell, most computational methods have focused on static networks, which cannot represent the inherent dynamicity of protein interactions. Recently, some researchers have tried to exploit the dynamicity of PPI networks by constructing a set of dynamic PPI subnetworks corresponding to each time-point (column) in gene expression data. However, many genes can participate in multiple biological processes, and cellular processes are not necessarily related to every sample but might be relevant only for a subset of samples. So, it is more interesting to explore each subnetwork based on a subset of genes and conditions in gene expression data. Here, we present a new method, called BiCAMWI, to employ dynamicity in detecting protein complexes. The preprocessing phase of the proposed method is based on a novel genetic algorithm that extracts, from the input gene expression data, sets of genes that are co-regulated under some conditions. Each extracted gene set is called a bicluster. In the detection phase, based on the biclusters, dynamic PPI subnetworks are extracted from the input static PPI network. Protein complexes are identified by applying a detection method on each dynamic PPI subnetwork and aggregating the results. Experimental results confirm that BiCAMWI effectively models the dynamicity inherent in static PPI networks and achieves significantly better results than state-of-the-art methods. So, we suggest BiCAMWI as a more reliable method for protein complex detection. In cellular systems, proteins physically interact to form complexes to carry out their biological functions, 2. 
Owing to the absence of temporal information in available physical protein-protein interactions, most computational methods that have been developed during the past decade \u201316 have focused on static networks. The challenges now are how to employ the dynamic nature of PPI networks and how to detect temporal protein complexes. With recent advances in high-throughput experimental techniques, massive data from differential expression of thousands of genes under various experimental conditions/times are provided, 23. In some methods , 32\u201334. The above recent methods have a considerable drawback: they completely neglect the correlations between the subnetworks at successive time-points by merely focusing on the single dynamic PPI subnetworks. However, experimental observations confirm that protein complexes are also formed and carry out their functions dynamically, conserved across multiple consecutive time-points , 32. Interactions can be divided into stable and transient interactions. The stable interactions, as the backbone of the protein interaction network, persist through different time-points, while a transient interaction exists at a particular time-point depending on the particular functions of the two correlated proteins. To overcome the above shortcomings, more recently, Ou-Yang et al. proposed TS-OCD. Although TS-OCD is able to distinguish between stable and transient interactions to achieve higher accuracy, an important limitation of TS-OCD and similar methods is the correspondence of each subnetwork to a time-point in gene expression data. In other words, TS-OCD marks an interaction as transient if the expression level of the interaction is more than a certain value. But many genes can participate in multiple biological processes, and cellular processes are not necessarily related to every sample; they might be correlated only for a subset of samples. 
So, it is more interesting to extract each subnetwork based on a subset of genes and time-points simultaneously in time-series gene expression data. In clustering, it is possible to cluster the rows of gene expression data; however, in the cell, genes show the same co-regulation and co-expression patterns only over a subset of experimental samples/conditions, and different patterns over the remaining samples/conditions. Such local patterns cannot be identified by typical clustering methods. Biclustering methods provide simultaneous clustering of both rows and columns in the data matrix to discover genes that are co-expressed only in a subset of time-points. Here, biclustering, as a powerful tool to discover the biological patterns of co-regulated genes that a clustering algorithm might not recover, gives us a better view of the dynamic reality of the cell. Biclustering is an NP-hard problem . We propose a new method to extract dynamic PPI subnetworks from time series gene expression data. Firstly, it applies a genetic algorithm called GA-DCM to detect biclusters from the input gene expression data. GA-DCM uses a novel fitness function called DCM to evaluate biclusters. In comparison with other methods , 54, 55. Then a post-processing procedure is run to filter out small and not biologically significant biclusters. Next, corresponding to each bicluster, a subgraph consisting of the set of genes in the bicluster is extracted as a dynamic PPI subnetwork. Similar to TS-OCD, the proposed approach is able to distinguish between stable and transient interactions. Stable interactions are those that exist in all subnetworks, while an interaction is transient if its two associated proteins co-occur in a bicluster. Finally, to assess the effectiveness of this approach, we present a dynamic version of some recent protein complex detection methods. 
In each case, we run each detection method on all dynamic subnetworks and aggregate all predicted complexes while removing duplicate ones. Experimental results show that the proposed dynamicity, based on the novel biclustering algorithm, can retrieve more significant dynamic subnetworks (i.e., subnetworks involved in more protein complexes) from static PPI networks and improves the accuracy of protein complex detection methods. In particular, BiCAMWI, which is a dynamic version of the previously presented method CAMWI, achieves . This section explains the proposed dynamic method that improves the detection accuracy of protein complexes. Our method consists of three steps: 1. developing and applying GA-DCM, a genetic-based biclustering algorithm, on gene expression data to detect biclusters of genes/conditions (subsection 3.1); 2. extracting dynamic subnetworks from the static PPI network based on the obtained biclusters (subsection 3.2); and 3. applying a protein complex detection method on every dynamic subnetwork and aggregating the results (subsection 3.3). A genetic algorithm is a metaheuristic tool for solving optimization problems. It simulates the process of natural selection. A genetic algorithm is an iterative procedure. It starts from an initial population of candidate solutions called individuals. Properties (chromosomes) of each candidate solution can be changed. Commonly, the acceptable encoding for each solution is a binary string of 0s and 1s, but other representations are also possible. In each generation, the goodness of every chromosome is evaluated by a fitness function. According to the selection paradigm, the better individuals from the current population are selected and modified by incorporating some genetic operators. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm continues until a stopping condition is reached. 
The stopping condition can be either a maximum number of generations, or that an adequate fitness level has been reached for the population. Briefly, a traditional genetic algorithm requires: 1) a genetic encoding of the problem solution, and 2) a fitness function to evaluate the solutions. In this subsection, we explain GA-DCM, the proposed genetic-based biclustering algorithm, in detail. Encoding of a bicluster of the expression matrix into a chromosome of the genetic algorithm (3.1.1), introduction of a novel fitness function to measure the quality of a bicluster (3.1.2), and presentation of the genetic operators of the algorithm (3.1.3) are covered. Given a two-dimensional gene expression matrix M with m rows and n columns, it contains the expression levels of m genes G = {I1, I2,\u2026, Im} over a series of n subsequent time-points (conditions) C = {C1, C2,\u2026, Cn} during a biological process. Each element Mi,j represents the expression level of the ith gene at the jth time-point. A bicluster is interpreted as a submatrix B of the expression matrix M, where I, J are subsets of the gene set G and the condition set C, respectively (I \u2286 G and J \u2286 C). A bicluster is encoded as a genetic chromosome that is represented by a fixed-size binary string composed of genes and time-points. If a gene or condition is included in a bicluster, the corresponding bit is set to 1, otherwise 0. We perform a preprocessing step before using the gene expression data. The preprocessing step consists of two tasks: (i) data normalization, to remove systematic differences between data measured by several microarrays/conditions (yielding M\u2019 after performing the normalization step); and (ii) discretization, for reducing the infinite set of real expression values to an acceptable range of discrete values. In the normalization step, the expression values of every gene over all time-points are normalized to mean 0 and standard deviation 1. The discretization step provides a number of different discretization techniques replacing each absolute expression value by a symbol of a given alphabet. Alphabets of two or three symbols are the most common, containing the symbols {D, U} and {D, N, U}, respectively, where D means down-regulation, N no-regulation, and U up-regulation. We consider four discretizing techniques: The simple threshold technique discretizes expression values in a binary alphabet {D, U} such that if an expression value is higher than the threshold, it is replaced with U, otherwise D. The mean and standard deviation of gene expression profile technique uses an alphabet of three symbols {D, N, U} and a parameter \u03b1 defined by the researcher. Symbol D is used to replace all expression values below the difference between the mean value and the product of \u03b1 and the standard deviation. U is used for expression values higher than the sum of the mean value and the product of \u03b1 and the standard deviation. N is used for the remaining expression values. Transitional state discrimination uses a binary alphabet {D, U}. The element M\u2019i, j of the normalized matrix M\u2019 is set to U if the difference between Mi, j and M\u2019i, j exceeds 0; otherwise, it is set to the symbol D. Variation between time-points can be used with both two- and three-letter alphabets. In the binary case, using the parameter \u03b1, a threshold \u03b2 is calculated as the product of \u03b1 and the standard deviation of the expression values of all genes at time-point 0. Then, each element M\u2019i, j of the normalized matrix M\u2019 is set to U if the difference between M\u2019i, j and M\u2019i, j-1 exceeds the calculated \u03b2; otherwise, it is set to D. In the case of the three-letter alphabet, the threshold \u03b2 is directly chosen by the researcher. 
Each element M\u2019i, j of the normalized matrix M\u2019 is set to U if the difference between M\u2019i, j and M\u2019i, j-1 exceeds \u03b2, or it is set to D if such difference is lower than -\u03b2, or N otherwise. After choosing the best discretizing technique, we have a discretized expression matrix M\u201d. Here, we define a novel fitness function to determine the quality of biclusters. It is called the Discretized Column-based Measure (DCM). Given a bicluster B of the expression matrix M, with I \u2286 G and J \u2286 C, the DCM value of B is computed by (Eq ). fj is computed for every column j of the bicluster as follows: it counts the frequency of each discrete symbol {D, U}. If a symbol has the majority (i.e., has more than |I|/2 occurrences), then fj is the number of discretized symbols in column j that are unequal to the majority symbol. Otherwise, if none of the symbols has the majority, fj is set to |I|. Also, if the majority symbol is N, fj is set to |I|/2. In the best case, DCMB is equal to |J|. On the other hand, in the worst case, if no discrete symbols have the majority, DCMB is equal to -\u03b1\u00d7|J|; so -\u03b1\u00d7|J| \u2264 DCMB \u2264 |J|. 1 is the maximum delay. We also assume that the DBN is stationary, i.e. the dependency P is independent of t. Therefore, the joint distribution can be factorized as:The order X1,\u2026,Xd) is the prior network, which needs many independent samples to estimate, so the focus is usually on the transition network P. Assuming stationary DBN, the transition network could be represented as a multi-graph of n+nh nodes (representing the n+nh genes), where an edge i\u2192j is labeled with a delay \u03c4ij\u22650, meaning that Xj,t depends on i to node j, but with different delays.The P(transition network:n observed genes and nh hidden variable(s).there are nh is unknown, but 0\u2264nh0.We also assume that the subgraph for which e.g. in , MMHO-DB e.g. in and Glob e.g. in . 
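To make the column measure concrete, here is a sketch of the DCM computation. The per-column scores fj follow the text; the aggregation over columns is our own assumed reading, chosen only to reproduce the stated extremes (|J| for a perfect bicluster, -\u03b1\u00d7|J| when no column has a majority symbol), since the exact equation is not reproduced in the source:

```python
def dcm(bicluster_cols, alpha=1.0):
    """Discretized Column-based Measure (sketch, our reading).

    `bicluster_cols`: one list per time point (column), holding the
    discretized symbols 'D', 'N', 'U' of the bicluster's |I| genes.
    Per column j: if a symbol occurs more than |I|/2 times, f_j is the
    number of symbols unequal to that majority symbol (|I|/2 if the
    majority symbol is 'N'); otherwise f_j = |I|.  Each column then
    contributes 1 - (1 + alpha) * f_j / |I|, an assumed aggregation
    that yields |J| in the best case and -alpha*|J| in the worst.
    """
    total = 0.0
    for col in bicluster_cols:
        n = len(col)
        sym, cnt = max(((s, col.count(s)) for s in set(col)),
                       key=lambda p: p[1])
        if cnt > n / 2:
            fj = n / 2 if sym == 'N' else n - cnt
        else:
            fj = n
        total += 1.0 - (1.0 + alpha) * fj / n
    return total

# Perfect bicluster (every column unanimous): DCM equals |J| = 2.
assert dcm([['U'] * 4, ['D'] * 4]) == 2.0
# No majority in the single column: DCM equals -alpha * |J| = -1.
assert dcm([['U', 'D', 'U', 'D']]) == -1.0
```

Only the two boundary values are guaranteed by the text; intermediate values depend on the assumed aggregation.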
We make the following assumptions on the structure of the network: each hidden variable has at least two observed genes as children; if a gene has a hidden parent, it has no other parents; children with the same hidden parent are not linked with each other; and for each conditional distribution with ns states, one of the states has probability pbias, and the other states each has a probability of . A higher pbias means a lower \u201cnoise\u201d level. The given data consists of K discrete time series k\u2264K. The K time series should be discretized in the same way, so that the states are consistent in different time series. For the purpose of identifying genes with hidden common cause(s), the first step is to obtain an initial GRN. In principle, any HO-DBN learning algorithm could be used. In our preliminary test (unpublished), we have adapted CLINDE to constrain the possible causal structure. It consists of two stages. In stage 1, independence tests (G2 test) are conducted on all gene pairs i\u2192j for all possible delays up to a maximum delay. If the null hypothesis of the independence test is rejected (the null hypothesis is rejected if the score is larger than a score threshold), the link with the associated delay is kept for stage 2. The default value of the score threshold is 2, corresponding to a p-value threshold of 0.01. Note that there may be multiple delays for i\u2192j after stage 1. Stage 2 attempts to eliminate indirect effects based on the fact that if x and y are conditionally independent given a set of variable(s) Z (not containing either x or y), then there should not be a link between x and y. So in stage 2, we iteratively condition on h=1 neighbor for each link to see if the link could be pruned, then condition on h=2 neighbors for any remaining links, and so on up to h=N0 for a given parameter N0, with a default value of 2. 
When performing a conditional test, the neighbors to be conditioned on are shifted using the delays estimated in stage 1, and if the null hypothesis is not rejected, the link is pruned. D-CLINDE is a constraint-based method, where conditional independence tests on the data constrain the possible causal structure. GlobalMIT+ is a score-based method. The characteristics of the score (to be minimized) that allow effective optimization and pruning are: No need to check acyclicity: this allows the score to be calculated separately for each variable; since GlobalMIT+ ignores instantaneous effects, the network is always acyclic. Additivity: the score of a candidate network can be decomposed into the sum of the scores of each gene. This greatly simplifies the search, and allows easy parallelization. Splitting: the score for each gene could be decomposed into a sum of complexity and accuracy parts as s(Pa)=u(Pa)+v(Pa), where both u(.)\u22650 and v(.)\u22650, and the complexity part is \u201cnon-decreasing\u201d: u(Pa1)\u2264u(Pa2) for Pa1\u2286Pa2. Uniformity: the complexity is only a function of the number of parents, i.e. u(Pa1)=u(Pa2) whenever |Pa1|=|Pa2|. In minimizing the score, if the complexity alone exceeds the best score so far, it is safe to prune the search, as adding more parents could only worsen the score. The key to the proof of polynomial time is a logarithmic bound p\u2217 on the number of parents to consider (e.g. by finding a p\u2217 for which u(Pa)\u2265u(\u2205) if |Pa|=p\u2217), so that there are only polynomially many parent sets to consider. In the case of GlobalMIT+, with a simple trick the maximization is turned into minimization, and by assuming that all variables have the same number of states k (for uniformity), all of the above conditions are satisfied, so the MIT score could be optimized in polynomial time with p\u2217\u2248 logk(Ne), where Ne is the effective number of samples. GlobalMIT* additionally assumes that for each pair i\u2192j, there is only one delay, and that delay has the best MIT score. 
So GlobalMIT* first finds the best delay individually for each pair i\u2192j, and need not try the delays in subsequent optimization. This substantially reduces the search space, speeding up the search greatly. However, in our preliminary test, the practical running time could still be long for a large number of genes and time points; GlobalMIT* is thus a heuristic and faster version of GlobalMIT+. For each gene g, for each configuration Qi of its parent(s) Pag, we calculate the maximum probability of the conditional distribution as maxjP(g=j|Pag=Qi), and we use the median of the maximum probability over the parent configurations Qi\u2019s as the estimated bias of g. Having obtained the initial GRN of the observed genes, we can estimate the conditional distribution of each gene by maximum likelihood, and then estimate the pbias; \u03c1 is the tolerance, with a default value of 0.05. The idea is that if a gene has no hidden common cause, we expect its parents (and delays) to be correctly determined (given sufficient data), so the estimated bias should be close to the expected bias. On the other hand, if a gene has a hidden common cause, its true parents could not be determined correctly, and we expect the estimated bias to be different from the expected one. Those genes determined to have hidden parents are called candidates. For each gene, we compare the estimated bias with the expected bias. If the number of observed genes n is small, we assume that the expected bias is known and given. On the other hand, when n is larger, by the assumption that there are only a small number of hidden variables, we could attempt to estimate the expected bias from the estimated biases of the observed genes. We simply use the median of the estimated biases as the expected bias for this study, if it is not given. We discuss a possible alternative strategy for estimating the expected bias as future work in the conclusions. 
If there are no candidates, we simply output the initial GRN as the final GRN. Otherwise, based on the fact that genes with a common parent are associated, we cluster the candidates to determine which genes have a common parent, and also to estimate their relative delays for estimating the hidden common cause(s). Although there are many different clustering algorithms, we found from our preliminary tests that even a simple greedy clustering algorithm works adequately. The idea is that we consider each candidate in turn, and find the cluster center that is closest to it; if it is close enough, the candidate is added to that cluster; otherwise, the candidate forms a new cluster. The steps are: Let the k candidates be {g1,g2,\u2026,gk}. Set nc\u21901, c1\u2190g1, \u03c41\u21900, C1\u2190{g1}. For i=2,\u2026,k: let \u03c4i be the associated time shift of gi relative to the closest cluster center; if gi is close enough to that center, add it to the cluster; otherwise, set nc\u2190nc+1 and set \u03c4i\u21900. Output the nc clusters {Cj:1\u2264j\u2264nc}, and the time shifts {\u03c4i:1\u2264i\u2264k}. Here ci is the center of cluster i and Ci is cluster i; \u03c4i is the time shift of candidate gi relative to its cluster center; d measures the similarity of two time series x and y, for which we use the maximum \u2212log10 of G2 tests of the shifted time series ; and S0 is the threshold for a series to be included in a cluster, with a default value of 2.3 . After the clustering, we would estimate a hidden common cause (estimating its time series) for each cluster with two or more members. If no cluster has size at least two, we simply output the initial GRN as the final GRN. For each cluster with size at least two, we perform up to two rounds of EM. The first round estimates a hidden common cause (as parent) of the genes in the cluster without considering potential parents of the hidden common cause. 
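The greedy clustering steps can be sketched as follows; the similarity function and threshold here are stand-ins for the G2-based score d and the threshold S0 of the text, and all names are our own:

```python
def greedy_cluster(candidates, sim, threshold):
    """Greedy clustering sketch: assign each candidate to the closest
    existing cluster center if it is close enough, otherwise start a
    new cluster with the candidate as its center.

    `sim(center, g)` returns (similarity, time_shift); in the text the
    similarity is the maximum -log10 p-value of G2 tests over shifted
    series, and the inclusion threshold S0 defaults to 2.3."""
    centers = [candidates[0]]
    clusters = [[candidates[0]]]
    shifts = {candidates[0]: 0}          # shift of each series vs. its center
    for g in candidates[1:]:
        scored = [(sim(c, g), i) for i, c in enumerate(centers)]
        (best_sim, best_shift), best_i = max(scored)   # closest center
        if best_sim >= threshold:
            clusters[best_i].append(g)
            shifts[g] = best_shift
        else:                            # not close enough: new cluster
            centers.append(g)
            clusters.append([g])
            shifts[g] = 0
    return clusters, shifts

# Toy usage with numbers standing in for time series.
toy_sim = lambda c, g: (-abs(c - g), g - c)   # (similarity, shift)
clusters, shifts = greedy_cluster([0, 1, 10], toy_sim, threshold=-2)
assert clusters == [[0, 1], [10]] and shifts[1] == 1
```

The sketch keeps the two properties the text relies on: cluster membership fixes a relative time shift for each series, and every series either joins an existing center or seeds a new one.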
The second round uses the estimated time series of the hidden common cause to find potential parents from all observed genes (not limited to the cluster under consideration) by picking those with high associations with the estimated hidden common cause, and re-estimates the hidden common cause treating the found (if any) potential parents as parents of the hidden common cause. But note that any identified potential parents of a hidden common cause may not be the true parents of the hidden common cause, as they are found by only considering pairwise associations but not possible indirect effects. So we still rely on the relearning of the GRN after estimating hidden common cause(s) to more accurately identify the parents of the hidden common cause(s), if any. However, we expect the identified potential parents to contain useful information for the estimation of the hidden common cause. We use simple Expectation Maximization (EM) to estimate each hidden common cause h. The number of states of h is either given as a parameter, or taken as the maximum of the number of states of the children if not given. We perform two rounds of EM, each with a default of 100 iterations, and with restarts. Below we briefly describe the EM steps. Suppose for a cluster C={g1,g2,\u2026,g|C|} with |C|>1 we want to estimate a hidden common cause h with ns states, which may have potential parents identified (for the second round). We first note that the different series may not be aligned because of different time shifts, as illustrated in Fig. . For ts\u2264t\u2264te, we denote the state of h at time t as ht; these are the latent variables in the EM. Let the configuration of the potential parents of h be denoted by Q, and the value of Q at time t be denoted by Qt, and let xi,t be the value of gi at time t (if available). Our goal is to estimate the most probable ht for ts\u2264t\u2264te given D={Qt}\u222a{xi,t}. 
The parameter of the likelihood is \u03b8={P(h|Q)}\u222a{P(gi|h)}, where P(h|Q) becomes P(h) if h has no potential parents. Starting from an initial \u03b8(0)={P(0)(h|Q)}\u222a{P(0)(gi|h)}, we then repeat the E-step and the M-step for a default of 100 iterations. E-step: at iteration k, for each time t, and for 0\u2264j<ns . Since finding an inverse for a rank-deficient matrix is an ill-posed problem when p\u2009>>\u2009n, like with the data analyzed here, we resolved it by adding a noise term which renders the matrix positive-definite. In the comparative analysis, we selected the best result from 10 runs of the procedure as the final outcome. For each of the methods used in the comparative analysis , we employed the data sets from each perturbation separately and all together, and divided the profiles into the 210 TFs (regulators) and 1561 non-regulator genes. For the network deconvolution and the global silencing methods, we proceeded as follows: First, for the given data set, we calculated the Pearson correlation coefficient matrix (http://dream.broadinstitute.org/gp/pages/index.jsf). Finally, for the different regularization models (L1, L1/2, and L0), we used the available source code from http://jjwanglab.org/LpRGNI/. We used the package GeneNet . Positive and negative edge weights correspond to the two regulatory types (i.e., activation and inhibition, respectively). The inference of regulatory type, in terms of activation or inhibition, would not be possible without capturing the edge weights before rescaling, described above. To be consistent with the obtained edge weights from each network inference method, we rescaled them to positive values between 0 and 1 . Then the edges were sorted in decreasing order of their weights, and the top 100,000 highly ranked edges were selected for the rest of the statistical assessments. We note that the resulting networks from global silencing and network deconvolution are rescaled by default. 
In addition, the actual edge weights were stored for each network to estimate the percentages of predicted true positive and negative regulatory effects without further modification. The rest of the analysis was performed in the same way as for E. coli, except for the selection of highly ranked edges, for which, due to the smaller network size, we selected the top 31 (the size of the gold standard network) and 100 highly ranked edges. The comparative analysis was performed only on the combination of all data sets, resulting in one gene regulatory network per method. As the gold standard network, we used . The rest of the analysis was performed in the same way as for E. coli, except for the selection of highly ranked edges, for which, due to the smaller network size, we selected the top 248 (the size of the gold standard network) and 500 highly ranked edges. The comparative analysis was performed only on the combination of all data sets, resulting in one gene regulatory network per method. The inference of the gene regulatory networks and the comparative analysis of the compared approaches were performed in the same manner as for . To implement the modified fused LASSO approach for reconstruction of gene regulatory networks over different data sets, we defined a penalty matrix D composed of two blocks: the first block D1 is a diagonal kP\u2009\u00d7\u2009kP matrix for the LASSO penalty which includes W in the diagonal, while the second block D2 corresponds to the fusion penalty, expressed as in Eq.(4); n is the number of observations and p is the number of regressors. We also slightly changed the function fused.lasso in the lqa package to allow inclusion of the penalty matrices defined in Eq.(4). The regression coefficients were robustly estimated by 10-fold cross validation based on the optimum values for \u03bb1 and \u03bb2 from the sets {0.05, 0.1, 0.5, 1, 1.5} and {0.1, 0.5, 1, 1.5, 2}, respectively. 
To further speed up the algorithm, we used the mclapply function from the R package parallel. To solve the proposed extension to the fused LASSO, we used the lqa package in R. The code snippets for all approaches used in this study are available at http://mathbiol.mpimp-golm.mpg.de/Mul-fLASSO/index.html. The complete implementation of the proposed approach for the data sets used in this study, as well as the general analysis flow for applying the proposed approach on new data sets, are available at https://github.com/omranian/inference-of-GRN-using-Fused-LASSO. How to cite this article: Omranian, N. et al. Gene regulatory network inference using fused LASSO on multiple data sets. Sci. Rep. 6, 20533; doi: 10.1038/srep20533 (2016)."}
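In generic form, a fused LASSO objective of the kind described, with the sparsity penalty corresponding to block D1 (weight \u03bb1) and a fusion penalty coupling the coefficients of the same regulator across data sets corresponding to block D2 (weight \u03bb2), can be written as follows; the notation \u03b2_j^(k) for the coefficient of regulator j in data set k is ours, not the paper's:

```latex
\min_{\beta}\;
\sum_{k=1}^{K} \bigl\lVert y^{(k)} - X^{(k)}\beta^{(k)} \bigr\rVert_2^2
\;+\; \lambda_1 \sum_{k=1}^{K}\sum_{j=1}^{p} \bigl|\beta_j^{(k)}\bigr|
\;+\; \lambda_2 \sum_{k=1}^{K-1}\sum_{j=1}^{p} \bigl|\beta_j^{(k)} - \beta_j^{(k+1)}\bigr|
```

The \u03bb1 term enforces sparse regulator sets per data set, while the \u03bb2 term pulls the coefficient of each regulator toward agreement across data sets, matching the role of the two penalty blocks and the two cross-validated parameter grids mentioned above.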
+{"text": "The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it underperforms for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process. A gene regulatory network (GRN) represents the regulatory behaviour or dependencies among a group of genes inside a cell. A GRN is characterized by a directed graph in which nodes denote genes, and regulatory dependencies among genes are depicted by directed edges between the corresponding nodes. There are two types of interaction, namely, activation and inhibition. This kind of network is unique for particular functions within a cell. Thus, the study of GRNs is essential to ascertain the genetic causes of a particular disease. 
As a consequence of this, scientists can venture into the development of new and improved techniques for the treatment of a disease . Nowadays, DNA microarrays , 3 are e. K-Means Population-Based Incremental Learning (KPBIL) . The reason behind selecting FPA as the optimization method is that it gives better convergence and accuracy than other popular metaheuristic techniques , \u03b2i = , and \u03c4i = , respectively. The CS is initialized randomly with a population of different solutions or eggs on different random nests, and the quality of each egg in the host nest is calculated using the FPA-based RNN to observe the impact of regulatory genes on other genes in the GRN as part of one generation. However, we initialized with a population of 10, each consisting of 3 different combinations of genes starting from , ,\u2026, . This type of predefined initialization is used as it increases the probability of covering all regulatory genes without repetition. The search range for the model is chosen as in previous work . If the resultant error becomes less than 1\u00d710\u22128, then it gives almost the actual values of the parameters of the RNN for the regulatory genes. Therefore, a stopping criterion is introduced to minimize the execution time of the algorithm; that is, if the fitness value for a particular gene set becomes less than 1 \u00d7 10\u22128 after some iterations, the program stops execution instantly. Moreover, the corresponding gene set becomes the desired output. The quality of a host nest or fitness of a solution is simply proportional to the objective function, that is, the resultant error of the RNN for the particular set of genes or nest. FPA always tries to minimize the training error by optimizing the value of the RNN model parameters. 
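The RNN formalism with parameters wi,j, \u03b2i, and \u03c4i referred to above is commonly written as \u03c4i dei/dt = \u03c3(\u03a3j wi,j ej + \u03b2i) \u2212 ei, with \u03c3 the logistic sigmoid. A minimal one-step sketch of this dynamics and of the training error the FPA would minimize (our own illustration of the common formalism, not the authors' code) is:

```python
import math

def rnn_step(e, w, beta, tau, dt=1.0):
    """One Euler step of the common RNN gene-regulation model:
    tau_i * de_i/dt = sigmoid(sum_j w[i][j]*e_j + beta_i) - e_i,
    where w[i][j] is the regulatory weight of gene j on gene i."""
    nxt = []
    for i in range(len(e)):
        s = sum(w[i][j] * e[j] for j in range(len(e))) + beta[i]
        z = 1.0 / (1.0 + math.exp(-s))            # logistic sigmoid
        nxt.append(e[i] + (dt / tau[i]) * (z - e[i]))
    return nxt

def training_error(series, w, beta, tau, dt=1.0):
    """Mean squared one-step-ahead prediction error over a time series:
    the quantity a metaheuristic such as FPA minimizes over (w, beta, tau)."""
    err, n = 0.0, 0
    for t in range(len(series) - 1):
        pred = rnn_step(series[t], w, beta, tau, dt)
        err += sum((p - x) ** 2 for p, x in zip(pred, series[t + 1]))
        n += len(series[t])
    return err / n

# With zero weights and biases, sigmoid(0) = 0.5 is a fixed point.
assert rnn_step([0.5, 0.5], [[0, 0], [0, 0]], [0, 0], [1, 1]) == [0.5, 0.5]
```

In the hybrid scheme described here, Cuckoo Search proposes the candidate regulator combination and FPA tunes (w, \u03b2, \u03c4) for that combination against this one-step-ahead error.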
During training of the RNN, each pair in the training dataset contains the gene expression values of only the 3 regulatory genes at one time instance of the microarray data as the input values to the RNN model, and the expression value of the target gene at the next time instance as the target output. This helps to reduce the execution time by eliminating unnecessary calculations corresponding to nonregulatory genes. Moreover, it is observed that during optimization the fitness value of the FPA-based RNN may not become less than 1 \u00d7 10\u22128 even after a large number of iterations. Therefore, another alternative stopping criterion is also imposed, which can be stated as follows: if the difference between the current best fitness value and the fitness value of the (current-200)th iteration is less than 1 \u00d7 10\u221210, then the program execution will also stop. Better host nests with better-quality eggs in each generation move on to the next generations. After successful completion of all iterations, we have a set of regulatory genes which affect the target gene the most. This process is repeated for all 30 genes one by one to obtain the final GRN structure.Sometimes it is found that if a solution does not consist of the actual regulatory genes, then convergence is very slow, which may increase the execution time as the fitness is not able to go below 1 \u00d7 10\u22128. At the end of optimization we obtain the set of regulatory genes (jth) for a particular target gene (ith), along with the values of the regulatory weights wi,j corresponding to those 3 regulatory genes for which the minimum or optimum fitness value is achieved. It is worth mentioning that for a target gene we always get a combination of three responsible genes, but this does not mean that during GRN reconstruction there will always be regulatory edges from those regulatory genes towards the target gene. The existence of a directed edge in the network also depends on the weights between the target and regulatory genes. 
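The decoupled RNN update and the fitness that the FPA minimizes can be sketched as follows. This is a minimal illustration under assumed names (`rnn_step`, `fitness`) and the standard RNN-GRN update rule, not the authors' code; the mean-squared one-step error is the usual choice of objective.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step(target_prev, reg_expr, w, beta, tau, dt=1.0):
    # Standard RNN-GRN update for one target gene i:
    # e_i(t+dt) = (dt/tau_i) * sigma(sum_j w_ij e_j(t) + beta_i)
    #             + (1 - dt/tau_i) * e_i(t)
    drive = sigmoid(np.dot(w, reg_expr) + beta)
    return (dt / tau) * drive + (1.0 - dt / tau) * target_prev

def fitness(params, target_series, reg_series, dt=1.0):
    """Mean squared one-step prediction error for one candidate set of
    regulators -- the objective the FPA would minimize (decoupled model:
    each target gene is fit in its own search instance)."""
    w, beta, tau = params[:-2], params[-2], params[-1]
    T = len(target_series) - 1
    err = 0.0
    for t in range(T):
        pred = rnn_step(target_series[t], reg_series[t], w, beta, tau, dt)
        err += (pred - target_series[t + 1]) ** 2
    return err / T
```

A Cuckoo Search over regulator triples would call `fitness` once per candidate nest, stopping early when the error drops below the 1 × 10−8 threshold described above.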
If the amplitudes of the weights are zero or very small, there will be no edges for the target gene; that is, there will be no interaction. A small perturbation of the obtained weights from the actual ones can be ignored as long as it does not change the polarity or sign, that is, the type of regulation. Moreover, small values of weights can be considered as zero. It is believed that if the number of iterations and the population size are large enough, these small perturbations can be avoided, but the execution time will also increase accordingly. Thus, during reconstruction of the GRN, both the set of regulatory genes and the values of the corresponding weights must be kept in mind. The pseudocode of this CS-FPA hybrid takes as inputs the time series data. Performance is evaluated in terms of sensitivity (Sn), specificity (Sp), accuracy, and the Matthews Correlation Coefficient (MCC). Microarray experiments on the SOS DNA repair network of E. coli were first carried out in earlier work, and that expression dataset is used here. The search ranges are chosen as wi,j = , \u03b2i = , and \u03c4i = , respectively. For CS, the value of N is set to 2, as the number of available genes is limited to only 8; the other parameter settings remain the same. When the CS-FPA hybrid is applied to the SOS dataset, only 4 out of 9 potential regulations can be appropriately predicted from the data, namely the inhibition by lexA of uvrD, umuD, recA, and ruvA. Moreover, it also includes 11 false regulations in the network, which is a disadvantage of this process. The results are shown in . Furthermore, using two or more time series does not yield any enhancement of the results. The cause of this is that inference of real GRNs is an ill-posed problem that has no unique solution. Noise and measurement error in this type of real time-series microarray data is another issue. 
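The four reported metrics can be computed from confusion counts over all ordered gene pairs. A minimal sketch, assuming edges are stored as (regulator, target) tuples (an illustrative representation, not the paper's):

```python
import math

def grn_metrics(true_edges, pred_edges, n_genes):
    """Sensitivity, specificity, accuracy and MCC for a predicted GRN,
    treating every ordered gene pair (i, j), i != j, as a possible edge."""
    TP = FP = FN = TN = 0
    for i in range(n_genes):
        for j in range(n_genes):
            if i == j:
                continue
            t, p = (i, j) in true_edges, (i, j) in pred_edges
            if t and p: TP += 1
            elif not t and p: FP += 1
            elif t and not p: FN += 1
            else: TN += 1
    sn = TP / (TP + FN) if TP + FN else 0.0
    sp = TN / (TN + FP) if TN + FP else 0.0
    acc = (TP + TN) / (TP + TN + FP + FN)
    denom = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    mcc = (TP * TN - FP * FN) / denom if denom else 0.0
    return sn, sp, acc, mcc
```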
This inherent difficulty is a limitation of our proposed method.Various researchers have already proposed numerous techniques to solve the reverse-engineering problem of GRNs from temporal genetic expression data in the domain of computational biology and bioinformatics. It is imperative to enhance the accuracy of the inference algorithms as well as to reduce the number of incorrect predictions within a plausible runtime.The RNN formalism is a very popular candidate for inferring GRNs from microarray gene expression data regarding biological plausibility and computational efficiency. In this work, we have implemented the decoupled RNN model, where the regulatory parameters of each gene are calculated independently in separate search instances. We incorporated a hybridized technique in which two metaheuristics are paired to obtain the RNN-based GRN model with a smaller search space, less computational complexity, and more accuracy. In this paper, the hybridized CS-FPA is proposed for reconstruction of GRNs, where FPA is used to train the RNN parameters and CS is introduced to select the best combination of genes responsible for modifying the expression of each gene. Moreover, the maximum connectivity of each gene is restricted to Imax, as it is observed that real-life GRNs are sparsely connected; that is, very few genes participate in regulations.To prove the efficiency of this inference algorithm, it is applied to a benchmark problem of an artificial network with 30 genes, with and without noise. With the use of fewer data points, the CS-FPA-based RNN can infer the network with very high accuracy. However, in the presence of noise, the number of FPs increases significantly, but it can still identify all TPs (inferred in the noiseless scenario) with good accuracy. It is also found that noise robustness is better than that of other existing methods for artificial data. In the instance of the E. coli dataset, it can detect only four true regulations and includes some false regulations.Another important observation apparent from our results is that the proposed methodology can reconstruct large artificial GRNs more efficiently than real-life GRNs. However, this needs further study on the different networks available to us, and the existing boundary of our work validates this observation. In the future, various regularization techniques, the inclusion of prior knowledge about GRNs, and parallel computing methods may be utilized to improve the accuracy and speed further.We have included the following in the supplementary materials: (i) the noiseless dataset of the artificial network, used for training, and the corresponding results; (ii) the dataset of the artificial network with 5% noise added, and the corresponding results; (iii) the dataset of the E. coli DNA SOS repair network, and the corresponding results; (iv) all the codes developed by the authors."}
+{"text": "Being able to infer one-way direct connections in an oscillatory network such as the suprachiasmatic nucleus (SCN) of the mammalian brain using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series while accurately distinguishing between direct and indirect connections. In this paper an adaptation of Granger Causality is proposed that allows for inference of circadian networks and oscillatory networks in general, called Adaptive Frequency Granger Causality (AFGC). Additionally, an extension of this method is proposed to infer networks with large numbers of cells, called LASSO AFGC. The method was validated using simulated data from several different networks. For the smaller networks the method was able to identify all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells the method shows excellent performance in identifying true and false connections; this is quantified by an area under the ROC curve (AUC) of 96.88%. We note that this method, like other Granger Causality-based methods, is based on the detection of high-frequency signals propagating between cell traces. Thus it requires a relatively high sampling rate and a network that can propagate high-frequency signals.To understand how complex behaviors arise, we must learn how populations of elements communicate with each other to produce coherent outputs. For example, in the brain, neurons dynamically interact with each other to represent, store and respond to the physical world in real time. Many methods have been developed to discriminate and map functional connections \u20135. 
Synchrony among circadian cells is essential for daily rhythms in physiology and behaviors including sleep-wake, hormone release and metabolism. The suprachiasmatic nucleus (SCN) of the hypothalamus is comprised of a population of approximately 20,000 intrinsically circadian cells that synchronize their daily rhythms to each other. This is an excellent system for developing algorithms for mapping functional connectivity. Enough is known about how rhythms are generated and synchronized that we can use computational methods to simulate the network. The population of cells is small enough that we can compute their performance over time. There is strong interest in identifying the network topology within the SCN, as disruptions in the network may underlie diverse behaviors from seasonal reproduction to jetlag to fragmented sleep in aging \u201311.Approaches exist for inferring network connections from spike train data, but although there is a clear difference in appearance between synchronized and unsynchronized raw cell traces, inferring the underlying connections from oscillatory traces is far from straightforward.Traditionally, Granger Causality has been a popular choice for inference of networks in general \u201322. However, it cannot be applied directly to nonstationary oscillatory data.We have developed a technique for the inference of functional networks which is based on Granger Causality and tailored specifically to nonlinear oscillatory systems such as the circadian system, which exhibit nonconstant frequency. The technique, called Adaptive Frequency Granger Causality, or AFGC, involves a series of manipulations that make circadian gene expression data a viable candidate for Granger Causality. Unlike Granger Causality-based techniques for spike train data, AFGC does not require stationary data, making it suitable not only for spike train data but for oscillatory data as well. 
This was previously not possible in the literature.We test our method on known networks of circadian cells modeled with a stochastic version of the LeLoup and Goldbeter model. Granger Causality is a statistical technique that has been used successfully in the past for inferring gene networks via time course data. The model tracks per mRNA (M), cytoplasmic PER protein (PC), and nuclear PER protein (PN); the relationships between these expression levels are given for the ith cell by a system of differential equations in which k1, k2, Ks, vd, vm, K1, and Kd are constants, and the coupling between cells is modeled through the deviation of each cell's mRNA level Mi from the population mean. Since the resulting vector autoregressive (VAR) model is linear in its parameters, it can be estimated by ordinary least squares; the nested model differs from the full model only in omitting the lagged terms of one candidate cell, and its residual ut(i) is compared against that of the full model.The algorithm for reconstructing a functional circadian network from time series data is as follows. Data: K gene expression time series, one per cell. Result: a graph corresponding to the cell network. Initialize a graph G with a vertex for each of the K cells and no edges. For each cell i: estimate the coefficients of the full model and use them to estimate its standard error; then, for each cell j, estimate the coefficients of the nested model that omits cell j, use them to estimate the standard error of the nested model, and compute the F-statistic; if the F-statistic exceeds the significance threshold, insert the directed edge into G, i.e. cell j influences cell i. Here Kp + 1 = 101. As the number of data points is increased, we see that the area under the ROC curve (AUC) increases and thus the accuracy of our inference is higher, as we would predict. For parameter estimation where the number of parameters is large relative to the number of observations, parameter estimates vary highly and can lead to misleading inference. 
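The full-versus-nested comparison in the loop above is a standard conditional Granger test on a VAR(p). A sketch, assuming `data` is a cells-by-time array and using ordinary least squares as in the text (illustrative, not the authors' implementation):

```python
import numpy as np

def granger_f(data, target, source, p=5):
    """F-statistic for 'source Granger-causes target' in a VAR(p):
    compare the full OLS regression of the target on p lags of every
    cell with a nested model that omits the source's lags."""
    K, T = data.shape
    Y = data[target, p:]  # responses at times p..T-1

    def design(exclude=None):
        cols = [np.ones(T - p)]  # intercept
        for j in range(K):
            if j == exclude:
                continue
            for lag in range(1, p + 1):
                cols.append(data[j, p - lag:T - lag])
        return np.column_stack(cols)

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return float(r @ r), X.shape[1]

    rss_full, k_full = rss(design())
    rss_nested, _ = rss(design(exclude=source))
    df1, df2 = p, (T - p) - k_full
    return ((rss_nested - rss_full) / df1) / (rss_full / df2)
```

A large F-statistic (compared against an F(df1, df2) threshold) corresponds to inserting the directed edge from `source` to `target`.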
For instances where the number of observations is small relative to the number of parameters being estimated, the group LASSO method has been used. We partition the set of parameters into K groups {1, \u22ef, K} such that the parameters corresponding to the jth cell fall into one group, and let Y be the vector of gene expression observations. For statistical inference, the variance-covariance matrix of the parameter estimates is computed with a penalty parameter \u03b1, which we set to 100. Observations of our system are collected once every minute. For larger models where more observations are needed, we concatenate multiple differenced time series from a single realization, using of course only the portions of the time series where mRNA is increasing. We then treat this concatenation as a single time series.We demonstrate the effectiveness of our methodology by applying it to the inference of various circadian networks simulated stochastically; the model we use is a stochastic, coupled version of the deterministic model described above. The model order was chosen to be 5 for all VAR models, reflecting the information-transfer lag of 5 minutes between cells that we observed empirically in cross-correlation functions. Our first example is the three-cell model in which all cells are lined up and each cell influences the cell to its right, so that there is also an indirect connection between cell m0 and cell m2 in the model. We seek to infer only the direct connections. We computed p-values for the significance of a causal connection from each cell to every other cell on simulated data from the three-cell model, using six hours (360 observations) of time series data. The p-values of the true connections (from m0 to m1 and from m1 to m2) are several orders of magnitude smaller than the p-values of all other connections. Our inference methodology is able to perfectly capture the structure of this network, including not falsely naming the indirect connection as direct. 
There was no need to use the group LASSO to improve the results.To further test our methodology when indirect connections are present, we introduce a 4-cell model with a bypass. Again, the true connections (m0 to m1 and m2, m1 to m2, and m2 to m3) are found to have significance values several orders of magnitude lower than those of other connections, and once again there was no need to use the group LASSO to improve the results.We next introduce a 10-cell dual-chain network. Rather than examining a table of p-values, which can be cumbersome, we summarize the results graphically. We also introduce a twenty-cell model that includes two groups of cells that are all connected to each other, with four randomly selected cells from one group influencing four randomly selected cells from the other group. Lastly, we simulate a 100-cell scale-free network created using the Barabasi-Albert model. We also generated a 100-cell small-world network using the Watts-Strogatz \u03b2 small-world algorithm, with \u03b2 = 0.1 and an average node degree of two; for this network, our algorithm was able to achieve an AUC of 81.23%.Because AFGC is a Granger Causality based approach, it is useful for data collected at high frequencies. Although in our results we sampled our systems every minute, we tried our method on lower-frequency data from the 10-cell dual-chain model to better characterize its effectiveness at lower sampling rates. For optimal results on SCN recordings, we suggest applying AFGC only when samples are taken at least once every five minutes and ideally once every minute. Applying AFGC to current experimental data, which is typically sampled once every thirty minutes to an hour, will most likely lead to erroneous results. If and when the sampling rate of experimental recordings reaches five minutes or faster, we anticipate AFGC will provide accurate network inference results. 
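The AUC figures quoted above summarize how well true edges outrank false ones when candidate connections are scored (for example by the Granger F-statistic). A minimal rank-based AUC, with the edge-score dictionary as an assumed representation:

```python
def edge_auc(scores, true_edges):
    """Area under the ROC curve for ranking candidate edges by score
    (e.g. the F-statistic or -log p-value).  Computed as the probability
    that a randomly chosen true edge outranks a randomly chosen false
    one, counting ties as half."""
    pos = [s for e, s in scores.items() if e in true_edges]
    neg = [s for e, s in scores.items() if e not in true_edges]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```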
As AFGC uses only a portion of the time series for each day, we also recommend applying AFGC to longer recordings. Two weeks of recording is plenty for most cases, even when only five hours of recording time is used per day, assuming a sampling rate of at least once every five minutes.In general, for optimal results the sampling frequency can vary. We suggest looking at cross-correlation plots and validating AFGC on simulated data to see where the cut-off for high-frequency noise is for each particular application.As experimental recordings often contain measurement noise or are able to record only proxies for mRNA, such as PER2::LUC, we characterized our method\u2019s robustness to measurement noise by running the method on data generated from our 10-cell dual-chain model with varying levels of noise. In our experiments, all results were obtained by applying AFGC to time series of PER mRNA. This is significant because all coupling between cells in our model is facilitated through the respective PER mRNA levels of each cell. In our model, PER protein in the cytoplasm is a proxy for PER mRNA in a sense, because rising levels of PER mRNA lead to rising levels of PER protein. Thus we would expect these two time series to be approximately correlated in the same cell. This however does not imply that AFGC can recover networks from PER protein data when coupling is exclusively facilitated through PER mRNA. The reasoning is that for our particular model, high-frequency noise is damped in the relationship between PER protein and PER mRNA, thus information crucial to AFGC is lost. More specifically, noise in transcription is damped during translation; noise in the protein arises mostly from the translation process itself, which explains the poor performance we observed when inferring from protein traces.In general, the usefulness of a proxy will depend on how much incoming high frequency noise the proxy filters out. This is highly model dependent. 
Of course, we expect that in real world scenarios coupling happens through multiple channels and is highly complex. We have shown that AFGC can be highly successful in network inference when applied to signals that are directly responsible for coupling.For all results, we selected portions of the synchronous time series where PER mRNA was rising. These portions were usually at least three hours in length, but often up to seven. Time series need not be synchronous in order for AFGC to work. As long as the linear rising portion of the oscillations match for at least some significant time chunk, AFGC will provide fruitful results. The AUC of 80.72% obtained on the 10 cell network was obtained using only five hours of observations. Thus even if the oscillations contained only one hour of overlapping rising portions, it would only take five days to obtain enough data to conduct that inference.For larger networks, such as the 100 cell small world network we simulated, lack of synchrony can be more limiting because more sampling is needed and thus recording must go on for longer. In the case of the small world network, 60 hours of observations were needed to obtain an AUC of 81.23%. This would amount to 60 days of observations if only one hour is extractable every day, which unfortunately rules out many experimental recordings. In cases where there are groups of oscillations that exhibit large phase differences from other groups of oscillations or they are not in synchrony at all with other oscillations, we recommend conducting AFGC separately on the different groups. In these cases we can at least extract network topology within the group and assume weak coupling between groups.Granger Causality has proven to be an effective method for detecting direct causality in multivariate time series but is applicable only when data meets certain assumptions. 
These assumptions include, but are not limited to, linearity of the system, normality of noise, and stationarity of time series. Furthermore, the number of observations in the time series must be large relative to the number of cells in a system.We have proposed a methodology for application of Granger Causality to circadian data, for the detection of functional networks. This technique is able to accommodate the assumptions required by Granger Causality through the use of approximation and differencing techniques. The technique works by first selecting a specific subsection of each cycle from each of the oscillatory time series. The subsections are then differenced and spliced to form stationary processes of equal lengths. Vector autoregressive models are then fit to these stationary time series and tests are conducted on parameters to assess Granger Causality and answer the question at hand.We also showed a way to improve the results of this technique when the number of time course observations is small relative to the number of cells in a system. This involves penalizing parameter estimates in accordance with the group LASSO methodology. The high level of accuracy displayed by our method on simulated circadian networks provides encouraging evidence that one-way relationships between circadian cells in the SCN can be detected from time course gene expression data.Although our analysis provides an accurate way of detecting direct one-way connections between cells in simulated data, it remains to be seen how the method performs on large sets of real biological data. Granger Causality relies on analysis of noise propagation, thus it is a necessary assumption that high frequency noise in cell traces indeed propagates between cells that are connected. We chose a coupling parameter in our model to ensure propagation of noise in simulations given the coupling mechanism. 
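The select-difference-splice step of the pipeline can be sketched as follows; the rising-phase windows are assumed to be given as index pairs (in practice they would be detected from the oscillation), and the function name is illustrative.

```python
import numpy as np

def afgc_preprocess(traces, windows):
    """AFGC-style preprocessing (a sketch): from each cell's oscillatory
    trace, keep only the chosen rising-phase window of each cycle,
    first-difference it to remove the approximately linear trend, and
    splice the pieces into one (approximately stationary) series per
    cell.  `windows` is a list of (start, stop) index pairs assumed to
    mark the rising portions common to all cells."""
    out = []
    for trace in traces:
        pieces = [np.diff(trace[a:b]) for a, b in windows]
        out.append(np.concatenate(pieces))
    return np.vstack(out)
```

The resulting cells-by-time array is what the VAR models would then be fit to.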
This was to illustrate how well Granger Causality is able to achieve the task of network inference when the propagation exists. Although the mathematical justification of our method relied on the particular form of the coupling mechanism, our method will work under any coupling mechanism that allows for high-frequency noise propagation. It is in fact unknown whether high-frequency noise propagates between cell traces in vivo. We also note that because AFGC is solely reliant on noise propagation, it is robust to minor phase differences between cells. Phases need not be aligned exactly, only closely enough that their upswings in mRNA coincide for some portion of time, since that is when noise best propagates."}
+{"text": "Scientific Reports 5: Article number: 11144 10.1038/srep11144; published online: 06052015; updated: 11192015. In this Article, the legends of Figure 5 and Figure 6 are incorrect.In Figure 5:\"Phylogenetic analysis of RPSA in insect species. Neighbor-joining method was used to construct the phylogenetic tree. Bootstrap values with 1000 trials are indicated on branches.\"should read:\"Quantification of the expression level of fruitless gene in ovary and testis of P. americana with real-time PCR. The expression level of this gene in ovary was set as 1. All the data collected from the real-time PCR analyses were shown as averages \u00b1 SE; P < 0.01.\"In Figure 6:\"Quantification of the expression level of fruitless gene with real-time PCR. The expression level of this gene in female P. americana was set as 1. All the data collected from the real-time PCR analyses were shown as averages \u00b1 SE; P < 0.01.\"should read:\"Quantification of the expression level of RPSA gene with real-time PCR. The expression level of this gene in female P. americana was set as 1. All the data collected from the real-time PCR analyses were shown as averages \u00b1 SE; P < 0.01.\""}
+{"text": "In recent years Singular Spectrum Analysis (SSA) has been used to solve many biomedical issues and is currently accepted as a potential technique in quantitative genetics studies. Presented in this article is a review of recently published genetics studies which have taken advantage of SSA. Since Singular Value Decomposition (SVD) is an important stage of this technique which can also be used as an independent analytical method for gene expression data, we also briefly touch upon some areas of the application of SVD. The review finds that at present the most prominent area of applying SSA in genetics is filtering and signal extraction, which shows that SSA can be considered a valuable aid and promising method for genetics analysis.SSA has already transformed itself into a standard tool in the analysis of biomedical, mathematical, geometrical and several other time series. The emergence of SSA is usually associated with the work by Broomhead in 1986. The main advantages of the SSA technique in the field of genetics can be attributed to its signal extraction and filtering capabilities. For example, microarray is a very useful method for acquiring quantitative data in genetics and researchers today are conducting most of their studies using this method. The main advantage of microarray is the capability of studying thousands of genes simultaneously. However, microarray data usually contain a high level of noise, which can reduce the performance of the results.This article categorises and summarises almost all recently published articles associated with the application of SSA in genetics.Presented below is a short description of the SSA technique; in doing so we mainly follow [25]. Consider a set of genetics observations in a series YN = (y1, \u2026, yN) with length N. After choosing a window length L, where 2 \u2264 L \u2264 N \u2212 1, we construct the L-lagged vectors Xj = (yj, \u2026, yj+L\u22121)T, j = 1, \u2026, K, where K = N \u2212 L + 1. 
Define the trajectory matrix X = [X1 : \u2026 : XK]; X is our multivariate data with L characteristics and K observations. The columns Xj of X are the lagged vectors, positioned in the L-dimensional space RL. Consider the matrix XXT: the SVD of XXT gives us the collection of L eigenvalues (\u03bb1 \u2265 \u22ef \u2265 \u03bbL \u2265 0) and the corresponding eigenvectors U1, \u2026, UL, where Ui is the normalised eigenvector corresponding to the eigenvalue \u03bbi. A group of r (with 1 \u2264 r < L) eigenvectors determines an r-dimensional hyperplane in the L-dimensional space RL of vectors Xj. If we choose the first r eigenvectors U1, \u2026, Ur, then the squared L2-distance between the resulting projection and X is equal to the sum of the remaining eigenvalues, \u03bbr+1 + \u22ef + \u03bbL. The L-dimensional data are projected onto this r-dimensional subspace and the final diagonal averaging gives an appropriate approximation of the original one-dimensional series.The remainder of this paper is organised as follows. In the following sections we present a review of papers involving the application of SSA and SVD in genetics.In this section we identify existing applications of SSA for signal extraction and noise filtering in genetics. The first such application was reported in 2006, where SSA was used for signal extraction of the Bicoid (Bcd) protein profile in Drosophila melanogaster's gene expression data. In addition, the activation of the hunchback (hb) gene in response to different concentrations of the Bcd gradient was studied. Recently, a modified version of SSA was examined for filtering and extracting the bcd gene expression signal.SSA based on minimum variance mainly relies on the concept that by dividing the given noisy time series into the mutually orthogonal noise and signal-plus-noise components, an enhanced estimation of the signal can be achieved. 
Thus, after performing SVD and adapting the weights of the obtained singular components, an estimation of the Hankel matrix X is achieved, which in turn corresponds to a filtered series.A short description of SSA based on minimum variance is given below; for more details, see the referenced work. Let us begin with the Singular Value Decomposition (SVD) of the trajectory matrix X, where W is the diagonal matrix of the weights to be determined; the SVD of the matrix X can then be written in weighted form. The issue now is in selecting the weight matrix W. If we represent the SVD of the Hankel matrix related to the signal as S, then by considering different criteria in choosing this matrix, different estimations of S can be achieved. The LS estimate of S is the currently widely used approach to selecting the weight matrix W. This approach is based on the idea of removing the noise subspace while keeping the noisy signal uncorrelated in the signal-plus-noise subspace. The accuracy of this estimator is mainly dependent on the estimation of the signal rank r, since selecting singular values in LS follows a binary approach; on the other hand, this estimator does not require any assumptions. The MV estimate of S was proposed by Hassani. Let us now consider the weight matrices W of the LS and MV estimates: the singular vectors U1 and V1 of the LS and MV estimates are the same, but the singular values are different. As it appears, SSA-MV yields more promising results for Bcd signal extraction.Among many applications of the microarray technique, the study of rhythmic cellular processes has been considered an important one. Rhythmic cellular processes are mainly regulated by different gene products and can be measured using multiple DNA microarray experiments. Given a group of gene experiments over a time period, a time series of gene expression related to the rhythmic behaviour of a specific gene is obtained. 
Several studies on extracting regulatory information from time-series microarray data and on the detection of cyclic and non-uniformly sampled short time series of gene expression can be found in the literature. In 2008, Du et al. applied SSA to microarray results for extracting the dominant trend from noisy expression profiles of P. falciparum and reducing the effect of noise, which was a considerable achievement in terms of detecting 777 additional periodic genes in comparison to earlier results. Four subsequent research works followed this procedure and successfully improved the capability of detecting periodicity from 60% to 80%.\u2022In the first step, SSA is used for the reconstruction of each expression profile. For this aim, only those expression profiles with the sum of the first two eigenvalues over the sum of all eigenvalues greater than 0.6 are selected for reconstruction.\u2022The second step is devoted to the calculation of the AR spectrum, the frequency fi at the peak value point, and the ratio of the power in the Regions of Interest (ROI) of the reconstructed profiles achieved in the first step. It should be noted that to obtain the ratio between the power of the signal within the frequency band and the total power we follow S = poweri/powertotal.\u2022If the obtained power ratio S is larger than 0.7, the corresponding profile is classified as periodic.Presented below is a summary of the theory combining SSA and AR. Consider a g \u00d7 N matrix (g \u226b N), where g is the number of gene expression profiles and N corresponds to the number of samples. A time series of gene expression YN = (y1, \u2026, yN) can then be written in the form of an AR(p) model, so that the gene expression is recognised as a linear system. 
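The two-step periodicity screen above can be sketched as follows. As a simplification, an FFT periodogram stands in for the AR spectrum, and a fixed frequency band plays the role of the ROI; the 0.6 eigenvalue-share and 0.7 power-ratio thresholds follow the text, while the band edges and function name are illustrative assumptions.

```python
import numpy as np

def classify_periodic(y, L=None, share_thresh=0.6, power_thresh=0.7,
                      band=(0.02, 0.08)):
    """Two-step screen: (1) keep profiles whose first two SSA eigenvalues
    carry more than `share_thresh` of the total eigenvalue sum; (2) call
    the profile periodic if the fraction of spectral power inside `band`
    exceeds `power_thresh` (FFT periodogram used in place of the AR
    spectrum for simplicity)."""
    N = len(y)
    L = L or N // 2
    K = N - L + 1
    X = np.column_stack([y[j:j + L] for j in range(K)])  # trajectory matrix
    s = np.linalg.svd(X, compute_uv=False)
    lam = s ** 2                                         # SSA eigenvalues
    if lam[:2].sum() / lam.sum() <= share_thresh:
        return False
    freqs = np.fft.rfftfreq(N)
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2
    roi = (freqs >= band[0]) & (freqs <= band[1])
    S = power[roi].sum() / power.sum()                   # S = power_i / power_total
    return S > power_thresh
```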
Using the forward prediction linear system, the AR coefficients can be estimated; ultimately the estimated \u03b1i are obtained using the Yule\u2013Walker equations. As mentioned above, the SSA technique is often applied prior to spectral analysis, as a filtering method, in order to achieve better accuracy. This is done by ignoring the small singular values in the reconstruction stage, thereby removing the noise component from the original noisy signal.Although SVD is a stage of the SSA technique, it has also been used independently as a very useful and applicable tool for analysing microarray data.In two-dimensional SSA (2D-SSA), two window lengths are chosen, whilst in univariate SSA we only require one window length. Note that if L2 = 1, then 2D-SSA is equivalent to SSA; by choosing L2 = M, the interaction among different series is taken into account. The 2D-SSA approach has been used to process two-dimensional scalar fields. It is worth mentioning that gene expression can be traced either along just the anterior\u2013posterior (AP) axis or along both the AP and dorso-ventral (DV) axes, the latter case being the subject of 2D-SSA. The data points used in the 2D-SSA study correspond to the intensity levels at positions along both the AP and DV axes and are treated as a sequenced series.Bcd is a transcriptional regulator of downstream segmentation genes, and alteration in the Bcd gradient shifts the downstream patterns. The regulation of the hb gene by the Bcd protein gradient in the anterior part of Drosophila was studied by modelling the noise observed in hb regulation using a chemical master equation approach. For solving this model, the MesoRD software package was used, which mainly follows a stochastic approach. It was found that Hb output noise is mostly dependent on the transcription and translation dynamics of its own expression, and that the multiple Bcd binding sites located in the hb promoter also improve pattern formation. In the noise measure used in that study, m indicates the positions. 
In 2012, Golyandina et al. used 2D-SSA to measure the between-nucleus variability (noise) seen in the gradient of Bcd in Drosophila embryos expressing GFP-tagged bcd (bcd-GFP); this measurement is obtained for the activated region (15–45% EL). It should be noted that using SSA for signal extraction gives the ability to use both dimensions (AP and DV), which leads to a more reliable result, as more detailed information regarding the data is considered by the technique. The aim of this paper was to review the applications of SSA in genetics studies. Previous research has shown that the SSA technique is very effective in signal extraction and noise filtering in genetics data. Theoretical developments presented here, such as two-dimensional SSA and SSA based on minimum variance, have also enabled researchers to achieve enhanced results and a better estimation of the extracted signal. As a non-parametric method, SSA has given very promising results that are more reliable than those obtained by other methods. However, SSA has not yet revealed its full potential; areas like optimising the embedding dimension and change-point detection are still open to pursue."}
+{"text": "Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0-, 8-, and 120-hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes. Long-term alcohol use and dependence alter brain function and are linked to persistent changes in gene expression. Genomic approaches have successfully identified alcohol-mediated changes in gene expression in animal models of alcoholism. We defined global gene expression profiles in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver of C57BL/6J mice exposed to 4 cycles of intermittent ethanol vapor. Tissue was harvested at 3 time points following the last vapor treatment to assess time-dependent changes in gene expression.
We identified time-dependent gene clusters in AMY and NAC that were enriched with astrocyte, microglia, and oligodendrocyte cell types. These sets of genes were primarily associated with inflammatory response function. In contrast, the PFC was enriched with neuronal genes and displayed a greater diversity in directional expression changes, suggesting that the PFC is under greater transcriptional regulatory control than the AMY and NAC. All procedures were approved by the Medical University of South Carolina Institutional Animal Care and Use Committee and adhered to NIH Guidelines. The Medical University of South Carolina animal facility is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care. Chronic intermittent ethanol vapor exposure (or air) was delivered in Plexiglas inhalation chambers, as previously described. Frozen brains were placed in a plastic mold containing Optimal Cutting Temperature compound (OCT) and maintained in a mixture of powdered dry-ice and isopentane. A Microm HM550 cryostat was used for sectioning at a thickness of 300 μm. Micropunches were collected from amygdala, nucleus accumbens, and prefrontal cortex. Sample sizes were determined with a web-based tool (http://bioinformatics.mdanderson.org/MicroarraySampleSize/). Aliquots of labeled cRNA were sent to the Yale Center for Genome Analysis where they were hybridized to Illumina MouseRef-8 v2 Expression BeadChips according to manufacturer protocols. Since each BeadChip contains 8 independent arrays, samples were hybridized in a group counter-balanced format to minimize batch effects. Each array was hybridized with material obtained from a single animal; thus, 192 arrays were included in the analysis. Each expression array contains approximately 25,600 transcripts representing over 19,100 unique genes. Transcript abundance was measured by fluorescent intensity after scanning.
Microarray data have been submitted to the NCBI Gene Expression Omnibus (GEO) (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE60676. A web-based tool was used to determine the number of arrays required to detect meaningful statistical changes with a power of 0.8. Unless otherwise noted, the data were analyzed using open source software packages from Bioconductor (http://bioconductor.org) designed for the statistical language R (http://www.r-project.org), together with Microsoft Excel. The data were first filtered to include only genes with a detection p-value of ≤0.05 that were present on >80% of the arrays. Data pre-processing included a variance stabilization transformation. Weighted gene correlation network analysis (WGCNA) was used to investigate the modular structure of the data at a gene network level; the general framework of WGCNA has been described in detail elsewhere. Ingenuity Pathway Analysis (www.ingenuity.com) was used to identify overrepresented functional pathways of known gene networks and biological functions. Hypergeometric tests were used to evaluate modules and individual data sets for over-representation of cell type-specific genes. Datasets for neurons, astrocytes, microglia, and oligodendrocytes were obtained from previously published work. Gene ontology terms were identified using the Database for Annotation, Visualization and Integrated Discovery (DAVID). The Bioconductor package maSigPro was used for time series analysis. Historically it has been standard practice to verify a subset of microarray-generated gene expression changes using qRT-PCR. However, we did not include such confirmation in the present study because we have used the Illumina platform (including the particular array used in this study) extensively and validated expression differences with independent qRT-PCR experiments in the past.
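The hypergeometric over-representation test mentioned above can be sketched in a few lines. The function below is a generic sketch, not the study's actual pipeline, and the gene lists are hypothetical placeholders:

```python
from math import comb

def enrichment_p(module_genes, marker_genes, background):
    """Hypergeometric over-representation p-value P(X >= k):
    probability of seeing at least the observed number of marker genes
    in a module of this size, drawn at random from the background."""
    bg = set(background)
    module = set(module_genes) & bg
    markers = set(marker_genes) & bg
    k = len(module & markers)                      # observed overlap
    M, n, N = len(bg), len(markers), len(module)   # background, marker list, module sizes
    tail = sum(comb(n, x) * comb(M - n, N - x) for x in range(k, min(n, N) + 1))
    return tail / comb(M, N)

# toy example with hypothetical gene lists: a 10-gene module from a
# 100-gene background, 6 of whose genes are cell-type markers
background = [f"g{i}" for i in range(100)]
markers = background[:12]                          # 12 marker genes
module = background[:6] + background[50:54]        # 6 markers + 4 non-markers
p = enrichment_p(module, markers, background)
print(p < 0.001)                                   # the module is significantly enriched
```

A module sharing no genes with the marker list gets p = 1, since the tail sum then covers the whole hypergeometric distribution.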
The level of correspondence between the microarray and RT-PCR results exceeds 80%. Time- and brain region-dependent changes in gene expression were detected in response to CIE vapor exposure in mice. These procedures do not result in overt behavioral signs of withdrawal at the exposure levels used in this study. A similar number of genes were detected in each brain region, but fewer genes were detected in liver. CIE vapor elicited pronounced gene expression changes in all brain regions, as well as in liver. For all tissues, there was distinct clustering of gene networks at 120-hours compared to other times, and this time point had a greater effect than treatment on sample clustering due to the lack of expression changes (data not shown). WGCNA identified 34–45 modules in brain and 24 modules in liver, with module sizes ranging from 78–1,412 transcripts. The Database for Annotation, Visualization and Integrated Discovery (DAVID) was used for over-representation analysis and to evaluate the biological function of each module. These results were further substantiated by enrichment analysis using cell-specific and functional gene lists from previously published work. We also examined the overlap of cell-specific differentially expressed genes (FDR≤0.05) at the 0- and 8-hour time points. Microglial genes were the most highly conserved group of cell type-specific genes across these time points for all brain regions. To further investigate ethanol-responsive modules, we used an effect-size based approach and determined the direction and magnitude of ethanol-induced changes (adjusted p≤0.05) for each coexpression module. Mean t-values were calculated for ethanol-responsive modules identified by WGCNA at each time point. The magnitude and direction of change were relatively consistent in AMY and NAC at 0- and 8-hours.
A time series analysis was performed using the Bioconductor package maSigPro to identify clusters of differentially expressed genes with similar temporal profiles. The goal of the current study was to determine time-dependent transcriptional changes in brain and liver that result from administration of repeated ethanol vapor. This paradigm has been shown to escalate voluntary drinking in both rats and mice and represents a rodent model of dependence. A multi-level analysis approach was utilized which included differential expression, network analysis, cell-type specificity, and time-series clustering analyses. These approaches include computational algorithms that enhance the analysis of gene coexpression networks existing in diverse expression datasets. In addition, since the brain transcriptome is organized into gene modules associated with major cell classes and specific synaptic and cellular functions, we classified modules by cell type. One immune-related cluster contained Alox5ap, B2m, Cd74, Fcgr2b, Hla-A, Pglyrp1, Psmb9, and Spp1. Interestingly, B2m is known to be important for immune responses and has been shown to be alcohol responsive in multiple studies. One study found that manipulation of B2m reduced ethanol consumption in the limited access two-bottle choice test for ethanol intake, supporting a hypothesis that genes within this cluster may play a role in mediating voluntary drinking. In addition, the innate immune cytokine Cd74 was a member of this cluster. The expression of this gene is rapidly induced by alcohol and has been linked to the progression of cytokine responses during alcohol withdrawal. A time-series analysis (Bioconductor package maSigPro) was performed to identify clusters of genes with similar expression patterns. Each brain region displayed distinct clusters overlapping with WGCNA modules enriched in differentially expressed and cell type-specific genes. In the AMY, one such cluster included Slc1a2, which is highly expressed in microglia, suggesting a role for Slc1a2 in alcohol intake and dependence.
As in the AMY, the cytokine Cd74 was also identified in NAC, suggesting that it may have a role in innate immune responses in multiple brain regions. Other genes in this cluster that are differentially expressed in mouse models and human alcoholics include Htra1 and Il17rc. Tsc22d3 and Gata-2 are alcohol-responsive members of this gene set but are not members of inflammatory response pathways. Tsc22d3 functions as a transcriptional regulator and is differentially expressed in human alcoholics; Tsc22d3 may be associated with neuroplastic changes in response to drugs of abuse, including ethanol in mouse striatum. In NAC, a cluster of genes overlapped with modules enriched with microglia and oligodendrocytes. Bdnf has a well-documented role in synaptic plasticity, and Bdnf is significantly down-regulated in homogenized medial PFC tissue; decreases in Bdnf expression increase ethanol-drinking behavior, supporting a role for Bdnf in the modulation of ethanol intake. Alcohol consumption is known to be modulated by circadian-related cellular function, and Adcyap1 was identified as a differentially expressed member of an enriched neuronal module and may represent a regulatory mechanism involved in the time-dependent expression changes. In contrast to the glial signature of many of the clusters, one PFC gene cluster was enriched with neuronal genes. S1 Fig: A Microm HM550 cryostat was used for sectioning at a thickness of 300 μm; micropunches were collected from amygdala, nucleus accumbens, and prefrontal cortex. (PDF) S2 Fig: A two-step regression approach was used to identify clusters of differentially expressed genes with similar expression patterns across time. Each plot shows the hierarchical clustering (clusters = 4) of average expression profiles by time in the amygdala (AMY).
The dots represent average expression values for each gene in the time series. (PDF) S3 Fig: The same two-step regression approach and hierarchical clustering (clusters = 4) of average expression profiles by time, shown for the nucleus accumbens (NAC). (PDF) S4 Fig: The same, shown for the prefrontal cortex (PFC). (PDF) S5 Fig: The same, shown for the liver. (PDF) S1-S5 Tables (XLSX)."}
+{"text": "Here, we used metabolic RNA labeling and comparative dynamic transcriptome analysis (cDTA) to derive mRNA synthesis and degradation rates every 5\u00a0min during three cell cycle periods of the yeast Saccharomyces cerevisiae. A novel statistical model identified 479 genes that show periodic changes in mRNA synthesis and generally also periodic changes in their mRNA degradation rates. Peaks of mRNA degradation generally follow peaks of mRNA synthesis, resulting in sharp and high peaks of mRNA levels at defined times during the cell cycle. Whereas the timing of mRNA synthesis is set by upstream DNA motifs and their associated transcription factors (TFs), the synthesis rate of a periodically expressed gene is apparently set by its core promoter.During the cell cycle, the levels of hundreds of Saccharomyces cerevisiae mRNA provide a high\u2010quality data set that allows for the first time for a systematic analysis of the dynamics of mRNA synthesis and degradation during the cell cycle. With the use of a new dynamic model that estimates changes in mRNA synthesis and degradation rates, we demonstrate that most periodically expressed transcripts show non\u2010random periodic changes in their degradation rates that lead to sharper and higher mRNA expression peaks. Our study provides the first evidence for variable mRNA degradation as a ubiquitous phenomenon that can shape periodic gene expression.Here we apply cDTA to synchronized et\u00a0al, bar1 strain of yeast . The entire time series experiment was performed in two biological replicates. Because labeled mRNA levels correlate well with mRNA synthesis rates . Correlations were consistently above 0.93. Strikingly, periodic expression already shows in the samples correlation structure. Samples taken at similar time points in the cell cycle have a higher correlation than samples taken at more distant time points in the cell cycle. 
This leads to a characteristic tri-band diagonal correlation structure, corresponding to the three cell cycles that we monitored. A principal component plot automatically places consecutive samples in a "cell cycle clock", a clock-wise spiral, demonstrating that most variation in the data (>74%) is due to periodic expression fluctuations. Second, by ignoring the time at which measurements were taken, we use all labeled and total measurements of an mRNA as replicates to calculate a high-precision estimate of its (cell cycle averaged) synthesis and degradation rate, and we compared these estimates with the most recent published estimates. To each labeled and total mRNA expression time course, MoPS assigns a likelihood ratio statistic that compares the best fit of a periodic expression curve to that of a non-periodic curve. Periodic expression is modeled by a dampened, deformed cosine wave using six parameters. The magnitude of expression changes is another criterion for improving the detection of periodic transcripts. The cell cycle length λ and the synchronization loss σ were estimated for each gene. The distribution of obtained cell cycle lengths λ sharply peaks at a median of 62.5 min, and the distribution of the synchrony losses σ has a median of 7 min. The detection of periodic genes strongly depends on the method, the experimental conditions, and the stringency cut-off that has been applied. We included MoPS in benchmark studies, and a cut-off value was chosen based on gold standard sets of periodically and non-periodically expressed genes, to control the false discovery rate at a 20% level. We sorted all periodically expressed genes by their synthesis peak time. Co-expressed genes were grouped by k-means clustering, and for each of the clusters we performed an XXmotif search for DNA motifs that potentially regulate the cell cycle.
Since there is no consensus set of TFs that regulate the cell cycle, we systematically screened for transcriptional regulators of periodic genes, matching the discovered motifs to known TF motifs with TOMTOM. We obtained a total of 50 DNA motifs that were associated with a total of 32 DNA-binding transcription factors (Supplementary Table 1). The top motif identified from a G1 cluster perfectly matched the known Mlu1 cell cycle box (MCB) motif. The distribution of the mean expression m of the periodic genes is comparable to that of all genes, with the exception of the left tail of weakly expressed genes. This is not surprising, because periodic genes fluctuate in their expression, which necessarily leads to a certain minimum mean expression level. The genes exclusively regulated by MBP1, though agreeing well in their timing, showed a remarkable diversity in their synthesis mean and amplitude. We distinguished genes with a perfect TATA box from TATA-less genes showing one or two mismatches compared to the TATA box consensus at the experimentally defined location where the transcription pre-initiation complex is formed. Although the differences are significant for both non-periodic and periodic genes, the effect is threefold stronger for periodic genes. Indeed, periodic genes with very high levels in total and labeled mRNA were almost exclusively found in the perfect TATA box group. Assuming that all copies of a transcript in an mRNA population share the same hazard of being degraded, the time course of the population is described by the differential equation (*) dT/dt = μ(t) − δ(t)·T(t), where μ(t) is the time-dependent synthesis rate and δ(t) is the time-dependent degradation rate for that population. Given μ(t) and δ(t), Equation (*) can predict the time course of total and labeled mRNA levels. Note that Equation (*) leaves one degree of freedom, the boundary condition on T. By setting T(0) to the total RNA level at time 0, the resulting solution T(t) to Equation (*) is the time course of the total RNA.
By letting T(t_j) = 0, the solution T(t), for t > t_j, is the amount of labeled RNA obtained after a (t − t_j) min labeling pulse starting at time t_j. Numerical and analytical solutions to Equation (*) are described in the Supplementary Information. We used Equation (*) to simulate how a peak in mRNA synthesis translates into total mRNA in different degradation rate scenarios. For example, the ten cyclins which are found as periodically expressed in our data have a mean peak shift of 0.8 min. The observed short delays between synthesis and total mRNA peaks in periodic transcripts are therefore incompatible with the assumption of constant degradation rates. We therefore compared the peak time in labeled and total mRNA for each periodic gene. Equation (*) translates a synthesis time course μ(t) and a degradation time course δ(t) into predictions of total and labeled mRNA; conversely, μ(t) and δ(t) can be estimated from a pair of observed labeled and total mRNA time courses, with the synthesis rate modeled as a sine function. Note that we did not use the smoothed synthesis rate estimate of MoPS, because MoPS aimed at the detection of periodic expression and did not take into account changes in mRNA degradation. Moreover, we wanted to exclude any model bias and avoid findings due to slightly biased model assumptions. The measurement error that determines the quality of fit was as in MoPS. The parameters were then fitted to the measured cDTA data by Markov Chain Monte Carlo. This enabled us for the first time to decompose cell cycle dependent mRNA expression into the processes of mRNA synthesis and degradation.
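Equation (*) can be explored numerically. The sketch below integrates dT/dt = μ(t) − δ(t)·T(t) with a simple Euler scheme and compares a constant degradation rate against one that peaks after the synthesis peak; the rate values, amplitudes, and phase shift are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def simulate_total(mu, delta, T0, t):
    """Euler integration of Equation (*): dT/dt = mu(t) - delta(t) * T(t)."""
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T[i] = T[i - 1] + dt * (mu(t[i - 1]) - delta(t[i - 1]) * T[i - 1])
    return T

lam = 62.5                                            # median cell cycle length from the text (min)
t = np.arange(0.0, 3 * lam, 0.1)                      # three cycles, 0.1-min steps
mu = lambda x: 10 + 8 * np.cos(2 * np.pi * x / lam)   # periodic synthesis, peaks at t = 0, lam, 2*lam
const_d = lambda x: 0.2                               # scenario 1: constant degradation (1/min)
var_d = lambda x: 0.2 + 0.04 * np.cos(2 * np.pi * (x - 15) / lam)  # scenario 2: degradation peaks later

T_const = simulate_total(mu, const_d, 50.0, t)
T_var = simulate_total(mu, var_d, 50.0, t)

def peak_delay(T):
    """Delay between the synthesis peak at t = 2*lam and the total-mRNA peak in the last cycle."""
    w = t >= 2 * lam
    return t[w][np.argmax(T[w])] - 2 * lam

print(peak_delay(T_var) < peak_delay(T_const))  # degradation rising after synthesis shortens the delay
```

With constant degradation the total-mRNA peak lags the synthesis peak by arctan(ω/δ)/ω (about 4.6 min for these values); letting degradation rise after the synthesis peak pulls the total-mRNA peak forward, in line with the short delays reported in the text.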
The rate estimates for all expressed genes are listed in Supplementary Tables 2 and 3. To investigate the potential role of mRNA degradation rate changes quantitatively, we extended the DTA method such that it allows for the estimation of changes in mRNA synthesis and degradation rates, exploiting the fact that Equation (*) translates a synthesis time course μ(t) and a degradation time course δ(t) into predictions of labeled and total mRNA. We further developed a score quantifying the strength of periodic mRNA degradation. It is based on the comparison of two models for the explanation of the labeled and total mRNA time series of a gene. One model assumes a constant mRNA degradation rate, δ(t) = const, and the other assumes a sinusoidal degradation rate, δ(t) = a·cos(t − φ) + const. The log likelihood ratio of the respective best fits, termed "variable degradation score", was used to rank genes according to their fluctuations in mRNA degradation; the score was averaged over both replicate time series. Periodic transcripts had a mean variable degradation score of 0.64 (±0.47 s.d.), as opposed to non-periodic transcripts. Conversely, genes with a variable degradation score above 0.3 comprised 74.7% of all periodic transcripts. Additionally, the variable degradation score was positively correlated with the periodicity score of periodic transcripts. This indicates that periodic variation in mRNA degradation is a common feature of periodic transcripts. To assess the score's behaviour, a synthesis time course μ(t) was combined with periodic degradation time courses of variable amplitude to generate simulated labeled and total mRNA profiles. Noise was added according to the MoPS error model, and the variable degradation score was calculated for all instances. The variable degradation score rose with increasing amplitude and decreasing mean of the degradation time course.
In order to assess the power of our approach for discriminating between genes having constant respectively variable degradation rates, we calculated its sensitivity and specificity for various degradation amplitudes. The results are summarized as receiver-operating characteristic (ROC) curves. Changes in degradation rates might be confined to a single cell cycle phase or might be gene-specific. We grouped the 358 periodic genes with variable degradation according to the cell cycle phase in which their transcription peaks and examined the distributions of their degradation peaks; notably, degradation of cell cycle TF mRNAs peaks when transcription of their targets is maximal. Further investigation of co-regulated gene clusters revealed that the timing and the magnitude of periodic expression have different causes. Genes that have common binding sites for cell cycle TFs show coherent timing of expression, but differ in their mRNA synthesis rates. Striking examples are genes exclusively regulated by MBP1, a transcription factor that has a well-studied role in regulating expression of late G1 genes. Although these genes have very similar temporal profiles, they exhibit large differences in their synthesis rates and total mRNA levels. These differences are related to the composition of the core promoter TATA sequence, and correlate with the binding of general transcription factors.
Periodic genes that drive cell cycle progression or regulate fundamental processes like chromatin organization in S-phase are found to be highly induced and tend to have a consensus TATA box. The excellent reproducibility and the high temporal resolution at which mRNA synthesis rates and total mRNA expression were determined will make our data an ideal resource for more advanced reverse engineering approaches of cell cycle related gene expression networks. The most intriguing finding from our results is, however, that most periodically expressed genes show periodic changes in the degradation rates of their mRNAs. We realized that total mRNA levels peak on average only 2 min after labeled mRNA, which indicates the peak of mRNA synthesis activity. This short time delay could not be explained when constant degradation rates were assumed. Computational modeling of degradation kinetics of periodically transcribed genes indicated that the stability of mRNAs decreases shortly after transcription ceases. This highlights the importance of post-transcriptional control in the regulation of genes involved in cell cycle-associated processes. Varying mRNA degradation rates during the cell cycle have been observed previously. The Δbar1 strain was constructed by replacing the BAR1 open reading frame from its start- to stop-codon with a KanMX module. The Δbar1 strain was inoculated from a fresh overnight culture at OD600 0.1. At OD600 0.4, alpha factor (Bachem) was added at a final concentration of 600 ng/ml for 2 h. Synchronization was followed visually by counting the number of budding cells under the microscope. Cells were centrifuged for 2 min at 1,600 × g at 30°C and washed once with 3× the original culture volume of prewarmed YPD. Cells were then resuspended in the original culture volume with prewarmed YPD.
41 consecutive samples were labeled for 5 min with 4-thiouracil every 5 min for 200 min. Labeling and sample processing were performed as described previously, and samples for flow cytometry were processed on a FACS Calibur (Becton Dickinson). Total RNA purification, separation of labeled RNA, as well as sample hybridization and microarray scanning were carried out as previously described. Let g(t_1),…, g(t_K) be a time series, e.g. of gene expression measurements, at time points t_1,…,t_K. We approximate this time series by a continuous function γ, where Θ is the set of parameters characterizing γ. We assume that the g(t_k), k = 1,…, K, are measurements of the values γ(t_k). We specify a heteroscedastic Gaussian error model that has been developed specifically for gene expression measurements. According to this error model, the time series is approximated by its maximum likelihood fit γ. The space Ғ of test functions γ used in the fitting procedure determines what we actually model: it can be periodic behavior or aperiodic behavior. In each case, the quality of fit is crucially dependent on the proper choice of Ғ and a suitable parameterization enabling an efficient maximum likelihood search. In the MoPS algorithm, we construct periodic test functions from cosine-like functions of the form γ(t) = cos(2π(t − φ)/λ′), subsequently deformed in shape. Here, λ′ is the cell cycle length and φ is the peak time. Additionally, the "shape" parameter ψ is a bijective transformation of the interval which describes the deformation of the cosine wave. The distribution of cell cycle lengths is modeled as a lognormal distribution with mean λ and standard deviation σ. Finally, the space Ғ of periodic test functions is the set of all affine transformations of these γ functions.
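As a toy illustration of such a test-function family, the sketch below builds a cosine with period λ′ and peak time φ, deformed by a bijective warp of the cycle phase. The power-law warp and all parameter values are hypothetical stand-ins; the exact MoPS transformation is not reproduced here:

```python
import numpy as np

def periodic_test_function(t, lam, phi, psi=1.0, a=1.0, b=0.0):
    """Cosine-like curve with period lam and peak time phi, deformed by a
    bijective warp of the cycle phase; a and b give the affine transformation.
    The power-law warp (phase ** psi) is a hypothetical stand-in for the
    bijective shape transformation used by MoPS."""
    phase = ((t - phi) / lam) % 1.0    # position within the cycle, in [0, 1)
    warped = phase ** psi              # bijective on [0, 1] for psi > 0; psi = 1 gives a plain cosine
    return a * np.cos(2 * np.pi * warped) + b

t = np.linspace(0, 125, 501)
g = periodic_test_function(t, lam=62.5, phi=20.0, psi=1.5)
peak = periodic_test_function(np.array([20.0]), lam=62.5, phi=20.0, psi=1.5)
print(np.isclose(peak[0], 1.0))   # the maximum value 1 is attained at the peak time phi
```

Because the warp fixes the endpoints of the cycle phase, the curve always attains its maximum at φ regardless of ψ, which is what makes φ interpretable as the peak time.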
Conversely, we also fit a set of non-periodic expression time courses which represent constitutive expression, constant drift, or initial fluctuations due to the synchronization procedure; a list of all functions considered as non-periodic curves is given in the Supplementary Information. MoPS computes a periodicity score for each gene and thus allows ranking of all genes according to their likelihood ratio of being periodically versus constantly expressed. However, there is no obvious way to assign significance to this score. We use existing knowledge derived from published studies about periodically expressed genes to define a positive set and a negative set. The positive set comprises the top 200 periodic genes from Cyclebase. Genes were grouped by k-means clustering (k = 10) according to their modeled 1-min resolution labeled expression time courses. Sequences 500 bases upstream of the respective transcription start sites were used as input for XXmotif. Subsets of the 479 periodic genes were formed by using ChIP-chip derived associations (P-value < 0.01) of TFs and their targets (MacIsaac et al). KM and NP conducted cDTA time series experiments, BS performed initial data processing and normalization, PE, CD and AT developed and implemented the statistical workflow and carried out computational analyses, PC initiated the study, and DM, AT and PC supervised research; PE, PC and AT wrote the manuscript. The authors declare that they have no conflict of interest. Available as additional data files: Supplementary Information and Supplementary Figures S1–S38; Supplementary Tables 1–5; Review Process File."}
+{"text": "Motivation: The widespread adoption of RNA-seq to quantitatively measure gene expression has broadened sequencing experimental designs to include time-course experiments. maSigPro is an R package specifically suited for the analysis of time-course gene expression data, which was developed originally for microarrays and hence was limited in its application to count data. Results: We have updated maSigPro to support RNA-seq time series analysis by introducing generalized linear models (GLMs) in the algorithm to support the modeling of count data while maintaining the traditional functionalities of the package. We show a good performance of the maSigPro-GLM method in several simulated time-course scenarios and in a real experimental dataset. Availability and implementation: The package is freely available under the LGPL license from the Bioconductor Web site (http://bioconductor.org). Contact: mj.nueda@ua.es or aconesa@cipf.es. The use of RNA-seq for transcriptome profiling as a replacement for microarrays has triggered the development of statistical methods to properly deal with the properties of these types of count-based data. RNA-seq measurement of gene expression is based on the number of reads mapped to transcripts, which results in discrete quantities and skewed distributions. In contrast, microarray signals are scanned fluorescence intensities, and this translates into continuous and nearly normal expression data. Although normality was typically assumed and linear models (LMs) were applied to model microarray experiments, other distributions such as Poisson and Negative Binomial (NB) better capture the nature of count data. Hence, methods such as edgeR model counts directly with these distributions. The first RNA-seq experiments were still constrained by the relatively high costs of sequencing in comparison with microarrays, which restricted experimental designs to case–control studies with low replication.
As a consequence, the novel statistical methods mostly addressed this analysis scenario. As the technology became more affordable, other types of designs involving more samples, such as time-course experiments, started to appear. In a time-course study, the dynamics of gene expression are evaluated at different time points after induction by a particular treatment or in relation to development. Statistical analysis of time-course data implies the identification of genes that change their expression along time and/or follow a specific expression pattern. maSigPro is an R package designed for the analysis of transcriptomics time courses. For an experiment with T time points and S experimental groups or series, maSigPro uses polynomial regression to model the gene expression value y_i at condition i and time t_i, and defines S − 1 binary variables (z_s) to distinguish between each experimental group and a reference group. To accommodate the GLMs, the existing linear model was reformulated for distributions of the exponential family, such as the NB with dispersion parameter theta (θ), which can be estimated using available software. To evaluate maSigPro-GLM on RNA-seq time-course data, we have created different synthetic datasets in which we consider several possible experimental designs. Each dataset has been analyzed with maSigPro-LM, maSigPro-GLM and edgeR. Comparison with maSigPro-LM was included to highlight the limitations of this modeling with count data when the number of replicates is low, even after normalization. Both the maSigPro and edgeR methods are based on GLMs but with a different approach. The major difference between the maSigPro and edgeR methods is that maSigPro is specialized in the estimation of serial data, i.e. when the independent variable is quantitative, such as time. This is achieved by providing an easy way to define a polynomial model for the data.
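As a rough illustration of the regression setup described above (polynomial terms in time plus S − 1 binary dummy variables per non-reference group), the following sketch builds such a design matrix with NumPy. Function and variable names here are ours, not maSigPro's; the real package constructs this internally (e.g. via its design-matrix utilities).

```python
import numpy as np

def design_matrix(times, series, degree=2):
    """Polynomial-regression design matrix with S-1 series dummies,
    in the spirit of the text (simplified sketch, not the maSigPro code).

    times  : time value t_i for each sample
    series : group label for each sample; the first label seen is the reference
    """
    times = np.asarray(times, dtype=float)
    groups = list(dict.fromkeys(series))          # unique labels, order kept
    others = groups[1:]                           # S - 1 dummy variables z_s
    cols = [np.ones_like(times)]                  # intercept
    cols += [times ** d for d in range(1, degree + 1)]
    for g in others:
        z = np.array([1.0 if s == g else 0.0 for s in series])
        cols.append(z)                            # group offset
        cols += [z * times ** d for d in range(1, degree + 1)]  # group x time
    return np.column_stack(cols)

X = design_matrix([0, 1, 2, 0, 1, 2],
                  ["ctrl", "ctrl", "ctrl", "trt", "trt", "trt"])
# columns: 1, t, t^2, z_trt, z_trt*t, z_trt*t^2 -> shape (6, 6)
```

Fitting a GLM of the counts on these columns, and testing the group-interaction coefficients, is what distinguishes the series in a two-series design.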
Another important difference is that maSigPro applies a second, stepwise regression step that obtains the best model for each gene and retains only significant coefficients in each model, whereas edgeR applies the same model to each gene. Simulations have been created using NB distributions with a parametrization based on the mean μ and size θ. In each sample i, where the targeted total number of reads is N and the relative abundance of each gene g is p_gi, the expected gene counts μ_gi can be computed as μ_gi = N · p_gi. Note that, as gene counts are randomly drawn from a NB distribution, the simulated count values of each gene will slightly vary among samples, and so will the total number of reads of each sample i. Genes were assigned to one of K = 4 gene expression level classes, which are defined by a fixed reference value at time 1 (v1_k) and a given size in each level k. Simulated datasets were designed to contain genes that have slope values b_g different from zero and are differentially expressed. Furthermore, we modeled three different data scenarios by assigning different values of the b_g parameter to subsets of genes: (A) in this scenario, all DEGs increase their expression linearly with b_g = 0.2; (B) in this scenario, half of the DEGs increase with b_g = 0.2 and half decrease, adjusting v1_g to avoid negative means; (C) genes follow a strong upregulation at the second time point followed by a decrease. To model time-associated gene expression changes we considered a linear expression in time. Datasets were modeled either with one or two time series. In the two-series case, one series was modeled as described and the second was modeled as a flat profile. For each scenario and series number, datasets were simulated with 1, 2, 3 or 5 replicates.
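The mean/size NB sampling scheme above can be sketched with NumPy, which parametrizes the negative binomial by (n, p); converting the text's mean μ and size θ to that form uses n = θ and p = θ/(θ + μ), so that E[X] = μ and Var[X] = μ + μ²/θ. The gene abundances below are hypothetical illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(mu, theta, rng=rng):
    """Draw NB counts parametrized by mean mu and size theta (the
    convention in the text). NumPy uses the (n, p) parametrization,
    so we convert with n = theta and p = theta / (theta + mu)."""
    mu = np.asarray(mu, dtype=float)
    p = theta / (theta + mu)
    return rng.negative_binomial(theta, p, size=mu.shape)

# toy example: expected counts mu_gi = N * p_gi for one sample
N = 1_000_000                        # targeted library size (assumed)
p_g = np.array([1e-5, 5e-5, 1e-4])   # hypothetical relative abundances
counts = simulate_counts(N * p_g, theta=10)
```

Drawing counts this way makes both the per-gene values and the realized library size vary slightly from sample to sample, as the text notes.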
Finally, genes were considered to have constant length equal to 1 kb in all datasets and no length correction was applied in the data. Following this simulation scheme, the relative proportion of counts of gene g in sample i is the ratio of its expected counts to the total expected counts in the sample. This approach provides a way to take into account not only the expression level, but also the composition of the RNA population in the sample, as gene proportions are computed a posteriori and are affected by the gene expression changes modeled in each scenario. As a real experimental dataset we used the transcriptional response of Arabidopsis thaliana lines to the barley powdery mildew fungus Blumeria graminis (Bgh). Each simulated dataset combines one of the three scenarios, one or two time series and one of the four replication levels. Datasets were created with θ = 10 and 6 time points. Here, we show results from data with 20 000 genes. Simulations with a smaller dataset of 6000 genes gave similar results. Gene selection in maSigPro depends on the R2 parameter in the second regression step. We analyzed the False Discovery Rate [FDR: false positives (FP)/Selected] and the False Non-discovery Rate [FNR: false negatives (FN)/Non-selected] for varying R2 values at fixed FDR = 0.05. Our results indicate that one replicate is clearly not sufficient for the proper control of the FDR. While initial RNA-seq studies took advantage of the accuracy of the technology to avoid replication, recent studies highlight the importance of appropriate replication for a sound RNA-seq data analysis. A comparison with edgeR, which selects genes solely on the basis of a significant P-value, showed that the maSigPro filtering based on an R2 cutoff value resulted in genes with consistent models. Genes that were significant with both methods but discarded by maSigPro because of R2 < 0.5 tended to have outliers or highly variable measurements. Finally, although significance thresholds in maSigPro-GLM maintain their statistical meaning, the goodness of fit, which is used in the second step of maSigPro to select genes with well-fitted models, is evaluated in GLMs in terms of the deviance: the percentage of deviance explained by the model.
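The error-rate definitions used in this evaluation, FDR = FP/Selected and FNR = FN/Non-selected, amount to simple set arithmetic. A minimal sketch (gene names are hypothetical):

```python
def fdr_fnr(selected, truly_de, all_genes):
    """Compute FDR = FP/Selected and FNR = FN/Non-selected, following
    the definitions in the text. Inputs are collections of gene IDs."""
    selected, truly_de, all_genes = set(selected), set(truly_de), set(all_genes)
    non_selected = all_genes - selected
    fp = len(selected - truly_de)        # selected but not truly DE
    fn = len(non_selected & truly_de)    # truly DE but not selected
    fdr = fp / len(selected) if selected else 0.0
    fnr = fn / len(non_selected) if non_selected else 0.0
    return fdr, fnr

# toy example: 6 genes, 3 truly DE, 3 selected
fdr, fnr = fdr_fnr(selected={"g1", "g2", "g4"},
                   truly_de={"g1", "g2", "g3"},
                   all_genes={"g1", "g2", "g3", "g4", "g5", "g6"})
# fdr = 1/3 (g4 is a false positive), fnr = 1/3 (g3 is missed)
```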
We conducted experiments with simulated data to understand how this parameter behaves in different experimental settings. Our results indicated that, similar to the recommended threshold in the LM version of maSigPro, a cutoff value of 0.7 is valid in most scenarios. However, when data are abundant, i.e. triplicated measurements and multiple series, this threshold could be lowered to 0.5. Indeed, this value was used in the analysis of the real Arabidopsis dataset. The comparison with edgeR, which selects genes solely on the basis of a significant P-value, further supports the value of this goodness-of-fit filter. In conclusion, we show that maSigPro-GLM is suitable for the identification of DEGs from time-course RNA-seq data under a wide range of experimental settings. The updated package successfully controls both false-positive and false-negative detection rates.Funding: This work has been funded by the FP7 STATegra [GA-30600] project, EU FP7 [30600] and the Spanish MINECO [BIO2012-40244].Conflicts of Interest: none declared."}
+{"text": "The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data.
Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html. The inference, or reverse-engineering, of molecular networks from experimental data is an important problem in systems biology. Accurate methods for solving this problem have the potential to provide deeper insight into the complexity and behavior of the underlying biological systems. So far the focus has been largely on the inference of network topology, that is, on the wiring diagram representing the regulatory relationships connecting different genes. It has been argued that one can obtain a significant improvement in performance with inference methods that make use of data that capture the dynamics of a network in response to perturbations. This places additional requirements on the type of data used (e.g. steady-state data vs. time series). Making effective use of prior knowledge is also crucial in any inference problem, because the problem is typically underdetermined. Finally, another desirable property of inference methods is tolerance to certain levels of noise in the experimental data used. This is especially important for methods that capture dynamical properties of the network, in order to avoid the problem of over-fitting the data. Existing methods broadly use either coarse-grained models based on discrete variables, such as Boolean networks, Bayesian networks, Petri nets, and polynomial dynamical systems, or fine-grained models based on continuous variables, such as systems of ordinary differential equations, artificial neural networks, hybrid Petri nets, and regression methods. Several inference methods have one or several of the aforementioned features.
Some of these methods fall in the category of coarse-grained models; others in the category of fine-grained models. However, to our knowledge, no existing method combines all of the features above. In this paper we present a novel reverse-engineering method that combines all of these relevant features. It uses input that consists of (1) time courses of experimental measurements, which can include various network perturbations, such as data from knock-out mutants and RNAi experiments, and (2) prior knowledge about the network in the form of directed edges between nodes (representing known regulatory interactions) or as information about the regulatory logic rules of individual nodes. The output of the algorithm is a family of Boolean dynamic models, from which one can extract a directed graph whose edges represent causal interactions between nodes. The Boolean dynamic models are identified by an optimization algorithm that searches through the space of Boolean dynamic models that approximate the given data and satisfy the constraints imposed by the prior biological information. An important feature of the algorithm is that it uses the expression of Boolean functions as polynomials, leading to a model search space that has a rich mathematical structure that can be exploited. This effective representation of dynamic models provides a criterion for measuring model complexity and for selecting models accordingly. We show that the method is robust to a significant level of noise in the data. Additionally, we compare the method's performance with that of other approaches on a common data set. We use the modeling framework of Boolean networks, represented as time-discrete, state-discrete dynamical systems.
A Boolean network on n variables can be viewed as a function f : k^n → k^n, where each coordinate update function f_i : k^n → k is a Boolean function in n variables and k = {0,1}. If we use the fact that k supports the algebraic structure of a number system, using addition and multiplication modulo 2, then we can express each Boolean function as a polynomial function with binary coefficients, using the translation x AND y = x·y, x OR y = x + y + xy, NOT x = x + 1, while addition corresponds to the logical exclusive OR function XOR. Since x^2 = x in k, each polynomial can be assumed to be square-free, that is, every variable in every term of the polynomial appears with exponent 1. With this reduction, there is in fact a one-to-one correspondence between Boolean functions in n variables and square-free polynomials in n variables over k. The primary input to our algorithm is one or more time series of experimental measurements for all the variables associated with the nodes in the network. These can include measurements from network perturbations, such as knock-out mutants and RNAi experiments, discretized into binary data. Additional input can include prior knowledge about the network in the form of an n×n matrix whose entry ρ_ij denotes the probability that a causal influence exists from node j to node i. For the n variables associated with the nodes in the network, let there be μ0 input time series, and for 1 ≤ μ ≤ μ0 let α_μ be the length of the μth time series. The jth measurement in the μth time series consists of a vector of dimension n, with coordinates representing measurements for the individual n variables associated with the n nodes in the network. We assume that the data contains a proportion 0 ≤ ξ < 1 of noise; that is, a proportion ξ of entries in the time series are assumed to be “flipped” through noise.
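The logic-to-polynomial translation quoted above is easy to verify exhaustively over k = {0,1}. The following sketch checks all four connectives on every Boolean input:

```python
from itertools import product

# Boolean connectives and their GF(2) polynomial translations from the text:
# x AND y = x*y, x OR y = x + y + x*y, NOT x = x + 1, x XOR y = x + y (mod 2)
AND = lambda x, y: (x * y) % 2
OR  = lambda x, y: (x + y + x * y) % 2
NOT = lambda x: (x + 1) % 2
XOR = lambda x, y: (x + y) % 2

# verify the correspondence on all Boolean inputs
for x, y in product((0, 1), repeat=2):
    assert AND(x, y) == (x and y)
    assert OR(x, y) == (x or y)
    assert XOR(x, y) == (x != y)
for x in (0, 1):
    assert NOT(x) == (not x)
```

Because x² = x on {0,1}, any polynomial built from these operations reduces to a square-free polynomial, which is what makes the one-to-one correspondence work.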
As a result of noise, or as a result of the data discretization process, the given time series might end up being inconsistent, in the sense that a given state may be followed by different successor states at different points in the data. Given a collection of such time series, the network inference problem in our context can then be formulated as follows: choose a family of Boolean dynamical models {f : k^n → k^n} that (1) best fit the data, in the following sense: for each candidate f, generate μ0 new time series of length α_μ by iteratively applying f to the initial time points of the input series, and require, for 1 ≤ μ ≤ μ0 and 1 ≤ j < α_μ, that the generated series disagree with the input series in at most a proportion ξ of their entries; (2) conform to the prior information available about the biological system, given by the matrix (ρ_ij); and (3) contain Boolean coordinate functions that are as “simple” as possible, in a well-defined sense. We emphasize that in (1) we allow models to disagree with the input data commensurate with the expected noise level; one of the optimization criteria is the Goodness-of-Fit of a given Boolean dynamic model, which measures this deviation. This relaxation is the reason for the method’s robustness to data noise, and it is one feature that sets our algorithm apart from others of this kind. We choose an evolutionary algorithm as our optimization procedure, although other optimization methods could be chosen as well. We now derive a computationally efficient characterization of polynomial functions that fit a time series of a given length. This characterization greatly reduces the space of all polynomials that the algorithm needs to search over. First observe that finding a dynamic model f : k^n → k^n satisfying the fit condition amounts to finding, for each i = 1,…,n, a coordinate function f_i : k^n → k satisfying the corresponding condition on the ith coordinates, for 1 ≤ μ ≤ μ0 and 1 ≤ j < α_μ, except for at most the allowed proportion of mismatches. As previously described in the Modeling framework section, each function f_i can be expressed as a square-free polynomial in n variables, with coefficients in k.
A priori, the search space for f_i is the vector space of all such polynomials in n variables. Since this space has dimension 2^n, the number of square-free monomials in n variables, an exhaustive exploration of the search space quickly becomes intractable. However, each polynomial f_i in this space is described by the monomials that appear as summands in f_i, and each monomial x^a is described by its support, that is, the list of variables that appear with exponent 1 in x^a. Let supp(x^a) denote the support of the monomial x^a and |supp(x^a)| denote the number of variables in x^a, that is, its Hamming weight. Note that each coordinate polynomial function f_i has prescribed values at exactly m points, where m is the total length of the input time series. Recall that the floor ⌊r⌋ of a real number r is the largest integer not greater than r. We define the integer Φ := ⌊log2(m)⌋, and the set of admissible monomials as the set of square-free monomials in n variables with Hamming weight less than or equal to Φ. The size of this set depends on both n and Φ, thus preventing a direct comparison with 2^n in general. In the table we illustrate the reduction for Φ = 4, which corresponds to time courses of total length m between 16 and 31, and Φ = 9, which corresponds to time courses of total length m between 512 and 1023.
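The size of the restricted monomial basis follows directly from the definition: it is the number of subsets of the n variables with at most Φ = ⌊log2(m)⌋ elements. A sketch of the comparison with the full 2^n basis (the n = 60, m = 202 values echo the segment polarity experiment later in the text):

```python
from math import comb, floor, log2

def admissible_monomials(n, m):
    """Number of square-free monomials in n variables with support size
    (Hamming weight) at most Phi = floor(log2(m)), versus the full 2^n.
    Here m is the total length of the input time series, as in the text."""
    phi = floor(log2(m))
    restricted = sum(comb(n, w) for w in range(phi + 1))
    return restricted, 2 ** n

# n = 60 variables and m = 202 time points give Phi = 7, shrinking the
# monomial basis from 2^60 (~1.2e18) to roughly 4.4e8 monomials
restricted, full = admissible_monomials(60, 202)
```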
The rows of the table are labeled by the variables in the network. We restrict the search space to polynomials that are linear combinations of monomials with support size at most Φ. The justification is the following. If two polynomials f and g have the same values at each point t in the input time series, that is, f(t) = g(t) for each time series point t, then f can be written as f = g + h, for some polynomial h with h(t) = 0 for each time series point t. Furthermore, given a polynomial f and its valuations at each time series point t, we can find polynomials g and h with f = g + h, such that h(t) = 0 for each time series point t, and such that g cannot be further decomposed into the sum of two polynomials g = p + q with q(t) = 0 for each time series point t. Babson et al. show that the exponent vector a = (a_1,…,a_n) of any monomial x^a appearing in any such polynomial g must satisfy a bound determined by the total length m of the time series. For a square-free monomial x^a, this criterion translates to |supp(x^a)| ≤ log2(m), which justifies the definition of the set of admissible monomials above. Although the search space might still be too large to admit an exhaustive exploration, this reduction makes the search space more amenable for the application of stochastic optimization algorithms. Furthermore, this reduction in the model space is not arbitrary, but is based on a careful analysis of the form of polynomials relevant for interpolating time series of a given length. As observed in the table, the reduction is substantial. We separate the binary time series into wildtype time series for μ = 1,…,ℓ and knock-out mutant/RNAi time series for μ = ℓ+1,…,μ0.
We formulate the inference problem as an optimization problem with a multi-objective function that measures: 1) the Goodness-of-Fit of a Boolean dynamic model with respect to the input data; 2) model complexity with respect to the support of the model’s coordinate polynomial functions; 3) consistency of the network topology obtained from the dynamic model with the prior knowledge of the network’s topology; and 4) consistency with any existing information about the model’s polynomial structure. For the solution of this optimization problem we chose an evolutionary computation approach and we developed an evolutionary algorithm. Evolutionary algorithms (EAs), population-based heuristic optimization algorithms, are known to perform well on under-determined problems and noisy fitness functions. Although there are many different variants of EAs, the common underlying idea behind all these methods is the same: given a population of individuals, environmental pressure causes natural selection, which causes an increase in the fitness of the population. Given a fitness function to be maximized, a population of candidate solutions is created. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and/or mutation to them. Recombination or crossover is an operator applied to two or more selected candidates (the so-called parents) to form new candidates, or children. Mutation applied to one candidate results in one new candidate. Executing recombination and mutation leads to a set of new candidates (the offspring) that compete, based on their fitness score, for a place in the next generation, until a candidate with sufficient quality is found or a previously defined computational limit is reached. In our context, polynomial dynamic models play the role of individuals in the population. Each one of these individuals is made of n coordinate polynomial functions. Within a given individual, polynomial functions are mutated by changing some of their monomial terms.
Crossover occurs by assembling a new candidate model from optimal polynomial coordinate functions f_i for each i = 1,…,n. Additionally, to prevent a decrease of the fitness score of a given generation, some of the candidate solutions with better fitness scores are allowed to be directly cloned, that is, inherited unchanged, to the next generation. Before describing our fitness function, it is important to observe that our optimization problem can be divided into n optimization sub-problems (Divide and Conquer Strategy): consider a Boolean polynomial dynamic model f = (f_1,…,f_n) for a given generation in the EA. Based on the synchronous update schedule that we have selected for our approach, the state value of a node i at time t is computed as f_i applied to the state at time t−1; that is, for a node i, to compute its state at a given time t for a given time series μ, it is enough to have the values of the time series μ at time t−1. Thus we do not require the other n−1 coordinate functions. Therefore, the Goodness-of-Fit of a model f = (f_1,…,f_n) can be evaluated one coordinate function at a time. Similarly, the other optimization criteria, such as complexity, can be evaluated one coordinate function f_i at a time. Once each of the coordinate functions f_i has been evaluated, they can be assembled into a dynamic model f = (f_1,…,f_n) via an n-point crossover. This newly assembled model can then be evaluated for all the optimization criteria to estimate its fitness as a mutant or clone for the next generation in the EA. Hence, the fitness function for our EA is built as a multi-objective function consisting of the weighted sum of the different fitness criteria for each coordinate function f_i and the fitness of the assembled candidate dynamic models f = (f_1,…,f_n). We next list the different criteria.
This score measures the ability of a candidate model to fit the time series data. As previously stated, for 1 ≤ μ ≤ ℓ we consider the wildtype time series. For each initial time point, we generate a time series by iterating f = (f_1,…,f_n) for α_μ steps and compute the Hamming score H_f from the distances D_μ, where D_μ is the Hamming distance between the input wildtype time series and the time series generated by f, that is, the total number of bits where the input wildtype time series and the time series generated by the candidate model f differ. Hence, the Goodness-of-Fit score is computed as ModelFIT(f) = W_HM(1 − H_f), where W_HM is the weight assigned to the model’s data fit, part of the EA parameters. Now we consider the knock-out mutant and/or RNAi time series, for ℓ+1 ≤ μ ≤ μ0, each corresponding to an experiment on the rth gene. In this case, all the candidate models will be considered to have the rth coordinate function f_r = 0. That is, for a given candidate model f we let f* be the model obtained by setting the rth coordinate function f_r = 0 while keeping all the other coordinate functions the same as for the wildtype case. Consider the initial time points corresponding to the knockout and/or RNAi time series. From these initial time points, we consider the knockout and/or RNAi time series generated by iterating f* for α_μ steps. For each ℓ+1 ≤ μ ≤ μ0, analogously to Equation 4, we compute the Hamming score of f*. Similarly, we compute a score measuring the ability of each candidate coordinate function f_i to fit the ith column of each input time series: in the case of the wildtype time series data, we consider the time series generated by synchronously iterating the coordinate function f_i α_μ times, starting at the initial time point, and compare it with the ith column of each input time series. We compute the Hamming score from D_iμ, where D_iμ is the Hamming distance between the ith columns of the input wildtype time series and the time series generated by f_i.
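A toy version of the Goodness-of-Fit computation ModelFIT(f) = W_HM(1 − H_f) is sketched below. The three-node model and the normalization of the Hamming distance by the number of compared bits are our assumptions for illustration, not the paper's exact scoring.

```python
import numpy as np

def generate_series(f, x0, steps):
    """Iterate a synchronous Boolean model f from initial state x0."""
    states = [tuple(x0)]
    for _ in range(steps):
        states.append(f(states[-1]))
    return np.array(states)

def model_fit(f, observed, w_hm=1.0):
    """Goodness-of-Fit ModelFIT(f) = W_HM * (1 - H_f), where H_f is taken
    here as the fraction of bits on which the model-generated series
    disagrees with the observed series (normalization is our assumption)."""
    observed = np.asarray(observed)
    simulated = generate_series(f, observed[0], len(observed) - 1)
    h = np.mean(simulated[1:] != observed[1:])   # mismatched-bit fraction
    return w_hm * (1.0 - h)

# toy 3-node model in GF(2) polynomial form: f1 = x2, f2 = x1*x3, f3 = x1 + 1
f = lambda x: (x[1] % 2, (x[0] * x[2]) % 2, (x[0] + 1) % 2)
observed = [(1, 1, 0), (1, 0, 0), (0, 0, 0), (0, 0, 1)]
score = model_fit(f, observed)   # this model reproduces the series exactly
```

A knock-out score works the same way after replacing the rth coordinate function with the constant 0 before iterating.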
Thus, the Goodness-of-Fit of a coordinate function f_i is computed analogously and weighted by W_HP, where W_HP is the weight parameter assigned to the Goodness-of-Fit of coordinate functions. For the knock-out mutant and RNAi time series, we proceed analogously to the wildtype case (Equation 5), but considering the model f* with the rth coordinate function set to zero, representing the knock-out mutant or RNAi experiment on the rth gene. With the previously introduced algebraic description of the search space, we can evaluate the complexity of each coordinate function as the ratio between its total degree and the degree bound Φ. The complexity score for a candidate model is measured as the average of the complexity scores of its coordinate functions. Notice that the complexity score in our proposed method is enabled by the algebraic identification of the upper bound for the monomial support; however, other complexity criteria, such as the Bayesian Information Criterion and the Akaike Information Criterion, could be used. Prior information about the topology of the network can be available from two different sources: (1) from previous biological knowledge and (2) from knowledge acquired from the prior use of another inference method, thus applying our method as a “meta-inference method”. Prior knowledge from these two sources is encoded in the n×n matrices BioProbMatrix and RevEngProbMatrix, respectively. The entries ρ_ij of either matrix represent the probability of a causal influence from node j to node i. For a candidate model f, let us consider its adjacency matrix V, in which an entry a_ij is ‘1’ if x_j appears in the ith coordinate function f_i and ‘0’ otherwise. For such a matrix, consider the ith row V(x_i), corresponding to all variables appearing in the ith coordinate function, together with the corresponding ith row of the BioProbMatrix.
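The complexity criterion (ratio of a coordinate function's total degree, i.e. its largest monomial support in the square-free setting, to the bound Φ, averaged over coordinates) can be sketched as follows. The exact normalization in the paper may differ, so treat this as illustrative.

```python
def complexity_score(poly_supports, phi):
    """Average complexity of a model's coordinate functions. For a
    square-free polynomial, the total degree equals the size of its
    largest monomial support; each coordinate's complexity is that
    degree divided by phi (illustrative normalization, our assumption).

    poly_supports : one entry per coordinate function, each a list of
                    monomial supports (sets of variable names)
    """
    per_function = [max((len(s) for s in mons), default=0) / phi
                    for mons in poly_supports]
    return sum(per_function) / len(per_function)

# f1 = x1*x2 + x3 (supports {x1,x2} and {x3}), f2 = x1 (support {x1}), phi = 4
score = complexity_score([[{"x1", "x2"}, {"x3"}], [{"x1"}]], phi=4)
# per-function complexities: 2/4 and 1/4 -> average 0.375
```

Lower scores favor sparser models, which is the role this term plays in the multi-objective fitness.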
The Prior Biological Knowledge score assigned to each coordinate polynomial f_i is computed by comparing the ith row of the candidate model’s adjacency matrix with the ith row of the BioProbMatrix, and the BioScore assigned to a candidate model f is given by the weighted combination of these per-coordinate scores, where W_B is the weight assigned to the model BioScore in the EA parameters. Analogously, we compute the RevEngScore to obtain the Prior Reverse Engineering score. The full algorithm is summarized under the heading Inference of structure and dynamic polynomial models. Inference algorithms using a discrete modeling framework, such as Boolean or certain Bayesian methods, face an additional challenge: their performance depends on the choice of a data discretization method. Thus we separate the effect of data discretization on the method’s performance from that of robustness to data noise. We use a binary data set generated from the published Boolean model of a gene regulatory network, to which we added noise. The Segment Polarity Gene Network: the Boolean model represents the network of segment polarity genes in Drosophila melanogaster responsible for segmentation of the fruit fly body. This Boolean model is based on the binary ON/OFF representation of mRNA and protein levels of five segment polarity genes. The authors constructed their model based on the known network topology and validated it using published gene expression data. The expression of the segment polarity genes occurs in stripes that encircle the embryo and are captured in the Boolean model as a one-dimensional representation. Each stripe consists of 12 interconnected cells grouped into 3 parasegment primordia in which the genes are expressed in every fourth cell. The authors assumed the parasegments to be identical so that only one parasegment of four cells is considered. The variables of the dynamical system are the segment polarity genes and proteins in each of the four cells. Thus, one stripe is represented as a 15×4 = 60 node network, which we aim to infer.
Because we did not assume any prior knowledge about the amount of noise present in each one of the input data sets, we uniformly chose all EA simulations to assume 5% noise. Generation of input time series: we generated 24 time series, including wildtype and knock-out mutant data, with a total of 202 time points (≪1% of the 2^21 possible states in the system); the series are listed in an Additional file. We added ξ = 0, 0.01 and 0.05 noise to the input data by randomly flipping ξ(202)(n) bits, respectively. Note that, since data discretization may filter out some amount of noise in the experimental data, adding noise to the already discretized data probably results in the addition of more noise than if we had added the noise to the continuous data and then discretized it. a) Prior biological information: we included only the 5 biologically obvious molecular dependencies, that is, from each one of the 5 genes in the network to their corresponding protein products. b) Prior use of an inference method: using our software as a meta-inference method, independently of prior biological information, we input information about the network topology obtained by first applying the inference method from Jarrah et al. to the same data set. A key issue when applying heuristic search algorithms is their dependence on the choice of various parameters. For EA algorithms, the number of parameters that can be changed to optimize the method’s performance is often quite large. We assess the robustness to noise under a broad sampling of parameter sets, rather than only presenting the best results after parameter tuning. To create a good sampling of multi-variable parameter sets while reducing the number of runs necessary, we used a Latin hypercube sampling (LHS) method.
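Latin hypercube sampling of EA parameter sets can be sketched in a few lines: each parameter's unit range is cut into as many strata as samples, and every stratum is used exactly once per parameter. This is a generic LHS implementation, not the specific one cited in the text.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Basic Latin hypercube sample in [0, 1]^n_params: each parameter's
    range is split into n_samples equal strata, and each stratum is hit
    exactly once (a generic LHS sketch)."""
    rng = np.random.default_rng(rng)
    # one random point inside each stratum, then shuffle strata per column
    u = rng.random((n_samples, n_params))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    return (strata + u) / n_samples

# e.g. 12 EA parameter sets over 5 tunable parameters; each column can then
# be rescaled to that parameter's actual range
design = latin_hypercube(12, 5, rng=0)
```

Compared with independent uniform draws, this guarantees that every parameter's range is covered evenly even with few runs, which is why it suits multi-parameter EA sweeps.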
Furthermore, for inference methods that utilize EA strategies, typically only results for tuned parameter settings are reported. We generated triplicates for each one of the EA runs, for a total of 180 runs for each one of the 3 noise levels; the results are summarized in the corresponding figure. As mentioned earlier, we assumed no prior knowledge of the amount of noise present in each one of the input data sets; thus, we uniformly chose all EA simulations to disagree with up to 5% of the data. In that sense, since for the input data sets containing 0% and 1% noise the polynomial models are allowed to disagree with up to 5% of the bits of the input, it is not surprising that, rather than being detrimental, the EA’s performance slightly improves when considering data sets with some level of noise. Along with the network topology, by design, our method infers dynamic models. Thus we tested the method’s ability to predict dynamic patterns of the network across the different levels of noise. We considered the 6 steady states retrieved in Albert and Othmer’s model; the expected steady states were always retrieved by the method. To test the performance of our method on expression profiles, we use a synthetic biological system in Saccharomyces cerevisiae, denoted IRMA, built for “in vivo benchmarking of reverse-engineering and modeling approaches”. The network was constructed from five genes: CBF1, GAL4, SWI5, ASH1 and GAL80. Galactose activates the GAL1-10 promoter, cloned upstream of a SWI5 mutant in the network, and is able to activate the transcription of all five network genes. The endogenous transcription factors were deleted from the organism to restrict the impact of external factors. The authors measured both time series and steady-state expression levels using two gene profiles described as Switch ON and Switch OFF, obtained by shifting the growing cells from glucose to galactose medium and from galactose to glucose medium, respectively.
The translation from continuous data to their discrete equivalent is a crucial step in preserving the variable dependencies and thus has a significant impact on the performance of network inference algorithms. Although several methods have been proposed to discretize continuous expression data, no single method is best for all data sets. Prior knowledge of the network topology. In the specific scenario of the yeast synthetic network, we observed that across the different discretization methods used, some dynamic patterns observed in the continuous data were lost. To counteract the data limitation due to the 2-state data discretization, we used our inference method as a meta-algorithm: we input a previously inferred network topology from the dynamic Bayesian method BANJO as prior knowledge about the topology of the network. An objective benchmarking procedure. Possible bias can occur when comparing inference methods: 1) only methods that are \u201calike\u201d the method of interest are used for comparison purposes, and/or 2) a lack of experience with the methods or software used for benchmarking prevents an optimal use of such methods. To avoid these two biases, we decided to benchmark our method against a broad spectrum of inference methods from fundamentally different modeling frameworks and to exclusively use the best results reported by the authors in the publications of their own methods. The results are summarized in the corresponding figure and table. One important aspect to mention concerns the level of noise we assume in the data. Detailed error models (e.g., the Rosetta error model) have been proposed to attempt to quantify the uncertainty in gene expression data; however, rather than relying on such a model, we uniformly assumed up to 5% noise. One of the goals of modeling gene regulatory networks is to obtain a predictive model.
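As a concrete illustration of the discretization step discussed above, a minimal 2-state scheme thresholds each gene's time series at its mean (one of many possible discretization methods; the paper compares several):

```python
import numpy as np

def discretize_two_state(series):
    """Map a continuous expression time series to Boolean values by
    thresholding at the series mean (a simple illustrative scheme)."""
    series = np.asarray(series, dtype=float)
    return (series > series.mean()).astype(int)

profile = [0.2, 0.3, 1.5, 1.8, 0.1]
print(discretize_two_state(profile))  # -> [0 0 1 1 0]
```

Threshold choice matters: as noted above, a coarse 2-state discretization can erase dynamic patterns that are visible in the continuous data.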
To assess the prediction capabilities of our method, we used the Switch ON time series as input data and tried to predict the expression profiles of the Switch OFF experiment time series (Figure). Because the last two variables represent GAL4 and GAL80 mRNA levels when the cell\u2019s environment is shifted from galactose to glucose medium, one would expect to observe a degradation of their mRNA levels; however, as noted by the authors, the expected degradation was not observed. As mentioned in the introductory section, the effective use of prior knowledge is crucial to overcome a lack of quantity or quality of data. To illustrate this, consider again the IRMA network as an example. As noted in the previous section, the Switch OFF data are a challenge due to a lack of significant stimulus compared to the Switch ON data. This scenario is ideal to highlight that, when the observational data provide insufficient information, inference methods can overcome the input data limitations through the additional consideration of prior knowledge about the topology of the network. To show this, we set up the next experiment: with the Switch OFF data as input, we ran our method using different amounts of information about the network topology as prior knowledge. We created 4 networks with prior information about the network topology by considering 25%, 50%, 75% and 100% of randomly chosen true positives in the network. Our objective was to investigate whether successively adding more prior knowledge about the network topology would show an incremental improvement in the method\u2019s performance. Accordingly, as shown in the corresponding table, with partial knowledge of the network\u2019s structure as prior, our method also outperforms the previously described method.
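The prior-knowledge experiment above can be mimicked by sampling a fixed fraction of a network's true edges; the edge list below is a hypothetical stand-in, not the actual IRMA topology:

```python
import random

def sample_prior_edges(true_edges, fraction, seed=0):
    """Return a random subset containing `fraction` of the true positive
    edges, to be supplied as partial prior knowledge of the topology."""
    rng = random.Random(seed)
    k = round(fraction * len(true_edges))
    return set(rng.sample(sorted(true_edges), k))

# Hypothetical directed edges between IRMA gene names (illustrative only).
true_edges = {("CBF1", "GAL4"), ("GAL4", "SWI5"), ("SWI5", "ASH1"), ("ASH1", "CBF1")}
priors = {f: sample_prior_edges(true_edges, f) for f in (0.25, 0.5, 0.75, 1.0)}
```

Running the inference once per prior then shows whether performance improves monotonically with the amount of true structure supplied.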
In this small example it is possible to highlight the benefit acquired from adding even partial information about the network. However, reliable sources of information are not necessarily easy to retrieve. In our method we have proposed two main sources of such prior information: 1) prior biological knowledge of the network\u2019s topology (BioProbMatrix) and 2) prior information about the network obtained from other inference methods applied to available data (RevEngProbMatrix). The latter source of information is particularly useful when different types of data are available, so that ad hoc methods can be applied to each type of data. It is possible to imagine other scenarios in which other kinds of prior biological knowledge can be used with our method. For instance, suppose that for a given gene i the set of candidate regulators for the polynomial function fi, describing the dynamic patterns of gene i, is to be refined. First, an unbiased initial run of the EA algorithm can be done, with the \u03a6 upper bound from the input data. From these runs, one might identify the \u03a6 most prevalent variables in fi, i.e., the more likely activators/repressors of the gene in question. With these \u03a6 most prevalent variables, the ith row in the BioProbMatrix can be fixed with 1\u2032s in the corresponding columns of these variables and the rest fixed to 0, in order to find the most optimal polynomial models. In the case that the number of most prevalent variables is less than \u03a6, several runs of the EA can be done; in each of these runs, for the ith row of the BioProbMatrix, one can fix to 1 the values for these variables while considering combinations of the other variables fixed to 1 and the rest fixed to 0, until the best scored polynomial models are found. The development of algorithms for the inference of molecular networks from experimental data has received much attention in recent years, and new methods are published regularly.
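The row-fixing refinement described above can be sketched as follows, assuming the BioProbMatrix is reduced to a simple 0/1 gene-by-variable matrix (a simplification of the probability matrix used by the method):

```python
import numpy as np

def fix_bioprob_row(bio_prob, i, prevalent_vars):
    """Fix row i of a 0/1 BioProbMatrix: 1 in the columns of the most
    prevalent variables found in unbiased EA runs, 0 everywhere else."""
    bio_prob[i, :] = 0
    bio_prob[i, sorted(prevalent_vars)] = 1
    return bio_prob

M = np.zeros((5, 5), dtype=int)     # 5 genes x 5 candidate variables
M = fix_bioprob_row(M, 2, {0, 3})   # variables 0 and 3 most prevalent for gene 2
print(M[2])  # -> [1 0 0 1 0]
```

Subsequent EA runs would then only consider polynomial models for gene 2 built from variables 0 and 3.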
Most of these methods focus on the inference of the network topology and cannot use information about the temporal development of the network. Additionally, there is still a need for methods that can take different types of prior information about the network. Finally, well justified search space reductions are needed to improve the performance of inference methods.The method we present in this paper combines several useful features: (1) it effectively uses time series data; (2) it takes into account prior information about the network; (3) it infers dynamic models so that it can predict long-term dynamic behavior of the network; (4) it is robust to noise in the input data; and, (5) it uses theoretical tools from computer algebra and a local search algorithm to efficiently explore the model search space and to optimize between model fit and model complexity.Our method compares in general favorably with other inference methods that also utilize time series data. As we have shown here, a good strategy for increased performance is the introduction of an effective search space reduction and the combination of different inference methods.Lastly, although our method is within the PDS modeling framework, our introduced description of the search space can be applied as well to other Boolean modeling approaches. We expect this description to be useful for Boolean methods proposed in the future or to improve the performance of existing ones.The authors declare that they have no competing interests.PVL developed the design of the algorithm and helped with its encoding, performed the robustness to noise analysis and performed the benchmarking analysis. AJ participated in the design of the algorithm and supported PVL with the benchmarking analysis. LDG participated in the design of the algorithm and performed the comparison of dimensionality of the classes of Boolean functions. 
JM participated in the design of the inference algorithm, helped with its encoding and supported PVL in the robustness to noise analysis of the algorithm. RL participated in the design of the inference algorithm, provided funding and directed the project. All authors read, edited and approved the final manuscript. Additional files: Algebraic description of the model search space; Segment polarity gene network; Comparison of discretization methods for the Switch ON and Switch OFF data for the IRMA network."}
+{"text": "Obesity is a metabolic state associated with excess of positive energy balance. While adipose tissues are considered the major contributor to complications associated with obesity, they influence a variety of tissues and inflict significant metabolic and inflammatory alterations. Unfortunately, the communication network between different cell types responsible for such systemic alterations has been largely unexplored. Here we study the inter-tissue crosstalk during progression and cure of obesity using multi-tissue gene expression data generated through microarray analysis. We used gene expression data sets from 10 different tissues from mice fed a high-fat-high-sugar diet (HFHSD) at various stages of disease development and applied a novel analysis algorithm to deduce the tissue crosstalk. We unravel a comprehensive network of inter-tissue crosstalk that emerges during progression of obesity, leading to inflammation and insulin resistance. Many of the crosstalks involved interactions between well-known modulators of obesity and associated pathology like inflammation. We then used similar datasets from mice that, in addition to HFHSD, were also administered a herbal concoction known to circumvent the effects of HFHSD in the diet induced model of obesity in mice. We propose that the analysis presented here could be applied to understand systemic details of several chronic diseases. Obesity has emerged as a global epidemic and lies at the forefront of a vast repertoire of metabolic/lifestyle diseases in the developing and developed world. The positive energy balance leads to accumulation of stored fats in the adipose tissues across the body. Several studies implicate changes in food habits due to surplus availability of food and a sedentary lifestyle as a major factor behind the surge in obesity and obesity related illness.
Consequently, animal models of diet-induced obesity have been the major experimental model to achieve a better understanding of the disease physiology and possible intervention strategies. All animal studies were carried out at BIONEEDS Laboratory Animals & Preclinical Services, Bangalore, India, and approved by the Institutional Animal Ethics Committee (IAEC). All experimental protocols were performed in accordance with the approved national and international guidelines. BIONEEDS is approved by the committee for the purpose of control and supervision of experiments on animals (CPCSEA), Ministry of Forests and Environments, Government of India. Animals and diet, treatment with Kal-1, tissue isolation, RNA isolation and microarray experiments have been described in our earlier report and are briefly explained in the supplementary information. The list of extracellular (GO:0005615) and cell surface (GO:0009986) mouse gene products was downloaded from the AmiGO gene ontology browser (http://amigo1.geneontology.org/cgi-bin/amigo/go.cgi). The protein-protein interactions between the extracellular (secreted) and cell surface (receptor) gene products were extracted from major expert-curated databases (APID and InnateDB) in an automated manner using the web service PSICQUIC (https://code.google.com/p/psicquic/). A tissue interaction matrix was constructed by assigning the value 0 or 1 to each cell. A value of 1 was assigned to a cell only if both the secreted and receptor genes, for the corresponding tissues, were up-regulated; a value of 0 was assigned to all other cells. SQL queries were written to obtain the tissue interaction matrix for each secreted-receptor interacting pair at each time point. For each of the 100 possible tissue interactions, all the 1\u2009s from each tissue interaction matrix were summed (interaction count) at each time point.
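The tissue interaction matrix construction described above (implemented in the paper with SQL queries) can be sketched in a few lines; the tissue and gene names below are illustrative, not taken from the dataset:

```python
import numpy as np

def tissue_interaction_matrix(up, sec_gene, rec_gene, tissues):
    """Cell (i, j) is 1 iff the secreted gene is up-regulated in tissue i
    AND its receptor gene is up-regulated in tissue j; 0 otherwise."""
    n = len(tissues)
    m = np.zeros((n, n), dtype=int)
    for i, t_sec in enumerate(tissues):
        for j, t_rec in enumerate(tissues):
            if sec_gene in up[t_sec] and rec_gene in up[t_rec]:
                m[i, j] = 1
    return m

# Hypothetical up-regulated gene sets per tissue at one time point.
tissues = ["liver", "spleen", "adipose"]
up = {"liver": {"Il6"}, "spleen": {"Il6", "Il6ra"}, "adipose": {"Il6ra"}}
m = tissue_interaction_matrix(up, "Il6", "Il6ra", tissues)
print(m.sum())  # interaction count for this sec-rec pair -> 4
```

Summing the matrix gives the interaction count for one secreted-receptor pair; repeating over all pairs and time points yields the crosstalk network.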
Along with it, the sum of secreted genes up-regulated (sec count) and the sum of receptor genes up-regulated (rec count) were extracted for each tissue at each time point using SQL queries. Networks were constructed at each time point to visualize the tissue crosstalk, with rec count and sec count as node attributes and interaction count as edge attribute, using Cytoscape. A time-course trend was generated for each secreted and receptor gene by assigning one of three values, \u22121, 0 or 1, at each of the time points, based on the difference between the expression value at that time point and the baseline expression. If the difference is greater than or equal to 1, the value 1 is assigned. If the difference is less than or equal to \u22121, the value \u22121 is assigned. If the difference is between \u22121 and 1, the value 0 is assigned. The time-course trend of the secreted gene and receptor gene of each secreted-receptor interaction pair is generated for all tissues using a Java program. A tissue interaction matrix was generated and the value 1 was assigned only if the time-course trends of the secreted and receptor genes matched exactly for the corresponding tissues; otherwise the value 0 was assigned. Tissue crosstalk was visualized using a network generated with Cytoscape. For each gene, the expression at each of the time points was normalized with respect to the expression at the first time point, and then the area under the graph (expression area) of the plotted normalized expression values was determined using a Java program. A cut-off had to be designed in order to classify the genes as up-regulated or down-regulated based on the expression area. The cut-off for classifying up-regulation was defined as the expression area obtained from a hypothetical gene which has an expression value of 1 at exactly one time point and 0 at all the other time points.
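The trend-assignment rule described above (implemented by the authors in Java) can be sketched as:

```python
def time_course_trend(expr, baseline):
    """Assign -1, 0 or 1 at each time point from the difference between
    the expression value and the baseline, using the thresholds above."""
    trend = []
    for value in expr:
        diff = value - baseline
        if diff >= 1:
            trend.append(1)
        elif diff <= -1:
            trend.append(-1)
        else:
            trend.append(0)
    return trend

print(time_course_trend([0.2, 1.6, -1.3, 0.9], baseline=0.0))  # -> [0, 1, -1, 0]
```

A secreted-receptor pair is then called dynamically interacting when the two trend vectors are identical.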
The cut-off for classifying down-regulation was defined as the expression area obtained from a hypothetical gene which has an expression value of \u22121 at exactly one time point and 0 at all the other time points. The dynamic interactions were classified into four groups based on the regulation of the sec and rec genes involved in the interaction. Statistical significance testing was carried out by performing hypothesis testing of the difference between two population proportions. We calculated the Z-value, standard error and the corresponding P-value for each data point in Microsoft Excel. In order to study the crosstalk between various tissues during the establishment of obesity/diabetes in an unbiased manner, we previously reported changes in the expression levels of genes across 10 different tissues at multiple time points while mice were fed a high fat, high sugar diet (HFHSD) (supplementary information). The earliest time points comprised 1, 6 and 14 days after the mice were put on HFHSD. We transformed the tissue crosstalk matrix into a network as shown in the corresponding figure. The efficacy of Kal-1 in controlling diet induced obesity and diabetes has recently been demonstrated by our group. We next scrutinized the tissue participation among frequent receptor-ligand interaction pairs. The tissue that expresses \u201csec\u201d was considered to be regulating the function of those that express \u201crec\u201d. Thus the receptor-ligand interaction tables were transformed on account of participating tissues rather than participating receptor-ligand pairs. The resulting interaction network was then analysed through hierarchical clustering to reveal tissues which received maximum signal or those which released more sec molecules. Several interesting patterns emerged through this analysis.
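The two-proportion hypothesis test mentioned above (computed by the authors in Microsoft Excel) can be reproduced as follows, using a pooled standard error, one common formulation:

```python
import math
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two population
    proportions, returning (z, standard error, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))   # two-sided tail probability
    return z, se, p_value

# E.g. 30/100 interactions up in one condition vs 10/100 in the other.
z, se, p = two_proportion_z(30, 100, 10, 100)
```

For the example counts the test yields z ≈ 3.54, a difference significant well below the 0.01 level.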
Having observed the distinct involvement of macrophages from spleen and adipose tissues in inter-tissue crosstalk, we asked how the rec and sec genes studied here are distributed between two biological function classes. Many of these molecules can be easily classified as immune regulators, like cytokines and chemokines, while some may regulate metabolism due to their influence on central metabolic pathways and anabolism. We classified rec and sec genes into two broad gene ontology classes: GO:0008152 for metabolism and GO:0006954 and GO:0006955 for immune and inflammation. Expression of all these genes across different tissues and time points was checked in the expression data to infer overall perturbation to these two functional classes. So far, our crosstalk analyses were time-point specific and independent of the expression pattern at other time points during the study. We next decided to check the time-course trend for expression of rec and sec genes. The time-course trends took into account the changes in expression of a gene at any given time with respect to the preceding and following time points. Any pair was considered dynamically interacting when they followed exactly similar time-course trends (see methods for detail). We developed a simple algorithm to calculate the net change for each receptor-ligand pair, and tissue participation in this interaction was catalogued. The analysis above allowed us to extend the tissue crosstalk network in a dynamical manner. Thus those tissue pairs which showed exactly the same time-course trend for sec and rec respectively were considered to be communicating with each other. All such observed interactions between all ten tissues are represented as a tissue crosstalk network in the corresponding figure. While tissue crosstalk patterns seemed intriguing, we went on to identify dynamic molecular crosstalks that were differentially regulated between the two conditions.
The interaction lists were used to generate a network file which incorporated all the interactions in both conditions. Then condition-specific networks were extracted to compare similarities and differences between the two conditions. We also looked at the interactions that were lost/down-regulated in HFHSD and Kal-1 treated animals. The complete molecular crosstalk network covering all possible combinations of rec and sec expression profiles is shown in the corresponding figure. The present study was planned keeping in view the inclusive nature of obesity, which affects several organs of the individual, resulting in insulin resistance and other metabolic and cardiovascular illnesses. Inclusion of the Kal-1 treated group in the study allowed further filtering of interactions that were specific to disease progression. In summary, we report here a comprehensive and dynamic inter-tissue crosstalk that gets established in the diet induced model of obesity in mouse. Using an unbiased approach we were able to filter out specific inter- and intra-tissue crosstalk during progression of obesity and diabetes. Functional analysis of the observed crosstalk reiterated the significant role played by inflammatory pathways in regulating the pathologies associated with obesity. The corrective therapy, in addition to controlling the weight gain in animals, also mitigated the inflammatory signaling, reaffirming the importance of this pathway. Identification of differential tissue participation, specifically selective regulation in the brown adipose tissue and corresponding infiltrating macrophages, certainly warrants further investigation as it could potentially provide opportunities for novel and unconventional intervention strategies. How to cite this article: Samdani, P. et al. A Comprehensive Inter-Tissue Cross-talk Analysis Underlying Progression and Control of Obesity and Diabetes. Sci. Rep. 5, 12340; doi: 10.1038/srep12340 (2015)."}
+{"text": "There is a growing appreciation for the network biology that regulates the coordinated expression of molecular and cellular markers; however, questions persist regarding the identifiability of these networks. Here we explore some of the issues relevant to recovering directed regulatory networks from time course data collected under experimental constraints typical of in vivo studies. NetSim simulations of sparsely connected biological networks were used to evaluate two simple feature selection techniques used in the construction of linear Ordinary Differential Equation (ODE) models, namely truncation of terms versus latent vector projection. Performance was compared with ODE-based Time Series Network Identification (TSNI) integral, and the information-theoretic Time-Delay ARACNE (TD-ARACNE). Projection-based techniques and TSNI integral outperformed truncation-based selection and TD-ARACNE on aggregate networks with edge densities of 10-30%, i.e. transcription factor, protein-protein cliques and immune signaling networks. All were more robust to noise than truncation-based feature selection. Performance was comparable on the in silico 10-node DREAM 3 network, a 5-node Yeast synthetic network designed for In vivo Reverse-engineering and Modeling Assessment (IRMA) and a 9-node human HeLa cell cycle network of similar size and edge density. Performance was more sensitive to the number of time courses than to sample frequency and extrapolated better to larger networks by grouping experiments. In all cases performance declined rapidly in larger networks with lower edge density. Limited recovery and high false positive rates obtained overall bring into question our ability to generate informative time course data rather than the design of any particular reverse engineering algorithm. Before the emergence of high throughput techniques, biology was deeply entrenched in a reductionist study of one component gene or protein at a time.
Though now often deprecated, such studies have provided a wealth of information about the various roles of individual molecular entities. With the advent of high-throughput techniques such as microarray, mass spectrometry, RNA-seq, chip-seq and multi-channel flow cytometry, it is now possible to simultaneously survey many cellular components including mRNA, proteins, and metabolites. There is now a growing appreciation that almost all biological morphologies and functions emerge as a result of complex interactions between constituent molecules or entities. In the past two decades, the reverse engineering of causal gene regulatory networks from time course expression profiles has received special attention, with a number of methods and mathematical formulations being proposed for network inference. Boolean network (BN) methods require a priori knowledge and represent response dynamics as discrete on-off transitions with a simple delay, as in BN algorithms such as REVEAL. In our assessment, both projection-based techniques, namely broken stick (F scores = 0.40 and 0.56) and Bartlett\u2019s method (0.47 and 0.60), not only outperform TSNI integral (0.17 and 0.26) in the reconstruction of the 9-gene HeLa cell cycle network but also match the performance of previously reported methods, respectively. We also assessed recovery of in silico networks using data from the DREAM3 challenge, from the synthetic IRMA network, as well as from the 9-gene HeLa cell cycle network. In our analysis, none of the methods evaluated performed to the standards of their reported average performance on single simulated time courses created using the logic-based NetSim. It is important to note that in many cases the edge count of the simulated networks was not reported. In addition, many of these methods were typically assessed under the more ideal condition where simulated time course data were generated using models similar to those encoded in the reverse engineering algorithm. For example, in evaluating the probabilistic method TD-ARACNe, Zoppoli et al.
(2010) used a random network to define a set of statistical dependencies, then translated these into stochastic differential equations to simulate the actual time course data. Likewise, the authors of TSNI integral used very similar differential equation models to generate both the test data and to perform the reverse engineering. Though this is an important departure from real-world conditions, it nonetheless offers a possible upper bound for the performance achievable under near ideal conditions. Despite such favorable conditions, these reverse engineering methods barely achieve an F score of 0.4 in recovering a 10-node network from 50 time points sampled with 10% noise. Based on this body of literature, an F score of 0.40 might be considered near-ideal performance for these methods in recovering networks with small to moderate node degree using single time course data alone. The main purpose of this study was to examine the core design features of some of the basic classes of methods currently available for the reverse engineering of biological networks. In particular, we attempted to gauge their applicability to data collected under experimental constraints typical of in vivo studies, where the range of allowable perturbations, the sample frequency and the number of subjects are all significantly limited. Using the two alternative types of parameter estimation commonly applied in the identification of ODE models, we assessed the recovery of known networks from simulated perturbation time course data produced by the NetSim platform. Importantly, this was done under a range of sampling frequencies and group sample sizes, key parameters in the design of such experiments. The conventional gradient-based ODE form was also compared to the equivalent time-lagged difference equation. In addition, the performance of TD-ARACNE, a recently reported information theoretic method adapted for use with time course experiments, was also assessed.
Finally, the general applicability of our simulation results was explored by reconstructing known biological networks. Networks such as those of C. elegans and global protein-protein interaction networks in human show edge densities of ~0.4%. Our results suggest that in such sparse networks one might expect reasonable recovery only in the better-connected component sub-networks where edge density is maintained above 10%. Examples of such networks include cytokine signaling between immune cells (60% edge density), cytokine signaling with tissue (40%) and neural networks of cat brain (30% edge density). Here we purposely generated data using a simulation method based on a fuzzy logic framework that differed significantly from the ODE model structure used for reverse engineering. Though still artificial, we consider this situation more realistic. Understandably, under these conditions the recovery performance based on single time courses consisted of a median F score of 0.30 or less. Rather than focus on the recovery of networks in individual subjects, we used the commonly accepted practice in human studies of stratifying the cohorts into groups of subjects. In our analysis, projection methods applied to the standard ODE as well as the difference equation model were successful in recovering typical biological networks having edge densities of 10\u201330%, producing a median F score of 0.40 or more when used on groups of time courses. This translated into a predictive precision (PPV) in the range of ~30\u201340% with recall values between ~50\u201360% for simulated data from sparsely connected artificial networks designed to exhibit key topological properties similar to those expected in real biological networks. Interestingly, this is consistent with values obtained from in vivo regulatory sub-networks surveyed in human immune cells. In recent work, Wang et al.
(2009) applied such an analysis to in vivo regulatory sub-networks. For the sparser networks studied here, we found that ODE-based methods generally performed better than TD-ARACNE under conditions that approximated in vivo time-course studies, namely when data collection was restricted to smaller subject groups and infrequent sampling. Among the ODE based methods, stepwise feature selection was less tolerant of experimental noise. Projection-based methods typically fared better as they aggregate terms into composite features, creating an averaging effect that attenuates noise. Unfortunately, as we report in this work, projection methods tend to produce far less parsimonious models and a high rate of false positive calls. This is only compounded further in real biological networks where indirect associations may be introduced by unobserved moderators. Nor was there much to be gained on this type of perturbation data by increasing the complexity of the model. Marbach et al., 2010 conducted a broad assessment of inference methods; in Homo sapiens, their analysis supports the broad identification of direct regulatory structure but does not support the identification of co-regulatory motifs, nor does it support the identification of regulatory kinetics supporting pharmacokinetic studies. The latter will require kinetic experiments over a range of frequencies as well as dose response methodology. Though preliminary and rooted in a set of basic assumptions regarding the properties of the data, these findings offer an approximate set of guidelines for the design of pilot studies directed at the inference of in vivo network regulatory kinetics as well as some approximate bounds on what we may realistically expect from simple in vivo perturbation studies.
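The PPV, recall and F score figures quoted throughout are standard edge-set comparisons against the reference network, which can be computed as:

```python
def edge_recovery_scores(true_edges, inferred_edges):
    """Precision (PPV), recall and F score of an inferred directed edge
    set against the reference network's edge set."""
    tp = len(true_edges & inferred_edges)
    ppv = tp / len(inferred_edges) if inferred_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    f = 2 * ppv * recall / (ppv + recall) if (ppv + recall) else 0.0
    return ppv, recall, f

# Toy reference and recovered networks (directed edges as node pairs).
truth = {(1, 2), (2, 3), (3, 1)}
guess = {(1, 2), (2, 3), (1, 3)}
ppv, recall, f = edge_recovery_scores(truth, guess)
```

With 2 of 3 true edges recovered and one false positive, all three scores equal 2/3; directed edges mean (1, 3) does not match (3, 1).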
Our results and those of others suggest that even in the favorable case of more densely connected sub-networks, such as those surrounding transcription factors, the reverse engineering algorithms and the current generation of confirmatory assays may have reached an upper bound in terms of their ability to recover the underlying regulatory network from experimental time course data. This points to the larger issue of information content in the data collected, which is limited by the breadth of experimental conditions that can be safely deployed in human subjects. Despite advances in the reverse engineering algorithms, more informative data sets will be required if we are to realize the potential of personalized network medicine. Algorithms continue to be developed that design new incremental sets of experiments in order to iteratively refine the recovery of the network model. S1 Fig. 20 different networks of 10 nodes each were used to generate a single time course profile sampled at 10, 25 and 50 time points. Box plots on the left show the median and inter-quartile range of F scores for selected methods on the datasets in the absence of noise. Box plots on the right show the range of F scores for each method on the datasets with 20% random noise added. (TIFF) S2 Fig. For each network scale of node degree between 5 and 50 nodes, a single reference network was created. From each network 20 simulated time courses were obtained using different initial conditions and sampled at 50 time points. All time courses included 20% Gaussian noise.
F scores were obtained based on the network recovered from each simulated time course, and median values were plotted against node scale and with respective edge density. (TIFF) S1 File. Simulated time course data from 20 random NetSim generated 10-node networks, each sampled at 10, 25 and 50 time points with 0% and 20% experimental noise. (MAT) S2 File. Simulated time course data for 5, 10, 15, 20, 30 and 50-node NetSim networks, all sampled at 50 time points with 20% experimental noise. (MAT) S3 File. Simulated data for 10 different NetSim-generated 10-node networks. Each network was used to generate groups of 5, 10, 15 and 20 simulated time courses, each sampled at 5, 10 and 50 time points. (MAT) S1 Table. Review of methods for the reverse engineering of directed networks from time course data published over the past 10 years, with the inclusion of select older references describing seminal methods that remain popular. (XLSX) S2 Table. Median (a) and mean (b) performance of all selected methods in recovering 20 different 10-node simulated networks, each from a single time course sampled at 10, 25 and 50 time points. (DOCX) S3 Table. Median (a) and mean (b) performance of all selected methods across different expression profiles for random networks of increasing node degree. Each network was used to generate 20 simulated time course experiments, sampled at 50 time points, where 20% Gaussian noise was added to mimic experimental noise. (DOCX) S4 Table. Improvement in performance of selected methods by inferring a consensus network for a group of time series experimental data. The reference network consists of 10 nodes with 19 edges (i.e., 21% edge density). Each experimental time series has 50 time points.
The consensus threshold achieving the best possible F score was used to infer the consensus network.(DOCX)S5 TableMedian PPV, recall and F score obtained by applying the broken stick, Bartlett\u2019s and TSNI integral methods to comparable networks from the DREAM 3 challenge (E.coli2 with 15 interactions) and NetSim. A set of 20 different networks consisting of 12\u201317 interactions was simulated by NetSim, whereas the 4 time series provided in the DREAM 3 challenge were used for the E.coli2 network. Values in parentheses show the performance when self-regulation is not considered.(DOCX)"}
+{"text": "Detecting periodicity signals from time-series microarray data is commonly used to facilitate the understanding of the critical roles and underlying mechanisms of regulatory transcriptomes. However, time-series microarray data are noisy. How the temporal data structure affects the performance of periodicity detection has remained elusive. We present a novel method based on empirical mode decomposition (EMD) to examine this effect. We applied EMD to a yeast microarray dataset and extracted a series of intrinsic mode function (IMF) oscillations from the time-series data. Our analysis indicated that many periodically expressed genes might have been under-detected in the original analysis because of interference between decomposed IMF oscillations. Through validation with a protein complex coexpression analysis, we revealed that 56 genes were newly determined as periodic. We demonstrated that EMD can be used in combination with existing periodicity detection methods to improve their performance. This approach can be applied to other time-series microarray studies. Microarray technologies allow genome-wide gene expression to be measured. Although microarray data are noisy, attempts have been made to elucidate the sources of this noise and to reduce its impact. Recent surveys of the methods used for time-series periodicity detection showed that each method had different strengths and limitations. Empirical mode decomposition (EMD) is an adaptive time-frequency data analysis method that decomposes any time series into a collection of components called intrinsic mode functions (IMFs). In this study, we extended the application of EMD to the analysis of time-series microarray studies. Recently, Tu et al.
demonstrated that approximately 52% of a genome was periodically expressed in yeast metabolic cycles (YMC) by taking samples during three consecutive intervals of 300 min every 25 min, producing data at 36 time points. Our results demonstrated that EMD is a powerful tool that can be used to compensate for the impact of the complexity of oscillation and to improve the performance of existing periodicity detection methods. The framework proposed in this paper can be applied to other time-series microarray studies. We reanalyzed the YMC dataset by using a slightly modified data preprocessing protocol. To improve understanding of the characteristics of the temporal structure of the YMC dataset, we analyzed the dataset by using EMD. The expression level of a probeset can be decomposed into a series of oscillations (IMFs) and a residual trend. To decipher how IMFs contribute to the performance of periodicity analysis, we developed a reverse-engineering approach involving an IMF-based interpolation of the original time series. All of the possible combinations of IMFs for each probeset were reconstructed as new time series. Based on the number of extracted IMFs of a probeset, n, a total of 2^n \u2212 1 combinatorial IMFs could be interpolated. The algorithm described by Tu et al. was then used to determine whether the newly reconstructed time series were periodic. This approach enabled us to search all the intrinsic patterns derived from the original time series. We compared the optimal combinatorial IMF with the original time series to identify the IMF(s) that directly contributed to or interfered with the detected periodicity. Consequently, the 2748 non-periodic probesets were further divided into two subgroups: non-detectable (ND) and putative periodic (PP) probesets. Of the 1279 ND probesets (46.54%), none of the reconstructed combinatorial IMFs were periodic. In agreement with Tu et al., these probesets were non-periodically expressed.
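The 2^n \u2212 1 enumeration of combinatorial IMFs described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' R implementation; the function name and the toy IMF values are invented for the example.

```python
from itertools import combinations

def combinatorial_reconstructions(imfs):
    """Given the IMFs extracted from one probeset's time series,
    rebuild every non-empty subset of IMFs as a candidate time series.
    Returns a list of (subset_indices, reconstructed_series) pairs,
    of which there are 2**n - 1 for n extracted IMFs."""
    n = len(imfs)
    length = len(imfs[0])
    out = []
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            series = [sum(imfs[i][t] for i in idx) for t in range(length)]
            out.append((idx, series))
    return out

# toy example: 3 "IMFs" of length 4 -> 2**3 - 1 = 7 candidate series
imfs = [[1, 0, -1, 0], [2, 2, 2, 2], [0, 1, 0, -1]]
recons = combinatorial_reconstructions(imfs)
print(len(recons))  # 7
```

Each candidate series would then be passed to the periodicity test, and the best-scoring subset taken as the optimal combinatorial IMF.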
However, we observed that the optimal combinatorial IMFs of the 1469 PP probesets (53.45%) were periodic. The average expression level of the PP probesets was similar to the average expression level of the ND probesets, but the PP probesets peaked at time points 9, 20 or 21, and 32. We constructed a panel of benchmark genes to validate the under-detected probesets by using a coexpression analysis. A recent study reported that the expression of more than 85% of yeast genes was affected by nine large, highly coexpressed protein complexes totaling 303 genes. In the YMC dataset, 279 genes were associated with these highly coexpressed protein complexes, suggesting that these genes are periodically expressed. In other words, these genes can support the discovery of under-detected periodicity. Whereas Tu et al. identified 211 (76%) periodic genes, our coexpression analysis suggested that 56 genes were under-detected. This result demonstrated that performing EMD substantially improved the detection of periodic genes, leading to the identification of 267 (96%) genes. Coexpressed protein complexes are visualized using network graphs, in which a node indicates a gene and an undirected edge between nodes indicates a gene pair that is coexpressed. Four of the five protein complexes involved in oxidative phosphorylation were included in the MIPS database. In Tu et al., YMC transcriptome profiling revealed three major metabolic phases: oxidative (OX), reductive-building (RB), and reductive-charging (RC) phases. We examined the most recently updated version (April 2014) of the gene annotations for the YMC dataset, and approximately 4% of the periodic genes were excluded because of discrepancies between the gene annotation versions.
The revised numbers of genes in the OX, RB, and RC clusters were 986, 946, and 1439, respectively. To gain a comprehensive understanding of the functional roles of the 1469 PP probesets, a gene ontology (GO) enrichment analysis was performed. Although many of the enriched GO terms were specific to PP probesets, most of the terms were similar to terms common to OX genes, which are associated with rRNA, RNA, and ribosomes. In recent decades, intensive efforts have been exerted to compare existing periodicity detection methods, but each method has different strengths and limitations. First, we demonstrated that the data structure can be extracted using a series of IMFs without any data shape assumptions. These IMFs extend the dimensions of the original time series and can provide additional information on the empirical data structure. This process enabled us to comprehensively examine and visualize oscillations within the original time series. Second, a universal benchmark was proposed to evaluate the performance of existing periodicity detection methods. We defined levels of oscillation within a time series by using the number of extracted IMFs. This simple quantized measure can divide time-series microarray data into groups based on the results of an existing periodicity detection method across a discrete spectrum of oscillations. Third, we analyzed the intrinsic periodicity of all the possible combinatorial IMFs to identify the IMFs that directly contributed to or affected the detected periodicity. These combinatorial IMFs can be used to examine the effects of superposed IMFs and to address the limitations of existing periodicity detection methods.
Moreover, this reverse-engineering approach enabled us to compensate for the effect of IMFs by identifying the periodicity of the optimal combinatorial IMF instead of the periodicity of the original time series. Our EMD-based framework is a simple tool that can be used for benchmarking and for leveraging periodicity analyses beyond their limitations. In this YMC study, we observed that 1469 probesets representing 1394 genes might have been under-detected by Tu et al. Our reanalysis of the YMC dataset revealed that the under-detection problem was not case-specific to the algorithm described by Tu et al.; we also applied a different periodicity detection algorithm. In this study, we used the YMC dataset to demonstrate a novel EMD application to improve the performance of existing periodicity detection methods. A recent study showed that combining different pattern detection approaches to analyze microarray datasets is a valuable strategy to identify novel candidate cyclic genes. In this paper, we present a novel method for identifying under-detected periodicity from time-series microarray data by using EMD. Our analysis demonstrated that the periodicity of under-detected genes might be subject to interference between decomposed IMF oscillations. By using a protein complex coexpression analysis, we revealed that 56 genes were newly determined as periodic in a yeast time-series microarray dataset. We also used EMD as a universal benchmark to assess the performance of periodicity detection. Based on the strength of EMD to further decipher the characteristics of the temporal structure of time-series microarray data, the approach can be used to improve existing periodicity detection methods. Raw data from the YMC dataset were preprocessed using an R/Bioconductor-based workflow. Identification of periodic signals and the p-value of detection were determined by using the algorithm described in Tu et al.
The \u223c300 min period of the oscillatory intrinsic mode function (IMF) was determined based on calculating the autocorrelation function (ACF). To assess the level of statistical significance, the 300-min autocorrelation of the oscillatory IMF was compared with the autocorrelation expected from random (Gaussian distribution) data with no periodicity. The cutoff criterion for the p-value is 0.05. We requested the C-based computer code of the ACF algorithm from the authors of Tu et al. and rewrote the source code by using R. We integrated this algorithm with EMD to search the periodic patterns derived from all of the possible combinations of IMFs. The R source code and instructions to identify the under-detected genes by using our EMD approach are available at http://metadb.bmes.nthu.edu.tw/emd_ymc/. A gene pair was classified as coexpressed if its queried ACS was ranked in at least the 95th percentile of a background distribution of randomly selected ACSs from the same protein complex. We collected 210 protein complexes from the MIPS database. Functional categorization of gene clusters was analyzed using GO::TermFinder. Figure S1 Expression profiles of five single IMF probesets.(TIF) Figure S2 Visualization of networks of three highly coexpressed MIPS protein complexes associated with oxidative phosphorylation.(TIF) Figure S3 Distribution of the number of extracted IMFs for genes associated with ribosome biogenesis (GO:0042254).(TIF) Figure S4 Boxplots of the expression profiles for (A) non-periodic, (B) PP, and (C) ND probesets. A periodicity analysis was performed using the algorithm described in Tu et al.(TIF) Table S1 Summary of three highly coexpressed MIPS protein complexes associated with oxidative phosphorylation.(XLS) Table S2 Functional categorization of all enriched GO terms (BP) for the OX, RB, RC and PP gene clusters.
Statistical significance of enrichment is presented as \u2212log-transformed p-values.(XLS) Table S3 A complete list of 1469 under-detected probesets.(XLS)"}
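The ACF-based periodicity test described in the Methods (autocorrelation at the \u223c300-min lag compared against the autocorrelation expected from non-periodic Gaussian noise, with a 0.05 p-value cutoff) can be sketched as follows. This is a hedged illustration, not the authors' R/C implementation; the helper names and the Monte Carlo null are assumptions made for the example.

```python
import math, random

def acf(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag))
    return cov / var

def acf_pvalue(x, lag, n_null=2000, seed=0):
    """Monte Carlo p-value: how often does Gaussian noise with no
    periodicity reach an autocorrelation at `lag` at least as large
    as the one observed in x?"""
    rng = random.Random(seed)
    obs = acf(x, lag)
    hits = sum(
        1
        for _ in range(n_null)
        if acf([rng.gauss(0.0, 1.0) for _ in range(len(x))], lag) >= obs
    )
    return hits / n_null

# 36 samples taken every 25 min; a ~300-min cycle repeats every 12 samples
series = [math.sin(2 * math.pi * t / 12) for t in range(36)]
print(acf_pvalue(series, lag=12) < 0.05)  # True
```

A series passes the test when its autocorrelation at the candidate period is implausibly large under the Gaussian null.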
+{"text": "A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. Many public health bodies tasked with surveillance of infectious diseases use statistical surveillance systems to process large quantities of data in order to detect emerging outbreaks and, if appropriate, implement control measures. For England and Wales, a laboratory-based reporting system has been the mechanism for national and regional surveillance of infectious diseases, with laboratory reports collated at the centre in London. The centre was called the Health Protection Agency (HPA) until 2013, when it became part of Public Health England (PHE). 
For more than twenty years the HPA/PHE used an algorithm reported in the literature. A number of criteria have been proposed for evaluating the performance of outbreak detection systems, and reviews of these may be found in the literature. Instead of relying on such criteria, we propose the use of a scoring rule that is based on the probabilities that a surveillance method gives for \u2018outbreak\u2019 and \u2018no outbreak\u2019 each week. A novel feature of the scoring rule is that it reflects the number of cases that have been observed, giving greater weight to higher counts, regardless of whether or not there was an outbreak that week. In practice, this means that missing a large outbreak is generally penalised more than missing a small outbreak. This seems appropriate, as missing a large outbreak tends to have more adverse consequences. Also, a surveillance algorithm must sometimes miss very small outbreaks if it is to have reasonable specificity. The scoring rule we use is strictly proper, so the expected score is maximised by giving probabilities that mirror reality. The purpose of the surveillance algorithms is to monitor a large number of diseases to detect which of them, if any, display signs of an outbreak. From this perspective, the only results of interest are those that suggest an outbreak is likely or highly likely to have arisen. Our chosen scoring rule reflects this purpose; it is asymmetric and discriminates far more between probabilities in the range 0.95\u20131 than in the range 0\u20130.9. To our knowledge, this is the first use of a proper scoring rule that is targeted to a small section of the probability range. An important question is how to strike a balance between specificity and sensitivity. One way of partially answering this question is to consider the situation where the time series of disease occurrences is generated from a mathematical model.
Then, given the data for the current week and preceding weeks, we could know, in principle, the true probability each week of \u2018outbreak\u2019 and \u2018no outbreak\u2019. A surveillance system that consistently matches these probabilities should expect to get a better score than a system that does not match these probabilities. By definition this will be the case if, and only if, the scoring rule is strictly proper. In Section 2 we briefly describe the outlier detection algorithms examined in this paper. In Section 3 we describe the test data used for evaluation and the procedure for deriving it from the historical time series. The motivation for the procedure and its benefits are discussed. The evaluation criteria used to compare the performance of algorithms are given in Section 4. These include the novel scoring rule. Results from the comparison of algorithms are presented in Section 5 and some concluding comments are made in Section 6. We refer to the original algorithm as the HPA algorithm because of its long-term use by the HPA. The negative-binomial and quasi-Poisson algorithms are closely related: the former uses a negative binomial distribution to calculate the threshold U while the latter uses a normal approximation. The differences between the HPA algorithm and these other two algorithms are greater. The HPA algorithm only includes a linear time trend if there is evidence that there is a trend and uses different criteria for down-weighting past values that are high. Also, to handle seasonality it restricts the data it uses from any year to a seven-week window centred on the current week, while the other algorithms use the full data and model seasonality using a 10-level factor. From its original design, the HPA algorithm does not flag a disease as a potential outbreak unless the total count in the past four weeks exceeds 4. Noufaily et al. considered forms of the negative-binomial and quasi-Poisson algorithms that adopted this policy and those are the forms used here, as small isolated counts of a very rare disease should not, in general, be flagged as aberrant.
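The defining property of a strictly proper scoring rule, that the expected score is maximised by reporting the true probability, can be checked numerically. The sketch below uses the familiar quadratic (Brier-type) score purely as an illustration of the concept; it is not the asymmetric beta scoring rule used in the paper.

```python
def brier_score(p, outcome):
    """Quadratic (Brier-type) score, signed so that larger is better.
    A standard strictly proper rule, used here only to illustrate
    propriety; it is NOT the paper's beta scoring rule."""
    return -((outcome - p) ** 2)

def expected_score(p, q):
    """Expected score of a forecast p when the true outbreak
    probability that week is q."""
    return q * brier_score(p, 1) + (1 - q) * brier_score(p, 0)

# For a strictly proper rule the expectation is maximised at p = q,
# so a system reporting the true probabilities cannot be beaten.
q = 0.97
grid = [i / 1000 for i in range(1001)]
best_p = max(grid, key=lambda p: expected_score(p, q))
print(best_p)  # 0.97
```

An improper rule would reward shading the forecast away from the truth, which is exactly what the comparison of surveillance algorithms must avoid.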
The only difference between the negative-binomial algorithm and the quasi-Poisson algorithm is that the former uses a negative binomial distribution to calculate the threshold U, while the latter uses a normal approximation. From the simulation study that they conducted, Noufaily et al. concluded that the false-positive rate given by the HPA algorithm is too high, primarily due to excessive down-weighting of high baseline values and reliance on too few baseline weeks. They applied the algorithms of interest here to real data from PHE for the year 2011 and found that the two modified algorithms flagged high values at a much lower rate than the HPA algorithm. However, the specificity and sensitivity of algorithms could not be determined because of the limitations of using real data: the precise occurrences and sizes of outbreaks are not known. Here we start with historical time series of real disease counts and from them construct realistic time series in which the details of outbreaks are known. The data are from PHE for the years 1991-2011 and relate to 3,303 distinct types of infectious organism whose occurrence frequencies range across six orders of magnitude. These data have been analysed previously. The diversity in the time series of different diseases is striking. Plot (a) (Streptococcus coagulase negative) shows counts that sometimes exceed 600, while the counts in plot (b) (Schistosoma haematobium) never exceed 10. Shapes also vary. Plot (a) shows a clear trend, plot (b) is flat with a number of one-week spikes, plot (c) shows marked seasonality and plot (d) (Campylobacter coli) and plot (e) (Acinetobacter SP) show both seasonality and trend. Each disease\u2019s series was separated into one series of baseline counts (series A) and another of outbreaks (series B).
The procedure is iterative and starts with A equated to the observed time series, {yi, i = 1, 2, \u2026}, where yi is the count in week ti for the organism of current interest. The following steps were then followed for each organism to separate its time series of weekly counts into one time series of baseline data and one of outbreaks. Step 1. A generalised linear model (GLM) or generalised additive model (GAM) is fitted to A using the mgcv package in R. In either case, a quasi-Poisson model with a log link function is used and a seasonal covariate factor is included if there is evidence (at the 5% significance level) that its inclusion improves model-fit. The seasonal factor has 12 levels, one for each month. A GLM is fitted if there is no evidence (at the 5% significance level) of a linear trend over time; otherwise a GAM with a smoothed time trend is fitted. Step 2. Output from the GLM or GAM includes fitted values giving the weekly expected disease counts and an estimate of the dispersion parameter, \u03d5, that is constrained to be at least 1. Series A is long (weekly counts over at least an 8-year period) and the models are not complex, so the risk of overfitting is small and any overfitting would have little effect. Step 3. We next determine whether any weekly counts in A should be classified as extreme, where \u03b7i is a low estimate of the baseline count for week i. In implementing the scoring rule, we take the corresponding week in each of the previous five years and set \u03b7i equal to the smallest number of cases in any one of those weeks. The weight gi is then defined so that it is proportional to the square-root of a high estimate of the outbreak size, yi \u2212 \u03b7i. This gives greater importance to large outbreaks, while gi will usually also be non-zero in weeks without outbreak.
We use a capped maximum, rather than the raw maximum, so that a few common diseases do not dominate the overall scores of algorithms. The approach we advocate is to set the gi-weights in this way. While the above form of scoring rule is useful for evaluating surveillance methods, it cannot be applied to weather forecasting because of intrinsic differences between the two contexts. With surveillance methods a quantity is observed that relates to the magnitude of an outbreak if an outbreak has occurred, but the question of whether an outbreak has occurred is still of interest. In weather forecasting this does not happen when, say, the forecast is giving the probability of rain: if the quantity of rain that has fallen in a day is known, then we know whether it has rained. Consequently, in weather forecasting other approaches have been proposed to take the quantity of rainfall into account in evaluating rainfall forecasts. To apply the scoring rule to an algorithm, the algorithm\u2019s value for pi must be calculated for each week. If there were four or fewer cases of disease in the last four weeks then, by assumption, there is no outbreak in the current week and pi is set equal to 0. Similarly, we set pi equal to 0 in weeks where the observed count is less than the number the algorithm expects when there is no outbreak. For other weeks, let Bi denote the event that there is an ongoing outbreak of the disease under consideration at time ti. To obtain pi, we calculate (i) P(Bi|Hi), (ii) P(Yi = yi|no outbreak, Hi) and (iii) P(Yi = yi|Bi, Hi), where Hi denotes the historical data available at time ti. Then pi is given by Bayes theorem: pi = P(Yi = yi|Bi, Hi)P(Bi|Hi)/{P(Yi = yi|Bi, Hi)P(Bi|Hi) + P(Yi = yi|no outbreak, Hi)P(no outbreak|Hi)}. The outbreak detection algorithms were designed to evaluate the probability that the count Yi is greater than a threshold value, U, when there is no outbreak. Hence we can equate P(Bi|Hi) to the proportion of weeks in the series that are classified as \u2018extreme\u2019 by the algorithm. 
It seems prudent to anticipate an outbreak in at least one week in twenty years, so we set P(Bi|Hi) equal to 0.001 if it is less than that value. Quantity (iii), P(Yi = yi|Bi, Hi), is trickier to determine. For each organism, we first identify the weeks for which the algorithm flags an aberration or outbreak. If there were more than five outbreak weeks then: 1. For both outbreak and non-outbreak weeks we calculate the standardised count, SCi(yi), where yi is the observed count that week. 2. We select the outbreak weeks and fit a log-normal distribution to the standardised counts of those weeks. 3. For all weeks for which pi has not been set to 0, we equate P(Yi = yi|Bi, Hi) to the difference between P{\u03be \u2265 SCi(yi)} and P{\u03be \u2265 SCi(yi + 1)}, where \u03be is a variable that has the log-normal distribution fitted in step 2. If there were five or fewer outbreak weeks then: 4. Taking just the outbreak weeks, a Poisson distribution is fitted to the counts above baseline, with mean equal to the maximum likelihood estimate of \u03bb multiplied by the standard deviation of the baseline count at ti, (\u03d5\u03bci)1/2. 5. For all weeks (except when pi = 0), P(Yi = yi|Bi, Hi) is equated to the probability of yi under this fitted distribution. False positive rates were calculated for five groups: counts of 1\u20134, 5\u201314, 15\u201350, 51\u2013150, and over 150. The false positive rates for the other 959 diseases are shown in the results. Turning to sensitivity, long outbreaks are easier to detect than short outbreaks, so sensitivity will vary with outbreak duration. Rather than simply detecting outbreaks, it is important to detect them in a timely fashion. The beta scoring rule was used to make comparisons between algorithms that reflect both specificity and sensitivity. The comparison of algorithms provided by a scoring rule is important. For this reason, the sensitivity of conclusions to the precise form of the scoring rule should be examined.
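The Bayes-theorem computation of pi from the three quantities above can be written out directly. This is a minimal sketch under the stated assumptions (the 0.001 prior floor applied to P(Bi|Hi)); the function and argument names are invented, and the numeric likelihoods are made up for the example.

```python
def outbreak_probability(prior, lik_outbreak, lik_no_outbreak):
    """Posterior probability p_i of an ongoing outbreak via Bayes theorem.
    prior           = P(B_i | H_i), floored at 0.001 as in the text
    lik_outbreak    = P(Y_i = y_i | outbreak, H_i)     (quantity iii)
    lik_no_outbreak = P(Y_i = y_i | no outbreak, H_i)  (quantity ii)
    Names are illustrative, not from the paper's code."""
    prior = max(prior, 0.001)
    num = lik_outbreak * prior
    den = num + lik_no_outbreak * (1 - prior)
    return num / den

# a week whose count is 50x more likely under an outbreak than not
p_i = outbreak_probability(prior=0.01, lik_outbreak=0.10, lik_no_outbreak=0.002)
print(round(p_i, 3))  # 0.336
```

Even a count that strongly favours the outbreak hypothesis yields a moderate pi when the prior is small, which is why the scoring rule's emphasis on the 0.95\u20131 range matters.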
To this end, we also evaluated the algorithms using two other forms of the scoring rule: Scoring rule 2. The scoring rule parameters, a and b, were set equal to 31 and 1.5, so that occurrence of an outbreak gives a higher score if p exceeds 0.95 and non-occurrence gives a higher score if p is less than 0.95. (The corresponding crossover point for the original scoring rule is 0.985, as noted earlier.) Scoring rule 3. An unweighted form of the scoring rule in which the gi-weights are omitted. (In each case F1 and F2 are the cumulative distribution functions of beta distributions.) Results for these scoring rules are given in the results tables. The study reported here followed the straightforward path of: constructing realistic series of test data in which it is known where outbreaks had been injected; selecting/forming suitable evaluation metrics; and applying algorithms to the series of test data and comparing their performances using the evaluation metrics. The test data were formed from time series of the actual weekly counts of the number of cases of individual diseases. The series for each disease was separated into a series of baseline counts (series A) and a series of outbreak counts (series B). The outbreak counts were translated by twelve months (giving series C) and recombined with A to form a test data series. This was done for 2304 different diseases, giving test data that reflected the diversity of patterns found in real data. The critical step in constructing the test data is the separation of the original series into baseline counts and outbreak counts. A number of methods were tried (not reported here) and their resulting time series of baseline and outbreak counts for individual diseases were examined. With the method that was adopted, visual inspection of plots showed that weeks with/without outbreak appeared to be classified sensibly. When an outbreak is detected only belatedly, the investigation will often be terminated without identifying causal exposure.
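The test-data construction just described (translate the outbreak series B by twelve months to give series C, then recombine with baseline series A) can be sketched as follows. The function name is invented, the boundary handling (dropping outbreaks shifted past the end of the series rather than wrapping) is an assumption, and the toy data are made up.

```python
def build_test_series(baseline, outbreaks, shift_weeks=52):
    """Translate the outbreak series (B) forward by shift_weeks to give
    series C, then recombine with the baseline series (A), so that the
    weeks containing injected outbreaks are known exactly.
    Assumption: outbreaks shifted past the end of the series are dropped."""
    n = len(baseline)
    shifted = [0] * n                       # series C
    for week, count in enumerate(outbreaks):
        if week + shift_weeks < n:
            shifted[week + shift_weeks] = count
    test = [a + c for a, c in zip(baseline, shifted)]
    injected_weeks = [w for w, c in enumerate(shifted) if c > 0]
    return test, injected_weeks

baseline = [3] * 156                 # three years of flat weekly counts
outbreaks = [0] * 156
outbreaks[10] = 7                    # one known outbreak in week 10
test, weeks = build_test_series(baseline, outbreaks)
print(weeks)  # [62]
```

Because the injection points are recorded, sensitivity and specificity can be measured exactly, which is impossible with the raw historical series.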
In addition, some outbreaks are missed, as is clear from previously reported work. The study reported here is one of the most realistic evaluations of disease surveillance systems to have been conducted. It was possible because of the large reservoir of past data that was available. The original algorithm was in operation for over twenty years, monitoring infectious disease incidence for a population of 57 million, so decisions on its replacement could not be taken lightly. During the preparation of this paper, Public Health England have replaced the HPA algorithm with the negative binomial algorithm, in line with the findings reported here. As well as monitoring combined disease surveillance data from England and Wales, the new algorithm has also been applied to individual hospital trusts to examine time series of cases of important antimicrobial resistant pathogens. Modified forms of the new algorithms are also in use in the Robert Koch Institute, Berlin. Let {yi, i = 1, 2, \u2026} denote the time-series of the number of cases of the disease under consideration, where yi denotes a count in week ti. Let t0 denote the current week. The HPA algorithm uses quasi-Poisson regression to identify aberrations in the weekly counts. It assumes that yi follows a linear time trend and is distributed with mean \u03bci and variance \u03d5\u03bci, with log \u03bci = \u03b1 + \u03b2ti, where \u03d5 denotes a parameter that allows for over-dispersion. The following gives details. If t0 is week \u03c4 of the current year, then only data from weeks \u03c4 \u2212 3 to \u03c4 + 3 of previous years are used in the analysis. Let t(1),\u2026, t(n*) denote the weeks that are used, y(1),\u2026, y(n*) the corresponding counts, and let wi be the weight in week t(i). The algorithm\u2019s estimate of \u03d5 is \u03d5 = \u03a3 wi (y(i) \u2212 \u03bc(i))2/\u03bc(i) divided by (n* \u2212 r), where r = 1 or r = 2 depending on whether a time trend has been fitted. 
The weights satisfy wi = \u03b3 si\u22122 if si > 1 and wi = \u03b3 otherwise, where \u03b3 is a constant such that the weights sum to n*, the si are scaled Anscombe residuals, and the hii are the diagonal elements of the hat matrix. An iterative reweighting procedure is used to estimate model parameters and correct for past outbreaks in the baseline data. If there is no evidence of a linear time trend at the 5% significance level, then \u03b2 is set to 0. Let y0 denote the count for the current week. The algorithm calculates the threshold value, U, from U = \u03bc0{1 + (2/3) z\u03b1 (\u03d5/\u03bc0)1/2}3/2, where z\u03b1 is the 100(1 \u2212 \u03b1)-percentile of the standard normal distribution. Applying a 2/3 power transformation to a Poisson variate induces an approximately symmetric distribution, which underlies the 3/2 power in this formula. The exceedance score is then given by X = (y0 \u2212 \u03bc0)/(U \u2212 \u03bc0). The negative-binomial and quasi-Poisson algorithms model seasonality using a 10-level factor formed from the reference window (\u03c4 \u00b1 3, as in the HPA algorithm) and nine 5-week periods each year: log \u03bci = \u03b1 + \u03b2ti + \u03b4\u03c4(ti), where \u03b4\u03c4(ti) is the seasonal factor. These algorithms always include a term for linear trend (never setting \u03b2 to 0), and only set wi equal to \u03b3 si\u22122 if si > 2.58, rather than si > 1."}
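The 2/3-power threshold and exceedance score can be illustrated numerically. The delta-method form below is a simplified sketch under stated assumptions (it ignores estimation error in the baseline mean, which the published algorithm accounts for); the function names are invented.

```python
import math

def power_transform_threshold(mu0, phi, z_alpha=2.58):
    """Upper threshold U for a quasi-Poisson baseline count with mean
    mu0 and variance phi*mu0, via a 2/3 power transformation.
    Simplified sketch: estimation error in mu0 is ignored here."""
    return mu0 * (1.0 + (2.0 / 3.0) * z_alpha * math.sqrt(phi / mu0)) ** 1.5

def exceedance_score(y0, mu0, U):
    """Exceedance score X = (y0 - mu0)/(U - mu0): how far the observed
    count sits beyond the expected count, relative to the threshold."""
    return (y0 - mu0) / (U - mu0)

U = power_transform_threshold(mu0=100.0, phi=1.0)
x = exceedance_score(130.0, 100.0, U)
print(U > 100.0, x > 1.0)  # an X above 1 flags the week as aberrant
```

Greater over-dispersion (larger phi) widens the threshold, which is why the dispersion estimate matters for specificity.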
+{"text": "Most \u2018transcriptomic\u2019 data from microarrays are generated from small sample sizes compared to the large number of measured biomarkers, making it very difficult to build accurate and generalizable disease state classification models. Integrating information from different, but related, \u2018transcriptomic\u2019 data may help build better classification models. However, most proposed methods for integrative analysis of \u2018transcriptomic\u2019 data cannot incorporate domain knowledge, which can improve model performance. To this end, we have developed a methodology that leverages transfer rule learning and functional modules, which we call TRL-FM, to capture and abstract domain knowledge in the form of classification rules to facilitate integrative modeling of multiple gene expression data. TRL-FM is an extension of the transfer rule learner (TRL) that we developed previously. The goal of this study was to test our hypothesis that \u201can integrative model obtained via the TRL-FM approach outperforms traditional models based on single gene expression data sources\u201d.To evaluate the feasibility of the TRL-FM framework, we compared the area under the ROC curve (AUC) of models developed with TRL-FM and other traditional methods, using 21 microarray datasets generated from three studies on brain cancer, prostate cancer, and lung disease, respectively. The results show that TRL-FM statistically significantly outperforms TRL as well as traditional models based on single source data. In addition, TRL-FM performed better than other integrative models driven by meta-analysis and cross-platform data merging.The capability of utilizing transferred abstract knowledge derived from source data using feature mapping enables the TRL-FM framework to mimic the human process of learning and adaptation when performing related tasks. 
The novel TRL-FM methodology for integrative modeling for multiple \u2018transcriptomic\u2019 datasets is able to intelligently incorporate domain knowledge that traditional methods might disregard, to boost predictive power and generalization performance. In this study, TRL-FM\u2019s abstraction of knowledge is achieved in the form of functional modules, but the overall framework is generalizable in that different approaches of acquiring abstract knowledge can be integrated into this framework.The online version of this article (doi:10.1186/s12859-015-0643-8) contains supplementary material, which is available to authorized users. With the advent of high-throughput \u2018transcriptomic\u2019 technology, biomarkers measured in tissue or bodily fluids have generated a vast amount of data, from which classification models can be and have been developed to predict the early development, diagnosis, and prognosis of diseases . A majormeta-analysis and cross-platform data merging [To address these challenges a combination of multiple, but independent studies, which were designed to investigate the same biological problem, have been proposed to improve classification performance in diagnostic and prognostic models \u20134. Two o merging . In the A major limitation about these approaches is that they are unable to incorporate prior domain\u00a0knowledge nor transfer latent biological information, which might help boost predictive performance. Studies by Ptitsyn and colleagues revealedGanchev and colleagues proposed a novel framework \u2014 transfer rule learning (TRL) \u2014 which leverages the concept of transfer learning to build an integrative model of classification rules from two datasets . TransfeThe TRL framework has limited capabilities. Its strategy for knowledge transfer could be improved. Generally, humans are able to recognize and apply knowledge learned from a previous task to a new task if they can align the commonalties between the two , 10. 
Several genes, though represented by different symbols, can have something in common. For instance, they might belong to the same biological pathway or be associated with the same disease. In humans, for example, the TP53 gene, which encodes the tumor protein p53, is known to play a key role in the activation and/or control of apoptosis. Meanwhile the product of the CASP6 gene cleaves other proteins to trigger the apoptotic process. The symbols TP53 and CASP6 are different, but both genes play a prominent role in apoptosis. TRL and several meta-analysis methods cannot capture this functional similarity, or many others, for integrative analysis.We present in this paper TRL-FM, an extension of the TRL framework, which can capture and incorporate abstract knowledge to improve integrative modeling of MAGE datasets. TRL-FM leverages functional modules to capture and abstract underlying commonalities, such as functional similarities, among variables across MAGE datasets (e.g., between TP53 and CASP6 as illustrated above) to facilitate knowledge transfer. To the best of our knowledge, this is the first paper proposing the application of functional modules via knowledge transfer for integrative rule modeling of multiple gene expression datasets.A functional module (FM) consists of a group of cellular components and their interactions that can be associated with a specific biological process. An FM can be a discrete functional entity separable from other FMs or an amalgam of various FMs with a single functional theme. Our goal in this study was threefold. First, to test whether FMs can be used to capture the underlying commonality among variables of different but related gene expression datasets, and whether they are more effective when used as bridges to assist knowledge transfer than relying on identical variables. Second, to test the hypothesis that integrative modeling via the TRL-FM approach outperforms traditional models based on single gene expression data sources. 
Last, to evaluate and compare the classification performance of TRL-FM with traditional methods, using 21 gene expression datasets that were collected from three respective studies: one on brain cancer, one on prostate cancer, and one on a lung disease (idiopathic pulmonary fibrosis, or IPF).MAGE datasets are comprised of hundreds or thousands of measured variables. The goal in integrative modeling is to identify and select a handful of relevant variables from among these hundreds or thousands that can accurately predict a disease state or estimate the risk of disease in an individual. The selected variables serve as building blocks for constructing classification models. Moreover, the variables in MAGE data are continuous in most cases, meaning that the variables can take an infinite number of possible values within a specified range. Continuous data pose several challenges to knowledge discovery and data mining tasks, making it more difficult to create compact, interpretable, and accurate classification models. We applied Efficient Bayesian Discretization (EBD), a supervised discretization method, to address this issue.Given a list of arbitrary genes, several methods can be used to identify underlying biological commonalities, which can subsequently be abstracted into domain knowledge in the form of functional modules. Biological commonality here can mean association with a common disease, function, pathway, etc. Gene set enrichment analysis (GSEA), for instance, is a popular method used to identify functional sets of genes associated with particular conditions of interest from \u2018transcriptomic\u2019 data. There is a plethora of GSEA methods, each with its inherent strengths and limitations.Let G denote the set of input genes; we map each gene g (where g\u2009\u2208\u2009G) to the GO terms go (where go\u2009\u2208\u2009GO) that annotate it. Here, GO refers to a set of biological process terms in the Gene Ontology. 
First, we mapped each gene in the input set to the corresponding GO term(s) that annotate(s) the gene, according to the GO annotation database. For example, the mapping M(g1)\u2009=\u2009{go1,\u00a0go3} means that terms go1 and go3 annotate gene g1. Subsequently, we formed a union of all GO terms that annotate at least one member of the input gene set. This set of GO terms served as input to the clustering phase.Second, using semantic similarity as a distance measure, we clustered the GO terms.Finally, we mapped each gene gi to cluster Ci if there existed at least one term in Ci that annotates gi. This enabled us to identify groups of genes that perform the same or similar functions, as well as genes that perform multiple functions. Any group of genes that mapped to a particular GO cluster forms a functional module.The TRL-FM framework is driven by the rule learner (RL), a classification rule learning algorithm.Given a set of training examples \u2014 a vector of variable-value pairs, including a class label \u2014 RL learns a set of IF-THEN propositional rules. RL induces rules of the form:IF Condition THEN Consequentwhere the Condition consists of one or more variable tests, which we also call conjuncts, and the Consequent denotes prediction of the target variable, also known as the class variable. Every induced rule has classification-relevant statistics associated with it. For example, let us consider the hypothetical rule:IF (gene1\u2009>\u20091680)\u00a0AND\u00a0(gene2\u2009\u2264\u200928.6) THEN (Class\u2009=\u2009Case)That is, if gene1 is up-regulated and gene2 is down-regulated, then predict the target class as Case. Relevant statistics are associated with each rule induced by RL. In the given example, the ensuing statistics mean that RL induced the rule with a 98 % degree of confidence, which we call the Certainty Factor (CF). Several rule evaluation functions, such as precision or the Laplace estimate, are used by RL to calculate the CF. P represents the p-value, computed by Fisher\u2019s exact test. 
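The three-step module construction described above (gene-to-GO mapping, GO-term clustering, gene-to-cluster mapping) can be sketched in a few lines. This is a toy illustration: the gene symbols, GO term labels, and the pre-computed term clusters below are hypothetical stand-ins for the GO annotation database and the semantic-similarity clustering the paper uses.

```python
# Hypothetical sketch of the functional-module construction described above.
def build_functional_modules(annotations, go_clusters):
    """Map each gene to every GO-term cluster containing at least one term
    that annotates it; genes sharing a cluster form a functional module."""
    modules = {}
    for cluster_id, terms in go_clusters.items():
        members = {g for g, gos in annotations.items() if gos & terms}
        if members:
            modules[cluster_id] = members
    return modules

# Step 1: gene -> GO term mapping, e.g. M(g1) = {go1, go3} (toy annotations).
annotations = {
    "TP53":  {"GO:apoptotic_process", "GO:cell_cycle_arrest"},
    "CASP6": {"GO:apoptotic_process"},
    "ADCY3": {"GO:signal_transduction"},
}
# Step 2: GO terms grouped by semantic similarity (clusters assumed here).
go_clusters = {
    "C1": {"GO:apoptotic_process", "GO:cell_cycle_arrest"},
    "C2": {"GO:signal_transduction"},
}
# Step 3: genes mapped to a common cluster form a functional module.
modules = build_functional_modules(annotations, go_clusters)
print(modules["C1"])  # TP53 and CASP6 land in the same module
```

Note how TP53 and CASP6, despite their different symbols, end up in one module via their shared apoptosis annotation, which is exactly the bridge TRL-FM exploits for knowledge transfer.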
We define other relevant statistics as follows: True Positives (TP) are the number of positive examples that are correctly predicted as positive, while False Positives (FP) are the number of negative examples that are incorrectly predicted as positive. The TP and FP values in the example above mean that, out of 60 data instances that the rule antecedent (Condition) matched logically, 56 were predicted correctly.RL uses prior knowledge a priori in order to improve the search of the hypothesis space. In addition, RL covers rules with replacement: it does not recursively partition the instance space of the training examples, in contrast to decision-tree learners such as C4.5.Internally, RL stores induced rules in a priority queue, sorting them according to their CF and coverage. By default, we set the beam width to 1000. To construct a rule model \u2014 a set of disjunctive rules \u2014 the RL algorithm proceeds as a heuristic beam search through the space of rules, using a general-to-specific approach.The TRL-FM algorithm, illustrated in the figure, accepts as inputs the source and target datasets, including user-specified constraints for RL. EBD discretizes the input datasets if they contain continuous variables. Next, FMs are discovered among the variables selected by EBD to facilitate the transfer of knowledge for learning a model on the target dataset. Knowledge transfer occurs via the formulation of prior hypotheses, which are used to seed learning of the target model via the GeneratePriorRules function. A rule is \u201cgood\u201d if it satisfies the user-specified constraints.Subsequently, the instantiated rules are loaded onto the beam and learning proceeds as a heuristic beam search in the typical RL fashion, as described above, including the specialization step.To test the feasibility of TRL-FM as a viable tool for integrative modeling of MAGE datasets, we applied the framework to learn classification rule models using publicly available datasets. 
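The rule statistics above can be made concrete. The sketch below computes the CF as precision (one of the evaluation functions the text names) and a one-sided Fisher's exact p-value from the hypergeometric tail, using only the standard library. The TP/FP counts (56/60) come from the text; the dataset totals (70 positives, 30 negatives) are hypothetical, added only so the p-value is computable.

```python
from math import comb

def certainty_factor(tp, fp):
    """CF via the precision rule-evaluation function: TP / (TP + FP)."""
    return tp / (tp + fp)

def fisher_exact_one_sided(tp, fp, total_pos, total_neg):
    """One-sided Fisher's exact p-value: probability of seeing at least
    `tp` positives among the n = tp + fp instances the rule matches,
    under the hypergeometric null of no association."""
    n = tp + fp
    total = total_pos + total_neg
    p = 0.0
    for k in range(tp, min(n, total_pos) + 1):
        p += comb(total_pos, k) * comb(total_neg, n - k) / comb(total, n)
    return p

# The hypothetical rule matches 60 instances, 56 correctly (TP=56, FP=4).
cf = certainty_factor(56, 4)
p = fisher_exact_one_sided(56, 4, total_pos=70, total_neg=30)
print(round(cf, 3), p < 0.05)
```

With these (assumed) class totals the rule is far more accurate than chance, so the p-value is well below 0.05, matching how RL filters out uninteresting rules.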
The goals of the experiments were threefold. First, to ascertain TRL-FM\u2019s ability and flexibility in capturing abstract biological knowledge from source datasets in order to facilitate transfer learning. Second, to evaluate the classification performance of models built by TRL-FM and how it compares with traditional methods built on single source datasets. Last, to compare the performance of integrative modeling via the TRL-FM approach with meta-analysis and cross-platform data merging methods.For each study, consider a set of n datasets, D\u2009=\u2009{D1,\u00a0D2,\u00a0\u2026,\u00a0Dn}, where Di represents the ith dataset. Within a set, each dataset Di in turn was set as the target, while the rest, {D\u2009\u2212\u2009Di}, were designated as source data for knowledge transfer. This study design strategy was necessary to test the notion that knowledge transfer from multiple sources will more likely improve learning on the target.Our task was to build a classification rule model that can classify normal tissue versus diseased tissue from the same organ, e.g., to distinguish normal prostate tissue from prostate cancer. We designed our experiments to evaluate whether knowledge learned from datasets of the same MAGE example set can be transferred to enhance learning of a classification model on a new dataset.We used the area under the Receiver Operating Characteristic curve (AUC) to evaluate classification performance.In addition, we compared the performance of TRL-FM over TRL and RL (baseline). Note that the TRL framework is constrained to a single dataset as source, while TRL-FM extracts knowledge from multiple sources via functional mapping. This means that, for the TRL experiments, for every ith dataset designated as the target, each of the remaining datasets in the same study in turn had to be set as the source. 
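The rotation design described above (each dataset in turn as target, the rest as sources) can be sketched as a simple enumeration. The dataset names below are placeholders, not the actual study identifiers.

```python
# Sketch of the experimental design: within a study of n datasets, each
# dataset in turn is the target and the remaining n-1 are the sources.
def target_source_rounds(datasets):
    """Yield (target, sources) pairs, one round per dataset."""
    for i, target in enumerate(datasets):
        sources = datasets[:i] + datasets[i + 1:]
        yield target, sources

# A hypothetical 7-dataset study (placeholder names):
prostate = [f"D{i}" for i in range(1, 8)]
rounds = list(target_source_rounds(prostate))
print(len(rounds), len(rounds[0][1]))  # 7 rounds, 6 source datasets each
```

With 21 datasets across the three studies, rotating the target within each study is what yields the 21 sets of FMs reported later.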
Finally, we compared the performance of our methods with traditional algorithms for single source datasets, namely Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), Random Forest (RF), C4.5, Na\u00efve Bayes (NB), and Penalized Logistic Regression (PLR). Using the same metric (AUC), we evaluated the classification performance of these methods on the raw datasets as well as on cross-study integration via meta-analysis and data merging. Several methods have been proposed for integrating microarray data via meta-analysis and cross-platform data merging.For each round of TRL-FM experiments, we identified a set of FMs from the list of relevant source variables. In all, 21 sets of FMs were generated, since each dataset, in turn, was set as the target in each round of experiments. To simplify the rest of this discussion, we randomly selected one FM table from each disease study.We made three observations from the functional modules. First, almost all of the functional themes were composed of more than one gene. Second, some genes were multi-functional; that is, they were associated with more than one functional theme. For example, ADCY3 (adenylate cyclase 3) was associated with DNA repair, protein phosphorylation, transport, and response to glucose stimuli, and CBS (cystathionine beta synthase) was associated with some metabolic processes, brain development, and the regulation of kinase activity. Third, most of the discovered functional themes, such as signal transduction, apoptotic processes, cell differentiation, and cell proliferation, are associated with the hallmarks of cancer. 
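Since every comparison above is scored by AUC, a small reference implementation helps fix what is being measured: the probability that a randomly chosen diseased sample is ranked above a randomly chosen normal one (ties counted as half). The classifier scores below are toy values for illustration.

```python
def auc(pos_scores, neg_scores):
    """AUC as the rank statistic: P(random positive scores higher than a
    random negative), with ties counting 1/2. Equivalent to the area
    under the empirical ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores from a hypothetical classifier:
print(auc([0.9, 0.8, 0.7], [0.1, 0.4, 0.7]))
```

The quadratic loop is fine at the sample sizes typical of microarray studies; for large cohorts a sort-based (Mann-Whitney) computation would be preferred.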
The biological information revealed from these observations, obtained using the TRL-FM approach to capture, abstract, and formulate propositional rules for knowledge transfer, could be essential for algorithm and model development for integrative modeling of MAGE datasets. Normally, for symbolic data mining algorithms like RL, the interestingness of a newly induced rule is evaluated by objective measures such as confidence and support. Other, subjective methods, which leverage background knowledge or expert opinion, have also been proposed to define explicit criteria for rule interestingness.Statistical tests at \u03b1\u2009=\u20095\u2009% showed that transfer with TRL-FM statistically significantly improves on the baseline, more so than even the best TRL. No individual FM consistently improved classification performance across all target datasets; that is, there was no direct correlation between functional themes and positive (or negative) transfer. However, what became clear was that an ensemble of the FMs, more often than not, resulted in positive transfer. The reason for this improvement could be that an aggregate of FMs widens the space of relatedness among variables of the source and target datasets. The intuition here is that the more related two domains are, the better the learning performance of transfer learning. In addition, when snippets of information from the FMs are fused together, potential errors inherent in knowledge transfer via individual FMs can be alleviated. Results from other studies support our finding that a combination of FMs, more often than not, improves performance for integrative analysis of genomic data. 
For example, in the transfer of classification rules from the Larsson to KangA data, TRL is unable to transfer knowledge because the set of variables used to build the source model does not overlap the set of variables that the target model incorporates. TRL-FM, on the other hand, is able to transfer knowledge because of the association of PKIG (from the source model) and ASPN (from the target model) with cell signaling. In another case, the set of variables contained in the source model does not overlap with that incorporated in the target model; nevertheless, TRL-FM was able to use functional mapping via FMs to instantiate prior rules for seeding learning on the target, using ADCY2, GJA1, SNAI2, and MAP3K14 due to their functional association with MUC1 \u2014 signal transduction and regulation of transcription.In this paper, we developed and evaluated a novel TRL-FM framework that extends existing classification rule-learning methods to use abstract domain knowledge to facilitate integrative modeling of multiple types of gene expression data. Empirical results from this study highlight several key points. First, the results from our comprehensive experiments lend strong support to our hypothesis that the TRL-FM approach can statistically significantly outperform TRL, as well as traditional models based on single gene expression data sources. Second, TRL-FM\u2019s ability to leverage functional modules to capture the relatedness among source and target variables is more intelligent, effective, and biologically intuitive than TRL\u2019s reliance on variable overlaps, which can be superficial and uninformative. Third, integrative modeling via the TRL-FM framework leads to better performance than other integrative analysis approaches, like meta-analysis, which cannot transfer vital information from one dataset to another. Last, the TRL-FM framework, when extended and refined, can serve as a viable alternative and/or complementary methodology for integrative modeling of multiple \u2018transcriptomic\u2019 datasets."}
+{"text": "\u2018Omics\u2019 technologies capture quantitative measurements on tens of thousands of molecules, but despite their high information content, analysis of such data remains challenging. In a time course \u2018omics\u2019 experiment, molecules are measured for multiple subjects over multiple time points. This results in a large, high-dimensional dataset, which requires computationally efficient approaches for statistical analysis. Moreover, methods need to be able to handle missing values and various levels of noise. We present a novel, robust and powerful framework to analyze time course \u2018omics\u2019 data that consists of three stages: quality assessment and filtering, profile modelling, and analysis. The first step consists of removing molecules for which expression or abundance is highly variable over time. The second step models each molecular expression profile in a linear mixed model framework which takes into account subject-specific variability. The best model is selected through a serial model selection approach and results in dimension reduction of the time course data. The final step includes two types of analysis of the modelled trajectories, namely, clustering analysis to identify groups of correlated profiles over time, and differential expression analysis to identify profiles which differ over time and/or between treatment groups. Through simulation studies we demonstrate the high sensitivity and specificity of our approach for differential expression analysis. We then illustrate how our framework can bring novel insights on two time course \u2018omics\u2019 studies in breast cancer and kidney rejection. The methods are publicly available, implemented in the R CRAN package lmms.Over the past decade, the use of \u2018omics\u2019 to take a snapshot of molecular behaviour has become ubiquitous. It has recently become possible to examine a series of such snapshots by measuring an \u2018ome\u2019 over time. 
This provides a powerful tool to study stressor-induced molecular behaviour and developmental processes.Robust and powerful analysis tools are critical for capitalizing on the wealth of data to answer key questions about system response and function. In addition to addressing the high-dimensionality of the data, such tools must account for a high number of missing values, and also variability within and between studied subjects. Many methods are limited by scale, and are unable to handle either a large number of time points, a varying number of time points per subject, or a varying number of replicates per time point.The benefit of decreasing the number of profiles analyzed via filtering is evident when considering the scale of typical time course \u2018omics\u2019 experiments. Tens of thousands of molecules can be measured at different time points, requiring multiple hypothesis tests to determine differential expression. While the false positive rate can be controlled using multiple testing corrections, these corrections reduce statistical power as the number of tests grows.A popular modelling approach for time course data is smoothing splines, which use a piecewise polynomial function with a penalty term.After the filtering and modelling steps, the resulting summarized profiles can be clustered to gain biological insight from their similarities. Indeed, clusters of correlated activity patterns may predict putative functions for molecules and reveal stage- and tissue-specific regulators.Hypothesis testing can also be performed within the mixed effect model framework to gain biological insight from differences between groups and across time. Several methods have been proposed which can all handle missing data and different numbers of replicates per time point, but are often limited when only a few time points are observed, as is typically the case for costly high-throughput experiments. 
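The multiple-testing point above is worth making concrete. The Benjamini-Hochberg procedure is the standard FDR correction in this setting; a minimal sketch (not the authors' implementation) is:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (FDR control): sort the
    p-values, scale the i-th smallest by m/i, then enforce monotonicity
    by a running minimum from the largest rank downwards."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.02, 0.03, 0.04, 0.05]))
```

Note that every adjusted value here is 0.05: each raw p-value pays for the total number of tests m, which is exactly why filtering non-informative profiles before testing (reducing m) recovers statistical power.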
Approaches such as linear models for microarray data (LIMMA) test contrasts of interest.In this paper we propose a novel framework for time course \u2018omics\u2019 studies, summarized in the overview figure. We first applied the filtering and modelling stages of our framework to two publicly available transcriptomics datasets, which are briefly described below. The main analyses and biological interpretations were then performed on two proteomics datasets from breast cancer and kidney rejection studies.The evolutionary principles of modular gene regulation in yeast were investigated in a transcriptomics study of yeast species (including S. paradoxus) with two to four biological replicates per time point. The anti-tumour efficiency of a chemotherapeutic drug on bone marrow in mice was investigated in a second transcriptomics study.Proteomic changes in MCF-7 cells resulting from insulin-like growth factor 1 (IGF-1) stimulation were investigated in a proteomics study in which log2 fold changes for time points 6, 12 and 24 h relative to baseline (0 h) were reported for 264 proteins with a minimum of two measured replicates. We applied our full data-driven modelling approach to this dataset, finishing with cluster analysis to explore patterns of protein response to IGF-1 stimulation.Models: For each molecule, we denote by yij(tij) its expression for subject i at time tij, where i = 1, 2, \u2026, n, j = 1, 2, \u2026, mi, n is the sample size and mi is the number of observations for subject i. The first model assumes the response is a straight line and is not affected by subject variation: we fit a simple linear regression of expression yij(tij) on time tij, where the intercept \u03b20 and slope \u03b21 are estimated via ordinary least squares: yij(tij) = \u03b20 + \u03b21tij + \u03f5ij.As nonlinear response patterns are commonly encountered in time course biological data, our second model represents the mean response by a penalized spline f, which depends on a set of knot positions \u03ba1, \u2026, \u03baK in the range of {tij}, some unknown coefficients uk to be estimated, an intercept \u03b20 and a slope \u03b21. That is, f(t) = \u03b20 + \u03b21t + \u2211k uk(t \u2212 \u03bak)+, where (\u00b7)+ denotes the positive part. The number of knots K and their positions influence the shape of the curve. 
Following standard practice for penalized splines, we choose the number of knots K based on the number of distinct time points T and place \u03ba1\u2026\u03baK at quantiles of the time interval of interest.In order to account for subject variation, our third model adds a subject-specific random intercept Ui to the mean response f(tij). Assuming f(tij) to be a fixed (yet unknown) population curve, Ui is treated as a random realization from an underlying Gaussian distribution, independent from the previously defined random error term \u03f5ij. Hence, the subject-specific curves are expected to be parallel to the mean curve, as we assume the subject-specific random effects to be constant over time: yij(tij) = f(tij) + Ui + \u03f5ij.A simple extension to this model is to assume that the subject-specific deviations are straight lines. Our fourth model therefore fits subject-specific random intercepts ai0 and slopes ai1: yij(tij) = f(tij) + ai0 + ai1tij + \u03f5ij.Derivative information for Linear Mixed Model Splines (DLMMS): The derivative of expression profiles contains valuable information about the rate of change of expression over time. For the truncated-line spline basis, the derivative of f(t) with respect to t in the relevant time interval is f\u2032(t) = \u03b21 + \u2211k uk 1(t > \u03bak).Clustering of time profiles allows insight into which molecules share similar patterns of response, which may in turn indicate a shared biological basis. Similarities between trajectories may be seen not only in terms of shape and magnitude, but also rates of change, or speed. However, detecting these similarities can be challenging due to noise and missing values in subject-specific measurements. Hence, the choice of modelling approach often has critical impact on the ability to identify clusters of biologically similar molecules.We compared our modelling approaches LMMS and DLMMS to two single-step models, using the workflow shown in the corresponding figure. For clustering, we compared the performance of five algorithms using the Dunn index from the clValid R package. 
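The model-1 fit and the spline quantities can be sketched directly. This is a minimal stand-in, not the lmms implementation: it fits the straight-line model by closed-form OLS, and evaluates a truncated-line spline and its DLMMS-style derivative under the assumption that f uses the standard (t − κ)+ basis.

```python
def ols_line(ts, ys):
    """Model 1: ordinary least squares fit of y = b0 + b1 * t."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    b1 = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
         sum((t - mt) ** 2 for t in ts)
    b0 = my - b1 * mt
    return b0, b1

def spline_value(t, b0, b1, knots, u):
    """f(t) = b0 + b1*t + sum_k u_k (t - kappa_k)_+  (truncated-line
    penalized-spline basis, assumed here)."""
    return b0 + b1 * t + sum(uk * max(t - k, 0.0) for uk, k in zip(u, knots))

def spline_derivative(t, b1, knots, u):
    """f'(t) = b1 + sum_k u_k * 1(t > kappa_k): the rate-of-change
    information DLMMS clusters on."""
    return b1 + sum(uk for uk, k in zip(u, knots) if t > k)

# Noise-free line: OLS recovers intercept 2 and slope 3 exactly.
ts = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 + 3.0 * t for t in ts]
print(ols_line(ts, ys))  # (2.0, 3.0)
```

In the full framework the spline coefficients are estimated as penalized random effects in a mixed model rather than by plain least squares; only the basis evaluation is shown here.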
We selected clustering algorithms for comparison based on representatives of different classes of standard techniques: a model-based algorithm (R package mclust), hierarchical clustering, k-means and Partitioning Around Medoids (R package cluster), and Self-Organizing Maps (R package kohonen).A size-based Gene Ontology (GO) term enrichment analysis was then performed to validate the biological relevance of each cluster, using the hypergeometric distribution based on the number of molecules in the domain of interest. We specified the gene annotations with the org.Hs.eg.db R package.While cluster analysis can provide valuable insight into behaviour patterns common to groups (clusters) of molecules, differential expression analysis in a time course experiment can highlight significant responses to perturbations of each molecule. Our LMMS framework enables assessment of the significant differences over time or between individual groups based on the whole molecular trajectory instead of analysing individual time points.LMMS for differential expression analysis (LMMSDE): We extended the LMMS modelling framework to test between groups, across time, and for interactions between groups and time as follows. Suppose we have R different groups of subjects, with hi denoting the group for each subject i. Further, we define hir to be the indicator for the rth group, that is, hir = 1 if hi = r and 0 otherwise. Starting from the spline model above, the group-specific mean curve fhi in the full LMMSDE model adds group-specific terms: for r = 1, \u2026, R, \u03b1r0 are the differences in intercept between each group and the first group; \u03b1r1 are the differences in slope between each group and the first group; and vrk are the differences in spline coefficients between each group and the first group.We can test different hypotheses depending on which parameters are equal to zero. Firstly, for a single group, for all r > 1 we have hir = 0, and time effects will be detected only if the goodness of fit of this model is better than the null model which fits only the intercept. Secondly, to detect differences between groups, we set \u03b11 = 0 and \u03b21 = 0, and test goodness of fit against the null model which also has hir = 0. 
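Since cluster quality throughout is scored by the Dunn index, a small reference implementation clarifies the metric (this mirrors what the clValid package computes; the 1-D points are toy data):

```python
def dunn_index(clusters):
    """Dunn index = (smallest between-cluster distance) /
    (largest within-cluster diameter); higher is better.
    Points are 1-D here for simplicity; any metric works the same way."""
    dist = lambda a, b: abs(a - b)
    min_between = min(
        dist(a, b)
        for i, ci in enumerate(clusters)
        for cj in clusters[i + 1:]
        for a in ci for b in cj
    )
    max_diameter = max(
        dist(a, b) for c in clusters for a in c for b in c
    )
    return min_between / max_diameter

# Two tight, well-separated clusters give a very high Dunn index:
print(dunn_index([[0.0, 0.1], [10.0, 10.1]]))
```

A value well above 1 means the clusters are compact relative to their separation, which is why the comparison in the text reads higher Dunn indices as better clustering.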
Finally, if we include all parameters we can model the group*time interactions, by allowing different slopes and intercepts in the different groups. We compare this to the null model where the effects over time do not differ between groups. For each case we compared the fit of the expanded model against the corresponding null model with the anova function from the R package nlme.Comparisons with LIMMA: We compared our approach to LIMMA, which is widely used for differential expression analysis of microarray data.We considered the performance of our filtering procedure in both proteomics and transcriptomics datasets. On the iTraq breast cancer and M. musculus data we observed a first cluster with low RT and RI ratios, and a second cluster with high values for the two ratios. We therefore removed the molecules from that second cluster. Similar types of clusters were observed for all transcriptomics datasets (similar results were obtained in the other datasets). However, contrary to our expectation, we also observed large p-values for some low RT values; these cases can be explained by inspecting the RT and RI values jointly.Among the enriched processes identified in the breast cancer data were glycolysis (GO:0006096) and gluconeogenesis (GO:0006094); these processes play an important role in cancer progression.We compared the proposed LMMSDE with LIMMA on the unfiltered simulated data with varying expression patterns and levels of noise. For each scenario, we recorded how many of the 50 differentially expressed molecules were detected as significant after correction for multiple testing and calculated average sensitivity and specificity over all 100 replicates. Overall, LMMSDE achieved high sensitivity and specificity.We performed a differential expression analysis on the iTraq kidney rejection dataset to illustrate our LMMSDE analysis on complex and real data. 
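The anova comparison of nested models described above boils down to comparing residual error between a null and an expanded model. The sketch below shows the classic F statistic for nested fixed-effects regressions (a simplification: nlme's anova compares mixed models via likelihood ratios); the data and the decision threshold are illustrative only.

```python
def rss(ys, fitted):
    """Residual sum of squares."""
    return sum((y - f) ** 2 for y, f in zip(ys, fitted))

def f_statistic(rss_null, rss_full, df_null, df_full, n):
    """F = ((RSS0 - RSS1)/(df1 - df0)) / (RSS1/(n - df1)): the statistic
    behind an anova comparison of nested regression models."""
    num = (rss_null - rss_full) / (df_full - df_null)
    den = rss_full / (n - df_full)
    return num / den

# Null model: intercept only. Expanded model: intercept + slope (time effect).
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]          # clearly increasing over time
mean_y = sum(ys) / len(ys)
rss0 = rss(ys, [mean_y] * len(ys))
# Least-squares line, computed by hand to stay self-contained:
mt = sum(ts) / len(ts)
b1 = sum((t - mt) * (y - mean_y) for t, y in zip(ts, ys)) / \
     sum((t - mt) ** 2 for t in ts)
b0 = mean_y - b1 * mt
rss1 = rss(ys, [b0 + b1 * t for t in ts])
F = f_statistic(rss0, rss1, df_null=1, df_full=2, n=len(ys))
print(F > 10.0)  # a large F favours the model with a time effect
```

A p-value would then come from the F distribution with (1, n − 2) degrees of freedom; that step is omitted here to keep the sketch dependency-free.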
In addition to applying the differential expression approaches LIMMA and LMMSDE on the full data set as in the simulated case study, we also applied our filtering approach for multiple conditions and removed profiles that were identified as non-informative in both conditions (64% of profiles were removed) before LMMSDE analysis. Filtering before differential expression analysis was only applied for LMMSDE, since removal of non-informative profiles should increase statistical power without biasing results. In contrast, filtering before LIMMA analysis affects posterior estimates and can bias p-values.We compared LMMSDE and LIMMA in terms of the number of proteins declared as differentially expressed between the two groups and investigated their biological relevance with respect to the biological questions from the study. Two analyses were performed: to identify the molecules with significant differences between groups, and to identify molecules showing significant group*time interactions leading to different trends between the two groups over time. While no differentially expressed molecules were identified by LIMMA for either group or interaction effects, LMMSDE identified 35 differentially expressed proteins with a group effect and 12 proteins with a significant interaction effect . On the filtered dataset LMMSDE identified 13 molecules with a significant group effect and nine molecules with a significant interaction effect. Note that these differentially expressed proteins were also identified in the analysis of the full dataset. 
The effect size of differential proteins identified with both group and interaction effects tended to be small, with a magnitude of average fold change of < 1.5. For the 13 molecules (three not annotated) that were declared as differentially expressed between groups, we identified the top enriched biological process.Out of the nine molecules (one not annotated) with a significant interaction between group and time, the most promising differentially expressed protein was IQ calmodulin-binding motif-containing protein 1 (IQCB1). This protein is particularly relevant to this study, as it is a nephrocystin protein localized to the primary cilia of renal epithelial cells. Mutations in this gene were shown to be strongly associated with Senior-L\u00f8ken Syndrome Type 5, a disorder causing nephronophthisis and renal failure.Thus far, very few methods have been developed to analyse high-throughput time course \u2018omics\u2019 data. Statistical analysis is challenging due to the high level of noise relative to signal in such data, and the time measurements add an extra dimension of variability both within and among subjects. Our data-driven approach focuses on magnifying the inherent signal, by removing non-informative profiles that potentially interfere in downstream analysis, and by using a linear mixed model spline framework to account for subject-specific variability. This procedure provides clearer signals in both clustering and differential expression analysis.The filtering of non-informative profiles is an important first step in analysis, as such profiles otherwise introduce noise and reduce statistical power in downstream clustering and differential expression steps. We compared the filter ratios RT and RI with the test statistics from differential expression analysis over time.For multiple treatment groups, we filtered separately for each group, removing only molecules identified as non-informative in both groups. 
An alternative option would be to calculate the ratios for each group separately, but apply the model-based clustering on all ratios from all groups. We found very little difference compared to a filtering approach applied on each treatment group. With either approach, it is possible that molecules that vary between groups but show little change over time could be removed. However, these molecules, though differentially expressed, would be detected in a cross-sectional study, and are most likely not of primary interest in time course studies where the focus is on molecules changing expression over time.In spite of the clear relationship between differential expression and filter ratios, we found the selection of thresholds to be challenging. Threshold choice can be affected by a variety of issues such as the level of missing data and the number of replicates at each time point. In our analysis, we applied 2-cluster model-based clustering on the ratios to discriminate informative from non-informative profiles. We suggest guidelines to address these issues, and our R package lmms allows the user to set their own thresholds. A drawback of our proposed filtering method is the requirement of the same sampled time points across subjects, and the need for at least three replicates per time point. If these do not hold, it may be necessary to collapse time points into bins prior to analysis to have sufficient density of data. Further investigation of filters allowing for less constrained sampling could be very useful for adaptive sampling designs.Current modelling approaches for time course data fit the same statistical model to each molecule, allowing for either subject-specific intercepts or subject-specific slopes, whereas our framework selects the best of several models per molecule.In this study we clustered time course data based on their summarized profiles to identify groups of molecules representing relevant molecular processes. We did not consider here clustering of subjects to identify groups with similar sub-phenotypes. 
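The filtering idea (flagging profiles whose within-time-point noise swamps the change over time) can be sketched with a simplified ratio. This is a hypothetical stand-in for the RT/RI ratios implemented in the lmms package, not their exact formulas, and the replicate data below are toy values.

```python
from statistics import mean, pstdev

def noise_ratio(profile):
    """Simplified filter ratio: average within-time-point spread divided
    by the spread of the time-point means. High values suggest a
    non-informative, noise-dominated profile. A stand-in for the RT/RI
    ratios of the lmms package, not their exact definition."""
    within = mean(pstdev(reps) for reps in profile)
    between = pstdev([mean(reps) for reps in profile])
    return within / between

# Three replicates at each of four time points (toy data):
trending = [[1.0, 1.1, 0.9], [3.0, 3.2, 2.8], [5.1, 4.9, 5.0], [7.0, 7.2, 6.8]]
flat     = [[1.0, 2.0, 0.0], [1.2, 0.1, 2.1], [0.9, 2.2, 0.2], [1.1, 0.0, 1.9]]
print(noise_ratio(trending) < noise_ratio(flat))
```

A 2-component model-based clustering of such ratios (as the text describes) then separates the low-ratio, informative profiles from the high-ratio ones that are removed before downstream analysis.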
However, similar approaches can be applied to this alternative, biologically interesting question . Clustering analysis relies not only on the choice of algorithm, but also on the number of clusters and the distance metric. There are a variety of options available for all of these, but we have focused on common choices in this study, and expect that other options would produce similar results. We observed that application of different modelling approaches resulted in different input data structures to the clustering algorithms. As clustering outputs are highly dependent on the input data structure , it was not surprising that the different summaries produced by lmms led to different clusters. Higher-degree polynomials may provide additional power for detection of differential expression over time when the profiles display nonlinear behaviour, as in cluster 1. S1 File: From the first to the last time points, in Figure B the fold change between the two groups is equal to log(2); in Figure C the profiles measured on individuals from group 1 (group 2) increase (decrease) over time with a fold change of log(2). The noise level is equal to that in the kidney rejection data and the groups of each individual are indicated in grey full lines (group 1) or black dashed lines (group 2). (PDF) S2 File: Filter ratios RT (x-axis) and RI (y-axis) are shown for: simulated data ; iTraq breast cancer data ; Saccharomyces paradoxus evolution data ; iTraq kidney rejection Allograft Rejection (AR) data . Molecules are coloured according to \u2212log10 p-values for the Linear Mixed Model Spline for Differential Expression analysis (LMMSDE) test for differential expression over time (first column) and the proportion of missing values (second column). (PDF) S3 File: Dunn indices are displayed for a number of clusters varying from two to nine with the five different cluster algorithms: hierarchical clustering (HC), kmeans (KM), Partitioning Around Medoids (PAM), model-based (model) and Self-Organizing Maps (SOM), using the mean (Figure A); Smoothing Splines Mixed Effects (SME); Linear Mixed Model Spline (LMMS) and Derivative LMMS (DLMMS) for summarizing the profiles across the biological replicates. Higher Dunn indices indicate better clustering performance. (PDF) S4 File: Venn diagram of significantly enriched GO terms identified by clustering of the mean, Smoothing Splines Mixed Effects (SME), Linear Mixed Model Spline (LMMS) and Derivative LMMS (DLMMS) before (Figure A) and after removing GO terms that contained only one molecule. (PDF) S1 Table: Shown are the GO terms identified concordantly by clustering of at least two of the modelling approaches (Linear Mixed Model Spline (LMMS), Derivative LMMS (DLMMS), mean or Smoothing Splines Mixed Effects (SME)). (PDF) S2 Table: Enriched GO terms uniquely identified by clustering of the profiles modelled by the different approaches considered. For each enriched term, the cluster number (Cluster), the number of molecules with GO terms in that cluster (Counts), the number of molecules in the data with that GO term (NMol), the number of molecules in the cluster (Size), the GO description, ontology (Ont), false discovery rate adjusted p-value (adj. p), and log odds ratio (OR) are given. The table is sorted by p-value within each cluster. Linear Mixed Model Spline (LMMS), Derivative LMMS (DLMMS) and Smoothing Splines Mixed Effects (SME) use hierarchical clustering while the mean uses PAM clustering. For LMMS three clusters were identified, while two clusters were identified for DLMMS, mean and SME. (PDF)"}
+{"text": "Drosophila chronotypes and their human orthologs.The circadian clock provides the temporal framework for rhythmic behavioral and metabolic functions. In the modern era of industrialization, work, and social pressures, clock function is jeopardized, and can result in adverse and chronic effects on health. Understanding circadian clock function, particularly individual variation in diurnal phase preference (chronotype), and the molecular mechanisms underlying such chronotypes may lead to interventions that could abrogate clock dysfunction and improve human health and welfare. Our preliminary studies suggested that fruit-flies, like humans, can be classified as early rising \u201clarks\u201d or late rising \u201cowls,\u201d providing a convenient model system for these types of studies. We have identified strains of flies showing increased preference for morning emergence (Early or E) from the pupal case, or more pronounced preference for evening emergence (Late or L). We have sampled pupae the day before eclosion (fourth day after pupariation) at 4\u2009h intervals in the E and L strains, and examined differences in gene expression by RNA-seq. We have identified differentially expressed transcripts between the E and L strains, which provide candidate genes for subsequent studies, using Drosophila melanogaster as a model system for gaining an initial insight into the transcriptional changes associated with different chronotypes. The emergence of adult flies from their pupal case (eclosion) is an event that is tightly gated by the circadian system: it was the original phenotype used for screening for clock genes . Our adaptor is made of Perspex, and the whole structure is placed in a horizontal position, with modified (shortened) vertical activity tubes . A single fly pupa was placed in each tube, just below the infra-red sensor of the DAM2.
This design minimizes the time the fly needs to travel until detected by the infra-red sensor, and also takes advantage of the strong tendency of the fly to climb up (negative geotaxis). The advantage of this system compared to the TriKinetics eclosion monitor is that after the first crossing event detected by the sensor (and recorded by the computer), the fly is kept in the glass tube, rather than being drowned in a water\u2013ethanol mixture. Flies can then be scored for gender or collected for further analysis or crossing. A custom-made Perl script was used to extract the eclosion times from the TriKinetics data files. For automatic monitoring of eclosion times, we developed an adaptor that fits the DAM2 system by TriKinetics . Following purification, the mRNA was fragmented using divalent cations at elevated temperature and the first-strand cDNA was synthesized using random hexamer primers and SuperScript\u2122 III . The second-strand cDNA was synthesized using buffer, dNTPs, RNaseH, and DNA polymerase I. Short fragments were purified with a QiaQuick PCR extraction kit (Qiagen) and resolved with EB buffer for end repair and poly(A) addition. The short fragments were then connected using sequencing adapters. After agarose gel electrophoresis, suitable fragments were used as templates for PCR amplification. During the QC steps, an Agilent 2100 Bioanalyzer and an ABI StepOnePlus Real-Time PCR System were used in quantification and qualification of the sample library. Finally, the library (200\u2009bp insert) was sequenced using Illumina HiSeq\u2122 2000 . The single-end library was prepared following the protocol of the Illumina TruSeq RNA Sample Preparation Kit (Illumina). The adapter sequences were trimmed using Trimmomatic (version 0.32) and the libraries were aligned using TopHat (version 2.1.0) to the D. melanogaster transcriptome (NCBI_build5.41) downloaded from the Illumina iGenomes website.
We used cufflinks and cuffmerge (version 2.2.1) , to quantify gene expression as Fragments Per Kilobase of transcript per Million fragments mapped (FPKM). To compare gene expression between E and L time-series samples, we employed the Time-series RNA-seq Analysis Package (TRAP) . Using the protein\u2013protein interaction network of D. melanogaster , sub-networks were isolated comprising the DEGs identified by the Time-series TRAP analysis and the proteins that interact with them (first neighbor nodes). To discover over-represented gene ontology (GO) categories of the genes in the sub-networks, we used the on-line tool DAVID, set at false discovery rates (FDR) <0.01, for the two estimates calculated by the program . At 18\u00b0C, the difference between the chronotypes became smaller, 2.41\u2009\u00b1\u20094.6 vs. 3.39\u2009\u00b1\u20094.13, but was still significant . Notably, there was a substantial increase in the phase distribution at the higher temperature, particularly in the L chronotype . The tailless (tll), huckebein (hkb), and anterior-open (aop) genes showed similar profiles in DD and LD, while torso-like (tsl) differed between the chronotypes only in DD. We note that tll and hkb encode transcription factors, and all four genes are important for development. Of particular interest is aop (aka yan), which is involved in the development of the eye photoreceptor cells . CCKLR-17D3, encoding a G protein-coupled receptor (rhodopsin-like), differed only in LD (elevated in E compared to L). Members of the Trypsin (Try) cluster showed opposite trends in DD and LD. We explored in further detail the expression of the DEGs in the MAPK and NLRI pathways (Figure ). Next, we used the publicly available protein\u2013protein interaction data to construct gene networks. Analysis of the networks (DEGs and their first neighbors) for enriched gene ontologies revealed a large number of significant GO terms (Figure ).
To test whether the difference between the E and L chronotypes is simply driven by a phase shift of gene expression, we tested the cross-correlation between the time series of the two chronotypes, for each transcript. In this analysis, the correlation between the two time-series is calculated repeatedly, at different time-lags. If the profiles are similar, but just phase shifted, the correlation will be at maximum at the lag that corresponds to the phase shift (Figure ). None of the Drosophila core clock genes seems to show substantial expression difference between the E and the L chronotypes. Yet, it is possible that variation in clock genes drives different chronotypes post-transcriptionally. Indeed, previous studies demonstrated that variation in phase preference is often due to genetic variation in clock genes; for example, variation in the per gene between D. melanogaster and D. pseudoobscura underlies phase differences in locomotor and sexual behavior rhythms . Another example is variation in Drosophila cry, leading to variation in eclosion phase . In general, our results suggest that chronotype diversity is largely mediated by genes which are downstream of the circadian clock. To date, very little is known about the transcriptional variation between chronotypes in other model organisms. Our study may provide candidate genes and molecular pathways that could be explored in other insects and possibly even mammals, given the high evolutionary conservation of the circuits that we identified here, such as MAPK and Hedgehog. In addition, the possible link of chronotype variation to genes associated with development that we identified here may well be relevant to mammalian systems and is worth further investigation. The general conclusion emerging from our time-series analysis is that gene expression is not merely phase shifted between the E and L chronotypes, but is more fundamentally affected (Figure ).
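The lagged cross-correlation test described above can be sketched as follows (illustrative Python on synthetic profiles, not the study's actual per-transcript pipeline):

```python
import numpy as np

def best_lag(x, y, max_lag=3):
    """Return the lag (in sampling steps) and correlation at which the
    Pearson correlation between series x and y is maximal."""
    best, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r

t = np.arange(12)
e_profile = np.sin(2 * np.pi * t / 6)        # hypothetical "E" transcript profile
l_profile = np.sin(2 * np.pi * (t - 1) / 6)  # same profile, shifted by one step
lag, r = best_lag(e_profile, l_profile)
```

For a pure phase shift the maximal correlation is (near) 1 at a nonzero lag; profiles that differ more fundamentally show no lag with high correlation.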
EP and CH carried out the phenotypic analysis and collected the samples. MP assembled the sequence data and carried out the analysis. ET, ER, and CK designed the experiments. ET and ER supervised the experiments. ET, MP, ER, and CK contributed to the preparation of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fneur.2015.00100/abstract"}
+{"text": "Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanism of diseases. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly \u201cwatch\u201d the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead us to a more complete understanding of how biological entities coordinately perform their biological functions in biological systems. We investigated 1\u03b1, 25(OH)2D3-altered mechanisms in zebrafish embryo development. We applied the proposed method to a public zebrafish time course mRNA-Seq dataset, containing two different treatments along four time points. We constructed networks between gene ontology biological process categories, which were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1\u03b1, 25-Dihydroxyvitamin D3-altered transcriptional changes started from a few genes that were altered initially at earlier stages, to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and generalized stress response. In addition, we also investigated the relationship among biological processes enriched in co-expressed genes under different conditions. The enriched biological processes include translation elongation, nucleosome assembly, and retina development.
These network dynamics provide new insights into the impact of 1\u03b1, 25-Dihydroxyvitamin D3 treatment in bone and cartilage development. In this paper, we introduced a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to investigate 1\u03b1, 25(OH)2D3-altered mechanisms. Our approach enables the monitoring of biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can be easily extended to any temporal- or condition-dependent genomic data analyses. We developed a network-based approach to analyzing the DEGs at different time points by integrating molecular interactions and gene ontology information. These results demonstrate that the proposed approach can provide insight on the molecular mechanisms taking place in vertebrate embryo development upon treatment with 1\u03b1, 25(OH)2D3. The online version of this article (doi:10.1186/s13040-015-0057-1) contains supplementary material, which is available to authorized users. The gene expression in these systems is a temporal process. Different genes are required to play different functional roles under different conditions. This is highly regulated by a complex regulatory system of diverse molecular interactions, such as protein-protein interactions (PPIs), protein-DNA interactions (PDIs), and metabolic signaling pathways . Taking S. cerevisiae , D. melanogaster , and C. elegans as examples, molecular interactions such as PPIs and PDIs are essential for a wide range of cellular processes and form a network of astonishing complexity. Until recently, our knowledge of such complex networks was rather limited. The emergence of high-throughput technologies has given us possibilities to systematically survey and study the underlying biological system.
Molecular interaction maps have been built in model organisms such as S. cerevisiae, D. melanogaster, and C. elegans, as well as in mouse and human . The gene ontology (GO) Consortium has developed structured vocabularies to annotate genes and gene products . Here we studied the effects of 1\u03b1,25(OH)2D3 treatment on gene expression patterns in zebrafish embryo development and the causal relationship between DEGs at consecutive time points. The resulting networks suggest that well-studied as well as novel molecular mechanisms are regulated by 1\u03b1,25(OH)2D3 treatment. In this paper, we developed a novel network-based computational approach to study causal relationships between DEGs at consecutive time points in a case\u2013control time series experiment. To overcome the limitation that the intervals of time series experiments usually would not fit the time scale of functional communications between most genes, and that the statistical power from only several time points would be too low for robust analysis, we constructed networks of GO biological process terms connected by significant interactions between DEGs at sequential time points. This enables us to understand the biological processes at GO scale, in which relations between nodes (representing GO terms) are more statistically stable. This is more statistically significant and biologically meaningful compared to single co-expressed links. The detail of the proposed approach is presented in Fig.\u00a0. In this section, we present: (1) a description of the generation and initial characterization of the mRNA-seq dataset obtained from zebrafish embryos altered by 1\u03b1,25(OH)2D3 treatment; (2) an overview of the interactome-based analysis that we proposed; (3) a chronologically organized analysis of the transcriptome changes and interactome dynamics altered by 1\u03b1,25(OH)2D3 treatment during early zebrafish development. RNA was obtained from 1\u03b1,25(OH)2D3- or vehicle-treated zebrafish at 48, 96, 144, and 168\u00a0hours post fertilization (hpf), as described in our previous publication .
The refFlat annotation file from the University of California Santa Cruz (UCSC) Table Browser (http://genome.ucsc.edu/) was used to count raw reads mapped to each annotated gene in the annotation file. Genome-wide transcriptional profiling was performed using the Illumina HiSeq sequencing technique for four replicate cDNA libraries per condition. The genes altered by 1\u03b1,25(OH)2D3 treatment at each time point were identified using the negative binomial model as described in . To more efficiently derive biological insights from the genome-wide transcriptomic response to the treatment, we proposed a network-based analysis in the following sections. We overlaid the DEGs onto the zebrafish functional interactome from the FunCoup database to derive a 1\u03b1,25(OH)2D3-specific interactome. Many network interactions connect the few genes altered on day 2 and the many altered on later days. We found that there was a statistically significant enrichment in links between genes that were 1\u03b1,25(OH)2D3-altered earlier and genes regulated later in the course of the experiment. This suggested that treatment-affected signals were propagated along network routes from the initially affected genes (on day 2) towards network regions that were perturbed later. In total, 3134 genes were up- or down-regulated by 1\u03b1,25(OH)2D3 on at least one of the four days in the experiment : on day 2, only 77 genes were changed, while 331 genes on day 4, 1672 genes on day 6, and 2673 genes on day 7 were differentially expressed in response to 1\u03b1,25(OH)2D3 treatment . This indicated that DEGs were more enriched in hub genes (genes with higher node degree).
This can partially explain how the initially altered genes on early days can pass the changes to more interacting genes on later days through the network links/interactions. To gain a better perspective on what this temporal pattern in enriched connections between 1\u03b1,25(OH)2D3-altered genes might mean, we analyzed the GO categories associated with the connected nodes in the context of the interactome. The FunCoup network links among these genes can indicate a general likelihood of how they are functionally related, but do not highlight the temporal directionality in these connections. Causal relations can be suggested by examining temporal changes, i.e., if information associated with gene A at time point t helps to predict the state of gene B at time point (t\u2009+\u20091), then a causal relation A-\u2009>\u2009B might be inferred , 25. Hence, using the temporal relations among 1\u03b1,25(OH)2D3-altered genes, we generalized a network of GO terms connected by the links between these DEGs on consecutive time points. At this broader scale, relations between nodes are statistically reliable: links reflect statistically enriched temporal connections between multiple genes of one node with multiple genes of another. Thus, this GO-GO network highlights flow between GO biological processes altered by 1\u03b1, 25(OH)2D3 on different days. The 1\u03b1,25(OH)2D3-altered genes in individual gene-gene interactions in the FunCoup interactome were labeled with the days when these genes were detected as differentially expressed. We were particularly interested in identifying the links in which one gene was altered earlier than the other. Thus, if there was a significant number of genes in GO category X altered on day d interacting with genes in GO category Y altered on day (d\u2009+\u20091), we hypothesized a causative relation X -\u2009>\u2009Y.
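The hub-enrichment observation (DEGs having higher node degree than expected by chance) can be checked with a simple label-permutation sketch; the toy interactome and gene names below are invented for illustration:

```python
import random
import statistics

def degree_enrichment_p(degrees, deg_genes, n_perm=5000, seed=4):
    """Permutation test: is the mean node degree of the DEGs higher
    than that of random gene sets of the same size?"""
    rng = random.Random(seed)
    genes = list(degrees)
    observed = statistics.mean(degrees[g] for g in deg_genes)
    k = len(deg_genes)
    hits = 0
    for _ in range(n_perm):
        sample = rng.sample(genes, k)
        hits += statistics.mean(degrees[g] for g in sample) >= observed
    return (hits + 1) / (n_perm + 1)

# toy interactome: 10 hub genes of degree 50, 90 peripheral genes of degree 2
degrees = {f"g{i}": (50 if i < 10 else 2) for i in range(100)}
degs = [f"g{i}" for i in range(8)]   # the DEGs happen to fall on hubs
p = degree_enrichment_p(degrees, degs)
```

A small p indicates that the DEG set sits on hubs far more than random gene sets of the same size would.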
Limiting the output to only enriched GO-GO connections allowed us to focus on the major changes of propagation of 1\u03b1, 25(OH)2D3 treatment effects and the organismal response to it. Compared to an individual category enrichment approach such as GOMiner, our approach yielded a much richer analysis for interpretation of changes unique to time series gene expression data. The approach described above enabled flexible and deep monitoring of 1\u03b1, 25(OH)2D3-altered changes in the transcriptome at GO level in the context of the functional interactome. To show time-dependent information flow in embryonic development altered by 1\u03b1,25(OH)2D3 treatment, GO networks of enriched GO-GO interactions were reconstructed. The eye development of zebrafish starts as early as 28 hpf . The network of GO terms between DEGs on days 2 and 4 suggested a cascade initiated by changes in xenobiotic metabolism genes and leading to genes involved in ion transport and transcription regulation, suggesting that 1\u03b1,25(OH)2D3 altered several pathways in developing eukaryotes. We also constructed the condition-specific meta-flow network based on the co-expressed links identified. The statistics of the three types of condition-specific links are presented in Table\u00a0. In addition, this enabled us to review the progression of 1\u03b1,25(OH)2D3-induced changes in gene expression and the network structure itself in zebrafish embryo development. The efficiency of our analysis of 1\u03b1,25(OH)2D3-altered global gene expression was enhanced by the interactome approach, as the network-based analysis approach was superior to a single-gene approach in terms of both statistical power and biological interpretability. A variety of interesting biological hypotheses were derived from our analysis. The significant biological processes include iron metabolism, neuronal and retinal development, and many organ development related pathways.
Our approach is useful for discovering candidate biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can be extended to any temporal- or condition-dependent genomic data analyses. Other types of interaction or ontology data can also be incorporated into this approach. In this work, we have developed a network-based computational approach that analyzes time series mRNA-seq gene expression profiles in the context of the molecular interactome and GO information to reveal temporal transcriptional changes altered by 1\u03b1,25(OH)2D3: transcriptional changes propagated from a few genes that were altered initially to large groups of biologically coherent genes at later times. The most notable biological processes included calcium and iron metabolism, neuronal and retinal development, and generalized stress response. Such a network-based integration approach can be extended to other condition-dependent studies. Also, graph theory can be incorporated to compare condition-specific coexpression networks, and meta-flow networks of GO terms can be inferred based on such information. We have developed a network-based analysis approach that integrates mRNA-seq gene expression profiles with molecular network and GO annotation to reveal the dynamic propagation of 1\u03b1,25(OH)2D3-altered changes. The mRNA-Seq profiling in four biological replicate samples of 1\u03b1,25(OH)2D3- or ethanol-treated zebrafish, 2, 4, 6 and 7\u00a0days post-fertilization, was obtained on the Illumina HiSeq 2000 platform. The generated 50-bp FASTQ sequence reads were aligned to both the latest zebrafish genome assembly (zv9) and our in-house exon junction database using BWA .
The zebrafish molecular interaction network was downloaded from the FunCoup database (http://FunCoup.sbc.su.se/). In total, there are 1,999,529 interactions between 13033 proteins in the zebrafish interactome downloaded on January 3rd, 2012. The gene ontology annotation was downloaded from the original website (http://www.geneontology.org/) on January 20th, 2012. In this paper, we used the biological process terms only, since our goal is to identify the 1\u03b1,25(OH)2D3-altered mechanisms. We counted a GO-GO link only if significantly more gene pairs (both 1\u03b1,25(OH)2D3-altered) were connected than expected by chance. This allows us to focus on the major tendencies of propagation of 1\u03b1,25(OH)2D3 treatment and organismal response to it. Compared to the individual category enrichment, this approach yielded a much richer analysis for interpretation. The detailed reconstruction step is as follows: For any two GO terms, a link was counted if any two DEGs in these two GO terms were connected in the original FunCoup network. The GO-GO links were classified into time-dependent patterns according to the days when the genes were differentially expressed for the first time: Day 2 -\u2009>\u2009Day 4: one gene was differentially expressed on Day 2, while the other on Day 4; Day 4 -\u2009>\u2009Day 6: similar definition as in (a); Day 6 -\u2009>\u2009Day 7: similar definition as in (a). For each candidate GO-GO network link, its statistical significance was evaluated by the permutation test, i.e. gene names were randomized in the FunCoup network 10,000 times. The links between GO terms with P value less than 0.01 were considered statistically significant. Enriched GO-GO links, i.e. ones with P value less than 0.01, were kept in the GO-GO network. The network was visualized in the Cytoscape tool . A network of GO terms was thus generalized from the network of DEGs at different developmental stages in zebrafish embryos. At GO scale, relations between nodes (representing GO terms) are more statistically stable.
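The permutation step for a single GO-GO link can be sketched as follows (illustrative Python on a toy network; the study randomized gene names on the FunCoup network itself):

```python
import random

def go_link_count(edges, go_x, go_y):
    """Count network edges with one endpoint in GO term X and the other in Y."""
    return sum((a in go_x and b in go_y) or (a in go_y and b in go_x)
               for a, b in edges)

def go_link_pvalue(edges, genes, go_x, go_y, n_perm=10_000, seed=1):
    """Permutation p-value: shuffle gene labels on the fixed network and
    ask how often the permuted X-Y link count reaches the observed one."""
    rng = random.Random(seed)
    observed = go_link_count(edges, go_x, go_y)
    hits = 0
    for _ in range(n_perm):
        relabel = dict(zip(genes, rng.sample(genes, len(genes))))
        permuted = [(relabel[a], relabel[b]) for a, b in edges]
        hits += go_link_count(permuted, go_x, go_y) >= observed
    return (hits + 1) / (n_perm + 1)

# toy network: genes g0..g9; GO term A = {g0..g4}, GO term B = {g5..g9},
# densely wired between A and B so the A-B link count is clearly enriched
genes = [f"g{i}" for i in range(10)]
go_a, go_b = set(genes[:5]), set(genes[5:])
edges = [(a, b) for a in go_a for b in go_b]
p = go_link_pvalue(edges, genes, go_a, go_b)
```

Links with p < 0.01 under this kind of randomization would be kept in the GO-GO network.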
Links reflect statistically enriched temporal connections between multiple genes in one specific GO term and multiple genes in another one. Thus, this GO-GO network highlights information flow between GO biological processes affected by 1\u03b1,25(OH)2D3. To obtain the condition-specific expression information, a network called the co-expressed interaction network (CEIN) was constructed. Correlation of gene expression profiles between each pair of interacting proteins in FunCoup was evaluated by the Pearson correlation coefficient (PCC). The PCC of paired genes X and Y, which encode one pair of interacting proteins, is defined as PCC(X, Y) = \u2211i (Xi \u2212 mean(X))(Yi \u2212 mean(Y)) / ((n \u2212 1)\u03c3(X)\u03c3(Y)), where n is the number of condition-specific samples; Xi and Yi are the expression levels of gene X (Y) in sample i under a specific condition (1\u03b1,25(OH)2D3- or ethanol-treated); mean(X) (mean(Y)) and \u03c3(X) (\u03c3(Y)) represent the mean and standard deviation of the expression level of gene X (Y). A large absolute value of PCC indicates higher correlation between the two genes evaluated. Besides the correlation relationship, when applied to a pair of gene expression profiles, the experimental design allowed measuring effects of the factors \u201c1\u03b1, 25(OH)2D3 treatment\u201d, \u201cdevelopmental stage\u201d, and \u201cgene\u201d as well as any of their combinations. The procedure was executed under the terms of the standard 3-way factorial ANOVA.
By combining PCC and ANOVA analyses, we defined three types of coexpression networks: (1) the 1\u03b1, 25(OH)2D3-related coexpression network, with strong correlation between observed gene expression profiles only after 1\u03b1, 25(OH)2D3 treatment; (2) the ethanol-related coexpression network, with strong correlation between observed gene expression profiles only in ethanol treatment; (3) the developmental-related coexpression network, with strong correlation between observed gene expression profiles under both conditions and with a significant developmental pattern that is synchronous between the two genes. The first two types of coexpression links were assigned if the corresponding conditions hold, where PCCVD3 refers to the PCC value for the 1\u03b1, 25(OH)2D3-treated samples, and PCCethanol refers to the PCC value for the ethanol-treated samples. The third type of coexpression link was assigned given that all the corresponding conditions hold, where PCCall refers to the PCC value for all samples across all conditions, and fullPCC is the minimum PCC value for a link to be considered coexpressed. In this paper, we set the cutoff values 0.9, 0.6 and 0.9 for minPCC, diffPCC and fullPCC. To generate the condition-specific GO-GO network view, a condition-specific network of GO categories was reconstructed. The reconstruction step is as follows: For any two GO \u201cbiological process\u201d categories, a link was counted if any two genes in these two GO categories were connected in the condition-specific coexpression network. For each potential GO-GO network link, its statistical significance was evaluated by the permutation test, i.e. gene names were randomized in the co-expression network 10,000 times. The links between GO biological process terms with P value less than 0.01 were considered statistically significant.
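A sketch of the link-type assignment is given below. The exact inequality forms are an assumption reconstructed from the stated cutoffs (minPCC = 0.9, diffPCC = 0.6, fullPCC = 0.9), since the original conditions did not survive extraction; the gene profiles are synthetic:

```python
import numpy as np

MIN_PCC, DIFF_PCC, FULL_PCC = 0.9, 0.6, 0.9

def pcc(x, y):
    """Pearson correlation coefficient of two expression profiles."""
    return float(np.corrcoef(x, y)[0, 1])

def link_type(x_vd3, y_vd3, x_eth, y_eth):
    """Classify a coexpression link between genes X and Y.
    Assumed conditions (not verbatim from the paper): treatment-specific
    links need a high PCC in one condition and a large PCC difference
    between conditions; developmental links need a high PCC in each
    condition and over all samples pooled."""
    p_vd3 = pcc(x_vd3, y_vd3)
    p_eth = pcc(x_eth, y_eth)
    p_all = pcc(np.concatenate([x_vd3, x_eth]), np.concatenate([y_vd3, y_eth]))
    if p_vd3 >= MIN_PCC and p_vd3 - p_eth >= DIFF_PCC:
        return "VD3"
    if p_eth >= MIN_PCC and p_eth - p_vd3 >= DIFF_PCC:
        return "ethanol"
    if min(p_vd3, p_eth, p_all) >= FULL_PCC:
        return "developmental"
    return "none"

t = np.linspace(0, 1, 8)
# X and Y rise together only under VD3 treatment; anti-correlated under ethanol
x_vd3, y_vd3 = t, 2 * t + 0.1
x_eth = np.array([0, 1, 0, 1, 0, 1, 0, 1.0])
y_eth = np.array([1, 0, 1, 0, 1, 0, 1, 0.0])
kind = link_type(x_vd3, y_vd3, x_eth, y_eth)
```

Profiles that track each other in both conditions would instead fall into the developmental class.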
It was based on the genes that were involved in a condition-specific network (e.g. the 1\u03b1,25(OH)2D3-sensitive coexpression network) and assigned to at least one GO biological process. Enriched GO-GO links were kept in the GO-GO network, i.e. ones with P value less than 0.01. The network was visualized in the Cytoscape tool. The gene-level Gene Ontology enrichment analysis was performed using GoMiner on the D
+{"text": "Groups of genes assigned to a pathway, also called a module, have similar functions. Finding such modules, and the topology of the changes of the modules over time, is a fundamental problem in understanding the mechanisms of complex diseases. Here we investigated an approach that categorized variants into rare or common and used a hierarchical model to jointly estimate the group effects of the variants in a pathway for identifying enriched pathways over time using whole genome sequencing data and blood pressure data. Our results suggest that the method can identify potentially biologically meaningful genes in modules associated with blood pressure over time. It has long been recognized that genetic analysis of longitudinal phenotypic data is important for understanding the genetic architecture and biological variations of complex diseases. The analysis can help identify the stage of disease development at which specific genetic variants play a role. However, the statistical methods to analyze longitudinal genetic data are limited. A commonly used approach is to analyze the longitudinal genetic traits by averaging multiple response measurements obtained at different time points from the same individual. This approach may miss a lot of useful information related to the variability of repeated genetic traits, although it is simple and computationally less expensive. Linear mixed models have also been used for repeated measures data .Recently, there has been a shift to testing rare variants, mostly using next-generation sequence technologies, for association with complex diseases. We explored dynamic pathway-based analysis of genes associated with blood pressure over time using whole genome sequencing data. We first performed gene-based association analysis at each of the 3 time points by stratifying the variants into rare and common. Then we performed pathway enrichment analysis separately at each time point. 
Finally, we built pathway crosstalk network maps using the enriched pathways to identify potential subnetworks associated with blood pressure over time. For genotype data, we analyzed sequencing data of the 142 unrelated individuals on chromosome 3, which includes 1,215,120 variants. For phenotype data, we analyzed the simulated phenotypes of replicate 1. We analyzed 2 quantitative traits: systolic blood pressure (SBP) and Q1. SBP was measured at 3 time points, and was close to normally distributed (data not shown) after treatment effect adjustment (see below). There are 31 functional loci (genes) on chromosome 3 that influence the simulated SBP. Q1 was simulated as a normally distributed phenotype not influenced by any of the genotyped single-nucleotide polymorphisms; it was generated primarily to facilitate assessment of type 1 error. It also has no correlation with SBP measured at T1, T2, and T3: the Pearson correlation of SBP at the 3 time points with Q1 based on the 142 unrelated individuals is \u22120.09 (p value = 0.27), \u22120.02, and \u22120.006, respectively. The method was implemented in the R package BhGLM (http://www.ssg.uab.edu/bhglm/). We assigned the genetic variants to a gene if they were in the gene or within 10 kilobases (kb) of either side of the gene. We performed 2 analyses to evaluate the association between genotype and SBP at each study exam separately. First, we divided the variants within a gene into rare (k = 1) and common (k = 2) groups. Second, we analyzed all the genetic variants in a gene together, irrespective of allele frequency. Our main objective was to estimate gene effects for k = 1 (rare variants) and k = 2 (common variants) in the first analysis, and k = 1 (rare and common variants) in the second analysis. We corrected for multiple testing using the Benjamini and Hochberg method. We mapped the approximately 1200 genes on chromosome 3 to the c2 curated pathways (version 3) from the Broad Institute (http://www.broadinstitute.org/gsea/msigdb/), which include 2934 gene sets collected from 186 Kyoto Encyclopedia of Genes and Genomes (http://www.genome.jp/kegg/) pathways, 430 Reactome pathways, 217 BioCarta pathways, 880 canonical pathways, and 825 biological process and 396 molecular function gene ontology terms. We kept only the pathways with at least 5 genes in our data set, which left 531 pathways for analysis. There are different ways to test for genes associated with an excess of SBP in the same pathway. We used the \"gene set enrichment test\" implemented in the limma R package. The test is essentially a streamlined version of the gene set enrichment analysis approach introduced by Subramanian et al. We performed dynamic pathway crosstalk analysis between each pair of time points using the enriched pathways with a nominal p value of <0.05. Two pathways were considered to crosstalk if they shared at least 1 functional locus (gene). This ensures that each pathway and its crosstalk has biological meaningfulness. We built pathway crosstalk subnetworks using Cytoscape (http://www.cytoscape.org/). We defined positive genes as those with an adjusted p value smaller than 0.05, and negative genes as those with an adjusted p value larger than or equal to 0.05. As shown in the Table, given a false discovery rate (FDR) of 0.05 at the gene-level analysis, we identified 116, 57, 2, and 0 significant genes for SBP measured at T1, T2, T3, and Q1, respectively, using rare variants. However, there were no significant genes for SBP measured at the 3 time points and Q1 using common variants. Of those significant genes from the rare variant analysis, 4, 1, 0, and 0 were true positives, one of which had 286 rare variants with 1 functional variant.
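For the multiple-testing step, the Benjamini and Hochberg procedure referenced above works as follows; this is a minimal Python sketch for illustration (the study itself used R, and all function names here are our own):

```python
def benjamini_hochberg(pvals, fdr=0.05):
    """Benjamini-Hochberg step-up procedure: mark p values significant
    while controlling the false discovery rate at `fdr`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p values
    # find the largest rank k (1-based) with p_(k) <= k/m * fdr
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * fdr:
            cutoff = rank
    significant = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff:
            significant[idx] = True
    return significant

# only the smallest p value survives correction in this toy example
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.8]))
```

Note that the step-up rule can rescue a p value whose own threshold fails, as long as a larger-ranked p value passes; this is what distinguishes it from a simple per-test cutoff.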
Two pathways crosstalk if at 2 time points they included at least 1 common true gene. To identify pathway crosstalk, we built 2 pathway subnetworks (Figure) for the p value cutoff of 0.05. Interestingly, we also found a subnetwork with 3 enriched pathways that showed crosstalk between each pair of time points, suggesting that dynamic pathway crosstalk may have a key role in the pathogenesis of SBP. It should be noted that the \"functional\" loci defined in the simulation answers provided by the Genetic Analysis Workshop 18 (GAW18) organizers were polymorphic based on all individuals, but they may not be polymorphic in the unrelated individuals analyzed in this study. In this case, some functional loci (or genes) may not have effects in the unrelated data, which may lead to bias in the calculation of false negatives. In this study, we evaluated the associations between rare and common genetic variants and the simulated quantitative trait (SBP) measured at 3 time points at the gene and pathway levels. We found that jointly modeling all the variants (rare and common) together had a high type I error, which may be a result of linkage disequilibrium between common and rare variants, or the averaged effect between rare and common variants. However, a strategy that categorized variants into rare or common and used a hierarchical model to jointly estimate the group effects showed that rare variants had higher power to detect functional loci than did common variants. Although we did not find statistically significant pathways associated with SBP (FDR at the 0.05 level), we showed some enriched pathways shared across time at a nominal p value cutoff of 0.05. In summary, we proposed a framework to identify dynamic pathways with the potential to regulate SBP by analyzing repeated traits with next-generation sequencing. This can generate insights into the progressive mechanisms of the underlying disease.
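The crosstalk definition above (two pathways are connected whenever they share at least 1 functional locus) reduces to a pairwise set intersection. A hypothetical Python sketch, not the Cytoscape workflow itself; pathway and gene names are made up:

```python
def crosstalk_edges(pathways):
    """Given {pathway_name: set_of_genes}, return one edge per pair of
    pathways sharing at least one functional locus (gene)."""
    names = sorted(pathways)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = pathways[a] & pathways[b]
            if shared:
                edges.append((a, b, sorted(shared)))
    return edges

# toy example with made-up pathway and gene names
demo = {
    "pathA": {"GENE1", "GENE2"},
    "pathB": {"GENE2", "GENE3"},
    "pathC": {"GENE4"},
}
print(crosstalk_edges(demo))  # pathA and pathB share GENE2
```

The resulting edge list (pathway pairs plus the shared loci) is exactly the kind of table that can be imported into a network viewer such as Cytoscape.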
This analysis strategy can also be applied to examine the mechanisms that drive the progression of complex diseases. The authors declare that they have no competing interests. PH designed the study, performed the data analysis, and drafted the manuscript. ADP participated in designing the study. ADP supervised the study. All authors helped revise the manuscript. All authors read and approved the final manuscript."}
+{"text": "Homeobox genes play crucial roles in the development of multicellular eukaryotes. We have generated a revised list of all homeobox genes for Caenorhabditis elegans and provide a nomenclature for the previously unnamed ones. We show that, out of 103 homeobox genes, 70 are co-orthologous to human homeobox genes. 14 are highly divergent, lacking an obvious ortholog even in other Caenorhabditis species. One of these homeobox genes encodes 12 homeodomains, while three other highly divergent homeobox genes encode a novel type of double homeodomain, termed HOCHOB. To understand how transcription factors regulate cell fate during development, precise spatio-temporal expression data need to be obtained. Using a new imaging framework that we developed, Endrov, we have generated spatio-temporal expression profiles during embryogenesis of over 60 homeobox genes, as well as a number of other developmental control genes, using GFP reporters. We used dynamic feedback during recording to automatically adjust the camera exposure time in order to increase the dynamic range beyond the limitations of the camera. We have applied the new framework to examine homeobox gene expression patterns and provide an analysis of these patterns. The methods we developed to analyze and quantify expression data are not only suitable for C. elegans, but can be applied to other model systems or even to tissue culture systems. Compilations of the complement of homeobox genes in C. elegans have become available previously; here we present a revised compilation. C. elegans is a widely used model system for understanding metazoan biology, and microscopy has been successfully applied to gain many insights into the biology of C. elegans and other species. Worms were grown in bacterial broth; the cultures were semi-liquid and allowed for fast and efficient visual screening of the Dpy phenotype.
Between 500 and 1000 animals were selected from the progeny of gamma-irradiated animals as a start; then approximately 20 progeny of a potentially heterozygous animal were singled onto new plates in search of homozygotes. Homozygosity was confirmed by putting single progeny of a highly transmitting animal onto 5 cm NGM plates seeded with OP50. If the non-transgenic phenotype re-occurred even in a minority of animals, the line was not considered integrated. Most transgenic lines were generated as previously described. Reporter strains used include ceh-1::GFP, ceh-2::GFP, ceh-10::GFP, ceh-13::GFP, ceh-14::GFP, ceh-22::GFP, ceh-23::GFP, ceh-26::GFP, ceh-30::GFP, ceh-32::GFP, ceh-34::GFP, ceh-43::GFP, lim-4::GFP, mls-2::GFP, mec-3::GFP, and unc-4::GFP; sources of these strains are given in S1 Table. Only well growing, wild-type behaving lines were isolated and considered. A minimum of two independent lines from different irradiated P0s were isolated for each construct. Differences in the absolute expression level were expected and regularly occurred among unrelated lines that originated from the same extrachromosomal array. Gamma irradiation causes double strand breaks and chromosomal rearrangements\u2014an effect that is used for integration of extrachromosomal transgenes. The microscope used is a Zeiss Axioplan 2, equipped with an Applied Scientific Instrumentation (ASI) ASI-S1630 piezo Z-stage, controlled by an ASI PZM-2000 controller; additional hardware was controlled by a microcontroller (http://www.c-control.de) programmed to accept serial commands sent from the computer. Whenever we recorded RFP, we used the Zeiss halogen light source both for GFP and RFP. The acquisition software was OpenLab. Images are acquired by a Hamamatsu ORCA ER (C4742-95-12ER) through an Active Silicon Snapper-DIG16 frame grabber installed in a PowerPC Macintosh computer running Mac OS X 10.4. Most images were acquired at 63x using a Zeiss 440762 oil-immersion objective and an Optivar attachment, usually set at 1.6x.
For GFP a Zeiss filter set 38 HE or 09 was used. To reduce phototoxicity, particularly with mercury light bulbs, halogen or LED light sources were used. Much effort was spent on reducing light exposure for viability, while capturing as much information as possible. This was achieved by increasing camera binning and reducing the number of Z slices and the stack sampling rate in the fluorescent channel. Routinely, we acquired 70 DIC slices and 35 fluorescent slices. For the initial recordings, an OpenLab Automator script was created. However, with long overnight recordings, we found that every so often an error would cause the software to stall. Further, on-the-fly image analysis is not possible with Openlab. Subsequently, we used Openlab only to record a single stack at a time. The main control loop was implemented as an AppleScript that simulated user input, which passed on all the relevant parameters, such as binning, slice number, slice spacing, exposure time, and light and filter configuration, to Openlab. For automatic exposure control, the algorithm regulates exposure time by examining the signal intensity of the last acquired frame. The maximum intensity is a usable solution, but taking, e.g., the 10th largest intensity instead protects against shot noise. Exposure should NOT be adjusted every frame, as intensity is not entirely linear against exposure time; instead it should be changed when the light goes above or below certain thresholds. When this happens, the new exposure time is the last exposure time multiplied or divided by a correcting factor. The thresholds and the correcting factor are provided by the user and can be adjusted for every recording. Typically, we allow the exposure time in the fluorescent channel to fluctuate between 200 ms and 15 ms. Time resolution is an important parameter for lineaging, similar to Schnabel et al.
(1997). The flexibility of the recording parameters is a key feature of our imaging platform Endrov, allowing optimal sample acquisition. In addition, the on-the-fly adjustable exposure times allow a vastly increased dynamic range for capturing fluorescent signals, no longer limited by the camera hardware. Endrov is open source software in Java, available at www.endrov.net. Sensors in a digital camera count incident light on a quantized integer scale, e.g., 0\u2013255 for an 8-bit camera. If a long exposure time is used to acquire a weak signal, overexposure often results later in development when the signal becomes strong. We have developed an algorithm that expands the effective sensitive range by dynamically adjusting the exposure time during the recording. Each new stack is analyzed during recording, and when the signal is becoming too bright or too weak the exposure time is decreased or increased, respectively. The exposure time and other settings are stored in the metadata of the recording so that the overall intensity of the expression can be reconstructed later. In this fashion, we obtained about a 10-fold increase in dynamic range. Dynamic range expansion method: each recording has been annotated with the embryo outline. The background signal is first subtracted for each frame. The background signal has to be estimated very conservatively to avoid artifacts, e.g., hatched worms that crawl by the embryo. The total average of the background is rather sensitive to such perturbations, unlike the median. However, the median does not change continuously over time. Instead, we take the average of the 40\u201360 percentile, since it changes more continuously with the background signal distribution over time and is insensitive to extreme outliers.
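The exposure-control rule described above can be summarized in code. This is an illustrative Python sketch, not the AppleScript/Openlab implementation; the band thresholds, the correction factor, and the choice k = 10 are user-tunable assumptions, with the 15-200 ms bounds taken from the text:

```python
def next_exposure(exposure_ms, frame_pixels, low=500, high=3500,
                  factor=1.5, k=10, min_ms=15, max_ms=200):
    """Threshold-based exposure control. The signal is the k-th largest
    pixel intensity of the last frame (robust against shot noise); the
    exposure changes only when the signal leaves the [low, high] band,
    by multiplying or dividing by a user-supplied correcting factor."""
    signal = sorted(frame_pixels, reverse=True)[k - 1]
    if signal > high:        # too bright: shorten the exposure
        exposure_ms /= factor
    elif signal < low:       # too dim: lengthen the exposure
        exposure_ms *= factor
    return min(max_ms, max(min_ms, exposure_ms))
```

Between the thresholds the exposure is deliberately left untouched, since intensity is not entirely linear in exposure time and constant re-scaling would add noise.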
We use the minimum value of the filtered average inside and outside the egg as the background signal; while normally the region outside the embryo represents the background sufficiently well, checking the embryo area also avoids some rare cases with negative values. The signal is almost linear to the exposure time, but occasional discontinuities can be avoided by demanding that the average signal is the same between two frames at those time points when the exposure changes. It is important to note that the exposure time is not changed every frame during acquisition, but only when the signal is moving out of the sensitive range. We have also tried to fit the signal over the entire embryo from the last frame to the next frame by means of a linear model. This produces very smooth expression patterns but it has a severe problem: the signal intensity of the expression pattern converges to 0 over time. The reason is that linear least squares has a systematic bias towards zero, of a proportion that is related to the level of noise. The first four cells were manually annotated. Further, the location and time of gastrulation, ventral enclosure, and the 2-fold stage were marked. To make annotation more convenient in 3D space, we have expanded the manual annotation with a novel feature that allows annotation in 3D rendered volumes. In addition to the T, APT, XYZ, and SC summary methods, we also explored Dorsal-Ventral-Time (DVT) and Left-Right-Time (LRT) profiles. The data for the latter two are presented in the Supplementary Material website, but were not further analyzed. Based on the normalized data, we evaluated both how to best summarize (reduce) the data, and how to compare the recordings based on the reductions. The raw comparison data are available online (see online data). Based on the pair-wise similarity, we performed clustering to visualize the results.
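The 40-60-percentile background estimator described above can be written compactly. A minimal Python sketch (the actual implementation is Java code inside Endrov; all names here are ours):

```python
def robust_background(pixels, lo=0.40, hi=0.60):
    """Mean of the 40-60-percentile band of the pixel intensities:
    changes more continuously over time than the median and is
    insensitive to extreme outliers, e.g., a hatched worm crawling
    past the embryo."""
    vals = sorted(pixels)
    n = len(vals)
    start = int(n * lo)
    stop = max(int(n * hi), start + 1)  # keep at least one value
    band = vals[start:stop]
    return sum(band) / len(band)
```

A single huge outlier shifts the plain mean substantially but leaves the 40-60-percentile band, and hence this estimate, essentially unchanged.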
To compute pair-wise similarity, we attempted traditional methods, for example, Pearson's colocalization coefficient, Manders' coefficient, or k-means clustering. We have qualitatively found that none of the algorithms we tried are strongly discriminatory. Neighbor-joining gave trees with long, unlikely branches (data not shown). We also implemented our own algorithm of weighted spring-clustering. The microarray dataset GSE15234 for staged C. elegans embryos was downloaded. The data were compared to the total (T) expression summary of each gene, with time points taken from the mapped SC model. The significance was assessed by bootstrapping against random pairing of genes from our model versus the microarray. The code for loading the SOFT microarray file, comparing, and bootstrapping was written in Java. Gnuplot (version 4.2; http://www.gnuplot.info/) was used for plotting expression patterns, except for XYZ summaries that were generated directly in Java. Expressions on the lineage and on the 3D model are shown with Endrov. Calculations and scripts were prototyped with Matlab and Octave. Final implementation is in Java 1.5 using Endrov as a library and host. Within one class of homeobox genes, one member with five HDs (zag-1) and one with three HDs is present. A number of homeobox genes encode multiple HDs that tend to be also rather divergent, i.e. ceh-79 (2), duxl-1, ceh-82 (2), ceh-83 (5), ceh-84 (2), ceh-85 (2), ceh-88 (2). ceh-99 has four HDs, while the related gene ceh-100 has a record-setting 12 HDs that are tightly packed. None of these genes apart from ceh-79 have obvious orthologs in other Caenorhabditis species. This lack of conservation suggests that these homeobox genes have mostly arisen de novo in the C. elegans lineage, and several of them are located on a duplication-rich chromosome arm (see below). Double homeobox genes have also been identified in mammals (DUX), but these seem to have originated in early mammalian evolution, independently of the C. elegans genes.
We also identified a special subgroup of homeobox genes that are so far specific to Caenorhabditis species and encode a novel double HD motif, which we term HOCHOB (see below). While most homeobox genes encode only a single HD, a number, such as zfh-2, encode several HDs. Furthermore, there are nine HD-related proteins in C. elegans. Seven of them belong to the PRD domain group of proteins. Four of these have been named NPAX, because they only have the N-terminal PAI subdomain of the PRD domain. psa-3 is a Prep (TALE\u2014MEIS class) family protein whose orthologs in other phyla have a highly conserved HD, and ocam-1 has an OCAM motif otherwise found only in ceh-21 and ceh-41. We find that 70 (68%) of the homeobox genes are co-orthologous to human homeobox genes, while 15 do not have an obvious ortholog in other Caenorhabditis species; some of these have also been classified as C. elegans orphans by the C. briggsae genome project. In the overview, the left column shows the classes or superclasses, and the bottom row lists genes with only a PRD domain. The HOCHOB motif is also found in C. briggsae, C. remanei, and C. brenneri. The new motif consists of two divergent HDs that are separated by a linker of about 17 residues (Homeobox\u2014cysteine loop\u2014homeobox). The second HOCHOB HD has extra residues inserted in loop 1 and loop 2 of the HD. The HD similarity of HOCHOB was initially detected by PSI-blast searches that detected the second HOCHOB HD. When the first HOCHOB HD of C. brenneri CAEBREN_14312 is used as query in a PSI-blast search, fungal HDs can be detected in the second iteration with P-values of < 0.001, supporting the notion that the first motif is also a divergent HD. During the analysis of the divergent HD proteins, we identified two proteins, CEH-91 and CEH-93, that share extended sequence similarity with each other upstream of their typical HDs (CEH-91_HD3 and CEH-93_HD3). Just upstream of these HDs, each has a divergent HD, which has an insertion in loop 1 of the HD.
Such insertions have also been observed in other HDs. The linker region is conserved within the Caenorhabditis genus. The two absolutely conserved cysteine residues in the linker region between the two HDs suggest they could be involved in metal binding. However, additional residues would be required to form, for example, a zinc finger. There are two conserved histidine residues, one in each HD (in CEH-91 displaced by two positions), and there is also a conserved aspartic acid. We mapped the chromosomal location of the homeobox genes. No large clusters were observed. Two Abd-B-like genes have also separated far from the HOX cluster, while the main HOX cluster is split into two parts. Several homeobox genes, i.e. duxl-1 and ceh-81 to ceh-86, are located on the left arm of chromosome II (S3 Fig). Thus, this region of chromosome II has been subject to rapid evolution with many duplication events, which probably also gave rise to these divergent homeobox genes. While CEH-86 does not have a direct ortholog with a HD protein in other Caenorhabditis species, it does share sequence similarity upstream of the HD with several uncharacterized ORFs that are clustered on cosmid C35E7. This conserved region is not obviously related to known cysteine motifs. ceh-86 might have arisen by a duplication event, where a homeobox translocated into a UCM family gene, or vice versa. A major problem when recording C. elegans embryogenesis with a conventional fluorescent microscope is sample viability. Using strains such as pie-1::Histone::GFP or nmy-2::NMY-2::GFP, we found that our system can detect early 1- to 4-cell expression as a starting point. While the gold standard in C. elegans is analysis at the lineage level, many biological systems are not amenable to single-cell lineaging. Further, often one would like to perform global gene expression analysis and comparison of large datasets, e.g., clustering, which requires extraction of a suitable set of parameters from the images.
As previously described, we have developed plug-ins for manual lineaging. Here, we explored four ways of summarizing expression: total signal intensity over time (T); signal intensity in slices along the anterior-posterior (AP) body axis over time (APT); signal intensity of cubes that are aligned with the AP, left-right (LR), and dorsal-ventral (DV) axes (XYZ); and, finally, superimposing the Ce2008 4D model onto the recording to approximate single cells (SC). To apply these methods the recordings were normalized with respect to time. When mapping time from an annotated lineage, the life span of individual cells was used. For the other methods, several annotated time points based on the morphology of the embryo were used. The total number of cubes arises from the distance between EMS-ABp and ABa, P2, enlarged by 35% to cover the embryo. For the SC method, in the absence of an annotated lineage, we superimposed the 4D model Ce2008 using the first four cells. In the few cases where the recording started later (up to eight cells), the coordinates of these cells were found by averaging the daughter cell coordinates. The cell geometry was approximated by Voronoi polyhedrons, as previously described. The global expression pattern extraction methods were assessed for their ability to discern different types of expression pattern in a reliable way. APT, i.e. slicing along the AP axis over time, is the best method, followed by the single-cell (SC) approximation. Adding more parameters (subdividing more), as in XYZ, enables better discrimination of recordings of different genes, however at the cost of lower reproducibility. One way of representing cluster data is with a dendrogram. Using the APT data, a tree was generated from 122 selected recordings. It is easy to rapidly scan T and APT profiles; for example, it is easy to see how the expression of eyg-1 turns on before the comma stage and later fades in late larval stages.
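As an illustration of the APT reduction, signal binned into slices along the AP axis at one time point, here is a hypothetical Python sketch; the real pipeline works on normalized image stacks inside Endrov, and the slice count here is an assumption:

```python
def apt_profile(points, n_slices=20):
    """One APT column: sum fluorescence into slices along the AP axis.
    `points` is a list of (ap_position, intensity) pairs with positions
    normalized to [0, 1]; repeating this per time point yields the 2D
    APT heat map (AP axis vs. time)."""
    totals = [0.0] * n_slices
    for pos, intensity in points:
        idx = min(int(pos * n_slices), n_slices - 1)  # clamp pos == 1.0
        totals[idx] += intensity
    return totals
```

Stacking one such vector per (time-normalized) frame gives a matrix that can be viewed directly as a 2D heat map, which is what makes APT profiles so quick to scan.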
Several clustering methods were examined as described in Materials and Methods. A further important, but subjective, aspect is which of the four methods produces the best visual summary. T data is easy to view, but the information content is low and does not distinguish well between genes. APT is easy to view as a 2D heat map or a 3D graph. It is hard to visualize XYZ data in a way that captures both time and spatial information (see online data). XYZ does, however, give additional information about expression localization lacking in APT. However, the resolution limits of the microscope in the Z-direction introduce errors in DV and LR subdivisions. Therefore XYZ is also subject to large variation and is less reproducible than APT. The SC method can be rendered on the 4D model of the embryo, giving good spatial information (see ceh-37::GFP below), and on the lineage, giving time information. If the lineage has not been determined, then the SC method is powerful and can yield tentative cell identifications. However, like XYZ, it is critically dependent on the precise annotation of the initial coordinate system and on the embryo not deviating from the Ce2008 model. If rotation around the AP axis is observed, a rotation of the model could realign the cells, although we have not explored this. The T profile is useful for comparing data to other global data derived from sources such as microarrays, SAGE, or deep sequencing. Staged C. elegans embryonic gene expression levels have previously been analyzed using microarrays. We therefore compared our T data with the microarray embryo data at the gene level. Even though a delay between mRNA levels (microarray) and GFP production is expected, we find, with 94% statistical significance, a low correlation of 0.14.
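The significance of the 0.14 correlation was assessed by bootstrapping against random pairings of genes. A simplified permutation sketch in Python (the original comparison code was written in Java; the function names and permutation count here are ours):

```python
import random

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def permutation_significance(gfp, microarray, n_perm=1000, seed=0):
    """Compare the observed correlation against random re-pairings of
    genes; returns (observed r, fraction of permutations whose r is
    at least as large as the observed one)."""
    rng = random.Random(seed)
    observed = pearson(gfp, microarray)
    shuffled = list(microarray)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if pearson(gfp, shuffled) >= observed:
            hits += 1
    return observed, hits / n_perm
```

A small returned fraction means that a correlation as large as the observed one rarely arises from random gene pairings, which is the sense in which a weak correlation like 0.14 can still be statistically significant.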
Qualitatively, when examining individual genes, we often find good correspondence. We find that most homeobox genes are expressed later than the 100-cell stage. Examples of early expression are paralogous homeobox genes, and RNAi experiments have revealed that they have a redundant function during embryogenesis. In C. elegans, the Hox gene ceh-13, the labial/Hox1 ortholog, is expressed in early embryogenesis. ceh-37, ttx-1, and ceh-36 are Otx/Otd family homeobox genes of the PRD-LIKE class and have been shown to be involved in neurogenesis. ceh-36::GFP is expressed during gastrulation, most notably in a region surrounding the ventral cleft, suggesting a role beyond neurogenesis. Recently, ceh-36 has been shown to be expressed in the MI progenitor cell AB.araap, and our ceh-36::GFP expression data support this finding. ceh-26 is an ortholog of the Drosophila gene prospero, which is also involved in nervous system specification; ceh-26 also functions in the excretory cell. ceh-26 clusters tightly together with the divergent homeobox gene ceh-74, which indeed has a broad expression pattern like ceh-26. We obtained GFP expression data for some of the divergent homeobox genes, indicating that at least some are probably functional. Several of these genes are clustered on the left arm of chromosome II, which has been subject to substantial gene duplication, demonstrating ongoing evolution. We find that most of the homeobox genes are expressed later in embryogenesis, most likely reflecting the fact that they are involved in final cell fate specification events. However, some genes are expressed during gastrulation or even earlier, such as the TALE and HOX homeobox genes. Most biological model systems do not have a precise lineage like C. elegans.
Our method of summarizing and comparing expression patterns is not restricted to C. elegans. The gene expression extracting algorithms we have developed as part of Endrov can certainly be applied to other systems, i.e. embryogenesis in other species, or in vitro organ development, where a precise cell lineage is not available but spatial patterns of gene expression can be observed. Our expression survey of homeobox genes contributes to the ongoing efforts to determine gene expression patterns and functions of developmental control genes. The expression patterns of ceh-16 and ceh-14 look different, with ceh-14 almost completely lacking expression in EPIC embryos. Also, pal-1 expression in EPIC starts later than the early expression seen with antibodies. S1 Fig: Sequences from the figures. (PDF) S2 Fig: The three HDs are marked in red, the 15 zinc fingers are marked in green, and cysteine and histidine residues are yellow. Two partial fingers are underlined; the second may form a finger using an Asp (D) residue. (PDF) S3 Fig: Expanded view of chromosome II, showing math, btb, fbxa, fbxb, and fbxc genes. Homeobox genes are marked in red. (PDF) S4 Fig: Caenorhabditis ORFs that share sequence similarity with the upstream region of CEH-86. A blastp search with CEH-86 retrieved three C. elegans ORFs, all located on cosmid C35E7, as well as related genes from other Caenorhabditis species. No similarity was found beyond Caenorhabditis. The sequence similarity starts at the N-terminus and extends to the HD of CEH-86. Furthermore, the newly identified ORFs extend their sequence similarity into the region that corresponds to the HD of CEH-86 and beyond. The sequence conservation is characterized by conserved cysteine residues, suggesting that multiple metal binding fingers may be present.
However, further analysis will be necessary to define the motif in depth; at present we refer to it as UCM (uncharacterized cysteine motif). The location of a HD attached to another protein coding region suggests that ceh-86 may have arisen by a duplication event (maybe from the ceh-84 homeobox), where a homeobox translocated into a UCM gene, or an N-terminal section of a UCM gene translocated upstream of a homeobox. (A) Multiple sequence alignment. (B) Neighbor-joining tree of the conserved region of the sequences in (A). Numbers show bootstrap values for 1000 trial runs. The tree shows that three clades exist that share orthologous genes in different Caenorhabditis species. A chromosomal cluster with multiple genes must have already existed before the divergence of C. elegans, C. briggsae, and C. remanei. It appears that CEH-86 does not have a direct ortholog, supporting the notion of a recent duplication event. (PDF) S1 Table: The third column gives TB strain designations; the fourth column lists strains from other sources. TB strains were often derived from BC strains by integration. Some strains were obtained from CGC. Sources of additional strains: tbx-2::GFP, xbx-1::GFP, F55A4.3::GFP (+ elt-2::mCherry), efn-4::GFP, pie-1::GFP::HIS-11, mec-18::GFP. (DOC) S1 Text: Extracted from WormBase release WS220 with BioMart. Microsoft Word document, zipped. (ZIP)"}
+{"text": "Solanum lycopersicum) leaves through molecular timetable method in a sunlight-type plant factory. Molecular timetable methods have been developed to detect periodic genes and estimate individual internal body time from these expression profiles in mammals. We sampled tomato leaves every 2 h for 2 days and acquired time-course transcriptome data by RNA-Seq. Many genes were expressed periodically and these expressions were stable across the 1st and 2nd days of measurement. We selected 143 time-indicating genes whose expression indicated periodically, and estimated internal time in the plant from these expression profiles. The estimated internal time was generally the same as the external environment time; however, there was a difference of more than 1 h between the two for some sampling points. Furthermore, the stress-responsive genes also showed weakly periodic expression, implying that they were usually expressed periodically, regulated by light\u2013dark cycles as an external factor or the circadian clock as the internal factor, and could be particularly expressed when the plant experiences some specific stress under agricultural situations. This study suggests that circadian clock mediate the optimization for fluctuating environments in the field and it has possibilities to enhance resistibility to stress and floral induction by controlling circadian clock through light supplement and temperature control.The timing of measurement during plant growth is important because many genes are expressed periodically and orchestrate physiological events. Their periodicity is generated by environmental fluctuations as external factors and the circadian clock as the internal factor. The circadian clock orchestrates physiological events such as photosynthesis or flowering and it enables enhanced growth and herbivory resistance. These characteristics have possible applications for agriculture. 
Recently, agricultural technologies have been developing rapidly, for example, the application of information and communication technology, automation, and cultivation in closed systems with controlled temperature, humidity, and light conditions. In particular, plant factories are more recent and highly regarded. These are of two types: closed-type and sunlight-type. Both types offer mass production, stable supply, and safe products. Furthermore, both types can control the cultivation environment artificially compared with open culture. Accordingly, there has been increased expectation of the ability to create cultivation control technologies for several cultivars according to various internal factors, which could be a key to high productivity or high quality. There is a real need for an exhaustive understanding of gene expression and metabolism to determine what these internal factors are, for example stress responses. The circadian clock involves clock genes such as TOC1 (TIMING OF CAB EXPRESSION 1), LHY (LATE ELONGATED HYPOCOTYL), and PRRs (PSEUDO-RESPONSE REGULATORs), which create feedback loops in each cell. Photosynthesis and phenylpropanoid biosynthesis genes are expressed periodically, with these genes peaking at different times, such as subjective day and dawn, under constant light and under field conditions. Tomato plants were cultivated in a sunlight-type plant factory (4480 cm [W] \u00d7 2300 cm [D] \u00d7 500 cm [H]) in the Faculty of Agriculture, Ehime University, Japan. Individual plants are usually cultivated for a year; in this experiment, tomato seedlings were grown by Berg Earth Co. Ltd. and transplanted into rockwool cubes in August 2013. Rockwool cubes were placed on rockwool slabs at four per slab. 
The four rockwool cubes were placed at 25 cm intervals and watered using nutrient solution. Sampled leaves were stored in an aqueous, nontoxic tissue storage reagent that rapidly permeates tissue to stabilize and protect the integrity of RNA. Sequence data were deposited in the DDBJ Sequence Read Archive (http://trace.ddbj.nig.ac.jp/DRASearch) under the accession numbers DRA003529 and DRA003530. We isolated total RNA using an RNeasy Plant Mini Kit (Qiagen). RNA quality was checked using an Agilent 2100 Bioanalyzer and RNA quantity control was performed using a Qubit\u00ae 2.0 Fluorometer. We then prepared an RNA-Seq library. First, we selected genes whose expression showed periodicity and high amplitude from the time-course transcriptome data\u2014these were time-indicating genes. To analyze periodicity, we prepared 1440 test cosine curves. These curves had different peaks (0\u201324 h) at increments of 1 min. We fitted the test cosine curves to each time-course gene expression profile generated via RNA-Seq and calculated the correlation value (r) to identify the best-fitting cosine curve. The peak time of the best-fitting curve was taken as the peak time for each gene; this estimated peak time was defined as the molecular peak time. Thus, the molecular peak time was estimated from a single gene, and it was estimated for all genes individually. Then, to analyze amplitude, we calculated the average and standard deviation of every gene's expression level. The amplitude value (a) was calculated as the standard deviation divided by the average of the gene expression level. Gene annotation was based on TAIR10 in Arabidopsis, and we used MapMan, which can be downloaded from http://mapman.gabipd.org/web/guest/mapman, to categorize stress-responsive genes in tomato. We calculated r, which indicates periodicity, and the amplitude value (a) for 18,332 genes. 
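The periodicity scoring and internal-time estimation described above (1440 test cosine curves at 1-min peak increments, r for periodicity, a = SD/mean for amplitude) can be sketched as follows. This is a minimal illustration of the molecular timetable idea, assuming a 24-h period and z-scored expression; it is not the authors' actual code.

```python
import numpy as np

PERIOD = 24.0  # assumed period in hours

def periodicity_and_amplitude(expr, times, step_min=1):
    """Fit 1440 cosine templates (peaks 0-24 h at 1-min steps) to one gene's
    time course. Returns the best correlation r, the peak time of the best
    template (the "molecular peak time"), and amplitude a = SD / mean."""
    times = np.asarray(times, dtype=float)
    expr = np.asarray(expr, dtype=float)
    best_r, best_peak = -2.0, 0.0
    for peak in np.arange(0.0, PERIOD, step_min / 60.0):  # 1440 candidates
        template = np.cos(2 * np.pi * (times - peak) / PERIOD)
        r = np.corrcoef(expr, template)[0, 1]
        if r > best_r:
            best_r, best_peak = r, peak
    a = expr.std() / expr.mean()  # amplitude = SD divided by mean level
    return best_r, best_peak, a

def estimate_internal_time(peak_times, expr_now, step_min=1):
    """Estimate internal time from one sample: z-score the time-indicating
    genes, then pick the clock time whose cosine profile (each gene peaking
    at its molecular peak time) best matches the observed expression."""
    expr_now = np.asarray(expr_now, dtype=float)
    peak_times = np.asarray(peak_times, dtype=float)
    z = (expr_now - expr_now.mean()) / expr_now.std()
    candidates = np.arange(0.0, PERIOD, step_min / 60.0)
    scores = [float(z @ np.cos(2 * np.pi * (peak_times - t) / PERIOD))
              for t in candidates]
    return candidates[int(np.argmax(scores))]

# Toy gene sampled every 2 h for 2 days, peaking near 6 h
t = np.arange(0, 48, 2)
y = 10 + 3 * np.cos(2 * np.pi * (t - 6) / 24)
r, peak, a = periodicity_and_amplitude(y, t)
```

In this sketch, a gene with a clean 24-h rhythm scores r near 1 and recovers its peak hour, while a flat profile scores near 0 and a small amplitude.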
The histograms showed that neither the r nor the a values were normally distributed. Gene expression was stable across the two days of measurement, and the correlation between them was high at all times. We estimated the internal time from the expression profiles of the 143 time-indicating genes at each time point and calculated the difference between external time and estimated internal time. The estimated internal time was generally the same as the external environment time; however, there was a difference of more than 1 h at some sampling points. We focused on the stress-responsive genes, which are important for increasing sweetness in tomato fruit and for defense against disease. Stress-responsive genes are possible markers of internal factors because these genes are sensitive to external stimuli and affect several physiological events (Wang et al.); r = 0.635 and a = 0.15 in the stress-responsive genes. Periodic gene expression in Lactuca sativa has been reported to differ from that in Arabidopsis (Higashi et al.). At least 1000 genes, including some clock-related genes, were expressed periodically. Stress-responsive genes also showed periodic expression; thus, the stress-responsive genes were normally expressed periodically, regulated by the light\u2013dark cycle as the external factor or the circadian clock as the internal factor, and could be specifically expressed when the plant experiences stress. Similar events were also reported for soybean, barley, and Arabidopsis (Covington et al.). In conclusion, we demonstrated that many genes were expressed periodically and that gene expression was stable in a sunlight-type plant factory. Furthermore, internal time could be estimated from time-course gene expression data in tomato leaves through the molecular timetable method. The results also showed that stress-responsive genes were expressed periodically under non-stressed conditions. This study suggests that the circadian clock mediates optimization for fluctuating environments, and environmental control tailored to internal time may enhance stress resistance and floral induction, and eventually yield. HF and TH designed the experiments. 
YT did the MapMan analysis and constructed the heat map. KT supported the sampling in the sunlight-type plant factory. AN and MH prepared the RNA-Seq library. TH performed the RNA-Seq data analysis and the molecular timetable method. TH and HF wrote the manuscript. All authors discussed the results and implications and commented on the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Trypanosoma vivax is a cause of animal trypanosomiasis across Africa and South America. The parasite has a digenetic life cycle, passing between mammalian hosts and insect vectors, and a series of developmental forms adapted to each life cycle stage. Each point in the life cycle presents radically different challenges to parasite metabolism and physiology, and distinct host interactions requiring remodeling of the parasite cell surface. Transcriptomic and proteomic studies of the related parasites T. brucei and T. congolense have shown how gene expression is regulated during their development. New methods for in vitro culture of the T. vivax insect stages have allowed us to describe global gene expression throughout the complete T. vivax life cycle for the first time. We combined transcriptomic and proteomic analysis of each life stage, using RNA-seq and mass spectrometry respectively, to identify genes with patterns of preferential transcription or expression. While T. vivax conforms to a pattern of highly conserved gene expression found in other African trypanosomes, we identified significant differences in gene expression affecting metabolism in the fly and a suite of T. vivax-specific genes with predicted cell-surface expression that are preferentially expressed in the mammal or the vector. T. vivax differs significantly from other African trypanosomes in the developmentally-regulated proteins likely to be expressed on its cell surface and thus in the structure of the host-parasite interface. These unique features may yet explain the species differences in life cycle and could, in the form of bloodstream-stage proteins that do not undergo antigenic variation, provide targets for therapy. The parasitic flagellate Trypanosoma vivax is a single-celled parasite that infects cattle and non-domesticated animals through the bite of the tsetse fly. 
The parasite causes animal trypanosomiasis, a chronic condition resulting in severe anemia, muscle wastage and ultimately death if untreated. This disease is endemic across sub-Saharan Africa but has also spread to South America and causes considerable losses in animal productivity, impeding economic development in the world\u2019s poorest nations. To develop new ways of preventing and treating animal trypanosomiasis, we need an accurate understanding of how the parasite causes disease. In this study, we present an analysis of gene expression throughout the T. vivax life cycle that compares the abundance of gene transcripts (mRNA) and proteins in the mammalian and insect hosts. We have identified genes that are preferentially expressed in each life stage, including many that are unique to T. vivax and probably expressed on its cell surface. Our findings provide a comprehensive understanding of how gene expression is regulated in T. vivax and further refine a pool of T. vivax-specific genes that could be exploited to prevent and treat animal trypanosomiasis. African trypanosomes are unicellular vector-borne hemoparasites of humans, domestic livestock and wild animals. They cause African trypanosomiasis, an endemic disease of sub-Saharan Africa otherwise known as sleeping sickness in humans and nagana in animals, and are transmitted between vertebrate hosts by the bite of tsetse flies (Glossina spp.). This endemic disease causes considerable morbidity in livestock herds and associated losses in animal productivity. The threat of Animal African trypanosomiasis in tsetse-infested areas also prevents effective exploitation of available pasture, thereby impeding economic development in the world\u2019s poorest nations. In recent years our understanding of trypanosome biology has progressed substantially through the determination of genome sequences for T. brucei, as well as T. congolense and T. vivax, and through transcriptomic and proteomic studies, which have focused on T. brucei except for three studies. Developmental regulation in T. brucei is particularly apparent in the expression of the major surface glycoproteins of the procyclic, epimastigote and bloodstream forms respectively, i.e. procyclin, Brucei Alanine-Rich Protein (BARP) and the variant surface glycoprotein (VSG). There are compelling reasons for supposing that gene expression in T. vivax will be different to T. brucei in important ways, not least due to differences in life cycle development, as well as numerous gene families that appear to be unique. As in vitro cultivation of insect stages has not previously been possible, gene expression in T. vivax has only been analyzed in the bloodstream form, and then only through transcriptomic analysis. We have now established in vitro cultures of the insect stages of T. vivax, and so, using transcriptome sequencing and proteomics, we have analyzed differences in gene expression between T. vivax epimastigote, metacyclic and bloodstream forms. Our results show that the numerous T. vivax-specific genes predicted to function on the parasite cell surface are transcribed and often developmentally regulated. Genome-wide patterns of developmental regulation are conserved across African trypanosome species, with some notable exceptions concerning pyruvate metabolism in T. vivax, which might indicate an important species difference in energy metabolism. Comparative genomics suggests that T. vivax differs quite considerably from the model T. brucei; by illuminating the expression of distinctive features in the T. vivax genome, this study moves us closer to understanding their phenotypic effects. All mice were housed in the Institut Pasteur animal care facilities in compliance with European animal welfare regulations. 
Institut Pasteur is a member of Committee #1 of the Comit\u00e9 R\u00e9gional d\u2019Ethique pour l\u2019Exp\u00e9rimentation Animale (CREEA), Ile de France. Animal housing conditions and the protocols used in the work described herein were approved by the \u2018\u2018Direction des Transports et de la Protection du Public, Sous-Direction de la Protection Sanitaire et de l\u2019Environnement, Police Sanitaire des Animaux\u201d (#B 75-15-28), in accordance with the Ethics Charter of animal experimentation that includes appropriate procedures to minimize pain and animal suffering. Authorization (to PM) to perform experiments on vertebrate animals is granted by license #75\u2013846 issued by the Paris Department of Veterinary Services, DDSV. Trypanosoma (Duttonella) vivax IL 1392 was originally derived from the Zaria Y486 Nigerian isolate. Bloodstream form parasites were maintained in vivo by continuous passage in mice, as previously described. Once parasitemia reached 10^8 parasites per ml, blood was collected by cardiac puncture onto heparin (2500 IU/kg) and then diluted 1:10 (v/v) with PBS 0.5% glucose to 5x10^7 parasites per ml. Parasites were separated from red blood cells by differential centrifugation using a swing-out rotor. Diluted blood was processed by one round of centrifugation (5 min at 200 g) and the supernatant withdrawn with a pipette without disturbing the red blood cell layer and the thin interface containing the white blood cells. The parasite-enriched suspension was subjected to a second round of centrifugation (5 min at 200 g) to eliminate all residual cells. The supernatant was then centrifuged for 10 min at 1800 g, and bloodstream form-containing pellets devoid of host cells were subjected to two further PBS washes under the same centrifugation conditions. Bloodstream form-containing pellets were then processed for RNA or protein extraction. T. vivax epimastigote cultures have been previously described and were maintained in vitro by serial passages. 
Epimastigotes attached to the surface of the culture flask formed micro-colonies and covered the entire surface after two weeks; the number of cells in the supernatant increased proportionally to the density of the adherent cell layer. Adherent epimastigotes were recovered from the flask by scraping and washed three times with PBS. As previously described, metacyclic forms are produced during in vitro growth and are found in the cell culture supernatant. Total RNA was isolated using an RNeasy Mini Kit in accordance with the manufacturer's instructions. RNA purity and concentration were evaluated by spectrophotometry using a NanoDrop ND-2000 (ThermoFisher). RNA quality and the relative contributions of total and small RNA were assessed on the Agilent 2100 Bioanalyzer microfluidics-based platform. Four biological replicates were prepared each for bloodstream form and metacyclic cells; five replicates were produced for epimastigote cells. For each replicate, poly-adenylated RNA (mRNA) was purified from total RNA using an oligo-dT magnetic bead pull-down, using TruSeq RNA Sample Prep v2 kits (Illumina). The mRNA was then fragmented using metal ion-catalyzed hydrolysis. A random-primed cDNA library was synthesized and double-strand cDNA was used as the input to a standard Illumina library preparation, with a fragment size of 400 bp. The libraries were amplified with 10 cycles of PCR using KAPA HiFi Polymerase. Samples were quantified and pooled based on a post-PCR Agilent Bioanalyzer, followed by size-selection using the LabChip XT Caliper. The multiplexed library was sequenced on the Illumina HiSeq 2000 with forward and reverse primers, according to the manufacturer's standard protocol, resulting in 100-nucleotide paired-end reads. Sequence data were analyzed and quality controlled, and individual indexed library BAM files were created. Paired-end RNA-seq data were mapped to the T. vivax Y486 reference strain, and differential transcript expression was assessed across the T. vivax life stages, combining all replicates in each case. Cuffdiff applies the Benjamini-Hochberg correction for multiple testing when assessing the significance of fold changes. To ensure accurate assessment of differential expression, transcript abundance was corroborated using a second method, edgeR; the two methods agreed well (r2 = 0.89\u20130.91). Significant differences in transcript expression were defined as at least 2-fold enrichment between conditions and q < 0.05, where q is the p value corrected for false discovery rate (FDR). For proteomics, protein samples were made up to 160 \u03bcl by addition of 25 mM ammonium bicarbonate. The proteins were denatured using 10 \u03bcl of 1% (w/v) RapiGest in 25 mM ammonium bicarbonate followed by three cycles of freeze-thaw, and two cycles of 10 min sonication in a water bath. The sample was then incubated at 80\u00b0C for 10 min, reduced with 3 mM dithiothreitol at 60\u00b0C for 10 min, then alkylated with 9 mM iodoacetamide at room temperature for 30 min in the dark. Proteomic grade trypsin was added at a protein:trypsin ratio of 50:1 and samples incubated at 37\u00b0C overnight. Three biological replicates were prepared for each cell type. Peptide mixtures were analyzed by on-line nanoflow liquid chromatography using the nanoACQUITY-nLC system coupled to an LTQ-Orbitrap Velos mass spectrometer equipped with the manufacturer\u2019s nanospray ion source. The analytical column was maintained at 35\u00b0C and a flow-rate of 300 nl/min. The gradient consisted of 3\u201340% acetonitrile in 0.1% formic acid for 90 min, then a ramp of 40\u201385% acetonitrile in 0.1% formic acid for 3 min. Full scan MS spectra (m/z range 300\u20132000) were acquired by the Orbitrap at a resolution of 30,000. Analysis was performed in data-dependent mode: the top 20 most intense ions from the MS1 scan (full MS) were selected for tandem MS by collision-induced dissociation (CID) and all product spectra were acquired in the LTQ ion trap. 
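The transcript-level significance criteria described above (at least 2-fold enrichment between conditions and Benjamini-Hochberg-corrected q < 0.05) can be sketched as follows. This is a minimal illustration that takes precomputed fold changes and p values; Cuffdiff and edgeR derive these from count-based statistical models.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment: returns q values, i.e. p values
    corrected for false discovery rate."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    # scale each sorted p value by n / rank
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

def call_differential(fold_change, pvals, fc_cutoff=2.0, q_cutoff=0.05):
    """Apply the criteria in the text: >= 2-fold enrichment in either
    direction and BH-adjusted q < 0.05. `fold_change` is stage B / stage A."""
    fc = np.asarray(fold_change, dtype=float)
    q = bh_adjust(pvals)
    return (np.maximum(fc, 1.0 / fc) >= fc_cutoff) & (q < q_cutoff)

# Example: three genes; only the first passes both criteria
flags = call_differential([4.0, 1.2, 0.1], [0.001, 0.8, 0.2])
```

Note that the third gene is strongly depleted (10-fold) but fails the q-value cutoff, so it is not called significant.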
Ion trap and Orbitrap maximal injection times were set to 50 ms and 500 ms, respectively. Thermo RAW files were imported into Progenesis LC\u2013MS. Runs were time-aligned using default settings and using an auto-selected run as reference. Peaks were picked by the software and filtered to include only peaks with a charge state of between +2 and +6. Peptide intensities were normalized against the reference run by Progenesis LC-MS and these intensities were used to highlight differences in protein expression between control and treated samples, with supporting statistical analysis calculated by the Progenesis LC-MS software. Spectral data were transformed to mgf files with Progenesis LC-MS and exported for peptide identification using the Mascot search engine. Tandem MS data were searched against a custom database that contained the common contaminants and the protein sequences predicted for the T. vivax reference genome (downloaded from TriTrypDB v-6.0). Search parameters were as follows: precursor mass tolerance set to 10 ppm and fragment mass tolerance set to 0.5 Da. One missed tryptic cleavage was permitted. Carbamidomethylation (cysteine) was set as a fixed modification and oxidation (methionine) as a variable modification. Mascot search results were further processed using the machine learning algorithm Percolator. The false discovery rate was set at 1% and at least two unique peptides were required for reporting protein identifications. Protein abundance (iBAQ) was calculated as the sum of all the peak intensities (from Progenesis output) divided by the number of theoretically observable tryptic peptides. All cDNA sequence data are available from the European Nucleotide Archive (http://www.ebi.ac.uk/ena), accession number ERP001753. Details of the transcriptomic experiments are also available from the ArrayExpress website (https://www.ebi.ac.uk/arrayexpress/), accession number E-ERAD-100. 
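The iBAQ calculation described above (summed peak intensities divided by the number of theoretically observable tryptic peptides) can be sketched as follows. The peptide-counting rule used here (full cleavage after K/R except before P, peptide length 7\u201330 residues) is a common simplifying assumption; Progenesis and other real implementations differ in detail.

```python
import re

def count_tryptic_peptides(sequence, min_len=7, max_len=30):
    """Count fully tryptic peptides: cleave after K or R, but not when the
    next residue is P; keep only peptides of plausible observable length."""
    peptides = re.split(r'(?<=[KR])(?!P)', sequence)
    return sum(min_len <= len(p) <= max_len for p in peptides)

def ibaq(peak_intensities, sequence):
    """iBAQ = total peak intensity / theoretically observable peptides."""
    n = count_tryptic_peptides(sequence)
    return sum(peak_intensities) / n if n else 0.0

# Example with a toy (hypothetical) protein sequence
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"
value = ibaq([1e6, 2e6, 5e5], protein)
```

Dividing by the observable peptide count makes abundances comparable between short and long proteins, which raw summed intensity does not.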
The mass spectrometry proteomics data have been deposited with the ProteomeXchange Consortium via the PRIDE partner repository (http://www.ebi.ac.uk/pride/archive/) with the dataset identifier PXD001617. Of the transcripts detected, 8994 (88.9%) were observed with at least 10 FPKM; the abundance of each transcript was estimated using Cufflinks. The most abundant transcripts in the epimastigote and metacyclic cells encoded the same set of highly abundant tubulins and ribosomal proteins, but not the putative VSG, displaying instead an abundance of the BARP-like proteins TvY486_0012620 (975 FPKM) and TvY486_1114940 (847 FPKM). Abundance estimates across our independent replicates were consistent, with strong positive correlations between replicates (ranging from 0.94 to 0.99) across all life stages. The degree to which the relative abundances of transcripts and peptides concur throughout the life cycle is an important question with implications for regulation of gene expression, especially in trypanosomatids, in which regulation is thought to be mostly post-transcriptional. Correlation between transcript and peptide abundance improved for genes with evidence of developmental regulation (r2 between 0.38 and 0.65). A gene family specific to T. vivax, but not presently included in the Cell-Surface Phylome (CSP) that we published previously, shows greater differential expression in bloodstream forms than any other family except for VSG; this gene family occurs 25 times among transcripts up-regulated in bloodstream forms relative to epimastigotes. A proportion of transcripts showed significant differential expression in one or more stage comparisons; we refer to these as \u2018developmentally regulated\u2019. Transcripts enriched in epimastigotes included T. vivax-specific CSP gene families, namely Fam35 and Fam43. 
However, these gene families were more abundant still in metacyclic forms (see below), indicating that their main focus of expression was not the epimastigote. Aside from these uncharacterized gene families, transcripts implicated in cellular respiration were also seen, for example components of the electron transfer chain such as cytochrome c1, cytochrome c and cytochrome c oxidase subunits (FC between 4.3\u201320.9), as well as transcripts for multiple cation transporters (FC between 3.8\u201324.7) and a meiotic recombination protein, DMC1. In epimastigotes, we identified 393 transcripts that were developmentally regulated, 387 of which were significantly more abundant in epimastigotes relative to bloodstream forms, while 8 transcripts were significantly more abundant in epimastigotes relative to metacyclic forms. In the final, metacyclic life stage, 357 transcripts were significantly more abundant relative to bloodstream forms, and these gave a very similar picture to the enriched transcripts in epimastigotes. A further 136 transcripts were significantly more abundant relative to epimastigotes, including T. vivax-specific CSP families 34, 35 and 43 (see above) and other transcripts encoding DNA polymerase kappa, an adenylate cyclase, and various reverse transcriptases derived from SLACS elements (average FC = 6.8). With respect to these observations, it should be noted that further analysis using edgeR produced very similar results to Cufflinks, with only 8.6% of the gene set displaying significant differential expression. 
Also, there was substantial overlap in the identities of developmentally regulated transcripts between comparisons; thus, of 518 transcripts significantly enriched in bloodstream forms relative to epimastigotes, 372 were also enriched relative to metacyclics; similarly, of 387 transcripts significantly enriched in epimastigotes relative to bloodstream forms, 285 were also enriched in metacyclics relative to the latter. From this it should be clear that, of the three life stages, the epimastigote and metacyclic transcriptomes were most alike. We examined the developmentally regulated transcripts for Gene Ontology (GO) terms that were significantly enriched, using a Fisher's exact test in BLAST2GO. Transcripts preferentially expressed in bloodstream forms are enriched for glycolysis and the glycosome, while those preferentially expressed in epimastigotes are enriched for cytochrome-c oxidase activity and ATP synthesis coupled proton transport. While suggestive of consistent differences in energy metabolism between life stages, differences in transcript abundance do not guarantee disparity in protein expression; thus, we sought to corroborate these observations with our proteomic data. A protein shows significant differential expression if a constituent peptide displays at least 2-fold enrichment and q < 0.05. Under these criteria, 595 or 30.5% of observed proteins (74.6% of quantifiable peptides) were developmentally regulated. In bloodstream-form proteins, glucose metabolism is significantly enriched relative to epimastigotes, while the citric acid cycle is significantly enriched relative to metacyclics. The multi-copy T. vivax-specific transcripts described above as the most highly abundant family in bloodstream forms were also identified among our peptides. Four members of this family are preferentially expressed in bloodstream forms, one relative to epimastigotes and three more relative to metacyclics. 
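The GO-term enrichment test described above can be sketched as a one-sided Fisher's exact (hypergeometric) test: how surprising is it that k of the n regulated genes carry an annotation found in K of the N genes genome-wide? This illustrates the statistic only; BLAST2GO additionally applies multiple-testing correction, and all counts below are hypothetical.

```python
from math import comb

def fisher_enrichment_p(k, n, K, N):
    """One-sided Fisher's exact p value for GO enrichment.

    k: regulated genes annotated with the term
    n: size of the regulated gene set
    K: genome-wide genes annotated with the term
    N: genome size
    Returns P(X >= k) under random sampling without replacement,
    i.e. the hypergeometric upper tail.
    """
    total = comb(N, n)
    p = sum(comb(K, i) * comb(N - K, n - i)
            for i in range(k, min(K, n) + 1))
    return p / total

# Example: 5 of 20 regulated genes carry a term found in 50 of 1000 genes
p = fisher_enrichment_p(5, 20, 50, 1000)
```

Here the expected overlap by chance is only 1 gene (20 x 50 / 1000), so observing 5 yields a small p value and the term would be reported as enriched.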
None were expressed in either insect stage, further indicating that this is a novel and very prominent feature of bloodstream forms. In the epimastigote form, 147 and 104 proteins were significantly more abundant relative to bloodstream forms and metacyclics respectively. GO terms associated with oxidation-reduction processes and amino acid metabolism were found to be significantly enriched, and peptides suggestive of oxidative phosphorylation, although no other elements of mitochondrial energy metabolism, were also significantly more abundant relative to metacyclics. In the metacyclic stage, 62 peptides were significantly more abundant relative to bloodstream forms, and 89 peptides were significantly more abundant relative to epimastigotes. In both cases, the greatest fold changes pertain to hypothetical proteins encoded by members of CSP gene families (see below). Among these peptides, various proteins with roles in intracellular trafficking were implicated, for example the vesicle formation protein Sec24C (FC = 11.5). The same test shows that purine ribonucleotide catabolism (which refers here to the ATP-binding requirements of the same intracellular transport processes) is also significantly enriched. It should be noted that, unlike the cohorts of stage-specific transcripts, there was no overlap in the membership of these various stage-specific peptide groups, i.e. there were no peptides significantly enriched in both epimastigotes and metacyclics relative to bloodstream forms. Transcriptomic and proteomic data from diverse African trypanosomes indicate consistently that life stages differ with respect to primary energy metabolism. Previous studies of global gene expression in T. brucei and T. congolense have shown developmental regulation of such genes, as well as those encoding the principal cell surface glycoproteins. T. vivax lacks certain enigmatic components of the T. brucei and T. congolense cell surfaces, such as the VSG-related transferrin receptor of bloodstream forms and procyclin, as well as some elements of Fam50 (see below). To relate expression across species, we calculated the fold change from the insect stage (epimastigote for T. vivax and procyclic form for T. brucei and T. congolense) to the bloodstream form, for each protein observed in all species (N = 128). Subset a contains proteins that are preferentially expressed in the bloodstream form in all species; the GO term for glycolysis is enriched among these proteins. This indicates that the use of substrate-level phosphorylation as the dominant process for ATP generation in the bloodstream form is a consistent feature of African trypanosomes. Subset b contains proteins with the opposite expression profile, i.e. preferentially expressed in the non-infective insect stage in all species. Analysis of GO terms associated with these proteins shows that proton-transporting ATP synthase activity, ATP synthesis coupled proton transport and oxidation-reduction process are enriched. This is consistent with the widespread use of oxidative phosphorylation in the low-glucose environment of the insect vector to generate ATP via a proton-motive force across the mitochondrial membrane. Hence, at the broadest level, developmental regulation is conserved across all species, as their shared insect host would predict. However, there are obvious differences also, in genes that are regulated differently in T. vivax and which might contribute to its unique phenotypes. Subset c contains proteins that are significantly more abundant in the insect stages of T. brucei and T. congolense than in the vertebrate stage, but preferentially expressed in T. vivax bloodstream forms. In the larger dataset excluding T. congolense (for which proteome coverage is lowest), the number of proteins falling into subset c increases to 27. Their expression in T. vivax is not simply a lack of regulation or low expression generally, since many are highly abundant. 
Ten of the proteins in subset c are associated with succinate-CoA ligase (GDP-forming) activity, pyruvate dehydrogenase (acetyl-transferring) activity and glucose metabolic process. Hence, comparison of differentially expressed genes across African trypanosomes shows that much is conserved at the regulatory level, but that important differences exist, even in the most essential physiology. Against this background of conserved developmental regulation, we have previously identified several gene families, known as Fam27-45, that are predicted to encode cell surface proteins and which, being unique to T. vivax, might distinguish the parasite from T. brucei and T. congolense. Fam27-45 also appear to be developmentally regulated at the transcript level. For example, Fam27, Fam35 and Fam43 are preferentially transcribed in insect stages. Conversely, Fam29, Fam30 and Fam32 are preferentially transcribed in bloodstream forms. Indeed, transcripts belonging to one of these families are rarely found throughout the life cycle, Fam34 being one such case. The best supported cases for developmental regulation concern the metacyclic-specific expression of Fam34, 35 and 43. We observed 34 distinct Fam34 transcripts and 24 of these were differentially expressed: 12 in the metacyclic and another 12 in the bloodstream form. However, the proteomic evidence indicates more selective developmental regulation; of the 11 Fam34 proteins that were observed, five were preferentially expressed, all in the metacyclic stage (FC between 2.2\u201310.8). Of 17 distinct Fam35 transcripts, 11 were differentially expressed; all were significantly more abundant in the metacyclic stage (FC between 13.2\u201392.0). Proteomic data support this view: of the six Fam35 peptides observed, all were most abundant in the metacyclic stage and two significantly so (TvY486_0041300 (FC = 6.55) and TvY486_0039920 (FC = 22.19)). 
We observed seven distinct Fam43 transcripts, of which five were differentially expressed, all most abundant in the metacyclic (FC between 19.7\u201386.3). All three Fam43 peptides that were observed were preferentially expressed in the metacyclic stage (FC between 19.8\u201330.9). Hence, these results indicate that the putative T. vivax-specific gene families are (mostly) genuine protein-coding sequences, and are often developmentally regulated at the transcript and (where observed) protein level. Fam50 includes the BARP genes of T. brucei and the GARP and CESP genes of T. congolense, known to be preferentially expressed on their respective cell surfaces during the insect stages. The Fam50 repertoire of T. vivax is smaller and less diverse than those of the other species, which may reflect the simpler existence of T. vivax in the tsetse fly. We observed T. vivax Fam50 genes, 13 of which are transcribed most abundantly in the insect stages, which is not seen in other African trypanosomes. The bloodstream forms of African trypanosomes are defined partly by the expression of a VSG coat on the cell surface. In our analysis, we observed 89 distinct VSG transcripts with q < 0.05. We have produced transcriptomes and proteomes for three different developmental forms of T. vivax, and identified the transcripts and peptides that are significantly enriched in each. These data provide the first profile of global gene expression and developmental regulation throughout the complete T. vivax life cycle. The profile suggests a situation broadly similar to that already observed in T. brucei and T. congolense, though with significant distinctions, not least further evidence for developmental regulation of species-specific cell surface glycoproteins in both the vertebrate and insect stages of T. vivax. Most T. vivax genes are represented in our transcriptome, but few transcripts are unique to a particular life stage and the proportion of differentially expressed transcripts is only 11.2%. 
By contrast, the proteome represents only 16.3% of all genes, but in 798 cases where differential expression could be assessed, 74.6% of peptides show significant differential expression and these are unique to one life stage. It may be that developmentally regulated proteins are also particularly abundant; certainly this is true for the components of stage-specific cell surface coats, and this would cause differentially expressed peptides to be overrepresented within the proteome. Previous proteomic studies for T. brucei have reported more proteins than we have found here for T. vivax. Previously, comparison of the T. vivax genome sequence with those of T. brucei and T. congolense demonstrated that there are more than 2000 genes that are only present in T. vivax. That comparison also showed that T. vivax lacked procyclin, the canonical cell surface glycoprotein of T. brucei and T. congolense insect stages. This is consistent with T. vivax lacking a procyclic stage in the insect mid-gut and raises the question of what coats the T. vivax surface if not procyclin. Clearly, the abundant T. vivax-specific gene families offer plausible candidates for the role, but it could also be filled by Fam50, which has been shown to include various surface glycoproteins expressed during the insect stages of T. brucei and T. congolense. Several Fam50 peptides were observed in T. vivax and preferentially expressed in the epimastigote, while several transcripts (belonging to different loci) were significantly enriched in both epimastigotes and metacyclics. The fact that the transcripts and peptides are not derived from the same loci may suggest that different Fam50 genes became activated in the period between our preparation of RNA and protein.
This may suggest that regulation of Fam50 gene expression is highly dynamic, with multiple isoforms being promoted and repressed over short intervals. This is not the situation we observe for VSG, for which the identity of enriched transcripts and peptides largely match. The presence of Fam50 transcripts in the metacyclic stage at levels comparable to the epimastigote levels may be an artefact, since Fam50 peptides in metacyclics are sparse and comparable in abundance to bloodstream forms. In short, the expression of multiple Fam50 proteins in the T. vivax epimastigote supports the view derived from T. brucei and T. congolense that this is a conserved family of glycoproteins performing diverse roles in the insect stages of the life cycle. Given that BARP in T. brucei and CESP in T. congolense are cell-surface glycoproteins, the same may apply to Fam50 in T. vivax; certainly, our data indicate that multiple Fam50 proteins are expressed. Trypanosome gene predictions are usually supported by expression evidence, but T. vivax-specific cases seldom are. This study has confirmed that most T. vivax-specific gene families are expressed. In the cases of Fam33, 40, 41 and 45, which should now be discounted from the CSP, the apparent lack of transcription raises the question of what function these repetitive non-coding sequences might perform. Three T. vivax-specific gene families are very strongly enriched in the metacyclic stage, which is intriguing because in T. brucei and T. congolense the metacyclic coat is characterized by VSG. While metacyclic VSG are replaced by other VSG upon differentiation into bloodstream forms, and so are temporally distinct, there is no metacyclic-specific cohort of VSG sequences. VSG may nonetheless be present on T. vivax metacyclics, since we observed a low abundance VSG protein preferentially expressed in metacyclics (TvY486_0027560). However, assuming that Fam34, 35 and 43 are expressed on the cell surface as predicted, it is clear that the infective form of T. vivax has a qualitatively different surface architecture to the other species, with a considerable non-VSG component. The same could be claimed for bloodstream forms also. Although Fam50 is preferentially expressed in epimastigotes, other multi-copy families show enrichment elsewhere: three families show exclusive enrichment in bloodstream forms at the transcript level (Fam28-30), though without proteomic support. This could indicate that our analysis lacked the sensitivity to detect them, perhaps because in bloodstream forms the superabundant VSG dominates the sequencing effort, making the detection of lower abundance proteins less effective than for either metacyclic or epimastigote. The presence of numerous non-VSG surface proteins might account for the observation that the T. vivax VSG coat is less dense than that of T. brucei. The VSG genes themselves present an expression profile typical of other African trypanosomes. In a survey of T. congolense life stages, Helm et al. (2009) recorded 13 distinct VSG transcripts in metacyclic cells, with the most abundant comprising 24% of the total number, and 26 distinct VSG transcripts in bloodstream forms, with the most abundant contributing 62% of the total. This suggests that most of a T. congolense population express the same active VSG, while a few individuals express a range of low abundance alternatives. In fact, when combined, the 12 least abundant VSG ESTs were only 0.5% of all VSG transcripts in bloodstream forms. In T. brucei, Jensen et al. (2009) identified in a microarray-based study cohorts of less abundant transcripts in addition to the known, active VSG, some of which were expressed most abundantly in the insect stages. In contrast, a previous study of T. vivax using 454 sequencing technology concluded that only one VSG was expressed.
VSG expression is regulated to produce a succession of structural variants that can evade specific immune responses but also prevent exposure of the total VSG structural repertoire to the host immune system, which would lead to a comprehensive immune response. Thus, VSG genes are expressed in a monoallelic fashion from the highly regulated context of a dedicated VSG expression site. Proteomic analyses have presented a similar picture. In T. congolense, 11 different VSGs were identified across all life stages. For T. brucei, one proteome identified 10 canonical VSGs, although non-reference T. brucei strains were used. These did not include the active VSG because neither study used the reference strain (927), and so the active VSG did not map to a VSG gene in the reference genome. Consequently, the 10 VSGs identified by Butter et al. (2013) are all low abundance alternatives, represented by few peptides (< 7) and achieving poor coverage (< 9%). The role, if any, of these \u2018accessory\u2019 VSGs is unclear; some are very likely metacyclic VSGs and it is known that expression of these can continue several days after transmission. Taking the transcriptomic and proteomic evidence together, we observed an expression profile in T. vivax consistent with the expression profiles of VSG observed in T. brucei and T. congolense. The two dominant VSG sequences were superabundant at both transcript and peptide levels. Therefore the identity of the active VSG remained constant in the period between RNA and protein preparation, meaning that this is unlikely to represent a transition between two VSGs and that T. vivax strain IL1392 is probably a mixture of parasites expressing one of two different VSGs. Both of these active VSGs belong to Fam24, the subtype homologous to canonical b-type VSG in T. brucei and T. congolense. As in T. congolense, the less abundant VSGs in T. vivax may represent metacyclic VSGs.
In contrast to the previous result, we observed numerous expressed VSGs. One VSG, TvY486_0027560, may be a metacyclic VSG in this strain as it was preferentially expressed in the metacyclic form (its transcript was not recorded). Finally, two VSG-like sequences belonging to Fam25, a T. vivax-specific subtype, are also preferentially expressed. In the bloodstream stage, African trypanosomes exclusively employ glycolysis to exploit abundant glucose in host plasma to generate ATP via substrate-level phosphorylation in the glycosome. In the insect stages, energy is generated in the glycosome and in the mitochondrion by catabolizing pyruvate. Experimental evidence indicates that this results in insect forms excreting succinate and acetate, while bloodstream forms excrete pyruvate. Gene expression in T. vivax is consistent with differences in energy metabolism relative to T. brucei (and probably T. congolense): all glycolytic enzymes are preferentially expressed in the bloodstream form, where they are among the most abundant transcripts and peptides, and all components of the electron transfer chain are preferentially expressed in the epimastigote. We see that enzymes for the catabolism of PEP, such as glycosomal malate dehydrogenase and glycosomal phosphoenolpyruvate carboxykinase, and for the conversion of pyruvate to acetate, i.e. multiple components of the pyruvate dehydrogenase complex and of the succinyl-CoA synthetase complex, are significantly more active in the bloodstream form than in the epimastigote. Additionally, the fumarase responsible for reaction 13 in T. brucei is constitutively expressed in T. vivax. However, the final enzyme in the pathway is preferentially expressed in the insect stages in both species. Accordingly, we would predict that T. vivax excretes fumarate, acetate and perhaps succinate in its bloodstream stage rather than in the insect stage. It is not clear why T. vivax would benefit from pyruvate metabolism in the bloodstream when substrate-level phosphorylation using glucose should suffice. However, in the insect stage, when the parasite remains in the proboscis and without access to the hemolymph, it could be that such metabolism serves little purpose. Therefore, this may reflect a lack of upregulation in the epimastigote rather than adaptive upregulation in the bloodstream form, illustrating how life cycle variation has affected the regulation of energy metabolism in these organisms. The first global perspective on gene expression in T. vivax has confirmed that a broadly similar process of developmental regulation occurs in all African trypanosome species. However, subtle differences, for instance in energy metabolism and putative cell surface molecules, offer new insights into the molecular basis for the life cycle differences that exist between species. Beyond the background of conservation, this study has confirmed the presence of numerous T. vivax-specific gene families and shown that these are developmentally regulated, indicating that the surface of T. vivax differs quite substantially from the model derived from other African trypanosomes. S1 Fig (DOCX). S2 Fig (DOCX): Leading log-fold change in the first dimension is plotted on the x-axis and the second dimension is on the y-axis. Distances here correspond to leading log-fold-changes between replicates in each pairwise comparison of life stages: BSF vs EPI (left), BSF vs MET (centre) and EPI vs MET (right). These plots demonstrate that replicates cluster by life stage, reflecting the consistency in transcript abundance among replicates of the same stage. S3 Fig (DOCX): Normalized protein abundance levels across different samples were plotted to determine the principal axes of abundance variation.
The first principal component is plotted on the x-axis and the second is plotted on the y-axis. The mass of grey numbers in the background refers to individual data points (proteins). Data points derived from individual replicates collected in each life stage are summed and represented by coloured dots: red (BSF), green (MET) and blue (EPI). The coloured dots cluster by life stage, reflecting the consistency in expression profile provided by replicates of each condition. S4 Fig (DOCX): The number of comparisons possible for each life stage comparison is given in brackets. S1 Table (XLSX): Genes without a non-zero value in all three conditions have been removed. S2 Table (XLSX). S3 Table (XLSX): Significance is defined as p < 0.05 and FC \u2265 2. S4 Table (XLSX): Significance is defined as q < 0.05 and FC \u2265 2."}
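The significance criteria quoted in the table legends above (p or q < 0.05 and FC \u2265 2) amount to a simple two-threshold filter. A minimal sketch, with invented gene records purely for illustration:

```python
# Sketch: flag differentially expressed genes using the thresholds quoted
# above (q < 0.05 and fold change >= 2). The gene list below is illustrative.

def is_differentially_expressed(fold_change, q_value, fc_cutoff=2.0, q_cutoff=0.05):
    """Return True when a gene passes both significance thresholds.

    Fold changes below 1 (down-regulation) are rectified so that a
    2-fold decrease also satisfies the FC >= 2 criterion.
    """
    effective_fc = fold_change if fold_change >= 1 else 1.0 / fold_change
    return effective_fc >= fc_cutoff and q_value <= q_cutoff

genes = [
    ("TvY486_0041300", 6.55, 0.01),    # (gene, fold change, q-value); values illustrative
    ("TvY486_0039920", 22.19, 0.001),
    ("constitutive_example", 1.3, 0.40),
]
significant = [name for name, fc, q in genes if is_differentially_expressed(fc, q)]
```

The rectification step matters in practice: a gene repressed 5-fold (FC = 0.2) is as strongly regulated as one induced 5-fold and should pass the same cutoff.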
+{"text": "Various environmental signals integrate into a network of floral regulatory genes leading to the final decision on when to flower. Although a wealth of qualitative knowledge is available on how flowering time genes regulate each other, only a few studies have incorporated this knowledge into predictive models. Such models are invaluable as they enable investigation of how various types of inputs are combined to give a quantitative readout. To investigate the effect of gene expression disturbances on flowering time, we developed a dynamic model for the regulation of flowering time in Arabidopsis thaliana. Model parameters were estimated based on expression time-courses for relevant genes, and a consistent set of flowering times for plants of various genetic backgrounds. Validation was performed by predicting changes in expression level in mutant backgrounds and comparing these predictions with independent expression data, and by comparison of predicted and experimental flowering times for several double mutants. Remarkably, the model predicts that a disturbance in a particular gene does not necessarily have the largest impact on directly connected genes. For example, the model predicts that SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) mutation has a larger impact on APETALA1 (AP1), which is not directly regulated by SOC1, than on LEAFY (LFY), which is under direct control of SOC1. This was confirmed by expression data. Another model prediction involves the importance of cooperativity in the regulation of AP1 by LFY, a prediction supported by experimental evidence. In conclusion, our model for flowering time gene regulation enables us to address how different quantitative inputs are combined into one quantitative output, flowering time. Flowering at the right moment is crucial for the reproductive success of flowering plants.
Hence, plants have evolved genetic and molecular networks integrating various environmental cues with endogenous signals in order to flower under optimal conditions. The complexity of flowering time regulation is enormous, even when focusing on the network involved in integrating the various signals. To understand how gene disturbances influence flowering time, it is not only important to know which genes regulate each other, but also how strongly these genes influence each other. Hence, quantitative aspects of flowering time changes upon perturbations of input signals cannot be understood by merely assessing qualitatively which interactions are present. To this end, a quantitative model describing how different genes in the network regulate each other is needed. Indeed, other complex plant developmental processes have been subject to extensive modeling efforts. Flowering time regulation has been extensively studied experimentally in the plant model species Arabidopsis thaliana. Substantial qualitative information is available about the factors involved and how these interact genetically. However, the information that is needed for quantitative and dynamic modelling is missing to a large extent. This includes comprehensive and standardized quantitative data on flowering time under various conditions and in different genetic backgrounds, and time-courses of gene expression. Consequently, Arabidopsis thaliana flowering time has been scarcely studied using modeling approaches. Recently, a few promising mathematical modeling approaches appeared aimed at modeling the floral transition in various plant species, including Arabidopsis halleri. Previous models of Arabidopsis thaliana flowering time did not take genetic regulation into account or used a mainly qualitative approach. Only very recently a model of the Arabidopsis thaliana flowering time integration network was presented. We aimed to obtain a mechanistic understanding of the Arabidopsis thaliana flowering time integration network, by investigating a core gene regulatory network composed of eight genes: SHORT VEGETATIVE PHASE (SVP), FLOWERING LOCUS C (FLC), AGAMOUS-LIKE 24 (AGL24), SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1), APETALA1 (AP1), FLOWERING LOCUS T (FT), LEAFY (LFY) and FD. Although certainly more genes are involved in integrating the various signals influencing the timing of the floral transition, we focused on this core network. FT is expressed in the leaves and its protein moves to the shoot apical meristem (SAM), where it dimerizes with FD and contributes to the activation of SOC1 and AP1 expression. FLC and SVP are also expressed in the SAM, where they repress the expression of SOC1. SOC1 integrates signals from multiple pathways and transmits the outcome to LFY, in which AGL24 is involved upon dimerizing with SOC1. In turn, LFY activates AP1, and AP1 and LFY reinforce each other's expression. Once the expression of AP1 is initiated, this transcription factor orchestrates the floral transition by specifying floral meristem identity and regulating the expression of genes involved in flower development. Importantly, we included SOC1, SVP and AGL24 in our model. Because of the above mentioned role of AP1 as orchestrator of floral meristem identity specification, the moment at which the AP1 expression level starts to rise is used as a proxy for flowering time in the model. The above introduced interactions between the flowering time integration genes and the floral meristem identity genes at the end of the pathway allow us to derive a set of Ordinary Differential Equations (ODEs) describing how genes in the network regulate each other. ODEs were chosen because they arise from continuum modelling of molecular interactions and allow quantitative analysis of the effect of perturbations on expression levels and finally on flowering time. In order to build and validate an ODE model describing the network constituted by the eight selected genes, we obtained three quantitative datasets: i) gene expression time-courses of the selected eight genes in wild type; ii) flowering time of plants of different genetic backgrounds; and iii) expression data of the selected genes in the plants of these different genetic backgrounds. A key aspect of our approach is that we estimate model parameters using the dynamic gene expression time-course data for the components of the model, in combination with flowering time data (datasets i and ii). We validated our model by comparing predicted expression time-courses for mutants in components of the network with experimental data (dataset iii). Finally, we obtained detailed understanding of how genes are affected by perturbation in other genes, via the regulatory interactions that constitute the network. The equations describe how the expression level of each gene is influenced by the other genes; this regulation is described by Hill functions. In wild type Arabidopsis, the AP1 level remains barely detectable in the SAM until about day 13 after germination and then sharply increases. For the fitting of the wild type expression time-courses, overall a good agreement between model and data was obtained. For SOC1, the overall fit was good, but does not capture the data point at day 9, which deviates from the general trend in the time series, resulting in a nrmse of 19%. For AP1 and FD the value of the nrmse was around 14%, and for AGL24 and LFY it was 7%. The FLC and SVP expression data were used directly as input to the model; these are shown in Figure B in the supporting information. For AP1, we could only obtain a good fit by introducing a particular value of one parameter describing how AP1 is regulated by LFY.
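The Hill-function formalism used for each regulatory input can be illustrated with a minimal sketch. The gene names are from the model, but all parameter values here are invented for illustration; activation of AP1 by LFY with cooperativity n is written as dAP1/dt = beta * LFY^n / (K^n + LFY^n) - delta * AP1 and integrated with a simple Euler scheme:

```python
# Minimal sketch of Hill-function gene regulation (illustrative parameters):
#   dAP1/dt = beta * LFY^n / (K^n + LFY^n) - delta * AP1

def hill_activation(x, K, n):
    """Fraction of maximal activation for regulator level x."""
    return x**n / (K**n + x**n)

def simulate_ap1(lfy_level, beta=1.0, K=0.5, n=3, delta=0.1, dt=0.01, t_end=100.0):
    """Euler integration of AP1 driven by a constant LFY input."""
    ap1 = 0.0
    t = 0.0
    while t < t_end:
        ap1 += dt * (beta * hill_activation(lfy_level, K, n) - delta * ap1)
        t += dt
    return ap1

# With cooperativity n = 3 the response is switch-like: LFY well below K
# yields almost no AP1, while LFY above K approaches the maximal steady
# state beta/delta.
low = simulate_ap1(lfy_level=0.2)
high = simulate_ap1(lfy_level=2.0)
```

The cooperativity exponent n is what makes the response sigmoidal rather than graded, which is the model behaviour discussed below for the LFY-mediated regulation of AP1.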
As further discussed below, this parameter indicates DNA binding cooperativity, for which indeed experimental evidence exists. A total of 35 parameters in six equations were estimated from the time series data containing 13 datapoints (expression levels) per gene. Besides this exception, there is considerable agreement between data and predictions. Indeed, comparison of the Pearson correlation with correlation obtained using randomized data demonstrates the significance of this result (p<0.005), indicating a satisfactory model fit. Simulated flowering times were obtained for various genetic backgrounds (soc1, agl24, fd and flc). In these experiments, a flowering-inducing shift from short-day to long-day conditions was used. Predicted flowering times matched the experimental values for the SOC1, FD and AGL24 mutations, but not for the FLC mutation. The latter could be due to the low expression and limited role of FLC in the Col-0 background due to the FRIGIDA (FRI) mutation. A key issue in our model is the mechanism by which the network is able to give a quantitative response to specific perturbations. How are changes in a given gene expression level transferred to other components of the network, and how does this impact flowering time? In order to validate model predictions of how changes in expression propagate through the network, we simulated the expression time-courses for mutants and obtained independent experimental data for comparison. For that, microarray experiments were used, which were carried out for wild type and four mutant backgrounds. The parameter value obtained for the influence of SOC1 on LFY (\u03b27) is much smaller than that for the influence of LFY on AP1. Several observations are relevant here: (I) in yeast-two-hybrid assays, AGL24 and SOC1 form a heterodimer; (II) LFY is expressed only in those tissues where SOC1 and AGL24 expression overlap; (III) LFY expression is affected in both soc1 and agl24 mutants. If these observations implied that AGL24 and SOC1 regulate LFY strictly as a heterodimer, soc1 and agl24 mutations would be expected to have similar effects on LFY. We tested an alternative model version in which AGL24 and SOC1 only regulated LFY as a dimer and not separately from each other.
This resulted in a decreased goodness-of-fit in particular for LFY (nrmse 43% instead of 7%), and in this alternative model, indeed, the effects of agl24 and soc1 mutation on LFY and on flowering time were comparable, which contradicts available experimental data. Based on these considerations, in our final model AGL24 and SOC1 have independent roles in regulating LFY. In this model, the simulated LFY expression is reduced by only ~25% in the agl24 knockout mutant relative to its time-course expression in wild type. In contrast, LFY expression is predicted to be reduced by ~65% in the soc1 mutant. Each regulatory interaction is characterized by a DNA binding efficiency (parameter K) and an expression activation strength (parameter \u03b2). A difference in any of these two parameters between SOC1 and AGL24 could lead to a difference in the effect of SOC1 versus AGL24 mutation. In the set of parameter values we obtained for our model, the DNA binding efficiency for AGL24 (K10) and SOC1 (K11) binding to the LFY promoter is quite similar. However, there is a substantial difference in activation strength (\u03b27 vs \u03b26), with SOC1 being much more able to activate LFY, resulting in a much larger effect of soc1 mutation compared to agl24 mutation. Analysis of predicted flowering times for a range of values of \u03b2 for SOC1 and AGL24 confirms the dependency on the SOC1 activation strength. In agreement with this observation, the model simulation predicts only a small additional reduction in LFY expression for the soc1/agl24 double mutant relative to the soc1 single mutant, in which the stronger activator is already absent. In the case of the different effect of the soc1 mutation compared to the agl24 mutation, it is important to consider that both SOC1 and AGL24 are known to form additional complexes, and such dimers might also play a role in their differential functioning. An important reason to apply computational models to a biological system, such as the floral integration network, is that it allows investigating how the various interactions that together constitute the network transmit perturbations into a final readout.
Indeed, by integrating experimental data with modeling we analyse how different components of the flowering time regulation network react to changes in other components, finally leading to a specific flowering time. We specifically analysed the regulation of LFY and AP1. In a recently published review, an overview is given of attempts to model plant reproduction gene regulatory networks, including networks involved in flowering time regulation. When analysing for which genes the model predictions were of better quality, the effects of an SVP overexpression mutant and an FT knock-out mutant on flowering time were predicted less accurately compared to other mutants (including SVP knock-out and FT overexpression mutants). FT and SVP are connected to each other in the network, which could indicate that in this part of the network the model needs refinement. In particular, given that SVP overexpression results in lower FT expression, the fact that both SVP overexpression and FT knock-out were not well predicted indicates that the effect of lower FT levels, either directly on AP1 or more indirectly via SOC1, is not perfectly captured. It is however also important to consider that the FT levels used as input in our model are relatively low, which is related to the fact that they are not measured at the peak of diurnal expression of FT. Another aspect to consider is that FLC and SVP are present as external inputs in the model and are not directly modelled; if a mutation in one of these impacts the other as well, the model would miss such an effect, which would deteriorate prediction performance. This might indeed be the case, according to ChIP-seq data. As for the different effects of the soc1 vs. agl24 mutation, for which we provide an explanation in terms of a difference in a specific parameter in the model, a difference in protein levels in spite of similarity in RNA levels could also be relevant, although there is currently no experimental data that indicates this. Clearly, there are several directions to expand our work. We do not specifically represent protein and RNA separately; currently the state of the art in the proteomics field does not allow high-throughput and precise quantification of protein levels during the vegetative phase of plant development. Recent evidence indicates however that for at least one component in the model, SVP, the effect of protein stability is important. In general, the amount of detail in the model will always be a compromise. This holds as well for the type of interactions in the network. Currently, regulatory interactions are modelled, whereas protein-protein interactions are not explicitly included. Nevertheless, the way in which regulatory inputs are combined gives an implicit representation of the way in which proteins interact with each other, although the importance of complex formation for the components of the network is clear. Incorporating temperature-dependent effects could involve adapting the SVP decay parameter; for FLM, additional equations describing the two isoforms would be needed. As for the modelling of upstream pathways, in a recent overview of known effects of mutations, ~150 genes were listed as being currently known to impact flowering time. Currently, we focused on a core set of genes involved in integrating various flowering time signals. Given that input from the environment converges on various components of the flowering integration network, an exciting follow-up step will be to incorporate environmental cues as the next layer of information in the gene regulatory network. This could include both direct environmental effects on some of the model components, or modelling complete upstream pathways.
As an example of direct environmental influence that could be modelled, recent data indicates that the above mentioned effects of SVP protein stability as well as alternative splicing of the flowering time regulator Flowering Locus M (FLM) depend on temperature. To conclude, we present a dynamic and predictive model for flowering time regulation. Our work presents a framework for studying the mechanisms of flowering time regulation, by addressing how different quantitative inputs are combined into a single quantitative output, the timing of flowering. For the time-course gene expression studies, Arabidopsis Col-0 wild type plants were grown under long-day conditions on rockwool and received 1 g/L Hyponex plant food solution two times per week. Rosette leaves and shoot apical meristem enriched material were harvested daily at ZT3 from seven plants per sample, in duplicate. For flowering time measurements, the total number of primary rosette leaves was scored at visual bolting. The positions of the plants from the different genotypes were randomized in the trays, and the flowering time phenotype was recorded without prior knowledge of the genotype. Plants for microarray experiments were grown on soil in growth chambers under short-day conditions for 25 days (soc1-6, agl24 (SALK_095007), flc-3) or 28 days (fd-3). Flowering was induced by shifting plants to long-day conditions. Plants for flowering time analysis were grown in growth chambers with controlled environment under long-day conditions. Plants were raised on soil under a mixture of Cool White and Gro-Lux Wide Spectrum fluorescent lights, with a fluence rate of 125 to 175 mmol m-2 s-1. YELLOW-LEAF-SPECIFIC GENE8 (YLS8) was used as the reference gene for the qRT-PCR analyses. RNA was isolated from the plant samples using the InviTrap Spin Plant RNA Mini Kit.
Subsequently, a DNAse (Invitrogen) treatment was performed, which was stopped with 1 \u03bcL of a 20 mM EDTA solution and 10 minutes incubation at 65\u00b0C. Total RNA concentration was measured, and 1 \u03bcg RNA was used to perform cDNA synthesis with the Taqman MultiScribe Reverse Transcriptase kit (LifeTechnologies). qRT-PCR was performed with the SYBR green mix from BioRad using the gene-specific oligonucleotides indicated in Table D in the supporting information. The relative gene expression was given by Etarget = 2^(-\u0394Ct), where Ct stands for the threshold cycle and \u0394Ct = Cttarget - Ctreference. From that, the absolute abundance was estimated by Atarget = Etarget \u00d7 s, where s stands for a scaling factor obtained by dividing the average abundance that a transcript reaches in a cell by the highest Etarget value among all samples, and multiplying by an assumed maximal protein abundance. Since a linear relationship between abundances of RNA and protein is assumed in the model, the average transcript abundance was adjusted based on the average abundance of a protein in a cell. An available estimate for the range of protein abundance is between 400 nM and 1400 nM. Microarray time series experiments were performed as previously described, using RNA from Col-0, soc1-6, agl24, and fd-3 plants. Briefly, biotinylated probes were prepared from 1 \u03bcg of total RNA using the MessageAmp II-Biotin Enhanced Kit (Ambion) following the manufacturer\u2019s instructions and hybridized to Arabidopsis ATH1\u2013121501 gene expression arrays (Affymetrix). Arrays were washed on a GeneChip Fluidics Station 450 (Affymetrix) and scanned on an Affymetrix GeneChip Scanner 7G using default settings. Expression data for Col-0, soc1-6, agl24, and fd-3 have been deposited with ArrayExpress (E-MEXP-4001). Expression data for flc-3 (ArrayExpress: E-MEXP-2041) have previously been published. In the model, for FLC and SVP, gene expression is represented in the leaves and meristem.
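The \u0394Ct conversion and abundance scaling described above can be sketched as follows. Function names and the sample Ct values are invented for illustration; only the 2^(-\u0394Ct) relation and the assumed 1400 nM upper abundance come from the text:

```python
def relative_expression(ct_target, ct_reference):
    """Relative expression via the delta-Ct method: 2**(-(Ct_target - Ct_reference)).

    A target whose threshold cycle equals the reference gene's has
    relative expression 1; each additional cycle halves it.
    """
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

def absolute_abundance(rel_expressions, max_abundance_nM=1400.0):
    """Scale relative expressions so the highest sample maps onto an assumed
    maximal abundance (the text cites a 400-1400 nM protein abundance range)."""
    s = max_abundance_nM / max(rel_expressions)
    return [e * s for e in rel_expressions]

# Example: a target gene measured against the YLS8 reference (Ct values invented).
rels = [relative_expression(ct, 20.0) for ct in (26.0, 23.0, 21.0)]
abs_nM = absolute_abundance(rels)
```

The rescaling step is what converts the dimensionless qRT-PCR readout into the concentration-like units the ODE model works in.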
For all the other genes, the variables correspond to expression in the meristem. Note that for SVP and FLC there are no equations; they act as external inputs in the model, and their regulation is not explicitly modelled. Among the assumptions underlying the equations: FD requires dimerization with FT in order to activate SOC1 expression; FD can activate AP1 as a monomer; and, as recently shown in rice, the interaction between FT and FD is bridged by a 14\u20133\u20133 protein. For AP1, the degree of cooperativity (n) for the LFY-mediated regulation of AP1 was set to n = 3. Model source code is available as S1 Dataset. In the model, the expression dynamics xi of a gene i depend on the parameter values associated to i. To independently fit an equation i, the expressions of the direct regulators of i in the right-hand side of the equation were taken from the data, and interpolated with a polynomial fit. This decoupling method has previously been described in full detail. A first optimization step was carried out with the MultiStart solver implemented in MATLAB. The parameters were then input in the whole system of equations as starting point for a second optimization step. In this second step, the equations were solved as a system and the expressions of the direct regulators of i were taken from their associated ordinary differential equation solutions. This was carried out by the lsqnonlin solver (implemented in MATLAB) to fine-tune the fitting obtained by the first optimization step. To assess the goodness of fit for each gene, the normalized root mean square error (NRMSE) was used, which equals sqrt((1/n) \u03a3 (xiexp - xipred)^2) / (xmax - xmin), where xmax and xmin are the maximum and minimum observed expression values; xiexp and xipred are the experimental and predicted values at time i; and the sum is over all n timepoints.
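The NRMSE defined above is straightforward to compute; a minimal sketch in pure Python (function name mine, the 4-point series invented):

```python
def nrmse(observed, predicted):
    """Normalized root mean square error:
    sqrt(mean((obs_i - pred_i)^2)) / (max(obs) - min(obs))."""
    n = len(observed)
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
    return (mse ** 0.5) / (max(observed) - min(observed))

# Example with an invented 4-point expression time series:
obs = [0.0, 1.0, 2.0, 4.0]
pred = [0.2, 0.8, 2.2, 3.8]
error = nrmse(obs, pred)  # -> 0.05, i.e. a fit within 5% of the observed range
```

Normalizing by the observed range is what makes the 7% (LFY) and 19% (SOC1) figures comparable across genes whose absolute expression levels differ widely.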
For simulations of gene expression of Arabidopsis wild type grown at 23°C/LD, the initial gene abundances were taken equal to the first expression time-points and the parameters were the same as described in Table C. For mutant simulations, the expression of the mutated gene i was fixed to a constant value xi = kmut. For the knock-out null mutants, the values of kmut were adjusted to zero; and for the knockdown mutants (not null mutations), kmut values were adjusted to a small percentage of the expression of i observed in the first time-point from wild type Col-0. The equations were solved using MATLAB, integrated with the stiff solver ode23s. To assess the model predictions of changes in gene expression, we compared predicted relative changes with relative changes obtained with microarray data. To do so, we calculated the predicted total amount of expression using the trapz function in MATLAB. Subsequently, these values were scaled by subtracting the wild type value and then dividing by the wild type value. Similarly, the experimental relative change was calculated based on the microarray data. Note that comparing these values focuses on the effect of a mutation on the dynamics of genes in the network over the complete time-course, and as such takes into account the fact that the experimental conditions of the microarray experiment cannot directly be simulated (flowering-inducing shift from short-day to long-day conditions). The predictions of flowering time were based on AP1 expression. For that, it was assumed that, at a molecular level, Arabidopsis undergoes the floral transition at the moment that AP1 expression initiates. Therefore, according to our experimental AP1 time series, for wild type Col-0 the floral transition takes place between days 12 and 13 after germination. For simplification, we take the exact day 12.6 because it corresponds to the average number of rosette leaves (RLs) observed at the onset of flowering for wild type Col-0.
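The relative-change comparison described above reduces to a trapezoidal integral per time-course followed by a wild-type-centered scaling (a Python sketch mirroring MATLAB's trapz; function names are ours):

```python
def trapezoid(y, x):
    # Total expression over the time course by the trapezoidal rule.
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

def relative_change(t, wild_type, mutant):
    # Subtract the wild-type total, then divide by the wild-type total,
    # as described in the text.
    wt_total = trapezoid(wild_type, t)
    return (trapezoid(mutant, t) - wt_total) / wt_total
```

A mutant with twice the wild-type expression at every timepoint gives a relative change of 1.0.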
To estimate the flowering time from mutant simulations, we use the time at which AP1 expression reaches the same simulated expression value as obtained at day 12.6 for wild type Col-0. This implies that the AP1 expression threshold for triggering the floral transition is the same for different plant growth conditions and mutants. Because flowering times are usually reported in number of rosette leaves (RLs), we subsequently scaled the predicted days to RLs by assuming a linear relationship between the number of RLs observed at the onset of flowering and the time in days after germination at which Arabidopsis thaliana undergoes the floral transition at a molecular level. In addition to the set of mutants obtained in consistent conditions in this work, we also included existing mutant data. Wild type Col-0 flowering time in these experiments is somewhat different from that observed in our experiments. In addition, flowering times in literature are mostly reported in rosette leaves (RL), and not directly in days. To be able to integrate these data, we scaled existing mutant data with a linear factor chosen so as to scale the wild type Col-0 flowering time to 12.6 RL. S1 File (DOCX). S1 Dataset: contains model source code (MATLAB) and RT-PCR data used for parameter estimation. The main model file is “ode_application_final.m”. This uses two files (LoadParametersFromFile_Leaf.m and LoadParametersFromFile_Meristem.m) to read the parameter values; two files that contain the RT-PCR data; and two files with ODEs (ode_equation_FT_leaf.m and ode_equations.m). (ZIP)"}
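The two linear scalings described above, converting predicted days to RLs and rescaling literature RL values to the common 12.6 RL wild-type reference, can be sketched as follows (Python; function names and example values are illustrative, not from the study):

```python
def days_to_rosette_leaves(predicted_days, wt_days=12.6, wt_rl=12.6):
    # Linear scaling of a predicted flowering time (days after
    # germination) to rosette leaves, anchored at the wild-type
    # Col-0 values reported in the text.
    return predicted_days * wt_rl / wt_days

def rescale_literature_rl(reported_rl, reported_wt_rl, target_wt_rl=12.6):
    # Rescale published mutant flowering times (in RL) so that the
    # wild-type Col-0 value of that study maps to 12.6 RL.
    return reported_rl * target_wt_rl / reported_wt_rl
```

For instance, a mutant reported at 20 RL in a study where wild type flowered at 10 RL rescales to 25.2 RL.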
+{"text": "Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for patterns of expression in the field of co-expressed gene clusters. We examined the similarities of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field. In natural conditions, multiple environmental factors may vary at the same time. Therefore, it is important to also analyze the effect of fluctuations in multiple environmental factors in more complex field experiments. Plessis et al. developed mathematical models to analyze the gene expression patterns of rice plants grown in the tropical environment of the Philippines using two different farming practices. One field of rice was flooded and constantly supplied with fresh water (referred to as the irrigated field), while the other field was dry and only received water from rainfall (the rainfed field). The experiments show that temperature and levels of sunlight (including UV radiation) have a strong impact on gene expression in the rice plants. Short-term variations in temperature and sunlight levels also have the most consistent effect across the different fields and seasons tested.
However, for many genes, the plants grown in the irrigated field responded to the changes in environmental conditions in a different way to the plants grown in the rainfed field. Further analysis identified groups of genes whose expression combined responses to several environmental factors at the same time. For example, certain genes that responded to increases in sunlight in the absence of drought responded to both sunlight levels and the shortage of water when a drought occurred. The next step is to test more types of environments and climates to be able to predict gene expression responses under future climatic conditions. DOI: http://dx.doi.org/10.7554/eLife.08411.002 Plants have evolved responses to complex environmental fluctuations that take place at time scales that vary from seconds to years and shape plant developmental and physiological responses. Variations in environmental signals, including temperature, water levels, solar radiation, biotic interactions and resource availability, are often unpredictable and need to be integrated and transduced to changes in gene expression, which may then be associated with physiological and/or morphological adaptations. Studies of plant transcriptional responses to environmental perturbations are nearly exclusively undertaken in controlled, static laboratory conditions that are divergent from what is seen in the natural world. While these laboratory experiments have enriched our knowledge of the molecular pathways involved in abiotic stimulus responses, it is clear that organismal phenotypes and the genetic architecture of various traits differ between controlled laboratory and field conditions. This so-called lab–field gap involves differences in factors such as CO2 concentration and irradiance. Indeed, a reason invoked by plant molecular biologists to avoid experiments in natural environments despite the lab–field gap is the perceived low reproducibility of results that would be generated under unpredictable fluctuating conditions.
We performed one field experiment during the dry season and one during the wet season (July–August) at the experimental rice station of the International Rice Research Institute (IRRI), Los Baños, Laguna, on the island of Luzon in the Philippines, in 2013. We planned sampling to avoid the major shift in gene expression patterns induced by the transition to the flowering stage. We measured global gene expression using RNA sequencing. Our goal is to relate gene expression variation over time to variation in climatic conditions and plant developmental stage, and to assess how these relationships are affected by season, field type and genetic background. We focused on trends in gene expression variation common to a high number of genes: after removing 1251 genes with a low coefficient of variation and 2962 genes with a low mean expression, we grouped the remaining genes into co-expressed gene clusters. We used a model selection approach to explain gene expression patterns by environmental and developmental variation. This approach relies on selecting a linear combination of environmental/developmental (ED) input parameters that both minimizes the model mean squared error (MSE), quantifying the difference between the model and the expression data, and limits model complexity. A preliminary analysis showed that allowing for more than three parameters per equation over-fit the model more often than improving it, so we limited the number of parameters per linear equation to three. A typical ED equation had the following form: cluster mean = αED1 + βED2 + γED3, where ED1, ED2 and ED3 are ED parameters, and α, β and γ are linear regression coefficients. The ED parameters used in these models were measurements of current conditions at the time of sampling, recent changes in temperature, humidity, wind speed and solar radiation, temperature fluctuations, and short-term and long-term averages for all climatic conditions.
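A single ED equation of the form cluster mean = αED1 + βED2 + γED3 is an ordinary least-squares problem; a minimal sketch (Python/NumPy rather than the original implementation; names are ours):

```python
import numpy as np

def fit_ed_model(ed_params, cluster_mean):
    # Least-squares fit of: cluster mean = a*ED1 + b*ED2 + ...
    # ed_params: (n_timepoints, n_params) array of ED parameter values;
    # cluster_mean: (n_timepoints,) vector of mean cluster expression.
    coeffs, *_ = np.linalg.lstsq(ed_params, cluster_mean, rcond=None)
    predicted = ed_params @ coeffs
    mse = float(np.mean((cluster_mean - predicted) ** 2))
    return coeffs, mse
```

The returned MSE is the quantity the selection procedure seeks to minimize while keeping at most three parameters per equation.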
We designed our approach to take potential differences in climatic response between fields, genotypes and seasons into account. To assess whether disparities in transcriptional patterns could be explained using distinct ED equations, we considered different ED models for each cluster. The simplest model is a single equation for the whole cluster mean. A more complex model would combine two different equations, for example, in the case of a field-specific model: cluster mean = αiED1 + βiED2 + γiED3 in the irrigated field and cluster mean = αrED4 + βrED5 + γrED6 in the rainfed field. If we were to take into account all possible differences between fields, genotypes and seasons, we would get models with as many as eight equations, which would be difficult to interpret. We therefore limited the maximum number of equations to four by applying the method to one season at a time, testing different equations between fields and genotypes, or by considering only one genotype and examining the possibility of different equations between fields and seasons. Some clusters had a correlation coefficient between genotypes below 0.8. In this two-season analysis, in addition to identifying which ED parameters the expression of each gene cluster can be related to, we assessed whether the field environment and the season affect the identified transcriptional response. The simple season-specific ED model selected for cluster 9 provides an example. We used genotype correlation within a cluster as a measure of replicability of ED effects: the higher the genotype correlation, the more the gene expression response was driven by factors common to both genotypes. While the median genotype correlation of all genes in the analysis was 0.55, the median of all gene cluster means was 0.90, showing that averaging expression profiles over many genes remarkably reduces sources of non-replicability.
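Genotype correlation as used here is a Pearson correlation between the two genotypes' expression profiles; a minimal sketch (Python; the original analysis was done in R):

```python
import math

def pearson(x, y):
    # Pearson correlation between the expression profiles of the two
    # genotypes for one gene or one cluster mean.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near 1 indicate that both genotypes track the same environmental drivers, the replicability criterion used for the 0.8 and 0.9 thresholds in the text.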
We focused on 27 gene expression clusters with a genotype correlation greater than 0.9. We analyzed gene clusters according to their field response to understand how distinct modes of cultivation affect gene expression under the same climate. The expression of a gene cluster can be affected by the field environment in two ways: (i) distinct responses to climatic/developmental factors and/or (ii) a shift in expression level, representing different ways in which the effect of the field environment can be integrated with the climatic response and developmental program. Enrichment in specific functions or pathways within the different types of gene clusters can be indicative of the role of certain processes in the adaptation to distinct field environments. We divided the 27 gene clusters into four groups based on whether they showed different expression responses to climatic/developmental factors in each of the two fields (correlation between the expression patterns of the two fields below 0.8) in one or two seasons. We also took into account whether they showed a shift in mean expression level between fields. Correlations between ED parameters make it difficult to ascertain the causal factor of gene expression change from the parameter selected in a model. This is why, for the interpretation of ED models, we grouped those of the parameters that showed high correlations to each other. The field environment had the least effect on the climatic/developmental expression response of the six clusters in group 1. Nevertheless, several of these clusters are enriched for genes involved in response to water deprivation (p<10^-18, hypergeometric test), response to high light intensity (p<10^-9) and, more generally, response to abiotic stimulus (p<10^-6), which is consistent with an induction by drought, solar radiation and/or temperature increase.
The four gene clusters from group 1 with models that have negative terms for short-term temperature/solar radiation are enriched for signaling/regulatory categories of genes: cluster 8 for response to hormone stimulus, cluster 21 for cell communication and signaling, cluster 24 for protein kinase activity and hormone-related signaling pathway, and cluster 30 for regulation of transcription, DNA-dependent. This shows that short-term temperature/solar radiation effects on the expression of groups of co-expressed abiotic stress response and signaling/regulatory genes can be impervious to the field environment. Gene clusters 9 and 18 are both modeled with positive terms for short-term temperature/solar radiation across both fields and both seasons but are differently affected by the field environment. While cluster 18 displays a higher expression in the irrigated field than the rainfed field in the dry season, cluster 9 shows an increase in the mean expression level in the rainfed field compared with the irrigated field in both seasons, more distinct in the dry season. This field effect is modeled for cluster 9 by the inclusion of a negative term for soil moisture, indicative of an additive effect of water availability on gene expression over the short-term effect of temperature/solar radiation. The 12 gene clusters in group 2 have in common strong differences in gene expression pattern between the rainfed and irrigated fields that are specific to the dry season. These differences are probably due in great part to the drought period experienced during that season. For these clusters, the field environment during the dry season affected both climatic/developmental responses and mean gene expression level. Several of these clusters showed a steady increase or decrease in expression over the dry season in the rainfed field; this steady increase/decrease was modeled in some cases with a soil moisture term consistent with a drought effect.
However, parameters for long-term averages of solar radiation, temperature, humidity or even developmental stage were also selected. All of these parameters vary somewhat monotonically over the course of the dry season. Among the gene clusters with a lower level of expression in the rainfed field, clusters 1 and 5 are strongly enriched for genes associated with photosynthesis. Both clusters are also enriched in genes related to lipid biosynthetic process and, for cluster 1, secondary metabolic and carbohydrate catabolic processes. This result is consistent with the deleterious effect of abiotic stress on photosynthesis. Gene clusters in groups 3 and 4 show differences between the field environments during the wet as well as the dry season, which suggests that their expression is affected by aspects of the field environment not related to water availability. Gene clusters in group 3 had different mean expression levels in the two fields in the dry season and a low correlation between the two field environments in the wet season. Several genes in these clusters have candidate functions assigned through orthology (notably with A. thaliana). We looked at whether the expression patterns of these genes were consistent with their putative function. We focused our attention on the candidate genes that showed a replicable environmental response in our data (genotype correlation above 0.8) and belonged to one of the 22 gene clusters whose main ED model terms are environmental parameters. One of these candidate genes showed differences between seasons but not between fields; it is an ortholog of the A. thaliana ELIP1 and ELIP2 genes, which are possibly involved in pigment accumulation in response to stress and have a potential role in photoprotection. Other candidates included OsCHS, OsC4H and orthologs of A. thaliana UVR2 and PDX1.3, four genes that could be involved in UV acclimation under fluctuating climatic conditions. The models selected for the 60 gene cluster means calculated from our data could then be evaluated for their transferability. Non-null models were selected for 24 of the 36 PND cluster means.
They explained a large part of the expression variance. Two clusters were strongly enriched for photosynthesis-associated genes (p<10^-27 and 10^-26, respectively), and both were modeled with season-dependent models. This result indicates that the transcriptional response of groups of co-expressed genes involved in photosynthesis is affected by the seasonal context. Two clusters (13 and 25) were enriched for genes associated with the developmental process and cell cycle. Cluster 13 was modeled with a season-dependent model, while cluster 25 could be modeled with the same equation for both seasons of our experiment but no parameter of this equation could be transferred to model expression in the PND. This shows that the transcriptional regulation of some genes involved in development is not only sensitive to the field context but also to the season and climate type context. As we found that several clusters enriched for genes involved in photosynthesis and development were among the most sensitive to the field context, we investigated whether this was also true for the seasonal and climatic context. We performed a GO term enrichment analysis on the clusters from the irrigated field analysis. To compare our method with the method from Nagano et al., we identified the models explaining most of the variance of their corresponding cluster means (R^2 > 0.7). Our method was thus more efficient than theirs in matching the Nagano et al. models with a clear environmental term. We used a model selection approach to identify relationships between major variations in global gene expression and environmental conditions/developmental stage.
Focusing our analysis on groups of genes showing consistent variation across two different genotypes in each of the two seasons, we determined that most of these representative expression patterns could be explained through the combined effects of several environmental parameters, related to distinct climatic/soil-related factors and/or acting on different time-scales. Co-occurring abiotic stresses trigger complex responses that cannot be predicted from the effect of single stresses. OsCDPK7 and OsMIOX, which have been shown to be induced by drought, were modeled with a common equation for both seasons. In contrast, the type of field in which rice was cultivated had a major effect on the response to climatic conditions of many genes. A large part of the differences in climatic response between field environments that we detected were especially pronounced in the dry season and linked to the water status of the rainfed field. Some expression differences, however, seemed independent of this drought effect, as they were detectable in the wet season. Finally, we used independent expression data to assess the effect of a different type of climate on transcriptional responses in irrigated conditions. We found that only a small subset of the gene expression patterns observed under the tropical climate of our experiment could be generalized to the temperate conditions of the Japanese dataset. When investigating the reproducibility of transcriptional responses, we found that gene expression patterns driven by short-term averages of solar radiation and temperature, and to a lesser extent long-term wind speed, were the most consistent responses across seasons, fields and climates. Short-term averages of solar radiation and temperature belong to the group of parameters that was used to model most of the 27 gene co-expression clusters, with the highest correlation between genotypes in our two-season analysis.
In particular, the exponentially transformed short-term averages of solar radiation (NL-), which model stronger effects of variations in the lowest range of irradiance, were selected to model at least one entire season or field subset for 12 of the 27 gene clusters of our analysis, and could systematically be transferred from the models of our irrigated field data to explain the expression variation of the same clusters of genes observed under the Japanese temperate climate. In contrast, we found that the effects of longer-term variation in temperature and solar radiation were much less reproducible across seasons, fields and climates than short-term effects. One explanation is that, as these responses may be integrated over long periods of time, there is greater opportunity for them to be modulated by broad climatic and developmental factors. The only long-term environmental factor that could be transferred to model gene expression under the Japanese climate for several clusters was the 15-day average of wind speed. In our two-season analysis, it was selected to model, across both seasons and fields, cluster 32, which is enriched for thigmotropism genes. Studies have already aimed at identifying genes responsive to mechanical cues. The analysis for GO annotation enrichment of the 27 clusters with the most reproducible expression patterns revealed functionally related groups of genes that are co-expressed in response to environmental and developmental signals. These results point to the biological processes most critical for plant physiological response to dynamic environments. The GO terms for biological processes with the most significant enrichment were nucleic acid metabolic process (and its parent term nucleobase-containing compound metabolic process) and photosynthesis (p<10^-18 and 10^-22, respectively); the corresponding clusters were also both enriched for developmental process and cell cycle associated genes.
The strong enrichment in nucleic acid metabolic process related genes highlights the importance of a tight transcriptional regulation of this molecular process, probably linked to the control of cell division, in the adaptation of the developmental program to environmental fluctuations. These gene clusters were all down-regulated in response to drought, in accordance with previous results in A. thaliana. Fertilizer was applied as recommended. Carbofuran insecticide/nematicide was applied at 1 and 15 days after sowing (DAS) and herbicide was applied at 1 DAS in the upland ecosystem. Manual weeding and general plant protection were performed as needed. The first sampling took place 16 days after transplanting seedlings in the irrigated field in the dry season and 23 days after transplanting in the wet season. Each sample consisted of six young leaves (of approximately the same size throughout the experiment) from six individual plants. Each leaf was immediately frozen in liquid nitrogen upon collection. We tried to reduce as much as possible the effect of circadian variation on gene expression: first, between sampling time-points, by always starting the collection exactly 4 hours after sunrise; second, to avoid a shift in expression within a sampling time-point due to the delay between the first and last collected samples, we ensured fast sampling by marking beforehand each plant and leaf to be collected. Collection took, on average, 13 min. Averages of temperature, relative humidity, wind speed, solar radiation, precipitation and atmospheric pressure at the field site were recorded every 15 min using Wireless Vantage Pro2 ISS stations with a 24-hr fan aspirated radiation shield from Davis Instruments. Measurements started 15 days before the first sampling day. After the ninth sampling of the dry season, and during all of the wet season, we recorded climatic averages every minute from 1 hr before sampling to the end of sampling.
There was a weather station in each of the irrigated and rainfed fields, one of which also included a solar radiation sensor. No significant difference in measurements was detected between the fields, so the data from the station with the solar radiation sensor were used. We used 60 cm long 2710ARL tensiometers from Soilmoisture Equipment Corp. placed 30 and 15 cm deep in the soil at four different locations, two per replicate of the rainfed field. Plant height and tiller number were measured every 6 days for the same plants throughout each season until the end of sampling. Frozen leaf tissue was ground manually with a mortar and pestle cooled in liquid nitrogen. Total RNA was extracted from about 200–400 µl of ground tissue using the RNeasy Plant Mini Kit, following the manufacturer's protocol, and eluting the RNA in 40 µl of water. RNA was treated with Baseline-ZERO DNase according to the manufacturer's instructions, then cleaned up with the Qiagen RNeasy Mini Kit and eluted in 32 µl of water. We assessed RNA quality using Nanodrop and electrophoresis on an agarose gel. Four µg of total RNA were depleted of ribosomal RNA using the Epicentre Ribo-Zero Magnetic Kit for plant leaf tissue. We purified the depleted RNA with the Agencourt RNAClean XP kit. We constructed RNA libraries using the Epicentre ScriptSeq v2 RNA-Seq Library Preparation Kit, purifying the complementary DNA and libraries with the Agencourt AMPure XP System. We added barcode indexes using the Epicentre ScriptSeq Index PCR Primers. We quantified the libraries by Qubit, with the DNA HS kit. Library quality and average fragment size were determined using the 2100 Bioanalyzer with high sensitivity DNA reagents and DNA chip. We quantified the libraries on the LightCycler 480 using the KAPA Library Quantification Kit. Libraries were sequenced using HiSeq 2000 51 bp paired-end sequencing, with either 6 or 8 libraries per lane.
Each sample provided a mean of 58 million sequencing reads. We used Tophat to align reads to the O. sativa Nipponbare reference genome (release 7 of the MSU Rice Genome Annotation Project); the data have been deposited under the accession number GSE73609. We conducted a multidimensional scaling of the normalized expression data, in which the samples clustered by genotype and field and, to a lesser extent, season. We used these results to detect potential sample mislabeling and identified one sample switch. We also found that the sample for the first replicate of the Pandan Wangi rainfed field in the dry season, sixth time-point, did not cluster with any of the genotype/field groups, so we removed it from the analysis and replaced it with a duplication of the second replicate. We considered each subset of the data that consisted of one genotype and one season and excluded from the normalized expression dataset genes for which we detected no read in more than 40 samples in any of these subsets, which reduced the dataset to 22,144 expressed genes. We transformed the obtained values with the function log2(x + 1), so as to keep positive values of expression, and averaged the biological replicates. We calculated the coefficient of variation of the log-transformed expression over the 240 data points for each expressed gene. We identified 1251 genes with a coefficient of variation below 0.01, which were considered as having a stable expression in our experiment and were removed from the analysis. A preliminary cluster analysis showed that genes with a very low expression level had a weak correlation with the center of their clusters, so 2962 genes with a mean lower than 1 were not included in the clustering step. To remove the absolute differences in expression level between genotypes and seasons, which we did not intend to model, each genotype/season subset was centered. For each of the analyses, the expression data for each gene were scaled over the whole profile.
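The log2(x + 1) transform and the two gene filters can be sketched as follows (Python; thresholds follow the text, function names are ours):

```python
import math

def log_transform(counts):
    # log2(x + 1) keeps expression values positive, including zeros.
    return [math.log2(x + 1) for x in counts]

def coefficient_of_variation(values):
    # Population standard deviation divided by the mean.
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return sd / mean

def keep_gene(log_expr, cv_threshold=0.01, mean_threshold=1.0):
    # Mirror of the two exclusion criteria described above: drop genes
    # with a coefficient of variation below 0.01 (stable expression)
    # or a mean log expression below 1 (very low expression).
    return (coefficient_of_variation(log_expr) >= cv_threshold
            and sum(log_expr) / len(log_expr) >= mean_threshold)
```

A constant profile has a coefficient of variation of 0 and is filtered out, whatever its level.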
Our clustering method was the Partitioning Around Medoids algorithm from the “cluster” package version 1.15.2 in R. Gene Ontology annotations (http://geneontology.org/page/about) from the November 2014 release were obtained from two sources. For the first one, the annotation database was queried via MSU locus identifiers; for the second one, the database was queried via Uniprot Ids, obtained via a mapping from MSU Id to OMA Id, and then OMA Id to Uniprot Id (mapping files available in the current release of OMA). A third set of annotations was obtained directly from OMA (http://omabrowser.org/oma/about/). All three annotation sets were then combined non-redundantly in order to produce the final annotation file for rice genes. The enrichment analysis was conducted using the GOstats package in R. Recent changes in climatic conditions were calculated relative to conditions 30 min (δ30 min), 1 hr (δ1 hr) and 2 hr (δ2 hr) earlier, using 5, 10, and 30 min averages, respectively. We evaluated temperature fluctuations by decomposing daily variation in temperature with the seasonal decomposition by loess (stl) function in R and calculating 1 hr (ε1 hr), 4 hr (ε4 hr) and 24 hr (ε24 hr) before-sampling averages of the remainder of the decomposition. The value used for the soil moisture parameters was a mean of measurements from four tensiometers, two in each replicate of the rainfed field. Exponentially transformed values of solar radiation were also included among the input parameters. We designed a parameter that would give an estimate of the developmental stage, rather than the age, of the plant, to be able to compare appropriately the plants in the rainfed and irrigated fields, as they followed different developmental itineraries relative to their age in days.
This parameter used three stages as fixed points: transplanting stage, corresponding to the actual transplanting event for the irrigated field and determined for the rainfed plants as the stage where they were the same height as just-transplanted irrigated plants; end of tillering production (40); and heading stage (100). Intermediary time-points were calculated linearly between these fixed points. Input parameters were centered per genotype and season and scaled over the whole dataset to match the expression data. When input parameters were more highly correlated than the highest genotype correlation of all clusters (r = 0.98), the correlated parameters were averaged together, as we would not have enough precision in the expression data to discriminate between them. The same method was used on each cluster of a given set to select a model. It is described here for the analysis of one genotype in two seasons. We used the high dimensional inference (hdi) function of the hdi R package. All possible combinations of one, two and three parameters from this subset were used to calculate linear regression models, the MSEs of which were computed using five times five-fold cross-validation. The models were compared using the BIC to select the optimal model that fits the data while limiting overfitting. To avoid linear models containing correlated parameters in the same equation, which does not bring much more information than only one parameter and increases the risk of overfitting the data, when two parameters from the subset selected with the hdi function were correlated with r > 0.85, we only kept in the group of tested parameters the one that resulted in the best model.
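The developmental stage parameter described above interpolates linearly between fixed anchor stages; a sketch (Python; the anchor days below are hypothetical, and the stage value of 0 at transplanting is our assumption, since the text elides that value):

```python
def developmental_stage(day, anchors):
    # Piecewise-linear interpolation between fixed developmental points.
    # anchors: sorted list of (day, stage) pairs, e.g.
    # [(transplant_day, 0), (tillering_end_day, 40), (heading_day, 100)].
    d0, s0 = anchors[0]
    if day <= d0:
        return s0
    for d1, s1 in anchors[1:]:
        if day <= d1:
            return s0 + (s1 - s0) * (day - d0) / (d1 - d0)
        d0, s0 = d1, s1
    return s0  # beyond the last anchor, hold the final stage value
```

This lets plants in the two fields be compared at the same developmental stage even when their calendar ages differ.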
The linear model selection step was applied to select a model for the 60 data points together, as well as for pieces of the cluster mean: the 30 data points of the dry season alone, wet season, irrigated field, rainfed field, as well as the 15 data points of the irrigated field in the dry season, rainfed field in the dry season, irrigated field in the wet season and rainfed field in the wet season. We used these results to form seven piecewise models to be compared to the model selected for the whole dataset:
model in one piece
dry season + wet season
irrigated field + rainfed field
dry season + irrigated field in the wet season + rainfed field in the wet season
wet season + irrigated field in the dry season + rainfed field in the dry season
irrigated field + rainfed field in the dry season + rainfed field in the wet season
rainfed field + irrigated field in the dry season + irrigated field in the wet season
irrigated field in the dry season + rainfed field in the dry season + irrigated field in the wet season + rainfed field in the wet season
We chose from these eight models using the BIC. The MSE of composite models was computed by assembling the squared residuals from cross-validation, calculated from each distinct linear equation individually, into one vector of squared residuals for the whole model; the number of parameters per model was the sum of the number of parameters of each equation. In the case of strong differences in ED responses between fields and seasons, the hdi function might not select ED parameters that can fit each field or season when using as output the expression data from both seasons and fields. To take that possibility into account, we repeated the hdi parameter selection with season and field subsets of the data, with B = 300, EV = 2, threshold = 0.7 and fraction = 0.7.
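The BIC-based comparison of all one-, two- and three-parameter linear models can be sketched as follows (Python/NumPy; a simplification of the procedure above that scores the plain training MSE and omits the cross-validation and hdi pre-selection steps):

```python
import itertools
import math
import numpy as np

def bic(n, mse, k):
    # Gaussian-likelihood BIC for a linear model with k parameters
    # fitted to n data points.
    return n * math.log(mse) + k * math.log(n)

def select_ed_model(params, y, max_terms=3):
    # Exhaustively compare linear models using at most max_terms ED
    # parameters and return (score, combo) for the combination that
    # minimizes the BIC.
    n, p = params.shape
    best = None
    for k in range(1, max_terms + 1):
        for combo in itertools.combinations(range(p), k):
            X = params[:, combo]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            # Clamp the MSE to avoid log(0) on perfect fits.
            mse = max(float(np.mean((y - X @ coeffs) ** 2)), 1e-12)
            score = bic(n, mse, k)
            if best is None or score < best[0]:
                best = (score, combo)
    return best
```

The k·log(n) penalty is what makes a two-parameter model lose to an equally good one-parameter model, mirroring the overfitting control described in the text.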
We repeated the model selection steps described above using first a pool of the parameters selected using the dry and wet season subsets, and then a pool of the parameters selected with the rainfed field subset and the irrigated field subset. We used the BIC to choose from the three models selected from these three groups of parameters. We used microarray data from two publicly available studies. We tested the transferability of the models determined with the irrigated field data of our experiment, limiting our evaluation to models that were season-independent and explained more than half of the variance of the cluster mean. Using the same gene distribution as our irrigated field clusters, we calculated cluster means for the PND. We calculated climatic parameters in the same way as we did for our experiment. The developmental stage parameter was the number of days after transplanting. Input parameters were centered per time of day to match the expression data. We applied the second step of our model selection method to these profiles to select models common to all five time-of-day profiles, using as a subset of parameters the ones in the model determined for our irrigated field data instead of the hdi pre-selection. We only kept a parameter in the new model if it was fitted with a coefficient of the same sign as in the original model. For the independent analysis of the PND, we used the same method as for our data, which produced 60 gene clusters. The model selection method was a simplified version of the one used for our data. We added to the set of input parameters short-term averages of 8 and 12 hr to account for a possible longer effect of same-day conditions, as some samples were collected later in the day than in our experiment.
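The sign-consistency filter applied when transferring a model to the PND can be sketched as follows. The helper `sign_consistent_params` and all data are illustrative inventions, not code or values from the study: the original model's parameters are refitted on the new data, and a parameter survives only if its coefficient keeps its original sign.

```python
import numpy as np

def sign_consistent_params(X_new, y_new, names, original_coefs):
    """Refit the original model's parameters on the new data and keep only the
    parameters whose coefficient keeps the sign it had in the original model."""
    X1 = np.column_stack([np.ones(len(y_new)), X_new])
    beta, *_ = np.linalg.lstsq(X1, y_new, rcond=None)
    refit = dict(zip(names, beta[1:]))
    return [p for p in names if np.sign(refit[p]) == np.sign(original_coefs[p])]

# Toy example: in the original model, temperature acted positively and
# radiation negatively; in the new data the radiation effect flips sign,
# so radiation is dropped from the transferred model.
rng = np.random.default_rng(2)
temp, rad = rng.normal(size=40), rng.normal(size=40)
y_new = 1.0 * temp + 0.5 * rad + rng.normal(scale=0.05, size=40)
kept = sign_consistent_params(np.column_stack([temp, rad]), y_new,
                              ["temp", "rad"], {"temp": 0.9, "rad": -0.4})
```

The filter is deliberately conservative: a parameter whose apparent effect reverses between datasets is treated as not reproduced, rather than refitted with the opposite sign.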
We only selected models common to all five time-of-day profiles, using a unique set of parameters selected from the cluster mean in its entirety and choosing from all possible linear regressions with no more than three parameters using the BIC. We used the Field Transcriptome Database for Oryza sativa (http://fitdb.dna.affrc.go.jp/) to identify the genes showing a clear environmental response detected by the Nagano et al. models, choosing the ones that had models where both the R2 of the overall model and the R2d of the environmental parameter were over 0.5, with no developmental or circadian terms and an environmental term for either temperature or solar radiation with no gate or a sinusoidal gate and a dose-dependent response. eLife posts the editorial decision letter and author response on a selection of the published articles. An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent, and the author response typically shows only responses to the major concerns raised by the reviewers. Thank you for submitting your work entitled "Multiple abiotic stimuli interact to regulate rice gene expression under field conditions" for peer review at eLife. Your submission has been evaluated by three reviewers, one of whom is a member of our Board of Reviewing Editors (Daniel J Kliebenstein). The reviewers have discussed the reviews with one another and the Reviewing editor has drafted this decision to help you prepare a revised submission. Summary: This work begins a new effort to combine modern genomics methodologies with new computational approaches to look at how organisms exist within the environment.
This includes an effort to incorporate all of the fluctuations in this environment in an attempt to better parse the transcriptomic response of the organism to this highly changing system.Overall, this manuscript is highly interesting and begins to extend the genomics effort into more real world settings.Essential revisions:There were two fundamental concerns:1) The first was that it was felt that the new computational approaches should be tested against another independent data set. Specifically it is suggested that the authors create models with their data on the parameter shared with Nagano et al., since the claim is that they have a new pipeline for data analysis/model building. Then, they should test these models with the data from Nagano et al. to demonstrate the added value.2) Secondly, the writing could use further improvement to help convey to both the biologists and the computational scientists what the inherent novelty of this study is to both groups. Alternatively, it might be suggested to focus solely on one group and provide them with insights that pertain to that group.Reviewer #1:My major concern was that it was difficult to parse out what was or was not truly novel in the results and analysis. I understand that these articles are splitting a divide between the computational and the biological communities each with their own and often contrasting expertise. As such, I would suggest an editorial reworking to make the central messages to be as simple and direct as possible.Reviewer #2:The main goal of this study is to determine the effects on multiple environmental factors/components on gene expression in natural field conditions for rice. 
The claim is that the computational approach taken after gathering and assembling the transcriptome profiles (based on RNA-seq) for three rice landraces, with two different systems of cultivation in two seasons of cultivation (dry and wet), allows establishing conclusions about interacting abiotic stimuli in field environments. The data set consists of expression levels for ~22,300 genes. In addition, the authors attempted to test the validity/robustness of the proposed approaches on an independent data set. In the following, I will only focus on a detailed review of the shortcomings of the statistical modeling strategy proposed and employed in this study. The workflow is summarized as follows: (1) cluster the gene expression profiles with PAM; (2) prune correlated climatic parameters and pre-select a subset with the hdi package; (3) repeat the pre-selection on subsets of the data; (4) select linear models using the MSE and BIC; (5) combine the selected models. There are numerous technical issues and difficulties with the chosen strategy, some of which could be remedied, and others which are to be deeply questioned. These are present in every step of the workflow, which led me to question the interpretation and conclusions drawn. For instance, in step (1), no validation for the selection of the number of clusters is provided, nor is it assessed how this may affect the results; the authors could use various cluster quality indices to back up their claims; at this point, it is also not clear which distance measure was used in the PAM clustering approach. It is also advisable that the same centering and scaling strategy is used from the beginning of the workflow, so that consistent results are expected. In step (2), the set of parameters used was first pruned, by arbitrarily removing highly correlated parameters; there are many ways in which this could be done – was the parameter with the largest number of high correlations removed first, or was another strategy used? How would this affect the final results obtained and the subsequent biological interpretations?
Most critically, the authors only state that the hdi package in R was used, without indicating if they used it with the default setting (lasso) for the variable/parameter selection. At this stage, I do not know if the parameters were selected based on p-values, since the lasso has multiple solutions, which again would affect the preselected parameters! Moreover, centering and scaling was here done on the entire data set, which will certainly affect the findings should this be done differently. The same lack of robustness checks is present in step (3), where the pre-selection was repeated for step 2, simply because some of the selected models failed the MSE test (< 0.15). In step (4), it is advisable to determine the MSE from cross-validation, and to only use the BIC; I do not understand why the combination of MSE and BIC should be used for the selection. Finally, the combination of models from step (5) is nowhere detailed; are they only inspected for congruence of coefficients or are they summed to make predictions for the additive effects of two environmental stimuli? In the enrichment analysis, why were the annotations found across the three annotation resources not combined in a weighted fashion, so that GO terms more often encountered are given a higher weight? The simple combination employed may distort the findings, particularly for the under-representation of classes. Finally, it would have been interesting to see how the models performed in the case of the data from Nagano et al.; rather than building models on the independent data, the transferability of the models should have been assessed and commented on.
This is in fact the most challenging part and would shed light on bridging the greenhouse-field gap. At this stage, I also wonder why the profiles from different genes involved in a particular process were not used to derive the models. My conclusion is that the authors used the lasso (without ever having stated it and in combination with questionable strategies to remove some of the parameters) followed by classical regression techniques to arrive at the conclusion that correlated responses (whatever they may represent) result in similar models – this is an expected finding and it does not address the main aim of the study. Had an existing variable selection method been used as the only strategy, or had latent variables been used on their own with simple regression techniques, the authors would have had a streamlined and feasible way to assess the uniqueness, quality, and robustness of the findings, which now is almost impossible to objectively assess. Reviewer #3: Plessis et al. is a field genomics paper in rice that examines the relationship between environmental variables and gene expression across cultivars of rice across two seasons (one cultivar is represented in each season). The main goal of the paper is to develop a method to find environmental variables that are highly correlated with gene expression. The authors were careful in how the tissue for RNA-seq was collected so as to minimize confounding effects of time between samples and the circadian clock. Additionally, most of the code used to do the analysis was provided, which made detailed thought about the method easier. The authors even used previously published rice data to apply their method to.
In general, the paper would be of general interest to the readers of eLife. I have three major criticisms of the paper in its current form: 1) This is a large dataset with many axes of variation, with one of the main conclusions being "additive and interactive effects of distinct environmental conditions on gene expression are widespread". While simplicity is nice for interpretation, finding the best two parameters is an arbitrary cutoff. This needs to be justified statistically by either simulations or references. Furthermore, there is discussion of interactions when these simple models do not account for interactions between variables. It is unclear from the analysis and text how interactions are defined and thus interpreted. 2) The number of PAM clusters that were chosen is arbitrary and could influence the interpretation. There is no code provided for the PAM cluster analysis. Additionally, this is a concern because while the original data set's 240 samples were used in PAM clustering with 50 clusters, the revisited Nagano et al. 2012 data had only 52 RNA samples (that were averages across replicates) but 50 clusters was chosen for this data set as well. The risk here is over-fitting the Nagano data. In the same R package that was used to generate the clusters ("cluster"), there are functions (see clusGap) to calculate the gap statistic to determine the appropriate number of clusters given the dataset. 3) In addition to the technical concerns above, as written there is not enough of a comparison between this modeling approach and the one of Nagano et al. 2012 to show what this new approach has to add with respect to predicting gene expression patterns in the field. I think that the paper could benefit greatly from showing how the model selected from simple to complex would compare to using a full model including interactions with all the terms and gradually dropping terms out to compare models, like in the Nagano et al. 2012 paper.
For example, model selection on: lm(gene expression ~ genotype*season*timepoint*treatment). Building on this, I also think there is a missed opportunity here to do some mixed-effects modeling (see the lme4 R package) that could take into account the correlation of gene expression within and between all the treatment, genotype and time-of-year combinations. Thank you for resubmitting your work entitled "Multiple abiotic stimuli are integrated in the regulation of rice gene expression under field conditions" for further consideration at eLife. Your revised article has been favorably evaluated by Detlef Weigel (Senior editor), a Reviewing editor, and two reviewers (Reviewer #2 and Reviewer #3). The manuscript has been improved but there are a few remaining issues that need to be addressed before acceptance, as outlined below: The reviewers felt that the manuscript nicely shows that even though climate and local field environment have large effects on gene expression, their effects can be captured with relatively simple models. While there are not yet sufficient data to be predictive for climates or field environments that have not been evaluated yet, the finding suggests that a relatively limited amount of such data sets will enable predictive modeling. This should be emphasized in the Abstract, Introduction and Discussion. Both reviewers also identified a few technical details that need more clarification before final acceptance. All reviewers and editors for this manuscript felt that there still needs to be significantly more effort to reach the biological reader and convey the novel insights. You state at the end of the Abstract: "We show that new insight can be gained from studying the effects of co-occurring abiotic stimuli in complex dynamic environments." Please state more clearly what these are.
All efforts to clarify these insights for both a biological and quantitative readership will greatly improve the future impact of this manuscript on the field. More detailed requests for clarification are found below. Reviewer #2: The authors have carefully considered and addressed the comments raised by the reviewers with respect to the selection of the number of clusters, the number of piecewise models (per BIC only), and the averaging of profiles. However, the issue with the model simplification by pruning correlated parameters seems to differ between the response and the main text. More specifically, it is not clear how the decision in paragraph two, subheading “Modeling the effect of climatic factors on transcriptomic variation in different field environments”, was implemented, as this can be carried out in many ways. This statement does not correspond to the response provided in your letter, whereby the correlation is set to 0.98 and the parameters were averaged (this strategy of averaging provides a deterministic and reproducible outcome). In paragraph four, subheading “Modeling the effect of climatic factors on transcriptomic variation in different field environments”, it should be included that the BIC is calculated on the joint piecewise models; I suppose the number of parameters corresponds to the number of predictors summed over all models, or does it correspond to the number of unique predictors over all models?
This will have an effect on the final findings. The reciprocal analysis of the partial Nagano data (PND) appears to have pulled out the environmental effects predictive for the transcriptomic changes, and I find that it has added some ideas about the difficulty of the problem of model generalizability. I would suggest that the claim about "designing model selection approach" is toned down, as the authors themselves state in the response letter that "we do not claim to provide any notable advances in this paper for the computational community". While this may be an article aimed at the genomics-enabled biologist, it will be read by computational biologists interested in the large data resource provided. Finally, while the authors provided a pipeline based on well-established methods, the interested reader may want to know what the added value is with respect to now classical approaches, as one of the other reviewers already suggested. Rephrasing of some key sentences may be needed, to fully match what was stated in the main text and response letter: 1) The statement in the Abstract "we show that new insights can be gained" should indicate what these insights precisely are. 2) The impact statement should be clarified – one can determine a model for any set of variables; however, the pressing problem is to show that the model explains a major part of the variance and can be used for predictive purposes. Therefore, it also needs rephrasing. 3) The statement of the major findings – "Our method allowed for the detection of additive effects of several environmental factors and differences in gene expression patterns between fields, genotypes or seasons" – is not strong enough, since any data analysis method for differential analysis does essentially the same. Reviewer #3: 1) In re-reviewing this paper the authors have done a great deal to clarify the modeling approach and have provided many details that were requested.
I think that they have made great improvements in the technical aspects of this paper. However, now that the modeling details are clearer, the manuscript could be vastly improved for readability for a biological audience if the biological question was clearly outlined in the Introduction and followed through the Methods, Results and Discussion. In the Abstract: "We show that new insight can be gained from studying the effects of co-occurring abiotic stimuli in complex dynamic environments." Now that the technical aspects of the methods are clearer, I am struggling to see what new biological insight is gained from this paper. My main criticism of this paper as a whole in this new version is that it reads as a new way to do exploratory data analysis in a large-scale field transcriptomics experiment, but is not framed to ask a direct biological question with the data/analysis. In my last review I stated: "as written there is not enough of a comparison between this modeling approach and the one of Nagano et al. 2012 to show what this new approach has to add with respect to predicting gene expression patterns in the field." I followed this with some suggestions as ways to approach this using a mixed model approach that could be applied to genes or gene clusters of interest. The authors have done extra work to apply their method to the Nagano et al. 2012 paper, but I still do not see what new biological insight is gained from the cluster method over gene-wise predictions.
If the truly novel aspect of this dataset is the drought/non-drought comparison, then could the paper focus much more on this comparison? 2) Interactions implied in Introduction and Conclusion, but not tested in models: Introduction: “In addition to the dynamic nature of field conditions, the interaction between multiple stimuli is another major cause of discrepancies in plant responses/phenotypes between laboratory and field conditions.” Conclusion: "By working in natural, complex conditions in the field, we can examine interactions between the effects of factors that vary at distinct time-scales, like temperature and water availability." In the authors’ response: "We agree that the use of the term "interaction" in the context of linear models can be confusing. We have therefore removed any mention of interactions in the interpretation and discussion of our modeling results to make our meaning clearer." Although the authors disagreed with me about including models with interaction terms for the entire dataset, I still think it would be a good approach to make distinct comparisons between genes/gene clusters and how they respond to the drought treatment. For example, in the subsection “The agricultural field environment strongly affects transcriptional responses to climatic fluctuations”, the authors state: "This result suggests that, in our experiment, a group of abiotic stress response genes responded to a greater extent to high light/heat stress than drought stress, while another group of abiotic stress response genes was much more sensitive to the effect of drought." Why not subset the data based on this observation and test these comparisons directly with a follow-up model that includes an interaction term? 3) Do these gene expression clusters mean anything for the plant as far as other phenotypes are concerned?
There is a missed opportunity here to expand what is found in the clusters into a story about how the environment influences gene expression and how that gene expression might influence plant growth/development. The Discussion starts to get at this, but the message is lost in the details about what a few genes from each cluster mean. The clustering/model selection approach is a great way to reduce this complex dataset to fewer significant parameters but as this manuscript is written does not help in biological interpretation of the clusters. This is also where a clear biological goal of the modeling/method/experimental design is necessary and could improve this manuscript. Essential revisions:There were two fundamental concerns:1) The first was that it was felt that the new computational approaches should be tested against another independent data set. Specifically it is suggested that the authors create models with their data on the parameter shared with Nagano et al., since the claim is that they have a new pipeline for data analysis/model building. Then, they should test these models with the data from Nagano et al. to demonstrate the added value.2) Secondly, the writing could use further improvement to help convey to both the biologists and the computational scientists what the inherent novelty of this study is to both groups. Alternatively, it might be suggested to focus solely on one group and provide them with insights that pertain to that group.We appreciate the favorable evaluation of our work and the interest for our contribution to the new field of ecological systems biology. We found that we were able to address reviewer and editor comments, either through modification of the manuscript or through additional analysis. 
Although a detailed account of our revisions to the work and the manuscript follow, we briefly summarize the main points of revisions here:1) We have used the comments and suggestions of the reviewers to modify our computational method and make it more consistent across datasets: we have set up a criterion to choose the number of clusters, revised our centering and scaling steps and simplified the model selection steps.2) We have modified the comparison of our results with the ones from the Nagano et al. paper.3) We have extensively edited the manuscript to address the concerns and required improvements of the reviewers. Specifically, we believe that our revised manuscript will now be clearer to biologists, our intended audience.Reviewer #1:My major concern was that it was difficult to parse out what was or was not truly novel in the results and analysis. I understand that these articles are splitting a divide between the computational and the biological communities each with their own and often contrasting expertise. As such, I would suggest an editorial reworking to make the central messages to be as simple and direct as possible.The novelty in our work is to be found in the results that can be obtained from an approach combining focused experimental field measurements and a computational analysis that together were designed to specifically detect environmental effects on gene expression in fluctuating conditions. The computational analysis itself relies on well-established component statistical tools but contains a non-trivial amount of novelty via the combination of these modules into a working pipeline that identifies and models the environmental response of co-regulated groups. That said, we do not claim to provide any notable advances in this paper for the computational community, but rather aim the manuscript at genomics-enabled biologists with no extended statistical knowledge. 
As stated in the Introduction, environment-targeted transcriptomics studies carried out over several weeks or including more than two environmental perturbations are extremely rare, and in this regard, our results pertaining to the combined effect of drought, temperature, solar radiation and wind on gene expression in the course of a month are truly original. Compared to the other main dataset of rice transcriptomics in the field, ours expands the range of environmental conditions provided, with a rainfed field in addition to the irrigated field and two contrasting seasons, the dry season comprising a drought period. In the new version of our manuscript, we have further highlighted these various aspects of the novelty of our work. Reviewer #2: For instance, in step (1), no validation for the selection of the number of clusters is provided, nor is it assessed how this may affect the results; the authors could use various cluster quality indices to back up their claims; at this point, it is also not clear which distance measure was used in the PAM clustering approach. It is also advisable that the same centering and scaling strategy is used from the beginning of the workflow, so that consistent results are expected. Reviewer #3: 2) The number of PAM clusters that were chosen is arbitrary and could influence the interpretation. There is no code provided for the PAM cluster analysis. Additionally, this is a concern because while the original data set's 240 samples were used in PAM clustering with 50 clusters, the revisited Nagano et al. 2012 data had only 52 RNA samples (that were averages across replicates) but 50 clusters was chosen for this data set as well. The risk here is over-fitting the Nagano data.
In the same R package that was used to generate the clusters ("cluster"), there are functions (see clusGap) to calculate the gap statistic to determine the appropriate number of clusters given the dataset. Due to the similarity of these two comments we address them in aggregate here. It is an oversight on our part that we did not mention the distance used for the clustering (1 – Pearson correlation coefficient) and it has now been corrected. We also now provide the code for the clustering step in our Source Code file. Following the suggestion of reviewer 2, we have changed our scaling and centering strategy to make it more consistent. We were only interested in differences between genotypes related to their environmental responses and not in mean differences in expression level. Differences in mean expression level between seasons could be caused by differences in climatic factors as well as other factors that we did not include in our analysis, for example day length or non-water-related soil characteristics. To avoid attributing mean differences between seasons to the wrong effect, we preferred not to take into account these differences in mean expression level. For these reasons, each quarter of the dataset constituting one genotype in one season was centered, which removed differences in mean level of expression between genotypes and between seasons. This centering was performed once before the data was used in either the dry-season-only or two-season analysis. Scaling was done once, for each gene over the whole dataset used in each analysis, prior to clustering. The exact same treatment is applied to the environmental parameters. We maintain that this is the correct pre-treatment of the data. Modeling cluster means was a way to focus our analysis on major transcriptional variations that are more reproducible than the expression profiles of individual genes.
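The pre-treatment just described (centering each genotype-by-season quarter of the expression matrix, then scaling each gene once over all samples) can be sketched as follows; shapes and values are illustrative, and the helper name is an invention for this sketch.

```python
import numpy as np

def center_and_scale(expr, quarters):
    """expr: genes x samples expression matrix; quarters: one array of column
    indices per genotype-season combination. Centering each quarter removes
    mean expression differences between genotypes and between seasons;
    scaling is then applied once per gene over all samples."""
    out = expr.astype(float).copy()
    for cols in quarters:
        out[:, cols] -= out[:, cols].mean(axis=1, keepdims=True)
    out /= out.std(axis=1, keepdims=True)
    return out

# Toy example: 3 genes x 8 samples, two genotype-season quarters of 4 samples,
# with a deliberate 5-unit mean offset between the quarters.
rng = np.random.default_rng(4)
expr = rng.normal(size=(3, 8)) + np.array([[0.0] * 4 + [5.0] * 4] * 3)
scaled = center_and_scale(expr, [np.arange(0, 4), np.arange(4, 8)])
```

After this transformation, each quarter has zero mean per gene (the 5-unit offset is gone) while each gene keeps unit variance over the whole dataset, so clustering and modeling act only on the within-quarter fluctuations.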
Furthermore, our results show that clusters with a lower number of genes are more likely to have less reproducible expression profiles. In step (2), the set of parameters used was first pruned, by arbitrarily removing highly correlated parameters; there are many ways in which this could be done – was the parameter with the largest number of high correlations removed first, or was another strategy used? How would this affect the final results obtained and the subsequent biological interpretations? We changed our pruning strategy to avoid arbitrary decisions. We set our threshold of correlation to 0.98, the highest genotype correlation of all clusters, an indicator of the noise in the expression data we are modeling. When several parameters were correlated over this threshold, we averaged them to create a new parameter replacing them all. Most critically, the authors only state that the hdi package in R was used, without indicating if they used it with the default setting (lasso) for the variable/parameter selection. At this stage, I do not know if the parameters were selected based on p-values, since the lasso has multiple solutions, which again would affect the preselected parameters! Moreover, centering and scaling was here done on the entire data set, which will certainly affect the findings should this be done differently. This information was not available in the Materials and methods section but could be found in the R code provided. This has been corrected and the Methods section expanded accordingly. We did use the lasso, with the stability method of the hdi function. Stability selection, as implemented in the hdi package, constructs many lasso paths, in our case 300 solutions, each based on a random 85% of the data. It then looks at how frequently each parameter appears among the first q parameters in the paths and selects only parameters with a frequency above a certain threshold.
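The stability-selection scheme just described can be sketched in Python as follows. This is not the hdi implementation: the lasso solver here is a minimal proximal-gradient (ISTA) stand-in, selection is based on the final coefficients of each subsample fit rather than on full lasso paths, and all names and data are illustrative.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """Minimal lasso solver for 0.5*||y - Xb||^2/n + lam*||b||_1
    via proximal gradient descent (ISTA) with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        b = b - step * (X.T @ (X @ b - y) / n)  # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return b

def stability_select(X, y, lam=0.1, B=300, fraction=0.85, threshold=0.7, seed=0):
    """Fit the lasso on B random subsamples of the rows and keep the
    parameters whose selection frequency reaches the threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, int(fraction * n), replace=False)
        counts += lasso_ista(X[idx], y[idx], lam) != 0
    return np.flatnonzero(counts / B >= threshold)

# Toy example: two of five candidate parameters truly drive the response.
rng = np.random.default_rng(5)
X = rng.normal(size=(60, 5))
y = 1.0 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.1, size=60)
selected = stability_select(X, y, B=100)
```

The appeal of the subsampling step is that a parameter picked up only on a few lucky subsamples never reaches the frequency threshold, which is what makes the pre-selection robust to the lasso having multiple near-equivalent solutions.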
Variable q and the frequency threshold can be set using an upper bound for the number of false positives, in our case 2. At this step, we now only center the field subsets of the data to avoid getting models with an intercept. The same lack of robustness checks is present in step (3), where the pre-selection was repeated for step 2, simply because some of the selected models failed the MSE test (< 0.15). This is a misunderstanding of our method: we have not been clear enough about this particular step in the Materials and methods section and this has now been corrected. We systematically repeat the pre-selection, independently of any test, and choose between the three models obtained with the three different pre-selections using the BIC. In step (4) it is advisable to determine the MSE from cross-validation, and to only use the BIC. I do not understand why the combination of MSE and BIC should be used for the selection. This is a good comment, and we agree that this could be improved. We have thus changed our model selection approach accordingly. We now use only the BIC and calculate the mean squared error for each linear model using a five-times five-fold cross-validation. Each selection step (with the exception of the hdi pre-selection of parameters) now relies on this strategy. Finally, the combination of models from step (5) is nowhere detailed; are they only inspected for congruence of coefficients or are they summed to make predictions for the additive effects of two environmental stimuli? The combination of models is described in the revised Materials and methods. In the enrichment analysis, why were the annotations found across the three annotation resources not combined in a weighted fashion, so that GO terms more often encountered are given a higher weight?
The simple combination employed may distort the findings, particularly for the under-representation of classes. The three resources were combined in order to increase coverage of the rice genome, not to assess the likely correctness of a given annotation. The set of rice genes covered by resource A is not the same as resource B, but neither is it disjoint. We want to achieve maximum coverage of rice genes, and thus combined the resources, but do not believe that the occurrence of the same GO annotation in multiple resources indicates a greater likelihood of that annotation being correct. Rather, we consider it highly likely that the vast majority of annotations in all three resources are correct, and that double occurrence merely indicates that a particular gene was covered by two resources. Reviewer #2: Finally, it would have been interesting to see how the models performed in the case of the data from Nagano et al.; rather than building models on the independent data, the transferability of the models should have been assessed and commented on. This is in fact the most challenging part and would shed light on bridging the greenhouse-field gap. Reviewer #3: 3) In addition to the technical concerns above, as written there is not enough of a comparison between this modeling approach and the one of Nagano et al. 2012 to show what this new approach has to add with respect to predicting gene expression patterns in the field. I think that the paper could benefit greatly from showing how the model selected from simple to complex would compare to using a full model including interactions with all the terms and gradually dropping terms out to compare models, like in the Nagano et al. 2012 paper. For example, model selection on: lm(gene expression ~ genotype*season*timepoint*treatment).
Building on this, I also think there is a missed opportunity here to do some mixed effects modeling (see lme4 R package) that could take into account the correlation of gene expression within and between all the treatment, genotype and time of year combinations.

Due to the similarity of these two comments, we address them in aggregate here. In comparing to the Nagano et al. dataset we were attempting to do something that very few genomics studies attempt: to compare to previous genomics studies that use different technologies. We find that nearly every genomics paper out there simply uses the latest-greatest technology and makes little or no effort to combine datasets, replicate past results or comment on the reproducibility of even the main findings. Why is this? Because of the extreme difficulty of comparing the results from even slightly different technologies and different experimental designs. In this section of the paper we are trying to do this in spite of the difficulty of comparing apples and oranges. So, part of the motivation for including this comparison is our belief that the lack of comparisons to past studies is a critical void in the genomics community. All this said, we are still comparing apples and oranges, with all the associated caveats and difficulties.

We have completely revised the way we use the data from the Nagano et al. paper to address this comment. Our new aim was to test the transferability of the results of our experiment to an independent dataset and then to compare our approach with the one developed by Nagano et al.
in regard to the identification of environmental effects on gene expression, and demonstrate a possible \u201cadded value\u201d, even though the differences between the methods make this task difficult.

For the transferability aspect, our first approach was rendered awkward by the fact that we were comparing two different clustering distributions, as well as comparing models that included rainfed field data and often contained several equations with models generated from only irrigated field data and comprising a unique equation. These issues made model-to-model comparison cumbersome and unclear. Therefore, we now prefer using a new analysis of our data that includes only irrigated field gene expression, and chose to only keep clusters with a single equation for both seasons of our experiment, which can be considered as a first transferability test (from one season to the other). We then used these models determined from our data to test which of the developmental/environmental effects could also be detected on a subset of the Nagano data comparable to our own data in terms of time of day and plant age, organized in the same clusters as in our data. We believe that this new analysis gives interesting insights into the effect of the climatic context on the reproducibility of environmental effects on gene expression.

We understand that it is important in assessing the impact of our work to compare it with the most similar study conducted so far. Even though Nagano et al. use a computational approach to analyze gene expression patterns in field-grown rice like we do, their experimental design and aims differ from ours so distinctly that it is very difficult to compare their results with ours. First, our purpose is focused on identifying relationships between gene expression and environmental conditions, while Nagano et al. are investigating all possible drivers of gene expression in order to be able to predict gene expression.
Because of these distinct aims, their sampling spans all of daytime and nighttime and the whole developmental life of the plant, whereas ours is limited to one time of day and spans only one month of the vegetative stage. A consequence of these differences in experimental design is that we cannot apply our modeling approach to their entire dataset but need to reduce it to include only one developmental stage and one part of the day, and to remove circadian changes. Furthermore, Nagano et al. have adopted a very different strategy to model environmental effects: their environmental term can only consider one climatic factor per model and includes a threshold, the possibility of a rectangular or sinusoidal gate, and of a dose-independent response. One other major difference is that they infer models for individual genes while we do it for co-expressed groups of genes. All these differences make it impossible to conduct a valid systematic comparison of the Nagano et al. models with models that we have generated from a subset of their data. To be able to do any comparison, we could only consider extreme cases of each approach that would make the models more or less similar. We had to keep in mind, however, that the Nagano et al. models were inferred on a much larger dataset, so that any result from the comparison must be considered with caution.

Regarding mixed effects modeling, we agree with its potential; however, we would first have to show the benefits of such an approach to justify the added complexity. That in itself is no easy task. And, in the context of our analysis, do we not minimize random effects by centering all data?

Reviewer #2:

At this stage, I also wonder why the profiles from different genes involved in a particular process were not used to derive the models.

Although this is an interesting suggestion, it makes the implicit assumption that all the genes from a specific process are co-regulated, which our results do not support.
This would also considerably limit the analysis compared to our whole genome approach, as only a small fraction of rice genes have been assigned with certainty to specific processes.

Reviewer #3:

1) This is a large dataset with many axes of variation with one of the main conclusions being \"additive and interactive effects of distinct environmental conditions on gene expression are widespread\". While simplicity is nice for interpretation, finding the best two parameters is an arbitrary cutoff. This needs to be justified statistically by either simulations or references.

We agree that our original analysis might have been too stringent in its attempt to avoid over-fitting. We have thus run our model selection on the new two-season dataset with 53 clusters with no constraint on the number of parameters per linear equation. We found that among the 133 model equations selected for these 53 clusters, only ten contained four parameters and one had five parameters, showing that equations with three or fewer parameters are generally enough to model the cluster means. We compared the results for the 11 clusters whose models included equations with more than three parameters with a run of model selection that constrained each equation to no more than three parameters, and found that the models constrained for the number of parameters had BICs comparable to the ones with no constraint in five cases and resulted in a lower BIC in two cases. The reason for this occasional inability of our method to prevent over-fitting resides in its use of successive steps of model selection, instead of comparing all possible models at once, which is intractable with the piece-wise modeling approach we are using.
In three cases, allowing for more parameters per equation did improve the model significantly, but we estimated that it was preferable to control for the over-fitting occurring in the other cases and therefore chose to limit the number of parameters per equation to three in the final version of the model selection pipeline. To put it simply, we choose to allow a few false negatives if it will prevent false positives.

Furthermore, there is discussion of interactions when these simple models do not account for interactions between variables. It is unclear from the analysis and text how interactions are defined and thus interpreted.

We agree that the use of the term \u201cinteraction\u201d in the context of linear models can be confusing. We have therefore removed any mention of interactions in the interpretation and discussion of our modeling results to make our meaning clearer.

The reviewers felt that the manuscript nicely shows that even though climate and local field environment have large effects on gene expression, their effects can be captured with relatively simple models. While there are not yet sufficient data to be predictive for climates or field environments that have not been evaluated yet, the finding suggests that a relatively limited amount of such data sets will enable predictive modeling. This should be emphasized in the Abstract, Introduction and Discussion.

We have revised the Abstract, Introduction and Discussion to highlight how our study can contribute to the undertaking of predicting transcriptional patterns.

Reviewer #2:

The authors have carefully considered and addressed the comments raised by the reviewers with respect to the selection of the number of clusters, the number of piecewise models (per BIC only), and the averaging of profiles. However, the issue with the model simplification by pruning correlated parameters seems to differ between the response and the main text.
More specifically, it is not clear how the decision in paragraph two, subheading \u201cModeling the effect of climatic factors on transcriptomic variation in different field environments\u201d was implemented, as this can be carried out in many ways. This statement does not correspond to the response provided in your letter, whereby the correlation is set to 0.98 and the parameters were averaged (this strategy of averaging provides a deterministic and reproducible outcome).

This concern arises from a confusion between two different issues. First, the pruning of the correlated parameters was done as described in the first response letter and described in the Materials and methods section. Second, after increasing the number of parameters that could be included in each equation, we had to deal with the problem that some equations comprised parameters that were correlated with each other, which increased the risk of over-fitting without being very informative. This was remedied by allowing only one of a pair of correlated parameters to appear in an equation. The choice was also deterministic, as the chosen parameter was the one that resulted in the best model.

In paragraph four, subheading \u201cModeling the effect of climatic factors on transcriptomic variation in different field environments\u201d, it should be included that the BIC is calculated on the joint piecewise models; I suppose the number of parameters corresponds to the number of predictors summed over all models, or does it correspond to the number of unique predictors over all models? This will have an effect on the final findings.

We have made the suggested modification. The BIC is calculated using the sum of the number of parameters over all models, because if the same parameter appeared in several equations, it would be with a different coefficient, contributing to increasing model complexity as penalized by the BIC.
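The accounting described above can be made concrete with a small sketch (an illustration of the stated rule, not the authors' code): the BIC of a joint piecewise model pools the residuals of all segment equations and penalizes the summed parameter count, since the same predictor appearing in two equations carries two distinct coefficients.

```python
import numpy as np

def joint_bic(residuals_per_equation, params_per_equation):
    """BIC for a joint piecewise model.

    residuals_per_equation: list of residual arrays, one per equation.
    params_per_equation: list of coefficient counts, one per equation.
    Parameter counts are summed, not de-duplicated: a predictor that
    re-appears in another equation adds its own coefficient.
    """
    res = np.concatenate(residuals_per_equation)
    n = res.size
    k = sum(params_per_equation)
    rss = float(np.sum(res ** 2))
    return n * np.log(rss / n) + k * np.log(n)
```

For example, a cluster modeled with a 2-parameter equation in one field and a 3-parameter equation in the other is penalized with k = 5.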
This has been made clearer in the Materials and methods section.

I would suggest that the claim about \"designing model selection approach\" is toned down, as the authors themselves state in the response letter that \"we do not claim to provide any notable advances in this paper for the computational community\". While this may be an article aimed at the genomics-enabled biologist, it will be read by computational biologists interested in the large data resource provided.

We have rephrased this sentence to \u201cWe used a model selection approach (\u2026)\u201d.

Finally, while the authors provided a pipeline based on well-established methods, the interested reader may want to know what is the added value with respect to now classical approaches, as one of the other reviewers already suggested.

The major advantage of our method is its ability to examine a large number of parameters and select only the sparse models. This allows an extensive examination of environmental and developmental parameters. More traditional variance methods (such as mixed model methods) are difficult to implement in this context because of the need for these approaches to put all of the parameters simultaneously in the models, which results in over-fitting.

Rephrasing of some key sentences may be needed, to fully match what was stated in the main text and response letter: 1) The statement in the Abstract \"we show that new insights can be gained\" should indicate what these insights precisely are.

We have reworked the Abstract to more specifically refer to multiple biological insights resulting from our study.

2) The impact statement should be clarified \u2013 one can determine a model for any set of variables; however, the pressing problem is to show that the model explains a major part of the variance and can be used for predictive purposes.
Therefore, it also needs rephrasing.

The impact statement has been changed accordingly.

Reviewer #3:

1) In re-reviewing this paper the authors have done a great deal to clarify the modeling approach and have provided many details that were requested. I think that they have made great improvements in the technical aspects of this paper. However, now that the modeling details are clearer, the manuscript could be vastly improved for readability for a biological audience if the biological question was clearly outlined in the Introduction and followed through the Methods, Results and Discussion. In the Abstract: \"We show that new insight can be gained from studying the effects of co-occurring abiotic stimuli in complex dynamic environments.\" Now that the technical aspects of the methods are clearer, I am struggling to see what new biological insight is gained from this paper. My main criticism of this paper as a whole in this new version is that it reads as a new way to do exploratory data analysis in a large-scale field transcriptomics experiment, but is not framed to ask a direct biological question with the data/analysis.

We have extensively revised the Abstract, Introduction and Discussion, and some of the writing in the Results, to frame our approach around two biological questions:

1) How are multiple fluctuating environmental signals integrated in transcriptional responses in the field?

2) How does context impact the climatic responses of gene expression?

For each of these questions a paragraph of the Introduction describes the issues and current knowledge, and a section of the Discussion gives the answers that our results provide. We have also highlighted in the Abstract additional insights not directly related to these questions.
Finally, in our Conclusion section, we describe the advance our study provides in the efforts to develop models of gene expression, and the implications of such work on understanding species response to climate change and on crop design.

In my last review I stated: \"as written there is not enough of a comparison between this modeling approach and the one of Nagano et al. 2012 to show what this new approach has to add with respect to predicting gene expression patterns in the field.\" I followed this with some suggestions as ways to approach this using a mixed model approach that could be applied to genes or gene clusters of interest. The authors have done extra work to apply their method to the Nagano et al. 2012 paper, but I still do not see what new biological insight is gained from the cluster method over gene-wise predictions. If the truly novel aspect of this dataset is the drought/non-drought comparison then could the paper focus much more on this comparison?

The rationale for the clustering step is more methodological than conceptual. Our results show that reproducibility is much higher for cluster means than individual genes. This means that the environmental expression response of most genes in our analysis is confounded by stochastic effects and responses to endogenous signals, which we cannot model. Fitting models to these expression profiles would therefore greatly increase the risk of fitting equations that have no causal relationship with transcript accumulation. We found it more informative to cluster all genes and work on reproducible cluster means than to model only a few genes with reproducible patterns of expression. As explained before, the novel aspect of this dataset compared to Nagano et al.
is that we are assessing the integration of multiple environmental signals (including drought) and the effect of different types of environmental context.

2) Interactions implied in Introduction and Conclusion, but not tested in models:

Introduction: \u201cIn addition to the dynamic nature of field conditions, the interaction between multiple stimuli is another major cause of discrepancies in plant responses/phenotypes between laboratory and field conditions.\u201d Conclusion: \"By working in natural, complex conditions in the field, we can examine interactions between the effects of factors that vary at distinct time-scales, like temperature and water availability.\" In the Authors\u2019 response: \"We agree that the use of the term \"interaction\" in the context of linear models can be confusing. We have therefore removed any mention of interactions in the interpretation and discussion of our modeling results to make our meaning clearer.\" Although the authors disagreed with me about including models with interaction terms for the entire dataset, I still think it would be a good approach to make distinct comparisons between genes/gene clusters and how they respond to the drought treatment.
For example, in the subsection \u201cThe agricultural field environment strongly affects transcriptional responses to climatic fluctuations\u201d, the authors state: \"This result suggests that, in our experiment, a group of abiotic stress response genes responded to a greater extent to high light/heat stress than drought stress, while another group of abiotic stress response genes was much more sensitive to the effect of drought.\" Why not subset the data based on this observation and test these comparisons directly with a follow-up model that includes an interaction term?

After the first reviews, we understood that for readers with a modeling perspective, \u201cinteraction\u201d refers to a precise term in an equation, which is why we have removed it from the interpretation and discussion of all of our results. However, we thought it would be acceptable to keep the word \u201cinteractions\u201d in the Introduction and Discussion in reference to previous work about the effect of combined stresses where this word was used. In these studies, interaction did not refer to an interaction term in a model but to the fact that the effect of co-occurring stresses cannot be inferred from single stress treatments. As there is still confusion in this regard, we have rephrased these findings to make clearer what these studies meant, without using the word interaction.

Because of the tools we use for model selection, to integrate interactions into our modeling approach we would have to add interactive parameters for each pair of environmental and developmental parameters, which would considerably inflate our number of parameters. This would certainly lead to over-fitting our expression data, which explains why we chose not to include interaction terms in our equations.
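The combinatorial inflation mentioned above is easy to quantify: with p main-effect parameters, adding all pairwise products contributes p(p-1)/2 extra candidate terms. A small illustration (the count of 15 parameters is hypothetical, not taken from the study):

```python
from itertools import combinations

def n_pairwise_interactions(p):
    """Number of distinct pairwise interaction terms among p parameters."""
    return sum(1 for _ in combinations(range(p), 2))  # equals p*(p-1)//2

# With a hypothetical 15 environmental/developmental parameters, the
# candidate terms grow from 15 main effects to 15 + 105 = 120.
print(n_pairwise_interactions(15))  # prints 105
```

Each extra candidate term is one more coefficient the selection procedure can fit to noise, which is the over-fitting risk the response letter refers to.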
There are, however, implicit interactions in the models where different subsets of the data have different models: a cluster modeled with different equations in the irrigated and rainfed fields indicates the existence of an interaction between climatic effects and the field environment.

We agree that the example from the subsection \u201cThe agricultural field environment strongly affects transcriptional responses to climatic fluctuations\u201d cited here was confusing and hinted at different types of interactions without making them explicit. This part of the Discussion has been removed, as it did not fit within the biological questions our article is now structured around. A follow-up model on this matter is therefore no longer relevant.

3) Do these gene expression clusters mean anything for the plant as far as other phenotypes are concerned? There is a missed opportunity here to expand what is found in the clusters into a story about how the environment influences gene expression and how that gene expression might influence plant growth/development. The Discussion starts to get at this, but the message is lost in the details about what a few genes from each cluster mean. The clustering/model selection approach is a great way to reduce this complex dataset to fewer significant parameters, but as this manuscript is written it does not help in biological interpretation of the clusters. This is also where a clear biological goal of the modeling/method/experimental design is necessary and could improve this manuscript.

Even though we did measure some developmental phenotypes during our experiments, it has not been possible to relate them to our transcriptomics results.
While we did not measure significant differences in transcriptional patterns between genotypes, the two landraces showed differences in morphology and phenology in the same range as the between-field differences, and averaging the genotypes like we did for expression would confound the field and season differences. With a more appropriate experimental design and more detailed phenotypic data, relating gene expression with specific phenotypes in natural conditions would be of great interest, but it is outside the scope and possibilities of our study.

Nevertheless, we do agree that the functional analysis of our clusters should have been more thoroughly exploited. Therefore, we have added a section to our Discussion where we examine some biological implications of our GO enrichment results."}
+{"text": "L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes, maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through simulation studies comparing it with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information.

Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes whose relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction, and that can emulate the dynamics of ODE-based models while incorporating prior knowledge, is urgently required.
To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. To accomplish this, many mathematical methods have been developed for the analysis of high-throughput biological data, e.g., time-course microarray data e.g., DNA-protein interactions and the pharmacogenomics of chemical compounds. These contributions have allowed the knowledge of GRNs to accumulate.

Transcriptional regulation, which is controlled by several factors, plays essential roles in sustaining complex biological systems in cells. Thus, identifying the structure and dynamics of such regulation can facilitate recognition of and control over systems for many practical purposes, e.g., the Michaelis-Menten equation For elucidation of GRN dynamics, time-course observational data have generally been used. Currently, one strategy to elucidate transcriptional regulation using observational data is to apply an ordinary differential equation (ODE)-based approach, which can represent the dynamic behavior of biomolecular reactions based on biologically reliable models, e.g., Bayesian networks e.g., protein-protein interaction networks (PINs), literature-recorded pathways and transcription factor information In contrast, a statistical model-based approach using highly abstracted models, e.g., literature-recorded pathways and intracellular kinetics/dynamics of chemical compounds, and can deal with even non-equally spaced time-course observational data. A regulatory structure is inferred by maximization of the L1 regularized likelihood.
To this end, we developed a new algorithm to obtain active sets of parameters and estimate a maximizer of the L1 regularized likelihood using the EM algorithm.We propose a novel method for inference of GRNs based on a newly developed model that uses a state space representation of a vector auto-regressive model (VAR-SSM) To demonstrate its effectiveness, we compared this method to a state space model (SSM) nth and a degradation process, described by a differential equation e.g., N-dimensional vector including regulatory effects on the nth gene by other genes, tth time point, and In inferring the regulatory structure of GRNs consisting of several tens of genes, hill-function based differential equations, i.e., N-dimensional hidden state variable, N-dimensional vector including synthesis rates, N genes at the tth time point and N\u00d7N diagonal matrices. The initial state vector i.e., In constructing gene regulatory models, we make an assumption that observational data are measured with observational noise. Under this assumption, to separately handle a system model i.e., and biolnth row and kth column element is represented byContrary to the derivation of i.e., nth row element of e.g., corticosteroids in corticosteroid-stimulated GRNs, we should consider the concentration of such biomolecules. For these cases, we remodel M-dimensional vector containing the concentration of the biomolecules at the tth time point, M-dimensional vector representing their regulatory effects on the nth gene. We consider the case that the concentration is known or can be simulated. In the results section, for an application example, we deal with corticosteroid drug pathways that have been well studied previously et al.When simulating the dynamic behavior of GRNs including biomolecules that cannot be represented by i.e., linear and nonlinear models. 
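The method above maximizes an L1 regularized likelihood to obtain a sparse regulatory structure. As a generic illustration of how L1 regularization zeroes out weak regulators, here is a standard lasso coordinate-descent sketch in Python (a textbook procedure, not the paper's EM-based algorithm for the state space model):

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator, the closed-form L1 proximal step."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta
```

Coefficients whose marginal correlation with the residual falls below the penalty are set exactly to zero, which is the mechanism by which each target gene ends up with only a few active regulators.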
In using linear state space models, posterior probability densities of the hidden state can be obtained as Gaussian distributions and the optimal mean and covariance matrices can be analytically calculated by the Kalman filter algorithm e.g., extended Kalman filter i.e., genes are regulated by only a few specific regulators. Imposing such a sparse constraint to regression approaches is a general problem, but for state space models to simultaneously estimate optimal hidden state and parameter values, it is not a trivial problem Recently, many types of state space models have been proposed and applied in the context of systems biology F in A. The prediction, filtering, and smoothing of the Kalman filter are calculated by the following standard recursions (reconstructed here in generic notation: x_t is the hidden state, y_t the observation, F the system matrix, H the observation matrix, Q and R the system and observation noise covariances; x(t|s) and V(t|s) denote the mean and covariance of x_t given y_1, ..., y_s):

Prediction: x(t|t-1) = F x(t-1|t-1), V(t|t-1) = F V(t-1|t-1) F' + Q.

Filtering: K_t = V(t|t-1) H' (H V(t|t-1) H' + R)^{-1}, x(t|t) = x(t|t-1) + K_t (y_t - H x(t|t-1)), V(t|t) = (I - K_t H) V(t|t-1).

Smoothing: J_t = V(t|t) F' V(t+1|t)^{-1}, x(t|T) = x(t|t) + J_t (x(t+1|T) - x(t+1|t)), V(t|T) = V(t|t) + J_t (V(t+1|T) - V(t+1|t)) J_t'.

i.e., L1 regularization to select effective sets of elements for N-dimensional Gaussian distributions L1 regularization. The L1 regularized log-likelihood is given by L1 regularization term for the nth row. In the EM algorithm, the conditional expectation of the joint log-likelihood of the complete data set is computed given the parameter estimates from the ith (previous) iteration. In biological systems, most genes are regulated by a few specific genes, L1 regularization can be found in The detailed solution for estimating parameter values using the EM algorithm for VAR-SSM with i.e., Because of the combination of the regularization terms and a state space representation, updating an element of

-Initial Settings
i.e., 1. Set e.g.,
2. Set the maximum number of iterations to be i.e., the number of active parameters
3. Set
-Main Routine
4. For 0. Through the following steps, fixing
(a) Set
(b) Calculate conditional expectations using the Kalman filter.
(c) Update
(d) Calculate the BIC score and decrease
(e) Set
(f) Consider the set of all subsets of
(g) Set i becomes
5.
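A minimal numeric sketch of the prediction and filtering steps for a generic linear-Gaussian state space model (the matrices F, H, Q, R below are illustrative placeholders, not the VAR-SSM of the paper):

```python
import numpy as np

def kalman_filter(ys, F, H, Q, R, x0, V0):
    """Run Kalman prediction/filtering over a sequence of observations ys.

    Returns the filtered means x(t|t) and covariances V(t|t)."""
    x, V = x0, V0
    means, covs = [], []
    I = np.eye(len(x0))
    for y in ys:
        # Prediction step
        x_pred = F @ x
        V_pred = F @ V @ F.T + Q
        # Filtering (measurement update) step
        S = H @ V_pred @ H.T + R
        K = V_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (y - H @ x_pred)
        V = (I - K @ H) @ V_pred
        means.append(x)
        covs.append(V)
    return np.array(means), np.array(covs)
```

In the EM setting described in the text, these recursions (plus the smoothing pass) supply the conditional expectations of the hidden state needed in step (b) of the algorithm.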
Set i.e., there can exist better ones having lower BIC scores for the selected A conceptual view and a pseudo code of the algorithm are shown in e.g., as recorded in the literature, we derive the weighted regularization nth row, we define the weight vectors To weight parameters of known regulations, In practice, the purpose of the weight is to select known regulation in the instance where multiple candidates are highly correlated with the same gene. Thus, when the correlation of a known regulation is still a low value, the regulation should not be selected as an active regulation. For example, weights for literature-recorded pathways and regulations by TFs are set as i.e., a state space model (SSM) tth row vector consists of et al. (http://sunflower.kuicr.kyoto-u.ac.jp). To show the effectiveness of the proposed method, we compared it with other GRN inference methods, e.g., Yao et al. (http://www.cellillustrator.com/home). The details of the artificial simulation models are as follows. For the comparison, we first generated two time-courses from (i) linear difference equations as

-Dataset(i)
The number of genes is 18.
Each gene undergoes synthesis and degradation processes, and genes are mutually regulated as shown in i.e.,
A drug is added at
The expression data is observed at
The number of replicated observations with different observational noise for each time point is three.
The simulated expression is updated according to the linear difference equations represented by
The observational data and the values of the parameters are available at

-Dataset(ii)
1 to 5 of dataset (i) are also satisfied in dataset (ii).
6. The simulated expression is updated according to the differential equations. Regulatory relationships are the same as in (i) but the regulatory effects are represented by hill functions, such as
7.
The observational data and the csml (cell system markup language) file are available at A true positive (TP), false positive (FP), false negative (FN), precision rate (PR = TP/(TP+FP)) and recall rate (RR = TP/(TP+FN)) were used to evaluate the inferred networks. For dataset (i), although the PR and RR values peak at L1 regularization using the LARS-LASSO algorithm i.e., GeneNet, GLASSO, ARACNE, CLR and MRNET, we considered the true network (directed network) as an undirected network and then measured the performance by comparing this undirected network to the inferred networks. Additionally, for GeneNet, G1DBN and mutual information-based methods, which are required to set a threshold value to determine the existence of regulation, we checked the results of setting the threshold q-value (GeneNet) and posterior probability (G1DBN) to F-et al. Next, we compared the results of (a) the proposed VAR-SSM with the lowest BIC and (b) the proposed VAR-SSM with the lowest SPE to (c) SSM Consequently, the proposed method achieved a low false positive rate while maintaining a high true positive rate. These results may be acceptable because the system model of the proposed method is the same as or similar to the artificial simulation models. Thus, it is conceivable that the proposed method is highly capable of inferring the regulatory structure of the assumed hill-function based model. Furthermore, we demonstrated the effectiveness of the weighted regularization for known prior information using dataset (ii). To evaluate the performance, we adapted a simulation time interval of yeast 1) of a part of the DREAM4 challenge.
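Counting TP/FP/FN against a known network reduces to set operations on edges. A small sketch (with hypothetical edge lists, treating the network as undirected as described above):

```python
def edge_metrics(true_edges, inferred_edges):
    """Precision (PR = TP/(TP+FP)) and recall (RR = TP/(TP+FN)) over
    undirected edges; each edge is normalized to a frozenset pair so
    that (1, 2) and (2, 1) count as the same edge."""
    truth = {frozenset(e) for e in true_edges}
    pred = {frozenset(e) for e in inferred_edges}
    tp = len(truth & pred)
    fp = len(pred - truth)
    fn = len(truth - pred)
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical 4-gene example: 3 true edges, 3 inferred edges (2 correct).
pr, rr = edge_metrics([(1, 2), (2, 3), (3, 4)], [(2, 1), (2, 3), (1, 4)])
```

Here both PR and RR come out to 2/3: two of three inferred edges are true, and two of three true edges are recovered.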
To measure the performance of the proposed method, in this comparison, we generated dataset (iii), which was a set of 100 time-course observational data, in which the measured time points were In contrast to the previous comparisons, for which the data were based on the assumed models as According to the original setting, three genes, which were randomly selected for each time-course, were perturbed among ith gene by jth gene as We applied the methods (a)\u2013(j) to dataset (iii); however, since SSM As a result, although the simulation model for dataset (iii) is different from the models that we assumed, the proposed method using SPE outperformed the other methods in terms of both AUROC and AUPR. The number of selected simulation time intervals As an application example, we analyzed microarray time-course gene expression data from rat skeletal muscle Mtor, Anxa3, Bnip3, Bcat2, Foxo1, Trim63, Akt1, Akt2, Akt3, Rheb, Igf1, Igf1r, Pik3c3, Pik3cd, Pik3cb, Pik3c2g, Slc2a4, and Mstn. Note that the microarray (GSE490) does not include three genes in the original pathway Redd1, Bcaa and Klf15. In addition, we employed the genes, Irs1, Srebf1, Rxrg, Scarb1, Gpam, Scd, Gpd2, Mapk6, Ace, Ptpn1, Ptprf, Edn1, Agtr1a, Ppard, Hmgcs2, Serpine1, Cebpb, Cebpd, Il6r, Mapk14, Ucp3, and Pdk4, which have been suggested to be corticosteroid-induced genes Because corticosteroid pharmacokinetics/dynamics in skeletal muscle have been modeled based on differential equations corticosteroid and hub genes regulating other genes. However, these results may be difficult to biologically interpret because some mRNAs are not considered to regulate other genes. 
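The AUROC and AUPR scores used to benchmark dataset (iii) can be computed directly from a ranked list of edge confidence scores. Below is a minimal, pure-Python sketch (not the authors' evaluation code); the scores and labels are illustrative.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a true edge receives a higher confidence score
    than a non-edge (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def aupr(scores, labels):
    """Area under the precision-recall curve, integrated stepwise over
    the ranking of edge confidence scores."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    n_pos = sum(labels)
    tp = fp = 0
    area, prev_recall = 0.0, 0.0
    for _, label in ranked:
        tp += label
        fp += 1 - label
        recall = tp / n_pos
        precision = tp / (tp + fp)
        area += precision * (recall - prev_recall)
        prev_recall = recall
    return area

scores = [0.9, 0.8, 0.6, 0.4, 0.2]   # edge confidence from a method
labels = [1,   0,   1,   0,   1]     # 1 = edge present in the gold network
```

Both metrics are threshold-free, which is why they suit methods such as G1DBN whose output is a ranking rather than a fixed network.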
Therefore, to exploit biological meaning correctly and demonstrate the effectiveness of incorporating prior information in the case of real biological data, we finally performed an experiment using TF information from ITFP (Integrated Transcription Factor Platform) Trim63, Akt1, Akt2, Mstn, Irs1, Srebf1, Gpam, Cebpb, and Cebpd, were set First, to determine the simulation time interval from i.e., literature-recorded pathways and regulation by TFs, were inferred in contrast to the non-weighted network in Cebpb, Mstn, Cebpd, and Trim63, were also selected as hub genes with no weight in Cebpb, which is known as a transcription factor related to immune and inflammatory responses, is indicated as a hub gene (illustrated as a green circle). Cebpd and Cebpb are assumed to be candidate genes for insulin-related transcription factors In i.e., GeneNet and G1DBN, to the pharmacogenomic data and attached significance levels the dynamics of other biomolecules can be included in the model, (iv) existing biological knowledge, e.g., literature-recorded pathways and TF information, can be integrated. Furthermore, we proposed an indicator for selecting a simulation time interval for the inference.In this study, we proposed a novel method for inference of gene regulatory networks incorporating existing biological knowledge and time-course observation data. The properties of the method are as follows; (i) the dynamics of the gene expression profiles can be estimated based on the proposed linear model with a hidden state, (ii) To show the effectiveness of the proposed method, we compared it to the previously reported GRNs inference methods using hill function-based pharmacogenomic pathways i.e., the weighted regularization and inclusion of a term for other biomolecules, influenced the results of selecting potential regulators and introducing drug effects to genes, respectively. 
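The effect of the weighted regularization, where literature- or TF-supported regulations receive a smaller penalty, can be illustrated with the L1 soft-thresholding operator. This is a conceptual sketch only; the regulator names, coefficients, and weights below are hypothetical, and the full method estimates coefficients with an EM algorithm rather than a single shrinkage step.

```python
def soft_threshold(b, penalty):
    """L1 shrinkage of a single coefficient: values inside the penalty
    band are zeroed, larger ones are pulled toward zero."""
    if b > penalty:
        return b - penalty
    if b < -penalty:
        return b + penalty
    return 0.0

lam = 0.5                                   # global regularization strength (assumed)
coeffs  = {("TF_A", "gene1"): 0.60,         # literature-supported regulator
           ("TF_B", "gene1"): 0.45}         # unsupported, similarly correlated
weights = {("TF_A", "gene1"): 0.2,          # small weight = weak penalty
           ("TF_B", "gene1"): 1.0}

shrunk = {edge: soft_threshold(b, lam * weights[edge]) for edge, b in coeffs.items()}
# The prior-supported TF_A keeps a nonzero effect (0.5), while the
# unsupported TF_B falls inside its penalty band and is dropped (0.0).
```

This mirrors the stated purpose of the weights: when several candidates are comparably correlated with the same gene, the known regulation survives selection, but a known regulation with genuinely low correlation is still zeroed.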
Finally, these inferred regulations were evaluated by GeneNet and G1DBN, and some of the regulations had high significance. Since our approach imposed prior weights for reliable regulations and included drug terms to explicitly represent their dynamics, not only these regulations but also regulations that are evaluated as non-significant could be candidate regulations for corticosteroid pharmacogenomics. These results indicate that the proposed method can help to elucidate candidates that will allow extension of GRNs in which the regulation among genes is partly understood by incorporating multi-source biological knowledge.For an application example, we applied the proposed method to a corticosteroid-stimulated pathway in rat skeletal muscle. Because pathways and genes related to corticosteroids have been widely investigated, we were able to obtain the concentration of the drug as a function of time from the corticosteroid kinetics/dynamics and the literature-recorded pathways. By incorporating time-course mRNA expression data, corticosteroid kinetics/dynamics, literature-recorded pathways and TF information, we inferred the regulatory relationships among 40 genes that are candidate or known corticosteroid-related genes. The tendency of the BIC scores and the SPE for the simulated time intervals were the same as in the simulation studies, in which the regulatory systems were based on the previous corticosteroid pharmacogenomic studies, and interesting findings for corticosteroid regulation were obtained. For example, genes that are suggested to be significant factors in corticosteroid pharmacogenomics were predicted to be hub genes regulating other genes in the results both with and without prior information. Furthermore, we found that the properties of the proposed method, Method S1A solution for estimating parameter values and active sets. 
The detailed solution of estimating parameter values using the EM-algorithm for VAR-SSM with L1 regularization. (PDF) Model S1: Artificial data and parameter values for dataset (i). The artificial observational data and parameter values for dataset (i). (ZIP) Model S2: Artificial data and simulation files for dataset (ii). The artificial observational data and a csml file for dataset (ii). (ZIP) Model S3: Artificial data for dataset (iii). The artificial observational data (100 time-courses) for dataset (iii). (ZIP) Model S4: Corticosteroid pharmacokinetics/dynamics in rat muscle. Corticosteroid pharmacokinetics/dynamics described by differential equations in rat muscle. (PDF)"}
+{"text": "Unlike many other approaches, our approach acknowledges the role of the different cellular layers of measurement and infers consensus profiles and time profile clusters for further biological interpretation. We investigated a time-course data set on epidermal growth factor stimulation of human mammary epithelial cells generated on the two layers of RNA and proteins. The data was analyzed using our new approach with a focus on feedback signaling and pathway crosstalk. We could confirm known regulatory patterns relevant in the physiological cellular response to epidermal growth factor stimulation as well as identify interesting new interactions in this signaling context, such as the regulatory influence of the connective tissue growth factor on transferrin receptor or the influence of growth arrest and DNA-damage-inducible alpha on the connective tissue growth factor. Thus, we show that integrated cross-platform analysis provides a deeper understanding of regulatory signaling mechanisms. Combined with time-course information it enables the characterization of dynamic signaling processes and leads to the identification of important regulatory interactions which might be dysregulated in disease with adverse effects.Identification of dynamic signaling mechanisms on different cellular layers is now facilitated as the increased usage of various high-throughput techniques goes along with decreasing costs for individual experiments. A lot of these signaling mechanisms are known to be coordinated by their dynamics, turning time-course data sets into valuable information sources for inference of regulatory mechanisms. However, the combined analysis of parallel time-course measurements from different high-throughput platforms still constitutes a major challenge requiring sophisticated bioinformatic tools in order to ease biological interpretation. 
We developed a new pathway-based integration approach for the analysis of coupled omics time-series data, which we implemented in the R package Omics data integration is a conclusive concept for a systemic understanding of biological signaling mechanisms, both in healthy conditions and disease signaling has already been studied comprehensively in comparison to other signaling pathways as dysregulation is associated with poor prognosis in many human malignancies signaling pathway,\u201d the NCI pathway \u201cEGFR-dependent Endothelin signaling events\u201d or the NCI pathway \u201cErbB1 downstream signaling.\u201d Furthermore, a number of pathways are identified that are involved in cellular adhesion, STAT3 dependent signaling and PI3K signaling. Differential abundance of phopho-MAPK14 was only identified at time point 0.25 h after EGF stimulation. Corresponding pathways identified for that time point included e.g., the Biocarta \u201cp38 mapk signaling pathway\u201d and the Biocarta \u201cmapkinase signaling pathway.\u201d According to the TF\u2014target gene database the identified TFs activate the expression of a high number of genes as shown in Table In the transcriptome based upstream analysis an identification of upstream TFs was performed based on the differentially expressed transcripts. Corresponding numbers at each time point after EGF stimulation are displayed in Table The pathways identified in the downstream and upstream analyses at each measured time point after EGF stimulation are part of the Supplementary Material Tables , S3.PLAU, the urokinase-type plasminogen activator, and CTGF, the connective tissue growth factor, comprise late regulatory changes. A figure with all static consensus profiles is part of the Supplementary Material by G-protein-coupled-receptors , but also on members of the \u201cintermediate gene expression changes\u201d group and the \u201clate gene expression changes\u201d group. 
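The transcriptome-based upstream analysis, which nominates TFs from the differentially expressed transcripts via a TF-target database, can be approximated with a hypergeometric overlap test. This is a generic sketch of the idea, not the pwOmics implementation; the gene sets below are toy examples.

```python
from math import comb

def tf_enrichment(targets, degs, universe):
    """Hypergeometric upper-tail p-value for the overlap between a TF's
    target set and the differentially expressed genes (DEGs): small
    values nominate the TF as an upstream regulator."""
    N = len(universe)
    K = len(targets & universe)
    n = len(degs & universe)
    k = len(targets & degs & universe)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

universe = set(range(10))        # all measured genes (toy)
targets  = {0, 1, 2, 3}          # hypothetical TF target set
degs     = {0, 1, 4}             # differentially expressed at one time point
p = tf_enrichment(targets, degs, universe)   # = 1/3 for this toy example
```

Repeating the test per TF and per time point yields a time-resolved list of candidate upstream regulators, which is then intersected with the downstream, phosphoprotein-based results.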
Two further members (PLAU and ODC1) are influenced by IL1A, a hub gene in the network, which we assigned to the \u201cimmediate early signaling processes\u201d group and to the \u201clate gene expression changes\u201d group, as it shows immediate membership in the static consensus graphs, but also a late response profile. A small group showing intermediate gene expression changes comprises TFRC and GADD45A. We observe in the graph that GADD45A activates itself, but also PCNA, the proliferating cell nuclear antigen, a gene of the \u201clate gene expression changes\u201d group. PCNA is additionally self-activated, as well as externally activated by the ErbB ligand AREG and ASPH, the aspartate beta-hydroxylase. AREG and ASPH are upregulated late after EGF stimulation. IL1A also activates SLC3A2, the solute carrier family 3 member 2, and inhibits LAMA3, laminin alpha 3. The second protein being part of the network is the transcription factor STAT3. The changes in STAT3 phosphorylation are found in the consensus graphs over all time points, thus we assign it to the group of \u201ccontinuous protein phosphorylation changes.\u201d Besides the activating influence of MAPK1 also autoregulation of STAT3 can be detected. In total, we could identify five subgroups in the consensus-based dynamic network by mapping them to the times in which they are part of the consensus graphs Figure : (1) immIn order to identify co-regulation patterns in the signaling response after EGF stimulation we performed time profile clustering. We obtained four dynamic co-regulation patterns of which two exhibit positive regulation and two exhibit negative regulation. Both positive and negative clusters each comprise one cluster of immediate regulation and one of delayed regulation. The clusters are depicted in Figure The results of the time-course integration based on the consensus analysis results are displayed in Figure SLC3A2, FKBP5, PPP2CA, CD44, and ODC1.
All of these except for ODC1 show anti-correlating patterns between transcripts and proteins until 4 h after EGF stimulation. For later time points most pairs exhibit correlating behavior. MAPK14 also has CYR61, CCND1, and SERPINB2 as downstream targets with corresponding proteins being significantly differentially abundant, whereas for PRKAR2B only CYR61 could be identified.STAT3 is the phosphoprotein showing the most downstream transcripts that match to significantly regulated proteins Figure . STAT3 iIn the downstream and upstream analyses the results indicate that pathway identification based on differentially abundant phosphoproteins and differentially expressed transcripts is effective. In both pathway sets those pathways known to be activated by EGF stimulation were identified reliably in the different databases, expectedly the \u201cEGF signaling pathway\u201d itself. This shows, that the two data sets are in concordance on the pathway layer even if they are measured on different cellular layers and analyzed individually. Based on these initial results a pathway-based integration was considered to be constructive. However, downstream and upstream analyses might also introduce false positive findings, which we aimed to reduce from further analysis steps by the subsequent intersection analysis. The small set of phosphoproteins measured over time gives a strong basis for the pathway layer based integration as they were selected carefully for the experiment and belong to key pathways in EGF signaling. However, a larger set of phosphoprotein data as obtained now e.g., from mass-spectrometry approaches could lead to more robust results.In order to evaluate our methods it is important to first classify the data according to their temporal transcriptional domains. According to Avraham and Yarden feedbackWe used the static consensus analysis in order to generate a static view on the integrated networks at each time point. 
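The static consensus analysis described above can be sketched as a per-time-point intersection of the molecules found by the downstream (phosphoprotein-based) and upstream (transcript-based) analyses. This is a simplified stand-in for the pwOmics step, and the molecule names and time points are illustrative.

```python
def static_consensus(downstream, upstream):
    """Per-time-point consensus: keep only molecules identified by both
    the downstream and the upstream analysis. Inputs map a time point
    (hours after stimulation) to a set of molecule identifiers."""
    return {t: downstream[t] & upstream[t]
            for t in downstream.keys() & upstream.keys()}

downstream = {0.25: {"MAPK1", "STAT3", "MAPK14"}, 1.0: {"MAPK1", "STAT3"}}
upstream   = {0.25: {"STAT3", "FOS"},             1.0: {"MAPK1", "STAT3", "EGR1"}}
consensus  = static_consensus(downstream, upstream)
# {0.25: {"STAT3"}, 1.0: {"MAPK1", "STAT3"}}
```

Because only molecules supported by both cellular layers survive, the intersection step also reduces false positives introduced by either single-layer analysis.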
Via static consensus profiles we can identify transcription factors with regulatory effects and their regulated consensus molecules on the gene layer at the 1 h time point. A large number of those genes were already reported to be IEGs in the cellular response to growth factor stimulation according to Tullai et al. . PLAU anThe static consensus profiles of most SRGs, in contrast, are supposed to show a sustained activity. This is exactly what we find in our consensus graph analysis.Due to the low number of differentially abundant phosphoproteins as a starting point the number of intersecting proteins from downstream and upstream analyses are low, as well. MAPK1 is involved in a variety of cellular growth processes such as proliferation and differentiation, thus its presence in the consensus graph corresponds well to the expected cellular response after EGF stimulation. As a regulatory subunit of the cAMP-dependent protein kinases PRKAR2B is involved in various cellular functions. With its late activity we suspect an involvement in the cellular reconstruction processes taking place for the final phenotype definition. The VAV proteins are guanine nucleotide exchange factors that activate pathways leading to cytoskeletal actin rearrangements and transcriptional alterations response of the cells.IL1A and CTGF as main players driving EGF stimulation response in the cell. Interestingly, we could detect the link between GADD45A and PCNA in two independent high-throughput time course data sets measured on different platforms using our pathway-based integration approach. As a matter of course, with a higher temporal resolution of the coupled time course measurements more accurate results can be identified by our approach, as less intermediate time points need to be estimated. 
To gain insight into the biological response after an external stimulation at least four time points after the stimulation time point are necessary, though there is a high information content in such coupled data sets on the different cellular layers. The chosen time points and the temporal resolution, however, need to be adjusted specifically to the cellular signaling dynamics and the stimulation of choice in order to reflect the crucial time points of regulation.In summary, we identified MAPK1, CCND1, the cyclin family protein, ANXA1 and ASPH, LAMA3 and AREG, which were identified in the consensus-based dynamic analysis in the group of late gene expression changes, VEGFC, a vascular endothelial growth factor promoting angiogenesis, CCND2\u2014cyclin D2, NME1\u2014nucleoside diphosphate kinase 1, which has been associated with high tumor metastatic potential based on different studies are not members of cluster 2, but of the immediately positively regulated cluster 1, it can be assumed, that TIMP1 activation might also have a negative regulatory impact on these late after EGF stimulation. In the delayed downregulated cluster 3 we observe RARRES3, the retinoic acid receptor responder 3, which is known for its growth inhibitory effects or immediate early genes which are upregulated again at later time points (PLAU or IL1A). Our hypothesis, that cluster 2 includes mainly genes upregulated as secondary response genes, responsible for the phenotype definition, holds true, when having a closer look to the members: We observe We were interested in how far our approach reveals the dynamics of elements in the regulatory cascade of a stimulation induced phosphorylation cascade triggering a specific gene expression, which then leads to the generation of proteins needed in the cellular response to that particular stimulation. 
Therefore, after integrating the phosphoproteome data in the first pathway layer based integration, we integrated in a second step also the proteome data with the results of our pathway-based integrative analysis dynamically. The delay between consensus transcript generation and their corresponding protein generation reflects the time the cell needs for the complete translational and post-translational process. However, it is known that differences in protein abundance are only attributable to mRNA levels by about 20\u201340% , however, when performed on a time-series data set with higher resolution, such time shifts might be observable. Non-correlating expression level patterns indicate post-translational modifications or a possibly very rapid degradation of mRNA or the protein product, which is not captured in the low resolution time measurements. Of the identified pairs CYR61 is a growth factor inducible protein which promotes the adhesion of endothelial cells , but its physiological function has not been characterized comprehensively, although activity in the adaptive immune response has been reported (Schroder et al., With the integrated time-courses of phosphoproteins, downstream consensus-graph transcripts and their corresponding proteins the data implies an extensive post-translational modification of a number of proteins. This we see in the transcript/protein pairs investigated in detail here, but also in the downstream transcripts depicted in gray in Figure FOS and EGR1, while the hub proteins identified in the proteome data were EGFR and ITGB1. Comparing these results to our results from the pathway-based integrative analysis, we likewise observe FOS and EGR1 to be highly important regarding regulatory mechanisms during the initial cellular response. 
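The delay between a consensus transcript and its corresponding protein can be probed by correlating the two profiles at several lags; the lag maximizing the Pearson correlation is a crude estimate of the translational delay, while persistently low or negative correlation hints at post-translational modification or rapid degradation. The sketch below is generic and the profiles are made up.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def best_lag(transcript, protein, max_lag=2):
    """Shift the protein profile by 0..max_lag sampling steps relative
    to the transcript profile and report the best-correlating lag."""
    corrs = {lag: pearson(transcript[:len(transcript) - lag], protein[lag:])
             for lag in range(max_lag + 1)}
    return max(corrs, key=corrs.get), corrs

transcript = [0.0, 2.0, 4.0, 3.0, 1.0, 0.5]   # illustrative mRNA profile
protein    = [0.1, 0.2, 1.9, 4.1, 3.0, 1.1]   # same shape, one step later
lag, corrs = best_lag(transcript, protein)     # lag == 1 for this toy pair
```

As noted in the text, with only a handful of measured time points such lag estimates are coarse; a denser time series would make the shift between layers directly observable.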
Yet, we additionally derived further information than what is given by the separate analysis: We evaluated these genes to play a significant role in the immediate early cellular reaction based on static consensus profiles. Furthermore, we saw that these are mainly influenced by IL1A and the phosphorylation of MAPK1 directly as well as indirectly. Based on the time profile clustering we saw on top that they belong to the early positively regulated cluster. The protein hubs that are identified via the separate analysis, however, cannot be found in our consensus analysis, as the consensus is confined to the small set of measured phosphoproteins.To comprehensively assess the advantage of our data integration approach based on public pathway knowledge we compared its results with the ones gained by a separate analysis of the individual proteomic and transcriptomic data sets. Waters et al. performeIn a second separate analysis of the proteomic and transcriptomic data sets Waters et al. performeIn summary, we conclude that the integrated analysis of the two data sets moves the focus to the dynamic interplay of regulatory mechanisms and enables a layer specific and detailed regulatory analysis of the cellular response to external stimulation.The data integration approaches applied by Waters et al. were basIn the integrative analysis of Waters et al. major ceFOS and SRC, while the hub nodes in the network generated from exclusively microarray data were FOS and EGR1, generated exclusively from proteome data EGFR and ITGB1 and exclusively from phosphoproteome data STAT3 and MAPK1. Interestingly, we also found FOS and EGR1, as well as STAT3 and MAPK1 as consensus molecules in our consensus-based dynamic analysis with considerable regulatory influence during the cellular response after EGF stimulation. 
The proteome hub nodes EGFR and ITGB1, as well as the hub node SRC from the integrated network were not part of our results due to the low number of phosphoproteins measured in the study. However, we found already considerable amount of regulatory mechanisms when including only the phosphoproteome data set as initial data set in our analysis. The MMP cascades identified in the integrated analysis from Waters et al. (Furthermore, integrated signaling networks from all data sets were investigated in Waters et al. . Not surs et al. as most Unfortunately, in the integrated analysis of Waters et al. only timThe presented data integration approach shows a way to gain a much deeper understanding of biological processes if time-course measurements and data from different high-throughput platforms representing the different functional layers of the cell are combined. Our approach enables a functional linking of regulatory processes over the transcriptional and translational cycle, even if the temporal resolution of the example data set is quite low, data has only been measured on two functional cellular layers and the phosphoproteome data set is very limited. This sets the basis for the integration of further cellular layers, as following regulation upon external perturbation in a detailed way provides a much deeper understanding of biological processing.pwOmics promote the generation of coupled data sets as they offer the possibility of an integrated analysis and help to sort the vast data sets in a biologically interpretable manner. By applying the different analysis steps implemented in pwOmics we showed that biological interpretation is facilitated and the results correspond to current biological knowledge about EGF stimulation generated in low and high-throughput experiments. Furthermore, we identified interesting regulatory relationships that were not observed yet in physiological EGF signaling. 
As our approach considers data from the different functional cellular layers individually, it enables to identify the regulatory interplay between these layers. We have demonstrated this in the consensus analysis, which is able to identify the molecular response minutes to hours after stimulation as feedback mechanism with a wave-like regulatory pattern generated by IEGs, DEGs, and SRGs and their corresponding proteins. We could also identify previously published pathway crosstalk via activation of MMPs (Yarden and Sliwkowski, Bioinformatic tools like the R package In order to link the different functional cellular layers it is beneficial and necessary to integrate knowledge from public databases which builds a frame for placing and linking the individual analysis results. This has the advantage of utilizing a vast amount of collected and curated information, which stays unused otherwise and can add an additional information layer for interpretation of the data. On the other hand this prior knowledge also directs the results in a certain extent, thus the quality of the databases used has to be taken into consideration when interpreting the overall results. A further caveat is that the public database knowledge available in most databases is not cell type or tissue specific resulting in a generalized analysis. However, as more cell type or tissue specific knowledge is collected such databases can be build up and integrated in the presented analysis workflow.In the consensus-based dynamic analysis we make the simplifying assumption of a gradual change of signaling over time. Clearly, this does not hold true for individual cells and still is a rough assumption for a set of cells as there have been found oscillatory mechanisms which work at high frequencies (Avraham and Yarden, We showed that the hypotheses on regulatory mechanisms generated via our integrative approach could be confirmed with independent low-throughput data sets. 
Although such time-course data sets measured in parallel enable a detailed analysis, it is not yet possible to infer from these data sets every regulatory aspect in detail. Nevertheless, our approach is a step toward portraying the whole picture of regulatory influences on the molecular level.pwOmics (Wachter and Beissbarth, Main analysis steps of the pathway-based integration approach of coupled time-series omics data described in this manuscript are implemented in the R package AW developed the method, performed data analysis and wrote the manuscript. TB conceived the design, envisioned the project and revised the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The hair cycle is a dynamic process where follicles repeatedly move through phases of growth, retraction, and relative quiescence. This process is an example of temporal and spatial biological complexity. Understanding of the hair cycle and its regulation would shed light on many other complex systems relevant to biological and medical research. Currently, a systematic characterization of gene expression and summarization within the context of a mathematical model is not yet available. Given the cyclic nature of the hair cycle, we felt it was important to consider a subset of genes with periodic expression. To this end, we combined several mathematical approaches with high-throughput, whole mouse skin, mRNA expression data to characterize aspects of the dynamics and the possible cell populations corresponding to potentially periodic patterns. In particular two gene clusters, demonstrating properties of out-of-phase synchronized expression, were identified. A mean field, phase coupled oscillator model was shown to quantitatively recapitulate the synchronization observed in the data. Furthermore, we found only one configuration of positive-negative coupling to be dynamically stable, which provided insight on general features of the regulation. Subsequent bifurcation analysis was able to identify and describe alternate states based on perturbation of system parameters. A 2-population mixture model and cell type enrichment was used to associate the two gene clusters to features of background mesenchymal populations and rapidly expanding follicular epithelial cells. Distinct timing and localization of expression was also shown by RNA and protein imaging for representative genes. Taken together, the evidence suggests that synchronization between expanding epithelial and background mesenchymal cells may be maintained, in part, by inhibitory regulation, and potential mediators of this regulation were identified. 
Furthermore, the model suggests that impairing this negative regulation will drive a bifurcation which may represent transition into a pathological state such as hair miniaturization. The hair cycle represents a complex process of particular interest in the study of regulated proliferation, apoptosis and differentiation. While various modeling strategies are presented in the literature, none attempt to link extensive molecular details, provided by high-throughput experiments, with high-level, system properties. Thus, we re-analyzed a previously published mRNA expression time course study and found that we could readily identify a sizeable subset of genes that was expressed in synchrony with the hair cycle itself. The data is summarized in a dynamic, mathematical model of coupled oscillators. We demonstrate that a particular coupling scheme is sufficient to explain the observed synchronization. Further analysis associated specific expression patterns to general yet distinct cell populations, background mesenchymal and rapidly expanding follicular epithelial cells. Experimental imaging results are presented to show the localization of candidate genes from each population. Taken together, the results describe a possible mechanism for regulation between epithelial and mesenchymal populations. We also described an alternate state similar to hair miniaturization, which is predicted by the oscillator model. This study exemplifies the strengths of combining systems-level analysis with high-throughput experimental data to obtain a novel view of a complex system such as the hair cycle. The miniorgan of the hair follicle represents a complex biological system that undergoes repeated phases of death and regeneration over its lifetime \u03b22, which is synthesized and secreted by DP cells. 
The evidence suggests that, in general, Tgf\u03b22 suppresses proliferation and induces catagen-like changes in the follicle, including apoptosis of MX cells \u03b22 mediated pathway which activates epithelial stem cells to promote hair follicle regeneration The molecular mechanisms underlying this cyclical pattern of death and renewal in hair follicles are not well understood; however, some general concepts, as well as specific molecular regulators, have been identified. One key aspect is the communication between epithelial and mesenchymal cells. Numerous studies have identified physical interactions between these cell populations, as well as several possible signaling molecules et al., the authors model follicle growth and coupling as an excitable medium et al. developed a general model for hair cycling based on observations in the literature et al., mRNA microarrays were compiled over the first three rounds of hair growth: morphogenesis, the second naturally synchronized cycle and a depletion-induced cycle Mathematical models of general features of hair cycling have also been studied. In a recent study by Murray Nonlinear dynamical models have provided valuable insights into many oscillating biological systems in situ hybridization and immunofluorescence all demonstrated similar associations. The results describe a coupling scheme, between these two cell populations, which would be sufficient to maintain the observed synchronization. Specific signaling molecules were also identified as being priority follow-up targets for drivers of synchronization. To our knowledge this is the first attempt at integrating high-throughput molecular data with a mathematical model to predict systems level properties, such as synchronization and population dynamics.Our aim here was to investigate a subset of genes whose expression changes as a function of time in a potentially periodic manner, similar to the cyclical nature of hair growth. 
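The coupling scheme proposed above, positive coupling within each gene cluster and negative coupling between the two clusters, can be checked in a minimal mean-field phase-oscillator simulation: with this sign pattern the clusters lock roughly half a cycle apart, mirroring the out-of-phase synchronization seen in the data. All parameter values below are illustrative, not fitted values from the study.

```python
import math
import random

def simulate(n=10, k_in=1.0, k_out=-1.0, dt=0.05, steps=4000, seed=0):
    """Euler integration of Kuramoto-type mean-field oscillators split
    into two equal clusters, with attractive coupling inside a cluster
    and repulsive coupling across clusters."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    cluster = [0] * (n // 2) + [1] * (n - n // 2)
    omega = 2.0 * math.pi / 24.0   # shared intrinsic frequency (arbitrary units)
    for _ in range(steps):
        theta = [th + dt * (omega + sum(
                     (k_in if cluster[j] == cluster[i] else k_out)
                     * math.sin(theta[j] - th) for j in range(n)) / n)
                 for i, th in enumerate(theta)]
    return theta, cluster

def mean_phase(angles):
    """Circular mean of a set of phases."""
    return math.atan2(sum(map(math.sin, angles)), sum(map(math.cos, angles)))

theta, cluster = simulate()
m0 = mean_phase([t for t, c in zip(theta, cluster) if c == 0])
m1 = mean_phase([t for t, c in zip(theta, cluster) if c == 1])
gap = (m0 - m1) % (2.0 * math.pi)   # settles near pi: anti-phase clusters
```

Reducing the magnitude of the negative cross-coupling in this sketch weakens the anti-phase locking, which is in line with the bifurcation the model predicts when the inhibitory regulation is impaired.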
Previous modeling studies, which have focused on general aspects of hair growth, represent important initial steps in applying mathematical strategies to understanding the hair cycle et al.Given the proposed cyclical nature of hair growth, we investigated the possibility of periodically expressed mRNA in the microarray data collected by Lin To further investigate the periodic expression, we assigned a specific frequency and a phase shift to each signal. This was done using the Principal Periodic Component (PPC) as an approximation to the FSD. Both the PPC and FSD reasonably recapitulated the time course trajectories, primarily the low frequency expression signals a sharp and complete depletion of the expanding population within the catagen time frame. The model also identified differences between samples from the natural and induced cycle. Specifically, the model estimated a slower anagen onset in depletion induced mice, this was also observed by comparing morphologies of tissue sections in Lin et al.in silico microdissection procedure was able to identify expanding cell populations compared to a static background populations.From the heterogeneous hair cycle samples, we were able to approximate the dynamics of dominant expanding cell populations . The traThe model was also able to estimate static intracellular expression levels for each population. Combining this with the estimated population size, we were able to compare estimated expression levels to those observed in the data. Overall, we found that the majority of expression signals were not well described by such a simple model . Probeseet al.et al.We next investigated the possibility that the estimated populations were associated to specific biological cell types involved in the hair cycle. 
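Assigning each signal a dominant frequency and phase, in the spirit of the Principal Periodic Component approximation described above, can be done with a plain DFT; the snippet below is a generic stand-in for that step rather than the authors' exact procedure, and the toy profile is synthetic.

```python
import cmath
import math

def principal_periodic_component(signal):
    """Return the dominant nonzero-frequency DFT component (frequency
    index, amplitude, phase) of a uniformly sampled, mean-centered
    expression profile."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    coeffs = [sum(x * cmath.exp(-2j * math.pi * k * t / n)
                  for t, x in enumerate(centered))
              for k in range(n // 2 + 1)]
    k = max(range(1, n // 2 + 1), key=lambda i: abs(coeffs[i]))
    amplitude = 2.0 * abs(coeffs[k]) / n
    phase = cmath.phase(coeffs[k])
    return k, amplitude, phase

# Toy profile: one full cycle over eight samples on a constant baseline.
profile = [5.0 + 2.0 * math.cos(2.0 * math.pi * t / 8) for t in range(8)]
k, amplitude, phase = principal_periodic_component(profile)
```

Clustering genes by the recovered (frequency, phase) pairs is then one way to separate the two out-of-phase groups from non-periodic background expression.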
We made use of two existing studies, including Rendl et al. These results provide us with a second biological insight: the genes in LFO cluster 1 were associated with expanding cell populations of the follicle that were enriched for follicle epithelial cells as defined by Rendl et al. Candidates were examined by qRT-PCR, In Situ Hybridization (ISH), and protein antigen staining. To generate tissue samples, we aligned the hair cycle in 10-week old mice with the shave/depilatory induction protocol. Two animals were sacrificed for each of the five time points considered; however, qRT-PCR was done using four technical replicates for both samples, and imaging shows results typical of multiple follicles observed over the two biological replicates. Phenotypic changes were quantified by melanogenesis scoring, which is known to correspond to active hair growth. Both biological replicates were observed to have entered anagen within 16 days of induction and to have completely finished the first round of post-induction hair growth by 29 days, as determined by scores increasing from and then returning to zero. Candidate genes included Fermitin Family Member 2 (Fermt2) and Vimentin (Vim) for candidates associated to the background population, as well as Ovo-Like 1 (Ovol1) and SMAD Family Member 6 (Smad6) for the expanding population. In the follicle dissection study of Rendl et al., Fermt2 was associated with the \u03b2-catenin/TCF4 complex, and knockdown of Fermt2 leads to loss of \u03b2-catenin mediated transcription; targets of the \u03b2-catenin/TCF4 complex, such as Wnt, are known to be required for the hair inducing property of DP cells. We investigated the localization of candidate gene products by various imaging techniques in samples corresponding to telogen taken before induction (day 0) and anagen taken 16 days after induction (day 16). Here, we recall that background candidates were determined to be markers for hair follicle cell populations that remain relatively stable throughout the hair cycle; these were also enriched with DP signature genes. 
Using ISH and fibroblast growth factor 7 (Fgf7) as a control marker, we examined candidate localization. Given the role of Smad6 in TGF-\u03b2 signaling, as well as its proximity to MX cell markers, Smad6 may be an important candidate for future study in hair cycle regulation. We also noted the periodic expression of bone morphogenetic protein (BMP) genes, which have been documented as important regulators of skin and hair development. Four BMP genes showed periodic expression as LFOs: BMP 1 was found in cluster 1 while BMPs 2K, 8a, and 7 were expressed in cluster 2 (matrix-associated). Other BMPs such as 2 and 4 were present in the original data, but the expression data contained too much variability to survive the FDR cutoff. The complete lists of LFO genes that matched the background and expanding population clusters, along with statistical metrics, are in the Supplementary material. Imaging results also confirmed candidate markers for the expanding population. Here we recall that the expanding population candidates were determined to be markers for cells whose relative population size increases during anagen, followed by a sharp decline in catagen; these were also enriched with MX signature genes. The coupled oscillator model suggests that out-of-phase clustering is maintained by positive and negative coupling. The two population model indicates that these clusters are associated to specific cell populations. Taken together this is similar to negative feedback; for example, the expanding population may drive the background population to produce an inhibitory signal, such as apoptosis, that in turn depletes the expanding population. However, if the background population is static, how is it contributing to such a control loop? For example, when the expanding population is relatively high, one would expect an increase in the expression of the inhibitory genes from the background to drive down the expanding population. One reasonable possibility is that expression changes were occurring within the background population. 
On average we found that the assumption of static intracellular expression was reasonable enough to estimate population dynamics; however, we investigated the possibility that inhibitory signaling genes may be in the DP enriched group identified as LFO cluster 2, but not well described by the static intracellular expression model. We expect such signaling genes to display an increased expression 14 to 16 days after morphogenesis, near the on-set of catagen and before the sharp decline in the expanding population. Given the observed expression signal, membership in DP enriched cluster 2, high enrichment for extracellular genes and inclusion of Tgf\u03b22, which is currently thought to be one of the signaling molecules produced in DP cells to initiate apoptosis in hair epithelial cells at catagen on-set, this list may contain potential targets for molecules that communicate an inhibitory signal from the DP to proliferating hair epithelial cells, closing a negative feedback loop. Obviously further experiments will be required to test this hypothesis; however, it does provide a starting point for future validation of the conclusions drawn above and, perhaps, even those identified in the model of Al-Nuaimi et al. Although our approach provides novel insights and genes associated to the hair follicle, we also recognize that there are several limitations to this study. We studied microarray-derived RNA expression data from developing mouse skin that included non-periodic as well as periodic gene expression patterns. Due to the cyclic nature of the hair cycle, we chose to focus our study on the latter. We emphasize that our study would overlook important regulators of the hair cycle if they were not periodically expressed. Next, we only considered a single time course experimental study, which obviously limits the data and conditions available to us, and could lead to some important genes and cycle dynamics being excluded from further analysis. 
Furthermore, biological and technical variation, along with typical tradeoffs in sensitivity versus specificity associated with parameter selection, such as p-value thresholds, will further limit statistical detection of important mRNAs or expression patterns. Due to concerns of batch effects, we chose not to combine additional datasets from other experimental studies to offset these issues. Instead, we chose to limit the scope of our investigation to describing a specific, but prevalent, dynamic pattern observed in the data. Again, by limiting the scope in this manner, it is likely that some important hair cycle regulators were overlooked. For example, BMP 2 and 4 have been shown to influence anagen initiation. Study design also limited us to a time course over a single cycle of follicle-synchronized hair growth. We were not able to test if the identified expression patterns, specifically synchronized out-of-phase gene expression, continued for additional cycles. This is a typical experimental limitation due to the loss of follicle synchronization as animals mature, at later stages of hair growth. This is a different concept from the synchronization describing gene expression patterns. One might still expect that similar gene expression patterns within individual follicles, and the surrounding microenvironment, continue with additional cycles; however, without single follicle tracking, we cannot confirm this. Furthermore, our dynamic coupled oscillator model would never predict such follicle-level de-synchronization, as we did not include any mechanisms for cycle variability nor did we include the concept of individual follicles. 
Accounting for stochastic variation and spatially modeling individual follicles that are themselves coupled represents an additional level of complexity that may more accurately model the rich dynamics of the hair system, but was not considered in this study. Finally, we modeled gene expression from whole skin, since isolation of hair follicles prior to gene expression profiling is resource intensive and was beyond the scope of our work. In doing so, we relied on the 2-population model and cell type specific enrichment. To identify periodic signals in the mRNA expression data, we applied a robust regression method described in the literature. A permutation strategy was used to find the p-value for the g-statistic. To improve p-value estimation, we applied a Generalized Pareto Distribution (GPD) to the tail of the permutation distribution when possible, as described in the literature (code available at https://github.com/Rtasseff/gpdPerm). The complete periodic identification procedure was performed on each of the normalized expression signals. No pre-filtering or pre-selection of probesets was applied. We note that the selection of frequencies is not obvious for non-uniformly spaced time points; however, false positives due to improper frequency selection are mitigated by subsequent p-value calculations. We did find that the inclusion of very high frequencies, near or greater than the sampling resolution, could be problematic (code available at https://github.com/Rtasseff/oscillator). To describe the expression dynamics in terms of oscillators, we calculated the instantaneous phase and frequency. If we imagine an oscillator as a point revolving around the unit circle in the complex plane, then we can describe its trajectory by finding its phase (or angle) and frequency (or rate of change). Here, trajectories are a result of the microarray time course data for samples of mouse skin, and therefore represent averages over this tissue. 
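The permutation/GPD p-value estimation described above can be sketched in a few lines. The function `perm_pvalue` and the method-of-moments GPD fit below are our own illustrative choices, not the code in the gpdPerm repository:

```python
import numpy as np

def perm_pvalue(stat, null_stats, tail_size=250):
    """Permutation p-value for an observed statistic (e.g. the g-statistic);
    when the statistic exceeds almost all permuted values, refine the
    estimate with a Generalized Pareto Distribution fitted (by the method
    of moments) to the tail exceedances. A sketch, not gpdPerm itself."""
    null_stats = np.asarray(null_stats, float)
    n = len(null_stats)
    n_exceed = int(np.sum(null_stats >= stat))
    if n_exceed >= 10:                       # enough exceedances: empirical estimate
        return (n_exceed + 1) / (n + 1)
    # Fit a GPD to exceedances over a threshold set by the tail_size largest values.
    tail = np.sort(null_stats)[-tail_size:]
    u = tail[0]                              # threshold
    exc = tail - u
    m, v = exc.mean(), exc.var()
    xi = 0.5 * (1.0 - m * m / v)             # method-of-moments shape
    beta = 0.5 * m * (m * m / v + 1.0)       # method-of-moments scale
    z = stat - u
    if xi == 0.0:
        tail_p = np.exp(-z / beta)
    else:
        tail_p = max(0.0, 1.0 + xi * z / beta) ** (-1.0 / xi)
    return (tail_size / n) * tail_p
```

With, say, 10^4 permutations the empirical branch can only resolve p-values down to about 10^-3; the GPD tail extrapolates below that, which is why the study applied it "when possible".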
Given an analytical, or continuous, representation of the signal, we can use common methods applied in signal processing to calculate these properties (code available at https://github.com/Rtasseff/oscillator). We calculated a set of complex order parameters to quantify the collective behavior of the system, as is common for oscillatory systems. We employed a mathematical description of coupled oscillators to study general features of the synchronization observed in the hair cycle data, considering a modified version of the simple Kuramoto model. To solve the system, we followed the original paper. We solved for various properties of the hair cycle system using EQ 12 and the oscillator state variables solved for above. We assumed the observed period of the hair cycle system was at a quasi-steady-state, where the magnitude of the first order parameter is constant and the rate of change of the phase is also constant. The bifurcation diagram was solved numerically. Observations of two distinct gene expression clusters motivated us to explore possible relationships to different cell populations within the hair follicle, via an in silico microdissection. We considered the scenario in which observed expression changes are due to changes in relative cell population size as opposed to intracellular changes. The in silico microdissection works by applying a simple linear model of mixed samples, fitting all expression signals simultaneously. Briefly, we consider a model of an expanding cell population mixed with a constant background population. We treated the hair cycle expression chips as independent mixed samples, each with possibly different cell fractions. No information of cycle type or time was needed, nor any strategy for combining samples as in the previous periodic identification. For later comparisons of the induced and natural cycle, we set the time relative to cycle initiation, which we assumed to be after morphogenesis or after depletion. 
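The instantaneous phase and the first Kuramoto order parameter described above are standard signal-processing constructions; the sketch below uses an FFT-based discrete Hilbert transform, with function names of our choosing:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via an FFT-based discrete Hilbert transform; its
    angle gives the instantaneous phase and the phase derivative the
    instantaneous frequency. Standard construction, names are ours."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    h = np.zeros(n)                     # spectral weights: keep positive freqs doubled
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def order_parameter(phases):
    """First Kuramoto order parameter: r * exp(i*psi) = mean_j exp(i*theta_j).
    r near 1 means synchronized phases, r near 0 incoherence."""
    z = np.mean(np.exp(1j * np.asarray(phases)))
    return float(np.abs(z)), float(np.angle(z))
```

Two in-phase signals give r close to 1, while an anti-phase pair gives r close to 0, which is the distinction the second order parameter is then used to refine.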
This time scale was only used for graphical representation, and was not used in any calculations. A linearly increasing function from 0 to 1 was used as the initial condition for the relative population size. While calculating the internal expression for the two populations, we also estimated the corresponding standard error using common methods associated with linear regression. The standard error was used to produce a t-statistic and p-value for each probeset, which indicated the extent to which a gene was differently expressed between the two populations. We also considered a computational negative control. In the above analysis, we inherently assumed that expression is related to time, after morphogenesis or after depletion. Our population analysis allowed us to then associate expression to relative population size, and therefore, plot relative population size as a function of time. Here we considered a negative control, that expression is random with respect to time, and not related to the hair cycle. To achieve this, we randomly permuted (or shuffled) the time courses for each probeset. For a proper comparison the depletion and naturally induced time courses were not intermixed, and kept separate. After permutation, we employed the exact same analysis and plotting procedure as above to estimate enrichment of genes related to hair. The NGD is a semantic similarity measure; the threshold of 1.0 was chosen as it is the NGD of the expected value for independent or unrelated terms. A cell type enrichment analysis was used to link model populations to specific cell types, based on two existing studies, including Rendl et al. Male mice, C57Bl/6 at 62-66 days of age, in the telogen phase of the hair cycle, were used. Formalin fixed paraffin embedded (FFPE) sections (5 \u00b5m) were cut, fixed in 10% formaldehyde overnight at room temperature (RT) and digested with proteinase K. 
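The per-gene least-squares step of the two-population mixture described above can be illustrated as follows, assuming the expanding-population fraction of each sample is given (the study alternates this fit with re-estimation of the fractions); `two_population_fit` is a hypothetical name:

```python
import numpy as np

def two_population_fit(Y, f):
    """Least-squares fit of the static two-population mixture
    y_s = f_s * a + (1 - f_s) * b for each gene (each row of Y), where
    f_s is the expanding-population fraction in sample s. Returns the
    per-gene expanding (a) and background (b) expression levels. A sketch
    of the linear model only, not the paper's full procedure."""
    f = np.asarray(f, float)
    X = np.column_stack([f, 1.0 - f])    # design matrix, one row per sample
    coef, *_ = np.linalg.lstsq(X, np.asarray(Y, float).T, rcond=None)
    a, b = coef                          # coef has shape (2, n_genes)
    return a, b
```

The same regression machinery yields standard errors for a and b, hence the per-probeset t-statistics and p-values for differential expression between the two populations.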
Sections were hybridized for 3 hours at 40\u00b0C with custom designed QuantiGene ViewRNA probes against specific target genes; the positive control genes used were Fgf7 for dermal papilla cells and Foxn1 for matrix cells. ISH was performed using QuantiGene ViewRNA protocols. Bound probes were then amplified per protocol from Affymetrix using PreAmp and Amp molecules. Multiple Label Probe oligonucleotides conjugated to alkaline phosphatase (LP-AP Type 1) were then added and Fast Red Substrate was used to produce signal. For two color assays, an LP-AP type 6 probe was used with Fast Blue substrate followed by LP-AP type 1 probe with Fast Red Substrate to produce a dual colorimetric and fluorescent signal. The probe sets used for ISH are described in the Supplementary material. The in situ hybridization assay in this study utilizes branched DNA (bDNA) technology, which offers near single copy mRNA sensitivity in individual cells. The bDNA assay uses a sandwich-based hybridization method which relies on bDNA molecules to amplify the signal from target mRNA molecules. Each probe set hybridizing to a single target contains 20 oligonucleotide pairs. This was followed by sequential hybridization with the final conjugation of a fluorescent dye. Thus, each fully assembled signal amplification \u2018tree\u2019 has 400 binding sites for each labeled probe. Finally, when all target specific oligonucleotides in the probe set have bound to the target mRNA transcript, the resulting amplification of signal approaches 8000-fold (20 oligonucleotides times 400 binding sites\u200a=\u200a8000 fold). Immunofluorescence staining was performed on fresh frozen cryosections (10 \u00b5m thickness) or FFPE sections (5 \u00b5m thickness) of mouse skin to visualize the hair follicles present at Day 0 and Day 16. Cryosections were stored at \u221280\u00b0C until use. Cryosections were dried for 30 min at room temperature and fixed by immersion in ice-cold acetone for 10 mins. Cryosections were then air-dried for 5 mins and washed three times with PBS. 
For FFPE sections, deparaffinization was performed using xylene and a series of alcohol changes. Antigen retrieval was performed using 0.05% trypsin at 37\u00b0C for 20 mins. Both cryosections and FFPE sections underwent the same treatment after this step. The sections were blocked for 1 hour using normal donkey serum in PBS. Sections were then incubated with specific primary antibodies (as described in the Supplementary material). Total RNA was extracted from mouse skin samples at days 6, 16, 23, 29, 38, 44 and 59 using Agilent's Total RNA isolation mini kit (Agilent Technologies). Reverse transcription reaction was performed with 500 ng of total RNA using the Superscript VILO cDNA synthesis kit (Life Technologies). A 1\u223625 dilution of cDNA was used in the QRT-PCR reaction. QRT-PCR was carried out in a 10 \u00b5l reaction mixture with gene-specific primers and \u03b2-Actin using RT2 SYBR Green ROX qPCR Mastermix (Qiagen). The PCR conditions were 95\u00b0C for 10 min, and 40 cycles of 95\u00b0C for 15 s, 59\u00b0C for 30 s, 72\u00b0C for 30 s on the ABI HT 7600 PCR instrument. All samples were assayed in quadruplicate. The differences in expression of specific gene products were evaluated using a relative quantification method where the expression of the specific gene was normalized to the level of \u03b2-Actin. Primer sequences are available in the Supplementary material. Figure S1: Details on the periodic identification by frequency. Probesets were separated into groups based on the frequency of the Principal Periodic Component; a combined group is shown at the far left. (Top) Box plots of the coefficient of determination, also known as R-squared values, to indicate goodness of fit. Boxes show the interquartile range. Results are shown for the complete fit (Fourier Series Regression) and for the simplified fit involving only the Principal Periodic Component. 
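Relative quantification against \u03b2-Actin is conventionally done with the 2^-\u0394\u0394Ct method; the sketch below assumes ~100% primer efficiency and is not necessarily the exact calculation used in the study:

```python
def relative_expression(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    """Relative quantification by the 2^-ddCt method: the target Ct is
    first normalized to the beta-Actin Ct of the same sample (dCt), then
    to a reference sample, e.g. day 0 (ddCt). Assumes ~100% primer
    efficiency; a sketch of the standard calculation."""
    d_ct = ct_gene - ct_actin            # normalize to beta-Actin
    d_ct_ref = ct_gene_ref - ct_actin_ref
    return 2.0 ** -(d_ct - d_ct_ref)     # fold change relative to reference
```

For example, a target whose dCt drops from 6 cycles (reference day) to 4 cycles corresponds to a 4-fold increase in relative expression.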
(Bottom) Histogram showing the number of probesets at each frequency; note the y-axis is in log scale.(EPS)Figure S2: Schematic of possible levels of synchronization and the corresponding first and second order parameters, referring to EQ 9.(EPS)Figure S3: Schematic of a possible two-population expression profile resulting in out-of-phase expression.(EPS)Figure S4: Negative control for the estimated relative population size in the two-population model. We shuffled the gene expression for each time course so that it did not associate to the hair cycle.(EPS)Figure S5: Negative control for the two-population model fits and estimated differential expression. We shuffled the gene expression for each time course so that it did not associate to the hair cycle.(EPS)Figure S6: Two phase histograms shown together with separation determined by different methods.(EPS)Figure S7: Overlap with hair cell type specific gene signatures. Signatures in the form of probeset IDs were taken from the literature for Matrix (MX), Dermal Papilla (DP), melanocytes (MC) and outer root sheath (ORS).(EPS)Figure S8: Quantification of hair growth and validation of selected microarray results by QRT-PCR, normalized to \u03b2-Actin. Two biological replicates are shown (points); for visual assistance a line is drawn through the mean of each replicate. (A) Melanogenesis graph for samples at time points for which QRT-PCR was performed. High scores were indicative of anagen. Both biological replicates showed the same behavior. (B) QRT-PCR analysis for background, DP enriched, candidate genes. 
A cyclic pattern in the expression was observed, with low expression in the mid to late anagen phase and an increase at telogen onset (day 29); a slight decrease was observed in late telogen. (C) QRT-PCR analysis for matrix derived cell candidate genes. Maximum expression was observed during the anagen phase (day 23) and the expression declined to a minimum from catagen to telogen phase (day 29 to 39). The reaction for each biological replicate was performed in quadruplicate (averages reported) and normalized to \u03b2-Actin.(EPS)Figure S9: Technical controls for RNA imaging by In Situ Hybridization (ISH). We show two replicates of both negative (in the absence of any RNA probe) and positive (addition of the Ubiquitin C (UBC) RNA probe) controls for both day 0 and day 16 time points. UBC was the positive control suggested by the manufacturer. STAT5A was added to positive controls for comparison purposes.(TIF)Figure S10: Expression trajectories that matched the criterion for possible drivers of negative feedback. Probesets identified as low frequency oscillators with increased expression near catagen onset that was not captured by a population model with static intracellular expression. The 88 expression signals meeting this criterion are shown relative to the static population model; for example, values above one indicate increases above what could be expected from static intracellular expression.(EPS)Figure S11: Similar to Supplementary.(EPS)Figure S12: Similar to.(EPS)Figure S13: Comparison simulations of the high and low dimensional coupled oscillator systems. The high dimensional system simulates all oscillators.(EPS)Table S1: Cell type enrichment for model populations. 
P-values derived from the hypergeometric distribution to test enrichment of cell type specific probesets from lists reported in the literature.(PNG)Table S2: Target probe set information.(PNG)Table S3: Immunofluorescence antibody information.(PNG)Table S4: QRT-PCR primer information.(PNG)File S1: Periodic genes with a Normalized Google Distance (NGD) to the term \u2018hair\u2019 of less than one for abstracts in PubMed. Genes were considered periodic if they had a symbol that mapped to at least one probeset.(TXT)File S2: Expression modeled as a system of coupled oscillators (see that section). Oscillators were started from a random, incoherent state and parameter values for the stable configuration were used.File S3: Table of probesets considered as low frequency oscillators (see section Identification and characterization of periodic expression signals) and estimated to have differential expression between the expanding and background populations (see section Associating gene clusters to hair specific cell populations); statistics and metrics are described therein.(XLSX)File S4: List of gene symbols corresponding to possible negative feedback targets (see section Identification of negative feedback targets).(TXT)File S5: GO term enrichment for gene symbols corresponding to possible negative feedback targets (see section Identification of negative feedback targets).(XLSX)File S6: Compressed folder of code and scripts used in this study (see contained readme.txt).(ZIP)"}
+{"text": "Gene regulatory network (GRN) is a fundamental topic in systems biology. The dynamics of GRN can shed light on the cellular processes, which facilitates the understanding of the mechanisms of diseases when the processes are dysregulated. Accurate reconstruction of GRN could also provide guidelines for experimental biologists. Therefore, inferring gene regulatory networks from high-throughput gene expression data is a central problem in systems biology. However, due to the inherent complexity of gene regulation, noise in measuring the data and the short length of time-series data, it is very challenging to reconstruct accurate GRNs. On the other hand, a better understanding into gene regulation could help to improve the performance of GRN inference. Time delay is one of the most important characteristics of gene regulation. By incorporating the information of time delays, we can achieve more accurate inference of GRN.In this paper, we propose a method to infer time-delayed gene regulation based on cross-correlation and network deconvolution (ND). First, we employ cross-correlation to obtain the probable time delays for the interactions between each target gene and its potential regulators. Then based on the inferred delays, the technique of ND is applied to identify direct interactions between the target gene and its regulators. Experiments on real-life gene expression datasets show that our method achieves overall better performance than existing methods for inferring time-delayed GRNs.By taking into account the time delays among gene interactions, our method is able to infer GRN more accurately. The effectiveness of our method has been shown by the experiments on three real-life gene expression datasets of yeast. Compared with other existing methods which were designed for learning time-delayed GRN, our method has significantly higher sensitivity without much reduction of specificity. 
The inference of a gene regulatory network (GRN) is a vital step in understanding many biological systems in detail. However, the inference of GRN is known to be challenging due to several facts: (1) gene regulation is inherently complicated, (2) the measurements of gene expression levels are usually noisy, (3) the datasets for GRN inference are often incomplete, (4) time-series gene expression datasets have short time series compared to the number of genes measured. Generally, a GRN is inferred using machine learning algorithms on a time-series gene-expression dataset. Given the time-series data, the gene regulation could be inferred in two ways: one is assuming instantaneous or first order regulation, and the other is considering higher order regulation. In many cases, a gene regulates the expression of another gene by its products (RNAs or proteins). Since it takes time to generate those products and different processes need different amounts of time, time-delayed regulation is ubiquitous in cellular processes. Thus, inferring time-delayed gene interactions is essential to accurately reconstructing GRN. The problem of inferring higher-order time delays is challenging, due to the tremendous search space when the numbers of time lags are unknown. For an r-th order system with totally T time points in the dataset, the available number of time points for inference reduces to T-r. This poses a serious computational challenge resulting in more false predictions. Previous work proposed a framework to infer instantaneous and time-delayed genetic interactions at the same time. The delay-alignment step for a target gene i can be summarized as: append the expression of the target gene i to X'; for each gene j (j \u2260 i), find the time delay l_ij of the interaction between the target gene i and gene j, extract and align the time samples for gene j from X based on l_ij, and append the aligned expression of the j-th gene to X'; return X'. The authors declare that they have no competing interests. H.C. developed the algorithm and performed experiments. H.C., P.A. 
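The two steps described in the abstract, choosing a lag by maximal cross-correlation and then removing indirect edges, can be sketched as follows. `network_deconvolution` uses the closed form G_dir = G_obs(I + G_obs)^-1 of Feizi et al.'s network deconvolution; both function names are our own:

```python
import numpy as np

def delay_by_xcorr(x, y, max_lag):
    """Return the non-negative lag (in time points) maximizing the
    correlation between regulator x and target y, i.e. a probable
    regulatory time delay. Illustrative sketch, not the paper's code."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best_c, best_lag = -np.inf, 0
    for lag in range(0, max_lag + 1):
        n = len(x) - lag
        c = np.corrcoef(x[:n], y[lag:lag + n])[0, 1]
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag

def network_deconvolution(G):
    """Closed-form network deconvolution (Feizi et al.): if the observed
    similarity matrix is G_obs = G_dir (I - G_dir)^-1 (direct effects plus
    all indirect paths), the direct part is G_dir = G_obs (I + G_obs)^-1,
    computed here by eigendecomposition of the symmetrized matrix."""
    G = np.asarray(G, float)
    w, V = np.linalg.eigh((G + G.T) / 2.0)
    return V @ np.diag(w / (1.0 + w)) @ V.T
```

After each regulator is shifted by its inferred delay, ND prunes the associations that are explained by indirect paths, leaving candidate direct regulations.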
and J.Z. analyzed results, interpreted results, and wrote the manuscript. L.Z. helped in performing the experiments. J.Z. and F.L. provided overall supervision, direction and leadership to the research. The derivation of Eq. 2: using the convolution expression, we have \u03d5_xy(\u03c4) = x(\u2212\u03c4) \u2217 y(\u03c4). Converting x(\u2212\u03c4) to the frequency domain using the Fourier transform with the substitution \u03c4\u2032 = \u2212\u03c4, we have \u03a6_xy(f) = FT[\u03d5_xy(\u03c4)] = X\u2217(f)Y(f). Applying the inverse Fourier transform we obtain \u03d5_xy as in Eq. 2."}
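The derivation of Eq. 2 can be checked numerically: the circular cross-correlation equals the inverse FFT of conj(X(f))\u00b7Y(f). The helper below is an illustrative sketch:

```python
import numpy as np

def xcorr_fft(x, y):
    """Circular cross-correlation phi_xy via the frequency domain:
    phi_xy = IFFT[ conj(FFT(x)) * FFT(y) ], i.e. the identity of Eq. 2.
    Illustrative helper with a name of our choosing."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    return np.fft.ifft(np.conj(X) * Y).real   # real inputs -> real correlation
```

The argmax of phi_xy over the lag tau is then the cross-correlation delay estimate used by the method.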
+{"text": "When modeling coexpression networks from high-throughput time course data, Pearson Correlation Coefficient (PCC) is one of the most effective and popular similarity functions. However, its reliability is limited since it cannot capture non-linear interactions and time shifts. Here we propose to overcome these two issues by employing a novel similarity function, Dynamic Time Warping Maximal Information Coefficient (DTW-MIC), combining a measure taking care of functional interactions of signals (MIC) and a measure identifying time lag (DTW). By using the Hamming-Ipsen-Mikhailov (HIM) metric to quantify network differences, the effectiveness of the DTW-MIC approach is demonstrated on a set of four synthetic and one transcriptomic datasets, also in comparison to TimeDelay ARACNE and Transfer Entropy. We define the DTW-MIC function as the root mean square of MIC and the similarity measure naturally induced by DTW. We quantify the performance of the DTW-MIC approach within a differential network framework based on the Hamming-Ipsen-Mikhailov (HIM) distance , 59, thunettools [http://cran.r-project.org) and on the GitHub repository https://github.com/MPBA/nettools.git.As a bioinformatics resource, we provide an implementation of the DTW-MIC measure, other association and inference functions, and the HIM distance within ReNette, the Open Source web framework for differential network analysis . ReNettenettools are avaii.e., the ability of capturing variable relationships of different nature, and equitability, that is the property of penalizing similar levels of noise in the same way, regardless of the nature of the relation between variables.The Maximal Information Coefficient (MIC) measure is a member of the Maximal Information-based Nonparametric Exploration (MINE) family of statistics, introduced for the exploration of two-variable relationships in multidimensional data sets , 51, 53.e.g., in the analysis and inference of various kinds of biological networks. 
Since its introduction in 2011, a debate arose in the scientific community regarding statistical flaws of MINE. Nonetheless, MIC has been coupled to the Context Likelihood of Relatedness (CLR) for network inference, and an efficient implementation is available in the minerva package. Example: to illustrate the difference between PCC and MIC in detecting non-linear relationships between two variables, we introduce a simple synthetic example of five sequences A-E on the time points {ti = i : 1 \u2264 i \u2264 100}, with parameters a < b. While A(i) is just 1/100\u2013th of the identity map, B(i) is a logarithmic map, C(i) is obtained from A(i) by adding a 20% level of uniform noise, D(i) is a more complex non-linear map merging a trigonometric and a logarithmic relation and, finally, E(i) is obtained from D(i) by a vertical offset and then flattening to zero all the values in a given time interval. Each of A\u2013E is displayed together with the PCC and MIC values for all pairs of sequences. MIC is able to capture the functional relationship linking all pairs of time series, even in presence of a moderate level of noise: all MIC values are larger than 0.72, and in six cases out of ten MIC attains the upper bound 1. On the other hand, PCC is close to one only when evaluating three of the pairs, while all the remaining six cases display a correlation score smaller than 0.33, confirming that PCC is ineffective as a similarity measure for complex longitudinal data. As a relevant example, note that B(i) has a strong functional dependence on D(i) and E(i) although the shapes of the corresponding curves are hugely different: this non-linear behaviour is well captured by MIC, with similarity value 1 for both pairs, while the corresponding values for PCC are negative. Dynamic Time Warping (DTW) is an algorithm for aligning and comparing temporal sequences. Several variations to the original DTW algorithm have been proposed, first to overcome technical drawbacks and then to target specific data structures. 
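The failure of PCC on non-monotone relationships is easy to reproduce. Below, a crude histogram mutual-information estimate stands in for MIC (the real MIC, available in the minerva/minepy packages, maximizes a normalized MI over many grid resolutions); both helper names are ours:

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def mi_binned(x, y, bins=10):
    """Histogram estimate of mutual information (in nats). Used here only
    as a crude stand-in for an information-based score like MIC."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A symmetric parabola y = (x - 0.5)^2 has PCC essentially 0 yet a perfect functional dependence, which an information-based score detects easily, mirroring the A-E example above.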
Within the most important alternatives, we list DerivativeDTW among others. To obtain a similarity measure DTWs from the distance DTWd we use the function DTWs = 1/(1 + DTWd), where DTWd is the normalized distance between two series, as computed in the R package dtw. Example: in what follows, a synthetic example on the time points {ti = i : 1 \u2264 i \u2264 100} compares a reference series r(i) with shifted and noisy copies, for P (PCC) and D (DTWs). The values of P (squares) and D (dots) are displayed for noise levels k = 0, 1, 2 versus the time shift s ranging from 0 to 40. The example shows that DTW can model the dependence between shifted copies of r(i), even for large time shift s and high noise level k. In particular, as a function of the time shift s, the value for DTW monotonically decreases from 1 to 0.959, 0.804, 0.670 for k = 0, 1, 2 respectively, and the ordering of the D curves by noise level holds consistently along the whole range 0 \u2264 s \u2264 40. On the other hand, PCC rapidly decreases to a very low correlation level even for small time shifts s > 5, with PCC < 0.3 for all values s > 7. Furthermore, the PCC value does not change monotonically on increasing noise: in fact, the curves P mutually intersecate. Finally, to assess the significance of the values D, we compare them against a null distribution of N random vectors \u03b7j on 100 time points with values randomly and uniformly sampled between two positive real values m < M. In particular, as parameters here we use N = 1000 for each noise level k = 0, 1, 2. The resulting mean values, i.e., 0.7435 (k = 0), 0.6577 (k = 1) and 0.5121 (k = 2), can be used as significance thresholds: for every s \u2264 40, the curve D lies above the corresponding significance threshold value. Combining DTWs with MIC thus takes care of both time shifts and non-linear functional relations. 
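A minimal dynamic-programming DTW and the induced similarity DTWs = 1/(1 + DTWd) can be sketched as follows; the R package dtw used in the study additionally offers step patterns, windowing and a principled path-length normalization, for which the sum of series lengths below is only a crude proxy:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW with the symmetric unit
    step pattern and absolute-difference local cost. A minimal sketch."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_similarity(a, b):
    """DTWs = 1 / (1 + DTWd), mapping the normalized distance to (0, 1]."""
    d = dtw_distance(a, b) / (len(a) + len(b))   # crude path-length proxy
    return 1.0 / (1.0 + d)
```

Because the diagonal-only path is one of the candidates, the DTW distance never exceeds the plain sum of pointwise differences, and it shrinks further for time-shifted copies of the same signal, which is exactly the behavior shown in the example above.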
We define DTW-MIC as a novel measure of similarity between two time series, obtained by considering the root mean square of MIC and DTWs. This characteristic makes DTW-MIC more effective not only than PCC, but also than MIC and DTW considered separately, as demonstrated by the following example.Example Consider a set g of three genes g1, g2 and g3 and the corresponding time series of expression levels on the time points 1 \u2264 i \u2264 100. We compute the values of M for M \u2208 {PCC, MIC, DTWs, DTW-MIC} and 1 \u2264 i < j \u2264 3, obtaining for each similarity measure M the corresponding coexpression network on the gene set g. All the three pairs of series have a very low correlation (PCC \u2264 0.23), but DTW-MIC is still able to capture the existing relation between them (DTW-MIC \u2265 0.5), even when these relations are of different nature. In fact, G2 and G3 have a low DTW similarity, but a high MIC correlation, while the opposite happens for G1 and G3. Finally, the remaining pair has moderate values for both MIC and DTW. In all three cases the resulting DTW-MIC value is above the significance threshold computed from the null model described in the previous section, which is 0.52, 0.29 and 0.39 for the three pairs, respectively.An effective method for simultaneously analysing the mutual relations among a group of interacting agents is provided by graph theory, consisting in (i) building a complex network that has the agents as nodes and (ii) inferring the (weight of the) edges connecting the nodes by applying a similarity measure between the signals of the agents. A typical example in omics science is represented by gene networks: the nodes are the genes and an edge between two genes is weighted by the similarity between their expression levels in a time window as read by microarray or sequencing technologies. Given N genes and their expressions g1, \u2026, gN, the resulting WGCNA network is described by the adjacency matrix A whose entries are defined below.
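The root-mean-square combination that defines DTW-MIC can be sketched as follows. This is a minimal sketch: MIC itself is assumed to come precomputed from an external estimator (e.g. the minerva package), and both inputs are assumed to lie in [0, 1].

```python
import math

def dtw_mic(mic, dtws):
    """Root mean square of the MIC and DTWs similarities, both in [0, 1]."""
    return math.sqrt((mic ** 2 + dtws ** 2) / 2.0)
```

By construction the combined value also lies in [0, 1] and equals 1 only when both components do.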
In case of a binary network, an edge is declared to exist only if the similarity value lies above a chosen threshold. These graphs are called coexpression networks, having as most popular model the Weighted Gene Co-expression Network Analysis (WGCNA), where M = |PCC| and \u03b2 is a positive power, usually tuned according to additional constraints, such as the scale-freeness of the resulting network; a common default is \u03b2 = 6. In the Results section we will use the WGCNA framework with the novel DTW-MIC as the M measure in place of M = |PCC|. Apart from WGCNA, we will use two more algorithms for comparison purposes to DTW-MIC. The first, the Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNE), is used in its time-delay variant as implemented in the TD-ARACNE package. The second algorithm, Transfer Entropy, is computed via the TransferEntropy R package within the WGCNA framework with \u03b2 = 6. Since Transfer Entropy is not symmetric, we follow the same strategy adopted by the authors of MIC: the weight of an unsigned interaction between the signals of two genes X, Y is the maximum of the intensity of the two directed interactions X \u2192 Y and Y \u2192 X. The embedding dimension and the number of neighbors used by the Kraskov estimator are set to 3 and 1, respectively, as shown in the documentation of the R package. In some cases, the considered dataset does not satisfy the assumptions of the Kraskov estimator, and Transfer Entropy cannot be computed directly; as suggested by the R package documentation, a small Gaussian noise then needs to be added to the data before computing Transfer Entropy.For the quantitative assessment of the difference between two networks sharing the same nodes a graph distance is required. Let N1, N2 be two undirected (possibly weighted) networks. The drawback of edit distances (such as the Hamming distance H) is their locality, as they focus only on the network parts that differ in terms of presence or absence of matching links.
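The WGCNA soft-thresholding step described above can be sketched as follows. This is a minimal pure-Python sketch under stated assumptions: `expr` holds one expression profile per gene, and the zeroed diagonal (no self-edges) is a convention chosen here, not necessarily the package default.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def wgcna_adjacency(expr, beta=6):
    """Soft-thresholded adjacency A[i][j] = |PCC(g_i, g_j)| ** beta."""
    n = len(expr)
    return [[abs(pearson(expr[i], expr[j])) ** beta if i != j else 0.0
             for j in range(n)] for i in range(n)]
```

Swapping `abs(pearson(...))` for another similarity measure M in [0, 1], such as DTW-MIC, keeps the rest of the pipeline unchanged.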
Among all metrics described in the literature, we choose the Hamming-Ipsen-Mikhailov (HIM) distance for its consistency and robustness. The HIM distance between two graphs P and Q is represented by a point of coordinates R = (H, IM), given by the Hamming and the Ipsen-Mikhailov components, and its HIM value is the length of the segment connecting R to the origin, divided by \u221a2 so that it ranges in [0, 1].In this section we apply the novel DTW-MIC similarity measure to two case studies in computational biology. Each dataset includes a network of n genes, together with the corresponding time series describing, for each gene, the dynamics of the expression level. Our strategy is the same in both applications and it includes two steps: first, the reconstruction of the network in the WGCNA framework in the classical approach via PCC and through DTW-MIC and the two additional benchmark measures TimeDelay ARACNE and Transfer Entropy, and then the evaluation of the HIM distance of the reconstructed networks from the true graph.In detail, in the first application a suite of three synthetic gene network/time-course datasets is generated, inspired by real biological systems. The second task has the same goal, but expression level measurements come from a publicly available microarray dataset from a human cohort and the true network is experimentally unknown; however, a reasonable approximation of the network has been inferred by GeneNet, an ad hoc procedure for the control of the local false discovery rate at a given threshold, an algorithm proven to be well performing in network reconstruction.The datasets for the synthetic example are generated by GeneNetWeaver (GNW), available at http://gnw.sourceforge.net/genenetweaver.html and adopted in the DREAM (http://www.the-dream-project.org/) Challenge. GNW generates realistic network structures of biologically plausible benchmarks by extracting modules from known gene networks of model organisms like yeast and E. coli, endowing them with realistic dynamics. Three synthetic networks are generated by GNW for the first application task, namely Yeast20, Ecoli20 and Ecoli50, where the name points to the original reference network and the subscript indicates the number of nodes. In detail, Yeast20 is a subnet of the yeast transcriptional regulatory network with 4441 nodes and 12873 edges, while Ecoli20 and Ecoli50 are subnets of the E. coli transcriptional regulatory network with 1502 nodes and 3587 edges, corresponding to the TF-gene interactions of RegulonDB release 6.7 of May 2010. For each network, 10 longitudinal datasets {d1, \u2026, d10} of expression levels are generated by a dynamic model mixing ordinary and stochastic differential equations, on 41 time points equally spaced between time 0 and time 1000 {t0 = 0, t1 = 25, \u2026, t40 = 1000}. In each series, the initial time point t0 = 0 corresponds to the wild-type steady-state and, from that moment onwards, a perturbation is applied until time point t20 = 500: at that point, the perturbation is removed, and the gene expression level goes back from the perturbed to the wild-type state. Moreover, noise is added to both the yeast and the E. coli data; in both cases, the selected model is the microarray noise model. GNW network and time-course data are publicly available on figshare, at the URL https://figshare.com/articles/Gene_Net_Weaver_Dataset/2279628.For each of Yeast20, Ecoli20 and Ecoli50, a network is inferred by PCC, DTW-MIC, Transfer Entropy and TimeDelay ARACNE from each of the time course datasets {d1, \u2026, d10}, and the obtained graph is compared via the HIM distance to the corresponding true network. As an example, the true Yeast20 graph is displayed beside the networks reconstructed from the dataset d1.
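The HIM combination of the Hamming and Ipsen-Mikhailov components used for this comparison can be sketched as follows. This is a hedged sketch: the Ipsen-Mikhailov spectral component is taken as a precomputed input, and the mean-absolute-difference Hamming normalization is a common convention, not necessarily the exact one of the original implementation.

```python
import math

def hamming_component(A, B):
    """Normalized Hamming distance: mean |edge weight difference| off-diagonal."""
    n = len(A)
    total = sum(abs(A[i][j] - B[i][j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

def him(A, B, im):
    """HIM: distance of (H, IM) from the origin, rescaled to lie in [0, 1]."""
    h = hamming_component(A, B)
    return math.sqrt(h ** 2 + im ** 2) / math.sqrt(2)
```

Identical graphs with a zero spectral component give HIM = 0; maximally different edge sets with a maximal spectral component give HIM = 1.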
In all experiments, the results for TimeDelay ARACNE are reported for N = 11 normalization bins and likelihood 1.2 as in the R package documentation; worse results (not reported here) were obtained for N = 5 and N = 22.In each of the three cases Yeast20, Ecoli20 and Ecoli50 the DTW-MIC networks are closer to the true graph than those inferred with PCC, Transfer Entropy and TimeDelay ARACNE, with also smaller standard deviation over the 10 experiments in almost all cases.For the Yeast20 dataset, four additional time course datasets were generated on the same timepoints, but with a dual gene knockout: the curves of gene YNL221C in d1, \u2026, d4 are reported in the corresponding figure. Again, the DTW-MIC inferred networks are closer to the true network than the other graphs, in all four experiments, with TimeDelay ARACNE as second best performing algorithm.Rangel and colleagues investigated the activation response of T-cells; the resulting datasets tcell.34 and tcell.10, log-transformed and quantile normalized, are publicly available in the R package longitudinal. This package was developed by Opgen-Rhein and Strimmer, who inferred the corresponding network by shrinkage estimation of the dynamical correlation. The expression curves are displayed for tcell.34 and for the first out of 10 replicates of tcell.10.In both datasets, the dimension of the longitudinal data for each replicate (10 time points) cannot guarantee robustness in the inference process, since both PCC and MIC are not reliable for datasets of too small sample size. Therefore the replicates are concatenated: the last time point of replicate i is followed by time point 0h of replicate i + 1, thus yielding for each gene a single time course on 340 time points for tcell.34 and on 100 time points for tcell.10.Eight instances of the T-cell network are inferred, by the three similarity measures DTW-MIC, PCC and Transfer Entropy and the reconstruction algorithm TimeDelay ARACNE, starting from the two datasets tcell.34 and tcell.10. For both tcell.34 and tcell.10 the HIM distance from the true graph is smaller for the networks inferred by the DTW-MIC.
The TimeDelay ARACNE measure reaches the same results, but only after a tuning phase optimizing the parameters N = 11, \u03b4 = 3 and likelihood 0.7. Note that, in all cases, the Hamming component of the distance is smaller, while the Ipsen-Mikhailov component is larger. Thus fewer links are changing between the inferred networks and the true graph, but these changing links induce a strongly different structure between the two nets. Indeed, in this experiment the choice of the similarity measure has a larger impact than the starting dataset, since the nets inferred using the same measure on different datasets are mutually closer than the nets inferred by different methods on the same time courses. Finally, for both datasets, without the power function (with \u03b2 = 6 as default) applied in the WGCNA for soft thresholding, the reconstructed networks are very different from the true graph, regardless of the starting dataset. For instance, the resulting HIM is about 0.47 for PCC and 0.66 for DTW-MIC, with 0.63 the average HIM value for a null model generated by computing the distance from the true graph of 1000 random networks with uniform edge weight distribution. This effect does not come unexpected, because of the tendency of MIC to overestimate the mutual information in a number of situations when no soft thresholding is applied.We introduced here DTW-MIC, a novel similarity measure for inferring coexpression networks from longitudinal data as an alternative to the absolute PCC used in the WGCNA approach. By combining Dynamic Time Warping and Maximal Information Coefficient, the DTW-MIC similarity can overcome the well known limitations of PCC when dealing with delayed signals and indirect interactions.
Experiments on biologically inspired synthetic data and gene expression time course data demonstrate higher precision on average in the network inference for DTW-MIC with respect to PCC, TimeDelay ARACNE and Transfer Entropy in different conditions, and without the need for a parameter tuning phase. Considering the MIC bias towards false positives and the availability of numerous similarity measures derived from DTW, a natural future development is the exploration of different alternatives to the DTW-MIC pair. For instance, it has been pointed out that Brownian distance correlation and related measures represent promising candidates."}
+{"text": "Microarray data is often utilized in inferring regulatory networks. Quantile normalization (QN) is a popular method to reduce array-to-array variation. We show that in the context of time series measurements QN may not be the best choice for this task, especially not if the inference is based on a continuous time ODE model. We propose an alternative normalization method that is better suited for network inference from time series data. When studying temporal changes in gene expression levels via microarrays, it is important to reduce any systematic biases, since one should only compare measurements on an equal footing. Normalization is the term used for the varying techniques applied to microarray data to achieve this reduction. Several approaches have been proposed and quite thoroughly investigated in the past. Given microarray time series data, one may try to unravel the structure of an underlying gene network by constructing a mathematical/computational model and testing whether this can reproduce the observations. If this is the case, such a model can be very useful for in silico experiments and can assist in designing experiments. Popular models include graphical models and systems of ordinary differential equations (ODEs). We consider expression data with M genes and N time points. For convenience we illustrate the procedure step by step: (1) First sort all data in the whole data matrix according to magnitude from low to high; (2) Partition this sorted dataset into bins of N numbers; (3) Sort each column in the original unsorted data matrix according to magnitude from low to high.
This results in a column-wise sorted matrix S with elements Sij. (4) Scale the elements in the ith row of matrix S using a scaling function f that fits them to the corresponding bin; (5) Return each scaled element in each column back to their original unsorted positions within the columns.The difference with QN is that instead of averaging over the rows of column-wise sorted data, these expression levels are scaled to fit the appropriate bins in the histogram of the distribution of expression levels over time. In this way the variation present in the original data is preserved. This is important especially when doing network inference based on a continuous ODE model that relates the rate of change of an expression level of one gene to the expression levels of other (regulating) genes. For a discrete Boolean-type network inference, QN is accurate enough, because it preserves the rank information. However, for an ODE-based inference, which is an essentially continuous-time approach, it is crucial to have more than merely the rank information.In this section we describe our new algorithm, which we refer to as modified histogram matching normalization (MHMN). We assume that the expression data have dimensions M \u00d7 N.As mentioned in the introduction, time series data are often analyzed using graphical models or systems of ODEs. So as not to lose our focus because of technical details, we use here the simplest representative of graphical models, namely correlation. In spite of its simplicity, it is often a very useful tool to give an impression of expression level data and the potential connections between the genes involved. We investigate, both with simulated and real microarray data, how the correlations deviate from the \u201cground truth\u201d after applying QN and MHMN normalization.In the subsequent analysis, we investigate how well QN and MHMN allow reconstruction of the parameters in linear ODEs. The latter are the simplest representatives of ODEs.
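For contrast with MHMN, standard quantile normalization can be sketched as follows. This is a minimal sketch assuming no ties; `cols` holds one array (time point) per column, each with the same number of genes.

```python
def quantile_normalize(cols):
    """Replace each column by the across-column means of sorted values,
    assigned back by rank (standard quantile normalization)."""
    m = len(cols[0])
    sorted_cols = [sorted(c) for c in cols]
    # Mean of the k-th smallest value across all columns.
    row_means = [sum(sc[k] for sc in sorted_cols) / len(cols) for k in range(m)]
    out = []
    for c in cols:
        ranks = sorted(range(m), key=lambda i: c[i])  # indices from low to high
        col = [0.0] * m
        for r, i in enumerate(ranks):
            col[i] = row_means[r]
        out.append(col)
    return out
```

The sketch makes the criticism above concrete: after QN every column contains exactly the same multiset of values, so only rank information survives within each array.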
Because the temporal dynamics of gene networks are often simulated using ODEs, this is a natural benchmark.We want to see how the correlations of time series measurements are affected by the two different normalization methods. To give an impression of the results, we take as example a dataset with 8 genes and 8 time points, stored in an 8 \u00d7 8 matrix.The real microarray data we employ in this section is the time series data generated by Sokolovi\u0107 et al. in their study on the effects of fasting on murine transcriptomics. It contains expression levels in Mus musculus at 5 time points after 0, 12, 24, 36, and 72 h of fasting.We observe that indeed when the data is very large, the errors in correlations have similar distributions for QN and MHMN: according to the Kolmogorov\u2013Smirnov test, the null hypothesis that the datasets have the same distribution cannot be rejected.In this section we simulate time series data using linear ODEs. First we generate random gene networks with a fixed number of genes and varying number of edges. Such a network corresponds to a binary adjacency matrix, where a matrix element (i = row number, j = column number) has value 1 if gene j is connected to gene i and zero otherwise. With this same adjacency matrix, we can set up a system of linear ODEs by replacing all ones with appropriate random numbers to model the strength of interaction/regulation from gene j to gene i. The resulting matrix M is used in the linear ODE system dx/dt = Mx, with x the vector containing the gene expression levels. Finally we choose a set of random initial values within a reasonable range. By integrating the ODE we obtain continuous time solutions, which are sampled at the measurement time points.The task is now to infer the entries in matrix M from these samples, first using the samples directly, second, using the samples after applying MHMN and QN normalization respectively. To simulate a plate effect, we put additive noise on the samples at a randomly chosen time point.We performed a series of simulation experiments with small systems of ODEs (4\u20135 nodes).
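The simulation set-up described above can be sketched as follows. This is a hedged sketch: the forward-Euler integration step, the sampling grid and the parameter ranges are illustrative assumptions, not the authors' exact choices.

```python
import random

def simulate(n_genes, n_edges, t_end=1.0, n_steps=1000, n_samples=8, seed=0):
    """Random linear gene network dx/dt = M x, integrated by forward Euler
    and sampled at n_samples equally spaced time points."""
    rng = random.Random(seed)
    # Random binary adjacency turned into random interaction strengths.
    M = [[0.0] * n_genes for _ in range(n_genes)]
    for _ in range(n_edges):
        i, j = rng.randrange(n_genes), rng.randrange(n_genes)
        M[i][j] = rng.uniform(-1.0, 1.0)  # strength of regulation j -> i
    x = [rng.uniform(0.5, 1.5) for _ in range(n_genes)]  # initial values
    dt = t_end / n_steps
    samples, every = [], n_steps // n_samples
    for step in range(n_steps):
        if step % every == 0 and len(samples) < n_samples:
            samples.append(list(x))
        dx = [sum(M[i][j] * x[j] for j in range(n_genes)) for i in range(n_genes)]
        x = [x[i] + dt * dx[i] for i in range(n_genes)]
    return M, samples
```

The returned samples play the role of the (noise-free) measured time series from which M is then to be inferred, before and after normalization.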
With or without time-point specific noise (plate effect), MHMN consistently resulted in better parameter inference than QN. In each experiment a random set of at most 25 parameters needs to be inferred. In both cases the Kolmogorov\u2013Smirnov test leads to rejecting the null hypothesis that the datasets have the same distribution.We have shown that the popular normalization method, quantile normalization (QN), while being effective in equalizing the data from array to array, was in our analysis inferior to MHMN for ODE-based time series analysis. It is also less suitable for correlation-based analysis, when the number of rows in the data is not large. Time series analysis is important in practice. For example, when we consider tissues of a developing organism, one expects to see temporally meaningful changes in expression levels. QN is less suitable for time series analysis because it loses the information on the differences of expression levels between arrays: QN produces arrays that consist of exactly the same numerical values in each column (time point)."}
+{"text": "To determine which changes in the host cell genome are crucial for cervical carcinogenesis, a longitudinal experiment was performed: four cell lines affected with either HPV16 or HPV18 were assayed at 8 sequential time points for gene expression (mRNA) and gene copy number (DNA) using high-resolution microarrays. Available methods for temporal differential expression analysis are not designed for integrative genomic studies.Here, we present a method that allows for the identification of differential gene expression associated with DNA copy number changes over time. The temporal variation in gene expression is described by a generalized linear mixed model employing low-rank thin-plate splines. Model parameters are estimated with an empirical Bayes procedure, which exploits integrated nested Laplace approximation for fast computation. Iteratively, posteriors of hyperparameters and model parameters are estimated. The empirical Bayes procedure shrinks multiple dispersion-related parameters. Shrinkage leads to more stable estimates of the model parameters, better control of false positives and improvement of reproducibility. In addition, to make estimates of the DNA copy number more stable, model parameters are also estimated in a multivariate way using triplets of features, imposing a spatial prior for the copy number effect.With the proposed method for analysis of time-course multilevel molecular data, more profound insight may be gained through the identification of temporal differential expression induced by DNA copy number abnormalities. In particular, in the analysis of an integrative oncogenomics study with a time-course set-up our method finds genes previously reported to be involved in cervical carcinogenesis. Furthermore, the proposed method yields improvements in sensitivity, specificity and reproducibility compared to existing methods.
Finally, the proposed method is able to handle count (RNAseq) data from time course experiments, as is shown on a real data set.The online version of this article (doi:10.1186/1471-2105-15-327) contains supplementary material, which is available to authorized users. Cervical cancer is caused by infection with high-risk types of the human papillomavirus (HPV) followed by additional changes in the host cell genome. Insight in genes that are consistently altered over time will improve our understanding of the molecular mechanisms driving cervical carcinogenesis. These genes may provide novel biomarkers for early detection of cervical cancer as well as potential therapeutic targets. High-throughput techniques, such as microarrays and next generation sequencing, are tools for fast high-resolution genome-wide molecular profiling. Applying these techniques to measure genes at consecutive moments in time at multiple molecular levels generates a description of the occurrence of molecular abnormalities during cervical carcinogenesis.A longitudinal in vitro system of four independent cell lines immortalized with either HPV16 or HPV18, previously shown to faithfully mimic cervical carcinogenesis at the (epi)genetic level, was used. Integrative analysis of the resulting multilevel time-course data requires dedicated methodology, which we present here.Available methods in current literature for time-course differential gene expression analysis can only be applied to a single molecular level. Since microarrays have become widely used for studying genome-wide gene expression, a range of statistical methods have been tailored for the identification of differentially expressed genes in microarray time-course experiments. Several of these methods are developed in an empirical Bayes framework.In this article we present a method for identification of temporal differential gene expression driven by genomic abnormalities, introducing several new concepts.
First, employing low-rank thin-plate splines and empirical Bayes shrinkage, identification of temporal differential gene expression is improved in terms of sensitivity, specificity and reproducibility. Second, including DNA copy number as a time-varying molecular covariate reduces residual variance and allows for the identification of genes which have variation in expression over time caused by genomic abnormalities. Genes with expression levels affected by DNA copy number aberrations have the capability to contribute to malignant cell growth; their identification is therefore of particular interest.A method for the identification of temporal variation in gene expression from an integrative genomics study with a time-course set-up is presented. The variation in gene expression over time is described by a generalized linear mixed model employing low-rank thin-plate splines. Hyperparameters of the model are estimated from the data with an empirical Bayes procedure. With parameter estimates at hand, we describe how relevant hypotheses may be tested. The section concludes with extensions of the model and practical considerations for its application.Consider a time-course microarray experiment where n cell lines are assayed repeatedly over time. Let Xi,j,t and Yi,j,t represent the DNA copy number and the expression level, respectively, of gene j of cell line i at time point t. Their realizations are denoted by xi,j,t and yi,j,t. For both random variables the index j runs from j=1 to p; this is due to the application of a matching procedure, which matches each expression feature to a DNA copy number feature.The expression levels of gene j are assumed to be normally distributed. The mean of this distribution is modelled as the sum of two terms f and h, which model the fixed and random effects, respectively, of the expression level of gene j.
Both f and h are specified next.The fixed effects, encompassing both cell line and DNA copy number effects, are modelled by a linear regression component f = \u03b1i,j + \u03b2j xi,j,t, where \u03b1i,j is the effect of cell line i in gene j and \u03b2j the DNA copy number effect on the expression levels of gene j.The random effect captures the dynamics in the expression levels over time and is modelled nonparametrically by low-rank thin-plate splines. These splines take the form h(t) = Zt\u03b3j, with Zt the spline basis evaluated at time t and \u03b3j = (\u03b3j,1, \u2026, \u03b3j,K)T the vector of coefficients of the spline, where the \u03bak, \u03ba1<\u03ba2< \u22ef<\u03baK, are fixed knots, equally distributed over the time interval of the experiment. The coefficients \u03b3j,k are assumed independent and all stemming from the same distribution with covariance matrix \u03a3\u03b3,j. This requires the definition of the matrix \u03a9 with entries indexed by the knot pairs k1,k2=1,\u2026,K. Furthermore, let \u03a9 = U\u03c9D\u03c9V\u03c9T be the singular value decomposition of \u03a9, with U\u03c9 and V\u03c9 containing its left and right singular vectors as columns and diagonal matrix D\u03c9 its singular values. It is then assumed that \u03a3\u03b3,j can be written in terms of this decomposition.Model (1) is thus recast as a semi-parametric mixed model. The random effects \u03b3j,k and the errors \u03b5i,j,t of this model are both multivariately normal with mean zero; the error covariance is nonzero only if i1=i2 and t1=t2, while \u03a9 results in a covariance matrix with higher covariances of random effects of neighboring knots (than those of more distant knots).If we denote with Yj,\u2217\u2217 the matrix of measurements for gene j, which are first ordered by cell lines i and within a cell line by time, the likelihood for gene j follows from the above distributional assumptions and is to be used in the estimation.The parameters of Model (2) are estimated by means of an empirical Bayes procedure. Empirical Bayes enables us to exploit the high-dimensionality of the data by \u2018borrowing information across genes\u2019, which yields more reproducible results. Information will be shared among genes via common hyperparameters of the priors of the model parameters.
Here this sharing is done only for parameters that will be subject to inference. Other parameters are considered confounders that need to be taken into account but are not of central interest. In principle, our estimation allows common hyperparameters for these confounders, but at a computational cost.For the fixed DNA copy number effect a Dirac-Gaussian mixture prior (a mixture of a point mass on zero and a Gaussian distribution with variance \u03c42) is used. The point mass accommodates the proportion of genes without a DNA copy number effect. As this proportion is likely to comprise the majority of genes, it shrinks the \u03b2j towards zero, also for those genes with a gene dosage effect.The random effect \u03b3j,k and error \u03b5i,j,t are endowed with normal priors, yielding a) more stable parameter estimates and b) protection against over-fitting. The reciprocals of the corresponding variance parameters, the precisions, receive Gamma hyperpriors; the parameters a and b of these hyperprior Gamma distributions are estimated from the data, starting from values of a and b resulting in a very flat prior on the precision. This corresponds to a very narrow prior on the variance of the random effect, that is, a flat spline. Iteratively, should the data give rise to it, the prior of the precision becomes more informative. As a result the variance moves away from zero, increasing the flexibility of the spline. Would one desire more shrinkage, our procedure allows the employment of a hyperprior composed of a Gamma and a point mass at zero.For the fixed cell line effect \u03b1i,j a Gaussian prior is assumed; no shrinkage is applied to \u03b1i,j as it is considered a confounder, for which an unbiased estimate is preferred.We temporarily assume that the cell lines are merely biological replicates; an inferential statement with respect to the cell line effect will only be made in the Section \u2018Head-and-neck cancer\u2019. Hence, for the moment, the prior of the cell line effect \u03b1i,j is gene-specific: the hyperparameter of this prior is different for each gene.
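Collecting the assumptions above, the prior hierarchy may be summarized as follows. This is a hedged reconstruction: p0 denotes the mixture weight of the point mass, and a, b the Gamma hyperparameters discussed above; the symbols are otherwise as in the text.

```latex
\pi(\beta_j) = p_0\,\delta_0(\beta_j) + (1 - p_0)\,\mathcal{N}(0, \tau^2), \qquad
\gamma_{j,k} \mid \sigma_{\gamma,j}^2 \sim \mathcal{N}(0, \sigma_{\gamma,j}^2), \qquad
\sigma_{\gamma,j}^{-2} \sim \operatorname{Gamma}(a, b), \qquad
\alpha_{i,j} \sim \mathcal{N}\!\left(\mu_{\alpha_j}, \sigma_{\alpha_j}^2\right).
```

Only the first three ingredients share hyperparameters across genes; the cell line effect keeps a gene-specific hyperparameter, consistent with its role as a confounder.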
Hence, for the moment the prior of the cell line effect Given the hyperparameters shared by all genes, the model parameters of the individual genes are estimated by the mean of their posterior distributions. These are obtained by means of integrated nested Laplace approximations (INLA) . INLA yiIt remains to choose the hyperparameters. An informed choice of the hyperparameters is made through application of the empirical Bayes procedure of . That isThe conventional empirical Bayesian estimate maximizes the product of the marginal likelihoods:\u03c0(\u00b7) denotes the prior of its argument (hyperparameters of the priors are suppressed for ease of notation). The hyperparameters of the priors of the \u03b1j, \u03b2j, and where edure of yields afunction . Hereto edure of is compu\u03b2j is shared by all genes. Hence, the choice of the hyperparameters affects the posterior distribution of all \u03b2j. In particular, if the mass of \u03c0(\u03b2j) is more concentrated around zero, the posterior has more probability mass close to zero. Only if the data contains enough evidence (in favor of a non-zero \u03b2j) to outweigh the prior, the posterior will center around a non-zero value. The prior of \u03b2j puts (via the spike at zero) more mass at zero. Moreover, the precision of the other mixture component is estimated from all genes. Under the assumption that a (vast) majority of genes do not exhibit an effect, the precision is under-estimated for the minority. This, together with the spike, yields a conservative prior leading to shrunken estimates.From expression (3) it becomes clear how the parameter estimates are shrunken. The prior of (say) t-distribution with degrees of freedom equal to the number of Gaussians [t-distribution with many degrees approaches a Gaussian. As there are usually many different genomic aberration patterns, we exploit this approximation and assume a common variance in our Dirac-Gaussian mixture prior. 
In a simulation study does DNA copy number drive gene expression, and ii) is there differential expression over time? The former question can be answered by testing whether the DNA copy number effect \u03b2j differs significantly from zero, evaluating the null hypothesis H0:\u03b2j=0 versus its alternative H0:\u03b2j\u22600. The second question is addressed by testing \u03b3j,k\u22600, resulting in a non-constant spline. Additionally, one may ask whether there is a difference between the cell lines, but this question is not considered here .From a biological point two questions are of main interest: Both hypotheses are evaluated by means of the likelihood ratio statistic. For the first question on DNA copy number the statistic is:\u03b1j under the alternative hypothesis HA. P-values are then obtained from the asymptotic (chi-square) distribution of the likelihood ratio statistics. The degrees of freedom of this chi-square distribution are equal to the difference in the number of parameters of the models compared. Note that this test is likely to be somewhat conservative, because where e.g. This hybrid testing procedure mimics Limma , 23. Lim\u03b2j) of gene j may be correlated with that of neighboring genes (e.g. \u03b2j\u22121 and \u03b2j+1). This phenomenon is not accommodated by the aforementioned prior of the \u03b2j.DNA copy number aberrations are often not confined to a single gene but span a large region of the genome that harbors multiple genes. Consequently, neighboring genes may share the same genomic aberration signature. At the transcriptomic level this results in co-expression of these genes . Put dif\u03b2j follows a first-order autoregressive (AR(1)) process along the genome: \u03b2j=\u03c1\u03b2j\u22121+\u03b5j. 
This extension of our procedure incorporates the possible spatial correlation among the DNA copy number effects. The relevant parameter \u03c1 of this process is simply estimated by regression of the \u03b2j on the \u03b2j\u22121.Having obtained an estimate of the spatial correlation among the \u03b2j, it remains to refit the model. However, the assumption of an AR(1) process on the gene dosage effect complicates the refitting, as this should now be done simultaneously for all p genes. This is computationally too demanding. To approximate the joint fit the model is refitted per triplet of neighboring genes. For each triplet a trivariate normal prior is assumed, where the correlation structure of the covariance matrix follows an AR(1) process. Of the re-estimated vector of \u03b2js only the middle one is conserved. More details are provided in Additional file.Besides doing more justice to the underlying biology, the \u2018spatial prior\u2019 above reduces the variation of the DNA copy number effect. This is achieved as the assumption of an AR(1) process effectively \u2018averages\u2019 the DNA copy number effect over neighbouring genes. As such, it is also a way of borrowing information across genes.A straightforward extension of Model (2) is to allow for a different spline in each cell line. This reflects the biological plausibility of different dynamical behaviour in different cell lines. In particular, we may then test for differences in the behavior over time between the cell lines. When rewriting Model (2) to a vector notation, the incorporation of different splines per cell line amounts to the replacement of the design matrix Z and the coefficients \u03b3j accordingly. Furthermore, the design matrix of the spline is orthogonalized to the DNA copy number data. This orthogonalization does not affect the overall fit, but ensures that the spline captures only variation in expression levels that cannot be explained by DNA copy number changes.
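The regression estimate of \u03c1 mentioned above can be sketched in one line. This is a minimal sketch: the ordinary least-squares regression without intercept is an assumption consistent with the zero-mean AR(1) formulation.

```python
def estimate_rho(beta):
    """OLS slope (no intercept) of beta_j on beta_{j-1} along the genome,
    i.e. the AR(1) parameter in beta_j = rho * beta_{j-1} + eps_j."""
    num = sum(beta[j] * beta[j - 1] for j in range(1, len(beta)))
    den = sum(beta[j - 1] ** 2 for j in range(1, len(beta)))
    return num / den
```

On a noise-free AR(1) sequence the estimate recovers the generating parameter exactly.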
The effect of the orthogonalization is illustrated in the Section \u2018HPV-induced transformation\u2019. Within mixed Model (2) DNA copy number and time (via the spline) compete to explain the gene expression. Due to its flexibility the spline may consume variation in expression levels actually due to DNA copy number changes. Moreover, with DNA copy number changes being a more clearly delineated cause (than time in the form of a nonparametric spline), we prefer to attribute variation in expression levels to the genomic aberration. To let DNA copy number changes prevail over the spline, the spline design matrix is orthogonalized to the DNA copy number data. To determine the optimal number of knots, we employ the deviance information criterion (DIC). The DIC is evaluated for each number of knots k and applied to all genes in the analysis. More details are provided in Additional file . The proposed method is demonstrated on data of an experiment on HPV-induced transformation. The experiment intends to faithfully mimic cervical cancer development employing an HPV-immortalized in vitro cell line model. Hereto two cell lines are affected with HPV16 and two with HPV18. We now turn to the identification of genes with differential expression over time. To this end only the expression data are used, ignoring the effect of genomic aberrations in Model (2). Gene j exhibits temporal gene expression if its spline deviates significantly from a constant; some genes have p-values close to but just short of the significance threshold (Table). The common and different spline models employed for the identification of temporal differential expression are now extended to include DNA copy number. As noted in the Section \u2018Practical considerations\u2019 the flexibility of the spline may consume part of the DNA copy number effect. The proposed remedy limits (via projection) the spline basis to the space orthogonal to the space spanned by the DNA copy number information.
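The projection described above can be sketched for a single spline-basis column against a single copy number covariate (illustrative only; in the procedure the projection is applied to the full design matrix):

```python
def orthogonalize(z_col, x):
    """Residual of one spline-basis column after projecting out covariate x.

    Returns z - (<x, z>/<x, x>) * x, so the result is orthogonal to x and
    the spline can only capture variation not explained by x.
    """
    coef = sum(a * b for a, b in zip(x, z_col)) / sum(a * a for a in x)
    return [b - coef * a for a, b in zip(x, z_col)]

x = [1.0, 2.0, 3.0, 4.0]    # DNA copy number over the samples (toy values)
z = [0.5, -1.0, 2.0, 0.0]   # one spline basis column (toy values)
z_perp = orthogonalize(z, x)
# The orthogonalized column is uncorrelated with the copy number covariate.
inner = sum(a * b for a, b in zip(x, z_perp))
assert abs(inner) < 1e-9
```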
To assess the potential gain of the orthogonalization, each analysis, with common and different spline(s), is done with and without orthogonalization. Prior distributions are as before. The number of knots is determined as done previously. For each analysis hyperparameters of DNA copy number and spline(s) are re-estimated by the empirical Bayes procedure. We first discuss the number of features with differential temporal expression. Finally, on the full data set (not shown) the analysis using the orthogonalized spline basis gives a modest improvement in the number of genes significantly affected by DNA copy number. Turning to the effect of DNA copy number, we first analyzed the data with Model (2) containing only the fixed cell line and DNA copy number effects. This analysis identified 568 features with a significant gene dosage effect on expression. Inclusion of the time effect in Model (2) reduces the number of features with a significant gene dosage effect on expression (with a common spline). Indeed, CADM1 is down-regulated in all four cell lines over time. SLC25A36 is also identified as a gene with a significant DNA copy number effect. For the hyperprior of the random spline effect \u03b3k,j, in the Section \u2018Estimation\u2019 we suggest to use the Gamma distribution. However, we have also implemented a mixture of a point mass at zero and a Gamma distribution, which one expects to lead to more shrinkage. Model (2) using different splines and a standard design matrix is refitted now with this mixture prior for the random spline effect. Application of our empirical Bayes procedure with the Gamma prior identified 421 features, while the Dirac-Gamma mixture prior selected 396 features. The latter 396 are all included in the former 421 features. The slight reduction in the number of selected features is of course due to the inclusion of the point mass at zero.
The fit of both resulting models is almost identical for most features, but some features have a slightly less flexible spline (Figure). Finally, we want to assess the sensitivity of the results with respect to the choice of the prior distribution. For illustration purposes we focus on the hyperprior of the random effect. We next turn to RNA-seq data. The cell lines were cultured in vitro and sampled at six time points. Transcript levels of the 2\u00d76 samples were sequenced. Data were mapped to the human genome and raw count data (reads) per gene transcript were used and not summarized per gene. Their normalization comprises rescaling by the trimmed mean of each sample\u2019s library. Model (2) cannot be directly applied to the sequencing data, as the normal distribution is often a poor approximation for the distribution of counts. The normality assumption is replaced by the (zero-inflated) negative binomial. The mean \u03bci,j,t of the counts is (after transformation by the inverse of the link function) still modeled by the right-hand side of Model (2), with the assumptions on model parameters in place. Hyperparameters are then estimated via the empirical Bayes procedure previously described. Here \u03b3j is the main parameter of interest and the analysis compares the model with and without time effect. The optimal number of knots is determined using the procedure described in the Section \u2018Practical considerations\u2019. Prior distributions for cell line and time effect are as in the Section \u2018Estimation\u2019. Hyperparameters are estimated for each analysis separately, but only the variance of the random time effect is shrunken via the empirical Bayes procedure. Counts are fitted using the model with same and different splines, as illustrated for one RNA-seq tag in Figure . The analysis of the head-and-neck cancer data concentrates on two main questions: identification of tags with temporal variation and those different between the two conditions.
To answer this, Model (2) is used without the DNA copy number term (which is not included in the experiment). Common and different spline models are employed as in the Section \u2018HPV-induced transformation\u2019. The number of tags with a significant (at the 5% FDR level) temporal variation identified equals 8416 (10951) for the common (different) spline model. As observed in the analysis of the HPV-induced transformation data, the use of a different spline leads to many more findings. Again, this is explained by the improved fit due to a more flexible model. In particular, all the tags identified with the same spline model are also found by its flexible counterpart. The proposed method is compared to three well-known alternatives for significance analysis of time-course microarray data: EDGE, timecourse and BATS. These competitors have not been designed for the analysis of integrative genomics studies with a time-course set-up. Hence, our method is applicable to a wider class of studies. Besides this qualitative argument, we wish to have a quantitative comparison of the methods. To this end the comparison is restricted to time-course genomics studies involving only a single molecular level. Moreover, to avoid bias of any of the methods by a particular model choice, the comparison is done on two real data sets. The first is the HPV-induced transformation data from the Section \u2018HPV-induced transformation\u2019, limited to the gene expression levels only. The other data set is included in the EDGE-package. We now briefly describe the other methods used in the comparison: EDGE, BATS and timecourse. For a more detailed description please refer to the corresponding references. EDGE captures the temporal expression profile of gene j by means of a p-dimensional B-spline basis. Temporal differential expression is evaluated by an F-statistic measuring the goodness-of-fit of the null hypothesis (a flat or constant spline) in comparison to the alternative hypothesis.
In the comparison EDGE is used with default parameter settings. Method timecourse uses T2 or MB-statistics. For the comparison we used the timecourse R-package with standard settings and the Hotelling T2-statistic, due to the balancedness of the study design. The final method included in the comparison is BATS. Sensitivity and specificity of the four aforementioned methods are compared in both data sets. Hereto knowledge of the genes with true temporal differential expression is needed. In its absence we constructed a consensus set which fulfills this role. That consensus set comprises the features identified by all four methods. Sensitivity is then the proportion of features with temporal differential expression correctly identified as such. On the other hand, specificity is the proportion of features which are correctly identified as features without differential expression over time. Sensitivity and specificity of each method are assessed for various numbers of significant features (Figure). Furthermore, we compared the reproducibility of the five methods (now including the reference method). Hereto each data set was divided into two equally sized groups. We assessed how well the results of the two splits coincided. This boils down to the application of each method on both splits. The overlap in significant features for each method was determined (Figure). We presented a method for the analysis of integrative (onco)genomics studies with a time-course experimental design. The method identifies temporal differential gene expression while accounting for time-varying molecular covariates like DNA copy number changes. Simultaneously, the method assesses which of these covariates significantly contributes to temporal differential gene expression. The method employs a mixed model describing the temporal changes in gene expression in terms of DNA copy number and a (low-rank thin-plate) spline which captures additional temporal variation in the transcript levels.
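The consensus-based evaluation of sensitivity and specificity can be sketched as follows (a schematic with made-up feature names, not the authors' code; the consensus set stands in for the unknown truth):

```python
def sensitivity_specificity(selected, consensus, all_features):
    """Sensitivity/specificity of one method against a consensus 'truth' set.

    consensus: features called significant by all compared methods,
    used as a proxy for the true positives.
    """
    selected, consensus = set(selected), set(consensus)
    negatives = set(all_features) - consensus
    tp = len(selected & consensus)          # consensus features the method found
    tn = len(negatives - selected)          # non-consensus features it left out
    sens = tp / len(consensus) if consensus else float("nan")
    spec = tn / len(negatives) if negatives else float("nan")
    return sens, spec

all_feats = [f"g{i}" for i in range(10)]
consensus = {"g0", "g1", "g2", "g3"}        # called by all four methods
selected = {"g0", "g1", "g4"}               # one method's significant features
sens, spec = sensitivity_specificity(selected, consensus, all_feats)
assert sens == 0.5 and abs(spec - 5 / 6) < 1e-12
```

Evaluating this over a range of significance cutoffs (i.e. varying `selected`) traces out the curves compared in the Figure.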
The method estimates the parameters of this model by means of an empirical Bayes procedure that \u2018borrows information\u2019 across genes. The empirical Bayes procedure shrinks the parameter estimates (towards zero), thus accounting for multiplicity. This shrinkage enhances the reproducibility of the results. In a direct comparison with other methods for the identification of temporal differential expression, the proposed method proved to be a strong competitor, particularly in terms of reproducibility. In addition, existing methods cannot incorporate additional genomics data. Furthermore, our method is straightforwardly applicable to count data resulting from RNA-seq experiments. Application to an integrative oncogenomics study, involving HPV-transformed cell lines, confirmed the genes CADM1 and SLC25A36, known to be implicated in the development of cervical cancer. The presented methodology also identified other, novel and potentially interesting genes. These are currently under investigation and will be reported in a follow-up medical paper. Preliminary pathway analysis already showed that genes identified from this dataset by tigaR but not by the other methods were enriched for genes involved in cellular transformation. Our ongoing research concentrates on two extensions of the proposed method. First, we are considering the inclusion of microRNA data. MicroRNAs affect expression levels post-transcriptionally. However, which microRNA targets which mRNA is only partially known. Hence, integration of temporal microRNA expression data also needs to address the problem of selecting the microRNA targets. With the number of microRNAs known and typically measured in time-course integrative genomics studies being larger than the number of samples (# time points \u00d7 # cell lines), this adds an additional layer of complexity to the problem. The second extension comprises the integration of pathway information.
This requires a multivariate formulation of the model for temporal changes in gene expression. Next to DNA copy number changes, the changes in transcript levels of other genes in the pathway may now need to be included. A key challenge here is to \u2018borrow information\u2019 within and between pathways. The methodology described in this paper is implemented in the R-package tigaR, available upon request from the first author (v.miok@vumc.nl). Additional file 1: Supplementary material. Supplementary document containing details about the simulation study setup, additional figures and tables. (PDF 1013 KB)"}
+{"text": "A comprehensive exploration of common and specific plant responses to biotrophs and necrotrophs is necessary for a better understanding of plant immunity. Here, we compared the Arabidopsis defense responses evoked by the biotrophic fungus Golovinomyces orontii and the necrotrophic fungus Botrytis cinerea through integrative network analysis. Two time-course transcriptional datasets were integrated with an Arabidopsis protein-protein interaction (PPI) network to construct a G. orontii conditional PPI sub-network (gCPIN) and a B. cinerea conditional PPI sub-network (bCPIN). We found that hubs in gCPIN and bCPIN played important roles in disease resistance. Hubs in bCPIN evolved faster than hubs in gCPIN, indicating the different selection pressures imposed on plants by different pathogens. By analyzing the common network from gCPIN and bCPIN, we identified two network components in which the genes were heavily involved in defense and development, respectively. The co-expression relationships between interacting proteins connecting the two components were different under G. orontii and B. cinerea infection conditions. Closer inspection revealed that auxin-related genes were overrepresented in the interactions connecting these two components, suggesting a critical role of auxin signaling in regulating the different co-expression relationships. Our work may provide new insights into plant defense responses against pathogens with different lifestyles. Golovinomyces orontii has an obligate biotrophic lifestyle, and it has been shown to colonize Arabidopsis under controlled laboratory conditions. Botrytis cinerea is recognized as a typical necrotrophic fungus that causes grey mould disease. B. cinerea affects over 200 crop species, resulting in serious economic losses. The life cycles of G. orontii and B.
cinerea on Arabidopsis follow a defined infection progression, including conidium germination, appressorium formation, penetration of the host surface and conidiophore formation. The life cycles of G. orontii and B. cinerea require approximately 5 and 3\u20134 days, respectively. Plant pathogens, including viruses, bacteria, fungi, oomycetes and nematodes, can cause severe economic and ecological damage. According to their lifestyles, plant pathogens can be generally divided into two major categories, biotrophs and necrotrophs. Biotrophs feed on living host cells. Thus, they keep host cells alive during their invasion to complete their life cycles. Powdery mildew is a fungal disease that affects a wide range of plant species, including many economically important crops. An ET-insensitive mutant (ein2-1) and a JA-insensitive mutant (coi1-1) have been reported to be highly susceptible to B. cinerea infection, which demonstrates the important roles of JA and ET in resisting B. cinerea. Arabidopsis mutants with repressed auxin signaling show increased resistance to the biotrophic pathogen Pseudomonas syringae but increased susceptibility to the necrotrophic pathogen B. cinerea. A multitude of studies have investigated plant defense responses against pathogens with different lifestyles, making great contributions to our understanding of plant immunity. For example, a meta-analysis of 386 Arabidopsis microarray samples was conducted to detect genes and co-expression modules common to drought and bacterial stress responses. Using transcriptome data generated after P. syringae infection or attack by the insect Brevicoryne brassicae, Barah et al. explored the general and attacker-specific defense response genes in Arabidopsis. Others employed the concept of biological networks to better interpret immune-related transcriptomic data, constructing an Arabidopsis immune co-expression network from large-scale transcriptional data and identifying 156 distinct immune-related functional modules.
Recently, we also employed an advanced machine learning method to integrate the Arabidopsis gene network with a series of transcriptional data. High-throughput experiments have resulted in the increasing availability of omics data . The availability of these data for plant stress responses provides a good opportunity to employ computational systems biology approaches to advance our understanding of plant stress responses. Although many experimental studies have been carried out to decipher general plant immune responses, a systematic analysis that integrates different omics data has not been used to compare plant defense responses to pathogens with different lifestyles. Recently, microarray experiments measuring plant immune responses to the biotrophic pathogen G. orontii and the necrotrophic pathogen B. cinerea have been conducted. Here, by integrating transcriptional data and the Arabidopsis PPI network, we constructed the G. orontii conditional PPI sub-network (gCPIN) and the B. cinerea conditional PPI sub-network (bCPIN) to characterize the plant defense responses against G. orontii and B. cinerea. First, we assessed the biological significance of the two conditional PPI sub-networks and focused on the analysis of hub proteins in plant immunity. Moreover, by comparing the two conditional PPI sub-networks, we were able to reveal two network components that were involved in plant development and defense, respectively. We attempted to explain the distinct expression correlations between interacting proteins connecting the two network components during the plant defense response to pathogens with different lifestyles. Finally, we developed a website for the scientific community to interactively explore the networks constructed in our work. Arabidopsis PPI data were collected from three publicly available molecular interaction databases, including TAIR. Time-course transcriptional data, which measured the transcriptional responses of Arabidopsis to two different pathogens, were used in our work.
The first (GEO accession number: GSE5686) was generated by the AtGenExpress project, which detected Arabidopsis defense responses at 8 different time points during infection by a biotrophic fungus. The second (GEO accession number: GSE29642) was produced by Windram et al. and contained 24 time points after inoculation with a necrotrophic fungus. We removed proteins without expression values in either transcriptional dataset from the primary PPI network. The retained network (AraPPINet), covering 5,598 proteins and 13,328 interactions, was used for the further construction of conditional PPI sub-networks. As an important strategy to integrate transcriptome data and a PPI network, gene expression correlations between interacting proteins have been widely used to identify conditional sub-networks. For G. orontii or B. cinerea infection, the corresponding transcriptional data were integrated into AraPPINet. The Pearson correlation coefficient (PCC) was employed to measure the gene expression correlation between two interacting proteins. Transcriptional data from infected tissues were used to calculate the PCC value for each interaction. The biological significance of a PCC value depends on the corresponding transcriptional data and the choice of normalization method. To assess the biological significance of gCPIN and bCPIN, a series of analyses were carried out, including topological analysis, modularity analysis and functional enrichment analysis. Several global network topological parameters that reflect the general arrangement of nodes or interactions within gCPIN and bCPIN are displayed in the Table. Significant modularity was observed for gCPIN (p-value\u2009=\u20092.49\u2009\u00d7\u200910\u22124) and bCPIN compared to AraPPINet. Taken together, these results indicated the biological significance of gCPIN and bCPIN.
In the subsequent analysis, we focused on the comparative analysis between gCPIN and bCPIN for the investigation of plant defense responses to G. orontii and B. cinerea. In a PPI network, hubs are generally defined as proteins (nodes) with a significantly higher degree than other nodes. Plant hormone-related genes and TFs were overrepresented among the hubs (p-value\u2009=\u20095.98\u2009\u00d7\u200910\u22126 and 7.18\u2009\u00d7\u200910\u22125, respectively). Plant hormones and TFs have been reported to play vital roles in plant immunity; the corresponding p-values are listed in the supplementary material. Hubs also included PATHOGENESIS-RELATED genes that confer resistance to pathogens. An Arabidopsis RING E3 ligase, HUB1 (for HISTONE MONOUBIQUITINATION1), was shown to be essential for resistance to B. cinerea, while hub1 mutant plants exhibited no effect on resistance to P. syringae. HUB1 may regulate plant immunity to B. cinerea through wide interactions with its partners in bCPIN. To further decipher the molecular mechanism of HUB1 in regulating plant immunity to necrotrophs, these partners can serve as important candidates for experimental verification. More details regarding hub degree distribution can be interactively explored through our website (http://systbio.cau.edu.cn/BN/index.php). We divided the hubs in gCPIN and bCPIN into three groups. The gCPIN-specific hubs were hubs only in gCPIN, the bCPIN-specific hubs were hubs only in bCPIN, and common hubs were hubs shared by bCPIN and gCPIN. We obtained 182 gCPIN-specific hubs, 171 bCPIN-specific hubs and 236 common hubs. To estimate evolutionary rates, we compared orthologs between Arabidopsis and Carica papaya. We found that the average dN/dS values of hubs in both gCPIN and bCPIN were much smaller than 1, which indicated that the hubs had experienced strong purifying selection. To compare the plant immune responses induced by G. orontii and B. cinerea, we selected common edges from gCPIN and bCPIN and constructed a common response network covering 1,702 nodes and 1,619 edges, which was consistent with the biological significance of the common response network.
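The three-way hub grouping amounts to simple set operations; the sketch below also includes a simplified degree-based hub caller (calling a fixed top fraction of nodes is an assumption of this illustration, whereas the paper uses a significance criterion; all gene names are toy values):

```python
from collections import Counter

def find_hubs(edges, top_fraction=0.1):
    """Call the highest-degree nodes hubs.

    A fixed top fraction is a simplifying assumption for illustration.
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    ranked = [n for n, _ in degree.most_common()]
    k = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:k])

def group_hubs(hubs_g, hubs_b):
    """Split hubs into gCPIN-specific, bCPIN-specific and common groups."""
    hubs_g, hubs_b = set(hubs_g), set(hubs_b)
    return hubs_g - hubs_b, hubs_b - hubs_g, hubs_g & hubs_b

g_only, b_only, common = group_hubs({"AT1G01", "AT2G02", "AT3G03"},
                                    {"AT3G03", "AT4G04"})
assert g_only == {"AT1G01", "AT2G02"}
assert b_only == {"AT4G04"}
assert common == {"AT3G03"}
```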
Developmental processes were also prominently enriched in the common response network, and the corrected p-value for this term was 9.47\u2009\u00d7\u200910\u221246, which was consistent with the common knowledge that plant growth and development are influenced during the plant immune response. GO annotation of the network components was then examined. The largest component was significantly enriched with defense-related processes, with a corrected p-value of 5.47\u2009\u00d7\u200910\u221213. The second largest component, consisting of 258 nodes and 330 edges, was significantly enriched with \u201cdevelopmental process\u201d. According to their biological functions, we named the two components DefRC (Defense-Related Component) and DevRC (Development-Related Component). To investigate whether the identification of components was related to the choice of PCC thresholds, we reconstructed and analyzed the common response network using a more stringent cutoff. We then examined the expression correlations of interactions under G. orontii or B. cinerea infection conditions. We found that interactions connecting sizable network components (containing at least 10 proteins) often had different expression correlations under different conditions. Thresholds of \u22120.5 and \u22120.27 were then selected to define negatively correlated interactions under G. orontii and B. cinerea infection conditions, respectively. Under G. orontii infection condition, 267 interactions connecting DevRC and DefRC were negatively correlated. Differential expression was determined between the spore-infected (treatment) and mock-treated (control) plants at any time point. All genes in DefRC and DevRC were differentially expressed following B. cinerea infection. Similarly, after G. orontii infection, most genes in DefRC (88.7%) and DevRC (87.2%) were differentially expressed. The extensive changes in gene expression in DefRC and DevRC further showed the activation of plant defenses and the impact on plant development following G. orontii and B.
cinerea infection. In addition to the distinct expression correlation between DevRC and DefRC, the expression patterns of genes in DevRC and DefRC were also different. We also compared the number of differentially expressed genes in DevRC and DefRC at different stages of infection. According to the infection cycle, time-course transcriptional data were divided into three stages. Following G. orontii infection, many genes in DefRC were up-regulated at all three stages, which revealed the activation of plant immunity. In contrast, most DevRC genes were suppressed, and almost all genes (95.3%) were suppressed at the late stage. The interface connecting two components is the place where different biological processes coordinate with each other. Auxin-related genes were overrepresented in the interactions connecting the two components (p-value\u2009=\u20093.20\u2009\u00d7\u200910\u22129). Auxin is known to regulate many aspects of plant growth and development. Its role in plant-pathogen interaction has also been widely reported. Several auxin-related genes, including ARF1 (for AUXIN RESPONSE FACTOR 1), ARF2 (for AUXIN RESPONSE FACTOR 2) and AXR6 (for AUXIN RESISTANT 6), have been implicated in these interactions. Interactions connecting the two components were negatively correlated in response to G. orontii infection but positively correlated in response to B. cinerea infection. For the convenience of the research community, we have created a user-friendly website to interactively explore and visualize the networks constructed in our study. The website was implemented using Sigmajs Exporter, a plugin in Gephi (https://marketplace.gephi.org/plugin/sigmajs-exporter/). In addition to the interactive network exploration, we also converted the scatterplot of hubs in gCPIN and bCPIN into an interactive web application based on Shiny (http://shiny.rstudio.com/). A new gCPIN constructed from an independent dataset overlapped significantly with the original gCPIN. This significant overlap indicated that the construction of gCPIN can capture the core PPIs related to the infection of G. orontii, but we also observed a large number of different PPIs between the two gCPINs.
Our results are based on currently available interactome and transcriptional data and must be interpreted with caution. One major limitation of the comparative analysis is the bias of the expression data used to construct gCPIN and bCPIN. For example, the transcriptional data GSE5686 were collected from only leaves 7\u201310, while the transcriptional data GSE29642 were collected from leaf 7. We conducted a computational experiment to investigate how the conditional PPI sub-network can be affected when different gene expression data measuring the same pathogen infection were used. For this purpose, we constructed a new gCPIN (gCPIN-GSE13739) using another dataset, GSE13739, which measures the same pathogen infection. Thus, it is possible that the results of the comparative analysis would also be affected by the use of different expression data. To obtain more reliable results, therefore, using expression data under identical laboratory conditions and treatments would be a better choice. The available Arabidopsis interactome is still far from complete. Some genes without interaction partners in the current coverage are not included in this work, but they might play important functional roles in plant immunity. On the other hand, less abundant or tissue-specific transcripts may be missed by retaining only PPIs with high expression correlation. Through analysing the expression levels of genes from AraPPINet and the two conditional PPI sub-networks, we found that the expression levels of genes from gCPIN or bCPIN were significantly higher than the expression levels of genes from AraPPINet. Moreover, we also downloaded 746 tissue-specific Arabidopsis genes from the literature; tissue-specific genes tended to be excluded from gCPIN (p-value\u2009=\u20093.55\u2009\u00d7\u200910\u22123), and 60 genes were excluded from bCPIN. The above analyses showed that less abundant or tissue-specific genes tended to be filtered out by our method.
Undoubtedly, the Arabidopsis interactome will become more complete, and more time-course transcriptional data measuring Arabidopsis gene expression under pathogen infection will be generated in the near future. The availability of these data will allow scientists to design more advanced workflows, perform more comprehensive analyses and obtain more reliable results. Another limitation of the current work is that we only considered highly correlated PPIs in the construction of gCPIN and bCPIN. In summary, we compared plant immune responses against the biotrophic pathogen G. orontii and the necrotrophic pathogen B. cinerea by integrating transcriptional data and Arabidopsis PPI data. First, we found that hubs in gCPIN and bCPIN played important functional roles in plant immunity. Plant defense-related genes, plant hormone-related genes and TFs were overrepresented in hubs; the distinct roles of gCPIN/bCPIN-specific hubs in plant defense responses to biotrophs and necrotrophs should be related to their different interaction partners in the two networks. Moreover, we found that hubs in bCPIN evolved faster than hubs in gCPIN. By analyzing common interactions from gCPIN and bCPIN, we further identified two major network components (DefRC and DevRC), in which defense responses and development processes were enriched, respectively. Interestingly, the gene expression relationship between DefRC and DevRC was positively correlated under B. cinerea infection condition but negatively correlated under G. orontii infection condition. Several proteins involved in the interactions connecting DefRC and DevRC were found to participate in the regulation of the trade-off between plant immunity and development. Finally, we noted an enrichment of auxin-related proteins involved in the interactions connecting DefRC and DevRC, which might explain the distinct relationships between DefRC and DevRC under different conditions.
Taken together, we hope that the current comparative analysis on plant immune responses to pathogens with different lifestyles will help to improve our systems understanding of plant immunity. Arabidopsis TFs were downloaded from the Plant Transcription Factor Database (PlantTFDB), which is a public database devoted to identifying and categorizing all plant genes involved in transcriptional control. Arabidopsis hormone-related genes, which are defined as genes participating in the biosynthesis, metabolism, transport, perception or signaling pathways of plant hormones, were gathered from the Arabidopsis Hormone Database 2.0 (AHD2.0). Two series of transcriptional data were downloaded from the NCBI Gene Expression Omnibus (GEO), and the corresponding GEO accession numbers were GSE5686 and GSE29642. Gene ontology annotations were obtained for Arabidopsis from the FTP site of TAIR (ftp://ftp.arabidopsis.org/home/tair/Ontologies/). Then, for each record in the annotation file, if the description of a gene met the following two criteria, we selected the gene as a plant defense-related gene. First, the record should use experimental evidence codes, including Inferred from Experiment (EXP), Inferred from Direct Assay (IDA), Inferred from Physical Interaction (IPI), Inferred from Mutant Phenotype (IMP), Inferred from Genetic Interaction (IGI), and Inferred from Expression Pattern (IEP). Second, the GO term should contain biological process keywords, including \u201csystemic acquired resistance\u201d, \u201csystemic resistance\u201d, \u201cimmune\u201d and \u201cdefense response to fungus\u201d. The other method used to collect plant defense-related genes was literature retrieval.
Plant defense-related genes were collected in two ways: the major way was analyzing gene ontology annotations, and the other was literature retrieval. We first searched the literature in PubMed with the keywords \u201cBotrytis cinerea\u201d and \u201cpowdery mildew\u201d; then, we selected plant defense-related genes through literature reading. Many methods exist to determine a PCC threshold, including the use of an arbitrary cutoff. The PCC of an interaction is the standard Pearson correlation of X and Y, where X and Y are the log-transformed gene expression values of the two interacting proteins, and N represents the number of time points in the corresponding transcriptional data. Then, we randomly permuted the transcriptional data for the 5,598 proteins across different time points and calculated PCCs for each permutation to achieve a random PCC distribution. Finally, we sorted the random PCC distribution and chose the PCC value at the end of the top 10% highest (lowest) PCCs as the threshold of positively (negatively) correlated interactions. Network modules were identified with mcl (http://micans.org/mcl/), an implementation of the MCL algorithm. Cytoscape (version 3.0.2) and its plugins were employed to visualize the networks constructed in this study and perform network analysis. To estimate the evolutionary rate of hubs, we compared orthologous sequences between Arabidopsis and C. papaya. First, we downloaded the protein sequences and coding sequences (CDS) of Arabidopsis and C. papaya from the PLAZA database. InParanoid (http://inparanoid.sbc.su.se/cgi-bin/index.cgi) was used to identify the orthologs of Arabidopsis hub proteins in C. papaya. Finally, for each pair of orthologs, we calculated dN/dS using the yn00 program in the PAML package. Based on the life cycles of G. orontii and B. cinerea, the early stage included microarray data at 6\u2009hours from GSE5686 and the first two time points from GSE29642. The middle stage of infection was composed of microarray data at 12\u2009hours to 24\u2009hours from GSE5686 and 6\u2009hours to 20\u2009hours from GSE29642. The remaining microarray data were defined as the late stage of infection.
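The permutation-based thresholding can be sketched as follows (a scaled-down illustration with random data; `pcc_thresholds` and `pearson` are hypothetical helpers, not the authors' code):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def pcc_thresholds(expr, n_perm=500, top=0.10, seed=0):
    """Permutation-based cutoffs for positively/negatively correlated PPIs.

    expr: list of per-gene expression profiles over the time points.
    Profiles of random gene pairs are shuffled across time points to break
    any real correlation; the resulting PCCs form a null distribution whose
    bottom/top `top` quantiles give the negative/positive cutoffs.
    """
    rng = random.Random(seed)
    null = []
    for _ in range(n_perm):
        g1, g2 = rng.sample(expr, 2)
        g1, g2 = list(g1), list(g2)
        rng.shuffle(g1)
        rng.shuffle(g2)
        null.append(pearson(g1, g2))
    null.sort()
    k = int(len(null) * top)
    return null[k], null[-k - 1]

rng = random.Random(42)
expr = [[rng.gauss(0, 1) for _ in range(24)] for _ in range(100)]  # 100 genes, 24 time points
neg_cut, pos_cut = pcc_thresholds(expr)
assert -1.0 <= neg_cut < 0.0 < pos_cut <= 1.0
```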
A gene was recognized as differentially expressed in a certain infection stage if it was differentially expressed at any of the time points included in that stage. The direction of a differentially expressed gene in each stage was determined by the majority vote of its constituent time points. A gene was identified as differentially expressed if its expression value exhibited a greater than 1.2-fold change between the spore-infected (treatment) and mock-treated (control) plants at any time point. Normalized transcriptional data were used to identify differentially expressed genes. For better comparisons, the two groups of time-course transcriptional data were divided into three infection stages. GO enrichment analyses for the modules predicted by MCL, the common response network and the network components were conducted using BiNGO 3.0.2 in Cytoscape with the \u201cGO Biological Process\u201d category. How to cite this article: Jiang, Z. et al. Network-Based Comparative Analysis of Arabidopsis Immune Responses to Golovinomyces orontii and Botrytis cinerea Infections. Sci. Rep. 6, 19149; doi: 10.1038/srep19149 (2016)."}
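The fold-change rule and the majority-vote direction call described above can be illustrated as follows (helper names are hypothetical; the 1.2-fold cutoff is the one stated in the text):

```python
import numpy as np

FOLD = 1.2  # fold-change cutoff from the text

def diff_expressed(treatment, control, fold=FOLD):
    """Per-time-point fold change; a gene is differentially expressed if
    any time point exceeds the cutoff in either direction."""
    ratio = np.asarray(treatment, float) / np.asarray(control, float)
    up = ratio >= fold
    down = ratio <= 1.0 / fold
    return bool(np.any(up | down)), up, down

def stage_direction(up, down):
    """Majority vote over a stage's time points: +1 up, -1 down, 0 for a tie."""
    return int(np.sign(int(up.sum()) - int(down.sum())))
```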
+{"text": "Dynamic aspects of gene regulatory networks are typically investigated by measuring system variables at multiple time points. Current state-of-the-art computational approaches for reconstructing gene networks directly build on such data, making a strong assumption that the system evolves in a synchronous fashion at fixed points in time. However, nowadays omics data are being generated with increasing time course granularity. Thus, modellers now have the possibility to represent the system as evolving in continuous time and to improve the models\u2019 expressiveness.Continuous time Bayesian networks are proposed as a new approach for gene network reconstruction from time course expression data. Their performance was compared to two state-of-the-art methods: dynamic Bayesian networks and Granger causality analysis. On simulated data, the methods comparison was carried out for networks of increasing size, for measurements taken at different time granularity densities and for measurements unevenly spaced over time. Continuous time Bayesian networks outperformed the other methods in terms of the accuracy of regulatory interactions learnt from data for all network sizes. Furthermore, their performance degraded smoothly as the size of the network increased. Continuous time Bayesian networks were significantly better than dynamic Bayesian networks for all time granularities tested and better than Granger causality for dense time series. Both continuous time Bayesian networks and Granger causality performed robustly for unevenly spaced time series, with no significant loss of performance compared to the evenly spaced case, while the same did not hold true for dynamic Bayesian networks. The comparison included the IRMA experimental datasets which confirmed the effectiveness of the proposed method. 
Continuous time Bayesian networks were then applied to elucidate the regulatory mechanisms controlling murine T helper 17 (Th17) cell differentiation and were found to be effective in discovering well-known regulatory mechanisms, as well as new plausible biological insights. Continuous time Bayesian networks were effective on networks of both small and large size and were particularly feasible when the measurements were not evenly distributed over time. Reconstruction of the murine Th17 cell differentiation network using continuous time Bayesian networks revealed several autocrine loops, suggesting that Th17 cells may be auto-regulating their own differentiation process. Understanding gene regulatory networks (GRNs) is of extreme relevance in molecular biology and represents an open challenge for computational sciences. The task of uncovering the underlying causal structure of these cellular dynamics is referred to as gene network reconstruction or reverse engineering. Reconstruction of gene regulatory networks from time course expression data is an active area of research. A number of approaches have been applied to the GRN reconstruction problem, including Boolean networks. Dynamic aspects of regulatory networks are investigated by measuring the system variables at multiple time points (e.g. through gene expression microarray or mRNA sequencing). This approach is the result of technological constraints of the experimental techniques, which only allow for measurements of \u201csnapshots\u201d of the system at multiple time points. In this situation the risk of missing important pieces of information is high if the sample rate is not adequately chosen or not fine enough. While this issue is currently unavoidable, when computationally analyzing these time course datasets it can be advantageous to separate the way the time course data is experimentally obtained from the way time is represented in the computational model. 
Current state-of-the-art approaches described above directly build on \u201csnapshot-like\u201d data, making the strong assumption that the system under investigation evolves in a synchronous fashion at fixed points in time. Even when only discrete time data is available, modeling the system as continuously evolving over time represents a conceptually more natural approximation and improves model expressiveness. CTBNs allow one to answer queries directly involving time, such as \u201cfor how long does gene X have to remain up-regulated to have an effect on the regulation of gene Y?\u201d, and queries in the presence of partial evidence, such as \u201cwhat is the most probable state for gene X at time t given that gene Y was observed to be up-regulated from time t - \u03b1 to t - \u03b2?\u201d. With their graphical representation of causal relations, CTBNs also provide an intuitive and meaningful level of abstraction of the dynamic regulatory process, which can help a molecular biologist to gain a better understanding of the systems studied. Finally, CTBNs conserve all of the advantages which are characteristic of probabilistic graphical models and which make them suitable for the analysis of biological networks. In the likelihood, M[x, x\u2032|u] represents the count of transitions from state x to state x\u2032 for the node Xn when the state of its parents Un is set to u, while T[x|u] is the time spent in state x by the variable Xn when the state of its parents Un is set to u. Furthermore, \u03b1x|u and \u03c4x|u are hyperparameters over the CTBN\u2019s q parameters, with analogous hyperparameters over the \u03b8 parameters. The optimal CTBN structure is selected by maximizing the Bayesian score over G={Un\u2208X:n=1,\u2026,N}, the set of all possible choices of parent set Un for each node Xn, n=1,\u2026,N. In the learned network, genes that do not regulate any other gene will be leaf nodes with only incoming arcs. Arcs are directed but do not encode information about positive or negative regulation. A direct arc between two genes implies a direct causal relation (regulation) between the pair. 
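The sufficient statistics M[x, x\u2032|u] and T[x|u] and the role of the hyperparameters \u03b1 and \u03c4 can be illustrated on a single toy trajectory (a sketch assuming a fixed parent configuration u and a common Bayesian point-estimate form for the leaving intensities; this is not the paper's implementation):

```python
import numpy as np

def sufficient_stats(trajectory, n_states):
    """CTBN-style sufficient statistics for one node from one trajectory:
    M[x, x'] transition counts and T[x] dwell times. `trajectory` is a list
    of (state, duration) segments; the parent configuration u is assumed
    fixed for the whole trajectory."""
    M = np.zeros((n_states, n_states))
    T = np.zeros(n_states)
    for (x, dt), nxt in zip(trajectory, trajectory[1:] + [None]):
        T[x] += dt
        if nxt is not None:
            M[x, nxt[0]] += 1
    return M, T

def posterior_intensity(M, T, alpha=1.0, tau=1.0):
    """Bayesian point estimate of the leaving intensity of each state,
    q_hat = (alpha + M[x]) / (tau + T[x]), where M[x] is the total number
    of transitions out of x; alpha and tau play the role of the
    hyperparameters described in the text."""
    m_out = M.sum(axis=1)
    return (alpha + m_out) / (tau + T)
```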
Longer paths between two nodes suggest that the influence of one gene on the other passes through a regulatory chain involving intermediate genes. Even if not displayed in the networks, auto-regulation interactions, interaction directions (positive/negative) and relative timings are encoded within the conditional intensity matrices (CIMs) associated with each node. Let us consider an example consisting of a small network of 3 genes, shown in the figure. From the CIM of C, the distribution of C can be propagated forward to any continuous point in time by calculating p(t+\u0394t) = p(t) exp(QC \u0394t), where exp is the matrix exponential, QC is the CIM of C and \u0394t is the difference between the last known state change for the parents of C and the time t for which we want to calculate the probability distribution of C. CIMs are learned together with the graph structure and represent the basis for the inference task, which is not directly investigated in this work. The Granger causality test was first conceived for the economic domain and is based on comparing autoregressive models for two variables X and Y. X is said to Granger cause Y if the autoregressive model of Y is more accurate when based on the past values of both X and Y rather than Y alone. The accuracy of the prediction is measured through the variance of the prediction error. Let us suppose that we have a bivariate linear autoregressive model for the variables X and Y, where p indicates the model\u2019s order, e.g. the number of past observations of the time series to incorporate in the autoregressive model. The impact that each one of these observations has on the predicted values of X and Y is contained in the matrix A, and \u03b5 represents the prediction error for the time series. Considering the first equation, for Y to Granger cause X the variance of \u03b5x must be smaller than the variance of \u03b5x when the Y term is removed from the equation. 
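The forward propagation through the matrix exponential of a CIM can be sketched as follows (the 2-state intensity matrix Q is hypothetical, and a simple scaling-and-squaring exponential is used in place of a library routine):

```python
import numpy as np

# Hypothetical 2-state conditional intensity matrix (CIM) for a gene C:
# rows sum to zero, off-diagonal entries are transition intensities.
Q = np.array([[-0.5, 0.5],
              [ 2.0, -2.0]])

def mat_exp(A):
    """Matrix exponential via scaling-and-squaring with a Taylor series
    (adequate for the small intensity matrices used here)."""
    norm = np.linalg.norm(A, np.inf)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / (2 ** s)
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, 20):
        term = term @ B / k
        out = out + term
    for _ in range(s):
        out = out @ out
    return out

def propagate(p0, Q, dt):
    """Propagate the state distribution of C forward by dt:
    p(t + dt) = p(t) @ exp(Q * dt)."""
    return np.asarray(p0, float) @ mat_exp(Q * dt)
```

For large dt the distribution converges to the stationary distribution of the intensity matrix, which for this Q is proportional to the opposite transition intensities.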
This original GC formulation is meant to uncover causal relationships between two variables; in multivariate systems, a pairwise analysis of this kind applied to all possible pairs of variables is limited in the type of causal relationships that can be uncovered. For this reason, the concept was extended to the conditional case involving three variables X, Y and Z, for which the multivariate linear autoregressive model for the variable X can be written analogously. As before, Y Granger causes X if the variance of \u03b5x is smaller than what it would be when the Y term is removed from the equation. VAR models have the undeniable advantage of being well understood and widely applied in many disciplines such as the neurosciences, economics and biology. In this work GC, as in almost the totality of its applications and theoretical investigations, is considered in its formulation which assumes the observations to be taken at regular and fixed time intervals. As underlined in the literature, GC relies on assumptions that should be verified a priori, which is seldom the case with real biological datasets. In this section simulated time course datasets have been used to benchmark the accuracy of network reconstruction with GC, DBNs and CTBNs. Simulated datasets are important for benchmarking the accuracy of gene regulatory network reconstruction as the true network structure is known. Subnetworks were extracted from the known in vivo gene networks of E. coli and S. cerevisiae by randomly choosing a seed node and progressively adding nodes with the greedy neighbor selection procedure, which maximizes the modularity and is able to preserve the functional building blocks of the full network. The datasets were generated by the same methodology as was used in the DREAM4 competition, extracting subnetworks from the E. coli and S. cerevisiae networks. 
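A minimal numerical illustration of the bivariate Granger test by residual-variance comparison, under the evenly spaced observation assumption stated in the text (no F-statistic is computed, so this is a sketch of the idea rather than a proper hypothesis test):

```python
import numpy as np

def ar_residual_var(target, predictors, p=2):
    """Least-squares fit of a linear autoregressive model of order p;
    returns the variance of the in-sample prediction error."""
    n = len(target)
    rows, y = [], []
    for t in range(p, n):
        row = [s[t - k] for s in predictors for k in range(1, p + 1)]
        rows.append(row + [1.0])            # intercept term
        y.append(target[t])
    X, y = np.asarray(rows), np.asarray(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.var(y - X @ beta))

def granger_causes(x, y, p=2):
    """y Granger-causes x when adding the past of y shrinks the error
    variance of predicting x (bivariate formulation; a proper test would
    compare the two variances with an F statistic)."""
    restricted = ar_residual_var(x, [x], p)
    full = ar_residual_var(x, [x, y], p)
    return full < restricted, restricted, full
```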
To ensure robustness, our studies are not based on one single network instance, but are always based on a set of 10 different network instances. The reconstruction algorithms are tested under several conditions: for an increasing number of nodes in the network (network size), for different time point densities in the dataset (time course granularity) and for datasets with time measurements not evenly but unevenly distributed (randomly spaced). The accuracy of network reconstruction was measured using the F1 measure for binary classification, defined as F1 = 2 \u00b7 precision \u00b7 recall / (precision + recall). In statistics the recall is referred to as sensitivity and the precision as positive predicted value. From the known in vivo gene regulatory network structures of E. coli and S. cerevisiae we randomly extracted sets of 10 networks consisting of 10, 20, 50 and 100 genes for both organisms. For the sake of brevity, the sets of 10 networks consisting of 10, 20, 50 and 100 genes will be referred to as 10-NETs, 20-NETs, 50-NETs and 100-NETs respectively. Statistical analysis of the complexity of the extracted network structures is provided in the figures. The first step of our analysis on simulated data consisted in studying how the three methods perform when faced with the task of reconstructing gene networks of different sizes. The generated dataset consists of 21 evenly spaced time points. This dataset aims to simulate the amount of data that high-throughput techniques will soon generate while maintaining a realistic time course magnitude: expression microarray experiments repeated with this many time points are today possible. On the other hand, the dataset is still unrealistically rich in terms of number of perturbations and replicates. Such a comprehensive dataset is however necessary to fairly compare the analyzed methods. Prior to learning, we performed an empirical optimization of the model parameters for the three methods; for CTBNs and DBNs this included experimentally establishing the optimum number of discretization levels. 
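The F1 measure over directed edge sets follows directly from the definition above (edge names are made up for illustration):

```python
def f1_score(true_edges, inferred_edges):
    """Precision, recall and F1 over directed edge sets:
    F1 = 2 * precision * recall / (precision + recall)."""
    tp = len(set(true_edges) & set(inferred_edges))
    if tp == 0:
        return 0.0, 0.0, 0.0
    precision = tp / len(inferred_edges)
    recall = tp / len(true_edges)
    return precision, recall, 2 * precision * recall / (precision + recall)
```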
More details can be found at the end of this document. Results on the E. coli dataset are summarized in the tables, where F1 values are calculated as the arithmetic mean over the sets of 10 sampled network instances, and in the figures, where the F1 values obtained by the methods on the 10 sampled network instances are represented through boxplots. For 50-NETs and 100-NETs, learning with DBNs became computationally intractable; therefore, the corresponding results are not available. It can be concluded that the reconstructed network structures were the most accurate for CTBNs, which outperformed DBNs and GC for 10-NETs, 20-NETs, 50-NETs and 100-NETs in terms of the mean F1 values. A paired t-test confirmed that the F1 values for CTBNs were significantly higher than for DBNs and GC in all tested network sizes. Moreover, CTBNs were robust with respect to the increasing network size: their performance smoothly degraded as the number of nodes of the network increased. Indeed, the difference between mean F1 values for CTBNs and GC increased progressively with the network\u2019s size. GC outperformed DBNs on 10-NETs (0.13 mean F1 gap), while on 20-NETs GC was only marginally more accurate than DBNs, with a limited mean F1 difference of 0.02. Results on the S. cerevisiae dataset show the same trend, with the mean F1 difference between CTBNs and GC increasing from 0.17 for 10-NETs up to 0.29 for 100-NETs. Interestingly, on this dataset DBNs outperformed GC. The paired t-test confirmed the significant superiority of CTBNs in all cases over both DBNs and GC. DBNs were significantly better than GC on 20-NETs. As a negative test we also simulated a random reconstruction method which starts with an empty graph and randomly adds edges to it. As expected, this random algorithm had low precision while its recall stabilized around 0.50. 
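The paired t-test used to compare two methods over the same 10 sampled network instances reduces to the following statistic (a sketch; degrees of freedom are n - 1, and the critical-value lookup is omitted):

```python
import math

def paired_t(a, b):
    """Paired t statistic over matched samples, e.g. the F1 values of two
    methods on the same set of network instances; df = n - 1."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)
```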
The network size was kept constant at 20 nodes, as this was seen in the previous section to represent a good trade-off between network complexity and computational cost. The second set of tests was conceived to compare the network reconstruction algorithms on time course datasets of increasing time granularity. Although the overall duration of the simulated experiment was kept fixed, measurements were collected at increasing frequencies of evenly spaced time points. As in the previous section, datasets were generated for both E. coli and S. cerevisiae. Results on E. coli are shown in the tables, with F1 values calculated as the arithmetic average over the sets of 10 network instances. For the low time granularity of 11, GC performed comparatively well; it performed best for granularity 21 (mean F1 0.47) and achieved a slightly lower accuracy for granularity 31 (mean F1 0.40). CTBNs achieved a slightly lower accuracy than GC for time granularity 11 (mean F1 0.47), achieved the overall best performance for time granularity 21 (mean F1 0.57) and had a slightly lower accuracy for granularity 31 (mean F1 0.54). A paired t-test over the F1 values concluded that CTBNs performed significantly better than DBNs for all time course granularities and also better than GC, with the exception of time courses of granularity 11. Finally, GC proved to be significantly better than DBNs for granularity 11, while no statistically significant difference emerged between the two for higher time granularities. The three methods share the trend of reconstruction accuracy initially increasing from time granularity 11 to 21, reaching a peak at 21 and then decreasing for granularity 31: this behavior could be explained by the fact that the optimal number of discretization levels was empirically established for time granularity 21 data and subsequently applied to time granularity 11 and 31 data. The discretization level applied to granularity 31 data may therefore be suboptimal. 
Results on S. cerevisiae are shown in the tables. GC achieved a mean F1 of 0.57 for granularity 11; however, the drop of effectiveness for granularities 21 and 31 was clear, with mean F1 values of 0.41 and 0.42 respectively. CTBNs were always the most accurate, achieving mean F1 values of 0.60, 0.70 and 0.60 for the three time course densities. Again, DBNs performed poorly for granularity 11, while better for more finely grained data (0.58 and 0.48 mean F1). With the exception of granularity 11, DBNs outperformed GC, which is the opposite of what we observed for the E. coli datasets. A paired t-test concluded that CTBNs significantly outperformed DBNs for all time granularities, and GC for granularities 21 and 31. Interestingly, it is possible to prove the superiority of GC over DBNs for granularity 11, and vice-versa for granularity 21. It has to be noted that the search for the optimal value of the hyperparameters \u03b1 and \u03c4 was performed only for the dataset associated with a granularity value equal to 21. These optimal values were subsequently applied to the datasets associated with granularity values equal to 11 and 31. While this choice makes the performances achieved by CTBNs suboptimal, it also ensures robustness, that is, it implicitly protects from potential overfitting of the hyperparameters. The third step of our analysis on simulated data consisted in evaluating how the performance of the three methods changes when the time measurements are not evenly spaced over time but randomly sampled. This is a typical scenario in wet-lab experiments. 
We repeated the numerical experiments for time courses of granularity 11, 21 and 31 (keeping the 10 random time point instances consistent). For the purpose of the test, 10 different random time point instances were sampled and used to generate 10 unevenly distributed time course datasets; tests were run on the set of 20-NETs of the organism E. coli. Results are shown in the figures. The minimum F1 value achieved by DBNs among the 10 unevenly (randomly) sampled time point instances is always smaller than the minimum F1 value achieved by CTBNs on the same 10 unevenly sampled time point instances. Furthermore, the maximum F1 value achieved by DBNs on the same samples is always smaller than the maximum achieved by CTBNs, for all network instances and time course granularities. The result is clear, showing that CTBNs are always preferable to DBNs when the time course data is not evenly spaced. CTBNs and GC showed comparable ranges of F1 values, with no clear trend in either of the methods to perform better. GC was better than DBNs with respect to both minimum and maximum F1 values, with only a few cases for which DBNs were preferable. Due to the current lack of reliable large scale gold standards, in vivo evaluation is a critical point for GRN reconstruction methods, which often rely on less quantifiable evaluations such as comparison with existing literature and/or information available in public databases. The benchmarking of CTBNs was performed on a small but certified network: the IRMA network, consisting of five genes synthetically constructed in the yeast S. cerevisiae. On the S. cerevisiae experimental dataset the results were coherent with those obtained on simulated datasets: CTBNs outperformed DBNs and GC. A graphical representation of the true network compared with the ones inferred by DBNs, GC and CTBNs is provided in the figures. Gene regulatory networks have been described extensively in the regulation of the immune response, but more importantly in the control of inflammation. 
Inflammation is a multifaceted cellular response critical for the protection of the host against different types of injuries such as infections. However, the dark side of the inflammatory process is represented by tissue damage, since inflammatory responses can react against self-tissues. Precise regulation of gene expression is extremely important in the context of inflammation for host survival under its own immune activation. In particular, gene regulation of inflammatory cellular differentiation appears essential for fine-tuning of the entire inflammatory response. At the onset of chronic inflammation, the Th17 cellular response is of particular interest. Th17 cells produce well-known soluble molecules such as IL17A, IL17F and IL21, which are important for neutrophil recruitment, infection clearance and delivery of antimicrobial peptides. Fine tuning of the Th17 cell differentiation program appears to be pivotal for proper control of over-exuberant inflammatory processes in the vertebrate immune system. While some key regulators of Th17 differentiation are known, a large portion of the regulatory mechanisms controlling this process remains unclear. Naive T cells (or Th0) can be polarized to differentiate into one of the T helper phenotypes by exposing them to various polarizing cytokines. The external signals through cytokines drive different regulatory pathways within the cells, and gene regulatory networks involving master regulator transcription genes dictate the final differentiation status. Th0 cells can be programmed to undergo differentiation into the Th17 phenotype by activating transcription factors such as Stat3 and ROR\u03b3t through soluble molecules such as IL6, TGF\u03b2 and IL1\u03b2. Furthermore, stabilization of the Th17 phenotype requires the activation of the IL23R receptor through the innate cytokine IL23. Gene expression was measured upon polarization with IL6 and TGF\u03b21 in two biological replicates. 
Measurements were taken at 18 time points unevenly spanned over the first 72 hours following induction. Furthermore, separate measurements were taken involving perturbation with the stabilizing innate cytokine IL23, 50 h from the start of polarization. This dataset is one of the longest and most finely grained time course datasets ever generated in the T helper differentiation context, with a total of 58 gene expression microarray samples. Here, structure learning of CTBNs is applied to elucidate the gene regulatory network controlling differentiation of murine naive Th0 cells to the Th17 phenotype. Data for this study is derived from a recently published time course microarray experiment. In order to keep the results interpretable, the analysis was restricted to the representative set of 275 genes individuated by the authors as reflecting the differentiation process. To capture the regulatory events specific to the IL6+TGF\u03b21 condition, and not those regulatory fluctuations which take place independently of the differentiation process (in both Th0 and IL6+TGF\u03b21 cells), fold-change values of IL6+TGF\u03b21 versus Th0 were used as input data for the learning algorithm. Two separate networks have been learned: the first one using the unperturbed time course series (from fold changes IL6+TGF\u03b21 vs. Th0), the second one using the time course series with the addition of the IL23 cytokine into the culture (from fold changes IL6+TGF\u03b21+IL23 vs. Th0+IL23). In order to evaluate which mechanisms are relevant to the stabilization of the phenotype, we looked at differences between the two networks. If the perturbations had been of the gene knock-out and/or knock-down type, the correct way to proceed would have been to learn one single network from both the unperturbed and perturbed data. Here, the perturbation is of a stabilizing nature, e.g. it enhances the differentiation process through the activation of additional regulatory mechanisms and the inhibition of others. 
For simplicity, from now on we will refer to the first network as the IL6+TGF\u03b21 network and to the second one as the IL23 network. An incorrectly inferred arc creates a large number of additional paths connecting genes. For this reason, a validation approach which tries to find a pathway between genes known to be related could lead to biased results, where incorrectly inferred arcs paradoxically lead to a greater number of true positives. It is clear that the benchmarking of CTBNs in the absence of a gold standard cannot be performed in a purely quantitative way, but has to be complemented with a reasoned biological interpretation of the network. The IL6+TGF\u03b21 network inferred from data is shown in the figure. Some interactions appear in the IL6+TGF\u03b21 network with a reverse orientation compared to the literature. This is a well-known problem with reconstructing networks, referred to as model non-identifiability, which arises when, given the data, it is not possible to recover (learn) a unique set of parameters; instead, in such situations we have multiple parameter settings that are indistinguishable given the data. The non-identifiability of a model can be due to data scarcity or the presence of hidden variables. Given that we are examining a subset of genes, both hypotheses are possible. For these reasons, the inverted interactions were considered valid. An additional assessment of the validity of the inferred network was performed by looking at the leaf nodes (nodes with no children) and the root nodes (nodes with no parents). In the inferred IL6+TGF\u03b21 network, 13 of the 90 leaf nodes represented soluble immune mediators, which usually characterize the cells at the final steps of their differentiation processes. 
That was the case for the cytokines Il4, Il9, Il24, Il1rn, Clcf1 and Tgfb3, the cytokine signal transducer Il6st, which is shared by many cytokines, cytokine receptors such as Il12rb2 and Il1r1, chemokines such as Ccl1, and chemokine receptors such as Ccr5, Ccr6 and Cxcr4. Among the leaf nodes we also found clusters of differentiation such as Cd2, Cd24, Cd274 and Cd86, which represent a clear marker of the final steps in the acquisition of the terminal Th17 phenotype. Furthermore, apoptosis markers like Casp3, Fas, Daxx, Vav3, Trat1, Tnfrsf25, Tgm2 and Sertad1, together with programmed cell death 1 ligand 2 (Pdcd1lg2), which follow T cell activation and exhaustion, were correctly associated with leaf nodes. Transcription factor regulators of the late phases of the differentiation process, such as Tbet, Runx2, Runx1, Rorc and Maf, all responsible for the final steps of the definition of the Th17 cell phenotype, are correctly placed at the end of the chain. Finally, Sgk1 is a recently discovered marker identifying the Th17 pathogenic phenotype, acquired by T cells in the late phases of T cell polarization; in our network Sgk1 is correctly represented as a leaf node. In the temporal network semantics, leaf nodes are associated with final products (cytokines in our case). Conversely, root nodes are associated with molecules at the beginning of the cascade. Two root nodes were observed at the top of the network structure, and both appear to be correctly identified in their role of initiating the differentiation cascade. One of them is Filamin A (Flna), an actin-binding and signal-mediator scaffolding protein, required for T cell activation in response to TCR activation, also known as \u201csignal 1\u201d. The other is Bcl3, which is known to be activated in response to initial TCR activation. Bcl3 is discussed in more detail in the next paragraphs, as new interesting insight related to its role emerged from the network. The sparsity of the inferred network is appreciable. 
From a theoretical point of view, given that the number of variables under study is several orders of magnitude greater than the data sample size, network sparsity is something that reconstruction methods seek. A major hub node in the network is Il4ra, the receptor of the cytokine Il4, shown in the figure. Il4ra seems to control the ability of the human immune system to regulate the magnitude of Il17 production. Thus, a role of Il4ra in negative regulation of Th17 differentiation is expected. Other major hub nodes include Cathepsin W (Ctsw), Bcl3, Zfp281, Il4ra, Prickle1 and Tnfsf11. Among these, Bcl3 and Tnfsf11 are known to have a significant influence on Th17 differentiation. Bcl3, a member of the IkB family of proteins, is an essential negative regulator of Toll-like receptor-induced responses and an inhibitor of NFkB. Reduced Bcl3 expression has been associated with Crohn\u2019s disease, and Bcl3 has an inhibitory role in regulating IL17 release: Bcl3-/- mice develop autoimmune diabetes with increased Th17-type cytokine expression. Therefore, Bcl3 is correctly inferred as a hub node. Tnfsf11, alias Rankl, is known to be a marker of pathogenic Th17 cells in inflammation, and therefore its status as a hub in the network is correct. Ctsw is a member of the peptidase C1 family, a cysteine lysosomal proteinase that plays a crucial role in the turnover of intracellular proteins as antigens and has a specific function in the mechanism or regulation of CD8+ T-cell cytolytic activity. Zfp281 is a zinc finger transcription factor required in embryonic stem cells for pluripotency. The role of Prickle1, a nuclear receptor which is a negative regulator of the Wnt/beta-catenin signaling pathway, in Th17 differentiation is yet unknown. Continuous time Bayesian networks (CTBNs) were applied to the gene regulatory network reconstruction task from gene expression time course data. A comparison with two state-of-the-art methods, i.e. dynamic Bayesian networks (DBNs) and Granger causality analysis (GC), was conducted. 
The performance of the methods was analyzed in three different directions: for networks of increasing size, for time course data of increasing granularity and for evenly versus unevenly spaced time course data. CTBNs achieved the highest value of the F1 measure for all network sizes and for both E. coli and S. cerevisiae. Furthermore, they suffered from a limited and smooth loss of performance with respect to networks of increasing size. This suggests that, if applied to networks larger than those analyzed in this paper, CTBNs can still effectively help to uncover the causal structure of the regulatory network. These aspects make CTBNs a good candidate for solving the reconstruction of regulatory networks, which are systems characterized by a large number of variables. CTBNs were the best performing approach when the time course granularity was sufficiently fine (21 and 31 time points in our experiments), while for coarser granularities (11 time points) CTBNs and GC performed analogously. DBNs performed poorly in the granularity 11 case, showing a big gap from CTBNs and GC on both organisms. The result of CTBNs for granularity 11 was unexpected: probabilistic approaches tend to require a big amount of data in order to be effective. Thanks to their explicit representation of time, CTBNs were always preferable to DBNs when the time points were not evenly spaced: the worst F1 value that one can obtain when learning a network from unevenly sampled data (over 10 random samples) was always better than the worst case obtainable when learning with DBNs. The same favorable situation for CTBNs applied to the best cases. The considerations made for CTBNs over DBNs apply to GC over DBNs as well, while CTBNs and GC showed a similar behavior in response to unevenly spaced data. The poor performance of DBNs on unevenly spaced data is due to the observational model assumption on which their representation of time is built: variables are assumed to evolve at fixed increments; when that is not the case, time points are treated as evenly spaced, with the consequent introduction of incorrect information in the model. 
On the other hand, the good performance of GC on unevenly spaced time course data is surprising; further studies are required in order to understand exactly why GC does not suffer significantly. This feature of both CTBNs and GC is particularly relevant to the gene network reconstruction problem. Indeed, time course data are rarely collected at regular time intervals; the most common scenario is to have time measurements more densely sampled during some specific phases of the studied phenomenon and coarsely sampled during other phases. Another gene of interest is Litaf, a DNA-binding protein that mediates TNF-alpha expression by binding to the promoter of the TNF-alpha gene. Litaf may then be important in delineating the Th17 pathogenic phenotype, which is achieved thanks to the addition of IL23 in the culture and regulated by Il10 during Th17 differentiation, together with Irf4, Sgk1, Il17ra and Id2, which are all known phenotypic markers of Th17 pathogenic cells. The network also suggests a role of Bcl3 in Th17 differentiation, since it appears to be able to interact with and probably affect the balance between positive and negative markers of Th17 cells (Figure E). Among the genes which appear to be controlled by Tnfsf11 are the salt-sensing kinase Sgk1 and Il17a, in the IL6+TGF\u03b21 as well as in the IL23 network. If experimentally confirmed, this may represent novel information: Sgk1 would be independent of Il23 signaling, but dependent on Il17 itself, through the regulatory chain involving Prdm1 and Tnfsf11. This is aligned with the theory that Sgk1 depends on Il17 and may suggest once again the existence of an autocrine loop in the regulation of Sgk1. 
The encouraging results achieved in this investigation suggest that structural learning of CTBNs should be considered a new, reliable gene network reconstruction method when time course expression data are available; the results indicate that CTBNs are particularly suitable for learning large networks and for time measurements that are not collected at evenly spaced time points. These are key features which give CTBNs a great advantage over the existing state-of-the-art methods. However, CTBNs necessarily require the input data to be discretized. If the data are noisy, as is usually the case in the biological domain, discretization can help to reduce the amount of noise; on the other hand, it may also lead to a loss of relevant information. Researchers should keep this in mind when using CTBNs. CTBNs helped elucidate the regulatory network responsible for murine Th17 differentiation, confirming well-known regulatory interactions and main regulators, as well as formulating new biological hypotheses. Apart from a number of new potential regulators, the network inferred by CTBNs highlighted the presence of several autocrine loops through which Th17 cells could be autoregulating their own differentiation process. The relevance of this insight comes from the fact that, while self-regulating mechanisms are known to exist in other cell lines, their existence in Th17 has not emerged yet. Wet-lab experiments aimed at validating this hypothesis are now required. CTBNs assume the duration of events to be a random variable that is exponentially distributed; the exponential distribution has the characteristic of being \u201cmemoryless\u201d. CTBNs can be extended to the modeling of systems with memory by introducing hidden nodes/states and representing the system through a mixture of exponential distributions. The application of this extension to the gene network domain is relevant and remains to be explored. Another key aspect to be investigated is the inference task, which would allow for a deeper analysis of the dynamic aspects of the reconstructed gene network, such as answering queries directly involving time. In many biological processes the structure of the causal relationships among variables can vary over time (i.e. there can be different gene networks regulating different phases of the cell cycle); heterogeneous DBNs have been proposed to model such time-varying structures. The simulated dataset was generated with the help of the Gene Net Weaver tool, which extracts in vivo gene regulatory network structures of E. coli and S. cerevisiae and endows them with dynamics. Given each extracted network structure, Gene Net Weaver combines ordinary and stochastic differential equations to generate the corresponding dataset. Perturbations are applied to the first half of the time series and removed from the second part, showing how the system reacts and then goes back to the wild type state. The multiplicative constant of the white noise term in the stochastic differential equations was set to 0.05 as in DREAM4. Finally, all expression values were normalized by dividing them by the maximum mRNA concentration of the related dataset. An optimization of the model parameters was run for the three methods; for CTBNs and DBNs this included experimentally establishing the optimum number of discretization levels. All of these steps aimed to identify the best configuration for each of the three methods described here. It is important to notice that by the term optimization we do not refer to the optimization of an objective function, but to a set of independent numerical experiments where the structural learning is run for different values of the model\u2019s parameters.
The optimal parameters are considered the ones for which the algorithms achieve the highest values of the F1 measure. Prior to running the tests on simulated data, empirical optimization experiments were run on the 10-NETs and 20-NETs, where the required learning time was still feasible. The optimal parameter values found were subsequently applied to the 50-NETs and 100-NETs. Because CTBNs cannot handle continuous data, a discretization was applied. Discretization of continuous data is known to be a critical task: too few bins (levels) of discretization lead to a loss of important information, while increasing the number of bins increases the required amount of data and computational resources as well. To find the optimal number of bins, tests with data discretized into 3, 4, 5, 6 and 7 equal-width bins were performed. Best performances were obtained when using 5 equal-width bins. It is worth noticing that discretization intervals were chosen individually for each variable (gene), based on the minimum and maximum expression levels of that variable over the whole set of generated data. In order to preserve the significance and comparability of the results, one needs to keep track of the discretization intervals applied to each variable. The impact of different numbers of discretization bins on the performance of CTBNs and DBNs is shown in the corresponding figure. For the hyperparameters \u03b1 and \u03c4, introduced in the Methods section, the best values were found to be 0.01 and 5 respectively. Because of the local nature of the learning, the optimal hyperparameter values found on 10-NETs and 20-NETs are expected to be optimal for 50-NETs and 100-NETs as well; indeed, separate optimization processes on 10-NETs and 20-NETs returned the same optimal values.
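A per-gene equal-width binning of the kind described above can be sketched as follows (illustrative code, not the authors' toolbox; the per-gene bin edges are returned so that later results remain comparable):

```python
import numpy as np

def equal_width_discretize(expr, n_bins=5):
    # Discretize each gene (row) of an expression matrix into equal-width bins.
    # expr: 2D array, genes x measurements; n_bins: number of levels
    # (5 worked best for CTBNs in the text). Returns integer levels in
    # [0, n_bins - 1] plus the per-gene bin edges.
    expr = np.asarray(expr, dtype=float)
    levels = np.empty_like(expr, dtype=int)
    edges = []
    for i, row in enumerate(expr):
        # intervals are chosen per gene, from its own min and max
        e = np.linspace(row.min(), row.max(), n_bins + 1)
        # digitize against the interior edges; clip keeps the max in the top bin
        levels[i] = np.clip(np.digitize(row, e[1:-1]), 0, n_bins - 1)
        edges.append(e)
    return levels, edges
```

Storing `edges` per variable mirrors the paper's point that the discretization intervals must be tracked to keep results significant and comparable.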
The sensitivity of network reconstruction performance to variation of the hyperparameters \u03b1 and \u03c4 (CTBNs) is shown in the corresponding figure. For CTBNs, the computational nature of the exact structural learning problem lent itself to greedy learning. Preliminary tests on the 10-NETs returned the same results for both exhaustive and greedy learning, although it cannot be established whether exhaustive learning on the larger networks would have returned better results. The last parameter investigated was the maximum number of parents allowed for each node: since the greater this value is, the longer the required computational time, sequential tests with increasing values of this parameter were run. Interestingly, it was observed that CTBNs were never able to detect more than 3 parents per node, even when the true networks contain nodes with more than 3 parents. For DBNs, the optimization of the number of discretization bins was re-run, and the results confirmed that what is optimal for CTBNs may not be the best option for learning with DBNs: the optimal number of discretization bins for DBNs turned out to be 3. Discretization intervals were selected individually for each variable, as was done for CTBNs. Model selection was performed using the BIC criterion. For the GC analysis no discretization was required, since the approach can handle continuous data. The best value for the model order parameter, i.e. the number of past observations to incorporate into the regression model, was found to be 1. Covariance stationarity (CS) is a requirement for GC to be applied; the data were CS according to the ADF criterion. Optimization was also run with respect to the synthetically reconstructed yeast dataset. The optimal number of bins turned out to be 3 for both DBNs and CTBNs, while the maximum number of parents was set to 5. The optimal prior values for CTBNs were equal to those found on simulated data.
The learning criterion for DBNs was set to BIC. For GC, all the pre-processing steps listed for the simulated data were applied; a p-value cutoff of 0.05 with an approximation of the False Discovery Rate (FDR) correction was the best performing configuration. Defining X to be the fold-change values, noise and random fluctuations in the data turned out to be heavy for \u22121.2 < X < 1.2; as a consequence, X was discretized into 3 different levels: X < \u22121.2, \u22121.2 \u2264 X \u2264 1.2, and X > 1.2. Genes whose fold-change levels after discretization were constant among all the time points were excluded from the analysis. The microarray raw data for the 275 genes indicated in the reference were analyzed in this way. Experiments were run using: for CTBNs, the CTBN Matlab Toolbox developed at the MAD (Models and Algorithms for Data and text mining) Lab of the University of Milano-Bicocca; for DBNs, the Bayesian Net toolbox of Murphy."}
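The fold-change discretization and constant-gene filtering described above can be sketched as follows (illustrative code using the \u00b11.2 cutoff from the text):

```python
def discretize_fold_changes(fold_changes, cutoff=1.2):
    # Map each fold-change time series to 3 levels, as in the text:
    # -1 if X < -cutoff, 0 if -cutoff <= X <= cutoff, +1 if X > cutoff.
    return {gene: [-1 if x < -cutoff else 1 if x > cutoff else 0 for x in series]
            for gene, series in fold_changes.items()}

def drop_constant_genes(levels):
    # Exclude genes whose discretized levels are constant over all time points.
    return {gene: row for gene, row in levels.items() if len(set(row)) > 1}
```

A gene hovering inside the noise band at every time point is mapped to a constant 0 series and is therefore removed before structural learning.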
+{"text": "Identifying conserved and divergent response patterns in gene networks is becoming increasingly important. A common approach is integrating expression information with gene association networks in order to find groups of connected genes that are activated or repressed. In many cases, researchers are also interested in comparisons across species (or conditions). Finding an active sub-network is a hard problem, and applying it across species requires further considerations. To address these challenges we devised ModuleBlast, which uses both expression and network topology to search for highly relevant sub-networks. We have applied ModuleBlast to expression and interaction data from mouse, macaque and human to study immune response and aging. The immune response analysis identified several relevant modules, consistent with recent findings on apoptosis and NF\u03baB activation following infection. Temporal analysis of these data revealed cascades of modules that are dynamically activated within and across species. We have experimentally validated some of the novel hypotheses resulting from the analysis of the ModuleBlast results, leading to new insights into the mechanisms used by a key mammalian aging protein. Several studies rely on gene expression profiling (either using microarrays or RNA-Seq) to identify genes that are differentially expressed (DE) between treatment and control, or to find genes that are involved in a specific condition. While such studies led to useful results, proteins usually operate in complexes or cascades and are often post-transcriptionally regulated, so in many cases important genes may be missed when only using expression data. Interaction data are useful for identifying such genes. Several studies have utilized cross species expression data for studying the same condition in multiple species.
Optimally finding active sub-networks or modules in a general graph was shown to be Non-deterministic Polynomial-time (NP) hard, and several heuristic approaches have therefore been proposed. Below we describe the application of our method to the response to Francisella tularensis, a gram-negative bacterium that is highly virulent in humans; we present a search for modules that maximize the target function and discuss how such modules can be connected in time when using time series expression data to identify the progression of information within cells. We assembled gene association networks using various genomic data types, including protein-protein interactions and genetic interactions from BioGRID. Following previous work, expression values were compared to the average of the time series of mock values, assuming a Gaussian distribution for the control experiments. In the scoring, the subscript Z refers to parameters pertaining to nodes, and M is the number of nodes in sub-network j. Z\u03b2 is an empirical parameter designed to produce fewer nodes with a positive score, hence creating smaller sub-networks. Each edge score is modeled as a normally distributed random variable: Ei is the edge score defined for edge i in a module, and \u03bcE(M) and \u03c3E(M) are, respectively, the mean and SD of edge scores calculated for a module of size M. Computing the background statistics for the edge score component is less straightforward, as it is a function of the number of nodes: the number of nodes in a sub-network sets minimum and maximum limits on the number of edges in that sub-network. To learn the conditional distribution of edge scores (\u03bcE(M), \u03c3E(M)) we used an iterative approach based on randomized modules, calculating the mean and standard deviation of the edge score distribution over random modules for every possible module size M between 1 and 150. The node and edge score components are combined with a weight W; the optimal value of W is determined based on the parameter selection criterion described in the Section titled 'Parameter selection criterion' below.
A higher W means more emphasis is placed on the connectivity. Greedy search was previously shown to produce good results when searching for sub-networks, and our search procedure follows this approach. Highly overlapping sub-networks are merged: the overlap is assessed by counting the appearances of an edge between node a and node b in any module, divided by the maximum number of appearances of node a or node b in any module, and comparing this ratio to a user-defined cutoff. In set notation, E is the set of all edges, e is an edge connecting nodes a and b, Mj is a module identified by iterator j over all possible modules, and I(j,x) is the corresponding indicator function. Modules with fewer than four nodes were omitted. The best method to determine parameter values (such as W in our model) is to use a gold standard set (e.g. known modules in the condition) and, in a training procedure, choose values that lead to the best recovery of such known modules. However, in our case little is known about modules that are activated during different types of infection, and we expect this to be a general problem for other studies as well. Thus, to determine the value of this parameter we searched for values that optimize three general criteria (so that the procedure is applicable across a wide range of conditions being studied), the first being the percentage of modules that contain uniquely enriched Gene Ontology (GO) biological process terms (i.e. terms that are found to be significantly enriched in only one module). In order to evaluate the performance of ModuleBlast we generated random networks that preserve the degree distribution of the corresponding real networks. We tried two randomization methods: node expression value shuffling and edge switching. The first method is based on shuffling the expression values in each of the species, hence preserving the exact same topology of the original network.
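A greedy module search of the general kind described above (seed a node, repeatedly add the adjacent node that most improves the module score) can be sketched as follows; the scoring function and stopping rule are illustrative, not ModuleBlast's exact implementation:

```python
def greedy_module(seed, neighbors, score, max_size=150):
    # Grow a module from `seed`, greedily adding the adjacent node that
    # yields the largest score improvement; stop when no addition helps.
    # neighbors: dict node -> set of adjacent nodes
    # score: callable taking a frozenset of nodes, returning a float
    module = {seed}
    current = score(frozenset(module))
    while len(module) < max_size:
        frontier = set().union(*(neighbors[n] for n in module)) - module
        if not frontier:
            break
        best = max(frontier, key=lambda n: score(frozenset(module | {n})))
        best_score = score(frozenset(module | {best}))
        if best_score <= current:   # no improving neighbor left
            break
        module.add(best)
        current = best_score
    return module
```

On a toy graph whose score rewards 'active' nodes and penalizes size, the search collects the connected active nodes and stops before absorbing inactive ones.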
In the edge switching method we continuously picked two edges and switched their node assignments, hence preserving the degree distribution of each node (see the Supplementary Methods). As mentioned above, for each node in our network we have at least two scores (one from each species). While we use the maximum absolute value when searching for sub-networks, once these are found we can study and compare the activation of nodes from the two species in each module using their actual values. In order to evaluate the convergence or divergence of a module Sj of size M we calculated the L1 distance over all its nodes, using the difference between the most extreme values in the genes represented by each node in the two species; here ZiA and ZiB denote the node scores for node i from species A and B, respectively. The distance obtained for each module, Diff(Sj), was compared to distances calculated for 10 000 random modules with the same number of nodes, using nodes that are part of some module. In addition, we assessed the overall activation/repression of the modules in each of the species using a similar randomization over the sum of the node scores ZiX for all nodes i in Sj from species X, yielding a per-species score Active(Sj,X); as with the Diff calculations, the obtained values were compared to 10 000 random modules with the same number of nodes. Using these measurements we classified the modules into three categories: (i) conserved modules, which show little difference between the species compared to random modules (Diff(Sj) > 0.95, i.e. at least 95% of the random modules are more divergent than the inspected module); (ii) species-specific modules, which are different between the species (Diff(Sj) < 0.05) and show high activation (Active < 0.05) in one of the species and low activation in the other; and (iii) divergent modules with opposing patterns (e.g. one is upregulated and the other is downregulated), which are highly different between the species (Diff(Sj) < 0.05) and show high activation in both species (Active(Sj,A) < 0.05 and Active(Sj,B) < 0.05). Note that several modules fall outside all three categories (divergence score between 0.05 and 0.95). While such modules are still very relevant to the condition being studied, for these modules we do not make a call regarding conservation. In order to identify cascades of activated modules, we generated a separate module set for each time point using a search procedure similar to the one described above. We next tested the overlap of module sets between time points using a hypergeometric test; if reciprocal tests were found to be significant, we defined these modules as matching. In many cases clear chains are identified throughout the time series, indicating a module that is preserved through time. Nonetheless, in earlier time points, where the overall activation of modules is lower, there may be several modules that are matched to later time points, creating a fan-in structure. In summary, our search identifies sub-networks that maximize the target function, and highly overlapping sub-networks are merged based on the quality of the overlap. These networks can be used to identify both similarities and differences between species, and the modules can be connected over time when using time series expression data to identify the progression of information within cells. A general outline of our methods for assembling and searching the combined interaction network is presented in Supplementary Figure S1. Simulation results indicate that ModuleBlast is able to identify conserved and species-specific activated modules for the vast majority of tested cases.
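The reciprocal overlap test between module sets at two time points can be sketched with a pure-Python hypergeometric tail probability; the per-side gene universes and the 0.05 cutoff are illustrative assumptions, not ModuleBlast's exact settings:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    # P(X >= k) for a hypergeometric draw: population N containing K
    # successes, sample of size n, observed overlap k.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def modules_match(mod_a, mod_b, universe_a, universe_b, alpha=0.05):
    # Two modules 'match' when the overlap is significant in both
    # directions of the reciprocal test (each side uses its own universe).
    overlap = len(set(mod_a) & set(mod_b))
    p_ab = hypergeom_sf(overlap, universe_a, len(mod_a), len(mod_b))
    p_ba = hypergeom_sf(overlap, universe_b, len(mod_b), len(mod_a))
    return p_ab < alpha and p_ba < alpha
```

With distinct universes for the two time points the two p-values can differ, which is why the test is run reciprocally before declaring a chain link.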
More importantly, the method correctly assigns the \u2018conserved\u2019 or \u2018divergent\u2019 label for almost all identified modules, indicating that such assignments are robust; see the Supplementary Results for details. We next applied ModuleBlast to study the response of alveolar macrophages (AM) from mice and cynomolgus macaques to F. tularensis Schu S4. F. tularensis causes a wide range of infections, including pneumonias of the lower respiratory tract. In order to understand how prevalent paralogous matches are, we calculated statistics for the number of members in each orthogroup between mouse and macaque. 80.1% (11 368 out of 14 200 orthogroups) have only two members, corresponding to a 1:1 orthology match (see Supplementary Data Set DS1); 16.5% (2337) have three members (corresponding to 1:2 or 2:1 matches); and only 0.3% of the orthogroups have six members or more, indicating that paralogs are not likely to cause a large shift in the results. This was also confirmed by repeating the analysis using the mean expression value over the group of paralogs instead of the most extreme value, which resulted in nearly identical modules. Using ModuleBlast we obtained 17 modules containing 188 unique nodes (Supplementary Data Set DS3), out of which 13 modules were enriched for unique GO terms. Of the 188 nodes, 52.13% show high differential expression in either of the species upon infection of the cell (see further discussion below). The modules contain 602 unique edges, indicating a high connectivity in the resulting modules. The ability of the method to identify significantly enriched modules with relevant functions while at the same time minimizing overlap between these modules highlights the ability of ModuleBlast to identify distinct mechanisms triggered by the infection.
43 unique GO terms, 36 unique KEGG pathways and 154 GSEA sets were identified across all modules, including modules enriched for chemotaxis, the transfer of antigenic peptides (TAP) complex, apoptosis and NF\u03baB regulation, all relevant to the strategies employed by F. tularensis during infection (see below). Supplementary Data Set DS4 marks GO categories that are unique to the joint analysis. We next compared our results to analyses that are based on randomization tests. We used two variants for the tests: expression value randomization (Supplementary Table S2), and edge switching, which keeps the same expression for nodes but changes the network topology (Supplementary Table S3); see the Materials and Methods section and Supplementary Methods for details. In both cases, the number of modules as well as the number of nodes identified by ModuleBlast significantly decreased. Both of these randomization methods retain the underlying network distributions. We conducted several tests to evaluate whether cross species analysis improves our ability to identify relevant modules. We first tested the ability of our method to capture insights across species by comparing the combined mouse-macaque analysis with analyses that were conducted separately for each of the species, using species-specific expression data and complete interaction networks (including genes that do not have orthologs in the other species); the comparison is summarized in the corresponding table. While several methods were developed for finding active sub-networks in a single species, as mentioned in the Introduction section, only one previous method (NeXus) has been developed for cross-species analysis. K-means clustering (K = 100) was not able to find clusters associated with relevant biological functions since, unlike the other methods, it does not use the network information, which may have made it hard to focus on a subset of the genes.
jActiveModules (a Cytoscape plugin) was also included in the comparison; like most of the other methods compared, it is designed for single-species analysis. We next assessed the modules to determine if they are conserved or divergent across the two species. Conservation is multifaceted when examining modules across species. The three options for conservation and divergence we considered are: (i) conserved modules (CM); (ii) divergent modules that are species-specific (SP), i.e. active in only one of the species; and (iii) divergent modules that show opposite expression patterns in the two species (OP). In order to evaluate the convergence or divergence of the modules we calculated the sum of differences between the species over all the nodes in the module and compared these differences to 10 000 random modules with the same number of nodes. Conservation and divergence are determined by looking at the absolute difference between expression values for nodes from both species. This may bias the analysis in cases of correlated expression changes, leading the method to miss conserved modules. In addition, variance in module node values will affect the assignment to conserved/divergent categories, as our method does not assume that divergent modules should have a consistent difference between the two species. We also assessed the overall activation of the modules in each of the species using randomization methods. Out of the 17 modules, we classified four modules as conserved (CM), two modules as species specific (SP), and two modules as divergent (OP), using the cutoffs defined in the Materials and Methods section (Supplementary Data Set DS4). Comparing the modules' overall activation and divergence revealed a high correlation between them. Several modules are directly relevant to F. tularensis infection. For example, the mouse-specific module 10 was found to be enriched for the KEGG chemokine signaling pathway and the GO process of serine/threonine kinase activity, which was shown to be involved in F. tularensis infection. Another module is relatively divergent across both species and is highly enriched for apoptosis (1E-6) and positive NF\u03baB regulation (1E-6); apoptosis was previously shown to play an important role in murine infection at 24-48 h, and the enriched TFs include nuclear factor kappa B (NF\u03baB). The above analysis was conducted by averaging the entire time series values for each gene into a single value. While useful for finding relevant functional modules, we sought to identify the entire cascade of events that occur following F. tularensis infection over time. We therefore constructed a module set for each time point separately and matched the resulting module sets in each time point to all other time points using a reciprocal hypergeometric test. General trends through the time course analysis show that the size of the modules and the number of enriched GO terms significantly increase as the response progresses. Module 124 also contains several genes that were found in matched modules at early time points. Specifically, AKT1 expression is elevated 1 h after exposure to bacteria, during F. tularensis penetration, but not at later time points. This result is similar to previous observations that F. tularensis Schu S4 infection reduces AKT1 gene and protein expression, thereby reducing cytokine response and host defenses against infection. The analysis of immune response data described above allowed us to compare ModuleBlast to other methods based on curated sets of immune response genes. However, ModuleBlast can also be used in cases where little prior biological knowledge is available. To illustrate this, and to test the ability of such an analysis to generate new testable hypotheses, we applied ModuleBlast to study data regarding the SIRT6 (sirtuin 6) protein.
This protein was implicated in the calorie restriction response and in aging. To study the mechanisms by which SIRT6 affects aging, and to determine the relationship between mouse and human responses, we used ModuleBlast to compare expression data sets from mouse embryonic fibroblast (MEF) cells extracted from a SIRT6 knockout mouse (KO mice) and a human HeLa cell line expressing shRNA against SIRT6 (KD human). We found several relevant modules. The role SIRT6 plays in controlling inflammation has been somewhat controversial, with the study that generated the KO and KD expression data on one side of the debate. We have also repeated these experiments in a human cell line, to confirm the effect on the immune response in human cells as well. Downregulating SIRT6 in TNF-\u03b1-stimulated HeLa cells results in significantly increased levels of the inflammatory gene CD38. Taken together, these results support an inflammatory response role for SIRT6. We developed a novel method to find active sub-networks within and across species. ModuleBlast provides enhanced cross species capabilities by classifying the modules into several conservation types and identifying modules that show interesting activation patterns. Identifying conserved and divergent response patterns in the context of connected groups of genes is becoming increasingly important, with applications for both basic science and drug discovery research, by highlighting biological mechanisms that are likely to be affected similarly or differently by a specific drug or treatment. We applied ModuleBlast to a time series of expression data from mouse and macaque AMs infected with F. tularensis and found several modules with high relevance to the response progression over time and to immune response mechanisms. The combined species analysis was able to identify modules that were not found in the single species analyses and shows improved statistics over other methods. We also applied it to study the effects of one of the first validated mammalian aging proteins, SIRT6. Follow-up experimental analysis, both in vitro and in vivo, confirmed the inflammatory response role for SIRT6 that was predicted by ModuleBlast. It should be noted that ModuleBlast is intended to serve as a discovery platform and not as a detailed mechanistic modeling framework for gene regulation. An underlying assumption in our analysis is that paralogous genes are likely to show similar expression patterns, so their measurements can be summarized by taking the most extreme value as the maximum possible activation over all paralogs. This assumption is not always realistic, and paralogous genes may exhibit quite distinct behavior. In addition, no \u2018gold standard\u2019 or large-scale experimentally validated information is currently available to validate the resulting modules. We have thus used GO, KEGG, GSEA and follow-up experiments to determine functional enrichment and validate some of our modules. While such an analysis cannot serve as conclusive evidence for the accuracy of the modules, the increase in the number of observed terms from all three annotation sets when compared to the randomized results, and the agreement between the predictions and the experimental results, support the modules identified by ModuleBlast. An alternative approach would be to combine the two networks directly. Such an approach would need to rely, at least in part, on network alignment methods, and these can be computationally challenging, though several heuristics for such problems have been suggested. We have implemented ModuleBlast as a general purpose, easy-to-use web tool with built-in support for various gene identifier name spaces, orthology information and underlying networks. The web tool can be used at the URL provided below, requires only gene identifiers and values to operate, and offers extensive analysis options for the results.
We hope that our tool will be a useful addition to the current set of analysis packages used by the experimental and computational communities. Supporting web server: www.expression.cs.cmu.edu/module.html. Supplementary Data are available at NAR Online."}
+{"text": "Biomarkers have gained immense scientific interest and clinical value in the practice of medicine. With unprecedented advances in high-throughput technologies, research interest in identifying novel and customized disease biomarkers for early detection, diagnosis, or drug responses is rapidly growing. Biomarkers can be identified at different levels: molecular biomarkers, network biomarkers and dynamical network biomarkers (DNBs). The latter is a recently developed concept which relies on the idea that a cell is a complex system whose behavior emerges from the interplay of various molecules, and this network of molecules dynamically changes over time. A DNB can serve as an early-warning signal of disease progression, or as a leading network that drives the system into the disease state, and thus unravels mechanisms of disease initiation and progression. It is therefore of great importance to identify DNBs efficiently and reliably. In this work, the problem of DNB identification is defined as a multi-objective optimization problem, and a framework to identify DNBs out of time-course high-throughput data is proposed. Temporal gene expression data of a lung injury with carbonyl chloride inhalation exposure has been used as a case study, and the functional role of the discovered biomarker in the pathogenesis of lung injury has been thoroughly analyzed. Recent advances in high-throughput technologies have provided a wealth of genomics, transcriptomics, proteomics, and metabolomics data to decipher disease mechanisms in a holistic and also dynamical manner. One of the recently proposed applications of DNBs is to detect early-warning signals of complex diseases. By using time-course high-throughput -omics data, and based on the principle of bifurcation in dynamical systems, Chen et al. showed that, as a system approaches a critical transition, there emerges a group of molecules whose averaged intra-class correlation coefficient drastically increases in absolute value, while its averaged inter-class correlation coefficient decreases in absolute value. This group also shows a drastic increase in the averaged standard deviation of the concentrations of its constituent molecules. If these three properties hold all together, this group is called the dominant group of the system, whose emergence is an indicator of the pre-disease state reflecting the transition of the system to the disease state. Once these criteria hold, they imply that the concentrations of molecules in the dominant group tend to increasingly fluctuate while behaving dynamically in a strongly collective fashion. The dominant group can be portrayed as a subnetwork which emerges at a particular time-point during the disease progression, and therefore characterizes dynamical features of the underlying system. Hence, it can be considered a dynamical network biomarker. Detecting DNBs as disease early-warning signals is a recently proposed topic in the field, and the major effort to date has been devoted to the theoretical derivation of DNB properties and its application to different diseases. Little effort, however, has been devoted to the computational identification of DNBs from high-throughput data. Identifying a DNB is computationally intractable due to the huge number of variables in high-throughput data, and thus it demands special attention from the computer science community to employ effective computational methods to detect DNBs reliably and efficiently. Here, DNB detection is defined as a multi-objective optimization problem. The overall workflow for DNB identification from high-throughput data is portrayed, and as a case study, the DNB and the pre-disease state for the lung injury with carbonyl chloride inhalation exposure were identified using time-course microarray data. Consider a high-throughput experimental setup where data is acquired for a set of K samples over T time-points. Let there be n molecules under study, such that the concentrations of each molecule over the K samples at time-point t are represented as
Consequently, a matrix representing the concentration vectors of all n molecules at time t can be formed. A DNB at time-point t is a group of molecules forming a leading network that shows these properties: 1. It shows a drastic increase in the intra-class correlation coefficient at time t. 2. It shows a decrease in the inter-class correlation coefficient at time t. 3. Its constituent molecules show drastically increased fluctuation at time t. These three properties can be combined together to construct a composite index. In this formulation, all the mutual correlations are used to derive the composite index for DNB detection, which means that the DNB as well as the whole interactome are treated as complete graphs. It implies that co-expressions among any two genes or proteins are functionally important regardless of whether the two genes or proteins are directly linked in the actual cellular interactome. However, the DNB criteria can be refined for incomplete graphs, which may require revisions in the theoretical derivation of DNB properties. The re-derivation of the DNB criteria for incomplete graphs is the author\u2019s plan for future work. A multi-objective optimization problem is an optimization problem that involves multiple objective functions, formulated as the minimization of a vector of objectives over the solution space X. In non-trivial multi-objective optimization problems where the objective functions are conflicting, no feasible solution that simultaneously minimizes all objective functions typically exists. Therefore, attention is paid to Pareto optimal solutions, i.e., solutions that cannot be improved in any of the objectives without deteriorating at least one of the other objectives. The set of Pareto optimal solutions is referred to as the Pareto optimal set, and the corresponding objective vectors are called the Pareto front.
For many problems, the number of Pareto optimal solutions is enormous, and a multi-objective optimizer usually aims to identify a representative set of solutions which 1) lie on the Pareto front, and 2) are diverse enough to represent the entire range of the Pareto front. A popular approach to generate Pareto optimal solutions is to use evolutionary algorithms (EAs). EAs are generic meta-heuristic optimization algorithms whose procedure begins with a population of solutions usually generated at random. They then iteratively update the current population to generate a new population by the use of four main operators, namely selection, crossover, mutation and elite-preservation. The operation stops when one or more pre-specified termination criteria are met. The use of a population of solutions allows an EA to find multiple optimal solutions, thereby facilitating the solution of multi-objective optimization problems. Furthermore, EAs have essential operators to converge towards a set of non-dominated points which are as close as possible to the Pareto-optimal front, and yet diverse among the objectives. Currently, most evolutionary multi-objective optimization algorithms apply Pareto-based ranking schemes. A standard example is the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), in which, after non-dominated sorting, a crowding distance is calculated for each individual. Crowding distance is a measure of how close an individual is to its neighbors. NSGA-II selects individuals based on the rank and the crowding distance. NSGA-II, as well as other evolutionary multi-objective optimizers, belongs to the a posteriori methodologies. In a posteriori methods, a representative set of Pareto optimal solutions is first found, and then the decision maker (DM) should choose one of the obtained points using higher-level information.
The DM is expected to be an expert in the problem domain, although several decision-making support methods have been developed to aid the DM in the selection of the preferred solutions. A DNB shows an early yet strong signal indicating the emergence of the pre-disease state. Hence, a DNB should meet two objectives simultaneously: 1) it should show a strong signal, measured in terms of the composite index I, which ensues from the sudden deterioration prior to the critical transition, and 2) it is the leading network of such a critical transition. These two objectives are conflicting in nature, as a leading network showing a significantly high signal is not necessarily the one with the strongest signal throughout the whole spectrum of the disease progression. Hence, DNB identification can be thought of as a bi-objective optimization problem to find a group of molecules whose composite index is maximized at the current time-point t while minimized over the prior time-points, where T denotes the latest sampling time of the experiment. Prior to using any evolutionary algorithm, an appropriate representation of individual solutions should be sought; here, each solution is encoded such that element i corresponds to molecule i. Several parameters, such as \u03b8, the crossover and mutation rates, the population size, and the maximum number of iterations, are set initially. At each time-point t, NSGA-II returns a Pareto set and the corresponding frontiers. The Pareto set then undergoes statistical significance analysis to assess how likely it is to have occurred by sampling error alone. Details of the significance assessment are discussed in the next section.
The pre-disease stage is considered to be the time-point whose Pareto set p-value is the smallest. This generic rule is based on the idea that the lower the p-value is, the more significant the Pareto signals are, and thus the more likely the corresponding time-point is to be the pre-disease state. In practice, however, Pareto sets should be investigated more deliberately, particularly when there exist multiple time-points with very close p-values. In such cases, some higher-level analyses of Pareto molecules facilitate the discovery of context-specific DNBs. Once the pre-disease state is identified, the final task is to select one of the non-dominated solutions of the corresponding Pareto set as an early-warning biomarker of the disease under study. Here, it is assumed that the DM looks for the solution closest to the ideal solution which optimizes all the objectives simultaneously, i.e., both objectives are 0.0. To estimate the statistical significance of the Pareto set at time t, one first needs to generate a \u201cnull\u201d sample which preserves the structure of the observed sample. Accordingly, for each individual in the Pareto set at time t, an equivalent random individual was generated which has an equal number of molecules, but randomly chosen from the pool of molecules under study. Then, for the Pareto set and its equivalent random set, the distances of the objective vectors of the constituent individuals from the optimum were compared by a nonparametric statistical hypothesis test, i.e., the Wilcoxon-Mann-Whitney test, yielding a p-value indicating how likely it is that the two samples come from the same population. In the original study, microarray experiments were conducted to investigate the molecular mechanism of phosgene-induced lung injury. To acquire control and case samples, two groups of CD-1 male mice were exposed to air and phosgene, respectively. Lung tissue was collected from air- or phosgene-exposed mice 0.5, 1, 4, 8, 12, 24, 48, and 72\u2009h after exposure. 
The details of the experiment are available in the original paper. As a case study, the time-course gene expression data of lung injury with carbonyl chloride inhalation exposure was chosen, primarily due to the high temporal resolution of the corresponding dataset. The dataset was obtained from an experiment on a toxic-gas-induced lung injury associated with pulmonary edema. The adopted microarray originally has 22,690 probe sets. The probe sets were mapped to the corresponding NCBI Entrez gene symbols using the GEO annotation file. All probes with no correct corresponding gene symbols were screened out, and multiple probes corresponding to the same gene were aggregated by averaging; 13,214 genes were left for the subsequent analysis. An array usually contains tens of thousands of probes. Many of the corresponding genes, however, do not play any significant role in the context of interest. Therefore, it is important to reduce the dimensionality by filtering out uninformative genes prior to the subsequent learning steps. Differential expression (DE) analysis is typically used to identify informative genes whose expression levels significantly change between two sample groups.
Here, at each time-point, a modified version of the t-test was applied and p-values were obtained. Due to the multiple hypothesis testing, p-values were corrected by using false discovery rate (FDR) estimation, and genes whose q-values were more than 0.01 were filtered out. Although DE analysis can screen out many genes whose expressions do not significantly change across conditions, it does not assure that the statistically significant differences are also large enough to be biologically meaningful. The last preprocessing step was to normalize the case samples with the corresponding control samples in order to make the expressions of genes comparable over different time-points. Accordingly, at each time t, the expression of each gene xi was centered by the mean of xi\u2032s expressions across control samples at time t, and then divided by the standard deviation of xi\u2032s expressions over the same set of controls. T was considered to be 12\u2009h, as the disease was well progressed afterwards, given that 50\u201360% mortality was routinely observed after 12\u2009h. DNB identification was conducted following the workflow described above. According to the p-values, t\u2009=\u20094\u2009h is the highest-ranked candidate for the pre-disease state; 4\u2009h is the earliest time-point whose Pareto solutions show strong and yet unique signals. Times 8\u2009h and 12\u2009h show strong signals, but their higher p-values indicate less significant shifts from the null distributions, suggesting that the whole system at these time-points is relatively highly disrupted, as reflected by the averaged standard deviation of all molecules at time 8\u2009h. Pareto sets were generated for times 1\u2009h\u201312\u2009h, and the behaviors of the solutions\u2019 composite indices over the whole spectrum of the disease progression were plotted. The p-value at time 1\u2009h is the second lowest. However, the Pareto solutions at time 1\u2009h have a recurrent pattern with a higher peak at 8\u2009h.
This implies that the corresponding molecules are possibly disease genes rather than pre-disease biomarkers. Moreover, historically, the most severe phosgene-induced acute lung injury in this mouse model ranges from 4\u2009h to 12\u2009h after exposure. After identifying 4\u2009h as the pre-disease state, its DNB was chosen to be a Pareto solution whose objective vector was closest to the ideal objectives. The identified DNB contained 16 genes, as listed in the corresponding table. In order to visualize how the identified DNB emerged at 4\u2009h, DNB dynamics over the two consecutive time-points of 1\u2009h and 4\u2009h were depicted. DNBs incorporate temporal alterations with structural information to provide a holistic view of biological systems and to give a deeper insight into the physiological and pathological mechanisms at the molecular level. DNBs have great significance in disease classification and monitoring, therapeutic response evaluation, early diagnosis, and the understanding of molecular pathogenesis. The focus of this paper, however, is on one of the key applications of DNBs, that is, to detect early-warning signals of complex diseases and to identify the pre-disease state. Dynamical characteristics of the molecular system prior to the critical transition, or the so-called DNB criteria, were theoretically derived by Chen et al. A subsequent approach introduced the edge network to exploit higher-order statistical information among molecules. In an edge network, a node is not a molecule but a pair of molecules, and a link represents the relationship between two molecule pairs (i.e., between two edges) rather than between two molecules. By assuming a Gaussian distribution on the expression levels of each molecule, an edge network reflects the second-order statistical information of a dynamical system, and thus captures the stochastic dynamics of the original biological system as well as the node network does.
The computational approach for DNB identification was to first construct an edge network for each sample/subject using fourth-order correlation coefficient estimation. Then, those edges (i.e., molecule pairs) that appeared in the majority of the subjects\u2019 edge-networks (>75%) were thought to be closely related to the disease development, and their corresponding genes were extracted as DNB genes. The pre-disease time is the time with a significant increase in the composite index value of the identified DNB. Inspired by the core theoretical idea of DNBs, Yu et al., later Zeng et al., and recently Li et al. proposed further DNB identification methods. To investigate the advantage of the multi-objective optimization workflow proposed in this work, the identified DNB of acute lung injury (GSE2565 dataset) was compared with those achieved by the existing relevant methodologies mentioned above. According to the experimental results, by using the multi-objective optimization approach, the disease initiation signal was detected one step earlier. This is a noteworthy achievement in \u201cpredictive medicine\u201d, which facilitates earlier therapeutic interventions and thus improves the chance of disease prevention. Also, according to the biological experiments, the most severe physiological effects occurred within the first 8\u2009hours after exposure, resulting in an increase of pulmonary edema and a decrease of the survival rate. Therefore, the system has potentially entered into the disease stage after 8\u2009hours, and predicting 8\u2009h as the pre-disease state is not well-compatible with the experimental observations. Moreover, the distribution of differentially expressed genes at different time-points was examined, where t denotes the pre-disease time as predicted by the corresponding method.
In addition to this, the p-values of the compared methods were examined. Lastly, the DNB signal intensity was measured as the escalation of the composite index at the pre-disease time as compared to the previous time-point. In conclusion, the multi-objective optimization workflow detected a reasonably-sized DNB which predicts the disease initiation earlier in time yet with a strong signal. Together, all these pieces of evidence verify the effectiveness of the proposed methodology as compared to the previous DNB works. Extending the Core DNB: The core DNB was first extended with protein-protein interactions (PPIs). The idea is that DNB proteins may bind to and thus trigger other proteins even though the mRNAs of the interacting partners are not significantly expressed according to the microarray data\u2014possibly due to the fact that the degradation rate of mRNAs is usually much faster than that of proteins. Extracting Context-specific Targets of the Extended DNB: Now the question is whether the EDNB can drive the regulation of genes relevant to the underlying pathogenesis. The target-genes (TGs) of the EDNB were retrieved from the ORTI database, a rich and publicly-available database of experimentally-validated mammalian transcriptional interactions (http://orti.sydney.edu.au/). However, ORTI is a generic database compiling regulatory interactions from different experiments. Therefore, all the retrieved TGs are not necessarily regulated in this particular context/disease. To extract context-specific targets, a subset of the retrieved TGs which are significantly deregulated during the disease periods was selected. The hypergeometric distribution calculates the statistical significance of having drawn a specific number of successes k in n draws, without replacement, from a population of size N that contains exactly K successes. Here, N is the total number of curated gene-disease associations in DisGeNET, n is the number of the EDNB\u2019s deregulated TGs (293 genes), K is the total number of genes associated with ALI in DisGeNET (61 genes), and k is the number of the EDNB\u2019s deregulated TGs associated with ALI in DisGeNET (10 genes).
The ALI-associated TGs were listed in the corresponding table. According to the above-mentioned analysis, DNB genes and their interacting partners are potential regulators of ALI-associated genes, and therefore are likely to play a role in the pathogenesis of acute lung injury. In order to better understand the underlying cellular mechanisms, the EDNB was functionally profiled using Gene Ontology (GO). Overall, the enriched GO terms fairly support cellular processes underlying ALI. Further investigation of the genes annotated with these GO terms highlighted several key players. Cdk1 is a highly conserved protein that functions as a key player in cell cycle regulation; Cdk1 inhibition was shown to reduce lung damage in a mouse model of ventilator-induced lung injury. Carbohydrate sulfotransferases (Chst family) are sulfotransferase enzymes that function in a wide range of cellular processes, from structural purposes to extracellular communications. The role of small interfering RNA (siRNA) targeting Chst3 has been studied in a mouse model with pulmonary emphysema. It has been shown that Chst3 siRNA diminishes the accumulation of excessive macrophages and the mediators, and thus accelerates the functional recovery from the injury. C-type lectin (Clec) is a type of carbohydrate-binding protein domain known as lectin. Proteins with C-type lectin domains have functional roles in the immune response to pathogens and in apoptosis. Tenascins are extracellular matrix glycoproteins with anti-adhesive effects. They are abundant in the extracellular matrix of developing vertebrate embryos and reappear around healing wounds. Tenascin-C is the most intensely studied member of the family and has been shown to be an important mediator of TGF-\u03b2-mediated fibrosis in the pathogenesis of acute lung injury. Xanthine dehydrogenase (Xdh) is a protein involved in the oxidative metabolism of purines.
A number of studies have supported the role of XDH in the pathogenesis of acute lung injury, hypoxia, and relevant diseases. The GO term enrichment analysis verifies the functional relevance of the DNB genes and their interacting partners as a dynamical network biomarker. In summary, the statistical analyses of the functional role of the identified DNB, together with the manual literature search for relevant information, strongly support the effectiveness of the proposed methodology from the biological perspective. How to cite this article: Vafaee, F. Using Multi-objective Optimization to Identify Dynamical Network Biomarkers as Early-warning Signals of Complex Diseases. Sci. Rep. 6, 22023; doi: 10.1038/srep22023 (2016)."}
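The three DNB criteria described above (rising within-group standard deviation, rising within-group correlation, falling group-to-rest correlation) are typically folded into one score. The exact formula was lost in this text's extraction, so the following is an illustrative sketch of a common formulation (following Chen et al.): average standard deviation times average within-group |PCC|, divided by average group-to-rest |PCC|. The helper names are hypothetical; a group of at least two molecules is assumed.

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def composite_index(group, others):
    """Composite DNB index at one time-point (illustrative formulation).

    group/others: lists of per-molecule concentration vectors, one value
    per sample.  Combines the three DNB criteria: average standard
    deviation inside the group, average |PCC| among group members, and
    average |PCC| between the group and the remaining molecules.
    """
    sd_avg = mean(pstdev(m) for m in group)
    pcc_in = mean(abs(pearson(a, b))
                  for i, a in enumerate(group) for b in group[i + 1:])
    pcc_out = mean(abs(pearson(a, b)) for a in group for b in others)
    # Guard against division by zero when the group decouples entirely.
    return sd_avg * pcc_in / max(pcc_out, 1e-12)
```

In the bi-objective setting described above, this index would be maximized at the candidate pre-disease time-point while being minimized over the prior time-points.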
+{"text": "Restricted Boolean networks are simplified Boolean networks in which each regulatory interaction between genes is required to be either purely negative or purely positive. Higa et al. proposed a three-rule algorithm to infer a restricted Boolean network from time-series data. However, the algorithm suffers from a major drawback, namely, it is very sensitive to noise. In this paper, we systematically analyze the regulatory relationships between genes based on the state switch of the target gene and propose an algorithm with which restricted Boolean networks may be inferred from time-series data. We compare the proposed algorithm with the three-rule algorithm and the best-fit algorithm based on both synthetic networks and a well-studied budding yeast cell cycle network. The performance of the algorithms is evaluated by three distance metrics, among them the normalized-edge Hamming distance and the steady-state distribution distance \u03bcssd. Results show that the proposed algorithm outperforms the others according to the first two metrics, while its \u03bcssd is intermediate between the best-fit and the three-rule algorithms. Thus, our new algorithm is more appropriate for inferring interactions between genes from time-series data. A key goal in systems biology is to characterize the molecular mechanisms governing specific cellular behaviors and processes. This entails selecting a model class for representing the system structure and state dynamics, followed by the application of computational or statistical inference procedures to reveal the model structure from measurement data. The models of gene regulatory networks run the gamut from coarse-grained discrete networks to the detailed description of stochastic differential equations. Boolean networks, and the more general class of probabilistic Boolean networks, are one of the most popular approaches for modeling gene networks. The inference of gene networks from high-throughput genomic data is an ill-posed problem: there exists more than one model that can explain the data. 
The search space for potential regulator sets and their corresponding Boolean functions generally increases exponentially with the number of genes in the network and the number of regulatory genes. It is particularly challenging in the face of small sample sizes, because the number of genes typically is much greater than the number of observations. Thus, estimates of modeling errors, which themselves are determined from the measurement data, can be highly variable and untrustworthy. Many inference algorithms have been proposed to elucidate the regulatory relationships between genes. Mutual information (MI) is an information-theoretic approach that can capture the nonlinear dependence between random variables. REVEAL is the first information-based algorithm to infer the regulatory relationships between genes. A restricted Boolean network is a simplified Boolean model that has been used to study the dynamical behavior of the yeast cell cycle. This paper is organized as follows: Background information and definitions are given in Section 2. Section 3 presents a brief introduction to the three rules, after which we systematically analyze the regulatory relationships between input genes and their target gene and propose an inference algorithm. Sections 4 and 5 present results for the simulated networks and for the cell cycle model of budding yeast. Concluding remarks are given in Section 6. A Boolean network G is defined by a set of nodes V\u2009=\u2009{x1, \u2026, xn}, xi \u2208 {0, 1}, and a set of Boolean functions F\u2009=\u2009{f1, \u2026, fn}. Here xi represents the expression state of gene i, where xi\u2009=\u20090 means that the gene is off, and xi\u2009=\u20091 means it is on. Each node xi is assigned a Boolean function fi with ki specific input nodes, which is used to update its value. Under the synchronous updating scheme, all genes are updated simultaneously according to their corresponding update functions. 
The network's state at time t is represented by a binary vector x(t)\u2009=\u2009(x1(t), \u2026, xn(t)). In the absence of noise, the state of the system at the next time step is x(t\u2009+\u20091)\u2009=\u2009(f1(x(t)), \u2026, fn(x(t))). For a Boolean network with perturbation probability p, the corresponding Markov chain possesses a steady-state distribution. It has been hypothesized that attractors or steady-state distributions in Boolean formalisms correspond to different cell types of an organism or to cell fates. In other words, the phenotypic traits are encoded in the attractors. In simulations on networks of N\u2009=\u200920 genes, the proposed algorithm achieved competitive \u03bcssd. This result indicates that the proposed algorithm may be more appropriate for recovering regulatory relationships between genes under the restricted Boolean network model. The model space of Boolean networks is huge, and from the point of view of evolution, it is unimaginable for nature to select its operational mechanisms from such a large space. Restricted Boolean networks, as a simplified model, have recently been extensively used to study the dynamical behavior of the yeast cell cycle process. In this paper, we propose a systematic method to infer a restricted Boolean network from time-series data. We compare the performance of the three-rule, best-fit, and proposed algorithms both on simulated networks and on an artificial model of budding yeast. Results show that our algorithm performs better than the three-rule and best-fit algorithms according to the distance metrics mentioned above. The main advantage of the proposed algorithm is that it is more robust to noise than both the three-rule algorithm and the best-fit algorithm. The proposed algorithm infers the regulatory relationships according to the consecutive state transitions of the target gene, instead of the small perturbations between two similar states as in the three-rule algorithm. Simulation results show that noise in the data may induce many incorrect constraints in the three-rule algorithm. This hinders its application to noisy samples. 
Moreover, the proposed algorithm can capture the intrinsic state transitions of the system. For inference from limited data, a priori knowledge is typically a necessity. A priori knowledge can be incorporated into the proposed algorithm and helps by reducing the search space. For instance, an algorithm might assume a prescribed attractor structure; or, if it is known a priori that x regulates y, then we only consider those regulator combinations containing x, thereby reducing the search space. Additionally, different methods may focus on different aspects of the inference process. For example, the best-fit algorithm and CoD are mainly concerned with the fitness of the data, whereas MDL-based methods intend to reduce structural risks. Future work will involve combining MDL with the proposed algorithm to reduce the rate of false positives. In the Boolean formalism, a single time series (or trajectory) can be treated as a random walk across the state space. It is not possible to recover a complex biological system from just one short trajectory by any method; using heterogeneous data and some a priori knowledge may alleviate this limitation. The authors declare that they have no competing interests."}
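The best-fit idea contrasted above (for each candidate regulator set, score the Boolean function that disagrees with the fewest observed synchronous transitions) can be sketched in a few lines. This is an illustrative toy, not the authors' three-rule or best-fit implementation, and `best_fit_regulators` is a hypothetical name:

```python
from itertools import combinations

def best_fit_regulators(states, target, k=2):
    """Exhaustively search regulator sets of size <= k for gene `target`.

    states: list of binary state vectors observed at consecutive time
    steps under synchronous updating.  For each candidate regulator set,
    the best Boolean function outputs, for every input pattern, the
    majority next-state value of the target; its error is the number of
    disagreeing transitions.  Returns (error, regulator_indices).
    """
    n = len(states[0])
    best = None
    for size in range(1, k + 1):
        for regs in combinations(range(n), size):
            counts = {}  # input pattern -> (#transitions to 0, #transitions to 1)
            for t in range(len(states) - 1):
                key = tuple(states[t][r] for r in regs)
                out = states[t + 1][target]
                c0, c1 = counts.get(key, (0, 0))
                counts[key] = (c0 + (out == 0), c1 + (out == 1))
            err = sum(min(c0, c1) for c0, c1 in counts.values())
            if best is None or err < best[0]:
                best = (err, regs)  # strict '<' keeps the smallest set on ties
    return best

# Toy trajectory of 3 genes in which x2(t+1) = x0(t) AND x1(t)
states = [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 1),
          (1, 1, 0), (0, 0, 1), (1, 1, 0), (0, 0, 1)]
```

On this trajectory the search recovers the AND regulators of gene 2 with zero inconsistency: `best_fit_regulators(states, target=2)` returns `(0, (0, 1))`, while no single regulator explains all transitions.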
+{"text": "Historically seen as a benign disease, it is now becoming clear that Plasmodium vivax can cause significant morbidity. Effective control strategies targeting P. vivax malaria are hindered by our limited understanding of vivax biology. Here we established the P. vivax transcriptome of the Intraerythrocytic Developmental Cycle (IDC) of two clinical isolates in high resolution on the Illumina HiSeq platform. The detailed transcriptome map generates new insights into regulatory mechanisms of individual genes and reveals their intimate relationship with specific biological functions. A transcriptional hotspot of vir genes observed on chromosome 2 suggests a potential active site modulating immune evasion of the Plasmodium parasite across patients. Compared to other eukaryotes, P. vivax genes tend to have unusually long 5\u2032 untranslated regions and also present multiple transcription start sites. In contrast, alternative splicing is rare in P. vivax, but its association with the late schizont stage suggests some significance for gene function. The newly identified transcripts, including up to 179 vir-like genes and 3018 noncoding RNAs, suggest an important role of these gene/transcript classes in strain-specific transcriptional regulation. Malaria remains a global problem impacting hundreds of millions of lives around the world. Over recent years, significant progress has been made in reducing the global burden of Plasmodium falciparum, and major steps towards an operational vaccine have been recently made. The success against P. falciparum has, though, highlighted how little progress has been made to control the second species of the malaria pathogen: the geographically most widely spread parasite, P. vivax. It is of particular concern that in areas where P. falciparum control is successful, P. vivax becomes dominant. The burden of P. vivax has been significantly underestimated in the past. The unique biology of P. vivax, now being fully appreciated, calls for specific disease control programs that need to be different from those that until now have been used in most endemic regions to target P. falciparum.
The specificities of vivax malaria are underlined by the unique biology of the pathogen itself and its interaction with the host. The absence of a continuous in vitro culture system limits the understanding of the molecular mechanisms that characterize this parasite. Nonetheless, publication of the genome provided initial insights into P. vivax biology, and subsequent high-throughput sequencing studies have followed, including work on P. vivax relapses resulting from the activation of the dormant hypnozoite stage. Genome-wide transcriptional analyses throughout the life cycle stages of Plasmodium using microarray approaches revealed many features of gene expression and utilization with respect to both the parasite growth and its adaptation to the host. Comparable efforts in P. falciparum parasites failed in deriving such information due to the high abundance of AT-rich low-complexity sequences in the genome. Although the P. vivax genome is highly syntenic to P. falciparum with only a handful of syntenic breaks within the 14 chromosomes, it has a higher CG content (45% CG) and virtually lacks the A-T rich repeats. This study complements the previous microarray study of the P. vivax transcriptome by measuring absolute transcript levels. In addition, we complete and expand the P. vivax gene annotation by characterizing UTRs and transcriptional start sites (TSS) for the majority of the genes, identifying new AltSpl events and also previously unknown protein-coding and noncoding transcripts. To study P. vivax transcription in higher resolution, we applied massively parallel sequencing to the identical RNA samples previously used in the microarray study. P. 
vivax samples were collected from two patients at the Shoklo Malaria Research Unit, Mahidol University, Mae Sot, Northwest Thailand (here defined as SMRU1 and SMRU2). Samples were highly synchronous, containing 100% and 83% ring stages at the time of blood collection, as validated by microscopy-based parasite morphology counts. High levels of synchrony were subsequently maintained over the 48\u2009hour ex vivo cultures, with morphological stage exclusivity reaching 100\u201394% for trophozoites and 82\u201365% for schizonts at the given time points. To reconstruct the transcriptional cascade of the Plasmodium parasites, libraries from each time point were sequenced. Overall, 2.2\u2009M to 5.1\u2009M (median\u2009=\u20093.1\u2009M) read pairs per library mapped to the 22.6\u2009M chromosome sequences, which resulted in 20\u201345 fold (median\u2009=\u200927) coverage of the P. vivax genome. For the 13,626 exons in the total 5586 annotated protein coding genes in P. vivax, 93% showed coverage greater than 10-fold in at least one time point across the IDC in both isolates, and 88% of the exons showed the same high level of coverage (>10 fold) in the four control reference samples. Reproducibility of the results from the control reference samples was demonstrated by a Pearson Correlation Coefficient (PCC) greater than 0.99 for each pair, which indicates a high fidelity of our sequencing procedures. Our results also correlated well with two microarray-based P. vivax transcriptomic studies: (i) 3918 (75%) of the genes identified here were also found to change their overall expression by more than two fold across the IDC by Westenberger et al., and (ii) a similar agreement was observed with the second study. Clusters of adjacent vir genes, some with expression peaking at the extremes of the IDC, were observed. Notably, three clustered vir genes are expressed consistently throughout the IDC with imperceptible expression variance (bottom 10% percentile). 
The cluster on chromosome 6 contains four highly expressed vir genes that are interspersed by two additional vir genes (with low expression) but also by other genes involved in host-parasite interactions including PHIST, Duffy Binding Protein (DBP), and a putative exported protein. These additional genes within the chromosome 6 cluster are also expressed at high levels. Until today, little is known about regulatory mechanisms controlling the vir genes as well as other factors of the host-parasite interaction. Our data open an intriguing possibility of the existence of transcriptionally active sites in at least two chromosomal locations that control transcription of antigenic determinants in multiple P. vivax isolates/clones.To investigate the biological significance of the absolute expression levels, we stratified the 5226 detected genes into five groups (20th percentiles each group) based on their peak levels of mRNA abundance during the IDC . The enr6/32 see online. AP2-like transcription factors . While the length of the 5\u2032UTRs varied from 0 to 3603bp with a median of 295bp, the 3\u2032UTRs were significantly shorter (P\u2009<\u20092.2e-16) , Mus musculus (821\u2009bp) and Danio rerio (445\u2009bp), but it is still somewhat longer compared to Saccharomyces cerevisiae (166\u2009bp)Caenorhabditis elegans (140\u2009bp)Schizosaccharomyces pombe (203\u2009bp)To date little is known about both 5\u2032 and 3\u2032 UTRs of -16) see online. P. vivax genes exhibit an opposite trend (295\u2009bp of 5\u2032UTR versus 203\u2009bp of 3\u2032UTR) , the membrane of the infected erythrocytes (median 393\u2009bp), mRNA silencing , terpenoid metabolism (median 368\u2009bp) and protein kinases (median 321\u2009bp). The shorter 5\u2032UTRs transcripts are enriched in genes associated with tRNA modifications (median 225\u2009bp) or pyruvate metabolism (median 275\u2009bp). 
On the other hand, genes encoding proteins of the infected erythrocyte membranes and P bodies have long 3\u2032UTRs, with median sizes of 227\u2009bp and 268\u2009bp. These functional enrichments suggest a biological significance of the UTRs, possibly in posttranscriptional regulation of gene expression. Moreover, the gene functionalities identified by these enrichment analyses represent potential growth limiting/regulating processes, and thus in the future it will be interesting to explore the 5\u2032 as well as 3\u2032 UTRs for the presence of regulatory elements. In most eukaryotes, the 5\u2032UTR is shorter than its 3\u2032 counterpart for the majority of genes. To investigate transcription start sites in P. vivax, we used the method of rapid transitions of starting tags and identified multiple TSS for 1491 coding genes. As a result, we identified 102 AltSpl events in highly transcribed genes, and quantified each AltSpl event by counting the number of reads spanning minor splicing junctions at each time point. A significant signal (P\u2009=\u20090.0122) was observed for 23 genes (for SMRU1, and 21 for SMRU2) among the total of 61 genes which had alternative protein products. This suggests that AltSpl does not regulate the timing of gene expression but instead may help in diversifying gene functions, particularly during the late schizont stage. Functional enrichment analysis revealed that AltSpl in the UTRs is enriched amongst genes encoding S-Glutathionylated proteins, while AltSpl in the coding regions associates with genes encoding nucleic acid binding proteins. Although these may be somewhat biased by overrepresentation of highly transcribed genes, the distinct functional groups associated with AltSpl suggest its biological significance in Plasmodium. This is supported by a statistically significant overlap between the alternatively spliced gene products in P. vivax and P. falciparum, with putative introns retained in transcripts of 421 genes. 
Next we explored the de novo assembled transcripts generated by sequencing of the control reference samples. Here we identified a considerable number of transcripts that were not within the current annotations of the P. vivax genome (type-I) or that did not map to the P. vivax or human genomes (type-II); the ORFs predicted for these transcripts are extremely short, with a median of 24\u2009bp (Methods), and a subset shows significant similarity (P\u2009<\u20090.05) to known Plasmodium proteins recorded in the RefSeq database. The majority of these belong to the vir gene family and other factors of host-parasite interactions. The 2178 type-II transcript contigs do not map to the P. vivax genome but are clearly detectable in the RNA-Seq data from all four control reference samples. Although they are as short as type-I, the type-II transcripts are more likely to represent partial (not full length) sequences of coding transcripts, with coding sequences including 1536 Plasmodium RNAs, 36 human RNAs and 33 RNAs of other eukaryotes. Amongst these were antigenic factors: 42 mapped to merozoite invasion genes, 39 to red blood cell binding proteins and 13 to putative members of the early transcribed membrane protein (ETRAMP) gene family. That is consistent with previous studies showing that vir gene expression varies between P. vivax strains and isolates as a contributing factor to antigenic variation. In summary, this RNA-Seq analysis of the P. vivax IDC provides an in-depth dataset analyzing characteristics of RNA metabolism ranging from temporal and absolute abundance of mRNA to transcript structure including UTRs and splice forms. Moreover, we expand the current annotation of the P. vivax genome with a comprehensive list of transcripts that includes mainly genes encoding proteins of host-parasite interactions but also non-coding RNA transcripts. 
Importantly, this dataset comprises the global transcriptome status of multiple developmental stages of the IDC, which allows studying the overall dynamics of RNA metabolism in P. vivax. The presented dataset opens new possibilities for studies of key features of gene expression, and with that of unique properties of P. vivax pathophysiology and pathogenesis, that will potentially bring new strategies for malaria intervention. This study was approved by the relevant local ethics committees and the Oxford Tropical Research Ethics Committee. The methods were carried out in accordance with the approved guidelines. All experimental protocols were approved by the Oxford Tropical Research Ethics Committee and the NTU Institutional Review Board (IRB11/08/03). As we studied the identical RNA samples used in our previous microarray work, the details of sample collection and RNA isolation have been described in that paper. The eight sequenced SMRU2 samples were previously labeled as time points 2\u20139, corresponding to the 6\u2009hr, 12\u2009hr, 18\u2009hr, 24\u2009hr, 30\u2009hr, 36\u2009hr, 42\u2009hr and 48\u2009hr of ex vivo cultures. Overall, 250\u2009ng of total RNA per sample was used to construct each sequencing library. Four control reference samples were prepared by mixing 250\u2009ng of all time course samples for both the SMRU1 and SMRU2 isolates. These samples were processed with the Illumina TruSeq RNA Sample Preparation Kit v2 following the manufacturer\u2019s recommendations. The libraries were then normalized to 2\u2009nM and validated by qPCR on an Applied Biosystems StepOne Plus instrument, using Illumina\u2019s PhiX control library as the standard. After qPCR validation, the libraries were pooled and sequenced at a final concentration of 11.5\u2009pM across 8 lanes of a HiSeq2000 high-output run at a read length of 100\u2009bp paired end. 
Overall, we obtained 1.7 billion read pairs from the 19 sample libraries. The raw sequencing reads were aligned to the Plasmodium vivax Salvador I genome after filtering out reads mapping to P. vivax r(t)RNA genes, which left 205\u2009M read pairs for the following analyses. The proportion of human RNAs was estimated by reads mapping to the human RNAs of the RefSeq database of Dec 2014. A total of 101\u2009M uniquely mapped read pairs were used in constructing transcriptional profiles. The proportion of unique reads to the total raw reads ranges from 3.7% to 12.2%, with a median of 5.7% per sample. First, the mRNA abundance was measured by Fragments Per Kilobase of coding exon per Million fragments mapped (FPKM) for each protein coding gene at each time point. Second, we filtered out genes with low read coverage (<10 reads) or low abundance (FPKM\u2009<\u20091) in any one of the control reference samples. The cutoff selection depends on the standard deviation of the FPKM values of the control reference samples. There were 157 genes with a total of 404 FPKM values not detectable, which is 0.52% of the whole IDC transcriptome data. All the transcriptome data were deposited into NCBI\u2019s Gene Expression Omnibus and are accessible through GEO Series accession number GSE61252 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61252). To estimate parasite age, Spearman rank correlation coefficients (SRCC) were calculated between global mRNA profiles of syntenic orthologous genes of P. vivax and P. falciparum for each isolate time point and the time points in the reference IDC transcriptome (in vitro P. falciparum Dd2 lifecycle). The stage with the peak SRCC value was assigned as the best estimate of the age of the parasites collected at each time point. The expression timing of a gene was estimated using a sine wave function. 
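The FPKM measure used above has a one-line definition. The sketch below is a generic illustration of the standard FPKM formula, not the authors' pipeline, and the example counts and lengths are invented:

```python
def fpkm(read_count, exon_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of exon per Million mapped fragments.

    read_count             -- fragments mapped to the gene's coding exons
    exon_length_bp         -- summed coding-exon length of the gene in bp
    total_mapped_fragments -- all mapped fragments in the library
    """
    kb = exon_length_bp / 1e3                 # exon length in kilobases
    millions = total_mapped_fragments / 1e6   # library size in millions
    return read_count / (kb * millions)

# Hypothetical gene: 500 fragments on 2 kb of exon in a 1 M-fragment library
print(fpkm(500, 2000, 1_000_000))  # 250.0
```

A gene would pass the abundance filter described above only if this value is at least 1 in every control reference sample.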
The expression profile of each gene is modeled using a sine function, X(t)\u2009=\u2009A\u2009sin(\u03c9t\u2009+\u2009\u03b1)\u2009+\u2009C, where t is the hour of sample collection, A is the amplitude of the expression profile across the life cycle, C is the vertical offset of the profile from zero, \u03c9 is the angular frequency, given by \u03c9\u2009=\u20092\u03c0/48, and \u03b1 is the horizontal offset (phase) of the profile from zero. For transcriptome visualization we used a phaseogram to show gene expression timing, in which genes were sorted according to their phase. A pairwise Wilcoxon test was used to compare the transcriptional profiles for each gene between isolates. Significantly differential expression is defined by the cutoffs P\u2009<\u20090.05, PCC\u2009<\u20090.5, and an average difference between paired time points of the two isolates greater than the maximum difference between corresponding controls. The differentially expressed genes are listed in the supplementary information. We reanalyzed the published microarray data against the P. vivax SalI genome of PlasmoDB release 12. All spot data were subjected to \u201cnormexp\u201d background correction followed by lowess normalization within arrays and quantile normalization between arrays using the Limma package of R. Log2 ratios of Cy5 over Cy3 intensities were calculated for each spot to represent the expression value of a particular probe, except for those with signal intensity less than 1.5 times the background intensity for both Cy5 and Cy3 fluorescence. For each gene, the expression value was estimated as the average of all probes representing it. Overall, 4989 (98%) of the 5085 genes designed on the microarray display expression profiles without missing values with SMRU1 (5004, or 98%, with SMRU2). The microarray profiles were compared with the RNA-Seq log2 ratios of FPKM. The correlation of transcriptional profiles revealed close agreement between RNA-Seq and microarray, with a median PCC of 0.95\u2009\u00b1\u20090.06 for 4973 genes with isolate SMRU1 and 0.91\u2009\u00b1\u20090.1 for 5003 genes with isolate SMRU2. 
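The sine-wave timing estimate can be illustrated with a small fitting routine. Because \u03c9 is fixed at 2\u03c0/48, the model X(t) = A sin(\u03c9t + \u03b1) + C is linear in a = A cos \u03b1 and b = A sin \u03b1, so for evenly spaced samples covering one 48-hour period the parameters follow from simple Fourier projections. This is a sketch of the general technique, not the authors' code, and the sample values are synthetic:

```python
import math

def fit_sine_profile(times, values, period=48.0):
    """Fit X(t) = A*sin(w*t + alpha) + C with w = 2*pi/period fixed.

    Assumes the samples are evenly spaced and cover one full period, so the
    sin/cos basis functions are orthogonal and projections are exact.
    """
    w = 2 * math.pi / period
    n = len(times)
    C = sum(values) / n                       # vertical offset = mean level
    a = 2.0 / n * sum(v * math.sin(w * t) for t, v in zip(times, values))
    b = 2.0 / n * sum(v * math.cos(w * t) for t, v in zip(times, values))
    A = math.hypot(a, b)                      # amplitude
    alpha = math.atan2(b, a)                  # phase (horizontal offset)
    return A, alpha, C

# Synthetic gene profile sampled every 6 h over 48 h
w = 2 * math.pi / 48
ts = [6 * i for i in range(8)]
ys = [2 * math.sin(w * t + 1.0) + 5 for t in ts]
print(fit_sine_profile(ts, ys))  # recovers A=2.0, alpha=1.0, C=5.0
```

Sorting genes by the recovered phase \u03b1 is what produces the phaseogram described in the text.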
Moreover, RNA-Seq provides more reliable transcriptional profiles: for the 304 (80% of 381) genes which showed disagreement (PCC\u2009<\u20090.5) between the techniques, higher agreement (PCC\u2009>\u20090.5) between isolates was observed with RNA-Seq. Expression values were normalized by the average FPKM of the controls for each gene and consequently transformed to log2 ratios. We applied a medoid algorithm to calculate the center of each pathway on the \u201cwizard\u2019s hat\u201d map. In detail, for each pathway we first calculated the dissimilarity matrix for each gene, where the dissimilarity between the selected gene and the other genes is their Euclidean distance. Next, the medoid gene was chosen as the one minimizing the sum of the dissimilarity matrix. We then randomly picked the same number of genes from the \u201cwizard\u2019s hat\u201d for the same pathway and calculated their dissimilarities to the pre-known medoid gene. A Wilcoxon test was performed to compare the observed dissimilarities to the randomly generated dissimilarities. For the 738 tested pathways/gene groups, 77 (83) in isolate SMRU1 (SMRU2) are significantly (P\u2009<\u20090.01) enriched in genes with similar transcription and regulation levels, which clustered significantly at their medoids on the \u201cwizard\u2019s hat\u201d map. Only the biological process of translation (GO:0006412) was found to have genes significantly scattered on the map (P\u2009=\u20090.01) compared to random data. De novo assembly was carried out separately for the 28\u2009M uniquely mapped reads and the 26\u2009M unmapped reads from the control samples using Trinity. Reads (unmapped to the P. vivax genome and human RNAs) were trimmed to remove low quality bases from sequence ends and to clip remaining adapters using Trimmomatic. The de novo transcripts were aligned to the P. vivax genome using Blat, discarding alignments spanning a P. vivax genome region that was too short (\u2264200\u2009bp) or too long (\u226530\u2009kb) along the chromosomes. 
Finally, the 28\u2009M mapped reads resulted in 21867 to 22563 de novo transcripts (mean 22285) per control sample, and the 26\u2009M unmapped reads resulted in 7074 to 10111 de novo transcripts (mean 8897) per control sample. We detected the UTR boundaries for each protein coding gene using an approach based on rapid transitions of the starting/ending tag numbers, similar to previous RNA-Seq studies, where nL (nR) is the number of starting (ending) reads in a 20\u2009bp window on the left (right) side of a genomic position, and \u03c3 equals 1, which was estimated from calculations using 10000 randomly selected positions within ORF exons. Next, for each position upstream of the start codon and downstream of the stop codon within the region covered by the longest de novo assembled transcript belonging solely to the studied gene, we tested the probability that it falls under the null hypothesis, and the UTR boundary (or TSS) was set at the position which rejected the null hypothesis at P\u2009\u2264\u20090.05. For genes with multiple positions passing the cutoff P\u2009\u2264\u20090.05, the UTR boundary was set to the single position with the highest frequency of starting/ending reads. To sharpen the UTR boundaries, we merged the read alignments from all four control references. The putative splicing junctions or introns were defined independently of the current gene models. We aligned the de novo transcripts to the genome of P. vivax SalI and extracted all junctions spanning at least 10 and no more than 1\u2009k nucleotides along the genome sequence. The putative splicing junctions were required to be detectable in three or more controls and to contain canonical splicing sites with donor/acceptor sequences (GT/AG) at both ends. Overall, a total of 8423 putative splicing junctions (putative introns) were detected. Next, we estimated the false discovery rate (FDR) for each of the 8423 splicing junctions; the positive dataset is the 8423 splicing junctions themselves. 
The negative dataset was constructed from putative splicing junctions without known splicing site sequences (GT-AG or GC-AG), where RCi is the total read count from the four control references confirming the ith junction, and Ni is the number of times the ith junction was observed in the control references. All 8423 splicing junctions passed the cutoff of FDR\u2009<\u20091%. The splicing efficiency of a splicing junction was measured as the number of reads spanning that junction with more than 10\u2009bp matching each flanking sequence, divided by the average number of un-spliced reads at the \u201cexon-intron\u201d sites on both sides. The AltSpl events were also determined independently of the current gene models. First, the putative splicing junctions were grouped by their locations. In practice, two junctions were grouped together if their locations overlapped, and two groups were merged if any of their members overlapped. Subsequently, within each group, AltSpl events were characterized into distinct types such as alternative 5\u2032/3\u2032 splicing sites, exon skipping, or a mixture of both. To compare AltSpl between time points and isolates, the transcription level of the minor isoform was estimated by the number of reads spanning the minor splicing junction, followed by control normalization to obtain log2 ratios of FPKM. For transcripts outside the P. vivax genome map, we performed de novo assembly for mapped reads and unmapped reads separately and characterized the resulting de novo transcripts into two categories: type-I de novo transcripts, reconstructed from mapped reads and aligned to genome regions outside the current gene models; and type-II de novo transcripts, reconstructed from reads unmapped to P. vivax or human RNAs. To reduce data redundancy, we clustered the de novo transcripts into groups and used the longest transcript of each group to represent one cluster. 
A cluster is established if transcript \u201ca\u201d from control \u201cA\u201d is the reciprocal best hit of transcript \u201cb\u201d from control \u201cB\u201d by BLAT searching, with\u2009>\u2009100\u2009bp of nucleotides matched to each other. Clusters are merged if any of their members satisfy the same criteria. The final novel transcripts required reproducible reconstruction in each control. Finally, we identified 3049 type-I novel transcripts (2890 from chromosomes and 159 from the super contigs AAKMs) and 2178 type-II novel transcripts. To account for the incompleteness of the de novo transcripts, reads were aligned using BWA. P. vivax genes were annotated onto KEGG pathways, 2590 genes onto MPM pathways and 3944 genes onto GO terms. To search for homologous proteins of the novel transcripts, we used NCBI Blastx (http://blast.ncbi.nlm.nih.gov/Blast.cgi) against the NCBI Protein Reference Sequences. How to cite this article: Zhu, L. et al. New insights into the Plasmodium vivax transcriptome using RNA-Seq. Sci. Rep. 6, 20498; doi: 10.1038/srep20498 (2016)."}
+{"text": "Unsupervised analyses such as clustering are the essential tools required to interpret time-series expression data from microarrays. Several clustering algorithms have been developed to analyze gene expression data. Early methods such as k-means, hierarchical clustering, and self-organizing maps are popular for their simplicity. However, because of noise and uncertainty of measurement, these common algorithms have low accuracy. Moreover, because gene expression is a temporal process, the relationship between successive time points should be considered in the analyses. In addition, biological processes are generally continuous; therefore, the datasets collected from time series experiments are often found to have an insufficient number of data points and, as a result, compensation for missing data can also be an issue.An affinity propagation-based clustering algorithm for time-series gene expression data is proposed. The algorithm explores the relationship between genes using a sliding-window mechanism to extract a large number of features. In addition, the time-course datasets are resampled with spline interpolation to predict the unobserved values. Finally, a consensus process is applied to enhance the robustness of the method. Some real gene expression datasets were analyzed to demonstrate the accuracy and efficiency of the algorithm.The proposed algorithm has benefitted from the use of cubic B-splines interpolation, sliding-window, affinity propagation, gene relativity graph, and a consensus process, and, as a result, provides both appropriate and effective clustering of time-series gene expression data. The proposed method was tested with gene expression data from the Yeast galactose dataset, the Yeast cell-cycle dataset (Y5), and the Yeast sporulation dataset, and the results illustrated the relationships between the expressed genes, which may give some insights into the biological processes involved. 
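The resampling step mentioned above, in which the \u03c4 observed time points are doubled to 2\u03c4\u22121 by inserting spline-interpolated midpoints, can be sketched as follows. For illustration a natural cubic spline stands in for the cubic B-splines used in the paper, and all names and sample values are invented:

```python
import bisect

def cubic_spline_coeffs(x, y):
    """Natural cubic spline (second derivative zero at both ends)."""
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3 * (y[i + 1] - y[i]) / h[i] - 3 * (y[i] - y[i - 1]) / h[i - 1]
    # Solve the tridiagonal system for the quadratic coefficients c
    l, mu, z = [1.0] * (n + 1), [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (x[i + 1] - x[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    b, c, d = [0.0] * n, [0.0] * (n + 1), [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (y[j + 1] - y[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return b, c, d

def spline_eval(x, y, coeffs, t):
    """Evaluate the piecewise cubic at time t."""
    b, c, d = coeffs
    j = max(0, min(len(x) - 2, bisect.bisect_right(x, t) - 1))
    dx = t - x[j]
    return y[j] + b[j] * dx + c[j] * dx ** 2 + d[j] * dx ** 3

def resample_double(x, y):
    """Double tau time points to 2*tau - 1 by adding interpolated midpoints."""
    coeffs = cubic_spline_coeffs(x, y)
    new_x = []
    for i in range(len(x) - 1):
        new_x += [x[i], (x[i] + x[i + 1]) / 2]
    new_x.append(x[-1])
    return new_x, [spline_eval(x, y, coeffs, t) for t in new_x]
```

A natural cubic spline reproduces linear profiles exactly, so `resample_double([0, 1, 2, 3], [1, 3, 5, 7])` returns seven points with midpoint values 2, 4 and 6 interleaved between the originals.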
In the last two decades, the development of medicine and molecular biology has been significantly improved by DNA microarray technology applications. The technology allows variations in expression levels to be monitored simultaneously for thousands of genes, even in multiple experiments in which data are collected across various time points. During distinct biological processes, high-throughput data of time-series gene expression are recorded to explore the complex dynamics of biological systems. The expression data can reveal gene activities in conditional reactions such as the cell cycle, disease progression, and response to external stimuli. Analyses of microarray data are essential in several time-series expression experiments such as biological systems, infectious diseases, and genetic interactions. Affinity propagation is an efficient clustering technique. In this paper, we combined cubic B-splines interpolation, affinity propagation, and consensus clustering. Given a set of genes G={G1,G2,\u2026,Gn}, where n is the number of genes and each gene Gi includes \u03c4 time points of gene expression values, the n genes are grouped into K disjoint clusters C1,C2,\u2026,CK. Based on the clustering, various groups of genes with similar expression can be identified and organized for further analyses. The framework of the proposed algorithm is shown in the accompanying figure. The \u03c4 time points are doubled to give 2\u03c4\u22121 points using the cubic B-splines interpolation algorithm. The Silhouette index reflects how appropriately the data have been clustered; a high Silhouette index indicates a good clustering result, meaning the data are classified appropriately. Dunn\u2019s validation index (DI) is defined as the minimum inter-cluster distance divided by the maximum intra-cluster distance, DI\u2009=\u2009min \u03b4(ui, uj) / max \u0394(uk), where \u03b4(ui, uj) defines the distance between clusters ui and uj (inter-cluster distance), \u0394(uk) represents the intra-cluster distance of cluster uk, and R is the number of clusters of the partition U. 
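The Dunn index just defined, together with the Davies-Bouldin index used below, can be computed directly from pairwise distances. Definitions of \u03b4 and \u0394 vary in the literature; this sketch assumes single-linkage inter-cluster distance and cluster diameter, and the sample points are made up:

```python
import math

def _diameter(cluster):
    """Intra-cluster distance: largest pairwise distance within the cluster."""
    return max((math.dist(p, q)
                for i, p in enumerate(cluster) for q in cluster[i + 1:]),
               default=0.0)

def _min_link(c1, c2):
    """Inter-cluster distance: smallest distance between the two clusters."""
    return min(math.dist(p, q) for p in c1 for q in c2)

def dunn_index(clusters):
    """DI = min inter-cluster distance / max intra-cluster distance (larger is better)."""
    inter = min(_min_link(clusters[i], clusters[j])
                for i in range(len(clusters)) for j in range(i + 1, len(clusters)))
    intra = max(_diameter(c) for c in clusters)
    return inter / intra

def davies_bouldin(clusters):
    """DBI averages each cluster's worst (intra_i + intra_j) / inter_ij ratio
    (smaller is better, since the inter-cluster distance is in the denominator)."""
    R = len(clusters)
    worst = [max((_diameter(clusters[i]) + _diameter(clusters[j])) /
                 _min_link(clusters[i], clusters[j])
                 for j in range(R) if j != i)
             for i in range(R)]
    return sum(worst) / R

# Two tight, well-separated clusters: high DI, low DBI
a = [(0.0, 0.0), (0.0, 1.0)]
b = [(10.0, 0.0), (10.0, 1.0)]
print(dunn_index([a, b]), davies_bouldin([a, b]))  # 10.0 0.2
```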
Likewise, large values of DI correspond to good clusters. The Davies-Bouldin index (DBI) is defined with the inter-cluster distance \u03b4 in the denominator (unlike SI and DI), so small values of DBI correspond to good clusters. The Yeast galactose dataset was built from experiments on Saccharomyces cerevisiae. The gene expression profiles were measured with four replicate assays across 20 time points, and the genes have been annotated in four functional categories in the Gene Ontology (GO) listings. The Yeast cell-cycle dataset (Y5) includes expression profiles measured across the cell cycle. The Yeast sporulation dataset contains seven time points. Here, we discuss the parameter settings used in our algorithm for the experimental evaluation. The similarities s for affinity propagation were chosen as the similarities between gene expression profiles based on the Pearson correlation, and the preferences p were chosen as the medians of the similarities s. In addition, the window size w and the number of windows l are a tradeoff between clustering accuracy and algorithm efficiency. Smaller window sizes could miss the dynamics of the temporal gene expression, while larger ones could decrease the number of votes, making the relationships between genes difficult to determine. The parameter relativity threshold \u03c3 for graph partitioning ranged from 0.5 to 0.8 in the experimental evaluation. Since the number of window positions grows as O(\u03c42), where \u03c4 is the number of time points, enough consensus votes can be collected to reach a precise probability between each pair of genes. 
Thus, we recommend choosing the window size w relative to \u03c4, the number of time points of the gene expression values being analyzed, with a correspondingly large number of windows l. In our implementation we prefer to apply a large number of windows; the experiments were run on a machine with 8\u2009GB of memory using Windows 7 64-bit. Consider Figure . We compare our results with those of other methods in the literature, as shown in Table . In Figure and Figure , we compare with the results of other methods without partial learning of labeled data, including k-means and cubic splines. Consider Table . In contrast to the results derived from the Yeast galactose dataset, our best adjusted Rand index of 0.57113 does not look as good. To take a closer look at this phenomenon, we demonstrate the clustering result with five groups derived from our algorithm in Figure . The relationship between the normalized gene expression values of the genes and the time points for each cluster is indicated by cluster profile plots. The best clustering results of the cluster profile plots evaluated by our proposed algorithm are shown in Figure . The clustering performance can be observed from the gene expression of each cluster shown in Figure . We used the SI and DBI to judge the clustering performance. Amongst all 31 relativity thresholds, our algorithm has a maximum Silhouette index value of 0.72923, as shown in Figure . A low relativity threshold \u03c3 causes distinct groups to be combined into one larger group including all 474 genes. The effect is caused by the insufficient number of time points, which impacts the number of votes for investigating the relationships between genes. Note that there are only seven time points in the Yeast sporulation dataset. As discussed in the window-size setting, for this Yeast sporulation dataset we have to expand the window size to prevent bad votes from being included in the aggregated consensus matrix. 
For this dataset, a wider window size range is therefore suggested, not the regular setting. Due to the lack of annotations on the Yeast sporulation dataset, we use the two internal validity measurements to judge the clustering performance. To resolve the problem with short time-series gene expression datasets, Ernst et al. present an algorithm to analyze and retain significant gene expression profiles. We also compare with the results of other methods, including SiMM-TS and Chiu et al. In the implementation of the extended version of our algorithm, the execution time may increase rapidly because the number of combinations grows exponentially. We suggest applying this version to datasets with a small number of time points, say 10 at most. We demonstrate the clustering result with five groups derived from our algorithm in Figure . The statistics of the execution time for the three datasets are shown in Table . In this paper, we present an unsupervised clustering algorithm to analyze time-series gene expression data, which requires no prior knowledge, such as the number of clusters or the cluster exemplars (centroids). The algorithm combines affinity propagation and consensus clustering with various intervals of time points, which provides progressive robustness and accuracy by overcoming the interference from background noise and experimental errors. In addition, the interactions between genes across distinct time points were investigated by interval selection based on a sliding-window mechanism. Because of the efficiency of affinity propagation, the proposed algorithm provides appropriate and effective analyses of time-series gene expression data. Based on three real gene expression datasets, our algorithm significantly outperformed other methods when the same datasets were used in the evaluation. 
The experimental results on the Yeast galactose dataset, the Yeast cell-cycle dataset (Y5), and the Yeast sporulation dataset confirmed that our method can successfully illustrate the biological relevance between the expressed genes. In the future development of our method, we aim to address the problem of absent features at some time points, which is a critical issue in bioinformatics and machine learning. The standard treatments for absent features, such as Zero, Mean, and kNN imputation, neglect the temporal dependence in time-series data, causing improper results. We also aim to improve the semi-supervised clustering analyses, which are currently affected by the incompleteness of the gene annotations. By combining un-annotated data with the small amount of annotated data that is available, we expect to see considerable improvements in the clustering accuracy of our method. The program is freely available for non-profit use on request at https://vc.cs.nthu.edu.tw/tychiu/Programs/MainProgram.7z."}
+{"text": "The understanding of changes in temporal processes related to human carcinogenesis is limited. One approach for prospective functional genomic studies is to compile trajectories of differential expression of genes, based on measurements from many case-control pairs. We propose a new statistical method that does not assume any parametric shape for the gene trajectories.The trajectory of a gene is defined as the curve representing the changes in gene expression levels in the blood as a function of time to cancer diagnosis. In a nested case\u2013control design it consists of differences in gene expression levels between cases and controls. Genes can be grouped into curve groups, each curve group corresponding to genes with a similar development over time. The proposed new statistical approach is based on a set of hypothesis testing that can determine whether or not there is development in gene expression levels over time, and whether this development varies among different strata. Curve group analysis may reveal significant differences in gene expression levels over time among the different strata considered. This new method was applied as a \u201cproof of concept\u201d to breast cancer in the Norwegian Women and Cancer (NOWAC) postgenome cohort, using blood samples collected prospectively that were specifically preserved for transcriptomic analyses (PAX tube). Cohort members diagnosed with invasive breast cancer through 2009 were identified through linkage to the Cancer Registry of Norway, and for each case a random control from the postgenome cohort was also selected, matched by birth year and time of blood sampling, to create a case-control pair. 
After exclusions, 441 case-control pairs were available for analyses, in which we considered strata of lymph node status at the time of diagnosis and of time of diagnosis with respect to breast cancer screening visits. The development of gene expression levels in the NOWAC postgenome cohort varied in the last years before breast cancer diagnosis, and this development differed by lymph node status and participation in the Norwegian Breast Cancer Screening Program. The differences among the investigated strata appeared larger in the year before breast cancer diagnosis compared to earlier years. This approach shows good properties in terms of statistical power and type 1 error under minimal assumptions. When applied to a real data set it was able to discriminate between groups of genes with similar non-linear patterns before diagnosis. The assumptions of systems epidemiology have been stressed in Nature Medicine. One approach for prospective functional genomic studies is to compile trajectories based on measurements from many case-control pairs in order to study the carcinogenic process. Our overall aim was to develop statistical methods for exploring the changes in gene expression in the years before diagnosis as part of a processual approach. There is no prior knowledge about the form of the trajectory of gene expression for any of the thousands of genes. This lack of a priori information normally demands an agnostic approach. The new statistical approach is described below. As a \u201cproof of concept\u201d we carried out an analysis in a nested case-control design in the Norwegian Women and Cancer postgenome cohort. For each incident breast cancer case identified through linkage to the Norwegian Cancer Registry, a control was drawn from blood samples collected at the same time and with the same year of birth. This ensured the same storage time and no effect of age between cases and controls. 
The pairs of cases and controls were kept together throughout all laboratory procedures in order to reduce batch effects. For more details see later under Epidemiological design and study population. The new statistical method for curve group analyses is based on a set of hypothesis tests that we developed in order to detect changes in gene expression levels over time, and whether these changes, if they exist, differ among strata. This method is able to identify small changes that vary slowly over time and/or among strata by using a large number of genes in each analysis. In order to define test statistics that measure the development of differential gene expression levels over time and differences among strata, we have introduced the concept of curve groups, where each curve group consists of genes that have a similar development over time, i.e., similar differential trajectories. These methods are described in detail below. Let Xg,p be the log2-expression difference for gene g and the matched case-control pair p. Each case-control pair belongs to a stratum s and a time period t. We wanted to test whether Xg,p is independent of the time period, and whether there is no difference among the strata, i.e., whether Xg,p is independent of stratum. 
In the illustrative application, analyses were either conducted within strata of lymph node status at breast cancer diagnosis (positive or \u2018with spread\u2019 and negative or \u2018without spread\u2019) or with respect to breast cancer screening visits (detection categories): cancers diagnosed during screening visits were considered \u2018screen-detected cancers\u2019; cancers diagnosed within 2\u00a0years of the last screening visit were considered \u2018interval cancers\u2019; and cancers diagnosed clinically in women that did not attend screening, or had not attended screening for more than 2\u00a0years, were considered \u2018clinical cancers\u2019 (Table\u00a01). We tested whether Xg,p is independent of the time period in a global test, since we are interested in weak signals from many genes, not signals that may only be identified in a single gene. To define a test statistic that measures development over time we used curve groups. The follow-up time was divided into three time periods t\u2009=\u20091,\u20092,\u20093, where t\u2009=\u20091 is 0-1 year before cancer diagnosis, t\u2009=\u20092 is 1-2 years before cancer diagnosis, and t\u2009=\u20093 is 3-5 years before cancer diagnosis. For a given stratum s, a gene g can belong to zero or one of six curve groups based on the average (mean) of the data over all case-control pairs in the stratum in each of the three time periods. Based on the ordering of these averages, gene g may for example belong to curve group \u2018123\u2019, indicating an increasing gene expression level over time when approaching the time of diagnosis, with gene expression level 1 in time period 3, gene expression level 2 in time period 2 and gene expression level 3 in time period 1 (closest to the time of diagnosis). If the three averages are too similar, gene g does not belong to any curve group (see Fig.\u00a0). Let pg,c be the p-value of this test. 
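The curve-group assignment just described can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the label digits give the rank of the mean expression difference in time periods 3 (3-5 years), 2 (1-2 years) and 1 (0-1 year before diagnosis), read in that order, and the `min_spread` threshold stands in for the significance test on the smallest versus largest average:

```python
def curve_group(mean_p3, mean_p2, mean_p1, min_spread=0.0):
    """Assign a gene's curve group from its stratum averages.

    Arguments are the mean expression differences in time period 3 (3-5 years),
    2 (1-2 years) and 1 (0-1 year before diagnosis).  The label lists the rank
    of each mean in that order, so '123' means monotonically increasing toward
    diagnosis.  Genes whose means are too similar get no group (None).
    """
    means = [mean_p3, mean_p2, mean_p1]
    if max(means) - min(means) <= min_spread:
        return None  # averages too similar: gene joins no curve group
    ranked = sorted(means)
    return ''.join(str(ranked.index(m) + 1) for m in means)

print(curve_group(0.1, 0.5, 0.9))  # '123': rising toward diagnosis
print(curve_group(0.9, 0.5, 0.1))  # '321': falling toward diagnosis
```

The six possible labels are exactly the six orderings of three distinct means.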
Each curve group included only genes with a significant change in expression level over time; this was assessed by testing whether the smallest and largest of the three time-period averages differed significantly, with pg,c the p-value of this test. Depending on the statistical question at hand, we defined two alternative criteria for concluding that a gene g belongs to curve group c. Inclusion criterion 1: Gene g belongs to curve group c if pg,c is below a predefined limit α. Inclusion criterion 2: Gene g belongs to curve group c if gene g is among the M genes with the lowest pg,c (see more in the next section). To test for the development of gene expression levels over time, for each stratum we counted the number of genes that belong to each curve group using inclusion criterion 1. We then performed seven hypothesis tests: one global test and one for each of the six curve groups in each stratum. In the global test the test statistic is the total number of genes that belong to any one of the six curve groups, while in the test for an individual curve group the test statistic is the number of genes that belong to the curve group in question. If the conclusion of the hypothesis test was that there were more genes in the curve groups than expected by chance, we concluded that there was a significant development over time for some of these genes. For each curve group c, stratum s and case-control pair p, we defined a curve group variable Zc,s,p as follows: we selected the genes that belonged to curve group c for stratum s using inclusion criterion 2 with M = 100. Let Gc,s denote this set of genes. The curve group variable Zc,s,p for case-control pair p was then computed as the average value of the data Xg,p over the genes in Gc,s. Let us consider in our illustrative example two strata, for instance 'with spread' and 'without spread' at the time of diagnosis. We wanted to test whether there were differences in gene expression levels between these two strata, using information from several genes. 
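The curve group variable can be sketched in a few lines. Gene selection follows inclusion criterion 2 as described (the M genes with the lowest pg,c); the array layout and names are illustrative assumptions.

```python
import numpy as np

def curve_group_variable(X, pvals, M=100):
    """Compute the curve group variable Z_{c,s,p} for every case-control
    pair: the average of X_{g,p} over the M genes with the lowest
    p-values p_{g,c} for curve group c (inclusion criterion 2).

    X: (genes x pairs) array of log2 case-control expression differences.
    pvals: per-gene p-values for membership in the curve group.
    The data layout is an assumption for illustration.
    """
    G = np.argsort(pvals)[:M]   # the M genes with the lowest p_{g,c}
    return X[G].mean(axis=0)    # Z_{c,s,p}, one value per pair p

# Hypothetical toy data: 5 genes, 4 case-control pairs.
X_toy = np.arange(20, dtype=float).reshape(5, 4)
Z_toy = curve_group_variable(X_toy, np.array([0.01, 0.5, 0.02, 0.9, 0.3]), M=2)
```

Here genes 0 and 2 have the lowest p-values, so Z is the column-wise mean over those two rows.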
For each curve group c, we then tested whether the variables Zc,s,p were different between the two strata for case-control pairs p, either for all time periods combined or for each time period separately. Note that the genes were selected based on data from stratum s, but the variable may be calculated for case-control pairs p in any stratum. For example, assume that we wanted to test whether there was a difference in gene expression level between case-control pairs in the stratum with spread versus the stratum without spread for curve group 123. Assume that the set of 100 genes G123,with spread was selected using criterion 2 in the stratum with spread. We would then calculate Z123,with spread,p for all case-control pairs p in the stratum with spread and Z123,with spread,p' for pairs p' in the stratum without spread, and test whether the difference was larger than expected by chance. Note that testing the strata with spread versus without spread may also be performed with the set of genes G123,without spread selected from the without spread stratum, or with a set selected from any of the other defined strata. We also considered a test statistic built from two-sample t-statistics Tg,t comparing the difference in gene expression levels between the two strata for each gene g and time period t. We defined Fg as the sum over time periods of wt times Tg,t for each gene g, with weights wt, and the test statistic was defined as Lk, the sum of Fg over Gk, where Gk is the set of genes with the k largest Fg values; i.e., Lk is the sum of the k largest Fg values. We observe that Lk is a weighted sum of t-statistics. We used equal weights wt = 1/3 for each time period. Alternatively, the weights could be selected either as proportional to the number of case-control pairs in each time period or with larger values for the case-control pairs in a time period closer to the time of diagnosis. We then performed a global test including all three time periods, and separate tests for each time period, in which only data corresponding to that time period were included. 
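A sketch of the Lk statistic, under the layout assumption that each stratum's data are given per time period as a genes-by-pairs array; the pooled two-sample t-statistic is used here, though the text does not specify the exact variant.

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled two-sample t-statistic per gene; a, b are (genes x pairs)."""
    na, nb = a.shape[1], b.shape[1]
    ma, mb = a.mean(axis=1), b.mean(axis=1)
    va, vb = a.var(axis=1, ddof=1), b.var(axis=1, ddof=1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / np.sqrt(pooled * (1 / na + 1 / nb))

def Lk_statistic(stratum_a, stratum_b, weights, k):
    """L_k = sum of the k largest F_g, with F_g = sum_t w_t * T_{g,t}.
    stratum_a[t], stratum_b[t]: (genes x pairs) data for time period t.
    """
    T = np.array([two_sample_t(a, b) for a, b in zip(stratum_a, stratum_b)])
    F = np.asarray(weights) @ T        # weighted sum over time periods
    return np.sort(F)[-k:].sum()       # sum of the k largest F_g

# Hypothetical single-period, single-gene example.
toy_a = [np.array([[1.0, 2.0, 3.0, 4.0]])]
toy_b = [np.array([[0.0, 1.0, 2.0, 3.0]])]
L_toy = Lk_statistic(toy_a, toy_b, weights=[1.0], k=1)
```

With three periods one would pass weights = (1/3, 1/3, 1/3) as in the text.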
The test described above focuses on genes that belong to the same curve group. We also constructed a hypothesis test to compare the difference in development over time between two strata that did not depend on curve groups; its test statistic was the statistic Lk constructed from the two-sample t-statistics Tg,t. This test performed very well on several simulated datasets with a different development over time or different gene expression levels for some genes in the two strata; for details see Holden. We computed p-values in all the tests described above by estimating the null distribution of the test statistic by randomizing the data. In the randomization, we preserve critical properties of the genes and randomize only what is connected to the evolution over time and stratum. This randomization defines the null distribution for the test statistic that is used when finding the p-value. In hypothesis tests for development over time in a single stratum, the null model was estimated by randomizing case-control pairs for that stratum between time periods, while in the hypothesis tests comparing two strata, the null model was estimated by randomizing case-control pairs between the two strata for each time period. Note that these randomization algorithms maintain the correlation structure between the genes for each case-control pair. Also note that the curve groups were redefined before each sample of the null model was computed from a randomized dataset. The p-value of the test was set to K/N, where N is the total number of randomizations and K is the number of randomizations out of N with a more extreme statistic than the statistic for the real data. We used N = 1000. The NOWAC study is a nation-wide population-based cancer study that was initiated in 1991. A nested case-control design was chosen in order to reduce batch effects in the laboratory and because of the high cost of each analysis. 
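The single-stratum randomization scheme might be sketched as follows. The key property is that whole case-control pairs are reassigned between time periods, so the gene-gene correlation structure within a pair is preserved; the `statistic` callback (which should recompute everything, including redefining curve groups, for each randomization) and all names are illustrative, not the authors' code.

```python
import numpy as np

def randomization_pvalue(X, period_labels, statistic, N=1000, seed=0):
    """Permutation p-value for a development-over-time test in one
    stratum: columns of X (whole case-control pairs) are reassigned to
    time periods, preserving gene-gene correlation within each pair.
    Returns K/N, the fraction of randomizations at least as extreme
    as the observed statistic.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(period_labels)
    observed = statistic(X, labels)
    K = sum(
        statistic(X, rng.permutation(labels)) >= observed
        for _ in range(N)
    )
    return K / N

# Toy data: 4 genes x 9 pairs; pairs in period 1 have elevated levels.
X_toy = np.zeros((4, 9))
X_toy[:, :3] = 5.0
labels = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
mean_in_period1 = lambda X, per: X[:, per == 1].mean()
p_toy = randomization_pvalue(X_toy, labels, mean_in_period1, N=1000)
```

A strong period-1 signal yields a small p-value, while constant data give p = 1.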
For each case of breast cancer, a control from the same batch of 500 women in the postgenome cohort was assigned, matched by time of blood sampling and year of birth, to be analyzed together with the case. The controls are used to establish the average (mean) gene expression level in individuals without cancer and to allow exposure-adjusted analyses to be performed. The expression level of a gene not involved in the carcinogenetic process will exhibit variability dependent on day-to-day changes in exposures such as environment and nutrition, resulting in random fluctuations of the difference in gene expression between case and matched control around a population average that is constant over time. In contrast, the difference in expression level of genes related to different stages of the carcinogenetic process may vary over time in a non-random way, thus exhibiting some non-random trend. The changes in genes related to the carcinogenic process could be complicated by other effects of exposures to the carcinogens. Cases of invasive breast cancer diagnosed in the NOWAC postgenome cohort through the end of 2009 were identified through linkage to the Cancer Registry of Norway. Altogether 637 cases of invasive breast cancer were reported. After removing outliers and ineligible cases, including women with distant metastases, the study consisted of 441 case-control pairs. Information on lymph node status at breast cancer diagnosis was based on the pTNM information included in the Cancer Registry of Norway. Detection categories were also obtained from the Cancer Registry of Norway, which updates these data regularly through linkage to the screening database kept by the National Breast Cancer Screening Program. The NOWAC study was approved by the Norwegian Data Inspectorate and the Regional Ethical Committee of North Norway (REK). 
The linkages of the NOWAC database to national registries such as the Cancer Registry of Norway and registries on death and emigration were approved by the Directorate of Health, and the women were informed about these linkages. Furthermore, the collection and storing of human biological material was approved by the REK in accordance with the Norwegian Biobank Act. Women were informed in the letter of introduction that the blood samples would be used for gene expression analyses. All extraction and microarray services were provided by the Genomics Core Facility, Norwegian University of Science and Technology, Trondheim, Norway. To control for technical variability such as different batches of reagents and kits, day-to-day variations, microarray production batches, and effects related to different laboratory operators, each case-control pair was kept together throughout all extraction, amplification, and hybridization procedures. RNA extraction was performed using the PAXgene Blood miRNA Isolation kit according to the manufacturer's instructions. RNA quality and purity were assessed using the NanoDrop ND-8000 spectrophotometer and the Agilent Bioanalyzer, respectively. RNA amplification was performed in 96-well plates using 300 ng of total RNA and the Illumina TotalPrep-96 RNA Amplification Kit. The amplification procedure consisted of reverse transcription with a T7 promoter and ArrayScript, followed by second-strand synthesis. In vitro transcription with T7 RNA polymerase using a biotin-NTP mix produced biotinylated cRNA copies of each mRNA in the sample. All case-control pairs were run on either the Illumina HumanWG-6 version 3 or the HumanHT-12 version 4 Expression BeadChip. Outliers were excluded after visual examination of dendrograms, principal component analysis plots and density plots. 
Individuals that were considered borderline outliers were excluded if their laboratory quality measures were below given thresholds. The log2-differences of the gene expression levels for each case-control pair were computed and used in the statistical analyses. Additional adjustments for possible batch effects were unnecessary as the case-control pairs were kept together throughout the laboratory processes. The dataset was preprocessed as previously described. A time trend was considered to be present if there were more genes in the curve groups than expected by chance. The number of case-control pairs stratified according to lymph node status and detection category is shown in Table 1. The global tests gave significant results (p = 0.01 and p = 0.02), with more p-values less than 0.05 than expected by chance (Tables 2 and 3); when the chosen α-value was too small, the power of the test was weakened. P-values were also obtained by testing whether the curve group variables Zc,s,p described in the methods section were different in the two strata; many were below 0.05 and some were smaller than 0.01. One of the methods focused on identifying genes with specific changes over time within a given lymph node status. The other method focused on differences in gene expression levels between lymph node statuses in the different time periods. Both methods address different aspects of functional time dependency of gene expression levels relative to the time of breast cancer diagnosis, and both gave significant results when many genes were used. As gene expression data are very noisy, our methods used information from several genes simultaneously to increase the power of the hypothesis tests. A potential weakness of the curve group approach is the increasing number of curve groups as the number of observation time periods increases. 
When there are four time periods, 24 curve groups will be needed, and even more will be needed for five time periods.Studies of gene expression levels in peripheral blood are challenging and have many difficulties and pitfalls. Most biobanks suffer from ubiquitous degradation by RNase, which reduces the quality of mRNA for whole genome analyses. Only samples that contain a specific buffer or are directly frozen in liquid nitrogen can be used for whole genome analyses. The signals related to carcinogenesis in the blood are expected to be much weaker than those in tumor tissue and can be confounded by signals from exposures to carcinogens or other lifestyle factors. The problem of noise due to the complicated study of carcinogenesis, the need for an adequate epidemiological design including exposure information and blood sampling, complicated technology, and the development of robust statistics, could make the approach unsuccessful. The prospective design of our study made it difficult to increase the statistical power, so our results should be interpreted with care.To the best of our knowledge, the NOWAC postgenome cohort is the largest population-based prospective cancer study designed for transcriptomics due to the presence of buffered RNA. All parts of the analyses were done within the framework of the NOWAC study. In the NOWAC postgenome cohort, a single laboratory processed all samples using the same technology, thus reducing analytical bias and batch effects. The cohort design reduced selection bias. A weakness of a prospective study could be possible changes in case-control status as controls became cases over time, thus reducing the differences in gene expression levels within a case-control pair. We removed all case-control pairs in which controls were diagnosed with breast cancer or any other cancer within 2\u00a0years of blood sampling. The matching was done only for storage time and year of birth. 
Matching on other variables would have eliminated these lifestyle factors from the analyses; had we matched on, for example, smoking, we could not have estimated the effect of smoking or any interactions with other risk factors. Unfortunately, there was no repeated sampling of blood, and no additional questionnaires were completed. Repeated measurements would enable better analyses, making it possible to use intra-individual comparisons over time. The proposed statistical methods are sensitive enough to find curve groups of genes even for strata with few case-control pairs. This made it possible to describe and test non-linear relationships. Our findings could be viewed as a proof of concept of systems epidemiology, indicating the potential to include gene expression for functional analysis in prospective studies of cancer."}
+{"text": "Current approaches to study transcriptional profiles post influenza infection typically rely on tissue sampling from one or two sites at a few time points, such as spleen and lung in murine models. In this study, we infected female C57BL/6 mice intranasally with mouse-adapted H3N2/Hong Kong/X31 avian influenza A virus, and then analyzed the gene expression profiles in four different compartments over 11 consecutive days post infection. These data were analyzed by an advanced statistical procedure based on ordinary differential equation (ODE) modeling. Vastly different lists of significant genes were identified by the same statistical procedure in each compartment; only 11 of them are significant in all four compartments. We classified significant genes in each compartment into co-expressed modules based on temporal expression patterns. We then performed functional enrichment analysis on these co-expression modules and identified significant pathways and functional motifs. Finally, we used an ODE based model to reconstruct the gene regulatory network (GRN) for each compartment and studied their network properties. Seasonal influenza infection affects 1 billion people annually, causing up to 500,000 deaths each year. Current approaches to study transcriptional profiles post influenza infection typically rely on tissue sampling from one or two sites, generally spleen and lung in murine models, and these samples are often collected at only a few time points. This approach, however, offers only a limited snapshot of transcriptional changes throughout the course of infection. In contrast, comprehensive understanding of whole-compartment transcriptome variations after infection has provided valuable insights into location-specific changes after HIV, transmissible spongiform encephalitis (TSE), Francisella tularensis, and avian pathogenic Escherichia coli (APEC) infections. 
To address this issue, we studied the dynamic immune responses to influenza infection at the transcriptional level by simultaneous daily sampling of lymphocytes in four different compartments over 11 consecutive days post infection. Data were analyzed with a procedure based on high-dimensional ordinary differential equation (ODE) models to reconstruct gene regulatory networks. Female C57BL/6 mice were infected intranasally with a mouse-adapted H3N2/Hong Kong/X31 avian influenza A virus. We selected temporally differentially expressed probe sets from each compartment using functional principal component analysis. The thresholds were 0.1445 for Benjamini-Hochberg adjusted p-values for the spleen compartment, and 0.0032 for unadjusted p-values or 0.0964 for Benjamini-Hochberg adjusted p-values for the blood compartment. The selected probe sets were then mapped to National Center for Biotechnology Information (NCBI) gene names, and collapsed to single gene names by selecting the unique probe with the largest inter-quartile range (IQR). After mapping, the numbers of temporally differentially expressed genes (TDEGs) in each compartment were: 1,642 in lung; 2,922 in lymph node; 856 in blood; and 614 in spleen. Among them, only 8 genes and 3 pseudo-genes were common to all compartments. On average, only about 10% of TDEGs were shared by two compartments. Of particular interest is the up-regulation of Ddx60, a DEXD/H box RNA helicase activated in the cellular response to foreign RNA or DNA. While we expected a fraction of significant genes to be common between compartments, the extremely low number suggests that a common transcriptome response in different compartments after influenza infection is rare. Furthermore, the temporal patterns for these common TDEGs are quite different in the four compartments. We used feature based clustering to map the temporal features of the data to biologically relevant functions. 
Details of the criteria and method can be found in the Materials and Methods section. Using feature based clustering, we identified multiple unique clusters in each compartment: 12 (blood), 32 (lung), 24 (lymph node), and 16 (spleen). To perform gene set enrichment analysis, we first clustered genes into two classes according to whether or not there is a time delay for the transcript levels to increase or decrease compared to the baseline. Genes with a transcription-change time delay are referred to as 'delayed genes', and those without as 'early genes'. Based on this criterion, 511 and 49 genes were identified as delayed genes for the lung and lymph node, respectively. In contrast, no delayed genes were identified in blood and spleen samples. We first examined transcripts expressed in the early and delayed expression clusters for those associated with innate or adaptive immune responses. We next examined those clusters with middle and late activation. The lung is the only compartment in which a large proportion (about 30%) of TDEGs show a temporal activation delay. Differences in GO terms between the early and delayed gene modules supported a switch from an innate to an adaptive immune process. Gene ontology functions consistent with an early innate immune response enriched in the early transcriptome clusters included: recruitment of lymphocytes to the lung in response to early infection with chemokines (KEGG pathways mmu04060 and mmu04062); receptor signaling pathways associated with the Type I interferon response (KEGG NOD-like and RIG-I-like pathways); and the KEGG mmu04623 cytosolic DNA-sensing pathway, which suggests Toll-like receptor expression typical of the early response to influenza. We next examined the delayed gene clusters in lung in more detail, and performed upstream analysis to infer the upstream activated genes that led to the delayed gene module transcription pattern. 
Within the delayed gene module, several probable upstream modifiers switched from inhibited to activated, or the reverse, between days 1 and 5. To examine the switch between innate and adaptive immune responses in the lymph nodes, we examined upstream activators of the statistically significant transcripts. Such analysis allows identification of early signaling pathways associated with the observed downstream transcriptome pattern, and better identifies the transcriptome programs and motifs. While TCR was one upstream modifier identified in lymph node, among the up-regulated TDEGs is a preponderance of B cell-specific genes reflecting the strong wave of B cell expansion. This suggests that the upstream cell cycle control molecules and growth factors identified in common between lung and lymph node may be illuminating common proliferative pathways between T and B lymphocytes responding to influenza infection. Indeed, while the number of delayed genes identified in lymph node was relatively small (n = 45), the associated GO terms were mostly related to cell cycle, cell division, and DNA repair, consistent with proliferation and an active immune response. Next, we clustered genes by the number of modes of their expression patterns over the period of 11 days. This quantity reflects the fluctuations in the expression levels over the period of study. As a special case, a gene is classified as 0-mode if its expression level increases or decreases monotonically over the period of 11 days. Gene set enrichment analyses show that in lung, mode-2 TDEGs are associated with a few GO terms related to cell cycle/division and regulation of phosphorylation, while mode-1 TDEGs are associated with many more GO terms with diverse functions related to immune responses, such as activation and regulation of many cell types, several receptor signaling pathways, and various cytokine- and chemokine-related terms. 
In the other three compartments, the relationship between modes and immune responses to virus infection is not clear; instead, the overall patterns appear related to different aspects of housekeeping functions in different mode-groups, such as mRNA processing (mode-0 group) versus DNA processing (mode-1 group) in spleen. This observation suggests that these different biological processes have different natural periodicities, which are reflected by the number of modes in a fixed time interval. Based on the smoothed expression patterns, we define a gene as an up-regulated (down-regulated) gene if its estimated maximum mode is larger (smaller) than its baseline (at DPI = 0). All compartments have roughly balanced numbers of up- and down-regulated genes, and GSEA does not show a clear functional difference between these two groups. Finally, we cluster genes based on their activation time, which is defined as the time (day) the expression level reaches 50% of its maximum (for up-regulated genes) or minimum (for down-regulated genes). A similar classification was used previously. We next used a model based on a linear ODE system to construct the compartment-specific regulatory network between the clusters induced by influenza infection. Briefly, the network structure was determined by variable selection using the smoothly clipped absolute deviation (SCAD) method. One notable feature of the reconstructed regulatory networks is the existence of hub clusters that regulate more clusters than most other clusters. We identified clusters C12 (Delay.Up.ActDay8.Modes1), C23 (Early.Up.ActDay2.Modes1), and C30 (Early.Up.ActDay6.Modes2) as the hub clusters for the lung network. These hub clusters have both incoming and outgoing edges, indicating that the genes in the clusters are regulated by, and are regulating, other clusters. 
This is also evident from their functional enrichment; for example, C23 is enriched for cytokine regulation and processes that lead to the induction of T and B cell activation. In this manuscript, we report one of the most comprehensive transcriptional analyses to date of the immune response to primary influenza infection. The collection of frequent time-series data from lung, lymph node, spleen, and peripheral blood allowed us to simultaneously examine temporal patterns of gene expression after infection within multiple compartments. For each compartment, ODE network models were fit to the data, revealing a systems-level structure of the influenza immune response. Our study shows that at the transcriptome level, each of the four biological compartments responds to influenza infection very differently. These compartmental differences are manifested by vastly different lists of genes with statistically significant changes in expression levels, co-expression modules and their temporal patterns, and the reconstructed GRNs. We believe in-depth study of these differences can help researchers design better experiments in the future. For example, we note that the activation and peak up- (down-) regulation times are different between compartments. The median activation time occurs on DPI 5 for lung and on DPI 2 for the other compartments. However, the median peak/nadir days for these compartments are: DPI 5 (blood and lymph node), 7 (lung), and 9 (spleen). A related finding is that lymphocytes in the spleen are activated early (DPI 2) but peak transcription levels occur late. This suggests that although the spleen responded to influenza infection quickly in the beginning, it takes more than a week for a typical TDEG to reach its largest differential expression (as compared with the baseline level). These findings have implications for study design and statistical power. 
Based on our findings, we suggest that in cross-sectional or longitudinal studies with a limited number of time points, data collection times should be tailored to the known expression periods of each tissue compartment. Our approach also revealed a different GRN within each tissue compartment, which is a new aspect of primary influenza infection. The characteristics of the influenza immune response within each compartment differed by both the number of modules and the network density of the GRN. The density of a network is calculated as the ratio of the number of connected edges to the number of all possible edges, allowing for loop (e.g. self-regulatory feedback) edges. A low network density indicates a sparser network. While the lung has the most diverse temporal patterns in terms of the number of clusters, it showed the lowest network density, which suggests that each cluster has fewer average regulatory relationships with others. In summary, we used a systems-biology approach to identify both broad coordination of the influenza immune response across lung, lymph node, spleen and blood, as well as compartment-specific dynamics of gene expression reflecting more focused and specialized biological functions (T and B cell activation and proliferation) within primary lymphoid organs. Broadly expressed genes highlighted for the first time by this analysis included Ddx60, a mediator of the Type I interferon response and RIG-I signalling pathway, and Ehd4, which may be responsible for increased MHC-I presentation of influenza viral peptides, facilitating CD8 killing of infected epithelial cells. This higher level analysis demonstrates the power of analysing temporal gene expression network topology to gain further insights into complex intra- and cross-compartmental dynamics during infection. Mice were anesthetized (anesthetic from Sigma-Aldrich, St. Louis, MO) and inoculated intranasally with 0.03 ml of 1x10^5 EID50 H3N2/Hong Kong/X31 IAV. 
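As a minimal illustration of the density definition above (directed edges divided by all possible edges, self-loops allowed):

```python
def network_density(edges, n_nodes):
    """Density of a directed network: the number of connected edges
    divided by the number of all possible edges. Loop (self-regulatory
    feedback) edges are allowed, so a network on n nodes has n**2
    possible directed edges. A low value indicates a sparser network.
    """
    return len(edges) / (n_nodes ** 2)

# Hypothetical 3-cluster network with one self-loop: density 3/9.
toy_density = network_density({(0, 1), (1, 1), (2, 0)}, n_nodes=3)
```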
Mice were monitored for signs of infection throughout the experiment. All animals displayed signs of infection by Day 2 post infection, including ruffled fur, lethargy and initial weight loss. Weight loss peaked around day 6 post infection, with animals typically displaying ~15% weight loss as compared to Day 0 weights. Animals recovered all weight and exhibited normal behaviour by Day 8 post infection; the temporal trend of body weight loss is illustrated in Fig Q. Female C57BL/6 mice, 6-12 weeks of age, were anesthetized with an intraperitoneal injection of 2,2,2-tribromoethanol; organs were removed and stored on ice until processing. Lungs from two mice in each group were frozen in RNALater buffer (Sigma-Aldrich) and lungs from the remaining three mice were processed. Lung tissue was disrupted by pressing the tissue through a strainer with a syringe piston. The cells were filtered through Nitex mesh and layered on Lympholyte. TER-119 immunomagnetic beads were used to remove red blood cells. Of the blood samples, 13 were stored in a different buffer, and a subsequent quality-assurance microarray analysis of these samples showed a different expression pattern than the remaining 23 samples; these 13 samples were therefore excluded from the final blood microarray analysis. The lysates were immediately passed over QiaShredder columns (Qiagen) and flash frozen in liquid nitrogen. Total RNA was reverse transcribed to cDNA with the NuGEN® amplification system, and this cDNA was hybridized with Affymetrix® Mouse Gene 1.0 ST arrays. Quantile-normalized data were generated with Affymetrix® Expression Console software using the PLIER algorithm. Probe sets were mapped to gene symbols with annotations provided by Affymetrix®. All microarray images and processed data are publicly available at the NCBI Gene Expression Omnibus website under accession number GSE57455. 
Exploratory analyses revealed that the lung data had two distinct groups of array results which correlated exactly with buffer effects: the lungs of 13 mice were stored in a different buffer from those of the other 23 mice, which produced a clear buffer effect. We exclude probe sets with all measurements < 100, a threshold below which it is hard to differentiate gene expression signals from the background noise. This results in 30,184, 25,876, 30,590, and 29,285 probe sets for the lungs, lymph nodes, blood, and spleen, respectively. We apply a log2 transformation to these expression measurements. For each of the four compartments, we want to identify genes whose time course expressions have significant changes from the baseline; the null hypothesis is that the true expression curve xi(t) of the ith gene does not change from the baseline. The baseline is chosen to be the gene expression level on DPI = 0, since the mice on DPI = 0 were killed immediately after receiving flu virus and it is reasonable to assume that the gene expression levels had not yet been affected. In practice, the true expression curve xi(t) needs to be estimated from the noisy microarray data yijk = xi(tj) + εijk, where j = 0,...,10 indexes the time points and k indexes the repetitions at each time point. A commonly used smoothing technique is to expand xi(t) in terms of an intercept plus an L-dimensional linear basis, xi(t) = μi + Σl ξil ηl(t), where the basis functions ηl(t), l = 1,2,...,L, are eigen-basis functions estimated from the data by functional principal component analysis (FPCA). Instead of t- and F-statistics, we use a test statistic that compares the goodness of fit of the model under the null hypothesis to that under the alternative hypothesis, and we compute p-values by permutation, in which each permutation sample is obtained by randomly permuting the time index j of yijk. 
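A sketch of such a goodness-of-fit comparison, assuming the statistic contrasts the residual sum of squares under the null (a flat curve at the baseline mean) with that under the basis-expansion alternative; the exact form of the authors' statistic is not reproduced in the text, so the ratio and the stand-in basis below are assumptions.

```python
import numpy as np

def gof_statistic(y, basis):
    """Compare fit under H0 (flat curve) vs H1 (intercept + basis).
    y: expression values at the T time points; basis: (T x L) matrix
    standing in for the FPCA eigen-basis.
    Returns SS0 / SS1, large when the time trend explains the data.
    """
    T = len(y)
    ss0 = np.sum((y - y.mean()) ** 2)             # null: constant curve
    design = np.hstack([np.ones((T, 1)), basis])  # alternative model
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    ss1 = np.sum((y - design @ coef) ** 2)
    return ss0 / (ss1 + 1e-12)                    # guard against SS1 = 0

# Hypothetical example: one sinusoidal stand-in basis function.
t = np.linspace(0, 1, 11)
basis = np.sin(np.pi * t)[:, None]
strong_trend = gof_statistic(2.0 * np.sin(np.pi * t), basis)
flat = gof_statistic(np.ones(11), basis)
```

A curve matching the basis yields a huge statistic; a constant curve yields zero, matching the null.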
The test statistic Fi compares SSi0, the residual sum of squares under the null hypothesis, with the residual sum of squares under the alternative. With R permutation samples and the corresponding null statistics, we can then compute the unadjusted p-value for each gene as the fraction of permutation statistics at least as extreme as the observed one. The multiple test correction method proposed by Benjamini and Hochberg is then applied. We cluster the differentially expressed probe sets into different groups or functional modules based on their time course expression patterns by considering four features hierarchically: 1) whether there is a time delay for the probe set to start expressing, compared to the baseline; 2) the number of modes of the expression patterns from Day 0 to Day 10; 3) whether the probe set is up- or down-regulated; and 4) the activation time, which is defined as the time to reach 50% of the first peak or nadir (minimum or maximum). Since the analysis of the time course expression patterns may be sensitive to excessive variation at one time point or to outliers, it is necessary to remove these outliers before the clustering analysis. We ran a residual analysis to identify them, observed that DPI = 1 is associated with abnormally high RSS for the lung and lymph node data, and thus removed DPI = 1 and refit the lung and lymph node data before the clustering analysis. For each significant probe set, we first average the expression values on the same day and standardize the data by the mean and standard deviation over time. The following clustering method is applied to these standardized data so that the grouping of probe sets is based on the gene expression pattern rather than the expression magnitude. After exploring the gene expression patterns visually, we found that some probe set expressions do not change from baseline for the first several days after influenza infection. Thus, we first clustered probe sets into two broad classes according to whether there is a time delay for the probe set to start expressing, compared to the baseline; we call these delayed probe sets and early probe sets. We only consider the delay window from Day 3 to Day 5 based on our preliminary data exploration. 
A linear regression model for the expression data of each probe set over the first 3, 4, or 5 days can be written with yit denoting the average expression of the ith gene on the tth day and m the upper limit of the delayed time (day). If there is an m-day delay in the ith gene's expression, we expect the slope aim = 0; consequently we may use a standard t-statistic to test aim = 0 and identify the delayed genes. At the same time, we also need to exclude genes with a very large estimate of aim or a very large adjusted standard error; a gene is classified as delayed when either its t-statistic is less than 3.0 or the adjusted standard error criterion is met. These thresholds are determined based on our visual inspection of the gene expression patterns. Next, we cluster genes by the number of modes of their expression patterns over the period of 10 days. The following procedure is used for this purpose. First, smooth the expression data by the FPCA technique described in the section \u201cDifferential expression analysis\u201d. An interior time point tj (j = 2,3,\u2026,9) is called a mode if, for a given tolerance level \u03b4 = 0.1, it is a local maximum or minimum of the smoothed curve up to that tolerance. If a group contains too few genes, it is merged into a neighboring group; for example, there are only 18 genes in the 0-mode group for the lung data, so it is merged with the 1-mode group. Based on the smoothed expression patterns, we define a gene as an up-regulated (down-regulated) gene if its most prominent mode is larger (smaller) than its baseline value at DPI = 0.
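The mode-counting step can be sketched as follows. The tolerance \u03b4 = 0.1 comes from the text; the exact extremum criterion (both neighbours must differ by more than \u03b4) is an assumption for this sketch, since the original criteria were not fully recoverable.

```python
def count_modes(x, delta=0.1):
    """Count interior modes (local maxima/minima) of a smoothed profile.

    An interior point t_j is counted as a mode when it is a local extremum
    whose value differs from BOTH neighbours by more than `delta`.
    The exact criterion used in the paper is assumed here.
    """
    modes = 0
    for j in range(1, len(x) - 1):
        left, mid, right = x[j - 1], x[j], x[j + 1]
        is_max = mid - left > delta and mid - right > delta   # local peak
        is_min = left - mid > delta and right - mid > delta   # local nadir
        if is_max or is_min:
            modes += 1
    return modes
```

Profiles whose fluctuations stay within \u03b4 of their neighbours contribute no modes, which matches the intent of using a tolerance to ignore small wiggles.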
Finally, we cluster genes based on their activation time, which is defined as the time (day) at which the expression level of a gene reaches 50% of its maximum (for up-regulated genes) or minimum (for down-regulated genes). After classifying the TDEGs into unique clusters based on the above criteria, we noticed that some clusters have only a handful of genes, so we merge each small cluster (with n \u226410 genes) into the large cluster most similar to it in terms of the shape of the mean curves. Here Xi(t) represents the mean gene expression function of the ith gene and Xi0 represents its initial value; a mixed-effect model is used to account for the between-gene variation within each cluster, where the parameter \u03b1i0 is the intercept term and \u03b1ij quantifies the regulation effect between clusters in the network. Once genes are properly classified into clusters (modules), we can build an ODE network system for Mk(t), the mean expression curve of the kth cluster (module); in this way, the dimension of the system is reduced from the number of genes to the number K of modules. We first estimate the mean curves Mk(t) and their derivatives M'k(t) from the observed data, and then substitute these estimates into the ODE system to turn it into K independent pseudo regression models; the smoothly clipped absolute deviation (SCAD) penalty is then applied to obtain a sparse network structure. To obtain the gene-specific regulatory parameter estimates, we consider a mixed-effects ODE model for the kth module, in which Xi(t) is the true expression curve of the ith gene in this module and the random effects \u03b3ij are assumed to follow normal distributions, characterizing the between-gene variation in the kth module. The probe sets are mapped to gene symbols with annotations provided by Affymetrix\u00ae. We first performed the functional enrichment analysis using DAVID, and then used curated terms listed in ImmPort (http://immport.niaid.nih.gov) that are related to immunology, such as \u201cB cell\u201d, \u201cT cell\u201d, \u201clymphocyte\u201d, etc.
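The activation-time definition above admits a short computational sketch. Linear interpolation between sampled days and the DPI = 0 baseline convention are assumptions of this example, not the authors' exact implementation.

```python
import numpy as np

def activation_time(times, x, up_regulated=True):
    """Day at which a smoothed profile first reaches 50% of its peak
    (up-regulated) or nadir (down-regulated), measured as change from
    the baseline value at the first time point (DPI = 0)."""
    x = np.asarray(x, dtype=float)
    x = x - x[0]                                   # change from baseline
    target = 0.5 * (x.max() if up_regulated else x.min())
    for j in range(1, len(x)):
        crossed = x[j] >= target if up_regulated else x[j] <= target
        if crossed:
            if x[j] == x[j - 1]:                   # flat segment: report its end point
                return times[j]
            # linear interpolation within [t_{j-1}, t_j]
            frac = (target - x[j - 1]) / (x[j] - x[j - 1])
            return times[j - 1] + frac * (times[j] - times[j - 1])
    return times[-1]
```

For an up-regulated gene rising linearly from 0 to 4 between days 1 and 3, the 50% level (2) is reached at day 2, which the interpolation reproduces exactly.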
We encode these key words into Immunology Ontologies (IO) and annotate each gene by matching the key words with the gene's definition and description in GenBank (http://www.ncbi.nlm.nih.gov/genban). The statistical significance of these keyword-related annotations is calculated by Fisher\u2019s exact test. The GO annotations are not specifically designed for immunology-related enrichment analyses; in order to decipher more detailed immunology-specific functional implications underlying these gene clusters and the regulatory networks, we define 406 key words based on the curated terms listed in ImmPort. Data sets containing TDEG expression values were uploaded into Ingenuity Pathways Analysis (IPA). Each identifier was mapped to its corresponding object in Ingenuity\u2019s Knowledge Base. Network-eligible molecules were overlaid onto a global molecular network and networks were then algorithmically generated based on their connectivity. In the upstream analysis, modulator Z-scores of 2 and above were considered probable activators and those less than \u22122 were considered probable inhibitors. Excel, Adobe Illustrator and Acrobat Professional CS5 were used to create the figures. Supplementary material is provided as additional data files: S1\u2013S5 Tables (XLSX) and S1 Text (PDF)."}
+{"text": "We consider data from a time course microarray experiment that was conducted on grapevines over the development cycle of the grape berries at two different vineyards in South Australia. Although the underlying biological process of berry development is the same at both vineyards, there are differences in the timing of the development due to local conditions. We aim to align the data from the two vineyards to enable an integrated analysis of the gene expression and use the alignment of the expression profiles to classify likely developmental function.We present a novel alignment method based on hidden Markov models (HMMs) and use the method to align the motivating grapevine data. We show that our alignment method is robust against subsets of profiles that are not suitable for alignment, investigate alignment diagnostics under the model and demonstrate the classification of developmentally driven genes.The classification of developmentally driven genes both validates that the alignment we obtain is meaningful and also gives new evidence that can be used to identify the role of genes with unknown function. Using our alignment methodology, we find at least 1279 grapevine probe sets with no current annotated function that are likely to be controlled in a developmental manner.The online version of this article (doi:10.1186/s12859-015-0634-9) contains supplementary material, which is available to authorized users. Alignment of time course gene expression data is an important problem since \u2018biological processes have the property that multiple instances of a single process may unfold at different and possibly non-uniform rates in different organisms, strains, individuals, or conditions\u2019. Such differences are present in our motivating data: a time course microarray experiment conducted on grapevines (Vitis vinifera L., Cabernet Sauvignon) at the \u2018Willunga\u2019 and \u2018Clare\u2019 vineyards in South Australia.
The experiment was run over the duration of the development cycle of the grape berries, from the closed-flower to ripe-red stage of the berries themselves. For each gene, we have a pair of expression profiles, one from each of the Willunga and Clare vineyards; pairs of expression profiles for four example genes can be seen in Fig. Various approaches have been used for alignment of time course gene expression data. However, in order to work with the typical sparsity of the grapevine data, as well as to provide a principled way to obtain a common alignment across both vineyards, we turn to hidden Markov model (HMM) based alignment methods. We discard the expression profiles not differentially expressed in time at the 0.001 % significance level using LIMMA. We denote the emission sequences W1:19 and C1:17 (indexed by time) for the Willunga and Clare vineyards respectively. The alignment is obtained based on the assumption that both emission sequences arise from a single state sequence S1:19. The time points for the Willunga sequence W1:19 correspond directly to those of the common state sequence S1:19, while the time points for the Clare sequence C1:17 are obtained via \u2018gap positions\u2019.Our study focuses on discovering gene regulatory networks from time series gene expression data using the Granger causality (GC) model. However, the number of available time points (T) is typically much smaller than the number of genes. We propose CGC-2SPR (CGC using two-step prior Ridge regularization) to resolve the problem by incorporating prior biological knowledge about a target gene data set. In our simulation experiments, the proposed new methodology CGC-2SPR showed significant performance improvement in terms of accuracy over other widely used GC modeling and MI-based (MRNET and ARACNE) methods.
In addition, we applied CGC-2SPR to a real biological dataset, i.e., the yeast metabolic cycle, and discovered more true positive edges with CGC-2SPR than with the other existing methods. Based on the resulting causality networks, we made a functional prediction that the Abm1 gene (whose functions were previously unknown) might be related to the yeast\u2019s responses to different levels of glucose. In our research, we noticed a \u201c1+1>2\u201d effect when we combined prior knowledge and gene expression data to discover regulatory networks. Our research improves causality modeling by combining heterogeneous knowledge, which is well aligned with the future direction of systems biology. Furthermore, we proposed a method of Monte Carlo significance estimation (MCSE) to calculate edge significances, which provide statistical meaning to the discovered causality networks. All of our data and source codes are available under the link https://bitbucket.org/dtyu/granger-causality/wiki/Home. Technology advances in molecular biology, especially those in Next Generation Sequencing (NGS), have innovated the principles of biology research. Although there are numerous studies on GRN inference, only a few of them have focused on the newly emerging type of data, i.e., time series gene expression data; with the help of NGS technology, researchers can now generate such data with relative ease. One popular approach for predicting gene regulatory networks from time series data is Dynamic Bayesian Network (DBN) modeling. Early DBN methods rested on Boolean Network (BN) theories; existing BN inference approaches include the REVEAL algorithm and MDL-based algorithms. Furthermore, mutual information (MI) methods based on information theory were applied in a few studies. Due to the increased expression data dimension size in recent years, another family of methods came into focus: Vector AutoRegressive (VAR) methods.
Granger causality inference is one of the most popular VAR methods, originally proposed in economic studies. Gene expression datasets typically contain a large number of genes (n) but an extremely small number of time points (T); the pairwise Granger causality (PGC) model regresses each target gene on the lagged values of a single candidate regulator, where e is random noise that conforms to a Gaussian distribution. In our simulation, the expression values of all the other time points are calculated by the rules described above; altogether, only 20 time points of gene expression data have been generated from the model, similar to real biological studies. This is another major difference from previous studies, which usually use hundreds of time points to train the model. The first prior knowledge is \u201cYeastNet\u201d, generated by summarizing heterogeneous knowledge from various traditional biological experiments. After the prior knowledge graph is filtered through the dictionary of the target gene set, it contains 33,583 gene pairs, and is then used to generate a prior knowledge weight matrix W with 33,583 \u00d72 (undirected) \u00d72 (model order) = 134,332 non-zero entries. The other prior knowledge is the genome-wide TF binding profiles of the yeast genes, based on \u201ck-mer\u201d motifs; table S5 in that study provides \u201ck-mer sums\u201d, which indicate the preferences of 85 TFs binding to different genes. We directly used these scores to build the prior knowledge matrix W. After extraction based on the target gene set dictionary, the prior knowledge graph contains 45 (the number of TFs) \u00d72921 (the number of the target genes) = 131,445 directed gene pairs, which correspond to a prior knowledge weight matrix W with 131,445 \u00d71 (directed) \u00d72 (model order) = 262,890 non-zero entries. As mentioned in the methodology section, we applied CGC-2SPR to the dataset in a two-step process.
Firstly, a normal Ridge regression is optimized and applied to the dataset; the model with \u03bb = 0.01 is much better than a random model based only on the zero regression coefficient matrix (results not shown). Based on the ordinary Ridge regression results (the optimal B\u2217 and \u03bb1), we started to incorporate prior knowledge into the regularization process. \u03bb1 was chosen as 0.01, the same as the \u03bb obtained from Ridge cross validation. To better mix heterogeneous knowledge, we choose the value of \u03bb2 such that the magnitude of B is comparable to that of \u03bb2W: \u03bb2 = max(B\u2217)/max(W). With this approach to set parameters, \u03bb2 is 5e\u22123 for CGC-2SPR with \u201cYeastNet\u201d and 2e\u22126 for CGC-2SPR with \u201cTF binding score\u201d. Evaluation gene pairs were sampled from either the filtered gene set or the golden standard. CGC-2SPR with YeastNet displays only a marginal improvement that is barely recognizable with or without the prior knowledge, while the average pairs are not noticeably affected. After extracting the results, we analyzed the biological meanings of the discovered causality networks, and we employed the MCSE algorithm to estimate the significance level of the identified causality edges. We plotted one causality network result using Cytoscape. According to the Saccharomyces genome database, MIG2 is involved in the transcriptional response to glucose; based on the discovered causality network, we predict that Abm1 might be a gene that responds to different glucose levels. As we observed from the simulation experiments, normal Lasso performs better than normal Ridge due to its feature selection effect. It is an interesting question whether Lasso or Elastic net with prior knowledge can provide even better performance than CGC-2SPR. However, the convergence properties for solving Lasso and Elastic net problems are worse because of the L1 norm, and, as a result, Lasso or Elastic net is much slower than Ridge. Nevertheless, the two-step algorithm proposed in this paper is directly applicable to both Lasso and Elastic net. The choice of the model order p is another interesting topic.
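The general idea of prior-guided Ridge regularization can be sketched as follows. This is not the exact CGC-2SPR objective, which is not fully recoverable from the text; the sketch assumes a per-coefficient penalty that shrinks coefficients less where the prior weight w supports an edge, with `lam1` and `lam2` playing roles analogous to \u03bb1 and \u03bb2.

```python
import numpy as np

def prior_ridge(X, y, w, lam1=0.01, lam2=1.0):
    """Ridge regression with a prior-weighted, per-coefficient penalty.

    X : (n_samples, n_predictors) lagged design matrix
    y : (n_samples,) target gene expression
    w : per-predictor prior weights (non-negative); larger weight means
        stronger prior support for the edge, hence a smaller penalty.
    This formulation is an assumption made for illustration.
    """
    penalties = lam1 / (1.0 + lam2 * np.asarray(w, dtype=float))
    A = X.T @ X + np.diag(penalties)      # closed-form generalized Ridge solve
    return np.linalg.solve(A, X.T @ y)
```

With all prior weights at zero this reduces to ordinary Ridge; as a weight grows, the corresponding coefficient approaches its least-squares value.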
Previous studies mentioned that choosing a different model order p can significantly impact the results; a higher order captures longer-range temporal dependence at the cost of more parameters, so p should be chosen based on such a trade-off. We used a cross validation approach to select the right model order p. However, the cross validation process is computationally expensive, which prompts us to explore other efficient ways to select the appropriate model order p in future. In addition, our real data experiments tested two different types of prior knowledge. This case study indicated that closely relevant prior knowledge can help generate better results than general prior knowledge; therefore, in real biological research, closely related prior knowledge is preferred whenever available. To successfully utilize network inference methods on time series data, the corresponding biological experiments should be carefully designed. A basic requirement is that the time series should cover the whole biological phenomenon of interest. For cyclic biological processes, more samples in each cycle make it possible to discover gene regulatory relationships that happen on a smaller time scale. Also, it is preferable to have time series expression measurements that cover 1.5\u20132 cycles, according to the Nyquist-Shannon sampling theorem. Last, a golden standard is usually not complete, or even far from complete, in real biological studies. The performance measured by the golden standard might be lower, or even far lower, than the actual performance on the real biological problem. Furthermore, the golden standard itself could also be used as solid prior knowledge to assist the analysis of time series gene expression data. On the other hand, other types of data (e.g.
genome wide TF binding data) might be noisy, but nevertheless more adequate than the golden standard, and can still better complement the expression data in discovering gene regulations. In this paper, we proposed a novel method, termed CGC-2SPR, that can effectively incorporate prior knowledge into Granger causality analysis and accurately derive causal relations between gene pairs from gene expression time series data. In contrast to previous studies, we generated simulation datasets under a close-to-real condition (n>>T) and used them to evaluate previous methods and our new approach. In our simulation experiments, CGC-2SPR showed significantly better prediction accuracy than the other popular methods, including PGC, Ridge and Lasso regularizations, and information theory approaches. For the real data experiments, we applied the new method to infer causal networks from the yeast metabolic cycle dataset, along with two types of prior knowledge (TF binding profile and YeastNet), respectively. Through the golden standard data evaluation, CGC-2SPR demonstrated both improved performance and the advantage of combining heterogeneous knowledge. Furthermore, we proposed a new Monte Carlo method, MCSE, to estimate the significance levels of causal relations. In this section, we studied the noise effect on the performance of the network inference methods evaluated in the manuscript, including CGC-2SPR, PGC, Ridge, Lasso, Enet, ARACNE and MRNET. The simulation datasets are generated with the same golden standard network, but at different noise levels with Gaussian distributions. At the highest noise level, Lasso and Enet performed better than CGC-2SPR; at all other noise levels, CGC-2SPR performs stably and consistently better than all the remaining methods.
With the help of prior knowledge, the PRC curve of CGC-2SPR drops only slightly as the noise level increases. We evaluated the performance of the network inference methods over the generated simulated expression profiles. Time-delayed versions of ARACNE and MRNET have been verified in existing studies to perform well on time series data. The time-delayed version of MRNET (TD-MRNET) is implemented by following the descriptions in the literature. We compared the performance of TD-ARACNE, TD-MRNET, ARACNE, MRNET and the two best performing methods, i.e., CGC-2SPR and Lasso. Under the n>>T condition, these pairwise information-metric based methods are quite susceptible to noise; therefore, random coincidental regulatory relationships will dominate the results of ARACNE, MRNET, TD-ARACNE and TD-MRNET. The tests on the other noise levels demonstrated similar results and are not shown here."}
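The MCSE idea of attaching significance levels to discovered edges can be sketched with a simple permutation-based Monte Carlo scheme. This is an illustration of the general principle, not the authors' exact algorithm: the lagged-correlation statistic used below is a stand-in for any causality score, and shuffling the time order of the source gene is the assumed null model.

```python
import numpy as np

def mc_edge_significance(x, y, stat, n_mc=500, seed=0):
    """Monte Carlo significance estimate for a putative edge x -> y.

    The observed causality statistic is compared against a null distribution
    obtained by shuffling the time order of x, which destroys any temporal
    dependence of y on x while preserving the marginal distribution of x.
    """
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    null = np.array([stat(rng.permutation(x), y) for _ in range(n_mc)])
    return (np.sum(null >= observed) + 1) / (n_mc + 1)
```

When y truly lags x, the observed statistic sits far in the tail of the shuffled null distribution and the estimated significance is near 1/(n_mc + 1).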
+{"text": "Given the development of high-throughput experimental techniques, an increasing number of whole genome transcription profiling time series data sets, with good temporal resolution, are becoming available to researchers. The ReTrOS toolbox (Reconstructing Transcription Open Software) provides MATLAB-based implementations of two related methods, namely ReTrOS\u2013Smooth and ReTrOS\u2013Switch, for reconstructing the temporal transcriptional activity profile of a gene from a given mRNA expression time series or protein reporter time series. The methods are based on fitting a differential equation model incorporating the processes of transcription, translation and degradation.The toolbox provides a framework for model fitting along with statistical analyses of the model, with a graphical interface and model visualisation. We highlight several applications of the toolbox, including the reconstruction of the temporal cascade of transcriptional activity inferred from mRNA expression data and protein reporter data in the core circadian clock in Arabidopsis thaliana, and how such reconstructed transcription profiles can be used to study the effects of different cell lines and conditions.The ReTrOS toolbox allows users to analyse gene and/or protein expression time series where, with appropriately formulated prior information about a minimal set of kinetic parameters, in particular rates of degradation, users are able to infer timings of changes in transcriptional activity. Data from any organism and obtained from a range of technologies can be used as input due to the flexible and generic nature of the model and implementation. The output from this software provides a useful analysis of time series data and can be incorporated into further modelling approaches or in hypothesis generation.The online version of this article (doi:10.1186/s12859-017-1695-8) contains supplementary material, which is available to authorized users.
Analyzing the temporal dynamics of mRNA and protein expression is a key ingredient in the study of gene function within the cell. The widespread adoption of high-throughput transcriptomic and proteomic technologies, such as microarrays, fluorescent imaging, transcriptional reporter constructs and sequencing, has enabled the generation of large numbers of high-resolution genome-scale time series data sets. The processing, analysis and summarising of such time series data has a number of theoretical and computational difficulties to overcome. If, for example, a protein reporter construct is used, what is the relationship between the observed reporter protein and the mRNA expression dynamics of the gene of interest? Moreover, allowing for the processes of mRNA and protein degradation, what is the actual transcriptional activity? ReTrOS-Smooth is based upon the algorithm introduced in Harper et al. (2011) [2], and ReTrOS-Switch is based upon the algorithm introduced in Jenkins et al. (2013) [3]. Here, we present a software toolbox called ReTrOS (Reconstructing Transcription Open Software) which provides several approaches for processing and analysing both gene and protein expression time series data sets, with an easy-to-use graphical interface for user interaction. The software is written in the cross-platform MATLAB\u00ae environment. The approach used in ReTrOS is based on a differential equation model to account for the processes of transcription, translation and degradation. This article introduces and describes the underlying model and the two algorithms implemented in ReTrOS. We briefly discuss the data requirements and compare the applicability of the two methods.
Following the methods overview, we present several cases of applying ReTrOS, concluding with some final remarks summarising the software and its uses. The basic model underlying ReTrOS is an ordinary differential equation (ODE) model: dM(t)/dt = \u03c4(t) \u2212 \u03b4M M(t) and dP(t)/dt = \u03b1 M(t) \u2212 \u03b4P P(t), where M(t) is the amount of mRNA transcript at time t, P(t) is the amount of protein at time t, \u03c4(t) is the rate of transcription/mRNA synthesis, \u03b4M is the rate of mRNA transcript decay, \u03b1 is the rate of translation/protein synthesis and \u03b4P is the rate of protein decay. Let y(ti), i=1,\u2026,n, be the discretely observed (not necessarily at equidistant time points) imaging protein time series. Assuming that the mean of the data is proportional, with unknown factor \u03ba, to the concentration of the reporter protein, we have y(ti) = \u03baP(ti) + \u03b5(ti), where \u03b5(ti), i=1,\u2026,n, are independent normally distributed random variables each with mean zero and unknown variance \u03c32(ti). If the observations are mRNA expression levels (e.g. measured using microarray technology), we instead have y(ti) = \u03baM(ti) + \u03b5(ti), with \u03b5(ti), i=1,\u2026,n, again independent normally distributed random variables each with mean zero and unknown variance \u03c32(ti). The main difference between ReTrOS-Smooth and ReTrOS-Switch lies in the formulation of the transcription function \u03c4(t), as follows. In ReTrOS-Smooth we use a kernel smoothing function that we found to be more flexible, outperforming the spline approach particularly when the data change rapidly over short time-scales. The user is assumed to have prior information about the values of the degradation rates and has to provide these through appropriate distributions. The algorithm extends the transcription reconstruction algorithm originally introduced in [2]. Given \u03b4P, we can compute the mRNA path by rewriting the protein equation in terms of M(t).
In ReTrOS-Smooth, the continuous function P(t) is estimated by kernel smoothing with a bandwidth selected from the data. If the scaling factors \u03ba and \u03b1 are unknown, the profiles of the reconstructed time paths representing the mRNA and transcriptional dynamics are computed at an arbitrary level; in ReTrOS we set \u03ba\u03b1=1 as the default parameter value. If the y(ti) represent mRNA levels, the procedure involves only the one step of the back-calculation; otherwise the y(ti) are assumed to be related to the unknown mRNA expression levels, M(t), through the protein equation, and a prior for \u03b4P is required. Variability of the reconstructed profile: the variability of the profiles arises both from the estimation of the smooth curve and from the uncertainty about the degradation rates, and the influence of both sources can be estimated in a straightforward way via bootstrap simulation methods: (1) compute the residuals e(ti), i=1,\u2026,n, between the data and the fitted smooth curve; (2) obtain a resampled data profile of the same sample size by adding resampled residuals, preserving the sign structure of e(ti); (3) refit the smooth curve to the resampled profile; (4) draw values for \u03b4P and \u03b4M from their prior distributions; (5) calculate the reconstructed mRNA and transcription profiles. Steps (2) to (5) are repeated R times (by default set to 99 in ReTrOS-Smooth), after which point-wise mean estimates are computed; the values for \u03b4P and \u03b4M in step (4) are drawn from a gamma distribution with mean and standard deviation provided by the user. If the time course represents mRNA levels then steps (1) to (5) are implemented analogously, where only a prior distribution for \u03b4M is required. An example output from the ReTrOS-Smooth method is shown in Fig. The transcription function \u03c4(t) in ReTrOS-Switch: this method is an implementation of the transcriptional switch model inference introduced in Finkenst\u00e4dt et al. and Jenkins et al., in which \u03c4(t) is piecewise constant over the total length L of the time interval observed. Note that transcription might not be fully turned off and that there may be more than just two states. The posterior distributions for the switch-times s1,\u2026,sk and the number of switches k are estimated by the software. Extending this model to incorporate protein expression dynamics gives the corresponding model for M(t) and P(t).
The model parameters P(0), M(0) and all \u03c4 values are sampled following the methodological approach introduced for the switch model, together with \u03b4P and \u03b4M, which are updated using a standard Metropolis-Hastings acceptance scheme. For protein reporter observations, initial values for P(0), M(0) and \u03c40,\u2026,\u03c4k are obtained through linear regression of the model, and each method provides default values for all other parameters. Most algorithmic parameters can be modified through the user interface and all parameters can be modified through the batch scripts. Both methods assume that changes in transcription activity are expected. The algorithmic complexity of the ReTrOS-Smooth method is lower than that of the ReTrOS-Switch method, which also requires a large number of MCMC iterations to estimate the parameter distributions accurately; as such, the computational time required for the ReTrOS-Smooth method is far smaller than for the ReTrOS-Switch method. ReTrOS-Smooth processes time series such as those in Fig. The algorithms used in the ReTrOS toolbox have previously been applied to a number of different mRNA and protein reporter data sets, including Arabidopsis thaliana leaf samples. Here we also present a new case study applying ReTrOS to recently published mRNA and protein-reporter time series of a selection of central Arabidopsis thaliana circadian clock-related genes. Circadian clocks and rhythms are present in most living organisms and provide a regulatory mechanism for many important processes. A common feature of circadian clocks is that they consist of both transcriptional and translational components. We use the ReTrOS software to explore the oscillatory nature of the mRNA expression and protein reporter time series data from the model plant Arabidopsis thaliana.
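The back-calculation at the heart of ReTrOS-Smooth (inverting the protein equation for M(t), then the mRNA equation for \u03c4(t)) can be sketched as follows. ReTrOS itself estimates derivatives from a kernel-smoothed curve; the finite-difference derivative via `np.gradient` is an assumption made to keep this sketch self-contained.

```python
import numpy as np

def back_calculate(t, P, delta_P, delta_M, alpha=1.0):
    """Back-calculate mRNA and transcriptional activity from a (smoothed)
    protein reporter series under the ODE model
        dM/dt = tau(t) - delta_M * M,   dP/dt = alpha * M - delta_P * P.
    """
    dP = np.gradient(P, t)                  # dP/dt by central differences
    M = (dP + delta_P * P) / alpha          # invert the protein equation
    dM = np.gradient(M, t)                  # dM/dt of the reconstructed path
    tau = dM + delta_M * M                  # invert the mRNA equation
    return M, tau
```

At steady state the derivatives vanish, so the reconstruction reduces to M = \u03b4P P/\u03b1 and \u03c4 = \u03b4M M, which gives a quick consistency check.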
Identifying temporal events in the Arabidopsis thaliana circadian \u2018repressilator\u2019 circuit: using the simplified core \u2018repressilator\u2019 model, the reconstructed profiles clearly show the circadian nature of the transcriptional dynamics: for example, LHY has three transcriptional switches, at approximately 6, 10 and 20 h, during the first 24-h observation period, followed by three switches of the same types at approximately the same times during the second 24-h period. However, not all of the analysed profiles displayed identifiable expression dynamics, and as such a clear transcriptional switch profile could not always be obtained. The putative interactions of the repressilator model can be visually identified in many cases: for example, the regulatory interactions by the LHY/CCA1 group (shown in black) can be identified, with positive regulation of several PRR genes and negative regulation of several EC genes. The temporal flow of transcriptional regulation in waves can also be identified, with the LHY/CCA1 interactions followed by the PRR interactions, which are followed by the EC interactions. Effects of light and temperature conditions on circadian marker genes: we analysed protein luciferase reporter data from a 4-day time series dataset of the circadian-controlled CCR2 and CAB2 promoters in a range of wild-type and mutant lines, temperature conditions and light regimes, including the cry1 cry2 CCR2 double-mutant line under red (RL) and mixed red-blue (RBL) light conditions. We observe similar behaviours to those identified within the original study, such as an increased period in the cry1 cry2 CCR2 mutant line at 27 \u00b0C under RBL conditions; however, the back-calculated transcription profile of the cry1 cry2 CCR2 line still shows damped rhythmic dynamics.
As the back-calculation model takes into account the degradation processes of the luciferase reporter, finer-scale structures in the time series can be extracted, which may allow, for instance, increased accuracy in periodicity inference when using methods robust to asymmetric oscillations, such as spectrum resampling (Fig. 7B). Analysis of large-scale and high-throughput data is becoming an increasingly common task for many researchers. We provide an easy-to-use toolbox for the analysis of mRNA or protein-reporter time series data that generates a fine-scale profile of transcriptional activity by removing the effects of degradation processes from the observed data. The ReTrOS toolbox, user manual and example data are freely available from"}
+{"text": "Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information.This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much less fluctuation. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression.The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine networks with more regulatory relations, and numerical results show that more regulations may exist. We test the possibility of auto-regulation, but numerical simulations do not support positive auto-regulation.
In addition, robustness is used as an important additional criterion to select candidate networks.The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.The online version of this article (doi:10.1186/s12920-017-0312-z) contains supplementary material, which is available to authorized users. The advances in omics technologies have generated huge amounts of information regarding gene expression levels and protein kinase activities. The availability of these large datasets provides unprecedented opportunities to study large-scale regulatory networks inside the cell by using various types of omics datasets. Mathematical methods for the analysis of single-cell observation data are mainly used for normalization of experimental data, identification of variable genes, sub-population identification, differentiation detection and pseudo-temporal ordering. One of the major challenges in computational biology is the development of dynamic models, such as differential equation models, to study the dynamic properties of genetic regulatory networks. To address these issues, this work proposes a novel approach to reverse-engineer gene networks using single-cell observations. To get a pseudo-temporal ordering of single cells, we first use a method of dimensionality reduction, namely diffusion maps. In the underlying experiment, Flk1 + cells were flow sorted at primitive streak (PS), neural plate (NP) and head fold (HF) stages. In addition, the E8.25 cells were subdivided into putative blood and endothelial populations by isolating GFP + cells and Flk1 + GFP \u2212 cells (4SFG \u2212), respectively. Cells were sorted from multiple embryos at each time point, with 3934 cells going to subsequent analysis. 
The experimental study quantified the expression of 33 transcription factors involved in endothelial and hematopoietic development, nine marker genes, including the embryonic globin Hbb-bH1 and cell surface markers such as Cdh5 and Itga2b (CD41), as well as four reference housekeeping genes. We select 40 genes from this dataset, excluding the four housekeeping genes and two other genes (i.e. HoxB2 and HoxD8) because the variations in the expression levels of these six genes are relatively small.A recent experimental study used the single-cell qPCR technique to identify the expression levels of 46 genes in 3934 single stem cells that were isolated from the mouse embryo. The dCt values in Supplementary Table 3 represent the measured expression levels.The process of pseudo-temporal ordering can be divided into two major steps. The first step uses Diffusion Maps for lower-dimensional visualization of the high-dimensional gene expression data; the Wanderlust algorithm is then employed to order the individual cells and obtain the trends of different genes.Diffusion Map is a manifold learning technique for dimensionality reduction that re-organizes data according to their underlying geometry. It is a nonlinear approach for visual exploration that describes the relationship between individual points using a lower-dimensional structure that encapsulates the data. An isotropic Gaussian kernel, Kij=exp(\u2212\u2225xi\u2212xj\u22252/\u03b5), is defined on the expression profiles xi of the single cells, where \u2225\u00b7\u2225 is the Euclidean norm, and \u03b5 is an appropriately chosen kernel bandwidth parameter which determines the neighbourhood size of points. In addition, an N\u00d7N Markov matrix is constructed by normalizing the rows of the kernel matrix: Mij=Kij/P(xi), where P(xi)=\u2211jKij is a normalization constant. Mij represents the connectivity between two data points xi and xj; it is also a measure of similarity between data points within a certain neighbourhood. Finally we compute eigenvalues and eigenvectors of this Markov matrix, and choose the largest d eigenvalues. 
The corresponding d eigenvectors are output as the lower-dimensional dataset Yi, i=1,\u2026,d. Using the generated low-dimensional dataset, we use the Wanderlust algorithm to obtain a one-dimensional developmental trajectory. Several assumptions underlie the application of Wanderlust to sort gene expression data from single cells. Firstly, the data sample includes cells from the entire developmental process. In addition, the developmental trajectory is linear and non-branching, meaning that the developmental process is one-dimensional. Furthermore, the changes in gene expression values are gradual during the developmental process, and thus the transitions between different stages are gradual. Based on these assumptions, we can infer the ordering of single cells and identify different stages in cell development by using the Wanderlust algorithm.The Wanderlust algorithm consists of two major stages, namely an initiation step and an iterative step for trajectory detection. In the first stage, we select a set of cells as landmarks uniformly at random, so that each cell has landmarks nearby. We then construct a k-nearest-neighbours graph in which every cell connects to the k cells most similar to it, randomly pick l neighbours out of the k nearest neighbours for each cell, and generate an l-out-of-k-nearest-neighbours graph (l-k-NNG). The second stage performs the trajectory detection. One early cell s should be selected first, which serves as the starting point of the pseudo-trajectory; the point s can be determined by the Diffusion Maps method in the first stage. For every single cell, the initial trajectory score is calculated as the distance from the starting-point cell s to that cell; these scores are then iteratively refined using the distances to the landmark cells. 
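The kernel and Markov-matrix construction described above can be sketched in plain Python. This is a toy illustration with hypothetical data; a full implementation would additionally compute the leading eigenvectors of M (e.g. with numpy.linalg.eig) to obtain the low-dimensional embedding.

```python
import math

def markov_from_expression(X, eps):
    """Row-normalized Markov matrix used by diffusion maps.

    X   -- list of expression profiles (one list of floats per cell)
    eps -- kernel bandwidth controlling the neighbourhood size
    """
    n = len(X)
    # Isotropic Gaussian kernel: K_ij = exp(-||x_i - x_j||^2 / eps)
    K = [[math.exp(-sum((a - b) ** 2 for a, b in zip(X[i], X[j])) / eps)
          for j in range(n)] for i in range(n)]
    # Row normalization: M_ij = K_ij / P(x_i), with P(x_i) = sum_j K_ij
    M = [[K[i][j] / sum(K[i]) for j in range(n)] for i in range(n)]
    return M

# Three toy "cells" with two-gene profiles (hypothetical values)
cells = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1]]
M = markov_from_expression(cells, eps=1.0)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in M)  # rows sum to 1
```

Note that nearby cells receive much larger connectivity M_ij than distant ones, which is what lets the subsequent eigendecomposition recover the underlying geometry.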
Regarding the four genes in the second group, it is assumed that both the synthesis rate and the degradation rate are functions of time, since we need to realize the first switching from a high expression level to a low expression level. The nine genes selected show clear changes in expression over time, with switching at around t=50 and t=200, and numerical results are consistent with those shown in the corresponding figure.Our simulation results suggest that the proposed ODE system is stiff, even when an implicit method with a very good stability property is used for the numerical solution of the proposed model: the numerical simulation will break down if we try to find the solution over a relatively long pseudo-time interval. Therefore we have to separate the whole time interval into a number of subintervals, and in each subinterval we use the experimental observation data as the initial condition to generate the solution of that subinterval. Simulated profiles of two genes (Sox7, Sox17) are shown in the corresponding figure.The proposed network was also extended with more regulatory relations. For this extended network, we again use the ABC algorithm to infer model parameters using the modified dCt values, and simulation results for four genes are presented in the corresponding figure.In the graphical model generated by the GENIE3 algorithm, auto-regulation, namely the positive or negative regulation of a gene on its own expression, is not considered. To find the potential auto-regulations of these genes, we test the network by adding positive auto-regulation to a particular gene: for the i-th gene, we set bii>0, with aii>0 for positive auto-regulation. We first test the gene network shown earlier. We have also conducted the positive auto-regulation test for the model based on the network in the Additional file: the models with positive auto-regulation of Gata1, Gfi1b and Hhex have better robustness properties than the model without any perturbation. However, for the gene network with 17 regulations in the Additional file, only the models with positive auto-regulation of Sox7 and Sox17 have better robustness properties than the network without positive auto-regulation. 
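The subinterval strategy described above, restarting each interval from the observed data, can be sketched as follows. This is a minimal forward-Euler illustration on a hypothetical synthesis/degradation model dx/dt = a - b*x, not the authors' actual ODE system; the observation values are made up for the example.

```python
def simulate_piecewise(obs_times, obs_values, rhs, dt=0.01):
    """Integrate an ODE over [t0, tK], restarting each subinterval
    [t_k, t_{k+1}] from the observed value at t_k."""
    traj = []
    for k in range(len(obs_times) - 1):
        t, x = obs_times[k], obs_values[k]      # reset from data
        while t < obs_times[k + 1]:
            x += dt * rhs(t, x)                 # forward Euler step
            t += dt
        traj.append((obs_times[k + 1], x))
    return traj

# Toy synthesis/degradation model (hypothetical rates a=2, b=0.5)
rhs = lambda t, x: 2.0 - 0.5 * x
obs_t = [0.0, 50.0, 100.0]
obs_x = [0.0, 3.5, 4.0]                         # pretend observed data
traj = simulate_piecewise(obs_t, obs_x, rhs)
# each subinterval relaxes toward the steady state a/b = 4
assert all(abs(x - 4.0) < 0.5 for _, x in traj)
```

In a real application a stiff implicit solver would replace the Euler step; the point illustrated here is only the reset of the initial condition at the start of every subinterval.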
These simulation results do not provide strong evidence to support positive auto-regulation of the nine genes.We also test the robustness properties of the developed models and of the models with positive auto-regulation. We first use the estimated optimal parameter set to generate one simulation, which is regarded as the exact simulation of the model without any perturbation. Then all the model parameters are perturbed by using a uniformly distributed random variable, and perturbed simulations are obtained using the perturbed model parameters. We generate 1000 sets of perturbed simulations and calculate the mean and variance of the simulation error of the perturbed simulations with respect to the unperturbed simulation, for the gene networks shown in the figures.In this work we have designed an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The diffusion map method is first used to obtain a visualization of gene expression data derived from 3934 blood stem cells. The wanderlust algorithm is then employed to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression levels in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both the GENIE3 algorithm to reconstruct the regulatory network and a mathematical model using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development: we constructed a graphical model for a regulatory network with forty genes and a differential equation model for a network of nine genes. 
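The robustness test described above can be sketched as follows, using a toy one-gene model and a hypothetical perturbation range of ±10% standing in for the authors' full ODE system and their actual perturbation distribution.

```python
import random, statistics

def simulate(a, b, x0=0.0, steps=200, dt=0.05):
    """Forward-Euler simulation of the toy model dx/dt = a - b*x."""
    xs, x = [], x0
    for _ in range(steps):
        x += dt * (a - b * x)
        xs.append(x)
    return xs

random.seed(0)
a0, b0 = 2.0, 0.5                      # "estimated optimal" parameters
exact = simulate(a0, b0)               # unperturbed reference simulation

errors = []
for _ in range(1000):
    # perturb every parameter with a uniform random factor (assumed range)
    a = a0 * random.uniform(0.9, 1.1)
    b = b0 * random.uniform(0.9, 1.1)
    sim = simulate(a, b)
    # simulation error: mean squared deviation from the unperturbed run
    errors.append(sum((u - v) ** 2 for u, v in zip(sim, exact)) / len(exact))

print(statistics.mean(errors), statistics.variance(errors))
```

A model (e.g. one with an added auto-regulation term) would then be judged more robust if the mean and variance of these perturbed-simulation errors are smaller.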
The research results in this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.In this work we use the simulation error as the key criterion to select the model parameters and infer the regulations between genes. However, because of the complex search space of model parameters and the noise in experimental data, it may be difficult to judge which model is really better than the others if the difference between simulation errors is small. In fact, the simulation errors of the various models for the network of nine genes are quite close to each other. Therefore, in addition to using the simulation error as the sole criterion to select a model, other measures, such as the AIC value, parameter identifiability and the robustness properties of a network, are also needed as important criteria. All of these issues are potential topics for future research."}
+{"text": "The appropriate timing of flowering is crucial for the reproductive success of plants. Hence, intricate genetic networks integrate various environmental and endogenous cues such as temperature or hormonal status. These signals integrate into a network of floral pathway integrator genes. At a quantitative level, it is currently unclear how the impact of genetic variation in signaling pathways on flowering time is mediated by floral pathway integrator genes. Here, using datasets available from literature, we connect Arabidopsis thaliana flowering time in genetic backgrounds varying in upstream signalling components with the expression levels of floral pathway integrator genes in these genetic backgrounds. Our modelling results indicate that flowering time depends in a quite linear way on expression levels of floral pathway integrator genes. This gradual, proportional response of flowering time to upstream changes enables a gradual adaptation to changing environmental factors such as temperature and light. Substantial qualitative information is available about the factors involved and how these interact genetically, both for the signal transduction pathways and the floral pathway integrator genes. One key integrator is FLOWERING LOCUS T (FT), an activator of flowering. FT is produced in the leaves and moves to the shoot apical meristem, where it promotes the expression of SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) and APETALA1 (AP1). A central repressor is FLOWERING LOCUS C (FLC). FLC, together with SHORT VEGETATIVE PHASE (SVP), represses the transcription of SOC1 and FT. Thus FLC acts as a flowering repressor by blocking the photoperiodic flowering pathway. In the ambient temperature pathway, which involves amongst others FLOWERING LOCUS M (FLM) and SVP, small fluctuations in temperature influence flowering time via floral pathway integrators including FT and SOC1. SOC1 in turn promotes LEAFY (LFY), a further floral pathway integrator, and the autonomous and vernalization pathways act via repression of FLC. 
Gibberellin signalling also promotes flowering via SOC1 and LFY. In the model, the regulation of each gene is described by a function f of the concentrations x of one or more of its regulators. To simulate the effect of genetic variation in upstream signalling pathways influencing a given gene, the value of each parameter \u03b2 in its equation was modified by multiplying it with a factor a ranging from 0.05 to 10 in steps of 0.05 and subsequently from 10 to 100 in steps of 1. The resulting flowering time after simulating the modified model was obtained, as well as the expression value of the gene itself at day 10. Out of the resulting expression values, a range of ten-fold expression change was chosen around the unperturbed expression level at day 10. For SVP and FLC there is no ODE because these genes are present as external inputs in the model. For these, variation in upstream signalling pathways was simulated by simply setting the level of the gene to different fixed levels. For SVP this again involved a range of ten-fold expression change; for FLC this range was arbitrarily made larger because of the small effect of a ten-fold expression change.Predictions from the dynamic flowering time model were obtained using the model as presented previously. For FT, values are often provided for several timepoints during one day (to capture the circadian rhythm). Although for such a case it would in principle be best to record the total area under the curve (sum of expression), for simplicity the highest observed value was used as an approximation in this case.We use data from a randomly chosen subset of genes for which mutations are described as impacting flowering time. Through each dataset a straight line was fitted: T\u00a0=\u00a0Sensitivity\u2217x\u00a0+\u00a0T0, where T is flowering time and x is expression level; Sensitivity and T0 are parameters for which values are obtained in the fit. 
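The perturbation-factor grid described above can be written out explicitly. This is a small sketch; the helper name is ours, and in the actual workflow each factor would multiply a parameter beta before re-simulating the ODE model.

```python
def scan_factors():
    """Multiplicative factors applied to a parameter beta:
    0.05..10 in steps of 0.05, then up to 100 in steps of 1."""
    fine = [round(0.05 * i, 2) for i in range(1, 201)]   # 0.05 .. 10.0
    coarse = [float(v) for v in range(11, 101)]          # 11 .. 100
    return fine + coarse

fs = scan_factors()
assert fs[0] == 0.05 and fs[-1] == 100.0 and len(fs) == 290
```

For each factor the model is simulated and the pair (gene expression at day 10, predicted flowering time) is recorded, yielding the dependency curves analysed in the text.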
The R-function lm was used for the linear fit, and cor.test to test the statistical significance.One important point in our data analysis is that the various datasets were obtained using different ways of normalizing the expression values. Multiplicative normalization affects Sensitivity in a multiplicative way: if T\u00a0=\u00a0S\u2217x\u00a0+\u00a0T0, then for x\u2032\u00a0=\u00a0a\u2217x, T\u00a0=\u00a0(S\u2215a)\u2217x\u2032\u00a0+\u00a0T0, i.e., S\u2032\u00a0=\u00a0S\u2215a. Hence, we can compare the value of Sensitivity for different genes only when the same reference gene is used for normalization, and no additional relative normalization is used. The parameter T0 should be independent of the normalization that is used for expression data; it would only depend on the unit of flowering time. This unit was either total leaf number or rosette leaf number; we did not observe a systematic difference for data reported in either unit and hence did not discriminate between these cases in presenting our results.In addition to separately fitting the various datasets available for a given floral pathway integrator gene, we also obtained one model for each floral pathway integrator gene in which the various datasets were fitted simultaneously. This was performed using the R-function nls. In these models, each dataset obtained its own value of Sensitivity, but only one global value of T0 was used for each floral pathway integrator gene.Based on AP1 expression, the model predicts flowering time: flowering is predicted to start when AP1 expression passes a certain threshold. This model was developed using expression data and flowering time of wild-type Arabidopsis thaliana, as well as mutants of floral pathway integrator genes. In our current work, we focus on genetic variation in upstream signalling pathways, which were not used previously for modelling. 
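The straight-line fit and the effect of multiplicative normalization on Sensitivity can be illustrated with a small least-squares sketch. The data here are hypothetical; the paper itself performs these fits with the R functions lm and nls.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of T = Sensitivity * x + T0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx     # (Sensitivity, T0)

# Hypothetical expression levels and flowering times (leaf number)
expr = [0.5, 1.0, 2.0, 4.0]
days = [28.0, 25.0, 19.0, 7.0]
S, T0 = fit_line(expr, days)

# Rescaling expression by a factor a (a different normalization)
# divides Sensitivity by a but leaves T0 unchanged: S' = S / a.
a = 10.0
S2, T02 = fit_line([a * x for x in expr], days)
assert abs(S2 - S / a) < 1e-9 and abs(T02 - T0) < 1e-9
```

This is exactly why T0 can be compared across differently normalized datasets while Sensitivity cannot.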
To simulate variation in these upstream signalling pathways, parameters describing input to the floral pathway integrator genes were modified in the model (see \u2018Methods\u2019). This allowed us to observe the dependency of predicted flowering time on expression levels of floral pathway integrator genes, for all the genes, over the full range of expression displayed; R2 values for the linear fits are all above 0.75.We aim to obtain a comprehensive picture of how variation in signalling pathways influences flowering time via affecting floral pathway integrator genes. To do so, we first analysed our recently published mechanistic model for the floral pathway integrator gene network. For these genes, including SOC1, the model predicts a linear response. Hence, analysis of our floral pathway integrator gene regulatory network model predicts a gradual and rather linear dependence of the flowering time response on changes in input to the floral regulatory network. To assess the validity of this prediction, we chose to analyse the large amounts of datasets available in literature. Numerous studies present measurements of flowering times in various conditions and for various genetic backgrounds. Since one often knows which floral pathway integrator gene is relevant for the specific signalling pathway involved, the expression levels of the specific gene thought to be responsible for mediating the input from the signal transduction pathway are measured as well. Although one has to extract most of this data manually from tables or figures in relevant publications, it is an advantage that large amounts of data can be analysed in this way. Even though some of the individual datasets are small, in total the data consist of over 200 pairs of measurements of expression level and flowering time. This data has so far been scattered throughout literature and we demonstrate that it can be integrated. 
We use this data as a means to describe in a quantitative way the effect of changes in genetic background in signalling pathway components on flowering time. We start with a specific example regarding the floral pathway integrator gene SOC1, for which expression measurements (qPCR) were obtained in different genetic backgrounds and different conditions. Flowering time shows a clear linear dependence on SOC1 expression (R2\u00a0=\u00a00.80). It is this dependency that is the focus of investigation of this study, for SOC1 as well as for the other floral pathway integrator genes. In our analysis, we focus on the effect of differences in genetic backgrounds on each particular gene in the floral pathway integrator gene network. For that particular gene, expression level measurements might then be explanatory for flowering time changes.When integrating and comparing data for different experiments or different genes, one particular complication is that reported qPCR gene expression levels are normalized in various ways. In order to be able to combine datasets from different publications, one of the two following conditions should hold: (1) The same reference gene was used for normalization, and we assume that the expression level of the reference gene is constant under the different conditions applied in the various publications. In this scenario, expression levels of different genes in various publications can be quantitatively compared. Alternatively, (2) the reported expression level was scaled using wildtype expression levels of the gene of interest. In this case, in order to compare data from different publications, it is essential that the wildtype expression level that is used is the same. This seems less likely than the assumption that a reference gene such as actin or tubulin has a constant gene expression level. 
In several cases, the two scenarios are combined, in the sense that qPCR data are first normalized to a reference gene but the reported expression level is subsequently scaled to a wildtype expression level.For SOC1, the data analysed above were reported after scaling the expression level to wildtype SOC1 expression levels. Two additional examples of data for flowering time and SOC1 expression were obtained in which expression levels were normalized relative to a reference gene; remarkably, a similar linear dependence is observed in these datasets. Hence, T0 values for the same floral pathway integrator gene obtained from different datasets should be quite similar. This was indeed observed for the SOC1 datasets presented above. More generally, although there is some variation, the different values of T0 obtained for a given gene are indeed significantly similar to each other compared to the values for the other genes. Note that LFY is not included in this figure because a lfy mutant does not flower properly at all, and hence LFY cannot be predicted by the simple linear analysis presented here. We provide an alternative analysis of our flowering time ODE model for the prediction of lfy mutant flowering time, in which LFY expression was fixed at given levels and the resulting flowering time predicted by the ODE model was recorded. For values of LFY below \u223c1nM, the model predicts that there is no flowering. This behaviour is in accordance with the known behaviour of the lfy null-mutant, which was not used for training the model, providing additional independent validation for the model.We compared the values of T0, our prediction of flowering time, to the observed flowering time in knock-out mutants (Fig. 4B). One reason for the small slope in this comparison could be the fact that knock-out mutants in general will not have exactly zero expression in planta, leading to a smaller effect on flowering time than predicted. 
Nevertheless, the clear relationship between predicted and experimental flowering time provides independent validation of the simple linear model fits from which the value of T0 was obtained. Note that the flowering time and expression data used to obtain these fits are from genetic backgrounds in which upstream signal components have been mutated. Hence, the input data are independent from the floral pathway integrator gene knock-out mutants from which flowering time data is used here.Input from the environment is transduced by signalling pathways and integrated by a small number of floral pathway integrator genes. The complexity of the signalling pathways and their connection with the floral pathway integrator genes is overwhelming. Hence, understanding the effect of genetic variation in signalling pathways on flowering time is a daunting task. Our analysis indicates that in spite of this complexity, the effect of differences in genetic background can be quantitatively understood by focussing on expression level changes of floral pathway integrator genes. Perturbations in upstream signalling pathways affect floral pathway integrator genes mostly in such a way that the effect on flowering time is linear in the change in gene expression level. The fact that a linear response is significant in most cases, and that this response is observed for different floral pathway integrator genes, suggests that it is an important aspect of the way in which plants adapt to their local environment. The measured expression level changes are often up to tenfold or higher. 
Our findings on the role of gene expression variation in transducing the effect of genetic background variation to flowering time can be compared with more general analyses focusing on understanding the effect of variation in genetic background on phenotypes. For example, it was found in C. elegans that the effect of genetic background on the severity of RNAi and mutant phenotypes could be predicted from variation in the expression level of the affected gene.Our findings are based on literature data obtained under various experimental conditions. For example, the day, or the timepoint during the day, used for measurement differs between datasets. More generally, gene expression clearly might display different trends in different tissues or between different cell-types within a tissue. Using a single qPCR-based value to characterize the expression of a gene ignores these spatial aspects completely. Although this puts a limit on the level of comparability between these data sets, our analysis shows that it is possible to integrate such data. One additional complicating factor is the fact that qPCR data are reported in various ways. For one parameter in our model we overcome this problem by comparing data normalized in the same way; for the other parameter, this is not needed because it is independent of normalization. Nevertheless, the use of multiple qPCR reference genes would be of great value, both for better comparability between studies and also to ensure accuracy of measurements.In addition to different ways of reporting expression, different ways of reporting flowering time are also used. The data we used reported either the total number of leaves or the number of rosette leaves. Days to flowering is not often reported but would be a useful addition, in particular since leaf number and days to flowering are not always congruent. 
The linear model appeared to be successful, but less so for FT than for other genes: the value of T0 obtained for FT did not correlate well with the experimental flowering time of an ft mutant, and the fits were poorer for FT compared to the other genes. This might be because for FT, the mRNA levels measured by qPCR are only a weak proxy for the real amount of active component. FT protein is transported from leaves to meristem before it may exert its effect on SOC1 and AP1. Molecular aspects of this transport are not known in much detail yet, but one could imagine that there would be some kind of threshold above which not all FT is transported. If this were the case, the predicted value of T0 in our analysis would be too low, as is indeed observed when the predicted values are compared with experimental flowering times for mutants. A more general scenario in which the response of flowering time to the expression level of a particular floral pathway integrator gene would not necessarily be expected to be linear is when multiple floral pathway integrator genes are simultaneously affected by upstream changes. Yet another complicating factor is the fact that various floral pathway integrator genes regulate each other. This could lead to correlations in expression levels of various floral pathway integrator genes, which in turn might influence our analysis: if a gene which is directly influenced by an upstream pathway regulates another floral pathway integrator gene, both might in principle display a clear correlation between flowering time response and expression level.
In the literature, the quantitative, continuous nature of flowering time and its gradual response to changing input is often neglected when analysing the effect of variation on flowering time. In many cases, the measured response of flowering time to perturbations is reported just as leading to early or late flowering. Only a few studies analyse quantitative relationships between gene expression levels and flowering time. This includes a study in which AGL24 is shown to be a dosage-dependent mediator of flowering signals, and work showing that FLC levels in Arabidopsis accessions are correlated to the flowering times of these accessions. Supplemental material is available at 10.7717/peerj.3197 (Data S1: each sheet in the file contains data for one floral pathway integrator gene; these datasets were obtained from literature)."}
+{"text": "Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or some sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as an input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values.We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as a part of the DREAM project, and the real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae.PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. 
This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN, compared to using the conventional Gaussian approximation (GA) filters while ignoring the missing data points.The online version of this article (doi:10.1186/s13637-016-0055-8) contains supplementary material, which is available to authorized users. Gene regulation happens to be one of the most important processes that take place in living cells. The generation of high-throughput time series measurements of transcript levels has become an increasingly powerful tool for investigating complex biological processes and a useful resource for GRN inference. The state-space approach, an extension of the DBNs, is a popular technique to model GRNs. However, in reality, gene expression time series data may not contain a sufficient quantity of data in the appropriate format for the inference of GRNs because of the missing data points. In this paper, we present a class of GA filters for inferring GRNs from data with missing measurement values, which can be modeled in the same unifying framework as in the case of state estimation from one-step or two-step randomly delayed measurements. In the literature, several point-based Gaussian approximation (PBGA) filters have been used for solving the GRN inference problem from DNA microarray gene expression data and genome-wide knockout fitness data. In the model, the noise term is assumed to be Gaussian distributed with zero mean and a given covariance matrix for k=1,\u2026,K; the model parameters comprise the vectors A, B, \u03bc and I0, and we denote the expression levels of all genes at time step k by the vector gk. 
Then, the augmented state vector can be described by stacking gk together with the model parameters. The goal of inference is to estimate the parameters (coefficients) of the model, which define the regulatory interactions. The augmented version of the state transition equations includes both the dynamics of the gene states and the (constant) dynamics of the parameters. Succinctly, the state transition of the dynamic model is written as xk+1=f(xk)+wk, where f(\u00b7) is the nonlinear function associated with the state transition equations.The measured gene expression levels can be modeled as zk=h(xk)+vk, where zk is the output data from the experiments at time k and h(xk)=gk. Some measurements zk are missing, and the estimation is made from the available measurements, yk. We assume that z1 is available. At time k=2, if the measurement output is missing, estimation is done with z1, and at any time instant k\u22653, a maximum of two consecutive time points may be missing. In summary, if zk is missing, estimation is done with zk\u22121, and if zk\u22121 is unavailable, estimation is done with zk\u22122. Thus, the measurement output at each time can be modeled with a missing-data probability q. In our experiments, q=0.1, so that the probability that zk is used in the estimation is 1\u2212q, the probability that zk\u22121 is used in the estimation is q(1\u2212q), and the probability that zk\u22122 is used in the estimation is q2. In addition, at time k, if zk is missing and zk\u22121 is available, we replace zk with zk\u22121; otherwise, we replace zk with zk\u22122, as there can be a maximum of two consecutive missing data points in the measurement.The zero-mean Gaussian process noise has a covariance matrix Qk=0.004I, and the zero-mean Gaussian measurement noise has a covariance matrix Rk=0.001I, k=1,\u2026,M. Time series data are generated for a total of M=50 time points. To quantify the results more rigorously, we set the noise threshold at 40% of the maximal variation for linear and nonlinear coefficients, such that if an inferred link is less than this threshold, it is considered noise and subsequently filtered off. In the end, we come up with sparse networks and the TPR and PPV metrics are calculated for the networks. The synthetic network is shown in the corresponding figure. First, we supplied the CM data to the UKF algorithm. 
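The substitution rule for missing measurements described above can be sketched as follows. This is a toy simulation with hypothetical scalar measurements; it also checks empirically that zk itself is used with probability about 1\u2212q.

```python
import random

def available_measurement(z, missing):
    """For each time k, return the measurement actually used:
    z[k] if observed, else z[k-1], else z[k-2] (at most two
    consecutive missing points, and z[0] is always observed)."""
    y = []
    for k in range(len(z)):
        if not missing[k]:
            y.append(z[k])
        elif k >= 1 and not missing[k - 1]:
            y.append(z[k - 1])
        else:
            y.append(z[k - 2])
    return y

random.seed(1)
q = 0.1
z = list(range(1000))                  # stand-in scalar measurements
# z[0] observed; never allow three missing points in a row
missing = [False]
for k in range(1, len(z)):
    blocked = k >= 2 and missing[k - 1] and missing[k - 2]
    missing.append(False if blocked else random.random() < q)

y = available_measurement(z, missing)
used_current = sum(yk == zk for yk, zk in zip(y, z)) / len(z)
# empirically, z_k itself is used with probability close to 1 - q
assert 0.85 < used_current < 0.95
```

In the actual filter, the chosen value yk (rather than the raw zk) enters the GA update step, which is what makes the missing points behave like randomly delayed measurements.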
The inferred model parameters are shown in the third column of the table. In order to assess the performance of GRN inference algorithms, several in silico gene networks have been produced as benchmarking data sets, specifically the DREAM in silico gene networks for Escherichia coli and Saccharomyces cerevisiae. The time series measurements were generated using parametrized stochastic differential equations (SDEs), with observations uniformly sampled under five different perturbations, for a total of 105 observations per gene. The inference is performed by using all the perturbations. Self-interaction/autoregulatory edges were not expected in the predictions and were subsequently removed. Since the number of possible edges in an N-gene network without autoregulatory interactions is N(N-1), the length of a complete list of predictions is 90 edges for a network of size 10. We first test the UKF algorithm on the five 10-gene network data sets (CM), and the result is shown in column 2 of the table. We also compared our algorithm against a relevant computational method designed for GRN network inference. The Saccharomyces cerevisiae GAL network in yeast is one of the most prominent model systems due to its importance for the studies of eukaryotic regulation and its relatively self-contained nature. The true interactions are shown in the figure. Gene regulation is assumed to follow the nonlinear state evolution model described above, with one-step or two-step missing measurements. Future work may extend the framework by incorporating general s-step missing values for s consecutive time points, which may address more complex missing data scenarios. In general, this work addresses the possibility that time series gene expression data be modeled with a state-space model whose parameters can be estimated using different GA filters. 
Unfortunately, there are situations which result in the loss of expression values for all genes at a particular time point or a few successive time points. In this case, the conventional filtering approach fails to correctly estimate the model parameters, which are used to elucidate the underlying GRN. We have proposed PBGA filters that treat the missing measurement values as a set of delayed measurements and demonstrated that the modified filter can estimate the model parameters, with missing measurements, as accurately as the conventional filter with no missing measurements."}
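The substitution rule for missing measurements described in the record above (fall back to z_{k-1}, then z_{k-2}, with at most two consecutive gaps and z_1 always available) can be sketched as follows. This is a minimal illustration under our own naming, not the paper's filter implementation; the values q = 0.1 and M = 50 are taken from the record.

```python
import numpy as np

def effective_measurement(z, k):
    """Return the measurement to use at time step k (0-indexed): z[k] if
    available, else z[k-1], else z[k-2]. The record assumes at most two
    consecutive missing points and that z[0] is always available."""
    for back in range(3):  # try z[k], z[k-1], z[k-2]
        idx = k - back
        if idx >= 0 and z[idx] is not None:
            return z[idx]
    raise ValueError("more than two consecutive measurements missing")

# Simulate a measurement sequence with missingness probability q = 0.1,
# enforcing the record's constraint of at most two consecutive gaps.
rng = np.random.default_rng(0)
q, M = 0.1, 50
true = [rng.normal(size=3) for _ in range(M)]
z, miss_run = [true[0]], 0
for k in range(1, M):
    if miss_run < 2 and rng.random() < q:
        z.append(None)
        miss_run += 1
    else:
        z.append(true[k])
        miss_run = 0
y = [effective_measurement(z, k) for k in range(M)]
```

A PBGA/UKF-style update would then consume y_k in place of the missing z_k at each step.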
+{"text": "Motivation: Repeated cross-sectional time series single cell data confound several sources of variation, with contributions from measurement noise, stochastic cell-to-cell variation and cell progression at different rates. Time series from single cell assays are particularly susceptible to confounding as the measurements are not averaged over populations of cells. When several genes are assayed in parallel these effects can be estimated and corrected for under certain smoothness assumptions on cell progression. Results: We present a principled probabilistic model with a Bayesian inference scheme to analyse such data. We demonstrate our method's utility on public microarray, nCounter and RNA-seq datasets from three organisms. Our method almost perfectly recovers withheld capture times in an Arabidopsis dataset, it accurately estimates cell cycle peak times in a human prostate cancer cell line and it correctly identifies two precocious cells in a study of paracrine signalling in mouse dendritic cells. Furthermore, our method compares favourably with Monocle, a state-of-the-art technique. We also show using held-out data that uncertainty in the temporal dimension is a common confounder and should be accounted for in analyses of repeated cross-sectional time series. Availability and Implementation: Our method is available on CRAN in the DeLorean package. Contact: john.reid@mrc-bsu.cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. Many biological systems involve transitions between cellular states characterized by gene expression signatures. These systems are typically studied by assaying gene expression over a time course to investigate which genes regulate the transitions. However, current medium and high-throughput assays used to measure gene expression destroy cells as part of the protocol. This results in repeated cross-sectional data wherein each sample is taken from a different cell. 
An ideal study of such a system would track individual cells through the transitions between states. Studies of this form are termed longitudinal. This study analyses the problem of variation in the temporal dimension: cells do not necessarily transition at a common rate between states. Even if several cells about to undergo a transition are synchronized by an external signal, when samples are taken at a later time point each cell may have reached a different point in the transition. This suggests a notion of pseudotime to model these systems. Pseudotime is a latent (unobserved) dimension which measures the cells' progress through the transition. Pseudotime is related to, but not necessarily the same as, laboratory capture time. Variation in the temporal dimension is a particular problem in repeated cross-sectional studies, as each sample must be assigned a pseudotime individually. In longitudinal studies, information can be shared across measurements from the same cell at different times. Inconsistency in the experimental protocol is another source of variation in the temporal dimension. It may not be physically possible to assay several cells at precisely the same time point. This leads naturally to the idea that the cells should be ordered by the pseudotime at which they were assayed. The exploration of cell-to-cell heterogeneity of expression levels has recently been made possible by single cell assays. Many authors have investigated various biological systems using medium-throughput technologies such as qPCR, often projecting the data into a lower-dimensional latent space. Often this projection results in a natural clustering of cells from different time points or of different cell types, which can then be related to the biology of the system. Such clusterings may suggest hypotheses about likely transitions between clusters and their relationship in time. Dimension reduction has a large literature and there are many available methods. 
Here, we give a few examples of some that have been used in single cell expression analyses. Principal components analysis (PCA) is prevalent in analyses of expression data. Multidimensional scaling (MDS) is another popular dimension reduction technique: MDS aims to place each sample in a lower dimensional space such that distances between samples are conserved as much as possible. Independent components analysis (ICA) projects high dimensional data into a latent space that maximizes the statistical independence of the projected axes. Using such techniques, one group was able to uncover subpopulations of cells at the 16-cell stage, one stage earlier than Guo et al. had identified using PCA. Gaussian process (GP) latent variable models (GPLVMs) are a dimension reduction technique related to PCA. They can be seen as a non-linear extension to probabilistic PCA. The latent space in all of the methods above is unstructured: there is no direct physical or biological interpretation of the space and the methods do not directly relate experimental covariates such as cell type or capture time to the space. The samples are placed in the space only to maximize some relevant statistic, although the analysis often reveals some additional structure. For example, one axis may coincide with the temporal dimension of the data, or cell types may be clearly separated. In these cases, the structure has been inferred in an unsupervised manner. However, there is no guarantee that the methods above will uncover any specific structure of interest, for example, a pseudotime ordering. Here, we propose to impose an a priori structure on the latent space. In the model presented in this article, the latent space is one-dimensional and the structure we impose on the space relates it to the temporal information of the cell capture times. That is, the latent space represents the pseudotime. One previous study modelled RNA-seq count data from human Th17 cell differentiation using a negative binomial distribution with a time-varying mean. 
For each of three biological replicates, they analysed subpopulations of cells at given time points, resulting in three time series of longitudinal data. The time-varying mean was fit using a GP over the scaled pseudotime space. They compared this pseudotime-based model favourably with a similar model that only used the capture time points. A number of methods have been proposed to estimate pseudotimes in gene expression time series. Monocle, for example, can be used to identify pseudotemporal ordering, switch-like changes in expression, novel regulatory factors and sequential waves of gene regulation. Other groups have a body of work investigating pseudotime estimation and developed the Embeddr R package. Our method works on data with a simple structure. First, it expects gene expression data on a logarithmic scale, such as Ct values from qPCR experiments or log-transformed counts from RNA-seq experiments. Second, it requires the capture time of each cell. Our notation for the data is: G is the number of genes assayed; C is the number of cells sampled; expression is measured for each gene g in each cell c; and T is the number of distinct capture times. The primary latent variables in our model are the pseudotimes. The model assigns a pseudotime to each cell such that the induced gene expression profiles over the latent pseudotime space have low noise levels and are smooth. Our model captures several aspects of the data: first, the data are noisy, which we model in a gene-specific fashion; second, we expect the expression profiles to be smooth; third, we expect the pseudotime of each cell not to stray too far from its capture time. The model can be split into several parts: one part represents the gene expression profiles; another part represents the pseudotimes associated with each cell; and another part links the expression data to the profiles. The expression profile of each gene g is a draw from a GP with a gene-specific covariance function. 
The expression profiles are functions of pseudotime and, as such, the covariance function relates two pseudotimes. For each gene g, ψ_g parameterizes the amount of temporal variation the gene profile has and ω_g models the noise levels for that gene. Log-normal priors for ψ_g and ω_g are parameterized as described in the Supplementary Materials, as are running times for the results in this article. The expression profiles are modelled using GPs. The pseudotime τ_c for cell c is given a prior centred on the time the cell was captured. We use a normal prior as it reflects our beliefs well. There are no conjugacy issues in our inference scheme and it would be straightforward to use any prior distribution. τ_c is used in the calculation of the covariance structure over pseudotimes. We use a Matérn 3/2 covariance function; our experience shows that this function captures our smoothness constraints well, although any reasonable covariance function could be used. l is a length-scale hyperparameter shared across the genes. For cyclic data such as from the cell cycle or circadian rhythms, we expect the expression profiles to be periodic. We can model this explicitly by a transformation of the pseudotimes, which has the effect of restricting the GP prior to periodic functions with period Ω. The model links the expression data to the expression profiles by evaluating the profiles at the pseudotimes. Briefly, our model can be interpreted as a one-dimensional GPLVM with a prior structure on the latent pseudotime space. The GPLVM model is a non-linear version of probabilistic PCA. In probabilistic PCA, the locations of the data in the latent space are given a Gaussian prior with zero mean and unit covariance. In our model, the analogous latent variables are the pseudotimes. 
Our model gives the pseudotimes a structured prior rather than a standard normal: that is, we relate the latent pseudotimes to the capture times of the cells using a Gaussian prior. All of the hyperparameters are described in the Supplementary Materials. As with many hierarchical models, the parameters can have several posterior modes. For instance, much of the variation in typical single cell assay data could be explained by smooth expression profiles with high noise levels. Alternatively, the same data could also be explained by rough expression profiles with low noise levels. Our model aims to balance these conflicting explanations and find parameters to fit the data with reasonable noise levels and expression profiles that are neither too smooth nor too rough. Selecting suitable hyperparameters for the parameter priors is important to avoid unrealistic regions of parameter space. We have found an empirical Bayes approach useful in this regard. In order to further mitigate the pseudotime mixing problem, we use naive heuristics to initialize our MCMC chains and ADVI starting points. Windram et al. measured expression levels every 2 h, resulting in 24 distinct capture time points. We grouped these 24 time points into four low-resolution groups, each consisting of six consecutive time points. We then asked our model to estimate the pseudotimes associated with each sample but only provided it with the low-resolution group labels. We fit 100 of the 150 genes mentioned in the text of Windram et al.'s publication. We used the remaining 50 genes as held-out data to validate the fit. We used the ADVI variational Bayes algorithm to estimate pseudotimes for each sample in the infected condition (see the Supplementary Materials). The Spearman correlation between estimated pseudotimes from the posterior and the true capture times was high (posterior mean 0.996; see the Supplementary Materials). 
Monocle was unable to recover the capture times for cells from the first low-resolution group. We calculated roughness statistics R_g for the 50 genes that we had not used to fit the model and averaged over genes. We did the same for 1000 pseudotime orderings sampled under the null hypothesis. The posterior mean of the R_g of the pseudotimes estimated by our model was significantly smaller than those from the null hypothesis (t-test). We calculated the roughness statistic for the pseudotime ordering estimated by Monocle; this was significantly higher than the roughnesses in our posterior. We estimated cell cycle peak times (see the Supplementary Materials) and compared the peaks to the CycleBase peaks. The RMSE between the CycleBase-defined peak times and our estimates was 14.5. We wished to understand how well our model estimated peak times compared to naive estimates. We made naive estimates from the raw expression data for each cell; the RMSE associated with these estimates was 33.3%. We used ADVI to fit our sparse model to 307 cells from the LPS condition, including the two precocious cells captured at 1 h. We calculated roughness statistics R_g (see Section 3) for 100 genes that we had not used to fit the model and averaged over genes. We did the same for 1000 pseudotime orderings sampled under the null hypothesis. The posterior mean of the R_g of the pseudotimes estimated by our model was significantly smaller than those from the null hypothesis (t-test). Our model provided plausible estimates of pseudotimes on all the datasets. We validated these estimates technically by evaluating the smoothness of the expression profiles of held-out genes in two of the datasets. These profiles are significantly smoother than expected under the null model. 
In addition, we validated the estimates biologically using obfuscated capture times (in the Arabidopsis dataset), data from separate experiments (cell cycle peak times) and independent analyses (identification of precocious cells). Overall these results demonstrate that uncertainty in the temporal dimension should not be ignored in repeated cross-sectional time series of single cell data and that our method captures and corrects for these effects. We have presented a principled probabilistic model that accounts for uncertainty in the capture times of repeated cross-sectional time series. We have fit our model to three separate datasets, each using a different biological assay, in three organisms (human, mouse and Arabidopsis). In the Arabidopsis example, the cells are spread out in pseudotime around the 20-h mark. Our method has a number of attractive attributes. It explicitly estimates pseudotimes, in contrast to methods such as Monocle and Wanderlust which estimate orderings of cells. The pseudotimes are on the same scale as the experimental capture times; the orderings estimated by Monocle and Wanderlust have no scale. In our model, consecutive cells that have diverse expression profiles are placed further apart in pseudotime than similar cells. Thus, our pseudotime estimates quantify the rate of change of the system. Our method uses GPs, which are a natural framework to model noisy expression profiles. GPs are well-established probabilistic models for time series. They provide more than just point estimates of the profiles; they also provide a measure of posterior uncertainty. This is useful in downstream analyses such as regulatory network inference. A GP model is characterized by its covariance function and associated parameters, and the covariance functions in our model have interpretable parameters: gene-specific temporal variation and noise. 
We have also demonstrated how a GP framework is suitable for modelling periodic expression profiles such as cell cycle expression profiles. The primary limitation of GPs for our model is that inference complexity scales cubically in the number of samples. For this reason, our method is not applicable to data from many hundreds or thousands of cells, unlike Monocle and Wanderlust. Inference in our model is performed using Markov chain Monte Carlo. This technique provides a full posterior distribution over the model parameters. However, mixing over the pseudotime parameters in our model can be difficult and we found that our model did not mix well when fit to the cell cycle dataset. In this case, we analysed expression profiles from the sample with highest log probability and found they estimated cell cycle peak times well. Single cell assays give us an exciting opportunity to explore heterogeneity in populations of cells. As the technology develops and the cost of undertaking such assays drops, they are destined to become commonplace. In addition, high-throughput longitudinal studies remain impractical and for the foreseeable future the majority of such time series will be repeated cross-sectional in nature. Until this changes, there will be challenges associated with estimating uncertainty in the capture times and variation in the rate of progress of individual cells through a system. Our method explicitly models these effects and is a practical tool for analysis of such repeated cross-sectional time series. Furthermore, in contrast to Wanderlust, our method only depends on open-source software and is available under a liberal open-source license."}
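The covariance structure described in the record above (a Matérn 3/2 kernel over pseudotime, with gene-specific temporal variation ψ_g and noise ω_g, and a length-scale l shared across genes) can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not the DeLorean package's code; all names and the toy values are ours.

```python
import numpy as np

def matern32(t1, t2, psi, length_scale):
    """Matern 3/2 covariance between two vectors of pseudotimes.
    psi scales the temporal variation; length_scale is shared across genes."""
    r = np.abs(t1[:, None] - t2[None, :]) / length_scale
    return psi * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def gene_covariance(tau, psi, omega, length_scale):
    """Covariance of one gene's observed expression at pseudotimes tau:
    a smooth Matern 3/2 profile plus gene-specific noise on the diagonal."""
    return matern32(tau, tau, psi, length_scale) + omega * np.eye(len(tau))

# Toy example: draw one smooth expression profile over pseudotime (hours).
rng = np.random.default_rng(1)
tau = np.linspace(0.0, 24.0, 20)
K = gene_covariance(tau, psi=1.0, omega=0.01, length_scale=5.0)
profile = rng.multivariate_normal(np.zeros(20), K)
```

A pseudotime prior centred on capture times would then be added on top, e.g. a normal log-prior on each τ_c around its cell's capture time.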
+{"text": "Drosophila melanogaster. Gap genes are involved in segment determination during early embryogenesis. They are activated by maternal morphogen gradients encoded by bicoid (bcd) and caudal (cad). These gradients decay at the same time-scale as the establishment of the antero-posterior gap gene pattern. We use a reverse-engineering approach, based on data-driven regulatory models called gene circuits, to isolate and characterise the explicitly time-dependent effects of changing morphogen concentrations on gap gene regulation. To achieve this, we simulate the system in the presence and absence of dynamic gradient decay. Comparison between these simulations reveals that maternal morphogen decay controls the timing and limits the rate of gap gene expression. In the anterior of the embryo, it affects peak expression and leads to the establishment of smooth spatial boundaries between gap domains. In the posterior of the embryo, it causes a progressive slow-down in the rate of gap domain shifts, which is necessary to correctly position domain boundaries and to stabilise the spatial gap gene expression pattern. We use a newly developed method for the analysis of transient dynamics in non-autonomous (time-variable) systems to understand the regulatory causes of these effects. By providing a rigorous mechanistic explanation for the role of maternal gradient decay in gap gene regulation, our study demonstrates that such analyses are feasible and reveal important aspects of dynamic gene regulation which would have been missed by a traditional steady-state approach. More generally, it highlights the importance of transient dynamics for understanding complex regulatory processes in development. Pattern formation during development is a highly dynamic process. In spite of this, few experimental and modelling approaches take into account the explicit time-dependence of the rules governing regulatory systems. 
We address this problem by studying dynamic morphogen interpretation by the gap gene network in Drosophila melanogaster. Gap genes are involved in determining the body segments of flies and other insects during early development. Gradients of maternal morphogens activate the expression of the gap genes. These gradients are highly dynamic themselves, as they decay while being read out. We show that this decay controls the peak concentration of gap gene products, produces smooth boundaries of gene expression, and slows down the observed positional shifts of gap domains in the posterior of the embryo, thereby stabilising the spatial pattern. Our analysis demonstrates that the dynamics of gene regulation not only affect the timing, but also the positioning of gene expression. This suggests that we must pay closer attention to transient dynamic aspects of development than is currently the case. Animal development is a highly dynamic process. Biochemical or environmental signals can cause the rules that shape it to change over time. We know little about the effects of such changes. For the sake of simplicity, we usually leave them out of our models and experimental assays. Here, we do exactly the opposite. We characterise precisely those aspects of pattern formation caused by changing signalling inputs to a gene regulatory network, the gap gene system of Drosophila melanogaster. Biological systems depend on time. Like everything else that persists for more than an instant, there is a temporal dimension to their existence. This much is obvious. What is less obvious, however, is the active role that time plays in altering the rules governing biological processes. For instance, fluctuating environmental conditions modify the selective pressures that drive adaptive evolutionary change. We fit gap gene circuit models for D. melanogaster with diffusion parameters Da fixed to zero and interaction signs constrained to those of previous works. 
We do observe, however, that maternal gradient dynamics significantly affect the levels of gap gene expression throughout the trunk region of the embryo. Instantaneous phase portraits are multi-stable at every time point: every instantaneous phase portrait contains multiple attractors. Distinct attractors govern the dynamics of gap gene expression at different points in space and time. We identify three alternative non-autonomous mechanisms which control the positioning of domain boundaries in the anterior trunk region of the embryo. The first involves hb and gt. Gap gene expression dynamics are governed by the same attractor across different nuclei; this attractor moves (i.e. passes in front of) the trajectory in phase space, which leads to a marked change in the trajectory's direction. Although all nuclei across the Gt boundary show qualitatively similar behaviour, the timing of attractor movement differs markedly from one nucleus to another: the further posterior a nucleus is located along the A–P axis, the earlier the drop of the attractor occurs. The Hb/Kr interface is positioned by attractor selection: nuclei anterior to this border fall into the basin of an attractor with high Hb, while nuclei posterior of the border end up in the basin of an attractor with high Kr concentration. Instead of a static switch, however, we find nuclei being captured by different basins at different time points across space. Still, the overall principle of boundary placement by attractor selection remains the same between static-Bcd and fully non-autonomous gap gene circuit models. 
The fact that similar regulatory principles are at work in both models validates our approach, and confirms that the placement of stationary domain boundaries in the anterior of the embryo does not depend in any fundamental way on the dynamics of maternal inputs. Taken together, our evidence suggests that the non-autonomous mechanisms positioning anterior gap domains are equivalent to the corresponding autonomous mechanisms from the static-Bcd model described by Manu et al. In the posterior, trajectories spiral towards a moving point attractor, a spiral sink, and this spiralling behaviour underlies the observed domain shifts. It is important to note that similar regulatory principles can be found in all three solutions of our fully non-autonomous model that reproduce gap-gene patterning correctly both in the presence and absence of diffusion. We have chosen the most structurally stable solution for detailed analysis. The other two circuits show more variability of regulatory features both across space and time. Still, both of these models consistently exhibit multi-stability in the anterior, and spiral sinks as well as transiently appearing and disappearing limit cycles in the region posterior to 52% A–P position. This indicates that the two main dynamical regimes described here (stationary boundaries through attractor selection in the anterior vs. shifting gap domain boundaries through spiralling trajectories in the posterior) are reproducible across model solutions. It is important to note that non-autonomy of the model is not strictly required for the spiral sink mechanism to pattern the posterior of the embryo: simulations with fixed maternal gradients demonstrate that domain shifts can occur in an autonomous version of our gap gene circuit. This mechanism in D. melanogaster has important functional and evolutionary implications, which are discussed elsewhere. 
Analysis of an accurate, non-autonomous model is required to isolate and study the explicitly time-dependent aspects of morphogen interpretation by the gap gene system. Here, we have shown that such an analysis is feasible and leads to relevant and specific new insights into gene regulation. Other modelling-based studies have used non-autonomous models before. S1 Table: Model equations are shown in the Models and Methods section. (PDF) S1 Fig: (A) Features of phase space in autonomous dynamical systems. (B) Categorisation of transient, non-autonomous dynamics. (PNG) S2 Fig: Commonly observed defects in fully autonomous D. melanogaster gap gene circuits fitted to data without diffusion. Circuits showing any of these gross patterning defects were excluded from further analysis, even if their RMS score was low. Arrows indicate patterning defects as named in the panel headings (A–C). Horizontal axes represent % A–P position (where 0% is the anterior pole). Vertical axes show relative protein expression levels (Rel. Prot. Expr.) in arbitrary units (au). T4/6 indicate time classes C14-T4 and T6, respectively. (PNG)"}
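The record above distinguishes point attractors selected by nuclei (anterior) from spiral sinks driving domain shifts (posterior). The classification itself can be illustrated from the Jacobian eigenvalues at a fixed point: complex eigenvalues with negative real parts indicate a spiral sink, real negative eigenvalues a node. This is a generic two-variable sketch with made-up coefficients, not the fitted gap gene circuit equations.

```python
import numpy as np

def classify_fixed_point(jacobian):
    """Classify a fixed point of a 2D ODE system from its Jacobian:
    'spiral sink' if eigenvalues are complex with negative real parts,
    'node sink' if real and negative, otherwise 'not a sink'."""
    eig = np.linalg.eigvals(jacobian)
    if np.all(eig.real < 0):
        return "spiral sink" if np.any(np.abs(eig.imag) > 1e-12) else "node sink"
    return "not a sink"

# Toy linearizations (illustrative numbers only):
J_spiral = np.array([[-0.5, -2.0],
                     [ 2.0, -0.5]])   # eigenvalues -0.5 +/- 2i: spiralling decay
J_node = np.array([[-1.0,  0.0],
                   [ 0.0, -2.0]])     # eigenvalues -1, -2: monotone decay
```

In a non-autonomous setting, this classification would be applied to each instantaneous phase portrait, i.e. to the Jacobian evaluated with the maternal inputs frozen at a given time point.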
+{"text": "Speciation often involves the coupling of multiple isolating barriers to produce reproductive isolation, but how coupling is generated among different premating barriers is unknown. We measure the degree of coupling between the daily mating time and seasonal mating time between strains of European corn borer (Ostrinia nubilalis) and evaluate the hypothesis that the coupling of different forms of allochrony is due to a shared genetic architecture, involving genes with pleiotropic effects on both timing phenotypes. We measure differences in gene expression at peak mating times and compare these genes to previously identified candidates that are associated with changes in seasonal mating time between the corn borer strains. We find that the E strain, which mates earlier in the season, also mates 2.7 h earlier in the night than the Z strain. Earlier daily mating is correlated with the differences in expression of the circadian clock genes cycle, slimb, and vrille. However, different circadian clock genes associate with daily and seasonal timing, suggesting that the coupling of timing traits is maintained by natural selection rather than pleiotropy. Juvenile hormone gene expression was associated with both types of timing, suggesting that circadian genes activate common downstream modules that may impose constraint on future evolution of these traits. Speciation typically occurs through the accumulation of multiple reproductive barriers. Variation in the timing of mating often contributes to prezygotic reproductive isolation between sympatric populations by reducing the encounter rates between potential mates. Circadian clock genes such as timeless (tim), vrille (vri) and PAR domain protein 1 (pdp1) participate in feedback loops regulating clk and cyc transcription. 
Mutations and natural variants of circadian clock genes alter daily timing in insects such as Drosophila, Bactrocera, Spodoptera, and Clunio. Although evidence suggests that differences in daily timing are often associated with changes in the regulation or expression of circadian clock genes, whether daily and seasonal biological rhythms both rely on the circadian clock pathway remains a subject of debate. Seasonal cues, such as increases in temperature and day length (photoperiod), strongly influence the time of year at which insect mating occurs by triggering dormancy break and reproductive development. A link between the clock and seasonal timing is supported in Riptortus pedestris, where cycle RNAi impedes diapause break and period RNAi affects diapause entry. Photoperiodism has also been linked to circadian clock genes in Drosophila and Wyeomyia smithii. Natural variation in daily and seasonal mating time occurs in the European corn borer moth (Ostrinia nubilalis) and causes incomplete reproductive isolation, providing an ideal system to explore coupling and the genetic basis of these traits. In ECB, seasonal timing is at least partially determined by the time that is needed for overwintering larvae to terminate winter diapause (dormancy) in the spring and summer months. Genes associated with diapause termination time include the circadian genes pdp1 and per and several insulin-signaling genes. If cis-acting genetic changes in gene regulation underlie differences in mating time, we would expect causal loci to differ in expression at the peak mating time for each strain. We therefore examine patterns of gene expression associated with changes in times of peak mating activity to identify genetic and regulatory factors that are associated with, and potentially contribute to, divergence in circadian mating time in ECB. 
Furthermore, we compare the overlap between genes differentially expressed during mating and candidate genes for seasonal timing. The stocks of bivoltine E (BE) and univoltine Z (UZ) strain ECB were donated by Charles Linn at the New York State Agricultural Experiment Station in Geneva, NY. The BE stock was originally derived from bivoltine E strain individuals collected from Geneva, NY and the UZ stock from univoltine Z strain individuals that were collected from Bouckville, NY. The stocks were maintained via mass rearing at Tufts University for several generations and were the same lines that were previously characterized for seasonal timing. We calculated the absolute strength of the reproductive isolation stemming from mating time following Dopman et al. For each hour of scotophase i, from 1 to w, the expected frequency of hybrid (2p_iq_i) and pure-strain (p_i^2 + q_i^2) offspring was calculated based on the number of mating moths that were Z strain (p_i) or E strain (q_i). The expected number of hybrid and pure-strain moths each hour was then calculated by multiplying the total number of moths (n_i) by their expected frequencies. Summing the expected numbers of intra- and inter-strain matings across hours (w) provided an estimate of the total number of hybrid and parental offspring produced during scotophase. Thus, the strength of reproductive isolation ranges from 0 (random mating) to 1 (complete reproductive isolation), and it is given by Equation (1). In order to identify changes in expression related to circadian control of female mating receptivity, independent of male presence or the physical act of mating, a second experiment was conducted to examine RNA expression levels for the two strains. 
Additional individuals were reared, as described above, but because mating time is predicted to be female-controlled, individuals were sexed as pupae and only female pupae were kept, in isolated 44.4 mL cups, until eclosion. One-day-old virgin females were randomly selected for sacrifice at one of three time points. The first time point was one hour before the end of photophase and provided a daylight baseline for expression change during scotophase (photophase time point). The two other time points corresponded to the median observed mating time for each strain, as determined during the mating trial experiment: 1.3 h into scotophase (median mating time, E strain) and 4 h into scotophase (median mating time, Z strain). At the designated sacrifice time points, the containers were placed in triple-layered black plastic bags and were kept at \u221220 \u00b0C for 10 min to sedate the moths. Under dark conditions, female moths were transferred to 1.5 mL microcentrifuge tubes containing 1.0 mL of RNAlater. Moths in RNAlater were stored at \u221220 \u00b0C for the duration of the experiment, before long-term storage at \u221280 \u00b0C. A total of 72 females of each strain were preserved, and 60 of these were ultimately used for sequencing. Adult moths were decapitated and heads were pooled according to time point and strain. Heads were used due to known circadian gene expression in antennal and neural tissue [56,57,58]. cDNA libraries were prepared from mRNA using the TruSeq Sample Prep Kit v2 Set A, starting from 1 \u00b5g of total RNA, and prepared libraries were quantified using the Qubit High Sensitivity DNA assay. Libraries were quantified a second time on an Agilent Bioanalyzer. Two replicate libraries for each strain and time point were run on each of the two lanes of an Illumina HiSeq 2500, located at the Tufts University Core Facility for Genomics, to generate 100 bp single-end reads from 23 libraries. One UZ 1.3 h library failed. 
Sequences are available in the GenBank Sequence Read Archive. Single-end Illumina sequencing reads were assessed for quality using the FastQC program (http://www.bioinformatics.babraham.ac.uk/projects/fastqc). Sequences were then trimmed using Trimmomatic version 0.32 to remove adapter sequences and bases with low sequence quality, and any reads shorter than 36 base pairs were discarded; additional filtering used the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit). The transcriptome was assembled de novo using Trinity with a k-mer length of 25 from all reads. The set of longest transcripts was indexed as a reference transcriptome, and the individual reads were mapped back to this transcriptome using Bowtie2 version 2.1.0. Differential expression analysis was done by fitting negative binomial generalized log-linear models in edgeR. P-values were corrected for multiple testing using a false-discovery rate (FDR) cutoff of 0.05. Additionally, we emphasized transcripts that were specifically upregulated at the peak mating time for each strain. We compared genes that were significantly differentially expressed between strains at any mating time point (q < 0.05) to genes that were differentially expressed between strains during diapause break (day 1 and day 7) from the data set in Wadsworth & Dopman. Transcripts were annotated by BLASTing sequences against known Bombyx mori and Danaus proteomes. Transcripts were also blasted against known B. mori gene pairs to estimate chromosomal locations. Enriched gene ontology (GO) terms were identified for differentially expressed transcripts using derived FlyBase D. melanogaster gene IDs. Circadian genes previously identified in Ostrinia were annotated using BLASTn searches against previously published mRNA sequences from Ostrinia furnacalis [71]. Correlations in expression among annotated circadian genes across time points were explored using data from both strains together. 
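The multiple-testing step can be illustrated with a stand-alone Benjamini-Hochberg sketch (the real analysis used edgeR's built-in FDR machinery; the p-values here are invented):

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values (q-values), in original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):        # step-up from the largest p-value
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

q = bh_adjust([0.01, 0.2, 0.03, 0.02])
print(q)                        # q-values, approximately [0.04, 0.2, 0.04, 0.04]
print([qi < 0.05 for qi in q])  # three of four pass an FDR cutoff of 0.05
```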
For each transcript, normalized read counts were averaged within strain and time point. Correlations in mean expression were calculated across time points and strains using custom scripts and the corrplot package in R. For the E strain, a total of 21 of the 76 females (27.6%) mated at some point during scotophase; for the Z strain, 25 of the 79 females (31.6%) were observed to mate during the night. The proportions of females mating from each strain were not significantly different (p = 0.602). The median mating times were 1.33 h into scotophase for the E strain (mean = 2.41 h) and 4 h for the Z strain (mean = 3.96 h). An independent two-group Mann-Whitney-Wilcoxon test indicated that the mating time distributions of the two strains were significantly different. The absolute strength of circadian mating time as a reproductive isolating barrier between the current E and Z strain populations was calculated to be 0.393, whereas the historical absolute strength of circadian mating time was 0.525. The number of differentially expressed (DE) transcripts varied greatly between the different contrasts tested. Transcripts of cycle (cyc) were upregulated in the E strain during photophase (log FC = 2.691, p = 0.001, q = 0.012; comp21826, BMAA 218\u2013294: log FC = 1.96, p = 0.003, q = 0.02) and at 4 h. Supernumerary limbs (slimb) was upregulated in the E strain during photophase. Vrille (vri) was upregulated in the Z strain relative to the E strain during photophase. Clock (clk) showed a pattern of higher expression in E during photophase, but this was not significant after correcting for multiple testing. Two period (per) transcripts were identified; although their expression was slightly higher in the E strain at the 1.3 h time point, this difference was not significant. The photoreceptor rhodopsin 5 was also upregulated in the E strain during photophase and at 1.3 h. 
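The rank-based comparison of mating-time distributions can be sketched as follows (a minimal Mann-Whitney U computation on invented mating hours; the study itself would have used a standard statistical package, and no p-value is computed here):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus b, midranks for ties."""
    values = [(v, 0) for v in a] + [(v, 1) for v in b]
    values.sort(key=lambda t: t[0])
    rank_sum_a = 0.0
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j][0] == values[i][0]:
            j += 1                   # [i, j) is a block of tied values
        midrank = (i + 1 + j) / 2    # average of ranks i+1 .. j
        rank_sum_a += midrank * sum(1 for k in range(i, j) if values[k][1] == 0)
        i = j
    n1 = len(a)
    return rank_sum_a - n1 * (n1 + 1) / 2

# Hypothetical hours into scotophase at which females of each strain mated:
e_times = [0.5, 1.0, 1.5, 2.0, 2.5]
z_times = [3.0, 3.5, 4.0, 4.5, 5.0]
print(mann_whitney_u(e_times, z_times))  # 0.0: complete separation of ranks
```

U = 0 (or U = n1*n2) indicates complete rank separation of the two samples, the extreme version of the between-strain shift reported above.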
Analysis of relative expression levels of 28 transcripts, annotated as circadian or photoreceptive genes, primarily identified transcripts that were differentially expressed between strains during photophase, at E-strain peak mating (hour 1.3), and at Z-strain peak mating (hour 4). Two of these transcripts showed strong differential expression (p = 1.59 \u00d7 10\u22125, q = 0.005). Two genes involved in melatonin and ecdysone release in the prothoracic gland also showed differences between strains: Arylalkylamine N-acetyltransferase (aaNAT) was upregulated in the E strain at both the photophase and hour 4 time points, and prothoracicotropic hormone was upregulated in the E strain during photophase. We identified transcripts of endocrine system genes downstream of the circadian clock that potentially influence mating time through activation of the Pheromone Biosynthesis Activating Neuropeptide (PBAN) pathway. Sex peptide receptor is related to the termination of pheromone calling and receptivity to mating, and five transcripts of this gene were detected. Thirteen olfactory receptors (ORs) previously identified in Ostrinia were detected in our female head/antennal transcriptome (ORs numbered following previous work). Finally, we identified ten transcripts that fit a predicted pattern of significant differential expression between the strains at both hour 1.3 and hour 4. Of these, six were successfully annotated, including four with immunological functions: two transcripts of cecropin A and two of attacin. The remaining annotated transcripts were identified as an IQ motif protein and a prickle-like protein. Among circadian transcripts, several correlations were significant (p < 0.05): Clk was positively correlated with one transcript of cyc and negatively correlated with vri and timeout. One transcript of per was negatively correlated with vri and positively correlated with the second transcript of per. A second transcript of cyc was positively correlated with cry1, shaggy, and jetlag. 
A third transcript of cyc was positively correlated with slimb. Correlation analysis among 22 circadian clock transcripts identified four transcripts that were significantly negatively correlated across time points and six that were significantly positively correlated with one another; three were differentially expressed at all three time points. Differentially expressed transcripts were annotated with D. melanogaster gene IDs. In the between-strain contrasts, enriched GO terms found among all assembled transcripts that were significantly differentially expressed included catalytic and hydrolytic activity during photophase, structural constituents at hour 1.3, and thiolester hydrolytic activity and the regulation of triglyceride metabolism at hour 4. When separated by strain, additional enriched GO terms were identified: in the E strain, light absorption was enriched at hour 1.3, and pigmentation-related terms and melatonin defense response regulation were enriched at hour 4. Fewer enriched GO terms were identified in within-strain contrasts. Within the E strain, enrichment of voltage-gated ion activity was found in transcripts significantly differentially expressed between hours 1.3 and 4. Within the Z strain, the only GO terms enriched in any contrast were microvillus and actin-based cell projection in the photophase vs. hour 4 contrast. We also performed GO enrichment on 157 annotated joint outliers from the diapause time course. This analysis found that only macromolecule depalmitoylation, pyrimidine nucleobase metabolic process, and palmitoyl-(protein) hydrolase activity were significantly enriched. Our results indicate that differences in daily mating time are coupled with differences in seasonal timing in ECB. E strain females that break diapause early in the season mated 2.6 h earlier during scotophase than Z strain females that break diapause later in the season. 
This coupling will enhance the total isolation among these sympatric ECB strains. Dopman et al. (2010) estimated that seasonal timing differences in New York can limit up to 66% of inter-strain mating. We found differences between the daily mating times measured among ECB populations in this study and those measured in the historical Liebherr & Roelofs study. Expression levels of cyc, and to a lesser extent clk, were higher in the E strain at the end of photophase. Vri represses clk, and its expression was lower in the E strain at the same time point. In our data, vri expression was significantly negatively correlated with clk, cyc, and per expression across multiple time points. The expression of slimb was positively correlated with cyc across time points, and slimb was also more highly expressed in the E strain during photophase. SLIMB degrades PER, which should result in decreased inhibition of CYC. This suggests strain differences in the expression of cyc, clk, vri, and slimb at the end of photophase. Further, it suggests that the circadian clock of the E strain may have a shifted phase relative to that of the Z strain, with peaks in the expression of circadian genes occurring earlier in a given 24-h period, a hypothesis that could be tested by comparing patterns of activity and gene expression between strains over the entire 24-h cycle. Thus, changes in expression of these core circadian clock genes during late photophase may be triggering differences in the timing of pheromone biosynthesis and release, leading to differences in mating time. Coupling of daily and seasonal mating time could occur due to selection against recombinant genotypes or due to pleiotropic alleles influencing both traits. We evaluated both possibilities, and our expression results highlight vri as an interesting candidate gene for shifting the daily timing of mating behavior. 
Selected early and late Drosophila eclosion chronotypes differ in the timing of peak expression of vri by about 2.5 h during early scotophase. Both vri and slimb are known to be regulated by ecdysone, the insect steroid hormone that plays a key role in reproduction and gamete release [80,81]. A similar pattern, with higher vri expression at the end of photophase associated with later mating, was also found using RT-qPCR between corn and rice strains of the fall armyworm (Spodoptera frugiperda) that differ in mating time. Hanniger et al. found that vri was on the same chromosome as the QTL for mating time in armyworms, suggesting that it may be a causal factor for changes in mating time. Thus, vri and alterations in the expression of cyc and clk may be a general mechanism of regulating daily allochrony in moths, as corn borers and armyworms are distantly related moth species. Per is a candidate gene for diapause timing in ECB because it shows amino acid changes between E and Z strains and trends toward higher expression in the E strain during diapause break (days 1 and 7). In addition, per alleles are linked to the QTL for diapause timing and oscillate in frequency across latitude with voltinism (generation number) [53]. Pdp1 is also differentially expressed during diapause break. However, neither per nor pdp1 was differentially expressed across the first 4 h of the night. Further characterization of the expression of circadian clock genes is needed to determine whether this result is robust across 24-h time periods and to rule out differential expression of per later in the night. Future work can also scan for naturally segregating genetic variation in regulatory or coding sequences between ECB strains in these important timing genes, as we would expect causal genes to contain mutations in cis-regulatory or protein-coding regions. 
Thus, while the circadian clock pathway might be involved in both seasonal and circadian timing, different genes within the pathway are associated with the different types of timing. The circadian clock pathway was hypothesized to be a shared genetic pathway between daily and seasonal allochrony, as the circadian clock in the brain acts as the pacemaker that synchronizes the body\u2019s endogenous clock [82]. Although the circadian loci associated with daily and seasonal timing may differ, they may interact with common downstream physiological pathways that lead to phenotypic shifts in the biological rhythm. We did find evidence for shared differential expression of genes in the juvenile hormone pathway. Juvenile hormone (JH) and ecdysteroids play key roles in triggering developmental transitions and reproduction in moths, as in Drosophila [90]. Melatonin is also thought to be important in linking circadian rhythms to behavior, possibly by influencing the release of JH or ecdysteroids. aaNAT is a gene that is regulated by clk and cyc and that stimulates melatonin production [94]. In our data, an aaNAT transcript was upregulated in the E strain (at photophase and hour 4). aaNAT is known to stimulate the release of prothoracicotropic hormone (PTTH), and we observed that PTTH was also upregulated in the E strain during photophase. Expression differences of aaNAT and PTTH may represent another instance of daily and seasonal timing involving similar downstream pathways, as these genes are involved in seasonal diapause termination in the moth Antheraea pernyi. Per, cyc, and pdp1 are all located on the sex (Z) chromosome, while vri and slimb are autosomal [96,97]. Per is over 40 centiMorgans (cM) away from pdp1 and cyc. 
We found little evidence for pleiotropy as a cause of coupling between daily and seasonal allochrony in ECB, and evidence for physical linkage among daily and seasonal differentially expressed candidate genes is also minimal. This lack of pleiotropy or genetic linkage in ECB is consistent with work in pitcher plant mosquitoes showing that seasonal and daily timing traits evolve independently under artificial selection. The pheromone production locus (pgFAR) and the major QTL for male response on the Z chromosome are far from per (11 cM) and pdp1 (21 cM) [96,98]. Given that coupling among different forms of allochronic isolation in ECB is not due to shared or physically linked genes, coupling is likely maintained by selection and occurs because recombinant individuals have low fitness."}
+{"text": "DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process in which expression levels vary over time according to a characterized gene function. Temporal gene expression curves can be treated as functional data, since they are considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning the functional data can find homogeneous subgroups among the massive numbers of genes within the inherent biological networks; therefore, it can be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectivity (connectedness after clustering) for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points used for partitioning are self-consistent. As real data applications, we find partitioned genes through the gene expressions in budding yeast data and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and it provides appropriately connected genes in the resulting subsets. 
We illustrate our method by applying it to each set of cell-cycle-regulated time-course yeast genes and to Escherichia coli data. Discovering which genes are functioning and how their expression changes over time is a necessary and challenging problem in understanding cell functioning. Statistical models can find genes with similar expression profiles whose functions might be related through statistics or biology. Our approach assumes that a specific curve form exists for each gene\u2019s trajectory and for each partition of these gene curves. The observations of gene expression are curves measured over time for each gene. We can call these observed gene trajectories functional data because an observed intensity is recorded at each time point along a line segment. Functional data analysis is thus a suitable framework to model these gene curves. Clustering algorithms, both supervised and unsupervised, are utilized to find homogeneous subgroups of gene data. To obtain more knowledge about biological pathways and functions, classifying genes into characterized functional groups is a first step; many methods of analysis, such as hierarchical clustering and K-means, have been used for this purpose. In this paper, we use the Legendre orthogonal polynomial system and principal points to obtain functional partitions. The analysis is accomplished by extracting representative coefficients via data dimension reduction and by finding principal points. Connectedness and silhouette values are computed as partition validity measures. An efficient way to deal with such gene data is to incorporate the functional data structure and to use a partitioning technique. As a smooth stochastic functional process, the observed gene expression profiles have a covariance function that can be expressed with smooth orthogonal eigen-functions based on functional principal components. 
The random part of the Karhunen-Loeve representation of the observed sample paths serves as a statistical approximation of the random process. Abraham et al. proposed clustering curves by applying K-means to their basis coefficients. The k principal points of a distribution are defined as the set of k points that minimizes the sum of expected squared distances from every point to the nearest point of the set; these k principal points are mathematically equivalent to the centers of gravity obtained by K-means clustering. Tarpey [42,43] also studied principal points for functional data. In this paper, we handle the relation between clustering functional data and partitioning via functional principal points, and we propose self-consistent partitioning techniques for gene grouping based on curvature profiles in the FDA framework. Some advantages of the use of FDA techniques for partitioning are: (i) Tarpey showed that principal points of elliptical distributions lie in a subspace spanned by eigenvectors of the covariance; (ii) for functional data, clustering algorithms are useful for finding representative curves under the different modes of variation, and representative curves can be found using principal points from a large collection of functional data curves [36,37]; (iii) a set of k points is self-consistent for a distribution if each of the points is the conditional mean of the distribution over its respective Voronoi region, the K-means algorithm converges to a set of k self-consistent points of the empirical distribution, and principal points are special cases of self-consistent points. Partitioning based on interactions of genes has been studied for the structure of genetic networks. In addition, statistical tests and association rule approaches represent another strategy, and recently a statistical biclustering technique was proposed and applied to microarray data (gene expression as well as methylation) [\u201327]. We also consider Escherichia coli data, which results in partitioning with interpretable genes. Numerous research results exist on clustering microarray data, mostly grouping common expression patterns. 
There are few such approaches for partitioning genes with time courses regarded as functional data. In this research, we propose a new method for self-consistent partitioning of genes with functional gene expression data. The proposed method consists of two main steps: the first step is to represent each gene profile by a functional polynomial representation, and the second is to find principal points and appropriate partitions. We applied our method to simulated data and then analyzed yeast and E. coli gene microarray data. Consider the gene expression data curve Yi(t) as a stochastic process at time t, and let fi(t) denote the expected expression at time t for the ith subject. The model with the functional data representation is Yi(t) = \u03b2i0\u03be0(t) + \u03b2i1\u03be1(t) + \u03b2i2\u03be2(t) + \u03b2i3\u03be3(t) + \u03b2i4\u03be4(t) + \u03b5i(t), (1) with orthonormal basis functions \u03bej(t). For example, the Legendre polynomials, an orthonormal polynomial system, are expressed using Rodrigues\u2019 formula as Pj(t) = (1/(2j j!)) dj/dtj (t2 \u2212 1)j. Here \u03b5i(t) is an error function with mean 0, independent of each other term in the model, and \u03b2i0, \u03b2i1, \u03b2i2, \u03b2i3, \u03b2i4 are regression coefficients based on the Legendre polynomials. In the microarray experiment, Yi(t) is the log gene expression of gene i at time t. The first few Legendre polynomials are P0(t) = 1, P1(t) = t, P2(t) = (3t2 \u2212 1)/2, P3(t) = (5t3 \u2212 3t)/2, and P4(t) = (35t4 \u2212 30t2 + 3)/8. The curves given by the orthogonal polynomials are characterized by five coefficients, four of which are used to classify subjects. First, the coefficient \u03b2i1 in (1) gives the overall linear trend in the outcome profile, and the derivative fi\u2032(t) gives the rate of change in the expected outcome at time t. Parameter \u03b2i2, the coefficient of the quadratic polynomial, provides a measure of concavity of the outcome curve. Parameter \u03b2i3, the coefficient of the cubic polynomial, is a measure of curvilinearity, and \u03b2i4, the coefficient of the quartic polynomial, gives a further measure of the concavity structure of the outcome curve. The estimated polynomial coefficients carry information about the underlying functional patterns and enable the automatic estimation of pattern functions. 
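A least-squares fit of model (1) to one gene profile can be sketched as follows (a self-contained illustration, not the authors' implementation; time points are assumed to have been mapped to [-1, 1], where the Legendre polynomials are orthogonal):

```python
def legendre_basis(t):
    """Values of the first five Legendre polynomials at t in [-1, 1]."""
    return [1.0,
            t,
            (3 * t**2 - 1) / 2,
            (5 * t**3 - 3 * t) / 2,
            (35 * t**4 - 30 * t**2 + 3) / 8]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_legendre(times, values):
    """Least-squares estimates of beta_0 .. beta_4 for one gene profile."""
    X = [legendre_basis(t) for t in times]
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(5)]
           for a in range(5)]
    Xty = [sum(X[i][a] * values[i] for i in range(len(X))) for a in range(5)]
    return solve(XtX, Xty)

# Noise-free check: a profile equal to 2*P0 + 3*P1 - 1*P2 should be recovered.
ts = [u / 10 - 1 for u in range(21)]          # 21 time points in [-1, 1]
ys = [2 + 3 * t - (3 * t**2 - 1) / 2 for t in ts]
beta = fit_legendre(ts, ys)
print(beta)  # approximately [2, 3, -1, 0, 0]
```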
Principal points and self-consistent points can be used for partitioning a homogeneous distribution. Principal points can be defined as subset means for theoretical distributions. For a set W = {y1, y2, \u22ef, yk} of k distinct non-random functions in a function space L2, define the domain of attraction Dj of yj as the set of all y \u2208 Rp that are closer to yj than to any other member of W. The sets Dj are often referred to as the Voronoi neighborhoods of yj. The domains of attraction induce a partition via the pre-images Bj, with \u222aBj = Rp, where the boundaries have probability zero. The optimality of a set of k points is expressed in terms of mean squared error (MSE): a set of k points \u03be1, \u03be2, \u22ef, \u03bek are principal points of a random vector X \u2208 Rp if E minj ||X \u2212 \u03bej||2 \u2264 E minj ||X \u2212 yj||2 for every other set of k points y1, y2, \u22ef, yk. The optimal one-point representation of a distribution is the mean, which corresponds to the k = 1 principal point; for k > 1, principal points are a generalization of the mean from one to several points optimally representing the distribution. A nonparametric estimate of the principal points is obtained via the K-means algorithm; thus the k points are mathematically equivalent to the centers of gravity from K-means clustering. The concept of principal points can be extended to functional data clustering, as Tarpey [42,43] proved. We derive functional principal points of orthonormal polynomial random functions based on this transformation. A set {\u03be1, \u03be2, \u22ef, \u03bek} is self-consistent for a random vector X if \u03bej = E(X | X \u2208 Dj) for j = 1, \u22ef, k; that is, a set of k points is self-consistent if each of the points is the conditional mean of X over its respective domain of attraction. Principal points are self-consistent, but the converse does not hold in general. Tarpey [46,47] proved that if X is p-variate elliptical with E(X) = 0 and Cov(X) = \u03a3, then the subspace spanned by a self-consistent set of points is spanned by an eigenvector set of \u03a3. 
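As a concrete numerical illustration of these definitions (our own example, not from the paper): for k = 2, the principal points of the standard normal distribution are +/- sqrt(2/pi), about +/- 0.798, each being the conditional mean of its half-line, and Lloyd's K-means algorithm applied to a large sample recovers them.

```python
import math
import random

def lloyd_1d(data, centers, iters=50):
    """Plain 1-D K-means (Lloyd's algorithm): assign each point to its
    nearest center, then replace each center by the mean of its region,
    i.e. iterate toward a self-consistent set of points."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            groups[j].append(x)
        centers = [sum(g) / len(g) for g in groups]
    return sorted(centers)

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(20000)]
c1, c2 = lloyd_1d(sample, [-1.0, 1.0])
print(c1, c2)  # close to -0.798 and +0.798, i.e. -/+ sqrt(2/pi)
```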
Principal points find the optimal partitions of theoretical distributions. It would be interesting to study principal points of theoretical distributions such as finite mixtures, for which cluster analysis is meant to work. Tarpey showed that K-means provides k-point approximations to continuous distributions. Because cluster analysis is related to finding homogeneous subgroups in a mixture of distributions, it is appropriate to identify optimal cluster means with principal points. Estimators of the principal points can be obtained accordingly, and self-consistent curves inspired by Hastie and Stuetzle can be given analogously. Clustering algorithms are often used to find homogeneous subgroups of entities depicted in a set of data. For functional data, clustering algorithms are also useful to find representative curves that correspond to different modes of variation; early work on identifying representative curves from a data set based on principal points can be found in [16,17]. Suppose {f1, f2, \u22ef, fn} is a random sample of polynomial functions of the form (1), where the coefficient vector \u03b2 = (\u03b20, \u03b21, \u03b22, \u03b23, \u03b24)\u2032 follows a 5-variate normal distribution. The L2 version of the K-means algorithm can be run on the functions fi, i = 1, \u22ef, n to estimate principal points. The center of K-means clustering for the estimated coefficient vectors is based on the orthonormal transformation that constitutes the functional principal point; therefore, we consider K-means clustering for the Legendre polynomial coefficient vectors and for the Fourier coefficient vectors after Fourier transformation. The K-means algorithm under the L2 metric on function space can be carried out by clustering the regression coefficients linearly transformed according to the orthogonal system. Estimated coefficient vectors can thus be used to obtain the principal points for partitioning, because the estimated coefficient vector can be treated as a Gaussian random vector. 
Eigenvalues and eigenvectors are then obtained from the covariance matrix of the estimated coefficients, and the relevant subspace can be spanned by eigen-functions of the covariance kernel. One difficult problem in clustering analysis is to identify the appropriate number of groups for the dataset; a nonparametric criterion can be used for choosing the number of K centers for the data. The sample Legendre coefficients and the sample Fourier coefficients approximately follow multivariate normal distributions; therefore, Gaussian mixture model-based clustering can be considered in addition, and the number of partitions can be chosen as a maximizer of the Bayesian Information Criterion (BIC). To determine the value of J, the number of polynomials, we can consider several J values together with BIC, assuming that each partition covariance has the same elliptical volume and shape. We surmise that a single optimal J value for all genes may not exist because the known optimal J values vary across gene functions. Our experiments consider the feasible numbers of partitions and J values for their optimality with the corresponding dataset. The determination of the number of subsets (clusters) is an intriguing problem in unsupervised classification, and various cluster validity indices are used to assess the resulting cluster quality. We consider the silhouette measure and connectivity. The silhouette width for the ith sample in the jth cluster is defined as s(i) = (b(i) \u2212 a(i))/max{a(i), b(i)}, where a(i) is the average distance between the ith sample and all other samples included in the jth cluster, and b(i) is the minimum over k \u2260 j of the average distance between the ith sample and all the samples in the kth cluster. A point is regarded as well clustered if s(i) is large. The silhouette width is an internal cluster validity index used when true class labels are unknown. 
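Both validity measures can be computed directly from pairwise distances. The sketch below is a toy 1-D illustration with Euclidean distance (standard implementations exist, for example in the clValid R package; this is not the code used in the paper):

```python
def dist(a, b):
    return abs(a - b)  # 1-D Euclidean distance for this toy example

def silhouette_width(data, labels):
    """Mean of s(i) = (b(i) - a(i)) / max(a(i), b(i)) over all points."""
    clusters = sorted(set(labels))
    widths = []
    for i, x in enumerate(data):
        same = [dist(x, y) for j, y in enumerate(data)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same)
        b = min(sum(dist(x, y) for j, y in enumerate(data) if labels[j] == c)
                / labels.count(c)
                for c in clusters if c != labels[i])
        widths.append((b - a) / max(a, b))
    return sum(widths) / len(widths)

def connectivity(data, labels, L=2):
    """Sum of 1/j over the L nearest neighbors nn_i(j) of each observation i
    that fall in a different cluster than i (0 is best; larger is worse)."""
    total = 0.0
    for i, x in enumerate(data):
        neighbors = sorted((j for j in range(len(data)) if j != i),
                           key=lambda j: dist(x, data[j]))
        for rank, j in enumerate(neighbors[:L], start=1):
            if labels[j] != labels[i]:
                total += 1.0 / rank
    return total

# Two well-separated 1-D clusters: high silhouette, zero connectivity penalty.
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
labels = [0, 0, 0, 1, 1, 1]
print(silhouette_width(data, labels), connectivity(data, labels))
```

A partitioning that splits the two tight groups (for instance, alternating labels) raises connectivity above zero and drives the silhouette down, matching the directions stated in the text.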
With a partitioning solution C, the silhouette width judges the quality and determines the proper number of partitions within a dataset. The overall average silhouette value can be an effective validity index for any partition, and choosing the number of clusters/partitions that maximizes the average s(i) over the data set has been proposed. Connectivity was suggested as a cluster validity measure as follows. Let C = {C1, \u22ef, CN} be the clusters, let nni(j) denote the jth nearest neighbor of observation i, and define xi,nni(j) to be 0 if i and nni(j) are in the same cluster and 1/j otherwise; the connectivity is the sum of the xi,nni(j) over all observations i and over the first L nearest neighbors j. The connectivity assesses how well a given partitioning agrees with the concept of connectedness: it evaluates the degree to which a partitioning respects local densities and groups data items together with their nearest neighbors in the data space, based on counts of violated nearest-neighbor relationships. The connectivity takes values between zero and \u221e and should be minimized for the best results. Dunn\u2019s index is another internal validity index. Stability measures can be computed after partitioning. Average Distance (AD) computes the average distance between genes placed in the same cluster by clustering based on the full data and clustering based on the data with a single column removed; AD takes values between zero and \u221e, and smaller values are preferred. Figure of Merit (FOM) measures the average intra-cluster variance of the genes in the deleted column, where clustering is based on the remaining (undeleted) samples; FOM estimates the mean error using predictions based on cluster averages, the final FOM score is averaged over all the removed columns, it takes values between zero and \u221e, and smaller values mean better performance. We consider flexible functional patterns of data since real gene expression functions are varied and noisy. Nonlinear curves are generated according to the regression model yiu = fi(tu) + \u03b5iu, with i = 1, 2, \u22ef, 6, u = 1, 2, \u22ef, m, and tu = u/m. 
The underlying regression functions f1, \u22ef, f6 are six distinct nonlinear curves. The simulated data consist of 1000 curves with these 6 different underlying functions: 500 curves of f1 and 100 curves of each of f2, \u22ef, f6, reflecting certain aspects of gene expression data. Noise is imitated by adding random values from a normal distribution, with two noise levels: low noise \u03c3 = 0.5 and high noise \u03c3 = 1.5. The number of time points is set to m = 20. The advantages of the proposed method are evaluated by these simulations, where the number of subsets is known to be K = 6 (Table). We used J = 3, 4, 5 coefficients in Gaussian-based principal points partitioning; the mean silhouette values and connectivity vary little according to the J values. The number of subsets can be determined with modified GAP statistics. Evaluation of a clustering method can be done on theoretical grounds and by internal or external validation, or both [31]. The yeast cell-cycle data set includes time-course expression profiles of cell-cycle-regulated genes. First, the Legendre coefficients and Fourier coefficients are estimated; then each set of estimated coefficients is applied to K-means clustering and Gaussian-based principal point estimation with the estimated covariance matrix. We considered numbers of subsets from k = 4, since previous research typically provides at least 4 subsets, even with different criteria. BIC is maximized at k = 5 for model-based clustering with the Legendre polynomial coefficients under the VEV condition; therefore, we set the number of subsets to k = 5 (Figure). The number of Legendre polynomials J is considered from J = 2 to J = 7, and the average silhouette value is maximized at J = 5. 
The average silhouette values for J = 4 and J = 5 are 0.511 and 0.520, which are very close. However, the mean within sum of squares (MSW) with J = 4 is 7376 while MSW with J = 5 is 144,650; MSW with J = 4 is far smaller, so the genes within each subset are closer to their centers for J = 4. Therefore, we use J = 4 Legendre polynomials plus one constant term, with the resulting coefficients used for partitioning. For comparability on the yeast data, the same number (J = 4) of Fourier coefficients is used. K-means clustering is then performed with the original time-course data (y), with the 4 Legendre polynomial coefficients (LPC) plus one constant term, and with the 4 Fourier coefficients (FC) plus one constant mean term, respectively. K-means clustering with Legendre polynomials results in five subsets with 120, 128, 914, 1241, and 2086 genes, respectively. The 2086 genes in Subset 5 appear to be non-differential. Nonparametric estimators of principal points are given by the subset center means (see the accompanying figures). Over-representation analysis (ORA) was performed with the genes in each subset, and the resulting p-values were compared, in terms of biological significance, to the over-represented GO terms obtained with the Partitioning Around Medoids (PAM) clustering method, in order to explain the biological relevance of the partitioned data. ORA searches for Gene Ontology (GO) terms of a given set of genes by evaluating the statistical significance of over-represented functional and molecular mechanisms. We considered terms with p-value < 0.1 and focused on Subset 1, where the highly significant pathway terms are involved in DNA replication and repair processes during the cell cycle. Sugar metabolism terms are readily detected because sugars are basic building blocks of DNA.
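To make the coefficient-based workflow concrete, here is a minimal Python sketch (assumed, not from the paper; the toy expression profiles and the plain k-means loop are illustrative stand-ins) that fits degree-4 Legendre polynomials to simulated curves and partitions genes by k-means on the fitted coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 18)  # time points rescaled to the Legendre domain [-1, 1]

# Toy profiles standing in for gene expression curves: cyclic vs. linear trend.
curves = np.array([np.sin(np.pi * t) + rng.normal(0, 0.2, t.size) for _ in range(20)] +
                  [2 * t + rng.normal(0, 0.2, t.size) for _ in range(20)])

# One constant term plus 4 Legendre polynomial coefficients per gene (deg=4).
coef = np.array([legendre.legfit(t, y, deg=4) for y in curves])

def kmeans(Z, k, iters=50):
    centers = Z[:: len(Z) // k][:k].copy()   # simple deterministic initialisation
    for _ in range(iters):
        lab = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([Z[lab == c].mean(axis=0) for c in range(k)])
    return lab

lab = kmeans(coef, k=2)  # partition genes in 5-dimensional coefficient space
```

Clustering the 5 coefficients rather than the 18 raw time points is exactly the dimension-reducing transformation the text describes: curve shape is summarized by a few orthogonal-polynomial coefficients before partitioning.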
From these annotation results, the genes in Subset 1 are closely interrelated in the role of DNA replication. However, 53 of the 96 genes in this subset were not included in the annotation; these 53 genes could therefore be good candidates for further study, with the hypothesis that they are dynamically involved in the DNA replication and repair process. We next applied our method to microarray data from E. coli tracking transcriptional responses to recovery from the stationary phase. This experimental dataset consists of log-ratio intensity values for E. coli genes measured in cDNA microarray hybridizations. The full data set includes more than 3607 genes at 11 time points; 3452 genes remain after removing genes with missing values. The time-course E. coli microarray data are regarded as functional data observed at 11 time points for each gene. This dataset is part of a study that tracks transcriptional responses to over 30 chemical and physiological perturbations. Functional and regulatory classifications for E. coli genes are considered to evaluate transcriptional activity within and across groups of related genes. We considered annotation terms with p-values less than 0.01. As expected, genes in Subset 4 are mainly involved in cell growth; the genes in Subset 1 are also related to cell growth, similar to the genes in Subset 4, but carry distinct cellular processes such as molecular binding. In contrast, the keywords in Subset 2 and Subset 3 are mainly related to enzymatic processes downstream of cell growth. For example, acetylation affects protein stability; in addition, purine/pyrimidine biosynthesis, ligase, and transferase are all important enzymatic processes for cell stabilization. Oxidoreductase and NADP are also responsible for electron transfer.
The dynamic nature of biological systems makes temporal gene expression data important for exploring gene expression regulation, since such data provide valuable functional information about underlying temporal patterns. Partitioning these genes is therefore an interesting problem for finding gene functions within each partition. In this paper, we presented a functional partitioning procedure using principal points for temporal gene expression data after Legendre polynomial transformation. The optimal partitioning results produce a set of gene curve profiles that identify distinct types of gene expression. Temporal gene expression data can be viewed as functional data, since they are discretized samples of smooth random gene expression trajectories over time. Partitioning differentiates cell-cycle-regulated genes from non-cell-cycle-regulated genes for yeast, and differentiates distinct cellular processes for E. coli. The proposed method identified the cellular-process properties of each partition, which shows that transformation via orthogonal polynomials can support self-consistent partitioning. Our contributions include proposing principal points for microarray partitioning and the idea of using functional coefficients as a transformation that conveys information about the functional data. Future development of our method will consider other transformations of functional data and functional time dependency, with expected improvements in partitioning evaluation. The current study took advantage of the available information about the physiology of E. coli and, despite the limited biological annotations, the proposed technique provided decisive and biologically meaningful subsets of genes in time-course experiments. The E. coli dataset in this work was also generated using a custom-made two-channel microarray technique with two different fluorescence dyes.
The yeast cell-cycle data used here come from an early version of a two-channel microarray that was hybridized with cDNA from the two samples to be compared. RNA-Seq, in contrast, uses a next-generation sequencing (NGS) technique to measure the quantity of RNA in a sample of interest. Expression intensity is quantified by counting the number of reads mapped to each gene; therefore, care should be taken because changes in the total RNA amount between conditions can misrepresent the changes of individual transcripts. In conclusion, our method can be applied if the RNA-Seq data are appropriately processed. Further study is expected to apply the proposed method to the analysis of more complex model organisms such as rats."}
+{"text": "Multiple biological processes are driven by oscillatory gene expression at different time scales. Pulsatile dynamics are thought to be widespread, and single-cell live imaging of gene expression has led to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in expression dynamics, which blurs the boundaries between aperiodic fluctuations and noisy oscillators. This presents a new challenge to the experimentalist because neither intuition nor pre-existing methods work well for identifying oscillatory activity in noisy biological time series. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here, we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of non-parametric regression with Gaussian processes. Our method can distinguish oscillatory gene expression from random fluctuations of non-oscillatory expression in single-cell time series, despite peak-to-peak variability in the period and amplitude of single-cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in successfully classifying cells as oscillatory or non-oscillatory in data simulated from a simple genetic oscillator model and in experimental data. Analysis of bioluminescent live-cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a Hes1 promoter (10/19), which has previously been reported to oscillate, than by the constitutive MoMuLV 5\u2019 LTR (MMLV) promoter (0/25). The method can be applied to data from any gene network both to quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. It is publicly available as a MATLAB package.
Technological advances now allow us to observe gene expression in real time at a single-cell level. In a wide variety of biological contexts this new data has revealed that gene expression is highly dynamic and possibly oscillatory. It is thought that periodic gene expression may be useful for keeping track of time and space, as well as transmitting information about signalling cues. Classifying a time series as periodic from single-cell data is difficult because it is necessary to distinguish whether peaks and troughs are generated by an underlying oscillator or whether they are aperiodic fluctuations. To this end, we present a novel tool to classify live-cell data as oscillatory or non-oscillatory that accounts for inherent biological noise. We first demonstrate that the method outperforms a competing scheme in classifying computationally simulated single-cell data, and we subsequently analyse live-cell imaging time series. Our method is able to successfully detect oscillations in a known genetic oscillator, but it classifies data from a constitutively expressed gene as aperiodic. The method forms a basis for discovering new gene expression oscillators and quantifying how oscillatory activity alters in response to changes in cell fate and environmental or genetic perturbations. This is a PLOS Computational Biology Methods paper. Oscillatory dynamics are widespread in biology over a range of time scales, from the circannual migration and reproduction patterns of animals to much faster intracellular oscillations. In addition to measuring time and space, oscillations have attracted interest due to their unique properties of encoding information.
As new potential gene expression oscillators are discovered, it is essential to have tools that can objectively classify a biological time series as rhythmic or arrhythmic. This is particularly challenging in data from individual cells because mRNA and protein production and degradation are controlled by random collisions between finite numbers of interacting molecules within the cell. For example, Hes1 stops oscillating when miR-9 is overexpressed. Many methods for analysing biological time series are designed for estimating the period of known oscillators, such as those encountered in circadian time series. In addition to the Fourier transform, time series analysis using wavelets is also commonly used to estimate the amplitude and period of oscillations. Algorithms for assessing statistical confidence that a time series is periodic include the Lomb-Scargle periodogram (LSP). A limitation of the LSP in applications to single-cell time series is that the assumed model of oscillatory and non-oscillatory dynamics may be unrealistic: the LSP and similar methods are typically benchmarked by generating a sine wave or other perfectly periodic waveform and adding white noise to simulate measurement error. Another approach, complementary to data-driven approaches, is to employ \u201cbottom-up\u201d dynamical modelling, which takes a hypothesis of interactions within a gene regulatory network and predicts how the products of gene expression will evolve in time. Stochastic dynamical modelling accounts for the natural randomness inherent to single cells due to the low copy number of interacting molecular components. Here we describe a new method that can be used as a statistical test to determine whether a given time series is periodic or not.
To develop this new method we used a mechanistic model of intrinsically stochastic gene expression to define a forward model, in terms of Gaussian processes, of what we expect oscillatory and non-oscillatory dynamics to look like. Gaussian processes then allow us to perform data analysis, and we develop a statistical method to determine the confidence that a given single-cell time series is periodic based on our expectation of oscillatory and non-oscillatory dynamics. The new method has two critical advantages over previous methods for the analysis of single-cell data. Firstly, it can deal with a \u201cdrifting phase\u201d that is often present in single-cell gene regulation data, such that the exact peak-to-peak time of oscillations varies and the time series is therefore poorly described as a perfectly periodic signal. This peak-to-peak variation is naturally embedded in the analysis pipeline through use of a quasi-periodic covariance function. Secondly, our non-oscillatory model accepts that a time series can fluctuate randomly with consecutive time points correlated, owing to the finite degradation timescales of mRNA and protein; this is likely to be a more realistic model of non-oscillatory gene expression than white noise. Our analysis pipeline calculates the number of oscillating cells within a population, as well as parameters quantifying dynamic behaviour such as the period of oscillations; the coherence of oscillations can also be quantified through the quality Q-factor. We demonstrate that our approach is more effective than the LSP at classifying cells as oscillatory or non-oscillatory in synthetic data generated from a stochastic model of gene expression.
We then apply our method to experimental data and show that significantly more cells are classified as oscillating in time series driven by the Hes1 genetic oscillator than by a Moloney murine leukaemia virus MoMuLV 5\u2019 LTR (MMLV) control, which is representative of fluctuating, non-oscillatory gene expression. The principles of the new method, described in detail below, are as follows. Firstly, we define a general model of gene regulation in terms of the underlying reactions of synthesis and degradation of molecular species in a single cell, whereby the copy number of the observed species changes with time. Then we apply the linear noise approximation (LNA) to approximate the dynamics as a Gaussian process. In a Gaussian process model, measurements at different times follow a multivariate normal (Gaussian) distribution, which is defined only by a mean and a covariance function. The covariance function determines how correlated a pair of measurements is as a function of their separation in time. The LNA provides a theoretical model for two cases, depending on the covariance function: 1) intrinsically noisy oscillations, and 2) random aperiodic fluctuations, which form the null model of non-oscillatory behaviour. These define a mechanistic, albeit abstract, model-based representation of single-cell gene expression dynamics. Stochastic modelling and the LNA constitute a forward modelling approach, in that we can generate time series data from a given model. Our next step is to perform this process in reverse and infer the most likely model from a given time series, which can be derived from synthetic or experimental data. As the LNA describes a Gaussian process, the tools of non-parametric regression with Gaussian processes can be used to compute the probability of the data under a process with an oscillatory or non-oscillatory covariance function. The method is available at https://github.com/ManchesterBioinference/GPosc.
We firstly define a general model of gene expression and then show how its dynamics can be approximated as a Gaussian process. The biochemical reactions controlling gene expression are the result of probabilistic encounters between discrete numbers of molecules. The exact order and timing of reactions are random, and this is mathematically described with the chemical master equation (CME). In the general model, R chemical reactions change the molecule numbers of N chemical species, where j is the index of the reaction, Xi is chemical species i, sij and rij are the stoichiometric coefficients, and f(X)j is the rate of reaction j, which depends on both the network of interactions and the kinetics of the reaction. The stoichiometric matrix S, which describes how the molecule numbers of each species are changed by reactions, is given by Sij = rij \u2212 sij. The probabilistic evolution of the system is described by the CME, and approximate solutions can be found using van Kampen\u2019s system-size expansion, with the ansatz ni/\u03a9 = xi(t) + \u03a9^(\u22121/2)\u03f5i(t), where x(t) is the deterministic solution, \u03a9 is the system size and \u03f5(t) describes the stochastic fluctuations around the deterministic solution. While ni represents the discrete copy number of reactants, \u03f5i can be modelled as a continuous random variable for large ni and \u03a9. The driving noise terms are Gaussian white noise with \u2329\u03b6i(t)\u03b6j(t\u2032)\u232a = \u03b4ij\u03b4(t \u2212 t\u2032). The fluctuations \u03f5(t) are distributed as a multivariate normal distribution, which is fully described by its mean and covariance. As the deterministic equations are at a steady state, the mean of the process is zero. The covariance function K(\u03c4) can be solved analytically, as the Langevin equations derived through the LNA are linear (a multivariate Ornstein-Uhlenbeck (OU) process), where \u03c4 = |t \u2212 t\u2032| is the time difference between two points. The stationary covariance \u03c3 satisfies a Lyapunov equation determined by the Jacobian of the deterministic system and the diffusion matrix of the noise.
The CME formulates the stochastic kinetics in terms of the underlying reactions, and captures the probability of having a certain number of molecules as a function of time. Through the LNA, the stationary covariance function takes the form K(\u03c4) = \u03c3 e^(J\u03c4) for \u03c4 \u2265 0, where J is the Jacobian. Diagonalising as UJU\u22121 = diag(\u03bb1, \u22ef, \u03bbN) transforms the covariance matrix so that each component decays according to an eigenvalue \u03bbi. The eigenvalues \u03bbi can be either real or complex, and we therefore define two covariance functions, labelled \u201cOU\u201d and \u201cOUosc\u201d: K_OU(\u03c4) = \u03c3_OU\u00b2 exp(\u2212\u03b1\u03c4) and K_OUosc(\u03c4) = \u03c3_OUosc\u00b2 exp(\u2212\u03b1\u03c4)cos(\u03b2\u03c4). The OU covariance function has two parameters: \u03c3_OU, quantifying the variance of the generated signal, and \u03b1, which describes how rapidly the time series fluctuates. Illustrative examples are provided below. The OUosc covariance function defines a quasi-periodic oscillatory process which captures the peak-to-peak variability inherent to stochastic biochemical oscillators. For simplicity we have only taken the real component of the eigenvector covariance function, and hence there is only a term proportional to exp(\u2212\u03b1\u03c4)cos(\u03b2\u03c4), although in principle an even more general model could include an additional contribution proportional to exp(\u2212\u03b1\u03c4)sin(\u03b2\u03c4). Note that the OUosc covariance function has been similarly derived for a 2-dimensional underdamped oscillator modelled with linear Langevin equations. The OUosc model has \u03c3_OUosc and \u03b1 parameters quantifying the variance and time scale of fluctuations, respectively, but it also contains a cos(\u03b2\u03c4) term. The covariance function displays damped oscillations at frequency \u03b2, and the system therefore undergoes stochastic oscillations at this period. Crucially, the OUosc covariance function is damped, and therefore peaks in the time series will only be locally correlated in time.
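The two covariance functions are easy to visualise by sampling trajectories from the corresponding Gaussian processes. The sketch below (a numpy illustration with assumed parameter values, not the paper's MATLAB implementation) builds the OU and OUosc kernels on a 25-hour grid sampled every 30 minutes and draws one trajectory from each:

```python
import numpy as np

def K_OU(tau, sigma=1.0, alpha=0.5):
    # Aperiodic Ornstein-Uhlenbeck covariance: correlations decay on ~1/alpha.
    return sigma**2 * np.exp(-alpha * np.abs(tau))

def K_OUosc(tau, sigma=1.0, alpha=0.1, beta=2 * np.pi / 3):
    # Quasi-periodic covariance: damped cosine with period 2*pi/beta (3 h here).
    return sigma**2 * np.exp(-alpha * np.abs(tau)) * np.cos(beta * tau)

t = np.arange(0, 25, 0.5)            # 25 h sampled every 30 min (50 points)
tau = t[:, None] - t[None, :]        # matrix of pairwise time differences
jitter = 1e-8 * np.eye(t.size)       # numerical stabiliser for sampling

rng = np.random.default_rng(2)
y_ou = rng.multivariate_normal(np.zeros(t.size), K_OU(tau) + jitter)
y_osc = rng.multivariate_normal(np.zeros(t.size), K_OUosc(tau) + jitter)
```

The OUosc draw shows repeating peaks whose spacing drifts from cycle to cycle, because the cosine envelope is damped by exp(\u2212\u03b1\u03c4); the OU draw fluctuates with no preferred period.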
For a given position of a peak in the time series, the timing of the next peak will be relatively well known, but subsequent peaks become increasingly difficult to predict, depending on the length scale of the dampening \u03b1. This dampening of correlation over time produces the phase drift and peak-to-peak variability seen in single-cell oscillators. The OU covariance function, in contrast, represents a process with aperiodic stochastic fluctuations and no peak in the power spectrum. Both models share the parameter \u03c3, which changes the overall scale but not the dynamical properties; it is kept at 1 for all examples. For the non-oscillatory OU covariance function, the \u03b1 parameter controls how long fluctuations are correlated in time: when \u03b1 is high, the covariance function is heavily damped and successive time points have low correlation. The log-likelihood ratio of the two models is used for model selection between the non-oscillatory OU and the oscillatory OUosc model and provides a level of confidence that a time series is periodic. Evaluating a Gaussian process at a set of time points t generates a random vector f(t); due to the randomness of the process, this represents just one sample from an infinite number of trajectories that could be created. Gaussian process regression is so powerful because it is analytically tractable to state the probability of the data being generated by a given model, even though there are infinitely many possible trajectories that the stochastic model could take. By integrating over all possible trajectories, the marginal likelihood of the data set y for a given model is log p(y|\u03b8) = \u2212(1/2) y\u1d40K\u207b\u00b9y \u2212 (1/2) log|K| \u2212 (n/2) log(2\u03c0), where \u03b8 are the hyperparameters of the models, which in the current context are \u03b1, \u03b2 and \u03c3. The covariance matrix K has elements from the K_OU or K_OUosc covariance function evaluated at the time points where the data are collected. The first term describes how well the model fits the data, whereas the second term penalises model complexity of the covariance function.
If the covariance function is rapidly damped (high \u03b1), it describes a process with extremely short timescales; such a process fluctuates so rapidly that it can fit the data well regardless of whether it is a true representation of the dynamics, and the second term corrects for this overfitting. The third term is a normalisation constant. If the data are subject to measurement error, each observation y(t) can be related to the underlying function of cellular behaviour f(t) through a Gaussian noise model, y(t) = f(t) + \u03b5 with \u03b5 ~ N(0, \u03c3n\u00b2). As the LNA describes a Gaussian process, the tools of non-parametric regression with Gaussian processes can be used to determine the likelihood function for both the OU and OUosc models. The maximum marginal likelihood is found where the derivative of the marginal likelihood with respect to the hyperparameters is zero; the marginal likelihood was maximised for both OU and OUosc models using the MATLAB GPML toolbox. As well as oscillatory activity, gene expression may exhibit long-term trends. Without accounting for trends in the data, oscillations can become harder to detect because the signal is dominated by an aperiodic long-term trend. The OUosc model describes oscillations with variability in amplitude and period, but the mean of the covariance function is zero, so over long time scales the average of the signal generated by the OUosc model remains constant. We therefore detrend using a squared-exponential (SE) covariance function, with \u03c4 = |t \u2212 t\u2032| as before; the parameter \u03b1SE can alternatively be expressed as a lengthscale of correlation, lSE. Crucially, when a long-term trend is added to the original oscillatory signal, the calculated LLR markedly drops (to 1.12 in our example), favouring the OU null model over the alternative oscillatory model. When the models are nested, the test statistic is usually assumed to be asymptotically chi-squared distributed (by Wilks\u2019 theorem).
However, Wilks\u2019 theorem assumes that the estimated parameters lie in the interior of the parameter space of the more complicated model, and this does not apply here because the OU model is only equivalent to the OUosc model when the frequency is zero. We therefore use an empirical approach, described next, for choosing an LLR threshold to define a cut-off between oscillatory and non-oscillatory classification. An objective metric on which to base the LLR threshold is the false discovery rate (FDR). The FDR seeks to control the proportion of cells passing the oscillatory test that are actually non-oscillatory, and it differs from the false positive rate (FPR) often used in statistical testing. Given a particular LLR threshold, the FPR is the proportion of cells called significant (oscillatory) when they are actually null (non-oscillatory). The FPR is typically controlled by setting a significance level on the p-value, which is the probability of the observed test statistic (in our case the LLR) under the null hypothesis. In contrast, the FDR quantifies the expected proportion of cells falsely called significant as a percentage of the total number of cells passing the threshold. If, for example, out of 100 non-oscillatory cells the LLR threshold is chosen such that 5 are deemed significant, then the FPR is 5%, but the FDR is 100%, because all 5 of the cells called significant are actually false. The FDR therefore provides a better representation of the balance between true positives and false positives in classifying cells. At a given threshold the FDR can be estimated as \u03c00 \u00b7 m \u00b7 FPR divided by the number of cells passing the threshold, where m is the total number of cells, \u03c00 is the estimated proportion of non-oscillating cells and the FPR is the probability of a non-oscillator passing. The method relies on first estimating the proportion of non-oscillating cells (\u03c00) by comparing the shape of the score distribution from the analysed data set with that expected from a population of non-oscillating cells. The original protocol uses the statistical properties of p-values, specifically that under the null hypothesis (i.e.
a population of non-oscillating cells) the p-values are uniformly distributed between 0 and 1. As we calculate an LLR instead of a p-value, we cannot assume that the distribution of LLR scores is uniform (i.e. we do not know the expected distribution of LLR scores of non-oscillating cells). We therefore find this distribution through a bootstrap approach in which we simulate many cells with the null aperiodic OU model and calculate their LLRs; to choose parameters for the simulated cells we use the parameters of the OU model fitted to the data. Sampling from the null model to compute the test statistic is known as a Cox test. Analogous to the p-value used to control the FPR, the q-value is the minimum FDR attained at or above a given LLR score: for a particular LLR threshold, the associated q-value is the expected number of false positives as a proportion of all cells that exceed the threshold. To control the FDR and calculate q-values we follow the procedure proposed by Storey and Tibshirani, as follows. (1) Calculate the LLR score for each cell in the data set: for each cell, fit an oscillatory (OUosc) and a non-oscillatory (OU) Gaussian process model to the detrended data, and calculate the LLR difference between the fits of the two models. The LLR is normalised by dividing by the length of the data and multiplying by 100 (to give more convenient units). (2) Create a synthetic data set to approximate the LLR distribution expected under the null (OU) model: generate 2000 synthetic cells by sampling equally from the trend and OU parameters fitted to each cell (if the data set consists of 100 cells, the fitted trend and OU parameters of each cell are used to create 20 synthetic cells), and compute the LLR values of these synthetic time series. (3) The proportion of non-oscillating cells \u03c00 must then be estimated by tuning a parameter \u03bb.
Estimate \u03c00 by comparing the shape of the LLR distribution of the data set with that of the non-oscillating cells generated in step (2): for a range of \u03bb, estimate the proportion of cells in the data set that are non-oscillatory and, allowing for the dependence of \u03c00(\u03bb) on \u03bb, set the estimate of \u03c00 accordingly. (4) Calculate the q-value of each cell in the data set by computing the FDR at each LLR score and taking the running minimum over thresholds i = m \u2212 1, m \u2212 2, \u2026, 1. By controlling the q-value at a certain threshold (q < \u03b3) we are then able to quantify the number of oscillating and non-oscillating cells within the population. In order to assess the performance of the method in discriminating periodic and aperiodic signals, we generated synthetic data from a stochastic model consisting of reactions between discrete numbers of molecules in a network with known oscillatory and non-oscillatory regimes. Specifically, we used a model of the Hes1 oscillator with delayed negative autoregulation, in which the transcription rate is given by the Hill function \u03b1m/(1 + (np/\u03a9P0)^h), where P0 and h are constants representing the strength of negative repression and \u03a9 represents the system size. The double arrow in the reaction scheme denotes that this reaction contains the total delay within the system, such that when the reaction is triggered at time t, an mRNA molecule is not produced until t + \u03c4. The parameters of the model control whether the system undergoes oscillations or aperiodic fluctuations. The Hes1 reporter construct contained 2.7 kb of Hes1 promoter upstream of destabilised ubiquitin-luciferase followed by the 540 bp Hes1 3\u2019UTR. Generation of Hes1 reporter or control reporter cell lines was performed by transfection of pcDNA4-Hes1::ubq-luciferase WT 3\u2019UTR or pbabepuro::ubq-luciferase, respectively, into C17.2 cells using Lipofectamine 2000, with addition of 1 mg/ml Zeocin or 5 \u03bcg/ml puromycin after 48 hours. Cells were maintained in antibiotic selection for 2 weeks and individual resistant colonies were picked to generate single-cell clones.
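The q-value step of the Storey-Tibshirani-style procedure described above can be sketched in Python as follows (an illustrative reimplementation under our own assumptions, not the authors' MATLAB code; the exponential toy scores, the chosen \u03c00 and all names are hypothetical):

```python
import numpy as np

def qvalues(llr_data, llr_null, pi0):
    """q(i) = minimum FDR attained at or above cell i's LLR score, where
    FDR(c) = pi0 * m * FPR(c) / #{cells with LLR >= c} and FPR(c) is
    estimated from a bootstrapped null (OU-model) LLR distribution."""
    m = len(llr_data)
    order = np.argsort(llr_data)[::-1]                 # descending LLR
    fdr = np.empty(m)
    for rank, i in enumerate(order, start=1):
        fpr = (llr_null >= llr_data[i]).mean()         # null exceedance
        fdr[rank - 1] = pi0 * m * fpr / rank
    q_sorted = np.minimum.accumulate(fdr[::-1])[::-1]  # running minimum
    q = np.empty(m)
    q[order] = q_sorted
    return q

rng = np.random.default_rng(5)
llr_null = rng.exponential(1.0, 2000)                    # bootstrapped null LLRs
llr_data = np.concatenate([rng.exponential(1.0, 50),     # aperiodic cells
                           10.0 + rng.exponential(1.0, 50)])  # clear oscillators
q = qvalues(llr_data, llr_null, pi0=0.5)
```

Thresholding q < \u03b3 then reports the oscillating fraction while controlling the expected proportion of false discoveries, rather than the per-cell false positive rate.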
C17.2 Hes1::ubq-luciferase clones were tested for luciferase expression and response to transient Notch1 intracellular domain over-expression in a FLUOstar Omega plate reader. A representative clone was used for subsequent imaging. C17.2 cells were grown in DMEM with 10% FBS. C17.2 reporter cells were plated on 35 mm glass-based dishes and allowed to adhere before serum withdrawal for 3 hours and subsequent imaging in the presence of 10% serum and 1 mM D-luciferin. Plates were placed on an inverted Zeiss microscope stage and maintained at 37\u00b0C in 5% CO2. Luminescent images were obtained using a 10x 0.3 NA air objective and collected with a cooled charge-coupled device camera, using a 30 minute exposure and 2x2 binning. Bioluminescent movies were analysed in Imaris. Images were first subjected to a 3x3 median filter to remove bright-spot artefacts from cosmic rays. Individual cells and background regions were tracked manually using the \u201cspots\u201d function, and single-cell bioluminescence values over time were extracted. Four background areas containing no cells were used to estimate and constrain the experimental noise parameter \u03c3n. In order to characterise the performance of the new method based on Gaussian processes, we compare it with the LSP on synthetic data generated by a stochastic model of the Hes1 genetic oscillator, a system of negative autoregulation with delay. We subsequently apply the method to live-cell reporter imaging from C17.2 neural progenitor cells to assess its ability to discriminate dynamic gene expression generated by two different types of promoters: Hes1 and MMLV. We simulated data from 1000 cells for the comparison on synthetic data.
We use the Receiver Operating Characteristic (ROC) curve to systematically compare the performance of both methods. For the first synthetic data set used to generate a ROC curve, protein levels were measured every 30 minutes for 25 hours, and measurement noise with a variance of 10% of the signal was added to each time point. Detrending with \u03b1SE = exp(\u22124) was applied to the oscillatory and non-oscillatory synthetic data, and upper bounds on \u03b1SE of exp(\u22126), exp(\u22124) and exp(\u22122) were compared; the higher the upper bound, the more flexibly the fitted trend follows the data. We sought to quantify the effect of the lengthscale in a systematic way, to provide a guideline for a sensible parameter choice that achieves good statistical power while controlling the FDR at a low level. The detrending pipeline is designed to remove long-term trends within the data, but if the lengthscale is too short it can remove the oscillatory signal itself. We therefore tested the effect of the detrending lengthscale on the false positive rate, statistical power and FDR when a range of different trends is added to synthetic data from the Hes1 model, using \u03b1SE values of exp(\u22123.5), exp(\u22124) and exp(\u22124.5), corresponding to lengthscales of 4.1, 5.2 and 6.7 hours, respectively, and exp(\u22125) and exp(\u22126), corresponding to lengthscales of 8.6 and 14.2 hours, respectively. When the copy number of interacting species is low, the LNA can lose accuracy and the system may no longer behave like a Gaussian process. These effects can be particularly strong when there are nonlinear reaction steps in the system and the parameters of the model lead to oscillations (near a Hopf bifurcation). The LNA works by considering fluctuations around a single steady state.
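The ROC comparison referred to above is a generic construction, and can be sketched as follows (toy Gaussian scores stand in for the LLR and LSP statistics; none of this is the paper's data):

```python
import numpy as np

def roc_curve(scores, truth):
    """Sweep a classification threshold over the scores (descending) and
    record the false positive rate and true positive rate at each step."""
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr = np.array([(scores[truth] >= c).mean() for c in thresholds])
    fpr = np.array([(scores[~truth] >= c).mean() for c in thresholds])
    return fpr, tpr

rng = np.random.default_rng(7)
truth = np.array([True] * 100 + [False] * 100)    # oscillatory vs. not
scores = np.concatenate([rng.normal(2, 1, 100),   # scores for true oscillators
                         rng.normal(0, 1, 100)])  # scores under the null
fpr, tpr = roc_curve(scores, truth)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal AUC
```

A method whose test statistic separates the two populations better traces a curve closer to the top-left corner, with area under the curve (AUC) approaching 1.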
Some networks are able to generate bistability, with two steady states; an example of such a network is shown in S10 Fig. During the derivation we assume there is only one dominant behaviour within the time series. This assumption can be violated when the dynamics are more complicated, for example when there are two characteristic frequencies within the data. Taken together, these examples show that our method is robust to at least some common cases of mis-specification, although manual inspection of the data is recommended to establish whether the time series data differ substantially from an OU or OUosc Gaussian process. The data analysis pipeline can in principle be adapted to handle more complicated dynamics by adding multiple covariance functions together and including methods such as changepoint detection. Having demonstrated the utility and accuracy of our method on synthetic data, we then applied the method to an experimental data set consisting of time series from live single-cell bioluminescence imaging. A firefly luciferase reporter was driven either by the 2.7 Kb region of the Hes1 promoter or by a constitutive MMLV promoter. The bioluminescence of single cells was measured by exposing the camera to signal over 30 minutes, to give time points every 30 minutes, where the length of time series ranged from 26 to 72 hours. The Hes1 promoter has been previously reported to drive oscillatory expression due to negative feedback of the HES1 protein on its own promoter. Cells with the Hes1 promoter (19 cells) and the MMLV promoter (25 cells) were analysed by calculating the LLR between the OU and OUosc models. The LLRs of the Hes1 promoter cells ranged from 0 to 40, with a reasonably uniform distribution over the entire range. Using our developed detrending protocol and applying the LSP on the detrended data with Benjamini–Hochberg FDR correction leads to a reduced number of Hes1 cells and 4/25 MMLV cells passing the test.
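The Benjamini–Hochberg FDR correction used on the detrended LSP results can be sketched as follows. This is a generic implementation of the standard procedure, not tied to the authors' pipeline:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # step-up thresholds: p_(k) <= (k/m) * q for sorted p-values
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject
```

All hypotheses up to the largest k satisfying the step-up condition are rejected, even if some intermediate sorted p-values exceed their individual thresholds.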
This suggests that our detrending pipeline may be used by other methods to improve performance, although the FDR may still not be as good as using the OU/OUosc approach. Applying the LSP alone to the same data set leads to all cells passing as oscillators (19/19). The asymptotic runtime of the Gaussian process method is O(n³) for n data points, which is limited by the Cholesky factorisation used to implement the matrix inversion in calculating the log marginal likelihood; fitting the models to the experimental data takes 5 minutes. Generating a synthetic population of 2000 OU non-oscillating cells takes 0.04 seconds, but re-fitting the OU/OUosc models takes 90 minutes. The LSP in contrast is faster, taking 15 seconds to analyse all cells (with detrending). The analysis of the data was performed with a detrending parameter of αSE = exp(−4.5), which corresponds to a time scale of 6.7 hours. The ranked lists of LLRs are shown for αSE = exp(−4.5), exp(−4) and exp(−5). Note that the detrending parameter has an effect on the overall ranking of cells. Oscillations are important for a wide range of biological processes, but current methods for discovering oscillatory gene expression are best suited for regular oscillators and assume white noise as a null hypothesis of non-oscillatory gene expression. We have introduced a new approach to analysing biological time series data well suited for cases where the underlying dynamics of gene expression is inherently noisy at a single-cell level. By modelling gene expression as a Gaussian process, two competing models of single cell dynamics were proposed. An OU model describes random aperiodic fluctuations, but in contrast to white noise the fluctuations are correlated over time, and this may form a more appropriate model of non-oscillating single cells for statistical testing.
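The O(n³) cost quoted above comes from the Cholesky factorisation inside the marginal-likelihood evaluation. A numerically stable sketch of that evaluation (illustrative only, using the standard Cholesky identities):

```python
import numpy as np

def lml_cholesky(y, K, noise_var):
    """Log marginal likelihood via Cholesky; cost is O(n^3) in len(y)."""
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))  # O(n^3) step
    # solve (K + s2 I) alpha = y via two triangular solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log|K + s2 I| = 2 * sum(log(diag(L)))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (y @ alpha + logdet + n * np.log(2.0 * np.pi))
```

The Cholesky route avoids forming an explicit inverse and halves the cost relative to a general LU factorisation, which matters when the likelihood is evaluated many times during optimisation or bootstrapping.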
The OUosc model describes quasi-periodic oscillations with a gradually drifting phase causing “dephasing”, whereby an initially synchronous population of cells loses phase with one another; such peak-to-peak variability has been observed in the Hes1 system. An alternative but related method was proposed previously. As a practical application of their algorithm, Westermark et al. (2009) fitted circadian time series of cells with gene knock-outs and subsequently compared the parameters of the fitted models. If a genetic perturbation causes a systematic change to the dynamics then mutants may carry signatures in their time series that can be discovered by clustering in parameter space. This clustering of time series with parameters can also be performed with our method, as the period and quality parameters are estimated for each cell. Additionally, our method can also be used to quantify the percentage of cells with oscillatory versus non-oscillatory activity for various mutants. This allows quantitative assessment of the degree of disruption to oscillatory dynamics caused by a genetic mutation, which has a wide range of applications for both circadian and ultradian fields. Our analysis method could form a starting point from which extensions can be designed to customise it for different experimental purposes. Firstly, our method could be extended to form a statistical test for whether two oscillating time series are coupled or synchronised. For example, dual-colour imaging in a single cell can be used to image the expression of two genes simultaneously. Using the same Gaussian process framework, one model could propose that the oscillations are independent while another could describe them as coupled oscillators (with a possible phase-shift between them).
Through calculating an LLR between the competing models, a confidence measure could similarly be defined that the two oscillators are coupled. Secondly, our method currently assumes that gene expression is either oscillatory or non-oscillatory for the entire duration of the time series. During development, however, stem cells may make dynamic transitions from oscillatory to non-oscillatory gene expression as they differentiate into more specialised cells. Finally, the method could be adapted to infer spatial organisation and pattern formation. The starting point of the method was a chemical reaction system in a well-mixed compartment, where space was neglected. In spatial systems, the combination of reaction and diffusion can lead to the formation of Turing patterns. By quantifying the proportion of oscillating cells in a statistically objective manner, we envision that our method will provide a useful tool to the single-cell gene expression community and that it will be expanded upon for future applications.
S1 Fig: OUosc parameter values are α = 0.2, β = 0.5, and σ = 1. (EPS) Click here for additional data file.
S2 Fig: Estimation of π0 using detrending lengthscales of αSE = exp(−6), αSE = exp(−4) and αSE = exp(−2), respectively. The value of π0, the estimated number of non-oscillating cells, is estimated as the spline fit (red line) at the lowest λ. (EPS) Click here for additional data file.
S3 Fig: (A) No trend added; trend added at (B) αSE = exp(−3.5), (C) αSE = exp(−4) and (D) αSE = exp(−4.5). (EPS) Click here for additional data file.
S4 Fig: (A) Time series example for the p53 model in the oscillatory regime. Model parameters are βx = 0.9, αk = 1.7, k = 0.0001, βy = 1.1, α0 = 0.8, αy = 0.8 and Ω = 20. (B) Time series example for the p53 model in the non-oscillatory regime, where y(t) is removed from the degradation of x(t). Model parameters are βx = 0.9, αk = 5.1, k = 1, βy = 1.1, α0 = 0.8, αy = 0.8 and Ω = 20. The false positive rate, statistical power and FDR of 2000 oscillating and non-oscillating cells from the p53 model simulated with the Gillespie algorithm with trend added at (C) αSE = exp(−5), (D) αSE = exp(−6). (EPS) Click here for additional data file.
S5 Fig: (A) The LLR distribution of non-oscillating Gillespie simulations with added trend of αSE = exp(−4). (B) The LLR distribution of synthetic bootstrap data of the entire data set. (C) The Q-Q plot of the Gillespie simulated (plus trend) LLR distribution (from A) against the OU bootstrap LLR distribution (B). (D) The estimates of σSE inferred from the Gillespie data with trend added. (EPS) Click here for additional data file.
S6 Fig: (A) The cumulative density function of the LLR of 1000 non-oscillating Gillespie simulations with added trend of αSE = exp(−4), compared against the corresponding OU bootstrap distribution. (EPS) Click here for additional data file.
S7 Fig: (A) The LLR distribution of non-oscillating Gillespie simulations with no added trend. (B) The LLR distribution of synthetic bootstrap data of the entire data set. (C) The Q-Q plot of the Gillespie simulation LLR distribution (from A) against the OU bootstrap LLR distribution (B). (EPS) Click here for additional data file.
S8 Fig: The LLR distributions of simulations with added trend of αSE = exp(−4) for time lengths of 25 and 50 hours, respectively. The LLR distributions of synthetic bootstrap data of the entire data set for time lengths of 25 and 50 hours, respectively. The Q-Q plots of the OU simulated LLR distribution against the OU bootstrap LLR distribution for time lengths of 25 and 50 hours, respectively. The estimates of σSE from the Gillespie data for time lengths of 25 and 50 hours, respectively. (EPS) Click here for additional data file.
S9 Fig: (A) Time series example of the Hes1 oscillator at a system size of Ω = 1. (B) Histogram of all data points contained in (A). (EPS) Click here for additional data file.
S10 Fig: (A) Network topology of the bistable network. Time series examples of the bistable network. Model parameters are P0 = 12, n = 2, αm = αp = 10, μm = μp = 0.3 and Ω = 1. LLR distributions of 2000 cells simulated from the bistable network and from the OU bootstrap, respectively. (EPS) Click here for additional data file.
S11 Fig: (A) Time series example of dynamics generated by two oscillatory OUosc covariance functions added together, with periods of 2.5 and 24 hours. Covariance parameters are: σ1 = 5, α1 = 0.001, β1 = 2π/24, σ2 = 1, α2 = 0.1, β2 = 2π/2.5. (B) The corresponding time series from (A) after detrending with a lengthscale of 7.5 hours. (EPS) Click here for additional data file.
S1 Table: (PDF) Click here for additional data file."}
+{"text": "We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and with the approaches just mentioned this phenomenon cannot be described. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings. For convenience, we briefly sketch the approach when the catalytic rates are constant over time, as in our previous work. We recall that any network can be represented as a graph, where nodes are connected by edges when there is some interaction between these nodes. In a metabolic network a node represents a substrate or a product, and a directed edge from node i to node j means that i can be converted to j by enzymatic activity. To an edge from node i to j, we assign a weight, i.e., the catalytic rate kij ≥ 0, which represents the rate of product formation. In parameter inference one estimates the kij from data. Denoting the concentration of substrate i at time t as Xi(t), a general time-invariant linear ODE model with a constant nonhomogeneous term, satisfying the mass conservation law, can be written as dXi(t)/dt = −Σj kij Xi(t) + Σj kji Xj(t) + bi, i = 1, …, n, where the first sum accounts for the outgoing edges of Xi, the second for the incoming edges, and bi for the possible in- or outflow to the system. To simplify the notation, we introduce a matrix A collecting the rates kij. We extend the state vector X(t) by appending an extra 1 and the matrix A by extending it with an extra column containing the constant b1, giving an (n + 1) × (n + 1) matrix. We store the measurements of substrate i at time points tj, j = 1, …, m, as an (n × m) matrix 𝕏0, and estimates of the derivatives of the data curves in a matrix 𝕏1 of the same size, where m is the number of measurements. We assume here that all metabolites are measured at the same time points t; if this is not the case, the necessary modifications are easily implemented.
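The constant-rate linear model above can be made concrete with a toy three-node tree (a hypothetical root with influx b1 feeding two leaves; this is an illustration, not the tomato network):

```python
import numpy as np

# hypothetical tree: node 1 receives influx b1 and feeds nodes 2 and 3
k12, k13, b1 = 0.4, 0.2, 1.0

def dXdt(X):
    """Right-hand side of dXi/dt = -sum_out kij*Xi + sum_in kji*Xj + bi."""
    x1, x2, x3 = X
    return np.array([b1 - (k12 + k13) * x1,   # root: influx minus outflow
                     k12 * x1,                # edge 1 -> 2
                     k13 * x1])               # edge 1 -> 3

# simple Euler integration of the ODE system
X = np.zeros(3)
dt = 0.01
for _ in range(10000):
    X = X + dt * dXdt(X)
# the root concentration x1 approaches the steady state b1 / (k12 + k13)
```

Because the leaves have no outgoing edges, mass accumulates there in proportion to the edge weights, while the root settles at b1/(k12 + k13).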
If this is not the case, the necessary modifications are easily implemented.To reconstruct a metabolic network from time-series measurements, we have to estimate the reaction rates i can numerically find the minimizer kij and the flow term b1) with the constraint that kij \u2265 0.To this end, we reformulate the equation as a minimization problem: As described in the introduction, tree networks are networks, whose graphs resemble trees in that they branch away from the root and the directions of the edges always point from the root towards the leaves. In b1 represents the influx into the system and the ki,j are the catalytic rates. Note that there are as many unknown parameters as there are measured variables Xi(tj). Therefore, as can be directly verified, we can rewrite the previous matrix equation by exchanging the Xi and ki,j as follows: For the network on B is an upper triangular matrix since the entries below the diagonal are zero. This implies that the determinant of the matrix B in Xi \u2260 0, \u2200i = 1, \u2026, n. So, B is invertible and the system of equations has the unique solution. We immediately see that In earlier work we developed a fast method to reconstruct metabolic networks . The ideAs the catalytic rate is now modeled as a function in time, and not as a constant, it is no longer possible to infer this with the standard procedure of solving for those parameters that fit the ordinary differential equations to data in the sense of maximum likelihood. We cannot clearly separate the substrate/product, enzyme concentrations and noise, since we have no measurements of the enzyme concentrations. To solve them, we would have to impose a model on them, which we don\u2019t have a priori. A reasonable approach in this situation is to first estimate a model for the metabolite concentrations for which we have several measurements. By fixing the concentrations first using spline approximations, we may then estimate the trends in the enzyme concentrations. 
This method assumes that the solutions are rather smooth. If this is not the case and the sampling frequency is low, the derivatives obtained by fitting splines can introduce errors that distort the reconstruction. The inference method proposed here is by no means restricted to tree networks, but in case the network has a tree structure, the parameters can be estimated in an unambiguous way. We summarize the general work flow for the proposed parameter inference in a schematic diagram. In this section we present three different schemes to estimate the time-varying rates kij(t) in the model. Scheme 1. To estimate the derivatives at some time point one still needs the data of neighboring time points. So, the first step in this scheme is to fit, e.g., P-splines to the data time series. From these fits one obtains point-wise estimates of kij(tk) and b1 at all times tk. Finally, for smooth and continuous catalytic rates, one may fit, e.g., a second order polynomial through these estimates. Scheme 2. An alternative approach, in which the number of parameters is smaller than in scheme 1, is to assume that the functions kij(t) can be adequately represented as polynomials in time of some order. In practice order 2 is often sufficient. With this choice we then have kij(t) = αij + βij t + γij t². This implies that per kij we have 3 parameters to be estimated using the whole time series data. By substituting this representation into the model we obtain estimates for αij, βij and γij, and thus for kij(t). Scheme 3.
As in the previous scheme, we assume a parametric representation of the rates, but we now look for a matrix relating the model solutions Xi(t) to the unknowns Xi, kij, and b1. To compare the fit, accuracy and speed of these three schemes we applied them using as test networks random tree networks that have equal numbers of nodes and edges as the network studied here. We chose αij, βij and γij in a range such that the resulting solutions have approximately the same range as the metabolite concentration data for quercetin glycosides measured in tomato seedlings. Glycosyltransferases (GTs) are members of a multigene superfamily in plants that can transfer single or multiple sugars to various plant molecules, resulting in the glycosylation of these compounds. Our procedure for the GT inference is as follows: 1. Given the time series metabolite concentration data, estimate the time dependent parameters using all biologically relevant networks. Select the network that gives the best fit to measurements with respect to residual or goodness of fit etc. Save the estimated catalytic rates corresponding to the best network as reference. 2. Compute correlations between the time series of expression levels of each GT and the previously saved series of catalytic rates. 3. Select those GTs whose dynamics correlate best with catalytic dynamics for further experimental validation. Initial guesses (with kij > 0) were given to the solvers and the parameters were estimated using constrained non-linear global optimization (NMinimize in Mathematica), choosing the fast Nelder–Mead algorithm. This result is more or less to be expected, since when the data is reasonably accurate, it does not always make sense to re-estimate the data by using it as an unknown variable in the equations of the system. Rather, it may pay off to substitute the data directly into the equations, reducing the number of unknown elements. It is also logical that schemes 1 and 2 perform less well on non-tree graph networks, since the assumption of unique point-wise estimability is no longer valid.
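Scheme 2 above reduces to linear least squares: with kij(t) = αij + βij t + γij t², each leaf equation dXj/dt = kij(t) Xi(t) is linear in the three coefficients. An illustrative fit for one edge (assumed setup, not the authors' Mathematica code):

```python
import numpy as np

def fit_edge_poly(t, xi, dxj):
    """Fit k(t) = a + b*t + c*t^2 for an edge i -> j,
    using the leaf relation dXj/dt = k(t) * Xi(t)."""
    # design matrix: each column multiplies one polynomial coefficient
    D = np.column_stack([xi, t * xi, t**2 * xi])
    coef, *_ = np.linalg.lstsq(D, dxj, rcond=None)
    return coef  # (a, b, c)

# the recovered rate can then be evaluated as a + b*t + c*t**2
```

Because the problem is linear in (α, β, γ), no nonlinear optimisation is needed for such an edge; in the full problem the authors instead use constrained global optimisation to handle the positivity constraints and the coupled network.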
Since our method is based on initial fitting of splines, the major sensitivity is indeed with respect to the data. This was also confirmed by the sensitivity analysis we conducted, in which the eigenvalues span a wide range, from 10^-4 to 10^5. For the sensitivity analysis numerical derivatives need to be computed. Since we are considering time varying parameters, we have taken time-averages of point-wise derivatives. Eigenvectors corresponding to very small eigenvalues, implying sloppiness in sensitivity, all point towards those parameters that are associated with network nodes where the measured metabolite concentrations are very low. This is logical since the parameters associated with concentration values close to zero have little effect on the residual, because our objective function does not contain the standard deviation term in the denominator. By this choice we explicitly wanted to avoid that those measurements that are close to noise level have equal weight with the more abundant ones. Our network models, although relatively small, belong to the general group of so-called sloppy biochemical models. As a computational validation of the selection procedure, we tested whether substituting the expression levels of the selected genes into the model results in a decreased residual (better likelihood of observing the measurements). The reason we want to do this post-analysis is two-fold. First of all, our GT candidates are ranked according to their correlation with the predicted enzymatic trends, but it may happen that several candidates have almost equal correlation coefficients. This makes it difficult to distinguish between the candidates, especially because the initial GT population is already a result of an ontology-based selection. Another point is that the selection of the most likely GTs is based on individual matchings with single dynamic parameters whose magnitudes are unknown.
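Sloppiness is commonly diagnosed from the eigenvalue spectrum of the Gauss–Newton Hessian approximation JᵀJ of the residual. A generic finite-difference sketch (illustrative only; the residual function here is a placeholder, not the authors' objective):

```python
import numpy as np

def sensitivity_spectrum(residual, p0, eps=1e-6):
    """Eigenvalues of J^T J, where J is the finite-difference Jacobian
    of the residual vector with respect to the parameters p0."""
    p0 = np.asarray(p0, dtype=float)
    r0 = residual(p0)
    J = np.zeros((len(r0), len(p0)))
    for i in range(len(p0)):
        dp = np.zeros_like(p0)
        dp[i] = eps
        J[:, i] = (residual(p0 + dp) - r0) / eps  # forward difference
    return np.linalg.eigvalsh(J.T @ J)[::-1]      # descending order

# eigenvalues spread over many orders of magnitude indicate sloppy
# parameter directions; the corresponding eigenvectors identify them
```

Parameters tied to near-zero concentrations barely move the residual, so they show up in the eigenvectors of the smallest eigenvalues, exactly as described in the text.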
It is not absolutely clear, say, whether the combination of the very best candidates will always give better results than when, for example, one candidate is actually the second best one (in terms of correlation). In each network combination, at most seven GTs are considered, but the number of all possible combinations is still very high. Also the expression levels need to be scaled to match the metabolic model. To ensure a rich set of gene combinations, we ran a Markov Chain Monte Carlo (MCMC) algorithm. The resulting test gave a P-value less than 0.00001, and the variance test a P-value of less than 0.006. We may conclude that, in the context of a dynamic kinetic reaction model, those genes with expression levels highly correlating with the predicted enzyme dynamics are significantly more likely to be responsible for the observations. In this section we discuss the results in terms of identifiability, which is a major issue in parameter inference. A parameter estimation method may always be able to find some estimates, but this makes sense only if it is clear that it is possible to estimate the parameters from the data, i.e., that they are structurally and practically identifiable. A general problem in parameter estimation is that it is difficult and sometimes even impossible to be sure that the estimated parameters are unique. If the model is structurally unidentifiable, there is an infinite number of parameter sets that give equal results. This is a substantial challenge, especially when the network structure is not known, since an overly complex network can result in over-fitting. This problem is not present in any of the potential networks sketched here. Structural identifiability does not imply practical identifiability, and therefore we have studied the practical identifiability of the parameters in our system by means of profile likelihood.
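The MCMC exploration of gene combinations can be sketched as a Metropolis scheme over subsets. The residual function and the single-gene toggle move set here are hypothetical placeholders, not the authors' implementation:

```python
import math
import random

def mcmc_combinations(genes, residual, n_steps=2000, temp=1.0, seed=0):
    """Metropolis sampling over gene subsets, favouring low residuals."""
    rng = random.Random(seed)
    state = set(rng.sample(genes, 2))
    cur_res = residual(state)
    best, best_res = set(state), cur_res
    for _ in range(n_steps):
        proposal = set(state)
        proposal.symmetric_difference_update({rng.choice(genes)})  # toggle one gene
        if not proposal:
            continue  # keep at least one gene in the combination
        new_res = residual(proposal)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if new_res <= cur_res or rng.random() < math.exp(-(new_res - cur_res) / temp):
            state, cur_res = proposal, new_res
            if cur_res < best_res:
                best, best_res = set(state), cur_res
    return best, best_res
```

Sampling rather than enumerating matters because, with up to seven GTs per combination drawn from a large candidate pool, exhaustive evaluation of all combinations is infeasible.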
Firstly, we show that when the underlying network has the structure of a tree graph, these rates can be unambiguously estimated. Secondly, we propose a fast approach for the estimation of time dependent kinetic rates and demonstrate its performance on simulated data. Finally, we propose an application in which we utilize the estimation method to detect the genes that are potentially involved in particular enzymatic reactions using microarray data.10.7717/peerj.2417/supp-1Supplemental Information 1This excel file contains the concentrations of the metabolites at different time points and under different conditions.Click here for additional data file.10.7717/peerj.2417/supp-2Supplemental Information 2This Mathematica notebook solves the time-varying enzyme concentrations, given the time-series of concentrations and a hypothetical metabolic network.Click here for additional data file.10.7717/peerj.2417/supp-3Supplemental Information 3Click here for additional data file.10.7717/peerj.2417/supp-4Supplemental Information 4Click here for additional data file.10.7717/peerj.2417/supp-5Supplemental Information 5This dataset was used to compare the estimated enzymatic rates from the mathematical model to the actual expression rates of the enzymes that are potentially involved in the reactions.Click here for additional data file."}
+{"text": "Cellular senescence irreversibly arrests growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step evolving process related to important complex biological processes. Most studies analyzed only the genes and their functions representing each senescence phase without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism inferred by affected genes and their interactions underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on the proposed scoring measure, termed the perturbation score. Then, the selected genes were integrated with protein-protein interactions to construct time point-specific networks. From these constructed networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference of the average perturbation scores of the common network between two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified high association with phenotypic alteration. Remarkably, we observed that the identified cell cycle-specific common network played an important role in replicative senescence as a key regulator. Until now, network analysis of time-series gene expression data has focused on how the topological structure changes over time. Conversely, we focused on structure that is conserved while its context changes in the course of time, and showed that this can explain the phenotypic changes.
We expect that the proposed method will help to elucidate the biological mechanism unrevealed by the existing approaches.The online version of this article (doi:10.1186/s12918-017-0417-1) contains supplementary material, which is available to authorized users. IKK4a/ARF locus [Cellular senescence is irreversible exit from the cell cycle resulting from the limited replicative capacity caused bRF locus . RecentlRF locus . To undeRF locus . Genome-A recent study analyzed genome-wide gene expression at various time points during the establishment of replicative senescence and revealed senescence stage-specific gene perturbations . AnalysiTime-series gene expression data have been widely used to explore the molecular-level events during a phase change such as the senescence process described above, despite of difficulty of culturing cells to full senescence. Typical analytical methods use co-expression patterns to identify functional modules or compare pairwise time points to capture features of the transition or to identify temporally regulated gene expression versus one control sample . Consequgraphlet degree distribution agreement [Recently, a study reported the construction of an age-specific integrative gene network with PPI and topological analysis of the network to reveal the key modules in aging . To consgreement was usedWe proposed a novel approach to investigate the core modules of a genetic network highly correlated with phenotypic changes from time-series data. To construct the network, we integrated a perturbed gene set with biophysically validated PPIs recently published and idenTo test our hypothesis, we applied the proposed method to two replicative senescence datasets from human diploid fibroblasts (HDFs) and mesenchymal stem cells (MSCs) and one independent cancer progression dataset from human tissue neoplasia. 
We performed functional enrichment of the identified core sub-gene network and simple significance tests to confirm whether our findings reflect changes in gene expression that account for the observed phenotypic change. As a proof of concept for our approach, we intensively used recently published time-series gene expression data (GSE41714) measured during replicative senescence in HDFs. To identify the connectivity between genes, we downloaded a recently published human PPI dataset. The entire workflow of the proposed method is shown in Fig. Before identifying the perturbed gene set, we carry out quantile normalization for the time-series gene expression dataset with R. For normalization of the downloaded dataset, which is in series matrix format in GEO, we used the 'preprocessCore' library provided by Bioconductor. By normalization, we can expect that the average of expression values and relatively highly expressed genes at certain time points are adjusted. This gives a similar intensity distribution for each time point. After normalization, perturbed gene sets are identified to construct a network for each time point. In our method, the perturbation score of a gene at a time point is calculated as p(geneij) = eij − (1/M) Σm eim, where geneij indicates gene i at time point j, eij indicates the expression value of the ith gene at the jth time point, and M is the total number of time points. This formula measures the difference between each expression value and the gene's average expression value. The perturbation score, p(geneij), is calculated for every gene at every time point, and larger values imply that gene i is relatively perturbed at time point j. If adjacent time points with similar phenotypes are grouped, the average perturbation score for the grouped time points is used.
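The perturbation score, and the mean-plus-standard-deviation threshold described next, can be sketched as follows. This assumes the score is the signed deviation of each expression value from the gene's average over all time points, as the text describes; the matrix is hypothetical:

```python
import numpy as np

def perturbation_scores(E):
    """E: genes x time-points expression matrix (after normalization).
    Score p(gene_i, t_j) = e_ij - mean_over_time(e_i): the deviation of
    each expression value from the gene's average expression."""
    return E - E.mean(axis=1, keepdims=True)

def perturbed_mask(P):
    """Select scores above the threshold mean(P) + std(P)."""
    thr = P.mean() + P.std()
    return P > thr
```

By construction each gene's scores sum to zero over the time course, so a large score at one time point flags a relative perturbation rather than overall expression level.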
The threshold as determined for a given dataset by assuming the perturbation scores are normally distributed and setting the threshold as the sum of the mean and standard deviation. Figure\u00a01and2 and time-point4 will be 1.1 which is calculated by (|0.5\u2013(\u22120.45)|\u2009+\u2009|-0.775\u20130.475|)/2. From this calculation, we can determine whether the variation of perturbation scores and the state of phenotypic changes are associated or not. To investigate that perturbation scores were related to phenotypic changes, the statistical tests for all possible pairs of adjacent time points and one additional pair composed of the first and the last time points was performed.From the identified time-specific networks which have different size and include different interactions, we detect topologically conserved sub-network across time-point. As described in the Introduction, we assumed that sub-networks with constant topological structures play important roles in phenotype change. Topological conservation for a sub-network refers to its continuous perturbation with changing phenotype. As mentioned in the Introduction, we assumed that these network modules and the transition of their information such as perturbation scores were related to phenotypic changes. To prove our assumption, we calculate the average difference of perturbation scores between two time points after identification of common network. For example, in Fig.\u00a0p-value which indicates whether we can reject the hypothesis or not. Through the result of this test, we demonstrate that context transition explainable phenotypic change can be observed in the conserved network reflecting a time order, i.e. identified by the proposed method. This procedure was systematically implemented by using Java and R.We investigate an association between phenotypic changes and variation of perturbation scores with statistical test based on large-sized random sampling. 
We assume \u2018average differences of perturbation scores from our common network and a random network are the same\u2019 as a null hypothesis and test whether this hypothesis can be rejected. The test is composed of three steps, as shown in Fig.\u00a0. To demonstrate the effectiveness of the proposed method, we applied it to three time-series gene expression datasets: two senescence datasets and a cancer progression dataset. Although we focused on senescence, we included the cancer progression dataset to test the applicability of our method to other types of data. We detected a common network from the whole set of time point-specific networks and demonstrated how the common network can explain the phenotypic change for the three experimental datasets by performing statistical tests and functional enrichment with the KEGG pathway and gene ontology databases. To determine optimal thresholds for identifying perturbed genes, we used dataset-specific cut-off values. As described in the Method section, we calculated the mean and standard deviation of the perturbation scores in each dataset and found that the perturbation scores were not normally distributed: the mean was near zero, and the standard deviation was relatively small. However, we treated the distribution as Gaussian because the number of perturbation scores was large enough to apply the central limit theorem (Supporting Fig.\u00a0). As shown in Table\u00a0, the p-value was almost zero for the comparison between the first and the last stage. The phenotypic change between the first and last stages was understandably obvious, since young cells turned into old cells. This result demonstrated that the context transition of the conserved network can reflect and explain the phenotypic changes (Supporting Fig.\u00a0). With the common network, we then investigated statistical significance by comparing all possible adjacent stages.
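The random-sampling test can be sketched as a simple empirical null: draw random gene sets of the same size as the common network and ask how often they match or exceed the observed average difference (a simplification of the three-step procedure; names and defaults are illustrative):

```python
import random

def empirical_pvalue(observed_diff, scores_a, scores_b, k, n_iter=10000, seed=0):
    """Null hypothesis: the average score difference of the common network
    equals that of a random gene set of the same size k. Count how often a
    random set's average absolute difference is at least the observed one."""
    rng = random.Random(seed)
    idx = range(len(scores_a))
    hits = 0
    for _ in range(n_iter):
        pick = rng.sample(idx, k)
        diff = sum(abs(scores_a[i] - scores_b[i]) for i in pick) / k
        if diff >= observed_diff:
            hits += 1
    return hits / n_iter
```

A small empirical p-value rejects the null, i.e., the common network's score variation is larger than expected for random gene sets.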
The results for all adjacent stages are shown in Table\u00a0. We investigated how the perturbation scores of the common network change during the transition and whether these changes trend toward an increase or a decrease (Fig.\u00a0). Furthermore, we applied the proposed method to the cancer progression dataset (GSE15299). The P-value of the comparison between the first and last day was 3.792E\u201329, which is considerably significant (Fig.\u00a0). To capture the progression of invasive neoplasia, the authors of that study made a Ras-inducible human model in which epidermal tissue can be changed to squamous cell carcinoma. Through the above experiments, we computationally analyzed the common network in time-dependent gene expression profiles. In addition to the computational validation, we performed two types of functional enrichment test on the common network. First, we performed gene ontology based enrichment using the BiNGO plugin; significant terms (P-value\u2009<\u20090.01) were listed in Table\u00a0. On the HDF senescence dataset, we used the top 10 pathways and gene ontology terms to build a pathway of directed interactions among genes in the common network (Fig.\u00a0). Interestingly, we observed a feedback loop composed of the replicative senescence related genes CDK6, CCND1, CDKN1A, and CDKN1B (Fig.\u00a0). In addition, we investigated TFs which can control this regulatory relationship, using a recently published computational method, iRegulon. Using an independent dataset, a recently published gene expression profile of replicative senescence in normal human diploid fibroblasts, we analyzed whether the expression levels of the four genes constituting the above feedback loop correspond to the regulatory loop. In this study, we proposed a novel approach to identify gene networks that are significantly correlated with phenotypic changes from time-series data.
In this process, we integrated a recently published PPI dataset with time-series gene expression data to produce informative interactions among genes. The networks were validated with statistical tests and functional enrichment. To demonstrate the suitability of the proposed method, we used three different real datasets covering cellular senescence and cancer progression; the identified networks were appropriate to explain the phenotypic changes. In future work, we plan to carry out perturbation experiments with the identified TFs to determine whether they contribute to the phenotypic change by affecting the expression levels of CDK6 and the other members of its feedback loop."}
+{"text": "An HN stores interactions among its nodes over the entire network in a weight matrix (W). We used gene co-expression to compute the edge weights. Through W, we then associate an energy score (E) to each input pattern such that each pattern has a specific E. We propose that, based on the co-expression values stored in W, the HN associates lower E values with stable phenotypic states and higher E with transient states. We validate our model using time course gene-expression data sets representing stages of development across 12 biological processes, including differentiation of human embryonic stem cells into specialized cells, differentiation of THP1 monocytes to macrophages during immune response and trans-differentiation of epithelial to mesenchymal cells in cancer. We observe that transient states have higher energy than the stable phenotypic states, yielding an arc-shaped trajectory. This relationship was confirmed by perturbation analysis. HNs offer an attractive framework for quantitative modelling of cell differentiation (as a landscape) from empirical data. Using HNs, we identify genes and TFs that drive cell-fate transitions, and gain insight into the global dynamics of GRNs.The epigenetic landscape was introduced by Conrad Waddington as a metaphor of cellular development. Just as a ball rolling down a hillside is channelled through a succession of valleys until it reaches the bottom, cells follow specific trajectories from a pluripotent state to a committed state. Transcription factors (TFs) interacting as a network (the gene regulatory network (GRN)) orchestrate this developmental process within each cell. Here, we quantitatively model the epigenetic landscape using a kind of artificial neural network called the Hopfield network (HN).
An HN is composed of nodes (genes/TFs) and weighted undirected edges, resulting in a weight matrix (W). In the course of development, cells take on a succession of distinct phenotypic states, from an initial totipotent or pluripotent state through to a final differentiated state in which the cell is committed to a particular location and function. This commitment is typically progressive, with cells passing through a hierarchy of increasingly specialized intermediate states along a developmental trajectory. The transition from one intermediate state to the next is driven by the concerted action of transcription factors (TFs) and other biomolecules as part of the gene regulatory network (GRN). Conrad Waddington introduced the epigenetic landscape as a metaphor for cellular development.1 Like a population of balls rolling down a rough hillside, cells follow specific trajectories and encounter decision points (inflections) before eventually coming to rest in one or another potentially steady state, termed attractors.3 Importantly, Waddington depicted the topography of the landscape as determined by a system of underpinning interconnected cables. Although this metaphor preceded our current understanding of the relationship among genes, transcripts and proteins, it is easily interpretable today as depicting the framework for control of cellular differentiation and phenotype by dynamics of the underlying GRN. To model this landscape, Huang4 proposed a \u201cquasi-potential\u201d that connects the elevation of the landscape to the likelihood of the corresponding cell state. For Huang4, each point in the landscape represents a gene-expression configuration of a binary regulatory circuit. An alternative formulation5 emphasises the possibility of cell trajectories without necessarily \u201crolling downhill\u201d, e.g., the landscape is modelled as a non-hierarchical \u201cepigenetic disc\u201d in which cell fates can be interconverted without necessarily traversing back up through a developmental hierarchy. Other formulations emphasise the underlying molecular mechanisms including the role of DNA methylation, histone modification and signalling pathways in driving cell-fate decisions.6\u20138 Modern experimental evidence does in fact show hierarchies of cell fates,9 with TFs driving cell development. For instance, Takahashi and Yamanaka10 demonstrated that small sets of TFs (the \u201cYamanaka cocktail\u201d) are sufficient to induce pluripotency in a differentiated somatic cell, i.e., the cell is reprogrammed back to its original state at the top of Waddington\u2019s landscape. Experimental protocols exist for generating specialized cells including neurons,11 hepatocytes,12 macrophages13 and cardiomyocytes from undifferentiated fibroblasts14 using small sets of TFs, demonstrating cell-fate conversion and reprogramming. Nonetheless it remains controversial whether a Waddington (or similar) landscape can be computed solely from empirical data. The view of Waddington\u2019s landscape as a \u201ccolourful metaphor\u201d15 that cannot be quantified has been echoed by several authors.15\u201317 The recent availability of single-cell transcriptomic (and other omic) time course data provides new opportunity to explore landscape models that could provide insight into the GRNs that underpin cellular development. However, there are issues in quantifying this landscape, including design of the algorithmic framework per se, the approach to data utilisation and mapping of actual developmental stages to features of landscape models. In the context of GRNs and cell fate, attractor landscapes have been described using Boolean networks,18\u201320 generalised logical networks,21 constraint-based models,22 Petri nets,23 stochastic master equations24 or agent-based approaches.25 Although these approaches can identify stable states, in general they do not generate a solution landscape of the sort introduced above. Dynamic systems of biomolecular interactions have been modelled using Bayesian networks,26\u201328 neural networks29\u201331 or systems of ordinary differential equations (ODEs);32 however, these have seen application mostly in computational simulation rather than in the analysis of empirical data. For instance, Wang et al.33 used ODEs and simulated data to construct a probabilistic \u201cpseudo-potential\u201d energy landscape using only two TFs, GATA1 and PU.1, to identify the developmental path of cells from undifferentiated to differentiated states. This system permits a binary cell-fate decision such as macrophage/monocyte or megakaryocyte/erythrocyte. Similar probabilistic potential landscapes have been used to model lysis-lysogen switching in bacteriophage \u03bb34 and the mitogen-activated protein kinase (MAPK) signal transduction network.35 Srihari et al.36 used binarized gene-expression data in an optimisation framework to derive attractors of cell states and identify TFs switched between these attractors. Ferrell37 proposed an alternative landscape for the developmental processes of cell-fate induction and inhibition, in which cell-fate commitment corresponds to the disappearance of a valley rather than the creation of a new valley. Davila-Velderrain et al.38 provide a comprehensive overview. Problems associated with these models include (lack of) computational scalability and, for ODEs, the need for rate constants. Mathematical models and GRNs have their own descriptions of states, trajectories and attractors. Questions pertain to whether and how phenotypic states of a cell, which are maintained and regulated by a GRN, can be mapped to an attractor landscape model computed from gene-expression and/or other empirical data. Here we use Hopfield networks (HNs) to model an attractor landscape that can serve as a framework to understand the dynamics of the underpinning GRN. HNs, introduced by John Hopfield in 1982, are auto-associative (recurrent) artificial neural networks. Input patterns to the HN are associated with distinct attractors of the network, and can later be recalled even from partial or noisy inputs. Koulakov and Lazebnik29 used an HN to simulate fusions between different types of cells corresponding to distinct attractors of the HN, concluding that fusion helps the cells reach an attractor state that would otherwise be inaccessible. Lang et al.31 used a similar approach to model and explain partially reprogrammed cell fates, and identified driver TFs involved in this process. They employed binarized gene-expression values, based on a conditional probability distribution derived from global histone-modification data, which reflect the epigenetic state of TFs and developmental signals in the landscape. Maetschke and Ragan,30 on the other hand, constructed an HN from static gene-expression data for different subtypes of cancer, and showed that cancer subtypes can be characterized as distinct attractors of an HN. Here we construct HNs from large-scale gene-expression time course data, and map developmental trajectories to Hopfield energy profiles in this landscape.
We find that after cells are induced to differentiate, parts of their GRNs become less tightly correlated, as expected for a cell in transition from one cell-fate to another. The topography of these landscapes reflects the correlated activity of key genes that have been experimentally shown to drive cell-fate transitions and decision-making. Here we describe the HN energy landscape generated from the first case study. Cells begin in the pluripotent state P7 at EP7=\u22121320897, i.e., a low elevation on the Hopfield energy landscape. On induction, these cells enter intermediate state P6 followed by P5, where we see an increase in energy to EP6=\u2212755220 and EP5=\u2212599724, before the cells reach the differentiated state P4 with the lowest energy, EP4=\u22123307223. To confirm that these energy values are not merely an artefact, we progressively perturbed the gene-expression values for each stage and computed the energies of the resulting HNs, comparing them with random networks. The first (P7) and last (P4) groups show a greater difference vis-a-vis the random network at all investigated levels of perturbation up to 50%, whereas the energies of the intermediate states lie closer to that of the random network. We identified the GO terms39,40 and KEGG pathways41 most over-represented among the top 100 genes switched between the first and last groups. Among the enriched GO terms are cell-fate determination and cellular differentiation.43 The KEGG pathways41 most-enriched were Hedgehog signalling, which governs a wide range of processes during embryonic development and controls stem-cell proliferation,44 and Wnt signalling, a major player in stem-cell self-renewal and differentiation.45 Among the switched genes we find FOXA1, GATA4, CD9 and OCT4, TFs involved in regulation of stem-cell differentiation (DAVID47). Specifically, the differentiation markers FOXA1 and GATA4 are upregulated in P4, whereas the pluripotency markers CD9 and OCT4 are upregulated in P7.
The feature-selected genes that switch activity between groups at each transition are listed in the supplementary information. For the remaining case studies, we inferred different genes to have switched expression, and different pathways and biological processes to be enriched, but we always observed the same pattern of energy changes across time points. We computed developmental landscapes from single-cell gene-expression data in 12 time course data sets, using the mathematical formalism of the Hopfield recurrent neural network. Each point in each landscape represents the state of a cell; its elevation is determined by the energy E, which we compute from the pattern of co-variation of gene expression in that cell. For each landscape model, groups of cells at the same developmental stage, and thus sharing a common pattern of gene expression, are positioned close together. As we construct the landscapes based on the set of genes showing the greatest variation in expression across the time points in each data set, the models reflect the molecular biological processes underlying cell development. In our HN models, the energy of a state space does not reflect its likelihood; instead, E measures the extent of co-variation over all pairs of feature-selected genes, capturing a distinct pattern of gene expression which represents a cellular phenotype.48 Strong co-variation yields a relatively low energy, whereas looser co-variation gives a high energy. Along a time course as the GRN is differentially regulated or rewired, changes in the patterns of co-variation are reflected as a trajectory on our landscape. For these 12 data sets, identities of the switched genes provide insights into functional modules, many of which have previously been validated experimentally. We selected a form of perturbation analysis to build confidence in our energy values.
The perturbation results make it clear that the energy values we compute for the initial and final states are far from those of random networks; this allows us to make comparisons among states. The set of energy values arising from perturbation analysis should not be taken as an estimate of the stability of an attractor, or of the time or energy a cell needs to move out of that attractor state; for such concepts, new methods remain to be developed. Different algorithmic frameworks are available to describe systems of biomolecular interactions. HNs are computationally simple, and our results demonstrate that they can capture trajectories of cellular development using only gene-expression data. Following Maetschke and Ragan30 and Lang et al.,31 we employed gene-expression data, typically at genome-wide scale, as input. Like these authors, we sought to reduce noise (and thereby avoid spurious attractors) and improve computability by first carrying out feature selection. Thus we based each analysis on the set of genes (probes) most informative in its respective context. Lang et al.31 further used histone-modification data to establish a threshold for the binarization of gene-expression values. In principle, any type of data that provides patterns characteristic of cells at different developmental stages or time points could be input directly into the HN, including transcriptomic data for gene expression. Indeed it would be possible to use HNs to build a landscape model over mixed data types, yielding a unique approach to integrating data in developmental systems biology. At a given time, a cell is described by a state that reflects the overall pattern of interactions within its GRN. In the Hopfield framework, state refers to the value of H(s)(t), where t is the update cycle of the HN.
Because our aim here is to position each cell on the landscape, we did not allow the HN to converge to its lowest energy, but instead use the observed expression pattern to compute E.Models are abstractions of reality, and a mapping is required to link features of a model to events in a real process. Here, particular care is required to describe this mapping, as both the real physical GRN in cells, and the HN framework of our model, employ concepts of states, trajectories and attractors.These data sets are not dynamic in the sense of measuring gene expression in a fixed set of cells through a succession of time points. Rather, we position snapshots of the system (Hopfield energy profiles) in a common landscape model, then draw on external information to trace a temporal progression (trajectory) across the landscape.In the HN formalism, attractors are local minima of the energy landscape, and in the present context correspond to phenotypic states maintained by the underlying GRN. Here, cells start out in a stable state for which we calculate a low energy. After induction they progress through transient states described by higher energy values, and eventually reach another low-energy phenotype represented by an attractor. As in Waddington\u2019s metaphor, in each data set we observed sets of cells positioned along a developmental trajectory, with each point on the landscape representing a state of the GRN at a specific time. While Waddington implied that the topography of his hillside might be dynamic (by depicting interconnected cables beneath it), here we employ a static weight matrix, so our cells (networks) map onto a static landscape. 
Moreover, unlike in the Waddington metaphor, the trajectories we infer do not run downhill into a globally low-energy attractor at one edge of a state-space dimension; rather, we find low-energy states at both ends of every trajectory. In principle, the HN model can be applied to contexts other than normal cellular development. For example, following the study by Huang,49 who considered cancer as a pre-existing attractor in his quasi-potential landscape, we might construct an HN from cancer-progression data and track trajectories of cells as they progress from a normal to a cancerous state. We could extend the model by employing targeted perturbation to measure the contribution of subsets of genes or TFs to the GRN, and thus to trajectories of disease progression. As envisioned by Waddington, Kauffman and others, it is thus possible to compute a robust quantitative landscape model, based on empirical data, which reflects the collective behaviour of genes and TFs in driving cellular differentiation. By providing the framework for such models, HNs show that developmental landscapes need not be just a colourful metaphor. An HN consists of nodes and weighted undirected interactions (represented as an interaction matrix) between these nodes. So-called patterns, for instance gene-expression profiles, can be stored as the weights of the network.50 The similarity of an arbitrary input pattern to a stored pattern can be expressed by an energy function. Local minima of the energy function are the attractors of the system, to which input patterns converge during recall. A network can have multiple attractors with different minimum values. The HN associates similar stored patterns with the same attractor, whereas distinct stored patterns tend to be associated with different attractors.
Stored patterns (even when distorted) can then be retrieved from the network by a recurrent recall procedure. Let S be a set of m samples (cells) under study, and let G={g1, g2,\u2026,gn} be the set of genes profiled from these samples. For any sample s\u2208S, each gene is assigned to a node, thereby giving n nodes H(s)={H1, H2,\u2026,Hn}(s) in the HN. Each node Hi carries the expression value for gi normalised and discretized to the values {\u22121, 0, 1}. For any pair of nodes (Hi, Hj), Hi\u2260Hj, we assign the interaction weight Wij\u2208[\u22121, 1] as the co-expression (computed here as Pearson's correlation) between {gi, gj} across the m samples, and Wii=0, resulting in a zero-diagonal symmetric weight matrix W. For each node Hi, N(Hi) is its set of neighbours (connected nodes). Each sample s\u2208S, consisting of gene-expression values for the genes in G, can be stored by iterating through the HN. At each iteration, the node Hi is updated according to the states of its neighbours, weighted by W. If Wij>0 then Hi is updated to a value of the same sign (positive or negative) as Hj, whereas if Wij<0 then Hi is updated to a value of the opposite sign as Hj. Consequently, Hi is either \u201cpulled towards\u201d or \u201cpushed away\u201d from Hj depending on Wij. After each update, we can capture the extent of agreement or disagreement between Hi and Hj as an energy term E=\u2212HiWijHj, with the convention that lower values represent higher agreement or stability. Thus the energy for the entire network is given by E[H(s)]=\u2212\u2211i<jHiWijHj. As the iterations progress and the values assigned to the nodes are repeatedly updated, E[H(s)] converges to a low-energy state.51 This converged energy value represents the attractor for the sample s, and s is said to have converged to its attractor. This energy value scales linearly and is unitless.
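The weight-matrix construction and energy computation can be sketched as follows (a minimal sketch: for a symmetric zero-diagonal W, the pairwise sum over i<j equals -0.5 * h @ W @ h; function names are illustrative):

```python
import numpy as np

def weight_matrix(expr):
    """Zero-diagonal symmetric weight matrix from pairwise Pearson
    correlation of gene expression (rows = genes, columns = samples)."""
    w = np.corrcoef(expr)
    np.fill_diagonal(w, 0.0)
    return w

def hopfield_energy(h, w):
    """Network energy E = -sum_{i<j} h_i * W_ij * h_j for a discretized
    state vector h in {-1, 0, 1}^n; with W symmetric and zero-diagonal
    this equals -0.5 * h^T W h. Lower values mean tighter agreement
    between node states and co-expression signs."""
    return -0.5 * h @ w @ h
```

Strongly co-varying genes with matching node signs lower the energy, which is the sense in which tight correlation maps to low elevations on the landscape.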
Given this framework, if we have a collection of samples in which Si represents a set of samples from a specific stage (or time point) of cellular differentiation, we expect all samples in Si to converge to the same attractor in the HN. Such an energy function belongs to the Lyapunov family of monotonically non-increasing functions. Here, we aim to link the values of the HN energy function to stages of cellular differentiation. We hypothesise that as a cell transits from an initial to a differentiated state, the interplay (co-variation) between genes in its GRN changes, and the cell as a network of genes moves along the Hopfield energy profile. By computing the energy values based on gene-expression profiles of samples at different stages of cellular development, we capture this transition. In particular, we demonstrate that if pair-wise co-expression coefficients are used to construct the weight matrix, these energy values indeed reflect the developmental stages of the cell. Tight correlation corresponds to lower energy values, and looser correlation to less-favourable energies. Using sets of cells from different stages of differentiation, here we demonstrate that the computed energy level reflects the stage of differentiation. Our approach differs in two main respects from that presented in the study by Maetschke and Ragan,30 which was based on the standard Hopfield model: we build the weight matrix using Pearson's correlation rather than Hebbian learning, and omit the iteration step so as to compute energies that represent the actual biological states of the network rather than iterated values. The resulting landscape can be viewed as an n+1 dimensional space, with n dimensions for the genes and one dimension for the energy E(H).
To visualise this landscape in three dimensions, we render the surface of the energy landscape by interpolating energy values over a two-dimensional surface: first the dimensionality of the gene-expression data is reduced to 2 using principal component analysis (PCA), a regular grid is constructed with the same dimensions as the reduced data, inverse PCA is performed to map the grid points to the high-dimensional space, then E is computed for the grid data.30 In this landscape, the two major principal components of the n genes serve as our dimensions x and y, and the energy is represented as the third dimension z. Stages Si are points in this space, represented here by unique colours. We constructed HNs for 12 time course data sets covering a broad range of case studies. In the first case study, cells were fractionated by flow cytometry into different categories of pluripotency based on the expression of two stem-cell surface markers, GCTM-2 and CD9, yielding the four groups P7 (GCMT2HIGH-CD9HIGH), P6 (GCMT2MID-CD9MID), P5 (GCMT2LOW-CD9LOW) and P4 (GCMT2\u2212-CD9\u2212). Cells in group P7 are in a dormant state inducible to pluripotency, whereas P4 cells are committed to their lineage, and P6 and P5 cells are in intermediate states of differentiation. The data set includes expression profiles from 12 samples corresponding to 3 replicates from each category. Data were pre-processed by z-score normalisation, followed by feature selection to extract the probes with the highest variation across groups and time points. To determine the number of features, we used the elbow of the variance plot over features, choosing the number such that including one more probe does not change the variance. The three other case studies cover maturation of embryonic neural cells during mouse brain development, time course differentiation of THP1 monocyte cells to macrophages, and trans-differentiation of epithelial to mesenchymal cells in cancer.
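The surface-rendering step described above can be sketched as follows (a minimal sketch with PCA done by hand via SVD; the energy function is passed in, and all names are illustrative):

```python
import numpy as np

def landscape_grid(expr, energy_fn, n_grid=20):
    """Reduce samples (rows) to 2-D with PCA, lay a regular grid over
    the reduced plane, map grid points back to gene space with the
    inverse projection, and evaluate the energy at every grid point."""
    mu = expr.mean(axis=0)
    x = expr - mu
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pcs = vt[:2]                       # top two principal axes
    xy = x @ pcs.T                     # 2-D coordinates of the samples
    gx = np.linspace(xy[:, 0].min(), xy[:, 0].max(), n_grid)
    gy = np.linspace(xy[:, 1].min(), xy[:, 1].max(), n_grid)
    xx, yy = np.meshgrid(gx, gy)
    grid = np.c_[xx.ravel(), yy.ravel()]
    states = grid @ pcs + mu           # inverse PCA back to n genes
    z = np.array([energy_fn(s) for s in states]).reshape(n_grid, n_grid)
    return xx, yy, z
```

The returned xx, yy, z arrays can be handed directly to a 3-D surface plot, with the two principal components as x and y and the energy as z.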
For each data set, we assessed the robustness of the energy values by randomly changing the expression values of a randomly selected subset of the feature-selected genes. A new expression value was randomly chosen from within the interval [\u22121, +1]. We then constructed the HN network as above and computed E for the network for each round of perturbation. We also compared the resulting energy value with that of a random network of the same size (\u0394E=Erandom\u2212Eperturbed). This process was repeated 100 times for each data set and each proportion of perturbed genes."}
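The robustness procedure just described can be sketched as follows (a minimal sketch; the energy form and parameter names are illustrative, with the \u0394E convention as in the text):

```python
import numpy as np

def perturb_state(h, frac, rng):
    """Randomly replace a fraction of the (feature-selected) node values
    with new values drawn uniformly from [-1, 1]."""
    h = h.astype(float).copy()
    idx = rng.choice(h.size, size=int(frac * h.size), replace=False)
    h[idx] = rng.uniform(-1.0, 1.0, size=idx.size)
    return h

def delta_e(h, w, frac, n_rep=100, seed=0):
    """Average gap Delta E = E_random - E_perturbed between a fully random
    state and a partially perturbed state, over n_rep repetitions."""
    rng = np.random.default_rng(seed)
    def energy(s):
        return -0.5 * s @ w @ s
    gaps = []
    for _ in range(n_rep):
        e_pert = energy(perturb_state(h, frac, rng))
        e_rand = energy(rng.uniform(-1.0, 1.0, size=h.size))
        gaps.append(e_rand - e_pert)
    return float(np.mean(gaps))
```

Repeating this over a range of perturbation fractions shows how far a stage's energy sits from that of a random network.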
+{"text": "There is increasing interest in the use of quantitative transcriptomic data to determine benchmark dose (BMD) and estimate a point of departure (POD) for human health risk assessment. Although studies have shown that transcriptional PODs correlate with those derived from apical endpoint changes, there is no consensus on the process used to derive a transcriptional POD. Specifically, the subsets of informative genes that produce BMDs that best approximate the doses at which adverse apical effects occur have not been defined. To determine the best way to select predictive groups of genes, we used published microarray data from dose\u2013response studies on six chemicals in rats exposed orally for 5, 14, 28, and 90\u00a0days. We evaluated eight approaches for selecting genes for POD derivation and three previously proposed approaches . The relationship between transcriptional BMDs derived using these 11 approaches and PODs derived from apical data that might be used in chemical risk assessment was examined. Transcriptional BMD values for all 11 approaches were remarkably aligned with corresponding apical PODs, with the vast majority of toxicogenomics PODs being within tenfold of those derived from apical endpoints. We identified at least four approaches that produce BMDs that are effective estimates of apical PODs across multiple sampling time points. Our results support that a variety of approaches can be used to derive reproducible transcriptional PODs that are consistent with PODs produced from traditional methods for chemical risk assessment.The online version of this article (doi:10.1007/s00204-016-1886-5) contains supplementary material, which is available to authorized users. In Canada, an important challenge is the requirement to assess the potential for risk to human health of a large number of existing chemicals in a short timeframe. 
Specifically, the Government of Canada, under the Chemicals Management Plan launched in 2006, has a commitment to address 4300 existing substances identified as priorities by 2020; many of these substances have a paucity of traditional toxicology data. The NTP estimates that a rodent cancer bioassay requires 860 animals, $2\u2013$4 million, and 5\u00a0years to plan, conduct, and evaluate. As a result, the NTP has generally only conducted an average of 12 cancer bioassays/year since this program was launched in the 1970s. Further, due to the need for animal-based toxicity data for hazard identification and dose\u2013response analysis for many chemicals requiring risk assessment, the US Environmental Protection Agency\u2019s (EPA) Integrated Risk Information System (IRIS) had only evaluated 570 chemicals from the creation of the IRIS program (1985) up until March, 2016. Toxicogenomics approaches measure global transcriptional changes in a tissue or cell type following chemical exposure. Abundant evidence indicates that changes in mRNA levels occur during chemical toxicity and that characterizing these changes can provide meaningful information for toxicological assessment alongside apical data. In the dose\u2013response studies analyzed here, rats were exposed to aniline in feed at six doses; (5) female F344 rats were exposed to N-nitrosodiphenylamine by dietary feed at 0, 250, 1000, 2000, or 4000\u00a0ppm; and (6) female F344 rats were exposed to hydrazobenzene by dietary feed at concentrations of 0, 5, 20, 80, 200, or 300\u00a0ppm (Table\u00a0; n\u00a0=\u00a010 per group). Clinical signs of toxicity, body weights, and food consumption of animals were checked daily. Necropsies were conducted at scheduled time points.
Following gross examination for abnormalities, the target organs (target organs were selected based on previous studies of the test article) were removed, weighed, and prepared for histopathological assessment and gene expression microarray measurements. Details of the study design, animal exposures, necropsy, histology, serum clinical chemistry, blood concentration of chemicals, and microarray methods were reported previously. For RNA analysis, target tissues were either flash frozen or preserved in RNAlater at the time of necropsy. RNA isolated from the primary target tissues of six rats per dose per time point was analyzed using Affymetrix microarrays. Because all of the chemicals selected in this study had published toxicological data, target tissues were known in advance. Target tissues were liver, thyroid (MDA), and bladder (NDPA). DNA microarray hybridization was performed for 16\u00a0h on HT Rat230\u00a0+\u00a0PM microarrays. The complete microarray datasets were downloaded from the Gene Expression Omnibus (GEO) at the NCBI (Accession No. GSE45892). Gene expression data were normalized using robust multi-array average (RMA). Because we are modeling both apical and transcriptional changes, to avoid confusion, we refer to a BMD derived from an apical endpoint as a BMDa; unless otherwise stated, BMD(L)a values in the current paper refer to non-cancer apical endpoints. BMDa values were modeled for apical endpoint measurements as described previously, and BMDa calculations for cancer are described in detail in a previous study. Modeling used a confidence interval of 0.95 and a benchmark response of 1.349, which was multiplied by 0.5 for use in subsequent analyses. Using the built-in defined category analysis feature, probes with BMDts were mapped to Ingenuity Pathway Analysis (IPA) canonical pathways.
Promiscuous probes (probes annotated with more than one gene), as well as probes with BMDts higher than the highest dose and goodness-of-fit test p value <0.1, were removed.BMDExpress was used for dose\u2013response modeling and BMDFs statistic adjustment were identified using microarray analysis of variance (MAANOVA). This analysis was conducted in R (R Core Team www.qiagen.com/ingenuity) to identify significant enrichment of genes in specific molecular pathways and to predict activated upstream regulators. IPA Core Analysis with a gene expression threshold of fold change \u2265\u00b11.5 and FDR p\u00a0\u2264\u00a00.05 was run, and enriched canonical pathways that were statistically significant (p\u00a0\u2264\u00a00.05) were selected. IPA calculates the p value using the right-tailed Fisher\u2019s exact test. In this method, the p value for a given pathway is calculated by considering the number of differentially expressed genes (FC \u2265\u00b11.5 and FDR p\u00a0\u2264\u00a00.05) that participate in that pathway and the total number of genes that are known to be associated with that pathway.Gene expression data were analyzed using IPA were uploaded to the BMDExpress Data Viewer Functional Enrichment Analysis tool. This tool performs enrichment analysis using a Fisher\u2019s exact test. The Fisher\u2019s exact test is identical to conventional pathway analysis , except it only applies the analysis to genes that passed the BMDt filtering criteria and have a BMDt. Thus, it explores pathway enrichment for genes that show a dose\u2013response and are significantly increased in at least one dose group relative to controls. A list of significant pathways (p\u00a0<\u00a00.05) for each dataset is obtained. Only pathways with four or more genes with BMDt values were considered.Pathway enrichment analysis using the BMDExpress Data Viewer tools are described in detail elsewhere . 
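The right-tailed Fisher's exact test used for pathway enrichment reduces to a hypergeometric tail probability over a 2 x 2 table of (in pathway / not in pathway) x (passed filter / did not). A minimal sketch with illustrative counts (not values from the study):

```python
from math import comb

def fisher_right_tail(k, K, n, N):
    """Right-tailed Fisher's exact test p-value: the probability of
    observing k or more filtered genes in a pathway of size K, given n
    filtered genes drawn from a universe of N genes (hypergeometric
    upper tail)."""
    return sum(
        comb(K, i) * comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical counts: 8 of 40 pathway genes pass the BMD filter,
# out of 300 passing genes in a 10,000-gene universe.
p = fisher_right_tail(8, 40, 300, 10000)
```

With an expected overlap of only 40 x 300 / 10,000 = 1.2 genes, observing 8 yields a very small p-value, so the pathway would be called significantly enriched.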
The specific details of each approach are described in Table\u00a0ts within the groups defined by these approaches was calculated. The correlation between the BMDts derived for each approach and the BMDa values was computed. We also used three approaches (Approaches 9\u201311) that have been used previously . The subsequent BMDExpress analysis was similar for all approaches following these pre-filtering steps .Approach 9\u2014The significantly enriched pathway with the lowest BMDApproach 10\u2014The mean of gene BMDs across all pathways.Approach 11\u2014The median gene BMD across all pathways.We note that previous approaches to derived transcriptional BMDs have applied different statistical filtering prior to BMD modeling, ranging from no statistical filter to a conservative filter of FDR 05 Table\u00a0 to inclut for the 11 approaches were estimated using the bootstrap in the subsequent analyses in the manuscript because the mean values had a lower variance than the median values.Distributions for the mean or median (Approach 11 only) BMDap Efron . BMDt esPearson\u2019s correlations were estimated using the R statistical package. In this analysis, BMD and BMDL values in ppm were converted to milligrams per kilogram-day using strain-specific subchronic food intake factors calculated based on recommended biological values from the EPA . Linear p value was <0.05.A likelihood test was used to test each approach to determine whether it was statistically significantly different from the 1:1 ideal line (the null model). The likelihood ratio statistic is the difference in the likelihood function under the alternative hypothesis and the likelihood function under the null hypothesis; this difference is then multiplied by 2. The likelihood statistic is distributed as a Chi-square distribution with the degrees of freedom equal to the difference in dimensionality of the parameter space under the alternative and null hypothesis. In this analysis, the degrees of freedom is 2. 
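The likelihood ratio test described above can be sketched directly: the statistic is twice the difference in log-likelihoods, and for two degrees of freedom the chi-square survival function has the closed form exp(-x/2). The log-likelihood values below are hypothetical:

```python
from math import exp

def lr_test_p_df2(loglik_alt, loglik_null):
    """Likelihood ratio test against the 1:1 null line, as in the text:
    statistic = 2 * (llA - ll0), compared to a chi-square distribution
    with 2 degrees of freedom, whose survival function is exp(-x/2)."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return exp(-stat / 2.0)

# Hypothetical log-likelihoods for the fitted (alternative) model and
# the 1:1 (null) model.
p = lr_test_p_df2(loglik_alt=-41.2, loglik_null=-45.1)
```

Here the statistic is 7.8 and p is about 0.02, so the null (1:1) model would be rejected at the 0.05 level.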
The null model was rejected if the t derived from an approach should be less than threefold the apical POD; (2) significance of the Pearson\u2019s correlation coefficient should be <0.05; and (3) the significance of the likelihood ratio test in deviating from the 1:1 slope should be >0.05.We applied three criteria to assess the effectiveness of the approaches in identifying a relevant POD: (1) the mean ratio of the BMD(L)a values were calculated for changes in target organ weight and histology . Because we are not using these data for human health risk assessment, no additional considerations were made . Generally, the BMDa values decreased over time. The lowest BMDa values were observed at 90\u00a0days for TBB, TCP, NDPA, and HZB, and 28\u00a0days for BB and MDA. The lowest BMDas across all time points by our calculations were: 4.9\u00a0mkd for TBB (hepatocyte hypertrophy), 1567\u00a0ppm for NDPA , 55\u00a0ppm for HZB (bile duct duplication), 47\u00a0mkd for BB (increase in absolute liver weight), 4.5\u00a0mkd for TCP (hepatocyte hypertrophy), and 43\u00a0ppm for MDA (follicular cell hypertrophy). The lowest BMDas across all time points for TBB, NDPA, and HZB were similar to the NOAEL values reported in previous studies , which is an important factor for NOAEL and LOAEL, is not considered in BMDa derivation. Thus, we decided to compare BMDts derived from transcriptional data to NOAEL, LOAEL, as well as the lowest BMDa at the matched time point and the lowest BMDa overall of all of the apical data.BMD(L)gy Table\u00a0. For eacrve EFSA ], BMDa ap\u00a0\u2264\u00a00.05) at each time point generally increased in a dose-dependent manner, but did not appear to be time dependent .Eight approaches to selecting gene and molecular pathway BMDts for each approach were visualized by box and whisker plots , NOAEL and LOAEL values for apical endpoints measured in the rodents in this study, and the lowest BMDas for these animals. 
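The three criteria for judging whether a transcriptional BMD is an effective surrogate for an apical POD can be expressed as a simple predicate; the function name and input values below are illustrative, not from the study:

```python
def meets_pod_criteria(bmd_t, apical_pod, pearson_p, lr_p):
    """Apply the three criteria from the text:
    (1) BMDt within threefold of the apical POD;
    (2) Pearson's correlation significant (p < 0.05);
    (3) slope not significantly different from the 1:1 line
        (likelihood ratio test p > 0.05)."""
    within_threefold = bmd_t / apical_pod < 3.0
    correlated = pearson_p < 0.05
    near_unity_slope = lr_p > 0.05
    return within_threefold and correlated and near_unity_slope

# Hypothetical values for one approach/chemical combination.
ok = meets_pod_criteria(bmd_t=9.0, apical_pod=4.5, pearson_p=0.01, lr_p=0.40)
```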
Visual inspection of Fig.\u00a0ts and ranges for each of the approaches despite being drawn from, in many cases, very different gene lists. Approach 9 (the significantly enriched pathway with the lowest BMDt) generally produces the lowest BMDts, but also appears to have the broadest interquartile ranges, suggesting a higher degree of variability and uncertainty in applying this approach. In contrast, Approaches 10 and 11 yield BMDts that are somewhat higher (with tighter distributions), but are remarkably similar to the majority of the other approaches, despite representing the entire selection of pathway BMDts rather than the most statistically significant, or lowest BMDts. It was not possible to apply Approach 1 or 2 to HZB at the 5-day time point because there were no pathways that were significantly enriched in IPA, indicating a limitation of this approach. Approach 4, based on the 20 genes with the greatest fold changes, tended to have lower BMDts than the other approaches. We note that coefficient of variation (CV) values were below 0.2 for all approaches except 9, which indicates relatively low dispersion of data points around the mean BMDt of these approaches. Overall, visual inspection suggests that the majority of the approaches yield comparable BMDt values, within tenfold of the corresponding BMDa, that are largely consistent with the various PODs derived from apical endpoint analysis. Below we explore the relationship of the BMDts derived in each of the approaches to BMDas in more detail.Distributions of BMDts Figs.\u00a0, S4. Figts derived from each approach were divided by the POD values for each chemical to explore the relationship between transcriptomic-based PODs derived from each approach to apical PODs , and the BMDa at day 5 is always greater than the lowest BMDa across all times as expected . 
We find that the BMDts across all time points are somewhat higher than the lowest BMDa from the target tissue across all time points, but, nonetheless, are generally still within tenfold.The BMDDs Figs.\u00a0, S5\u2013S8. AEL Fig.\u00a0a, b. We cts Fig.\u00a0a, b. It ts derived from each of the approaches and the log-transformed apical POD values were also calculated to determine coefficient correlations (r) and linear significance in order to determine the extent of correlation between transcriptional and apical data. Figure S11 shows the 5-day time point for this correlation analysis; the other time points are shown in Figures S12, S13, and S14; Tables\u00a0ts for the 11 approaches was tested to assess whether it approached a 1:1 relationship using the likelihood ratio test were within the range of 0.54\u20130.99 across the dataset . Assessment of the BMD(L)t values derived from the 11 approaches clearly showed that the transcriptional data were closely aligned with LOAELs, followed by NOAELs, and then the time-point-matched lowest BMDas. The linear regression models for most of the transcriptional approaches were not significantly different from a 1:1 relationship with the apical data.The correlation between log-transformed BMD(L)We applied three criteria to assess the effectiveness of the approaches in identifying a relevant POD. For simplicity, because of the size of the dataset, we have focussed on presenting the results from the 5-day time point for each chemical in more detail and place these findings in the context of the other time points ; analysis of TCP using Approaches 3, 10, and 11 yielded BMDts that were greater than tenfold the NOAEL were smaller than 3. Analysis of the average of this ratio across all of the chemicals for every approach revealed that the BMDt mean derived using Approach 4 was closest to the NOAEL was found line . 
Approaches 10 and 11 were again marginally higher than the other approaches for a few data points, and Approaches 4 and 9 were generally closest to the NOAEL. The specific number of data points within threefold at 14, 28, and 90\u00a0days were 48/66, 45/66, and 47/66, respectively. Average correlation coefficients for all approaches in the 14, 28, and 90\u00a0day datasets were 0.85, 0.82, and 0.77, respectively, similar to average r values for 5-day time points (r\u00a0=\u00a00.83). Based on our three criteria, Approaches 1, 4, 5, 6, and 7 at 14\u00a0days, Approaches 1, 7, and 9 at 28\u00a0days, and Approach 4 at 90\u00a0days were effective in predicting NOAELs.The results for the other time points were largely consistent with the results at 5\u00a0days. The majority of BMDt mean values were closer to the NOAEL than the BMDt values . All of the approaches in all time points yielded BMDLts within tenfold of the NOAEL, and the averages of the BMDLt/NOAEL ratios for the six chemicals were <3 in all approaches. The number of data points within threefold of their associated NOAELs in the 5-, 14-, 28-, and 90-day time point datasets was 42/64, 51/66, 42/66, and 49/66, respectively . BMDLts derived from all approaches were very highly correlated with their apical endpoint NOAELs in 5-, 14-, 28-, and 90-day time point datasets, and average r values for all approaches were 0.82, 0.84, 0.80, and 0.76, respectively . In general, all approaches except 3, 9, 10, and 11 were significantly correlated with the LOAEL (Pearson\u2019s correlation). The largest and smallest correlation coefficients were found for Approach 2 and Approach 9 , respectively . All of the BMDt values for all approaches were within tenfold of the LOAEL , and only 10 of the 64 data points were outside of the threefold range . All of the data points from Approaches 1, 2, 5, 6, and 7 were within threefold of the LOAEL. 
All approaches were highly correlated with the LOAEL and 9 had the highest and lowest correlation coefficients with apical data. The BMDLt values for Approaches 2, 3, 5, 6, 7, and 8 were closest to the LOAEL . The likelihood ratio test results showed no significant difference from the 1:1 line for any of the approaches in the 5-day datasets were within tenfold of the LOAEL, and there were 11/64, 8/66, and 18/66 data points outside the threefold range in the 14-, 28-, and 90-day time points, respectively. The BMDLts values for all approaches and time points were generally highly correlated with the LOAELs. Out of the 11 approaches, BMDLts from seven, nine, and three of the approaches in the 14-, 28-, and 90-day time point datasets, respectively, were significantly (p\u00a0<\u00a00.05) correlated with their respective LOAELs. The maximum and minimum r values were 0.90 and 0.72 at 14\u00a0days, 0.87 and 0.69 at 28\u00a0days, and 0.87 and 0.57 at 90\u00a0days (Tables S27\u2013S29). No significant difference from the 1:1 line (BMDt vs. LOAEL) was found, with the exception of Approach 4 (14\u00a0days) and Approaches 4 and 9 (90\u00a0days). Overall, seven, seven, nine, and two of the approaches in the 5, 14, 28, and 90\u00a0day datasets (respectively) met the three criteria and thus may be recommended for predicting the LOAEL , were within threefold of the lowest BMDa at the 5-day time point for Approaches 10 and 11, and 0.81 (p\u00a0=\u00a00.1) for Approach 9 that gene alterations and apical effects occurred at similar doses on day 5; however, at later time points, time-matched apical responses generally occurred at somewhat lower doses than transcriptional responses (Tables S27\u2013S29). However, we note that the vast majority of the BMDts even at the 90-day time point were within tenfold of apical PODs, suggesting their utility even at 90\u00a0days. Similar to 5\u00a0days, all approaches except 9, 10, and 11 meet the criteria for day 14 datasets. 
However, only Approach 4 met all three criteria at the 28- and 90-day time points . The overwhelming majority of BMDt for prediction of BMDas. Using this approach, a strong correlation (r\u00a0>\u00a00.90) was found between BMDts and the lowest time-point-matched BMDa values for adverse apical endpoints at 5-, 14-, 28-, and 90-day time points for these same six chemicals ; and (2) to remove pathways that were not significantly enriched following BMDt analysis . We directly compared our BMDts to those published in Thomas et al. , but not at 90\u00a0days (r\u00a0=\u00a00.61). These results suggest that addition of a Fisher\u2019s exact test to this Approach did not improve relationships with BMDas. Indeed, the linear relationship between BMDts derived from the lowest transcriptional pathway and its time-point-matched apical endpoint decreased due to this additional filtering. Because the lowest pathway BMDt may in certain cases be based on only a handful of genes (depending on the minimum number of genes required in a pathway for BMDt consideration that is applied), we felt this additional filtering ensured a more robust and responsive set of genes for this application. Thus, despite the weaker correlation, we advise the application of this filter should this approach be applied.Thomas et al. , b propos et al. , b tended to be high. However, the majority of the BMDts were within tenfold of the lowest BMDas at the 5-day time point , with r\u00a0>\u00a00.86 . The results of the likelihood ratio test showed that at the 5-day time point there were no significant differences between the 1:1 line and transcriptional-to-apical lines for Approaches 3, 4, and 9 were 0.94 and 0.71 for 14\u00a0days, 0.94 and 0.84 for 28\u00a0days, and 0.76 and 0.63 for 90\u00a0days (Tables S27\u2013S29). 
There was no significant difference between a 1:1 line and the transcriptional-to-apical lines for Approach 9 (14\u00a0days), Approach 4 (28\u00a0days), and Approaches 1, 3, 4, 7, and 9 (90\u00a0days). However, only Approach 4 (28\u00a0days) met all three of our criteria for prediction of the lowest BMDa across all time points tended to overestimate POD as expected. Across the entire dataset, the BMDts were best at predicting the LOAEL and the lowest time-point-matched BMDa, whereas BMDLts were relatively equal in being effective predictors of LOAEL and NOAEL.Overall, we found that while the majority of the approaches were effective predictors of apical PODs at the 5-, 14-, and 28-day time point, only three approaches met our three criteria at the 90-day time point Table\u00a0. Approact- or BMDLt-derived for each approach against apical PODs were within tenfold of the tumor response. BMDt values derived from the 11 approaches for MDA and NDPA were within threefold at these time points . The average ratios of BMDt-to-cancer BMDa values at the 90-day time point for all approaches were <3. While more carcinogenic chemicals are required to calculate a correlation coefficient, based on the results derived from three chemicals in the current study the transcriptomic-derived BMDt values at 90\u00a0days were slightly closer to cancer BMDa values than other time points.Tumor responses in thyroid, bladder, and liver, respectively, were reported in rodents following MDA, NDPA, and HZB exposure NTP , b. Incip value and p value. A previous study from our laboratory investigated the effects of using statistically filtered data on gene and pathway BMDts, and more specifically compared FDR p value with ANOVA unadjusted p value for analyzing and pre-filtering data. Significant genes were selected based on: (1) fold change and corrected p value cutoff using MAANOVA in Approaches 1, 2, 4, 5, 6, 7, and 8; and (2) p value \u22640.05 using ANOVA in Approaches 3, 9, 10, and 11. 
Overall, BMD(L)t derived from our MAANOVA-filtered approaches, which applied an FDR correction and fold-change cutoff, were better at predicting apical PODs than those approaches analyzed using ANOVA , met the three criteria for predicting the most sensitive apical endpoint observed on day 5. However, no approach met the three criteria when analyzed against the lowest BMDa endpoint across all time points at 5\u00a0days.Based on the three criteria we described above, the BMD(L)t from groups of genes. We assessed the relationship between BMDts derived using these 11 approaches to PODs derived from apical data that might be used in a human health risk assessment. To evaluate the effectiveness of the approaches in predicting apical PODs, we used three criteria: (1) ratio of BMDt to apical endpoint POD <3; (2) correlation coefficient p value <0.05 for BMDt to apical POD; and (3) likelihood ratio test p value >0.05 for deviation from the 1:1 line for BMDt versus apical POD. We found a very high degree of concordance between all of the approaches for deriving BMD(L)ts and apical PODs. Generally, in our opinion, BMD(L)t values derived using the 11 approaches were remarkably aligned with different apical PODs that may be used in human health risk assessment. The vast majority of BMDts across all approaches were within tenfold of the various BMDas and were largely within threefold as well. In general, across the 5-, 14-, 28-, and 90-day time points, eight, eight, eleven, and three approaches met our three criteria, respectively, and thus qualify as effective estimates of apical PODs.We leveraged published Affymetrix DNA microarray data on well-designed time-series and dose\u2013response experiments in rats to evaluate 11 approaches to deriving a BMDConsistency in the PODs derived from transcriptional endpoints with those derived from standard toxicity endpoints increases confidence in the use of transcriptional PODs in human health risk assessment. 
However, the relevance of these specific gene expression perturbations to adverse effects is unclear, since they are not based on an MOA-centric approach. The approaches described assume that significant perturbations in gene expression in general may lead to an adverse outcome. Early thought in this field presumed that gene expression changes would occur at lower doses than adverse apical effects and thus would be overly conservative. In contrast, we demonstrate that mean transcriptional PODs from the approaches reviewed herein are generally higher than apical PODs, suggesting that this is not the case. Indeed, the mean transcriptional BMDs differ from the corresponding apical endpoints by less than 1.5-fold for matched time points. Although additional studies using chemicals targeting different types of adverse effects are required to validate our findings, our results suggest that transcriptional response can be used as an efficient alternative approach for POD selection in chemical risk assessment. Transcriptional PODs were furthest from apical PODs at the 90-day time point, suggesting that some dampening of transcriptional response may be occurring at later time points, and supporting the use of earlier time points to identify doses that significantly impact transcriptional profiles along a trajectory toward disease. Our results indicate that transcriptionally derived POD estimates from a short-term study are consistently within tenfold of PODs derived from apical endpoints from longer term studies. We also support previous findings that a more conservative statistical filter yields transcriptional PODs that are more aligned with apical PODs. 
However, our results suggest that any of the proposed approaches should produce transcriptional PODs that are within tenfold of the non-cancer and cancer apical BMDs and thus may be considered for risk assessment."}
+{"text": "RAF\u2010MEK\u2010ERK signalling pathway controls fundamental, often opposing cellular processes such as proliferation and apoptosis. Signal duration has been identified to play a decisive role in these cell fate decisions. However, it remains unclear how the different early and late responding gene expression modules can discriminate short and long signals. We obtained both protein phosphorylation and gene expression time course data from HEK293 cells carrying an inducible construct of the proto\u2010oncogene RAF. By mathematical modelling, we identified a new gene expression module of immediate\u2013late genes (ILGs) distinct in gene expression dynamics and function. We find that mRNA longevity enables these ILGs to respond late and thus translate ERK signal duration into response amplitude. Despite their late response, their GC\u2010rich promoter structure suggested and metabolic labelling with 4SU confirmed that transcription of ILGs is induced immediately. A comparative analysis shows that the principle of duration decoding is conserved in PC12 cells and MCF7 cells, two paradigm cell systems for ERK signal duration. Altogether, our findings suggest that ILGs function as a gene expression module to decode ERK signal duration.The We demonstrate that long mRNA half\u2010lives dominate mRNA dynamics of ILGs and that this characteristic intrinsically enables them to respond late and to decode signal duration at the same time, without any need for additional regulation. This is in contrast to short\u2010lived IEGs, which do not decode but relay signal duration, postponing the task of duration decoding. The principle of signal duration being translated into response amplitude is conserved in rat PC12 cells and human MCF7 cells, two cell systems which serve as paradigm models for cell fate decisions based on signal duration. In general, mRNA half\u2010life is a strong predictor for response dynamics in these systems. 
Gene term enrichment analysis furthermore proposes a potential role of ILGs in positive regulation of apoptosis. As IEGs are found to be involved in negative regulation of apoptosis, we speculate that the two opposing modules together could serve as\u00a0a fail\u2010safe mechanism upon prolonged versus transient ERK signalling.et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, Signals received at the cell surface propagate through a network of signalling proteins cells constitutively expressing an inducible form of RAF upon constant exposure to 4OHT consistently reach at least 50% of their response amplitude in all simulated input scenarios , when we focus on quantitative aspects of mRNA expression.It is important to note that response amplitudes are presented as relative values normalised to steady\u2010state expression. Such normalised values ease the comparison of the timing between different genes during their transition from one steady state to another. At the same time however, this representation cannot reflect absolute changes in mRNA concentration. Hence, we present relative changes in expression (noted as amplitude [%]) when describing the relation between mRNA half\u2010life and signal duration decoding and absolute changes in expression and gene expression data obtained from Affymetrix Human Gene 1.0 ST microarray time course experiments Fig\u00a0A.2 fold change of 3.89\u00a0\u00b1\u00a00.42 =1fort>0 and incorporated it into our extended model of gene expression. We then fitted basal transcription rate k0, pERK\u2010dependent transcription rate k, degradation rate \u03b3 and transcriptional delay \u0394t for each of the 102 significantly induced primary response genes considering an error model to account for expression level\u2010dependent variance . 
For the remaining genes and for genes with \u0394t\u00a0<\u00a030\u00a0min , half\u2010life estimates (t1/2=ln(2)/\u03b3) were based on the simplified model. All fitted parameter values are listed in Across all treatment durations for the ON condition, pERK was reliably induced with mean logr, which was calculated as the sum of model\u2010derived half\u2010life and transcriptional delay were classified as DEGs with \u0394t\u00a0>\u00a030\u00a0min and median response time of 160\u00a0min. Lastly, we identified 27 immediate\u2013late genes with median response time of 204\u00a0min and half\u2010lives greater 120\u00a0min.Genes were ranked according to their model\u2010derived response time DUSP1) to 117\u00a0min (EGR2). Both DEGs and ILGs subsequently responded after several hours (2\u201348\u00a0h) with few exceptions . DEGs showed median half\u2010life of 70\u00a0min and median transcriptional delay of 102\u00a0min. Responding ILGs showed half\u2010lives ranging from 124\u00a0min (DUSP6) up to 561\u00a0min (QSOX1). For one DEG (PPP1R15A) and three ILGs , model\u2010derived response times were >10\u00a0h, the time span covered in the experiment. All summarising values assigned to particular gene clusters like here need to be considered bearing in mind the continuous nature of gene expression parameters apparent in our analysis and FGF caused attenuated but sustained ERK activation (mean log2 fold change: 2.83\u00a0\u00b1\u00a00.50) \u2010mediated transcriptional shutdown and upon metabolic labelling of RNA with 4\u2010thiouridine (4SU) and longer mRNA half\u2010lives for ILGs , whereas estimates based on modelling of gene induction had suggested shorter half\u2010lives for DEGs in range of IEGs (<\u00a0120\u00a0min). 
We therefore concluded that half\u2010lives of IEGs and ILGs can certainly be estimated from gene induction kinetics, whereas half\u2010lives of DEGs are more reliably determined in transcriptional shutdown experiments.et\u00a0al, et\u00a0al, et\u00a0al, P\u2010value\u00a0=\u00a01.4\u00a0\u00d7\u00a010\u22124) than GC content of DEG promoters and transient signalling (EGF treatment) in our transcriptome time course data .In contrast, response amplitude of ILGs was strongly linked to ERK signal duration. Upon sustained signalling, the majority of ILGs reached response amplitudes between 80 and 100%, similar to IEGs. However, upon two\u2010hour pulse signalling, the majority of ILGs reached response amplitudes only between 40 and 60%, and upon transient EGF\u2010mediated signalling, the majority of ILGs did not exceed 20% response amplitude. Thus, ILGs clearly distinguished sustained and short signalling by translating signal duration into response amplitude. Supporting our hypothesis that long mRNA half\u2010lives enable this decoding mechanism, we found that a fraction of long\u2010lived DEGs with model\u2010derived half\u2010lives greater 120\u00a0min was also capable of decoding ERK signal duration in a similar manner , respectively. Likewise, MCF7 cells undergo proliferation or commit to apoptosis when exposed to transient or sustained ERK signalling elicited by EGF or heregulin (HRG), respectively. We calculated amplitude ratios for significantly induced IEGs and ILGs in MCF7 and for homologues in PC12 using publicly available data sets , DEGs have been described as a module of negative feedback regulators . Among induced DEGs, we identified aforementioned RNA binder ZFP36, negative receptor feedback elements ERRFI1 and SPRY2, and other negative feedback regulators of protein kinase activity . 
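The link between mRNA half-life and duration decoding can be illustrated with the step-input solution of a one-step gene expression model, dm/dt = k0 + k*u(t) - gamma*m (transcriptional delay neglected): the induction reached after a pulse of a given duration, relative to the sustained-input steady state, is 1 - exp(-gamma * duration). The half-life values below are illustrative, not fitted values from the study:

```python
from math import exp, log

def relative_amplitude(duration, half_life):
    """Peak induction after a step input of the given duration, relative
    to the sustained-input steady state, for first-order mRNA turnover:
    1 - exp(-gamma * duration), with gamma = ln(2) / half_life.
    Short-lived mRNAs saturate quickly; long-lived mRNAs convert
    signal duration into response amplitude."""
    gamma = log(2) / half_life
    return 1.0 - exp(-gamma * duration)

# An IEG-like short half-life (20 min) vs an ILG-like long one (300 min),
# for a 120-min ERK pulse (values illustrative).
ieg = relative_amplitude(120, 20)    # near full amplitude
ilg = relative_amplitude(120, 300)   # well below full amplitude
```

The short-lived transcript reaches nearly 100% of its sustained-signal amplitude within the pulse, while the long-lived transcript reaches only a fraction of it, mirroring the qualitative behaviour of IEGs versus ILGs described above.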
Moreover, the top upregulated DEG was tumour suppressor tissue factor pathway inhibitor 2 (TFPI2).To check whether our classification of different PRG subclasses is consistent with known functional annotations, we performed Gene Ontology (GO) term enrichment analysis Fig\u00a0A. In accHaving confirmed the different functional roles of IEGs and DEGs, we moved on to functionally characterise our newly defined gene cluster of ILGs. Strikingly, GO term enrichment analysis suggested a distinct role of ILGs in positive regulation of apoptosis, putatively opposing involvement of IEGs in negative regulation of apoptosis. This finding suggests that the capability of ILGs to decode ERK signal duration might serve as a potential fail\u2010safe mechanism to control aberrant ERK signalling, as these positive regulators of apoptosis only come into play, when ERK is activated in a prolonged fashion.et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, In general, it has been shown that RAF\u2010MEK\u2010ERK signalling is involved in positive and negative regulation of both intrinsic and extrinsic (receptor) pathway of apoptosis are enriched for genes that are involved in positive regulation of apoptosis, which is the cell fate for sustained ERK signalling in our model system, suggesting that the mRNA half\u2010life is important to functionally decode signal duration. Very recently, the idea that mRNA half\u2010life is involved in signal decoding has been shown for signal frequency decoding of p53 signalling . Over the last years, several mathematical frameworks with varying degree of complexity have been presented to estimate the contribution of processing, transcription and degradation rates from measured RNA dynamics as described previously . UT_2 was excluded due to strong dissimilarity to UT_1 and UT_3 in cluster analysis of correlation values and putative contamination. 
Log2 fold changes for independently obtained EGF, and FGF time course data were calculated with respect to mean expression in corresponding untreated samples . To account for expression level\u2010dependent variations, an empirical null model was based on replicates for 2\u2010h 4OHT treatment. For this, transcripts were ranked by their mean expression across replicates and a moving average with window size k\u00a0=\u00a02,000 was calculated to serve as an expected variance measure for a given expression level. Z\u2010scores for each transcript pi in each sample j were calculated accordingly:Fluorescence intensities from scanned microarrays were processed and analysed in R. Background correction, quantile normalisation, probe set summarisation and logz\u2010score of 5.6 in 4OHT time course data were considered regulated . This corresponded to an average false discovery rate (FDR) of 1% in 4OHT time course data. Here, false positives were estimated by counting transcripts detected differentially expressed between one replicate and the mean of the two other replicates of the 2\u2010h 4OHT treatment samples. For all downstream analyses, 4OHT\u2010regulated genes were further filtered in two steps. First, regulated genes were tested against a random set of unregulated genes (of the same size) for their log2 fold change standard deviation (SDlog2fc) across all samples. This was done to filter out a large fraction of erroneously detected genes, which were unaltered across all samples when the untreated condition was left out. Here, a SDlog2fc cut\u2010off was defined at FDR of 5% . Secondly, genes induced in a non\u2010monotonic fashion that could not be fitted to our one\u2010step model were excluded from the analysis. All remaining genes are referred to as differentially expressed in the main text.Genes exceeding an absolute et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, et\u00a0al, Total RNA was extracted with TRIzol. 
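The empirical null model described above can be sketched as follows; for illustration the moving-average window is reduced from 2,000 to 5, replicate standard deviations stand in for the variance measure, and all input values are hypothetical. Transcripts are assumed pre-sorted by mean expression:

```python
from statistics import mean

def empirical_z_scores(log2fc, replicate_sd, window=5):
    """Sketch of the expression-level-dependent null model: smooth the
    replicate SD with a centered moving average over transcripts ranked
    by mean expression, then divide each log2 fold change by the
    smoothed SD expected at its expression level."""
    half = window // 2
    smoothed = [
        mean(replicate_sd[max(0, i - half): i + half + 1])
        for i in range(len(replicate_sd))
    ]
    return [fc / s for fc, s in zip(log2fc, smoothed)]

# Hypothetical values for eight transcripts sorted by expression.
z = empirical_z_scores(
    log2fc=[0.1, 2.4, -0.2, 0.3, 1.8, -0.1, 0.2, 3.0],
    replicate_sd=[0.5, 0.45, 0.4, 0.35, 0.3, 0.3, 0.25, 0.25],
)
```

Transcripts whose fold change greatly exceeds the replicate variability expected at their expression level receive large z-scores and would pass the cut-off described in the text.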
Labelled and unlabelled fractions were separated as described previously. Differentially expressed genes meeting the criteria described in the main text were considered primary response genes. Gene expression data were fitted to the complete and the simple model for mRNA dynamics as described in the main text using the Nelder-Mead method implemented in the R package optimx. For a given gene, relative amplitude was deduced from the gene-wise parameter estimates of k0, k and γ. To obtain semi-quantitative log2 fold change predictions for growth factor-induced gene expression, gene-wise fitted model parameters and input functions for the ERK-dependent transcription rate k were fed to the complete model. Long-lived transcripts (t1/2 > 16 h) were consistently identified in two published data sets on human mRNA half-life. Half-life estimates based on ActD-mediated transcriptional shutdown were derived from microarray gene expression time course data (Fig A) by fitting M(t) = M0·e^(−γt) to infer decay rates γ. Half-life estimates based on metabolic labelling with 4SU followed by RNA sequencing were calculated using all three fractions of RNA, that is, total RNA, labelled RNA (eluate) and unlabelled RNA (flow-through). Dynamic transcriptome analysis (DTA) was used for quantification, and the estimates are provided as supplementary data. Published time course expression raw data on PC12 cells (Offermann et al) were also analysed. cDNA was synthesised using the High-Capacity RNA-to-cDNA Kit (Applied Biosystems #4387406).
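The exponential-decay fit used for the ActD-based half-life estimates, M(t) = M0·e^(−γt), reduces to linear regression on log-transformed abundances; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def fit_decay_rate(t, m):
    """Fit M(t) = M0 * exp(-gamma * t) by least squares on log(M).

    t: time points (e.g. hours) after transcriptional shutdown (ActD).
    m: measured mRNA abundances at those times (must be > 0).
    Returns (gamma, half_life). Illustrative sketch, not the authors' code.
    """
    slope, log_m0 = np.polyfit(t, np.log(m), 1)
    gamma = -slope                      # slope of log(M) vs t is -gamma
    return gamma, np.log(2) / gamma     # t1/2 = ln(2) / gamma
```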
qRT-PCR was performed using Taqman gene expression assays (Thermo Fisher #4304437) with the following Taqman primers (Thermo Fisher): Hs01045540_g1 (ARC), Hs00156548_m1 (CLU), Hs00610256_g1 (DUSP1), Hs01044001_m1 (DUSP6), Hs00152928_m1 (EGR1), Hs00166165_m1 (EGR2), Hs00170630_m1 (FOS), Hs00171851_m1 (FOSB), Hs04187685_m1 (FOSL1), Hs00357891_s1 (JUNB), Hs00374226_m1 (NR4A1), Hs00943178_g1 (PGK1), Hs00169585_m1 (PPP1R15A), Hs00153133_m1 (PTGS2), Hs04334126_m1 (TFPI2), Hs00959047_g1 (TNFRSF12A), Hs00381614_m1 (ZCCHC12), Hs00185658_m1 (ZFP36). Protein was extracted using Bio-Rad Cell Lysis Buffer (#171-304006M). Concentration was determined using the Thermo Fisher Pierce BCA Protein Assay (#23228). 25-50 μg of purified protein was used for blotting. Images were acquired using a Li-Cor Odyssey Scanner. Western blot antibodies were as follows: EGR1 (Santa Cruz sc-110), FOS, CLU (Santa Cruz sc-8354), FOSL1 (Santa Cruz sc-376148). For flow cytometry, cells were harvested 48 h after treatment and fixed in 2% paraformaldehyde (PFA) for 10 min at RT. Cells were permeabilised in methanol and incubated on ice for 30 min. For immunostaining, cells were incubated for 1 h with Cleaved Caspase-3 rabbit mAb. Both microarray gene expression data and metabolic labelling RNA-Seq data are accessible from the Gene Expression Omnibus (GEO) under SuperSeries accession number GSE93611. FU performed computational analyses. AS, EW, FU and RF-G conducted experiments. FU and NB wrote the manuscript with input from AS, EW, RF-G, JM, ML and BK.
FU, BK and NB designed the experiments. The authors declare that they have no conflict of interest."}
+{"text": "A common first step in marker-gene analysis is grouping genes into clusters to reduce data sets to a more manageable size and potentially mitigate the effects of sequencing error. Instead of clustering based on sequence identity, marker-gene data sets collected over time can be clustered based on temporal correlation to reveal ecologically meaningful associations. We present Ananke, a free and open-source algorithm and software package that complements existing sequence-identity-based clustering approaches by clustering marker-gene data based on time-series profiles and provides interactive visualization of clusters, including highlighting of internal OTU inconsistencies. Ananke is able to cluster distinct temporal patterns from simulations of multiple ecological patterns, such as periodic seasonal dynamics and organism appearances/disappearances. We apply our algorithm to two longitudinal marker gene data sets: faecal communities from the human gut of an individual sampled over one year, and communities from a freshwater lake sampled over eleven years. Within the gut, the segregation of the bacterial community around a food-poisoning event was immediately clear. In the freshwater lake, we found that high sequence identity between marker genes does not guarantee similar temporal dynamics, and Ananke time-series clusters revealed patterns obscured by clustering based on sequence identity or taxonomy. Ananke is free and open-source software (http://github.com/beiko-lab/ananke). Phylogenetic marker-gene sequencing has revolutionized our understanding of microbial ecology. Nearly every conceivable habitat has been profiled using markers such as the 16S ribosomal RNA (rRNA) gene. These studies have revealed a hitherto unappreciated degree of diversity among both well-studied and novel microorganisms.
The large amount of data generated in microbial marker-gene surveys can present a significant impediment to analysis; a single data set can contain millions of unique sequences, including real variants and products of sequencing error. Clustering methods are often used to reduce the magnitude of the data and minimize the impact of sequencing errors. Traditionally, the most common clustering approach is to merge sequences into operational taxonomic units (OTUs) at a pre-defined sequence-identity threshold, often 97%. Methods that construct clusters based on attributes more closely linked to ecological properties can overcome the limitations of sequence-identity-based OTUs while retaining the benefits of clustering. For example, distribution-based clustering has been used to split OTUs when the member sequences have distinct distributions across samples, minimizing inappropriate aggregation. Ananke requires only the sequence data and time points as input. The sequence data can be any FASTA-formatted data, including but not limited to 16S rRNA gene amplicon sequences. Sequences can be preprocessed beforehand with users' preferred methods. The time point data is a metadata file that relates the sample names to their relative sampling time. Sequence counts are stored as an m × n time-series matrix where m is the number of unique sequences and n is the number of time points. To reduce space on disk and in memory, this data is stored in compressed sparse row format in an HDF5 file. Ananke uses the short time-series (STS) distance to compute pairwise distances between time-series profiles, which takes uneven sampling intervals into account; however, other measures such as Euclidean and Manhattan distances are available to use. To control for data compositionality within each sample, users can select an optional centered log-ratio transform with count zero multiplicative zero imputation before distance calculations, using the methods from the CoDaSeq R library. Clustering with DBSCAN requires two parameters, min_samples and ϵ.
The min_samples parameter is set to 2 to prevent singletons from forming their own Ananke TSCs, and instead places them into the "noise bin" which contains all unclustered singleton sequences. These "noise" sequences are those that share no similar temporal dynamics, at a given ϵ value, with any other sequence, and as such both rare and highly abundant sequences can be labeled as noise. Additionally, rare sequences that appear in only one time point, if not filtered out by an abundance filter, will form TSCs with all other sequences that appear only at that time point. As ϵ is increased, fewer sequences will be labeled as noise, but some TSCs will grow too large in size to be useful. Ananke allows for interactive exploration of the parameter space by pre-computing results over a range of ϵ values. Run times and memory usage for the various steps in the Ananke computational pipeline are also reported. The unique sequence pairwise STS distance matrix is clustered into Ananke TSCs by the DBSCAN algorithm, and the user can then vary ϵ in the browser-based application. We recommend that users begin exploring at the ϵ that provides the largest number of TSCs, and therefore the greatest separation of sequences. The user interface presents the taxonomic classifications and sequence-identity-based OTU assignments for each unique sequence in an Ananke TSC, allowing users to compare different clustering methods. The interface allows the user to explore their sequence-identity-based clusters with the constituent unique sequences coloured by their time-series cluster membership. This allows temporal inconsistencies within an OTU to be identified at a glance. The Ananke-UI facilitates data exploration with an interactive application built with Shiny, a library for R. Clusterings were compared across ϵ parameter values using the adjusted mutual information (AMI) score; the higher the AMI, the better the agreement with the reference partition. Two biological time-series data sets were analyzed using Ananke. The first data set, faecal samples from the human gut of an individual, was retrieved through EBI under accession PRJEB6518.
The second data set is comprised of 96 time points from an eleven-year time series of Lake Mendota in Wisconsin, USA. Sequences and metadata were retrieved through EBI under accession PRJEB14911. For both data sets, Ananke TSCs were computed over a parameter range of ϵ = 0.01 to ϵ = 1 with a step size of 0.01. For comparison with sequence-identity clustering, sequences were clustered into 97% OTUs using the UPARSE pipeline, and taxonomic classification was performed via QIIME, with the freshwater sequences classified using TaxAss (http://www.github.com/McMahonLab/TaxAss). The Ananke software, which includes the Python-based clustering algorithm, the R- and Shiny-based visualization platform, and associated documentation, is available on GitHub (http://github.com/beiko-lab/ananke and http://github.com/beiko-lab/ananke-ui). Scripts for reproducing the analyses, including data retrieval, sequence-identity clustering, taxonomic classification, and the Ananke pipeline, are available at https://github.com/mwhall/Ananke_PeerJ. Ananke HDF5 data files for the lake and stool data sets are available on figshare (doi: 10.6084/m9.figshare.c.3707938.v1). Ananke groups unique sequences with similar temporal profiles into clusters. DBSCAN takes a density parameter ϵ, rather than a parameter that prespecifies the number of desired clusters, which other common clustering methods require. This is a more intuitive parameterization that is similar to sequence-identity clustering, as ϵ controls the granularity of the clusters. A smaller ϵ value implies clusters of sequences with more similar temporal profiles, whereas a larger ϵ would combine sequences with more disparate patterns. The STS distance measure was designed for sampling schemes that are uneven and contain relatively few time points. Assessing cluster quality in a biological data set is a difficult task since no ground truth exists for comparison.
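The clustering core described above, STS distances between per-interval slope profiles followed by DBSCAN with min_samples = 2 so that singletons fall into the noise bin, can be sketched as follows (an illustrative reimplementation, not Ananke's actual code):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def sts_distance_matrix(profiles, times):
    """Pairwise short time-series (STS) distance: Euclidean distance
    between the vectors of per-interval slopes of each profile.

    profiles: (n_sequences, n_timepoints) abundance matrix.
    times:    (n_timepoints,) sampling times (may be unevenly spaced).
    Illustrative sketch of the approach described in the text.
    """
    dt = np.diff(times)                       # interval lengths
    slopes = np.diff(profiles, axis=1) / dt   # per-interval slopes
    diff = slopes[:, None, :] - slopes[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def ananke_like_clusters(profiles, times, eps):
    """Cluster time-series profiles with DBSCAN (min_samples=2), so
    singleton profiles receive the noise label -1, as in the text."""
    d = sts_distance_matrix(profiles, times)
    return DBSCAN(eps=eps, min_samples=2, metric="precomputed").fit_predict(d)
```

Sweeping `eps` over a grid (e.g. 0.01 to 1 in steps of 0.01, as in the text) and storing each labelling would reproduce the pre-computed parameter exploration that the Ananke-UI exposes.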
To assess Ananke's cluster quality, we generated six artificial patterns of temporal variation that represent ecological events or patterns that users may wish to identify in a biological data set, including organism appearances and disappearances. Ananke yielded average AMI scores >0.8 on simulated time-series data sets with as few as ten time points. The majority of the simulations flagged low-variance and high-variance stationary time-series profiles as noise, or placed these two patterns into the same TSC, which prevented Ananke from achieving higher AMI scores. Ananke's algorithm has trouble clustering stationary time-series profiles because they lack large slopes to influence the STS distance measure. The STS distance measure does not provide enough information to separate the low-variance from the high-variance stationary temporal patterns since there are no consistently present large slopes. Ananke's current focus is on the detection of distinct ecological patterns such as appearance, disappearance, and conditional rarity, but future incorporation of the overall variance of temporal profiles in addition to shared slope would allow Ananke to also handle stationary profiles. For the gut data set, clusters were explored at ϵ = 0.1, with an average Ananke TSC comprising 0.5% of the data set; the largest TSC contained 124,823 total sequences and 15 unique sequences, including Haemophilus parainfluenzae. The remaining sequences belonged to various taxonomic groups including the genera Leuconostoc, Weissella, Lactococcus, and Turicibacter from the class Bacilli; Clostridium and Veillonella from the class Clostridia, including the known pathogen C. perfringens; and several sequences from the genus Acinetobacter. The sampled subject experienced food poisoning; in the Ananke TSCs spanning this period (163–240), the segregation of the community around the disturbance was immediately clear. Some Ananke TSCs tracked this disturbance, while others did not. At ϵ = 0.16, the two OTUs (6 and 505) have their most abundant sequence decrease in relative abundance to below detection.
A second sequence variant is introduced and persists until the food poisoning event around day 59, when it sees a decrease in relative abundance. In both cases, a third sequence variant appears. In OTU 6, classified to Dorea, this third variant does not persist and the second variant reappears. In OTU 505, classified to Ruminococcus gnavus, the third variant persists, even alongside the second variant as it reappears. An OTU-based approach risks aggregating these sequence variants, thereby obscuring this subtle transition. Compared with OTU methods, Ananke provides an alternate, higher-resolution method to highlight both clear and subtle partitioning of the profiles with respect to time. Ananke highlighted several smaller changes in the community in addition to the changes associated with the food-poisoning disturbance. For the freshwater lake data, clusters were explored at ϵ = 0.09, with an average TSC comprising 0.2% of the data set; the largest TSC contained 56,493 total sequences and 31 unique sequences. Taxa were labelled with the freshwater-specific nomenclature, where the taxa levels lineage, clade, and tribe approximate the Linnaean family, genus, and species. The Bacteroidetes lineage bacI is known to prefer high dissolved organic carbon, which often occurs during cyanobacterial or algal blooms. Supplemental figures describe the time-series clusters for the faecal sample data (pre-processed with DADA2 denoising) and for the freshwater lake data: for each, (A) the number of time-series clusters as a function of the clustering parameter ϵ, (B) the proportion of sequences in the "noise bin" as a function of ϵ, and (C, D) the distribution of time-series cluster sizes (in log10 number of unique sequences)."}
+{"text": "The analysis of RNA-Seq data from individual differentiating cells enables us to reconstruct the differentiation process and the degree of differentiation (in pseudo-time) of each cell. Such analyses can reveal detailed expression dynamics and functional relationships for differentiation. To further elucidate differentiation processes, more insight into gene regulatory networks is required. The pseudo-time can be regarded as time information and, therefore, single-cell RNA-Seq data are time-course data with high time resolution. Although time-course data are useful for inferring networks, conventional inference algorithms for such data suffer from high time complexity when the number of samples and genes is large. Therefore, a novel algorithm is necessary to infer networks from single-cell RNA-Seq during differentiation. In this study, we developed the novel and efficient algorithm SCODE to infer regulatory networks, based on ordinary differential equations. We applied SCODE to three single-cell RNA-Seq datasets and confirmed that SCODE can reconstruct observed expression dynamics. We evaluated SCODE by comparing its inferred networks with use of a DNaseI-footprint based network. The performance of SCODE was best for two of the datasets and nearly best for the remaining dataset. We also compared the runtimes and showed that the runtimes for SCODE are significantly shorter than for alternatives. Thus, our algorithm provides a promising approach for further single-cell differentiation analyses. The R source code of SCODE is available at https://github.com/hmatsu1226/SCODE. Supplementary information is available at Bioinformatics online. Conventional bulk RNA-Seq reveals the average gene expression of an ensemble of cells, and therefore does not permit the analysis of detailed states of individual cells.
With the advancement of single-cell RNA-Seq (scRNA-Seq), we can now quantify the expression of individual cells and analyze detailed differences among cells. Such analyses can reveal the key regulatory factors for lineage programming and enable network inference from scRNA-Seq data. Pseudo-time can be regarded as time information, and hence, scRNA-Seq performed on cells undergoing differentiation can be regarded as time-course expression data at a high temporal resolution. Although several algorithms have been proposed to reconstruct GRN from time-course data, most of them suffer from high time complexity when the number of samples and genes is large. Recently, Boolean network-based algorithms have been proposed for inferring GRN from single-cell data. Ordinary differential equations (ODEs) have also been used to describe regulatory networks and expression dynamics. ODEs can describe continuous variables over continuous time and the underlying physical phenomena, and therefore they are suitable for inferring GRN from scRNA-Seq during differentiation, although the ODE-based network-inference algorithms proposed previously are not designed for such data. Accordingly, we developed an approach to describe regulatory networks and expression dynamics with linear ODEs as well as a novel, highly efficient optimization algorithm, SCODE, for scRNA-Seq performed on differentiating cells by integrating the transformation of linear ODEs and linear regression. In the Methods section, we show that linear ODEs can be transformed from fixed-parameter linear ODEs if they satisfy a relational expression. We also show that the relational expression can be estimated analytically and efficiently by linear regression. In addition, SCODE uses a small number of factors to reconstruct expression dynamics, which results in a marked reduction of time complexity. In the Results sections, we describe the application of SCODE to three scRNA-Seq datasets during differentiation. First, we validated that the optimized ODEs can reconstruct observed expression dynamics accurately.
Second, we evaluated the inferred network by comparing it to a transcription factor (TF) regulatory network database based on DNaseI footprints and transcription factor binding motifs. SCODE performed best with two of the datasets and was the close second-best algorithm for the remaining dataset. Third, we compared the runtimes of the algorithms, and SCODE was significantly faster than a previous algorithm that was designed for time-course data. Moreover, SCODE is faster than some algorithms that do not use time parameters. These results illustrate the remarkable efficiency of SCODE. Lastly, we analyzed the network inferred from a dataset and determined that the de novo methyltransferases Dnmt3a and Dnmt3b might be key regulators of differentiation. In this paper, we propose a novel algorithm for scRNA-Seq performed on differentiating cells to reconstruct expression dynamics and infer regulatory networks with a highly efficient optimization method. We believe that our approach will substantially advance the development of regulatory network inference and promote the development of further single-cell differentiation analyses and bioinformatics methods. In this research, we focus on TFs and inferring TF regulatory networks. First, we describe TF expression dynamics throughout differentiation with linear ODEs, dx/dt = Ax, where x is a vector of length G (G is the number of TFs) that denotes the expression of TFs and A is a G × G matrix that denotes the regulatory network among TFs. We infer the TF regulatory network by optimizing A. The above model, and most of the algorithms which infer GRN from RNA-Seq, make no distinction between mRNA and corresponding TF protein levels, although several studies have shown poor correlation between mRNA and protein levels. With this model, the expression dynamics of each TF are described, and A can be recovered from a low-dimensional representation through a G × D matrix, for which we used a pseudo-inverse matrix. The basic idea of the reduction is that the patterns of expression dynamics are limited and expression dynamics can be reconstructed with a small number of patterns.
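The linear-ODE formulation dx/dt = Ax can be illustrated with a deliberately naive estimator that regresses finite-difference derivatives on expression; SCODE's actual optimization instead uses the low-dimensional factorization and random sampling described in the text, so all names here are illustrative:

```python
import numpy as np

def estimate_network(X, t):
    """Naive least-squares estimate of A in dx/dt = A x.

    X: (G, n_cells) expression of G TFs, columns ordered by pseudo-time t.
    t: (n_cells,) pseudo-times.
    A didactic stand-in for SCODE's optimization, which instead uses a
    low-dimensional factorization (D << G) and random sampling.
    """
    dX = np.diff(X, axis=1) / np.diff(t)          # finite-difference dx/dt
    Xmid = (X[:, 1:] + X[:, :-1]) / 2             # expression at interval midpoints
    # solve dX ~= A @ Xmid in the least-squares sense: A = dX @ pinv(Xmid)
    return dX @ np.linalg.pinv(Xmid)
```

With densely sampled pseudo-time, the recovered A approaches the generating matrix; the G × G regression here is exactly the step whose cost SCODE's D-dimensional factorization reduces.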
For the next step, we consider a small vector of length D (D ≪ G) to represent expression dynamics. Recently, such a dimensionality reduction approach has also been proposed to infer networks with Monocle. We analyzed three time-course scRNA-Seq datasets by the following procedures. First, transcripts per million reads (TPM) and fragments per kilobase of transcript per million mapped reads (FPKM) were transformed as log(TPM + 1) and log(FPKM + 1), and we regarded these log-transformed values as the expression value. Next, we calculated the averaged expression of each TF at each time point and calculated the variance of the averaged expression for each TF. For TF data, we used Riken TFdb for mouse. The first dataset analyzed was derived from primitive endoderm (PrE) cells differentiated from mouse ES cells (by using G6GR ES cells). The second dataset was derived from scRNA-Seq data obtained to examine direct reprogramming from mouse embryonic fibroblast (MEF) cells to myocytes at days 0, 2, 5 and 22. The third dataset was derived from definitive endoderm (DE) cells differentiated from human ES cells, containing 758 cells. For network validation, we used a TF regulatory network database (http://www.regulatorynetworks.org), which was constructed from DNaseI footprints and TF-binding motifs. For each D, we executed SCODE 100 times independently, and the first, second and third quantiles of the RSS values of test data were examined; the RSS values were almost saturated at D = 4. Our model was overfitted to the training data for large D. The corresponding first, second and third quantiles of correlation coefficients were also examined; at D = 4, the medians of the correlation coefficients are 0.71, 0.94 and 0.88 for each dataset. The medians tend to decrease for large D, and they also decrease for small D, possibly because the optimized model is then too simple to represent the dynamics. Because we used random sampling during optimization, we validated the reproducibility of the optimized results at D = 4, and we used D = 4 unless otherwise specified.
Because the RSS values for test data are almost saturated at D = 4, we regarded D = 4 as the genuine D and used the mean of the optimized parameters. The medians are 0.70, 0.71 and 0.91 for D = 4, and 0.61, 0.48 and 0.49 for D = 6. Therefore, SCODE can accurately infer the genuine D, and can roughly infer D values unless we set an extremely large or small D. Although the model fits well at D = 4, this does not necessarily mean that SCODE can successfully learn the dynamics. We therefore investigated whether the optimized ODE can accurately reconstruct observed expression dynamics to verify the optimization of SCODE. Initial values (t = 0) were set to the mean expression of 0-h or day 0 cells. At first, we compared the reconstructed dynamics with observed data in the principal component analysis (PCA) space, including marker genes (Sox2, Utf1, Epas1 and Foxq1) in Data1. Because the runtimes of Jump3 are significantly large for large numbers of cells, we used 25 cells at even intervals in the pseudo-time order as the data for Jump3. The AUC values of each method for each dataset were compared. For Data1 and Data2, the AUC values of SCODE are significantly larger than those of the other algorithms. This is because our model considers the dynamics of expression and fully uses time information. Although Jump3 is also designed for time-course expression data, its AUC values are not high. This is because Jump3 is not designed for scRNA-Seq conducted during differentiation, but is designed for multiple time-course data. This suggests the necessity of a novel computational algorithm designed for scRNA-Seq data. The performance of SCODE is second, but almost equal to the best performance, for Data3.
Given that the reconstructed path in PCA space is a little out of alignment for Data3, our model may not fully capture its dynamics. In summary, our algorithm can infer TF regulatory networks with high performance in comparison to other network inference algorithms, especially for Data1 and Data2. This result implies the importance of time parameters in network inference and the necessity of a novel network inference algorithm designed for scRNA-Seq data obtained during differentiation. Using the optimized model, we can also simulate, for example, whether GATA3, which is the known marker for the differentiation of Data3, positively regulates ZEB1, and such simulation-based analyses will be useful for many types of research, such as detection of drivers of differentiation. Thus, SCODE is useful not only for regulatory network inference, but also for various analyses using simulation, and therefore, our research is a promising computational tool for further single-cell sequence analyses."}
+{"text": "Aims/Introduction. Evidence has shown that the progression of disease is not a smooth change with time and conditions; rather, a critical transition point, denoted the predisease state, drives the state from normal to disease. Considering individual differences, this paper provides a sample-specific method that constructs an index from individual-specific dynamical network biomarkers (DNB), defined as an early warning index (EWI), for detecting the predisease state of an individual sample. Based on microarray data of influenza A disease, 144 genes are selected as DNB and the 7th time period is defined as the predisease state. In addition, functional analysis of the discovered DNB is consistent with experimental knowledge, which illustrates the effectiveness of our sample-specific method. A drastic change in complex biological processes has been shown in recent studies, after which the system shifts rapidly from one stable state to another. The earliest disease progression has been identified by using a single molecular biomarker. Differentially expressed genes (DEGs) were selected by Student's t-test. Genes in DEGs were clustered into 40 categories by using hierarchical clustering analysis. Then, 144 genes in a group which satisfied the three criteria of DNB identification proposed by Chen were selected as DNB. Therefore, based on individual-specific data, we can predict and identify whether a time period is in the predisease state by observing the variation of the EWI value combined with the three indicators. Based on rapidly advancing high-throughput technologies, we can obtain gene or protein expression at genome-wide scale with thousands of measurements of long-term dynamics. Considering individual differences, our study differs from methods that require multiple patient samples at each time period for detecting the predisease state; instead, we propose a sample-specific method.
The microarray gene expression data were downloaded from the National Center for Biotechnology Information's Gene Expression Omnibus (GEO) database (series accession number: GSE19392). The gene expression data set was generated using Affymetrix HT Human Genome U133A (HT_HG-U133A) microarrays, obtained from an experiment on primary human bronchial epithelial cells infected with the wild-type PR8 influenza virus (A/PR/8/34). In our study, 10 out of 20 samples are defined as the case group, collected from primary human bronchial epithelial cells infected with the wild-type PR8 influenza virus after 0.25 h, 0.5 h, 1 h, 1.5 h, 2 h, 4 h, 6 h, 8 h, 12 h, and 18 h, and the remaining 10 samples are defined as the control group, which underwent the same process but in the absence of virus. Moreover, 22277 probe sets are mapped to 13915 unique gene symbols in the influenza data set. The Student's t-test, which can evaluate the significance of genes with differential expression between the case group and the control group, is applied in the selection of DEGs. The p value computed by the t-test is used directly for the subsequent filtering analysis without multiple-testing correction. Only the genes with p < 0.05 are regarded as DEGs. A(t) = (A1(t),…,An(t))′ is an n-dimensional vector which represents observed values or molecule concentrations at time point t (for example, minutes, hours, or days). Parameter P indicates slowly changing factors such as genetic and epigenetic factors. P has slower dynamics than A(t) and is not taken into consideration in this study because it is unknown. f denotes general nonlinear functions of A(t). Studies have shown that the biological process of a complex disease can be divided into three parts. The state between the normal and disease states is a tipping point, called the predisease state.
The system will change dramatically when the phase of disease approaches this state. The following discrete-time state system of a living organism can be described by a nonlinear dynamical system equation: A(t + 1) = f(A(t); P). The observed values or molecule concentrations A(t) can be classified into two groups, namely, the case group and the control group, denoted as Acase(t) and Acontrol(t), respectively. Ancase(t) is the nth gene or protein expression of the case group, and mean(∑t Ancontrol(t)) and SD(∑t Ancontrol(t)) are the mean and standard deviation of the nth gene or protein expression over all time points in the control group. Due to the large differences in the expression values of various genes or proteins, the data are normalised as Ãn(t) = (Ancase(t) − mean(∑t Ancontrol(t))) / SD(∑t Ancontrol(t)), yielding an n × t normalisation matrix whose entries give the normalised expression of the nth gene or protein at time point t. To further filter for DNB, the DEGs selected by t-test are isolated from the normalised data, giving an r × t matrix (DEGs) and an (n − r) × t matrix (remaining genes), respectively. The optimal group containing q genes or proteins is separated from the DEGs by three criteria: (i) the average standard deviation (SD) of molecule concentrations inside the group drastically increases when approaching the predisease state; (ii) the average Pearson correlation coefficient (PCC) in absolute value of molecule concentrations inside the group drastically increases; and (iii) the average Pearson correlation coefficient in absolute value between molecule concentrations inside this group and those outside drastically decreases. Three indicators then characterise the identified group over time: (i) The average coefficient of variation (CV) of molecule concentrations at different time points measures fluctuation.
The CV value approaching the predisease state is higher than at other time points. (ii) The average value of absolute difference (DIF) in molecule concentrations inside the DNB drastically decreases when approaching the predisease state compared with the values at other time points. (iii) The average value of absolute difference between molecule concentrations inside the DNB and any outside the DNB (ODIF) is relatively higher when approaching the predisease state. Hence, the EWI at time point t can be constructed as EWIt = (CVt × ODIFt) / DIFt, so that it takes its largest value at the predisease state. In light of the characteristics of the predisease state, the time point with the largest value can be considered the predisease state; after this point, the disease progression of a living organism shifts rapidly from the normal to the disease state. All gene symbols without a correct correspondence were screened out, and probes of the same gene were combined by averaging, leaving 13915 genes. Based on these 13915 genes, Student's t-test was applied to calculate the p value of each gene by comparing its expression profile between case and control groups. We identified 264 genes with p < 0.05 as DEGs. Next, the 264 genes were classified by hierarchical clustering analysis into 40 categories. Analyzing all clusters, a group of 144 genes was identified as DNB, satisfying the three criteria of DNB identification. Among them, the values of average SD and average PCC in this group are 1.2585797 and 0.3047569, which are higher than those of other groups, and the average OPCC is relatively high. The early warning index for influenza A disease was then examined across all 10 time points.
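A sketch of the per-gene case/control normalisation and a composite index built from the three indicators (CV, DIF, ODIF). The CV·ODIF/DIF composite is one natural combination that peaks at the predisease state under the criteria above; the authors' exact formula is not fully recoverable from the text, and all names here are illustrative:

```python
import numpy as np

def normalise(case, control):
    """Z-normalise case expression against control mean/SD per gene.
    case, control: (n_genes, n_timepoints) arrays. Illustrative names."""
    mu = control.mean(axis=1, keepdims=True)
    sd = control.std(axis=1, ddof=1, keepdims=True)
    return (case - mu) / sd

def ewi(dnb, non_dnb):
    """Composite early-warning index for one time window.

    dnb:     (q, r) values of the q DNB genes over r samples
    non_dnb: (p, r) values of genes outside the DNB
    Combines the three indicators from the text: average CV of DNB
    members, average absolute difference inside the DNB (DIF), and
    between DNB and outside genes (ODIF). CV*ODIF/DIF is an assumed
    composite that peaks at the predisease state.
    """
    cv = np.mean(np.abs(dnb.std(axis=1, ddof=1) / dnb.mean(axis=1)))
    dif = np.mean([np.abs(dnb[i] - dnb[j]).mean()
                   for i in range(len(dnb)) for j in range(i + 1, len(dnb))])
    odif = np.mean(np.abs(dnb[:, None, :] - non_dnb[None, :, :]))
    return cv * odif / dif
```

Evaluating `ewi` in each time window and taking the window with the largest value mirrors the detection rule described above: DNB members fluctuate strongly and move together (large CV, small DIF) while decoupling from the rest of the network (large ODIF).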
Some enriched GO functions of the identified DNB genes are listed in the accompanying table. According to the KEGG pathway enrichment analysis, the genes in the DNB of influenza A disease are closely relevant to the immune system and inflammation, including cytokine-cytokine receptor interaction, the PPAR signaling pathway, and the Jak-STAT signaling pathway. Moreover, genes in the PPAR signaling pathway such as LPL, APOA1, OLR1, and APOC3 play a significant role in inhibiting inflammation and regulating cell apoptosis and the immune system, and the DNB genes (marked red in the pathway figures) occupy critical positions in the cytokine-cytokine receptor interaction and PPAR signaling pathways. To further demonstrate the effectiveness of our method, we analyzed the symptoms of patients and their complications: patients develop symptoms of upper respiratory tract infection, accompanied by pulmonary complications and renal failure. In summary, DEGs are first identified by Student's t-test between the case and control groups. Then a new type of normalized data is constructed by the formula defined in this study for the next step of analysis. Different from previous methods, our work regards the time-resolved expression of each gene as a vector for hierarchical clustering analysis, with the Euclidean distance used to compute distances among genes in the DEGs. A group satisfying the three criteria of DNB identification is identified as the DNB. Further, the values of CV, DIF, and ODIF are calculated to construct an index for detecting the predisease state of an individual sample. This index, the EWI, is applied to early diagnosis with the microarray data of influenza A disease and shows fluctuating values over time. 
Although the ODIF value approaching the predisease state is not completely obvious, the behavior of the other indicators is significantly consistent with our theory. In addition, individuals with the same disease may have different DNBs owing to different driving factors; we will focus on this important topic and continue to refine the algorithm in future research. To detect the early warning signal of influenza A disease using a small number of samples of high-throughput data, we propose an early warning index serving as a leading indicator of the critical transition, based on the concept of dynamical network biomarkers proposed by Chen, which drives disease progression from the normal state to the disease state. Compared to general biomarkers, dynamic"}
+{"text": "Comprehensively understanding the dynamics of biological systems is among the biggest current challenges in biology and medicine. To acquire this understanding, researchers have measured the time-series expression profiles of cell lines of various organisms. Biological technologies have also drastically improved, providing a huge amount of information with support from bioinformatics and systems biology. However, the transitions between the activation and inactivation of gene regulations, at the temporal resolution of single time points, are difficult to extract from time-course gene expression profiles. Our proposed method reports the activation period of each gene regulation from gene expression profiles and a gene regulatory network. The correctness and effectiveness of the method were validated by analyzing the diauxic shift from glucose to lactose in Escherichia coli. The method completely detected the three periods of the shift: 1) consumption of glucose as the nutrient source, 2) the period of seeking another nutrient source, and 3) consumption of lactose as the nutrient source. We then applied the method to mouse adipocyte differentiation data. Cell differentiation into adipocytes is known to involve two waves of the gene regulation cascade, and sub-waves are predicted. From the gene expression profiles of the cell differentiation process from ES to adipose cells (62 time points), our method acquired four periods: three covering the two known waves of the cascade, and a final period of gene regulations when the differentiation to adipocytes was completed. In summary, the proposed method identifies the transitions of gene regulations from time-series gene expression profiles. Dynamic analyses are essential for a deep understanding of biological systems and for identifying the causes of the onset of diseases such as diabetes and osteoporosis. The proposed method can greatly contribute to the progress of biology and medicine. 
Acquiring a comprehensive understanding of biological systems dynamics is among the biggest challenges in biology and medicine. The processes of all organisms (including humans) are time-variant. Cells cycle and divide, change their internal states and differentiate into other cell types when stimulated from the outside. Abnormal cell differentiation causes diseases and cancer. Dynamic systems are universal in organisms. Biological systems dynamics can now be understood through advanced biological technologies. On-chip hybridization and DNA sequencing have enabled the simultaneous measurement of the expression levels of many genes. Many of the accumulated data are registered in public databases such as GEO (Gene Expression Omnibus), ArrayExpress and SRA, which store the time-course movements of gene expression levels in multiple organisms. For example, a search for "time-course" (2017/7/12) returned 913 datasets in GEO and 7623 entries in SRA. Based on these data, researchers have revealed the biological mechanisms of processes such as cell cycling and diauxic shifts. Numerous systems operate in living cells; examples are metabolism, genetic and environmental information processing, cellular processes, and organismal systems. Concurrently, gene regulatory networks have been compiled in databases, including network databases for Escherichia coli and the yeast network database YEASTRACT. The periods in which the regulations of these networks are active are often biologically meaningful. The most well-known example is the cell cycle, which has two states: interphase and cell division. The interphase is divided into three sub-states called phases, each with sub-phases, giving the cell cycle four phases in total. Under these dynamics, the number of periods in a cell cycle is not fixed, and the same situation occurs in other biological processes. 
Therefore, determining the number of periods in a biological process is a difficult task, and the dynamics of the gene regulatory network are difficult to analyze, especially manually. The inputs of the algorithm are Exp, the time-course gene expression profiles with |T| time points and |G| genes, and T, the set of time points in Exp. The final steps of the search proceed as follows: if scorek > scorebest, then scorebest ← scorek, Actbest ← Act and Trabest ← Tra (Step 7); if scorek > scorek−1, return to Step 6 (Step 8); otherwise output Actbest, Trabest and scorebest (Step 9). The above algorithm is a brute-force search with poor computational efficiency. However, as the gene expression profile contains very few time points, the algorithm satisfies our requirements. For larger datasets, the calculation time can be shortened by using a hash function from a gene and a period to the sub-score, and the computational complexity can be reduced by dynamic programming. The effectiveness of the proposed method was tested on two biological datasets, and its results were compared with a manual analysis of the periods in the time series. The first dataset contains the time-course gene expression profiles during the diauxic shift from glucose to lactose metabolism in E. coli; in this test, the method was assessed by its ability to detect the time of the diauxic shift. The second dataset is from Mus musculus, in which two periods of gene regulation are known. The first dataset is the time-course gene expression profile GSE7265 in the GEO database, measuring E. coli MG1655 and isogenic mutants cultured in a medium containing glucose and lactose. The time points are divided into two phases: the first lasting from 780 min. to 939 min., the second from 969 min. to 1089 min. The diauxic shift from glucose to lactose has been recognized in the first half, but information on the second half is completely lacking. Although the GSE7265 gene expression profile contains 17 time points, the original paper identified only these two phases. The gene regulatory network of the diauxic shift comprises 31 genes and 50 gene regulations. 
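The search loop described above (try k periods, keep the best segmentation, stop when adding a period no longer improves the score) can be sketched as follows; `score_fn` stands in for the paper's model score and is an assumption here:

```python
from itertools import combinations

def best_segmentation(n_points, score_fn):
    """Brute-force search over period boundaries. score_fn takes a
    list of (start, end) half-open segments covering 0..n_points and
    returns a number; higher is better."""
    def best_with_k(k):
        best_score, best_segs = float("-inf"), None
        # choose k-1 interior cut positions out of the n_points-1 candidates
        for cuts in combinations(range(1, n_points), k - 1):
            bounds = [0, *cuts, n_points]
            segs = [(bounds[i], bounds[i + 1]) for i in range(k)]
            s = score_fn(segs)
            if s > best_score:
                best_score, best_segs = s, segs
        return best_score, best_segs

    score, segs = best_with_k(1)
    k = 2
    while k <= n_points:
        s2, segs2 = best_with_k(k)
        if s2 <= score:  # stop when one more period does not help
            break
        score, segs = s2, segs2
        k += 1
    return score, segs

# toy score that prefers exactly the split {0..3 | 3..6}
toy = lambda segs: 10 if segs == [(0, 3), (3, 6)] else len(segs)
print(best_segmentation(6, toy))
```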
The proposed method was implemented in R, and the matrix M representing the intensities of the gene regulations was calculated by an R package from GitHub (https://github.com/takenakayoichi/tacs). The computed matrix M was input to the proposed method. The proposed method detected five periods during the diauxic shift of wild-type E. coli; the maximum score during each period is tabulated in the accompanying table. Takenaka et al. manually curated the periods in this dataset. In the first and third periods, the E. coli cells are in the logarithmic growth phase, consuming glucose and lactose, respectively; the second period is consistent with cessation of cell proliferation. Our method divided the first half of the experiment into three periods. In comparison, the manual curation identified only two periods in the first half, with the division point exactly centered in the growth-arrest period. From the difference between our results and the manually curated results, we can ask whether a mechanism that regulates the gene expression level exists independently in the logarithmic and growth-arrest phases. Which of these methods expresses more realistic biological properties? Manual curation suggests that lactose consumption begins immediately after glucose depletion. On the other hand, the proposed method suggests that once the glucose is depleted, the cells cease growing while seeking a new nutrient source, and resume growth when an alternative nutrient is found. One expects that E. coli cells cannot immediately access a new nutrition source once the original source is depleted; if this were possible, logarithmic growth would continue without interruption. We therefore consider that the proposed method presents a more plausible dynamic model than manual curation. Once the E. coli had consumed the glucose and lactose in their nutrient medium, they began seeking the next nutrition source; when no further source was found, the gene expression regulations changed sufficiently for detection by both methods. 
The second half of the experiment was continuous in the manual curation, but divided into two periods by the proposed method. However, as the expression profile provides the only available information in the second half, the actual number of periods there is difficult to judge. The second dataset was a time-course gene expression profile of mouse adipocyte differentiation. The RNAs in this dataset were collected from mouse ST2 bone marrow stroma cell-derived stem cells (RCB0224) obtained from the RIKEN BioResource Center. ST2 cells were induced by changing the medium from RPMI1640 to DMEM supplemented with 10% FBS, 0.5 mM 3-isobutyl-1-methylxanthine (MIX), 0.25 μM DEX, and insulin-transferrin-selenium-X supplement containing 5 μg/ml of insulin and 1 μM rosiglitazone. After 48 hours, the differentiation medium was replaced with DMEM supplemented with 10% FBS. The RNAs collected at 62 time points during adipocyte differentiation were measured using an Affymetrix GeneChip Mouse Genome 430 2.0 Array. The time points were 0, 5, 15, 30 and 45 min., hourly from 1 to 30 h, and 6-hourly from 36 to 192 h after adipogenesis induction. Each datum was background-subtracted and normalized by robust multi-array analysis (RMA). The gene expression profiles are available from the Genome Network Platform. Previous research has unveiled the important regulators of adipocyte development and a two-wave cascade in this process (see the referenced figure). The expression levels of the 14 genes in the network are shown in the corresponding figure, and the regulation intensities forming the matrix M of our algorithm were calculated from these gene expression profiles. The proposed method detected four periods of adipocyte differentiation; the maximum score in each period is listed in the accompanying table. During the first and second periods, the regulations of Cebpβ and Cebpδ are controlled and the regulations of the second wave (Thrα/β and Srebp-1c) begin activating. 
The Pparg and Cebpα genes are regulated during the third period, and the most downstream genes are regulated during the fourth period. The four-period differentiation identified by our method includes the two known waves of the cascade, but appears to be more precise. The four periods detected by our method differ from the two-wave differentiation in two ways. First, the regulations of Pparg and Cebpα are activated twice: once in the first period, and again in the second wave composed of the third and fourth periods. Pparg is a marker of adipose cells and a known regulator of adipocyte differentiation, and the two activations of the pparg gene are confirmed in the expression profiles. We proposed a method that detects the dynamics of gene regulatory networks at the temporal resolution of single time points in the underlying gene expression profiles. The dynamics are modeled as periods of activated regulations, and the plausibility of the model is quantified using the Bayesian information criterion. The problem constitutes a combinatorial optimization problem that finds the highest-scoring model. The algorithm inputs the gene expression profiles and a gene regulatory network, and returns the activated regulations, divided into periods. The effectiveness of the method was validated on two datasets: the diauxic shift from glucose to lactose in E. coli and adipocyte differentiation in the mouse. In both datasets, the proposed method detected more plausible dynamic models than existing models. We believe that the proposed method can precisely reveal the dynamics of biological systems."}
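The model plausibility above is scored with the Bayesian information criterion; a generic maximization-friendly form (the authors' exact parameterization may differ) is:

```python
import math

def bic_score(log_likelihood, n_params, n_obs):
    """Score to maximize: twice the log-likelihood minus the BIC
    complexity penalty k*ln(n). Adding periods raises the likelihood
    but also the parameter count, so the score balances fit against
    model complexity."""
    return 2.0 * log_likelihood - n_params * math.log(n_obs)
```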
+{"text": "The great amount of gene expression data has brought a big challenge for the discovery of Gene Regulatory Networks (GRNs). For network reconstruction and the investigation of regulatory relations, it is desirable to ensure the directness of links between genes on a map, infer their directionality and explore candidate biological functions from high-throughput transcriptomic data. To address these problems, we introduce a Boolean Function Network (BFN) model based on techniques of hidden Markov models (HMMs), likelihood ratio tests and Boolean logic functions. BFN consists of two consecutive tests to establish links between pairs of genes and check their directness. We evaluate the performance of BFN through application to S. cerevisiae time course data. BFN produces regulatory relations consistent with the succession of cell cycle phases, and it improves sensitivity and specificity compared with alternative methods of genetic network reverse engineering. Moreover, we demonstrate that BFN can provide proper resolution for GO enrichment of gene sets. Finally, the Boolean functions discovered by BFN can provide useful insights for the identification of control mechanisms of regulatory processes, which is the special advantage of the proposed approach. In combination with low computational complexity, BFN can serve as an efficient screening tool to reconstruct gene relations on the whole-genome level, and the approach is also applicable to a wide range of time course datasets. One of the challenging fields of computational biology is the study of gene regulatory networks (GRNs). The demanding task of recovering the hidden relations between genes at the whole-genome level can provide insights into the comprehensive understanding of biological pathways and their mechanisms. It can also enhance developments in disease treatment and biological technology. 
Currently there are two major experimental approaches to identify regulatory relations between genes. The first uses perturbation (knockout or overexpression) experiments to explore the regulatory targets of a specific gene. The second detects targets of specific transcription factors (TFs) with protein-DNA binding experiments. Neither experimental technique can be used to reconstruct GRNs on a genome-wide scale owing to the huge number of experiments required. In addition, abundant high-throughput observational transcriptomic data are publicly available. To perform the task of gene regulatory network inference (GRNI), or reverse-engineering from available transcriptomic data, numerous computational methods from statistics and computer science have been developed, and reviews of the existing approaches are available. Boolean networks (BNs) and probabilistic Boolean networks (PBNs) model regulation with logic functions, and Bayesian networks are another class of methods. As an alternative to the highly detailed and complex BN, PBN and DBN models, information-theoretic models have emerged. Methods that are able to establish causality are mostly graph based; for example, the method of GeneNet converts a correlation network into a partially directed graph. Based on a survey of existing methods, we highlight six characteristics that ideally should be considered in a state-of-the-art GRN inference approach: accuracy, ability to capture the dynamics of temporal data, differentiation of direct and indirect regulations, detection of the directionality of a link, assignment of the most informative function to a link, and efficient computation on large datasets. Boolean Function Network (BFN) is a two-step GRN reverse-engineering approach designed to achieve the above six aims. At the first step, BFN identifies pairwise dependencies and explores directionality, optimal Boolean functions and time delays. At the second step, it tests the directness of the relations established in the first step. The output of the algorithm is controlled by two threshold parameters, p1 and p2, which set the significance level of each test in the first and second step, respectively. 
The purpose of ensuring the directness of the gene regulatory relations is to achieve a clear structural representation of the GRN and reduce the number of false positive links. As a comparison of performance, we evaluate the accuracy of BFN against Boolean network and dynamic Bayesian network methods. BFN can be considered a dynamic GRNI method since it uses adaptive time delays between genes. Another important characteristic of BFN, seldom addressed by most existing approaches, is the ability to identify the regulatory function describing the relation between gene pairs in GRNs. Among the methods reported in the literature, the Boolean models (BN and PBN) are among the few with such capacity. However, these methods use a limited number of Boolean functions, such as three Boolean gates and their combinations; we extend the number of Boolean functions in the proposed approach to describe the complex nature of regulatory relations. Furthermore, BFN has low computational complexity and computation time, and can therefore be applied to high-throughput data, as demonstrated for the whole-genome S. cerevisiae dataset in the Results section. The proposed BFN method is based on a hidden Markov model (HMM). We assume that the true status of a gene at a discrete moment of time is a hidden state, while the measured expression level of the gene transcript at that time is an observation. In a nutshell, the BFN method consists of two major steps: identification of pairwise dependencies between genes by Test 1, and a subsequent check of whether those links are direct or indirect by Test 2. Consider a scenario in which Test 1 establishes the links x1 → x2, x1 → x3 and x2 → x3. In the first interpretation, x1 has an indirect effect on x3 through x2; in the second, both x1 and x2 regulate x3 directly. Therefore the question arises: which of the two maps is correct? 
With an increasing number of genes, the resulting network can differ dramatically if the directness of the links is not tested. Further details of the method are explained in the Materials and Methods section. Both Test 1 and Test 2 are based on the comparison of the likelihoods of two alternative models. Test 1 searches for the best Boolean function and time delay between two genes, maximizing the likelihood ratio of a model with a link over a model without a link. The proposed BFN method needs time course data to discover the regulatory relationships; the minimum number of time points required for proper inference is 10. To investigate the performance of BFN, we use the widely used yeast expression data of Spellman et al. Ma et al. provided the comparison framework and evaluation metrics; since the C1 metric measures the distance from the perfect point (sensitivity = 1, specificity = 1), a smaller score indicates better performance. For BFN, we ran Test 1 and Test 2 consecutively with a p-value threshold of 0.05 for both tests, and the time delay range was set from 0 to 5 on the Spellman dataset. 185 genes are assigned as TFs according to the SGD (Saccharomyces Genome Database). According to the reported scores, the performance of BFN is better than the average results. The performance in the C1 and C3 metrics shows that BFN achieves improvements over the average; in particular, the improvement of BFN is appealing for the C1 metric (sensitivity and specificity) across all GRN gold-standards. When compared with the best individual scores, BFN has better performance in the C1 metric than any method applied to the Smith and Yeung datasets, while no advantage can be seen for the C2 and C3 metrics. It should be noted that the performance ranks of these methods vary across datasets; one benchmark network, from Arabidopsis thaliana, consists of 8 genes paired (based on expression pattern similarity) into 4 modules with known regulatory information. For the yeast analysis, we focus on the 103 cell-cycle regulated genes. 
For the cell-cycle analysis, thresholds p1 = 0.005 and p2 = 0.05 are used for Test 1 and Test 2, respectively. The time delay range varies from 1 to 5 sample time points, and we consider links of positive Boolean functions only. As a result, we obtained 68 links. Conventionally, the cell cycle is partitioned into four consecutive phases: mitosis (M), gap 1 (G1), synthesis (S) and gap 2 (G2). In this study, we focus on the list of 103 cell-cycle genes along with their phase annotations. In the whole-genome S. cerevisiae analysis, the pool of target genes can be naturally divided into groups according to the Boolean functions and time delay parameters detected by BFN, and for each TF the results of the whole-genome analysis can be displayed accordingly. The obtained information is useful both for elaborating a TF's function and for establishing candidate genes of regulatory targets. To illustrate the former case, we consider the YHL020C (OPI1) transcription factor. The annotation by SGD shows that its regulatory targets are related to the "carboxylic acid biosynthetic process" (GO: 0046394). Subgroups can be considered according to time delay, for example when instantaneous co-regulation with the transcription factor is of interest (time delay 0), or when we are only interested in pairwise relations whose regulatory effect can be seen over time. Based on our empirical experience, the maximum time delay should be no larger than one third of the number of time points in the dataset and at least half of the time interval between the cell's steady states. The proposed BFN approach is a fast and efficient way to explore the regulatory relations between genes for further experimental analysis, and its output can be controlled by adjusting the two threshold parameters. Similar to any other computational method based solely on transcriptome data, BFN is not sufficient to reconstruct the GRN entirely because posttranslational modification should be taken into account. Moreover, an observational dataset can reflect the status of the GRN only under the specific experimental conditions. 
However, the proposed BFN method can become a valuable tool for biologists to reduce the search space for relations between genes, and it will help to recover the overall picture of regulatory pathways when applied to several related time-series datasets obtained under different experimental conditions. When prior knowledge is available, such as a list of TFs, it can be integrated into the proposed BFN to reduce computational complexity and improve prediction precision. An apparent advantage of the BFN method is that it not only determines direct relationships between genes but also provides the direction and the Boolean function with its time delay. The follow-up division into groups, based on the assignment of Boolean functions and time delays to each relation, can be incorporated into clustering and the analysis of functional enrichment. The proposed BFN identifies the directness of a link between a pair of genes; it could be expanded to discover structures of three or more genes, with higher computational complexity, in future studies. Spellman et al. provide the time course data used in this study. The expression profile is the measured abundance of mRNAs at each determined point in time; the source for this type of data can be microarray, next-generation sequencing and other types of biochip experiments. Hereafter, we arrange the variables (genes) horizontally, with n the number of genes, and the observations (time points) vertically, with m the total number of time points, n ≫ m. Naturally, the range of values varies greatly from one gene to another. In order to enable comparison of the expression profiles of different genes, the expression values have to be standardized to the same scale, i.e. converted to a standard range for every gene. 
We apply the approach of empirical cumulative distribution function (ECDF) transformation, which can be described as follows. For each gene xi: sort the observations xij along the m time points in ascending order, where I is the array of sorting indices, and assign to the observation with sorting index j the probability j/m, j = 1…m. If there are ties for some genes, the above standardization procedure can generate skewness; thus we use the following modified procedure in this study. For each gene xi: sort the observations xij along the m time points in ascending order; identify the unique values uik, k = 1…K, K < m; count the number of ties ck for each unique value; and compute and assign to every observation equal to uik the cumulative probability (c1 + … + ck)/m. After applying this standardization, we obtain the corresponding empirical cdf value for gene i at time point t. We define the Boolean network as a set of vertices V = {x1…xn} representing genes together with the set of all unary and binary Boolean functions f = {f1…f6, F1…F42} which defines relations between nodes. A Boolean function is a mapping of the form f: B^k → B, where B = {0,1} is the Boolean domain and k is the arity of the function. For every k there exists a finite set of non-trivial Boolean functions, each of which can be represented in the form of a truth table; the full sets are given in the accompanying tables. Besides all possible non-trivial Boolean functions with a unique definite output, we also consider functions with two possible outputs, {0, 1}, which means that either 0 or 1 may appear in the output for the same input assignment. In order to identify the pairwise dependencies between genes, we examine two models for every possible pair of genes: one model represents the situation where the genes are linked, and the other suggests there is no link between the genes under consideration. Let y1(t) and y2(t) be the continuous observed values of Gene 1 and Gene 2 at time point t, respectively. 
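The modified tie-aware procedure can be sketched as follows (each tied observation receives the cumulative probability of its unique value; the function name is illustrative):

```python
def ecdf_standardize(values):
    """Empirical-CDF standardization with tie handling: every
    observation equal to unique value u_k gets (c_1+...+c_k)/m."""
    m = len(values)
    cum = {}
    count = 0
    for v in sorted(values):
        count += 1
        cum[v] = count  # ends at the cumulative count through value v
    return [cum[v] / m for v in values]

print(ecdf_standardize([3, 1, 3, 2]))  # both ties at 3 share one probability
```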
The notations x1(t) and x2(t) are the corresponding discrete latent variables, and τ represents the time delay between the genes. For example, τ = 1 means a time delay of 7 minutes for the α-factor cell-arrest expression data of Spellman et al. Two competing models are compared: at each time point t, the likelihood score is marginalized over all possible latent variable states of x1(t) and x2(t). The larger the likelihood ratio, the more significant the link. According to Bayes' theorem, the densities P(y1(t)|x1(t)) and P(y2(t)|x2(t)) in the likelihood formulas can be rewritten in terms of the posterior probabilities P(xk(t)|yk(t)). The terms P(y1(t)) and P(y2(t)) can be taken outside of the sums in both formulas because P(yk(t)) is constant, as yk(t) is the observed realization of the random variable xk(t); they cancel out in the likelihood ratio and can therefore be omitted from the formulas for Lnot linked and Llinked. The estimate of the conditional probability P(xk(t)|yk(t)) is taken from the empirical cdf. For simplicity, we abbreviate products such as P(x1(t)|y1(t))·P(x2(t + τ)|y2(t + τ)); for example, p00t = P(x1(t) = 0|y1(t))·P(x2(t + τ) = 0|y2(t + τ)). The marginal probability P(xk(t)) can be computed analogously. The conditional probability of the Boolean state of x2(t + τ) given x1(t) is weighted according to the truth table: p00 and p11 are given weight 1, while p01 and p10 are set to zero, in accordance with the truth table of f1. For computational reasons, in practice we use ε and 1 − ε instead of 0 and 1 to avoid computing log(0). The parameter ε can be adjusted if needed; based on our empirical experience, it does not notably affect the output. The default value of ε in the software implementation is set to 0.005 in this study; decreasing ε can slightly increase the number of regulatory relations in the output. 
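As an illustration of the scoring, the sketch below evaluates the log-likelihood of an "agreement" function (weight ≈ 1 on p00 and p11, ≈ ε on p01 and p10). Treating f1 as this agreement function, and the exact weighting scheme, are our assumptions for illustration:

```python
import math

EPS = 0.005  # default used to avoid log(0), per the text above

def log_likelihood_agreement(p1, p2, tau=0):
    """p1[t] = P(x1(t)=1 | y1(t)), p2[t] = P(x2(t)=1 | y2(t)),
    taken from the empirical cdf values. Sums, over usable time
    points, the log of the truth-table-weighted joint probability."""
    ll = 0.0
    for t in range(len(p1) - tau):
        a, b = p1[t], p2[t + tau]
        p00, p11 = (1 - a) * (1 - b), a * b  # agreeing latent states
        p01, p10 = (1 - a) * b, a * (1 - b)  # disagreeing latent states
        ll += math.log((1 - EPS) * (p00 + p11) + EPS * (p01 + p10))
    return ll
```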
For the functions that have an indefinite output for one of the inputs (f3–f6), we use the marginal probability of the second gene being 1 or 0 as the weight. With the notation above, the likelihoods corresponding to all six possible functions between two genes can be written analogously. The largest of the likelihoods Lf1…Lf6 suggests the function. At the same time, we need to find the optimal time delay between the genes; thus we repeat the procedure for all possible time delays τ and choose the one corresponding to the largest difference in the log-likelihoods of the two models. The significance of an established link is measured with a p-value: under the null hypothesis, the test statistic 2 log(R) can be approximated by the Chi-square distribution. In Test 2, it is unnecessary to compute all parts of the likelihoods, since we are only interested in the difference, that is, the unary function f against the binary function F. Since the link x1 → x3 is present in both models and does not contribute to the models' differentiation, we can remove it from the computation; the corresponding likelihoods for the two models can then be written accordingly. After applying Bayes' theorem and all reductions similar to Test 1, the likelihoods of the indirect model M0 and the direct model M1 are obtained. Among all candidates, we choose the intermediate gene x3 in model M1 that maximizes the likelihood of this model, and therefore maximizes the difference between the models. In the last step, we choose the optimal time delay τ″ between the intermediate gene x3 and the target gene x2, selecting it such that the likelihood ratio R is largest for the best choice of x3. On the whole, the Test 2 procedure can be expressed accordingly. In Test 1, we consider n(n − 1) possible gene pairs. 
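The p-value of the statistic 2 log(R) under the chi-square approximation can be computed without external libraries for small degrees of freedom; the choice df = 1 below is an illustrative assumption (the proper df equals the difference in free parameters between the two models):

```python
import math

def lrt_p_value(log_ratio, df=1):
    """Chi-square upper-tail probability of the statistic 2*log(R).
    Closed forms exist for df=1 (via erfc) and df=2."""
    stat = 2.0 * log_ratio
    if df == 1:
        return math.erfc(math.sqrt(stat / 2.0))
    if df == 2:
        return math.exp(-stat / 2.0)
    raise ValueError("only df=1 or df=2 in this sketch")
```

A link would then be accepted when this p-value falls below the corresponding threshold p1 or p2.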
For every gene pair that passed Test 1, we consider at most (n − 2) intermediate genes in Test 2. However, the two tests are conducted in sequence, not in a nested loop; this reduces the computational complexity significantly because only a limited number of gene pairs pass Test 1 and enter Test 2. Boolean networks can be very useful in finding dependencies among genes, but the exhaustive search for the optimal Boolean network is infeasible for a large number of genes. The proposed BFN algorithm has a computational complexity of O(n^3) in the worst case, when the GRN is a complete directed graph. Supporting information: S1–S7 Tables (XLSX)."}
+{"text": "Expression-based phenotype classification using either microarray or RNA-Seq measurements suffers from a lack of specificity because pathway timing is not revealed and expressions are averaged across groups of cells. This paper studies expression-based classification under the assumption that single-cell measurements are sampled at a sufficient rate to detect regulatory timing. Thus, observations are expression trajectories. In effect, classification is performed on data generated by an underlying gene regulatory network.Network regulation is modeled via a Boolean network with perturbation, regulation not fully determined owing to inherent biological randomness. The binary assumption is not critical because the resulting Markov chain characterizes expression trajectories. We assume a partially known Gaussian observation model belonging to an uncertainty class of models. We derive the intrinsically Bayesian robust classifier to discriminate between wild-type and mutated networks based on expression trajectories. The classifier minimizes the expected error across the uncertainty class relative to the prior distribution. We test it using a mammalian cell-cycle model, discriminating between the normal network and one in which gene p27 is mutated, thereby producing a cancerous phenotype. Tests examine all model aspects, including trajectory length, perturbation probability, and the hyperparameters governing the prior distribution over the uncertainty class.Simulations show the rates at which the expected error is diminished by smaller perturbation probability, longer trajectories, and hyperparameters that tighten the prior distribution relative to the unknown true network. For average-expression measurement, methods have been proposed to obtain prior distributions. 
These should be extended to the more mathematically difficult, but more informative, expression trajectories.The online version of this article (10.1186/s12918-018-0549-y) contains supplementary material, which is available to authorized users. Phenotype classification is a salient issue for translational genomics, for instance, classification of normal versus cancerous cells, of different stages of tumor development, or of different prospective drug responses. Both expression microarray and RNA-Seq measurements have received much interest; however, they suffer from a lack of specificity. First, pathway timing is not reflected in the data and, second, expressions are averaged across groups of cells, so that individual cell responses are not detectable. New technologies are being developed for profiling individual cells using RNA-Seq or quantitative PCR. Genes interact with each other; these interactions determine how genes behave over time and define the dynamics of gene regulatory networks (GRNs). One way of modeling the dynamics of GRNs over discrete time points is the Boolean network with perturbation (BNp). Suppose we have single-cell measurements sampled at a sufficient rate to detect regulatory timing. In effect, this would mean that classification would be done on data reflecting an underlying gene regulatory network. In this paper we extend single-cell trajectory classification to the Bayesian framework. We propose the intrinsically Bayesian robust (IBR) classifier for the trajectories. The IBR classifier is a specific type of the optimal Bayesian classifier (OBC), first introduced in earlier work, and is applied here to discriminate between the wild-type (S=0) and mutated (S=1) networks.
We introduce a Bayesian version of the partially observed Boolean dynamical system (POBDS) proposed in earlier work. The state process evolves as Xk+1 = f(Xk) \u2295 nk, where Xk = [x1(k), \u22ef, xn(k)]^T is a binary state vector, called a gene activity profile (GAP), at time k, in which xi(k) indicates the expression level of the i-th gene at time k (either 0 or 1); f = [f1, \u22ef, fn]^T: {0,1}^n \u2192 {0,1}^n is the vector of the network functions, in which fi gives the expression level of the i-th gene at time k+1 when the system lies in state Xk at time k; nk = [n1(k), \u22ef, nn(k)]^T is the perturbation vector at time k, in which n1(k), n2(k), \u22ef, nn(k) are i.i.d. Bernoulli random variables for every k with parameter pk = P(ni(k) = 1) for i = 1, \u22ef, n; and \u2295 is component-wise modulo-2 addition. The existence of perturbation makes the corresponding Markov chain of a BNp irreducible. Hence, the network possesses a steady-state distribution \u03c0 describing its long-run behavior. If pk is sufficiently small, \u03c0 will reflect the attractor structure of the original network. We can derive the transition probability matrix (TPM) if we know the truth table and the perturbation probability of a BNp; as a result, the steady-state distribution \u03c0 can be computed as well. Approximate inference methods can scale to datasets larger than 10^4 cells. However, this scalability comes at the expense of a reduced ability to fully characterise posterior uncertainty. In this study we have focused purely on best characterising the posterior uncertainty using MCMC algorithms that asymptotically converge to the true posterior distribution.
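Under the definitions above, the TPM and steady-state distribution of a small BNp can be computed directly by enumeration; this is a minimal sketch (feasible only for small n), with the truth table supplied as a list of Python functions:

```python
import numpy as np
from itertools import product

def bnp_tpm(f, p):
    """Transition probability matrix of a Boolean network with perturbation.

    f: list of n functions, each mapping a state tuple to 0/1 (truth table).
    p: per-gene perturbation probability.
    Dynamics: X_{k+1} = f(X_k) XOR n_k with i.i.d. Bernoulli(p) perturbations,
    so P(x -> y) = p^d * (1-p)^(n-d), where d = Hamming distance of y from f(x).
    """
    n = len(f)
    states = list(product([0, 1], repeat=n))
    T = np.zeros((2**n, 2**n))
    for i, x in enumerate(states):
        fx = np.array([fi(x) for fi in f])
        for j, y in enumerate(states):
            flips = int(np.sum(fx != np.array(y)))  # perturbed genes
            T[i, j] = p**flips * (1 - p)**(n - flips)
    return T

def steady_state(T):
    """Stationary distribution pi with pi T = pi (leading left eigenvector)."""
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

Since p > 0 makes every transition probability positive, the chain is irreducible and the leading eigenvector is unique, matching the steady-state argument in the text.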
In practice this purist approach may not be necessary, but we argue that pseudotime uncertainty should be addressed. A caveat of the specific methodology adopted in this study is that it is necessarily computationally intensive, due to the use of full Markov chain Monte Carlo based Bayesian inference, and is dominated by operations on the Gaussian Process covariance matrix that scale cubically with the number of cells; sparse approximations based on inducing points can reduce this cost. It is important to note that the GPLVM used in our investigations is not intended to be a single, all-encompassing solution for pseudotime modelling problems. For our purposes, it provided a simple and relevant device for tackling the single-trajectory pseudotime problem in a probabilistic manner, but it clearly has limitations when the temporal process under investigation contains bifurcations or heteroscedastic noise processes (as discussed earlier). Improved and/or alternative probabilistic models are required to address more challenging modelling scenarios, but the general procedures we describe are generic and should be applicable to any problem where statistical inference for a probabilistic model can give posterior simulation samples. We also developed a novel sigmoidal gene expression temporal association model that enabled us to identify genes exhibiting a strong switch-like (in)activation behaviour. For these genes we were then able to estimate the activation times and use these to assess the time resolution that can be attained using pseudotime estimates of single cells. Our investigations show that pseudotime uncertainty prevents precise characterisation of the gene activation time, but a probabilistic model can provide a distribution over the possibilities. In application, this uncertainty means that it is challenging to make precise statements about when regulatory factors will turn on or off and whether they act in unison.
This places an upper limit on the accuracy of dynamic gene regulation models and causal relationships between genes that could be built from single-cell expression data. In conclusion, single-cell genomics has provided a precision tool with which to interrogate complex temporal biological processes. However, as widely reported in recent studies, the properties of single-cell gene expression data are complex and highly variable. We have shown that the many sources of variability can contribute to significant uncertainty in statistical inference for pseudotemporal ordering problems. We argue therefore that strong statistical foundations are vital and that probabilistic methods provide a platform for quantifying uncertainty in pseudotemporal ordering, which can be used to more robustly identify genes that are differentially expressed over time. Robust statistical procedures can also temper potentially unrealistic expectations about the level of temporal resolution that can be obtained from computationally based pseudotime estimation. Ultimately, as the raw input data is not true time series data, pseudotime estimation is only ever an attempt to solve a missing data statistical inference problem that, we should remind ourselves, involves quantities (pseudotimes) that are unknown and never can be known. S1 Text (PDF). S1 Fig (TIF): Monocle\u2019s MST approach. 80% of cells were subsampled from the Laplacian Eigenmaps representation 30 times and the pseudotime refitted. A: Boxplots of resamples across all cells; upper and lower whiskers extend to the highest and lowest values within 1.5 times the interquartile range. B: The 2\u03c3 interval computed for the pseudotime of each cell, which varies from close to 0 up to almost half the pseudotime interval (0.5). S2 Fig (TIF): GPLVM trajectories fit on the Trapnell dataset using either a high level of shrinkage (A & C) or a low level of shrinkage (B & D) for 10 separate MCMC chains. The trajectories fit consistently using high levels of shrinkage, implying this is required to obtain a well-defined posterior, as opposed to a \u2018lumpy\u2019 posterior with many local maxima using low levels of shrinkage. C & D show the posterior densities for a randomly chosen cell (number 100) under the two shrinkage regimes. S3 Fig (TIF): Left: Monocle ordering found using ICA on the 500 most variable genes. Right: Monocle orderings found using a Laplacian Eigenmaps embedding as described above. Error bars show the 95% HPD credible interval."}
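The switch-like activation model can be illustrated with a toy fit that recovers a gene's activation time t0 from noisy expression values ordered by pseudotime; the sigmoid form and all numerical values here are hypothetical, not taken from the paper:

```python
import numpy as np

def sigmoid(t, mu0, k, t0):
    """Switch-like profile: scale mu0, activation strength k, activation time t0."""
    return mu0 / (1.0 + np.exp(-k * (t - t0)))

# Synthetic gene with true activation time 0.4 and observation noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                    # pseudotimes in [0, 1]
y = sigmoid(t, 2.0, 15.0, 0.4) + 0.1 * rng.normal(size=t.size)

# Brute-force least squares over (k, t0); mu0 has a closed-form optimum
# for each candidate unit-height sigmoid s.
best = (np.inf, None, None)
for k in np.linspace(5.0, 25.0, 41):
    for t0 in np.linspace(0.0, 1.0, 101):
        s = sigmoid(t, 1.0, k, t0)
        mu0 = (s @ y) / (s @ s)                   # optimal scale for (k, t0)
        sse = float(np.sum((y - mu0 * s) ** 2))
        if sse < best[0]:
            best = (sse, k, t0)
_, k_hat, t0_hat = best
```

In the probabilistic setting described in the text, the point estimate t0_hat would be replaced by a posterior distribution over activation times, propagating pseudotime uncertainty.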
+{"text": "In particular, we explore statistical and model-based methods for integrating transcriptomic, proteomic and metabolomic data. Our case studies reflect responses to a systemic inflammatory stimulus and to an anti-inflammatory treatment. Our paper serves partly as a review of existing methods and partly as a means to demonstrate, using case studies related to human endotoxemia and response to methylprednisolone (MPL) treatment, how specific questions may require specific methods, thus emphasizing the non-uniqueness of the approaches. Finally, we explore novel ways of integrating -omics information with existing pharmacokinetic/pharmacodynamic models (Figure 1). The ultimate aim of biomedical sciences is to establish a thorough understanding of how these control mechanisms function when in a healthy state, and how the control is lost (or shifted to a new mode) when symptoms of a disease are displayed, in order to explain the observed phenotypic changes with the known paradigms at the molecular level. Our ability to collect information about molecular events in our bodies has tremendously increased with great advancements in technology. However, we still have a long way to go in finding the best ways to fully utilize this information. Life is complex at all scales. From a single cell to the whole body, there are myriad intricate mechanisms that control every aspect of this complexity. A variety of -omics tools are available, each of which makes it possible to observe the physiologic responses at its complementary level. They enable the examination of a broad array of cellular or systemic elements and functions through the use of vast amounts of quantitative or semi-quantitative data from various levels of biological organization, toward a systems approach defined as quantitative systems pharmacology (QSP). We conducted -omics analyses at multiple physiologic levels and integrated them using multiple approaches. 
The analyses described in the sections that follow include metabolic and transcriptional responses to endotoxemia, an experimental model in humans that recapitulates the dynamics of the systemic inflammatory response. We then switch to an anti-inflammatory therapy and focus on the effects of a commonly used synthetic glucocorticoid in liver. This analysis represents a more direct integration approach, in which we evaluate the concordance of the hepatic response to the drug treatment at the gene and protein expression levels. Finally, we discuss a network-based approach that integrates -omics information with existing pharmacokinetic/pharmacodynamic models. This paper is centered on integrating information from multiple physiologic levels. We focused on how critical relationships are shaped over time during the development of the response to a systemic inflammatory stimulus and in response to an anti-inflammatory treatment. The systems approach allowed us to track the continuum of physiologic responses through their evolution and in relation to multiple dynamics running in harmony. We extracted the coherent dynamic responses represented in the -omics data. The answer to \u201cwhy -omics\u201d is straightforward: life is a complex organization of functional entities, each manifesting the actions and activities of appropriate levels through corresponding markers at a suitable level, broadly described as -omics data. Therefore, the -omics data provide a snapshot of the system across multiple levels of organization. The suffix \u201come\u201d is used to identify groups of objects sharing common characteristics, either descriptive or functional. At an elementary level we routinely think of genomic (sequence of the genome), transcriptomic (expression of the genome), proteomic (expression of the proteome) and metabolomic (expression of the metabolome) data, and the list is continuously expanded to include the epigenome. 
However, -omics data reflect different processes, principally within the same cell type, reflecting a sequence of events. Likely the most characteristic example of this is transcriptomic and proteomic information. The fundamental question arising here is how to integrate the longitudinal information quantifying gene and protein expression simultaneously, for a particular cell type, in response to an external perturbation or environmental condition. In a simplistic way, one can view this problem as an extension of the so-called central dogma. We demonstrate the alternatives by focusing on the analysis of transcriptomic and proteomic data obtained from the in vivo response of a rodent model following bolus administration of synthetic glucocorticoids. Studies focusing on understanding the relationship between global mRNA transcription and protein translation have produced mixed results, many of which concluded that transcriptomic and proteomic data are far from being easily described as complementary. In a number of previous publications, we have illustrated the analysis of longitudinal data with a time-varying baseline. Corticosteroids (CS), such as MPL, are widely used anti-inflammatory and immunosuppressive agents for the treatment of many inflammatory and auto-immune conditions including organ transplantation, rheumatoid arthritis, lupus erythematosus, asthma and allergic rhinitis. The mechanisms of CS action have been described by pharmacokinetic/pharmacodynamic models, Figure 3 (top). The models were progressively enhanced to capture the effects of the drug under several doses and dosing regimens. However, these models were based on data generated by traditional message quantification methods that only allow measurements of single end points. Because of the diverse effects of CS and the different molecular mechanisms potentially involved in these actions, a high-throughput transcriptomic, i.e., microarray, approach was effective in gaining a better understanding of the temporal and tissue-specific effects of CS on different pathways and functions. 
Without considering any type of functional relation between transcriptomic and proteomic data, each can be considered independently. Wistar rats were injected with 50 mg/kg methylprednisolone (MPL) intramuscularly and sacrificed at 12 different time points between 0.5 and 66 h post-dosing. Five animals, injected with saline and sacrificed at random time points in the same time window, served as controls. In order to remove the high concentrations of blood protein, it was necessary to use perfused tissue for proteomic analyses, which precluded the use of the same tissues employed for transcriptomics (below). Proteins from perfused and flash-frozen livers were extracted, digested and analyzed using a nano-LC/LTQ/Orbitrap instrument. The Nano Flow Ultra-high Pressure LC system (nano-UPLC) consisted of a Spark Endurance autosampler and an ultra-high pressure Eksigent Nano-2D Ultra capillary/nano-LC system, with an LTQ Orbitrap mass spectrometer used for detection. Protein quantification was based on the area under the curve (AUC) of the ion-current peaks. A more extensive description of the experimental setup and the analytical methodology can be found in the previously published study. Forty-three ADX Wistar rats were given a bolus dose of 50 mg/kg MPL intravenously. Animals were sacrificed at 16 different time points between 0.25 and 72 h post-dosing. Four untreated animals sacrificed at 0 h served as controls. The mRNA expression profiles of the liver were assayed via Affymetrix GeneChips Rat Genome U34A, which contained 8800 full-length sequences and approximately 1000 expressed sequence tag clusters. 
All animal experiments were performed at the University at Buffalo; protocols adhered to the \u201cPrinciples of Laboratory Animal Care\u201d and were approved by the University at Buffalo IACUC committee. Assessment of the problem greatly reflects the approach(es) taken, whereas simultaneous consideration enables us to consider the following question: what is the dynamic correlation of the subset of MPL-regulated transcripts and proteins? As mentioned earlier, and shown later, the relation is far from trivial. However, such an analysis only depicts part of the picture. It is highly likely that some transcriptional events do not manifest themselves at the protein level, and likewise post-transcriptional alterations may not involve transcriptional alterations. For such an analysis, the two -omics datasets need to be separated and the analyses performed individually. Therefore, in the former case the mining of the data is concurrent but the functional interpretation separate, whereas in the latter the mining of the data is separate but the functional analysis concurrent. The case study discussed in the following section demonstrates both approaches. Once again, it is important to realize that we only discuss two of the many questions which could be posed and addressed, and the presentation is by no means exhaustive or all-encompassing. This analysis will help us identify the genes which were differentially expressed at both the transcriptional and translational levels. Data analysis for both proteomic and transcriptomic datasets started by filtering for differential expression over time. Proteins and transcripts with differential temporal profiles were determined using EDGE; functional analysis employed IPA (Ingenuity\u00ae Systems), Figure 4 (top). In order to find potential co-regulatory relationships at these two levels, hierarchical clustering was used for first-pass analysis. 
For this purpose, temporal transcriptomic and proteomic data for the common genes were first concatenated and then clustered using the clustergram function in the Bioinformatics toolbox of MATLAB. The two clusters were obtained by using correlation as the distance metric. The workflow is illustrated in Figure 4. The -omics analyses were obtained from samples collected in two independent studies; the strain of experimental animals, dose and type of pharmacologic agent, sampled tissue, sampling procedures, and most of the time points for sample collection were the same for these studies. These conditions allowed us to assume that the experiments are similar enough to conduct individual and integrated bioinformatics analyses. This study aimed to compare and contrast the transcriptional and translational changes in liver induced by exposure to a synthetic CS at a pharmacological dose. The preprocessing before performing the first-pass analysis involved identifying the significant genes whose transcripts and proteins both existed in the individual datasets. Differential expression analysis through EDGE identified that 475 out of 959 proteins and 1624 out of around 8800 transcripts had temporal profiles that significantly varied over time (p-value < 0.05 and q-value < 0.01 cut-offs). After this filtering step, both datasets were fed into IPA in order to match the distinct identifiers used (Swiss-Prot IDs for proteins and Affymetrix IDs for transcripts). A comparison between the two datasets indicated that 163 genes were commonly found in both transcriptomic and proteomic datasets; i.e., both mRNAs and proteins corresponding to these genes were differentially expressed over time. 
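The concatenate-then-cluster step (the MATLAB clustergram analog) can be sketched in Python with SciPy, using correlation as the distance metric as in the text; the temporal profiles below are synthetic stand-ins for concatenated mRNA + protein data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
t = np.linspace(0, 66, 12)                       # sampling times (h)
up = np.exp(-((t - 10.0) / 8.0) ** 2)            # induced profile
down = -up                                        # repressed profile

# Hypothetical genes: rows are concatenated temporal profiles,
# five induced and five repressed, with measurement noise.
X = np.vstack([up + 0.05 * rng.normal(size=t.size) for _ in range(5)]
              + [down + 0.05 * rng.normal(size=t.size) for _ in range(5)])

# Correlation distance + average linkage, cut into two clusters.
Z = linkage(pdist(X, metric="correlation"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Correlation distance groups genes by profile shape rather than magnitude, which is why the induced and repressed families separate cleanly into the two clusters described in the text.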
Functional annotations of proteins and transcripts at each level of analysis were conducted in IPA by running a core analysis for each cluster and evaluating the enriched canonical pathways and predicted upstream regulators. While the hierarchical clustering analysis described above identifies potential co-regulatory schemes for the genes in the intersection of the transcriptomic and proteomic datasets, it fails to capture the dynamics of the remaining genes, which may also show differences in expression over time even though they do not co-exist in both datasets. In order to evaluate the overall dynamic patterns and extract the most useful information integrating these two datasets, a consensus clustering method was applied. Summarizing observations described in great detail elsewhere, we note the \u201cGlue Grant,\u201d aiming at providing a blue-print of the host response to injury and trauma based on sophisticated analysis of blood samples. Administration of endotoxin (LPS) to healthy human subjects has been used as a reproducible experimental procedure providing mechanistic insights into how cells, tissues and organs respond to systemic inflammation. Low doses of LPS transiently alter many physiologic and metabolic processes in a qualitatively similar manner to those observed after acute injury and systemic inflammation. Response to endotoxemia is closely associated with alterations in metabolism. Inflammatory processes change the direction of substrate flow from the periphery toward the splanchnic organs while also triggering the release of catabolic signals in order to meet increased energy and substrate demands. Analysis of the complete metabolic response to systemic inflammation is of special interest, since the metabolic composition of a tissue is uniquely altered in response to stimuli due to the collective effects of regulation at various levels of cellular processes, including transcription, translation and signal transduction. 
Concentrations of metabolites in a sample at a given time, i.e., the \u201cmetabolome\u201d, can be profiled comprehensively. Global transcriptomic studies of circulating leukocytes in experimental human endotoxemia previously elucidated the intricate regulatory schemes governing the inflammatory response. Building on this knowledge, we integrated the transcriptional response of leukocytes with the systemic metabolic response to understand how inflammation-induced changes in the composition of plasma, in turn, affect the transcriptional processes in the leukocytes. Subjects were administered National Institutes of Health (NIH) Clinical Center Reference Endotoxin at a bolus dose of 2 ng/kg body weight, as previously described; four subjects received saline. Blood samples were collected at t = 1, 2, 6, and 24 h from both groups; samples were inventoried and stored at -80\u00b0C until the analysis. Metabolomic analysis was performed by Metabolon according to previously published methods. For transcriptomics, blood samples were collected before (t = 0 h) and 2, 4, 6, 9, and 24 h after LPS administration. Leukocytes were recovered by centrifugation; total cellular RNA was isolated from the leukocyte pellets and hybridized onto Hu133A and Hu133B oligonucleotide arrays (Affymetrix). Further details about the experimental design are presented in the original analysis. Data analysis for both transcriptomic and metabolomic datasets started by filtering for differential expression over time. Transcripts and metabolites with differential temporal profiles were determined using EDGE software, and the two clusters were obtained by using correlation as the distance metric. Pathway enrichment was assessed with the Fisher exact test p-value, the z-score of the deviation from the expected rank, and a combined score that multiplies the log of the p-value computed with the Fisher exact test by the z-score. The pathways which have a combined score higher than 1.0 were called significant. 
The combined score was devised because the Fisher exact test has a slight bias that affects the ranking of terms solely based on the length of the gene sets in each gene-set library. While components at some elementary level augment the number of descriptors, the augmentation is not passive, i.e., it is not simply increasing the dimensionality of the space. As our results demonstrated earlier, transcriptional and proteomic expression patterns roughly correlate for some of the genes, yet for others the dynamics are more unexpected. One way to work with the existing PK/PD models would be teasing out the protein counterparts of the transcriptional clusters that are described by the observed dynamics and examining the potential mechanisms that could explain the observed protein expression profiles corresponding to the same genes. Another approach is considering the physiologic response as a systems response composed of the dynamics of individual elements. However, studies focusing on understanding the relationship between global mRNA transcription and protein translation have produced mixed results, often concluding that transcriptomic and proteomic data are far from being easily described as complementary. We will present a preliminary study applying this second approach. In our study, the driver of the response is the drug-receptor complex in the nucleus; transcripts and proteins, the nodes of the network, are the individual elements with diverse dynamics. The observed phenotype reflects the systems response arising from the dynamics of these individual elements, and these elements include genes and proteins directly and indirectly affected by MPL. The number of target genes to be included in the network depends on the available biological data, literature information about the interaction of the nodes, as well as the desired complexity level. 
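The enrichment scores described earlier (Fisher exact p-value and the combined score c = log(p) * z) can be sketched with the standard library; the helper names are ours, and the hypergeometric tail is the usual one-sided overlap test:

```python
import math

def fisher_overlap_p(k, K, n, N):
    """One-sided Fisher exact (hypergeometric tail) p-value for observing
    at least k genes from a gene set of size K in a list of n genes drawn
    from a universe of N genes."""
    p = 0.0
    for i in range(k, min(K, n) + 1):
        p += math.comb(K, i) * math.comb(N - K, n - i) / math.comb(N, n)
    return p

def combined_score(p_value, z_score):
    """Combined score c = log(p) * z, as described in the text.

    z is the z-score of the deviation from the expected rank (negative for
    better-than-expected ranks), so small p and negative z give large c.
    """
    return math.log(p_value) * z_score
```

With the threshold from the text, a pathway would be called significant when `combined_score(...)` exceeds 1.0.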
As more nodes are added into the network, the direct and indirect interactions between the elements of the network, as well as the number of parameters to be estimated, increase. We constructed a proof-of-concept initial network. Genes included in this network were selected from the most informative transcripts that are differentially expressed at both the transcriptomic and proteomic levels, Figure 5 (left). The list was further reduced by focusing on genes functionally related to the major metabolic effects of MPL, including hyperglycemia, dyslipidemia and muscle wasting. Refinement and further extension of the network through modeling will yield an accurate representation of the complete effects of the drug. Once the sub-set of genes and proteins of interest has been identified, we then need to establish a network structure expressing putative regulatory relations. A variety of computational methodologies for further refining regulatory network structures, including inference (regression) methods and discrete optimization formalisms, are discussed elsewhere. The network model reflects the fundamental dynamics of the transcription and translation processes through ordinary differential equations of the form d(mRNAi)/dt = ks,m f(P) - kd,m mRNAi and d(Pi)/dt = ks,P mRNAi - kd,P Pi, where f(P) reflects the regulatory action of the various proteins on the transcription of mRNA. The function f(P) formally reflects the likelihood of the regulatory events, and its functional form reflects mechanistic interpretations. In its most general form, and based on thermodynamic arguments, it includes regulatory complexes as well as activation or repression of transcription: Sij defines a regulatory complex, and the coefficients \u03bcij, \u03bbij reflect activation and repression constants, respectively. The equations for mR, R, D, DR and DR(N), and their corresponding kinetic parameters, are as established earlier. 
The dynamics of protein synthesis was assumed in its simplest form (first-order translation and degradation). The normalized data associated with the mRNA and protein levels of the network in Figure 5 were used for estimating the parameters involved in the mRNA and protein dynamics only (the remaining parameters were fixed based on prior deconvolution of the PK/PD model). The MPL plasma concentration (D) exhibits a biexponential decline. Following the binding of the drug to the glucocorticoid receptor (DR), this complex translocates into the nucleus [DR(N)] and acts as the driving force for MPL-induced response patterns. Firstly, this effect is observed as inhibition of mRNA expression for the glucocorticoid receptor (mR), and subsequently the receptor protein (R). DR(N) is also introduced as a stimulatory factor for all of the nodes in the network. The level of mRNA expression is modeled as controlled by the presence of the drug-receptor complex in the nucleus, together with indirect interactions between the proteins of the network. Degradation of mRNA, protein translation from mRNA, and protein degradation are all modeled as linear processes. In this model, the equations for the PK and PD of MPL are as established previously. The preliminary fits, Figure 6, are only beginning to scratch the surface and point to directions for improvement, as they should in any iterative model development process. In general, for most network elements the early dynamics seem to be represented well compared to the later fluctuations in the response. However, significant improvements and refinements are expected. We anticipate the need to develop more complex representations, likely requiring precursor and receptor-mediated indirect responses. By doing so, (1) we may achieve a better rationalization of the information; and (2) perhaps more importantly, we may be able to further advance the frontiers of PK/PD modeling, which could have significant impact. 
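The overall cascade, a biexponential drug decline driving mRNA transcription, which in turn drives protein translation, with linear degradation throughout, can be illustrated with a toy Euler integration; every rate constant here is an arbitrary illustrative value, not a parameter of the published model:

```python
import math

def simulate(t_end=72.0, dt=0.01):
    """Toy sketch (not the published model): biexponential drug profile ->
    stimulated mRNA synthesis -> first-order translation and degradation."""
    k_sm, k_dm = 1.0, 0.5      # mRNA synthesis / degradation (1/h), hypothetical
    k_sp, k_dp = 0.3, 0.1      # translation / protein degradation (1/h), hypothetical
    mrna = k_sm / k_dm                       # baseline steady state
    prot = mrna * k_sp / k_dp                # baseline steady state
    out, t = [], 0.0
    while t <= t_end:
        # Biexponential plasma decline, analogous in shape to D(t).
        drug = 100.0 * (0.7 * math.exp(-0.8 * t) + 0.3 * math.exp(-0.05 * t))
        stim = 1.0 + drug / (drug + 50.0)    # saturable DR(N)-like stimulation
        out.append((t, mrna, prot))
        mrna += dt * (k_sm * stim - k_dm * mrna)   # d(mRNA)/dt
        prot += dt * (k_sp * mrna - k_dp * prot)   # d(P)/dt
        t += dt
    return out
```

The qualitative behavior matches the text: the mRNA response peaks first and the protein response lags behind it, and both relax toward baseline as the drug is cleared.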
This preliminary work introduces an approach for bridging the classic PK/PD modeling efforts with the multi-level systems response. This allows us to explore paths for utilizing the vast amount of information made available by new -omic profiling tools. These tools make it possible to evaluate the response as a whole at a certain biological level over time. The model-based integration approach discussed here ultimately aims to connect this valuable information coming from multiple layers in a useful framework which reflects the continuity of biological events in response to pharmacological stimuli. Achieving such integration will allow the development of model-based approaches for rationalizing the genomic, transcriptomic and proteomic data in the context of integrated dynamic regulatory network models, critical for the development of MPL PK/PD/pharmacogenomics models, enabling us to move beyond using -omics as a complex descriptor toward the development of pharmacologically relevant and predictive computational models. In this review, we have attempted to discuss three topics, based on our own experience. We discussed challenges associated with integrating -omics data. To take -omics integration to the next level, we must expect a number of challenges: (1) as we hinted in the comparison of metabolomic profiles between a controlled human endotoxemia study and clinical cases, -omics data from clinical studies and/or patient populations will unavoidably express and capture many confounding factors beyond the responses elicited by the agent under study; (2) circulating, i.e., systemic, markers pose, as discussed in the manuscript, additional challenges, since tissue-specificity is lost, further complicating the interpretation of the observed responses in a cause-and-effect sense; (3) the temporal granularity of the data will remain a key challenge; the disparities in the temporal resolution of the responses at different physiologic levels will further complicate any data-driven approach. 
In such cases, it is likely that methods aiming at features and/or models \u2013 as discussed in the paper \u2013 will prove more beneficial; (4) patient history, including medication, will constantly nuance the data obtained; (5) despite the ability to probe an ever-increasing number of likely biological descriptors and mediators, leading to an increase in the dimensionality of the \u201cinput\u201d space, the actual \u201coutput\u201d space, that is, the number of subjects, volunteers and/or patients being sampled, will always be lacking, especially if appropriate population stratifications are implemented. This is a classic problem in machine learning, often referred to as classification/feature selection in \u201calmost empty spaces\u201d. Integration can be pursued at (a) the level of the features of the data, or (b) the level of the features of the models that could describe the dynamics of the data. Each method offers distinct advantages and challenges. Considering features of the data, characteristics of the responses and likely important biomarkers are able to describe the intricacies of the response. Features of the models underlying the dynamics of the data, on the other hand, enable a likely quantification of cause-and-effect relations, as well as the likelihood of expressing, and predicting, complex dynamics and emergent behaviors not necessarily obvious while studying the features themselves. However, it is important to realize that the approaches are complementary and are often combined to improve the overall effectiveness of the analysis. To some extent, effectiveness is also an emergent property of the concurrent and multi-pronged analysis of information-rich data. KK, AA, and TN: performed calculations, conducted analyses and edited the manuscript. RA, SC, SC, DD, and WJ: edited the manuscript. 
IA: conceived the studies, and developed the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "To install the package type: devtools::install_github(\"DohertyLab/ExRANGES\"). Organisms respond to changes in their environment through transcriptional regulatory networks (TRNs). The regulatory hierarchy of these networks can be inferred from expression data. Computational approaches to identify TRNs can be applied in any species where quality RNA can be acquired. However, ChIP-Seq and similar validation methods are challenging to employ in non-model species. Improving the accuracy of computational inference methods can significantly reduce the cost and time of subsequent validation experiments. We have developed ExRANGES, an approach that improves the ability to computationally infer TRN from time series expression data. ExRANGES utilizes both the rate of change in expression and the absolute expression level to identify TRN connections. We evaluated ExRANGES in five data sets from different model systems. ExRANGES improved the identification of experimentally validated transcription factor targets for all species tested, even in unevenly spaced and sparse data sets. This improved ability to predict known regulator-target relationships enhances the utility of network inference approaches in non-model species where experimental validation is challenging. We integrated ExRANGES with two different network construction approaches, and it has been implemented as an R package (see the installation command above). Transcriptional regulatory networks (TRN) provide a framework for understanding how signals propagate through a molecular network and result in transcriptomic changes. Understanding these regulatory networks provides access points to modulate these responses through breeding or genetic modifications. The first step in constructing such networks is to identify the primary relationships between regulators such as transcription factors (TFs) and the target genes they control. 
These regulatory networks are biological computational modules that carry out decision-making processes and, in many cases, determine the ultimate response of an organism to a stimulus [2]. Since global transcript levels are comparatively easy to measure in most species and tissues, several approaches have been developed to identify connections between regulators and their targets by examining the changes in transcription levels across many samples [6]. These inference approaches can provide a first approximation of regulatory interactions that can be used to guide experimental approaches. The assumption of these approaches is that the regulatory relationship between a regulator TF and its targets can be discerned from a correspondence between the RNA levels of the regulator gene and its targets. If this is true, then given sufficient variation in expression, the targets of a given factor can be predicted based on associated changes in expression. Initial approaches designed to do this focused on the correlation between regulators and targets, assuming that activators are positively correlated and repressors are negatively correlated with their target expression levels [7]. For almost two decades, these approaches have successfully identified relationships between regulators and targets. Updates to this simple idea have included pre-clustering of transcript data, modifying regression analysis, incorporating training classifier models, and incorporating prior biological knowledge or additional experimental data. Each of these has improved the ability to identify connections between regulators and targets, even in sparse and noisy data sets [11]. For microorganisms, substantial experimental data identifying TF binding locations and the transcriptional response to TF deletions is available and has been organized into efficient databases [14]. 
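The correlation intuition described above can be sketched with a toy example. All gene names and expression values below are invented for illustration; real inference tools use far more sophisticated scoring than a plain Pearson ranking.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def rank_targets(tf_expr, candidates):
    """Rank candidate targets by absolute correlation with the TF.

    `candidates` maps gene name -> expression vector over the same samples
    as `tf_expr`; high |r| suggests activation (+) or repression (-).
    """
    scored = {g: pearson(tf_expr, v) for g, v in candidates.items()}
    return sorted(scored.items(), key=lambda kv: -abs(kv[1]))

# Toy data: geneA tracks the TF (putative activation), geneB is
# anti-correlated (putative repression), geneC is unrelated.
tf = [1.0, 2.0, 3.0, 4.0, 5.0]
targets = {
    "geneA": [1.1, 2.0, 2.9, 4.2, 5.0],
    "geneB": [5.0, 4.1, 3.0, 2.1, 0.9],
    "geneC": [2.0, 2.1, 1.9, 2.0, 2.1],
}
ranking = rank_targets(tf, targets)
```

With this data, geneA and geneB top the ranking with |r| near 1 while geneC falls to the bottom, mirroring the activator/repressor assumption described in the text.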
This approach has enabled the prediction of TRN from expression data not only in unique conditions in the model species where the data was generated, but has also been extended to predict TF-target gene relationships in homologous species [19]. In 2010, the DREAM5 challenge evaluated the ability of different methods to identify TRN from gene expression data sets [10]. One of the top performing methods was GENIE3 [8]. This method uses the machine learning capabilities of random forest to identify targets for selected regulators [21]. Other successfully implemented approaches include SVM [22], CLR [6], CSI [24], ARACNE [5], Inferelator [4], and DELDBN [9]. Common to these methods is the use of transcript abundance levels to evaluate the relationship between a regulator and its putative targets. Experiments performed in time series can provide additional kinetic information useful for associating regulators and targets. Many approaches have been developed that take advantage of the additional information available from time series data, as reviewed in [26]. However, the steady-state transcript level as measured by most high-throughput transcriptional assays such as RNA-Seq is a measure of both transcriptional activity and mRNA stability. Therefore, the correlation between expression levels alone may not provide a direct assessment of transcriptional regulation, as it can be confounded by the RNA stability of the target. Further complicating the identification of regulator relationships is the fact that a single gene can be regulated by different transcription factors in response to different stimuli. Experimental approaches such as Chromatin Immunoprecipitation followed by sequencing (ChIP-Seq) can identify direct targets of transcriptional regulators. However, ChIP-Seq must be optimized for each specific TF, and antibodies must be developed that recognize either the native TF or a tagged version of the protein. 
This can present a technical challenge, particularly for TFs where the tag interferes with function, for species that are not easily transformable, or for tissues that are limited in availability. Here we present an approach that extends current approaches to TRN construction by emphasizing the relationship between regulator and targets at the time points where there is a significant change in the rate of expression. We demonstrate that: 1) focusing on the rate of change captures previously unrecognized characteristics in the data, identifying experimentally validated regulatory relationships not detected by the standard approaches; 2) combining expression level and the rate of change results in an improved identification of experimentally validated regulatory relationships. We integrated this approach with GENIE3 [8], and anticipate that it will offer a similar benefit when combined with other network inference algorithms. We first evaluate the significance of the rate changes at each consecutive time point on a per-gene basis (RANGES). We then combined the expression level and significance of this rate change in ExRANGES (Expression by RANGES) to prioritize the correlation between regulators and targets at time points where there is a significant change in gene expression. ExRANGES improved the ability to identify experimentally validated TF targets in microarray and RNA-Seq data sets across multiple experimental designs, and in several different species. We demonstrate that this approach improves the identification of experimentally validated TF targets using GENIE3 [27]. We compared the results of each approach to the targets identified experimentally using ChIP-Seq for five TFs involved in circadian regulation: PER1, CLOCK, NPAS2, NR1d2, and ARNTL [29]. Targets identified by each computational approach that were also considered significant targets in these published ChIP-Seq experiments were scored as true positive results. 
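Scoring a computational ranking against ChIP-Seq-validated positives, as described above, amounts to a ROC analysis. A minimal sketch using the Mann-Whitney formulation of AUC and invented gene scores (the published work uses the ROCR package in R, not this code):

```python
def roc_auc(scores, positives):
    """AUC as the probability that a validated target outranks a non-target
    (Mann-Whitney / rank-sum formulation; ties count as 0.5)."""
    pos = [s for g, s in scores.items() if g in positives]
    neg = [s for g, s in scores.items() if g not in positives]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical prediction scores from a network inference run and a made-up
# ChIP-Seq positive set; perfect separation gives AUC = 1.0.
scores = {"g1": 0.9, "g2": 0.8, "g3": 0.3, "g4": 0.1}
chip_targets = {"g1", "g2"}
auc = roc_auc(scores, chip_targets)  # 1.0 here: positives outrank all negatives
```

An AUC of 0.5 corresponds to a random ranking; values closer to 1.0 indicate that validated targets are concentrated at the top of the prediction list.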
We calculated the ROC AUC for the five circadian TFs to compare the identification of true targets attained with GENIE3 using EXPRESSION values to the combination of expression and p-values using ExRANGES. We observed that for all five TFs, ExRANGES improved the identification of ChIP-Seq validated targets. We developed ExRANGES, a method that adjusts the expression level based on how much that gene changes in expression in the following time step. Briefly, for each gene, we calculate the significance of each time step; the expression level is then adjusted by this significance factor so that the expression level preceding a major change in expression is emphasized. However, each network identifies different targets when ExRANGES and EXPRESSION are compared across all samples. We also applied both approaches to a data set consisting of seven studies of blood samples from human patients. Multiple samples from an individual were taken over a seven to nine day period, depending on the specific study, and sampling was not evenly spaced between time points. In total 2372 samples were used, providing a background of 2231 consecutive time steps. Overall, the variance between samples was lower for this study than for the circadian study examined above, and the targets identified by ExRANGES showed improved functional enrichment compared to the targets identified by EXPRESSION. Therefore, we compared the ability of ExRANGES and EXPRESSION to identify the OsMADS1 targets identified by L. Khanday et al. Of the 3112 OsMADS1 targets identified by ChIP-Seq, ExRANGES showed an improved ability to identify these targets. The change in expression levels for gene i between time t and time t + 1 was determined for each consecutive time point. Since this data is cyclical, the interval between the last time point and the first time point is also included. 
For the CircaDB data set, the background of consecutive time intervals across the entire time series consists of 288 slopes (12 tissues sampled every 2 h over 48 h). At each time step t, the slope between t and t + 1 was compared to a bootstrapped version of this background generated by sampling 10,000 times with replacement. For each gene, the resulting p-value was calculated by using an empirical cumulative distribution function from the R stats package. This p-value was transformed to the -log10 and the sign of the change in slope was preserved (R script provided). This significance of the change at each time interval is the rate change or \u201cRANGES\u201d value. Overview: we first determine the significance of the change in expression between two consecutive time points on a per-gene basis. For each gene g, we combine the expression at time t with the significance of the change in expression, or RANGES value, from time t to t + 1. Let S be a (gene \u00d7 time) matrix representing a time series experiment, where X_g is a vector of real numbers representing the expression of gene g from time points T_1 to T_N, so X_g(T_1) is the expression of gene g at time point T_1. To calculate the RANGES values, we start with C_g, which represents the changes between all consecutive time points for gene g: C_g(t) = X_g(T_{t+1}) - X_g(T_t). If the data are cyclical, we assume time point 1 can be used as time point N + 1; otherwise, the final interval is disregarded. A bootstrapped background for C_g is calculated for each gene by sampling 10,000 times with replacement, and p-values are determined for each change against this background. The sign of each change is recorded for use in the final RANGES value, which is calculated by taking the -log10 of the p-value and multiplying by the sign of the corresponding change. GENIE3 source code was downloaded from http://www.montefiore.ulg.ac.be/~huynh-thu/software.html on June 14, 2016 [8] and was modified for use with parLapply from the R parallel package [63]. The EXPRESSION network was built by providing the expression values across all samples for both TFs and targets. 
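A minimal sketch of the per-gene RANGES computation described above, assuming a pooled background of slopes and an empirical tail probability; the exact ECDF/tail convention of the published R script may differ from this reading.

```python
import bisect
import math
import random

def ranges_values(expr, background, n_boot=10000, seed=0):
    """Signed -log10 empirical p-values for the change at each interval of
    one gene's time series (treated as cyclical, so the last-to-first
    interval is included). `background` pools consecutive-interval slopes
    from the whole data set, as described for the CircaDB background."""
    rng = random.Random(seed)
    # bootstrapped background, stored as sorted absolute slopes
    boot = sorted(abs(rng.choice(background)) for _ in range(n_boot))
    changes = [expr[(t + 1) % len(expr)] - expr[t] for t in range(len(expr))]
    out = []
    for c in changes:
        # fraction of bootstrapped background slopes at least as extreme as c
        n_ge = n_boot - bisect.bisect_left(boot, abs(c))
        p = max(n_ge / n_boot, 1.0 / n_boot)  # floor avoids log10(0)
        out.append(math.copysign(-math.log10(p), c) if c else 0.0)
    return out

# Invented numbers: one sharp rise and fall against a quiet background,
# which should yield large positive and negative RANGES values.
vals = ranges_values([1.0, 1.05, 3.0, 1.02],
                     [-0.1, 0.05, -0.02, 0.08, 0.01, -0.06])
```

The large jump (1.05 to 3.0) and drop (3.0 to 1.02) get RANGES values near +4 and -4 (the floor at 1/n_boot), while the small changes stay close to zero.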
The ExRANGES network used the ExRANGES value for both TFs and targets. For example, for the CircaDB data, we considered 1690 murine TFs as the regulators [64]. For both approaches, all TFs were also included in the target list to identify regulatory connections between TFs. To implement GENIE3, we used 2000 trees for random forest for all data sets except the viral data set. Due to the size of the data set, we limited the viral data set to 100 trees. The importance measure from the random forest was calculated using the mean decrease in accuracy upon random permutation of individual features. This measure is used as the prediction score for TF-target relationships. For INFERELATOR, the TF and target labels are identical to those used in GENIE3. Time information in the form of the time step between each sample was added to satisfy time course conditions as a parameter; default values were used for all other parameters. Only confidence scores of TF-target interactions greater than 0 were evaluated against ChIP-Seq standards. The confidence scores were used as the prediction score for TF-target relationships. ROC values were determined by the ROCR package in R [65]. The computationally determined prediction score and the targets from the respective experimental validation were used as the metric to evaluate the performance function. The area under the ROC curve (AUC) is presented to summarize the accuracy. All data and scripts are either taken from existing public data or are available in the Supplementary Information."}
+{"text": "The liver and the kidney are the most common targets of chemical toxicity, due to their major metabolic and excretory functions. However, since the liver is directly involved in biotransformation, compounds in many currently and normally used drugs could affect it adversely. Most chemical compounds are already labeled according to FDA-approved labels using the DILI-concern scale. Drug Induced Liver Injury (DILI) refers to an adverse drug reaction. Many compounds do not exhibit hepatotoxicity at early stages of development, so it is important to detect anomalies at the gene expression level that could predict adverse reactions in later stages. In this study, a large collection of microarray data is used to investigate gene expression changes associated with hepatotoxicity. Using TG-GATEs, a large-scale toxicogenomics database, we present a computational strategy to classify compounds by toxicity levels in human and animal models through patterns of gene expression. We combined machine learning algorithms with time series analysis to identify genes capable of classifying compounds by FDA-approved labeling as DILI-concern toxic. The goal is to define gene expression profiles capable of distinguishing the different subtypes of hepatotoxicity. The study illustrates that expression profiling can be used to classify compounds according to different hepatotoxic levels; to label those that are currently labeled as undetermined; and to determine if, at the molecular level, animal models are a good proxy to predict hepatotoxicity in humans. Toxicogenomics is a field that integrates data from high-throughput technologies into research on conventional toxicology. 
Major goals include elucidating the molecular mechanisms of toxicity, identifying potential biomarkers for exposure to toxic substances, and developing predictive approaches. Drug-induced liver injury (DILI) refers to an adverse drug reaction, which remains a major problem in drug development and pharmacotherapy, representing clinical and financial challenges. Compounds are labeled according to Federal Drug Administration (FDA) approved labels using the DILI-concern scale. Since many drugs act by binding to protein targets and altering their function, one foundational assumption in toxicogenomics is that exposure to a toxicant leads to altered gene expression, either directly or indirectly. Analysis of such a large and comprehensive toxicogenomics database has been a great challenge ever since it was released through the Critical Assessment of Massive Data Analysis (CAMDA) [http://dokuwiki.bioinf.jku.at/doku.php] challenge in 2013. From the complete list of 170 compounds from TG-GATEs, only 48 were administered to in vivo and in vitro samples in rat and human models. A subset of 4,578 microarrays interrogating those 48 compounds, mostly drugs, was used to analyze hepatotoxicity patterns through gene expression profiling. Only three models were included in the analysis: Rat in vivo, Rat in vitro, and Human in vitro using primary hepatocytes. For each individual compound there was a control and three administered dose levels {Low, Middle, High} during a 24-hour period. Three time measurements were sampled for the in vitro models and four for the in vivo model on gene expression microarrays, including two and three biological replicates respectively. Raw data in the form of .CEL files, each corresponding to a microarray, were stored for later access from R through a SQL database. 
Human primary hepatocytes were processed using the Affymetrix HGU133Plus2 array, and animal samples on the GeneChip Rat Genome 230 2.0, which is known to be a powerful tool for toxicology. Data storage, access and manipulation were done using a relational database. Raw-data analysis included quality metrics using R and Bioconductor libraries affy and oligo. Data were normalized using the Robust Multiarray Average (RMA) algorithm. We propose a strategy that combines data from the three models with dose levels in a time series approach on each of the 48 selected compounds. Two main goals motivate the analysis: to determine if, at the molecular level, animal models are a good proxy to predict hepatotoxicity in humans, and to classify compounds by toxicity levels in human and animal models through patterns of gene expression. To achieve these goals, we combined machine learning algorithms with time series analysis to classify genes whose absolute or relative expression varies over time, incorporating at the same time the correlation structure across time points. For this, we used the time series method proposed by Tai and Speed. The dynamic nature of the data was explored using a time series analysis on every compound with the timecourse package, available at https://www.bioconductor.org/packages/release/bioc/html/timecourse.html. Further analysis using machine learning methods allowed us to learn interesting patterns on already ranked genes through unsupervised hierarchical clustering on MB values after filtering them with the Median Absolute Deviation (MAD). Database implementation and data retrieval through R made all of the time course analyses time-efficient. The list of the 48 chemical compounds used in this work is presented in the accompanying table. Genes were ranked based on variation over time as a function of the drug concentration in relation to their replicate variances. In other words, the time series analysis considered three parameters to determine the gene ranking. 
The first parameter concerns differential expression across time: there should be substantial changes in expression values between two or more consecutive time points. This is exemplified by the dynamic range of the y-axis in the time-course plots for cytochrome P450 family 1 subfamily A, at the top of the list ranked #1. This gene is part of a pathway for xenobiotic metabolism. The Aspirin (ASA) time course, in contrast, does not show any of these patterns: replicates have an erratic behavior, and differential expression represented by the values on the y-axis is almost negligible and unable to separate effects by dose; it is therefore ranked #73. Phenytoin is an anticonvulsant used to treat a wide variety of seizures, and reports about DNA damage have been published. DNA damage inducible transcript 3 is ranked #8 with MB = 24.6 when the high dose is administered. This is the kind of information we would look for in an attempt to identify hepatotoxic biomarkers either by time of exposure and/or dose concentration. Carbon tetrachloride (CCL4) is a solvent for manufacturing organic compounds; its primary effects in humans are on the liver, kidneys, and central nervous system. Poisoning by inhalation, ingestion or skin absorption is possible and may be fatal. Administration of this compound in human hepatocytes showed a marked effect on this gene. Once ranked, a list of the 160 most variable genes according to the MAD of the Human in vitro samples was selected for hierarchical clustering analysis on the MB values, and compounds (columns) vary clearly according to significance levels based on the MB values for each model. For our second goal, we classified compounds by toxicity levels in human and animal models through patterns of gene expression. Despite the large amount of microarray data available from TG-GATEs, its complex structure with many groups and lack of replication makes it difficult for pattern recognition approaches. 
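The MAD filter used above to pick the most variable genes before clustering can be sketched as follows. The MB values are toy numbers; the actual analysis runs in R on the output of the timecourse package.

```python
from statistics import median

def mad(values):
    """Median absolute deviation from the median: a robust spread measure."""
    m = median(values)
    return median(abs(v - m) for v in values)

def top_by_mad(mb_matrix, k):
    """Select the k genes whose MB values vary most across compounds,
    measured by MAD, mirroring the filter applied before clustering.

    `mb_matrix` maps gene -> list of MB values (one per compound).
    """
    return sorted(mb_matrix, key=lambda g: mad(mb_matrix[g]), reverse=True)[:k]

# Toy MB values: "var" spreads widely, "mid" a little, "flat" not at all.
mb = {"flat": [1, 1, 1, 1], "var": [0, 5, 10, 20], "mid": [1, 2, 3, 4]}
selected = top_by_mad(mb, 2)  # ["var", "mid"]
```

MAD is preferred over variance here because a single outlying compound cannot dominate the spread estimate.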
Feature selection, clustering and other classification techniques did not perform well directly on gene expression values. Combining information from dose, time, and animal or human models for each compound requires a more comprehensive method. The time series analysis developed by Tai and Speed provided a methodology to rank genes using all their features, such as dose, time of exposure and differential expression, even with low replication. Classification by ranking using the MB statistic, a one-sample multivariate empirical Bayes statistic that selects differentially expressed genes from replicated microarray time course experiments, allowed us to summarize time changes, dose concentration, quality of replicates and significant differential expression, representing a fast and appropriate way to reduce the complexity of the highly diverse data structure. Unsupervised hierarchical clustering was later applied to the MB statistical values, showing rather different patterns for the Human in vitro, Rat in vitro and Rat in vivo models, indicating that animal models can indeed correlate with the human model for highly toxic compounds but show a diverse pattern in general. Unsupervised machine learning techniques provided a set of genes capable of classifying compounds as DILI-concern toxic according to the FDA-approved labelling. Compounds already known to be highly toxic clustered with poisonous model compounds, suggesting a possible list of toxic biomarkers. The case of Phenobarbital (PB) might be of interest: it is labeled as Less-DILI-concern but appeared clustered together with highly toxic compounds. 
The analysis pipeline used for this work can be reproduced on other data sets involving a different list of compounds and all doses or just a particular selection of doses. S1 Table: This is the matrix that contains the MB values for the Human in vitro model obtained from the Timecourse package. (CSV) S2 Table: This is the matrix that contains the MB values for the Human in vitro model obtained from the Timecourse package. (CSV) S3 Table: This is the matrix that contains the MB values for the Human in vitro model obtained from the Timecourse package. (CSV) S4 Table: Table that includes the top 160 genes of each experiment model ordered by MAD using the MB statistic as input. In the second tab is the order of the compounds presented in each of the hierarchical clustering dendrograms. (XLSX)"}
+{"text": "An alternative splicing isoform switch is where a pair of transcript isoforms reverse their relative expression abundances in response to external or internal stimuli. Although computational methods are available to study differential alternative splicing, few tools for detection of isoform switches exist, and these are based on pairwise comparisons. Here, we provide the TSIS R package, which is the first tool for detecting significant transcript isoform switches in time-series data. The main steps of TSIS are to search for the isoform switch points in the time-series, characterize the switches and filter the results with user input parameters. All the functions are integrated into a Shiny App for ease of implementation of the analysis. The TSIS package is available on GitHub: https://github.com/wyguo/TSIS. Regulation of gene expression by alternative splicing (AS) generates changes in abundance of different transcript isoforms. One particular splicing phenotype is isoform switching, where the relative abundance of different isoforms of the same gene is reversed in different cell types or in response to stimuli. Isoform switches often play pivotal roles in re-programming of gene expression, and isoform switches of functionally different transcript isoforms between normal and tumor tissues provide signatures for cancer diagnostics and prognostics. TSIS detects pairs of AS transcripts with one or more isoform switches and genes with multiple pairs of transcripts which show isoform switches. By defining five metrics of the isoform switch, the method comprehensively captures and describes the isoform switches occurring at different points in time-series data. TSIS analysis can be carried out using command lines as well as through a graphic interface using a Shiny App, where the analysis can be implemented easily. TSIS fits the expression of each pair of isoforms over time and finds the cross points of the fitted curves. 
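The switch-point search can be illustrated with a small sketch that locates sign changes in the difference between two isoform profiles, in the spirit of an average-based search over the measured points (TSIS's spline variant is not shown, and all values below are invented):

```python
def switch_points(iso1, iso2, times):
    """Find isoform-switch points: places where the expression difference
    between two isoforms changes sign, located by linear interpolation."""
    d = [a - b for a, b in zip(iso1, iso2)]
    points = []
    for t in range(len(d) - 1):
        if d[t] == 0:
            points.append(times[t])          # exact tie at a measured point
        elif d[t] * d[t + 1] < 0:
            # interpolate the crossing time within the interval
            frac = abs(d[t]) / (abs(d[t]) + abs(d[t + 1]))
            points.append(times[t] + frac * (times[t + 1] - times[t]))
    return points

# Toy profiles: isoform 1 rises while isoform 2 falls, switching once.
crossings = switch_points([1, 2, 4, 5], [5, 4, 2, 1], [0, 1, 2, 3])  # [1.5]
```

Each returned time is a candidate switch point; a real analysis would then characterize the flanking intervals (significance, duration, correlation) before and after each crossing.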
The spline method is useful to find global trends of time-series data when the data are noisy. However, it may lack details of isoform switches in the local region. It is recommended that users use both the average and spline methods to search for the switch points and examine the results manually when inconsistent results are produced by the two methods. The intersection points determined in Section 2.1 divide the time-series frame into intervals, and each switch point is flanked by an interval before the switch and an interval after the switch. We define five metrics for each switch point P_i. Expression differences within the flanking intervals are assessed with t-tests to generate P-values for each interval. Metric 4 is a measure of whether the effect of the switch is transient or long lived. Metric 5: isoforms with high negative correlations across the time-points may identify important regulation in alternative splicing; thus we also calculated the Pearson correlation of the two isoforms across the whole time-series. The Arabidopsis circadian clock genes AT1G01060 (G2), AT5G37260 (G29) and AT3G09600 (G12) were successfully detected and are discussed in detail."}
+{"text": "We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10\u00a0000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub. Genetic regulatory networks capture the complex relationships between biological entities, which help us to identify putative driver and passenger genes in various diseases [1, 2]. Many methods have been proposed for inferring such networks from expression data. In this paper, we present fastBMA, which is completely written in C++ and uses more efficient and scalable regression and hashing methods. The algorithmic improvements increase the speed by a factor of 30 on smaller sets (Fig. A). A final feature of fastBMA is the implementation of a new method for eliminating redundant indirect edges in the network. The post-processing method can also be used separately to eliminate redundant edges from networks inferred by other methods. The code is open source (M.I.T. license). 
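The mapping of transitive reduction to a shortest-path problem can be sketched as follows: multiplying edge confidences along a path corresponds to adding -log costs, so a direct edge is redundant when a strictly cheaper indirect route exists. This toy version uses Floyd-Warshall on invented confidences; fastBMA's actual implementation is a more efficient formulation of the same idea.

```python
import math

def transitive_reduction(edges, nodes):
    """Drop a direct edge when some indirect path explains it better.

    `edges` maps (u, v) -> confidence in (0, 1]. Path confidence is the
    product of edge confidences, i.e. the sum of -log costs, turning
    'a better indirect path exists' into a shortest-path test.
    """
    INF = float("inf")
    best = {u: {v: (0.0 if u == v else INF) for v in nodes} for u in nodes}
    for (u, v), w in edges.items():
        best[u][v] = min(best[u][v], -math.log(w))
    for k in nodes:                      # all-pairs shortest paths
        for i in nodes:
            for j in nodes:
                if best[i][k] + best[k][j] < best[i][j]:
                    best[i][j] = best[i][k] + best[k][j]
    kept = {}
    for (u, v), w in edges.items():
        indirect = min((best[u][k] + best[k][v]
                        for k in nodes if k not in (u, v)), default=INF)
        if indirect >= -math.log(w):     # no strictly better indirect route
            kept[(u, v)] = w
    return kept

# Toy network: a -> b -> c at 0.9 each explains the weak direct a -> c (0.5).
nodes = ["a", "b", "c"]
edges = {("a", "b"): 0.9, ("b", "c"): 0.9, ("a", "c"): 0.5}
kept = transitive_reduction(edges, nodes)  # a -> c is removed
```

Because all -log costs are non-negative, a strictly cheaper alternative route cannot itself depend on the direct edge being tested, so the per-edge check is safe.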
fastBMA is available from GitHub and as an R package. We can formulate gene network inference as a variable selection problem where the dependent variable (target gene expression) is modeled as a function of a set of predictor variables (regulatory gene expression). A regression model can be formed by fitting equation (1): X_i = \u03b2_0 + \u03a3_{j \u2208 H} \u03b2_j X_j + \u03b5 (1), where X_i is the expression level of gene i, H is the set of regulators for gene i in a candidate model, the \u03b2's are the regression coefficients, and \u03b5 is the error term. Time series data can also be modeled by using the expression at the previous time point to predict the next time point: X_{i,t} = \u03b2_0 + \u03a3_{j \u2208 H} \u03b2_j X_{j,t-1} + \u03b5 (2), where X_{i,t} is the expression level of gene i at time t, for genes i = 1\u2026n and time t = 2, \u2026, T. The posterior probability that gene j is a regulator of gene i is the sum of the posterior probabilities of all candidate models that include gene j in the set of regulators of i. This posterior probability becomes the weight of the edge drawn from gene j to gene i in the gene network. Estimates of the weights from prior knowledge can be used to seed the calculation of models to increase accuracy. Alternatively, a set of uniform starting weights based on the average number of edges observed in biological networks can be used when there is no additional information. Brent minimization is used to find the value of the parameter g in the interval that gives rise to the set of models with the highest total marginal probability. A graph is constructed by drawing edges between genes with an edge weight equal to the average posterior probability of the regulator over the set of reasonable models. Transitive reduction is applied to this graph to remove edges that can be adequately explained by a better indirect path. A final graph is constructed by retaining edges with weights greater than a given cutoff. The core approach for fastBMA is similar to that used by ScanBMA. 
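The time-lagged formulation, where expression at the previous time point predicts the next, reduces for a single candidate regulator to ordinary least squares on shifted vectors. A toy stand-in for the per-model fit (the real method scores many candidate regulator sets and averages over models):

```python
def fit_lagged(target, regulator):
    """Least-squares fit of X_i(t) = b0 + b1 * X_j(t-1): the single-regulator
    case of the time-series regression, with closed-form coefficients."""
    y = target[1:]        # t = 2 .. T
    x = regulator[:-1]    # previous time point
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return b0, b1

# Toy series built so that target(t) = 2 * regulator(t-1) + 1 exactly.
b0, b1 = fit_lagged([0, 3, 5, 7, 9], [1, 2, 3, 4, 5])
```

In a BMA setting, many such candidate models are fit, and the edge weight for regulator j is the summed posterior probability of all models containing j, rather than a single fit's coefficient.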
The best models are found using ScanBMA\u2019s search strategy, starting from an initial value of g, with Brent minimization then applied. There are 4 major algorithmic improvements that increase the speed, scalability, and accuracy of fastBMA: (1) parallel and distributed implementation; (2) faster regression by updating previous solutions; (3) probabilistic hashing; and (4) post-processing with transitive reduction. Parallelization can be accomplished by using a shared memory system, such as OpenMP. Inferring the entire regulatory network involves finding the regulators for every gene in the set. Since each of these determinations is carried out separately, each thread or process can be assigned the task of finding the regulators for a subset of genes in the set. When OpenMP is used, it provides a scheduler that dynamically assigns the regression calculations for a given gene to each thread. Threads work simultaneously on their tasks and receive a new task when they finish the previous task. All threads share access to memory, and the same input data for the regression is available to all the threads. The parallel code only extends to the regression loop; the final transitive reduction post-processing and output is done by a single thread. When MPI is used, we initially split the tasks evenly among the available CPUs. In the case of MPI processes, memory is not shared. Instead, the input data is read by a master process and distributed to all the participating processes using MPI\u2019s broadcast command. All processes then work on their tasks simultaneously in parallel and send messages to all the other processes so that all processes know which tasks are being worked upon. The length of time required for each calculation varies considerably, and, as a result, some processes will finish before others. A process that finishes early then works on tasks initially assigned to other processes that have not yet been started. 
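The dynamic per-gene task scheduling described above can be mimicked with a thread pool: each worker pulls the next unfinished gene, much like OpenMP's dynamic scheduler over shared memory. The per-gene "regression" here is replaced by a dummy covariance-based pick purely for illustration; all data are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def find_regulators(gene, expression):
    """Stand-in for the per-gene regression scan: pick the other gene with
    the largest absolute covariance as a single dummy 'regulator'."""
    target = expression[gene]
    def cov(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y))
    others = [g for g in expression if g != gene]
    return gene, max(others, key=lambda g: abs(cov(expression[g], target)))

expression = {
    "g1": [1.0, 2.0, 3.0, 4.0],
    "g2": [1.1, 2.1, 2.9, 4.2],   # tracks g1
    "g3": [4.0, 1.0, 3.5, 0.5],
}
# All workers read the same shared expression data, one task per gene.
with ThreadPoolExecutor(max_workers=4) as pool:
    network = dict(pool.map(lambda g: find_regulators(g, expression),
                            expression))
```

An MPI-style version would instead broadcast `expression` to separate processes and have early finishers steal unstarted tasks, as the text describes.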
When all the regulators for all the genes have been found, a master process gathers the predictions, performs transitive reduction post-processing, and outputs the final complete network. OpenMP can also be used in conjunction with MPI to further subdivide the tasks among threads available to a CPU. Even with the above parallel implementation, each individual calculation of regulators is still accomplished by a single process. If the regression procedure is too slow, this step can be rate-limiting for large numbers of genes regardless of the number of processors available. ScanBMA uses Cholesky decomposition to triangularize the regression matrix and obtain the regression coefficients through back substitution. These calculations have a time complexity of O(n^3), where n is the number of variables in the model. However, in the case of fastBMA, new regression models are based upon the previous models and involve the addition or removal of a single variable. It is possible to use the triangular matrix of the previous model to calculate the triangular matrix and regression coefficients for the new model. fastBMA's new C++ implementation of this update algorithm is based on the Fortran code from the qrupdate library. The time required for Cholesky decomposition becomes O(n^2) when updating the previous solution. Average sampled model sizes for typical applications range between 5 and 20, and this would be the expected speedup when using a single thread. However, fastBMA further optimizes the implementation by pre-calculating matrix multiplications and using lower-level linear algebra routines from OpenBLAS. In order to understand the necessity and efficacy of the new probabilistic filter used by fastBMA, we must first understand the limitations of the simple hash table used by ScanBMA. Before evaluating a newly generated model, ScanBMA checks to see if that model has been previously evaluated.
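Before turning to the hashing details, the O(n^2) factor update described above (versus an O(n^3) re-decomposition) can be sketched in pure Python. This is a simplified stand-in for the qrupdate-based C++ routine, operating on the Gram matrix of the regression.

```python
import math

def cholesky_upper(A):
    """Return upper-triangular R with R^T R = A (A symmetric positive definite)."""
    n = len(A)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

def forward_sub(R, b):
    """Solve R^T w = b, where R is upper triangular (so R^T is lower)."""
    n = len(b)
    w = [0.0] * n
    for i in range(n):
        w[i] = (b[i] - sum(R[k][i] * w[k] for k in range(i))) / R[i][i]
    return w

def chol_add_variable(R, new_col, new_diag):
    """O(k^2) update: given R for the k x k Gram matrix, the cross products
    X^T x_new (new_col) and x_new^T x_new (new_diag) for one added variable,
    return the factor of the (k+1) x (k+1) Gram matrix without refactorizing."""
    w = forward_sub(R, new_col)
    d = math.sqrt(new_diag - sum(v * v for v in w))
    k = len(R)
    R_new = [row[:] + [w[i]] for i, row in enumerate(R)]
    R_new.append([0.0] * k + [d])
    return R_new
```

Removing a variable admits a similar triangular update, which is why revisiting neighboring models (one variable added or removed) costs O(n^2) rather than O(n^3).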
This is done by using a hash table to store a string representing the indices of the variables in the model. For smaller sets, the time and space required for this operation are negligible compared to the time and space required to calculate the regression coefficients. However, when the number of variables is in the thousands, this operation becomes the bottleneck. A regular hash table uses a hash function to map the model to a bucket. When the number of models is small relative to the number of buckets, it is unlikely that 2 models will be put in the same bucket, and the time taken to look up a model is just the time to map the model to a bucket. However, when the load factor is large, it is likely that multiple models map to the same bucket, and the resulting collisions must be resolved by searching through the models in the bucket. For lexicographical strings, the hash function is applied to small substrings and the values are combined. The time required for hashing the whole string is proportional to the length of the string; in the case of ScanBMA, the length of the strings formed from the concatenated variable indexes is proportional to the number of variables n. For the C++ unordered_set container used by ScanBMA, collision resolution has worst-case O(m) time complexity, where m is the number of models, giving a total time complexity of O(nm) for the lookup procedure when m is large. In addition, the memory required to store the hash table is O(m), so the procedure depends on m for both time and space complexity. Unfortunately, when a large number of mostly uninformative variables are coupled with a large Occam's window, m grows very rapidly. In these cases, we observed that the memory and time requirements of the hashing procedure soon become limiting. For example, even though it only runs a single thread, ScanBMA will run out of memory on a 56 GB machine when there are large numbers of variables and no informative priors.
It is vital that the ScanBMA algorithm does not sample a model more than once, to ensure that the method will converge and terminate. However, the methodology is quite tolerant of falsely excluding models that have not been sampled. ScanBMA only explores a small sample of the possible models; the vast majority of models are normally excluded. Furthermore, in the BMA approach, many models are averaged to obtain the final edges. Variables that are important appear in many models. In the rare case where a good model is falsely excluded, the impact is minimized because the key regulators in the falsely excluded model will be found in other models. When such false negatives are tolerated, an alternative to using a hash table is to ignore the collisions. This saves both time and space by removing the dependence on m. An example of a noisy or probabilistic hashing approach is the Bloom filter, which admits some false positives. In fastBMA, the dependence on m is eliminated by ignoring collisions, and the dependence on n is eliminated by using an updatable hash function that can be computed in constant time. The implementation of the methodology is also further optimized for speed: new hash values are derived from old ones by looking up the value of the pre-calculated hash for the variable to be added or deleted and using XOR to combine it with the previous hash. This procedure is very fast and invertible, but would normally cause severe collision problems, with the same hash being associated with different sets of variables. This is solved by mapping hashes from models of different sizes to different rows of the bit table. fastBMA uses a bit table of 64 rows by 65,536 columns: it maps the lower 16 bits of the hash value to obtain the column c, and uses bits 21 and 22 combined with the last 4 bits of the model size to obtain the row r, which also makes the lookup operation very cache friendly. Our benchmarking confirms that ignoring collisions does not degrade the accuracy of fastBMA. Using a bit table of just 512 kilobytes gives identical results for a smaller synthetic dataset and almost identical results for the larger genome-wide experimental dataset (reflected in Fig. ), whereas ScanBMA can require gigabytes of memory to maintain a string hash table during wide searches over the yeast dataset. During our prototyping of different versions of fastBMA, we found that the optimized bit filter was much faster than using a full hash table even for small datasets where the load factor is small and there are few collisions. fastBMA's transitive reduction methodology is based on eliminating direct edges between 2 nodes when there is a better alternative indirect path. This approach was first described by Wagner; Bosnacki recently proposed comparing P-values of the best edge in an indirect path with that of the direct path. fastBMA's implementation has time complexity O(E^2 log N), where E is the number of edges and N is the number of nodes. By comparison, the GPU methodology of Bosnacki is O(N^3), using a less selective criterion of comparing the best edge in the path. The search is also bounded: once a path's distance exceeds the direct distance, there is no need to further explore that path. In addition, fastBMA produces graphs with few high-weight edges, and, in practice, the algorithm is much faster than the worst case as most searches are quickly terminated. We have previously benchmarked ScanBMA against other network inference methods. We used the following 3 datasets for testing: simulated 10-gene and 100-gene time series data (5 sets of each) and the corresponding reference networks from DREAM4; yeast time series expression data (ArrayExpress E-MTAB-412) consisting of 3556 genes over 6 time points and 97 replicates; and human single-cell time series RNA-Seq data GSE52529 (9776 genes) from GEO.
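The updatable XOR hash and bit-table membership test can be sketched as follows. The table dimensions and bit selections here are simplified placeholders, not fastBMA's exact layout.

```python
import random

random.seed(0)
N_VARS = 4096
# one pre-computed random 32-bit hash per variable; XOR-combining them
# makes the model hash updatable in O(1) per added or removed variable
VAR_HASH = [random.getrandbits(32) for _ in range(N_VARS)]

ROWS, COLS = 64, 65536
bit_table = [0] * ROWS  # each row stored as one big-int bitmask

def update_hash(h, var):
    # adding or removing `var` toggles its bits (XOR is its own inverse)
    return h ^ VAR_HASH[var]

def seen_before(h, model_size):
    # row from high hash bits mixed with the model size, column from low bits;
    # collisions are simply ignored (rare false positives are tolerated)
    row = ((h >> 16) ^ model_size) % ROWS
    col = h & (COLS - 1)
    hit = (bit_table[row] >> col) & 1
    bit_table[row] |= 1 << col  # mark this model as visited
    return bool(hit)
```

Because XOR is invertible, exploring a neighboring model (one variable in or out) updates the hash in constant time, and the fixed-size bit table replaces the O(m) string hash table.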
We define a true positive (TP) as an edge in the inferred network that is also present in the ground truth or gold standard set. False positives (FP) are edges in the inferred network that are missing in the gold standard. False negatives (FN) are missing edges in the inferred network that are present in the gold standard, and true negatives (TN) are missing edges that are also missing in the gold standard. Precision (TP/(TP+FP)) and recall (TP/(TP+FN)) are useful measures of the positive predictive value and sensitivity of the methodology. However, precision and recall depend on the threshold used for the edge weights. Plots of precision vs recall over different values of the threshold give a more complete picture of the accuracy of the network inference. Similarly, receiver operating characteristic plots of TP/(TP+FN) vs FP/(FP+TN) for different thresholds are also useful, though less so than precision-recall plots because we are more interested in TP in sparse biological networks. Predicting some edges accurately, even if only for the most confident predictions, is still valuable for narrowing down a set of potential interactions to be further explored; hence, we also plot the precision-recall graph to assess where the differences in accuracy are occurring. We distill the overall information of these plots into a single number by estimating the area under the curve (AUC), i.e., the area under the precision recall curve (AUPR) and the area under the receiver operating curve (AUROC) for all possible threshold values. Timings for ScanBMA, fastBMA, and LASSO were the average of 5 runs on the same 8-core 56-GB Microsoft Azure A10 instance. fastBMA and ScanBMA were compiled on the instance and the binaries used. For the Jump3/GENIE3 comparison, we did not run the software ourselves but relied upon the published running times.
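The threshold sweep behind these curves can be computed in one pass over the ranked edge list. This is a generic sketch, not the fastROCPRC implementation.

```python
def precision_recall_points(edge_weights, gold_edges):
    """Sweep edges from highest to lowest weight, recording
    (precision, recall) after each edge is admitted."""
    ranked = sorted(edge_weights, key=lambda e: -edge_weights[e])
    tp = fp = 0
    pts = []
    for edge in ranked:
        if edge in gold_edges:
            tp += 1
        else:
            fp += 1
        pts.append((tp / (tp + fp), tp / len(gold_edges)))
    return pts

def aupr(points):
    # trapezoidal area under the precision-recall curve
    area, prev_recall, prev_prec = 0.0, 0.0, 1.0
    for prec, rec in points:
        area += (rec - prev_recall) * (prec + prev_prec) / 2.0
        prev_recall, prev_prec = rec, prec
    return area
```

Each point corresponds to one possible weight threshold, so the area summarizes accuracy over all thresholds at once.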
Due to the size of the larger yeast networks, all AUC calculations were done using custom software, fastROCPRC. We applied our fastBMA algorithm to both simulated and real time series gene expression data, on which we had previously tested several methods. We ran both ScanBMA and fastBMA with increasingly larger Occam's windows, and the time and accuracy, as measured by AUROC and AUPR, are plotted as line segments in Fig. . One of the main advantages of the BMA methods is that they are able to incorporate prior information to improve inference. This was not possible for the DREAM4 dataset, as it is a synthetic dataset for which relevant prior information is not available; in this case, an uninformative uniform prior probability is used. However, for the yeast dataset, we had access to priors from external data sources (provided in the priors.tsv file). A common use for computational network inference is to identify a small set of potential regulators that could be verified with further experiments. For this use case, an improvement in the precision of the most confident predictions is more important than a small improvement in the overall performance of the method. As some of the differences in AUC for the yeast dataset are relatively small, we plotted the precision recall curves in Fig. . The effect of post-processing is more limited (Fig. ). We also tested fastBMA on a human single-cell RNA-Seq dataset with 9776 variables. Using a 32-core cluster on Microsoft Azure (2 nodes of 16 cores), fastBMA was able to obtain a network in 13 hours without using informative priors. Neither ScanBMA nor LASSO is able to return results for this dataset.
We do not have a gold standard for this test: the purpose was to demonstrate that fastBMA could handle a very large and noisy genomic-sized dataset and return a network within a reasonable time, even in the worst case scenario where the data is noisy and there is no prior information. One possible drawback of the fastBMA methodology is the narrow search algorithm, which restricts sampling to models similar to the previously optimal models. While this is a prime reason for the speed of the approach, methodologies that sample the space more thoroughly, especially on smaller datasets, may prove to be more effective (Table ). We have described fastBMA, a parallel, scalable, and accurate method for inferring networks from genome-wide data. We have shown that fastBMA can produce networks of increased accuracy orders of magnitude faster than other fast methods, even when using a single thread. Further speed increases are possible by using more threads or processes. fastBMA is scalable, and we have shown that it can be used to analyze human genomic expression data even in the most computationally demanding situation of noisy data, no informative priors, and considering all genes as possible regulators. fastBMA includes a new transitive reduction post-processing methodology for removing redundant edges where the predicted regulatory edge can be better explained by indirect paths. Both fastBMA and LASSO already penalize large models and favor the exclusion of redundant variables. This explains why post-processing has minimal impact on the sparse networks predicted by fastBMA and LASSO. In particular, fastBMA produces very sparse networks that are not improved by further processing on any of the datasets tested. LASSO's networks are denser. For the small synthetic DREAM4 set, the post-processing still does not improve the network.
However, on the larger experimentally derived yeast dataset, spurious edges do appear in the LASSO networks despite the regularization penalty that discourages larger models. Some of these redundant edges are successfully removed by the transitive reduction post-processing, improving the overall accuracy of the network. Thus the transitive reduction methodology may prove useful as an adjunct to methods and datasets that give rise to denser networks and are more prone to over-predicting edges than fastBMA. With this in mind, and given that this methodology is different from other published methodologies, we have included the ability to run the transitive reduction module of fastBMA on any set of edges, not just those generated by fastBMA. Although we have focused on biological time series data, fastBMA can be applied to rapidly infer relationships from other high-dimensional analytics data. Also, the fastBMA methodology can be extended for even more demanding applications. For example, multiple bit filters could be used to hash larger search spaces. fastBMA does have some limitations: its speed relies on sampling a small subset of the search space defined by the initial best set of models. This may not be an optimal strategy when there are many almost equally good dissimilar solutions and no prior knowledge to provide a guide to a set of good starting models. In these cases, especially for smaller networks, there may be better solutions such as Jump3 that can sample the space more thoroughly within a reasonable time frame (see, however, the 100-gene DREAM4 results in Table ). Project name: fastBMA. Project home page: https://github.com/lhhunghimself/fastBMA. Operating system(s): Linux (to compile code). License: M.I.T. Any restrictions to use by non-academics: none other than those required by the license. Simulated 10-gene and 100-gene time series data (5 sets of each) and the corresponding reference networks were obtained from DREAM4.
Yeast time series expression data were obtained from ArrayExpress (E-MTAB-412), along with literature-derived priors. Human time series RNA-Seq data GSE52529 (9776 genes) were obtained from GEO. Snapshots of the supporting code are also available from the GigaScience database, GigaDB. AUC: area under the curve; AUPR: area under precision recall; AUROC: area under receiver operator curve; BIC: Bayesian information criterion; BMA: Bayesian model averaging; EM: expectation maximization; iBMA: iterative Bayesian model averaging. The authors declare that they have no competing interests. This work was supported by the National Institutes of Health and Microsoft Azure for Research Awards to K.Y.Y. and L.H.H. L.H.H. conceived and implemented fastBMA. L.H.H., K.S., M.W., and W.C.Y. tested and benchmarked the software. K.S. and L.H.H. added fastBMA to the networkBMA R package. L.H.H. generated the Docker packages. All authors read and approved the manuscript."}
+{"text": "Simultaneous dynamic profiling of mRNA and protein expression is increasingly popular, and there is a critical need for algorithms to identify regulatory layers and time dependency of gene expression. A group of scientists from the United States and Singapore present PECAplus, a comprehensive set of statistical analysis tools to address this challenge. Protein expression control analysis (PECA) computes probability scores for change in mRNA- and protein-level regulatory parameters at each time point, deconvoluting gene expression regulation in the presence of measurement noise. PECAplus adapts PECA's mass action model to a variety of proteomic data, including pulsed SILAC and generic protein expression data. It also features analysis modules to fit smooth curves to rugged time series observations and to facilitate time-dependent interpretation of the data for genes and biological functions. They demonstrate the core modules with two time course datasets of mammalian cells responding to unfolded proteins and pathogens. Simultaneous, time-resolved profiling of mRNAs and proteins has developed into a routine task, providing new insights into the dynamics of cellular gene expression regulation. Pairing these technologies, emerging studies have provided intriguing insights into the relative contribution of RNA- and protein-level regulation in response to various types of stress; others have compared ribosome profiling and protein synthesis rates in dynamic conditions. These two-layered, time-resolved datasets bring new challenges to data analysis, as traditional fold-change and significance analysis methods cannot be used. Currently, the datasets are typically analyzed assuming that a single, fixed first-order ordinary differential equation (ODE) can explain the variation of a gene across the entire time course. The ODE equations often take the form dY/dt = \kappa_s X_t - \kappa_d Y_t, where Y_t and X_t denote protein and mRNA expression levels at time t, respectively. The two major kinetic parameters are the synthesis rate \kappa_s and the degradation rate \kappa_d, and they determine the changes in protein expression given mRNA expression information. However, the ODE-based approach has several limitations when applied to dynamic experiments. First, it implies that the rates of translation and protein degradation remain constant over the entire time period, or change linearly at best, which is unlikely to hold true in a rapidly changing cellular environment with long follow-ups; as a result, the method reports only one set of rates for each gene. Second, the true nature of the gene expression function, i.e. the relationship between the input and the output, is difficult to recognize in the presence of measurement errors and other sources of noise, especially with a small number of observation time points. Third, the approach is usually unable to deconvolute the contributions of the different regulatory layers, i.e. that of synthesis and degradation, and that of RNA-level and protein-level regulation. Last but not least, an analysis tool needs to handle different types of proteomic data, e.g. data from pulsed SILAC experiments or protein expression data acquired with label-free, conventional stable isotope labeling-based, or isobaric tagging-based quantification methods. The challenge with the latter data is often overlooked: without pulsed labeling, it is impossible to distinguish between newly synthesized and pre-existing proteins. To the best of our knowledge, there exists no computational tool that is able to infer rate parameters under the relaxed constraint and identify both significantly regulated genes and significant change points in a multi-layered regulatory system.
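The first-order model above can be stepped forward with an explicit Euler discretization. This is an illustration of the generic ODE dY/dt = k_syn*X(t) - k_deg*Y(t), not PECAplus code.

```python
def simulate_protein(mrna, k_syn, k_deg, y0, dt=1.0):
    """Explicit Euler integration of dY/dt = k_syn*X(t) - k_deg*Y(t):
    protein change = synthesis from available mRNA minus degradation."""
    protein = [y0]
    for x in mrna[:-1]:
        y = protein[-1]
        protein.append(y + dt * (k_syn * x - k_deg * y))
    return protein
```

With fixed k_syn and k_deg for the whole time course, a single rate pair must explain every observation, which is exactly the rigidity the text criticizes.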
To address this challenge, we present PECAplus, an ensemble of statistical models for probabilistic inference of single-level or multi-level regulatory kinetic parameters, including direct estimation of synthesis and degradation rates from a variety of datasets. In particular, all models in PECAplus identify the time point at which the rate parameters shifted, reporting a statistical significance score, the change-point probability score (CPS), for each gene at each time point. We illustrate the models for paired protein–RNA time series data, but they can also be readily fit onto mRNA data alone for the inference of RNA-level regulatory parameters without software modification. PECAplus is based on the core protein expression control analysis (PECA) model, termed PECA Core hereafter, which uses a regression-like framework for detecting significant changes in the combined effects of synthesis and degradation for individual genes. The underlying model uses a linear cumulative sum equation mimicking an ODE in a time-interval manner, in which E[Y_t], the denoised (true) protein concentration at time t, is modeled conditional on the observed mRNA concentrations. The analysis using PECAplus occurs in three steps (Fig. ). We demonstrate the different modes of analysis, along with the newly implemented pre-processing and post-processing functionalities, using a label-free proteomics and transcriptomics dataset for the unfolded protein response and a dataset derived from a pulsed-SILAC experiment paired with transcriptomic data for LPS stimulation. PECAplus is freely available as a compendium of scripts and as a plugin for the widely used proteomics analysis software PERSEUS. PECA Core performs statistical inference on the ratio of protein synthesis rate over degradation rate in individual genes across time points, i.e. for the T−1 intervals in a T time point experiment (rate ratios hereafter). By definition, a change in rate ratio indicates that the balance between synthesis and degradation tips in one direction (up or down), implicitly assuming that this change is the result of cellular regulation. However, it cannot inform whether the change is due to adjustment of the synthesis rate or the degradation rate, or both. In particular, PECA Core calculates the probability that the rate ratio is significantly different between adjacent time intervals before and after each time point. We validated and confirmed performance of the core approach in detail in Teo et al. PECA Core identifies change points of protein-level (i.e. translation/protein degradation) and RNA-level (i.e. transcription/mRNA degradation) regulation. At the protein level, we paired protein expression data with respective RNA expression data. At the RNA level, we paired RNA expression data with constant values for DNA copy number. We first illustrate PECA Core using a paired proteomics and transcriptomics dataset collected from mammalian cells responding to stress of the ER at eight time points. For one stress-response gene, mRNA expression increases under stress to peak at eight hours, while its protein expression is at a minimum level at that time point. Even if we take into account the typical time delay associated with translation, these opposing expression changes suggest complex interplay between the two levels of regulation, especially considering that the latter four time points are spaced 6–8 h apart between adjacent observation times. Indeed, PECA Core identifies significant RNA rate ratio changes between the 1 and 16 h marks and protein rate ratio changes at the 16 h mark. The large number of gene-level CPS scores and rate ratio parameters reported by PECA for each time point or interval can make it difficult to grasp the overall regulatory dynamics. For this reason, PECAplus offers the GSA module to convert the gene-level output into a summary of significant changes for gene function groups, i.e. all genes annotated with a specific function (Fig. ). The GSA module tests for enrichment of GO terms and other pathways curated in the Consensus Pathway DataBase (CPDB) among the genes with CPS scores above a user-specified threshold at each time point. The test evaluates genes with increased and decreased rate ratios separately, i.e. the different directions of change, and genes with rate ratios altered in both directions, extracting regulatory changes in each biological pathway. The GSA output reports p-values of the most significantly enriched, non-redundant pathways in a time-dependent manner, illustrating the dynamic up-regulation and down-regulation of each pathway at both the mRNA and protein levels. To demonstrate PECA-N, we first used the protein–protein interaction information from the STRING database as a network prior on the RNA-level data of the ER stress experiment (Fig. ). Evidence of additional coordinated synthesis and degradation change was limited in the protein-level analysis of PECA-N. Furthermore, it is possible that protein synthesis and degradation are slow in nature and thus span multiple time periods with varying lengths of time lag, which cannot be captured efficiently by the MRF prior structure. Nevertheless, translation control is still highly coordinated over time, as the GSA output suggests. Next, we developed PECA-pS to parse pulsed SILAC data, which allows for quantification of newly synthesized proteins and monitoring of degradation for existing protein copies simultaneously. In the original pulsed-SILAC study, newly synthesized and pre-existing proteins were measured in separate isotope channels, with a third channel (light) as reference. The authors then used an ODE model to estimate the rate parameters, but assumed that the rates were linearly increasing or decreasing (or not changing). In contrast to PECA-pS, that approach produced only one set of rate estimates for each gene at 0 h and another set at 12 h.
We note that the rate parameters cannot be computed on the absolute molar concentration scale, since most proteomics data sets do not have absolute quantification. Similar to PECA Core, PECA-pS reports CPS scores for a change in the rates between consecutive intervals. An important condition when modeling pulsed SILAC data is that the time course pattern must be monotone decreasing in the channel representing degradation of existing proteins, and monotone increasing in the channel representing synthesis of new proteins. Therefore, we focused the analysis on those proteins where this condition held true (see Online Methods). To allow for flexible rate changes, PECA-pS estimates rates per time interval, and we observed good correlation with the estimates of the original study, confirming our model. PECA-R can use any type of protein expression values, e.g. concentrations or intensity values. With the rise of label-free proteomics experiments and the increasing use of post hoc labels, such data becomes more routinely available. We illustrate PECA-R with the LPS data, in which we summed medium and heavy channels for each gene to produce total protein expression values. Deconvoluting synthesis and degradation rates from total protein expression data (not pulse labeled) requires strong mathematical assumptions, as the data does not separate newly synthesized and existing molecules. Any change in the concentration of a molecule can be explained by infinitely many combinations of synthesis and degradation rates. Moreover, synthesis and degradation for a gene might have opposing effects and the resulting expression data would be unchanged. Therefore, it is impossible to recover change unless additional information is available, i.e. changing RNA concentrations that impact protein levels. PECA-R aims to overcome this identifiability issue by placing reasonable restrictions on the rate parameter space. Specifically, we assume that an increase in total expression of a protein is more likely attributable to increased synthesis than to decreased degradation, whereas a decrease in expression is more likely due to increased degradation than to decreased synthesis (see Online Methods). While the rate estimates from the two approaches are not on the same scale, the relative changes within individual genes are well preserved between the two versions (R2 of 0.54). Despite similarity in the synthesis rates between PECA-pS and PECA-R, a number of synthesis rate changes with high CPS scores were specific to the PECA-pS output (Fig. ). The correlation between the two sets of estimates was very strong, supporting the ability of PECA-R to recover the underlying synthesis and degradation rates on a relative scale. Remaining differences between the two approaches can be explained by the fact that many rate parameters changed in a non-linear fashion. The source code and binaries are freely available, downloadable from the same GitHub site; the Online Methods describe the availability and computation requirements. In this work, we presented a comprehensive statistics package to analyze time series omics data that involves one-layer or two-layer expression data. We present PECAplus through two proteomics–transcriptomics examples, but the approach is generalizable to any paired expression data with two levels of regulation, i.e. where the molecules in one level serve as template for synthesis of those in the other level. For example, the researcher might investigate changes in transcription and RNA degradation, using transcriptomics and genomic data.
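A toy illustration of the identifiability issue discussed above: with non-pulsed data, different synthesis/degradation rate pairs can produce exactly the same protein trajectory. The Euler-stepped model here is a generic sketch, not PECAplus code.

```python
def protein_trajectory(mrna, k_syn, k_deg, y0, dt=1.0):
    # explicit Euler steps of dY/dt = k_syn*X(t) - k_deg*Y(t)
    traj = [y0]
    for x in mrna[:-1]:
        y = traj[-1]
        traj.append(y + dt * (k_syn * x - k_deg * y))
    return traj

# two different synthesis/degradation pairs, same observed protein curve:
# at steady state, synthesis exactly cancels degradation in both cases
low_turnover = protein_trajectory([1.0] * 5, 0.1, 0.1, 1.0)
high_turnover = protein_trajectory([1.0] * 5, 0.3, 0.3, 1.0)
```

Total-protein measurements alone cannot distinguish the two regimes, which is why PECA-R must restrict the parameter space as described.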
In principle, PECAplus can also be used with paired ribosome footprinting and transcriptomics time series data, in which the tool deconvolutes the contributions of ribosome association with and dissociation from the RNA to translation. Pulsed SILAC experiments monitor rates of synthesis and degradation directly through assessment of newly made and pre-existing proteins; however, traditional analyses only determine rates across the entire time course. If the proteomics data is from a pulsed SILAC experiment, PECA-pS can estimate synthesis and degradation rates and extract rate parameter changes more accurately than PECA-R; therefore, we recommend using PECA-pS over PECA-R in this case. When the experimental design does not include pulse labeling, we recommend using PECA-R to examine rate parameters, but strictly focusing on events with high CPS scores associated with noticeable and statistically significant impact on total protein concentration changes. Inferring change points of rate parameters directly from proteomics data that was not collected in a pulse-chase experiment can be a risky endeavor. We strongly recommend that a first analysis of such data be carried out with PECA Core or PECA-N to identify genes, gene groups, and time points with significant changes. The user can then perform post hoc analysis using PECA-R to identify the possible cause of the change, i.e. to differentiate between synthesis-driven and degradation-driven events. In sum, PECAplus offers an array of solutions to decipher systems-level signals from data generated with different experimental platforms. It employs mathematically sound statistical analysis of paired omics time series data in streamlined fashion. In contrast to traditional analysis of concentration changes, PECAplus generates hypotheses on the regulatory mechanism underlying the change, e.g. whether it arose from synthesis or degradation of the molecule.
It helps move gene expression analysis to new levels: that of time and of interconnected regulatory layers. We used the whole-transcriptome RNA-seq data and proteomic data from Cheng et al. for 2131 genes with missing observations at up to two time points within each replicate. The experiment consists of mRNA and protein intensity data collected at eight time points in two biological replicates of HeLa cells after DTT treatment. This data set was used for illustration of data smoothing and imputation, and time-dependent functional enrichment analysis in PECA and PECA-N analyses. We obtained the pulse-labeled intensity data for 2288 genes from supplementary data in Jovanovic et al. Using a modified pulsed-SILAC strategy, the abundance of newly synthesized proteins and previously labeled proteins was measured up to 12 h after LPS treatment of dendritic cells. We divided the intensity values in the medium-isotope and heavy-isotope labeled samples by those in the light-labeled samples to adjust for the variation in the reference pool of dendritic cells. This data was used for the illustration of PECA-pS. To evaluate PECA-R, we derived a synthetic data set from the original LPS data by summing the intensity data from the medium-labeled and heavy-labeled channels at each time point, in addition to normalization by light-labeled samples at the respective time points. The original data demonstrated many time course patterns with abundance values defying the expected trajectories in some genes: the intensity values of newly synthesized proteins decreased over time, or the intensity values of existing proteins increased. We removed these genes to avoid complications in the evaluation. We further smoothed both channels by fitting PECA-pS to guarantee generally smooth, monotone decreasing or increasing curves in the original signal, and added random noise to the filtered data.
This new data set, consisting of 1231 genes, was used for the illustration of PECA-R and for the comparison of PECA-pS and PECA-R with the ODE-based model. Before any data analysis module from PECAplus was applied, we applied a smooth curve fitting procedure to the mRNA and protein time series data. Assuming that the observed data points are realizations from a GP model, we optimized the parameters governing the Gaussian kernel and the noise variance parameter empirically based on multiple data sets. After fitting a curve onto the time series data of each molecular type, we replace the observed intensity values with the predicted values from the GP model. If an intensity value is missing at a particular time point, the value is imputed by the posterior mean of the curve at that time point, which yields the most likely intensity value given the values at neighboring time points according to the estimated GP model. The details of the mathematical model can be found in the supplementary information. We implemented the test for time-specific enrichment of biological functions in a gene list, which is selected by a user-provided threshold on CPS scores. At that threshold, we make a list of genes for which rate ratios or rate parameters scored above the CPS threshold at each time point, and perform hypergeometric tests for all relevant biological functions in three different ways: for genes whose rate or rate ratio parameter increased (up-regulation), decreased (down-regulation), or changed in any direction (significant regulation). The background gene list is automatically adjusted to the genes included in the entire data set. The user can specify the range of functions to test enrichment for, such as the minimum number of significant genes in a function and the number of genes in a function. The software package contains GO and CPDB annotations mapping to mouse and human genes. 
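The time-specific enrichment test described above is, at its core, a hypergeometric test per time point and per direction. A minimal stdlib sketch with toy numbers (all counts hypothetical, and the helper name is ours, not from PECAplus):

```python
from math import comb

def hypergeom_enrich_p(k, K, n, N):
    """P(X >= k): probability of seeing at least k annotated genes when n
    genes pass the CPS threshold, drawn from a background of N genes of
    which K carry the annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: background of 10 genes, 5 annotated with the function of
# interest, 4 genes pass the CPS threshold at this time point, 3 annotated.
p = hypergeom_enrich_p(3, 5, 4, 10)
```

In the tool this test would be repeated per time point, separately for up-regulated, down-regulated, and any-direction gene lists.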
In PECA Core, the prior probability of a change point in a rate ratio parameter at time t is the same for every gene and is estimated from the data across all genes. In PECA-N, we employ an MRF prior,19 in which the prior probability of a change point in a gene is adjusted by the change point status of its first-degree neighbor genes in a user-provided biological network. To identify the neighbor genes, we used the protein\u2013protein interaction data from the STRING database.20 PECA-N otherwise employs the same statistical model as the original PECA in Teo et al.11 The PECA-pS model uses pulsed-SILAC data as the proteomic data to estimate synthesis and degradation rates separately (up to a constant) and to infer regulatory changes in synthesis and degradation separately across the time points. The model for the synthesis rate parameter takes the amount of mRNA available at the beginning of each time period into account, while the model for the degradation rate is formulated as a function of the protein abundance at the beginning of each time period and the rate parameter, disregarding the abundance of mRNA. PECA-R aims to estimate synthesis and degradation rates separately from proteomic expression data. The model expresses the total concentration change as the sum of the increase in concentration due to new synthesis and the decrease due to degradation. The synthesis and degradation rate parameters are estimated under the following assumptions: (i) when the total concentration increases, it is attributed to an increase in the synthesis rate, as long as the mRNA concentration did not rise sufficiently high to explain the protein concentration at a fixed synthesis rate; (ii) when the total protein concentration decreases, it is attributed to an increase in the degradation rate, as long as the mRNA concentration did not drop sufficiently to explain the protein concentration change at a fixed degradation rate. The reason for imposing these assumptions on the parameter space is straightforward. 
In label-free or TMT data, we only observe total protein changes, without separate abundance measurements for newly synthesized and existing proteins. Hence, when the protein concentration changes, this model has to decide whether the synthesis rate and/or the degradation rate changed, considering the changes in mRNA concentration. Since the total protein concentration changes can be explained by infinitely many combinations of the two rate parameters, the statistical significance score (CPS) is often more diluted in PECA-R than in PECA-pS. However, the PECA-pS model is not applicable unless pulse-labeled samples are available, and PECA-R is the next best option for non-pulse-labeled data within the PECAplus package if the estimation of synthesis and degradation is the ultimate aim of the analysis. First, the source code is available at https://github.com/PECAplus (Apache 2.0 license), along with a tutorial and example data sets. The code runs in a Windows, Mac OS X, or Linux/Unix environment and enables advanced access to the entire functionality of the tool. Second, the software package is available as a plugin to the widely used Perseus software (version 1.6.0.2), which was developed as a multi-functional platform for proteomics data analysis.12 This platform enables researchers without a bioinformatics background to use PECAplus without any code manipulation. The Perseus platform also allows for easy visualization of the output. Run times of the different modules vary by computer specifications and also depend on dataset size. With a ~3000-gene input dataset, as discussed here, used with default settings on a Windows 10 Home system with an Intel(R) Core(TM) i7-4710HQ CPU @2.50\u2009GHz and 16\u2009GB DDR3L SDRAM, the GP, PECA Core, PECA-pS, and PECA-R modules required ~1\u2009h analysis time. 
The GSA module produces results instantaneously. PECAplus can be downloaded from https://github.com/PECAplus. The ER stress data is from Dataset EV1 in the Supporting Information of Cheng et al.4 The LPS stimulation data is from Tables S1 and S2 in the Supporting Information of Jovanovic et al.3 The portions of the data used in this paper are provided as example data to illustrate software reproducibility. Supplementary Information"}
+{"text": "Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods. Cellular components function through their interaction in the form of biological networks, such as regulatory and signaling pathways. With the emergence of targeted perturbation techniques such as RNAi and others, data suitable for causal inference have become widely available. A large number of computational network inference methods exist, ranging from purely graph-based approaches to Bayesian networks and differential equation models. In this work, we follow a mechanistic ODE modelling framework as described previously.19\u201321 In contrast to these approaches, the proposed method, called FBISC, neither requires any data discretization step nor relies on a steady state assumption. It can be applied to time series as well as steady state data. Akin to Molinelli et al. 
we allow for modeling non-linear regulatory relationships between molecules. To enable causal inference we allow for arbitrary, possibly combinatorial interventions. Our method is Bayesian and, thanks to the employed Expectation Propagation (EP) scheme, computationally efficient. Sometimes prior information is available about the structure of the network we wish to infer. The Bayesian framework used in our method allows us to incorporate prior knowledge into the network inference procedure in a natural and flexible way. This is achieved via a \u201cspike and slab\u201d prior, which is described below. To ease the presentation of our model, we first assume we have time-series data; the case of steady state data will be explained later. Let Y = {Y1,Y2,\u2026,Yn} be the molecules for which we have measurements available at different time points (t). We would like to estimate the unknown network topology underlying their interaction. In the time-series dataset, ni is the number of replicate measurements for molecule i, and m is the number of perturbations. C = {1,2,\u2026,m} represents the set of all perturbation experiments, in which each c \u2208 C can either directly influence one molecule or a subset of molecules Yc \u2286 Y, i.e. perturbations can be targeted or combinatorial. For example, a perturbation c may consist of a treatment with two ligands A and B: at given concentrations, A might strongly inhibit protein P1, whereas B might exhibit a moderate effect on protein P2. In our model, we capture this behavior by including a set of extra nodes in our network, and a set of extra edges connecting perturbations with perturbed molecules. All edges in our network are weighted, and with our model we infer the difference in perturbation strength on different target molecules from the data. In conclusion, our network is a graph \u0393 = (Y \u222a C, \u03f5), with node set Y \u222a C and edge set \u03f5. 
The set of edges for this graph consists of all possible connections between regular nodes and also the connections from our perturbation nodes to our regular nodes. Hence, the adjacency matrix to be inferred is an (n+m)\u00d7(n+m) weight matrix W = (wij). We do not aim to infer any edge pointing to a perturbation node. By means of this representation we are able to model interactions between our network nodes, and also between perturbation nodes and their targets. We have to take into account that perturbations may not exhibit the same quantitative effect on all directly influenced molecules. In general, the signal of a perturbation c \u2208 C can be time dependent, represented as xc(t), which can be Boolean or fully quantitative. A special case is when xc(t) = 1 for all t = 1,\u2026,T. Targets of perturbation nodes do not have to be fully known; they can be inferred from data with our method. The technique for that (Expectation Propagation) is described later. We can in principle model perturbations either as perfect interventions or as soft interventions: perfect interventions remove the influence of any other node on the perturbed node, whereas soft interventions just increase the probability of the target node to be perturbed. We assume an ordinary differential equation system (ODE), with per-molecule parameters \u03b1i and \u03b2i, to describe the dynamics of the molecular network relative to known interventions; the regulatory input may be non-linear. In the discretized form, \u0394t is the length of the known time interval between subsequent measurements t and t+1. There is no constraint on equality of interval lengths. \u0394t weights the influence of measurements at time t on those at time point t+1; shorter time intervals increase this influence. Under steady state conditions, \u0394t can be left out of the formula, since the concentrations no longer change over time. Let \u03b5i denote the Gaussian measurement noise for molecule i. 
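The discretized ODE dynamics described above can be sketched numerically. This is a simplified stand-in, not the paper's exact parameterization: we assume a form dy/dt = alpha * g(z) - beta * y with a sigmoid squashing function g, where z collects the weighted input from molecules and perturbation nodes; all names and values are illustrative:

```python
import numpy as np

def step(y, x, W_yy, W_cy, alpha, beta, dt):
    """One forward-Euler step of a simplified FBISC-style ODE.
    y: molecule concentrations, x: perturbation signals x_c(t),
    W_yy/W_cy: weighted edges among molecules and from perturbations."""
    z = y @ W_yy + x @ W_cy            # combined regulatory input
    g = 1.0 / (1.0 + np.exp(-z))       # non-linear sigmoid activation
    return y + dt * (alpha * g - beta * y)

rng = np.random.default_rng(0)
n, m = 3, 1
y = np.zeros(n)
x = np.ones(m)                         # constant perturbation signal, x_c(t) = 1
W_yy = 0.1 * rng.normal(size=(n, n))
W_cy = rng.normal(size=(m, n))
for _ in range(50):
    y = step(y, x, W_yy, W_cy, alpha=1.0, beta=0.5, dt=0.1)
```

With unequal interval lengths one would simply pass a different `dt` per step, mirroring the Δt weighting in the text; at steady state the update term vanishes.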
The likelihood of measured data D given weight matrix W and known measurement noise can then be written in factorized form as p(D|W) = \u220fr,c,t p(yirc(t)|\u03bci(t)), where the time index t is written explicitly to make the independence across time points clear. Typically the number of replicate measurements qi per molecule is small, and thus the empirical variance is an unreliable estimate of the true variance; the marginal likelihood of W is obtained by integrating over the remaining parameters. In our method we use this likelihood to score a candidate weight matrix W. To enforce sparsity of W we use a spike-and-slab prior with a binary indicator \u03b3ij for each wij indicating the presence (\u03b3ij = 1) or absence (\u03b3ij = 0) of edge i\u2192j. Given \u03b3ij, the spike-and-slab prior on wij is a mixture of two zero-mean Gaussians: the standard deviation \u03c31 of the slab distribution can be set sufficiently large (here: 10) in order to achieve a low bias of weight estimates for present edges. On the other hand, \u03c32 is set close to zero (\u03c32\u21920) to approximate a delta function centered at zero (the spike). The mixture coefficient \u03b3ij is drawn from a Bernoulli distribution: \u03b3ij selects either the spike (if it is zero) or the slab distribution (if it is one) for wij. The parameter \u03c1ij reflects the prior probability for that. This allows us to incorporate prior knowledge in a natural way. Expectation propagation (EP) has been introduced as an algorithm for approximate inference; we use it to approximate the posterior over the parameters (W,\u03b1,\u03b2) of our model. Similar to Variational Bayesian methods, EP minimizes the Kullback-Leibler (KL) divergence between the true joint distribution p and some approximation, q. For that purpose it is essential to factorize the joint distribution p, for example as a product of factors with f0(\u04e8) := p(\u04e8). 
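Sampling from the spike-and-slab prior described above makes its behavior concrete. A minimal sketch using the values mentioned in the text (slab scale 10, spike scale near zero; the prior edge probability rho here is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(42)

# sigma1: broad slab for present edges; sigma2 -> 0 approximates the
# delta-function spike at zero; rho: prior edge probability, the natural
# entry point for network prior knowledge (rho_ij per edge).
sigma1, sigma2, rho = 10.0, 1e-6, 0.2

gamma = rng.binomial(1, rho, size=10000)            # Bernoulli edge indicators
w = np.where(gamma == 1,
             rng.normal(0.0, sigma1, size=10000),   # slab: edge present
             rng.normal(0.0, sigma2, size=10000))   # spike: edge effectively zero
```

The resulting marginal on each weight is a two-component mixture: most weights are pinned (almost) exactly to zero, which is what enforces sparsity of the inferred network.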
Each factor fc(\u04e8) is approximated by a multivariate Gaussian, where \u04e8 denotes the set of all inferable parameters; the same scheme is used for all layers of our DBN. Let max(y) and min(y) denote the maximally and minimally measured concentrations for one particular molecule (per replicate) in a certain condition over a complete time series. The non-linear sigmoid function g(Z) = 1/(1 + exp(\u2212Z)) can be approximated by a bounded function \u03a6, which in the simplest case could be the identity function, as proposed in Bonneau, Reiss et al. (2006). In that case the approximation is fully linear between the upper and lower bound, and deviates significantly from the original sigmoid curve if the argument Z is far away from 0.5. Furthermore, a linear concentration change is principally non-physiological. In order to account for these facts we thus suggest to define \u03a6 via a spline interpolation of the sigmoid and to use it as an approximation. In practice we tried the following different spline interpolation methods: smoothing B-splines, as implemented in the \u201csmooth.spline\u201d function in R, and interpolated cubic B-splines, as implemented in the \u201cspline\u201d function in R. In order to better understand the principal behavior of our method named FBISC under different conditions, we performed several simulation experiments; a rigorous comparison against competing methods is shown in later sections. We simulated data from yeast and E. coli transcriptional networks; the network topologies used in this paper are shown in the supporting information. Area under the ROC curve (AUROC) and area under the precision-recall curve (AUPR) are used to evaluate the predictions of each method. Notably, these measures are independent of a specific confidence cutoff. Next we investigated the effect of using perturbation data with the same basic version of FBISC. For that purpose, we randomly picked 20% of the nodes of the network with 100 nodes, each of them affected by a different perturbation. 
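The gap between a crude piecewise-linear squashing function and an interpolation of the sigmoid itself can be quantified directly. A sketch (the bounds and knot counts are illustrative, and `np.interp` stands in for the R spline functions named above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def piecewise_linear(z, lo=-2.0, hi=2.0):
    """Fully linear between the bounds, clamped outside: the simplest
    choice of squashing function discussed in the text."""
    return np.clip((z - lo) / (hi - lo), 0.0, 1.0)

z = np.linspace(-6, 6, 1001)
err_linear = np.max(np.abs(sigmoid(z) - piecewise_linear(z)))

# Interpolating the sigmoid through a modest number of knots (a stand-in
# for the spline interpolations tried in the paper) is far more accurate.
knots = np.linspace(-6, 6, 25)
err_interp = np.max(np.abs(sigmoid(z) - np.interp(z, knots, sigmoid(knots))))
```

The maximal error of the bounded-linear approximation is an order of magnitude larger than that of the sigmoid interpolation, which is the motivation for the spline-based definition of the squashing function.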
Perturbations were assumed to represent a constant signal over time, and ten time points were simulated for time courses of each network node. We compared three situations: (1) targets of perturbations are fully known; (2) targets of perturbations are unknown; (3) purely observational data. In the last experiment we compared the different spline methods discussed in Section 2.3 against each other and against the piecewise linear approximation of the sigmoid curve. This was done for the network with 100 nodes and 7 simulated measurement time points. No perturbations were simulated at this point. In conclusion, our simulations demonstrate that our method can successfully exploit perturbation information and profits from spline interpolated time series data. Furthermore, reconstruction performance is expected to be relatively robust, even if large networks are estimated. We downloaded data from the DREAM4 and DREAM8 challenges (http://dreamchallenges.org/project-list/closed/). The gold standard networks provided in these data are used for evaluation. DREAM4 provides simulated data for five networks of size 10 nodes and five networks of size 100 nodes. For each network, perturbation time series and steady state data were retrieved. Time series data comprise 21 time points (t0 = 0 to t20 = 1000) reflecting measurements of each network node. Perturbations are always applied at time 0 and removed at time 500. Information regarding the exact targets of perturbations is not available. Each time series is measured with 5 replicates for the 10-node networks and 10 replicates for the 100-node networks. Each replicate represents a perturbation experiment in which different nodes (about one-third of the network) are perturbed. In addition to time series, different kinds of steady state data are available from DREAM4. Here we employed knock-out, knock-down, and multifactorial perturbation data. 
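The cutoff-free AUROC evaluation used throughout these comparisons can be implemented in a few lines via the rank-sum identity. A self-contained sketch with toy edge scores and a hypothetical gold standard (ties are not handled here, which is fine for the toy data):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney rank-sum identity: the fraction of
    (positive, negative) pairs where the positive edge scores higher.
    Independent of any specific confidence cutoff."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy confidence scores for five candidate edges against a hypothetical
# gold-standard adjacency (1 = true edge).
scores = [0.9, 0.8, 0.4, 0.3, 0.1]
labels = [1, 0, 1, 0, 0]
```

Here five of the six positive-negative pairs are ranked correctly, so the AUROC is 5/6; an AUPR computation would proceed analogously from the same ranked list.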
Knock-out and knock-down data reflect steady state measurements of all network nodes after perturbation of exactly one known node. Multifactorial data correspond to combinatorial perturbations of unknown nodes in each experiment. No replicate measurements are available for the steady state data. DREAM8 provides experimental data of a signaling network with 20 nodes at 11 time points. Here the perturbations correspond to compound treatments. Exact concentrations of perturbation sources are not given but are specified with qualitative values (high/low). Following Young et al., we normalized these data accordingly. We compared FBISC with ScanBMA, iBMA, LASSO, and ebdbnet. Notably, information about perturbations was included in all methods competing with our FBISC approach. This was done by adding perturbations as additional potential \u201cregulators\u201d of each node (similar to the way that FBISC treats perturbations). In case perturbation targets were known, perturbations were only considered as potential regulators of directly targeted nodes. All tested network inference algorithms produce either a confidence measure or a p-value for each possible edge. Correspondingly, AUROC and AUPR are used to evaluate the prediction performance of the methods. Notably, these measures are independent of a specific confidence or p-value cutoff. Using the above described time series perturbation data, we compared results obtained by our method with the ones reported in Young et al. Next we compared our method to the competing approaches on the basis of the various kinds of steady state data. Next, we focused on the DREAM8 challenge data. In contrast to before, results for this dataset were obtained with our own implementation of the competing methods. More specifically, we used the R package networkBMA for ScanBMA and iBMA. Next we tested to what extent the previously presented results would change in dependence on prior knowledge. Only time series data were used at this point. Following the approach used by Praveen et al. 
we considered prior networks of varying reliability. FBISC is a Bayesian approach. Frequentist methods like CLR and ARACNE are based on mutual information and are conceptually far simpler; they are thus computationally comparably cheap. From a practical point of view, a question is thus how the computing time of FBISC compares to competing methods. The run time of ScanBMA depends on the number of potential parents (nvar) per node; here we tested ScanBMA with nvar = 10 and nvar = 20. A further strength is that we consider perturbations themselves as time dependent, as e.g. reflected in the DREAM4 data. FBISC uses a biochemically inspired model to describe the non-linear dynamical behavior of molecular networks and integrates this description into a graphical modeling framework. This allows for the application of efficient approximate inference schemes, such as expectation propagation. Notably, the output of our method is a posterior distribution over edge weights, which accounts for the unavoidable uncertainty of any network inference. We enforced sparsity of inferred networks in the form of a spike and slab prior. This type of prior forces many edge weights to exactly zero and naturally allows for the integration of prior background knowledge, which proved useful in our results. Altogether, we see the combination of a highly flexible modeling framework, applicable to time series as well as steady state data, with computationally scalable Bayesian inference as the differentiation of FBISC from existing techniques. The advantage for the user lies in a unified method, which allows for automatically adapting literature-derived network information to experimental data and produces confidence measures. Our results showed an attractive prediction performance of our method. 
We thus believe that our proposed FBISC method is an attractive alternative to existing methods for learning causal network structures from complex perturbation data. The C# code of our method is included in the supplements of this paper. S1 Code (C# Infer.NET framework). (CS) S1 Fig (generated by GeneNetWeaver). (DOCX) S2 Fig (DOCX)"}
+{"text": "Eukaryotic genomes are replicated in a reproducible temporal order whose physiological significance is poorly understood. M\u00fcller and Nieduszynski compare the temporal order of genome replication in phylogenetically diverse yeast species and identify genes for which conserved replication timing contributes to maximal expression. Eukaryotic genomes are replicated in a reproducible temporal order; however, the physiological significance is poorly understood. We compared replication timing in divergent yeast species and identified genomic features with conserved replication times. Histone genes were among the earliest replicating loci in all species. We specifically delayed the replication of HTA1-HTB1 and discovered that this halved the expression of these histone genes. Finally, we showed that histone and cell cycle genes in general are exempt from Rtt109-dependent dosage compensation, suggesting the existence of pathways excluding specific loci from dosage compensation mechanisms. Thus, we have uncovered one of the first physiological requirements for regulated replication time and demonstrated a direct link between replication timing and gene expression. Eukaryotic genomes replicate in a characteristic and reproducible temporal order dictated by the location and activity of replication origins. Replication timing correlates with gene expression, chromatin state, GC content, and subnuclear structure. An appropriate number and distribution of replication initiation sites is essential for genome stability. Global deregulation of origin activity leads to DNA damage and genome instability. For example, massive overexpression of rate-limiting factors allows excessive origin activation, which results in DNA damage and cell death. In Saccharomyces cerevisiae, bidirectional promoters drive expression of histone gene pairs that encode dimerizing histones. In addition, histone gene pairs are frequently closely positioned to replication origins. 
For example, the HTA1-HTB1 gene pair is associated with an origin in S. cerevisiae, Lachancea kluyveri, Kluyveromyces lactis, and Lachancea waltii. Ohnologs provide a measure of replication timing conservation, because they represent an evolutionary distance comparable to or less than that between our species comparisons. The majority of S. cerevisiae ohnologs do not have similar replication times, and a previous study found no conservation in replication origin location after the WGD. To compare the replication time of genetic elements, we assigned their homologs and ohnologs based on previously determined common ancestry. Then, for each element, the available replication timing values were used to calculate the cross-species mean replication time and SD. A low SD represents low variation in the replication time between the species and therefore serves as a proxy for conservation of replication time. We compared the observed level of evolutionary conservation in replication time with a random model. First, for S. cerevisiae elements with a conserved replication time, we found that neighboring elements displayed low conservation in replication time, comparable to that seen for all elements. Second, we found that these 221 elements were present within 142 genomic clusters, the majority of which (100) contained a single gene. In the remaining 42 clusters, we anticipated at least one element per cluster to be under selective pressure to retain its replication time. Indeed, there are clear examples of clusters with more than one element likely to be under such selective pressure. Therefore, the majority of the 221 S. 
cerevisiae elements are likely to have been subject to an evolutionary selective pressure to conserve the replication time, consistent with many independent physiological requirements for regulated replication timing. We sought to confirm that the species we analyzed had a sufficient breakdown in genetic linkage to independently test the conservation of replication time of genetic elements. For the 221 S. cerevisiae elements with conserved replication times, we looked for common functional annotations. The three most significantly enriched elements were centromeres (P = 2.2 \u00d7 10\u221216), tRNA genes (P = 5.7 \u00d7 10\u22128), and histone genes. The identification of centromeres validated our approach, because we have previously discovered that regulated centromere replication time contributed to stable chromosome inheritance. Control loci, replicating from the very start to the last quarter of S phase (HTA2\u2013HTB2), showed no difference in replication timing between wild-type and origin mutant strains, confirming that their replication time was unaltered. As anticipated, the POL2 transcript levels increased through S phase with similar kinetics for both strains. Next, we tested whether delayed gene replication resulted in altered expression. Wild-type and mutant cells were arrested and released synchronously into S phase. We observed no change in the dynamics of cell cycle progression as a consequence of delayed histone gene replication. Transcript levels were determined for the core histone gene pairs (HTA1, HTB1, HTA2, and HTB2). Transcript levels increased upon entry into S phase, peaking at 45 min; this is consistent with previous studies. The HTA2 and HTB2 gene pair served as a control (for no change in replication time), and we detected no difference in transcript level. 
However, for both HTA1 and HTB1, we observed a reduction in transcript level in the origin mutant to approximately half the wild-type levels. This reduction was not dependent on the gene used to normalize expression levels, it was clearly observable at other mid-S phase time points, and it was seen in a biological replicate. Given that there are additional, near-identical gene copies, we anticipate modest reductions in histone protein levels. This is consistent with the wild-type and histone-delayed strains having near-identical S-phase kinetics. In addition, because we observed a timely induction of HTA1 and HTB1 gene expression, even when replicated late in S phase, we can exclude a direct role for DNA replication in inducing histone gene transcription. Furthermore, our findings cannot be explained by previously reported mechanisms. Windows in which no reads were mapped in either sample were excluded. The resulting absolute ratios reflect the read numbers; therefore, data were normalized by dividing by an empirically determined factor. Data points <0.9 or >2.1 were excluded. Smoothing was applied using a Fourier transformation. Samtools \u201cview\u201d was used to filter reads, retaining uniquely mapped reads with each segment properly aligned according to the aligner (\u2212f 2) and with high mapping quality (\u2212q 50). Replication timing profiles were generated by normalizing the replicating (S phase) sample to the nonreplicating (G2) sample in 1-kb windows. The position of every genetic element in each of the seven species, as well as the ancestral relationships between elements, was obtained from the Yeast Gene Order Browser (http://ygob.ucd.ie). In each simulation, the cross-species mean replication time and SD were calculated for every ancestral element. 3,000 simulations were run, giving a total of 3,000 \u00d7 4,616 values for both mean replication times and SDs. 
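The conservation test described above — using a low cross-species SD as a proxy for conserved replication time and comparing it against a random model that breaks the correspondence between species — can be sketched as follows. All timing values here are invented toy numbers, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical replication times (fraction of S phase) for 5 ancestral
# elements (rows) across 4 species (columns).
times = np.array([[0.10, 0.12, 0.11, 0.09],   # e.g. an early-replicating locus
                  [0.50, 0.20, 0.80, 0.40],
                  [0.30, 0.70, 0.10, 0.90],
                  [0.60, 0.55, 0.65, 0.58],
                  [0.90, 0.15, 0.45, 0.75]])

# Low SD across species ~ conserved replication time.
observed_sd = times.std(axis=1)

# One iteration of the random model: permute timing values within each
# species, destroying cross-species correspondence, then recompute SDs.
# The study ran 3,000 such simulations to set significance thresholds.
null_sd = np.array([rng.permutation(col) for col in times.T]).T.std(axis=1)
```

Elements whose observed SD falls below the thresholds derived from many such shuffles (the 95/99/99.9% contours in the text) would be called conserved.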
These values were analyzed using a 2D kernel density estimation (the R function \u201ckde2d\u201d) to identify the thresholds encompassing 95, 99, and 99.9% of the simulated data. z tests with two-tailed comparisons were used to calculate the significance of the enrichment of functional annotations within the 221 elements with conserved replication time. Transcript abundance was determined by quantitative PCR using the SYBR Green JumpStart Taq ReadyMix (Sigma-Aldrich) and primers listed in Table S4. Cells were grown, arrested with \u03b1 factor, and released at 23\u00b0C. Samples were collected every 2.5 min for flow cytometry analysis and every 5 min for isolation of mRNA. Samples for mRNA extraction were washed in water and resuspended in 400 \u00b5l TES buffer, to which 400 \u00b5l acid phenol was added. Samples were vortexed for 10 s and incubated at 65\u00b0C for 60 min with vortexing every 15 min. The samples were placed on ice for 5 min and spun, and the aqueous phase was recovered. RNA was recovered by ethanol precipitation and resuspended in 100% formamide. RNA concentrations were measured by Nanodrop, and cDNA was synthesized from 1 \u00b5g RNA using a d(T) primer. Published expression data from rtt109\u0394 cells were reanalyzed; the authors provided their data as log2 values. Transcript levels were then calculated using the formula 2tx\u2212t0, i.e., 2 to the power of the difference between the arrested sample and the time points after release. Fig. S1 A confirms that the majority of S. cerevisiae ohnologs do not have similar replication times. Fig. S1 B shows the distribution of the number of replication timing values for each ancestral element. Fig. S1 C shows the distribution of the number of elements that were below the threshold in simulated data. Fig. S1 D demonstrates that there is low conservation in the replication time of genetic elements adjacent to conserved elements. Fig. S1 E shows that the majority of conserved elements are in single-gene clusters. Fig. 
S1 F shows that there is a statistically significant bias for codirectional replication and transcription of S. cerevisiae tRNA genes. Figs. S2 and S3 show the abundance of transcripts of control genes and core histone genes at different times throughout a synchronous S phase. Table S3 A lists the S. cerevisiae genes with conserved replication timing, and Table S3 B the gene ontology terms that are significantly enriched among them. Table S4 lists the sequences of primers used for this study. A custom Python script, fft.py, was used for the Fourier smoothing. Fig. S1 provides additional information relevant to the replication timing comparisons between species."}
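The replication timing profile construction described above (S-phase over G2 read ratios in 1-kb windows, followed by Fourier smoothing) can be sketched with simulated counts. This is an illustrative stand-in, not the study's fft.py; the window count, origin position, and frequency cutoff are all invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-1-kb read counts for a replicating (S phase) and a
# nonreplicating (G2) sample; an early origin near the middle roughly
# doubles local copy number in S phase.
n = 256
pos = np.arange(n)
true_ratio = 1.0 + np.exp(-((pos - 128) / 30.0) ** 2)   # 1x -> 2x near origin
s_reads = rng.poisson(100 * true_ratio)
g2_reads = rng.poisson(100, size=n)

# Windows with zero reads in either sample would be excluded first.
ratio = s_reads / g2_reads

# Fourier smoothing: keep only low-frequency components (cutoff of 8
# modes is illustrative), then transform back.
coef = np.fft.rfft(ratio)
coef[8:] = 0
smooth = np.fft.irfft(coef, n)
```

The DC component is kept, so the genome-wide mean of the profile is preserved, while window-level Poisson noise is suppressed and the origin peak remains visible.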
+{"text": "Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov-Chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of the genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses. Trajectory analysis of single-cell RNA-seq (scRNA-seq) data has become a popular approach that attempts to infer lost temporal information, such as a cell\u2019s differentiation state. Such analyses reconstruct a measure of a cell\u2019s progression through some biological process, known as a pseudotime. Recently, attention has turned to modeling bifurcations where, part-way along such trajectories, cells undergo some fate decision and branch into two or more distinct cell types. Wishbone3 constructs a k-nearest neighbor graph and uses shortest paths from a root cell to define pseudotimes, using inconsistencies over multiple paths to detect bifurcations. Diffusion Pseudotime (DPT)4 similarly constructs a transition matrix where each entry may be interpreted as a diffusion distance between two cells. 
Bifurcations are inferred by identifying the anti-correlation structure of random walks from both a root cell and its maximally distant cell. While DPT arguably has a probabilistic interpretation, neither method specifies a fully generative model that incorporates measurement noise, while both infer bifurcations retrospectively after constructing pseudotimes. A further algorithm Monocle5 learns pseudotimes based on dimensionality reduction using the DDRTree algorithm6 and provides post-hoc inference of genes involved in the bifurcation process using generalized linear models.Several methods have been proposed to infer bifurcation structure from single-cell data. WishboneHere we propose a Bayesian hierarchical mixture of factor analyzers for inferring bifurcations from single-cell data. Factor analysis and its close relative principal component analysis (PCA) are frequently used in the context of single-cell gene expression modeling, both for visualization and trajectory inference see e.g.,8. SincThe model we propose is unique compared to existing bifurcation inference methods methods in the following: (1) by specifying a fully generative probabilistic model we incorporate measurement noise into inference and provide full uncertainty estimates for all parameters; (2) we simultaneously infer cell \u201cpseudotimes\u201d and branching structure as opposed to post-hoc branching inference as is typically performed; and (3) our hierarchical shrinkage prior structure automatically detects features involved in the bifurcation, providing statistical support for detecting which genes drive fate decisions.In the following, we introduce our model and apply it to both synthetic datasets and demonstrate its consistency with existing algorithms on real single-cell data. We further propose a zero-inflated variant that takes into account zero-inflation, and quantify the levels of dropout at which such models are beneficial. 
We highlight the multiple natural solutions to bifurcation inference when using gene expression data alone and finally discuss both the merits and drawbacks of using such a unified probabilistic model. We begin with an N \u00d7 G matrix of suitably normalized gene expression measurements for N cells and G genes, where yi denotes the ith row vector corresponding to the expression measurement of cell i. We assign a pseudotime ti to each cell, along with a binary variable \u03b3i indicating to which of B branches cell i belongs: \u03b3i = b if cell i is on branch b, b \u2208 1,\u2026,B (1). The pseudotime ti is a surrogate measure of a cell\u2019s progression along a trajectory, while it is the behavior of the genes - given by the factor loading matrix - that changes between the branches. We therefore introduce B factor loading matrices \u039bb = [cb kb], b \u2208 1,\u2026,B, one for each branch modeled. The likelihood of a given cell\u2019s gene expression measurement conditional on all the parameters is then given by yi | \u03b3i, \u039b\u03b3i, ti, \u03c4 \u223c Normal(c\u03b3i + k\u03b3i ti, \u03c4^-1 IG) (2), where IG is the G \u00d7 G identity matrix. We motivate the prior structure as follows: if the bifurcation processes share some common elements then the behavior of a non-negligible subset of the genes will be (near) identical across branches. It is therefore reasonable that the factor loading gradients k\u03b3 should be similar to each other unless the data suggest otherwise. We therefore place a prior of the form kbg \u223c Normal(\u03b8g, \u03c7g^-1) (3), where \u03b8 denotes a common factor gradient across branches. 
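The generative structure just described, branch-specific linear functions of pseudotime with gradients shrunk towards a shared value, can be simulated in a few lines. This is an illustrative sketch with made-up parameter values, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, G, B = 300, 40, 2                  # cells, genes, branches

t = rng.uniform(0.0, 1.0, N)          # pseudotimes t_i
gamma = rng.integers(0, B, N)         # branch assignments gamma_i

theta = rng.normal(0.0, 1.0, G)       # shared factor gradient theta
c = rng.normal(0.0, 1.0, (B, G))      # branch-specific intercepts c_b
k = np.tile(theta, (B, 1))            # gradients shrunk towards theta
k[1, :5] += 3.0                       # genes 0-4 deviate: they drive the bifurcation

tau = 5.0                             # measurement precision
noise = rng.normal(0.0, tau ** -0.5, (N, G))
y = c[gamma] + k[gamma] * t[:, None] + noise   # likelihood as in Eq. (2)
print(y.shape)  # (300, 40)
```

In this sketch, recovering which columns of k differ between the two rows is exactly the gene-level bifurcation question the shrinkage prior is designed to answer.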
This has similar elements to Automatic Relevance Determination (ARD) models, with the difference that rather than shrinking regression coefficients to zero to induce sparsity, we shrink factor loading gradients towards a common value to induce similar behavior between mixture components. We can then inspect the posterior precision to identify genes involved in the bifurcation: if \u03c7g is very large then the model is sure that kg0 \u2248 kg1 and gene g is not involved in the bifurcation; however, if \u03c7g is relatively small then |kg0 \u2013 kg1| >> 0 and the model indicates that g is involved in the bifurcation. With these considerations the overall model is given by a hierarchical (M)ixture of (F)actor (A)nalysers (MFA) specification, where \u03b7\u03c4, \u03b8\u03c4, c\u03c4, \u03c7\u03b1, \u03c7\u03b2, \u03b1 and \u03b2 are hyperparameters fixed by the user. By default we set the non-informative prior \u03c7\u03b1 = \u03c7\u03b2 = 10^-2 to maximize how informative the posterior of \u03c7 is in identifying genes that show differential expression across the branches. As the model exhibits complete conditional conjugacy, inference was performed using Gibbs sampling9. Single-cell data is known to exhibit dropout, where the failure to reverse-transcribe lowly expressed mRNA results in zero counts in the expression matrix. The issue has been extensively studied in the context of scRNA-seq, resulting in algorithms that take the resulting zero inflation into account, such as ZIFA7 or SCDE10. We can incorporate tractable zero-inflation into our model by considering a per-gene dropout probability in which xig is the unobserved true expression of gene g in cell i and \u03bb is a global dropout parameter estimated in an Empirical-Bayes manner. This exponential model empirically fits multiple scRNA-seq datasets well. The above reasoning shows that in a single-gene case the initial state is indistinguishable from the gene expression alone. 
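The dropout mechanism can be sketched as follows. Because the exact dropout expression did not survive in the text above, the code assumes a ZIFA-style form p(dropout) = exp(-lambda * x^2); treat the functional form as an assumption, not the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_dropout(x, lam):
    """Zero out entries of a latent expression matrix x with
    probability exp(-lam * x**2).  The exponential-decay form is a
    ZIFA-style assumption; lowly expressed genes drop out most often."""
    p_drop = np.exp(-lam * x ** 2)
    keep = rng.random(x.shape) >= p_drop
    return np.where(keep, x, 0.0)

x = rng.gamma(shape=2.0, scale=2.0, size=(500, 20))  # latent true expression
y = apply_dropout(x, lam=0.1)                        # observed, zero-inflated
```

Lowly expressed entries of `x` are zeroed far more often than highly expressed ones, which is exactly the asymmetry the zero-inflated likelihood has to invert during inference.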
We can easily generalize this to the multiple-gene case. While in algorithms such as Wishbone and DPT this non-identifiability is solved by setting an initial cell or state, the equivalent in our model is the correct initialization of the pseudotimes. PCA is applied to the data before inference, and the principal component that best corresponds to the trajectory based on the expression of known genes is used to initialize the pseudotimes. Such trajectories correspond to local modes in the posterior space that are sufficiently narrow that the probability of the Gibbs sampler moving to another local mode is negligible. A future extension that would solve this non-identifiability would involve placing priors on the behavior of certain genes across the branches, which combined with more efficient inference would pick out the \u2018true\u2019 trajectory. Please note that an earlier version of this article can be found on bioRxiv (10.1101/076547). We compared the Pearson correlation of the estimated pseudotimes to the true pseudotimes, and the estimated branch assignments to the true branches. Single-cell RNA-seq data is known to exhibit dropout; however, a zero count for a particular gene in a particular cell may also be a true zero where no mRNA in the cell is present. We expect such true zeros to be useful for pseudotime inference. Accounting for dropouts involves modifying the model so that zero counts are likely if the underlying latent expression is low. We sought to quantify the benefits of modeling zero inflation against the drawbacks of losing the information contained in \u201ctrue zeros\u201d. 
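Initializing the pseudotimes from a chosen principal component, as described above, amounts to a projection and a rescaling. A generic sketch (not the authors' implementation):

```python
import numpy as np

def init_pseudotimes(Y, component=0):
    """Project centred expression data onto a principal component and
    rescale to [0, 1] to initialise pseudotimes."""
    Yc = Y - Y.mean(axis=0)
    # right singular vectors of the centred matrix are the principal axes
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    t = Yc @ Vt[component]
    return (t - t.min()) / (t.max() - t.min())

Y = np.random.default_rng(3).normal(size=(100, 10))   # toy expression matrix
t0 = init_pseudotimes(Y, component=1)                 # e.g. second PC
print(t0.min(), t0.max())  # 0.0 1.0
```

The choice of component (and its sign) is exactly the user judgment described in the text: one picks the component that tracks known marker genes.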
We created multiple synthetic datasets across a range of dropout parameters \u03bb. At the highest level of dropout (\u03bb = 0.02, where > 80% of counts are zeros) the zero-inflated variant performs considerably worse than the non-zero-inflated variant, with virtually no correspondence to the true pseudotimes compared to \u03c1 \u2248 0.75. We suggest this is due to the inference procedure effectively imputing such a large proportion of the data that there are too many degrees of freedom to effectively infer the trajectory. For the remaining values of \u03bb the zero-inflated variant infers pseudotimes largely comparable to those of the non-zero-inflated version, with marginal improvements in accuracy when there is moderate dropout. We conclude that incorporating zero-inflation into pseudotime inference is sensible, but the variable quality across the (unknown in practice) dropout range along with considerable additional computational cost render it unnecessary for most practical purposes. We next applied our method to previously published single-cell RNA-seq data of 4,423 hematopoietic progenitor/stem cells differentiating into myeloid and erythroid precursors13. To reduce the dataset to a computationally feasible size we used only genes expressed in at least 20% of cells with a variance in normalized expression greater than 5. We performed Gibbs sampling for 4 \u00d7 10^4 iterations using default hyperparameter values, except for \u03b8\u03c4 = \u03b7\u03c4 = 1, and initialized the pseudotimes to the second principal component of the data. The inferred branch allocation \u03b3i detects the branching structure in the data, consistent with previous methods. We then examined the posterior precisions \u03c7g, with larger values indicating more evidence that gene g is involved in the bifurcation process. 
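The accuracy measure used in these comparisons, the Pearson correlation between estimated and true pseudotimes, is simple to compute; this sketch takes the absolute value because a trajectory's direction is arbitrary (our convention, not necessarily the authors'):

```python
import numpy as np

def pseudotime_score(t_true, t_est):
    """Absolute Pearson correlation between true and estimated
    pseudotimes; the sign is discarded because a trajectory can be
    traversed in either direction."""
    return abs(np.corrcoef(t_true, t_est)[0, 1])

t_true = np.linspace(0.0, 1.0, 100)
noise = 0.05 * np.random.default_rng(2).normal(size=100)
t_est = -2.0 * t_true + noise           # reversed, rescaled estimate
score = pseudotime_score(t_true, t_est)  # close to 1 despite the flip
```

Because Pearson correlation is invariant to affine rescaling, a method only needs to recover the ordering and relative spacing of cells, not the absolute pseudotime scale, to score well.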
For illustration purposes, we plot the expression of ELANE and CAR2, which the model suggests will show differential behavior across the bifurcation, along with RPL26, which the model suggests will show common behavior, and the second principal component of the data (PC2), which we noted from exploratory analyses was highly correlated with the existing Wishbone values. We sub-sampled down to 1,000 cells for Monocle comparisons for computational convenience and used the previously published results for Wishbone. The inferred pseudotimes correlate with PC2 (\u03c1 = 0.54), Wishbone (\u03c1 = 0.83), and DPT (\u03c1 = 0.78). However, there is virtually no correlation with Monocle (\u03c1 = 0.01), though as this low correlation only occurs with Monocle we assume it is not an issue with MFA. We also sought to compare branch allocations across the algorithms, which is difficult due to the non-identifiability of the statistical models involved. We next applied MFA to single-cell mass cytometry data, tracking the differentiation of 22,850 monocytes and erythrocytes from hematopoietic stem and progenitor cells across 12 markers, using \u03c7\u03b1 = 5 \u00d7 10^3 and \u03c7\u03b2 = 1. The inferred \u03b3 values identify a bifurcation at the \u201cpinch\u201d in the plot. We subsequently compared the inferred pseudotimes and branching to those found using the alternative algorithms. Monocle does not fix the number of states a priori and typically returns a large number; for the convenience of visualization we therefore only display the 30% most frequent states and group the remaining infrequent ones into \u201cOther\u201d. We find good agreement between MFA and DPT, and similarities with the Monocle assignments (MFA branch 2 loosely corresponds to Monocle branch 17). 
We found good correspondence to all other methods. In this paper we have presented a Bayesian hierarchical mixture of factor analyzers for inference of bifurcating trajectories in single-cell data. Our model is unique compared to existing efforts in that it (a) is fully generative, incorporating measurement noise into inference, (b) jointly infers both the pseudotimes and branches, compared to post-hoc inference of branch detection, and (c) jointly infers which genes are differentially regulated across the branches. We also proposed an extension that accounts for the high levels of zero-inflation present in single-cell RNA-seq data. We applied our model to a range of synthetic and real datasets and demonstrated that it performs competitively with existing methods. There is a natural trade-off in designing such models between flexibility and practicality. The implicit assumption of MFA that gene expression develops linearly across pseudotime allows for fast Markov-Chain Monte Carlo sampling and joint inference of branch structure. However, it is potentially highly mis-specified: the predicted expression can become negative, leading to erroneous inference. As mentioned previously, one main weakness of the model is this unrealistic assumption of linear changes in expression over pseudotime, leading to severe model mis-specification. One could therefore consider alternative nonlinear functions, such as sigmoids17, which due to the model's conditionally conjugate structure could be implemented without resorting to approximations. Alternative approaches15 include hard-setting the pseudotimes prior to inferring the branching structure16. As such there is a natural trade-off between the expressivity of such models and being able to perform valid statistical inference that fully incorporates parameter variation without additional constraints or \u201ctweaking\u201d. Campbell and Yau present a probabilistic method called MFA to infer bifurcating trajectories and differentially regulated genes across branches from single-cell transcriptomic data. The method is based on a Bayesian hierarchical mixture of factor analyzers. The authors also discuss an extension of this method to deal with dropout events commonly observed in scRNA-seq datasets. Although they claim that the model obtained with this procedure is less misspecified, the computational requirements associated with it may be a burden, especially for very sparse gene expression matrices, and unnecessary for practical purposes. MFA is evaluated on a synthetic dataset, scRNA-seq and mass cytometry data and compared with existing methods, although limited to Wishbone, Monocle2 and Diffusion Pseudotime. Overall, the manuscript is well written, the results clearly described and the source code provided. The computational requirements of MFA are not discussed in depth and running times are not reported. Please add a table comparing the execution times of the different methods tested using the full datasets presented and not the down-sampled versions. If a method fails to run in reasonable time, just report this fact (and if possible show the speed-up of MFA using down-sampled datasets). For different datasets, different values of (hyper)parameters are used but no clear guidelines are provided to the user on how to set those values. Please explain the rationale and, if possible, clear metrics that users can use for tuning those parameters. \u201cPCA is applied to the data before inference and the principal component that best corresponds to the trajectory based on the expression of known genes is used to initialize the pseudotimes\u201d. Please explain how to initialize the pseudotime in the absence of known genes. I agree with Dr. 
Gitter about adding a short intro to factor analysis before jumping to equation 2. It may be helpful to consider/clarify the following points to improve the manuscript. Please describe (or, better, show with a synthetic dataset) how MFA performs when more than one branching point is present, for example using the synthetic dataset presented in Rizvi et al. 2017 Nat Biotech. The authors claim that explicitly modeling the dropout events doesn't always justify the computational cost. I think it may be worth testing this idea on real datasets, especially droplet-based ones in which this problem is more pronounced. Good candidate datasets to show how the method performs in those settings are presented in van Dijk et al.1, where their imputation strategy clearly improves pseudotime estimations, or Zheng et al. 2016 Nat Comm. Application to scRNA-seq: 1) What is the running time without down-sampling? How comparable are the results with or without down-sampling? 2) Two genes (ELANE and CAR2) are presented to illustrate the bifurcation process; what other genes are significant by this analysis? It may be worth showing in a sup table the ranking obtained for each branch using MFA (ELANE and CAR2 on top?). 3) \u201c..., Wishbone (\u03c1 = 0.83), and DPT (\u03c1 = 0.78).\u201d Please describe more explicitly how the correlation is calculated, taking into account the fact that different approaches may have different numbers of branches (and that some genes may be relevant only in a sub-branch). Application to mass-cytometry data: 1) Please report the results and running time using the whole dataset. If a method fails to run in a reasonable time, exclude it from the comparison. 2) Custom parameters used (see point 2). In the plot where multiple cells are displayed as circles, it may be worth removing the black border to improve the perception of the density. I have read this submission. 
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. This manuscript presents Mixtures of Factor Analysers (MFA), a hierarchical Bayesian model for studying the branching structure in single-cell RNA-seq and mass cytometry data. \u00a0Single-cell RNA-seq can provide snapshots of cells progressing through dynamic biological processes, many of which exhibit branching structure in which the expression levels of one subset of cells diverges from the others. \u00a0These types of dynamic behaviors are encountered not only in differentiation but also in stimulus response and other processes, which has sparked a need to computationally model the overall branching structure of the process, how cells progress through the process, and how gene expression levels of some genes differ along the branches. \u00a0These and related inference problems are what MFA aims to solve. MFA is a generative probabilistic model that uses factor analysis to model the expression properties of branches in a single-cell RNA-seq dataset. \u00a0A prior is used to encourage similar factor loading gradients in each branch. \u00a0An optional extension models the dropout phenomenon in which technical artifacts can cause non-zero mRNA abundances to be reported as zeros. \u00a0MFA is compared with several popular existing algorithms that are not generative probabilistic models: Wishbone, Diffusion Pseudotime, and Monocle 2. \u00a0These comparisons are conducted in a fair manner on simulated and real data. \u00a0The assessment of the benchmarking is balanced, as is the overall conclusion that in most cases MFA is competitive with these existing approaches even if there is not evidence that it definitively outperforms them by some quantitative metric. The balanced discussion is a strength of the manuscript overall. 
Figures 2E and 3C both show the scenarios in which MFA's performance degrades with respect to dropout levels or the fraction of transient genes. The authors conclude that in many practical analyses the extension for incorporating zero-inflation is not worth the added computational cost. They also present the limitations of their linear model and offer suggestions for improving the scalability of the inference and the linearity assumption. The open source software is another asset and follows the best practices for scientific code. The code is available in GitHub, and an archival version has been deposited in Zenodo. The Zenodo version's title states that it is the \"Bioconductor-ready version\", and providing the mfa R package through Bioconductor would indeed further enhance its utility. Overall, the manuscript is easy to read, and the model is technically sound and well-motivated. I have only minor comments that may improve the accessibility to a broader audience and help clarify some points. Minor comments: * The Methods section assumes that the reader is already familiar with factor analysis, as this technique is not explained. It would be helpful to introduce the approach and the meaning of c and k in this biological context. * There has been other related work on branching trajectories in single-cell datasets. A few examples include: SLICER (DOI:10.1186/s13059-016-0975-3), TSCAN (DOI:10.1093/nar/gkw430), Topslam (DOI:10.1101/057778), and Mpath (DOI:10.1038/ncomms11988). Very briefly discussing some of these methods and expanding the discussion of how GPfates5 relates to MFA would help readers understand MFA's advantages and disadvantages. I do not think it is necessary to benchmark against additional algorithms. * The parameter B, the number of branches, appears to be user-defined, but this is not explicitly stated in the text. It would help users to offer guidance on selecting this crucial parameter. 
* The sensitivity to the hyperparameter values is not assessed. It is not clear what model behavior was observed when modeling the mass cytometry dataset that led to the decision to use non-default values for alpha_x and beta_x, and how users should make those decisions on new datasets. * I understand the mathematical invariance presented in Figure 1, but the biological argument is not intuitive. In the bifurcation 2 -> 1,3 states 1 and 3 have the same expression level, which would suggest that this single gene does not exhibit branching behavior. Rather, it switches from a high to low state in all cases. * The simulation with 300 bifurcating cells and 60 genes may have been too simple. Even the first principal component of the data recovers the true pseudotimes well. All of the methods perform extremely well, making it difficult to assess their relative performances. * I expected that the red cells along the lower curve in Figure 2B would be labeled as False in Figure 2C. Visually, the branch point appears to occur around 15 on the PC1 axis. * Running Monocle 2 on a smaller set of sub-sampled cells than the other methods could put it at a disadvantage. The scalability of MFA is not discussed or related to the runtimes of the other methods. Could MFA run on the full mass cytometry dataset in a reasonable amount of time or is sub-sampling required? * There are a few potential typos: Abstract: \"apply or model\" -> \"apply our model\"; Abstract: \"context practical\" -> \"context of practical\"; Supplement page 2: The identity matrix symbol in the line of Equation 1 for k is not correct; Supplement page 4: Presumably the p(dropout in gene g) equation in the text should also have a 1/N term to match Equation 18; Supplement page 7: \"conduisive\" -> \"conducive\". I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."}
+{"text": "The innate immune response to pathogenic challenge is a complex, multi-staged process involving thousands of genes. While numerous transcription factors that act as master regulators of this response have been identified, the temporal complexity of gene expression changes in response to pathogen-associated molecular pattern receptor stimulation strongly suggests that additional layers of regulation remain to be uncovered. The evolved pathogen response program in mammalian innate immune cells is understood to reflect a compromise between the probability of clearing the infection and the extent of tissue damage and inflammatory sequelae it causes. Because of that, a key challenge to delineating the regulators that control the temporal inflammatory response is that an innate immune regulator that may confer a selective advantage in the wild may be dispensable in the lab setting. In order to better understand the complete transcriptional response of primary macrophages to the bacterial endotoxin lipopolysaccharide (LPS), we designed a method that integrates temporally resolved gene expression and chromatin-accessibility measurements from mouse macrophages. By correlating changes in transcription factor binding site motif enrichment scores, calculated within regions of accessible chromatin, with the average temporal expression profile of a gene cluster, we screened for transcription factors that regulate the cluster. We have validated our predictions of LPS-stimulated transcriptional regulators using ChIP-seq data for three transcription factors with experimentally confirmed functions in innate immunity. In addition, we predict a role in the macrophage LPS response for several novel transcription factors that have not previously been implicated in immune responses. This method is applicable to any experimental situation where temporal gene expression and chromatin-accessibility data are available. 
Macrophages are long-lived coordinating cells of the innate immune system. Activation of tissue macrophages by Toll-like receptor (TLR) stimulation initiates a dynamic program of gene expression changes involving hundreds of genes that are associated with processes such as phagocytosis, antigen presentation, immunoregulation, and non-oxidative metabolism [1\u20134]. Various systems biology approaches have been used to map the transcription factors that regulate the transcriptional response of macrophages and dendritic cells to stimulation with bacterial endotoxin lipopolysaccharide (LPS) [15]. As more large biological data sets are deposited in public repositories, big data analytics is an increasingly useful tool for predicting TF binding sites, tissue distribution, function and interactions. This approach is promising and offers a number of advantages, such as the ability to comprehensively analyze large numbers of cells and tissues simultaneously, and to make specific predictions based on the complete picture. These predictions can then be sorted by probabilities and tested in the lab. Such a computational approach has been used to successfully predict key TFs that play a role in cell differentiation; for example, ectopic expression of just nine of the top candidate TFs for epithelial retinal pigment cells was sufficient to transform human fibroblasts into retinal pigment epithelial-like cells. In contrast to single-time-point epigenome-guided analyses, we derived temporal binding propensity profiles that we correlated with temporal gene expression profiles. S1 Fig. Bars show normalized expression levels of Bmp1 transcript in unstimulated macrophages and at 1, 4 and 12h post LPS stimulation. (TIF) S1 Table. Columns labeled \u201cp value\u201d represent the over-representation p value (relative to the background set of genes); the 
MaxScoreDifference column represents the Clover score difference between the highest and lowest value (at any time point). Each tab shows all motifs for one expression cluster that were found to be significantly over-represented at at least one time point; columns labeled \u201cScore\u201d represent the Clover score. (XLSX) S2 Table. Clover raw scores are shown for time points 0, 1, 2 and 4h for each motif and each cluster. Cluster median expression is shown for each cluster. Three correlation scores are shown: one without any time lag, one for a fixed time lag (fixed for each cluster) and one for the optimal time lag (best score) for that motif/cluster combination. (XLSX)"}
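The screening idea behind these tables, correlating a motif's temporal enrichment profile with a cluster's temporal expression profile while allowing a time lag, can be sketched as follows (the profiles and the `best_lag_corr` helper are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def best_lag_corr(motif_scores, cluster_expr, max_lag=2):
    """Correlate a motif enrichment profile with a cluster expression
    profile, shifting the motif profile earlier by 0..max_lag time
    points, and return the best (correlation, lag) pair."""
    best = (-np.inf, 0)
    for lag in range(max_lag + 1):
        a = motif_scores[:len(motif_scores) - lag]
        b = cluster_expr[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best[0]:
            best = (r, lag)
    return best

# hypothetical profiles over the 0, 1, 2 and 4 h time points
motif = np.array([0.1, 2.0, 1.0, 0.2])   # Clover-style enrichment scores
expr = np.array([0.0, 0.3, 2.1, 1.1])    # cluster median expression
r, lag = best_lag_corr(motif, expr)
print(lag)  # 1: expression follows accessibility by one time point
```

Allowing a lag captures the biology that a factor must first bind accessible chromatin before the target cluster's transcripts accumulate.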
+{"text": "Alternative transcription is common in eukaryotic cells and plays an important role in the regulation of cellular processes. Alternative polyadenylation results from ambiguous PolyA signals in the 3\u2032 untranslated region (UTR) of a gene. Such alternative transcripts share the same coding part, but differ by a stretch of UTR that may contain important functional sites. The methodology of this study is based on mathematical modeling, analytical solution, and subsequent validation by data mining in multiple independent experimental data sets from previously published studies. In this study we propose a mathematical model that describes the population dynamics of alternatively polyadenylated transcripts in conjunction with rhythmic expression, such as transcription oscillation driven by circadian or metabolic oscillators. Analysis of the model shows that alternative transcripts acquire a phase shift if their decay rates differ. A difference in decay rate is one of the consequences of alternative polyadenylation. The phase shift can reach values equal to half the period of oscillation, which makes alternative transcripts oscillate in abundance in counter-phase to each other. Since counter-phased transcripts share the coding part, the rate of translation becomes constant. We have analyzed several data sets collected along a circadian timeline for the occurrence of transcript behavior that fits the mathematical model. Alternative transcripts with different turnover rates create the effect of a rectifier. This \u201cmolecular diode\u201d moderates or completely eliminates oscillation of individual transcripts and stabilizes the overall protein production rate. In our observation this phenomenon is very common in different tissues in plants, mice, and humans. The occurrence of counter-phased alternative transcripts is also tissue-specific and affects functions of multiple biological pathways. 
Accounting for this mechanism is important for understanding the natural and engineering the synthetic cellular circuits.The online version of this article (doi:10.1186/s12864-017-3958-1) contains supplementary material, which is available to authorized users. Arabidopsis thaliana and Drosophila melanogaster [Circadian oscillation plays important role in regulation of gene expression. The number of reported cycling genes differs from study to study. Some publications report hundreds \u20133 othersnogaster . Others nogaster , 12 whilnogaster . Generalnogaster . The papn1(t) denote the change in abundance of the long isoform in time and n2(t) stands for the change abundance of the short isoform of the transcript n in time.Let rp describe the expression rate of the gene from which both isoforms are transcribed. Since they share the same promotor and all other functional sites except 3\u2032 UTR polyadenylation signal, the rate is the same for both short and long transcripts. Let p denote the probability of transcription resulting in production of the long isoform. Then 1-p is the probability of transcription resulting in the short isoform. The UTRs of these transcripts are different, thus we introduce separate variables for the degradation rates:Let rd1 describes the degradation rate for the long isoform of transcript n1rd2 describes the degradation rate for the long isoform of transcript n2Let\u2019s consider the case when the baseline of expression is modulated by a simple harmonic process, such as circadian rhythm. Since the entire cell (or even the organism consisting of magnitude of cells) is modulated by the same factors, we consider the period of oscillation equal in all equations. The baseline oscillation is described by the travelling wave equationb\u00a0>\u00a0c, which means that longer transcripts have a shorter life span. This assumption models the action of miRNA that can bind the longer transcript and facilitate the decay. 
The shorter isoform lacks the miRNA binding site and thus can last longer, producing more copies of the encoded protein. The overall model takes the following form . The formula (14) provides a phase estimate and a corresponding p-value for each estimate. The p-value calculation is obtained from the bootstrap analysis described in Algorithm 1. The latter can be used to filter probes with low statistical confidence on their phase estimate. We used the mouse annotation data (available from Affymetrix and in the shared github source code) to identify multiple probe sets interrogating expression of the same gene. All probes that correspond to the same gene symbol are gathered in the same probe set. The next step of the analysis is to generate phase differences within each probe set. All probe pairs in each probe set are used to compute the absolute value of the phase difference. The phase estimation procedure described previously provides for each probe an estimation of the phase among one of six phase classes discretized by cyclic shift . We used a p-value of 0.1 to filter probes with very low confidence on the phase estimation. There is a peak around zero phase difference for the different tissues. This result is expected since the probes are designed to provide a robust estimation of expression levels. Probe sets, and separate probes within each set, reporting results inconsistent with other probes and probe sets for the same gene tend to be eliminated from the chip at early design stages. As a result, the majority of alternative probe sets report the abundance of the same transcript and show no phase difference. There is a degree of uncertainty in the identification of phase, considering the low sampling rate and high level of technical variation in microarray data. Thus, we expect a high number of alternative probe sets with a phase difference of one time point. The primary analysis has been previously published . Annotations for the Arabidopsis thaliana microarray come from a less extensive annotation . 
For this analysis we considered probesets as representing the same gene if any of the following fields were identical: RefSeq Transcript ID, AGI and Entrez Gene (Figure ). We applied the same analysis to previously published data sets , 15 . Regardless of the source of oscillation, the rhythmic nature of expression demands a significant revision of the way we understand and model the function and regulation of genes. One of the previously published models predicted that for a rhythmically expressed gene the addition of miRNA may have two different effects: either the expected decrease or a surprising increase of transcript abundance, depending on the timing of miRNA action . Our model shows that such strange behavior of alternative probes is not only natural; they are performing an important function. This function eliminates the effect of oscillation in the transcription mechanism. Other studies have already reported pervasive oscillation of the entire transcriptome . Regulating the ratio of alternatively adenylated transcripts allows changing the amplitude of a particular gene expression or even complete elimination of oscillation. Constant abundance of a gene product can be used for production purposes to maximize the output of a peptide or an enzyme producing the product of interest. Alternatively, this mechanism can be engineered to block unwanted pathways such as apoptosis or cell motility, or to keep certain pathways active at all times. Likewise, the same model can be used to create a blueprint for constructing artificial genes with certain properties. For example, the formula given in the description can be used to select parameters of an artificial gene in order to create the desired amplitude of oscillating product abundance. The mouse gene expression profiles were obtained in the original study of circadian gene expression in adipose tissues ; the AKR strain was used in that study . We used two independent data sets similar in experiment design , 15. 
See . We integrate each equation separately with respect to time. The solution of the system is given by . The rotating-vector description of simple harmonic oscillation provides a neat way of rewriting n1 and n2 as single harmonic oscillations: x1 and x2 can be represented as two vectors a1 and a2 that rotate around their tails, which are pivoted at the origin O. The angular speed of the rotation is equal to \u03c9. As the vectors rotate around the origin, their projections x1 and x2 on the horizontal axis vary cosinusoidally. The geometric solution is illustrated in Fig. . Hence we have n1(t)\u2009=\u2009A\u2009cos(\u03c9t\u2009+\u2009\u03b21). Similarly, we can get the expression for n2, and from the two the phase difference. The algorithm used in the analysis of the data is based on resampling techniques. Indeed, we use the maximum entropy bootstrap algorithm to generate a large number of replications of a given gene expression time series. Then, we calculate a bootstrapped p-value to test for circadian genes, and finally we construct a bootstrap percentile confidence interval that is used to assign a phase to each oscillating gene. The complete description with source code and test results has been published in . Additional file 1: Supplemental method. In addition to the Chi-Square test summarized in Table , this supplemental file provides descriptions and p-values obtained in simulations; see https://stat.ethz.ch/R-manual/R-devel/library/stats/html/chisq.test.html. (DOCX 10\u00a0kb) Additional file 2: Supplemental Data Tables. This zip archive contains the results of the analysis of phase difference among redundant probe sets in mouse liver, mouse brown fat, mouse white fat and two independent studies of Arabidopsis thaliana timeline gene expression. (ZIP 32\u00a0kb) Additional file 3: Supplemental Initial Data Tables. This zip archive contains the initial timeline data for probe sets in mouse liver, mouse brown fat, mouse white fat originally published in Zvonic et al. . 
(ZIP 37"}
+{"text": "Metabolic disorders such as obesity and diabetes are diseases which develop gradually over time through the perturbations of biological processes. These perturbed biological processes usually work in an interdependent way. Systematic experiments tracking disease progression at gene level are usually conducted through a temporal microarray data. There is a need for developing methods to analyze such highly complex data to capture disease progression at the molecular level. In the present study, we have considered temporal microarray data from an experiment conducted to study development of obesity and diabetes in mice. We first constructed a network between biological processes through common genes. We analyzed the data to obtain perturbed biological processes at each time point. Finally, we used the biological process network to find links between these perturbed biological processes. This enabled us to identify paths linking initial perturbed processes with final perturbed processes which capture disease progression. Using different datasets and statistical tests, we established that these paths are highly precise to the dataset from which these are obtained. We also established that the connecting genes present in these paths might contain some biological information and thus can be used for further mechanistic studies. The methods developed in our study are also applicable to a broad array of temporal data. High throughput data like Microarray , 2 or RNSimilar methods have been used to study disease condition by identifying significantly perturbed biological processes in different stages in a disease progression. For example, Sun, et al., measuredgiven such lists of perturbed processes at respective stages in a disease progression. Linking such processes temporally could provide a view of how processes perturbed initially lead to processes perturbed at a later time point in a disease development. 
Thus, we have used here temporal microarray data obtained from mouse liver as the mice were fed a high fat high sucrose diet and progressed towards an obese condition. We found the lists of biological processes perturbed at different time points. Then, we constructed a network of biological processes where the processes are connected through common genes. From this network, we have identified links between biological processes that are perturbed at respective time points. These linked biological processes, defined here as paths, connect the initial to the final perturbed processes. These paths helped us to study disease progression temporally at a process level and may be used to identify key genes involved in obesity disease progression. However, as far as we know, no such study has been done to link perturbed processes temporally. The details of the experiment conducted and the processing of the data are given in Materials and Methods: Processing of Microarray Data. A heat map of the microarray data showing the gene patterns is shown in Fig. . This resulted in matrices of es, nes and p-values. The matrix consisting of nes values, of dimension 816 X 10, is shown as a heatmap in Fig. . To find how processes perturbed at the initial time point affect processes perturbed at later time points, we considered the connectivity between biological processes. We thus made a network of these biological processes where nodes are biological processes and edges between two nodes signify that at least one gene is shared between the two biological processes. The network is shown in Fig. . Next, we defined a path of length 10 made up of processes with high nes values at respective time points, i.e. the 1st process in the path has the highest nes value at the 1st time point, the 2nd process in the path has the highest nes value at the 2nd time point, and so on. Thus the obtained path will have the highest average nes value. To find such paths, we first used a brute force approach, i.e. randomly generating paths of length 10. 
A distribution of average nes values of 50000 such random paths is shown in Fig. . We plotted the distribution of link probabilities for all the links present in the set of perturbed paths and also the distribution of link probabilities for the links present in the original network of processes . We denote a path as a set of processes (j = 1:9), where j indicates the time at which a process is perturbed and a superscript on an edge indicates which processes the edge connects. We define the overlap factor O(iP) for a path iP as the union of the paths present in the overlap factors of its edges. The overlap between paths is visualized by the plots in Fig. . Having established this property, we used it to select a subset of paths, from the set of 1024 paths, which overlap with most of the other paths. We found that the overlap factor set of the path with the highest overlap factor contained 464 paths out of the total 1024. To increase this percentage, we tried different combinations of two paths and found that there are four pairs of paths that had 822 paths in their combined overlap factor sets, i.e. approx. 80% of the total paths. These paths are mentioned in Table . To better depict the paths and capture the connectivity between processes, we made a gene-process network where processes at consecutive time points are connected through common genes, see Fig. . Having established the paths which capture the disease progression, we then checked the precision of the selected paths. That is, we wanted to check how specific the obtained paths are with respect to the particular dataset representing a particular tissue in a disease state. For this, we used temporal datasets from the NCBI repository under GEO accession number GSE63178. The database consisted of 40 temporal datasets, one of which was used in our study to find paths, and the other 39 datasets were used to calculate the precision of the selected paths. The database consisted of microarray data obtained from temporal gene expression profiling of multiple murine tissues with different diet regimens [15]. 
To compare this precision with the scenario when processes from individual time points are used, we took the topmost process from Day 6 and calculated its precision in the same way . This proves that processes are not specific to the source dataset, while the obtained paths are specific to the source dataset. Finding links from biological data is a desirable aim and can help in the mechanistic understanding of the underlying processes at work. Time series microarray data can be used to find such links as well as to give a temporal direction to these links. In the present study, using time series microarray data, we aimed to capture the disease progression at the process and gene level. For this, we constructed a biological process network. Networks between biological processes, or more generally between sets of genes, have also been used in other studies linking biologically relevant terms , 16 . We used the t-test and other tests to provide meaning to the captured links present in the paths. Our statistical tests gave us confidence that the observed links are not just due to the degree factor of processes. Thus we hypothesized that if two processes are perturbed at consecutive time points and are connected, there is a high chance that perturbation of a process at a time point might cause perturbation of the other process at the next time point through these common genes. This is a result drawn from statistical testing, and the ultimate validation can only be done through experiments. However, despite this limitation, we believe that our framework can help in generating hypotheses about the causal mechanisms behind the observation of different perturbed processes at different times of the experiment. As final validation of the captured paths, we showed that the selected paths are more precise than biological processes in defining the biological state. 
We must mention here that the datasets used in the precision analysis were taken from different tissues as well as from different variations of the high fat high sugar diet condition . Thus, in conclusion, we can say that our analysis helped us to extract genes that can be hypothesized to play an important role in disease progression and that can be experimentally validated. One can further take up these genes for a mechanistic understanding of disease progression. Finally, the procedure we have presented here can be used with any time series microarray or proteomics data to find links between processes perturbed at different time points. The microarray data was obtained from an experiment where one group of mice was fed a high fat high sucrose diet (treated group) and another group a normal diet (control group) for a certain number of days before tissue samples were taken from both groups of mice. Both groups of mice were fed their respective diets for the following durations: Day 1, Day 6, Day 10, Day 14, Week 0, Week 3, Week 6, Week 9, Week 12, Week 15 and Week 18. This experiment was repeated three times. Then, a microarray experiment was performed on the tissue samples and, after suitable normalization of the signal intensities of each probe using Agilent GeneSpring GX software, three values of log fold change for the control sample and treated sample were obtained for each probe at each time for each tissue. Further details of the experiment are given in . This data for liver tissue was downloaded from the NCBI repository under GEO accession number GSE63175. The data also contains information from an arm where mice were fed the high fat high sucrose diet plus an ayurvedic formulation, which is outside the scope of our present study. These data correspond to columns with column header \u201cP2_HFx_y\u201d and were removed. The column headers have information on the time point of the experiment in days as well as weeks. Weeks were recorded in the experiment as the number of weeks after Day 14. 
Thus, 14 days were added while converting weeks to days. This implies that Day 14 and Week 0 correspond to the same time, i.e. the experiment was done twice at that point; thus Day 14 and Week 0 information was combined after making the final matrices, as mentioned at the end of this section. This resulted in the final time points: Day 1, Day 6, Day 10, Day 14, Day 35, Day 56, Day 77, Day 98, Day 119, and Day 140, as reported in this study. The steps used in Processing of Microarray Data are described below and shown in a schematic diagram in Fig. . For each probe, the mean of the log fold change for treated samples was calculated and a p-value signifying the difference between the three control values and three treated values was generated by using the t-test. The data contains 40628 probes which correspond to 29411 gene symbols. Gene symbol information for each probe was taken from the column with column header \u201cGene symbol\u201d. Here, multiple probes may correspond to the same gene. Step 1. First, we filtered the data to have only those genes that changed at least 2-fold in all three treated samples at a time point. In case two probes corresponding to the same gene satisfied this condition, the probe with the minimum p-value was chosen. We repeated this process for data at different time points and combined all filtered genes together to form a matrix of filtered genes and time points, with the fold change values of all filtered genes inserted at the respective time points. In the matrix, there would be many genes with no fold change values at some time points. For these genes, we used the following steps to insert values at these time points (i.e. where they are not >2-fold changed). Step 2. For the genes with missing values, we went back and searched their probes\u2019 fold change values, and if any two values out of three were outside the interval -1 to 1, we selected those probes and went to Step 3. 
If no probe of the selected gene satisfied the above criteria, we then selected those probes which had values in all three samples within -1 and 1 and went to Step 3. Step 3. The selected probe\u2019s average over three samples was taken if in all three cases the value was greater/less than 0. If multiple probes of a gene satisfied this condition, the probe with the minimum p-value was chosen. If no probe out of the selected probes satisfied this condition, the probe\u2019s average value over the two (out of three) samples with value greater/less than 0 was taken. For multiple probes satisfying this condition, the probe with the minimum p-value was taken. For the probe chosen, if the average value was less than 0.8 / greater than -0.8, a dummy number of 0.001 was inserted; otherwise the average value was inserted in the matrix. The resulting matrix contained log fold change values at eleven time points. We combined the Day 14 and Week 0 information in the following way. We defined a perturbed gene as having absolute log fold change >1, and a negative (positive) perturbed gene as having log fold change <-1 (>1) at a time point. Now, if a gene\u2019s values at both time points are perturbed in the same direction (-ve or +ve), we took the average as the merged value. If the values are perturbed in opposite directions at the two time points, we assigned a dummy number (as used above) to that gene as the merged value. If a gene\u2019s value is perturbed at only one time point, we used that value as the merged value, and if the value is not perturbed at either time point, we assigned one of the non-perturbed values as the merged value. Then we checked, in the resulting matrix of 10 time points, whether a gene was not perturbed at even a single time point, and removed those genes. The resulting matrix contained log fold change values at ten time points for 19303 genes. The matrix was clustered using the default clustergram function of Matlab, which uses the algorithm of Eisen et al. 
resulting in the heatmap shown in Fig. . The Gene Ontology Biological Process list was obtained from the Enrichr Library section . We followed the method described in to calculate hitP and missP. Here, rj is the fold change of gene j, N is the total number of genes, and NH is the number of genes in S. The enrichment score (es) is calculated as the maximum plus the minimum of hitP \u2212 missP. The plot of hitP \u2212 missP for a particular process, for the fold change values of genes at Day 1, is shown in S2 Fig. S1 Fig: For each gene (represented by one/many probes), its fold change value (FC1, log2 transformed) at each time point is calculated as in step 1. For the genes where the condition of step 1 is not satisfied at some time points (as represented by empty boxes in the matrix), step 2 and step 3 are followed to get the fold change values (FC2), which are inserted at the respective places in the matrix. Then time points t4 and t5 are combined, as mentioned in the text, to give the Final Matrix. (PDF) Click here for additional data file. S2 Fig: An example of the gene set enrichment analysis method is shown for a process receiving a high nes value at Day 1. Most instances of genes of this set are present towards the left, which results in high es and nes values and signifies that most genes of this process are perturbed. (PDF) Click here for additional data file. S3 Fig: (A) For all the processes, the nes values calculated at each time point were plotted against the corresponding \u2013log10 p-value, showing that as the nes value of a process increases, the corresponding \u2013log10 p-value also increases, signifying that processes with high nes values are significantly perturbed. (B) Here, for each process, the average absolute fold change of its genes at each time point is calculated and this value is plotted against the nes values of these processes. 
The plot shows that as the nes value of a process increases, the average absolute fold change values of its genes also increase. (PDF) Click here for additional data file. S4 Fig: The probability of obtaining the given edges by chance is plotted for all edges as well as for edges from the set of perturbed paths, and clearly shows that the probabilities are low for edges from the set of perturbed paths as compared to the total edges. (PDF) Click here for additional data file. S1 Table: The list of 816 biological process names. (XLS) Click here for additional data file."}
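The hitP/missP running-sum enrichment score used in the record above (es = maximum plus minimum of hitP \u2212 missP) can be sketched as follows. This is an illustrative, hedged reimplementation: the gene values, the |fold change| weighting, and the set definitions are assumptions, not the study's actual code:

```python
import numpy as np

def enrichment_score(fold_changes, in_set):
    """GSEA-style running-sum score: walk genes ranked by |fold change|,
    accumulating hitP for set members (weighted by |fold change|) and
    missP for non-members; return max(hitP - missP) + min(hitP - missP)."""
    order = np.argsort(-np.abs(fold_changes))   # rank genes, largest |FC| first
    fc = np.abs(fold_changes)[order]
    hit = np.asarray(in_set)[order]
    n = len(fc)
    nh = int(hit.sum())
    hit_p = np.cumsum(np.where(hit, fc, 0.0))
    hit_p = hit_p / hit_p[-1]                   # fraction of set weight seen so far
    miss_p = np.cumsum(~hit) / float(n - nh)    # fraction of non-set genes seen so far
    running = hit_p - miss_p
    return running.max() + running.min()

rng = np.random.default_rng(0)
fc = rng.normal(0.0, 1.0, 200)                  # synthetic fold changes
set_top = np.abs(fc) > 1.5                      # a "process" whose genes are all strongly perturbed
es_hi = enrichment_score(fc, set_top)
es_rand = enrichment_score(fc, rng.random(200) < set_top.mean())  # size-matched random set
```

A set made of the most perturbed genes concentrates all hits at the top of the ranking, so its score approaches the maximum, while a random set of similar size scores lower; this is the contrast S2 Fig illustrates for a high-nes process.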
+{"text": "Functional genomics and gene regulation inference has readily expanded our knowledge and understanding of geneinteractions with regards to expression regulation. With the advancement of transcriptome sequencing in time-series comes theability to study the sequential changes of the transcriptome. Here, we present a new method to augment regulation networksaccumulated in literature with transcriptome data gathered from time-series experiments to construct a sequentialrepresentation of transcription factor activity. We apply our method on a time-series RNA-Seq data set of Escherichia coli as ittransitions from growth to stationary phase over five hours and investigate the various activity in gene regulation process bytaking advantage of the correlation between regulatory gene pairs to examine their activity on a dynamic network. We analysethe changes in metabolic activity of the pagP gene and associated transcription factors during phase transition, and visualize thesequential transcriptional activity to describe the change in metabolic pathway activity originating from the pagPtranscription factor, phoP. We observe a shift from amino acid and nucleic acid metabolism, to energy metabolism during thetransition to stationary phase in E. coli. After mapping out a genome, the study of gene regulationprocesses and their effect on gene expression is generally thenext stage in the analysis pipeline and is an importanttopic in system biology. It is a challenging topic due to theway it interconnects components in the genome, the proteomeand the epigenome as they contribute to the control of themagnitude, location and timing of gene expression ,2.GenerA more accessible approach comes from reverse engineeringgene regulation links from transcriptome data usingstatistical models and algorithms that can rely on observedmeasurements of transcripts alone or with the inclusion ofnon-transcriptome data like protein concentration . 
The next step in gene regulation research lies within time-series data, which has an advantage over a single time point treatment and control experiment because it can detect patterns of gene expression over time, such as periodic patterns in response to stimulus . To elucidate the dynamics of time-series gene regulation, we present a method that uses the cross-correlation between transcription factor and gene expression for efficient analysis of numerous genes. We use cross-correlation in place of correlation, as correlation is not able to capture the sequential changes in gene expression that exist in time-series data, while the use of differential equations is difficult to apply to large data sets. The relationship between the expression of the regulating transcription factors and the genes they regulate is used to identify crucial times of activity in order to build a sequence of regulation events. These activation times come in pairs when the gene expressions of transcription factors change with a correspondingly similar change in gene expression in the genes they regulate. We apply our method to the E. coli model as it provides a solid foundation with established literature and produces fuller networks, and then show the gene regulation activity for the pagP gene and its associated regulators. The expression data was normalized in R using DE . A transcription factor a and the gene it regulates b form a regulatory pair. The time ta when a initiated its activity and the time tb when the expression of b responded are calculated per pair. Genes with only one transcript configuration were selected, and self-regulating genes with no other regulators in their regulatory process were excluded. The lag time h was determined by calculating the cross-correlation between the transcription factor a and the regulated gene b for all possible h . A network was created using the igraph package . The network was converted into a dynamic network using the networkDynamic and ndt packages . The E. coli B str. 
REL606 expression data is available from . We propose a gene expression regulation analysis approach to detect the times at which gene expression regulators initiate a change and the times at which the genes being regulated have their expression altered. These times create a sequential activity map of transcription factors and genes that describes the pattern of metabolic reactions during a given period of time. The method was applied on experimental time-series data from E. coli over a period of five hours, and the regulation process involving phoP and pagP activity was used to show how the results could be visualized and interpreted in a dynamic network. Time-series experiments are investigated differently from static experiments, so there was an initial investigation, which found that cross-correlation was better fitted to the time-series data than correlation between the expression of a transcription factor and a regulated gene. The Pearson correlation was 0.0709 overall, -0.506 for the average negative correlation and 0.556 for the average positive correlation, while in contrast, the cross-correlation values were -0.738 for the average negative correlation and 0.766 for the average positive correlation. By allowing for a time lag, we were better able to identify regulation events, even though there was still some variability that could not be accounted for with cross-correlation alone. The chosen lag times were used to filter out the possible transcription factor and gene activation times. The most common combination of activity times in samples 1 and 2 is 3 x 3 hours, which is a lag time of 0 . The three samples in the data set were analysed separately to see if there would be a consensus result between them. 
The expression of each time point per sample was highly correlated with the others, which indicated a sufficient degree of similarity between them for direct comparison (between 0.912 and 0.709). When inspecting the range of detected active times, we observed a general agreement of lag times between the samples. Although the actual lag times and active times differed, sample 3 seemed to be an hour behind samples 1 and 2 metabolically. This difference in time was present in the majority of genes analysed and suggested sampling or biological error. The consistency seen in the results indicates that the length of the time period between transcription factor action and gene expression change was more relevant than the gene expression value itself, and that differences between samples can be detected in this way. The network produced from the sample 1 data contained 67 v . Networks were likewise produced from the sample 2 and sample 3 data . The pagP gene was previously identified as a principal element of phase transition . Our method for isolating the times at which transcription factors initiate gene regulation and when the change in gene expression starts was successful, and the results were complemented by a dynamic network visualization. We were able to show the sequential regulation activity initiated by phoP on the genes it directly regulated as well as on genes further downstream of its activity. The network showed a small part of the ripple effect that transcription factors can have on regulation systems in a given time period. The visualization identified the types of metabolic pathways that were activated and deactivated as E. coli reaches stationary phase and determined the time and sequence of activities. Although this was used on a simplified data set, it is possible to extend the analysis to other types of experiments as a type of regulation profile of metabolic processes."}
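The lag-detection idea in the record above (choose the shift h that maximizes the cross-correlation between a transcription factor's expression series and its target's) can be sketched as follows. The series, the 12-point grid, and the 2-step lag are synthetic illustrations, not the study's E. coli data:

```python
import numpy as np

def best_lag(a, b, max_lag=3):
    """Return the lag h (TF leads gene by h steps) that maximizes the
    cross-correlation between z-scored series a (TF) and b (regulated gene)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    scores = {}
    for h in range(max_lag + 1):
        n = len(a) - h
        # correlate TF at time t with the gene at time t + h
        scores[h] = float(np.dot(a[:n], b[h:h + n]) / n)
    return max(scores, key=scores.get), scores

t = np.arange(12)
tf = np.sin(t)        # transcription factor expression over 12 time points
gene = np.sin(t - 2)  # the regulated gene responds two time points later
lag, scores = best_lag(tf, gene)
```

At lag 0 the two series correlate poorly, while at the true lag the aligned segments match, which is why plain correlation misses regulatory pairs that cross-correlation with a lag recovers.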
+{"text": "How to correctly detect the causality from the observed time-series is quite important but it is usually very difficult, and has attracted much attention in complex systems in recent years. There are many effective method to infer the causal relation between the variables, such as mutual prediction method , state sX was said to \u201cGranger cause\u201d Y if the predictability of Y declines when X was removed from the universe of all possible causative variables u. The key characteristic of GC is separability, which means that the information for a causative factor only depends on one variable. In other words, if the variable X is removed, its information will be eliminated at the same time. However, the assumption of separability is mainly appropriate to the stochastic systems or linear systems because the separability implies that the system is just considered as a part not a whole at one time. Generally, for a linear system with strongly coupled variables, GC is a very powerful tool to detect their interactions. However, it lacks ability to detect the causal relation on weakly coupled variables or nonlinearly coupled variables, in particular for a deterministic system, i.e., for those systems, GC may give ambiguous results or even wrong conclusions.GC can be roughly stated as follows : the varIn order to overcome these drawbacks or shortages, Sugihara et al. proposedThe CCM method and many other methods are effective for identifying the causal associations from the observed data, and in particular, CCM method can be thought as another milestone after the GC method to detect causality. However, in spite of those impressive advances on this area, most existing approaches including CCM and GC methods, generally need a long time-series to detect the causal relation, for example, more than 3000 in CCM study. 
But in many real-world applications, especially in biological systems, the observed time-series data (or samples) are usually very short, since these data or samples depend highly on the experiment or on limited resources. Thus, one natural question is how to detect the causality in such high dimensional short time-series, including weakly coupled and nonlinearly coupled relations. In this work, to answer this question, we aim to find an effective method to detect causal relations for high dimensional short time-series or small samples. In other words, in this paper we propose a new method called the topologically equivalent position method, TEP for short, which can detect causality for very short time-series data or small samples. This method is mainly based on attractor embedding theory in nonlinear dynamical systems. Specifically, we exploit the information from the embedding theorem, i.e., that two attractors reconstructed from two different observed variables are topologically equivalent. That information is used to predict the time-series of one variable from another, or to detect the causality between them, based on the principle of topologically equivalent position, i.e., that the positions of two corresponding points in the two attractors are topologically equivalent. This prediction can be achieved even with a small number of samples or a short time-series. We use both numerical examples and real gene expression data to show the effectiveness of our method. As a result, we show that our method can be effectively used in biological systems, and that it can extend the GC and CCM methods to general cases. In addition, comparison studies for different approaches are also provided to show the superiority of our method. In dynamical systems theory, the necessary condition for two variables (time-series) to be causally linked to each other is that these two variables are from the same dynamical system, or they must share the same attractor. 
This also means that the time-series of one variable contains the information of the other variables in the same system or attractor, and thus can be used to predict their dynamics. Here an attractor means a set of state values that is invariant under the dynamics, i.e., the set of values toward which the system evolves in the course of its dynamic evolution. Furthermore, according to Takens\u2019 delay embedding theorem, one can reconstruct the shared attractor M by lagged coordinates from its components; we denote the attractors reconstructed from the components X and Y by MX and MY, respectively. Specifically, first we define causality, which in this work is a prediction-based concept. The points {Pi} and {Qi} are located on the corresponding reconstructed attractors MX and MY, i.e.\u00a0Pi\u2009\u2208\u2009MX, Qi\u2009\u2208\u2009MY. Now we use the information of Pi to predict Qi: if the predicted time-series (samples) matches the true Qi on attractor MY, we say that Y causes X; otherwise, we say that Y does not cause X. Two points Pi\u2009\u2208\u2009MX,\u00a0Qi\u2009\u2208\u2009MY are called topologically equivalent positions (TEP) if and only if the relative distances from Pi and Qi to any other points on their respective attractors MX and MY are invariant. For example, two points on MX and MY (taking P4 and Q4) are called topologically equivalent positions if the quantities di4/d12\u2009=\u2009Di4/D12 are satisfied, where d12 and D12 are the Euclidean distances between the first two points on the attractors MX and MY, and di4 and Di4 are the Euclidean distances from the points P4 and Q4 to the other points on their respective attractors MX and MY. For a general case, any two points on two topologically equivalent attractors are called topologically equivalent positions if they satisfy dij/Dij\u2009=\u2009\u03b3, where \u03b3 is a constant. To understand this definition, an illustration is given in the figure, where the points Pi, Qi on the reconstructed attractors MX, MY are known. 
Next, we use the information defined in (2) to detect the causal relation between these two time-series from their topologically equivalent attractors. Assume that the relative positions of the points P1\u2009\u223c\u2009Pi and Q1\u2009\u223c\u2009Qi\u2009\u2212\u20091 are known; for example, we use P1\u2009\u223c\u2009P4 and Q1\u2009\u223c\u2009Q3 to predict Q4 by using (2). The next important step is to evaluate this prediction. Our criterion is to check the error between the predicted point and the true point Q4; we denote this error by \u03f5. If \u03f5 is sufficiently small (less than 10\u22123), it implies an accurate prediction from P4 to Q4. In the same way, we can check other points until all the points are evaluated. Finally, we obtain the mean error or the total error. If it is sufficiently small, it means that the information of {Pi} can predict {Qi}, which implies that {Pi} has a strong relationship with {Qi}. In other words, the error reflects the causal relation between the two variables X and Y. Notably, even three points are in theory sufficient to estimate the TEP between two time-series (producing \u2018two distances\u2019 needs at least three points), which is a major advantage of this method. Substituting dij into (5), the error \u03b5ij can be obtained directly: since Dij is known, it is easy and straightforward to calculate the relative error \u03b5ij. Therefore, a small error \u03b5ij implies that the predicted Qi is accurate. We then transform \u03b5ij by an exponential function so that the error is normalized between 0 and 1. The final score of a pair of observed time-series is then defined as the mean of these normalized errors over the n time points (samples) used for error estimation. 
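The distance-ratio idea above can be written down directly from the definition. The following is a simplified illustration, not the authors' implementation: it embeds each series with lagged coordinates, compares normalised pairwise distances on the two reconstructed attractors, and averages the exponentially transformed errors into a (0, 1] score. The embedding dimension, delay, and the choice of the first two points as the reference pair are arbitrary assumptions of the sketch.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Reconstruct an attractor from a scalar series with lagged coordinates."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def tep_score(x, y, dim=3, tau=1):
    """Score distance-ratio invariance between the reconstructed attractors.

    eps compares every pairwise distance on M_X with the corresponding
    distance on M_Y, each normalised by the reference distance between the
    first two points (d12 and D12). exp(-eps) maps the errors into (0, 1];
    a higher mean score indicates topologically equivalent positions.
    """
    mx = delay_embed(np.asarray(x), dim, tau)
    my = delay_embed(np.asarray(y), dim, tau)
    dx = np.linalg.norm(mx[:, None] - mx[None, :], axis=-1)
    dy = np.linalg.norm(my[:, None] - my[None, :], axis=-1)
    eps = np.abs(dx / dx[0, 1] - dy / dy[0, 1])
    iu = np.triu_indices_from(eps, k=1)
    return float(np.exp(-eps[iu]).mean())

# A deliberately short series compared with itself should score exactly 1,
# while an unrelated noise series should score lower.
t = np.linspace(0.0, 10.0, 30)
s = np.sin(t)
rng = np.random.default_rng(1)
self_score = tep_score(s, s)
noise_score = tep_score(s, rng.normal(size=30))
```

The sketch works on 30-point series, in line with the small-sample claim in the text, though it omits the leave-one-out prediction step of the full method.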
Generally, we use a leave-one-out scheme to evaluate all the observed time points (samples), and we further scale the error accordingly. By using this score function, we identify causal relations both for numerical examples and for real gene expression data in the next two sections.Remark: our TEP method differs from the CMS method and related approaches used in earlier work. To validate the effectiveness of our TEP method, we first apply it to several benchmark examples and to gene expression data. The theoretical models used here are the same as those used in previous studies.The first example is a pair of coupled logistic difference equations. Since we know the underlying relations between the variables in advance, we use this mathematical model to verify the validity of our proposed method. We consider two coupled logistic difference equations which exhibit chaotic behavior, with rx\u2009=\u20093.8, ry\u2009=\u20093.5, \u03b2x, y\u2009=\u20090.02, \u03b2y, x\u2009=\u20090.1, and the initial conditions X(1)\u2009=\u20090.4, Y(1)\u2009=\u20090.2. By using the TEP method, we first check the bidirectional causal relation and then the unidirectional causal relation of the system (7) between the variables X and Y. Since the two cases \u03b2y, x\u2009=\u20090 and \u03b2x, y\u2009=\u20090 are equivalent, without loss of generality we consider the case \u03b2y, x\u2009=\u20090; the system (7) then becomes unidirectional, with rx\u2009=\u20093.8, ry\u2009=\u20093.5, \u03b2x, y\u2009=\u20090.02, and the initial conditions X(1)\u2009=\u20090.4, Y(1)\u2009=\u20090.2. The calculation results obtained with our method are shown in the corresponding figures.To further verify the effectiveness of our TEP method, we detect the causal relations between the variables of a 5-species model, described by the system shown in the figure: Y1, Y2 and Y3 are coupled to each other, and Y1, Y2, Y3 drive Y4 and Y5, whereas Y4 and Y5 do not have any effect on Y1, Y2 and Y3. 
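The coupled logistic system can be reproduced in a few lines. The exact coupling form below is an assumption (the standard Sugihara-style bidirectional model); the parameter values and initial conditions are the ones quoted in the text, and setting byx = 0 gives the unidirectional case.

```python
def coupled_logistic(n, rx=3.8, ry=3.5, bxy=0.02, byx=0.1, x0=0.4, y0=0.2):
    """Iterate a coupled logistic pair; parameters follow the text.

    The coupling form X(t+1) = X(t) * (rx - rx*X(t) - bxy*Y(t)) is an
    assumption, not taken verbatim from the paper.
    """
    x, y = [x0], [y0]
    for _ in range(n - 1):
        x.append(x[-1] * (rx - rx * x[-1] - bxy * y[-1]))
        # x[-2] is the previous x value, so both variables update simultaneously.
        y.append(y[-1] * (ry - ry * y[-1] - byx * x[-2]))
    return x, y

x, y = coupled_logistic(1000)
```

With these parameter values both trajectories remain bounded in (0, 1), so long synthetic series can be generated and then truncated to test short-sample behaviour.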
These two simple models show that our method works well with a small number of samples (15 time points in the 5-species case, with results listed in the figure), i.e., it can detect the causality between the variables correctly, in agreement with the known structure.In this section, we apply our method to detect causal gene regulation between pairs of genes. The gene regulatory network used here is from the bacterium E. coli, as described in earlier work. By using the algorithm above, we first calculate \u03b5 for each pair of genes; with 100 genes there are in total 9900 ordered pairs, i.e., 9900 values of \u03b5. We then use receiver operating characteristic (ROC) curves at different noise levels, i.e., noise free, noise level 0.1 and 0.2, respectively. At the same time, we compare our method with the IOTA method. From the ROC curves and their statistical analysis, TEP is clearly effective for detecting the causality of gene regulations from short observed time-series data, and the comparison with the IOTA method also demonstrates this. For the E. coli gene expression data, we first conduct the statistical analysis of the ROC curves and compare our method with the IOTA method; the results are shown in the corresponding figure.Next, we detect causal gene regulations from yeast gene expression data with 10 time points; the network structures were downloaded from the references.Circadian rhythm is fundamentally important for mammals in their physiological processes. Identifying the important circadian genes and their roles in the relevant processes is important for elucidating the mechanism of rhythms, in particular at the network level. In fact, there exist many key circadian genes and functionally organized interactions, which generate circadian oscillations. Based on rat circadian rhythm gene expression data, we detect the causal relations among circadian rhythm related genes.To make our results clearer, we also give the real gene regulatory networks of 100 genes of E. coli and yeast in the figures.How to detect causal relations from short time-series data is very important, especially for genes, because the obtained causal relations among genes can provide valuable information and insights into the topological structure of gene regulatory networks. Beyond gene regulatory networks, our method can be used in many other complex networks. However, we must also point out that false predictions still occur, e.g., many false predictions for the circadian rhythm gene expression data. As a future topic, we will study the dependence of our method on the data and its length.In this work, a new method called the topologically equivalent position method has been proposed. It is a prediction-based method that can be used effectively to detect causality from observed short time-series data or very small samples. Both numerical examples and gene expression data have been used to validate the proposed method. Different from existing methods such as Granger causality and CCM, our method is not only simple in terms of computational procedure but also applicable to nonlinear systems. Most importantly, it can identify causality from observed time-series with just a few time points. This is very useful for real-world data, in particular gene expression data, which are typically very short (\u224810 points)."}
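The ROC evaluation used in the gene-network experiments reduces to ranking pairwise scores against a known network. This toy example (not the paper's data) scores hypothetical gene pairs against a made-up ground truth and computes the area under the ROC curve with the rank-sum (Mann-Whitney) identity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth for 200 hypothetical ordered gene pairs (1 = true regulation),
# with scores built so that true edges tend, imperfectly, to score higher.
truth = np.array([1] * 100 + [0] * 100)
scores = np.where(truth == 1,
                  rng.uniform(0.6, 1.0, truth.size),
                  rng.uniform(0.0, 0.7, truth.size))

def roc_auc(labels, values):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(values)
    ranks = np.empty(values.size)
    ranks[order] = np.arange(1, values.size + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc(truth, scores)
```

The rank-sum identity avoids sweeping thresholds explicitly; it assumes no tied scores, which holds for continuous causality scores such as the \u03b5 values in the text.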
+{"text": "Feature selection, aiming to identify a subset of features among a possibly large set of features that are relevant for predicting a response, is an important preprocessing step in machine learning. In gene expression studies this is not a trivial task for several reasons, including potential temporal character of data. However, most feature selection approaches developed for microarray data cannot handle multivariate temporal data without previous data flattening, which results in loss of temporal information.We propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without previous data flattening. In the proposed approach we compute relevance of a gene by averaging F-statistic values calculated across individual time steps, and we compute redundancy between genes by using a dynamical time warping approach.The proposed method is evaluated on three temporal gene expression datasets from human viral challenge studies. Obtained results show that the proposed method outperforms alternatives widely used in gene expression studies. In particular, the proposed method achieved improvement in accuracy in 34 out of 54 experiments, while the other methods outperformed it in no more than 4 experiments.We developed a filter-based feature selection method for temporal gene expression data based on maximum relevance and minimum redundancy criteria. The proposed method incorporates temporal information by combining relevance, which is calculated as an average F-statistic value across different time steps, with redundancy, which is calculated by employing dynamical time warping approach. As evident in our experiments, incorporating the temporal information into the feature selection process leads to selection of more discriminative features.The online version of this article (doi:10.1186/s12859-016-1423-9) contains supplementary material, which is available to authorized users. 
Feature selection approaches can be roughly categorized into filter-based methods, wrapper methods, and embedded methods.A challenge in gene expression studies is the identification of discriminative genes, which may later be used as predictors (inputs) to classification models. Removing irrelevant features may lead to improved accuracy and increased interpretability of the classification model. However, this task is challenging, especially when the data have temporal characteristics. Various feature selection approaches have been developed for microarray data. However, most of them cannot handle multivariate temporal data directly.Several feature selection approaches for temporal data have been proposed recently. For instance, one approach uses an L2,1-norm penalty for feature selection, thus ensuring that the regression models at different time points share a common set of features. This method removes redundant features by reducing their weights (coefficients) to zero, but the approach belongs to the embedded feature selection methods rather than the filter-type methods.A special group of filter-based feature selection approaches tends to simultaneously select highly predictive but uncorrelated features. An example is the Maximum Relevance Minimum Redundancy (mRMR) algorithm developed for feature selection on microarray data.In this paper, we propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without data flattening. We preserve the idea of the maximum relevance and minimum redundancy criteria, but we compute them in a way that incorporates the temporal dimension of the data.The mRMR is a feature selection approach that tends to select features with a high correlation with the class (output) and a low correlation between themselves. For continuous features, the F-statistic can be used to calculate correlation with the class (relevance) and the Pearson correlation coefficient can be used to calculate correlation between features (redundancy). 
Thereafter, features are selected one by one by applying a greedy search to maximize the objective function, which is a function of relevance and redundancy. Two commonly used forms of the objective function are MID and MIQ, representing the difference and the quotient of relevance and redundancy, respectively. For temporal data, the mRMR feature selection approach requires a preprocessing step that flattens the temporal data into a single matrix in advance. This may result in a loss of possibly important temporal information. A common way of data flattening, used as a preprocessing step for mRMR, is depicted in the figure.In this study, we preserve the idea of the mRMR algorithm of maximizing an objective function that includes relevance and redundancy, but we adapt it to handle multivariate temporal data without flattening. Let the data consist of N individuals with G observed genes, each measured at T time steps, and let the expression of the jth gene be represented by an N\u00d7T matrix of gene expression data. We represent the relevance of a gene gj by calculating the F-statistic at each time step and then combining these values with an appropriate aggregation operator. A number of aggregation operators may be applicable here, such as the median, arithmetic mean, geometric mean, maximum, or even an approach that combines aggregation operators by using min-max normalization. All methods were implemented in MATLAB.The proposed TMRMR-C and TMRMR-M feature selection approaches were compared with four baseline feature selection algorithms according to the evaluation procedure described in the previous section. By using the 5-fold cross validation procedure, the accuracy of the KNN, NB and SVM classifiers was calculated for the top m\u2208{1,10,20,30,40,50} genes. In addition, we calculated the average accuracy of the three classifiers over all datasets. 
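The two ingredients described above, a per-time-step F-statistic averaged over time and a DTW-based redundancy, can be sketched as follows. This is a simplified illustration, not the authors' MATLAB code: the greedy MID step and the 1/(1 + DTW) similarity transform applied to subject-averaged profiles are assumptions of the sketch.

```python
import numpy as np

def avg_f_statistic(gene, labels):
    """Relevance: one-way ANOVA F statistic per time step, averaged over steps.

    gene is an (N, T) expression matrix, labels an (N,) vector of class labels.
    """
    scores = []
    for t in range(gene.shape[1]):
        col = gene[:, t]
        groups = [col[labels == c] for c in np.unique(labels)]
        grand, k, n = col.mean(), len(groups), col.size
        between = sum(g.size * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
        within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
        scores.append(between / within)
    return float(np.mean(scores))

def dtw_distance(a, b):
    """Classic O(T^2) dynamic-time-warping distance between two profiles."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[len(a), len(b)]

def tmrmr_mid(genes, labels, k):
    """Greedy MID selection: relevance minus mean redundancy to chosen genes."""
    profiles = [g.mean(axis=0) for g in genes]
    rel = [avg_f_statistic(g, labels) for g in genes]
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        candidates = [j for j in range(len(genes)) if j not in selected]
        def mid(j):
            red = np.mean([1.0 / (1.0 + dtw_distance(profiles[j], profiles[s]))
                           for s in selected])
            return rel[j] - red
        selected.append(max(candidates, key=mid))
    return selected

# Tiny synthetic check: gene 0 separates the classes, gene 1 is a near copy
# of gene 0 (redundant), gene 2 is pure noise.
rng = np.random.default_rng(0)
labels = np.array([0] * 5 + [1] * 5)
base = np.linspace(0.0, 1.0, 4)
gene0 = np.vstack([base + rng.normal(0, 0.05, 4) for _ in range(5)]
                  + [base + 3 + rng.normal(0, 0.05, 4) for _ in range(5)])
gene1 = gene0 + rng.normal(0, 0.01, gene0.shape)
gene2 = rng.normal(size=(10, 4))
selected = tmrmr_mid([gene0, gene1, gene2], labels, k=2)
```

The arithmetic mean is used as the aggregation operator here; the text notes that the median, geometric mean, or maximum would be equally valid choices.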
This indicates that the proposed methods selected the most discriminative features.For each m value, we tested whether the proposed TMRMR-C approach (which outperformed TMRMR-M) statistically significantly outperforms the other methods. For this purpose, we applied Welch\u2019s t-test to the results given in the table (\u03b1=0.05).The figure shows the results for all values of m, for all classifiers, and for all datasets. It clearly shows that in most cases both the TMRMR-C and TMRMR-M approaches outperform the baseline methods for most values of m. It also shows that, among the four tested baseline feature selection approaches, the F-statistic outperformed the others in most cases, including mRMR. Since mRMR uses the F-statistic as a measure of relevance, we can conclude that the minimum redundancy condition, calculated as a Pearson correlation coefficient, hurts its performance. On the other hand, the proposed TMRMR-C and TMRMR-M methods achieved the highest accuracy by combining relevance, calculated as an average F-statistic value across different time steps, with redundancy, calculated by employing DTW, and thus succeeded in capturing additional information hidden in the temporal characteristics of the data.We also report results for T=3, T=5 and T=7 for all three datasets. We selected the following time points for evaluation purposes: the initial time point, the end time point, and equally distant time points between them. Due to space limitations, the table reports only a subset of T and m values. The TMRMR-M algorithm showed improvement in all but 3 cases, of which 2 occurred when the number of time points was set to 3 (T=3) and the remaining one occurred when the number of time points was set to 5 (T=5). This confirms that a limited number of time points negatively affects the DTW approach and consequently the TMRMR-M algorithm; nevertheless, the proposed method showed improvement in most cases when compared to the baseline approaches. 
This leads to the conclusion that in cases with a limited number of time points the TMRMR-C approach, which is computationally more expensive, might be more appropriate than the TMRMR-M approach.The accuracy of the DTW algorithm may degrade considerably when operating on expression profiles with too few data points, which is often the case in gene expression datasets. This may limit the applicability of the proposed TMRMR-C and TMRMR-M algorithms in such cases; for this reason, we analyzed how reducing the number of time points affects the performance of the proposed methods compared to the baseline approaches, repeating the same evaluation procedure with a reduced number of time points.We also performed gene ontology over-representation analysis to find gene ontology (GO) terms that are over-represented within the subset of selected genes. For this purpose we used annotations for the top 50 genes selected by the TMRMR-C algorithm from each of the three datasets used in this study, and the PANTHER tool (http://www.pantherdb.org/), which extracted significantly over-represented biological processes. For each of the three datasets, the top 5 GO terms are reported in the table, with p-values corrected based on the Bonferroni procedure.To compare the robustness of the methods, we considered three stability measures for the top 50 selected features (m=50): the Spearman\u2019s rank correlation coefficient, the Tanimoto distance (Tdist), and the number of features shared across all folds of the 5-fold cross validation procedure (Nshared), for T=3, T=5, T=7 and T=Tall, where Tall\u2208{16,14,21}. The figure clearly shows that, on average, the TMRMR-C feature selection method is the most stable one according to each of the three measures. The second most stable method is ReliefF, which appears to be more stable than the TMRMR-M algorithm, while the least stable method is mRMR. 
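The fold-stability measures mentioned above are easy to compute once the per-fold feature subsets are known. The subsets below are made-up placeholders; Tanimoto similarity and the all-fold intersection follow the usual set definitions (the Spearman measure, which operates on full rankings rather than sets, is omitted from this sketch):

```python
import numpy as np
from itertools import combinations

# Hypothetical feature subsets (gene indices) selected in each of 5 CV folds.
folds = [
    {1, 2, 3, 4, 5},
    {1, 2, 3, 4, 6},
    {1, 2, 3, 5, 7},
    {1, 2, 4, 5, 8},
    {1, 2, 3, 4, 5},
]

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two selected-feature sets."""
    return len(a & b) / len(a | b)

# Mean pairwise Tanimoto similarity across folds, and the number of features
# that every fold agrees on (Nshared in the text).
t_mean = float(np.mean([tanimoto(a, b) for a, b in combinations(folds, 2)]))
n_shared = len(set.intersection(*folds))
```

A stable selector yields high pairwise similarity and a large all-fold intersection even though each fold sees different training subjects.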
Since both the mRMR and the TMRMR-C algorithms are based on the maximum relevance and minimum redundancy criteria, we can conclude that combining relevance, calculated as an average F-statistic value across different time steps, with redundancy, calculated by employing DTW, significantly improves robustness for temporal data.In order to compare the robustness of the proposed TMRMR-C and TMRMR-M feature selection methods with the other baseline approaches used in this study, we calculated the Spearman\u2019s rank correlation coefficient between the feature rankings obtained across the cross-validation folds. The supplementary file contains (1) the ranked lists of the top 50 genes selected by the TMRMR-C approach for the H3N2, HRV and RSV datasets, respectively, and (2) error bars for the two groups, symptomatic and asymptomatic, for the top genes selected from the three datasets. (DOCX 240 kb)"}
+{"text": "Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating\u2014the most common chronometric technique in archaeological and palaeoenvironmental research\u2014creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEMWA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20\u201330% of the time, providing the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. This means time-series regression methods could be used to quantitatively test hypotheses about the impact of climate change on humans and other hominins, or conversely the impact of hominin societies on their environments. 
Time-series regression analysis is an important tool for testing hypotheses about human-environment interaction over the long term. The primary sources of information about human behaviour and environmental conditions in deep time are the archaeological and palaeoenvironmental records, respectively. These records contain observations with an inherent temporal ordering and are thus best treated as time-series. However, there is reason to think that chronological uncertainty may complicate the use of such methods. In particular, the chronological uncertainty associated with the most common chronometric method used in the dating of both records\u2014radiocarbon dating\u2014could undermine our ability to confidently identify statistical relationships between the records. This is because calibrated radiocarbon dates have highly irregular uncertainties associated with them, and uncertainties of this type are not in line with the assumptions of many standard statistical methods, including time-series analysis.Time-series data have to be analyzed carefully because the order in the sequence of observations matters. There are two traits a time-series can have that make temporal ordering important. One is non-stationarity, which describes time-series with statistical properties that vary through time\u2014e.g., the mean or variance of the series might change from one time to the next, violating the common statistical assumption that observations are identically distributed. The other is autocorrelation, which means that the observations in the series correlate with themselves at a given lag. Archaeological and palaeoenvironmental time-series typically have both traits, so they need to be analyzed with methods that can handle non-stationarity and autocorrelation.Fortunately, such methods already exist because statisticians, mathematicians, and engineers have been working with non-stationary, autocorrelated time-series for a long time. What these methods were not designed to handle, however, is chronological uncertainty. 
Contemporary time-series, such as stock prices or daily temperatures, are usually recorded at precisely known times, but looking into the deep past entails significant chronological uncertainty. Archaeologists and palaeoenvironmental scientists usually make chronometric estimations by proxy using radiometric methods that rely on measuring isotopes of unstable elements that decay at a constant rate.The most common chronometric method, radiocarbon dating, is particularly problematic. Radiocarbon dates have to be calibrated to account for changes in isotope ratios through time. The calibration process results in chronometric errors that are often highly irregular, yielding ranges of potential dates spanning many decades or even centuries.In the study reported here, we explored the impact of chronological uncertainty on a time-series regression method called the Poisson Exponentially Weighted Moving Average (PEWMA) method.Like other state-space models, the PEWMA model has two main parts. The first is called the measurement equation. Brandt et al. define the measurement equations to represent the observed count data as outcomes of a sequence of Poisson random variables. Each observation, yt, is dependent on the unobserved mean of the Poisson process, \u03bct, at time t. The unobserved mean of the Poisson process, \u03bct, is, in turn, dependent on the mean at the previous time through a Gamma-distributed term with parameters at-1 and bt-1 corresponding to the shape and rate of the distribution, respectively. The unobserved mean at time t is also dependent on the regression term eXt\u03b4, where Xt is a matrix of covariates and \u03b4 is a vector of regression coefficients that is estimated from the data.The second part of the PEWMA state-space model is called the transition equation. 
The transition equations defined by Brandt et al. characterize the change in the unobserved mean through time. The first of these defines the mean at a given time and has three terms. One of them, \u03b7t, describes the stochastic shift in the mean from one time to the next. This term is Beta distributed, denoted in the second equation by \u03b2. It is defined by the two standard Beta parameters and a weight, \u03c9, that discounts earlier observations exponentially\u2014hence the \u201cExponentially Weighted\u201d part of the PEWMA acronym. The \u03c9 parameter accounts for autocorrelation in the PEWMA model and is estimated from the data. The parameters that appear in the Gamma and Beta distributions are also estimated from the data; Brandt et al. calculate a and b with a maximum likelihood approach. Online R scripts for estimating PEWMA models have been provided by Brandt et al. (www.utdallas.edu/~pbrandt/pests/pests.htm).In a previous study, we applied the PEWMA method to several palaeoclimate reconstructions, including reconstructions for the summer and winter seasons in the Cariaco Basin, with a Classic Maya historical record as the dependent variable.As the foregoing study suggests, the PEWMA method has the potential to improve our understanding of past human-environment interaction. However, given the ubiquity of chronological uncertainty in archaeological and palaeoenvironmental time-series, there is a need to better understand how chronological uncertainty affects the method\u2014especially radiocarbon dating uncertainty, which is highly irregular, as we explained earlier.To explore the effect of chronological uncertainty on the PEWMA method, we carried out a series of simulation experiments. The experiments involved creating thousands of pairs of artificial palaeoclimatic and archaeological time-series with known relationships and then testing for those relationships with the PEWMA method. The regressions were set up with the synthetic archaeological time-series as the dependent variable and the synthetic palaeoenvironmental time-series as the independent variable. 
We used error-free dates for the artificial archaeological time-series so that we could limit the sources of error and see the effects more clearly. This analytical control also had the benefit of allowing us to compare the simulation results to our previous work on the Classic Maya, because the dependent variable in that study was a historical record with little chronological uncertainty.The PEWMA algorithm relies on an observe-then-predict mechanism, which, as the phrase suggests, involves first observing some data and then making a prediction based on that observation. It filters through a given count series one observation at a time, updating its predictions for the next time based on previous observations. It can account for autocorrelation in the count data by discounting the information from older observations as it filters through the series. More discounting implies less autocorrelation in the observed data because older values in the series have a lower impact on subsequent values. The algorithm can also be fed covariates to see whether they improve its predictions of the time-series of interest. To estimate the statistical parameters for a model, the algorithm uses maximum likelihood, which means we can use Akaike\u2019s Information Criterion (AIC), a measure of information loss, to estimate the goodness of fit of a given model. We used the small sample size-corrected version, AICc\u2009=\u2009\u22122L\u2009+\u20092k\u2009+\u20092k(k\u2009+\u20091)/(N\u2009\u2212\u2009k\u2009\u2212\u20091), where L is the log-likelihood of the model, k is the number of covariates, and N is the number of observations in the time-series. This correction is generally appropriate for archaeological research given the small numbers of observations typical of archaeological time-series.Using the R statistical programming language, we ran the models for each experiment. Each top-level pair was subjected to a chronological bootstrap\u2014i.e., random sampling of the radiocarbon date distributions used to date the synthetic palaeoenvironmental time-series\u2014which resulted in 2000 sub-pairs of time-series. 
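The small-sample AIC correction described above is straightforward to compute. The log-likelihood values below are made up purely to illustrate the model-vs-benchmark comparison used later in the experiments:

```python
def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AICc = -2L + 2k + 2k(k+1)/(n - k - 1).

    Following the text, k counts covariates, so a constant-only model has k = 0.
    """
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical comparison of a covariate model against its constant-only
# benchmark (the log-likelihood values are placeholders, not simulation output).
n_obs = 200
aic_covariate = aicc(log_lik=-310.0, k=1, n=n_obs)
aic_benchmark = aicc(log_lik=-325.0, k=0, n=n_obs)
hit = aic_covariate < aic_benchmark
```

Because the correction term grows as n approaches k + 1, AICc penalizes extra covariates more heavily in short series, which suits the small samples typical of archaeological data.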
Each sub-pair differed from the others only because different dates were used to create their age-depth models.In the simulations, we aimed to determine how calibrated radiocarbon date uncertainty affects the PEWMA model. Specifically, we sought to investigate the impact of radiocarbon date uncertainty on the PEWMA method when it is used to identify correlations between a calendrically-dated archaeological time-series and a radiocarbon-dated palaeoenvironmental time-series. To do so, we ran a massive simulation. The simulation was broken down into experiments. Each experiment involved a set of fixed parameters that were the same for every experiment and a set of variable, or free, parameters that we wanted to investigate. Within each experiment, 1000 pairs of synthetic time-series were analyzed using the PEWMA algorithm. We refer to these as the top-level pairs.The synthetic palaeoenvironmental series included an autocorrelated noise component, which caused the linear signal to increase and decrease in a nonlinear fashion, mirroring the kind of variation commonly seen in palaeoenvironmental time-series. In each experiment, we controlled the amount of noise by tuning the standard deviation of the arima.sim function. The standard deviation could vary freely among three values, namely 1, 0.1, and 0.01. Increasing the standard deviation increased the level of noise, thereby decreasing the signal-to-noise ratio of the synthetic palaeoenvironmental observations\u2014i.e., the variance of the autocorrelated noise increased relative to the variance of the signal. We then dated the observations by selecting radiocarbon dates from the INTCAL-13 calibration curve from 12000\u201313000 BP, a process we refer to as back-calibration. These back-calibrated dates became the synthetic radiocarbon assays for the time-series. They stood in for the uncalibrated radiocarbon measurements that we might receive from a dating lab in a real investigation. 
We then set the error of the simulated radiocarbon dates to a standard deviation of \u00b1 50 years, a fixed parameter corresponding to a common magnitude of error returned by dating labs. Setting these errors to a constant value was necessary to isolate the errors introduced by calibration\u2014i.e., the irregular uncertainties we were interested in.The experiments involved several steps. First, we created 1000 synthetic palaeoenvironmental time-series spanning a thousand-year period, from 12000 to 13000 calibrated years BP, a fixed parameter of the experiments. This slice of the calibration curve was chosen because it has a moderate amount of chronological uncertainty relative to older and younger periods, meaning our results should be relevant to a wide range of archaeological research. We created the observations in each series using a linear function with a slope of 0.01, also a fixed parameter. This function was chosen to simulate an environmental process that increased gently over the 1000-year period of the series\u2014i.e., a synthetic environmental signal. We then added autocorrelated random error with a fixed autocorrelation of 0.7, creating noise in the synthetic environmental signal. The autocorrelated noise was generated using the R function arima.sim.In the second step, we created 1000 synthetic archaeological time-series using a PEWMA filter in reverse, implemented with Pests, the R software package written by the developer of the PEWMA method. Instead of iterating over an existing count time-series to estimate its statistical parameters, the algorithm can produce a time-series when fed a set of parameters. So, to simulate an archaeological process that was affected by environmental conditions, we fed in each of the synthetic environmental series created in the previous step. 
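In Python, the first-step series construction (linear trend plus AR(1) noise, then regular sampling) might look like the following sketch; arima.sim is the R function named in the text, and the explicit AR(1) loop here is a stand-in for it. The noise standard deviation of 0.1 is one of the three levels listed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_environment(n=1000, slope=0.01, ar=0.7, noise_sd=0.1):
    """Linear signal plus AR(1) noise; a stand-in for R's arima.sim."""
    signal = slope * np.arange(n)          # gently increasing trend
    noise = np.zeros(n)
    shocks = rng.normal(scale=noise_sd, size=n)
    for t in range(1, n):
        noise[t] = ar * noise[t - 1] + shocks[t]
    return signal + noise

series = synthetic_environment()
# Sample the 1000-point series at 200 regularly spaced points, as in the text.
sampled = series[::5]
```

Tuning noise_sd up or down changes the signal-to-noise ratio exactly as the experiments require, since the trend's variance is fixed by the slope.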
To do that, we sampled each 1000-year environmental series 200 times at regularly spaced intervals and used the samples as covariates in the creation of 1000 PEWMA count time-series, creating 1000 time-series pairs. By tuning the parameters of the PEWMA filter, we set the strength of the correlation between the series in each pair, one of the free parameters of the experiments. In the third step, we created 2000 age models for each of the 1000 synthetic environmental series. The no-covariate models described at the end of this section acted as benchmarks for identifying statistically significant results. We reasoned that if the AIC of a given model with an environmental covariate outperformed its benchmark, the PEWMA algorithm had successfully identified the underlying correlation\u2014or, in the case of no underlying correlation, erroneously identified one. For each of the 1000 synthetic archaeological series, we had 2000 PEWMA results, which meant we could calculate the percentage of the analyses that yielded a positive result\u2014i.e., the hit rate. We then tallied these percentages to create a distribution of hit rates for each experiment. In the last step of each experiment, we used the PEWMA method to create regression models with the synthetic archaeological time-series as dependent variables. For each archaeological time-series, we created 2000 PEWMA models. In each model, a given archaeological series was compared to one of the 2000 environmental series from its partner bootstrap ensemble. Since each of the 1000 archaeological time-series was paired to an ensemble of 2000 bootstrapped environmental time-series, we ran a total of 2,000,000 PEWMA analyses for each experiment. In each analysis, a given synthetic environmental time-series was used as a covariate for predicting its partner archaeological time-series. To determine whether including the environmental series improved a given model, we created another PEWMA model for each archaeological series that included only a constant and no covariate. 
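The scoring rule described above, counting a bootstrap iteration as a hit when the covariate model's AIC beats its constant-only benchmark, can be sketched in a few lines. This is a Python stand-in for the paper's R pipeline, and the function name is ours.

```python
def hit_rate(aic_with_covariate, aic_benchmark):
    """Fraction of bootstrap iterations in which the model with an
    environmental covariate achieved a strictly lower AIC than the
    constant-only benchmark model (a 'positive result')."""
    hits = [a < b for a, b in zip(aic_with_covariate, aic_benchmark)]
    return sum(hits) / len(hits)

# e.g., four bootstrap sub-pairs for one top-level pair (toy numbers)
print(hit_rate([100.2, 98.7, 101.5, 97.0], [100.0, 100.0, 100.0, 100.0]))  # 0.5
```

In the simulation this is computed over 2000 sub-pairs per top-level pair, and the 1000 resulting fractions form an experiment's hit-rate distribution.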
The models with no environmental covariate acted as the benchmarks. Permuting all possible values for the free parameters yielded 36 experiments, the results of which are shown in the accompanying figures. Another unsurprising result involves the SNR. Holding the other parameters constant, we found that increasing the SNR from 10 to 100 generally improved the hit rate. When the SNR was 100, the PEWMA analysis was able to correctly identify the underlying correlation more than 80\u201390% of the time in experiments with correlations of 0.5 or 0.75. Dropping the SNR to 10, though, reduced the hit rates. For the strongest correlation we explored\u20140.75\u2014an SNR of 10 reduced the hit rate from greater than 80% to between 30% and 60%. For the lower correlation values, the hit rate was similarly reduced, but the distribution was also spread out across a greater range of values, indicating more variability in the hit rate as the SNR decreased. This finding makes sense since the climate data would be noisier, leading to a less clear relationship between the synthetic environmental series and the synthetic archaeological series. Lowering the SNR further to 1 yielded what is, on the face of it, a counterintuitive result\u2014the hit rate improved somewhat. For example, in experiments where the correlation was 0.75, reducing the SNR to 1 increased the mode of the hit rate distribution to more than 80%. This seems to suggest that noisier environmental data made it easier to identify an underlying correlation. However, the effect was caused by the fact that the autocorrelated noise we added to the main climate signal was included in the creation of the synthetic archaeological count data. So, increased environmental noise translated into increased noise in the archaeological data, too. Thus, when the correlation of a given experiment was strong, the increased variance of the environmental data resulted in higher overall co-variance of both time-series\u2014both were noisy but strongly correlated. 
Consequently, the primary mode of the hit rate distribution shifted upward. Still, the hit rate distributions generally show higher variance as the SNR decreases, even in experiments with high correlations, which is more in line with the expectation that more noise should make it harder to see underlying relationships. In addition, a second mode appeared in the experiments with SNRs of 1 and correlations of 0.5 or 0.75. That smaller secondary mode in the hit rate distributions was much lower, around 10% or less. It indicates that the chances of failing to see the underlying correlation increased with very low SNR values, even in experiments with high correlations. Consequently, the overall effect of SNR values on the simulation was as expected, namely that more noise reduced the power of the method. One surprising result involves the false positive rate of the PEWMA method. By setting the correlation of some experiments to zero, we were able to determine how often random variation resulted in spurious correlations. Overall, the modes of the hit rate distributions hovered around 10%, irrespective of the experimental parameters. Thus, the most common false positive hit rate for the PEWMA method appears to be around 10%. This false positive rate was lower than expected. Given the impact of radiocarbon dating uncertainty on other time-series methods we have explored, we were expecting a much higher false-positive rate. The last result is also surprising. It involves the number of radiocarbon dates. Surprisingly, increasing the number of radiocarbon dates used to date the time-series above five had little effect on the experimental hit rates compared to the other variables. Irrespective of the correlation and signal-to-noise ratios, the distributions of hit rates were almost identical whether the series were dated with five, 15, or 25 synthetic radiocarbon dates. 
So, from these results it appears that increasing the number of radiocarbon dates above five is unlikely to affect the accuracy of a PEWMA regression analysis even when using a bootstrap to account for dating uncertainty. This is surprising given our previous experience with radiocarbon dating uncertainty and its negative impact on time-series analyses. In a previous study, we determined that chronological uncertainty can substantially distort time-series analyses. Our simulation experiments yielded three main findings regarding the impact of radiocarbon date uncertainty on the PEWMA method when it is used to identify correlations between a count-based archaeological time-series and a radiocarbon-dated palaeoenvironmental time-series: 1) the method\u2019s true-positive rate ranges from 20\u201390%, with the most realistic rates being between 30 and 50%; 2) the method\u2019s false-positive error rate is approximately 10%; and 3) increasing the number of radiocarbon dates used to date the time-series above five had no noticeable effect on the true- or false-positive rates. Specificity is a statistical term describing the rate of true-negative findings. A high specificity is ultimately the most important trait when investigating long-term human-environment interaction because spurious correlations abound in the real world and filtering out unlikely hypotheses is an important part of scientific research. On the other hand, the wide range of true-positive findings implies that we might miss important correlations because of chronological uncertainty, especially when the climate data are very noisy or the underlying correlation is weak. 
This is clearly a problem that should be addressed with more methodological work, but for now the PEWMA method appears to be a good tool for testing hypotheses involving correlations between palaeoenvironmental records and archaeological count data. Taken together, the first two findings\u2014a low false-positive rate and a moderate-to-high true-positive rate\u2014indicate that the PEWMA method is suitable for research on past human-environment interaction. A low false-positive rate means we are reasonably unlikely to be fooled into thinking correlations exist when they do not\u2014i.e., the method has a high specificity. Recall that the 1000 pairs of synthetic time-series in each experiment were the top-level pairs. Each top-level pair was subjected to a chronological bootstrap, which resulted in 2000 sub-pairs of time-series. Each sub-pair only differed from the others because different chronological anchors\u2014i.e., dates sampled from calibrated radiocarbon date distributions\u2014were used to create their age-depth models. So, if chronological uncertainty was irrelevant, we would expect the PEWMA analysis results to have been identical between sub-pairs. That is, we would expect that the PEWMA method would either succeed or fail 100% of the time for a given top-level pair because the sub-pairs only differed due to chronological uncertainty. What we saw instead was that each top-level result was a fraction ranging from zero to one, indicating the percentage of the 2000 sub-pairs for which the PEWMA method was able to identify the underlying correlation. Therefore, we can be sure that chronological uncertainty had an effect, which means that another explanation is required. The third finding\u2014that increasing the number of radiocarbon dates above five had no effect on the simulation results\u2014is counterintuitive, though, and requires further thought. We initially expected that including more dates would markedly improve the true-positive rate and decrease the false-positive rate. That did not happen. 
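A single iteration of the chronological bootstrap described here, drawing one calendar-age anchor per dated depth from its calibrated-date distribution and building an age-depth model between the anchors, might look like the following sketch. The paper's models were built in R (S1 File); this Python stand-in assumes simple linear interpolation, and all names are ours.

```python
import numpy as np

def bootstrap_age_model(depths, calibrated_samples, rng):
    """One chronological-bootstrap iteration (sketch): pick one calendar
    age per dated depth from its calibrated-date distribution, then
    linearly interpolate an age for every depth in between."""
    anchors = [float(rng.choice(s)) for s in calibrated_samples]
    all_depths = np.arange(depths[0], depths[-1] + 1)
    return np.interp(all_depths, depths, anchors)

rng = np.random.default_rng(1)
# two dated depths; each inner list stands in for a calibrated-date distribution
ages = bootstrap_age_model([0, 10], [[12950, 12960], [12300, 12310]], rng)
print(len(ages))  # 11 modelled ages, one per depth
```

Repeating this 2000 times yields the ensemble of sub-pair chronologies that the hit-rate fractions are computed over.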
One possible explanation for the counterintuitive relationship between dates and true-positive rates is that chronological uncertainty is not relevant at all because using more dates seemed to have no impact on the results. This possibility, however, can be dismissed by looking at the results of a single bootstrap iteration. Recall that the simulation was broken down into experiments. Each experiment involved a combination of simulation parameters that was constant throughout a given experiment. Within each experiment, 1000 pairs of synthetic time-series were analyzed using the PEWMA algorithm\u2014the top-level pairs. As explained above, the results varied among the bootstrap sub-pairs of each top-level pair, which could only have happened if chronological uncertainty had an effect. A more likely explanation, we think, is that chronological uncertainty has an effect, but it is not as important as the other variables, namely the signal-to-noise ratio and the strength of the underlying correlation. So, large differences in the signal-to-noise ratio and the strength of the underlying correlation will mask the effect of chronological uncertainty to some degree. Consequently, had we included chronological uncertainty in the archaeological time-series as well as the palaeoenvironmental time-series, we might have seen a greater effect. To some extent, therefore, these results should be considered relatively liberal, since archaeological time-series generally do contain chronological uncertainty. In a similar vein, had we used an older portion of the calibration curve or wider radiocarbon dating errors for the individual dates, we would expect the utility of the model to decrease. Still, since the effect we see in the simulation results is small, similar amounts of chronological uncertainty in the archaeological time-series, or small differences in other chronological uncertainties, should only slightly decrease the true-positive rate of the PEWMA method. These findings have implications for our previous research on climate change and Classic Maya conflict. 
To appreciate the implications of our simulation results more generally, we can think in terms of conducting blind analyses\u2014i.e., real studies with no prior information about the existence, or non-existence, of an underlying relationship between human and environmental conditions. Imagine setting out to conduct a real analysis with the PEWMA method. Our simulation suggests that having at least five to 10 radiocarbon dates per 1000 years for a given palaeoenvironmental series is sufficient as long as those dates are spread fairly evenly throughout the series. Spending resources on more dates would likely make little difference in the results. This means, for instance, that most of the palaeoenvironmental time-series that are readily available online have sufficient numbers of radiocarbon dates to create reliable PEWMA models. The largest, and most popular, online source for palaeoenvironmental time-series is the NOAA website (www.noaa.gov). Perusal of their catalogue revealed that many of the time-series they curate come with more than five radiocarbon dates. Consequently, our hypothetical analysis could involve the existing palaeoenvironmental data, and if we needed to gather a new dataset our chronometric costs would be low. We could also be confident that our PEWMA analysis would be able to identify an important relationship if it existed, at least much of the time. Correlations with coefficients of 0.25 or greater were recoverable at least 20% of the time, and correlations of 0.5 or greater were recoverable upwards of 90% of the time. Thus, failing to find a relationship could suggest that there was no important relationship to find. If we hypothesized that rainfall variation, for instance, was strongly correlated to the rise and fall of Classic Maya socio-political complexity, then the PEWMA method should be able to identify such a relationship given a proxy time-series for past rainfall and one for socio-political complexity. 
If it failed to identify a relationship, one possible reason is that the correlation is quite low, at least according to our simulation results. Failing to find such a correlation, then, might simply indicate that the underlying relationship is not very important, falsifying the hypothesis that a strong relationship existed. On the other hand, for low to moderate correlations the method could miss a true relationship 50% of the time or more. A simple way to overcome this problem would be to test the hypothesis with additional time-series since that would increase the chances of finding a true-positive correlation. Therefore, with some replication we could be fairly confident in our findings. It is important to keep in mind, though, that our simulations also imply that one in ten positive results might be spurious. There are at least two obvious ways to control for false positive findings. One is to use a more stringent test for statistical significance. Since the PEWMA method we used relies on comparing AICs to determine when a significant relationship has been identified, we could change the baseline for significance from identifying AICs that are strictly lower than a benchmark AIC to a baseline that requires AICs to be lower by some predetermined amount, giving a confidence buffer of sorts. This is what we did in our previous analysis on climate change and Classic Maya conflict. Overall, our results indicate that the PEWMA method is a promising time-series analysis tool for archaeological and palaeoenvironmental research. The method is suitable for analysing any archaeological count time-series, which potentially includes a wide range of archaeological proxies for past human behaviour, and it performs well even with relatively few radiocarbon dates\u2014only five dates for a time-series 1000 years long. 
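The confidence-buffer rule mentioned above, requiring the covariate model's AIC to beat the benchmark by a predetermined margin rather than merely being lower, can be sketched as follows. The margin of 2.0 is our illustrative choice, not a value taken from the paper.

```python
def significant(aic_model, aic_benchmark, buffer=2.0):
    """Stricter significance rule (sketch): the covariate model must
    beat the constant-only benchmark's AIC by at least `buffer`."""
    return aic_model < aic_benchmark - buffer

print(significant(97.5, 100.0))  # True: beats the benchmark by more than 2
print(significant(99.0, 100.0))  # False: lower, but within the buffer
```

Widening the buffer trades some true positives for a lower false-positive rate, which is the control the text describes.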
Therefore, we can make use of many of the published palaeoenvironmental time-series readily available online and maintain low chronometric costs when gathering new data. The method can also reliably find moderate to strong correlations between archaeological and palaeoenvironmental time-series when the latter have a strong signal. It should also be noted that leads and lags in a putative human-environment relationship could be tested for in the usual way\u2014i.e., by including lagged versions of covariate time-series in the model. Thus, we think that the PEWMA method has the potential to contribute substantially to research on past human-environment interaction. There is one very important caveat to keep in mind, which is that the results yielded by applications of the PEWMA method to archaeological time-series are assumption dependent. Like most statistical techniques, the PEWMA model was created with a specific class of problems in mind and therefore makes certain assumptions about the data. While it appears to be fairly robust to chronological uncertainty, it is best suited to cases where the count-based archaeological data represent a past process that 1) contained autocorrelation; 2) had temporal persistence that can be characterized adequately by exponential decay\u2014e.g., the influence of past conflict levels or population sizes diminished exponentially with the passage of time; and 3) was subjected to temporally persistent effects from covariates. The last of these traits is particularly important because the PEWMA model assumes a given process was the product of its past states, which includes the previous impacts of any relevant covariates. So, the effect of covariates persists through time. If, in contrast, a process is suspected to have had covariates with only instantaneous impacts at any given time, then a PEWMA model may not be appropriate. It is, therefore, important to be aware of what one is attempting to model before using the PEWMA method. 
It would be wise to use the diagnostics outlined in Brandt et al. (2000) to determine whether a PEWMA model is suited to a given problem and dataset. It might also be useful to compare the PEWMA model results to other models, perhaps using AIC. There are at least three avenues to explore in future research. One involves looking at the effect of calibrated radiocarbon date uncertainty on the dependent\u2014i.e., response\u2014variable. We chose to focus on chronological uncertainty in the palaeoenvironmental data in order to limit the sources of error in the simulation and see the effects of chronological uncertainty as clearly as possible. However, most archaeological time-series will likely contain chronological uncertainty, usually from radiocarbon dating. While we suspect the effect of additional radiocarbon dating uncertainty in the response time-series to be small\u2014since the overall effect of chronological uncertainty appears to be small\u2014it would still be prudent to investigate it further. Future research should involve simulations that look at how the PEWMA method performs when both the response and predictor time-series are dated with radiocarbon. The second avenue for future research involves estimating the magnitude of an underlying correlation in the presence of chronological uncertainty. Our experiment involved determining whether we could identify an underlying correlation. An obvious parameter to explore, therefore, was the strength of that correlation, which we varied between experiments. The bootstrap simulations resulted in a range of estimates of the magnitude of correlations between the synthetic archaeological and palaeoenvironmental series. At present, combining those estimates is an ad hoc exercise; in the future, we need to determine how best to combine the estimates while ensuring that the confidence intervals are calculated correctly. This research will require statistical development and further simulation work. 
Clearly, it would be useful to use the bootstrap estimates to produce a single estimate for the underlying magnitude. That magnitude would indicate how important a given covariate was relative to other covariates, and it would also allow us to estimate effect sizes\u2014i.e., the size of the impact that a given covariate had on the dependent archaeological time-series. However, combining the magnitude estimates from the chronological bootstrap is not straightforward and would have been an ad hoc exercise here. Lastly, it would be helpful to explore the impact of changing temporal scales on the PEWMA method. In the study reported here, we effectively used an annual resolution for the time-series, but often archaeological and palaeoenvironmental data have different resolutions. Many modern palaeoenvironmental records boast annual resolutions, for example, while most archaeological time-series will have much coarser resolutions. Consequently, we have to change the resolution of one or both time-series in order to perform analyses. Future research, therefore, should explore the effect of changing the resolutions of the independent and dependent time-series to match each other. Exploring these potential research avenues would help us to determine the limits of the PEWMA method, a method with considerable potential to deepen our insights into past human-environment interaction. Time-series analysis has considerable potential to improve our understanding of past human-environment interaction. However, there is reason to think that its application could be undermined by the widespread reliance on calibrated radiocarbon dates for age-depth models. Calibrated radiocarbon dates have highly irregular uncertainties, as we mentioned earlier. These highly irregular uncertainties potentially pose a significant problem because they undermine the assumptions of standard statistical methods. 
With this in mind, we conducted a large simulation study in which we explored the effect of calibrated radiocarbon date uncertainty on a potentially useful Poisson regression-based method for time-series regression, called PEWMA. To test the effect of calibrated radiocarbon date error on the PEWMA method, we simulated thousands of archaeological and palaeoenvironmental time-series with known correlations and then analysed them with the PEWMA algorithm. Our simulation experiments yielded three important findings. One is that the PEWMA method was able to identify true underlying correlations between the synthetic time-series much of the time. The true-positive rate for the method ranged from 20\u201390%, with higher true-positive rates when the synthetic environmental series contained less noise and the correlation between the time-series was stronger. Under the most realistic conditions, with moderate noise levels and correlation strengths, the true positive rate was around 30\u201350%. Decreasing the noise levels and increasing the correlation coefficients to 0.5 or 0.75 led to true positive rates upwards of 90%. While it is not surprising that stronger correlations in less-noisy data were easier to identify, it is important to be aware that the method might miss low correlation relationships. The second important finding is that the false positive error rate of the method is roughly 10%, on average. This is surprising because we were expecting the highly irregular chronological errors of radiocarbon dates to warp the time-series in ways that could cause many spurious correlations and therefore a high false positive rate. Instead, the 10% false-positive rate suggests that finding spurious correlations is actually unlikely\u2014in the context of archaeological research at any rate. The third, and perhaps most surprising, finding was that varying the number of radiocarbon dates used to date the time-series had no noticeable effect. 
The true-positive rates were largely consistent whether five, 10, or 15 radiocarbon dates were used. This was surprising because it seems like adding more dates should reduce chronological uncertainty by increasing the number of chronological anchors for the age-depth models. Thus, we expected that more dates would improve our ability to find underlying correlations. That increasing the number of dates above five had no substantial impact on the true- or false-positive rates indicates that the PEWMA method is fairly robust to chronological uncertainty. Taken together, our findings indicate that the PEWMA method is a useful quantitative tool for testing hypotheses about past human-environment dynamics. It can be used to determine whether an underlying correlation exists between a calendrically-dated archaeological time-series and a radiocarbon-dated palaeoenvironmental time-series. Crucially, it has a low false-positive rate, a moderate-to-high true-positive rate, and it appears to be fairly robust to chronological uncertainty. Methods with these traits are essential for analyzing archaeological and palaeoenvironmental time-series, which is a vital part of understanding past human-environment interaction. Supporting information: S1 File, R script for building age-depth models; S2 File, R script for calibrating radiocarbon dates; S3 File, R script containing low-level simulation functions; S4 File, R script with the top-level simulation function; S5 File, R script for running the simulation in parallel."}
+{"text": "The development of single-cell RNA sequencing has enabled profound discoveries in biology, ranging from the dissection of the composition of complex tissues to the identification of novel cell types and dynamics in some specialized cellular environments. However, the large-scale generation of single-cell RNA-seq (scRNA-seq) data collected at multiple time points remains a challenge for effectively measuring gene expression patterns in transcriptome analysis. We present an algorithm based on the Dynamic Time Warping score (DTWscore), combined with time-series data, that enables the detection of gene expression changes across scRNA-seq samples and the recovery of potential cell types from complex mixtures of multiple cell types. The DTWscore successfully classifies cells of different types using the most highly variable genes from time-series scRNA-seq data. The study was confined to methods that are implemented and available within the R framework. Sample datasets and R packages are available at https://github.com/xiaoxiaoxier/DTWscore. The online version of this article (doi:10.1186/s12859-017-1647-3) contains supplementary material, which is available to authorized users. Methodological advances provide transcriptomic information on dozens of individual cells in a single-cell sequencing project. Some existing methods construct k\u2212nearest neighbor graphs without dimensional reduction. Critically, dimensional reduction does not make full use of the rich information provided by scRNA-seq time-series data. However, the existing methods may overlook un-synchronization over the entire time series. It is a challenging problem to identify the set of genes from distinct cells that are differentially expressed over time. 
Moreover, estimating at which time periods transcriptional heterogeneity among different cell types is present can provide additional insight into temporal gene functions. Among others, one key objective is to define the sets of genes that best discriminate transcriptional differences by inferring the heterogeneity of cells\u2019 unsynchronized evolution. In this article, we present an algorithm based on the Dynamic Time Warping score (DTWscore) that is designed to detect such heterogeneity. For single-cell RNA-seq data, gene expression levels at fixed time points are more easily obtainable than for traditional bulk RNA-seq data. Briefly, the DTWscore focuses on detecting cell-to-cell heterogeneity in time-series scRNA-seq data and highlights the highly divergent genes that are used to define potential cell types. The input of the DTWscore is a matrix of time-series gene expression data. The rows of the matrix stand for individual genes, and the columns represent the gene expression profiles of different cells from discrete time points. The method is performed on both simulated and real datasets. In particular, if a gene expression level between different time periods is quantified through the same process function, we consider genes of this type to show non-heterogeneity across cells, while the remaining genes are deemed highly variable genes between time-series data for further analysis. A graphical representation of our method pipeline is displayed in the pipeline figure. We used functions of time t to simulate gene expression patterns with four different \u2018biological processes\u2019 (see Methods for details). If the gene expression patterns are tracked during the unfolding of a biological process, the process can be conceived as specific functions over time. 
Four typical trajectories of gene expression were simulated: one curve represents condition 1, simulated by the biological functions f1(t) and f3(t), while the red curve represents condition 2, simulated by the biological functions f2(t) and f4(t). In the simulation, two groups of scRNA-seq data over time are generated as follows. Group one (non-heterogeneous genes): the gene expression matrix at multiple time points is generated by the same function; group two (highly variable genes): the expression values at different time points are generated by different functions. In this section, DTWscore is applied to the recently published data from human skeletal muscle myoblasts (HSMM), and we used the Mclust function from the R package mclust to perform model-based clustering. In order to assess the performance of DTWscore in relation to other approaches, we ran Monocle and SLICER on the HSMM data and compared the classification results from all three approaches. Monocle uses independent component analysis (ICA) to reduce the dimensionality of the expression data before clustering the cells. Monocle also provides algorithms for unsupervised cell clustering and semi-supervised cell clustering with known marker genes. The k-nearest neighbor graph shows the clustering using SLICER branching analysis. It appears that the SLICER branching analysis suggests that cells should fall on many different branches, which may be more than the real number of cell types. Obviously, SLICER is capable of detecting these types of features, but sometimes it will overfit. However, DTWscore is a model-based method for inferring potential cell types, which is more flexible for diverse datasets. Because SLICER can infer highly nonlinear trajectories and determine the location and number of branches and loops, the cells fall on more different branches. We stress that our method is different from approaches that detect cell clusters and expression differences, such as those described previously. The approach is limited in that only classification of cell types is feasible. 
A generalized DTW algorithm will make analyses of more than three to four cells over time possible; work in that direction is underway. Finally, we note that, while the differentially expressed genes identified by the DTWscore may prove useful in downstream analysis and in inferring cellular branches and trajectories, extensions in this direction are also underway. To date, a large amount of available high-throughput data has been measured at a single time point. Our analysis of scRNA-seq time-series gene expression datasets increased the ability to study various cellular mechanisms over time. First, in HSMM cells, we identified highly significantly differentially expressed genes with time-series data, indicating that the genes are marked for use in the following clustering. The expression of these genes possibly arose from the un-synchronized time-series scRNA-seq experiments. Second, given the various biological processes, the DTWscore for each gene was calculated using our pipeline. By combining the method to set thresholds, quantitative analysis has enabled the direct separation of heterogeneous and non-heterogeneous genes. The DTWscore can manage uneven and sparsely sampled time-series gene expression data without need for prior assumptions about the evenness or density of the time-series data. Moreover, all pairs of cells are scored by the DTWscore, a procedure that improves the stability of finding important highly variable genes. Finally, the DTWscore can successfully identify potential cell types from a collection of scRNA-seq data. Regarding computational future directions, recovering genes\u2019 heterogeneity over time in individual cells is only a first step in understanding the complex dynamic processes that drive changes in gene expression. 
Most scRNA-seq data sets consist of hundreds (and sometimes thousands) of cells; recent advances allow parallel sequencing of substantially larger numbers of cells in an effective manner, which brings additional challenges to the statistical analysis of scRNA-seq data sets. We expect that developing unified computational methods for time-series single-cell gene expression data will yield more biological insights. Inferring the potential types and states of individual cells is thus a useful tool for studying cell differentiation and a much wider array of biological processes. In the simulations, the coefficients and errors were drawn from normal distributions, ci\u223cN and \u03b5i\u223cN. For the actual values of t, we used a sequence of 314 values ranging from 0 to 10\u2217\u03c0. We considered 1000 genes of two types, highly variable genes and non-heterogeneous genes, and two cells from two time series. For each cell, we varied the interval between time points, and the corresponding DTWscore was calculated for each gene. We performed our pipeline on both synthetic time-series data and real temporal gene expression data downloaded from the Gene Expression Omnibus (GEO). The real time-series scRNA-seq data were obtained from GSE52529. The DTWscore pipeline contains four steps. The first step in single-cell RNA-seq analysis is identifying poor-quality libraries. Most single-cell workflows will include at least some libraries made from dead cells or empty wells in a plate. The expression level of each gene was represented by FPKM values. DTWscore keeps the genes expressed in at least 80 percent of the total cells in the data set. Genes that were not expressed in more than 80 percent of the total cells were excluded, leaving the remaining genes for further analysis. Consequently, thousands of genes could be reduced to hundreds for further analysis. Suppose each cell p provides np temporal gene expression values, and let xij denote the expression of gene i at the jth time point in the pth cell. 
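The dynamic time warping computation described next is provided in the paper by the R package dtw. A minimal Python sketch of the underlying dynamic-programming recurrence, assuming an absolute-difference local distance and our own function names, is:

```python
import numpy as np

def dtw_score(x, y):
    """Minimal DTW sketch: D[m, n] holds the cost of the best warping
    path aligning x[:m] with y[:n]; the local distance is |x_m - y_n|."""
    M, N = len(x), len(y)
    D = np.full((M + 1, N + 1), np.inf)
    D[0, 0] = 0.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            cost = abs(x[m - 1] - y[n - 1])        # local distance f(x_m, y_n)
            D[m, n] = cost + min(D[m - 1, n],      # step in x only
                                 D[m, n - 1],      # step in y only
                                 D[m - 1, n - 1])  # match both
    return D[M, N]

print(dtw_score([0, 1, 2], [0, 1, 2]))  # 0.0 for identical series
```

Identical expression trajectories score 0, so a large score flags a gene whose trajectories diverge between cells even after warping.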
Briefly, if the expression levels of some gene i are tracked during the unfolding of a biological process, the process can be conceived as tracing out a trajectory over time. We consider the temporal expression of a gene in two cells as two time series. A local cost f is defined between any pair of elements xm and xn with some type of distance; the most common choice is the Euclidean distance, although different definitions may be used. The DTWscore is calculated based on the FPKM gene expression levels. The dynamic time warping technique seeks a warping path \u03d5 that aligns the two series while minimising the accumulated cost along the path. The DTW algorithm makes use of dynamic programming and works by keeping track of the cost of the best path at each point in the grid; the computation can be carried out, for example, with the R package dtw. Consequently, a DTWscore Di for each gene is obtained. A model-based threshold to identify highly variable genes will change significantly among various types of datasets. As the variabilities are high in scRNA-seq time-series data, a fixed threshold for the DTWscore is less effective in many settings. Flexible thresholds for the DTWscore are necessary, allowing the test of variabilities in response to a numerically estimated trend in the predictors and alleviating the burden of specifying their distribution. We utilized the distribution model to identify the specific gene sets for further analysis. We briefly summarize the main insight: as noted above, the empirical distribution of the DTWscore from all time-course datasets follows a normal distribution (Fig. ). We can classify cells as follows. First, to cluster the cells, we choose the gene with the highest DTWscore and make full use of its expression values at all time points. The next procedure requires the R package mclust, which provides Gaussian finite mixture models fitted by the EM algorithm. Additional file 1: Figure S1. DTWscore identifies heterogeneous genes and non-heterogeneous genes from the synthetic data (condition 3). (PDF 122 kb) Additional file 2: Figure S2. 
DTWscore identifies heterogeneous genes and non-heterogeneous genes from the synthetic data (condition 4). (PDF 122 kb) Additional file 3: Figure S3. DTWscore identifies heterogeneous genes and non-heterogeneous genes from the synthetic data (condition 5). (PDF 122 kb) Additional file 4: Figure S4. DTWscore identifies heterogeneous genes and non-heterogeneous genes from the synthetic data (condition 6). (PDF 122 kb) Additional file 5: Figure S5. Model-based clustering of the HSMM dataset by any two highly variable genes. (PDF 40 kb) Additional file 6: Figure S6. Model-based clustering of the HSMM dataset by any three highly variable genes. (PDF 68 kb)"}
+{"text": "The analysis of microarray time series promises a deeper insight into the dynamics of the cellular response following stimulation. A common observation in this type of data is that some genes respond with quick, transient dynamics, while other genes change their expression slowly over time. The existing methods for detecting significant expression dynamics often fail when the expression dynamics show a large heterogeneity. Moreover, these methods often cannot cope with irregular and sparse measurements. The method proposed here is specifically designed for the analysis of perturbation responses. It combines different scores to capture fast and transient dynamics as well as slow expression changes, and performs well in the presence of low replicate numbers and irregular sampling times. The results are given in the form of tables including links to figures showing the expression dynamics of the respective transcript. These allow the user to quickly recognise the relevance of a detection, to identify possible false positives and to discriminate early and late changes in gene expression. An extension of the method allows the analysis of the expression dynamics of functional groups of genes, providing a quick overview of the cellular response. The performance of this package was tested on microarray data derived from lung cancer cells stimulated with epidermal growth factor (EGF). Here we describe a new, efficient method for the analysis of sparse and heterogeneous time course data with high detection sensitivity and transparency. It is implemented as the R package TTCA and can be installed from the Comprehensive R Archive Network, CRAN. The source code is provided with the Additional file. The online version of this article (doi:10.1186/s12859-016-1440-8) contains supplementary material, which is available to authorized users. Time course microarray experiments are frequently conducted to study the dynamics of gene expression at several consecutive time points. 
Associated data sets often require their own custom-made analysis strategies and cannot be adequately exploited with standard methods that were established to compare groups. The variability of the dynamics, spanning from fast and transient to slower, long-lasting changes, is a challenge for the analysis of time series microarray data. In perturbation experiments, the sampling frequency is often adapted to reflect the expected changes in gene expression. This kind of experimental design leads to irregularly sampled data sets. Irregular time sampling may also arise when time points are omitted after quality control, for instance when the respective arrays represent outliers with respect to the global trajectory resulting from principal component analysis (PCA), as shown in Fig. . The first methods applied to time course microarrays included SAM, ANOVA and Limma; these yield a p-value cluster, which impedes reproducibility of the results. EDGE was one of the first methods taking the time sequence into account. An alternative method using multivariate empirical Bayes statistics and the one-sample Hotelling T2 statistic is implemented in the R package timecourse. Methods based on Gaussian processes select differentially expressed genes from one-channel experiments and model the true underlying functions. Another class of time course methods is based on principal component analysis (PCA). Finally, even simple methods can yield good results for sparse data, for instance by computing distances or the area between curves. To sum up, most existing methods cannot reliably analyse sparse and irregularly sampled time course gene expression data sets. Further details and a method comparison are provided in the Additional files. 
A method overview is given in the Additional file. The method TTCA includes different scores to identify genes showing differential expression dynamics of various kinds: the dynamics score, the peak score, the integral score, the relevance score and the minimum overlap score. The minimum overlap score \u0398i is computed to identify gene ontology groups with maximal separation of the group-specific expression bandwidths between two conditions. A significance threshold and an effect size are calculated for each score, and the consensus score combines them for the final ranking. For the detection of differential gene expression based on two-channel microarray data, we recommend creating a constant gene expression profile as the control profile. This control profile might start with the expression value of the first time point, or could be set to the average expression value of the experimentally derived gene expression profile. The gene expression level is based on an assembled set of detected probes of 25 bp length. In this article, we focus on the expression dynamics of genes; however, those probe-level signals can also be mapped to related transcripts or other longer oligonucleotides. These can equally be analysed with TTCA. In the following section, preprocessing for microarray time series data is addressed. Next, all relevant scores and components of the proposed method are explained briefly. Microarray data are usually afflicted by batch effects, i.e. unwanted variability in the samples arising from their experimental, technical and digital processing history. Batch effects can be introduced when samples are processed in different hybridisation batches (maximum 6-12 samples at once), or when a subset of the samples experienced slightly different experimental conditions. Many batch effects can be technically detected and can be removed if enough replicates are available. Microarray time course data sets are frequently sparse and the number of replicates per time point is low. In such data it is impossible to detect batch effects. 
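The recommended constant control profile can be sketched like this (the function and argument names are ours, not TTCA's API):

```python
def constant_control(profile, use="first"):
    """Constant control profile for two-channel data, per the TTCA
    recommendation: either the first time point's value or the average
    of the measured expression profile."""
    level = profile[0] if use == "first" else sum(profile) / len(profile)
    return [level] * len(profile)

# e.g. a hypothetical treatment profile over five time points
treatment = [2.0, 3.5, 5.0, 4.0, 3.0]
control_first = constant_control(treatment)           # flat line at 2.0
control_mean = constant_control(treatment, use="mean")
```

Either variant yields a flat reference trajectory against which the scores below can quantify the treatment dynamics.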
We define a dynamics score in three steps based on the method EDGE and its extensions. The null hypothesis H0 is that the stimulus does not significantly alter the expression level of gene i. Thus, the measurements of the respective conditions (i.e. treatment vs. control) are derived from the same expression pattern and can be combined for a single function fit. The alternative hypothesis H1 is that the measurements are derived from different expression patterns, and that the two conditions have to be treated separately. Hence, the data is split into the two conditions, each time course is fitted to an individual function, and the residuals rij are obtained by minimising the fitting objective. The fit is based on quantile regression with the check function \u03c10.5 and is implemented in the R package quantreg. A function g is fitted to the measurements yj, j\u2208{1,\u2026,n}, taken at time points tj, with n measurements in total. The first term of the objective penalises the deviation between yj and the function g(tj). Microarrays are afflicted with a certain proportion of outliers, which is taken into account through a parameter \u03bb. We estimated \u03bb=0.6 for SCAN-processed data with the help of real-time PCR profiles from genes that are known to be differentially expressed after the stimulation. The obtained residual vectors Ri are modified by weighting vectors \u03a9. These weights account for the uneven experimental design in the following way: First, each time point should have the same weight, independent of the number of replicates. Second, more values in one condition than in the other result in higher residuals without a better fit; TTCA balances this uneven design. Third, to reduce the unwanted bias introduced by this vector, the sum of all elements of the weighting vector is forced to the same value. The scalar product of the residual and weighting vectors yields a scalar value for each gene. 
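The check-function loss and the replicate-balancing weights can be illustrated as below. This is a simplified pure-Python reading of the weighting rules (TTCA itself relies on the R package quantreg for the fit), and the normalisation to a unit sum is our choice for the common value the weights are forced to:

```python
from collections import Counter

def rho(u, tau=0.5):
    """Quantile-regression check function rho_tau; tau = 0.5 gives
    median regression (half the absolute deviation)."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def replicate_weights(time_points, total=1.0):
    """Weighting vector Omega: every distinct time point receives the
    same total weight regardless of its replicate count, and all weights
    are scaled to sum to `total` (illustrative normalisation)."""
    counts = Counter(time_points)
    return [total / (len(counts) * counts[t]) for t in time_points]

def weighted_fit_cost(residuals, time_points):
    """Scalar per-gene cost: scalar product of the check-function
    residuals with the balancing weights."""
    w = replicate_weights(time_points)
    return sum(wi * rho(r) for wi, r in zip(w, residuals))
```

For example, with two replicates at time 0 and one at time 1, the weights become [0.25, 0.25, 0.5], so each time point contributes equally regardless of replication.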
The dynamics score Di is then defined as the ratio of the residuals obtained under the two hypotheses. The relation H0/H1 quantifies how much worse the null hypothesis fits in comparison to the alternative hypothesis and is easy to interpret. Perturbation experiments may invoke fast and transient peak dynamics in a gene subset, where the peak might be captured by only a small number of measurements. In this case, peaks, although biologically meaningful, may be overlooked by microarray analysis methods. To account for this, we introduce the peak score. Let T={t1,\u2026,tn} denote the set of measurement time points. For each time point t\u2208T, we define the distance between the two expression dynamics; the peak score is then given by the maximum of these distances, and gene i is considered significant if the score exceeds its threshold. The success of this approach has been pointed out by Di Camillo et al. Some genes, found highly significant in the previous scores, exhibit an extreme variance between replicates. If the median of the standard deviation of replicated measurements of gene i is two-fold larger than the gene group noise threshold, these genes are classified as unstable. The instability score is binary and appears in the results table together with a relative effect size, explained below. TRUE indicates unstable genes that are likely false positives, and FALSE indicates genes with acceptable variance between replicates. For an example see the gene SNORA11 in Table . The integral score is intended to quantify the area between the expression profiles for control and treatment. To compute the integral between the two expression dynamics of each gene i we first linearly interpolate the missing values of the quantile regression at the measured time points and at time points where the curves intersect. We then estimate the area between the two dynamics by applying the trapezium rule. This integral serves as a measure for the difference in mRNA production between the two conditions (see Figure). The integral score can be computed for different time intervals; hereby, four separate scores are computed. 
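The trapezium-rule computation behind the integral score can be sketched as follows, assuming piecewise-linear dynamics between measured time points and inserting curve-intersection points as described (a simplification of the actual implementation):

```python
def area_between(t, control, treatment):
    """Area between two piecewise-linear curves: trapezium rule applied
    to |control - treatment|, with intersection points inserted so every
    segment has a single sign (exact for piecewise-linear profiles)."""
    pts = []
    for i in range(len(t) - 1):
        d0 = control[i] - treatment[i]
        d1 = control[i + 1] - treatment[i + 1]
        pts.append((t[i], abs(d0)))
        if d0 * d1 < 0:  # sign change: curves cross inside this segment
            x = t[i] + (t[i + 1] - t[i]) * d0 / (d0 - d1)
            pts.append((x, 0.0))
    pts.append((t[-1], abs(control[-1] - treatment[-1])))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

Restricting `t` to sub-intervals of the time course would yield the separate per-interval integral scores mentioned above.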
By using the R package RISmed we query the number of PubMed publications connecting a gene with the studied condition. This score indicates whether a gene is already well known to be associated with the condition or potentially a new target. The consensus score is used for the final ranking of the genes and combines the four scores. By merging the dynamics score with the peak score, the combined integral score and the relevance score, and normalising the result to be between 0 and 1, we obtain the consensus score. Except for the peak score, we did not define any significance threshold yet. For the other scores a significance level can be computed by a one-sided, one-group hypothesis test. The program fits the Cauchy, Gamma, log-normal, logistic, normal, Poisson and Weibull distributions to the empirical distribution of score values using the function fitdistr provided by the R package MASS. If a score takes negative values, the distribution is mirrored along the x\u2212axis so that it can be fitted in the negative part as well; the obtained significance threshold is transformed back afterwards. The distribution function providing the best fit of the distribution of score values is automatically selected and plotted. To estimate the significance for a differentially expressed gene we provide the p-value as well as the effect size. The effect size of the peak score is defined as the distance between the expression dynamics, normalised by the maximum distance possible, i.e. the highest expression value within the data set minus the lowest expression value within the data set. The largest observed expression change in our data set covers 25.9% of the whole detection range and represents the effect size. The same normalisation is used for the instability score and also for the integral score, where the maximum area is given as the maximal distance multiplied by the time period. In the consensus score, a gene is considered to be significant if it is considered significant in at least two scores. The upper sdu and lower sdl standard deviations of all genes within each ontology group are calculated for each time point. 
The upper standard deviation hereby accounts for all measurement points above the group mean and the lower standard deviation accounts for all measurement points below the average. Separation into upper and lower standard deviation helps to better recognise when a subset of the functional group shows increased (or decreased) expression. This would lead to enlarged upper (or lower) standard deviations, where the classical standard deviation does not allow such a distinction. We then consider gene groups differentially expressed if their expression bandwidths are separated by the condition, i.e., if the variability between genes in the same ontology group is small in contrast to the changes caused by different treatments. To test whether the expression bandwidths are separated by condition, we distinguish two different cases, as shown in Fig. , over measurements j\u2208{1,\u2026,n} of genes i, where n indicates the total number of measurements per gene. We are only interested in the maximum distance between the bands, or the minimum mutual overlap, for the score: \u0398i is the maximum, over all time points, of the gap between the two bands. Positive values indicate a separation of the bands and negative values indicate overlap. The average expression profile for each gene group and treatment is then used to calculate the other scores as described above. Hence, TTCA ranks functional groups high if they contain genes with similar expression patterns over time within a condition and if they clearly change their expression dynamics from one condition to the other. Although we did not compare the performance of the gene set module, the application on real data seems promising. Alternatively, the user can apply the ranking of the individual genes in other methods for gene set analysis. To investigate the behaviour of functional groups, the genes are linked to gene ontology groups using the BiomaRt package. TTCA is computationally fast, using about 1 h for one contrast. 
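One plausible reading of the band construction and the minimum-overlap score is sketched below: per time point, the gap between the two expression bands, maximised over time. The exact TTCA formula may differ; the function names and the (mean, sd_u, sd_l) band encoding are ours:

```python
def band_stats(values, mean=None):
    """Upper/lower standard deviations: sd_u over measurements above the
    group mean, sd_l over measurements below it."""
    if mean is None:
        mean = sum(values) / len(values)
    above = [(v - mean) ** 2 for v in values if v > mean]
    below = [(v - mean) ** 2 for v in values if v < mean]
    sd_u = (sum(above) / len(above)) ** 0.5 if above else 0.0
    sd_l = (sum(below) / len(below)) ** 0.5 if below else 0.0
    return mean, sd_u, sd_l

def overlap_score(ctrl_bands, treat_bands):
    """Theta_i: maximum over time points of the gap between the two
    expression bands; positive = separated, negative = overlapping.
    Each band is a (mean, sd_u, sd_l) tuple per time point."""
    theta = float("-inf")
    for (m1, u1, l1), (m2, u2, l2) in zip(ctrl_bands, treat_bands):
        if m1 >= m2:
            gap = (m1 - l1) - (m2 + u2)  # ctrl lower edge vs treat upper edge
        else:
            gap = (m2 - l2) - (m1 + u1)  # treat lower edge vs ctrl upper edge
        theta = max(theta, gap)
    return theta
```

With this encoding, two bands centred at 0 and 5 with unit half-widths give a positive gap, while wide overlapping bands give a negative score.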
This includes the analysis of expression dynamics and the generation of relevant figures on a standard laptop. Furthermore, TTCA uses the R-package tcltk2 for a progress display. The cell line H1975 was obtained from LGC Standards. The cell line was authenticated by STR-analysis and routinely checked for mycoplasma contamination. H1975 NSCLC cells were seeded in 6-well-plates with 1.33\u00b710^5 cells per well. After incubation for 3 days, cells were washed 3 times and supplemented with DMEM without FCS for overnight starvation. On the following day, cells were stimulated with 50 ng/mL of EGF diluted in starvation medium. Samples were harvested after 0, 0.5, 1, 2, 4, 6, 8, 12, 24 and 48 hours. Subsequently, RNA was extracted as described below. Total cellular RNA was isolated with the NucleoSpin RNA II kit according to the manufacturers\u2019 instructions. RNA concentrations were determined by measuring the absorbance (230 - 400 nm) using a NanoDrop\u00ae ND-1000 spectrometer. The purity of the RNA was determined through the ratio of the absorbance at 260 nm and 280 nm. RNA with a ratio \u2265 1.8 was used for further analysis. After assessing RNA integrity using the Agilent Bioanalyzer, 100 ng in 3 \u03bcl per sample were handed over. After amplification, labelling with biotin and fragmentation of the RNA, hybridisation with the GeneChip Human Gene 2.0 ST Array was performed for 16 h at 45 \u00b0C. Subsequently, washing and staining were performed using an Affymetrix Fluidics Station 450 and the microarray was scanned using an Affymetrix GeneArray Scanner 3000. The method Single Channel Array Normalisation (SCAN) was used for preprocessing. TTCA highlights genes with a high relevance score, such as CTGF, and yields very good results, comparable with other tools. It should be noted, however, that the scores included in TTCA detect specifically expression patterns arising after perturbation or stimulation experiments. For detecting specific dynamical behaviours, e.g. 
oscillations, we recommend specialised methods like Lomb-Scargle periodograms or JTK-CYCLE. We believe that the developed TTCA package is a valuable and efficient tool for the dissection of important information that is usually concealed by experimental and biological variations leading to data heterogeneity. The connection with the number of PubMed publications has, to our knowledge, never been included in other packages and supports the user in distinguishing between new and already known genes affected by the applied perturbation. Further new features (at least to our knowledge) are the automatic detection of the best density function, the approach to detect false positives (the instability score), and the distinction between early, middle and late response. Also, the outbalancing of the sampling design using weighting factors is an important new feature. Moreover, we provide a new gene set significance approach, which pools genes into gene ontology groups whose expression bandwidths are separated. TTCA automatically provides quality checks and plots the gene expression profiles. Thus, the user can easily judge the performance of the package for any included data set. Strong advantages of TTCA are the high degree of transparency, the multitude of visual output for quality assessment, search flexibility and sensitivity also in cases where other methods cannot be applied."}
+{"text": "Modeling and predicting biological dynamic systems while simultaneously estimating the kinetic structural and functional parameters are extremely important in systems and computational biology. This is key for understanding the complexity of human health, drug response, disease susceptibility and pathogenesis for systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, the delineation of the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high-throughput time course omics data is challenging, and conventional experimental techniques are not suited for it in the big omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for researchers who want to start working in this challenging research area. Moreover, applications to various biological systems, health conditions and disease status, and examples that summarize the state-of-the-art performances for different specific mining tasks are presented. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software to analyze specific problem types, associated platform resources, and other potentials for the dynamic trajectory and interaction methods are also presented and discussed in detail. Recent advancements in the omics fields and the associated technologies (e.g., mass spectrometry (MS)) have provided a huge amount of information for delineating the roles of biological entities in complex diseases and biological system states for the human organism. 
Despite considerable computational and statistical efforts over the decades, with thousands of computational tools, algorithms and models developed, ranging from single models to multi-level frameworks (such as meta-frames), the key computational challenge of systems medicine remains: how to best mine and learn from the continuing arrival of big omics data, given thousands of interacting entities with relatively weak or small accumulative effects over time on health conditions or diseases. From the clinical or biomedical perspective, the challenge is reliability, i.e. avoiding false discoveries, together with reproducibility across different patient cohorts and the biological interpretability of the findings. These are all crucial in order to extract fully confirmed, actionable knowledge for systems medicine and P4 solutions. But the evolving, heterogeneous and dynamic information, with low-intensity signals relative to the noise from omics technologies, makes the key drivers that lead to complex diseases difficult to characterize (Fig. ). To tackle this dynamic, interacting, hidden but valuable biomedical information, various analytical tools ranging from single-level to more sophisticated hybrid data mining and machine learning tools, as well as advanced statistical models, are needed, especially advanced approaches for causal network inference and dynamic trajectory prediction for drug and disease responses. To process and model temporal omics data, analyses at several layers/levels can be applied to meet the needs of state-of-the-art omics data and to overcome the challenges (Fig. ). One of the goals of modeling temporal omics data is to infer and predict the biological networks and interactions for further causal, pathway, function and integrative analysis. 
The advanced-level analysis is the focus of this paper, which includes dynamic trajectory, interactions, network/module based modeling, and knowledge/data integration with pathway, regulatory and function analysis. The synergistic system formalism is a static differential-equation-based deterministic approach that has been applied to genetic, immune and biochemical network data. Singular value decomposition has also been developed for modeling the dynamics of microarray experimental data through matrix decomposition and eigenvalue analysis. The stochastic paradigm treats the dynamic process of temporal change as a stochastic process and describes it as a probability system in time with uncertainty. State space models (dynamic linear models) and hidden Markov models are two important applications of statistical models combined with stochastic-process techniques. The state space model combines the stochastic process with the observation data model uniformly to model a continuous process for capturing the change of gene states. State space models have greater flexibility in modeling non-stationary and nonlinear short time course data and were implemented and applied to genomic studies. The choice of statistical modeling approaches for temporal omics data depends on the features and types of the data (univariate (I), multivariate (II), cycling (III), phenotype dependent (IV); Fig. ). Moreover, they are also related to the way the association between and across outcomes is modeled, or how the effects of the variables are treated. 
So the related approaches can be categorized into classical frequentist inferential approaches (fixed effects), Bayesian models (random effects), or a mix of classical inferential techniques and Bayesian models, which leads to mixed models. Conventional time series techniques, such as autoregressive or moving-average models and Fourier analysis, require stationarity, linearity for lower-order autoregressive models, and uniformly spaced time points, conditions which are not present in short time course omic experiments; they are therefore not suitable for unevenly spaced or distributed omic experiments. The probability and confidence measures play important roles in omic temporal data, not only due to the variations, high noise levels and experimental errors resident in the experiments, but also because of the stochastic nature of the underlying biological processes. The Bayesian paradigm is very well suited for examining these features and other properties of the temporal data, such as highly correlated inputs and phenotypes, missing data, and small sample sizes. Moreover, Bayesian approaches can account for the variability induced by the collection of models and construct credible intervals accounting for model uncertainty by investigating the impact of the choice of priors on the model space. They can also construct new search algorithms that take advantage of parallel processing with Markov chain Monte Carlo (MCMC) algorithms. Bayesian approaches can be used when there are more covariates than observations. The Bayesian method is a hybrid generative-discriminative model that can add prior knowledge (such as distributions of the input) or encode domain knowledge to improve the learning or training phases. 
Bayesian approaches can well capture linear, non-linear, combinatorial, and stochastic types of relationships among variables across multiple levels of biological organization and have been extensively applied to time course gene expression studies with various hierarchical settings. Clustering analyses, or unsupervised learning without class labels, are the most commonly used methods for time course genomic experiments. These approaches are based on similarity, correlation or distance measures for the identification of groups of genes with \u2018similar\u2019 temporal patterns of expression, which is a critical step in the analysis of kinetic data given the large number of genes involved. Supervised clustering or classification approaches incorporate known disease status or prior genomic knowledge as class labels for classifying the genomic temporal patterns and disease/health outcomes. Classification or supervised clustering approaches can also be distinguished as generative versus discriminative models. Generative approaches learn the joint probability of inputs x and output class label y, then make predictions based on the conditional probability obtained through Bayes' rule. The na\u00efve Bayes classifier is a simple example of a generative approach. Neural networks are a popular computational approach for prediction problems, which can apply either discriminative or generative strategies. The computational or statistical approaches for network construction cover various levels such as transcriptional regulation networks, metabolic networks, protein-protein interaction networks, and disease-drug-gene networks. For instance, weighted correlation network analyses identify modules/clusters of highly correlated transcripts, genes, proteins and metabolites. 
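The weighted-correlation idea can be made concrete with a minimal sketch: pairwise Pearson correlation raised to a soft-threshold power gives the network adjacency. The power beta=6 is a conventional WGCNA-style choice, not a value from this review, and the toy profiles are illustrative:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def adjacency(profiles, beta=6):
    """Soft-threshold adjacency a_ij = |cor(x_i, x_j)| ** beta, as in
    weighted correlation network analysis."""
    genes = list(profiles)
    return {
        (g, h): abs(pearson(profiles[g], profiles[h])) ** beta
        for g in genes for h in genes if g != h
    }

# Hypothetical expression profiles: a and b are perfectly correlated.
profiles = {"a": [1.0, 2.0, 3.0], "b": [2.0, 4.0, 6.0], "c": [1.0, 3.0, 2.0]}
adj = adjacency(profiles)
```

Raising the correlation to a power suppresses weak links while preserving strong co-expression, which is what lets module detection pick out clusters of tightly correlated entities.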
Dynamic Bayesian networks (DBN) have been popular for learning and inferring gene regulatory networks, and have been compared with Granger causality and probabilistic Boolean networks. For examining potential causal relationships and network structure, autoregressive models for gene regulatory network inference using time course data were investigated with respect to sparsity, stability and causality. To investigate the dynamic aspects of gene regulatory networks measured through system variables at multiple time points, Acerbi et al. (2014) proposed continuous time Bayesian networks for network reconstruction. They compared two state-of-the-art methods: dynamic Bayesian networks and Granger causality analysis. Two general categories for data integration are meta-analysis, which performs the analysis for each individual dataset first and then combines the results, and mega-analysis, which combines the data first and then conducts the analysis. Whichever strategy is chosen, pathway and functional analysis needs to be conducted for better interpretation and visualization. Pathway-based analysis moves to the next level of analysis (complementary to DAVID and KEGG) to define how the selected individually regulated genes, transcripts, or metabolites interact as parts of complex pathways, such as signaling and metabolic pathways, based on known knowledge and the published literature. For instance, using Ingenuity Pathway Analysis software (http://www.ingenuity.com/), which computes a score for each network according to the fit of the network, one can select a cut-off score of 3 for identifying gene networks significantly affected by the specific gene or genotypes. This score indicates that there is a 1/1000 chance that the genes are in a network due to random chance and therefore, scores of 3 or higher have a 99.9% confidence of not being generated by random chance alone. 
Then one may compare the selected pathways and networks between DEG lists obtained from individual comparisons to find the common and unique pathways between each compartment. These comparisons will indicate the difference of specific genes at the pathway level in addition to our biological process and molecular function analyses, pinpointing the relationship among potential candidate driver genes, chromosomal abnormalities, and pathways. Further functional over-representation analysis, for instance through the Database for Annotation, Visualization and Integrated Discovery (DAVID), identifies modules and entities with statistically significant over-representation of particular functional categories and major gene/metabolite groups/families. The DREAM challenges provided excellent examples for temporal omics data sets that involve various up-to-date biomedical challenge questions through multiple team competitions. DREAM utilized in silico temporal gene expression data sets from DREAM4 for inferring network structures and predicting the response of the networks to novel perturbations in an optional \u201cbonus round\u201d. Eren et al. (2015) developed an advanced automated and human-guided characterization and visualization platform for microbial genomes in metagenomic assemblies. The platform has interactive interfaces that can link omics data from multiple sources into a single, intuitive display. The most popular software packages for conducting computations on omics data are Bioconductor from R, toolboxes from Matlab, and Genomics from SAS/JMP. In addition, C++, Visual Basic, Python, Java, JavaScript and WinBUGS are often-used programming languages for developing various types of analytic and visualization tools and pipelines. 
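Returning to the Granger-causality analyses discussed above: the core comparison is between nested lagged regressions, asking whether past values of x improve prediction of y beyond y's own past. The single-lag, RSS-reduction sketch below is our own simplification; real analyses use multiple lags and a formal F-test:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_rss(X, y):
    """Residual sum of squares of the least-squares fit y ~ X
    (X rows include an intercept column)."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

def granger_gain(x, y):
    """Fraction by which adding lagged x reduces the RSS of predicting y
    from its own lag; values near 1 suggest x Granger-causes y."""
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]          # restricted
    Xf = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]  # full
    yt = y[1:]
    rss_r, rss_f = ols_rss(Xr, yt), ols_rss(Xf, yt)
    return (rss_r - rss_f) / rss_r if rss_r > 0 else 0.0
```

For a toy series where y simply copies x with a one-step delay, the full model fits almost perfectly while the restricted model does not, so the gain approaches 1.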
In addition, they provide a dynamic interface (Grinn) to integrate gene, protein, and metabolite data using more advanced biological-network-based approaches, such as Gaussian graphical models, partial correlation and Bayesian networks, for omics data integration. For instance, time-vaRying enriCHment integrOmics Subpathway aNalysis tOol (CHRONOS) is an R package built to extract regulatory sub-pathways, along with their miRNA regulators at each time point, based on KEGG pathway maps and user-defined time series mRNA and microRNA (if available) expression profiles from microarray experiments. Some popular interaction and network analysis resources and databases for biological systems derived from the literature include IntAct, BioGRID, and MINT. Other network construction software can be useful as well, such as Genetic Network Analyzer (GNA), a computer tool for modeling and simulation of gene regulatory networks. GNA allows the dynamics of a gene regulatory network to be analyzed without quantitative information on parameter values, characterizing its dynamical behavior in a qualitative way. Learning and integrating dynamic omics temporal data and gene-protein-disease-drug/treatment correlation, interdependence and causal networks between hybrid systems may improve our understanding of system-wide dynamics and errors of pharmacological and biomedical agents and their genetic and environmental modifiers. Most available dynamic approaches and existing applications focus on genomic time course data, but the same techniques and methodologies can be extended to various types of omics data (such as metagenomics), with applications to other biological networks and pathways. For instance, RNA-Seq data has revealed far more about the transcriptome than microarrays, primarily because analysis is not limited to known genes. 
This opens possibilities for splicing analysis, differential allele expression analysis, variant detection, alternative start/stop, gene fusion detection, RNA editing and eQTL mapping. From either a computational complexity or a clinical reproducibility point of view, one cost-effective resolution and future direction would be to develop more intelligent AI-based data integration, learning and automation with hierarchical ensemble approaches, not just connectivity. With efficient multi-task learning algorithms (with automatic reasoning and consensus predictions via boosting and bagging) embedded into multilayer computational automated ensemble model systems with pipelines, the latent components of correlated biological entities can be divided and the key components, pathways or elements can be captured by utilizing continuously arriving, evolving, temporal omics data. Investigating causality rather than association among various biological entities, ranging from RNA, microRNA, DNA, gene, protein, disease, and drug, in an integrative perspective would be important; relatively few integrative efforts have been dedicated to this so far. Other bottleneck issues for omics data may partially arise from the biomedical systems\u2019 complexity, which encompasses biological/genetic, behavioral, psychosocial, societal, environmental, systems-related, ethical and other intertwined factors. Further incorporation of electronic health records linked to behavioral, psychosocial, societal, environmental, and clinical lab measures with temporal omics data in a hierarchical ensemble automated system will provide more interpretable and reproducible scientific results and practical clinical decision making for P4 patient outcomes."}
+{"text": "During tissue development, patterns of gene expression determine the spatial arrangement of cell types. In many cases, gradients of secreted signalling molecules\u2014morphogens\u2014guide this process by controlling downstream transcriptional networks. A mechanism commonly used in these networks to convert the continuous information provided by the gradient into discrete transitions between adjacent cell types is the genetic toggle switch, composed of cross-repressing transcriptional determinants. Previous analyses have emphasised the steady state output of these mechanisms. Here, we explore the dynamics of the toggle switch and use exact numerical simulations of the kinetic reactions, the corresponding Chemical Langevin Equation, and Minimum Action Path theory to establish a framework for studying the effect of gene expression noise on patterning time and boundary position. This provides insight into the time scale, gene expression trajectories and directionality of stochastic switching events between cell states. Taking gene expression noise into account predicts that the final boundary position of a morphogen-induced toggle switch, although robust to changes in the details of the noise, is distinct from that of the deterministic system. Moreover, the dramatic increase in patterning time close to the boundary predicted from the deterministic case is substantially reduced. The resulting stochastic switching introduces differences in patterning time along the morphogen gradient that result in a patterning wave propagating away from the morphogen source with a velocity determined by the intrinsic noise. The wave sharpens and slows as it advances and may never reach steady state in a biologically relevant time. This could explain experimentally observed dynamics of pattern formation. 
Together the analysis reveals the importance of dynamical transients for understanding morphogen-driven transcriptional networks and indicates that gene expression noise can qualitatively alter developmental patterning. The bistable switch, a common regulatory sub-network, is found in many biological processes. It consists of cross-repressing components that generate a switch-like transition between two possible states. In developing tissues, bistable switches, created by cross-repressing transcriptional determinants, are often controlled by gradients of secreted signalling molecules\u2014morphogens. These provide a mechanism to convert a morphogen gradient into stripes of gene expression that determine the arrangement of distinct cell types. Here we use mathematical models to analyse the temporal response of such a system. We find that the behaviour is highly dependent on the intrinsic fluctuations that result from the stochastic nature of gene expression. This noise has a marked effect on both patterning time and the location of the stripe boundary. One of the techniques we use, Minimum Action Path theory, identifies key features of the switch without computationally expensive calculations. The results reveal a noise driven switching wave that propels the stripe boundary away from the morphogen source to eventually settle, at steady state, further from the morphogen source than in the deterministic description. Together the analysis highlights the importance of dynamics in patterning and demonstrates a set of mathematical tools for studying this problem. Tissue development relies on the spatially and temporally organised allocation of cell identity, with each cell adopting an identity appropriate for its position within the tissue. In many cases, cellular decisions are made by transcriptional networks controlled by extrinsic signals. These networks frequently include a genetic toggle switch. (Figure caption: each point is the average of 200 stochastic trajectories.) 
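The exact stochastic simulations referred to above can be illustrated with a minimal Gillespie algorithm for a two-gene toggle switch. This is a generic sketch, with Hill-repressed production and linear degradation; the rate constants and Hill parameters here are illustrative assumptions, not the parameters used in the paper:

```python
import random

def gillespie_toggle(k=20.0, d=1.0, K=8.0, n=2, t_end=50.0, seed=1):
    """Minimal Gillespie simulation of a cross-repressing toggle switch.
    Production of each protein is repressed by the other via a Hill
    function; degradation is linear. Parameters are illustrative only."""
    rng = random.Random(seed)
    a, b, t = 0, 0, 0.0
    while t < t_end:
        rates = [k / (1 + (b / K) ** n),  # synthesis of A, repressed by B
                 k / (1 + (a / K) ** n),  # synthesis of B, repressed by A
                 d * a,                   # degradation of A
                 d * b]                   # degradation of B
        total = sum(rates)
        t += rng.expovariate(total)       # exponential waiting time
        r, acc = rng.uniform(0.0, total), 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if r <= acc:
                a += (i == 0) - (i == 2)  # +1 for synthesis, -1 for decay
                b += (i == 1) - (i == 3)
                break
    return a, b

a, b = gillespie_toggle()
```

In the bistable regime, repeated runs from the same initial condition settle into one of two states, with one species dominating; averaging many such trajectories is how the mean patterning behaviour is obtained.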
(Supporting information captions. Position of the boundary is measured as the value of the signal at which the mean expression crosses threshold; parameters as in the main text. S8 Fig: mean and standard deviation in expression of the morphogen-activated gene A along the tissue at different time points for different values of \u03a9; results correspond to averaging of 500 trajectories with \u03bdA = \u03bdB = 1, the rest of the parameters as in the main text.)"}
+{"text": "Complex network methodology is very useful for exploring complex systems. However, the relationships among variables in a complex system are usually not clear; therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring association networks from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality. Proper threshold selection is crucial for common correlation identification methods and is not easy for users. The proposed method can not only identify strong correlation without selecting a threshold but also offers correlation quantification, direction identification and temporal relation identification. The method can be divided into three layers, i.e. the data layer, model layer and network layer. In the model layer, the method identifies all the possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove indirect weak correlations and retain strong correlations. Finally, we build a weighted adjacency matrix, the value of each entry representing the correlation level between pair-wise variables, and obtain the weighted directed association network. Two numerically simulated datasets, from a linear system and a nonlinear system, are used to illustrate the steps and performance of the proposed approach. The ability of the proposed method is finally demonstrated in a real application. Association networks are found in many domains, such as networks of citation patterns across scientific articles. Association network inference has been a research topic for several years. We will review some methods that have been proposed so far to address the undetermined relationships among variables. The most classical approach is based on correlation. For instance, Guo et al. 
incorporated correlation measures for this purpose. Moreover, Gaussian Graphical Models have also performed well in inferring association networks on specific experimental datasets; Sch\u00e4fer and Strimmer introduced such an approach. Some approaches to infer association networks rely on information-theoretic similarity measures, such as the method described by Margolin et al. In addition, approaches rooted in Bayesian Networks (BN) employ probabilistic graphical models in order to infer causal relationships between variables, as presented by Aliferis et al. Granger causality (GC) is also a very popular tool for association network inference. It can assess the presence of directional association between two time series of a multivariate data set. GC was introduced originally by Wiener and later formalised by Granger. Of course, there are many more methods for association network inference that we have not mentioned above, such as neural networks and SparCC. Although each of the abovementioned approaches has advantages demonstrated in different settings, none is suitable for every network inference problem. Because each strategy applies different assumptions, they each have different strengths and limitations and highlight complementary aspects of the network. In this paper, we aim at inferring a weighted directed association network from multivariate time series, and the abovementioned methods cannot meet our requirements well. For instance, some of these popular tools are non-directional, e.g. correlation or partial correlation, mutual information measures and Bayesian Networks; thus these measures cannot satisfy directed association network inference. To address the issues mentioned above, we propose an approach called the small-shuffle partial symbolic transfer entropy spectrum (SSPSTES). This work faces five challenges: Time series being non-stationary and continuous: It is very important that the time series is statistically stationary over the period of interest, which can be a practical problem with transfer entropy calculations. 
Threshold selection: Many current methods, e.g. correlation coefficient, mutual information and transfer entropy, decide whether an edge exists between two time series by threshold selection. If a larger value is selected, many real correlations are lost, resulting in a sparse network. By contrast, if a smaller threshold is selected, many spurious relationships are introduced, resulting in a dense network. Although there is much research on threshold selection, it is still difficult for users to select a proper threshold when inferring an association network. The proposed method is a solution to this problem. Strong relationship identification: In general, we are more interested in strong correlations than weak correlations. Because the relationships among the variables are unknown, strong correlations are more convincing; weak correlations have a greater probability of misidentification, which may bring serious consequences. In addition, strong correlations are usually direct rather than indirect relations, which is what is expected in the inference of an association network. The direction and quantity of influence: The direction of an edge is crucial for network prediction and evolution. The proposed method should therefore be able to detect the directional influence that one variable exerts on another. Temporal relation identification: The proposed method should be able to detect the specific temporal relation based on time lags, namely the functional relation in time. In the next section, we will propose a method for inferring an association network from multivariate time series, with emphasis on how to solve the five challenges mentioned above. Section 3 applies the proposed method to two numerical examples whose coupled relationships are known and whose values are time-varying. 
We summarize the results of this paper and outline some topics for further study in Section 4. In this section, we explain the proposed approach in detail. First, we present an integrated framework of the approach, and then carry out a detailed description around the framework. The approach designed for association network inference takes both exploration and application into account, minimizing human intervention during modeling. Therefore, the approach starts with inputting data and ends with outputting a network inferred from multivariate time series; the modelling process is transparent for users. The integrated framework has three layers. The first layer, the so-called Data Layer, is the interface for interaction with users. One thing done in this layer is to input the original multivariate time series and the modelling parameters; the other is to shuffle the original data several times with a surrogate data method. The most important and complicated layer of the framework is the second layer, i.e. the Model Layer, in which we identify all the possible relationships among the multivariate time series. In order to achieve this goal, the core steps are time series symbolization, partial symbolic transfer entropy calculation and spectrum construction. The output of this layer is the candidate relationships. The task of the last layer is to construct a weighted directed network. In order to retain only the strong correlations, the candidate relationships are filtered; indirect correlations are removed by the DPI. The technique of surrogate data analysis is a randomization test method. The SSS method destroys local structures or correlations in irregular fluctuations (short-term variabilities) and preserves the global behaviors by shuffling the data index on a small scale. 
The steps of the SSS method are as follows. Let the original data be x(t), let i(t) be the index of x(t), let g(t) be Gaussian random numbers, and let s(t) be the surrogate data. (1) Shuffle the index of x(t): i'(t) = i(t) + A g(t), where A is an amplitude. (2) Sort i'(t) by rank order and let the index of the sorted i'(t) be \u00ee(t). (3) Obtain the surrogate data: s(t) = x(\u00ee(t)). The parameter A reflects the extent of shuffling: a higher value of A results in a greater difference between surrogate and original data, while a smaller value results in less difference. The parameter A is input at the beginning of the method and its empirical value is 1.0. The technique of time series symbolization was introduced with the concept of permutation entropy. For the original multivariate time series, let two time series V1, V2 be {v1,t}, {v2,t} respectively, t = 1,2,\u22ef,k. The embedding parameters used to form the reconstructed vector of the time series V1 are the embedding dimension m1 and the time delay \u03c41; accordingly, m2 and \u03c42 are the embedding parameters defined for V2. The reconstructed vectors of V1 are defined for t = 1,2,\u22ef,k' with k' = k \u2212 max((m1\u22121)\u03c41,(m2\u22121)\u03c42). For each reconstructed vector, the ranks of its components assign a rank-point rj,t \u2208 {1,2,\u22ef,m1} for j = 1,2,\u22ef,m1, yielding the symbolic sequence. Symbolic transfer entropy means that the transfer entropy calculation is based on the symbolic time series of this section: STEV2\u2192V1 sums, over the observed symbol triples, p(r1,t+\u03b4, r1,t, r2,t) log[p(r1,t+\u03b4 | r1,t, r2,t) / p(r1,t+\u03b4 | r1,t)], where r denotes the rank-point sequences and \u03b4 the time step. Symbolic transfer entropy uses a convenient rank transform to estimate the transfer entropy on continuous data without the need for kernel density estimation. Since slow drifts do not have a direct effect on the ranks, it still works well for non-stationary time series. Let z = {v3,v4,\u22ef,vn} denote the set of the remaining variables. 
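The three SSS steps above can be transcribed directly into code; this is a sketch, with only the default amplitude A = 1.0 taken from the text:

```python
import random

def small_shuffle_surrogate(x, A=1.0, seed=None):
    """Small-shuffle surrogate (SSS): perturb the time indices with
    Gaussian noise of amplitude A, rank-order the perturbed indices, and
    read the data out in that order. Local structure is destroyed while
    global behaviour (trends) is preserved."""
    rng = random.Random(seed)
    idx = list(range(len(x)))                           # i(t)
    perturbed = [i + A * rng.gauss(0, 1) for i in idx]  # i'(t) = i(t) + A*g(t)
    order = sorted(idx, key=lambda i: perturbed[i])     # rank order of i'(t)
    return [x[i] for i in order]                        # s(t) = x(i_hat(t))

s = small_shuffle_surrogate(list(range(100)), A=1.0, seed=7)
```

With A = 1.0 each point moves only a few positions, so the surrogate is a permutation of the original series whose global shape is intact but whose short-term correlations are scrambled.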
The partial symbolic transfer entropy (PSTE) is defined analogously to partial correlation: by conditioning on z it can eliminate some of the indirect correlation and retain the pure, direct information flow between v2 and v1. Because the time delay is underdetermined, the partial symbolic transfer entropy is calculated for each candidate time lag for each pair of time series. This process is described by Algorithm 1.
Algorithm 1: Partial Symbolic Transfer Entropy Calculation with Different Time Lags
Input: tm, maximum time delay
Output: PSTEML, a list of partial symbolic transfer entropy matrices
Method:
for (t = 1; t <= tm; t++) {
    for (each variable j) {
        for (each variable i) {
            if (j \u2260 i) {
                STS = call the function of time series symbolization
                STS(j) = the column j of STS
                STS(i) = the column i of STS
                STS(z) = the columns z of STS
                PSTE_matrix(j,i) = call_PSTE_Function(STS(j), STS(i), STS(z), t)
            }
        }
    }
    Element t of PSTEML = PSTE_matrix
}
Return PSTEML
We first use Algorithm 1 to get a list of partial symbolic transfer entropy matrices on the original time series. Then we shuffle the original data the number of times specified at the beginning of the method. 
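As a complement to Algorithm 1, a minimal estimator of the (unconditioned) symbolic transfer entropy at a fixed lag can be sketched as follows; the partial variant would additionally condition on the rank symbols of the remaining variables z, which is omitted here for brevity:

```python
from math import log
from collections import Counter

def symbolize(x, m=2):
    """Replace each length-m window by its ordinal (rank) pattern."""
    return [tuple(sorted(range(m), key=lambda k: x[t + k]))
            for t in range(len(x) - m + 1)]

def symbolic_te(src, dst, lag=1, m=2):
    """Plain symbolic transfer entropy STE_{src->dst} at a fixed lag,
    estimated from empirical symbol frequencies."""
    s_dst, s_src = symbolize(dst, m), symbolize(src, m)
    triple, pair_yx, pair_fy, single = Counter(), Counter(), Counter(), Counter()
    for t in range(len(s_dst) - lag):
        yf, yp, xp = s_dst[t + lag], s_dst[t], s_src[t]
        triple[(yf, yp, xp)] += 1   # joint (future, own past, source past)
        pair_yx[(yp, xp)] += 1
        pair_fy[(yf, yp)] += 1
        single[yp] += 1
    n = sum(triple.values())
    te = 0.0
    for (yf, yp, xp), c in triple.items():
        p_cond_xy = c / pair_yx[(yp, xp)]           # p(yf | yp, xp)
        p_cond_y = pair_fy[(yf, yp)] / single[yp]   # p(yf | yp)
        te += (c / n) * log(p_cond_xy / p_cond_y)
    return te
```

Driving one series with a lagged copy of another gives a large entropy transfer in the driving direction and a much smaller one in the reverse direction, which is the asymmetry the spectrum exploits.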
We repeat Algorithm 1 on each shuffled dataset accordingly. The Partial Symbolic Transfer Entropy Spectrum (PSTES) is defined as follows: the PSTES between time series Y and X is composed of their many partial symbolic transfer entropy curves drawn in a rectangular coordinate system. The horizontal axis represents different time delays and the vertical axis represents transfer entropy. One of the transfer entropy curves results from the original data and the other curves result from the shuffled data. The parameters input at the beginning of the method are the maximum time delay tm and the number of shuffles sm. For example, let tm = 10 and sm = 99; then the output of the last step is a list of 100 elements, and each element is a list of 10 transfer entropy matrices. Moreover, each entry of a transfer entropy matrix reflects the correlation strength of a pair of time series. Thus, according to the definition of PSTES, we first split the output of Section 2.4 into pieces and then compose the partial symbolic transfer entropy spectrum. In order to compose the spectrum, we must understand the structure of that output: for each dataset, original or shuffled, Algorithm 1 returns a list of PSTE matrices with different delays, so for all data the result is a list of PSTE matrix lists. The target of the proposed method is strong correlation identification, not the identification of all correlations among the multivariate time series. The scenario for this method is that we do not know the relationships in the complex system. We pay more attention to the precision of correlation identification than to the sensitivity. 
This is because the misidentification of relationships among variables may bring serious consequences to the data analysis. Our decision as to whether a strong correlation exists between two variables is made from the characteristics of the PSTES; this is based on the theory of hypothesis testing, which is often used in the surrogate data method. The entry aij is determined as follows: aij = 1 if, for some time delay t \u2208 [1, tm], the PSTE from variable i to variable j based on the original data exceeds the PSTE from i to j based on every shuffled dataset s \u2208 [1, sm]; otherwise aij = 0. In order to retain only the strong correlations, the candidate relationships are filtered; three ideas are synthesized into the filter method. The first component of the filter method is the DPI (data processing inequality): given variables X, Y and Z which form a Markov chain in the order X\u2014>Y\u2014>Z, the mutual information between X and Y is greater than or equal to the mutual information between X and Z, and likewise the mutual information between Y and Z is greater than or equal to that between X and Z. PSTE is extended from mutual information, so we deal with indirect relations according to the following rule: IF PSTEX\u2192Z \u2264 PSTEX\u2192Y and PSTEX\u2192Z \u2264 PSTEY\u2192Z, THEN the relationship between X and Z is removed. Second, for bidirectional correlation, we use an empirical criterion: IF PSTEX\u2192Y*0.4 >= PSTEY\u2192X, THEN PSTEY\u2192X = 0; and IF PSTEY\u2192X*0.4 >= PSTEX\u2192Y, THEN PSTEX\u2192Y = 0. Third, although PSTE measures the correlation of variation trends, it does not measure the correlation of values. As a complementary method, we introduce Granger causality, which is based on the residuals of a linear model; the strategy is: IF GC = 0, THEN PSTE = 0. After this step, we get the final 0\u20131 adjacency matrix: if aij = 1, the relationship between i and j is called a strong relationship. The association network inferred from multivariate time series can be denoted as G = (V, E). Here V = {v1,v2,\u22ef,vn} is the set of vertices, i.e. 
time series variables, and E is the set of edges, i.e. the strong correlations identified in Section 2.6 between pairs of vertices in V. The weight of each edge in E is the corresponding maximum partial symbolic transfer entropy of the original data calculated in Section 2.4. First, we apply our method to a linear system which has five time series variables, i.e. x1(t), x2(t), x3(t), x4(t), x5(t). The relationships among these variables are modelled by linear autoregressive expressions in which the ri(t) are random noise: independent, identically distributed Gaussian random variables with mean zero and standard deviation 1.0. If y is a linear combination of variables x1,x2,\u22ef,xn, we say y is a response variable and x1,x2,\u22ef,xn are the drive variables. In the network, we denote the drive-response relationship between y and x1 as an arrowed edge from x1 to y. It is difficult to find the relationships among the five time series variables from the raw series alone. In this system, x1 is driven by two other variables, x2 and x4; variable x3 is driven by variables x1 and x4; and x4 is driven by x1. However, x2 and x5 are not driven by any other variables. It is noted that there are autocorrelations in the model equations. After generating the simulated data, we identify the candidate relationships from the partial symbolic transfer entropy spectrum. Next, the candidate relationships are filtered by the method described in Section 2.6.2. After this step, we get all the strong relationships, and the output is a 0\u20131 adjacency matrix C. 
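The filtering rules of Section 2.6.2 can be sketched as code. The PSTE values and a 0/1 Granger-causality indicator are assumed precomputed, and the data structures (dicts keyed by ordered node pairs) are our own illustration, not the paper's implementation:

```python
def filter_edges(pste, gc, bidir_ratio=0.4):
    """Filter candidate edges. pste[(i, j)] is the PSTE from i to j;
    gc[(i, j)] is 1 if Granger causality supports the edge, else 0.
    Applies, in order: the data processing inequality (DPI), the 0.4
    bidirectional criterion, and the Granger-causality check."""
    out = dict(pste)
    nodes = {k for edge in pste for k in edge}
    # Rule 1 (DPI): in a triangle, the weakest edge is treated as indirect.
    for x in nodes:
        for y in nodes:
            for z in nodes:
                if len({x, y, z}) == 3 and all(
                        e in out for e in [(x, y), (y, z), (x, z)]):
                    if out[(x, z)] <= out[(x, y)] and out[(x, z)] <= out[(y, z)]:
                        out[(x, z)] = 0.0
    # Rule 2: for bidirectional pairs, drop the much weaker direction.
    for (i, j) in list(out):
        if (j, i) in out and out[(j, i)] * bidir_ratio >= out[(i, j)]:
            out[(i, j)] = 0.0
    # Rule 3: edges without Granger-causality support are removed.
    for e in list(out):
        if gc.get(e, 0) == 0:
            out[e] = 0.0
    return {e: v for e, v in out.items() if v > 0}
```

For example, with pste = {(1, 2): 0.5, (2, 3): 0.4, (1, 3): 0.1} and full Granger support, the weak (1, 3) edge is removed by the DPI and only the two strong edges survive.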
Precision and sensitivity are used to assess the inference. Here, TP is the number of edges in the intersection between the original edge set and the inferred edge set, FP is the number of edges in the inferred edge set but not in the original edge set, and FN is the number of edges not in the inferred edge set but in the original edge set. In order to test whether the model is sensitive to the system noise, we generate ten groups of data from the linear system and then apply the proposed method to these data. As a result, we get ten precision values and sensitivity values, whose averages are reported. Next, we discuss the temporal relation identification of the proposed method; note that the following discussion is based on the edges inferred correctly. The time lag assigned to two correlated variables is the time point at which the PSTE of the original data achieves its maximum value. Based on this definition, we define a measure, the precision of time lags (PTL), to assess the temporal relation identification of the proposed method: PTL = TPL/(TPL + FPL), where TPL is the number of correct temporal relation identifications among the correctly identified edges and FPL is the number of erroneous ones. In addition, we discuss how the dimension of the symbolic time series affects the performance of the proposed method, and also how the length of the data affects it. Since SSPSTES is a synthetic method, we make a comparison between the proposed method and some other common methods. In the next example, we validate whether the proposed method works well for a nonlinear system. The simulated data are generated by a set of nonlinear expressions in which the ri(t) are random noise: independent, identically distributed Gaussian random variables with mean zero and standard deviation 1.0. 
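The evaluation measures above are straightforward to compute from edge sets; a small sketch with a worked example:

```python
def edge_metrics(true_edges, inferred_edges):
    """Precision and sensitivity over directed edge sets, as in the text."""
    tp = len(true_edges & inferred_edges)   # correctly inferred edges
    fp = len(inferred_edges - true_edges)   # inferred but not real
    fn = len(true_edges - inferred_edges)   # real but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return precision, sensitivity

def ptl(tpl, fpl):
    """Precision of time lags among the correctly identified edges."""
    return tpl / (tpl + fpl) if tpl + fpl else 0.0

true = {(1, 2), (2, 3), (1, 4)}
found = {(1, 2), (2, 3), (4, 5)}
# Two of three true edges recovered, one spurious edge inferred:
# precision = 2/3, sensitivity = 2/3.
```

Averaging these values over repeated noise realisations, as done for the ten generated datasets, gives the reported mean precision and sensitivity.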
In this example, all variables except x1 are nonlinear: the expressions for x2 and x3 contain nonlinear terms, x4 depends nonlinearly on x3(t\u22124), x5 depends nonlinearly on x4(t\u22123), and x6 depends on the product x2(t\u22121)x3(t\u22125). We thereby introduce three kinds of direct nonlinear correlation, i.e. square correlation, square-root correlation and the product of two first-order terms. The nodes of the corresponding network fall into three kinds: the first kind, exemplified by x1; the second kind, whose in-degree is zero; and the third kind, whose out-degree and in-degree are both nonzero. According to the drive-response relationships among the six time series variables, the corresponding original network of this nonlinear system can be drawn. We apply the proposed method to this nonlinear system; the process is the same as that described in Section 3.1 and yields the partial symbolic transfer entropy spectrum. The candidate relationships are filtered by the method described in Section 2.6.2, and all the retained strong relationships are denoted as an adjacency matrix C. In order to infer a weighted directed association network from multivariate time series, we have proposed a method named the small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), which synthesizes Symbolic Transfer Entropy (STE), the Small-Shuffle Surrogate (SSS) method and a filter algorithm. We first proposed the framework of the method, composed of three layers: the Data Layer, the Model Layer and the Network Layer. Then we described the main processes of SSPSTES from Section 2.2 to Section 2.7. Next, we applied the proposed method to numerically simulated linear and nonlinear systems, using three indicators, i.e. precision, sensitivity and PTL, to assess it. We discussed how the dimension of the symbolic time series and the length of the data affect the performance of the proposed method. 
We also made a comparison between SSPSTES and three other relevant methods. As a result, the proposed method performs better on both the linear system and the nonlinear system than the other methods. In general, the method can identify the strong correlations and also find the time delay between pairwise time series. Finally, we applied the proposed method to a real multivariate time series data set, i.e. overseas departures from Australia; the inferred association network is reasonable. Although it has been illustrated that the proposed method is good at inferring association networks from multivariate time series, there are still some topics worth studying in future. First, in this paper, it is considered that the misidentification of relationships may bring serious consequences; thus we aim at strong correlation identification and ignore the proportion of identified relationships among all relationships existing in the complex system. The sensitivity is unstable and sometimes may be a little low; therefore, we will attempt to improve the sensitivity of SSPSTES. Second, the proposed method can be optimized to reduce its complexity. Third, we will apply the method to some larger systems and real complex systems, e.g. gas pipe monitoring systems and electric power monitoring systems. All these topics are worth studying in depth. Nevertheless, the proposed method can already serve as a heuristic tool for inferring association networks from multivariate time series so as to study the system deeply with complex network knowledge. S1\u2013S3 Datasets (CSV)."}
+{"text": "We have developed a machine learning approach to predict stimulation-dependent enhancer-promoter interactions using evidence from changes in genomic protein occupancy over time. The occupancy of estrogen receptor alpha (ER\u03b1), RNA polymerase (Pol II) and the histone marks H2AZ and H3K4me3 was measured over time using ChIP-Seq experiments in MCF7 cells stimulated with estrogen. A Bayesian classifier was developed which uses the correlation of temporal binding patterns at enhancers and promoters and genomic proximity as features to predict interactions. This method was trained using experimentally determined interactions from the same system and was shown to achieve much higher precision than predictions based on the genomic proximity of the nearest ER\u03b1 binding. We use the method to identify a confident genome-wide set of ER\u03b1 target genes and their regulatory enhancers. Validation with publicly available GRO-Seq data demonstrates that our predicted targets are much more likely to show early nascent transcription than predictions based on genomic ER\u03b1 binding proximity alone. Gene expression is dependent upon the binding of transcription factor (TF) proteins to genomic regions which regulate transcriptional initiation. In eukaryotes these regulatory regions include promoters and distal enhancers. Recent progress in experimental techniques such as ChIA-PET, 3C and its derivatives 4C, 5C, and Hi-C have mapped physical interactions between such regions. ChIP-seq experiments enable the discovery of the genomic location of transcriptionally relevant proteins such as TFs, RNA polymerase and modified histones. Multiple ChIP-Seq datasets can be combined with data from other relevant genomics assays to identify active promoters and enhancers using genomic segmentation algorithms. 
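The two classifier features named above, correlation of temporal binding patterns and genomic proximity, can be sketched as follows. The proximity transform and all function names here are illustrative assumptions, not the paper's actual implementation:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation of two equal-length occupancy time courses."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def interaction_features(enhancer_occupancy, promoter_occupancy, distance_bp):
    """Features analogous to those described in the text: correlation of the
    temporal ChIP-Seq binding patterns plus genomic proximity (here a simple
    1/(1 + d) transform of the distance in bp; the transform is assumed)."""
    r = pearson(enhancer_occupancy, promoter_occupancy)
    proximity = 1.0 / (1.0 + distance_bp / 1e5)
    return r, proximity

# Enhancer and promoter with concordant estrogen-response time courses:
enh = [1.0, 4.0, 9.0, 6.0, 3.0]
prom = [2.0, 5.0, 10.0, 7.0, 4.0]
```

A classifier trained on such feature pairs would favour enhancer-promoter pairs whose occupancy rises and falls together over the time course and which lie close on the chromosome.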
Other approaches for discovering cell-type specific interactions include PreSTIGE and RIPPLE. The majority of the above methods require data from multiple cell-types and therefore do not allow discovery of interactions given data from one cell-type. Most existing methods also assume a stringent distance constraint and are therefore unable to discover distal links beyond this constraint. Finally, these methods do not take into account evidence from time course data. Here, estrogen receptor alpha (ER-\u03b1) and RNA polymerase (Pol II) ChIP-Seq time course data are shown to be highly informative for predicting interactions. We also stratify our predicted interactions into those that lie within Topologically Associating Domains and those that cross domain boundaries. Other previously published data from the same set of experiments are available for Pol-II ChIP-Seq and RNA-Seq in MCF7 breast cancer cells. Our previous studies included only the Pol-II and RNA-Seq time course data from these experiments, and here we extend the analysis. The MCF-7 human breast cancer cell line originates from a 69-year old Caucasian woman and is estrogen receptor (ER) positive, progesterone receptor (PR) positive and HER2 negative. Here, MCF-7 cells (kindly provided by Prof. Edison Liu, Jackson Laboratories, Maine, USA) were grown in 15 cm plates to 80% confluency. Plates were then washed two times with PBS and overlaid with 20 ml of phenol-red free high glucose DMEM (Gibco) containing 2% charcoal-stripped FCS (Sigma). After 24 h of incubation, the cells were again washed with PBS and fresh media containing 2% charcoal-stripped FCS was added. This process was repeated over a three-day period to generate cells devoid of estrogen. The time course was initiated by replacing the media with prewarmed media containing 10 nM E2.
In addition, an untreated sample was included in the experiment as a zero time point. Cells were fixed for 10 min at room temperature by the addition of formaldehyde to a final concentration of 1%, after which glycine was added to a concentration of 100 mM. Cells were then washed twice with PBS and collected into 2 ml of lysis buffer (with 100 mM PMSF). The lysate was sonicated for 3 \u00d7 30 s using a Branson ultrasonicator equipped with a microtip on a power setting of 3 and a duty cycle of 90%. Samples were cooled on ice between rounds of sonication. Alternatively, a Bioruptor sonicator was used to fragment chromatin. In either case, the resulting sonicate was centrifuged at 4,000\u00d7 g for 5 min, an aliquot of 10% retained for input and the remaining material transferred to a fresh tube. Four mg of anti-ER\u03b1 antibody, 2 \u00b5g of anti-RNA Polymerase II antibody, 3 \u00b5g of anti-H3K4me3 antibody and 2 \u00b5g of anti-Histone H2A.Z (acetyl K4 + K7 + K11) antibody were added to the samples, which were then incubated overnight at 4 \u00b0C with rotation. Chromatin-antibody complexes were isolated either by addition of 10 \u00b5L of protein G labeled magnetic beads prewashed in lysis buffer, or with 20 \u00b5L protein A/G beads (Santa Cruz Biotechnology). Afterwards, the complexes obtained with protein G magnetic beads were washed three times with lysis buffer, then reverse-crosslinked in 0.5 ml 5 M guanidine hydrochloride, 20 mM Hepes, 30% isopropanol, 10 mM EDTA for a minimum of 4 h at 65 \u00b0C. Recovered DNA was then purified using a Qiaquick spin column and eluted in 50 \u00b5L of 10 mM Tris pH 8.0.
Where protein A/G beads were used, the complexes were washed sequentially with three different buffers at 4 \u00b0C: two times with a solution of 0.1% SDS, 0.1% DOC, 1% Triton, 150 mM NaCl, 1 mM EDTA, 0.5 mM EGTA, 20 mM HEPES pH 7.6; once with the same solution but with 500 mM NaCl; once with a solution of 0.25 M LiCl, 0.5% DOC, 0.5% NP-40, 1 mM EDTA, 0.5 mM EGTA, 20 mM HEPES pH 7.6; and two times with 1 mM EDTA, 0.5 mM EGTA, 20 mM HEPES pH 7.6. A control library was generated by sequencing input DNA (non-ChIP genomic DNA). Immunopurified chromatin was eluted with 200 \u00b5L of elution buffer, incubated at 65 \u00b0C for 4 h in the presence of 200 mM NaCl, isolated using a Qiaquick spin column and eluted in 50 \u00b5L of 10 mM Tris pH 8.0. Libraries were prepared for Illumina sequencing according to the manufacturer\u2019s protocols (Illumina). Briefly, DNA fragments were subject to sequential end repair and adaptor ligation. DNA fragments were subsequently size selected (approx. 300 base pairs (bp)). The adaptor-modified DNA fragments were amplified by limited PCR (14 cycles). Quality control and concentration measurements were made by analysis of the PCR products by electrophoresis and by fluorometric dye binding using a Qubit fluorometer with the Quant-iT dsDNA HS Assay Kit, respectively. Cluster generation and sequencing-by-synthesis (36 bp) were performed using the Illumina Genome Analyzer IIx (GAIIx) according to standard protocols of the manufacturer (Illumina). Reads were transferred to the Genomatix Mining Station (https://www.genomatix.de/solutions/genomatix-mining-station.html) to enable further analysis. The sequencing depth, i.e., the total number of sequenced reads, was very similar for each dataset; however, on average only 81%, 76%, 67%, 61%, and 64% of ER-\u03b1, Pol-II (rep 1), Pol-II (rep 2), H3K4me3, and H2AZ ChIP-seq reads, respectively, were mapped uniquely to the genome. The non-uniquely mapped reads were discarded from further analysis.
Using the statistical criterion provided by MACS, we established that our sequencing depth allows for no duplicate reads, thus we discarded any duplicated reads as they are most likely ChIP-Seq artefacts. Raw reads from the experiments were mapped onto the human reference genome (NCBI_build37) using the Genomatix Mining Station, and MACS was used to identify ER-\u03b1 binding locations. The last two time points were not included, as the number of ER-\u03b1 mapped reads was found to be very low at these times compared to earlier times. Persistent co-occurring ER-\u03b1 binding locations (i.e., occurring at least twice across two time points after t = 0) were merged by a union operation; single occurrences were discarded. We calculated the mapped read counts for each individual time point ChIP-seq dataset over the consensus ER-\u03b1 binding sites to create time series over enhancer regions for each of our antibodies. To normalise the counts, we divided each read count by the total number of uniquely mapped and non-duplicated reads across all time points and multiplied the resultant values by the total number of mapped reads in the t = 0 min dataset. We concatenated the normalised counts to produce time series for each ChIP-seq dataset. We refer to each enhancer time series as Xj,n, where j \u2208 J (the set of intergenic enhancers) and n \u2208 N (the set of time course ChIP-seq datasets). We repeated the process for the gene regions to create the analogous time series over gene regions, extending the genes by 300 bp upstream from their canonical TSS. We refer to each gene time series as Yk,n, where k \u2208 K (the set of genes). We filtered out genes and intergenic enhancers from consideration if the total number of mapped reads across all time series was less than 30. To visualise the occupancy dynamics at enhancers and genes, we clustered the data with the R implementation of Affinity Propagation (AP).
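As a rough sketch of the normalisation and filtering steps described above (function and variable names are our own, not the authors'): each time point's counts are rescaled to the t = 0 library size and then concatenated into one series per region, dropping low-signal regions.

```python
def normalise(counts, totals):
    """counts[t][j]: read count in region j at time point t;
    totals[t]: uniquely mapped, de-duplicated reads at time t.
    Rescale every time point to the t = 0 library size."""
    ref = totals[0]
    return [[c * ref / tot for c in row] for row, tot in zip(counts, totals)]

def region_time_series(norm, min_total=30):
    """Concatenate per-time values into one time series per region,
    dropping regions whose total signal is below min_total
    (mirroring the read-count filter described in the text)."""
    series = [[row[j] for row in norm] for j in range(len(norm[0]))]
    return [s for s in series if sum(s) >= min_total]
```

With two regions measured at two time points of different library size, the second library is scaled down before the per-region series are built.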
AP is an exemplar-based clustering algorithm that does not require the number of clusters to be fixed in advance. To reduce the effect of noise, for Pol II we clustered only the pairs of time series for which the Pearson correlation coefficient between replicates was at least 0.2 and the total number of mapped reads was at least 30. For ER-\u03b1, due to the lack of replicates, we only clustered the time series with more than 100 reads in total across all times. Prior to the clustering we standardized each time series to z-scores to bring all time series onto the same scale. We obtained 20 and 22 clusters for Pol II time series over enhancers and genes, respectively. Similarly, we obtained 21 and 21 clusters for ER-\u03b1 time series over enhancers and genes. We also jointly clustered the time series of Pol II and ER-\u03b1; the results of the clustering are shown in the supplementary figures. Suppose that an enhancer j = 1, \u2026, J regulates a gene k = 1, \u2026, K at a number of time points, and that their contact is mediated by a protein. We can expect that the ChIP-seq time course data at an enhancer j, i.e., Xj, and at a gene k, i.e., Yk, would on average be more correlated for interacting pairs than for their non-interacting counterparts. Here, we intend to learn the underlying distributions of correlations for the two classes of pairs across four complementary datasets and, on their basis, jointly classify a new unobserved instance. In addition, we combine the time course derived attributes with the corresponding distributions of genomic separation for interacting and non-interacting elements. With each enhancer j we associate two K-dimensional random variables Ij = (Ij,1, \u2026, Ij,K) and Dj = (Dj,1, \u2026, Dj,K). The first variable Ij encodes a structure of simultaneous contacts of a given enhancer j with its surrounding K putative target genes. It has K binary entries Ij,k indicating whether (j, k) forms an interacting or non-interacting pair.
The variable Dj is a K \u00d7 N-dimensional matrix of observed attributes, with each row consisting of N values of pair-wise comparisons between the time series of an enhancer j and a gene k, together with their genomic separation. The first set of comparisons relies on the Pearson correlation and involves calculating its value cj,k,n for each pair (j, k), i.e., its time series, and for each dataset n \u2208 N, where N is the number of time course ChIP-seq datasets. Additionally, the data vector also contains the distance dj,k calculated between the genomic coordinates of the canonical TSS of gene k and the centre of enhancer j. Our model is defined in terms of two distributions: a prior P(Ij) over contact structures and a likelihood P(Dj|Ij) of the data Dj under a given structure Ij. Due to its regulatory role, an enhancer is unlikely to regulate a high number of genes, thus we can expect that the true P(Ij), which in the Bayesian treatment is a prior distribution over the structures, would be sparse. Moreover, we could expect that Dj,k and Dj,k\u2032 of any two interacting pairs k, k\u2032 would be interlinked, as correlations between gene-enhancer pairs are not independent variables. These dependencies would be reflected in a true form of the likelihood P(Dj|Ij). Lastly, we could also expect that the N + 1 attributes, i.e., the correlations cj,k,n and the distance dj,k of a pair (j, k) in the vector Dj,k, would also be correlated. The joint distribution of the model can be written as P(Ij, Dj) = P(Ij)P(Dj|Ij). The modelling of all these dependencies, however, is difficult given the relative sparsity of our training data. We therefore restrict the form of the joint distribution and construct an approximate joint probability of enhancer-gene contacts. Pairwise correlations provide a valid likelihood if we restrict our model to consider one gene per enhancer. The likelihood can then be factorised and written in the form P(Dj|Ij) = \u220fk P(Dj,k|Ij,k).
Hence the distribution of each Dj,k is conditionally independent of the other allocations and conditional only on the indicator variable Ij,k. We assume further that an enhancer j can interact with only one gene k, and we restrict the event space of P(Ij) to this subspace. P(Ij) follows a multivariate Bernoulli distribution, and thus the restriction is equivalent to setting, a priori, the probabilities of all structures Ij with a non-singular number of contacts to zero. Here dj,k is the distance from the centre of enhancer j to the TSS of gene k, whereas cj,k,n is the correlation between the time series of the nth time course dataset at enhancer j and gene k. Assuming that the attributes are conditionally independent, the likelihood component factorises over the attributes; combined with the K class indicators, this makes the algorithm a special case of the Naive Bayes (NB) model. Given the observed data Dj, i.e., the values of the data-specific correlations and distances for each pair and all complementary pairs, the posterior probabilities can be used to infer the most likely target of an enhancer j out of the K genes. We used ChIA-PET data obtained with ER-\u03b1 and Pol II antibodies from ENCODE/GIS-Ruan (GSM970209 and GSM970212). The overall design and processing of the datasets is described under GEO accession number GSE39495. The sources contain the high-confidence binding sites and protein-mediated chromatin interactions, with three and four replicates for ChIA-PET with antibodies for ER-\u03b1 and Pol II, respectively. We overlapped the distal enhancers and promoter-extended genes with the combined set of ChIA-PET predicted links using both the ER-\u03b1 and Pol II GIS-Ruan datasets; this revealed a total of 2,733 enhancer-promoter links and shows that 2,087 of our distal enhancers interact with at least one promoter. We estimated the class-conditional feature distributions P using kernel density estimation (KDE) with a Gaussian kernel.
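The one-target-per-enhancer Naive Bayes scoring described above can be sketched as follows. This is a simplified illustration under our own naming: the Gaussian class-conditional densities and all parameter values stand in for the KDE-fitted densities of the paper.

```python
import math

def gauss(x, mu, sd):
    """Gaussian density, used here as a stand-in class-conditional density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior(features, pos, neg):
    """features[k]: attribute values (e.g. correlations, log-distance) for
    candidate gene k; pos[n], neg[n]: densities of attribute n under the
    interacting / non-interacting classes. Returns the normalised posterior
    over the K candidate target genes, assuming one target per enhancer."""
    scores = []
    for feats in features:
        # likelihood ratio: product over conditionally independent attributes
        lr = 1.0
        for x, fp, fn in zip(feats, pos, neg):
            lr *= fp(x) / fn(x)
        scores.append(lr)
    z = sum(scores)
    return [s / z for s in scores]
```

With a single correlation feature whose interacting-class density is centred at 0.6 and non-interacting density at 0.0, a candidate with correlation 0.7 dominates one with correlation 0.1; the argmax of the posterior is the MAP target gene.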
To ensure that the bandwidths of the positive distributions are biologically meaningful and robust, we used cross-validation. As part of the approach, we sequentially removed all features of each chromosome from their total set across all chromosomes and each time calculated the log-likelihood of the KDE for the reduced set of features. We then used the value of the bandwidth with the highest log-likelihood over the left-out data. In contrast, due to the large number of negative examples and the computational cost associated with KDE, employing the same approach for negatives was infeasible. Their size, however, also entails less need for optimised fitting, and thus to select the bandwidth we resorted to Scott\u2019s rule. For Pol II we used the average correlation across the two replicates. For the distance feature we used the log-transformed distance. We trained the classifier on the odd chromosomes and estimated the training error. Similarly, we tested the method on the even chromosomes and obtained the test error. Since the test data is not used to build the classifier, its predictions on the test data can be considered unbiased. We measured the performance in two ways. Firstly, we evaluated and plotted precision against True Positive Rates (TPR) of 10%, 20%, and 30% for various combinations of features. Secondly, we used an alternative MAP measure. Under our model each enhancer possesses a maximum a posteriori (MAP) gene, which is our best guess of the enhancer\u2019s target. The MAP measure is the percentage of times the MAP-inferred target gene is confirmed by the positive set of interactions in the ChIA-PET data (genomic coordinates were remapped where necessary using NCBI Remap: http://www.ncbi.nlm.nih.gov/genome/tools/remap). We stratified our predicted interactions at the 10%, 20%, and 30% thresholds into those that lie within domains and those that cross domain boundaries. Each TPR threshold maps to a subset of negative and positive links, and therefore each subset was partitioned into inter- and intra-domain interactions.
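The two bandwidth choices described above can be sketched in a simplified 1-D form (our own hypothetical code, not the authors'): Scott's rule for the negatives, and a held-out-group log-likelihood, mirroring the leave-one-chromosome-out cross-validation, for the positives.

```python
import math

def scott_bandwidth(xs):
    """Scott's rule-of-thumb bandwidth for 1-D data."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return sd * n ** (-1.0 / 5.0)

def kde_logpdf(x, sample, h):
    """Log-density of x under a Gaussian KDE with bandwidth h."""
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
    return math.log(s / (len(sample) * h * math.sqrt(2 * math.pi)))

def cv_loglik(groups, h):
    """Hold out one group (e.g. one chromosome) at a time and sum the
    log-likelihood of the held-out points under a KDE fit on the rest."""
    ll = 0.0
    for i, held in enumerate(groups):
        train = [x for j, g in enumerate(groups) if j != i for x in g]
        ll += sum(kde_logpdf(x, train, h) for x in held)
    return ll
```

The selected bandwidth would be the grid value maximising `cv_loglik`; a bandwidth far too wide for the data scores a much lower held-out log-likelihood.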
We then tested precision for each of the subsets. For details of TAD preparation refer to the Supplementary Material. We used publicly available GRO-Seq data to detect whether Pol II molecules are engaged in transcription at the start of the experiment. Those experiments were performed with the same cell line and stimulation as ours and were used to determine the early transcriptional response of genes following E2 treatment. Using these data and the regulation probability scores defined above, we identified differentially expressed genes at q-values of less than 0.05, 0.01, and 0.001. For each q-value, we combined the DE genes from each of the time points into a single list. The ER-\u03b1 TF associates with numerous enhancers to regulate transcription of target genes. ER-\u03b1, encoded by the ESR1 gene, is a particularly well studied example of a nuclear receptor due to its role in breast cancer development. Its genome-wide binding pattern under stimulation with estrogen has been established through ChIP-seq experiments. Here, ER-\u03b1, Pol II and two histone marks (H3K4me3 and H2AZ) associated with transcriptional competence were measured via ChIP-seq at eight consecutive time points after exposure of cells in estrogen-free media to estradiol. ChIA-PET data are also available in this system and were used to evaluate our method\u2019s performance. To locate binding events formed after stimulation with estradiol, we determined a set of genomic loci associated with ER-\u03b1 binding, of which 26,585 were distant from genes. ER-\u03b1-bound enhancers are known to form links with promoter-extended genes. Overlapping these regions with interactions derived from two public ChIA-PET datasets that used the same ER-\u03b1 and Pol II antibodies revealed a total of 2,733 enhancer-promoter links. These interactions were used as a positive set for the purpose of developing our classifier. Missing interactions involving the same enhancers and other promoters in the same chromosome were used as the negative set.
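Evaluating precision at fixed TPR cut-offs, as in the Precision-TPR curves described earlier, can be sketched like this (a generic illustration under our own naming, not the authors' code):

```python
def precision_at_tpr(scores, labels, tpr):
    """scores: posterior scores per candidate link;
    labels: 1 if the link is confirmed (e.g. by ChIA-PET), else 0.
    Walk down the ranking until the requested fraction of positives
    is recovered, then report precision at that depth."""
    ranked = sorted(zip(scores, labels), reverse=True)
    need = tpr * sum(labels)          # positives required to reach this TPR
    tp = fp = 0
    for _, lab in ranked:
        tp += lab
        fp += 1 - lab
        if tp >= need:
            break
    return tp / (tp + fp)
```

For instance, with five ranked links of which three are confirmed, the precision at TPR = 2/3 is taken at the depth where two of the three positives have been recovered.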
When training and testing the classifier, we did not include enhancers that did not have any interactions according to the ChIA-PET data. These enhancers are most likely not detected by the ChIA-PET method due to its limited sensitivity, and their inclusion would introduce many false negatives into our training and testing data. However, we apply the classifier to all enhancers when making target gene predictions. Next, we calculated the number of mapped reads for each of our ChIP-seq datasets over promoter-extended gene bodies and over our consensus ER-\u03b1 binding sites to create time series data for genes and enhancers. We clustered the ER-\u03b1 and Pol II data to help visualise the occupancy dynamics at enhancers and genes. Diverse ER-\u03b1 profiles were detected, suggesting that occupancy is not solely determined by the nuclear concentration of ER-\u03b1. Most enhancer-promoter interactions are thought to occur within the same Topologically Associating Domain (TAD), and we were interested in whether our method can discover interactions across TAD boundaries. In order to assess the performance of the model on the discovery of intra-domain interactions and those involving elements from two different domains, we stratified our predicted interactions into these two groups and recomputed the precision-recall and MAP performance. The ER-\u03b1 and distance features provide the greatest contribution to performance. The Pol-II feature is also informative but does not add much to performance when combined with the ER-\u03b1 data. Interestingly, within domains the \u201cdata-alone\u201d model possesses much higher predictive power than in the chromosome-wide model. By excluding the possibility of long-range interactions beyond domain boundaries, the number of false positives is greatly reduced.
Nevertheless, we see that incorporation of the distance feature still improves classification performance within domains. The majority (79%) of enhancer-promoter interactions lie within domains. Peak calling with MACS\u2019 \u03bblocal parameter produces results similar to our default parametrisation, where we switched that parameter off. Our selection of data features involved some arbitrary choices, and therefore we considered robustness to varying some of the parameters used. We first investigated alternative promoter region sizes for promoter-gene regions, their effect on the test and training sets, and the effect on the performance of the model; the comparison between the distributions of features is shown in the supplementary figures. We have used eight timepoints for this study, but most epigenomic time course datasets from a single stimulation have fewer available timepoints. We therefore also assessed the performance for reduced datasets with six and four timepoints. Finally, we used our method to provide a highly confident (FDR of 0.25) list of directly ER-regulated target genes in this system. Proximity of ER-\u03b1 binding events alone is much less predictive of early differential expression. We have developed a Bayesian method which is capable of integrating genomic distance with the correlation of ChIP-seq time series in order to predict physical interactions between enhancers and promoters. We evaluated the performance of our method against ChIA-PET predicted links and using different combinations of features. Using complementary GRO-seq data from the same cell line and stimulation, we show that our model can accurately predict distally regulated, differentially expressed genes under stimulation with estrogen. Experimental approaches to identifying enhancer-promoter interactions genome-wide are increasingly popular but have some limitations.
ChIA-PET datasets typically identify only a relatively small number of enhancer-promoter interactions with confidence, while Hi-C data typically have too low a genomic resolution to resolve specific enhancer interactions. Even the more focussed Capture-HiC protocols are limited to restriction fragments of several kb, and Hi-C data are generally associated with complex noise characteristics requiring sophisticated corrections for background. Our model can therefore serve as a useful complementary approach to these techniques and offers insight into stimulation-dependent and cell-type specific transcriptional regulation. In this work we have focussed on intergenic enhancers, because our data contain Pol-II ChIP-Seq data, which has transcriptional signal on introns and is therefore not ideally suited for identifying intronic enhancers. However, the computational method could potentially work for intronic enhancers with different ChIP-Seq data combinations. For example, with access to enhancer-enriched epigenomic marks such as H3K27ac or H3K4me1, the data may be suitable for identifying links involving intronic enhancers.10.7717/peerj.3742/supp-1Figure S1: Cartoon shows the process of merging individual MACS-called peaks with the objective of finding approximate locations of time-persistent ER-\u03b1 bindings. In the process, MACS-detected time-varying peaks from the 0, 5, \u2026, 320 min time points which co-occur at least twice across time points are merged by a union operation to produce the approximate consensus locations of a single binding. Single occurrences of peaks are discarded.10.7717/peerj.3742/supp-2Figure S2
Figure shows the clustering of the joint time course of Pol II and ER-\u03b1 at enhancers with Affinity Propagation. The clustering involves only the time series which individually possess a sum of at least 200 tags across all time points.10.7717/peerj.3742/supp-3Figure S3: The graphs (A\u2013D) show positive (green) and negative (yellow) distributions of correlations between time series of 300 bp-upstream-extended gene regions and enhancer bodies for ER-\u03b1, PolII, H2AZ and H3K4me3, collected across all 23 chromosomes. Panel (E) shows the distribution of genomic distances between the centres of distal enhancers and the 300 bp-upstream-shifted TSS of genes. The set of positive and negative pairs was constructed using 300 bp-upstream-extended genes and distal enhancers.10.7717/peerj.3742/supp-4Figure S4: The graphs (A\u2013D) show positive (green) and negative (yellow) distributions of correlations between time series of 300 bp-upstream-extended gene regions and enhancer bodies for ER-\u03b1, PolII, H2AZ and H3K4me3, collected across all odd chromosomes. Panel (E) shows the distribution of genomic distances between the centres of distal enhancers and the 1,500 bp-upstream-shifted TSS of genes. The set of positive and negative pairs was constructed using 1,500 bp-upstream-extended genes and distal enhancers.10.7717/peerj.3742/supp-5Figure S5: Figure shows the comparison of performance of the NB model on odd chromosomes (training data), measured by Precision-TPR and MAP scores. The Precision-TPR curves show the accuracy of the predictions with the highest 10%, 20%, 30% scores, i.e., posterior probabilities. The second and third rows stratify predictions at each of the thresholds into those which take place within domains and those involving inter-domain contacts.
The set of positive and negative pairs for the first model was constructed using 300 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the two models were estimated using signals (time series) aggregated over 300 bp-upstream-extended genes and distal enhancer bodies. The separation-based feature was estimated from the 300 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers.10.7717/peerj.3742/supp-6Figure S6: Figure shows the comparison of performance of the NB model on even chromosomes (test data), measured by Precision-TPR and MAP scores. The Precision-TPR curves show the accuracy of the predictions with the highest 10%, 20%, 30% scores, i.e., posterior probabilities. The second and third rows stratify predictions at each of the thresholds into those which take place within domains and those involving inter-domain contacts. The set of positive and negative pairs for the first model was constructed using 300 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the two models were estimated using signals (time series) aggregated over 300 bp-upstream-extended genes and distal enhancer bodies. The separation-based feature was estimated from the 300 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers.10.7717/peerj.3742/supp-7Figure S7: Figure shows the comparison of performance of the NB model on odd chromosomes (training data), measured by Precision-TPR and MAP scores. The Precision-TPR curves show the accuracy of the predictions with the highest 10%, 20%, 30% scores, i.e., posterior probabilities. The second and third rows stratify predictions at each of the thresholds into those which take place within domains and those involving inter-domain contacts. The set of positive and negative pairs for the first model was constructed using 1,500 bp-upstream-extended genes and distal enhancers.
The correlation-based attributes of the two models were estimated using signals (time series) aggregated over 300 bp-upstream-extended genes and distal enhancer bodies. The separation-based feature was estimated from the 1,500 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers.10.7717/peerj.3742/supp-8Figure S8: Figure shows the comparison of performance of the NB model on even chromosomes (test data), measured by Precision-TPR and MAP scores. The Precision-TPR curves show the accuracy of the predictions with the highest 10%, 20%, 30% scores, i.e., posterior probabilities. The second and third rows stratify predictions at each of the thresholds into those which take place within domains and those involving inter-domain contacts. The set of positive and negative pairs for the first model was constructed using 1,500 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the two models were estimated using signals (time series) aggregated over 300 bp-upstream-extended genes and distal enhancer bodies. The separation-based feature was estimated from the 1,500 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers.10.7717/peerj.3742/supp-9Figure S9: The model was trained on the stringent, time-persistent merged MACS-called peaks (ER-\u03b1 bindings) from the scan with the p-value of 1e\u221207 and the local control switched off, in which case the search is done with \u03bbBG. In the second column we see the performance under the alternative peak calling with the p-value of 1e\u221205 (MACS\u2019 default), no control and the local control flag on. The set of positive and negative pairs for the first model was constructed using 300 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the model were estimated using pairs of 300 bp-upstream-extended genes and enhancers.
The separation-based feature was estimated from the 300 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers. The first column of the figure shows the performance of the NB model on all odd chromosomes. The model was trained on the stringent, time-persistent merged MACS-called peaks from the scan with the p-value of 1e\u221207 and the local control switched off, in which case the search is done with \u03bbBG. In the second column we see the performance under the alternative peak calling with the p-value of 1e\u221205 (MACS\u2019 default), no control and the local control flag on. The set of positive and negative pairs for the first model was constructed using 300 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the model were estimated using pairs of 300 bp-upstream-extended genes and enhancers. The separation-based feature was estimated from the 300 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers. The first column of the figure shows the performance of the NB model on all even chromosomes. The model was trained on the stringent, time-persistent merged MACS-called peaks for ER-\u03b1, PolII, H2AZ and H3K4me3 collected across all odd chromosomes. Panel (e) shows the distribution of genomic distances between the centres of distal enhancers and the 300 bp-upstream-shifted TSS of genes. The set of positive and negative pairs was constructed using 300 bp-upstream-extended genes and distal enhancers. The graphs show positive (green) and negative (yellow) distributions of correlations between time series of 300 bp-upstream-extended gene regions and enhancer bodies, with correlation-based features calculated using the first (a) 8, (b) 6, (c) 4 time points of our logarithmically spaced time course data. The performance was measured using Precision-TPR and MAP scores. The Precision-TPR curves show the accuracy of the predictions with the highest 10%, 20%, 30% scores, i.e., posterior probabilities.
The set of positive and negative pairs for the first model was constructed using 300 bp-upstream-extended genes and distal enhancers. The correlation-based attributes of the two models were estimated using signals (time series) aggregated over 300 bp-upstream-extended genes and distal enhancer bodies. The separation-based feature was estimated from the 300 bp-upstream-shifted TSS to the centres of the ER-\u03b1 enhancers.10.7717/peerj.3742/supp-13Table S1"}
+{"text": "Survival analysis methods have been widely applied in different areas of health and medicine, spanning varying events of interest and target diseases. They can be utilized to relate the survival time of individuals to factors of interest, rendering them useful in searching for biomarkers in diseases such as cancer. However, the progression of some diseases can appear very unpredictable because conventional approaches fail to consider multiple-marker interactions. An exponential increase in the number of candidate markers requires a large correction factor in the multiple-testing correction, which can hide true significance. We address the issue of testing marker combinations that affect survival by adapting the recently developed Limitless Arity Multiple-testing Procedure (LAMP), a p-value correction technique for statistical tests of marker combinations. LAMP cannot handle survival data statistics, and hence we extended LAMP for the log-rank test, making it more appropriate for clinical data, with a newly introduced theoretical lower bound of the p-value. We applied the proposed method to gene combination detection for cancer and obtained gene interactions with statistically significant log-rank p-values. Gene combinations with orders of up to 32 genes were detected by our algorithm, and the effects of some genes in these combinations are also supported by existing literature. The novel approach for detecting prognostic markers presented here can identify statistically significant markers with no limitation on the order of interaction. Furthermore, it can be applied to different types of genomic data, provided that binarization is possible. Survival analysis is generally used in studies whose primary interest is the time of occurrence of an event.
For instance, one may be interested in the time from first treatment of leukemia patients to time of remission, the time from first heart attack until death, or the time from being cancer-free to time of recurrence. Unlike ordinary regression models, survival analysis methods can incorporate censorship and time information, which are usually present in clinical data. They can also be used to estimate survival, or the probability of surviving up to a certain time, and hazard, or the instantaneous rate of occurrence of the event. In addition, they can be utilized to describe the effects of important factors on the survival of the individual, such as age, gender, or treatment. In a similar manner, we can take advantage of these methods to help identify significant biomarkers for survival. Prognostic biomarkers for diseases like cancer are commonly identified using genomic data such as genome-wide expression profiles. Under such single-marker analyses, however, a combination marker X may not be discovered as a candidate marker, all the while X shows a noteworthy effect on the survival of individuals. If only single markers or marker pairs were considered for statistical assessment, it would be computationally feasible to exhaustively test each candidate. But given the size of standard genomic data and that the size of the combination is arbitrary, the number of tests can be exceedingly large, leading conventional methods for identifying prognostic markers to perform statistical assessment on individual genes or individual SNPs only. This leaves several prospective markers, such as those of high-order combinatorial interactions, untested for significant effects. Other approaches try to perform a screening step to narrow down candidates involved in combinations. For example, a subset of the original set of markers may be retained based on their individual statistical significance after performing some initial evaluation. 
Then, higher order candidate combination markers are generated by considering interactions of the markers retained in this subset and assessed for significant associations. Wang et al. adopted this strategy by restricting the set to the top significantly differentially expressed genes first, before generating and selecting candidate combinations using a robust likelihood-based procedure. To overcome the dilemma occurring in statistical assessment of multiple hypotheses, the Limitless Arity Multiple-testing Procedure (LAMP) was proposed by Terada et al. for finding significant motif combinations that regulate gene expression. LAMP keeps the family-wise error rate (FWER), or the probability of making at least one false discovery, below a significance level \u03b1, usually 0.05 or 0.01 in value, by excluding infrequent combinations that will never be significant, and hence do not contribute to the FWER. However, LAMP cannot directly handle p-values computed using the log-rank test. In the work of duVerle et al., candidate combinations are first screened with a separate score and later tested using the log-rank p-value, so the statistical significance of their detected combinations is not necessarily guaranteed. On the other hand, our approach directly exploits the log-rank p-value to identify meaningful individual markers and multiple-marker interactions. By modifying LAMP, the procedure becomes more suitable for survival data, which generally involves censored information, while enabling us to identify high order combinations without dealing with issues raised by test multiplicity. In this research, we propose an extension of LAMP for the log-rank test to detect prognostic gene combinations. The log-rank test is commonly used for differentiating chances of survival between groups, and can also be interpreted as a time-stratified Cochran-Mantel-Haenszel (CMH) test. Similar to duVerle et al., our approach targets combinations of markers, but assesses them directly with the log-rank statistic. We applied our algorithm to datasets of mRNA expression profiles from The Cancer Genome Atlas (TCGA). 
Cancer is a complex disease whose course and prognosis are highly variable, and some cancer types cause more deaths than others, such as lung, liver, stomach and breast cancers. Therefore, treatment options differ for each individual, and it has been essential to establish the prognosis of patients. Aside from early detection before the spread of the disease being crucial, prognostic and predictive markers have also become highly relevant in personalizing medical care and improving the quality of treatment. Our method identified combinatorial interactions with orders of up to 32 genes, and existing studies can confirm the effects of some of these genes included in these combinations. Additionally, the method presented here is not restricted to gene expression data, but can also be applied to other types of genomic data such as copy number variations or single-nucleotide polymorphisms, as long as binarization of values can be performed. This makes our strategy more flexible than other data-defined methods for marker identification. Formally, suppose we observe a binary marker gi for each sample s\u2113, i.e. gi(s\u2113)\u2208{0,1}, i=1,2,\u2026,M, \u2113=1,2,\u2026,N. For example, gi may represent a single gene: highly expressed samples are denoted by 1 and not highly expressed samples are denoted by 0. When gi is assumed to be a SNP, 1 and 0 mean minor homozygous SNP and non-minor homozygous SNP, respectively. In addition, let y(s\u2113)=1 if the event of interest occurred for the individual, which we refer to as a failure or failed sample, and y(s\u2113)=0 if the information on the sample is censored; let \u03c4\u2113 denote the observed time of sample s\u2113. Let X be a pattern of m markers, and call tj a failure time if there exists a sample s\u2113 such that \u03c4\u2113=tj and y(s\u2113)=1. Then for any failure time tj, we can subdivide the N individuals into two groups: P1={s\u2113|gi(s\u2113)=1 \u2200gi\u2208X and \u03c4\u2113\u2265tj}, or the set of samples containing pattern X who survived to at least until tj, and P0, the set of samples not containing pattern X who also survived to at least until tj. 
Our goal is to detect combinations X such that the survival times of individuals from the two groups P1 and P0 are statistically significantly different, while taking censored information into account. Thus, we can say that X is associated with survival, making it a promising candidate marker. In this study, we will focus on the following problem setting. Suppose we have a survival dataset composed of a set of M markers. A statistical test for survival analysis, such as the log-rank test, is useful to evaluate the statistical significance of a combination like X. But to use it to exhaustively investigate the effects of combination markers, statistical assessment must be performed for every possible combination, i.e., 2^M\u22121 statistical tests are performed. Such an approach does not only cause computational complexity problems, but also yields a serious number of false discoveries. To overcome these problems, we present an algorithm for finding combinatorial interactions significantly associated with the survival of individuals while controlling FWER and correcting for multiple hypotheses. To achieve this goal, the proposed method integrates the statistical evaluation capability of the log-rank test with the multiple-testing correction power of LAMP. The log-rank test is used to determine the statistical difference in the time-to-event, at any given time, between two populations. For example, one might be interested in the time before death between treatment and placebo arms for a complex disease in a clinical trial. The test assumes that occurrence of the event is not dependent on censoring, and that event probabilities are unaffected by the start times of the individuals in the study. To compute the p-value with the log-rank test, a contingency table of pattern membership versus failure is formed at each failure time tj, where t1<\u22ef are the distinct failure times, nj is the number of failures at tj and Yj is the number of samples at risk. Fixing \u03bbj=\u03bb for all j, we get the bounding function f(\u03bb); moreover, the minimum p-value is achieved when the table is most biased. 
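The two-group comparison described above can be sketched as follows; this is a minimal numpy/scipy re-implementation of the standard two-group log-rank test (function and variable names are illustrative, not taken from the authors' code):

```python
# Minimal two-group log-rank test for a candidate pattern X, sketching
# the statistic described above. in_group1 marks the samples carrying X.
import numpy as np
from scipy.stats import chi2

def logrank_test(time, event, in_group1):
    """time: observed times; event: 1 = failure, 0 = censored."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    g1 = np.asarray(in_group1, dtype=bool)
    obs_minus_exp = 0.0  # accumulated O1 - E1 over the failure times t_j
    var = 0.0            # accumulated hypergeometric variance
    for t in np.unique(time[event == 1]):
        at_risk = time >= t                        # Y_j samples at risk
        Y = at_risk.sum()
        Y1 = (at_risk & g1).sum()                  # at risk and carrying X
        d = ((time == t) & (event == 1)).sum()     # n_j failures at t_j
        d1 = ((time == t) & (event == 1) & g1).sum()
        obs_minus_exp += d1 - d * Y1 / Y
        if Y > 1:
            var += d * (Y1 / Y) * (1 - Y1 / Y) * (Y - d) / (Y - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)               # chi-square, 1 d.o.f.
```

With two perfectly balanced groups the statistic is 0 and the p-value is 1; the better separated the two survival profiles, the smaller the p-value.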
To show that f is monotonically decreasing, observe that when \u03bb\u2264nj: since nj\u2264Yj, we have (nj\u2212\u03bb)/(Yj\u2212\u03bb)\u22641. On the other hand, when \u03bb>nj, fj(\u03bb) is independent of \u03bb. Therefore, fj(\u03bb) decreases with respect to \u03bb, and the conclusion follows. \u25a1 This establishes the link between \u03bb and the computation of the minimum p-value bound. To find statistically significant interactions using the log-rank test, we implemented the following algorithm, tailored from the original LAMP algorithm. Briefly, \u03bb is initially set to the maximum frequency over all patterns X in the data; if this value is larger than the minimum number of samples at risk Yj over all failure times, \u03bb is set to the latter in line 2. Lines 4\u20135 call the LCM algorithm while decreasing the value of \u03bb by 1 in each iteration, to find all patterns X whose number of occurrences is at least \u03bb, until the value f(\u03bb\u22121)\u2264\u03b1/k, with k equal to the number of such patterns X. The value of f is computed using the bound defined above: the minimum p-value for the corresponding \u03bb is computed at each failure time, and the product across all failure times is obtained. When the condition in line 5 is not met, or if the current \u03bb is already equal to 2, the algorithm finally outputs the optimal value of \u03bb, an exhaustive list of all testable patterns corresponding to this value, and the total number of these patterns, k. For the expression data, z-scores of median-centered per-gene values were provided, and we used these to binarize the expression values such that z-scores greater than 2 are classified as highly expressed. The average number of highly expressed samples per gene was around 21 samples for both data. To finish the computations within three days, we opted to divide the data into sets of 250 genes and ran the algorithm per set. This yielded 59 sets for the TCGA-BRCA data and 79 sets for the TCGA-OV data. 
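A minimal sketch of the preprocessing just described (binarization at z > 2 and division of the genes into sets of 250); all names are illustrative:

```python
# Sketch of the preprocessing described above: binarize median-centred
# z-scores (z > 2 means "highly expressed") and split the genes into
# sets of at most 250 so each set can be mined separately.
import numpy as np

def binarize(zscores, threshold=2.0):
    """zscores: (samples x genes) matrix; returns a 0/1 marker matrix."""
    return (np.asarray(zscores) > threshold).astype(int)

def split_gene_sets(n_genes, set_size=250):
    """Column index ranges covering all genes, at most set_size each."""
    return [range(i, min(i + set_size, n_genes))
            for i in range(0, n_genes, set_size)]

z = np.array([[2.5, -0.1, 3.0],
              [0.0, 2.1, -1.0]])
print(binarize(z))                  # [[1 0 1]
                                    #  [0 1 0]]
print(len(split_gene_sets(1000)))   # 4 sets of 250 genes
```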
We aggregated the results for all experiments and used the total correction factor over all analyses as the significance threshold correction factor. We filtered the significant gene interactions detected by our algorithm by selecting those whose raw p-value multiplied by the total correction factor is still less than the threshold \u03b1, set here to 0.05. We performed all our experiments on a machine with two Intel Xeon E5-2650 v3 (2.30GHz) processors and 128GB of memory. To test our approach, the algorithm was applied to two publicly available datasets with clinical data from The Cancer Genome Atlas (TCGA) Database: samples from the breast invasive carcinoma (TCGA-BRCA) data and the ovarian cancer (TCGA-OV) data. We obtained a total of 9634 statistically significant combinations from TCGA-BRCA, and the average correction factor per analysis is 9428. We used the total correction factor k=556284 to retain statistically significant combinations across all analyses, reducing the number of significant interactions to 5836, with the largest gene combination having size 32. Because some unexpected bias may be present when detected significant markers only affect one or very few samples in the whole data, we sorted the combinations in decreasing number of occurrences of the marker combination. The combinations yielded by our analysis involve genes that have been previously implicated in disease incidence or associated with disease prognosis. These include the PIK3CA gene in the first and fourth combinations (first combination of size 8: raw p=1.9545e\u221209, adj. p=0.00109; second combination of size 8: raw p=3.3648e\u221208, adj. p=0.01871), one of the three genes whose occurrence of somatic mutations is greater than 10% among all breast cancers. The frequently appearing TIMM17A gene is a known breast cancer marker. Some genes are not individually significant at \u03b1=0.05 and k=556284, but their combinations yield statistically significant results, e.g. 
C1orf55 and TIMM17A. To illustrate the effects of interactions versus single genes on patient survival, Kaplan-Meier plots of the first 3 gene combinations were examined; each combination shows a markedly different cumulative hazard, e.g. around t=50 months, compared to the individual gene cumulative hazards. For the TCGA-OV data, we obtained a total of 5193 candidate combinations from the 79 sets, with an average correction factor of 12962 per set and a total correction factor of k=920351. After correction of the raw p-values, 2849 combinations with size of at most 28 are retained, of which 1893 combinations are present in more than one sample. Similar to the TCGA-BRCA results, genes in the interactions potentially affecting ovarian cancer survival include known oncogenes and novel candidates; an example is high expression of GGCX, the top gene in the TCGA-OV results. For validation, z-scores were computed using the normalized values, and two types of binarization were applied thereafter. The first is similar to our experiment settings, focusing on high expression of the genes of interest included in the combinations: z-scores greater than 2 were set to 1, otherwise 0. The second considers the case when the genes of interest have low expression, for which we set entries with z-scores less than \u22120.5 to 1, otherwise 0. Probes were mapped to genes using their Entrez Gene IDs, with some genes mapped to multiple probes in the data. Therefore, we checked all possible probe combinations of each respective gene combination, provided all genes in the combination have a corresponding probe in the data; otherwise, such combinations cannot be assessed. 
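The Kaplan-Meier curves discussed above follow the standard product-limit estimator, S(t) = product over failure times tj <= t of (1 - dj/Yj); a minimal numpy sketch (not the authors' plotting code):

```python
# Product-limit (Kaplan-Meier) estimate of the survival function, as
# used for the survival curves discussed above; illustrative sketch.
import numpy as np

def kaplan_meier(time, event):
    """Returns distinct failure times and S(t) just after each of them."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    fail_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in fail_times:
        Y = (time >= t).sum()                    # at risk just before t
        d = ((time == t) & (event == 1)).sum()   # failures at t
        s *= 1.0 - d / Y                         # S(t) = prod (1 - d_j/Y_j)
        surv.append(s)
    return fail_times, np.array(surv)

t, s = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 0, 1])
# S drops to 0.8 after t=1, to 0.8*(2/3) after t=3, and to 0 after t=5
```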
As further support for the combinations detected by the proposed algorithm, we used separate breast and ovarian cancer data sets to check whether these combinations are also statistically significant survival markers in independent data. Gene expression and clinical data for breast and ovarian cancers were obtained from the National Center for Biotechnology Information Gene Expression Omnibus with accession numbers GSE2034, GSE25066, GSE3494, and GSE13876. Survival with various events of interest was analyzed using the validation data, namely: relapse, with time to relapse or last follow-up (GSE2034); distant recurrence-free survival, with time from operation to the first distant recurrence (GSE25066); disease-specific survival, with DSS time (GSE3494); overall survival, with time from primary surgery (GSE13876); and progression-free survival (PFS), with survival time during which the disease does not get worse during or post-surgery. The summary of proportions of statistically significant combinations found in the validation data sets, i.e., those whose raw p-value is less than 0.05, is given in the corresponding table. An advantage of the algorithm presented here over other methods is its flexibility in the type of data used for analysis. It can easily deal with other genomic data such as SNPs or copy number variations, provided values can be binarized. Moreover, the scope of application can be expanded to any type of disease and event of interest. The method can also be extended to continuous values, as techniques for significant pattern mining dealing with real-valued data have also been proposed. While the proposed method can detect high order interactions without any theoretical limitation on the order of interaction, it is not without cost. One caveat of the algorithm is the calling of LCM multiple times, making it very time-consuming, especially for large-scale data; hence the data division performed in the analyses. 
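The significance filter used throughout the analyses above (keep a combination when its raw p-value times the total correction factor k stays below alpha) is straightforward to state in code; the numbers are the ones reported for the TCGA-BRCA runs:

```python
# Filtering rule described in the text: a combination is reported when
# raw_p * k < alpha. Numbers are those reported for TCGA-BRCA.
k = 556284      # total correction factor over all TCGA-BRCA analyses
alpha = 0.05    # significance threshold

def is_significant(raw_p, k=k, alpha=alpha):
    """LAMP-style adjusted significance: raw_p * k < alpha."""
    return raw_p * k < alpha

# the reported PIK3CA combination: raw p = 1.9545e-09, adj. p ~ 0.00109
print(is_significant(1.9545e-09))   # True
print(is_significant(1e-06))        # False (adjusted p ~ 0.556)
```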
A faster version of LAMP has been proposed, invoking LCM only a limited number of times; adopting such improvements could speed up our procedure. Another shortcoming of the method is the relaxed minimum p-value bound, which returns very small p-values. This also causes the algorithm to run longer, due to the longer time it takes to terminate pruning in the LCM algorithm: the value of \u03bb decreases unnecessarily, therefore increasing the number of testable items. While the correction factor is still significantly smaller than it would have been had Bonferroni correction been used, a tighter bound is still preferred. In this study, we presented a novel approach to finding potentially relevant high order gene markers that affect disease prognosis. By utilizing existing significant pattern mining techniques, our method can find combinations of any order associated with the survival probabilities of affected and unaffected individuals, while controlling the FWER and not being computationally expensive. Applying our algorithm to existing cancer survival study data yielded interactions involving genes already associated with cancer prognosis in the existing literature, as well as genes whose roles in cancer are still unknown."}
+{"text": "The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability. Gene regulatory networks (GRNs) define the ensemble of interactions among genes that govern their expression. The elucidation of GRNs is crucial to understand the functioning and pathology of organisms, and remains one of the major challenges of systems biology. Since the advent of high-throughput technologies, computational approaches have been proposed to infer GRNs from the measurement of gene expression in various conditions, using statistical inference or machine learning techniques. While network inference methods have reached some maturity over the years, their performance on real datasets remains far from optimal and calls for the continual improvement of existing methods1. 
Measurements about genes that are exploited by these methods are typically available in two forms: static steady-state expression vectors, obtained by applying a perturbation to the system under study and measuring gene expressions once the system has reached some equilibrium point, or time series expression data, measuring the temporal evolution of gene expressions over several time points following the perturbation. Steady-state expression data are plethoric for many organisms. They however offer limited information regarding the dynamics of gene regulation, which limits the performance of network inference methods when they only exploit such data. Time series data on the other hand are intrinsically much more informative about the dynamics and should in principle make the inference more effective than steady-state data. In particular, time series data make it possible to infer causal relationships among genes, by analysing the cascade of expression changes across time. Unfortunately, collecting time series data poses several important technical and design issues that make such data very scarce6. One issue comes from the fact that expression data are currently mainly obtained using microarray or RNA-seq technologies, which both measure the gene expressions in populations of cells. Inaccurate measurements can thus occur if the cells are not synchronised at the different sampling time points. A second important issue is the choice of the sampling time points. The high cost of genomic experiments usually prevents a dense sampling over a long time period, and it may be difficult to choose the time points that will allow the capture of all the expression changes. Dealing with the scarcity of time series expression data is an important challenge for network inference methods, and this also calls for methods that can exploit jointly both steady-state and time series data.
Mostly two families of methods have been explored in the literature to solve the GRN inference problem: model-free and model-based methods. Model-free methods infer the network by directly estimating the gene dependencies from the data through more or less sophisticated statistical or machine learning-based analyses. These methods typically have good scalability, enabling reconstructions of networks of thousands of genes, and have consistently achieved state-of-the-art reconstruction performance in comparative evaluations7. On the other hand, model-based methods first define a quantitative dynamical model of the system, for example using differential equations8 or auto-regressive models9, and then infer the network by learning the parameters of this model from observed time series data. Model-based methods are rather computationally intensive and their parametric nature usually implies very stringent assumptions about the dynamics (e.g. linearity). These methods have nevertheless some appealing properties that model-free methods do not have: they have clearly defined semantics in terms of the underlying dynamical system properties, which makes them more interpretable than model-free methods. Most importantly, model-based methods can be used for simulating and predicting the dynamical system behaviour under perturbations. Our GENIE3 method6 belongs to the model-free family. It exploits variable importance scores derived from Random forests10 to identify the regulators of each target gene. The main properties of this method are its fully non-parametric nature, its good scalability, and its ease of use. GENIE3 was the best performer of the DREAM4 Multifactorial Network challenge and the DREAM5 Network Inference challenge7, and has since been shown to be competitive with several other methods in several independent studies (e.g.12). 
In our previous work, we proposed GENIE3, a model-free method that infers networks from steady-state expression data. Motivated by the good performance of GENIE3 on steady-state data, the aim of this paper is to evaluate GENIE3, and a new variant of GENIE3, when they are applied to the analysis of time series data and to the joint analysis of steady-state and time series data. The proposed variant for time series data, called dynGENIE3, is based on a semi-parametric model, in which the temporal evolution of each gene expression is described by an ordinary differential equation (ODE) and the transcription function in each ODE is learned in the form of a non-parametric Random forest model. The regulators of each target gene are then identified from the variable importance scores derived from the corresponding Random forest model. Several experiments are carried out on artificial and real datasets to assess the performance of GENIE3 and dynGENIE3. While dynGENIE3 consistently outperforms GENIE3 on the artificial data, the relative performances of the two methods become very data-dependent when they are applied to real data. In addition, our experiments show that, even though dynGENIE3 does not systematically reach the best performance in every setting, it is nevertheless very competitive with existing methods from the literature. To summarise, dynGENIE3 is a highly scalable network inference method able to exploit time series and steady-state data jointly. It consistently yields good performance on diverse artificial and real networks. On the DREAM4 networks, it is only outperformed by CSI13, a Bayesian inference method based on Gaussian processes. CSI has however the major drawback of being very computationally intensive, with respect to the number of observations and the number of candidate regulators (more details can be found in the \u201cRelated works\u201d section).
The present work supersedes the time series extension of GENIE3 that we proposed previously14, which was applied for the inference of the GRN underlying the drought response of common sunflower15. Let DTS and DSS be two expression datasets. The first dataset DTS, called the time series dataset, contains the expression levels of p genes, measured at N time points following a perturbation of the network: DTS={x(t1),x(t2),\u2026,x(tN)}, where each x(tk), k\u2009=\u20091,\u2026,N, is a vector containing the expression values of the p genes at the k-th time point. The other dataset DSS, called the steady-state dataset, contains the expression levels of the same p genes, measured in M experimental conditions once the system has reached some equilibrium point: DSS={x1,x2,\u2026,xM}, where each xk, k\u2009=\u20091,\u2026,M, is a vector containing the steady-state expression values of the p genes in the k-th condition. The goal is to exploit DTS, possibly together with DSS, in order to assign weights wi,j\u2009\u2265\u20090 to putative regulatory links from any gene i to any gene j, with the aim of assigning the largest weights to links that correspond to actual regulatory interactions. Note that in this article, we leave open the problem of automatically choosing a threshold on the weights to obtain a practical network and focus on providing a ranking of the regulatory links. The GENIE3 method6 treats the network inference problem as p feature selection problems, each feature selection problem consisting in recovering the regulators of a given gene. The method was originally designed to exploit steady-state data and makes the assumption that the expression of each gene j in a given condition is a function of the expression levels of the other genes in the same condition: xj=fj(xj\u2212)+\u03b5k, where xj\u2212 denotes the vector containing the expression levels of all the genes except gene j and \u03b5k is a random noise. 
GENIE3 further makes the assumption that the function fj only exploits the expression in xj\u2212 of the genes that are direct regulators of gene j, i.e. genes that are directly connected to gene j in the targeted network. Recovering the regulatory links pointing to gene j thus amounts to finding those genes whose expression is predictive of the expression of gene j. The GENIE3 procedure works as follows: for j\u2009=\u20091 to p: (1) generate the learning sample of input-output pairs for gene j; (2) learn fj from it and compute the variable importance scores wi,j (i\u2009\u2260\u2009j), i\u2009=\u20091,\u2026,p, for all the genes except gene j; (3) use wi,j as weight for the regulatory link i\u2009\u2192\u2009j. Note that when a set of candidate regulators (e.g. known transcription factors) is given, the weights wi,j such that gene i is not a candidate regulator are set to zero. GENIE3 can be applied to time series data in a naive way, by regarding the different time points as independent steady-state conditions. An alternative solution is to modify the procedure in order to take into account the dependence between the time points. The dynamical variant of GENIE3 (dynGENIE3) assumes that the expression level of gene j is modelled through the following ordinary differential equation (ODE): dxj(t)/dt\u2009=\u2009\u2212\u03b1jxj(t)+fj(x(t)), where the transcription rate of xj is a function fj of the expression levels of the p genes (possibly including the gene j itself) and \u03b1j is a parameter specifying the decay rate of xj. The function fj can thus be learned using a learning sample in which each input is x(tk) and the corresponding output is (xj(tk+1)\u2212xj(tk))/(tk+1\u2212tk)\u2009+\u2009\u03b1jxj(tk), for k\u2009=\u20091,\u2026,N\u22121. Note that this procedure allows the incorporation of multiple time series experiments, by learning the transcription function fj from the concatenation of the learning samples of the individual experiments18. 
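The per-target-gene loop of GENIE3 described above can be sketched with an off-the-shelf Random forest; this is a simplified illustration (assuming scikit-learn's RandomForestRegressor as the ensemble learner), not the authors' implementation:

```python
# Simplified sketch of the GENIE3 loop: one Random forest per target
# gene, candidate links ranked by variable importance. Illustrative
# only; assumes scikit-learn is available.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def genie3(expr, n_trees=100, seed=0):
    """expr: (conditions x genes) matrix; returns W with W[i, j] the
    weight of the putative regulatory link i -> j (diagonal left at 0)."""
    n, p = expr.shape
    W = np.zeros((p, p))
    for j in range(p):
        inputs = np.delete(expr, j, axis=1)   # all genes except gene j
        rf = RandomForestRegressor(n_estimators=n_trees,
                                   max_features=None,  # K = all candidate regulators
                                   random_state=seed)
        rf.fit(inputs, expr[:, j])
        idx = [i for i in range(p) if i != j]
        W[idx, j] = rf.feature_importances_   # importance of gene i for gene j
    return W

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
expr = np.hstack([x,
                  2 * x + 0.1 * rng.normal(size=(50, 1)),  # gene 1 driven by gene 0
                  rng.normal(size=(50, 1))])               # unrelated gene 2
W = genie3(expr)
# the strongest incoming link of gene 1 should come from gene 0
```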
A more detailed comparison with this method is provided in the \u201cRelated works\u201d section. It is interesting to note that when the time interval tk+1\u2009\u2212\u2009tk is constant, the equation simplifies14. At steady-state, dxj(t)/dt\u2009=\u20090 and the equation becomes fj(x)\u2009=\u2009\u03b1jxj. The learning sample LS\u2009j used to learn the function fj can thus be obtained by concatenating the two types of data: the time series pairs and the steady-state pairs. In GENIE3 and dynGENIE3, the function fj is learned in the form of an ensemble of regression trees. Regression trees split the data samples with binary tests based each on one input variable, trying to reduce as much as possible the variance of the output variable in the resulting subsets of samples. Candidate splits for numerical variables compare the input variable values with a threshold that is determined during the tree growing. Single trees are usually very much improved by ensemble methods that average the predictions of several trees. For example, in a Random forest ensemble each tree is built from a bootstrap sample of the original learning sample and at each test node K variables are selected at random among all the input variables before determining the best split10. GENIE3 and dynGENIE3 use a variable importance measure19 that computes, at each test node, the total reduction of the variance of the output variable due to the split: I\u2009=\u2009#S\u00b7Var(S)\u2009\u2212\u2009#St\u00b7Var(St)\u2009\u2212\u2009#Sf\u00b7Var(Sf), where S denotes the set of samples that reach the node, St (resp. Sf) denotes its subset for which the test is true (resp. false), Var(.) is the variance of the output variable in a subset, and # denotes the cardinality of a set of samples. Given one regression tree, the overall importance w of one variable is computed by summing the I values of all the tree nodes where this variable is used to split. As a consequence, if we trivially use the scores wi,j to order the regulatory links, this is likely to introduce a positive bias for the regulatory links directed towards the genes whose expression levels vary the most. 
To avoid this bias, we normalize each importance score wi,j by the total variance of the expression of gene j that is explained by the putative regulators (excluding self-interactions). This normalization implies that the importance scores inferred from different models predicting different gene expressions are comparable. In the ODE model, the kinetic parameter \u03b1j, j\u2009=\u20091,\u2026,p, represents the decay rate of the expression of gene j. Its value may be retrieved from the literature, since there exist many studies that experimentally measure the mRNA decay rates in different organisms. However, when such information is not available, or when dealing with simulated data, we use the same approach as in the Jump3 method20: the value of the decay rate \u03b1j is estimated directly from the observed expression xj, by assuming an exponential decay between the highest and lowest values of xj. In the remainder of this paper, the \u03b1j values estimated using this method will be called the \u201cdata-derived\u201d values. Python, MATLAB and R implementations of dynGENIE3 are available at http://www.montefiore.ulg.ac.be/\u02dchuynh-thu/dynGENIE3.html. Several network inference approaches for time series data are based on an ODE model of this type21. These methods mainly differ in the terms present in the right-hand side of the ODE, the mathematical form of the models fj, the algorithm used to train these models, and the way a network is inferred from the resulting models. dynGENIE3 adopts the same ODE formulation as in the Inferelator approach16: each ODE includes a term representing the decay of the target gene, and the functions fj take as input the expression of all the genes at some time point t. In the specific case of dynGENIE3, the functions fj are represented by ensembles of regression trees, which are trained to minimize the least-square error using the Random forest algorithm, and a network is inferred by thresholding variable importance scores derived from the Random forest models. 
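The dynGENIE3 learning-sample construction described earlier (finite-difference output plus decay term, with optional steady-state points whose output is alpha_j * x_j) can be sketched as follows; variable names are illustrative:

```python
# Building the dynGENIE3 learning sample for one target gene j from the
# ODE dx_j/dt = -alpha_j x_j + f_j(x): each input is x(t_k), the output
# is the finite-difference estimate of dx_j/dt plus alpha_j * x_j(t_k).
# Optional steady-state measurements are appended with output
# alpha_j * x_j (since dx_j/dt = 0 at equilibrium). Illustrative sketch.
import numpy as np

def learning_sample(ts, times, j, alpha_j, ss=None):
    """ts: (N x p) time series; times: length-N time points;
    ss: optional (M x p) steady-state matrix for joint analysis."""
    X = ts[:-1]                                        # inputs x(t_k)
    dt = np.diff(times)
    y = (ts[1:, j] - ts[:-1, j]) / dt + alpha_j * ts[:-1, j]
    if ss is not None:                                 # steady state: f_j(x) = alpha_j x_j
        X = np.vstack([X, ss])
        y = np.concatenate([y, alpha_j * ss[:, j]])
    return X, y

ts = np.array([[1.0, 2.0], [2.0, 3.0], [4.0, 5.0]])
X, y = learning_sample(ts, np.array([0.0, 1.0, 3.0]), j=0, alpha_j=0.5)
# y = [(2-1)/1 + 0.5*1, (4-2)/2 + 0.5*2] = [1.5, 2.0]
```

A Random forest regressor fitted on (X, y) then plays the role of the transcription function fj, exactly as in the steady-state case.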
Like for the standard GENIE3, dynGENIE3 has a reasonable computational complexity, which is at worst O(prN\u2009log\u2009N), where p is the total number of genes, r is the number of candidate regulators and N is the number of observations. Like dynGENIE3, many network inference approaches for time series data are based on an ODE model of the type above8,21. In comparison, most methods in the literature (including Inferelator) assume that the models fj are linear, and train these models by jointly maximizing the quality of the fit and minimizing some sparsity-inducing penalty. After training the linear models, a network can be obtained by analysing the weights within the models, several of which have been enforced to zero during training. In contrast to these methods, dynGENIE3 does not make any prior hypothesis about the form of the fj models. This is an advantage in terms of representational power, but it could also result in a higher variance, and therefore worse performance because of overfitting, especially when the data is scarce. A few methods also exploit non-linear/non-parametric models within a similar framework, among which Jump320, OKVAR-Boost22 and CSI13. Like dynGENIE3, Jump3 incorporates a (different) dynamical model within a non-parametric, tree-based approach. In the model used by Jump3, the functions fj represent latent variables, which necessitated the development of a new type of decision tree, while Random forests can be applied as such in dynGENIE3. One drawback of Jump3 is its high computational complexity with respect to the number N of observations, being O(N^4) in the worst-case scenario. Moreover, Jump3 cannot be used for the joint analysis of time series and steady-state data. OKVAR-Boost jointly represents the models fj for all genes using an ensemble of operator-valued kernel regression models trained using a randomized boosting algorithm. The network structure is then estimated from the resulting model by computing its Jacobian matrix. 
One of the drawbacks of this method with respect to dynGENIE3 is that it requires tuning several meta-parameters. The authors have nevertheless proposed an original approach to tune them based on a stability criterion. Finally, CSI is a Bayesian inference method that learns the fj models in the form of Gaussian processes. Since learning Gaussian processes does not embed any feature selection mechanism, network inference is performed in CSI by a combinatorial search through all the potential sets of regulators for each gene in turn, constructing a posterior probability distribution over these potential sets of regulators. As a consequence, the complexity of the method is O(pN3r\u2009d/(d\u2009\u2212\u20091)!), where d is a parameter defining the maximum number of regulators per gene8. This high complexity makes CSI unsuitable when the number of candidate regulators (r) or the number of observations (N) is too high (see the Supplementary Tables for a comparison per target gene). The CSI algorithm can be parallelised over the different target genes (like dynGENIE3), but even in that case the computational burden remains an issue when inferring large networks containing thousands of genes and hundreds of transcription factors (such as the E. coli network). GENIE3 and dynGENIE3 both provide a ranking of the regulatory links from the most confident to the least confident. To evaluate such a ranking independently of the choice of a specific threshold, we use the precision-recall (PR) curve and the area under this curve (AUPR), as suggested by the DREAM consortium25. The PR curve plots, for different thresholds on the weights of the links, the proportion of true positives among all the predictions (precision) versus the percentage of true positives that are retrieved (recall). A perfect ranking, i.e. a ranking where all the positives are located at the top of the list, yields an AUPR equal to one, while a random ranking results in an AUPR close to the proportion of positives in the true network. For the DREAM4 networks (see below for the data description), we used the \u201cAUPR score\u201d, as proposed by the challenge organizers, to aggregate the AUPRs obtained for the n different networks: for the i-th network, we estimate, from 100,000 random edge rankings, the probability that a given or larger AUPR is obtained by a random ranking of the putative edges, and the AUPR score summarizes these probabilities over the n networks (the lower the probabilities, the higher the score). A higher AUPR score thus indicates a better overall performance over the n networks. We first evaluated the performances of GENIE3 and dynGENIE3 on the simulated data of the DREAM4 In Silico Network challenge, and then applied the methods to three real expression datasets related to different organisms (summarized in a Supplementary Table). In all experiments, T\u2009=\u20091000 trees were grown and the main parameter K of the Random forest algorithm was set to the number of input candidate regulators. The goal of the DREAM4 In Silico Network challenge was to recover 5 networks of 10 genes and 5 networks of 100 genes, from both time series and steady-state data. Each time series experiment consisted in a perturbation that is applied to the network at time t\u2009=\u20090 and is removed after 10 time points, making the system return to its original state. Each time series contains noisy gene expression levels that were sampled at 21 time points, with equal time intervals of 50 time units. The steady-state data contain the gene expression levels in various experimental conditions.
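The PR-curve evaluation described earlier can be sketched with scikit-learn, assuming a toy vector of predicted edge weights and gold-standard labels:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

rng = np.random.default_rng(1)
n_edges = 200
truth = rng.random(n_edges) < 0.1           # gold-standard labels for candidate edges
scores = truth * 0.5 + rng.random(n_edges)  # toy edge weights, informative by construction

precision, recall, _ = precision_recall_curve(truth, scores)
aupr = auc(recall, precision)
# A random ranking gives an AUPR near the fraction of true edges;
# a perfect ranking gives an AUPR of 1.
```

The per-network p-values used in the DREAM4 "AUPR score" would then be obtained by repeating the AUPR computation over many random permutations of the edge ranking.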
We first compared GENIE3 and dynGENIE3 to various network inference algorithms, using only the time series data. Among the competitors are algorithms based on decision trees (Jump320), pairwise mutual information (CLR4 and its time-lagged variant tlCLR26), dynamic Bayesian networks (G1DBN27 and VBSSM28), ordinary differential equations (tlCLR/Inferelator pipeline17 and TSNI29), non-linear dynamical systems, and Granger causality (GCCA31). Since the expression data are here simulated, we cannot use known biology to set the values of the degradation rates \u03b1j in dynGENIE3, and we thus set \u03b1j to the data-derived values. We also used these parameter values for Inferelator and Jump3, which also have degradation rates in their respective models. The resulting AUPR scores are shown in Table\u00a08. Compared to Jump3, dynGENIE3 has a lower computational complexity and can thus be applied for the inference of very large networks. We then applied GENIE3 and dynGENIE3 to the joint analysis of time series and steady-state data (see, e.g., Supplementary Table\u00a0S5). Two out of the three best performing methods of the DREAM4 In Silico Network challenge make an intensive use of the steady-state expression data resulting from the single gene knockouts32, highlighting the importance of this type of data for the inference of regulatory networks. To check if our dynGENIE3 procedure could be improved by an appropriate use of the knockout data, we combined it with the MCZ method17. In the latter procedure, the weight of the edge directed from gene i to gene j is given by the median corrected z-score zi,j\u2009=\u2009|xj(KOi)\u2009\u2212\u2009medj|/\u03c3j, where xj(KOi) is the steady-state expression of gene j when gene i is deleted, medj is the median expression of gene j, and \u03c3j is the standard deviation of gene j expression. To combine MCZ with dynGENIE3, we simply take the product of the scores of the two methods. The final weight wi,j will thus have a high value if the edge i\u2009\u2192\u2009j is top-ranked by both methods.
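The MCZ combination described above can be sketched as follows, with random matrices standing in for the knockout expression data and for the dynGENIE3 weights:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
# toy steady-state knockout matrix: ko[i, j] = expression of gene j when gene i is deleted
ko = rng.random((p, p))

med = np.median(ko, axis=0)        # median expression of each gene j
sd = ko.std(axis=0)                # standard deviation of each gene j

# median-corrected z-score for each putative edge i -> j
z = np.abs(ko - med) / sd
np.fill_diagonal(z, 0.0)           # ignore self-edges

w_dyn = rng.random((p, p))         # stand-in for dynGENIE3 importance weights
np.fill_diagonal(w_dyn, 0.0)

w_final = z * w_dyn                # product: high only if top-ranked by both methods
```

The elementwise product keeps an edge highly ranked only when both the knockout evidence and the time-series model agree on it.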
The performances of the combined approach are reported in the corresponding table. dynGENIE3 has two types of parameters: the model kinetic parameters \u03b1j, j\u2009=\u20091, \u2026, p (the decay rates of the genes) and the Random forest parameters (the number T of trees per ensemble and the number K of randomly selected variables at each tree node). The data-derived \u03b1j values yield a higher AUPR score than most of the other tested values of \u03b1j and thus seem to be good default values for the inference of the DREAM4 networks. Setting \u03b1j to the true decay rates, i.e. the decay rates that were actually used to simulate the DREAM4 data, does not necessarily yield better performances. Actually, there is no clear correlation between the true and the estimated decay rates, the estimated values being adjusted to compensate for the fact that the model used for simulating the data differs from the ODE model assumed by dynGENIE3. When varying the Random forest parameter K, the variations in the AUPR scores are much weaker than those observed when varying the values of the kinetic parameters \u03b1j. The DREAM4 challenge comprised a bonus round where the goal was to predict for each network the steady-state gene expression levels in several double knockout experiments (where two genes are simultaneously deleted). Given initial gene expression levels at t\u2009=\u20090, we used the ODE models learned by dynGENIE3 to predict the expression levels of the non-deleted genes at successive time points until they reach a steady-state. The initial expression levels were set to zero for the two knocked-out genes and to the wild-type expression levels for the remaining genes. We compared the predictions returned by dynGENIE3 to a baseline approach that uses the initial expression levels at t\u2009=\u20090 as predictions. For each network, our approach yields a higher correlation between the predicted and true expression levels than the baseline, although a large number of predictions remain far from perfect.
We applied different network inference methods for the reconstruction of real-world sub-networks in three organisms: Saccharomyces cerevisiae, Drosophila melanogaster and Escherichia coli. These organisms are much studied in the literature, and known biology can hence be used to guide the network inference. For each method and dataset, we restricted the candidate regulators to known transcription factors (TFs) and ranked all the putative regulatory interactions between these known TFs and the remaining genes. For dynGENIE3, Jump3 and Inferelator, we set the decay rate parameters to experimentally measured mRNA decay rates (see next section). For the genes for which a measured decay rate could not be retrieved, \u03b1j was set to the median measured decay rate of the corresponding species. We used the following time series datasets for each of the three organisms. Saccharomyces cerevisiae dataset33: this dataset comprises gene expression levels in the budding yeast, measured over 2 cell cycles in wild-type cells and 1.5 cell cycles in cyclin-mutant cells. To validate the network predictions, we used the gold standard network provided by the DREAM5 challenge7. We restricted our analysis to the genes that are periodically transcribed and that are also present in the gold standard. Measured mRNA decay rates were retrieved from the work of Geisberg et al.34. Drosophila melanogaster dataset35: this dataset comprises gene expression levels measured over the 24-hour period during which the embryogenesis of the fruitfly D. melanogaster takes place. We focused our analysis on the 1000 genes whose expression varies the most across the time series. We used as gold standard the experimentally confirmed binding interactions between TFs and genes that have been curated in the DroID database36 (version 2015_12). mRNA decay rates (measured from whole embryos) were retrieved from the work of Burow et al.37. Escherichia coli dataset38: this dataset comprises gene expression levels in E. coli, measured at several time points after five different perturbations: cold, heat, oxidative stress, glucose-lactose shift and stationary phase. We used as gold standard the verified regulatory interactions available in RegulonDB39 (version 9.0), and we focused our analysis on the genes that are present in both the dataset and the gold standard. mRNA decay rates (measured in cells with a growth rate of 0.63\u2009h\u22121) were retrieved from the work of Esquerre et al.40. The relative performances of GENIE3 and dynGENIE3 are very data-dependent, with dynGENIE3 performing better than GENIE3 on the S. cerevisiae and DREAM4 datasets, while the opposite is observed for D. melanogaster and E. coli. With the experimentally measured decay rates \u03b1j, a high number of gold standard edges is retrieved compared to the other tested \u03b1j values. For D. melanogaster, although a significant number of true edges are retrieved with the experimentally measured decay rates, much better performances can be obtained with other values of \u03b1j. As shown in a Supplementary Figure, the most difficult case is E. coli, where the top-500 edges do not contain any gold standard edge. As for the DREAM4 networks, the performance of dynGENIE3 on the real networks does not change much when using other values of the Random forest parameters. For the DREAM4 networks, the predictive performance of the learned models appears to be a useful indicator for choosing \u03b1j, since a higher prediction score tends to coincide with a higher AUPR. However, this becomes less clear for the real networks, the prediction score being positively correlated with the number of retrieved edges for S. cerevisiae but negatively correlated for D. melanogaster and E. coli. Although disappointing, these results show that optimising the model predictive performance does not necessarily lead to a good feature selection (i.e. the selection of the true regulators for each target gene).
It would thus be desirable to have an automatic way of tuning the kinetic parameters. We first tried to achieve this by checking how the ability of dynGENIE3 to predict new expression profiles varies according to the values of \u03b1j. We also attempted to use a feature stability criterion41 in order to tune the parameters \u03b1j. The idea is to compare the T rankings of candidate regulators respectively returned by the T trees of an ensemble, the candidate regulators being each time ranked using the variable importance scores derived from one regression tree. More specifically, we used as stability score the average size of the intersection of the two sets of top 5 candidate regulators respectively returned by two regression trees, averaged over all pairs of trees and all ensembles, where p is the number of tree ensembles (one for each target gene) and Si,j is the set of 5 top-ranked candidate regulators returned by the i-th tree of the j-th ensemble. Supplementary figures report the resulting stability scores for the S. cerevisiae network. On a general note, caution should however be taken when drawing conclusions from real data, since real gold standard networks are usually very far from being complete. In this article, we evaluated the performances of the tree-based approaches GENIE3 and its dynamical variant dynGENIE3 for the inference of gene networks from time series of expression data. For this evaluation, we used artificial data from the DREAM4 challenge and real datasets related to three different organisms. Our experiments show that dynGENIE3 is competitive with diverse methods from the literature, even though it does not systematically yield the best performance for every network (but none of the compared methods was able to do so). Furthermore, our method can also be applied for the joint analysis of steady-state and time series data. While dynGENIE3 consistently outperforms GENIE3 on the DREAM4 data, the same conclusion cannot be drawn for the real datasets, where the relative performances of the two methods are very data-dependent. These results could potentially be explained by the multiple differences that exist between the organisms and datasets, such as differences in the dynamics of the gene expression regulation or in the rates at which expression levels are sampled. A thorough analysis of these differences and their impact on the network inference methods would thus be desirable. As a preliminary result, for the D. melanogaster and DREAM4 10-gene networks, most of the information seems to be contained in the first half of the time series, while for the other networks better performance is obtained when data are sampled over a longer time period. As a side result, we showed that dynGENIE3 can be used to make predictions of gene expression profiles at successive time points. Here, we evaluated its predictive performances in the context of (simulated) double knockout experiments. Preliminary results show that dynGENIE3 performs better than a baseline approach. Such results should of course be completed with an evaluation on real data and a comparison to other predictive methods. We investigated the predictive performance of dynGENIE3, estimated on the out-of-bag samples, as well as a stability criterion, as means for automatically identifying the values of the kinetic parameters \u03b1j that would yield the best performances in terms of network reconstruction. While both criteria appear to be good indicators for the artificial DREAM4 networks, they are not always positively correlated with the number of retrieved gold standard edges in the case of the real networks. The design of a method to automatically tune the parameters \u03b1j is thus left as future work.
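The steady-state prediction procedure described earlier (simulating the learned ODEs forward from a knockout initial condition) can be sketched as follows; a simple linear-plus-ReLU map stands in for the trained Random forest models, which is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 5
# Toy stand-in for the learned models f_j: a fixed linear map with a ReLU
# (the paper uses Random forest models instead).
A = 0.1 * rng.standard_normal((p, p))
f = lambda x: np.maximum(A @ x, 0.0)
alpha = np.full(p, 0.3)            # decay rates

wild_type = rng.random(p)
knocked_out = [1, 3]               # the two deleted genes

x = wild_type.copy()
x[knocked_out] = 0.0               # initial condition of the double knockout
dt = 0.1
for _ in range(5000):              # Euler steps until (approximate) steady state
    dx = -alpha * x + f(x)
    dx[knocked_out] = 0.0          # deleted genes stay at zero
    x = x + dt * dx

prediction = x                     # predicted steady-state expression levels
```

The baseline mentioned in the text would simply return the initial expression vector instead of integrating the dynamics.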
Meanwhile, setting \u03b1j to experimentally measured decay rates already allows to obtain good performances. In our current implementation of dynGENIE3, we use the finite difference approximation to estimate the derivative of each expression profile xj(t1), \u2026, xj(tN). An alternative would be to first fit a smooth model of the expression profile and differentiate it42; such a method would have the advantage of returning an estimate of the derivative at any time point t (and not only at the observation time points). An important direction of future research is the application of the dynGENIE3 framework for the analysis of single-cell expression data. Emerging single-cell technologies now allow to measure gene expression levels simultaneously in hundreds of individual cells. Even when the gene expressions are measured at one single time point, cells are in different developmental stages, and several algorithms have been developed for ordering the cells along a developmental trajectory43. Pseudo time series derived from static single-cell data could therefore be used to unravel gene regulatory networks, and some promising initial steps are being taken44. While we believe that dynGENIE3 is a step in the right direction, we also acknowledge that the complexity of gene regulation will pose a strict limit to the potential of GRN inference from expression data alone. Another important future research direction is thus the integration in dynGENIE3 of complementary data, such as microRNA expression, ChIP-seq, chromatin, or protein-protein interactions. Recently, Petralia et al.45 proposed an approach to bias the selection of features in Random forests, which could be adapted for dynGENIE3.
Like the Jump3 method, dynGENIE3 is a hybrid model-free/model-based method that incorporates a dynamical model within a non-parametric, tree-based approach. Various gene regulation models have been proposed in the literature46, which could be exploited; these models differ in their level of detail and also in the way they model uncertainties. In the future, we plan to explore and evaluate other hybrid approaches combining parametric terms based on first principles with non-parametric terms in the form of tree ensembles. Supplementary information"}
+{"text": "The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online. Transcriptome-wide measurement of gene expression dynamics can reveal regulatory mechanisms that control how cells respond to changes in the environment. Such measurements may identify hundreds to thousands of responsive genes. Clustering genes with similar dynamics reveals a smaller set of response types that can then be explored and analyzed for distinct functions. Two challenges in clustering time series gene expression data are selecting the number of clusters and modeling dependencies in gene expression levels between time points. We present a methodology, DPGP, in which a Dirichlet process clusters the trajectories of gene expression levels across time, where the trajectories are modeled using a Gaussian process. We demonstrate the performance of DPGP compared to state-of-the-art time series clustering methods across a variety of simulated data. We apply DPGP to published microbial expression data and find that it recapitulates known expression regulation with minimal user input. 
We then use DPGP to identify novel human gene expression responses to the widely-prescribed synthetic glucocorticoid hormone dexamethasone. We find distinct clusters of responsive transcripts that are validated by considering between-cluster differences in transcription factor binding and histone modifications. These results demonstrate that DPGP can be used for exploratory data analysis of gene expression time series to reveal novel insights into biomedically important gene regulatory processes. This is a PLOS Computational Biology Methods Paper. An exception occurred for data with small length scales, for which the proportions were equivalent; this implies that the simulated sampling rates in these cases were too low for DPGP to capture temporal patterns in the data. An important advantage of DPGP, as a probabilistic method, is that uncertainty in clustering and cluster trajectories is captured explicitly. Some implications of the probabilistic approach are that cluster means and variances can be used to quantify the likelihood of future data, to impute missing data points at arbitrary times, and to integrate over uncertainty in the cluster assignments. For the simulations with Gaussian-distributed error, in which DPGP performed worse than BHC or SplineCluster with respect to recovering the true cluster structure, the clusters inferred from the data still provided useful and accurate CIs for unseen data. For example, DPGP performed decreasingly well as the marginal variance was increased to 0.4, 0.5, and 0.6; however, the median proportions of test points within the 95% CIs were 93.4%, 92.6%, and 91.9%, respectively. 
DPGP may also be used to evaluate the confidence in a specific clustering with respect to the fitted model, which can be important for revealing instances when many different partitions model the data nearly as well as one another. For example, across our simulated datasets, when DPGP did not precisely recover the cluster structure, we found there was also substantial uncertainty in the optimal partition. Specifically, the posterior probability of the oracle clustering with respect to the simulated observations was greater than both the posterior probability of the DPGP MAP partition and the mean posterior probability across all DPGP samples in only 1.6% of cases. This suggests that, in nearly all simulated examples, the posterior probability was not strongly peaked at the true partition. We then applied DPGP to published expression data from a microbial model organism exposed to oxidative stress induced by addition of H2O2. In the human cell line data, increased binding of transcriptional activators was associated with increased expression and with more rapidly increased expression, while decreased binding was associated with decreased expression and more rapidly decreased expression. Specifically, genes in both up-regulated clusters had higher median log fold change in binding of CREB1, FOXA1, and USF1 compared to the two down-regulated clusters, the down-regulated clusters differed significantly from non-differentially expressed genes (down-reg-slow versus non-DE, MWU, p \u2264 2.07 \u00d7 10\u221215; down-reg-fast versus non-DE, MWU, p = 3.18 \u00d7 10\u22125), and the down-reg-fast cluster had lower median log fold change than the down-reg-slow cluster in FOXA1 and USF1 binding. Our results suggest that differences in TF binding over time may underlie differences in dynamic transcriptional response, both in terms of up-regulation versus down-regulation and in the speed of the transcriptional response. We developed DPGP to cluster measurements of genomic features, such as gene expression levels, over time. 
We showed that our method effectively identified disjoint clusters of time series gene expression observations using extensive simulations. DPGP compares favorably to existing methods for clustering time series data, is robust to non-Gaussian marginal observations, and, importantly, includes measures of uncertainty and an accessible, publicly-available software package. We applied DPGP to existing data from a microbial model organism exposed to stress, and found that DPGP accurately recapitulated previous knowledge of TF-mediated gene regulation in response to H2O2. As with all statistical models, DPGP makes a number of assumptions about observations. In particular, DPGP assumes i) cluster trajectories are stationary; ii) cluster trajectories are exchangeable; iii) each gene belongs to only one cluster; iv) expression levels are sampled at the same time points across all genes; and v) the time point-specific residuals have a Gaussian distribution. Despite these assumptions, our results show that DPGP is robust to certain violations. In the human cell line data, exposure to dex resulted in a non-stationary response (p < 0.05, or Kwiatkowski\u2014Phillips\u2014Schmidt\u2014Shin p > 0.05), and the residuals did not follow a Gaussian distribution, violating assumptions (i) and (v). However, despite these assumption violations, we found that DPGP clustered expression trajectories in a robust and biologically interpretable way. Furthermore, because DPGP does not assume that the gene expression levels are observed at identical intervals across time, DPGP allows study designs with non-uniform sampling. In DPGP, the temporal dependencies within each cluster are modeled with a Gaussian process. 
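As a minimal sketch of the temporal model, cluster-mean trajectories can be drawn from a Gaussian process prior with a squared-exponential kernel; the hyperparameter values below are arbitrary illustrations, not the package defaults:

```python
import numpy as np

def sq_exp_kernel(t1, t2, variance=1.0, length_scale=1.0):
    """Squared-exponential covariance between two sets of time points."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 8)                    # observation times (may be non-uniform)
K = sq_exp_kernel(t, t) + 1e-6 * np.eye(len(t))  # small jitter for numerical stability

# draw a few smooth cluster-mean trajectories from the GP prior
trajectories = rng.multivariate_normal(np.zeros(len(t)), K, size=3)
```

Because the kernel is a function of the time points themselves, nothing in this construction requires uniform sampling intervals.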
With the DP mixture model, we are able to cluster the trajectory of each gene over time without specifying the number of clusters a priori. We define the generative DP mixture model and integrate out the random measure G in the DP to find the conditional distribution of one cluster-specific random variable \u03b8h conditioned on all other variables \u03b8h\u00ac, which represent the cluster-specific parameter values of the observation distribution. This allows us to describe the distribution of each parameter conditioned on all others for all clusters h \u2208 {1, \u2026, H}, where \u03b4\u03b8h(\u22c5) is a Dirac delta function at the parameters for the hth partition. A prior could be placed on \u03b1, and the posterior for \u03b1 could be estimated conditioned on the observations. Here we favor simplicity and speed, and we set \u03b1 to one; this choice has been used previously in gene expression clustering. Before each iteration, m empty clusters are re-generated, each of which has a mean function drawn from the prior mean function \u03bc0 with variance equivalent to the marginal variance described above. These empty clusters are also assigned the initial covariance kernel parameters described above. The kernel hyperparameters are updated every sth iteration to increase speed. Specifically, we compute the posterior probabilities of the kernel hyperparameters; to simplify calculations, we maximize the marginal likelihood, which summarizes model fit while integrating over the parameter priors, known as type II maximum likelihood. The posterior similarity matrix (PSM) is defined as the proportion of Gibbs samples for which a pair of genes j, j\u2032 are in the same partition, computed over the Q samples of the chain. This PSM avoids the problem of label switching by being agnostic to the cluster labels when checking for equality. Our MCMC approach produces a sequence of states drawn from a Gibbs sampler, where each state captures a partition of genes into disjoint clusters. In DPGP, we allow several choices for summarizing results from the Markov chain. 
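The posterior similarity matrix described above can be computed directly from the sampled cluster labels; note how relabelled but otherwise identical partitions contribute the same co-clustering counts:

```python
import numpy as np

# Toy Gibbs output: labels[q, j] = cluster assignment of gene j in sample q
labels = np.array([
    [0, 0, 1, 1],
    [1, 1, 0, 0],   # same partition as the row above, with switched labels
    [0, 0, 0, 1],
])
Q, n = labels.shape

# PSM: fraction of samples in which each pair of genes is co-clustered
psm = np.zeros((n, n))
for q in range(Q):
    psm += (labels[q][:, None] == labels[q][None, :])
psm /= Q
```

Here genes 0 and 1 co-cluster in every sample (entry 1.0), even though their label flips between samples, which is exactly the label-switching robustness described in the text.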
Here, we take the maximum a posteriori partition across samples as the point estimate of the clustering. To generate each simulated data set, we specified the total number of clusters and the number of genes in each cluster. For each cluster, we drew the cluster\u2019s mean expression from a multivariate normal (or multivariate t-distribution) with mean zero and covariance equivalent to a squared-exponential kernel with prespecified hyperparameter settings, then drew a number of samples (gene trajectories) from a multivariate normal (or multivariate t-distribution) with this expression trajectory as mean and the posterior covariance kernel as covariance. In order to test our algorithm across a wide variety of possible data sets, we formulated more than twenty generative models with different numbers of clusters (10\u2013100) and with different generative covariance parameters. We varied cluster number (data sets 1\u20135) and covariance parameters both across models and within models. For each model, we generated 20 data sets to ensure that results were robust to sampling. We simulated 620 data sets with Gaussian-distributed error and 500 data sets with t-distributed error. We compared results of DPGP applied to these simulated data sets against results from six state-of-the-art methods, including two popular correlation-based methods and four model-based methods that use a finite GMM, infinite GMM, GPs, and spline functions: BHC (v.1.22.0); GIMM (v.3.8); hierarchical clustering by average linkage; k-means clustering; Mclust (v.4.4); SplineCluster (v. Oct. 2010). Hierarchical clustering and k-means clustering were parameterized to return the true number of clusters. All of the above algorithms, including our own, were run with default arguments. 
The only exception was GIMM, which was run by specifying \u201ccomplete linkage\u201d, so that the number of clusters could be chosen automatically by cutting the returned hierarchical tree at distance 1.0, as in \u201cAuto\u201d IMM clustering. We evaluated the accuracy of each approach using the adjusted Rand index (ARI). To compute ARI, let a equal the number of pairs of co-clustered elements that are in the same true class, b the number of pairs of elements in different clusters that are in different true classes, and N the total number of elements clustered; the Rand index is then (a\u2009+\u2009b) divided by the total number of pairs, N(N\u2009\u2212\u20091)/2, and the ARI rescales this index so that its expected value under a random clustering is 0 and its maximum value is 1. H. salinarum control and \u0394rosR TF deletion strains were grown under standard conditions until mid-logarithmic phase. Expression levels of all 2,400 genes in the H. salinarum genome were measured before addition of H2O2 and at 10, 20, 40, 60, and 80 min after addition. Mean expression across replicates was standardized to zero mean and unit variance across all time points and strains. Standardized expression trajectories of 616 non-redundant genes previously identified as differentially expressed in response to H2O2 were clustered, setting \u03b1IG = 6 and \u03b2IG = 2 to allow modeling of increased noise in microarray data relative to RNA-seq. Gene trajectories for each of the control and \u0394rosR strains were clustered in independent DPGP modeling runs. Resultant clusters were analyzed to determine how each gene changed cluster membership in response to the rosR mutation. We computed the Pearson correlation coefficient in mean trajectory between all control clusters and all \u0394rosR clusters. Clusters with the highest coefficients across conditions were considered equivalent across strains, and significance was tested using Fisher\u2019s exact test (FET). 
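The ARI comparison described above is available directly in scikit-learn as adjusted_rand_score; a small sketch:

```python
from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 0, 1, 1, 2, 2, 2]
# a clustering that matches the true classes up to label names
perfect = [2, 2, 2, 0, 0, 1, 1, 1]
# a clustering that merges two of the true classes
merged = [0, 0, 0, 0, 0, 1, 1, 1]

print(adjusted_rand_score(true_labels, perfect))  # 1.0: ARI is label-agnostic
print(adjusted_rand_score(true_labels, merged))   # below 1.0 for the merged partition
```

Because ARI compares pairs of elements rather than label values, a perfect partition scores 1.0 regardless of how its clusters are numbered.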
Gene expression microarray data from our previous study were used. To determine the degree of correspondence between DPGP results and previous clustering results with the same data, we took the intersection of the list of 372 genes that changed cluster membership according to DPGP with genes in each of eight clusters previously detected using k-means. A549 cells were cultured and exposed to the GC dex or a paired vehicle ethanol (EtOH) control as in previous work. RNA-seq reads were mapped to GENCODE (v.19) transcripts using Bowtie (v.0.12.9) and quantified. To query the function of our gene expression clusters, we annotated all transcripts tested for differential expression with their associated biological process Gene Ontology slim (GO-slim) terms. We compared DPGP clusters in terms of TF binding and histone modification occupancy as assayed by ChIP-seq, using an elastic net logistic regression, which combines lasso (\u21131) and ridge (\u21132) penalties. Elastic net tends to shrink to zero the coefficients of groups of correlated predictors that have little predictive power. Predictors were constructed from log10 normalized counts of TF binding and histone modifications in control conditions (2% EtOH by volume and untreated) and, separately, from log10 fold-change in normalized TF binding from 2% EtOH by volume to 100 nM dex conditions. We used stochastic gradient descent as implemented in SciKitLearn. We chose the \u21131/\u21132 ratio and the regularization multiplier (\u03bb) by fitting our model with 5-fold stratified cross-validation across a grid of possible values for both variables. We selected the sparsest model (least number of non-zero coefficients) with mean log-loss within one standard error of the mean log loss of the best performing model. 
We performed principal components analysis (as implemented in SciKitLearn) on the log10 library size-normalized binned counts of TF binding and histone modifications in control conditions, restricted to the observations corresponding to transcripts in the four largest DPGP clusters.

S1 Fig. (A–H) Box plots show summaries of the empirical distribution of clustering performance for each method in terms of Adjusted Rand Index (ARI) across twenty instances of 25 data set types, with t-distributed error (df = 2). Vertical dotted lines separate data sets generated with widely varied cluster size distributions (left) from data sets generated with widely varied generating hyperparameters (right). Observations that lie beyond the first or third quartile by 1.5× the interquartile range are shown as outliers. (TIF)
S2 Fig. Box plots show the distribution of ARI across varied gene-to-cluster inclusion probabilities for (A) all data sets. (TIF)
S3 Fig. (A) Mean runtime of BHC, GIMM, DPGP, and fDPGP across varying numbers of gene expression trajectories generated from GPs parameterized in the same manner as simulated data sets 11, 21, and 27. (TIF)
S4 Fig. For all data sets. (TIF)
S5 Fig. Each stick on the x-axis represents a single data cluster of the 13 total clusters. Note that the two clusters with sizes 22 and 23 are difficult to distinguish by eye. (TIF)
S6 Fig. Heatmap shows all coefficients estimated by elastic net logistic regression of cluster membership for the four largest DPGP clusters as predicted by log10 normalized binned counts of ChIP-seq TF binding and histone modifications in control conditions. Distance indicated in row names reflects the bin of the predictor (e.g. < 1 kb = within 1 kb of TSS). (TIF)
S7 Fig. All non-zero coefficients estimated by elastic net logistic regression of cluster membership for the two largest down-regulated DPGP clusters on TF binding and histone modifications in A549 cells in control conditions. Distance indicated in row names reflects the bin of the predictor. (TIF)
S8 Fig. The log10 normalized ChIP-seq binned counts around the TSS of genes, representing TF binding and histone modification occupancy in control conditions, were decomposed by PCA. The percentage of variance explained by each of the top ten PCs is shown. (TIF)
S9 Fig. Variables are as described in Materials and methods. (TIF)
S1–S6 Tables. (XLSX)"}
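The PCA step mentioned in the record above (decomposing log10 library-size-normalized binned ChIP-seq counts and reporting variance explained by the top PCs, cf. S8 Fig) can be sketched as follows; the count matrix here is synthetic.

```python
# Minimal PCA sketch: log10-normalize a toy gene-by-bin count matrix and
# report the variance explained by the top ten principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
counts = rng.poisson(lam=20, size=(500, 40)).astype(float)   # genes x bins (toy)
libsize = counts.sum(axis=0, keepdims=True)
normed = np.log10(counts / libsize * libsize.mean() + 1.0)   # log10 normalized

pca = PCA(n_components=10).fit(normed)
for i, frac in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {100 * frac:.1f}% variance explained")
```

The normalization shown is one plausible library-size scaling, not necessarily the one used in the study.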
+{"text": "Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer GRNs. However, most algorithms pay attention only to the gene expression data and do not consider topology information in their inference process, although incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly, and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method to DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and that the topology information prior can improve the results. Gene regulatory networks play an important role in diverse cellular functions. A reliable method to identify the structure and dynamics of such regulation is important for understanding complex biological processes and is helpful for the treatment of diseases. With the development of high-throughput technologies in recent years, gene expression data has provided a useful way to investigate the cellular system. Generally, two types of gene expression data are used to predict the structure of GRNs: steady-state data and time-series data. Steady-state data measure the steady-state expression levels in different samples, while time-series data measure the expression levels at several successive time points.
Since time-series data contain the dynamic information of the network while steady-state data do not, we focus on time-series data here. Over the last several years, a number of network inference methods have been developed to tackle this problem, including Bayesian networks and dynamic Bayesian networks (DBNs). Inferring a GRN from time-series data is known to be challenging, partly due to the high number of genes relative to the number of data points. More importantly, the interactions between genes are typically nonlinear; thus a linear model may be insufficient to recognize the nonlinear interactions. A flexible way to solve this problem is to use B-spline functions to describe the nonlinear interactions, and B-spline functions have been used to infer GRNs in previous studies. In this paper, we work with a dynamic Bayesian network and use spline regression to detect the nonlinear interactions between genes. A Bayesian group lasso is also used to avoid overfitting and reduce the number of parameters to be estimated. Compared with the group lasso, the Bayesian group lasso is a better choice because Bayesian selection methods have two major advantages: (1) the tuning parameter can be set flexibly, and (2) topology information can be incorporated easily. Further, instead of a traditional Bayesian group lasso, we use a Bayesian group lasso model with spike and slab priors, since this problem only requires sparsity at the group level and spike and slab priors can exclude or include an entire group of B-spline basis functions. Finally, we incorporate the topology information as a prior in the Bayesian approach, which controls the size of the selected model. The method is assessed by application to DREAM3 and DREAM4 datasets and two real biological datasets. Suppose the gene expression data form a G × T matrix Y, where T is the number of time points at which expression levels are measured and G is the number of genes. A DBN model represents probabilistic relationships between genes via a directed acyclic graph ϑ.
In this graph, genes are represented by a set of nodes V = {V1,…, VG} and the interactions between genes are represented by a set of directed edges E ⊆ {(i, j) : i, j ∈ V}. A directed edge from node i to node j means gene i is a regulator of gene j. The probability distribution of the genes Yt given their parents factorizes over genes, P(Yt | Yt−1) = ∏g p(Yg,t | Pa(Yg,t)), where Yg,t is the expression level of gene g at time t and Pa(Yg,t) is the set of all parent nodes of gene g at time t. In the case of the regression-based DBN, the conditional distribution p(yg,t | yg,t−1−) can be written as yg,t = f(yg,t−1−) + εg, where yg,t is the expression level of gene g, yg,t−1− is the vector of expression levels at time t − 1 without gene g, and the error term εg ~ N(0, σ2). Although f(·) can be characterized by any nonlinear functional representation, we use B-splines: a basis of m spline functions, where m is a predetermined integer, is placed on each candidate regulator, so that the model becomes a linear regression in grouped spline coefficients. We use βg− to denote the coefficient vector β without the gth group and Xg− to denote the covariate matrix corresponding to βg−. We use the Bayesian group lasso method with the hierarchical spike and slab prior: conditional on inclusion (γg = 1), the full conditional distribution of βg is normal, βg ~ N(μg, Σg), where μg = ΣgXgT(Y − Xg−βg−) and Σg = (XgTXg + (1/τg2)Img)−1; under exclusion (γg = 0), βg is set to a point mass at zero. We define Zg = {1, if γg = 0; 0, if γg ≠ 0} and Dτ = diag{τ12, τ22,…, τG2}. The conditional posterior distributions of the remaining parameters τg2, σ2, and λ follow standard conjugate forms, where p, equal to 1 + mj × (G − 1), is the total number of regressors. The area under the receiver operating characteristic (AUROC) curve and the area under the precision-recall (AUPR) curve are computed according to the gold standard network topology provided by the DREAM3 and DREAM4 challenges.
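The B-spline expansion underlying the regression-based DBN can be sketched as below: each candidate regulator's lagged expression is expanded in m spline basis functions, giving one coefficient group per regulator, so a group-level spike-and-slab prior can drop whole regulators at once. This is an illustrative sketch, not the authors' code; the expression matrix and knot placement are invented.

```python
# Build the grouped B-spline design matrix for one target gene g:
# one m-column block per candidate regulator j != g, from lagged expression.
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_basis=5, degree=3):
    """Design matrix of n_basis clamped B-splines evaluated at x."""
    n_knots = n_basis + degree + 1
    inner = np.linspace(x.min(), x.max(), n_knots - 2 * degree)
    knots = np.concatenate([[inner[0]] * degree, inner, [inner[-1]] * degree])
    return np.column_stack(
        [BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)]
    )

rng = np.random.default_rng(0)
T, G, m = 50, 4, 5
Y = rng.normal(size=(T, G))     # toy expression matrix (time points x genes)
g = 0                           # target gene
X = np.hstack([bspline_design(Y[:-1, j], m) for j in range(G) if j != g])
y = Y[1:, g]                    # expression of gene g at the next time step
print(X.shape)                  # (T-1, m * (G-1))
```

Each contiguous block of m columns is one group; setting a whole block's coefficients to zero corresponds to excluding that regulator, which is exactly what the spike component of the prior does.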
The prediction performances on the DREAM4 10-gene networks and 100-gene networks are summarized in the Tables. The IRMA network is a synthetic subnetwork embedded in Saccharomyces cerevisiae consisting of 5 genes: CBF1, GAL4, SWI5, GAL80, and ASH1. Both of the two time-series gene expression sets include switch-on data and switch-off data. The switch-on data are taken from 5 experiments and the switch-off data from 4 experiments, with a total of 142 samples. We use the F-measure, F = (2 · PR · RR)/(PR + RR), where PR is precision and RR is recall, to evaluate the performance and to select a best threshold. We then apply our method to the cell cycle genes in human cancer cell lines (HeLa) analyzed by Whitfield; here we use the third experiment of Whitfield and compare with an ℓ1 penalized method, grpLasso, and TAlasso to infer causal interactions. All the true positives of Morrissey's method are also found by BGL and BGL_prior. On the other hand, the interactions from RFC4 to CDC2 and from CDC2 to CCNE1 are found not only by BGL and BGL_prior but also by 2 of the other 3 comparable methods. This may be because these interactions exist in the real regulatory network but are not included in the BioGRID dataset. Our model combines a group-level ℓ1 norm penalty to prevent overfitting with topology information, namely the knowledge of the exponential decrease of in-degree (most genes have only a small number of regulators), as a prior. A spike and slab prior is used to facilitate variable selection by putting a multivariate point mass at 0m×1 for an m-dimensional coefficient group. The performance of the proposed method is demonstrated by applications to the DREAM4 in silico data of the size-10 and size-100 network challenges and to the real biological data of the IRMA and HeLa cell networks.
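The threshold selection by F-measure described above can be illustrated with a small toy example: scored edges are binarized at each candidate threshold, then precision (PR), recall (RR), and F = 2·PR·RR/(PR + RR) are computed against a gold-standard edge set. The scores and gold edges here are invented.

```python
# Toy F-measure-based threshold selection for inferred network edges.
def f_measure(scores, gold, threshold):
    pred = {e for e, s in scores.items() if s >= threshold}
    tp = len(pred & gold)
    pr = tp / len(pred) if pred else 0.0    # precision
    rr = tp / len(gold)                     # recall
    return ((2 * pr * rr / (pr + rr)) if pr + rr else 0.0), pr, rr

scores = {("G1", "G2"): 0.9, ("G2", "G3"): 0.7, ("G3", "G1"): 0.2,
          ("G1", "G3"): 0.6, ("G2", "G1"): 0.1}
gold = {("G1", "G2"), ("G2", "G3"), ("G1", "G3")}

best = max((f_measure(scores, gold, t)[0], t) for t in (0.1, 0.3, 0.5, 0.7, 0.9))
print("best F:", round(best[0], 3), "at threshold", best[1])
```

In practice the scores would be posterior inclusion frequencies or coefficient magnitudes from the sampler, and the gold standard would be the known (e.g. IRMA or DREAM) topology.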
The results show that the topology information indeed contributes to gene regulatory network inference: it remarkably improves the AUROC on the DREAM4 in silico data and improves the results on the IRMA network and HeLa cell data. The B-spline regression model also performs better than a linear model on the real biological data. Therefore, our method is an effective way of inferring gene regulatory networks from time-series data. In summary, we propose a fully Bayesian method, based on B-splines, the group lasso, and topology information, to infer gene regulatory networks from time-series data, using B-spline functions to capture the nonlinear interactions between genes."}
+{"text": "All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably according to sequential time points after perturbation by environment insults, drugs and chemicals. Investigating the temporal behavior of molecular events has been an important subject to understand the underlying mechanisms governing the biological system in response to, such as, drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data.A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points . We analyzed, with DTM, the topics (consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in this time-series gene expression profiles. From the topic distribution for compound-time condition, a number of drugs were successfully clustered by their shared mode-of-action such as PPAR\u0251 agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms.We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with the probabilistic representation of their dynamic features along sequential time frames. 
The method offers an alternative way for uncovering hidden patterns embedded in time-series gene expression profiles to gain enhanced understanding of the dynamic behavior of gene regulation in the biological system.The online version of this article (doi:10.1186/s12859-016-1225-0) contains supplementary material, which is available to authorized users. All biological processes, including perturbation responses, are inherently dynamic. Investigating the temporal behavior of these dynamic processes is an important part of biological research. With the advancement of technology and reduction in cost, the study of time-series gene expression has become routine. To analyze time-series gene expression profiles, several approaches have been used, which can be divided into two classes. The first class comprises conventional clustering algorithms such as hierarchical clustering, k-means clustering, and self-organizing maps, which do not consider any dependencies between temporally successive profiles. In other words, even if we permute the order of time points, the results of these algorithms do not change. Another drawback of these approaches is the mutual exclusiveness of genes with respect to their involvement in biological processes responding to exposure. The second class comprises clustering algorithms primarily designed to analyze time-series expression. For example, Aach and Church introduced a dynamic time warping algorithm for the alignment of expression profiles in different time series, and Schliep and colleagues developed model-based clustering of time-series profiles. In this study, we propose the dynamic topic model (DTM) as a novel approach to cluster time-series gene expression profiles. DTM was originally developed by Blei to analyze the time evolution of topics in large document collections in the field of text mining. DTM assumes that the topics at time t evolve from the topics associated with the previous time, t−1. It treats documents as mixtures of topics in which words are represented by probability distributions across all time points.
This representation can be analogously applied to biological systems, since a biological component (topic) differs between the initial unstable stage shortly after drug treatment and the steady stage that the cellular system enters after a certain time period. In other words, the acting genes tend to evolve over time, consistent with the dynamic behavior of cellular and molecular effects changing after drug exposure. The advantage of using DTM is that it is a soft clustering technique which does not assume mutual exclusivity and permits multiple topic assignments, in a probabilistic way, to the same sample and gene, reflecting true biological complexity. However, to the best of our knowledge, DTM has not previously been explored as a method for the analysis of time-series gene expression profiles. DTM extended static LDA to take into consideration the time evolution that exists in a real document collection. Static LDA assumes that documents from the same set of topics are exchangeable, i.e., their probability is invariant to permutation. However, that assumption completely ignores one significant variable, time, which is present in documents organized sequentially, where the topics evolve over time. DTM instead models how the topics associated with each time point evolve from those at the previous time point. Here, we applied DTM to a set of time-series gene expression profiles generated by the Japanese Toxicogenomics Project (http://dokuwiki.bioinf.jku.at/doku.php/start). The Japanese Toxicogenomics Project generated large-scale gene expression profiles for the same compounds tested in rat livers and kidneys as well as in both rat and human primary hepatocytes. The probe-level microarray data were quantile normalized, followed by mapping of probe sets to their corresponding genes. In static LDA, α and βk are the Dirichlet prior parameters on the topic distribution over documents and the word distribution over topic k, respectively.
Different from static LDA, DTM adopts the logistic normal distribution for the two prior distributions (topic per document and word per topic) and hence is more complex than static LDA, whose Dirichlet priors assure conjugacy between the prior and posterior distributions. Specifically, static LDA assumes that the words of each document are independently drawn from a mixture of multinomials. However, this implicit assumption of independence is not appropriate, because the topics (sets of words) in a document collection evolve over time. Our goal is to explicitly address the dynamics of the underlying topics as a function of sequential time. DTM provides a solution to this problem by assuming that the topics at time i evolved from the topics at time i−1, reflecting the real organization of document collections. DTM assumes that the data are divided by time slice, modeling the documents of each slice with a static topic model, where the topics associated with slice t evolve from the topics associated with slice t−1. A static LDA model assumes that the topic-specific word distributions are drawn from a Dirichlet distribution. DTM does not assume a Dirichlet distribution; instead, the word distributions over multiple time points are chained by a Gaussian distribution. Due to the nonconjugacy of the Gaussian and multinomial models, Blei applies variational approximations such as Kalman filters and nonparametric wavelet regression to approximate posterior inference. All variants of LDA are probabilistic and usually involve a series of processes to determine the optimal parameters that maximize the posterior probability of the observed data. In this study, the open-source DTM C++ package was applied from the author's website (https://www.cs.princeton.edu/~blei/topicmodeling.html).
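The core chaining idea can be illustrated in a few lines: the natural parameters of a topic's word distribution follow a Gaussian random walk across time slices, and a softmax maps them to probabilities, so word distributions at adjacent time points stay similar. This is a conceptual sketch, not Blei's implementation; the vocabulary size and variance are arbitrary.

```python
# Illustrative Gaussian chaining of one topic's word distribution over time.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_slices = 8, 4
chain_var = 0.05   # plays the role of top_chain_var: smaller = smoother topics

beta = rng.normal(size=vocab_size)              # natural parameters at slice 0
topic_over_time = []
for t in range(n_slices):
    probs = np.exp(beta - beta.max())
    topic_over_time.append(probs / probs.sum())  # softmax -> P(W|T) at slice t
    beta = beta + rng.normal(scale=np.sqrt(chain_var), size=vocab_size)

for t, p in enumerate(topic_over_time):
    print(f"slice {t}: top word index {int(np.argmax(p))}")
```

Shrinking `chain_var` toward zero recovers essentially static topics; increasing it lets the topic drift, which is what makes time-point-specific gene rankings possible.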
The modeling results include two different distributions: a multinomial distribution over topics for each document and multinomial distributions over words for each time point associated with each topic. In our analysis, the number of topics was heuristically determined by closely examining two hyperparameters, alpha and top_chain_var, together with the number of topics. Specifically, alpha controls the shape of the topic distribution of a sample: a smaller alpha makes each document probabilistically associated with fewer topics. The top_chain_var determines how similar topics are over multiple time points: a smaller top_chain_var leads to similar word distributions over multiple time points. We tested several parameter settings for alpha and top_chain_var and found that the varied values did not have a significant effect on our interpretation of the sample clustering results and the topic distributions over time points. Thus, we chose the default values and, at this condition, we feel that the choice of 20 topics is sufficient to balance between extreme generalization of the model and maximizing the chance of an informative discovery. The first distribution is the conditional probability of each topic given a document, P(T|D). This probability is a signature of the sample, which can be used to assess sample similarities. The latter represents the conditional probability of each gene given a topic at a particular time point, P(W|T)time, indicating which genes are important to a given topic at a particular time point. As we have four time points, four different P(W|T) were obtained, i.e., P(W|T)4days, P(W|T)8days, P(W|T)15days, and P(W|T)29days. First, to group documents, each document was assigned to the topic with the largest conditional probability value of P(T|D). The other distribution, P(W|T)time, was used for clustering genes.
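The document-grouping step just described (assigning each compound-time condition to the topic with the largest P(T|D)) amounts to a row-wise argmax. A minimal sketch, with a made-up probability matrix standing in for the DTM output:

```python
# Assign each compound-time condition (document) to its most probable topic.
import numpy as np

docs = ["drugA_4d", "drugA_8d", "drugB_4d", "drugB_8d"]  # hypothetical labels
# rows: documents, columns: topics; each row of P(T|D) sums to 1
p_topic_given_doc = np.array([[0.7, 0.2, 0.1],
                              [0.6, 0.3, 0.1],
                              [0.1, 0.2, 0.7],
                              [0.2, 0.1, 0.7]])

assignments = p_topic_given_doc.argmax(axis=1)
clusters = {}
for doc, topic in zip(docs, assignments):
    clusters.setdefault(int(topic), []).append(doc)
print(clusters)   # conditions sharing a dominant topic fall in one cluster
```

Because the full rows of P(T|D) are retained, softer analyses (e.g. comparing whole topic signatures between samples) remain possible alongside this hard assignment.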
Since DTM is designed to cluster words that co-occur frequently across documents, genes with a high rank in the same topic are likely involved in the same biological process. To take advantage of this information, functional pathway analysis was performed for each topic using Fisher's exact test with data from the Kyoto Encyclopedia of Genes and Genomes (KEGG) (http://www.genome.jp/kegg/) and Gene Ontology (GO) (http://geneontology.org/). Above all, the strongest benefit of DTM is its ability to monitor the behavior of the genes at the given time points and thus aid in investigating the significantly active genes at each time point. Each gene was ranked for each time point, and functional analysis was conducted for the top 300 genes according to their rank. After building a probabilistic model for our observed temporal DEGs using DTM, two distributions (matrices) were generated: the topic distribution over documents and a series of word distributions over multiple time points for each topic. The former includes the conditional probability of each topic given a sample, P(T|D), the topic distribution for a given compound-time condition; the other is P(W|T)time, a series of distributions over genes at multiple time points for each topic. This study consists of several steps: (1) generation of documents of DEG lists for each compound at each time point; (2) building a generative probabilistic model using DTM to maximize the posterior probability of the observed temporal DEGs; (3) assignment of the topic with the largest conditional probability value to each compound-time condition; (4) ranking DEGs according to their conditional probability for each topic and assessment of topic evolution over time; and (5) topic analysis in the biological context. From these procedures, we obtained two outputs; one of them is P(T|D), which can be used for the assessment of the association between a specific condition and a specific topic.
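The per-topic enrichment test described above reduces to a 2×2 contingency table, membership in the topic's top-300 list crossed with pathway annotation, evaluated with Fisher's exact test. A hedged sketch with invented gene sets:

```python
# Fisher's exact test for pathway enrichment in a topic's top-ranked genes.
from scipy.stats import fisher_exact

background = {f"gene{i}" for i in range(1000)}            # all tested genes
top300 = {f"gene{i}" for i in range(300)}                 # top-ranked for a topic
pathway = {f"gene{i}" for i in range(250, 400)}           # pathway annotation

a = len(top300 & pathway)                  # top-ranked and in pathway
b = len(top300 - pathway)                  # top-ranked, not in pathway
c = len((background - top300) & pathway)   # not top-ranked, in pathway
d = len(background - top300 - pathway)     # neither

odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
print(f"overlap={a}, odds ratio={odds_ratio:.2f}, p={p_value:.3g}")
```

In the study this would be run once per topic per time point, over KEGG pathways and GO terms, with appropriate multiple-testing correction.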
We used this statistical probability to group the conditions by connecting them with topics. These results are provided in the Additional file. DTM also provides the distribution over genes for each topic, P(W|T); in our analysis, four different distributions, P(W|T)4days, P(W|T)8days, P(W|T)15days, and P(W|T)29days, were derived. This probability can be interpreted as the contribution of a gene to a particular topic at a certain time point. The 10 most associated genes at 4 days for each topic are provided in the Table. Stac3 was associated with the largest number (19) of topics across all of the four time points. Stac3 (SH3 and cysteine-rich domain-containing protein) is well known to be highly expressed in and to control the cell cycle of skeletal muscle, while little is known about its involvement in liver function. To determine which biological processes were over-represented in a particular topic, we searched KEGG and GO with the top 300 ranked genes at each time point for each topic and used Fisher's exact test to assess the significance of association. In static topic modeling, we derive only a single P(W|T) regardless of time; DTM, however, yields multiple conditional distributions, P(W|T)4days, P(W|T)8days, P(W|T)15days, and P(W|T)29days, one for each time point, which offer more information for biological interpretation. From these probabilities, the dynamic nature of genes was investigated. The results from DTM and the original fold changes are compared in the Figure, which plots P(W|T) estimated from the DTM across the four time points. As expected, in Topic 5, Acot1 is the most highly up-regulated gene over the whole set of time points, followed by Fabp3. Acot1 (acyl-CoA thioesterase 1) is an enzyme that hydrolyzes long-chain acyl-CoAs to the free fatty acid and coenzyme A and is widely known to be a target of PPARα agonists. Fabp3 is a well-known biomarker for skeletal muscle toxicity, while little is known about its function in liver. Also, Fabp3 has been reported to cause drug-induced liver injury. The drastic change of Apoa4 was observed from day 8 rather than at the initial stage right after drug treatment, which was reflected by P(W|T). Apoa4 is known to be one of the apolipoproteins that bind lipids to form lipoproteins and transport the lipids out of the liver. For comparison with a bi-clustering approach, we utilized the iterative signature algorithm (ISA), one of the most widely used bi-clustering algorithms, to generate co-expression modules. The thresholds of standard deviations from the mean gene expression were varied from 2.5 to 5 by 0.1 along both rows and columns, keeping only genes and samples whose expression levels exceeded the given threshold. In total, 25, 7, 8, 3, 3 and 2 modules were identified at the thresholds of 2.5, 3, 3.5, 4, 4.5 and 5, respectively. At all thresholds, clusters of PPARα agonists were identified. When a threshold of 2.5 was applied, the ISA yielded one co-expression module representing PPARα agonists, which included 528 genes and 26 drug-time conditions covering seven different drugs, namely WY-14643, benzbromarone, benziodarone, clofibrate, fenofibrate, gemfibrozil and simvastatin. Even though four of the seven drugs were PPARα agonists, they showed discernibly enriched GO patterns, as illustrated in the Figure. Fabp3 is highly expressed upon exposure to WY-14643 and fenofibrate, which is identified in Topic 5 but not in the ISA module. To overcome the drawbacks of traditional clustering algorithms, DTM was explored as a model-based algorithm for the analysis of time-series gene expression data, which, to our best knowledge, has not been applied previously. Therefore, our study is significant as a pilot study that explores the feasibility of applying a text mining approach to time-series biological datasets. As a result of our investigation, we identified hidden patterns embedded in time-series toxicogenomics data.
From the topic distribution for each document, a number of drugs were successfully clustered by their shared MoA, for example PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs, which could provide a better understanding of drug perturbation mechanisms. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories than the ones from traditional clustering algorithms. Above all, the time-specific activity distributions along the sequential time points provide a tremendous opportunity to uncover the underlying toxicological dynamic changes. In summary, our study found that DTM has several distinct advantages. Firstly, it can reduce data dimension very effectively in terms of the latent variables (topics), with the assumption of time dependency present in toxicogenomics. Secondly, it allows samples and genes to be associated with multiple topics in an intuitive probabilistic manner without the mutual exclusivity assumption, reflecting the complexity of a real biological system. Most importantly, topic dynamics over time could provide new biological insights into the evolution of gene regulation."}
+{"text": "We suggest a new nonlinear expansion of space-distributed observational time series. The expansion allows constructing principal nonlinear manifolds holding an essential part of the observed variability. It yields low-dimensional hidden time series interpreted as internal modes driving the observed multivariate dynamics, as well as their mapping to a geographic grid. Bayesian optimality is used for selecting the relevant structure of the nonlinear transformation, including both the number of principal modes and the degree of nonlinearity. Furthermore, the optimal characteristic time scale of the reconstructed modes is also found. The technique is applied to monthly sea surface temperature (SST) time series having a duration of 33 years and covering the globe. Three dominant nonlinear modes were extracted from the time series: the first efficiently separates the annual cycle, the second is responsible for ENSO variability, and combinations of the second and the third modes explain substantial parts of Pacific and Atlantic dynamics. A relation of the obtained modes to decadal natural climate variability, including the current hiatus in global warming, is exhibited and discussed. Natural variability plays a key role in the climate response to different external forcings, including carbon dioxide emission. There are several strong internal modes of the climate system, such as the El-Niño Southern Oscillation (ENSO). In this work an alternative, empirical way to analyze natural modes of climate is proposed: we attempt to resolve empirically the dominant hidden signals which actually govern the observed behavior, so that they could be suitable for a low-dimensional dynamical description of climate variability on the considered time scales.
In fact, our aim is to reduce the data dimension by expanding the measured spatio-temporal observations into a small number of modes, which would form a basis for constructing a dynamical system's phase space capturing the main features of the observed dynamics. There is a set of widely used approaches for obtaining principal components by various linear rotations of multivariate data in the space domain. Our expansion instead yields hidden time series pi(t), interpreted as internal modes driving the observed variability, together with the maps Fi needed for analysis of the NDMs represented on the data (geographic) grid (hereinafter we use the abbreviation NDM for both the time series pi(t) and its image Fi(pi(t)) in data space); this is in contrast to isomap methods based on principal component analysis (PCA) kernels. Each function Fi(.) in (1) defines a nonlinear curve in data space, and pi(t1), ..., pi(tN) are just the coordinates of N points along such a curve; these points are specific projections of the observed data. The method must therefore both identify the curves and find the values of the hidden time series p. It also includes a very important step which prevents over-fitting: the selection of the optimal complexity of the decomposition (1) from a statistical point of view. This step is based on the Occam's razor principle stating that the minimal model that still explains the data is the best one: it answers the question of how many nonlinear modes can be resolved from the available data. Moreover, it yields the optimal nonlinearity degree of the functions Fi(.) as well as the prior correlation properties of the p time series, which control the smoothness of the obtained principal curves in data space. In practice, such a prior gives us an efficient separation of time scales between NDMs; hence, different NDMs turn out to be responsible for different modes of natural variability. As a result, we obtain a set of statistically significant nonlinear structures Fi(.) and “noise” ζ(t), the time series of the residual field.
In the next section the results of our nonlinear expansion of the SST time series are demonstrated and analyzed. We developed a Bayesian procedure for the reconstruction of the principal modes defined by (1); see details in the section Methods. The procedure yields p, the set of scalar hidden time series capturing as much as possible of the observed variability. For the analysis we took the time series of the NOAA OI.v2 SST monthly field and applied our expansion defined through (1) using the methodology expounded briefly in Methods. The hidden time series p(t) are shown in the figures; since for every i the estimated map Fi transforms scalar values of pi(t) into the data space, it enables us to visualize the time series of NDMs on the geographic grid. Let us look at how the identified modes are manifested in different geographical regions and which climatic phenomena they reflect. We distinguished several SST-based indices corresponding to different regions in the Northern hemisphere that have a significant impact on the global climate. The first NDM gives an almost periodical, slightly modulated signal in every NDM-based index: it corresponds to the annual cycle. In contrast, the contributions of linear PCs to the considered regions are spread over many PCs. The most interesting event that fell into the analyzed epoch is the 1997–1998 El-Niño episode, one of the strongest since records began (see the behavior of the Nino 3.4 index). This climate shift manifests itself in our second NDM as a distinct jump in its hidden time series p(t). Detecting principal curves in data space also allows investigating teleconnections, the linkages of dynamics at widely spaced regions of the globe, in the space of the hidden time series p2(t), p3(t); the climate shift in this space looks like a transition between two areas. However, we analyzed here a rather short time series, which includes only this single transition.
Further expansion of longer data records, taking into consideration several such climate shifts in the past (including PDO transitions from a positive to a negative phase and back), would be the proper step towards constructing a dynamical model suitable for forecasting future events on decadal scales, which strongly affect the global warming scenario. That is a challenging current problem in climate science: modern global climate models hardly predict internal decadal variability. The obtained expansion is interesting from the point of view of global climate analysis, including a description of different phases of the global warming regime. Indeed, the resulting NDMs reflect three main components of global climate variability having characteristic time scales shorter than the duration of the analyzed time series: the annual cycle, the ENSO mode, and a decadal mode associated with PDO behavior. In particular, the single second NDM correctly reproduces the climate shift of 1997–98 (the change of PDO phase): the nature of the shift is clearly manifested in the SST variability captured by this NDM. We now turn to the method. We expand the set of time series X(tn) := Xn into a set of principal modes, each of which is represented as a one-dimensional parametrically defined curve in data space. According to Eq. (1), we first seek the mode F1, having the largest contribution to the data variance, by minimization of the mean square deviations of the residuals between the data vectors Xn and the mode values Φn. The main problem here is the extremely high dimension of climate data defined on a spatial grid, which makes such a minimization computationally very hard. Therefore, a preliminary data truncation is reasonable. For this purpose we first rotate the data to the basis of EOFs, which gives the set of PCs, the new variables Yn = VTXn, ranked in accordance with their variances.
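The preliminary truncation step described above, rotating the space-distributed anomalies to the EOF basis and keeping the leading variance-ranked PCs, can be sketched with an SVD. The SST-like field below is synthetic; the 90% variance cutoff is an illustrative choice, not the paper's.

```python
# EOF rotation and truncation via SVD: Y = X @ V gives variance-ranked PCs.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_grid = 200, 500
X = rng.normal(size=(n_time, n_grid))            # time x grid-point anomalies
X -= X.mean(axis=0)                              # remove the temporal mean

# rows of Vt are the EOFs; Y = X @ Vt.T are the PC time series
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_frac = s**2 / np.sum(s**2)
d = int(np.searchsorted(np.cumsum(var_frac), 0.9)) + 1   # keep ~90% variance
Y = X @ Vt[:d].T                                 # truncated PC time series
print("retained PCs:", d, "of", len(s))
```

The nonlinear mode search then operates only in the d-dimensional PC subspace, which is what makes the minimization over the curves Fi computationally tractable.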
After such a rotation, the main problem is to determine the subspace of the whole space of PCs in which the sought nonlinear curve Fi(.) actually lies. We therefore construct the nonlinear transformation only in the space of the d first PCs, while the other PCs are treated as noise. As in traditional EOF decomposition, we use a recursive procedure for a successive reconstruction of the terms (modes) in Eq. (1). In model (2), a are the unknown parameters of the \u03a6 representation, D is the total number of PCs, and \u03b6 are the residuals, which are assumed to be Gaussian and uncorrelated. Thus, for the reconstruction of the leading mode we should find proper values of both the latent variables pn and the parameters a. Note that it is easy to show that in the special case of linear \u03a6 this strictly corresponds to the traditional EOF rotation: minimization of the squared residuals under the condition |a|\u2009=\u20091 gives the largest EOF in the a vector and the corresponding PC in the p time series. In the framework of the Bayesian paradigm, the cost-function used for learning model (2) is constructed in the form of a probability density function (PDF) of the unknown variables under the condition of the data X1, ..., XN: learning model (2) means finding the global maximum of this PDF over the unknown a and p1, ..., pN. By the Bayes theorem, this PDF can be expressed as the product of the likelihood function L \u2013 i.e. the probability that the data X are obtained by the model (2) with the parameters a and the series p1, ..., pN, followed by the rotation by the matrix V \u2013 and the prior. The likelihood can be easily written under the assumption of normality and whiteness of the noise \u03b6. Though the prior ensemble so defined is quite general, it provides an efficient restriction of the class of possible solutions by excluding short-scale signals from consideration. 
Under the prior constraint (4) we introduced a representation of the functions \u03a6i(.) in (2) as a superposition of polynomials \u03a0i(p) which are mutually orthonormal in the probability measure defined by (4), i.e. their pairwise expectations equal the Kronecker delta \u03b4ij. Such a representation facilitates the model learning procedure substantially, since the problem becomes linear with respect to the parameters a. At the same time it allows increasing the model power by simply adding more orthogonal polynomials in (5). The idea of how to set a prior PDF for the parameters of such a representation is quite apparent: it should be as general as possible, while permitting the functions \u03a6i(.) to have a priori the same variances as the corresponding PC time series Yi. Thus, it is reasonable to define this PDF as a product of Gaussian functions of each parameter. The resulting model is controlled by three hyperparameters: d \u2013 the number of PCs involved in the nonlinear transformation, m \u2013 the number of orthogonal polynomials determining the degree of nonlinearity, and \u03c4 \u2013 the characteristic autocorrelation time of the reconstructed mode. All these values determine the complexity of the data transformation, and therefore should be relevant to the available statistics. For example, if we took a very large m and \u03c4\u2009=\u20090, we would obtain a curve passing through every point, and p1, ..., pN would capture the whole variability in the subspace of the d first PCs; but in this case we would get an overfitted, statistically unjustified model. Thus, a criterion of optimality is needed, allowing the selection of the best model from the model set defined by different values of (d, m, \u03c4). We define the optimality E through a Bayesian evidence function, i.e. 
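For a standard Gaussian measure on p, polynomials that are mutually orthonormal in the sense described here are the normalized probabilists' Hermite polynomials. The sketch below checks this orthonormality numerically; note that the paper's prior measure (4) additionally encodes a temporal correlation scale tau, which this i.i.d. illustration ignores:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

def ortho_poly(n, p):
    """Normalized probabilists' Hermite polynomial He_n(p)/sqrt(n!),
    so that E[ortho_poly(i, p) * ortho_poly(j, p)] = delta_ij
    when p ~ N(0, 1)."""
    coeffs = [0.0] * n + [1.0]   # coefficient vector selecting He_n
    return hermeval(p, coeffs) / math.sqrt(math.factorial(n))

# Monte-Carlo check of orthonormality under the Gaussian measure
rng = np.random.default_rng(1)
p = rng.standard_normal(200_000)
diag = np.mean(ortho_poly(2, p) ** 2)               # close to 1
off = np.mean(ortho_poly(1, p) * ortho_poly(2, p))  # close to 0
```

Because the basis is orthonormal, adding more polynomials raises the model power without coupling the parameters a, which is what makes the learning problem linear in a.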
the probability density of the data X1, ..., XN given the model (d, m, \u03c4). It is clear that the proposed method provides a solution that strongly depends on these three parameters. Minus the logarithm of E is the minimal amount of information required for transferring the given series Y1, ..., YN by the model; so the same criterion could be derived in the framework of the minimal description length principle. The Bayesian evidence function is calculated as the integral of the likelihood over all parameters a and latent variables p1, ..., pN with the prior probability measure. We calculate this integral by the Laplace method, assuming that the main contribution comes from the neighborhood of the maximum of the integrand over all variables (see details in Methods); here \u03c8(.) denotes minus the log of the cost-function, and its second-derivative matrices are taken over the parameters a and the latent variables p1, ..., pN at the point of maximum. The last two terms in (8) penalize the model complexity; they provide the existence of an optimum of E over the model set. Thus, for estimating the optimality of a given model with (d, m, \u03c4), we should learn the model by finding the maximizing values and then evaluate the penalty terms. The main steps of the proposed algorithm are the following. (i.1) Rotating the given data: Yn\u2009=\u2009VTXn, where the columns of the V matrix are the EOFs of the X time series. (i.2) Finding the optimal model in the space of PCs. This step includes finding the optimal dimension d of the subspace for the nonlinear expansion, and estimating the optimal degree of nonlinearity m as well as the characteristic autocorrelation time \u03c4 of the hidden mode; here we also obtain the concrete parameter values, padded with D\u2009\u2212\u2009d zeros, since only d PCs are involved in the nonlinear transformation. (i.3) Subtracting the obtained mode, corresponding to the maximum of the cost-function, from the data vectors; in fact, at this step we set the new data vectors. The iteration of the procedure i.1\u2013i.3 is stopped when we obtain the time series p of the new mode equal to a constant zero or, equivalently, when we find that the optimal degree m of the polynomials (5) is equal to zero. 
This means that we cannot resolve any more nonlinearity in the data, and the noise is most probably significant; in other words, the best we can further do is a traditional EOF decomposition of the residuals. In particular, the expansion of the SST time series into the three NDMs presented in the current paper gives d\u2009=\u20095, m\u2009=\u20093 for the first NDM, d\u2009=\u20094, m\u2009=\u20096 for the second and d\u2009=\u20098, m\u2009=\u20098 for the third. Eventually, the entire nonlinear expansion of the data can be written as the sum of the q obtained NDMs plus residuals, where q is the total number of obtained NDMs. Note that each of the NDMs is defined in its own subspace, which is rotated relative to the initial data space: the orthogonal matrices Vi are different for different NDMs. How to cite this article: Mukhin, D. et al. Principal nonlinear dynamical modes of climate variability. Sci. Rep. 5, 15510; doi: 10.1038/srep15510 (2015)."}
+{"text": "To study a biological phenomenon, such as finding the mechanism of a disease, a common methodology is to generate microarray data under different relevant conditions and to find groups of genes co-expressed across those conditions. These groups might enable us to find biological processes involved in a disease condition. However, a more detailed understanding can be gained when information on a biological process associated with a particular condition is obtained from the data. Many algorithms are available which find groups of co-expressed genes and the associated conditions of co-expression, and these can help in finding the processes associated with a particular condition. However, these algorithms depend on different input parameters for generating groups, and for real datasets it is difficult to use them because the values of these parameters are unknown. We present here an algorithm, clustered groups, which finds groups of co-expressed genes and conditions of co-expression with minimal input from the user. We used random datasets to derive a cutoff on the basis of which we filtered the resultant groups, and showed that this can improve the relevance of the obtained groups. We showed that the proposed algorithm performs better than other known algorithms on both real and synthetic datasets. We have also shown its application on a temporal microarray dataset by extracting biclusters and the biological information hidden in those biclusters. Clustered groups is an algorithm which finds groups of co-expressed genes and conditions of co-expression using only a single parameter. We have shown that it works better than other existing algorithms. It can be used to find these groups in different data types such as microarray, proteomics, metabolomics etc. The online version of this article (doi:10.1186/s12859-016-1356-3) contains supplementary material, which is available to authorized users. 
To capture the behavior of an organism under different experimental conditions, we need a method that simultaneously studies and compares the gene/protein expression levels measured under the different conditions. Many algorithms have been introduced since the year 2000 which extract groups of co-expressed genes and the associated conditions of co-expression from microarray data; among them are the CC algorithm and the ISA algorithm. To overcome two limitations of such methods \u2013 their dependence on several input parameters and the difficulty of choosing parameter values for real data \u2013 we introduce a new algorithm which uses only one parameter, depending on whether we want overlapping biclusters in the result or not. Accordingly, we set the parameter equal to 1 for overlapping biclusters and 0 for non-overlapping ones. In our algorithm, we first discretize each gene and then group genes based on their similar discretized profiles. Finally, we select clusters (or groups) with high correlation coefficient and large size. These high-correlation clusters, together with the discretization information, give the biclusters with both genes and conditions. Our method is similar to \u2018correlation maximization biclustering\u2019 (CMB) methods, which seek subsets of genes and samples where the expression values of the genes (samples) correlate highly among the samples (genes). In the present paper, we first introduce the algorithm and then show its performance on synthetic and real datasets. We then compare our algorithm with other existing algorithms from the literature. Finally, we show the application of our algorithm on a real biological dataset obtained from mouse liver tissue. Given a microarray mRNA data matrix with N genes across C conditions, we need to find groups of co-expressed genes and the conditions of co-expression. We first started with each gene separately and determined the conditions where it is expressed, using an idea similar to one proposed earlier in the literature. Below, we present the steps of the algorithm one by one in detail. 
For a better understanding of the algorithm we provide a small dataset example depicting the application of each step; to enable readers to replicate our results, this small example together with its output biclusters has been provided as an Additional file. Given microarray mRNA data for N genes across C conditions, we first normalized the data by dividing each gene by the square root of the sum of the squares of its expression values across the conditions. To find groups of co-expressed genes and the conditions where this co-expression occurs, we first need to find the condition(s) where each gene is expressed. For this, we used the expression profile of each gene separately, e.g. gene \u2018a\u2019, and identified the condition(s) where gene \u2018a\u2019 shows a high expression value relative to the other condition(s). These condition(s) are obtained by taking the consecutive differences of the sorted absolute normalized expression profile of gene \u2018a\u2019; this identifies the index with the maximum difference. The condition(s) corresponding to this identified index and all indices above it give the condition(s) where the gene is expressed. Once we have the discretized data, in the next step we group genes into different clusters using the GROUP function. After discretizing each gene based on its expression values, we added the sign of expression, i.e. positive or negative depending on its up- or down-regulation. Then, we grouped genes with the same signed discretized profiles into a cluster; each cluster is shown by a point in a 2-D plot in the accompanying figure. The next step of the algorithm generates the cluster correlation matrix, which is used to find cluster pairs sufficiently similar to each other that they could be merged. We termed this step the CORRELATION function; we applied it to the clusters obtained using the GROUP function to get the cluster correlation matrix. 
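The DISCRETIZE step described above (normalize the profile, sort the absolute values, cut at the largest consecutive difference, and sign the retained conditions) can be sketched as follows; details such as tie-breaking are our own assumptions, not the paper's:

```python
import numpy as np

def discretize_gene(expr):
    """Discretize one gene's expression profile (sketch of the
    DISCRETIZE step: conditions above the largest gap in the sorted
    absolute normalized profile are called expressed).

    Returns a vector of 0 / +1 / -1 marking expressed conditions
    with the sign of the expression there.
    """
    expr = np.asarray(expr, dtype=float)
    norm = expr / np.sqrt(np.sum(expr ** 2))  # normalize the profile
    order = np.argsort(np.abs(norm))          # ascending |expression|
    gaps = np.diff(np.abs(norm)[order])       # consecutive differences
    k = int(np.argmax(gaps)) + 1              # first index above the largest gap
    out = np.zeros(len(expr), dtype=int)
    idx = order[k:]                           # that index and all above it
    out[idx] = np.sign(expr[idx]).astype(int)
    return out

profile = [0.1, 0.2, 5.0, -4.8, 0.05]
d = discretize_gene(profile)   # flags conditions 2 (up) and 3 (down)
```

Grouping then amounts to hashing genes by this signed vector, so genes with identical discretized profiles fall into the same cluster.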
This gives, for each query cluster, a set of clusters obtained after discretizing its correlation profile. We then selected those cluster pairs which are present in the discretized correlation profile of each other. Next, we multiplied the correlation profiles for each such pair of clusters (the input cluster pair) and discretized the product of the correlation profiles. We then checked whether the cluster(s) obtained are the same as the input cluster pair; if yes, the two clusters pass the CHECK2 function. Then, for each pair of clusters, we checked whether the average correlation coefficient between them is greater than the minimum correlation of each cluster with itself (for details see Section 3.2); if so, the pair of clusters passes CHECK3. The cluster pairs satisfying all three checks are termed CONSENSUS cluster pairs and are merged using the next step of the algorithm, the MERGE function. In this step, we group the cluster pairs found above by taking the union of the genes and the union of the conditions of the paired clusters, which gives us the merged clusters. The next step, termed the OVERLAP function, provides the user with a choice to obtain overlapping biclusters. Here, the user chooses a predefined parameter (termed the overlap parameter): if the user sets it equal to 1, the biclusters are allowed to overlap and we go to the next step for the final selection of clusters; if it is set to zero, we go directly to the next step for the final selection of clusters. 
In a real dataset, due to its complex patterns, we normally get clusters of different sizes and different correlation coefficients. Clusters with large size and high correlation coefficient contain a large number of genes showing a similar pattern and could be relevant in terms of some biological process, whereas clusters with small size and/or low correlation coefficient can be considered random clusters with little or no functional relevance. So, in this step of the algorithm, which we termed the SELECT function, we separate the relevant clusters from the random ones using a cutoff. The cutoff is a straight line with a normalizing factor relating y, the correlation coefficient, to x, the size of the bicluster, through N, the number of genes in the input matrix, and c, a parameter that can be obtained using random matrix data. To derive the cutoff, we first generated clusters by applying our algorithm to randomly generated genes and inspected their cluster sizes and correlation coefficient values; we asked whether genes with expression values drawn randomly from a normal distribution could form a cluster with large size and high correlation coefficient. For this, we built random data matrices, each of dimension 1000\u2009\u00d7\u200910, where the gene expression values were generated from a normal distribution with mean zero and standard deviation equal to different noise levels. The same procedure was followed as given in Section 3.4, except that in this case no pattern was imposed on the genes. We applied our algorithm to these 1000\u2009\u00d7\u200910 data matrices with three noise levels and plotted the size and correlation coefficient of the resultant clusters in three subplots. This yielded a value of c (say c*) which is independent of the noise level. For a given random data matrix, applying the algorithm, we get, say, K biclusters, each having fixed x and y. 
We then calculated c using the cutoff equation for each noise level i\u2009=\u20091:3. To obtain the value of the parameter c*, we verified that it is independent of the number of genes and the number of conditions of the input data matrix, measuring c* using input data matrices of different sizes as mentioned above. To construct overlapping biclusters, we searched, for each query bicluster, for resulting biclusters that contained the samples of the query bicluster. If the number of such resulting biclusters was greater than zero, we took the union (intersection) of the genes (conditions) of the query and resulting biclusters to create an overlapping bicluster. If we did not find any such biclusters, we searched for resulting biclusters whose samples were a subset of the samples of the query bicluster. If the union of the samples of the resulting biclusters was smaller than that of the query bicluster, we included the query bicluster in the list of overlapping biclusters. This procedure was repeated for each query bicluster. For the synthetic data, a matrix of zeros was created with 100 rows and 100 columns, denoting 100 genes and 100 samples respectively, with the first 10 genes upregulated at the first 10 samples, i.e. the expression values of these genes at these samples equal 1. Similarly, the next 10 genes are up at the next 10 samples. This was repeated, giving a pattern of 10\u2009\u00d7\u200910 sub-matrix blocks along the diagonal of the original matrix. These 10\u2009\u00d7\u200910 sub-matrix blocks represent the ideal biclusters used to calculate the recovery and relevance scores. Then normally distributed random numbers with mean 0 and standard deviation as per the noise levels given in the text were added to the matrix to generate the final matrix. The same procedure was followed for the case where biclusters were overlapping, the expression value in the overlap region remaining 1. For data with zero noise, and for non-zero noise with overlapping clusters, we used a 100\u2009\u00d7\u2009100 matrix. For data with non-zero noise and non-overlapping clusters, we used a 100\u2009\u00d7\u200950 matrix with ten 10\u2009\u00d7\u20095 blocks along the diagonal of the matrix. The real data was collected from the human gene expression data series in the NCBI GEO database (http://www.ncbi.nlm.nih.gov/geo/) with GEO accession GSE2361. 
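The synthetic benchmark construction described above (blocks of ones along the diagonal as the ideal biclusters, plus Gaussian noise) can be sketched as:

```python
import numpy as np

def synthetic_biclusters(n_genes=100, n_samples=100, block=(10, 10),
                         noise=0.1, seed=0):
    """Build the synthetic benchmark matrix described in the text:
    square blocks of 1s along the diagonal (the ideal biclusters),
    plus Gaussian noise of the given standard deviation."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_genes, n_samples))
    bg, bs = block
    for i in range(min(n_genes // bg, n_samples // bs)):
        X[i * bg:(i + 1) * bg, i * bs:(i + 1) * bs] = 1.0  # one ideal bicluster
    return X + noise * rng.standard_normal(X.shape)

X = synthetic_biclusters()                     # ten 10 x 10 ideal biclusters
X2 = synthetic_biclusters(100, 50, (10, 5))    # ten 10 x 5 diagonal blocks
```

Varying the `noise` argument reproduces the different noise levels used for the benchmark matrices.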
It contains the expression of all human genes across 36 normal tissues; the TiGER database was also used. The mouse microarray data were obtained from an experiment where one group of mice was fed a high fat high sucrose diet (HFHSD) (treated group) and another group a normal diet (control group) for certain numbers of days before tissue samples were taken from both groups. Both groups of mice were fed their respective diets over the following time points: Day 1, Day 6, Day 10, Day 14, Week 0, Week 3, Week 6, Week 9, Week 12, Week 15 and Week 18. The experiment was repeated three times. Microarray experiments were then performed on the tissue samples and, after suitable normalization of the signal intensities of each probe using Agilent GeneSpring GX software, three values of log fold change between the control sample and the treated sample were obtained for each probe, at each time, for each tissue. Further details of the experiment are given in the original publication. The data for liver tissue were downloaded from the NCBI repository under GEO accession number GSE63175. The series also contains data pertaining to mice fed the high fat high sucrose diet plus an ayurvedic formulation, which is outside the scope of our present study; these data correspond to the columns with headers \u201cP2_HFx_y\u201d and were removed. The column headers carry the time point of the experiment in days as well as weeks. Weeks were recorded in the experiment as the number of weeks after Day 14; thus, 14 days were added when converting weeks to days. This implies that Day 14 and Week 0 correspond to the same time, and so the information of Day 14 and Week 0 was combined in the final matrix. The final time points in the matrix are therefore Day 1, Day 6, Day 10, Day 14, Day 35, Day 56, Day 77, Day 98, Day 119, and Day 140. 
The data contain 40628 probes corresponding to 29411 gene symbols. Gene symbol information for each probe was taken from the column with header \u201cGene symbol\u201d. There can be multiple probes corresponding to one gene, so we used the following steps to obtain a single value for each gene. Step 1. First, we filtered the data to keep only those genes whose absolute values changed at least 2-fold in all three treated samples at a time point, i.e. whose (log) values lie outside the interval (\u22121, 1), and considered them significantly perturbed genes. If two probes corresponding to the same gene satisfied this condition, the probe with the minimum p-value was chosen. We repeated this process for the data at the different time points and combined (by union) all filtered genes to form a matrix of filtered genes and time points. The matrix elements are the fold change values of the filtered genes inserted at the respective time points. If a gene is not significantly perturbed at some time point, the matrix element of that gene at that time point is initially empty; for these genes, we used the following steps to fill in values at those time points. Step 2. For the selected genes with an empty matrix element at some time point, we checked the probe\u2019s fold change value in all three samples. If all these values lie outside the interval (\u22121, 1), we selected those probes and went to Step 3. If no probe of the selected gene satisfied this criterion, we then selected the probes whose values in all three samples lie within the interval (\u22121, 1) and went to Step 3. Step 3. The selected probe\u2019s average over the three samples was taken if in all three cases the value is greater (or less) than 0. If multiple probes of a gene satisfied this condition, the probe with the minimum p-value was chosen. If no probe among the selected probes satisfied this condition, the probe\u2019s average value over the two (out of three) samples with values greater (or less) than 0 was taken. 
If multiple probes satisfied this condition, the probe with the minimum p-value was taken. For the chosen probe, if the average value lay between \u22120.8 and 0.8, then for simplicity we inserted the number 0.001 into the matrix; otherwise the average value was inserted. For each probe, the mean of the log fold change over the treated samples was calculated, and a p-value signifying the difference between the three control values and the three treated values was generated by a t-test. The resulting matrix contained log fold change values at eleven time points. We combined the Day 14 and Week 0 information in the following way: if a gene is significantly perturbed (in the same direction) at both time points, we took the average value; if it is perturbed in opposite directions, we assigned a small number (0.001) to that gene; if the gene\u2019s value is perturbed at only one of the time points, we used that value in the matrix; if it is unperturbed at both time points, we assigned either one of the non-perturbed values. In the resulting matrix of 10 time points, any gene not perturbed at even a single time point was removed. The resulting matrix contained log fold change values at ten time points for 19303 genes. The matrix was clustered using the default clustergram function of MATLAB, which uses the algorithm of Eisen et al. Here, we benchmarked our algorithm based on two scores, recovery and relevance, and compared them with the best performing algorithms from the literature, such as ISA. We used the same strategy as mentioned in Eren et al., with a score S comparing two bicluster sets M1 and M2, where |M1| is the number of biclusters in bicluster set M1: S(M1,\u2009M2) is the average, over the biclusters b1 in M1, of the maximum similarity s(b1,\u2009b2) over the biclusters b2 in M2. The similarity s is chosen to be the Jaccard coefficient applied to the matrix elements defined by each bicluster, as given in Eren et al. Here, |b1\u2009\u2229\u2009b2| is the number of elements in the intersection of the two biclusters, i.e. 
the number of intersecting genes \u00d7 the number of intersecting conditions common to the two biclusters b1,\u2009b2, and |b1\u2009\u222a\u2009b2| is the number of elements in their union. S takes values between 0 and 1, where 0 means the two biclusters are disjoint and 1 means they are identical; any score between 0 and 1 is the fraction of the total elements shared by both biclusters. Let E be the set of actual biclusters and F the set of output biclusters from the algorithm. The recovery score is calculated as S(E,\u2009F); its maximum value of 1 implies E\u2009\u2286\u2009F, i.e. the algorithm has captured all the ideal biclusters. The relevance score is calculated as S(F,\u2009E); its maximum value of 1 implies F\u2009\u2286\u2009E, i.e. all found biclusters were true biclusters. For a dataset of fixed noise level and fixed overlap degree, we generated 10 data matrices, and on each data matrix we applied the different algorithms to capture biclusters. As a control, we first obtained the actual biclusters from the data matrices before adding noise. We then compared the resulting biclusters with the actual biclusters and calculated the scores using the above formulas. Thus, for each fixed noise level and overlap degree, we obtained 10 recovery and relevance scores for each algorithm and took their means. These mean values, obtained for matrices of different noise levels and overlap degrees, were plotted. To show the importance of the SELECT function in improving the relevance of the obtained biclusters, we also considered a more complex dataset, using an input dataset similar to the one above. For real datasets, we followed the methodology mentioned in Oghabian et al. As a final benchmarking of our algorithm, we used a real dataset from the breast tumor dataset GDS3716 and compared our algorithm\u2019s performance with that of the other algorithms; the dataset and the comparison strategy are similar to those used in Oghabian et al. 
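Following the quoted definitions from Eren et al., the Jaccard similarity and the recovery/relevance scores can be sketched as below, with each bicluster represented as a (gene set, condition set) pair so that its element count is |genes| x |conditions|:

```python
def jaccard(b1, b2):
    """Jaccard similarity between two biclusters, each given as a
    (set_of_genes, set_of_conditions) pair; elements are
    gene x condition cells, as in Eren et al."""
    g1, c1 = b1
    g2, c2 = b2
    inter = len(g1 & g2) * len(c1 & c2)
    union = len(g1) * len(c1) + len(g2) * len(c2) - inter
    return inter / union if union else 0.0

def match_score(M1, M2):
    """S(M1, M2): average over M1 of the best Jaccard match in M2."""
    return sum(max(jaccard(b1, b2) for b2 in M2) for b1 in M1) / len(M1)

E = [({1, 2, 3}, {1, 2})]                  # actual biclusters
F = [({1, 2, 3}, {1, 2}), ({9}, {9})]      # found biclusters
recovery = match_score(E, F)    # 1.0: every true bicluster recovered
relevance = match_score(F, E)   # 0.5: one spurious bicluster drags it down
```

Recovery is S(E, F) and relevance is S(F, E), so an algorithm is penalized both for missing true biclusters and for reporting spurious ones.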
We computed c* (used by the SELECT function) for N\u2009=\u200919303 genes and 10 conditions, obtaining a value of c* of 1.093; the conditions are time points in this case. The microarray data matrix contains log fold change gene expression values for mice under the HFHSD-fed condition compared to the normal diet condition at the different time points; a heatmap of the matrix is shown in the corresponding figure. Biclusters play an important role in extracting information from microarray data, particularly when it contains a temporal dimension. They can help in elucidating the processes perturbed during the experiment under different conditions and can give us mechanistic insights. Extracting such biclusters from microarray data using an algorithm whose input parameters are data-independent is a challenging task. In this work, we have developed an algorithm which uses just one user input for generating biclusters. For this, we primarily used the whole time profile of a gene to find the conditions where the gene is expressed, similar in concept to approaches that use the whole time profile of a gene to decide whether the gene is perturbed or not. In the present study, we have introduced an algorithm to find groups of co-expressed genes and conditions of co-expression. The first advantage of our algorithm is that it is general enough to be used on any kind of high-throughput data matrix. It can give the output biclusters as an overlapping set or as a non-overlapping set depending on the choice of the user; the default mode selects all biclusters with overlap allowed. This default mode was used everywhere in our study except in the section using Synthetic Dataset 2 and in the sections with the liver and cancer data, where complicated biclusters could arise and it is easier to analyze them as non-overlapping sets. The second advantage is that the CG algorithm does not use any parameter like a score cutoff etc., as used by other algorithms. 
This we could attain by combining the discretization step with the grouping step, so that a single parameter can be used to filter biclusters rather than the two parameters usually required for these steps. Finally, by applying our novel method of using random matrix data, we have removed the dependency even on this single parameter, making our algorithm parameter-independent for filtering biclusters. Here, we derived the score cutoff for the clusters from a matrix by comparing it with randomly generated matrices of the same dimension. This means we are deriving the score cutoff of clusters assuming all the genes in the original matrix behave randomly. This is a very conservative estimate, since in a normal data matrix not all genes behave randomly, and there would be genes with some definite pattern that would be captured by our algorithm. So, we can safely say that the selected biclusters from the algorithm are not random and are biologically relevant. The algorithm can be applied to any microarray data, or other high-throughput data such as proteomics data, to find biclusters. Since we have shown that our proposed algorithm performs better in comparison to other algorithms on datasets with unknown noise levels, it is expected that the present algorithm will also perform well on a dataset with a known noise level. Biclusters generated from the algorithm, when integrated with transcriptional networks, can help in finding the transcription factors driving such expression patterns. Also, the selected clusters from two or more microarray datasets can be compared to reveal similarities and differences among the patterns followed by the genes of the two datasets. Biclusters present in high-throughput data are important information to be extracted in order to find the underlying patterns present in the data. 
Available biclustering algorithms use many input parameters to find biclusters. Since, on a real dataset, it is difficult to know a priori the values of these parameters, an algorithm which uses a minimum of input parameters is highly desirable. We proposed here the clustered groups algorithm, which finds groups of co-expressed genes and conditions of co-expression. Despite requiring only a single input parameter, we have shown that our algorithm still works better than other existing algorithms. The algorithm can be used to find such groups in different data types such as microarray (as shown in this study), proteomics, metabolomics etc."}
+{"text": "To quantify gene regulation, a function is required that relates transcription factor binding to DNA (input) to the rate of mRNA synthesis from a target gene (output). Such a \u2018gene regulation function\u2019 (GRF) generally cannot be measured because the experimental titration of inputs and simultaneous readout of outputs is difficult. Here we show that GRFs may instead be inferred from natural changes in cellular gene expression, as exemplified for the cell cycle in the yeast S. cerevisiae. We develop this inference approach based on a time series of mRNA synthesis rates from a synchronized population of cells observed over three cell cycles. We first estimate the functional form of how input transcription factors determine mRNA output and then derive GRFs for target genes in the CLB2 gene cluster that are expressed during G2/M phase. Systematic analysis of additional GRFs suggests a network architecture that rationalizes transcriptional cell cycle oscillations. We find that a transcription factor network alone can produce oscillations in mRNA expression, but that additional input from cyclin oscillations is required to arrive at the native behaviour of the cell cycle oscillator. DOI:http://dx.doi.org/10.7554/eLife.12188.001 Living cells rely on networks of genes to control their behavior, including how they grow, develop and respond to stress. Genes encode instructions needed to make proteins and other molecules, and much of the control is exerted at the first stage of protein production, known as transcription. During this process, a gene is copied to make molecules known as transcripts that may later be used as templates to make proteins. Many genes encode proteins that act to regulate transcription. Therefore, an individual gene may receive inputs from other genes, and these inputs affect how much transcript the gene produces, which can be considered as the gene\u2019s output. 
While these inputs and outputs can often be wired together to form a network, it is less clear exactly how all the different inputs at a gene interact to determine its output. These interactions are known as \u201cgene regulation functions\u201d, and knowing them would be an important step towards understanding gene networks, which would help us to predict how cells will behave in different situations. Gene regulation functions are difficult to measure directly, so researchers would like to find other ways to assess them indirectly. A recently developed experimental technique called \u201cdynamic transcriptome analysis\u201d seemed promising as it measures both the inputs and outputs of all genes in a cell over time. Hillenbrand et al. used this technique to infer gene regulation functions with one or two inputs in yeast cells. Comparing these estimates with experimental data from previous studies showed that these inferred gene regulation functions could successfully predict the output of a gene based on its inputs. Hillenbrand et al. then used these estimates to search and model a well-known genetic network that is thought to be part of the molecular clockwork that controls the timing of events that cause a cell to divide. Currently, the approach used by Hillenbrand et al. treats gene regulation functions like \u201cblack boxes\u201d. This means that, while an output can be predicted if the inputs are known, it cannot reveal all of the detailed mechanisms behind it. Gaining insights into the inner workings of these black boxes will require taking more data into account, such as how abundant the proteins that regulate transcription are, where they are located within cells or whether they are active or not. Therefore, the next challenge is to incorporate these kinds of data to gain a fuller picture of how gene networks operate within cells. DOI:http://dx.doi.org/10.7554/eLife.12188.002 E. 
coli, the measurement of a GRF was demonstrated for a single node with two inputs. Here we develop a method to infer GRFs from DTA data and apply it to cell cycle genes in yeast. We use DTA data providing the mRNA synthesis rates and levels for synchronized S. cerevisiae cells over three cell cycles in two replicate experiments. Our method to infer gene regulation functions (GRFs) from DTA data is illustrated for CLB2 cluster target genes, which are activated by periodically expressed cyclins. We infer a GRF by constructing a parameterized model, which describes the measured target gene output. Because protein levels are unknown, we infer a proxy for the protein level of each input factor; the mRNA level time series for the input factors Fkh2 and Ndd1, together with the inferred proxies, illustrate this step. For Hof1, however, the data points sample only a narrow region in the two-dimensional Ndd1-Fkh1 concentration space of the GRF, such that the inferred two-dimensional GRF is an extrapolation that cannot be verified with this data set. This example illustrates a useful property of our GRF inference scheme: the sampling density in the input space of the GRF already indicates the range over which the GRF inference is supported by the data. The CLB2 cluster target gene Kip2 provides an example of a complex GRF inference task, since it has Fkh1 as a regulatory input in addition to Fkh2 and Ndd1. The Swi4 gene is regulated by Yox1 and Yhp1, which can bind to its ECB promoter element; Mbp1 is not periodically transcribed, and the activity of the MBF complex (Mbp1-Swi6) is likely modulated in a cell cycle dependent manner by cyclin-dependent posttranscriptional regulation. With the Cln2 input set to zero, mimicking the loss of cell cycle dependent cyclin activity, we found that the network has the intrinsic ability to oscillate. 
After an initial disturbance due to the abrupt disappearance of the Cln2 input, regular oscillations were recovered in most genes for replicate 1. While most genes still oscillate, the oscillation period is shortened relative to the native behavior. We established a method to infer gene regulation functions (GRFs) from the intrinsic cellular dynamics of the transcriptome. GRFs are key quantities for the modeling of transcription regulation and provide a basis for the quantitative analysis of functional genetic modules. Inference of GRFs is necessary, since their direct measurement is notoriously difficult while indirect measurements are limited to specific TFs and conditions. Our inferred GRFs agreed well between biological replicates, were able to capture the expression dynamics of the target genes also in an independent test dataset, and correctly predicted whether the effect of a TF on its target is activating or repressing wherever experimental evidence was available. The method identifies the two inputs of a gene that are most significant for the description of the experimental data set. We illustrated the consistency of this reduced description with a more detailed physico-chemical model that takes all known inputs into account. The amount of data needed to infer GRFs rises strongly with the number of regulatory inputs, due to the combinatorial complexity of the task. Given the data that is currently available, our inference of GRFs with two inputs is at the limit of what is possible in a systematic and unbiased way. Clearly, GRFs could be extracted more directly and reliably if the time-dependent protein levels of all input TFs were also measured, as well as their activity and nuclear localization. However, we showed that one can obtain surprisingly consistent GRFs from high-accuracy dynamic transcriptome data alone, which simultaneously provides the transcript levels of the inputs and the mRNA synthesis rates of their targets. 
These GRFs are not based on detailed mechanistic models of the underlying molecular processes, but correspond to effective regulation models which subsume many processes, including the transport of the TFs to and from the nucleus and possibly activation of the TFs, e.g. by phosphorylation. The ability of our method to infer GRFs from dynamic transcriptome data depends on two general conditions. First, prior knowledge about which TFs potentially regulate which targets is required, since the information contained in the time series does not suffice to discriminate the correct regulatory interactions from all possible wrong associations. For the examples that we considered, the prior information provided by the YEASTRACT database proved sufficient. This database lists more input TFs than are relevant under the conditions that we study. However, since the number of irrelevant interactions is small, our method is able to identify the most relevant ones. Second, the information contained in the data must be sufficiently redundant to permit self-consistency conditions to be imposed during the inference procedure. We made use of redundant information derived from the length of the time series (the transcriptome data covers three cell cycles) and the pleiotropy of the cell cycle transcription factors (which must produce consistent effects in multiple targets). Self-consistency conditions are essential for our method to circumvent the need for protein data. One of the most promising applications of GRF inference is the quantitative analysis of functional genetic modules. Modules composed of interacting molecules and genes are widely considered central for the understanding of cellular functions. The dataset used in this work (cDTA for the yeast cell cycle) has been previously published and is available for the cdc28 mutant under accession code E-MTAB-1908. 
To fit GRFs to the time series of input and output signals, we use a score function that quantifies the fraction of the variance in the output signal that is not explained by the candidate GRF with the given input signals. The GRFs are functions of the protein levels of the input TFs. Because these protein levels are not measured, we model the protein dynamics from the mRNA time series with a translation rate. Our GRFs are parameterized via Hill functions. To minimize the score and identify the best-fit GRF, we use simulated annealing with a self-adapting cooling schedule. For comparison, we also used published microarray data; because these data are logarithmic, and each gene is individually normalized, we exponentiated the data and performed a linear transformation to make its range comparable to our dataset. Documented regulatory interactions between TFs and target genes have been downloaded from the YEASTRACT database (http://yeastract.com/download/). We model coupling to the primary cyclin-CDK oscillator by including regulation by Cln2 and Swi4 as additional (additive) activators.In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Detlef Weigel as the Senior Editor. 
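The fitting scheme described above (a score equal to the unexplained fraction of output variance, GRFs parameterized by Hill functions, and simulated annealing to minimize the score) can be illustrated with a minimal sketch. This is not the authors' implementation: a single activating input, a closed-form least-squares fit of the maximal rate, and a simple geometric cooling schedule standing in for their self-adapting one are all simplifying assumptions.

```python
import math, random

def hill(x, K, n):
    # Activating Hill function shape with half-saturation K and coefficient n.
    return x ** n / (K ** n + x ** n)

def score(K, n, inputs, outputs):
    # Fraction of output variance NOT explained by the candidate GRF.
    # The maximal rate vmax is fitted in closed form by least squares.
    h = [hill(x, K, n) for x in inputs]
    vmax = sum(y * v for y, v in zip(outputs, h)) / sum(v * v for v in h)
    mean_y = sum(outputs) / len(outputs)
    ss_tot = sum((y - mean_y) ** 2 for y in outputs)
    ss_res = sum((y - vmax * v) ** 2 for y, v in zip(outputs, h))
    return ss_res / ss_tot

def anneal(inputs, outputs, steps=5000, seed=1):
    # Simulated annealing over (K, n); geometric cooling is a stand-in for
    # the self-adapting schedule used in the paper.
    rng = random.Random(seed)
    cur = [1.0, 1.0]
    cur_s = score(cur[0], cur[1], inputs, outputs)
    best, best_s = list(cur), cur_s
    for i in range(steps):
        temp = max(1e-8, 0.999 ** i)
        # Multiplicative log-normal proposal, clamped to a sane range.
        cand = [min(100.0, max(0.01, p * math.exp(0.15 * rng.gauss(0, 1))))
                for p in cur]
        cand_s = score(cand[0], cand[1], inputs, outputs)
        if cand_s < cur_s or rng.random() < math.exp((cur_s - cand_s) / temp):
            cur, cur_s = cand, cand_s
            if cur_s < best_s:
                best, best_s = list(cur), cur_s
    return best, best_s
```

On noise-free synthetic data generated from a Hill function, the returned score should approach zero, indicating that nearly all output variance is explained.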
Thank you for submitting your work entitled \"Inference of gene regulation functions from dynamic transcriptome data\" for consideration by eLife. One of the two reviewers has agreed to reveal his identity: Hernan Garcia. The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.Summary:Gene regulatory functions (GRFs) are the fundamental unit of any quantitative description of transcriptional programs. These functions describe the output level of gene expression as a function of the input concentrations of activators and repressors and of the binding site arrangement of these molecules on regulatory DNA. The measurement of these GRFs has been demonstrated repeatedly in the context of synthetic constructs, where DNA regulatory architecture and input transcription factor concentration can be precisely controlled. However, the field is missing technology to go beyond synthetic circuits and systematically expand these dissections to endogenous gene regulatory circuits. Only with the combined ability to assay synthetic and endogenous gene circuits can we develop a predictive understanding of the gene regulatory code underlying cellular decision programs.In this manuscript, Hillenbrand et al. develop a computational approach to infer GRFs from endogenous RNA-Seq data sets. They use mRNA data of oscillatory genes in order to infer the protein concentration of input transcription factors. This inferred protein dynamic is combined with the output mRNA dynamics of target genes in order to obtain GRFs. These GRFs are put in the context of the underlying DNA regulatory architecture using theoretical models based on equilibrium statistical mechanics. 
Finally, they show how this approach can go beyond the quantification of GRFs to also provide a means to map gene regulatory networks and their quantitative parameters.Essential revisions:For acceptance at eLife, as rigorous as possible validation of the model is required. This can include checking resulting parameters against published values, as well as against published or novel experimental data confirming any of the parameters, such as through knockout or knockdown or overexpression experiments.Given that the inferred GRFs are the central output of our method, an as rigorous as possible validation of the GRFs is indeed key to establishing our approach. Our validation rests on multiple independent pieces of evidence. Several of these are based on consistency checks that can be applied on the level of individual GRFs:1) By construction, our method includes a self-consistency check, which relies on redundancies in the data. Specifically, we used the redundancy created by the length of the time series (the transcriptome data covers three cell cycles) and the pleiotropy of the cell cycle transcription factors (which must produce consistent effects in multiple targets). In our analysis of cell cycle transcription factors, we only accept GRFs which produce a consistent description of the data. Of course, this by itself is not sufficient to assure the validity of a GRF; however, it is a useful first test. This criterion also led us to the conclusion that the transcription rate time series of the swi4 gene is not adequately described by regulatory inputs from Yhp1 and Yox1 alone, but has other significant inputs from the cyclin-CDK oscillator.2) We performed a qualitative test of our inferred GRFs against experimental data, by comparing the shape of the GRFs with the known roles of the input transcription factors as transcriptional activators or repressors. In our analysis of cell cycle transcription factors, we found that all best-scoring GRFs correctly predicted whether the regulatory effect of a transcription factor is activating or repressing, whenever independent experimental evidence was available.3) Furthermore, transcription factors that form complexes at their target genes are expected to act cooperatively, such that both TFs are needed to exert the regulatory function. Our inference scheme indeed predicted a cooperative AND-like regulation function for target genes of the Clb2 cluster, which are regulated by Fkh2 and Ndd1 in a complex with Mcm1.4) The fact that the DTA data used for our GRF inference contains two biological replicates allowed us to perform our analysis independently on each replicate. We found that the shapes of the resulting GRFs agree well between these two replicates.5) A dynamic model based on our GRFs predicted that the oscillatory behavior of ash1 is particularly sensitive to perturbations of the cyclin-CDK oscillator, consistent with experiments that find ash1 to be non-oscillating in some cyclin mutant strains. Thereby, we indirectly validated our GRFs by demonstrating their effectiveness to describe the in vivo behavior of a functional genetic module.To go beyond consistency checks for individual GRFs, we tested whether the separately inferred GRFs of mutually regulating genes could be combined into a dynamic model that coherently describes the observed transcription time-series of these genes. 
We focused on the dynamics of ten transcription factors that display strong cell cycle oscillations and are interconnected into a transcriptional network. A model based on the inferred GRFs was able to capture the synergistic dynamics of these genes.Taken together, these tests indicate that the inferred GRFs are adequate coarse-grained descriptions of the complex molecular processes that determine the transcription rate of the associated genes, at least over the range of input concentration values that are present in our calibration data set. As pointed out by the Reviewers, it would be desirable to test whether the inferred GRFs are also valid beyond this range. One way to test this would be to repeat the DTA experiments of Eser et al. (2014) with mutant strains, in which the dynamics of cell cycle transcription factors is significantly perturbed. However, the cost (>100 k\u20ac) and required time is prohibitive for such experiments to be done for validation purposes. In response to the comments of the Reviewers, we instead used published data from other groups to probe the ability of our GRFs to predict regulatory effects outside the regime of the calibration data set.First, we considered microarray data from \u03b1-Factor synchronized yeast cells, which follows transcript levels over two cell cycles at 5 min intervals. We asked whether the same GRFs that we had previously inferred from the data of Eser et al. (2014) would also be able to describe the mRNA dynamics of this dataset, which we had not used to calibrate our GRFs. Published experiments perturbing the S. cerevisiae transcriptional cell cycle oscillations found a strongly reduced expression of yox1 with a weak rapid oscillation (period ~45 min). 
This phenotype is recapitulated by the model prediction in panel A of the new figure.Second, we considered cell cycle experiments with mutant strains and tested whether a model based on our inferred GRFs is able to recapitulate observed phenotypes. Since the deletion of a regulator can affect the expression of its target gene both directly and indirectly, we used our dynamic model for the transcriptional cell cycle oscillations as a basis. In a first case, the swi4 gene is considered in a \u2206yox1\u2206yhp1 double mutant background, as studied experimentally by Pramila et al. (2002). The microarray data exhibits a delayed and longer peak expression, which reaches into S and G2 phase. Our model of the transcriptional cell cycle oscillator qualitatively reproduces this behavior when yox1 and yhp1 expression is set to zero. It should be noted, however, that the effect is significantly more pronounced in the model than in the data, suggesting that our model for S. cerevisiae transcriptional cell cycle oscillations is missing a mechanism that buffers against the effects of the \u2206yox1\u2206yhp1 double mutation.A second case is shown in panel B of the same figure, where we consider the expression of rnr1, a target gene of the transcription factors Swi4 and Mbp1. Both of these DNA-binding proteins are known to act together with the co-regulator Swi6, via the SBF (Swi4-Swi6) and MBF (Mbp1-Swi6) complexes. Our calibration data set demonstrates a significant cell-cycle dependence for the transcription of swi4, whereas both mbp1 and swi6 show no significant periodic modulation of their transcription rates. The swi4 deletion mutant nevertheless exhibits an oscillatory expression of rnr1, with a delayed peak time and an increased amplitude. As shown by Bean et al. (2005), rnr1 is in fact redundantly regulated by SBF and MBF, such that only the swi4-mbp1 double mutant displays an rnr1 expression that is essentially cell-cycle-independent. 
The MBF-mediated regulatory input is likely due to the known cyclin-dependent posttranscriptional regulation of the MBF complex.While the qualitative agreement obtained in panels A and B suggests that our inferred model indeed captures important aspects of the transcriptional regulation of cell cycle genes, it is clear that it will fail to predict the effect of deletions that unmask post-transcriptional effects. This is illustrated in panel C of the new figure.Taken together, our validation against published data illustrates both the predictive power and the limitations of the GRFs inferred with our method. The quantitative reconstruction of in vivo GRFs is still in its infancy, and we believe our work establishes the first general and systematic GRF inference based on dynamic transcriptome data. In the future, GRF reconstruction could be taken to the next level by combining dynamic transcriptome data with simultaneously measured dynamic information on protein levels and protein localization. Alternatively, or in addition, GRF reconstruction could be further improved by simultaneous inference from DTA data for the wild-type as well as TF knockout strains. Both of these future directions are discussed in the Discussion section."}
+{"text": "Dynamic changes in biological systems can be captured by measuring molecular expression from different levels. Integration of such data aims to identify molecules that show similar expression changes over time; such molecules may be co-regulated and thus involved in similar biological processes. Combining data sources presents a systematic approach to study molecular behaviour. It can compensate for missing data in one source, and can reduce false positives when multiple sources highlight the same pathways. However, integrative approaches must accommodate the challenges inherent in \u2018omics\u2019 data, including high-dimensionality, noise, and timing differences in expression. As current methods for identification of co-expression cannot cope with this level of complexity, we developed a novel algorithm called DynOmics. DynOmics is based on the fast Fourier transform, from which the difference in expression initiation between trajectories can be estimated. This delay can then be used to realign the trajectories and identify those which show a high degree of correlation. Through extensive simulations, we demonstrate that DynOmics is efficient and accurate compared to existing approaches. We consider two case studies highlighting its application, identifying regulatory relationships across \u2018omics\u2019 data within an organism and for comparative gene expression analysis across organisms. High-throughput \u2018omics\u2019 platforms such as transcriptomics, proteomics, and metabolomics enable the simultaneous monitoring of thousands of biological molecules, typically through a single static experiment. Time course experiments instead follow expression over time and can identify, e.g., a subset of molecules for which expression changes occur simultaneously across time. The statistical analysis of dynamic \u2018omics\u2019 experiments is difficult. 
Applying traditional statistical methods for static experiments is limited, since each time point will be treated as independent, ignoring potentially important correlations between sampling times. Indeed, realising the potential power offered in time course studies to investigate a wide variety of changes is nontrivial. Analytical challenges are further complicated by noise, small sample sizes per time point, and few sampled time points. In the past decade, several methods have been proposed to analyse time course \u2018omics\u2019 data, with a particular focus on microarray and RNA-Seq data. These methods perform differential expression analysis using spline fitting. Delays can also hinder gene expression comparisons across organisms, since even highly conserved processes may vary in timing. The pre-implantation embryonic development (PED) is a highly conserved process across mammals, reflected through the progression of the same morphologic stages. Existing alignment methods typically operate at the sample level rather than at the molecule expression level. The most commonly used approach for molecules is to consider the Pearson correlation, which does not account for delays. The cross-correlation, a.k.a. Pearson cross-correlation for lagged time series, circumvents this limitation by introducing artificial delays or lags in the time expression profiles for every possible time shift. The method eventually applies the delay that maximises the correlation with the original profile, but can be prone to overestimation of delay.To date, very few methods for time course \u2018omics\u2019 data account for time delay between molecule expression levels, among them the method of Aijo et al. More sophisticated approaches for time course \u2018omics\u2019 data, such as that of Shi et al., come at the expense of computational cost. While integrating time course experiments from different \u2018omics\u2019 functional levels is the key to identifying dynamic molecular interactions, its challenging nature has thus far prevented much methodological development. 
Difficulties lie not only in the computation required by complex algorithms, but also in variation in types of correlation, levels of noise in the expression profiles, and the delays themselves. We present DynOmics, a novel algorithm to detect, estimate, and account for delays between \u2018omics\u2019 time expression profiles (available at https://bitbucket.org/Jasmin87/dynomics). The algorithm is based on the fast Fourier transform (FFT), which allows trajectories to be realigned and assessed as associated, while ignoring the noise in each time expression profile. The expression changes of molecules monitored in time course experiments often form simple temporary, sustained or cyclic patterns that can be modelled as mixtures of oscillating/cyclic patterns using the discrete Fourier transform (FT). For a given time series x = (x_1, ..., x_T), measured at time points t = 1, ..., T, the FT decomposes x into circular components or cyclic patterns X_k for each frequency k = 1, ..., T-1. Since the component at frequency k = 0 simply describes the y-axis offset, this frequency is not included in our analysis. Each component has a real part a_k and an imaginary part b_k, X_k = a_k + b_k i. For each frequency k = 1, ..., T-1 we can calculate the amplitude r_k = sqrt(a_k^2 + b_k^2) of the component, which quantifies the contribution of the kth cyclic pattern to the overall trajectory; the pattern with maximum amplitude is taken as the main pattern. Arg(X_k) is the offset of the cyclic pattern, which we transform to the phase angle (delay) \u03d5k in degrees. Together, the amplitude and phase angle describe each frequency component, and the set of these quantities is known as the frequency domain representation. Consider a reference x and query y given in frequency domain representation. First, we identify K as the frequency of the pattern with maximum amplitude for x, i.e., the main reference pattern frequency. 
Then, for both x and y, we extract phase angles at this frequency, \u03d5x\u2009=\u2009\u03d5xK, \u03d5y\u2009=\u2009\u03d5yK, and define \u0394xy\u2009=\u2009\u03d5x\u2009\u2212\u2009\u03d5y as the difference between the phase angles. In FFT literature, \u0394xy is often expressed in the range of \u2212180 to 180 degrees. To simplify representation in DynOmics, when \u0394xy\u2009<\u20090, we add 360 so that \u0394xy is in the range of 0 to 359. \u0394xy indicates both the sign of the correlation between x and y and the sign of the delay: x and y can be either positively or negatively correlated. The estimated delay is then used to realign the trajectories, and the Pearson correlation coefficient between the realigned trajectories xl and yl quantifies their association. Specifically, we consider two candidate delays, \u03b40, obtained directly from the phase-angle difference, and \u03b41, the result of an optimization around \u03b40; the candidate yielding the higher absolute correlation is retained.We compared DynOmics performance with currently available methods. With 14 time points, all methods performed similarly, including DynOmics. Pearson correlation, which does not take time delays into account, performed the worst in terms of sensitivity in all scenarios, demonstrating that ordinary correlation measures are not sufficient to detect associations when trajectories are delayed. A detailed description of the simulation study and the results is also provided. DynOmics is implemented in R and uses the FFT implemented in the function fft from the stats R package. The study of Dong et al. focused on negative associations, i.e., increased miRNA expression levels associated with decreased (inhibited) mRNA expression levels, or vice versa. Associations were declared for all miRNA-mRNA pairs whose Pearson correlation coefficient was <\u22120.9. 
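Although DynOmics itself is implemented in R, the core of the delay estimation described above (phase angles at the reference's dominant FFT frequency, a phase-angle difference mapped into the 0-360 range, conversion to a lag in sampling intervals, realignment, then Pearson correlation) can be sketched in Python. The direct O(T^2) DFT and the modulo realignment are simplifications for illustration, not the package's code.

```python
import cmath, math

def dft(x):
    # Direct discrete Fourier transform: X_k = sum_t x_t * exp(-2*pi*i*k*t/T).
    T = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / T) for t in range(T))
            for k in range(T)]

def dominant_phase(x, k=None):
    # Phase angle (degrees) of the frequency component with maximum amplitude
    # (k = 0, the offset, is excluded); a frequency k may also be imposed.
    X = dft(x)
    T = len(x)
    if k is None:
        k = max(range(1, T // 2 + 1), key=lambda j: abs(X[j]))
    return k, math.degrees(cmath.phase(X[k]))

def estimate_delay(x, y):
    # Phase-angle difference at the reference's dominant frequency, mapped
    # into [0, 360) as in the text, then converted to sampling intervals.
    K, phi_x = dominant_phase(x)
    _, phi_y = dominant_phase(y, k=K)
    delta = phi_x - phi_y
    if delta < 0:
        delta += 360.0
    T = len(x)
    return round(delta / 360.0 * T / K) % T

def pearson(u, v):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = math.sqrt(sum((a - mu) ** 2 for a in u))
    dv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv)
```

For a sine wave and a copy delayed by three sampling intervals, the estimated lag is 3, and after realignment the two trajectories are almost perfectly correlated.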
The mRNAs associated to a given miRNA were compared with miRNA targets predicted from sequence similarity by microRNA.org (http://www.microrna.org), TargetScan release 6.2 (http://www.targetscan.org/mmu_71) and miRDB Version 5 (http://mirdb.org/miRDB), and pathway analysis was performed with Ingenuity\u00ae Pathway Analysis (IPA).We compared associations detected between miRNA and mRNA pairs for both raw and LMMS modelled trajectories, using either classical Pearson correlation (on raw and LMMS modelled data) or DynOmics (on LMMS modelled data). MiRNAs are known to be able to target transcription regulators and therefore lead to the indirect expression of many mRNAs downstream. For the comparison across organisms, we used the data of Xie et al. Seven human transcripts did not match any identifier in the orthology file and were removed, and a total of 81,966 orthologous transcript pairs were analysed, where references and/or queries may have been included in multiple pairs. We applied DynOmics to every orthologous transcript pair to assess delays in expression levels between organisms and declared association when the absolute correlation exceeded 0.9. Pathway enrichment analysis was performed using IPA.We first converted the cell stages (zygote to blastocyte) into quantitative time points (one to seven) for input into modelling. For each organism, expression trajectories were modelled using LMMS with 14 regularly spaced time points. Human transcripts were taken as references, with reference-query pairs restricted to orthologous sequences with mouse and bovine as specified in the Affymetrix file HG-U133_Plus_2.na26.ortholog.csv (http://www.affymetrix.com/support/technical/byproduct.affx?product=hgu133plus). Firstly, focusing on the miRNAs as reference trajectories, we compared the performance of Pearson correlation on the raw and LMMS modelled data. We defined the average agreement as the number of associations identified in common between the two methods divided by the number of associations observed by one method, averaged over all miRNAs. We found the highest agreement for trajectories without delay (i.e., a delay of 0). 
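The average-agreement measure defined above can be read as: per reference miRNA, the size of the overlap between the two methods' association sets divided by the number of associations found by one method, then averaged over miRNAs. A minimal sketch under that reading follows; the choice of method A as the denominator is an assumption, and the identifiers are made up.

```python
def average_agreement(assoc_a, assoc_b):
    # assoc_a and assoc_b map each reference molecule (e.g. a miRNA) to the
    # set of queries a given method declared associated with it.
    # Per reference: |A intersect B| / |A|, averaged over references that
    # have at least one association under method A.
    scores = []
    for ref, a in assoc_a.items():
        if not a:
            continue
        b = assoc_b.get(ref, set())
        scores.append(len(a & b) / len(a))
    return sum(scores) / len(scores) if scores else 0.0
```

With toy sets {g1, g2} vs {g2} and {g3} vs {g3, g4}, the per-miRNA agreements are 0.5 and 1.0, giving an average of 0.75.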
However, the percentage of overlap was still small. Pathway enrichment on the associated mRNAs identified, for mmu-let-7g, the \u2018Axonal Guidance Signaling\u2019 pathway (P\u2009=\u20094.0\u2009\u00d7\u200910\u221211), and for mmu-miR-134 the \u2018Mitotic Roles of Polo-Like-Kinase\u2019 pathway (P\u2009=\u20091.29\u2009\u00d7\u200910\u22128). These pathways have been described as being involved in either embryonic or lung development. Phospholipase C was associated with fetal lung cell proliferation in rats.Finally, we investigated whether the putative delays were of biological relevance for miRNA-mRNA pairs. Three miRNAs in particular, mmu-miR-429, mmu-let-7g, and mmu-miR-134, were associated with a large number of negatively delayed mRNAs.We applied DynOmics to identify delays in orthologous transcript expression of mouse and bovine relative to human during PED. For an absolute correlation threshold of 0.9, we identified 32,329 (67%) orthologous pairs as being associated between human and mouse, and 26,769 (80%) between human and bovine. The most strongly enriched pathways included EIF2 Signaling (P\u2009=\u20097.94\u2009\u00d7\u200910\u221218), mTOR Signaling (P\u2009=\u20095.64\u2009\u00d7\u200910\u221212) and regulation of eIF4 and p70S6K Signaling (P\u2009=\u20095.72\u2009\u00d7\u200910\u221211). EIF2 Signaling and eIF4 and p70S6K Signaling play an important role in translation regulation and mTOR Signaling is an important pathway in embryonic development. EIF2 Signaling (P\u2009=\u20091.75\u2009\u00d7\u200910\u221225) and the regulation of eIF4 (P\u2009=\u20093.48\u2009\u00d7\u200910\u22120.9) were also found to be enriched in bovine; however, the genes involved in these pathways changed expression after human expression changes.Pathway analysis using IPA was performed on the human orthologs for the three types of delay relative to mouse or bovine orthologs. 
As an illustrative example we display the trajectories of the orthologous transcripts involved in EIF2 Signaling in human and mouse with respect to the type of delay.We also performed enrichment analyses for human orthologs for all transcripts identified as associated, across all three types of delay. We highlight the conserved process of Acetyl-CoA Biosynthesis I, since it did not occur in the enrichment analyses of the delayed orthologs considered individually. Acetyl-CoA levels were found to play a role in the acetylation of proteins and may play a role in regulation of embryogenesis.To date, very few methods have been developed to integrate time course \u2018omics\u2019 data that are robust to delays in expression between co-expressed molecules. The integration task is particularly challenging as the data are often characterised by a high level of noise and measured on a small number of time points. Our algorithm DynOmics addresses these challenges by modelling time course trajectories, identifying delays and re-aligning trajectories to determine the degree of mutual dependency between reference and query trajectories. Modelling also reduces noise; in the Lung Organogenesis case study, it ultimately increased the number of findings by considerably reducing the amount of noise in the data. In addition, modelling each trajectory as a noisy function of time allows integration of datasets with different time intervals or numbers of time points, as we demonstrated in the mammalian embryonic development case study. An additional analytical challenge may occur when the time series, besides temporal or cyclic changes, increases or decreases constantly over time. Time dependent trends are rarely observed in biological time course experiments, as those often lack the sampling resolution to observe such trends. 
If a time-dependent trend were to be observed, it should be removed prior to DynOmics analysis, for example by extracting the residuals from a linear regression of the expression values against time. Modelling time course trajectories is an important step in this process, as most methods developed to integrate time course data, such as DTW4Omics. The selection of an appropriate threshold to define associations between co-expression trajectories is not trivial, and depends on the characteristics of the data themselves. For our analyses, we specified a correlation threshold of 0.9, as we were only interested in highly concordant expression trajectories. e.g., lung cell proliferation, lung branching and alveolar development. Thus, in this context, DynOmics has the potential to identify novel targets of miRNAs to aid in therapeutic development. Our study emphasises the importance of jointly studying miRNA and mRNA expression to understand the mechanisms of miRNA regulation. The role of miRNAs as gene expression regulators is an exciting new subject of study, as it is estimated that they control one-third of the expression of the human genome314950. Model organisms present a simpler and more convenient alternative to directly studying disease in humans. In the mammalian pre-implantation embryonic development study, we showed that DynOmics could identify delayed conserved expression between different organisms. This is a challenging task, as timing differences of expression changes can occur both in metabolic processes and across organs for different organisms. Currently, DynOmics has been used to identify associations between datasets of moderate size. The computational time would increase for large datasets. One solution could be to cluster profiles prior to applying DynOmics, to identify specific patterns of interest over time as queries and/or references.
As the algorithm is based on independent pairwise comparisons, parallel computing could also be used to decrease the computational burden. Alternatively, as shown in the Lung Organogenesis study, the DynOmics analysis can be performed on a smaller number of queries selected based on prior knowledge or biological assumptions. Delays in molecular expression are an acknowledged and important phenomenon in many areas of biology. Here we demonstrated the need for and value of methods that are robust to delays, by showcasing the benefit of accurate delay estimates to interpret response dynamics and identify conserved molecular mechanisms. DynOmics overcomes the challenge of integrating data with timing differences of expression changes and therefore presents an effective tool to study time-sensitive molecular expression. The integration of multiple time course \u2018omics\u2019 data is becoming necessary in order to understand a biological system\u2019s formation, actions and regulation with high confidence. Our algorithm DynOmics provides a unique opportunity to study molecular interactions between multiple functional levels of a single system or multiple organisms, and paves the way for deeper biological time course analyses to investigate and unravel novel biological mechanisms. How to cite this article: Straube, J. et al. DynOmics to identify delays and co-expression patterns across time course experiments. Sci. Rep. 7, 40131; doi: 10.1038/srep40131 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations."}
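The DynOmics preprocessing described above has two concrete steps: remove any linear time trend by keeping the residuals of a regression of expression against time, then call a reference/query pair associated when the absolute correlation meets a threshold (0.9 in the paper). The following is a minimal Python sketch of those two steps only; it is not the DynOmics R implementation, and the function names and toy trajectories are invented for illustration.

```python
import numpy as np

def detrend(expr, times):
    """Remove a linear time trend by regressing expression on time
    (ordinary least squares) and keeping the residuals."""
    X = np.column_stack([np.ones_like(times, dtype=float), times])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    return expr - X @ beta

def associated(ref, query, threshold=0.9):
    """Call a reference/query trajectory pair 'associated' when the
    absolute Pearson correlation reaches the threshold."""
    r = np.corrcoef(ref, query)[0, 1]
    return abs(r) >= threshold

times = np.arange(7, dtype=float)
# A cyclic profile superimposed on a rising linear trend (toy data).
raw = np.sin(times) + 0.8 * times
resid = detrend(raw, times)  # trend removed before association testing
print(associated(resid, detrend(np.sin(times), times)))  # -> True
```

Because detrending is linear, the residuals of the trended profile match the residuals of its cyclic component exactly, so the association survives the detrending; the raw trended profile would not correlate as cleanly with the cyclic reference.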
+{"text": "Long-term alcohol use can result in lasting changes in brain function, ultimately leading to alcohol dependence. These functional alterations arise from dysregulation of complex gene networks, and growing evidence implicates microRNAs as key regulators of these networks. We examined time- and brain region-dependent changes in microRNA expression after chronic intermittent ethanol (CIE) exposure in C57BL/6J mice. Animals were sacrificed at 0, 8, and 120h following the last exposure to four weekly cycles of CIE vapor and we measured microRNA expression in prefrontal cortex (PFC), nucleus accumbens (NAC), and amygdala (AMY). The number of detected (395\u2013419) and differentially expressed microRNAs was similar within each brain region. However, the DE microRNAs were distinct among brain regions and across time within each brain region. DE microRNAs were linked with their DE mRNA targets across each brain region. In all brain regions, the greatest number of DE mRNA targets occurred at the 0 or 8h time points and these changes were associated with microRNAs DE at 0 or 8h. Two separate approaches were combined with pathway analysis to further characterize the temporal relationships between DE microRNAs and their 120h DE targets. We focused on targets dysregulated at 120h as this time point represents a state of protracted withdrawal known to promote an increase in subsequent ethanol consumption. Discrete temporal association analysis identified networks with highly connected genes including ERK1/2 , Bcl2 (in AMY networks) and Srf (in PFC networks). Similarly, the cluster-based analysis identified hub genes that include Bcl2 (in AMY networks) and Srf in PFC networks, demonstrating robust microRNA-mRNA network alterations in response to CIE exposure. In contrast, datasets utilizing targets from 0 and 8h microRNAs identified NF-kB-centered networks (in NAC and PFC), and Smad3-centered networks (in AMY). 
These results demonstrate that CIE exposure results in dynamic and complex temporal changes in microRNA-mRNA gene network structure. Excessive, chronic alcohol use can evoke persistent alterations in brain function that result in alcohol dependence , 2. SuchDrd1 [Bdnf [Studies have implicated microRNAs in both human \u201313 and aDrd1 and braid1 [Bdnf , 21. Impd1 [Bdnf , 9, 11 bThe chronic intermittent ethanol vapor (CIE) paradigm is known to increase voluntary ethanol consumption in rodents and is considered to be a model of ethanol dependence \u201324. ReceWe report microRNA expression changes in three brain regions at three time points following 4 cycles of CIE vapor in mice. For each brain region, differentially expressed (DE) microRNAs were paired with their putative mRNA targets DE in our previous study . ImportaAll procedures were approved by the Medical University of South Carolina Institutional Animal Care and Use Committee and adhered to NIH Guidelines. The Medical University of South Carolina animal facility is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care.Adult male C57BL/6 mice, purchased from Jackson Laboratories , were used in this study. Mice were individually housed under a 12-hr light/dark cycle (lights on at 4:00 AM) in a temperature- and humidity controlled- animal facility. The animals had free access to food (Teklad rodent diet) and water throughout the experiment. The study began after an acclimation period (one week) and mice were monitored daily by the animal facilities staff and the research technicians.Chronic ethanol inhalation procedures, tissue harvest, and brain dissection methods were as previously described \u201329. Briehttp://www.ncbi.nlm.nih.gov/geo/) under accession number GSE90608. 
Arrays were hybridized with material from a single animal (no pooling); thus, 144 samples were profiled.Total RNAs from amygdala (AMY), nucleus accumbens and prefrontal cortex (PFC) were shipped to the Molecular Genomics Core Facility at Moffitt Cancer Center . Samples were biotin-labeled using the FlashTag Biotin HSR RNA Labeling Kit and hybridized to GeneChip miRNA 3.0 arrays according to manufacturer instructions. This platform used annotations from miRbase version 17 and contained probe sets for over 19,000 mature microRNAs from 153 species. Transcript abundance was measured by fluorescent intensity after scanning with the Affymetrix GCS3000 scanner and generation of cel files with Affymetrix AGCC v3 software. MicroRNA array data have been submitted to the NCBI Gene Expression Omnibus (GEO) (http://bioconductor.org) designed for the statistical language R (http://www.r-project.org) and Microsoft Excel unless otherwise noted. Data from each brain region were analyzed independently. Gene Expression Console was employed for data preprocessing and idend, DABG . Differend, DABG to compand, DABG . In thatwww.qiagen.com/ingenuity) to associate microRNAs with their previously reported DE mRNA targets [Two separate approaches were used to uncover the temporal relationship(s) between microRNAs and their downstream targets in each brain region. Both methods used the \"microRNA target filter\" utility in Ingenuity Pathway Analysis at each time point. To maintain consistency with previous analyses, we used linear fold changes and an FDR cutoff of 0.05 for targets DE at 0 or 8h and a nominal p value cutoff of 0.05 for targets DE at 120h. Only microRNA-mRNA associations consistent with changes in microRNA proceeding changes in mRNA were considered. For example, microRNAs DE at 8h were paired with targets from the 8 and 120h time points but not the 0h time point. In all, six paired datasets were created for each brain region . Paired www.graphpad.com. 
The number of unique, non-overlapping expression patterns indicates the number of assigned clusters for each brain region. For each identified microRNA cluster, IPA was used to construct a network relating the microRNAs with their DE gene targets reported in [An alternative approach, hierarchical clustering, used centered and scaled log ratios of DE microRNAs from all time points per brain region with the package \"clValid\" (version 0.606) . This R orted in using thMicroRNAs were profiled in AMY, NAC and PFC at three different time points after ethanol vapor treatment. After preprocessing, both the number of detected (395\u2013419) and DE (42\u201347) microRNA probesets was similar within each brain region. Despite most of the detected microRNAs being expressed in all brain regions, the DE probes were distinct for each brain region . There wDE gene targets of the ethanol-responsive microRNAs were identified for each time point. Both experimentally validated and predicted targets see were utiTo evaluate the role of microRNAs in the persistent (120h) changes in gene expression, paired DE microRNAs and associated 120h DE targets were assessed with IPA's \"Core Analysis\" to elucidate the predominant biological functions represented in these data. was a highly-connected member. These same networks contained other highly connected molecules, including Bcl2 (in AMY networks) and Srf (in PFC networks). In NAC and PFC, datasets utilizing targets from 0 and 8h microRNAs produced NF-kB-centered networks, while the same datasets in AMY generated networks sharing Smad3.As part of the \"Core Analysis\", IPA uses \"network eligible molecules\" (those molecules in the dataset that interact with other molecules in the IPA knowledgebase) as \"seeds\" to generate networks with a high degree of connectivity relative to all molecules in the knowledgebase. 
These networks provide insight to molecule connections and relationships that may not be detected by standard functional annotation methods. Using this approach, we compared the top two networks derived from each dataset containing DE microRNAs and their associated DE 120h targets , S2 Fig.We further employed a process of network merging to uncover the most important relationships between 120h DE genes and their associated microRNAs dysregulated during 0 and 8h time points. For each brain region, the top 3 networks obtained from the \"0hDEmiR/120hDEtargets\" and \"8hDEmiR/120hDEtargets\" datasets were combined, expanding component gene families or complexes to include all members present in the dataset . Genes DBcl2 showed high connectivity in AMY networks created from both approaches; similarly, for Srf in PFC networks. Additionally, hub genes from the cluster-based networks were also essential members of the combined networks described above. These included Bcl2 in AMY, Mapt in NAC and Fndc3b in PFC are provided for each cluster. MicroRNAs preceded by a \"+\" superscript were not DE in our dataset but were added and connected by the IPA algorithm. Italics denote critical network genes see . The lasGene expression profiling studies demonstrate clearly that alcohol consumption changes brain region-specific transcriptional profiles in human alcoholics and animal models of voluntary consumption . We receIndeed, DE probesets were distinct for the brain regions see and withWe reasoned that integrating differential expression profiles from both microRNAs and mRNAs would provide greater insight into the perturbed gene networks associated with withdrawal/protracted withdrawal. Our microRNA-target association procedure allowed for both perfect and imperfect sequence matching between microRNA-mRNA pairs. This undoubtedly influenced our results since a microRNA\u2019s function is determined in part by the extent of sequence complementarity with its target. 
In general, a perfect sequence match between microRNA and target is believed to cause mRNA degradation while an imperfect match is believed to result in translational repression . Thus, uERK1/2 , Bcl2 and Srf . These identified hub genes demonstrated robust microRNA-mRNA network alterations in response to alcohol exposure. In addition, temporal analyses identified NF-kB- and Smad3-centered networks in NAC and PFC. Mitogen-activated protein kinases (Mapk3 and Mapk1) are critical components of signal transduction pathways that are involved in cell growth, adhesion, survival and differentiation [Rit1, a microRNA target dysregulated at multiple time points in each brain region, was also identified in the current study family proteins regulate cell death by either pro- or anti-apoptotic mechanisms and Bcl2 is considered a \"prosurvival\" protein that is regulated by the Erk1/2 signaling pathway [Bcl2 in ethanol-mediated neuronal cell death [Bcl-related family members is clearly important to understand in the alcohol field given that alcohol in a variety of settings is known to be involved in cell death [Bcl2 is upregulated in alcohol preferring mice [Interestingly, divergent analytical approaches identified a number of common genes that were highly connected within networks (connectivity and hubs). These analyses have identified a number of neuroimmune- and cell death/survival-related pathways including ntiation as well ntiation . Rit1, atudy see . This gein mouse , 52 and in mouse brain. Tin mouse . 
Bcl2 mediates signaling from transforming growth factor beta (TGF-\u03b2), which is a regulator of cell proliferation, differentiation and death. Smad3 knockout mice have impaired immune function, suggesting SMAD signaling is involved in regulating the immune response through TGF-\u03b2/Smad3 receptor signaling and the inflammatory response. Because TGF-\u03b2 is a cytokine expressed in brain that is capable of controlling microglial activation, this signaling pathway may have the potential to regulate alcohol consumption. MicroRNAs that targeted these hubs included miR-34a-5p, miR-17-5p, miR-181a-5p, and miR-16-5p for Bcl2 in the AMY . In NAC, miR-34a-5p and miR-146a-5p are potentially important regulators of the Erk1/2 and NF-kB signaling pathways (the time-point based pairings only), respectively. These results emphasize the importance of investigating the complex temporal relationships that exist between miRNA and gene expression changes in response to alcohol challenge. Many studies have focused on \"snapshots\" in time to establish the relationship between expression changes of miRNA and mRNA; however, this temporal relationship is complex and difficult to study. The current studies are unique because they address critical questions about the temporal relationship between miRNA regulation and mRNA function.
These results demonstrate that alcohol exposure results in complex temporal changes in microRNA-mRNA gene network structure and that manipulation of microRNAs may rescue the aberrant synaptic plasticity associated with alcohol consumption.S1 FigPanel A = AMY , Panel B = NAC (nucleus accumbens), Panel C = PFC .(PPTX)Click here for additional data file.S2 FigGene networks and associated microRNA and mRNA expression data for networks described in (XLSX)Click here for additional data file.S3 FigTo investigate the potential time-dependent relationship of miRNA gene regulation, focused datasets were created from differentially expressed miRNAs (0 and 8 h only) and their differentially expressed target mRNAs (120h). For each of these miRNA-mRNA datasets, genes from the top three IPA-derived networks were merged and genes common to the merged networks were identified. This enabled us to focus on a key list of genes whose expression is modulated by miRNAs during withdrawal and remains dysregulated during protracted withdrawal, a period of time associated with drug relapse.(PPTX)Click here for additional data file.S4 FigNetworks were created from each cluster of microRNAs and their associated targets DE at any time point, emphasizing those targets dysregulated at 120h. Font color indicates time of DE . Maroon = DE at 0h, green = DE at 8h, blue = DE at 120h, gray = not DE at any time point. Solid and dashed lines indicate direct and indirect relationships, respectively. Hub genes are highlighted with a light blue box; their relationships to other molecules are indicated by a pink line. Molecules are colored to indicate direction of dysregulation (treated vs control) at the time point listed in the table. Red = upregulated, green = downregulated. Data are exported from IPA and thus utilize human gene nomenclature.(XLSX)Click here for additional data file.S5 FigAverage expression is plotted in red, and individual microRNAs are plotted in gray. 
Inset table provides expression data for microRNAs in the cluster. Time (\"Time DE\") and direction (\"Change\") of change are given for each microRNA . The number and direction of 120h DE targets are given for each microRNA.(TIF)Click here for additional data file.S6 FigAverage expression is plotted in red, and individual microRNAs are plotted in gray. Inset table provides expression data for microRNAs in the cluster. Time (\"Time DE\") and direction (\"Change\") of change are given for each microRNA . The number and direction of 120h DE targets are given for each microRNA.(TIF)Click here for additional data file.S7 FigAverage expression is plotted in red, and individual microRNAs are plotted in gray. Inset table provides expression data for microRNAs in the cluster. Time (\"Time DE\") and direction (\"Change\") of change are given for each microRNA . The number and direction of 120h DE targets are given for each microRNA.(TIF)Click here for additional data file.S1 TableND = not done.(DOCX)Click here for additional data file.S2 TableMicroRNA families are derived from IPA and include all microRNAs with the same seed sequence. Colored cells identify microRNA families with multiple members dysregulated at 0h in PFC.(DOCX)Click here for additional data file.S3 TableCentered and scaled expression log ratios of DE microRNAs were hierarchically clustered using the R package clValid . The tab(XLSX)Click here for additional data file.S4 TableThe table lists microRNA probe IDs and associated mirBase (v21) names. Fold changes (treated vs. control), p values and false discovery rates (FDR) were determined using empirical Bayes moderated t-statistics from the Bioconductor package limma.(XLSX)Click here for additional data file."}
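The clustering step described above (S3 Table: centered and scaled expression log ratios of DE microRNAs, hierarchically clustered) can be illustrated with a short sketch. The study used the R package clValid; the following is only a Python/scipy rendering of the same idea, and the toy log2-ratio profiles over the 0, 8, and 120 h time points are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Toy log2 ratios (rows = microRNAs, columns = 0, 8, 120 h post-CIE).
log_ratios = np.array([
    [0.1, 1.0, 2.0],   # progressive up-regulation
    [0.0, 0.9, 1.8],   # similar shape -> expected in the same cluster
    [2.1, 1.0, 0.2],   # progressive down-regulation
    [1.9, 0.8, 0.0],   # similar shape -> expected in the same cluster
])

scaled = zscore(log_ratios, axis=1)       # center and scale each profile
Z = linkage(scaled, method="average")     # agglomerative hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)
```

Centering and scaling each row first means the clustering groups microRNAs by the shape of their temporal response rather than by its absolute magnitude, which is the point of using log ratios here.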
+{"text": "Plant immunity protects plants from numerous potentially pathogenic microbes. The biological network that controls plant inducible immunity must function effectively even when network components are targeted and disabled by pathogen effectors. Network buffering could confer this resilience by allowing different parts of the network to compensate for loss of one another\u2019s functions. Networks rich in buffering rely on interactions within the network, but these mechanisms are difficult to study by simple genetic means. Through a network reconstitution strategy, in which we disassemble and stepwise reassemble the plant immune network that mediates Pattern-Triggered-Immunity, we have resolved systems-level regulatory mechanisms underlying the Arabidopsis transcriptome response to the immune stimulant flagellin-22 (flg22). These mechanisms show widespread evidence of interactions among major sub-networks\u2014we call these sectors\u2014in the flg22-responsive transcriptome. Many of these interactions result in network buffering. Resolved regulatory mechanisms show unexpected patterns for how the jasmonate (JA), ethylene (ET), phytoalexin-deficient 4 (PAD4), and salicylate (SA) signaling sectors control the transcriptional response to flg22. We demonstrate that many of the regulatory mechanisms we resolved are not detectable by the traditional genetic approach of single-gene null-mutant analysis. Similar to potential pathogenic perturbations, null-mutant effects on immune signaling can be buffered by the network. To protect themselves from pathogens, plants detect pathogen attack and send this information through a signaling network that activates various immune responses. Pathogens secrete effectors that disable components of the immune signaling network. Thus, the function of the plant immune signaling network must be well buffered from effector perturbations. Not much is known about how such network buffering is achieved. 
This is partly because the effects of mutations in single network components are also well buffered by the network. To overcome this shortcoming of single-gene mutant analysis, we employed a network reconstitution strategy, in which four major signaling sectors were first disabled and then restored one by one to determine the functions of the signaling sectors and their interactions. We collected and analyzed transcriptome and hormone profiles from all possible states of network reconstitution along time courses after immune stimulation with bacterial flagellin. We discovered that the network is not a collection of independent pathways; rather, interactions among the sectors dominate network regulation of the transcriptome response, which explains the extensive network buffering observed. Consequently, apparent network mechanisms inferred based on single-gene mutant analysis were often different from the underlying mechanisms. A major tenet of systems biology is that complex biological systems are more than the sum of their parts . This liBuffering has two different biological sources ,3: (I) cPlants are faced with a barrage of pathogen assaults during their lifetimes. Plants resist pathogen attacks using both preformed barriers ,6 and inThe resilience of the plant immune signaling network is qualitatively different from the robustness of switch-like networks, such as those that control development. These other networks ensure high-fidelity execution of an internal program despite internal and external noise. The plant immune signaling network, in contrast, must deploy the right response intensity depending on the reliability of attack information and attack severity\u2014a fundamentally quantitative response, rather than a switch-like one, as unnecessary immune responses carry a fitness cost for the plant . The plaNetwork buffering conceals a network\u2019s underlying mechanisms not only from potential pathogens but also from our attempts to study the network. 
For example, null mutant analysis of individual genes in such a network may result in incorrect mechanistic interpretations . How do dde2-2 removes the JA sector , including a negative value. The value 0 indicates no net contribution from the sector interaction terms while the value 1 indicates no net contribution from the single sector terms. We have previously demonstrated that regulation of both a phenotypic output of this network (inhibition of pathogen growth), as well as core network activity , is complex ,18. HereAmong the 5189 flg22-responsive and network-dependent genes, 2977 genes were fully buffered\u2014they showed no significant transcript response change relative to the wild type response in any single-sector mutant it showed no response in any ein2-containing genotypes , (II) ies ~ genotype: time. We noticed that the JA measurement had substantial technical error when the JA level was very low since the log2-transformed measured values from the dde2-containing genotypes, in which the actual JA level is 0 [2 JA\u201d values. See Hormone extraction and concentration measurements were performed with an ultra-performance liquid chromatography-tandem mass spectrometer (UPLC-MS/MS) with an ODS column as described previously ,22. Sampvel is 0 , ranged th percentile read count value to 300 counts was loge-transformed and used as the offset in the negative binomial generalized linear model (glm-nb) below for the purpose of between-libraries normalization.Stranded mRNA Tag-Seq libraries were prepared as previously described . Internath count value from the top was 25 or lower were removed. After the first and second filtering, 18750 genes remained \u2212 where e is transcript mean estimate, g is any non-quadruple combinatorial mutant (15 available), t is a time point after treatment and q is the quadruple mutant genotype. These difference-in-differences values are called transcript response changes here. 
The p-values from the 18750 genes were calculated using a z-test with the means and standard errors for the e values involved in the comparisons. The p-values for the same comparisons with hormones were calculated similarly, except that a t-test (2-sided) was used instead of a z-test. All p-values were corrected together for multiple-hypothesis testing using Storey\u2019s FDR. Transcript response changes with q-values less than 0.05 that were greater than 2-fold or less than 0.5-fold were called significant. Genes with at least one significant transcript response change were considered significantly regulated by the network. For genes with significant transcript response changes, signaling allocation models were fit individually to each gene and hormone. These models were fit to transcript read count values or hormone abundance relative to steady state, using lasso ,26. The ng lasso ,28 was sng lasso , instead. The heatmaps and associated dendrograms in all Figs were produced using cosine distance complete-linkage agglomerative hierarchical clustering, except for . GO enrichment analysis was performed using The Gene Ontology Consortium\u2019s web-based application ,63.
GO ODDE2 (AT5G42650), EIN2 (AT5G03280), PAD4 (AT3G52430), SID2 (AT1G74710), FLS2 (AT5G46330), ARGOS (AT3G59900), PR1 (AT2G14610), NPR1 (AT1G64280), MKK2 (AT4G29810), EDS1 (AT3G48080), SARD1 (AT1G73805), PDF1.2a (AT5G44420), PR3 (AT3G12500), PR4 (AT3G04720), and ORA59 (AT1G06160).The following is the AGI codes of the Arabidopsis genes that are specifically described by their common names in the main text: Tag-Seq data and the derived read-counts-per-gene data have been submitted to NCBI\u2019s Gene Expression Omnibus (GEO); GEO data series GSE78735.Some of supplementary information is available from Figshare with a title of \u201cArabidopsis transcriptome/hormone response to flg22\u201d and the following description :The raw data used to generate the data in this set: (GEO GSE78735) Arabidopsis thaliana RNA-3seq data collected from 17 genotypes x 7 time points x 3 biological replicates.Contents of this set:All are tab-delimited text. The number of columns does not include the row name column (gene/hormone name).Mean: \u201cgenotype.time.mean.estimates.txt\u201dStandard error: \u201cgenotype.time.std.err.txt\u201dThe mean and standard error estimates for each combination of the genotype and the time (119 columns) for each of 18750 genes (rows).Mean: \u201chormone.genotype.time.mean.estimates.txt\u201dStandard error: \u201chormone.genotype.time.std.err.txt\u201dThe same as 1. but for 37 hormones (rows) and related compounds . Note that for JA, all 8 j-containing genotypes were handled as a single genotype, so they have the same values for a single time point.Mean: \u201cdid.vs.quad.txt\u201dAssociated p-value: \u201cdid.pval.vs.quad.txt\u201dThe transcript response change (difference in differences) for the comparisons of each of the 15 combinatorial genotypes (not including the jeps) vs. 
jeps at each of 6 time points , for 18750 genes (rows).Mean: \u201cdid.vs.wt.txt\u201dAssociated p-value: \u201cdid.pval.vs.wt.txt\u201dThe transcript response change (difference in differences) for the comparisons of each of the 15 combinatorial genotypes (not including JEPS) vs. JEPS (WT) at each of 6 time points (90 columns), for 18750 genes (rows).Mean: \u201cdid.fls2.vs.wt.txt\u201dAssociated p-value: \u201cdid.pval.fls2.vs.wt.txt\u201dThe transcript response change (difference in differences) for the comparisons between fls2 and JEPS (WT) at each of 6 time points (6 columns), for 18750 genes (rows).Mean: \u201chormone.did.vs.quad.txt\u201dAssociated p-value: \u201chormone.did.pval.vs.quad.txt\u201dThe same as 3., but for 37 hormonesMean: \u201chormone.did.fls2.vs.wt.txt\u201dAssociated p-value: \u201chormone.did.pval.fls2.vs.wt.txt\u201dThe same as 5., but for 37 hormonesMean: \u201callocation.AICc.model.mean.txt\u201dAssociated p-value: \u201callocation.AICc.model.pval.txt\u201dThe signaling allocation results with the model selected based on AICc. with the full (unregularized) model. with the model selected based on BIC. used in for 4954\u201cgenes.for.allocation.figs.txt\u201d: The gene sets used in the figures indicated below among the genes in 8. . A value of 0 indicates not used in the figure. For clusters and randomly selected genes, a value of 1 indicates the membership. For heatmaps, non-zero values indicate the order of the genes from the top in the figure. 
The figures are indicated in the column (22 columns): 'Fig2.heatmap', 'Fig2.column.a', 'Fig2.column.b', 'Fig6A.heatmap', 'Fig6B.heatmap', 'Fig6C.heatmap', 'Fig8.heatmap', 'Fig8.clusterS1', 'Fig8.clusterS2', 'Fig8.clusterS3', 'Fig9A.heatmap', 'Fig9B.heatmap', 'Fig9C.heatmap', 'Fig9A.clusterE1', 'Fig9A.clusterE2', 'Fig9A.clusterE3', 'Fig10A.heatmap', 'Fig10E.heatmap', 'Fig10F.heatmap', 'Fig10G.heatmap', 'S4Fig.heatmap', 'S7Fig.ET.geneset'\u201cS2Fig.genes.txt\u201d: The gene sets used in the \u201ccor.design.mat.txt\u201d: The cosine correlation among the signaling allocation terms in linear model. The design matrix for only one time point was used.S1 DataThey are in pmol/g dry weight. U.Q., under quantification limit.(TXT)Click here for additional data file.S1 Text(DOCX)Click here for additional data file.S2 Text(ZIP)Click here for additional data file.S1 FigJEPS , jeps , and fls2 (pink). See the legend on the right. The responses are calculated by subtracting the value for 0 hpt in each genotype for each sector. Some lines specifically discussed in the main text are labeled with the genotype names. Log2-transformed free JA and SA levels were used for the JA and SA sector activities, respectively, and log2-transformed transcript levels of the marker genes ARGOS and AT4G04500 were used for the ET and PAD4 sector activities, respectively.The lines for the genotypes are color coded for combinations of the active signaling sectors: JA (red), ET (yellow), PAD4 (green), and SA (blue), except for (TIF)Click here for additional data file.S2 Fig2-transformed transcript level data from the genotypes fls2, JEPS, and jeps at all the time points were used. See The heatmaps after clustering based on the Pearson correlation coefficient are shown. The log(TIF)Click here for additional data file.S3 FigHeatmaps of the signaling allocations based on the models with no regularization (A) and with the regularization stringency selected by BIC (B) are shown. 
The orders of the genes in the rows of the heatmaps are the same as that in . (TIF) S4 Fig: Heatmaps of the signaling allocations based on the AICc-selected regularization (A) and no regularization (B) for seven genes that passed the filter for SA single-sector dominance are shown. Since there is no evidently consistent pattern between (A) and (B), it is likely that the allocation results in (A) are artifacts of regularization. For better visualization, the color intensity of each row is scaled to the most extreme value of the row. (TIF) S5 Fig: ARGOS and AT4G04500, respectively. (TIF) S6 Fig: FLS2 transcript levels in the ein2-containing genotypes are increased to nearly wild-type levels within 1 hour after flg22 treatment. (A and B) Transcript level time courses of AT5G46330 (FLS2) for 17 genotypes are shown. The lines for the genotypes are color coded for combinations of the active signaling sectors: JA (red), ET (yellow), PAD4 (green), and SA (blue), except for JEPS, jeps, and fls2 (pink). See the legend on the right. (B) A zoom-in of (A) on early time points. Note that the lines for all EIN2-containing genotypes (lines containing yellow) are higher than the lines for all ein2-containing genotypes at 0 hpt, but there is almost no difference between them at 1 hpt. (TIF) S7 Fig: (A) The timecourses of 100 genes randomly selected from 1270 genes whose transcript responses were significantly changed in JePS compared to JEPS. Each trace represents one of the 100 genes. The timecourse is color coded as shown. (B-E) Artificially generated examples of expected patterns in the timecourse comparison plot (A). The left panels show example transcript response timecourses in each of the genotypes JEPS (black) and JePS (red-green-blue). The right panels show what the timecourses in the left panel look like in the timecourse comparison plot. If the transcript response is delayed in JePS due to low FLS2 levels at early time points, counter-clockwise trace patterns (yellow to green to sky blue) are seen in the timecourse comparison plot, as seen in . However, such a trend is not evident in (A), indicating that delayed transcript responses are not a major trend among these genes. (TIF) S8 Fig: Mean transcript levels of common ET marker genes across all genotypes and time points profiled: (A) ARL (AT2G44080) and (B) EBF2 (AT5G25350). The lines for the genotypes are color coded for combinations of the active signaling sectors: JA (red), ET (yellow), PAD4 (green), and SA (blue), except for JEPS, jeps, and fls2 (pink). See the legend on the right. Data were collected at 0, 1, 2, 3, 5, 9, and 18 hours after flg22 treatment. (TIF)"}
+{"text": "Biological systems and processes are highly dynamic. To gain insights into their functioning, time-resolved measurements are necessary. Time-resolved gene expression data capture the temporal behaviour of the genes genome-wide under various biological conditions: in response to stimuli, during the cell cycle, differentiation or developmental programs. Dissecting dynamic gene expression patterns from these data may shed light on the functioning of the gene regulatory system. The present approach facilitates this discovery. The fundamental idea behind it is the following: there are change-points (switches) in the gene behaviour separating intervals of increasing and decreasing activity, whereas the intervals may have different durations. Elucidating the switch-points is important for the identification of biologically meaningful features and patterns of the gene dynamics.We developed a statistical method, called SwitchFinder, for the analysis of time-series data, in particular gene expression data, based on a change-point model. Fitting the model to the gene expression time-courses indicates switch-points between increasing and decreasing activities of each gene. Two types of the model - based on the linear and on the generalized logistic function - were used to capture the data between the switch-points. Model inference was facilitated with the Bayesian methodology using the Markov chain Monte Carlo (MCMC) technique Gibbs sampling. Further on, we introduced features of the switch-points: growth, decay, spike and cleft, which reflect important dynamic aspects. With this, the gene expression profiles are represented in a qualitative manner - as sets of the dynamic features at their onset-times. We developed a Web application of the approach, enabling to put queries to the gene expression time-courses and to deduce groups of genes with common dynamic patterns.SwitchFinder was applied to our original data - the gene expression time-series measured in a neuroblastoma cell line upon treatment with all-trans retinoic acid (ATRA). The analysis revealed eight patterns of the gene expression responses to ATRA, indicating the induction of the BMP, WNT, Notch, FGF and NTRK-receptor signaling pathways involved in cell differentiation, as well as the repression of the cell-cycle related genes.SwitchFinder is a novel approach to the analysis of biological time-series data, supporting inference and interactive exploration of its inherent dynamic patterns, hence facilitating the biological discovery process. SwitchFinder is freely available at https://newbioinformatics.eu/switchfinder.The online version of this article (doi:10.1186/s12859-016-1391-0) contains supplementary material, which is available to authorized users. Time-resolved measurements are performed to study the dynamics of biological processes, e.g. the dynamics of gene expression in response to treatments, upon induction of a transcription factor, during the cell cycle or embryonic development. The temporal response patterns may shed light on coordination and regulation of the genes, aiding the inference of gene regulatory networks. Several methods for the analysis of time-course gene expression data were developed, reviewed in . The most common purpose of the time-resolved gene expression data analysis was to derive groups of genes with similar dynamical responses; model-based clustering executes this task. In , the impulse model was proposed for fitting the individual gene profile. That model contains seven biologically relevant parameters, emphasizing important aspects of the gene dynamics, e.g. the point of induction. In the present approach, called SwitchFinder, a time-series model is proposed that explicitly assumes the existence of switch-points (switches) between intervals of increasing and decreasing activities, which are interpolated with a linear or generalized logistic function. Fitting the model to the time-resolved gene expression data implies the prediction of the switch-points of individual genes. Our approach has its origin in change-point modelling, which has been widely applied in engineering, ecology, economics and finance \u201314. The central interest of the present work was the inference of the switch-points indicating changes between the regimes of the gene activity. Our model represents a series of switch-points (peaks and troughs), joined by lines or logistic curves. We developed a Gibbs sampling procedure for the Bayesian inference of the model. The switch-points elucidated by the analysis may indicate an onset of features like Growth or Decay, introduced here to capture substantial dynamic properties of the gene behaviour. Knowing the onset-times of the dynamic features enables representing the gene profiles in a qualitative manner. This is utilized in our approach to perform a partitioning of the genes into groups with common dynamic patterns. The present approach inspires to put queries to the gene set, for example: which genes have peaks/troughs of their activity at certain time points? Which genes exhibit growth or decay at particular onset-times? The Web application of the approach provides the query interface and grants to a human expert a possibility to query the time-resolved data, facilitating the biological discovery process. The present approach decouples the two tasks - statistically fitting individual gene profiles and grouping them. We do not regard the gene data set as multivariate time-series data, as was done in . In the next section, we introduce the model and its Bayesian formulation. As an example, consider a time-series with T=14 measurements. The model contains N=5 switches at time-points 1, 6, 9, 12, 14 (switch locations) of the following types: trough, peak, trough, peak and trough. The switches separate intervals of increasing and decreasing activities of the gene, called regimes. 
The model assumes that the data within the regimes are interpolated with linear functions. The goal of the present method is to infer the most probable time-points of switches between the regimes while fitting the model to the time-series data. Let r be the regime index, r=1,\u2026,N\u22121. We denote the locations of the switches with Lr and the y-values at these locations (switch heights) with Hr. The model assumes that the data values at time-points between the switches are determined by the linear interpolation. Consider a time-point t\u2208{Lr,\u2026,Lr+1}; the interpolated value at the time-point t is denoted by yt. Due to the linearity property, the following proportion is valid: (yt\u2212Hr)/(Hr+1\u2212Hr)=(t\u2212Lr)/(Lr+1\u2212Lr). Solving for yt, while denoting the right-hand side with ft,r (the linear factor), we get: yt=(1\u2212ft,r)\u00b7Hr+ft,r\u00b7Hr+1. Let Y=(yt)t=1,\u2026,T be the data; the set of equations above holds for r=1,\u2026,N\u22121. So the model underlying our approach is specified as: Y=X\u00b7H+e, where X is the (T\u00d7N)-dimensional design matrix, defined with the help of the linear factors for all t and all r, and e is the normally distributed error term with standard deviation \u03c3. The parameters of the model to be estimated in the course of the model inference are: the locations of the switches Lr, r=2,\u2026,N\u22121, the heights of the switches Hr, r=1,\u2026,N, and \u03c3. If the switch locations Lr are known, the linear regression model is specified and can be fitted to the data Y by the Ordinary Least Squares (OLS) method. Then the parameters of the model (i.e. the switch heights) can be determined by: H=(XTX)\u22121XTY. The fitted values under the model are calculated by: Yfitted=X\u00b7H. In the following, for the sake of simplicity, we use a common notation for the linear regression model, with \u03b2 instead of H: Y=X\u00b7\u03b2+e. The N-dimensional vector of regression coefficients \u03b2 and the standard deviation \u03c3 are the parameters to be estimated. Probabilistic inference of the model parameters (\u03b2 and \u03c3) was facilitated by the Bayesian methodology. 
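Given fixed switch locations, the height estimate H=(XTX)\u22121XTY is an ordinary least-squares fit against the design matrix of linear factors. A minimal sketch in Python (the paper works in R; the 1-based integer switch locations and the helper names here are illustrative assumptions):

```python
import numpy as np

def design_matrix(locations, T):
    """T x N design matrix of the piecewise-linear switch model.

    For a time-point t in regime r (between switch locations L_r and
    L_{r+1}, both 1-based), the row holds the linear factors:
    X[t, r] = 1 - p and X[t, r+1] = p with p = (t - L_r)/(L_{r+1} - L_r).
    """
    N = len(locations)
    X = np.zeros((T, N))
    for r in range(N - 1):
        L0, L1 = locations[r], locations[r + 1]
        for t in range(L0, L1 + 1):
            p = (t - L0) / (L1 - L0)
            X[t - 1, r] = 1.0 - p
            X[t - 1, r + 1] = p
    return X

def fit_switch_heights(Y, locations):
    """OLS estimate H = (X^T X)^{-1} X^T Y and the fitted values X*H."""
    X = design_matrix(locations, len(Y))
    H, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return H, X @ H
```

For a noiseless piecewise-linear profile, the recovered heights coincide with the data values at the switch locations.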
Within a Bayesian framework, inference about the parameters of a model, \u03b8, is made based on their posterior distribution given the data, p(\u03b8|Y), using the proportionality: p(\u03b8|Y)\u221dL(\u03b8|Y)p(\u03b8), where L(\u03b8|Y) is the likelihood function and p(\u03b8) is the prior distribution of the parameters. Since the direct Bayesian inference of the present model is infeasible, the Markov chain Monte Carlo (MCMC) technique Gibbs sampling presents an attractive possibility. Gibbs sampling reduces a problem of sampling from a complex posterior distribution to a series of more tractable subtasks of sampling from simpler, lower-dimensional distributions, simulations from which can be done using standard functions. Suppose the model has k parameters \u03b81,\u2026,\u03b8k. Given an arbitrary set of starting values, each parameter is sampled in turn from its full conditional posterior distribution; these steps are repeated J times, where J is the number of iterations, to obtain the samples. If J is large enough, after some burn-in period the Gibbs sampler has converged. Then the marginal distributions of \u03b81,\u2026,\u03b8k can be approximated by the empirical distributions of the simulated values; e.g. the mean of the marginal distribution of \u03b8i may be calculated as the mean of its simulated values. In the following, we derive the conditional posterior distributions of \u03b2 and \u03c32. Assume \u03c32 is known. We prescribe a multivariate normal distribution as the prior for the parameter \u03b2: \u03b2|\u03c32\u223cN(\u03b20, \u03a30), where the vector \u03b20 and the covariance matrix \u03a30 are known. 
Multiplying the prior density with the likelihood (which is normal due to the normality assumption for the error term), the posterior distribution of \u03b2, conditional on \u03c32, is specified by a normal distribution. In the case of an uninformative prior, i.e. when \u03b20 is the vector of nulls and \u03a30 contains big values, the Bayesian estimate of the probability distribution of \u03b2 is analogous to the distribution of the best linear unbiased estimator of \u03b2 obtained by the OLS method; namely, the unbiased estimator of \u03b2 is a normally distributed random variable. So, we can use \u03b2fitted=(XTX)\u22121XTY and \u03a3=\u03c32(XTX)\u22121 as the mean and the covariance matrix for sampling the values of \u03b2. If the mean vector \u03bc and the covariance matrix \u03a3 of the multivariate normal distribution are known, a commonly used method for generating values from this distribution is the following: identify the matrix A which is the Cholesky decomposition of \u03a3, i.e. AAT=\u03a3; then the sample value is calculated as \u03bc+AE, where E is an N-dimensional vector of standard normal variables sampled from N. While sampling \u03b2, rejection sampling was used to ensure the validity of the new model: only models with alternating troughs and peaks and non-degenerate models (i.e. not having each data point as a switch, nor a regime with low amplitude) are admissible. Now assume \u03b2 is known. The usual specification for the prior distribution of \u03c32 is the inverted Gamma distribution with known parameters \u03bd0 and \u03b40. Multiplying the prior with the likelihood yields an inverted Gamma posterior for \u03c32. It can be shown that in the case of uninformative priors (\u03bd0=0, \u03b40=1) this distribution is analogous to the distribution of the unbiased estimator of \u03c32 determined by the OLS method: scaled by the fitted variance, it follows the chi-squared (\u03c72) distribution. 
So, we can sample \u03c32 using \u03c3fitted calculated from the data. While sampling a location for a switch r, we assume that the locations of the previous and the subsequent switches are known, so the possible choices lie in the interval i\u2208{Lr\u22121+1,\u2026,Lr+1\u22121}, representing a finite number of possibilities. For each possible location i, by Bayes theorem, the posterior probability of the switch taking the particular location is proportional to L(Y|Lr=i)\u00b7p(Lr=i), where p(Lr=i) is the prior probability and L(Y|Lr=i) is the likelihood of the data given the particular location. The prior probabilities p(Lr=j) are the same for all j, i.e. p(Lr=j)=p(Lr=i); thus, the posterior probabilities of the possible switch locations are proportional to the corresponding likelihoods. The likelihood of the data, given the particular location, can be calculated as the product of the probabilities of making the errors et, where et\u223cN; so we can use the R function pnorm(et) to obtain the individual probabilities (the number of probabilities is Lr+1\u2212Lr\u22121\u22121). Given the probabilities p(Lr=i|Y), i.e. the probabilities of each possible location given all the other information (let us denote them with probs), we can sample an integer value with these probabilities by the Roulette selection method. The number of the switch-points, with which the MCMC procedure is initialized, is calculated with the exploratory non-parametric technique LOESS. LOESS fits a non-linear smoothing curve to the data, helping to reveal structural patterns in it. We use the fitted curve to calculate local minima and maxima along it, suggesting the number of the switch-points. Higher values of the span produce smoother curves; hence, the number of the switch-points decreases. The setting for the span is found in an iterative procedure: starting with the small span 0.1, a curve is fitted to the data while increasing the span by a small amount (0.05) until none of the local minima and maxima are located immediately adjacent. The last number of the minima and maxima (adding 2 for the first and the last time-points) yields the number of the switch-points. We call the presented model Model_Lin to distinguish it from the Model_Logit described in the next section. Sometimes the increasing/decreasing activity of a gene exhibits a saturated behaviour, stabilizing with time. To model this, the generalised logistic function was used. We assume that in each time interval {Ltrough,\u2026,Lpeak} between two switch points, which are a trough and a peak, the fitted data lie on a logistic (sigmoid) curve and are calculated as follows: yt=(1\u2212FL(propt))\u00b7Htrough+FL(propt)\u00b7Hpeak, where Htrough and Hpeak are the switch heights, propt is the proportional location of the time-point t with respect to the trough, calculated as propt=(t\u2212Ltrough)/(Lpeak\u2212Ltrough), and FL is the generalized logistic function (Richard's growth equation). The parameter \u03ba allows the shape of the sigmoid curve to vary flexibly; K is the maximum observable value of y, in our case K=1; the parameter B plays the role of the growth rate. Note that by the linear transformation in the equation above, the limits of the logistic curve (0 and 1) are moved to be the trough and the peak values, respectively. The generalized logistic transformation of the proportional location of each time-point between the neighbouring trough and peak allows for flexible modelling of the gene expression increase/decrease within time intervals of different lengths. Sampling of the logistic function parameters B and \u03ba was executed with the help of bootstrapping - by fitting the nonlinear function FL to the data x=propt, y=propyt (the proportionally rescaled data values). 
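The trough-to-peak interpolation yt=(1\u2212FL(propt))\u00b7Htrough+FL(propt)\u00b7Hpeak can be sketched as follows. The concrete Richards-type parameterization of FL below (K=1, curve centered at the interval midpoint) is an assumption made for illustration, not necessarily the paper's exact form:

```python
import numpy as np

def FL(x, B, kappa):
    """A Richards-type generalized logistic curve on [0, 1] with K = 1.

    B acts as the growth rate and kappa shapes the asymmetry of the
    sigmoid; centering at 0.5 is an illustrative choice, not the
    paper's exact parameterization.
    """
    return (1.0 + kappa * np.exp(-B * (x - 0.5))) ** (-1.0 / kappa)

def logistic_regime(t, L_trough, L_peak, H_trough, H_peak, B, kappa):
    """Fitted value inside a trough-to-peak interval:
    y_t = (1 - FL(prop_t)) * H_trough + FL(prop_t) * H_peak,
    where prop_t is the proportional location of t in the interval."""
    prop_t = (t - L_trough) / (L_peak - L_trough)
    f = FL(prop_t, B, kappa)
    return (1.0 - f) * H_trough + f * H_peak
```

Because FL is increasing, the fitted values rise monotonically from near H_trough to near H_peak; for kappa = 1 the curve reduces to the ordinary logistic, with FL(0.5) = 0.5.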
So, for the current model in each MCMC iteration, the design matrix is constructed and the linear regression model is fitted (R function lm), which facilitates the calculation of the switch heights, analogously to the Model_Lin described above. We developed a Web application of the method SwitchFinder, which provides the user-interface for uploading the time-series data, executing the algorithm and performing queries on the results of the data analysis, thus maintaining a feedback-loop between generation and interpretation of the results. We propose the concept of the qualitative dynamic features of the switch-points. For a location, its highest-level feature is stored. The profiles that were fitted with a one-regime logistic model additionally obtain the features LogitUp or LogitDown. Grouping of the genes is executed by k-means clustering of the qualitative profiles using the Jaccard similarity. To test the robustness of the algorithm SwitchFinder, especially with respect to short time-series data, we generated 10 data sets, each containing 2500 synthetic gene expression profiles of the length T=7. The simulation scheme for a data set was the following. The profiles were generated with standard deviation sigma=0.2 from the following models: a) Logit_Up (500 samples) and Logit_Down (500 samples) using 10 different combinations of the parameters \u03ba and B, including extreme values that challenge the fitting procedure; b) models Model_Lin with one internal switch point of the type peak located at t=2/4/6 (600 cases) and of the type trough located at t=2/4/5 (600 cases); c) models Model_Lin with two internal switch points located at the time-points t=2,5 (200 cases) and at t=2,6 (200 cases); d) model Model_Logit with the parameters \u03ba=20, B=10 and one internal switch at t=5 (100 cases). The parameters (heights) of the models were simulated to obtain realistic gene expression values as commonly produced by Agilent technology: sampled from a log-normal distribution and truncated. The scheme produces biologically realistic data sets with rich dynamic responses. To verify that the algorithm is suitable for long time-series, we applied it to published gene expression data measured over the cell cycle. We sorted the genes by the time-points of their first peaks over the time; peaks with expression around 0 were neglected. The ordering revealed a clear picture of the cyclic activity of the genes and a good separation of the G1/S and G2/M cell cycle phases. SLBP, MCM6, MSH2 and NUCKS need earlier signals for their activation. SwitchFinder was then applied to our original data: the gene expression time-series measured in the neuroblastoma cell line BE(2)-C after treatment with the differentiation agent all-trans retinoic acid (ATRA). Differential expression was assessed with limma. Eight groups of genes, delineated by the analysis, reflect the time-resolved transcriptional response of the neuroblastoma genes to the treatment with ATRA. Four groups comprise the activated genes, induced immediately or at later time points; for example, the genes of group A carry the feature Growth at the first time-point. However, the patterns were delineated not solely by the features-based clustering, but also by some additional considerations. Many genes from the group F were fitted with the model Logit_Down, as the genes from E; however, their declining cyclic pattern was further discerned by the additional condition: whether the expression value at 24 hrs. was lower than at the neighbouring time-points. Further on, the Logit_Up model was a good fit for many activated genes. However, to elucidate the time of induction more precisely, we sorted the profiles by decreasing \u03ba and decreasing B (parameters of the logistic model), thus obtaining the temporal ordering of the genes starting from steep, early responses via S-formed to convex, late responses. The immediate responses are very rich in transcription factors involved in the determination of cell fates and the regulation of embryonic development. 
The gene SNAI2, playing a role in the epithelial-to-mesenchymal transition (EMT), is over-expressed (group A), accompanied with the down-regulation of adherence junction genes like the cadherins CDH4/7/22, claudin CLDN11, cingulin CGN, catenins CTNNA1/2, as well as of the tight and gap junction genes TJP1 and GJA5. Interestingly, SNAI1, initially over-expressed as compared to the control, decays under the influence of ATRA. The mesenchymal markers were induced immediately or at 12 hours: fibronectin FN1, fibronectin receptors ITGB1/3/8, FNDC4/5, vitronectin VTN and vimentin VIM. Cell polarity regulator PPARD and a member of the crumbs complex, CRB1, belong to group A. The metalloproteinases MMP2/11/15 and ADAM19/22/23, which facilitate degradation of the extracellular matrix, were active immediately or at 12 hours. Thus, the results indicate a contribution of ATRA to the migratory phenotype of neuroblastoma cells. Induced immediately were the receptors NTRK1 and NGFR - regulators of the nerve growth factor signalling known to be responsible for the maturation of the peripheral nervous system through regulation of proliferation, differentiation and survival of neurons. Activated were the guidance molecules EFNB2, EFNA2/4, EPHA2, EPHB3, SEMA4C/6C, PLXNA2/4A, SLIT2, SLITRK6. Interestingly, the semaphorin SEMA6A, known to control cell migration, was repressed, although its receptor PLXNA2 was activated after 12 hrs. Previously, SEMA6A was found upregulated in undifferentiated embryonic stem (ES) cells. Furthermore, the expression of the neuropilins (NRP1 and NRP2, group E) was repressed, together with the ephrin ligand EFNA1. In general, a complex spatio-temporal expression of guidance molecules and genes involved in neuron migration was observed. Vast transcriptional changes were induced by ATRA at genes involved in cytoskeleton organization, cell polarization and immune processes. E.g. the chemokine receptor CXCR4 was induced at 24 hrs.; it represents a positive cue for the migration of the NC cells (its ligand CXCL12 was active after 48 hours). We suppose that canonical Wnt signalling is repressed or delayed upon treatment with ATRA, with non-canonical Wnt signalling taking place: PPARD/G and TLE3 were induced, TCF7 and TCF19 were repressed, and DACT3 (an antagonist of beta-catenin) and further genes annotated with negative regulation of the canonical Wnt signaling pathway were induced: ANKRD6, DKK1/2, SFRP1. The gene WNT11 was activated late (group D). A clearly observable effect of the ATRA-treatment on NB cells is the repression of genes involved in cell cycle regulation, particularly in the G1/S and G2/M transitions of the mitotic cell cycle, in cell proliferation, DNA metabolic process, DNA damage response and DNA repair signalling: MYCN, AURKA/B, BIRC5, CDC2/6, CENPF, PCNA, PLK1/4 etc. (group F). Furthermore, genes responsible for negative regulation of cell proliferation, e.g. CDKN1A, were active at 12 or 24 hrs. Notably, the gene ALK, an important unfavourable prognostic marker in neuroblastoma, was repressed. To summarize, our study documented a powerful transcriptional effect of ATRA on NB cells. A complex gene regulatory machinery controls the two properties of neural crest cells: the ability to extensively migrate and differentiate into numerous derivatives, and to maintain multipotency. Identifying dynamic patterns under various biological conditions is crucial for the understanding of a biological system. The patterns reflect the coordination, co-regulation and control of the system components. Identifying temporal changes and patterns of gene expression is important for the inference of gene regulatory networks. We developed the method SwitchFinder for the analysis of time-resolved data, applicable to gene expression data. The change-point model at the core of the method represents a series of the switch-points between regimes of increasing and decreasing activities, captured by linear or generalized logistic functions. 
SwitchFinder fits the model to the gene-expression profiles, inferring the switch-points inherent to the gene dynamics. The method exploits Bayesian model inference with the MCMC technique Gibbs sampling. To note, the method is suitable for long as well as for short time-series. The advantage of the present approach is the inference of biologically justified and interpretable features of the genetic activity, as well as the possibility of their subjective exploration by researchers with different goals and background knowledge, in different biological scenarios. The Web application of the approach provides the user interface for querying the gene time-series. Flexibility is given to the user to adjust the selection criteria for restricting the results to substantial dynamic phenomena. Actively guiding the data analysis is valuable for biologists, as opposed to an automatic, unsupervised application of a statistical/bioinformatics method. Some features of the data might be designated as important by an expert subjectively - beyond those obtained by statistical learning based on statistical characteristics. The features, at a next level of abstraction, can constitute further features or patterns. Such a qualitative approach should overcome over-fitting and lead towards biologically meaningful results. The features-based clustering is preferable to clustering methods based on distance measures like Euclidean distance or correlation. The latter ignore the dynamic nature of the temporal data and overlook single data points, which represent important changes in the gene behaviour associated with the events of the gene regulatory control. To mention, the present method is independent of the quantitative expression levels of different genes. 
It would not miss a relation between genes with different abundance but with the same qualitative pattern. Previously, a platform PESTS was created, making the analysis of some statistical features of the gene profiles accessible via the user interface. In the Qualitative/Temporal Abstraction approach, dominant points, defining the intervals, were determined by the approximation of the data curves based on thresholds chosen by the user. This makes the algorithm sensitive to noise. The method concentrated on shapes of the gene profiles, rather than on proper timing of the dynamic events. To emphasize, the statistical inference of the prominent time-points in the temporal profiles is of advantage. The Temporal Abstraction clustering was implemented in the software TimeClust, together with other clustering methods. The present method combines quantitative and qualitative characteristics: statistically inferred timing of dynamic events and the qualitative dynamic features. The approach offers a great flexibility in the induction of biological knowledge from time-series data: the user may explore the gene set by clustering (unsupervised) or interactively (supervised), by putting queries and experimenting with the qualitative features of particular time-points. The results of the method provide a platform for studying temporal relations, e.g. time delays, with the goal to deduce dependencies between the genes. Modelling cellular dynamic responses on the level of pathways and networks can be considered as a possible extension of the approach. Our next goal is to adapt SwitchFinder to the analysis of RNA-seq time-series."}
+{"text": "After Peripheral nerve injuries (PNI), many complicated pathophysiologic processes will happen. A global view of functional changes following PNI is essential for the looking for the adequate therapeutic approaches. In this study, we performed an in-depth analysis on the temporal expression profiles after sciatic nerve injury by bioinformatic methods, including (1) cluster analysis of the samples; (2) identification of gene co-expression modules(CEMs) correlated with the time points; (3) analysis of differentially expressed genes at each time point (DEGs-ET); (4) analysis of differentially expressed genes varying over time (DEGs-OT); (5) creating Pairwise Correlation Plot for the samples; (6) Time Series Regression Analysis; (7) Determining the pathway, GO (gene ontology) and drug by enrichment analysis. We found that at a 3\u2009h \u201cwindow period\u201d some specific gene expression may exist after PNI, and responses to lipopolysaccharide (LPS) and TNF signaling pathway may play important roles, suggesting that the inflammatory microenvironment exists after PNI. We also found that troglitazone was closely associated with the change of gene expression after PNI. Therefore, the further evaluation of the precise mechanism of troglitazone on PNI is needed and it may contribute to the development of new drugs for patients with PNI. Despite many preclinical and clinical studies have made significant progress in understanding the mechanism underlying this disease2, many complicated pathophysiology processes will happen, including a series of cellular and molecular responses accompanied with the alteration of various gene expressions after PNI3. The poorly understanding brings certain difficulties in searching for the adequate therapeutic approaches4. Thus, a global perspective of changes following PNI is warranted. Microarray is one of the most popular methods that can detect the genome-wide transcriptome profiling in certain conditions. 
Some studies9 have identified many genes that are disturbed after peripheral nerve injury. However, the precise mechanisms underlying the specific events or biological processes after PNI are not completely understood. For the occurrence and development of PNI, it is well known that genes are not alone and usually act through joint actions with other genes in pathways or networks. Thus, one of the interesting questions can be aroused, what\u2019s the particular function and pathways changed in the process of PNI? And which drug can be potentially used for the treatment of PNI? To preliminarily answer those questions, the high-throughput gene data related to PNI can be explored conveniently and time-saving. In the other hand, as Li et al.10 mentioned, despite high-throughput gene data are expanding and can be obtained quickly, the in-depth and comprehensive analysis remains to be completed by the aid of newly-developed statistical and bioinformatic tools. Moreover, by these bioinformatic methods, the different perspective and valuable information about the molecular regulation of transcriptional responses of PNI will be disclosed.Peripheral nerve injury (PNI), one of serious health problems, can often lead to lifelong disability10. The anatomy of the dynamic changes of differentially expressed genes associated with PNI can help understand the response in the process of the PNI, and find some new treatment strategies target the regulation of essential genes. To date, temporal expression profiles or time course data for sciatic nerve injury have been published11. Therefore, the aim of this study was to in-depth analyze the temporal expression profiles after sciatic nerve injury by bioinformatic methods. 
The second aim of this study was to elucidate the biological processes and pathways in the response to sciatic nerve injury. Sciatic nerve injury is a widely used model for PNI and peripheral nerve regeneration studies. The analysis process is composed of the following seven parts: (1) cluster analysis of the samples in the microarray data; (2) identification of gene co-expression modules (CEMs) correlated with the time points; (3) analysis of differentially expressed genes at each time point (DEGs-ET); (4) analysis of differentially expressed genes varying over time (DEGs-OT); (5) creation of a pairwise correlation plot for the samples in the microarray data; (6) time series regression analysis; (7) determination of pathways, GO terms and drugs by enrichment analysis. The gene chip data of GSE3317511 was obtained from the National Center of Biotechnology Information Gene Expression Omnibus (GEO) database12. More detailed information about GSE33175 can be found in Wang et al.11. Briefly, the platform used in GSE33175 was GPL7294 Agilent-014879 Whole Rat Genome Microarray 4\u2009\u00d7\u200944K G4131F. The experiments were conducted on adult male Sprague-Dawley rats weighing 180\u2013220\u2009g. Proximal sciatic nerve tissues (0.5\u2009cm) were collected at 0\u2009h, 0.5\u2009h, 1\u2009h, 3\u2009h, 6\u2009h and 9\u2009h after sciatic nerve resection. Total RNA extracted from those tissues was used for cDNA array hybridization11. The data were analyzed with BRB-ArrayTools (http://linus.nci.nih.gov/BRB-ArrayTools.html)13. The raw expression data were converted to log2 values and then normalized using quantile normalization. For spot filtering, spots with intensity\u2009<10 were removed. Replicate spots within an array were averaged. 
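The preprocessing described above (filtering low-intensity spots, log2 transformation, quantile normalization) can be sketched in a few lines. This is only an illustrative re-implementation, not the BRB-ArrayTools code; the function name and toy intensities are invented, and ties between ranks are not handled:

```python
import numpy as np

def quantile_normalize(x):
    """Force every column (array) to share one distribution: rank each
    column, then replace each rank with the mean of the values holding
    that rank across all columns."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)
    return mean_by_rank[ranks]

# toy two-array spot intensities
raw = np.array([[12.0, 40.0], [500.0, 300.0], [64.0, 128.0]])
keep = (raw >= 10).all(axis=1)       # drop spots with intensity < 10
log2 = np.log2(raw[keep])            # log2 transform
norm = quantile_normalize(log2)      # quantile normalization
```

After normalization both columns have identical value distributions, which is the defining property of the method.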
Moreover, genes meeting any of the following conditions were excluded: log-ratio variation below the 75th percentile, or more than 50% of the data missing or filtered out. Gene chip data of GSE33175 were analyzed using BRB-ArrayTools. To identify CEMs, weighted gene co-expression network analysis (WGCNA) was used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene, and for relating modules to external sample traits. The network and the modules were constructed and detected using WGCNA as previously described16. The co-expression modules were named with the colour assigned by WGCNA. More detailed information, including the script and parameters, can be found in the Supporting Information. The GSE33175 data include six experimental groups at different times. To identify the DEGs-ET at 0.5\u2009h, 1\u2009h, 3\u2009h, 6\u2009h and 9\u2009h compared with the 0\u2009h time point, a random-variance t-test was used13. Genes were considered highly significant if their P value was less than 0.001 and the false discovery rate (FDR) was less than 0.05. To identify the genes whose expression varied over time (DEGs-OT), the plug-in \u201cTime course analysis\u201d from BRB-ArrayTools was employed. This plugin can be used for regression analysis of time series expression data. The tests were performed at an FDR threshold of 0.01. To determine the relationship of the expression arrays between different time points, we performed a correlation analysis using the Spearman correlation test via the pairwise correlation plot plug-in in BRB-ArrayTools. Log intensities of the top 50 genes from the DEGs-OT were utilized in this process. To identify the genes or gene sets co-regulated in a time-dependent manner17, the software program \u201cShort Time-series Expression Miner\u201d (STEM) was used18. The main advantage of STEM is that it implements a novel method for clustering short time series expression data that can differentiate between real and random patterns18. 
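The per-time-point DEG screen can be approximated as below. Note this is a sketch, not the BRB-ArrayTools implementation: an ordinary two-sample t-test plus a Benjamini-Hochberg FDR stands in for the random-variance t-test, and the toy expression values are invented:

```python
import numpy as np
from scipy import stats

def degs_at_timepoint(expr_t, expr_0, p_cut=0.001, fdr_cut=0.05):
    """Per-gene two-sample t-test (stand-in for the random-variance
    t-test) followed by Benjamini-Hochberg FDR estimation; a gene is
    called significant when P < p_cut and FDR < fdr_cut."""
    p = np.array([stats.ttest_ind(a, b).pvalue for a, b in zip(expr_t, expr_0)])
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)          # BH adjustment
    fdr = np.empty(m)
    fdr[order] = np.minimum.accumulate(ranked[::-1])[::-1]
    return (p < p_cut) & (fdr < fdr_cut)

# gene 0 shifts strongly at time t relative to 0 h; gene 1 does not
expr_t = [[5.0, 5.1, 4.9, 5.05, 4.95], [0.02, 0.08, -0.12, 0.06, -0.02]]
expr_0 = [[0.0, 0.1, -0.1, 0.05, -0.05], [0.0, 0.1, -0.1, 0.05, -0.05]]
sig = degs_at_timepoint(expr_t, expr_0)
```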
To reduce redundancy, the genes whose expression varied over time were used in this process. The raw expression data of these genes at the different times were retrieved by BRB-ArrayTools and then formatted according to the user manual of STEM. Finally, the STEM clustering method was adopted to cluster genes with the default parameters18. Each group was defined as temporal expression profile n according to the corresponding P values. Genes belonging to profile n were then named temporal expression profile genes n (TEPGs). We thought the common gene disturbances measured by different methods could help us understand the molecular mechanism of PNI. Therefore, it is reasonable to presume that the same GO terms, pathways and drugs identified in the four datasets might be closely related to PNI. Since we obtained four different gene sets, we performed a comparative enrichment analysis. Simply, the identified CEMs, DEGs-ET, DEGs-OT and TEPGs were used to obtain functional pathways and GO terms. Relevant GO terms for those gene sets were analyzed using the FunRich software (http://www.funrich.org)19. Enriched pathways and drugs were identified using ToppCluster (https://toppcluster.cchmc.org/)20. The functional annotations at the GO term level were mainly focused on biological process in this study. The hypergeometric test and multiple-testing correction (Bonferroni) were employed in this process, with p\u2009\u2264\u20090.05 considered significant. For the DEGs-ET, 134 genes (1\u2009h vs 0\u2009h), 407 genes (3\u2009h vs 0\u2009h), 550 genes (6\u2009h vs 0\u2009h), and 1073 genes (9\u2009h vs 0\u2009h) were identified, respectively (Tables\u00a0\u2013S6). 
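The enrichment step (a hypergeometric over-representation test with Bonferroni correction) can be sketched as follows; the gene and term names, background size and counts are all made up for illustration:

```python
from scipy.stats import hypergeom

def enrich(hit_genes, term_to_genes, background_n):
    """Hypergeometric over-representation test per GO term/pathway,
    Bonferroni-corrected across the number of terms tested."""
    k = len(hit_genes)
    n_terms = len(term_to_genes)
    out = {}
    for term, members in term_to_genes.items():
        overlap = len(hit_genes & members)
        # P(X >= overlap) when drawing k genes from a background of
        # background_n genes, len(members) of which belong to the term
        p = hypergeom.sf(overlap - 1, background_n, len(members), k)
        out[term] = min(1.0, p * n_terms)          # Bonferroni
    return out

hits = {"g1", "g2", "g3", "g4", "g5"}
terms = {"inflammatory response": {"g1", "g2", "g3", "g4", "g5"},
         "unrelated term": {"g10", "g11", "g12", "g13", "g14"}}
adj_p = enrich(hits, terms, background_n=20)
```

A term wholly covered by the hit list comes out significant; a disjoint term does not.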
Using STEM (http://www.cs.cmu.edu/~jernst/stem/), the genes were mapped to 49 model temporal expression profiles in total, of which three temporal expression profiles were statistically significant, and their pathways and drugs were picked out. As mentioned, four modules highly associated with time had been identified. To facilitate a biological interpretation, we wanted to know the function of the genes in the modules: whether they are significantly enriched in certain functional categories or pathways, and which drugs can potentially target them. After pathway enrichment analysis, we found that the turquoise module was mainly involved in metabolic pathways, such as cholesterol biosynthetic processes, the super-pathway of cholesterol biosynthesis, cholesterol biosynthesis, steroid biosynthesis and mitochondrial fatty acid beta-oxidation. Only one pathway, galactose metabolism, was enriched for the yellow module. The blue module was enriched for genes related to rRNA modification in the nucleus and cytosol, ribosome biogenesis in eukaryotes, and rRNA processing in the nucleus and cytosol (Table\u00a0). Meanwhile, although few or no known biological processes were significantly enriched in those modules after the multiple-testing correction (Bonferroni), the unadjusted results suggested that some biological processes, such as cell proliferation, response to lipopolysaccharide, positive regulation of nitric oxide biosynthetic process, negative regulation of apoptotic process, positive regulation of actin cytoskeleton reorganization, and peripheral nervous system myelin maintenance, might be associated with these modules (Table\u00a0). We performed a comparative pathway enrichment analysis using ToppCluster, and the enriched pathways associated with the DEGs-ET were identified (Fig.\u00a0). According to the biological process analysis, there was only one significant biological process for DEGs-0.5\u2009h: positive regulation of transcription from RNA polymerase II promoter (Table\u00a0). 
For the DEGs-OT, 37 pathways were significantly enriched (Fig.\u00a0). After enrichment analysis for the TEPGs, 2 and 10 pathways were identified for the genes from profiles 24 and 39, respectively (Fig.\u00a0). To determine which drugs may be associated with the CEMs, DEGs-ET, DEGs-OT and TEPGs, or which drugs could be targeted to interact with them, the enrichment function of ToppCluster was used. ToppCluster allows analysis of enrichment against genes associated with various drug actions. After enrichment for the different gene sets, the results suggested that many drugs may have a close relationship with the CEMs, DEGs-ET, DEGs-OT and TEPGs (Tables\u00a0\u2013S29). Peripheral nerve injuries (PNI) cause serious health problems, and there is no easily available formula for successful treatment21. After PNI, a series of signal cascades can be triggered, accompanied by the alteration of various gene expressions3. In the past decade, high-throughput technologies for gene analysis have shown tremendous potential in determining the pathogenesis of diseases, including PNI. For example, using gene chips, a large number of differentially expressed genes can be presented. However, the interpretation of what this means may be more significant for understanding the disease22. Moreover, bioinformatics has been developing rapidly and contributes to the dissection of high-throughput data from novel perspectives. Although studies of the pathogenesis and etiology of PNI using high-throughput technology are beginning to emerge, studies that explain the complex molecular mechanism at multiple levels are few. In the present study, we have used various bioinformatics methods to detect the involved molecular functions and biological pathways after PNI. Moreover, the potential drug targets have also been explored. A previous study suggested an \u201cinitial phase\u201d for axonal regeneration23. 
In addition, clinical research showed that there was a significant negative correlation between the time interval from injury to surgery and motor function recovery, implying that early exploration of sciatic nerve injuries can be beneficial if the nerve injury does not improve spontaneously24. However, the precise mechanism is still poorly understood. Moreover, the \u201cinitial phase\u201d and \u201cearly\u201d phase are usually defined from an empirical point of view and have no detailed timeline at short time points. Clinically, failure to diagnose early leads to permanent disability25. In the present study, the results suggest that the 3\u2009h \u201ctime window\u201d may give us new clues for further treatment of PNI. After nerve injury, the injured nerve undergoes structural and molecular changes in preparation for the process of axonal regeneration23. Secondly, a clear divergence of the expression profiles was observed between 3\u2009h, 6\u2009h and 9\u2009h. Thirdly, by using the plug-in \u201cTime course analysis\u201d from BRB-ArrayTools, the DEGs varying over time also displayed a remarkable division at 3\u2009h. In the gene sets we used for enrichment analysis, \u201cresponse to LPS\u201d or \u201ccellular response to LPS\u201d always ranked among the top enriched GO terms. First of all, the unadjusted results from the CEMs suggested that response to LPS might be associated with the blue module. Second, the biological process GO analysis indicated that response to LPS was collectively enriched by all DEGs from 1\u2009h, 3\u2009h, 6\u2009h, and 9\u2009h. What\u2019s more, after exploring the associated GO terms across all time points (DEGs-OT) or in a partial profile (profile 39), \u201cresponse to LPS\u201d was also picked out29. Injury to the peripheral nervous system (PNS) can induce a well-orchestrated cellular process. Several inflammatory cytokines/chemokines are produced as early as 1\u2009h after peripheral nerve lesion, with expression levels peaking at ~24\u2009h30. 
Thus the enriched biological processes in our study are consistent with the inflammatory microenvironment30, which indeed exists as early as 1\u2009h after PNI. LPS, a component of the outer membrane of Gram-negative bacteria, is a potent activator of innate immune responses. Because LPS can induce endogenous inflammatory cytokines such as TNF-\u03b1, IL-1\u03b2 and IL-6, which are responsible for the neurotoxicity observed in neurodegenerative diseases, LPS is often used as a proinflammatory agent to mimic neuroinflammation or neuropathic pain after PNI31. As reported previously, the TNF signaling pathway is implicated in the development of neuropathic pain after peripheral nerve injury31. TNF-\u03b1 can differentially regulate synaptic plasticity in the hippocampus and spinal cord after PNI33. Moreover, TNF mRNA has been detected early following PNI31. As to the pathway analysis, the TNF signaling pathway was shown to be significantly enriched by all DEGs from 0.5\u2009h, 1\u2009h, 3\u2009h, 6\u2009h and 9\u2009h, suggesting an essential role of TNF signaling pathways in the early phase of PNI. This is consistent with results from a previous study34. Besides, we found that troglitazone has a close relationship with the CEMs, DEGs-ET, DEGs-OT and TEPGs. Troglitazone, one of the thiazolidinediones, was initially approved by the FDA (Food and Drug Administration) of the United States but was subsequently withdrawn from the market in the year 200036. Later, other drugs with a similar mechanism of action to troglitazone but presumably without liver toxicity were developed35, such as rosiglitazone and pioglitazone. In addition, previous reports showed that pioglitazone has benefits in PNI. For instance, Reza et al.37 demonstrated that pioglitazone had a protective effect on sciatic nerve ischemia/reperfusion injury. 
Masaki et al.38 showed that pioglitazone could promote peripheral nerve remyelination. To date, there has been no report on the effect of troglitazone in PNI. However, a beneficial effect of troglitazone on peripheral neuropathy in STZ-induced diabetic rats has been reported. At present, troglitazone cannot be clinically used in patients because of its deleterious effects on the liver. However, given the results from the present study, it is reasonable to presume that its potential bio-efficacy, including on peripheral nerve injuries, might have been overlooked. Thus it is possible that evaluation of the precise mechanism of troglitazone in PNI will lead to the development of new drugs for patients with PNI. This study has limitations that should be acknowledged. Firstly, we focused only on the short time after PNI, mainly within 9\u2009h. Longer periods, such as several days, should be considered in the future. Secondly, as in other studies39, because the analytical strategy in our study incorporated prior knowledge, some genes (in the modules or clusters) for which no functional interaction data are available could not be interpreted39. Finally, we combined various methods and looked into the overlapping pathways, GO terms and drugs for PNI, which no doubt increased the results\u2019 reliability but, on the other hand, inevitably ignored some content, which needs great attention. Notwithstanding these limitations, we have provided a modular view of PNI. In conclusion, the time course gene expression was deeply analyzed by bioinformatics methods in this study. A 3\u2009h \u201cwindow period\u201d for gene expression after PNI provides a new clue for further treatment. Responses to LPS and the TNF signaling pathway play an important role, suggesting an inflammatory microenvironment after PNI. Moreover, troglitazone is closely associated with the alteration of gene expression after PNI. 
Further evaluation of the precise mechanism of troglitazone in PNI will lead to the development of new drugs for patients with PNI. Supplementary information: statistical code used for generating the weighted gene co-expression network (doc); Tables S1\u2013S29."}
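The module-trait step of the CEM analysis above (WGCNA summarizes each module by its eigengene, the first principal component of the module's expression, and correlates it with a trait such as time) can be sketched as follows; the toy three-gene module and its weights are invented:

```python
import numpy as np

def module_eigengene(expr):
    """WGCNA-style module eigengene: first principal component of the
    standardized (genes x samples) expression matrix of one module."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    me = vt[0]
    # fix the arbitrary SVD sign so the eigengene follows the module average
    if np.corrcoef(me, z.mean(axis=0))[0, 1] < 0:
        me = -me
    return me

times = np.array([0.0, 0.5, 1.0, 3.0, 6.0, 9.0])          # hours after injury
module = np.vstack([times * w for w in (1.0, 1.2, 0.8)])  # 3 co-expressed genes
me = module_eigengene(module)
r = np.corrcoef(me, times)[0, 1]     # module-trait (time) correlation
```

For a module whose genes all rise with time, the eigengene-time correlation is close to 1, which is exactly how "modules correlated with the time points" are flagged.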
+{"text": "Ever since the accidental discovery of Wingless , research in the field of Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues.This manuscript \u2219 explores the strength of contributing factors in the signaling pathway, \u2219 analyzes the existing causal relations among the inter/extracellular factors effecting the pathway based on prior biological knowledge and \u2219 investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles using the density (Hilbert-Schmidt Information Criterion) and variance (Sobol) based sensitivity indices.The results show the advantage of using density based indices over variance based indices mainly due to the former\u2019s employment of distance measures & the kernel trick via Reproducing kernel Hilbert space (RKHS) that capture nonlinear relations among various intra/extracellular factors of the pathway in a higher dimensional space. In time series data, using these indices it is now possible to observe where in time, which factors get influenced & contribute to the pathway, as changes in concentration of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis & representations in higher dimensional spaces can facilitate in time based administration of target therapeutic drugs & reveal hidden biological information within colorectal cancer samples. 
Recently observed psychophysical laws working downstream of the Wnt pathway rely on the ratio of deviations in input to the absolute value of the input. These deviations are crucial for the observation of a phenotypic behaviour during a time interval. This work explores the influences of fold changes and deviations in fold changes in time using density based sensitivity indices, which employ kernel methods to capture nonlinear relations among the involved intra/extracellular factors. On a static gene expression toy example in normal and tumor cases and on a time series dataset, they outperformed the variance based sensitivity indices. The synergy of prior biological knowledge, sensitivity analysis and representations in higher dimensional spaces facilitates the development of time based, target specific interventions at the molecular level within the pathway. I compartmentalize the manuscript into three different parts: \u2219 a short review containing the systems wide analysis of the Wnt pathway, divided into introduction, problem statement and a solution to address the same via the latest sensitivity analysis methods, \u2219 an extensive description of the methodology, the dataset and the design of the experiments, and finally \u2219 the biological findings from the system wide study of the Wnt pathway using sensitivity analysis. Research in this field began with Sharma\u2019s accidental discovery of the Wingless mutant. With the rapid development of methods in biotechnology and the availability of vast amounts of datasets at the molecular level, there has arisen a need to understand the mechanism of these signaling pathways at a greater level. Systems biology is a field where the idea is to understand the deeper aspects of biology via various models that translate the biological problem into a computational/mathematical framework. The latest opinion on current trends in systems biology can be found in and 17]. In the canonical pathway, Wnt ligands are captured by the Frizzled (FZD)/LRP coreceptor complex. FZD may interact with the Dishevelled (DVL) causing phosphorylation. 
It is also thought that Wnts cause phosphorylation of the LRP via casein kinase 1 (CK1) and the kinase GSK3. These developments further lead to the attraction of AXIN, which causes inhibition of the formation of the degradation complex. The degradation complex consists of AXIN, the \u03b2-catenin transportation complex APC, CK1 and GSK3. When the pathway is active, the dissolution of the degradation complex leads to stabilization of the concentration of \u03b2-catenin in the cytoplasm. As \u03b2-catenin enters the nucleus, it displaces GROUCHO and binds with the transcription cell factor TCF, thus instigating transcription of Wnt target genes. GROUCHO acts as a lock on TCF and prevents the transcription of target genes which may induce cancer. In cases when the Wnt ligands are not captured by the coreceptor at the cell membrane, AXIN helps in the formation of the degradation complex. The degradation complex phosphorylates \u03b2-catenin, which is then recognized by the FBOX/WD repeat protein \u03b2-TRCP. \u03b2-TRCP is a component of a ubiquitin ligase complex that helps in the ubiquitination of \u03b2-catenin, thus marking it for degradation via the proteasome. Cartoons depicting the phenomena of Wnt being inactive and active are shown in Fig.\u00a0. This brief introduction to the Wnt pathway, based on recent work , precedes the problem statement. Succinctly, the endeavour is to address the following issues - \u2219 explore the strength of contributing factors in the signaling pathway, \u2219 analyse the existing causal relations among the inter/extracellular factors affecting the pathway based on prior biological knowledge and \u2219 investigate the significance of deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway in a multi-parameter setting. 
The issues related to \u2219 inference of hidden biological relations among the factors that are yet to be discovered and \u2219 discovery of new causal relations using hypothesis testing will be addressed in a subsequent manuscript. The current manuscript analyses the sensitivity indices for fold changes and deviations in fold changes in 17 different genes from a set of 74 genes as presented by . In order to address the above issues, sensitivity analysis (SA) is performed on either the datasets or on results obtained from biologically inspired causal models. The reason for using these tools of sensitivity analysis is that they help in observing the behaviour of the output and the importance of the contributing input factors via a robust and easy mathematical framework. In this manuscript both local and global SA methods are used. Where appropriate, a description of the biologically inspired causal models ensues before the analysis of results from these models. Seminal work by the Russian mathematician Sobol\u2019 led to the decomposition of a model function into a constant f0 and summands of different dimensions, where f0 is a constant and the integral of each summand with respect to any of its own variables is 0. This implies that orthogonality follows between two functions of different dimensions if at least one of the variables is not repeated. By applying these properties, it is possible to show that the function can be written as a unique expansion. Next, assuming that the function is square integrable, variances can be computed. The ratio of the variance of a group of input factors to the variance of the total set of input factors constitutes the sensitivity index of that particular group. Besides these variance based indices, density based indices exist. It is this strength of the kernel methods that lets HSIC capture the deep nonlinearities in the biological data and provide reasonable information regarding the degree of influence of the involved factors within the pathway. 
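A minimal empirical HSIC estimator (the density based index referred to above) with Gaussian kernels can be written as below. This is a sketch, not the estimator from any particular package; the bandwidth choice is arbitrary and the toy inputs are mine:

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels:
    HSIC = (1/n^2) tr(K H L H), with centering matrix H = I - 11^T/n.
    Larger values indicate stronger (possibly nonlinear) dependence."""
    n = len(x)
    def gram(v):
        d = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d / (2 * sigma ** 2))
    K, L = gram(np.asarray(x, float)), gram(np.asarray(y, float))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

x = np.linspace(-1.0, 1.0, 50)
h_dep = hsic(x, x ** 2)         # nonlinear dependence, near-zero Pearson r
h_none = hsic(x, np.zeros(50))  # a constant carries no dependence
```

The quadratic relation, invisible to a linear correlation coefficient, still produces a clearly positive HSIC, which is the property exploited in the analysis above.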
Improvements in variance based methods also provide ways to cope with these nonlinearities but do not exploit the available strength of kernel methods. Results in the later sections provide experimental evidence for the same. Recent efforts in systems biology to understand the importance of various factors apropos output behaviour have gained prominence. For example, one study compares the sensitivity of GSK3 dynamics to uncertainty in an insulin signaling model. Similar efforts, but on different pathways, can be found in and . SA provides a way of analyzing various factors taking part in a biological phenomenon and deals with the effects of these factors on the output of the biological system under consideration. Usually, the model equations are differential in nature, with a set of inputs and an associated set of parameters that guide the output. SA helps in observing how the variance in these parameters and inputs leads to changes in the output behaviour. The goal of this manuscript is not to analyse differential equations and the parameters associated with them. Rather, the aim is to observe which input genotypic factors have a greater contribution to observed phenotypic behaviour, like a sample being normal or cancerous, in both static and time series data. In this process, the effect of fold changes and deviations in fold changes in time is also considered for analysis in the light of the recently observed psychophysical laws acting downstream of the Wnt pathway. There are two approaches to sensitivity analysis. The first is local sensitivity analysis, in which, if there is a required solution, the sensitivity of a function apropos a set of variables is estimated via a partial derivative for a fixed point in the input space. In global sensitivity analysis, the input solution is not specified. This implies that the model function lies inside a cube and the sensitivity indices are regarded as tools for studying the model instead of the solution. 
The general form of the g-function (as the model or output variable) is used to test the sensitivity of each input factor (i.e. the expression profile of each of the genes). This is mainly due to its non-linearity and non-monotonicity, as well as its capacity to produce analytical sensitivity indices. The g-function takes the form g(x) = \u220f i=1 to d (|4xi \u2212 2| + ai)/(1 + ai), where d is the total number of dimensions and ai\u22650 are the indicators of importance of the input variable xi. Note that lower values of ai indicate higher importance of xi. In our formulation, we randomly assign values of ai\u2208. For the static (time series) data, d=18(71) (factors affecting the pathway). Thus the expression profiles of the various genetic factors in the pathway are considered as input factors and the global analysis is conducted. Note that in the predefined dataset, the working of the signaling pathway is governed by a preselected set of genes that affect the pathway. For comparison purposes, the local sensitivity analysis method is also used to study how an individual factor behaves with respect to the remaining factors while the working of the pathway is observed in terms of the expression profiles of the various factors. Given the range of estimators available for testing the sensitivity, it might be useful to list a few which are going to be employed in this research study; these have been described in the following sections. In the context of \u2019s work regarding psychophysical laws, it was observed that \u2219 the transcriptional machinery of the Wnt pathway depends on the fold changes in \u03b2-catenin instead of absolute levels of the same and \u2219 some gene transcription networks must respond to fold changes in signals according to Weber\u2019s law in sensory physiology. In an unpublished work, for u=f(x), summands indexed by xi1,i2,\u2026,is exist, where 1\u2264i1<\u22ef<is\u2264d; by tracking the sensitivity of these summands over time, one can tell when a particular factor is affecting the pathway in a major way. 
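As a concrete instance, the Sobol' g-function and its analytical first-order indices can be computed directly. The particular ai values below are arbitrary illustrations (not the randomly assigned values used in the study); smaller ai means a more important factor:

```python
import numpy as np

def g_function(x, a):
    """Sobol' g-function: prod_i (|4 x_i - 2| + a_i) / (1 + a_i)."""
    return np.prod((np.abs(4 * x - 2) + a) / (1 + a), axis=-1)

def first_order_indices(a):
    """Analytical first-order Sobol indices for the g-function:
    V_i = 1 / (3 (1 + a_i)^2), V = prod(1 + V_i) - 1, S_i = V_i / V."""
    vi = 1.0 / (3.0 * (1 + np.asarray(a, float)) ** 2)
    return vi / (np.prod(1 + vi) - 1)

a = np.array([0.0, 1.0, 9.0, 99.0])   # small a_i -> important factor
S = first_order_indices(a)
```

Because the g-function contains interactions, the first-order indices sum to less than 1, which is what makes it a useful benchmark for sensitivity estimators.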
This has deeper implications in the fact that one is now able to observe when in time an intervention can be made or a gene be perturbed to study the behaviour of the pathway in tumorous cases. Thus sensitivity analysis of deviations in mathematical formulations of the psychophysical law can lead to insights into the time period based influence of the involved factors in the pathway. This will also shed light on the duration in which the psychophysical laws might be most prevalent. SPECIFIC EXAMPLES OF BIOLOGICAL INTERPRETATIONS GSK3\u03b2 - It is widely known that WNT stimulation leads to inhibition of GSK3\u03b2. In contrast to this, GSK3\u03b2 shows up-regulated levels at t3, t12 and t24. The author is currently unaware of why this contrasting behaviour is exhibited. The later upregulation might point to the fact that the effectiveness of Wnt stimulation has decreased and GSK3\u03b2 plays the role of stabilizing and controlling the behaviour of the pathway by working against the Wnt stimulation and preventing further degradation. sobol2002 - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices at the same time, at a total cost of (p+2) \u00d7 n model evaluations. These are called the Saltelli estimators. This estimator suffers from a conditioning problem when estimating the variances behind the indices computations. This can seriously affect the Sobol indices estimates in the case of a largely non-centered output. To avoid this effect, one has to center the model output before applying \u201csobol2002\u201d. Functions \u201csoboljansen\u201d and \u201csobolmartinez\u201d do not suffer from this problem. sobol2007 - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices at the same time, at a total cost of (p+2) \u00d7 n model evaluations. 
These are called the Mauntz estimators. sobolmartinez - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices using correlation coefficient-based formulas, at a total cost of (p + 2) \u00d7 n model evaluations. These are called the Martinez estimators. sobol - implements the Monte Carlo estimation of the Sobol sensitivity indices. It allows the estimation of the indices of the variance decomposition up to a given order, at a total cost of (N + 1) \u00d7 n, where N is the number of indices to estimate. These estimators are provided by the R package \u201csensitivity\u201d ."}
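A bare-bones version of the sampling scheme behind these estimators (two independent input matrices plus p column-swapped hybrids, hence the (p + 2) \u00d7 n evaluation cost quoted for sobol2002) might look like this. This is a simplified sketch, not the R package code; the centering via f0\u00b2, the linear toy model and the seed are my choices:

```python
import numpy as np

def sobol_first_order(model, p, n, rng):
    """Saltelli-style Monte Carlo first-order Sobol indices.
    Cost: (p + 2) * n model evaluations (A, B and p hybrid matrices)."""
    A = rng.random((n, p))
    B = rng.random((n, p))
    fA, fB = model(A), model(B)
    f0 = np.concatenate([fA, fB]).mean()
    V = np.concatenate([fA, fB]).var()
    S = np.empty(p)
    for i in range(p):
        Ci = B.copy()
        Ci[:, i] = A[:, i]              # shares only X_i with A
        # E[f(A) f(C_i)] - f0^2 estimates the partial variance V_i
        S[i] = (fA @ model(Ci) / n - f0 ** 2) / V
    return S

rng = np.random.default_rng(1)
model = lambda X: X[:, 0] + 0.1 * X[:, 1]   # analytic S_1 = 1/1.01 ~ 0.99
S = sobol_first_order(model, p=2, n=20000, rng=rng)
```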
+{"text": "Systems research spanning fields from biology to finance involves the identification of models to represent the underpinnings of complex systems. Formal approaches for data-driven identification of network interactions include statistical inference-based approaches and methods to identify dynamical systems models that are capable of fitting multivariate data. Availability of large data sets and so-called \u2018big data\u2019 applications in biology present great opportunities as well as major challenges for systems identification/reverse engineering applications. For example, both inverse identification and forward simulations of genome-scale gene regulatory network models pose compute-intensive problems. This issue is addressed here by combining the processing power of Graphics Processing Units (GPUs) and a parallel reverse engineering algorithm for inference of regulatory networks. It is shown that, given an appropriate data set, information on genome-scale networks (systems of 1000 or more state variables) can be inferred using a reverse-engineering algorithm in a matter of days on a small-scale modern GPU cluster. One of the outstanding challenges of systems biology is to reconstruct and simulate genome-scale regulatory networks based on genome-scale data. This challenge is made difficult by the sparseness and noisiness of genome-scale expression and proteomics data\u00a0, 2, as wGraphics Processing Units (GPUs) facilitate parallel calculations in applications that can be scaled from a desktop to a high-performance computing environment. GPU computing has impacted computational modeling with applications in various fields of research such as bioinformatics\u00a0, moleculN variables scales as Inference of network models from high-dimensional data has two bottlenecks: the size of the problem\u00a0, and robnt}O(N2)\u00a0. This \\dmescales\u00a0. HoweverYet GPU computing is challenging due to the required programming . Effecti\u00ae CUDA\u00ae compiler. 
It centers on solving non-linear ordinary differential equations (ODEs) describing the interactions between genes. Two variants of the algorithm utilizing different ODE solvers are implemented on the GPU platform. The first one is ODEINT; the second is cuLsoda. Application to a 1000 node in silico data set illustrates the viability of reverse engineering of genome-scale biological networks. Results from the GPU method were compared with a widely used method called TIGRESS, which uses stability selection. This article reports an updated version of the Bazil et al. algorithm tailored for a GPU architecture applied to reverse-engineer a 1000 node network from high-throughput time-course in silico data. The application is developed using the NVIDIA\u00ae CUDA\u00ae compiler. The distributed algorithm developed by Bazil et al. is suited for analyzing large-scale data sets including, but not limited to, time-course mRNA expression data. The time-course inverse problem of determining regulatory networks is decomposed into N one-dimensional problems, one for each of the N variables in the network. The algorithm involves searching for maximally likely versions of the one-dimensional model for each state variable. In practice, its application requires integrating millions of different realizations of the basic underlying ODE model. The governing ODE used to describe regulatory pathways between genes in a network is similar to the ODE described in the community-wide challenge within the DREAM project. The mRNA level of the jth gene is governed by a mass balance equation: the rate at which the jth gene is transcribed is modeled by competitive binding of activating and inhibiting transcription factors subject to co-operativity and saturation, with a maximal rate of mRNA production specific to the jth gene. 
According to these governing equations, gene transcription is determined by a competition among inhibitory and activating factors. There is no single weight associated with a given activating or inhibiting edge; rather, each activation interaction has an associated binding constant. Since the algorithm developed by Bazil et al. splits the network problem for N genes into N independent one-dimensional problems (sub-networks), one for each gene, these independent one-dimensional problems can be run in parallel to search for trial models for each gene. This implementation of independent sub-networks is described in pseudo-code in Algorithm\u00a01. The ODE solver is run on the GPU so that millions of independent candidate sub-networks can be evaluated simultaneously. A final global network is then generated by combining all the sub-networks generated for each gene and filtering out unlikely interactions. To test the application of GPU computing to this problem, computational costs for simulating large numbers of realizations of the gene expression model are determined for implementations on CPU versus GPU. The algorithm is implemented using two different ODE solver packages, ODEINT and cuLsoda, on the GPU. ODEINT is developed in a C++ environment by Ahnert and Mulansky; it uses Thrust, which is a C++ template library for CUDA. cuLsoda is a CUDA\u00ae version of the LSODE ODE solver developed by Hindmarsh and Petzold. CPU calculations for the case of ODEINT are conducted using a quad-core Intel i5 processor where OpenMP, via Thrust, is used to utilize all four cores of the CPU. The parameters of the governing ODE are declared with single precision (single precision floating point) in one case and double precision in another. LSODE was originally developed in FORTRAN as part of a systematized collection of ODE solvers by Hindmarsh; Thompson developed a CUDA\u00ae compatible version to be run on a GPU. 
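The competitive-binding production term described above can be sketched numerically. This is a hedged illustration in our own notation (Vmax, gamma, K and the Hill exponent n are assumed placeholder names, not the paper's parameters): each gene's mass balance is dx_j/dt = Vmax_j * (1 + A_j) / (1 + A_j + I_j) - gamma_j * x_j, with saturating occupancy terms A_j and I_j from activating and inhibiting transcription factors, integrated here by simple forward Euler.

```python
# Illustrative sketch of a competitive-binding gene expression ODE
# (assumed parameter names, not the paper's exact model).

def step(x, dt, vmax, gamma, activators, inhibitors, K=1.0, n=2.0):
    dx = []
    for j in range(len(x)):
        a = sum((x[p] / K) ** n for p in activators[j])  # activating occupancy
        i = sum((x[p] / K) ** n for p in inhibitors[j])  # inhibiting occupancy
        # competitive binding with saturation, minus first-order degradation
        dx.append(vmax[j] * (1.0 + a) / (1.0 + a + i) - gamma[j] * x[j])
    return [xj + dt * dj for xj, dj in zip(x, dx)]

# Toy 3-gene system: gene 1 activates gene 0, gene 2 inhibits gene 0.
vmax, gamma = [1.0, 0.5, 0.5], [0.5, 0.5, 0.5]
activators = {0: [1], 1: [], 2: []}
inhibitors = {0: [2], 1: [], 2: []}
x = [0.1, 0.8, 0.2]
for _ in range(200):          # integrate to t = 20 with dt = 0.1
    x = step(x, 0.1, vmax, gamma, activators, inhibitors)
print(x)  # approaches the steady state [4/3, 1, 1]
```

In a GPU implementation, many such trial systems (candidate activator/inhibitor sets) would be integrated in parallel rather than in this serial loop.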
The LSODA/CUDA implementation of the sub-network identification component of the Bazil et al. algorithm is distributed on GitHub. The GPU implementation of the reverse-engineering algorithm is validated by applying it to identify connections for a target gene in a 10 gene network generated in silico. This allows us to verify whether the GPU implementation of the algorithm performs equivalently for the same test problems used by Bazil et al. The 10-gene network model illustrated in Figure 3 of Bazil et al. is reverse-engineered. To test speed and accuracy of the reverse-engineering algorithm a 1000 gene in silico network is generated. The method is implemented using MATLAB R2013b. The network generation algorithm generates connections for each gene and assigns kinetic parameters associated with these connections, for example two activators and two inhibitors (genes 2 and 3) for a given gene. The algorithm works by finding as many models (sets of activators and inhibitors) that can explain the time course data as possible. In practice, several thousand candidate subnetworks are generated at random. These subnetworks are evaluated by finding optimal parameter values (for activation and inhibition constants associated with each model) that best fit the time course data. Trial subnetworks that can effectively fit the data are accepted as potential connections. These recovered connections are described for gene ID #4, and histograms of inferred activator and inhibitor edges for this system are illustrated in the corresponding figure. Although this test problem represents an idealized data set, the important finding is that the GPU implementation of the algorithm returns effectively the same results as the CPU implementation. This is an independent validation for the GPU version of the algorithm. 
On the architectures used here, the GPU implementation is approximately 6 times faster than the CPU implementation. This speedup is obtained by comparing the time taken to generate 2051 trial networks on the GPU and 500 trial networks on the CPU. If the same number of networks (2000) is generated on both CPU and GPU, then the speedup factor on the GPU reaches approximately 24. For NVIDIA\u00ae devices the NVIDIA\u00ae profiler gives insight into how the two ODE solvers, ODEINT and LSODA, perform on the GPU. The fraction of total simulation time spent on data transfer between host and device, represented by f, is examined relative to the size of the problem M, the number of ODEs solved. f drops considerably as the problem size goes up, yielding larger speedups for problem sizes larger than a million. This is why the number of jobs must be close to a million for the ODEINT solver in order to obtain any significant speedup on the GPU. In the case of LSODA, f does not dominate to the same extent for smaller problem sizes and saturates after the problem size exceeds 1600. For any application, obtaining useful speedup on a GPU architecture requires reducing the amount of time spent in data transfer between the host (CPU) and the device (GPU). NVIDIA\u00ae has developed CUDA as well as GPU hardware since the release of Tesla K20 GPUs, and there are other avenues for speedup which can be explored, such as shared memory and asynchronous data transfers. More than an order of magnitude speedup is observed, close to 100 in some configurations. To determine the performance characteristics of the algorithm applied in a realistic setting, the inference algorithm is applied to time-course in silico data generated from a 1000 gene network. We generated time-course in silico data from this network from two independent simulation experiments. 
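The relationship between the transfer fraction f and the realised speedup can be sketched with a toy cost model (the constants below are assumptions for illustration, not measured values): total GPU time is a fixed per-batch transfer overhead plus per-ODE compute, so f shrinks as the number of ODEs M grows and the speedup over the CPU rises toward its asymptote.

```python
# Toy host-device transfer model with assumed (not measured) constants.

def transfer_fraction(M, t_xfer_per_ode=1e-6, t_xfer_fixed=5e-2,
                      t_gpu_per_ode=1e-5):
    """f = transfer time / total GPU time for a batch of M ODE solves."""
    t_xfer = t_xfer_fixed + t_xfer_per_ode * M
    t_comp = t_gpu_per_ode * M
    return t_xfer / (t_xfer + t_comp)

def speedup(M, t_cpu_per_ode=5e-4, t_xfer_per_ode=1e-6, t_xfer_fixed=5e-2,
            t_gpu_per_ode=1e-5):
    """CPU time over total GPU time (compute plus transfer)."""
    t_gpu = t_xfer_fixed + (t_xfer_per_ode + t_gpu_per_ode) * M
    return (t_cpu_per_ode * M) / t_gpu

for M in (10_000, 1_000_000):
    print(M, round(transfer_fraction(M), 3), round(speedup(M), 1))
```

Under this model the batch must be large before the fixed transfer cost is amortised, mirroring the observation that ODEINT needs on the order of a million jobs before the GPU pays off.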
This approach is used to represent two independent experiments on a biological system. For these two experiments, the internal network connections and kinetic parameters remain unchanged. Different dynamic behavior in the two experiments is obtained through two different sets of external stimuli, as detailed in the appendix. In brief, a random subset of the 1000 genes is activated in each experiment by assigning a non-zero value to the corresponding stimulus terms. Expression profiles of the first 16 genes are illustrated in the corresponding figure. The time taken to reverse-engineer connections for this 1000 node network for one experiment is 13.1 days on five Tesla K20 GPUs. Here the regulatory networks determined for two genes, gene ID #645 and gene ID #614, are analyzed in greater detail. Gene 645 is a case where our strategy succeeds and gene 614 is a case where the algorithm is less successful. The \u201ctrue\u201d network for gene 645 consists of three activators: 359, 594, and 973. The frequency distribution of putative activators for this gene is presented for the two experiments. Results for gene 614 are illustrated in the corresponding figure; for this case, since we obtain no useful information from Experiment 1, it is not useful to take the intersection of identified edges to improve the false positive rate. For the 1000 gene in silico data set analyzed here, in TIGRESS the percentage of true edges recovered was higher at lower threshold values. Thus our algorithm performed better than TIGRESS for this trial problem in recovering true edges, while generating more false positives. TIGRESS generated significantly fewer edges overall and a marginally lower false positive rate. Moreover, this comparison is biased in favor of our algorithm since the in silico data used for the test are generated from a model that uses the same underlying mathematical structure as assumed by our inference algorithm. 
Practical applications to real data may draw on multiple algorithms, each with its own relative advantages and disadvantages. The algorithm presented here is able to reverse-engineer network connections in a genome-scale network. By leveraging multiple GPUs, network inference on the scale of thousands of variables becomes feasible. Thus, genome-scale biological network inference is becoming feasible using the current generation of GPU-architecture computing clusters. The network prediction of the GPU method is comparable to other GRN methods such as TIGRESS. The main advantage of the GPU method is its ability to process large data sets in a relatively short period of time. The data-driven approach used by the algorithm depends on the information available for each node for a particular experiment. So in order to identify a given interaction in the network, the data must contain extractable information pertaining to that particular interaction. Furthermore, when multiple genes show similar time-course behavior, it is difficult to distinguish true interactions from false positives. This shortcoming can potentially be overcome by utilizing robust clustering algorithms when applying the reverse-engineering algorithm to high dimension biological time-course data."}
+{"text": "Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool. Current biotechnology has allowed researchers in various fields to obtain immense amounts of experimental information, ranging from macromolecular sequences and gene expression data to proteomics and metabolomics. In addition to large-scale genomic information obtained through such methods as third-generation DNA sequencing, newer technology, such as RNA-seq and ChIP-seq, has allowed researchers to fine-tune the analysis of gene expression patterns.
The core GRN apparatus consists of the sum of cis-regulatory modular DNA sequence elements that interact with TFs. These sequences read and process information incoming from the cell, transducing it into the formation of gene products while modulating their abundance. Interactions among genes are mediated by gene products such as DNA-binding proteins (including TFs) and miRNAs. The analyses of gene interactions can be difficult if time-series data are part of the experimental design. Detailed experimental analysis of several functional regulatory elements has revealed that they consist of dense clusters of unique, short DNA sequences specifically recognized by a range of TFs. Biochemically, protein-binding to these sequences controls the regulatory output of the clusters and, from an informational perspective, clustered specific target sites determine the type of regulatory outcome and the cellular functions that will be performed. GRNs are encoded in the DNA and can be thought of as a sequence-dependent regulatory genome, given that TFs recognize specific short DNA motifs. The small length of these motifs means that they will occur frequently but randomly within the enormity of the total genome of a particular organism. Analyses of time-series data from microarrays can show the chronological expression of specific genes or groups of genes. These temporal patterns can be used to infer or propose causal relationships in gene regulation. For example, one study profiled Saccharomyces cerevisiae genes with 49 time points and transcriptional oscillations of about 4 hours. These oscillations reveal cell redox states, which in turn result from changes in metabolic fluxes and cell cycle phases. Numerous algorithms have been developed for inferring GRNs. 
As knowledge in several biological fields leads to an ever-expanding accumulation of gene expression data, the main consideration in data processing is that analysis of information becomes increasingly time-consuming, thus creating a demand to speed up the analytical process. In order to obtain results more expeditiously, we develop information-theoretic algorithms using MapReduce that run on a distributed, multinode Apache Hadoop cluster in a cloud environment. Cloud resources are increasingly more flexible and affordable compared with local traditional computing resources. Cloud computing advantages in the field of bioinformatics research are well known. Previous information-theoretic algorithms for network inference were implemented in the R programming language using steady state data and time-series data. Suppose there are m genes, and each gene has n expression values recorded at n different time points, respectively. Our framework consists of three steps. First, we compute, for each gene g, the first time point t (t > 1) at which a substantial change in the gene expression of g with respect to the gene expression of g at time point 1 takes place. This t is referred to as the time point of Substantial Change of Expression (ScE) of gene g, denoted as ScE(g). Second, we compute, for each pair of genes gx and gy, the influence of gx on gy, denoted as influence(gx, gy), based on the ScE values of the genes. In contrast to the R-based ARACNE and TimeDelay-ARACNE, our proposed information-theoretic framework is tailored to the MapReduce programming paradigm. Like TimeDelay-ARACNE, the input is time-series gene expression data. Let g(t) be the expression value of gene g at time point t. We say g is activated (or induced) at time point t (t > 1) if g(t)/g(1) > \u03c4, where \u03c4 > 1 is a threshold. We say g is inhibited (or repressed) at time point t (t > 1) if g(t)/g(1) < 1/\u03c4. For each gene g, we maintain two sets of time points: g+(t) and g\u2212(t); g+(t) contains all time points at which g is induced and g\u2212(t) contains all time points at which g is repressed. 
Initially, g+(t) = \u2300 and g\u2212(t) = \u2300. The two sets of time points are then updated as follows. For each time point t (t > 1), if 1/\u03c4 \u2264 g(t)/g(1) \u2264 \u03c4, then g is neither induced nor repressed at time point t. In this case, we simply ignore this time point t without adding t to g+(t) or g\u2212(t). The value of \u03c4 used in this study is set to 1.2, chosen based on the datasets used in the study (DREAM4). Let ScE(g) represent the first time point t (t > 1) at which g is either induced or repressed. For each pair of genes ga and gb, there are three cases to be considered. If ScE(ga) < ScE(gb), we send the ordered pair (ga, gb) and the expression values of the two genes to the reducers; if ScE(gb) < ScE(ga), we send the ordered pair (gb, ga) and the expression values of the two genes to the reducers; if ScE(ga) = ScE(gb), both ordered pairs should be considered, together with their gene expression values. For each ordered pair (gx, gy) received, the time-delayed mutual information is computed, where n is the total number of time points, p(gxi) is the marginal distribution of gxi, and p(gxi, gyi+k) is the joint distribution of gxi and gyi+k. The parameter k, 1 \u2264 k \u2264 h, represents the length of delayed time and h is the maximum length of delayed time. The notation gxi denotes the gene expression of gx at time point i and gyi+k is the gene expression of gy at time point i + k. Mutual information is thus computed between expression values offset by the delay k. Then, the influence of gx on gy is calculated as the maximum of these time-delayed mutual information values over k. If influence(ga, gb) \u2265 influence(gb, ga), then we send (ga, gb) and its influence onward; otherwise we send (gb, ga) and its influence. For each pair of genes so received, if influence(gx, gy) > \u03b5, then we create an edge from gx to gy indicating that gx substantially influences gy or gx regulates the expression of gy; that is, there is a predicted present edge from gx to gy. If influence(gx, gy) \u2264 \u03b5, then we do not create an edge from gx to gy; that is, there is a predicted absent edge from gx to gy. 
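The induced/repressed bookkeeping and the ScE definition above can be sketched as follows, with \u03c4 = 1.2 as in the study (the function names are ours, not the paper's):

```python
# Sketch of induced/repressed time-point sets and ScE (assumed helper names).

def induced_repressed(expr, tau=1.2):
    """expr: expression values g(1..n); returns (g_plus, g_minus) as sets of
    1-based time points at which g is induced / repressed relative to g(1)."""
    base = expr[0]
    g_plus, g_minus = set(), set()
    for t in range(2, len(expr) + 1):
        ratio = expr[t - 1] / base
        if ratio > tau:
            g_plus.add(t)          # induced: g(t)/g(1) > tau
        elif ratio < 1.0 / tau:
            g_minus.add(t)         # repressed: g(t)/g(1) < 1/tau
        # otherwise the time point is ignored
    return g_plus, g_minus

def scE(expr, tau=1.2):
    """First time point t > 1 at which the gene is induced or repressed."""
    g_plus, g_minus = induced_repressed(expr, tau)
    changed = g_plus | g_minus
    return min(changed) if changed else None

expr = [1.0, 1.1, 0.7, 1.5, 1.6]
print(induced_repressed(expr))  # ({4, 5}, {3})
print(scE(expr))                # 3
```

Note that t = 2 is ignored here because 1.1/1.0 lies within [1/1.2, 1.2], matching the "neither induced nor repressed" rule.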
With the predicted present and absent edges, we are able to infer or reconstruct a gene regulatory network. The value of \u03b5 used in this study is set to 0.96, chosen based on the datasets used in the study (DREAM4) and supported by a t-test (p < 0.05). (We use gx to represent both a gene and its identifier when the context is clear.) Each line in the input file contains a pair of genes and their expression values. Genes are sorted based on their identifiers. Each pair of genes gx, gy occurs in the input file only once; specifically, only the gene pair in which the identifier of gx is less than the identifier of gy occurs in the input file. Suppose there are m genes; then there are m \u00d7 (m \u2212 1)/2 lines in the input file. Each gene has an identifier. In Algorithm M3, the mappers have to do the computational steps; the key emitted is the edge gx \u2192 gy and the value is the influence of gx on gy that exceeds the threshold \u03b5. The time needed by mappers is bounded by O(m^2/M) and the time needed by reducers is bounded by O(m^2/R), where m is the number of genes, M is the number of mappers, and R is the number of reducers. Thus, the time complexity of our MapReduce algorithms is O(m^2/M) + O(m^2/R). Note that this is a very pessimistic upper bound since reducers often work in parallel with mappers, and hence the actual time needed by the algorithms is much less. Note also that, in practice, M > R, and hence the time complexity of our algorithms is bounded by O(m^2/R). The four algorithms described above are evaluated experimentally. The dataset used in the experiments was GSE30052, downloaded from http://www.ncbi.nlm.nih.gov/geo/. This dataset was created using an Affymetrix Yeast Genome 2.0 Array containing 5,744 probe sets for S. cerevisiae gene expression analysis. The dataset contains 10,928 genes with 49 time points. The dataset is split into key-value pairs as described above. We divided GSE30052 into smaller datasets that were subsets of GSE30052 with varying numbers of genes. 
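The time-delayed mutual information and influence computation described above can be sketched as follows. This is a hedged illustration on already-discretised expression values (the discretisation and function names are our simplifications, not the paper's implementation):

```python
import math
from collections import Counter

# Sketch of time-delayed mutual information and influence (our helper names).

def mutual_info(xs, ys):
    """Plug-in MI estimate (bits) between two equal-length discrete series."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def influence(gx, gy, h=2):
    """Max over delays k = 1..h of MI between gx at time i and gy at i + k."""
    return max(mutual_info(gx[:-k], gy[k:]) for k in range(1, h + 1))

# Toy discretised profiles where gy echoes gx one time point later.
gx = [0, 0, 1, 1, 0, 0, 1, 1]
gy = [0, 0, 0, 1, 1, 0, 0, 1]
print(influence(gx, gy, h=2) > influence(gy, gx, h=2))  # True
```

The asymmetry of the influence score in the last line is what lets the framework orient the predicted edge from gx to gy; an edge is then kept only if the influence exceeds the threshold \u03b5.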
We then fixed the algorithm and used M2 in all subsequent experiments. Finally, we conducted experiments to compare the MapReduce implementation of the M2 algorithm running on the Hadoop cluster (denoted MRC), the MapReduce implementation of the M2 algorithm running on a standalone single-node server (denoted MRS), the Java implementation of the M2 algorithm running on a single-node server, and the R implementation of the related time-delayed mutual information algorithm, TimeDelay-ARACNE. Our information-theoretic algorithms for inferring gene regulatory networks are implemented in MapReduce and run on a Hadoop cluster. A tool that is closely related to our work is the TimeDelay-ARACNE program in R, which also infers networks from time-series expression data. To evaluate the accuracy of these programs, we adopted the five time-series gene expression datasets available in the DREAM4 100-gene in silico network inference challenge. We also tested different values for the parameters \u03c4, \u03b5, and the maximum length of delayed time, h, used in the proposed algorithms. Experimental results showed that the default values for these parameters achieve the highest accuracy. When compared with other parameter values, the accuracy achieved by the default parameter values is significantly higher than the accuracy achieved by the other parameter values according to Wilcoxon signed rank tests (p < 0.05). Experimental results also showed that the M2 algorithm, which uses mappers to perform a large portion of the work and reducers to perform a relatively small amount of computation, achieves the best performance. This M2 algorithm is faster than an algorithm in which the mappers have to do all the computation. 
Moreover, the M2 algorithm is much faster than another algorithm in which the reducers have to do all the computation and become too busy to quickly complete the job. When tested on DREAM4 datasets with 100 genes in each dataset, our MapReduce program (M2) is slightly better than a closely related R program (TimeDelay-ARACNE) in terms of prediction accuracy. The work presented here shows that distributing highly parallel tasks in a cloud environment achieves higher performance than running the tasks in a standalone or noncloud environment. In general, cloud computing can provide the power to integrate the ever-increasing information about the Three Spaces of gene networks as well as epigenetics, which is an emerging field. Genome-scale metabolic models are becoming essential in biomedical applications, and researchers are moving towards building such models; MapReduce-based approaches may prove useful in these areas as well."}
+{"text": "Inference of gene regulatory network structures from RNA-Seq data is challenging due to the nature of the data, as measurements take the form of counts of reads mapped to a given gene. Here we present a model for RNA-Seq time series data that applies a negative binomial distribution for the observations, and uses sparse regression with a horseshoe prior to learn a dynamic Bayesian network of interactions between genes. We use a variational inference scheme to learn approximate posterior distributions for the model parameters.The methodology is benchmarked on synthetic data designed to replicate the distribution of real world RNA-Seq data. We compare our method to other sparse regression approaches and find improved performance in learning directed networks. We demonstrate an application of our method to a publicly available human neuronal stem cell differentiation RNA-Seq time series data set to infer the underlying network structure.Our method is able to improve performance on synthetic data by explicitly modelling the statistical distribution of the data when learning networks from RNA-Seq time series. Applying approximate inference techniques we can learn network structures quickly with only moderate computing resources. Methods for the inference of gene regulatory networks from RNA-Seq data are currently not as mature as those developed for microarray datasets. Normalised microarray data possess the desirable property of being approximately normally distributed so that they are readily amenable to various forms of inference, and in the literature many graphical modelling schemes have been explored that exploit the normality of the data.The data generated by RNA-Seq studies on the other hand present a more challenging inference problem, as the data are no longer approximately normally distributed, and before normalisation take the form of non-negative integers. 
In the detection of differential expression in RNA-Seq data, negative binomial distributions have been applied, providing a reasonable fit to the observed counts. One specific case of interest in the analysis of RNA-Seq data is the study of time series in a manner that takes into account the temporal relationships between data points. Previous work in the literature has developed sophisticated models for the inference of networks from microarray time series data, but these do not directly address count data. Here we present a method for the inference of networks from RNA-Seq time series data through the application of a Dynamic Bayesian Network (DBN) approach, that models the RNA-Seq count data as being negative binomially distributed conditional on the expression levels of a set of predictors. Whilst there has been work applying negative binomial regularised regression approaches in the literature, here we apply them within a DBN framework, in which the value of a variable at time t is dependent only on the values of a set of parent variables at time t\u22121, where Pa(i) is the set of parents of variable i in the network. In our case we wish to model the expression level of a gene conditional on a set of parent genes that have some influence on it. To learn the set of parent variables of a given gene, it is possible to perform variable selection in a Markov Chain Monte Carlo framework, proposing to add or remove genes to the parent set in a Metropolis-Hastings sampler. Another option, as suggested in previous work, is to regress on all candidate parent genes and select those whose coefficients are significantly larger than zero as parents. To simplify the presentation, below we consider the regression of the counts y for a single gene i, conditional on the counts X of the remaining W = M\u22121 genes at the previous time point, with dispersion \u03c9, where \u03b2 is a vector of regression coefficients \u03b2w and a constant term \u03b2c. We apply the NB2 formulation of the negative binomial distribution, in which the mean for each observation is multiplied by a scaling factor st for each sample to account for sequencing depth. 
The st can be estimated from the data by considering the sum of counts for each sample, or by more robust approaches from the literature. We place a prior on \u03b2c, and to enforce sparsity of the \u03b2w we apply a horseshoe prior with a global shrinkage parameter \u03c4 that allows the degree of shrinkage to be learnt from the data. Finally we place a gamma prior on the dispersion parameter \u03c9. This gives a joint probability over the data and the model parameters (subsumed into \u03b8). We now apply a variational inference scheme, approximating the posterior p(\u03b8|x) with a distribution q(\u03b8). To do so we attempt to minimise the Kullback-Leibler (KL) divergence between the two. As the KL divergence is bounded below by zero, we can define a lower bound on the logarithm of the model evidence (the ELBO); minimising the KL divergence between p(\u03b8|x) and q(\u03b8) is equivalent to maximising the ELBO. To make the problem tractable we consider a mean field approximation where the posterior is approximated by a series of independent distributions q(\u03b8i) on some partition of the parameters. Under the mean field assumption it can be shown that the optimal q(\u03b8i) is proportional to the exponentiated expectation of the log joint probability, where the expectation is taken over the remaining q(\u03b8j\u2260i). In many cases this formalism is sufficient to derive a coordinate ascent algorithm to maximise the ELBO by updating the variational parameters of each q(\u03b8i) in turn. Unfortunately in our model the optimal distribution for \u03b2w does not have a tractable solution. However, following previous work, we can simplify the required expectations for each \u03b8i by considering only the neighbours of \u03b8i when our model is viewed as a graphical model, where Chi denotes the children of node i in the graphical model. 
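The NB2 observation model described above can be sketched numerically. This is a hedged illustration in our own notation (the helper names and the toy data are ours): counts y_t have mean mu_t = s_t * exp(\u03b2c + \u03a3w \u03b2w x_{t-1,w}) and dispersion \u03c9, giving Var = mu + mu^2/\u03c9.

```python
import math

# Sketch of the NB2 log-likelihood with size factors (assumed helper names).

def nb2_logpmf(y, mu, omega):
    """NB2 parameterisation: size r = omega, success prob omega/(omega+mu)."""
    return (math.lgamma(y + omega) - math.lgamma(omega) - math.lgamma(y + 1)
            + omega * math.log(omega / (omega + mu))
            + y * math.log(mu / (omega + mu)))

def log_likelihood(y, X, s, beta_c, beta, omega):
    """y[t]: count at time t; X[t]: predictor values at time t-1 (we pass
    them on a transformed scale, a simplification); s[t]: size factor."""
    total = 0.0
    for t in range(len(y)):
        eta = beta_c + sum(b * x for b, x in zip(beta, X[t]))
        mu = s[t] * math.exp(eta)     # mean scaled for sequencing depth
        total += nb2_logpmf(y[t], mu, omega)
    return total

y = [3, 7, 12, 9]                 # toy counts for one gene
X = [[1.0], [2.0], [2.5], [2.2]]  # one parent gene's lagged expression
s = [1.0, 1.0, 1.2, 0.9]          # per-sample size factors
print(log_likelihood(y, X, s, beta_c=0.5, beta=[0.8], omega=10.0))
```

In the full model this likelihood is combined with the horseshoe prior on the \u03b2w and optimised variationally rather than evaluated directly.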
Considering each term on the right hand side in turn, the variational updates can then be derived as in previous work. We apply our method to the task of inferring directed networks from simulated gene expression time series. The time series were generated by extracting subnetworks of the Saccharomyces cerevisiae gene regulatory network with the GeneNetWeaver software, and then simulating the dynamics of the networks under our DBN model. Subnetworks of 25 and 50 nodes were generated and used to simulate 20 time points with 3 replicates. Synthetic count data were generated by constructing a negative binomial DBN model with regression coefficients \u03b2 sampled from a mixture of equally weighted components, with parameters informed by a real data set (ERP003613) consisting of 171 samples from 25 tissues in humans. Performance is compared using both AUC-ROC and AUC-PR; in sparse networks there are many fewer true positives than true negatives, a situation in which the AUC-ROC does not distinguish performance as well as AUC-PR. For networks of both 25 and 50 nodes our method shows improved performance in learning directed networks. To illustrate an application of our model to a real world RNA-Seq data set, we consider a publicly available RNA-Seq time course data set available from the recount2 database, accession SRP041159. The data consist of RNA-Seq counts from neuronal stem cells for 3 replicates over 7 time points after the induction of neuronal differentiation, from which a subset of genes was selected. Applying our method and selecting edges with a posterior probability > 0.95 produced an inferred network over these genes. For each node we also calculate the betweenness centrality, which is the fraction of the total number of shortest paths between nodes in the network that pass through a given node. This gives a measure of the importance of a node in the network, as nodes with a larger betweenness centrality will disrupt more paths within the network if deleted, and act as bottlenecks that connect modules within the network. Looking at the betweenness centrality of each node it appears that PARP12 and CDON, and to a lesser extent COL17A1, are important carriers of information within the network. 
Of these genes playing a central role in the network, CDON has been shown to promote neuronal differentiation through activation of the p38MAPK pathway. We have developed a fast and robust methodology for the inference of gene regulatory networks from RNA-Seq data that specifically models the observed count data as being negative binomially distributed. Our approach outperforms other sparse regression based methods in learning directed networks from time series data. Another approach to network inference from RNA-Seq data could be to further develop mutual information based methodologies with this specific problem in mind. Mutual information based methods have the benefit of being independent of any specific model of the distribution of the data, and so could help sidestep issues in parametric modelling of RNA-Seq data. However this comes at the cost of abandoning the simplifying assumptions that are made by applying a parametric model that provides a reasonable fit to the data, and presents challenges of its own in the reliable estimation of the mutual information between random variables. In the appendix, the half-Cauchy terms of the horseshoe prior on \u03b2 are represented using a mixture of inverse gamma distributions; the variational updates for the shrinkage parameters \u03bbt follow from the properties of the log-normal distribution; the updates for \u03b2 involve \u03a3, the covariance matrix of \u03b2 under the variational approximation; and for the dispersion \u03c9 we apply numerical integration."}
+{"text": "Gene regulatory networks (GRNs) research reveals complex life phenomena from the perspective of gene interaction, which is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian network (DBN) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI algorithm for network structure profile learning, namely the construction of the search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the inferred GRNs. Assuming that the time delay between X and Y is m, the mutual information MI(X(m), Y(m)) is calculated between X(m), the series X truncated to its first T\u2013m points, and Y(m), the series Y delayed by m points. The time delay m at which the MI value is maximized is recorded as the transcription delay between X and Y. It should be noted that in the absence of knowing the direction of regulation, it is necessary to calculate the transcription delay for X versus Y and for Y versus X, respectively. 
We can thus obtain an N\u00d7N transcription time delay matrix, where N represents the number of genes. The gene expression profile of the target gene is delayed relative to that of its regulatory gene by some number of time units. The larger the MI value calculated between two genes at a given delay, the greater the probability that the two genes have a regulatory relationship under that delay. The MI algorithm can therefore be used to derive the transcriptional delay between genes with regulatory relations. First of all, given the maximum time delay allowed in the calculation, k = 1, 2, \u2026, T\u20131, X may be a regulatory gene of Y only if the initial change in the expression value of X precedes the initial point of change in the expression of Y. On the basis of the gene transcription time delay, a novel transcriptional regulation score (TRS) is proposed. In the TRS formula, ICN is the total number of time points at which regulated gene expression changes, T is the number of time points, and vN is the number of regulating genes. The TRS method evaluates the network transcription score by calculating the data after recombination. In the calculation of TRS values between pairs of genes, the original data are recombined based on the determined transcriptional delays. The TRS value obtained on the basis of this data reorganization meets the requirements of multiple time-delayed GRN construction, and the obtained TRS score is used as the network transcription evaluation score. In the process of constructing the GRN using the DBN model, the scoring model plays a decisive role in identifying the direction of regulation between genes. In this paper, a new scoring model of network structure, the comprehensive score (CS), is proposed in order to accurately calculate the causal intensity between genes and improve the accuracy of network construction. 
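The delay search described above (maximise MI over candidate delays, for each ordered gene pair) can be sketched as follows. The histogram MI estimator, the bin count, and the exact slicing convention are my own assumptions, since the source formula is garbled.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=3):
    """Plug-in MI estimate (in nats) between two equal-length sequences,
    each discretised into equal-width bins."""
    def discretise(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0          # guard against constant input
        return [min(int((x - lo) / w), bins - 1) for x in v]
    dx, dy = discretise(xs), discretise(ys)
    n = len(xs)
    px, py, pxy = Counter(dx), Counter(dy), Counter(zip(dx, dy))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def transcription_delay(x, y, max_delay):
    """Return the delay m in 0..max_delay maximising MI(x[1:T-m], y[m+1:T]),
    recorded as the transcription delay between regulator x and target y."""
    T = len(x)
    scores = {m: mutual_information(x[:T - m], y[m:]) for m in range(max_delay + 1)}
    return max(scores, key=scores.get)
```

Running `transcription_delay` in both directions (X versus Y and Y versus X) fills the N\u00d7N delay matrix described in the text.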
In this method, the linear correlation, non-linear correlation and dynamic characteristics of gene expression data are all mined. In particular, taking the dynamic characteristics of genes into consideration makes the constructed GRNs more consistent with basic physiological mechanisms and closer to the real network. RO\u03b2 is the regulation intensity obtained by the RO algorithm, which is a non-zero number; parameter \u03c9 is the weighting coefficient between MI and RO. MI\u03b2 is the value calculated by MI, which is a non-negative real number. TRS is the transcriptional time-delay score calculated through the TRS model. The RO\u03b2, MI\u03b2 and TRS values involved in the calculation are all normalized; the standardization is achieved by background standardization. Parameter \u03c3 is a trade-off between the \u03b2 terms and the TRS value (default: \u03c3 = 0.6). It is worth noting that the TRS method requires time-series gene expression data, while the calculation of the \u03b2 values has no such requirement; that is, the data used for the two calculations can differ. This is because obtaining the multi-delay dynamic evaluation values requires time-series data, whereas determining the direction of regulation does not. GRN structure information is contained in both time-series and non-time-series gene expression data, and standardization of the three component values eliminates the differences caused by using different data. A dynamic Bayesian network (DBN) is an extension of the Bayesian network that is able to infer the interaction uncertainties among genes by using a probabilistic graphical model. A DBN is composed of an initial Bayesian network B0, which specifies the joint distribution of the variables in X[0], and a transition network that describes how the variables at time slice t depend on those at the preceding slice. 
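The exact CS equation is lost in extraction; the following is a hedged reconstruction of how the three normalized terms could be combined as the text describes, with \u03c9 weighting RO against MI and \u03c3 trading the combined \u03b2 term against TRS. The \u03c9 default and the exact combination rule are assumptions.

```python
def normalise(values):
    """Min-max standardisation of a list of scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def comprehensive_score(ro_beta, mi_beta, trs, omega=0.5, sigma=0.6):
    """Hypothetical CS combination: omega weights the RO term against the
    MI term, sigma trades the combined beta term against TRS (the text
    gives sigma = 0.6 as default). All inputs are assumed pre-normalised."""
    beta = omega * ro_beta + (1.0 - omega) * mi_beta
    return sigma * beta + (1.0 - sigma) * trs
```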
In slice 0, the parents of the variables are specified by the initial network B0. A DBN is thus defined as a pair consisting of the initial network and the transition network. In this paper, the comprehensive score is combined with the DBN model; for convenience, this algorithm is called DBNCS. The DBNCS algorithm uses the CMI2NI algorithm to construct the network structure profiles, namely the search space, and uses the DBN model to identify the direction of gene regulation and search for the optimal network structure. Experimental environment: Intel(R) Xeon(TM) CPU E5-2650 @ 2.30GHz with 32.0GB of RAM, MATLAB 2014b implementation. The specific implementation steps of the DBNCS algorithm for reconstructing multiple time-delayed GRNs are as follows. For a gene expression dataset with n genes, a complete graph with n nodes is constructed first, and a decision independence threshold \u03b8 is set. Let L = \u22121, where L is the maximum number of conditional variables Z in the CMI2 computation. For each pair of connected genes, the number of their common neighbour genes is recorded as T. If T < L, the algorithm terminates, and the final network structure profile is G. If T \u2265 L, in order to reduce the computational complexity and ensure the accuracy of network inference, L genes are selected from the T genes as conditional genes, and edges whose conditional mutual information falls below \u03b8 are removed. This process is repeated until no edge can be removed, so that the network structure profiles are obtained; that is, the DBN search space is built by determining the independence of information between genes. In general, a group of genes that have high CMI2 values are co-expressed genes, one of which is the target gene and the others are regulatory genes. Next, redundant regulations are removed: set a threshold \u03b80; if the regulation intensity RO\u03b2 of a gene pair is less than \u03b80, the relationship is determined to be of low regulation intensity, the regulation intensity of the gene pair is set to zero, and the non-zero variables in the model are re-estimated in the next step. Through this re-optimization, the effect of noise and redundant regulation can be eliminated to a certain extent, and the false positive rate is reduced. 
In order to achieve the highest accuracy, the above process is repeated until no non-zero variables need to be re-estimated, finally yielding the redundancy-removed network structure profiles G\u2032. A target gene is usually regulated by multiple transcription factors; the RO optimization model can control the sparseness of the network by adjusting its coefficients. In this step, only genes with coexpression relationships in the network structure profiles G have redundant regulations removed by the RO optimization model. For a large GRN, searching for the optimal network structure among all possible structures using the DBN model is an NP-hard problem. In order to make the search tractable, the network structure profiles are decomposed without loss into D cliques, where D is the number of regulatory relations existing in the network structure profiles; each clique consists of a gene Gi and one co-expressed gene. An N\u00d7N transcription time-delay matrix is also generated, where N represents the number of genes, and data reorganization follows. The traditional BN model is characterized by a single feature, insufficient information mining, poor stability, and a high computational cost for constructing large-scale regulatory networks. The DBNCS algorithm applies the comprehensive score model to the DBN and uses this DBN hybrid learning method to construct the GRNs within the cliques, thereby identifying the direction of gene regulation and significantly reducing the computational cost. The comprehensive score model can extract the linear correlation, nonlinear correlation and dynamic characteristics of the gene expression data, fully mine the regulatory information, accurately measure the causal intensity among the genes, and reduce the false positives of the network construction. In each clique, each possible network structure is scored by the DBN-based comprehensive score to construct the multiple time-delayed GRNs. 
The algorithm uses the CMI2NI algorithm to learn the network structure profiles, and uses the recursive optimization algorithm to remove redundant regulations, thereby reducing the false positive rate of network construction. The CMI2 algorithm is used to calculate the optimal transcription time delay between pairs of genes in the search space. After the network structure profiles are decomposed into multiple cliques without loss, the DBN model is used to identify the direction of gene regulation within the cliques, and the optimal network structure search is performed, which significantly reduces the computational complexity. The comprehensive score model not only uses the TRS algorithm to mine the multi-delay dynamic information in the gene expression data, but also mines linear correlation information through the recursive optimization algorithm and non-linear correlation information through CMI2; considering these three aspects together accurately quantifies the causal intensity between genes and effectively avoids the structural loss caused by single-feature models. However, in the absence of a priori information, such as when the real network is unknown, how to accurately determine the threshold for judging independence is still an unresolved problem."}
+{"text": "Identifying regulons of sigma factors is a vital subtask of gene network inference. Integrating multiple sources of data is essential for correct identification of regulons and complete gene regulatory networks. Time series of expression data measured with microarrays or RNA-seq, combined with static binding experiments or literature mining, may be used for inference of sigma factor regulatory networks. We introduce Genexpi: a tool to identify sigma factor regulons by combining candidates obtained from ChIP experiments or literature mining with time-course gene expression data. While Genexpi can be used to infer other types of regulatory interactions, it was designed and validated on real biological data from bacterial regulons. In this paper, we put primary focus on CyGenexpi: a plugin integrating Genexpi with the Cytoscape software for ease of use. As a part of this effort, a plugin for handling time series data in Cytoscape called CyDataseries has been developed and made available. Genexpi is also available as a standalone command line tool and an R package. Genexpi is a useful part of the gene network inference toolbox. It provides meaningful information about the composition of regulons and delivers biologically interpretable results. The online version of this article (10.1186/s12859-018-2138-x) contains supplementary material, which is available to authorized users. Uncovering the nature of gene regulatory networks is one of the core tasks of systems biology. Identifying direct regulons of sigma factors/transcription factors can be considered the basic element of this task. In fact, a large portion of software for network inference is limited to such direct interactions. The primary focus of this paper is on CyGenexpi \u2013 a plugin for the Cytoscape platform that uses Genexpi. Genexpi is based on an ordinary differential equation model of gene expression introduced in earlier work. 
While there are multiple tools for gene network inference available from the command line or programming languages, Genexpi provides a Cytoscape plugin, a command-line interface and an R interface. In this section we describe the model and fitting method of Genexpi \u2013 the implementation of the interfaces is straightforward. The initial part of this section is taken from our previous work. Genexpi is based on an ordinary differential equation (ODE) model for gene regulation, inspired by the neural network formalism. In this model, the synthesis of a gene z controlled by a set of m regulators y1,..,ym (genes or any other regulatory influence) is determined by an activation function f(\u03c1(t)) of the regulatory input \u03c1(t) = w1y1(t) + \u2026 + wmym(t) + b, where wj is the relative weight of regulator yj and b is a bias (inversely related to the regulatory influence that saturates the synthesis of the mRNA). In our case, f is the logistic soft-threshold function f(x)\u2009=\u20091/(1\u2009+\u2009e-x). The transcript level of z is then governed by the ODE: dz/dt = k1f(\u03c1(t)) \u2212 k2z(t), (1) where k1 is related to the maximal level of mRNA synthesis and k2 represents the decay rate of the mRNA. Both k1 and k2 must be positive. The complete set of parameters for this model is thus \u03b2\u2009=\u2009{k1, k2, b, w1,\u2026, wm}. Given N samples from a time series of gene expression taken at time points t1, \u2026, tN, the inference task can be formalized as finding \u03b2 that minimizes the squared error with regularization: \u03a3i (\u1e91(ti) \u2212 z(ti))2 + r(\u03b2), (2) where z is the observed expression profile, \u1e91 is the profile predicted from \u03b2 and the observed expression of y1,..,ym, and r(\u03b2) is the regularization term. The regularization term represents a prior probability distribution over \u03b2 that gives preference to biologically interpretable values for \u03b2 and is discussed in more detail below. Assuming Gaussian noise in the expression data, (2) is the maximum a posteriori estimate of \u03b2. A notable difference from the Inferelator algorithm is the decay rate parameter k2 \u2013 Inferelator assumes the decay is always one. 
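The ODE model above can be simulated directly with the cheap Euler scheme the text mentions. This is a minimal sketch under my own assumptions (function names, a regular time grid of step dt, regulator profiles already smoothed and subsampled onto that grid):

```python
import math

def logistic(x):
    """The logistic soft-threshold activation f(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate_target(regulators, w, b, k1, k2, z0, dt):
    """Euler integration of dz/dt = k1 * f(sum_j w_j y_j(t) + b) - k2 * z.

    `regulators` holds each regulator's profile sampled on a grid with
    step dt; returns the simulated target profile on the same grid."""
    z = [z0]
    n = len(regulators[0])
    for t in range(n - 1):
        rho = b + sum(wj * y[t] for wj, y in zip(w, regulators))
        dz = k1 * logistic(rho) - k2 * z[-1]
        z.append(z[-1] + dt * dz)
    return z
```

Fitting then amounts to searching for the parameters (k1, k2, b, w) that make the simulated profile match the observed one, per eq. (2).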
Further, Inferelator minimizes the error of the predicted derivative of the expression profile, while we minimize the prediction error for the actual integrated expression profile and introduce a regularization term; otherwise our model is similar to that used by the Inferelator algorithm. Since the expression data is noisy, Genexpi encourages smoothing the data prior to computation. We have had good results with linear regression on a B-spline basis with degrees of freedom equal to approximately half the number of measurement points. By smoothing we get results that are more robust with respect to low frequency phenomena, but sacrifice the ability to discover high-frequency changes and regulations. Furthermore, our experiments with fitting raw data or tight interpolations of the data had little success in fitting even profiles that were highly correlated, due to the amplified noise in the data. Smoothing of time series profiles has been used previously for network inference. A further advantage of smoothing is that it lets us subsample the fitted curve at arbitrary resolution. The subsampling then allows us to integrate (1) accurately with the computationally cheap Euler method, making evaluation of the error function fast and easy to implement in OpenCL. Genexpi minimizes eq. (2) subject to a regularization term r(\u03b2) that encodes preferences for biologically interpretable parameters: we expect k1 to be smaller than the maximal expression level of the target gene, we put a bound on the maximal steepness of the regulatory response for each weight wj, and we expect the regulatory input to come close to zero (the steepest point of the sigmoid function) for at least one time point. Note that in some cases, multiple vastly different combinations of parameters may yield almost identical regulatory profiles.
For example, if the interval of attained regulatory input is narrow, the activation function f becomes approximately linear over the whole interval, so increasing the weights and decreasing the bias while decreasing k1 yields a very similar profile; the regularization breaks such ties. The penalty for a value x > 0 exceeding a bound \u03c9 is proportional to a constant c governing the amount of regularization. Minimizing the penalized error is then the same as maximizing log-likelihood, assuming that x is distributed uniformly within the bound with some probability p and as \u03c9x + \u03b1|e| with probability (1 \u2013 p), where e is normally distributed. In this interpretation, the probability p is uniquely determined by c in the regularization term and by choosing \u03b1 such that the resulting density function is continuous. We have empirically determined the best value of c to be approximately one tenth of the number of time points after smoothing. While without regularization many of the inferred models contained implausible parameter values, regularization forced almost all of those parameters into the given bounds \u2013 r(\u03b2) was zero for most models. At the same time, the mean residual error of the models inferred with regularization differed by less than one part in a hundred from models inferred without regularization. To evaluate whether a fit is good, we have chosen a simple but easily interpretable approach. The primary reason is that we intend to keep the human in the loop throughout the inference process, and thus the human has to be able to understand the criteria intuitively. Since most published time series expression data is reported only as averages, without any quantification of uncertainty, we let the user set the expected error margin based on their knowledge of the data. The error margin is determined by three parameters: absolute, relative and minimal error. These combine in a straightforward way to give an error margin for each time point, depending on the expression level z(t). Fit quality is then the proportion of time points where the fitted profile is within the error margin of the measured profile. 
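The fit-quality criterion above can be sketched concretely. The text only says the three parameters combine "in a straightforward way", so the specific margin rule below (the larger of the minimal error and absolute-plus-relative error) is one plausible reading, not necessarily Genexpi's exact rule.

```python
def fit_quality(observed, fitted, absolute, relative, minimal):
    """Proportion of time points where the fitted profile lies within the
    user-set error margin of the observed profile. The per-point margin
    combines absolute, relative and minimal error (hypothetical rule)."""
    within = 0
    for z_obs, z_fit in zip(observed, fitted):
        margin = max(minimal, absolute + relative * abs(z_obs))
        if abs(z_fit - z_obs) <= margin:
            within += 1
    return within / len(observed)

def is_good_fit(observed, fitted, threshold=0.8, **margins):
    """A fit is considered good if fit quality exceeds a given threshold
    (the threshold value here is an assumption)."""
    return fit_quality(observed, fitted, **margins) >= threshold
```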
A fit is considered good if fit quality is above a given threshold. Prior to analyzing a gene as being regulated, we need to test for two baseline cases that would make any prediction useless. The obvious first case is genes that do not change significantly over the whole time range. Genes that do not change are excluded from further analysis, both as regulators and as targets, as the Genexpi model contains no information in that case. A slightly more complicated case is the constant synthesis model, where we expect the mRNA synthesis to be constant over the whole time range: dz/dt = k1 \u2212 k2z. Note that this is the same as assuming there are 0 regulators. Since genes with constant synthesis could be fitted by any regulator by simply putting w = 0 and choosing a large b, those genes are excluded as targets. However, regulators that could be explained by constant synthesis are still analyzed, as there is meaningful information. Fitting the constant synthesis model is also done via simulated annealing in OpenCL. For the putative regulations excluded this way, the correct interpretation is that the underlying dataset provides no evidence for or against such regulations. If there are biological justifications that the regulations should be visible in the data (e.g. that the regulatory effect should be larger than the measurement noise), it is possible to cautiously consider this as evidence against the regulations taking place. In this section we describe the intended workflow for analysis with Genexpi and its user interface, and then we discuss results of evaluation on real biological data. The primary user interface for Genexpi is the CyGenexpi plugin for the Cytoscape software, but Genexpi can also be run directly from R and via a command line interface. 
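The constant synthesis baseline dz/dt = k1 \u2212 k2z has a simple closed form, which makes it a cheap check (a sketch; Genexpi itself fits this model by simulated annealing, as noted above):

```python
import math

def constant_synthesis(t, k1, k2, z0):
    """Closed-form solution of dz/dt = k1 - k2*z with z(0) = z0,
    i.e. the zero-regulator (constant synthesis) baseline model."""
    zs = k1 / k2                        # steady state k1/k2
    return zs + (z0 - zs) * math.exp(-k2 * t)
```

A gene whose smoothed profile is well fitted by this curve (by the fit-quality criterion) is excluded as a target, since any regulator could "explain" it trivially.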
For CyGenexpi, an important improvement over the Aracne or NetworkBMA Cytoscape plugins is the direct involvement of the user in the process. The workflow is as follows: 1) Start with a network of putative regulations, either obtained from database mining or from experiments. 2) Import the time-course expression data and smooth it to provide a continuous curve. 3) Remove genes whose expression does not change significantly throughout the whole time-course. 4) Remove genes that could be modelled by the constant synthesis model. 5) Optional: human inspection of the results of steps 3 & 4, possibly overriding the algorithm\u2019s decisions. 6) Find the best parameters of the Genexpi model for each gene-regulator pair. The fitted models are then classified into good and bad fits. Good fits indicate that the regulation is plausible, while bad fits show that the regulation either does not take place or involves additional regulators. 7) Optional: human inspection of the fits, possibly overriding the algorithm\u2019s classification. With more than one regulator in the model, an arbitrary pair of regulators is able to model the expression of a large fraction of all genes, increasing the false positive rate. CyGenexpi therefore currently does not expose a GUI for using more than one regulator in the model. Using more regulators is however available to more advanced users via the command-line or R interfaces. For CyGenexpi, the time series data is imported with CyDataseries from either a delimited text file or the SOFT format used in Gene Expression Omnibus. While Genexpi can be used for de-novo regulon identification from time-series expression data only, a high rate of false positives should be expected. The main reason is that in real biological data, multiple sigma factors may have similar expression profiles, and Genexpi thus considers all genes regulated by one of the sigma factors as possibly regulated by all of the similar sigma factors. The evaluation in this paper therefore focuses on identifying the regulated genes among a set of plausible candidates. 
Nevertheless, the workflow for de-novo inference is almost the same as described above; only the initial network should contain a link from each investigated regulator to all other genes. We evaluated Genexpi in three ways: 1) direct biological testing of the suggested regulatory relationships, 2) comparing the ability of Genexpi and other tools to reconstruct two literature-derived regulons, and 3) measuring the computing time required to process the data. The first part of the evaluation is taken from our previous work, while the rest is new. This section recapitulates the relevant results obtained with Genexpi, originally reported as a part of that work. To extend the biological evaluation, we used literature-derived regulons of B. subtilis from Subtiwiki; both available versions of the regulon annotation were considered. For each of the literature regulons we first excluded targets that were constant or had constant synthesis (steps 3 & 4 of the workflow) and determined how many of the remaining members were considered by Genexpi to be regulated by the respective sigma factor \u2013 these correspond to true positives. Then we generated a set of random expression profiles with similar magnitude and rate of change as the sigma factor; regulations inferred for these random profiles correspond to false positives. We consider testing a random regulator profile a more reliable assessment than testing the complement of the literature-based regulon, for two reasons. First, it is a better match for the intended Genexpi workflow, which starts with a set of candidate genes. Here, using a random profile for the regulator models the situation where the candidate list is wrong, and we expect Genexpi to reject regulatory influence on most genes. Second, the complement is usually composed of less characterized genes, and there is little guarantee that the complement contains only genes that are not regulated by the sigma factor. 
The complement may include genes that are regulated by the sigma factor but have not been annotated yet, and also genes that have expression profiles similar to those of the regulon of the analyzed sigma factor due to chance or non-regulatory interactions. Such profiles would be classified as false positives, while they in fact have nothing to do with the analyzed regulon and its sigma factor. Performance on the regulon complement thus depends more on the uniqueness of the sigma factor profile than on the inference algorithm. For this evaluation we ran Genexpi with default settings and without any human input. Complete code to reproduce all of the results for this and the following section is attached as an R notebook in Additional file 1. For comparison, we performed the same analysis with TD-Aracne, an extension of Aracne for time-series data. For all analyses, we smoothed the raw data by linear regression over a B-spline basis of order 3 with 3\u201310 degrees of freedom. TD-Aracne was tested with the raw data as well as the smoothed data subsampled to give a lower number of equally spaced time points, as expected by TD-Aracne. For TD-Aracne we tested three methods of recovering the regulon from the inferred network over the full gene set: a) take only the genes that were marked as directly regulated by the sigma factor, b) take all genes connected by a directed path from the regulator, and c) take all genes connected to the regulator. Variant a) had very low performance overall; among b) and c) we report the result more favorable to TD-Aracne. For the SigR regulon of Kim et al., the results were very similar when only the targets marked as having \u201cstrong\u201d evidence were used. All results not shown here can be found in Additional file 1. In the SigB regulon, Genexpi performs slightly better than TD-Aracne (see Table). 
While TD-Aracne (in multiple settings) confirms almost all of the literature regulon while rejecting over half of the regulations by a random profile, Genexpi using a spline with 4 degrees of freedom rejects two thirds of the random regulations while also recovering 90% of the literature regulon. Moreover, Genexpi has the advantage of allowing a sensitivity/specificity tradeoff by choosing the degrees of freedom for the spline \u2013 with high degrees of freedom, almost all random regulations are rejected while the majority of the literature regulon is still recovered. The performance of TD-Aracne varied unexpectedly with the chosen degrees of freedom. We also see that running TD-Aracne with smoothed data, and removing no-change and constant synthesis genes as in the Genexpi workflow, yields only slight improvements in the performance of TD-Aracne over running it directly on the raw data (as TD-Aracne is designed to work). For both variants of the SigR regulon, TD-Aracne mostly found little difference between the literature-based and random regulons. The few cases of better performance by TD-Aracne occurred unpredictably with certain smoothings of the data. At the same time, Genexpi was rarely misled by the random regulations and recovered large fractions of the literature regulon while behaving consistently: the proportion of both true and random regulations grows with more aggressive smoothing (fewer degrees of freedom). For the analysis of computing time, Genexpi was run on a mid-tier GPU (Asus Radeon RX 550) and TD-Aracne on an upper-level CPU (Intel i7 6700K). Both algorithms were run on a Windows 10 workstation with only basic precautions to prevent other processes from perturbing the system load. The numbers reported should therefore not be considered benchmarks but rather an informative estimate of the computing time during a normal analysis workflow. 
The results are shown in Table. While Genexpi was designed for bacterial regulons, we also tested its performance on eukaryotic data, in particular a time series of gene expression throughout the cell cycle of Saccharomyces cerevisiae, deposited in a public repository and downloaded for this analysis. In this case, the signal is weaker than in the prokaryotes, which is not unexpected given the increased complexity of eukaryotic regulation. Genexpi gives the worst (indistinguishable from random) results for MBP1, SWI4 and SWI6, which are known to regulate in complexes and thus break the model expected by Genexpi. Interestingly, TD-Aracne is able to determine some of those regulations. For the other genes, Genexpi provides consistent but weak information, while TD-Aracne provides a strong signal for some genes while performing very poorly on the others. The full code to reproduce the analysis can be found in Additional file 1. The Genexpi workflow was kept deliberately simple, but this involves some inaccuracies. Most notably, Genexpi masks uncertainty in the data and uses multiple hard thresholds. Our evaluation has shown that Genexpi is a useful part of a bioinformatician\u2019s toolbox for uncovering and/or validating regulons in biological systems. Genexpi was designed for bacterial regulons, but can \u2013 with caution \u2013 be employed also for eukaryotic data. It also provides transparent results and \u2013 unlike other similar programs \u2013 lets the human stay in the loop and apply expert knowledge when necessary. The parameters of the fitted models are biologically interpretable and thus can guide the design of future experiments. 
Time-series expression data cannot in principle provide complete information about the regulatory interactions taking place, and Genexpi is therefore best used as one of multiple sources of insight about a biological system. Genexpi is equipped with both a simple point-and-click interface for the Cytoscape application and R and command-line interfaces for advanced users. Additional file 1: evaluation.zip \u2013 an archive containing: \u2022 evaluation.Rmd \u2013 R Markdown notebook (best used with RStudio, https://www.rstudio.com/) to reproduce the evaluation on bacterial regulons in this paper. \u2022 evaluation.nb.html \u2013 Compiled version of evaluation.Rmd for easy reading, including stored results produced by running all the code. \u2022 evaluation_sacharomyces.Rmd \u2013 R Markdown notebook to reproduce the evaluation on Sacharomyces data. \u2022 evaluation_sacharomyces.nb.html \u2013 Compiled version of evaluation_sacharomyces.Rmd, including stored results produced by running all the code."}
+{"text": "Identifying gene regulatory networks is an important task for understanding biological systems. Time-course measurement data became a valuable resource for inferring gene regulatory networks. Various methods have been presented for reconstructing the networks from time-course measurement data. However, existing methods have been validated on only a limited number of benchmark datasets, and rarely verified on real biological systems. We first integrated benchmark time-course gene expression datasets from previous studies and reassessed the baseline methods. We observed that GENIE3-time, a tree-based ensemble method, achieved the best performance among the baselines. In this study, we introduce BTNET, a boosted tree based gene regulatory network inference algorithm which improves on the state-of-the-art. We quantitatively validated BTNET on the integrated benchmark dataset. The AUROC and AUPR scores of BTNET were higher than those of the baselines. We also qualitatively validated the results of BTNET through an experiment on neuroblastoma cells treated with an antidepressant. The inferred regulatory network from BTNET showed that brachyury, a transcription factor, was regulated by fluoxetine, an antidepressant, which was verified by the expression of its downstream genes. We present BTNET, which infers a GRN from time-course measurement data using boosting algorithms. Our model achieved the highest AUROC and AUPR scores on the integrated benchmark dataset. We further validated BTNET qualitatively through a wet-lab experiment and showed that BTNET can produce biologically meaningful results. The online version of this article (10.1186/s12918-018-0547-0) contains supplementary material, which is available to authorized users. A gene regulatory network (GRN) is a biological network representing relationships between genes and their regulators. One representative regulator is a transcription factor that regulates a target gene\u2019s expression. 
Reconstructing the gene regulatory network is important for understanding the biological system. The gene regulatory network can identify causal relationships among molecular interactions, help to prioritize experimental design, or serve as a network biomarker. A good deal of research on reverse-engineering has been conducted using gene expression data. Various GRN inference methods using time-course data have been developed; current methods can be grouped into model-based and model-free approaches. Model-free methods compute the degree of regulation based on information-theoretic criteria; TD-ARACNE is one such method, evaluated on the in-silico multi-factorial challenge. One of the state-of-the-art methods is GENIE3-time, a time-lagged version of GENIE3, which performed well both in the DREAM challenges and on real data. However, we found it difficult to objectively compare the performance of the current state-of-the-art methods because they were quantitatively validated on small or differing benchmark datasets. To address this problem, we integrated eight time-course gene expression benchmark datasets from the previous studies. Then, we re-evaluated the baseline methods. In this article, we propose BTNET, a boosted tree based gene regulatory network inference algorithm that reconstructs the network using time-course measurement data. The boosted tree is used to compute regulatory interaction scores between candidate regulators and target genes. To the best of our knowledge, this is the first study to use a boosted tree to infer GRNs from time-course measurement data. 
We evaluated BTNET on the integrated benchmark dataset and showed that our method outperformed 9 baselines including the current state-of-the-art method, GENIE3-time. Furthermore, to verify that BTNET actually produces biologically meaningful networks, we qualitatively assessed the GRN inferred by BTNET using time-course data obtained from our experiments with antidepressant treated neuroblastoma cells. We treated SK-N-SH neuroblastoma cells with fluoxetine, an antidepressant, and measured the transcription factors\u2019 change in activity over time. From these data, BTNET inferred that brachyury, a transcription factor, was regulated by fluoxetine, and this inference was validated by immunoblot assays. In this section, we describe our inference model that reconstructs a gene regulatory network from time-course measurement data. Our model takes an nT\u00d7P expression matrix E as input, where n is the number of experiments, T is the number of time points and P is the total number of genes. BTNET then outputs a weighted adjacency matrix in which entry wi,j is the regulatory interaction score that indicates how strongly gene i regulates gene j. Only high confidence regulatory interactions, whose scores are above a threshold, are used to reconstruct the gene regulatory network. The tree-based ensemble method GENIE3 is one of the state-of-the-art approaches for inferring regulatory networks; it won the DREAM4 in-silico Multi-factorial Challenge. In GENIE3, the network inference problem is decomposed into p different subproblems, where p denotes the number of genes in the expression data. In each subproblem, one gene is considered as the target gene and the other genes are regarded as candidate regulators. Then, a bagging based ensemble tree method, Random Forest or Extra Trees, can compute the importance of each candidate regulator for predicting the target gene. In the time-lagged setting, the t+1 time point expression values of a target gene are modeled by the t time point values of the candidate regulators. 
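This per-gene decomposition with a one-step time lag can be sketched in plain Python; the helper name and the 4-time-point, 3-gene toy matrix below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the per-gene decomposition with a one-step time lag:
# for a chosen target gene, the features are the expression values of all
# other genes at time t, and the label is the target's value at t+1.
def time_lagged_subproblem(E, target):
    X, y = [], []
    for t in range(len(E) - 1):
        X.append([E[t][g] for g in range(len(E[t])) if g != target])
        y.append(E[t + 1][target])
    return X, y

E = [[1.0, 0.2, 0.5],   # rows: time points; columns: genes 0..2
     [0.9, 0.4, 0.6],
     [0.7, 0.8, 0.6],
     [0.5, 1.0, 0.7]]
X, y = time_lagged_subproblem(E, target=1)
# X pairs genes 0 and 2 at time t with gene 1's expression at time t+1
```

Solving one such regression per gene and reading off the regulators' importance scores yields one row of candidate scores per target; stacking them gives the weighted adjacency matrix.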
GENIE3-time modifies GENIE3\u2019s original regulatory interaction scoring method to compute the scores of candidate regulators for the time-lagged expression values of a target gene. Formally, xi(t+1) = fi(x\u2212i(t)) + \u03b5t, where x\u2212i(t) denotes the expression values at time point t of all genes except gene i, and \u03b5t indicates random noise at time t. A weighted adjacency matrix is then constructed after obtaining regulatory interaction scores of candidate regulators for each target gene i among the total of P genes. In the bagging procedure of GENIE3-time, regression trees are fitted to independent bootstrapped samples, and the ensemble score is obtained by averaging the importance scores of all independently trained regression trees. In this article, we introduce a new ensemble tree based gene regulatory network inference algorithm that uses time-course measurement data when inferring the network. The ensemble tree we use is the boosted tree. The boosted tree differs from the bagging based trees applied in GENIE3-time: while GENIE3-time aggregates multiple independent estimators to construct the final ensemble, the boosted tree continuously updates the estimator itself, making it stronger by compensating for the weaknesses of previous estimators. We first describe BTNET-AdaBoost, which uses adaptive boosting. Let f be a base estimator, T the number of boosting iterations, xi the feature vector of sample i, N the total number of samples, and L a loss function; then, AdaBoost runs the following steps. Assign initial sample weights, where each sample i has a sample weight wi = 1/N; this means all samples start with the same weight. Build a training set of size N by sampling with replacement according to the sample weights. 
The weight represents the probability of a sample being selected. Train f on the sampled training set. Make predictions on every training sample and compute the normalized sample loss erri = L(f(xi),Yi)/maxjL(f(xj),Yj). Here, we use a linear loss function where L(f(xi),Yi) = |Yi\u2212f(xi)|. Calculate the average loss err, the weighted average of the normalized sample losses, and set \u03b2 = err/(1\u2212err). Update the sample weights using wi = wi\u03b2^(1\u2212erri) and normalize them; samples that the current estimator predicts poorly thus receive relatively larger weights in the next iteration. Within a single regression tree, the variable importance score (VIS) of a split node is measured by the variance reduction it achieves, |S|Var(S) \u2212 |Sleft|Var(Sleft) \u2212 |Sright|Var(Sright), where Var(S) is the variance of the target values in the set S, and Sleft and Sright refer to the sets of samples in the left and right child nodes after splitting, respectively. After obtaining VISs from all trees, the ensemble variable importance score is computed by aggregating the scores; in AdaBoost, the ensemble importance score is calculated as the weighted average of the VISs. By taking all the genes in the expression data as target genes and obtaining VISs for the candidate regulators of each target gene, we obtain regulatory interaction scores for all pairs of genes. The regulatory interaction scores are represented as a weighted adjacency matrix W, where the value in the i-th row and j-th column indicates the regulatory interaction score from gene i to gene j. Once the adjacency matrix is obtained, only interactions whose scores satisfy a certain threshold are represented by edges in the inferred gene regulatory network. We also use gradient boosting, another boosted tree based ensemble method, for scoring regulatory interactions. Gradient boosting has previously been used successfully for inferring gene regulatory networks from steady-state gene expression data. Gradient boosting minimizes the loss L of an estimator f by adding a residual-fitted estimator h. The loss L used here is based on the squared error, and the residual R is obtained from the derivative of L with respect to f, R = Yi\u2212f(xi), where f(xi) denotes the prediction value of the i-th sample and Yi denotes the target value of the i-th sample. 
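The adaptive reweighting described above can be sketched as follows. This follows the standard AdaBoost.R2 formulation with a linear loss; the function name and the toy predictions, targets and weights are assumptions for illustration, not the BTNET implementation.

```python
# Sketch of an AdaBoost.R2-style weight update with a linear loss.
def adaboost_r2_update(preds, targets, weights):
    losses = [abs(yv - p) for p, yv in zip(preds, targets)]
    D = max(losses)                      # assumes at least one nonzero loss
    err = [l / D for l in losses]        # normalized sample losses in [0, 1]
    avg = sum(e * w for e, w in zip(err, weights))   # weighted average loss
    beta = avg / (1.0 - avg)
    # low-loss samples are down-weighted (beta < 1 when avg < 0.5), so
    # poorly predicted samples carry relatively more weight next iteration
    new_w = [w * beta ** (1.0 - e) for w, e in zip(weights, err)]
    Z = sum(new_w)
    return [w / Z for w in new_w], beta

w, beta = adaboost_r2_update(preds=[0.1, 0.5, 0.9],
                             targets=[0.0, 0.5, 0.0],
                             weights=[1 / 3, 1 / 3, 1 / 3])
# the third sample has the largest error and ends up with the largest weight
```

The returned beta values also serve as the per-tree weights when the ensemble variable importance scores are averaged.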
In gradient boosting, the base estimator f0 produces its prediction by simply averaging the target values. At each stage t, a new estimator ht is fitted to the residuals R of the previous estimator ft\u22121, where the residuals are the derivatives of the squared loss function L over ft\u22121. Then, ht is added to the previous learner with the learning rate \u03b2, so that ft = ft\u22121 + \u03b2ht. At each stage, the additive estimator ft continuously improves its prediction power by compensating for the previous estimator\u2019s errors. The only difference between BTNET-AdaBoost and BTNET-GraBoost, other than the boosting method itself, comes from the method of aggregating the single trees\u2019 variable importance scores. In the case of BTNET-AdaBoost, the ensemble importance scores are computed by a weighted average, whereas the ensemble scores of BTNET-GraBoost are obtained by simply averaging the importance scores of each tree. The methods for computing variable importance scores in a single tree, obtaining the weighted adjacency matrix that contains regulatory interaction scores for all gene pairs, and constructing a GRN are the same as those used in BTNET-AdaBoost. We built BTNET by modifying the GENIE3-time python implementation. Training one regression tree has a complexity on the order of O(KNlogN), where K is the number of candidate regulators and N is the total number of samples. Since building an ensemble takes T times longer than a single tree, where T is the number of boosting iterations, both BTNET-AdaBoost and BTNET-GraBoost require a time complexity on the order of O(TKNlogN). To obtain the full regulatory network, the ensemble tree must be fit for each of the p total genes. 
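The stagewise additive scheme described above (f0 is the mean of the targets, then ft = ft\u22121 + \u03b2ht with ht fit to the residuals) can be illustrated with a self-contained toy example; one-split stumps stand in for the regression trees, and the data and learning rate are invented, not taken from BTNET.

```python
# Toy illustration of squared-loss gradient boosting (not the BTNET code).
def fit_stump(x, r):
    # best single-threshold split of the 1-d feature x for residuals r
    best = None
    for thr in sorted(set(x))[:-1]:   # splitting at the max leaves a side empty
        left = [ri for xi, ri in zip(x, r) if xi <= thr]
        right = [ri for xi, ri in zip(x, r) if xi > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - (lm if xi <= thr else rm)) ** 2
                  for xi, ri in zip(x, r))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda xi: lm if xi <= thr else rm

def gradient_boost_predict(x, y, n_stages=50, beta=0.5):
    pred = [sum(y) / len(y)] * len(y)     # f0: mean of the targets
    for _ in range(n_stages):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # residuals of squared loss
        h = fit_stump(x, resid)                        # h_t fit to the residuals
        pred = [pi + beta * h(xi)                      # f_t = f_{t-1} + beta*h_t
                for pi, xi in zip(pred, x)]
    return pred

x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 0.1, 0.9, 1.0]
pred = gradient_boost_predict(x, y)
# the boosted training error falls far below that of the constant mean model
```

Each stage corrects what the previous additive model got wrong, which is exactly the contrast with bagging drawn in the text.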
In total, BTNET has a computational complexity on the order of O(pTKNlogN), the same as that of GENIE3-time. In this section, we briefly describe the 8 benchmark datasets we used for the quantitative evaluations and report the AUROC and AUPR scores of our BTNET method and 9 baseline methods. We also report the results of a qualitative analysis that experimentally verifies a regulatory interaction inferred by BTNET using antidepressant treated human SK-N-SH neuroblastoma cells. The in vivo reverse-engineering and modeling assessment (IRMA) network is a yeast (Saccharomyces cerevisiae) synthetic network that was made for validating the performance of GRN inference methods. The Spellman dataset contains time-course gene expression data on the yeast (Saccharomyces cerevisiae) cell cycle; we selected the cdc-15 and cdc-28 time courses. We obtained time-course gene expression datasets of Caenorhabditis elegans (C.elegans) and the yeast cell cycle, together with the ground truth networks for each dataset, from the study named DDGni. The DREAM4 in silico time-course dataset, from the DREAM4 In Silico Network Challenge, is a well-known simulated benchmark dataset used for assessing network inference methods. Among the 10 networks that were provided in the challenge, five networks had 10 genes and the other five had 100 genes. The networks containing 10 genes and those containing 100 genes have 5 and 10 replicates of time-course expression data, respectively. Each replicate has 21 time points. At t=0, about one third of the genes were perturbed by increasing or decreasing their initial expression. After 10 time points, the perturbation is removed and expression returns to its original state. The initially perturbed genes differed between replicates. The output of BTNET is a weighted adjacency matrix containing regulatory interaction scores of all possible interactions between all genes. 
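Such a score matrix, flattened into a ranked list of candidate edges and compared against a ground-truth network, is what the ranking metrics below operate on. A minimal AUROC sketch via the Mann-Whitney rank statistic follows; the edge scores and truth labels are invented toy values.

```python
# Hedged sketch: AUROC of a ranked edge list against ground-truth labels.
def auroc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # fraction of positive/negative pairs ranked correctly (ties count half)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.35, 0.2, 0.1]   # regulatory interaction scores
labels = [1, 1, 0, 1, 0, 0]                # 1 = edge in the true network
auc = auroc(scores, labels)                # 8 of 9 pairs ranked correctly
```

AUPR is computed analogously from the precision-recall curve of the same ranking and is more sensitive to the sparsity of true edges.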
To form a gene regulatory network, we select a subset of interactions whose regulatory interaction scores are above a certain threshold. The Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR) have usually been used to evaluate the performance of GRN inference methods. We conducted a quantitative evaluation of BTNET by comparing AUROC and AUPR scores on the eight benchmark datasets. On the IRMA dataset, we inferred two GRNs from the switch-on and switch-off time-course data. We inferred two GRNs from the Spellman cdc-15 and cdc-28 time-course data. We also obtained two GRNs from the C.elegans and yeast cell cycle time-course data. On the DREAM4 dataset, we inferred five GRNs each for the DREAM4 in silico size10 and size100 datasets, 10 GRNs in total. In the case of the DREAM4 dataset, we averaged AUROC/AUPR scores over the five networks of size 10 and over the five networks of size 100, respectively. Thus, we obtained 10 evaluation results each for the AUROC and AUPR scores. The difference between the number of datasets and the number of evaluations on the IRMA dataset was caused by evaluating the two inferred GRNs from switch-on and switch-off data against the two ground truth networks, producing four evaluation results. We averaged the 10 scores each for AUROC and AUPR and compared BTNET with nine baseline methods, among them BGRMI, JUMP3, GENIE3-time, TDARACNE and TSNI, as well as time-delayed and time-lagged inference variants. To further evaluate BTNET, we performed an additional qualitative analysis using wet lab experiments. We first inferred a regulatory network from time-course activity data of transcription factors using BTNET. The activity of transcription factors was measured after treating the human neuroblastoma cell line SK-N-SH with fluoxetine, a popular antidepressant. Twenty depression related transcription factors were chosen for measuring activities. 
A living cell array was used to measure the transcription factor activities in real time. The inferred network is shown in Fig.\u00a0. To verify the inferred relation, an immunoblot assay was performed; \u03b2-actin was used as a control for protein quantification. The immunoblot assay result demonstrates that brachyury was in fact regulated by fluoxetine, and fluoxetine may have affected brachyury between day 4 and day 6 after the treatment (Fig.\u00a0and Additional file\u00a0). We developed a more accurate and robust method that infers GRNs from time-course measurement data. Most GRN methods using time-course data were validated only on a limited number of benchmark datasets. To address this problem, we integrated time-course gene expression datasets from previous studies and re-evaluated the baseline methods on the integrated benchmark set. GENIE3-time achieved the best performance among the baseline methods. GENIE3-time infers GRNs by computing regulatory interaction scores for all possible regulator-target gene pairs using Random Forest (or Extra Trees). We attempted to improve the current state-of-the-art method, GENIE3-time, by using boosting algorithms to compute the regulatory interaction scores. We proposed two boosted tree based GRN inference methods: BTNET-AdaBoost and BTNET-GraBoost. BTNET-AdaBoost uses adaptive boosting and BTNET-GraBoost uses gradient boosting to compute the regulatory interaction scores. BTNET-GraBoost achieved the highest AUPR/AUROC scores and the best average ranks. We performed wet lab experiments to validate whether BTNET could infer biologically meaningful networks. Living cell array analysis was used to analyze the activity of TFs in real time at various time points after treating human SK-N-SH cell lines with fluoxetine. BTNET inferred a regulatory network from the time-course data and brachyury was shown to be regulated by fluoxetine. 
The inferred regulation of brachyury was verified by testing the expression of downstream molecules of the TF, and an actual increase in the expression of brachyury\u2019s downstream molecules was observed. Additional file 1: Supplementary file of BTNET. The file contains the URL of the source code and dataset used in this study, evaluation results on individual benchmark datasets, and the materials and methods used for the qualitative analysis. (PDF 263 kb)"}
+{"text": "Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 Time series experiments are very commonly used to study a wide range of biological processes, including various developmental processes and stem cell differentiation. While mRNA gene expression data have been the primary source of high-throughput time series data, more recently several other genomic regulatory features are profiled over time, including miRNA expression data and ChIP-Seq data. While integrated analysis of high-throughput genomic datasets can greatly improve our ability to model biological processes and systems, it comes at a cost. From the monetary point of view, these costs include the increased number of Seq experiments required to profile all types of genomic features. 
While such costs are common to all types of studies utilizing high-throughput data, they can be prohibitively high for time series based studies, since they are multiplied by the number of time points required, the number of repeats performed for each time point and the number of different types of data being profiled. Importantly, even if the budget is not an issue, the ability to obtain enough samples for profiling all genomic features at all time points may be challenging, if not completely prohibitive. One of the key determinants of the experimental and sample acquisition costs associated with time series studies is the number of time points that are being profiled. In most studies, the first and last time point can usually be determined by the researcher. However, the number of samples required between these two points and the sampling frequency (given a fixed budget) are often hard to determine based on phenotypic observations, since the molecular events of interest may precede such phenotypic events. To date, sampling rates have largely been determined using one of two ad-hoc protocols: the first utilizes uniform sampling across the duration of the study, while the second relies on phenotypic events to place the time points. Relatively little work has focused so far on the selection of time points to sample in high throughput time series studies; Singh et al and Rosa et al studied iterative strategies for selecting points in expression experiments. Here, we propose the first non-iterative method to address the issue of sampling rates across all different genomic data types: the Time Point Selection (TPS) algorithm, which uses spline based analysis and combinatorial search to select a subset of the points that, when combined, provide enough information for reconstructing the values for all genes across all time points. The number of points selected can either be set in advance by the user or can be defined as a function of the reconstruction error. The selected time points are then used for the larger, genome-wide experiments across the different types of data being profiled. 
Our method starts by selecting a small set of genes that are known to be associated with the process being studied. Next, we use a cheap array-based technology to sample these genes at a high, uniform rate across the duration of the study. Note that unlike standard curve fitting algorithms, a method for selecting time points for these experiments is required to accommodate over a hundred curves simultaneously, and we discuss various ways to formulate this as an optimization problem. To solve this optimization problem, we developed the Time Point Selection (TPS) method. The resulting set is then used for the larger genomic and epigenetic experiments. Having developed TPS, we used it to determine time points for a lung development study in mice. We first profiled the expression of 126 genes known or suspected to be involved in lung development using NanoString (see Appendix Methods for a list of the selected genes and the reason each was selected). We then used TPS analysis of these experiments to select a subset of time points for profiling the expression of a larger, unbiased, set of miRNAs. Finally, we have used TPS to design time series experiments to study DNA methylation patterns for a subset of the genes. We tested the usefulness of TPS by using it to select subsets of points ranging from 3 to 25 and evaluating how well these can be used to determine the values of non-sampled points. To determine the accuracy of the reconstructed profiles using the selected points, we computed the average mean squared error for points that were not used by the method. We also tested the performance of TPS when compared to randomly selected points. Importantly, we also see a significant and consistent improvement over uniform sampling, highlighting the advantage of condition-specific sampling decisions. 
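The evaluation just described can be sketched in a few lines: reconstruct a profile from a subset of time points and average the squared error over the left-out points. Plain linear interpolation stands in here for the smoothing splines the method actually fits, and the toy profile and the two candidate subsets are invented.

```python
# Minimal sketch of the leave-out reconstruction error used to score a
# candidate subset of time points (linear interpolation, not splines).
def reconstruction_mse(profile, selected):
    def interp(t):
        lo = max(i for i in selected if i <= t)
        hi = min(i for i in selected if i >= t)
        if lo == hi:
            return profile[lo]
        return profile[lo] + (profile[hi] - profile[lo]) * (t - lo) / (hi - lo)
    held = [t for t in range(len(profile)) if t not in selected]
    return sum((profile[t] - interp(t)) ** 2 for t in held) / len(held)

profile = [0.0, 0.5, 1.0, 0.2, 0.1, 0.1, 0.1, 0.0, 0.0]  # early transient
uniform = [0, 3, 5, 8]       # near-uniform spacing misses the peak at t = 2
informed = [0, 2, 3, 8]      # same budget, but samples the transient densely
```

With the same budget of four points, the subset that captures the early transient reconstructs the held-out points far better than near-uniform spacing, which is the comparison the text reports.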
Sorting initial points by absolute values further improves the performance, highlighting the importance of initialization when searching large combinatorial spaces. Simulated annealing, weighting, and multiple point selection improve performance as well. As the number of points used by TPS increases, it leads to results that are very close to the error represented by noise in the data (0.108). Using the points selected by TPS, we were able to fit all genes quite accurately without overfitting. To test the usefulness of our method for predicting the correct sampling rates for other genomic datasets, we next profiled mouse miRNAs for the same developmental process; miRNAs have been known to regulate lung development, and several have been implicated in this process. To test TPS on this dataset, we used the mRNA expression data to select time points and then used the miRNA expression values for the selected time points to reconstruct the complete trajectories for each miRNA. We also tested TPS when using the mRNA data to select sampling time points for profiling the levels of more than 1000 proteins. We observed results that are very similar to the results obtained for the miRNA time point selection. Specifically, the points selected by TPS\u00a0lead to reconstruction errors that are lower than those observed for uniform sampling or for a random set of the same number of points, further demonstrating the general applicability of our method. See Appendix Results for details. We also analyzed Akt1, Cdh11, and Tnc, the genes with the strongest negative correlation between their methylation and expression. In several cases we observed strong negative or positive correlations between the two datasets at the time points we used, serving as another indication of the ability to use one dataset to select the sampling points for the other. In addition to mRNA and miRNA expression data, epigenetic data have been increasingly studied in time series experiments. 
Time series gene expression experiments are widely used in several studies. More recently, advances in sequencing and proteomics are enabling the profiling of several other types of genomic data over time. Here we focused on lung development in mice with the goal of identifying an optimal set of time points for profiling various genomic and proteomic data types for this process. An important question is whether a better selection of time points really leads to observations that are missed when using an inferior set of points (even if the number of points is the same). To answer this question we looked at several prior studies that profiled mouse lung development over time using various high throughput assays, and to illustrate the problems with ad-hoc sampling we compared the resulting curves using three of the sampling rates from these studies. Our method relies on a very small subset of genes that are known to be involved in the process studied for the initial (highly sampled) set of experiments. While such a set is known for several processes, there may be cases where very little is known about the biological process, and so it may be hard to obtain such a set. TPS\u00a0can still be applied to determine sampling rates for such processes using a small random set of genes. To illustrate this, we repeated the analysis presented in Results using only the measured values of 25% of the genes in our original set and replacing the values for the other genes with random profiles. Beyond the analysis of a specific type of data, several studies have now been profiling multiple types of genomic data over time. Such studies need to agree on a set of time points which would be common to all experiments so that these diverse types can be integrated to form a unified model. TPS addresses these problems by using a principled method for determining sampling rates. 
An important goal in the development of TPS was to enable it to be successfully applied to different types of biological datasets. As we show, a relatively inexpensive, gene centric method provides a very good solution for RNA expression profiling as well as for other types of data including miRNAs and DNA methylation. Thus, a combined experiment can be fully designed using our method. While we evaluated TPS\u00a0on several types of high throughput data, we have only tested it so far on data for a specific biological process (lung development in mice). While we believe that such data is both challenging and representative, and thus provides a good test case for the method, analysis of additional datasets may identify new challenges that we have not addressed, and we leave it to future work to address these. TPS, including all initialization methods discussed, is implemented in Python and is available on the supporting website. We hope that as sequencing technology continues to advance, more and more studies will integrate diverse types of time series data and will utilize TPS in the design pipeline of their studies. To select the list of 126 genes used in the NanoString profiling we searched the literature for genes that have been linked to the following processes: (a) cell type specification, (b) genes known to be up or down regulated during septation, (c) genes known to be altered in DNA methylation during development, (d) genes known to be involved in septation, (e) genes known to be regulated by miRNAs involved in septation, and (f) genes known to be regulated by DNA methylation during fibrosis. For the miRNA set we used a commercially available, unbiased array. A total of 240 samples were isolated by Laser Capture Microscopy (LCM) from murine lung at multiple time points. The samples were used to prepare total RNA. RNA extraction was performed with the miRNeasy MicroKit (Qiagen) following the manufacturer\u2019s protocol. 
RNA concentration and integrity were measured by using NanoDrop ND-2000 and 2200 Tape Station instruments. A custom NanoString probe set (Reporter Code set and Capture Probe set) for the 126 genes was designed, and the nCounter Gene Expression Assay was performed using 50 ng total RNA. The data files produced by the nCounter Digital Analyzer were exported as Reporter Code Count (RCC) files and data normalization was performed using nSolver, the analysis software provided by NanoString. Thirteen genes were chosen for targeted NextGen bisulfite sequencing (NGBS) based on published data: Igfbp3, Wif1, Cdh11, Eln, Sox9, Tnc, Dnmt3a, Akt, Vegfa, Lox, Foxf2, Zfp536 and Src. Incubation with Digestion buffer and proteinase K was done overnight at 55\u00b0C in inverted tubes. Our goal is to identify a subset of time points that can be used to accurately reconstruct the expression trajectory for all genes or other molecules being profiled. We assume that we can efficiently and cheaply obtain a dense sample for the expression of a very small subset of representative genes and attempt to use this subset to determine optimal sampling points for the entire set of genes. To constrain the set of points we select, we assume that we have a predefined budget on the number of time points we can profile; the problem is then to choose the subset of points whose per-gene spline reconstructions minimize the error over all genes. Before discussing the actual procedure we use to select the set of time points, we discuss the method we use to assign splines based on a selected subset of points for each gene. There are two issues that need to be resolved when assigning such smoothing splines: (1) the number of knots (control points) and (2) their spacing. Past approaches for using splines to model time series gene expression data have usually used the same number of control points for all genes regardless of their trajectories. 
The regularized spline assignment we use addresses both issues. Because of the highly combinatorial nature of the time point selection, we rely on a greedy iterative process to select the optimal points. There are three key steps in this algorithm, which we discuss in detail below. Selecting the initial set of points: when using an iterative algorithm to solve non-convex problems with several local minima, a key issue is the appropriate selection of the initial solution. Iterative improvement: we iteratively remove a point from the selected set and replace it with each of the points that were not in the selected set. For each such pair of points, we compute the error resulting from the change and determine whether the new point reduces the error or not. Fitting smoothing splines: the third key step of our approach is fitting a smoothing spline to every gene independently for the selected subset of time points. As discussed above, this is done by using a regularized version of approximating splines which allows us to determine a unique number of control points and spacing for each of the genes. See Appendix Methods for more details. So far, we assumed that the error of each gene makes the same contribution to the overall error. However, this assumption ignores the fact that the\u00a0expression profiles of genes are correlated with the expression of other genes. To take the correlation between gene profiles into account, we also performed a cluster based evaluation of genes, where we analyzed the error by weighting each gene by the inverse of the number of genes in the cluster to which it belongs. This scheme ensures that each cluster, rather than each gene, contributes equally to the resulting error. We find clusters with the k-means algorithm over the time-series data by treating each gene as a point defined by its fitted spline. In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. 
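The greedy procedure (initialize, then swap points while the leave-out error improves) can be sketched end-to-end. Linear interpolation stands in for the regularized smoothing splines, and the nine-point, single-gene toy profile is an invented example, not the TPS implementation.

```python
# End-to-end sketch of the greedy time point search.
def interp(ts, vs, t):
    for (t0, v0), (t1, v1) in zip(zip(ts, vs), zip(ts[1:], vs[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def error(sel, times, genes):
    # mean squared reconstruction error over the points left out of sel
    held = [i for i in range(len(times)) if i not in sel]
    ts = [times[i] for i in sel]
    total = 0.0
    for g in genes:
        vs = [g[i] for i in sel]
        total += sum((g[i] - interp(ts, vs, times[i])) ** 2 for i in held)
    return total / (len(held) * len(genes))

def greedy_select(times, genes, k):
    # initialize with (roughly) uniform points; first and last always stay
    sel = sorted({round(i * (len(times) - 1) / (k - 1)) for i in range(k)})
    improved = True
    while improved:
        improved = False
        best, best_err = sel, error(sel, times, genes)
        for i in sel[1:-1]:                  # try swapping interior points
            for j in range(len(times)):
                if j in sel:
                    continue
                cand = sorted((set(sel) - {i}) | {j})
                e = error(cand, times, genes)
                if e < best_err:
                    best, best_err, improved = cand, e, True
        sel = best
    return sel

times = list(range(9))
genes = [[0, 0, 1, 0, 0, 0, 0, 0, 0]]        # one sharp event at t = 2
picked = greedy_select(times, genes, k=4)    # moves points toward the event
```

Starting from the near-uniform initialization, the swap search pulls the interior points toward the transient, which is the behavior the results sections attribute to condition-specific sampling.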
A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Thank you for submitting your article \"Determining sampling rates for high-throughput time series studies\" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Aviv Regev as the Senior Editor. The reviewers have opted to remain anonymous. The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted these comments in an effort to crystallize our concerns about the novelty of this work. Before we render a binding decision, the Board asks that you respond soon to the essential concerns below in order to help us assess the technical advance you have achieved. Summary: Time course studies are critical for understanding biological processes. Facilitated by advances in sequencing technologies, high-throughput time series analyses are becoming more and more prevalent, and often more than one type of data is collected for each time point in order to probe/understand various aspects of cellular activity. Selecting the precise time points to use in such studies is challenging and often ad-hoc. In this manuscript, Kleyman et al. describe Time Point Selection (TPS), a method designed to address the problem of determining sampling rates in time series studies. The idea behind TPS is as follows: 1) start with a small set of genes known to be important for the process to be studied; 2) use cheap array-based experiments (e.g. 
NanoString) to measure expression of these genes at a high, uniform, sampling rate; 3) use the data to infer a subset of points that can be used to optimally \"reconstruct\" the entire data set from step 2; 4) use the time points selected in step 3 for the genome-wide experiments. The authors applied TPS to lung development in mice, and showed that the time points identified by TPS are better than uniform sampling and even phenotype-based sampling (in terms of reconstructing the mRNA profiles of the selected genes), and can be used not only for time series expression data, but also for miRNA and methylation time series data. The general strategy implemented in TPS has potential and could prove useful for future time course studies. Essential concerns: 1) All the reviewers felt that the method is only compared to the most trivial baselines (uniform and random). E.g. there is no comparison to even the authors\u2019 own 2005 paper on a closely related setup. There are various existing works on selecting knots for splines. One could use them, for example, by first fitting a spline to all the high-rate data, and then selecting points for that curve. Does TPS work better than non-trivial baselines? The authors must perform systematic comparisons to related work such as Rosa et al. 2012, Optimal Timepoint Sampling in High-Throughput Gene Expression Experiments. Further, the paragraph in the Discussion section showing the advantage of using TPS over phenotype-based sampling should be moved into the main text and presented clearly as this is a critical part of the paper. 2) All the reviewers also felt the manuscript required additional clarity, and specific comments are below. 3) Since TPS uses an iterative algorithm for solving an optimization problem with many local optima, the initialization procedure is critical. 
I appreciate the fact that the authors tested multiple procedures for initialization, and when using TPS one can try all initialization procedures (and in the end select the one with the best results on the left-out data). It is not clear, however, whether the software implemented by the authors tests all these initialization procedures. 4) In addition, the initialization metrics/methods are confusing. The Methods and Supplementary Methods sections talk about \"uniform\", \"intervals of equal sizes\", \"max distance\", metricA, metricB, and metricC. But it is not clear what initialization method was used in different analyses/figures. What procedure was used in 5) The \"iterative improvement\" procedures are also confusing. The method presented in Methods optimizes the mean squared error (on the left out data points) by iteratively removing/adding one point at a time. I assume this method was used in the main analyses, but it is not clear whether it was used in that figure. 6) For the miRNA data, the authors perform a comparison of results using mRNA versus miRNA data for deciding the sampling time points. But the way the results are presented is again confusing. In the main text the authors say: \"The results are presented in 7) It is nice that the project has a website but it looks mostly incomplete. Can the authors make the site a full-fledged web service so novice users and biologists with no programming background can use the software? Users should be able to upload a time series dataset and receive results with interactive visualization. 8) There is only one case study. The method is described as a broad method, but since it was applied to one process, it should be stated more carefully that it is a general method. Perhaps the authors should focus on this case study without the need to \"sell\" the method as the primary focus of the study. 9) The examples in 10) Performance curves , 4. 
Is t11) 12) 13) Subsection \u201cIdentified time points using mRNA data are appropriate for miRNA profiling\u201d, second paragraph: \"Using the selected based on mRNA data [\u2026] random points (p<0.01)\". What is the p-value for the uniform baseline? 14) There is a line in the paper that says 'performance is very similar [...].0.43 [\u2026] 0.40\". Provide a measure supported by a statistical test. Thank you for resubmitting your work entitled \"Selecting the most appropriate time points to profile in high-throughput studies\" for further consideration at eLife. Your revised article has been favorably evaluated by Aviv Regev as the Senior Editor, a Reviewing Editor, and two reviewers. The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below by the referees. We ask that you pay close attention to these specific items, which we call out here and which appear in the specific reviews below: 1) The revised version still has formatting issues. The response refers to Table 5, but no such table is given; figures are not numbered and provided separately from their captions, which makes it hard to follow and comment. 2) The new 3) We do not see a benefit in having the left panel of 4) 5) The authors mentioned that they \"improved the caption of 6) The authors added Figure 4\u2014figure supplement 3 to show the performance of TPS trained on miRNA data. But that plot could/should be included directly in 7) Some of the figures are still of inadequate quality. Some plots have a strange aspect ratio e.g. . Several Reviewer #1: The authors addressed most of the concerns raised. Most importantly, the authors added some evaluation to other approaches for selecting time points, and replaced results based on outliers. The revised version still has formatting issues.
The response refers to Table 5, but no such table is given; figures are not numbered and provided separately from their captions, which makes it hard to follow and comment. Reviewer #2: The authors addressed several comments from reviewers. I am now OK with the substance of the paper. While it would be useful to see validation on additional data/systems, I appreciate the difficulties in getting additional data sets for further testing of the method. Essential concerns: 1) All the reviewers felt that the method is only compared to the most trivial baselines (uniform and random). E.g., there is no comparison to even the authors\u2019 own 2005 paper on a closely related setup. There are various existing works on selecting knots for splines. One could use them, for example, by first fitting a spline to all the high-rate data, and then selecting points for that curve. Does TPS work better than non-trivial baselines? The authors must perform systematic comparisons to related work such as Rosa et al. 2012, Optimal Timepoint Sampling in High-Throughput Gene Expression Experiments. Further, the paragraph in the Discussion section showing the advantage of using TPS over phenotype-based sampling should be moved into the main text and presented clearly, as this is a critical part of the paper. The reviewers are correct that two prior studies (one from our group) have discussed theoretical ideas about determining which points to sample. However, there are a number of key differences between TPS and these prior studies. The most important difference is the fact that both prior methods mentioned above require an iterative process. These methods start by profiling all genes at a small number of time points. Based on these initial experiments they select another time point, profile it, etc., until they reach some stopping criterion. This strategy, while theoretically interesting, is not practical. The main issue is the fact that such a strategy can take a very long time to complete.
Given the sample preparations, which must be performed several times (at each iteration), and the sequencing itself, which has to be done one iteration at a time, the entire process of selecting the time points can take weeks and even months for an experiment with 20 total time points such as the one we discuss. It is very unlikely that anyone would be willing to spend this much time. In addition, such an iterative process introduces several new issues, including the fact that each time point is prepared and sequenced on a different day, which has been shown to introduce biases, and the fact that there is no real way to tell when to stop. Unlike TPS, which starts with oversampling and can thus determine accuracies for the subset of selected points, the iterative methods cannot compare their final decisions to any true profile and so are much more likely to reach a local minimum. Finally, the Rosa et al. method mentioned above relies on the availability and accuracy of related gene expression experiments. While this may be O.K. for some biological processes that have already been studied, when studying a new process or a new treatment this method cannot be applied since it is likely that no such relevant datasets exist, and even if they do exist they may themselves be sampled at the wrong time points, and thus no \u2018ground truth\u2019 exists for these methods. In contrast, TPS is practical and cheap, and does not rely on the availability of other high throughput studies for the same system. The above discussion refers to convenience and practicality. Given the above comment, we also compared the accuracy of TPS to these prior methods. The Rosa et al. implementation did not work, and even after consulting with the authors we were unable to use it. We were able to run the Singh et al. method and perform the comparison with that method.
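For context, the selection idea discussed in the review (fit splines to densely sampled profiles, then choose the subset of time points that best reconstructs the full profiles, with iterative removal of the least useful point) can be sketched as follows. This is a simplified, hypothetical Python illustration; the function names `reconstruction_error` and `greedy_select` are ours, and this is not the authors' TPS implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def reconstruction_error(times, profiles, chosen):
    """Mean squared error of reconstructing all profiles from the chosen
    time points, using a cubic spline through the chosen points only."""
    rest = [i for i in range(len(times)) if i not in chosen]
    err = 0.0
    for y in profiles:  # one densely sampled expression profile per gene
        spline = CubicSpline(times[chosen], y[chosen])
        err += np.mean((spline(times[rest]) - y[rest]) ** 2)
    return err / len(profiles)

def greedy_select(times, profiles, k):
    """Greedily drop points from the full grid until k remain (k >= 2),
    removing at each step the point whose removal hurts reconstruction
    least. The first and last points are always kept so the spline
    interpolates rather than extrapolates."""
    chosen = list(range(len(times)))
    while len(chosen) > k:
        candidates = chosen[1:-1]  # keep the endpoints
        errs = [reconstruction_error(times, profiles,
                                     sorted(set(chosen) - {c}))
                for c in candidates]
        chosen.remove(candidates[int(np.argmin(errs))])
    return chosen
```

A swap-based refinement pass (try exchanging a selected point for an unselected one whenever it lowers the error) could follow the same pattern; the greedy pass alone already illustrates why all candidate subsets can be compared on a common objective.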
As can be seen in . We have revised the Introduction to discuss these prior methods and the difference between TPS and these methods, and revised the supplement to discuss the comparison between the two methods. Based on these comments we have modified the text. First, we moved the paragraph mentioned in the comment from Discussion to Results. We also added the following to the Introduction: \u201cRelatively little work has focused so far on the selection of time points to sample in high throughput time series studies. [\u2026] In addition, these methods employ a stopping criteria that does not take into account the full profile and the Rosa et al. method also requires that related time series expression experiments be used to select the point, which may be problematic when studying new processes or treatments.\u201d 2) All the reviewers also felt the manuscript required additional clarity and specific comments are below. 3) Since TPS uses an iterative algorithm for solving an optimization problem with many local optima, the initialization procedure is critical. I appreciate the fact that the authors tested multiple procedures for initialization, and when using TPS one can try all initialization procedures (and in the end select the one with the best results on the left-out data). It is not clear, however, whether the software implemented by the authors tests all these initialization procedures. We have implemented and made available all initialization methods in the software. As we now note in Discussion, since we are optimizing a specific function (MSE for selected genes at the selected time points), all initialization methods can be compared and the solution that leads to the lowest error can be used. Thus, the large number of initialization methods should not be a problem for users of the method. 4) In addition, the initialization metrics/methods are confusing.
The Methods and Supplementary Methods sections talk about \"uniform\", \"intervals of equal sizes\", \"max distance\", metricA, metricB, and metricC. But it is not clear what initialization method was used in different analyses/figures. What procedure was used in As the reviewer suggested, we improved the caption of 5) The \"iterative improvement\" procedures are also confusing. The method presented in Methods optimizes the mean squared error (on the left-out data points) by iteratively removing/adding one point at a time. I assume this method was used in our iterative method and which are for baseline / comparison methods that are not iterative. We agree with the reviewer and now explicitly state, in the caption, which names correspond to different initializations for 6) For the miRNA data, the authors perform a comparison of results using mRNA versus miRNA data for deciding the sampling time points. But the way the results are presented is again confusing. In the main text the authors say: \"The results are presented in The reviewer is correct that the results were not shown in the figure. We have now added a new figure with these results and changed the text to: \u201cFigure 4\u2014figure supplement 3 presents the error achieved when using the miRNA data itself to select the set of points. [\u2026] For example, when using the 13 selected mRNA points, the average mean squared error is 0.4312, whereas when using the optimal points based on the miRNA data itself the error is 0.4042.\u201d 7) It is nice that the project has a website but it looks mostly incomplete. Can the authors make the site a full-fledged web service so novice users and biologists with no programming background can use the software?
Users should be able to upload a time series dataset and receive results with interactive visualization. The software actually provides a graphical user interface which allows the user to determine how many time points to use by displaying different error levels for different selections, and also the optimal time points for each selection. The software will be publicly available, but we prefer not to implement it as a webserver. We had good success with prior stand-alone software tools we released (for example STEM and DREM), and the use of this one would also be easy and intuitive. We decided not to provide a webserver since some researchers prefer not to upload new data to public webservers, and the ability to use the software on their own machines would be a plus for these individuals whereas it should not be detrimental to others. 8) There is only one case study. The method is described as a broad method, but since it was applied to one process, it should be stated more carefully that it is a general method. Perhaps the authors should focus on this case study without the need to \"sell\" the method as the primary focus of the study. While it is indeed focused on one biological process, we believe the analysis is quite comprehensive. Specifically, we tested it on several different types of high throughput biological data. Still, to address this issue we added the following to the Discussion: \u201cWe evaluated TPS on several types of high throughput data. However, we have only tested it so far on data for a specific biological process (lung development in mice).
While we believe that such data is both challenging and representative, and thus provides a good test case for the method, analysis of additional datasets may identify new challenges that we have not addressed, and we leave it to future work to address these.\u201d 9) The examples in This comment is addressed by Appendix-Table 4, which shows the difference in error from different sampling rate methods (TPS and prior sampling rates). As can be seen, TPS does much better even when considering the MSE across all genes. 10) Performance curves , 4. Is t Again, as shown by the much better means, but also much better STD values (Appendix-Table 4), the performance of TPS is consistently better across most of the genes and is not a function of a few, very noisy, ones. 11) all is available on the supporting website. Based on this comment we have performed the bootstrapping proposed by the reviewers and we now include the result as We also note that these correlation values are not a major aspect of the paper (the key for us is whether we can reconstruct the curves based on the time points selected) and were mainly presented to show that time series information could be useful to determine the relationship between methylation and expression values. 12) The reason we removed the uniform sampling from 2B is for clarity. We wanted to have one clean figure showing the difference between the standard method (uniform) and the basic TPS method (with a uniform weight and greedy search). Then, after establishing the advantage of the general version of TPS in 2A, we show in 2B that we can further improve its results by improving the search method. We now explain this in the caption. 13) Subsection \u201cIdentified time points using mRNA data are appropriate for miRNA profiling\u201d, second paragraph: \"Using the selected based on mRNA data [\u2026] random points (p<0.01)\".
What is the p-value for the uniform baseline? The p-value is based on repeated (100) random samplings of time points and we have now clarified this in the text. 14) There is a line in the paper that says 'performance is very similar [...].0.43 [\u2026] 0.40\". Provide a measure supported by a statistical test. As discussed in the response to 13, we evaluated error significance using errors produced by random sets of points. For these we see standard deviations in the range of 0.05, as can be seen in The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below by the referees. We ask that you pay close attention to these specific items, which we call out here and which appear in the specific reviews below: 1) The revised version still has formatting issues. The response refers to Table 5, but no such table is given; figures are not numbered and provided separately from their captions, which makes it hard to follow and comment. This is indeed our mistake. Table 5 should have been referring to Appendix-Table 4 and we have now fixed all references in the text to the correct table. As for the comment about the figures, this is because of the eLife upload instructions and policy. In the initial version we uploaded figures and captions together but were told by the journal editorial team to upload figures separately. We believe this will be fixed after acceptance. 2) The new Indeed, in response to a comment in the initial round of review we replaced some of the genes presented in 3) We do not see a benefit in having the left panel of As the reviewer suggested, we moved this panel to the supplement as 4) Fixed. 5) The authors mentioned that they \"improved the caption of The reviewer is correct; we meant to say that we revised the captions for 6) The authors added Figure 4\u2014figure supplement 3 to show the performance of TPS trained on miRNA data.
But that plot could/should be included directly in As the reviewer suggested, we now added Figure 4\u2014figure supplement 3 as 7) Some of the figures are still of inadequate quality. Some plots have a strange aspect ratio e.g. . Several The reference issue was fixed earlier and resulted from problems related to the eLife conversion of the latex file. As for the figures, we added y-axis labels of relative expression to"}
+{"text": "Accurate gene regulatory networks can be used to explain the emergence of different phenotypes, disease mechanisms, and other biological functions. Many methods have been proposed to infer networks from gene expression data but have been hampered by problems such as low sample size, inaccurate constraints, and incomplete characterizations of regulatory dynamics. Since expression regulation is dynamic, time-course data can be used to infer causality, but these datasets tend to be short or sparsely sampled. In addition, temporal methods typically assume that the expression of a gene at a time point depends on the expression of other genes at only the immediately preceding time point, while other methods include additional time points without any constraints to account for their temporal distance. These limitations can contribute to inaccurate networks with many missing and anomalous links. We adapted the time-lagged Ordered Lasso, a regularized regression method with temporal monotonicity constraints, for de novo reconstruction. We also developed a semi-supervised method that embeds prior network information into the Ordered Lasso to discover novel regulatory dependencies in existing pathways. R code is available at https://github.com/pn51/laggedOrderedLassoNetwork. We evaluated these approaches on simulated data for a repressilator, time-course data from past DREAM challenges, and a HeLa cell cycle dataset to show that they can produce accurate networks subject to the dynamics and assumptions of the time-lagged Ordered Lasso regression. The online version of this article (10.1186/s12859-018-2558-7) contains supplementary material, which is available to authorized users. A major challenge in systems biology is understanding the structure and function of the molecular interaction networks that regulate cellular processes.
Gene regulatory networks (GRNs) are abstractions of these networks in which nodes represent genes and directed edges represent regulatory interactions between them. However, computational approaches for GRN reconstruction pose another set of challenges. Since every ordered pair of genes presents the possibility of an edge, an exponentially large space of GRNs needs to be considered. Furthermore, while high-throughput sequencing technologies have advanced significantly and can simultaneously measure the expression levels of thousands of genes in an efficient and affordable manner, dataset sample sizes still tend to be very small compared to the number of genes. This disparity results in clusters with many genes that have similar expression profiles, allowing many GRNs to plausibly account for the observed patterns of expression in a dataset. In addition to GRN reconstruction being an underdetermined problem, other issues such as missing data, gene expression stochasticity, confounding, and incomplete characterizations of the gene regulatory dynamics can also adversely affect GRN predictions. While the wealth of gene expression data has been a boon to understanding GRNs, there is still a demand for accurate and interpretable GRN inference methods that properly address these problems with promising modeling assumptions and efficient algorithms. Most GRN reconstruction methods can be broadly classified into two categories. De novo approaches attempt to infer GRNs solely from expression data. Specifically, edges between genes are inferred by deriving edge confidence scores based on similarities between expression profiles, statistical dependencies, or other measures. In both cases, most methods rely on static expression data. Alternatively, since expression regulation is a dynamic process, time-course data can be used to infer causality. However, temporal data tends to exhibit high autocorrelation and is usually only gathered for a few time points and subjects.
In addition, many temporal methods typically assume that the expression of a gene at a time point depends on its regulators at only the immediately preceding time point, while other methods include additional time points but do not impose any constraints to account for their temporal distance. For instance, pairwise Granger causality tests whether the past of one time series improves prediction of another. In this paper, we first describe a de novo approach for GRN reconstruction based on the Ordered Lasso, a recently introduced regression method with monotonicity constraints on the coefficients. The organization of the rest of this paper is as follows. The main difficulties in fitting models for gene expression are the high dimensionality and small sample size of an expression dataset. Due to the large number of genes relative to the number of samples, fitting even simple one-lag models in which the expression of a gene depends on the expression of other genes at a previous time point may be an underdetermined problem wherein many models plausibly fit the dataset and result in overfitting or difficulties with model selection and interpretation. Higher-order lagged models in which the dependence extends to multiple preceding time points provide more flexibility by accounting for long-range and multiple-lag dependencies, but the additional variables that are introduced further compound the problems encountered in the one-lag model. Furthermore, the lagged variables of a predictor tend to have high autocorrelation, especially when the temporal resolution of the data is small. Therefore, additional reasonable modeling assumptions must be imposed to ensure that accurate, interpretable models can still be feasibly learned. To this end, one useful approach to prevent overfitting, improve model interpretability, and produce accurate predictions is the lasso, or \u21131-regularized regression.
The lasso performs variable selection by shrinking many regression coefficients exactly to zero. In certain regression problems, an order constraint may additionally be imposed to reflect the relative importance of the features. Recently, the Ordered Lasso was introduced to solve \u21131-regularized linear regression problems with monotonicity constraints on the coefficients, with an extension to regression on time-lagged variables. Like the ordinary lasso, the time-lagged Ordered Lasso can facilitate feature selection and model interpretability. Since the \u21131 penalty forces many of the coefficients to zero, a lagged variable may be considered relevant if it has a non-zero coefficient. In addition, because of the monotonicity constraint on the lagged features of a predictor, all of the coefficients may be equal to zero beyond a certain lag. Therefore, the time-lagged Ordered Lasso can also provide insight into the maximum effective lag or range of influence of each predictor on the response. To adapt the time-lagged Ordered Lasso for de novo GRN reconstruction, we impose several assumptions on the dynamic model for the expression of a gene. We first assume that the expression of each gene is linearly dependent on the expression of its regulators at multiple preceding time points, a common assumption in many reconstruction methods for time series expression data. Furthermore, to reflect the importance of recent expression data, we assume that as the temporal distance between a target gene variable and a lagged variable of a predictor gene increases, the regulatory influence of the lagged variable on the target decreases, a justifiable assumption for many expression datasets. For each gene i in a time-course expression dataset, we then fit an expression model with maximum lag lmax allowed by the data and lasso regularization by solving the following problem using the time-lagged Ordered Lasso: minimize, over the coefficients \u03b2, the objective \u2211_t ( x_i(t) \u2212 \u03b2_0 \u2212 \u2211_{j=1..p} \u2211_{l=1..lmax} \u03b2_{ij,l} x_j(t\u2212l) )\u00b2 + \u03bb \u2211_{j,l} |\u03b2_{ij,l}|, subject to |\u03b2_{ij,1}| \u2265 |\u03b2_{ij,2}| \u2265 \u2026 \u2265 |\u03b2_{ij,lmax}| for every predictor gene j.
For example, since expression data tends to be sparsely sampled at distant time points, it is unreasonable to expect expression data at highly distant time points in the past to be strongly influential on the current expression level of a gene. Here, x_i(t) is the expression of gene i at time t, and the monotonicity constraint orders the magnitudes of each predictor\u2019s lagged coefficients by decreasing lag distance. We predict an edge from gene j to gene i if any of the coefficients on the lagged variables of gene j are non-zero. Because of the monotonicity constraint, in effect, this only requires checking that the first lagged variable is non-zero. However, this does not imply that the higher-order lagged variables have no bearing on the edges that are predicted; the additional lagged variables of one gene may better explain a target gene\u2019s evolution in expression than the lagged variables of multiple other genes in a lower-lag model will, thereby eliminating the corresponding edges and potentially lowering the false positive rate in the higher-lag model. Although a gene\u2019s expression may in reality depend nonlinearly on its regulators, we use a simplified linear model for several reasons. First, having too many terms may be computationally restrictive; if p is the number of genes in the network, then for each term we wish to consider, p\u00b7lmax additional lagged variables need to be added to the model. In addition, the low sampling rates and time coverage of a dataset may be insufficient to accurately characterize these terms without overfitting or signal aliasing. Therefore, linearity serves as a simplifying assumption, a deterrent to prevent overfitting, and a preemptive measure to reduce computational overhead. We expect this approximation to be adequate for most applications, especially when detailed dynamics are difficult to observe due to the short time coverage and sparse sampling of a dataset.
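To make the lagged linear model concrete, the following sketch builds the lagged design matrix and reads predicted edges off a fitted coefficient vector. This is hypothetical Python written for illustration (the released package is in R, and the function names here are ours):

```python
import numpy as np

def lagged_design(X, lmax):
    """X: expression array of shape (p genes, T time points).
    Returns (Y, Z): Y holds the expression of every gene at times
    lmax..T-1; Z stacks, for each such time t, the lagged values
    x_j(t-l) for l = 1..lmax and every gene j (lag-major column
    order: the first p columns are lag 1, the next p are lag 2)."""
    p, T = X.shape
    Y = X[:, lmax:].T                                        # (T - lmax, p)
    Z = np.column_stack([X[:, lmax - l:T - l].T
                         for l in range(1, lmax + 1)])       # (T - lmax, p*lmax)
    return Y, Z

def predicted_edges(beta, p, lmax):
    """beta: fitted coefficients for one target gene i, ordered as in Z.
    An edge j -> i is predicted if any lag coefficient of gene j is
    non-zero; under the monotonicity constraint this reduces to lag 1."""
    B = beta.reshape(lmax, p)   # row l-1 holds the lag-l coefficients
    return [j for j in range(p) if np.any(B[:, j] != 0)]
```

Each row of `Z` pairs with the same-row response in `Y`, so one regularized regression per target gene column yields that gene's incoming edges.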
To assess prediction accuracy across different values of \u03bb, we test the method against known/synthetic networks and compute the area under the curve (AUC) of the receiver operating characteristic (ROC) curve as \u03bb is varied. Since edges may potentially enter, leave, and re-enter a model as \u03bb decreases, to ensure the ROC curve increases monotonically, we consider an edge to be predicted at a given value of \u03bb if it enters at that value or larger. This can be viewed as applying a threshold on \u03bb and merging the predicted networks for that value and larger. Here, the AUC may be interpreted as the probability that a randomly chosen edge is ranked higher or enters a model earlier than a randomly chosen non-edge. Additional details on choosing \u03bb may be found in the Additional file. Since partial knowledge of the dependencies between genes is available, we also consider GRN refinements with semi-supervised adaptations. For most researchers, the primary interest in GRN reconstruction is discovering novel edges, or pairs of genes that are not previously known to interact but whose existence can be supported with evidence from transcriptomic data. On the other hand, prior information may also contain incorrect edges due to curatorial errors or differences between a canonical GRN forming the prior and that which exists in a particular phenotype. Thus, discovering both novel and anomalous connections is of interest. We modify the de novo approach for semi-supervised reconstruction by embedding a prior GRN into the lasso as follows. Rather than use one general penalty parameter \u03bb, we replace it with two parameters, \u03bbedge and \u03bbnon-edge, to separately regularize the prior edge and non-edge coefficients, respectively. An expression model for gene k with maximum lag lmax is fit by solving the same problem as above with the time-lagged Ordered Lasso, except that the \u21131 penalty becomes \u03bbedge \u2211_{(j,k)\u2208E} \u2211_l |\u03b2_{kj,l}| + \u03bbnon-edge \u2211_{(j,k)\u2209E} \u2211_l |\u03b2_{kj,l}|, where E denotes the set of edges in the prior GRN.
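The split penalty can be pictured as a per-coefficient weight vector handed to the solver. A minimal sketch (the helper name `penalty_weights` is ours, and the lag-major coefficient ordering is an assumption matching the lagged design described earlier):

```python
import numpy as np

def penalty_weights(prior_edges, target, p, lmax, lam_edge, lam_nonedge):
    """Per-coefficient ell_1 weights for the model of gene `target`.
    prior_edges: set of (regulator, target) pairs from the prior GRN.
    Coefficients are ordered lag-major, so all lmax lagged coefficients
    of a regulator share that regulator's weight."""
    w = np.array([lam_edge if (j, target) in prior_edges else lam_nonedge
                  for j in range(p)])
    return np.tile(w, lmax)   # repeat the gene weights for each lag block
```

With `lam_edge < lam_nonedge`, prior edges are penalized less and are more likely to survive with non-zero coefficients, while prior non-edges must earn their way into the model, which is exactly the behavior described next.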
If \u03bbedge<\u03bbnon-edge, the magnitude of the coefficients of the prior edges will be penalized to a lesser extent than those of the prior non-edges, thereby allowing the former to account for most of the evolution in expression of the target gene. As a result, the prior edge coefficients are more likely to be non-zero, and the prior edges are more likely to be recovered as posterior edges. Since the prior edges will not necessarily account for all of the output variance and the corresponding coefficients may still be fit with zero values (even when \u03bbedge>0), this approach allows us to predict novel and anomalous edges. For a gene j that is not known to regulate a target gene i, we predict a novel edge if any of the coefficients on the lagged variables of gene j are non-zero; for a gene j that is previously known to regulate gene i, we predict an anomalous edge if all of those coefficients are zero. The second standard approach to which we compare our method is Lasso-Granger. One of the drawbacks of pairwise Granger causality is that it requires separate tests for Granger causality, which may be computationally prohibitive when n is large. Lasso-Granger attempts to address these problems by solving Eq. We also compare our GRN predictions to those made by the truncating adaptive lasso, grouped A summary of the time-course datasets to which we apply the method is given in Table . When designing experiments, experimentalists need to decide for how long and how often data should be collected while factoring in technical complexity, cost, and other considerations. Since time series expression data tends to be short or sparse, predicting accurate GRNs from these datasets may be difficult. Therefore, we first analyze and demonstrate the effect of using different time series sampling rates and lengths on accuracy. To do so, we simulate data for a repressilator, a synthetic oscillatory gene circuit. For our simulations, we set \u03b1=4 and n=3; examples of simulated time series data are shown in Fig. . To analyze the utility of the methods in recovering known GRNs, we apply them to synthetic time-course data from several DREAM challenges. In one of the DREAM2 challenges, 50-node networks were derived from Erdos-Renyi and scale-free topologies with Hill-type kinetics driving gene expression. Other networks were derived from E. coli and S. cerevisiae gene networks, and expression values were simulated with ordinary differential equations and added measurement noise using GeneNetWeaver. While synthetic expression data can be used to elucidate the GRN inference properties of a method in a controlled manner, models for generating these datasets do not fully capture all of the nuances of real data and GRNs. To assess empirical practicality, we consider applications to the HeLa cell cycle gene expression dataset by Whitfield et al. The first reference subnetwork to which we compare our results is shown in Fig. . We first evaluate our method using simulated data for a repressilator. We primarily investigate the effect of using different sampling intervals and time series lengths, with lmax\u2208{1, 2, 3} when fitting Eq. When T is large, many of these AUCs are 1, indicating that the time-lagged Ordered Lasso can correctly infer the network when an adequate amount of regularization is used to learn the expression models. As T decreases, the AUCs remain constant until much less than a period of oscillatory behavior is sampled. However, the AUCs remain above 0.5, so the method still does better than chance at identifying the true edges. When the time series is too short to observe any relevant dynamics, the method effectively does no better than chance. Therefore, using a time series that covers a sufficiently long period of time is necessary to ensure that a reliably accurate GRN is inferred. When \u0394t is large relative to T, the AUCs degrade considerably, in some cases to 0. However, when the time series are dense, the time-lagged Ordered Lasso produces high AUCs.
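The repressilator trajectories behind these sampling experiments can be reproduced with a standard ODE integrator. The sketch below assumes the commonly used mRNA-only repressilator reduction, dx_i/dt = \u03b1/(1 + x_{i-1}^n) \u2212 x_i with unit degradation, using the stated \u03b1=4 and n=3; the paper's exact equations, noise model, and initial conditions may differ:

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, N = 4.0, 3.0   # repression strength and Hill coefficient from the text

def repressilator(t, x):
    """Three-gene ring: gene i is repressed by gene i-1 (gene 0 by gene 2).
    mRNA-only reduction with unit degradation."""
    return [ALPHA / (1.0 + x[i - 1] ** N) - x[i] for i in range(3)]

def simulate(T, dt, x0=(1.0, 0.5, 0.25)):
    """Sample the trajectory on a uniform grid of spacing dt over [0, T],
    mimicking a time series of length T with sampling interval dt."""
    t_eval = np.arange(0.0, T + 1e-9, dt)
    sol = solve_ivp(repressilator, (0.0, T), x0, t_eval=t_eval, rtol=1e-8)
    return sol.t, sol.y   # sol.y has shape (3, number of samples)
```

Varying `T` and `dt` here reproduces the kind of length/sampling-rate sweep discussed above: subsampling the returned grid more coarsely, or truncating it earlier, directly controls how much of the oscillatory dynamics the inference method gets to see.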
Moreover, beyond the sampling rate at which surplus sampled points provide no additional detail about the relevant dynamics, the AUCs do not change, therefore becoming robust to changes in \u0394t. Accordingly, \u0394t does not have to be extremely small to infer an accurate GRN, but the resulting time series should not be excessively sparse, since very low sampling rates can be detrimental. Lastly, the effect of lmax on the AUCs appears to be negligible for large T and small \u0394t. This suggests that the time-lagged Ordered Lasso can accurately describe the repressilator\u2019s behavior with lmax=1. Moreover, for lmax>1, the time-lagged Ordered Lasso is able to suppress the effect of the additional lagged variables by enforcing the monotonicity constraint. However, when T is small or \u0394t is large relative to T, the AUCs appear to be sensitive to the choice of lmax. Since increasing lmax results in fewer samples to learn from, the accuracy of the time-lagged Ordered Lasso is expected to be robust to changes in lmax when it is small relative to the number of time points. For comparison, Granger causality and Lasso-Granger AUCs are also shown in Fig. . Granger causality performs poorly when T is small. In addition, its AUCs are sensitive to changes in lmax and vary unpredictably with changes in T and \u0394t, making it difficult to suggest experimental designs for other GRNs. In contrast, Lasso-Granger tends to be on par with the time-lagged Ordered Lasso. For a one-lag model, there is no monotonicity constraint, so their AUCs match for lmax=1. For lmax>1, the AUCs deviate when \u0394t has sufficiently increased; when \u0394t is large, the Lasso-Granger AUCs tend to decrease with increasing lmax, while the time-lagged Ordered Lasso AUCs are more robust, remaining at 1 in some cases. Based on these results, the time-lagged Ordered Lasso has the potential to outperform other methods.
Unlike Granger causality, it can handle short time series and still produce reasonably accurate networks. In addition, while Granger causality and Lasso-Granger allow higher lags to flexibly explain the repressilator\u2019s expression dynamics, these lags may correspond to false edges; in contrast, the time-lagged Ordered Lasso enforces a reasonable assumption about the diminishing strength of higher lags to mitigate their presence. Therefore, the repressilator is an example in which a better regression fit does not imply a more accurate GRN. Lastly, using time series that cover long periods of time can improve the time-lagged Ordered Lasso\u2019s ability to articulate the true edges, provided that the sampling rate is not extremely low. However, sampling over shorter periods with relatively high sampling rates to observe sufficient changes in expression can still produce fairly accurate networks. Therefore, the total number of observations, rather than frequency or length alone, is a major factor in inference accuracy. We next apply our method to the DREAM challenge datasets. Since the networks are fully known, biologically plausible, and endowed with detailed dynamical models of gene expression, these challenges serve as a testbed for benchmarking methods across different network sizes, topologies, sample sizes, and stochasticity conditions. As with the repressilator, we compute AUCs at different model orders lmax, shown in Fig. . We study the overall performance of each method across the DREAM networks by considering the distribution of AUCs for each combination of method and lmax. Tuning lmax can be used to obtain slight improvements in the overall accuracy of one method over another, based on the AUC density curves and medians at the considered lmax values. Choosing \u03bb on a per-gene basis or different heuristics that are more specific to the time-lagged Ordered Lasso may improve its network predictions.
In addition, these results can be used to guide a choice between TAlasso and the time-lagged Ordered Lasso, depending on the importance of specificity versus sensitivity, as well as of predicting a sparse network versus the potential to discover more novel edges that may be verified with follow-up experiments, especially when the reference networks may only be partially known. The availability of the original and updated networks also presents an opportunity to analyze the semi-supervised time-lagged Ordered Lasso adaptation. For illustrative purposes, we evaluate the method\u2019s ability to predict novel edges by treating the original Sambo et al. network as the input prior network and setting \u03bbedge to 0. We again compute AUCs, this time by tracking the prior non-edges that enter an expression model as \u03bbnon-edge decreases from a sufficiently large value (corresponding to no prior non-edges predicted as posterior edges). This AUC may be interpreted as the probability that a randomly chosen true novel edge is ranked higher, or enters a model earlier, than a randomly chosen true non-edge. The novel edge prediction AUCs for model orders lmax\u2208{1, \u2026, 6} increase as lmax increases. More importantly, the AUCs at the larger values of lmax are well above 0.5, indicating that the semi-supervised method can predict novel edges at rates better than chance using the described parameter settings. Since all prior edges were unpenalized in these results, possible improvements in accuracy can be made by choosing positive values of \u03bbedge, which can also facilitate anomalous edge detection. 
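This rank-based reading of the AUC is the standard Mann-Whitney identity and can be computed directly from edge rankings; a minimal sketch with made-up scores (hypothetical values, not the paper's data or implementation):

```python
# Minimal sketch of the AUC interpretation described above: the probability
# that a randomly chosen true (novel) edge outranks a randomly chosen true
# non-edge. Scores are hypothetical; higher = enters the model earlier.

def rank_auc(edge_scores, nonedge_scores):
    """Mann-Whitney-style AUC; ties count as half a win."""
    wins = 0.0
    for e in edge_scores:
        for n in nonedge_scores:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(edge_scores) * len(nonedge_scores))

# Edges that tend to enter earlier than non-edges give an AUC near 1:
auc = rank_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1])  # 11/12
```

An AUC of 0.5 corresponds to chance-level ranking, matching the benchmark used in the text.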
Nevertheless, the time-lagged Ordered Lasso already displays a strong potential for reliable novel edge detection; even without these adjustments, the current semi-supervised adaptation is still able to synthesize a partially known GRN with an expression dataset to resolve the inconsistencies between the two inputs and accurately identify the missing edges in the GRN. The time-lagged Ordered Lasso imposes a monotonicity constraint based on temporal distance that is adequate for many time series applications, performs model regularization, and has a canonical feature selection mechanism, making it well-suited for GRN reconstruction. We have presented adaptations of the method for de novo and semi-supervised reconstruction from time-course gene expression data. To do so, we assumed that the expression of a gene depended linearly on the expression of its regulators at multiple preceding time points and that the regulatory strength of a predictor decreased for increasing lags. A local model of gene expression is then learned for each gene using the time-lagged Ordered Lasso, and a GRN is predicted by applying the feature selection mechanism to each gene\u2019s model to determine the predicted regulators. To modify the de novo method for semi-supervised reconstruction, we introduced a second regularization parameter that allows us to embed a prior GRN into the model-fitting procedure in order to predict novel and anomalous edges. In our applications, we showed that the time-lagged Ordered Lasso enforces the monotonicity constraint to accurately predict a variety of networks. In most cases, the time-lagged Ordered Lasso performed on par with or better than competing methods. Most importantly, we showed that it can accurately discover novel network connections and anomalous links using real data, as demonstrated by the improved performance when compared to the updated HeLa network. 
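As a rough per-gene illustration of this setup (a sketch only: the actual fit uses the time-lagged Ordered Lasso with its monotonicity constraint, which is omitted here; the helper name, toy series, and lmax value are hypothetical), the lagged design matrix for one target gene can be assembled as:

```python
# Sketch: assemble the lagged covariates used to fit a local expression model
# for one target gene, as described above. The regression itself (time-lagged
# Ordered Lasso) is not reproduced here.

def lagged_design(series, lmax):
    """series: dict gene -> expression values over time.
    Row t holds lags 1..lmax of every gene, for predicting expression at t."""
    genes = sorted(series)
    n_times = len(next(iter(series.values())))
    columns = [(g, lag) for g in genes for lag in range(1, lmax + 1)]
    X = [[series[g][t - lag] for (g, lag) in columns]
         for t in range(lmax, n_times)]
    return X, columns

# Hypothetical two-gene time series with lmax = 2:
series = {"g1": [0.0, 1.0, 0.0, 1.0, 0.0], "g2": [1.0, 0.0, 1.0, 0.0, 1.0]}
X, cols = lagged_design(series, lmax=2)
# cols == [("g1", 1), ("g1", 2), ("g2", 1), ("g2", 2)]
```

Per the feature selection rule described in the text, an edge from gene g to the target would be predicted when the fitted coefficient on g's first-lag column is nonzero.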
Specifically, the time-lagged Ordered Lasso predicted edges that were not known at the time that the HeLa data was published and would have been erroneously considered false positives with respect to the Sambo et al. network, but were later confirmed by further experiments. This is an important validation of the time-lagged Ordered Lasso\u2019s capabilities. Our results illustrated several important properties of the time-lagged Ordered Lasso adaptations. For instance, provided that a time series covers a sufficiently long period of time and is not extremely sparse, our method was able to accurately recover GRNs from the data, whereas other methods had more difficulty doing so under the same conditions. In addition, predicting a GRN from a fitted model only required checking the first lagged variable of each predictor. However, because the additional lagged variables of one gene may better explain a target gene\u2019s evolution in expression than the lagged variables of multiple other genes in a lower-lag model, the higher-order lags will still be important to the model and reduce false positive edge predictions at adequately chosen penalty parameters. Lastly, because of the monotonicity constraint, the time-lagged Ordered Lasso can automatically select the maximum effective lag of influence for each gene-gene pair, so the predicted GRNs are expected to be robust to the model order if a time series is sufficiently long and the model order is sufficiently large. As a result, the monotonicity constraint precludes the need for any complicated heuristics to choose the model order that other approaches may require to optimally reconstruct a GRN. Our algorithms can be modified in several ways. Here, we assumed that the expression of a gene depended linearly on the lagged expression of its predictors. However, we included the lagged expression of the gene itself as covariates, even if self-regulation was not evident; one modification is removing them. 
Another common modeling approach is using differential equations. Details and results for these changes may be found in Additional file 1. While GRN inference remains challenging, our approach provides several advances. First, to infer GRNs, our approach uses a time-ordered constraint on regulatory influence, which we showed can accurately predict a variety of networks. Our approach can also accommodate prior knowledge for semi-supervised GRN inference. In addition, the performance of our methods increases monotonically with the maximum lag of an expression model, obviating the need to optimize that parameter. Lastly, our methods also have the ability to make accurate novel discoveries, as demonstrated with the BioGRID example. Even without extensive modifications, our current algorithm is still able to predict fairly accurate GRNs with reasonable, basic assumptions for dynamic gene expression modeling. Thus, the GRNs that are inferred using the time-lagged Ordered Lasso can be used as starting points for further analyses and network refinements, and the time-lagged Ordered Lasso can serve as a backbone for additional GRN reconstruction algorithms. Additional file 1: Supplementary information. (PDF 289 kb)"}
+{"text": "The neurodegenerative disorder spinocerebellar ataxia type 1 (SCA1) affects the cerebellum and inferior olive, though previous research has focused primarily on the cerebellum. As a result, it is unknown what molecular alterations are present in the inferior olive, and whether these changes are found in other affected tissues. This study addresses these questions for the first time using two different SCA1 mouse models. We found that differentially regulated genes in the inferior olive segregated into several biological pathways. Comparison of the inferior olive and cerebellum demonstrates that vulnerable tissues in SCA1 are not uniform in their gene expression changes, and express largely distinct, though some commonly enriched, biological pathways. Importantly, we also found that brain-region-specific differences occur early in disease initiation and progression, and they are shared across the two mouse models of SCA1. This suggests different mechanisms of degeneration at work in the inferior olive and cerebellum. A hallmark of neurodegenerative diseases is brain-region-specific cell death and dysfunction. Why specific brain regions are affected and others remain unscathed is a key question in the field. In order to answer this question, the innate differences and similarities in affected tissues must first be established. Studies in a diverse subset of neurodegenerative disorders have begun using large-scale transcriptomics and proteomics to assess brain-region-dependent molecular alterations. For example, a large RNA-sequencing (RNA-seq) study conducted across brain regions and peripheral tissues in Huntington\u2019s disease (HD) mouse models identified gene clusters related to transcription, chromatin factors, and mitochondria that appear to be specific to the striatum. 
Mouse models for neurodegenerative disorders can be utilized to query the underlying differences and similarities in affected tissues, including the models generated for spinocerebellar ataxia type 1 (SCA1). SCA1 is a fatal late-onset neurodegenerative disorder that is characterized by impaired motor coordination and balance. SCA1 is caused by a polyglutamine (polyQ) expansion in the ATAXIN-1 (ATXN1) protein, which is involved in gene regulation, including transcription. Two models were studied here: a transgenic (Tg) line expressing ATXN1-82Q under the Pcp2 promoter, and the 154Q/2QAtxn1 knock-in (KI) line. Since no gender-effects have previously been reported in SCA1 mouse models, and to minimize potential gender-effects, if any, three male mice from each genotype were initially used for all RNA-seq experiments. Inferior olive was collected using the decussation of the pyramid and pons as a reference, and the anatomical location was verified under a dissection microscope before processing. RNA-seq data analysis was carried out using the Tuxedo pipeline, with TopHat2 v2.1.0 and Cufflinks v2.2.1. Differential expression was assessed in the 154Q/2QAtxn1 KI mouse model on a pure C57BL/6J background, which expresses polyQ-expanded mutant mouse Atxn1 in appropriate cell types and tissues under the control of its endogenous regulatory elements, relative to control littermates, and the differentially regulated genes in the inferior olive were analyzed. We first verified that polyQ-expanded Atxn1 is expressed in the inferior olive of KI mice. Differentially regulated genes in the 154Q/2QAtxn1 KI inferior olive at 5 weeks were related to response to organic substance, hormone metabolic process, and neuropeptide hormone activity. A top upstream regulator of the 154Q/2QAtxn1 KI inferior olive transcriptome at 5 weeks of age was also predicted. 
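Differential expression calls of this kind are typically thresholded at a false discovery rate; as a generic illustration (a textbook Benjamini-Hochberg step-up sketch with made-up p-values, not the Cufflinks/cuffdiff internals):

```python
# Generic Benjamini-Hochberg (BH) step-up sketch, not the cuffdiff
# implementation: given per-gene p-values, return genes significant at FDR q.

def benjamini_hochberg(pvals, q=0.05):
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(ranked)
    k_max = 0
    for i, (_, p) in enumerate(ranked, start=1):
        if p <= q * i / m:  # BH criterion: p_(i) <= q * i / m
            k_max = i
    return {gene for gene, _ in ranked[:k_max]}

# Made-up p-values for genes named in the study:
calls = benjamini_hochberg({"Irf7": 0.001, "Ifitm3": 0.004,
                            "Oasl2": 0.03, "Actb": 0.8})
# Oasl2 (p = 0.03) passes because the step-up rule compares it to 0.05 * 3/4.
```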
In the 12 week dataset, IPA prediction for upstream regulators influencing gene expression changes identified Irf7 as the transcriptional regulator with the highest enrichment of downstream targets. Enriched terms included the defense response; since Irf7 is expressed in microglia, we assessed glial markers in this context, and found no significant change in Gfap fluorescence intensity at this time-point. Differentially regulated genes in the 154Q/2QAtxn1 KI inferior olive included genes classically associated with behavior and learning. These genes have previously been studied in ataxia research in the cerebellum, suggesting that a subset of ataxia-related genes are also significantly altered in the affected inferior olive of 154Q/2QAtxn1 KI mice at the 12 week time-point. We next identified similar gene expression and pathway enrichment across the 5 and 12 week time-points in the 154Q/2QAtxn1 KI inferior olive. The log fold-changes of the commonly differentially regulated genes were strongly correlated between the two time-points (R2 = 0.90). The similarity of enriched terms in the 154Q/2QAtxn1 KI inferior olive was assessed by grouping terms from each time-point into EnrichmentMap (FDR p-value < 0.05). Microarray comparisons with the 154Q/2QAtxn1 KI cerebellum indicated a slight progression in the severity of the up- or down-regulation of genes across the temporal window. To allow for further temporal and cross-tissue comparisons, we sought to extend our p-value cutoff for the 5 week old 154Q/2QAtxn1 KI cerebellum; previous microarrays in the 154Q/2QAtxn1 KI cerebellum at 4 weeks and 7 weeks of age found a large number of significantly different genes relative to WT controls. The comparison between the 5 week and 12 week time-points in the 154Q/2QAtxn1 KI cerebellum likewise showed strongly correlated fold-changes (R2 = 0.78). 
We next asked whether the transcriptomes of the two affected tissues in 154Q/2QAtxn1 KI mice are similar or distinct. To analyze the similarities between these two affected tissues in greater detail, the overlap between the two SCA1-affected tissues was assessed at each time-point. Several of the overlapping genes have immune functions, and were significantly up-regulated at 5 weeks in the ATXN1-82Q Tg inferior olive. The 12 week ATXN1-82Q Tg inferior olive RNA-seq data revealed 126 differentially regulated genes. The degree of up-regulation remained largely stable at the 5 and 12 week time-points (R2 = 0.87). Pathway analysis again highlighted immune-related terms, including the complement genes C1qa, C1qb, and C1qc. Together with the results in 154Q/2QAtxn1 KI mice, this indicates that there are some common, and largely unique, biological pathways over-represented in the inferior olive and cerebellum SCA1 transcriptomics. Some genes, including Ifih1 (MDA5), are significantly up-regulated in the 154Q/2QAtxn1 KI inferior olive, but do not change in the ATXN1-82Q Tg inferior olive, likely due to the fundamental difference in the expression of polyQ-expanded mutant ATXN1 protein in the inferior olive between the two different SCA1 models. RNA quality and RIN values were checked to ensure RNA integrity before proceeding with RNA-seq. Libraries were sequenced, and reads were processed before quantification and differential expression analysis with Cufflinks v2.2.1. For the 154Q/2QAtxn1 KI cerebellum, a nominal p-value < 0.01 was used for analysis. Enrichment analysis was conducted with NIH DAVID using GO annotation, and enriched terms passing the cut-off were visualized as networks. Mice were anesthetized before intracardial perfusion with phosphate buffer saline (PBS) and 4% paraformaldehyde (PFA). Brains were extracted and fixed overnight in 4% PFA before incubation in 20% and 30% sucrose gradient. Samples were frozen in OCT compound and sliced in 30 \u00b5m sections on a cryostat. Sections were washed in PBS and PBS with 0.1% Triton-X before incubation in 5% normal goat serum (Jackson Labs 005-000-121) at room temperature. 
Primary antibody incubation was carried out at 4\u00b0C with the following antibodies: mouse anti-Calbindin, rabbit anti-Iba1, and chicken anti-Gfap. Sections were washed before incubation in secondary antibodies goat anti-mouse Alexa488, goat anti-rabbit Alexa568, and goat anti-chicken Alexa555. Sections were mounted and coverslipped with DAPI. Fluorescent images were scanned on a Zeiss LSM800 confocal microscope. Three brain slices were imaged and quantified for each mouse. ImageJ was used for all image processing. To measure the fluorescence intensity for Gfap-positive staining, the inferior olive was traced based on calbindin-positive staining, the area in \u03bcm2 was measured, and the mean fluorescence intensity was measured. To count the total number of Iba1-positive cells, \u2018analyze particle\u2019 was used and manually checked by a blinded individual to ensure Iba1-positive staining co-localized with DAPI. Samples for western blotting were sonicated to ensure breakdown of protein aggregates before rotation at 4\u00b0C for 10 min and centrifugation for 10 min at 13,000 rpm at 4\u00b0C. Supernatant was quantified using a BCA assay (ThermoFisher 23225), and protein from each sample was added to an 8% Tris-HCl gel for western blotting. For the 154Q/2QAtxn1 KI inferior olive and cerebellum, 70 \u03bcg and 20 \u03bcg of protein were added per lane, respectively. For the ATXN1-82Q Tg inferior olive and cerebellum, 10 \u03bcg of protein was added per lane. Protein was transferred onto 0.42 \u03bcm nitrocellulose and blotted with primary antibodies, including rabbit anti-Atxn1 11750 (a kind gift from Dr. Huda Zoghbi). Cells were transfected with pcDNA3.1 and Flag-ATXN1-85Q constructs using an Amaxa Nucleofector (Lonza) and the Amaxa Cell Line Nucleofector Kit T (Lonza) according to the manufacturer\u2019s instructions. 
Cells were maintained post-nucleofection for 48 hr before RNA extraction using the Qiagen RNeasy Mini Kit and subsequent real-time quantitative reverse transcription polymerase chain reaction outlined below. Murine microglial BV2 cells were a kind gift of Dr. Katerina Akassoglou. The microglia-like nature of BV2 cells was verified via immunostaining with Iba1 and the induction of cytokine production following lipopolysaccharide stimulation. BV2 cells were tested for mycoplasma contamination by PCR. Cells were pelleted, resuspended in nuclease-free water, and lysed. PCR was conducted with 2 \u00b5L cell lysate using the following primers: Forward - 5'-GGCGAATGGGTGAGTAAC; Reverse - 5'-CGGATAACGCTTGCGACCT. PCR reaction products from BV2 cells were run on an agarose gel, and BV2 cells were found to be negative for mycoplasma. For real-time quantitative reverse transcription polymerase chain reaction (RT-qPCR), cDNA was synthesized from 100 ng of total RNA from cell lines or mouse models using the iScript cDNA Synthesis Kit following the manufacturer\u2019s protocol (Bio-Rad). Quantitative real-time PCR was run on a C1000 Thermal Cycler (Bio-Rad) using probes for ATXN1, Irf7, Ifitm3, Oasl2, Calb1, Actb, and Gapdh (Applied Biosystems). All samples were run in triplicate and normalized to the Gapdh and Actb expression values for \u2206CT calculation and subsequent data analysis. For Gfap- and Iba1-positive fluorescence intensity and cell counting, fluorescence intensity or total cells counted were normalized to the total area measured to attain an intensity value or cell count value per 1 mm2, then normalized to control samples to obtain a percentage. Immunohistochemistry and western blotting data were analyzed using t-tests in Prism v7. For RT-qPCR, the Bio-Rad CFX Manager v3.1 was utilized for statistical analysis, and data were plotted in Prism v7. In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. 
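The normalization of target Ct values to the Gapdh and Actb references described above is conventionally completed with the Livak 2^(-ddCt) method; a sketch with hypothetical Ct values (not the study's measurements):

```python
# Standard Livak 2^(-ddCt) relative-expression sketch. All Ct values below
# are made up; averaging two reference genes mirrors the normalization to
# Gapdh and Actb described above.

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    d_ct = ct_target - sum(ct_refs) / len(ct_refs)                      # dCt, sample
    d_ct_ctrl = ct_target_ctrl - sum(ct_refs_ctrl) / len(ct_refs_ctrl)  # dCt, control
    return 2.0 ** -(d_ct - d_ct_ctrl)                                   # 2^(-ddCt)

# Hypothetical target Ct of 24.0 in a sample vs. 27.0 in a control,
# both normalized to reference Cts of 18.0 and 19.0:
fold = relative_expression(24.0, [18.0, 19.0], 27.0, [18.0, 19.0])  # 8.0
```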
A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Thank you for submitting your article \"Molecular pathway analysis towards understanding tissue vulnerability in spinocerebellar ataxia type 1\" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and a Senior Editor. The following individual involved in review of your submission has agreed to reveal his identity: Stefan Pulst (Reviewer #2). The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission. Summary: This paper provides comparative transcriptomic analysis of distinct brain regions in two different SCA1 mouse models. Importantly, this is the first characterization of the transcriptional signature of the inferior olive, which is a vulnerable brain region in SCA. These analyses reveal substantial overlap but also some differences in transcriptomic changes between knock-in and transgenic models, as may be expected. More importantly, these analyses found brain region-specific differences that are shared between the two distinct models of SCA1, suggesting different mechanisms of degeneration at work in the inferior olive and cerebellum. Interestingly, changes in genes representing the \"defense response\" are prominent in the inferior olive, appearing to reflect non-neural changes due to activation of glia. This work enriches our understanding of two commonly used models of SCA1 and provides valuable novel insights into the mechanisms that may contribute to regional neurodegeneration in SCA more generally. Essential revisions: 1) It is not clear why a 12-week time point was chosen for analysis \u2013 a time point when both lines examined here are symptomatic. 
Ideally, a time point should be chosen that precedes behavioral and major morphological changes since cell loss may distort the transcriptional signatures. We ask the authors to consider adding analysis of an earlier, presymptomatic time point to assess which transcriptional changes are occurring first and the trajectory of the changes. Perhaps the 12-week time point was selected to correspond to prior published results. It would be helpful for the authors to discuss the current findings in the context of previously published results. 2) The statement that \"common behavioral and pathological phenotypes of the two mouse models may be due to dysregulation of a common subset of genes, and not entirely based on the gene expression directionality\" is highly speculative and not supported by much data. This conclusion should be tempered or supported by more experimental evidence. Essential revisions: 1) It is not clear why a 12-week time point was chosen for analysis \u2013 a time point when both lines examined here are symptomatic. Ideally, a time point should be chosen that precedes behavioral and major morphological changes since cell loss may distort the transcriptional signatures. We ask the authors to consider adding analysis of an earlier, presymptomatic time point to assess which transcriptional changes are occurring first and the trajectory of the changes. Perhaps the 12-week time point was selected to correspond to prior published results. It would be helpful for the authors to discuss the current findings in the context of previously published results. In the 154Q/2QAtxn1 KI and ATXN1-82Q Tg mouse models, substantial cell loss has not been reported in the cerebellum, and no obvious cellular changes were reported in the 154Q/2QAtxn1 KI inferior olive outside of nuclear inclusion formation at an 18 week time-point. These observations suggested to us that RNA-seq analysis at 12 weeks was plausible without the consideration of distorted transcriptional alterations due to substantial cell loss. 
The initial 12 week time-point was chosen for three reasons. First, the onset of transcriptional changes in the inferior olive is unknown, and an intermediate time-point was initially chosen to determine if gene expression changes are occurring in the inferior olive. Second, substantial cell loss has not been reported in the cerebellum in both the 154Q/2QAtxn1 KI and ATXN1-82Q Tg mouse models. Third, previously published RNA-seq and microarrays in both mouse models have used the 12 week intermediate time-point, which allows for potential comparison between our results and those previously published. In response, we have added RNA-sequencing of the inferior olive and cerebellum at 5 weeks of age. This time-point coincides with, or precedes, the earliest reported onset of motor phenotypes in these mouse models. Bioinformatic analysis of the individual brain regions and mouse models, as well as the cross-tissue comparison, were conducted on the 5 week data-set. Further, the commonalities and differences between the 5 week and 12 week datasets within each brain region were assessed. 
We believe that the addition of the 5 week time-point contributes substantially to our initial manuscript, and solidifies the findings that there are brain region-specific molecular alterations in SCA1. In consideration of the reviewer's suggestion of adding an earlier, pre-symptomatic time point to our analysis, we have added RNA-sequencing results from the 154Q/2QAtxn1 KI and ATXN1-82Q Tg inferior olive and cerebellum at 5 weeks of age. Data pertaining to the 5 week analysis can be found in the following figures: Figure 2\u2014figure supplements 1 and 4; Figure 3\u2014figure supplements 1-4; Figure 4\u2014figure supplement 1; Figure 5\u2014figure supplements 1 and 3; Figure 6\u2014figure supplements 1, 3, and 4. The addition of the 5 week time-points led to a substantial revision of the text; the main findings from the 5 week time-points can be found in the Results section. The incorporation of the 5 week time-point generated 13 additional Supplementary Data tables. To reduce the number of Supplementary Data tables, we have removed Supplementary Data tables found in the original manuscript that contained differentially regulated gene lists from each brain region. The justification for this is that the differentially regulated gene lists can be easily accessed in our processed data text files found on Gene Expression Omnibus (GEO accession number: 122099). In consideration of the reviewer's comment about discussing our current findings in the context of previously published works, we have added this to the Discussion section. Briefly, our ATXN1-82Q Tg cerebellum datasets have consistent findings with those from a previous publication. In addition, Cck and Col18a1 were down-regulated in our 5 and 12 week datasets, which match the previous report. Our 5 and 12 week old 154Q/2QAtxn1 KI cerebellum dataset also overlaps with microarray findings from a previous study analyzing 4 week old and 9-12 week old 154Q/2QAtxn1 KI mice. 
From that study, the 10 hub genes identified in the magenta module, which was significantly associated with ataxia, were also significantly down-regulated in our 5 and 12 week datasets. 2) The statement that \"common behavioral and pathological phenotypes of the two mouse models may be due to dysregulation of a common subset of genes, and not entirely based on the gene expression directionality\" is highly speculative and not supported by much data. This conclusion should be tempered or supported by more experimental evidence. We agree with the reviewers that this statement is speculative and extrapolated from our data without much supporting evidence. In order to address this comment, we have removed this concept from the Discussion section. The Results section was re-assessed, and we feel that this concept does not appear in this portion of the text."}
+{"text": "Even in the steady-state, the number of biomolecules in living cells fluctuates dynamically, and the frequency spectrum of this chemical fluctuation carries valuable information about the dynamics of the reactions creating these biomolecules. Recent advances in single-cell techniques enable direct monitoring of the time-traces of the protein number in each cell; however, it is not yet clear how the stochastic dynamics of these time-traces is related to the reaction mechanism and dynamics. Here, we derive a rigorous relation between the frequency-spectrum of the product number fluctuation and the reaction mechanism and dynamics, starting from a generalized master equation. This relation enables us to analyze the time-traces of the protein number and extract information about the dynamics of mRNA number and transcriptional regulation, which cannot be directly observed by current experimental techniques. We demonstrate our frequency spectrum analysis of protein number fluctuation using the gene network model of luciferase expression under the control of the Bmal 1a promoter in mouse fibroblast cells, and a more general vibrant gene network model. We also discuss how the dynamic heterogeneity of transcription and translation rates affects the frequency-spectra of the mRNA and protein number. Recent advances in single-cell experimental techniques enable direct visualization of dynamic fluctuations of the biomolecular concentration in each cell; however, a robust, quantitative understanding of the stochastic dynamics of the chemical noise in living cells has yet to be achieved. 
To understand how the frequency spectrum of product number fluctuation is related to the topology of the reaction network and the dynamics of elementary processes composing the network, we derived an exact analytic result for the frequency spectrum of the product number fluctuation starting from a generalized master equation, enabling the extraction of the frequency spectrum of the reaction rate fluctuation (FSRR) from the frequency spectrum of the product number fluctuation (FSPN). The FSRR serves as a sensitive probe of the mechanism and dynamics of the product creation process; the FSRR vanishes when product creation is a Poisson process. We demonstrated our approach to frequency spectrum analysis of chemical fluctuation for generalized enzyme kinetic models and for the gene network model of luciferase expression under the Bmal 1a promoter in mouse fibroblast cells. When \u0393(t) is a stationary Gaussian process, this model is exactly solvable, and the time correlation function of the reaction rate, and hence its frequency spectrum SR(\u03c9), can be obtained in closed form. We emphasize that the application range of these results is broad. An important example of a non-renewal process is the gene expression process; rates of the chemical processes constituting gene expression can be stochastic variables depending on various cell-state variables, such as the promoter-regulation state, the population of gene machinery proteins and transcription factors, the phase in the cell cycle, and the nutrition state, forcing the stochastic property of gene expression to deviate from a simple renewal process. The Bmal 1a promoter involves a sub-Poisson gene activation process composed of 7 intermediate reaction steps and a simple one-step gene deactivation process, a Poisson process. We demonstrate an application of these results to this system below. 
We can obtain the mRNA number frequency spectrum, Sm(\u03c9), from the protein number frequency spectrum, Sp(\u03c9), by applying RTL = kTLm, where kTL and m are the translation rate coefficient and the mRNA copy number, respectively. In general, kTL represents the translation rate per mRNA and is a stochastic variable dependent on various cell state variables, which include ribosome concentration, concentrations of amino acids, and mRNA conformation, to name a few. However, let us first consider the simplest case where, compared to the mRNA number variation, the fluctuation of kTL negligibly influences the protein number fluctuation, which is found to be true in bacterial gene expression. The mean translation rate \u2329RTL\u232a(= kTL\u2329m\u232a) can be estimated from the mean protein level and the inverse lifetime of protein, that is, \u2329RTL\u232a = \u2329p\u232a\u03b3p or kTL = \u2329p\u232a\u03b3p/\u2329m\u232a. Given that the inverse lifetime, \u03b3m, of mRNA can be estimated separately, the value of kTL can also be estimated from the corresponding asymptotic relation. For the gene expression network model considered here, the mean transcription rate satisfies \u2329RTX\u232a = \u2329m\u232a\u03b3m. The frequency spectrum of mRNA number fluctuation can be converted to the frequency spectrum of the transcription rate fluctuation, which is difficult to observe experimentally, by applying RTX = \u03bekTX. Here, \u03be denotes the stochastic variable representing the promoter-regulating gene state, whose value is 1 for the gene in the active state but 0 for the gene in the inactive state, and kTX denotes the active gene transcription rate. For this model, the transcription rate spectrum can be expressed in terms of \u03be and the Laplace transform of the lifetime distribution, \u03c8on(off)(t), of the active (inactive) gene state. 
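The steady-state estimates quoted above reduce to simple arithmetic; a sketch with hypothetical numbers (the mean protein level, protein decay rate, and mean mRNA number below are made up for illustration):

```python
# Sketch of the steady-state relations quoted above:
#   <R_TL> = <p> * gamma_p        (mean translation rate)
#   k_TL   = <p> * gamma_p / <m>  (translation rate coefficient per mRNA)
# All numbers are hypothetical.

def mean_translation_rate(mean_p, gamma_p):
    return mean_p * gamma_p

def translation_rate_coefficient(mean_p, gamma_p, mean_m):
    return mean_p * gamma_p / mean_m

R_TL = mean_translation_rate(mean_p=1000.0, gamma_p=0.01)      # -> 10.0 per min
k_TL = translation_rate_coefficient(1000.0, 0.01, mean_m=5.0)  # -> 2.0 per mRNA per min
```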
\u03c4on(off) designates the mean lifetime of the active (inactive) gene state. The frequency spectrum of the transcription rate fluctuation, SRTX(\u03c9), carries valuable information about the dynamics of the transcription regulation process. For the transcription network model considered here, gene deactivation is a simple one-step process, so \u03c8on(t) is a simple exponential function; in contrast, the gene activation process is a multi-step process, with \u03c8off(t) being a non-monotonic, unimodal distribution. The activation of Bmal 1a in the embryonic stem cells of mice is best represented by 7 consecutive Poisson reaction processes, and the corresponding lifetime distribution, \u03c8off(t), of the inactive gene state involves 7 different rate parameters, the optimized values of which are given by k1 = k2 = \u22ef = k6 = 9.93\u00d710\u22122 min\u22121 and k7 = 0.23 min\u22121. Here, \u03be and kTX(\u0393) denote the gene state variable defined above and the active gene transcription rate, respectively. The frequency spectrum of the transcription rate fluctuation is also sensitive to the magnitude and speed of the active gene transcription rate fluctuation. Although we first treated the case where the translation rate coefficient, kTL, is constant, it can also be a random variable, the value of which differs from cell to cell; we find that our approach applies for this case as well, for example when the cell-to-cell variation of kTL is much greater than the dynamic fluctuation of kTL in each cell. We finish this section by emphasizing that, in application of our theory to the analysis of experimental FSPN data, the accuracy of the FSRR extracted from the FSPN data depends on the accuracy of the measured FSPN. We investigated how the frequency spectrum of product number fluctuation is related to the topology of the reaction network and the dynamics of elementary processes composing the network. 
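Because the inactive-state lifetime above is a sum of 7 exponential steps (a hypoexponential distribution), its mean and relative noise follow directly from the quoted rate parameters; a small check (only the rate values come from the text; the cv2 < 1 sub-Poisson criterion is a standard renewal-theory fact):

```python
# Mean and noise of the 7-step inactive-state lifetime described above.
# For a sum of independent exponential steps with rates k_i:
#   mean = sum(1/k_i),  variance = sum(1/k_i**2).
rates = [9.93e-2] * 6 + [0.23]  # min^-1, optimized values quoted in the text

mean_off = sum(1.0 / k for k in rates)      # mean inactive-state lifetime (min)
var_off = sum(1.0 / k ** 2 for k in rates)  # variance (min^2)
cv2 = var_off / mean_off ** 2               # squared coefficient of variation

# cv2 < 1 marks the multi-step activation as sub-Poisson; a single
# exponential step would give cv2 == 1.
```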
For this purpose, we derived an exact analytic result for the frequency spectrum of the product number fluctuation starting from a generalized master equation. This result enables one to obtain the frequency spectrum of the reaction rate fluctuation (FSRR) from the frequency spectrum of the product number fluctuation (FSPN). The FSRR is more sensitive to the mechanism and dynamics of the product creation process than the FSPN. The FSRR vanishes when product creation is a Poisson process. It is a monotonically decaying function of frequency when the product creation process is a super-Poisson process, such as a multi-channel process, but a non-monotonic function of frequency with one or more peaks when the reaction is a sub-Poisson process, such as a multi-step process. Our approach does not require an a priori explicit model for the environment coupled to the system network; we explicitly model only the control-variable-dependent part of the entire network, and the effects of the remaining part of the network and the environment are collectively accounted for by the time correlation function (TCF) of the rate fluctuation. This vibrant reaction network model based approach is useful in the quantitative analysis of chemical fluctuations generated from intracellular reaction networks consisting of elementary reaction processes with arbitrary reaction time distributions and environment-coupled rate fluctuations, for which it is difficult to construct a correct and explicit model in terms of the conventional kinetic network model consisting of a few discrete chemical states and Poisson transition processes between them. Our theory is applicable not only to the conventional kinetic network model but also to vibrant reaction network models consisting of multi-step and/or multi-channel elementary processes with arbitrary reaction orders, reaction time distributions, and rate coefficient fluctuations.
Vibrant gene expression network models enable a quantitative understanding of the mRNA and protein number fluctuations for various gene expression systems, including luciferase expression under the Bmal1a promoter in mouse fibroblast cells, and for a more general vibrant gene network model. We demonstrated our approach to frequency spectrum analysis of chemical fluctuation for a generalized enzyme kinetic model, showing that the frequency spectrum of the reaction rate serves as a sensitive probe of the reaction dynamics of the enzyme-substrate complex. Then, by applying our approach to a gene expression network, we can extract the mRNA number frequency spectrum from the protein number frequency spectrum. From the mRNA number frequency spectrum, we can further extract quantitative information about the gene regulation dynamics of the promoter and the active-gene transcription dynamics. In the simulation, the time elapsed for each enzyme-substrate association event (E+S→ES) is sampled from φES(t) = t^(a−1)e^(−t/b)/(Γ(a)b^a). The fate of a given ES complex, that is, either dissociation (E+S←ES) or catalytic reaction (ES→E+P), is chosen using the probability, p2, of catalytic reaction. A uniform random number is generated between 0 and 1, and if it is smaller than p2, a catalytic reaction occurs, resulting in a product molecule. Otherwise, the ES complex dissociates into a free enzyme and a substrate. Either case is followed by another round of enzyme-substrate association. The lifetime of each product molecule is sampled from the product lifetime distribution. In this section, we present the detailed algorithm used to generate the simulation results for the N-enzyme reaction system: N single-enzyme trajectories, independent of each other, are simultaneously generated and superposed to yield a single trajectory of the product number fluctuation.
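The single-enzyme event loop described above can be sketched as follows. This is a minimal illustration with names of our choosing (function, arguments, and the omitted product-degradation step are simplifications), not the authors' code:

```python
import random

def product_creation_times(t_max, a, b, p2, seed=None):
    """Single-enzyme trajectory sketch: each E+S->ES waiting time is
    gamma-distributed with shape a and scale b; the ES complex then
    undergoes catalysis (probability p2) or dissociates, and either
    outcome is followed by another association round.  Returns the
    times at which product molecules are created, up to t_max."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.gammavariate(a, b)   # E+S -> ES association waiting time
        if t >= t_max:
            break
        if rng.random() < p2:         # catalytic step ES -> E+P
            times.append(t)           # a product molecule appears
        # otherwise the ES complex dissociates; the loop continues either way
    return times
```

Superposing N independent calls of this function (one per enzyme) yields a single product-creation trajectory for the N-enzyme system, from which the product number trajectory follows once product lifetimes are drawn.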
Because we need stationary pooled trajectories to calculate the FSPN, the initial time of the pooled trajectories is chosen long enough that the distribution of the product number has reached its stationary state. To obtain the FSPN of the vibrant reaction models, note that the reaction rate, R(Γ), stochastically fluctuates over time because of its coupling to the state variable Γ, which can be a multidimensional vector. When a stochastic realization of Γ is generated with a time step, Δt, the reaction probability, R(Γ(t))Δt, is calculated at every time step and compared with a uniform random number between 0 and 1. When the random number is less than R(Γ(t))Δt, a reaction occurs, resulting in a product molecule at that time. Conventional approaches assume that the relaxation time scale of Γ is much longer than the time scale of an individual reaction event; the current method, like the Extrande method developed by Voliotis, Thomas, Grima, and Bowsher, is free of such a limitation. In our example, Γ is generated with the relaxation rate, λ, of the exponentially decaying time correlation function, ⟨Γ(t)Γ(0)⟩/⟨Γ²⟩ = e^(−λt), using an update equation for this process. The reaction rate, R(Γ), is modelled as R(Γ) = k0e^(−αΓ), with k0 and α being constant in time, and the value of Δt is chosen to be k0Δt = 10⁻³ here. Our method generalizes the algorithm in the reference to a state-coupled rate with the correct stationary variance.
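The thinning procedure just described can be sketched as below. This is our hedged illustration, not the authors' code: we use a standard Ornstein-Uhlenbeck-type update (a common choice) to realize the exponentially decaying correlation function, and all names are ours:

```python
import math
import random

def simulate_fluctuating_rate(t_max, dt, k0, alpha, lam, seed=None):
    """Sketch: Gamma follows an OU-like update whose exact one-step
    factor exp(-lam*dt) reproduces <G(t)G(0)>/<G^2> = exp(-lam*t);
    at each step a reaction fires with probability R(G)*dt, where
    R(G) = k0*exp(-alpha*G).  Returns the reaction (product) times."""
    rng = random.Random(seed)
    g, t, events = 0.0, 0.0, []
    f = math.exp(-lam * dt)
    while t < t_max:
        # exact update preserving the stationary unit variance of G
        g = f * g + math.sqrt(1.0 - f * f) * rng.gauss(0.0, 1.0)
        if rng.random() < k0 * math.exp(-alpha * g) * dt:  # thinning step
            events.append(t)
        t += dt
    return events
```

With k0·dt = 10⁻³, as in the text, the per-step reaction probability stays far below 1, which keeps the discrete-time thinning accurate.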
Supporting information: S1–S10 Text (PDF) and S1–S5 Fig (PDF)."}
+{"text": "Introduction: Drug-resistant infections are becoming increasingly frequent worldwide, causing hundreds of thousands of deaths annually. This is partly due to the very limited set of protein drug targets known for human-infecting viral genomes. The eleven influenza virus proteins, for instance, exploit host cell factors for replication and for suppression of the antiviral immune responses. A systems medicine approach to identify relevant and druggable host factors would dramatically expand therapeutic options. Therapeutic target identification, however, has hitherto relied on static molecular networks, whereas in reality the interactome, in particular during an infection, is subject to constant change. Methods: We developed time-course network enrichment (TiCoNE), an expert-centered approach for discovering temporal response pathways. In the first stage of TiCoNE, time-series expression data is clustered in a human-augmented manner to identify groups of biological entities with coherent temporal responses. Throughout this process, the expert can add, remove, merge, or split temporal patterns. The resulting groups can then be mapped to an interaction network to identify enriched pathways and to analyze cross-talk enrichments and depletions between groups. Finally, temporal response groups of two experiments can be intersected to identify condition-variant response patterns that represent promising drug-target candidates. Results: We applied TiCoNE to human gene expression data for influenza A virus infection and rhinovirus infection, respectively. We identified coherent temporal response patterns and employed our cross-talk analysis to establish two potential timelines of systems-level host responses for either infection. Next, we compared the two phenotypes and unraveled condition-variant temporal groups interacting on a network level. We then validated the highest-ranking candidates via literature search and wet-lab experiments.
This not only confirmed many of our candidates as previously known, but we also identified phospholipid scramblase 1 (encoded by PLSCR1) as a previously unrecognized host factor that is essential for influenza A virus infection. Conclusion: With TiCoNE we developed a novel approach for conjointly analyzing molecular networks with time-series expression data and demonstrated its power by identifying temporal drug targets. We provide proof-of-concept that not only can novel targets be identified using our approach, but also that anti-infective drug target discovery can be enhanced by investigating temporal molecular networks of the host in response to viral infection. We hypothesize that the host response is better modeled quantitatively, based on the temporal expression profiles of the network nodes. Big \u201comics\u201d data sets, including an increasing number of time-resolved data sets, as well as molecular networks, have been curated already. However, suitable approaches to analyze these data types together are lacking. Thus, an integrated analysis to illuminate systemic response patterns of temporal resolution has been infeasible so far. Here we present TiCoNE, a novel human-augmented time series clustering method combined with a temporal network enricher that enables such analyses. TiCoNE works with most kinds of biological entities and most types of molecular measures acquirable for them. To increase readability, in the remainder of this article we simply refer to genes and gene expression, as we focus on applying TiCoNE to influenza A virus (IAV) transcriptomics data. In this study, we briefly outline the computational approach of TiCoNE and the utilized data sets; for an exhaustive and more formal description of the methodology, we refer to the Computational Methods section. TiCoNE is a human-augmented clustering method for time series data combined with a temporal network enricher.
As any clustering approach, it seeks to partition the objects of a data set into groups such that (1) objects of the same group are similar and (2) objects of different groups are dissimilar to each other. In this study, we group objects based on their time course. Since TiCoNE is human-augmented, it allows the user to interfere with the clustering process and can thus incorporate valuable domain knowledge. After the clustering process, TiCoNE offers a sophisticated set of methods for enriching the clustering with molecular networks, paving the way for unprecedented systems medicine analyses of time series data. The objective of clustering is to group biomolecular entities in such a way that the time courses of genes assigned to one cluster are more similar to each other than to the time courses of genes assigned to different clusters. To establish the similarity between two time courses, the user can choose between the Euclidean distance and the Pearson correlation coefficient, depending on whether one wants to emphasize the time course amplitudes or their shapes, respectively. Note that TiCoNE can also directly cope with replicates. With this information and some optional data cleaning, TiCoNE produces an initial clustering using one of several common clustering approaches; one may choose between CLARA, k-means, PAMK, STEM, and transitivity clustering. Once the initial clustering is identified, TiCoNE applies a prototype-based clustering scheme and behaves similarly to k-means: each cluster is represented by a prototype, and the following two steps are performed alternatingly: 1. Assign all genes (based on their time courses) to the most similar prototype. 2. Update the prototypes accordingly. This process can be repeated automatically until convergence. The essential working mode of TiCoNE, however, is to allow the user to interfere with the clustering after each iteration in various ways.
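The alternating assign/update scheme above can be sketched in a few lines. This is a generic prototype-clustering illustration under our own naming, not TiCoNE's implementation (which additionally supports user intervention between iterations):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def prototype_cluster(series, prototypes, max_iter=100):
    """Alternate (1) assigning each gene's time course to the most
    similar prototype and (2) updating each prototype to the pointwise
    mean of its members, until assignments stop changing."""
    assign = {}
    for _ in range(max_iter):
        new = {g: max(range(len(prototypes)),
                      key=lambda c: pearson(ts, prototypes[c]))
               for g, ts in series.items()}
        if new == assign:
            break
        assign = new
        for c in range(len(prototypes)):
            members = [series[g] for g, a in assign.items() if a == c]
            if members:
                prototypes[c] = [sum(v) / len(members) for v in zip(*members)]
    return assign, prototypes
```

Swapping `pearson` for a (negated) Euclidean distance would emphasize amplitudes rather than shapes, mirroring the similarity-function choice described in the text.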
We term this approach human-augmented clustering. The result of each clustering iteration is presented to the user in a sophisticated graphical user interface allowing for the efficient inspection and manipulation of the intermediate clustering results. The user has the following main options: merge clusters of genes with apparently too similar prototypes; split clusters in cases wherein a cluster appears to the user to be the union of multiple different time courses; and delete clusters in case a cluster is perceived as noise or uninteresting. We refer to the Computational Methods section for details. After clustering, TiCoNE offers several novel network enrichment methods to boost the significance of the derived clusters and put them in a systems biological context. The most straightforward but naive analysis is the extraction of the node-induced subnetworks consisting of the genes present in a selected cluster of interest. This approach is naive, as it will fail to find connected networks if the genes of a cluster are not directly linked in the network. In a biomedical setting, it is reasonable to assume that not all functionally related objects show a very similar time behavior and end up in the same cluster. Thus, one may want to allow for a certain number of exception nodes that are not in the selected clusters but connect other objects that are. We use KeyPathwayMiner to perform this task: it extracts a maximal connected subnetwork consisting only of genes from the selected clusters plus a user-given number of exceptions. TiCoNE also allows for temporal cross talk enrichment by scanning the network for pairs of clusters that are connected more (or less) often than expected by chance in a given network. We utilize randomly permuted networks to assess significance levels. Different conditions may be compared over time. First, the data for both conditions are clustered independently.
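The permutation idea behind the cross-talk enrichment test can be sketched as follows. Note that this is a simplified illustration with our own names and input shapes (and it permutes cluster labels rather than rewiring the network, a common simplification), not TiCoNE's exact procedure:

```python
import random

def crosstalk_pvalue(edges, cluster_of, c1, c2, n_perm=1000, seed=0):
    """Count network edges linking clusters c1 and c2, then compare the
    observed count against random permutations of the cluster labels.
    `edges` is a list of (u, v) pairs; `cluster_of` maps node -> cluster."""
    rng = random.Random(seed)

    def between(assign):
        return sum(1 for u, v in edges
                   if {assign.get(u), assign.get(v)} == {c1, c2})

    observed = between(cluster_of)
    nodes = list(cluster_of)
    labels = [cluster_of[n] for n in nodes]
    at_least = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if between(dict(zip(nodes, labels))) >= observed:
            at_least += 1
    return (at_least + 1) / (n_perm + 1)  # empirical enrichment p-value
```

A depletion p-value follows symmetrically by counting permutations with at most the observed number of connecting edges.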
After clustering, TiCoNE identifies significantly overlapping clusters of the two conditions and evaluates the similarity of their prototypes. We regard those cluster pairs as most interesting that have a significant overlap and a very similar (or dissimilar) prototype, in order to investigate the commonalities (or the differences) of the conditions. These clusters may afterward be inspected using the aforementioned network enrichment analysis. We implemented TiCoNE as a Cytoscape app as well as a feature-reduced interactive web application. The TiCoNE Cytoscape app is full featured and includes all approaches described in this article; there, TiCoNE is complemented by Cytoscape's network visualization and analysis functionalities. The TiCoNE web application includes only a limited feature set, most importantly the formation of clusters of time series data sets and the visualization and enrichment of identified clusters on a biological network. Temporal cross talk identification and phenotype comparison are not included. The gene expression data set was generated under three different experimental conditions: BEAS-2B lung cells were infected with (1) IAV, (2) rhinovirus (RV), or (3) coinfected with both. It is an ideal scenario for TiCoNE's phenotype comparison feature; in this study, we focus on comparing (1) IAV versus (2) RV infection. We mapped the probe set IDs of the data set to Entrez gene IDs. For genes with multiple probe sets measured, we kept only the one with the highest variance over time. The input values had already been log2 transformed, so we only normalized them against the control. Furthermore, we removed genes that were not present in the used interaction network. The data set contained five biological replicates for all time points except the first one, for which it contained six.
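The probe-to-gene reduction step described above (keep, per gene, the probe set with the highest variance over time) can be sketched as below; function and variable names are illustrative:

```python
def best_probe_per_gene(probe_values, probe_to_gene):
    """For each Entrez gene ID, keep only the probe whose expression
    values have the highest variance over time.  `probe_values` maps
    probe -> list of values; `probe_to_gene` maps probe -> gene ID."""
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    best = {}
    for probe, values in probe_values.items():
        gene = probe_to_gene.get(probe)
        if gene is None:
            continue  # unmapped probes are dropped
        if gene not in best or var(values) > var(probe_values[best[gene]]):
            best[gene] = probe
    return best
```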
See Kim et al. for more details on the data set. We clustered both the expression data of the IAV and the RV infection experiments, with CLARA and 20 negative/positive discretization steps, into 500 initial clusters. We chose CLARA as it is designed to cluster large data sets efficiently, and we used the Pearson correlation as similarity function. We extracted the human interactome from the Interologous Interaction Database (I2D) protein\u2013protein interaction (PPI) database, which integrates various other databases of known, experimental, and predicted PPIs. We mapped Uniprot IDs to Entrez gene IDs to be able to integrate the network with our example time series data. After removing duplicate interactions between the same pair of genes, the resulting network contains 199,025 interactions and 15,161 nodes (genes). Note that our TiCoNE approach works with any kind of graph loaded into Cytoscape or into the feature-reduced TiCoNE online platform; the use case determines the most appropriate network. In this study, we aimed at finding human response protein complex formation candidates, hence our choice of the I2D interactome. Finally, one may analyze multiple condition-specific time series experiments to identify overrepresented temporal patterns responding differentially to conditions in multiple experiments. To demonstrate how TiCoNE can facilitate temporal systems medicine drug target discovery, we analyzed human whole-genome time series transcriptomics data of BEAS-2B lung cells after infection with IAV or RV together with the human protein interactome. Viral pathogens such as influenza virus pose a severe threat to human welfare. Therapeutic target discovery has recently focused on host cell factors needed for or restricting IAV replication, as they are not encoded in the viral genome and the virus cannot easily adapt through mutation to become resistant.
Interfering with such factors may thereby allow treating IAV infections independent of the specific virus strain and preventing viral resistance. Antivirals normally attack viral functions, for example antibodies that bind to viral proteins or drugs that impair viral functions. As an RNA virus, IAV possesses high genetic flexibility, allowing for quick adaptation to the selective pressures imposed by these antiviral attacks. The result is that neither a lasting immunization nor an effective therapy has been developed. IAV exploits the host cell after infection by hijacking a variety of fundamental intracellular signaling cascades. The transcriptomics data contain expression levels for 49,386 probes at baseline and over nine time points between 2 and 72\u2009h after infection, with five replicates for each time point. TiCoNE clustered the expression data of the IAV infection and RV infection experiments into 88 and 71 significant clusters, respectively. The significant IAV cluster pairs form a large connected complex along the timeline, potentially representing the gene program response cascades to IAV infection over time. In contrast, the significant RV cluster pairs are more fragmented and form multiple separated cluster complexes, possibly indicating that no such tightly controlled process of gene program response cascades is activated. Out of the 61 significant gene cluster pairs for the IAV infection, we see an enriched number of edges more often (39/61) than for the 34 significant gene cluster pairs for the RV infection (19/34). These results highlight that both conditions exhibit fundamentally different behavior over time on a systems biology level. We then compared the systems medicine response between the two infection types (IAV vs. RV) to identify de novo pathways in the network that behave differently over time under the two conditions. p-Values were derived using a permutation test with 1000 permutations. We identified 30 such pathways (subnetworks) of highly significant size.
In addition, querying the established drug target databases DrugBank and Therapeutic Target Database for the identified genes confirms that TiCoNE recovered several known drug targets. TiCoNE analysis also identified several genes, such as phospholipid scramblase 1 (encoded by PLSCR1), FK506 binding protein like (FKBPL), and helicase with zinc finger (HELZ2), which were not previously known to be specifically activated upon IAV infection. PLSCR1 encodes an interferon (IFN)-inducible protein that mediates antiviral activity in vitro against DNA and RNA viruses, including hepatitis B virus, vesicular stomatitis virus (VSV), herpes simplex virus, and encephalomyocarditis virus. On the contrary, PLSCR1 mediates hepatitis C virus entry into host cells. It was suggested that the antiviral effect of PLSCR1 against VSV is correlated with increased expression of specific ISGs. Therefore, phospholipid scramblase 1, which is itself an ISG-encoded protein, is involved in amplifying and enhancing the IFN response through increased expression of a subset of potent antiviral genes. Also, it was found that primary plasmacytoid dendritic cells (pDCs) from PLSCR1-deficient mice produced a lower amount of type-1 IFN than pDCs from wild-type mice in response to IAV stimulation, indicating that PLSCR1 might also be involved in IFN expression and eventually expression of ISGs in IAV-infected cells. To investigate this further, we tested IAV infection in human lung cells (A549) and in human bronchial epithelial cells (BEAS-2B) in the presence or absence of different concentrations of the PLSCR1 inhibitor R5421. These data confirmed a role of PLSCR1 in IAV infection.
TiCoNE offers a human-augmented cluster optimization strategy and allows the user to iteratively refine the clustering, automatically or visually, by applying operations such as adding, deleting, merging, or splitting clusters. Our approach can compare different phenotypes and identify differentially behaving network complexes. TiCoNE computes empirical p-values for clusters, phenotype comparisons, and network co-enrichment, based on different kinds of permutation tests, identifying clusters with significantly overrepresented time series behavior or time series coexpression behavior; for details on the permutation tests, see the Computational Methods section. We demonstrate the power of TiCoNE by processing time series gene expression data of human host lung cells infected with either RV or IAV. In this study, we use our coenrichment analysis approach to construct complexes of clusters along a timeline that are biologically meaningful and may explain the systemic unraveling of the host immune response to influenza virus infection. We find specific properties of coenrichments of IAV and RV data clusters, which may help explain the large difference in severity of the two viruses on a systems biology level. We discovered de novo groups of genes that behave consistently but show temporally different behavior under the two conditions. By integrating these candidates with the human PPI network, we discovered a complex of 50 genes, out of which 30 have been previously associated with IAV infection. Among the identified host genes not previously known to be specifically activated upon IAV infection were PLSCR1, FKBPL, and helicase with zinc finger (HELZ2). Through experimental validation we could confirm that PLSCR1 is a novel host factor acting as a negative regulator of IAV infection. These data provide proof-of-principle that a systems medicine approach analyzing the virus and host interactome provides novel and possibly more effective therapeutic approaches in line with network pharmacology, rather than focusing on the virus only. TiCoNE is available as a Cytoscape app and as a feature-reduced interactive web application at https://ticone.compbio.sdu.dk. With TiCoNE, we provide the first integrated temporal systems medicine drug target identification approach.
It extends time series expression data directly to temporal disease-specific subnetworks. It also identifies cross talk between them, and we show that it can identify novel drug targets for IAV infection. TiCoNE is publicly available in the Cytoscape app store."}
+{"text": "Recently, a number of studies have been conducted to investigate how plants respond to stress at the cellular molecular level by measuring gene expression profiles over time. As a result, a set of time-series gene expression data for the stress response is available in databases. With these data, an integrated analysis of multiple stresses is possible, which identifies stress-responsive genes with higher specificity because considering multiple stresses can capture the effect of interference between stresses. To analyze such data, a machine learning model needs to be built. In this study, we developed StressGenePred, a neural network-based machine learning method, to integrate time-series transcriptome data of multiple stress types. StressGenePred is designed to detect single-stress-specific biomarker genes by using a simple feature embedding method, a twin neural network model, and the Confident Multiple Choice Learning (CMCL) loss. The twin neural network model consists of a biomarker gene discovery model and a stress type prediction model that share the same logical layer to reduce training complexity. The CMCL loss is used to make the twin model select biomarker genes that respond specifically to a single stress. In experiments using Arabidopsis gene expression data for four major environmental stresses, namely heat, cold, salt, and drought, StressGenePred classified the types of stress more accurately than the limma feature embedding method combined with the support vector machine and random forest classification methods. In addition, StressGenePred discovered known stress-related genes with higher specificity than the Fisher method. StressGenePred is a machine learning method for identifying stress-related genes and predicting stress types for an integrated analysis of multiple stress time-series transcriptome data. This method can also be applied to other phenotype-gene association studies.
Recently, cellular molecule measurement technologies, such as microarray and RNA-seq, have produced a large amount of time-series gene expression data. However, existing methods were designed to analyze gene expression data of a single stress, not of multiple stresses. Analyzing gene expression data of multiple stresses can identify stress-responsive genes with higher specificity because it can consider the effect of interference between stresses. However, since no method of integrating multiple stress gene expression data has been developed, this study aims to develop a method for an integrated analysis of transcriptomes of multiple stress types. For the integrated analysis of transcriptome data of multiple stresses, heterogeneous time-series analysis should be considered. Many algorithms have been developed to analyze gene expression data. However, as far as we are aware, there is no readily available machine learning algorithm for predicting stress types and detecting stress-related genes from multiple heterogeneous time-series data. Support vector machine (SVM) models are known to be powerful and accurate for classification tasks, and SVMs have recently been extended for multi-class problems and for regression prediction. However, applying SVM to predicting stress-related genes and associating them with phenotypes is not simple, since the essence of the problem is to select a small number of genes relevant to a few phenotypes. In fact, there is no known readily available prediction method for this research problem. Principal component analysis (PCA) is designed for predicting traits from input data of the same structure, but it is not designed to analyze heterogeneous time-series data. Random forest (RF) is a sparse classification method, so how significantly a gene is associated with stress is hard to evaluate. The naive Bayes method can measure the significance of associations but is not readily applicable to heterogeneous time-series data. In addition, the data poses the high-dimension and low-sample-size problem, which is one of the major challenges in machine learning.
The data consists of a large number of genes and a small number of samples (fewer than about 100). To deal with the high-dimension and low-sample-size problem, our model is designed to share a core neural network between twin sub-models: 1) a biomarker gene discovery model and 2) a stress type prediction model. These two submodels perform the tasks known in the computing field as feature selection and label classification, respectively. Thus, we designed and implemented a neural network model, StressGenePred, to analyze heterogeneous time-series gene expression data of multiple stresses. Our model uses feature embedding methods to address the heterogeneous structure of the data. Multiple stress time-series gene expression data is a set of time-series gene expression data sets; the k-th time-series gene expression data, Dk, contains expression values along three axes: the gene axis, the time axis, and the experimental condition axis. Two kinds of heterogeneity arise. Heterogeneity of the time dimension: each time-series data set may have a different number of time points and intervals. Heterogeneity of the experimental condition dimension: each time-series data set may have different experimental conditions, such as tissue, temperature, genotype, etc. In this paper, we analyze multiple heterogeneous time-series data of four major environmental stresses: heat, cold, salt, and drought. We collected 138 sample time-series data sets related to the four types of stress from ArrayExpress. StressGenePred is an integrated analysis method for multiple stress time-series data. It includes the biomarker gene discovery model, which takes a set of stress labels, Y, and gene expression data, D, as input, and predicts which gene is a biomarker for each stress. This model consists of three parts: generation of an observed biomarker gene vector, generation of a predicted biomarker gene vector, and comparison of the predicted vector with the label vector.
The architecture of the biomarker gene discovery model is illustrated in the accompanying figure. The first part generates an observed biomarker gene vector, Xk, from the gene expression data of each sample k, Dk. Since each time-series data set is measured at different time points under different experimental conditions, a time-series gene expression data set must be converted into a feature vector of the same structure and the same scale. This process is called feature embedding. For the feature embedding, we symbolize the change of expression before and after stress treatment as up-, down-, or non-regulation. In detail, a time-series data set of sample k is converted into an observed biomarker gene vector of length 2n, Xk = {xk1, \u2026, xk2n}, where xk(2n\u22121) \u2208 {0,1} is 1 if gene n is down-regulated and 0 otherwise, and xk(2n) \u2208 {0,1} is 1 if gene n is up-regulated and 0 otherwise. For determining up-, down-, or non-regulation, we use fold change information. First, if there are multiple expression values measured from replicate experiments at a time point, the mean of the expression values is calculated for that time point. Then, the fold change value is computed by dividing the maximum or minimum expression value of a time-series data set by the expression value at the first time point. After that, a gene whose fold change value is >1/0.8 or <0.8 is considered an up- or down-regulated gene, respectively. The threshold value of 0.8 was selected empirically; when 0.8 is used, the fold change analysis generates at least 20 up- or down-regulated genes for every time-series data set. The second part generates a predicted biomarker gene vector from the stress label Yk; its values indicate up- or down-regulation in the same way as Xk. For example, a value of 1 at position 2n\u22121 means gene n is predicted as a down-regulated biomarker, and a value of 1 at position 2n means gene n is predicted as an up-regulated biomarker, for a specific stress Yk. A logical stress-gene correlation layer, W, measures the weights of association between genes and stress types.
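The fold-change embedding just described can be sketched as follows; the function name and the input shape (gene → list of time points, each a list of replicate values) are our illustrative choices:

```python
def embed_sample(sample, threshold=0.8):
    """Convert one sample's time series into the length-2n indicator
    vector: per gene, average replicates per time point, take the
    max/min fold change relative to the first time point, and emit
    the (down, up) indicator pair."""
    vec = []
    for gene in sorted(sample):
        means = [sum(reps) / len(reps) for reps in sample[gene]]
        base = means[0]
        up = 1 if max(means) / base > 1 / threshold else 0
        down = 1 if min(means) / base < threshold else 0
        vec.extend([down, up])  # x_{2n-1}: down-regulated, x_{2n}: up-regulated
    return vec
```

With threshold = 0.8, a gene counts as up-regulated when its maximal fold change exceeds 1.25 and as down-regulated when its minimal fold change falls below 0.8, so every gene contributes one of (0,0), (1,0), (0,1), or (1,1) to the vector.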
The predicted biomarker gene vector is computed from the stress label of sample k and the logical stress-gene correlation layer, i.e., Yk\u00d7W; in addition, we use the sigmoid function to squash the output values between 0 and 1. The stress vector, Yk, is encoded as a one-hot vector of the l stresses, where each element indicates whether sample k is of that specific stress type or not. The logical stress-gene correlation layer has a single neural network structure, and its weights are learned by minimizing the difference between the observed biomarker gene vector, Xk, and the predicted biomarker gene vector. Cross-entropy is a widely used objective function in logistic regression problems because of its robustness to data including outliers; thus, we use it as the loss between the observed and predicted vectors. By minimizing the cross-entropy loss, the logistic functions of the output prediction layer are learned to predict the true labels. Outputs of the logistic functions can predict that a given gene responds to only one stress or to multiple stresses. Although it is natural for a gene to be involved in multiple stresses, we propose a new loss term because we aim to find biomarker genes that are specific to a single stress. To control the relationships between genes and stresses, we define a new group penalty loss. For each feature weight, the penalty is calculated based on how many stresses are involved. Given a gene n, a stress vector gn = (gn1, \u2026, gnl) is defined over the l stresses, normalized by its maximum element so that each gnl has a value between 0 and 1. In other words, if gn is specific to a single stress, the group penalty will be 1; however, if gene n reacts to multiple stresses, the penalty value increases quickly. Using these characteristics, the group penalty loss is defined accordingly, with a hyper-parameter \u03b1 regulating the effect of the group penalty terms.
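Under the reading above (per-gene stress weights normalized by their maximum, so the row sum is 1 for a perfectly stress-specific gene and grows as more stresses respond), the group penalty can be sketched as below. This is our reconstruction of the penalty's form, not the authors' exact definition:

```python
def group_penalty(weights, alpha=0.06):
    """Group penalty sketch: for each gene (one row of per-stress
    weights), normalize by the row maximum and sum; the row contributes
    1 iff the gene is specific to a single stress, and more otherwise.
    alpha scales the total penalty, as in the text."""
    total = 0.0
    for row in weights:
        m = max(row)
        if m > 0:
            total += sum(w / m for w in row)  # == 1 iff single-stress specific
    return alpha * total
```

For instance, a perfectly specific gene with weights (0, 0, 1) contributes α·1, while a gene responding equally to three stresses with weights (0.5, 0.5, 0.5) contributes α·3, which is exactly the behavior the loss is designed to penalize.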
If \u03b1 is too large, the group penalty is excessive, and even genes that respond to multiple stresses are linked to only a single stress. On the other hand, if the \u03b1 value is too small, most genes respond to multiple stresses. To balance this trade-off, we used well-known stress-related genes and chose \u03b1 so that our model ranks those genes within the top 500 biomarker genes for each stress. Therefore, in our experiments, \u03b1 was set to 0.06; the reference genes are introduced in the \u201cRanks of biomarker genes and the group effect for gene selection\u201d section. To build the stress type prediction model from feature vectors, we utilize the transposed logical layer WT and define a probability model on top of it. From the biomarker gene discovery model, the relationships between stresses and genes are obtained through the stress-gene correlation layer W, which is calculated during the training of the biomarker gene discovery model. Ak denotes the activation value vector of stress types; it shows very large deviations depending on the sample, so normalization is required and performed as described below. The vectors a and b are general vector parameters of size L of the logistic model g(x). For the logistic filter, these normalized embedded feature vectors encapsulate average stress-feature relationship weights, which reduces the variance among vectors from different samples. As another effect of the normalization, absolute average weights are considered rather than a relative indicator such as softmax, so the false positive rate of predicted stress labels can be reduced. Learning of this logistic filter layer starts with normalization of the logistic filter outputs, which facilitates learning by regularizing the mean of the vectors. 
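The normalization of the stress activation vector Ak followed by an elementwise logistic model g(x) with vector parameters a and b might look like the sketch below. This is our illustration, not the published implementation; the z-score normalization and the placeholder parameter values are assumptions.

```python
import math

def normalize(activations):
    """Mean-center and scale an activation vector A_k (z-score), so samples
    with very different magnitudes become comparable."""
    n = len(activations)
    mean = sum(activations) / n
    var = sum((v - mean) ** 2 for v in activations) / n
    std = math.sqrt(var) or 1.0  # guard against a constant vector
    return [(v - mean) / std for v in activations]

def logistic_filter(activations, a, b):
    """Elementwise logistic model g(x_l) = sigmoid(a_l * x_l + b_l) applied to
    the normalized activations; a and b are vector parameters of size L."""
    z = normalize(activations)
    return [1.0 / (1.0 + math.exp(-(ai * zi + bi)))
            for zi, ai, bi in zip(z, a, b)]

# Four stress types; the dominant activation stands out after filtering.
probs = logistic_filter([12.0, 1.0, 0.5, 0.2], a=[2.0] * 4, b=[0.0] * 4)
print([round(p, 2) for p in probs])
```

Because the filter works on normalized absolute values rather than a softmax over the vector, a sample with several weak activations yields several low probabilities instead of one artificially inflated winner.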
Then, to minimize the loss for positive labels and the entropy for negative labels, we adopted the Confident Multiple Choice Learning (CMCL) loss function for our model. To avoid overfitting, the pseudo-parameter \u03b2 is set following the recommended setting from the original CMCL paper; in our experiments, \u03b2=0.01\u22481/108 is utilized. In this paper, two types of experiments were conducted to evaluate the performance of StressGenePred. First, StressGenePred was evaluated for the task of stress type prediction. The total time-series dataset (138 samples) was divided randomly 20 times to build a training dataset (108 samples) and a test dataset (30 samples). For the training and test datasets, a combination analysis was performed between two feature embedding methods (fold change and limma) and three classification methods. The accuracy measurement of the stress type prediction was repeated 20 times, and the results are summarized in Table\u00a0. Then, we further investigated the cases in which our stress type prediction model predicted incorrectly. We divided the total dataset into a training dataset of 87 samples and a test dataset of 51 samples (28 cold stress and 23 heat stress samples). We then trained our model on the training dataset and predicted stress types for the test dataset (Figure\u00a0). The second experiment tested how accurately biomarker genes can be predicted. Our method was compared with Fisher\u2019s method, and the difference was statistically significant (p=0.0019 for the Wilcoxon Signed-Rank test). The p-value of Fisher\u2019s method was calculated using the limma tool for each gene and each stress type; the genes were then sorted according to their p-value scores so that the most responsive genes came first. Then, we collected known stress-responsive genes of each stress type through a literature search, investigated the EST profiles of these genes, and obtained 44 known biomarker genes with high EST profiles. We compared the ranking results of our method and Fisher\u2019s method against these known biomarker genes. 
The comparison results are shown in Table\u00a0. Our method is designed to exclude, whenever possible, genes that respond to more than one stress and to detect genes that respond to only one type of stress. To investigate how this works, we collected genes known to respond to more than one stress, excluding those that ranked too low for all stress cases. When comparing the results of our method to Fisher\u2019s method for these genes, 13 of 21 genes ranked lower in the result of our method than in Fisher\u2019s method. We also found several genes of unknown function encoded in the mitochondrial genome that were affected only by heat stress; some genes in mitochondria may be involved in the initial transcriptional response under heat stress. In the case of salt and drought stress, we predicted two TF genes, HD-ZIP and NAC (ANAC019: AT1G5289), which are associated with both stresses. These two genes are likely to respond early to water-related stress. The NAC domain TF is prominent in salt stress, but not in drought stress. We also observed SAURs (small auxin up-regulated RNAs) in drought stress, which are actively involved in plant physiological regulation during long-term water deficiency. In this study, we selected four different types of stress to find and classify the affected genes. The effects of these environmental stresses are broad and do not delimit specific parts of metabolism or their physiological consequences. The characteristics of the four stresses we studied have in common the physiological responses associated with water: although plants react differently depending on the signaling pathways of each stress, the responses are not completely separable because of the commonalities associated with water use. Many of the biomarker genes we found have been shown to respond to multiple stresses, and plants transformed with mutant or recombinant versions of these genes have shown a variety of phenotypes under different stresses. 
The APX gene responds to all four stresses, and other genes such as AREB, AtRIP, DREB, Gols and MAPs are well known to respond to multiple stresses. In this study, the genes involved in the specific stresses we predicted were either identical across other stresses or related to multiple complex stresses. This study presented StressGenePred, a method for analyzing a set of time-series transcriptome data covering multiple types of stress. StressGenePred consists of twin classification models designed to achieve two analytic goals. The biomarker gene discovery model aims to discover genes that respond to specific stresses, while the goal of the stress type prediction model is to classify samples into the four types of stress: heat, cold, drought, and salt. The key problem in this study is to train the StressGenePred model from high-dimension, low-sample-size data (138 samples in this study). Analysis of high-dimension, low-sample-size data is a difficult computational problem that many researchers are studying. In order to be trainable with a small amount of data, StressGenePred is designed to use a simplified architecture with a small number of parameters, and its twin classification models share the same logical layer and its parameters. In the twin classification models, the logical layer is used symmetrically with respect to input and output: the input and output of the biomarker gene discovery model are stress and genes, respectively, and vice versa for the stress type prediction model. Because the logical layer is shared by both classification models, its parameters are trained in both models, reducing the amount of data required. In experiments using Arabidopsis stress gene expression data, StressGenePred detected known stress-related genes at higher ranks than Fisher\u2019s method. 
StressGenePred showed better performance than random forest and support vector machine in stress type prediction."}
+{"text": "Studies conducted in time series can be far more informative than those that only capture a specific moment in time. However, when it comes to transcriptomic data, time points are sparse, creating the need for a constant search for methods capable of extracting information out of experiments of this kind. We propose a feature selection algorithm embedded in a hidden Markov model applied to gene expression time course data on either single or multiple biological conditions. For the latter, in a simple case-control study, features or genes are selected under the assumption of no change over time for the control samples, while the case group must have at least one change. The proposed model reduces the feature space according to a two-state hidden Markov model, whose two states define change/no-change in gene expression. Features are ranked according to three scores: the number of changes across time, the magnitude of such changes, and the quality of replicates as a measure of how much they deviate from the mean. An important highlight is that this strategy overcomes the few-samples limitation, common in transcriptome experiments, through a process of data transformation and rearrangement. To validate this method, our strategy was applied to three publicly available data sets. Results show that the feature domain is reduced by up to 90%, leaving only a few but relevant features, with findings consistent with those previously reported. Moreover, our strategy proved to be robust, stable and effective in studies where sample size would otherwise be an issue: even with two biological replicates and/or three time points our method proves to work well. High-dimensional transcriptome data is described by many features; however, many of them are either redundant or irrelevant, and identifying those is key to ensure that results are trustworthy. 
Data mining techniques, machine learning algorithms or statistical models are applied to classify features, but at the cost of other important problems such as model over-fitting or an increase in computational resources and analysis cost [2]. Feature selection (FS) is a technique often used in domains where there are many features and comparatively few samples; it is particularly used in microarray or RNAseq studies, where there are thousands of features and a small number of samples [6]. Due to the high dimensionality of most gene expression analyses, it is necessary to select the most relevant features to get a better interpretation of the results and a deeper insight into the underlying process that generated the data. However, noisy data and small sample sizes pose a great challenge for many modelling problems in bioinformatics, making it necessary to use adequate evaluation criteria and stable, robust FS models. In general, FS techniques can be classified into three main categories: filters, wrappers, and embedded methods [8]. The variety of FS techniques that have been proposed can also be classified into parametric and non-parametric approaches. To validate our strategy, three public data sets were used, including a toxicogenomics (TGP) study of CCl4-induced toxicity. In summary, for all three cases the feature space was reduced and almost 90% of the variables were filtered out. The FSHMM strategy outputs a subset of relevant features that is small compared to the original set, yet it proved to be informative even in situations where the number of replicates is as low as two and the time series involves only 3 time points. The three datasets used to validate our method were chosen because they cover both microarray and RNAseq data, and because they offered the opportunity to evaluate its efficiency at a pathway level, as was the case for the Ikaros data, at a gene-to-gene level, as we did for the high-fat diet study, or against common knowledge after a literature review, for the toxicogenomics data. 
In all three cases, the strategy efficiently reduced the number of relevant features, simplified the analysis, maintained the time-series nature of the studies and provided insight into the system dynamics. Features were selected according to three scores. The first evaluates gene perturbation over time, giving more value to genes that fluctuate more; the second evaluates the magnitude of those perturbations; and the third evaluates statistical significance by means of consistency among replicates. The parameters of the proposed HMM estimated from all three data sets had the same mean vector in their emitted normal distributions. This happens because the HMM models the magnitude of the change in expression instead of the actual expression level. This fact agrees with the hypothesis that only a small set of genes change, and, according to our assumptions, only those changing in the condition group with no changes among the control set are considered. Therefore, the distributions are centered around zero, and this further reduces the number of parameters to estimate from data. It is important to highlight that no hard cut-off is defined within any step of the proposed strategy. Instead, the state of change is defined based on emission and transition probabilities after the Viterbi algorithm. 
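The three ranking scores can be sketched for a single gene, given a decoded change/no-change state path, the observed expression changes, and the replicate matrix. The formulas below are our illustrative choices (change count, L2 norm of the changes in "change" states, and an inverse of the mean replicate deviation), not the paper's exact definitions.

```python
import math

def gene_scores(states, deltas, replicates):
    """states:     decoded path, 1 = change, 0 = no change, per transition
    deltas:     mean expression change at each transition
    replicates: per-time-point lists of replicate measurements
    Returns (n_changes, magnitude, replicate_quality); higher is better for
    all three, mirroring the three scores described in the text."""
    n_changes = sum(states)
    # Magnitude: one large change is enough to yield a good score.
    magnitude = math.sqrt(sum(d * d for s, d in zip(states, deltas) if s))
    # Replicate quality: penalize deviation of replicates from their mean.
    devs = []
    for reps in replicates:
        mean = sum(reps) / len(reps)
        devs.append(math.sqrt(sum((r - mean) ** 2 for r in reps) / len(reps)))
    quality = 1.0 / (1.0 + sum(devs) / len(devs))
    return n_changes, magnitude, quality

scores = gene_scores(states=[1, 0, 1],
                     deltas=[2.0, 0.1, -1.5],
                     replicates=[[5.0, 5.1], [7.0, 7.2], [7.1, 6.9], [5.6, 5.5]])
print(scores)
```

Genes would then be ranked by combining the three scores, e.g. lexicographically or as a weighted sum.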
Therefore, the state variance becomes an important parameter, as it determines the change/no-change path each observation transits, and it is estimated from the data. Furthermore, the independence assumption on features reduces the number of parameters to estimate, allowing robust modelling even with a limited number of observations. Parameters estimated from data using the FSHMM strategy were compared to the models presented in [12]; however, the multivariate normal distributions we propose let the user provide all the replicates, with their inherent variability, to model the system dynamics. Although similar methods for time series data using either FS or HMM have been proposed, the embedded approach we present in this manuscript is novel, as it combines in a single model several ideas that had previously been applied separately. For instance, feature selection in time series using Granger causality, with the aim of prediction, was published by Li et al in 2015. In terms of accuracy of detection, the variation in the data may simply be due to random error, particularly because almost all time series transcriptomic experiments have low replication and few time measurements. Hence, following the principle of parsimony, we present the simplest model capable of describing the major features of the data. Another important aspect of our strategy is its capability to work with multiple conditions; other methods, for instance the recently proposed DEclust, can also analyze multiple conditions. Also, it is essential to highlight the role of data preprocessing, and even more so of the rearrangement of the input data. For our FSHMM strategy, each gene represents a different observation sequence in the training matrix. Therefore, even if the sample size is low, the model parameters can be estimated from data. Moreover, with the feature randomization, overfitting is less probable. 
We also analyzed the idea of adding a third hidden state to model up-regulation, down-regulation, and no-change. Results showed it was not worth adding complexity by increasing the dimensionality of the hidden state vector; keeping the model as simple as possible allows us to handle studies with a minimal number of data points. The model feeds on changes in expression instead of the expression levels themselves, yet we can still model the sign of the state that emitted each observation. Thus, the two-hidden-state transition graph is capable of modelling the desired dynamics without increasing the model complexity. The Feature Selection with a hidden Markov model (FSHMM) strategy starts with an already normalized gene expression matrix, a vector of time points, the biological conditions, the number of replicates and, if necessary, a set of parameters to customize the model estimation. Data is rearranged into matrices, one per condition. Then, it is necessary to remove the offset value of each feature by computing the differences in gene expression between consecutive time points. Therefore, instead of the expression level of each feature, the model receives the change in expression between two consecutive time points. The decoding function is used to obtain the hidden states X1:T traversed by the Markov chain. The most commonly used algorithms to decode the states of a hidden Markov model are posterior decoding and the Viterbi algorithm [33]. The Expectation-Maximization (EM) algorithm is applied when there is missing data, or when an optimization problem does not have an analytical solution but can be simplified by assuming the existence of hidden variables. The objective of the EM algorithm is to maximize the complete-data log-likelihood in a two-step procedure. 
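The Viterbi decoding step for a two-state change/no-change model with zero-mean Gaussian emissions can be sketched as below. This is our log-space reconstruction under assumed illustrative parameters (a small variance for "no change", a large one for "change"), not the published implementation.

```python
import math

def log_norm(x, mu, sigma):
    """Log-density of a univariate normal distribution."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def viterbi(obs, means, sds, log_trans, log_init):
    """Most likely state path for 1-D observations under a 2-state HMM.
    State 0 = no change (small variance), state 1 = change (large variance);
    both emission means are zero, since the model works on expression changes."""
    n_states = len(means)
    delta = [[log_init[s] + log_norm(obs[0], means[s], sds[s]) for s in range(n_states)]]
    back = []
    for x in obs[1:]:
        row, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states), key=lambda p: delta[-1][p] + log_trans[p][s])
            ptr.append(best_prev)
            row.append(delta[-1][best_prev] + log_trans[best_prev][s]
                       + log_norm(x, means[s], sds[s]))
        delta.append(row)
        back.append(ptr)
    path = [max(range(n_states), key=lambda s: delta[-1][s])]
    for ptr in reversed(back):     # follow back-pointers from the last time point
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Illustrative parameters: large jumps are decoded as "change" states.
lt = [[math.log(0.9), math.log(0.1)], [math.log(0.5), math.log(0.5)]]
li = [math.log(0.5), math.log(0.5)]
print(viterbi([0.05, 2.4, -0.1], means=[0.0, 0.0], sds=[0.2, 2.0],
              log_trans=lt, log_init=li))  # [0, 1, 0]
```

The decoded path is what the ranking scores are computed from: each position flagged 1 marks a transition where the gene's expression changed.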
In the first step, it computes the expected value of this function to fill in the missing data, and in the second step, it maximizes the model parameters given the completed data set. The process is iterated until the convergence criteria are met. The proposed strategy is divided into three stages. The first stage in the pipeline is the data rearrangement and transformation: each condition is organized in a 3D matrix, in which each feature g is represented as a matrix with the time points as columns and the replicates as rows. Then, each feature's expression values are transformed into expression changes by subtracting consecutive time points, reducing the series length from T to T \u2212 1. For the next stage, which estimates the model parameters, the order of the features may bias the estimation; therefore, a randomization step is performed to shuffle them. This process is based on the preprocessing of the sub-sampling cross-validation approach to avoid overfitting. Finally, features are ranked according to three scores: Number of changes across time (#\u0394g), which represents the number of changes that occurred in the time series and, in a multiple-condition experiment, also considers the feature in each condition (Z); the greater the number of changes decoded by the Viterbi algorithm (X = C), the better the score. Magnitude of change (\u2225\u0394i\u2225), which represents how much each relevant variable changes; even if a feature changes only once in time, a very large change yields a good score. Quality of replicates (scoreRi), which represents the variability between biological replicates; the greater the difference, the lower the value of this score.S1 Table. Excel book with the enriched GO terms found with FSHMM. The first sheet contains the GO terms in common with the Ferreiros et al work, and the second sheet has those found exclusively with the FSHMM strategy.(XLSX)S2 Table. Table with the gene symbols of all relevant features found in the GSE75417 RNAseq data with the FSHMM strategy.(XLSX)S3 Table. Table with the over-represented GO terms using the relevant features found in the GSE75417 RNAseq data and DAVID. The table has the GO term, its description, the p-value, adjusted p-value, q-value and its relevant genes.(XLSX)S4 Table. Table with the over-represented GO terms using the relevant features found in the GSE39549 microarray data and DAVID. The table has the GO term, its description, the p-value, adjusted p-value, q-value and its relevant genes. The rows highlighted in yellow are related to immune response.(XLSX)S5 Table. Table with the gene symbols of all relevant features found in the TGP data with the FSHMM strategy.(XLSX)S6 Table. Table with the over-represented GO terms using the relevant features found in the TGP CCl4 microarray data and DAVID. The table has the GO term, its description, the p-value, adjusted p-value, q-value and its relevant genes.(XLSX)S7 Table. Table with the over-represented KEGG pathways using the relevant features found in the TGP CCl4 microarray data and KEGG. The table has the KEGG pathway, its description, the p-value, adjusted p-value, q-value and its relevant genes.(XLSX)"}
+{"text": "Throughout the HIV-1 replication cycle, complex host-pathogen interactions take place in the infected cell, leading to the production of new virions. The virus modulates the host cellular machinery in order to support its life cycle, while counteracting intracellular defense mechanisms. We investigated the dynamic host response to HIV-1 infection by systematically measuring transcriptomic, proteomic, and phosphoproteomic expression changes in infected and uninfected SupT1 CD4+ T cells at five time points of the viral replication process. By means of a Gaussian mixed-effects model implemented in the new R/Bioconductor package TMixClust, we clustered host genes based on their temporal expression patterns. We identified a proteo-transcriptomic gene expression signature of 388 host genes specific for HIV-1 replication. Comprehensive functional analyses of these genes confirmed the previously described roles of some of the genes and revealed novel key virus-host interactions affecting multiple molecular processes within the host cell, including signal transduction, metabolism, cell cycle, and immune system. The results of our analysis are accessible through a freely available, dedicated and user-friendly R/Shiny application, called PEACHi2.0. This resource constitutes a catalogue of dynamic host responses to HIV-1 infection that provides a basis for a more comprehensive understanding of virus-host interactions. Conversely, the host cell defense system tries to counteract infection through innate immune cellular responses, attempting to block different stages of the replication cycle. The host genes involved in these defense responses are called HIV inhibitory factors (HIFs) and include virus-specific restriction factors. Upon cellular invasion of the host T-cell, the success of HIV-1 infection depends on numerous virus-host interactions. 
During the roughly 24-hour-long replication cycle, HIV-1 enters the host cell, integrates its genome, and utilizes the host cellular machinery in order to produce new virions. The host genes that are utilized by the virus to support its lifecycle are called HIV dependency factors (HDFs); they have been identified through transcriptomic analyses, proteomic assays, and functional screens. Most of these analyses were focused on one type of omics data, either transcriptomics or proteomics, at a single time point, and only a few studies investigated HIV-host interactions with a temporal resolution. In order to gain a more comprehensive understanding of virus-host interactions over time, we explored virus-induced cellular reprogramming at multiple molecular levels and time points. A series of high-resolution, genome-wide measurements of the transcriptome, proteome, and phosphoproteome were conducted in uninfected and infected SupT1 CD4+ T cells in order to define the dynamic, integrated proteo-transcriptomic response of the cell to infection with an HIVeGFP vector and to understand the key molecular players maintaining a balance between host support for viral replication and the host defense response. Viral intermediates - reverse transcription products, integrated viral DNA, viral translation (GFP expression), and viral particle release (p24 release) - were evaluated every 2\u2009h during 24\u2009h to ascertain successful infection and follow HIV replication cycle advancement, consistent with HIV life cycle progression. In total, 568 genes had measurements at all three levels (Fig.\u00a0). Differential expression analysis between Mock and HIVeGFP-infected conditions was performed initially regardless of the time point (cf. 
Methods) and resulted in 1506 differentially expressed genes, 415 differentially expressed proteins, and 157 differentially expressed phosphorylation sites, corresponding to 125 proteins with a differential phosphorylation status. The results of the clustering analysis define the proteo-transcriptomic expression response patterns of host genes to HIV-1 infection. These patterns and their underlying data are available in an R/Shiny application providing a user-friendly querying platform, coined PEACHi2, where users can retrieve the corresponding genes, inspect the behavior of custom gene lists, and download the associated data; the application also includes a curated library of gene sets associated with HIV-related processes. While upregulated proteins were enriched in pathways related to class I and II presentation, TOLL-receptor cascades and cytokine signaling, downregulated proteins were enriched in pathways containing four innate immune genes as well as genes involved in CD28 family co-stimulation, CTLA-4 inhibitory signaling, TCR signaling, and translocation of ZAP70 to the immunological synapse. The ensemble of 157 identified differentially phosphorylated proteins in our study can be retrieved and visualized on the PEACHi2 platform and constitutes a useful resource for analyzing the regulation of post-translational modifications during HIV infection. However, it remains to be determined how HIV triggers up- or down-phosphorylation of these proteins, as well as the roles of these phosphorylated proteins within the viral replication cycle. Recent developments and the increasing resolution of proteomic technologies have led to new insights in characterizing virus-host interactions at the proteome and phosphoproteome level. 
In the present work, we employed both transcriptome and proteome high-throughput screening and reported host and virus transcriptomic, proteomic, and phosphoproteomic abundances in Mock- and HIVeGFP-infected T cells at consecutive time points of the viral replication cycle. Successive additional measurements of several viral intermediates tracing HIV replication cycle advancement correlated with the dynamic quantification of the viral transcriptome and proteome and delineated the time course and the main stages of the replication cycle. On the cellular host side, by clustering time-series expression profiles at the three levels, we were able to identify host genes involved in HIV replication, based on their protein upregulation or downregulation expression patterns. The identified temporal patterns have been compiled into an interactive resource, PEACHi2, describing the proteo-transcriptomic responses of a host immune cell to HIV infection. Most studies investigating on a large scale how HIV influences the host cell during infection have so far considered the cellular transcriptome at a single time point, with little attention to protein and phosphoprotein changes. Although very useful, these analyses do not capture virus-specific post-transcriptional cellular regulation and cannot identify the influence of the virus on the host cell at the protein level. Integrating proteo-transcriptomic gene expression dynamic profiles revealed 388 host genes with putative roles in HIV replication, corresponding to 192 upregulated host proteins and 196 downregulated host proteins. Over 33.5% of the identified factors have been previously reported, in support of our findings. 
Of note, gene expression changes induced by HIVeGFP could be attributed either to HIV or to GFP expression, as it was not possible to dissociate the impact of GFP in the current experimental setting.\u00a0While our study recapitulated well-known virus-host interaction factors, such as CD4, CDK9, NFKB1, or UNG, it also revealed numerous novel factors which have not been described before. These new candidate factors are involved in a wide range of molecular processes, such as signaling, immune response, cell cycle, gene expression, or metabolism. The majority of these factors were not found differentially expressed at the RNA level. This may indicate, on one hand, direct induction or inhibition by HIV at the protein level, arguing for a multi-omics approach. On the other hand, it is possible that small changes in RNA expression for the majority of these genes, which were not considered significant by our analysis, may trigger a stronger response at the protein level. Furthermore, noise levels, technical biases across omics technologies, as well as the limit of detection at the protein level, may contribute to this effect. In contrast, HIV-mediated modulation of genes at all three omics levels - such as for the four genes identified here, LIMA1, HIST1H1B, TFAP4, and UNG - may suggest a critical role of these genes in the HIV life-cycle. A quick, strong, and long-term response needed for these particular genes throughout the viral life-cycle could justify the need of such a multi-targeted strategy. A large number of phosphoproteins could not be assessed due to lack of quantification at the protein level. Alternatives to SILAC, such as tandem mass tagging (TMT), could be attractive as they may yield better coverage and especially fewer missing data points in time series. 
Also, combining quantifications from different proteomic techniques may increase the number of genes detected at the proteome and phosphoproteome level, and lead to more rigorous quantifications. SILAC proteomic measurements presented inferior detection compared to transcriptome quantification. Only 26.5% of the genes detected at the RNA level were also measured successfully at the protein level, and only 4% were quantified at all three omics levels. Furthermore, the phosphoproteomic measurements contained a large amount of missing values and noise, and consequently could not be used for identifying proteo-transcriptomic expression signatures of infection; in addition, phosphoprotein measurements needed to be adjusted by their protein expression in order to remove an existing abundance bias in SILAC data. Our data consists of short time series (5 sampled time points per gene), which allows detection of global up- and downregulation patterns in response to infection. However, due to the relatively small number of time points and increased variability, our data is not suited for a more fine-grained stratification of expression patterns, capable, for example, of distinguishing early from late differential regulation. A larger number of equally distant time points, such as the design used in [18], would be necessary to capture these changes robustly, as they would then be supported by more than one consecutive time point. Time series expression profiling of the host cell during HIV-1 infection allows describing how host gene expression is modulated through time in response to viral infection and replication. Specific temporal transcriptomic expression patterns associated with different stages of the replication cycle have been previously described with high-resolution time series analyses. HIV can impact the host cell at multiple levels. To alter the expression of genes at different omics levels, the virus disposes of a multitude of strategies. 
For example, it can affect transcription rates of genes, target RNA or protein degradation directly, or modulate the activity of enzymes involved in post-transcriptional and post-translational regulation. Through a multi-omics time series analysis of host T cell gene expression, our work broadens and refines the landscape of HIV-host interactions. The different temporal expression patterns of the genes may reflect the diverse strategies by which HIV modulates host cell content. While offering a more detailed view of the host response to HIV infection, the presented analysis constitutes an initial step towards understanding the corresponding regulatory mechanisms. The identified host candidates require further validation and more targeted functional analyses in order to understand their precise roles and interactions with HIV. PEACHi2 offers a reliable resource for investigating and selecting candidates for follow-up analyses. Finally, studying how interfering with these interactions affects HIV replication success may provide new insights for developing novel treatment strategies. SupT1 cells (a lymphoblastic T cell line) were cultured in RPMI 1640 medium with 10% (v/v) heat-inactivated fetal bovine serum (FBS) (Invitrogen). Isotopically labeled Arginine and Lysine (Andover, MA) were included in the heavy (H) SILAC medium at 100\u2009mg/l, while normal Arginine and Lysine were used in the light (L) SILAC medium. Heavy or light SILAC labeling was achieved by culturing the cells in the two media (H and L) for a minimum of 2 weeks to allow for at least 5 cell divisions. H-labeled cells were Mock infected, while the L-labeled cells were infected with an HIVeGFP/VSV-G virus at 3\u2009\u03bcg/10^6 cells. Infection (both Mock and with the HIVeGFP vector) was carried out by spinoculation for 30\u2009min at 1500\u2009g in the presence of 5\u2009\u03bcg/ml polybrene. As previously described, this allowed reaching a quasi-universal infection. Cells were then washed and further incubated. 
The HIVeGFP viral vector used expresses GFP instead of the viral protein env. At multiple time points post-infection, cells were collected and processed for analysis of HIV life cycle progression (normalized by the 24 h time point as in18), as well as for transcriptome, proteome and phosphoproteome analyses as detailed below; transcriptome samples were subjected to high-throughput sequencing performed on the Illumina HiSeq2000 Sequencer at the Genomics Technology Facility of the University of Lausanne. A replicate experiment was performed for the 24 h time point for both HIV and Mock conditions. Lysis was performed in the presence of Phos-stop phosphatase inhibitors (Roche). Clarified heavy and light extracts were quantitated and mixed 1:1. Proteins were reduced by incubation with 5 mM DTT and alkylated with 20 mM iodoacetamide, then precipitated with trichloroacetic acid/deoxycholate. Pellets were resuspended in 8 M Urea, 50 mM Ammonium bicarbonate by sonication and digested with sequencing-grade trypsin (1:50) overnight at 37 °C. Digests were desalted on Sep-Pak C18 cartridges and lyophilized. For total proteome analysis, digests were separated into 12 subfractions by peptide isoelectric focusing as previously described50. For phosphopeptide enrichment (i.e. peptides with phosphorylation sites as a surrogate for phosphorylation status), aliquots of 1.0 mg of unfractionated digests were dissolved in loading buffer and incubated with 6 mg of titanium dioxide beads for 10 min. The resin was washed 3x with 500 μl loading buffer and 2x with 80% acetonitrile, 0.1% TFA. Phosphopeptides were eluted with 100 μl of 1% ammonium hydroxide and the eluate was immediately acidified with 1% TFA; a second elution was performed and analyzed separately. Peptides were desalted on POROS Oligo R3 beads, dried and analyzed by LC-MS/MS. Dried peptides were resuspended in 0.1% formic acid, 2% (v/v) acetonitrile.
Samples were analyzed on a hybrid linear trap LTQ-Orbitrap Velos Pro mass spectrometer interfaced via a nanospray source to a Dionex RSLC 3000 nanoHPLC system. Peptides were separated on a reversed-phase Acclaim Pepmap nanocolumn with a gradient from 5 to 45% acetonitrile in 0.1% formic acid over 120 min. Full MS survey scans were performed at 60,000 resolution. All survey scans were internally calibrated using the 445.1200 background ion mass. In data-dependent acquisition controlled by Xcalibur 2.1 software (Thermo Fisher), the twenty most intense multiply charged precursor ions detected in the full MS survey scan were selected for Collision-Induced Dissociation (CID) fragmentation in the LTQ linear trap with an isolation window of 3.0 m/z in multi-stage activation mode, and then dynamically excluded from further selection for 120 s. At 2 h, 6 h, 12 h, 18 h and 24 h post Mock and HIV infection, cells were lysed by pulse sonication in 8 M Urea, 50 mM Tris pH 7.5 and Phos-stop51. Reads were then filtered using Prinseq-lite52, which removed their poly-A and low-quality boundaries and kept for further analysis only the reads longer than 30 nucleotides and with a mean quality score larger than 20. The FastQC software53 was used for quality assessment of the datasets. Sequencing reads were aligned using STAR v2.554 to the GRCh38 Human Genome Assembly concatenated to the HIVeGFP/VSV-G HIV genome sequence. After filtering, the gene read counts were smoothed by adding a pseudo-count of 10 reads. Afterwards, log2(HIV/Mock) fold changes were computed for each gene at each time point. For mass spectrometry searches, a separate database containing all sequences of HIV proteins was constructed and used in parallel. Cleavage specificity was Trypsin with two missed cleavages. Initial mass tolerances were 4.5 parts per million (ppm) for the precursor and 0.5 Da for CID tandem mass spectra.
Protein and phosphopeptide identifications were filtered at 1% false discovery rate (FDR) established by MaxQuant against a reversed sequence database. Common contaminants and hits against reverse sequences were filtered out. Light/Heavy (L/H) ratios were normalized and the log2(HIV/Mock) fold changes at all time points were calculated. Only proteins with measurements supported by more than 2 peptides at a minimum of two consecutive time points were considered for further analysis. As suggested in49, the phosphoprotein ratios were normalized by their corresponding protein ratios in order to remove the bias in the phosphoprotein measurements introduced by the protein relative abundance; therefore, only phosphoproteins with available protein measurements were considered for the analysis. Principal component analysis was used to explore the profiles of log2(HIV/Mock) ratios at protein and phosphoprotein levels over time. Mass spectrometry data from protein and phosphoprotein experiments were analyzed and quantified using MaxQuant v1.3.0.5 (2013 release). The human subset of release 2013_07 of the UniProtKB database was used. Normalized relative production of total HIV protein expression was calculated using intensity-based absolute quantification (iBAQ); the fractions at each time point were further adjusted by the last time point, resulting in relative production of total HIV protein over time. A z-score approach59 was employed for differential expression analysis. Accordingly, a z-score threshold of 2 standard deviations was applied to the distribution of gene log2(HIV/Mock) fold changes at each level and time point.
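The z-score thresholding just described might be sketched as follows; the fold-change matrix and gene counts are synthetic, and the per-time-point statistics follow the text:

```python
import numpy as np

def differentially_expressed(log2_fc, z_thresh=2.0):
    """Flag genes whose log2(HIV/Mock) fold change lies more than z_thresh
    standard deviations from the mean at any time point (per-time-point
    statistics, as described in the text). log2_fc: (n_genes, n_timepoints)."""
    mu = log2_fc.mean(axis=0)             # mean fold change at each time point
    sd = log2_fc.std(axis=0, ddof=1)      # standard deviation at each time point
    z = (log2_fc - mu) / sd               # z-score of every gene at every time point
    return (np.abs(z) > z_thresh).any(axis=1)

# toy matrix: 50 flat genes plus one strongly shifted gene (hypothetical values)
rng = np.random.default_rng(0)
fc = np.vstack([rng.normal(0.0, 0.1, (50, 5)), np.full((1, 5), 3.0)])
flags = differentially_expressed(fc)  # only the last gene is flagged
```

Because the mean and standard deviation are recomputed at every time point, a gene needs only one time point with an extreme fold change to be flagged, which matches the "at least at one time point" criterion.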
A gene was considered differentially expressed between Mock and HIV-infected conditions if, at least at one time point, its log2(HIV/Mock) fold change was at least two standard deviations beyond the mean of the fold changes at the corresponding time point, i.e., had an absolute z-score greater than 2 at least at one time point. Based on their log2(HIV/Mock) time series profiles, differentially expressed genes were clustered using a Gaussian mixed-effects model as introduced in60 (see the Supplementary Information for a detailed description of the model). The clustering model was implemented in the R/Bioconductor package TMixClust and is available at https://bioconductor.org/packages/release/bioc/html/TMixClust.html. Of note, we could not assess the variability of individual time point measurements due to the limited number of replicates. However, since measurements at consecutive time points are considered to be correlated, an increased number of time points is usually preferred over the availability of replicates61. Our statistical model is designed to capture smoothness of expression between time points; time series with highly variable behavior between consecutive time points would yield an unstable clustering result and would consequently be discarded. We repeated the clustering procedure 50 times in order to avoid local optima and to identify the clustering configuration with the highest likelihood. We employed the distribution of the Rand index62 for the 50 separate clustering runs, quantifying the agreement between the different clustering solutions and the solution with the highest likelihood, in order to assess the stability of the clustering. After stability analysis for different numbers of clusters K = 1, …, 5, the distribution of the silhouette coefficient (or silhouette width)63 for each number of clusters was used to select the number of clusters that best fits the data. More precisely, the number of clusters that resulted in a clustering configuration with the largest average silhouette width was chosen. The PEACHi2 resource is available at https://peachi2.shinyapps.io/peachi2/. Clustering analysis with TMixClust was applied to the time series observations of the differentially expressed genes at each data level separately. We characterized the overall temporal gene expression pattern of each resulting cluster per data type by comparing, for the ensemble of genes in each cluster, the distribution of their HIV/Mock expression throughout time to the baseline. Upregulated behavior was defined as an average temporal expression profile of the genes in a cluster that was increasing and above the baseline, while downregulated behavior corresponded to an average temporal expression profile that was decreasing and below the baseline. Enrichment analysis was performed using the Reactome pathways; the hierarchical structure of the Reactome pathways21 was used to assign enriched gene sets to major functional categories and to construct an enrichment summary."}
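The silhouette-based choice of the number of clusters described above can be sketched as follows; KMeans is used only as a stand-in for the TMixClust Gaussian mixed-effects model, and the time-series profiles are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_k_by_silhouette(X, k_values=(2, 3, 4, 5), seed=0):
    """Return the number of clusters with the largest average silhouette
    width, mirroring the model-selection step in the text. KMeans is only a
    stand-in for the Gaussian mixed-effects clustering (TMixClust)."""
    best_k, best_s = None, -1.0
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        s = silhouette_score(X, labels)   # average silhouette width for this K
        if s > best_s:
            best_k, best_s = k, s
    return best_k, best_s

# synthetic profiles: three well-separated groups of 5-point time series
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (20, 5)) for c in (-2.0, 0.0, 2.0)])
k, s = pick_k_by_silhouette(X)  # the three-group structure should win
```

The silhouette width penalizes both merging distinct groups (K too small) and splitting tight groups (K too large), so its maximum is a reasonable model-selection criterion when the clusters are well separated.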
+{"text": "The increasing amounts of genomics data have helped in the understanding of the molecular dynamics of complex systems such as plant and animal diseases. However, transcriptional regulation, although playing a central role in the decision-making process of cellular systems, is still poorly understood. In this study, we linked expression data with mathematical models to infer gene regulatory networks (GRN). We present a simple yet effective method to estimate transcription factors' GRNs from transcriptional data. We first validated this method on the model yeast Saccharomyces cerevisiae. Then, we applied it using experimental data of the plant pathogen Phytophthora infestans. We evaluated the transcriptional expression levels of 48 transcription factors of P. infestans during its interaction with one moderately resistant and one susceptible cultivar of yellow potato (Solanum tuberosum group Phureja), using RT-qPCR. With these data, we reconstructed the regulatory network of P. infestans during its interaction with these hosts. We defined interactions between pairs of genes (edges in the GRN) as the partial mutual information between these genes, taking into account time and possible time lags of one gene in relation to another. We call this method Gene Regulatory Networks on Transfer Entropy (GRNTE), and it corresponds to Granger causality for Gaussian variables in an autoregressive model. To evaluate the reconstruction accuracy of our method, we generated several sub-networks from the GRN of the eukaryotic yeast model, S. cerevisiae. Results suggest that GRNTE is comparable with state-of-the-art methods when the parameters for edge detection are properly tuned. In the case of P. infestans, most of the genes considered in this study showed a significant change in expression from the onset of the interaction (0 h post inoculum - hpi) to the later time points post inoculation.
Hierarchical clustering of the expression data discriminated two distinct periods during the infection: from 12 to 36 hpi and from 48 to 72 hpi, for both the moderately resistant and susceptible cultivars. These distinct periods could be associated with two phases of the life cycle of the pathogen when infecting the host plant: the biotrophic and necrotrophic phases. We first evaluated the performance of our method, based on transfer entropy (GRNTE), on eukaryotic datasets from the GRNs of the yeast, and then studied P. infestans during its interaction with two hosts which differ in their level of resistance to the pathogen. Although the gene expression analysis did not show differences between the two hosts, the results of the GRN analyses evidenced rewiring of the genes' interactions according to the resistance level of the host. This suggests that different regulatory processes are activated in response to different environmental cues. Applications of our methodology showed that it could reliably predict where to place edges in the transcriptional networks and sub-networks. The experimental approach used here can help provide insights into the biological role of these interactions in complex processes such as pathogenicity. The code used is available at https://github.com/jccastrog/GRNTE under the GNU General Public License 3.0. Here we presented an algorithmic solution to the problem of network reconstruction from time series data. This analytical perspective makes use of the dynamic nature of time series data as it relates to intrinsically dynamic processes such as transcription regulation, where multiple elements of the cell act simultaneously and change over time. We applied the algorithm to study the regulatory network of P. infestans. The online version of this article (10.1186/s12976-019-0103-7) contains supplementary material, which is available to authorized users.
Generation of new and abundant next generation sequencing data has enabled a better understanding of the molecular dynamics of diseases, and of interactions between organisms in general31,63. Genome-wide clustering of gene expression profiles provides an important first step towards building predictive models, by grouping together genes that exhibit similar transcriptional responses to various cellular conditions and are therefore likely to be involved in similar cellular processes36. However, clustering alone does not capture direct regulatory interactions. In the last decade, several approaches have arisen to face these limitations. Their main goal consists in capturing gene interactions as a network model: nodes of the network are genes, and edges represent direct interactions among genes17,35. We benchmarked our method on sub-networks of Saccharomyces cerevisiae's GRN; our benchmarking procedure aims to test the method on multiple sets of data to estimate performance over a range of sub-networks. Subsequently, the method was applied to the plant pathogen Phytophthora infestans in a compatible (susceptible host) and an incompatible (moderately resistant host) interaction. Phytophthora infestans is the causal agent of potato (Solanum tuberosum) late blight disease. For each triplet of genes, direct interactions can therefore be estimated by comparing the values of mutual information and discarding the interaction with the minimum value. This is also the case for the TE formulation, where, given a lag step l, the joint entropy H is under the same constraint. We used this property to avoid estimating interactions due to spurious events. This differs from the approach of Frenzel & Pompe. For a pair of genes v1 and v2, the edge has direction v1 -> v2 if I(v1 -> v2) > I(v2 -> v1).
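The directionality rule just described (keep the orientation with the larger lagged dependence) might be sketched as follows; the discretized mutual information estimator is a simplified stand-in for the transfer entropy used by GRNTE, and all data are synthetic:

```python
import numpy as np

def mutual_info(x, y, bins=4):
    """Plug-in mutual information (in nats) of two 1-D series after binning.
    A crude estimator used here only to illustrate the direction rule; GRNTE
    itself scores edges with transfer entropy."""
    cxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = cxy / cxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def directed_edge(x, y, max_lag=3, bins=4):
    """Score both orientations over lags 1..max_lag and keep the stronger one:
    the edge points from the series whose past best predicts the other."""
    fwd = max(mutual_info(x[:-l], y[l:], bins) for l in range(1, max_lag + 1))
    rev = max(mutual_info(y[:-l], x[l:], bins) for l in range(1, max_lag + 1))
    return ("x->y", fwd) if fwd > rev else ("y->x", rev)

# synthetic pair: y copies x with a lag of one step (plus small noise)
rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = np.roll(x, 1) + rng.normal(0.0, 0.05, 200)
edge, score = directed_edge(x, y)  # should recover x -> y
```

Maximizing over the lag l mirrors the lag optimization mentioned below: the lag at which the dependence peaks also suggests which gene leads the other.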
This process, however, cannot address bidirectional interactions; thus, the result is a directed network of the genetic interactions based on an expression profile. In accordance with the data processing inequality37,60 and the partial mutual information of Frenzel & Pompe, our implementation also optimizes the lag value (l), as it estimates the lag step that maximizes mutual information for each pair of genes. Transfer entropy takes non-negative values between 0 and infinity. To assess the significance of this measurement, we compared the value of each candidate interaction with a null distribution of TE values. For this, we randomly shuffled the expression values of genes across the time series and evaluated the TE for such manifestly independent genes (see the next section for the generation of gene expression data). Based on this, we obtained an empirical null distribution of the values of TE. Higher values of TE indicated a stronger relationship. We assigned a p-value to each comparison, corresponding to the fraction of TE values that were above or equal to the observed value of TE in the null distribution. This was done for 10^5 different reshuffling iterations in each pairwise comparison to attain reliable estimates of the significance of the interaction. We call this new method Gene Regulatory Networks on Transfer Entropy (GRNTE). To evaluate the reconstruction accuracy of our method, we generated several sub-networks from the GRN of the eukaryotic yeast model S. cerevisiae. These networks consist of 200 randomly selected genes. GeneNetWeaver uses ordinary differential equations to simulate expression values; the interaction parameters are estimated based on network topology. We simulated expression values for a time series consisting of 21 points. With these expression data we reconstructed the network topology using GRNTE. For each sub-network, we calculated a receiver operating characteristic (ROC) curve by estimating the true and false positive rates over a varying threshold, and calculated the area under the curve. By doing this we could easily assess the specificity of the algorithm.
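The permutation-based significance test described above can be sketched as follows; absolute Pearson correlation stands in for transfer entropy, the series are synthetic, and the number of permutations is kept far below the 10^5 used in the study:

```python
import numpy as np

def empirical_pvalue(score_fn, x, y, n_perm=1000, seed=0):
    """Empirical p-value of a pairwise interaction score: the fraction of
    scores from permuted series that reach or exceed the observed score.
    score_fn is any association measure (a stand-in for transfer entropy)."""
    rng = np.random.default_rng(seed)
    observed = score_fn(x, y)
    # shuffling one series across time destroys any temporal dependence
    null = np.array([score_fn(rng.permutation(x), y) for _ in range(n_perm)])
    return observed, float((null >= observed).mean())

# synthetic strongly coupled pair, scored by absolute Pearson correlation
rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = x + rng.normal(0.0, 0.1, 100)
corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
obs, p = empirical_pvalue(corr, x, y, n_perm=200)
```

With few permutations the smallest attainable p-value is 1/n_perm; a pseudocount correction such as (k+1)/(n_perm+1) is often preferred in practice to avoid reporting exactly zero.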
However, it has been noted that small deviations from an area under the ROC curve of 1 can result in a large number of false positives. Using GeneNetWeaver, we simulated expression data for sub-networks of the S. cerevisiae GRN. We estimated network modularity by assigning nodes to communities with two different algorithms: multilevel community detection (MCD) and Markov Clustering (MCL). MCD assigns a community to each node in the network, so that in the first step there are as many communities as nodes. In subsequent steps, nodes are reassigned to a community in a local manner, such that each reassignment achieves the highest contribution to modularity38. Node x belongs to the same community as node y if the affiliation coordinate A(x,y) = 1. We decided to apply our model to the reconstruction of part of the regulatory network of the plant pathogen P. infestans while interacting with S. tuberosum. We determined a set of TFs that were significantly overexpressed during this interaction. Initially, we applied significance analysis of microarrays (SAM) to determine the set of differentially expressed genes in the available microarray experiment, using absolute log2 fold change (log2FC) > 1 and false discovery rate (FDR) q-value <= 0.01. We then cross-validated our results with a Serial Analysis of Gene Expression (SAGE) dataset, according to the criteria established in Buitrago-Florez et al. Two cultivars of S. tuberosum group Phureja, Col2 and Col3, kindly provided by the potato breeding program of the Universidad Nacional de Colombia, were used. Cultivar Col2 is a susceptible variety whereas Col3 is moderately resistant to late blight. All plants were grown under greenhouse conditions. A suspension of 10^5 sporangia per ml of P. infestans strain Z3-2, obtained from cultures growing on PDA medium, was prepared as previously described.
The whole experiment was replicated three times. Leaflets from 6-week-old plants were collected and infected with strain Z3-2 as described above. Total RNA was extracted using the Qiagen RNeasy extraction kit according to the manufacturer's protocol and resuspended in 50 μl of RNAse-free water. Treatment with DNAse was performed to avoid contamination with genomic DNA. Reverse transcription was performed using the DyNAmo 2-step synthesis kit, with 1 μl of RNA in a 50 μl final volume; oligo-dT were used as primers. Quantification of cDNA was performed using a Nanodrop 1000, and cDNA was then diluted to a final concentration of 800 ng μl^-1 of total cDNA. Pairs of primers were designed using the QuantPrime software for the 48 TF genes and for the Elongation factor 2 and β-tubulin genes, which were used as reference genes for the RT-qPCR. Three different annealing temperatures, 61.5, 60.5, and 59.5 °C, were tested. Among the 48 genes coding for transcription factors, 28 had an optimum annealing temperature of 61.5 °C and 20 had an optimum annealing temperature of 59.5 °C. Therefore, we separated the analyses into two independent groups: group one corresponded to genes whose optimum annealing temperature was 61.5 °C, with the β-tubulin gene used as the reference gene; group two corresponded to genes whose optimum annealing temperature was 59.5 °C, with the Elongation factor 2 gene used as the reference gene. The expected amplicon size was confirmed in a 1.5% agarose gel. Gene expression at the different time points was compared to that of sporangia of P. infestans growing on PDA medium (0 hpi). Experiments were performed using the Dynamo SyBRGreen RT-qPCR kit according to the manufacturer's instructions. Samples were run in 96-well plates containing 1 μl of cDNA in a total volume of 10 μl, for 40 cycles. Amplification temperature was set according to the annealing temperature for the reference gene in each group of evaluated genes.
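Relative expression from RT-qPCR of this kind is commonly computed with the Pfaffl efficiency-corrected ratio; a minimal sketch, with invented Ct values and amplification efficiencies:

```python
def pfaffl_ratio(e_target, ct_target_control, ct_target_sample,
                 e_ref, ct_ref_control, ct_ref_sample):
    """Efficiency-corrected relative expression (Pfaffl):
    ratio = E_target^(dCt_target) / E_ref^(dCt_ref),
    with dCt = Ct(control) - Ct(sample). In this study the control is the
    0 hpi sporangia sample and the reference gene is beta-tubulin or
    Elongation factor 2; the values below are purely illustrative."""
    d_target = ct_target_control - ct_target_sample
    d_ref = ct_ref_control - ct_ref_sample
    return (e_target ** d_target) / (e_ref ** d_ref)

# perfect efficiency (E = 2): the target amplifies 2 cycles earlier in the
# sample while the reference gene is unchanged -> 4-fold upregulation
r = pfaffl_ratio(2.0, 30.0, 28.0, 2.0, 25.0, 25.0)  # r == 4.0
```

Normalizing by the reference gene cancels differences in cDNA input between samples, which is why a stable reference per annealing-temperature group matters.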
Expression values were calculated as the relative ratio of expression compared to the reference gene according to the Pfaffl method52, comparing gene expression at the different time points to that of 0 hpi sporangia. To evaluate the performance of transfer entropy (TE), we used eukaryotic datasets from the GRNs of the yeast S. cerevisiae. A total of 100 sub-networks were subsampled, consisting of 200 nodes each. For each sub-network we generated time series expression data using GeneNetWeaver. The p-value indicates the significance of the interaction. The shifting of the time series may also give a sense of directionality, given that the MI increases when the regulated TF is shifted with respect to the regulator, and decreases when the shift occurs the other way around. Using the p-values, we ranked the regulatory edges from the most confident to the least confident. To evaluate such a ranking independently of the choice of a specific threshold, we used the standard convention of calculating the area under the precision-recall curve (AUPR) and the area under the receiver operating characteristic (AUROC). To facilitate comparison between algorithms, we transformed the directed graphs generated by TE into symmetric undirected graphs. Each algorithm assigns a confidence value between 0 and 1 to each edge. The AUPR determines the proportion of true positives among all positive predictions (prediction precision) versus the fraction of true positives retrieved among all correct predictions, at varying thresholds; conversely, the AUROC estimates the average true positive rate versus the false positive rate. The pattern of high precision at high confidence thresholds is preserved when predicting the DREAM4 dataset, where AUPR is low for all the algorithms.
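The threshold-free evaluation above (AUPR and AUROC over a ranked edge list) can be sketched with scikit-learn; the candidate edges and confidence values are hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def score_edge_ranking(true_edges, confidences):
    """Threshold-free evaluation of a ranked edge list: AUROC and AUPR
    (average precision), the DREAM-style metrics used in the benchmark.
    true_edges is a 0/1 vector over candidate edges."""
    return (roc_auc_score(true_edges, confidences),
            average_precision_score(true_edges, confidences))

# toy ranking: 5 true edges among 20 candidates, perfectly ranked on top
truth = np.array([1] * 5 + [0] * 15)
conf = np.concatenate([np.linspace(0.9, 0.7, 5), np.linspace(0.6, 0.1, 15)])
auroc, aupr = score_edge_ranking(truth, conf)  # both equal 1.0 here
```

Because true edges are sparse in a GRN, AUPR is usually the more demanding of the two metrics: a near-perfect AUROC can still coexist with many false positives among the top-ranked edges.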
Overall, for this dataset, values of AUPR and AUROC are lower than the average obtained in our benchmark networks. AUROC values of GRNTE were significantly higher than those of most methods tested, which shows a high rate of detection of true positive interactions. This suggests that GRNTE is more reliable than both TDARACNE and BLARS at high thresholds, but rapidly becomes unreliable at low thresholds. Notably, although SWING showed a lower mean AUROC, it did not show any significant differences when compared to GRNTE. These results suggest that GRNTE may be comparable with state-of-the-art methods when the parameters for edge detection are properly tuned, although it must be noted that the accuracy of GRNTE comes with a higher running time compared to most of the other methods (Table 2). Ultimately, GRN analysis aims to extract the global structure of a set of gene interactions38,48. The expression levels of transcription factors of P. infestans during its interaction with potato cultivars Col2 and Col3 were assessed via RT-qPCR. An expression profile was constructed for each TF by calculating the ratio of the expression of the gene at each time point after inoculation to the expression of the same gene in P. infestans growing on PDA medium (Time 0). Notably, genes PITG_01528, PITG_11223, PITG_13133, PITG_19851, and PITG_21561 only exhibited this pattern in cultivar Col3. Additionally, gene PITG_00513 (cell division cycle 5-related protein) had a different expression pattern in Col2, where it went from highly expressed at the early stages to lowly expressed at the late stages. This network was composed of 3 modules as detected by MCD, with a modularity value of 0.2878. Another significant concern is the validation of the resulting network. A standard framework has been set up by DREAM in order to compare different algorithms54. The use of GRN simulation tools becomes particularly relevant when one intends to evaluate the network structure as a whole.
If the objective is to understand physiology as an emergent property of gene expression, properly assessing the network features is paramount to make reliable predictions and design constructive experiments42,49. In Plasmodium falciparum, distinct clusters of genes show differential behavior during the different stages of the complex life cycle of this human pathogen. In P. infestans, expression profiling did not reflect synchronized changes in time as observed in P. falciparum phaseograms, thus rendering difficult the study of physiological changes across the infection stages of the P. infestans life cycle. Notably, most of the genes sampled in this study showed a rather drastic transition from growth on artificial medium (0 hpi) to growth on leaf tissue. However, during leaf infection, from 12 to 72 hpi, drastic transcriptional changes did not occur. Despite a few variations throughout the expression profiles, hierarchical clustering of the expression data discriminated two distinct periods during the infection: from 12 to 36 hpi and from 48 to 72 hpi. These distinct periods can be associated with two phases of the life cycle of the pathogen when infecting the host plant: the biotrophic and necrotrophic phases. Transcription factors within the GRNs changed their expression levels and gained or lost interactions throughout the infection process. This reflects the role of TFs in controlling different aspects of the infection process despite showing only slight changes in their expression level. When comparing the transcriptional patterns between the two cultivars, again, very few genes were differentially expressed. Most of these genes were annotated as Myb-like DNA-binding proteins. The role of the Myb transcription factor during early infection of Phytophthora sojae was demonstrated by Zhang et al., where silencing of PsMYB1 resulted in abnormal sporangial development and affected zoospore-mediated plant infection.
More studies on the role of Myb transcription factors in the biology of infection of P. infestans are needed to understand the tight transcriptional control of compatible and incompatible interactions. We have shown that network inference from expression data is a key tool for improving the biological insights obtained from transcriptomics data. Exploiting time series transcriptome analyses has helped in the understanding of the infection process of animal pathogens. In our data, TF expression changed in planta, but differences in the expression ratios of the TFs of the pathogen when infecting Col2 or Col3 were not significant. When using the GRNs, however, highly connected nodes and gene modules did not necessarily agree with drastic changes in expression profiles: highly expressed genes do not necessarily have high centrality, and hierarchically clustered groups of genes do not correspond to network communities. Additionally, genes that show changes in expression in different hosts do not show highly different centrality. Our comparison of the two networks showed that, despite small changes in gene expression, a high number of changes occurred in the establishment of connections inside the GRN for each host. The fact that only about 30% of the interactions of one network were preserved in the other network suggests that the system undergoes several changes between a compatible and an incompatible interaction. Since the number of modifications was much smaller than expected between two random networks, it is possible to speculate that the rewiring of the P. infestans GRN is subject to several constraints and that the process has been evolutionarily optimized. If any rewiring operation were possible, the expected value of the Hamming distance would be very close to that of two random networks.
However, the control of transcription regulation is not random, as this value is much lower. Edits to the network structure, although many, should be precise to keep the balance and functionality of the network. On the other hand, the networks allowed us to evaluate aspects of transcription that are beyond raw expression changes, as shown when exploring the changes in gene expression using the GRN in each environment/host. As mentioned above, the most significant changes in the expression values for most of the TFs were observed between the oomycete growing in culture medium and growing in the plant. At the same time, preserved topological features (such as modularity and the large fraction of genes which remain affiliated to a community) indicate that there are core regulatory functions preserved between two different environments. Thus, there is tight control of the regulation of the transcriptional program in compatible and incompatible interactions. Just a relatively small subset of changes is required to produce a completely different behavior, compatible (Col2) vs incompatible (Col3) interaction, without drastic changes in TF expression levels, compared to the random case. Large differences in the expression level of one gene may be balanced by smaller changes in other components of the GRN. However, our reconstruction was not able to distinguish rearrangements occurring at higher levels in the whole GRN; a larger sample of genes is needed to search for evidence that may support larger transcriptional rewiring. Community organization has been proposed as a property indicative of functional units in complex networks58. Expression profiling of P. infestans has been helpful in the discovery and characterization of effector genes and in distinguishing between different stages of the infection, but it is not sufficient to dissect the biology of Phytophthora and to fully understand phenomena such as host specificity or hemibiotrophy.
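The rewiring comparisons in this section rest on the Hamming distance between networks over the same gene set; a minimal sketch with toy adjacency matrices (not the study's networks):

```python
import numpy as np

def hamming_distance(adj_a, adj_b):
    """Number of edge insertions/deletions separating two networks on the
    same ordered node set: the count of differing adjacency-matrix entries.
    The matrices below are toy examples, not the Col2/Col3 sub-networks."""
    return int((adj_a != adj_b).sum())

a = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
b = np.array([[0, 1, 1],
              [0, 0, 0],
              [0, 0, 0]])
d = hamming_distance(a, b)  # entries (0,2) and (1,2) differ -> d == 2
```

Comparing this distance to the value expected for two random networks of the same size and density is what justifies calling the observed rewiring "constrained" rather than arbitrary.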
Network biology proposes that data coming from large experiments can be analyzed in several different layers: a regulatory network built from transcriptional data may be interpreted from its basic properties up to more complex levels, all of which may give different insights depending on the context. Complex behavior such as hemibiotrophy may be explained via the effect of regulatory events occurring at distinct times. The regulatory capacities of the TFs inside a network may be best explained by the information that these transmit to other elements of the network. Small differences in network rewiring and conserved levels of expression may be explained by the effect of each individual TF, in terms of its information flow inside the network. The information flow can be assessed by estimating the betweenness centrality; genes PITG_10768 (zinc finger C2H2 superfamily) and PITG_08960 (Myb-like DNA binding protein) showed the highest betweenness centrality in the Col2 and Col3 sub-networks, respectively. These genes are constantly downregulated, which agrees with the hypothesis that shifts in physiological behavior are controlled via negative regulation in Phytophthora40. In P. infestans, analytical tools that elucidate the infection process via study of the mRNA can be greatly expanded via network reconstruction. Using this framework, differences in the behavior of an organism in different environments can be found, as shown in the rewiring of the sub-networks in different environments.
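Betweenness centrality, used above as a proxy for information flow, can be computed with networkx; the toy graph below (placeholder node names, not the study's TFs) illustrates why a bridging node scores highest:

```python
import networkx as nx

# A node that bridges otherwise separate parts of a network carries most of
# the shortest paths, hence high betweenness centrality: the proxy for
# information flow used in the text. Node names here are placeholders.
g = nx.Graph()
g.add_edges_from([("A", "hub"), ("B", "hub"), ("hub", "C"), ("hub", "D")])
bc = nx.betweenness_centrality(g)  # normalized betweenness per node
top = max(bc, key=bc.get)          # the bridging node dominates
```

In this star-shaped example every shortest path between leaves passes through the central node, so its normalized betweenness is 1.0 while the leaves score 0; in a real GRN, a TF with high betweenness similarly mediates many regulatory paths.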
Additionally, although expression profiling may be a powerful tool to determine major genes involved in the infection process, it is limited in its ability to discriminate possible mechanisms and hypotheses underlying host-pathogen interactions; network analysis broadens the analytical power of these datasets, as it allows determining modules and narrowing the number of candidate genes for experimental validation. The preservation of modules, despite heavy rewiring of the network, may indicate that these circuits have large biological importance and play key roles in the physiology of infection. Unlike in organisms such as P. falciparum, gene expression changes in P. infestans are less directly indicative of regulatory function changes. This is the first study to use network reconstruction as a way to overcome the limitations of gene expression profiling. Some of the ideas discussed here are widely used in other fields22,39. We studied P. infestans during its interaction with two hosts which differ in their level of resistance to the pathogen. Although the gene expression analysis did not show differences between the two hosts, the results of the GRN analyses indicated rewiring of the genes' interactions according to the resistance level of the host. This suggests that different regulatory processes are activated in response to different environmental cues. Applications of our methodology showed that it could reliably predict where to place edges in the transcriptional networks and sub-networks. The experimental approach used here can help provide insights into the biological role of these interactions in complex processes such as pathogenicity. The code used is available at https://github.com/jccastrog/GRNTE under the GNU General Public License 3.0. Here we presented an algorithmic solution to the problem of network reconstruction from time series data.
This analytical perspective makes use of the dynamic nature of time series data as it relates to intrinsically dynamic processes such as transcription regulation, where multiple elements of the cell (e.g. transcription factors) act simultaneously and change over time. We applied the algorithm, GRNTE, to study the regulatory network of P. infestans. Additional file 1: Table S1. Primer sequences for TF genes in P. infestans assayed in this study. Forward and reverse primers were designed with QuantPrime. Additional file 2: Figure S1. PCR amplification for testing primer viability. cDNA extracted from P. infestans in PDA media was amplified at 45 PCR cycles. Observed fragment sizes correspond to the expected fragment sizes (~50-70 bp) when observed in a 2% agarose gel. Some primer dimers can be observed. (PDF 387 kb) Additional file 3: Table S2. Mean expression values for 48 TFs from P. infestans during the interaction with S. tuberosum group Phureja Col3 and Col2. Relative expression is calculated by comparing RT-PCR measurements to time 0 h.p.i. for both Col2 and Col3. Expression profiles are also compared between the two hosts to observe cultivar-specific differences during the interaction. (XLSX 26 kb) Additional file 4: Figure S2. Expression profile of Col2 compared to Col3. Heatmap representing the expression profiles compared for the two cultivars. For each transcript, each time point is compared to the same time point in the other cultivar. Although only minor changes in expression are observed, genes overexpressed in Col2 and a clear separation between early and late infection can be observed by hierarchical clustering. (PDF 189 kb) Additional file 5: Network S1. Network file for the sub-network of P. infestans when interacting with S. tuberosum Col2, as observed in Fig. Additional file 6: Network S2. Network file for the sub-network of P. infestans when interacting with S. tuberosum Col3, as observed in Fig. Additional file 7: Figure S3. 
Expression is unrelated to degree. Node degree is computed for each node in the network and plotted against its mean expression value in Col2 (A) and in Col3 (B). No correlation is observed. (PDF 179 kb) Additional file 8: Network S3. Network file for the sub-network of P. infestans extracted from the intersection of nodes and edges between the networks for infection in Col2 and Col3, as observed in Fig."}
+{"text": "Essential genes play an indispensable role in supporting the life of an organism. Identification of essential genes helps us to understand the underlying mechanism of cell life. The essential genes of bacteria are potential drug targets for some diseases. Recently, several computational methods have been proposed to detect essential genes based on static protein\u2013protein interaction (PPI) networks. However, these methods have ignored the fact that essential genes play essential roles under certain conditions. In this work, a novel method was proposed for the identification of essential proteins by fusing the dynamic PPI networks of different time points (FDP). Firstly, the active PPI networks of each time point were constructed and then fused into a final network according to the networks\u2019 similarities. Finally, a novel centrality method was designed to assign each gene in the final network a ranking score, whilst considering its orthologous property and its global and local topological properties in the network. This model was applied on two different yeast data sets. The results showed that the FDP achieved a better performance in essential gene prediction as compared to other existing methods that are based on the static PPI network or on dynamic networks. Essential genes (and their encoded proteins) play an indispensable role in supporting the life of organisms; without them, lethality or infertility is caused. Studying essential genes helps us to understand the basic requirements for cell viability and fertility. Sequence-based methods are based on the fact that essential genes evolve much more slowly than other genes, and that they are usually conserved across different species. Network-based methods identify essential genes according to their topological properties in the protein\u2013protein interaction (PPI) networks. 
Essential genes are supposed to be at the center of the PPI network, because removing them from the network would cause lethality and the breakdown of the network. However, there are some limitations in the network-based methods, including the incomplete and error-prone currently available PPI data, and the neglect of other intrinsic properties of the essential genes. To overcome these limitations, some methods integrate PPI networks with other biological information to improve the prediction accuracy of essential proteins. One of these methods refines the currently available PPI network by introducing other biological information, such as gene functional annotation data and gene expression data. However, the aforementioned methods all ignore the fact that essential genes interact with each other and play essential roles under certain conditions. Most of these methods are based on static PPI networks, which consist of interactions accumulated in different conditions and at different time points. In fact, the interactions between genes in cells change over time, environment, and different stages of the cell cycle. In this work, a novel method was developed for the identification of essential genes by fusing the dynamic PPI networks of different time points (FDP). Firstly, a series of active PPI networks of each time point was constructed using Xiao\u2019s method, and then they were fused into a final network. FDP and other existing computational methods were applied to predict the essential genes of S. cerevisiae (Baker\u2019s Yeast). Two different PPI datasets of Saccharomyces cerevisiae were adopted to evaluate our method. One dataset was DIP_PPI, downloaded from the DIP database. The list of essential genes was integrated from the following databases: the Munich Information Center for Protein Sequences (MIPS), the Saccharomyces Genome Database (SGD), the Database of Essential Genes, and the Saccharomyces Genome Deletion Project (SGDP). 
This integrated list provided the known essential genes used in our evaluation. Information on the orthologous proteins was taken from Version 7 of the InParanoid database. In our study, yeast proteins were mapped to another 99 species to find their orthologous proteins. Only the proteins in the seed orthologous sequence pairs of each cluster generated by InParanoid were chosen as the orthologous proteins. Our dynamic PPI networks were constructed based on the gene expression profiles and the PPI network. As pointed out by De Lichtenberg et al., the expression profiles consisted of periodically (time-dependent) and non-periodically (time-independent) expressed profiles and some inevitable noise. Let x = {x1, \u2026, xm, \u2026, xM} be a time series of observation values at equally-spaced time points from a dynamic system. A gene is supposed to be time-dependent if its gene expressions have linear relationships and can be modeled by an AR model of order p (see Equation (1)). A gene is regarded as time-independent if its gene expressions have nonlinear relationships and can only be modeled by an AR model of order zero (see Equation (2)). \u03b2i is the autoregressive coefficient, and \u03b5m denotes the random error, which follows a normal distribution with a mean of 0 and a variance of \u03c32. Since the order of the AR model in Equation (1) is unknown, p-values for all possible orders p (1 \u2264 p \u2264 (M \u2212 1)/2) were calculated. A gene is regarded as time-dependent if one of the p-values calculated from its expression profile is smaller than a user-preset threshold value (threshold = 0.01). 
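As a minimal illustration of the AR modelling step, the least-squares estimate of an AR(1) coefficient can be written in closed form; the order selection and p-value testing of Equation (1) are omitted here, and the series below is invented for the example.

```python
def ar1_coef(x):
    """Ordinary least-squares estimate of beta in x[t] = beta * x[t-1] + eps."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# a noiseless doubling series is perfectly explained by an AR(1) model with beta = 2,
# i.e., a clearly time-dependent profile in the sense used above
series = [1.0, 2.0, 4.0, 8.0, 16.0]
```

A time-independent (order-zero) profile would instead show no linear relation between consecutive values, and the fitted coefficient would not be significant.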
The expression profiles of a gene will be considered as noise if the gene is not only time-independent, but the mean of its expression values across all time points is also very small. After identifying the time-dependent and the time-independent genes, and filtering out the noisy genes, the next step was to detect which of them were active at each time point. A gene was considered to be active when its expression value was above a given threshold (see Equation (3)), where u and \u03c3 are the mean and standard deviation of its expression values; in this work, similar to previous work, k was set to 2.5. Algorithm 1 Dynamic Network Construction. Input: A static PPI (S_PPI) network represented as a graph G = (V, E), a time series of the gene expression profile of each gene in G, parameter k. Output: The active networks of each time point. Step 1: Identify two categories of genes, the time-dependent genes and the time-independent genes, using Equations (1) and (2), according to their expression profiles. Step 2: Filter out the noise genes among the time-independent genes. Step 3: Identify the active genes of each time point from the remaining two categories of genes by judging whether or not their expression values are above the threshold (Equation (3)). Step 4: Map the active genes of each time point to the S_PPI network and extract the active networks of each time point. Therefore, a series of active PPI networks was generated by mapping the active genes at each time point to the S_PPI and extracting the edges connecting them. Since the active genes were different at different time points, these active PPI networks dynamically changed over time. 
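The activity test of Step 3 can be sketched as follows, assuming the simple threshold form mean + k * sigma; the exact shape of Equation (3) in the original method may differ, and the expression profile below is made up.

```python
from statistics import mean, pstdev

def active_timepoints(profile, k=2.5):
    """Indices of time points whose expression exceeds mean + k * std.

    Assumes the threshold of Equation (3) is u + k * sigma, which is one
    plausible reading of the text, not a confirmed formula.
    """
    threshold = mean(profile) + k * pstdev(profile)
    return [t for t, value in enumerate(profile) if value > threshold]

# a gene that is quiescent except for a burst at the last time point
profile = [1.0] * 9 + [20.0]
```

Only the burst time point clears the threshold, so the gene would join the active network for that time point alone.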
The details of the dynamic network construction algorithm are shown in Algorithm 1. After constructing the active networks of each time point, the next step was to fuse them into a single network, which captured the shared and complementary network structure of all the active networks, offering insight into how the expression of proteins was similar across different time points from the point of view of the network structure. To formally define the process of fusing networks, the following variables were introduced. A static PPI network (S_PPI) can be represented as an undirected graph G = (V, E), where a node v \u2208 V represents a gene and an edge e \u2208 E denotes an interaction between two genes v and u. w denotes the weight of the edge e, which measures the similarity between genes v and u. A dynamic PPI network can be represented as a series of active networks of different time points G1, G2, \u2026, Gi, \u2026, Gn, where Gi = (Vi, Ei) represents a subgraph of G at the ith time point. Vi \u2286 V is the set of nodes that are active at the ith time point. Ei \u2286 E is the subset of E that connects the active genes at the ith time point. Wi is an adjacency matrix of Gi, where its entry wi(u, v) measures the closeness of two nodes in the ith active network. The edges in the active network of each time point are weighted by Equation (5). Ni(u) is the neighborhood of u in Gi. ECCi(e) is the edge-clustering coefficient of e in Gi, which is defined as the number of common neighbors of nodes u and v in Gi divided by the number of common neighbors that might possibly exist between them. Since essential genes tend to form dense clusters, avg(ECCi(u)), the average of the edge-clustering coefficient values between u and its neighbors in Gi, is incorporated into the edge weights. \u03bc is a parameter that is empirically set to 0.5, following the recommendation in the literature. For the active network of the ith time point Gi, its adjacency matrix Wi has two derivatives, namely, matrix Pi and matrix Si. 
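A common definition of the edge-clustering coefficient, plausibly matching the description above, divides the number of shared neighbors by the largest number that could possibly be shared, min(deg(u) - 1, deg(v) - 1); the tiny graph below is illustrative only and the exact denominator of Equation (5) may differ in the original method.

```python
def ecc(adj, u, v):
    """Edge-clustering coefficient of the edge (u, v) in an undirected graph.

    common neighbors / min(deg(u) - 1, deg(v) - 1), an assumed but standard form.
    """
    common = len(adj[u] & adj[v])
    possible = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return common / possible if possible > 0 else 0.0

# triangle a-b-c (a dense cluster) with a pendant node d attached to a
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
```

Edges inside the triangle get the maximal coefficient, while the pendant edge gets zero, which is exactly the clustering signal the edge weighting exploits.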
Matrix Pi carries the global information about the similarity of each gene to all the others, obtained by performing a normalization on Wi (see Equation (8)). Matrix Si only encodes the similarity between each gene in Gi and its K nearest neighbors (K = 20, following the recommendation in the literature; see Equation (9)). Given the number M of active networks at different time points, we could construct the adjacency matrix Wi of Gi using Equation (5) for the ith time point, i = 1, 2, 3, \u2026, M; Pi and Si were then obtained from Equations (8) and (9), respectively. The aim of the network fusion was to fuse the M active networks into a single network. The process was as follows. Firstly, the similarities between any two networks were calculated based on the Euclidean distance of their adjacency matrices. Then the nearest two networks, i.e., i and j, were selected to fuse by the following iterative process, starting with t = 0. After t iteration steps, holding the states of the ith and the jth networks, the fused network of the two networks was computed as in Equation (12). After that, the similarities between the fused network R and the remaining active networks were recomputed. R and its closest active network were then selected and fused into one network, and the above process was repeated until all the active networks were fused into a single network. 
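Two of the ingredients above, the row normalization that produces the P matrix and the Euclidean distance used to pick the nearest pair of networks, are easy to sketch; the matrices are invented for the example, and the cross-diffusion iteration itself (Equation (12) and the updates preceding it) is not reproduced here.

```python
from math import sqrt

def row_normalize(W):
    """Divide every row of a nonnegative matrix by its row sum (the P derivative)."""
    out = []
    for row in W:
        s = sum(row)
        out.append([x / s if s else 0.0 for x in row])
    return out

def net_distance(W1, W2):
    """Euclidean (Frobenius) distance between two adjacency matrices."""
    return sqrt(sum((a - b) ** 2 for r1, r2 in zip(W1, W2) for a, b in zip(r1, r2)))

def nearest_pair(nets):
    """Indices of the two closest networks, i.e., the pair to fuse first."""
    pairs = [(i, j) for i in range(len(nets)) for j in range(i + 1, len(nets))]
    return min(pairs, key=lambda p: net_distance(nets[p[0]], nets[p[1]]))
```

The greedy loop of the fusion then repeatedly replaces the selected pair by their fused network and recomputes these distances against the remainder.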
Algorithm 2 shows the algorithm for fusing active PPI networks. Algorithm 2 Active PPI network fusion. Input: Active networks of each time point, parameter K. Output: Final fused network. Step 1: Construct the adjacency matrix Wi of the ith active network using Equation (5). Step 2: Construct Pi and Si of the ith active network using Equations (8) and (9). Step 3: Calculate the similarities between any two networks based on the Euclidean distance of their adjacency matrices. Step 4: Select the nearest two active networks Gi and Gj, let t = 0. Step 5: Compute the diffusion update, let t = t + 1. Step 6: Repeat step 5 until t = 20. Step 7: Compute the fused network R of Gi and Gj using Equation (12). Step 8: Let Wr = R, and construct Pr and Sr of the fused network R using Equations (8) and (9). Step 9: Find the nearest active network Gk to R from the remaining active networks, let t = 0. Step 10: Compute the diffusion update, let t = t + 1. Step 11: Repeat step 10 until t = 20. Step 12: Compute the fused network of R and Gk using Equation (12); the fused network is again named R. Step 13: Remove Gk from the active network list and repeat steps 8 to 12 until all the active networks are fused into a final network. Step 14: Output the final fused network. Then, R is the result of fusing all the active networks. After fusing the active networks of different time points, an algorithm was designed to assign each gene in the fused network a ranking score. The ranking score measured the importance of the gene in the fused network from both the global and local perspectives. F is the number of genes in the network; in fact, F is the number of genes that are active at at least one of the time points. Let pr(i) be the ranking score of node i with respect to its global property in the fused network, which can be computed using Equation (13). o(i) denotes the orthologous score of node i, which is calculated as the number of reference organisms in which the node has orthologs. 
A random walking process was implemented on the fused network to capture the global information of each gene. Let H be an F*F adjacency matrix of the final fused network, with all of its entries normalized by row. The interaction frequency entropy (IFE) of a gene in the final fused network measured its local topological properties. For a gene i, since we only considered its local properties, the interactions connecting it to its K closest neighbors were selected to calculate its IFE value (see Equation (15)); KNN(i) denotes the K closest neighbor set of node i, and |KNN| denotes the selected neighbor set size. Equation (16) was employed to perform a min-max normalization on the node\u2019s IFE value. Eventually, the ranking score of a node i in the final fused network, represented by FDP(i), equaled the linear combination of its global topological score, denoted as pr(i), and its local closest neighbors\u2019 influence, denoted as IFE(i). The parameter \u03bb (0 \u2264 \u03bb \u2264 1) was used to adjust the weight of the two scores in the ranking score. 
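A plausible form of the random walk in Equation (13) is a power iteration with a restart vector built from the orthologous scores; the restart weight alpha and the toy matrices below are assumptions for illustration, not the paper's exact parameterization.

```python
def rank_walk(H, o, alpha=0.15, tol=1e-12, max_iter=1000):
    """Random walk with restart: pr <- alpha * o + (1 - alpha) * H^T pr.

    H is the row-normalized adjacency matrix; o is a restart distribution
    (here assumed to be the normalized orthologous scores) summing to 1.
    """
    n = len(H)
    pr = [1.0 / n] * n
    for _ in range(max_iter):
        new = [alpha * o[i] + (1 - alpha) * sum(H[j][i] * pr[j] for j in range(n))
               for i in range(n)]
        if sum(abs(a - b) for a, b in zip(new, pr)) < tol:
            return new
        pr = new
    return pr

# row-normalized adjacency of a 3-node clique; uniform orthology restart
H = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
o = [1 / 3, 1 / 3, 1 / 3]
```

With a symmetric network and a uniform restart vector the walk stays uniform; a skewed orthology vector would pull probability mass toward conserved genes.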
Algorithm 3 shows the algorithm for computing the FDP values of genes. Algorithm 3 FDP. Input: A static PPI network represented as a graph G = (V, E), gene expression profiles, orthologs data sets between Yeast and 99 other organisms (ranging from H. sapiens to E. coli), stopping error. Output: FDP values of genes. Step 1: Construct the active networks of each time point using the Dynamic Network Construction algorithm. Step 2: Fuse these active networks into a final fused network using the Active PPI network fusion algorithm. Step 3: Calculate the orthologous score of each node in the final fused network using Equation (14). Step 4: Construct matrix H and normalize all its entries by row. Step 5: Initialize pr with pr0 = d, let t = 0. Step 6: Compute pr(t+1) using Equation (13), let t = t + 1. Step 7: Repeat step 6 until the change between successive iterations falls below the stopping error. Step 8: Calculate the IFE value of each gene in the final fused network using Equations (15) and (16). Step 9: Calculate the FDP value of each gene in the final fused network by linearly combining its pr value and IFE value (see Equation (17)). In order to evaluate the performance of the FDP in essential gene prediction, we compared the FDP with other existing methods. Compared to NC, for each top number of candidates, the prediction accuracy of the FDP improved by 61.82%, 30.16%, 22.53%, and 21.74% on the DIP_PPI network, respectively, and by 16.88%, 13.29%, 13%, and 12% on the SC_net network, respectively. Hence, overall, the FDP outperformed all the other comparative methods in the prediction of essential genes. In particular, when a small number of candidate genes is selected, the advantage of the FDP becomes increasingly obvious. In the jackknife curves, the x-axis represents the number of genes ranked at the top in descending order, according to their ranking scores computed by the corresponding methods, and the y-axis is the cumulative count of the real essential genes within the ranked genes. 
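The final combination in Step 9 can be sketched as a min-max normalization of the IFE values followed by a lambda-weighted sum; the numeric values below are arbitrary, and the exact normalization placement in Equation (17) is an assumption.

```python
def minmax(values):
    """Min-max normalization to [0, 1], in the spirit of Equation (16)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def fdp_scores(pr, ife, lam=0.5):
    """FDP(i) = lambda * pr(i) + (1 - lambda) * normalized IFE(i)."""
    return [lam * p + (1 - lam) * f for p, f in zip(pr, minmax(ife))]
```

Setting lam toward 1 ranks genes mainly by the global random-walk score, while lam toward 0 emphasizes the local entropy term.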
To investigate the performance of all the testing methods when selecting different numbers of top-ranked genes as candidates, jackknife curves were employed to show the results. Precision-recall (PR) curves were also plotted to further show the overall performance of the comparative methods. Precision measures the percentage of predicted essential genes that match known essential genes among all the predicted genes. Recall measures the percentage of known essential genes that match the predicted ones over all the known essential genes. Essential genes play important roles in cell life under certain conditions, and their mRNA expression levels tend to change within a narrow range. Based on these observations, in this work, a novel method was proposed to identify essential genes by fusing the dynamic PPI networks of different time points. Compared with previous methods, our method hierarchically fuses the active networks of different time points into a single one. Moreover, it comprehensively utilizes the genes\u2019 orthologous property and both their global and local topological properties to select the candidate essential genes from the fused network. The prediction results on two yeast PPI network datasets show that our method improves essential gene prediction significantly, compared to the methods based on the static PPI network, including the methods considering the topological properties, i.e., DC and NC, and also the methods combining the PPI network with other biological properties, i.e., PeC and ION. Moreover, our method also outperformed the methods based on Xiao\u2019s dynamic PPI network. Compared with the existing methods, the FDP shows outstanding performance when selecting a small number of genes as the candidate essential genes. This may benefit from the construction of a dynamic network, which filters out the non-active genes of each time point. 
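The jackknife-style evaluation described above reduces to counting true essential genes among the top-k ranked candidates; the gene names and the known-essential set here are fabricated for the example.

```python
def precision_recall_at_k(ranked, known, k):
    """Precision and recall when the top-k ranked genes are predicted essential.

    ranked: genes in decreasing order of ranking score; known: set of true
    essential genes (both hypothetical here).
    """
    hits = sum(1 for g in ranked[:k] if g in known)
    return hits / k, hits / len(known)

ranked = ["g1", "g2", "g3", "g4", "g5", "g6"]
known = {"g1", "g3", "g6"}
```

Sweeping k from 1 to the full list and plotting the cumulative hit count gives the jackknife curve, while plotting precision against recall over the same sweep gives the PR curve.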
However, some real essential genes that consistently express low values across different time points have also been regarded as noise and have been ignored. This causes a decrease in the FDP\u2019s prediction performance when selecting a large number of candidates. Hence, one direction of our future work is to construct a high-quality dynamic network from expression profiles that are full of mRNA isoforms and inevitable background noise. The prediction of essential genes is also closely related to the biological properties of known essential genes. New potential correlations between biological events and essential genes, such as alternative splicing, will be mined. Moreover, the fused network is fully connected, which introduces some false interactions between the genes and causes poor performance when only the topological properties in the network are considered. Therefore, another future work for us is to develop a more efficient strategy to fuse the active networks of different time points."}
+{"text": "The generalized relevance network approach to network inference reconstructs network links based on the strength of associations between data in individual network nodes. It can reconstruct both undirected networks, i.e., relevance networks, and directed networks. The generalized approach allows the use of an arbitrary measure of pairwise association between nodes, an arbitrary scoring scheme that transforms the associations into weights of the network links, and a method for inferring the directions of the links. While this makes the approach powerful and flexible, it introduces the challenge of finding a combination of components that would perform well on a given inference task. We address this challenge by performing an extensive empirical analysis of the performance of 114 variants of the generalized relevance network approach on 47 tasks of gene network inference from time-series data and 39 tasks of gene network inference from steady-state data. We compare the different variants in a multi-objective manner, considering their ranking in terms of different performance metrics. The results suggest a set of recommendations that provide guidance for selecting an appropriate variant of the approach in different data settings. The association measures based on correlation, combined with a particular scoring scheme of asymmetric weighting, lead to optimal performance of the relevance network approach in the general case. In the two special cases of inference tasks involving short time-series data and/or large networks, association measures based on identifying qualitative trends in the time series are more appropriate. The genome plays a central role in the control of cellular processes in an organism. The sequencing efforts for different organisms have led to the identification of genes, i.e., individual components of the genome. However, the functions of each gene and its product cannot be studied in isolation. 
To fully understand genome functionality, we have to consider genes and gene products as highly connected and structured networks of information that flows through a cell. These biological networks are typically referred to as gene regulatory networks (GRNs), where nodes correspond to genes or gene products and edges correspond to biological or chemical interactions among them. Here, we address the task of inference of GRNs from gene expression data. The rapid advance and wide availability of technology for measuring cellular activities at genome-wide scale have caused enormous interest in methods addressing the GRN inference task in contemporary biology. As a result, a wide repertoire of inference methods has been established. In general, methods for GRN inference take one of two major perspectives on the task. Methods following the statistical perspective employ a simple \"guilt-by-association\" heuristic, and, taken together, the developments along this line have led to a generalized relevance network approach. Data for GRN inference typically come from microarray experiments perturbing and stressing genes that produce highly resolved time-series and steady-state measurements of transcript levels. Steady-state measurements are made by perturbing every gene in the network and recording the pseudo-steady state reached after the perturbation. Perturbing every gene is not necessary for time-series data, which record gene expression levels over a certain period of time after the perturbation. For both data types, the captured dynamic response of the regulatory effects within a cell should provide robust information about the GRN under consideration. The variety of similarity measures and scoring schemes that can be applied within the generalized relevance network approach makes the approach flexible and applicable in various scenarios. Many surveys emphasize and focus on the flexibility of the relevance network approach by presenting and categorizing its variants. 
The distinctive aspect of the survey presented here is the focus on selecting an appropriate variant of the relevance network approach for a given dataset. Our basic conjecture is that some variants of the approach perform better than others, in general, and that the performance of a variant is related to the properties of the dataset at hand. Another contribution of our survey is that it includes inference tasks from the two types of data, i.e., steady-state and time-series data. The tasks involve real networks of Escherichia coli and Saccharomyces cerevisiae (Yeast) [12\u201314], as well as their simulation counterparts (networks). We use the time-shifting method as a third, obligatory component of the CRN approach in all cases where the scoring scheme results in an undirected network. For the AWE scoring scheme, where the result is a directed network, the use of the time-shifting method is optional. In that case, we have two alternative CRN-approach variants: one with and one without time shifting. The time-shifting method is not applicable in the case of steady-state data. In the comparative evaluation of the variants of the CRN approach, we have considered all combinations of association measures and scoring schemes, with time shifting applied where appropriate. There are 114 candidate combinations corresponding to 114 variants of the relevance network approach. Each of the 114 variants was applied to the 47 tasks of GRN inference from time-series data and the 39 tasks of GRN inference from steady-state data. Performance was measured by comparing the inferred network structure with the structure of the given network (in the case of reconstructing known networks from simulated data) or with the structure of the best known network. 
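The time-shifting idea can be illustrated with a lagged Pearson correlation: the regulator's series is shifted against the target's, and the lag with the strongest association is kept. The series below are synthetic, and this simple grid search is only a sketch of how time shifting combines with an association measure, not the exact CRN procedure.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def best_lag(regulator, target, max_lag=3):
    """Lag (in time steps) maximizing |corr| between regulator[t] and target[t + lag]."""
    n = len(regulator)
    return max(range(max_lag + 1),
               key=lambda L: abs(pearson(regulator[:n - L], target[L:])))

# hypothetical profiles: the target follows the regulator with a one-step delay
reg = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0]
tgt = [9.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
```

The recovered lag can also suggest an edge direction (regulator precedes target), which is the intuition behind using time shifting to orient links.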
We use three performance measures, one of which is the area under the receiver-operator characteristic curve, while the other two are different versions of the area under the precision-recall curve. The goal of the comparative analysis is to identify the best-performing variants of the CRN approach and their properties. We are especially interested in finding out which association measures and scoring schemes work best and what interactions between them lead to the best performance. We also investigate the impact of the time-series length and the network size on the best-performing variants of the CRN approach. To identify the best-performing methods for a given set of GRN inference tasks, we proceed as follows. First, for each performance measure and each task, we sort and rank the methods in decreasing order with respect to their performance on the task, so the top-performing method gets the rank of 1 and the worst-performing method the rank of 114. Furthermore, for each performance measure and each method, we calculate the average rank of the method over the given set of tasks. Finally, we perform a Pareto analysis of the three-dimensional space of performance metrics to identify Pareto fronts of points corresponding to the best-performing methods, i.e., methods with the lowest average ranks. The remainder of this section provides further details on the experimental setup for performing the comparative analysis. We first introduce the tasks of GRN inference, then provide a detailed description of the performance metrics used, and conclude with a brief overview of the implementation details. The comparative study has been conducted using real and simulated micro-array data over time-course and steady-state conditions, for in silico networks and real networks of two microorganisms: Escherichia coli and Saccharomyces cerevisiae (Yeast). We use datasets from five previously published studies on GRN inference and related benchmarks. 
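The Pareto analysis over average ranks can be sketched directly: with ranks to be minimized on all three metrics, a method lies on the front when no other method is at least as good everywhere and strictly better somewhere. The rank triples below are invented for the example.

```python
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere (lower = better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated points of a minimization problem."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical average ranks (AUC, AUPRC, AUPRC-0.2) for four method variants
ranks = [(3.0, 4.0, 2.0), (5.0, 5.0, 5.0), (2.0, 6.0, 1.0), (1.0, 1.0, 3.0)]
```

Only the variant that is worse on every metric drops out; the other three trade off the metrics against each other and all remain on the front.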
In particular, the data or the simulation models used for obtaining data are based on E. coli and Yeast. We consider subnetworks of 100, 150, and 200 genes, characterized by 121, 202, and 303 existing links with an average node degree of 2.42, 2.46, and 3.03, respectively. In order to guarantee consistency between subnetworks and expression data, SynTReN generates different expression data for each selected subnetwork. Additionally, three levels of noise have been considered: 0.0 (deterministic - without noise), 0.1, and 0.5. These values represent the \u03c3 parameter of the log-normal distribution according to which the noise is generated by SynTReN. For each configuration, 6 technical replicates of 10 time points have been generated, and the expression data associated with each gene were obtained as the average over the replicates. This is necessary to cope with the nondeterministic nature of the SynTReN data generation algorithm. This source has been employed for time-series analysis only. Another source of data are the DREAM4 and DREAM5 challenges. The DREAM4 networks have been perturbed with three different approaches, producing, in total, 30 datasets (Table IS2). The latter source (the DREAM5 challenge) is considered in the analysis of both time-series and steady-state data. Originally, the challenge provides five networks, of which we consider three: Network1, Network3, and Network4, based on Affymetrix gene expression data of In silico, E. coli, and Yeast networks, respectively, taken from the Gene Expression Omnibus database. The corresponding inference tasks are referred to as IS1_1, E2_3, and Y2_3, respectively, for Network1, Network3, and Network4. For steady-state data, all three networks are considered, each with one dataset. Network1 contains 1,643 genes and 3,940 interactions (links). Network3 contains 4,511 genes and 2,066 known (existing) links with a density of 1.1. Steady-state datasets from Network1 and Network3 contain 342 records (observations), while the dataset from Network4 has 238 records. 
Complete references are given in Table . The fourth data source provides real measurements, collected as a part of a previously published study. The last data source is a benchmark study that proposes a synthetic Yeast gene network for in vivo benchmarking. To evaluate the performance of an inference method on a given task, we perform a matching between the structure (links) of the given/known GRN (true network) and the structure (links) of the inferred GRN (inferred network). Since the output of the inference method is a network connectivity matrix containing numeric link weights, we can perform the matching after setting a threshold value that decides upon the presence and absence of links. To this end, we set aside metrics that require prior assumptions, i.e., performance metrics that require a predefined or default discrimination threshold. Instead, we follow the standard framework for evaluating network inference and employ thresholding metrics, which consider the variability of the discrimination threshold and avoid setting it to a default value. Thus, methods are evaluated with regard to the complete set of possible thresholds, which results in an analysis of the performance space. For this purpose, two different spaces have been applied: the receiver operating characteristic (ROC) curve space and the precision-recall (PR) curve space. Both spaces are defined over quantities derived from a confusion matrix. We use P to denote the number of true network links and N to denote the number of absent links in the true network. The ROC curve is a two-dimensional space that illustrates the performance of a binary classifier as its discrimination threshold is varied. Since the ROC curve is two dimensional, various summary statistics can be derived from it. 
Most commonly used is the area under the curve (AUC), which quantifies the area found below the curve and is also considered in our study for the comparative evaluation. The AUC is calculated by integrating the ROC curve and expressing the result as a single quantity.

The ROC space is a unit two-dimensional space with a total area of 1. Thus, it can be plotted on a two-dimensional plot with both axes ranging from 0 to 1. Furthermore, the ROC curve is monotonic, which to a certain extent guarantees that, by considering the curve, an optimal threshold can be found. The ROC curve, or the analysis overall, is suitable for comparing a classifier with a default classifier (random selection), which is represented in the space as a diagonal line from (0,0) to (1,1).

However, ROC analysis has disadvantages as well. Mainly, it can be misinterpreted if the problem under consideration is characterized by an imbalanced distribution of class values. This disadvantage arises because true negatives are counted as correct classifications, even when the problem focuses only on the correct classification of positive examples (the minority class). Reconstruction of GRNs is such a problem: we face networks with many nodes but a very small number of existing links (minority class) and many nonexisting links (majority class). Hence, correct classification of the former is a much more complex task than correct classification of the latter. ROC analysis dismisses this difference in complexity and considers the correct classification of the minority class of existing links to be equally important as the correct classification of nonexisting links.

The AUC quantity also has its own properties. Its values range from 0 to 1; values close to 1 represent better classifiers, while values around 0.5 mean that the classifier is no better than the default (random) classifier.
Values below 0.5 mean that the evaluated classifier behaves worse than the default classifier. The disadvantages of the ROC curve are reflected in the AUC quantity as well. Namely, considering the problem of GRN reconstruction, we can end up with an overall high AUC, increased mainly by the accurate classification of the majority class (correctly predicting nonexisting links).

The PR curve is also a two-dimensional space that describes the performance of a binary classifier as its discrimination threshold is varied. Summary statistics can also be derived from the PR curve, such as the commonly used area under the PR curve (AUPRC). In this analysis, we used two different portions of the area under the curve: the partial area AUPRC-0.2 and the total area AUPRC. Guided by the importance of discovering only true links, without the expectation that all of them will be discovered, we consider the 20% of the AUPRC that corresponds to lower recall (up to 0.2), referred to as AUPRC-0.2. This means that we evaluate a classifier according to its top-scored predictions for true links. So, if the classifier has high precision within this region (subspace), then it is considered a good classifier and can ensure that the links predicted with very high scores are true links.

Unlike the ROC curve, the PR curve is not monotonic and, therefore, the performance can vary as the discrimination threshold varies. The curve is plotted in a two-dimensional unit space, with a total area of 1, and runs from zero recall to full recall. There is no PR curve for a default (random) classifier, but some agreement exists on how one could be plotted on a graph. The advantage of the PR curve in this setting is that it focuses on the correct classification of existing links and is therefore less affected by the large number of nonexisting links.

AUC and AUPRC values range from 0 to 1, while AUPRC-0.2 values range from 0 to 0.2; in all cases, higher values indicate better performance.
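The restricted area can be illustrated with a short Python sketch (illustrative names and toy data; the actual framework computes these quantities with the R package ROCR, and starting the PR curve at precision 1.0 is an assumed convention):

```python
# Illustrative sketch of AUPRC and its restricted variant AUPRC-0.2:
# trapezoidal area under the precision-recall curve, truncated at recall 0.2.

def pr_points(scores, truth):
    P = sum(truth)
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    pts = []
    for i in order:
        tp += truth[i]
        fp += 1 - truth[i]
        pts.append((tp / P, tp / (tp + fp)))   # (recall, precision)
    return pts

def area_up_to(pts, recall_cap):
    # trapezoidal integration, stopping once recall reaches the cap
    area, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in pts:
        r = min(r, recall_cap)
        area += (r - prev_r) * (p + prev_p) / 2
        prev_r, prev_p = r, p
        if prev_r >= recall_cap:
            break
    return area

# A classifier that scores the two true links highest.
pts = pr_points([0.9, 0.8, 0.6, 0.4], [1, 1, 0, 0])
auprc = area_up_to(pts, 1.0)    # total area
rauprc = area_up_to(pts, 0.2)   # restricted area, at most 0.2
```

A classifier whose top-scored predictions are all true links attains the maximal restricted area of 0.2, mirroring the intended reading of AUPRC-0.2.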
In the remainder of this article, we refer to the three performance measures introduced here as AUROC, AUPRC, and rAUPRC (restricted AUPRC).

The methods we evaluate are the 114 variants of the CRN approach. An exhaustive list of the variants is given in Appendix A.

To rank the variants of the CRN approach according to their performance on the 47 tasks for GRN inference from time-series data and the 39 tasks for GRN inference from steady-state data, we first filter out the low-performing methods by testing the statistical significance of the difference between the measured method performance and the performance of a random classifier. In particular, we employ the one-sample Student t-test to check whether the average performance of a given method on the GRN inference tasks is significantly higher than 0.5 and 0.1, respectively, for the expected AUROC and AUPRC performance of a random classifier, and 0.2 for the rAUPRC performance measure. Methods that are not significantly better than a random classifier with respect to at least one of the three performance measures are excluded from further analysis.

In the next step, for each performance measure and each GRN inference task, we rank the CRN-approach variants according to their performance on the particular task. Then, we average each method's ranks over all the tasks to obtain three mean rankings of the CRN-approach variants with respect to the AUROC, AUPRC, and rAUPRC measures. Each mean rank is normalized as (rM \u2212 1)/(N \u2212 1), where rM is the rank of the method M and N denotes the number of all compared methods. To obtain a joint ranking along the three performance measures, we employ the nondominated sorting algorithm used in multi-objective decision theory. To identify the top-ranked CRN-approach variants, we first search for a set of nondominated points in the three-dimensional space of mean ranks, i.e., we identify the Pareto front of the nondominated points in that space.
The points in the Pareto front correspond to the methods that are the best performers according to at least one performance measure. After we assign the top ranks to these CRN-approach variants, we remove them and repeat the procedure on the remaining points to obtain the subsequent Pareto fronts.

In the comparative analysis of the performance of the CRN-approach variants, the focus is on the top-ranked variants included in the first three Pareto fronts identified with the nondominated sorting algorithm. For each CRN-approach variant in these Pareto fronts, we analyze its composition in terms of the association measure and the scoring scheme employed. We proceed with the analysis as follows. First, we identify the overall top-performing methods on the 47 tasks of GRN inference from time-series data.

The comparison of performance on all the tasks of GRN inference from time-series data identifies the correlation-based measures (Fig., left) as the best-performing ones. In contrast with the clear differences in performance among the association measures, the scoring schemes cannot be so clearly differentiated (Fig., right).

To investigate the impact of time-series length on the performance of the CRN-approach variants, we clustered the 47 datasets into two groups of tasks, with short and long time series. Similarly, we consider the number of network nodes (genes) to be a measure of the network size.
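The nondominated sorting used for the joint ranking can be sketched as follows (Python for illustration; the study itself uses the R package emoa, and the rank triples below are made up):

```python
# Illustrative sketch of nondominated sorting over mean-rank triples
# (AUROC, AUPRC, rAUPRC); lower rank is better on every axis.

def dominates(a, b):
    # a dominates b if it is no worse on every measure and better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    # peel off successive Pareto fronts of nondominated points
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical mean-rank triples for four CRN-approach variants.
ranks = [(1.2, 2.0, 1.5), (2.5, 1.1, 3.0), (3.0, 3.1, 3.2), (4.0, 4.0, 4.0)]
fronts = pareto_fronts(ranks)
```

The first front contains the two variants that are each best on at least one measure; the dominated variants fall into later fronts, matching the repeated peeling described above.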
When analyzing its impact on the performance of the CRN-approach variants, we cluster the GRN inference tasks into two groups: tasks of inferring small networks and tasks of inferring large networks. For small networks, correlation-based association measures should be used, while for large ones, one should also consider symbolic measures as another valid option.

The comparison of performance on all the tasks of GRN inference from steady-state data identifies the CRN-approach variants involving association measures based on correlation and mutual information (Fig., left) as the best performers. In contrast with the notable differences in performance among the variants with different association measures, all scoring schemes can be found among the 12 top-performing variants.

Correlation-based association measures lead to top performance on both time-series and steady-state data. Furthermore, correlation-based measures perform well with other scoring schemes, except for ARACNE. The observation that correlation-based methods, when combined with a certain scoring scheme, yield overall performance improvements on time-series tasks leads to the conclusion that they can perform well with time shifting only but that, in fact, performance improvements can also be gained by selecting an appropriate scoring scheme.

Symbolic association measures have been identified as the second best-performing group of measures, frequently present among the top-performing CRN-approach variants. In contrast with the correlation-based measures, they operate on temporal data only and are therefore useful only in the context of time-series data. Also, symbolic measures appear to perform well only in combination with the AWE scoring scheme.

Overall, correlation-based association measures show robustness with regard to the selection of a scoring scheme, while the AWE scoring scheme improves performance in general, without limiting the choice of an association measure.
For tasks involving steady-state data, top-performing CRN-approach variants also include association measures based on mutual information.

The results of the analysis of method performance on datasets with varying time-series length reveal further \u201cwhat works where\u201d insights. For short time series, symbolic and mutual information association measures lead to top-performing variants of the CRN approach. Symbolic measures behave robustly and work well in combination with all scoring schemes, except the one that applies time shifting only. This leads to the conclusion that symbolic association measures are robust in general and give more flexibility in choosing a scoring scheme, but need to be corrected by a scoring scheme prior to inferring the link directions. Unlike the symbolic association measures, the ones based on mutual information do not show robustness with respect to the selection of a scoring scheme. They perform well only if combined with the ARACNE scoring scheme or with the time-shifting method applied without any scoring scheme.

The dominance of the correlation-based association measures increases in the setting of long time series. Namely, they have been identified in most of the tasks as part of the best-performing method compositions. However, they seem to perform well only in combination with the AWE scoring scheme, while time shifting only and MRNET are observed among the top performers in one case only. Competing with the correlation-based CRN-approach variants are those based on mutual information, with a strong limitation in choosing a scoring scheme, i.e., AWE with or without time shifting. In sum, when addressing an inference task involving short time series, symbolic association measures are recommended as a robust solution. For long time series, these measures become more dependent on a limited set of scoring schemes. This is a result of the fact that they examine the associations exhaustively throughout the space of time points.
Therefore, for shorter time series, they are capable of a complete search of the space, which is not the case for longer time series, where they are constrained by computational complexity. Consequently, correlation-based measures can be recommended as a robust solution for long time series, since they are not constrained by computational complexity when retrieving knowledge from larger amounts of data.

The comparative analysis of method performance over different network sizes shows more consistent results across settings on the tasks involving steady-state data. Namely, correlation-based association measures outperform all other measures across all network sizes.

For tasks involving inference of small networks from time-series data, the correlation-based group of measures performs well in combination with all scoring schemes, except ARACNE and MRNET. As in the general case, ARACNE performs aggressive cutoffs of inferred links, without considering the difference between estimated associations. From these observations, we can conclude that correlation-based association measures are the most robust solution and allow flexibility in choosing a scoring scheme and in constructing a customized CRN approach. For tasks involving inference of large networks from time-series data, the association measures of choice are the symbolic ones and mutual information. The former are to be combined with the AWE scoring scheme, while the latter can be used without a scoring scheme or combined with AWE.

Finally, it is worth mentioning that distance-based association measures have not been identified among the best-performing association measures in any of the settings considered.
Thus, they are excluded from the list of recommended groups of association measures worth considering for tasks of GRN inference from time-series or steady-state data.

The comparative analysis presented here is based on an extensive empirical evaluation of the performance of 114 variants of the general relevance network approach on 86 tasks of inferring gene regulatory networks from time-series and steady-state data. The 114 CRN-approach variants are based on six general classes of association measures (with a variety of parameter settings) and six scoring schemes, some of which are accompanied by a time-shifting method for inference of the direction of network links from time-series data. The performance of the CRN-approach variants is measured using three different performance metrics widely used in other studies on inferring gene regulatory networks from data.

The main contribution provided here is the general framework for comparative evaluation of the numerous variants of the general relevance network approach to inference of gene regulatory networks. The proposed framework is flexible and modular; one can easily extend it along any dimension of comparison, such as adding new association measures, scoring schemes, performance metrics, or network inference tasks. The publicly available source code of the implemented framework allows for simple implementation of such extensions, as well as for reproducing the results presented in this study.

The main motivation for the evaluation that we performed is answering the question, \"What works where?\" The answer provides important guidance for applying the generalized relevance network approach in a particular situation, in terms of selecting an appropriate combination of an association measure and a scoring scheme that would lead to reasonably good performance on a given dataset. Another important aspect of our survey and comparative analysis is that it involves tasks of inference from both time-series and steady-state data.
The results of the comparative analysis lead to the following recommendations for configuring the generalized relevance network approach. In general, the safest combination is a correlation-based association measure with the AWE scoring scheme, for both time-series and steady-state data. The association measures based on simple distances and dynamic time warping never lead to a top-performing variant of the CRN approach. For short time series (with fewer than 10 time points), the general class of symbolic association measures leads to the best-performing variants of the CRN approach; these measures can be combined with an arbitrary scoring scheme. For long time series (of at least 10 time points), the general recommendation is to combine a correlation-based association measure with the AWE scoring scheme. For large networks with more than 100 nodes, symbolic association measures (combined with the AWE scoring scheme) gain an edge over the correlation-based ones when inducing GRNs from time-series data.

While this set of recommendations provides clear guidance for selecting an appropriate variant of the generalized relevance network approach, further experiments are necessary to strengthen the generality of the results. This is especially true for the results on the impact of network size; too few large networks are included in the current set of inference tasks. In future work, one could extend the set of inference tasks with ones that involve networks with a varying number of nodes. Future work could also exploit an important source of input for the relevance network approach, not considered in this study, namely, expert knowledge about the presence or absence of certain links in the network. Note, however, that none of these limitations of our study represents an obstacle to applying the proposed framework for empirical evaluation.
The framework is flexible enough to be used for comparative analysis on extended sets of GRN tasks and CRN-approach variants (methods).

We implemented all the components and the variants of the CRN approach in the R software environment for statistical computing. We implemented most of the components using standard R functions, except for the association measure based on the dynamic time warping (DTW) measure of distance between time series, for which we used the DTW implementation in the R package dtw.

Project name: RN-approach project
Project home page: https://vkuzmanovski@bitbucket.org/vkuzmanovski/rn-approach.git
Operating system(s): Platform independent
Programming language: R
Other requirements: None
License: FreeBSD
RRID: SCR_016488 (SciCrunch.org)

To calculate the values of the performance metrics, we used the functions implemented in ROCR, an R package for evaluating the performance of classifiers. For the nondominated sorting, we used the R package emoa.

Project name: RN-evaluation project
Project home page: https://vkuzmanovski@bitbucket.org/vkuzmanovski/rn-evaluation.git
Operating system(s): Platform independent
Programming language: R
Other requirements: None
License: FreeBSD
The complete materials, including data and source code, are also available publicly through the GigaDB repository.

ARACNE: Algorithm for the Reconstruction of Accurate Cellular Networks; AUC: area under the curve; AUPRC: area under the precision-recall curve; AWE: Asymmetric Weighting; CLR: Context Likelihood of Relatedness; CRN: causal relevance network; DTW: dynamic time warping; FN: false negative; FP: false positive; GRN: gene regulatory network; MRNET: Maximum Relevance/minimum redundancy NETwork; PR: precision-recall; ROC: receiver operating characteristic; TN: true negative; TP: true positive.

The authors declare no competing financial, professional, or personal interests that might have influenced the performance or presentation of the work described in this manuscript.

The study was financially supported by the Slovenian Research Agency, the Slovenian Ministry of Education, Science and Sport (through funding agreement C3330-17-529020), and the European Commission.

S.D. initiated the study and formulated the general methodological problem and the specific application domain. V.K. and L.T. designed and implemented the computational and evaluation framework. All authors conceived and planned the experiments. V.K. and L.T. performed the experiments and analyzed the results. V.K. and L.T. drafted the manuscript.
All authors reviewed and approved the final manuscript.

Supplementary material available with this article: Authors_Response_To_Reviewer_Comments_Revision_1.pdf; GIGA-D-17-00222_.pdf; GIGA-D-17-00222_Revision_1.pdf; GIGA-D-17-00222_Revision_2.pdf; Response_To_Reviewer_Comments_.pdf; Reviewer_1_Report -- Frank Emmert-Streib (6/10/2017, reviewed); Reviewer_2_Report -- Michele Ceccarelli (1/23/2018, reviewed); Reviewer_2_Report_Revision_1 -- Michele Ceccarelli (7/27/2018, reviewed); Supplemental File."}
+{"text": "Integrated analysis that uses multiple sample gene expression data measured under the same stress can detect stress response genes more accurately than analysis of individual sample data. However, the integrated analysis is challenging since experimental conditions (strength of stress and the number of time points) are heterogeneous across multiple samples. HTRgene is a computational method to perform the integrated analysis of multiple heterogeneous time-series data measured under the same stress condition. The goal of HTRgene is to identify \u201cresponse order preserving DEGs\u201d, defined as genes that are not only differentially expressed but whose response order is also preserved across multiple samples. The utility of HTRgene was demonstrated using 28 and 24 time-series sample gene expression datasets measured under cold and heat stress in Arabidopsis. HTRgene analysis successfully reproduced known biological mechanisms of cold and heat stress in Arabidopsis. Also, HTRgene showed higher accuracy in detecting the documented stress response genes than existing tools. HTRgene, a method to find the ordering of the response times of genes that are commonly observed among multiple time-series samples, successfully integrated multiple heterogeneous time-series gene expression datasets. It can be applied to many research problems related to the integration of time-series data analysis.

Over the past two decades, the rapid development of molecular measurement technologies, such as microarray and RNA sequencing, has produced large amounts of gene expression data. Analysis to detect differentially expressed genes (DEGs) has been widely performed to identify stress response genes. The motivation of this paper is to propose a more robust DEG detection method based on the integrated analysis of multiple gene expression datasets for a stress. The integrated analysis for DEG detection is now possible since time-series gene expression datasets measured under the same stress are increasing in number and are available for integrated analysis.
For instance, the OryzaExpress database provides gene expression datasets collected from multiple studies. The integrated analysis for DEG detection is a new type of DEG detection approach: many DEG methods exist, but existing methods have mainly focused on the analysis of individual experiments and have not considered the interrelationships among samples. For instance, the pair-wise DEG detection approach compares the expression values of genes before and after stress treatment using statistical models, such as DESeq, edgeR, and limma.

Heterogeneous meta-properties mean that experimental conditions differ across samples in stress strength, sample age, and measured time points, e.g., measurements at <0,2,8> hours in one sample vs. 21-day-old plants under 38\u2218C heat stress measured at <0,2,4,10> hours in another. Heterogeneous meta-properties complicate integrated analysis in two ways. First, different environmental conditions cause differences in the timing of the biological system's response to stress; for example, the response time of the same gene is delayed in a stress-resistant condition sample (e.g., 4 h in a mature and low-temperature-treated sample) relative to a stress-sensitive condition sample (e.g., 2 h in an infant and high-temperature-treated sample). Second, different time points cause unmeasured time points in the time-series dataset; therefore, we may not know the corresponding expression levels in another sample.

Generally, DEG detection analysis of stress data investigates the change of gene expression levels before and after the response time to the stress. However, heterogeneous meta-properties make it difficult to specify the response time. Our approach is based on the idea that the response order of genes will be preserved even if the response times of genes are delayed or advanced across multiple samples. This is based on the biological knowledge that biological adaptation to stress is a deterministic and sequential process: a gene activates its target genes, and this regulation continues according to a deterministic stress response pathway.
Based on this idea, we developed HTRgene, a method to identify \u201cresponse order preserving DEGs\u201d across multiple time-series samples. The unspecified response time issue makes the integrated analysis of time-series data much more challenging than the analysis of an individual time-series dataset. In order to address this issue, our work builds on the definition of stress response time used in the studies of Chechik and Yosef. HTRgene is an algorithm to identify \u201cresponse order preserving DEGs\u201d by the integrated analysis of multiple heterogeneous time-series gene expression datasets, based on the following definitions.

Suppose that time-series sample i is measured at li time points, resulting in eg,i,j, the expression level of a gene g in sample i at time point j. Then, let Ag,i,j be the set of expression levels of gene g in sample i after time point j, including j, i.e., {eg,i,j,\u2026,eg,i,li}, and Bg,i,j be the set of expression levels of gene g in sample i before time point j, excluding j, i.e., {eg,i,1,\u2026,eg,i,j\u22121}.

A response time (RT) of a gene g in sample i is the time point j where a statistical test of the significance of the expression level difference between Ag,i,j and Bg,i,j is maximized. A response time vector collects the response times of gene g over the m samples, i.e., <RTg,1,\u2026,RTg,m>. The order of two response time vectors defines a binary ordering of two genes. A longest response schedule is a longest consistent ordering of genes for a set of binary orderings of genes based on their response time vectors. Response order preserving DEGs are defined as DEGs belonging to the longest response schedule. A response phase is the position of a response in the response schedule.

Two computational issues arise in discovering response order preserving DEGs. Complexity issue: the number of genes determines the complexity of determining and ordering response times. It is known that 27,416 coding genes exist in Arabidopsis, which results in a huge search space. Noise issue: noise often occurs when measuring gene expression.
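The RT definition above can be sketched in a few lines of Python (illustrative only; the plain t-like statistic, the function names, and the toy expression profile are assumptions, not the paper's exact test):

```python
import math
import statistics

# Illustrative sketch of the response time (RT) definition: scan candidate
# split points j and keep the one maximizing the separation between
# B = levels before j and A = levels from j onward.

def t_stat(a, b):
    # Welch-style statistic standing in for the paper's significance test
    va, vb = statistics.variance(a), statistics.variance(b)
    diff = abs(statistics.fmean(a) - statistics.fmean(b))
    return diff / math.sqrt(va / len(a) + vb / len(b))

def response_time(levels):
    # require at least two points on each side of the split
    best_j, best_t = None, -1.0
    for j in range(2, len(levels) - 1):
        t = t_stat(levels[j:], levels[:j])
        if t > best_t:
            best_j, best_t = j, t
    return best_j

# A gene flat for 4 time points that jumps after responding to stress:
expr = [0.1, 0.2, 0.1, 0.2, 2.1, 2.0, 2.2, 2.1]
rt = response_time(expr)   # split at index 4, the onset of the jump
```

Collecting such RTs for one gene over all samples yields the response time vector used to order genes.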
The noise in the expression value of a gene can introduce noise into its response time and, in turn, into the entire response ordering, making the overall result unstable.

HTRgene's idea for reducing the complexity and the effect of noise is to determine and order the response times at the gene cluster level, not at the gene level. First, each sample is normalized by setting the base expression level (at T=0) to zero. Different base normalization methods are used depending on the shape of the data distribution; for instance, when the expression levels of a gene follow a normal distribution, substitution-based normalization is used.

From the normalized time-series gene expression data, HTRgene discovers consensus DEGs that are differentially expressed across multiple time-series samples. First, differential expression tests are performed using the limma tool for each sample. If a gene is differentially expressed in at least one time domain in a sample, the gene is considered a DEG in that single time-series sample. After detecting single-sample DEGs for each sample, a gene \u00d7 sample matrix is constructed, where an element is 1 if gene i is determined to be a DEG in sample j, and 0 otherwise. Then, a statistical test is performed to investigate in how many samples a gene must be detected in order to be considered a consensus DEG for multiple samples. The elements of the gene \u00d7 sample matrix are randomly shuffled, and the number of samples in which each gene is a DEG is counted to generate a background distribution of DEG frequency. The p-value of each gene's DEG frequency is measured against this background distribution, and Benjamini-Hochberg multiple correction is performed; genes with adj.p<0.05 are considered consensus DEGs.

To determine the response time points of the multiple time-series samples, clustering of genes is performed across different samples, and K gene clusters are produced, {C1,\u2026,CK}. Among them, small clusters with fewer than three member genes are discarded.
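The consensus-DEG test described above can be sketched as follows (Python for illustration; the toy matrix, the permutation count, and the use of a single shuffled row for the background are simplifying assumptions):

```python
import random

# Illustrative sketch: shuffle the gene x sample 0/1 DEG matrix to build a
# background distribution of per-gene DEG frequencies, derive empirical
# p-values, then apply Benjamini-Hochberg correction.

def consensus_deg_pvalues(matrix, n_perm=2000, seed=0):
    rng = random.Random(seed)
    flat = [v for row in matrix for v in row]
    n_samples = len(matrix[0])
    freqs = [sum(row) for row in matrix]
    background = []
    for _ in range(n_perm):
        rng.shuffle(flat)
        background.append(sum(flat[:n_samples]))  # frequency of one pseudo-gene
    return [sum(b >= f for b in background) / n_perm for f in freqs]

def benjamini_hochberg(pvals):
    # classical step-up BH adjustment
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for pos in range(m - 1, -1, -1):
        i = order[pos]
        prev = min(prev, pvals[i] * m / (pos + 1))
        adj[i] = prev
    return adj

# One gene that is a DEG in all 6 samples, three genes that are DEGs in one.
matrix = [[1, 1, 1, 1, 1, 1],
          [1, 0, 0, 0, 0, 0],
          [0, 0, 1, 0, 0, 0],
          [0, 1, 0, 0, 0, 0]]
p = consensus_deg_pvalues(matrix)
```

Only the gene detected as a DEG in all samples receives a small empirical p-value, mirroring how consensus DEGs are retained after the adj.p<0.05 cutoff.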
To address the three-dimensional structure of multiple time-series samples (genes \u00d7 samples \u00d7 time points), our clustering analysis follows the approach proposed by TimesVector.

The goal of the next step is to determine the response time vector of each cluster Ci. Determining an optimal response time vector is a computationally complex problem because of its exponentially increasing search space. To handle the big search space issue, a hill-climbing approach is used to determine the optimal RT solution: an initial RT is generated and then iteratively refined. The hierarchical clustering of genes is used to generate the initial RT; after initialization of an RT, candidate RTs are generated by changing the response time of one sample at a time. For convenience, we omit Ci when we discuss an RT.

For each gene gj\u2208Ci, the expression values of gene gj in sample si before the response time point are assigned to the set B and those after it to the set A, and a quality score of gj is measured from the significance of the difference between A and B. The quality score of a cluster Ci is defined as the average of the quality scores of all genes in Ci.

After measuring the quality scores of all clusters, the ResponseSchedule is constructed. Informally, a response schedule is a set of clusters that are consistently ordered in terms of their response times. Among all clusters, the goal is to select and order a set of clusters that are consistent in terms of response times. To do this, the cluster Ci with the next best quality score is tested for conflicts with any of the clusters already included in the schedule; if there is no conflict, Ci is added to the schedule, and otherwise Ci is discarded. This process ends when there is no cluster left to be considered. Finally, the \u201cresponse phases\u201d are defined as the positions of the clusters remaining in the ResponseSchedule.

The number of gene clusters K was chosen empirically by examining how many ground truth genes were included in the clustering result; in our experiment, the top-ranked DEGs were selected as ground truth genes. K was increased from 50 to half of the number of consensus DEGs in steps of 50, and the K maximizing the F1 score, which measures the association between the resultant genes and the top-ranked DEGs, was selected. The best K was 200 in both the cold and heat experiments.
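The empirical choice of K can be sketched as follows (Python for illustration; the candidate gene sets per K are hypothetical stand-ins for real clustering output, not HTRgene's data):

```python
# Illustrative sketch: pick the cluster count K whose retained genes best
# match a ground-truth set (here, top-ranked DEGs) by F1 score.

def f1(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

def best_k(results_by_k, truth):
    return max(results_by_k, key=lambda k: f1(results_by_k[k], truth))

truth = {'g1', 'g2', 'g3', 'g4'}          # hypothetical top-ranked DEGs
results_by_k = {
    50:  {'g1', 'g9'},                     # too coarse: misses most DEGs
    100: {'g1', 'g2', 'g3', 'g8'},         # good trade-off
    150: {'g1', 'g5', 'g6', 'g7', 'g8'},   # noisy: many non-DEG genes kept
}
chosen = best_k(results_by_k, truth)
```

The same scan, run from K=50 up to half the number of consensus DEGs, yields the reported optimum of K=200.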
HTRgene was run for each candidate number of clusters. Alternatively, the user can use genes with stress-related Gene Ontology (GO) terms to determine the number of clusters. In this paper, however, genes with cold/heat stress-related GO terms are used to evaluate the performance of the tools in the further analysis.

HTRgene analysis was performed for heat and cold stress time-series data in Arabidopsis. Raw data for each stress were collected from GEO and ArrayExpress. The HTRgene analysis outputted 425 and 272 candidate response genes, assigned to 12 and 8 response phase gene clusters for the cold and heat stress datasets, respectively.

The HTRgene analysis for cold stress data discovered 425 response order preserving DEGs belonging to 12 response phase clusters. The results were compared to known cold stress pathway genes summarized in review papers. The cold stress signal, in the signal transmission level pathway, affects membrane rigidity and changes the concentration level of Ca2+. Then, the activation status of proteins is sequentially changed, involving CBL-CIPKs, CPKs, CLRK, MEKK1, MKK2, MPK3/4/6, CAMTA3, and ICE1. CLRK is a Ca2+/CaM-regulated receptor-like kinase that activates the MEKK1-MKK2-MPK4/6 cascade. CAMTA3 and ICE1 are activated at the last stage of the signal transmission level pathway. In the TF cascade level pathway, CAMTA3 and ICE1 bind to cis-elements, such as MYB and CG1, of the C-repeat binding factor (CBF) family genes, including CBF1/DREB1B, CBF2/DREB1C, and CBF3/DREB1A. In the downstream gene level pathway, HTRgene assigned 21 genes that were reported as downstream genes of CBFs to the \u201cp4,\u201d \u201cp6,\u201d \u201cp7,\u201d \u201cp8,\u201d \u201cp9,\u201d \u201cp10,\u201d \u201cp11,\u201d and \u201cp12\u201d response phase gene clusters, which are later than the response phase of the CBFs.
Collectively, it was shown that the HTRgene analysis successfully reproduced known biological mechanisms for cold stress.

The integrated analysis for heat stress data produced 272 candidate response genes in 7 response phase clusters. The results were also compared to the known heat stress pathway. The heat stress signal, in the signal transmission level pathway, alters membrane rigidity and the concentration levels of ROS and Ca2+. Then, the activation status of some proteins is sequentially changed, involving CBL-CIPKs, CPKs, PP7, CDKA1, CBK3, and HSFA1s. CPKs have Ca2+-sensing and kinase functions involved in development and various abiotic stress responses. CBK3 is a well-known CaM-binding protein kinase that positively regulates the phosphorylation of HSFA1 in the heat-shock response.

Pathway enrichment analyses of the 12 and 7 clusters for cold and heat stress, respectively, were performed. We also investigated how TFs are likely to regulate other genes through TF network analysis. To construct the TF network, a template TF network including 599 TFs was downloaded from the PlantRegMap database. The template TF network was refined by TF binding motif existence, and then a network clustering algorithm, GLay, was applied.

HTRgene was evaluated in comparison with existing tools. Qualitatively, HTRgene produces more informative output than other stress data analysis tools because it discovers not only candidate response order preserving DEGs but also response phases, whereas DEG detection tools, e.g., DESeq, edgeR, and limma, do not provide response phases. The p-values of the Chi-squared tests are summarized in the Additional file. HTRgene was quantitatively compared with the other tools only in terms of the accuracy of determining candidate stress response genes, because the existing tools do not provide response phases.
First, we determined ground truth genes as 330 and 158 genes with GO annotation "response to cold" and "response to heat" from the TAIR database. Then, the genes reported by each tool were compared against these ground truth gene sets. To detect stress response signaling genes, HTRgene was developed to find a specific pattern: the ordering of response times of genes preserved among multiple gene expression time-series data. However, the problem of determining and ordering response times has a high complexity of O(n!), where n is the number of genes. We thus use clustering analysis to reduce the complexity of the problem from the number of genes to the number of gene clusters. Also, we take a greedy approach to find the longest ordering of response times. The greedy approach scans gene cluster by gene cluster, starting from the gene clusters with more differential expression. Thus, although our greedy-based method may not produce the globally optimal solution, the result of our approach is likely to include differentially expressed genes, which is a very clear signal of stress. The overlap with the ground truth genes was highly significant (p < 10^-45). Some genes, such as ERD7, LKP2, and COR27, were excluded after consideration of the response ordering. In addition, some experiments provide non-stress-treated time-series samples as control data, and experiments differ in phenotype domain (the tissue and the age of the samples). We developed and implemented HTRgene, a method to integrate multiple heterogeneous time-series gene expression datasets to find the ordering of response times of genes that are commonly observed among multiple time-series samples. Our strategy of defining and using response times is very effective in producing not only gene clusters but also the order of gene clusters. The utility of HTRgene was demonstrated in the investigation of stress response signaling mechanisms in Arabidopsis.
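The cluster-level greedy ordering described above can be sketched as follows. This is an illustrative simplification, not the HTRgene implementation; the per-sample response times and differential-expression scores are hypothetical inputs.

```python
def consistent(a, b):
    """True if cluster a responds no later than cluster b in every
    sample, or vice versa, i.e., the pair has a preserved order."""
    earlier = all(x <= y for x, y in zip(a, b))
    later = all(x >= y for x, y in zip(a, b))
    return earlier or later

def greedy_order(clusters, de_score):
    """clusters: dict name -> per-sample response times.
    de_score: dict name -> differential-expression score.
    Greedily build a chain of clusters whose response-time order is
    preserved across all samples, scanning clusters from the most to
    the least differentially expressed."""
    chain = []
    for name in sorted(clusters, key=de_score.get, reverse=True):
        if all(consistent(clusters[name], clusters[c]) for c in chain):
            chain.append(name)
    # order the accepted clusters by their mean response time
    chain.sort(key=lambda c: sum(clusters[c]) / len(clusters[c]))
    return chain

# Toy example: three clusters, three samples each.
clusters = {"c1": [1, 1, 2], "c2": [3, 4, 5], "c3": [2, 5, 1]}
scores = {"c1": 2.0, "c2": 3.0, "c3": 1.0}
chain = greedy_order(clusters, scores)
```

In the toy example, "c3" is rejected because its response times are earlier than "c2" in some samples and later in others, so no preserved order exists for that pair.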
The HTRgene integration analysis for 28 and 24 time-series sample gene expression datasets under cold and heat stress successfully reproduced known biological mechanisms of cold and heat stress in Arabidopsis. Additional file 1: Table S1. Association between predicted genes and ground truth genes for cold stress analysis. Table S2. Association between predicted genes and ground truth genes for heat stress analysis."}
+{"text": "Differential expression analysis identifies global changes in transcription and enables the inference of functional roles of applied perturbations. This approach has transformed the discovery of genetic drivers of disease and possible therapies. However, differential expression analysis does not provide quantitative predictions of gene expression in untested conditions. We present a hybrid approach, termed Differential Expression in Python (DiffExPy), that uniquely combines discrete, differential expression analysis with in vitro, time-series RNA-seq data from several genetic PI3K/PTEN variants of MCF10a cells stimulated with epidermal growth factor. DiffExPy proposed ensembles of several minimal differential equation systems for each differentially expressed gene. These systems provide quantitative models of expression for several previously uncharacterized genes and uncover new regulation by the PI3K/PTEN pathways. We validated model predictions on expression data from conditions that were not used for model training. Our discrete, differential expression analysis also identified SUZ12 and FOXA1 as possible regulators of specific groups of genes that exhibit late changes in expression. Our work reveals how DiffExPy generates quantitatively predictive models with testable, biological hypotheses from time-series expression data. To demonstrate the distinct insight provided by DiffExPy, we applied it to published in vitro, time-series RNA-seq data. DiffExPy is available on GitHub (https://github.com/bagherilab/diffexpy). Supplementary information is available at Bioinformatics online. DEGs are often split into groups of genes that are overexpressed or underexpressed based on the sign of the fold change and an adjusted P-value. However, for a given gene, existing analyses cannot predict if that gene will be overexpressed, underexpressed or unchanged when a different drug is applied. 
Researchers can only infer how the regulation might occur and qualitatively predict how expression will differ in untested contexts. Distinct from statistical enrichment approaches, differential equation models aim to use mechanistic information to describe how species, such as genes or proteins, interact, and are well-suited to quantitatively predict gene expression in untrained conditions. However, designing and fitting differential equation parameters requires sufficient data; therefore, such models only exist for a few well-studied systems. DiffExPy first builds a library of simulated systems that mimic the experimental conditions. Then, the discrete response of a gene is matched to models in the simulation library to train an ensemble model. This trained model can predict that gene's expression in new conditions. DiffExPy also clusters genes by discrete response, and infers the timing of regulatory events by associating these gene groups with TFs and GO terms. The studied pathway involves phosphatidylinositol 3-kinase (PI3K) and phosphatase and tensin homolog (PTEN), which respectively phosphorylate and dephosphorylate phosphatidylinositol-4,5-bisphosphate (PIP2) to and from phosphatidylinositol-3,4,5-trisphosphate (PIP3). PIP3 regulates many downstream pathways, most notably the AKT pathway. The dataset includes PTEN knockout (PTEN KO), A66-treated cells and PI3K knockin (PIK3CA H1047R) conditions. A66 inhibits p110α, the PIK3CA gene product, and we refer to it as the inhibited condition (PI3Kinh). The histidine-to-arginine substitution makes PIK3CA constitutively active, and we refer to it as the knockin (PI3K KI) condition. 
In the original study, expression was measured from MCF10a cells, a commonly used human breast epithelial cell line, stimulated with epidermal growth factor (EGF) using RNA-seq in three replicates at 0, 15, 40, 90, 180 and 300 min after EGF stimulation. We demonstrate the efficacy of DiffExPy on publicly available RNA-seq data from the Gene Expression Omnibus (GEO) repository, accession number GSE69822. Using the differential-expression data, we also examined the influence of suppressor of zeste 12 (SUZ12) and forkhead box A1 (FOXA1) on their target genes. DiffExPy is distinct from the status quo in generating dynamical system models de novo for many genes that were not previously characterized. Currently, DiffExPy is limited in that it constructs small models based on time-series data from individual perturbations. However, DiffExPy is readily extensible and can be adapted to other differential-expression packages, model assumptions and genomic data. For example, future improvements to DiffExPy could be made to incorporate multiple perturbations, additional omics data types and prior knowledge. Our work provides a foundation on which more complex models of gene expression can be developed. Using time-series RNA-seq data, DiffExPy sorts genes according to their discrete, dynamic, differential gene expression profiles. Pairwise (PW) limma contrasts compare expression between experimental conditions at each time point. Time-series (TS) contrasts compare expression between a time point and the previous time point. Autoregressive (AR) contrasts compare expression between a time point and the time point before the treatment was applied. A detailed description of these and other combinations of contrasts (PW-TS and PW-AR) is available in the supplementary material. To facilitate downstream analyses, DiffExPy calculates a discrete response for each gene based on the P-values and signs of LFC for the individual contrasts. If the P-value for a contrast is above the user-specified threshold, the LFC is not considered significant and is set to zero. The discrete response for gene i is defined per contrast type x, where x is one of the set {PW, TS, AR, PW-TS or PW-AR}. 
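The TS and AR contrasts described above can be illustrated on a single expression trajectory. This is a minimal sketch; the function names are ours, not part of the DiffExPy API.

```python
import numpy as np

def ts_contrasts(x):
    """Time-series contrasts: each time point versus the previous one."""
    x = np.asarray(x, dtype=float)
    return x[1:] - x[:-1]

def ar_contrasts(x):
    """Autoregressive contrasts: each time point versus the
    pre-treatment baseline (t = 0)."""
    x = np.asarray(x, dtype=float)
    return x[1:] - x[0]
```

For a trajectory [0, 1, 3], TS contrasts give the step-to-step changes while AR contrasts give the cumulative change from baseline.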
The discrete values are calculated using the signs of the LFC values as follows: for each contrast l, the discrete value is sign(LFC(l)) if p(l) < pcut, and 0 otherwise, where p(l) is the adjusted P-value of the LFC for the contrast and pcut is the significance threshold. For a time series of T time points, each discrete response has one value per contrast. We did not filter LFC values by magnitude, but the option is provided in DiffExPy. We used a P-value threshold of 0.05 for all of our tests. A lower P-value cut-off could result in discrete responses with more zero values. DiffExPy uses GeneNetWeaver (GNW) to generate minimal differential equation models for unique, three-node networks and to carry out stochastic simulations. GNW models include a protein and an mRNA component. Prediction error is quantified as the mean squared error (MSE) between a model's average LFC (of three stochastic runs) and the true LFC. MSE values range from 0 to ∞, where smaller values indicate that the prediction is closer to the true LFC value. We know of no existing, data-driven method that generates models capable of quantitative, time-series, gene expression prediction to provide an appropriate basis for comparison. Thus, we used the selection of a random model from our library as the null model comparison. We validated the accuracy of the quantitative predictions by calculating the error between predicted and measured expression: models were fit on the PI3Kinh and WT data, and we used the PI3K KI data as test data to validate the predictions. Simulations for each network model were created to match the PI3K genetic condition and EGF stimulation. The simulated data are sampled at the same time points used in the experiments. Details of the library creation are provided in the supplementary material. We used the discrete response profiles to match each gene to an ensemble of three-node SDE models that each share similar dynamics upon simulation. Our library consists of 2172 uniquely structured SDE models. 
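The sign-and-threshold discretization used to build these discrete response profiles admits a compact sketch. This is a hypothetical helper, not DiffExPy code; it assumes vectors of per-contrast LFCs and adjusted P-values.

```python
import numpy as np

def discrete_response(lfc, pvals, pcut=0.05):
    """Discretize log-fold changes: sign(LFC) where the adjusted
    P-value passes the cutoff, 0 otherwise (one value per contrast)."""
    lfc = np.asarray(lfc, dtype=float)
    pvals = np.asarray(pvals, dtype=float)
    d = np.sign(lfc).astype(int)
    d[pvals >= pcut] = 0  # non-significant contrasts contribute 0
    return d
```

With the default pcut of 0.05, a contrast with LFC -0.8 and P-value 0.2 is set to 0, while significant contrasts keep only the sign of their fold change.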
We trained the models using the PI3Kinh and WT conditions; the PI3Kinh perturbation does not affect the expression of many genes. Each three-node model describes how a gene might interact with the perturbed gene (node G) and the rest of the genome (node x). In this experiment, G represents PI3K as it was the knocked-out, knocked-in or inhibited gene. Summaries of the models that match each gene and create the quantitative predictions reveal possible regulatory interactions that result in the observed dynamics; examples include cytoplasmic linker associated protein 1 (CLASP1), regulator of cell cycle (RGCC) and retinoic acid receptor alpha (RARA). CLASP1 and RARA were previously shown to interact with components of the PI3K/AKT pathway. We scored genes by how much their ensemble MSE is lower than random (ΔMSE). After ranking genes by mean PI3Kinh-WT LFC, we applied an elbow rule to select the top 40 genes. Our results indicate that the top-ranked genes have significantly more accurate predictions than random models. The top genes have a median ΔMSE of 0.523 (P = 1.45e-6) and a %MSE of 33.3%. Term depth quantifies the level in the GO hierarchy, and it is used as a proxy for term specificity. Even though no genes are categorized exclusively as dDEGs or DRGs, there are very specific terms associated only with these groups. Similar to GO term enrichment analysis, we calculate TF enrichment for gene clusters. A group of genes enriched for association with a particular TF may indicate that the TF is responsible for the observed change in expression. Existing methods, such as weighted gene co-expression network analysis (WGCNA) and dynamic regulatory events miner (DREM), perform clustering of gene profiles for subsequent GO and TF enrichment analysis. In contrast, DiffExPy clusters genes by their discrete responses. The set of enriched TFs (P < 0.05) includes suppressor of zeste 12 (SUZ12) and forkhead box A1 (FOXA1), which were not identified in the original study. SUZ12 is a zinc finger protein and a component of the polycomb repressive complex 2 (PRC2). 
PRC2 has histone methylation activity, yet its regulatory role in cell fate is uncertain. Funding: NSF CAREER Award [CBET-1653315 to N.B.]; NIH NU-CCNE (U54 CA199091-03); and the McCormick School of Engineering. Conflict of Interest: none declared. btz256_Supplementary_File."}
+{"text": "The inherent stochasticity of gene expression in the context of regulatory networks profoundly influences the dynamics of the involved species. Mathematically speaking, the propagators which describe the evolution of such networks in time are typically defined as solutions of the corresponding chemical master equation (CME). However, it is not possible in general to obtain exact solutions to the CME in closed form, which is due largely to its high dimensionality. In the present article, we propose an analytical method for the efficient approximation of these propagators. We illustrate our method on the basis of two categories of stochastic models for gene expression that have been discussed in the literature. The requisite procedure consists of three steps: a probability-generating function is introduced which transforms the CME into (a system of) partial differential equations (PDEs); application of the method of characteristics then yields (a system of) ordinary differential equations (ODEs) which can be solved using dynamical systems techniques, giving closed-form expressions for the generating function; finally, propagator probabilities can be reconstructed numerically from these expressions via the Cauchy integral formula. The resulting 'library' of propagators lends itself naturally to implementation in a Bayesian parameter inference scheme, and can be generalised systematically to related categories of stochastic models beyond the ones considered here. Understanding the process of gene expression in the context of gene regulatory networks is indispensable for gaining insight into the fundamentals of numerous biological processes. However, gene expression can be highly stochastic in nature, both in prokaryotic and in eukaryotic organisms; see e.g. the work by Elowitz et al. and by Raj and van Oudenaarden. To test the validity of such stochastic models, a comparison with experimental data needs to be performed. 
The development of experimental techniques, such as time-lapse fluorescence microscopy, has made such comparisons feasible. Our procedure consists of three steps. PDE system: a probability-generating function is introduced which transforms the CME into (a system of) partial differential equations (PDEs). ODE system: applying the method of characteristics, combined, if necessary, with perturbation techniques, we transform the system of PDEs obtained in step 1 into a dynamical system, that is, a system of ODEs. Explicit solution: making use of either special functions (model A) or multiple-time-scale analysis (model B), we obtain explicit solutions to the dynamical system found in step 2. We emphasise that the 'characteristic system' of ODEs which is obtained in step 2 is low-dimensional, in contrast to the underlying CME system, and that it exhibits additional structure, allowing for the derivation of a closed-form analytical approximation for the associated generating function. Both model A and model B are formulated in terms of the chemical master equation (CME), which is the accepted mathematical representation of stochastic gene expression in the context of the model categories considered here; cf. Iyer-Biswas and Jayaprakash and Shahrezaei and Swain. Reverting these steps requires three ingredients. Initial conditions are originally stated in terms of the CME, and first have to be reformulated in terms of the corresponding system of PDEs to ensure well-posedness; then, initial conditions can be extracted for the dynamical system that was obtained via the method of characteristics, reverting step 3. To transform solutions to the characteristic system into solutions of the underlying PDE system, the associated 'characteristic transformation' has to be inverted, reverting step 2. Lastly, solutions of the CME have to be extracted from solutions to the resulting PDE system, reverting step 1. Although the correspondence between the two sets of solutions is exact, theoretically speaking, the complexity of the expressions involved precludes the efficient analytical reconstruction of propagators from their generating functions. 
Therefore, we propose a novel hybrid analytical-numerical approach which relies on the Cauchy integral formula. The various steps in our analytical method, as indicated above, are represented in Fig. 1. To convert the results of the above procedure into solutions to the original stochastic model, the three steps involved in our analysis have to be reverted; to that end, we require the three ingredients described above. We first demonstrate our analytical method in the context of an autoregulatory stochastic gene expression model (model A), as presented by Iyer-Biswas and Jayaprakash. The basic stochastic model for gene expression is represented by a reaction scheme in which the gene switches between an inactive state (D) and an active state, with the active gene producing protein (P) at a given rate; the gene is thus either inactive (0) or active (1). In the case of autoactivation, the protein enhances activation of the gene, thereby accelerating its own production; in the case of autorepression, it promotes the inactive state. The time variable is nondimensionalised by the protein decay rate. The CME system associated to the reaction scheme, with autoregulation included, follows accordingly. A priori, it is possible to incorporate both autoactivation and autorepression in a single model by merging the two systems. However, only the net effect matters, yielding effective activation or repression. It can hence be argued that the simultaneous inclusion of both effects would introduce superfluous terms and parameters, which could be considered as poor modelling practice. Therefore, we choose to model the two autoregulation mechanisms separately. Rather than investigating the dynamics of these systems numerically, we proceed analytically, following Gardiner; explicit expressions for the generating functions are derived, with details deferred to the Appendix. Geometrically speaking, the resulting characteristic curves foliate the (t, v)-plane over the v-axis. 
Therefore, each characteristic can be identified by its base point, which is the point where the characteristic curve intersects the v-axis, at t = 0. Indeed, the differential equation for the v-component of a characteristic curve identifies the characteristic variable (s) with the time variable (t). Hence, the initial conditions for the PDE system translate into a line of initial conditions in the phase spaces of the dynamical systems. The (t, v)-coordinate plane is foliated by characteristics, which are the integral curves of the associated vector field. Each point (t, v) lies on a unique characteristic; flowing backward along that characteristic to its intersection with the v-axis, we can determine the corresponding base point. To determine the value of the generating function at (t, v), the characteristic system is solved along the particular characteristic on which that point lies, identified by its base point. Certain quantities are first integrals of the characteristic system; in other words, they are constant along the particular characteristic on which the dynamical system is solved. The geometric interpretation of characteristics that was used to motivate the inverse transformation also shows how solutions of the characteristic system yield solutions of the underlying PDE systems. In this section, we apply our analytical method to model B, a stochastic gene expression model presented by Shahrezaei and Swain, which explicitly includes mRNA. The model for stochastic gene expression considered here is given by a reaction scheme in which the gene switches between an inactive (D) and an active state, the active gene is transcribed into mRNA (M) at a given rate, and mRNA is translated into protein (P) at a given rate. As in model A, autoregulatory terms can be added to the core reaction scheme. The associated CME system describes the probability of m mRNA and n protein molecules being present at time t while the gene is either inactive (0) or active (1). As in model A, the two autoregulation mechanisms are modelled separately. 
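The characteristics machinery invoked above for both models can be summarised generically; the following is an illustrative first-order PDE of the relevant type, not the article's specific system.

```latex
% Generic illustration: a first-order PDE for a generating function G(v,t),
\[
  \partial_t G + a(v)\,\partial_v G = b(v)\,G .
\]
% Characteristic curves are integral curves of dv/dt = a(v) with base
% point v(0) = v_0 on the v-axis; along each such curve the PDE
% reduces to an ODE,
\[
  \frac{dG}{dt} = b\big(v(t)\big)\,G ,
  \qquad\text{so that}\qquad
  G\big(v(t),t\big) = G(v_0,0)\,
  \exp\!\left(\int_0^t b\big(v(\tau)\big)\,d\tau\right).
\]
```

Evaluating G at a given point (t, v) thus amounts to flowing backward to the base point v_0 and integrating along the characteristic, which is the inversion step described in the text.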
For simplicity, we have introduced the new variables u and v. Given an arbitrary triple (t, u, v), the corresponding point lies on a unique characteristic in (t, u, v)-coordinate space, along which the characteristic variable (s) is identified with the time variable (t); each characteristic, interpreted as a fibre over its base point, can thus be identified uniquely. The coordinate space is foliated by the characteristics of the associated operator, which are the orbits of the corresponding flow. Flowing backward along the characteristic through a given point to its intersection with the base, we determine the corresponding base point; since the relevant components of the transformation do not depend on u, the reconstruction simplifies accordingly. For arbitrary initial conditions, orbits become exponentially close to the slow manifold; however, exponentially large terms in the transformation preclude any sensible series expansion (cf. Verhulst). All orbits that have their initial conditions on the same Fenichel fibre approach one another exponentially in time. Thus, although the construction of a composite 'slow-(ultimately) fast' expression is possible in principle, the absence of a detailed analysis of the characteristic system in its slow formulation can be circumvented. We then verify that the resulting approximations are accurate; to that end, we compare them against numerical solutions. The inclusion of any type of autoregulation into the system is treated perturbatively: we assume that the autoregulation rates are small in comparison with the protein decay rate. 
To summarise: our main result is that the PDE system derived from the underlying CME can be solved in closed form along characteristics, from which the generating functions can be determined. Note that step 2 has been anticipated in the analysis of model A by introducing the corresponding series expansion. In the present article, we have developed an analytical method for obtaining explicit, fully time-dependent expressions for the probability-generating functions that are associated to models for stochastic gene expression. Moreover, we have presented a computationally efficient approach which allows us to derive model predictions (in the form of propagators) from these generating functions, using the Cauchy integral formula. It is important to note that our method does not make any steady-state or long-evolution-time approximations. On the contrary, the perturbative nature of our approach naturally optimises its applicability over relatively short (or intermediate) time scales. Perturbative assumptions are, however, necessary for determining explicit expressions for the generating functions themselves. Therefore, we can only be certain of the validity of our approach if we assume that the autoregulation rates are small in comparison with other model parameters, as is done there. Moreover, in the analysis of model B, we have to assume that the protein decay rate is smaller than the decay rate of mRNA. It is highly beneficial that the systems under consideration are of first order. 
Inclusion of second-order reactions would introduce both nonlinear terms and second-order differential operators in the PDE systems for the corresponding generating functions, which would severely increase the complexity of these systems, thus preventing us from obtaining explicit solutions. The method presented in this article, and the results thus obtained, can be seen, first and foremost, as the natural extension of previous work by Popović et al. Other authors have attempted to solve several classes of CMEs directly, i.e. without resorting to generating function techniques or integral transforms; a noteworthy example is the work of Jahnke and Huisinga on monomolecular reaction systems. It is important to emphasise that the 'time dependence' referred to in the title of the present article is solely due to the dynamic nature of the underlying stochastic process, and that it hence manifests exclusively through time derivatives in the associated CMEs. The availability of analytical expressions for generating functions does, in principle, allow one to try to obtain insight into the underlying processes by studying the explicit form of said expressions, as has been done e.g. by Bokes et al. The analytical approach explored in the article does not, of course, represent the only feasible way of obtaining numerical values for propagator probabilities, which can, in turn, serve as input for a Bayesian parameter inference scheme. For an example of a direct numerical method in which the Cauchy integral plays a central role, the reader is referred to the work by MacNamara. The analytical results obtained thus far, as presented in the article, are ready for implementation in a Bayesian parameter inference framework. 
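As a sketch of the propagator-reconstruction step, the coefficients p_n of a probability-generating function G(z) = sum_n p_n z^n can be recovered numerically from the Cauchy integral formula, p_n = (1/2πi) ∮ G(z) z^{-(n+1)} dz, by sampling G on the unit circle; the equal-weight contour sum is then a discrete Fourier transform. This is illustrative code under that standard identity, not the authors' implementation.

```python
import numpy as np

def probabilities_from_gf(G, N):
    """Recover p_0, ..., p_{N-1} from a probability-generating function
    G via the Cauchy integral formula, discretised with N equispaced
    points on the unit circle (exact up to aliasing of the probability
    mass beyond index N)."""
    z = np.exp(2j * np.pi * np.arange(N) / N)  # contour sample points
    return (np.fft.fft(G(z)) / N).real

# Example: a Poisson(2) distribution has G(z) = exp(2 * (z - 1)).
probs = probabilities_from_gf(lambda z: np.exp(2.0 * (z - 1.0)), 64)
```

For distributions with light tails, modest N suffices; for heavier tails, N must exceed the index range carrying non-negligible mass, or the aliasing error becomes visible.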
An analysis of the performance of the resulting approximations to the associated generating functions, in the spirit of the article by Feigelman et al., is left for future work."}
+{"text": "Bayesian networks are directed acyclic graphical models widely used to represent the probabilistic relationships between random variables. They have been applied in various biological contexts, including gene regulatory network and protein-protein interaction inference. Generally, learning Bayesian networks from experimental data is NP-hard, leading to widespread use of heuristic search methods giving suboptimal results. However, in cases when the acyclicity of the graph can be externally ensured, it is possible to find the optimal network in polynomial time. While our previously developed tool BNFinder implements a polynomial-time algorithm, reconstructing networks from large amounts of experimental data still leads to excessively growing single-CPU computation times. In the present paper we propose a parallelized algorithm designed for multi-core and distributed systems, and its implementation in the improved version of BNFinder, a tool for learning optimal Bayesian networks. The new algorithm has been tested on different simulated and experimental datasets, showing that it has much better parallelization efficiency than the previous version. BNFinder gives results comparable in accuracy with current state-of-the-art inference methods, giving a significant advantage in cases when external information such as a regulators list or prior edge probabilities can be introduced, particularly for datasets with static gene expression observations. We show that the new method can be used to reconstruct networks in the size range of thousands of genes, making it practically applicable to whole-genome datasets of prokaryotic systems and large components of eukaryotic genomes. Our benchmarking results on realistic datasets indicate that the tool should be useful to a wide audience of researchers interested in discovering dependencies in their large-scale transcriptomic datasets. 
Bayesian networks (BNs) are graphical representations of multivariate joint probability distributions factorized consistently with the dependency structure among variables. In practice, this often gives concise structures that are easy to interpret even for non-specialists. A BN is a directed acyclic graph with nodes representing random variables and edges representing conditional dependencies between the nodes. Nodes that are not connected represent variables that are independent conditionally on their parent variables. In general, learning BNs from data is NP-hard; however, when the acyclicity of the graph can be externally ensured, the optimal network can be found in polynomial time. This algorithm was implemented in BNFinder, a tool for BN reconstruction from experimental data. One of the common uses of BNs in bioinformatics is the inference of interactions between genes and proteins. BNFinder is implemented in the Python programming language and is therefore available for most modern operating systems. It is a command line tool, but the usage is quite easy even for scientists without a strong Computer Science background. The learning data must be passed to BNFinder in a text file split into 3 parts: preamble, experiment specification and experiment data. The preamble allows users to specify some features of the data and/or network, while the next two parts contain the learning data, essentially formatted as a table with space- or tab-separated values. For example, if the user wants to infer the network with four genes {G1, G2, G3, G4}, where G1 and G2 are known transcription factors, the input file would follow the example given in the BNFinder tutorial (https://github.com/sysbio-vo/bnfinder/raw/master/doc/bnfinder_tutorial.pdf), which includes several working examples of simple and more complex usage scenarios. In cases when more information is available about the experiment, the preamble section of the input file can be used to incorporate perturbational data, prior probabilities of gene interactions or the expected structure of the signaling pathway. 
These parameters are not required, but they can substantially increase the accuracy of the inferred network. Readers interested in applying BNFinder to their own data might find it useful to look through the BNFinder tutorial. Results can be written in SIF format, where each line represents a single interaction between two variables; the SIF output for the example network above would be: G1 + G3 and G2 + G4. The detailed explanation of all input parameters and their influence on output data is given in the user manual, freely available from the dedicated GitHub repository: https://github.com/sysbio-vo/bnfinder/raw/master/doc/bnfinder_doc_all.pdf. Even though BNFinder can be applied to many different datasets, the practical usage of the algorithm is limited by its running times, which can be relatively long; this motivated the parallelization described below. The general scheme of the learning algorithm is the following: for each of the random variables, find the best possible set of parent variables by considering candidate sets in a carefully chosen order of increasing scoring function. This function consists of two components: one penalizes the complexity of the network, and the other evaluates how well the network explains the data. The acyclicity of the graph allows the parent set of each variable to be computed independently; therefore, the algorithm first computes scores for all possible singleton parent sets for a given random variable. It then checks whether the penalty on increasing the parent set size is too high or whether the user-defined parent set size limit has been reached, and if not, proceeds with two-element parent sets, and so on. The detailed description of the algorithm and scoring functions is given in the Dojer manuscript. The original parallelization is variable-wise, as it distributes the work done on each variable between different threads. However, such an approach has natural limitations. 
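The stopping rule described above, where larger parent sets are abandoned once the size penalty alone exceeds the best score found so far, can be sketched as follows. Here `score` and `penalty` are hypothetical stand-ins for BNFinder's two scoring components, with lower scores taken as better.

```python
from itertools import combinations

def best_parents(score, penalty, candidates, max_size):
    """Per-variable search sketch: scan parent sets of growing size,
    keeping the best (lowest) score; stop as soon as the complexity
    penalty of the next size alone cannot beat the current best."""
    best, best_score = (), score(())
    for size in range(1, max_size + 1):
        if penalty(size) > best_score:  # no set of this size can win
            break
        for parents in combinations(candidates, size):
            s = score(parents)
            if s < best_score:
                best, best_score = parents, s
    return best

# Toy scoring: size penalty plus a data term that only "A" explains well.
toy_score = lambda ps: len(ps) * 1.0 + (0.0 if "A" in ps else 5.0)
chosen = best_parents(toy_score, lambda k: float(k), ["A", "B"], 2)
```

In the toy run, the singleton {A} scores 1.0, and the size-2 penalty of 2.0 already exceeds that, so two-element sets are never enumerated.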
Firstly, the number of parallelized tasks cannot exceed the number of random variables in the problem, meaning that in cases where only a few variables are considered we get a very limited performance boost. Secondly, variable-wise parallelization might be vulnerable (in terms of performance) to datasets with highly heterogeneous variables, i.e., variables whose true dependency graph has a wide range of connections. As the time spent on computing parent sets for different variables varies, it leads to uneven load across threads. In biology we usually observe networks with scale-free topology, consisting of a few hub nodes with many parents and a large number of nodes that have one or a small number of connections. On the other hand, parallelization overhead means that allocating many cores to a single variable would probably take more time than when allocating 10 cores. As it is difficult to tell which problem might be more important in practice, we have implemented and tested two approaches: set-wise only, and a hybrid one, a natural combination of variable-wise and set-wise. It should be noted that, while the variable-wise parallelization was already implemented in BNFinder, the set-wise and hybrid methods are novel in this particular application. The set-wise algorithm uses all given cores to compute the parent set for one gene and, after finding its parents, proceeds with the next gene. On the contrary, the hybrid algorithm uniformly distributes cores between genes; for example, if a user has three genes in the network and six cores available, each gene will have two cores for computing its parent set. If there are seven cores available, one gene will have three cores, while the other two genes will have two cores each. 
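The uniform core distribution of the hybrid scheme, including the uneven seven-core case above, can be sketched with a toy helper (this is not BNFinder's scheduler):

```python
def distribute_cores(n_genes, n_cores):
    """Hybrid-style allocation: every gene gets floor(n_cores/n_genes)
    cores, and the remainder is handed out one extra core per gene."""
    base, extra = divmod(n_cores, n_genes)
    return [base + 1 if i < extra else base for i in range(n_genes)]
```

With three genes, six cores yield an even [2, 2, 2] split, while seven cores yield [3, 2, 2].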
In the hybrid algorithm, however, once a gene is processed, the freed cores cannot be reallocated to other genes, which may be a potential disadvantage. The complexity of the set-wise and hybrid algorithms can be described in the following way: let k be the number of cores, n the number of random variables, and ti the time one needs to compute the optimal parent set for the ith variable using one core. The pure theoretical running time of the set-wise approach is then determined by the sum of the times needed for each random variable, i.e., in effect by the average time one spends on finding the parent set for one variable, while inferring a BN with the hybrid approach is bounded by the maximum time one spends on one variable. This estimate does not take into account the time the operating system spends on multi-threading or on accessing disk storage. The original implementation (variable-wise) serves as a baseline for computing the speed-up and efficiency of the parallelization. For testing we used synthetic benchmark data as well as real datasets concerning a protein phosphorylation network. We compared implementations of the three different algorithms: variable-wise, set-wise and hybrid. Set-wise and hybrid performance on a 20-gene synthetic network was very similar, while the speed-up and efficiency comparison revealed more differences between the algorithms. The more observations per regulator-target interaction we had, the better BNFinder predicted the network structure. However, as we studied running times per gene, we observed that variables with the biggest number of parents did not always result in the longest computations. The latter is explained by how the scoring function works. BNFinder stops when the penalty for increasing the set of parents is so big that it cannot improve beyond what it has already found. In general, if the optimal parent set is very good at predicting the child variable value, BNFinder will finish searching earlier.
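The effect of heterogeneous per-variable times on variable-wise parallelization can be illustrated with a toy scheduling model; the times and core count below are invented for illustration. Speed-up is computed as serial time over parallel makespan, and efficiency as speed-up over core count.

```python
import heapq

def variable_wise_makespan(times, k):
    """Greedy assignment of whole variables to k threads; the makespan is
    the load of the busiest thread. With one hub variable dominating the
    work, the other threads finish early and sit idle."""
    loads = [0.0] * k
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)

# Scale-free-like workload: one hub variable, twenty cheap leaf variables.
times = [100.0] + [1.0] * 20
serial = sum(times)                                # 120.0 on one core
parallel = variable_wise_makespan(times, 8)        # bounded below by the hub
speedup = serial / parallel
efficiency = speedup / 8
```

Here the hub alone pins the makespan at 100.0, so eight cores give a speed-up of only 1.2 (efficiency 0.15), which is why set-wise parallelization within a variable helps on such topologies.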
It means that the whole family of three-element parent sets can have worse scores than the two-element parent sets, but the algorithm will proceed further because it has not yet reached the penalty on increasing the set size. The results of multiple tests showed that introducing a complex layered structure of regulators always favoured the set-wise algorithm. In general, running times also depend on the number of observations and the number of nodes in the networks. Since there is no obvious winner between the set-wise and hybrid algorithms, we decided to provide users with both options, with set-wise being the default algorithm. Tests on benchmark and Sachs data were performed on the same server with an AMD Opteron (TM) Processor 6272 and 512GB RAM. During the tests the server was loaded only by regular system processes, but to ensure statistical significance we performed each test five times, plotting average values with standard deviations. For the new version of BNFinder we also implemented an option for distributed usage of the tool. The idea is quite simple and did not require specific python libraries or tools. The user submits a file with a subset of genes as an input argument, so that BNFinder can calculate a partial result. When all the genes are processed, the user places all the results into one folder and runs BNFinder again, so that it aggregates the results. For the tests we chose Challenge 5 data from the DREAM2 competition. We compared the set-wise algorithm's performance with the context likelihood of relatedness (CLR) algorithm, an extension of the relevance networks approach that utilizes the concept of mutual information. The CLR tests were performed on the GP-DREAM platform, designed for the application and development of network inference and consensus methods.
Even though computing with a parent set limit of 3 takes a significant amount of time and resources, it is clear that BNFinder is able to reconstruct genome-scale datasets, significantly broadening its application range after being adapted to parallel and distributed computing. Previously we compared the accuracy of the BNFinder algorithm with Banjo on the DREAM2 Genome Scale Network Inference data: a 3,456 genes \u00d7 300 experiments dataset, a log-normalized compendium of Escherichia coli expression profiles described above. The yeast static synthetic dataset consists of 2,000 genes \u00d7 2,000 experiments generated using the GNW simulator, and the Brem data was imported from the NetworkBMA R package. We used two metrics to assess the methods' performance: Area Under the Precision-Recall curve (AUPR) and Area Under the Receiver Operating Characteristic curve (AUROC), implemented in the MINET R package. FastBMA, Genie3 and BNFinder allow us to infer directed interactions, therefore they were additionally evaluated on directed gold networks (where applicable), while for the undirected evaluation the gold network adjacency matrices were converted to symmetrical ones (higher edge probabilities are preserved), as were the outputs of the directed methods. In the case of static gene expression data, FastBMA can infer the regulators of a particular gene by regressing it on the expression levels of the other genes; therefore we used the method with time series data only. We mostly used DREAM2 data for the running time tests and evaluated the accuracy of the CLR method only on it. We ranked AUROC and AUPR values across all the methods for each of the five 10-gene and 100-gene networks from the DREAM4 challenge. Different evaluation strategies for the 10-gene networks showed quite different results. Yeast time series network inference showed extremely poor results for all the methods, with MutRank having slightly better AUROC = 0.58 and AUPR = 0.07 values according to the MINET package.
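The undirected evaluation step described above, symmetrizing a directed adjacency matrix while preserving the higher of the two edge probabilities, can be sketched as follows; the 3-gene matrix is an invented toy example, not data from the paper.

```python
def symmetrize(adj):
    """Convert a directed edge-probability matrix (list of lists) into an
    undirected one by keeping, for each pair of nodes, the larger of the
    two directed probabilities."""
    n = len(adj)
    return [[max(adj[i][j], adj[j][i]) for j in range(n)] for i in range(n)]

# Toy directed network over 3 genes: rows are regulators, columns are targets.
directed = [[0.0, 0.9, 0.0],
            [0.2, 0.0, 0.5],
            [0.0, 0.1, 0.0]]
undirected = symmetrize(directed)
```

After symmetrization the 0.9 edge dominates the 0.2 reverse edge, so both orientations carry 0.9, which matches the "higher edge probabilities are preserved" convention.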
In contrast to the synthetic DREAM4 data, which has 21 time points, YeastTS has only 6, which could explain the worse results. Surprisingly, BNFinder significantly outperformed the other methods when reconstructing the network from the Brem et al. static gene expression data. We also studied the effect of the number of experiments on the accuracy of the inferred network. For the GNW2000 synthetic yeast data we performed two separate tests: one with the full dataset (2,000 experiments), and a second with only 150 randomly selected observational points. The main advantage of BNFinder in comparison with heuristic-search Bayesian tools such as Banjo is that BNFinder reconstructs optimal networks, which also means that the same parameters lead to the same result. However, with BNFinder one can use a number of input arguments, such as scoring functions, static or dynamic BNs, perturbation data, or even prior information on the network structure. All of these may alter results significantly, so, naturally, we are interested in choosing the best parameters for a particular dataset. Here we studied the impact of two very important parameters: the parent set limit and the number of suboptimal parent sets. We studied the effect of different parameters by plotting AUPR against AUROC values. In general, we can conclude that if the user is interested in the very top of the strongest interactions in the network, one should use a small number of sub-optimal parent sets (up to 5) and a small limit on the parent set size (up to 3).
However, if one is interested in discovering the more global picture of the true regulatory network, one should focus on a higher number of sub-optimal parent sets, with the limit on the set size as high as is computationally feasible. The results of all performance and accuracy tests, with the scripts to generate all the plots, are available from a dedicated github repository\u2014https://github.com/sysbio-vo/article-bnf-suppl. Given the many cases where the hybrid and the set-wise algorithms can be applied, we can give users best-practice guidance for applying BNFinder. In the case of small networks, where the number of variables is 2 or more times smaller than the number of cores, it is advised to use the hybrid algorithm, because the set-wise one would generate more context-switching events per variable. The same applies when a user imposes a parent set limit of 3 or less, which makes the computational load per variable more even. When a complex layered structure of regulators is introduced, it is always better to use the set-wise algorithm. In the case of big networks, when the number of variables is greater than the number of cores in a single computational unit, one can also use the new BNFinder feature which allows defining a subset of genes, as described in the Distributed computations testing section of this paper.
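The best-practice guidance above can be condensed into a small decision helper; this is our own illustrative encoding of the stated rules, not a function provided by BNFinder.

```python
def choose_parallelization(n_vars, n_cores, parent_limit, layered_regulators=False):
    """Encode the best-practice guidance: layered regulator structure always
    favours set-wise; otherwise, many cores per variable or a small parent
    set limit favours hybrid; set-wise is the default for big networks."""
    if layered_regulators:
        return "set-wise"
    if n_cores >= 2 * n_vars or parent_limit <= 3:
        return "hybrid"
    return "set-wise"
```

For instance, a 4-variable problem on 16 cores maps to the hybrid algorithm, while a 100-variable problem on 8 cores maps to set-wise.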
Defined subsets can be computed separately and simultaneously on different computational units, and the user should apply the same logic when choosing a parallelization algorithm as for the smaller networks. Despite its seemingly complex behavior, the set-wise algorithm did not show a major drop in performance in comparison to the hybrid one. And finally, the user may simply use the default parameters. While we understand that there are many more tools for gene regulatory network reconstruction in the literature, we believe that the NetBenchmark package is representative of the field, since it incorporates state-of-the-art methods based on a variety of different algorithms. On top of that, using a benchmarking tool makes it easier for other researchers to compare their methods to our results. Measuring AUROC and AUPR values on 14 different datasets revealed that the studied methods behave differently on different datasets, and none of the methods scored best in all cases. In general, time series data proved to be more challenging for the methods than inferring networks from static gene expression datasets. Our results on the 10-gene network evaluations with the top 20% and 100% of interactions showed that such small networks can hardly be used as the only source of comparison. Testing BNFinder on the mentioned datasets, we concluded that it performed best on static gene expression datasets with additional prior knowledge, while for other methods, such as Genie3, the same information did not yield significant improvement. It should be noted that in most of these scenarios our knowledge of the true network connections is also limited, and different tools use different definitions of a meaningful \u201cconnection\u201d; therefore it is expected that, for example, a tool using mutual information will capture a different subset of connections than a tool using Bayesian-Dirichlet equivalence.
While our opinion is that such automated methods will gain importance as their accuracy increases, there is still a lot of work ahead of us on careful validation of the problems various tools have and on refinement of the definitions of regulatory interactions. The improvements over the previous version of BNFinder made it feasible to analyze datasets that were impossible to analyze before, by utilizing the power of distributed and parallel computing. This allowed us to significantly extend the application range of the tool and, for the first time, to compare it with the best-performing non-Bayesian methods. BNFinder showed overall comparable performance on synthetic and real biological data, providing a significant advantage in cases when prior knowledge on gene interactions can be introduced. This can lead to further research on the optimization of the BNFinder method for the purpose of finding larger networks with better accuracy. We provide the new BNFinder implementation freely for all interested researchers under a GNU GPL 2.0 license. Supplemental Information 1: 10.7717/peerj.5692/supp-1."}
+{"text": "The similarity or distance measure used for clustering can generate intuitive and interpretable clusters when it is tailored to the unique characteristics of the data. In time series datasets generated with high-throughput biological assays, measurements such as gene expression levels or protein phosphorylation intensities are collected sequentially over time, and the similarity score should capture this special temporal structure. We propose a clustering similarity measure called Lag Penalized Weighted Correlation (LPWC) to group pairs of time series that exhibit closely-related behaviors over time, even if the timing is not perfectly synchronized. LPWC aligns time series profiles to identify common temporal patterns. It down-weights aligned profiles based on the length of the temporal lags that are introduced. We demonstrate the advantages of LPWC versus existing time series and general clustering algorithms. In a simulated dataset based on the biologically-motivated impulse model, LPWC is the only method to recover the true clusters for almost all simulated genes. LPWC also identifies clusters with distinct temporal patterns in our yeast osmotic stress response and axolotl limb regeneration case studies. LPWC achieves both of its time series clustering goals. It groups time series with correlated changes over time, even if those patterns occur earlier or later in some of the time series. In addition, it refrains from introducing large shifts in time when searching for temporal patterns by applying a lag penalty. The LPWC R package is available at https://github.com/gitter-lab/LPWC and on CRAN under a MIT license. Time series data are collected extensively to study complex and dynamic biological systems. Similarity in gene expression patterns can correspond to similarity in biological function, which helps direct future research. Many time series clustering algorithms have been introduced to understand the dynamics of biological processes.
Some of these clustering approaches are hierarchical, iteratively merging small clusters or dividing large clusters. Others partition entities into clusters, which often requires specifying the number of clusters in advance. Hierarchical clustering methods, such as clustering with correlation or transformed Euclidean distance for similarity, were a common choice before the proliferation of time series-specific algorithms. Many partition-based clustering algorithms are available for biological time series data as well. The Short Time-series Expression Miner (STEM) enumerates temporal template profiles and matches genes to them, which works best for short time series (3-8 timepoints). Another category of time series clustering methods is Bayesian models. Despite the abundance of clustering algorithms, many popular clustering methods do not have special support for important temporal properties such as lags and irregular timepoints, which we demonstrate with a simple example. Even the methods that do allow lags typically do not treat irregular timepoints differently from regular timepoints. One of the main contributions of LPWC is a similarity function that accounts for pairs of temporal profiles that occur at slightly different times. This generates a gene-gene similarity matrix that can be used as input for standard similarity- or distance-based clustering methods such as hierarchical clustering. The LPWC similarity score is derived from weighted correlation, but the correlations of lagged temporal profiles are penalized using a Gaussian kernel. The kernel is also used to account for irregular time sampling. We demonstrate the advantages of LPWC over existing general and time series clustering algorithms on a simulated impulse model dataset and case studies on the yeast osmotic stress response and axolotl limb regeneration. The goal of LPWC is to group genes that have similar shapes in their expression levels over time.
These shapes or temporal profiles refer to the patterns of increases and decreases in expression. Two genes have similar temporal shapes if the timing of these increases and decreases coincides, even if the expression levels are not the same. In order to identify similar temporal shapes that are not perfectly synchronized, LPWC applies a lag operator to re-align the timepoints when comparing two expression profiles. The lag operator compares the timepoints of one expression profile with later timepoints in the other profile. Because the aligned time series can pair measurements that are temporally far apart, LPWC weights the pairs of timepoints to give stronger consideration to those that are close in time. To assess LPWC, we compared it to other popular clustering algorithms on simulated time series datasets where the true clusters are known and conducted two biological case studies. The yeast osmotic stress response data consist of NaCl-induced osmotic stress phosphorylation samples obtained from mass spectrometry. In our primary simulation, each simulated time series dataset contains 200 genes with 10 timepoints at 0, 2, 4, 6, 8, 18, 24, 32, 48, and 72 min. A simulated instance is composed of four distinct temporal patterns with 50 genes per pattern. We repeat the sampling, clustering, and evaluation procedure 100 times in both a low and high variance setting, where the variance controls how similar the simulated genes are to the four reference patterns. We compare LPWC with Euclidean distance with hierarchical clustering (heuc) and kmeans clustering (keuc), Pearson correlation with hierarchical clustering (hcorr) and kmeans clustering (kcorr), DTW with hierarchical clustering, and STS distance with hierarchical clustering. Methods that do not use the temporal information have ARI scores of 1 in the timepoint permutation test, which is undesirable. STS performs poorly on this dataset.
It places all genes into a single cluster, except for two genes that are each assigned to their own singleton cluster. In the LPWC formulation, each gene i has an expression vector Yi and a vector of timepoints, which we denote as Ti. The lag operator L reduces the effective length of the lagged vector, introducing NA placeholder values: for a lag Xi=1, L1Ti and L1Yi shift gene i's profile so that its measurements are aligned against later timepoints of the other profile, and for Xi=\u22121 the shift goes in the opposite direction; positions without a matching measurement become NA. We write corrw for the weighted correlation, defined generically for input vectors x and y and a weight vector z; timepoints with NA in either x or y are excluded, and with all weights equal it reduces to the standard (unweighted) Pearson correlation of Yi and Yj. The overall penalty for aligning timepoints is derived from the mean weight, and finding the optimal lags Xi for all genes simultaneously is NP-complete. The resulting similarity matrix can be used directly by a clustering algorithm that requires gene-gene similarities as input. However, LPWC uses hierarchical clustering, which requires a distance measure instead. We know that \u22121\u2264corr\u22641. Thus, we transform the similarities with dist=1\u2212corr to obtain distances for hierarchical clustering such that 0\u2264dist\u22642. We run hierarchical clustering with complete linkage. The parameter C controls the width of the Gaussian kernel function and the severity of the penalty. The appropriate C is subjective and application-specific. Therefore, instead of choosing one universal default penalty parameter C, LPWC implements two data-dependent ways to set C: the high and low penalty modes. The high penalty (hLPWC) penalizes lags more, increasing the possibility of setting Xi=0 compared to the low penalty (lLPWC), which will set more Xi\u22600. In addition to these two default options, the user can also specify C directly to introduce more or fewer lags. Because lags reduce the number of timepoints used for the correlation calculation and biological time series data are typically short already, there is a risk that two lagged expression vectors will have a high correlation score by chance.
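A minimal sketch of the two ingredients described above, assuming a simple padding-based lag operator and the standard weighted Pearson correlation with NA pairs dropped; `lag` and `weighted_corr` are hypothetical helper names, not the LPWC R API.

```python
import math

def lag(values, x):
    """Shift a profile by x positions, padding with None (NA) so the
    effective length of the overlap shrinks, as the lag operator does."""
    n = len(values)
    if x >= 0:
        return [None] * x + values[: n - x]
    return values[-x:] + [None] * (-x)

def weighted_corr(xs, ys, ws):
    """Weighted Pearson correlation; pairs with None in either vector drop out.
    With equal weights this is the standard Pearson correlation."""
    kept = [(x, y, w) for x, y, w in zip(xs, ys, ws)
            if x is not None and y is not None]
    xs, ys, ws = zip(*kept)
    sw = sum(ws)
    mx = sum(w * x for x, w in zip(xs, ws)) / sw
    my = sum(w * y for y, w in zip(ys, ws)) / sw
    cov = sum(w * (x - mx) * (y - my) for x, y, w in zip(xs, ys, ws))
    vx = sum(w * (x - mx) ** 2 for x, w in zip(xs, ws))
    vy = sum(w * (y - my) ** 2 for y, w in zip(ys, ws))
    return cov / math.sqrt(vx * vy)
```

For two profiles where one is a one-step-delayed copy of the other, lagging the earlier profile by 1 recovers a correlation of 1 over the overlapping timepoints.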
Lagged correlation clustering without modification does not perform well. Thus, a penalty is applied to the weighted correlation corrw. The penalty depends on C, the maximum lag m, and the weights wl=(LlTi\u2212L0Ti)2 obtained from comparing the timepoint vector Ti with a lagged version of itself. For the low penalty mode, we compute the values of C for which penalty(C) produces penalties between 0.5 and 0.95 with a step size of 0.05. For each of those C, we run LPWC and obtain the gene-gene similarity matrix. We choose the C for which the gene-gene similarity matrix is the most stable with respect to the similarity matrix from the previous C. Stability is computed by subtracting the two gene-gene similarity matrices, squaring the elements, and summing them. The lowest sum of squared differences is preferred. Because it sweeps over multiple values of C, lLPWC is slower than hLPWC. In the Lag Optimization problem, N is the number of genes and M is the set of valid lags. We set M={\u2212m,...,m}, where m is the maximum lag allowed. Given an assignment of the lags, where the lag of gene i is k and the lag of gene j is l, the pairwise similarity terms are computed over all pairs i and j with i>j. Given an assignment, the lags Xi can be read off in O(|M|\u2217N) time, all pairwise terms can be evaluated in O(|M|2\u2217N2) time, and the objective function value can then be computed and compared against \u2265c in O(|M|2\u2217N2) time. To outline the proof that the decision version of Lag Optimization is NP-complete, we show that a solution can be verified in polynomial time and that the NP-complete Weighted Maximum Cut problem can be reduced to it. In Weighted Maximum Cut, we are given a graph G=(V,E) with nonnegative weights si,j for all edges ei,j\u2208E. The objective is to assign the vertices into sets V1 and V2. Edges with one vertex in V1 and the other in V2 are cut edges. The decision version of Weighted Maximum Cut assesses whether the sum of the weights si,j for the cut edges is at least c. Next, we show that Weighted Maximum Cut reduces to Lag Optimization in polynomial time.
To reduce Weighted Maximum Cut to Lag Optimization, first define the set of possible lags M={1,2}. Then create one Lag Optimization variable for each vertex vi\u2208V, where the lag value encodes whether vi is in vertex set V1 or in V2. Set the pairwise weights from si,j for all i>j so that when vertices i and j are assigned to different sets, the pair contributes si,j to the objective function; otherwise, the pair contributes a weight of 0. Because each cut edge contributes si,j to the Weighted Maximum Cut objective function, the objective function value of the constructed Lag Optimization instance equals that of the original Weighted Maximum Cut instance, and the Lag Optimization and Weighted Maximum Cut decisions are identical. In addition, the transformation from the Weighted Maximum Cut instance to the Lag Optimization instance requires O(|M|2\u2217N2) time. To test LPWC, we simulated time series gene expression data using an impulse model called ImpulseDE. We used the ImpulseDE parameters to define four canonical gene expression patterns (models) and simulated 50 genes from each pattern by adding random variation to the model parameters. We ran the simulation in both low and high variance settings. To simulate a gene from a canonical pattern, we randomly sampled an additive offset for each of the six ImpulseDE model parameters using parameter-specific Uniform distributions; the offsets to h0, h1, and h2 randomly shift the entire simulated time course along the y-axis. Given the sampled parameters, we generated expression levels using the impulse model at 10 timepoints: 0, 2, 4, 6, 8, 18, 24, 32, 48, and 72 min. Finally, we added Gaussian-distributed noise to the simulated expression level at each timepoint, sampling from Normal distributions with smaller and larger variances in the low and high variance settings, respectively. We repeated the overall simulation procedure 100 times for both the low and high variance settings to assess the clustering performance over many simulated datasets. In addition, we used ImpulseDE to study clustering with regular or irregular timepoints and two simple canonical patterns: an early spike and a late spike.
The early spike and late spike patterns each had 50 genes, which we divided so that 25 genes spiked slightly later than the other 25. We simulated genes from these two canonical patterns using the ImpulseDE parameters in the Additional file. However, unlike the previous simulations, we added the same offset sampled from a Uniform distribution to t1 and t2 instead of having two independent offsets. This ensures that the duration of the spike is the same for all simulated genes. We again ran the simulation and clustering process 100 times for both the regular and irregular timepoints. Cluster evaluation is difficult because the true clusters are not known for real data. The Rand index compares two clustering results. One way to evaluate time series clustering algorithms without ground truth labels is by assessing how important the temporal information is to the clustering results. We obtain clusters using the original data and then permute the data by randomly reordering the timepoints (the gene expression observations do not change). The permutations destroy the true temporal dependencies in the data. If a clustering algorithm does not use the temporal information, the ARI score when comparing its clusters on the original and permuted data will be close to 1, which is undesirable. In the yeast and axolotl case studies, we repeat the timepoint permutation 100 times for each clustering algorithm and assess the distribution of ARI scores. Another challenge is choosing the number of clusters, which can be addressed with the silhouette method. We applied LPWC in two case studies to demonstrate how it can be used to obtain coherent temporal clusters and derive biological insights into dynamic transcriptional and signaling processes. The first captures the rapid phosphorylation response to osmotic stress in yeast (Kanshin et al.). The second dataset contains time course RNA-seq data from the axolotl blastema after amputating the right forelimb.
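The timepoint permutation test described above can be sketched as follows; `cluster_fn` is a hypothetical stand-in for any clustering routine that maps rows (genes) to cluster labels, and the ARI follows the standard contingency-table formula.

```python
import random
from collections import Counter
from math import comb

def ari(labels_a, labels_b):
    """Adjusted Rand Index between two clusterings (pure-stdlib sketch)."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))
    a = Counter(labels_a)
    b = Counter(labels_b)
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

def permutation_ari(data, cluster_fn, n_perm=100, seed=0):
    """Cluster the original data, then re-cluster with timepoints randomly
    reordered; ARI scores near 1 mean the method ignores temporal order."""
    rng = random.Random(seed)
    base = cluster_fn(data)
    scores = []
    for _ in range(n_perm):
        order = list(range(len(data[0])))
        rng.shuffle(order)
        permuted = [[row[i] for i in order] for row in data]
        scores.append(ari(base, cluster_fn(permuted)))
    return scores
```

A clustering rule based only on per-gene means is invariant to timepoint order, so its permutation ARI scores are all exactly 1, illustrating the failure mode the test is designed to expose.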
These data were described by Stewart et al. We performed gene enrichment analysis of the LPWC cluster members in DAVID 6.8. Operating system(s): Platform independent. Programming language: R (\u2265 version 3.0.2). Other requirements: None. License: MIT. Any restrictions to use by non-academics: None. Additional file 1: Supplementary figures, tables, and methods. Additional file 2: Cluster assignments and DAVID enrichment for yeast lLPWC as tab-delimited text files; the cluster assignment files contain a header row, and the id is the phosphopeptide id from Kanshin et al. Additional file 3: Cluster assignments and DAVID enrichment for yeast hLPWC as tab-delimited text files; the cluster assignment files contain a header row, and the id is the phosphopeptide id from Kanshin et al. Additional file 4: Cluster assignments and DAVID enrichment for axolotl hLPWC as tab-delimited text files; the cluster assignment files contain the mapped human official gene symbols and do not have a header row. Additional file 5: Cluster assignments and DAVID enrichment for axolotl lLPWC as tab-delimited text files; the cluster assignment files contain the mapped human official gene symbols and do not have a header row."}
+{"text": "In vivo dynamics of protein levels in bacterial cells depend on both intracellular regulation and relevant population dynamics. Such population dynamics effects, e.g., the interplay between cell and plasmid division rates, are, however, often neglected in modeling gene expression regulation. Including them in a model introduces additional parameters shared by the dynamical equations, which can significantly increase the dimensionality of the parameter inference. We here analyse the importance of these effects in the case of a bacterial restriction-modification (R-M) system. We redevelop our earlier minimal model of this system's gene expression regulation, based on a thermodynamic and dynamic system modeling framework, to include the population dynamics effects. To resolve the problem of effective coupling of the dynamical equations, we propose a \u201cmean-field-like\u201d procedure, which allows determining only part of the parameters at a time, by separately fitting them to expression dynamics data of individual molecular species. We show that including the interplay between the kinetics of cell division and plasmid replication is necessary to explain the experimental measurements. Moreover, neglecting population dynamics effects can lead to falsely identifying non-existent regulatory mechanisms. Our results call for advanced methods to reverse-engineer intracellular regulation from dynamical data, which would also take into account the population dynamics effects. Technological developments in the past few decades have enabled experimental in vivo measurements of protein levels in single cells with high temporal resolution, thus providing a good basis for studying gene expression control by mathematical modeling. R-M systems have two main components: the restriction endonuclease (R) recognizes specific DNA sequences and cuts them, while the methyltransferase (M) methylates the same sequences and thereby protects them from being cut.
Notably, it was previously shown for the lac operon that population dynamics effects have to be included in modeling of its gene expression in order to explain the measured data. Consequently, a major goal of this research is to analyse the importance of population dynamics effects for intracellular dynamics. Understanding this is necessary to accurately predict gene expression dynamics in a cell, which is also crucial for a number of practical applications, such as synthetic biology. We concentrate on an R-M system because: (i) state-of-the-art measurements of protein dynamics for this system are available; (ii) our previous work showed that theoretical modeling can accurately explain R-M experimental measurements; (iii) R-M systems are both important experimental model systems and sufficiently simple to be realistically theoretically modeled. In particular, the number of parameters is significantly lower than what is often encountered in models of gene expression networks/dynamics. The system consists of the c gene itself and the r gene, transcribed together as the cr operon, and the methyltransferase (m) gene. Binding of a C dimer to the PM promoter of the m gene (at the site denoted MBS, for Methyltransferase Binding Site) represses m transcription. The statistical weights of the PCR promoter configurations are given by the following equations, corresponding to: (i) only RNA polymerase bound to the promoter, i.e., basal transcription of the genes encoding the C protein and the restriction endonuclease (Equation (1)); (ii) RNA polymerase recruited to the promoter by the C dimer bound to the DBS (Equation (2)); and (iii) a second C dimer recruited to the PBS by the C dimer bound to the DBS, forming a C tetramer on DNA which represses transcription (Equation (3)).
In the upper equations, k is a proportionality constant (with units of one over concentration), concentrations of molecular species are labelled with square brackets, while the protein-DNA and protein-protein interaction energies that enter Equations (1)\u2013(3) (\u0394Gs, in units of kBT) are indicated alongside the corresponding configurations. These constants can be absorbed into parameters (a, b and c) that do not depend on the C concentration, which results in the expressions for the statistical weights denoted next to the appropriate configurations. The transcription activity of PCR is given by Equation (4), where \u03b1 is a proportionality constant with units of transcript amount over time. Equation (4) can be rewritten by introducing the derived statistical weights, yielding the activity of PCR in the absence of C protein. The standard assumption in thermodynamic modeling is that the transcriptional activity of a promoter is proportional to the equilibrium probability that RNA polymerase is bound to that promoter. Accordingly, the PM promoter is modelled in the same manner: it can be (i) empty, (ii) in a transcriptionally active state, when it is occupied by RNA polymerase, or (iii) in a repressed state, when a C dimer is bound to the MBS. The statistical weights of the configurations described under (ii) and (iii) read as Equations (6) and (7), and the PM transcription activity is given by an equation in which \u03b2 is a proportionality constant with units of transcript amount over time.
As in the case of the PCR activity modeling, the constants in Equations (6) and (7) can be absorbed into a few parameters (f and g) that do not depend on the C concentration, which results in the expressions for the statistical weights denoted next to the appropriate configurations. Ki corresponds to the equilibrium association constant for C dimer binding to DNA in the presence of a bound RNAP, which exerts an inhibiting effect on transcription. We here develop a model that includes the interplay of cell and plasmid division rates, so that the full population dynamics effects due to the (time dependent) division of both cells and plasmids are taken into account. Note that R-M systems typically produce thousands of RNAs and proteins of the two enzymes in the cell, so these systems can be reliably modelled deterministically. The number of plasmids per cell (np) is introduced as a time dependent variable, which increases due to plasmid replication and decreases due to dilution after cell division. Specifically, the change of the cell number (ncell) and of the number of plasmids per cell (np) with time is described by the differential equations dncell/dt = \u03bbcell(t)\u00b7ncell and dnp/dt = (\u03bbp(t) \u2212 \u03bbcell(t))\u00b7np, where \u03bbcell and \u03bbp are the division rates for cells and plasmids, respectively. From the experimentally measured cell number dependence on time, \u03bbcell(t) is determined as a smooth interpolation between \u03bbcell,1 and \u03bbcell,2, the constant cell division rates in the first and the second time interval, with scell a fixed parameter defining the smoothness of the interpolation (taken as 120 min).
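A toy numerical sketch of the population dynamics component, assuming a logistic-style interpolation between the two cell division rates (the paper's exact Equation (12) may differ) and invented rate values; it integrates dncell/dt = \u03bbcell\u00b7ncell and dnp/dt = (\u03bbp \u2212 \u03bbcell)\u00b7np with a hand-rolled Runge-Kutta step.

```python
import math

def lam_cell(t, lam1=0.02, lam2=0.005, t_switch=200.0, s=120.0):
    """Smooth interpolation between two cell division rates; illustrative
    logistic form with smoothness parameter s (here 120 min)."""
    return lam1 + (lam2 - lam1) / (1.0 + math.exp(-(t - t_switch) / s))

def lam_p(t):
    """Toy piecewise plasmid replication rate (invented values)."""
    return 0.03 if t < 300.0 else 0.005

def rhs(t, y):
    """Right-hand side of the population dynamics ODEs."""
    n_cell, n_p = y
    return [lam_cell(t) * n_cell, (lam_p(t) - lam_cell(t)) * n_p]

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Integrate cell number and per-cell plasmid copy number over 600 min.
t, y, h = 0.0, [1.0, 20.0], 1.0
while t < 600.0:
    y = rk4_step(rhs, t, y, h)
    t += h
```

The per-cell plasmid count changes only through the difference between replication and dilution rates, which is why the interplay of the two rates, rather than either one alone, shapes the copy number entering the transcript equations.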
Note that Equation (12) is just an empirical fit to the experimentally measured data, which is an input to the model of the gene expression dynamics, rather than a model prediction. The plasmid division rate p\u03bb(t), describing the change with time of the number of R-M system encoding plasmids per cell, is introduced in an analogous manner. Equations (14) and (15) describe how the amounts of transcripts of the cr operon and the m gene change with time, while Equations (16)\u2013(18) describe the same for the amounts of proteins, namely, of C protein (C), restriction endonuclease (R) and methyltransferase (M). The effective removal rate (the \u03bb\u2019s in the equations above) represents a sum of cell\u03bb (Equation (12)) and the corresponding molecule degradation rate. Transcription of the m gene is regulated by C (see Equations (15) and (18)), and pn enters both Equations (14) and (15). Consequently, these sets of equations have to be solved simultaneously, and their parameters can no longer be separately fitted to the experimental data for R and M dynamics, as was done with the minimal model. This significantly increases the dimensionality of the parameter inference problem, i.e., the joint fit leads to inferring parameters in a 17-dimensional space, which is computationally demanding, since a very large number of parameter combinations has to be explored to find the best fit of the model to experimental data. If the population dynamics effects, in particular the changes of the per cell plasmid copy number throughout the experiment, were neglected, the equations would decouple. The fitting then proceeds (i) with fixed (previously inferred) parameters for population dynamics, in the case that the procedure converges, i.e., leads to a satisfactory fit to experimental data, or (ii) by refitting (inferring again) the population dynamics parameters, if convergence has not been achieved; in this case, the whole procedure is iteratively repeated until convergence. 
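The population dynamics part of the model (growth of cell numbers, and plasmid replication minus dilution by cell division) can be sketched as a small ODE system; the constant rates used here are placeholders for illustration, not the paper's fitted, time dependent \u03bb(t) functions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder constant rates (per minute); the paper uses time dependent
# interpolated rates instead.
def lam_cell(t):
    return 0.02

def lam_p(t):
    return 0.03

def rhs(t, y):
    n_cell, p = y  # cell number, plasmids per cell
    dn = lam_cell(t) * n_cell                 # cells grow with rate lam_cell
    dp = lam_p(t) * p - lam_cell(t) * p       # replication minus dilution
    return [dn, dp]

# Integrate from t = 0 to t = 200 min, starting from 1 cell with 5 plasmids.
sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 5.0])
```

With constant rates this has the closed-form solution p(t) = p(0)·exp((\u03bbp \u2212 \u03bbcell)·t), which is a convenient sanity check for the numerical integration.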
To resolve this problem, which would become even more prominent with a larger number of species in the model, we here propose an iterative, \u201cmean field-like\u201d approach to effectively decouple such equations. The main idea is that the population dynamics parameters are first estimated using an approximated dynamics of C protein, C(t). If the inferred (fixed) parameters lead to a satisfactory agreement of the M dynamics with experimental data (convergence), the procedure is stopped. If not, the population dynamics parameters are estimated again (with C(t) from the previous step). Note that after each full cycle, the solutions for both M(t) and R(t) exactly solve the full system of the dynamical equations, though the parameters inferred after each cycle may not provide an optimal fit to data; thus, the fit is, if needed, improved through iterative cycles. The goodness of fit of this procedure applied to R-M dynamics is quantified through the sum of the squared errors. Note that this numerical procedure is considerably more complicated than standard fits to experimental data, as in our case there is no closed form expression that can be fitted to the data points. That is, the closed form expressions for the transcription activities (Equations (1)\u2013(9)) serve as an input for non-linear differential equations (Equations (14)\u2013(18)), which cannot be integrated in closed form, but have to be solved numerically, and these solutions are then compared with experimental data. To implement the above procedure, the system of differential Equations (11), (14)\u2013(16) and (18) that represents the full model of the Esp1396I R-M system is solved numerically using the Runge-Kutta method, and the goodness of fit (the sum of the squared errors) was calculated for each fit. To quantitatively compare different fits, F values and the corresponding P values were used, where p1 and p2 are the corresponding numbers of parameters, while n is the number of data points. 
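The iterative, \u201cmean field-like\u201d decoupling can be illustrated on a toy system of two species driven by a shared population parameter; all equations, names and numbers below are illustrative stand-ins, not the actual Esp1396I model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy data: two species M and R, both driven by a shared "plasmid" signal
# p(t) = exp(g*t); g plays the role of the shared population parameter.
t_data = np.linspace(0.0, 10.0, 20)
g_true, aM_true, aR_true = 0.15, 1.0, 2.0

def simulate(g, a, t):
    # dX/dt = a * exp(g*t) - X  (decay rate fixed at 1 for simplicity)
    sol = solve_ivp(lambda tt, x: a * np.exp(g * tt) - x,
                    (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0]

M_data = simulate(g_true, aM_true, t_data)  # synthetic "experimental" data
R_data = simulate(g_true, aR_true, t_data)

def sse(sim, data):
    return float(np.sum((sim - data) ** 2))

# Step 1: infer the shared parameter g together with M's own parameter,
# using M data only (instead of a joint fit over all parameters at once).
res_M = minimize(lambda q: sse(simulate(q[0], q[1], t_data), M_data),
                 x0=[0.1, 0.5], method="Nelder-Mead")
g_hat, aM_hat = res_M.x

# Step 2: with g fixed, infer R's parameter from R data alone
# (now a one-dimensional problem).
res_R = minimize(lambda q: sse(simulate(g_hat, q[0], t_data), R_data),
                 x0=[0.5], method="Nelder-Mead")
aR_hat = res_R.x[0]
# If the resulting fit to R data were unsatisfactory, g would be
# re-estimated and the cycle repeated until convergence.
```

The point of the sketch is the dimensionality reduction: two low-dimensional fits replace one joint fit over all parameters, at the cost of possibly iterating the cycle.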
From this, we can estimate the corresponding P-values from the cumulative distribution function of the F statistics. To quantify the model comparison with experimental data, adjusted R2 values and F values were used: (19) F = ((SSE1 - SSE2)/(p2 - p1))/(SSE2/(n - p2)), where SSE1 and SSE2 are the sums of squared errors of the simpler and the richer model, respectively. The increase of M later in the experiment, which is now accounted for, is a consequence of slower cell division compared to plasmid replication later in the experiment, and not a result of overfitting: the relevant additional parameters are the plasmid division rates in the two time intervals (in particular p\u03bb2, the rate in the second interval) and the time of transition between the intervals, as compared to a simpler model with a constant plasmid division rate throughout the course of the experiment. In the three parameter model, the plasmid division dynamics are dominated by the first division rate. If one would attempt to explain the increase in the amount of M later in the experiment through transcription regulation alone, one would have to invoke a non-existent activation of the PM promoter at higher concentrations of C protein, while in reality there is only repression. With regards to the R dynamics, if the population dynamics effects are neglected, R slowly increases to some saturation value, and the effect of CR promoter repression by C protein is lost when these effects are excluded from the model (dashed line). Consequently, one can speculate that a single, major effect behind the increase of M in the second interval of the experiment is plasmid dynamics, i.e., the increase in the plasmid numbers late in the experiment. Neglecting plasmid dynamics in modeling can thus lead to misinterpretation of experimental results. For example, an increase in the amount of M later in the experiment can be interpreted as a consequence of non-existent PM activation rather than of population dynamics; qualitatively such a scenario may appear consistent with CR promoter control, but from a quantitative viewpoint, modeling such a case provides a significantly worse fit to the data (purple curve; P on the order of 10\u22127). 
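Assuming Equation (19) is the standard F statistic for nested models, the comparison of two fits can be sketched as:

```python
from scipy.stats import f as f_dist

def nested_f_test(sse1, sse2, p1, p2, n):
    """F statistic for comparing a simpler model (p1 parameters, SSE1)
    against a nested richer model (p2 > p1 parameters, SSE2 <= SSE1),
    with n data points."""
    F = ((sse1 - sse2) / (p2 - p1)) / (sse2 / (n - p2))
    # P-value from the F distribution's survival function (1 - CDF)
    p_value = f_dist.sf(F, p2 - p1, n - p2)
    return F, p_value
```

For example, halving the sum of squared errors (100 to 50) by adding two parameters to a two-parameter model fitted to 20 points gives F = 8 and a P value well below 0.05, i.e., the richer model is preferred.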
Therefore, intuitive reasoning may be in disagreement with the true situation in a cell. The same holds when explaining the properties of the R dynamics: an intuitive interpretation would suggest only activation of the cr operon by C protein. Moreover, the true behaviour could not be inferred even through analytical derivations, since those imply a constitutive (non-regulated) gene expression and stable RNA and protein amounts, which is very different from reality. Therefore, an accurate understanding of the system regulation and the resulting dynamics requires taking into account all relevant effects, and their careful computational modeling. The R dynamics are thus a nice example of an apparently simple (quadratic) time dependence generated by a complex interplay between very different opposing effects, in particular, those arising from intracellular regulation (repression at higher C concentrations) and population dynamics (the increase in plasmid numbers later in the experiment). An earlier minimal model of the Esp1396I R-M system expression predicted that the methyltransferase amount changes oppositely in time to what was experimentally observed in microcolonies grown from single transformed cells. The disagreement could have been interpreted as a consequence of unknown regulatory mechanism(s) operating in the system, that were not accounted for in the model. However, from our analysis it follows that the reason behind this disagreement are the commonly neglected population dynamics effects, namely the kinetics of cell division and plasmid replication, which, when included, lead to a very good agreement with the data. 
Consequently, neglecting global physiological effects on gene expression can lead to falsely identified regulation, by significantly impacting qualitative properties of intracellular protein expression dynamics. From a computational perspective, we note that including population dynamics in the model, which is necessary to explain the experimental data, is a highly nontrivial task, as it inevitably increases the dimensionality of the parameter inference. Still, we showed that this problem can be approached through a procedure in which the shared population dynamics parameters are initially estimated by considering the dynamics of only one of the multiple species coupled by the regulation mechanisms, relying on approximated dynamics of the rest of the species. Such a \u201cmean field-like\u201d procedure can become even more necessary when considering larger gene regulatory networks, as the dimensionality of the parameter inference problem would further increase from the one considered here. While the procedure is here introduced in a form that can be directly applied to any number of molecular species, it remains to be tested in practice when applied to more complex regulatory networks. As an outlook, we have seen here that population dynamics effects, which are modulated by changes in global physiological factors, can have a significant effect on gene expression dynamics. Consequently, expression patterns of molecular species within a cell can result from a complex interplay of intracellular regulation and population dynamics effects, as demonstrated here in the case of the Esp1396I system. This then calls for advanced methods to reverse-engineer intracellular regulation from dynamical data, which would take into account both intracellular regulation and effects due to changing external conditions [20,34]."}
+{"text": "Seasonal timing of breeding is a life history trait with major fitness consequences, but the genetic basis of the physiological mechanism underlying it, and how gene expression is affected by date and temperature, is not well known. In order to study this, we measured patterns of gene expression over different time points in three different tissues of the hypothalamic-pituitary-gonadal-liver axis, and investigated specifically how temperature affects this axis during breeding. We studied female great tits (Parus major) from lines artificially selected for early and late timing of breeding that were housed in two contrasting temperature environments in climate-controlled aviaries. We collected hypothalamus, liver and ovary samples at three different time points (before and after onset of egg-laying). For each tissue, we sequenced whole transcriptomes of 12 pools to analyse gene expression. Birds from the selection lines differed in expression especially for one gene with clear reproductive functions, zona pellucida glycoprotein 4 (ZP4), which has also been shown to be under selection in these lines. Genes were differentially expressed at different time points in all tissues, and most of the differentially expressed genes between the two temperature treatments were found in the liver. We identified a set of hub genes from all the tissues which showed high association to hormonal functions, suggesting that they have a core function in timing of breeding. We also found ample differentially expressed genes with largely unknown functions in birds. This article contains supplementary material, which is available to authorized users. Over recent decades, environmental change (e.g. climate change) has resulted in phenological shifts of spring events across trophic levels [1\u20134]. 
Photoperiod plays a main role in timing of breeding, as the yearly predictive increase in photoperiod in early spring provides precise information for birds to track the time of the year; it stimulates the photoreceptors in the hypothalamus, which then send information along the photoperiodic signalling pathway [15]. While the function of photoperiod in timing of breeding is clear, it is much less clear how temperature affects seasonal timing of breeding, and whether this occurs only in the brain, like photoperiod, or also elsewhere in the HPGL axis. We studied the great tit (Parus major), which is a model species in ecology and evolution, due to its willingness to breed in nest boxes, short generation time and large broods, and wide distribution. Chicks (F1 generation) were taken from wild broods of which the mother was either an extremely early or extremely late breeder. These chicks were genotyped and, based on their \u201cgenomic breeding values\u201d (GEBVs), individuals were selected for early and late line breeding pairs to produce the F2 generation in captivity. Here, making use of the abovementioned tools, we measured overall gene expression levels by means of RNA-seq based expression profiling in three different tissues in great tit females housed in contrasting temperature treatments, at three different time points related to egg-laying. As such, we explore time, temperature and tissue-specific gene expression patterns underlying timing of breeding. In order to identify molecular pathways likely to be involved in timing of breeding and the potential effect of temperature on these pathways, we performed functional gene enrichment analysis, network construction and hierarchical clustering of the RNA-seq datasets. In addition to exploring the molecular basis of seasonal breeding, our datasets and results will be an important starting point for future studies, especially on wild avian reproduction. The phenotypic results are described in detail elsewhere. 
For the downstream analyses, we sequenced on average 18\u2009\u00b1\u20093 million (mean\u2009\u00b1\u2009s.d.) single end reads in hypothalamus, 16\u2009\u00b1\u20092 million reads in liver and 15\u2009\u00b1\u20092 million reads in ovary, and the overall alignment rate was on average 82.3% in hypothalamus, 79.8% in liver and 91.2% in ovary. In the differential gene expression analysis with DeSeq2, using the transformed expression values, we found significant differences between time points in 491, 569 and 5175 transcripts in hypothalamus, liver and ovary, respectively. One gene, zona pellucida glycoprotein 4 (ZP4), clearly stood out, having a strong differentiation between lines. Eleven different GO categories and KEGG pathways were overrepresented, related especially to circadian rhythm; these GO terms and pathways were similar to the main model results, with functions related to neuronal activity and the GABA pathway. However, the upregulated genes in cluster 2 were related to ribosomal, mitochondrial and ATP related metabolic functions. Egg production related genes, such as vitellogenin 2 and apovitellenin 1, were also expressed at this time point. Other enriched terms and pathways were related to immunological functions, hormone responses and insulin response. Genes upregulated at time point 2 (cluster 3) were linked to two GO terms: carbon-nitrogen lyase activity and oxidoreductase activity. At time point 3 (cluster 1), 32 GO groups and KEGG pathways were enriched, which were especially related to protein processing and amino acid response. Genes such as ovalbumin-related protein Y, lamin-L(III)-like and avidin were expressed at time point 1, while at time point 3 genes were related to morphogenesis and development. 
The \u201cegg-laying gene\u201d APOV1 was expressed at time point 3, as was the bird specific major histocompatibility complex class II beta chain gene; other genes were related to cell cycle, chromosome functions and spindle formation. To investigate the patterns of co-expression among transcripts, we analysed the rld transformed data using weighed correlation network analysis (WGCNA). We could determine 13 \u201creal\u201d hub genes out of 21 modules, based on a combination of co-expression and PPI network connections. Gene expression levels of females from both lines and temperature treatments followed similar patterns in ovary and liver across all three time points (time point 3 being at the time of egg-laying). In hypothalamus, however, we found a significant interaction between time point and temperature, which could indicate that temperature affects the timing of certain gene expression levels mainly in the brain. We found no effect of temperature on either egg-laying date in the first breeding season or follicle size in the second breeding season (see \u201cDownstream regulation of timing of breeding\u201d below). Many of the highly differentially expressed genes had an unknown function, either being non-coding RNA or having an unknown function especially in birds. Furthermore, in every tissue we identified hub genes that may play a central role in timing of reproduction in great tits. Due to the small sample size for each time point in this study, the statistical analysis likely suffered from low power to detect differences between time points, temperature and line. We used pooled data without any replication, and especially the interaction in hypothalamus would have benefitted from individual level expression data with replication. Unfortunately, it is not possible to obtain tissue samples from the same individual at every time point to see how the expression patterns change in one individual. 
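A much simplified sketch of the idea behind ranking a hub gene by intramodular connectivity in a WGCNA-style co-expression network follows; the real analysis uses the WGCNA R package with topological overlap and module detection, and the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: 12 samples x 30 genes; the first 10 genes co-vary
# (one "module"), driven by a shared latent signal plus noise.
latent = rng.normal(size=12)
expr = rng.normal(size=(12, 30))
expr[:, :10] += latent[:, None] * 2.0

# WGCNA-style step (simplified): soft-threshold the correlation matrix to
# get an adjacency matrix, then rank genes by total connectivity.
beta = 6                                   # soft-thresholding power
corr = np.corrcoef(expr, rowvar=False)     # gene-gene correlations
adj = np.abs(corr) ** beta                 # adjacency matrix
np.fill_diagonal(adj, 0.0)
connectivity = adj.sum(axis=0)
hub_gene = int(np.argmax(connectivity))    # most connected gene = hub candidate
```

The soft-thresholding power suppresses weak background correlations, so the most connected gene lands inside the co-varying module.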
In great tits it has been shown that there is some genetic variation both in the onset and termination of egg-laying, and in the underlying mechanisms, and sometimes there is also an interaction with temperature [46]. Although gene expression levels in the hypothalamus seem to be affected by temperature, this does not directly lead to earlier egg-laying, because we found no effect of temperature on either egg-laying date (first breeding season) or follicle size (second breeding season). Our data are in line with the hypothesis that downstream processes in the liver and ovary play a more important role in the fine-tuning of egg-laying date than hypothalamic processes [52\u201354]. At time point 1, the genes expressed in the hypothalamus were related to circadian rhythm and photoperiodism. In fact, at every time point and in every tissue, as well as in the interaction model in hypothalamus, several genes involved in circadian rhythms were differentially expressed. In addition to the HPGL axis, the role of the circadian clock in annual cycles has been suggested for some time [56]. Interestingly, there was not much overlap in circadian genes between tissues, nor between the two models (main effect and interaction) in hypothalamus. The more downstream tissues (i.e. ovary and liver) also possess their own circadian clockworks and entrain their tissue-specific rhythms through their own outputs of the circadian system, the core outputs, or both [62]. Among the differentially expressed circadian genes were nuclear receptor subfamily 1 group D member 1 (NR1D1); neuronal PAS domain protein 2 (NPAS2); period circadian clock 2 (PER2); period circadian clock 3 (PER3) and basic helix-loop-helix family member e41 (BHLHE41). 
In birds, changes in circadian gene expression in liver have been linked to alterations in the seasonal state. In liver, eight differentially expressed molecular clock related genes were mainly expressed at time point 1, the majority of them being circadian regulators of gene expression. We found that the ovary exhibited the most DEGs and co-expressed genes. In the pools at time point 1 the ovary was expressing genes related to cell cycle, mitosis and meiosis, suggesting that ovarian maturation had already started, along with follicle development [74], involving genes such as bone morphogenetic protein 15 (BMP15) [76], forkhead box L2 (FOXL2) and NOBOX oogenesis homeobox (NOBOX). Many of the DEG clusters from time point 1 were also upregulated at time point 2, such as the circadian and activin related genes in hypothalamus and the ovary\u2019s cell cycle related genes. In hypothalamus there was also a cluster of genes that started to be expressed at time point 2 and continued to be highly expressed at time point 3 as well. These genes were related to female reproduction, such as the progesterone receptor (PGR) and prolactin receptor genes. There were also genes involved in angiogenesis, one of them being fibroblast growth factor 1 (FGF1), which has also been linked to egg fecundity in chicken, albeit from bone RNA samples. In liver there was a specific upregulated gene cluster at time point 2. These genes were related to oxidoreductase and carbon-nitrogen lyase activity, which have no known function in reproduction. Both GO groups shared one gene, the aldo-keto reductase family 1 member B10 (AKR1B10), which is known to take part in detoxifying compounds under oxidative stress conditions; it has also been shown in humans that aldo\u2013keto reductases take part in steroid hormone action and nuclear receptor signalling. Notably, GnRH itself was not expressed in our hypothalamus samples. 
However, GnIH, iodothyronine deiodinase 2 (DIO2) and thyroid stimulating hormone receptor (TSHR) were active in hypothalamus, and of these TSHR was especially expressed at time point 3, indicating the HPG cascade moving towards egg production. At time point 3 in hypothalamus the upregulated genes were related to many neuronal function groups, but also to GABA receptor functions. GABA, the main inhibitory neurotransmitter, and glutamate, the main stimulatory neurotransmitter, set a level of sensitivity in the hypothalamus that decreases or increases the likelihood that GnRH will be synthesized or released, based on the reproduction status of the females. In liver, VTG2 and APOV1 also showed line differences in expression levels, with the early lines having higher expression, especially in the early-warm condition at this time point. Furthermore, the cathepsin E-A-like gene (CTSEAL) was upregulated at time point 3; it has been shown to be over-expressed during vitellogenesis in chicken liver and is regulated by estrogen. Close to CTSEAL in the great tit genome is bestrophin 3 (BEST3), which was also upregulated at time point 3. BEST3 is an important gene in chloride channel activity, but it has no known function in regards to reproduction. The similar expression pattern of BEST3 and CTSEAL, and their closeness in the genome, suggest that they might be co-regulated, but it is unclear from the current study whether the mRNA from BEST3 is actually used in the liver in the end. It is known that mRNA goes through several regulatory processes after it is made, and this is often seen when comparing expression levels from transcriptomes and proteomes. In addition to BEST3, we found further genes from every tissue that have unknown function in bird reproduction. There were also transcripts that are annotated as ncRNA by the NCBI. 
This type of RNA has been shown to be important in eukaryotic gene regulation, and also in hormonal pathways and meiosis during reproduction. At time point 3 in liver, in addition to the above mentioned metabolism and protein processing functional groups, vitellogenesis related genes regulated by estrogen were upregulated. In ovary, PER2 and PER3 were upregulated; in poultry these two have been linked to preovulatory follicle expression. Gonadotropin receptors (FSHR/LOC107202460 and the lutropin-choriogonadotropic hormone receptor, LHCGR/LOC107201154) were expressed in ovary, and especially the expression of LHCGR increased towards time point 3, suggesting increased LH activity in our ovary samples. In birds the increased expression of LH receptors in ovary starts the final follicle maturation, while FSHR transcription is part of the cascade in which pre-hierarchal follicles are selected into the preovulatory hierarchy. Most of the circadian genes were expressed in the ovary, especially at time point 3, supporting the idea that these genes are important in starting ovulation in birds [63]. Among them were CLOCK, PER2 and RAR related orphan receptor A (RORA), which also have pleiotropic effects on many metabolic processes. In the hypothalamus, gene expression was affected by the interaction between time point and temperature; however, due to limitations of the dataset these results should be treated as suggestive. Circadian genes were mostly expressed at time point 3, but there was a set of five circadian genes expressed at time point 1 which were mostly related to ubiquitination. Mutations in ubiquitin related genes can cause either elongation or shortening of the endogenous circadian period (tau). 
PRLR was indeed also upregulated at time point 3 in our samples, suggesting that both dopamine and prolactin were active in hypothalamus. In general, the interaction model had two gene clusters that showed distinctive patterns. The genes that were upregulated more during time points 1 and 2, in cold and warm environments respectively, were associated with metabolic terms and pathways such as ATP, NADH and ribosomal metabolic processes. The molecular clock constantly receives feedback from the metabolic signals in the cells [92]. In contrast to the hypothalamus, no convincing effect of an interaction between time point and temperature was found in liver and ovary, which was not surprising, as no difference in egg-laying was observed between the temperature treatments. Liver had 30 differentially expressed genes between the temperatures, and it was the only tissue with a co-expression module associated with temperature. However, no GO enrichment analysis could be conducted with these genes, and no hub gene was found in the module. All the tissues showed some line differences in gene expression, but in ovary one gene was highly differentially expressed: zona pellucida sperm-binding protein 4 (ZP4), which had a two to three times higher expression in the early line compared to the late line. It also appears in the co-expression results, and it is under selection in these selection line birds. ZP4 is one of the genes responsible for making the zona pellucida or vitelline envelope, a glycoprotein layer surrounding oocytes. In addition, with the MAPK pathway being important in the ovary, MAPK1 is estrogen activated in the brain and is important in female sexual behaviour; MAPK1 and HSPA8 have been found to be differentially expressed in hypothalamus during spring migration in black-headed buntings. ADCYC2 in hypothalamus and SNAP25 in liver are important genes in insulin secretion, and four genes are important in temperature detection. 
All the \u201creal\u201d hub genes that shared high interaction both in the co-expression and the PPI networks were binding molecules, and they were all in the same final PPI network. Six genes were found in the estrogen signalling pathway. In addition to the estrogen signalling pathway, other hormonal pathways related to reproduction were associated with these hub genes, such as progesterone, thyroid, prolactin and oxytocin binding/signalling pathways, suggesting that our hub genes are important in female reproduction. We generated comprehensive RNA expression data from a set of three tissues important in the neuro-endocrine cascade underlying avian seasonal timing of breeding, from three different time points and from two temperature treatments and two selection lines for breeding time. Time was the strongest driving variable in our dataset, as we would expect, but there was an interesting interaction between time and temperature in hypothalamus, which should be studied more intensively in future studies. It could be possible that gene expression in the brain is affected by temperature, perhaps through changes in expression of genes involved in the circadian clock which affect the sensitivity to photoperiod. However, because laying dates were not directly affected by temperature, the effect of temperature on timing of breeding is likely fine-tuned downstream in the reproductive axis, i.e. the liver and/or the ovary, rather than upstream, in the hypothalamus. These findings, as well as our datasets, will further the knowledge of the mechanisms of tissue-specific avian seasonality in the future. Breeding pairs (early line and 18 late line pairs) originating from the second generation (F2) of lines artificially selected for early and late timing of breeding were used. In the first breeding season, initiated on 4 January 2016, the four groups were kept in pairs in the climate-controlled aviaries during spring. 
Nesting material (moss and hair) was provided from the second week of March onwards. Females could choose between three nest boxes, of which two were accessible from the outside to minimize disturbance. Females initiated nest building and subsequent egg-laying, which were recorded together with other reproductive traits (e.g. clutch size). In addition, both sexes were blood sampled bi-weekly throughout the breeding season as part of another study. Groups (n\u2009=\u200912 pairs per group) were composed such that the egg-laying date distributions (recorded in the first breeding season) were similar per group. Every group was sacrificed at a different time point. Days were shortened to 9\u2009L:15D and temperatures decreased to 10\u2009\u00b0C for seven weeks to make the birds photosensitive and temperature sensitive again. From September onwards, the pairs were subjected to the same photoperiod and temperature regimes as in their first breeding season, to initiate their second breeding season. Four females were replaced with a sister, because they did not initiate egg-laying in the first breeding season. Females showed similar phenotypic responses in the first and the second breeding season. Three time points throughout the second breeding season were chosen, based on the reproductive behaviour from the first breeding season: (1) October 7 (resembling March 7), when photoperiod exceeded 11\u2009h, with time points (2) and (3) following later in the season. From hypothalamus, ovary and liver, RNA was isolated by Trizol extraction. The Parus major reference genome build 1.1 (https://www.ncbi.nlm.nih.gov/assembly/GCF_001522545.2) and the Parus major annotation release 101 in NCBI (https://www.ncbi.nlm.nih.gov/genome/annotation_euk/Parus_major/101/) were used; the obtained annotations were merged using cuffmerge. Filtering of low quality reads was conducted at Baseclear by removing PhiX and adaptor sequences. 
The trimmed reads were mapped to the Parus major reference genome build 1.1 using Hisat2 v2.1.0 with default settings, and unique reads that mapped to the merged transcripts were counted using HTSeq v0.9.1. All analyses were performed, and figures made, in R v.3.4.4. Clustering of the samples was done using Principal Component Analysis (PCA) on the \u2018regularized log transformation\u2019 (rld) transformed expression values, in order to diminish the number of variables and summarize the data. Differential expression of genes (DEG) between different time points, lines and temperatures was tested with DeSeq2 v3.6; a gene was considered differentially expressed when the p-value was below 0.05. Heatmaps were generated from the rld transformed expression values for DEGs using gplots and Pheatmap implemented in R. Clustering of the DEGs was done separately for each tissue. A hierarchical dendrogram was generated using the \u201chclust\u201d function in R (R v.3.4.4), where the \u201cward.D\u201d objective criterion was used to merge a pair of clusters at each step. Trees were cut at k\u2009=\u20095, k\u2009=\u20093 and k\u2009=\u20093 in the hypothalamus, liver and ovary time point models, respectively, and at k\u2009=\u20093 in hypothalamus for the time point-temperature interaction model, to obtain clusters of genes that are expressed in a similar way, where k is the number of groups. Each cluster\u2019s fold change values at each time point were plotted as profile plots using ggplot2 in R. For the significant DEGs a GO enrichment analysis was conducted per tissue using the Cytoscape plugin ClueGo 2.5.2. The weighed correlation network analysis (WGCNA) was used to identify modules of co-expressed genes. We further analysed the hub genes from the significant modules from each tissue and conducted STRING pathway analyses. 
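The hierarchical clustering step (Ward linkage, tree cut into k clusters) can be sketched in Python with SciPy, analogous to hclust(..., method = "ward.D") followed by cutree(k = ...) in R; the fold-change profile matrix below is synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Toy matrix of per-gene fold changes across 3 time points: three groups
# of 20 genes, each peaking at a different time point.
profiles = np.vstack([
    rng.normal([3.0, 0.0, 0.0], 0.3, size=(20, 3)),  # up at time point 1
    rng.normal([0.0, 3.0, 0.0], 0.3, size=(20, 3)),  # up at time point 2
    rng.normal([0.0, 0.0, 3.0], 0.3, size=(20, 3)),  # up at time point 3
])

# Ward linkage on the profiles, then cut the dendrogram into k = 3 clusters.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Each row then carries a cluster label in 1..k, from which per-cluster profile plots can be drawn, mirroring the ggplot2 step described above.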
Time1 is comparison of time point 1 to time point 2 and 3, Time2 is comparison of time point 2 to time point 1 and 3, Time3 is comparison of time point 3 to time point 1 and 2. (XLSX 10 kb)Additional file 3:Table S3. The Likelihood Ratio Test results of the nine models and annotations for the transcripts in hypothalamus. Log2 fold changes are reported in the Log2FC \u2013columns and P-values were adjusted for multiple comparisons using Benjamini-Hochberg method. Annotations were based on the Parus major annotation release 101 in NCBI. Clusters and modules from the hierarchical clustering analysis and WGCNA are also reported. In the time point main effect model the log2 fold change is between time points 3 vs 1. (XLSX 4991 kb)Additional file 4:Table S4. The Likelihood Ratio Test results of the nine models and annotations for the transcripts in liver. Log2 fold changes are reported in the Log2FC \u2013columns and P-values were adjusted for multiple comparisons using Benjamini-Hochberg method. Annotations were based on the Parus major annotation release 101 in NCBI. Clusters and modules from the hierarchical clustering analysis and WGCNA are also reported. In the time point main effect model the log2 fold change is between time points 3 vs 1. (XLSX 4342 kb)Additional file 5:Table S5. The Likelihood Ratio Test results of the nine models and annotations for the transcripts in ovary. Log2 fold changes are reported in the Log2FC \u2013columns and P-values were adjusted for multiple comparisons using Benjamini-Hochberg method. Annotations were based on the Parus major annotation release 101 in NCBI. Clusters and modules from the hierarchical clustering analysis and WGCNA are also reported. In the time point main effect model the log2 fold change is between time points 3 vs 1. (XLSX 5693 kb)Additional file 6:Table S6. Significant GO terms associated with the time point main effect model gene clusters in hypothalamus. Results based on human GO-database. 
(XLSX 19 kb)Additional file 7:Table S7. Significant GO terms associated with the time point - temperature interaction model gene clusters in hypothalamus. Results based on human GO-database. (XLSX 120 kb)Additional file 8:Table S8. Significant GO terms associated with the time point main effect model gene clusters in liver. Results based on human GO-database. (XLSX 31 kb)Additional file 9:Table S9. Significant GO terms associated with the time point main effect model gene clusters in ovary. Results based on human GO-database. (XLSX 489 kb)Additional file 10:Table S10. The genes from the super pathway \u2018BMAL1-CLOCK, NPAS2 activates circadian gene expression\u2019 found in our time point main effect models and from the time point - temperature interaction model. (XLSX 10 kb)Additional file 11:Table S11. Modules of genes significantly correlated with time, temperature or line in hypothalamus. Gene symbol\u2009=\u2009annotation, GS\u2009=\u2009gene significance, p.GS\u2009=\u2009P-value of gene significance, MMcolor\u2009=\u2009ModuleMembership correlation coefficient, p.MMcolor\u2009=\u2009ModuleMembership p-value. (XLSX 4497 kb)Additional file 12:Table S12. Modules of genes significantly correlated with time, temperature or line in liver. Gene symbol\u2009=\u2009annotation, GS\u2009=\u2009gene significance, p.GS\u2009=\u2009P-value of gene significance, MMcolor\u2009=\u2009ModuleMembership correlation coefficient, p.MMcolor\u2009=\u2009ModuleMembership p-value. (XLSX 5375 kb)Additional file 13:Table S13. Modules of genes significantly correlated with time, temperature or line in ovary. Gene symbol\u2009=\u2009annotation, GS\u2009=\u2009gene significance, p.GS\u2009=\u2009P-value of gene significance, MMcolor\u2009=\u2009ModuleMembership correlation coefficient, p.MMcolor\u2009=\u2009ModuleMembership p-value. (XLSX 6338 kb)Additional file 14:Table S14. 
List of highly connected module genes in hypothalamus that have at least one connection degree in the PPI network. (XLSX 42 kb)Additional file 15:Table S15. List of highly connected module genes in liver that have at least one connection degree in the PPI network. (XLSX 13 kb)Additional file 16:Table S16. List of highly connected module genes in ovary that have at least one connection degree in the PPI network. (XLSX 100 kb)Additional file 17:Table S17. Significant GO terms associated with the real hub genes. Results based on human GO-database. (XLSX 18 kb)Additional file 18:Figure S1. Clustering of samples based on principal component analysis (PCA). Samples collected from warm (W) and cold (C) temperature treatments from two different lines, early (E) and late (L), from three different time points and from three different tissues: a. hypothalamus, b. liver and c. ovary. (PDF 102 kb)Additional file 19:Figure S2. Volcano plots of all the transcripts analysed in hypothalamus RNA-seq in six different models. Genes differentially expressed with p\u2009<\u20090.05 after correcting for false discovery rate are in orange. Genes with a p\u2009>\u20090.05 after correcting for false discovery rate are in black. (PDF 20 kb)Additional file 20:Figure S3. Volcano plots of all the transcripts analysed in liver RNA-seq in six different models. Genes differentially expressed with p\u2009<\u20090.05 after correcting for false discovery rate are in orange. Genes with a p\u2009>\u20090.05 after correcting for false discovery rate are in black. (PDF 19 kb)Additional file 21:Figure S4. Volcano plots of all the transcripts analysed in ovary RNA-seq in six different models. Genes differentially expressed with p\u2009<\u20090.05 after correcting for false discovery rate are in orange. Genes with a p\u2009>\u20090.05 after correcting for false discovery rate are in black. (PDF 20 kb)Additional file 22:Figure S5. 
Expression patterns of DEG clusters in hypothalamus time point main effect model. (PDF 31 kb)Additional file 23:Figure S6. Expression patterns of DEG clusters in hypothalamus time point-temperature interaction model. (PDF 144 kb)Additional file 24:Figure S7. Expression patterns of DEG clusters in liver time point main effect model. (PDF 22 kb)Additional file 25:Figure S8. Expression patterns of DEG clusters in ovary time point main effect model. (PDF 72 kb)Additional file 26:Figure S9. The raw expression levels of ZP4 in ovary. Numbers indicate different time points from the study. (PDF 8 kb)Additional file 27:Figure S10. The raw expression levels of CTSEAL, VTG2 (VTG2a - LOC107208431and VTG2b - LOC107208432) and APOV1 in liver. (PDF 12 kb)Additional file 28:Figure S11. Hierarchical clustering tree based on WGCNA module eigengenes in A. hypothalamus, B. liver and C. ovary. (PDF 105 kb)Additional file 29:Figure S12. Matrix with the module-treatment relationships and corresponding p-values between the detected modules on the y-axis and treatments on the x-axis based on hypothalamus RNA-seq. The relationships are coloured based on their correlation: red is a strong positive correlation, while blue is a strong negative correlation. The value at the top of each square represents the correlation coefficient between the module eigengene and the treatment with the correlation p-value in parentheses. (PDF 42 kb)Additional file 30:Figure S13. Matrix with the module-treatment relationships and corresponding p-values between the detected modules on the y-axis and treatments on the x-axis based on liver RNA-seq. The relationships are coloured based on their correlation: red is a strong positive correlation, while blue is a strong negative correlation. The value at the top of each square represents the correlation coefficient between the module eigengene and the treatment with the correlation p-value in parentheses. (PDF 51 kb)Additional file 31:Figure S14. 
Matrix with the module-treatment relationships and corresponding p-values between the detected modules on the y-axis and treatments on the x-axis based on ovary RNA-seq. The relationships are coloured based on their correlation: red is a strong positive correlation, while blue is a strong negative correlation. The value at the top of each square represents the correlation coefficient between the module eigengene and the treatment with the correlation p-value in parentheses. (PDF 44 kb) Additional file 32: Figure S15. Daily minimum (A) and daily maximum (B) temperatures for the cold (blue) and warm (red) spring provided in the first and second breeding season. The open triangle indicates the day on which the first breeding season stopped and birds went into the phase of the experiment where days were shortened and the temperature set at 10 °C (see 'Second breeding season') to prepare them for the second breeding season. The black triangles indicate the three time points at which the birds were sacrificed in the second breeding season. (PDF 9 kb)"}
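The clustering step in the methods above (R's hclust with the ward.D criterion, trees cut at k groups) has a close Python analogue in scipy; a toy sketch with made-up fold-change profiles (scipy's ward linkage corresponds to R's ward.D2 rather than ward.D, so this is an approximation, not the study's code):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# toy "fold change" profiles for 30 genes over 3 time points,
# drawn around 3 distinct prototype expression profiles
prototypes = np.array([[2.0, 0.0, -2.0], [0.0, 2.0, 0.0], [-2.0, -2.0, 2.0]])
profiles = np.vstack([p + 0.1 * rng.standard_normal((10, 3)) for p in prototypes])

# Ward's objective criterion, analogous to R's hclust(d, method = "ward.D2")
tree = linkage(profiles, method="ward")
# cut the tree into k = 3 groups, analogous to R's cutree(tree, k = 3)
clusters = fcluster(tree, t=3, criterion="maxclust")
print(sorted(set(clusters)))
```

Each block of ten genes generated from the same prototype ends up in the same cluster, mirroring the per-tissue DEG clusters plotted as profile plots in the text.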
+{"text": "Unlike previous studies, which report the fractal and long-range dependency of DNA structure, we investigate the individual gene expression dynamics as well as the cross-dependency between them in the context of gene regulatory networks. Our results demonstrate that the gene expression time series display fractal and long-range dependence characteristics. In addition, the dynamics between genes and linked transcription factors (TFs) in gene regulatory networks are also fractal and long-range cross-correlated. The cross-correlation exponents in gene regulatory networks are not unique. The distribution of the cross-correlation exponents of gene regulatory networks for several types of cells can be interpreted as a measure of the complexity of their functional behavior. Gene expression is a vital process through which cells react to the environment and express functional behavior. Understanding the dynamics of gene expression could prove crucial in unraveling the physical complexities involved in this process. Specifically, understanding the coherent complex structure of transcriptional dynamics is the goal of numerous computational studies aiming to study and finally control cellular processes. Here, we report the scaling properties of gene expression time series in Escherichia coli and Saccharomyces cerevisiae. Protein synthesis is fundamental for biological systems to perform a variety of functions. Proteins control the organism's shape or can function as enzymes catalyzing specific metabolic pathways to regulate specific cellular processes. These cellular functions include responding to stimuli, transporting molecules and catalyzing metabolic reactions. In order to program cells for performing the desired functionality, one should regulate the protein synthesizing process. The process of protein synthesis from the activation of a specific gene is called gene expression. Gene expression time series were analysed for S. cerevisiae and Escherichia coli (E. coli), and scaling plots are shown for S. cerevisiae and E.
coli, respectively. In these plots, the slope of the curve represents the Hurst exponent. We observe that 95 and 98% of the time series of genes from the S. cerevisiae and the E. coli gene expression networks, respectively, exhibit a long-range dependency property; more precisely, their Hurst exponent was greater than 0.5. To demonstrate this important property, representative examples are shown for S. cerevisiae and E. coli, respectively. Generally speaking, a Hurst exponent that exceeds the 0.5 threshold value denotes a persistent (positively correlated) behavior in the sense that a high value is likely to be followed by another high value with nonzero probability. In addition, because the Hurst exponent for most of the genes is significantly higher than 0.5, the gene and TF time series cannot be regarded as a random process and modeled through a Markovian formalism. An example link in S. cerevisiae is YLR256W to YKL020C. We have applied the multifractal detrended cross-correlation analysis to pairs of genes and TFs (links) in the gene regulatory networks of E. coli and S. cerevisiae and found that there is a pronounced multifractal cross-correlation signature in these gene regulatory network links. The generalized Hurst exponent H(q) is shown as a function of the order q of the cross-moments. If H(q) does not depend on q, the cross-correlation is monofractal; instead, if the generalized Hurst exponent exhibits a nonlinear dependency on q, the cross-correlation is multifractal. Although we observe the fractal and long-range cross-correlation in linked pairs of genes and TFs in the gene regulatory networks, the cross-correlation exponents were not the same in all the links. We have shown the distribution of the cross-correlation exponents for pairs of genes and TFs. We analyzed the multifractal property of the cross-correlation of pairs of genes and TFs in a gene regulatory network. We investigated whether the observed multifractality can be explained by known analytical cascade models, including the Mandelbrot binomial cascade model (see Materials and Methods). We observed that only a few of the links in the network of S. cerevisiae and none of the links in the network of E.
coli can be modeled by the Mandelbrot cascade model for multifractal spectrums. We observe that even for the few links for which we could find a closest Mandelbrot model spectrum, the deviation between the Mandelbrot model and the data we had for the gene regulatory network was significant; two such samples are shown from the E. coli gene networks. Note that the peak of the multifractal spectrum for these spectrums was lower than the value 1, which does not fit the Mandelbrot binomial cascade model. With high-throughput data collection, the expression of many genes can be measured simultaneously, and a wide range of insights, such as characterizing the functions of specific genes, the relationships among these genes, and their regulation and coordination, can be gained. These insights can also be used to understand the gene regulatory network as a complex network. There are many studies which try to infer the underlying gene regulatory network from empirical time series. Here, however, we analyse the dynamics on known gene regulatory networks of E. coli and S. cerevisiae. We also investigate the cross-correlation of gene-TF pairs which are linked together in gene expression networks. We report the fractal and long-range cross-dependency of linked genes and TFs of gene expression networks in E. coli and S. cerevisiae. We also show that the multifractal nature of these cross-correlations cannot be modeled through a Mandelbrot binomial cascade model. In contrast, we found very good agreement between the empirical multifractal spectrum of the cross-dependencies in the gene regulatory networks and the log-normal W-cascade model. We suggest investigating more cascade models on empirical data. In total, 536 chips with available raw data Affymetrix files were compiled. The completion of microarray normalization and filtering resulted in a total of 5,667 genes over the 536 microarrays. Transcriptional interactions and, hence, gene regulatory networks for E. coli and S. cerevisiae are collected from sources with strong experimental support. Interactions in E.
coli are collected from the manually curated EcoCyc database. For S. cerevisiae, we use the network based on the most stringent thresholds from MacIsaac et al. The expression data comprise time series for S. cerevisiae, and 4511 time series, each having 805 data points, for E. coli. We use the data set from the publicly available DREAM project. Since DNA, RNA, and proteins involved in the gene expression process can be present and active at a few copies per cell, this process is sensitive to stochastic fluctuations. In this paper, we have used the MFDFA method for the analysis of gene expression time series. This method, the extension of detrended fluctuation analysis (DFA) used to extract the Hurst exponent, proceeds as follows. First, the profile of the time series is obtained, determined by integrating the difference of the time series from its average value. Second, the profile is divided into non-overlapping segments of length s, where s is the scale. Third, the local trend within each segment is calculated: for each of these boxes, a least-squares local trend is fitted, and the fitted value in segment n is denoted by yn. Finally, the generalized Hurst exponent is estimated by fitting a linear line to the log-log plot of the fluctuation function of order q with respect to the scale s. The Hurst exponent is the value of the generalized Hurst exponent at q = 2, which is a special case usually used when one is interested only in analyzing the long-range dependency of a signal and not its multifractal characteristics. The singularity spectrum is then estimated from the generalized Hurst exponent by the Legendre transform. The DCCA method is designed to investigate the power-law cross-correlation between two time series. Similarly to MFDFA, it first computes the integrated profile of each time series (yn,1(i) and yn,2(i)).
Second, each time series is divided into non-overlapping intervals. Third, the local trend in each interval is computed for each time series. Fourth, the covariance of the residuals of the profiles from the local trends is calculated, and the detrended covariance is obtained by summing over all segments of the nonstationary time series, yielding a fluctuation function of order q with respect to the scale s. The cross-correlation exponent (λ) is estimated by fitting a linear line to the log-log plot of this fluctuation function versus the scale. The Mandelbrot cascade model works as follows. The unit interval, carrying a mass μ, is divided into two subintervals: a fraction m0 of the mass is assigned to the left subinterval and m1 = 1 - m0 is assigned to the right subinterval. Repeating this step for each of the subintervals n times results in the Mandelbrot model with n iterations. Mandelbrot proved that the limit behavior of this model when n is infinitely large can best be illustrated by multifractal formalism, and he formulated the singularity spectrum on the basis of the parameters of the Mandelbrot cascade model. We have compared the observed multifractal spectrum in gene expression time series to the closest one obtained by the Mandelbrot cascade model. We also considered log-normal W-cascades and log-Poisson W-cascades. For the log-normal W-cascade, a closed-form equation holds for the singularity spectrum, whereas for the log-Poisson W-cascade an equation holds for the mass exponent, whose second derivative characterizes the model. We have reported the similarity of the multifractal spectrum of cross-dependencies in gene expression time series to the log-normal W-cascade model, and its disagreement with the log-Poisson W-cascade model due to the latter's deviation from a power-law shape. Shannon entropy is a measure of the unpredictability of a state, or equivalently, of its average information content.
Shannon defined the entropy of a discrete random variable X with possible values {x1, x2, ..., xk} and probability mass function P(X) as: H(X) = -Σi P(xi) log P(xi). Entropy is a measure of the unpredictability of the state or, equivalently, of its average information content. One illustrative example is when there is no uncertainty and the random variable takes only one value, in which case the entropy is zero. As the number of possibilities increases, the entropy increases as well. We have used the notion of entropy in the context of networks. We assign to each link the cross-correlation exponent of the time series of the two nodes linked together in the gene regulatory network. Then, in the newly constructed weighted network, we consider the distribution of the weights of the links and their entropy as a measure of the entropy of the network. This weighted network can also be used by other proposed algorithms for measuring the entropy of complex networks. Given an undirected binary graph of a gene regulatory network and the time series of the genes and TFs in the network (the nodes of the gene regulatory network), we construct a weighted network (shown in the figure) by assigning a cross-correlation exponent to each link. In the figure, γ1,2, ..., γ1,9 are the cross-correlations of the time series of the TF and the genes which are linked together in the gene regulatory network in the left part of the figure. Hence, this shows how knowing the existing interactions in the network and having the time series of each node's dynamics can lead us to the cross-correlation exponents and then to assigning the concept of entropy to the network dynamics. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. PB conceived the idea.
MG, EJ, and PB discussed the research problem and approaches, identified analytical challenges and solutions. MG implemented the research discussions and performed the research.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
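The DFA procedure summarized in the record above (profile, non-overlapping boxes, local linear detrending, log-log fit of the fluctuation function) can be sketched minimally in Python; this is the monofractal special case (q = 2), not the full MFDFA used in the paper, and the data are synthetic:

```python
import numpy as np

def dfa_hurst(x, scales):
    """Estimate the Hurst exponent via detrended fluctuation analysis.
    Steps as in the text: integrate to obtain the profile, split it into
    non-overlapping boxes of length s, remove a least-squares linear trend
    per box, and fit the log-log slope of F(s) versus the scale s."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for s in scales:
        n_boxes = len(y) // s
        segs = y[: n_boxes * s].reshape(n_boxes, s)
        t = np.arange(s)
        resid = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # local linear trend
            resid.append(seg - np.polyval(coef, t))
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)                 # uncorrelated noise
h = dfa_hurst(white, [16, 32, 64, 128, 256])
print(round(h, 2))
```

For uncorrelated noise the estimate sits near 0.5; the persistent (H > 0.5) behavior the paper reports for gene time series would show up here as a steeper log-log slope.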
+{"text": "INF-\u03b2 has been widely used to treat patients with multiple sclerosis (MS) in relapse. Accurate prediction of treatment response is important for effective personalization of treatment. Microarray data have been frequently used to discover new genes and to predict treatment responses. However, conventional analytical methods suffer from three difficulties: high-dimensionality of datasets; high degree of multi-collinearity; and achieving gene identification in time-course data. The use of Elastic net, a sparse modelling method, would decrease the first two issues; however, Elastic net is currently unable to solve these three issues simultaneously. Here, we improved Elastic net to accommodate time-course data analyses. Numerical experiments were conducted using two time-course microarray datasets derived from peripheral blood mononuclear cells collected from patients with MS. The proposed methods successfully identified genes showing a high predictive ability for INF-\u03b2 treatment response. Bootstrap sampling resulted in an 81% and 78% accuracy for each dataset, which was significantly higher than the 71% and 73% accuracy obtained using conventional methods. Our methods selected genes showing consistent differentiation throughout all time-courses. These genes are expected to provide new predictive biomarkers that can influence INF-\u03b2 treatment for MS patients. The highest incidences of MS have been reported in North America and Europe , and the lowest occur in East Asia and sub-Saharan Africa 2. This disease is the second most common neurological disability in young adulthood3. Approximately 80\u201390% of MS patients initially suffer from relapsing-remitting MS (RRMS) where MS repeatedly occurs with a variety of symptoms, including the stages of neurological disability (relapse) and recovery (remission)1. The disease gradually shifts to secondary progressive MS (SPMS) which is associated with frequent relapses. 
Therefore, a systematic treatment strategy to prevent and/or delay relapse is important for the improvement of the quality of life (QOL) of MS patients. Multiple sclerosis (MS) is one of the most common neurological disabilities of the central nervous system. Interferon-β (INF-β) has been commonly used to prevent relapse of MS; however, INF-β treatment has two issues. First, the treatment only works for a limited number of patients, where approximately half of the patients relapse within 2 years despite treatment. Second, this treatment can cause side effects, such as spasticity and dermal reactions. Thus, effective surveillance and appropriate intervention over a long period of time post-treatment is required. Although the pathogenesis of MS has yet to be fully elucidated, various genetic factors involved in this disease have been reported. Gene expression data have been intensively analyzed to predict INF-β treatment responses. Hundreds of genes, such as Caspase2, Caspase10, and FLIP, showed promise in predicting treatment response; however, these genes were identified by conventional statistical methods which showed low prediction accuracies in some cases. The MxA and ISG genes were reported to be predictive for IFN-β treatment response. The expression patterns of these genes, however, were not consistently differentiated throughout all of the time-courses. Given this, any predictions would only be accurate immediately after the observation of the gene expression levels, while the accuracy of prediction would be low for subsequent responses. Therefore, the identification of genes showing highly accurate prediction abilities throughout all time-courses is needed. Prediction using only the currently observable data to predict an outcome of treatment is the most useful but most challenging approach for optimizing patient treatment. Single time-point-based analyses are limited by the high dimensionality of the datasets and the high degree of multi-collinearity.
Elastic net, a type of sparse modelling method, has been commonly utilized to identify differentiated genes and to address these issues. To our knowledge, however, the identification of genes showing highly accurate prediction abilities throughout all time-courses for MS patients by sparse modelling has not been reported. Generally, data analyses to identify biomarkers are categorized into single time-point-based and time-course-based approaches, and time-course-based analyses are particularly challenging. The purpose of this study was to identify new genes showing highly accurate prediction abilities throughout all time-courses for MS patients. Therefore, sparse modelling methods were modified, and two microarray time-course datasets collected from patients with MS were used for predictions of INF-β treatment responses by our proposed method. Elastic net, a sparse modelling method, was modified to analyse time-course data. Our method was designed to find genes showing consistent differentiation between the two given groups throughout multiple time-points. Here, we addressed the following problems: (1) high dimensionality, as microarray data include a large number of genes relative to a small sample size; (2) multi-collinearity, as microarray data include many genes showing highly positive correlations, and the use of such genes in a prediction model would deteriorate its generalization ability; and (3) time-courses, as genes showing consistent differentiation throughout multiple time-points should be identified. Elastic net was designed to analyse single time-point data to identify differentiated genes while preventing multi-collinearity; we modified this method for the time-course data analyses. Among the different sparse modelling methods, the Least Absolute Shrinkage and Selection Operator (LASSO) and Elastic net have been commonly used in various studies.
LASSO, however, is limited in that it selects only one variable from two variables showing a high correlation (multi-collinearity), and the other variable is not selected despite being differentiated. The Ridge regression model is capable of handling this problem: it can construct models from two variables showing multi-collinearity; however, it does not select genes. Elastic net is another sparse modelling method able to reduce those two limitations. Elastic net combines LASSO, which selects variable sets, with Ridge, so that it can select all relevant variables, even those showing high multi-collinearity. Here, we employed Elastic net rather than LASSO to select gene candidates showing predictive abilities for subsequent analyses. Sparse modelling is one of a variety of selection methods suitable for high-dimensional data analyses. y = {y1, y2, …, yn; yi ∈ {0, 1}}: yi denotes the response variable indicating good responders (labelled as 1) or poor responders (labelled as 0) to INF-β treatment, and n denotes the sample size of MS patients. Xt denotes the explanatory variables of gene expression levels at time-point t, and p denotes the number of genes. β = {β1, β2, …, βp} denotes the regression coefficients. The proposed method used a logistic regression equation to predict the response. The coefficients β cannot be estimated by ordinary least squares (OLS) because the number of genes far exceeds the sample size (p ≫ n). Therefore, a small number of differentiated genes should be selected prior to the use of OLS. Sparse modelling assumes that only several regression coefficients are needed for the prediction model and that the others are not needed. This assumption means that the regression coefficient values of the several genes needed for the prediction model were non-zero while the other values were zero.
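The sparse-modelling idea described here, a logistic model in which Elastic net's combined L1/L2 penalty zeroes out most gene coefficients while still keeping highly correlated informative ones, can be sketched with scikit-learn (illustrative synthetic data; the C, l1_ratio and gene layout are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 60, 200                       # few samples, many "genes" (p >> n)
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.standard_normal(n)   # two highly correlated genes
logits = 2.0 * X[:, 0] + 2.0 * X[:, 1]
y = (logits + 0.5 * rng.standard_normal(n) > 0).astype(int)  # good/poor responder

# Elastic net = L1 (variable selection) + L2 (tolerates multi-collinearity);
# l1_ratio plays the role of the alpha hyper-parameter, C is ~ 1/lambda
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.5, max_iter=5000)
model.fit(X, y)
selected = np.flatnonzero(model.coef_[0])   # genes with non-zero coefficients
print(len(selected))
```

Unlike a pure LASSO fit, the ridge component lets both of the correlated informative genes (columns 0 and 1) keep non-zero coefficients instead of arbitrarily dropping one.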
Specifically, genes exhibiting non-zero regression coefficients were selected as genes able to predict responses to INF-β treatment. With the use of Elastic net, the regression coefficients were calculated by adding a penalty term to a least-squares loss function: the first term denotes the loss function of OLS, the penalty term is given as the second term of the equation, and λ denotes the hyper-parameter for the penalty term of Elastic net. Hyper-parameters are generally set by analysts; α (0 ≤ α ≤ 1) denotes the hyper-parameter that indicates the balance between the Ridge and LASSO penalties, and w = {wt,1, wt,2, …, wt,p} denotes the weights. Screening of gene candidates proceeded as follows: (1) a subset of the samples was drawn; (2) Elastic net was fitted to the subset with an arbitrary λ value; (3) steps (1–2) were repeated with multiple subsets, and the frequency of selection with that λ value was calculated over the subsets; (4) steps (1–3) were repeated using multiple λ values; (5) for each gene, the maximum of the probabilities calculated in (4) among the multiple λ values was regarded as the selection probability of the gene; and (6) genes showing a selection probability above the threshold θss were selected. Cross validation is commonly used for optimizing the λ value. A constructed model was used for the prediction of data at identical time-points, and leave-one-out (LOO) was used to evaluate the prediction accuracy: in LOO, one sample was used as test data and the other samples were used for model construction, and this was repeated until every sample had served as test data. A constructed model was also used for the prediction of data at time-points not used for model construction, and the corresponding prediction accuracies (ACCT) were calculated using the data at those time-points. The mean prediction accuracy (ACCmean) was calculated for each group of genes and for each time-point of model construction, where ACCmean denotes the mean of the prediction accuracy for the prediction model constructed from a group of genes and one time-point of data.
D = {t1, t2, …, tT} denotes the time-points of ACCT; t does not include the time-point used for model construction. ACCo denotes the prediction accuracy using data at the time-point used for model construction. The evaluated data consisted of time-course gene expression data from two cohorts of MS patients who underwent INF-β treatment: GSE24427 (Dataset A) and a second dataset (Dataset B). These datasets included the time from the start of therapy to the first relapse; however, the definition of response differs between the two datasets, which are summarised in the table. Log2-fold change and quantile normalization were performed for pre-processing of the gene expression data, and the expression levels of each gene were subsequently converted to Z-scores. ACCmean was calculated using the selection probabilities, and the genes in the group with the best ACCmean were regarded as the identified genes; these were regarded as the genes with the best performance throughout multiple time-points. The conventional method used only the gene expression data at a single time-point: Elastic net with SS applied to data at a single time-point. Genes were ranked according to the selection probabilities given by SS, and ACCmean was then calculated using the procedures of the proposed method. The prediction accuracies were calculated from the confusion counts, where TP denotes the number of true positives, FP the number of false positives, FN the number of false negatives, and TN the number of true negatives in the test data. The mean prediction accuracy (ACCmean) throughout all time-points was calculated using ACCT and ACCo at each time-point, and the minimum prediction accuracy (ACCmin) throughout all time-points was selected from ACCT and ACCo at each time-point.
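The accuracy equation referenced above is presumably the standard one for a binary confusion matrix; a tiny sketch assuming ACC = (TP + TN) / (TP + FP + FN + TN):

```python
def accuracy(tp, fp, fn, tn):
    """ACC = (TP + TN) / (TP + FP + FN + TN): the fraction of test samples
    whose responder/non-responder label was predicted correctly."""
    return (tp + tn) / (tp + fp + fn + tn)

# e.g. 8 true positives, 1 false positive, 2 false negatives, 9 true negatives
print(accuracy(8, 1, 2, 9))  # 0.85
```

ACCmean then averages this quantity over the prediction time-points, and ACCmin takes the worst one.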
To assess the specificity and the sensitivity at each time-point, the receiver operating characteristic (ROC) curves, the area under the ROC curve (AUC), and the 95% confidence intervals were calculated. First, the prediction accuracies of the constructed models were evaluated; in order to compare the prediction models of the proposed and the conventional methods, the mean prediction accuracy was used. Additionally, the median values of the expression levels of genes at each time-point were compared between good and poor responders to assess whether the expression levels of the two groups were consistently different throughout all time-points. The names of the selected genes were obtained from GeneCards (http://www.genecards.org/). Last, the differences between good and poor responders in the expression levels of the genes obtained by the proposed method were investigated. The expression levels of these genes at each time-point were classified into two groups according to the treatment response, and the differences between the two groups were tested using the Wilcoxon rank sum test. The θss was 0.5, and the hyper-parameter of Elastic net was set to γ = 2. The parameters of Elastic net with SS used in the conventional method were the same as those in the proposed method. The responses at 24 months after INF-β treatment in datasets A and B were used, and the limma and glmnet (ver. 2.0-5) packages were used for quantile normalization and Elastic net, respectively. The source codes are available upon request. The numerical experiments were implemented in the R language. In dataset A, the mean prediction accuracy of the proposed method was 81%. This prediction accuracy was significantly higher than the 65% (p = 2.06 × 10−23), 71% (p = 1.48 × 10−10), and 68% (p = 1.16 × 10−16) obtained at t1, t2, and t3 with the conventional method (p < 0.001), respectively.
As shown in the corresponding figure, the mean prediction accuracy over the time-points t2, t3, t4, and t5 provided by the proposed method was 78%. The prediction accuracies given by the conventional method at t1, t2, t3, t4, and t5 were 64% (p\u2009=\u20092.41\u2009\u00d7\u200910^\u221240), 57% (p\u2009=\u20091.73\u2009\u00d7\u200910^\u221294), 73% (p\u2009=\u20098.70\u2009\u00d7\u200910^\u221211), 56% (p\u2009=\u20091.30\u2009\u00d7\u200910^\u2212103), and 55% (p\u2009=\u20091.46\u2009\u00d7\u200910^\u221278), respectively. In dataset B, the mean accuracy over the different time-points given by the proposed method was significantly higher than those given by the conventional method (p\u2009<\u20090.001). Therefore, the prediction accuracies at different time-points obtained using the proposed method were significantly higher than those given by the conventional method. Bootstrap sampling was performed to evaluate the prediction accuracies at different time-points in the second evaluation. In dataset A, the AUCs at t1, t2, and t3 given by the proposed method were 0.95, 0.94, and 0.90, respectively, all of which were higher than or equal to 0.9; the lower limits of the 95% confidence interval were 0.88, 0.82, and 0.77 at t1, t2, and t3, respectively. In dataset B, the AUCs at t1, t2, t3, t4, and t5 were 0.99, 0.76, 0.95, 0.89, and 0.93, respectively, and the lower limits of the 95% confidence interval were 0.97, 0.56, 0.87, 0.74, and 0.83. The AUC and the lower limit of the 95% confidence interval of the proposed method at t2 in dataset B were 0.76 and 0.56, which were lower than or equal to those at the other time-points. The median expression levels of the identified genes differed between good and poor responders (p\u2009<\u20090.05). The genes identified by the proposed method showed consistent differentiation throughout all time-points and accurately predicted the responses of MS patients to INF-\u03b2 treatment. The ACCmean and ACCmin values given by the proposed method in dataset A were 86% and 79%, respectively.
The ACCmean value was equal to or higher than that given by the conventional method. The proposed method combined Elastic net, used to prevent multicollinearity of the expression levels among genes, with a modification of Elastic net designed to identify genes showing consistent differentiation throughout the time-course. Two publicly available datasets were used to identify genes showing highly accurate prediction abilities throughout all time-courses. The mean prediction accuracies over the different time-points given by the proposed and the conventional method were compared. The accuracies obtained using the two datasets were 71% and 73% for the conventional method, and 81% and 78% (significantly higher) for the proposed method. The proposed method identified 11 and 8 genes in the two datasets. Differences in the expression levels of 9 and 6 of these genes between good and poor responders were consistent throughout the data at all time-points. Therefore, the genes identified by the proposed method were capable of high-accuracy prediction throughout multiple time-points. Additionally, they included genes previously reported to be related to MS. The proposed modified Elastic net method for time-course data analyses was used to identify genes showing consistent differentiation between two outcome groups throughout the time-course. Here, we demonstrated its use for the prediction of INF-\u03b2 treatment responses in patients with MS. The method could also be applied to other microarray time-course data analyses.Supplementary information"}
+{"text": "Reverse engineering of gene regulatory networks from time series gene-expression data is a challenging problem, not only because of the vast sets of candidate interactions but also due to the stochastic nature of gene expression. We limit our analysis to nonlinear differential equation based inference methods. In order to avoid the computational cost of large-scale simulations, a two-step Gaussian process interpolation based gradient matching approach has been proposed to solve differential equations approximately. We apply a gradient matching inference approach to a large number of candidate models, including parametric differential equations and their corresponding non-parametric representations, and evaluate the network inference performance under various settings for different inference objectives. We use model averaging, based on the Bayesian Information Criterion (BIC), to combine the different inferences. The performance of the different inference approaches is evaluated using the area under the precision-recall curve. We found that parametric methods can provide comparable, and often improved, inference compared to non-parametric methods; the latter, however, require no kinetic information and are computationally more efficient. The online version of this article (10.1186/s12859-018-2590-7) contains supplementary material, which is available to authorized users. Gene expression is known to be subject to sophisticated and fine-grained regulation.
Besides underlying the developmental processes and morphogenesis of every multicellular organism, gene regulation represents an integral component of cellular operation by allowing for adaptation to new environments through protein expression on demand. While the basic principles of gene regulation were discovered as early as 1961, understanding of gene regulatory networks remains incomplete. Given the vast range of network inference approaches studied within and outside the life sciences, we limit our analysis in this work to inferring gene regulatory interactions from time-course data (e.g. time-resolved mRNA concentration measurements) under a nonlinear dynamic systems framework, since most data-driven methods either study purely linear interactions or ignore the dynamic information in the data. More specifically, we investigate inference based on nonlinear ordinary differential equations (ODEs) and corresponding non-parametric representations. The application of ODE models in this context has the advantage that each individual term in the final ODE model can provide direct mechanistic insight (such as the presence of activation or repression). We model transcription as dxn(t)/dt\u2009=\u2009sn\u2009\u2212\u2009\u03b3nxn(t)\u2009+\u2009\u03b2nfn(x(t); \u03b8n), where xn(t) denotes the concentration of the nth mRNA at time t, sn is the basal transcription rate, \u03b3n is the mRNA decay rate, x is a vector of concentrations of all the parent mRNAs that regulate the nth mRNA, and the regulation function fn describes the regulatory interactions among genes, such as activation or repression, that are normally quantified by Hill kinetics, with \u03b2n the strength or sensitivity of gene regulation and the parameter vector \u03b8n containing the regulatory kinetic parameters. The right-hand side of the nth ODE can be summarized in a single nonlinear function f, with \u03b1n including all the kinetic parameters.
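As an illustration of this ODE family, the following sketch simulates a hypothetical two-gene network with Hill-kinetics regulation functions by forward-Euler integration; the topology and all parameter values are invented for the example, not taken from the paper:

```python
import numpy as np

# Toy instance of dx_n/dt = s_n - gamma_n * x_n + beta_n * f_n(x; theta_n)
# with Hill-kinetics regulation: gene 1 is repressed by gene 2, and
# gene 2 is activated by gene 1. All parameters are illustrative.

def hill_activation(x, k=1.0, h=2.0):
    return x**h / (k**h + x**h)

def hill_repression(x, k=1.0, h=2.0):
    return k**h / (k**h + x**h)

def simulate(x0, t_end=20.0, dt=0.01):
    """Forward-Euler simulation of the two-gene toy network."""
    s = np.array([0.2, 0.1])      # basal transcription rates s_n
    gamma = np.array([0.3, 0.3])  # mRNA decay rates gamma_n
    beta = np.array([1.0, 1.0])   # regulation strengths beta_n
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        dx = np.array([
            s[0] - gamma[0] * x[0] + beta[0] * hill_repression(x[1]),
            s[1] - gamma[1] * x[1] + beta[1] * hill_activation(x[0]),
        ])
        x = x + dt * dx
    return x

x_ss = simulate([0.0, 0.0])  # concentrations after t_end time units
```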
Some approaches, such as non-parametric Bayesian inference methods, provide less mechanistic information but may nevertheless provide realistic representations of complex regulatory interactions between genes, which a simple ODE system might not be able to capture. In this formulation, the nth gene is conditionally independent of all other genes given its parents. We also consider a fully non-parametric, GP-based gradient matching inference method adapted from the literature, with \u03d5n denoting the vector of hyper-parameters for the squared exponential covariance function of gene n; the derivative of the nth gene expression is modelled analogously (illustrated for M\u2009=\u20092 and N\u2009=\u20095; symbol definitions as previously stated in this section). As this is a purely data-driven approach, basal transcription and degradation are not treated separately as in the ODE approach. Because the degradation of mRNA is usually modelled as a first-order reaction, we include gene self-interaction in every putative network. This does not affect the total number of candidate topologies. Furthermore, this approach is unable to distinguish alternative regulatory types (activation or repression) between genes, so the number of possible network topologies is reduced. Following model optimisation, we obtain the final distance or likelihood of each gene with respect to its possible parents, which we can use to calculate the Bayesian information criterion (BIC) for each model. For the ODE-based inference approach, S denotes the number of data points (sample size), G the number of free parameters, and goodness of fit is measured by the L2 distance defined above; for the GP-based approach, S and G are defined as before. We use the BIC for weighting candidate models rather than the commonly used Akaike information criterion (AIC), as it is asymptotically valid for large sample sizes. We then calculate the Schwarz weight for each model i from the difference between its BIC (BICi) and the lowest BIC across all models considered (BICmin), such that we can associate a weight with every edge e in the GRN.
This is done for each edge by summing the Schwarz weights of every model that contains the edge in question. Once we have weighted all models across all genes in the network, we can calculate the weight of every edge, where Ie(i) denotes the indicator function, which is 1 if edge e is present in model i and 0 otherwise. To evaluate the overall performance of the GRN inference, we use the BIC weights of every edge in the network to calculate the Area Under the Precision-Recall (AUPR) curve. Considering the sparsity of large GRNs, we use the AUPR instead of the Area Under the Receiver Operating Characteristic (AUROC) curve for evaluation. For the deterministically simulated gene expression data, we compare three main approaches to network inference (see the corresponding table). All data presented in this section represent the mean of five independent repeats. It should be noted that in cases of noisy datasets, the number of repetitions should practically be selected according to the confidence intervals of the dataset. If we are only interested in the directionality of interactions and not their specific type, the relevant results are given by the three orange distributions in the corresponding figure. The same trend can be seen when we are only predicting undirected edges. Interestingly, despite higher overall performance, constraining the ODE parameters can lead to worse performance under certain inference settings for this task. Despite the deterministic nature of the data we use for evaluation in this section, we find a pronounced difference in performance depending on the method used for interpolating the input data.
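The model-averaging and evaluation steps described above (Schwarz weights from BIC differences, edge weights obtained by summing over models, and the AUPR of the resulting edge ranking) can be sketched as follows; this is a generic re-implementation under the standard definitions, with made-up toy inputs:

```python
import numpy as np

def schwarz_weights(bics):
    """Schwarz (BIC) weights: w_i proportional to exp(-0.5*(BIC_i - BIC_min))."""
    bics = np.asarray(bics, float)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

def edge_weights(bics, edge_indicators):
    """Weight of each edge: sum of the Schwarz weights of every candidate
    model containing that edge (edge_indicators[i][e] is I_e(i), 0 or 1)."""
    w = schwarz_weights(bics)
    return w @ np.asarray(edge_indicators, float)

def aupr(edge_scores, true_edges):
    """Area under the precision-recall curve, stepping through the edges
    in order of decreasing score (average-precision form)."""
    order = np.argsort(-np.asarray(edge_scores, float))
    truth = np.asarray(true_edges, int)[order]
    tp = np.cumsum(truth)
    precision = tp / np.arange(1, len(truth) + 1)
    recall = tp / truth.sum()
    return float(np.sum(precision * np.diff(np.concatenate([[0.0], recall]))))
```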
By taking into account the correlation between the different gene expression time-courses, interpolation with a multiple-output GP is able to achieve significantly better results compared to using independent GPs. When interpolating oscillatory data using single-output GPs, we observe that for a low number of data points the GP hyperparameters are optimised so that the oscillatory behaviour is no longer traced by the GP mean, but rather interpreted as noise. The most notable difference between the results for the noise-free and noisy gene expression data is the absolute decline in performance, which is not unexpected. Despite this difference, we nevertheless observe similar trends as for the noise-free data. Without prior edges between genes, we observe that the ODE-based model without prior performs slightly better than the GP-based approach, and both approaches perform better than PIDC. The pronounced narrowing of distributions towards higher AUPR across different approaches indicates that, unlike inference based on noise-free data, both ODE- and GP-based methods only produce meaningful results (i.e. significantly better than random performance) for a very narrow range of scenarios. Contrasting the performance for noise-less and noisy data shows not just lower absolute performance for each method on noisy data, but also different trends in their behaviour. Again, changing the maximum number of parents allowed for a gene appears to have no effect, and the network can be recovered during inference if enough knowledge about the network (e.g. parameter ranges) is present.
For directed and undirected network inference, the parametric ODE method can provide comparable or even better inference performance than the non-parametric GP-based method; the latter approach, however, requires little mechanistic or kinetic regulatory information and is computationally more efficient, which can be crucial for large-scale network inference problems. When applied to larger networks or stochastic data, overall lower inference performance is observed for all methods, while comparable performance between parametric and non-parametric methods is still obtained. Several promising avenues for improving inference performance emerge from this analysis: in particular, there is potential for the use of multiple-output Gaussian processes for data interpolation in cases of small networks. When applying the same methods to more complex stochastic networks these may, however, become less reliable. A central result has been that Bayesian model averaging has real potential to increase the quality of network inference. We believe that combining the strengths of several existing approaches will ultimately be required to make significant further progress in solving this challenging problem. Additional file 1 contains additional information on some technical aspects of the research. (PDF 340 kb)"}
+{"text": "Non-homogeneous dynamic Bayesian networks (NH-DBNs) are a popular modelling tool for learning cellular networks from time series data. In systems biology, time series are often measured under different experimental conditions, and not rarely only some network interaction parameters depend on the condition while the other parameters stay constant across conditions. For this situation, we propose a new partially NH-DBN, based on Bayesian hierarchical regression models with partitioned design matrices. With regard to our main application to semi-quantitative (immunoblot) time-course data from mammalian target of rapamycin complex 1 (mTORC1) signalling, we also propose a Gaussian process-based method to solve the problem of non-equidistant time series measurements. On synthetic network data and on yeast gene expression data the new model leads to improved network reconstruction accuracies. We then use the new model to reconstruct the topologies of the circadian clock network in Arabidopsis thaliana and of the mTORC1 signalling pathway. The inferred network topologies show features that are consistent with the biological literature. All datasets have been made available with earlier publications. Our Matlab code is available upon request. Supplementary data are available at Bioinformatics online. Dynamic Bayesian networks (DBNs) have become a popular tool for learning the topologies of cellular regulatory networks from time series data. The classical (homogeneous) DBN models assume that the network parameters stay constant in time, so that the network structure is inferred along with one single set of network parameters. In many applications the data instead consist of K (short) time series; the data are then intrinsically divided into K unordered components, and there is no need for inferring the segmentation. In this situation, it is normally not clear a priori whether the network parameters stay constant across components (conditions) or whether they vary from component to component (with the conditions).
Three biological systems that we will consider in this article are: Section 4.4: the parameters of a metabolism-related regulatory network in Saccharomyces cerevisiae can depend on the medium in which yeast is cultured. Section 4.5: the parameters of the circadian clock network in Arabidopsis thaliana can depend on the dark:light cycles to which the plants were earlier exposed. Section 4.6: the parameters of the mammalian target of rapamycin complex 1 (mTORC1) protein signalling network can change in the presence of insulin. For more examples and a thorough discussion on the integration of single-cell data from multiple experimental conditions we refer to the literature. In many real-world applications, in particular in systems biology, data are often collected under different experimental conditions. That is, instead of one single (long) time series that has to be segmented, there are K (short) time series. If the parameters stay constant, all data can be merged and analysed with one single homogeneous DBN. If the parameters are component-specific, then the data should be analysed by a NH-DBN. The disadvantage of both approaches is that all parameters are assumed to be either constant (DBN) or component-specific (NH-DBN). In real-world applications there can be both types of parameters, so that both models are inappropriate. For example, consider a variable Y that is regulated by two other variables under K\u2009=\u20092 conditions, written in terms of a regression model: a DBN ignores that \u03b2 and \u03b3 are different, while a NH-DBN has to infer the same parameter \u03b1 two times separately. Both model misspecifications can increase the inference uncertainty, and are thus critical for sparse data. No tailor-made model for the situation in (1) has been proposed yet. To fill this gap, we propose a partially NH-DBN model, which infers the best trade-off between a DBN and a NH-DBN. Unlike all earlier proposed NH-DBNs, the new partially NH-DBN model operates on the individual interactions (network edges).
For each interaction there is a parameter, and the model infers from the data whether the parameter is constant or component-specific. We implement the new model in a hierarchical Bayesian regression framework, since this model class reached the highest network reconstruction accuracy in an earlier cross-method comparison. In Section 2.5 we propose a Gaussian process (GP)-based method to deal with the problem of non-equidistant measurements. The standard assumption for all NH-DBNs is that data are measured at equidistant time points. For applications where this assumption is not fulfilled, we propose to use a GP to predict the values at equidistant time points and to replace the non-equidistant values by the predicted equidistant values. We will make use of the GP method when analysing the mTORC1 data in Section 4.6. The network is inferred via N independent regression models. In the ith model, iZ is the response, and the values of the remaining variables at time point t\u2009\u2212\u20091 are used as potential covariates for iZ at time point t. The goal is to infer a covariate set for each iZ, and the system of covariate sets describes a network; see Section 2.6 for details. As the same regression model is applied to each iZ separately, we describe it using a general notation, where Y is the response and the iX are the covariates. DBNs and NH-DBNs are used to infer networks showing the regulatory interactions among the variables. We assume that the data were collected under K experimental conditions, which we refer to as K components, and that for each component k there are observations of Y and the iX at each time point t of that component. In dynamic networks, the interactions are subject to a time lag. For each component k we build a component-specific response vector and design matrix. We consider a regression model with response Y; for each component k we could assume a separate Gaussian likelihood, i.e. K independent models. Alternatively, we could merge the data across components. When some covariates have a component-specific and other covariates have a constant regression coefficient, both likelihoods (2) and (3) are suboptimal.
For this situation, we propose a new partially non-homogeneous regression model that infers the best trade-off from the data. The key idea is to use a likelihood with a partitioned design matrix. For K\u2009=\u20092 the matrix consists of a shared block for the constant coefficients and one block per component for the component-specific coefficients. For now, we assume that we know for each coefficient whether it is component-specific or constant: the intercept and the covariates with constant coefficients enter a shared subvector, while for each component k there is a subvector with the component-specific regression coefficients. Priors are placed on the noise variance parameter and on the coefficient subvectors, and a hyperprior couples the component-specific vectors across components. A graphical model representation of the new regression model is also provided, and the marginalization rule from Section 2.3.7 of the cited reference is applied. A binary indicator for each covariate indicates whether iX has a constant or a component-specific coefficient. A homogeneous model merges all data, while a non-homogeneous model treats each component separately. The new partially non-homogeneous model infers the best trade-off: each regression coefficient can be either constant or component-specific. In our method comparison, we include: DBN: a homogeneous model that merges all data, see (3). NH-DBN: a model that switches between two states, with a DBN for S\u2009=\u20091 and a likelihood of the form (2) for S\u2009=\u20090. Coupled NH-DBN: the model from earlier work. For a fair comparison, we also allow the non-homogeneous model to switch between a homogeneous and a non-homogeneous state. However, like all models that have been proposed so far, it operates on the covariate sets: the NH-DBNs can switch between the states \u2018all covariates are constant\u2019 (S\u2009=\u20091) and \u2018all covariates are component-specific\u2019 (S\u2009=\u20090), and the Bernoulli parameter of the state indicator is assigned a (truncated) prior distribution.
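A minimal sketch of the partitioned design matrix idea: covariates flagged as constant share one column block across components, while component-specific covariates get one block per component and zeros elsewhere. The flags, dimensions and data below are invented for illustration, not taken from the paper:

```python
import numpy as np

def partitioned_design(X_by_component, constant_mask):
    """X_by_component: list of K (T_k x p) matrices (intercept column included);
    constant_mask[j] is True if covariate j has a constant coefficient."""
    K = len(X_by_component)
    p = X_by_component[0].shape[1]
    const_idx = [j for j in range(p) if constant_mask[j]]
    spec_idx = [j for j in range(p) if not constant_mask[j]]
    rows = []
    for k, Xk in enumerate(X_by_component):
        Tk = Xk.shape[0]
        # shared block first, then one block per component for the
        # component-specific covariates (zero outside component k)
        blocks = [Xk[:, const_idx]]
        for kk in range(K):
            blocks.append(Xk[:, spec_idx] if kk == k
                          else np.zeros((Tk, len(spec_idx))))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

rng = np.random.default_rng(1)
X1, X2 = rng.normal(size=(9, 3)), rng.normal(size=(9, 3))
D = partitioned_design([X1, X2], constant_mask=[True, False, True])
```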
If the data within a component k were measured at non-equidistant time points, we proceed as follows: determine the lowest time lag; given the observed data points, predict the values at equidistant time points; extract the response values at those time points; and build the response vector and design matrix such that the predicted values enter at the correct positions. The regression models assume that the time lag is constant. The GP yields a kT-by-kT covariance matrix in which nearby time points ti and tj yield correlated variables; l is the length scale. For the unobserved vector we likewise build kT-by-kT matrices whose elements are given by the covariance function (with length scale l). For predicting the unobserved response vector, we have to make two decisions: the GP parameters can either be sampled via MCMC simulations or their maximum a posteriori (MAP) estimates can be computed; and, given the GP parameters, the unobserved vector can either be sampled or set to its predictive expectation. A GP is a stochastic process, and we have implemented and cross-compared all four combinations. For lack of space, we here report the results obtained for predictive expectations based on MAP estimates. A comparison of the four approaches can be found in Part D of the Supplementary Material. We assume that the data were measured under K conditions and that the conditions influence some of the interactions. Let the N-by-kT data matrix measured under condition k be given; its rows correspond to the N variables and its columns to the kT time points, with entries giving the value of iZ at time point t under condition k. Assume that jZ has an effect on iZ in the following sense: for all k, the value of iZ at time point t\u2009+\u20091 depends on the value of jZ at time point t. The goal is to infer the network structure. Interactions for temporal data are usually modelled with a time lag, e.g. of order one. There is no acyclicity constraint, and DBN inference can be thought of as inferring N separate regression models and combining the results. In the ith model we set Y\u2009=\u2009iZ and apply the partially non-homogeneous model to generate posterior samples, from which we extract the covariate sets. M is the number of edges in the true network, and plotting precision against recall for increasing numbers of extracted edges gives the precision\u2013recall curve. We run the MCMC simulations for a fixed number of iterations, set the burn-in phase to 50k, and sample every 100th graph during the sampling phase. We used scatter plots of edge scores from independent simulations to monitor convergence.
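The equidistant-prediction step can be sketched as follows: fit a GP with a squared exponential covariance to non-equidistant observations and read off the posterior mean on an equidistant grid. The kernel and noise parameters are fixed by hand here rather than MAP-estimated as in the text, and the data are synthetic:

```python
import numpy as np

def sq_exp(t1, t2, sigma_f=1.0, length=1.5):
    """Squared exponential covariance between two sets of time points."""
    d = t1[:, None] - t2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_obs, y_obs, t_new, noise=0.05):
    """Posterior mean of a zero-mean GP at the new time points."""
    K = sq_exp(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    K_star = sq_exp(t_new, t_obs)
    return K_star @ np.linalg.solve(K, y_obs)

t_obs = np.array([0.0, 0.5, 1.5, 3.5, 6.0])   # non-equidistant measurements
y_obs = np.sin(t_obs)                         # synthetic expression values
t_grid = np.arange(0.0, 6.1, 1.0)             # equidistant grid
y_grid = gp_predict(t_obs, y_obs, t_grid)     # predicted equidistant values
```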
In Section 4.3 we study convergence, scalability and the computational costs of model inference. We implement the GP method with the squared exponential kernel and use the Matlab package \u2018GPstuff\u2019. The noise values are assumed to be independently and identically normally distributed for all i and t. Our first objective is to provide empirical evidence that the proposed GP method from Section 2.5 can yield substantial improvements. To this end, we generate values for 10 autoregressive (AR) variables and assume that X1 and X2 are covariates for Y. In a second scenario we replace (13) by moving averages (MA) of order q in (15). We thin the data out and keep only the observations at a subset of time points, and compute for each iX the score that iX is a covariate for Y. Our results show that the proposed GP method finds the true covariates X1 and X2, while the standard approach cannot clearly distinguish them from the irrelevant variables. We generate data for both scenarios (AR and MA) with different parameter settings, e.g. q\u2009=\u200910. The RAF pathway, as reported in the literature, has N\u2009=\u200911 nodes and 20 directed edges; its topology is shown in Part E of the Supplementary Material. For each scenario we generate 25 datasets with K\u2009=\u20092 components and kT\u2009=\u200910 data points each, and we then analyse every dataset with each model. The parent nodes of each node iZ build its covariate set, which determines the value of iZ at time point t in component k. For each node iZ we collect the regression coefficients in two component-specific vectors (k\u2009=\u20091, 2), sample their elements from Gaussian distributions, and re-normalize the vectors to Euclidean norm one (k\u2009=\u20091, 2). We distinguish six scenarios: (S1) Identical: the two coefficient vectors are identical for all i and j. (S2) Identical signs (correlated): we enforce the component-specific coefficients to have the same signs. (S3) Uncorrelated: we use the two independently sampled vectors for all i and all j. (S4) Opposite signs (negatively correlated): the coefficients have opposite signs in the two components. Mixture of (S1) and (S3): we randomly select 50% of the coefficients and set them identical for both k, while the other 50% are uncorrelated. Mixture of (S1) and (S4): we randomly select 50% of the coefficients and set them identical for both k, while the other 50% have an opposite sign. To study the scalability of the new network reconstruction method, we generate data for random network structures with N\u2009=\u200910, 25, 50 and 100 nodes. For each node iZ we first sample the number of parents from a Poisson distribution with parameter \u03bb\u2009=\u20091 (\u2018Poisson in-degree distribution\u2019), before we randomly draw a parent set of that size; again K\u2009=\u20092 and kT\u2009=\u200910. Here we present and discuss the results for the scenario \u2018Mixture of (S1) and (S3)\u2019. For each N we generate 10 independent datasets, i.e. 40 in total. Next, we measure how many MCMC iterations W we can perform in 1 h. With our Matlab implementation on a desktop computer with an Intel Xeon 2.5 GHz processor and 8GB of RAM, the average numbers of iterations per hour are: W\u2009=\u2009208\u2009637 (N\u2009=\u200910), W\u2009=\u200983\u2009615 (N\u2009=\u200925), W\u2009=\u200941\u2009666 (N\u2009=\u200950) and W\u2009=\u200919\u2009855 (N\u2009=\u2009100). For each N we select the numbers of iterations such that the simulation takes 16 h. During the simulations we sample 200 equidistant networks per hour. Withdrawing the first 50% of the networks (\u2018burn-in\u2019) leaves the networks sampled within the last h hours, which we use to assess the performance after h hours of computational time. For running two independent 16 h long MCMC simulations on each of the 40 datasets, we use a computer cluster. To monitor convergence and network reconstruction accuracy in real time, we perform long MCMC simulations for each N. To assess convergence, we consider scatter plots of the edge scores of two independent MCMC simulations on the same dataset. The AUCs for N\u2009=\u200910 and N\u2009=\u200925 converge to AUC\u2009=\u20090.72, while the AUC limits are lower for N\u2009=\u200950 (AUC\u2009=\u20090.58) and N\u2009=\u2009100 (AUC\u2009=\u20090.49).
The individual AUC curves are shown in Part F of the Supplementary Material. Given that networks with N\u2009=\u2009100 nodes had to be inferred from 20 data points (K\u2009=\u20092 conditions with kT\u2009=\u200910 data points each), the rather low network reconstruction accuracy (AUC\u2009=\u20090.49) is not surprising. To show that higher AUCs can be reached for networks with N\u2009=\u2009100 nodes, we repeat the study with kT\u2009=\u200920 and kT\u2009=\u200940; for N\u2009=\u2009100 the average numbers of iterations per hour are then W\u2009=\u200912\u2009707 (kT\u2009=\u200920) and W\u2009=\u20099343 (kT\u2009=\u200940). The yeast network, built by means of synthetic biology, comprises N\u2009=\u20095 genes in S.cerevisiae; the in vivo gene expression data were measured under galactose- (k\u2009=\u20091) and glucose-metabolism (k\u2009=\u20092). A.thaliana orchestrates the gene regulatory processes related to the plant metabolism with respect to the daily changing dark:light cycles of the solar day. The mechanism of internal time-keeping allows the plant to anticipate each new day, at the molecular level, and to optimize its growth. In K\u2009=\u20094 experiments Arabidopsis plants were entrained in different dark:light cycles, before the gene expression of N\u2009=\u20099 circadian clock genes was measured under an experimentally controlled constant light condition. The numbers of observed time points are kT\u2009=\u200913 for k\u2009=\u20092, 3, 4. For further details on the experimental protocols we refer to the original publications. For the mTORC1 data, cells were stimulated with amino acids alone (k\u2009=\u20091) and with amino acids plus insulin (k\u2009=\u20092), and the phosphorylation states were measured at kT\u2009=\u200910 time points. The mammalian target of rapamycin complex 1 (mTORC1) is a serine/threonine kinase which is evolutionarily conserved and essential in all eukaryotes. We focus first on the five interactions with the highest scores; these edges, which involve the S6K-pT389 phosphorylation site, are all consistent with the mTORC1 signalling literature. We could also find evidence for 6 of the remaining 7 edges with scores in between 0.5 and 0.8. PRAS40 is an endogenous mTORC1 inhibitor. After having identified 11 of 12 edges as true positives, we performed a literature review to find false negative edges, i.e.
edges that our model did not extract, although it has been reported that they exist. This way, we could identify two false negative edges, namely IR-beta-pY1146\u2192AKT-pT308 and AMPK-pT172\u2192TSC2-pS1387, which have been reported in the literature. We propose a new partially NH-DBN model for learning network structures. When data are measured under different experimental conditions, it is rarely clear whether the data can be merged and analysed within one single model, or whether there is need for a NH-DBN model that allows the network parameters to depend on the condition. The new partially NH-DBN has been designed such that it can infer the best trade-off from the data: it infers for each individual edge whether the corresponding interaction parameter is constant or condition-specific. Our applications to synthetic RAF pathway data as well as to yeast gene-expression data have shown that the partially NH-DBN model improves the network reconstruction accuracy. We have used the partially NH-DBN model to predict the structure of the mTORC1 signalling network. As the measured mTORC1 data are non-equidistant, we have applied a GP-based method to predict the missing equidistant values. Results on synthetic data (see Section 4.1) show that the proposed GP method (see Section 2.5) can lead to substantially improved results. All but one of the predicted interactions across the mTORC1 network are reflected in experiments reported in the biological literature. Although it worked well for the mTORC1 data, we note that the GP method from Section 2.5 requires the time series to be sufficiently smooth. For non-smooth time series the method might not be able to properly predict the values at unobserved time points, leading to biased response values. Then, network reconstruction methods, such as the new partially NH-DBN, will inevitably infer distorted network topologies and wrong conclusions might be drawn.
The assumption of smoothness is therefore crucial for the complete analysis pipeline to work. Our results in Section 4.3 show that the new partially NH-DBN can also be used to infer larger networks. However, our results suggest that more data points are then needed to reach accurate network predictions. A conceptual advantage of our partially NH-DBN is that it has two established models, namely the homogeneous DBN and the NH-DBN, as limiting cases. M.S.K. and M.G. are supported by the European Cooperation in Science and Technology (COST) [COST Action CA15109 European Cooperation for Statistics of Network Data Science (COSTNET)]. K.T. acknowledges support from the BMBF e:Med initiatives GlioPATH [01ZX1402B] and MAPTor-NET [031A426B], the German Research Foundation [TH 1358/3-1], the Stichting TSC Fonds and the MESI-STRAT project, which has received funding from the European Union\u2019s Horizon 2020 research and innovation programme under grant agreement No 754688. K.T. is a recipient of a Rosalind-Franklin-Fellowship of the University of Groningen and of the Research Award 2017 of the German Tuberous Sclerosis Foundation. Conflict of Interest: none declared. bty917_Supplementary_Data. Click here for additional data file."}
+{"text": "The vascular endothelium is considered a key cell compartment for the response of normal tissues and tumors to ionizing radiation, and a promising target for improving the differential effect of radiotherapy in the future. Following radiation exposure, the global endothelial cell response covers a wide range of gene, miRNA, protein and metabolite expression modifications. Changes occur at the transcriptional, translational and post-translational levels and impact the cell phenotype as well as the microenvironment through the production and secretion of soluble factors such as reactive oxygen species, chemokines, cytokines and growth factors. These radiation-induced dynamic modifications of molecular networks may control the endothelial cell phenotype and govern the recruitment of immune cells, stressing the importance of clearly understanding the mechanisms which underlie these temporal processes. A wide variety of time series data is commonly used in bioinformatics studies, including gene expression, protein concentration and metabolomics data. How best to cluster these data is still an open problem. Here, we introduce kernels between Gaussian processes modeling time series, and subsequently introduce a spectral clustering algorithm. We apply the methods to the study of human primary endothelial cells (HUVECs) exposed to a radiotherapy dose fraction (2 Gy). Time windows of differential expression of 301 genes involved in key cellular processes such as angiogenesis, inflammation, apoptosis, immune response and protein kinases were determined from 12 hours to 3 weeks post-irradiation. Then, 43 temporal clusters corresponding to profiles of similar expression, including 49 genes out of the 301 initially measured, were generated according to the proposed method. Forty-seven transcription factors (TFs) responsible for the expression of clusters of genes were predicted from sequence regulatory elements using the MotifMap system.
Their temporal profiles of occurrences were established and clustered. Dynamic network interactions and molecular pathways of TFs and differential genes were finally explored, revealing key node genes and putative important cellular processes involved in tissue infiltration by immune cells following exposure to a radiotherapy dose fraction. Half of patients with tumors receive radiotherapy (RT) at some point during the course of their disease. The response of vascular endothelial cells to radiation exposure leads to a long-term radiation-induced dysfunction phenotype. Here, we wanted to go deeper into the use of this dataset through a bioinformatics analysis of differentially expressed gene clusters. This new study sought to establish a novel method to cluster time periods of statistically differentially expressed genes determined by our previous method of GPs and the Bayesian likelihood ratio test. This new method has been applied to our previously published dataset of real-time qPCR measurements of the transcriptional profiles of human umbilical vascular endothelial cell (HUVEC) genes following irradiation at 2 Gy. With the advent of high-throughput measurement technologies, large-scale systems biology experiments are now routinely performed. Time series measurements of the transcriptomic state of cells can reveal important information on their inherently dynamic regulation and function. In this paper, we focus on the central task of determining the differentially expressed genes in a two-sample time series experiment. We define several families of kernel functions between GPs and propose a novel clustering algorithm suitable for kernels between GPs. We extend the method by considering kernels between derivatives of GPs as well, to model the rate of expression changes.
We analyzed the performance of the proposed kernel families and applied the method to clustering of gene expression time series for irradiation of human endothelial cells. We sought results for predicted transcription factors (TFs) to gain insights into the biological relevance of the clustering as regards the response of endothelial cells to a conventional RT dose fraction (2 Gy), finally providing biological insight by cluster analysis. We first review the notions of GPs and kernels between distributions, and then present several families of kernels between GPs. First, we construct smooth probabilistic models of the measured gene expression trajectories over time from the N noisy point measurements using GPs. Gaussian process regression (GPR) is a Bayesian non-parametric and non-linear method for regression. A GP is a generalization of the Gaussian distribution to functions, where any subset of function evaluations is jointly Gaussian. A GP prior over the latent function f(t) explains the observations y, where KTT is a covariance, or more generally a positive semi-definite kernel matrix, between the observed time points Tobs \u00d7 Tobs. We are interested in learning the GP given the data y and the function prior, which results in a posterior distribution with cross-covariance K over T\u22c6 \u00d7 Tobs. According to GPR modeling, we determine the function class by placing a Gaussian prior over functions. The posterior of the true model can be visualized by the mean \u03bc\u22c6 along with 95% confidence intervals. The kernel K plays an important role in determining the function space learned by the GP. The Gaussian kernel K(t, t\u2032) = exp(\u2212\u2016t \u2212 t\u2032\u20162/2l2) is often used as a \u201cdefault\u201d kernel because of its simplicity; it naturally gives high covariance for close time points, resulting in smooth regression models. However, the Gaussian kernel is a function of t \u2212 t\u2032, and hence stationary.
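As an illustrative sketch (not the authors' implementation), GP regression with a stationary Gaussian kernel can be written in a few lines of NumPy; the hyperparameter values below are arbitrary placeholders:

```python
import numpy as np

def gaussian_kernel(t1, t2, lengthscale=1.0, sigma_f=1.0):
    """Stationary Gaussian (RBF) kernel matrix between two sets of time points."""
    d = t1[:, None] - t2[None, :]
    return sigma_f**2 * np.exp(-0.5 * d**2 / lengthscale**2)

def gp_posterior(t_obs, y, t_star, lengthscale=1.0, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean and covariance of a GP at target times t_star,
    given noisy observations y at t_obs."""
    K = gaussian_kernel(t_obs, t_obs, lengthscale, sigma_f)
    K_star = gaussian_kernel(t_star, t_obs, lengthscale, sigma_f)
    K_ss = gaussian_kernel(t_star, t_star, lengthscale, sigma_f)
    A = K + sigma_n**2 * np.eye(len(t_obs))    # observation covariance with noise
    L = np.linalg.cholesky(A)                  # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_star @ alpha                        # posterior mean mu*
    v = np.linalg.solve(L, K_star.T)
    cov = K_ss - v.T @ v                       # posterior covariance
    return mu, cov
```

The posterior mean interpolates the observations when the noise level is small, and the diagonal of the posterior covariance yields the 95% confidence band mentioned in the text.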
For non-stationary dynamics, we opt for the non-stationary Gaussian kernel with a time-dependent lengthscale, e.g. l(t) = log(t) or the parametric time-transformation l(t) = l \u2212 (l \u2212 lmin)e\u2212ct. The three hyperparameters are: the maximum lengthscale l, the minimum lengthscale lmin (at time t = 0), and the curvature c, which controls how fast the function l(t) approaches its maximum value. We assume that the data are normalized such that the perturbation occurs at time 0. Let \u03b8 denote the hyperparameters of the kernel Kl. In a Bayesian model inference we would marginalize over the hyperparameters and the models implied by them. For computational tractability, we instead learn the hyperparameters \u03b8 by gradient descent against the marginal log likelihood (MLL) \u222b p(y|f)p(f|\u03b8)df over the N time points. The more time points we utilize, the more accurate the GPs are. We are interested in defining kernel functions between two GPs to be used for subsequent unsupervised or supervised learning. While kernels between distributions, such as probability product kernels and Kullback-Leibler kernels, have been proposed, these distribution-based kernels converge to zero as we increase N, unless the objects are identical; we call a kernel N-convergent if its value does not degenerate in this way. We propose three families of GP kernels: the overlap coefficient kernel KOVL, and N-convergent variants of the probability product kernel KPP and the symmetric Kullback-Leibler kernel KKL. We are interested in defining a kernel between GPs where p is the density function of the corresponding MVN distribution. Comparing N-dimensional MVN distributions is numerically intractable, and hence we define a GP kernel as a weighted sum over the marginalized distributions, where pt is the marginalized Gaussian density at time t. This simplification entails only considering the diagonal variance diag \u03a3, which corresponds to the marginalized variances of the GP over time, which are commonly used to represent the model.
Finally, we propose to enhance the kernel by taking a weighted mean according to the time interval lengths, where T is the total time window length and \u0394t is the length of each time interval. The kernel has the property limN\u2192\u221eKN = K. In a probability product kernel \u222bRN p(z)\u03c1 p\u2032(z)\u03c1 dz, the choice \u03c1 = 1/2 turns the kernel into a Bhattacharyya kernel, for which K(p, p) = 1, and \u03c1 = 1 gives the expected likelihood kernel. The kernel is SDP and N-convergent. Adding new time points to the GP only changes the similarity value if the new time points encode new information. We note that an alternative formulation of the weighted geometric mean would result in a non-SDP kernel. The Kullback-Leibler divergence between two MVNs has a closed form; we turn it into a kernel with \u03b1 being a scaling parameter and \u03b2 a shift. The KL kernel converges to zero as N grows. The \u03b1 can be used to lessen the effect (e.g. setting \u03b1 = N), but using a small \u03b1 leads to the kernel becoming numerically non-SDP. The Kullback-Leibler divergence is not symmetric, and hence we define a two-way symmetric KL divergence. We adapt the KL kernel into a GP kernel by taking the weighted mean of the divergence according to the time intervals. We propose another distribution similarity that measures the overlap between two distributions p(z) and p\u2032(z): the overlap coefficient (OVL). The OVL naturally measures both the shape and the GP uncertainties. The similarity generalizes to any distributions. The overlap measures the volume of the overlapping region of the distributions, or the area under the overlapping curves for one-dimensional distributions. Parametric and non-parametric estimation frameworks for computation of the OVL have been proposed.
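As a hedged sketch of the per-time-point similarities: for one-dimensional marginal Gaussians the Bhattacharyya coefficient (the \u03c1 = 1/2 probability product kernel) and the symmetrized KL divergence have closed forms, and a GP kernel can be formed as a time-interval-weighted average. Function names and the exponential form `exp(-alpha * KL)` are illustrative choices, not taken verbatim from the paper:

```python
import numpy as np

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya coefficient of two 1-D Gaussians (means m, variances v);
    equals 1 for identical Gaussians."""
    return np.sqrt(2.0 * np.sqrt(v1 * v2) / (v1 + v2)) * \
        np.exp(-(m1 - m2)**2 / (4.0 * (v1 + v2)))

def sym_kl_1d(m1, v1, m2, v2):
    """Two-way (symmetrized) Kullback-Leibler divergence of 1-D Gaussians."""
    kl = lambda ma, va, mb, vb: 0.5 * (np.log(vb / va) + (va + (ma - mb)**2) / vb - 1.0)
    return kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1)

def gp_kernel(means1, vars1, means2, vars2, dts, kind="bh", alpha=1.0):
    """Weighted average over time of a per-time-point similarity between
    the marginal Gaussians of two GPs; dts are the time-interval weights."""
    w = np.asarray(dts, float)
    w = w / w.sum()
    pairs = zip(means1, vars1, means2, vars2)
    if kind == "bh":
        vals = [bhattacharyya_1d(a, b, c, d) for a, b, c, d in pairs]
    else:  # KL-based kernel: exp(-alpha * symmetric KL), an assumed form
        vals = [np.exp(-alpha * sym_kl_1d(a, b, c, d)) for a, b, c, d in pairs]
    return float(np.dot(w, vals))
```

Identical marginals give a similarity of 1 under both variants, and well-separated means drive the Bhattacharyya variant toward 0, matching the normalization properties described above.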
For two Gaussians, there are at most two intersection points z1 and z2 of equal density, which are the solutions to a quadratic equation; F denotes the cumulative distribution function of a Normal. The overlap between two Gaussians decomposes into a minimum over at most three intervals, each of which can be computed using the cumulative density of the smaller density. When the two Gaussians have equal variances, only two intervals emerge; when also the means are equal, only a single interval emerges. We handle these as special cases. The three intervals are therefore delimited by the two points z1 and z2. The four similarities have different interpretations. The overlap similarity measures the average volume of the overlapping or shared distribution over time, and is naturally normalized between 0 and 1. The two probability product kernels measure the geometric mean of two distributions. We note that in general one cannot retrieve the Bhattacharyya kernel by normalizing the expected likelihood kernel. The OVL kernel is a lower bound of the Bhattacharyya kernel. Finally, the Kullback-Leibler kernel is the expectation of the log difference between the two densities over the other one, and has well-known theoretical interpretations in information theory. A derivative of a GP is another GP, as differentiation is a linear operation, and the derivative GP of the estimated true function f(t) at target time points T\u22c6 is defined through the derivative of the kernel. A kernel between derivative GPs is then used in the normal fashion, e.g. KKL, and a mixture kernel combines the value and derivative kernels with weight \u03b1, e.g. \u03b1 = 0.5. Utilizing derivative GPs allows comparison of the change of a variable over time in addition to comparing the variable values directly. Spectral clustering is based on the first k eigenvectors of the graph Laplacian L = D \u2212 K, where D is the diagonal degree matrix with Dii = \u2211jKij. The normalized graph Laplacian is Lnorm = D\u22121/2LD\u22121/2. This translation maximizes the separation of the components of the underlying structure, after which clusters are obtained by k-means over the translated points. However, this leaves the procedure vulnerable to outliers and noise.
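The overlap coefficient of two one-dimensional Gaussians can also be evaluated directly by numerical integration of the pointwise minimum of the two densities; this is a simple, hedged alternative to the closed form via the intersection points z1 and z2 described above (for equal variances the closed form reduces to 2F(\u2212|\u0394\u03bc|/2\u03c3)):

```python
import numpy as np

def norm_pdf(z, m, v):
    """Density of a 1-D Gaussian with mean m and variance v."""
    return np.exp(-(z - m)**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def ovl_1d(m1, v1, m2, v2, num=20001):
    """Overlap coefficient of two 1-D Gaussians: the area under
    min(p1, p2), approximated on a dense grid spanning both supports."""
    s = np.sqrt(max(v1, v2))
    z = np.linspace(min(m1, m2) - 8.0 * s, max(m1, m2) + 8.0 * s, num)
    return float(np.trapz(np.minimum(norm_pdf(z, m1, v1),
                                     norm_pdf(z, m2, v2)), z))
```

The result is naturally normalized: identical Gaussians give an overlap of 1, and distant Gaussians give a value near 0, consistent with the interpretation of OVL in the text.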
We propose to couple the Laplacians with an outlier-resistant k-means variant, denoted k-means\u2212\u2212. Spectral clustering algorithms are a special class of clustering algorithms that are based on the graph Laplacians of similarity matrices between objects. The k-means algorithm consists of three iterative steps for some initial clustering: (i) computing the cluster centers, (ii) determining the closest cluster centers for the remaining points, and (iii) computing new cluster centers as their mean. To couple the method with spectral clustering, we apply the weighting scheme of the outlier-resistant variant in the spectral space. We used the dataset we generated and published previously, with observed time points Tobs at 12 h and 1, 2, 3, 4, 7, 14 and 21 days. Gene expression assays were performed using a panel of premade TaqMan low-density array gene signatures (Applied Biosystems). Experiments were performed in triplicate for each time point of the time course. GPR models were learned for each gene under both conditions over prediction time points T\u22c6 that cover smoothly days 0 to 24. We clustered the gene expression curves by the OVL kernel and the outlier-resistant spectral clustering as described in the Results and Discussion section. The MotifMap system (http://motifmap.ics.uci.edu/) was employed to obtain TF motifs present within promoters and predictions of candidate regulatory elements with a Bayesian branch length score (BBLS) of at least 1 and a false discovery rate (FDR) of 0.1. For each TF profile, the eigenfunctions (\u03d5k) exhibit, in an optimal way according to a variance criterion, the main modes of variation of the TF profiles relative to the mean; the (\u03beik) are uncorrelated random effect variables (scores) with mean 0 and variances \u03bbk in descending order, to be interpreted as the contribution of the kth variation mode to the total explained variance.
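The spectral clustering pipeline described above (normalized Laplacian, first k eigenvectors, k-means in the embedded space) can be sketched as follows; this uses a plain k-means with a deterministic spread-out initialization, not the outlier-resistant k-means\u2212\u2212 variant used by the authors:

```python
import numpy as np

def spectral_embed(K, k):
    """Embed objects via the first k eigenvectors (smallest eigenvalues)
    of the normalized graph Laplacian L_norm = D^{-1/2} (D - K) D^{-1/2}."""
    d = K.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(d))
    L_norm = D_is @ (np.diag(d) - K) @ D_is
    _, vecs = np.linalg.eigh(L_norm)   # eigh returns ascending eigenvalues
    return vecs[:, :k]                 # rows are the embedded points

def kmeans(X, k, iters=50):
    """Minimal k-means: (i) centers, (ii) nearest-center assignment,
    (iii) recompute centers as assignment means."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

On a similarity matrix with clear block structure, the second eigenvector of the normalized Laplacian separates the blocks, and k-means in the embedded space recovers them.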
Finally, (\u03d5k) are the functional principal components (FPC) or eigenfunctions, which are orthogonal according to the inner product \u2329u,v\u232a = \u222bu(t)v(t)dt. The two-stage clustering method of these functional data starts with a dimension reduction step using functional principal component analysis (FPCA), followed by a second step which consists in clustering the scores obtained using a hierarchical complete-linkage algorithm. More precisely, the goal of the FPCA is much the same as that of its multivariate counterpart, since its aim is to succinctly describe the TF time variations that explain the most variability. Thus, the FPCA represents each TF profile in terms of the Karhunen-Lo\u00e8ve decomposition Xi(t) = X\u00af(t) + \u2211k \u03beik \u03d5k(t). As usually done in the multivariate case, each TF profile was normalized by dividing the values of each function by their standard error, to account for differences in degrees of magnitude among the TF time variation functions. The functional principal component scores (FPCS) were calculated using the Matlab package PACE, and the pathway and sub-network enrichment analyses were performed using the web version of the software Pathway Studio from Elsevier. Confluent cells were irradiated at passage 3 with a cesium-137 source. For dose-fractionation experiments, cells were irradiated with five fractions of 2 Gy per week for two weeks (including one weekend break). For long-term experiments, culture medium was changed every week. Total RNAs were prepared with the total RNA isolation kit at days 0.5, 1, 2, 3, 4, 7, 14 and 21 post-irradiation at a single dose of 20 Gy, and at day 21 after the first fraction of 2 Gy and day 21 after the last fraction of 2 Gy for dose-fractionation experiments. Total RNA integrity was analyzed using an Agilent 2100 after quantification on a NanoDrop ND-1000 apparatus (NanoDrop Technologies).
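The FPCA / Karhunen-Lo\u00e8ve step described above can be sketched on a discretized time grid as an eigendecomposition of the sample covariance; this is a bare-bones NumPy illustration of the same decomposition, not the PACE algorithm itself:

```python
import numpy as np

def fpca(X):
    """Discretized functional PCA: rows of X are profiles sampled on a
    common time grid. Returns the mean curve, the variances lambda_k
    (descending), the eigenfunctions phi_k, and the scores xi_ik."""
    mean = X.mean(axis=0)
    Xc = X - mean                           # center the profiles
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance over the grid
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # lambda_k in descending order
    lam, phi = vals[order], vecs[:, order]
    scores = Xc @ phi                       # uncorrelated scores, mean 0
    return mean, lam, phi, scores
```

Each profile is exactly recovered as the mean curve plus the score-weighted sum of eigenfunctions, mirroring the decomposition Xi(t) = X\u00af(t) + \u2211k \u03beik \u03d5k(t).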
Reverse transcription was performed using the High Capacity Reverse Transcription Kit (Applied Biosystems) according to the manufacturer\u2019s instructions. Gene expression assays were performed using a panel of premade TaqMan low-density array (TLDA) gene signature cards (Applied Biosystems). cDNA (400 ng per sample) was loaded onto the port of each gene signature array card, and PCR was performed with the ABI PRISM 7900 Sequence Detection System (Applied Biosystems). Analyses were conducted according to the procedure previously described in detail. t-test p-values were adjusted using the Benjamini-Hochberg false discovery rate method using Data Assist software, and an adjusted p-value less than 0.05 was applied to select statistically differentially expressed genes. This section describes the experiments performed to collect expression data on endothelial cells exposed to either a single dose of 20 Gy or ten fractionated doses of 2 Gy. HUVECs from Lonza were cultured in EGM-2-MV medium at 37\u00b0C with 5% CO2. We experimented with the proposed kernels and the clustering method and then applied them to real data. To gain insights into the biological relevance of the clustering as regards the response of endothelial cells to a conventional RT dose fraction (2 Gy), (i) we clustered the gene expression curves by the OVL kernel and the spectral clustering, (ii) we searched for putative TFs associated with the clustered differential genes and (iii) we searched for pathway relationships between TFs, gene entities and the term \u201cradiation\u201d. First, we employed simulated clustering data\u2014generated by GP models\u2014to analyze which kernel and which clustering method perform best. Afterwards, we employed the OVL kernel and the outlier-resistant spectral method on the real data.
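The Benjamini-Hochberg adjustment mentioned above can be reproduced with the standard step-up procedure; this is a generic sketch, not the Data Assist implementation:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)    # p_(i) * n / i
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.minimum(ranked, 1.0)
    return adj
```

Genes with an adjusted p-value below 0.05 would then be called differentially expressed, as in the selection step described above.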
We generated simulated clusters by generating 50 GPs, i = 1,\u2026,50, where Kl is a Gaussian kernel with length scale l. Hence, each simulated cluster is represented as a GP whose mean is a sample from another GP, and whose covariance is a kernel matrix defined by sigmaf and l parameters sampled from Gamma distributions. We sampled 300 time series from these 50 GP clusters, while also sampling 100 independent time series, which represent outliers. Hence, the true outlier ratio is 25%. We compared the performance of the four kernels on simulated data with 400 time series from 50 true clusters. We generated the simulated data, learned the GP models, computed the kernel matrices and applied standard spectral clustering. We repeated the experiment 10 times, and report average results. The outlier-resistant variant leaves the l curves furthest away from the clusters out at each iteration. The EM clustering is a probabilistic Gaussian mixture model in which an outlier distribution is maintained. We clustered the gene expression curves by the OVL kernel and the spectral clustering: we constructed the difference curves and clustered the gene expression curves by the proposed methods, retrieving a set of temporal clusters corresponding to similar profiles. We clustered both the difference curves and the irradiated curves. As an application, we used the dataset we generated and published previously. In the cellular response to radiation, several sensors detect the induced molecular damage, especially DNA damage, and trigger signal transduction pathways resulting in altered expression of many target genes, notably through transcription factors such as NF-kB and AP-1. To gain insights into the transcriptional response of endothelial cells following irradiation, we performed transcription motif analysis on our genes using the MotifMap system (motifmap.ics.uci.edu) and Pathway Studio (www.elsevier.com/pathway-studio), and tried to take advantage of our clustering method to analyze this information.
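A sketch of this simulation scheme (cluster means drawn from GPs, cluster members drawn around them, plus independent outlier series) is given below; the Gamma shapes and scales and the sizes are placeholders, scaled down from the 50 clusters / 400 series used in the paper:

```python
import numpy as np

def gaussian_kernel_matrix(t, lengthscale, sigma_f):
    d = t[:, None] - t[None, :]
    return sigma_f**2 * np.exp(-0.5 * d**2 / lengthscale**2)

def simulate_clusters(n_clusters=5, series_per_cluster=6, n_outliers=4,
                      n_times=8, seed=0):
    """Each cluster is a GP whose mean is itself a GP draw, with
    sigma_f and lengthscale sampled from Gamma distributions
    (shapes/scales assumed); outliers are independent draws, label -1."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_times)
    series, labels = [], []
    for c in range(n_clusters):
        l, sf = rng.gamma(2.0, 0.25), rng.gamma(2.0, 0.5)
        K = gaussian_kernel_matrix(t, l, sf) + 1e-8 * np.eye(n_times)
        mean = rng.multivariate_normal(np.zeros(n_times), K)  # cluster mean
        for _ in range(series_per_cluster):
            series.append(rng.multivariate_normal(mean, K))
            labels.append(c)
    for _ in range(n_outliers):
        l, sf = rng.gamma(2.0, 0.25), rng.gamma(2.0, 0.5)
        K = gaussian_kernel_matrix(t, l, sf) + 1e-8 * np.eye(n_times)
        series.append(rng.multivariate_normal(np.zeros(n_times), K))
        labels.append(-1)
    return np.array(series), np.array(labels)
```

With these defaults the outlier ratio is 4/34; the paper's setting (300 cluster series plus 100 outliers, i.e. 25%) is obtained by scaling the counts up.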
We extracted all putative transcription motifs related to the original 301 genes from the MotifMap system, and analyzed the resulting predictions in Pathway Studio. Considering the wealth of information given by the TFs associated with the differential genes, we assumed that the number of times a TF occurred on each day or in each time window post-irradiation may help to understand the response of endothelial cells to irradiation. The number of occurrences of each TF (i.e. the number of times a TF was predicted) at each day or each time window post-irradiation was therefore determined and plotted. To extract information from the temporal profiles of occurrences, we clustered them using functional data analysis (FDA). We obtained four main temporal profiles, i.e. i) TFs found in the early time points (early times), ii) TFs found in the middle of the time course for short periods of time, iii) TFs found in the late time points (late times), and iv) TFs found during long periods of time. A classification of the TFs according to their occurrence profiles was established. Almost all genes were differential in the intermediate time points, as shown by the plot of the number of differentially expressed genes, which displays a maximum between 7 and 14 days post-IR. Considering the TFs found in early times (cluster 1), we established that they were interconnected and related to radiation through TP53, using the text mining algorithm of the software Pathway Studio (PS) by querying direct interactions. Rather few TFs were found for intermediate times of short periods (cluster 2). Concerning the late times (cluster 3), we found 11 TFs that could be involved in the response of endothelial cells to radiation. Finally, as regards the 25 TFs found for a long period (cluster 4), PS shows that JUN is strongly linked to radiation (by 31 references) and is the node of a network consisting of 14 TFs.
We then asked whether there were links between the clustered genes and the associated TFs highlighted by MotifMap. Querying promoter binding relationships with PS, we built networks between the TFs and the differential genes identified at different times post-irradiation, for different time windows over the 21-day post-irradiation period, i.e. 1\u20134, 4\u20137, 7\u201310, 10\u201314, 14\u201317 and 17\u201321 days. To gain insights into the mechanisms involved in the molecular response of endothelial cells to ionizing radiation, we applied a new GP-kernel-based clustering to gene expression time series of irradiated HUVEC cells. This method exploits the results of the previous analysis we performed by establishing a new method that combines GPs and a novel Bayesian likelihood ratio test. Overall, our results highlight that a dose of 2 Gy, which corresponds to a conventional RT dose fraction, could be sufficient to activate a basic molecular program, covering cell survival, activation or cell death on the one hand, and on the other hand the process of cell adhesion, which is the first step of tissue infiltration. Furthermore, based on the cluster analysis, this new method allowed us to propose putative transcription factors involved in the regulation of gene expression following radiation, and five key genes as drivers of the response to ionizing radiation in endothelial cells.
The importance of these five node genes is an interesting hypothesis, whose further biological validation warrants future studies. S1 Fig (PPTX). S2 Fig: The OVL and BH kernels achieve a consistently high performance (PPTX). S3 Fig: The outlier approach achieves an overall performance similar to that of standard k-means, but with higher precision and lower recall (PPTX). S1 Table: The names, descriptions and Swiss-Prot IDs of the 49 statistically differentially expressed genes (determined by the GPR model as previously published) found in the temporal clusters (XLSX). S2 Table: This table presents the results of the MotifMap system analysis using an FDR of 0.1. The motifs, their location with respect to the start codon and their location in the genome, as well as the predicted TFs and their Bayesian Branch Length Score (BBLS), are given for each gene (identified by its NCBI Reference Sequence) whose expression was measured in this study (XLSX). S3 Table: This table presents the results of the MotifMap system analysis using an FDR of 0.1.
The motifs, their location with respect to the start codon and their location in the genome, as well as the predicted TFs and their Bayesian Branch Length Score (BBLS), are given for each gene (identified by its NCBI Reference Sequence) whose expression was statistically significantly differential in this study (XLSX). S4 Table: The names, descriptions and Swiss-Prot IDs of the 47 TFs predicted from the 49 differential genes using the MotifMap system are given in this table (XLSX). S5 Table: This table gives the names of the genes, the motif IDs and the names (in MotifMap and their corresponding names in Pathway Studio) of the predicted TFs for each cluster of differential genes (XLSX). S6 Table: We report here the number of times each TF was predicted for each day and each time window post-irradiation. Cluster numbers are also indicated for each day and time window (XLSX). S7 Table: The table presents the result of the subnetwork enrichment of the five node genes BIRC5, CXCL8, CXCL10, CXCL12 and PTGS2, searching for regulating cell processes using the Pathway Studio software. Ranks of hits are based on the p-values (XLSX)."}
+{"text": "Feature selection and gene set analysis are of increasing interest in the field of bioinformatics. While these two approaches have been developed for different purposes, we describe how some gene set analysis methods can be utilized to conduct feature selection. We adopted a gene set analysis method, the significance analysis of microarray gene set reduction (SAMGSR) algorithm, to carry out feature selection for longitudinal gene expression data. Using a real-world application and simulated data, it is demonstrated that the proposed SAMGSR extension outperforms other relevant methods. In this study, we illustrate that a gene\u2019s expression profiles over time can be regarded as a gene set, and that a suitable gene set analysis method can then be utilized directly to select relevant genes associated with the phenotype of interest over time. We believe this work will motivate more research to bridge feature selection and gene set analysis, with the development of novel algorithms capable of carrying out feature selection for longitudinal gene expression data. The online version of this article (10.1186/s12911-018-0685-8) contains supplementary material, which is available to authorized users. Currently, feature selection and pathway analysis are two bioinformatics tools that are employed with extremely high frequency. While pathway analysis aims to identify relevant pathways/gene sets associated with a phenotype of interest, feature selection mainly focuses on the identification of relevant individual genes. These two tools seem to be parallel to each other. Nevertheless, some pathway analysis methods can be further extended to possess the ability to identify relevant individual genes. One example is the significance analysis of microarray - gene set reduction (SAMGSR) method.
Since biological systems or processes are dynamic, researchers are interested in investigating gene expression patterns across time, in an effort to capture dynamical changes that are biologically meaningful to the systems. With the fast evolution of microarray and RNA-Seq technology, longitudinal omics experiments have become affordable and thus routine in a variety of fields. The statistical approach typically employed to analyze longitudinal omics data is to stratify the data into separate time points and then tackle them separately. This na\u00efve strategy is inefficient, given that the highly dependent structure between the measures of the same subject over time is erroneously ignored, leading to a huge loss of statistical power and a failure to detect incremental changes in gene expression patterns over time. On the other hand, several novel and complex methods have been proposed to specifically deal with longitudinal gene expression data. Traumatic injury with subsequent infection was a common cause of death in ancient times. Even today, massive injury such as combat wounds remains life threatening. The data are available from the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/); the accession number is GSE36809. This experiment was hybridized on Affymetrix HGU133 plus2 chips.
Because we focus on identifying genes that present a longitudinal expression change pattern between trauma patients with complications and those without complications, only patients with uncomplicated recoveries and patients with complicated recoveries were considered here. The raw data for the injury microarray experiment were downloaded from this repository. Based on the duration of recovery, uncomplicated recovery represents recovery within 5 days, while complicated recovery includes recovery after 14 days, no recovery by 28 days, or death. In this subsection, we first give an introduction to the SAMGSR algorithm, and then provide a detailed description of the proposed procedure, which is referred to as the longitudinal SAMGSR method herein. The SAMGSR method extends the SAMGS method. In the first step, the statistical significance of a gene set is determined by the p-value for the SAMGS statistic. In the second step, only those genes within significant gene sets identified by the first step are considered. The SAMGSR method reorders the genes within gene set j based on the magnitude of their SAM statistics and gradually partitions the entire gene set into two subsets: the reduced subset Rk, which includes the first k genes with the largest SAM statistics, and the residual subset, for k = 1,\u2026, |S-1|. Here S is the size of gene set j. Let ck be the SAMGS p-value of the residual subset; the selected k is the smallest k such that ck is larger than a pre-specified cut-off, e.g., 0.05. Conceptually, the significance level of a gene within a gene set is determined by the magnitude of its SAM statistic. This implies that if in a gene set |SAMi| > |SAMj| for genes i and j, gene j cannot enter the reduced subset unless gene i is inside the reduced subset. In the longitudinal SAMGSR method, a gene set has a different meaning: it refers to a gene\u2019s expression profiles across time.
First, the significant genes are selected, with the original SAMGS statistic modified to combine, over the T time points, the per-time-point SAM statistics dt of a gene (t = 1,\u2026, T). In the SAM statistic, s0t is a small positive constant used to offset the small variability in microarray expression measurements and thus to avoid the denominator of the SAM statistic being zero. Both s(t) and s0t are specific to individual time points, because the variability of expression measurements may differ over time. From the above construction, it is observed that each gene\u2019s expression profiles over time are combined into a gene set: a gene set represents one specific gene over different time points. Our rationale is that a gene\u2019s expression values for the same individual over time are correlated, mimicking a gene set/pathway. This method first calculates the SAMGS statistics for all genes to select the relevant genes, and then determines the exact time point(s) where the expression values of a specific gene differ between the two phenotypes. In the method, the cut-off ck is regarded as a tuning parameter. Using the sequence of 0.05, 0.1, \u2026, 0.5, the optimal value of ck corresponds to the one associated with the minimum 5-fold cross-validation (CV) error. In this study, we use four metrics - the Belief Confusion Metric (BCM), the Area Under the Precision-Recall Curve (AUPR), the Generalized Brier Score (GBS) and the misclassification error - to evaluate the performance of a classifier. Our previous study described these metrics in detail. Since an evaluation on individual time points using the selected statistical metrics might be unfair for the SAMGSR extension, in that it tends to identify those genes that are insignificant at isolated time points but significant jointly over time, we used the following steps to calculate overall performance statistics.
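The construction can be illustrated as follows. Here the per-time-point SAM statistic is written as a moderated two-sample t-statistic with fudge factor s0, and the longitudinal SAMGS statistic is taken as the sum of the squared per-time SAM statistics; this combining rule and the variable names are illustrative assumptions, not copied from the paper:

```python
import numpy as np

def sam_statistic(x1, x2, s0=0.1):
    """SAM statistic for one gene at one time point: a moderated
    t-statistic with a small constant s0 added to the pooled standard
    error so the denominator cannot be zero."""
    n1, n2 = len(x1), len(x2)
    num = x1.mean() - x2.mean()
    pooled = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled * (1.0 / n1 + 1.0 / n2))
    return num / (s + s0)

def longitudinal_samgs(gene_g1, gene_g2, s0=0.1):
    """Treat one gene's per-time-point measurements as a 'gene set':
    gene_g1[t] and gene_g2[t] hold the two groups' expression values at
    time t. Combines the per-time SAM statistics (assumed: sum of squares)."""
    return sum(sam_statistic(a, b, s0)**2 for a, b in zip(gene_g1, gene_g2))
```

A gene with a consistent but moderate shift at every time point accumulates a large combined statistic, which is exactly the behavior the longitudinal SAMGSR extension is designed to exploit.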
Specifically, for those methods incapable of providing the final model simultaneously with the selection of relevant genes, e.g., the longitudinal SAMGSR method, a linear support vector machine (SVM) model using the selected genes was fit to estimate the \u03b2 coefficients at individual time points. Then, the posterior probabilities of the samples belonging to the diseased group were calculated at each time point on the basis of the SVM models, and the averages of those probabilities over time were taken and used to obtain the performance statistics. All analyses were carried out in R (http://www.r-project.org), and R codes for the longitudinal SAMGSR algorithm are available in an additional file. The number of genes being significant only at one specific time point is one half of this number. Regarding computing time, LASSO is the most efficient, with one single run only taking less than 1 s, since the implementation of LASSO in the R glmnet package calls upon compiled code. In order to explore the properties of the SAMGSR extension, we used the observed expression values from the injury application to design two sets of simulations. First, we chose two causal genes \u2013 F13A1 and GSTM1 \u2013 and then randomly selected 998 genes from the remaining genes as irrelevant noise genes. Denoting the expression value of gene i (F13A1 or GSTM1) at time j as Xi.j, the following logit function was used to obtain the probability pi of sample i experiencing a complicated injury: pi = exp(logiti)/(1 + exp(logiti)). Then a random variable Yi following a Bernoulli distribution with parameter pi was simulated; it has two possible values, with 1 indicating that the sample belongs to the complicated injury group and 0 to the uncomplicated injury group.
Notably, in this simulation we considered one gene whose significance arises from its joint association accumulated from time point 1 to time point 4, and another whose association with the outcome is only at the third time point. The aim of this simulation was to illustrate the advantage possessed by the SAMGSR extension, namely that by incorporating the accumulated effect of genes over time, it can recognize genes with a coordinated change accumulating over time but only mild or moderate change at each time point. In the second simulation, we chose two genes \u2013 COX4I2 and RP9 \u2013 as the relevant genes, with an analogous logit function. In this simulation, both genes were associated with the outcome at a single respective time point, but in opposite directions. For both simulation settings, 50 replicates/50 datasets were generated. The frequencies of each causal gene being selected at each time point are given in the corresponding table. Although in the second simulation the number of relevant time points was smaller than in the first one, the number of genes selected by the longitudinal SAMGSR algorithm was dramatically larger in the second simulation. This might be because the relevant genes in the second simulation were highly correlated with other genes compared to the first simulation. The highly correlated structure between relevant features and irrelevant ones produced a large number of redundant features that the SAMGSR extension cannot exclude. To the best of our knowledge, however, many feature selection algorithms, especially the filter methods, suffer from the same problem. In the injury application, only complicated patients with five measurements and uncomplicated patients with seven time points were included in the training set. Then patients discharged from the hospital earlier (thus having later measurements missing) were used to verify the resulting models. Similar to SAMGSR, our proposed extension has no difficulty dealing with missing values.
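The simulation mechanism (logit, then probability, then Bernoulli draw) can be sketched generically; the design matrix X, the coefficients beta and the seed below are placeholders, not the paper's actual parameter values:

```python
import numpy as np

def simulate_outcomes(X, beta, seed=0):
    """Simulate binary outcomes from a logit model:
    logit_i = X_i . beta, p_i = exp(logit_i)/(1 + exp(logit_i)),
    Y_i ~ Bernoulli(p_i). Rows of X hold the causal genes'
    (observed) expression values for each sample."""
    rng = np.random.default_rng(seed)
    logit = X @ beta
    p = 1.0 / (1.0 + np.exp(-logit))   # equivalent to exp(logit)/(1+exp(logit))
    return rng.binomial(1, p), p
```

Samples with a strongly positive logit are assigned to the complicated-injury group (Y = 1) with probability near one, and strongly negative logits give Y = 0, as in the simulation design described above.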
Therefore, the proposed algorithm does not require this restriction; in this study, we imposed it to simplify our data analysis. The SAMGSR extension incorporates the correlated structure of expression profiles over time into the framework of gene sets/pathways, and is more likely to identify genes with aggregated effects over time whose effect size at individual time points may be insignificant. These genes are usually overlooked by the implementation of a conventional feature selection method at individual time points. Using a real-world application, we showed that the longitudinal SAMGSR method is superior to other relevant algorithms. In this article, we applied the SAMGSR algorithm to carry out feature selection for longitudinal gene expression profiles. To the best of our knowledge, this study is one of the few efforts to explore the modification of suitable pathway analysis methods to execute feature selection for longitudinal gene expression data; such an application may save the time of developing a new feature selection algorithm for longitudinal data from scratch. As shown by two simulations, this extension has a big drawback, namely, including many redundant irrelevant genes in the final lists. Nevertheless, we believe this work will pave the way for more research to bridge feature selection and gene set analysis with the development of novel algorithms to tackle longitudinal omics data. For instance, many other gene set analysis methods may be adapted similarly. Additional file 1: The R codes of the longitudinal SAMGSR method. (TXT 7 kb)"}
+{"text": "High-throughput expression profiling experiments with ordered conditions are becoming more common for studying detailed differentiation processes or spatial patterns. Identifying dynamic changes at both the individual gene and whole transcriptome level can provide important insights about genes, pathways, and critical time points.We present an R package, Trendy, which utilizes segmented regression models to simultaneously characterize each gene\u2019s expression pattern and summarize overall dynamic activity in ordered condition experiments. For each gene, Trendy finds the optimal segmented regression model and provides the location and direction of dynamic changes in expression. We demonstrate the utility of Trendy to provide biologically relevant results on both microarray and RNA-sequencing (RNA-seq) datasets.https://bioconductor.org/packages/release/bioc/html/Trendy.html.Trendy is a flexible R package which characterizes gene-specific expression patterns and summarizes changes of global dynamics over ordered conditions. Trendy is freely available on Bioconductor with a full vignette at The online version of this article (10.1186/s12859-018-2405-x) contains supplementary material, which is available to authorized users. High-throughput, transcriptome-wide expression profiling technologies such as microarrays and RNA-seq have become essential tools for advancing insights into biological systems. The power of these technologies can be further leveraged to study the dynamics of biological processes by profiling over ordered conditions such as time or space. In this article, we use the general term \u201ctime-course\u201d to refer to any dynamically ordered condition and \u201cgene\u201d to any genomic feature .Many methods for time-course experiments aim to identify differentially expressed genes between multi-series time-courses (e.g. two treatments monitored over time) \u20133. 
Statistical methods for analyzing single-series time-course data have largely focused on clustering gene expression. EBSeq-HMM was developed for such data, but it classifies genes into one of 3^(time points\u22121) possible expression paths, which becomes intractable for long time-courses. Here we propose an approach we call Trendy which employs the method of segmented regression models to simultaneously characterize each gene\u2019s expression pattern and summarize overall dynamic activity in single-series time-course experiments. For each gene, we fit a set of segmented regression models with varying numbers of breakpoints. Each breakpoint represents a dynamic change in the gene\u2019s expression profile over time. A model selection step then identifies the model with the optimal number of breakpoints. We define the top dynamic genes as those that are well-profiled based on the fit of their optimal model. For each top gene, the parameter estimates of their optimal model are used to fully characterize the gene\u2019s expression dynamics across the time-course. A global summary of the dynamic changes across all top genes is then represented by the distribution of breakpoints across all time points. Our method does not require replicate time points and although we focus on time-course of gene expression, it may be applied to alternative features (e.g. isoform or micro-RNA expression) and/or other experiments with ordered conditions. Trendy is written in R and freely available on Bioconductor at https://bioconductor.org/packages/release/bioc/html/Trendy.html. We include a detailed vignette with working examples and an R/Shiny application to visualize and explore the fitted trends. An overview of the Trendy framework is given in Fig.\u00a0. The input is a G-by-N matrix containing the normalized expression values for each gene and each sample. Between-sample normalization is required prior to Trendy, and should be performed according to the type of data. We aim to estimate the parameters of the segmented regression model in (1).
For each gene, we fit K+1 models for k\u2208{0,1,\u2026,K} and select the model having the optimal number of breakpoints by comparing the Bayesian information criterion (BIC) among all models. For a model with k breakpoints, there are k estimated breakpoints, k+1 estimated segment slopes, and an estimated intercept and error. The BIC of the linear model having no breakpoints, k=0, is also considered here. An optimal model will be found for every gene; however, we only further consider those genes with a good fit. We quantify the quality of the fit for each gene\u2019s optimal model as the adjusted R2, which also penalizes for model complexity. For each gene\u2019s optimal model, Trendy reports the gene-specific adjusted R2, the segment slopes, and the breakpoint estimates. To avoid overfitting, segment trends are assigned conservatively: if the p-value of a segment\u2019s slope estimate is larger than cpval, the trend of the segment is defined as \u2018no-change\u2019; otherwise, if the p-value is smaller than cpval, the segment is set to \u2018up\u2019 or \u2018down\u2019 depending on the sign of the slope. The default value of cpval is 0.1, but may be specified by the user. Trendy represents the trends \u2018up\u2019, \u2018down\u2019, and \u2018no-change\u2019 as 1, -1, and 0, and genes\u2019 fitted trends may be clustered using an algorithm such as hierarchical clustering. Genes in the same group may then be investigated using gene enrichment analysis. Trendy also summarizes the fitted trend or expression pattern of top genes.
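The model-selection step can be sketched as follows: fit a linear model (k=0) and a family of one-breakpoint segmented models (k=1) over candidate breakpoint locations, then keep the fit with the lowest BIC. This is an illustration in NumPy under a Gaussian-likelihood BIC, not the actual implementation in the Trendy package (which uses segmented regression machinery in R).

```python
import numpy as np

def bic(rss, n, n_params):
    # Gaussian-likelihood BIC; small epsilon guards log(0) for perfect fits
    return n * np.log(rss / n + 1e-12) + n_params * np.log(n)

def fit_trendy_like(t, y, min_seg=3):
    """Return (k, breakpoint, fitted values) for the BIC-optimal model, k in {0, 1}."""
    n = len(t)
    # k = 0: ordinary linear model with 2 parameters (intercept, slope)
    X0 = np.column_stack([np.ones(n), t])
    coef0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    best = (0, None, X0 @ coef0)
    best_bic = bic(np.sum((y - best[2]) ** 2), n, 2)
    # k = 1: continuous two-segment model, scanning candidate breakpoints;
    # 4 parameters (intercept, two slopes, breakpoint location)
    for c in t[min_seg:-min_seg]:
        X1 = np.column_stack([np.ones(n), t, np.maximum(0.0, t - c)])
        coef1, *_ = np.linalg.lstsq(X1, y, rcond=None)
        fit = X1 @ coef1
        b = bic(np.sum((y - fit) ** 2), n, 4)
        if b < best_bic:
            best, best_bic = (1, c, fit), b
    return best
```

For a gene with slope +2 up to t=10 and slope \u22121 afterwards, the one-breakpoint model wins the BIC comparison and the estimated breakpoint lands near t=10; for a purely linear gene the k=0 model is retained.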
Once the optimal model for a gene is selected, each segment is assigned a direction of \u2018up\u2019, \u2018down\u2019, or \u2018no-change\u2019 based on the sign and p-value of its slope estimate, as described above. A global view of expression changes is obtained by computing the breakpoint distribution as the sum of all breakpoints at each time point over all dynamic genes. The Trendy package includes an R/Shiny application which provides visualization of gene expression and the segmented regression fit. The application also allows users to extract a list of genes which follow particular expression patterns. The interface is shown in Additional file\u00a0. We performed a simulation study to illustrate the operating characteristics of Trendy using an RNA-seq dataset with N=96 samples. The data are technical replicates collected and sequenced at the same time following the protocol from Hou et al., 2015. The simulation settings varied the total number of breakpoints (K=1,5,10), the minimum number of time points required in a segment (mNS=2,5), the total length of the time course (T=25,50), and the spacing of time points: evenly spaced and short (t={1,2,\u2026,24,25}), evenly spaced and long, or randomly spaced. Specifically, the variability settings were: Low, with variances sampled from the 20\u201330th percentile of variability; and High, with variances sampled from the 70\u201380th percentile of variability. These two settings were each simulated 100 times with K=1 or K=2, G=50 genes, and N=25. We used default settings when variance is low, and for high variance the p-value cutoff, cpval, was set to 0.2. We evaluated the performance of Trendy based on the average percentage of genes correctly classified in the number of breakpoints, trend, and the time of breakpoints (when applicable). The full results are shown in Table\u00a0, including performance as K increased for this simulation.
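A toy sketch of the trend coding and the global breakpoint-distribution summary described above (the function names here are illustrative, not the Trendy API):

```python
import numpy as np
from collections import Counter

def segment_trend(slope, pvalue, cpval=0.1):
    """Code a segment as 1 ('up'), -1 ('down'), or 0 ('no-change')."""
    if pvalue >= cpval:          # slope not significant at cpval -> no-change
        return 0
    return 1 if slope > 0 else -1

def breakpoint_distribution(breakpoints_per_gene, time_points):
    """Sum of breakpoints at each time point over all dynamic genes."""
    counts = Counter(bp for bps in breakpoints_per_gene for bp in bps)
    return {t: counts.get(t, 0) for t in time_points}
```

Peaks in the resulting distribution flag time points at which many genes change direction, which is the global summary Trendy reports.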
For genes in which Trendy correctly estimated the number of breakpoints, we evaluated the estimation of breakpoint time. Specifically, we calculated the deviation of the estimated breakpoints from the true simulated values. The estimated breakpoint time was highly accurate: when variability was low, the average difference was 0.01 for both K=1 and K=2, and for the high variability scenario, the average difference was zero when K=1 and -0.01 when K=2. The computation time of Trendy scales approximately linearly in the number of genes (G), number of samples (N), and number of breakpoints considered (K). On a Linux machine using 10 cores, Trendy takes approximately 3.4 h for a dataset with 10,000 genes, 30 time points, and K=3. Further analysis by Trendy identified a total of 34 top genes that have a cycling pattern defined as \u201cup-down-up-down\u201d; many of these genes are now supported in the literature as being involved in the cell cycle. Trendy also identified 807 genes having a delayed peak pattern defined as \u201csame-up-down\u201d with the first breakpoint occurring after Stage 8. Trendy was also applied to two neural differentiation time-course RNA-seq experiments in Barry et al. 2017. To highlight the main differences between Trendy and other tools such as EBSeq-HMM and FunPat, we performed a comparative study using the Axolotl RNA-seq dataset. The dataset has 17 measured time points with 2 or 3 replicates at each time. Because EBSeq-HMM attempts to classify genes into 3^(time points\u22121) patterns, it is not computationally tractable for very long time-courses and we were not able to run the method on this set of data. FunPat is able to analyze datasets with a large number of time points, however the output is different from that of Trendy in a number of ways. Since there is no standard annotation package in R for axolotl genes, we focus on the output of the gene clustering.
FunPat identified 411 total patterns using the default settings. The patterns are represented visually for each group and a text file lists the genes belonging to each cluster as well as standardized expression values for each gene (Additional file\u00a0). To illustrate an example of Trendy versus EBSeq-HMM on a shorter time-course dataset, we demonstrate one simulated gene example, with K=1 and an increasing trend over the time-course, in Fig.\u00a0. We developed an approach we call Trendy, which utilizes segmented regression models to analyze data from high-throughput expression profiling experiments with ordered conditions. Trendy provides statistical analyses and summaries of both feature-specific and global expression dynamics. In addition to the standard workflow in Trendy, also included in the R package is an R/Shiny application to visualize and explore expression dynamics for specific genes and the ability to extract genes containing user-defined patterns of expression. Trendy characterizes genes more appropriately than EBSeq-HMM for long time-courses when the expression is noisy and changes are gradual over the time-course. Although an alternative auto-regressive model for EBSeq-HMM might provide the flexibility to better classify genes in such cases, we also stress that Trendy provides unique information on dynamics including the time of significant changes via the breakpoint estimation. Trendy is also able to handle much longer time-courses in a reasonable amount of time compared to EBSeq-HMM and FunPat. In addition, the output of Trendy is more flexible than FunPat as genes can be clustered based on a variety of summaries provided such as breakpoint location and trends. For example, if a time-course has N=T=10 then it is not possible to identify any breakpoints if mNS=10.
Rather, a smaller number of data points separating the breakpoints would be required, such as mNS = 4, which would allow a maximum of one breakpoint to be fit and require at least 4 data points in both segments surrounding the breakpoint. Based on the simulations and case studies, mNS around five is recommended, which also indicates that Trendy is designed for experiments with T>10. In general, Trendy is intended for more densely sampled biological processes, where multiple time points carry evidence of a trend. If a significant change is expected between two consecutive time points that is not supported by the surrounding times and replicates are not available, then EBSeq-HMM is more appropriate to assess statistical significance. Trendy performed well in both simulation studies by identifying few false positive genes when no trend was present and correctly identifying breakpoints and trend directions when true trends were simulated. As demonstrated in the simulations, Trendy is robust at choosing the true K. However, in practice, setting K much larger than what is biologically reasonable is not advised since it increases the computation time. We also note that the number of data points in a segment separating breakpoints, mNS, is a critical parameter. The choice of this parameter value is directly linked to the number of samples N. In addition to characterizing each gene, Trendy also calculates a global summary of dynamic changes. The breakpoint distribution can be used to prioritize follow-up investigations or experiments into specific time points. We recommend generating this summary using the breakpoints of the top dynamic genes, specifying those with a higher value of adjusted R2. We applied Trendy to two case study datasets (one microarray and one RNA-seq) and demonstrated the approach\u2019s ability to capture biologically relevant information in individual gene estimates of breakpoints and trends, as well as information conveyed in global summaries of trends across genes.
Although Trendy was applied only to single-series time-course experiments here, the breakpoints for Trendy can be compared across experiments if measured on the same time or spatial scale, as we did in Barry et al., 2017. In experiments where the number of time points is large and/or expression between time points is consistent yet subtle, we expect Trendy to be a valuable tool, especially as the prevalence of such experiments is on the rise with an increase in time-course sequencing experiments to study dynamic biological processes and the proliferation of single-cell snapshot sequencing experiments in which cells can be computationally ordered and assigned a temporal order. Project name: Trendy. Project home page: https://bioconductor.org/packages/release/bioc/html/Trendy.html. Operating system(s): all, specifically tested on Linux and Mac. Programming language: R. Other requirements: R version \u22653.4. License: GPL-3. Any restrictions to use by non-academics: No restrictions. Additional file 1: Supplementary Figures. (PDF 3581 kb) Additional file 2: RNA-seq data used in simulation study. Additional file 3: Enrichment results of Axolotl RNA-seq data. (XLSX 105 kb)"}
+{"text": "Succinate dehydrogenase (SDH) loss and mastermind-like 3 (MAML3) translocation are two clinically important genetic alterations that correlate with increased rates of metastasis in subtypes of human paraganglioma and pheochromocytoma (PPGL) neuroendocrine tumors. Although hypotheses propose that succinate accumulation after SDH loss poisons dioxygenases and activates pseudohypoxia and epigenomic hypermethylation, it remains unclear whether these mechanisms account for oncogenic transcriptional patterns. Additionally, MAML3 translocation has recently been identified as a genetic alteration in PPGL, but is poorly understood. We hypothesize that a key to understanding tumorigenesis driven by these genetic alterations is identification of the transcription factors responsible for the observed oncogenic transcriptional changes. We leverage publicly-available human tumor gene expression profiling experiments (N\u00a0=\u2009179) to reconstruct a PPGL tumor-specific transcriptional network. We subsequently use the inferred transcriptional network to perform master regulator analyses nominating transcription factors predicted to control oncogenic transcription in specific PPGL molecular subtypes. Results are validated by analysis of an independent collection of PPGL tumor specimens (N\u00a0=\u2009188). We then perform a similar master regulator analysis in SDH-loss mouse embryonic fibroblasts (MEFs) to infer aspects of the SDH loss master regulator response conserved across species and tissue types. The online version of this article contains supplementary material, which is available to authorized users. Pheochromocytoma and paraganglioma (PPGL) are rare, closely-related neuroendocrine tumors arising from the adrenal medulla and autonomic ganglia of the peripheral nervous system, respectively.
Over the last two decades more than 20 potentially causative genetic alterations have been elucidated for PPGL, including mutations in genes involved in kinase signalling, hypoxic response, and tricarboxylic acid (TCA) cycle metabolism, as well as translocations involving the MAML3 gene. The MAML3 translocation also correlates with poor patient prognosis and higher rates of metastasis. For the SDH-loss, VHL-loss, and MAML3 translocation PPGL subtypes, we derived differentially-expressed gene signatures, as described in Methods, and found that the observed overlaps of differentially-expressed genes were larger than the expected overlap for randomly-selected gene sets of this same size. We then assembled a PPGL-specific inferred transcriptional network using 179 RNA-seq experiments generated by the TCGA Research Consortium; this dataset has been described previously. The results of these analyses are presented in Additional\u00a0file\u00a0, and those with p-value <\u20090.05 are shown in Fig.\u00a0. We then applied the MR inference algorithm, using the inferred transcriptional network, to infer TFs whose target genes are significantly enriched for the input SDH-loss, VHL-loss, or MAML3 translocation differential expression signatures. HIF2\u03b1 (EPAS1) was nominated as a MR for VHL-loss tumors, controlling ~\u20095% of the observed differentially-expressed genes in that tumor subtype. We therefore repeated the MRA procedure using ARACNE-inferred networks trimmed with slightly less stringent criteria (DPI tolerance\u2009=\u20090.05), so as to enhance the sensitivity of detection of master regulators. These analyses, presented in Additional\u00a0file\u00a0, detect EPAS1-related transcriptional effects in SDH-loss tumors, although EPAS1 is by no means among the top MRs inferred to cause the observed patterns of tumorigenic transcriptional perturbation.
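The comparison against randomly-selected gene sets can be sketched as follows: draw random gene sets of the same sizes many times, record the overlap each time, and compare the observed overlap against that null distribution (this is the logic of the statistical simulations in the supplementary figures). The universe and set sizes below are placeholders, not the paper's values.

```python
import numpy as np

def overlap_null(universe, size_a, size_b, n_sim=2000, seed=0):
    """Null distribution of overlap sizes for two randomly-drawn gene sets."""
    rng = np.random.default_rng(seed)
    overlaps = np.empty(n_sim, dtype=int)
    for i in range(n_sim):
        a = rng.choice(universe, size_a, replace=False)
        b = rng.choice(universe, size_b, replace=False)
        overlaps[i] = np.intersect1d(a, b).size
    return overlaps

def empirical_p(observed, overlaps):
    """One-sided empirical p-value: overlap at least as large as observed."""
    return (np.sum(overlaps >= observed) + 1) / (len(overlaps) + 1)
```

With a universe of 20,000 genes and set sizes of 400 and 300, the expected overlap is 400 * 300 / 20,000 = 6, so an observed overlap of, say, 15 would yield a small empirical p-value.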
Importantly, this suggests that HIF-related transcriptional dysregulation is detectable, but that it inadequately accounts for the majority of transcriptional perturbations observed in these \u201cpseudohypoxic\u201d tumor subtypes, and that dysregulation of other transcription factors may play a much more important role in driving oncogenic transcriptomic patterns than previously appreciated. Strikingly, for MAML3 translocation-positive tumors, we find that a single TF, IRX4, is predicted to account for >\u200920% of the observed transcriptional perturbation. This is particularly intriguing because the reported MAML3 translocations involve chr18~chr4 fusion (TCF4~MAML3 gene fusion) or chr17~chr4 fusion (UBTF~MAML3 gene fusion), with the IRX4 locus on chromosome 5 being unaffected in either case. It remains unclear how MAML3 translocation is connected to IRX4-mediated transcriptional dysregulation. Surprisingly, hypoxia-inducible factors were not among the nominated high-confidence SDH-loss MRs. We also assessed the degree to which the regulon of a given TF overlaps with the regulons of other known TFs. For validation, we used an independent collection of PPGL tumor specimens; this dataset has been described previously. Using the SDHx and VHL mutation status of these tumor specimens, we calculated the average log2(fold-change) in gene expression for tumors of each of these molecular subtypes relative to other PPGL tumors. Then, using known regulon structures of the PPGL inferred transcriptional network, we calculated the average log2(fold-change) for all genes under regulatory control of each of the SDH-loss and VHL-loss MRs nominated from MRA performed on the ARACNE network trimmed with DPI tolerance of 0.05. We performed this same calculation using the input gene expression signature used for MR discovery, and then examined whether there was correlation between the regulon log2(fold-change) values for the discovery and validation data sets. Strikingly, we observe strong agreement between the two analyses.
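The regulon-level validation above amounts to averaging per-gene log2(fold-change) over each MR's regulon in the discovery and validation cohorts and checking the correlation between the two vectors. A minimal sketch, with invented regulon memberships and fold-change values for illustration:

```python
import numpy as np

def regulon_mean_lfc(lfc_by_gene, regulons):
    """Average log2(fold-change) over each regulon's member genes."""
    return {mr: float(np.mean([lfc_by_gene[g] for g in genes if g in lfc_by_gene]))
            for mr, genes in regulons.items()}

# Invented regulons and per-gene log2FC for a discovery and a validation cohort
regulons = {"MR1": ["g1", "g2", "g3"], "MR2": ["g4", "g5"], "MR3": ["g6", "g7"]}
discovery = {"g1": 1.0, "g2": 1.4, "g3": 0.6, "g4": -0.8,
             "g5": -1.2, "g6": 0.2, "g7": 0.4}
validation = {"g1": 0.9, "g2": 1.1, "g3": 0.8, "g4": -0.7,
              "g5": -0.9, "g6": 0.25, "g7": 0.15}

disc = regulon_mean_lfc(discovery, regulons)
val = regulon_mean_lfc(validation, regulons)
mrs = sorted(regulons)
# Agreement between cohorts at the regulon level (Pearson correlation)
r = float(np.corrcoef([disc[m] for m in mrs], [val[m] for m in mrs])[0, 1])
```

A high correlation, as observed in the paper's discovery-versus-validation comparison, indicates that the nominated MRs' regulons shift consistently in independent cohorts.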
This model of SDHC-loss was used rather than the more common SDHB and SDHD subunit defects seen in human PPGL tumors specifically because the SDHC gene trapped allele used to derive the line was readily available from the Wellcome Trust Sanger Institute Gene Trap resource. Examination of transcriptional patterns in this model revealed general transcriptional up-regulation (Fig.\u00a0a, b). We hypothesized that although human PPGL tumors and mouse SDH-loss iMEFs are obviously very different, fundamental similarities might point to drivers of PPGL tumorigenesis. We therefore examined the overlap between the inferred SDHC-loss iMEF MRs and MRs inferred via unbiased MR analyses in human SDH-loss PPGL tumors. This analysis revealed five potentially conserved MRs, a number that is higher than would be predicted by random chance, suggesting that perturbation of these MRs represents a conserved biological response to SDH loss (Fig.\u00a0c). We then examined the patterns of differential gene regulation for the inferred regulons of ZFP423 and SOX11, assessing the degree to which expression of these gene subsets is sufficient to drive unbiased hierarchical clustering of iMEF experimental (SDHC-loss) and hemizygous control specimens. Our MR discovery analysis of human PPGL tumors inferred EPAS1/HIF2\u03b1 as a MR, lending strong support to the pseudohypoxia hypothesis of prolyl hydroxylase dioxygenase poisoning by accumulated succinate. Nonetheless, a key conclusion from our analysis is that perhaps only 5% of differentially-expressed genes are controlled by HIF2\u03b1 in these pseudohypoxic tumors, suggesting that other mechanisms must be invoked to explain the remaining 95% of observed transcriptional dysregulation.
Categories of other potential mechanisms include dysregulation of other TFs, classical epigenomic derangement via dioxygenase poisoning of TET DNA demethylases and Jumonji domain-containing histone demethylases, as well as dysregulation of cellular acylation patterns, as recently described. The classical role of pseudohypoxic activation of gene expression via HIF factors in SDH-loss and VHL-loss PPGL tumors had not previously been studied via unbiased methods such as TNI/MRA. Our analysis of MRs consistently perturbed in both human SDH-loss PPGL tumors and in SDHC-loss MEFs suggests that at least a portion of the MR response to SDH loss is conserved among vertebrates. Functional analysis of ZFP423, the mouse ortholog of human ZNF423, suggested that SDHC-loss MEFs might have a differential cellular response to retinoic acid. Our experiments show that this is indeed the case. We observe that control immortalized MEFs respond to retinoic acid by cell death, most likely through induction of apoptosis. Remarkably, this induced cell death upon retinoic acid exposure is not observed in SDHC-loss cells. We additionally show that malonate-mediated inhibition of SDH activity in the SH-SY5Y neuroblastoma cell line inhibits the normal process of retinoic acid-induced neuronal differentiation. These results suggest that SDH loss or inhibition may fundamentally modulate the cellular response to retinoic acid, one of the most potent known endogenous morphogens. The provocative extension of this observation is speculation that SDH-loss PPGL tumors may originate in development through a failed apoptotic response to retinoic acid concentration gradients. This concept of a putative developmental origin for SDH-loss PPGL through failure to properly interpret apoptotic cues has been previously raised in the context of neuronal growth factor signalling.
Our analyses generally confirm the accepted mechanism of pseudohypoxic activation of EPAS1/HIF2\u03b1 in SDH-loss and VHL-loss contexts, but suggest that this effect accounts for only ~\u20095% of differential gene expression, leaving the vast majority of tumorigenic transcription aberrations to be explained. Many of the other nominated MRs currently lack clear mechanistic connections to their primary gene defects, but show characteristic and consistent patterns of perturbation across tumor specimens. Future investigation may help to elucidate the relevant mechanistic details. We present here unbiased analyses nominating specific MR TFs that collectively explain the observed patterns of transcriptomic perturbation in SDH-loss, VHL-loss, and MAML3 translocation-positive PPGL tumors. We also present analysis of MRs inferred in an SDHC-loss MEF tissue culture model that suggests that a subset of the SDH-loss MR response is conserved between vertebrate species and across cell types. Subsequent analysis of one of the conserved MRs, ZFP423, suggests that altered response to retinoic acid may be a distinguishing feature of SDHC-loss MEFs, a hypothesis that we tested and validated experimentally. We report that SDHC-loss MEFs display an attenuated cell death response to retinoic acid and that SDH-inhibited SH-SY5Y neuroblastoma cells display attenuated retinoic acid-induced neuronal differentiation. This retinoic acid resistance suggests a possible developmental path to SDH-loss PPGL tumorigenesis. Additional file 1: Dataset S1. SDHB-loss PPGL differential gene expression signature. (CSV 14 kb) Additional file 2: Dataset S2. VHL-loss PPGL differential gene expression signature. (CSV 11 kb) Additional file 3: Dataset S3. MAML3 translocation-positive PPGL differential gene expression signature. (CSV 19 kb) Additional file 4: Dataset S4. SDHB-loss PPGL master regulators (dpi\u2009=\u20090.00). (CSV 18 kb) Additional file 5: Dataset S5. VHL-loss PPGL master regulators (dpi\u2009=\u20090.00).
(CSV 18 kb) Additional file 6: Dataset S6. MAML3 translocation-positive PPGL master regulators (dpi\u2009=\u20090.00). (CSV 19 kb) Additional file 7: Dataset S7. SDHB-loss PPGL master regulators (dpi\u2009=\u20090.05). (CSV 22 kb) Additional file 8: Dataset S8. VHL-loss PPGL master regulators (dpi\u2009=\u20090.05). (CSV 22 kb) Additional file 9: Dataset S9. MAML3 translocation-positive PPGL master regulators (dpi\u2009=\u20090.05). (CSV 21 kb) Additional file 10: Dataset S10. PPGL regulon overlaps. (CSV 3893 kb) Additional file 11: Dataset S11. SDHC-loss iMEF master regulators. (CSV 19 kb) Additional file 12: Figure S1. Statistical analysis of differential expression gene set overlaps for SDH-loss and VHL-loss PPGL tumor molecular subtypes. Statistical simulations assessing the probability of the observed differential expression gene set overlaps. Gray histogram bars show the distribution of overlaps for randomly-selected gene sets of the same size as those analyzed. Green dots and lines show Poisson fit to the simulated data and estimated p-value for the observed overlap relative to the simulated overlap distribution. (PDF 156 kb) Additional file 13: Figure S2. Statistical analysis of differential expression gene set overlaps for MAML3 translocation-positive PPGL tumors and MAML3 translocation-positive neuroblastoma tumors. Statistical simulations assessing the probability of the observed differential expression gene set overlaps. Gray histogram bars show the distribution of overlaps for randomly-selected gene sets of the same size as those analyzed. Green dots and lines show Poisson fit to the simulated data and estimated p-value for the observed overlap relative to the simulated overlap distribution. (PDF 73 kb) Additional file 14: Figure S3. PPGL transcriptional network validation by analysis of known TF-binding DNA motifs in inferred MR regulons and analysis of transcriptional subnetworks.
A,C,E) Red traces show distribution of nearest pattern match for known TF-binding DNA motifs in inferred TF regulons. Gray traces show average nearest pattern match for scrambled versions of the same motif. B,D,F) Statistical analysis of regulon motif pattern searching. Red dot indicates the fraction of nearest pattern matches for the original motif localizing to within 2.5 kbp of the TSS. Gray bars show the distribution of values yielded from the same quantification performed on scrambled versions of the original motif. Empirical p-values were estimated from the random distribution as the expected likelihood of the observed motif fraction within 2.5 kbp of the TSS. G-I) Analysis of transcriptional subnetwork-specific functional term enrichment among inferred target genes. Shown are the top 10 gene ontologies and/or KEGG pathways unique to each subnetwork. Subnetworks refer to those specified in Fig.\u00a0. Additional file 15: Figure S4. Statistical analysis of master regulators inferred in SDH-loss and VHL-loss PPGL tumors. Statistical simulations assessing the probability of the observed differential expression gene set overlaps. Gray histogram bars show the distribution of overlaps for randomly-selected gene sets of the same size as those analyzed. Green dots and lines show Poisson fit to the simulated data and estimated p-value for the observed overlap relative to the simulated overlap distribution. (PDF 85 kb) Additional file 16: Figure S5. t-SNE clustering of PPGL tumors by inferred transcription factor activity profile. A) t-SNE clustering of discovery cohort PPGL tumors by transcription factor activity profile. Colors of the data points correspond to annotations for tumor genotype, malignancy, and location, as indicated. B) t-SNE clustering of COMETE validation cohort PPGL tumors by transcription factor activity profile. Colors of the data points correspond to annotations for tumor genotype, as indicated.
C) Assessment of EPAS1, ZNF423, and SOX11 activities in validation cohort specimens. Clustering pattern corresponds to genotype annotations given in panel B. (PDF 150 kb) Additional file 17: Figure S6. Analysis of transcription factor activity profiles of SDHD-null head and neck tumors. A) Analysis of differential SDH-loss PPGL master regulator activity in SDHD-null tumors of the abdomen and thorax vs. head and neck tumors. X-axis indicates log2(fold change) in inferred transcription factor activity between tumors of the abdomen and thorax relative to head and neck tumors. Y-axis indicates degree of statistical significance for the comparison. The subset of data with adjusted p-value <\u20090.05 are plotted in green and include a text label. B-E) Boxplots showing distribution of activity profiles for selected differentially active SDH-loss MRs. (PDF 174 kb) Additional file 18: Figure S7. Analysis of RBP1 expression in validation cohort PPGL specimens. A-B) t-SNE clustering of COMETE validation cohort PPGL tumors by transcriptional profile. Colors in panel A indicate relative degree of RBP1 expression. Colors in panel B correspond to annotations for tumor genotype, as indicated. (PDF 197 kb)"}
+{"text": "Arabidopsis thaliana plants growing in identical conditions over a 24\u2010h time course. We identified hundreds of genes that exhibit high inter\u2010individual variability and found that many are involved in environmental responses, with different classes of genes variable between the day and night. We also identified factors that might facilitate gene expression variability, such as gene length, the number of transcription factors regulating the genes and the chromatin environment. These results shed new light on the impact of transcriptional variability in gene expression regulation in plants.A fundamental question in biology is how gene expression is regulated to give rise to a phenotype. However, transcriptional variability is rarely considered although it could influence the relationship between genotype and phenotype. It is known in unicellular organisms that gene expression is often noisy rather than uniform, and this has been proposed to be beneficial when environmental conditions are unpredictable. However, little is known about inter\u2010individual transcriptional variability in multicellular organisms. Using transcriptomic approaches, we analysed gene expression variability between individual This tool will help researchers to take into consideration any inter\u2010individual gene expression variability in their genes of interest. Moreover, we show that highly variable genes (HVGs) are enriched for environmentally responsive genes and characterised by a combination of specific genomic and epigenomic features. 
We have revealed both the level and potential mechanism behind gene expression variability between individuals in Arabidopsis, allowing understanding of a previously unexplored aspect of gene regulation during plant development. Using single seedling RNA\u2010seq, we identified hundreds of genes that are highly variable between individuals in Arabidopsis thaliana seedlings at multiple time\u2010points over a full day/night cycle. The average CV2 of the entire time course for each of 10 selected genes was compared between RNA\u2010seq and RT\u2013qPCR measurements. The CV2 of each gene was compared to its average expression level to derive a corrected measure of variability, log2(CV2/trend), which corrected for the observed negative trend between CV2 and expression level, and we used it for further analyses of gene expression variability. Genes with a negative log2(CV2/trend) are less variable than the trend, while genes with a positive log2(CV2/trend) are more variable than the trend. To test whether there were transcriptome\u2010wide trends in the level of variability across the day, we verified that the global trends of the CV2 against the average normalised expression measured for each time\u2010point are in the same range during the time course. Variability for individual genes can be viewed at https://jlgroup.shinyapps.io/AraNoisy/. HVGs were identified from our RNA\u2010seq data set using a previously described method, and a proportion of the HVGs identified for each time\u2010point are also identified as highly variable in another time\u2010point. Since many genes are identified in more than one time\u2010point, we then defined in how many time\u2010points they are identified. We performed this analysis on all the HVGs (1,358), LVGs (5,727) and on a thousand sets of random genes selected for each time\u2010point (same number as HVGs for the time\u2010point). In total, 30% of all the 1,358 HVGs are identified in only one time\u2010point while the others are shared with other time\u2010points, up to all of the 12 time\u2010points for 40 genes.
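The trend correction described above (comparing each gene's CV2 to the CV2 expected at its expression level, then taking log2(CV2/trend)) can be sketched as follows. This is an illustrative re-implementation, not the authors' exact pipeline: it uses a rolling-median trend in place of their fitted variance-mean dependence, and the expression matrix is simulated.

```python
import numpy as np

def corrected_cv2(expr):
    """Illustrative trend-corrected variability measure.

    expr: 2-D array (genes x individuals) of normalised expression.
    Returns log2(CV2 / trend), where the trend is a rolling-median
    fit of CV2 against mean expression (a stand-in for the fitted
    variance-mean dependence used in the study).
    """
    mean = expr.mean(axis=1)
    cv2 = expr.var(axis=1, ddof=1) / np.maximum(mean, 1e-12) ** 2
    # Fit the CV2-vs-mean trend with a rolling median over genes
    # ordered by mean expression.
    order = np.argsort(mean)
    window = max(5, len(mean) // 10)
    trend = np.empty_like(cv2)
    sorted_cv2 = cv2[order]
    for rank, gene in enumerate(order):
        lo = max(0, rank - window // 2)
        hi = min(len(mean), rank + window // 2 + 1)
        trend[gene] = np.median(sorted_cv2[lo:hi])
    return np.log2(cv2 / np.maximum(trend, 1e-12))

rng = np.random.default_rng(0)
expr = rng.gamma(shape=5.0, scale=2.0, size=(200, 14))  # 200 genes, 14 seedlings
score = corrected_cv2(expr)
# Positive scores: more variable than the trend; negative: less variable.
```

Because the trend is a local median, roughly half the genes score above it and half below, which is what makes the corrected measure comparable across expression levels.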
We identified four clusters of variability patterns across the time course for all 1,358 HVGs. We identified enrichment for several processes involved in the response to biotic and abiotic stresses as well as in the response to endogenous and exogenous signals. This is shown by the log2(CV2/trend) of HVGs along the time course and the \u2212log10(FDR) of their enrichment at the different time\u2010points. Most HVGs are still expressed in more than one tissue. These results show that HVGs are enriched for genes involved in the response to environment and stress and are targeted by TF families involved in environmental responses, while LVGs are enriched in DNA, RNA and protein metabolism. Moreover, the clear pattern of enrichment for some functions either during the day or the night further suggests that variability between seedlings across the day and night is functional and might be controlled. Comparing the log2(CV2/trend) profile for each of the 1,358 HVGs, 5,727 LVGs as well as for a thousand sets of random genes of the same number as all HVGs, it seems that profiles in gene expression variability for approximately 20% of HVGs could be potentially explained by expression profiles. We also observed a negative trend between the level of variability and the gene length or number of introns for all genes at each time\u2010point. We observed a very similar level of corrected CV2 for full genes and their fragments. This result suggests differences in the way HVG and LVG expression are regulated, which could possibly be due to different network architectures. One other factor we tested is the binding of transcription factors (TFs) at the promoters of genes. For this, we counted the number of TFs binding to the promoter for all 1,358 HVGs, 5,727 LVGs and the thousand sets of 1,358 random genes using the available DAP\u2010seq data and found a tendency for a higher number of TFs binding the promoter of HVGs.
We first analysed the proportion of genes containing a histone modification among all 1,358 HVGs, 5,727 LVGs and the thousand sets of random genes in comparison with all background genes. We could identify that HVGs are enriched in H3K27me1 and H3K27me3, which are repressive marks, while they are depleted in active marks such as H3K4me2, H3K4me3, H3K36me3 or H2Bub. We also represented the signal along the genes and identified differences in the profiles of the average chromatin signal between HVGs, LVGs and random genes for H3K27me3, H2A.Z, H3K4me3 and H3K23ac. In summary, HVGs and LVGs are characterised by a specific chromatin environment, in terms of the presence/absence of chromatin marks as well as for the profiles of these marks. Our results indicate that chromatin at HVGs tends to be more compacted and refractory to expression than at LVGs and random genes, which might have implications for how expression is regulated in these genes. We have analysed gene expression variability between individual Arabidopsis seedlings at the genome\u2010wide scale throughout a diurnal cycle. To do this, we have analysed 14 seedlings at each of the 12 time\u2010points, generating 168 transcriptomes in total. This resource reveals previously unexplored variability for multiple pathways of interest for plant researchers, as well as providing insights into the modulation of gene expression variability at the genome\u2010wide scale. These data could also be used for other purposes, such as inferring regulatory networks based on gene expression correlation between seedlings, as previously done using microarrays of individual leaves. Seeds were first germinated on solid media at 22\u00b0C in long days for 24\u00a0h. Using a binocular microscope, seeds that were at the same stage of germination were transferred into a new plate containing solid 1/2X MS media. In total, 16 seeds were transferred into each of the 12 individual plates.
Seedlings were grown at 22\u00b0C, 65% humidity, with 12\u00a0h of light (170\u00a0\u03bcmoles) and 12\u00a0h of dark in a Conviron reach-in cabinet. After 7\u00a0days of growth, seedlings were harvested individually into a 96\u2010well plate and flash\u2010frozen in dry ice. Libraries were prepared from 1\u00a0\u03bcg of high\u2010integrity total RNA (RIN >\u00a08) into which 2\u00a0\u03bcl of diluted 1:100 ERCC RNA Spike\u2010In Mix was added. The libraries were sequenced on a NextSeq 500 using paired\u2010end sequencing of 75\u00a0bp in length. Read quality was checked using FastQC (www.bioinformatics.babraham.ac.uk/projects/fastqc/). Potential adaptor contamination and low\u2010quality trailing sequences were removed using Trimmomatic. For each gene, raw reads and TPM (transcripts per million) obtained when analysing 6\u201315 seedlings were compared with the ones obtained with 16 seedlings at the time\u2010point ZT6, as we collected up to 16 seedlings for this time\u2010point. Briefly, genes were first filtered so that (i) their averaged expression level between all 168 seedlings was of 5 TPM or more, (ii) they were at least 150\u00a0bp long, (iii) they had a TPM of 0 in <\u00a05 seedlings for the analysed time\u2010point, and (iv) their averaged expression level was of 5 TPM or more in the analysed time\u2010point. Then, the fitted variance\u2010mean dependence was calculated for each time\u2010point, and a corrected CV2 was derived for each gene. Mean normalised gene expression was used when representing gene expression throughout the time course. It was calculated for each gene by dividing the expression level at a given time\u2010point by the average expression across the entire time course for the same gene. Sixteen 7\u2010day\u2010old Col\u20100 WT Arabidopsis thaliana seedlings were harvested individually and flash\u2010frozen in dry ice every 2\u00a0h over a 24\u2010h period. Total RNA was isolated from 1 ground seedling. RNA concentration was assessed using Qubit RNA HS assay kit.
cDNA synthesis was performed on 700\u00a0ng of DNAse\u2010treated RNA using the Transcriptor First Strand cDNA Synthesis Kit. For RT\u2010qPCR analysis, 0.4\u00a0\u03bcl of cDNA was used as template in a 10\u00a0\u03bcl reaction performed in the LightCycler 480 instrument using LC480 SYBR Green I Master. Gene expression relative to two control genes (SandF and PP2A) was measured. Statistical analyses on the log2(CV2/trend) or on the mean normalised expression level were performed using the statistical programme R. Gene length and number of introns were calculated using the TAIR10 annotation. The lists of genes marked by the analysed chromatin marks were obtained from Roudier and colleagues (Roudier et\u00a0al). ChIP\u2010seq data were downloaded from GSE101220 for H3K27me3 (Jiang & Berger). The data sets and computer code produced in this study are available in the following databases: RNA\u2010seq data: Gene Expression Omnibus GSE115583 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE115583). Graphical web interface: https://jlgroup.shinyapps.io/aranoisy/. Computer codes used to analyse RNA\u2010seq data: https://github.com/scortijo/Scripts_noise_paper_MSB_2018. SC conceived of the project. SC and JCWL designed the project. SC performed RNA\u2010Seq experiments and analysed and interpreted the data. ZA performed the RT\u2013qPCR experiments. The authors declare that they have no conflict of interest."}
+{"text": "Transcription in single cells is an inherently stochastic process as mRNA levels vary greatly between cells, even for genetically identical cells under the same experimental and environmental conditions. We present a stochastic two-state switch model for the population of mRNA molecules in single cells where genes stochastically alternate between a more active ON state and a less active OFF state. We prove that the stationary solution of such a model can be written as a mixture of a Poisson and a Poisson-beta probability distribution. This finding facilitates inference for single cell expression data, observed at a single time point, from flow cytometry experiments such as FACS or fluorescence in situ hybridization (FISH). We hence propose a Bayesian inferential methodology using a pseudo-marginal approach and a recent approximation to integrate over unobserved states associated with measurement error. We provide a general inferential framework which can be widely used to study transcription in single cells from the kind of data arising in flow cytometry experiments. The approach allows us to separate between the intrinsic stochasticity of the molecular dynamics and the measurement noise. The methodology is tested in simulation studies and results are obtained for experimental multiple single cell expression data from FISH flow cytometry experiments. All analyses were implemented in R. Source code and the experimental data are available at https://github.com/SimoneTiberi/Bayesian-inference-on-stochastic-gene-transcription-from-flow-cytometry-data. Supplementary data are available at Bioinformatics online. This study aims at proposing a methodology for investigating transcription, i.e. the process by which mRNA transcripts are synthesized from genes in single cells. This process is fundamentally stochastic. As the mRNA counts cannot be observed directly, they are treated as latent, and we make use of a pseudo-marginal approach. With X representing the mRNA counts, \u0393 refers to the gamma function and 1F1 is the confluent hypergeometric function of the first kind.
We note that the degradation parameter \u03b2 is not identifiable as it appears only in combination with other parameters. In the sequel we consider a reparameterization where the remaining kinetic parameters are scaled with respect to the degradation rate. The theorem shows that the density of X can be written as a mixture of a Poisson and a Poisson-beta density, which can usefully be exploited for inference; here Pois(x) indicates the Poisson rv with mean x and variance x, and Beta represents the beta rv. The normalising constant involving 1F1 needs to be estimated numerically, which is challenging. However, the representation of X in (11) provided by the theorem shows that this computation can be avoided by taking advantage of the latent variable structure to sample X without the need to explicitly compute Pr(X). We assume that the observation for cell i, Yi, is proportional to the actual population of mRNA, Xi, and that the measurement process is perturbed by measurement noise. In our FISH flow cytometry experimental data, observations come from a sample of N cells, with proportionality constant \u03ba and additive Gaussian measurement error, which we assume to be independently and identically distributed (iid) with mean a and variance b. As the mRNA molecule count cannot be observed exactly, Xi is treated as a latent state variable. The marginal likelihood of the observation for the ith cell, given the parameter vector, requires integrating over the unobservable mRNA population in that cell. In practice, we approximate (13) by drawing a finite sample of size S from (11); naively reusing the same sample of size S across all N data points would lead to a biased estimator. Here, we use a recently developed estimator which allows us to employ the same S particles for all observations while preserving unbiasedness.
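The Monte Carlo approximation of the marginal likelihood can be sketched as follows. This is a simplified illustration only: a plain average over S shared particles, with an assumed measurement model y_i = \u03ba\u00b7x_i + \u03b5, \u03b5 ~ N(a, b), and an invented Poisson-beta latent sampler; the unbiasedness-preserving estimator referenced in the text involves additional machinery not reproduced here.

```python
import numpy as np

def marginal_loglik_hat(y, kappa, a, b, latent_sampler, S=1000, rng=None):
    """Monte Carlo estimate of sum_i log p(y_i | theta), integrating out
    the latent mRNA counts.

    Sketch assumptions: measurement model y_i = kappa * x_i + eps with
    eps ~ N(a, b); latent_sampler(S, rng) draws S particles from the
    stationary mRNA distribution. The same S particles are reused for
    every observation (the shared-particle idea from the text).
    """
    rng = np.random.default_rng(rng)
    x = latent_sampler(S, rng)  # shared particles for all N cells
    # Gaussian density of each observation given each particle
    dens = np.exp(-0.5 * (y[:, None] - kappa * x[None, :] - a) ** 2 / b)
    dens /= np.sqrt(2 * np.pi * b)
    return float(np.log(np.maximum(dens.mean(axis=1), 1e-300)).sum())

# Toy latent model: Poisson-beta draws (parameter values are invented)
def poisson_beta(S, rng, mu=200.0, k_on=0.5, k_off=3.0):
    p = rng.beta(k_on, k_off, size=S)
    return rng.poisson(mu * p)

rng = np.random.default_rng(1)
x_true = poisson_beta(500, rng)
y = 1.5 * x_true + rng.normal(0.0, 2.0, size=500)  # kappa = 1.5, noise var b = 4
ll = marginal_loglik_hat(y, kappa=1.5, a=0.0, b=4.0,
                         latent_sampler=poisson_beta, S=2000, rng=2)
```

Vectorising the N-by-S density matrix keeps the estimator cheap, and drawing the particles once per likelihood evaluation is exactly what makes the pseudo-marginal scheme practical inside an MCMC loop.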
The method is illustrated in detail in the Supplementary Material. In order to approximate the densities of all observations, we draw the latent variables Pi and Xi. We note that our approach explicitly allows for two sources of noise, namely the intrinsic stochasticity due to the biological noise, inherent in the molecular processes associated with transcription and degradation, and the measurement noise, which is not part of the molecular dynamics. The approach outlined below does not rely on the Gaussianity assumption of the measurement noise, and can be extended in a straightforward way to other distributional specifications for the measurement error. An alternative to the pseudo-marginal approach is to explicitly perform a data augmentation procedure to sample the latent states together with the other parameters of the model. We implemented and tested both methods on simulated data with the conclusion that the former resulted in improved mixing and convergence of the posterior chains, while the data augmentation procedure, with two layers per cell, led to a highly correlated multidimensional posterior space, which was much more challenging to explore. As will be shown in Section 4.1, the experimental data available was collected in K biological replicates, each containing a multitude of single cell observations, with Nk observations available for the kth replicate; here K\u2009=\u20094. The hierarchical measurement equation relates the observations to the latent mRNA populations. Define the hierarchical parameter vector for the kth replicate accordingly. While in a pooled model each replicate k is described by exactly the same value of the parameter vector, in a hierarchical model it is a random sample from a joint distribution for each jth parameter across the replicates, where q is the number of parameters in the vector (q\u2009=\u20097 in our case).
The graphical model for the hierarchical system used is shown in the figure. Bayesian hierarchical modeling specifies a prior for each \u03bcj|\u03c4. We develop a Metropolis\u2013within-Gibbs algorithm; the first 10^5 iterations were discarded as burn-in. We computed the highest posterior density (HPD) credible intervals (CIs) via the HPDinterval function in R. We computed the coefficient of variation, i.e. the ratio of the standard deviation to the mean, of the hierarchical parameters across replicates, to study how parameters vary between biological replicates. The hierarchical mean of P, \u03bcP, allows us to compare the overall time the gene spends in the ON state between conditions. It appears that, for an increasing dose of tetracycline, although both switches are accelerated, there is some considerable variation between replicates and no discernible difference in the time the gene spends in the more active state. We also find that the gene spends between approximately 3 and 14% of the time in the ON state, while most of the time it is OFF. For each replicate and experimental condition, we simulated the observed data. The latent population of mRNA in single cells is estimated to occupy a range between a few tens to a few hundreds of molecules, while the ratio between variance and mean is inferred to be orders of magnitude bigger than 1, which highlights the large degree of overdispersion observed for gene expression in single cells. We analysed the env gene under the control of a tetracycline inducible promoter. We find strong evidence that transcription mostly happens in short and intense bursts, where the gene spends most of the time in the less active state, and only switches for a short time into a more active state, the latter being characterized by a much larger transcription rate.
For increasing level of stimulus, the transcription rates are mostly unchanged, while there is a significantly increased speed of switching in both states. We propose a stochastic gene expression model that allows for transcriptional switching between two states, where transcription in the so-called OFF state is less active than in the ON state, but may occur at a positive rate. Approaches exist to fit this system, and indeed more complex types of switch models, to single cell time series imaging data on gene expression. Further analyses are currently being performed to compare more experimental conditions and to investigate how transcription varies during the life cycle of a cell."}
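The mixture result above can be exploited directly for simulation: drawing through the latent beta variable produces samples from the stationary law without ever evaluating 1F1. A minimal sketch follows; the parameter values (w, lam, mu, k_on, k_off) are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

def sample_mixture(S, rng, w=0.3, lam=5.0, mu=300.0, k_on=0.4, k_off=4.0):
    """Draw S values from a Poisson / Poisson-beta mixture.

    With probability w the draw comes from Pois(lam) (the less active
    OFF-state component); otherwise p ~ Beta(k_on, k_off) and the draw
    is Pois(mu * p), i.e. the Poisson-beta component. Sampling through
    the latent p avoids any evaluation of the confluent hypergeometric
    function 1F1. All parameter values here are invented.
    """
    from_off = rng.random(S) < w
    p = rng.beta(k_on, k_off, size=S)
    x = rng.poisson(mu * p)
    x[from_off] = rng.poisson(lam, size=from_off.sum())
    return x

rng = np.random.default_rng(0)
x = sample_mixture(100_000, rng)
fano = x.var() / x.mean()  # variance/mean ratio, far above 1 under bursting
```

The variance-to-mean ratio of such samples is much larger than 1, reproducing the overdispersion that the text reports for single-cell gene expression.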
+{"text": "Thus, experimental and computational biologists have made great efforts to construct TF gene networks for regulatory interactions between TFs and their target genes. Now, an important research question is how to utilize TF networks to investigate the response of a plant to stress at the transcription control level using time-series transcriptome data. In this article, we present a new computational method, PropaNet, to investigate dynamics of TF networks from time-series transcriptome data using two state-of-the-art network analysis techniques, influence maximization and network propagation. PropaNet uses the influence maximization technique to produce a ranked list of TFs, in the order of TFs that explain differentially expressed genes (DEGs) better at each time point. Then, a network propagation technique is used to select a group of TFs that explains DEGs best as a whole. For the analysis of Arabidopsis time series datasets from AtGenExpress, we used PlantRegMap as a template TF network and performed PropaNet analysis to investigate transcriptional dynamics of Arabidopsis under cold and heat stress. The time-varying TF networks showed that Arabidopsis responded to cold and heat stress quite differently. For cold stress, bHLH and bZIP type TFs were the first responding TFs, and the cold signal influenced histone variants and various genes involved in cell architecture, osmosis and restructuring of cells. Under heat stress, in contrast, plants up-regulated genes related to accelerating differentiation and starting re-differentiation. In terms of energy metabolism, plants under heat stress showed elevated metabolic processes, resulting in an exhausted status. We believe that PropaNet will be useful for the construction of condition-specific time-varying TF networks for time-series data analysis in response to stress.
PropaNet is available online. A transcription factor (TF) is a protein that regulates expression levels of a target gene (TG) by binding to a specific DNA sequence on the promoter regions of target genes. A TF network refers to a graph-based representation that contains regulatory relationships between TFs and their target genes. Constructing a TF network is a very difficult task since the number of possible TF-TG relationships to be considered for the construction of a TF network is over 30 million; there are 20,418, 22,619 and 27,665 protein-coding genes in human, mouse, and Arabidopsis species, respectively, according to ENSEMBL. DEGs are determined by comparing two conditions, control vs. under stress, at each time point. By constructing TF networks that include DEGs and TFs, we can gain insight into how the responses of the TFs differ in gene expression levels under each stress, i.e., DEGs. There are two major issues with this approach. Contributions of TFs to DEGs differ for different TFs. It is not enough to consider only TFs that show significant expression changes during stress. It is known that TFs that show little change in expression levels function in response to stress as much as those that show large change in expression levels. In other words, the amount of change in expression level of a TF is not necessarily proportional to the significance of its role in the response to the stress. PropaNet therefore prioritizes TFs that propagate more influence on more significant DEGs (estimated by network propagation). Performance of PropaNet was compared with simple correlation-based methods and other methods for time series analysis using clustering and network information. PropaNet showed better performance in finding cold- and heat-specific transcription factors and their target genes by incorporating network information and its novel strategy for selecting major regulatory TFs. The PropaNet analysis takes three types of input data: time-series gene expression data EX that are measured at multiple time-points, a template TF network G and a set of target genes TGset. PropaNet detects major TFs that particularly target the TGset defined by users.
TGset can range from a small set of pathway genes to whole DEGs of the gene expression data. The goal of the PropaNet analysis is to elucidate time-varying networks of major TFs and their target genes at each time point by the network-based analysis on the template TF network. Terminologies used in this paper are defined as follows. Definition 3.1. Let EX, G and TGset be time-series gene expression data, TF network and a set of targeted genes, respectively. EX is a set of gene expression values ei,j,k of gene i measured at time point j = 0, \u2026, T from replicate k = 1, \u2026, K. A set of differentially expressed genes for time point j = 1, \u2026, T compared to the initial time point j = 0 is defined as DEGsetj. From the template network G, a time-specific network Gj is generated using a gene set Vj = (TGset \u2229 DEGsetj) \u222a TFset. A time-specific network Gj is defined as Gj = (Vj, Ej) with nodes Vj (TFs and TGs) and edges Ej (TF-TF and TF-TG pairs). For each node and edge, a weight of a node p : V \u21a6 \u211c (a map of node to weight) and a weight of an edge w : E \u21a6 \u211c (a map of edge to weight) exist. Differential expression levels di,j and correlation coefficients are used as p and w, respectively. Dj is a set of differential expression levels at time j, the element of which di,j is the differential expression level of a gene i at the time point j = 1, \u2026, T with respect to the initial time point j = 0.
That is, di,j is calculated by comparing the two sets of expression values {ei,j,1, \u2026, ei,j,k} and {ei,0,1, \u2026, ei,0,k} with an existing DEG detection algorithm such as limma at time point j, where a DEG is defined as a gene with p-value < 0.05. MTFsetj is a minimal set of major TFs that explains the change of expression levels of DEGsetj at time point j. MTFnetj is a time-varying TF network at time point j that shows the regulatory system explaining how MTFsetj controls DEGsetj. PropaNet outputs a set of major regulatory TFs MTFsetj and their time-varying network including target genes MTFnetj for each time point j = 1, \u2026, T. PropaNet operates in three steps as below and the process is visualized in the figure. Step 1. Instantiation of time-specific TF networks from a template network. A time-specific network consists of DEGs and TFs at each time point. Step 2. Time-specific measurement of the influence of each TF by influence maximization. TFs in the network are ranked along with their influence on DEGs via the network topology (including non-direct targets). Step 3. Identification of time-specific major regulatory TFs by network propagation. The TF set is constructed by adding a TF at a time, following down the ranked list of TFs. The first step of PropaNet is to construct a time-specific network Gj for each time point j = 1, \u2026, T by mapping TFs and an intersected gene set of user-defined target genes and DEGs to a template network (Vj = (TGset \u2229 DEGsetj) \u222a TFset for time j). The inputs of PropaNet (a template TF network and DEG profiles) are designed to be user-determined. DEGs are intersected with the user-defined target genes, TGset \u2229 DEGsetj, at each time point j.
Influence maximization (IM) is an algorithmic technique used in network influence analysis to select a set of seed nodes that maximizes the spread of influence (the expected number of influenced nodes) in a given network. IM is used to measure the influence of each TF (\u2208 TFset) on the targeted genes (\u2208 Vj\\TFset) in the time-specific TF network Gj, taking (Gj, TFset, Dj) as input. We provide a more detailed explanation of the labeled influence maximization algorithm of PropaNet (Algorithm 1). The IM algorithm first initializes the weight of each node, DE(s), as the absolute differential expression level ds,j for all s \u2208 V and the influence of each TF, IL(t), as 0 for t \u2208 TFset. Then, it generates a sub-graph G\u2032 from G by selecting edges with a probability of 1 \u2212 p for each edge (line 4), where p is the weight of the edge in the original graph G. Then, the influence IL(t) increases by the total DE weight of the nodes reachable from t in G\u2032. After repeating the above procedure for Round times, the algorithm produces the final output IL of all TFs at the time point j. Network propagation is a graph-based analytic paradigm that propagates information of a node to nearby nodes through the edges at each iteration step. This process is repeated for a fixed number of steps or until convergence. Since the value of a node influences not only the values of its direct network neighbors but also those of its distant neighbors, network propagation is known to perform better than direct neighbor search methods and shortest path search methods for the problem of prioritizing genes that are associated with seed genes. We can think of p0(v) as the amount of information at a node v at the beginning of iteration 0.
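The edge-sampling rounds described for Algorithm 1 can be sketched as follows. This is a paraphrase, not the authors' code: the reachability-based credit assignment (each TF is credited with the DE weight of nodes it can reach in the sampled subgraph) is an assumption about what the algorithm accumulates, and the toy network and weights are invented.

```python
import random
from collections import defaultdict

def labeled_influence(edges, tf_set, de, rounds=200, seed=0):
    """Sketch of a labeled influence maximization round (Algorithm 1 paraphrased).

    edges: dict (u, v) -> edge weight p in [0, 1].
    tf_set: iterable of TF nodes; de: dict node -> |differential expression|.
    Each round keeps an edge with probability 1 - p (as stated in the text)
    to form a subgraph G', then credits each TF with the DE weight of every
    node reachable from it in G'.
    """
    rng = random.Random(seed)
    il = {t: 0.0 for t in tf_set}
    for _ in range(rounds):
        adj = defaultdict(list)
        for (u, v), p in edges.items():
            if rng.random() < 1 - p:
                adj[u].append(v)
        for t in il:  # depth-first reachability from each TF in G'
            seen, stack = {t}, [t]
            while stack:
                node = stack.pop()
                for nxt in adj[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            il[t] += sum(de.get(n, 0.0) for n in seen if n != t)
    return {t: s / rounds for t, s in il.items()}

# Invented toy network: two TFs regulating two genes
edges = {("TF1", "g1"): 0.2, ("TF1", "g2"): 0.5, ("TF2", "g2"): 0.9}
de = {"g1": 2.0, "g2": 1.0}
scores = labeled_influence(edges, {"TF1", "TF2"}, de)
```

Averaging over many sampled subgraphs is what turns the random edge selection into a stable per-TF influence score; here TF1, with lighter (more likely kept) edges to high-DE genes, scores above TF2.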
At each iteration k, the amount of information at each node v is influenced by the sum of the information at the neighboring (adjacent) nodes N(v) at iteration k \u2212 1, in proportion to the weights on the corresponding edges, according to the following equation: pk(v) = \u03a3u\u2208N(v) w(u, v) pk\u22121(u), where w is the weight of the edges in the input network. Network propagation is mathematically equivalent to random walks on a graph. The propagation process described in Equation 1 can be written in matrix notation as follows: pk = W pk\u22121, where W is a normalized version of the adjacency matrix of the input network. Another version of the propagation process is the random walk with restart (RWR). RWR performs the random walk and restarts at a rate of \u03b1: pk = (1 \u2212 \u03b1) W pk\u22121 + \u03b1 p0, where the parameter \u03b1 is thought of as the trade-off between prior information (restart) and network smoothing. After k iterations, the values in the resulting vector pk(v) give us a ranking of each node v that diffused from the initial value of seed nodes. The similarity between the propagated ranking and the DEG ranking can be easily determined by computing Spearman's rank correlation coefficient (SCC) between the two ranked lists. More formally, the simulation is evaluated by the comparison of the ranking between the differential expression of observation DE(v) and the inferred expression of network propagation IP(v). This simulation is independently processed for each time point j. It first initializes the information of nodes, IP(v), as 0 for v \u2208 V. At the time point j, we now have a list of TFs and their influence scores, IL(t), measured in the previous step. It then initializes the most influential TF as the seed set S and conducts network propagation on the TF network G to update IP(v). Then, it measures the similarity of ranking, SCC, between IP(v) and DE(v). It adds the next most influential TF into the seed set S, performs network propagation, computes SCC, and decides whether to accept the TF; the TF is accepted if SCC increases and declined otherwise.
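The RWR iteration pk = (1 \u2212 \u03b1)W pk\u22121 + \u03b1 p0 can be implemented in a few lines. This sketch assumes a column-normalised W, equal initial mass on the seed nodes, and the restart-rate convention used in the text; the small path graph is invented for illustration.

```python
import numpy as np

def rwr(adj, seeds, alpha=0.3, tol=1e-10, max_iter=1000):
    """Random walk with restart on a weighted adjacency matrix.

    Iterates p_k = (1 - alpha) * W @ p_{k-1} + alpha * p_0, where W is
    the column-normalised adjacency matrix and p_0 puts equal mass on
    the seed nodes; alpha is the restart rate.
    """
    adj = np.asarray(adj, dtype=float)
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)  # column-stochastic
    p0 = np.zeros(adj.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - alpha) * W @ p + alpha * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Path graph 0-1-2-3, propagating from seed node 0
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
p = rwr(adj, seeds=[0])
# Scores decay with distance from the seed: p[0] > p[1] > p[2] > p[3]
```

Because W is column-stochastic and p0 sums to 1, the propagated vector stays a probability distribution, and the monotone decay with distance from the seed is what makes RWR usable as a gene-prioritisation ranking.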
It continues this process down the list of TFs until the coverage of target genes exceeds half the number of DEGs at the time point. Finally, it produces S as the set of major regulatory TFs. PropaNet simulates the TF-centered regulation process based on the network propagation. Each step of network propagation outputs a ranked list of nodes in the network. The objective function, or the stopping criterion, is to find a ranked list of genes that is most similar to the ranked list of DEGs in terms of their p-values. The experiments for the evaluation of PropaNet were conducted using two time-series gene expression datasets measured under thermal stress, AtGenExpress and E-MTAB-375. AtGenExpress includes non-stress time-series only for the replicate data at the zero time point; accordingly, the 5-min time point data of the stress (cold/heat) time-series were discarded for consistency of time points. The raw data of E-MTAB-375 were downloaded from ArrayExpress. We investigated temperature stress in Arabidopsis. The temperature-related stress has been investigated extensively in scientific, agricultural, and industrial fields because of recent climate and weather extremes driven by global warming. Moreover, climate and weather extremes would become worse as global warming continues; a special report of the Intergovernmental Panel on Climate Change (IPCC) in 2018 predicts with high confidence that global warming is likely to reach 1.5\u00b0C between 2030 and 2052 if it continues to increase at the current rate. Utilizing non-stress control sample data can eliminate background effects and help focus on mechanisms related to treated samples. To investigate the effect of utilizing control sample data, we developed a modified version of PropaNet, \u201cPropaNetC,\u201d to utilize non-stress time-series control sample data for generating time-varying networks. PropaNetC identified DEGs at each time point compared to the initial time point (0 h) for each of control and treated samples separately.
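The greedy seed-set growth of step 3 (add TFs in influence order, keep a TF only if the Spearman correlation between the propagated ranking IP and the DEG ranking DE increases) can be sketched as follows. The propagate function and toy numbers are placeholders, ordinal ranks are used for the Spearman correlation (no tie averaging), and the DEG-coverage stopping rule from the text is omitted for brevity.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of ordinal ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def select_major_tfs(ranked_tfs, de, propagate):
    """Greedy seed-set growth sketch of PropaNet's step 3.

    ranked_tfs: TFs ordered by influence score, most influential first.
    de: vector of |differential expression| per node.
    propagate: function(seed_list) -> propagated score vector IP.
    A TF is kept only if adding it raises the SCC between the
    propagated ranking and the DE ranking.
    """
    selected, best = [], -np.inf
    for tf in ranked_tfs:
        trial = selected + [tf]
        scc = spearman(propagate(trial), de)
        if scc > best:
            selected, best = trial, scc
    return selected, best

# Invented toy: 2 TFs (rows) influencing nodes 0-5; DE per node
infl = np.array([[0, 0, 3.0, 2.0, 0.0, 0.0],
                 [0, 0, 0.0, 0.0, 0.1, 0.1]])
de = np.array([0, 0, 3.1, 1.9, 2.5, 0.2])
propagate = lambda seeds: infl[seeds].sum(axis=0)
tfs, scc = select_major_tfs([0, 1], de, propagate)
```

In this toy run only the first TF survives: adding the second does not improve the rank agreement with DE, so it is declined, mirroring the accept/decline rule in the text.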
PropaNetC then selected seed DEGs at each time point, t, by subtracting the DEGs of control samples from DEGs of stress samples, across the seven and five time points for cold and heat stresses, respectively. DEGs were detected at each of the six time points for cold stress and four time points for heat stress with respect to the zero time point (j = 0). The numbers of DEGs were 41, 23, 524, 2,262, 6,129 and 7,656 for cold stress, and 1,177, 522, 1,915 and 7,424 for heat stress. Then, PropaNet produced six time-point networks for cold stress and four time-point networks for heat stress. Among the literature-supported edges, 7 edges were supported by the literature of cold-specific experiments, such as DREB2A\u2192LTI78. The GO term, \u201ccircadian rhythm,\u201d was enriched only in the results from PropaNet-specific genes (p < 0.0012). We performed PropaNet analysis on two datasets with different time points, AtGenExpress (7 and 5 time points for cold and heat stress) and E-MTAB-375 (22 time points). We then investigated the overlap between the resulting networks of adjacent time points. Genes in the networks of many (densely sampled) time points (E-MTAB-375 dataset) overlapped more between adjacent time points, 69% and 38% on average, than the resulting genes of few (loosely sampled) time points (AtGenExpress dataset), 25% and 20% on average, for cold and heat stress, respectively. The effects of temperature on plants are broad. The characteristics of the effects of temperature on plant growth could be classified by imposed severity, duration, the ramp rates of changes, recovery conditions and the developmental stage of the plant. Usually, the ambient temperature is not a stressful treatment. However, it may depend on the duration of exposure. The physiological consequences of sudden temperature treatment have been extensively studied.
However, most of the experimental designs have focused on experimentally available conditions and tissues such as leaves, roots, and fruits. Scrutinizing the overall relationships among genes reveals many unrevealed contexts of signal transfer and hierarchies among transcription factors. In samples curated in the AtGenExpress dataset, the detected genes match previous reviews of the temperature-sensing mechanisms of plants: AP2/ERF proteins, heat shock protein (HSFA2), HOS10, and ZFHD1. It is remarkable that PIF4, which is known as a thermosensing TF, was detected in our analysis. The expression of PIF4 is known to decrease gradually at night under the light/dark transition, and such TFs act as central TFs of signaling cascades. MYB52 is known to be involved in ABA, drought, salt, and cold responses, and a different set of genes (SIP2 and APX2) was detected for heat treatment compared with cold treatment. SIP2 is presumed to be involved in phloem unloading of raffinose in sink leaves. Raffinose and other members of the raffinose family oligosaccharides are involved in stress tolerance and act as antioxidants. PropaNet is designed to prioritize TFs that target more significant DEGs. Comparative analysis results showed that PropaNet detected known stress-responsive genes more accurately than other time-series analysis methods (see section 4.2). The improved performance of PropaNet can be explained by three characteristics that distinguish it from other methods. First, PropaNet considers the indirect regulatory power of a TF through consideration of multiple steps of transcription regulation (by influence maximization), while existing tools consider only direct targets. Second, PropaNet takes into account the regulatory power of multiple TFs simultaneously (by network propagation). Third, PropaNet uses \u201cdifferential expression\u201d values on time-varying networks (see section 4.3). 
Therefore, it is recommended to use PropaNetC to eliminate the effects of background biological mechanisms when both treated case and non-treated control samples are available. We conjecture that the consideration of multiple steps of regulation of a TF and the simultaneous regulation by multiple TFs are the main reasons why PropaNet performed better than existing tools in terms of F1 scores. On the other hand, there are a few limitations of PropaNet that have to be considered before analysis. PropaNet assumes the reliability of the template TF network given by the user as input. In our experiment, we used PlantRegMap as the template network, which was generated by identifying TF binding sites using TF ChIP-seq, searching for DNA sequence motifs in the promoter regions of target genes, and then appending TF-target interactions found in the literature. Thus, it may include false TF-target interactions: (1) the antibodies used in TF ChIP experiments are known to bind untargeted but structurally similar TFs with a range of affinities; (2) the quality of ChIP-seq data can vary depending on the experimenters and the year of data generation; (3) TF ChIP-seq experiments are conducted under a particular condition, so TF interactions may change under different conditions. Use of extended versions of PlantRegMap or of protein-protein interaction networks may provide more detailed information on the plant response to cold and heat stress, which remains to be studied in the future. In addition, PropaNet detects TFs whose differential expression is positively correlated with DEGs, and thus cannot capture inhibitory relationships between TFs and target genes. This is due to a limitation of the random walk process, which might not converge with negative weights. 
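The network-propagation step PropaNet is described as using can be illustrated with a minimal random-walk-with-restart sketch. With non-negative, column-normalized edge weights the iteration converges; allowing negative (inhibitory) weights voids that guarantee, which is the convergence limitation noted above. This is a generic sketch, not PropaNet's implementation; `alpha` and the normalization convention are assumptions.

```python
import numpy as np

def propagate(W, p0, alpha=0.7, tol=1e-8, max_iter=1000):
    """Random walk with restart.
    W: (n, n) non-negative, column-normalized adjacency matrix.
    p0: restart vector (e.g. seed DEG scores).
    alpha: probability of continuing the walk at each step."""
    p = p0.copy()
    for _ in range(max_iter):
        p_next = alpha * W @ p + (1 - alpha) * p0
        if np.abs(p_next - p).sum() < tol:  # converged
            return p_next
        p = p_next
    return p
```

Because each iteration is a contraction (spectral radius of `alpha * W` is below 1 for non-negative column-stochastic `W`), the fixed point exists and mass is conserved when `p0` sums to 1.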
Thus, incorporating negative weights to detect inhibitory TFs is future work for PropaNet as well. All datasets for this study are included in the manuscript and the supplementary files. SK and WJ designed the project. SK, KJ, and HA directed the development of the algorithm. SK and KJ developed a modified version of the influence maximization algorithm. HA collected and processed the data. KJ, DJ, MP, and HA implemented the program of the algorithm and conducted the network construction experiments. WJ and JH biologically interpreted the network analysis results. HA, KJ, SK, and WJ wrote the paper. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "To model spatial changes of chromatin mark peaks over time we develop and apply ChromTime, a computational method that predicts peaks to be either expanding, contracting, or holding steady between time points. Predicted expanding and contracting peaks can mark regulatory regions associated with transcription factor binding and gene expression changes. Spatial dynamics of peaks provide information about gene expression changes beyond localized signal density changes. ChromTime detects asymmetric expansions and contractions, which for some marks associate with the direction of transcription. ChromTime facilitates the analysis of time course chromatin data in a range of biological systems.The online version of this article (10.1186/s13059-018-1485-2) contains supplementary material, which is available to authorized users. Genome-wide mapping of histone modifications (HMs) and related chromatin marks using chromatin immunoprecipitation coupled with high-throughput sequencing (ChIP-seq), and of DNA accessibility through assays for DNase I hypersensitivity (DNase-seq) or transposase-accessible chromatin (ATAC-seq), has emerged as a powerful approach to annotate genomes and study cell states. While many mapping efforts have largely focused on single or unrelated cell and tissue types, a growing number of studies have profiled chromatin marks over time courses. In the context of time course chromatin data, only a few methods have been proposed that consider temporal dependencies between samples; one such method is TreeHMM. One important limitation of methods for pairwise comparison or time course modeling of chromatin data is that they do not directly consider or model spatial changes in the genomic territory occupied by chromatin marks over time. Spatial properties of genomic peaks continuously marked by HMs have gained increasing attention as a potentially important characteristic of chromatin marks. 
For example, long peaks of H3K27ac have been associated with active cell type-specific locus control regions termed super-enhancers or stretch enhancers in a number of cell types. In this work, we present ChromTime, a novel computational method for detection of expanding, contracting, and steady peaks, which can detect patterns of changes in the genomic territory occupied by chromatin mark peaks from time course sequencing data (Fig.\u00a0). The ChromTime software is freely available (https://github.com/ernstlab/ChromTime) and is designed for systematic detection of expansions, contractions, and steady peaks from time course chromatin data of a single chromatin mark. All model parameters are learned jointly from the whole time course. As a result, ChromTime can adapt to different boundary movements, dynamics frequencies, and noise levels across experiments and biological systems. The estimated parameters are used to predict, for each block, the most likely positions of the peak boundaries and the corresponding boundary dynamics that generated the signal within the block. The final output contains predicted peak boundaries annotated and colored by their assigned dynamics, which can be used for downstream analysis with existing tools and visualized in genome browsers. We applied ChromTime to chromatin mark data from T-cell development in mouse. We next investigated whether ChromTime\u2019s approach of reasoning jointly about the whole time course increases power to detect associations with gene expression compared to considering boundary differences of peaks at consecutive time points called in isolation. Specifically, we analyzed gene expression changes of genes with TSSs overlapping ChromTime peaks in relation to posterior probabilities for expansions and contractions, compared to boundary differences of peaks called with ChromTime from data from individual time points in isolation. 
We investigated this in the context of H3K4me2 peaks in mouse T-cell development. We next investigated whether there is additional information in ChromTime predictions with respect to gene expression changes beyond what can be captured by pairwise signal density changes or by differential peak calls. For this analysis, we focused on H3K4me2 in mouse T-cell development. Previous studies have shown that the locations of different chromatin marks can be correlated. ChromTime can predict unidirectional expansions and contractions, which enables analysis of the directionality of spatial dynamics of peaks, an aspect of chromatin regulation that has not previously been systematically explored. To investigate this, we applied ChromTime to data from 13 previously published studies covering a variety of developmental, differentiation, and reprogramming processes (Table ). In this work, we presented ChromTime, a novel computational method for systematic detection of expanding, contracting, and steady peaks of chromatin marks from time course high-throughput sequencing data. ChromTime employs a probabilistic graphical model that directly models changes in the genomic territory occupied by single chromatin marks over time. This approach allowed us to directly encode our modeling assumptions about dependencies between variables in an interpretable and extendable framework. We applied ChromTime on ChIP-seq data for broad and narrow HMs and for Pol2, and on ATAC-seq and DNase-seq data from a variety of developmental, differentiation, and reprogramming courses. Our results show that the method can identify sets of expanding and contracting peaks that are biologically relevant to the corresponding systems. 
In particular, expansions and contractions associate with up- and down-regulation of gene expression and differential TF binding, supporting the biological relevance of ChromTime predictions.ChromTime gains power by both reasoning jointly about all time points in a time course and by explicitly modeling the peak boundary movements. Supporting this, in our analyses we observed that territorial changes identified by ChromTime had better agreement with gene expression changes compared to considering directly the boundary change of peaks called on data from individual time points in isolation. Additionally, we also observed for a range of cases that expanding and contracting peaks associated, on average, with greater change in gene expression compared to peaks with steady boundaries even after controlling for signal density changes. Some of the power that ChromTime gains from considering spatial information might be explained by its ability to differentiate territorial expansions or contractions, which can reflect changes in the number of TF binding sites in close vicinity, from changes in signal density within steady peak boundaries. Changes in signal density without territorial expansions or contractions might reflect a change in the proportion of cells with the chromatin mark without large changes in activity in any one cell. Additional power can come from the temporal and spatial information that allows the model to effectively smooth over noise in the data, thus enabling more biologically relevant inferences.ChromTime enables novel analysis of directionality of spatial epigenetic dynamics. In this context, we found that asymmetric unidirectional expansions and contractions for several marks correlate strongly with direction of transcription in promoter proximal regions, which suggests that spatial dynamics at such locations may be related to actions of the transcriptional machinery. 
One possible explanation for the observed correlation between the direction of spatial dynamics of at least some HMs and transcription is provided in part by previous studies showing that the Pol2 elongation machinery can recruit H3K4-methyltransferases, such as members of the SET and MLL families. The ChromTime software is also relatively efficient in terms of runtime, particularly when using its option to parallelize all computations during the parameter learning and prediction phases over multiple CPU cores. In our tests, processing ChIP-seq data for the H3K4me2 mark and control data from five time points in mouse T-cell development took 3\u00a0h. We applied ChromTime to a range of data types but found no single setting of the method options to be preferable in all cases (\u201cMethods\u201d). We thus created three modes with different default options: punctate mode used for ATAC-seq and DNase-seq, narrow mode used for ChIP-seq of narrow HMs, and broad mode used for ChIP-seq of broad HMs and Pol2. In principle, ChromTime can also be applied to ChIP-seq data of sequence-specific TFs in punctate mode. However, for these data, where binding can often be associated with a single point source such as individual instances of DNA sequence regulatory motifs, methods that predict the single point source across time points and the binding intensity associated with the source at each time point may be a more natural way to model the data. For a time course with T time points, the number of observed combinations of dynamics can scale exponentially with T. This exponential growth can complicate downstream analyses that directly consider each combination of dynamics, as there will be 3^(T-1) possible sequences of dynamics at each side of a peak. 
Extensions of the ChromTime model could represent the large number of combinations as instances of a smaller number of more distinct dynamic patterns. Another limitation of the ChromTime method concerns scaling to long time courses, although the runtime of ChromTime still scales linearly with the number of time points. The increasing availability of time course chromatin data provides an opportunity to understand chromatin dynamics in many biological systems. To facilitate reaching this goal we developed ChromTime, which systematically detects expanding, contracting, and steady peaks, allowing extraction of additional information from these data. ChromTime gains power by both reasoning about data from all time points in the time course and by explicitly modeling movements of peak boundaries. We showed that ChromTime predictions associate with relevant genomic features such as changes in gene expression and TF binding. We demonstrated that territorial changes of peaks can contain additional information beyond signal density changes with respect to gene expression of proximal genes. ChromTime allows for novel analysis of directionality of spatial dynamics of chromatin marks. In this context, we showed for multiple chromatin marks that the direction of predicted asymmetric expansions and contractions of peaks strongly associates with the direction of transcription in proximity of TSSs. ChromTime is generally applicable to modeling time courses of chromatin marks and thus should be a useful tool for gaining insights into the dynamics of epigenetic gene regulation in a range of biological systems. ChromTime takes as input a set of files in BED format with genomic coordinates of aligned sequencing reads from experiments for a single chromatin mark from a high-throughput sequencing experiment such as ChIP-seq, ATAC-seq, or DNase-seq over a time course and, optionally, from a set of control experiments. ChromTime consists of two stages (Fig.\u00a0)
: (1) detecting genomic intervals (blocks) potentially containing regions of signal enrichment (peaks); and (2) learning a probabilistic mixture model for boundary dynamics of peaks within blocks throughout the time course and computing the most likely spatial dynamics and peak boundaries for each block across the whole time course. The aim of the first stage is to determine approximately the genomic coordinates of regions with potential peaks of signal enrichment at any time point in the time course. The expected read count for position p and time point t, \u03bbt,p, in the Poisson test is computed conservatively as the maximum of several background estimates. Significant intervals within a predefined number of non-significant bins of each other, MAX_GAP (3 bins by default), are joined together. This joining strategy has previously been implemented by other peak callers for single datasets, such as SICER. Let Di,s,t denote the dynamic between time points t and t\u2009+\u20091 on boundary side s, where s is one of L (left side) or R (right side). Between any two time points the ChromTime model allows one of three possible dynamics at both the left and the right end boundaries of a peak: STEADY, EXPAND, or CONTRACT. To capture the change of boundary positions between consecutive time points t and t\u2009+\u20091 we define the quantities Ji,L,t\u2009=\u2009Bi,L,t\u2009\u2212\u2009Bi,L,t\u2009+\u20091 and Ji,R,t\u2009=\u2009Bi,R,t\u2009+\u20091\u2009\u2212\u2009Bi,R,t corresponding to the left and right boundaries, respectively. Positive values of Ji,L,t and Ji,R,t indicate the number of bases a peak expanded, negative values indicate the number of bases a peak contracted, and a value of 0 indicates that the peak held steady on the left and the right side, respectively. ChromTime models Ji,L,t and Ji,R,t with a different probability distribution for each of the three dynamics. 
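The block-finding stage above (per-bin Poisson enrichment test, then joining of significant intervals within MAX_GAP non-significant bins) can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not ChromTime's code; the p-value cutoff is hypothetical, and `lam` stands in for the conservatively computed expected rate.

```python
from scipy.stats import poisson

def find_blocks(counts, lam, max_gap=3, p_cutoff=1e-4):
    """counts: per-bin read counts; lam: expected (background) rate per bin.
    Returns (start, end) bin-index pairs of joined enriched intervals."""
    # P(X >= c) under Poisson(lam); sf(c - 1) gives the upper tail at c
    sig = [poisson.sf(c - 1, lam) < p_cutoff for c in counts]
    blocks, start, gap = [], None, 0
    for i, s in enumerate(sig):
        if s:
            if start is None:
                start = i
            end, gap = i, 0
        elif start is not None:
            gap += 1
            if gap > max_gap:     # too many non-significant bins: close block
                blocks.append((start, end))
                start, gap = None, 0
    if start is not None:
        blocks.append((start, end))
    return blocks
```

Gaps of up to `max_gap` non-significant bins are absorbed into a single block, mirroring the SICER-style joining strategy the text describes.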
For the STEADY dynamic, ChromTime uses the Kronecker delta function. For expanding and contracting dynamics, ChromTime employs negative binomial distributions to model the number of genomic bins a peak boundary moves relative to the minimal movement of one bin required for peak expansions and contractions. Each distribution is parametrized with a mean and a dispersion parameter depending on the dynamic and the time point t: \u03bcEXPAND,t, \u03b4EXPAND,t for expansions, and \u03bcCONTRACT,t, \u03b4CONTRACT,t for contractions. Of note, in negative binomial distributions the probabilities for negative integers are defined to be 0. Therefore, the above parametrization enforces that boundary movements of negative or zero length are impossible for expansions and that boundary movements of positive or zero length are impossible for contractions. The ChromTime model additionally assumes a prior probability to observe each dynamic between time points t and t\u2009+\u20091, P\u2009=\u2009\u03c0t,d, which is the same at each side (left and right). Users have the option to set a minimum prior probability (MIN_PRIOR) for the dynamics for all time points. This parameter can be used to avoid learning priors too close to zero, which in some cases can occur for more punctate marks, where the short length of the peaks can cause the prior to become a dominant influence on the class assignment of the spatial dynamics. 
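The boundary-movement distributions can be sketched as a shifted negative binomial: the jump J under EXPAND is 1 + NB(mu, delta) and under CONTRACT is -(1 + NB(mu, delta)), so zero-length moves get probability 0, as the text requires. The mean/dispersion to (n, p) conversion below is one standard parametrization (variance = mu + delta * mu^2), assumed here rather than taken from the ChromTime source.

```python
from scipy.stats import nbinom

def jump_pmf(j, mu, delta, dynamic):
    """Probability of boundary jump j (in bins) under a given dynamic.
    mu, delta: mean and dispersion of the negative binomial."""
    n = 1.0 / delta
    p = n / (n + mu)
    if dynamic == "EXPAND":
        # support j >= 1: shift the NB support up by one bin
        return nbinom.pmf(j - 1, n, p) if j >= 1 else 0.0
    if dynamic == "CONTRACT":
        # support j <= -1: mirror image of the expansion distribution
        return nbinom.pmf(-j - 1, n, p) if j <= -1 else 0.0
    # STEADY: Kronecker delta at j = 0
    return 1.0 if j == 0 else 0.0
```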
By default, MIN_PRIOR\u2009=\u20090 in narrow and broad modes and MIN_PRIOR\u2009=\u20090.05 in punctate mode. For a time course with T time points, we can express for block i the joint probability of a particular sequence of dynamics and boundary positions on the left side and on the right side, of observing foreground counts oi, and of Zi\u2009=\u20091, conditioned on the values of the covariates xi. Here Zi\u2009=\u20091 is used to denote Zi,t\u2009=\u20091 for all t; ds,t for t\u2009=\u20091,\u2026,T\u22121 is the dynamic label for the t-th pair of consecutive time points on the left or the right side (s\u2009=\u2009L or R); and bL and bR are the vectors of T boundary positions containing lt and rt for t\u2009=\u20091,\u2026,T, respectively. The total probability of the signal in a block can be expressed as a sum over all possible sequences of dynamics and peak boundary positions that can generate the block across all time points; this gives the probability of block i having observations oi and Zi\u2009=\u20091 given the covariates xi. In this sum, dL and dR each iterate over all possible 3^(T-1) combinations of peak boundary dynamics, and bL and bR each iterate over all possible ways to place left and right end boundaries across all time points that are consistent with the requirement that 1\u2009\u2264\u2009Bi,L,t\u2009\u2264\u2009Bi,R,t\u2009+\u20091\u2009\u2264\u2009Ni\u2009+\u20091 at each time point. Let o be the total set of observed read counts in all blocks in the data, x be the set of the corresponding two-dimensional vectors containing the constant term and the logarithm of the expected number of reads at each position and time point for each block, Z\u2009=\u20091 denote Zi\u2009=\u20091 for all blocks, and M be the total number of blocks. 
Then, the likelihood of all blocks conditioned on their covariates is the product of the per-block probabilities. We note that the above formulation allows ChromTime to model the appearance of a peak, if it occurs after the first time point in the time course, as an expansion from a zero-length peak at the previous time point. Similarly, the disappearance of a peak is modeled as a contraction to a zero-length peak at the next time point. The total set of parameters of the model consists of: the prior probabilities \u03c0t,d of each dynamic d at each time point t; the parameters \u03b1t, \u03b2t, \u03b3t, and \u03b4t of the negative binomial distributions that model the PEAK and the BACKGROUND components at each time point; and the parameters \u03bcEXPAND,t, \u03b4EXPAND,t and \u03bcCONTRACT,t, \u03b4CONTRACT,t of the negative binomial distributions that model the boundary movements in EXPAND and CONTRACT dynamics at each time point. In particular, ChromTime attempts to optimize the conditional log-likelihood of the observed counts and Zi\u2009=\u20091 given the covariates. After the optimal values for all model parameters are estimated from the data, for each block the most likely positions of the peak boundaries at each time point are calculated. This procedure consists of two steps. First, ChromTime determines for each block all time points with significantly low probability of containing a false positive non-zero length peak. Second, conditioned on those time points, ChromTime computes the most likely assignment of the peak boundary variables at each side and each time point. We applied ChromTime in punctate mode on all ATAC-seq and DNase-seq data. No control reads were used for ATAC-seq and DNase-seq. In addition, foreground reads for ATAC-seq were shifted by 5\u00a0bp in the direction of alignment (SHIFT\u2009=\u20095), and for DNase-seq no shifting was applied (SHIFT\u2009=\u20090). 
We applied ChromTime in broad mode on all data for H3K79me2, Pol2, H3K4me1, H3K27me3, H3K9me3, and H3K36me3 marks. All other options were set to their default values. The timing evaluation was conducted on a MacBook Pro laptop with a 2.7\u00a0GHz Intel Core i7 and 16\u00a0GB RAM using four CPU cores. The procedures for analyses with external data are described in the Additional files. Additional file 1: Additional figures supporting the main analyses. (PDF 8541 kb) Additional file 2: Further description of methods and analyses in this study. (PDF 711 kb)"}
+{"text": "Our experimental results show that the proposed FBM can explicitly display the internal connections of the mammalian cell cycle between genes, separated into the connection types of activation, inhibition, and protein decay. Moreover, the method we propose to infer the gene regulatory networks for the novel Boolean model can be run in parallel and, hence, the computation cost is affordable. Finally, the novel Boolean model and related Fundamental Boolean Networks (FBNs) can show significant trajectories in genes to reveal how genes regulate each other over a given period. This new feature could facilitate further research on drug interventions to detect the side effects of a newly proposed drug.A Boolean model is a simple, discrete and dynamic model without the need to consider effects at intermediate levels. However, little effort has been made toward constructing activation, inhibition, and protein decay networks, which could indicate the direct roles of a gene (or its synthesized protein) as an activator or inhibitor of a target gene. Therefore, we propose to focus on the general Boolean functions at the subfunction level, taking into account the effectiveness of protein decay, and to further split the subfunctions into the activation and inhibition domains. As a consequence, we developed a novel data-driven Boolean model, namely the Fundamental Boolean Model (FBM), to draw insights into gene activation, inhibition, and protein decay. This novel Boolean model provides an intuitive definition of activation and inhibition pathways and includes mechanisms to handle protein decay issues. To prove the concept of the novel model, we implemented a platform using the R language, called FBNNet. DNA carries the genetic information that governs life, death and the reproduction of living organisms. A gene is a fragment of DNA that codes for one protein, the fundamental unit of cellular functions. 
Gene expression is the process whereby a gene is first transcribed into mRNA, and the mRNA is then translated into protein. In systems biology, studying the relationships between the functional status of proteins and gene expression patterns in GRNs in a holistic manner is critical to understanding the nature of cellular functions as well as their dysfunctions, for example, in triggering diseases. A Boolean model is constructed using Boolean variables in either of two binary states - On (1) or Off (0) - that represent gene activation or inhibition, respectively. Each variable represents a gene included in the GRN, with its next state determined by a Boolean function. A Boolean function, denoted by f, is a logic rule that gives a Boolean value, 0 or 1, as an output based on the logic calculation of the Boolean input, as defined in Equation (1). In the example network of Figure , gene B is dependent on the activation of gene A or gene D, and gene C is related to the activation of gene A and the inhibition of gene D. The basic premise of the Boolean network is that genes exhibit switch-like behavior during the regulation of their functional states. The switch-like behavior ensures the movement of a GRN from one state to another, which makes detailed kinetic parameters redundant under the assumption that the biochemical network functions in a parameter-insensitive way. The next state of a gene is On (activation) or Off (inhibition), taking into account the combined effects of the current state of its regulators (or the states of the associated regulators of the relevant gene expression processes), such as the Boolean function that describes the gene expression status of gene CycA: CycA' = (E2F | CycA) & !Rb & !Cdc20 & !(Cdh1 & UbcH10) (2), where E2F, Rb, Cdc20, Cdh1, and UbcH10 are potential genes that regulate gene CycA. As a result, a model of GRNs including k genes is constructed by a set of k Boolean functions. A compressed Boolean function combines conjunctive (And) subfunctions by the disjunction Or. 
For example, the Boolean function P & Q | A & B & C & (D | E) can be divided into the subfunctions P & Q, A & B & C & D, and A & B & C & E. The function given by Equation (2) for CycA combines both activation and inhibition pathways, which require further inference to separate into activation and inhibition parts. The roles of gene activator and inhibitor in this example are not intuitively defined in the compressed Boolean function, even though the compressed rule can be split into multiple subfunctions. A compressed Boolean function, defined as a rule that contains disjunctions over various subfunctions, can be divided into a set of subfunctions. In a large GRN with many genes, this weakness becomes a significant problem in deciphering GRNs that are biologically meaningful. Furthermore, a single Boolean function determines the next status of a gene. However, this may not be true biologically, because a gene may remain activated within a period of decay time when its activators are not present. Therefore, to make the conventional Boolean functions clearer, we propose to focus on the general Boolean functions at the subfunction level, taking into account the effectiveness of protein decay, and to further split the subfunctions into the activation and inhibition domains. Because gene activation and inhibition are the two most fundamental components of complex cellular machinery, we use the term \u201cfundamental Boolean functions\u201d to denote these subfunctions. The biological meaning of a fundamental Boolean function is that it represents a regulatory function or regulatory complex function which can determine the activation or inhibition activity, respectively and directly. 
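The split of a compressed Boolean function into its disjunction-level subfunctions can be illustrated with sympy's DNF conversion; this is a generic sketch of the decomposition idea, not the FBNNet implementation, and the symbol names mirror the paper's P & Q | A & B & C & (D | E) example.

```python
from sympy import symbols
from sympy.logic.boolalg import to_dnf, Or

P, Q, A, B, C, D, E = symbols("P Q A B C D E")

# The compressed Boolean function from the example
expr = (P & Q) | (A & B & C & (D | E))

# Disjunctive normal form: each top-level disjunct is one subfunction
dnf = to_dnf(expr)
subfunctions = list(dnf.args) if isinstance(dnf, Or) else [dnf]
```

For this input the DNF terms correspond to the three subfunctions named in the text (P & Q, A & B & C & D, and A & B & C & E).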
For example, Equation (2) is decomposed into six fundamental Boolean functions: CycA or E2F being TRUE triggers an activation function, and Rb, Cdc20, (!E2F & !CycA), or (Cdh1 & UbcH10) being TRUE triggers an inhibition function. In this paper, we propose a novel data-driven Boolean model, called the Fundamental Boolean Model (FBM), to draw insights into gene activation, inhibition, and protein decay. This novel Boolean model provides an intuitive definition of the activation and inhibition pathways and includes mechanisms to handle protein decay as well as introducing uncertainty into Boolean functions. Furthermore, the new structure of the Boolean network allows us to propose a data mining method to extract the fundamental Boolean functions from genetic time series data. To prove the concept of the novel model, we implemented a platform using the R language, called FBNNet, that is based on the proposed novel Boolean model, together with a novel data mining technique, to infer fundamental Boolean functions that visualize the dynamic trajectory of gene activation, inhibition, and protein decay activities. The novel Boolean model is shown to infer GRNs with a high degree of accuracy using time series data generated from an original cell cycle network. The paper is organized as follows: section Methods presents the proposed novel Boolean model and introduces a network inference method to infer networks for the model, and section Results and Discussion presents and discusses the details of the analysis results on a mammalian cell cycle network. Protein decay introduces uncertainty into the target gene. Because enzyme activation also contains the concept of reversible activators, we can redefine the degree of inhibition as the degree of the enzyme reaction, i = 1 - V/Vo, where V and Vo are the rates of the affected and unaffected reactions, respectively. 
Therefore, we can convert the equation to a conditional probability measure, the probability of an event occurring given that another event has happened, to represent the propensity of an enzyme reaction to reduce or raise the enzyme-catalyzed reaction rate of the target gene. If the conditional probability value of an inhibitor is 1, the inhibitor is irreversible; if the probability is greater than 0 and less than 1, it is reversible. In the conventional Boolean models, Equation (1) represents the processes of gene activation and inhibition without considering the different behaviors of enzyme reactions, such as reversible and irreversible reactions. Furthermore, the disappearance of an activator does not imply the emergence of an inhibitor; i.e., a Boolean activation function with a negation sign does not have to be an inhibitor. Hence, there are justifiable reasons to separate the general Boolean function into the domains of activation and inhibition. To analyse the gene activation and inhibition networks, we abstracted the characteristics of enzyme activation and inhibition, such as mutual offsetting, reversible inhibition, and the long-run degradation of a specific gene product (protein). Henceforth, we propose a novel Boolean model to construct dynamic activation and inhibition networks based on this abstraction. The following section explains the proposed novel model in detail. We define our novel Boolean network as a graph G = (V, Ea, Ed), where the node collection V = {v1, v2, \u2026, vn} corresponds to a group of states X = {xi | i = 1, \u2026, n} of size n, where each variable is only in one of two states, On (1) or Off (0), and the general edge set E is divided into two sets of fundamental Boolean functions, Ea and Ed, which are categorized by their regulatory functionalities, i.e., activation and inhibition, rather than a single function, as in all conventional Boolean models. 
We denote this graph as a Fundamental Boolean Network (FBN), and the two sets of fundamental Boolean functions are defined such that la(i) denotes the total number of fundamental Boolean functions activating target gene i and ld(i) denotes the total number of fundamental Boolean functions deactivating it. If the output of a fundamental Boolean activation function is TRUE, the target gene will be activated; if FALSE, the activation function has no impact on the target gene. Similarly, if the output of a fundamental Boolean inhibition function is TRUE, the target gene will be inhibited; if FALSE, the inhibition function has no impact on the target gene. The proposed fundamental Boolean functions encapsulate the following biologically meaningful key ideas. A fundamental Boolean function is a simple transition rule that takes a minimal group of essential gene states as input and determines the regulatory effect on the target gene in the form of a Boolean value. In general, a fundamental Boolean function is a function that cannot be divided any further. Hence, a fundamental Boolean function can be regarded as a delegation of stereochemical reactions, such as the activity of a transcription factor formed by the binding of a few essential proteins; a combination of transcription factors containing multiple transcription factors; a transcription factor complex formed by binding a transcription factor with other proteins; and conditions that constrain gene activity. A general assumption of the proposed fundamental Boolean functions is that the production of the coded protein for each gene at each time step is either completely activated or completely inhibited.
A further assumption is that gene regulation can be entirely or partially affected by the recipe defined by a fundamental Boolean function, which states how proteins bind to their target genes. Gene regulation time is embedded into the Boolean updating schema based on the treatment of time. Under the synchronous scheme, all states have a unique successor, i.e., all nodes are updated at the same time, and all gene regulation processes are assumed to have completed by the next time step. This scheme is simple but induces well-known artifacts. There are debates on the decay time of mRNA/proteins in Boolean models that allow a gene to remain in its current state. To encapsulate the requirements of protein degradation, we define a decay function with input from target gene i at time t, whose output applies to gene i at time t + 1. ϑ denotes the decay time period, reflecting the fact that the attenuation or enhancement of the expression of mRNA requires time. ¬ represents a negation operator that changes a Boolean value from TRUE to FALSE or vice versa, and × is a logical AND operator. The output of the decay function fdecay is On (1) at time t + 1 if the gene state σi at time t is On (1) within the tolerated time period, and Off (0) at time t + 1 when the tolerated time period has expired, regardless of the gene state of σi at time t. We assume that the tolerated time period for protein decay is only one time step for short time-series data. Short time-series data contain an enormous number of genes but only a few observations; hence, knowledge of the mechanistic details and kinetic parameters cannot be extracted consistently from such data. Combining the activation, inhibition and decay functions, we propose the novel Boolean model (FBM), in which the state of gene i at time t + 1 depends on its own state at time t if tolerated by the parameter ϑ, the decay time period. P⟦x⟧ is a Boolean function that takes a uniformly distributed random number μ and outputs 1 if μ < x and 0 otherwise.
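The decay rule can be sketched as follows; this is a minimal reading of fdecay in which a gene that is On survives only while its "age" (steps since last activation) is below the tolerated period ϑ, and the explicit age counter is our own bookkeeping device, not part of the paper's notation.

```python
# Sketch of the protein-decay rule fdecay for one gene.

def f_decay(state_i, age_i, theta=1):
    """Return the decayed state of gene i at time t + 1.

    state_i: Boolean state of gene i at time t.
    age_i:   steps gene i has been On without being re-activated.
    theta:   tolerated decay period (1 step for short time series).
    """
    if state_i and age_i < theta:
        return True   # still within the tolerated period
    return False      # period expired, or gene already Off

# A gene turned On at t survives exactly theta = 1 step, then decays:
print(f_decay(True, 0))   # True
print(f_decay(True, 1))   # False
print(f_decay(False, 0))  # False
```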
V{x} denotes the logical connective function of OR over a collection of Boolean values; by default, the extracted fundamental Boolean functions of gene i have a confidence measure value of 1, and + is a logical OR operator. The output of the proposed model is the activation or inhibition status of target gene i at time t + 1. A general assumption of the proposed model is that all gene regulation is controlled by the proposed fundamental Boolean functions in the activation, inhibition and protein-decay domains. Figure illustrates the role of protein decay with four cases. In case 1, Gene A was turned Off due to protein decay at time step 2, and Gene B was turned Off at time step 3, also due to protein decay. In case 2, Gene A was inhibited by Gene B at time step 2, and Gene B was continually boosted by Gene A at time step 2 but inhibited due to protein decay at time step 3. In case 3, Gene A was inhibited by Gene B, and Gene B was turned Off due to protein decay at time step 2. In case 4, both genes were trapped in a simple loop, namely an attractor, due to the lack of activators to turn either of them On. All of the cases become trapped in the same attractor, i.e., the gene state {0,0}. The proposed model can simulate the dynamic equilibrium of gene regulation, as shown in Figure . The proposed Boolean model, i.e., the Fundamental Boolean Model, and the related Boolean network, i.e., the Fundamental Boolean Network, provide a novel mechanism to analyse the activation, inhibition and protein-decay pathways intuitively. A potential application of this mechanism is the analysis of drug-related gene regulation, because the inhibition pathway of a new drug can be revealed intuitively through drug-related fundamental Boolean networks. The main challenge is how to extract the knowledge network (KN) from the drug-related dataset.
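The decay-driven collapse into the {0,0} attractor described in the cases above can be imitated with a small synchronous sketch. The two-gene rule set (B inhibits A, A activates B), the fixed confidences of 1, ϑ = 1, and the age bookkeeping are all our own illustrative assumptions.

```python
# Minimal synchronous FBM-style update: inhibition wins over activation,
# and an unregulated On gene survives only within the decay period theta.

def fbm_step(state, age, rules, theta=1):
    new_state, new_age = {}, {}
    for g in state:
        act = any(f(state) for f in rules[g]["act"])
        inh = any(f(state) for f in rules[g]["inh"])
        if inh:
            on = False
        elif act:
            on = True
        else:
            on = state[g] and age[g] < theta   # protein decay
        new_state[g] = on
        new_age[g] = 0 if (act and not inh) else age[g] + 1
    return new_state, new_age

rules = {
    "A": {"act": [], "inh": [lambda s: s["B"]]},   # A inhibited by B
    "B": {"act": [lambda s: s["A"]], "inh": []},   # B activated by A
}
state, age = {"A": True, "B": True}, {"A": 0, "B": 0}
for _ in range(4):
    state, age = fbm_step(state, age, rules)
print(state)  # {'A': False, 'B': False} -- the {0,0} attractor
```

With no activator left for A, both genes eventually fall to Off, matching the common attractor described in the text.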
The first knowledge network is normally referred to as the prior knowledge network (PKN), which encapsulates the biological knowledge already known for the main compounds involved in the process being studied. Confidence Measures: confidence measures are outputs of the confidence functions introduced in Equations (5.a) and (5.b) that indicate a conditional probability on the causality of regulation between a conditional gene at time t and the target gene at time t + 1. Confidence Counter Measures: confidence counter measures are similar to the confidence measures introduced in Equations (5.a) and (5.b) but indicate the conditional probability on the causality of the target gene at time t regulating the conditional gene at time t + 1; they comprise the confidence counter measures of activation and the confidence counter measures of inhibition. Support Measures: support measures are the percentage of transactions containing matched rules; the support measures of gene i are defined as the support measure of activation and the support measure of inhibition. Conditional Causality Test: some researchers have claimed that causality is not a statistical concept and is not statistically 'identifiable', because a secluded causal hypothesis cannot be verified using only observational data. Suppose the confidence p(At+1 = 1|Bt = 1) is equal to 1; in contrast, the reasoning that gene B at time t + 1 is caused by gene A at time t may not be as strong as gene A at time t + 1 being caused by gene B at time t, due to the lack of information to support that reasoning. Hence, the confidence p(Bt+1 = 1|At = 1) is ≤ p(At+1 = 1|Bt = 1). The ratio of p(At+1 = 1|Bt = 1) to p(Bt+1 = 1|At = 1) can then be used as a test of the causality direction between genes A and B. We named this test the conditional causality test.
This test can differentiate indirect regulators from direct ones, because indirect regulators will usually have weaker reasoning than direct ones. The formulae for calculating the confidence measures and the confidence counter measures were discussed above; the ratios and their interpretations are based on plausible-reasoning theory. Therefore, given a confidence for a potential fundamental Boolean function and a confidence counter for that function, we can calculate the value of the conditional causality test and interpret it as follows: if the value of the conditional causality test for target gene A and conditional gene B is greater than 1, then gene A is regulated by gene B; if the value is equal to 1, genes A and B regulate each other; if the value is lower than 1, there is no causal relationship, so the hypothesis that gene B regulates gene A can be rejected. Entropy and Mutual Information: the Shannon entropy theory provides a quantitative information measure of the probability of observing a particular symbol, or event, Pi, within a given sequence, where log is a logarithm with base 2, and the entropy of a sequence is the sum over the probabilities of an event being either On or Off. H(X|Y) and H(Y|X) are the two conditional entropies that capture the relationship between sequences X and Y; the mutual information M represents the information shared between X and Y, and the output state of X is determined by Y if M = H(X). The reason we propose this type of cube is that we can distribute the computational costs to multiple computing threads or a cloud-computing environment. The precomputed cube can be stored in any distributed database. The data training of every target gene is independent, which means we can distribute target genes to different computers to build their tree structures and then assemble all the distributed trees into an orchard forest.
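The confidence estimation and the conditional causality test can be sketched directly from a Boolean time series. The toy series below and the simple frequency estimator are invented for illustration; they are not the paper's Equations (5.a)/(5.b).

```python
# Estimate p(target_{t+1}=1 | cond_t=1) from paired Boolean series and
# take the ratio of the two directions as the conditional causality test.

def confidence(cond, target):
    """Frequency estimate of p(target_{t+1}=1 | cond_t=1)."""
    hits = [target[t + 1] for t in range(len(cond) - 1) if cond[t]]
    return sum(hits) / len(hits) if hits else 0.0

# B deterministically switches A on one step later; A only sometimes precedes B.
A = [0, 1, 1, 0, 1, 1, 0, 1]
B = [1, 1, 0, 1, 1, 0, 1, 1]
fwd = confidence(B, A)   # p(A_{t+1}=1 | B_t=1)
bwd = confidence(A, B)   # p(B_{t+1}=1 | A_t=1)
ratio = fwd / bwd        # > 1 suggests B regulates A
print(fwd, bwd, ratio > 1)  # 1.0 0.5 True
```

Because fwd is larger than bwd, the test points to B as the regulator of A, exactly the direction-of-causality reading described above.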
The network inference process is separated from the process of constructing cubes and has different pruning strategies. The separation between network inference and cube construction allows further development of scalable methods to extract genetic networks effectively and efficiently from comparatively few updates of the cube. Meanwhile, the cube can be consistently improved by integrating it with more time-series data. Each branch or link above ground on the tree is a regulatory function, and every node on a branch is a component of that regulatory function. The nodes underground are analytic data that contain all possible regulatory functions for the target gene, unless the null hypothesis rejects them. The nodes above ground comprise the extracted regulatory functions that have been mined from the nodes underground. Because regulatory functions are the knowledge we are looking for, if we define the collection of regulatory functions of a target gene as U' and the analytic data part as U, then U' is contained in U. Each node contains four dimensions, i.e., four major groups of measures, and each dimension represents a potential regulatory function of the target gene.
The four dimensions are denoted as TT, TF, FT, and FF, as shown in Figure : TT is when the target gene's state is TRUE, the current conditional gene state is TRUE, and all upstream conditional gene states are fixed; TF is when the target gene's state is TRUE, the current conditional gene state is FALSE, and all upstream conditional gene states are fixed; FT is when the target gene's state is FALSE, the current conditional gene state is TRUE, and all upstream conditional gene states are fixed; FF is when the target gene's state is FALSE, the current conditional gene state is FALSE, and all upstream conditional gene states are fixed. For example, the node G4 of target gene G1 at Ground-3 has its two upstream gene states fixed, i.e., G1(0) and G2(0); therefore, the four dimensions of node G4 are TT (G1|!G1&!G2&G4), TF (G1|!G1&!G2&!G4), FT (!G1|!G1&!G2&G4) and FF (!G1|!G1&!G2&!G4). Each dimension has a factor, also called a function statement, which represents the potential gene regulatory function. Dimensions TT and FT, and dimensions TF and FF, form two pairwise dimensions; the minimum confidence measures between TT and FT, and between TF and FF, are error measures. The pairwise dimensions have the following characteristics: all nodes under Ground-2 have prefix gene states that the conditional gene xi must match. For the paired dimensions, if we define one dimension to be the confidence measure that will have an impact on the target gene, the other dimension is then regarded as an error measure. This definition is equivalent to the definition of the essential Boolean state f(x1, ..., xi-1, xi, xi+1, ..., xn), where f is a Boolean function and xi is a Boolean state. Because the measures are invariant to the ordering of the conditional genes, branches that would precompute a permutation of an already-computed combination are not processed from the main tree; hence, the computational cost of constructing the entire cube is affordable. Constructing the cube requires building an optimal tree-type data structure from all possible combinations of all related genes up to a maximum depth.
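The four per-node dimensions can be sketched as simple transition counts. The reading below (first letter = target state at t + 1, second letter = conditional gene state at t, no upstream prefix fixed) is our interpretation of the cube, and the series are invented.

```python
from collections import Counter

# Count TT/TF/FT/FF over (t, t+1) transition pairs for one conditional
# gene against one target gene.

def count_dimensions(cond, target):
    c = Counter()
    for t in range(len(cond) - 1):
        key = ("T" if target[t + 1] else "F") + ("T" if cond[t] else "F")
        c[key] += 1
    return c

cond   = [1, 1, 0, 0, 1, 0]
target = [0, 1, 1, 0, 0, 1]
print(count_dimensions(cond, target))  # TT and FF dominate here
```

The TT/FT pair then yields a confidence measure and its error measure for the activation hypothesis, as described in the text.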
The computational cost grows exponentially; however, it is endurable, because precomputed gene combinations do not proceed to the next level, which avoids redundant computations. For the initial pruning, we use Pearson's Chi-square test and remove root branches whose p-value is over 0.05; this procedure reduces unnecessary root branches. The model, formulae and orchard cube introduced in the previous sections provide a complete mechanism to calculate the next gene state at time t + 1 given the gene state at time t. Therefore, we can reconstruct the time steps to any length given the initial gene state at time t = 0. The primary uses of applying the model to reconstruct time steps are as follows. First, verify the reconstructed time-series data against the original time series: the regulatory functions are a subset of the analytic data, so if the functions are correct, we should be able to reconstruct time steps with the same initial states as the original time series. Because the asynchronous Boolean model produces non-deterministic results, the reconstructed time-series data might differ from the training time-series data; hence, we focus on demonstrating the FBM using the synchronous Boolean schema only in this paper. Second, reconstruct the hidden layers between the observed time steps: in short time-series data, the gaps between observed time steps are very sparse. Given two observed states Sobserved1 and Sobserved2, we can reveal the hidden layers by taking the first observed state as the initial Boolean state and generating each next time step from the previous one until the latest generated time step is identical to the next observed time step; the states generated between the two observed states are then denoted as missing time steps or hidden layers. To prove the concept, we implemented an R package, FBNNet, which can build an orchard-type cube and mine FBNs from the cube.
The FBNNet tool can find attractors under the two main Boolean updating schemata and plot a static regulatory graph as well as a dynamic regulatory graph. The following paragraphs describe the main experiment we conducted. There are three main steps to verify the proposed logical model: (i) the specification of the updating schema of the model; (ii) the definition of the parameters of the model; (iii) the extraction of a regulatory network. As discussed in section Background, there are two main Boolean updating schemata, synchronous and asynchronous, based on the treatment of time. For the sake of simplicity, we apply the synchronous updating scheme to test the model. The parameters for the fundamental Boolean model are the protein decay, which is 1 by default; the updating time step for each subfunction, which is 1; and the confidence of each subfunction, which is extracted using the methodology we proposed. To extract the fundamental Boolean network, we implemented an R package, namely FBNNet. To generate the experimental data, we first used the generateTimeSeries command of the R package BoolNet. The generated 1024 samples form the dataset used for this experiment. Each sample contains 43 time steps in sequence. All the initial states of the 1024 samples are unique, covering the complete combination of 2^10 states; hence, the number of genes expressed in each sample varies. Because the proposed Boolean rule definition is very different from the traditional Boolean rules, i.e., intuitive rules vs. compressed (non-intuitive) rules as discussed in section Background, we cannot compare it directly with networks generated by other tools, as that would not be a fair comparison. Hence, the best way to evaluate the generated FBNs is to use them to reconstruct time-series data with the same initial states and then compare the result with the training time-series data.
The reason is that, under the synchronous model, if the generated network is correct, the network should produce the same time-series data as the training time-series data given the same initial states. The evaluation metrics adopted for the time-series comparison are ER (error rate), AR (accuracy rate), MMR (mismatched rate) and PMR (perfectly matched rate), where n is the total number of samples. Time-series data here refer to a list of sample matrices, each of which contains gene states; hence, a sample in this paper means a sample matrix. The FBN for the sample genes and the FBN for the cell cycle genes were inferred via the R package FBNNet, as shown in Table . The method generateTimeSeries of BoolNet does not provide a configurable parameter for protein decay but has a default value of 1 (time step) embedded in its logic. As shown in Table , a significant difference from other existing Boolean models is that the inferred FBN intuitively splits the Boolean functions of the mammalian cell cycle into the domains of activation and inhibition. The networks were visualized with visNetwork (http://datastorm-open.github.io/visNetwork/). Regarding FBNs, as shown in Figure , attractors are recurrent cycles of states on which none of the activation and inhibition functions has any further impact. Through the demonstration of the proposed model with the mammalian cell cycle, we show that the proposed orchard cube can be used to infer the GRNs of the mammalian cell cycle. The outcome shows that, if the inferred network were 100% correct, the reconstructed time series should match the original training dataset 100%; however, this assumption is based on the degree of completeness of the initial training dataset when used with the synchronous Boolean schema. The cell cycle FBN reveals the internal gene activation, inhibition and protein-decay mechanisms.
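The reconstruction check can be sketched as follows. The paper's exact ER/AR/MMR/PMR formulas are not reproduced here; as stand-ins we assume AR is the entry-wise match rate over all samples and PMR the fraction of perfectly reconstructed sample matrices, with ER = 1 - AR and MMR = 1 - PMR by construction. The sample matrices are invented.

```python
# Compare reconstructed sample matrices (lists of rows of gene states)
# against the originals.

def compare(originals, reconstructed):
    entries = correct = perfect = 0
    for o, r in zip(originals, reconstructed):
        flat_o = [v for row in o for v in row]
        flat_r = [v for row in r for v in row]
        entries += len(flat_o)
        correct += sum(a == b for a, b in zip(flat_o, flat_r))
        perfect += flat_o == flat_r
    ar = correct / entries
    pmr = perfect / len(originals)
    return {"AR": ar, "ER": 1 - ar, "PMR": pmr, "MMR": 1 - pmr}

orig = [[[1, 0], [0, 1]], [[1, 1], [0, 0]]]
reco = [[[1, 0], [0, 1]], [[1, 0], [0, 0]]]
print(compare(orig, reco))  # one sample perfect, 7 of 8 entries correct
```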
Although the generated Boolean cell cycle network is very different from any other Boolean network, it can still generate the same attractors as others, such as those in the study of Fauré et al., which modelled the mammalian cell cycle. We also show the dynamic trajectories for attractor 2. The main advantage is that they illustrate the relationships in the domains of activation, inhibition and protein decay, helping scientists understand the intrinsic genetic regulation. The downside is that the FBN might contain too many links compared with the rules shown in Table . FBNNet was implemented as a prototype of the proposed FBM in pure R, without any performance optimization. Hence, generating the experimental results requires approximately 200 s with parallel computing and 530 s without. The machine used for the experiments was an Acer™ Aspire V 17 Nitro laptop. R does not provide parallelisation facilities directly, so we used the packages 'parallel', 'foreach' and 'doParallel' for parallel computing. The performance of these packages is unknown, and they may not deliver the full power of parallel computing as C or C++ would; however, they are good enough to prove the concept of the proposed novel Boolean model. BoolNet uses C to speed up the construction of the cell cycle network and is hence faster than the current version of FBNNet. In contrast, our method derives the intuitive activation and inhibition pathways and therefore requires more computation time. In addition, the proposed orchard cube stores all precomputed measures for all potential fundamental Boolean functions in case we need to mine FBNs from short time-series data.
Hence, it requires more time to process, and it keeps as many potential rules as possible. Finally, we proposed a way to reconstruct any missing time steps by estimating all fundamental Boolean functions' TRUE or FALSE values affecting their target genes, verifying that the input states match the requirements of the functions. The time interval between time steps is a parameter of the proposed model and should reflect the assumption that all related genes have completed their biological reactions, for example, transcription from DNA to mRNA and translation from mRNA to protein. If we fix the time interval for all genes, the FBM is then a synchronous Boolean model; if every gene has its own time interval defined, the FBM is then an asynchronous Boolean model. The proposed Boolean model can therefore be used to reconstruct the missing time steps using either a synchronous or an asynchronous scheme. However, because the results of the asynchronous Boolean model are non-deterministic, we cannot use the reconstructed time series to verify the generated output correctly under the asynchronous Boolean model. In this paper, we studied the characteristics of enzyme activation, enzyme inhibition and protein decay, as well as the advantages and disadvantages of the conventional Boolean models, and then proposed a novel data-driven Boolean model, namely the Fundamental Boolean Model (FBM), to draw insights into gene activation, inhibition and protein decay. The FBM separates the activation and inhibition functions of conventional Boolean functions, and this separation will help scientists find answers to some fundamental questions, such as how a modification of one gene affects other genes at the expression level. We introduced a new data-driven method to infer FBNs.
The new method contains two parts: the first constructs an orchard cube to store all precomputed measures for all potential fundamental Boolean functions; the second infers FBNs from the constructed orchard cube by filtering each tree's underground part based on the criteria discussed previously. Dynamic FBNs can show the significant trajectories of genes and reveal how genes regulate each other over a given period. This new feature could facilitate further research on drug interactions to detect the side effects of a newly proposed drug. Protein decay is also a function of the proposed model (Equation 6); hence, there are three types of links in FBNs, and this feature makes the networks unique among Boolean models. To prove the concepts of the FBM and FBNs, we implemented an R package called FBNNet, which has successfully demonstrated that FBNs can be inferred from time-series data. The R package provides a tool to draw FBNs, either in the static mode, as shown in Figure , or in the dynamic mode. The proposed FBM is a data-driven model, and the FBM functions are extracted from a particular type of data cube; hence, prior knowledge about the connectivity among genes is not needed but can be used to verify the generated result. The dynamic trajectory of the gene activation, gene inhibition and protein-decay activities of attractor 2 is deciphered in Figure . To infer FBNs, there was a need to search all related genes and to calculate all relevant measures for all associated gene combinations up to some depth. This requirement could lead to an NP-hard problem, i.e., one with no known polynomial-time algorithm, so that the time to find a solution grows exponentially with the problem size, as mentioned in Liu et al.; however, the distributed construction of the orchard cube and the pruning strategies described above keep the computation affordable. The experimental data and materials are discussed in Appendix B of the SI. LC, DK, and SS developed the research project, formulated the research questions and designed the paper.
LC did all the computing with intellectual input from DK in consultation with SS. DK directed the project. LC wrote the first draft, DK and SS critiqued it, and all authors approved the final submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Three real time-series expression datasets from the (Save Our Soul) SOS DNA repair system in E. coli and from Saccharomyces cerevisiae are utilized to evaluate the performance of the HSCVFNT algorithm. As a result, HSCVFNT obtains outstanding F-scores of 0.923, 0.8 and 0.625 for inference of the SOS network and the (In vivo Reverse-Engineering and Modeling Assessment) IRMA network (on and off datasets), respectively, which are 5.5%, 14.3% and 72.2% higher than the best performance of other state-of-the-art GRN inference methods and time-delayed methods. Gene regulatory network (GRN) inference can help us understand the growth and development of animals and plants and reveal the mysteries of biology. Many computational approaches have been proposed to infer GRNs. However, these inference approaches have hardly met the needs of modeling, and the redundancy-reducing methods based on individual information-theoretic measures have poor universality and stability. To overcome these limitations and shortcomings, this paper proposes a novel algorithm, named HSCVFNT, to infer gene regulatory networks with time-delayed regulations by utilizing a hybrid scoring method and a complex-valued flexible neural network (CVFNT). The regulations of each target gene can be obtained by iteratively performing HSCVFNT. For each target gene, the HSCVFNT algorithm utilizes a novel scoring method based on time-delayed mutual information (TDMI), the time-delayed maximum information coefficient (TDMIC) and the time-delayed correlation coefficient (TDCC) to reduce the redundancy of regulatory relationships and obtain the candidate regulatory factor set. Then, the TDCC method is utilized to create a time-delayed gene expression time-series matrix. Finally, a complex-valued flexible neural tree model is proposed to infer the time-delayed regulations of each target gene with the time-delayed time-series matrix.
Gene regulatory networks (GRNs) contain the regulatory relationships among genes, mRNAs and proteins, which control almost all biological activity. The gene regulation pattern in organisms involves a time-delayed factor. In order to improve the accuracy of GRN inference, many redundancy-reducing methods are utilized before or after GRN identification by the inference methods; generally, information-theoretic approaches are utilized to measure the complex regulatory dependence between genes. To overcome the limitations and shortcomings of the existing methods, this paper proposes a novel algorithm, namely HSCVFNT (available at http://121.250.173.184), to infer a gene regulatory network with time-delayed regulations by utilizing a hybrid scoring method and a complex-valued flexible neural network (CVFNT). In order to reduce computational complexity, a decomposition strategy is utilized: the HSCVFNT algorithm infers the regulations of each target gene separately. For each target gene, the HSCVFNT algorithm uses a novel scoring method based on time-delayed mutual information (TDMI), the time-delayed maximum information coefficient (TDMIC) and the time-delayed correlation coefficient (TDCC) to reduce the redundant regulatory relationships; in the scoring method, the ranks of the regulatory relationships according to TDMI, TDMIC and TDCC are added. Three real time-series expression datasets, from the SOS DNA repair system in E. coli with six genes and from Saccharomyces cerevisiae (the IRMA network) with five genes, are utilized to evaluate the performance of the HSCVFNT algorithm. Experimental results show that HSCVFNT performs better than other state-of-the-art GRN inference methods. Two real-life benchmark gene regulatory networks are utilized to validate our method.
The first GRN is derived from the SOS DNA damage repair network, which has six genes, and the second from Saccharomyces cerevisiae (the IRMA network), which contains five genes. Sensitivity, Precision, Specificity and F-score are utilized to evaluate the performance of the HSCVFNT algorithm; the four criteria are defined in terms of TP, FP, FN and TN. To evaluate the performance of the HSCVFNT method, it is compared with the algorithms with better reported performance: for the SOS DNA repair network in E. coli, we select DBN, S-system and TDSS; for the IRMA network, we select TDLASSO, DBN-ZC, DBmcmc, MMHO-DBN and HRNN. The SOS DNA damage repair network of E. coli includes four experiments under different intensities of UV light (20 J/m2). The network involves the genes uvrD, lexA, umuD, recA, uvrA, uvrY, ruvA and polB; in this paper, six main genes are selected to test our method. In the HSCVFNT algorithm, the maximum time lag is set in advance, and the performance comparison results are reported below. The second benchmark network is from Saccharomyces cerevisiae (the IRMA network). Two gene expression time-series datasets (the on and off datasets) are collected with 21 equally distributed time points, triggered by glucose shifts within the network. In the HSCVFNT method, the maximum time lag is likewise set in advance. Although the MMHO-DBN algorithm has the highest Precision with the on and off datasets, which reveals that its ratio of true positive edges is very high, its Sensitivity and F-score are not as good: MMHO-DBN inferred only four and one true positive edges with the on and off datasets, respectively, and its F-score is lower than that of our method. On the whole, our method performs better. In order to test the stability of HSCVFNT, 20 runs are performed for IRMA network identification with the on and off datasets.
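The four criteria can be written out in terms of TP, FP, FN and TN (these are the standard definitions; the counts below are invented for illustration):

```python
# Standard edge-prediction criteria for GRN inference.

def criteria(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)        # fraction of true edges recovered
    precision   = tp / (tp + fp)        # fraction of predicted edges that are true
    specificity = tn / (tn + fp)        # fraction of non-edges correctly rejected
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, specificity, f_score

sens, prec, spec, f1 = criteria(tp=6, fp=2, fn=2, tn=20)
print(sens, prec, round(spec, 3), f1)  # 0.75 0.75 0.909 0.75
```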
With the on dataset, Precision, Sensitivity and F-score average 0.817 ± 0.064, 0.736 ± 0.042 and 0.774 ± 0.045, respectively. With the off dataset, the averaged Precision, Sensitivity and F-score are 0.569 ± 0.066, 0.612 ± 0.04 and 0.589 ± 0.047, respectively. In order to verify the effect of the time-delayed factor on the performance of gene regulatory network modeling, we also use an HSCVFNT algorithm with no delay (a non-time-delayed version) for SOS network and IRMA network identification. In the non-time-delayed version, the scoring method is based on mutual information, the maximum information coefficient and the correlation coefficient, and the time-series data are likewise non-time-delayed. In order to investigate the effect of the number of candidate regulatory factors, further comparison experiments are performed. In order to validate the performance of our proposed scoring method, we make comparison experiments with the single time-delayed redundancy-reducing methods for inferring the SOS network; the F-score results of the four redundancy-reducing methods are depicted in Figure . In order to validate the performance of the bat algorithm (BA), we make comparison experiments with the classical optimization algorithms (particle swarm optimization (PSO) and differential evolution (DE)) that are utilized to optimize the CVFNT model; over 20 runs, the performance results of the optimization methods for SOS network and IRMA network inference are listed in Table . Mutual information (MI) between two genes X and Y is defined in terms of p(x) and p(y), the marginal probability distribution functions of genes X and Y, respectively, and p(x, y), their joint probability density function. MI is symmetric, so it cannot identify the time-delayed dependence between two genes.
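The hybrid rank-sum scoring can be sketched as follows. For a fixed target gene, each candidate regulator is ranked under each of the three measures and the ranks are added (a lower total means a stronger candidate); the measure values below are invented, and the top-k cut-off is our own illustrative choice.

```python
# Hybrid score: add the per-measure ranks of each candidate regulator.

def rank(scores):
    """Map each gene to its rank (1 = highest score)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {g: i + 1 for i, g in enumerate(order)}

tdmi  = {"g1": 0.9, "g2": 0.2, "g3": 0.5}
tdmic = {"g1": 0.8, "g2": 0.3, "g3": 0.6}
tdcc  = {"g1": 0.7, "g2": 0.1, "g3": 0.9}

total = {g: rank(tdmi)[g] + rank(tdmic)[g] + rank(tdcc)[g] for g in tdmi}
candidates = sorted(total, key=total.get)[:2]   # keep the top-k candidates
print(total, candidates)
```

Combining three heterogeneous measures by rank rather than by raw value sidesteps their different scales, which is what gives the hybrid score its stability over any single measure.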
Time-delayed mutual information (TDMI) is proposed to measure the time-delayed dependence between a target gene and its regulatory factors. The maximum information coefficient, as an exploratory analysis tool, can be utilized to evaluate the relationships between hundreds of variables. A bivariate set is defined as a set whose data elements are the ordered tuples (X, Y), and the maximum information gain over all grid sizes for the tuples can be computed; applying this with a time lag yields the time-delayed maximum information coefficient. The correlation coefficients of all genes are computed for each time lag. Suppose that a time-delayed gene regulatory network contains n genes and the gene expression time-series dataset includes m time points. In order to measure the time-delayed regulatory relationships accurately, a novel hybrid scoring method based on time-delayed mutual information, the time-delayed maximum information coefficient and the time-delayed correlation coefficient is proposed; the flowchart and pseudo-code of the scoring method are depicted in Figure . The flexible neural tree (FNT) model was initially introduced by Prof. Chen in 2005. A function set F and a terminal set T are defined in advance. The output of a complex-valued flexible neuron with n leaf nodes and complex-valued weights passes through a complex-valued activation function (CVAF) and is complex-valued; the parameters a, r and c are complex-valued, and ||net|| is the modulus of the complex-valued net input. In this paper, we choose the complex-valued Elliot function as the CVAF of the CVFNT model. A typical CVFNT model is described in Figure . The flowchart of our HSCVFNT algorithm is described in Figure , with the following steps: (1) let the gene expression data be given; (2) for each target gene, obtain its candidate regulatory factor set via the scoring method; (3) for each target gene, create its time-delayed time-series matrix via TDCC; (4) for target gene k, the CVFNT model is utilized to identify the time-delayed regulatory relationships, with the gene expression data of the candidate regulatory factors utilized as input data and the gene expression data of target gene k utilized as output data.
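TDMI can be sketched as ordinary MI between the regulator at time t and the target at time t + lag. The plug-in frequency estimator over discretized (Boolean) series is an assumption for illustration; the paper's exact estimator is not reproduced.

```python
import math
from collections import Counter

def mi(xs, ys):
    """Plug-in estimate of mutual information (bits) between two sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def tdmi(x, y, lag):
    """MI between regulator x at t and target y at t + lag."""
    return mi(x[:len(x) - lag], y[lag:]) if lag else mi(x, y)

x = [0, 1, 0, 1, 0, 1, 0, 1, 0]
y = [1, 0, 1, 0, 1, 0, 1, 0, 1]   # y echoes x one step later
print(tdmi(x, y, 1))  # 1.0 bit: fully determined at lag 1
```

Scanning lags from 0 up to the maximum time lag and keeping the largest value is how the time-delayed dependence is detected, which plain (symmetric, lag-0) MI cannot do.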
The optimization process of the CVFNT model, given the candidate regulatory set and time-delayed time series matrix of gene k, is as follows: (a) Initialize the population, containing the structure and the real-valued and complex-valued parameters of the CVFNT model. (b) Evaluate all individuals in the population using the root mean squared error (RMSE) between the model output and the expression of the target gene at each time point. (c) Optimize the structure using three operators, selection, crossover and mutation, as introduced in the referenced work. (5) According to the structure of the optimized CVFNT model, the input genes are taken as regulatory factors, yielding the regulations between target gene k and its regulatory factors. (6) In order to infer a gene regulatory network accurately, the proposed HSCVFNT algorithm infers time-delayed regulations: the novel scoring method reduces redundant regulatory relationships and yields the candidate regulatory factor set of each target gene, and the CVFNT model then infers the time-delayed regulations from the time-delayed gene expression matrix. Three real time-series expression datasets from the SOS network and the IRMA network are utilized to evaluate the performance of the HSCVFNT algorithm. The results on the SOS network show that our HSCVFNT method significantly outperforms three other state-of-the-art methods. The results on the IRMA network reveal that the HSCVFNT algorithm performs better than HRNN, MMHO-DBN, TDARACNE, TDLASSO, DBmcmc and DBN-ZC. We also investigated the effect of the time-delayed factor and of the number of candidate regulatory factors on the HSCVFNT algorithm for inferring GRNs. The experimental results show that the HSCVFNT algorithm with the time-delayed factor identifies a gene regulatory network more accurately, and that the number of candidate regulatory factors has a great influence on the performance of the algorithm. 
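The fitness evaluation in step (b), and the complex-valued Elliot activation chosen earlier as the CVAF, can be sketched as below. The Elliot form f(net) = net / (1 + |net|) with |net| the modulus is the standard complex-valued generalization; treat it as an assumption about the paper's exact definition.

```python
import numpy as np

def cv_elliot(net):
    """Complex-valued Elliot activation used as the CVAF:
    f(net) = net / (1 + |net|), where |net| is the modulus."""
    return net / (1.0 + np.abs(net))

def rmse(predicted, observed):
    """Root mean squared error between the CVFNT output and the target
    gene's expression profile (the fitness used in step (b))."""
    return float(np.sqrt(np.mean(np.abs(predicted - observed) ** 2)))

z = np.array([1 + 1j, -2j, 0.5])
out = cv_elliot(z)
assert np.all(np.abs(out) < 1.0)   # the modulus is squashed below 1
assert rmse(out, out) == 0.0
```

The squashing property keeps node outputs bounded while preserving the phase of the complex input, which is the point of using a complex-valued activation.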
The comparison results show that setting the number of candidate regulatory factors to about 30% of the total number of genes is most appropriate. In the future, we will apply the proposed algorithm to infer large-scale real gene regulatory networks and to discover further meaningful relationships."}
+{"text": "With the availability of high-throughput gene expression data in the post-genomic era, reconstruction of gene regulatory networks has become a hot topic. Regulatory networks have been intensively studied over the last decade and many software tools are currently available; however, the impact of time point selection on network reconstruction is often underestimated. In this paper we apply the Dynamic Bayesian Network (DBN) to construct Arabidopsis gene regulatory networks by analyzing time-series gene microarray data. In order to evaluate the impact of time point measurement on network reconstruction, we deleted time points one by one to yield 11 distinct groups of incomplete time series. The gene regulatory networks constructed from the complete and incomplete data series were then compared in terms of statistics at different levels. Two time points were found to play a significant role in the Arabidopsis gene regulatory networks. Pathway analysis of significant nodes revealed three key regulatory genes, and important regulations between genes, which were insensitive to the time point measurement, were also identified. Many gene expression data in current microarray databases are static and can hardly describe living processes well. Fortunately, time series gene microarray data, which contain temporal information, can support dynamic network reconstruction, as indicated in the gene knock-out experiments by Geier et al. A Boolean network is a simple model that is suitable for qualitative research. The differential equations method, which models the gene network from an accurate mathematical point of view, lacks anti-noise ability and robustness. Researchers now pay more attention to Bayesian networks, including the static Bayesian network and the dynamic Bayesian network (DBN). 
The static Bayesian network, in which nodes represent random variables, models static probabilistic dependency relations among genes from noisy expression data. However, for the reconstruction of a gene regulatory network from Arabidopsis time series gene microarray data, two related issues are still unresolved. Firstly, the effect of time point measurements on the reconstruction of gene regulatory networks, such as the number of time points and the measurement intervals, remains to be explored. Secondly, it is unclear what kind of properties of the constructed network are robust and less sensitive to the time point measurements, i.e. what kind of properties obtained from the constructed networks remain credible even when the time point measurement is not sufficient. Answering these two questions would be very helpful for the design of time course data measurements and for the application of gene regulatory networks constructed with time series data. In this work, the reconstruction of the Arabidopsis gene regulatory network was taken as a case study to answer the above questions. Recently, many popular methods of gene regulatory network reconstruction have been developed, including Boolean networks, multiple regression analyses, differential equations and Bayesian approaches. The data were derived from microarray experiments performed in the laboratory of Smith: Arabidopsis plants were cultivated to growth stage 3.90 (rosette growth complete), and the labeled arrays were obtained from NASCArrays (http://affymetrix.arabidopsis.info/). The R package G1DBN was used to perform the Dynamic Bayesian Network reconstruction, and other packages from CRAN (http://mirrors.geoexpat.com/cran/) were also used; supplementary scripts are available at http://www.sysbio.org.cn/Molecules2010_SupplementaryScripts.htm. A DBN is defined by S and P, where S is the structure of the network and P is a set of conditional distributions on S. 
S represents a directed acyclic graph (DAG) whose nodes correspond to the time series dynamic variables. They can be defined as follows: X_ij is the j-th variable at time i, x_ij is the value of the j-th variable at time i, X_i\u00b7 is the vector composed of the variables at time i, and X\u00b7_j is the vector composed of the j-th variable at all times. The DBN, in which a time factor is introduced, is an extension of the Bayesian network. More precisely, it uses time series data to construct causal connections among random variables and uses time-lapse information to capture circular regulation. An arc between two nodes of S represents a probabilistic relationship or causality between them: if there is an arc, the two nodes are conditionally dependent. The DBN model can then be written as the product of the conditional distributions P(X_j | Pa_j), where Pa_j denotes the random variables that correspond to the parents of node j, and Pa_j = \u0424 if node j has no parents. If the structure is unknown, the network is constructed by learning rules and a criterion that measures how well a candidate network matches the observed data. Given an observed data set D of variables X, we search for a network that best matches D, where S\u2032 is the network structure and \u03b8 its parameters; a score function expresses how well the pair (S\u2032, \u03b8) matches the data, and the structure maximizing this score is selected. So far, compared to parameter learning, structure learning of a DBN is much more difficult. In general, DBN structure learning approaches are transplanted and extended from BN approaches and can be divided into two types. One is the search-and-score method: a primary network structure is given, then edges are added or subtracted so that the model improves, and finally the network that best matches the dataset is picked out. The other is based on dependency relationships and uses statistical measurements to estimate the dependence among nodes, then constructs a network based on the results. 
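The search-and-score idea can be sketched with a linear-Gaussian BIC score for one node given a candidate parent set. This is an illustration of the general scheme only; G1DBN's actual criterion differs.

```python
import numpy as np

def bic_score(child, parents_data, n):
    """BIC of a linear-Gaussian node: fit child(t+1) on parents(t) by
    least squares and penalize the parameter count (lower is better).
    A sketch of the search-and-score idea, not G1DBN's criterion."""
    X = np.column_stack([np.ones(n - 1)] + [p[:-1] for p in parents_data])
    y = child[1:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1]
    return (n - 1) * np.log(rss / (n - 1) + 1e-12) + k * np.log(n - 1)

rng = np.random.default_rng(1)
n = 40
parent = rng.normal(size=n)
child = np.empty(n)
child[0] = 0.0
child[1:] = 0.9 * parent[:-1] + 0.05 * rng.normal(size=n - 1)
noise = rng.normal(size=n)
# the true parent scores lower (better) than an unrelated gene
assert bic_score(child, [parent], n) < bic_score(child, [noise], n)
```

A greedy search would add or remove parents for each node while this score keeps improving, which is the "edges are added or subtracted" loop described above.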
In the present work, we used the second method to construct gene networks from the Arabidopsis gene microarray data. Let S = (V, E) be a directed network, where V is the set of nodes and E is the set of edges of network S. The degree of a node v is the number of edges at node v; the indegree or outdegree of a node v is the number of edges pointing to or out from node v in S. The diameter (Dia) is the longest shortest path of a network. The network structure can be analyzed using different statistics based on the analysis of nodes, edges or the whole network, and various statistics can be analyzed for different goals. These statistics and others of the same type are commonly known as centrality measures, connectivity indices, and/or topological indices; their applications cover drug molecular graphs, protein interaction networks, and more. To compare the effect of different time points on the reconstruction of the network, groups of time series data are needed. However, it is difficult to obtain abundant time point data in an experiment. Hence, we deleted the time points one by one to simulate distinct groups of time series microarray data. The data included the expression levels of 800 genes at 11 time points, recorded at 0, 1, 2, 4, 8, 12, 13, 14, 16, 20 and 24 h. Each time, we deleted one time point and constructed the gene regulatory network using the remaining time points; the resulting networks are designated G1, G2, G3, ..., G11. For instance, the network G1 was built from the time points 1, 2, 4, 8, 12, 13, 14, 16, 20 and 24. Additionally, the network with all the time points is denoted G0. 
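The leave-one-timepoint-out construction above can be sketched as a simple deletion loop over the expression matrix; the random matrix stands in for the real 800-gene data.

```python
import numpy as np

# the 11 measurement times (hours) stated in the text
time_points = np.array([0, 1, 2, 4, 8, 12, 13, 14, 16, 20, 24])
expr = np.random.default_rng(2).normal(size=(800, 11))  # 800 genes x 11 points

# G0 keeps every time point; Gk (k = 1..11) deletes the k-th one,
# simulating the 11 incomplete time series used for comparison
datasets = {0: (expr, time_points)}
for k in range(1, 12):
    datasets[k] = (np.delete(expr, k - 1, axis=1),
                   np.delete(time_points, k - 1))

# e.g. G1 is built from times 1, 2, 4, 8, 12, 13, 14, 16, 20 and 24
assert datasets[1][1].tolist() == [1, 2, 4, 8, 12, 13, 14, 16, 20, 24]
```

Each entry of `datasets` would then be fed to the same DBN reconstruction routine (G1DBN in the paper) so the resulting networks G0..G11 are directly comparable.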
In order to express the undulatory property of those statistics, for example to find which statistics are more insensitive to the time points, we defined the relative diversity score of a statistic Q as follows: for k \u2208 {0, 1, 2, 3, ..., 11}, Q_k is the value of the statistic for network G_k and ave is the average of the Q_k; the diversity score measures the normalized deviation of the Q_k from ave. It is obvious that a low diversity score denotes a low undulatory property, here indicating insensitivity to the time point measurement. The Arabidopsis gene regulatory network built by the DBN method using the R software is shown in the corresponding figure, and statistics of the Arabidopsis gene regulatory networks are listed, where N0 is the number of nodes whose degree is 0 and Rn is the number of linear regulations between genes; the definitions of the other statistics are given above. The distribution of the degree of nodes in the Arabidopsis gene regulatory network shows that over half the genes regulate or are regulated by other genes. About 36% of the nodes have degree 1, while about 5% have degree equal to or greater than 4; that is to say, 39 genes have regulatory relationships with no fewer than four other genes. The gene with the maximum degree is disproportionating enzyme 2 (AT2G40840). It encodes a cytosolic protein with transglucosidase and amylomaltase activity, which suggests an essential role of the carbohydrate metabolism pathway in leaves at night. The betweenness of gene nodes in the network was also calculated and the top forty nodes were picked out. These genes were then mapped to the KEGG database; 21 enriched pathways were identified, and three key genes, At2g21330, At1g43670 and At2g29690, were observed to participate in most of these pathways. The corresponding proteins in other species also have similarly important biological functions. They participate in fundamental metabolic pathways. 
Both At1g43670 and At2g21330 are involved in carbohydrate metabolism: D-fructose-1,6-bisphosphate 1-phosphohydrolase (At1g43670) hydrolyzes fructose-1,6-bisphosphate to fructose-6-phosphate (F-6-P) and inorganic phosphate; fructose-bisphosphate aldolase (At2g21330) catalyzes an aldol cleavage and the reversible aldol condensation of fructose-1,6-bisphosphate. Anthranilate synthase (At2g29690) takes part in amino acid metabolism and is a key enzyme in the synthesis of tryptophan (Trp), indole-3-acetic acid, and indole alkaloids. Other genes, including AT3g01920 (which encodes a yrdC family protein) and AT3g57600, were also noted. Several network statistics of the 12 networks were calculated. The diversity scores of the average degree K (p < 0.05) and of the number of regulations Rn (p < 0.05) are relatively small, while the diversity score of the centralization Ce (p < 0.05) is larger. This indicates that the average degree K and the number of linear regulations between genes Rn are less sensitive to the time points, while the centralization is sensitive to them. Therefore, conclusions about gene regulatory networks based on the former properties are more robust, since they do not vary with the time point measurements. The maximum vulnerability is a valid statistic based on the whole network: it quantifies the maximum loss if one node is deleted from the network, and the larger its value, the less stable the network becomes. The maximum vulnerabilities of G5, G10, G4 and G9 rank among the top four. Goodness of fit can be a crucial criterion to judge the robustness and stability of a network, so Arabidopsis gene regulatory networks deduced from time series microarray data can be assessed for robustness. In order to examine the degree behavior of the 12 networks, degree logarithmic distributions were considered. The values of sensitivity, precision and F-measure were all calculated for the eleven networks. 
Combining the maximum vulnerability, degree distribution, sensitivity, precision and F-measure data of these eleven networks, the time points 16 (G9) and 20 (G10) are found to play a significant role in the Arabidopsis gene regulatory networks. Hence these time points should not be neglected in either network reconstruction or biological experiments. In the same way, the time point 2 (G3) is found to be less important than the others. Moreover, we evaluated the impact of a time period on network construction by deleting two adjacent time points: the networks G2_3 and G9_10 were reconstructed from the data without two points, namely 1 h and 2 h, and 16 h and 20 h, respectively. The maximum vulnerability of G9_10 is 0.0881, larger than that of the other networks; however, the statistics of network G2_3, such as its maximum vulnerability (0.0769) and sensitivity, are not as significant as those of G9_10. This indicates that the period between 1 h and 2 h may not be as important as that between 16 h and 20 h for the reconstruction of the network. On further analysis, the occurrence of gene regulations across the 12 networks should be considered. Some regulations persist during most of the time span of the experiment and should be significant regulations in a sense. On the other hand, gene regulations were absent more frequently in network G9 than in the other networks, which is in agreement with the previous sensitivity and precision analysis; gene regulations and signals in this time period are much more important and should be sampled more densely. In the systems biology era, it has become necessary to study the dynamic behavior of a biological network with time course data for a correct understanding of biological systems. In this study, Arabidopsis gene regulatory networks were constructed from time series data, and the effect of the time point measurements on network reconstruction was investigated. 
We have proposed a novel method to detect the effects of time point measurements: reconstruction of networks based on the deletion of different time points, followed by comparison of network statistics at three different levels, degree, edges and whole network. The time point deletion method can help us to detect the importance of different time points, to find robust network properties and to identify key biological modules that are insensitive to the time point measurement. According to our analysis, network statistics such as the average degree (K) and the number of linear regulations between genes (Rn) are less sensitive to the time point measurement, indicating that these statistics are more meaningful than others even when the time point measurement may not be sufficient. With our time point deletion method, we found that the time points 16 (G9) and 20 (G10) in the Arabidopsis time course data are more important for the correct reconstruction of the Arabidopsis biological network, while the time point 2 (G3) is less important. In addition, we also identified key biological regulations by comparing the different time point deletion data sets. The method proposed in this paper is based on the assumption that network statistics are more comparable if they are generated by the same network reconstruction method. We take the network G0, based on all time points, as the standard network; there is no perfectly correct network that can be considered a gold-standard reference. Of course, there are other choices, such as taking an independently derived network as the gold standard for validation, and further research should be done for this purpose. Other network construction methods based on time-series gene microarray data, such as reconstruction methods that integrate several time course datasets, could also be considered to validate the results."}
+{"text": "The DryNetMC does not only infer gene regulatory networks (GRNs) via an integrated approach, but also characterizes and quantifies dynamical network properties for measuring node importance. We used time-course RNA-seq data from glioma cells treated with dbcAMP (a cAMP activator) as a realistic case to reconstruct the GRNs for sensitive and resistant cells. Based on a novel node importance index that comprehensively quantifies network topology, network entropy and expression dynamics, the top-ranked genes were verified to be predictive of the drug sensitivities of different glioma cell lines, in comparison with other existing methods. The proposed method provides a quantitative approach to gain insights into the dynamic adaptation and regulatory mechanisms of cancer drug resistance and sheds light on the design of novel biomarkers or targets for predicting or overcoming drug resistance. Drug resistance is a major cause of the failure of cancer chemotherapy or targeted therapy. However, the molecular regulatory mechanisms controlling the dynamic evolvement of drug resistance remain poorly understood. Thus, it is important to develop methods for identifying the key gene regulatory mechanisms of the resistance to specific drugs. In this study, we developed a data-driven computational framework, DryNetMC. Leveraging global gene expression patterns to study dynamical mechanisms of cancer drug resistance is an appealing yet challenging task for both experimental and computational biologists. We proposed a dynamic network-based computational method to prioritize the key genes responsible for cancer drug resistance, which is significantly innovative compared to the conventional differential expression method and co-expression network-based differential analysis. In addition, our method is verified to be more accurate compared with several state-of-the-art methods used for inferring GRNs. 
We applied our computational method to a set of time-course RNA-seq data of gliomas, and several novel predictions were verified by additional gene expression/cellular response data or clinical data. Our study provides a principled approach to gain insights into the dynamic adaptation and regulatory network mechanisms of cancer drug resistance, and sheds light on designing new biomarkers or targets for predicting or controlling drug resistance. Drug resistance is often an inevitable event that limits the effectiveness of cancer chemotherapy or targeted therapy. Unraveling the key genes and regulatory mechanisms controlling the acquisition and development of drug resistance remains a challenging task. Therefore, it is important to develop systematic approaches for identifying key gene regulatory mechanisms underlying the resistance of cancer cells to specific drugs or therapeutics. Our preliminary studies have established an experimental model for glioma differentiation therapy with cAMP activators. As a result, distinguishing cellular states between drug sensitivity and drug resistance at the molecular level is critical for the study of drug resistance. The conventional method involves the use of a set of differentially expressed genes to distinguish between drug sensitivity and resistance, but such gene lists do not capture the regulatory relationships among genes. In recent years, network-based approaches have been developed to address this limitation. In this study, we developed a time-course RNA-seq data-driven dynamical network modeling and characterization method to quantify and prioritize key genes for predicting drug sensitivities and designing potential targets against drug resistance. 
The advances and novelty of our method include not only the accurate reconstruction of GRNs via an integrated approach but also the novel characterization and quantification of GRN properties for measuring and ranking node importance. We used glioma differentiation therapy as a realistic case and time-course RNA-seq to investigate the temporal gene expression changes in sensitive and resistant cells. We then reconstructed GRNs for both sensitive cells and resistant cells and analyzed their differences in network topology, local network entropy and expression dynamics. We further designed a novel quantification method to prioritize the most important genes in the differential network that are responsible for drug resistance by considering the network topology, network entropy and adaptation dynamics. The clinical data verified that the top-ranked genes were associated with the targeted therapeutic response and prognosis of glioma patients. Furthermore, in vitro experimental data were employed to validate the drug sensitivity prediction based on the similarity of the temporal patterns of the prioritized key genes. We compared our method with other methods, including the conventional differential expression analysis and the differential co-expression network\u2013based method. The computational method developed in this study is generally applicable to the analysis of time-course RNA-seq data designed for studying drug resistance in many cancer types. The computational pipeline for the time-course transcriptome-based modeling and characterization of the GRNs underlying drug resistance is illustrated in Fig 1; below, we describe the details of each step. Gene expression was measured at time points T1, T2, \u2026, TK following drug treatment, and the raw RNA-seq reads were processed using a standard pipeline. For each gene, a piecewise cubic Hermite interpolation polynomial x(t) was constructed such that x(Tk) = uk and the derivative of x(t) is continuous; the details of its construction are provided in S1 Text. 
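The interpolation step above can be sketched with SciPy's PCHIP interpolator, which implements exactly this kind of shape-preserving piecewise cubic Hermite scheme; the toy time points and values are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# measurement times (hr) as in the experiment, and one gene's expression
T = np.array([0.0, 6.0, 12.0, 24.0, 48.0])
u = np.array([1.0, 3.5, 2.0, 1.2, 1.1])

# piecewise cubic Hermite interpolation: x(T_k) = u_k, x'(t) continuous,
# and the interpolant stays within the local data range (no negative
# overshoot, unlike an unconstrained cubic spline)
x = PchipInterpolator(T, u)

t_dense = np.linspace(0.0, 48.0, 481)   # n uniformly spaced points
x_dense = x(t_dense)
assert np.allclose(x(T), u)             # interpolation constraint holds
assert x_dense.min() >= 0.0             # nonnegativity preserved here
```

Sampling `x_dense` on the uniform grid is what produces the x(t1), x(t2), ..., x(tn) values used by the dynamic model in the next step.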
We chose this method because x(t) preserves the monotonicity, local extrema and nonnegativity of the expression data. We noted that other interpolation methods, such as polynomial splines or cubic splines, might result in unrealistic negative interpolated values and unexpected variations in gene expression. We then uniformly took n points from x(t), denoting them x(t1), x(t2), \u2026, x(tn), where n is chosen large enough that the spacing tk+1 \u2212 tk is sufficiently small. We next inferred the causal relationship between each pair of nodes using the time-series gene expression data from sensitive cells and resistant cells. We used a set of ordinary differential equations (ODEs) to model the dynamic gene regulatory network, in which xi(t) represents the continuous expression of gene i at time t and L is the number of nodes in the network. The regulatory function fi can be linear, piecewise linear or nonlinear; to reduce the model complexity, we assumed a linear form in which aij is the interaction strength from xj to xi, bi is a constant accounting for the effects of degradation or self-activation, and eij represents the prior information and association between gene i and gene j in an initial correlation network G (see details in S1 Text). Approximating the derivative by yi(tk) = (xi(tk+1) \u2212 xi(tk))/(tk+1 \u2212 tk), the continuous model can be discretized. We then denote Y = (yi(tk))L\u00d7K, X = (xj(tk))L\u00d7K, E = (eij)L\u00d7L, A = (aij)L\u00d7L and B = diag(bi), so that Y = (E\u2218A + B)X + \u03b5, where \u03b5 = (\u03b51, \u03b52, \u22ef, \u03b5L)T is the error term and \u03b51, \u03b52, \u22ef, \u03b5L are mutually independent normal random variables with means 0 and variances \u03c31, \u03c32, \u22ef, \u03c3L, respectively. Note that E\u2218A represents the Hadamard product of E and A. 
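The discretization above can be sketched by building the response matrix Y of finite-difference derivatives and the design matrix X from the interpolated trajectories; the random trajectories stand in for the real interpolated data.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n = 5, 100                          # genes, interpolated time points
t = np.linspace(0.0, 48.0, n)
X_full = np.abs(rng.normal(1.0, 0.3, size=(L, n)))  # stand-in for x_i(t_k)

# forward-difference approximation of dx_i/dt on the uniform grid,
# giving the response matrix Y and design matrix X of the linear model
# Y = (E . A + B) X + error (the discretized ODE system)
dt = t[1] - t[0]
Y = (X_full[:, 1:] - X_full[:, :-1]) / dt   # L x (n-1)
X = X_full[:, :-1]                          # L x (n-1)
assert Y.shape == X.shape == (L, n - 1)
```

With Y and X in hand, estimating the interaction matrix A row by row reduces to an ordinary (regularized) linear regression problem.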
We estimated A and B by fitting the model to the experimental data (Xexp and Yexp) using a regularized regression method implemented in the R package \"glmnet\". To determine whether an edge in the GRN model is redundant, we examined the impact of removing an edge on the model fitting result. We defined a strength threshold \u03b8 to select the significant edges in the network: the edge from gene j to gene i was preserved if |aij| \u2265 \u03b8 and deleted if |aij| < \u03b8. The Bayesian information criterion (BIC) was employed to quantify the trade-off between the goodness-of-fit and the complexity of the network model. For a network with p nonzero edges fitted to the experimental data with N (= L\u00d7K) samples, the BIC was calculated as BIC = N\u00b7ln(RSS/N) + p\u00b7ln(N), where RSS is the residual sum of squares of the fit. Given a series of thresholds \u03b8, and thus different edge numbers, the lowest BIC identifies the preferred model, possessing both good predictive power and network simplicity. Let GS and GR denote the sensitive and resistant networks derived from the above method, respectively. We then defined the differential network as the set of edges present in the resistant network (ER) but not in the sensitive network (ES), D = ER \\ ES, visualized with Cytoscape. The genes in the differential network were inputted into the Gene Ontology (GO) knowledgebase (http://www.geneontology.org/) for functional annotation, together with GeneCards (http://www.genecards.org/). We subsequently sought to measure the importance of each node in the differential network by considering the network topology, interaction strengths and gene expression dynamics. We defined the hub score, local network entropy and adaptation score for each node as follows and then integrated these values into a single comprehensive index. For the hub score, we considered the adjacency matrix A of the differential network D, formed H = AAT, and used it to estimate the value of each node's links to other nodes. 
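The regularized fit and BIC-guided edge selection can be sketched for one row of A. The paper uses glmnet in R; scikit-learn's `Lasso` is a stand-in here, and the specific alpha and threshold grid are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
L, n = 5, 60
X = rng.normal(size=(L, n))                 # expression of all genes
a_true = np.array([0.8, 0.0, -0.6, 0.0, 0.0])
y = a_true @ X + 0.01 * rng.normal(size=n)  # derivative row for gene i

# regularized regression for one row of A (glmnet in the paper;
# sklearn's Lasso plays the same role here)
fit = Lasso(alpha=0.05).fit(X.T, y)
a_hat = fit.coef_

def bic(theta):
    """BIC trade-off for edge selection: keep edge j->i iff |a_ij| >= theta."""
    kept = np.where(np.abs(a_hat) >= theta, a_hat, 0.0)
    rss = float(np.sum((y - kept @ X - fit.intercept_) ** 2))
    p = int(np.count_nonzero(kept))
    return n * np.log(rss / n) + p * np.log(n)

thetas = [0.0, 0.01, 0.1, 0.5]
best = min(thetas, key=bic)     # threshold with the lowest BIC
```

Scanning `thetas` and keeping the lowest-BIC threshold is the trade-off between goodness-of-fit and network simplicity described in the text.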
The hub score of node i was defined as hi, the i-th component of the principal eigenvector of H. By viewing the GRN as a random walk of information flow over A = (aij)L\u00d7L, the entropy of node i was defined from the normalized interaction strengths pij to its neighbors, Si = \u2212\u03a3j\u2208N(i) pij log pij, where N(i) is the set of all neighbors of node i. Higher entropy indicates that the gene network is more robust with respect to perturbations. The entropy change score for the genes in the differential network was defined as the difference between the entropies of node i in the resistant and sensitive networks. To quantify the adaptation dynamics of each gene in the differential network, we calculated the relative response of each gene by measuring the dynamic change in gene expression over the treatment window (T was 48 hr in our experiment) and defined an adaptation score from the relative responses of gene i in the sensitive and resistant networks: if gene i presented a more adaptive response to the drug treatment in the resistant cells than in the sensitive cells, a high adaptation score was obtained. We ranked the hub score (H), entropy (S) and adaptation score (D) and defined the importance score Ii for each node as the normalized rank sum of these values (dimensionless). The rankings of the importance score were used to prioritize key genes responsible for the drug resistance. The DryNB incorporated the node importance measurement to select the genes with the most clinical relevance (see S1 Text), and their differential expression between normal and tumor tissues was assessed using the patient data (S2 Text). 
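The three node scores and their rank-sum combination can be sketched as below. The eigenvector hub score and the entropy from normalized outgoing strengths follow the definitions above; the random adaptation scores are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 6
A = rng.normal(scale=0.5, size=(L, L))      # differential-network weights

# hub score: principal eigenvector of H = A A^T
H = A @ A.T
w, v = np.linalg.eigh(H)
hub = np.abs(v[:, -1])                      # eigenvector of largest eigenvalue

# local network entropy from normalized outgoing interaction strengths
P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
entropy = -(P * np.log(P + 1e-12)).sum(axis=1)

adaptation = rng.uniform(size=L)            # stand-in for the adaptation score

def ranks(x):
    """Rank values 1..L (1 = smallest)."""
    return np.argsort(np.argsort(x)) + 1

# comprehensive importance: normalized rank sum of the three scores
importance = (ranks(hub) + ranks(entropy) + ranks(adaptation)) / (3.0 * L)
top = np.argsort(importance)[::-1]          # gene indices, most important first
assert importance.shape == (L,)
```

Sorting by `importance` yields the prioritized gene list that the subsequent clinical validation is based on.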
Additional RNA-seq data from another glioma cell line, U87MG, were obtained in this study to test the responses of these cells to dbcAMP treatment, verified by the morphological changes of U87MG cells. The temporal expression profiles of the prioritized genes were validated by qPCR experiments. Our experimental data demonstrated that two glioma cell lines, DBTRG-05MG and LN-18, showed differential sensitivities to treatment with dbcAMP, an activator of cAMP. In the following text, we refer to DBTRG-05MG and LN-18 cells as sensitive and resistant cells, respectively. The RNA-seq data for both sensitive and resistant cells were measured at 0, 6, 12, 24 and 48 hr following drug treatment; the first time point corresponds to the untreated condition. A principal component analysis (PCA) of the temporal expression of all the genes in the sensitive and resistant cells showed that they followed different phenotypic trajectories: the cellular states of the sensitive cells changed over time and moved far from their starting point, whereas the resistant cells first moved away and then returned closer to the starting point. This result suggested that the resistant cells show a dynamic adaptive response under dbcAMP treatment. Most TCGs in the sensitive cells showed \"monotonic response\" patterns, in which the gene expression exhibited an ascending or descending trend, whereas most TCGs in the resistant cells exhibited \"adaptive response\" patterns, in which the gene expression temporally increased or decreased at an early stage, followed by a return to near-baseline levels at the late stage. Furthermore, we quantitatively evaluated the tendencies of monotonic response or adaptive response of the TCGs. 
Clearly, the TCGs in the resistant cells tended to have a higher adaptive response score but a lower monotonic response score compared to the TCGs in the sensitive cells, with statistical significance assessed in S2C and S2D Fig. We selected the significant TCGs to analyze their temporal expression patterns (S3 Fig). Prior to working with our realistic RNA-seq data, we tested the effectiveness of the DryNetMC with respect to GRN inference using a synthetic dataset. Time-course gene expression data were simulated using a set of ODEs based on a five-node true network with typical network motifs; sampled data at 0, 6, 12, 24 and 48 hr were obtained and Hermite interpolations were performed. The performance of the DryNetMC with respect to GRN inference was compared with several existing methods, including a PCC-based correlation network method (PCCNet), a tree-based method, a dynamic Bayesian network method and an ODE-based method. The AUC of the DryNetMC (S3D Fig) is notably greater than that of the other methods. Moreover, we compared the performance of the DryNetMC with the other methods on 100 simulated datasets (see details in S1 Text); the comparison of the AUC values demonstrated that the DryNetMC significantly outperformed the other methods. S5 Fig shows the parameter estimation and edge selection results from the network reconstruction. S5C and S5D Fig show the frequency distributions of the mean cross-validation errors; most of these errors were less than 0.003, indicating good accuracy of the fitting and reliable network reconstruction. S5E and S5F Fig demonstrate the change in the BIC values against the absolute edge-strength threshold (\u03b8); the minimal BIC values were achieved at approximately 0.01. 
Therefore, we used 0.01 as the strength threshold to select the significant edges in the networks. We then used real RNA-seq data from DBTRG-05MG and LN-18 cells treated with dbcAMP to reconstruct the sensitive and resistant networks using the dynamic modeling method. Fig 3A and 3B visualize the reconstructed GRNs for the sensitive and resistant cells: the genes are represented as circled nodes, and the activating and repressing interactions are represented as red arrows and blue lines, respectively. The sensitive and resistant networks showed different connections between most pairs of genes, suggesting that different gene regulations underlie the sensitive and resistant cellular states. We defined a differential network (Fig 3C) by preserving the edges in the resistant network that should be specifically responsible for the resistant cellular state. The functional enrichment of the genes involved in the differential network revealed that cell cycle and chromosome segregation were crucial pathways, which suggested that these biological processes are significantly involved in drug resistance during glioma differentiation therapy. We next examined the topological difference between the sensitive and resistant GRNs. The number of feedback loops in the resistant network was significantly larger than that in the sensitive network (gray bars). We also investigated two- and three-node feedback motifs, including positive feedback (PF), negative feedback (NF), positive-positive feedback (PPF), positive-negative feedback (PNF), and negative-negative feedback (NNF), and measured the number and percentage of the various feedback loops in the sensitive and resistant networks; the percentage of each type of feedback loop in the resistant network was substantially higher than that in the sensitive network. These results revealed that the genes exhibit more complex regulatory relationships in the resistant cells, which enables the resistant cells to show increased robustness in response to the drug attack. 
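The two-node motif census above can be sketched on a signed adjacency matrix; the sign-product classification into positive and negative loops is the standard convention and is assumed here.

```python
import numpy as np

def count_two_node_feedback(adj):
    """Count two-node feedback loops (i -> j and j -> i both present) in a
    signed adjacency matrix and classify each by the sign of its edge
    product: positive feedback (PF) or negative feedback (NF).
    adj[i, j] > 0 means activation i -> j, < 0 repression, 0 no edge."""
    L = adj.shape[0]
    counts = {"PF": 0, "NF": 0}
    for i in range(L):
        for j in range(i + 1, L):
            if adj[i, j] != 0 and adj[j, i] != 0:
                counts["PF" if adj[i, j] * adj[j, i] > 0 else "NF"] += 1
    return counts

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, -1, 0]])
# genes 0 and 1 form a positive loop; genes 1 and 2 a negative one
assert count_two_node_feedback(adj) == {"PF": 1, "NF": 1}
```

Running the same census on the sensitive and resistant adjacency matrices gives the loop counts and percentages compared in the text; three-node loops extend the idea to ordered triples.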
The local network entropy was increased in the resistant network compared with the sensitive network, which indicated the increased robustness of the resistant network with respect to perturbations. This result is consistent with the dynamic changes in gene expression in the two networks in response to dbcAMP treatment (S1 Text). To test the functional roles of the top-ranked genes in the drug resistance of glioma patients and their clinical relevance, we developed a DryNB model through incorporation of node importance measurement into a logistic model. The identified DryNB included seven genes that were significantly associated with drug sensitivity or resistance of glioma patients to the targeted therapies. The TCGA samples of glioma patients who received targeted therapies were randomly divided into the training and test subdatasets based on several different sample ratios to the total sample number (N = 289). The sample division at each ratio was repeated 100 times. The areas under the ROC curves (AUCs) were computed for both the training dataset and the test dataset, which demonstrated good accuracy of the DryNB for predicting drug response. In addition, the seven identified genes showed differential expression profiles in the normal and tumor tissues of glioma patients. Fig 5D further shows the significant association of the identified seven genes with the survival probability of glioma patients based on the data from the TCGA samples (N = 610). These results demonstrated the significant clinical relevance of the identified genes and verified their important roles in drug resistance and cancer progression. Notably, the five top-ranked genes were consistently included in the DryNB. These results demonstrated superior capability of the DryNetMC in discovering potential functional genes compared with the conventional method.
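Local network entropy, used above as a robustness measure, can be illustrated with Shannon entropy over a node's normalized absolute edge weights. The paper's exact per-node definition (its Eq) is not reproduced in this excerpt, so this is a generic sketch of the idea.

```python
import math

def local_entropy(edge_weights):
    """Shannon entropy of a node's normalized absolute edge weights.

    Uniformly spread regulation gives high entropy (more robust to the
    loss of any single edge); regulation dominated by one strong edge
    gives low entropy.
    """
    w = [abs(x) for x in edge_weights if x != 0]
    total = sum(w)
    probs = [x / total for x in w]
    return -sum(p * math.log(p) for p in probs)
```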
We further compared the significance of the clinical relevance of the top-ranked genes identified respectively from the DryNetMC and the conventional differential expression method. We then tested whether the temporal patterns of the prioritized 5 genes could be used to predict drug sensitivities of different glioma cell lines. RNA-seq data from another glioma cell line, U87MG, was used for testing. Fig 6A shows the expression profiles of genes in the differential network in the sensitive, resistant and tested cells. Fig 6B shows the time-course expression patterns of the prioritized 5 genes in the three cell lines, which were consistent with the qPCR experiments. We employed the dynamic time warping (DTW) algorithm to measure the distance between temporal expression patterns. Fig 6C shows that the DTW distance of the tested cell line to the sensitive cell line was significantly smaller than that to the resistant cell line, with a p-value equal to 0.03125 (one-tailed Wilcoxon signed rank test). Therefore, we hypothesized that the response of the tested cells to the cAMP treatment was more similar to that of the sensitive cells. As such, we used our experimental data of the morphological changes exhibited by the three cell lines in response to different doses of dbcAMP (Fig 6D), and the dose-response curve of the tested cell line was closer to that of the sensitive cell line. Fig 7A and 7B indicate that the expression patterns of the gene sets prioritized by DESeq2 (according to fold changes) or by GSNCA were not similarly predictive (S1 Text). We randomly selected 5 DEGs and set them as marker genes to evaluate the distance from the tested cells to the sensitive cells (DS) or to the resistant cells (DR). The difference value (D = DS\u2212DR) was calculated; a smaller D indicates that the tested cells are closer to the sensitive cells. The above process was repeated 1000 times. Fig 7C shows the distribution of the above difference values.
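The DTW comparison above (distance of the tested cell line's temporal patterns to the sensitive versus resistant lines) rests on the classic dynamic-programming recursion, sketched here with an absolute-difference local cost. The paper's cost function and any warping-window constraints are not specified in this excerpt.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A tested cell line would then be called "sensitive-like" when `dtw_distance(tested, sensitive)` is smaller than `dtw_distance(tested, resistant)` for most marker genes.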
Let d denote the difference value of the two distances derived from the top 5 genes prioritized by the DryNetMC; the probability P(D\u2009<\u2009d) under this random-marker null distribution was then computed. Our technique, based on the branching process, responds to our concern in previous ad hoc treatments about aspects of the process that cannot be inferred from the study of gene pairs alone. Results on Populus trichocarpa and Brassica oleracea prompt an extension to three-event models, which we carry out, and suggest further work on higher numbers of events. In the next section, we summarize the general branching process approach to analyzing the distribution of gene pair similarities. We then focus on four competing two-event models involving WGD and/or WGT. We define four types of gene triplet according to whether the gene pairs within them were created by the first event, the second event, or both. Within each model, we calculate the expected number of triplets of each type. This creates an \u201cunderlying\u201d profile of triplet distribution to compare to the \u201cobserved\u201d profile of triplets in the data. Because of the way the two components of the pair similarity distribution overlap, however, the origin of each triplet in the data is not always obvious. Thus we create a \u201cpredicted\u201d profile of triplet distribution by grafting a paralog divergence model onto the branching process, making use of a maximum likelihood dividing point between the two components. We apply this analysis to the genomes of durian, poplar (Populus trichocarpa) and cabbage (Brassica oleracea). Denote by mi the total number of individuals (genes) at time ti, i=1,\u2026,n. Set m1=1.
At time ti, i=1,\u2026,n\u22121, each of the mi genes is replaced by ri\u22652 progeny, but only j\u22651 of them survive until time ti+1. Conditioning on at least one survivor per gene, the probability that a given gene leaves j surviving progeny is P(j) = C(ri, j) s^j (1\u2212s)^(ri\u2212j) / (1 \u2212 (1\u2212s)^ri), j=1,\u2026,ri, where s is the per-progeny survival probability; this is the ri-nomial law conditioned on at least one survivor. The probability distribution of the evolutionary histories represented by a given trajectory up to time tn is the product of these terms over all genes and generations. We will study the case of two successive polyploidy events, with r1 and r2 taking on values 2 or 3, i.e., WGD or WGT, in all four combinations. This is illustrated by the sample trajectory in the figure: a WGT at time t1 with all three progeny surviving \u2013 the 3-nomial sample has value 3 \u2013 followed by another independent WGT at time t2 where the three lineages show one, two or all three offspring surviving, i.e., the independent 3-nomial samples have values 1, 2 and 3, respectively. To infer parameters like fractionation rate in the polyploidization history of a genome, based on the distribution of gene pair similarities, we need to know the ploidies ri of the various events. This motivates us to extend our study from gene pairs only to also include gene triplets. Triplets are of type {ti,tj,tk}, where each of i, j and k may be 1 or 2. In the sample trajectory, the blue dots form a {t2,t2,t2} triplet and the red dots form a {t1,t1,t2} triplet. The single red dot combines with the three pairs of blue genes to form three additional {t1,t1,t2} triplets, and there are a further nine {t1,t1,t2} triplets in the sample. We can calculate the expected number of triplets of each type by enumerating the triplets of each type in each possible trajectory of the process, and multiplying by the probability of that trajectory. The expected numbers WM(\u0394) of each triplet type \u0394 constitute the underlying profile of the model M.
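The survival step above (each gene leaves r_i progeny, of which j survive under an r_i-nomial law conditioned on at least one survivor) can be simulated directly. The per-progeny survival probability s is a free parameter here, standing in for the u and v estimated by maximum likelihood later in the text.

```python
import random

def conditioned_survivors(r, s, rng):
    """Number of surviving progeny out of r, survival probability s,
    conditioned on at least one survivor (rejection sampling)."""
    while True:
        j = sum(rng.random() < s for _ in range(r))
        if j >= 1:
            return j

def branching_population(events, seed=0):
    """Population sizes after successive polyploidy + fractionation events.

    events: list of (r_i, s_i) pairs, r_i = 2 for WGD, 3 for WGT.
    Starts from a single gene (m1 = 1).
    """
    rng = random.Random(seed)
    m, sizes = 1, []
    for r, s in events:
        m = sum(conditioned_survivors(r, s, rng) for _ in range(m))
        sizes.append(m)
    return sizes
```

With survival fixed at 1.0 (no fractionation), a WGD followed by a WGT yields populations 2 and then 6, matching the deterministic ploidies.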
The underlying profiles for each model, based on maximum likelihood values of u and v, are given in the top half of the tables, with u\u2032=u\u00b2 and v\u2032=v\u00b2; these tables provide the expected number of triplets of each type. We can try to categorize the set of triplets in a set of data by how closely they resemble one of the four basic types. If the two components of the similarity distribution were completely separate, this would also be an easy matter. But the usual large overlap between the components means that we cannot automatically ascribe any data triplet to any particular underlying triplet. As a solution to this problem, we first try to find the best transition, or cutoff, point H somewhere between the peaks of the two components. For this we compute the product of the probability density at each similarity value less than H, according to the component with mean at t1, and the density at each similarity value greater than H, according to the component with mean at t2, and maximize with respect to H. If a similarity x is less than H, we classify it as being produced at time t1, and we write x\u2208I. If it is greater than H, we classify it as being created at time t2, and we write x\u2208J. This creates eight \u201coctants\u201d, defined by the 8=2\u00b3 combinations of the three triplet similarities, which in turn are collapsed into the four types of triplet: {t1,t1,t1}, {t1,t1,t2} (representing three octants), {t1,t2,t2} (representing three octants) and {t2,t2,t2}.
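The maximum-likelihood cutoff H described above can be found by a simple grid search between the two component means. Normal components with known means and standard deviations are assumed, matching the mixture-of-normals setting; the grid resolution is an implementation choice.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def best_cutoff(similarities, mu1, sd1, mu2, sd2, steps=200):
    """Grid-search the transition point H between the component peaks that
    maximizes the log of the 'product of densities' criterion: values below
    H are scored under the t1 component, values above under the t2 one."""
    best_H, best_ll = mu1, -math.inf
    for k in range(steps + 1):
        H = mu1 + (mu2 - mu1) * k / steps
        ll = sum(math.log(normal_pdf(x, mu1, sd1)) for x in similarities if x < H)
        ll += sum(math.log(normal_pdf(x, mu2, sd2)) for x in similarities if x >= H)
        if ll > best_ll:
            best_H, best_ll = H, ll
    return best_H
```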
We then categorize the triplets in the data according to the transition value H; the numbers of triplets of the four types we call the observed profile. Although we can compare the observed profile with the underlying profile, this comparison is not too meaningful, since it neglects the fact that many of the data triplets classified as one type may be generated by a different underlying type, not as an error, but simply as a result of the normal process of duplicate gene sequence divergence clearly operative in the more or less dispersed and overlapping components of the distribution of gene pair similarities. We can, however, take this process into account in producing a predicted profile of triplet types for each model. We first calculate the variance-covariance matrix \u03a3 of the t1 similarities in triplets containing at least two of them and of the t2 similarities in triplets containing at least two of these. We fixed cov = 0, in accordance with the Markov nature of the branching process. The predicted profile is produced by integrating over the trivariate normal with means drawn from the EMMIX analysis or identified by eye with the distribution component peaks, and covariance estimated as above, restricted to the domains defined by the transition point. Thus our prediction of {t1,t1,t1} triplets would involve a restriction to the domain where all three coordinates are less than or equal to H. Our prediction of apparently {t1,t1,t2} triplets would be confined to the three octants where two coordinates are less than or equal to H and one is greater. The integrals are weighted by W(\u0394), the expected number of triplets. For example, the predicted number of {t1,t1,t2} triplets would be the integral of the trivariate normal density with mean vector \u03bc and covariance \u03a3 over these three octants, multiplied by W({t1,t1,t2}), where \u03bc is the vector of component means.
To summarize, we have defined three types of triplet profile: the observed profile, based on triples of genes all having high similarity scores with each other, which can be compiled from the list of gene pairs produced by the SYNMAP function of COGE; the underlying profile for each model; and the predicted profile for each model. For comparative purposes we normalize the underlying and predicted profiles so that the total number of triples is the same as in the observed profile. We compare the three profiles for three well-studied flowering plant genomes that are known to have undergone multiple polyploidizations in the last 120 million years, to see if our method predicts the right combination of WGT and WGD. Starting with the durian genome, the (3,3) model, known to represent the true evolutionary history, is the only one with a credible prediction profile, particularly for the {t1,t1,t1} and {t2,t2,t2} triplets. This indicates the potential of our statistical method, consistent with the polyploidization history reported in the original durian sequence article. We next consider the Populus trichocarpa genome (COGE ID 25127), whose gene pair similarity distribution is displayed in the figure. Besides the ancient \u03b3 event, this genome shares the \u201csalicoid\u201d WGD with other members of the Salicaceae family. The predicted profile of the (3,2) model identifies the \u03b3 event as a WGT, and the more recent event as a WGD. The recent ancestor of the Brassica oleracea genome (COGE ID 26018) underwent a WGT that gave rise not only to the crucifers and mustard genera, but also radishes and other related genera. Earlier than that, a WGD called the \u03b1 doubling is apparent in the whole range of family Brassicaceae genera, including Arabidopsis. A still earlier WGD, the \u03b2 doubling, occurred in the order Brassicales lineage that includes the Brassicaceae. Thus the cabbage genome counts \u03b3, \u03b2, \u03b1 and a Brassica WGT in its evolutionary history.
This two-event analysis conflates with the \u03b1 event the similarities generated by the \u03b2 and even \u03b3 doublings greater than 76%. We can partially correct this by adding a third event to our branching process. This leads to eight models instead of four, and ten kinds of triplet, summarized in the table. We fix the mean of the first component at 71% to account for the \u03b3 event, that for the second component, representing the \u03b1 event, at 79.5%, and we find two ML discrimination points, as in the figure. The results show an excess of {t2,t2,t2} triplets: the absence of a distinction between the \u03b1 and \u03b2 events means that the similarities they generate are all conflated to yield an excess of t2, and consequently an excess of t2 triples, so that a WGT is inferred rather than a WGD. The obvious remedy for this would be to construct four-event models (sixteen of them), with profiles consisting of 20 different kinds of triplet. We leave this for further work. In general the number of models is exponential, 2^m for m events, while the number of kinds of triplet follows the tetrahedral sequence (A000292 in the OEIS). We model the process of fractionation to account for the distribution of gene pair similarities after a number of whole genome doublings, triplings, etc., each followed by a period of duplicate gene loss. The model is a discrete-time branching process, with synchronous birth number ri\u22652 across the i-th generation population and deaths determined by an ri-nomial law conditioned on at least one survivor. The observations of gene pair similarities consist of a mixture of normals, each component generated by one event, with the event time estimated by the sequence divergence from the event to the present. Despite the overlapping distributions, we can estimate the mean (via a local mode), standard deviation and proportion of the sample for each component. Clear cases like this are rare, however, especially for genomes where the last polyploidization is more remote in time. It is true that in some cases, such as that we presented in previous work concerning Brassica rapa, evidence could be weighed against the alternative of tetraploidization. That kind of data, however, namely speculation about the number of single-copy genes in the current genome, was extremely subjective in that report, and is unreliable even when assessed by the best available methods on well-assembled and annotated genomes. A distribution of gene pair similarities is generated in the comparison of two related genomes as well as in the self-comparison of a single genome. The number of orthologous gene pairs available when comparing two related genomes is generally much greater than the number of paralogous pairs identified in the self-comparison of a single genome, simply because the loss by fractionation of one copy of a duplicated gene does not eliminate all related orthology pairs: the other remaining copy and its orthology pairs remain intact. This suggests an avenue to improved accuracy of polyploidy level inference. The larger number of data, however, may not always compensate for the fact that the speciation component of the similarity distribution is always the most recent one, so that"}
+{"text": "Identification of differentially expressed genes has been a high-priority task in downstream analyses to further advances in biomedical research. Investigators have been faced with an array of issues in dealing with more complicated experiments and metadata, including batch effects, normalization, temporal dynamics, and isoform diversity. To date, there are no standard approaches to precisely and efficiently analyze these moderate or large-scale experimental designs, especially with combined metadata. In this report, we propose comprehensive analytical pipelines to precisely characterize temporal dynamics in differential expression of genes and other genomic features, i.e., the variability of transcripts, isoforms and exons, by controlling batch effects and other nuisance factors that could have significant confounding effects on the main effects of interest in comparative models and may result in misleading interpretations. Similar to microarrays, investigators have been increasingly conducting experiments that focus on ontological alterations across a series of time periods. Perhaps even more popular is the use of longitudinally repeated measurements at different time points in relation to some baseline stimuli or perturbation. Prior to the main downstream analyses, a prerequisite step must be the removal of experimental artifacts and unwanted sample-to-sample variation using appropriately proposed methods in pipelines. While this has long been recognized as an important step in the analysis of high-throughput data, it has largely been overlooked in the detection of significant differential expression [18]. The purpose of this research is to develop data management procedures for the increasing wealth of data being generated by new approaches and to deepen the characterization of temporal dynamics by including isoform diversity in addition to gene-level analyses.
In this study, we describe how to incorporate improved strategies to remove systematic biases and to fully characterize temporal dynamics by accounting for data-driven inherent features. This is based on our large-scale time course longitudinal stimuli-response data at every step, along with a panorama snapshot of the entire workflow through Day 14 (D14); 8 biological replicates at each time point were utilized in this study. To characterize the complexity of temporal dynamics, our proposed Bayesian dynamic AR method, with batch correction and isoform diversity, was compared to existing static and other dynamic methods. A schematic illustration of the entire analytical strategy, with detailed analytical pipelines at each window, is depicted in the METHODS section and in Supplementary Fig.\u00a0The first pipeline was used to analyze our raw data without regard to the sophisticated diagnosis in the exploratory analyses; an analysis of the treatment versus control group over time was also conducted for our within-subject time series data for comparison. The latter method showed less sensitivity than our proposed Bayesian dynamic autoregressive model because our experimental setting is designed as a within-subject stimuli-response (data not shown). In this study, our experimental dataset is a meta-framed longitudinal time course with repeated measurements, i.e., a within-subject stimuli-response dataset.
In principle, within-subject longitudinal stimuli-response data is applied to a Bayesian dynamic autoregressive model (AR), which more precisely targets the repeatedly measured time course RNA-Seq data, as was proposed in our previous study. Conclusively, when performing differential expression analysis with well-qualified samples after correction for systematic artifacts, and making use of the desired dynamic method implemented for longitudinal data, we clearly detected more highly significant genes and isoforms that were insignificant in pipeline 1 due to the significant lurking factor (lane effect) in our experiment, as confirmed in the multiple results (Dataset\u00a0). Interestingly, when compared to na\u00efve static methods, our proposed Bayesian dynamic AR method has the capability to detect significantly differentially expressed genes even before correction. This suggests that our Bayesian dynamic AR model explicitly captures the variability of replicates within a group and the extra variability due to the experimental systematic artifact of lane effect on the given data. In other words, as shown in Dataset\u00a0, the temporal dynamic genes significantly identified by the Bayesian dynamic AR model were mostly included in the list from the static Fisher exact test, which was based on before-correction data, in good agreement with the AR-based RNAseq results. There existed a strong positive correlation between log2 transformed fold changes calculated from the normalized hit counts (RNAseq) and those derived from the gene copy numbers detected by qRT-PCR, as shown in Fig.\u00a0. Our selected genes that were uniquely identified using the AR model were validated independently by qRT-PCR (Fig.\u00a0and Dataset\u00a0).
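The qRT-PCR validation above compares log2 fold changes from the two platforms. A minimal sketch of that check follows; the eps guard against zero counts is our addition, not something stated in the text.

```python
import math

def log2_fold_changes(treated, baseline, eps=1e-9):
    """Per-gene log2 fold change; eps guards against zero counts/copies."""
    return [math.log2((t + eps) / (b + eps)) for t, b in zip(treated, baseline)]

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applying `pearson` to the RNA-seq and qRT-PCR fold-change vectors quantifies the cross-platform agreement reported above.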
To identify various temporal patterns at the isoform level, we applied the most popular and robust methods, Cuffdiff and DEXSeq. Analogous to the gene-level analyses before the correction of the systematic artifact of lane effect, for both exon- and isoform-level analysis relatively few significant sets were identified by the existing static Cuffdiff and DEXSeq methods, implicating suspected confounding effects (Datasets\u00a0). We carried out functional analyses based on the data after correction of unwanted biases in both static and dynamic methods by pipeline 2, using pooled data. There are still many issues to be further addressed in a variety of more complex experimental designs, such as temporal RNA-Seq data, and especially in combined time course RNA-Seq datasets, in terms of the development of computational methods in the community. Examples include significant artifacts such as batch issues. Additional issues include whether the data is a single or multiseries factorial time course dataset, a circadian rhythmic periodical dataset, a single cell cycle dataset, or a meta-framed time course. Also important is the establishment of a gold-standard benchmarking list in databases. As an extension of this study, we are currently developing Bayesian dynamic models to define temporal dynamics for between-subject stimuli-response in factorized RNA-Seq time course experiments to better take into account systematic biases, multiple experimental factors, and isoform diversity in a model of differential expression in R and OpenBUGS (WinBUGS) (implementation is underway).
Another promising study in temporal dynamics would be to characterize a progressive disease model, such as pediatric cancer progression, by targeting initiation, progression time points and period, perturbation of cancer progression, and estimation and prediction of unobserved time points. More specifically, complete analytical pipelines for noncoding RNAs (ncRNAs) are also needed, as aberrant patterns in (very) long and noncoding RNAs have been explored as important biomarkers for the classification of subtypes of cancers and other diseases, as primary factors in oncogenesis, and for therapeutic effects. To identify differentially expressed genes between the baseline time point (D0) with ruminal infusion of butyrate versus later time points in the comparative tests, that is, D0 vs D1 (to D14), we computed the posterior tail probability of the time-point effect for each gene; it indicates the significance of differential expression for each gene-by-gene test. This method can be straightforwardly extended to detect temporally differential expression at the quantification level of isoforms and other genomic features. Strikingly, our proposed dynamic model enables the inclusion of the factor of systematic biases from lane effects, estimated during preprocessing, in the differential expression analysis. Furthermore, this dynamic model has the capability to simultaneously infer multiple factors in various experimental and clinical settings, such as other additional nuisance factors resulting in unwanted systematic biases in a given dataset. To precisely characterize temporal dynamics in within-subject stimuli-response longitudinal RNA-seq data in the format of a single-series time course experiment, we employed an autoregressive model (AR) that had been initially proposed to account for the count property with Poisson-gamma and time dependency in our previous study, together with multivariate analyses and the Venny online tool.
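The tail probability used above for gene-by-gene testing can be estimated from posterior draws of the D0-versus-Dd contrast. This is a generic Monte Carlo sketch, not the exact OpenBUGS model; the contrast definition and threshold are assumptions.

```python
def tail_probability(contrast_draws, threshold=0.0):
    """Posterior probability that the time-point contrast exceeds a
    threshold, estimated from MCMC draws (e.g. of log-expression at
    day d minus day 0). Values near 1 or 0 flag differential expression."""
    return sum(d > threshold for d in contrast_draws) / len(contrast_draws)
```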
For comparison to our proposed temporal dynamics Bayesian AR method, differential expression analyses of genes, including pairwise comparisons, a generalized linear model incorporating multiple factors, and dynamic methods, were carried out using Cuffdiff, the Tuxedo Suite, edgeR, DESeq, voom in limma, and maSigPro, in turn. For the quantification of isoforms (and exons) and the identification of differential expression of splicing, Cuffdiff and DEXSeq were employed. For functional analyses, we employed NetworkAnalyst and DAVID. Prior to downstream analyses, the following diagnostic analyses on all samples, for both the before- and after-pooling data, were performed in R with the latest package versions: gplots and RUVSeq. Cows in mid-lactation were fed ad libitum a total mixed ration consisting of 50% corn silage and 50% concentrate on a dry matter basis. The cows were moved to a tie stall barn for adaptation and acclimation 7 d prior to the experiment. A ruminal infusion of butyrate was initiated immediately following the 0\u2009h sampling (baseline control) and thereafter continued for 168\u2009h at a rate of 5.0\u2009L/d of a 2.5\u2009M solution (representing >10% of the daily anticipated metabolizable energy intake to support lactation) in a buffered saliva solution as a continuous infusion. After 168\u2009h of infusion, the infusion was stopped and the cows were maintained on the basal lactation ration for an additional 168\u2009h for sampling. Rumen epithelial samples were serially collected via biopsy through the rumen fistulae at 0, 24, 72, and 168\u2009h of infusion, and at 24 and 168\u2009h post infusion. The ruminal pH was monitored using a standard pH meter and recorded at each sampling. Rumen epithelial samples were snap frozen in liquid nitrogen and stored at \u221280\u2009\u00b0C until RNA extraction, followed by DNase digestion and Qiagen RNeasy column purification. The RNA integrity was verified using an Agilent Bioanalyzer 2100.
High-quality RNA (RNA integrity number (RIN)\u2009>\u20098.0) was processed using an Illumina TruSeq RNA sample prep kit following the manufacturer\u2019s instructions. After quality control procedures, individual RNA-seq libraries were pooled based on their respective sample-specific 6-bp adaptors and sequenced at 50\u2009bp/sequence read using an Illumina HiSeq 2000 sequencer, as previously described. In the sample raw data, biological replicates were collected from 8 different cell lines and technical replicates were run on different sequencing dates (Oct-24-2014 and Jan-08-2015) and in distinct lanes. All raw sequence fastq files were initially preprocessed against the reference genome annotation, Bos_taurus.UMD3.1.80.gtf; trimming and quality checks (FastQC) and alignment and assembly with the Tuxedo tools (Bowtie, TopHat, and Cufflinks) were performed using the latest versions of the tools. For the quantification of expression levels, two different types of pipeline were compared in this study: after and before pooling of the replicates; hereafter, we refer to these as the after- and before-pooling data. In the pipelines, the mapped expression FPKM levels quantified by Cufflinks were further utilized for the purpose of detecting temporally differentially expressed genes and isoforms. Prior to the identification of temporally expressed genes and isoforms, we employed exploratory analysis for all individual samples to verify sample reproducibility, variability, and unwanted systematic biases by making use of the diagnostic tools. Quantitative Real-time Reverse Transcription (RT) PCR analysis was performed using an Absolute Quantitation (standard curve) method. Briefly, the reaction was carried out in a SsoAdvanced Universal SYBR Green Supermix (Bio-Rad) using 200\u2009nM of each amplification primer and 100\u2009ng of the first-strand cDNA in a 25\u2009\u03bcl reaction volume as previously described.
Real-time amplification was conducted on a CFX96 Real-Time PCR Detection System (Bio-Rad) with the following profile: 95\u2009\u00b0C for 120\u2009s, then 40 cycles of 95\u2009\u00b0C for 30\u2009s, 60\u2009\u00b0C for 30\u2009s and 72\u2009\u00b0C for 30\u2009s, followed by a melting curve analysis for each primer pair. Purified total RNA samples were converted to cDNA using an iScript Advanced cDNA Synthesis Kit. Standards with known quantities (copy numbers) for a single mRNA sequence (gene of interest) were prepared from PCR products purified using Agencourt AMPure XP beads. The expression levels were determined from a standard curve of known target cDNA copy numbers (1.0\u2009\u00d7\u200910^1 to 1.0\u2009\u00d7\u200910^5 molecules per reaction) and analyzed simultaneously with unknown experimental samples on the same plate. The primers used in the study are listed in Supplementary Table\u00a0. For the detection of temporally differentially expressed genes and isoforms, we intuitively performed typical pairwise comparison methods, as our main hypothesis test of interest is to identify any significant changes between treatment groups versus the baseline control group (Day 0). The comparative methods were carried out by Cufflinks and Cuffdiff, edgeR, and DESeq pairwise comparison. We utilized FPKM (fragments per kilobase per million mapped reads) values as the input data of expression values for the direct comparison of methods. When pooling replicates, we clearly observed that the quality of sample B29 was not good due to the sample and library prep. Further examination of this outlier sample explored how discrepant results impact downstream analyses, such as differential expression analysis, when this sample was either excluded or included.
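The absolute-quantitation step above fits a standard curve of Ct against log10 copy number (standards at 1.0 × 10^1 to 1.0 × 10^5 molecules per reaction) and reads unknowns off the line. A least-squares sketch; the slope and intercept values in the test are illustrative, not measured values from the study.

```python
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = slope * log10(copies) + intercept,
    from standards of known copy number."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate copy number for a sample."""
    return 10 ** ((ct - intercept) / slope)
```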
This step in our study ensured that poor quality samples were carefully dealt with in the preprocessing procedure as a prerequisite step since these samples can significantly affect the following results of downstream analyses.Pipeline 1: Initially, we pooled all of the technical replicates from different lanes and sequencing dates in the preprocessing procedures, resulting in eight biological samples at each time point, identical to the previous study5. In addition to sample B29, we also observed extra variability among samples due to the distinct lanes, indicating a significant lane effect in the experiment. Prior to the main downstream analyses, we adjusted the systematic artifacts that could affect the main biological factors of interest , as shown in the study that explored unwanted biases in static data12, and we further incorporated the effect in our proposed Bayesian dynamic AR method. In other words, to examine how systematic biases can be confounding in the detection of differentially expressed genes and isoforms in this pipeline, we compared the results of differential expression with and without correction of the extra variability of samples before executing the main differential expression analysis. Furthermore, we also performed gene ontology, gene set enrichment analysis, and network module analysis based on the temporal dynamics of gene and isoforms as putative biomarkers.Pipeline 2: As described in pipeline 1, our experimental design attempted to characterize the stimulated alterations across different time points from D0 through D14, and each individual sample was longitudinally observed as the repeated measurement during the given time period. 
Since the comparison in our study was focused on the D0 baseline time point versus each of the later time points, in the format of simple pairwise comparisons rather than a full time course series, simple pairwise static methods under the independence assumption of samples before and after stimulus might also work to some degree. However, it was evident that the expression levels were highly correlated between two neighboring time points, as the samples were longitudinally measured after an external stimulus of ruminal infusion. Additionally, our analysis also shows time-dependent expression patterns between consecutive time points, as shown in Supplementary Fig.\u00a012. Supplementary materials include Datasets S3-(2) to S3-(4), Datasets S4-(1) to S4-(4), Dataset S5, Datasets S6-(1) and S6-(2), Dataset S7, Supplemental Videos S1-(1) and S1-(2), and Supplemental Videos S2-(1) and S2-(2)."}
+{"text": "With the rapid evolution of high-throughput technologies, time series/longitudinal high-throughput experiments have become possible and affordable. However, the development of statistical methods dealing with gene expression profiles across time points has not kept up with the explosion of such data. The feature selection process is of critical importance for longitudinal microarray data. In this study, we proposed aggregating a gene's expression values across time into a single value using the sign average method, thereby reducing a longitudinal feature selection process to a classic one. Regularized logistic regression models with pseudogenes were then optimized by either the coordinate descent method or the threshold gradient descent regularization method. By applying the proposed methods to simulated data and a traumatic injury dataset, we have demonstrated that the proposed methods, especially the combination of sign average and threshold gradient descent regularization, outperform other competitive algorithms. To conclude, the proposed methods are highly recommended for studies with the objective of carrying out feature selection for longitudinal gene expression data. Feature selection, a powerful tool for tackling the high-dimensionality issue accompanying high-throughput experiments, where the number of measured features is much larger than the number of samples, has been employed with increasing frequency in many research areas, including biomedical research. The ultimate goal of feature selection is to correctly identify features associated with the phenotypes of interest while ruling out irrelevant features as much as possible. Because biological systems or processes are dynamic, it is useful for researchers to investigate gene expression patterns across time in order to capture biologically meaningful dynamic changes. 
With the rapid evolution of high-throughput technology, time series/longitudinal microarray experiments have become possible and even affordable. However, development of specific statistical methods dealing with expression profiles across time points has not kept pace. One commonly used strategy is to stratify time series data into separate time points and then analyze these points separately. This approach may lead to inefficiency in statistical power by ignoring the highly correlated structure of gene expression values across time, and thus result in failure to detect patterns of change across time \u20133. An alternative strategy for feature selection in longitudinal gene expression data is to use statistical methods capable of detecting different expression patterns across time between groups; examples include Significance Analysis of Microarray (SAM) and the Extraction of Differential Gene Expression (EDGE) method. Some researchers have extended two typical longitudinal data analysis strategies, namely, the generalized estimating equation (GEE) method and mixed models. A gene set or pathway refers to a set of genes that are highly likely to coregulate/coexpress to influence a biological process. The real-world data used here were hybridized on Affymetrix HGU133 plus2 chips and included 167 severe blunt trauma patients. In this study, only patients with uncomplicated recovery (within 5 days) and patients with complicated recovery were considered. Raw data (CEL files) were downloaded from the Gene Expression Omnibus (GEO) repository, and the dataset was divided randomly into two subsets with a ratio of 3:2, serving as the training set and the test set, respectively. Expression values were obtained using the fRMA algorithm. The expression value of gene k for patient i at time point t, Xikt, can be written as Xikt = \u03b2kt0 + \u03b2kt1 I(patient i belongs to the complicated group) + \u03b5ikt, where \u03b5ikt is the error term with a mean of 0 and a standard deviation of 1, and I(x) is an indicator function whose value is 1 if the condition x is true and 0 otherwise. 
\u03b2kt0 represents the mean expression value of gene k at time point t for the uncomplicated patients; \u03b2kt1 represents the mean difference of gene k at time point t between the complicated patients and the uncomplicated patients. To determine the directions of association used by the sign average method, we compared each gene's expression value at each time point for the patients with complicated recoveries versus those with uncomplicated recoveries. Specifically, using the uncomplicated group as the reference, at each time point for each gene a moderated t-test was fitted to decide whether the specific gene is upregulated or downregulated in the complicated group against the uncomplicated group, according to the sign of its estimated \u03b2kt1. The time points of a gene were then stratified into either the upregulated group U or the downregulated group D. The upregulated group includes the time points for which increased expression is associated with a higher probability of experiencing complicated recovery (\u03b2kt1 > 0). In contrast, the downregulated group includes the time points for which an increment in the gene's expression is associated with a lower probability of complicated recovery (\u03b2kt1 < 0). Denoting the numbers of time points in the two groups as |tU| and |tD|, the sign average of a specific gene k over all measured time points for patient i is defined as SAik = (\u03a3t\u2208U Xikt \u2212 \u03a3t\u2208D Xikt) / (|tU| + |tD|). In words, it takes the summation of the expression values at all upregulated time points (U) and the summation at all downregulated time points (D) separately, then takes the difference between these two summations and divides this difference by the number of time points measured. Obviously, the sign average also takes into account the directions of association with the phenotype of interest. Using a summary value to represent one gene's expression values across time makes all conventional feature selection algorithms applicable to longitudinal microarray data and also avoids the imbalance of observations between groups. 
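Written out, the sign average collapses one patient's longitudinal profile for a gene into a single number whose sign reflects the directions of association. A minimal Python sketch follows (the study's own implementation is in R, in its Supplementary Materials); the ±1 flags below stand in for the per-time-point moderated t-test results, and the toy numbers are illustrative:

```python
import numpy as np

def sign_average(expr, signs):
    """Collapse one gene's longitudinal profile for one patient into a scalar.

    expr  : expression values of the gene at the measured time points
    signs : +1/-1 per time point (+1 = upregulated group U, -1 = downregulated
            group D), as decided by the per-time-point moderated t-tests
    Returns (sum over U - sum over D) / number of time points.
    """
    expr = np.asarray(expr, dtype=float)
    signs = np.asarray(signs, dtype=float)
    return float(np.sum(expr * signs) / expr.size)

# toy profile over 4 time points: first two upregulated, last two downregulated
print(sign_average([2.0, 1.0, 0.5, 0.5], [1, 1, -1, -1]))  # (3.0 - 1.0) / 4 = 0.5
```

Because downregulated time points enter with a negative sign, a gene whose direction of association flips over time is not averaged away toward zero, which is the point of the construction.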
Traditional methods such as a t-test are incapable of dealing with cases that have more than one observation from a group at a specific time point. The coordinate descent (CD) method optimizes the objective function coordinate-wise using gj(\u03b2), the derivative/gradient of the objective function with respect to the jth \u03b2 coefficient, subject to the tuning parameter \u03bb, which restricts the L1 norm of the \u03b2 coefficients. In this study, a regularized logistic regression model with a LASSO penalty was used, and it was solved using the CD method in the R glmnet package. The threshold gradient descent regularization (TGDR) method proposed by Friedman and Popescu was adopted as an alternative optimizer. In contrast to the CD method, the selection of genes in the TGDR method is realized by comparing a gene's gradient gj(\u03b2) with the largest absolute gradient using a threshold function f\u03c4: a value of \u03c4 equal to 1 provides a penalty approximately comparable to the LASSO term, and a value of 0 corresponds to the ridge penalty. There are major differences between the CD and the TGDR methods. In the current study, we fixed the tuning parameter \u03c4 at 1, which approximately corresponds to the LASSO model, and then applied the TGDR method to the training set to obtain discriminative signatures. Two sets of signatures were compared to evaluate the pros and cons of the CD method versus the TGDR method; the R codes were adapted from the programming of the meta-TGDR algorithm. To evaluate the predictive performance of a classifier we used three metrics: the Belief Confusion Metric (BCM), the Area under the Precision-Recall Curve (AUPR), and the misclassification error rate. BCM is calculated for each class and captures the ability to correctly rank the samples known to belong to a given class. The three metrics each range from 0 to 1. For BCM and AUPR, the closer to 1, the better the classifier; the opposite is true for the misclassification error rate. 
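The contrast between the two optimizers can be made concrete. The sketch below re-implements the TGDR iteration in Python for logistic regression (the paper used R code adapted from meta-TGDR); the step size, step count, and toy data are illustrative assumptions, with the step count playing the role of the tuning parameter chosen by cross-validation:

```python
import numpy as np

def tgdr_logistic(X, y, tau=1.0, step=0.01, n_steps=200):
    """Threshold gradient descent regularization for logistic regression (sketch).

    At each step, only coefficients whose gradient magnitude is within a factor
    tau of the largest gradient magnitude are updated: tau = 1 gives a sparse,
    LASSO-like path, while tau = 0 updates all coefficients (ridge-like).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_steps):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - prob) / n                        # ascent direction of log-likelihood
        mask = np.abs(grad) >= tau * np.max(np.abs(grad))  # threshold function f_tau
        beta += step * grad * mask
    return beta

# toy data in which only the first feature drives the outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)
beta = tgdr_logistic(X, y)
```

With tau fixed at 1, updates concentrate on the coordinate carrying the largest gradient, which is what yields sparse signatures from this optimizer.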
Besides discriminative/predictive performance, stability/reproducibility is of crucial importance for a gene signature as well: good stability means that repeated runs produce similar gene lists (gs1, gs2, \u2026, gsk). Upon these gene lists, a Rand-index-type statistic is defined on the overlap between gene lists gsi and gsj, where |\u2009\u2009| represents the size of a gene set, as described in our previous study. Statistical analysis was conducted in R, version 3.3 (www.r-project.org). The R codes for the TGDR method and the sign average method are provided in the Supplementary Materials. After randomly dividing our data into two sets (one serving as the training set and the other as the test set), the sign averages for the genes under consideration in the training set were calculated. A 5-fold cross-validation was used to decide the optimal value of the tuning parameter in the coordinate descent method or the threshold gradient descent regularization method. Briefly, the training set was divided into 5 roughly equal-sized subsets in which the ratio of complicated to uncomplicated recovery was approximately the same as that of the whole training set. For 4 of the subsets, the LASSO/CD method and the TGDR method were applied to select relevant genes and estimate their corresponding coefficients. The misclassified cases were counted by validating the resulting classifier on the remaining subset. This process was repeated 5 times, with each of the five subsets serving as the validation subset exactly once. The misclassification errors were then aggregated over the whole training set. The optimal value of the tuning parameter was the one with the smallest misclassification error. Using this optimal value, a final model was obtained on the training set and then validated on the test set. 
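The cross-validation loop just described is generic in the inner learner. Below is a Python sketch of stratified 5-fold selection of a tuning parameter; the inner learner here is a stand-in nearest-centroid rule keeping the lam most separating features (in the paper it would be LASSO/CD or TGDR), and the toy data, grid, and seed are assumptions:

```python
import numpy as np

def stratified_folds(y, k=5, seed=0):
    """Split sample indices into k folds, preserving the class ratio."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for label in np.unique(y):
        for i, j in enumerate(rng.permutation(np.flatnonzero(y == label))):
            folds[i % k].append(j)
    return [np.array(f) for f in folds]

def cv_select(X, y, fit, predict, grid, k=5):
    """Return the tuning value with the fewest aggregated misclassifications."""
    folds = stratified_folds(y, k)
    errors = []
    for lam in grid:
        err = 0
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[train_idx], y[train_idx], lam)
            err += int(np.sum(predict(model, X[test_idx]) != y[test_idx]))
        errors.append(err)
    return grid[int(np.argmin(errors))]

# stand-in learner: nearest centroid on the lam most separating features
def fit(Xtr, ytr, lam):
    diff = np.abs(Xtr[ytr == 1].mean(axis=0) - Xtr[ytr == 0].mean(axis=0))
    feats = np.argsort(diff)[-lam:]
    return feats, Xtr[ytr == 0][:, feats].mean(axis=0), Xtr[ytr == 1][:, feats].mean(axis=0)

def predict(model, Xte):
    feats, c0, c1 = model
    d0 = ((Xte[:, feats] - c0) ** 2).sum(axis=1)
    d1 = ((Xte[:, feats] - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=100) > 0).astype(int)
best = cv_select(X, y, fit, predict, grid=[1, 2, 5, 10, 30])
```

The chosen `best` is then refit on the whole training set, mirroring the final-model step in the text.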
The study schema is given in the corresponding figure. To evaluate the proposed method more comprehensively, we applied several relevant methods, i.e., EDGE, limma, glmmLASSO, and PGEE. When \u03bb is set as a value smaller than 15, the glmmLASSO algorithm crashes; this makes us suspect that, similar to the PGEE method, the glmmLASSO method may be numerically unstable on these data. To explore whether the sign average method provides a good summary of expression values across time points, we also considered other scores for individual gene expression values and combined those scores with the LASSO/CD or TGDR method to train the final models. Next, we focused on the unique genes identified by either the sign average and LASSO/CD method or the sign average and TGDR method and explored the biological relevance of these genes. According to the Genecards database (www.genecards.org), out of the five unique genes identified by TGDR, only DPYD, NFE2L2, and TLR5 are directly related to injury, whereas only TNFSF10 presents such a direct relation among the 7 unique CD genes. Although none of these 12 genes are indicated by the Genecards database to be directly related to traumatic injury specifically, DPYD, NFE2L2, TLR5, and TLR8 of the TGDR unique genes are indirectly related to traumatic injury, whereas 5 of the CD unique genes are indirectly related. According to the Genecards database, A2M is a protease inhibitor and cytokine transporter. A2M uses a bait-and-trap mechanism to inhibit a broad spectrum of proteases including trypsin, thrombin, and collagenase. It can also inhibit inflammatory cytokines, and therefore disrupt inflammatory cascades. SPP1 (Secreted Phosphoprotein 1) encodes a protein that binds tightly to hydroxyapatite and acts as a cytokine involved in enhancing production of interferon-gamma and interleukin-12 and reducing production of interleukin-10, and it is essential in the pathway that leads to type I immunity. 
CR1 (Complement C3b/C4b Receptor 1) encodes a monomeric single-pass type I membrane glycoprotein found on erythrocytes, leukocytes, glomerular podocytes, and splenic follicular dendritic cells. This protein mediates cellular binding of particles and immune complexes that have activated complement. CD274 encodes an immune inhibitory receptor ligand that is expressed by hematopoietic and nonhematopoietic cells such as T cells, B cells, and various types of tumor cells. The encoded protein is a type I transmembrane protein that has immunoglobulin V-like and C-like domains. Interaction of this ligand with its receptor inhibits T-cell activation and cytokine production. During infection or inflammation of normal tissue, this interaction is important for preventing autoimmunity by maintaining homeostasis of the immune response. AIM2 (Absent in Melanoma 2) is involved in the innate immune response by recognizing cytosolic double-stranded DNA and inducing caspase-1-activating inflammasome formation in macrophages; diseases associated with AIM2 include skin conditions and melanoma. To investigate whether the sign average method provides a valuable summary of one gene's expression values across time, we used the observed gene expression values of the injury gene expression dataset to design two sets of simulations. Here, the expression values of each gene were further standardized to have a mean of 0 and a standard deviation of 1. In Simulation I, we chose two genes (F13A1 and GSTM1) as relevant genes and then randomly included 998 other genes as noise. Denoting the expression value of gene k at the tth time point by its symbol with a subscript of t, the probability of an injury with complication was calculated on the basis of a logit function of these two genes' expression values. 
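The generating process for these simulations can be sketched as follows. The coefficients, sample size, and monotonic weight vector below are illustrative stand-ins (the paper's actual coefficient values are in its display equations), and the observed standardized expression trajectories would replace the Gaussian draws:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_time, n_noise = 150, 5, 998

# standardized trajectories for the two relevant genes plus irrelevant genes;
# the effect sizes used here are assumptions, not the paper's values
f13a1 = rng.normal(size=(n, n_time))
gstm1 = rng.normal(size=(n, n_time))
noise_genes = rng.normal(size=(n, n_noise, n_time))

# Simulation I style: a constant effect at every time point
logit = f13a1.sum(axis=1) - gstm1.sum(axis=1)
p_complicated = 1.0 / (1.0 + np.exp(-logit))
outcome = rng.binomial(1, p_complicated)      # 1 = complicated recovery

# Simulation II style: monotonically increasing effect over time
weights = np.linspace(0.2, 1.0, n_time)
logit_mono = f13a1 @ weights - gstm1 @ weights
```

Feature selection methods are then run on the 1000 genes against `outcome`, and performance is averaged over replicates as described in the text.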
In Simulation II, we explored a scenario where the association presents a monotonically changing pattern; namely, the coefficients decrease or increase over time. Again, we used F13A1 and GSTM1 as the relevant genes and randomly chose 998 of the remaining genes as noise, with the corresponding logit function written analogously using time-varying coefficients. This simulation setting is referred to as the monotonic effect scenario. Performance statistics were calculated and averaged over 50 replicates. Consistent with the results of the injury application, the methods under consideration may be roughly classified into three categories on the basis of the calculated performance statistics. In this study, two optimization methods for solving a regularized regression model (the CD method and the TGDR method) were compared to investigate whether their results are comparable; a Venn diagram shows that the two resulting gene lists overlap substantially (67.6%). In terms of computing time, the TGDR method is less efficient than the CD method: the CD method took 0.205 seconds for a single run, while the TGDR method took 7.948 seconds on a MacBook Pro equipped with a 2.2\u2009GHz dual-core processor and 16\u2009GB RAM, partly because the R codes we adapted from the meta-TGDR programming are not optimized for speed. One major contribution of this study is the proposal of the sign average operator to integrate a gene's expression profile across time for a specific patient into a single value. With a summary value for each gene, longitudinal data are transformed into cross-sectional data, which makes typical feature selection algorithms applicable to longitudinal gene expression data. 
One criticism is that this simplification makes the crucial time points and the pattern of change in expression values across time for a specific gene nonidentifiable. Nevertheless, Simulation I shows that failure to identify significant time points for individual genes does not affect the superiority of the proposed methods over other relevant algorithms. In conclusion, summarizing genes' expression values across time using the sign average method reduces the feature selection process for longitudinal data to a conventional cross-sectional feature selection process and thus successfully overcomes the longitudinal feature selection problem. In this study, data from a microarray experiment were used to illustrate the proposed methods. However, the methods are not specific to microarray data; they can be used to analyze RNA-seq data as well. The essential steps of the proposed methods are to obtain a summary score for each gene and then to carry out feature selection using these summary scores as predictors. The steps are very flexible and can be adapted to other types of gene expression data as long as the data are appropriately normalized. Specifically, for RNA-seq data, normalized measures would be used to quantify gene expression values. Applied to one real-world dataset and two simulations, the proposed methods, especially the combination of sign average and TGDR, outperform other relevant algorithms. Therefore, the proposed methods are highly recommended."}
+{"text": "A large amount of panomic data has been generated in populations for understanding causal relationships in complex biological systems. Both genetic and temporal models can be used to establish causal relationships among molecular, cellular, or phenotypical traits, but with limitations. To fully utilize high-dimensional temporal and genetic data, we develop a multivariate polynomial temporal genetic association (MPTGA) approach for detecting temporal genetic loci (teQTLs) of quantitative traits monitored over time in a population and a temporal genetic causality test (TGCT) for inferring causal relationships between traits linked to the locus. We apply MPTGA and TGCT to simulated data sets and a yeast F2 population in response to rapamycin, and demonstrate increased power to detect teQTLs. We identify a teQTL hotspot locus interacting with rapamycin treatment, infer putative causal regulators of the teQTL hotspot, and experimentally validate RRD1 as the causal regulator for this teQTL hotspot. Temporal omics data have the potential to dissect complex biological networks. Here the authors develop methods for detecting temporal genetic loci (teQTLs) of quantitative traits monitored over time and inferring causal relationships between traits linked to the locus. Whereas descriptive models may achieve a high degree of accuracy in classifying individuals based on any number of features, predictive models seek to represent causal relationships between variables of interest and as a result reflect information flow through the system, thus enabling the identification of key modulators of a given biological process7, key points of therapeutic intervention8, or other interesting aspects of system behavior that can aid in our understanding of it11. Among the top objectives in modeling living systems is the construction of mathematical models capable of predicting future states of a system given a set of initial starting conditions. 
Whether one is predicting the risk of a disease at any point along one\u2019s life course given genetic, environmental, and clinical data12 or another future state of a biological system, building highly accurate predictive models depends on establishing causal relationships among the variables of interest. Elucidating physical interactions has been the primary means by which biologists establish causal relationships; for example, a transcription factor binding to a stretch of DNA facilitates the transcription of a gene that in turn activates a given biological pathway13. Another type of causal relationship, inferred through statistical causality tests, has achieved widespread utility14. This type of causal relationship is considered a weak form of causality, and experimental follow-ups are generally needed to validate it. However, this weak causality enables us to orient the vast sea of correlations observed among the hundreds of thousands of molecular phenotypes that can be simultaneously assayed according to the direction of information flow. Methods such as Bayesian network reconstruction algorithms have been devised to infer causal relationships among correlated traits16. However, such methods based on correlation data alone are well known to be generally unable to uniquely resolve the causal relationships among traits, given that the different types of possible relationships between traits may not be statistically distinguishable from one another. For temporal data, methods24 such as dynamic Bayesian networks or Granger causality have been developed to infer causal relationships, but genetic information is not used in such inference. Ma et al.25 proposed to model growth-related temporal traits using a multivariate normal distribution and assumed that the mean vectors followed a logistic growth curve. 
In the context of temporal gene expression traits, the trajectories are usually much more complex and thus require more flexible fitting options. To date, inferring causality by jointly considering temporal and genetic dimensions in a formal modeling framework has not been systematically explored in high-dimensional omics data. Integrating these two dimensions, which have a fundamental role in enabling causal inference, has the potential to enhance the power to resolve causal relationships and to provide a more accurate view of regulatory networks in biological systems. Here we present a multivariate polynomial temporal genetic association (MPTGA) model that formally integrates genetic and temporal information to identify genetic associations, and a temporal genetic causality test (TGCT) to infer causal relationships among quantitative traits. To highlight the utility of this type of integrated test, we apply it to transcriptomic data generated in a segregating population of yeast profiled at six different time points in response to treatment with the drug rapamycin. From these data, we demonstrate that the MPTGA test identifies significantly more genetic associations than the sum of the relationships identified via a genetic association test independently applied at different time points. In addition, we demonstrate that this approach has increased power to detect the causal regulators of expression quantitative trait loci (eQTL) hotspots that have been previously defined in this population, including the identification of regulators that had previously evaded direct detection. Finally, we identify and experimentally validate new causal regulators for temporal eQTL (teQTL) hotspots in this yeast population that explain the gene-by-drug interactions identified in our experiment. Assuming that for each genotype the temporal trait follows a multivariate normal distribution similar to Ma et al.25 and that the variances across subsequent time points are correlated, we develop MPTGA as a genetic association testing framework (see Methods). 
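To make the polynomial-mean idea concrete, here is a deliberately simplified Python sketch of an MPTGA-style test statistic: genotype-specific polynomial mean curves versus a single shared curve, with independent Gaussian errors. The full MPTGA instead uses a multivariate normal likelihood with correlated errors across time points; the simulated effect sizes, degrees, and seed below are assumptions:

```python
import numpy as np

def poly_rss(times, values, degree=2):
    """Residual sum of squares of a polynomial fit to (time, value) pairs."""
    coef = np.polyfit(times, values, degree)
    return float(np.sum((np.polyval(coef, times) - values) ** 2))

def temporal_assoc_stat(times, Y, genotype, degree=2):
    """Likelihood-ratio-style statistic: one polynomial mean curve per genotype
    group (alternative) vs a single shared curve (null), assuming independent
    Gaussian errors. Returns n * log(RSS_null / RSS_alt)."""
    t_rep = lambda m: np.tile(times, m)
    rss0 = poly_rss(t_rep(Y.shape[0]), Y.ravel(), degree)
    rss1 = sum(poly_rss(t_rep(int(np.sum(genotype == g))), Y[genotype == g].ravel(), degree)
               for g in np.unique(genotype))
    return Y.size * np.log(rss0 / rss1)

rng = np.random.default_rng(3)
times = np.arange(6.0)
geno = rng.integers(0, 2, size=80)
base = 0.1 * times ** 2 - 0.5 * times               # shared quadratic trajectory
Y_assoc = base + 1.5 * geno[:, None] * times / 5 + rng.normal(scale=0.5, size=(80, 6))
Y_null = base + rng.normal(scale=0.5, size=(80, 6))
stat_assoc = temporal_assoc_stat(times, Y_assoc, geno)
stat_null = temporal_assoc_stat(times, Y_null, geno)
```

A genotype-dependent trajectory yields a much larger statistic than a trait with no genetic effect, which is the signal the teQTL scan thresholds at a given FDR.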
Instead of assuming that the mean vectors of the multivariate normal distribution follow a logistic growth curve as in Ma et al.25, we model the mean vectors of the expression trajectories using a polynomial function, which is able to capture diverse types of temporal responses. As living systems are dynamic, constantly changing over time to adjust to different states and environmental conditions, the extent to which different genetic loci will impact a given trait may vary over time. There are multiple ways to model the behavior of a trait over time with respect to a given genetic locus. A simple approach is to perform eQTL analysis at each time point independently, then combine the results from the analysis of all time points (referred to as the union method) or perform meta-analysis based on Fisher\u2019s method (referred to as the Fisher\u2019s method). We can also apply multivariate analysis of variance (MANOVA) to detect differences in gene expression levels across time points between genotype groups. Alternatively, we can model the time-series data by different autoregressive (AR) models, then assess whether the AR models differ with regard to genotype (referred to as the AR model). Alternatively, we can consider a quantitative trait following a polynomial function of time and then employ a straightforward regression approach to model the trait with respect to a given genetic locus (referred to as the regression method). If we further assume that for each genotype the trait over time follows a multivariate normal distribution similar to Ma et al.25, we arrive at the proposed MPTGA method. In a causal model (M1: X\u2009\u2192\u2009Y), the genetic effect (or the association with the marker) of Trait Y is solely explained by Trait X, so that the time-series values of Trait Y can be predicted with values of Traits X and Y at previous time points. In an independent model (M3: X\u22a5Y), the genetic effect of Trait Y cannot be explained by Trait X. 
In a partial causal model (M4), the genetic effect of Trait Y can only partially be explained by Trait X, so that the time-series values of Trait Y can be predicted with values of Traits X and Y at previous time points, as well as the genotype information at the associated locus. When Traits X and Y are switched in models M1 and M4, the causal and partial causal relationships are represented by models M2 and M5, respectively. First, we assess TGCT\u2019s power to distinguish causal/reactive relationships (M1 vs. M2) in general by comparing the joint likelihoods (Methods). Then, we focus on cis\u2013trans trait pairs as follows: Trait X has a cis-teQTL and Trait Y has a trans-teQTL at the same locus, so that the models to be assessed are limited to M1, M3, and M4. We applied a linear regression on the corresponding time-series data for Trait Y and selected the model that best explains the data according to a given model selection criterion (e.g., the Bayesian Information Criterion (BIC)), as detailed in Methods. Temporal QTL can be treated as a systematic source of perturbation to infer causality among traits associated with the QTL, and there are a limited number of causal relationships possible between two traits associated with a given genetic locus. To compare the performance of multiple approaches for detecting temporal-genetic associations, we applied these methods to a set of simulated data (see Methods), in which various time-series patterns were simulated with respect to the tested locus. First, we simulated pairs of traits according to the causal model (M1), then assessed whether the causal (M1: X\u2009\u2192\u2009Y) or reactive (M2: Y\u2009\u2192\u2009X) model fit the data better (Methods). TGCT identified the correct model in most cases, with accuracies of 99.54%, 99.82%, 99.95%, and 99.97% for sample sizes of 20, 50, 100, and 150, respectively. 
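The model selection at the heart of TGCT can be sketched with ordinary least squares and BIC. This simplified Python version pools subjects' AR(1) transitions and compares only M1, M3, and M4 for the cis-trans case; the simulated genotypes and effect sizes are assumptions, and the paper's actual test uses the full likelihood machinery detailed in Methods:

```python
import numpy as np

def bic(y, cols):
    """BIC of an OLS regression of y on the given columns (Gaussian errors)."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

def tgct_select(x, y, genotype):
    """Choose among causal (M1), independent (M3), and partial (M4) models by BIC.

    x, y: (subjects x time points) traits; genotype: 0/1 per subject.
    Each model regresses Y_t on its own lag plus, depending on the model,
    the lag of X and/or the genotype at the associated locus."""
    y_now, y_lag = y[:, 1:].ravel(), y[:, :-1].ravel()
    x_lag = x[:, :-1].ravel()
    g = np.repeat(genotype, y.shape[1] - 1)
    scores = {
        "M1_causal": bic(y_now, [y_lag, x_lag]),
        "M3_independent": bic(y_now, [y_lag, g]),
        "M4_partial": bic(y_now, [y_lag, x_lag, g]),
    }
    return min(scores, key=scores.get), scores

# simulate a causal chain: genotype -> X -> Y
rng = np.random.default_rng(4)
n, T = 100, 6
geno = rng.integers(0, 2, size=n).astype(float)
x = np.zeros((n, T)); y = np.zeros((n, T))
x[:, 0] = geno + 0.3 * rng.normal(size=n)
y[:, 0] = rng.normal(size=n)
for t in range(1, T):
    x[:, t] = 0.5 * x[:, t - 1] + geno + 0.3 * rng.normal(size=n)
    y[:, t] = 0.5 * y[:, t - 1] + 0.8 * x[:, t - 1] + 0.3 * rng.normal(size=n)
best, scores = tgct_select(x, y, geno)
```

Under this generating process, the lag of X absorbs the genotype's effect on Y, so the causal model scores far better than the independent one.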
For pairs simulated under the causal model (M1), TGCT identified the causal model as the best model in all cases across a wide range of strengths of AR and causal effects. When the causal effect is close to 0, the causal model (M1) is converted to the independent model (M3); in such cases, TGCT identified the independent model as the best model. When both\u00a0\u03b210 and \u03b211 are close to 0, the partial model (M4) is converted to the causal model (M1), and TGCT identified the causal model as the best model in such cases. To evaluate the performance of TGCT, we simulated pairs of time-series data according to causal, independent, or partial causal models (see Methods), and applied TGCT only to the pairs in which both time-series traits were associated with the tested locus. We then applied these methods to the yeast time-series data28 and compared teQTLs identified at a 5% false discovery rate (FDR). The QTL 95% confidence intervals (Methods) for all teQTL methods were tighter compared with the static methods. Few of the teQTL hotspots identified by the union method were enriched for the rapamycin response signature, suggesting that although this approach increases the power to detect static eQTL, the union method is not as sensitive for detecting a dynamic response. The regression method is closely related to the MPTGA method, but only one of its identified hotspots was enriched for genes in the rapamycin response signature, suggesting this approach in a temporal context may be prone to sporadic associations. Following previous work29, we applied TGCT to resolve the causal regulators underlying teQTL hotspots. For each teQTL hotspot, candidate causal genes were defined as genes with cis-teQTLs linked to the teQTL hotspot, and we applied TGCT to infer the causal regulators of the teQTL hotspots identified by MPTGA. The top putative causal regulator predicted by TGCT for one such hotspot was ISC1, inositol phosphosphingolipid phospholipase C, a gene involved in ceramide production33. ISC1 was supported as causal for 136 of the 162 genes linked to this teQTL hotspot. 
Rapamycin induces insulin resistance via mTORC2 (ref. 34), which regulates de novo ceramide synthesis35. Ceramide and its metabolites also play a pathogenic role in insulin resistance36. Taken together, these data support ISC1 as a causal regulator for rapamycin response differences among the yeast segregants. The identification of this teQTL hotspot and of ISC1 as a causal regulator could not have happened by analyzing any single time point after the rapamycin treatment: the genetic-by-drug perturbation interaction at this locus was only detectable in light of the time-series data considered in full. Of the 11 teQTL hotspots identified by MPTGA, 7 overlapped previously identified static eQTL hotspots in this population. In addition to the ISC1 teQTL hotspot, the teQTL hotspot at locus chrIX:70,000 was only identified by the MPTGA method; a general temporal pattern of genes linked to this hotspot is shown in the corresponding figure, and the association p-values for the regression and the union methods were not significant at an FDR\u2009<\u20090.05. Without constraining the residuals, the regression method is prone to sporadic association37 (detailed in Discussion), so that its p-value cutoff for a 5% FDR is much lower than the one for MPTGA, which explains why the regression method missed this teQTL hotspot. The genes linked to this hotspot were significantly enriched for the rapamycin response signature (p\u2009=\u20091.4\u2009\u00d7\u200910\u22127), which suggests this teQTL hotspot was driven by gene-by-rapamycin interactions. The top putative causal regulator predicted by TGCT for this hotspot was RRD1. At an FDR\u2009<\u20091%, 64 differentially expressed genes (DEGs) were identified between the RRD1 knockout and wild-type strains, without exposure to rapamycin. These 64 DEGs significantly overlapped with the rapamycin signature. When compared with genes linked to the 11 teQTL hotspots identified by MPTGA, the RRD1 knockout signature significantly overlapped with 5 teQTL hotspots, and the teQTL hotspot ChrXV:150,000 was most significantly enriched for the RRD1 knockout signature. 
When compared with the eQTL hotspots based on static T0 data, the RRD1 knockout signature significantly overlapped with the eQTL hotspot at ChrXV:150,000. The RRD1 knockout signature was enriched for the GO biological process response to stress, which is consistent with the functional annotation of this static eQTL hotspot29. These results are consistent with RRD1 expression levels being regulated both in cis and in trans by DNA variations at ChrXV:150,000. More specifically, the directions of change for all genes in the overlap between genes linked to the teQTL hotspot ChrIX:70,000 and DEGs in the RRD1 knockout signature are consistent between the time course and RRD1 knockout experiments. The segregants carrying the RM allele at the RRD1 locus had low RRD1 expression levels in comparison with the segregants carrying the BY allele; 9 genes were expressed lower in the segregants carrying the RM allele, and 6 of them overlapped with downregulated genes in the RRD1-KO strain (FET p\u2009=\u20092.0\u2009\u00d7\u200910\u22127). In addition to the hotspot ChrIX:70,000, the teQTL hotspots with the highest fold enrichment include ChrV:190,000 and ChrIV:90,000, the three unique teQTL hotspots compared with static eQTL hotspots, consistent with RRD1 expression variation being linked in cis to ChrIX:70,000 and in trans to ChrXV:150,000, and with the GO functional annotations of the set of genes simultaneously linked to these teQTL hotspots. For two genes (X1 and X2) with cis-eQTLs at a hotspot, both may be causal for an overlapping set of genes (Ys) with trans-eQTLs linked to the locus, e.g., X1\u2009\u2192\u2009Y and X2\u2009\u2192\u2009Y. In these cases, TGCT cannot distinguish which cis-eQTL gene is the true causal gene. 
Thus, multiple putative causal genes were reported for hotspots with a large number of linked eQTLs. Richer cross structures or more time points would help break the correlation among colocalized genes, so that the TGCT has more power than static methods to distinguish the true causal regulator among correlated genes colocalizing at a locus; follow-up experiments are recommended to validate putative causal regulators. For each teQTL hotspot, we tested all cis\u2013trans gene pairs at the locus for potential causal relationships; for a gene with a trans-eQTL at a hotspot, the TGCT may report multiple candidate causal genes. Notably, the three unique teQTL hotspots identified by the MPTGA were significantly enriched for aging-related genes, whereas none of the static eQTL hotspots nor the eQTL hotspots identified by the union or the Fisher\u2019s method was enriched for the aging genes at p\u2009<\u20090.01. For the peak SNPs and their neighboring SNPs, the association p-values based on the two methods closely tracked each other, with correlation coefficients of 0.89 and 0.69 for MPTGA and the regression method, respectively (Methods). As a check, we first simulated a set of gene expression traits and genotypes; as they were simulated independently, no genetic association was expected, and the QQ plot for the simulated data was examined accordingly. An approach that aims to group genetic effects at each individual time point into two discrete states is unlikely to work well in cases with moderate genetic effect changes over time, which also highlights the advantage of the MPTGA method that simultaneously takes all time points into consideration. MPTGA and TGCT share similarities with common genetic association methods and temporal causality methods, and other methods have been developed to integrate genetic, gene expression, and temporal information to construct global regulatory networks. 
By encoding the three possible genotypes, 00/01/11 (or 0/1/2), at each SNP, we can apply these methods directly to detect dominant/recessive effects. To detect full genetic effects, we can estimate parameters for each genotype, then compare them with the estimated parameters without considering genotype (null model). In addition, in the current TGCT test, we explicitly modeled the causal variable in an AR form. If long time-series data are available, a more flexible model can be used to unify the models used in MPTGA and TGCT, such as polynomial functions in both tests (detailed in Methods). In the current study, MPTGA and TGCT are simplified based on a haploid system; they can be generalized for diploid systems with three possible genotypes (detailed in Methods). Previous work reconstructed causal networks and predicted the causal regulators for the eQTL hotspots of gene expression activity in a segregating yeast population. Second, the accuracy of MPTGA depends on the amount of data available and the associated measurement errors. The integration of other types of high-throughput data might reduce the influence of these errors. Furthermore, the proposed TGCT method could only address the relationship between a pair of gene expression traits and a locus. More complicated models might be further considered to assess and represent more comprehensive regulation relationships as a larger network, e.g., multiple QTLs affect the expression of multiple transcripts and these RNAs in turn act on another complex trait. Finally, our procedure focuses on identifying causal and reactive relationships, which is a very simplistic view of gene networks; the true biology is much more complicated. The genes interacting in a large network may be subject to negative and positive feedback control. 
Despite these issues, the ability to integrate both the genetic and temporal information in the eQTL analysis offers a promising approach to understanding dynamic regulation. The integration of both genetic and temporal information in our study represents only the beginning step needed to dissect the dynamic regulation. There are also many other directions for improvement in temporal-genetic data analysis. First, other types of available high-throughput data have not been integrated in the analysis yet. To integrate multiple types of data, Zhu et al. and others have developed methods for both static11 and time-series46 settings. A large number of trans-eSNPs for the human blood transcriptome were identified using 1002 subjects11, while only cis-eSNPs but no trans-eSNPs were identified with 40 subjects46, suggesting the latter study was underpowered for detecting trans-eSNPs. An eQTL study in a Japanese population47 indicates that trans-eSNPs can be detected with 76 subjects. To identify more trans-eSNPs, more subjects are needed in genetic studies. We previously showed that it is possible to infer temporal causal relationships in transcription regulation of the human blood transcriptome using 7 time points24. To effectively apply the temporal-genetic association and causality tests to a human study, we estimate that at least 8 time points and 200 individuals are needed. It is worth noting that the time intervals in a time series need not be the same; for example, given a polynomial function, we sampled five time points at random intervals and simulated 100 traits. The yeast segregants analyzed here derive from a cross involving the parental strain RM11-1a (RM). Each yeast segregant in this set of time-series data was sampled at 10 min intervals for up to 50 min after rapamycin addition, and RNA was extracted and profiled with Affymetrix Yeast 2.0 microarrays. This dataset was used for constructing predictive networks by taking advantage of both genetic variations and time dependencies42. 
A total of 5703 gene expression traits and 2956 SNPs were used in the current study. A set of time-series messenger RNA gene-expression data is available, which measured the gene expression levels of 95 genotyped haploid yeast\u00a0F2 segregants after a perturbation with the macrolide drug rapamycin. To model the behavior of a quantitative trait over time with respect to a given genetic locus, we can model the behavior of the trait using different continuous functions for each possible genotype at the locus. For example, in the case of a haploid organism, consider a gene expression trait Y, assayed in individual i at time t, at a particular marker location with two genotypes. In this case we can generally represent the expression levels of the trait as yi(t) = \u03b4i0g0(t) + \u03b4i1g1(t) + \u03b5i(t), where \u03b4i0 and \u03b4i1 are the indicator variables for the two possible genotypes at the marker for individual i, and g0(t) and g1(t) are functions representing the dynamic process for individuals with different genotypes (the two possible genotypes here have been encoded as 0 and 1). Although the functions could take on any form that can be appropriately parameterized, we consider K-degree polynomial forms here, given they are flexible and commonly used in fitting complex curves, with a coefficient \u03b2k for the exponent k, i.e., gj(t) = \u03b2j0 + \u03b2j1t + \u22ef + \u03b2jKtK. For each genetic association approach described below, FDR was estimated by permutation tests in which the strain labels are randomly permuted, so that the correlation of the expression traits was maintained while any genetic associations were destroyed49. 
When applied to the yeast data set, we divided the whole yeast genome into 602 bins of 20 kb in size. The thresholds for declaring eQTL hotspots are based on a binomial test with p-value cutoff 0.05/602. We assumed that the number of eQTLs in a bin follows the binomial distribution with parameters n = total number of linkages identified across the whole genome and p = 1/602, which assumes equal probability of linkage among the 602 bins. Thus, under the binomial hypothesis, the threshold N0 was selected such that the probability of observing at least N0 eQTL linkages is less than 0.05/602. Traditional eQTL analysis is restricted to gene expression levels at a static state; a straightforward method is to split segregants into two groups according to their genotypes at a marker and perform the t-test or Wilcoxon rank-sum test to check whether there is sufficient evidence that the gene expression levels are significantly different between the two groups. A straightforward approach to leverage gene expression data of the whole time series is to perform eQTL analysis at each time point independently, then combine the results from the analyses of all six time points at a locus as pj = mint pt,j, where pt,j is the Wilcoxon rank-sum test p-value for the gene expression trait j at the time point t. This means that if a gene expression trait was significantly linked to a locus at any of the six time points, the trait was linked to the locus in the union method. The significant p-value cutoff is determined by permutation tests with FDR < 0.05. A meta-analysis approach over time-series data assumes that data at different points are repeated measurements of the same underlying process; here pt,j again denotes the Wilcoxon rank-sum test p-value for trait j at time point t, and the significant p-value cutoff is determined by permutation tests with FDR < 0.05. 
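The binomial hotspot threshold described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; `hotspot_threshold` is a hypothetical helper name, and the defaults follow the 602-bin, 0.05/602 setup of the paper.

```python
from scipy.stats import binom

def hotspot_threshold(total_linkages, n_bins=602, alpha=0.05):
    """Smallest count N0 such that P(X >= N0) < alpha / n_bins,
    where X ~ Binomial(total_linkages, 1/n_bins), i.e. eQTLs fall
    into bins uniformly at random under the null."""
    p = 1.0 / n_bins
    bonferroni = alpha / n_bins          # Bonferroni-corrected cutoff
    n0 = 0
    # binom.sf(k, n, p) = P(X > k) = P(X >= k + 1)
    while binom.sf(n0 - 1, total_linkages, p) >= bonferroni:
        n0 += 1
    return n0
```

Any bin holding at least `hotspot_threshold(total_linkages)` eQTLs would then be declared a hotspot.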
Similar to the union method, we perform eQTL analysis at each time point independently, then combine the results from the analyses of all six time points at a locus with Fisher\u2019s method. MANOVA takes into account the covariance between multiple dependent variables and thus is specifically appropriate in testing for association between a SNP and multiple correlated gene expression traits across different time points. In particular, we test the hypothesis that \u03bc0t = \u03bc1t for all time points t, where \u03bcgt represents the mean gene expression level in the genotype g group at the testing locus at time point t. If MANOVA identifies a significant difference of gene expression levels across different time points between groups of samples with different genotypes, we declare an eQTL for the trait at the testing locus. A general regression model with cubic polynomial fitting for a trait yi is yi(t) = \u03b20 + \u03b21t + \u03b22t2 + \u03b23t3 + \u03b5i, in which the predictor variables are t, t2, and t3. Thus, each set of time-series data contributed six observations in the regression model and the total number of observations was 6N, where N is the total number of segregants in the yeast F2 data set. To examine the difference between segregants with different genotypes, we compared the reduced model H0 (single fitting yi(t) = \u03b20 + \u03b21t + \u03b22t2 + \u03b23t3 + \u03b5i) with a full model H1 (separate fitting gj(t) = \u03b2j0 + \u03b2j1t + \u03b2j2t2 + \u03b2j3t3 for each genotype j), i.e., yi(t) = \u03b4i0g0(t) + \u03b4i1g1(t) + \u03b5i, where \u03b4i0 and \u03b4i1 are the indicator variables for genotype 0 and genotype 1. We performed an F-test to compare the reduced model against the full model to detect eQTL association. Many gene expression changes in time series were not monotonic and sometimes had more than one fluctuation, which motivates the cubic polynomial form. MPTGA is similar to the regression model described above, but models the covariance \u03a3 among time points of the same individual with an AR(1)-type correlation structure51. 
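The reduced-versus-full cubic regression comparison above can be illustrated with a minimal sketch. This assumes numpy/scipy and simulated data; `genotype_f_test` and `cubic_design` are illustrative names, not the authors' implementation, and the setup assumes every segregant is measured at the same time points.

```python
import numpy as np
from scipy.stats import f as f_dist

def cubic_design(t):
    # Design matrix [1, t, t^2, t^3] for a cubic polynomial fit.
    t = np.asarray(t, dtype=float)
    return np.vstack([np.ones_like(t), t, t**2, t**3]).T

def genotype_f_test(times, traits, genotypes):
    """F-test comparing a single cubic curve (H0) against separate
    cubic curves per genotype (H1).
    times: (T,) time points; traits: (N, T); genotypes: (N,) in {0, 1}."""
    y = traits.ravel()                       # 6N stacked observations
    tt = np.tile(times, traits.shape[0])     # matching time for each obs
    X0 = cubic_design(tt)
    g = np.repeat(genotypes, traits.shape[1])
    # Full model: indicator-weighted cubic terms for each genotype.
    X1 = np.hstack([X0 * (g == 0)[:, None], X0 * (g == 1)[:, None]])
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0])**2)
    rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0])**2)
    df1, df2 = 4, y.size - 8                 # 4 extra params; residual df
    F = ((rss0 - rss1) / df1) / (rss1 / df2)
    return F, f_dist.sf(F, df1, df2)
```

On simulated traits with a clear genotype offset, the test returns a very small p-value; with no genotype effect, the p-value is roughly uniform.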
The regression model assumed variances at each time point were independent, while MPTGA assumes they are correlated. Similar to Ma et al., the mean vector for genotype j is modeled by the cubic curve gj = [gj(t)] = [\u03b2j0 + \u03b2j1t + \u03b2j2t2 + \u03b2j3t3] over the m time points, and the joint likelihood for the N = 95 segregants is L(\u0398), with unknown parameters \u03b2j0, \u03b2j1, \u03b2j2, \u03b2j3, \u03c1, and \u03c3e. To obtain the MLE, we take the derivative of log L(\u0398) with respect to each unknown parameter. To solve these equations, we first expressed the \u03b2\u2019s and \u03c3e as functions of \u03c1, then looked for the critical point of log L(\u0398) at which it reaches its maximum. Taking the derivative of log L(\u0398) with respect to the \u03b2.0\u2019s yields a linear system whose coefficients depend on \u03c1; the \u03b2.1\u2019s can be obtained similarly, and the derivative with respect to \u03c3e expresses \u03c3e in terms of \u03c1, too. The log likelihood can then be written as a function of \u03c1 alone and maximized. After determining parameters with the MLE procedure, an LR test was performed to test the hypothesis of the existence of an eQTL by comparing a reduced model H0 (a single gene expression trait curve) against the full model H1 (different gene expression trait curves for different genotypes), where \u03b4i0 and \u03b4i1 are the indicator variables for genotype 0 and genotype 1. It is noteworthy that MPTGA is equivalent to the regression method when the AR coefficient \u03c1 is forced to be zero, which assumes independence among observations in the time series, as the regression method does. Therefore, it is expected that the regression method would have similar performance as the MPTGA method when the time-series data have low self-dependency. 
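The AR(1)-correlated likelihood underlying MPTGA can be illustrated with a small sketch: a shared cubic mean curve plus a covariance with entries rho^|i-j|. This is a simplified, single-genotype stand-in evaluated by brute force with scipy (not the authors' closed-form MLE); `ar1_cov` and `loglik_cubic_ar1` are illustrative names.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ar1_cov(m, rho, sigma2=1.0):
    """AR(1)-structured covariance over m time points:
    Sigma_ij = sigma2 * rho**|i - j|."""
    idx = np.arange(m)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def loglik_cubic_ar1(traits, times, beta, rho, sigma2=1.0):
    """Joint log-likelihood of N time series (rows of `traits`) under a
    shared cubic mean curve and AR(1) covariance; a simplified stand-in
    for the MPTGA null model with a single genotype group."""
    mean = beta[0] + beta[1]*times + beta[2]*times**2 + beta[3]*times**3
    mvn = multivariate_normal(mean=mean, cov=ar1_cov(len(times), rho, sigma2))
    return mvn.logpdf(traits).sum()
```

Maximizing this quantity over a grid of rho values mimics the "express everything in terms of rho, then maximize" step of the MLE procedure.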
It is noteworthy that this formulation corresponds to the independent model M3 in the TGCT test section below, and we will refer to this method as the AR method in temporal-genetic association tests. Time-series data are commonly modeled by a time-lagged AR model (a first-order AR model, for example). We performed a linear regression to estimate the parameters under each model and used an F-test to compare the null model against the full model to detect eQTL associations. To localize QTL positions, we employed the \u03c72 quantile method in the LOD score test described in Mangin et al.52, in which the corresponding statistic T(d0), based on the \u22122*log-LR statistic R(d), follows a chi-square distribution with N degrees of freedom under the null hypothesis that d0 is the QTL position. The (1 \u2212 \u03b1) confidence interval is then defined as the set of d0 such that T(d0) is smaller than the \u03b1 quantile of a chi-square distribution with N degrees of freedom. Temporal QTL can be treated as a systematic source of perturbation to infer causality among traits associated with the QTL. For two traits associated with a given genetic locus, there are a limited number of possible causal relationships17: (1) Trait X is causal for Trait Y (M1); (2) Trait Y is causal for Trait X (M2); (3) Trait X is independent of Trait Y (M3); (4) Trait X is partially causal for Trait Y (M4); (5) Trait Y is partially causal for Trait X (M5). Models M1 and M2 are the simplest causal relationships between two traits, in which a given locus acts on one of the traits through the other. Model M3 is the fully independent model in which the genetic locus acts independently on each trait. Models M4 and M5 represent partial causal relationships in which one trait is causal for the other, while the genetic locus also acts independently on each trait. 
We and others have previously demonstrated that for two traits associated with a given genetic locus, there are a limited number of causal relationships possible between the traits. Static eQTLs and teQTLs were not evenly distributed along the whole genome: there were loci, referred to as eQTL hotspots, where many gene expression traits were linked. It is important to dissect the causal regulators underlying these eQTL hotspot loci, which can regulate a large number of gene expression traits. To identify causal regulators for a given hotspot, Zhu et al.3 first identified genes with cis-eQTL in the corresponding eQTL hotspot region and inferred their downstream-regulated genes as the set of genes that could be reached in the integrative molecular Bayesian network. If the downstream set of a cis-regulated gene at an eQTL hotspot locus is significantly enriched for eQTLs linked to the locus, the cis-regulated gene is inferred as a key regulator of the eQTL hotspot. Instead of integrating diverse data into a global causal network53, we aim to test pairwise causality by leveraging time-dependent genetic data. The LCMS proposed by Schadt et al.14 used normal distributions to model static expression trait data. Here, with multi-dimensional time-series data, we seek to combine both the dynamic and genetic information to infer the causal relationship between two time series more precisely. Granger26 formalized the idea of a time series-based causality test in the context of linear regression. The idea of Granger causality is to test whether the prediction of a time series can be significantly improved by incorporating information from previous time points of a second time series, and thus to test whether the second time series has a causal effect on the first. 
Mathematically, the Granger causality test compares the reduced model with the full model, which adds the lagged information of another time series as a predictor in the regression, and tests whether the improvement in fitting the data is significant. We adopted this idea and included the lagged values of one time series to augment the autoregression when comparing the causal and independent relations. Due to the small number of time points available, we used the first-order autoregression model AR(1). We used different autoregression parameters for different genotypes to account for the genetic effect and added the lagged value of one time series to represent the causal effect of one time series on the other. The parameters in each model were estimated using ordinary linear regression. To compare the causal (M1: X \u2192 Y) and reactive (M2: Y \u2192 X) models, we calculated the log joint likelihood of (X, Y) under these two models. As the total numbers of parameters in M1 and M2 are the same, comparing the log joint likelihoods under these two models and comparing BICs are equivalent. If a Trait X with a cis-eQTL is linked to a teQTL hotspot, then we can restrict the model selection to the causal (M1: X \u2192 Y), independent (M3: X\u22a5Y), and partial causal (M4) models, without considering the reactive (M2: Y \u2192 X) and partial reactive (M5) models. In such cases, the three models share the same regression model for Trait X, so we perform model selection based only on the regression on Trait Y. The corresponding log likelihood was estimated for each model, where k is the number of parameters estimated in the corresponding model; BIC penalizes complex models, and the model with the smallest BIC was identified as the model best supported by the data. One of the major goals of the TGCT test is to identify the cis-regulators of teQTL hotspots. 
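The BIC-based choice between an independent AR(1) model and one augmented with the lagged driver can be sketched as follows. This is a deliberately simplified stand-in: it ignores the genotype-specific autoregression parameters of the actual TGCT, assumes a single long series, and uses illustrative function names.

```python
import numpy as np

def gaussian_bic(y, X):
    """BIC of an OLS fit of y on X under Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = y.size
    sigma2 = np.mean(resid**2)               # Gaussian MLE of the variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                       # coefficients + error variance
    return -2 * loglik + k * np.log(n)

def select_causal_model(x, y):
    """Compare an independent AR(1) model for y (M3-like) against one
    that adds lagged x as a predictor (M1-like); smallest BIC wins."""
    y_prev, y_next = y[:-1], y[1:]
    x_prev = x[:-1]
    ones = np.ones_like(y_prev)
    bic_indep = gaussian_bic(y_next, np.column_stack([ones, y_prev]))
    bic_causal = gaussian_bic(y_next, np.column_stack([ones, y_prev, x_prev]))
    label = "causal" if bic_causal < bic_indep else "independent"
    return label, bic_causal, bic_indep
```

When y genuinely depends on the lagged x, the causal model's better fit outweighs its extra parameter and it wins the BIC comparison.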
For each teQTL hotspot, we first identified genes with cis-teQTLs linked to the hotspot as candidate causal genes, then paired these cis-eQTL genes with all genes with trans-eQTLs linked to the hotspot for the causality test. The cis-eQTL genes with a number of causal relations significantly greater than expected by chance were selected as the putative key regulators of the eQTL hotspot. The above MPTGA and TGCT are simplified based on a haploid system. When applying the MPTGA and TGCT to diploid systems, in which there are three possible genotypes, 00/01/11 (or 0/1/2), at each SNP, we can apply these methods directly to detect dominant/recessive effects. To detect full genetic effects, the genetic association test can be expressed with a separate trait curve for each of the three genotypes: the reduced model H0 assumes a single gene expression trait curve, while the full model H1 (different gene expression trait curves for different genotypes) holds when at least one of the equalities does not hold. To test the hypothesis of the existence of an eQTL at a locus, we can estimate these parameters with an MLE procedure and perform an LR test as described in the above section. A similar generalization can be applied to the TGCT. For a Trait X with a cis-eQTL and a Trait Y with a trans-eQTL, the three possible models of the causal relationships between them can be rewritten such that f t\u22121(t) corresponds to a single polynomial fitting function using previous time points of both genotypes. Then, we can test for temporal-genetic causality using the same model selection approach as described in the TGCT method. We can set the degrees of the polynomial functions f t\u22121(t) and f(t) to match those used in MPTGA. 
If the number of time points is not large enough for the unified model described above, but larger than in our current study, we can make TGCT more flexible by using higher-order AR models instead of first-order AR models. One potential drawback of the current TGCT test is that we explicitly modeled the causal variable in an AR form, which is not as powerful in identifying genetic effects as other methods. The data have two kinds of structure: (1) temporal relationships; (2) genetic structures (LD structures across the genome). Thus, in the permutation procedure, we preserved the temporal relationships and genetic structure and only permuted the strain labels. We left the gene expression data unchanged and permuted the strain labels in the genetic data, so that true genetic associations were destroyed while the correlation relationships among expression traits were maintained. We performed the permutation 10 times. We simulated time-series data sets from multivariate normal distributions, with the mean vector modeled by various patterns similar to the observed experimental results: N six-point time series were drawn either from a single multivariate normal distribution or from two separate multivariate normal distributions. The number of samples N varied from 20 to 100. The covariance matrix was modeled as above, with \u03c1 between 0.1 and 0.9, drawn from a normal distribution with different means for the high-correlation and low-correlation data sets. For the curve-fitting methods, missing time points were imputed based on the fitted curves and the temporal-genetic association methods were applied to the imputed data; for the other methods that do not fit curves to the data, i.e., union, Fisher and MANOVA, the samples with missing data were masked first and each method was applied to the remaining data. To evaluate causality inference, we simulated traits X and Y under different models. We performed two sets of simulation studies; in the first set, we simulated 10,000 trait pairs for each parameter setting. 
To evaluate the performance of the TGCT test, we simulated pairs of time series for the traits. Each Trait X consisted of 6 time points for N samples, with the mean vectors following one of the patterns shown in Supplementary Fig., and Trait Y was simulated from Trait X according to the causal model (M1). The covariance matrices were modeled similarly for each dataset. The above simulation scheme was repeated to generate 10,000 trait pairs for each sample size N, which varied from 20 to 150, and for each set of parameters in the causal model M1. For the pairs with both traits linked to the tested locus, we compared the joint likelihood based on the causal model (M1) with that based on the reactive model (M2). In the second set of studies, we simulated 10,000 trait pairs. Each Trait X, consisting of six time points for N samples, was simulated as above, and Trait Y was simulated according to the causal (M1), independent (M3), or partial causal (M4) model with different parameter settings. For the pairs with both traits linked to the tested locus, we calculated the likelihoods of Y based on the causal (M1), independent (M3), and partial causal (M4) models, and selected the best-fit model based on BIC (detailed in the Methods section above). To assess the tendency of sporadic association or overfitting in the MPTGA or the regression method, we compared p-values of significant associations in both empirical and permuted data, as well as the p-values of neighboring SNPs in strong LD. If a significant trait-SNP association is detected and the trained model describes the true underlying relationship, then neighboring SNPs in high LD, whose genotypes vary only slightly (according to the strength of the LD structure), should be able to predict the trait or strongly associate with it (model testing). 
On the other hand, if a significant trait-SNP association is detected but the trained model describes noise instead of the true underlying relationship, then neighboring SNPs in high LD are unlikely to be able to predict the trait or strongly associate with it (model testing). Thus, by comparing the consistency/correlation of the strengths of association between peak SNPs and neighboring SNPs in high LD to a trait, we can assess the overfitting problem: less overfitting will lead to a higher consistency/correlation. Both the MPTGA and the regression method, or the polynomial regression-based methods in general, are prone to sporadic associations or overfitting. To assess the statistical validity of the MPTGA test, we compare the QQ plots of the p-values of the MPTGA test based on (1) simulated data, (2) real data, and (3) permuted real data. The simulation scheme is as follows: first, we simulate a genotype vector for 95 samples, with each cell taking a random 0/1 value with 0.5 probability; then we simulate a random gene expression trait (95 samples \u00d7 6 time points) from a multivariate normal distribution with a mean vector corresponding to a random pattern in Supplementary Fig., resulting in a p-value matrix. Next, we permuted the strain labels and performed the MPTGA test, resulting in another 5703 \u00d7 2956 p-value matrix in each permutation. The wild-type strain BY4730 and the RRD1 knockout strain YSC6273-201925697 were obtained from Thermo Scientific Open Biosystems. Yeast was grown in YPD medium to log-phase in shaken flasks at 30 \u00b0C. Total RNA was extracted as described previously54. For rapamycin treatment, 100 nM rapamycin was added to the medium after yeast grew to log-phase; after culture for 50 min, total RNA was extracted the same as above. All experiments were repeated 3 times on three different days. 
Approximately 250 ng of total RNA per sample was used for library construction with the TruSeq RNA Sample Prep Kit (Illumina) and sequenced using the Illumina HiSeq 2500 instrument with a 100-nt single-read setting according to the manufacturer\u2019s instructions. Sequence reads were aligned to the yeast genome assembly using Tophat56. A total of 6932 yeast transcripts were quantified using Cufflinks55, and 5542 of them overlap with transcripts on the Yeast Genome 2.0 Arrays from Affymetrix, which were used for generating the yeast F2 time course data; these 5542 transcripts were used in further analysis. DEGs were defined by CuffDiff55. At q-value < 0.01, 64 and 581 DEGs were in the RRD1 ko signature without rapamycin (RRD1 ko no treatment vs. wild-type no treatment) and the RRD1 ko signature with rapamycin (RRD1 ko with rapamycin vs. wild type with rapamycin), respectively. The RNA sequencing data generated are available at the GEO database with accession number GSE86786. The yeast genome is divided into 20 kb bins and the number of (t)eQTLs associated with markers in each bin is counted. For those bins with significantly more (t)eQTLs than expected by chance, the genetic location corresponding to the bin is defined as a (t)eQTL hotspot. If neighboring bins were (t)eQTL hotspots, they were merged into a single (t)eQTL hotspot. The yeast GO categories were derived from the SGD database (http://db.yeastgenome.org/cgi-bin/GO/goTermFinder). We restricted attention to GO terms based on the slim mapping from SGD, which comprises roughly 100 categories. We applied the hypergeometric test using the annotation database; the annotations with the most significant p-values were reported in Table\u00a0. Codes for MPTGA and TGCT can be found at http://research.mssm.edu/integrative-network-biology/Software.html. Supplementary Information: Peer Review file"}
+{"text": "Circadian rhythms are generated by interlocked transcriptional-translational negative feedback loops (TTFLs), the molecular process implemented within a cell. The contributions, weighting and balancing between the multiple feedback loops remain debated. Dissociated, free-running dynamics in the expression of distinct clock genes have been described in recent experimental studies that applied various perturbations such as slice preparations, light pulses, jet-lag, and culture medium exchange. In this paper, we provide evidence that this \u201cpresumably transient\u201d dissociation of circadian gene expression oscillations may occur at the single-cell level. Conceptual and detailed mechanistic mathematical modeling suggests that such dissociation is due to a weak interaction between multiple feedback loops present within a single cell. The dissociable loops provide insights into underlying mechanisms and general design principles of the molecular circadian clock. Circadian clocks are endogenous pacemakers that generate gene expression oscillations with a period of approximately 24h. They enable an organism to anticipate daily changes in light and temperature and to align physiological properties with the most beneficial time around the solar day. The suprachiasmatic nucleus (SCN) of the hypothalamus is the master circadian pacemaker in mammals that coordinates peripheral clocks throughout the body and even encodes seasons. Gene expression oscillations of circadian clock genes in this master pacemaker have been shown to dissociate after perturbations of the system such as light pulses and jet-lag. The underlying mechanism remains unknown. We show that this dissociation may occur even within a single cell of the pacemaker. Data-driven mathematical modeling suggests that the dissociation relies upon a weak interaction between interlocked gene-regulatory feedback loops. 
Differential responses of these feedback loops to light perturbations are consistent with the concept of morning and evening oscillators. The core TTFL involves the clock genes Period and Cryptochrome as well as the bHLH-PAS transcription factors Bmal1 and Clock. Heterodimers of CLOCK and BMAL1 proteins enhance the transcription of Per and Cry genes by binding to their E-box promoter elements. The products of these genes, PER and CRY proteins, antagonize the activatory effects of the CLOCK-BMAL1 heterodimers and thus close the delayed negative feedback loop. This feedback loop will be hereinafter referred to as the Per loop. In addition to this \u201cprimary loop\u201d, a nuclear receptor loop has been identified, involving Ror as positive regulators of Bmal1 and RevErb as negative regulators. In our simulations, a stronger impact of the Per loop on the Bmal-Rev loop (p > 0.5), in comparison to the opposite situation, generally leads to longer transient dynamics of the Bmal-Rev loop compared to the Per loop after a 6h phase-advancing jet-lag. As discussed above, we considered symmetrical couplings between the Per and Bmal-Rev loops and the recovery of the steady state phase difference \u0394\u03b8\u22c6. After application of a 9h light pulse at CT11.5h to adult mice, differential responses of Per1 and Bmal1 mRNA oscillations were observed. In conclusion, our conceptual model, which assumes interlocking of oscillating Per and Bmal-Rev intracellular feedback loops where only the Per loop receives direct light input, successfully describes experimental observations under free-running conditions as well as dissociating dynamics upon jet-lag and 9h light pulses, in case of weak enough coupling between both loops. We examine the robustness of our results by exploiting a slightly more complex conceptual model that additionally considers amplitude effects in a system of two (mean-field) coupled Poincar\u00e9 oscillators, representing again the Per and Bmal-Rev loops. 
Within this model, the results obtained from the phase model can be robustly reproduced. Detailed mechanistic models, which lump intermediate processes (e.g., phosphorylation, nuclear transport, complex formation) into explicit delays, are able to faithfully reproduce experimentally observed periods and phases under free-running conditions. Oscillation periods of experimental and simulated time series were estimated using the lombscargle function from the signal module of the Scientific Python package. In addition to the Lomb-Scargle periodogram, a simple harmonic function was fitted to the data, from which periods \u03c4i, amplitudes, and phases \u03d5i = arctan 2 can be determined, respectively. As a conceptual model of intracellular circadian oscillation, two coupled phase oscillators are considered, where \u03b8P and \u03b8R represent oscillation phases of the Per and Bmal-Rev loops, respectively, and T denotes the intrinsic periods of the Per loop, the Bmal-Rev loop, and the Zeitgeber signal, respectively. KR and KP determine the strength of interaction between the Per and Bmal-Rev loops, while z denotes the strength of the light input. Parameter \u03b2 allows for a flexible adjustment of the steady state phase difference \u0394\u03b8 \u2254 \u03b8P \u2212 \u03b8R in the limit of vanishing frequency differences (\u0394\u03c9 \u2254 \u03c9P \u2212 \u03c9R = 0) or infinite coupling strength. Under free-running conditions (i.e., z = 0), synchronization between both loops is given by the condition in the corresponding equations, in the case of symmetric coupling. In order to consider amplitude effects, we introduce a conceptual model based on two coupled Poincar\u00e9 oscillators for the Per (j = P) and Bmal-Rev (j = R) loops, respectively; \u03bbj denote radial relaxation rates, Aj individual amplitudes, \u03c4j intrinsic periods, K the coupling strength, \u03d5 the coupling phase and i the complex element. By setting Aj = 0, one can transform the self-sustained limit cycle oscillator into a damped one. For the sake of simplicity, we assume that both oscillators couple symmetrically by means of a mean field, similar to previously discussed models. 
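The two-loop phase model can be sketched with a simple Euler integration of two symmetrically coupled phase oscillators. This is a minimal toy version under free-running conditions (z = 0); the periods, coupling strength and initial phases are illustrative, not the paper's fitted values.

```python
import numpy as np

def simulate_phase_loops(tau_P=24.0, tau_R=24.5, K=0.1, beta=0.0,
                         t_end=300.0, dt=0.01):
    """Euler integration of two symmetrically coupled phase oscillators,
    a toy stand-in for the Per and Bmal-Rev loops (no Zeitgeber, z = 0).
    Returns the final phase difference theta_P - theta_R mod 2*pi."""
    w_P, w_R = 2 * np.pi / tau_P, 2 * np.pi / tau_R
    theta_P, theta_R = 0.0, 1.0          # arbitrary initial phases
    for _ in range(int(t_end / dt)):
        d = theta_R - theta_P + beta     # coupling argument
        theta_P += dt * (w_P + K * np.sin(d))
        theta_R += dt * (w_R - K * np.sin(d))
    return (theta_P - theta_R) % (2 * np.pi)
```

With the default K the frequency mismatch is well inside the locking range, so the phase difference settles to a fixed value; with K = 0 the loops free-run and the difference drifts, the toy analogue of dissociated dynamics.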
As in previous work, we model the effect of the Zeitgeber on the Per loop by adding the Zeitgeber function. Contextual molecular circuit models are developed, based on the interplay of E-box, D-box and RRE cis-regulatory elements, as previously published, 39, 40. Using a previously published model of the Per gene with parameters vP = 1, kP = 0.1, dP = 0.25 h\u22121, and in order to mimic dissociating dynamics between the Per and Bmal-Rev feedback loops, the Per single-gene model is extended. The Zeitgeber function Z(t), of period T and intensity z, appears as an additive term in the equations and is implemented as described previously. In comparison to previously published models, 39, 40, which also include the Cry1, Ror, and Dbp clock genes, here we considered only those genes and regulatory interactions that are necessary and sufficient for the occurrence of free-running oscillations and entrainability of both Per and Bmal1 genes to the Zeitgeber. Along these lines, RevErb is a necessary network node, since the inhibitory effect of the Per protein on RevErb transcription is transmitted towards Bmal1 via the inhibitory effect of RevErb on Bmal1 transcription, which thus allows for light entrainment of Bmal1. Within the three-node network of Per, Bmal1 and RevErb, all direct links mediated through cis-regulatory elements are considered. Under the assumption that light acutely induces Per transcription, the time-dependent Zeitgeber enters the Per equation as described previously. Values for all parameters, dP = 0.25 h\u22121, dB = 0.26 h\u22121, dR = 0.29 h\u22121, vP = 1, vB = 0.9, vR = 0.6, kP = 0.1, kB = 0.05, kR = 0.9, cP = 0.1, cR = 35, bP = 1, bR = 8, have been obtained from previous work and modified as described. Ordinary differential equations were solved numerically using the odeint function from the integrate module of the Scientific Python package; the solutions have been drawn at equidistant intervals of \u0394t = 0.01 h. Delay differential equations were solved using the Matlab function dde23, called from a Python script using the matlab.engine API; again, the solutions have been drawn at equidistant intervals of \u0394t = 0.01 h. S1 Fig: Surrogate data generation, as described in the Materials and Methods of the Main Text. 
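The mean-field coupled Poincare oscillator model can likewise be sketched with odeint. All parameter values below are illustrative (not the paper's fitted ones), and the mean-field coupling is a simple additive term as assumed for the symmetric case.

```python
import numpy as np
from scipy.integrate import odeint

def coupled_poincare(state, t, lam=0.1, A=1.0, tau_P=24.0, tau_R=24.5,
                     K=0.05):
    """Two mean-field coupled Poincare oscillators in Cartesian
    coordinates; a toy version of the Per and Bmal-Rev loops.
    Radial dynamics relax to amplitude A at rate lam."""
    xP, yP, xR, yR = state
    mx, my = 0.5 * (xP + xR), 0.5 * (yP + yR)   # mean field
    derivs = []
    for x, y, tau in ((xP, yP, tau_P), (xR, yR, tau_R)):
        r = np.hypot(x, y)
        w = 2 * np.pi / tau
        dx = lam * x * (A - r) - w * y + K * mx
        dy = lam * y * (A - r) + w * x + K * my
        derivs += [dx, dy]
    return derivs

t = np.arange(0.0, 480.0, 0.1)                  # 20 days, 0.1 h steps
sol = odeint(coupled_poincare, [1.0, 0.0, 0.0, 1.0], t)
```

Setting A = 0 for one oscillator turns it into a damped oscillator, as noted above for the conceptual model.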
A) N cells are randomly located in a square-shaped space, with positions drawn from a two-dimensional uniform distribution. B) To each cellular position, an oscillating, sinusoidal intensity signal of period τi and initial phase φi is assigned. To mimic the experiment, periods and initial phases of in silico Bmal1 or Per1 signals are set differently. At each time point t, the signal is convolved with a Gaussian kernel of standard deviation σG in order to mimic the spatial extension of neurons. C) Gaussian noise of standard deviation σn is independently added to the value of each pixel at each time point t. D) Illustrative sketch of the resulting surrogate-data image stack for exemplary time points. E) Example of individual surrogate time-series data from a single pixel for both Bmal1 (orange) and Per1 (blue) image stacks. Depicted are various steps to generate the surrogate data as described in the Materials and Methods. (TIF)S2 Fig. Top: Example images of the Per1 surrogate time-lapse movies at time point t = 0. Broadness of the Gaussian convolution kernels increases from left to right, which can be associated with increasing neuron sizes or signal diffraction. Parameters σ0 = 0.0176, N = 150 and σn = 1 have been used. A standard deviation σG = σ0 of the Gaussian convolution kernel in the surrogate data generation approximates the size of an SCN neuron as recorded by the experimental methods. Middle: Gaussian kernel density estimates in the bivariate graph of Bmal1 and Per1 oscillation periods, estimated by a Lomb-Scargle analysis of surrogate time-lapse movies generated under the hypothesis of dynamical dissociation at the single-cell level, for an increasing Gaussian kernel width (σG) from the left to the right column. Bottom: Same as in the middle panel for the hypothesis of randomly located cells carrying either a Bmal1 or a Per1 signal of different periods.
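The surrogate-data pipeline in panels A-C (random cell positions, sinusoidal signals, Gaussian blur of width σG, independent pixel noise σn) can be sketched as follows; the grid size, parameter values, and function names are illustrative assumptions, not the published implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def surrogate_frame(t, pos, periods, phases, grid=64, sigma_g=1.5, sigma_n=0.1):
    """One frame of the surrogate movie: deposit each cell's sinusoidal
    intensity at its pixel, blur with a Gaussian kernel (spatial extent of
    neurons), then add independent Gaussian pixel noise."""
    img = np.zeros((grid, grid))
    sig = np.sin(2 * np.pi * t / periods + phases)
    ij = np.clip((pos * grid).astype(int), 0, grid - 1)
    for (i, j), s in zip(ij, sig):
        img[i, j] += s
    img = gaussian_filter(img, sigma_g)       # mimic cell size / diffraction
    return img + rng.normal(0.0, sigma_n, img.shape)

N = 150
pos = rng.random((N, 2))                      # uniform positions in unit square
periods = rng.normal(24.0, 0.5, N)            # cell-specific periods
phases = rng.uniform(0.0, 2 * np.pi, N)       # cell-specific initial phases
movie = np.stack([surrogate_frame(t, pos, periods, phases)
                  for t in np.arange(0, 48, 1.0)])
```

Increasing sigma_g reproduces the left-to-right broadening shown in S2 Fig.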
(TIF)S3 Fig. Bmal1, RevErbα and Per1 gene expression profiles of the SCN tissue data set have been fitted by harmonic functions. In panel A we allow individual periods τi for all three clock genes, while in panel B we assume a synchronized state between all clock genes such that the oscillation period τi ≕ τ is shared throughout the fit to all three clock genes. The fitted relative amplitudes and phases of the individual clock-gene expression rhythms are given by the corresponding coefficient ratios and by φi = arctan2(bi, ai), respectively. (TIF)S4 Fig. Borders of synchronization and color-coded isoclines as in the Main text for different values of β. Δθ⋆ ≈ −0.7π denotes the experimentally observed phase difference between Per1 and Bmal1 gene oscillations as estimated from the SCN tissue data. Isoclines of Δθ⋆ ≈ −0.7π in the K-(τ⋆ − τR) parameter plane are depicted by dashed white lines. These isoclines correspond to the color-coded isoclines of the Main text. (TIF)S5 Fig. A) The Per loop dynamics shows a faster response to a 6 h jet-lag as the Zeitgeber intensity z is increased. Dynamics of the Bmal-Rev loop follow these dynamics, although to a lower degree. B) Coupling constant K mainly determines how fast dynamics of the Bmal-Rev loop follow the relatively fast response of the Per loop to a 6 h jet-lag. The response of the Per loop to jet-lag gets slower to some extent, since its dynamics is attracted to the Bmal-Rev loop by the symmetric coupling, which weakens the impact of the Zeitgeber signal. C) Asymmetry in the coupling between the Per and Bmal-Rev loops has been introduced for a constant overall coupling strength K = KR + KP = pK + (1 − p)K by means of the asymmetry constant 0 ≤ p ≤ 1.
Note that for p = 0 the system of coupled oscillators forms a chain, i.e., the Zeitgeber signal Z(t) entrains the Per loop, which in turn entrains the Bmal-Rev loop without any feedback from the Bmal-Rev to the Per loop. The coupling constant has been set to its nominal value of K ≈ 0.043 as determined previously. p > 0.5 leads to a longer time of transient dynamical dissociation, eventually taking more than two weeks for the re-synchronization process, e.g., for p = 0.7. (TIF)S6 Fig. Region of synchronization in the coupling strength K and period detuning parameter plane. Period detuning has been defined as the difference between the experimentally observed oscillation period τ⋆ ≈ 24.53 h and the period τP of the Per loop. For the sake of simplicity, a symmetric detuning of the Bmal-Rev loop from τ⋆ has been assumed, such that τP = τ⋆ − 1 h and τR = τ⋆ + 1 h, respectively. Dashed white lines depict the experimentally observed phase difference Δθ⋆ between the Per and Bmal1 oscillations. C) Similar to panel B, but in the coupling strength K and Zeitgeber strength z parameter plane. For each K, a period detuning value from the dashed white line in panel B has been assigned such that the experimentally observed phase difference is conserved. D) Simulated (bold lines) free-running (z = 0) oscillations of Per (blue) and Bmal1 (orange) for the optimal parameter set, as depicted by the dashed black circle in panel B, in comparison to corresponding experimental time series (dashed lines). E) A good agreement between simulated (bold lines) and experimentally obtained (dots) dynamics after a 6 h phase-advancing jet-lag can be observed for the optimal parameter set in panel B. Parameters underlying simulations in panels (B)-(E) are AP = AR = 1, λP = λR = 0.1 h−1 and ϕ = π, compare Materials and Methods. F-J) Same as in panels (A)-(E) in the case of a damped Bmal-Rev loop, i.e.
AR has been set to zero, compare Materials and Methods. A) Schematic drawing of the conceptual model, composed of two coupled Poincaré oscillators representing autonomously oscillating Per and Bmal-Rev loops, where only the Per loop is directly driven by light. B) Region of synchronization between the Per and Bmal-Rev oscillators in the coupling strength K and period detuning parameter plane. (TIF)S7 Fig. A) Single-gene model. B) Three-gene model. For a Zeitgeber intensity of z = 0.21 that faithfully reproduces the experimentally observed response to a 6 h phase-advancing jet-lag, phases of entrainment of simulated Per, Bmal1, and RevErb gene expressions qualitatively coincide with those observed in experiments. While Per and RevErb show peaks around midday, Bmal1 shows morning peaks under LD12:12 equinoctial entrainment conditions. (TIF)S8 Fig. A) Simulated dynamics of the three-gene model after a 9 h light pulse in the three-dimensional state space (blue curve), corresponding to the simulations depicted in the Main text. (TIF)S9 Fig. A) Analogously to the Main text, simulated acrophases of Per (blue) and Bmal1 gene expressions, subject to a 9 h light pulse, are depicted for different parameter values of cR. In the case of Bmal1 oscillations, simulations with different values of cR are highlighted by different marker symbols and colors. Long-lasting transient dissociation dynamics (more than two weeks) can be observed for large values of cR, which corresponds to a weak coupling between the Per and Bmal-Rev loops due to a reduced transcriptional repression of Rev by Per. B) Instantaneous periods as determined from the time differences between two consecutive acrophases in panel A. Depending on the constant cR, either longer or shorter instantaneous Bmal1 periods compared to Per oscillation periods can be observed after a 9 h light pulse.
C) Analogously to panel (A), simulated acrophases of Per (blue) and Bmal1 gene expressions after a 6 h phase-advancing jet-lag are depicted for different parameter values of cR. D) Again, varying values of cR lead to different re-entrainment times, ultimately leading to differing values of the instantaneous periods of the Per (blue) and Bmal1 gene expressions. (TIF)S10 Fig. A) Phase response curves (PRCs) of the three-gene model, determined for 9 h Zeitgeber pulses (z = 0.43) applied to the Per variable at different times around subjective day. PRCs have been determined for different parameter values cR that can be associated with the impact (coupling strength) of the Per onto the Bmal-Rev loop. It can be noted that the PRC is barely affected by the different values of cR. B-D) Time to re-entrain for the Zeitgeber pulses as described for panel (A) in the case of Per (B), Bmal1 (C), or Rev-Erb (D) for different values of cR. While the generally shorter re-entrainment time of Per barely changes with alterations in cR, the re-entrainment time of Bmal1 and Rev-Erb increases with increasing cR. E) PRCs of the three-gene model as in panel (A), determined for 9 h Zeitgeber pulses (z = 4.3) applied to the Rev-Erb variable (in the same way as described for the Per variable), for different values of bP that can be associated with the impact (coupling strength) of the Bmal-Rev onto the Per loop. F-H) Re-entrainment time, analogously to panels (B)-(D), in the case of different values for bP and a Zeitgeber signal applied to Rev-Erb.
In conclusion, one can observe that a wide range of re-entrainment times is possible, depending on the phase of the Zeitgeber pulse as well as on the parameter values associated with inter-loop “coupling”. (TIF)S11 Fig. A) Schematic drawing of the regulatory core clock network. In this model, the network of 20 known clock genes has been condensed into gene regulatory interactions of five groups of genes, see the table in panel (A). B) Simulated free-running dynamics of the Per, Bmal1, and RevErb genes, together with the corresponding experimental time series from SCN tissue as obtained from a high-throughput study and fitted to Per2 time series data. Per free-running gene expressions are shown in comparison to the conceptual phase-oscillator and the three-gene model, where kinetic parameters have been optimized to account for experimental Per1 gene expressions. C) Simulated dynamics under equinoctial LD12:12 entrainment conditions for a Zeitgeber intensity of z = 0.015. D) Simulated differential responses to a 6 h phase-advancing jet-lag between the Per and Bmal-Rev loops, together with the corresponding experimental data for the Per2, Bmal1, and RevErbα genes. (TIF)S1 Text (PDF)."}
+{"text": "Unraveling the effects of genetic or environmental perturbations on biological rhythms requires detecting changes in rhythmicity across multiple conditions. Although methods to detect rhythmicity in genome-scale data are well established, methods to detect changes in rhythmicity or changes in average expression between experimental conditions are often ad hoc and statistically unreliable. Here we present LimoRhyde, a flexible approach for analyzing transcriptome data from circadian systems. Borrowing from cosinor regression, LimoRhyde decomposes circadian or zeitgeber time into multiple components to fit a linear model to the expression of each gene. The linear model can accommodate any number of additional experimental variables, whether discrete or continuous, making it straightforward to detect differential rhythmicity and differential expression using state-of-the-art methods for analyzing microarray and RNA-seq data. In this approach, differential rhythmicity corresponds to a statistical interaction between an experimental variable and circadian time, whereas differential expression corresponds to the main effect of an experimental variable while accounting for circadian time. To validate LimoRhyde’s performance, we applied it to simulated data. To demonstrate LimoRhyde’s versatility, we applied it to murine and human circadian transcriptome datasets acquired under various experimental designs. Our results show how LimoRhyde systematizes the analysis of such data, and suggest that LimoRhyde could prove valuable for assessing how circadian systems respond to perturbations. In diverse species, from cyanobacteria to plants to mammals, circadian clocks drive rhythms in gene expression throughout the genome. A common step in analyzing circadian or otherwise rhythmic transcriptome data is identifying which genes show evidence of rhythmic expression.
This step can now be accomplished by various computational methods, including JTK_CYCLE and RAIN. A classic approach for rhythm detection is cosinor regression (or harmonic regression), which is based on fitting a time series to the first harmonic of a Fourier series, i.e., sine and cosine curves of a set period. In addition to being a fundamental part of cosinor regression, linear models are 1 of 2 components shared by nearly all state-of-the-art methods for assessing differential expression in transcriptome data. The second is called empirical Bayes. While linear models provide the ability to handle complex experimental designs, empirical Bayes shares information across genes to make more stable estimates of gene-wise variance and thereby improve statistical power and accuracy. We sought to develop a general approach to systematically analyze circadian transcriptome data from various experimental designs. Our approach, which we call LimoRhyde, builds on cosinor regression to express complex circadian experiments in terms of a linear model, which makes circadian transcriptome data amenable to analysis by existing tools for differential expression. We validated our approach in the 2-condition scenario by comparing it to DODR on simulated data and on 6 experimental datasets from mice. To explore LimoRhyde’s flexibility, we then applied it to 2 datasets from humans. Our results suggest that LimoRhyde offers a valuable framework for assessing how rhythmic biological systems respond to genetic and environmental perturbations. All data and code to reproduce this study are available at https://doi.org/10.6084/m9.figshare.5945569. The LimoRhyde R package is available at https://github.com/hugheylab/limorhyde. Expression values were quantified as log2(TPM+1).
For the microarray datasets, we downloaded the raw (Affymetrix) or processed (Agilent or Illumina) expression data from NCBI GEO, then used metapredict v0.0.0.9019 for mapping probes to Entrez Gene IDs, intra-study normalization, and log-transformation (Suppl. Table S1). For the RNA-seq datasets (GSE73552 and E-MTAB-3428), we downloaded the raw reads, then quantified gene-level abundances (based on Ensembl Gene IDs) in units of transcripts per million (TPM) using salmon v0.8.2 and tximport v1.6.0, followed by log-transformation; details are given in Suppl. Table S1. To make circadian transcriptome data amenable to analysis using linear models, LimoRhyde follows the strategy of cosinor regression, decomposing zeitgeber or circadian time into a sine and cosine of period 24 h. Although this decomposition is the simplest, one could also decompose time based on multiple harmonics of the Fourier series or on periodic splines. Thus, a single variable becomes at least 2 variables in the linear model. For data derived from several cycles in constant conditions, one could also include a linear time variable to control for drift. Additional terms for condition, subject, or other covariates can be included as appropriate. In this approach, differential rhythmicity corresponds to a statistical interaction between the experimental factor of interest and each term related to zeitgeber/circadian time. Differential expression, meanwhile, corresponds to the main effect of the experimental factor of interest. For example, if the only variables of interest are zeitgeber time and genotype, then the linear model for gene g could be expressed as y_gi = β_g0 + β_g1·x_i + β_g2·sin(2πt_i/24) + β_g3·cos(2πt_i/24) + β_g4·x_i·sin(2πt_i/24) + β_g5·x_i·cos(2πt_i/24) + ε_gi, where y_gi denotes the expression of gene g in sample i, x_i indicates sample i’s genotype, t_i is sample i’s zeitgeber or circadian time, and ε_gi is the error term for gene g. The linear model could also be expressed in vector notation. After constructing the linear model, the transcriptome data can be analyzed using multiple existing methods based on linear models and empirical Bayes; p values were converted to q-values to control the false discovery rate. In this paper, we used limma v3.34.9.
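The sin/cos decomposition with a genotype interaction can be written out as a small design-matrix sketch, with plain least squares standing in for limma's moderated fits; the function name and the toy data are illustrative assumptions:

```python
import numpy as np

def limorhyde_design(zt, genotype, period=24.0):
    """Design matrix in the spirit of LimoRhyde: time decomposed into
    sin/cos (cosinor), plus a genotype main effect and genotype x time
    interactions; the interaction columns encode differential rhythmicity."""
    w = 2 * np.pi / period
    s, c = np.sin(w * zt), np.cos(w * zt)
    g = np.asarray(genotype, dtype=float)     # 0 = wild-type, 1 = knockout
    return np.column_stack([np.ones_like(s), s, c, g, g * s, g * c])

# toy example: one gene rhythmic in wild-type, flat in knockout
zt = np.tile(np.arange(0, 24, 4), 4)          # ZT0..ZT20, 2 reps x 2 genotypes
geno = np.repeat([0, 1], len(zt) // 2)
rng = np.random.default_rng(1)
y = (1 - geno) * np.cos(2 * np.pi * zt / 24) + rng.normal(0, 0.1, len(zt))

X = limorhyde_design(zt, geno)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta[4], beta[5]: interaction
```

Testing the interaction coefficients (here beta[4] and beta[5]) jointly against zero corresponds to the moderated F-test for differential rhythmicity described in the text.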
In the datasets from mice, which have discrete time points spaced throughout the circadian cycle, we detected genes with rhythmic expression using RAIN (see next section). In the dataset based on samples from human brain (GSE71620), one experimental factor (age) is continuous and the zeitgeber time points are approximately randomly distributed. Therefore, to calculate a q-value of rhythmicity (accounting for age), we first used LimoRhyde to construct an additive model with terms for age, brain region, and zeitgeber time. The model does not include a term for donor because, although each donor has a corresponding sample from each of 2 brain regions, those 2 samples correspond to the same age and the same zeitgeber time, making it impossible to reliably account for inter-donor variation. We then used limma to perform a moderated F-test on the coefficients corresponding to the 2 terms for zeitgeber time. For datasets with 2 conditions, our goal was to detect genes rhythmic in at least one condition. For datasets with discrete time points, we followed a similar procedure as used previously: we first calculated a p value for each gene in each condition, then used the minimum p value for each gene to calculate q-values of rhythmicity (qrhy). Comparing acrophase across conditions only makes sense if the gene is rhythmic in both conditions. We calculated q-values of being rhythmic in both conditions similarly, but using the maximum instead of the minimum p value. For the human brain dataset, we also attempted to identify rhythmic genes using the implementation of Lomb-Scargle in the MetaCycle R package v1.1.0. For differential rhythmicity, we used the DODR R package v0.99.2, specifically the robustDODR function. To validate LimoRhyde’s ability to detect differential rhythmicity between 2 conditions, we created simulations with non-rhythmic, rhythmic, and differentially rhythmic genes.
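The min/max p-value trick above (minimum p across conditions for "rhythmic in at least one" gene, maximum p for "rhythmic in both") can be sketched with a Benjamini-Hochberg adjustment standing in for the cited q-value method; the toy p values are illustrative:

```python
import numpy as np

def bh_qvalues(p):
    """Benjamini-Hochberg adjusted p-values (a common stand-in for the
    q-value procedure referenced in the text)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    q = np.empty(n)
    q[order] = np.minimum(ranked, 1.0)
    return q

# per-gene rhythmicity p-values in two conditions (toy numbers)
p_wt = np.array([1e-5, 0.03, 0.8, 2e-4])
p_ko = np.array([0.6, 0.01, 0.9, 1e-3])

q_any = bh_qvalues(np.minimum(p_wt, p_ko))   # rhythmic in >= 1 condition
q_both = bh_qvalues(np.maximum(p_wt, p_ko))  # rhythmic in both conditions
```

Acrophase comparisons would then be restricted to genes passing the q_both cutoff, matching the logic described above.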
The simulated data were based on drawing 2 samples every 2 h for 24 h from 2 conditions, and were designed to roughly mimic the properties of real data. Each simulated dataset contained 10,080 genes, with 75% of genes non-rhythmic and 25% rhythmic, and 25% of the rhythmic genes differentially rhythmic. Rhythmic expression was modeled as a sinusoid with a period of 24 h. Noise in expression was modeled as additive Gaussian with a standard deviation of 1. For each simulated dataset, we used LimoRhyde (followed by limma) and robustDODR to calculate the p value of differential rhythmicity for each gene. Based on those p values, we then used the precrec R package v0.9.1 to calculate a receiver operating characteristic (ROC) curve for distinguishing each set of differentially rhythmic genes from the genes that were rhythmic but not differentially rhythmic. By using the known labels (a benefit of the simulations), this analysis avoided confounding with the method for detecting rhythmic genes and focused on the best-case performance of LimoRhyde and DODR for detecting differential rhythmicity. To evaluate the agreement between LimoRhyde and DODR in calling genes differentially rhythmic on experimental data, we calculated Cohen’s kappa at various q-value cutoffs using the irr R package v0.84. To estimate each method’s tendency to call false positives in each dataset, we first identified genes rhythmic in at least one condition using RAIN on the true sample labels, then permuted the sample labels (wild-type or knockout) within samples from the same time point. This strategy attempts to preserve rhythmic expression patterns, but remove differential rhythmicity. For each dataset, we then calculated the mean number of differentially rhythmic genes (across 50 permutations) at various q-value cutoffs.
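The permutation scheme described above, shuffling genotype labels only among samples that share a time point so that rhythmicity is preserved while differential rhythmicity is destroyed, can be sketched as follows (names and toy labels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_within_timepoints(labels, timepoints):
    """Shuffle condition labels only among samples sharing a time point,
    preserving rhythmic expression patterns while removing any genuine
    association between condition and expression."""
    labels = np.asarray(labels).copy()
    timepoints = np.asarray(timepoints)
    for tp in np.unique(timepoints):
        idx = np.flatnonzero(timepoints == tp)
        labels[idx] = labels[rng.permutation(idx)]
    return labels

# 2 genotypes x 2 replicates at each of 6 time points (ZT0..ZT20)
timepoints = np.repeat(np.arange(0, 24, 4), 4)
labels = np.tile(["wt", "wt", "ko", "ko"], 6)
perm = permute_within_timepoints(labels, timepoints)
```

Because the label counts per time point are unchanged, any differential rhythmicity detected on permuted data is a false positive, which is how the text uses this procedure.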
Because the number of differentially rhythmic genes varies substantially across datasets, we summarized the overall results using the geometric mean. To ensure a sufficient number of rhythmic genes for comparison, we used a cutoff of qrhy ⩽ 0.1 for 5 of the 6 datasets. We used a cutoff of qrhy ⩽ 0.15 for E-MTAB-3428, which has only 4 time points and 8 samples per genotype. We estimated rhythm amplitude and zeitgeber/circadian time of peak expression (acrophase) using ZeitZeiger v1.0.0.5 with default settings. To identify gene sets enriched for differential expression, we used the camera function (CAMERA) in the limma R package. Gene sets were obtained from http://bioinf.wehi.edu.au/software/MSigDB/index.html. The gene sets are based on Entrez Gene IDs, so for the gene set analysis of GSE73552, we mapped Ensembl Gene IDs to Entrez Gene IDs using the org.Mm.eg.db R package, keeping only genes with a one-to-one mapping. To develop a workflow for using LimoRhyde, we first sought to analyze a circadian transcriptome dataset that is representative of a common experimental design, in which samples are acquired at discrete time points throughout the circadian cycle in 2 conditions. We selected a dataset that included samples taken every 4 h from livers of wild-type and clock gene knockout (Arntl−/−) mice under night-restricted feeding in LD 12-12, with gene expression measured by RNA-seq. Using RAIN, we detected genes with rhythmic expression in at least one genotype. We next used LimoRhyde to express the experimental design in terms of a linear model and used limma, a general method for analyzing microarray and RNA-seq data based on linear models and empirical Bayes, to detect differential rhythmicity. Of the 16 genes with the lowest qDR, 8 genes are part of or directly driven by the core circadian clock. Of 2434 genes detected as rhythmic, we then examined differential rhythmicity between wild-type and Arntl−/− mice.
We found that the genes with the best evidence for differential rhythmicity had strongly reduced rhythm amplitude in Arntl−/− mice, whereas changes in acrophase were widely distributed. We then examined genes that were non-rhythmic or for which differential rhythmicity was not examined (qrhy > 0.01). Among these genes, 3038 genes were differentially expressed (qDE ⩽ 0.01), of which 301 genes had an absolute log2 fold-change > 1. We also investigated how the numbers of genes classified as differentially rhythmic and differentially expressed were affected by the criteria used at each step. We found that the number of differentially rhythmic genes increased as the cutoffs for rhythmicity and differential rhythmicity became less stringent. The number of differentially expressed genes, on the other hand, increased as the cutoff for differential rhythmicity became more stringent and as the cutoff for differential expression became less stringent. Finally, to complement the gene-wise analysis, we used a method called CAMERA to perform gene set analysis. Gene sets enriched for differential expression tended to be related to the ribosome and various catabolic processes. Given criteria for rhythmicity, differential rhythmicity, and differential expression, the assignment of genes to each group can be expressed as a Venn diagram. Detecting differential rhythmicity between 2 conditions is the use case for DODR. As in LimoRhyde, differential rhythmicity in DODR is defined as a statistical interaction in a linear model based on cosinor regression. Although DODR does not use empirical Bayes to share information between genes, it does use rank-based statistics to achieve robustness to outlier samples. To validate LimoRhyde’s ability to detect differential rhythmicity between 2 conditions, and to benchmark its performance against that of DODR, we simulated transcriptome datasets containing non-rhythmic, rhythmic, and differentially rhythmic genes (Suppl. Fig. S3A).
The latter were defined by values for mean amplitude (relative to the standard deviation of additive Gaussian noise), difference in amplitude, and difference in phase for each gene. For each simulated dataset, we used LimoRhyde and DODR to calculate the pDR for each gene. We first verified that pDR for non-differentially rhythmic genes was uniformly distributed between zero and one, as expected under the null hypothesis. We then used pDR from each method to calculate the area under the ROC curve (AUC) for distinguishing each set of differentially rhythmic genes from all genes that were rhythmic but not differentially rhythmic. As the AUC is not based on any one threshold, we also calculated the fraction of differentially rhythmic genes for which pDR ⩽ 0.01 (the true positive rate, TPR). Values of AUC and TPR for both LimoRhyde and DODR approached one (perfect detection) as the difference in amplitude or phase between conditions increased. These results indicate that, independent of the method used to detect rhythmicity, both methods provide similarly strong detection of differential rhythmicity. We next applied both methods to 6 experimental datasets (Suppl. Table S1). Each dataset included samples taken at discrete circadian time points from wild-type mice and clock gene knockout mice. For each dataset, we used RAIN to detect genes rhythmic in at least one genotype, applying a less stringent cutoff (qrhy ⩽ 0.1) to have more genes for comparison. For rhythmic genes, we then used LimoRhyde and DODR to calculate q-values of differential rhythmicity. LimoRhyde selected similar numbers of genes as DODR at a stringent cutoff (qDR ⩽ 0.01) and somewhat fewer genes at higher q-value cutoffs. To evaluate the ability of the 2 methods to control false positives, we performed permutation testing on each dataset. Both methods effectively controlled false positives, detecting many fewer differentially rhythmic genes on permuted data than on the unpermuted data; although, again, LimoRhyde tended to select fewer genes than DODR at higher q-value cutoffs.
These results suggest that LimoRhyde (followed by limma) and DODR provide comparable detection of differential rhythmicity in circadian transcriptome data. Overall, q-values from the 2 methods were highly correlated (median Spearman correlation coefficient 0.91). In addition, based on Cohen’s kappa, LimoRhyde and DODR showed moderate to strong agreement at various q-value cutoffs. We next explored the flexibility of LimoRhyde using 2 transcriptome datasets from humans, each of which has a different experimental design than the datasets from mice. The first dataset from humans was based on brain tissue from 146 postmortem donors, with the zeitgeber time for each sample based on the respective donor’s time of death, date of death, and geographic location. Because the time points are based on times of death, they are approximately randomly spaced, making use of RAIN or JTK_CYCLE infeasible. Therefore, to identify rhythmic genes, we used an additive model in LimoRhyde, including terms for age (as a continuous variable in years), zeitgeber time, and brain region. Using qrhy ⩽ 0.1 and rhythm amplitude ⩾ 0.1, we identified 891 genes as rhythmic. In contrast, the Lomb-Scargle method, which can also handle randomly spaced time points, identified a different set of genes. We then identified genes differentially rhythmic with respect to age (qDR ⩽ 0.1; Suppl. Fig. S6C). To estimate the age-dependent changes in rhythm amplitude and acrophase of differentially rhythmic genes, we applied ZeitZeiger to samples from the younger 50% and older 50% of donors. Changes in rhythm amplitude were centered near zero, with similar numbers of genes showing increased or decreased amplitude in older donors (Suppl. Table S5). Changes in acrophase were shifted from zero, corresponding to a mean advance of 3.1 h in older donors. The numbers of differentially rhythmic and differentially expressed genes showed a similar dependence on the criteria for rhythmicity, differential rhythmicity, and differential expression as in the mouse liver dataset.
Genes whose expression decreased with age were strongly enriched for involvement in glutamate receptor signaling, synapse structure and activity, and mitochondria (Suppl. Table S4), which is consistent with previous findings. To analyze the second human dataset using LimoRhyde, we constructed a linear model with terms for subject and time of day (Suppl. Fig. S8A). We then used limma to perform a moderated F-test on the 2 coefficients corresponding to time of day, which identified 1436 genes with time-of-day-dependent expression. Among the 15 top-ranked genes were 8 core clock genes. As the number of time points increases, though, LimoRhyde will continue to favor genes whose expression varies sinusoidally over time, whereas ANOVA, which ignores the relationship between time points, will not. Taken together, these examples demonstrate how LimoRhyde enables a statistically rigorous analysis of circadian transcriptome data from diverse experimental designs. Despite the increasing complexity of experiments to interrogate rhythmic biological systems, methods to fully analyze the resulting genome-scale data have remained largely ad hoc. Here we described LimoRhyde, a unified approach to detect gene-wise differential rhythmicity and differential expression in circadian or otherwise rhythmic transcriptome data. LimoRhyde is inspired by cosinor regression and is applicable to data from any experimental design that can be described by a linear model. LimoRhyde thus functions as an adapter, making circadian transcriptome data amenable to analysis by the ever-improving and growing set of methods designed for the differential analysis of microarray and RNA-seq data. For detecting differential rhythmicity in the common 2-condition scenario, our results suggest that LimoRhyde performs comparably to DODR.
Although LimoRhyde (followed by limma) is considerably faster, the absolute difference in runtime is negligible compared to the amount of time required to perform the experiments. LimoRhyde distinguishes itself from DODR by its versatility. First, LimoRhyde can be used to detect rhythmic or time-of-day-dependent gene expression in datasets in which time points are either randomly spaced or do not cover the full circadian cycle, scenarios for which the current implementations of methods such as JTK_CYCLE and RAIN are ill-suited. In this application, LimoRhyde is conceptually equivalent to cosinor regression, with the advantage of using empirical Bayes procedures in methods such as limma to share information between genes. Second, LimoRhyde enables the detection of differential expression between conditions, accounting for possible rhythmicity. This could reveal expression changes in genes whose mRNAs are too stable to be rhythmic or differentially rhythmic. Beyond p values, other methods are useful for interpretation. Given a set of differentially rhythmic genes, methods such as ZeitZeiger can quantify the changes in rhythm amplitude and phase. Although such post hoc comparisons are not statistical tests, it may be possible in the future to test specifically for a difference in one quantity or the other. Furthermore, gene set analysis methods such as CAMERA can identify biological processes that are enriched for changes in average expression level or in rhythm amplitude. An analogous method called Phase Set Enrichment Analysis could identify processes enriched for changes in phase or possibly the free-running period of the organism (tau). Neither DODR nor LimoRhyde is currently designed to detect differences in period. The second assumption is an alignment of all time points, whether from conditions with different values of tau (if free-running) or different photoperiods, to one scale.
For example, if the photoperiod varied between conditions, the results will depend on whether time zero in each photoperiod is defined as the time of lights on or the time of lights off. Consequently, we advise caution when calculating and interpreting differential rhythmicity in datasets based on free-running organisms for which tau varies considerably between conditions. The danger of this experimental design is that, if the time points are not aligned properly, the results will be confounded by differences in the organisms\u2019 intrinsic circadian phase .In addition to these assumptions, categorizing genes as rhythmic, differentially rhythmic, and differentially expressed\u2014although convenient\u2014requires arbitrary cutoffs of q-value and/or rhythm amplitude. An alternative approach would be to test the coefficients for the main effect and the statistical interaction jointly, which would identify genes showing evidence for either differential rhythmicity or differential expression.Multiple features of LimoRhyde remain to be explored. For example, although we used LimoRhyde in conjunction with limma, which is fast and can handle both microarray and RNA-seq data, LimoRhyde is compatible with multiple other methods for differential expression analysis. In addition, although here we decomposed time using sine and cosine curves (as in cosinor), it is also possible to apply a decomposition based on a periodic smoothing spline (as in ZeitZeiger). LimoRhyde could also be used to detect differences in higher-order harmonics of circadian gene expression .In conclusion, we have developed a general approach for the differential analysis of rhythmic transcriptome data. 
We concentrated on microarray and RNA-seq data, but given limma\u2019s success on proteomics, DNA methylation, and ChIP-Seq data, we expect the approach to extend to other types of rhythmic omics data as well. Supplemental material for LimoRhyde: A Flexible Approach for Differential Analysis of Rhythmic Transcriptome Data by Jordan M. Singer and Jacob J. Hughey in Journal of Biological Rhythms is available as additional data files: limorhyde_supp, Table_S1, Table_S2, Table_S3, Table_S4, and Table_S5."}
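The sine/cosine decomposition of time described above (a cosinor-style design matrix, before handing it to a differential expression method such as limma) can be sketched in a few lines. This is a minimal illustration of the idea, not the published R package; the function name and the 0/1 condition coding are assumptions.

```python
import numpy as np

PERIOD = 24.0  # assumed circadian period in hours

def limorhyde_design(time_h, condition):
    """Build a LimoRhyde-style design matrix: intercept, condition,
    a sine/cosine decomposition of time, and condition x rhythm
    interaction terms. A sketch of the idea, not the R package."""
    t = np.asarray(time_h, dtype=float)
    cond = np.asarray(condition, dtype=float)  # assumed 0/1 coding
    s = np.sin(2 * np.pi * t / PERIOD)
    c = np.cos(2 * np.pi * t / PERIOD)
    # Columns: intercept, condition, rhythm (sin, cos), interactions.
    return np.column_stack([np.ones_like(t), cond, s, c, cond * s, cond * c])

# Toy example: 8 samples per condition over one cycle. Testing the last
# two (interaction) coefficients against zero corresponds to testing for
# differential rhythmicity between the two conditions.
time_h = np.tile(np.arange(0, 24, 3), 2)
cond = np.repeat([0, 1], 8)
X = limorhyde_design(time_h, cond)
```

Randomly spaced time points are handled naturally here, since the sine and cosine columns are defined for any sampling times.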
+{"text": "Intermediate phenotypes such as gene expression values can be used to elucidate the mechanisms by which genetic variation causes phenotypic variation, but jointly analyzing such heterogeneous data is far from trivial. Here we extend a so-called mediation model to handle the confounding effects of genetic background, and use it to analyze flowering time variation in Arabidopsis thaliana, focusing in particular on the central role played by the key regulator FLOWERING TIME LOCUS C (FLC). FLC polymorphism and FLC expression are both strongly correlated with flowering time variation, but the effect of the former is only partly mediated through the latter. Furthermore, the latter also reflects genetic background effects. We demonstrate that it is possible to partition these effects, shedding light on the complex regulatory network that underlies flowering time variation. A crucial question in genetics is understanding how genetic variation translates into phenotypic variation. DNA sequence polymorphisms influence final phenotypes through intermediate phenotypes such as protein structures, epigenetic states, and gene expression levels\u2014many of which can be assayed using modern technologies. Understanding how these intermediate, molecular phenotypes mediate the effects of genetic variation is of fundamental interest, and has enormous applied implications. The correlation between measured expression variation and phenotypic variation would not necessarily be perfect due to time-, tissue-, and environment-specific regulation. To quantify this, the genetic effect can be decomposed using a \u201cmediation model\u201d into an \u201cindirect effect\u201d that can be explained by gene expression levels and a \u201cdirect effect\u201d that cannot. Such expression-mediated loci frequently coincide with causal variants identified using GWAS. We used published expression data; with respect to genotypes, the genome-wide SNP information from the 1001 Genomes Project was used.
Spearman\u2019s (\u03c1) and Pearson\u2019s (r) correlation coefficients between flowering time and expression levels were calculated for 20,285 genes for which more than 10% of lines showed detectable expression levels. The Benjamini-Hochberg procedure (Benjamini and Hochberg 1995) was applied to the p-values corresponding to \u03c1 to obtain the genes with the most highly correlated expression levels while controlling the FDR at 5%. For the resulting genes a correlation network was constructed. In the association models, \u03b2 is the parameter of the corresponding fixed effect. GWAS analysis for flowering time and cis-genetic effects of loci on an expression level Y were estimated using the local_vs_global_mm function in mixmogam, with a model in which \u03b5 is noise. The local region is defined as \u00b1 15 kb around the coding region of each gene, and the global region is defined as the entire genome. With mixmogam, the local and global IBS matrices were calculated as genetic relatedness using all SNPs in the local and global regions, respectively. Significance of the variance components was estimated by permutation tests (1,000 permutations), maintaining the chromosomal order of all observations but shuffling the relative positions of the two variables. The idea of using pathway analysis to dissect biological effects into direct and indirect causal relationships was developed already about 100 years ago. Denote by X some input variable, by M the mediator, and by Y the outcome. In the counterfactual framework one conceptualizes, for each individual, different potential outcomes depending on the state of other variables. For example, one would denote by Y_x(u) the outcome for individual u when X would be equal to x. Although in practice never observable, one contemplates the potential outcomes depending on different values of x as mathematically existing entities \u2014 the counterfactual variables.
To develop these ideas, the average causal effect of a change from x to x\u2032 on the outcome Y is defined in terms of these counterfactual outcomes. If one is interested in the effect that changing x has on the outcome Y directly, then the first idea is to look at counterfactual outcomes while keeping the level m of the mediator M fixed. This leads to the so-called Controlled Direct Effect (CDE). The Natural Direct Effect (NDE) compares outcomes Y between inputs, assuming the mediator level would take the counterfactual value it would have had under x. In contrast, the Natural Indirect Effect (NIE) is defined as the effect of changing the mediator level while keeping the input fixed. In our case, X corresponds to SNPFLC, M to the gene expression level of the corresponding gene, and Y denotes the flowering time. The classical approach of pathway analysis simply consists of plugging the model for M into the second equation for Y. It turns out that for the simple model (1) both CDE and NDE coincide with the direct effect from pathway analysis when only the input X and the mediator M are considered. For the linear model (1), standard software for regression can then be used to obtain estimates of NDE and NIE. The concepts of CDE, NDE, and NIE make it possible to obtain clear definitions of direct and indirect effects for rather general classes of regression models. We want to illustrate this first in the context of the simplest possible linear regression model for mediation analysis. To this end, consider data from a sample of size n, with measurements of X, M, and Y. According to the pathway approach, plugging the model for M into the second equation for Y yields the decomposition of the total effect. One shortcoming of this extremely simple mediation approach is that it does not take into account at all the polygenic effect from other SNPs. The customary mixed-model approach to GWAS analysis uses a random effect to model that polygenic effect, and we would like to incorporate such random effects into the mediation analysis. Thus, consider the following generalization of (1), in which a random polygenic term is added. To test whether there is an indirect effect, that is, the null hypothesis \u03b21 = 0, we used permutation tests.
Gene expression values were permuted 1,500 times while keeping flowering time, genotype, and the relatedness matrix fixed. The amount of flowering time variation explained by SNPFLC and FLC expression was estimated, and flowering time was predicted using the full mediation model given by equation (4), using the estimates described above. Genotype data are available at https://github.com/Gregor-Mendel-Institute/swedishgenomes. FLC expression at a few weeks of age (the nine-leaf stage) was correlated with the eventual flowering of the same genotype across 132 inbred lines (Table S1). According to the Benjamini-Hochberg procedure at an FDR level of 5%, 38 out of 20,285 genes (0.2%) showed significant correlation with flowering time. Among these were a priori flowering time genes, including FLOWERING LOCUS C (FLC). We also examined the effect of genetic variation on gene expression, using a 30 kb window surrounding each gene. Based on permutation tests, SNPFLC predicts flowering time but not FLC expression, while FLC expression strongly predicts flowering time, with weaker associations in two other a priori candidates. As just noted, the variance-component analysis clearly supports cis-regulation of FLC expression. Using this model, we estimate that 40.8% of the total effect of SNPFLC is mediated through FLC expression. As argued above, the remaining 59.2% must thus be due to unmeasured effects on FLC regulation, as it is hard to see how SNPFLC could affect flowering any other way. Both cis-genetic variation at FLC not tagged by SNPFLC and trans-acting background genetic loci contributed significantly. The importance of the genetic background can readily be seen by comparing the result above to those obtained using a model that does not control for confounding genetic background; without this control, the effect attributed to FLC is overestimated. Of the genes significant at p \u2264 0.05, several a priori candidates, including SPL15, showed evidence of mediating expression effects.
A correlation between FLC expression and flowering time was only seen for early-flowering lines that have no requirement of vernalization. Extending the mixed-model approach commonly used in GWAS studies, we developed a simple mediation model that takes genetic background into account, and showed that it dramatically reduced overestimation of the effect of FLC, although Principal Component Analysis (PCA) can, in principle, also handle complex confounding. Conversely, the fact that FLC expression only partly reflects the main SNP almost certainly reflects allelic heterogeneity at FLC, a pattern seen in both A. thaliana and humans. According to our estimates, less than half of the effect of the main SNP at FLC is mediated by FLC expression. Our results also shed some light on the network regulating flowering time. FLC works upstream of the integration and photoperiod pathways, controlling the expression of key flowering time genes like FT and SOC1 in the integration pathway and CRY2 in the photoperiod pathway. Our correlation and variance component analyses indicated that the effect of FLC was mediated by FT and SOC1 but not CRY2, suggesting that FLC may regulate these pathways differently. In general, however, the central role played by FLC is illustrated by how well our simple model predicts flowering time across populations and environments. In conclusion, our results illustrate how genetic variation and intermediate phenotypes such as gene expression may be combined to understand the genotype-phenotype map, while at the same time illustrating the complexity of even an extremely simple network dominated by a single locus."}
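The linear mediation decomposition discussed above (an indirect effect through expression and a direct effect that bypasses it) can be illustrated with a product-of-coefficients sketch on toy data. This is a minimal single-SNP illustration without the polygenic random effect the paper adds; all variable names and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical toy data mimicking the setup: X = genotype at the focal
# SNP (0/1), M = mediator (FLC-like expression), Y = flowering time.
x = rng.integers(0, 2, n).astype(float)
m = 1.5 * x + rng.normal(0, 1, n)            # X -> M, true a = 1.5
y = 2.0 * m + 0.5 * x + rng.normal(0, 1, n)  # M -> Y (b = 2), direct c' = 0.5

# Fit M ~ X and Y ~ X + M by ordinary least squares.
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
coef = np.linalg.lstsq(np.column_stack([np.ones(n), x, m]), y, rcond=None)[0]
direct, b = coef[1], coef[2]

indirect = a * b                  # effect mediated through expression
total = direct + indirect
prop_mediated = indirect / total  # true value is 3 / 3.5, about 0.86
```

In the linear case this product-of-coefficients estimate coincides with the NDE/NIE decomposition described in the text; significance of the indirect effect could then be assessed by permuting the mediator, as the paper does.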
+{"text": "The emerging field of pathway-based feature selection that incorporates biological information conveyed by gene sets/pathways to guide the selection of relevant genes has become increasingly popular and widespread. In this study, we adapt a gene set analysis method \u2013 the significance analysis of microarray gene set reduction (SAMGSR) algorithm \u2013 to carry out feature selection for longitudinal microarray data, and propose a pathway-based feature selection algorithm \u2013 the two-level SAMGSR method. By using simulated data and a real-world application, we demonstrate that a gene\u2019s expression profiles over time can be considered as a gene set. Thus a suitable gene set analysis method can be utilized or modified to execute the selection of relevant genes for longitudinal omics data. We believe this work paves the way for more research to bridge feature selection and gene set analysis with the development of novel pathway-based feature selection algorithms. Here, a gene set or a pathway refers to a collection of genes that function together to influence and even regulate a specific biological process. In this study, the phrases \u201cgene set\u201d and \u201cpathway\u201d are used interchangeably. The emerging field of pathway-based feature selection that incorporates biological information conveyed by gene sets/pathways to guide the selection of relevant genes has become increasingly popular and widespread. Since biological systems are dynamic, researchers are extremely interested in investigating gene expression patterns over a time course, in an effort to capture dynamical changes that are biologically meaningful and have causal implications.
With the fast evolution of microarray and RNA-Seq technology, longitudinal experiments that collect gene expression profiles over a series of time points have become affordable and increasingly common in the fields of biomedicine and life science. The analytical strategy typically employed for such longitudinal data is to stratify the whole dataset into separate subsets according to time points and then analyze the resulting subsets separately. This approach fails to consider the correlations among measures of a specific subject at different time points. Additionally, it overlooks those genes with trivial changes at individual time points but non-marginal accumulated effects when taken together. Therefore, this approach is usually inefficient and lacks statistical power. On the other hand, some statistical methods that can analyze longitudinal gene expression data directly have been proposed. Among them, many have adopted a filter method to carry out the selection of relevant features for longitudinal gene expression profiles by screening genes one by one. For example, the GEE-based screening procedure fits a GEE model to each gene and then excludes the non-significant genes. By filtering genes one by one, this GEE-based screening is highly likely to include many redundant genes and thus to inflate the false positive rate. The redundant genes are irrelevant but suggested to be associated with the phenotype of interest by a feature selection method, mainly due to their correlations with the true relevant genes. Another example is the EDGE method proposed by Storey et al. The EDGE method is designed to identify differentially expressed genes over time between different phenotypes. This method utilized spline-based models to construct expression-versus-time curves for individual genes and then screened genes one by one according to their significance levels. Again, this method has the same drawback as the GEE-based screening, namely, the inclusion of many redundant genes.
To the best of our knowledge, there is no pathway-based feature selection algorithm for longitudinal gene expression data. Given that pathway-based feature selection methods have been demonstrated to be superior to conventional feature selection methods, there is an urgent need to develop pathway-based algorithms to tackle longitudinal data. Here, we propose an extension of a pathway analysis method \u2013 significance analysis of microarray gene set reduction (SAMGSR) \u2013 to conduct feature selection for longitudinal microarray data. In this modification, we extend SAMGSR by applying its reduction step twice. At the first reduction step, the core gene subsets corresponding to the selected gene sets are identified. Then, the essential time points of the selected genes are obtained subsequently. This extension is referred to as the two-level SAMGSR algorithm hereafter. A previous version of this article is available as a preprint on arXiv at Cornell University Library: https://arxiv.org/ftp/arxiv/papers/1511/1511.08272.pdf. In that version, the two-level SAMGSR algorithm and another extension we have made to the SAMGSR algorithm for longitudinal feature selection were both included. After the preprint submission, we made substantial modifications; we also realized that an updated manuscript describing the two extensions together could easily confuse readers. Therefore, we decided to describe the two extensions in separate manuscripts. Data for the injury experiment were downloaded from the Gene Expression Omnibus repository; the accession number is GSE36809. This experiment was hybridized on Affymetrix HGU133 Plus 2 chips. According to Xia et al., uncomplicated recovery represents a recovery within 5 days, while complicated recovery includes a recovery after 14 days, no recovery by 28 days, or death. If the recovery duration is longer than 14 days, the patient certainly experienced a complicated recovery.
In this study, only patients with uncomplicated recoveries and patients with complicated recoveries were considered. The possible time points for an uncomplicated recovery include days 1/2, 1, 4, 7, and 14, whereas those for a complicated recovery are days 1/2, 1, 4, 7, 14, 21, and 28. Furthermore, we restricted our focus to the patients that had the full complement of measurements. The 25 uncomplicated patients and 18 complicated patients who met this requirement were put together and used as the training set. Since there were no measures for uncomplicated patients after 14 days, the data for patients with complications were truncated at 14 days. The rest of the patients, including 50 uncomplicated patients and 23 complicated patients, were used as a test set to validate the proposed method. In the test set, the time points considered were days 1/2, 1, 4, and 7. Of note, the characteristics of patients in the training set and the test set may differ, since the test set includes patients who had been discharged early from the hospitals. Since different pre-processing procedures may impact the data analysis, we decided to download the summary expression values of the experimental data directly from the GEO database. The gene sets were downloaded from the Molecular Signatures Database (MSigDB) at http://software.broadinstitute.org/gsea/msigdb. In this study, we considered the C2 category of this knowledgebase, which includes gene sets from curated pathway databases such as KEGG and gene sets manually curated from the literature on gene expression. In the SAMGS statistic, d(ij) is the t-like statistic of gene i at time point j and |GSk| is the size of gene set GSk; s0j is a small positive constant used to offset the small variability in microarray expression measurements. Of note, both s(ij) and s0j are time-point specific because the variability of expression measurements differs at different time points.
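A t-like statistic offset by a small constant s0, together with a gene-set statistic that aggregates it over set members, might look like the following sketch. This illustrates the general SAM/SAMGS idea rather than the authors' implementation; the equal-variance pooling and the sum-of-squares aggregation are assumptions.

```python
import numpy as np

def sam_d(x_case, x_ctrl, s0=0.1):
    """SAM-style t-like statistic per gene (rows = genes, cols = samples):
    mean difference over a pooled standard error offset by a small
    constant s0. A sketch of the general idea, not the authors' code."""
    n1, n2 = x_case.shape[1], x_ctrl.shape[1]
    diff = x_case.mean(axis=1) - x_ctrl.mean(axis=1)
    pooled_var = (x_case.var(axis=1, ddof=1) * (n1 - 1) +
                  x_ctrl.var(axis=1, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return diff / (se + s0)

def samgs_stat(d, gene_set_idx):
    """Gene-set statistic: aggregate squared d values over set members."""
    return float(np.sum(d[gene_set_idx] ** 2))

# Toy check: a set containing 5 mean-shifted genes should score higher
# than a same-sized set of null genes.
rng = np.random.default_rng(0)
case = rng.normal(0, 1, (100, 10))
case[:5] += 2.0  # genes 0-4 are differentially expressed
ctrl = rng.normal(0, 1, (100, 10))
d = sam_d(case, ctrl)
```

Making s0 time-point specific, as the text describes, would simply mean computing a separate offset from each time point's expression variability.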
In the reduction step, the SAMGS statistic is calculated sequentially, first over a series of subsets of a significant gene set, aiming to identify a core subset that makes an essential contribution to the statistical significance of this gene set. Then the algorithm moves to the level of time points, with the objective of determining which combination of time points contributes substantially to the importance of a specific gene. At this level, each gene\u2019s expression profile over time is viewed as a gene set. Our rationale is that a gene\u2019s expression values for the same individual over time are highly correlated, mimicking a gene set. In a separate study, we proposed another extension to the SAMGSR method for the purpose of longitudinal feature selection, which is referred to as the longitudinal SAMGSR method. The longitudinal SAMGSR method first applies the SAMGS step to select the relevant genes and then determines the exact time point(s) at which the expression values of a gene differ between the two phenotypes. A potential disadvantage of this SAMGSR extension is that it does not incorporate the valuable biological information contained in pathways, which provides knowledge on how genes function in unison to impact biological processes. In both SAMGSR extensions, ck is regarded as a tuning parameter. Using the sequence 0.05, 0.1, \u2026, 0.5, the optimal value of ck corresponds to the one associated with the minimum 5-fold cross-validation (CV) error. In a 5-fold CV, a dataset is randomly divided into 5 roughly equal-sized folds; 4 of these folds are used to train a classifier, and the misclassification error rate is calculated on the held-out fold. This step is repeated with each of the 5 folds as the held-out fold, and the error rates are averaged. Given that the SAMGSR extensions cannot estimate the coefficients of selected genes, support vector machine (SVM) models were fitted to estimate those coefficients.
Then the posterior probability for a sample can be calculated for each time point. In this study, we use four metrics \u2013 Belief Confusion Metric (BCM), Area Under the Precision-Recall Curve (AUPR), Generalized Brier Score (GBS), and the misclassification error \u2013 to evaluate the performance of a classifier. Our previous study provided detailed descriptions of these metrics. In summary, all of these metrics range between 0 and 1. For the first two, the closer to 1 the better a classifier is; in contrast, a value of 0 is optimal for the last two metrics. Given that the SAMGSR extensions tend to identify genes that are insignificant at isolated time points but significant jointly over time, an evaluation at individual time points using these statistical metrics might be unfair to the SAMGSR extensions, so we also averaged the resulting posterior probabilities at each time point and then calculated the performance metrics using those averages. Statistical analysis was conducted in the R language version 3.1.2 (www.r-project.org). The Venn-diagram plot was made with the aid of an online bioinformatics tool. R code for the two-level SAMGSR algorithm is given in the supplementary material. In order to investigate the properties of both SAMGSR extensions, we used observed expression values from the injury application to design two sets of simulations as in our previous study. Briefly, we chose two causal genes \u2013 F13A1 and GSTM1 \u2013 and then randomly selected 998 genes from the data to serve as noise in the first simulation setting.
Denote the expression value of gene g at time point j as Xg.j. In the first simulation, the logit function of a complicated injury versus an uncomplicated injury was logit(p) = 0.18 XF13A1.1 + 0.57 XF13A1.2 + 0.29 XF13A1.3 + 0.41 XF13A1.4 + 1.02 XGSTM1.3. Here, we considered one gene whose significance arises from its moderate joint contribution over time and another whose association with the outcome is large at one specific time point. The aim of this simulation was to illustrate the inferred advantage possessed by the two SAMGSR extensions, namely, that both incorporate the accumulated effect of genes over time, recognizing genes with mild or moderate change at each time point but with a coordinated change over time. In the second simulation, we chose two genes \u2013 COX4I2 and RP9 \u2013 as the relevant genes. The logit function was logit(p) = 0.56 XCOX4I2.1 \u2212 0.91 XRP9.5. For both simulation settings, 50 replicates were created. The results for these two simulations are given in the corresponding tables. Although in the second simulation the number of relevant time points was smaller than in the first, the number of genes selected by both algorithms was dramatically larger in the second simulation. This might be because the relevant genes in the second simulation were highly correlated with other genes compared to the first simulation. The highly correlated structure between relevant features and irrelevant ones produced a large number of redundant features that both SAMGSR extensions, especially the two-level SAMGSR, cannot exclude. To the best of our knowledge, many feature selection algorithms, especially those based on filtering, suffer from this drawback; an additional filtering step using a relevant algorithm such as bagging may provide a solution to alleviate or eliminate this problem.
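The simulation design above, in which a binary recovery outcome is generated from a logistic model over a handful of causal gene-by-time features, can be sketched as follows. The stand-in expression matrix is random here, whereas the paper uses observed F13A1 and GSTM1 profiles, so the data-generating inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 43  # size of the training set in the application (25 + 18 patients)

# Stand-in expression values; the paper uses observed F13A1 (time points
# 1-4) and GSTM1 (time point 3) profiles, so this matrix is hypothetical.
X = rng.normal(0, 1, (n, 5))
beta = np.array([0.18, 0.57, 0.29, 0.41, 1.02])  # coefficients, simulation 1

logit = X @ beta
p = 1.0 / (1.0 + np.exp(-logit))  # inverse-logit gives P(complicated)
y = rng.binomial(1, p)            # complicated (1) vs uncomplicated (0)
```

Repeating this draw 50 times, with 998 extra noise genes appended to X, would reproduce the replicate structure described in the text.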
Both of the SAMGSR extensions incorporate the correlated structure of a gene\u2019s expression profiles over time in the framework of gene sets, and are more likely to identify genes with coordinated, aggregated effects over time even when their effect sizes at individual time points are insignificant. The na\u00efve strategy of implementing feature selection separately at individual time points would overlook these genes. This also explains why the overlaps among the genes selected by both extensions over time were very large. The curated pathways in major databases such as KEGG and GO tend to be enriched in the most prevalently studied diseases, e.g., cancers. Moreover, the pathways are far from complete even for these diseases. These facts potentially introduce biases and unfairness to an algorithm that utilizes pathway information to guide feature selection, e.g., the two-level SAMGSR method. One solution is to use a statistical method to construct data-driven gene sets. In this article, we adapted the SAMGSR method for feature selection of longitudinal gene expression profiles. To the best of our knowledge, this study is one of the few efforts to explore how to execute feature selection for longitudinal gene expression data with additional consideration of pathway knowledge. Given that the two-level SAMGSR extension only performs comparably at individual time points and is even outperformed by the longitudinal SAMGSR method when considering all time points together, our attempt here was not entirely fruitful. Nevertheless, we believe this work paves the way for more research to incorporate pathway information to guide feature selection, with the development of novel algorithms to tackle longitudinal gene expression data.
The R codes of the two-level SAMGSR algorithm were given in the supplementary material. The microarray data were downloaded from the Gene Expression Omnibus (GEO) repository, accession number GSE36809. Considering the intense method development in time-course microarray data analysis, the authors should include a more rigorous literature review in the background section supporting their claim about the existence of no pathway-based feature selection method. As the proposed method selects both significant genes and significant time points, the authors should explain \u201cfeature selection\u201d in the introduction in order to clarify the study objectives for the reader. In the method description, the authors should describe how they calculated p-values and explain the corresponding permutation procedure in each of the steps. This method is able to detect genes with different \u201clevels\u201d of expression over time, but it fails to detect differential temporal patterns of gene expression. Figure 3 also shows that the significant genes differ in expression levels but are similar in temporal patterns. This limitation should be discussed by the authors. How does the method perform when one gene expression profile is a noisy realisation of another? Does the method fail to detect the similarity of these two expression trajectories? If yes, please discuss. More specifically, if xbar(D) = xbar(C) + E and mean(E) = 0, then xbar(D) is a noisy realisation of xbar(C) and the test should fail to reject the null hypothesis. However, by this method, we may get a very large statistic and reject the null hypothesis. In the second step, the proposed method treats time as a quantitative variable and disregards the temporal order of measurements. According to the authors: \u201ceach gene\u2019s expression profile over time was viewed as a gene set\u201d. What limitations will this feature impose on the analysis? I have read this submission.
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above. This study proposes a two-level SAMGSR method for feature selection in longitudinal microarray data analysis. The idea of considering the over-time expression values of a gene as the expression values of genes in a gene set is quite interesting. The following are a few concerns that could be further clarified in the revised version of this paper. The longitudinal SAMGSR model, which has not been published before, was only briefly introduced. Since it has not been published, I would suggest formally introducing the longitudinal model and including it as an alternative option in this paper. Data normalization is considered an important step in high-throughput data analysis; it would be great to provide a brief description. The t-like statistic is used in SAMGSR, and the focus of this paper is microarray data. I am wondering if it is possible to have a very brief discussion on the feasibility of applying the proposed method to RNA-seq data after a certain data transformation, e.g., \u201cvoom\u201d normalization. The inserted texts in Figure 1 do not seem to be consistent in size and font. Please provide the link (and last access date) to the online Venn-diagram plot tool. The first sentences of the abstract and introduction are identical; I would suggest making some minor modifications, e.g., using a more concise version in the abstract. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
In this paper, the authors propose an extension of a pathway analysis method, SAMGSR, for longitudinal gene expression data. Both real data analysis and computer simulations are used to compare the modified method with existing methods, demonstrating some benefits of the proposed method. The proposed method is interesting, although the novelty is limited. Some other detailed comments and concerns are: more explanation is needed of the claim that \u201ca gene\u2019s expression profiles over time can be considered as a gene set\u201d; top right of page 2: when SAMGSR is used for the first time, the reference should be provided; bottom right of page 2: the sample sizes (the numbers of patients) are inconsistent between two paragraphs; and the English may need to be further polished. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."}
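The 5-fold cross-validation used above to tune the parameter ck over the grid 0.05, 0.10, ..., 0.50 can be sketched generically. A nearest-centroid rule stands in for the SVM fitted in the paper, and the way ck maps to a feature-score threshold is an assumption for illustration only.

```python
import numpy as np

def five_fold_cv_error(X, y, fit_predict, seed=0):
    """Average misclassification error over a random 5-fold split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 5)
    errs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        pred = fit_predict(X[train], y[train], X[test])
        errs.append(np.mean(pred != y[test]))
    return float(np.mean(errs))

def nearest_centroid(X_tr, y_tr, X_te):
    """Stand-in classifier (the paper fits SVMs to the selected genes)."""
    c0, c1 = X_tr[y_tr == 0].mean(axis=0), X_tr[y_tr == 1].mean(axis=0)
    d0 = ((X_te - c0) ** 2).sum(axis=1)
    d1 = ((X_te - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

def select_ck(X, y, grid=np.arange(0.05, 0.55, 0.05)):
    """Pick c_k from the grid by minimum 5-fold CV error; here c_k is
    (hypothetically) the fraction of top-scoring features retained."""
    scores = (X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)) ** 2
    return min(grid, key=lambda c: five_fold_cv_error(
        X[:, scores >= np.quantile(scores, 1 - c)], y, nearest_centroid))
```

The key design point mirrored from the text is that the tuning criterion is the averaged held-out error, so the selected ck is chosen without touching the independent test set.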
+{"text": "Looper is a computational method to analyze longitudinally gathered datasets and identify gene pairs that form looping trajectories when plotted in phase space. These loops enable us to track where patients lie on a typical trajectory back to health. We analyzed two publicly available, longitudinal human microarray datasets that describe self-resolving immune responses. Looper identified looping gene pairs expressed by human donor monocytes stimulated by immune elicitors, and in YF17D-vaccinated individuals. Using loops derived from training data, we found that we could predict the time of perturbation in withheld test samples with accuracies of 94% in the human monocyte data, and 65\u201383% within the same cohort and in two independent cohorts of YF17D-vaccinated individuals. We suggest that Looper will be useful in building maps of resilient immune processes across organisms. When we get sick, we want to be resilient and recover our original health. To measure resilience, we need to quantify a host's position along its disease trajectory. Here we present Looper. Using Looper, we identified gene pair candidates that traced a loop, finally selecting those gene pairs that were common across all individuals in the training data. CDC20-IFI44L was one such gene pair; the gene expression profiles demonstrated a phase shift, with IFI44L (Interferon-induced protein 44-like) peaking before CDC20 (Cell division cycle 20). Because CDC20-IFI44L traced a clear loop, we decided to follow up on this pair (S6 Fig). The data samples were transformed to obtain polar coordinates and centered; the angles derived from this loop were found to correlate linearly with time (Pearson\u2019s \u03c1 = 0.91).
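The polar-coordinate step described above (center the gene-pair trajectory, then read off each sample's angle) can be sketched as follows. This is a toy illustration of the idea with synthetic phase-shifted sinusoids, not the published Looper code; the function name is an assumption.

```python
import numpy as np

def loop_angles(g1, g2):
    """Center a gene-pair trajectory and return each sample's polar
    angle, unwrapped so it increases monotonically along the loop.
    A sketch of the Looper idea, not the published implementation."""
    x = np.asarray(g1, dtype=float) - np.mean(g1)
    y = np.asarray(g2, dtype=float) - np.mean(g2)
    return np.unwrap(np.arctan2(y, x))

# Toy loop: two phase-shifted sinusoids standing in for CDC20 and
# IFI44L; on a clean loop the angle tracks time almost linearly.
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
g1 = np.cos(t)               # stand-in for CDC20
g2 = np.cos(t - np.pi / 2)   # stand-in for IFI44L (phase-shifted)
theta = loop_angles(g1, g2)
r = np.corrcoef(theta, t)[0, 1]
```

The phase shift between the two genes is exactly what opens the trajectory into a loop; in-phase genes would collapse onto a line and give no usable angle.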
Our goal was to use looping gene pairs identified from the Montreal cohort to predict perturbation stage in withheld Montreal samples and in the Lausanne and Emory cohorts. To this end, we split the Montreal cohort of 15 individuals such that 11 individuals were used as training data and 4 individuals as test data. As opposed to the human monocyte data, we did not generate a composite gene expression profile from the training samples; instead, the loop was derived from the individuals in the training data. Using K-nearest neighbor analysis (K = 3) on the CDC20-IFI44L loop generated from the Montreal cohort, we achieved an accuracy of 83% in predicting the perturbation stage in the withheld test samples, comprised of 4 individuals (24 data points) in the Montreal cohort. Next, using the same CDC20-IFI44L loop derived from the training data in the Montreal cohort, the perturbation stage for the 33 data points in the Lausanne cohort sampled at days 0, 3, and 7 was predicted with an accuracy of 73%. Days 0, 3, and 7 are referred to as base, early, and middle stages, respectively. Finally, we validated the predictive power of the CDC20-IFI44L loop in identifying perturbation stage in another YF17D vaccination study, GSE13485 (Emory cohort), regarded as an independent validation set. The Emory cohort consists of PBMCs sampled from a total of 25 individuals in two separate trials, at days 0, 3, and 7 post-vaccination. Using the CDC20-IFI44L data from the Montreal cohort, the perturbation stage for the 75 data points in the Emory cohort was predicted with an accuracy of 65%. Next, we performed an in silico sensitivity analysis to measure the robustness of our method in predicting perturbation stage for noisy out-of-sample data.
The addition of noise, sampled from a Gaussian distribution centered on the mean expression profiles of the CDC20-IFI44L gene pair, along with its corresponding standard deviation, to the Emory cohort resulted in a shift in prediction accuracy from 65% to 63%, suggesting that our proposed method can handle noise.Leave-one-individual-out cross-validation (LOOCV) analysis on all 15 individuals in the Montreal cohort resulted in a tight range of prediction accuracy, implying that there were no outliers in the data.Ideally, when infected, our bodies enter a state of sickness but regain health, thereby demonstrating resilience. The dynamics of the immune response to an infection are reflected in the gene expression profile of an individual; mapped across the full spectrum of perturbation that includes inflammation and recovery, this expression profile can represent the overall resilience of the individual. Studying resilience allows us to move beyond the binary identification of the presence or absence of infection and investigate the reasons why some individuals recover poorly or not at all.Here we describe Looper, a parsimonious computational approach to identifying looping gene pairs, and we demonstrate this approach through the analysis of publicly available, longitudinally gathered microarray datasets in humans under conditions of inflammation and resolution in vitro, in human monocytes, and in response to vaccination with YF-17D, in PBMCs. The main constraint we used to select datasets for our analysis is that the data should represent inflammation as well as resolution, so that a complete trajectory from infection to recovery can be mapped. As long as the time points at which the data have been gathered show dynamic gene expression profiles, with gene expression levels rising and then returning to near-baseline levels, we can use the data to detect loops. 
Even though the publicly available human monocyte dataset (GSE47122) was unevenly sampled across time, we were able to detect the dynamic expression profiles of genes. We assume here that the rationale behind such uneven sampling is informed by the researchers\u2019 prior knowledge of gene induction kinetics of inflammation and resolution in this specific model.We study resilience by focusing on looping data, and we define loops as paths traced by two out-of-phase variables in phase space. Looper identified candidate gene pairs tracing loops that capture their respective immune system perturbation dynamics. We suggest that these loops can serve as maps by providing a clear separation between stages of inflammation and resolution, and we provide evidence in support of our hypothesis by using such maps to predict the position along an inflammation/resolution trajectory in test samples in both datasets with considerable accuracies.Genes that are induced and then resolved over an infection are often functionally coupled but have varied temporal relationships. Two genes can be expressed concurrently, at mutually exclusive points in time, or in a phase-shifted manner where the induction of one gene precedes that of the other but shares a partial overlap. In our workflow, we focus on identifying these phase-shifted gene pairs. Phase-shifts are not unique to the datasets we have studied; they can be found in infections such as malaria (in mice and in humans), not just in gene expression data but also between two behavioral variables such as temperature and weight. Further, Looper identified the CDC20-IFI44L loop in YF17D-vaccinated individuals. Related work has used Arabidopsis thaliana infection data to develop methods of determining pseudo-time as a proxy for real time. We observed a range of predictive accuracies across the datasets analyzed. 
The highest accuracy for perturbation stage prediction was achieved at 83% for the withheld samples within the Montreal cohort, which was used to identify the loops. The accuracy of perturbation stage prediction was 65% for samples in the Emory cohort (GSE13485) as compared to 73% for the Lausanne cohort dataset (GSE13699); this discrepancy could be due to differences in the platforms used in the two experiments, differences across humans, or differences in the actual rate at which individuals pass through the loop. Our intent here was not to discover biomarkers for use in a clinical setting; instead we sought to develop a simple approach to automate the discovery of loops that summarize resilience in diverse datasets. Once we establish that a gene pair robustly generates loops across different individuals, we intend to study the gene pairs and their molecular pathway in depth, for example by examining how disrupting the expression of one gene or its feedback loop impacts the induction kinetics of the other gene in the gene pair.In our studies we have demonstrated that these looping gene expression patterns can be found in immune cells such as monocytes as well as in PBMCs. It will be interesting to see if there are common loops across different cell types and across a diverse panel of immune system perturbations. Deviations from a healthy loop could reveal biological insights; for instance, in a typically self-resolving malaria infection, recovering mice trace a small loop while dying mice diverge and trace larger arcs.Selected gene expression datasets were downloaded from the National Center for Biotechnology Information Gene Expression Omnibus (GEO) using Metageo, a Python module that we designed and uploaded to the GitLab repository https://gitlab.com/prath/resilience2018. 
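Metageo's probe-to-gene collapsing (mapping probe IDs via the GPL file and taking the median over probes that share a gene) can be sketched roughly as below. The expression values and probe map are hypothetical stand-ins for real GEO/GPL files.

```python
import pandas as pd

# Hypothetical expression matrix indexed by probe ID, plus a GPL-style
# probe -> gene-symbol map; both are invented stand-ins for real GEO/GPL files.
expr = pd.DataFrame({"GSM1": [1.0, 3.0, 5.0, 2.0],
                     "GSM2": [2.0, 4.0, 6.0, 8.0]},
                    index=["p1", "p2", "p3", "p4"])
gpl_map = {"p1": "CDC20", "p2": "CDC20", "p3": "IFI44L", "p4": "IFI44L"}

# Replace probe IDs with gene names, then collapse probes sharing a gene
# by the median, as described for Metageo.
by_gene = expr.rename(index=gpl_map).groupby(level=0).median()
print(by_gene.loc["CDC20", "GSM1"])  # median of 1.0 and 3.0 -> 2.0
```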
Metageo extracts the GEO Platform Files (GPL) and maps the probe IDs on the microarray file to gene names on the GPL file, generating an output file with gene names replacing their corresponding probe IDs. For multiple probe IDs that map to the same gene, Metageo computes the median gene expression and assigns it to the gene.Publicly available human gene expression (microarray) datasets were selected with an eye to experiments that sampled patients longitudinally from the start of the perturbation through to resolution of inflammation, and included at least six time points. Our rationale for this prerequisite was that we needed variation in two variables that rose and fell out of phase with each other, which requires a minimum of four samples. Automated search could not be effectively applied at this step. Though several datasets have multiple data point samples, they focus on a narrow stage of the infection and do not include recovery. We chose microarray datasets following an extensive literature search using the key phrases \u201clongitudinal\u201d, \u201cmicroarray\u201d, \u201cinflammation\u201d, and \u201cresolution\u201d.K-nearest neighbor analysis was performed in Python using the provided code (https://gitlab.com/prath/resilience2018). Nearest neighbor computation was performed with K = 3 using Euclidean distance as the metric to measure similarity. Data were transformed from Cartesian to polar coordinates using MATLAB code described previously (https://github.com/bytorres/PlosBio2015). Briefly, this script finds the center of a two-dimensional point cloud by first normalizing the ranges of the two dimensions. Multiple possible centers are tested within the range of the data points and the center is identified as the one that produces the smallest variance for the calculated radii. We defined 0 degrees as day 0 of the experiment. Transformed data were centered, visualized and graphed using Tableau v9.0. Topological data analysis (TDA) was performed on the YF17D-vaccinated Montreal cohort (GSE13699) with the Ayasdi 3.0 software platform. 
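A rough re-implementation of the center-finding idea described above (normalize both dimensions, grid-search candidate centers, keep the one minimizing the variance of radii). This is a sketch of the cited script's described behavior, not the original MATLAB code.

```python
import numpy as np

def find_center(x, y, n_grid=51):
    """Grid-search a loop center: the candidate minimizing the variance of radii.

    Sketch of the centering procedure described in the text (normalize both
    dimensions, test many candidate centers, keep the one whose radii have
    the smallest variance); not the original code.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())   # normalize each dimension to [0, 1]
    y = (y - y.min()) / (y.max() - y.min())
    best, best_var = (0.0, 0.0), np.inf
    for cx in np.linspace(0.0, 1.0, n_grid):
        for cy in np.linspace(0.0, 1.0, n_grid):
            var = np.hypot(x - cx, y - cy).var()
            if var < best_var:
                best, best_var = (cx, cy), var
    return best

# Points on a circle become, after normalization, a circle centered at (0.5, 0.5).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
cx, cy = find_center(np.cos(t), np.sin(t))
```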
Nodes in the network represent clusters of human samples, and edges connect nodes that contain samples in common in terms of their gene expression profiles. Details of the TDA workflow are described in Torres et al. We used Symbolic Aggregate Approximation (SAX) to conduct time-series similarity searches. A related methodology of plotting individual trajectories through multidimensional disease space has been described previously. We packaged our methodology of identifying loops using gene expression data as a computational module written in Python; we have named this module Looper since it discovers loops in disease space, and have made it available in a GitLab repository: https://gitlab.com/prath/resilience2018. A schematic of the Looper workflow is also provided.Each selected dataset described gene expression values sampled across multiple time points for a collection of individuals in response to inflammation or vaccination. For each dataset, we imputed missing time points in individuals using the median gene expression for that time point across all other individuals. The gene expression data were then rescaled to baseline (time 0) and split into training and holdout sets. Next, a composite gene expression profile was created by taking the median of gene expression data per time point across all individuals within the training data. To capture dynamic transcripts, we filtered the data down to a list of the top 0.5% of genes that showed the largest expression ranges across time as measured by standard deviation. Through this filtering step, we recovered 95 genes out of an initial 18859 genes in the in vitro human monocyte dataset (GSE47122) and 91 genes out of an initial 18197 genes in the YF17D vaccination dataset (GSE13699). At this point, there were 4465 and 4095 possible gene pairs in the in vitro human monocyte dataset and the YF17D dataset, respectively. 
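The variance filter and pair enumeration described above can be sketched as follows. The profiles are toy, hypothetical values, and `frac=0.5` is used only so the toy example keeps two of four genes, whereas the paper uses the top 0.5%.

```python
import numpy as np
from itertools import combinations

def dynamic_genes(expr, frac=0.005):
    """Keep the top `frac` of genes ranked by standard deviation across time.

    `expr` maps gene name -> 1-D array of expression values over time points.
    """
    ranked = sorted(expr, key=lambda g: np.std(expr[g]), reverse=True)
    n_keep = max(1, int(round(len(ranked) * frac)))
    return ranked[:n_keep]

# Toy profiles (hypothetical values): two dynamic genes, two nearly flat ones.
expr = {"CDC20":  np.array([0.0, 1.0, 4.0, 1.0, 0.0]),
        "IFI44L": np.array([0.0, 3.0, 2.0, 0.5, 0.0]),
        "FLAT1":  np.array([1.0, 1.0, 1.1, 1.0, 1.0]),
        "FLAT2":  np.array([2.0, 2.0, 2.0, 2.1, 2.0])}
keep = dynamic_genes(expr, frac=0.5)          # 0.5 only so the toy keeps 2 of 4 genes
pairs = list(combinations(sorted(keep), 2))   # every unordered pair of retained genes
print(keep, pairs)
```

With 95 and 91 retained genes, this pair enumeration yields the 4465 and 4095 candidate pairs reported in the text (n*(n-1)/2).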
To identify the subset of gene pairs that were phase-shifted with respect to each other, we performed a time-series similarity search using symbolic aggregate approximation analysis (SAX), described in the methods, on the filtered list of genes. We ultimately obtained 102 and 45 phase-shifted gene pairs in the in vitro human monocyte dataset and the YF17D dataset, respectively. The phase-shifted gene pairs were plotted against each other to visually confirm loops. The gene pairs that traced loops were used to predict time post perturbation in the holdout sets, and sorted based on their prediction accuracies.Experimental validation was performed following the protocol described by Italiani et al. Monocytes were cultured in RPMI-1640 supplemented with 50 \u03bcg/ml gentamycin (GIBCO) and 5% heat-inactivated human serum. Monocytes were sequentially exposed to CCL2 (10 ng/ml), TNF\u03b1 (10 ng/ml), LPS from E. coli serotype 055:B5 (5 ng/ml), IFN\u03b3 (25 ng/ml), IL10 (20 ng/ml) and TGF\u03b2 (10 ng/ml): CCL2 was replaced with LPS at 2 hours, which was supplemented with TNF\u03b1 at 3 hours and IFN\u03b3 at 7 hours. Cells were washed and placed in media containing IL10 at 14 hours, which was replaced with TGF\u03b2 at 24 hours. The cells were kept at 37\u00b0C for the first 2 hours, shifted to 39\u00b0C until 14 hours and moved back to 37\u00b0C until the end of the experiment, at 5% CO2 throughout. Cells were harvested for RNA extraction at 0, 2, 2.5, 3, 4, 7, 10, 14, 24 and 48 hours.RNA was isolated from the samples in 96-well plates using the RNeasy 96 Kit (Qiagen) and converted into cDNA using the One-Step RT-PCR kit (Applied Biosystems). Primers were obtained from RealTimePrimers. Transcript fold changes for IL1A and TNIP3 were calculated with respect to GAPDH.S1 Table(XLSX)Click here for additional data file.S2 Table(XLSX)Click here for additional data file.S1 FigEach line represents the median gene expression across all individuals. 
Gene expression data are presented on a log2 scale. The gene expression profiles highlight a phase shift between the pair of genes.(TIFF)Click here for additional data file.S2 Fig(A) IL1A-TNIP3 loop comprised of all data points in the training and test samples. Circles represent individual samples. (B) Polar plot derived from the IL1A-TNIP3 loop comprised of all data points. Points are colored based on an ordinal time scale to clearly distinguish between points sampled at different times. Distinct time points can be seen to occupy distinct regions on the plot. (C) Angle derived from polar transformation of the IL1A-TNIP3 loop is positively correlated with time (\u03c1 = 0.98).(TIFF)Click here for additional data file.S3 FigDistribution of prediction accuracy for gene pairs identified as forming loops (green) versus randomly sampled gene pairs (blue). The Kolmogorov-Smirnov statistic of 0.94 and p-value of 8.75 x 10\u22126 indicate that the two sets of gene pairs are not sampled from the same distribution.(TIFF)Click here for additional data file.S4 FigIL1A-TNIP3 loop constructed with log2 qPCR data shown for three different donors (a-c). Each colored line represents average gene expression values across replicates at a different dose of LPS, as labeled in the index. The lines get thicker to mark the progress of time.(TIFF)Click here for additional data file.S5 FigThe topological network constructed with 91 genes shows that the expression level of genes at day 14 returns to day 0 baseline levels. Each box reflects the network colored by number of days post-vaccination, in the order (a-e) as days 0, 3, 7, 10, and 14.(TIFF)Click here for additional data file.S6 FigLoop constructed using gene expression data (log2) from the 11 individuals in the training data. Circles represent individuals sampled at different time points. The time point is labeled next to the appropriate circle.(TIFF)Click here for additional data file."}
+{"text": "To this end, retinal organoids generated from human embryonic stem cells (hESCs) were analyzed by single cell RNA\u2010sequencing (scRNA\u2010Seq) at three time points of differentiation. Combinatorial data from all time points revealed the presence of nine clusters, five of which corresponded to key retinal cell types: retinal pigment epithelium (RPE), retinal ganglion cells (RGCs), cone and rod photoreceptors, and M\u00fcller glia. The remaining four clusters expressed genes typical of mitotic cells, extracellular matrix components and those involved in homeostasis. The cell clustering analysis revealed the decreasing presence of mitotic cells and RGCs, formation of a distinct RPE cluster, the emergence of cone and rod photoreceptors from photoreceptor precursors, and an increasing number of M\u00fcller glia cells over time. Pseudo\u2010time analysis resembled the order of cell birth during retinal development, with the mitotic cluster commencing the trajectory and the large majority of M\u00fcller glia completing the timeline. Together, these data demonstrate the feasibility and potential of scRNA\u2010Seq to dissect the inherent complexity of retinal organoids and the orderly birth of key retinal cell types.The rapid improvements in single cell sequencing technologies have opened new opportunities for dissecting the complexity of organoids derived from stem or primary cells. To demonstrate the feasibility of this approach, single cell RNA\u2010sequencing on retinal organoids was performed, which revealed the presence of multiple retinal cell types and their sequential emergence during the differentiation time course. 
Data show that this method has great potential for identifying multiple cell types arising within complex organoids, enabling detailed molecular and temporal systematic studies and close comparisons between in vitro derived tissues and in vivo organogenesis.It has been estimated that 285 million people are affected by visual impairment globally, with retinal diseases accounting for approximately 26% of blindness. The improvements in next generation sequencing technologies, and in protocols for their application at the single cell level, have broadened their use across multiple systems.A detailed description of all experimental procedures is presented in the Supporting Information.A hESC (H9) cell line was differentiated to retinal organoids. Samples were collected at 60, 90, and 200 days, dissociated, partitioned into single cells using the Fluidigm C1 Single\u2010Cell mRNA\u2010Seq HT IFC and processed for scRNA\u2010Seq. Following quality control and filtering, data from scRNA\u2010Seq of each time point were normalized and then merged using the Seurat package to allow analysis of a higher cell number. The FindClusters function revealed nine distinct clusters (Figure 1).Furthermore, clustering analyses were then performed on each time point. Interestingly, this analysis also depicted cell types with transcriptional profiles shared by several clusters. As the complexity of the organoids increased and distinct cell types could be resolved over time, additional pseudo\u2010time analysis was conducted. This analysis identified the mitotic cluster as the proliferating population from which the rest of the cells emerged. A M\u00fcller glia subpopulation resides next along this branch. This is potentially due to the expression of genes commonly expressed in retinal progenitor cells (RPCs) within the M\u00fcller glia cluster, corroborating published data for murine M\u00fcller glia cells. 
Pseudo\u2010time trajectories were constructed with Monocle. Our data demonstrate the feasibility and potential of scRNA\u2010Seq to dissect the inherent complexity of retinal organoids and the orderly birth of key retinal cell types therein, which recapitulates the order of retinal development.J. Collin: study design, performed research, data collection and analysis, figure preparation, manuscript writing; R.Q.: data analysis, figure preparation, manuscript writing; D.Z., B.D.: performed research, data collection and analysis, contributed to manuscript writing; R.H., J. Coxhead: performed research, data collection; S.C.: data analysis; M.L.: study design, data analysis, figure preparation, manuscript writing, fund raising; J. Collin, R.Q., D.Z., B.D., R.H., J. Coxhead, S.C., and M.L.: approved the final version of the manuscript.All authors indicated no potential conflicts of interest.Appendix S1: Supporting InformationClick here for additional data file.Figure S1 Stepwise filtering strategy of single cell RNA\u2010Seq data. Quality control of the raw data was performed using the Scater R package and applied to each time point. (A & B) For day 60 the threshold was set to remove cells with fewer than 100,000 reads or 2000 genes (B). For day 90 and 200 a filter was applied to remove cells with fewer than 150,000 reads or 2000 genes (A). (C) Cells with higher than 15% of mitochondrial genes were removed. (D) For day 60 and 200, cells containing higher than 15% of Ambion spikes were removed from the analysis; for day 90 this threshold was set at 75%. (E) The range of total reads per cell aligned to endogenous genes.Click here for additional data file.Figure S2 Contribution of known technical factors to variation between each dataset before and after normalization. The Seurat CCA method was used to combine data from each time point. 
t\u2010SNE plots show the clustering of the cells before and after normalization.Click here for additional data file.Figure S3 Combined t\u2010SNE plot of clustering analysis reveals the presence of nine cell clusters. Seurat was used to align all time points to generate a combined dataset.Click here for additional data file.Figure S4 Expression of retinal progenitor cell marker genes in the clusters present at each time point. RPC marker genes LIN28, ZIC1, DLL3, LGR5, SOX9, GLI1, VSX2, SFRP2, ASCL1, SOX2, LHX2, PRTG, RAX, FGF19, HES1, PAX6, SIX3 and SIX6 were used to assess expression in the clusters over time. A combined violin plot shows the total expression of these genes within the clusters. A Wilcoxon test was used to compare total expression of the RPC genes between the clusters. Both the mitotic and the M\u00fcller glia clusters showed significant expression of the RPC genes, with p values of 7.066140e-07 and 2.724621e-40 respectively at day 60. The M\u00fcller glia cluster also showed significant expression of RPC genes at day 90 and day 200.Click here for additional data file.Table S1 List of top ten markers used for cluster identification shown in Figure 1.Click here for additional data file.Table S2 Summary of antibodies used for immunohistochemical staining.Click here for additional data file."}
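The stepwise QC in Figure S1 amounts to boolean masks over per-cell metrics. A minimal pandas sketch with invented cells, using the day-60 thresholds (100,000 reads, 2,000 genes, 15% mitochondrial reads); the exact edge-inclusive semantics are an assumption:

```python
import pandas as pd

# Per-cell QC metrics (invented cells); thresholds follow the day-60 legend:
# at least 100,000 reads, at least 2,000 genes, at most 15% mitochondrial reads.
qc = pd.DataFrame({"total_reads": [250_000, 80_000, 400_000, 300_000],
                   "n_genes":     [3_500,   2_500,  1_200,   4_000],
                   "pct_mito":    [5.0,     3.0,    4.0,     22.0]},
                  index=["cell1", "cell2", "cell3", "cell4"])

keep = ((qc["total_reads"] >= 100_000)
        & (qc["n_genes"] >= 2_000)
        & (qc["pct_mito"] <= 15.0))
filtered = qc[keep]
print(list(filtered.index))  # ['cell1']
```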
+{"text": "Gene Expression Omnibus (GEO) and other publicly available data stores keep their metadata in the format of unstructured English text, which is very difficult for automated reuse.We employed text mining techniques to analyze the metadata of GEO and developed the Restructured GEO database (ReGEO). ReGEO reorganizes and categorizes GEO series and makes them searchable by two new attributes extracted automatically from each series\u2019 metadata. These attributes are the number of time points tested in the experiment and the disease being investigated. ReGEO also makes series searchable by other attributes available in GEO, such as platform organism, experiment type, associated PubMed ID as well as general keywords in the study\u2019s description. Our approach greatly expands the usability of GEO data, demonstrating a credible approach to improve the utility of the vast amount of publicly available data in the era of Big Data research.In this Big Data era, data-driven approaches to biomedical research become more and more important.The Gene Expression Omnibus (GEO) is the largest data repository designed for archiving and distributing microarray, next-generation sequencing, and other functional high-throughput genomics data, storing millions of samples (https://www.ncbi.nlm.nih.gov/geo/summary) derived from over 3000 organisms. GEO offers a simple submission procedure that allows researchers to summarize a study in plain text. However, such flexibility has also resulted in unstructured metadata in plain English scattered across different sections of each study\u2019s description, making its reuse difficult.Time-course data are fundamental to study genome dynamics and identify genes whose expression changes significantly over a defined period of time. Discovery of these genes is crucial in order to understand underlying disease mechanisms, detect novel targets for intervention and improve prevention and treatment of diseases. 
To maximize reuse, GEO incorporates a comprehensive search function to discover experimental results through query terms such as `organism\u2019 or `cell type\u2019.In this work, we employed text mining techniques to develop Restructured GEO (ReGEO), a novel database to maximize re-use of GEO data by providing time series information about experimental data stored in GEO. We used a rule-based text mining algorithm to parse the metadata in GEO to automatically identify the number of time points in the experimental design. Our approach reached an accuracy rate of 93.5% and is entirely automatic. Our work demonstrates the utility of text mining in improving the usability of publicly available data. The ReGEO database can be accessed at http://www.regeo.org.We obtained metadata and the titles of samples for GEO Series (GSE) data from GEO and GEOmetadb.We designed an NLP text mining algorithm to detect the number of time points of gene expression data in GEO from four sources: explicit statements of the number of time points in the `Summary\u2019 and `Overall design\u2019 fields; listings of time points ; approximate, time-related statements, e.g. `early stage\u2019 or `middle age\u2019; and time points in the titles of GSM samples. The algorithm takes into account that time values in sample titles may appear as a number followed by a time unit, or vice versa . 
In summary, the algorithm evaluates the following four possible scenarios for determining the time points: (a) an explicitly stated number of time points, time series or stages; (b) a list of numbers in ascending order, followed by or starting with a time unit; (c) multiple lists, as stated in (b); (d) a list of numbers, each with a time unit. Scenarios expressed in the `Summary\u2019 or `Overall design\u2019 fields are considered valid statements of time points. A regular expression-based method was used to parse the GEO metadata to match the above patterns, followed by verification of consistency and rationality. For time points expressed in the titles of samples, the following rules are applied: candidates are treated as false time points if the same number is mentioned in the summary or overall design together with samples or patients; candidates are treated as false time points if other letters deemed irrelevant to time occur in the same positions in other titles; at least part of the time points in a dataset must show a regular pattern, either an arithmetic or a geometric progression; and single-letter time units are subjected to an additional rationality analysis.When filtering GEO series by a fixed number of time points, false negatives are less desirable than false positives because the latter can be easily identified and discarded during analysis. For this reason, our algorithm was designed to err by excess and, possibly, assign extra time points to a series rather than missing any of them.The above text mining rules were fine-tuned on 600 manually-curated GEO series with \u22658 time points and evaluated on an additional 200 GEO series. 
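One of the scenarios above, an ascending list of numbers followed by a time unit, can be captured with a regular expression. This is an illustrative approximation of that rule, not the ReGEO implementation.

```python
import re

# Illustrative pattern (not the actual ReGEO code): an ascending,
# comma/"and"-separated list of numbers followed by a time unit,
# e.g. "0, 2, 4, 8 and 24 hours", yields a candidate number of time points.
LIST_WITH_UNIT = re.compile(
    r"((?:\d+(?:\.\d+)?\s*(?:,|and)\s*)+\d+(?:\.\d+)?)\s*"
    r"(hours?|hrs?|days?|weeks?|months?|min(?:ute)?s?)\b",
    re.IGNORECASE)

def count_time_points(text):
    """Return the length of the first ascending number list with a time unit."""
    for m in LIST_WITH_UNIT.finditer(text):
        values = [float(v) for v in re.findall(r"\d+(?:\.\d+)?", m.group(1))]
        if values == sorted(values):   # require ascending order, per the rule
            return len(values)
    return None

print(count_time_points("Samples were collected at 0, 2, 4, 8 and 24 hours."))  # 5
```

The ascending-order check is one way to implement the "regular pattern" requirement; rejecting unsorted lists helps discard numbers that are not time points (e.g. patient ages).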
MetaMap is a highly configurable program to map biomedical text to the UMLS Metathesaurus or, equivalently, to discover Metathesaurus concepts mentioned in text of these 200 test series, out of which 159 series have one single time point, 33 have between 2 and 7 time points (inclusive), and 8 have over 7 time points.Using this data set with manually-identified time point(s) as benchmark, our text mining program precisely identified the correct number of time point(s) in 167 out of the 200 series (83.5%). Due to inconsistent ways of reporting the baseline time point, it is common to have a small discrepancy of one time point between even two human curators; hence, we consider the computer-curated information on time points as correct if it is within \u00b11 time point as compared with the ground truth collected by human curators. Based on this criterion, our tests showed that our automated identification of time points has an accuracy rate of 93% (186/200). The error rate in series with a single time point is 1.26% (2/159), in series with 2\u20137 time points is 27.3% (9/33) and in series with over 7 time points is 37.5% (3/8). This evidences that the algorithm exhibits greater accuracy as the actual number of points of the series decreases, a satisfactory trait to correctly eliminate single time point data and to identify multiple time-points data for genome dynamic analysis.The decreased accuracy for multiple time-points data is due to the increased complexity of these data sets\u2014studies with two or more time points are generally less homogeneous and consistent in the way they refer to the time points in the experiment. One frequent case occurs when the series refer to an experiment that uses different time lengths in a treatment. For example, in series GSE28435, the correct number of time points is 6 . However, the treatment samples also included the seizure latency lengths of 4 min, 8 min, 9 min, 13.5 min, etc. 
For this reason the current version of the algorithm assigns 16 time points to this series. This kind of error is very hard to avoid, and even human curators can sometimes make the same mistake in these cases.The situation becomes even more complicated when the title of a sample includes time-related descriptors such as patient age or length of recovery time that do not refer to time points. This is illustrated by series GSE10288, where the ages of different patients were interpreted by the algorithm as time points. Similarly, the same samples under different stimulations measured over different time series can cause the same problem. Grouping samples by patient ID or treatment could possibly help to avoid this problem. However, GEO metadata is not currently organized in this manner, and future attempts to do this grouping could result in a tendency to label samples with a smaller number of time points, which would be undesirable.Despite all these complications, the accuracy of our algorithm in identifying series with a single or low number of time points allows us to confidently eliminate these data sets, as they constitute a majority of the data in the GEO database. The errors incurred by our method for high-time-point data can then be quickly checked and fixed by a human due to the small number of these data sets. The current version of the algorithm to identify time points is optimized, balancing the trade-offs of precision and recall, and used to build ReGEO.A similar approach was followed to evaluate the accuracy of the disease tags assigned to each series. From the randomly-selected 200 series that were manually curated, 172 were correctly labeled with the related diseases by the MetaMap method. The remaining 28 series were labeled with incorrect diseases or with no diseases at all. Therefore, the overall accuracy rate achieved was 86%.In order to be inclusive but precise, we have crafted Disease Search in a careful way. 
For example, to search a DO term, we consider title, summary, overall design and citations. We also used the advanced term mapping software, MetaMap, to allow partial matches, and ignored short abbreviations and stop words. In comparison, we do not impose any of these restrictions on Keyword search.The full detail of these tests can be found in supplementary file ReGEO_test_results.csv (inserted as an attachment at the end of this document).Identifying, pooling and harmonizing `small data\u2019 from many studies is one of the goals of Big Data research, which will help investigators to conduct integrative analyses of a large number of data sets under similar experimental conditions without generating new data. The ultimate goal of ReGEO is to provide end users with a convenient and accurate way to identify and categorize data for their integrative data analysis.Employing text mining techniques represents a new direction to achieve this goal by extracting useful information from unstructured metadata text. As such, ReGEO is designed not as a data `dump site\u2019, but as a user-friendly database for data identification and integrative research. In this paper we have focused on identifying the number of time points and related diseases from the GEO unstructured data description texts as a starting point, making it possible to study gene expression dynamics across different data sets and under different experimental conditions.Employing an ontology for data annotation (DO in our case) could further facilitate data discovery and integrative analysis. For example, the number of detected and manually confirmed cases for `prostate cancer\u2019 in ReGEO is 1444, of which only 1244 are consistent with the annotation in the GEO database, and 82 are not present there. However, several technical difficulties need to be further investigated. For one, there may exist many synonyms for ontology terms, and an efficient method is needed to organize and integrate these synonyms. 
Another difficulty is that ontology terms can lack sufficient detail. For example, the finest DO term on influenza infection is `Influenza\u2019, which does not contain sub-terms on the specific strains of influenza virus. This issue can be ameliorated by integrating other ontologies, such as the Infectious Disease Ontology, into the annotation. These limitations could be overcome by collaborating with ontology developers.In the future, we aim to further extend the methods developed in this work by employing advanced text mining, NLP, machine learning, and ontology techniques.Supplementary DataClick here for additional data file."}
+{"text": "We conducted network co-expression analysis of next-generation RNA sequencing data from whole blood from malaria-na\u00efve and malaria-exposed volunteers. Single cell transcription signatures were used to deconvolute the bulk RNA-Seq data into cell-specific signals.Initial exposure to P. vivax induced activation of innate immunity, including efficient antigen presentation and complement activation. However, this effect was accompanied by strong immunosuppression mediated by dendritic cells via the induction of Indoleamine 2,3-Dioxygenase 1 (IDO1) and Lymphocyte Activation Gene 3 (LAG3). Additionally, P. vivax induced depletion of neutrophil populations associated with down-regulation of three G-protein coupled receptors: CXCR1, CXCR2 and CSF3R. Accordingly, in malaria-exposed volunteers the inflammatory response was attenuated, with decreased class II antigen presentation in dendritic cells. While the immunosuppressive signalling was maintained between Plasmodium species, the response to P. falciparum was significantly more immunogenic.In silico analyses suggest that primary infection with P. vivax induces potent immunosuppression mediated by dendritic cells, conditioning subsequent anti-malarial immune responses. Targeting immune evasion mechanisms could be an effective alternative for improving vaccine efficacy.Malaria remains an important public health problem worldwide, with more than 216 million cases and 445,000 deaths per year. Immunity to Plasmodium antigens is inefficiently generated and rapidly lost in the absence of ongoing exposure. As a result, individuals from high malaria transmission areas develop partial protection against severe symptoms at an early age and experience a significant number of asymptomatic infections afterwards.Plasmodium falciparum, the most investigated malaria species, can modulate immune responses by interfering with maturation of antigen-presenting cells; however, the precise mechanisms leading to immune tolerance are poorly understood. 
Likewise, it has been suggested that malaria parasites impair dendritic cell (DC) function, resulting in the induction of a tolerant T cell phenotype. There is evidence that immunity to P. vivax is more rapidly acquired than immunity to P. falciparum. While exposure to P. vivax usually results in a strongly reduced incidence of febrile episodes upon re-infection, secondary P. falciparum infections are associated with fever and high parasitaemia. Comparing the immune responses induced by each parasite could reveal immune evasion mechanisms to be used as targets to increase vaccine efficacy. Unlike P. falciparum, there are no established laboratory methods to culture P. vivax in vitro, in contrast to the well-established P. falciparum CHMI models. To better understand immune responses induced by exposure to P. vivax, we conducted systems immunology analyses of our publicly available RNA-Seq data from a P. vivax CHMI in order to elucidate key host-parasite interactions as potential vaccine targets. We optimised bioinformatic pipelines to enhance read-alignment and increase the number of identified transcripts, improving the sensitivity of the analysis. By using cell-specific signatures obtained from single cells, we deconvoluted the signal from whole blood to specific cell types. Network analysis and in silico signal deconvolution allowed us to identify the role of DC in induced tolerance, as well as specific targets in the antigen presentation pathway that could play a central role in the establishment of immunological memory. Finally, we compared the P. vivax transcription programs with a similar CHMI in P. falciparum to define the specificity of the tolerogenic responses. Our P. vivax dataset is deposited in Gene Expression Omnibus (GEO) under accession number GSE67184. The dataset is associated with a P. vivax CHMI, corresponding to the analysis of differences in gene expression between malaria-na\u00efve (MN) and malaria-exposed (ME) volunteers. After challenge with P.
vivax, whole blood transcriptomes from six na\u00efve volunteers were compared with those from six malaria-exposed volunteers to investigate the greater symptom severity in na\u00efve infection, which occurs despite equivalent parasite loads and time to diagnosis. The P. falciparum dataset deposited in Gene Expression Omnibus (GEO) under accession number GSE50957 is associated with a P. falciparum CHMI, where PBMC transcriptomic profiles were obtained to explore the association of malaria immunity with fever (Table S1). All reads were subject to quality control using FastQC. For differential expression analysis and gene co-expression network analysis, gene expression was estimated as counts per million (CPM), filtering out genes with fewer than two counts in at least half of the samples. Determination of differentially expressed genes (DEG) was done using EdgeR. Un-supervised transcript co-expression analysis was performed using the graphical correlation-based tool BioLayout Express3D: a range of correlation thresholds (r = 0.8 to r = 0.95) and Markov Clustering Algorithm inflation values (MCL = 1.7 to MCL = 5) were used to determine an optimal graph structure. Clusters were then manually curated to remove artefacts. The gene clusters with highest correlation scores were used to generate and visualize networks based on GO-enrichment analysis (GOEA) using ToppGene, with 10^6 permutations; the significance level was set at 0.05. Transcription factors were identified using the transcriptional regulatory relationships deposited in the TRRUST database. To evaluate gene-specific differences between blood cells we retrieved the Zheng et al. SC-RNA-Seq dataset (cell-specific markers defined at p < 10\u22125 by a Kolmogorov\u2013Smirnov test). Because of a sample size requirement, we merged the original single cell data from T cell groups (CD4 and CD8) into one group. Neutrophil markers were obtained from a similar analysis using the Hoek et al. RNA-Seq dataset.
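The CPM normalization and low-expression filter described above can be sketched as follows. This is a minimal illustration of the stated criterion (at least two counts in at least half of the samples), not the authors' EdgeR pipeline; the function names are illustrative.

```python
import numpy as np

def cpm(counts):
    """Counts-per-million normalization; counts is a genes x samples matrix."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=0) * 1e6

def filter_low_expression(counts, min_count=2):
    """Keep genes with at least `min_count` reads in at least half of the
    samples, mirroring the filtering criterion described in the text."""
    counts = np.asarray(counts)
    keep = (counts >= min_count).sum(axis=1) >= counts.shape[1] / 2
    return counts[keep], keep
```

In practice tools such as edgeR apply library-size-aware variants of this filter, but the sketch captures the criterion stated in the text.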
Statistical analyses were performed using Prism 7 (GraphPad Software) and methods embedded in bioinformatic pipelines. The statistical test used and p values are described in the figure legends, with *p < 0.05, **p < 0.01, and ***p < 0.001. The Shapiro\u2013Wilk test was used to test for normality. For non-normally distributed data the Wilcoxon matched-pairs signed rank test was used for comparison of two groups. For normally distributed data a paired Student's t test was used. Whole blood transcriptomes from na\u00efve volunteers (n = 6) were compared before and after the challenge, to establish the transcriptomic networks activated during malaria infection. A total of 1,072 DEGs were identified, of which 39% were upregulated (Table S2). Co-expression analysis (CEA) identified eight main clusters. The three largest clusters comprised 202 genes up-regulated on exposure to malaria and involved in innate immune responses (FDR = 9.4 \u00d7 10\u221233). Cluster 1, characterised by the highest average gene expression level at diagnosis, recapitulated the typical inflammation pattern, including up-regulation of CXCL9, CXCL10, IFIT1, IFIT2, IFIT3, IFIT5, IRF1 and IRF7. The other two clusters were enriched in antigen presentation genes such as HLA-DMB and CD74 (Cluster 2) and in complement activation (Cluster 3). In contrast, cytokine response and endosome genes were downregulated during the infection (Clusters 4-6). Importantly, Cluster 4 comprised significantly down-regulated neutrophil-associated genes, including the neutrophil chemoattractant receptors CXCR1 and CXCR2 and CSF3R, which promotes neutrophil maturation. Down-regulated genes also included ACKR1 (DARC), the P.
vivax receptor in reticulocytes, CD163, CR1, a regulator of the complement cascade, CCR3, a receptor that binds to CCL11, and NCF4, which activates flavocytochrome b in neutrophils. Our previous analysis of differences in gene expression between malaria-na\u00efve (MN) and malaria-exposed (ME) volunteers demonstrated significant changes in gene expression at the time of malaria diagnosis, particularly in the na\u00efve volunteers, with downregulation of genes related to innate immunity and inflammation. Using the P. vivax CHMI, we compared the genes differentially expressed upon P. vivax challenge in MN and ME individuals. A total of 400 DEGs were detected in ME individuals (FDR < 0.05), of which 30% were upregulated (Table S3). Importantly, ME volunteers showed 62% fewer DEGs compared with the MN volunteers, with a lower proportion of upregulated genes. A smaller, statistically insignificant, decrease was observed for the ME group. Both groups exhibited a significant increase in expression of CXCL9 and CXCL10 during the infection, which could be correlated with the levels in T cells and monocytes. The receptor of the monocyte chemoattractant protein-1, CCR2, was up-regulated only in MN individuals, which correlates with the high predicted levels at the time of diagnosis (Tables S2 and S3). The availability of single-cell gene expression data offers improved means for deconvolution by providing profiles from a large number of minimally perturbed primary cells. To estimate the proportions of B cells, monocytes, mDC, neutrophils, NK and T cells in the whole blood samples obtained during the CHMI trial, we optimized and validated a bioinformatics pipeline for dissecting the characteristic expression profiles of immune cell types using a data set comprising 8K single cells from a healthy donor. This deconvolution revealed induction of IDO1, correlating with the expression of EGR2, a transcription factor binding to the IDO1 promoter.
Two of the main transcription factors involved in the antigen presentation gene regulatory network, CIITA and RFX5, were significantly up-regulated in MN but not in ME, suggesting less activated antigen presentation in the ME individuals. CSF3R, a receptor that controls the production, differentiation, and function of granulocytes, was significantly down-regulated in MN but not in the ME group. In order to assess the specificity of the tolerogenic responses, we compared the P. vivax and P. falciparum datasets. While molecules associated with immunosuppression were observed in both datasets, P. falciparum induced stronger inflammatory responses, with expression of a wide spectrum of chemokines, chemokine receptors and Toll-like receptors, in contrast to P. vivax. Toll-like receptor 7 was upregulated during the infection with both parasites, whereas TLR 1-5 and 8 were exclusively upregulated in P. falciparum. CXCL9 and CXCL10 were highly upregulated with both parasites, whereas CXCL11 and CCL2, two potent chemoattractant molecules for lymphocytes and monocytes respectively, were only upregulated in P. falciparum infections. CXCR3 downregulation in P. falciparum could be related to negative feedback from CXCL11. IL15 was the most highly upregulated cytokine in P. falciparum, whereas in P. vivax it was IL18. While expression of transcription factors involved in activating innate immune responses, including interferon regulatory factors (IRF) 1, 2, 5, 7 and 8, was highly significant in P. falciparum, only IRF 1, 7 and 9 were significantly up-regulated upon exposure to P. vivax. Down-regulation of CXCR1, CXCR2, CCR3 and CXCL6, specific to neutrophils, was absent in the P. falciparum dataset due to the sample preparation, as were inefficient CD8 memory responses (CXCR6). The inflammatory response to P. falciparum appears to be essential to expeditiously control a highly pathogenic infection at the expense of long-term immunity. We have shown that P. vivax induces dramatic drops in neutrophil populations, with specific downregulation of CXCR1, CXCR2 and CCR3.
Neutrophils regulate DC function during microbial infection, probably by cross-talk between these cell populations as an important component of the innate immune response to infection. IL-8 (CXCL8), the main ligand of CXCR1, is a powerful neutrophil chemotactic factor, and its binding to CXCR1 induces activation and migration of neutrophils; this is consistent with P. vivax affecting the bone marrow. Recruitment of EZH2 at CIITApIV and the resulting increases in CIITApIV H3K27me3 occur in the presence of IFN-\u03b3 and leave the proximal promoter inaccessible for transcription factor binding or transcription initiation, silencing CIITA and MHC II. This observation correlates with the low antibody response observed in sera from the same volunteers tested against a protein microarray comprising \u223c50% of the P. vivax proteome. Given the involvement of EZH2, HDAC7 and CREBBP, it is possible that the changes induced by the infection could be genomically imprinted. Indeed, case-control and longitudinal studies indicate that children undergoing malaria episodes have increased susceptibility to infection with non-typhoidal Salmonella species and other bacteria, in both P. falciparum and P. vivax malaria. Conversely, IDO1 has been shown to modulate the T cell response to the parasitic infection. While development of immunological memory facilitates an immediate recall of effector cells which rapidly clear the parasite, preventing an excessive inflammatory response and tissue damage, previous studies have shown that release of pro-inflammatory mediators like TNF-\u03b1 and IFN-\u03b3 during malaria infections can potentially contribute to organ damage. The response to P. vivax infection is tightly associated with previous host malaria experience. This might explain the lack of correlation between protection achieved in CHMI and vaccine efficacy in the field. Interestingly, our approach revealed transcriptional networks of gene modules related to type I interferon, innate immunity and T cell signalling.
Importantly, these gene networks were associated with specific phenotypes and predicted changes in immune cell populations. Among these pathways, several were modulated by P. vivax infection. The most significant limitation in our study is that the deconvolution method is highly dependent on the fidelity of reference profiles, which can potentially over- or under-represent the cell types. However, the use of SC-RNA-Seq data, as well as the validation of the signature with data sets from purified cell populations, mitigates this issue and allows identification of plausible cell-specific immune mechanisms. A better understanding of immune systems in individuals with varying degrees of immunity to P. vivax will be useful to improve rational vaccine design and to develop novel therapeutic interventions. The mechanisms of immunosuppression that we have shown here could be harnessed to improve current malaria vaccines by targeting specific molecules, such as IDO1, to overcome parasite immune evasion. Applying a combination of whole transcriptome network analysis and cell signature deconvolution allowed us to gain novel insights into the mechanisms underpinning the response to P. vivax infection. MEP is funded by a Sir Henry Dale Fellowship, Wellcome Trust (grant no. 109377/Z/15/Z). AV is funded by the Royal Society (grant no. CH160056). The authors declare no conflict of interest."}
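The cell-signature deconvolution described in this record can be illustrated with a generic sketch: a bulk expression vector is modeled as a mixture of cell-type reference profiles, and mixture weights are estimated. Dedicated non-negative methods are what such pipelines typically use; here, as a simplification, ordinary least squares with clipping stands in for them, and all names (`deconvolute`, `signatures`) are illustrative, not from the paper.

```python
import numpy as np

def deconvolute(bulk, signatures):
    """Estimate cell-type proportions from a bulk expression vector.

    bulk: (n_genes,) bulk expression; signatures: (n_genes, n_cell_types)
    matrix of cell-type reference profiles (e.g. averaged single-cell
    expression per cell type). Crude sketch: ordinary least squares,
    negative coefficients clipped to zero, then renormalized to sum to 1.
    Dedicated non-negative least squares solvers are preferable in practice.
    """
    coef, *_ = np.linalg.lstsq(signatures, np.asarray(bulk, dtype=float),
                               rcond=None)
    coef = np.clip(coef, 0.0, None)
    total = coef.sum()
    return coef / total if total > 0 else coef
```

With well-separated reference profiles the recovered weights approximate the true mixing proportions; with collinear profiles a regularized or constrained solver is needed.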
+{"text": "Functional magnetic resonance imaging (fMRI) data and brain network analysis have been widely applied to the automated diagnosis of neural or brain diseases. The fMRI time series data not only contain specific numerical information, but also involve rich dynamic temporal information; however, previous graph theory approaches focus on local topological structure and lose contextual information and global fluctuation information. Here, we propose a novel multi-scale functional connectivity for identifying brain disease via fMRI data. We calculate the discrete probability distribution of co-activity between different brain regions with various intervals. Also, we consider nonsynchronous information under different time dimensions, for analyzing the contextual information in the fMRI data. Therefore, our proposed method can be applied to more disease diagnoses and other fMRI data, particularly automated diagnosis of neural or brain diseases. Finally, we adopt a Support Vector Machine (SVM) on our proposed time-series features, which can be applied to brain disease classification and, more generally, to other time-series data. Experimental results verify the effectiveness of our proposed method compared with other outstanding approaches on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and a Major Depressive Disorder (MDD) dataset. Therefore, we provide an efficient system via a novel perspective to study brain networks. The functional Magnetic Resonance Imaging (fMRI) technique provides an opportunity to quantify functional integration by measuring the correlation between intrinsic Blood-Oxygen-Level-Dependent (BOLD) signal fluctuations of distributed brain regions at rest. The BOLD signal is sensitive to spontaneous neural activity within brain regions, thus it can be used as an efficient and noninvasive way to investigate neurological disorders at the whole-brain level.
Functional connectivity (FC), defined as the temporal correlation of BOLD signals in different brain regions, can exhibit how structurally segregated and functionally specialized brain regions interact with each other. Therefore, brain network analysis using fMRI data provides great advantages for automated diagnosis of neural or brain diseases. Some researchers model the FC information as a specific network by using graph theoretic techniques. Differences between normal and disrupted FC networks caused by pathological attacks provide important biomarkers to understand pathological underpinnings, in terms of topological structure and connection strength. Network analysis has become an increasingly useful tool for understanding the cerebral working mechanism and mining sensitive biomarkers for neural or mental diseases. Zeng et al. proposed one such approach. However, these graph theory approaches have drawbacks that must be overcome: common graph theory features such as edge weights, path lengths and clustering coefficients capture only local topology. To overcome these limitations, we propose a novel framework for feature extraction of brain functional connections. Then, through feature selection, we use the classification model for predicting brain disease. Finally, we discuss parameter settings in the model. We carry out experiments on two different datasets. One is the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including patients with Alzheimer's disease and 82 normal controls; we downloaded the ADNI data from its website. In the volunteer experiment, we use a total of 60 subjects, including 31 volunteers with Major Depressive Disorder (MDD) and 29 healthy volunteers. The MDD subjects had no comorbidity and a minimum duration of illness of more than 3 months. Each participant provided written informed consent and the study was conducted in accordance with the local Ethics Committee. Image pre-processing was performed using the statistical parametric mapping (www.fil.ion.ucl.ac.uk/spm/software/spm12/) software package on Matlab.
The data pre-processing procedure includes slice timing, realignment, segmentation, normalization and band-pass filtering; for a more detailed description, please refer to the SPM website. We perform image pre-processing for the fMRI data using a standard pipeline, with brain regions defined according to the Automated Anatomical Labeling (AAL) template. This atlas divides the brain into 78 cortical regions, 26 cerebellar regions and 12 subcortical regions according to anatomy (details in the literature). Many studies analyze lesions by observing changes of correlation. However, the PCC value is context-independent or order-independent; that is, it does not consider nonsynchronous information at different time intervals. Here, we first give a basic introduction to PCC, and then elaborate on our approach. Pearson's correlation coefficient (PCC) is the simplest and most commonly used scheme in functional connectivity estimation. For any two brain regions, the coordination degree of blood-oxygen-level dependent fluctuation is calculated as the functional connection strength between these two brain regions. Typically, in the case of the AAL template, this step extracts 6,670-dimensional features (one per region pair). Mathematically, the PCC is the covariance of the two variables divided by the product of their standard deviations. Clearly, according to this formula, the value of the Pearson's correlation coefficient is context-independent or order-independent in time series, since it only aligns values at the same time point, so information about the time dimension or context is missing. We therefore use a function \u03d5(\u00b7) to evaluate the temporal dynamic properties of the time series data. In addition, we convert \u03d5(\u00b7) to g(\u00b7), defined as follows: We extract the discrete probability distribution of co-activity in time series data.
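The PCC definition above (covariance divided by the product of the standard deviations) and the 6,670-dimensional feature extraction for a 116-region AAL parcellation can be sketched as follows; the function names are illustrative, not from the paper's code.

```python
import numpy as np

def pcc(x, y):
    """Pearson's correlation coefficient: covariance of the two series
    divided by the product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

def fc_features(ts):
    """ts: (n_regions, n_timepoints) array of BOLD series.
    Returns the upper triangle of the region-by-region correlation
    matrix as a flat feature vector; for 116 AAL regions this yields
    116 * 115 / 2 = 6,670 features."""
    corr = np.corrcoef(ts)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]
```

As the text notes, these features align values only at identical time points, which is exactly the limitation the multi-scale method addresses.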
Here, f(\u00b7) represents a mapping function that makes use of prior knowledge in order to map the original time series into another specific form, and g(\u00b7) represents the function used to evaluate temporal information after the mapping operation. We utilize prior knowledge to map the original multivariate time series data into another specific form, such as a mapping to numeric, state or character sequences; in the mapping function, Ak denotes the original time series data and \u03c6 denotes the prior knowledge. In the multivariate time series data, the correlation value between the i-th and j-th time series is defined for each interval It. Here, N is the number of time series and T is the number of intervals. Generally, we explore the correlation of time series data over multiple intervals: we transform the tensor of pairwise correlations of the i-th and j-th time series based on the function \u03d5(\u00b7) in interval It. In disease prediction, the number of samples is limited but the feature dimension is usually large, so we need both to compress the feature space to improve accuracy and to analyze the etiology with more meaningful features. We use the two-sample t-test as the feature selection method: we assume that a given feature of positive and negative samples follows distributions with the same mean, and we set the significance parameter p = 0.05. We then use a Support Vector Machine (SVM) as the learning model. We adopt the SVM technique developed by Cortes and Vapnik for solving binary classification problems, where C is a regularization parameter that controls the tradeoff between margin and misclassification error. In practice, we provide a more detailed discussion of the parameters in our method.
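The two-sample t-test feature selection described above can be sketched as follows. Using a fixed critical value is an approximation of the p < 0.05 cut (the exact threshold depends on the degrees of freedom), and the names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def t_statistic(pos, neg):
    """Two-sample t statistic (pooled variance) for each feature.
    pos, neg: arrays of shape (n_samples, n_features)."""
    n1, n2 = len(pos), len(neg)
    m1, m2 = pos.mean(axis=0), neg.mean(axis=0)
    v1 = pos.var(axis=0, ddof=1)
    v2 = neg.var(axis=0, ddof=1)
    sp = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * np.sqrt(1 / n1 + 1 / n2))

def select_features(pos, neg, t_crit=2.0):
    """Boolean mask keeping features whose |t| exceeds a critical value,
    i.e. features whose group means plausibly differ."""
    return np.abs(t_statistic(pos, neg)) > t_crit
```

The retained mask would then be applied to both training and test features before fitting an SVM (e.g. a standard library implementation with regularization parameter C).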
We discuss some prior knowledge and assumptions in our problem of Alzheimer's disease and Major Depressive Disorder diagnosis; some details need to be clarified. The time series data not only carry specific numerical information, but also include contextual and fluctuation trend information. Here, due to the BOLD imaging principle, we pay more attention to the time points of high activity state, that is, time points with high values in the time series. We define a dynamic or soft threshold to distinguish whether a time point is active or not, converting a numeric sequence into a state sequence, i.e., a 0/1 sequence. For all active time points in one time series, we count the number of time points of simultaneous responses in the other time series. Moreover, we analyze the co-activity between two time series asynchronously: as we capture more details with asynchronous analysis, we obtain more essential information, which is also supported by the higher classification accuracy in the experiments. We adopt an empirical rule to set the dynamic threshold, called the three-sigma rule. In a multivariate time series Ak, we calculate a corresponding dynamic threshold f(\u00b7) for each series. The magnitude of \u03b7 indicates the sensitivity of our method to the active state; in our experiment, \u03b7 is set to 1. The correlation function represents the relationship between a pair of time points in the time series. In disease diagnosis, we only focus on co-activity, that is, cases where both brain region i at time point m and brain region j at time point n are in active states. For the collection of intervals I, we extract local information via the interval elements: a larger interval set gives more detailed information but is prone to over-fitting and yields sparse features, while a smaller interval set may lose some key information.
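The dynamic threshold and 0/1 state conversion can be sketched as below. The exact form of the threshold function f(\u00b7) is elided in the text; using mean + \u03b7\u00b7\u03c3 with \u03b7 = 1 is an assumption consistent with the three-sigma rule the text cites.

```python
import numpy as np

def dynamic_threshold(series, eta=1.0):
    """Dynamic threshold mean + eta * sigma, following the three-sigma
    idea; eta = 1 as in the experiments described in the text. The
    exact threshold form is an assumption."""
    series = np.asarray(series, dtype=float)
    return series.mean() + eta * series.std()

def to_state_sequence(series, eta=1.0):
    """Convert a numeric BOLD series into a 0/1 activity sequence:
    1 where the value exceeds the series' own dynamic threshold."""
    series = np.asarray(series, dtype=float)
    return (series > dynamic_threshold(series, eta)).astype(int)
```

Because the threshold is computed per series, regions with different baseline signal levels are binarized on a comparable footing.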
Also, for an interval It \u2208 I, if It is close to zero, it means that the two time points of interest are very close; if It is far from zero, it indicates that we extract long-distance asynchronous information. In our experiments, the interval collection I is set to {, , , }: the first element represents synchronous information, the middle elements represent short-distance correlation for asynchronism, and the last represents a loose interval for asynchronism. Empirically, the method is sensitive to intervals close to zero and loose for long distances. Our experiment consists of three parts. To prove the effectiveness of our approach, we perform automated diagnoses of Alzheimer's disease and Major Depressive Disorder, respectively. We evaluate the classification performance using leave-one-out cross-validation (LOOCV), and we adopt Accuracy, Sensitivity, Specificity and AUC as evaluation standards. First, we compare the results of the traditional PCC method and our feature extraction method on the two data sets (AD and MDD). Then, we compare the effects of different classifiers. Finally, we compare our approach with some recent research works. Here, we compare the performance of the traditional PCC method and our feature extraction method for analyzing fMRI data. Apart from feature extraction, we use the same experimental steps and parameters, including preprocessing, feature selection and classifier. On the Alzheimer's disease and major depressive disorder databases, we compare our method to the traditional PCC method, and the classification results are summarized in the corresponding tables. In this part, we use the feature extraction model from the previous step to compare the performance of different classifiers. Specifically, we compare three classifiers: random forest (RF), logistic regression (LR) and support vector machine (SVM).
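The interval-based co-activity counting can be sketched as follows, operating on the 0/1 state sequences described earlier. The actual interval values are elided in the text, so the lag set used here is purely illustrative.

```python
import numpy as np

def coactivity(si, sj, lag):
    """Fraction of time-point pairs (m, m + lag) where both binary
    state sequences are active; lag = 0 is synchronous co-activity,
    nonzero lags capture asynchronous co-activity."""
    si, sj = np.asarray(si), np.asarray(sj)
    T = len(si)
    if lag >= 0:
        pairs = si[:T - lag] * sj[lag:]
    else:
        pairs = si[-lag:] * sj[:T + lag]
    return pairs.sum() / T

def multi_scale_features(states, lags=(0, 1, 2, 5)):
    """Co-activity at several lags for every region pair, giving a
    multi-scale functional connectivity feature vector. The lag set is
    an assumption; the paper's interval collection is not specified in
    the excerpt."""
    n = states.shape[0]
    return np.array([coactivity(states[i], states[j], lag)
                     for i in range(n)
                     for j in range(i + 1, n)
                     for lag in lags])
```

With 116 regions and four lags this yields 4 \u00d7 6,670 features, which the t-test selection step then compresses.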
In this part, we use our proposed multi-scale functional connectivity method to extract features, and compare the results of different classifiers. Comparing the three classifiers, SVM achieves the highest AUC on both the AD and MDD datasets and the best ACC on the AD dataset, and is generally a stable classifier. In addition, RF obtains the best ACC on the MDD dataset, and LR obtains the best specificity on the AD dataset. Overall, all three classifiers achieve good accuracy, indicating that the information extracted by our method is effective and stable. We also compare our proposed method to recent outstanding studies. The baseline represents the traditional graph theory feature-based method, while the state-of-the-art methods represent three major groups of graph kernels based on edges, subtrees and shortest paths, respectively. These graph kernels belong to the Weisfeiler-Lehman graph kernel framework (Shervashidze et al.). On the Alzheimer's Disease Neuroimaging Initiative database, we compare our method to seven existing methods; on the volunteer experiments of Major Depressive Disorder, we compare our method to three existing methods; the classification results are summarized in the corresponding tables. The fMRI time series data not only contain specific numerical information, but also involve rich dynamic temporal information. However, previous graph theory approaches focus on local topological structure and lose contextual information and global fluctuation information. Here, we propose a novel multi-scale functional connectivity for identifying brain disease via fMRI data. We calculate the discrete probability distribution of co-activity between different brain regions with various intervals. Also, we consider nonsynchronous information under different time dimensions, for analyzing the contextual information in the fMRI data.
Therefore, our proposed method can be applied to more disease diagnoses and other fMRI data, particularly automated diagnosis of neural or brain diseases. Experimental results verify the effectiveness of our proposed method, so we provide an efficient system via a novel perspective to study brain networks. In the future, parallel computing (Zou et al.) could be explored. Publicly available datasets were analyzed in this study; the ADNI data can be found at http://adni.loni.usc.edu/, and the results and codes for this study can be found at https://github.com/guofei-tju/Multi-Scale-FC-Frontier-in-NeuroSci.git. FG and QZ conceived and designed the experiments. ZZ and JX performed the experiments and analyzed the data. FG and ZZ wrote the paper. FG and JT supervised the experiments and reviewed the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Inference of gene regulatory networks from gene expression data has been a long-standing and notoriously difficult task in systems biology. Recently, single-cell transcriptomic data have been massively used for gene regulatory network inference, with both successes and limitations.In the present work we propose an iterative algorithm called WASABI, dedicated to inferring a causal dynamical network from time-stamped single-cell data, which tackles some of the limitations associated with current approaches. We first introduce the concept of waves, which posits that the information provided by an external stimulus will affect genes one-by-one through a cascade, like waves spreading through a network. This concept allows us to infer the network one gene at a time, after genes have been ordered regarding their time of regulation. We then demonstrate the ability of WASABI to correctly infer small networks, which have been simulated in silico using a mechanistic model consisting of coupled piecewise-deterministic Markov processes for the proper description of gene expression at the single-cell level. We finally apply WASABI on in vitro generated data on an avian model of erythroid differentiation. The structure of the resulting gene regulatory network sheds a new light on the molecular mechanisms controlling this process. In particular, we find no evidence for hub genes and a much more distributed network structure than expected. Interestingly, we find that a majority of genes are under the direct control of the differentiation-inducing stimulus.Together, these results demonstrate WASABI versatility and ability to tackle some general gene regulatory networks inference issues. It is our hope that WASABI will prove useful in helping biologists to fully exploit the power of time-stamped single-cell data.The online version of this article (10.1186/s12859-019-2798-1) contains supplementary material, which is available to authorized users. 
It is widely accepted that the process of cell decision making results from the behavior of an underlying dynamic gene regulatory network (GRN). GRN inference was first based upon bulk transcriptomic data. More recently, single-cell transcriptomic data, especially RNAseq, have been massively used, and cell-to-cell heterogeneity has been interpreted in two ways. 1. In the first view, this heterogeneity is nothing but noise that blurs a fundamentally deterministic smooth process. This noise can have different origins, like technical noise (\u201cdropouts\u201d) or temporal desynchronization, as during a differentiation process. This view led to the re-use of the previous strategies and was at the basis of the reconstruction of \u201cpseudo-time\u201d trajectories (reviewed in the literature). 2. The other view is based upon a representation of cells as dynamical systems. Despite their contributions and successes, all existing GRN inference approaches are confronted with some limitations: 1. The inference of interactions through the calculation of correlation between gene expression, whether based upon linear or non-linear measures, does not establish causality. 2. The very possibility of making predictions relies upon our ability to simulate the behavior of candidate networks. This implicitly implies that network topologies are explicitly defined; nevertheless, several inference algorithms provide only partially defined topologies. 3. Regulatory proteins within a GRN are usually restricted to transcription factors (TF). 4. Most single-cell inference algorithms rely upon the use of a single type of data, namely transcriptomics. By doing so, they implicitly assume protein levels to be positively correlated with RNA amounts, which has been proven to be wrong in case of post-translational regulation. In our mechanistic model, regulators act on the promoter switching rates kon (inactive to active) and koff (active to inactive) of the target gene. A direct consequence of the causality principle for GRNs is that a dynamical change in promoter activity can only be due to a previous perturbation of a regulating protein or stimulus.
For example, assuming that the system starts at a steady-state, early activated genes (referred to as early genes) can only be regulated by the stimulus, because it is the only possible cause for their initial evolution. An illustration is given in the figure: gene A's initial variation can only be due to the stimulus and not to the feedback from gene C, which will occur later. A generalization of these concepts is that, for a given time after the stimulus, we can infer the subnetwork composed exclusively of genes affected by the spreading of information up to this time. Therefore we can infer the network iteratively by adding one gene at a time. For this, we need to estimate promoter and protein wave times for each gene and then sort genes by promoter wave time. We define the promoter activity level by the kon/(kon+koff) ratio, which corresponds to the local mean active duration. For instance, gene B can be regulated by gene A or D, since their protein wave times are close to gene B's promoter wave time. Gene C can be regulated by gene B or D, but not A, because A's protein wave time is too early compared to gene C's promoter wave time. The WASABI inference process takes candidate interactions with parameters \u03b8i,j, which permits the creation of feedback loops or incoherent feed-forward loops. For new proposed interactions, a typical calibration algorithm can be used to finely tune interaction parameters in order to fit the simulated mRNA marginal distribution to the experimental marginal distribution from transcriptomic single-cell data. To avoid over-fitting issues, only the interaction efficiency parameter \u03b8i,j is tuned. In the illustrated network, genes B and C mutually create a positive feedback loop. If such a positive feedback loop is detected, we consider that each gene has its own auto-positive interaction, as illustrated in the figure. Positive feedback loops cannot be easily detected by wave analysis because they only accelerate, and eventually amplify, gene expression.
Yet, their inference is important for the GRN behavior since they create a dynamic memory and, for example, may thus participate in the irreversibility of the differentiation process. To this end, we developed an algorithm to detect the effect of positive feedback loops on gene distribution before the iterative inference (see Supporting information). We modeled the effect of positive feedback loops by adding auto-positive interactions. Note that such a loop does not necessarily mean that the protein directly activates its own promoter: it simply means that the gene is influenced by a positive feedback, which can be of different nature (an example is given in the GRN presented in Fig. ). We decided to first calibrate and then assess WASABI performance in a controlled and representative setting. In the first phase we assessed some critical values to be used in the inference process. We generated realistic GRNs (Fig. a) and limited ourselves to 4 network levels. If the protein threshold is too close to the initial protein level, then a slight protein increase will activate target promoter activity. Therefore, promoter activity will saturate before the regulator protein level does, and thus the difference of the associated wave times is negative. This shows that information can be accelerated or delayed, depending on the protein threshold value. In order to be conservative during the inference process, we set the RNA/protein wave difference bounds in accordance with the distribution in Fig. . We then assessed what would be the acceptable bounds for the difference between regulator protein wave time and regulated gene promoter activity wave time. Ten in silico cascade GRNs were generated and simulated for 500 cells to generate population data from which both protein and promoter wave times were estimated for each gene. Based on these data, we computed the difference between the estimated regulated promoter wave time and its regulator protein wave time for all interactions in all networks. 
The distribution of these wave differences is given in Fig. . We finally observed that for interactions with genes harboring an auto-positive feedback, wave time differences could be larger. In this case, wave difference bounds were estimated accordingly (see supporting information). We interpret this enlargement as an under-sampling time resolution problem, since auto-positive feedback results in a sharper transition. As a consequence, the promoter state transition from inactive to active is much faster: if it happens between two experimental time points, we cannot detect its wave time precisely. WASABI was then tested for its ability to infer in silico GRNs (complete definition in supporting information) from which we previously simulated experimental data for mRNA and protein levels at single-cell and population scales. We first assessed the simplest scenario with a toy GRN composed of two branches with no feedback. We then ran WASABI on the generated data and obtained 88 GRN candidates (Fig. b). The reduction of the number of possible directed networks down to 88 candidates illustrates the power of WASABI to reduce complexity by applying our waves-based constraints. We defined two measures for further assessing the relevance of our candidates: 1. Fit distance, defined as the mean of the 3 worst gene fit distances, where gene fit distance is the mean of the 3 worst Kantorovitch distances among time points. 2. Quality, which quantifies the proportion of real interactions that are conserved in the candidate network (see supporting information for a detailed description); 100% corresponds to the true GRN. We observed a clear trend that higher quality is associated with a lower fit distance (Fig. b). In both cases, GRN inference specificity was lower than for cascade network inference. 
Nevertheless, in both cases the true network was inferred and ranked among the first candidates regarding their fit distance (Fig. b and d). We then applied WASABI on our in vitro data, which consist in time-stamped single-cell transcriptomic and bulk data. We first estimated the wave times (Fig. ). In order to limit computation time, we decided to further restrict the inference to the most important genes in terms of the dynamical behavior of the GRN. We first detected 25 genes that are defined as early, with a promoter time lower than 5h. We then defined a second class of genes called "readout", which are influenced by the network state but cannot influence other genes in return. Their role for the final cell state is certainly crucial, but their influence on the GRN behavior is nevertheless limited. 41 genes were classified as readout, so that 24 genes were kept for iterative inference, in addition to the 25 early genes. 9 of these 24 genes have 2 waves due to a transient increase, which means that we have 33 waves to iteratively infer. After running for 16 days using 400 computational cores, WASABI returned a list of 381 GRN candidates. Candidate fit distances showed a very homogeneous distribution (see supporting information) with a mean value around 30, together with outliers at much higher distances. Removing those outliers left us with 364 candidates. Compared to the inference of in silico GRNs, in vitro fitting is less precise, as we could expect. But it is an appreciable performance and it demonstrates that our GRN model is relevant. We then analyzed the extent of similarities among the GRN candidates regarding their topology by building a consensus interaction matrix (Fig. a). We therefore took a closer look at the "best" candidate network, with the lowest fit distance to the data (Fig. b). We observed: 1. Most of the genes (84%) harbor an auto-activation loop. As mentioned earlier, this was a consensual finding among the candidate networks. 
It is striking because typical GRN graphs found in the literature do not have such a predominance of auto-positive feedbacks. 2. A very large number of genes were found to be early genes that are under the direct control of the stimulus. It is noticeable that most of them were found to be inhibited by the stimulus, and to control no more than one other gene at the next level. 3. We previously described the genes whose product participates in the sterol synthesis pathway as being enriched for early genes. This was confirmed here. 4. Among the 7 early genes that are positively controlled by the stimulus, 6 are influenced by an incoherent feedforward loop, certainly to reproduce their experimentally observed transient increase. 5. One important general rule is that the network depth is limited to 3 genes. One should note that this is not imposed by WASABI, which can create networks with unlimited depth. It is consistent with our analysis of signal propagation properties in in silico GRNs: if the network depth is too large, the signal is too damped and delayed to accurately reproduce experimental data. 6. One does not see network hubs in the classical sense. The genes in the GRNs are connected to at most four neighbors. The most impacting "node" is the stimulus itself. 7. One can also observe that the further one progresses within the network, the less consensual the interactions are. Adding the leaves in the inference process might help to stabilize those late interactions. Altogether those results show the power of WASABI to offer a brand-new vision of the dynamical control of differentiation. In the present work we introduced WASABI as a new iterative approach for GRN inference based on single-cell data. 
We benchmarked it on a representative in silico environment before its application on in vitro data. Usually, to demonstrate that a new inference method outperforms previous ones, benchmarking is performed. However, this is hardly possible per se due to the lack of a gold standard against which different algorithms might be benchmarked. Moreover, in our view it would be meaningless to compare our approach to any other approach that would not yield a representative executable model. However, despite the current lack of experimental validation, we are convinced that WASABI has the ability to tackle some general GRN inference issues, based on the assumptions on which WASABI has been designed and on in silico validation results. 1. WASABI goes beyond mere correlations to infer causalities from time-stamped data analysis, as demonstrated on the in silico benchmark (Fig. ). 2. Contrary to most GRN inference algorithms, WASABI produces explicitly defined network topologies that can be simulated. 3. WASABI is not restricted to TFs. Most of the in vitro genes we modeled are not TFs. This is possible thanks to the use of our mechanistic model. 4. Optionally, WASABI offers the capability to integrate proteomic data to reproduce translational or post-translational regulation, as demonstrated with our proteomic data. 5. We deliberately developed WASABI in a "brute force" computational way to guarantee its biological relevance and versatility. This allowed us to minimize the simplifying assumptions potentially necessary for mathematical formulations. During calibration, we used a simple Euler solver to simulate our networks within the model. WASABI has been developed and tested in an in silico controlled environment before its application on in vitro data. Each in silico network's true topology was successfully inferred; cascade-type GRNs were totally inferred (Fig. ). As it stands, our mechanistic model only accounts for transcriptional regulation through proteins. 
It does not take into account other putative regulation levels, including translational or post-translational regulations, or regulation of the mRNA half-life, although there is ample evidence that such regulation might be relevant. Cooperativity and redundancies are not considered in the current WASABI framework, so that a gene can only be regulated by one gene, except for negative feedback or incoherent feedforward interactions. However, many experimentally curated GRNs show evidence for cooperations (2 genes are needed to activate a third gene) or redundant interactions (2 genes independently activating a third gene). We intend to extend the framework to such configurations. The HPC capacity used during iterative inference impacts WASABI accuracy. Indeed, late iterations are supposed to be more discriminative than the first ones, because false GRN candidates have accumulated too many wrong interactions for calibration to compensate for the errors. However, if the expansion phase is limited by the available computational nodes, the true candidate may be eliminated, because at this early stage inference is not discriminative enough. Therefore improving computing performance would represent an important refinement, and we have initiated preliminary studies in that direction. As it stands, WASABI is limited to inferring networks with fewer than 100 genes in a reasonable time. However, by means of all the improvements described above, WASABI could be upscaled to infer networks with more than 1000 genes using recent sc-RNA-seq technologies. Nevertheless, despite all possible improvements, GRN inference will not become per se an asymptotically solvable problem, due to intrinsic and extrinsic inferability limitations. The application of WASABI on our in vitro model of differentiation generated several GRN candidates with a very interesting consensus topology (Fig. ). 1. We can see that the stimulus (i.e. medium change) is a central actor of the network. 2. 
Twenty-two of the 29 inferred early genes are inhibited by the stimulus, while inhibitions are only present in 7 of the 28 non-early interactions. Thus inhibitions are overrepresented in stimulus-to-early-gene interactions. An interpretation is that most genes are auto-activated and their inhibition requires a signal strong and long enough to eliminate remaining auto-activated proteins; a constant and strong stimulus should be very efficient for this role. 3. None of our GRN candidates contain so-called "hub genes" affecting many genes in parallel, whereas existing inferred GRNs generally present substantial hubs. 4. In order to reproduce non-monotonous gene expression variations, WASABI systematically inferred incoherent feedforward patterns instead of "simpler" negative feedbacks. This result is interesting because nothing in WASABI explains this bias, since in silico benchmarking proved that WASABI is able to infer simple negative feedbacks. Together, these results demonstrate WASABI's ability to tackle some general GRN inference issues, with versatility and computational tractability on HPC facilities enabled by WASABI's original iterative process. We believe that WASABI should be of great interest for biologists interested in GRN inference, and beyond for those aiming at a dynamical network view of their biological processes. We are convinced that this could really advance the field, opening an entirely new way of analyzing single-cell data for GRN inference. Our approach is based on a mechanistic model that has been previously introduced in . We consider a set of G interacting genes potentially influenced by a stimulus level Q. Each gene i is described by its promoter state Ei=0 (off) or 1 (on), its mRNA level Mi and its protein level Pi. We recall the model definition in the following equation, together with notations that will be extensively used throughout this article. 
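In sketch form, using the notation just introduced, such a two-state model reads as follows (a reconstruction consistent with the parameters defined in this section; the exact published parametrization may differ):

```latex
\begin{aligned}
& E_i:\; 0 \xrightarrow{\;k_{\mathrm{on},i}(P,Q)\;} 1,
\qquad
  E_i:\; 1 \xrightarrow{\;k_{\mathrm{off},i}(P,Q)\;} 0, \\
& \frac{dM_i}{dt} = s_{0,i}\,E_i - d_{0,i}\,M_i, \\
& \frac{dP_i}{dt} = s_{1,i}\,M_i - d_{1,i}\,P_i .
\end{aligned}
```

This is consistent with the estimators used below: with E fixed at 1, the maximum mRNA level is s0/d0, and s1 is expressed in protein molecules per RNA per hour.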
In all that follows, kon and koff are functions of the protein vector P and of the stimulus Q. The form for kon is a Hill-type function of the regulator protein levels: the exponent γ is set to the default value 2, and the interaction threshold Hj is associated to protein j. Interaction parameters θi,j will be estimated during the iterative inference. Parameter βi corresponds to the constant, GRN-external influence on gene i that defines its basal expression: it is computed at simulation initialization in order to set kon and koff to their initial values. From now on, we drop the index i to simplify our notation when there is no ambiguity. This interaction function slightly differs from the one in , since auto-positive interactions are handled explicitly. Gene parameters (other than θ and H) are estimated before network inference from a number of experimental data types acquired during T2EC differentiation. They include time-stamped single-cell transcriptomic, bulk transcriptomic and bulk proteomic data. The WASABI framework is divided into 3 main steps, as described in Fig. . For in silico benchmarking we assume that the gene parameters d0, d1, s1 are known. Single-cell data and bulk proteomic data are simulated from in silico GRNs for time points 0, 2, 4, 8, 24, 33, 48, 72 and 100h. For the T2EC in vitro application, tables of gene parameters and wave times are provided in supporting information. The degradation rate d0 corresponds to active decay (i.e. destruction of mRNA) plus dilution due to cell division. The RNA decay was already estimated in (https://osf.io/k2q5b). The cell division dilution rate is assumed to be constant during the differentiation process, and the cell cycle time has been experimentally measured at 20h. To infer the transcription rate s0, we used a maximum estimator based on single-cell expression data generated in : since the maximum mRNA level is s0/d0, s0 corresponds to the maximum mRNA count observed in all cells and time points multiplied by d0. 
We suppose that kon and koff are bounded, respectively, by constant range parameters. Parameters s0 and d0(t) are supposed to have been previously estimated for each gene at each time t. The time step dt used for simulations shall be small enough regarding GRN dynamics to avoid aliasing (under-sampling) effects. On the other hand, dt should not be too small, to save computation time. Range parameters shall be compliant with the constraints (Eq. ) imposed by the model, and we deduce inequalities for the ranges. For each time point t, we estimate kton using a moment-based method defined in . We bootstrap and obtain kt,non, with index n corresponding to bootstrap sample n. For each time point we compute the 95% percentile of the kt,non, then we consider the mean value of these percentiles to have a first estimate of the upper bound of kon. This bound can easily be reached during simulations with reasonable values of protein level (because of the asymptotic behavior of the interaction function), so we limit kon considering 10% margins. Similarly, for each time point t, we estimate ktoff using the same moment-based method, bootstrap to obtain kt,noff, compute the 95% percentile for each time point, and take the mean value of these percentiles as a first estimate of the upper bound of koff, again with 10% margins. This bound on koff also limits dt to guarantee simulation anti-aliasing. Rates d1(t) and s1(t) are estimated from the comparison of proteomic population kinetic data with RNA kinetic data. Parameter d1(t) corresponds to the protein active decay rate, while the total protein degradation rate also includes dilution. We allowed d1 and s1 to vary during differentiation: 5 genes were estimated with a variable s1(t) and a constant d1 to fit a constant protein level with a decreasing RNA level. 
For the remaining 24 genes, protein level decreased while RNA was constant, which is modeled with s1 constant and d1(t) variable. For the genes that were not detected in our proteomic data we turned to the literature, which proposed a linear relation between s1 and the mean protein level, that we confirmed with our data (see supporting information). We estimated the parameters of these 25 genes with the following rationale: we consider that the non-detection in the proteomic data is due to a low protein copy number, lower than 100. Linear regression was performed using the Python scipy.stats.linregress method from the Scipy package, giving r2=0.55, slope=0.81, intercept=−1.47 and p=2.97×10−9. Therefore, if we extrapolate this relation for low protein copy numbers, assuming P<100 copies, s1 should be lower than 1 molecule/RNA/hour. Assuming this relation, we deduce a bound on d1 from the mean RNA level, given by d1>RNA/100. We set s1 and d1 respectively to their maximum and minimum estimated values. We inferred the presence of auto-positive feedback by fitting an individual model for each gene, based on . Wave times for gene promoter (Wprom) and protein (Wprot) are estimated from their respective mean traces. In vitro wave times are provided in supporting information. Promoter wave times are estimated as follows: 1) If the mean trace is monotonous, it is smoothed by a 3rd-order polynomial approximation using the poly1d method from the Python numpy package. The wave time is then defined as the inflection time point of the polynomial function, where 50% of the evolution between minimum and maximum is reached. 2) If the mean trace is not monotonous, it is approximated by a piecewise-linear function with 3 breakpoints that minimizes the least-square error. Linear interpolations are performed using the polynomial.polyfit function from the Python numpy package. 
Selection of breakpoints is performed using the optimize.brute function from the Python scipy package. We obtained a series of 4 segments with associated breakpoint coordinates and slopes. Slopes are thresholded: if the absolute value is lower than 0.2 it is considered null. Then, we looked for inflection break times, where a segment with a non-null slope has a sign opposite to the previous segment, or where the previous segment has a null slope. Each inflection break time corresponds to the initial effect of a wave. A valid time, when the wave effect applies, is associated to it and corresponds to the next inflection break time or to the end of differentiation. Thus, we obtained couples of inflection break time and valid time which define the temporal window of the associated wave effect. For each wave window, if the mean trace variation between inflection break time and valid time is large enough, a wave time is defined as the time where half of the mean trace variation is reached during the wave time window. Protein mean traces are processed in the same way. Genes are sorted regarding their promoter wave times Wprom. Genes with multiple waves, in case of feedback for example, are present several times in the list. Moreover, genes are classified by groups regarding their position in the network. Genes directly regulated by the stimulus are called early genes; genes that regulate other genes are defined as regulatory genes; genes that do not influence other genes are identified as readout genes. Note that a gene can belong to several groups. A gene i belongs to one of these groups according to the following rules: if Wprom<5h then it is an early gene; if Wprom<7h then it could be an early gene or another type; else it could be a regulatory or a readout gene. We can deduce the group type for each gene from its wave time estimation. Subsequent constraints have been defined from in silico benchmarking (see the corresponding section). An interaction threshold H is estimated for each protein. 
It corresponds to the mean protein level at 25% between the minimum and maximum mean protein levels observed during differentiation in in silico simulations. In a first step we calibrate the interactions between early genes and the stimulus to obtain an initial sub-GRN; the calibration must reproduce the shift of kon and koff of the target gene induced by the shift of the regulator protein level from its minimal to its maximal value. The calibration algorithm Calibrate is defined below. The list of early genes is computed prior to iterative inference (see previous subsection). For each GRN candidate we estimate all possible interactions between the new gene and prior regulatory genes, or the stimulus, regarding their respective promoter wave and protein wave times, with the following logic: if the promoter wave is lower than 7h, an interaction is possible between the stimulus and the new gene; if the difference of the promoter wave minus the protein wave is between −20h and +30h, then there is a possible interaction between the new gene and the regulatory gene. Note: if WASABI is run in "directed" mode, only the true interaction is returned. For interaction parameter calibration we used a Maximum Likelihood Estimator (MLE) from the package spotpy. The goal is to tune θi,j. For the in silico study we defined the GRN fit distance as the mean of the 3 worst gene-wise fit distances. For the in vitro study we defined the GRN fit distance as the mean of the fit distances of all genes. Gene-wise fit distance is defined as the mean of the 3 highest Kantorovitch distances among time points. The calibrated θi,j with its associated GRN fit distance is returned. We fetch all GRN calibration fitting outputs from remote servers and select the best new GRNs to be expanded for the next iteration, updating the list List_GRN_candidate. 
New network candidates are limited by the number of available computational cores. We use a basic Euler solver with a fixed time step (dt=0.5 h) to solve the mRNA and protein ODEs. The promoter state transition between t and t+dt is given by a Bernoulli-distributed random variable p(t) depending on the current kon, koff and promoter state."}
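The hybrid Euler/Bernoulli scheme described above can be sketched as follows. This is a minimal single-gene illustration with made-up rate values, not the actual WASABI implementation; the per-step switching probability 1−exp(−k·dt) is one common choice for the Bernoulli variable.

```python
import numpy as np

def simulate_gene(T=100.0, dt=0.5, s0=10.0, d0=0.5, s1=1.0, d1=0.2,
                  kon=0.2, koff=0.3, rng=None):
    """Euler scheme for a two-state gene: the promoter E switches
    stochastically (one Bernoulli draw per step), while mRNA M and
    protein P follow deterministic ODEs between switches."""
    rng = rng or np.random.default_rng(0)
    n = int(T / dt)
    E, M, P = 0, 0.0, 0.0
    traj = np.zeros((n, 3))
    for k in range(n):
        # Promoter switching: probability of a transition during dt
        if E == 0:
            E = 1 if rng.random() < 1 - np.exp(-kon * dt) else 0
        else:
            E = 0 if rng.random() < 1 - np.exp(-koff * dt) else 1
        # Euler update of mRNA and protein levels
        M += dt * (s0 * E - d0 * M)
        P += dt * (s1 * M - d1 * P)
        traj[k] = (E, M, P)
    return traj

traj = simulate_gene()
print(traj[-1])
```

With these rates, M stays below its theoretical maximum s0/d0, which is the relation used above to estimate s0 from the maximum observed mRNA count.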
+{"text": "Single-cell gene expression measurements offer opportunities in deriving mechanistic understanding of complex diseases, including cancer. However, due to the complex regulatory machinery of the cell, gene regulatory network (GRN) model inference based on such data still manifests significant uncertainty.The goal of this paper is to develop optimal classification of single-cell trajectories accounting for potential model uncertainty. Partially-observed Boolean dynamical systems (POBDS) are used for modeling gene regulatory networks observed through noisy gene-expression data. We derive the exact optimal Bayesian classifier (OBC) for binary classification of single-cell trajectories. The application of the OBC becomes impractical for large GRNs, due to computational and memory requirements. To address this, we introduce a particle-based single-cell classification method that is highly scalable for large GRNs, with much lower complexity than the optimal solution.The performance of the proposed particle-based method is demonstrated through numerical experiments using a POBDS model of the well-known T-cell large granular lymphocyte (T-LGL) leukemia network with noisy time-series gene-expression data.The online version of this article (10.1186/s12864-019-5720-3) contains supplementary material, which is available to authorized users. A key issue in genomic signal processing is to classify normal versus cancerous cells, different stages of tumor development, or different prospective drug responses. Previous gene-expression technologies, such as microarray and RNA-Seq, typically measure the average behavior of tens of thousands of cells. Gene regulatory networks (GRNs) govern the functioning of key cellular processes, such as stress response, DNA repair, and other mechanisms involved in complex diseases such as cancer. 
Often, the relationship among genes can be described by logical rules updated at discrete time intervals, with each gene having Boolean states: 0 (OFF) or 1 (ON). The partially-observed Boolean dynamical system (POBDS) framework models such dynamics observed through noisy measurements. Given the healthy (c=0) and cancerous (c=1) classes, we derive the optimal Bayesian classifier (OBC). In the observation model, a baseline vector contains the gene expressions corresponding to the "zero" state of each gene, and D=Diag is a diagonal matrix containing differential expression values for each gene along the diagonal (these indicate by how much the "one" state of each gene is overexpressed over the "zero" state). Such a Gaussian linear model is an appropriate model for single-cell gene-expression data. Consider a network of n genes. The difference between the healthy and mutated classes could be the over-expression or disruption of a single gene or of multiple genes in the mutated case. Let Dc be the number of observed trajectories from class c, c=0,1. Let Θ={θ1,…,θM} be the uncertainty set of M network functions containing the unknown true network functions, out of all possible Boolean functions. The prior probability of the model θ for the class c is represented by π(θ∣c), where c=0,1. This uncertainty could arise due to some unknown regulations (i.e. interactions) between some genes in the pathway diagram. If the feature-label distribution is unknown but belongs to an uncertainty class Θ of feature-label distributions, then we desire a classifier that minimizes the expected error over the uncertainty class. This expected error is equivalent to the Bayesian minimum mean-square-error estimate, where ε is the error of ψ on the feature-label distribution parameterized by θ and the expectation is taken over the posterior distribution of the parameters. For a given test trajectory, the OBC is given by the estimate below, where pθ(.) 
denotes a probability density function corresponding to parameter θ, p0 is the prior probability of class 0, and c=0,1. Derivation of the OBC requires π(θ∣c), the prior probability of the corresponding network model θ for class c. For an arbitrary trajectory, a likelihood is defined for each model θ and class c. Using this definition leads to the classifier, where the expectation is taken over the posterior distribution of θ, i.e. π(θ|c). The OBC is thus a classifier that averages over θ∈Θ. Let YT1:=(Y1,…,YT) be a single trajectory of length T. The log-likelihood function can be computed recursively based on the POBDS model, by defining the conditional probability of each network state at time step k for the model θ and class c. Here ‖·‖1 denotes the L1 norm, indicating the summation over the 2n network states due to the transition matrix involved in the computation. The whole process of the proposed OBC is presented in Algorithm 1. Computing the log-likelihood function for a single trajectory of length T involves 22n elements, leading to exponential computational and memory complexity. Thus, the key here is to scale up the OBC for single-cell trajectories by reducing both computational and memory complexity when computing the log-likelihoods, using particle-filtering techniques. Let YT1 be a given trajectory; the goal is to approximate its likelihood function. Let there be N total particles with associated weights wk−1,i, i=1,…,N. Given the predictive likelihoods vk,i=pθ(Yk∣particle i), we can sample an index from the weighted mixture and then sample from the transition density Pθ given the selected particle. By simulating the index with the corresponding probability N times from the joint density, we obtain the new particles. Actually, we simulate only from particles associated with large predictive likelihoods. 
We first sample the auxiliary variables (a1,…,aN) from a categorical distribution Cat whose probability mass function is proportional to the predictive likelihoods. The training process of the proposed method has the computational complexity O(2NMT), which can be much smaller than O(2n2T), the complexity of computing the exact log-likelihood function over all models and classes. These log-likelihood values and posterior probabilities, approximated during the training process, can be used to derive the approximate OBC, whereas the exact solution has the complexity O(2n+12MTD0D1). The exponential growth of the complexity with the size of the network for the exact solution precludes its application to large GRNs. However, the number of particles, N, in the proposed method can be chosen relatively small according to the attractor structure of the system (N<<22n), allowing complexity O(2NMT), as opposed to O(2n+12MT) for the optimal solution. In the previous sections, the classification of single-cell trajectories was discussed. Here, we consider common scenarios in molecular biology research where gene-expression data are often based on the average expression from multiple cells at different times and with different states. Since the trajectories are assumed to be independent and drawn based on the dynamics of the true network, its steady-state distribution is used in the observation model. The true network, θ∗, is unknown, and is represented by a finite set of M possible network models {θ1,…,θM} with prior probability π(θ∣c). The optimal Bayesian classifier for a given test sample Y can be represented via the posterior probability of the parameter θ, computed for θ∈Θ and c∈{0,1}. 
Let A be an n×2n matrix containing all Boolean states of the system. We evaluate the proposed single-cell trajectory classifier and compare its performance with the OBC based on multiple-cell average expression on the T-LGL leukemia Boolean network, whose GRN is shown in Fig. . As we may not know the true network function, we consider four candidate network functions for each of the healthy and mutated networks as the uncertainty class of possible GRN models. In addition to the true network, we remove the operation ¬ of Apoptosis for the genes sFas and GPCR, which are intermediate nodes; therefore, this second network is very close to the true network. For the third network, we remove ¬ of Apoptosis from two other nodes, IAP and P2. In the fourth network of this uncertainty class, we change the operation AND to OR for the gene BID. In this study, we use the observation models described in Eqs. and , for two values of the state perturbation probability, p=0.05 and 0.1, respectively. For the sake of simplicity, we fix the gene-expression parameters in the equations, setting λ=10 and δ=30. Two different values are considered for the observation noise level: σ=20 (low noise) and σ=25 (high noise). While σ corresponds to both within-subject and between-subject variation in the single-cell case, it captures only between-subject variation in the multiple-cell case, because averaging over multiple cells averages out the within-subject variance. We set the number of training trajectories D=Dc=0+Dc=1=4. In both figures, the error curves are monotonically decreasing in terms of the trajectory length T for the single-cell classifier. There is a special case in which the error gets fixed after some T. This may be explained by the effect of the steady-state distributions, depending on the lengths of attractor cycles of the networks under study. 
When the perturbation noise p and the observation noise \u03c3 are small, the smallest T sufficient to achieve the least possible error is L+1, where L is the minimum attractor length in the two networks. More precisely, the BNps tend to deterministic BNs when the perturbation noise is small, meaning that observations occur only in the attractor states and circulate inside the attractor cycles. In such a case, L+1 is the maximum trajectory length that can help distinguish the two networks. With considerable perturbation, however, there is a non-negligible probability of jumping between states, so longer trajectories can be helpful. In all figures, for every value of T and p, the error increases with increasing observation noise. While the proposed classifier works in both low- and high-noise scenarios, the classifier based on multiple-cell expression data only works well in low-noise scenarios and is very sensitive to \u03c3. With low observation noise and p=0.05, the multiple-cell classifier can classify easily, while the trajectory-based classifier with two time points has slightly higher error due to short trajectory segments common to both classes. With at least four time points, which is the attractor cycle size in the uncertainty class of the networks, the classifiers based on single-cell trajectories outperform those using multiple-cell data even in low-noise scenarios. Increasing the number of time points helps better decipher the difference in single-cell trajectories between the two classes, improving classification accuracy. Compared to the multiple-cell classifier based on gene expression averaged over cells at different states, the trajectory-based classifier clearly improves classification performance: even with four time points, the classification accuracy can be improved by up to 8%. 
Using longer trajectories, the improvement can be up to 18%. In addition to the effect of trajectory length, we investigate how the number of training trajectories D affects the performance of the proposed method, especially with few training samples. From the particle-filter point of view, increasing D by 1 increases the available data in the same way as increasing T; we therefore set T=2, the smallest T, to better see the trend of the classification error. Both the average error and its standard deviation decrease with more training trajectories, and the classification error converges to a fixed value once D becomes large enough. The value of D required for a converged error rate depends on the parameters T, p, and M. In real-world scenarios with significant uncertainty and a high perturbation probability, more training data are needed to improve performance. To demonstrate how the proposed method can reduce the computational cost of Boolean network classification, we examine the change in the average classification error with respect to the number of particles. The T-LGL network involves a 2^{18}\u00d72^{18} transition probability matrix; such a dimension is too large for the direct application of the OBC. To evaluate the proposed method more comprehensively, we have also compared it with two other classification methods, IRB and Plug-In, observing improvements at T=3 for p=0.1 and \u03c3=25 relative to Plug-In and IRB, respectively. The proposed method places no restriction on the noise distribution, owing to the generalizability of particle filters. To show this, we also test our method with different noise distributions: while the noise of GRNs is usually Gaussian, up-regulation can sometimes make the noise Poisson or Negative Binomial (NB). 
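The particle-based likelihood approximation that underlies these comparisons can be sketched as a standard bootstrap particle filter. This is a generic sketch, not the paper's implementation; it reuses the Gaussian observation model assumed earlier and reports the log-likelihood up to an additive constant shared by all candidate models (so it is still valid for model comparison):

```python
import numpy as np

def pf_loglik(Y, f, p, N=100, lam=10.0, delta=30.0, sigma=20.0, seed=0):
    """Bootstrap particle-filter estimate of log p(Y | theta), up to a constant.

    Y : (T, n) array of noisy expression measurements
    f : Boolean update function of the candidate network theta
    p : state perturbation probability
    N : number of particles (can be far smaller than 2**n Boolean states)
    """
    rng = np.random.default_rng(seed)
    T, n = Y.shape
    parts = rng.integers(0, 2, size=(N, n))   # particles: Boolean states
    ll = 0.0
    for t in range(T):
        # propagate: deterministic update plus Bernoulli(p) gene flips
        parts = np.array([f(x) for x in parts])
        flips = rng.random((N, n)) < p
        parts = np.where(flips, 1 - parts, parts)
        # weight by the (unnormalized) Gaussian observation likelihood
        mu = lam + delta * parts
        logw = -0.5 * np.sum(((Y[t] - mu) / sigma) ** 2, axis=1)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())            # incremental evidence estimate
        # resample particles in proportion to their weights
        idx = rng.choice(N, size=N, p=w / w.sum())
        parts = parts[idx]
    return ll

# hypothetical usage with the toy cyclic network from above
f = lambda x: np.roll(x, 1)
rng0 = np.random.default_rng(1)
Y = 10.0 + 30.0 * rng0.integers(0, 2, size=(5, 3)) + rng0.normal(0, 20, (5, 3))
ll = pf_loglik(Y, f, p=0.05, N=50)
```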
We have simulated Gaussian and NB noise distributions with the same mean and variance, while for Poisson noise the variance equals the mean, corresponding to high process noise. In this paper, we have developed the optimal Bayesian classifier for binary classification of single-cell trajectories under regulatory model uncertainty. The partially-observed Boolean dynamical system is used for modeling the dynamical behavior of gene regulatory networks. Because the OBC is intractable for large GRNs, we have proposed a particle filtering technique for approximating it; this particle-based solution significantly reduces the computational and memory complexity of the optimal solution. The performance of the proposed particle-based method is demonstrated through numerical experiments using a POBDS model of the well-known T-cell large granular lymphocyte (T-LGL) leukemia network based on noisy time-series gene-expression data. Additional file 1: This additional file contains the additional experiment results. (PDF 229 kb)"}
+{"text": "The data analysis tools detect and track gene clusters, their size, number, persistence time, and their plasticity (deformation). Of biological significance, our analysis reveals an optimal mean crosslink lifetime that promotes pairwise and cluster gene interactions through \u201cflexible\u201d clustering. In this state, large gene clusters self-assemble yet frequently interact (merge and separate), marked by gene exchanges between clusters, which in turn maximizes global gene interactions in the nucleolus. This regime stands between two limiting cases, each with far fewer global gene interactions: with shorter crosslink lifetimes, \u201crigid\u201d clustering emerges, with clusters that interact infrequently; with longer crosslink lifetimes, the clusters dissolve. These observations are compared with imaging experiments on a normal yeast strain and two condensin-modified mutant cell strains. We apply the same image analysis pipeline to the experimental and simulated datasets, providing support for the modeling predictions. Our understanding of how chromosomes structurally organize and dynamically interact has been revolutionized through the lens of long-chain polymer physics. Major protein contributors to chromosome structure and dynamics are condensin and cohesin, which stochastically generate loops within and between chains and entrap proximal strands of sister chromatids. In this paper, we explore the ability of transient, protein-mediated, gene-gene crosslinks to induce clusters of genes, and thereby dynamic architecture, within the highly repeated ribosomal DNA that comprises the nucleolus of budding yeast. 
We implement three approaches: live cell microscopy; computational modeling of the full genome during G1 in budding yeast, exploring four decades of timescales for transient crosslinks between 5 kb domains; and temporal network analysis of the resulting datasets. The spatiotemporal organization of the genome plays an important role in cellular processes involving DNA, but remains poorly understood, especially in the nucleolus, which is not amenable to conventional techniques. Polymer chain models have shown the ability in recent years to make accurate predictions of the dynamics of the genome. We consider a polymer bead-chain model of the full yeast genome during the interphase portion of the cell cycle, featuring special dynamic crosslinking to model the effects of structural maintenance proteins in the nucleolus, and investigate how the kinetic timescale on which the crosslinks bind and unbind affects the resulting dynamics inside the nucleolus. It was previously known that when this timescale is sufficiently short, large, stable clusters appear, but when it is long, there is no resulting structure. We find that there additionally exists a range of timescales for which flexible clusters appear, in which beads frequently enter and leave clusters. Furthermore, we demonstrate that these flexible clusters maximize the cross-communication between beads in the nucleolus. Finally, we apply network temporal community detection algorithms to identify which beads are in which communities at which times, in a way that is more robust and objective than conventional visual-based methods. Our present understanding of the basic principles that govern high-order genome organization can be attributed to incorporation of the physical properties of long-chain polymers. De novo stochastic bead-spring polymer models of the dynamics and conformation of \u201clive\u201d chromosomes, plus the action on top of the genome by transient binding interactions of structural maintenance of chromosome (SMC) proteins, e.g. 
condensin, provide complementary information to chromosome conformation capture (3C) techniques, genome-wide high-throughput (Hi-C) techniques, and restraint-based modeling. We observe that the number of clusters was significantly decreased in the hmo1\u0394 null mutant, but not the fob1\u0394 null mutant (p = 0.04 for hmo1\u0394 versus p = 0.3 for fob1\u0394). We also observe that increasing \u03bc yielded a general trend toward fewer clusters. Taken together, these data suggest that gene clustering can directly impact the size and shape of the nucleolus, underscoring the need for robust and objective tools for identifying gene clusters. Our final experiment studies cluster formation in the nucleolus and compares the clustering observed in experimental and simulated microscopy images, using a cluster detection algorithm that we developed in MATLAB. Next, we study how the kinetic time scale \u03bc affects the properties of pairwise gene interactions. A pair of beads is said to be interacting if they are in very close proximity, that is, if the distance between them drops below d*; we use d* = 100 nm unless otherwise noted. In the following experiment, we show that increasing \u03bc not only inhibits the formation of clusters, but that there exists a particular range of \u03bc that optimizes gene mixing, the overall interaction frequency of all pairs of genes. 
These experiments illustrate how clustering, which inherently describes multi-way relationships, can be studied through pairwise distances, which inherently describe two-way relationships, and how important open problems remain in the time-series signal processing of 4D chromosome conformation datasets. We study the following summary statistics for gene mixing: (A) The interaction fraction indicates the fraction of possible unique bead pairs that interact at least once during an interphase simulation. (B) The mean interaction number indicates the number of simultaneous interactions for a nucleolus bead, averaged across time and across beads. (C) The mean waiting time indicates, for any two beads selected at random, the average time that passes between their i-th and (i + 1)-th interactions. (D) The mean interaction duration indicates the amount of time beads reside within the interaction distance after entering it. We identify three regimes of \u03bc that optimize different attributes. \u03bc \u2248 0.1 yields a self-organized structure that maximizes the number and the duration of gene-gene interactions (see panels (B) and (D)). Recall from Video 1 that \u03bc = 0.09 yields many large clusters that are stable over time. This is reflected in a high number of interactions with beads in the same cluster and a low number of interactions with beads not in the same cluster. For intermediate values, 0.15 \u2a85 \u03bc \u2a85 1, we see the flexible clustering behavior of Video 2. Notably, this flexible clustering has interesting properties beyond simply being a weaker version of the clustering in the rigid regime: these \u03bc values maximize the fraction of pairs of beads that interact at least once over the simulation, and flexible clustering promotes the number of both simultaneous and overall distinct pairwise gene interactions in the nucleolus. 
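Statistics (A) and (B) can be computed directly from a pairwise-distance time series; the following is a sketch in which the array shapes and the helper name are illustrative:

```python
import numpy as np

def mixing_stats(D, d_star=100.0):
    """Gene-mixing statistics (A) and (B) from a pairwise-distance time series.

    D      : (T, n, n) array; D[t, i, j] is the distance between beads i and j
             at time t (same units as d_star, e.g. nm)
    d_star : interaction distance threshold (100 nm in the text)

    Returns (interaction fraction, mean interaction number).
    """
    T, n, _ = D.shape
    inter = D < d_star                        # boolean interaction indicator
    iu = np.triu_indices(n, k=1)              # unique pairs i < j
    pair_ever = inter[:, iu[0], iu[1]].any(axis=0)
    interaction_fraction = pair_ever.mean()
    # simultaneous partners per bead (self-distances excluded), averaged
    no_self = inter.copy()
    no_self[:, np.arange(n), np.arange(n)] = False
    mean_interaction_number = no_self.sum(axis=2).mean()
    return interaction_fraction, mean_interaction_number
```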
This behavior arises from a balance between the number of intra-cluster gene-gene interactions, which remains elevated due to the moderate clustering shown in (B), and the ability of genes to frequently switch between clusters during cluster interactions, as indicated by the reduced waiting time in (C). SMC proteins with such crosslinking timescales will thereby promote collective interactions among all active genes. These circumstances could accelerate a homology search, for example to facilitate DNA repair, if the sister chromosomes were suddenly activated by a family of SMC proteins whose binding affinity was near this \u201csweet spot\u201d. Finally, \u03bc \u2a86 1.5 is associated with a non-clustering regime, as shown in Video 3. The lack of clustering is reflected by a low number of gene-gene interactions, and the freely diffusing nature of the beads is reflected by short interaction durations and a high interaction fraction. Having found that flexible clustering maximizes interesting properties of gene interaction, we seek to develop tools to identify and label the spatiotemporal clusters. In the rigid clustering regime, the clusters are so well-defined that any reasonable algorithm will detect them, but this is not the case for flexible clustering; to detect and track flexible clusters, we utilize both spatial and temporal information. While we have access to 4D bead-position time-series data, we begin by transforming this into a multilayer network problem as described in Methods Section: Gene-interaction networks from pairwise-distance data. This is motivated by the fact that the most similar data available in biology, the Hi-C dataset, does not measure true distances between genome regions, but rather a notion of similarity based on average proximity. Given a gene-interaction network, we identify communities using an approach based on multilayer modularity. 
See [31, 32] for further background. The modularity measure was originally introduced to detect communities in static networks. Our analysis is primarily based on an extended version, the multilayer modularity measure, which allows one to detect time-varying communities in temporal networks. A key feature of the multilayer-modularity approach is that the framework involves two parameters, \u03b3 and \u03c9, which provide \u201ctuning knobs\u201d: \u03b3 is a resolution parameter that influences the sizes of the detected communities, and \u03c9 is a coupling parameter that allows one to choose whether the communities can change drastically from one time step to the next or are restricted to changing slowly over time. We explored a range of choices to select appropriate values. Finally, we highlight that this approach significantly contrasts with traditional clustering algorithms such as k-means clustering, which requires the number of clusters a priori and does not naturally extend to time-varying data. These videos indicate good agreement between visual perception of clusters and the clusters that are detected by the multilayer modularity algorithm: when beads visually appear to be clumped together, they tend also to share a color in the videos, which reaffirms the validity of our choice of clustering algorithm. However, gene interactions and gene mixing were defined at the \u201cbead level\u201d. We now define similar, but slightly different, concepts at the \u201ccommunity level\u201d, in that they reflect only community-membership information and do not require the precise bead locations. We say that two beads are \u201ccommunicating\u201d if they are in the same community. That is, all beads in the same cluster are communicating with each other, and beads in different clusters are not communicating. 
With this modified definition in hand, we define summary statistics for gene mixing at the community level that are analogous to the 2-point summary statistics for pairwise gene interactions previously defined in Section: Flexible clustering regime maximizes bead mixing. Analogous to gene mixing at the bead level, we now define \u201ccross communication\u201d at the community level. In this section, we provide further evidence that flexible clustering is the mechanism responsible for the observed optimality. (A) The communicating fraction indicates the fraction of bead pairs that are in the same community at least once during a simulation. (B) The average beads per community indicates, for a nucleolus bead, the average number of beads in the same community at the same time, averaged across time and across beads. (C) The mean waiting time indicates, for any two beads selected at random, the average time that passes between when they are no longer in the same community and when they are next in the same community. (D) The mean interaction duration indicates the amount of time between when two beads first share a community and when they no longer do. Using temporal community detection, we are now able to quantitatively support our first observations, made in Section: Transient crosslinking timescale influences nucleolus clustering, that there are three distinct clustering regimes: rigid clustering, flexible clustering, and no clustering, corresponding to \u03bc \u2208 {0.09, 0.19, 1.6}, respectively. We support these observations by studying the properties of the detected clusters. For the non-clustering regime, there appears to be little relationship between cluster size and stability beyond an average size of approximately 3 beads. 
The clusters also tend to be much smaller, with almost no clusters of average size over 10 beads. In agreement with panels (A)\u2013(C), one can observe that larger clusters are more stable. Note also that the communities exhibit more plasticity for \u03bc = 0.19 than for \u03bc = 0.09, since beads have a higher average probability of changing the community to which they belong. The dynamic self-organization of the eukaryote genome is fundamental to the understanding of life at the cellular level. The last quarter century has witnessed remarkable technological advances that provide massive datasets of both the spatial conformation of chromosomal DNA from cell populations and the dynamic motion of domains in living cells (GFP tagging and tracking of specific DNA sequences), from the yeast to the human genome. Data mining of these massive datasets has likewise witnessed remarkable advances in understanding the hierarchical packaging mechanisms of DNA that act on top of the genome, e.g., histones and structural maintenance of chromosome (SMC) proteins, the topology of individual chromosome fibers, their topologically associated domains, and the territories they occupy in the nucleus. A third wave of advances has come from 4D modeling of chromosomes based on stochastic models of entropic, confined polymers, and the coupling of SMC proteins that either bind and crosslink genes on chromosomes or generate loops on individual chromosomes. As these three approaches continue to mature and inform one another, at an ever-increasing pace, insights into the structure and dynamics of the genome continue to deepen. The motivation for this paper lies in the information that can be inferred from these massive datasets, from Hi-C, live cell imaging experiments, and polymer physics modeling, as in previous studies and references therein. We elected to build and implement these network tools on the 4D datasets generated in house, from simulations of our recent polymer model of interphase budding yeast. We previously showed that short-lived (\u03bc = 0.09 sec) binding kinetics provided closer agreement with experimental results. It was also shown, via visualization of the simulated 4D datasets, that this timescale induces a decomposition of the nucleolus into a large number of clusters, each consisting of many 5 kb domains, and that these clusters persist over time. On the other hand, with long-lived crosslinks (\u03bc = 90 sec) the clusters disappeared. These results reveal that the timescales of the crosslinkers relative to the entropic fluctuations of the chromosome polymer chains are a fundamental contributor to genome organization. For the present paper, the sample set of binding kinetics was expanded. With the above community-scale information and statistics, we generalize standard gene-gene interaction statistics across the four decades of bond-duration timescale. As a generalization of waiting times for two distant genes to come within a specific distance of one another, we calculate waiting times for genes in the same community to leave and then re-enter another common community, and we calculate the fraction of all genes that were in the same community at least once during interphase, which we call the community cross-communication fraction. From these analyses, we discovered a novel dynamic self-organization regime: the rigid, persistent communities at relatively short-lived crosslink timescales (\u03bc = 0.09 sec) transition, at slightly longer-lived crosslink timescales (\u03bc = 0.19 sec), to more mobile communities that interact far more frequently, each interaction corresponding to a merger, subsequent division, and an exchange of genes. We refer to this regime as flexible community structure with enhanced cross-communication. Furthermore, we discovered non-monotonicity in the dynamic self-organization behavior: the community cross-communication fraction is maximized, coincident with a minimum waiting time between genes departing and returning to common communities, at a crosslink timescale of \u03bc = 0.19 sec. Both properties fall off, albeit in different ways, for shorter and longer timescales. We model the effect of SMC proteins by transiently crosslinking pairs of non-neighboring beads in the nucleolus, represented by a contiguous chain of 361 beads on chromosome XII. To implement crosslinks, all nucleolar beads are assigned a state of \u201cactive\u201d or \u201cinactive,\u201d and crosslinks are allowed only between active beads. Each bead\u2019s state fluctuates stochastically as follows: an active bead becomes inactive after a duration that is a random number drawn from a normal distribution, and an inactive bead becomes active after a duration drawn likewise; the parameter \u03bc dictates the kinetic timescale for crosslinking. If two nucleolar beads are both active and the distance between them is below a threshold, a crosslink forms. To provide an analogue to Hi-C imaging, we construct and analyze pairwise-distance maps. Hi-C \u201cimages\u201d the conformation of chromosomes using a combination of proximity-based ligation and massively parallel sequencing, which yields a map that is correlated with the pairwise distances between gene segments. 
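The active/inactive switching can be sketched as follows. The mean dwell time mu is taken from the text, but the standard deviation of the normal distribution is an assumption here (the source's exact parameters are not recoverable from this excerpt), as is the dwell-time truncation used to keep draws positive:

```python
import numpy as np

def simulate_states(n_beads, t_end, mu, sd=None, dt=0.1, seed=0):
    """Stochastic active/inactive switching of nucleolar beads (a sketch).

    Each bead alternates between active and inactive; each dwell time is
    drawn from a normal distribution with mean mu (the crosslink kinetic
    timescale). sd defaults to mu/4, an illustrative assumption.
    Returns a (steps, n_beads) boolean array sampled every dt seconds.
    """
    rng = np.random.default_rng(seed)
    sd = mu / 4 if sd is None else sd
    steps = int(t_end / dt)
    active = rng.random(n_beads) < 0.5          # random initial states
    timer = np.maximum(rng.normal(mu, sd, n_beads), dt)
    out = np.empty((steps, n_beads), dtype=bool)
    for t in range(steps):
        out[t] = active
        timer -= dt
        expired = timer <= 0
        active[expired] = ~active[expired]      # flip state on dwell expiry
        # fresh dwell time, truncated below at dt to stay positive
        timer[expired] = np.maximum(rng.normal(mu, sd, expired.sum()), dt)
    return out

states = simulate_states(n_beads=20, t_end=5.0, mu=0.5, seed=1)
```

A crosslink would then be permitted at time t between beads i and j only if `states[t, i] and states[t, j]` and their separation is below the distance threshold.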
While the actual pairwise distances between gene segments cannot be directly measured, Hi-C implements spatially constrained ligation followed by a locus-specific polymerase chain reaction to obtain pairwise count maps that are correlated with spatial proximity: the count between two gene segments monotonically decreases as the physical 3-dimensional distance between them increases. Each simulation of the Rouse-like polymer model yields time-series data {xi(t)}. A pairwise distance map from a set of N points is an N\u00d7N matrix whose entry (i, j) gives the distance between point i and point j. Whereas Hi-C imaging aims to study the positioning of chromosomes using noisy measurements that are inversely correlated with pairwise distances, for our simulations we have access to the complete information about the chromosome positioning. We therefore define and study several variations of pairwise distance maps, which also allows us to study artifacts that can arise under different preprocessing techniques, such as averaging the time-series data across time windows and/or averaging across multiple simulations with different initial conditions. We define the following pairwise distance maps: An instantaneous pairwise distance map X(t) = F({xi(t)}) encodes pairwise distances between beads at a particular timestep t. A time-averaged pairwise distance map Y(\u03c4) encodes pairwise distances averaged over a time window \u03c4. A population-averaged pairwise distance map Z(t) = \u2329X(t)\u232ap encodes the pairwise distance at timestep t between beads, averaged across several simulations that have different initial conditions (chosen uniformly at random). These pairwise distance maps represent the data that is sought after, but cannot be directly measured, by Hi-C imaging. Moreover, by defining several distance maps we are able to study \u201caveraging\u201d artifacts that can arise due to various limitations of Hi-C imaging. 
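The three map variants can be sketched in a few lines (the function names are illustrative):

```python
import numpy as np

def instantaneous_map(xs):
    """X(t): pairwise Euclidean distances between beads at one timestep.

    xs : (n, 3) array of bead coordinates.
    """
    diff = xs[:, None, :] - xs[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def time_averaged_map(traj, window):
    """Y(tau): instantaneous maps averaged over a time window.

    traj   : (T, n, 3) bead-position time series
    window : slice or index array selecting the timesteps in tau
    """
    return np.mean([instantaneous_map(x) for x in traj[window]], axis=0)

def population_averaged_map(trajs, t):
    """Z(t): the timestep-t map averaged over independent simulations."""
    return np.mean([instantaneous_map(traj[t]) for traj in trajs], axis=0)
```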
For example, Hi-C imaging obtains measurements that are typically averaged across a large heterogeneous distribution of cells that are subjected to nonidentical conditions and exist at nonidentical states in their cell cycles. The budding yeast strains used in this study were obtained by transforming the yeast strain EMS219 with CDC14-GFP:KANR, to label the nucleolus, and SPC29-RFP:HYGR, to label the spindle pole body, generating the yeast strain DCY1021.1. DCY1021.1 was transformed to knock out FOB1 and HMO1 to generate DCY1055.1 and DCY1056.1, respectively. Fluorescent image stacks of unbudded yeast cells were acquired using an Eclipse Ti wide-field inverted microscope (Nikon) with a 100\u00d7 Apo TIRF 1.49 NA objective (Nikon) and a Clara charge-coupled device camera (Andor), using Nikon NIS Elements imaging software (Nikon). Each image stack contained 7 Z-planes with 200 nm step-size. Image stacks of experimental images were cropped to 7 Z-plane image stacks of single cells using ImageJ and saved as TIFF files. The cropped Z-stacks were read into MATLAB 2018b (MathWorks) and converted into maximum intensity projections, and the projections of hmo1\u0394 and fob1\u0394 were cropped to 55 \u00d7 55 pixels, to match the dimensions of WT projections, using the MATLAB function padarray with the replicate option specified, extending the outer edge of pixel values to ensure the center of each cropped image was the brightest pixel. The intensity values of all projections were normalized by subtracting the minimum value from all intensity values and then dividing the resulting values by the maximum intensity value after subtraction. The normalized intensity values were stored with double precision, preventing any loss in dynamic range. The areas of nucleolar signals were determined by setting all values below a threshold, calculated using the multithresh function, to NaN and then counting the values that were not NaN. 
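A Python analogue of the normalization and area measurement is sketched below; the threshold (computed by MATLAB's multithresh in the pipeline) is assumed here to be supplied externally:

```python
import numpy as np

PIXEL_AREA_UM2 = 0.0648 ** 2   # area of each pixel in um^2, from the text

def normalize(img):
    """Min-max normalize intensities to [0, 1] in double precision."""
    img = img.astype(float)
    img -= img.min()
    return img / img.max()

def signal_area(img, threshold):
    """Area of the above-threshold signal, mirroring the MATLAB pipeline:
    set sub-threshold pixels to NaN, count the rest, convert to um^2."""
    masked = np.where(img >= threshold, img, np.nan)
    return np.count_nonzero(~np.isnan(masked)) * PIXEL_AREA_UM2
```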
That pixel count was converted to \u03bcm^2 by multiplying by 0.0648^2, the area of each pixel in \u03bcm^2. To calculate the standard deviation of the intensities of the nucleolar signal, the non-NaN values remaining after thresholding were re-normalized using the same method described above, and the standard deviation of those values was measured. To count clusters within the nucleolar signal, the normalized images were deconvolved with a 5 \u00d7 5 Gaussian structural element using the deconvblind function, and then underwent two rounds of background subtraction by setting all intensity values below a threshold value, calculated using the default multithresh function, to NaN and then all NaN values to 0. Clusters were identified using the imregionalmax function and counted using the bwconncomp function. The simulated images generated from our simulations were analyzed as described above, with the additional step of measuring the standard deviation of each simulated maximum intensity projection. All WT images were analyzed using the script wtExpIm.m. All hmo1\u0394 and fob1\u0394 images were analyzed using the script cropExpIm.m. All simulated images were analyzed using the script clusterCountLoop.m. All MATLAB scripts have been made available. Given a pairwise distance map X, in which Xij gives the (possibly averaged) Euclidean distance between beads i and j as described in Section: Pairwise-distance maps, and motivated by high-throughput chromosome conformation capture (Hi-C), we construct a network model in which there are weighted edges only between beads that are in close proximity to each other and for which each edge weight Aij \u2265 0 decreases monotonically with distance Xij. We propose a model with two parameters, d* and s, which represent a distance threshold and a decay rate, respectively. 
In particular, we define a network adjacency matrix A whose entry Aij is nonzero only when Xij < d*, with s controlling the rate at which the edge weight Aij decreases with increasing distance Xij. Because the edge weights decrease exponentially with distance, the community detection algorithms we study are insensitive to the choice of d*, provided that d* is sufficiently large that the network is connected. Our choice d* = 325 in Section: Temporal stability of clusters ensures there is an edge between all beads in the same cluster. Recall that we defined several versions of pairwise distance maps (instantaneous, time-averaged, and population-averaged), and a network model can be constructed for any of these maps: An instantaneous interaction network refers to a network associated with an instantaneous pairwise distance map X(t). A time-averaged interaction network refers to a network associated with a time-averaged pairwise distance map Y(\u03c4); we point out that, due to the nonlinearity of the map from distances to edge weights, averaging distances before constructing the network is not equivalent to averaging the networks themselves. A population-averaged interaction network refers to a network associated with a population-averaged distance map \u2329Z(t)\u232ap. In addition to the above network models, we are particularly interested in constructing and studying temporal interaction networks, which we define as a sequence of time-averaged interaction networks encoded by a sequence of adjacency matrices. For s \u2208 {1, 2, \u2026, T}, we partition time into a sequence of non-overlapping time windows \u03c4s = {(s \u2212 1)\u0394 + 1, \u2026, s\u0394} of width \u0394, and define a sequence of time-averaged networks {G(s)} associated with these distance maps, time-averaged across the windows {\u03c4s}. The result is a sequence of adjacency matrices in which each entry encodes the interaction between beads i and j during time window \u03c4s. 
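One concrete adjacency consistent with this description is a thresholded exponential kernel; since the source's exact expression is not shown here, the form Aij = exp(-s * Xij) for Xij < d* (and 0 otherwise) is an assumption:

```python
import numpy as np

def adjacency(X, d_star=325.0, s=0.01):
    """Network adjacency from a pairwise distance map (a sketch).

    Edges exist only between beads closer than d_star, with weight
    decaying exponentially with distance at rate s. The exponential
    form is one concrete choice consistent with the description.
    """
    A = np.where(X < d_star, np.exp(-s * X), 0.0)
    np.fill_diagonal(A, 0.0)   # no self-edges
    return A
```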
In practice, we choose the time window width \u0394 to be similar to, but slightly larger than, \u03bc. Matching the time scales of \u03bc and \u0394 allows the temporal interaction networks to efficiently capture the dynamics of interactions. Specifically, if \u0394 is too short, the temporal network will be identical across many time steps, which is not an efficient use of computer memory; if \u0394 is too large, the temporal network data will be too coarse to identify interaction dynamics occurring at faster time scales. We chose \u0394 = 10 to aggregate the time-varying bead-location data (which was saved every 0.1 second) into 1-second intervals, and we studied 1000 such time windows. We analyze spatiotemporal clustering of chromosomes using community-detection methodology for temporal interaction networks, particularly an approach based on multilayer-modularity optimization. Given a sequence of adjacency matrices {As} for s \u2208 {1, 2, \u2026, T}, we study the multilayer modularity measure Q = (1/(2\u03bc)) \u2211ijsr [(Aijs \u2212 \u03b3 kis kjs/(2ms)) \u03b4sr + \u03b4ij \u03c9 Csr] \u03b4(cis, cjr), where 2\u03bc here denotes the total edge weight of the multilayer network (not the crosslink timescale), \u03b3 is again a tunable \u201cresolution parameter,\u201d kis is the strength of node i in layer s, 2ms is the total edge weight of layer s, \u03b4 denotes a Dirac delta function, Csr defines the coupling between consecutive (time) layers with Csr = 1 if and only if r = s \u00b1 1 (otherwise Csr = 0), and {cis} are the integer indices that indicate the community of each node i in each layer s. If one wished to analyze just a single network, one could simply set Csr = 0, so that the second term in the square brackets is discarded. Letting i \u2208 {1, \u2026, N} enumerate the nodes and s \u2208 {1, \u2026, T} enumerate the network layers, the goal is to assign a community label cis to each node-layer pair so as to maximize Q; cis = c indicates that node i is in community c during time window \u03c4s. 
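For a given partition, the multilayer modularity can be evaluated directly; the following sketch uses uniform coupling between consecutive layers and evaluates Q for a fixed partition (it does not perform the Louvain optimization):

```python
import numpy as np

def multilayer_modularity(A_layers, labels, gamma=1.0, omega=1.0):
    """Evaluate multilayer modularity Q of a temporal partition (a sketch).

    A_layers : list of (n, n) adjacency matrices, one per time layer
    labels   : (T, n) integer community labels c_is for node i in layer s
    gamma    : resolution parameter
    omega    : interlayer coupling; layers s and s+1 are coupled (C_sr = 1
               iff r = s +/- 1)
    """
    T = len(A_layers)
    n = A_layers[0].shape[0]
    q = 0.0
    two_mu = 0.0
    for s, A in enumerate(A_layers):
        k = A.sum(axis=1)                    # node strengths in layer s
        two_m = k.sum()                      # total edge weight 2m_s
        two_mu += two_m
        same = labels[s][:, None] == labels[s][None, :]
        # intra-layer term: observed edges minus gamma-scaled null model
        q += ((A - gamma * np.outer(k, k) / two_m) * same).sum()
    for s in range(T - 1):
        # interlayer coupling, counted in both directions (r = s +/- 1)
        same = labels[s] == labels[s + 1]
        q += 2.0 * omega * same.sum()
        two_mu += 2.0 * omega * n
    return q / two_mu
```

For a single layer and the block partition of two disconnected dyads, this reduces to ordinary Newman-Girvan modularity.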
There are many techniques to solve such an optimization problem, and we identify partitions that optimize Q using a variational approach commonly referred to as the Louvain algorithm (we set \u03b3 = 1). Consider the first term: it compares the observed edge weight between nodes i and j in layer s to that expected under the configuration null-model for networks, and the contribution of this null-model comparison to Q is scaled by \u03b3. Thus, as a whole, the first term is largest when there exists an edge between i and j in time window \u03c4s and when the expected probability of such an edge is smallest. Effectively, this term influences optimal partitions to give nodes i and j the same community label if there is an edge between them in time window \u03c4s. We next consider the second term, \u03c9\u03b4Csr, which is nonnegative since \u03c9 > 0 and Csr \u2208 {0, 1}. Since Csr = 1 only when \u03c4s and \u03c4r are consecutive time windows, this term influences the community labels cis and cir to be the same from one time window to the next. Note that multilayer modularity involves two parameters (\u03b3 and \u03c9; see our exploration of this parameter space) and comparisons to other community-detection algorithms, including the study of connected components.

S2 Video. Resource available at https://github.com/bwalker1/chromosome-videos/blob/master/Dataset6_allRed_finer_realtime.mp4. (MP4)

S3 Video. Resource available at https://github.com/bwalker1/chromosome-videos/blob/master/Dataset12_allRed_finer_realtime.mp4. (MP4)

S4 Video. We observe that the stable and well-separated clusters have been distinctly labeled by the community detection algorithm. Resource available at https://github.com/bwalker1/chromosome-videos/blob/master/Dataset0_color_finer_realtime_altView.mp4. (MP4)

S5 Video. Here the clusters are not so clearly separated, but the colored labels still appear consistent with what one would expect. Resource available at https://github.com/bwalker1/chromosome-videos/blob/master/Dataset6_color_finer_realtime_altView.mp4. (MP4)

S6 Video. Here there are no clusters present in the data, but the community detection algorithm still tries to give the same label to nearby beads. Due to the lack of stable community structure, beads change label more frequently than for smaller values of \u03bc. Resource available at https://github.com/bwalker1/chromosome-videos/blob/master/Dataset12_color_finer_realtime_altView.mp4. (MP4)"}
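The two terms of the multilayer modularity measure discussed above can be evaluated directly for a given partition. The sketch below follows the form of Q described in the text (configuration null model within layers, interlayer coupling \u03c9 between consecutive layers); it is a direct evaluator for checking partitions, not the Louvain optimizer, and the normalization convention for \u03bc is an assumption:

```python
import numpy as np

def multilayer_Q(layers, labels, gamma=1.0, omega=1.0):
    """Evaluate multilayer modularity Q for a node-layer partition.
    layers: list of T symmetric adjacency matrices (N x N);
    labels: T x N integer array of community labels c_is."""
    T = len(layers)
    N = layers[0].shape[0]
    intra, inter, mu = 0.0, 0.0, 0.0
    for s, A in enumerate(layers):
        k = A.sum(axis=1)                      # node strengths in layer s
        two_m = k.sum()                        # total edge weight (x2)
        mu += 0.5 * two_m
        same = labels[s][:, None] == labels[s][None, :]
        P = np.outer(k, k) / two_m             # configuration null model
        intra += ((A - gamma * P) * same).sum()
    for s in range(T - 1):                     # C_sr = 1 only for r = s +/- 1
        inter += 2.0 * omega * np.sum(labels[s] == labels[s + 1])
        mu += omega * N                        # coupling enters the normalization
    return (intra + inter) / (2.0 * mu)

# toy check: one layer made of two disjoint edges, partitioned accordingly
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
Q = multilayer_Q([A], np.array([[0, 0, 1, 1]]))
```

With a single layer the second term vanishes (as noted in the text for Csr = 0), and this partition of two disjoint edges gives the classical value Q = 0.5; repeating the layer with persistent labels raises Q through the interlayer term.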
+{"text": "Dynamic studies in time course experimental designs and clinical approaches have been widely used by the biomedical community. These applications are particularly relevant in stimuli-response models under environmental conditions, characterization of gradient biological processes in developmental biology, identification of therapeutic effects in clinical trials, disease progression models, the cell cycle, and circadian periodicity. Despite their feasibility and popularity, sophisticated dynamic methods that are well validated in large-scale comparative studies, in terms of statistical and computational rigor, remain less benchmarked than their static counterparts. To date, a number of novel methods for bulk RNA-Seq data have been developed for various time-dependent stimuli, circadian rhythms, cell lineage in differentiation, and disease progression. Here, we comprehensively review a key set of representative dynamic strategies and discuss current issues associated with the detection of dynamically changing genes. We also provide recommendations for future directions for studying non-periodical and periodical time course data, and meta-dynamic datasets. Owing to rapid advances in sequencing technologies and affordable costs, more complicated experimental designs and clinical applications, such as time course data and meta temporal dynamics, have become feasible and popular in genomic research [4,5,6,7]. First, the complexity of identifying significant dynamic changes that are involved with cellular functions and molecular processes over a series of time points should be carefully examined with a set of coordinated gene structures in an ensemble fashion of multivariate gene-to-gene approaches, as well as with single gene-by-gene testing in a univariate strategy.
Second, it is well-known that pre-processing, such as normalization procedures, is required for making samples comparable and for adjusting sample-to-sample variations of biological or technical origin in sequencing-based temporal dynamic data [47,48,49]. Lastly, one of the significant advances is the conception of meta-framed data analysis, in which multiple time course datasets collected from different laboratories, or multiple datasets generated by different platforms, are fully integrated [4,5,6,7]. The central goal of this review is to provide well-documented guidance for analytical pipelines by examining existing dynamic methods in both gene-wise testing strategies and gene-to-gene interaction tools, as there are currently no unanimously validated dynamic methods that are deemed optimal under various scenarios. Furthermore, RNA-Seq has considerably revolutionized transcriptome studies in the last decade, and more complicated experimental designs have been widely conducted on temporal data, allowing main biological factors of interest and other nuisance factors at each time point, and even in integrated meta-data [65,66,67]. For the statistical testing of differential expression analyses, static methods [70,71,72] have typically been applied. Dynamic gene-wise testing methods include:

A. Next maSigPro (prev. maSigPro)
B. DyNB
C. EBSeq-HMM (prev. EBSeq) [80]
D. ngsp
E. lmms
F. timeSeq in NBMM
G. splineTimeR
H. ImpulseDE2 (prev. ImpulseDE) [62]
I. Trendy
J. AR (auto-regressive model)
K. MAPTest
L. TIMEOR (Trajectory Inference and Mechanism Exploration using Omics data in R)
M. TimeMeter
N. PairGP
O. GPrank
P. dream
Q. rmRNAseq

State-Transition Analysis by Rockne et al. (2020) has also been proposed. Due to the lack of holistic comparative studies of the various dynamic methods, there are no universally or widely accepted best methods available for the different scenarios arising under time course experimental designs.
Therefore, we recommend the development of a complete analytical pipeline to better characterize dynamic changes and to reduce misleading results. Circadian rhythms can regulate periodical gene expression according to daily or 24-h oscillations. Over the last decades, the genetic regulatory mechanisms of circadian genes have been thoroughly explored to characterize clock-controlled dependency, such as that observed in physiology, metabolism, and mental illness [109,110]. Dynamic methods for periodical data include:

A. JTK_CYCLE
B. MetaCycle: a p-value is used to define a common set of candidate genes among three dynamic methods (JTK_CYCLE, ARS, and LS). Of the two built-in functions within this meta-method, the meta2d function integrates outcomes from the three different dynamic methods, whereas the meta3d function merges outcomes from multiple individuals within a single dataset by choosing a specific dynamic method, such as JTK_CYCLE, ARS, or LS. Thus, this method can be extended by combining additional methods to identify periodicity in a later version of the meta2d function. However, the current version does not allow results from the three dynamic methods and multiple individual sets to be integrated simultaneously.
C. RAIN
D. DODR
E. LimoRhyde

The batch issue in data integration is not only an RNA-Seq-specific problem inherent in the current technology, but also an issue that has been steadily discussed throughout the history of high-throughput datasets [113,114]. Batch detection and correction methods include:

A. ARSyN
B. Combat-Seq/Combat
C. RUVSeq
D. svaseq/sva [91]
E. gPCA (guided-PCA)
F. Harman
G. Maximum Mean Discrepancy and Residual Nets (MMD-ResNet)

In conclusion, the use of Combat-Seq/Combat, gPCA, and Harman is generally limited because batch factors are identifiable and known to investigators in only a few case studies. RUVSeq works for studies with both known and unknown factors; however, users need to select the k value as an arbitrary number.
svaseq can also be applied for unknown batch cases. Both methods were originally implemented for RNA-Seq-specific static multi-group studies. As for dynamic time course datasets, we recommend Combat-Seq, svaseq, and RUVSeq as default batch detection methods, which can be performed on pre-filtered and normalized datasets together in the initial step of exploratory diagnostic analyses. However, to incorporate detected batch effects into subsequent dynamic methods that identify time-variant trajectory patterns in RNA-Seq time course datasets, Combat-Seq and Harman should be used, since both provide adjusted expression profiling data after batch removal when batch factors are known. Additionally, MMD-ResNet can be applied to dynamic-specific designs with a single batch factor, as it is not restricted to any specific experimental design and/or platform. Batch-free data should be analyzed using dynamic methods to infer time-variant trajectory patterns. Versatile batch detection methods are needed to cover the wide range of dynamic data, including non-periodical and periodical data. As meta-dynamic data in the post-genomic era become increasingly available, sophisticated analytical methods are urgently needed. Genes that are relevant to cellular perturbations by external environmental factors, such as drug treatments, are regulated together with several other genes, by interacting with trans-acting and cis-regulatory elements in the genetic transcriptional regulatory machinery, to represent different biological pathways or differential networks. It is very important to infer more reliable sets of pre-defined genes that are associated with functional pathways, network modules, and clusters of co-expressed patterns in the sparse and irregular time series RNA-Seq data.
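To make the known-batch adjustment idea concrete, the sketch below implements only the additive location-correction core shared by ComBat-style methods: per gene, each known batch is shifted to the gene's grand mean. It is a deliberately minimal illustration, not Combat-Seq itself, which additionally models count distributions, scale effects, and empirical Bayes shrinkage:

```python
import numpy as np

def center_batches(expr, batches):
    """Remove additive per-batch offsets from a genes x samples matrix
    when batch labels are known: per gene, shift each batch so its mean
    matches the gene's grand mean (location adjustment only)."""
    expr = np.asarray(expr, dtype=float)
    batches = np.asarray(batches)
    out = expr.copy()
    grand = expr.mean(axis=1, keepdims=True)          # per-gene grand mean
    for b in np.unique(batches):
        idx = batches == b
        out[:, idx] += grand - expr[:, idx].mean(axis=1, keepdims=True)
    return out

# toy example: 3 genes x 6 samples, batch B shifted up by 2 units
base = np.array([[1., 1., 1., 1., 1., 1.],
                 [2., 2., 2., 2., 2., 2.],
                 [0., 1., 0., 1., 0., 1.]])
batches = np.array(["A", "A", "A", "B", "B", "B"])
shifted = base.copy()
shifted[:, batches == "B"] += 2.0
corrected = center_batches(shifted, batches)
```

A caution implicit in the review's recommendations: such naive centering removes any biological signal confounded with batch, which is why design-aware tools are preferred when batch aligns with time or condition.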
Motivated by single gene-by-gene dynamic methods, researchers have started to recognize the importance of characterizing dynamic gene-to-gene interactions in the following paradigms [119,120]. Basically, the gene list selected from temporally differential expression analyses is further analyzed in subsequent down-stream analyses to identify gene interactions in separate analytical pipelines [19,21,26]. The variance component score test provides one initial approach, and dedicated clustering tools include:

A. funPat
B. DPGP
C. LPWC

The EPIG-Seq clustering method is also available. In order to infer the functional interplay amongst genes affected by the initial stimuli in perturbed cells, or in other types of time course data, a plethora of dynamic gene regulatory network (dynamic GRN) methods have been steadily proposed, from array-based data up to present Seq-based data. In essence, dynamic GRN methods are mainly grouped into three categories: (1) continuous state space models and (2) discrete state space models (such as the Boolean Network, BN, and the Probabilistic Boolean Network, PBN), both of which account for structure and temporal dynamics when inferring causal network modules; whereas (3) the Relevance Network (RN) and Bayesian Network (BN) consider only structure. (i) Dynamic GRN tools for RNA-Seq data: the dynGENIE3 tool has been developed for this setting. (ii) GRNs in traditional microarrays: the time-varying multivariate state space model (SSM) tool is designed for such data. Importantly, the inference of dynamic gene regulatory networks based solely on time course RNA-Seq data has limitations, as the significant changes in gene expression levels over time points for a particular biological dynamic process involve TFs, histone modifications (HM), and other cis- and trans-acting elements in the gene regulatory machinery. Therefore, we review the GRN methods for time series RNA-Seq data and their integration in the following.
(iii) More informative dynamic GRN tools for data integration: the CRNET tool has been developed. (iv) GRN tools for scRNA-Seq data: while we primarily focus on dynamic tools for bulk RNA-Seq transcriptome time course data, recent advances in machine learning tools and their applications to scRNA-Seq studies have been omitted for the sake of brevity; instead, an excellent comprehensive review of the topic is available. (v) Dynamic differential networks: in order to define differential networks representing group-specific differences over time using inferred gene regulatory networks, the DryNetMC tool has been developed. As a differential network tool for static multi-omics data, the iDINGO tool [135] has been proposed. (vi) Deep learning neural network approaches in current genomics data: we also want to describe the deep machine learning tools that have been widely applied to current genomic data analyses for the purpose of prediction and classification problems. A comprehensive review fully discussing their applications to epigenomic and genomic data is currently available. More recently, several studies [141,142] have appeared along these lines. Altogether, it is both timely and crucial to establish a general framework and guiding principles for analyzing various time course datasets with well-validated dynamic methods that will reduce method-, study-, and platform-specific artifacts, which will lead to a conclusive consensus and reproducible results in the diverse range of dynamic studies, such as perturbative cellular models, cell-lineage programs, disease progressions, and other types of dynamic processes [149]. Although many of the dynamic tools dealing with the diverse types of RNA-Seq time course data in differential expression analyses have been widely adopted for the purpose of identifying dynamically changing genes, more challenges still exist and need to be properly addressed. Each method has its own unique characteristics and data distributional assumptions, as well as input/output formats, running procedures, and estimated parameters.
As has been shown for static methods without regard to time, such as edgeR and DESeq2 [71,72,92], benchmarking is essential. In addition, a comparative study evaluating the performance of dynamic gene-wise testing methods has been conducted recently. Normalization procedures outweigh the differential methods themselves in determining the outcome of the identification of differentially expressed gene sets [45,47]. In addition, the development and application of dynamic tools have significant impacts on post-genomic-era data analyses, particularly for the characterization of gene-to-gene interactions using unsupervised clustering techniques, pre-defined gene sets, gene regulatory networks, and deep machine learning tools. It is conceivable that evaluation and validation based on benchmarked datasets, with robustness and reproducibility, will undoubtedly facilitate our understanding of complex biological processes underlying diseases. Importantly, user-friendly web-based interfaces or packages for enhanced and unified analytical strategies (meta-pipelines) that are more capable of characterizing dynamically changing genes, GRNs, and their corresponding roles in biological pathways are urgently needed. Integrated dynamic methods should be implemented by addressing the following issues related to time course datasets and their metadata: (1) normalization; (2) filtering of genes or samples with low expression and quality; (3) unbalanced or unevenly measured time points and replicates; (4) batch removal, or incorporation of batch factors into the model for detecting dynamic changes in the inference of trajectory patterns; (5) dynamic-specific methods for co-expression clustering, gene set enrichment tests, and GRNs; and (6) differential networks to represent time- and condition-specific dynamic changes for personalized medicine.
We believe this review serves as a template for more precisely analyzing dynamically changing genes over a broad range of time points or repeated measures in experimental and clinical applications while addressing the necessary next step in meta-dynamic data analysis."}
+{"text": "We used live imaging to visualize the transcriptional dynamics of the Drosophila melanogaster even-skipped gene at single-cell and high-temporal resolution as its seven stripe expression pattern forms, and developed tools to characterize and visualize how transcriptional bursting varies over time and space. We find that despite being created by the independent activity of five enhancers, even-skipped stripes are sculpted by the same kinetic phenomena: a coupled increase of burst frequency and amplitude. By tracking the position and activity of individual nuclei, we show that stripe movement is driven by the exchange of bursting nuclei from the posterior to anterior stripe flanks. Our work provides a conceptual, theoretical and computational framework for dissecting pattern formation in space and time, and reveals how the coordinated transcriptional activity of individual nuclei shapes complex developmental patterns. The patterns of gene expression that choreograph animal development are formed dynamically by an interplay between processes \u2013 transcription, mRNA decay and degradation, diffusion, directional transport and the migration of cells and tissues \u2013 that vary in both space and time. However, the spatial aspects of transcription have dominated the study of developmental gene expression, with the role of temporal processes in shaping patterns receiving comparably little attention. Gene expression patterns are dynamic on many levels. They form, change and disappear over time, often as cells, tissues, and organs are forming and moving in the developing embryo. Furthermore, a slew of studies, from theoretical models to imaging, have examined the Drosophila melanogaster even-skipped (eve) gene, whose seven stripes ring the embryo in the cellularizing blastoderm in the hour preceding gastrulation. We monitored its transcription with an MS2 reporter, in which many MCP-GFP molecules are loaded onto nascent transcripts of the gene in order to amplify the signal.
We inserted the engineered BAC into a targeted site on chromosome 3L using \u03a6C31 integrase-mediated recombination, and generated a homozygous fly line. The eve transgene transcription recapitulates the well-characterized dynamics of eve expression, most notably formation of the characteristic seven stripes in the late blastoderm. We collected movies of embryos from before nc14 through gastrulation. We optimized our data collection strategy to sample multiple stripes (3 to 5) in each movie, to obtain high temporal resolution and to have optimal signal-to-noise with minimal bleaching. In total, we collected 11 movies. We used a custom image processing pipeline to identify nuclei and transcription spots, and observe eve as being expressed broadly in nc13 and early nc14 embryos before refining sequentially into four, then seven stripes. Stripe positions along the anterior-posterior axis are largely stable after they form, while stripes 4\u20136 show small anterior shifts. Stripe 7 makes a more dramatic movement towards the anterior, moving approximately 8% of egg-length, or around 40 \u03bcm, from its initial location. The quantitative characterization of this stripe movement, the decoupling between stripes and nuclei, and the quantification of transcriptional bursting dynamics in each nucleus necessitated the development of a method, described below, to dynamically define the position of stripes throughout each movie. In our model, the promoter switches between OFF and ON states with rates kon and koff. When the promoter is in the ON state, we assume it loads polymerases continuously with a constant rate r. Each pulse of transcription lasts at the locus for 140 s, at which point all polymerase molecules loaded during the original time window have terminated transcribing. The transition between the quiescent state and the frequently bursting (active) state occurs from one column of nuclei to the next, consistent with the sharp boundaries of the stripe patterns. To better understand how the low-bursting state in interstripes is established, we looked at the bursting history of the nuclei in these regions.
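The two-state (telegraph) promoter model just described can be sketched with a minimal stochastic simulation: switch between OFF and ON at rates kon and koff, load polymerases as a Poisson process of rate r while ON, and let each loaded polymerase contribute signal for the 140 s transit time. The specific rate values, the sampling interval, and the unit signal per polymerase below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def simulate_bursting(k_on, k_off, r, t_end, dwell=140.0, dt=10.0, seed=0):
    """Minimal two-state promoter sketch: alternate exponentially
    distributed OFF/ON intervals; while ON, draw Poisson polymerase
    loading events; the trace counts polymerases still on the gene,
    each residing for `dwell` seconds."""
    rng = np.random.default_rng(seed)
    t, on = 0.0, False
    load_times = []
    while t < t_end:
        wait = rng.exponential(1.0 / (k_on if not on else k_off))
        span = min(wait, t_end - t)
        if on:  # Poisson initiation events during this ON interval
            n = rng.poisson(r * span)
            load_times.extend(t + rng.uniform(0.0, span, n))
        t += wait
        on = not on
    load_times = np.sort(np.array(load_times))
    grid = np.arange(0.0, t_end, dt)
    # signal at time g ~ number of polymerases loaded in (g - dwell, g]
    trace = np.array([np.sum((load_times <= g) & (load_times > g - dwell))
                      for g in grid])
    return grid, trace

grid, trace = simulate_bursting(k_on=0.05, k_off=0.02, r=0.5, t_end=2000.0)
```

In this picture a "burst" appears in the trace as a pulse whose amplitude reflects the loading rate r and whose frequency reflects kon, which is the coupling the paper quantifies across stripes.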
The contrast in bursting history between stripes and interstripes is less pronounced in the posterior, where there are fewer such never-ON nuclei in the interstripe region. The interstripe between stripes of eve as well as fushi tarazu has been proposed to originate, in part, from cross-repression between these two genes. We used bacterial recombineering to modify a BAC (CH322-103K22) containing the eve locus, replacing the eve coding sequence with an MS2::yellow reporter based on the D. melanogaster yellow gene as described below, inserted the construct into the D. melanogaster genome at chromosome 3L through \u03a6C31 integrase-mediated recombination (see Generation of fly lines), and generated a viable homozygous fly line as detailed below. The BAC spans the regulatory region in which eve resides, which is important both for recapitulating regulation and for boosting the signal-to-noise ratio. In principle the length of the reporter should not limit our ability to estimate burst parameters. However, in practice a reporter construct that is too short will have insufficient signal. Further, one that is too long will increase the dwell time of each RNA polymerase molecule on the gene and, as a result, our cpHMM inference will require too many computational resources. Our choice of reporter construct structure strikes a balance between these two limitations and is ideally suited for inferring bursting parameters in the relevant time range. We started from a version of the CHORI BAC carrying the eve locus and a GFP reporter instead of the eve coding sequence (CH322-103K22-GFP). We replaced the GFP reporter with MS2::yellow through a two-step, scarless, galK cassette-mediated bacterial recombineering approach. Upon electroporation, we selected transformants on M63 minimal media plates with galactose as a single carbon source. We achieved a correct replacement of the GFP sequence by the galK cassette in the BAC context, validated by observing the digestion patterns produced by the ApaLI restriction enzyme. We next purified the CH322-103K22-galK BAC and transformed it into fresh E. coli SW102 cells. We electroporated these cells with the purified MS2::yellow insert and used M63 minimal media plates with 2-deoxy-galactose to select against bacteria with a functional galK gene.
We used colony PCR to screen for colonies with a correct MS2::yellow insertion (CH322-103K22-MS2) replacing the galK cassette. We validated this insertion by observing ApaLI, XhoI, SmaI, and EcoRI restriction digestion patterns and through PCR and Sanger sequencing of the insertion junctions. We transformed our CH322-103K22-MS2 BAC into E. coli EPI300 cells to induce high copy numbers and purified it with a Qiagen plasmid Midiprep kit. The purified BAC was injected into D. melanogaster embryos bearing a \u03a6C31 AttP insertion site in chromosome 3L. We received the flies that resulted from that injection and used a balancer fly line to obtain a viable MS2 homozygous line, which we used as the maternal source of Histone-RFP and MCP-GFP. For mounting, we used a Lumox film embedded on a microscope slide hollowed at the center. Then, we coated the hydrophobic side of the Lumox film with heptane glue and let it dry. The film allows\u00a0for the\u00a0oxygenation of embryos during the 2\u20133 hr long imaging sessions, while the heptane immobilizes them. We soaked an agar plate with Halocarbon 27 oil, picked embryos with forceps, and laid them down on a 3 \u00d7 3 cm piece of paper tissue. We dechorionated embryos by adding 2 drops of bleach diluted in water (5.25%) on the paper tissue and incubating for 1.5 min. We removed the bleach with a clean tissue and rinsed with\u00a0~4 drops of distilled water. We then placed the tissue paper with dechorionated embryos in water, and picked buoyant embryos with a brush. We lined\u00a0~\u00a030 apparently healthy embryos on the Lumox film slide, added 2\u20133 drops of Halocarbon 27 oil to avoid desiccation, and covered the embryos with a cover slip for live imaging. Movies of embryonic development were recorded on a Zeiss-800 confocal laser-scanning microscope in two channels (EGFP and TagRFP).
We imaged embryos on a wide field of view, along their anterior-posterior axis, of 1024 \u00d7 256 pixels (202.8 x 50.7 \u00b5m), encompassing 3\u20135 stripes per movie. We tuned laser power, scanning parameters, master gain, and pinhole size to optimize the signal-to-noise ratio without significant photobleaching and phototoxicity. For imaging, the following microscope settings were used: 63x oil-objective, scan mode \u2018frame\u2019, pixel size of 0.2 \u00b5m, 16 bits per pixel, bidirectional scanning at a speed of 7, line step of 1, laser scanner dwelling per pixel of 1.03\u00a0\u00b5s, laser scanner averaging of 2, averaging method Mean, averaging mode Line, 488 nm laser power of 30\u00a0\u00b5W (EGFP), 561 nm laser power of 7.5\u00a0\u00b5W (TagRFP) (both powers were measured with a 10x air-objective), Master Gain in the EGFP detector of 550 V, Master Gain in the TagRFP detector of 650 V, Digital Offset in both detectors of 0, Digital Gain in both detectors of 1.0, a pinhole size of 1 airy unit under the imaging conditions mentioned above, and laser filters EGFP:SP545 and TagRFP:LBF640. This resulted in an imaging time of 633 ms per frame and a full Z-stack of 21 frames in intervals of 0.5 \u00b5m every 16.8 s. We used a Matlab computational pipeline to segment images and extract fluorescence traces. To estimate the transit time of the polymerase along the construct, we first calculated, for each nucleus, the difference in fluorescence signal between adjacent timepoints, Dn,t = Fn,t+1 \u2212 Fn,t, where Fn,t is the fluorescence signal for nucleus n at time point t, and then calculated the Pearson correlation coefficient of the vectors (Dn,t) and (Dn,t+d) over values of d from 1 to 20, representing time displacements of 20 to 400 s. The minimum correlation occurred at 140 s. For this work we employed a statistical method that utilizes a compound-state hidden Markov Model to infer bursting parameters from experimental fluorescence traces.
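The transit-time estimate described above can be sketched directly: difference the trace, then find the lag that minimizes the Pearson correlation between the first-difference series and its lagged copy. The trace below is synthetic (repeated triangular pulses of 140 s rise-and-fall), built only to show why the correlation minimum lands at the pulse duration:

```python
import numpy as np

def transit_time_estimate(F, frame_dt=20.0, max_lag=20):
    """Estimate polymerase transit time from a fluorescence trace F:
    compute first differences D_t = F_{t+1} - F_t, then return the lag
    d in 1..max_lag minimizing the Pearson correlation between D_t and
    D_{t+d}, converted to seconds via frame_dt."""
    D = np.diff(np.asarray(F, dtype=float))
    corrs = []
    for d in range(1, max_lag + 1):
        corrs.append(np.corrcoef(D[:-d], D[d:])[0, 1])
    d_min = 1 + int(np.argmin(corrs))
    return d_min * frame_dt, np.array(corrs)

# synthetic trace: pulses that rise for 7 lags and fall for 7 lags
# (7 lags x 20 s = 140 s), separated by quiet gaps
D_true = np.array(([1.0] * 7 + [-1.0] * 7 + [0.0] * 16) * 10)
F = np.concatenate([[0.0], np.cumsum(D_true)])
transit, corrs = transit_time_estimate(F)
```

The intuition is that a rise in signal (polymerases loading) is reliably followed, one transit time later, by a fall of the same size (those polymerases terminating), so the derivative anti-correlates with itself at that lag.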
The theory and implementation of this method are described in detail elsewhere. All data were analyzed in Python using a Jupyter notebook with custom code to process raw data and generate figures. The Jupyter notebook and all data required to run it are available at https://github.com/mbeisen/Berrocal_2020 (swh:1:rev:d983098bd5183f9907d633c425f80b2cb5282a8b). We first filtered the raw data to remove data with observations spanning less than 2,000 s, as well as nuclei that were poorly tracked over time, defined as nuclei that moved across the movies at an average rate of over 4.5 pixels per minute. This left 430,073 observations from 2959 nuclei. We used the Gaussian mixture model module of the Python library scikit-learn, with covariance_type='tied', to cluster nuclei time points. We preliminarily assigned nuclei time points to a stripe if they were consistently clustered in that stripe in the relevant time windows. We then pooled all nuclei time points assigned to the same stripe and fit a line to the median x and y positions in the bottom (y\u00a0<\u00a0128) and top (y\u00a0>\u00a0128) halves of the image. We considered the slope of this line to represent the orientation of the stripe relative to the image x-axis. We then went back to each time window and fit the nuclei assigned to the stripe with a line with the previously computed slope fixed. This produced an association of time with stripe position, from which we derived a linear model that describes the position of each stripe in each movie at every time point. We assigned all nuclei time points (not just bursting ones) to stripes by identifying the stripe whose predicted position at the relevant time was closest to the nucleus being analyzed, and assigned a nucleus to the most common stripe assignment for its individual time points.
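A toy version of the mixture-model clustering step can be written in a few lines with scikit-learn's GaussianMixture and the covariance_type='tied' option named above. The positions and stripe centers below are synthetic stand-ins; the real pipeline clusters per time window and then fits oriented lines as described:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# synthetic 1D positions for nuclei drawn around three stripe centers
rng = np.random.default_rng(1)
stripe_centers = [30.0, 60.0, 90.0]   # hypothetical centers, arbitrary units
x = np.concatenate([rng.normal(c, 2.0, 50) for c in stripe_centers])

# tied covariance: all mixture components share one covariance matrix,
# appropriate when stripes are assumed to have similar widths
gmm = GaussianMixture(n_components=3, covariance_type='tied',
                      random_state=0).fit(x.reshape(-1, 1))
labels = gmm.predict(x.reshape(-1, 1))
means = np.sort(gmm.means_.ravel())
```

The fitted component means recover the stripe centers, and the predicted labels give the preliminary per-time-point stripe assignments that the downstream line-fitting step consumes.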
We then corrected the orientation of the stripe at each time point to be perpendicular to the image x-axis by setting its new image x-axis position to be the x position of the stripe in the middle of the y-axis (y\u00a0=\u00a0128) plus the offset of the nucleus to the unoriented stripe along the x-axis. Finally, we used the positions of the anterior and posterior poles of the embryo to map image x coordinates to AP position. We then adjusted the AP position of each stripe in each movie such that the center of the stripe at 35 min in nc14 had the same AP position. In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

The authors used live imaging to visualize the transcriptional dynamics of the Drosophila even-skipped gene at single-cell resolution during stripe formation, and developed tools to characterize and visualize how transcriptional bursting influences stripe formation. By tracking the position and activity of individual nuclei they find stripe movement is driven by the exchange of bursting nuclei from the posterior to anterior stripe flanks.

Decision letter after peer review:

[Editors\u2019 note: the authors submitted for reconsideration following the decision after peer review. What follows is the decision letter after the first round of review.]

Thank you for submitting your article \"Kinetic sculpting of the seven stripes of the Drosophila even-skipped gene\" for consideration by eLife. Your article has been reviewed by three peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Naama Barkai as the Senior Editor. The reviewers have opted to remain anonymous.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

All reviewers felt the work has merit but have some concerns that are expressed below.

Essential revisions:

You will need to address the question about whether the reporter genes disrupt or recapitulate the endogenous genes. It seems some experimentation might be done here to resolve this, perhaps a FISH experiment. Ideally the endogenous gene could be tagged. Data may exist for this point that is not in the manuscript.

The second reviewer has issues with the analysis at several points. These need to be addressed.

The third reviewer has points that need to be addressed that mainly involve further clarifications in the text and focusing the Discussion.

Reviewer #1:

This is a companion manuscript to Lammers, et al. It increases the number of stripes analysed for the even-skipped gene, assessed using the method of modeling using a modified HMM analysis to provide a memory of the transcriptional bursting, and to include the duration of this bursting in defining the sharp boundaries of the stripes. The technical advance in this work is to insert the entire even-skipped set of enhancers and regulatory elements on a BAC so that the ontogeny of all seven stripes can be observed at high time resolution.

This is an elegant piece of work, although with some of the same caveats that are relevant for the Lammers manuscript. One concern is whether the embryos would develop past gastrulation, given the light dosage (Sedat has made a big point about this). It would be useful to know this, although it may not influence conclusions about the early events.
As with the Lammers manuscript, the observations are by their nature descriptive, but provide a basis for evaluations of various mechanisms; these are spelled out well in the Discussion, with appropriate cautionary caveats. One of the more interesting observations is that the stripes seem to be similarly regulated in kinetics and duration of the transcription despite the variety of factors regulating them. This commonality of mechanism requires some new thinking about the role of these various elements.

Some quibbles about the writing style: the concept of a \"transcriptional desert\" seems a bit melodramatic (as does the repetitive \"stripe is a stripe...\" line) and obscures the mechanistic implications. Presumably the \"desert\" is just the closing of the transcriptional window, as described more precisely in the accompanying manuscript. To further impinge on the author's expository flair here, the concept of \"sculpting\" is too anthropomorphic, as one needs a sculptor to sculpt, and this is conceptually confusing since these events are stochastically driven. Is the sculptor a facilitator of \"intelligent design\"? Do you want to go there?

Some of my concerns about the Lammers manuscript can be reiterated briefly. The inserted eve reporter system is now sucking up factors and presumably competing with the endogenous gene, possibly causing a \"transcriptional haploinsufficiency\" wherein the effective local concentrations are now half what they should be, and this may have the effect of changing the bursting and duration kinetics. Furthermore, the local environment of the endogenous gene may contain more regulatory chromatin not dreamt of in your philosophy, that would rule the expression of eve with a firmer hand (using the anthropomorphizing theme). Hence the assertion that the inserted gene \"is phenotypically neutral\" seems glib and may not be technically correct, at least until tested. Assuming that the function of eve is not compromised by some of the modifications used for the reporter gene, why not tag the endogenous gene?

All told, I feel that these considerations could be dealt with in the Discussion.

Reviewer #2:

In this work by Berrocal et al., the authors provide a quantitative treatment of the enduring developmental problem of how the combinatorial inputs into an enhancer drive spatial patterns in the blastoderm. Work in recent years indicates these developmental genes are also stochastic and exhibit bursting transcription, making the emergence of these ordered patterns even more difficult to explain. This manuscript is decidedly observational yet also highly quantitative. No molecular mechanisms are elucidated, but the underlying spatiotemporal transcription phenomena are characterized with experimental rigor. The authors seem to believe they have uncovered some universal behaviors but they stop short of making any strong claims, making it hard for this reviewer to come away with a digestible conclusion. Overall, I found the paper to be scholarly and well-written. I think it could be appropriate for eLife with some revisions.

1) I found the paper to be somewhat off balance in the presentation. Some of the concepts and data analyses are covered in a cursory manner, even for specialists, but the Discussion goes on at length. For example, I am confused by Figure 4A. Why does the autocorrelation go negative? What does the autocorrelation of the first derivative even mean? Is this experimental data?

2) There is a tremendous amount of live-cell data, and I appreciate the scale of that analysis effort. The two major inference problems in this data analysis are the following: 1) promoter activity must be inferred through direct analysis of the transcription time traces, and 2) burst kinetics have to be assigned to a particular stripe. I would like to see some straightforward analysis or metric of how good these inferences are.
Error bars or confidence analysis is noticeably absent throughout the manuscript.

3) Related to this second inference, I found the exceptional behavior of the 5-6 interstripe to be important. Is this looser regulation also observed by smFISH? Such an analysis would also indicate how well the stripe assignment works in live blastoderms. Is there any biological functionality to the 5-6 interstripe bursting?

4) In Figure 8, is there some sort of reduced parameterization which captures the essential behavior, rather than an empirical tabulation of on and off rates? The results in Figure 10 argue for such a reduced parameterization, and the comparison is made to the Zoller paper in the Discussion, but I don't see where they actually test a reduced model for fitting the data.

5) Related to this last point, when I look at Figure 8, I see a lot of variability at the low ON rates. This variability could result in an apparent trend which is artefactual. At some point, there is a noise floor where the mHMM might be picking up single infrequent bursts that could therefore be spurious. I find the treatment of this key point unsatisfactory, both because there is not an in-depth statistical analysis, and also because the authors use sentences such as \"To investigate whether this result is an artifact of the mHMM, we implemented an orthogonal method that uses integer programming to infer promoter states from traces based on a direct fit of 140s fluorescence pulses.\" This one-sentence description is obtuse, even for specialists. I think there needs to be greater effort here.

Reviewer #3:

In this work, Berrocal and Lammers et al. propose a conceptual, theoretical and computational framework for dissecting gap gene pattern formation in space and time in early Drosophila embryos. The framework is showcased in the analysis of the dynamics of the even-skipped gene, monitored at single-cell and high temporal resolution. This is an ambitious and much-needed work targeting the transcription dynamics in mid and late nuclear cycle 14. Here, the MS2-GFP system is employed to simultaneously monitor the transcription dynamics of the even-skipped gene in seven stripes, each controlled by the independent activity of a specific enhancer: whereas stripes 1, 2 and 5 are each controlled by a different enhancer, stripes 3 and 7 and stripes 4 and 6 are controlled by a common enhancer. This provides the basis for the analysis of the transcription kinetics, as demonstrated here with the memory-adjusted Hidden Markov Model. In this last long nuclear cycle before gastrulation, the transcription factor gradients are transient. Thus, traditional analyses are hindered by the movement of both the nuclei and the transcription factor distributions. From the videos, the paper presents a protocol to classify and track the stripes' positions over time, attributing nuclei to the corresponding stripes.

The main conclusion is that, despite being created by the activity of independent enhancers, even-skipped stripes emerge from globally similar kinetic phenomena over time, which combine an increase of transcription burst frequency and duration within the stripes and a progressive reduction of bursting between stripes. Interestingly, the dynamics and the timing of stripe formation appear quite different among stripes but do not correlate with the enhancer regulating the stripe's formation. Thus, despite being created by the independent activity of different enhancers, even-skipped stripes exhibit similar kinetic phenomena over time.
This is a very interesting observation, suggesting the co-evolution of enhancers to ensure the robustness of the downstream processes.

1) The five even-skipped enhancers have been identified using transgenic reporters and mostly experiments on fixed embryos (ISH or FISH) that did not capture the fast dynamics uncovered with the MS2 system and live imaging. It is thus not clear how these five enhancers might recapitulate the different dynamics observed here for the seven stripes with the MS2 system. Although it is beyond the scope of this manuscript, it is important to mention in the manuscript potential discrepancies between the two experimental approaches.

2) It has been shown recently that the MS2 cassette itself might influence the regulation of the target promoter. It is thus important to mention in the manuscript the possibility that the common regulatory properties observed for the seven stripes could be driven by the MS2 cassette. Could the authors compare the dynamics of expression of the seven stripes that they uncover with the MS2 system against FISH data on precisely staged cycle 14 embryos?

3) As mentioned by the authors, the construct is very long and carries the MS2 cassette at the 5' end of the transcribed sequence. This was done on purpose to increase the fluorescent signal, but it also impairs the sensitivity in detecting fluctuations. A good example of this is shown in Figure 4C, where a small activity of the promoter provides a very strong fluorescent signal. Can the authors comment on this and indicate the range of kinetic parameters that this construct allows one to capture?

4) Most of the work's results and conclusions are based on the temporal kon and koff, which can be extracted directly for each nucleus. This is very likely to be confused with the kon and koff inferred from the mHMM approach. Please discuss how the temporal kon and koff are calculated when they are first mentioned (we only find out about this at the very end of the manuscript) and use a different annotation (or markers) to avoid confusion with the mHMM's.

5) Given the ON and OFF rates changing over time as pointed out above, one should question the validity of the mHMM here. Can the authors clarify this point?

6) In the Materials and methods section, how is the time window for each nucleus selected for mHMM inference? Is it the time window when the nucleus is in the temporal stripe region?

7) Given the movement of the transcription stripes over time, protein stripes, which are translated over a range of AP positions, should be wider than the mRNA stripes. Can the authors show what the integrated transcription stripes look like over time, or at the end of the cycles?

8) The Discussion of the manuscript is too long and would benefit from being more focused.

[Editors' note: the authors resubmitted a revised version of the paper for consideration. What follows is the authors' response to the first round of review.]

Reviewer #1:

[\u2026] This is an elegant piece of work, although with some of the same caveats that are relevant for the Lammers manuscript. One concern is whether the embryos would develop past gastrulation, given the light dosage (Sedat has made a big point about this). It would be useful to know this, although it may not influence conclusions about the early events.

Author response: The imaging conditions are identical to those used repeatedly in various previous experiments. Here, the light dose was determined to be appropriate so as not to affect development, as reported by the timing of the nuclear cycles in early development.
We have now made this point explicitly in the Materials and methods section under \"Imaging and Optimization of Data Collection\".

As with the Lammers manuscript, the observations are by their nature descriptive, but provide a basis for evaluations of various mechanisms; these are spelled out well in the Discussion, with appropriate cautionary caveats. One of the more interesting observations is that the stripes seem to be similarly regulated in kinetics and duration of transcription despite the variety of factors regulating them. This commonality of mechanism requires some new thinking about the role of these various elements. Some quibbles about the writing style: the concept of a \"transcriptional desert\" seems a bit melodramatic (as does the repetitive \"stripe is a stripe...\" line) and obscures the mechanistic implications. Presumably the \"desert\" is just the closing of the transcriptional window, as described more precisely in the accompanying manuscript. To further impinge on the authors' expository flair here, the concept of \"sculpting\" is too anthropomorphic, as one needs a sculptor to sculpt, and this is conceptually confusing since these events are stochastically driven. Is the sculptor a facilitator of \"intelligent design\"? Do you want to go there?

While we were simply trying to add a little rhetorical color to the manuscript, we accept the reviewer's admonition that these terms are distracting and perhaps misleading, and have adjusted our language accordingly. For example, in the new version of the manuscript, we have eliminated all reference to transcriptional deserts. The one area where we disagree is with the term \"sculpting\". While we recognize that it has some anthropomorphic character, here it was meant as a metaphor to emphasize that, by adjusting the kinetics of bursting in individual nuclei, a complex pattern emerges from an initially unfinished and unfeatured mass.
It is not a hill on which we would die, but we hope the reviewers will allow us this flourish.

Some of my concerns about the Lammers manuscript can be reiterated briefly. The inserted eve reporter system is now sucking up factors and presumably competing with the endogenous gene, possibly causing a \"transcriptional haploinsufficiency\" wherein the effective local concentrations are now half what they should be, and this may have the effect of changing the bursting and duration kinetics. Furthermore, the local environment of the endogenous gene may contain more regulatory chromatin not dreamt of in your philosophy, that would rule the expression of eve with a firmer hand (using the anthropomorphizing theme). Hence the assertion that the inserted gene \"is phenotypically neutral\" seems glib and may not be technically correct, at least until tested. Assuming that the function of eve is not compromised by some of the modifications used for the reporter gene, why not tag the endogenous gene?

There are two separate concerns being addressed here. First, that the regulation of a BAC may not be the same as the endogenous locus, and second that the BAC itself might be disrupting its own regulation by adding an extra locus to the system.

In our accompanying paper we have actually shown that, given reasonable values for the eve mRNA degradation rate, the profile resulting from our BAC is quantitatively comparable to that of the endogenous eve locus as measured by mRNA FISH. We now mention this control in the section \"Live imaging of eve expression\". Further, while CRISPR insertions of MS2 (or PP7) into the eve locus exist, these either don't rescue or are located on the 3' end of the gene such that the fluorescence signal is significantly reduced.

The second concern, that the addition of an extra copy of a locus might affect gene regulation by diluting the concentration of regulators, is interesting, but we think this phenomenon is highly unlikely to have a measurable effect here. First, all of the well-characterized regulators of eve have many targets throughout the genome, and bind to thousands of sites in the blastoderm. Thus the addition or removal of a single locus to the system would change the effective concentration of the factors by one percent at most, far less than the natural nucleus-to-nucleus noise in transcription factor levels or variation in the size of nuclei within an embryo. Indeed it would be truly remarkable to have a regulatory system that could respond to such a small change in factor concentration. Furthermore, we expect selection to strongly disfavor such sensitivity, as it runs counter to the need to maintain robustness in gene expression outputs during development. We hope the reviewer will agree with us that this \"transcriptional haploinsufficiency\" is probably a small effect and that it is not worth mentioning explicitly in the manuscript so as not to dilute its message.

All told, I feel that these considerations could be dealt with in the Discussion.

We have addressed the reviewer's concerns throughout the text. In addition, the Discussion section has been significantly modified to better focus on the insights about the molecular control of transcriptional bursting resulting from our work.

Reviewer #2:

In this work by Berrocal et al., the authors provide a quantitative treatment of the enduring developmental problem of how the combinatorial inputs into an enhancer drive spatial patterns in the blastoderm. Work in recent years indicates these developmental genes are also stochastic and exhibit bursting transcription, making the emergence of these ordered patterns even more difficult to explain. This manuscript is decidedly observational yet also highly quantitative.
No molecular mechanisms are elucidated, but the underlying spatiotemporal transcription phenomena are characterized with experimental rigor. The authors seem to believe they have uncovered some universal behaviors but they stop short of making any strong claims, making it hard for this reviewer to come away with a digestible conclusion. Overall, I found the paper to be scholarly and well-written. I think it could be appropriate for eLife with some revisions.

We thank the reviewer for carefully assessing our manuscript. We appreciate the feedback, both specific and general, and have tried to address it in the revised manuscript. Specifically to the reviewer's point, we have revised the Discussion section to focus more on the potential underpinnings of the control mechanisms for bursting uncovered by our work. Having said that, we have also added a call for new experiments that utilize the methods developed here to further probe how transcription factors actually realize the control of bursting.

1) I found the paper to be somewhat off balance in the presentation. Some of the concepts and data analyses are covered in a cursory manner, even for specialists, but the Discussion goes on at length.

As mentioned above, we have significantly edited the Discussion section to make it more focused. In addition, we have expanded the description of how our analyses are done, and removed superfluous analyses that might have added more confusion than clarification.

For example, I am confused by Figure 4A. Why does the autocorrelation go negative? What does the autocorrelation of the first derivative even mean? Is this experimental data?

We apologize for our lack of clarity. We have now revised the description of the autocorrelation approach in the section \"Modeling and inference of promoter state\" and in the Materials and methods section under \"Estimation of polymerase transit time\".
The idea is that every loaded polymerase induces a fluorescence signal for essentially the entire time it is transiting the locus: a pulse of fluorescence of width equal to the transit time. Thus the increase in signal associated with a polymerase loaded at time t should be coupled to a decrease in signal at t + transit time. We reveal this lag by looking at the autocorrelation of the vector of the timepoint-by-timepoint change in fluorescence signal, which has a minimum at 140 s.

2) There is a tremendous amount of live-cell data, and I appreciate the scale of that analysis effort. The two major inference problems in this data analysis are the following: 1) promoter activity must be inferred through direct analysis of the transcription time traces, and 2) burst kinetics have to be assigned to a particular stripe. I would like to see some straightforward analysis or metric of how good these inferences are. Error bars or confidence analysis is noticeably absent throughout the manuscript.

The challenge here is that we lack the means of knowing the actual state of the promoter. In Lammers et al., 2020, we describe the effectiveness of our inference in recovering model parameters from simulated data. However, this is under the still unproven assumption that the base model implicit in the cpHMM is correct. All of these points are made explicitly in the Lammers et al. work, but we are happy to repeat them in this manuscript if the reviewer deems it necessary.

We do apologize for the lack of error bars throughout the paper. Where possible, we have now added error bars generated by, for example, bootstrapping over the nuclei in our data set.
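The autocorrelation signature described above can be reproduced with a minimal simulation; the sampling interval and loading probability below are illustrative placeholders, with only the 140 s transit time taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 10.0                  # sampling interval, s (illustrative)
transit = 140.0            # polymerase transit time, s (value from the text)
n_steps = 2000
width = int(transit / dt)  # pulse width in samples

# Poisson-like loading events: each loaded polymerase contributes a
# rectangular fluorescence pulse lasting `transit` seconds.
loads = rng.random(n_steps) < 0.3
signal = np.convolve(loads.astype(float), np.ones(width))[:n_steps]

# Autocorrelation of the timepoint-by-timepoint change in the signal.
d = np.diff(signal)
d = d - d.mean()
ac = np.correlate(d, d, mode="full")[d.size - 1:]
ac = ac / ac[0]

# The +1 loading step and -1 release step of each pulse are
# anti-correlated at a lag equal to the transit time, so the
# autocorrelation dips negative with a minimum near that lag.
lag_of_min = np.argmin(ac) * dt
print(lag_of_min)  # expect a lag near 140 s
```

The dip below zero is exactly what the reviewer asks about: it is not noise but the signature of each polymerase leaving the locus one transit time after it loaded.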
Specifically, we have added error bars to the single-cell fluorescence traces shown in Figure 3A, to the inference of the RNAP dwell time stemming from the autocorrelation analysis, and to our inferred bursting parameters.

3) Related to this second inference, I found the exceptional behavior of the 5-6 interstripe to be important. Is this looser regulation also observed by smFISH? Such an analysis would also indicate how well the stripe assignment works in live blastoderms. Is there any biological functionality to the 5-6 interstripe bursting?

This is indeed a very interesting question (though outside of the scope of the current work). Given the fast pace of the pattern as reported by our data and the low temporal resolution attainable by smFISH, however, we doubt that the latter technique can shed light on the detailed behavior of the 5-6 interstripe.

4) In Figure 8, is there some sort of reduced parameterization which captures the essential behavior, rather than an empirical tabulation of on and off rates? The results in Figure 10 argue for such a reduced parameterization, and the comparison is made to the Zoller paper in the Discussion, but I don't see where they actually test a reduced model for fitting the data.

Note that, in Figure 8, we have now performed inference by binning our nuclei according to their average fluorescence values rather than grouping them by their position along the embryo. In doing so, the correlations between kon and koff vanished. We believe that this is due, in part, to the large nucleus-to-nucleus variation in average fluorescence values at a given AP position. Instead, when binning by fluorescence, we find that kon increases with fluorescence while koff remains constant. Further, we find that r also increases with fluorescence. Based on these results, we could have performed an inference with a model where kon and r are coupled. We hope the reviewer will agree with us that such exploration of a model that goes beyond the widespread two-state model of bursting calls for a solid theoretical footing that goes beyond the scope of the current work, but that we are very excited to pursue in the future.

5) Related to this last point, when I look at Figure 8, I see a lot of variability at the low ON rates. This variability could result in an apparent trend which is artefactual. At some point, there is a noise floor where the mHMM might be picking up single infrequent bursts that could therefore be spurious. I find the treatment of this key point unsatisfactory, both because there is not an in-depth statistical analysis, and also because the authors use sentences such as \"To investigate whether this result is an artifact of the mHMM, we implemented an orthogonal method that uses integer programming to infer promoter states from traces based on a direct fit of 140s fluorescence pulses.\" This one-sentence description is obtuse, even for specialists. I think there needs to be greater effort here.

We agree that there are many potential artifacts clouding our inference. As a result, we have removed any analyses that rely on inferences in regimes such as the low ON-rate one described by the reviewer. Further, we have performed a systematic analysis of the limitations of the cpHMM method using synthetic data in Lammers et al., which we would be happy to reproduce here if the reviewer deems it necessary.

Reviewer #3:

[\u2026] 1) The five even-skipped enhancers have been identified using transgenic reporters and mostly experiments on fixed embryos (ISH or FISH) that did not capture the fast dynamics uncovered with the MS2 system and live imaging. It is thus not clear how these five enhancers might recapitulate the different dynamics observed here for the seven stripes with the MS2 system.
Although it is beyond the scope of this manuscript, it is important to mention in the manuscript potential discrepancies between the two experimental approaches.

In our accompanying paper we have actually shown that, given reasonable values for the eve mRNA degradation rate, the profile resulting from our BAC is quantitatively comparable to that of the endogenous eve locus as measured by mRNA FISH. We now mention this control in the section \"Live imaging of eve expression\".

2) It has been shown recently that the MS2 cassette itself might influence the regulation of the target promoter. It is thus important to mention in the manuscript the possibility that the common regulatory properties observed for the seven stripes could be driven by the MS2 cassette. Could the authors compare the dynamics of expression of the seven stripes that they uncover with the MS2 system against FISH data on precisely staged cycle 14 embryos?

This is a very important point indeed, as Lucas et al. have shown that the presence of Zelda sites in the MS2 loops alters the expression dynamics of the hunchback P2 enhancer and promoter. However, given the direct comparison between the BAC and the endogenous eve patterns mentioned in the previous point, we believe that, unlike in the case reported by Lucas et al., the MS2 stem loops do not significantly affect eve regulation. The reasons for this remain unclear, but could be related to different amounts of Zelda present at both loci. For example, if Zelda binding is already saturated at eve, we would not expect the presence of more Zelda sites in the MS2 loops to have a significant effect. Further, the distance between the loops and the enhancers might also make a difference. While the hunchback P2 promoter and enhancer are essentially one contiguous unit on the DNA, the eve enhancers are separated from the promoter and distributed throughout the locus.

3) As mentioned by the authors, the construct is very long and carries the MS2 cassette at the 5' end of the transcribed sequence. This was done on purpose to increase the fluorescent signal, but it also impairs the sensitivity in detecting fluctuations. A good example of this is shown in Figure 4C, where a small activity of the promoter provides a very strong fluorescent signal. Can the authors comment on this and indicate the range of kinetic parameters that this construct allows one to capture?

This is a very important point and one that we explore in detail in Lammers et al., 2020. We have now referred the reader to this work when we introduce the reporter construct in the section titled \"Reporter Design\" within the Materials and methods. In principle the length of the reporter should not limit our ability to estimate burst parameters; however, in practice a construct that is too short will have insufficient signal and one that is too long will require too many computational resources. Our choice of reporter construct structure strikes a balance between these two limitations and is ideally suited for inferring bursting parameters in the time range where eve resides, as well as for boosting the signal-to-noise ratio.

4) Most of the work's results and conclusions are based on the temporal kon and koff, which can be extracted directly for each nucleus. This is very likely to be confused with the kon and koff inferred from the mHMM approach. Please discuss how the temporal kon and koff are calculated when they are first mentioned (we only find out about this at the very end of the manuscript) and use a different annotation (or markers) to avoid confusion with the mHMM's.

We agree with the reviewer that the treatment of this different set of kinetic parameters could result in potential confusion. As a result, we have decided to focus on the kon, koff and r parameters inferred by the HMM model and have removed our analyses based on inferring these parameters from single-nucleus traces.

5) Given the ON and OFF rates changing over time as pointed out above, one should question the validity of the mHMM here. Can the authors clarify this point?

The temporal variation in bursting parameters is indeed an issue. In light of this, we have decided not to focus on the temporal variation in the rates, except with respect to stripe movements, and focus our inference on a period of time when expression seems to be stable.

6) In the Materials and methods section, how is the time window for each nucleus selected for mHMM inference? Is it the time window when the nucleus is in the temporal stripe region?

Our inference is carried out over the full duration of activity of each active nucleus during nuclear cycle 14. We have clarified this point in the Materials and methods, section \"Compound-state Hidden Markov Model\". Further, to simplify our paper, we have decided to exclude the inference of the temporal dependence of bursting parameters in favor of focusing on the commonalities of bursting parameters between different stripes.

7) Given the movement of the transcription stripes over time, protein stripes, which are translated over a range of AP positions, should be wider than the mRNA stripes.
Can the authors show what the integrated transcription stripes look like over time, or at the end of the cycles?

Such analysis is possible, but relies on assuming a value for the mRNA and protein degradation rates. While we have engaged in such an exercise in the past, we worry that the analysis proposed by the reviewer would dilute the message of the work.

8) The Discussion of the manuscript is too long and would benefit from being more focused.

We agree with the reviewer and have significantly shortened and focused the Discussion section."}
+{"text": "Here, using high-throughput quantitative single-cell measurements and a novel statistical method, we systematically analyzed transcriptional responses to a large number of dynamic TF inputs. In particular, we quantified the scaling behavior among different transcriptional features extracted from the measured trajectories, such as the gene activation delay or duration of promoter activity. Surprisingly, we found that even the same gene promoter can exhibit qualitatively distinct induction and scaling behaviors when exposed to different dynamic TF contexts. While it was previously known that promoters fall into distinct classes, here we show that the same promoter can switch between different classes depending on context. Thus, promoters can adopt context-dependent \"manifestations\". Our analysis suggests that the full complexity of signal processing by genetic circuits may be significantly underestimated when studied in only specific contexts.

Cells respond to external signals and stresses by activating transcription factors (TFs), which induce gene expression changes. Prior work suggests that signal-specific gene expression changes are partly achieved because gene promoters can be classified into distinct classes. Here we show that a single gene promoter can switch between different promoter classes depending on transcription factor context.

Transcription factors (TFs) control gene expression by binding to the promoters of genes and recruiting chromatin remodelers and the general transcriptional machinery. Recruitment of RNA Polymerase II enables the initiation of transcription, which produces mRNAs that are exported to the cytoplasm, where they are finally translated into proteins by the ribosome. Gene expression is primarily regulated at the level of promoter switching dynamics and initiation of transcription, which is associated with large cell-to-cell variability.
Messenger RNA and protein YFP reporter copy numbers are described by two coupled birth-and-death processes. We account for extrinsic variability at the translational level by considering the translation rate to be randomly distributed across a population of cells. The dynamic state of the overall gene expression system at time t is denoted by s(t)=(z(t),m(t),n(t)), with z(t)\u2208{z0,\u22ef,zL-1} as the instantaneous transcription rate and m(t) and n(t) as the mRNA and YFP reporter copy numbers, respectively. We denote by s0:K={s(t)|0\u2264t\u2264tK} a complete trajectory of s(t) on the time interval t\u2208[0,tK]. We consider a sequence of K partial and noisy measurements y1,\u22ef,yK at times t1<\u22ef<tK.

The nuclear Msn2 input u(t) was modeled as approaching its maximal nuclear level u0 with rate k1 while Msn2 is nuclear, and decaying with rate k2 thereafter. This is to account for the fact that, depending on the pulse duration, Msn2 may not have reached its maximal nuclear level u0. The parameters u0, k1 and k2 were determined through fitting. Specifically, we took the full 30 different pulses and inferred the best-fit values for u0, k1, and k2 using least squares fitting.

The instantaneous transcription rate is governed by a stochastic process Z(t)\u2208{z0,z1,z2}, whose value changes discontinuously whenever the promoter transitions from one state into another. In the absence of nuclear Msn2, the promoter is in its transcriptionally inactive state (z0=0), where no transcripts are produced. Upon recruitment of Msn2 to the promoter, it can switch into a transcriptionally permissive state in which transcription takes place with propensity z1. To account for Msn2-dependent promoter activation, we consider the switching rate from z0 to z1 to depend on the nuclear Msn2 abundance. For simplicity, we consider a linear dependency, i.e., q01(t)=\u03b3u(t), with u(t) as the Msn2 abundance at time t; that is, q01(t) is zero if Msn2 is cytoplasmic at time t. The corresponding reverse rate q10 is considered to be constant.
We assume that transcription can be further enhanced by recruitment of additional factors such as chromatin remodeling complexes and general transcription factors. This is captured in our model by introducing a third state with transcription rate z2 and corresponding transition rates q12 and q21. With this, we can describe the time-dependent probability distribution over the transcription rate, PZ(t)=(P(Z(t)=0|\u03b8), P(Z(t)=z1|\u03b8), P(Z(t)=z2|\u03b8))T, in terms of a forward equation, with PZ(0)=pz,0 as some initial distribution over Z(t) and \u03b8={\u03b3,q10,q12,q21,z1,z2} as a set of parameters. In the following, we denote by zt={z(s)|0\u2264s\u2264t} a complete realization of Z(t) on a fixed time interval [0,t]. Furthermore, we introduce the conditional path distribution p(zt|\u03b8), which measures the likelihood of observing a particular trajectory zt for a given parameter set \u03b8. Note that it is straightforward to draw random sample paths zt from this distribution using Gillespie's stochastic simulation algorithm (SSA).

We denote by M(t) and N(t) the copy numbers of mRNA and protein at time t, respectively. The parameters c1 and c2 are the mRNA and protein degradation rates and A is the protein translation rate. To account for cell-to-cell variability in protein translation, we consider the latter to be randomly distributed across isogenic cells, i.e., A\u223cp(a|\u03b2), with p(a|\u03b2) as an arbitrary probability density function (pdf) with positive support and \u03b2 as a set of hyperparameters characterizing this distribution. For a fixed realization of A, the joint process (Z(t),M(t),N(t)) can be described by a Markov chain. However, due to the random variability over A, each cell is associated with a differently parameterized Markov chain.
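A minimal Gillespie-style sketch may help make this per-cell construction concrete. All parameter values, the Gamma translation-rate distribution, and the square Msn2 pulse below are illustrative assumptions rather than this study's fitted values, and u(t) is held constant between reaction events:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative placeholder parameters (not the study's fitted values).
gamma_, q10, q12, q21 = 0.002, 0.01, 0.005, 0.01  # promoter switching, 1/s
z = [0.0, 0.05, 0.2]        # transcription rates z0, z1, z2, 1/s
c1, c2 = 0.004, 1.67e-5     # mRNA / protein degradation, 1/s

def u(t):
    # Stand-in for the measured nuclear-Msn2 profile:
    # a single 30-minute pulse of unit amplitude.
    return 1.0 if t < 1800.0 else 0.0

def simulate_cell(t_end, beta=(4.0, 0.0125)):
    # Cell-specific translation rate A ~ Gamma(shape, scale): the
    # extrinsic-variability assumption of the heterogeneous model.
    a = rng.gamma(*beta)
    t, state, m, n = 0.0, 0, 0, 0
    while t < t_end:
        props = np.array([
            gamma_ * u(t) if state == 0 else 0.0,  # z0 -> z1 (Msn2-dependent)
            q10 if state == 1 else 0.0,            # z1 -> z0
            q12 if state == 1 else 0.0,            # z1 -> z2
            q21 if state == 2 else 0.0,            # z2 -> z1
            z[state],                              # transcription
            c1 * m,                                # mRNA decay
            a * m,                                 # translation
            c2 * n,                                # protein decay
        ])
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)  # u(t) treated as constant per step
        k = int(np.searchsorted(np.cumsum(props), rng.random() * total))
        if k < 4:
            state = [1, 0, 2, 1][k]
        elif k == 4:
            m += 1
        elif k == 5:
            m -= 1
        elif k == 6:
            n += 1
        else:
            n -= 1
    return m, n

m, n = simulate_cell(3600.0)
print(m, n)
```

Redrawing `a` for each call is what makes the population a heterogeneous Markov model: every simulated cell follows the same reaction scheme but with its own translation rate.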
This results in a heterogeneous Markov model, whose computational analysis turns out to be challenging =(Z(t),M(t),N(t),A), such master equation readsFor a given set of parameters P:=P(Z(t)=zi,M(t)=m,N(t)=n,A\u2208=0.02, whereas \u03ba={1,10} for the slow and fast promoter model, respectively. All rate parameters are given in units s\u20101.In order to study the accuracy of the proposed inference method, we tested it using artificially generated data. In particular, we considered two differently parameterized versions of the stochastic model in Fig\u00a0tK=150min using SSA and sampled the protein abundance at 55 equidistant time points t1,\u22ef,tK. For the Msn2 activation function u(t), we used the experimentally determined profile for a single\u2010pulse experiment . The measurements were then simulated from a log\u2010normal measurement density LNyk|lognk,\u03b72, with nk as the protein copy number at time tk and \u03b7 as the logarithmic standard deviation of this density. For this study, we set \u03b7=0.05.For each promoter, we generated 30 single\u2010cell trajectories between time zero and : total time active, time to activate and transcriptional output. We estimated posterior expectations of these functionals using J=400 Monte Carlo samples and analyzed how they compared with the true values extracted from the exact sample paths z0:K. We first assumed perfect knowledge of all process parameters. The top panels in Fig\u00a0k close to one. The corresponding R2 indicates the reconstruction accuracy of the inference method. For the slowly switching promoter, we found R2 values close to one, indicating very high accuracy. For the fast\u2010switching promoter, the inference results become slightly less accurate because individual switching events are more difficult to infer from the relatively slow reporter dynamics. We furthermore analyzed the robustness of the method with respect to parameter mismatch. 
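The synthetic measurements described above, drawn from a log-normal density LN(y_k | log n_k, η²) with η = 0.05, take only a few lines to generate. The helper names are ours; `log_meas_density` is the particle weight such a measurement model would imply in a sequential Monte Carlo scheme:

```python
import numpy as np

def simulate_measurements(n, eta=0.05, rng=None):
    """Simulate noisy readouts y_k ~ LN(log n_k, eta^2) of protein copy
    numbers n_k, matching the measurement density of the synthetic-data
    study (eta = 0.05 there)."""
    rng = rng or np.random.default_rng()
    n = np.asarray(n, dtype=float)
    return np.exp(np.log(n) + eta * rng.standard_normal(n.shape))

def log_meas_density(y, n, eta=0.05):
    """Log of the log-normal measurement density LN(y | log n, eta^2);
    the (unnormalized) weight a particle with protein count n receives
    for an observation y."""
    return (-np.log(y * eta * np.sqrt(2.0 * np.pi))
            - (np.log(y) - np.log(n)) ** 2 / (2.0 * eta ** 2))
```

Note that the median of the simulated y_k equals n_k, so the noise is multiplicative and relative in scale, which is why η is interpreted as a relative variation.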
To this end, we randomly perturbed all of the parameters using a log\u2010normal distribution LNlogb,0.12 with b as the underlying true value. Note that the random parameter perturbation was performed for each of the considered trajectories separately. In case of poor robustness, we would thus expect a significantly reduced correlation between the true and inferred values. However, we found for all three features that both the R2 and slope k changed only marginally indicating a relatively high robustness of the method. This is an important feature in practical scenarios where knowledge about process parameters is generally imperfect.We applied the hybrid sequential Monte Carlo algorithm to reconstruct the promoter dynamics and compared it with the true realization. In particular, we analyzed three of the path functionals described in Section \u201cu(t) corresponds to the nuclear Msn2 level that was measured experimentally for each condition . Since the promoter switching dynamics can be concentration\u2010 and pulse length\u2010dependent, we re\u2010estimated the promoter parameters \u03b8 for all other conditions, while keeping \u03c9 fixed at the previously inferred values. The kinetics of the same gene expression system have been previously quantified using a deterministic model =\u0393 and p(\u27e8A\u27e9)=\u0393 for the mRNA degradation and average protein translation rates, respectively. Additionally, the protein degradation rate was fixed to c2=1.67e\u20105s\u20101. For the switching parameters qij and the transcription rates z1 and z2, we used prior distributions p(\u00b7)=\u0393. To infer the parameters, we applied a Metropolis\u2010Hastings sampler with log\u2010normal proposal distributions to generate 2e4 samples from which we extracted maximum a posterior (MAP) estimates of the model parameters.For each promoter, we first estimated the total set of parameters et al, \u03b7=0.15 corresponding to an expected relative variation of roughly 15 percent. 
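A Metropolis-Hastings sampler with log-normal proposals, as used above for parameter inference, can be sketched for a single positive parameter. The one detail that is easy to miss is the Hastings correction log(x'/x) for the asymmetry of the log-normal random-walk proposal. This is an illustrative sketch under that standard construction, not the authors' implementation:

```python
import numpy as np

def mh_lognormal(logpost, x0, n_samples, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings for a positive scalar parameter
    with a log-normal proposal x' ~ LN(log x, step^2). Returns the chain,
    from which a MAP estimate can be extracted as in the text."""
    rng = rng or np.random.default_rng()
    x, lp = x0, logpost(x0)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        x_new = np.exp(np.log(x) + step * rng.standard_normal())
        lp_new = logpost(x_new)
        # Hastings correction: log q(x|x') - log q(x'|x) = log(x_new / x)
        if np.log(rng.random()) < lp_new - lp + np.log(x_new / x):
            x, lp = x_new, lp_new
        chain[i] = x
    return chain
```

For a vector of rate parameters, the same update is typically applied componentwise or with a multivariate log-normal proposal; priors such as the Gamma distributions mentioned above enter through `logpost`.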
For each condition and promoter, we processed each individual cell using J=400 particles. From the resulting particles, we estimated the promoter features as summarized in the corresponding section (in more than 90% of the 270 experiments, less than 15% of trajectories were excluded). For a small fraction of around 3% of the experiments, between 30 and 50% of the trajectories had to be dismissed. However, all these experiments correspond to promoters and conditions where gene expression signals were very low and close to background. Therefore, the exclusion of trajectories should affect our analyses to no significant extent. Moreover, we performed a quantitative analysis, which shows that the exclusion of trajectories did not strongly affect the statistical properties of the gene expression levels for individual promoters and conditions. The corresponding analysis can be found in the provided GitHub repository. Using the calibrated models, we inferred the transcription and promoter switching dynamics using the hybrid sequential Monte Carlo inference scheme described above. As indicated in the main text, the overall analysis pipeline depends on random number generation, and therefore, the inferred transcriptional features exhibit a certain degree of variability between repeated runs of the analysis. To quantify this uncertainty, we performed the overall analysis five times and calculated averages and standard errors of the resulting transcriptional features. Note that certain transcriptional features are defined only for responding cells. For conditions that contain only a small number of responding cells, it can happen that in some of the repeated runs no responders are detected, which leaves those transcriptional features undefined. In these cases, averages and standard errors were calculated over all runs for which the number of responders was non-zero. Here, u(t) denotes the experimentally measured nuclear Msn2 abundance; transcription takes place with rate z when the promoter is in state P2. 
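The averaging rule for transcriptional features over repeated analysis runs, where a run contributes nothing when the feature is undefined (e.g. no responding cells were detected), can be written as a small NaN-aware helper; the function name is ours:

```python
import numpy as np

def mean_and_se(values):
    """Average and standard error of a transcriptional feature across
    repeated analysis runs, ignoring runs where the feature is undefined
    (encoded as NaN), mirroring the averaging rule described in the text."""
    v = np.asarray(values, dtype=float)
    v = v[~np.isnan(v)]                      # keep only runs with responders
    if v.size == 0:
        return np.nan, np.nan                # feature undefined in every run
    se = v.std(ddof=1) / np.sqrt(v.size) if v.size > 1 else np.nan
    return v.mean(), se
```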
The parameters used for simulation were chosen to be c1=0.02, c2=0.06, c3=0.003, c4=0.02, c5=0.0006, c6=0.001, c7=0.9, c8=7e\u20106, and z=0.6 in units s\u20101. For the simulations shown in the corresponding figure, the parameters were \u03b31=c2=\u03b33=\u03b34=c6=0.01/s, \u03bb5=0.1/s, n3=6, n4=2, n5=3, V3=0.5, V4=0.001, V5=1.2. The symbol u(t) denotes the time\u2010varying Msn2 input in arbitrary units and the species M in (34) corresponds to mRNA. The two transcription rates z1 and z3 are considered to be non\u2010zero, but their specific value is irrelevant for the purpose of this analysis. We performed simulations of a four\u2010state promoter model with nonlinear Msn2\u2010dependent switching rates; in summary, the model is described by a reaction network. The promoter can be described by a forward equation with generator matrix Q(t). From the solution of the forward equation, we can directly calculate the expected number of state transitions by multiplying the entries of Q(t) with the respective state probabilities P(t) and integrating over time. The resulting matrix H(t) counts the expected number of transitions between all states between time zero and t. The diagonal elements of the matrix correspond to the (negative) total number of transitions from one state to any other state. The corresponding figure shows H(t) for different dynamical inputs, with the diagonal elements set to zero for clarity. Study conception, data analysis, figures, and manuscript drafting and editing: ASH and CZ. Bayesian inference method: CZ. The authors declare that they have no conflict of interest."}
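The expected-transition matrix H(t) described in the record above, obtained by multiplying the entries of the generator Q with the state probabilities P(t) and integrating over time, can be sketched with a simple Euler integration of the forward equation. This sketch assumes a constant generator for illustration; a time-varying Q(t) would be evaluated inside the loop:

```python
import numpy as np

def expected_transitions(Q, p0, t_end, dt=1e-3):
    """Expected number of i -> j transitions of a continuous-time Markov
    chain on [0, t_end]: H_ij = integral of P_i(s) * Q_ij ds for i != j.
    The forward (master) equation dP/dt = Q^T P is integrated with Euler
    steps; diagonal elements of H are set to zero for clarity, as in the
    figure described in the text."""
    n = len(p0)
    P = np.asarray(p0, dtype=float).copy()
    H = np.zeros((n, n))
    steps = int(round(t_end / dt))
    for _ in range(steps):
        H += (P[:, None] * Q) * dt      # accumulate expected transition counts
        P = P + dt * (Q.T @ P)          # one Euler step of the forward equation
    np.fill_diagonal(H, 0.0)
    return H
```

For a two-state chain with a single 0 -> 1 transition at unit rate, H[0, 1] approaches 1 for large t, i.e. the one transition the chain is expected to make.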
+{"text": "Lipomyces starkeyi, using factor analyses and structural equation modeling to construct a regulatory network model. By factor analysis, we classified 89 TAG biosynthesis-related genes into nine groups, which were considered different regulatory sub-systems. We constructed two different types of regulatory models. One is the regulatory model for oil productivity, and the other is the whole regulatory model for TAG biosynthesis. From the inferred oil productivity regulatory model, the well characterized genes DGA1 and ACL1 were detected as regulatory factors. Furthermore, we also found unknown feedback controls in oil productivity regulation. These regulation models suggest that the regulatory factor induction targets should be selected carefully. Within the whole regulatory model of TAG biosynthesis, some genes were detected as not related to TAG biosynthesis regulation. Using network modeling, we reveal that the regulatory system is helpful for the new era of bioengineering.Improving the bioproduction ability of efficient host microorganisms is a central aim in bioengineering. To control biosynthesis in living cells, the regulatory system of the whole biosynthetic pathway should be clearly understood. In this study, we applied our network modeling method to infer the regulatory system for triacylglyceride (TAG) biosynthesis in Lipomyces starkeyi is quite important. This produces edible oil with a fatty acid composition similar to that of palm oils, at a reasonable cost [The improvement of the bioproduction ability in microorganisms is one of the important themes in bioengineering fields. Several types of empirical breeding approaches, such as constructing random mutant strains ,2,3 and ble cost ,7,8,9. TL. starkeyi is its oil accumulation system [L. starkeyi is available [L. starkeyi [The specific feature of n system ,10. It cn system ,10, and vailable . Its oilvailable ,13,14, ivailable . The TAGvailable . The acystarkeyi . In the starkeyi ,17. 
Thisstarkeyi . The acestarkeyi .L. starkeyi, the transcriptional regulatory mechanisms that underlie this system must be elucidated. Gene regulatory network analysis is one of the useful methods to gain insights into transcriptional regulation. Various algorithms have been developed to infer complex gene networks from mRNA levels [Saccharomyces cerevisiae, Caenorhabditis elegans, Drosophila melanogaster, and human ES cells [Although the pathways for generating TAG have been extensively studied, the regulation of these chemical reactions has remained enigmatic, and thus we are far from the complete control of this TAG biosynthesis system. To enhance our knowledge and control the oil accumulation system in A levels ,20,21. WA levels ,23, whicA levels ,25,26. TES cells ,28,29,30L. starkeyi. Even though the flow of TAG biosynthesis has been studied as metabolic/biosynthesis pathways, they reveal the chemical reactions to generate TAG from glucose, rather than the transcriptional regulation of TAG production. These metabolic/biosynthesis pathways can be utilized for the dynamics equation of the mass balance of chemical compounds, and those methods were known to be useful for TAG biosynthesis control. To develop further an efficient strain for bioproduction by genetic control, we have to clarify the complicated transcriptional regulatory mechanism of TAG production, which remains unclear. In this study, we infer the structure of the transcriptional regulation of TAG biosynthesis by setting the TAG production ability as an objective variable. 
Since our developed SEM approach can detect the fitting scores of predicted models with the measured data, our approach is one of the powerful approaches to infer the transcriptional regulatory model for revealing the effect for result variables.Here, we applied our developed SEM approach for inferring the transcriptional regulatory mechanism underlying the whole TAG biosynthesis system in 8 cells/mL), and the total amount of produced oil data (g/L) measured in several types of strains in 256 experiments. Among the 256 experiments, 184 measured 8 time points from 0 to 240 h in several types of strains, 12 measured 4 time points from 48 to 192 h in a wild-type strain or low oil productivity strain, and 60 measured 3 time points from 24 to 72 h in wild-type strains or high oil productivity strains. Together, the data points were considered to clarify when the TAG biosynthesis occurs [For our network analysis, we utilized the expression profiles of 7799 genes, including 14 mitochondrial genes, measured by DNA microarray techniques, the cell concentration data and Cell(t) are the oil production data and the cell concentration data, which were measured as the phenotypic data at every time point (t) in each strain, and oil_productivity(t) was calculated as the TAG productivity per cell. From a biological viewpoint, a time-gap will occur between the defined oil_productivity(t) and the state of gene expression. The effect of gene expression at time t can be detected as the difference in oil productivity between time t + 1 and time t. In this method, since the last time point of each time series data had no oil productivity information, these data were deleted. Finally, we utilized 210 experimental data for calculations.Here, https://www.genome.jp/kegg/pathway.html) and JGI Genome Portal Database Databases. In the KEGG Database and by prior investigations [Yarrowia lipolytica, is available. We obtained the gene information related to the TAG biosynthesis pathways in Y. 
lipolytica and searched for the homologous genes in L. starkeyi by using the JGI Database. Finally, 88 genes were detected as components of the TAG biosynthesis pathway in L. starkeyi.To construct the regulatory network of the TAG biosynthesis pathways, we utilized the Kyoto Encyclopedia of Genes and Genomes Database can be summarized in a matrix form: Factor analysis is a statistical method for describing the variability among observed variables in terms of a potentially lower number of latent variables . The inin samples in each of the observed variables, then X and U are the (p \u00d7 n) matrices composed of the observed data and their means, respectively. The partial regression coefficients of each latent variable are indicated as elements of A, the (q \u00d7 p) latent interaction matrix. In matrix A, each column corresponds to a factor and each row corresponds to an observed variable, and thus each element of A indicates the strength of the regulation from each protein to each gene. The F matrix is the latent variable matrix, and E is the (q \u00d7 n) error matrix. If there are The variance\u2013covariance matrix between the observed variables \u03a3 structurized by parameters is described, as follows: 2 is the covariance matrix of error terms. From this structurized matrix, the values of the partial regression weight matrix A and the variances of the \u201cerrors\u201d are estimated.Here, A is the factor loading matrix of latent variables, \u03a6 is the covariance matrix among factors, and \u03a8To detect the suitable number of subgroups in the TAG biosynthesis pathways, exploratory factor analysis (EFA) was performed. EFA is utilized to reveal the latent structure, by assuming that the observed data are a synthetic amount of a lower number of latent variables. In this study, EFA was executed by a principal factor method with promax rotation, which is a general method for fitting rotating factors to a hypothesized structure of latent variables. 
In this study, we applied EFA for dividing several subgroups of TAG biosynthesis pathway genes, so we utilized Kaiser criterion, which is one of the major criterions, as for estimating the number of factors at first. By Kaiser criterion, the number of factors is known to be overestimated. Thus, we reduced the estimated factor number one by one to confirm the suitable number of factors, which control TAG biosynthesis pathway genes. In this step, we applied a scree plot to avoid underestimating the number of factors.STEP 1:Initial model assumption of oil productivity group;STEP 2:Model optimization of oil productivity group;STEP 3:Definition of pseudo variables from subgroups;STEP 4:Initial model assumption among pseudo variables;STEP 5:Model optimization of pseudo variables.The 88 genes detected by the factor analysis were divided into two types. One type is the genes classified into the same group with oil productivity, thus reflecting that those genes are controlled by the same transcriptional regulatory system with oil productivity. The other type is the genes classified into other groups from oil productivity, which means that those genes are controlled by a different transcriptional regulatory system than that of oil productivity. To clarify the whole regulatory system of TAG biosynthesis, we applied stepwise network modeling as follows:For the SEM calculations, we had to assume the initial model in each step. In this case, only one variable was defined as an objective variable, and the star model can be applied as the initial model. In STEP 1, the oil productivity was assigned as the objective variable. The genes classified into the same group with the oil productivity were assumed to be the effect variables for the oil productivity. We arranged 15 genes as the parent nodes for the oil productivity as a child node in an initial model.The whole regulatory system for TAG biosynthesis was inferred by a pseudo variables network. 
Given the restrictions of the SEM calculation, it is difficult to construct the optimal network model with the selected 88 genes. We calculated the representative value of each group from the measured data of components within the group and defined one pseudo variable corresponding to one group. In the pseudo variables network, the variable including oil productivity was defined as the objective variable, and the other variables were assumed as the effect variables for the regulator of oil productivity in the initial model. Thus, 8 variables were arranged as the parent node for the one specific variable as a child node.In the initial model, the parent nodes were assumed to be independent, and the relationships between parent nodes were not identified. Thus, the initial model was expressed byf is a vector of effect variables arranged as parent nodes, and g is the data of the objective variable. Since the parent nodes were assumed to be independent, the weights of the relationships between them were defined as O matrices. The matrix \u0393 is a vector representing the effectiveness of parent nodes to child nodes. The errors that affect the objective variables are denoted by \u03b5.Here, To detect the regulatory structure of TAG biosynthesis, we previously applied our developed network modeling method based on SEM calculation . We appln variables \u03a3(\u03b8) was given byIn the inferred network model, the variance\u2013covariance matrix between the arranged I denote the identity matrix, \u039b denote the n \u00d7 n matrices of the inferred parameter matrix, and \u03a6 denote the covariance matrix of the error terms. The real covariance matrix \u03a3 is calculated from the observed data. 
Each element of the model\u2019s variance\u2013covariance matrix \u03a3(\u03b8) is expressed as a function of the parameter \u03b8, and all parameters in \u03a3(\u03b8) are calculated to minimize the difference from \u03a3 by the maximum likelihood method: Let tr(\u03a3) is the trace of matrix \u03a3, and q is the number of observed variables.Here, |\u03a3| is the determinant of matrix \u03a3, \u03b8) and \u03a3, and the quantitative similarity can be detected as fitting scores. To evaluate the inferred model accuracy, we utilized six different fitting scores as criteria: \u03c72 values (CMIN), the goodness of fit index (GFI), the adjusted goodness of fit index (AGFI), the comparative fit index (CFI), the root mean square error of approximation (RMSEA) and the Akaike\u2019s information criterion (AIC). These criteria indicate the qualities of model adaptation to the measured data, and they have threshold values to determine whether the model is suitable. A CMIN value higher than 0.01 was considered as a well fitted model, and GFI and CFI values above 0.90 are required for a good model fit [In the SEM calculation, the similarity between the constructed model and the actual relationships is predicted by comparing \u03a3 and then applied confirmatory factor analysis (CFA) by the principal factor method with promax rotation. Since EFA is commonly used for identifying the number of factors with effects on the observed variables, we applied EFA first. After the number of factors was determined, we applied CFA to classify the observed variables strictly.Since the TAG biosynthesis pathways are a specific system for survival under nutrient-limited conditions in To detect the genes in the same regulatory system as the oil productivity, we executed factor analysis to 88 TAG biosynthesis-related genes and oil productivity, and thus the compiled expression profiles of 89 variables measured under 210 conditions were classified by their regulatory factors. 
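The maximum-likelihood discrepancy that the SEM parameters are chosen to minimize, F = log|Σ(θ)| + tr(S Σ(θ)⁻¹) − log|S| − q for sample covariance S and q observed variables, can be evaluated directly. A sketch, assuming positive-definite matrices; the function name is ours:

```python
import numpy as np

def ml_discrepancy(sigma_model, S):
    """Maximum-likelihood discrepancy between a model-implied covariance
    matrix Sigma(theta) and the sample covariance S:
        F = log|Sigma| + tr(S Sigma^{-1}) - log|S| - q.
    F = 0 exactly when Sigma(theta) = S, and F > 0 otherwise
    (for positive-definite inputs)."""
    q = S.shape[0]
    _, logdet_m = np.linalg.slogdet(sigma_model)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_m + np.trace(S @ np.linalg.inv(sigma_model)) - logdet_s - q
```

Fit indices such as CMIN, GFI, and RMSEA mentioned below are functions of this minimized discrepancy and the model's degrees of freedom.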
In EFA, the Kaiser criterion was utilized to estimate the suitable number of factors. The Kaiser criterion asserts that the number of factors is the same as the number of eigen values of the covariance matrix that are greater than one, and 12 factors were extracted as regulatory factors of the 89 variables. Among the 89 variables, 88 have the highest factor loading values to nine factors, and the remaining three factors were considered to be ineffective for these variables. Furthermore, the scree plot of EFA indicated that the suitable number of extracted factors can be nine. The cumulative sum of the squared factor loadings for nine factors was 83.872%, and this means that nine factors were sufficient to explain the 89 variables. Thus, we executed CFA for 89 variables with nine factors, and they were well classified by the nine factors according to their highest factor loadings. The communality, which can clarify the percent of variance in each variable explained by nine factors, and the factor loading of each factor, are displayed in The communalities of the 88 TAG biosynthesis-related genes were over 0.500, and among them, 63 genes were higher than 0.800. These high communality values reflected the fact that the expression of TAG biosynthesis genes could be explained by these nine factors. On the other hand, the communality of oil productivity was 0.382. This means that the regulatory system for oil productivity was not only TAG biosynthesis genes, and other regulatory systems are involved. 
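The Kaiser criterion as defined above, i.e. counting eigenvalues of the correlation matrix that exceed one, is easy to compute directly, and the sorted eigenvalues are exactly what a scree plot displays. A sketch with our own function name:

```python
import numpy as np

def kaiser_count(X):
    """Number of factors suggested by the Kaiser criterion for an
    (n_samples, n_variables) data matrix X: the count of eigenvalues of
    the variables' correlation matrix that are greater than one.
    Also returns the eigenvalues in descending order (the scree plot
    values used in the text to guard against over/underestimation)."""
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)
    return int((eigvals > 1.0).sum()), np.sort(eigvals)[::-1]
```

On data generated from two strong latent factors, the criterion recovers two factors, while weak noise directions contribute eigenvalues well below one.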
By CFA, the 89 variables were well classified into nine subgroups by their factor loadings: Group 1 had 24 genes, Group 2 had 21 genes, Group 3 had 15 genes and oil productivity, Group 4 had 9 genes, Group 5 had 5 genes, Group 6 had 6 genes, Group 7 had one gene, Group 8 had 3 genes, and Group 9 had 4 genes.The confirmatory factor analysis (CFA) classification of oil productivity was included in Group 3, with 15 genes that are known to be related to the reactions for TAG biosynthesis. To infer the regulatory system for oil productivity within the group, the network modeling method was applied. First, we connected the 15 genes to oil productivity as an initial model. In the initial model, the oil productivity was affected by all 15 genes.The entire architecture of the inferred network model among 15 genes and oil productivity is shown in p < 0.01).To clarify the regulation of oil productivity, the strong relationships were extracted from the inferred model. The edges, which have high weight absolute values (>0.3), are displayed in To infer the regulatory system of the TAG biosynthesis pathways, we executed our modeling method to infer the regulatory network model among the nine classified groups. By the restriction of the SEM calculation, the 89 variables should be summarized to lower numbers of variables. We classified the 89 variables into nine groups by factor analysis, and these groups were considered to reflect subgroups of the regulatory system. Thus, the inference of a regulatory network among nine groups is suitable to reveal the regulatory system of TAG biosynthesis in its entirety.Before the application of the SEM calculation, the pseudo-data of each group should be calculated. We calculated the average values from the expression profiles of the classified genes into each group as pseudo-data. 
As the initial model, we assigned Group 3 as an objective variable and the other groups as effective variables, since the oil productivity variable was included in Group 3. Directed edges were connected to Group 3 from the other groups in the initial model.The inferred network model among nine groups is displayed in p < 0.01).The regulatory system around Group 3 extracted from the inferred model is displayed in In this study, we used factor analysis to classify TAG biosynthesis genes according to their regulatory system. Group 1 includes many genes related to mitochondria or peroxisomes. Group 2 has many primary metabolism pathway-related genes, and the members of Group 3 include some genes known as regulatory factors of TAG biosynthesis. Almost all of the genes classified into Groups 4 and 5 were related to mitochondria. Group 6 included genes related to TAG biosynthesis. The cholesterol esterase TGL1 was the sole component of Group 7, Group 8 included two triosephosphate isomerase genes, and Group 9 included two pyruvate decarboxylases and one diacylglycerol acyltransferase. From the tendencies of the group members, the groups were divided into three types: the groups that are not related to or reduced TAG biosynthesis , the groups that are related to or induced TAG biosynthesis , and the groups that could not be characterized by their features .The genes classified into Group 3 included known regulatory factors for TAG biosynthesis. The induction of acyl-CoA is important for TAG biosynthesis . Among tL. starkeyi, the expression of ACL1 and DGA2 involved in the acyl-CoA synthesis pathway and the Kennedy pathway, respectively, in the mutants with greatly elevated lipid production was increased compared to that in the wild-type strain [L. starkeyi led to an increase in oil productivity . That is, ACL1 and DGA2 are considered to be regulatory factors and play a vital role in TAG biosynthesis in L. 
starkeyi, and our inferred network model fit well with this knowledge. Even though ACC1 is related to the conversion from acetyl-CoA to malonyl-CoA, which is important for acyl-CoA synthesis, ACC1 negatively regulated oil productivity in this model. This feature means that the excessive induction of ACC1 may have a negative effect on TAG biosynthesis. Silverman et al. examined the effect of overexpression of genes involved in TAG synthesis in the oil productivity and reported that the most drastic increase in oil productivity was the overexpression of DGA2 in oleaginous yeast Y. lipolytica [Y. lipolytica [The extracted regulations of oil productivity, shown in e strain ,31. Furtpolytica . Liu et polytica . These rTo clarify the whole regulatory system of TAG biosynthesis, we inferred network models between the groups classified by factor analysis. Since biosynthetic systems generally have many components that are related by complicated structures, revealing the regulatory system may be useful to detect the target factors for system control. The whole regulatory system in TAG biosynthesis is shown in The inferred models in this study effectively reflected the known regulatory relationships for TAG biosynthesis. Furthermore, our inferred network model detected some special features in the regulatory viewpoints, such as a negative feedback system for oil productivity and non-related genes for TAG biosynthesis regulation. This computational modeling method will help us to reveal the mechanisms underlying measured biological data."}
+{"text": "Historical and updated information provided by time-course data collected during an entire treatment period proves to be more useful than information provided by single-point data. Accurate predictions made using time-course data on multiple biomarkers that indicate a patient\u2019s response to therapy contribute positively to the decision-making process associated with designing effective treatment programs for various diseases. Therefore, the development of prediction methods incorporating time-course data on multiple markers is necessary.We proposed new methods that may be used for prediction and gene selection via time-course gene expression profiles. Our prediction method consolidated multiple probabilities calculated using gene expression profiles collected over a series of time points to predict therapy response. Using two data sets collected from patients with hepatitis C virus (HCV) infection and multiple sclerosis (MS), we performed numerical experiments that predicted response to therapy and evaluated their accuracies. Our methods were more accurate than conventional methods and successfully selected genes, the functions of which were associated with the pathology of HCV infection and MS.The proposed method accurately predicted response to therapy using data at multiple time points. It showed higher accuracies at early time points compared to those of conventional methods. Furthermore, this method successfully selected genes that were directly associated with diseases.The online version contains supplementary material available at 10.1186/s12859-021-04052-4. Predicting a patient\u2019s response to therapy using various types of information is essential for designing systematic treatments , 2. HepaTime-course gene expression profiling has advanced rapidly on account of time-course gene expression profiles collected from the same patient being more beneficial than those collected from the patient at a single time point , 11. 
MetIn predicting a long-term therapeutic response, prediction accuracy may be improved by incorporating patient information, which is repeatedly observed for a marker over time \u201320. RizoHere, we propose a new prediction model and a gene selection method using time-course gene expression profiles. This method is based on the hypothesis that improving the accuracy of predictions requires more information obtained from gene markers at multiple time points. Therefore, our prediction model was designed to consolidate information from multiple time points, and our gene selection method was designed to identify gene subsets as markers that predict therapeutic responses more accurately with increasing time points. Time-course microarray datasets collected from HCV and MS patients were used to evaluate the proposed method. In this evaluation, three types of experiments were performed as follows: (1) comparison with our proposed method and the conventional method; (2) hypothesis verification; and (3) function analysis of the gene subset selected by the proposed method.Our proposed method was designed to predict therapeutic response using multiple time-point data that would expectedly yield a higher level of accuracy than a prediction based on single-point data. Our method is termed \u2018the consolidating probabilities of multiple time points (CPMTP) method\u2019. CPMTP consists of prediction procedure (CPMTPp) and gene selection procedure (CPMTPg). Section\u00a02.1 introduced the theory of CPMTPp and CPMTPg. Section\u00a02.2 described the numerical experiments.This section described CPMTPp and CPMTPg in the Sections\u00a02.2.1 and 2.1.2, respectively. Briefly, CPMTPp is the procedure for predicting therapy response using a model. CPMTPg is the procedure for selecting genes.The CPMTPp design was based on the hypothesis that predictive accuracy is improved by consolidating information on the states of a patient at multiple time points. 
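A simple way to consolidate per-time-point response probabilities, in the spirit of CPMTP's hypothesis that each additional time point should sharpen the prediction, is sequential Bayesian updating of the odds. This sketch assumes conditionally independent evidence across time points and is not the authors' exact recursion:

```python
def consolidate(probs, prior=0.5):
    """Consolidate per-time-point response probabilities into one
    posterior probability by multiplying odds: each p_t acts as a
    likelihood-ratio update p_t / (1 - p_t) of the running odds.
    A simplified illustration of probability consolidation, assuming
    conditional independence across time points."""
    odds = prior / (1.0 - prior)
    for p in probs:
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against exact 0 or 1
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)
```

Under this rule, two uninformative time points (p = 0.5) leave the prior unchanged, while two concordant informative ones (p = 0.8 each) push the consolidated probability above either individual value.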
The general problem setting for the prediction in which the response at future time point \u201cmentclass2pt{minimmentclass2pt{minimmentclass2pt{minimCPMTPp was used to calculate one probability of therapeutic response using time-course microarray data Fig.\u00a0a. FirstlSimilar to the Bayesian model , 22 and mentclass2pt{minimIn CPMTPp, the Bayesian theory was used to consolidate probabilities based on time-course data . The pror-1\u201d Eq.\u00a0. As the vely Eq.\u00a0. From thCPMTPg were used to select the gene subset of CPMTPp best suited for accurate prediction using time-course microarray data. CPMTPg was used to decide the gene subset by optimizing the fitness function based on the probability \u201centclass1pt{minimaStep 1: Elastic net with stability selection eliminated genes with low impact on therapy responses, yielding a gene pool.Step 2: The gene subset was selected from the gene pool via an optimization method.Here, the gene expression profiles were composed as a data matrix (\u201cStep 1: Screening stepGene selection based on microarray data frequently suffers from the \u201cument}n) . Gene seument}n) , Holm meument}n) . HoweverThe sparse modeling solved the \u201cer focus , 15, 26.er focus . ElasticElastic net was used to select genes with non-zero weights, Stability selection was used to reduce the effect of lambda on feature selection , 29. 
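The screening step above (an elastic net combined with stability selection) can be sketched generically: the subsampling-and-frequency logic below is standard stability selection, with the sparse selector passed in as a function (an elastic net in the paper; the toy correlation-based selector in the usage example is ours):

```python
import numpy as np

def stability_selection(X, y, select, n_sub=100, frac=0.5, thresh=0.6, rng=None):
    """Generic stability selection: run a sparse selector `select(X, y)`
    (e.g. an elastic net returning indices of non-zero-weight genes) on
    many random subsamples, and keep the features chosen in at least a
    `thresh` fraction of runs. This reduces sensitivity to the
    regularization parameter lambda, as described for Step 1 of CPMTPg."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        freq[select(X[idx], y[idx])] += 1
    freq /= n_sub
    return np.flatnonzero(freq >= thresh), freq
```

In practice `select` would wrap an elastic-net fit with a fixed (or sampled) lambda; the selection frequencies then define the gene pool passed to Step 2.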
Step 2 (selecting a gene subset): in step 2 of CPMTPg, the gene subset for the CPMTPp model is selected from the gene pool. This step was performed as follows: (i) a gene list is generated from the gene pool; (ii) the subjects in the gene expression data are separated into two blocks; (iii) the CPMTPp model is constructed based on one block of data using the gene list; (iv) the accuracy rate of the model is calculated on the other block; (v) steps (ii)-(iv) are repeated for each gene list; (vi) the gene list showing the best accuracy rate is determined as the gene subset of CPMTPp. The fitness function of the optimization method was designed via the probability consolidated at multiple time points: it rewards patients whose actual therapy response equals the predicted one and penalizes patients whose actual response does not equal the predicted one. Note that this step used ridge regression as the optimization method for the weights. Three experiments were performed: (1) comparison of CPMTP (CPMTPp + CPMTPg) with a conventional method, (2) verification of our hypothesis, and (3) analysis of the gene subset selected by CPMTPg. This section describes the material, preprocessing, evaluation method, parameters, and implementation. Two sets of time-course microarray data were used for this evaluation. One dataset was collected from HCV patients treated with the antiviral therapies peginterferon and ribavirin (HCV dataset); the other was collected from MS patients. The details of these datasets are shown in Table . Three steps were performed to preprocess the gene expression data. First, several probes were removed from the two datasets: where probes had duplicate gene symbols in one dataset, one probe was selected by comparing median gene expression levels, and the other probes were removed. 
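The two-block evaluation loop of step 2 can be sketched as follows; a logistic-regression classifier stands in for the CPMTPp model, and the function name and split are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_gene_subset(X, y, gene_lists, seed=0):
    """Step-2 sketch: for each candidate gene list, train on one block of
    subjects, score accuracy on the other block, and keep the best list.
    LogisticRegression is an illustrative stand-in for the CPMTPp model."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    a, b = idx[: len(y) // 2], idx[len(y) // 2:]   # two subject blocks
    best_list, best_acc = None, -1.0
    for genes in gene_lists:
        clf = LogisticRegression(max_iter=1000).fit(X[np.ix_(a, genes)], y[a])
        acc = clf.score(X[np.ix_(b, genes)], y[b])
        if acc > best_acc:
            best_list, best_acc = genes, acc
    return best_list, best_acc
```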
Probes with a gene symbol indicating a non-coding region, or with no gene symbol, were also removed. Subsequently, log2 transformation and quantile normalization were performed on each dataset. The MLR model was used as the prediction model for the conventional method; the features of the MLR model were based on the difference in gene expression profiles between time points. Next, maSigPro, which is frequently used for time-course microarray data analysis, was used for gene selection in the conventional method. To compare CPMTPp + CPMTPg with MLR + maSigPro as the conventional method, the area under the curve (AUC) and the accuracy were calculated using the HCV and MS datasets. For this, k-fold cross-validation was performed; this method splits the patients in the dataset into k blocks. The receiver operating characteristic (ROC) curves for each CP, calculated from the probabilities of CPMTPp + CPMTPg and MLR + maSigPro obtained via k-fold cross-validation, are depicted, and the AUCs were calculated using these ROC curves. The differences between the AUCs of CPMTPp + CPMTPg and MLR + maSigPro at each CP were compared using the DeLong test at the chosen significance level. To compare CPMTP with previous studies based on therapy responses estimated via k-fold cross-validation, the accuracies of CPMTPp + CPMTPg and MLR + maSigPro were calculated for each CP and each evaluation block in the k-fold cross-validation. Based on the mean, maximum, and minimum values of these accuracies, CPMTPp + CPMTPg and MLR + maSigPro were compared. In CPMTPp, it was assumed that the accuracy of the prediction model improves as time points increase. The accuracies of the CPMTPp and MLR models were compared to verify this hypothesis; the gene selection method for both models was CPMTPg. 
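The preprocessing described above (log2 transformation followed by quantile normalization) can be sketched in a few lines. The paper used limma's implementation; this minimal version ignores tie handling and adds a pseudocount of our choosing:

```python
import numpy as np

def quantile_normalize(expr):
    """Quantile-normalize a genes x samples matrix so that every sample
    (column) shares the same distribution of values."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)   # per-column ranks
    mean_sorted = np.sort(expr, axis=0).mean(axis=1)       # reference distribution
    return mean_sorted[ranks]

def preprocess(raw_expr):
    """log2-transform (with a pseudocount) then quantile-normalize."""
    return quantile_normalize(np.log2(raw_expr + 1.0))
```

After normalization, every column contains exactly the same set of values, only in the sample's original rank order.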
The mean, maximum, and minimum values of the accuracies of CPMTPp + CPMTPg and MLR + CPMTPg were calculated via k-fold cross-validation using the HCV and MS datasets. The gene subset selected by CPMTPg was analyzed by gene ontology to investigate the function of the genes in biological processes; DAVID was used for this analysis. The therapeutic responses of the patients were decided at the final time point of the datasets; the final time point in the HCV dataset was 28 days. A genetic algorithm (GA) was utilized as the optimization method. GA is a heuristic optimization method that has been frequently utilized for gene selection from microarray data. The parameters of the numeric experiment were as follows: k = 3 in the k-fold cross-validation, and the rate of patients whose therapy responses were 'sensitive' or 'not sensitive' was the same for all blocks (Table ). The implementation language was R (ver. 3.6.0). Quantile normalization, elastic net, and maSigPro were performed with the limma (ver. 3.40.6), glmnet (ver. 2.0-18), and maSigPro (ver. 1.56.0) packages, respectively. Stability selection and the GA were implemented by the authors. The source code used in this paper will be made available upon request. The pseudo code of CPMTP is provided in Additional file 8. A threefold cross-validation was performed for each of the HCV and MS datasets. For MLR + maSigPro and CPMTPp + CPMTPg, the AUCs, as well as the mean, maximum, and minimum values of the accuracies, were calculated based on the results of cross-validation. The mean, maximum, and minimum values of the accuracies for MLR + CPMTPg were also calculated, and the genes selected via CPMTPg were analyzed. The p values of the DeLong test were 0.03; the results were compared with those calculated using the conventional method (MLR + maSigPro) via threefold cross-validation. 
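The evaluation protocol (stratified threefold cross-validation keeping the rate of 'sensitive' responders equal across blocks, with AUC per fold) can be sketched as follows; the logistic-regression classifier is an illustrative stand-in, not the paper's model:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def stratified_cv_auc(X, y, k=3, seed=0):
    """Stratified k-fold (k = 3 in the paper) so that each block keeps the
    same class ratio; returns mean/min/max AUC over the folds."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    aucs = []
    for train, test in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs)), float(np.min(aucs)), float(np.max(aucs))
```

Reporting the minimum and maximum alongside the mean mirrors the whisker plots used in the paper's accuracy comparisons.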
The results of both the AUCs (Fig. ) and the accuracies were compared. The AUCs of CPMTPp + CPMTPg were higher than those of MLR + maSigPro for all CPs in both the HCV and MS datasets (Figs. , 4). In addition, the accuracies of CPMTPp + CPMTPg were confirmed on artificial data; the results are shown in an additional file. CPMTPp was designed based on the hypothesis that more accurate prediction depends on data from more time points; however, the results of the comparison between MLR and CPMTPp (Fig. ) did not support this. The accuracies of MLR, which did not consolidate the probabilities at multiple time points, are shown for the HCV and MS datasets (Fig. ). The above results indicated that prediction using more time points (CPMTPp) did not contribute to improved accuracy. However, MLR, which did not consolidate the probabilities of multiple time points, used the same subset of genes as CPMTPp, and its accuracy tended to decrease or fluctuate over time points. This trend was not changed by the gene selection method, whether maSigPro or CPMTPg. CPMTP was based on the hypothesis that more information related to time points provides a more accurate therapeutic response prediction. To enable CPMTPp to incorporate more information from multiple time points, an overall probability for deciding a therapy response was estimated by consolidating the probabilities calculated at each time point using Bayes' theorem. CPMTPg selected the gene subset for use in the CPMTPp model via the optimization method, whose fitness function was based on the consolidated probability. CPMTP was evaluated using time-course gene expression profiles from HCV and MS patients in terms of prediction accuracy, validation of the hypothesis, and gene function. 
These results suggested that CPMTP (CPMTPp + CPMTPg) predicted the response to therapy more accurately at all observed points than the conventional method. However, contrary to our hypothesis, the prediction accuracy of CPMTPp was not improved but only retained as time points increased. Further, the gene subset selected by CPMTPg may be related to HCV and MS, according to previous studies investigating the key GO terms associated with the gene subsets. The above findings indicated that CPMTP might enhance long-term therapeutic procedures by accurately predicting response to therapy at multiple time points. Moreover, gene subsets identified by CPMTP may be useful as gene markers of disease progression. Thus, CPMTP may not only resolve difficulties associated with predicting response to therapy in HCV and MS patients but may also apply to the resolution of other clinical issues of a similar nature. Additional file 1: Figure S1. The pseudo-code of CPMTPp; this code predicts the therapy response of a patient. Additional file 2: Figure S2. The pseudo-code of step 1 of CPMTPg; this code creates a gene pool. Additional file 3: Figure S3. The pseudo-code of step 2 of CPMTPg; this code selects the gene subset from the gene pool. Additional file 4: Figure S4. Results of CPMTPp + CPMTPg using artificial data. Artificial gene expression data were created; the subjects were 20 sensitive and 20 not sensitive responders. Gene expression levels of 'Gene1', 'Gene2', and 'Gene3' were created by adding noise following a normal distribution to each baseline. The baseline of 'Gene1' had different rising/falling trends of gene expression levels between sensitive and not sensitive responders at all time points, while the baselines of 'Gene2' and 'Gene3' had them at only some time points. 
Gene expression levels of the other genes were created from a uniform distribution. To evaluate CPMTPp + CPMTPg, threefold cross-validation was performed using these artificial data. As a result, CPMTPg selected 'Gene1' from all genes as the gene subset in every validation, and the mean accuracies were 92.8% (CP1). Additional file 5: Figure S5. Accuracies of MLR + maSigPro versus CPMTPp + maSigPro. The bars, top whiskers, and bottom whiskers represent the mean, maximum, and minimum values of the accuracies over threefold cross-validation, respectively. a HCV dataset. b MS dataset."}
+{"text": "Prostaglandin analogues (PG), beta-blockers (BB) or their combination (PG+BB) are used primarily to reduce the intraocular pressure (IOP) pathologically associated with glaucoma. Since fibrosis of the trabecular meshwork (TM) is a major aetiological factor in glaucoma, we studied the effect of these drugs on fibrosis-associated gene expression in the TM of primary glaucoma patients. In the present study, TM and iris of primary open-angle (n = 32) and angle-closure (n = 37) glaucoma patients were obtained surgically during trabeculectomy and categorized based on the type of IOP-lowering medication used: PG, BB or PG+BB. mRNA expression of pro-fibrotic and anti-fibrotic genes was quantified using qPCR in these tissues. The expression levels of pro-fibrotic genes were significantly lower in the PG+BB group as compared to the other groups. These observations and the underlying signalling were validated in vitro in human TM cells, which also showed reduced fibrotic gene and protein expression levels following PG+BB treatment. In conclusion, the PG+BB combination, rather than either drug alone, renders a reduced fibrotic status in the TM. This further suggests that IOP-lowering medications, in combination, also modulate fibrosis-associated molecular changes in the TM, which may be beneficial for maintaining aqueous out-flow mechanisms over the clinical treatment duration. Glaucoma is the second leading cause of blindness worldwide and, if left untreated, can cause an irreversible loss of vision. The current management of glaucoma focuses on controlling the increase in IOP by the use of IOP-lowering medications such as prostaglandin analogues (PG), beta-blockers (BB), carbonic anhydrase inhibitors (CAI) and alpha agonists (AA). 
These medications function by decreasing the production of aqueous humour and/or increasing out-flow through alternative pathways. 2. This cross-sectional study was approved by the institutional ethics committee and was conducted as per the guidelines stated by the Indian Council of Medical Research and the Declaration of Helsinki. Patient information and samples were obtained following informed written consent. 2.1. Patients with a diagnosis of POAG or PACG who underwent trabeculectomy in a tertiary eye care centre were included in the study. All these patients underwent a complete clinical evaluation, that is, visual acuity, refraction, Goldmann applanation tonometry to determine IOP, gonioscopy to determine whether the angles were open or closed, optic nerve head evaluation, perimetry with a Humphrey Field Analyser (HFA) and/or imaging of the optic nerve head with optical coherence tomography (OCT) to establish the glaucomatous damage. Patients were diagnosed with POAG when the IOP was high with open angles on gonioscopy and glaucomatous changes on clinical optic nerve evaluation; the changes in the nerve were substantiated with corresponding perimetric changes and/or RNFL thinning on OCT. A diagnosis of primary angle-closure glaucoma (PACG) was considered for patients who had narrow angles with signs of occlusion such as patchy pigmentation of the trabecular meshwork, pigments on the anterior lenticular surface, sphincter changes or synechial angle closure. These angle changes, along with optic nerve damage substantiated by corresponding changes in perimetry and/or OCT, were considered as PACG. All PACG patients underwent laser peripheral iridotomy followed by medical treatment. 
These glaucoma patients, whether POAG or PACG, were subjected to glaucoma filtration surgery, that is, trabeculectomy, when (a) the IOP could not be controlled with medical treatment, (b) the patient was intolerant to medical treatment or (c) cataract surgery was indicated and the patient was already on more than one IOP-lowering medication for that eye. 2.1.1. Patients with POAG or PACG undergoing trabeculectomy with or without cataract surgery were included, provided they had been on IOP-lowering medications that included PG and/or BB for more than a week. 2.1.2. Excluded were (a) patients with secondary glaucoma or a type of glaucoma other than POAG and PACG, (b) TM tissue with poor RNA yield, (c) patients aged less than 18 years or more than 85 years, (d) patients with serology positive for HIV, HBS and HCV, (e) patients who were operated on without any IOP-lowering medication or with its use for less than a week and (f) patients using IOP-lowering medications other than PG or BB, for example alpha agonists, carbonic anhydrase inhibitors or cholinergic agonists only. 2.1.3. Trabecular meshwork was obtained while performing trabecular meshwork block excision, and iris was collected while performing iridectomy during the trabeculectomy procedure. The tissue samples were collected in sterile micro-centrifuge tubes containing Ringer Lactate solution and stored in a bio-repository at −80°C until further use. 2.1.4. The samples were divided into subgroups based on the type of primary glaucoma or the type of IOP-lowering medication. Based on primary glaucoma, the samples were divided into those from POAG (n = 32) or PACG (n = 37) patients. Based on the type of IOP-lowering medication, the samples were divided into those on either prostaglandin analogues (PG), beta-blockers (BB) or a combination of prostaglandin analogues and beta-blockers (PG+BB). Prostaglandin analogues used by the study cohort included bimatoprost, travoprost and latanoprost. All three PGs are structural analogues of prostaglandin F2α and bring about their action by binding to the prostaglandin F receptor. The vast majority of the study subjects on PG were using bimatoprost, and the study subjects using beta-blockers were all on timolol. 2.2. Total RNA was isolated from trabecular meshwork tissue and iris tissue using the TRIzol method according to the manufacturer's protocol (Invitrogen). The concentration and purity of the extracted RNA were assessed, and samples that had at least 1000 ng of RNA with an optical density 260/280 ratio of >1.6 were selected for further analysis. RNA was converted into cDNA using the Bio-Rad iSCRIPT cDNA conversion kit (Bio-Rad). Real-time PCR was performed using SYBR green reagent (Kapa Biosystems Inc). The quantitative real-time PCR cycle included pre-incubation at 95°C for 3 minutes and 40 amplification cycles of 95°C for 10 seconds and 58°C for 30 seconds, using a CFX Connect real-time PCR detection system (Bio-Rad). mRNA expression levels of TGFβ1 (transforming growth factor beta 1), TGFβ2 (transforming growth factor beta 2), TGFβR2 (transforming growth factor beta receptor 2), CTGF (connective tissue growth factor), FN (fibronectin), LOXL2 (lysyl oxidase-like 2), WNT3A (wingless-type family member 3A), DECORIN, HEVIN and ADBR2 (β2-adrenergic receptor) were determined by normalizing the expression of these genes to the housekeeping gene β-actin in the respective samples. A normalized expression value of zero, indicating no detectable expression of the gene of interest in a particular sample, was excluded from the analysis. The pro-fibrotic genes studied were TGFβ1, TGFβ2, TGFβR2, CTGF, FN, LOXL2 and WNT3A, and the anti-fibrotic genes were DECORIN and HEVIN. 
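The text reports qPCR expression normalized to the β-actin housekeeping gene without giving the formula; a common convention for this is the 2^-ΔCt method, sketched below as an assumption on our part:

```python
def normalized_expression(ct_gene, ct_actin):
    """Relative mRNA abundance of a gene of interest normalized to the
    beta-actin housekeeping gene via the 2^-dCt convention (our assumption;
    the paper states normalization to beta-actin without the formula)."""
    delta_ct = ct_gene - ct_actin
    return 2.0 ** (-delta_ct)
```

A gene whose Ct equals that of β-actin gets a normalized value of 1; each additional cycle relative to β-actin halves the value.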
Primer sequence details are provided in Table 2. 2.3. Human trabecular meshwork (hTM) cells were cultured in vitro in DMEM containing 10% FBS and maintained at 37°C with 5% CO2. hTM cells (1 × 10^6 cells/well of a 6-well plate) were treated either with a prostaglandin analogue, a beta-blocker or the combination of both for 24 hours at a dilution of 1:100. Since bimatoprost and timolol were the most commonly used PG and BB in the study cohort, these were chosen for the in vitro validation experiments. A dilution of 1:100 of PG and/or BB was chosen to study their effects on TM because it is known that less than 5% of a drug instilled as an eye drop penetrates the ocular surface and reaches intraocular tissues, including the anterior chamber. 2.4. hTM cells treated as indicated above were lysed using RIPA buffer for 30 minutes. The clarified whole-cell protein lysates (WCL) were obtained following centrifugation. Proteins in the WCL (20 μg per sample) were separated on a 10% SDS-PAGE (sodium dodecyl sulphate-polyacrylamide gel electrophoresis) gel. The proteins were then transferred onto a PVDF (polyvinylidene difluoride) membrane, followed by blocking at room temperature for an hour using 5% fat-free milk diluted in TBST. Primary antibodies against FN (Abcam plc), CTGF (Abcam plc), decorin (Abcam plc), total SMAD3 (Abcam plc) and phosphorylated SMAD3 (Abgenex Pvt. Ltd) were used at a dilution of 1:1000, except for GAPDH (Abcam plc), which was used at 1:5000, in 5% fat-free milk in TBST. The membranes were incubated with the respective primary antibodies overnight at 4°C, washed using TBST and incubated with the relevant secondary antibodies at 1:5000 dilution for an hour at room temperature. The membranes were then washed and incubated with Clarity ECL Western blotting substrate (Bio-Rad), and the resulting chemiluminescence was imaged using ImageQuant. 
Densitometry analysis was done using ImageJ software (version 6). 2.5. The levels of TGFβ1 in hTM cell culture supernatants were measured by bead-based ELISA (CBA Human Soluble Protein Flex Set System, BD Biosciences) using a flow cytometer. The beads and fluorescent signal intensities were acquired and recorded using BD FACSDiva software (BD Biosciences). Standards were used to determine the absolute concentration of TGFβ1 in the supernatants, and the calculation was performed using FCAP array version 3.0 (BD Biosciences). 2.6. Statistical significance between the groups was determined using the unpaired t test or the Mann-Whitney test, based on the distribution of the data. The Shapiro-Wilk normality test was used to determine the distribution type of the data. P < .05 was considered statistically significant. GraphPad Prism version 6 was used to perform the statistical analysis. 3. A total of 32 POAG and 37 PACG patients were included in the study; the clinical characteristics are described in Table . The expression of the pro-fibrotic genes TGFβ1, TGFβ2, CTGF, FN, LOXL2 and WNT3A was observed to be lower in the PG+BB group as compared to either PG or BB alone (Figure ), with significant changes observed in the CTGF, LOXL2 and FN genes. The expression levels in iris tissue were not expected to be affected by the disease, and no distinct expression pattern of the fibrosis-associated genes studied was observed in the iris of primary glaucoma patients (data not shown). The TM/iris expression ratios of the genes studied showed normalized expression patterns similar to those observed in TM tissue (Figure ). Similarly, in vitro, the expression of TGFβ1*, TGFβ2, CTGF*, FN*, LOXL2 and WNT3a* was lower in the group of hTM cells treated with the PG+BB combination rather than with either PG or BB alone (*P < .05). 
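The test-selection logic described above (Shapiro-Wilk to choose between the unpaired t test and the Mann-Whitney test) can be sketched with SciPy; the normality threshold applied to the Shapiro-Wilk p-value is our assumption:

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Sketch of the testing scheme: if both groups pass Shapiro-Wilk
    normality, use an unpaired t test; otherwise fall back to the
    Mann-Whitney U test. Returns the test name and its p-value."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
```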
The results also demonstrated that PG induced the expression of TGFβ1*, TGFβ2*, TGFβR2*, CTGF*, FN*, LOXL2, WNT3a*, DECORIN and HEVIN compared to untreated controls, as shown in Figure (*P < .05). On the contrary, BB did not induce any marked increase in the expression of the fibrosis-associated genes studied in the TM of primary glaucoma patients or hTM cells. The current study demonstrates that PG and BB differentially affect the expression of fibrosis-associated factors in human TM. Further, the differential expression of fibrosis-associated factors under PG+BB indicates a favourable effect, reducing fibrosis or pore-closure factors in the TM tissue of POAG and PACG patients. The effect was also substantiated by in vitro experiments on cultured hTM cells, with the PG+BB combination providing a favourable regulation of fibrosis-associated genes. In vitro validation clearly showed that even a 24-hour exposure of hTM cells to the drugs altered the pro-fibrotic protein levels and the upstream signalling factors. An aberrant TGFβ response is known to contribute to fibrosis in general. The role of beta-adrenergic signalling in driving TGFβ expression and fibrosis-associated changes in cardiac tissues has been reported. The findings from this study provide a compelling insight into the possible effects of PG and/or BB on the TM. The limitations of the study are the small sample size and the small number of genes tested for their expression in the patient TM. However, the amount of TM tissue obtained during surgery is minute, precluding the testing of large numbers of genes. In addition, the study did not have a healthy TM control group (TM tissues without a known history of exposure to the IOP-lowering medications studied), since such invasive samples cannot be obtained from healthy eyes. Therefore, the findings observed in the patients were further validated in hTM cells in vitro. 
Longitudinal studies, including imaging of the TM and/or ultrastructural studies using electron microscopy, would provide structural evidence for the observations made. In addition, since our data provide evidence of modulation of gene expression patterns at the TM, future studies using RNA-seq or proteomic methods are warranted. In addition to the known improvement in IOP-lowering efficiency by the combination of PG and BB, our data provide additional molecular evidence that they also exert fibrosis-reducing effects in the TM. The authors have no financial disclosures or conflicts of interest to declare pertaining to the study. ST contributed towards data acquisition, data interpretation and manuscript preparation. PM contributed towards data acquisition, data interpretation and manuscript preparation. APN contributed towards research design, data acquisition and analysis, and AG contributed towards research design and manuscript preparation. RKD contributed to data analysis and manuscript preparation. ASG contributed to research design, data interpretation and manuscript preparation. SS contributed towards research design, data analysis, data interpretation and manuscript preparation. All authors read and approved the final manuscript. Supporting information: Figures S1-S6 and Table S1 are available as additional data files."}
+{"text": "In the time series classification domain, shapelets are subsequences that are discriminative of a certain class. It has been shown that classifiers are able to achieve state-of-the-art results by taking the distances from the input time series to different discriminative shapelets as the input. Additionally, these shapelets can be visualized and thus possess an interpretable characteristic, making them appealing in critical domains, where longitudinal data are ubiquitous. In this study, a new paradigm for shapelet discovery is proposed, which is based on evolutionary computation. The advantages of the proposed approach are that: (i) it is gradient-free, which could allow escaping from local optima more easily and supports non-differentiable objectives; (ii) no brute-force search is required, making the algorithm scalable; (iii) the total amount of shapelets and the length of each of these shapelets are evolved jointly with the shapelets themselves, alleviating the need to specify this beforehand; (iv) entire sets are evaluated at once as opposed to single shapelets, which results in smaller final sets with fewer similar shapelets that result in similar predictive performances; and (v) the discovered shapelets do not need to be a subsequence of the input time series. We present the results of the experiments, which validate the enumerated advantages. Due to the rise of the Internet-of-Things (IoT), a mass adoption of sensors in all domains, including critical domains such as health care, can be noted. These sensors produce data of a longitudinal form, i.e., time series. Time series differ from classical tabular data, since a temporal dependency is present where each value in the time series correlates with its neighboring values. One important task that emerges from this type of data is the classification of time series in their entirety. 
A model able to solve such a task can be applied in a wide variety of applications, such as distinguishing between normal brain activity and epileptic activity. Shapelet discovery was initially proposed by Ye and Keogh. Unfortunately, the required brute-force search is expensive in N (the number of time series) and M (the length of the smallest time series in the dataset). This complexity was improved two years later, when Mueen et al. proposed a faster algorithm, and subsequently by Fast Shapelets (fs), which finds a suboptimal shapelet using symbolic aggregate approximation (sax) representations, although at the cost of solution quality. Another approach, the shapelet transform (st), performs only a single pass through the time series dataset, maintains an ordered list of shapelet candidates ranked by a metric, and finally takes the top-k from this list in order to construct features. While the algorithm only performs a single pass, the computational complexity still remains high. All the aforementioned techniques search for a single shapelet that optimizes a certain metric, such as information gain. Often, one shapelet is not enough to achieve good predictive performance, especially for multi-class classification problems; therefore, shapelet discovery is applied in a recursive fashion in order to construct a decision tree. Lines et al. proposed the single-pass st technique mentioned above. Grabocka et al. proposed Learning Time-series Shapelets (lts). The technique is competitive with st, while not requiring a brute-force search, making it tractable for larger datasets. Unfortunately, lts requires the user to specify the number of shapelets and the length of each of these shapelets, which can result in a rather time-intensive hyper-parameter tuning process in order to achieve a good predictive performance. Three extensions of lts, which improve the computational runtime of the algorithm, have been proposed in the subsequent years. Unfortunately, in order to achieve these speedups, predictive performance had to be sacrificed. A first extension is called Scalable Discovery (sd); it achieves a speedup over lts on almost every tested dataset at the cost of predictive performance. 
Second, in 2015, Ultra-Fast Shapelets (ufs) was introduced; it is similar in spirit to sd but sacrifices less of the predictive performance. The final and most recent extension is called the Fused LAsso Generalized eigenvector method (flag); it is comparable in speed to sd, while being only slightly worse than lts in terms of predictive performance. Several enhancements to shapelet discovery have recently been investigated as well, for example by Wang et al. Most of the prior work regarding shapelet discovery was performed using univariate data. However, many real-world datasets are multi-variate. Extending shapelet discovery algorithms to deal with multi-variate data has therefore been gaining increasing interest in the time series analysis domain; most of the existing works extend the gradient-based framework of lts. This paper is the first to investigate the feasibility of an evolutionary algorithm for discovering a set of shapelets from a collection of labeled time series. The aim of the proposed algorithm, GENetic DIscovery of Shapelets (gendis), is to achieve state-of-the-art predictive performance similar to that of the best-performing algorithm, st, with a smaller number of shapelets, while having a low computational complexity similar to that of lts. Shapelets are subsequences that are similar to subsequences of time series of certain classes, while being dissimilar to subsequences of time series of other classes. The output of a shapelet discovery algorithm is a collection of shapelets. Given an input matrix of time series, the distance for a (t, s) pair is defined as the minimum distance between s and any subsequence of t starting at some index i and having the same length as s; the distance matrix collects these values for every time series and every shapelet. A conceptual overview of a shapelet discovery algorithm is depicted in the corresponding figure. Once shapelets are found, they can be used to transform the time series into features that correspond to the distances from each time series to the shapelets in the set. These features can then be fed to a classifier. It should be noted that both the shapelet discovery and the classification component can be trained jointly, end-to-end. In gendis, these components are decoupled. However, in lts, the number of shapelets
Each technique has a probability equal to the configured crossover probability of being applied. We now present an overview of all hyper-parameters included in gendis: maximum shapelets per candidate (W), the maximum number of shapelets in a newly generated individual during initialization; population size (P), the total number of candidates that are evaluated and evolved in every iteration (default: 100); maximum number of generations (G), the maximum number of iterations the algorithm runs (default: 100); early stopping patience, the number of generations without fitness improvement before the algorithm halts; mutation probability, the probability that each mutation operator is applied to an individual; crossover probability, the probability that each crossover operator is applied to a pair of individuals; and maximum shapelet length (M). In the following subsections, we present the setup of different experiments and the corresponding results in order to highlight the advantages of gendis. In this section, we assess the efficiency of the introduced genetic operators by evaluating the fitness as a function of the number of generations using different sets of operators. It should be noted that our implementation easily allows configuring the number and type of operators used for each of the different steps in the genetic algorithm, allowing the user to tune these according to the dataset. We picked six datasets, with varying characteristics, to evaluate the fitness of different configurations. The chosen datasets and their corresponding properties are summarized in the corresponding table. We first compare the fitness of GENDIS using three different sets of initialization operators: (i) initializing the individuals with K-means; (ii) randomly initializing the shapelet sets; and (iii) using both initialization operations. Each configuration was tested using a small population, in order to reduce the required computational time, for 75 generations, as the impact of the initialization is highest in the earlier generations. All mutation and crossover operators were used.
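The two initialization operators compared in this experiment can be sketched as below; a minimal sketch with our own function names, a tiny inlined k-means, and assumed subsample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_subseries(X, length):
    """Sample one subseries of the given length from a random series in X."""
    series = X[rng.integers(len(X))]
    start = rng.integers(len(series) - length + 1)
    return series[start:start + length].copy()

def init_random(X, k, max_len):
    """Initialization 2: k candidates of random lengths sampled from the data."""
    return [random_subseries(X, rng.integers(3, max_len + 1)) for _ in range(k)]

def init_kmeans(X, k, length, n_iter=10):
    """Initialization 1: run k-means on random fixed-length subseries and
    use the k centroids as the initial shapelet set."""
    subs = np.stack([random_subseries(X, length) for _ in range(20 * k)])
    centroids = subs[rng.choice(len(subs), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each subseries to its nearest centroid, then recompute means
        labels = ((subs[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = subs[labels == j].mean(0)
    return centroids
```

The subsample size (20 candidates per centroid) and minimum length of 3 are placeholders, not values from the paper.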
We show the average fitness of all individuals in the population in the corresponding figure. We now compare the average fitness of all individuals in the population, as a function of the number of generations, when configuring GENDIS to use four different sets of crossover operators: (i) using solely point crossovers on the shapelet sets (Crossover 1); (ii) using solely point crossovers on individual shapelets (Crossover 2); (iii) using solely merge crossovers (Crossover 3); and (iv) using all three crossover operations. Each run had a population of 25 individuals and ran for 200 generations. All mutation and initialization operators were used. As the average fitness is rather similar in the earlier generations, we truncated the first 50 measurements to better highlight the differences. The results are presented in the corresponding figure. The same experiment was performed to assess the efficiency of the mutation operators. Four different configurations were used: (i) masking a random part of a shapelet (Mutation 1); (ii) removing a random shapelet from the set (Mutation 2); (iii) adding a shapelet, randomly sampled from the data, to the set (Mutation 3); and (iv) using all three mutation operations. The average fitness of the individuals, as a function of the number of generations, is depicted in the corresponding figure. A key advantage of gendis is that it evaluates entire sets of shapelets (a dependency between the shapelets is introduced), as opposed to evaluating single candidates independently and taking the top-k. The disadvantage of the latter approach is that similar shapelets will achieve similar values given a certain metric. When entire sets are evaluated, we can optimize both the quality metric for candidate sets and the size of each of these sets. This results in smaller sets with fewer similar shapelets. Moreover, interactions between shapelets can be explicitly taken into account.
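The crossover and mutation operators compared above can be sketched roughly as follows; these are our own minimal interpretations (for instance, masking is reduced to dropping a single point), not the exact implementation:

```python
import random

random.seed(0)

def crossover_sets(a, b):
    """Crossover 1: one-point crossover on two shapelet *sets*; each child
    mixes whole shapelets from both parents."""
    i, j = random.randint(1, len(a)), random.randint(1, len(b))
    return a[:i] + b[j:], b[:j] + a[i:]

def mutate_mask(shapelet, min_len=3):
    """Mutation 1: mask part of a shapelet (here: drop one random point)."""
    if len(shapelet) <= min_len:
        return list(shapelet)
    i = random.randrange(len(shapelet))
    return shapelet[:i] + shapelet[i + 1:]

def mutate_remove(shapelet_set):
    """Mutation 2: remove a random shapelet from the set."""
    if len(shapelet_set) <= 1:
        return list(shapelet_set)
    i = random.randrange(len(shapelet_set))
    return shapelet_set[:i] + shapelet_set[i + 1:]

def mutate_add(shapelet_set, data, max_len=10, min_len=3):
    """Mutation 3: add a shapelet randomly sampled from the data."""
    series = random.choice(data)
    length = random.randint(min_len, min(max_len, len(series)))
    start = random.randrange(len(series) - length + 1)
    return shapelet_set + [series[start:start + length]]
```

Because the operators act on sets, the set size itself evolves: crossovers and the add/remove mutations let the search shrink or grow an individual's shapelet count.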
To demonstrate these advantages, we compare gendis to st, which evaluates candidate shapelets individually as opposed to evaluating shapelet sets, on an artificial three-class dataset, depicted in the corresponding figure. The low discriminative power of the independent approach is confirmed by fitting a logistic regression model, with a tuned regularization type and strength, on the obtained distances. The classifier fitted on the distances extracted by the independent approach achieves a much lower accuracy than the classifier fitted on the distances extracted by gendis, a dependent approach. We extracted two shapelets with both techniques, which allowed us to visualize the different test samples in a two-dimensional transformed distance space, as shown in the corresponding figure. Another advantage of gendis is that the discovery of shapelets is not limited to subseries of the input data. For this experiment, we specifically search for only one shapelet instead of an entire set of shapelets. We can see that the exhaustive search approach is not able to find a subseries in any of these four time series that separates both classes, while the shapelet extracted by gendis ensures perfect separation. It is important to note here that discovering shapelets outside the data sacrifices interpretability for an increase in the predictive performance of the shapelets. As the operators that are used during the genetic algorithm are completely configurable for gendis, one can use only the first crossover operation (one- or two-point crossover on shapelet sets) to ensure all shapelets come from within the data. In order to evaluate the stability of our algorithm, we compare the extracted shapelets of two different runs on the ItalyPowerDemand dataset. We set the algorithm to evolve a large population for a large number of generations (500) in order to ensure convergence. Moreover, we limited the maximum number of extracted shapelets to 10, in order to keep the visualization clear.
We then calculated the similarity of the discovered shapelets between the two runs, using dynamic time warping (DTW). A heatmap of these similarities is shown in the corresponding figure. In this section, we compare our algorithm to the results from Bagnall et al. (www.timeseriesclassification.com). In that study, thirty-one different algorithms, including three shapelet discovery techniques, were compared on 85 datasets. The 85 datasets stem from different data sources and different domains, including electrocardiogram data from the medical domain and sensor data from the IoT domain. The three included shapelet techniques are Shapelet Transform (st), Learning Time-series Shapelets (lts), and Fast Shapelets (fs). We used the following configuration for gendis: a population size of 100, a maximum of 100 iterations, early stopping after 10 iterations, and the default crossover and mutation probabilities. The distance features produced by gendis were then fed to a heterogeneous ensemble consisting of a rotation forest, random forest, support vector machine with a linear kernel, support vector machine with a quadratic kernel, and k-nearest neighbor classifier. This ensemble follows the setup of st closely. This is in contrast with fs, which produces a decision tree, and lts, which learns a separating hyperplane (similar to logistic regression) jointly with the shapelets. This setup is also depicted schematically in the corresponding figure. For 84 of the 85 datasets, we conducted twelve measurements by concatenating the provided training and testing data and re-partitioning in a stratified manner, as done in the original study by Bagnall et al. Only the \u201cPhoneme\u201d dataset could not be included due to problems with downloading the data while executing this experiment. On every dataset, we used the same hyper-parameter configuration for gendis. This enables a direct comparison of the accuracy of gendis, relative to the mean of the hundred original measurements of the three other algorithms retrieved from the online repository, which can be found in the corresponding table, and in particular a comparison of the accuracies of gendis to those of st.
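The shapelet-to-shapelet comparison used in the stability analysis above can be sketched with a plain DTW implementation (a textbook O(nm) version with absolute-difference local cost, not the implementation used in the paper):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            # local cost plus the cheapest of the three predecessor cells
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def cross_run_similarity(run1, run2):
    """DTW distance between every shapelet of one run and every shapelet of
    the other; low values in each row indicate the runs found similar sets."""
    return np.array([[dtw(s1, s2) for s2 in run2] for s1 in run1])
```

DTW is used here (rather than Euclidean distance) because shapelets from different runs may have different lengths and slight temporal offsets.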
We recommend using different performance metrics, tailored to the specific use case. An example is using the area under the receiver operating characteristic curve (AUC) in combination with precision and recall for medical datasets. The mean accuracy over the twelve measurements of gendis is reported per dataset. For each dataset, we also performed an unpaired Student t-test. Overall, fs is inferior to the three other techniques, while st most often achieves the best performance, but at a very high computational complexity. The average number of shapelets extracted by gendis is reported in the final column; the number of shapelets extracted by st in the original study is far larger than that of gendis. In order to compare the algorithms across all datasets, a Friedman ranking test was applied. For gendis, both the results obtained with the ensemble and with the logistic regression classifier are used. From this, we can conclude that there is no statistical difference between st and gendis, while both are statistically better than fs and lts. In this study, an innovative technique, called gendis, was proposed to extract a collection of small subsequences, i.e., shapelets, from a time series dataset that are very informative in classifying each of the time series into categories.
gendis searches for this set of shapelets through evolutionary computation, a paradigm mostly unexplored within the domain of time series classification, which offers several benefits: evolutionary algorithms are gradient-free, allowing for an easy configuration of the optimization objective, which does not need to be differentiable; gendis evaluates entire sets of shapelets; only the maximum length of all shapelets has to be tuned, as opposed to the number of shapelets and the length of each shapelet; the runtime of the algorithm is easy to control; and shapelets can be discovered that do not need to be a subsequence of the input time series. Moreover, the proposed technique has a computational complexity that is multiple orders of magnitude smaller than that of st, while achieving competitive predictive performance with much smaller shapelet sets. We demonstrate these benefits through intuitive experiments, which show that techniques that evaluate single candidates can perform subpar on imbalanced datasets and that it is sometimes necessary to extract shapelets that are not subsequences of the input time series to achieve good separation. In addition, we compare the efficiency of the different genetic operators on six different datasets and assess the algorithm's stability by comparing the output of two different runs on the same dataset. Moreover, we conduct an extensive comparison on a large number of datasets to show that gendis is competitive with the current state of the art while having a much lower computational complexity. As for future work, optimizations to decrease the required runtime of gendis can be implemented, which would allow gendis to evolve larger populations in a similar amount of time. One significant optimization is to express the calculation of distances, one of the bottlenecks of gendis, algebraically in order to leverage gpu technologies. Further research on each of the operators within gendis can also be performed.
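The algebraic reformulation of the distance computation mentioned above can be illustrated in NumPy; the same expression vectorizes directly on a GPU. This is a sketch with our own function name, using the identity ||w - s||^2 = ||w||^2 - 2 w.s + ||s||^2 over all sliding windows w:

```python
import numpy as np

def batch_shapelet_distances(X, shapelet):
    """Minimum sliding-window Euclidean distance of one shapelet to every
    series in X, computed in a single batched expression instead of an
    explicit loop over windows."""
    X, s = np.asarray(X, float), np.asarray(shapelet, float)
    # all windows: shape (n_series, n_windows, len(s)), a zero-copy view
    W = np.lib.stride_tricks.sliding_window_view(X, len(s), axis=1)
    sq = (W ** 2).sum(-1) - 2.0 * (W @ s) + (s ** 2).sum()
    return np.sqrt(np.maximum(sq, 0.0).min(axis=1))
```

Because the whole computation reduces to elementwise operations plus one matrix product, swapping NumPy for a GPU array library would accelerate the fitness bottleneck without changing the logic.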
While we clearly demonstrated the feasibility of a genetic algorithm to achieve state-of-the-art performance with the operators discussed in this work, the amount of available research within the domain of time series analysis is growing rapidly. New insights from this domain can continuously be integrated within gendis. As an example, new time series clustering algorithms could be integrated as initialization operators of the genetic algorithm. Finally, it could be very valuable to extend gendis to work with multivariate data and to discover multivariate shapelets. This would require a different representation of the individuals in the population and custom genetic operators."}
+{"text": "Network inference from transcriptional time-series data requires accurate, interpretable, and efficient determination of causal relationships among thousands of genes. Here, we develop Bootstrap Elastic net regression from Time Series (BETS), a statistical framework based on Granger causality for the recovery of a directed gene network from transcriptional time-series data. BETS uses elastic net regression and stability selection from bootstrapped samples to infer causal relationships among genes. BETS is highly parallelized, enabling efficient analysis of large transcriptional data sets. We show competitive accuracy on a community benchmark, the DREAM4 100-gene network inference challenge, where BETS is one of the fastest among methods of similar performance and additionally infers whether causal effects are activating or inhibitory. We apply BETS to transcriptional time-series data of differentially-expressed genes from A549 cells exposed to glucocorticoids over a period of 12 hours. We identify a network of 2768 genes and 31,945 directed edges (FDR \u2264 0.2). We validate inferred causal network edges using two external data sources: Overexpression experiments on the same glucocorticoid system, and genetic variants associated with inferred edges in primary lung tissue in the Genotype-Tissue Expression (GTEx) v6 project. BETS is available as an open source software package. We can better understand human health and disease by studying the state of cells and how environmental dysregulation affects cell state. Cellular assays, when collected across time, can show us how genes in cells respond to stimuli. These time-series assays provide an opportunity to identify causal relationships among thousands of genes without performing hundreds of thousands of experiments. However, inferring causal relationships from these time-series data needs to be fast, robust, and accurate.
We present a method, BETS, that infers causal gene networks from gene expression time series. BETS runs quickly because it is parallelized, allowing even data sets with thousands of genes to be analyzed. We demonstrate the performance of BETS compared to 22 other state-of-the-art inference methods on benchmark data. We then use BETS to build causal networks from gene expression responses to the widely-prescribed drug dexamethasone. We replicate the estimated causal relationships using gene expression data from the Genotype-Tissue Expression (GTEx) project and from additional experiments with dexamethasone. We release our software so that BETS can be used to accurately and effectively infer causal relationships from gene expression time-series assays. This is a PLOS Computational Biology Methods paper. The recent availability of gene expression measurements over time has enabled the search for interpretable statistical models of gene regulatory dynamics. In this work, we develop an approach that uses gene transcription time series following glucocorticoid (GC) exposure to build a directed gene network. Our method, Bootstrap Elastic net inference from Time Series (BETS), uses vector autoregression with elastic net regularization to infer directed edges between genes. Stability selection, which assesses the robustness of an edge to perturbations in the data, leads to improvements over baseline vector autoregression methods in this high-dimensional context. We use the causal network inferred by BETS on the GC time-series data to study the relationships between TFs, immune genes, and metabolic genes. We validate our network using two approaches: ten measurements of the same GC system with an overexpressed TF, and an expression quantitative trait loci (eQTL) study in human primary lung tissue. A directed edge indicates that a change in expression of the causal gene would lead to changes in expression of the effect gene.
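The lagged (Granger-style) regression at the heart of this kind of vector autoregression can be sketched as follows. For brevity this sketch uses a closed-form ridge penalty in place of BETS's elastic net with bootstrapped stability selection; function names are ours:

```python
import numpy as np

def lagged_design(X, L):
    """Unroll a (T x p) expression matrix into a VAR design: the target at
    time t is predicted from all genes at times t-1 .. t-L."""
    T, p = X.shape
    Y = X[L:]                                                  # (T-L, p) targets
    Z = np.hstack([X[L - l:T - l] for l in range(1, L + 1)])   # (T-L, L*p) lags
    return Z, Y

def var_coefficients(X, L=2, alpha=0.1):
    """Fit the lagged regression with a ridge penalty in closed form.
    Entry (i, j) links lagged gene i to target gene j; large entries are
    candidate directed edges."""
    Z, Y = lagged_design(X, L)
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ Y)
```

In BETS, this fit would be repeated over bootstrap samples (and over time-permuted data to build a null), and edges are kept by selection frequency rather than raw coefficient size.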
We briefly review existing approaches to network inference from time-series data. Correlation-based methods assess the association between the expression of g\u2032 at the previous time point and the expression of g at the current time point. Mutual information (MI) methods assess the MI between the expression of g\u2032 at the previous time point and the expression of g at the current time point. Granger-causality methods ask whether the expression of g\u2032 at the previous time point improves our ability to predict the expression of g at the current time point beyond using the expression of g at the previous time point alone. Dynamic Bayesian networks (DBNs) search the space of possible directed acyclic graphs between previous and current expression levels to identify the network structure with the highest posterior probability given the data; an edge g\u2032 \u2192 g is included in the network when its (marginal) posterior probability of existence exceeds some threshold. A Gaussian process (GP) is a distribution over continuous, nonlinear functions. GPs are often used in the context of nonlinear DBNs, where GP regression is used to model a nonlinear relationship between previous expression and current expression. While these approaches produce directed networks that have the flavor of Bayesian networks, except for DBNs, none of them produce graphs that are constrained to be acyclic, so they do not have the same statistical semantics as Bayesian networks. First, we briefly describe the approach that BETS uses to infer a directed gene network. Next, we compare results from BETS to those from twenty-two other methods on the 100-gene time-series data from the DREAM4 Network Inference Challenge. Then, we apply BETS to the glucocorticoid time-series data. Directed networks represent causal relationships among diverse interacting variables in complex systems. We developed a robust, scalable approach based on ideas from Granger causality to construct these directed networks from short, high-dimensional time-series observations of gene expression levels. Let G be the set of all p = |G| genes in the data set and g \u2208 G be a gene.
Let \u00acg be G with g removed. Let t be a single time point, ranging over {1, 2, \u2026, T}. Let X_t^g denote the expression of gene g at time t. Let L be the time lag, or the number of previous time point observations; so L = 2 means that we use two previous time points, t \u2212 1 and t \u2212 2, to predict expression at time t. These types of autoregressive models work best with similarly-spaced time points, as the data sets in this paper approximate, and assume stationarity, or the same causal effects across each time gap. Definition 2.1. For lag L, a gene g\u2032 is said to Granger-cause another gene g if using the expression of g\u2032 at times t \u2212 1 to t \u2212 L improves the prediction of the expression of g at time t, beyond the prediction based on the past of g alone. To infer an edge from g\u2032 to g, we first preprocessed the gene expression time-series data, then fit the regularized model to the data, and finally selected edges that passed the significance threshold. We evaluated BETS against other directed network inference methods using the DREAM4 Network Inference Challenge, a community benchmark. We tested BETS and Enet against 22 other methods on the DREAM challenge. BETS ranked 7th out of 24 in AUPR, with an average AUPR of 0.128, and 4th out of 19 in AUROC, with an average AUROC of 0.686. Next, we compared the speed of BETS and three other top-performing methods: SWING-RF, CSId, and Jump3. Finally, we found that BETS' performance on DREAM is robust to the choice of lag. We ran BETS with lag L = 1 and compared it against ridge regression with lag L = 1, ridge regression with lag L = 2, and lasso with lag L = 2. BETS with lag L = 1 still ranks 7th out of 24 in AUPR and 4th out of 19 in AUROC (tied with GP4GRN). Thus, BETS achieves consistently good performance on DREAM for both lags L = 1 and L = 2. In the original exposure data set, cells were exposed to the synthetic GC dexamethasone (dex) for 0, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 10, and 12 hours. For each bootstrap sample j, this procedure draws row indices I_j of the N \u00d7 1 output vector and the N \u00d7 L|G| input matrix. Now consider the permuted output and input.
Let the jth bootstrap sample for both the original and permuted data sets use the same row indices I_j. Thus, the selection frequency of edge g\u2032 \u2192 g, \u03c0g\u2032,g (the frequency of g\u2032 \u2192 g among the 1000 bootstrap networks), can be computed. To determine the appropriate cutoff for the selection frequency of each edge, we generate a null distribution of selection frequencies using permutations: we generate a second permuted data set by permuting the expression of each gene g \u2208 {1, \u2026, |G|} across time. This is done separately for distinct replicates. To evaluate performance, refer to every network edge inferred by a method as a positive and every missing edge as a negative, and let TPR be the True Positive Rate and FPR be the False Positive Rate. AUROC plots TPR on the y axis and FPR on the x axis. AUPR plots precision on the y axis and recall on the x axis. When the number of negatives greatly exceeds the number of positives, as with gene networks, which are typically sparse, AUPR is a more relevant metric. Reviewer's Responses to Questions. Comments to the Authors: Please note here if the review is uploaded as an attachment. Reviewer #1: This manuscript presents the Bootstrap Elastic net regression from Time Series (BETS) algorithm, a network inference approach for time series gene expression data. The core question in time series-based network inference is how to use past information from regulators to detect their influence on the later expression levels of target genes. BETS adopts a Granger causality approach to address this question. Although Granger causality has been widely applied to time series gene expression data, there are multiple characteristics that differentiate BETS from prior work. The most notable is that BETS provides a false discovery rate (FDR) framework to rank and select network edges. Because it operates with temporal data, permuting the time points generates a convenient dataset in which the true regulator-target dependencies should be destroyed.
Running BETS on the permuted data then creates a null distribution of regression coefficients or edge frequencies that can be used for FDR calculations. This FDR evaluation is combined with traditional bootstrapping and stability selection to make BETS robust to small sample sizes and false positive associations that are problematic in short time series. In addition, BETS incorporates an elastic net regression penalty and automated hyperparameter tuning, which are not implemented in many prior Granger causality methods.In general, the presentation , methodological choices, and analyses are superb. The synthesis of different types of time series network inference techniques in Figure S1 is an excellent high-level overview of the field. The evaluation on the simulated data DREAM4 challenge is valuable, demonstrating that BETS performs reasonably well on this popular benchmarking dataset and putting BETS in the context of many other types of network inference tools. There is a sizeable AUPR performance gap between BETS and the top method. However, this is not a concern because there are other advantages of BETS besides its AUPR on a simulated data set . Furthermore, other work in the gene network inference field has shown that AUPR on simulated data does not reflect the challenges of network inference in real mammalian systems, so maximizing AUPR on the DREAM4 data should not be the main objective of new methods.Therefore, the glucocorticoid case study is more relevant and interesting. Because there are not complete gold standard networks available for evaluating condition-specific human regulatory networks, the BETS predictions are assessed with two independent datasets. Overexpression of 10 transcription factors demonstrates that there is general agreement between the gene expression changes induced by overexpression and the edge signs predicted by BETS. 
The BETS predictions are not perfect, as four transcription factors have positive effects enriched among negative predicted edges. In addition, trans-eQTL analysis of GTEx lung gene expression data reveals many new trans-eQTLs that are nominated by the BETS network edges. These demonstrate how BETS predictions can be used to gain biological insights. One weakness with respect to the method's originality and relevance to the network inference field is that there is not a direct analysis demonstrating that the novel methodological aspects of BETS have a practical impact. The DREAM4 results are 10 years old and do not reflect the state of the art in Granger causality analysis. Only BETS was run on the glucocorticoid expression data. Conceptually, the null distributions and FDR-based approach should improve network quality, but this claim is not directly assessed. Major comments: 1) Related to the comment above, the manuscript would be improved by more specifically demonstrating to readers why they should use BETS over other modern Granger causality approaches. For instance, if the FDR framework is the main appeal, can it directly demonstrate the advantages of having a principled way to select the size of a network? If it is the scalability and parallelization, can the high-throughput software pipeline be made more robust and user friendly (see below)? There is a comparison with elastic net regression without stability selection, but ensembling or stability selection is now commonplace in network inference. The most closely related Granger causality work that shares some features with BETS is not included in the DREAM4 benchmarking. Examples of closely related methods include the last two methods in Section 2.1.6 of the supplement (the references are broken so the specific manuscripts are unknown) or SWING (Finkle 2018 doi:10.1073/pnas.1710936115), which was not referenced. 2) The analyses all fix the lag at L=2.
It is unknown whether that lag would still be appropriate for time series data with more time points. The lag can be set by the user, but the relationships between the lag hyperparameter, the length of the time series, and the effectiveness of the FDR procedures have not been assessed.3) The permutation and bootstrapping concepts in Figure 1 are understandable at a high level. The methodological details are challenging to follow. It appears that a single temporal permutation is made in the outer loop and another is made in the inner loop. Is a single permutation sufficient to obtain a robust null distribution? In addition, if the bootstrapping is done on the matrix form in Equation 10, which has already unrolled or expanded the L previous time points, how is the inner loop permutation performed? The details of the permutation and FDR procedures are difficult to verify.4) Using the BETS network's edge signs to predict the vector autoregressive model's edge coefficients is a creative way to use the overexpression data to validate the network that accounts for the edge signs. However, there are some unaddressed caveats with this approach. The overexpression data have fewer time points, so a simpler regression model is used (Equation 17). Any errors in that simple model's fits will confound the BETS network assessment. This approach may also ignore false negative edges in the BETS network. An alternative approach would be to focus on the edge directions in the BETS network by estimating the differentially expressed genes in each overexpression experiment using a temporally-aware statistical test. nsgp (Heinonen 2015 doi:10.1093/bioinformatics/btu699) or a similar method may be able to accommodate the different number of time points in the original and overexpression data. Then, these genes can be treated as the targets of that regulator in a pseudo-gold standard network. 
The predicted edges for that regulator in the BETS network could be evaluated with a precision-recall curve, even if they have very few predicted target genes. 5) The author contributions imply that the RNA-seq data were generated as part of this study. If that is the case, the experimental protocols and methods are incomplete. In addition, the expression data should be made available. Minor comments: 6) The supplement and GitHub readme note that the time points must be approximately equally spaced. This is an important assumption that limits the applicability to irregular time series. It should be stated more clearly in the main text. 7) The supplement describes the global and local null distributions and FDR approaches in excellent detail. It is difficult to link this discussion to the methods that were actually used in the main text. Stating where the global and local versions were used in the main text methods would help connect these discussions. 8) The main text does not explain the biological goals of the glucocorticoid study or why immune and metabolic genes are of interest. It would help guide readers if some of the well-written explanation from the supplement's Sections 1 and 2.3 was moved to the main text results. 9) The discussion notes that applying Granger causality to single-cell pseudotemporal data is a relevant related area. This is indeed an exciting future direction for BETS, but recent preprints have shown that pseudotimes may not have the same information for network inference as bona fide time series data. That related work may guide readers who attempt to apply BETS to pseudotemporal data. 10) The methods section describes BIOGRID PPIs, which do not appear to have been used in the analyses described in the results. 11) The references in the supplement are broken. 12) The supplement refers to a method named VAR-GEN instead of BETS. Are these the same? 13) Supplemental Figure 7 is difficult to understand.
What are the indices and percentages? 14) It is commendable that the software pipelines are available on GitHub with an open source license. Some challenges in running the software are detailed below. In addition, the final version should be archived on Zenodo, Figshare, Software Heritage, or a comparable resource. The Google Drive materials could also be migrated to a permanent repository. The support group https://groups.google.com/forum/#!forum/bets-support displayed an error \"You do not have permission to access this content. (#418)\". 15) The supplement contains typos \"we uses the\" (page 9) and \"multiple sclrosis patients\" (page 29). Software comments: The software was tested on Windows 10 with Git for Windows -release (x86_64-pc-msys)) in the following Python 2 conda environment: $ conda create --name bets python=2.7 numpy=1.13 scipy=0.19 pandas=0.20 matplotlib scikit-learn Python 2's end of life date is in 2020 and support will be dropped by several packages BETS requires (https://python3statement.org/). Porting the code to Python 3 is strongly recommended, as is formal testing to ensure the pipeline can run on a different system. The BETS pipeline in BETS_tutorial.md did not work in the environment described above. After editing dozens of lines, it was possible to run the code through Step 3, but then there were too many scripts to edit. The most common incompatibilities were: - Assuming 'python2' instead of 'python' to run .py files in the shell scripts - The 'rB' argument when loading pickled files generated \"ValueError: Invalid mode ('rB')\".
Removing the 'rB' worked. - The 'wB' argument when writing pickled files needed to be 'w' instead. - The paths to scripts combined different path separators / and \\\\ - The 'module load' command is not needed to load Python in many systems (this does not terminate execution though) - Exporting environment variables as a way to configure BETS can be unreliable. Reviewer #2: In the manuscript, Engelhardt and colleagues describe a new method called Bootstrap Elastic net regression from Time Series (BETS) to infer causal gene networks from time-series gene expression data. BETS uses vector autoregression with elastic net regularization to infer causal relationships (directed edges) between genes. The authors benchmark the performance of BETS by comparing their results against those from 21 other methods on the time-series data from the DREAM4 Network Inference Challenge. Assessed using previously used metrics of performance, BETS ranks 6th out of 22 in terms of the AUPR metric (0.13 vs the top-performing CSId @ ~0.2) and ranks 3rd out of 17 in AUROC (0.7 vs the top-performing CSId @ 0.72). Compared to other top-performing methods, BETS is the fastest (~2-10 times faster), making it an attractive alternative to the top-performing CSId and Jump3. The authors demonstrate the utility of BETS by applying it on previously published time-series expression (RNA-Seq) data on A549 cells (human adenocarcinoma cell line) exposed to dexamethasone (a synthetic glucocorticoid). The BETS-inferred causal gene network is validated against orthogonal over-expression datasets. The proposed method is technically sound, and the manuscript is generally well written, concise and easy to read. As the authors correctly state, BETS' advantage over other methods is its speed, with performance comparable to the best-performing methods. BETS would be a nice addition to the arsenal of methods used to infer causal gene networks from time-series expression data.
And, the authors have made their method available on GitHub.
Major Points:
1. With regards to the text related to the inferred glucocorticoid response network (on page 8), where the authors describe the inferred network containing 2,768 nodes (genes): It is noted that all 2,768 genes are 'effect genes' (had an incoming directed edge), and 466/2,768 genes are 'causes', defined as nodes with an outward directed edge. If all the genes have an in-degree (incoming edge), I don't understand how the authors can define the 466 genes that have both an incoming and an outgoing edge as causal. I would think that only those nodes with one or more outgoing edges and no incoming edge are 'causal.' The authors need to explicitly clarify what their definition of 'causal gene' is because this raises questions about their 'causal gene network' definition.
2. The fact that all 2,768 genes have an incoming edge means that the resulting network/graph is not a directed acyclic graph and that it contains strongly connected components (SCCs), defined as sub-networks where, for every pair of nodes u and v in the sub-network, there exists a directed path from u to v, and from v to u. Given this, I wonder if a method like Vertex Sort (PMID: 19690563) would be more appropriate to infer the network hierarchy and thus 'causal genes'.
Minor Points:
1. I recommend the authors consider using \"time-series\" instead of \"time series.\"
2. The authors have been objective overall in describing/interpreting their results/findings. Line 184 on page 6 states that \"BETS had a slightly lower AUPR compared with CSId (~0.12 vs 0.20).\" I take issue with 'slightly' since CSId's AUPR is ~65% better than BETS's w.r.t. this metric.
I would just remove 'slightly.'
**********
Have all data underlying the figures and results presented in the manuscript been provided? Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.
Reviewer #1: No: The manuscript states \"All files have been or are being submitted to the Gene Expression Omnibus under the same study name, including accession number GSE91208.\" However, that accession number only lists DNase-seq data, not RNA-seq data. Reviewer access links to any private data used in this study should be made available.
Reviewer #2: None
**********
PLOS authors have the option to publish the peer review history of their article and your responses to reviewer comments (what does this mean?). If published, this will include your full peer review and any attached files. If eligible, we will contact you to opt in or out.
[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).
Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.
Sincerely,
Christina S. Leslie, Associate Editor, PLOS Computational Biology
Thomas Lengauer, Methods Editor, PLOS Computational Biology
***********************
A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately.
Reviewer's Responses to Questions
Comments to the Authors: Please note here if the review is uploaded as an attachment.
Reviewer #1: The authors have made substantial revisions that greatly enhance the manuscript and address almost all of my comments on the initial submission. These include clarifications in the Methods text, demonstration of robustness to the lag parameter values, results from new network inference algorithms, and another approach for evaluating the glucocorticoid predictions. Even though not all of the quantitative results favor the BETS algorithm, the additional results strengthen the manuscript. The authors have conducted fair and objective evaluations instead of skewing the results and language to only highlight advantages of BETS. For instance, they are honest in reporting that SWING-RF is now the top performer in the DREAM dataset and that the nsgp-based glucocorticoid evaluation shows the BETS predictions are not significantly better than random guessing. All network inference algorithms have some limitations, so the objective assessment will raise readers' confidence in the reported results and improve the manuscript overall. The main reason I am enthusiastic about this manuscript is that the false discovery rate framework is a major and novel contribution to the network inference field. Figure 2 shows that BETS substantially improves upon other vector autoregressive network inference algorithms (including the new SWING-Lasso) in the DREAM evaluation. BETS remains very good at identifying trans-eQTLs. In addition, the analyses and results are rigorous, and the manuscript is very polished overall. The only remaining major concern is the usability of the BETS software, as detailed below.
Major comments:
1) I am still unable to run the BETS pipeline. This time I tried running BETS in Python 3 on a Linux server.
I created a fresh conda environment with
$ conda create --name bets python=3 numpy=1.13 scipy=0.19 pandas=0.20 matplotlib scikit-learn
There were fatal errors within the first few minutes of running the pipeline described in BETS_tutorial.md:
- The line 'export NULL=g' comes after $NULL is used in package_params_cpipeline.sh, so the directory names are incorrect
- prep_jobs_bootstrap.sh still uses the 'python2' command to run the Python script, so it does not work in a Python 3 environment. Because this command is run within a script, aliasing python2 to point to python3 does not work.
Because the software is an important part of the contribution, the authors should ensure the pipeline can run in a fresh Python environment. One way to guarantee this would be to run it in a continuous integration service like Travis CI or GitHub Actions.
Minor comments:
2) To keep the discussion balanced, some of the new negative results could be included alongside the existing positive results. SWING-RF is very fast and accurate (Table S3), but it is not included in the runtime discussion on line 191. BETS is substantially faster than CSId and Jump3 but not SWING-RF. The Discussion paragraph starting at line 379 only focuses on the positive validations for the glucocorticoid network but ignores the nsgp-based results.
3) The editors should ensure the overexpression data is available on GEO before publication.
4) The source code and supplementary materials on Google Drive should be archived in a more permanent repository, even if they are > 100 GB. The NIH figshare instance allows 100 GB of storage, and researchers can request more (https://nih.figshare.com/f/faq). Zenodo has 50 GB by default, but more is available upon request (https://about.zenodo.org/policies/). PLOS Computational Biology partners with Dryad (https://datadryad.org/stash/publishing_charges) and offers a 300 GB limit (https://datadryad.org/stash/faq).
5) Some inconsistencies and typos have been introduced over the different versions of the manuscript:
- References to 'STAR Methods' remain
- Some text describes 340 trans-eQTLs, other text states 341
- Line 442 states 'In BETS, L = 2.' but now the results include multiple values of L, so this could state that L = 2 is the default
- Line 476 'that the the'
- Line 576 states 'reference numbers listed in Supplementary Table 3' but that table contains other data
- The 'DREAM benchmarking' section (line 601) omits the new SWING methods
- Line 638 still refers to STRING interactions
- Line 799 'gene g \\\\in \\\\not tf a random score' is missing a word
Reviewer #2: The authors have satisfactorily addressed the reviewers' comments.
**********
Have all data underlying the figures and results presented in the manuscript been provided?
Reviewer #1: No: Not yet. The accession number GSE91208 is listed but does not seem relevant. The GEO accession numbers for the 100 nM dexamethasone treatment time course have now been provided. The GEO uploads for the overexpression data are still in progress. The Google Drive data should still be archived as well.
Reviewer #2: Yes
**********
PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.
Reproducibility: To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-materials-and-methods.
14 Jul 2020. Attachment: bets_r2r_v2.pdf. Submitted filename: Click here for additional data file.
7 Aug 2020
Dear Dr. Engelhardt, We are pleased to inform you that your manuscript 'Causal network inference from gene transcription time-series response to glucocorticoids' has been provisionally accepted for publication in PLOS Computational Biology. Please note the reviewers' additional comments, though. Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.
IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript. Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article.
All press must be co-ordinated with PLOS. Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.
Best regards,
Christina S. Leslie, Associate Editor, PLOS Computational Biology
Thomas Lengauer, Methods Editor, PLOS Computational Biology
***********************************************************
Reviewer's Responses to Questions
Comments to the Authors: Please note here if the review is uploaded as an attachment.
Reviewer #1: The authors have addressed my previous comments about the manuscript, data, and software. I was able to access the GEO datasets and spot-checked the expression data. The supplementary results and files are now archived on Zenodo. Finally, I confirmed that I could run BETS on a Linux server in the conda environment described in the previous review. I have no other concerns and remain enthusiastic about this research. One last comment is that the Zenodo file port-from-della.zip contains many files that could be removed. For instance, I noticed:
- drive-download-20200629T235020Z-001.zip
- ProbGenReceipt_2016.pdf
- Goldwater_ResearchEssay_1_25_17.docx
- BACKUP* files
- The code/ subdirectory
- Many other files in the presentations/ subdirectory that aren't referenced in 'Full Progeny.xlsx'
The Zenodo dataset can be updated at any time by uploading a new version of the zip file, so this suggestion does not impact my recommendation for the manuscript.
**********
Have all data underlying the figures and results presented in the manuscript been provided?
Reviewer #1: Yes
**********
If published, this will include your full peer review and any attached files. PLOS authors have the option to publish the peer review history of their article."}
+{"text": "D. melanogaster reveals population-level molecular circadian clock variation. Natural genetic variation affects circadian rhythms across the evolutionary tree, but the underlying molecular mechanisms are poorly understood. We investigated population-level, molecular circadian clock variation by generating a dataset of >700 tissue-specific transcriptomes of Drosophila melanogaster (w1118) and 141 Drosophila Genetic Reference Panel (DGRP) lines. This comprehensive circadian gene expression atlas contains >1700 cycling genes, including previously unknown central circadian clock components and tissue-specific regulators. Furthermore, >30% of DGRP lines exhibited aberrant circadian gene expression, revealing abundant genetic variation\u2013mediated, intertissue circadian expression desynchrony. Genetic analysis of one line with the strongest deviating circadian expression uncovered a novel cry mutation that, as shown by protein structural modeling and brain immunohistochemistry, disrupts the light-driven flavin adenine dinucleotide cofactor photoreduction, providing in vivo support for the importance of this conserved photoentrainment mechanism. Together, our study revealed pervasive tissue-specific circadian expression variation with genetic variants acting upon tissue-specific regulatory networks to generate local gene expression oscillations. Earth\u2019s rotation results in daily cycles of light intensity, temperature, and atmospheric pressure.
These oscillations are reflected in the biosphere as circadian rhythms, which are manifested in numerous species across the evolutionary tree of life. Many Drosophila protein-coding genes are evolutionarily conserved in humans, including functional orthologs of core circadian regulators. To study natural variation, we used the DGRP, which consists of over 200 genetically diverse fly lines that constitute a highly valuable resource of natural genetic variation in Drosophila. We identified >1700 cycling genes, substantially expanding the catalog of known clock-controlled genes. Analysis of the underlying gene regulatory network (GRN) based on cycling genes revealed transcription factors (TFs) that may control tissue-specific, circadian gene expression. In addition, we uncovered seven previously uncharacterized circadian regulators, which we validated by perturbation assays. We also assessed the influence of genetic variation on circadian rhythm by performing high temporal resolution RNA sequencing (RNA-seq) on 141 DGRP lines across three tissues, yielding another 451 transcriptomes. This screen revealed that 45 (>30%) of the sampled DGRP lines exhibit aberrant circadian gene expression. Since this variation is mostly manifested in a tissue-specific manner, our findings reveal extensive molecular circadian desynchrony between tissues. To understand the underlying molecular mechanisms, we performed a genetic analysis of the line (DGRP-796) showing the strongest deviating circadian expression. Through genetic and molecular analyses, protein structural modeling and simulation, as well as brain immunohistochemistry, we found that this line features a dysfunctional clock in both brain and peripheral tissues driven by a novel cry mutation that disrupts the light-driven flavin adenine dinucleotide (FAD) cofactor photoreduction. Hence, we validated in vivo the importance of this evolutionarily conserved photoentrainment mechanism in the circadian pacemaker.
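Detection of cycling genes of this kind is commonly done by harmonic (cosinor) regression against a 24-hour sinusoid; the study does not spell out its exact statistical pipeline here, so the following is only a minimal sketch of how a single gene's expression time series can be scored for rhythmicity (function name and any threshold choices are illustrative):

```python
import numpy as np

def cosinor_fit(expr, times, period=24.0):
    '''Least-squares fit of expr ~ mesor + a*cos(w*t) + b*sin(w*t).

    Returns (amplitude, peak_time_h, r_squared); a high r_squared together
    with a non-trivial amplitude flags the gene as cycling at this period.'''
    w = 2.0 * np.pi / period
    # design matrix: intercept (mesor) plus the two harmonic components
    X = np.column_stack([np.ones_like(times), np.cos(w * times), np.sin(w * times)])
    coef, *_ = np.linalg.lstsq(X, expr, rcond=None)
    _, a, b = coef
    amplitude = float(np.hypot(a, b))
    # acrophase: time of peak expression in hours
    peak_time = float((np.arctan2(b, a) % (2.0 * np.pi)) / w)
    fitted = X @ coef
    ss_res = float(np.sum((expr - fitted) ** 2))
    ss_tot = float(np.sum((expr - np.mean(expr)) ** 2))
    return amplitude, peak_time, 1.0 - ss_res / ss_tot
```

In practice, published rhythmicity detectors (e.g., permutation- or rank-based methods) add multiple-testing control across thousands of genes; the fit above only illustrates the core idea of scoring amplitude, phase, and goodness of fit at a fixed period.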
Together, these results underscore the resource value of the generated tissue- and genotype-specific gene expression atlas in informing on previously unknown circadian biology and the effect of genetic variation thereon. To understand how these genetic variants affect circadian rhythms, we need to uncover the underlying regulatory mechanisms. However, there are still several major aspects of molecular circadian biology that are yet to be fully elucidated. These include (i) how the circadian clock controls tissue- or cell type\u2013specific expression rhythms, which affect a wide variety of metabolic, physiological, and behavioral processes. GO analysis of tissue-specific cycling (TSC) genes revealed significantly enriched terms in each tissue [brain: Padj = 0.028; fat body: GO:1902533, positive regulation of intracellular signaling, Padj = 0.019; gut: GO:0006886, intracellular protein transport, Padj = 0.003; and Malpighian tubules: GO:0046907, intracellular transport, Padj = 0.003], while GO analysis of the TSC genes that were up-regulated in the tissue where they also cycled did not reveal any significantly enriched terms. To investigate TSC gene expression, we first created a baseline gene expression time series using the w1118 strain. This analysis recovered the known core clock genes tim, vri, per, cry, Clk, cwo, and Pdp1, benchmarking our approach, as well as seven largely uncharacterized ones: CG2277, CG5793, CG31324, CG14688, Gclm, Amph, and Usp1. We tested whether knockdown of these genes in clock cells [using tim-GAL4 or, for the ventrolateral neurons, Pdf-GAL4] would cause locomotor activity rhythm defects, the most common readout for circadian rhythm integrity. To provide a robust evaluation, two upstream activating sequence\u2013RNA interference (UAS-RNAi) constructs targeting different gene regions were used. Overall, knockdown of all tested genes, except CG14688 for which only one RNAi line was tested, affected the locomotor behavior either by affecting the number of rhythmic flies or by altering their period, rhythmicity strength, or rhythmicity index (a readout of the strength of the periodic pattern) with at least one pair of UAS-RNAi lines using either the tim or Pdf drivers.
Yet, locomotor activity levels were not decreased. Four genes, Amph, CG2277, CG5793, and Usp1, showed robust and consistent effects with both RNAi constructs and both gene drivers, i.e., tim and Pdf. These genes have orthologs in mouse (Mus musculus), baboon (Papio anubis), and human (Homo sapiens) (table S5), as assessed using circadian gene expression data across tissues in mouse and baboon from databases and previous publications. GO analysis also highlighted the term \u201cglutathione transferase activity\u201d. Analysis of the expression profiles of glutathione-associated genes indeed showed a systematic molecular desynchronization across tissues. Two genes in this category (GstD1 and Gclc) were cycling in both brain and peripheral tissues, and GstD1 and Gclc were found to be targeted by Clk (table S2) and cycled in two or more tissues. Given the central role of Clk, cwo, vri, and Pdp1 in clock control in all tissues, we expected these genes to be heavily interconnected, located centrally in the network, and bridging different nodes in all four tissues. Fifty-two percent of TSC genes were directly connected to these regulators. Motif analysis revealed the motif of the Drosophila homolog DNA replication\u2011related element\u2011binding factor (DREF) and the so far uncharacterized unknown motifs 1 and 2 to be enriched in all tissues. The high temporal sampling frequency of the w1118 dataset as well as biological replication allowed us to account for technical variability in the expression values. Furthermore, by also considering the three profiled DGRP lines, we were able to include in the model benign expression variation that does not significantly alter the circadian clock, i.e., that does not cause circadian phenotypic alterations. In a second step, we mapped a gene expression vector of a static DGRP sample against the reference time line that was created in the first step.
In other words, we ranked in time each DGRP line based on its gene expression profile relative to the reference, allowing us to compute a physiological time per line. This strategy allowed us to infer putative alterations of the molecular circadian clock based solely on single static transcriptomes, without performing the challenging temporal profiling of each DGRP line. Subsequently, to investigate the impact of genetic variation on circadian gene expression, we computed the physiological circadian time of each static DGRP sample. In essence, we determined how much the static transcriptome of each line deviates from the expected transcriptome at the point of day at which the line was collected. To calculate the physiological time, we assessed several published computational methods next to two in-house developed ones described below. In general, for every evaluated technique, published or newly developed, we divided the process of physiological time inference based on static transcriptomes into two stages. In a first step, we created a reference time series of gene expression, followed by a statistical inference of the relationship between a multidimensional gene expression matrix and time, i.e., selection of the best predictor genes. To create a reference time line, we used the previously introduced w1118 time series; this and the three profiled DGRP lines were used as a training set. During the evaluation, a sample was taken out of the training set, and an algorithm was trained on the remaining samples, followed by the prediction of the physiological time of the removed sample. This procedure was repeated on all samples from the training set, and the difference between the predicted physiological time and the time of harvesting was calculated. Since no changes in circadian rhythm were observed for either w1118 or the three profiled DGRP lines, the difference between predicted time and harvesting time was used as the main metric for method evaluation.
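The leave-one-out evaluation described above can be sketched as follows. The predictor shown here is a simple nearest-reference correlation assignment, a stand-in for the actual MTT/LASSO/DirectStat implementations, and all names are illustrative; note the use of a circular (mod-24) error, since predicted and true times live on a clock:

```python
import numpy as np

def predict_time(sample, ref_matrix, ref_times):
    '''Assign the time of the reference profile most correlated with the sample.'''
    cors = [np.corrcoef(sample, ref_matrix[:, j])[0, 1]
            for j in range(ref_matrix.shape[1])]
    return ref_times[int(np.argmax(cors))]

def loo_error(expr, times, period=24.0):
    '''Leave-one-out evaluation: hold out each sample (column of the
    genes x samples matrix), predict its time from the remaining samples,
    and return the mean circular absolute error in hours.'''
    errors = []
    n_samples = expr.shape[1]
    for i in range(n_samples):
        keep = [j for j in range(n_samples) if j != i]
        pred = predict_time(expr[:, i], expr[:, keep], times[keep])
        d = abs(pred - times[i]) % period
        errors.append(min(d, period - d))  # circular distance on the clock
    return float(np.mean(errors))
```

With the paper's 2-hour sampling grid, a perfect nearest-reference predictor is still limited to roughly the grid spacing, which is why the reported ~1.3-hour mean error and 3.4-hour (three-SD) precision are reasonable targets for a regression-based method.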
We found that the MTT approach yielded the lowest averaged difference between predicted and harvesting times across the training set (1.32 hours). We therefore considered this approach to be the most reliable (fig. S2B), reaching a precision, here conservatively defined as three SDs from the mean, of 3.4 hours. This precision is in our opinion rather remarkable considering the 2-hour sampling frequency of our w1118 experiment. As indicated above, we compared several computational approaches for inferring the physiological time: an in-house developed least absolute shrinkage and selection operator (LASSO) method and a directional statistics (DirectStat) method, a neural network, as well as the molecular time table (MTT) approach. We validated the inference at various time points for the DGRP-774 fly line, which has been shown to feature a shortened circadian period (20 hours). Relative to the w1118 reference, only a subset of clock genes (e.g., Pdp1 in the gut) cycled with a circadian period (24 hours) in either the brain or gut in DGRP-796. cry was initially isolated in a screen for mutants with dampened rhythmic per gene expression. We therefore assayed light responses of DGRP-796 and control w1118 and Canton-S flies in the subjective late evening (CT15), the time when the clock is most sensitive to light input. The number of large ventrolateral neurons (l-LNvs) as well as dorsal LNvs (LNds) in reference lines (w1118 and Canton-S) and DGRP-796 was assessed at ZT21 (peak expression of Tim) before and after a 30-min light pulse. Among the cry variants in DGRP-796, two stood out: a frameshift ACGAAA > GG at position 3R 15040256 and a codon change plus codon deletion TGTGGGT > T at position 3R 15040481, while the rest fall in upstream, downstream, 3\u2032 untranslated region or intronic regions, or are synonymous. We noticed that the frameshift is specific to DGRP-356, while the in-frame deletion is also present in DGRP-355, DGRP-796, and DGRP-911. In this study, DGRP-355 and DGRP-356 were not sampled.
Consequently, the only variant affecting cry that could also be causal to the phenotype was TGTGGGT > T. However, as this codon change plus codon deletion is in fact an in-frame deletion and as it does not affect a known functional region, we decided to also scan other known clock, clock-controlled, and light-input genes in DGRP-796 for potentially disruptive mutations that could modulate the light input/response pathway. Yet, we did not uncover any obvious loss-of-function mutations or indels. These findings prompted us to perform an analysis in which w1118 chromosomes were exchanged with the respective DGRP-796 chromosomes to test which swap triggered a phenotypic effect between the parental lines (w1118 and DGRP-796), suggesting that phase response-influencing variants may be located on chromosome 2. This prompted us to perform a variant scan of light- or circadian rhythm\u2013associated genes on chromosome 2, revealing 87 nonsynonymous variants across 38 genes, 11 premature start gains in 10 genes, three codon deletions, and one codon change with codon insertion in Akap200. Yet, given that DGRP-796 is (as far as we know) the only line that exhibits this particular circadian phenotype, we were unable to pinpoint a likely causal variant(s) due to a lack of statistical association power. In contrast, swapping chromosome 3 caused an almost complete phenotype recapitulation. The in-frame deletion in cry changes Val423 to an Ile (V423I). In addition, we detected a nonsynonymous SNP (T > C) 8 bp downstream of the deletion, which transforms Ser424 to a Pro (S424P). In molecular modeling, FAD could not be accommodated in the same binding conformation in the DGRP-796 Cry pocket.
Collectively, these modeling and simulation results suggest that the 6-bp deletion found in the DGRP-796 cry allele impairs both the ability to productively bind FAD and the optimal conformation required for the photoactivatable electron transfer cascade mediated by Trp residues. To identify the potential responsible gene(s) in DGRP-796, we therefore decided to use a chromosome mapping strategy using the lack of phase shift response as a readout. We further confirmed that the cry mutation generated a loss-of-function phenotype in vivo by testing the locomotor activity rhythms of DGRP-796 under constant light (LL). While WT flies were arrhythmic in LL, as expected, DGRP-796 flies showed persistent rhythms of locomotor activity in LL comparable to the cry02 null-mutant flies. We profiled the D. melanogaster w1118 line as well as 141 genetically diverse (DGRP) lines at high temporal resolution. This resulted in a unique circadian gene expression catalog composed of 233 samples stratified over four tissue\u2013specific time series for w1118 and 451 static transcriptomes of three tissues for the DGRP lines. The circadian clock is a ubiquitous system of temporal control over cellular physiology that is implemented through coordinated regulatory feedback loops that operate across all tissues. One of the identified candidate regulators is predicted to be involved in the dynamics of protein ubiquitination, which has been shown to affect circadian fluctuation of hundreds of proteins in flies. Most identified cycling genes do so in only one of the examined tissues, despite the fact that the majority are not differentially expressed across tissues. The abundance of genes that cycle solely in one tissue and their prevalence as a group over genes that cycle in multiple tissues suggest a high degree of autonomy for an organ\u2019s circadian rhythm.
This observation is in line with previous studies that revealed that peripheral circadian clocks are, to a large degree, independent of the central clock, although this varies among tissues. Tissue-specific circadian expression likely involves both core clock components and tissue-specific TFs, e.g., emc in the case of the brain. The mode of interaction between these core and tissue-specific regulators could follow at least two scenarios, including (i) cobinding, as previously shown for Opa/Srp establishing body-specific circadian rhythms. The primary genes of one module are zinc finger TFs that are involved in the compound eye photoreceptor differentiation process. The cycling glutathione-associated genes include the modulatory (Gclm) subunit of the Gcl holoenzyme, as well as glutathione S-transferase D1 (GstD1), and these genes are directly targeted by Clk/Cyc. A large fraction of the genes that cycled in several tissues are asynchronous. This is clearly an unexpectedly high fraction, although it is possible that the inbred and laboratory-adapted nature of DGRP flies induces stronger phenotypes than what would likely be observed in WT lines, as has been documented in previous studies using DGRP lines. Studying molecular circadian rhythms requires (i) long-term observations that exceed two periods of oscillations and (ii) relatively frequent sampling to acquire regulatory insights. These requirements become especially pertinent in studies aimed at exploring how genetic variation affects the molecular circadian clock. In this study, we addressed these challenges through the implementation of a nonclassical experimental design involving static but high temporal frequency probing of available genotypes. Our data on DGRP-774 support this strategy. The DGRP-796 cry allele carries an in-frame deletion and a V423I substitution. The difference in amino acid sequence results in a local disruption of the \u03b1 helix and, most importantly, markedly reorients W420. Because of the complex nature of the allele, implicating at once three amino acid changes, it is difficult to determine the exact influence of each of the implicated amino acid changes on protein function.
We argue though that the phenotypic effect is mainly caused by the W422 residue, as it is conserved across multiple animal species, while M421 and V423 do not contribute to this particular circadian phenotype. Nonetheless, we note that the behavior of DGRP-796 does not completely match that of the cry null mutant. The UAS-RNAi flies were obtained from the Transgenic RNAi Project and Bloomington Stock Center, while the GD and KK UAS-RNAi flies were ordered from the Vienna Drosophila Resource Center. The complete list of genotypes is presented in table S3. The tim-Gal4 driver provides a means to direct RNAi action to tim-expressing cells across the whole body, including all brain cells, while Pdf-Gal4 targets LNvs. The supplementary details of each line can be found on the websites of their corresponding stock centers by the stock identifiers listed in table S3. Stock identifications for the DGRP lines used in this study are available in table S1. The Drosophila flies were raised on food containing 58.8 g of yeast, 58.8 g of Farigel wheat (Westhove FMZH1), 6.2 g of agar powder (ACROS 400400050), 100 ml of grape juice (Ramseier), 4.9 ml of propionic acid (Sigma-Aldrich P1386), and 26.5 ml of methyl 4-hydroxybenzoate, dissolved in 1 liter of water. The temperature was set to 25\u00b0C, and light exposure was set to 12-hour/12-hour LD cycles unless stated otherwise. Two time course experiments were performed in our study. The first time course involved only the w1118 genotype, and it allowed us to evaluate tissue specificity of the circadian transcriptome. The second time course experiment involved 141 DGRP lines and was aimed at studying the impact of genetic variation on molecular circadian rhythms at a tissue-specific level. For any given fly line, the N brains were derived from the same individuals as the source of N guts, N fat bodies, and N Malpighian tubules. The gut in this experiment consists of the foregut and midgut, starting at the proventriculus and finishing at the midgut-hindgut junction.
To harvest the abdominal fat body, we first removed the gut and sexual organs from the abdomen and then separated the fat body from the dorsal part of the abdominal cavity using pincers.For both of the time course experiments, the incubators were placed behind light-blocking curtains to minimize experimental disruptions when samples were collected in DD conditions. For all of the samples mentioned above, unless stated otherwise, we dissected the brain, abdominal fat body, whole gut without the crop, and Malpighian tubes. Within each genotype and time point, the same flies were used for collecting all organ and tissue samples, see table S1 for the exact number of the dissected flies per sample. Thus, for any given fly line, the 1118w fly line to study the extent of tissue specificity of molecular circadian rhythms. Two 3-day-old mated males were separated from females under CO2-induced anesthesia and placed into vials grouping up to 20 flies per vial. Vials were placed in an incubator with a 12-hour/12-hour LD cycle at 25\u00b0C. On day 3, at ZT0 , these vials were divided into two groups. One set of vials was kept at LD settings for 24 hours, and the remainder was placed in an incubator at 12-hour/12-hour DD settings. Samples derived from the group placed in LD condition received the ZT code, and samples placed in the DD condition received the CT code. One hour after the start of the experiment (10:00 a.m.), one vial from each group was collected by transferring the flies into a 15-ml tube and by flash-freezing these in liquid nitrogen. This collection point is equivalent to time points ZT1 and CT1 for the LD and DD conditions, respectively. Three hours after the start of the experiment, at 12:00 p.m. (ZT3/CT3), the next collection point occurred, and one vial per group was collected, transferred into a 15-ml tube, and flash-frozen in liquid nitrogen. This continued over the course of 24 hours until no vials with flies were left. 
Samples that were kept in the dark conditions were also collected in a dark environment. To perform subsequent RNA-seq, flies from each sample were divided into three biological replicates per tissue and per time point (table S1).We used the reference Caenorhabditis elegans by Francesconi and Lehner and flash-frozen at \u221280\u00b0C before processing. Brains, guts, fat bodies, and Malpighian tubules were dissected on ice in 1\u00d7 phosphate-buffered saline with 0.02% Tween 20, immediately transferred into screw cap tubes with glass beads and 350 \u03bcl of TRI Reagent , and placed in a Precellys 24 for homogenization (settings: 6000 rpm/30 s). Homogenized tissues were flash-frozen at \u221280\u00b0C until subsequent RNA purification. To avoid batch effects due to differences in RNA purification efficiencies, all homogenized samples were purified in parallel using the Direct-zol 96 RNA Purification system according to manufacturer\u2019s instructions. Total RNA was eluted in water and quantified with the Quant-iT RiboGreen RNA Assay Kit . Again, to avoid batch effects in cDNA amplification and library preparation, we took between 10 and 50 ng of the total RNA from each sample and used the high-parallel and high-throughput BRB-seq library preparation method as described (https://github.com/DeplanckeLab/BRB-seqTools) were obtained from the Illumina HiSeq 2500 . The samples were pooled into 16 libraries and distributed across the sequencing lanes to prevent a lane-induced batch effect. Multiple fastq files for each library were merged by SAMtools v1.3 after which they were subjected to demultiplexing using BRB-seq tools v1.1 based on the information contained within the first read in the pair (R1) . First, z planes containing neurons of interest were selected. Next, custom lookup tables were assigned to both channels. The minimum and the maximum of the Tim channel were then set to 0 and 30, respectively. 
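The two display-processing steps described in this section (fixing a channel's minimum/maximum and collapsing a z-stack with a standard-deviation Z-projection) can be sketched outside ImageJ as follows. This is a minimal stand-in, not the original plugin: plain Python lists represent image planes, and the function names and toy stack are illustrative.

```python
# Sketch of channel rescaling (e.g., Tim channel mapped to the 0-30 range)
# and a standard-deviation Z-projection, using nested lists as a stand-in
# for ImageJ image planes. Function names and the toy stack are illustrative.
from statistics import pstdev

def rescale_channel(frame, lo, hi):
    """Clip pixel values to [lo, hi] and map them linearly onto 0..255."""
    return [[round(255 * (min(max(p, lo), hi) - lo) / (hi - lo)) for p in row]
            for row in frame]

def sd_projection(zstack):
    """Per-pixel standard deviation across the z planes of a stack."""
    return [[pstdev([zstack[z][y][x] for z in range(len(zstack))])
             for x in range(len(zstack[0][0]))]
            for y in range(len(zstack[0]))]

# Toy 2-plane, 2x2 stack
stack = [[[0, 10], [20, 30]],
         [[0, 30], [20, 50]]]
proj = sd_projection(stack)                # pixels that vary across z get a nonzero projection
scaled = rescale_channel(stack[0], 0, 30)  # values at or above 30 map to 255
```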
Similarly, the minimum and the maximum of the Pdf channel were set to 0 and 50, respectively. Last, the \u201cstandard deviation\u201d Z-projection method was applied to each of the channels. A full list of commands used to process the images is available as an ImageJ plugin on the GitHub. Tim- and Pdf-stained neurons in l-LNvs and LNds were counted manually.Images were processed using ImageJ best practices workflow for SNP and indel calling on RNA-seq data. The alignment was executed in two stages:D. melanogaster reference genome was indexed by STAR v.2.5.0b .\u2022 First, the http://broadinstitute.github.io/picard). After that, read group information was added (Picard AddOrReplaceReadGroups), and duplicates were marked (Picard MarkDuplicates) using default settings, followed by \u201cmapping qualities reassignment\u201d by GATK v3.6-0 . Last, local realignment around indels was performed with GATK RealignerTargetCreator and IndelRealigner using default parameters.\u2022 Second, reads were mapped by STAR using the same parameters as above to the reference genome obtained at the end of the previous step. Then, SAMtools was used to remove reads with an insert size >1 kb. Next, soft clipping beyond the end of reference alignment and setting mapping quality (MAPQ) to 0 for unmapped reads was performed with Picard v2.2.1 CleanSam mode to call variants with the minimum phred-scaled confidence threshold set at 30 and the emission confidence threshold set at 10. Indels, multiple nucleotide polymorphisms (MNPs), and variants with a depth of coverage less than 5 were excluded from further consideration. Afterward, GATK CombineGVCFs was used to produce a multisample GVCF. Last, GATK GenotypeGVCFs with the same phred-scaled confidence threshold and emission confidence threshold as above were applied to obtain a multisample set of variants. 
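The exclusion criteria applied to the called variants above (dropping indels, MNPs, and calls with a depth of coverage below 5, subject to the phred-scaled confidence threshold) can be sketched as a simple record filter. The dictionary fields and example records below are illustrative stand-ins for parsed caller output, not the actual GATK interface or VCF specification.

```python
# Minimal sketch of the variant-exclusion criteria described above, applied
# to already-parsed variant records. Field names ("type", "depth", "qual")
# are illustrative assumptions, not real VCF keys.
def keep_variant(v, min_qual=30.0, min_depth=5):
    if v["type"] != "SNP":         # drop indels and MNPs
        return False
    if v["depth"] < min_depth:     # drop calls with depth of coverage < 5
        return False
    return v["qual"] >= min_qual   # phred-scaled confidence threshold

calls = [
    {"type": "SNP",   "depth": 12, "qual": 45.0},
    {"type": "INDEL", "depth": 30, "qual": 99.0},
    {"type": "SNP",   "depth": 3,  "qual": 60.0},
]
kept = [v for v in calls if keep_variant(v)]  # only the first record survives
```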
Only biallelic SNPs with a depth of coverage > 5, a Fisher strand score of >30.0, and a quality by depth <2.0 were selected from the set to compare to the reference DGRP2 VCF 18 samples for the DGRP-208 time course with 6 samples per tissue: brain, gut, and fat body; (iii) 35 samples for the DGRP-321 time course with 12 samples dedicated to the brain and fat body each and 11 samples dedicated to the gut; (iv) 31 samples for the DGRP-563 time course with 12 samples of the brain, 11 of the fat body, and 8 of the gut, respectively; (v) 29 samples for the DGRP-774 time course with 11 samples for the brain and fat body each and 7 samples for the gut; (v) 94 samples for the DGRP-796 time course of which 46 samples were designated to the brain and 48 to the gut; and (vi) 338 static transcriptomes of the DGRP with 102, 125, and 100 samples for the brain, fat body, and gut, respectively, comprising 778 samples used for the downstream analysis.The bam files obtained at the second step of the genotyping procedure after \u201cread with an insert size of >1-kb removal\u201d were used to count the number of reads falling into each of the 16,995 genes using the Python package HTSeq v0.6.1 ran under union mode implemented in the JTK CYCLE was applied. No threshold on the amplitude was applied.To detect genes with circadian expression patterns, JTK CYCLE v3.1 versus 4 days in the study published by Ueda et al. = 2. Then, the zeitzeigerFit function was used on the training dataset to fit a spline into the gene expression profiles of all expressed genes. Next, the SPCs were calculated using zeitzeigerSpc, and lastly, zeitzeigerPredict was used to predict physiological time of a test sample.We predicted physiological time with the use of neural networks applied to the genes detected as circadian by the JTK CYCLE. 
Three neural network configurations were tested: (i) a single-layer network with 12 input neurons, (ii) a network with 12 input neurons and 6 neurons in the hidden layer, and (iii) a network with 12 input neurons and 2 hidden layers having 6 and 3 neurons, respectively. The training of the networks was done by the neuralnet function from the neuralnet v1.3 package in R. The prediction of the physiological time of the test samples was then achieved via the compute function from neuralnet v1.3.We also developed and evaluated a LASSO regression analysis for the prediction of physiological time. It consisted of two parts: first, the period of the day at which the sample was harvested (a.m. or p.m.) was determined to account for the symmetry of the circadian gene expression patterns. In the second step, LASSO calculated the physiological time based on the result of the first step. The analysis was implemented with glmnet v.2.0-13 used on the count table containing the time series data of all expressed genes.The cyclic nature of time served as inspiration for the development of the DirectStat approach for the prediction of physiological time. In this method, we considered time of day as a point in polar coordinates and used DirectStat to build a regression. First, time of day was converted to radians, where 24 hours corresponds to 2\u03c0 rad. Then, we used forward-backward early dropping selection for circular data, as implemented in the function spml.fbed from the R package Directional v4.1 built under R v.3.5, to select genes that could be predictors of time. Last, we fit the regression with the function spml.reg and predicted physiological time using the obtained model.w1118 time series complemented with the samples for three DGRP lines that were profiled every 2 hours over a period of 24 hours during the overall DGRP single time point collection, together totaling 317 samples across four tissues. 
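The circular-time idea behind the DirectStat approach can be sketched with a few lines of arithmetic: clock time is mapped onto the unit circle (24 hours = 2*pi rad), so that, for example, 23:00 and 01:00 are treated as 2 hours apart rather than 22. The helper names below are ours, not part of the Directional package.

```python
# Sketch of mapping clock time onto the unit circle, as used by regression
# methods for circular responses. Helper names are illustrative.
import math

def time_to_radians(hours):
    return (hours % 24.0) * 2 * math.pi / 24.0

def radians_to_time(theta):
    return (theta % (2 * math.pi)) * 24.0 / (2 * math.pi)

def circular_mean_time(times):
    """Mean of clock times computed on the circle via sin/cos components."""
    s = sum(math.sin(time_to_radians(t)) for t in times)
    c = sum(math.cos(time_to_radians(t)) for t in times)
    return radians_to_time(math.atan2(s, c))

# 23:00 and 01:00 average to midnight, not to noon
mean_t = circular_mean_time([23, 1])
```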
Gene expression values in the set were standardized to the range of 0 to 1 before the application of the methods. During the procedure, one sample was removed from the set (test sample), and training of the models was performed on the rest of the samples (training set), followed by the prediction of the physiological time on the test sample. Then, the difference in time between the estimated physiological time and time of harvesting was calculated. Thus, every method was scored 317 times. Last, the best approach was determined as the one with the least difference between the predicted and expected physiological time across all tissues, which, in our case, was the MTT method based on TIGs passing the filters applied to LD and DD time series considered as one unit (LDDD_05). To evaluate the performance of the method even further, we assessed predictions of the model for the three profiled DGRP line samples: the mean difference between predicted physiological time and recorded time of harvesting was <1.4 hours across all tissues. Predictions for each tissue were conducted separately. A DGRP sample was conservatively marked as an outlier if the predicted physiological time and the time of harvesting differed by more than 3.4 hours. This cutoff was dictated by the sensitivity of the MTT and represents three SDs of the distribution of differences between the predicted time and time of harvesting across all four tissues, w1118, and the three profiled DGRP line samples.A cross-validation leave-one-out approach was used to benchmark the methods listed above. The evaluation set was composed of the reference As a proof of concept demonstrating the ability of the MTT method to detect deviations in molecular circadian rhythms based on static transcriptomes, we applied it to the time course of DGRP-774, with samples taken every 2 hours for 24 hours across three tissues . DGRP-774 is known to have an 18.86-hour period in males that were assessed in the w1118 dataset. 
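The leave-one-out benchmarking loop described above can be sketched as follows. A toy 1-nearest-neighbour "predictor" stands in for the actual MTT/LASSO/neural-network models, and the error metric is the circular time difference on a 24-hour clock; the data and predictor are illustrative assumptions.

```python
# Leave-one-out benchmarking sketch: hold out each sample, predict its time
# from the remaining samples, and score the circular time difference.
def circular_diff_hours(t1, t2):
    """Smallest difference between two clock times on a 24-h circle."""
    d = abs(t1 - t2) % 24.0
    return min(d, 24.0 - d)

def predict_time_1nn(test_expr, train):
    """Toy predictor: return the harvest time of the most similar sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda s: dist(s["expr"], test_expr))["time"]

def loo_errors(samples):
    errors = []
    for i, s in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        pred = predict_time_1nn(s["expr"], train)
        errors.append(circular_diff_hours(pred, s["time"]))
    return errors

samples = [{"time": 1,  "expr": [1.0, 0.1]},
           {"time": 3,  "expr": [0.9, 0.2]},
           {"time": 13, "expr": [0.1, 1.0]},
           {"time": 23, "expr": [1.0, 0.0]}]
errs = loo_errors(samples)
mean_err = sum(errs) / len(errs)
```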
Panels one and three of fig. S2C illustrate that, despite the fact that the period in panel 3 is 30 hours (6-hour deviation), the time window during which we could detect a difference between the static transcriptome of the deviating line and a reference transcriptome is only 7 hours. This is comparable with the detection time window for rhythms with a 20-hour period (4-hour deviation). Moreover, the addition of a 6-hour phase shift into the model led to an increase of the detection window to 10 hours for the 20-hour rhythms and no change in the detection window size for the 30-hour rhythms, as shown in the second and fourth panels (fig. S2C), respectively. This simulation, however, is just an illustration of possible situations and their dependence on variables that are unknown before the start of the experiment, such as the molecular period and phase. It thus did not aid in choosing which DGRP lines to use to create a proof-of-concept expression time line dataset.We must note that the efficiency of detecting molecular circadian rhythm differences between a fly line with circadian rhythm deviations, such as an extended or shortened period or a phase shift, from static transcriptomes does not clearly correlate with the absolute value of the deviation (fig. S2C). To illustrate this, we simulated a circadian gene expression value as a sine wave with an amplitude of 100 U every 10 min for 24 hours for three datasets: one featuring \u201creference\u201d expression rhythms with a period of 24 hours, one displaying a shortened period of 20 hours, and one an extended period of 30 hours (fig. S2C). The amplitude of 100 U was chosen for clarity. Also, our basic model included noise, where the amplitude-to-noise ratio was derived as the mean \u201camplitude-to-noise\u201d ratio across time and genes of the seven core clock genes ("}
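The simulation described in this record (a reference 24-hour sine wave plus 20-hour and 30-hour variants, sampled every 10 minutes at an amplitude of 100 U) can be sketched as below. The fixed noise band used to define "detectable" time points is an illustrative assumption, standing in for the amplitude-to-noise ratio derived from the core clock genes.

```python
# Sketch of the sine-wave simulation: a reference 24-h rhythm and deviating
# 20-h and 30-h rhythms, sampled every 10 min over 24 h. The detection
# window counts time points where the deviating wave leaves an assumed
# noise band around the reference.
import math

def wave(period_h, amplitude=100.0, step_min=10, total_h=24):
    n = total_h * 60 // step_min
    return [amplitude * math.sin(2 * math.pi * (i * step_min / 60.0) / period_h)
            for i in range(n)]

ref = wave(24)
short = wave(20)
long_ = wave(30)

def detectable_hours(a, b, noise=30.0, step_min=10):
    """Total hours during which |a - b| exceeds the assumed noise band."""
    return sum(abs(x - y) > noise for x, y in zip(a, b)) * step_min / 60.0

# A 30-h period (6-h deviation) is not necessarily easier to detect than a
# 20-h period (4-h deviation); compare the two windows directly:
win_short = detectable_hours(ref, short)
win_long = detectable_hours(ref, long_)
```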
+{"text": "Multiple sclerosis (MS) is one of the most common neurological disabilities of the central nervous system. Immune-modulatory therapy with Interferon-\u03b2 (IFN-\u03b2) is a commonly used first-line treatment to prevent MS patients from relapses. Nevertheless, a large proportion of MS patients on IFN-\u03b2 therapy experience their first relapse within 2 years of treatment initiation. Feature selection, a machine learning strategy, is routinely used in the fields of bioinformatics and computational biology to determine which subset of genes is most relevant to an outcome of interest. The majority of feature selection methods focus on alterations in gene expression levels. In this study, we sought to determine which genes are most relevant to relapse of MS patients on IFN-\u03b2 therapy. Rather than the usual focus on alterations in gene expression levels, we devised a feature selection method based on alterations in gene-to-gene interactions. In this study, we applied the proposed method to a longitudinal microarray dataset and evaluated the IFN-\u03b2 effect on MS patients to identify gene pairs with differentially correlated edges that are consistent over time in the responder group compared to the non-responder group. The resulting gene list had a good predictive ability on an independent validation set and explicit biological implications related to MS. To conclude, it is anticipated that the proposed method will gain widespread interest and application in personalized treatment research to facilitate prediction of which patients may respond to a specific regimen. Multiple sclerosis (MS) is an immune-mediated, inflammatory demyelinating disease of the central nervous system that affects about 2.3 million people worldwide . The majThe etiology of MS has not been fully elucidated, however, it is commonly believed to be triggered as an autoimmune response to an interaction between genetic and environmental factors . 
In the There are an increasing amount of studies suggesting that genomic data such as gene expression profiles and genetic variants data may provide insightful clues on the development and molecular subtypes of MS. For example, Feature selection is a machine learning strategy of selecting, from among thousands of genes, a gene signature (subset) that may be relevant for diagnosis of a disease, segmentation of disease subtypes, patient drug response or survival rate prediction, and is becoming routine practice in the fields of bioinformatics and computational biology . Until nCompared with classic feature selection methods which mainly deal with cross-sectional data, the feature selection process for longitudinal gene expression data is more complicated. This is unsurprising given the fact that longitudinal gene expression data involve more than a single time point and consider both the expression value trajectory over time and their differences between different phenotypes. So far, feature selection methods specifically to handle longitudinal data are far from sufficient, to the best of our knowledge, the method proposed by Motivated by the differentially expressed network method , we propGSE24427) (A microarray experiment dataset was used as a training set in this study (GEO repository accession number SE24427) . We collGene expression values were measured at five separate time points: before the first, second, 1-month, 12-month and 24-month injections. Keeping in mind that several patients may have relapsed shortly after the 2-year treatment (in which case their gene expression profiles may have undergone some changes similar to changes of non-responders), we restricted the responder category to patients whose first relapse time was more than 5 years (60 months), resulting in nine responders and nine non-responders fed into the downstream analysis.GSE19285 experiment of these two experiments were downloaded from the GEO repository, respectively. 
The expression values were obtained using the fRMA algorithm . The expp-value calculation was downsized to a manageable scale), the corresponding gene pair was deemed to be a DCE at that time point; and (3) the intersection of DCEs across 5 time points was taken and the resulting subset was considered to represent consistent DCEs over time. A flowchart of the procedure is illustrated using a toy example with five genes (The procedure to identify consistent DCEs comprised three major steps: (1) interactions in String software were useve genes .In this study, the connection information in the String software was used as a reference. Two genes are considered to be connected by an edge only when the confidence score of the gene pair is larger than 0.6. The cutoff value for the absolute difference of SCCs between two groups is set at 0.6. Using the proposed procedure, 384 consistent DCEs over five time points were identified. These edges included 510 unique genes. By \u201cconsistently\u201d, we mean that these edges were identified as differentially correlated across all time points .p-value < 0.01), which involve 41 unique genes (Using permutation tests and following the strategy proposed by ue genes . Using tp = 0.003) and CXCL9 (adjusted p = 0.012) are significantly differentially expressed over time between these two groups. Focusing specifically on the two extreme points, no genes are differentially expressed at the first time point while only CXCL9 is under-expressed in the responder group at the last time point according to Wilcoxon\u2019s tests.The GeneCards database indicateNext, a gene-to-gene interaction network of these seven genes was constructed using the String software and presented in How these seven genes relate to MS was explored through a PubMed literature search. 
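The core of the DCE procedure (per time point, compute the Spearman correlation of a gene pair separately in responders and non-responders, flag the pair when the absolute SCC difference exceeds 0.6, and keep only pairs flagged at every time point) can be sketched as below. The toy expression values and the minimal rank-based Spearman implementation (no tie handling) are illustrative assumptions; only the 0.6 cutoff comes from the text.

```python
# Sketch of differentially correlated edge (DCE) detection: compare the
# Spearman correlation (SCC) of a gene pair between two phenotype groups
# at each time point, then require consistency across time points.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r  # no tie handling, for brevity

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def dce_at_timepoint(resp_x, resp_y, non_x, non_y, cutoff=0.6):
    return abs(spearman(resp_x, resp_y) - spearman(non_x, non_y)) > cutoff

# Two time points for one gene pair: correlated in responders,
# anti-correlated in non-responders at both -> a consistent DCE.
tp_flags = [dce_at_timepoint([1, 2, 3, 4], [2, 3, 5, 8],
                             [1, 2, 3, 4], [9, 6, 4, 1]) for _ in range(2)]
is_consistent_dce = all(tp_flags)
```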
Since the autoimmune inflammatory process is believed to be essential for the development of MS, it is natural to observe that chemokines and chemokine receptors are involved in the pathogenesis of the disease . For insFurthermore, CXCL9 is a ligand of CXCR3. A study describeA recent review stated tUsing a meta-analysis, the association of polymorphisms in IL2RB, IL2RA and IL2 with MS were systemically reviewed, in which significant association for IL2RA and IL2 had been justified and the The expression of CSF2, also known as GMCSF, by human TH cells has been reported to be associated with MS disease severity. GMCSF is strongly induced by interleukin 2 (IL2) : an MS-aAs far as GCA is concerned, http://www.interferome.org/). In summary, further investigation is warranted to examine how IFN-\u03b2 treatment establishes its effects by targeting these seven genes.Furthermore, all these seven genes except AKT1 and IL2RB are IFN-responsive genes according to the Interferome database (GSE41846 (Our procedure identified consistent DCEs over time for a responder group in comparison with a non-responder group, and the resulting gene signature has explicit biological relevance to MS. Based on simple calculations of SCCs and their corresponding difference between two different phenotypes, our method is easy to comprehend and can be implemented by an entry-level statistician or a clinician. Therefore, we anticipate its widespread application in relevant research areas, to help researchers identify the underlying therapeutic mechanism of a specific regimen (as shown in the GSE41846 in whichThe statistical procedure used in this study has several limitations. First, it excludes genes that are not in overlapping DCEs, therefore, valuable information may be overlooked, especially for the first time point. In the future, a way to use the first time point as the reference may be considered such that the overlap of other time points with this point will become the focus. 
Second, the training dataset had only 25 samples in total, and to eliminate possible ambiguous results caused by patients having relapses in the first 2\u20135 years, seven MS patients were excluded from the downstream analysis. This results in a smaller sample size. A large gene expression experiment with a better study design is highly desirable.10.7717/peerj.8812/supp-1 Supplemental Information 1.10.7717/peerj.8812/supp-2 Supplemental Information 2."}
+{"text": "Co-expression networks are a powerful tool to understand gene regulation. They have been used to identify new regulation and function of genes involved in plant development and their response to the environment. Up to now, co-expression networks have been inferred using transcriptomes generated on plants experiencing genetic or environmental perturbation, or from expression time series. We propose a new approach by showing that co-expression networks can be constructed in the absence of genetic and environmental perturbation, for plants at the same developmental stage. For this, we used transcriptomes that were generated from genetically identical individual plants that were grown under the same conditions and for the same amount of time. Twelve time points were used to cover the 24-h light/dark cycle. We used variability in gene expression between individual plants of the same time point to infer a co-expression network. We show that this network is biologically relevant and use it to suggest new gene functions and to identify new targets for the transcriptional regulators GI, PIF4, and PRR5. Moreover, we find different co-regulation in this network based on changes in expression between individual plants, compared to the usual approach requiring environmental perturbation. Our work shows that gene co-expression networks can be identified using variability in gene expression between individual plants, without the need for genetic or environmental perturbations. It will allow further exploration of gene regulation in contexts with subtle differences between plants, which could be closer to what individual plants in a population might face in the wild. Understanding how transcriptomes are regulated is key to shed light on how plants develop and also respond to environmental fluctuations. A powerful tool often used to reveal transcriptional regulation at a genome-wide level is gene co-expression networks . 
In genein silico the genotype, environment, and genotype \u00d7 environment effects on gene expression . This result shows that modules that are more connected to one another are more similar, at least for this feature, indicating that the community detection in the network worked well and provides modules that are relevant.First, we analyzed the number of edges at each time point throughout the time course for each module . In mostp-value = 0.0007683, Since high gene expression variability between genetically identical plants was previously observed in the transcriptome dataset we used to infer the variability network , we testOur results show that gene co-expression networks can be inferred in the absence of genetic or environmental perturbation. Moreover, genes do not need to show a high level of gene expression variability between seedlings to be integrated in the network.Next, we decided to test whether the co-expression network based on the variability of expression between genetically identical plants grown in the same environment is different from what is the standard practice in the field. Usually, co-expression networks are inferred by comparing transcriptomes obtained from pools of plants experiencing an environmental or genetic perturbation. Given that our dataset contains data for several time points throughout a day/night cycle, this would correspond to comparing the average expression of the 14 seedlings for each time point and exploiting changes in expression happening during the time course. We used this strategy to infer a co-expression network, referred to as the averaged time course network, that allows the identification of co-expression throughout the time course and is the closest to standard practices using our dataset. Using this approach, we find a total of 9332 edges, connecting 3861 genes in the averaged time course network. 
A total of 524 genes of this averaged time course network are also present in the variability network, that is, 30% of the genes in the variability network . Only 35We find that between 0 and 87.5% of genes in modules of the variability network are also in the averaged time course network, with most of the modules having between 20 and 50% of genes also present in the averaged time course network . The modIn order to define if the modules identified in the variability network are functionally relevant, we performed a Gene Ontology (GO) enrichment analysis. We find that some of the modules have strongly enriched GO .For example, module 8 is enriched in multiple GO related to photosynthesis. In particular, 33 genes out of the 41 in this module are members of photosystem I or II, or of the light harvesting complex . Other gModule 71 is enriched in GO related to DNA packaging and is iModule 1 is enriched in GO related to glucosinolate . We findModule 43 is enriched in GO related to flavonoid metabolism , with siOverall, we find that several modules in the variability network are functionally relevant, with modules showing enrichment for functions such as photosynthesis, DNA packaging, and glucosinolate or flavonoid metabolism, even in the absence of genetic and environmental perturbations. Moreover, we could identify a potential role in the enriched pathways for some genes, based on their co-expression with other genes in the same module.To go further in the functional analysis of the modules, we looked for enrichment of targets of transcriptional regulators in the modules. We focused on transcriptional regulators for which ChIP-seq was performed under similar conditions (seedlings grown in day/night cycles) and for which a list of target genes have been previously published . We defiArabidopsis thaliana genome, and looked at the ChIP-seq signal for GI at all the seven genes of module 64 in module 86 are PIF4 targets , while oodule 86 . We obseodule 86 . 
We alsoodule 86 . AT4G265odule 86 .A. thaliana genome, and looked at the ChIP-seq signal for PRR5 at all the 41 genes of module 8 , we calculated the Spearman correlation between every pair of genes, using their expression profiles across the 14 seedlings . Using aFor the averaged time course network, we calculated the mean expression across all seedlings for every time point, generating a time series of average expression for every gene. We again calculated the Spearman correlations for every pair of genes and generated a network by applying the Bonferroni correction as a (highly conservative) significance cutoff. This yielded a network that was similar in size to the variability network. All network analysis was carried out using the Python NetworkX and python-louvain libraries.The Louvain community detection algorithm was usedA. thaliana seeds were sterilized, stratified for 3 days at 4\u00b0C in the dark, and transferred for germination on solid 1/2X Murashige and Skoog (MS) media at 22\u00b0C in long days for 24 h. To reduce the level of phenotypic variation between plants, we selected the seeds that were at the same stage of germination with a binocular microscope and transferred them into a new plate containing solid 1/2X MS media. Seedlings were grown in a conviron reach-in cabinet at 22\u00b0C and 65% humidity, with 12 h of light (170 \u03bcmoles) and 12 h of dark. After 7 days of growth, 16 individual seedlings were harvested at ZT6 and at ZT14 into a 96-well plate and flash-frozen in dry ice. All seedlings harvested in a given time point were grown in the same plate. Total RNA was isolated from one seedling. We assessed RNA concentration using Qubit RNA HS assay kit. cDNA synthesis was performed on 700 ng of DNase-treated RNA using the Transcriptor First-Strand cDNA Synthesis Kit. RT-qPCR analysis was performed in the LightCycler 480 instrument using LC480 SYBR green I master, on 0.4 \u03bcl of cDNA in a 10-\u03bcl reaction. 
Gene expression relative to two control genes (SandF and PP2A) was measured (see Col-0 WT We used the Ontologizer command ChIP-seq data were downloaded from GSE45213 for SPL7 , from GS2. Peak calling was performed using MASC2 (Reads were aligned to the TAIR10 genome using Bowtie2 and Picang MASC2 , with thng MASC2 was usedhttps://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE115583.Publicly available datasets were analyzed in this study. This data can be found on Gene Expression Omnibus GSE115583: SC conceived the project and interpreted the data. SC, JL, and SA designed the project and wrote the article, with SC writing the first draft. SA inferred the networks. SC and SA performed downstream analyses of the network. MB performed the RT-qPCR validation. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
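The network-construction step described in this record (connect gene pairs whose Spearman correlation across individual seedlings passes a significance cutoff, then extract communities) can be sketched as follows. Two simplifications are assumed and labeled: a fixed |rho| cutoff stands in for the Bonferroni-corrected p-value threshold, and connected components stand in for the Louvain community detection used in the study; the correlation values are toy data.

```python
# Sketch of building a co-expression network from pairwise correlations and
# splitting it into modules. Connected components are a simple stand-in for
# the Louvain algorithm; the |rho| cutoff stands in for a Bonferroni-corrected
# p-value threshold. Correlations below are toy values.
corr = {("g1", "g2"): 0.95, ("g1", "g3"): 0.10, ("g2", "g3"): 0.05,
        ("g4", "g5"): -0.92, ("g3", "g4"): 0.20}

def build_edges(corr, cutoff=0.8):
    return [pair for pair, rho in corr.items() if abs(rho) > cutoff]

def connected_components(nodes, edges):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    comps, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        stack, members = [n], set()
        while stack:
            cur = stack.pop()
            if cur in members:
                continue
            members.add(cur)
            stack.extend(adj[cur] - members)
        seen |= members
        comps.append(members)
    return comps

nodes = {g for pair in corr for g in pair}
edges = build_edges(corr)
modules = connected_components(nodes, edges)
```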
+{"text": "Single-cell RNA sequencing (scRNA-seq) can map cell types, states and transitions during dynamic biological processes such as tissue development and regeneration. Many trajectory inference methods have been developed to order cells by their progression through a dynamic process. However, when time series data is available, most of these methods do not consider the available time information when ordering cells and are instead designed to work only on a single scRNA-seq data snapshot. We present Tempora, a novel cell trajectory inference method that orders cells using time information from time-series scRNA-seq data. In performance comparison tests, Tempora inferred known developmental lineages from three diverse tissue development time series data sets, beating state of the art methods in accuracy and speed. Tempora works at the level of cell clusters (types) and uses biological pathway information to help identify cell type relationships. This approach increases gene expression signal from single cells, processing speed, and interpretability of the inferred trajectory. Our results demonstrate the utility of a combination of time and pathway information to supervise trajectory inference for scRNA-seq based analysis. Single-cell RNA sequencing (scRNA-seq) enables an unparalleled ability to map the heterogeneity of dynamic multicellular processes, such as tissue development, tumor growth, wound response and repair, and inflammation. Multiple methods have been developed to order cells along a pseudotime axis that represents a trajectory through such processes using the concept that cells that are closely related in a lineage will have similar transcriptomes. However, time series experiments provide another useful information source to order cells, from earlier to later time point. 
By introducing a novel use of biological pathway prior information, our Tempora algorithm improves the accuracy and speed of cell trajectory inference from time-series scRNA-seq data as measured by reconstructing known developmental trajectories from three diverse data sets. By analyzing scRNA-seq data at the cluster (cell type) level instead of at the single-cell level and by using known pathway information, Tempora amplifies gene expression signals from one cell using similar cells in a cluster and similar genes within a pathway. This approach also reduces computational time and resources needed to analyze large data sets because it works with a relatively small number of clusters instead of a potentially large number of cells. Finally, it eases interpretation by operating on a relatively small number of clusters, which usually represent known cell types, as well as by identifying time-dependent pathways. Tempora is useful for finding novel insights in dynamic processes. This is a PLOS Computational Biology Methods paper. Dynamic tissue-level processes, such as development, aging and regeneration, are critical for multicellular organisms. Single-cell RNA sequencing (scRNA-seq) enables us to map the range of cell types and states in these processes at cellular resolution. When using scRNA-seq to study dynamic processes, whether through snapshot or time-series experiments, it is of interest to order cells at different stages along an axis that represents how far along they are on the process under study, based on their transcriptional signatures. The ordering problem, commonly termed pseudotime ordering if it is inferred from data without a known temporal ordering, consists of two main parts: the identification of a trajectory representing the paths that cells follow, and the determination of pseudotime values for individual cells along this trajectory.
This inferred trajectory enables us to study the sequential changes of gene expression during a process, as well as identify branches and instrumental genes at the branching points. More than 70 computational methods to order cells along pseudotemporal axes, known as trajectory inference methods, have been published, which employ different strategies to infer lineage and order cells. While many scRNA-seq trajectory inference methods exist, few have been designed to consider time-series information. Trajectory inference methods that explicitly incorporate temporal information include Waddington-OT, which models cells\u2019 movement through dynamic processes using the optimal transport framework, and CSHMM, which uses a continuous-state hidden Markov model to assign cells to developmental paths [16]. The Tempora method infers cell type-based trajectories from time-series scRNA-seq data. Tempora focuses on identifying how cell types are related across the entire time-series data set, based on the established assumption that cells with similar gene expression profiles are closer in the underlying cell lineage. After identifying cell type transcriptome similarity relationships, Tempora orders these links based on the time-series data. Cell types identified primarily in earlier time points are ordered earlier in the trajectory than those identified primarily in later time points. To build a more robust trajectory, less influenced by small outlier cell populations and the low per-cell sensitivity of current scRNA-seq experimental methods, Tempora first clusters cells with similar transcriptional signatures and infers a trajectory that connects cell clusters rather than individual cells. These clusters represent putative cell types, such as progenitors, immune cells, cardiomyocytes, or stable cell states. Tempora takes as input a preprocessed gene expression matrix from a time-series scRNA-seq experiment and cluster labels for all cells.
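The cluster-level abstraction described above starts from one average expression profile (centroid) per cluster. A minimal Python sketch of that averaging step follows; the function and variable names are illustrative assumptions, not Tempora's actual R API:

```python
import numpy as np

# Average the expression of all cells that share a cluster label to get one
# centroid profile per cluster (illustrative sketch, not Tempora's code).
def cluster_centroids(expr, labels):
    # expr: genes x cells matrix; labels: one cluster label per cell (column).
    labels = np.asarray(labels)
    clusters = sorted(set(labels.tolist()))
    cols = [expr[:, labels == c].mean(axis=1) for c in clusters]
    return np.column_stack(cols), clusters

# Toy example: 3 genes x 4 cells, two clusters of two cells each.
expr = np.array([[1.0, 3.0, 10.0, 12.0],
                 [0.0, 2.0,  5.0,  7.0],
                 [4.0, 4.0,  0.0,  2.0]])
centroids, names = cluster_centroids(expr, ['A', 'A', 'B', 'B'])
# centroids is genes x clusters; column 0 averages the two cluster-A cells.
```

In Tempora these centroids are then mapped from gene space to pathway enrichment space before network inference, which further pools signal across functionally related genes.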
Tempora then calculates the average gene expression profiles, or centroids, of all clusters before transforming the data from gene expression space to pathway enrichment space using single-sample gene set variation analysis (GSVA) (Fig 1). We abstract the trajectory as a network of cell clusters, where vertices represent the cell types or states identified as clusters and directed edges represent temporal transitions between types or states. To infer this network, Tempora uses the established ARACNE method. Tempora also includes a downstream pathway exploration tool to determine and visualize pathways that change significantly over the trajectory. These pathways are identified by fitting a generalized additive model to the enrichment scores of each pathway across all clusters and selecting pathways whose expression patterns deviate significantly from the null model of uniform pathway enrichment scores across all time points. Examples of such pathways include the cell cycle (CDK1), muscle differentiation (MYOG) and a known population of contaminating myofibroblast cells (SPHK1). Reviewer's Responses to Questions. Comments to the Authors: Please note here if the review is uploaded as an attachment. Reviewer #1: Executive Summary: Although single-cell RNA-seq (scRNA-seq) provides a singular snapshot in time, sequencing a representative population of cells within a dynamic and developmental process can yield snapshots representing a range of dynamic and developmental stages that can then be ordered in pseudotime and within trajectories. Thus, to infer this pseudotime ordering of cells within putative trajectories by taking advantage of the asynchronous nature of developmental processes, a number of computational trajectory inference methods have been developed. Here, Tran and Bader present a trajectory inference method called Tempora, specifically for time-series scRNA-seq data, that can order populations of cells within trajectories while taking into consideration the true time information introduced by the time series.
While the ability to infer trajectories from time-series scRNA-seq data is an important and much needed computational methodological development, the rigor of benchmarks and extent of comparisons to existing methods presented here need substantial improvement in order to substantiate conclusions, thus precluding publication at this time.---Major Comments:1. It remains unclear how many time series points are necessary for Tempora. Will Tempora be compatible and robust when only two or three time points are available? Are the inferred trajectories for HSMM and murine cortex stable to downsampling of time points?2. A number of unified single-cell analysis approaches have recently been published to accommodate joint embeddings across batches of data such as time series data, including Harmony and BBKNN to name a few. Indeed, the authors use Harmony corrected PCs prior to Tempora to ensure that trajectories are not driven by batch-effects. It is currently unclear from the Methods section whether similar batch-correction treatments were applied in the Monocle and TSCAN comparison, such that inferior performance noted for Monocle and TSCAN could be attributed to the lack of controlling for batch-effects.3. A major limitation of Tempora is that cells are collapsed into populations and the populations are ordered in a trajectory, rather than the cells themselves, thereby losing the single-cell resolution of inferred pseudotemporal ordering. It is unclear why Tempora elects to collapse cells into populations given the dynamic and continuous nature of developmental processes. A discussion of the benefits and limitations of this approach is warranted.4. While the authors note over 70 available trajectory inference approaches have been developed, only two (Monocle and TSCAN) are directly compared to Tempora. 
While it is beyond the scope of this paper to compare Tempora to all published trajectory inference methods, an explanation of why these two particular methods were chosen is warranted. 5. Only two time-series datasets were evaluated here (HSMM and murine cortex), though more are available, including Zebrafish and human preimplantation embryos to name a few. Notably, the two time-series datasets evaluated by Tempora currently have 271 and 6,000 cells respectively, while approximately 40,000 cells are available in the Zebrafish dataset, spanning many more distinct lineages and complex trajectories. Particularly as larger and larger scRNA-seq datasets are generated, it will be important to assess Tempora on a large dataset to better evaluate its scalability, usability, and accuracy for more complex trajectories.---Minor comments:1. The authors note that Tempora begins with an optimized clustering of scRNA-seq data and that over- or under-clustering may result in inaccurate trajectories. Readers and users would greatly benefit from some general guidelines to help them determine whether their data are over- or under-clustered. The current level of description in the Methods and Discussion requires further elaboration. 2. The authors note that Tempora assumes that cell differentiation progresses unidirectionally from stem/progenitor states to differentiated states. It seems that application to aberrant disease programs such as cancer, where these assumptions are violated, would produce erroneous results. A discussion of this caveat should be noted in the discussion to mitigate user error. 3. Without lineage tracing data, it is unclear how the current gold standard was established. Is the gold standard resolved at the population level or at single-cell resolution? Please clarify. 4.
The level of detail in terms of parameters used for Harmony, Monocle and TSCAN currently presented in the Methods section needs further elaboration to ensure reproducibility. Reviewer #2: The manuscript by Tran and Bader proposes a computational pipeline, Tempora, which builds cell trajectories for time-series scRNA-seq data. Compared with existing pseudotime trajectory reconstruction methods for snapshot scRNA-seq data, Tempora takes advantage of both pathway information and the experimental temporal information collected in time-series scRNA-seq data to connect and order cell clusters/types across time points. Tempora requires as input time-series scRNA-seq data with the batch effects removed and the cells well clustered. Tempora takes the average gene expression profiles, or centroids, of all clusters, and applies the existing method ARACNE, using pathway enrichment information, on cell clusters/types to infer the network. Tempora then uses the available temporal information from the input data to determine edge directions. The authors evaluated Tempora on two time-series scRNA-seq data sets, demonstrated that Tempora can accurately predict the lineages, and showed that Tempora outperforms the Monocle2 and TSCAN methods, both of which are designed for snapshot scRNA-seq data. Overall, Tempora combines existing tools and uses more information than the state-of-the-art methods for snapshot scRNA-seq data. However, the method is not clearly stated and the results are not convincing. Major points: TASIC (Bioinformatics 2017: https://doi.org/10.1093/bioinformatics/btx173), SINCERITIES (Bioinformatics 2018: https://doi.org/10.1093/bioinformatics/btx575), CSHMM (Bioinformatics 2019: https://doi.org/10.1093/bioinformatics/btz296), TSEE (BMC Genomics 2019: https://doi.org/10.1186/s12864-019-5477-8), Waddington-OT (Cell 2019: https://doi.org/10.1016/j.cell.2019.01.006).
(1) There are already a number of computational methods developed for time-series scRNA-seq data, for example TASIC, SINCERITIES, CSHMM, TSEE and Waddington-OT (links above); however, the authors did not cite and compare with any of them. (2) The novelty and strength of Tempora are not clearly stated. In order to infer cell cluster networks with the existing method ARACNE, Tempora has to treat single-cell data as bulk sequencing data by taking the averaged gene expression profiles, or centroids, of all clusters. A lot of information, such as the cell heterogeneity (stochasticity of cells) offered by single-cell sequencing, is not modeled and is lost. (3) Please clearly state why pathway information can help to identify cell cluster/type transition processes. The pathway information can be incomplete for the biological processes studied; will the method still work? Single-cell data have a high rate of dropout events. Will the results of the pathway enrichment analysis be sensitive to dropout events? (4) How many parameters does Tempora have? How sensitive are the results to them? For example, in the implementation of ARACNE, is there any score for removing edges? (5) Is Tempora sensitive to clustering results? When using the same clustering results as Monocle2 and TSCAN, can Tempora still outperform them? Minor points: (1) Since ARACNE is a key component of Tempora, please provide a detailed description of ARACNE in the supplementary material. (2) It is not clear how the time-dependent pathways are detected, for example as in Fig 2b and 2c. Reviewer #3: The authors present a new method to infer trajectories from single-cell data, that first calculates a pathway enrichment score for clusters, uses the similarity in this enrichment to connect these clusters, and finally determines a direction for the edges between clusters using time series information. The manuscript is well written and the results/methods are clearly described.
The method is also well implemented as an R package, with good documentation and a nice tutorial. However, there are several flaws regarding the benchmarking and evaluation that, in my opinion, still need to be addressed. Major comments: - As the authors discuss, most current TI methods are integrated by the Dynverse group. Why then did the authors only compare their method to two of the oldest methods - Monocle and TSCAN - both of which did not perform particularly well in that benchmarking study (10.1038/s41587-019-0071-9)? The benchmarking really needs to be improved on several facets: - More methods need to be included, ones which performed the best according to this benchmarking study and/or are the current state of the art (e.g. Monocle 3 and PAGA). - Other methods that also use temporal information should be evaluated; Waddington-OT is an example of this. - Tempora does not output pseudotime values for each of the cells, so I understand that using the \"Cell positions\" metric in the Dynverse study is difficult. Nonetheless, it would be worthwhile to try it out, given that the authors of this benchmarking study also applied it to methods that only output clusters. - A comparison on 2 datasets is quite limited, especially given the sheer number of datasets available nowadays. I understand that time series information should be available, but there are plenty of time series experiments, especially in the cell reprogramming field. - A benchmark on synthetic data can also be very valuable; I think both PROSSTT and the generator from Dynverse output time series data (or can be adapted so that they can). - How scalable is Tempora with an increasing number of cells? I guess this is driven by the speed of the initial steps (pathway enrichment and clustering). How does this compare to other methods?
- \"Tempora assumes that user input includes an optimized clustering solution for their data.\" In my experience, the clustering is the step that has the largest influence on the quality of an inferred trajectory. If I understand correctly, the authors used the same clustering strategy for both datasets, i.e. using scClustViz and selecting the # of clustering as described in the methods. Do the authors suggest that the users always use this strategy? How can we be sure that this clustering is not the reason why Tempora performs better than Monocle and TSCAN? Why didn't the authors provide this clustering to either of these methods, or alternatively use the clustering present in either of these methods for Tempora?- \"To calculate the mismatch score between a pair of graphs, we first label each cluster in the inferred trajectory with the cell type(s) it contains, based on expression of a set of wellknown marker genes.\" Does this mean that this was done separately for the clusters obtained by Tempora, and the clusters obtained by Monocle and TSCAN? This looks like a very arbitrary way to label clusters, and can easily add a bias into this benchmark which is supposed to be objective. Why didn't the authors use a method that can label cells/clusters automatically? . This would make the analysis more objective.- How would the accuracy score of Tempora change if you would determine the direction of the edges in the same way as you did for Monocle and TSCAN (as described in the methods)? The main selling-point of Tempora is that it uses time information, so this is an important experiment to check whether the temporal information adds something (compared to determining the direction using marker genes)- It seems to me that the \"similarity threshold\" is an important parameter, that depends on the dataset. What value did the authors choose for the datasets? Was this optimized? 
If so, how can we be sure that the benchmark is not biased towards Tempora, given that the parameters were not optimized for Monocle and TSCAN? Minor comments: - \"The central concept used by most of these methods to infer temporal cell trajectories is that cells that are close to each other in time (e.g. cells that are closely related to each other via differentiation) will have similar transcriptomes. However, sometimes actual time point information is available to use for this purpose, in which case it should be used directly.\" This section of the Author Summary is confusing to me. What is meant by \"time\" here, pseudotime or real-life time? Most trajectory inference methods do not assume that cells are close to each other in real-life time (i.e. snapshot data). I think it should be clearly defined how cells can be related in time, i.e. whether their differentiation process happens in a synchronous manner (i.e. time-series data). - It's a bit weird that Monocle v1 (ICA/MST) is discussed in detail in the introduction, but then Monocle v2 is used in the evaluation. Also, it should always be clearly stated which of these two methods is meant, because the name \"Monocle\" is used for both v1 and v2 throughout the manuscript. ********** Have all data underlying the figures and results presented in the manuscript been provided? (PLOS Computational Biology's data availability policy requires that numerical data underlying graphs or summary statistics be provided in spreadsheet form as supporting information, and that large-scale datasets be made available via a public repository.) Reviewer #1: Yes. Reviewer #2: Yes. Reviewer #3: Yes. ********** If published, this will include your full peer review and any attached files. PLOS authors have the option to publish the peer review history of their article and your responses to reviewer comments.
If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Qing Nie, Associate Editor, PLOS Computational Biology; Thomas Lengauer, Methods Editor, PLOS Computational Biology. *********************** A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately. Reviewer's Responses to Questions. Comments to the Authors: Please note here if the review is uploaded as an attachment. Reviewer #1: Executive Summary: In this revision, the authors have expanded on the number of trajectory inference methods compared and also included a larger murine cerebellar dataset for benchmarking. The authors have addressed my previous concerns. I have the following set of minor comments: 1. While the authors have included new analyses highlighting the importance of batch correction prior to trajectory inference, it remains unclear whether the inferior performance noted for Monocle, for example, could be attributed to inferior batch correction rather than the trajectory inference itself. Presumably, a minimum spanning tree could be constructed on the Harmony-corrected PCs, so a comparison based on the same batch correction across trajectory inference methods should be achievable. 2.
With the new larger murine cerebellar dataset, the authors now note that \"when applied on the ~19,000 gene x ~55,000 cell mouse cerebellar development expression matrix, Tempora completes in an average of 60 seconds, while Monocle 3 takes 1700 seconds on a modern personal computer.\" While this is a notable improvement in computational efficiency, does Tempora scale linearly with the number of genes or cells? Or will the runtime grow exponentially with the number of clusters? This may be useful for readers as the size and complexity of single-cell RNA-seq datasets increase. Reviewer #2: The authors have done an excellent job in the revision of the manuscript and all my concerns have been addressed. I have one comment for the authors. Dimensionality reduction is very important for the downstream analysis of time-series scRNA-seq data, but few studies have addressed it. I hope the authors can take a look at the TSEE algorithm (BMC Genomics 2019: https://doi.org/10.1186/s12864-019-5477-8), the first dimensionality reduction algorithm that can incorporate the time stage information of time-series scRNA-seq data. It outperforms existing methods in preserving local and global structures as well as enhancing the temporal resolution of samples. It can uncover subtle gene expression patterns, facilitating further downstream analysis. If published, this will include your full peer review and any attached files. PLOS authors have the option to publish the peer review history of their article. Data Requirements: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5. Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information.
This includes all numerical values that were used to generate graphs, histograms, etc. Reproducibility: http://journals.plos.org/ploscompbiol/s/submission-guidelines#loc-materials-and-methods. To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. 27 Jul 2020: Submitted filename: Tempora_Response_to_Reviewers_Round2.docx. 29 Jul 2020: Dear Ms Tran, We are pleased to inform you that your manuscript 'Tempora: cell trajectory inference using time-series single-cell RNA sequencing data' has been provisionally accepted for publication in PLOS Computational Biology. Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated. IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript. Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article.
All press must be co-ordinated with PLOS. Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. Best regards, Qing Nie, Associate Editor, PLOS Computational Biology; Thomas Lengauer, Methods Editor, PLOS Computational Biology. *********************** 4 Sep 2020. PCOMPBIOL-D-19-02083R2. Tempora: cell trajectory inference using time-series single-cell RNA sequencing data. Dear Dr Bader, I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course. The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers. Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work! With kind regards, Laura Mallard. ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol. PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom"}
+{"text": "The ability to accurately predict the causal relationships from transcription factors to genes would greatly enhance our understanding of transcriptional dynamics. This could lead to applications in which one or more transcription factors could be manipulated to effect a change in genes, leading to the enhancement of some desired trait. Here we present a method called OutPredict that constructs a model for each gene based on time series (and other) data and that predicts that gene's expression at a previously unseen subsequent time point. The model also infers causal relationships based on the most important transcription factors for each gene model, some of which have been validated by previous physical experiments. The method benefits from known network edges and steady-state data to enhance predictive accuracy. Our results across B. subtilis, Arabidopsis, E. coli, Drosophila and the DREAM4 simulated in silico dataset show improved predictive accuracy ranging from 40% to 60% over other state-of-the-art methods. We find that gene expression models can benefit from the addition of steady-state data to predict expression values of time series. Finally, we validate, based on limited available data, that the influential edges we infer correspond to known relationships significantly more often than expected by chance or produced by state-of-the-art methods. A typical approach to gene network inference is to take the results of an assay, most often binding assays such as ChIP-seq, and divide the data into training and test sets. This involves excluding some of the transcription factor-target binding observations, and using the remaining training set to infer the hidden data by some method. An issue with this approach is that it presumes that the majority of binding events are physiologically meaningful, in the sense that they influence the expression of the target gene.
However, it has been shown that the physiological importance of binding can be minor [5], a problem for state-of-the-art methods for gene regulatory network inference [6]. Unfortunately, genomic interactions are decidedly non-linear, noisy and incomplete [7]. Another frequent issue with the paradigmatic network inference approach is that the resulting networks encode linear interactions. This modeling strategy makes pragmatic sense in the common situation in which the number of possible interactions is much greater than the number of experimental data points, because linear models have fewer parameters to fit. For these reasons, we have approached the causality problem differently: we first attempt to build a model for each gene g that can predict the expression of that gene in left-out time points. If our model is good, then the transcription factors that most influence gene g likely constitute the causal elements for g [8], though noise is always an issue. The form of the model is important here. Small data sizes relative to the number of causal elements preclude the use of neural networks and, in particular, deep neural networks, which would increase the number of model parameters. The presence of non-linear relationships excludes linear methods. As a compromise, therefore, this work uses Random Forests (RF) because they model non-linear synergistic interactions of features and perform well even when sample sizes are small. OutPredict (OP) consists of an ensemble of regression trees tuned through extensive bootstrap sampling. We show the following: (i) The OutPredict model allows for non-linear dependencies of target genes on causal transcription factors; (ii) OutPredict can incorporate time series, steady-state, and prior (e.g.
known Transcription Factor-target interactions) information to bias the forecasts; (iii) OutPredict forecasts the expression value of genes at an unseen time point better than state-of-the-art methods, partly because of steady-state and known interaction data; and (iv) the important edges inferred by OutPredict correspond to validated edges significantly more often than those from other state-of-the-art methods. Another relevant time series method from the literature is Granger causality, which has been used successfully for small numbers of genes. As is well known [16], however, Granger causality can give misleading results in such a setting because the time series are short, causal relationships are non-linear, and the time series are non-stationary. Public datasets vary greatly by organism with respect to experimental design, data density, time series structure and assay technologies. To show its general applicability, we test OutPredict on five different species (Table 1): (i) a B. subtilis dataset [17]. This dataset consists of time series and steady-state data capturing the response of B. subtilis to a variety of stimuli [12]. The gold standard network prior is a curated collection of high-confidence edges from high-throughput ChIP-seq and transcriptomics assays on SubtiWiki [18] (we used the parsed data set provided in [19]). (ii) An Arabidopsis dataset. This dataset consists of gene expression levels measured from shoots over the 2-hour period during which the plants are treated with nitrogen [20]. As gold standard network data, we used experimentally validated edges from the plant cell-based TARGET assay, which was used to identify directly regulated genome-wide targets of N uptake/assimilation regulators [12]. (iii) An E. coli dataset. We used as gold standard ancillary data the regulatory interactions aggregated from a variety of experimental and computational methods that have been collected and described in RegulonDB [21].
We retrieved both the parsed expression dataset and the gold standard data from [9]. This dataset includes the E. coli gene expression values, measured at multiple time points following five distinctive perturbations [22]. As gold standard network data, we used the experimentally validated TF-target binding interactions in the DroID database [23]. These interactions come from a combination of ChIP-chip/ChIP-seq, DNase footprinting, in vivo/in vitro reporter assays and EMSA assays across various tissues from 235 publications. Huynh et al. [9] also used this Drosophila data. This dataset consists of gene expression levels covering a 24-hour period; it captures the changes during which the embryogenesis of the fruitfly Drosophila occurs [24]. This synthetic dataset from the DREAM4 competition consists of 100 genes and 100 TFs (any gene can be a regulator). Because this is synthetic data, the underlying causality network is known. OutPredict learns a function that maps expression values of all active transcription factors at time t to the expression value of each target gene (whether a transcription factor or not) at the next time point. Thus, for each gene target, OutPredict learns a many-to-one non-linear model relating transcription factors to that target gene, as in Genie3 [25], iRafNet [26] and dynGENIE3 [9]. When used on a single time series, the Random Forest for each gene is trained on all consecutive pairs of time points except the last time point. For example, if there are seven time points in the time series, then the Random Forest is trained based on the transitions from time point 1 to 2, 2 to 3, \u2026, 5 to 6. Time point 7 will be predicted based on the trained function when applied to the data of time point 6.
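The train/test construction just described (train on the transitions 1 to 2 through 5 to 6, then predict time point 7 from time point 6) can be sketched as follows. The function name and data layout are illustrative assumptions, not OutPredict's actual code; replicates, when present, are paired in all combinations, as the Methods describe:

```python
import numpy as np
from itertools import product

# Sketch: train on every consecutive pair of time points, hold out the
# transition into the last time point for testing, and pair replicates in
# all k1 x k2 combinations (illustrative, not OutPredict's actual code).
def time_series_split(series):
    # series: list over time points; each entry is a list of replicate vectors.
    X_train, y_train = [], []
    for t in range(len(series) - 2):          # transitions 1->2, ..., (T-2)->(T-1)
        for a, b in product(series[t], series[t + 1]):
            X_train.append(a)                 # TF expression at time t
            y_train.append(b)                 # expression at time t+1
    X_test = list(series[-2])                 # predict the last time point...
    y_test = list(series[-1])                 # ...from the second-to-last one
    return (np.array(X_train), np.array(y_train),
            np.array(X_test), np.array(y_test))

# Seven time points, one replicate each, three genes per expression vector.
series = [[np.full(3, float(t))] for t in range(1, 8)]
X_tr, y_tr, X_te, y_te = time_series_split(series)
# 5 training transitions (1->2 ... 5->6); time point 7 is predicted from 6.
```

A per-gene model would then regress one column of `y_train` on `X_train`; the held-out last time point never enters training, matching the evaluation protocol in the text.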
The net effect is that the testing points are not used in the training in any way, because the test set includes only the last time points of each time series. When multiple time series are available, OutPredict trains the Random Forest on all consecutive pairs of time points across all time series. Further, OutPredict treats replicates independently, viz. if there are k1 replicates for time point t1 and k2 for the subsequent time point t2, then we consider k1 × k2 combinations in the course of training. The result of the training is a single function f for each target gene that applies to all time series. To test the quality of function f, we evaluate the mean-squared error (MSE) on the last point of every time series for that target gene. The Random Forest uses bootstrap aggregation, where each new tree is trained on a sub-sample of the training data points. Each tree is built on a bootstrap sample covering approximately 2/3 of the training dataset; bootstrap sampling is done with replacement, and the remaining 1/3 of the training set is used to compute the out-of-bag score. The out-of-bag error for a given training data point is estimated by computing the average difference between the actual value for that point and the predictions of the trees that do not include the point in their bootstrap sample. Thus, the out-of-bag calculation is done on training data only. All our experiments used Random Forest ensembles of 500 trees to avoid overfitting, implemented with RandomForestRegressor in sklearn [27]. Pruning did not improve the out-of-bag score, so the experiments used the default parameters for pruning. OutPredict uses prior data to bias the training of the Random Forest model: a decision tree node will be biased to include transcription factor X1 in the model of gene g in preference to transcription factor X2 if the prior data indicates a relationship between X1 and g but none between X2 and g.
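The out-of-bag bookkeeping described above is available directly in sklearn's RandomForestRegressor, which the text says OutPredict builds on. A minimal sketch on synthetic data (the data layout and signal strength are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.random((60, 8))                       # TF expression at time t
y = X[:, 0] * 2.0 + rng.normal(0, 0.1, 60)    # target expression at t+1

# oob_score=True scores each training point using only the trees whose
# bootstrap sample (drawn with replacement, leaving ~1/3 of points out
# per tree) did not contain it, so the estimate uses training data alone.
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
oob = rf.oob_score_   # R^2 of the out-of-bag predictions
```

This is the quantity the authors report using for model selection; no held-out test data is consumed in computing it.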
Specifically, each decision tree node within a tree of the Random Forest is biased toward transcription factors with prior support. The gold standard for OutPredict is a matrix [Genes × TFs] containing 0s and 1s, which indicates whether we have prior knowledge about the interaction of a transcription factor (TF) and a gene. Hence, if the entry for a TF and gene g is 1, then there is an inductive or repressive edge; if it is 0, then there is no known edge. To compute prior weights from the gold standard prior knowledge, we assign a value v to all interactions marked 1 and 1/v to the interactions marked 0. The r candidate transcription factors X1, X2, ..., Xr considered at each tree node are a subset of all transcription factors, randomly sampled with a bias based on the weights of the priors, as in iRafNet (see the formula of the MRFSS+TS model). OutPredict then evaluates the (variance reduction × prior weight) criterion for the selected subset at each node and branches on the transcription factor with the highest I(d). Further, each prior dataset can be weighted separately depending on how helpful it is in making predictions on time series. By contrast, iRafNet [26], for example, combines all prior datasets and weights them equally at each tree node. An equal weighting strategy may decrease overall performance when, for example, one prior dataset is less informative or is error-rich. As an aside, iRafNet can make out-of-sample predictions, but only on steady-state data. OutPredict incorporates steady-state (SS) data into the same Random Forest model as the time series (TS) data. Let X denote the expression values of the transcription factors and yj be a target. We seek a function that maps X to yj, either in steady-state or for time series. For steady-state data, we use all experimental conditions to infer a function yj = fsteadyj(X), where X must not include yj; that is, for each gene yj, we seek a function from all other genes to yj. For time series, OutPredict supports two types of models:
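The prior-weighting rule just described (weight v for gold-standard edges, 1/v otherwise, then biased sampling of the r candidate TFs at each tree node) can be sketched as follows; the function name and the values of v and r are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def candidate_tfs(prior_row, v=2.0, r=3):
    """Sample r candidate TFs for one target gene, biased by priors.

    prior_row: 0/1 vector over TFs from the gold-standard matrix
    (1 = known edge to this gene). Known edges get weight v and
    unknown ones 1/v, as described in the text.
    """
    w = np.where(prior_row == 1, v, 1.0 / v)
    p = w / w.sum()
    # Sample without replacement, proportionally to the prior weights.
    return rng.choice(len(prior_row), size=r, replace=False, p=p)

prior_row = np.array([1, 0, 0, 1, 0, 0])   # hypothetical priors for one gene
picked = candidate_tfs(prior_row)
```

In the full method, the node then branches on whichever sampled TF maximizes the prior-weighted variance reduction I(d).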
1. Time-Step (TS) model: yj(ti+1) = f(X(ti)). 2. Ordinary Differential Equation natural logarithm (ODE-log) model: (yj(ti+1) − yj(ti)) / ln(ti+1 − ti) = f(X(ti)) − α·yj(ti). Here X(ti) denotes the expression values of all the transcription factors at time ti, yj(ti+1) denotes the expression of gene j at ti+1, and α is the degradation term; all genes are assumed to have the same α. In this way, OutPredict integrates steady-state (SS) data with time series (TS) data in a single Random Forest. We have found that the ODE-log model achieves a better out-of-bag score than just using the linear difference (ti+1 − ti) in the denominator. This makes some intuitive sense because many phenomena in nature show a decay over time; empirically, for example, the difference in expression value between 5 and 20 is more than 1/3 of the difference between 5 and 60 in the Arabidopsis time series (see also Supplementary Fig.). OutPredict treats as trainable hyper-parameters (i) which of these two methods (ODE-log or Time-Step) to use, (ii) the prior weights of the TFs, and (iii) the degradation term for the ODE-log model. As far as we know, this is the first time the choice of model and degradation parameter value have been treated as trainable hyper-parameters, as we show in a Supplementary Table. Because the model for each target gene is independent, OutPredict calculates the models for the target genes in parallel. Computationally, at a given node d in a tree, OutPredict computes the product of (i) the standard Random Forest importance measure, defined as the total reduction of the variance of y, and (ii) the weight given by the priors. Here is the formula used for the reduction of variance [8], modified by the prior weighting: I(d) = w(Xi, y)·(|S|·vary(S) − |Sl|·vary(Sl) − |Sr|·vary(Sr)), where d is the current decision node being evaluated, S is the subset of samples that are below decision node d in the tree, Sl and Sr are the subsets of experiments on the left and right branches of decision node d, respectively, vary is the variance of the target gene in a given subset, and w(Xi, y) is the prior weight of Xi for a given target gene y, which causes features with high prior weights to be chosen with higher probability when splitting a tree node during tree construction. For the purpose of inferring the relative influence of transcription factors on genes and constructing a network of such potential causal edges, let T be the number of trees and Di be the set of nodes which branch based on transcription factor (feature) Xi. Computationally, the overall importance score si of Xi is the sum of the variance improvements I(d) over all nodes d in Di divided by the number of trees T. The resulting variable importance value si is more robust than the value obtained from any single tree because of the variance reduction resulting from averaging the score over all the trees [8]. High importance scores identify the set of the likely most influential transcription factors for each target gene. We measure the prediction performance of our algorithm using the Mean Squared Error (MSE) of the predictions on out-of-sample data. For each species tested, we compare the performance of the different algorithms on time series alone and on time series data with prior information. One comparison method is a Neural Network [13], an approach developed specifically for time series gene expression prediction (details in the supplement); for it, we perform hyper-parameter optimization for the learning rate of the stochastic gradient descent optimizer and the dropout rate. Regularization is thus applied through dropout, which helps reduce overfitting.
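To make the two time series models concrete, the sketch below builds the regression targets that f would be trained on under each. Caveat: the exact ODE-log equation is reconstructed from the text's description (natural log of the time difference in the denominator, shared degradation term α), so treat this as an assumption rather than the authors' verbatim formula, and the α value here is arbitrary:

```python
import numpy as np

def regression_targets(y, t, alpha=0.02, model="ode-log"):
    """Build per-transition regression targets for one target gene.

    y: expression of the target gene at each time point
    t: the (possibly irregular) sampling times
    alpha: assumed shared degradation rate (a trainable
    hyper-parameter in the text; the value here is arbitrary).
    """
    if model == "time-step":
        # Time-Step: f(X(ti)) is trained to predict y(ti+1) directly.
        return y[1:]
    # ODE-log (reconstructed): (y(ti+1) - y(ti)) / ln(ti+1 - ti)
    #   = f(X(ti)) - alpha * y(ti)
    # so the target for f is the left-hand side plus alpha * y(ti).
    dt = np.diff(t)
    return (y[1:] - y[:-1]) / np.log(dt) + alpha * y[:-1]

t = np.array([0.0, 5.0, 20.0, 60.0, 120.0])   # irregular sampling times
y = np.array([1.0, 1.4, 1.9, 2.1, 2.0])
ts_targets = regression_targets(y, t, model="time-step")
ode_targets = regression_targets(y, t)
```

With irregular spacing, dividing by ln(Δt) rather than Δt damps the influence of long gaps, matching the decay intuition in the text.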
As mentioned above, we compared our weighted Random Forest with two related works: (i) a Neural Network (NN) with a hidden layer [13]; and (ii) the Random Forest algorithm DynGenie3 [9], which is an extension of Genie3 [25] that is able to handle both steady-state and time series experiments through the adaptation of the same ordinary differential equation (ODE) formulation as in the Inferelator approach [6]. iRafNet [26], as noted above, does not handle time series data as the main input data. We also compared against a baseline, penultimate value prediction, which takes the prediction of the expression of a gene at a given time point to be the same value as the expression of that gene at the immediately previous time point. To evaluate the performance of our forecasting predictions, we compare the predicted expression values to the actual expression values for each gene. The non-parametric paired test we use throughout this paper compares any two prediction methods M1 and M2 as follows: (i) format the data from the original experiment as a series of rows, one row for each gene, containing the gene identifier, the M1 prediction for that gene, the M2 prediction, and the real value (call this table Orig); (ii) calculate the figure of merit for each gene and each method; (iii) calculate the difference, Diff, in the average of the figure of merit of the M1 values and the M2 values; (iv) without loss of generality, assume Diff is positive; (v) randomization test: for some large number of times N, starting each time with Orig, for each gene g, swap the M1 and M2 values for gene g with probability 0.5, then recalculate the overall difference of the figure of merit for M1 and for M2 and see whether that difference is greater than Diff; if so, that run is considered an exception; (vi) the p-value of Diff (and therefore of the change in the figure of merit) is the number of exceptions divided by N. When the p-value is small, the observed difference is unlikely to have happened by chance. For the comparisons, OutPredict used whichever technique (either the Time-Step or ODE-log) the validation analysis found to be the best model using the out-of-bag score. We found that the weights/importance from high quality prior data significantly improve predictions in B. subtilis. There are four reasons for the relative success of OutPredict compared to other methods: (i) the use of Random Forests, which provides a non-linear model (in contrast to regression models) that requires little data; (ii) the incorporation of prior information such as gold standard network data (in contrast to DynGenie3); (iii) the adjustment of the weights of predictors; and (iv) the selection during training of the optimal technique (either the Time-Step or our ODE-log model) that validation analysis found to be the best using the out-of-bag score. In summary, OutPredict achieves high prediction accuracy and significantly outperforms baseline and state-of-the-art methods on data sets from four different species and the in silico DREAM data, as measured by mean squared error. Further, as a proof of concept, we have seen that the high-importance edges correspond to individually validated regulation events much more often than by chance in both Arabidopsis and DREAM. The code is open source and is available at https://github.com/jacirrone/OutPredict (10.5281/zenodo.3611488). Supplementary Information."}
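The paired randomization test in steps (i)-(vi) can be implemented in a few lines; the per-gene errors below are synthetic and the function name is ours:

```python
import numpy as np

def paired_randomization_test(err_m1, err_m2, n_rounds=2000, seed=0):
    """Non-parametric paired test, following the steps in the text:
    per-gene figures of merit (here, errors) for M1 and M2 are swapped
    with probability 0.5 in each round; the p-value is the fraction of
    rounds whose mean difference exceeds the observed one."""
    rng = np.random.default_rng(seed)
    err_m1 = np.asarray(err_m1, float)
    err_m2 = np.asarray(err_m2, float)
    diff = abs(err_m1.mean() - err_m2.mean())   # observed Diff, made positive
    exceptions = 0
    for _ in range(n_rounds):
        swap = rng.random(err_m1.size) < 0.5    # swap M1/M2 per gene
        a = np.where(swap, err_m2, err_m1)
        b = np.where(swap, err_m1, err_m2)
        if abs(a.mean() - b.mean()) > diff:
            exceptions += 1
    return exceptions / n_rounds

# Hypothetical per-gene squared errors for two methods:
rng = np.random.default_rng(1)
m1 = rng.random(50) * 0.5      # method M1: smaller errors
m2 = m1 + 0.3                  # method M2: consistently worse
p = paired_randomization_test(m1, m2)
```

Because M2 is worse on every gene here, almost no random swap pattern reproduces the observed gap, so the p-value comes out near zero.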
+{"text": "In a previous study, air sampling using vortex air samplers combined with species-specific amplification of pathogen DNA was carried out over two years in four or five locations in the Salinas Valley of California. The resulting time series data for the abundance of pathogen DNA trapped per day displayed complex dynamics with features of both deterministic (chaotic) and stochastic uncertainty. Methods of nonlinear time series analysis developed for the reconstruction of low-dimensional attractors provided new insights into the complexity of pathogen abundance data. In particular, the analyses suggested that the length of time series data that it is practical or cost-effective to collect may limit the ability to definitively classify the uncertainty in the data. Over the two years of the study, five location/year combinations were classified as having stochastic linear dynamics and four were not. Calculation of entropy values for either the number of pathogen DNA copies or for a binary string indicating whether the pathogen abundance data were increasing revealed (1) some robust differences in the dynamics between seasons that were not obvious in the time series data themselves and (2) that the series were almost all at their theoretical maximum entropy value when considered from the simple perspective of whether instantaneous change along the sequence was positive. “We now have to look at apparently random time series of data, be they from the stock market, or currency exchanges, or in ecology, and ask: are we seeing ‘random walks down Wall Street’ or deterministic chaos, or, often more likely, some mixture of the two.” \u2014Sir Robert May. The study of disease dynamics in plant pathology has been dominated by analysis of situations where disease increases monotonically within single growing seasons or over several seasons.
Developments in technology for monitoring airborne inoculum of target species offer the promise of methodological advance to epidemiologists with an interest in creating evidence-based, within-season decision rules for disease management in crops. Given such potential applications for spore traps and quantification of target nucleic acid sequences, it is important that efforts are made to develop an analytical approach which takes into account the relevant statistical properties of the data that these monitoring methods generate. What can experimenters expect to see when they collect such data? What types of dynamical behavior are likely to be apparent, and how should the results be interpreted in relation to the use of the data in disease management? The work we report here falls into the broad theme on decision-making that runs through several of the contributions to this Special Issue of Entropy. In the case of the current work, our effort is aimed more at understanding the basic properties of the data than at deriving decision rules from them. The work is motivated by our belief that it is important to be aware of any informational limitations inherent in the data, so that efforts to use air sampling as a means of forecasting interventions occur with realistic expectations. The work is intended to be an initial contribution to the literature; one from which we hope a range of further investigations covering a wider range of pathogen systems will develop. As already noted, airborne concentrations of pathogen inoculum have been monitored using vortex (spinning rod) air samplers combined with species-specific quantitative polymerase chain reaction (qPCR) in a number of situations. In some cases, the approach has already been used commercially for disease management.
Carisse and colleagues were pioneers, developing one of the first examples in commercial agriculture; in their case, to manage fungicide applications to control Botrytis leaf blight in onion in Quebec, Canada [9,10]. The use of spore traps linked with qPCR assays has been developed successfully for disease monitoring in several other pathosystems, including monitoring for early season inoculum of grape powdery mildew. Spinach downy mildew, caused by the obligate oomycete pathogen Peronospora effusa, is the most important threat to spinach production worldwide, and Choudhury et al. examined the dynamics of its airborne inoculum in a previous study. Recent developments in time series analysis provide the tools used to re-examine those data here. Airborne inoculum of P. effusa was sampled at four locations in the Salinas Valley of California in 2013 and 2014 using vortex air samplers constructed by Dr. Walt Mahaffee and operated by Dr. Steven Klosterman. The presence of the inoculum and its quantification were achieved using qPCR amplification of a species-specific DNA sequence in the total DNA extract from the sampler rods. Details of the sampling procedure, qPCR primers, reaction conditions, and translation of the qPCR cycle threshold number to daily pathogen DNA copy number are described in Klosterman et al. Samples were recovered from the air samplers at an irregular sampling interval of two or three days, depending on the availability of technical staff. In the original 2016 study, after interpolation of the data to a daily time step, each of the nine time series consisted of 129 observations of the estimated target DNA copy number of P. effusa trapped over the preceding 24 h period. The nine time series were first inspected for evidence of an overall trend in copy number with time. Increasing trends were detected in 7 of the 9 series, and the series were tagged accordingly to indicate their status. Irrespective of whether or not the initial inspection suggested a trend to be present, in order to standardize the pretreatment of the data, a simple linear regression with time was fitted to the natural logarithm of the estimated copy number. The residuals from the regression were then exponentiated to produce the detrended series that were subsequently used in analysis. In what follows, we refer to these series as tN, indicating the (detrended) copy number on day t; when the corresponding log-transformed values are analyzed, they are denoted tn. For each series, we obtained the autocorrelation function (ACF), the partial autocorrelation function (PACF), and the phase plot of the log-transformed series, with t+1n = ln(t+1N) on the ordinate and tn = ln(tN) on the abscissa. The PACF differs from the standard autocorrelation function in that it considers only the direct effect of observations at one point in the series on observations separated by lag τ, excluding indirect effects operating through the interposing points in the series. To characterize the time series in terms of nonlinear dynamics, we followed an approach suggested by Huffaker et al., implemented with the R packages “nonlinearTseries”, “TseriesChaos”, and “TseriesEntropy”; additional calculations used the package “entropy”. Our code is available at https://github.com/robchoudhury/spore_trap_information_theory; the R code is provided as is, and we offer no guarantee that it will work when adapted to other data sets. Surrogate tests (see below) are implemented in both nonlinearTseries and TseriesChaos. The basic idea in both cases is to construct an empirical hypothesis test by resampling from the observed data, with the test statistic being a suitable property of the data that will hold under linear dependence but not otherwise. One of the simplest approaches, the one implemented in nonlinearTseries, relies on the idea that a Gaussian linear process will show time reversibility. Randomized permutations are obtained using a method in which the phases of the Fourier transform of the observed data are randomized.
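The detrending pretreatment described above (fit a linear regression of log copy number on time, then exponentiate the residuals) might look as follows in Python; the authors worked in R, so this is only an illustrative translation on simulated data:

```python
import numpy as np

def detrend(copies, t):
    """Detrend a copy-number series as described in the text: regress
    ln(copy number) on time, then exponentiate the residuals to give
    the detrended series tN."""
    logs = np.log(copies)
    slope, intercept = np.polyfit(t, logs, 1)
    residuals = logs - (intercept + slope * t)
    return np.exp(residuals)

t = np.arange(129, dtype=float)
rng = np.random.default_rng(3)
copies = np.exp(0.02 * t + rng.normal(0, 0.5, t.size))  # trend + noise
tN = detrend(copies, t)       # detrended copy numbers
tn = np.log(tN)               # log-transformed detrended series
```

Because ordinary least squares residuals have mean zero, the log-transformed detrended series is centered on zero by construction.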
A two-sided hypothesis test is implemented to examine whether there is evidence that the value calculated from the observed data differs from the set of surrogates generated in the data resampling routine. We set the “significance level” option at 0.02, which results in the observed data being treated as one observation in a set of 100, with the two-sided test examining whether the observed data are in the p = 0.02 upper or lower tail of the sample. The supplied function includes a built-in diagnostic plot of the resampling test, but we implemented our own diagnostic graphical representation of the outcome of the test. Since nonlinear time series analysis (NLTS) can be time-consuming, an initial step should always be to test in this way for lack of linear dependence in the observed data; surrogate tests are the agreed approach for doing so [13,14]. TseriesEntropy implements a more complex surrogate testing procedure. First, the best-fitting linear autoregressive (AR) model is selected on the basis of the Akaike Information Criterion (AIC). The residuals of the best AR model are resampled (with replacement), and a confidence band is constructed from the resampled series. For each resampled series, a metric entropy measure (Sp) is calculated, and the values for the observed series are compared with the confidence band. If Sp for the observed series falls outside the band, the series can be considered to show nonlinear as opposed to linear dependence at the relevant lags. The entropy-based approach in TseriesEntropy is computationally more demanding than the expectation-based approach in nonlinearTseries. In the initial work, we examined both approaches; the results reported here are for the time-reversibility approach implemented in nonlinearTseries.
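A minimal version of the phase-randomization surrogate generation described above, written in Python for illustration (the R packages named in the text supply this functionality; the series here is simulated):

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """One Fourier-transform surrogate: keep the amplitude spectrum of x,
    randomize the phases, and invert. This preserves the linear
    (autocorrelation) structure while destroying nonlinear dependence."""
    n = len(x)
    fft = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(fft))
    phases[0] = 0.0                      # leave the DC component alone
    if n % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    return np.fft.irfft(np.abs(fft) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 20, 129)) + rng.normal(0, 0.2, 129)
surrogates = np.array([phase_randomized_surrogate(x, rng) for _ in range(99)])
# The observed test statistic is then ranked among the 99 surrogate
# values, giving a two-sided test at p = 0.02 (1 observation in 100).
```

Because only phases change, each surrogate keeps (to numerical precision) the power spectrum, and hence the standard deviation, of the original series.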
Assuming that the surrogate tests indicate sufficient reason to proceed with NLTS, characterization of the dynamics in terms of their tendency to chaotic versus stochastic uncertainty is an important component of the ensuing effort. Following the pioneering work of Takens, one widely used approach is time delay embedding. In the current context, where the ultimate motivation is the hope of using similar series in disease management, the capacity to reconstruct the phase portrait of the whole system is of secondary importance relative to characterizing the dynamics of the observed series. However, in this initial study the focus is on understanding the dynamics rather than immediate practical application, and the time delay embedding approach may be valuable because the features of the dynamics it reveals are informative. Three properties of the series are important in NLTS, these being (i) the average mutual information (AMI), I, of the time series data at successive lags, τ = 0, 1, 2, ... τmax; (ii) the Theiler window, tw; and (iii) the embedding dimension, m. The AMI function is calculated by binning observations and by calculating the mutual information obtained about observation tN being in the ith bin from knowing that observation t−τN is in the jth bin; the results are averaged over all of the available data to produce the average mutual information. A graphical plot of I against lag τ = 0, 1, 2, ... τmax produces an information-theoretic analogue of the ACF plot, but one in which the AMI's general measure of lagged association, as opposed to the linear lagged dependence captured by the ACF, is visualized. The first minimum, or the first occurrence of a value below an empirical threshold, of the AMI function is taken to be an indication of the embedding time delay, d, of the series, since this value indicates a time lag at which observations have, in a general sense, low correlation. The Theiler window [21] is used in estimating the embedding dimension, m (see below). For long time series, both TseriesChaos and nonlinearTseries offer functions to generate a space-time plot from which tw can be selected by choosing a value at which there is a low probability of points being close in the phase space for a given time lag separation. For short time series, such as what we have in the present study (129 observations per series), the space-time plot approach may not give usable results and other options may be needed. As an easily obtained first approximation, Huffaker et al. suggest basing tw on the first minimum of the ACF. In the current case, lacking a reasonable alternative, we opted for a trial-and-error approach to tw. For each series, we started with the value suggested by the first minimum of the ACF, noting also whether this lag separation was longer or shorter than the value suggested by the AMI. Where the AMI reached its first minimum at a longer lag than the ACF, we used a range of estimates for tw and examined the effect of changing tw on the estimated embedding dimension, m. With both the AMI function and the ACF available, we had estimates of both general association and linear correlation with lag, while the original time series and the corresponding phase plots also helped to indicate suitable values. Options for estimating m include the method of False Nearest Neighbors (FNN) offered in TseriesChaos. Under the embedding view, the observed time series of pathogen DNA copies represents only one dimension of a higher-order dynamical system: we can think of the observed series as the whole higher-order dynamical system projected onto a single dimension. From this perspective, points that appear close to one another may actually be widely separated in the full dimensional space of the dynamical system. The idea of the FNN computation is to select a subset of points within a given “radius” of each other but separated by at least the value of tw and to track whether they remain neighbors as the dimensionality of the assumed attractor is incrementally increased.
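The binned AMI calculation described above can be sketched as a plug-in mutual information estimate; bin count, lag range, and the simulated series are arbitrary choices of ours:

```python
import numpy as np

def average_mutual_information(x, lag, bins=10):
    """AMI at one lag via binning: mutual information between the series
    and its lagged copy, averaged over all available pairs."""
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 30, 129)) + rng.normal(0, 0.1, 129)
ami = [average_mutual_information(x, lag) for lag in range(1, 21)]
# The first minimum of this curve suggests the embedding delay d;
# the global minimum is used below as a simplistic stand-in.
d = int(np.argmin(ami)) + 1
```

Plotting `ami` against lag gives the information-theoretic analogue of the ACF plot discussed in the text.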
If the proportion of FNN is plotted against the number of dimensions, m, the first value of m at which the proportion of FNN is minimized provides an estimate of the embedding dimension. In the approach suggested by Cao, the embedding dimension is instead estimated from two functions, E1(m) and E2(m), of putative values for the embedding dimension, m (note that Cao's original notation used d in place of m). Cao's method starts by calculating an overall Euclidean distance measure between pairs of points on time delay vectors for successively larger assumed values of m. Function E1(m) calculates the ratio of the distance measure at successive pairs of values. Cao's insight was that this ratio stabilizes close to 1 if the data are generated by an attractor. The second function, E2(m), focuses on the distance between only the nearest neighbors in the time delay vectors and operates on the distance measure based only on those; as with E1(m), the function returns the ratio between successive pairs. If the data are generated by a deterministic attractor, E2(m) has the property that, at some value m*, E2(m*) ≠ 1, whereas if the data are generated by a process dominated by stochastic noise, E2(m) ≅ 1 for all m. Thus, in addition to providing an estimate of the relevant embedding dimension, Cao's method offers the advantage over the FNN approach of providing an indication of whether the data-generating process is characterized by deterministic or stochastic uncertainty. Empirical estimates of the entropy in the data were then calculated at each time point by iteratively adding the datum for each time point to the entropy calculation; the calculation starts by constructing a binning structure for the data and then estimating the entropy based on the frequencies of observation in each bin.
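A rough numpy sketch of Cao's E1(m) statistic under the definitions above (ratios of nearest-neighbour distances, in the max norm, between dimensions m and m+1). This is our simplified reading of the algorithm on a simulated series, not the implementation the authors used:

```python
import numpy as np

def delay_vectors(x, m, d):
    """All delay vectors of dimension m with delay d."""
    n = len(x) - (m - 1) * d
    return np.column_stack([x[i * d : i * d + n] for i in range(m)])

def cao_e1(x, m_max, d=1):
    """E1(m) = E(m+1)/E(m), where E(m) averages, over points i, the
    ratio of the distance to the nearest neighbour (max norm) when the
    embedding dimension grows from m to m+1. E1 saturating near 1
    suggests an attractor of that dimension."""
    E = []
    for m in range(1, m_max + 2):
        v_m = delay_vectors(x, m, d)
        v_m1 = delay_vectors(x, m + 1, d)
        n = len(v_m1)                      # points with an (m+1)-vector
        ratios = []
        for i in range(n):
            dist = np.max(np.abs(v_m[:n] - v_m[i]), axis=1)
            dist[i] = np.inf               # exclude self-matches
            j = int(np.argmin(dist))
            if dist[j] > 0:
                ratios.append(np.max(np.abs(v_m1[i] - v_m1[j])) / dist[j])
        E.append(np.mean(ratios))
    E = np.array(E)
    return E[1:] / E[:-1]                  # E1(1), ..., E1(m_max)

rng = np.random.default_rng(6)
x = rng.normal(size=129)                   # white noise for illustration
e1 = cao_e1(x, m_max=6)
```

For pure noise, E1 climbs toward 1 without the clear saturation point an attractor would produce, which is the diagnostic contrast the text relies on.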
We started the iterative process at the 10th time point, so that the first estimate of entropy was based on the first 10 observations of each series; the second estimate was based on the first 11 data points, and so on. In addition to the characterization of the dynamics provided by the time-delay-embedding approach, we calculated two empirical entropy values to help in understanding the uncertainty in the data for airborne pathogen DNA. The first approach worked directly on the DNA copy number time series, using the entropy function from the R package “entropy”; the maximum likelihood option for the entropy function was used throughout. As a second approach to characterize uncertainty in the time series data in relation to decision making, we first transformed each series into a binary string of length tmax − 1. First differences between successive pairs of values were calculated, and if the resulting difference was greater than 0 (indicating t+1N > tN), then 1 was entered for the corresponding value of the string; t+1N ≤ tN resulted in 0. The calculation then proceeded along similar lines to those outlined for the entropy of the copy number, iteratively increasing the size of the dataset by one time point and calculating a new entropy value. In the current case, at each time point, we calculated the proportion of the data that were 1s and then used Shannon's equation for expected information to give an entropy value in bits for the string at each time point. The calculation was coded directly in R; we initiated it with the first two observations and then iterated the calculation one time point at a time. For the log-transformed time series, denoted tn, the instantaneous log growth rate tR is defined as t+1n − tn, and the estimated linear AR model regresses tR on lagged values of the series up to lag τ: tR = a0 + a1·(t−1n) + ... + aτ·(t−τn) + ε, where a0, a1, etc. are parameters to be estimated, ε is an error term, and τ is an index indicating lag dependence.
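The direction-of-change entropy just described (a binary up/down string, Shannon entropy in bits, recomputed as each day is added) is easy to sketch; this Python version stands in for the authors' R code, and the simulated series is ours:

```python
import numpy as np

def direction_entropy_bits(copies):
    """Shannon entropy (bits) of the binary up/down string: 1 where the
    series increased between successive days, 0 otherwise."""
    ups = (np.diff(copies) > 0).astype(int)
    p = ups.mean()                     # proportion of 1s
    if p in (0.0, 1.0):
        return 0.0                     # no uncertainty in the string
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(7)
copies = rng.random(129)
# Iterative version used in the text: recompute as each day is added,
# starting from the first two observations.
running = [direction_entropy_bits(copies[: k + 1]) for k in range(1, 129)]
final_bits = running[-1]
```

A noisy series like this one ends near the 1-bit theoretical maximum, which is the behaviour the paper reports for most locations.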
Selection of the order of lag dependence to use in fitting the AR models in each case was guided by the estimates of the ACF and AMI functions (see above). We followed a conceptual approach that draws on the work of Royama and Turchin, calculating R2 and a coefficient of prediction similar to the one proposed by Turchin. For the latter, we first calculated the residual sum of squares of a mean-only model (RSSmn); next, we calculated 1 − (RSSmod/RSSmn), in which RSSmod is the residual sum of squares from the selected model. When RSSmod > RSSmn, the coefficient has a negative value and indicates that the model fits noise. Values approaching 1 occur when the observed series has a pattern of oscillations that can be captured reasonably well in simple autoregressive models. Finally, values in the region of 0 indicate that the series is dominated by noise and, possibly, too short and complex to be characterized well. Time series graphs for the nine series of spore trap DNA copy number data are shown in the figures. The results of using Cao's method indicated that E2(m) stayed close to the value 1 for all values of m tested; graphical output from the R function is given with d in the place of m. The phase plots show points concentrated around the line t+1n = tn, with short orbits away from this line, typically lasting no more than three to four time steps. These features are indicative of stochastic variation around a fixed value with a mixture of immediate and time-delayed feedback (Turchin). For all series, the dominant Lyapunov exponent (λ1) was greater than 0, indicating that chaotic divergence would occur in independent realizations generated by the same data-generating process; although positive, the values of λ1 were small, ranging from 0.04 to 0.17. Entropies were computed using an automated binning procedure, and the resulting series of entropy values are shown together with the data. In the second year of observations (2014), the detection of pathogen DNA on the traps was sporadic.
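The coefficient of prediction 1 − RSSmod/RSSmn can be sketched as below for an in-sample linear AR fit of the growth rates. One design note: with an in-sample ordinary least squares fit the coefficient cannot go negative (the mean-only model is nested in the AR model), so the negative values the text mentions arise only when RSSmod comes from a model fitted or evaluated differently; the example data are simulated:

```python
import numpy as np

def coefficient_of_prediction(n_log, order=2):
    """Turchin-style coefficient of prediction for a linear AR model of
    the instantaneous log growth rate r_t = n_{t+1} - n_t:
    1 - RSS_model / RSS_mean, where RSS_mean predicts every growth
    rate by the overall mean."""
    r = n_log[1:] - n_log[:-1]                 # growth rates
    # Design matrix: lagged log abundances n_t, ..., n_{t-order+1}
    y = r[order - 1:]
    X = np.column_stack(
        [n_log[order - k : len(n_log) - k] for k in range(1, order + 1)]
    )
    X = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_mod = np.sum((y - X @ beta) ** 2)
    rss_mean = np.sum((y - y.mean()) ** 2)
    return 1.0 - rss_mod / rss_mean

rng = np.random.default_rng(8)
noise = rng.normal(0, 1, 129).cumsum() * 0.05
n_log = np.sin(np.linspace(0, 25, 129)) + noise   # oscillation + noise
cp = coefficient_of_prediction(n_log)
```

Values near 1 indicate oscillations well captured by the AR terms; values near 0 indicate noise-dominated dynamics, as in the paper's series.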
All four series showed an early peak in copy numbers around day 10 and then a long period of low-to-no detection until around day 80, when all locations experienced another peak in detection. Apart from these two shared features, the time series of trap counts were superficially dissimilar across the four locations sampled in 2014, but the series of cumulative entropy values showed a similar pattern in all four cases, with an initial peak corresponding to the trap data at day 10 followed by a long reduction as successive, similar trap results resulted in a reduction in heterogeneity in the data. The peak in trap counts caused a further peak in entropy around day 80, followed by a second period of decline. In general, the cumulative entropies in 2014 did not exceed 1.5 nats except in the case of King City, South, for which the initial peak was 1.78 nats. The final values for the entropy of the four series in 2014 are given in the accompanying table. In contrast to the more or less consistent pattern revealed by the 2014 data, the cumulative entropy values for the 2013 data sets were more variable. The final values for the five series tended to be higher than those in 2014, ranging from 1.00 to 2.04 nats, with the exception of the King City North location, which had a final entropy value of 0.63 nats. In Salinas and Gonzales, the entropy value peaked early at over 2 nats and declined somewhat over the course of the season, although still finishing at or above 1.00 nats. In contrast to this early peak and decline pattern, at the remaining three sites in 2013, uncertainty increased through much of the season, in association with repeated oscillations in the trap copy number data. In addition to characterizing uncertainty in the daily trap data directly, we also assessed the uncertainty in the simpler issue of whether the observed series increased between each successive pair of days, using tR, the instantaneous change in the log copy numbers between pairs of observations. The analysis showed that, in all nine series, the entropy remained close to its theoretical maximum value over much of the season, following an initial transient period lasting approximately 30 days. In three of the series, the entropy did not settle close to its maximum until later in the season, but even in these cases, the final entropy value was close to the theoretical maximum of 1 bit. The quotation from the late Sir Robert May's introduction to the Landmark edition of his monograph Stability and Complexity in Model Ecosystems was chosen deliberately and for more than one reason. First, May's point that the dynamics of real systems are likely to be a mixture of stochastic and deterministic processes applies directly to our observations on the time series of spore trap DNA copy numbers for P. effusa in the Salinas Valley of California. Secondly, May was an advocate of the idea that models can and should be used in biology in a strategic way to try to understand broad types of behavior without necessarily considering immediate questions of application or numerical accuracy in any specific case. While our analyses are predominantly statistical in nature, they are nonetheless carried out from a strategic perspective: our aim in this study was not so much to produce accurate predictive models of any of the series as it was to use the tools of nonlinear time series analysis, together with some linear methods, to investigate the broad properties of pathogen DNA copy data collected from vortex air samplers. A correlation matrix plot for the numerical data suggested that the series were strongly influenced by stochastic noise, as did the coefficients of prediction calculated from linear autoregressive models, which ranged from a minimum of 3.6% to a maximum of 26.4%.
Taken together, these results indicated that the series lie in the transition between stochastic and deterministic uncertainty, in what Turchin called quasi-chaotic territory at the boundary between the two types of dynamics. All of the series had positive Lyapunov exponents, indicating a tendency for deterministic sensitivity to initial conditions. It seems reasonable, based on the dependence of oomycete pathogens such as P. effusa on suitable weather for spore production and release, that the copy number on air sampler traps would show appreciable stochasticity. Not only is the number of DNA copies detected dependent on the response of the pathogen to uncertain weather conditions, but the physical processes of dispersal and transport in air, together with the vortex sampling process itself, mean that there are multiple sources of stochasticity between the release of spores and subsequent trapping events. However, at the same time, crop management practices such as planting and harvesting salad spinach happen on cycles of between 21 and 45 days, and may be a source of deterministic forcing in the data, complicating the dynamics. If the data are predominantly stochastic in nature, then traditional statistical models should be able to describe the pattern and to characterize the uncertainty. Similarly, Turchin argues that predominantly deterministic dynamics should be amenable to nonlinear time series methods. For P. effusa in the Salinas Valley in California, the observed dynamics may fall between these two preferred situations, making characterization of the dynamics difficult and leading to low overall predictability. The estimated embedding dimension for the series (after detrending) ranged from 6 to 9, indicating that they did not have dynamics compatible with a low-dimensional attractor. If we consider the data in relation to variability in time and space, there are clear implications for making robust inferences about the quantity of pathogen inoculum in the air. 
For example, at three of the four locations where samplers were deployed in two successive years, the dynamics were classed as linear in one year and not linear in the other. The four locations sampled span a linear distance of approximately 80 km from Salinas in the north to King City in the south. In 2014, a year with relatively little pathogen activity, peaks in trap counts and the corresponding time series of entropy values showed relatively good agreement. In contrast, in 2013, when inoculum pressure was generally higher, there was much less agreement between locations, and extrapolation from one location to another would not necessarily have yielded robust conclusions about the dynamics of the pathogen. The most striking example is the contrast between Salinas and King City South. In Salinas between day 20 and day 80, trap catches were relatively low and the cumulative value of the entropy showed a steady decline from approximately 1.5 nats to under 1 nat. In contrast, over the same period in King City, multiple peaks in trap catches were noted and entropy in the catch data rose from approximately 1 nat to approximately 2 nats. Inevitably, in a first use of a new methodology in a specific field of application, there are numerous things that could have been done that were not. The focus of our analyses was on the dynamics and properties of the time series when analyzed on a daily time step. It is probably not surprising that the binary series indicating the direction of change was close to its maximum entropy value over much of the season in both years at most locations. This result suggests that, on average on any given day, a sample from the next day is as likely to be higher as it is to be lower than the sample from the current day. Technical advances in sample preparation are reducing the time it takes to process nucleic acid samples from spore traps. 
As a consequence, the apparent possibility of real-time forecasting of disease risk on a daily basis is increasing. One possible interpretation of our findings is, however, that the usefulness of such forecasts may be limited by the inherent uncertainty of the data. Is there predictive value for decision making in knowing today\u2019s trap value if tomorrow\u2019s value may be higher or lower with equal probability? The results obtained here indicate that the binary strings derived from the time series data are close to being simple sequences of independent Bernoulli trials with a probability of 0.5 determining the outcome. As Gr\u00fcnwald points out, aggregating results to form moving averages over longer runs of days would perhaps lead to information values that were more easily linked to disease outcomes, but the detailed work to examine this issue lies beyond the scope of this study. The objective of our work was not to explore whether entropy values can be used as a predictive indicator for disease risk but to characterize the uncertainty of time series data from spore traps to give practitioners a richer perspective on the level and nature of the uncertainty inherent in the data they collect. Looking at the entropy values for the two seasons retrospectively, it is clear that the level of uncertainty varied both between locations and between years. Our analyses suggest that there may be quite severe practical limitations to being able to characterize pathogen dynamics using the combination of vortex air sampling and DNA target amplification. There are clear cases where detection of primary inoculum helps to improve disease management [11]."}
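The claim above, that the binary increase/decrease series is close to a sequence of fair Bernoulli trials with entropy near the 1-bit maximum, can be illustrated with a short sketch; the example series are hypothetical.

```python
import math

def updown_entropy_bits(series):
    """Entropy, in bits, of the binary question 'did the count increase?'.

    The maximum of 1 bit is reached when increases and decreases are
    equally likely -- i.e., when the series behaves like a sequence of
    fair Bernoulli trials, as reported for the spore-trap data.
    """
    ups = sum(1 for a, b in zip(series, series[1:]) if b > a)
    p = ups / (len(series) - 1)
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty at all
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

alternating = [1, 5, 2, 6, 3, 7, 4]  # rises and falls equally often
monotone = [1, 2, 3, 4, 5, 6, 7]     # always rises: perfectly predictable
```

An alternating series yields the full 1 bit of uncertainty per day, while a monotone series yields 0 bits, the two extremes between which the observed trap series sit.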
+{"text": "Dynamics of cell fate decisions are commonly investigated by inferring temporal sequences of gene expression states by assembling snapshots of individual cells where each cell is measured once. Ordering cells according to minimal differences in expression patterns and assuming that differentiation occurs by a sequence of irreversible steps, yields unidirectional, eventually branching Markov chains with a single source node. In an alternative approach, we used multi-nucleate cells to follow gene expression taking true time series. Assembling state machines, each made from single-cell trajectories, gives a network of highly structured Markov chains of states with different source and sink nodes including cycles, revealing essential information on the dynamics of regulatory events. We argue that the obtained networks depict aspects of the Waddington landscape of cell differentiation and characterize them as reachability graphs that provide the basis for the reconstruction of the underlying gene regulatory network. Single-cell analyses revealed complex dynamics of gene regulation in differentiating cells. We therefore followed gene expression in Physarum polycephalum by taking multiple samples of one and the same giant cell. Physarum belongs to the amoebozoa group of organisms and has a complex, prototypical eukaryote genome. The growth medium contained 0.75 g yeast extract and 3.9 mM glucose per liter, adjusted to pH 4.6 with concentrated HCl. After starvation for 7 days at 22\u00b0C in complete darkness, sporulation was induced with a 15 min pulse of far-red light. The plasmodial mass on the agar plug was scraped off with a pipet tip and, by cutting the tip, transferred into a vial of glass beads immersed in liquid nitrogen. 
After grinding in liquid nitrogen, the relative abundance of the mRNAs was analyzed as previously described. To correct for differences in the concentration of total RNA and in the efficiency of the RT-PCR reaction, the gene expression values were normalized to the median of the estimated relative concentrations of mRNAs of the 35 genes in each RNA sample. Each normalized expression value was subsequently normalized to the geometric mean of all values obtained for a given gene, and this was performed separately for each gene. Multidimensional scaling was performed with the cmdscale function provided as part of the stats package v3.5.1. Gene expression did not change within the limits of accuracy of the measurements when the same plasmodial cell was sampled repeatedly. Accordingly, repeated sampling of the same plasmodial cell yields true time series. In summary, the gene expression values throughout a plasmodium deviated not more than approximately two-fold from the median of all samples from the same plasmodium and were thus within the limits of the technical accuracy of the measurements, even under conditions where genes were in the process of being up- or down-regulated. These differences measured between samples were minor as compared to the differential regulation, where the expression level of genes changed on the order of ten- to more than hundred-fold. As the assayed genes were evenly expressed and changed evenly in time throughout the large plasmodia, at least within the limits of accuracy of the measurements, we took time series at 1 h time intervals. In order to assay, in each experiment, for the homogeneity and synchrony in gene expression throughout the plasmodium, we took two samples at each time point from different, arbitrarily chosen but distant sites of the plasmodium. The technical quality of the measurements was estimated separately for the two data sets, each comprising the data of the samples collected at the 11 time points of the time series. 
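The two-step normalization described for the expression values (per-sample median, then per-gene geometric mean) can be sketched as follows; the data structure and values are illustrative, though the gene names appear in the text.

```python
import math

def normalize_expression(samples):
    """Per-sample median normalization followed by per-gene geometric-mean
    normalization, mirroring the two steps described in the text.

    `samples` is a list of dicts mapping gene name -> relative mRNA
    concentration, one dict per RNA sample (structure is illustrative).
    """
    def median(vals):
        v = sorted(vals)
        n = len(v)
        return v[n // 2] if n % 2 else 0.5 * (v[n // 2 - 1] + v[n // 2])

    # Step 1: divide every value by the median of its own sample.
    step1 = [{g: x / median(list(s.values())) for g, x in s.items()}
             for s in samples]
    # Step 2: divide every value by the geometric mean of that gene
    # across all samples.
    gmean = {g: math.exp(sum(math.log(s[g]) for s in step1) / len(step1))
             for g in step1[0]}
    return [{g: s[g] / gmean[g] for g in s} for s in step1]

samples = [{"pcnA": 1.0, "nhpA": 2.0, "uchA": 4.0},
           {"pcnA": 2.0, "nhpA": 4.0, "uchA": 8.0}]
norm = normalize_expression(samples)
```

In this toy example the two samples differ only by a global scale factor, so both normalization steps together map every value to 1.0, which is the intended behavior of removing sample-wide and gene-wide offsets.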
For each plasmodial sample, the relative deviation of the two measurements from the mean of the two measurements was estimated. The frequency distributions of the deviations and the corresponding quantile values indicated that the technical qualities of the 1st and 2nd, as well as the 3rd and 4th, measurements were virtually identical, with 95% of the values differing by less than a factor of two from each other. The degree of spatial variability of gene expression within a plasmodium was estimated by combining the data sets for the first and the second sample of a plasmodium taken at each time point of the time series. The frequency distribution of the deviation of each measurement from the mean of the 1st, 2nd, 3rd, and 4th measurements of the two samples taken from each plasmodium at any time point was virtually identical to the frequency distributions obtained for the technical replicates, indicating that gene expression within the analyzed plasmodia varied at maximum within the limits of accuracy of the measurements (within a factor of 2 in 95% of the samples). This conclusion is based on the comparison of the quantile distributions of the data sets considered. With this data set, we investigated how expression changes as a function of time in the individual plasmodial cells. The gene expression pattern of a plasmodial cell at a given time point was obtained as the mean of the four expression values of each gene measured in the two plasmodial samples picked at that time point. Expression of hstA, nhpA, pcnA, and uchA, in the following called pcnA-group genes, changed over time in some of the plasmodia, but there was no obvious consistent relationship to the time point of stimulus application. Only genes that were clearly up- or down-regulated in response to the stimulus were included in the analysis to obtain a data point for the expression pattern of each cell at each time point. Single-cell trajectories of gene expression were assembled on this basis. 
In the Petri net representation, a state is represented by a place and the temporal transit between two states by a transition.\u2022There are places having more than one pre-transition, i.e., alternative paths may re-join. Thus, the Petri net structure does not form a tree.\u2022There are cycles: a cell may switch back to previous states or oscillate between states as defined by the expression patterns of the set of observed genes. For technical reasons we add immediate transitions starting alternative trajectories, in order to get a statistical distribution of states in which the experiments have started or will start with a given probability. In contrast to most state-of-the-art pseudo-time series approaches found in the literature, the structure we obtain is a \u2018state machine,\u2019 also called in other communities a \u2018finite state machine\u2019 or \u2018finite automaton,\u2019 which may involve cycles. A state machine with one token and its reachability graph, or Markov chain for stochastic Petri nets, are isomorphic; to put it differently, our (stochastic) Petri net represents the Markov chain of states the cells assume in the course of their developmental trajectory and accordingly on their walk through the Waddington landscape. We assume that the Petri net represents the corresponding region of the Waddington landscape, predicting possible developmental paths a single cell can follow, which of course yields a state machine. Representing Markov chains as Petri nets comes with a couple of advantages. First, Petri nets are equipped with the concept of T-invariants, which belong to the standard body of Petri net theory from very early on. 
We note two structural facts:\u2022each cycle in a state machine defines a T-invariant, and\u2022each elementary cycle (no repetition of transitions) is a minimal T-invariant. Second, modeling the differentiation-inducing stimuli, which we have not done so far, would turn some of the free choice conflicts into non-free choice conflicts, which involves, technically speaking, leaving the state machine net class. To unequivocally identify transits that are stimulus-dependent, we need a higher data density, which we hope to achieve in one of our next experiments. With stimulus-dependent transitions, the constructed Petri nets and their Markov chains do not coincide anymore; instead, the Markov chains as well as the reachability graph are directly derived from the Petri nets and may be analyzed by standard algorithms. Finally, our Petri net approach paves the way for the ultimate goal of our future work - reconstructing the underlying gene regulatory networks based on the reachability graphs encoded by the Waddington landscape Petri nets. Un-stimulated cells also moved through state space, and the pcnA-group of genes might have added to this directedness. Therefore, we constructed a Petri net from trajectories based on significant clusters, this time exclusively clustering the subset of up- and down-regulated genes. Basic features found in the Petri nets of up- and down-regulated genes were similar to the ones found in the Petri nets for the full set of 35 genes: trajectories formed parallel main branches, and there were intermediate nodes of different stability and different connectedness, and hence states that occurred with different frequency as a function of time for any single cell trajectory. In this kind of plot, the time course of the mRNA of each gene starts at the same point, while the slope of the curve indicates the x-fold change in mRNA abundance over time. 
Plotting subsets of genes suggests that trajectories pass through different regions of the Petri net: pldA is up-regulated early in quite a number of trajectories, followed by pwiA and finally by ligA and rgsA, which appear strongly correlated at least in some of the plots. Qualitatively different patterns of regulation relative to each other are also evident for the three phospholipase D-encoding genes. Frequently visited places represent states the system is likely to assume, like a corrie in the metaphor of the Waddington landscape, through which the system will pass. Places having many post-transitions (many out-going arcs) represent branching points from which the system has multiple options to proceed. Our analysis has confirmed former observations, now at the level of true single-cell time series. The response of a cell to a differentiation-inducing stimulus seems to depend on the cell\u2019s current internal state. One might be tempted to suspect a certain structure in the list of subsequently recorded trajectories. We have previously argued that the Petri net depicts aspects of the topology of the Waddington landscape. The raw data supporting the conclusions of this article are provided as part of the article. AP and SP performed single cell time-series experiments and gene expression analyses and evaluated their results. MHa supervised the experimental work and developed the sample preparation method together with SP. MHe essentially contributed the analysis of the graph-theoretical properties of Petri nets and wrote a corresponding section of the manuscript. WM conceived and supervised the study, performed computational analyses including the automated generation of the Petri nets, and wrote the manuscript. All authors read and approved the final version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
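The assembly of a state machine (and its Markov chain of states) from single-cell trajectories, as described in the article above, can be sketched as follows; the state labels and trajectories are hypothetical placeholders, and this is an illustration of the approach, not the authors' code.

```python
from collections import defaultdict

def build_state_machine(trajectories):
    """Assemble a state machine from single-cell trajectories of discrete
    expression states: each state becomes a place, each observed transit
    between consecutive states a transition with an empirical probability.
    """
    counts = defaultdict(int)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[(a, b)] += 1
    totals = defaultdict(int)
    for (a, _), c in counts.items():
        totals[a] += c
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}

def has_cycle(probs):
    """True if the transition graph contains a cycle (e.g. a cell that
    oscillates between states), detected by depth-first search."""
    graph = defaultdict(list)
    for a, b in probs:
        graph[a].append(b)
    visiting, done = set(), set()
    def dfs(u):
        visiting.add(u)
        for v in graph[u]:
            if v in visiting or (v not in done and dfs(v)):
                return True
        visiting.discard(u)
        done.add(u)
        return False
    return any(dfs(u) for u in list(graph) if u not in done)

# Two hypothetical single-cell trajectories; the second revisits a state.
trajs = [["S1", "S2", "S3"], ["S1", "S2", "S1", "S2", "S3"]]
pn = build_state_machine(trajs)
```

The resulting transition probabilities are exactly the Markov chain that, per the article, is isomorphic to the state-machine Petri net with one token; `has_cycle` corresponds to detecting the cycles that yield T-invariants.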
+{"text": "ARABIDOPSIS TRITHORAX (ATX) enzymes are general methyltransferases that regulate the adult vegetative to reproductive phase transition. We generated an atx1, atx3, and atx4 triple T\u2010DNA insertion mutant that displays both early bolting and early LS. This mutant was used in an RNA\u2010seq time\u2010series experiment to identify gene expression changes in rosette leaves that are likely associated with bolting. By comparing the early bolting mutant to vegetative WT plants of the same age, we were able to generate a list of differentially expressed genes (DEGs) that change expression with bolting as the plants age. We trimmed the list by intersection with publicly available WT datasets, which removed genes from our DEG list that were atx1,3,4 specific. The resulting 398 bolting\u2010associated genes (BAGs) are differentially expressed in a mature rosette leaf at bolting. The BAG list contains many well\u2010characterized LS regulators, and GO analysis revealed enrichment for LS and LS\u2010related processes. These bolting\u2010associated LS regulators may contribute to the temporal coupling of bolting time to LS. In plants, the vegetative to reproductive phase transition (termed bolting in Arabidopsis) generally precedes age\u2010dependent leaf senescence (LS). Many studies describe a temporal link between bolting time and LS, as plants that bolt early, senesce early, and plants that bolt late, senesce late. The molecular mechanisms underlying this relationship are unknown and are potentially agriculturally important, as they may allow for the development of crops that can overcome early LS caused by stress\u2010related early\u2010phase transition. We hypothesized that leaf gene expression changes occurring in synchrony with bolting were regulating LS. We found that 202 of these BAGs are included in the LS database. Plants were grown in soil treated with Gnatrol WDG (0.3\u00a0g/500\u00a0ml H2O) to inhibit the growth of fungus gnat larvae. 
Plants were subirrigated with Gro\u2010Power 4\u20108\u20102 and grown in Percival AR66L2X growth chambers under a 20:4 light:dark diurnal cycle (Long Day) with a light intensity of 28\u00a0\u03bcmoles\u00a0photons m\u22122\u00a0s\u22121. The low light intensity prevents light stress in older leaves, which was evident as anthocyanin accumulation at higher light intensities. To compensate for the reduced light intensity, the day length was extended. The petiole of the sixth leaf to emerge was marked with a thread on individual plants. 2.2 The atx1 atx3 atx4 triple mutant was generated by crossing two double mutants. Alleles and corresponding primers can be found in the Data Files. Genotyping PCR was performed with Taq\u00a0polymerase with Standard\u00a0Taq\u00a0Buffer (New England Biolabs). 2.3 Chlorophyll was measured on n\u00a0=\u00a06 single\u2010hole punches from six individual plants. One hole punch was removed from each marked leaf and incubated in 800\u00a0\u00b5l N,N\u2010dimethyl formamide (DMF) overnight in the dark. A total volume of 200\u00a0\u00b5l of sample was transferred to a quartz microplate (Molecular Devices) and absorbance at 664 and 647\u00a0nm was measured with a BioTek Synergy H1 plate reader. Absorbance readings were used to determine chlorophyll concentration. cDNA synthesis was primed with random hexamers. The cDNA was diluted 16\u2010fold and used as a template for real\u2010time qPCR using either ABsolute QPCR Mix, SYBR Green, ROX (Thermo Scientific) or qPCRBIO SyGreen Blue Mix Hi\u2010Rox (PCR Biosystems), in Step One Plus or Quant Studio 6 Flex (Thermo Fisher) qPCR machines. All real\u2010time qPCR reactions were run with a 61\u00b0C annealing temperature and normalized to a reference gene. 2.5 Ten plants per line at each time point were selected for harvesting from the developmentally synchronized group of bolting atx1,3,4 TM or WT control plants. All harvesting was completed between 8:00 and 10:00 a.m. to prevent interference by diurnal gene expression changes. Leaf 6 was harvested from all 10 plants and the 10 leaves were immediately flash\u2010frozen together. A mortar and pestle were used to grind tissue in liquid nitrogen. 
Homogenized tissue was separated evenly into three tubes to be treated as three replicates. Libraries were prepared using the Breath\u2010Adaptive Directional sequencing (BrAD\u2010seq) protocol. 2.6 Differential expression was assessed with a t test analysis as well as with DESeq2 and edgeR. To prepare for the DESeq2 and edgeR analyses, raw data were aligned to the TAIR10 genome and counted using Kallisto. Data were then exported to R and rounded to the nearest integer for differential expression analyses. DESeq2 was completed using a 2 factor (Genotype\u00a0+\u00a0Time\u00a0+\u00a0Genotype:Time) model that treated time as a continuous variable, rather than a category. We isolated two atx1,3,4 triple mutants (TM1 and TM2), which contain the same alleles but are derived from different F1 plants from the same cross, and are independent isolates of the same genotype. The atx1,3,4 triple\u2010mutant genotype was confirmed both by PCR and RT\u2010PCR. ATX histone methyltransferases methylate H3K4, and H3K4 methylation at the FLC TSS is associated with high expression of FLC, a flowering inhibitor, thereby preventing the vegetative to reproductive transition. Plants that bolted on the same day were grouped into a cohort. We randomly selected individuals from this cohort for leaf 6 harvesting, which began at T0 and continued in 2\u2010day increments. This synchronization to bolting ensured the atx1,3,4 plants were developmentally similar. WT leaf 6 control tissue was harvested at the same time points. At T6, one WT plant had bolted, but all other WT plants were vegetative. This design allowed us to differentiate gene expression changes associated with bolting versus those associated with age. cDNA libraries were prepared and subject to high\u2010throughput sequencing. The clustering of all T0 samples is likely because the mutants had just begun the phase transition. As they progress further into the reproductive phase, they cluster away from WT. 
WT samples from T4 and T6 cluster with the atx1,3,4 TM bolting samples, likely because they are nearing the vegetative\u2010reproductive transition. At T6, one WT plant had bolted. Similar clustering patterns were also seen using Principal Component Analysis, providing confidence that our harvesting method and differential expression analysis isolated bolting\u2010related gene expression changes. Three methods were used to determine differential expression. A previous study completed a similar developmental time\u2010series experiment in Arabidopsis grown under long\u2010day (LD) photoperiods. To be retained, a gene had to be identified by at least two of the three statistical methods used in our analysis. WRKY45 increased over time in all samples except for the vegetative WT control in our experiment. DTX50 also showed strong induction in each experiment, although it decreased back to basal levels after 4\u00a0days in all datasets. PSK4 increases at T0 and then maintains expression levels higher than WT at all subsequent time points, which corresponds to the clear induction of PSK4 in both WT datasets. SAG20 did not show as clear a trend of expression over time, but it was consistently higher than vegetative WT levels. SAG20 increases expression in both WT time series. ANT appears to decrease expression in all datasets undergoing the vegetative reproductive transition. WT follows the same trend, with a delayed decrease in ANT expression compared to the atx1,3,4 TMs, likely as WT was nearing bolting time. A script was written in R to visualize the expression profiles of all 398 BAGs. A total of 202 of the 398 BAGs (50.7%) were shared (Data File). We then wanted to identify genes within the BAG list that either regulate or are associated with LS (Li et\u00a0al.). A set of 91 genes (p\u00a0<\u00a0.001) were upregulated during atx1,3,4 TM bolting and DILS. 
Seven of these 91 genes were upregulated in bolting atx1,3,4 TMs, but downregulated during DILS; and one gene was downregulated during atx1,3,4 TM bolting, but upregulated during DILS. The 91 BAGs that change expression during DILS, but are not present in LSD 3.0, are genes that may represent novel early regulators of LS. We sought to identify genetic interactions between BAGs, of which many are transcription factors (TFs). We employed a machine learning approach (GENIE3) to build a network and then trimmed it with a precision\u2010recall analysis using DAP\u2010seq binding data in ConnecTF.org. Many of the nodes in the network are BAGs. The network connects stress\u2010responsive TFs (ERF054, WRKY28, WRKY45) with flowering signaling. The goal of our study was to better understand the molecular connection between the vegetative\u2010reproductive transition and LS in Arabidopsis. We hypothesized that there are LS\u2010related gene expression changes associated with the bolting event. Using RNA\u2010seq, we generated a list of 398 bolting\u2010associated genes (BAGs). A total of 202 of these BAGs were present in the Leaf Senescence Database 3.0 (LSD 3.0), some of which may be responsible for temporally connecting LS to bolting time. We also identified 91 BAGs that are differentially expressed during dark\u2010induced leaf senescence (DILS) but are not present in the LSD 3.0. Further study of these 91 genes may reveal some novel early regulators of LS. By mutating ATX genes, we engendered early flowering by altering the expression of known flowering time regulators FLC, SOC1, and FT. The small change in flowering time (5\u20137\u00a0days) was advantageous compared to studying an extreme flowering time phenotype that could have added a prolonged age bias to the experiment. Furthermore, we could not use stress\u2010induced early flowering of WT because stress and LS signaling overlap. 
atx1,3,4 TM mutants show no visible signs of stress or other developmental defects prior to or after bolting. By overlapping the results between two separate atx1,3,4 isolates, TM1 and TM2, we reduced the probability of false discoveries. Early flowering has been reported in other atx mutants. One study characterized an atx3 atx4 atx5 triple mutant but did not report a change in flowering time or LS; thus, genetic divergence may explain the difference in phenotypes between the (atx1 atx3 atx4) and the (atx3 atx4 atx5) triple mutants. Synchronizing tissue harvesting to bolting differentiated bolting\u2010associated and age\u2010associated changes in gene expression. Multiple statistical approaches were used for the identification of 750 initial DEGs since there is not one perfect statistical approach for our time\u2010series analysis. Treating time as a continuous variable as we did in DESeq2 helps to identify time\u2010resolved gene expression changes. However, our edgeR\u2010based method treated time as a factor, which may have allowed greater detection of transient changes in gene expression. It is common to see high overlap between edgeR and DESeq2; however, that typically occurs when the same underlying statistical design is used. Here, we did not expect strong overlap as the programs were run with different designs. Even with the varying designs, Genesect found significant overlap between these gene lists (p\u00a0<\u00a0.001). The BAG list includes a senescence\u2010suppressed protein phosphatase, a well\u2010studied promoter of LS, and genes such as BFT that regulate flowering time, as well as genes involved in nitrogen signaling and metabolism and in stress and LS signaling (Jiang et\u00a0al.). Furthermore, BZIP61 and BZIP34 share 71 predicted targets in the network. We have identified 398 BAGs expressed in mature leaf 6 that change expression at the time of bolting as the plant ages. 
A total of 202 of these BAGs are known to be associated with LS as demonstrated by their presence in LSD 3.0 (Li et\u00a0al.). The authors declare no conflict of interest associated with the work described in this manuscript. JAB and WEH designed the experiments. WEH completed the experiments and data analyses. WEH and JAB wrote and edited the manuscript. RNA\u2010seq data from this study are publicly available on GEO as GSE134177. While it is widely accepted that early bolting generally confers early leaf senescence in Arabidopsis, the molecular basis of this temporal relationship has not been explored. Toward a molecular definition of this relationship, leaf senescence\u2010related gene expression changes were identified in mature rosette leaves at the time of bolting. Further study may show that some of these contribute to the temporal coupling of LS and bolting. This information could help inform the production of crops that could overcome early leaf senescence during stress\u2010induced early bolting. Supplementary material: Fig S1, Fig S2, and Data Files S1\u2013S7 are available as additional data files."}
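The rule used above, that a bolting-associated gene had to be called by at least two of the three statistical methods, can be sketched as a simple consensus filter. The gene names appear in the text, but which method calls which gene here is hypothetical.

```python
from collections import Counter

def consensus_degs(method_hits, min_methods=2):
    """Keep genes identified as differentially expressed by at least
    `min_methods` of the supplied methods."""
    tally = Counter(g for hits in method_hits.values() for g in set(hits))
    return {g for g, n in tally.items() if n >= min_methods}

# Hypothetical per-method calls (gene names taken from the text).
hits = {
    "DESeq2": {"WRKY45", "DTX50", "PSK4"},
    "edgeR":  {"WRKY45", "PSK4", "SAG20"},
    "t-test": {"WRKY45", "ANT"},
}
bags = consensus_degs(hits)  # genes supported by at least two methods
```

Requiring agreement between methods run under different statistical designs, as the authors did, trades sensitivity for a lower false-discovery rate.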
+{"text": "Systems biology aims at holistically understanding the complexity of biological systems. In particular, nowadays with the broad availability of gene expression measurements, systems biology challenges the deciphering of the genetic cell machinery from them. In order to help researchers reverse engineer the genetic cell machinery from these noisy datasets, interactive exploratory clustering methods, pipelines and gene clustering tools have to be specifically developed. Prior methods/tools for time series data, however, lack the following four major ingredients from an analytic and methodological viewpoint: (i) principled time-series feature extraction methods, (ii) a variety of manifold learning methods for capturing a high-level view of the dataset, (iii) high-end automatic structure extraction, and (iv) friendliness to the biological user community. With a view to meeting these requirements, we present AGCT (A Geometric Clustering Tool), a software package used to unravel the complex architecture of large-scale, not-necessarily synchronized time-series gene expression data. AGCT captures signals on exhaustive wavelet expansions of the data, which are then embedded on a low-dimensional non-linear map using manifold learning algorithms, where geometric proximity captures potential interactions. Post-processing techniques, including hard and soft information geometric clustering algorithms, facilitate the summarizing of the complete map as a smaller number of principal factors, which can then be formally identified using embedded statistical inference techniques. Three-dimensional interactive visualization and scenario recording over the processing help to reproduce data analysis results without additional time. 
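Geometric proximity between expression signatures (e.g., wavelet coefficients of the time series) is commonly measured by the cosine of the angle between feature vectors, where a value of 1 means the angle is zero, i.e., maximal similarity. A minimal sketch of this computation, not AGCT's actual implementation:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors; 1 means the
    angle is zero, indicating maximal similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_matrix(features):
    """Pairwise similarities for a list of per-gene feature vectors
    (hypothetical wavelet coefficients of expression time series)."""
    n = len(features)
    return [[cosine_similarity(features[i], features[j]) for j in range(n)]
            for i in range(n)]

# Two proportional profiles are maximally similar; the third is nearly
# orthogonal to the first.
S = similarity_matrix([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, -1.0, 0.0]])
```

Such a similarity matrix is exactly the kind of input a manifold learning step can consume to produce a low-dimensional coordinate system for the genes.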
Analysis of the whole-cell Yeast Metabolic Cycle (YMC) and Yeast Cell Cycle (YCC) datasets demonstrates AGCT's ability to accurately dissect all stages of metabolism and cell cycle progression, independently of the time course and the number of patterns related to the signal. Analysis of a Pentachlorophenol-induced dataset demonstrates how AGCT dissects data to identify two networks: the Interferon signaling and NRF2-signaling networks. A systematic analysis of massive, high-throughput biological data must be based on sound mathematics and flexible control to handle the stochasticity, plasticity, and modularity seen in genetic applications. An essential step in the analysis of gene expression is grouping genes by the similarity of their expression profile, representing gene expression changes over time. There is a wealth of literature on cluster analysis going back over three decades. Most approaches set a distance a priori among a set of candidates such as the Euclidean, Manhattan, or more generally Minkowski norm-based distances. Setting the distance a priori neglects the intrinsic underlying geometry of the data. AGCT instead relies on angle-based similarities, where 1 means the angle is zero, indicating maximal similarity. After all the similarities are computed, AGCT can learn the low-dimensional manifold using only the similarities. The output of this step is a new coordinate system for genes, which is typically low dimensional and nonlinear. This coordinate system tends to bring together sets of genes that are strongly similar and separates less similar genes. As manifold learning algorithms, AGCT implements state-of-the-art techniques that include Markov chains, graph Laplacian, Bethe Hessian, and doubly stochastic matrix modeling. One key statistic is obtained from this second step: the local dimensions of the variety. 
This value is a pointwise estimator, for each gene, of its local number of \"principal axes.\" The smaller this number, the more likely that the correlations between the time series result from meaningful interactions; conversely, a manifold learned from random data would have very high local dimensions. The last step is the extraction of structure from these coordinate maps using powerful unsupervised algorithms. The main algorithms are clustering algorithms, including state-of-the-art affinity propagation, Bregman k-means++ clustering, expectation-maximization clustering, non-negative matrix factorization, completely positive factorization, and hierarchical clustering. A filtered Delaunay triangulation is also available, which represents and highlights all significant correlations between neighboring genes using colored edges. Given the vast number of algorithms, a simple command-line language in AGCT allows users to run batches of these algorithms with a large number of parameters. A search tool and graphical display then allow users to pinpoint the most interesting clustering results for analysis. Since the complete workflow from the time series to post-processing is complex, a scenario recording and replaying tool is provided to help draw biologically meaningful conclusions based solely on unsupervised clustering analysis. In the course of analysis, AGCT uses a local dimension score metric, applied to biological data for the first time, to assess the role of each gene within the network in terms of its connectivity to other genes. The Aperiodic Cluster 3 is shown in the figures. To 9,334 probes we applied the Bregman k-means with five consecutive restarts within the range 3\u2264k\u226415 and obtained the best trade-off between the number of clusters and the data fitting for four clusters, as shown in the figure. Barycentre of the data. 
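The scan over 3 <= k <= 15 with five restarts, keeping the best trade-off between cluster count and data fit, can be sketched with a plain squared-Euclidean k-means (the Bregman divergence used here). This is an illustrative re-implementation, not AGCT's code, and the toy data are synthetic:

```python
import numpy as np

def kmeans(X, k, n_restarts=5, n_iter=50, seed=0):
    """Plain squared-Euclidean k-means with several restarts;
    returns the best labels and the total distortion (data fit)."""
    rng = np.random.default_rng(seed)
    best = (None, np.inf)
    for _ in range(n_restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            # assign each point to its nearest center
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            # recompute centers; keep old center if a cluster empties
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        cost = ((X - centers[labels]) ** 2).sum()
        if cost < best[1]:
            best = (labels, cost)
    return best

# Four well-separated synthetic clusters; scanning k and recording the
# distortion mimics the trade-off search described in the text.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in (0, 3, 6, 9)])
costs = {k: kmeans(X, k)[1] for k in range(3, 7)}
```

The distortion drops sharply up to the true cluster count and flattens afterwards, which is the "best trade-off" criterion in question.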
The Barycentre includes genes with a large local dimension, activating the selection switch between (1) the YCC progression state and (2) the stationary dormant state G0, depending on the presence/absence of an energy supply (in this case, glucose) and mating state: 35.8CRM1R/C(2) (exportin), 32.6MEC1OX(1) (genome integrity checkpoint), 30.4MLP1R/C(2) (telomere length control), 29.4IRA2R/C(2) (cAMP-dependent nutrient-limiting conditions), 26.6USO1OX(1) (transport from ER to Golgi), 26.5IZH4M/G,R/C(2) (zinc metabolism), 26.5GCN1R/C(2) (regulator of the Gcn2p kinase activity), 25.3SRS2R/C(1) (commitment to meiotic recombination), 21.1CCH1R/C(2), and 23.7HSH155R/C(2) (genome integrity checkpoint protein) (S1 Data). They belong to the Reductive/Charging Cluster 2 and Oxidative Cluster 1 and are main cellular switches responding to global environmental signals. For example, large-dimension Barycentre genes regulate the transition from the M-phase to the G1-phase of the Yeast Cell Cycle: 3.36ADH2R/C(2), 4.2ATO3R/C(2), and 3.7ACS1R/C(2) are observed in the proximity of cell cycle genes 3.56TIP1M/G1(2), 7.57DSE3M/G1(2), and 7.37EGT2M/G1(2), which explains the demand for Coenzyme A (CoA) for genome stability and centromere/kinetochore-mediated mitotic progression (5.6FOX2R/C(2), 7PFK26(2), 6.6TSL1G1(2), 4.5PRR2(2), and 4.4NCE102(2)). The Reductive/Charging Cluster 2 is the largest cluster with the largest local dimensions (\u224815D). Cells enter the stationary dormant phase, G0, upon starvation. Aperiodic Cluster 3, represented by Cluster 0 in NNMF clustering, contains genes such as 15.7RTFTr(1), 14.1MMS22Tr(1), 14.0BRE1Tr(1), 13.1RRT103Tr(3), and 13.9ARG82Tr(2), which embody the global cell surveillance mechanism and are found in each cluster. 
The YCC dataset of 6,178 genes separated into six clusters, and the Bregman k-means revealed an arrangement that follows the cell cycle phases. Genes 9.7CLN3M(0) and 7.1SWI5M(0), together with Spellman\u2019s DNA replication \u201cMCM\u201d genes, control the cell cycle and regulate the mitotic exit; they resided in Cluster 0. Proton pumps 8.8PMA1M(2), 12.9PMA2M(2), 7.8PMPM(2), and 6.5PMP3M(2) are known to be regulated by \u201cMCM\u201d genes, and mitotic exit genes 5.3PCL9M-G1(2), 6.6SIC1M-G1(2), and 8.1TEC1M-G1(2) reside in Cluster 2 near the \u201cMCM\u201d genes of Cluster 0. Mating genes 7.7MATALPHA1M(2), 15.8MFA2M(2), 18.7MF(ALPHA)1M-G1(2), 8.2STE3M-G1(2), and 11.0MF(ALPHA)2G1(2) are also found in Cluster 2, consistent with mating in the presence of mating pheromone. Cluster 0 and Cluster 2 thus relate to mitosis and cell movement to the G1 phase. Cluster 1 includes checkpoint genes 9.4DUN1G1(1), 7.9MRC1G1(1), and 7.2BUB1G1(1). And, finally, Cluster 3 relates to the synthesis phase. Transposon genes are also identified as spread over all clusters, as in the Tu et al. dataset, ensuring a defensive response to undesirable environmental perturbations (30.3CCT2(3) chaperone activity, 28.8DJP1(0) peroxisomal protein transport, 28.8PEP8(0) endosome-to-Golgi retrograde protein transport, 22.7SWH1(2) oxysterol-binding protein localized to the Golgi, 21.9GAT4(1) GATA family zinc finger motifs, 21.8NRT1(1) nicotinamide riboside transporter, and 19.9AQY1(0) spore maturation transporter). In the Barycentre of the YCC dataset, transporter function genes are mainly found. As described in Kanno et al., male mice were used; forty-eight liver samples (triplicates for dose and time) were homogenized and spiked with standard external RNAs for Percellome normalization and processed. Another in-house software tool named RSort was used. The 513 probes were directed to unsupervised clustering by AGCT. 
As a result, the NNMF method generated two clusters, and the Affinity Propagation clusters showed a more detailed clustering of various pathways. Although divided into a few clusters, there were clusters enriched with Interferon signaling genes and clusters enriched with NRF2-regulated genes, as shown in the figures. Poisoning with Pentachlorophenol produces hyperthermia or hyperpyrexia, profuse sweating, uncoordinated movement, muscle twitching, and coma. The induction of Interferon signaling may pose a new mechanism for some of those acute symptoms. To verify AGCT's status within the context of current applications, we searched for tools/methods within the same claim scope. For all tools, we used a \u201cground truth\u201d dataset of 3,565 periodic probes selected by Tu et al. from the yeast metabolic cycle, and the whole yeast metabolic cycle dataset of Tu et al., which contains 9,335 probes. The data include three periodic groups of genes active in different phases, oxidative (OX), reductive-building (R/B), and reductive-charging (R/C), with top active gene probes numbering 55, 40, and 41 for the 3,565-probe set and 61, 40, and 45 for the 9,335 whole-cell probe set. Hence 3,565 probes are periodic; they are used as a ground truth dataset, as in the Bushati et al. study. Expander is one such analyzer. STEM works on short (3\u20138 time points) series and visualizes the behavior of genes belonging to a given Gene Ontology category, as AGCT also does. It includes k-means and STEM's original clustering methods; fifty models fitted to the given data represent the data. 
Python\u2019s Scikit-learn library includes many algorithms for unsupervised clustering and manifold learning, including t-SNE and Isomap. The Visgenex 2.0 suite is another tool in this scope. In this study, we present a new clustering software tool called AGCT (A Geometric Clustering Tool) for the analysis of large-scale time-series data. By employing a spectral manifold visualization technique and clustering methods, such as Bregman k-means++, non-negative matrix factorization (NNMF), and affinity propagation, we successfully detected the biologically meaningful architecture of the whole-cell expression profile for the Yeast Metabolic Cycle (YMC) and the Yeast Cell Cycle (YCC), without data pre-processing. Data pre-processing may significantly reduce the size of the network, thus breaking the original topology of the dataset. The Aperiodic cluster is responsible for the stationary phase, when a cell stops growing due to nutrient limitation; this cluster, essential to the YMC architecture, was lost in a previous study. The YMC clustering analysis revealed an exact assessment of Reductive/Charging, Reductive/Building, and Oxidative sentinels in three clusters with periodic behaviour regulated by the aerobic/anaerobic status of mitochondrial respiration, as well as one aperiodic cluster. AGCT provides both a bird\u2019s-eye view of all clusters and an expanded view of each small passage of data, achieved by zooming in on the manifold. The Delaunay triangulation technique can easily find strongly co-regulated genes, i.e. robust genes with a small dimension score, and allows investigating their neighborhood, as no molecule can be disregarded within the dynamics of the biological network. AGCT also introduces a new metric, the dimension score, for use in the analysis of biological data. By our analysis, we postulate that genes with similar roles have similar dimension scores, as we can see in the case of ribosomal (score 2\u20135), metabolic (score 5\u20136), and cell cycle genes (score 8\u201310). 
This tendency is observed for both YCC and YMC. Since random manifolds would have intrinsic dimensions approaching the ambient space's, while structured data yields low local dimensions, this metric separates meaningful structure from noise. AGCT allows for processing any set of synchronized time series, regardless of the number of time stamps and the spacing between them. The analysis makes no restrictive assumption about the time series: for example, we do not assume stationarity or specific shapes for autocorrelation; the wavelets we extract can capture the entire signal, and we can efficiently learn a manifold regardless of the shape or value of these characteristics. This is another reason why the use of AGCT should be encouraged outside our primary field of genetics. Only the synchronous nature of the time series is assumed, which means that there exists a fixed set of time stamps at which each time series is captured by a set of readings. This is an important consideration for wavelet methods. Also, there is no limitation on the purpose of the analysis: instead of gene expression data, the time series can relate to single nucleotide polymorphism genotyping, genomics, and beyond. We emphasize one pattern in these examples. Time series are grouped according to \u201cindividuals\u201d in the broadest sense; an individual can be a gene (the main focus of our paper), a fermentor, a person, a crop field, and so on. The manifold therefore displays these \u201cindividuals\u201d, not unlike displays obtained using conventional factorial analyses, but in a much more sophisticated and powerful way. The best choice of parameters for each of the different applications should then be data-driven, and this is a reason why AGCT integrates such a large spectrum of choices for all the steps, starting from the number and types of wavelet transforms available, and including the number, types, and sophistication of clustering algorithms on a manifold. 
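As an illustration of the wavelet feature-extraction step (the datasets below are encoded on Haar wavelets), here is a minimal orthonormal Haar transform in numpy; it is a sketch of the idea, not AGCT's exact encoding or normalization:

```python
import numpy as np

def haar_coefficients(signal):
    """Full Haar wavelet decomposition of a length-2^m time series.

    Returns the final approximation coefficient followed by the
    detail coefficients at every scale, coarsest first. This plays
    the role of the per-gene wavelet feature vector described in
    the text (a sketch, not AGCT's exact implementation)."""
    s = np.asarray(signal, dtype=float)
    details = []
    while len(s) > 1:
        pairs = s.reshape(-1, 2)
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # local averages
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # local changes
        details.append(detail)
        s = approx
    return np.concatenate([s] + details[::-1])

coeffs = haar_coefficients([4.0, 2.0, 5.0, 5.0])
```

Because the transform is orthonormal, the coefficient vector preserves the signal's energy, so cosine similarities between coefficient vectors reflect similarities between the original profiles.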
We have extensively addressed this problem in our specific focus example of gene expression data, and we provide parameter choices that we think are reasonably close to the most accurate ones and bring results that support our findings. AGCT should thus also be thought of as a contender to other pieces of software that offer visualization capabilities but do not necessarily contain the amount of sophistication and state-of-the-art algorithms we have brought into AGCT for these purposes. The algorithm workflow and the various techniques of AGCT are described in the Supplementary Information file. Here, we simply summarize the steps and options in AGCT that we used to obtain the results for each of the datasets. The Yeast Metabolic Cycle (YMC) and Yeast Cell Cycle (YCC) datasets were encoded with n = 64 Haar wavelets, and the results were similar to those obtained with n = 32. Computations were performed on all genes in the YMC and YCC datasets without using the prototype or feature selection mechanisms of AGCT. Note that AGCT is capable of processing (tens of) thousands of gene expression profiles on a commodity computer. For example, a set of 20,000 gene expression profiles yields a gene x gene similarity matrix W of 20K x 20K double-precision floating-point numbers (approximately 3 GB), which easily fits in, say, 32 GB of RAM. Furthermore, its computation time is short; the computation of W on YCC and YMC takes less than 15 minutes on a 2.8 GHz Intel Core i7 MacBook Pro. Each clustering takes roughly 8 x 20,000 x k (the number of clusters) bytes to store the results. Therefore, even when k = 100, 16 MB is enough to store the results of a hundred distinct clusterings on the whole dataset. This makes it easy to process large numbers of clusterings and to perform extensive statistical analyses. We compute the similarity matrix W whose entries are defined as the cosine similarity between the corresponding genes (see Supplementary Information sec. 
2.4 for other choices of distances between gene feature vectors, like the heat kernel similarity). To capture the manifold structure, we first define a weighted graph representation of the genes by sparsifying the dense similarity matrix, reducing the number of non-zero entries using the k-symmetric nearest neighbors. The k-symmetric nearest neighbors are obtained by selecting the k largest similarities for each gene and then taking the logical OR of rows and columns with the same index in order to keep the symmetric property. The sparse symmetric matrix W encodes the weighted adjacency matrix of a graph of genes, and the choice of k is linked to the dimension of the clusters we seek to retrieve. In our experiments, we chose k = 10, but we observed that small variations in k did not produce qualitatively different results. N is a normalized stochastic matrix that represents the transition matrix of a Markov chain whose states are genes. In SI sec. 2.5, we report six different methods to produce a row-stochastic matrix N, whose row coefficients sum up to one. We then solve the eigenvalue problem for N. We used Bregman k-means clustering (without the Iterative option) with the squared Euclidean distance. We obtained N = 4 as the number of clusters for YMC and N = 6 for YCC. Both tables show that the observed edges are short compared to the length of the average edge, which is a good sign, since it means that genes whose profiles have the largest correlations tend to be brought nearer to each other on the manifold. The scenario is a series of actions on AGCT with parameters. Recording starts automatically after the application is launched (the scenario icon blinks). When the \u201cScenario save\u201d button is pressed, the recording is saved into a text file. By loading the scenario, users can reproduce the parameter settings and results up to the calculation of the manifold. 
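The k-symmetric nearest-neighbor sparsification and one simple row-stochastic normalization described above can be sketched as follows; a minimal numpy version, assuming a dense symmetric similarity matrix as input (illustrative, not AGCT's code):

```python
import numpy as np

def symmetric_knn_sparsify(W, k):
    """Keep the k largest similarities per row, then take the
    logical OR of the mask with its transpose so the result stays
    symmetric, as described in the text."""
    n = len(W)
    mask = np.zeros_like(W, dtype=bool)
    for i in range(n):
        order = np.argsort(W[i])[::-1]           # largest first
        picked = [j for j in order if j != i][:k]  # skip self-similarity
        mask[i, picked] = True
    mask |= mask.T              # symmetric OR of rows and columns
    return np.where(mask, W, 0.0)

def row_stochastic(W):
    """Normalize each row to sum to one, giving the transition
    matrix N of a Markov chain whose states are genes (the
    simplest of the six normalizations mentioned)."""
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A = rng.random((6, 6)); A = (A + A.T) / 2   # toy symmetric similarity
Ws = symmetric_knn_sparsify(A, k=2)
N = row_stochastic(Ws)
```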
This feature saves time by eliminating the need to memorize parameters and manually process the dataset each time. Parameters used for computing the YMC and YCC data are described in sections 3.1\u20133.6. Supporting information: S1 File (PDF); S2 File (PDF); S3 File (ZIP); S4 File (ZIP); S1 Data (XLS); S2 Data (XLS); S1 Table (DOCX); S2 Table (PDF): statistics of the Delaunay triangulation for YMC (a) and YCC (b), for different p-values; S3 Table (DOCX); S1 Fig (PDF); S2 Fig (PDF): (a) two clusters obtained with the Non-Negative Matrix Factorization method on the Yeast Metabolic Cycle dataset, Cluster 0 (596G) and Cluster 1; (b) YMC and YCC sentinel genes belong to Cluster 1, the space where most colored circles are observed; Delaunay triangulations are shown with green edges; cluster expression profiles with candlesticks, where boxes indicate the Q25, Q50 (black), and Q75 quantiles; dimension histograms for each cluster; S3 Fig (PDF): Reductive/Charging and Oxidative sentinels detected in groups up (blue) and down (light green) on the manifold edges, with (a) M-G1 phase sentinels bridging two clusters to promote cell cycle progression and (b) high local dimension genes of YMC in the core of Reductive/Charging Cluster 2 and around the \u201cM-G1 to G1\u201d bridge area, specifying the Barycenter of the YMC dataset; S4 Fig (PDF): blue, metabolic genes; red, YCC genes; green, Yeast Cell Cycle genes identified in this study; the thick red line corresponds to the enlarged region; S5 Fig (PDF); S6 Fig (PNG): (a) NNMF (k = 2), (b) Affinity Propagation (k = 21), (c) Delaunay triangulation on top co-regulated genes; S7 Fig (PDF): green and red edges correspond to positive and negative correlations according to the Delaunay triangulation (p\u22640.001); S8 Fig (PDF)."}
+{"text": "Physiological changes and unsupervised learning of DEGs disclose high altitude day 3 as a distinct time point. Gene enrichment analysis of ontologies and pathways indicates involvement of cellular dynamics and the immune response in early-day exposure and a later stable response. Major clustering of genes involved in cellular dynamics deployed into broad categories: cell-cell interaction, blood signaling, the coagulation system, and cellular processes. Our data reveal genes and pathways perturbed in conditions like vascular remodeling and cellular homeostasis. In this study we found the nodal points of the gene interactive network and candidate genes controlling many cellular interactive pathways (VIM, CORO1A, CD37, STMN1, RHOC, PDE7B, NELL1, NRP1, and TAGLN), and the most significant among them, the VIM gene, was identified as the top hub gene. This study suggests a unique physiological and molecular perturbation likely to play a critical role in high altitude associated pathophysiological conditions during early exposure compared to later time points. High altitude (HA) conditions induce several physiological and molecular changes, prevalent in individuals who are unexposed to this environment. Individuals exposed to HA hypoxia undergo physiological and molecular orchestration to maintain adequate tissue oxygen delivery and supply at altitude. This study aimed to understand the temporal changes at an altitude of 4,111 m. Physiological parameters and a transcriptome study were assessed at high altitude days 3, 7, 14, and 21. We observed changes in differentially expressed genes (DEGs) at the high altitude time points, along with altered BP, HR, and SpO2. Reduced barometric pressure with increased altitude concurrently reduces the inspired oxygen partial pressure in humans. We analyzed the time-based, exposure-induced perturbation in physiological and gene expression patterns at high altitude in Kyrgyz individuals. 
Global transcriptome analysis using RNA sequencing, a recent advanced technique, can effectively detect thousands of genes and their consecutive expression patterns. Systematic bioinformatics tools can further enhance the data produced by RNA sequencing. Time series analysis benefits biology by dynamically capturing the complete drift impinging on humans and by enabling the specific selection of a decisive time point for further therapeutic interventions. The present study was done with the objective of comprehending the responses during a long 21-day stay at an altitude of 4,111 m in Kyrgyzstan, by comparing the temporal response at each time point to the basal response. We studied the molecular orchestration by enriched pathway analysis, gene clustering, and network analysis to find the critical time point indicative of acclimatization and the top candidate genes controlling the acclimatization process, cross-validating them with physiological and biochemical markers. Time point analysis was performed because the use of a single time point can lead to static outcomes. All this can comprehensively help understand the acclimatization physiology at 4,111 m. The population group under study consisted of young male residents of Kyrgyzstan (\u2264800 m) origin (n = 30) matched for age and BMI. Volunteers were fresh inductees to high altitude, did not show any kind of cardiovascular or neurological disease at basal as per their medical record data, and were not on any kind of medication or supplementation. Volunteers were provided the same food/ration ad libitum with similar nutritional aspects during the entire study duration. The protocol of the study was approved and carried out according to the guidelines of the Ethics Committee of the Defence Institute of Physiology and Allied Science (DIPAS), Delhi. After the study protocol was explained, written informed consent was obtained from each individual. Basal studies started at Bishkek, Kyrgyzstan (800 m), where data were collected. Volunteers were inducted by road to ~3000 m (for 4 days) for acclimatization as per the prescribed protocol of the study. After that, volunteers were moved by road to 4,111 m, where they stayed for 21 days; throughout the altitude stay, temporal analysis was performed at Day 3 (HAD3), Day 7 (HAD7), Day 14 (HAD14), and Day 21 (HAD21). The basic characteristics of the individuals selected for the study are provided in the table. A very straightforward and common grading system for the diagnosis of acute mountain sickness is the Lake Louise self-assessment questionnaire. Systolic and diastolic blood pressure along with heart rate were measured using a mercury-free BP instrument in the morning hours (0600\u20130700 hrs) in the supine position, when the participants were awake but still lying in bed. Saturation of peripheral oxygen was measured using a finger pulse oximeter; it specifically measures the percentage of oxygenated hemoglobin compared to total hemoglobin in blood, giving an estimate of arterial oxygen saturation. Body mass was recorded after voiding between 0600\u20130800 hrs before breakfast using an electronic platform balance, and height was recorded using a calibrated height rod. Two-dimensional and Doppler echocardiography were performed from the standard parasternal, apical, and subcostal views in the resting state, in the supine or left lateral position, using ultrasound systems. Systolic pulmonary artery pressure (SPAP) was estimated as the tricuspid regurgitation pressure gradient to which right atrial pressure (RAP) was added. RAP was considered equal to 5 mmHg, taking into account that all volunteers displayed IVC < 21 mm and > 50% collapsibility. Mean pulmonary artery pressure (mPAP) can be approximated from the SPAP using the formula mPAP = 0.6 * SPAP + 2 mmHg. 
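The pressure estimates described above reduce to two small formulas; the helper names below are our own, and the input gradient value is only an example:

```python
def spap_from_trv(peak_tr_gradient_mmhg, rap_mmhg=5.0):
    """Systolic PA pressure: tricuspid regurgitation pressure
    gradient plus right atrial pressure (RAP fixed at 5 mmHg, as
    all volunteers had IVC < 21 mm with > 50% collapsibility)."""
    return peak_tr_gradient_mmhg + rap_mmhg

def mpap_from_spap(spap_mmhg):
    """Mean PA pressure via the approximation used in the text:
    mPAP = 0.6 * SPAP + 2 mmHg."""
    return 0.6 * spap_mmhg + 2.0

# e.g. a 25 mmHg tricuspid gradient gives SPAP 30 -> mPAP 20 mmHg
mpap = mpap_from_spap(spap_from_trv(25.0))
```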
The standard M-mode measurements of the aorta, left atrium, left and right ventricular wall thickness, left ventricular end-diastolic and systolic dimensions, and right ventricular dimension at end-diastole and end-systole were made from the parasternal long-axis view, as recommended by the American Society of Echocardiography. Blood samples were collected at basal and at high altitude day 3, day 7, day 14, and day 21 between 0600\u20130800 hrs from the antecubital vein after 12 hours of fasting, in ethylenediaminetetraacetic acid (EDTA) and gel tubes, at basal (control) and at the different altitude time points (test). The tubes were centrifuged; plasma and serum were separated by centrifugation at 3000 rpm for 20 min at 4\u00b0C and stored at -80\u00b0C until assayed. Venous blood samples from individuals at sea level and at the different altitude time points were also collected in PAXgene Blood RNA Tubes. A PAXgene tube contains 6.5 ml of RNA stabilizing solution, to which 2.5 ml of whole blood is added using a blood collection accessory at room temperature. After blood collection, the PAXgene tube was inverted 8\u201310 times, kept upright at room temperature (18\u201325\u00b0C) for 2 hours, then transferred to a freezer (-20\u00b0C) for 24 hours, and finally stored at -80\u00b0C. Isolation of total cellular RNA was done using the PAXgene Blood RNA Kit as per the manufacturer\u2019s protocol. DNase digestion was carried out for the removal of DNA using an RNase-free DNase set (Qiagen) on the RNA spin column. Purified RNA was immediately chilled on ice. RNA concentration and quality were evaluated by measuring absorbance at 260 and 280 nm using a spectrophotometer before being stored at -80\u00b0C. The RNA samples were quantified using the Qubit HS RNA kit. 
The samples were checked for degradation using the Agilent Bioanalyzer RNA 6000 Nano kit for determination of the RNA Integrity Number (RIN), with 1 being the most degraded and 10 the least, calculated by computing the ratio of the areas under the 18S and 28S rRNA peaks, the total area under the graph, and the height of the 28S peak. RIN values greater than 7 were considered for further analysis. Two technical replicate samples for each control (basal) and test (high altitude D3, D7, D14, and D21) were transferred to an Illumina HiSeq X sequencing system loaded with sequencing-by-synthesis (SBS) reagents as per Illumina's recommendation to perform 2x150 paired-end sequencing. There were approximately 45 million (150-nt length) reads. The median quality score was >30 across all the samples. The sequence reads were mapped to the Human reference genome (HG19) with the STAR aligner (v2.0). The STAR-mapped reads were processed to remove PCR and optical duplicates using Picard tools, and raw expression counts were generated using featureCounts. The RNA sequencing data are submitted to the Gene Expression Omnibus (GEO): Accession Number GSE133702. The sequence data quality was checked using FastQC and MultiQC software. The data were checked for base call quality distribution, % bases above Q20 and Q30, %GC, and sequencing adapter contamination. All the samples in technical replicates passed the QC threshold (Q30>80%). Raw sequence reads were processed to remove adapter sequences and low-quality bases using Trim Galore. The QC-passed reads were mapped onto the indexed Human reference genome (HG19) using the STAR v2 aligner. Overall, ~94.56% of the reads aligned onto the reference genome. The PCR and optical duplicates were marked and removed using Picard tools. Gene-level expression values were obtained as read counts using the featureCounts software. Expression similarity between biological replicates was tested by Spearman correlation. 
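Assembled from the tools named above, the per-sample read-processing commands might look roughly like the following; the sample and file names, index and annotation paths, and the exact flag choices are illustrative assumptions, not the study's actual invocation:

```python
# Sketch of the pipeline described in the text:
# Trim Galore -> STAR -> Picard MarkDuplicates -> featureCounts.
# All paths/file names here are hypothetical placeholders.
sample = "HAD3_rep1"
cmds = [
    # adapter/quality trimming of paired-end reads
    f"trim_galore --paired {sample}_R1.fastq.gz {sample}_R2.fastq.gz",
    # splice-aware alignment to an HG19 STAR index
    f"STAR --genomeDir hg19_index "
    f"--readFilesIn {sample}_R1_val_1.fq.gz {sample}_R2_val_2.fq.gz "
    f"--readFilesCommand zcat --outSAMtype BAM SortedByCoordinate "
    f"--outFileNamePrefix {sample}.",
    # mark and remove PCR/optical duplicates
    f"picard MarkDuplicates I={sample}.Aligned.sortedByCoord.out.bam "
    f"O={sample}.dedup.bam M={sample}.dup_metrics.txt REMOVE_DUPLICATES=true",
    # gene-level read counting against a GTF annotation
    f"featureCounts -p -a hg19.gtf -o {sample}.counts.txt {sample}.dedup.bam",
]
```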
For differential expression analysis, the biological replicates were grouped as test and control. Differential expression analysis was carried out using the edgeR package. The read counts were normalized, and 40,509 features (70.06%) were removed from the analysis because they did not have at least 1 count-per-million in at least 2 samples. Gene expression signal intensities of each gene probe were clustered according to the Euclidean distance, revealing the correlation for each gene probe during the temporal analysis in Kyrgyz; a heat map, in which a matrix of values is mapped to a matrix of colors, was constructed using the web-based matrix visualization and analysis platform Morpheus for group-wise analysis and CIMminer (http://discover.nci.nih.gov/cimminer). Venn diagram representation of differentially expressed genes in Kyrgyz was done using \u2018InteractiVenn\u2019. DAVID (https://david.ncifcrf.gov/) can provide systematic and comprehensive biological function annotation information for high-throughput gene expression; therefore, we applied KEGG pathway analyses to the DEGs using the DAVID online tool at the functional level. Differentially expressed genes that passed both the numerical cutoff log2FC \u2265 \u00b10.58 and the significance cutoff p-value < 0.05 were assigned official gene names and accession numbers. These genes were put into DAVID analysis based on official gene name and accession number to extract KEGG enrichment. Pathways obtained for the enriched genes were ranked based on the adjusted p-value (Benjamini-Hochberg adjustment). The identified gene ontologies were screened using Morpheus for their expression at the different altitude time points. The file was uploaded to the Morpheus heat map building tool available from the Broad Institute and formatted to conform to the expected column headings as described in the input file documentation. 
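The expression filter (at least 1 CPM in at least 2 samples) and the DEG cutoffs (|log2FC| >= 0.58, p < 0.05) can be sketched in numpy; this mimics edgeR's filtering logic rather than calling edgeR itself, and the toy counts are invented:

```python
import numpy as np

def cpm(counts):
    """Counts-per-million per library (genes in rows, samples in columns)."""
    return counts / counts.sum(axis=0, keepdims=True) * 1e6

def keep_features(counts, min_cpm=1.0, min_samples=2):
    """edgeR-style expression filter described in the text: keep a
    gene only if it reaches min_cpm in at least min_samples libraries."""
    return (cpm(counts) >= min_cpm).sum(axis=1) >= min_samples

def is_deg(log2fc, pvalue, lfc_cut=0.58, p_cut=0.05):
    """DEG call with the fold-change and significance cutoffs used here."""
    return (np.abs(log2fc) >= lfc_cut) & (pvalue < p_cut)

counts = np.array([[0, 1, 0],        # expressed in only one library -> dropped
                   [500, 600, 550],  # well expressed everywhere -> kept
                   [900, 0, 950]])   # >= 1 CPM in two libraries -> kept
kept = keep_features(counts)
calls = is_deg(np.array([0.7, -0.2]), np.array([0.01, 0.01]))
```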
Custom colors were selected to represent high expression (red), moderate expression (pink), and low expression (blue). The 394 genes common across the temporal analysis were matched to the corresponding gene transcripts obtained from functional annotation clustering (using Morpheus), and those common genes were analysed for gene ontology using BinGO. Overrepresentation and visualization of gene ontology (GO) terms and construction of the enrichment GO network were done using Cytoscape 3.2.1 with the Biological Networks Gene Ontology (BinGO) plug-in. Node size represented the number of targets, and color represented the significance of the GO category. The differentially expressed genes were studied for their overabundance in different GO terms as well as pathways. The overabundance of a particular term was decided based on the number of significant genes in the analysis. Functional annotation clustering was achieved using the Database for Annotation, Visualization and Integrated Discovery (DAVID). cDNA for validation was synthesized according to the manufacturer\u2019s protocol with an oligo(dT) primer containing a T7 RNA polymerase site added 3\u2019 of the poly(T). Genes of interest (GOIs) were selected to validate the RNA sequencing data; the genes were chosen because they were significantly differentially expressed and are involved in pathways obtained from the DAVID analysis. The GOIs are TBXA2R, GP9, ITGB2, HBA1, VEGFB, and TNF. The primers used for technical validation of the GOIs are shown in the table. Technical validation was performed using samples from the basal and high altitude day 3 time point groups. Samples for the technical validation were the same as those taken for RNA sequencing, with RNA integrity number >7. 
1 \u03bcg of RNA was converted into double-stranded cDNA by reverse transcription using a cDNA Synthesis Kit. Levels of Cortisol, Coagulation Factor VIII, Vascular endothelial growth factor, and Vimentin were determined at basal and at high altitude D3, D7, D14, and D21 according to the manufacturer\u2019s protocol. Physiological measurements are represented as mean\u00b1SEM in the graphs. For the RNA sequencing analysis, the final set of differentially expressed genes is shown in the figure. To comprehend the complex response to high altitude hypoxia, we studied the physiological perturbation in the Kyrgyz population along with whole-genome changes in a temporal manner. The experimental strategy adopted for the current study is schematically summarized in the figure. Monitoring of SpO2, pulmonary artery pressure, and AMS scoring allowed us to capture the transition between early high-altitude-exposed physiological changes and the later stable response. To determine the severity of high altitude hypoxic stress during the different time points, investigations on our sample of n = 30 were done. Physiological parameters directly related to the high altitude condition, such as blood pressure, heart rate, arterial oxygen saturation, mean pulmonary artery pressure, and acute mountain sickness score, were investigated. Systolic blood pressure increased significantly on D3 (p<0.05) of HA as compared to basal. Thereafter, it reduced to almost the basal level on D14 and D21 of high altitude. Diastolic blood pressure and the remaining parameters are shown in the figures. 
During different time points of high altitude hypoxia exposure, the numbers of genes upregulated and downregulated were represented as a bar graph. Some of the critical pathways with the highest gene counts are: HAD3: cell adhesion molecules (gene count: 31), axon guidance (gene count: 25), MAPK signalling (gene count: 22), Rap1 signalling (gene count: 21), endocytosis (gene count: 21), purine metabolism (gene count: 15). HAD7: cell adhesion molecules (gene count: 21), ribosome (gene count: 14), cytokine-cytokine receptor interaction (gene count: 14), axon guidance (gene count: 11), hematopoietic cell lineage (gene count: 7). HAD14: cell adhesion molecules (gene count: 9), hematopoietic cell lineage (gene count: 9), gap junction (gene count: 8), platelet activation (gene count: 7). HAD21: ribosome (gene count: 28), Rap1 signalling (gene count: 14), oxidative phosphorylation (gene count: 11); upregulated pathways: cytokine-cytokine receptor interaction (gene count: 13), PI3K-Akt signalling (gene count: 12), chemokine signalling (gene count: 11). The involvement of the maximum number of genes, and correspondingly the maximum number of pathways, at the HAD3 time point indicates that this initial time point is critical for understanding the pattern of acclimatization in the high altitude induced hypoxic environment. Initial physiological changes such as reduced SpO2 and increased BP, HR and mPAP could relate directly to the whole-genome perturbation underlying acclimatization at HAD3. Overall, the response indicates a clear transition state in the human body on reaching high altitude from baseline, with stabilization of the response at a later stage. To integrate the complex physiological analysis with the transcriptomic data, clustering of differentially expressed genes was performed with DAVID Bioinformatics. Transcriptomic data sets of different time points were assessed after applying a cut-off of log2FC \u2265 0.58. 
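The log2FC cut-off used throughout this analysis is a simple ratio filter; as an illustrative sketch in Python (the gene names and expression values below are hypothetical, not from the study):

```python
import math

# Hypothetical normalized expression values: (basal, high-altitude day 3).
expression = {
    "VIM":    (100.0, 169.0),   # ~69% increase
    "ACE":    (80.0, 121.2),
    "GENE_X": (50.0, 55.0),     # small change, filtered out below
}

def log2_fold_change(basal, treated):
    """log2 ratio of treated to basal expression."""
    return math.log2(treated / basal)

# Keep genes with |log2FC| >= 0.58, the cut-off applied in the study.
cutoff = 0.58
deg = {gene: round(log2_fold_change(b, t), 2)
       for gene, (b, t) in expression.items()
       if abs(log2_fold_change(b, t)) >= cutoff}
print(deg)
```

Genes passing the filter would then be carried forward to the enrichment and network steps described above.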
Upregulated and downregulated genes were considered for identifying the considerably enriched KEGG pathways. Gene transcripts identified include CD274, CDH2, ESAM, LAMC1, FOLR3, FOLR2, GUCY1A2, TUBB4A, RPS26, RPS15A, CD19, SERPING1, C4BPA, CCR9, TNFRSF17, CCL3, CXCL5, IL9R, PFN1 and ACTB. These gene transcripts, mined with the BinGO plugin, revealed biological terms which were clustered further as biological and cellular processes, transport processes, immune response, regulation of immune response and regulation of metabolic processes. The gene with the highest degree can be claimed as the hub gene at the HAD3 time point; VIM also appeared in betweenness and closeness centrality. Other important genes involved in cell-cell interaction and cellular dynamics are CORO1A, CD37, STMN1, RHOC, PDE7B, NELL1, NRP1 and TAGLN. A GeneMANIA coexpression network of high altitude day 3 time point genes was constructed using the GeneMANIA Cytoscape plugin. After importing the data into Cytoscape and running the network analysis, the top 10 genes were evaluated by five topological features: average shortest path length, degree, clustering coefficient, closeness centrality and betweenness centrality. The network obtained from the network analysis is provided in the supplementary material. Vimentin was observed to increase by ~69% at the day 3 time point in Kyrgyz individuals when compared to basal. Arterial oxygen saturation dropped at altitude during the initial time point HAD3 and was regained at later time points; this result is consistent with previous studies in which lower arterial oxygen saturation has been observed in hypobaric hypoxia. SpO2 monitoring helped us track the progression of high altitude induced hypoxia and gave a lead for gene expression level changes. Overall, temporal analysis of Kyrgyz physiology helped us understand the pattern of responses and cover all phases in this group to study the trend of dynamic changes. Physiological parameters revealed time point D3 to be significantly perturbed; thus, this phase could be the point of analysis. 
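Identifying a hub gene by degree, as described above, reduces to counting edges per node; a toy Python sketch with a hypothetical adjacency list (the actual network came from the GeneMANIA/Cytoscape analysis, and these edges are illustrative only):

```python
# Hypothetical undirected interaction network; edges are illustrative only.
edges = [
    ("VIM", "CORO1A"), ("VIM", "STMN1"), ("VIM", "RHOC"),
    ("VIM", "NRP1"), ("CORO1A", "CD37"), ("STMN1", "TAGLN"),
]

# Degree = number of edges touching each node.
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# The node with the highest degree is the candidate hub gene.
hub = max(degree, key=degree.get)
print(hub, degree[hub])  # VIM has degree 4 in this toy network
```

Betweenness and closeness centrality, also used in the study, are likewise standard graph metrics computed over the same network.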
We found a time window in which physiological changes are observed and thereafter subside gradually, which gave us an opportunity to understand the molecular basis of this phenomenon. In our study, when we subjected the gene expression data to unsupervised learning, we found a similar trend: day 3 has a unique molecular signature compared to the other time points. Our study showed increased systolic and diastolic blood pressure at the initial time point, and significantly increased levels of cortisol at D3 (p<0.05), pointing towards a stress condition after high altitude exposure. Mean pulmonary artery pressure increased considerably on day 3 of high altitude, making individuals prone to pulmonary vasoconstriction and increased pulmonary arterial pressure. Changes in SpO2, heart rate and blood pressure were consistent with previous reports on hypobaric hypoxia. Further analysis of the day 3 time point touched upon pathways related to cellular dynamics and cell adhesion molecules; Rap1 signalling; MAPK; vascular morphogenesis and angiogenesis; platelet activation and hypercoagulation; angiotensin converting enzyme; hypoxic pulmonary vasoconstriction; and immune response, which are discussed separately below. Integrins ITGAV, ITGB2, ITGB7, ITGA4 and ITGA2 in cell adhesion molecule, focal adhesion and extracellular matrix receptor interaction pathways; collagens COL6A3 and COL6A2; claudin CLDN5; and other cell adhesion molecules such as ICAM3, CADM1, CDH2, CD58, ACTB, TLNN1, FLNA, ACTG1, LAMA5, PARVB and MYLK are all upregulated at the initial time point in the Kyrgyz, suggesting modulation of cytoskeleton dynamics as a part of acclimatization to overcome hypobaric hypoxia. Similarly, in Tibetans and Sherpas, gene network analyses suggest involvement of collagen and integrin in multiple pathways related to the cellular functions of angiogenesis. The mechanical interaction between the cell and the extracellular matrix influences cell behaviour and function. 
Cell adhesion is generally the ability of a single cell to stick to another cell or to an extracellular matrix; adherent endothelial cells form the endothelium, a dynamic organ that regulates vascular tone. Temporal cellular level changes at high altitude reveal the occurrence of endothelial dysfunction. These cell adhesion dynamics might point towards enhanced Rap1 signalling, which is evident in our present transcriptome data. Cell dynamics are important in several processes such as coagulation and angiogenesis. Angiogenesis is the process of formation of new blood vessels from the existing vasculature. Arterial and venous thrombosis is observed in some cases of acute hypobaric hypoxia, wherein activation of platelets has been observed during hypoxia exposure in human and animal studies. Angiotensin converting enzyme (ACE) is a protease capable of cleaving two amino acids from angiotensin I to form angiotensin II, a powerful vasoconstrictor. ACE is found at its highest levels in the pulmonary circulation, where it forms angiotensin and thereby causes hypoxic pulmonary vasoconstriction in humans. In our study, the ACE gene transcript (log2 fold change = 0.6) is upregulated in the Kyrgyz on day 7 of high altitude. Previous studies on Kyrgyz individuals revealed a polymorphism associated with the ACE gene: the I/I genotype was more strongly related to pulmonary hypertension in the Kyrgyz than the other genotypes, I/D and D/D. HLA-A, HLA-B, HLA-C and HLA-G (log2 fold change = 0.65, 0.82, 0.91 and 0.64, respectively) are found on the surface of antigen presenting cells for presenting peptides/antigens at the cell surface; gene transcripts such as CD22, CD79A and CD81 (log2 fold change = 0.80, 0.92 and 0.81, respectively) mediate B-cell B-cell interaction; and gene transcripts such as LAT, ARAF and PRF1 (log2 fold change = 0.62, 0.65 and 0.81, respectively) all induce T cell activation. 
MHC activation at the initial time point is a protective mechanism of the Kyrgyz at high altitude for presenting antigens and reflects boosted adaptive immunity. Similarly, an upregulated upstream pathway is the complement and coagulation cascade; activation of gene transcripts such as C2, C3, C4BPA, CFB and C1QB during day 7 also supports activation of the immune system for fighting against antigens. Along with adaptive immunity, the innate immune system is also upregulated during the initial phase of the temporal analysis in Kyrgyz individuals. Further, at the day 7 time point, cytokine-cytokine receptor interaction is downregulated. Cytokines, chemokines and their receptors such as CCR9 (log2 fold change = -1.691), TNFSF10 (log2 fold change = -0.583) and CCL4L1 (log2 fold change = -0.99) were downregulated, which is indicative of reduced migration and activation of immune cells and reduced future pathogenesis of human disease as the individuals become acclimatized. Chemokines are essential players in immune and inflammatory reactions as well as infections. In the present study, it is noticeable that the immune system was highly regulated during the temporal analysis in Kyrgyz individuals. During HAD3, the initial time point for estimating gene level changes in this group, it is evident that antigen processing and presentation, B cell receptor signaling, natural killer cell mediated cytotoxicity and Toll like receptor signaling pathways are all upregulated. The gene transcripts involved were different major histocompatibility complexes, along with VIM, CORO1A, CD37, STMN1, RHOC, PDE7B, NELL1, NRP1 and TAGLN. Involvement of all these genes during hypoxia at the cellular level suggests that hypoxia affects the structure and function of endothelial cells. Among them, VIM lies in all three analyses and has the highest degree, and can thus be pointed to as a hub gene; all the remaining genes lie in at least two of the categories. 
Hypoxia exerts an effect on vimentin, an important component of the endothelial intermediate filament network, which helps maintain the structure and function of endothelial cells. As observed from the KEGG pathway analysis, activation of signalling pathways such as MAP kinase, together with the cell adhesion molecule, focal adhesion and extracellular matrix receptor interaction pathways, can alter the actin cytoskeleton and thereby endothelial cell migration and barrier permeability during hypoxia. Thus, in the KEGG pathway analysis, the top pathways were associated with cell-cell interaction, which is in consonance with the hub gene functions. It has been shown that hypoxia causes redistribution of vimentin to a more soluble and extensive filamentous network that could play a role in endothelial barrier stabilization. The results of our study support other studies which suggest upregulation of vimentin in hypoxic cells, and a vimentin binding peptide has been shown to induce angiogenesis under hypoxic conditions. At the protein level, VIM is significantly upregulated (p\u22640.001) at HAD3. Upregulation of vimentin has been reported in brain capillary endothelial cells, and vimentin has been shown to regulate focal adhesion. It is evident from the clustering of the gene ontology analysis that the initial time point HAD3 is very important in terms of the dynamic changes occurring in the Kyrgyz. Thus, we further analysed this time point to find the involvement of important genes and their associated functions. Genes obtained from the degree, betweenness centrality and closeness centrality analyses were checked for their specific roles. It was found that roughly half of these genes are involved in cellular dynamics, and that cellular dynamics and immune response generation/inflammatory response are interrelated. 
Moreover, in this study we also tried to find the nodal points of the gene interaction network and the candidate genes controlling many cellular interactive pathways (VIM, CORO1A, CD37, STMN1, RHOC, PDE7B, NELL1, NRP1 and TAGLN), and the most significant among them, the VIM gene, was identified. These hub genes may be used in the future as biomarkers and therapeutic targets for accurate diagnosis and treatment of high altitude induced hypoxia. This study will contribute to knowledge of the molecular mechanisms underlying the acclimatization of the Kyrgyz to the high-altitude environment. We acknowledge that the study was limited to 21 days at high altitude and that volunteers were not investigated after descent from high altitude to baseline. Further similar studies can be conducted at a similar altitude on a larger number of volunteers to assess the effects after 21 days and after descent to baseline. S1 File (XLSX). S1 Fig (TIF)."}
+{"text": "CompEngine, a self-organizing, living library of time-series data, lowers the barrier to forming meaningful interdisciplinary connections between time series. Using a canonical feature-based representation, CompEngine places all time series in a common feature space, regardless of their origin, allowing users to upload their data, immediately explore diverse data with similar properties, and be alerted when similar data are uploaded in future. In contrast to conventional databases, which are organized by assigned metadata, CompEngine incentivizes data sharing by automatically connecting experimental and theoretical scientists across disciplines based on the empirical structure of the data they measure. CompEngine\u2019s growing library of interdisciplinary time-series data also enables the comprehensive characterization of time-series analysis algorithms across diverse types of empirical data. Time-series data are measured across the sciences, from astronomy to biomedicine, but meaningful cross-disciplinary interactions are limited by the challenge of identifying fruitful connections. Here we introduce the web platform CompEngine to address this challenge. Massive volumes of time series are also collected for diverse commercial applications, including fault identification from sensor recordings of industrial processes, fraud detection from vast streams of credit-card transactions, and marketing strategy development from the dynamics of online behaviors. The wide range of problems involving time-series data has resulted in a diversity of analysis methods, but time series and their methods are rarely compared across disciplinary boundaries. Taking repeated measurements of a system over time, yielding time-series data, is ubiquitous in science. Time series are studied in mathematics, statistics, and physics, and measured in disciplines ranging from biology and economics to astrophysics and meteorology. 
Applications are correspondingly diverse, including high-throughput cellular phenotyping. There is much to be gained from interdisciplinary collaboration on the study of time-varying systems. For example, connecting researchers studying similar real-world dynamics could prompt them to work collaboratively to understand the common patterns in their data. Similarly, connecting simulations of time-varying model systems (for which underlying mechanisms are known) to the types of real-world systems that exhibit similar dynamics could connect theoreticians and experimentalists to better understand the mechanisms underlying empirical observations. Due to large barriers to identifying commonalities, and hence areas for productive collaboration around common problems, initiating meaningful interdisciplinary connections remains a key challenge. Here we introduce CompEngine, a self-organizing library of interdisciplinary time-series data that automatically highlights meaningful interdisciplinary connections between time series. CompEngine uses a common feature-based representation of time series to organize them according to their computed properties. CompEngine contains an initial set of over 24 000 time series encompassing recordings from a wide range of empirical systems, including birdsong, population dynamics, electrocardiograms (ECG), heart-rate intervals, gait, audio, finance, meteorological, and astrophysical data, and synthetic model systems, including data generated from simulating sets of differential equations, iterative maps, and stochastic processes. Each time series is annotated with user-provided metadata about what system was measured and how it was recorded\u00a0or simulated. CompEngine\u2019s large library of time-series data can be downloaded and used as a representative interdisciplinary data resource to characterize the performance of new time-series analysis algorithms on diverse empirical data. 
In this paper, we motivate and introduce CompEngine, describe the research underlying its functional machinery, and explain how we envisage it being of broad utility for the scientific time-series analysis community. As users upload and share their own data, connections between different data objects are updated automatically as the library grows and reorganizes itself. This process often yields surprising interdisciplinary connections between the properties of empirical data generated from real-world systems and synthetic model-generated data, thus lowering the barrier to fruitful interdisciplinary collaboration by connecting scientists through the data they analyze. For a self-organizing library to meaningfully structure diverse data, it requires an appropriate measure of similarity between pairs of objects in the library. The fundamental data object in CompEngine is a univariate and uniformly-sampled time series. Our challenge is therefore to develop a similarity measure that can compare time series measured at different sampling rates, from different systems, and for different durations of time. Drawing on previous research, we achieve this using a feature-based representation. There are myriad ways two time series can be compared, but computing a set of features from the measured dynamics allows a time series to be represented as a point in a common feature space, regardless of how/where it was measured. Such \u2018feature-based\u2019 representations of time series have been used to successfully tackle a wide range of problems, including classification (or regression), clustering, forecasting, and anomaly detection. To generate a feature vector, a univariate time series of T ordered measurements, xt, is mapped to a set of F features, fi. Each feature is the real-valued output of some algorithm applied to the time series. 
Features capture different types of statistical properties of time series, such as their distribution of values, autocorrelation, stationarity, complexity and predictability, or some characteristic of a fitted time-series model (its parameters and goodness of fit). The feature-based distance between two time series, x(j) and x(k), is defined as the Euclidean distance between their feature vectors, f(j) and f(k). To weight all features equally in the distance calculation, the values of each feature are normalized across the time-series dataset; here we use an outlier-robust sigmoidal normalization. Our core problem then becomes how to define the map from a time series to an F-dimensional feature space: x \u2192 f \u2208 R^F. But how do we select which features to use, given the vast interdisciplinary literature on time-series analysis? Recent work introduced an approach known as \u2018highly comparative time-series analysis\u2019. This approach is implemented in the software package, hctsa, which includes algorithmic implementations of over 7000 time-series features. To investigate whether such a feature-based representation can organize different types of dynamics, we used hctsa to generate feature-based representations of data from fifteen different classes that encompass both: (i) simulated data, from deterministic dynamical systems, discrete iterative maps, stochastic differential equations, and random noise series; and (ii) empirical data, from seismology, river flow, share prices, log returns of financial series, ionosphere fluctuations, sound effects, animal sounds, music, electrocardiograms, heart-rate (RR) intervals, and gait dynamics. The dataset is described in detail in the Supplementary Information. We visualized a t-SNE projection of this diverse dataset from the full hctsa feature space. 
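The distance computation described above can be sketched directly; the outlier-robust sigmoidal form below follows the general hctsa-style recipe, 1/(1 + exp(-(x - median)/(1.35*IQR))), but the exact constants and the crude IQR estimate are assumptions of this illustration:

```python
import math
from statistics import median

def robust_sigmoid(column):
    """Outlier-robust sigmoidal normalization of one feature across a dataset."""
    med = median(column)
    q = sorted(column)
    n = len(q)
    iqr = (q[(3 * n) // 4] - q[n // 4]) or 1.0  # crude IQR; avoid div-by-zero
    return [1.0 / (1.0 + math.exp(-(x - med) / (1.35 * iqr))) for x in column]

def feature_distance(F, j, k):
    """Euclidean distance between series j and k in normalized feature space.
    F is a list of feature columns: one column per feature, one entry per series."""
    normed = [robust_sigmoid(col) for col in F]
    return math.sqrt(sum((col[j] - col[k]) ** 2 for col in normed))

# Three series described by two (hypothetical) features each.
F = [[0.1, 0.9, 0.15],   # feature 1 values for series 0, 1, 2
     [5.0, 1.0, 4.8]]    # feature 2 values for series 0, 1, 2
print(feature_distance(F, 0, 2) < feature_distance(F, 0, 1))  # True: 0 and 2 are similar
```

Normalizing per feature before taking the Euclidean distance is what lets features on very different scales contribute equally.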
The data-driven organization meaningfully represents the structure of the dataset, with distinct categories of data occupying characteristic parts of the space. For example, the lower-right part of the space, labeled \u2018b\u2019, contains audio data: clusters of sound effects, animal sounds, and music. This region also contains a simulated nonlinear driven system, x3\u2009+\u2009sin(\u03a9t) with \u03a9\u2009=\u20091.88. Other examples are annotated, including the slow fluctuations of time series in region \u2018a\u2019, which contains opening share prices, a simulated damped driven pendulum, and the output from a stochastic differential equation (SDE). Periodic dynamics are concentrated in region \u2018c\u2019, including gait dynamics of patients with Parkinson\u2019s disease, ionosphere measurements, and the audio of a button-push sound effect. Overlaps between the structure of real-world dynamics and that of time-varying model systems highlight fruitful possible connections between theory and experiment: the area labeled \u2018a\u2019, for example, mixes real-world and model-generated data. This ability to make meaningful connections between diverse types of time series through a common feature space representation forms the basis for CompEngine being an informative self-organizing library of time-series data. Using hctsa to convert time series to comprehensive feature vectors requires the computationally expensive step of calculating over 7000 features. For an efficient online implementation as CompEngine, we require a more computationally efficient reduced feature set. In recent work, the full hctsa feature library was reduced to a smaller subset of interpretable features that show high classification performance and exhibit minimal redundancy across a wide range of classification tasks, yielding a set of 22 features called catch22. 
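catch22 itself ships as efficient C code; as a self-contained illustration of what a time-series "feature" means in this setting, here are two hand-rolled examples in the same spirit (illustrative stand-ins, not members of the actual catch22 set):

```python
from statistics import mean

def lag1_autocorr(x):
    """Lag-1 autocorrelation: a simple linear-correlation feature."""
    m = mean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

def prop_above_mean(x):
    """Fraction of samples above the series mean: a distributional feature."""
    m = mean(x)
    return sum(1 for a in x if a > m) / len(x)

# A slowly varying series is strongly autocorrelated at lag 1...
smooth = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# ...while an alternating series is anti-correlated.
alternating = [0, 1, 0, 1, 0, 1, 0, 1]

features = lambda x: (lag1_autocorr(x), prop_above_mean(x))
print(features(smooth), features(alternating))
```

Each feature maps a whole series to one real number, so a fixed feature set places every series, whatever its length or origin, at a point in the same space.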
These features are implemented efficiently in C and capture the conceptual diversity present in hctsa, incorporating measures of autocorrelation, predictability, stationarity, distribution of values, and self-affine scaling. Compared to the full hctsa feature set, distances between pairs of time series are highly correlated in the catch22 feature space (r\u2009=\u20090.77), making it a feasible and scalable solution for a self-organizing library like CompEngine. Above, we showed how representing time series in a common feature space provides the basis for building a self-organizing library of time-series data, and how a comprehensive feature library, hctsa, can be reduced to a smaller number of efficiently coded features to facilitate faster feature computation and therefore data comparison. In this section, we describe our implementation of an interactive time-series data library, CompEngine, that self-organizes using computed features of the uploaded data. Upload of time-series data supports text files and Excel files containing a single column vector of real numbers; audio data upload is also supported. CompEngine also supports upload of multiple univariate time series through a bulk upload function. All uploaded data is licensed under the \u2018no rights reserved\u2019 CC0 license. For computational efficiency, individual time series containing more than 10 000 samples are truncated to the first 10 000 samples. Each time series is described for the CompEngine library by providing basic metadata that is sufficient to allow new users to understand it. This includes: Name, Sampling Rate, and Description, as well as Source (identifying who/where/how the data were measured), Category, and Tags. Users may select from an existing Category, or suggest a new one, specifying a parent Category to place the proposed Category in the hierarchy. Tags are unrestricted and allow users to set useful, machine-readable and easily searchable keywords for their data. 
To facilitate automatic connections to new data, users can optionally provide contact information and opt in to regular updates when new data that is similar to their uploaded time series is added to CompEngine in the future. To maintain the quality of the database, uploaded data can be permanently added to the CompEngine library only after being approved by an administrator. New proposed Categories also require administrator approval, and greater care is given to bulk time-series uploads, which require prior approval via a bulk upload request. CompEngine provides an interactive network visualization of the nearest neighbors to a target time series. These nearest neighbors are those with the minimal feature-based Euclidean distances to the target time series. Each node corresponds to a time series, and is colored by its category label and annotated with a representative time trace. Links in the network capture the feature-vector similarity between pairs of time series. Users can interactively zoom in or out of the network, filter on specific sets of the categories of retrieved neighbors, and share the results (to Twitter or as a URL). There is also the option to use a \u2018List view\u2019 that shows the nearest neighbors as a table sorted by similarity, with an interactive visualization of each matching time series. 
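Behind this visualization, neighbor retrieval amounts to ranking the library by feature-space Euclidean distance to the query; a minimal sketch, assuming feature vectors have already been normalized (the library entries below are hypothetical):

```python
import math

# Tiny hypothetical library: name -> normalized feature vector.
library = {
    "ECG_rr":     [0.9, 0.1, 0.4],
    "birdsong":   [0.2, 0.8, 0.7],
    "sde_sim":    [0.85, 0.15, 0.45],
    "sharePrice": [0.5, 0.5, 0.5],
}

def euclid(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_neighbors(query, k=2):
    """Return the k library entries closest to the query feature vector."""
    ranked = sorted(library, key=lambda name: euclid(library[name], query))
    return ranked[:k]

print(nearest_neighbors([0.88, 0.12, 0.42]))  # ECG-like query
```

An exhaustive scan like this is fine for small libraries; a production system would typically use a spatial index for sub-linear retrieval.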
Users can use their domain knowledge to investigate interesting connections between their data and the most similar matches that exist in the CompEngine library. Detailed inspection of time traces and metadata of neighboring nodes can be done using the inspector panel. The interactive online visualization provides useful insights into individual time series and their nearest matches, but some users may wish to perform more sophisticated analyses on the matching data (such as investigating the types of features that drive different types of clustering patterns). To facilitate this, matching data can be downloaded as a directory of .csv files. As well as new, uploaded data, users can also interactively explore the existing data library, as organized by Source, Category, and Tag, or by a custom search. Data can be downloaded from CompEngine as: an individual time series (through the \u2018download\u2019 button associated with every time series); multiple time series matching a given search criterion; or the full time-series database as a bulk download. CompEngine also includes a public API that allows users to retrieve individual time series, or sets of time series that match specified criteria. This allows researchers to programmatically query CompEngine, providing immediate access to the latest time-series data library. In all cases, data can be downloaded in either compressed .csv (.zip) or .json format. CompEngine makes a large library of diverse time-series data freely available to the public, as well as interactive tools to aid exploring the library. By allowing the library to grow over time through community contributions, it can provide a good sample of the type of time-series data studied across different scientific applications. 
In this section, we describe examples of new types of time-series analysis science enabled by CompEngine. CompEngine provides a \u2018Contact Contributor\u2019 button for time series that have been contributed by a user who has provided contact information, allowing users to connect through similarities in their data. CompEngine also continues to search for new matches as additional data is uploaded in the future, and can alert the user (by email) to future matches as they occur. By treating theoretical and diverse types of empirical time series in the same way, CompEngine thus provides a direct incentive for data sharing: the user learns more about how their system of interest relates to other synthetic and real-world systems, both immediately at the time of upload, and into the future as the time-series data library evolves. A self-organizing library structures data objects empirically according to their dynamical properties, not their assigned metadata. Given that no single algorithm can exhibit strong performance on all types of data, CompEngine provides access to a large and growing data repository to facilitate the evaluation of analysis algorithms on diverse time-series data, allowing us to comprehensively and objectively understand the strengths and weaknesses of different time-series analysis algorithms applied to different types of data. This process may highlight unexpected examples of datasets on which a new method performs well, inspiring new interdisciplinary collaborations on common problems. CompEngine can thus be used to understand the strengths and weaknesses of different time-series analysis algorithms in different applications, contrasting with the common practice of evaluating new algorithms on manually selected data (which involves selection bias). 
By fingerprinting the usefulness of different algorithms on different types of data, future methods development could be empirically tailored to the types of problems that our current analysis toolbox performs poorly on. In practice, the selection of a time-series algorithm for a given application is based on the subjective experience of the data analyst; moving towards a more systematic procedure requires a comprehensive understanding of the characteristics of the data that a given algorithm performs well on. CompEngine uses the example of time series to demonstrate how complex data objects can be projected into a common feature space and organized on the basis of their empirical properties. The benefits of such a self-organizing library are not just applicable to time-series analysis, but could also be extended to other data objects, including complex networks, images, point clouds, and multivariate classification datasets. We hope that CompEngine may serve as a template for developing new, self-organizing libraries of other data types that encourage broader scientific collaboration on common data. While recent years have seen dramatic growth in data sharing, including in scientific research, data repositories are typically organized only on the basis of user-assigned metadata. CompEngine adds a computational layer of extracted features to self-organize a large repository of time-series data, automatically retrieving interesting connections between diverse time series. To our knowledge, the platform is the first self-organizing collection of scientific data, containing an initial library of over 24 000 diverse time series. Compared to conventional data repositories, this provides a direct incentive for data sharing, with users immediately obtaining new understanding of the interdisciplinary context surrounding their data, and an option to be notified when similar data are uploaded in the future. 
This resource is relevant to many applications, from individuals self-recording their heart\u00a0rhythms or sleep patterns through a wearable device, to those probing a portfolio of assets or examining drilling profiles. We envisage CompEngine becoming a unifying portal that links disparate users\u2014be they scientists or data analysts\u2014who currently work isolated from one another due to high barriers to comparison, and hence collaboration. Supplementary Information"}
+{"text": "In this article, we present a web-based platform named the Gene Expression Time-Course Research (GETc) Platform that enables the discovery and visualization of time-course gene expression data and analytical results from the NIH/NCBI-sponsored Gene Expression Omnibus (GEO). The analytical results are produced by an analytic pipeline based on an ordinary differential equation model. Furthermore, in order to extract scientific insights from these results and disseminate the scientific findings, close and efficient collaborations between domain-specific experts from biomedical and scientific fields and data scientists are required. Therefore, GETc provides several recommendation functions and tools to facilitate effective collaborations. The GETc platform is a very useful tool for researchers from the biomedical genomics community to present and communicate large numbers of analysis results from GEO. It is generalizable and broadly applicable across different biomedical research areas. GETc is a user-friendly and efficient web-based platform freely accessible at http://genestudy.org/. Over the past few decades, substantial funding and resources have been invested to generate biomedical datasets at many levels, ranging from the nucleic acid and gene level to the population level, in order to understand, treat and prevent various diseases, and to protect public health. Based on the data sharing policies of the National Institutes of Health (NIH) and other government agencies, many of the aforementioned datasets are required to be shared with the general research communities. Consequently, vast amounts of biomedical data are being accumulated in databases and data repositories. However, use or reuse of these existing datasets for research by third parties is still not as common as expected. Gene expression data from various diseases under different experimental conditions are mostly deposited in the NIH/NCBI-sponsored Gene Expression Omnibus (GEO) data repository. 
This article describes a web-based platform that addresses the difficulties in finding, accessing and reusing biomedical datasets, specifically from GEO, as well as the difficulties in finding and forming collaborations. The novel platform, named the Gene Expression Time-Course Research (GETc) platform (http://genestudy.org/), is built on top of an analytical method based on the ordinary differential equation (ODE) model for analyzing time-course gene expression data. GETc offers the following services and functions: (i) it hosts time-course gene expression datasets from GEO annotated with disease and cell types; (ii) it provides user-friendly navigation and searching functions; (iii) it hosts analysis results of the time-course gene expression datasets produced by the ODE analytic pipeline; (iv) it recommends relevant datasets to users based on their research interests; and (v) it recommends relevant papers and collaborators for each dataset hosted in the platform. The rest of the article is organized as follows: Section 2 discusses the background of the analytic pipeline and recommendation systems. Section 3.1 presents the datasets used for developing the GETc platform. Section 3.2 describes the methodology used for the analytic pipeline, the recommendation systems and the platform implementation. Section 4 describes and discusses the results. Finally, conclusions are presented in Section 5. In this section, we present the three main parts of our work: (i) repositories developed for archiving datasets in the biomedical domain and their metadata, (ii) an analytic pipeline developed for analyzing gene expression data and (iii) dataset, literature and collaborator recommendation systems. It is a growing trend among researchers to make their data publicly available for reproducibility and data reusability. Many repositories and knowledge bases have been established for different types of data in many domains. 
GEO (www.ncbi.nlm.nih.gov/geo/), UKBioBank (www.ukbiobank.ac.uk/), ImmPort (www.immport.org/home) and TCGA are a few examples of repositories in the biomedical domain. DATA.GOV archives the U.S. Government's open data from agriculture, climate, education, etc. for research use. However, users from the biomedical community have to visit and search each repository separately to find data for their research, which can be time-consuming and inefficient. DataMed (datamed.org) started an initiative to solve the above issue for the biomedical community by combining biomedical repositories and enhancing query searching using advanced NLP techniques. The study of gene regulation related to different biological functions is critical to understand the underlying mechanism of each function, such as cell growth, division, development and response to environmental stimuli. In addition, gene regulatory networks (GRNs) have been shown to be useful for investigating the interactions among genes involved in a biological process, or genes responsive to an external stimulus. There are many computational approaches in the literature for inferring GRNs from gene expression data, for example, information theory models and Boolean network models. A recommendation system is an enabling mechanism to overcome information overload. Literature in this area can be broadly grouped into content-based and collaborative-filtering-based recommendation systems. Next, we discuss literature related to previously developed recommendation systems. There are many dataset repositories in the biomedical domain and many datasets are added to each repository on a daily basis. For example, 34 datasets were added to the GEO repository daily in 2019. Hence researchers are likely to be overwhelmed by the available data, and they have to visit each repository to search for a dataset. Platforms like DataMed solved this problem, so researchers only had to visit DataMed to search for datasets. 
However, DataMed has not been updated recently. Moreover, the intent of a search is always difficult to identify. The usefulness of literature recommendation is evidenced by the wide acceptance of Google Scholar, Semantic Scholar, PubMed, etc. The CiteSeer project was a pioneering system in this area, and the related-article algorithm 'pmra' was developed on publications from MEDLINE and has been used as the related-article search function in PubMed. Most of the proposed literature recommendation systems use embedding methods to convert text into vectors and calculate the similarity between articles. SciMiner is a web-based platform that identifies gene names in text based on user input and provides literature from MEDLINE for the corresponding genes. Once a researcher finds a dataset suitable for his/her study, he/she may need the literature related to that dataset. A literature recommendation system for datasets may be a helpful tool for this scenario, where researchers can get literature from PubMed for each dataset. Academic collaborator recommendation has long been regarded as a useful application in the academic environment, which aims to find potential collaborators for a given researcher by exploiting big academic data. In the past few years, several works on collaborator recommendation have been proposed. Mainly, co-author network information has been incorporated to enhance collaborator recommendation. When a researcher finds suitable data for his/her study, the researcher may look for collaborators to work with on that dataset. In this scenario, a collaborator recommendation system for each dataset may be helpful. GEO is one of the most popular public repositories for functional genomics data. As of 18 December 2019, there were 122,222 series of datasets available in GEO. Metadata of GEO datasets, such as title, summary, date of publication and names of authors, was collected from GEO using a web crawler. 
The PMIDs of the articles associated with each dataset were also collected, although many datasets did not have associated articles. Time-course datasets: this study was conducted on the time-course datasets from GEO; however, time-course datasets are not identified explicitly on the GEO website. Time-course datasets can be identified manually by reading the dataset descriptions or scanning the associated data, which is a time-consuming and tedious task, so a keyword-based NLP method was applied to identify them. We implemented a regular expression-based approach to extract the time point information from the GEO metadata; for example, phrases like '12 time points' and '7 developmental stages; harvest at 10 hrs, 12 hrs' were used to extract the time point information. The regular expression-based system was able to identify 167 datasets out of 200 random datasets, an accuracy of 83.5%. Further, a total of 555 datasets were filtered manually from the 862 datasets identified by the above system for processing. More details on identifying time-course datasets can be found in the corresponding publication. Related articles were collected from PubMed using search terms and publisher names. However, the articles collected from PubMed cover a variety of topics related to biomedicine and life sciences, which may not all be suitable for building a recommendation system for datasets in GEO. Further, articles published before 1998 were removed, as research on microarray data started during that year. We integrated the series of statistical and modeling methods for time-course gene expression data into an analytic pipeline. The final analysis results of the pipeline can be reported as initial bioinformatics findings for narrowing down the analysis and framing scientific questions, toward new collaborative publications. We could apply the pipeline to each of the time-course gene expression datasets under one experimental or biological condition. 
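The keyword/regular-expression filter described above can be sketched as follows. The patterns here are illustrative stand-ins, not the authors' actual rules:

```python
import re

# Illustrative patterns for phrases such as '12 time points' or
# 'harvest at 10 hrs, 12 hrs'; the real rule set is not published here.
TIME_POINT_PATTERNS = [
    re.compile(r"\d+\s*time\s*points?", re.IGNORECASE),
    re.compile(r"\d+\s*developmental\s*stages?", re.IGNORECASE),
    re.compile(r"\d+\s*(hrs?|hours?|days?|weeks?)", re.IGNORECASE),
]

def looks_like_time_course(metadata_text):
    """Return True if any time-point phrase is found in GEO metadata text."""
    return any(p.search(metadata_text) for p in TIME_POINT_PATTERNS)
```

Datasets flagged by such a filter would then be confirmed manually, matching the two-stage procedure described above.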
Furthermore, simple comparison functions between two or more datasets across experimental conditions and/or from different studies are currently under development for the pipeline. We published the source code of the analytic pipeline (github.com/AutumnTail/Pipeline; updated code), so others can use the pipeline and expand its functionality. Data Recommendation: data recommendation is an essential part of the GETc platform. The dataset recommendation function recommends datasets to researchers based on their publications. The datasets used for this recommendation system contain data not only from GEO but also from other sources such as TCGA, ArrayExpress, SRA and Clinical Trials. We used only the textual information of datasets (title and summary) and publications (title and abstract). A researcher may have multiple research interests. To identify the research interests, we implemented a non-parametric clustering algorithm, the Dirichlet process mixture model (DPMM). More details on DPMM and its parameter tuning for obtaining a suitable number of clusters can be found in the corresponding publication. Literature Recommendation: the literature recommendation system recommends literature for datasets. The most similar literature for a dataset can be determined simply by comparing the cosine similarity of the dataset vector and the paper vectors. For developing the literature recommendation system in GETc, we used BM25, as it resulted in better precision at 10 compared to other embedding methods such as TF-IDF, word2vec and doc2vec. Collaborator Recommendation: the score (AuthorScore) for each unique author of the similar articles is calculated by summing position-dependent weights over those articles. Higher weights were given to the first and last authors of each similar article, whereas lower weights were given to all other authors. 
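As a rough illustration of BM25 scoring for dataset-to-literature matching: the sketch below uses a common non-negative idf variant of the textbook formula, not the platform's implementation; the tokenization and the parameter values k1 and b are assumptions.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each document (a token list, e.g. title+abstract words)
    against the query tokens with a BM25 ranking function."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()                       # document frequency of each term
    for d in docs_tokens:
        df.update(set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# hypothetical mini-corpus: dataset keywords scored against three papers
docs = [["gene", "expression", "time", "course"],
        ["protein", "folding"],
        ["time", "course", "cancer"]]
scores = bm25_scores(["time", "course"], docs)
```

Papers would then be ranked by descending score for each dataset.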
Finally, the authors with the highest scores were recommended as the collaborators for the dataset. The top 1000 recommended publications from the above literature recommender for a single dataset were used for identifying collaborators for that dataset. Furthermore, authors' affiliations provided in papers were parsed using the affiliation_parser (github.com/titipata/affiliation_parser) package, and the distance between the recommended collaborators' and the user's current locations was calculated using the geopy (geopy.readthedocs.io) package to show a distance-based relevance of user and collaborators. In this work, we developed an interactive web-based platform, called GETc, to facilitate collaboration and the sharing of the analytic results of our pipeline on time-course gene expression data from GEO with the general research community. We have identified 555 time-course gene expression datasets with more than 7 time points from GEO. We applied our analytic pipeline to 37 of those datasets (results in Section 4). The output of the analytic pipeline for each dataset is a folder of files containing intermediate and final analytic results, tables, graphics/plots and documents. The output also includes an automatically generated analysis report for each dataset. Platform users can interactively search, browse and identify particular datasets and corresponding results of interest. They can visualize and review the analysis results, including figures and tables, which can be easily downloaded via the platform's web-based user interface. For the unprocessed time-course gene expression datasets included in the platform, users can request execution of the pipeline. The platform also provides its users with recommendations by employing the recommendation systems described in Section 3.2.2. It recommends literature for time-course gene expression datasets, and potential collaborators for extracting scientific insights from the analytic results. 
It also recommends datasets to researchers. Users of the platform can search for a time-course dataset using keywords and phrases and see the available literature, significant gene lists, gene clusters and prospective collaborators for that dataset. A screenshot of the search and view-dataset functionalities is shown in the corresponding figure. The results of the analytic pipeline, which we applied to 37 time-course gene expression cancer datasets from GEO, are presented in Table A1. For each dataset with different conditions, the table shows the number of DRGs, the number of GRMs, the number of time points, the cancer type, the cell line, the organism (human or mouse/rat species), and whether the experiment was ex vitro, in vitro or in vivo. MCF10A, MCF7, HeLa and other widely used cell lines are tested in these datasets. These cell lines originate from various types of cancers such as breast cancer, cervical cancer and leukemia. Also, treatments in these datasets target several essential cancer pathways, such as NFkB, EGFR and hedgehog. These classifications will help researchers perform meta-analyses to identify common/key genes and GRNs in a certain type of cancer. Evaluating recommendation systems is challenging because no benchmark or prior true annotation exists for either dataset recommendation or dataset-driven literature recommendation. For that reason, we performed a manual evaluation by asking expert human judges to rate the systems' recommendations on a one-to-three-star scale based on relevance. We evaluated the recommendation systems using strict and partial precision at 10 (P@10): strict considers only 3-star results, while partial considers both 2- and 3-star results. The dataset recommendation system was evaluated by five judges who had worked on the datasets before. The system obtained P@10 (strict) and P@10 (partial) of 0.61 and 0.78, respectively. 
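The strict and partial P@10 metrics can be computed as follows; this is a minimal sketch, and the star judgments in the usage example are hypothetical:

```python
def precision_at_10(star_ratings):
    """Compute (strict, partial) precision@10 from 1-3 star judgments of
    the top-10 recommendations: strict counts 3-star only, partial 2+."""
    top10 = star_ratings[:10]
    strict = sum(1 for s in top10 if s == 3) / 10
    partial = sum(1 for s in top10 if s >= 2) / 10
    return strict, partial
```

For example, judgments of [3, 3, 2, 1, 3, 2, 3, 1, 3, 3] yield strict 0.6 and partial 0.8.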
For the literature recommendation, we considered 36 datasets for evaluation; the human judges had already worked on these datasets. The proposed system obtained P@10 (strict) and P@10 (partial) of 0.80 and 0.87, respectively. No gold standard dataset for evaluating collaborator recommendation is available to date. Similar to literature recommendation, evaluating our collaborator recommendation system was a challenging task, as judges need time to work with recommended collaborators before they can provide feedback on the system's output. We are currently working with additional collaborators to evaluate the output of the system and generate feedback that we can use to assess the system's quality in the future. A screenshot of the literature (top right corner) and collaborator (bottom right corner) recommendations for dataset GSE14103 is provided in the corresponding figure. We believe the functions of GETc are very useful for researchers from the biomedical genomics community to present and communicate large numbers of analysis results. In addition to datasets from GEO, we are currently expanding the platform with new time-course datasets from other repositories such as TCGA, SRA and ImmPort. We applied ODEs in constructing the high-dimensional gene regulatory network, where having at least 8 time points was essential for the identifiability of the corresponding model. Thus, only datasets with at least 8 time points can be processed with our pipeline. In this work, we developed a novel research platform called GETc for sharing data and analytic results of time-course gene expression datasets from GEO to improve dataset reusability. It is built on top of an analytical method based on the ODE model for analyzing time-course gene expression data. 
The GETc platform provides the means to efficiently search and retrieve data and results, and facilitates collaboration through recommendations of related literature and potential collaborators for each dataset. The platform also hosts a dataset recommendation system that helps researchers in the biomedical domain find datasets based on their publications. This will hopefully lead to a better data-reuse experience. We believe that the proposed idea and computational platform could also be applied to other types of data from different databases or data repositories."}
+{"text": "The advancement of single-cell sequencing technology in recent years has provided an opportunity to reconstruct gene regulatory networks (GRNs) with data from thousands of single cells in one sample. This uncovers regulatory interactions in cells and speeds up the discovery of regulatory mechanisms in diseases and biological processes. Accordingly, more and more methods have been proposed to reconstruct GRNs using single-cell sequencing data. In this review, we introduce technologies for sequencing the single-cell genome, transcriptome and epigenome. At the same time, we present an overview of current GRN reconstruction strategies utilizing different single-cell sequencing data. Bioinformatics tools are grouped by their input data type and mathematical principles for readers' convenience, and the fundamental mathematics inherent in each group is discussed. Furthermore, the adaptability and limitations of these different methods are summarized and compared, with the hope of helping researchers recognize the most suitable tools for them. Gene regulatory networks (GRNs), which describe the regulatory connections between transcription factors (TFs) and their target genes, help researchers to investigate the gene regulatory circuits and underlying mechanisms in various diseases and biological processes. A simple model of gene transcriptional regulation includes two key events: (1) an active TF binds to a cis-regulatory element such as a gene promoter; (2) such binding activates/suppresses the expression of the gene, which leads to the increase/decrease of the gene's RNA level. 
By integrating high-throughput omics data detecting the above two events at a genome-wide scale, various powerful methods have been developed for reconstructing GRNs. Different from bulk sequencing, which averages signals from a bulk of cells, single-cell sequencing isolates single cells from cell populations and labels the DNA molecules derived from every single cell with unique barcodes before next-generation sequencing. Furthermore, there are techniques able to detect more than one type of single-cell omics profile simultaneously; for example, single-cell genome and transcriptome sequencing (G&T-seq) profiles the genome and transcriptome of the same cell. These single-cell omics and multi-omics technologies give us new opportunities to investigate complex gene regulatory mechanisms at single-cell resolution. Tools designed for GRN reconstruction from scRNA-seq data alone have been reviewed and evaluated elsewhere. There are two types of scRNA-seq data: with and without temporal information. In a biological process, condition or experiment, cells can be collected from tissues or cell cultures. These cells could be in a process of change or in a steady state. For instance, cells might undergo differentiation, drug treatments, environmental changes, etc., and transit from one condition to another. In these processes, single-cell snapshot data can be obtained by collecting cells at a certain time point. 
Although each single cell represents a static state at this single time point, cells may have different stochastic behaviors during the same process. 2.1. Provided with expression data with temporal information, ordinary differential equations (ODEs) have been applied to describe expression dynamics and infer GRNs, generally formulated as dx_i/dt = f_i(x_1, ..., x_p), where x_i denotes the expression of gene i. It is assumed that the expression change rate of a target gene is determined by the expression of its regulators; if the parameters of f_i can be estimated from the data, the regulatory edges can be read off from them. 2.1.1. SCODE is a bioinformatics tool designed for scRNA-seq data that uses linear ODEs with pseudo-time series to describe expression dynamics and infer GRNs. A major challenge of ODE-based models is the high computational cost caused by the high dimensionality of samples and genes. To reduce the computational complexity, SCODE alternatively solves an ODE with low-dimensional data by assuming that the high-dimensional data can be linearly expressed in a low-dimensional subspace; the low-dimensional ODE is solved and its solution is transformed back to obtain the regulatory matrix. 2.1.2. GRISLI is another bioinformatics tool for single-cell pseudo-time-series data based on linear ODEs. Breaking the assumption in SCODE that all cells are on the same trajectory, GRISLI assumes that different cells can evolve on different trajectories and focuses on those cells whose trajectories are close to each other. First, the expression change rate, also described as velocity, is estimated for each cell from neighboring cells, considering that some data points might live in the past of others. 2.2. Different from the ODE approach, which considers the expression change rate, the regression-based model is built on the assumption that the expression of a target gene can be predicted from the expression of the TFs regulating it. Regression is one of the most commonly used methods to search for a suitable prediction function. A significant benefit of the regression model is that it is simple to understand and convenient to apply to complicated biological systems. The most common form of regression is linear regression and the associated linear least squares method. 
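A minimal sketch of the linear-ODE idea underlying tools such as SCODE, on synthetic data: dx/dt = A x, with the entries of A read as regulatory weights. Here A is estimated by finite differences plus plain least squares rather than SCODE's low-dimensional reformulation, and the two-gene network is an assumption for illustration.

```python
import numpy as np

A_true = np.array([[-0.5, 0.8],    # gene 2 activates gene 1
                   [0.0, -0.3]])   # gene 2 self-decays

# simulate expression along pseudo-time with small Euler steps
dt, T = 0.01, 500
X = np.zeros((T, 2))
X[0] = [1.0, 2.0]
for t in range(T - 1):
    X[t + 1] = X[t] + dt * X[t] @ A_true.T

# finite-difference estimate of dx/dt, then least squares for A
V = (X[1:] - X[:-1]) / dt                       # velocities, shape (T-1, genes)
A_hat = np.linalg.lstsq(X[:-1], V, rcond=None)[0].T
```

On this noise-free toy trajectory the least-squares fit recovers A essentially exactly; real single-cell data would require the regularization and dimension reduction discussed above.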
Furthermore, the structure of the GRN can be controlled by adding an associated penalty function; for example, ridge regression uses the squared L2-norm of the coefficients as the penalty. Another important benefit of the regression model is the extensive development of optimization algorithms: several popular and efficient numerical algorithms have been proposed to solve the least squares problem and the ridge regression problem, such as gradient descent methods, Newton-type methods and the Levenberg-Marquardt method. 2.2.1. Gene network inference with ensemble of trees (GENIE3) is a tree-based method to reconstruct GRNs. The expression of each target gene is predicted from the expression of all candidate TFs with a random forest, and the importance of each TF in the forest serves as the weight of the corresponding regulatory edge. Benefitting from the fact that few assumptions are required in random forests, GENIE3 is able to capture more complex regulatory relationships in GRNs compared with linear regression. GENIE3 is a good choice for scRNA-seq data without temporal information, while it might perform worse than other methods if the scRNA-seq data contain temporal information. In addition, it may be harder for GENIE3 to infer large networks, since one random forest must be built per target gene. 2.2.2. Single-cell regularized inference using timestamped expression profiles (SINCERITIES) applies regularized linear regression and partial correlation analysis to reconstruct GRNs based on temporal changes in the distributions of gene expression. The temporal change in the expression of each gene is measured by the distance between the gene expression distributions at two subsequent time points, called the distributional distance (DD); the Kolmogorov-Smirnov distance is used to compute the DDs of all genes. Regularized regression then relates the DDs of the TFs to those of their targets. After ranking the absolute values of the coefficients of all possible edges, the inferred GRN is obtained by setting a threshold on the ranked values. 
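The ridge-regression edge ranking described above can be sketched with the closed-form solution; the synthetic data, the regularization value and the designation of which TFs are true regulators are all assumptions for illustration.

```python
import numpy as np

def ridge_edge_weights(X_tfs, y_target, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y.
    |w_j| serves as the weight of the candidate edge TF_j -> target."""
    p = X_tfs.shape[1]
    w = np.linalg.solve(X_tfs.T @ X_tfs + lam * np.eye(p), X_tfs.T @ y_target)
    return np.abs(w)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                # 5 candidate TFs, 200 cells
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
weights = ridge_edge_weights(X, y)
ranked = np.argsort(weights)[::-1]           # most likely regulators first
```

Thresholding the ranked weights then yields the inferred edge set, as in SINCERITIES.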
The sign of the regulatory edge between each pair of TF and target is determined by the sign of the corresponding partial correlation. SINCERITIES reconstructs GRNs with low computational complexity and suits high-dimensional data. 2.3. The regulatory links in GRNs can also be determined by measuring the statistical relationship between the expression of target genes and TFs. Pearson's correlation is the simplest statistic to characterize the association between two genes. However, Pearson's correlation is too naive to characterize the complicated regulatory relationships in GRNs; for example, if two genes are both regulated by a third gene, they may be highly correlated without directly regulating each other. In information theory, the entropy quantifies the uncertainty of a random variable. By considering the distributions of genes, mutual information (MI) is able to quantify the dependence between two genes based on their distributions. The MI of two random variables X and Y can be written as I(X; Y) = sum_{x,y} p(x, y) log[p(x, y) / (p(x) p(y))] = H(X) - H(X|Y). The second equality shows the relationship between MI and entropy: MI measures the reduction in uncertainty of one random variable given knowledge of the other. However, the estimation of MI and conditional MI involves data discretization and the estimation of empirical probability distributions. An inferred regulatory link is more reliable when the value of the measurement is larger; after computing these measurements for all gene pairs, links with lower values can be removed by choosing a threshold to infer the final GRN. 2.3.1. Lag-based expression association for pseudo-time series (LEAP) is a correlation-based algorithm to infer GRNs from pseudo-time-series data. Given expression data ordered along pseudo-time, LEAP computes correlations between genes over a range of time lags, providing a strategy to find the regulatory links between genes and define their direction from the computed measurements. However, the relationships between all genes are assumed to be linear, which might not hold in most cases. As temporal information is considered in the method, pseudo-time-series data are required to infer GRNs. 
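A LEAP-style lagged-correlation score can be sketched as follows, on synthetic sine-wave data; the lag search range and the maximum-absolute-correlation criterion are assumptions of this sketch, not LEAP's exact windowed procedure.

```python
import numpy as np

def max_lagged_correlation(x, y, max_lag=3):
    """Score regulator x against target y by the Pearson correlation that
    is largest in magnitude over candidate pseudo-time lags (x leading y)."""
    best_lag, best_r = 0, np.corrcoef(x, y)[0, 1]
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r

t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)
y = np.roll(np.sin(t), 2)      # hypothetical target: follows x with a 2-step delay
lag, r = max_lagged_correlation(x, y, max_lag=5)
```

A nonzero best lag suggests a directed association from the leading gene to the lagging one.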
In practice, this correlation-based model generally consumes less time because the measurements can be directly computed from analytical formulas, and it works for large networks; for example, a network with 5000 genes was considered in the original study. 2.3.2. Partial information decomposition and context (PIDC) is an information-based algorithm to infer the regulatory relationships between genes. A global threshold for PUC scores might bias the inferred GRNs, because the distributions of PUC scores differ between genes. PIDC provides an approach to quantify the relationship between a pair of genes considering the effect of other related genes in the GRN, and thus extracts more information from the expression data. However, data discretization and MI estimators are required in this method, which might impede the computation of PUC scores, and the performance of PIDC might be influenced by the choice of data discretization methods and MI estimators. 2.3.3. Scribe is another information-based toolkit, designed for datasets with temporal information, to infer causal relationships between genes. It relies on restricted directed information (RDI). Furthermore, conditional RDI (cRDI) is considered to remove the effects of other potential regulators. To correct the sampling bias in computation and improve performance, uniformized RDI and cRDI scores are computed by replacing the original empirical distribution of the samples with a uniform distribution. Scribe extracts more intrinsic information from single-cell expression data by considering arbitrary time delays of effects from regulator to target. 2.4. Unlike the continuous expression values of the nodes in ODE models, a Boolean network describes the interactions of genes with discrete values for their states along discrete time points. The nodes and edges of the network represent genes and the regulatory relationships between them, respectively. 
To represent the expression status of genes, the numeric \"1\" or \"0\" is used to denote the state of a node as \"on\" or \"off\". To characterize the dynamics of the network, Boolean functions with three main operations, AND, OR and NOT, are built to update the successive state of each node, where the operators represent the regulatory manners of TFs toward their targets. The final model is obtained by verifying the dynamic sequence of system states against biological evidence. A drawback of the Boolean network is that computation consumes more time as the number of candidate networks grows with the number of genes; thus, the method is limited to a small number of genes in practice. Example 1. Consider a network with three nodes X1, X2 and X3, where Boolean update functions (given in the original article's figure) compute each node's next state from the current states of the nodes. 2.4.1. Single cell network synthesis toolkit (SCNS toolkit) is a Boolean network-based toolkit for scRNA-seq data with temporal information to reconstruct and analyze GRNs; the diffusion map method is used for dimensionality reduction and ordering of cells. The SCNS toolkit first discretizes the single-cell gene expression into binary states, where \"1\" and \"0\" represent whether a gene is expressed or not, respectively. According to the Boolean update functions that represent the connections of a possible network, the vector bearing the \"1\" or \"0\" states of all genes at an early time point can transit into the state vector of the next time point. State vectors at two adjacent time points are connected to form a state transition graph. The Boolean functions that fit the state series best are chosen when the network is reconstructed (see the Implementation section of the original article). The SCNS toolkit provides insights into developmental processes and the interactions between genes in GRNs across time, and it considers regulatory logic when reconstructing the GRNs. Yet the method for data discretization in the SCNS toolkit might influence the further inference of GRNs. 
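The idea of Example 1 can be made concrete with a small synchronous Boolean network; the update rules below are illustrative stand-ins, since the article's actual rules appear in its figure.

```python
# Hypothetical update rules for a three-node Boolean network:
#   X1(t+1) = X2(t) AND NOT X3(t)
#   X2(t+1) = X1(t) OR  X3(t)
#   X3(t+1) = NOT X1(t)
def step(state):
    x1, x2, x3 = state
    return (x2 and not x3, x1 or x3, not x1)

def trajectory(state, n_steps):
    """Synchronously update all nodes n_steps times, recording each state."""
    states = [state]
    for _ in range(n_steps):
        state = tuple(int(v) for v in step(state))
        states.append(state)
    return states
```

Starting from (1, 0, 0), these rules settle into a two-state cycle, illustrating how a dynamic sequence of system states can be checked against biological evidence.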
As mentioned above, Boolean network-based methods can only deal with small-scale GRNs in real-life computation. 3. Although scRNA-seq data are widely used for GRN reconstruction, the performance of current tools on this data type is still unsatisfactory. Single-cell regulatory network inference and clustering (SCENIC) is one such tool. However, in diseases like cancer, the majority of associated genetic variants are located in regulatory regions of patient genomes, which transcriptome data alone cannot resolve. 4. Fortunately, the development of single-cell epigenomic technologies, such as scATAC-seq, allows the identification of DNA regulatory elements in single cells at a reasonable cost. Open chromatin regions detected by scATAC-seq often contain active DNA regulatory elements for TF binding and gene regulation. 4.1. Self-organizing map (SOM), also known as the Kohonen network, is an unsupervised learning method for clustering and visualization. Starting from random initial weight vectors, the algorithm repeatedly selects a sample, finds its best-matching unit, and moves the weights of that unit and its neighbors toward the sample. The SOM is thus able to map data from a high-dimensional space to a low-dimensional one. Although the convergence of the algorithm has been proved under some conditions, the SOM might require hundreds of thousands of iterations to converge. 4.1.1. Linked self-organizing maps (LinkedSOMs) is a bioinformatics tool developed to infer GRNs by integrating scRNA-seq and scATAC-seq data. The input data for LinkedSOMs are the gene expression data and chromatin data, while pseudo-time is not required. Two SOMs, each with an output set of SOM units, are obtained after training on the scRNA-seq and scATAC-seq data separately; K-means clustering is then applied to group the SOM units and link the two maps. Training two SOMs for the scRNA-seq and scATAC-seq datasets makes LinkedSOMs time-consuming, as mentioned above, though it can still analyze large datasets. 
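The SOM training loop described above can be sketched as follows: a minimal 1-D map on synthetic two-cluster data, where the learning-rate and neighborhood schedules are assumptions of this sketch.

```python
import numpy as np

def train_som(data, n_units=4, n_iters=2000, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D self-organizing map: for each sampled point, find the
    best-matching unit (BMU) and pull it and its neighbors toward the point."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, data.shape[1]))
    for it in range(n_iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))
        lr = lr0 * (1 - it / n_iters)                     # decaying learning rate
        sigma = max(sigma0 * (1 - it / n_iters), 0.1)     # shrinking neighborhood
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)
    return W

# two well-separated synthetic clusters; units should settle near them
data = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                  np.random.default_rng(2).normal(5, 0.1, (50, 2))])
W = train_som(data)
```

In a LinkedSOMs-style workflow, one such map would be trained per data modality before clustering the units.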
Even though the original study of LinkedSOMs focuses on integrating scRNA-seq and scATAC-seq data, it is also applicable to multi-omics data analysis incorporating other single-cell sequencing data. 4.2. Nonnegative matrix factorization (NMF) aims to decompose a nonnegative matrix X into the product of two nonnegative low-rank matrices, X ≈ WH. 4.2.1. Coupled nonnegative matrix factorization (coupled NMF) is an NMF-based approach to reconstruct GRNs via integrative analysis of scRNA-seq and scATAC-seq data. The main assumption in coupled NMF is that the expression of a subset of genes (detected by scRNA-seq) can be linearly predicted from the status of chromatin regions (detected by scATAC-seq). Coupled NMF aims to cluster the cells in each dataset using information from the other one by solving a new optimization problem based on NMF: the scRNA-seq and scATAC-seq matrices are factorized jointly, with a coupling term linking the two sets of factors through the assumed linear prediction. Similar to LinkedSOMs discussed above, other single-cell multi-omics data can also be analyzed with coupled NMF to infer GRNs, although the numerical behavior of coupled NMF has only been demonstrated in the original study. 4.3. Canonical correlation analysis (CCA) is a method to project two different datasets into a correlated low-dimensional space by maximizing the correlation between linear combinations of the features in each dataset; the solution can be obtained by penalized matrix decomposition. 4.3.1. Seurat v3 is a bioinformatics framework that can support GRN inference from scRNA-seq and scATAC-seq data based on CCA. Seurat v3 focuses on the integration of scRNA-seq with different single-cell technologies, including scATAC-seq. It generates an integrated expression matrix, which can serve as the input to downstream analyses such as GRN inference with any single-cell analytic method. Moreover, the approach in Seurat v3 has been extended to assemble multiple datasets, which provides deeper insight into single cells. 
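The basic factorization underlying coupled NMF (without its coupling term) can be sketched with the standard Lee-Seung multiplicative updates; the matrix sizes and rank here are arbitrary choices for illustration.

```python
import numpy as np

# Lee-Seung multiplicative updates for X ≈ W H with nonnegative factors.
rng = np.random.default_rng(0)
X = rng.random((20, 30))          # nonnegative data matrix (e.g. cells x peaks)
k = 4                             # chosen rank
W = rng.random((20, k)) + 0.1
H = rng.random((k, 30)) + 0.1

err = [np.linalg.norm(X - W @ H)]
for _ in range(200):
    # each update multiplies by a ratio, so nonnegativity is preserved
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    err.append(np.linalg.norm(X - W @ H))
```

Coupled NMF runs two such factorizations jointly, with an extra objective term tying the expression factors to the chromatin factors.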
In addition, based on the principles of CCA and KNN, Seurat v3 is capable of dealing with high-dimensional datasets. 5. With the development of various single-cell sequencing technologies, more and more methods for GRN inference from single-cell sequencing data are being proposed. As the saying goes, \"Essentially, all models are wrong, but some models are useful\". Comparisons of several tools that work on scRNA-seq data have been performed with simulated and real data in several published reviews. We also point out that a future direction of method development is the integration of multiple single-cell sequencing data types. Integration of single-cell multi-omics could reduce the impact of noise and enhance performance by cross-validating the regulatory connections in GRNs across multiple datasets. More integrative tools will emerge as more types of single-cell data, such as proteome, metabolome and cell imaging data, become prevalent in the future. They will depict the gene regulatory mechanisms underlying diseases and biological processes more accurately, and provide a more comprehensive map of GRNs covering multiple biological molecules and regulatory layers. In addition to the integration of multiple data types, combining multiple algorithms and tools has also been shown to improve the accuracy of network inference from bulk-cell data. We have no conflict of interest. Xinlin Hu: Writing - original draft, Data curation, Visualization. Yaohua Hu: Conceptualization, Writing - review & editing, Supervision, Funding acquisition. Fanjie Wu: Data curation, Visualization. Ricky Wai Tak Leung: Writing - review & editing, Visualization. Jing Qin: Conceptualization, Writing - original draft, Supervision, Project administration."}
+{"text": "Comparative time series transcriptome analysis is a powerful tool to study development, evolution, aging, disease progression and cancer prognosis. We develop TimeMeter, a statistical method and tool to assess temporal gene expression similarity, and identify differentially progressing genes where one pattern is more temporally advanced than the other. We apply TimeMeter to several datasets, and show that TimeMeter is capable of characterizing complicated temporal gene expression associations. Interestingly, we find: (i) the measurement of differential progression provides a novel feature in addition to pattern similarity that can characterize early developmental divergence between two species; (ii) genes exhibiting similar temporal patterns between human and mouse during neural differentiation are under strong negative (purifying) selection during evolution; (iii) analysis of genes with similar temporal patterns in mouse digit regeneration and axolotl blastema differentiation reveals common gene groups for appendage regeneration with potential implications in regenerative medicine.

With the advance of high-throughput methods, such as RNA-seq, the amount of time series gene expression data has grown rapidly, providing an unprecedented opportunity for comparative time series gene expression analysis, although many studies are aimed only at identifying differentially expressed genes (DEGs). Comparing time series gene expression data from different experiments requires computational methods that can handle sequences of different length and sampling density. In this study, we developed TimeMeter, a statistical method and R package to assess temporal gene expression pattern similarity, and identify differentially progressing genes.
TimeMeter uses the dynamic time warping (DTW) algorithm to align time series, and for gene pairs with similar temporal patterns, it partitions the temporal associations into separate segments via piecewise regression. We applied TimeMeter to compare axolotl with Xenopus, a tetrapod species with limited regenerative capabilities, and to compare human embryonic stem (ES) cells with mouse epiblast stem (EpiS) cells during neural differentiation. Finally, we used TimeMeter to detect STP genes between mouse digit regeneration and axolotl blastema differentiation. It is known that full appendage regeneration in the axolotl is due to the formation and differentiation of a heterogeneous pool of progenitor cells (blastema) at the site of amputation.

TimeMeter first uses the dynamic time warping (DTW) algorithm to align two time series gene expression vectors via the R package 'dtw'. However, one of the pre-assumptions of the DTW algorithm is that the two sequences are comparable. For example, DTW assumes that (a) every aligned index from the first sequence must be matched with one or more indices from the other sequence, and vice versa; (b) the first and the last aligned indices from the first sequence must be matched with the first and the last indices from the other sequence, respectively (but each does not have to be its only match). TimeMeter corrects the DTW alignment by truncating the first (m − 1) start time points in one gene if the first m time points can be aligned to the same start point in another gene, and by terminating the alignment (truncating the rest of the time points) if DTW matches the last elements in either of the genes. This will exclude certain time points from alignment, and result in a truncated alignment. TimeMeter then calculates four measurements on the truncated alignment that jointly assess gene pair temporal similarity: percentage of alignment for (i) the query and (ii) the reference, respectively; (iii) aligned gene expression correlation; and (iv) likelihood of the alignment arising by chance.
These assumptions do not hold true for certain patterns, such as dissimilar patterns or patterns where one series only resembles a small fraction of another. For instance, if two temporally similar genes exhibit differential progression, then when we use the same time window to compare gene expression patterns, certain time points (e.g. at the start or at the end) will fall outside the matched time point boundary. DTW will excessively duplicate the start or end points in one gene to match the out-of-boundary time points in the other gene. TimeMeter corrects the DTW-aligned indices by truncation, and calculates the following measurements:

(i) Percentage of alignment for query: the length of the aligned time interval (after truncation) in the query divided by the total length of the query time interval.

(ii) Percentage of alignment for reference: the length of the aligned time interval (after truncation) in the reference divided by the total length of the reference time interval.

(iii) Aligned gene expression correlation (Rho): the Spearman's rank correlation coefficient (Rho) of the aligned gene expression values (after truncation). This measures how well the gene expression patterns correlate after alignment and truncation.

(iv) Likelihood of the alignment arising by chance (P-value): to rule out that the alignment is a product of random chance, for each gene pair TimeMeter shuffles the gene expression values of both query and reference separately 100 times. For each shuffling, the aligned gene expression Spearman correlation coefficient (Rho) (measurement (iii)) is calculated. TimeMeter assumes that the Rho from shuffling follows a Gaussian distribution with mean (μ) and standard deviation (σ). It calculates the P-value of the likelihood of the alignment arising by chance from the lower-tail probability of this Gaussian distribution, assuming the aligned gene expression correlation between query and reference should be significantly higher than for the shuffled temporal gene expressions.
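TimeMeter itself relies on the R 'dtw' package; the core DTW recurrence it builds on can be sketched in a few lines of Python. This minimal version is illustrative only and omits TimeMeter's truncation corrections.

```python
import numpy as np

def dtw_align(q, r):
    """Minimal dynamic time warping.

    Fills a cumulative-cost matrix with the classic recurrence
    D[i,j] = cost(i,j) + min(diag, up, left), then backtracks to
    return the alignment path as a list of (query, reference)
    index pairs."""
    n, m = len(q), len(r)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - r[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from the corner to recover the optimal path
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# a short query aligned against a longer, slightly shifted reference
q = [0.0, 0.2, 0.9, 1.0, 0.3]
r = [0.0, 0.1, 0.2, 0.85, 1.0, 0.95, 0.3]
print(dtw_align(q, r))
```

Note how DTW's boundary assumption forces the first and last indices of both series to be matched; the duplicated matches near the ends are exactly what TimeMeter's truncation step removes for differentially progressing genes.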
These four metrics assess temporal similarity from different aspects. In this study, we define similar temporal patterns (STP) as gene pairs where: (a) at least one temporal sequence (query or reference) has a percentage of alignment >80%; (b) neither temporal sequence (query or reference) has a percentage of alignment <50% (these two criteria assume that, in the case that one pattern only resembles a fraction of another, the longer matched pattern should represent at least 80% of its original data, and the shorter matched pattern should represent at least half of its original data); (c) Rho > 0.9; and (d) the P-value for the likelihood of the alignment arising by chance is significant. These thresholds are used for identifying STP genes throughout this study.

For each segment of the piecewise regression, the slope measures the fold-change of the speed (query versus reference). A slope greater or less than 1 indicates faster or slower dynamical changes, respectively. A slope equal or close to 1 is a special case in which the speed of dynamical change is the same or similar (a time-shift pattern). TimeMeter further merges adjacent segments if the absolute slope difference is less than deltaSlope (we set deltaSlope = 0.1 for this study) by a linear regression, and recalculates the slope. This process is repeated until no adjacent segments have an absolute slope difference less than deltaSlope. Then, for each segment, TimeMeter calculates the difference between the area under the segmented regression line and the area under the diagonal line, assuming that if the query and reference have no progression difference along time points, the aligned time points should follow the diagonal line. The extent of deviation from the diagonal line can be used to measure the progression difference.
A progression advance score (PAS) is calculated by aggregating the area difference in each segment, normalized by the total aligned time length (after truncation) in the query. Given an STP gene pair (identified in the previous step), TimeMeter scores the progression difference based on the truncated alignment. For each query time point within a truncated alignment, TimeMeter groups and calculates the average corresponding aligned reference time. This results in two variables: aligned query time as the independent variable and average aligned reference time as the dependent variable. Next, TimeMeter applies piecewise (segmented) regression to these two variables, partitioning them into separate segments. The breakpoints in the piecewise regression are determined by the lowest Bayesian Information Criterion (BIC) via enumerating all K (K ≤ N) possible numbers of breakpoints (N = 10 in this study).

The PAS measures the absolute progression difference between two similar temporal pattern (STP) genes. For species with different paces of development (e.g. human versus mouse), the PAS may reflect the difference in developmental pace between the organisms. To investigate genes with 'relative' ('unexpected') progression differences, TimeMeter calculates an adjusted PAS. For each query time point, TimeMeter groups and calculates the median corresponding aligned reference time (after truncation) over all STP genes. This results in two variables: the query time as the independent variable and the median aligned reference time of all STP genes as the dependent variable. Similar to calculating the PAS for each gene pair, TimeMeter calculates a condition-specific progression advance score (c-PAS) that represents the overall progression difference between two conditions (e.g. species). For each STP gene, an adjusted PAS is calculated as PAS minus c-PAS, representing the 'unexpected' progression difference between the two species.
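The BIC-driven breakpoint search can be illustrated with a single-breakpoint sketch. This is a hypothetical simplification of TimeMeter's K-breakpoint enumeration (NumPy-based; function names and data are illustrative):

```python
import numpy as np

def bic(y, yhat, k):
    """BIC for a Gaussian regression with k fitted parameters."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2)) + 1e-12  # guard against log(0)
    return n * np.log(rss / n) + k * np.log(n)

def segment_once(x, y):
    """Compare a single linear fit against every 2-segment split and
    return (score, breakpoint, segment coefficients) with lowest BIC.
    breakpoint is None if the unsegmented fit wins."""
    coef = np.polyfit(x, y, 1)
    best = (bic(y, np.polyval(coef, x), 2), None, [coef])
    for b in range(2, len(x) - 2):
        c1 = np.polyfit(x[:b], y[:b], 1)
        c2 = np.polyfit(x[b:], y[b:], 1)
        yhat = np.concatenate([np.polyval(c1, x[:b]), np.polyval(c2, x[b:])])
        score = bic(y, yhat, 4)  # two slopes + two intercepts
        if score < best[0]:
            best = (score, b, [c1, c2])
    return best

# fast progression (slope 2) followed by slow progression (slope 0.5)
x = np.arange(20, dtype=float)
y = np.where(x < 10, 2.0 * x, 20.0 + 0.5 * (x - 10))
score, bp, coefs = segment_once(x, y)
print(bp, [round(c[0], 2) for c in coefs])
```

The recovered per-segment slopes play the same role as TimeMeter's fold-change of speed: a slope above 1 means the query progresses faster than the reference on that segment, below 1 slower.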
All gene expression values in this study were normalized and scaled from 0 to 1 using the following procedure. Normalization was performed with the median-by-ratio normalization method. We only included genes with significant changes (at least 2-fold expression change) in the time series, and scaled each gene's expression values from 0 to 1 relative to its range across all conditions (e.g. time series).

To investigate how data noise affects the P-values, we added different levels (K) of Gaussian noise to the simulated data. P-values are further adjusted by Benjamini-Hochberg (BH) multiple test correction.

Gene ontology (GO) enrichment analysis was performed using an R package. The dN and dS values between human and mouse protein-coding genes were downloaded from Ensembl (v93). We removed paralogous transcript pairs, because they are not comparable and should be excluded from any natural selection analysis.

The axolotl early developmental gene expression data (transcripts per million (TPM)) were obtained from our previous study. The Xenopus early developmental gene expression data were obtained from the publication (6), and we then performed normalization and scaling. The RNA-seq time series gene expression data (TPMs) on human embryonic stem (ES) cells and mouse epiblast stem (EpiS) cells during neural differentiation were obtained from our previous study.

Surgical procedure: All experiments were approved by the University of Wisconsin-Madison Institutional Animal Care and Use Committee. A total of 47 adult male C57Bl/6 mice (9-10 weeks old) were used for the study. Mice were subjected to hindlimb distal phalanx amputation of digit 3 (P3). For each amputation, mice were anesthetized, the hindlimb claw was extended, and the distal phalanx and footpad were sharply dissected. A regenerating distal phalanx was generated by amputating ≤33% of the P3.
Skin wounds were allowed to heal without suturing. Mice were subjected to micro-computed tomography (microCT) one day prior to surgery and immediately after amputation to confirm ≤33% removal of the P3. Any digit that did not fall within the ≤33% amputation guideline was omitted from the study. Based on our criteria, 17 animals were removed from the study, resulting in a final total of 30 mice. P3 digits were collected at 0, 3, 6, 12 and 24 h, and 3, 7, 14 and 21 days. Each time point contained three mice, except day 7 (six mice). At the time of P3 collection, samples were immediately immersed in RNAlater for 24 h at 4°C. The digits were then removed from RNAlater and stored at −80°C until RNA sequencing.

MicroCT analysis: Mice hindlimb paws were longitudinally imaged using microCT to assess digit regeneration. MicroCT provides the necessary resolution and contrast to measure the digit length and volume used in the analysis. Imaging was performed using a Siemens Inveon microCT scanner, and analysis was conducted using Inveon Research Workplace General and 3D Visualization software. All scans were acquired with the following parameters: 80 kVp, exposure time, 900 μA current, 220 rotation steps with 441 projections, ~16.5-min scan time, bin by 2, 50 μm focal spot size, and medium magnification, yielding an overall reconstructed isotropic voxel size of 46.6 μm. Raw data were reconstructed with filtered back-projection and no down-sampling using integrated high-speed COBRA reconstruction software. The Hounsfield unit (HU) scale, a scalar linear attenuation coefficient, was applied to each reconstruction to permit inter-subject comparisons. Three-dimensional images were segmented using a minimum pixel intensity of 300 HU and a maximum intensity of 3168 HU to represent bone density. After the region of interest was defined, the P3 volume was calculated.
Sagittal length of the digits was also obtained by measuring twice from the distal tip to the proximal edge of the P3 bone. Two researchers who were blinded to one another's measurements independently conducted analyses, and their results were averaged.

RNA-seq: Total RNAs were isolated from tissues using trizol (ThermoFisher #15596018) and chloroform phase separations followed by the RNeasy mini protocol (Qiagen #74106) with optional on-column DNase digestion (Qiagen #79254). One hundred nanograms of total RNA was used to prepare sequencing libraries using the LM-Seq (Ligation Mediated Sequencing) protocol. Reads were mapped using Bowtie (v0.12.8), and TPMs were calculated and reported.

Data access: The RNA-seq raw data (fastq files) and the processed data (TPMs and expected counts) for the mouse digit regeneration data have been submitted to GEO with accession number GSE130438.

The raw axolotl blastema cell differentiation RNA-seq reads were obtained from the previous study. We used Bowtie (v0.12.1) to map these reads; cross-species sequence matches were required to have an E-value < 10−5.

In the example shown, TimeMeter detects a significant alignment (P-value = 3.5e−18) based on shuffling of the original temporal gene expression data. Given the same time window, if the query and the reference are shifted by a longer time (e.g. 15 days or 30 days), more time points fall outside the matched boundary. When we use the same time window to compare two temporal patterns with different dynamical speeds, TimeMeter truncates the unmatched time points. We further investigated how the data noise and the sampling density affect the P-values.

For gene pairs with similar temporal patterns (STP), TimeMeter applies piecewise (segmented) regression to aligned time points (after truncation), and partitions them into separate segments. For the axolotl and Xenopus comparison, we obtained time series gene expression values from two publications. Axolotl and Xenopus diverged about 290 million years ago (MYA), and the dynamics of gene expression differ between the two species during development.
Visually, one can observe a noticeable progression difference if |PAS| > 2. Interestingly, for axolotl-advanced genes (PAS > 4), several well-known key neural development genes are in the list, such as C1QL1; for Xenopus-advanced genes (PAS < −4), several well-known muscle or smooth muscle cell proliferation genes are in the list, such as COMT. In axolotl-advanced genes, four neural development related GO terms are enriched; in Xenopus-advanced genes, two muscle cell proliferation related terms are enriched.

Our previous study compared the transcriptomic dynamical changes between human embryonic stem (ES) cells (from day 0 to day 42) and mouse epiblast stem (EpiS) cells (from day 0 to day 21) during neural differentiation, in which a different method (Barry et al.) was used to identify genes with similar temporal patterns. To investigate whether TimeMeter substantially increased the specificity for detecting STP genes, we decomposed the Barry et al. STP genes into (1) 1260 STP genes that are also detected by TimeMeter (referred to as 'Both') and (2) 2284 STP genes that are only detected by Barry et al. (referred to as 'Barry et al. only'), and performed the following analysis.

First, we performed GO enrichment analysis on the 'Both' and 'Barry et al. only' STP genes, respectively. There are more enriched GO terms in 'Both' than in 'Barry et al. only' STP genes: the 1260 'Both' STP genes are enriched in 474 GO terms (P.adj < 0.05), while the 2284 'Barry et al. only' STP genes are enriched in only 108 GO terms. The GO terms enriched in the 'Both' STP genes cover the majority of the 'Barry et al. only' enriched GO terms, but not vice versa: most 'Barry et al. only' enriched GO terms are also enriched in the 'Both' STP gene set. However, only 13.7% (65/474) of the GO terms enriched in the 'Both' STP gene set are also enriched in the 'Barry et al. only' STP gene set.
For the enriched GO terms, the majority of 'Both' STP genes are driving genes (genes that drive the enriched GO terms), while only around half of the 'Barry et al. only' STP genes are GO driving genes. The GO enrichment analysis suggests that TimeMeter did not randomly pick up a subset of STP genes from the Barry et al. STP gene sets.

Second, the neuron or morphogenesis development related GO terms are top-enriched specifically in the 'Both' gene sets. The STP genes are based on comparing mouse epiblast stem (EpiS) cells differentiated to neural cells with human embryonic stem (ES) cells differentiated to neural cells, and thus it would be expected to find neuron morphogenesis and neuron development terms to be enriched. The 'Barry et al. only' top 5 enriched GO terms are not directly related to neural development.

Third, among the enriched GO terms for the 3544 STP genes from Barry et al., there are 24 terms related to development in the 'Barry et al. only' gene list. In contrast, 20 out of these 24 development related GO terms showed noticeably higher statistical significance for the 1260 STP genes identified by both methods.

Fourth, it is technically challenging to directly use traditional correlation analysis to evaluate the TimeMeter and Barry et al. methods, because the two datasets have different numbers of time points. However, it is a widely accepted notion that the Carnegie stage progression during gestation can be directly compared between human and mouse.
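GO over-representation of a gene set is commonly tested with the hypergeometric distribution. The sketch below is a generic illustration of that test (not the specific R package used in the study), with made-up counts:

```python
from math import comb

def hypergeom_pval(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k).

    N: background genes, K: background genes annotated to the GO term,
    n: genes in the study set, k: study-set genes with the annotation.
    This is the classical GO over-representation test."""
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom

# toy example: 40 of 1000 background genes carry a term;
# the 50-gene study set contains 10 of them (expected ~2)
p = hypergeom_pval(1000, 40, 50, 10)
print(p)
```

Observing 10 annotated genes where about 2 are expected yields a very small p-value, i.e. the term is enriched; in practice such p-values are then adjusted for multiple testing (e.g. Benjamini-Hochberg), as in the comparisons above.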
Hence, if we use a subset of time points to reconstruct a human-mouse time point pair in vitro that resembles human-mouse Carnegie stage equivalents (in utero), the newly constructed gene expression pair will not only have the same number of time points but will also have been adjusted for the differing developmental paces, since the time points in human and mouse are matched to Carnegie stage equivalents. Therefore, we can apply traditional correlation analysis to evaluate the TimeMeter and Barry et al. methods. We integrated the knowledge of human-mouse Carnegie stage equivalents and transposed the Barry et al. (in vitro) days to embryonic day equivalents (in utero). This analysis suggests that TimeMeter significantly increases the specificity for detecting STP genes.

To further investigate whether the increased specificity of TimeMeter comes at the cost of decreased sensitivity, we performed separate GO enrichment analyses on the STP gene sets from TimeMeter and from Barry et al. Of the 23 terms found in both lists, 19 are more significantly enriched in the TimeMeter list than in the Barry et al. list. Another eight terms are specifically enriched in the TimeMeter STP gene list, while only one term (hindbrain development) is close to the level of significance (P.adj = 0.07) only in the Barry et al. list. These results suggest that TimeMeter's increased specificity is not at the cost of decreased sensitivity.

We next calculated the PAS distribution for TimeMeter-detected STP genes. During evolution, the ratio of the nonsynonymous substitution rate (dN) to the synonymous substitution rate (dS) estimates the balance between neutral mutations, purifying selection and beneficial mutations. A dN/dS ratio significantly below 1 indicates that the gene is under strong purifying selection (acting against change).
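A counting-based sketch of the dN/dS ratio follows (the numbers are illustrative; real estimators, such as those implemented in PAML, additionally correct for multiple substitutions at the same site):

```python
def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Counting-based dN/dS: substitutions per site in each class.

    A ratio well below 1 indicates purifying (negative) selection;
    a ratio above 1 suggests positive selection."""
    dN = nonsyn_subs / nonsyn_sites   # nonsynonymous rate
    dS = syn_subs / syn_sites         # synonymous (neutral proxy) rate
    return dN / dS

# hypothetical counts for a conserved gene: few nonsynonymous changes
# despite many nonsynonymous sites, plenty of synonymous changes
ratio = dn_ds(nonsyn_subs=4, nonsyn_sites=800, syn_subs=30, syn_sites=300)
print(ratio)  # 0.05 -> strong purifying selection
```

This is the quantity compared between STP and non-STP genes above: lower ratios in STP genes indicate that protein-changing mutations are being removed by selection.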
We asked whether the genes which have conserved (similar) temporal patterns between human and mouse during neural development are naturally selected. We examined this by comparing the dN/dS ratio between genes identified by TimeMeter with similar and dissimilar temporal patterns. The dN/dS ratio is significantly lower in STP genes than in dissimilar genes, indicating that the STP genes are naturally selected during evolution.

TimeMeter can be applied to compare time series gene expression data, allowing query and reference samples to be in different time windows and with different sampling densities. Axolotl and Xenopus diverged about 290 million years ago (MYA). Genes with large progression differences (e.g. |PAS| > 4) are enriched in function groups which are important for neural and muscle development, indicating that the measurement of differential progression provides a novel feature in addition to pattern similarity that can help characterize early developmental divergence between two species.

Applying TimeMeter to the human ES cell and mouse EpiS cell time series gene expression datasets during neural differentiation, we found that the dN/dS ratio is significantly lower in STP genes than in genes with dissimilar temporal patterns. The results suggest that STP genes involved in neural differentiation in mice and humans are under strong negative selection during evolution.

We compared mouse digit regeneration with axolotl blastema differentiation, and detected 38 STP genes between these two processes. Among the enriched GO terms, the telomere maintenance and organization terms are particularly interesting.
In vertebrates, regenerative abilities decline with aging. TimeMeter is available at http://www.morgridge.net/TimeMeter.html. The mouse digit regeneration RNA-seq data generated in this study are available at GEO with accession number GSE130438. Supplementary data are provided in gkaa142_Supplemental_Files."}
+{"text": "Today massive amounts of sequenced metagenomic and metatranscriptomic data from different ecological niches and environmental locations are available. Scientific progress depends critically on methods that allow extracting useful information from the various types of sequence data. Here, we will first discuss types of information contained in the various flavours of biological sequence data, and how this information can be interpreted to increase our scientific knowledge and understanding. We argue that a mechanistic understanding of biological systems analysed from different perspectives is required to consistently interpret experimental observations, and that this understanding is greatly facilitated by the generation and analysis of dynamic mathematical models. We conclude that, in order to construct mathematical models and to test mechanistic hypotheses, time-series data are of critical importance. We review diverse techniques to analyse time-series data and discuss various approaches by which time-series of biological sequence data have been successfully used to derive and test mechanistic hypotheses. Analysing the bottlenecks of current strategies in the extraction of knowledge and understanding from data, we conclude that combined experimental and theoretical efforts should be implemented as early as possible during the planning phase of individual experiments and scientific research projects.This article is part of the theme issue \u2018Integrative research perspectives on marine conservation\u2019. First, we need to clarify what exactly we consider a sequence and what we understand as information. When speaking about sequences, most biologists understand a sequence found in biological macromolecules, such as the sequence of nucleotides within a DNA or RNA molecule or the sequence of amino acids within a protein. Strictly speaking, sequences are far more general and describe any set of objects arranged in some sequential order. 
In this work, we will mostly refer to biological sequences given by the order of chemicals arranged sequentially within a macromolecule, but would like to stress that measurements obtained at various time points also represent a sequence, from which plenty of useful information can be extracted. Such sequences were particularly important before the advent of high-throughput technologies that allow macromolecular sequences to be read efficiently. As we will discuss, sequences of sequences, i.e. time-series of biological sequence data, are a valuable source from which to infer information.

While sequences are rather straightforward to define in a very general sense, it is far more challenging to capture the notion of information in a simple definition. In information theory, information, or rather the generation of information, is quantified by the information entropy. As an example, consider a written text: the same information can be written in many languages. The Shannon entropies of all these texts may be the same, or at least very similar. But for me as a receiver it makes a great deal of difference whether the text is written in English (which I understand) or in Finnish (which I don't). This example illustrates that the information content of data, as quantified by the Shannon entropy, does not help us to predict how much useful information we can extract. It further illustrates that, in addition to the data themselves, knowledge about the decoding system is required to actually make use of the information.
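The Shannon entropy of a text or nucleotide sequence can be computed in a few lines (a minimal sketch over symbol frequencies):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: -sum p_i * log2(p_i),
    where p_i is the observed frequency of each symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("ACGT"))  # 2.0, maximal for a 4-letter alphabet
print(round(shannon_entropy("AACGTACGTTTT"), 3))  # skewed composition, below 2
```

A run of a single repeated nucleotide has entropy 0, while a uniform mix of A, C, G and T reaches the 2-bit maximum; as the text notes, this number measures randomness of the arrangement, not how useful the decoded information is.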
In the following, information is interpreted as 'knowledge obtained from investigation, study, or instruction', which entails that besides the pure information content also the associated decoding mechanisms are considered. It is very simple to calculate the Shannon entropy of an arbitrary text, and the resulting number will tell us how randomly the letters are arranged into a sequence. Understanding information as a signal that becomes valuable after decoding by a receiver, a DNA sequence contains more informative content than just the sequence of the four different nucleotides that a DNA molecule is composed of. The order of the nucleotides within the DNA sequence reduces the information entropy. In eukaryotes, for example, the genome sequence contains several types of repeated nucleotide sequences (repeats). This phenomenon results in a reduction of the DNA information entropy, as was shown in an earlier study.

(b) Proteins, defined by the information encoded in the DNA sequence (the gene), fulfil certain functions within a living organism. Information gathered from specific marker genes allows conclusions about evolutionary forces that are responsible for adaptation and speciation processes. For example, the most commonly used marker gene in prokaryotes is 16S ribosomal RNA (rRNA). Alternatively, protein-coding marker genes such as the rpoB gene provide more phylogenetic resolution than the 16S rRNA gene and are often used in gathering evolutionary information. Proteins resulting from the translation of the DNA sequence may, in the simplest case, perform exactly one function. However, there are multiple known examples where this simple one-to-one relation is not accurate. Multifunctional proteins, the so-called 'moonlighting proteins', perform more than one biochemical or biophysical function.
(c) Zooming out from the level of single genes to the whole library of genes stored in an organism's genome allows extraction of information from the sequence in a different context. Considering the whole genome as an information source, several sequence characteristics can be scanned to coax out functionality encoded in the genome structure. Focusing on the GC content variation between organisms, for example, points to genomic adaptations that might have played a significant role in the evolution of the Earth's contemporary biota.

(d) Whereas the genomic content stored in the DNA remains rather constant throughout the lifespan of an organism, the rates with which individual genes are transcribed vary strongly over time. Transcription is regulated by multiple factors, including environmental stimuli. The result of this regulation can be observed by measuring the quantity of the messenger RNA transcripts (mRNA) under different conditions or over time. These data provide additional information that cannot be obtained from the DNA sequence alone. Transcriptomics techniques allow analysis of the entirety of all transcripts available from one organism in different tissues, under different conditions or at different time points. Information obtained by transcriptomics allows conclusions about the regulation of gene expression. There are two key contemporary techniques in the field: use of microarrays, which quantify a set of predetermined sequences, and RNA sequencing (RNA-Seq), involving high-throughput sequencing to capture all sequences.

(e) One of the main goals of sequence analysis is the determination of functional properties. The corresponding methods are often referred to as 'functional profiling'. This process usually begins with comparative analyses of sequences of interest against annotated databases.
For instance, after sequencing the protein-coding gene of interest, the obtained reads are mapped to a reference database such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) orthology. If the same function is encoded in highly identical protein sequences, then we would consider the information entropy of such sequences in general as very low. Sometimes sequences may perform the same function but differ in their content, e.g. in amino acid composition. An example is the LSR2 protein, which is a transcriptional silencer found in Actinobacteria, where it binds AT-rich DNA and silences its transcription. Functional profiling of genes and proteins is an important step in understanding the role of a sequence in the context of the whole genetic repertoire of an organism. How genes interact on the functional level is yet a higher level of information, from which new knowledge can be extracted.

(f) Understanding biological systems presupposes investigating how matter and energy are converted in order to maintain their functions. How exactly these processes work is very likely written in the genetic sequence. To decode it, we need more understanding than the information from sequence content alone, or how strongly a gene is expressed. Rather, the interplay between various gene functions is essential. Metabolic pathway reconstruction, molecular interaction and reaction network analysis, followed by mapping processes to reference pathways, increase our understanding of higher-level functions of an organism.

(g) Fundamental research in biology heavily relies on model organisms. They have been used to uncover mechanisms that synthesize, modify, repair and degrade the genetic sequence and its encoded product, the signalling pathways that allow cells to communicate, the mechanisms that regulate gene expression and the pathways underlying diverse metabolic functions.
Metatranscriptomics allows researchers to quantify community gene expression in an environmental sample using high-throughput sequencing technology. Today we have several pipelines (e.g. SAMSA2) to analyse the huge amount of data efficiently using high-performance computational utilities.

Metaproteomics (community proteomics) characterizes all the proteins expressed at a given time within an ecosystem. This allows us to create hypotheses and draw conclusions about microbial functionality. Further, it makes it possible to study the adaptive responses of microbes to environmental stimuli or their interactions with other organisms or host cells.

3. Most methods reviewed above extract and study information from genomic sequences, either alone or in a comparative context, but mostly as static structures without considering any temporal dynamics. Gene expression information describing the quantity of reads obtained either in different conditions or from different time points does contain time as a factor. Whereas comparative genomics can generate hypotheses regarding the evolutionary dynamics of genes and genomes, dynamics on shorter time-scales have not yet been discussed. It is apparent that even the best meta-omics dataset obtained for a single time point cannot yield any information regarding, for example, the mechanisms underlying the population dynamics observed in an ecosystem. Before we discuss recent and ongoing approaches to analyse time-series of sequence data and extract mechanistic information, and thus understanding, we briefly summarize essential concepts of time-series analysis in general.

The main objective of time-series modelling is to carefully collect and examine observations from the past in order to develop a suitable model that describes the inherent structure of the series. This model is then used to generate future values for the series, i.e. to make predictions.
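As a toy illustration of fitting a simple ('economical') linear model to a time-series and using it for prediction, here is a least-squares AR(1) sketch. The data and function names are hypothetical, not taken from the studies discussed.

```python
import numpy as np

def fit_ar1(x):
    """Fit the recurrence x[t] = a * x[t-1] + b by least squares."""
    X = np.vstack([x[:-1], np.ones(len(x) - 1)]).T
    a, b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    return a, b

def forecast(x, steps, a, b):
    """Iterate the fitted recurrence to predict future values."""
    out, last = [], x[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

# noiseless AR(1) series with a = 0.8, b = 1.0 (converges towards 5)
x = [0.0]
for _ in range(30):
    x.append(0.8 * x[-1] + 1.0)

a, b = fit_ar1(np.array(x))
print(round(a, 3), round(b, 3))
print(forecast(x, 3, a, b))
```

On real ecological counts the fit would be noisy and the linear form is exactly the limitation noted above: such a model cannot capture nonlinear dynamics, which motivates the ANN approaches discussed next.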
There are many ways to analyse time-series data, depending on how much prior knowledge is available about the underlying mechanisms. Often we first distinguish between seasonal, cyclic and irregular components. For example, in one ecological time-series the observed frequency of the sea star Dermasterias imbricata increased in three areas and, until 2015, exceeded the model prediction for population development. The serious limitation of such a model, however, is the assumed linear form of the associated time-series, making it insufficient in many practical situations. When adapting a model to a dataset, particular attention should be paid to selecting the most economical model. Here, \u2018most economical\u2019 refers to the simplest possible model that can explain the data without overfitting. A commonly applied methodology for the investigation of nonlinear stochastic models is the use of artificial neural networks (ANNs). Their characteristic is their applicability to time-series prediction problems through an inherent ability to model nonlinearities without having to assume a statistical distribution; the corresponding model is formed adaptively on the basis of the given data. For this reason, ANNs are inherently data-driven and self-adaptive. The most common and popular are multi-layered perceptrons (MLPs), characterized by a feed-forward network (FNN) with a hidden layer. This method has a wide range of applicability; for example, phage protein structures could be predicted based on the genetic sequence. 4. The strategies to analyse time-series data discussed above are essentially statistical methods that aim at extracting patterns from time-series without using prior knowledge in order to make predictions about underlying mechanisms. Mechanistic models pursue a complementary approach. 
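As an illustration of the MLP idea for one-step-ahead time-series prediction, the following sketch trains a tiny one-hidden-layer network by plain gradient descent on a synthetic noisy sine series. All sizes, rates and the data are illustrative choices of ours, not taken from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: noisy sine wave.
t = np.arange(200)
series = np.sin(0.2 * t) + 0.05 * rng.standard_normal(t.size)

# Sliding-window framing: predict x[t] from the previous `lag` values.
lag = 5
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

# Tiny MLP: lag -> hidden -> 1, tanh activation, trained by full-batch GD.
H = 16
W1 = rng.standard_normal((lag, H)) * 0.3
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.3
b2 = np.zeros(1)

lr = 0.01
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # network output
    err = pred - y
    # Backpropagation of the mean squared error.
    g_out = 2 * err[:, None] / len(y)
    W2 -= lr * h.T @ g_out
    b2 -= lr * g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    W1 -= lr * X.T @ g_h
    b1 -= lr * g_h.sum(0)

mse = np.mean((pred - y) ** 2)
```

After training, the mean squared error should fall well below the variance of the series, i.e. the network beats the trivial predict-the-mean baseline.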
Based on experimental observation and often a great deal of intuition, a researcher formulates hypotheses on certain underlying interactions that give rise to an observable macroscopic behaviour. These hypotheses are then translated into equations capturing the interactions in a quantitative way. Solving these equations generates simulation results that can be compared with experimental observations, thus verifying or falsifying the initial hypotheses. This approach has been extremely successful for relatively small systems and for very fundamental questions; almost a century ago, Lotka and Volterra modelled predator-prey dynamics in this way. More recently, Moejes et al. developed mechanistic models of the phycosphere of the diatom Phaeodactylum tricornutum. Owing to their high throughput and resolution, time-resolved 16S barcoding data contain information on hundreds of species. Barcoding refers to a global bioidentification system that employs DNA sequences as unique identifiers linked mostly to a specific taxonomic unit. Genomic sequence, together with functional annotation, allows the reconstruction of genome-scale metabolic network models, which encompass the complete biochemical repertoire encoded in an organism\u2019s genome. The current development of modelling techniques to simulate interactions of organisms on a metabolic level proceeds with enormous momentum, and controlled mesocosm experiments allow such models to be tested. 5. The key question for the future is how we can ensure that ongoing data collection efforts, generating vast amounts of biological sequence data, are optimally suited for the development of mechanistic models. These cannot only describe data, but also rationalize what we observe based on underlying fundamental mechanisms. It is understandable that, when a new and rather unknown system, such as the global marine microbiome, is investigated for the first time, a rather unbiased, exploratory approach is taken, as is exemplified by the Tara Oceans expedition. This example demonstrates, however, that the amount of data does not necessarily correlate with the gain of basic understanding. We conclude that two main aspects will become increasingly important for biological research in the near future to close the gap that currently exists between the vast amount of high-throughput data and the actual fundamental understanding generated from it. Firstly, methods need to be developed, and already existing ones need to be implemented in the daily experimental work process and refined, to integrate different types of data, for example measurements of CO2, phosphate, nitrate, salinity, pressure, chlorophyll density, etc. Moreover, novel approaches will be required to integrate results from different methods of data analysis to maximise the information gain. Secondly, after an era of mainly exploratory data acquisition, it is of paramount importance to strengthen hypothesis-driven experimental approaches. In fact, we are convinced that the involvement of theory cannot begin too early. Bioinformaticians and modellers should be involved during experimental design, because these researchers are typically those that formulate clear working hypotheses and have a model structure in mind, even before a detailed mathematical model has been constructed. Only in close interdisciplinary discussion can the different goals and aims of experimentalists and theorists be harmonized, and experiments be planned so that the resulting data are optimally suited to build mechanistic models and test scientific hypotheses."}
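The mechanistic-modelling loop described above (hypotheses, equations, simulation, comparison with observation) can be sketched with the classic Lotka-Volterra predator-prey model; the parameter values and initial state below are arbitrary demonstration choices, not fitted to any dataset.

```python
import numpy as np

def lotka_volterra(state, a=1.0, b=0.4, c=0.3, d=1.2):
    """Hypothesized interactions written as ODE right-hand sides."""
    x, y = state  # prey, predator
    return np.array([a * x - b * x * y, c * x * y - d * y])

def rk4_step(f, state, dt):
    """One classical Runge-Kutta (RK4) integration step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Solving the equations yields trajectories to compare with observations.
dt, steps = 0.01, 5000
traj = np.empty((steps + 1, 2))
traj[0] = (5.0, 2.0)
for i in range(steps):
    traj[i + 1] = rk4_step(lotka_volterra, traj[i], dt)
```

The simulated trajectories oscillate around the coexistence equilibrium; disagreement with field counts would falsify (or refine) the hypothesized interaction terms.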
+{"text": "Background Telomeres, which are composed of repetitive nucleotide sequences at the ends of chromosomes, behave as a division clock that measures replicative senescence. Under normal physiological conditions, telomeres shorten with each cell division, and cells use the telomere lengths to sense the number of divisions. Replicative senescence has been shown to occur at approximately 50\u201370 cell divisions, which is termed the Hayflick limit. However, in cancer cells telomere lengths are stabilized, thereby allowing continual cell replication by two known mechanisms: activation of telomerase and Alternative Lengthening of Telomeres (ALT). The connections between the two mechanisms are complicated and still poorly understood. Results In this research, we propose that two different approaches, G-Networks and Stochastic Automata Networks, which are stochastic models motivated by queueing theory, are useful to identify a set of genes that play an important role in the state of interest and to infer their previously unknown correlations by obtaining both stationary and joint transient distributions of the given system. Our analysis using G-Networks detects five statistically significant genes associated with either mechanism, contrasted to normal cells. A new algorithm is introduced to show how the correlation between two genes of interest varies in the transient state according not only to each mechanism but also to each cell condition. Conclusions This study expands our existing knowledge of genes associated with mechanisms of telomere maintenance and provides a platform to understand similarities and differences between telomerase and ALT in terms of the correlation between two genes in the system. This is particularly important because telomere dynamics plays a major role in many physiological and disease processes, including hematopoiesis. 
The introduction of Stochastic Automata Networks (SANs) has made it possible to write the infinitesimal generator of a Markov chain in a product form, in order to ease the problem of dimensionality and the complexity of the vector-matrix product. Specifically, the generator takes the form Q = \u2295_{i=1}^{N} L_i + \u2211_{r=1}^{R} ( \u2297_{i=1}^{N} M_{r,i} + \u2297_{i=1}^{N} N_{r,i} ) (1), where N is the number of automata, R is the number of synchronizations, and L_i and M_{r,i} are matrices containing information about the local transitions and the effect of the synchronization r on the automaton i, respectively. N_{r,i} is the normalizing matrix of M_{r,i}, and \u2297 and \u2295 denote, respectively, the tensor (Kronecker) product and tensor sum. Equation (1) provides the transition rates of a time-continuous Markov chain of the type just described. For a decade after the introduction of SANs, many outstanding analytical results were proved. For example, Plateau and Stewart proved that a product-form stationary distribution exists for SANs without synchronizations, as long as some numerical conditions related to local balance are satisfied. Prior to that, Boujdaine et al. in 1997 considered SANs with limited synchronization and proved a sufficient condition for the existence of a stationary distribution by applying properties of the Kronecker sum. Note that classic queueing networks such as Jackson\u2019s networks and G-Networks with positive and negative customers have product-form stationary distributions. To our knowledge, however, correlation analysis in SANs is rarely examined, although correlation quantifies the degree of inter-relatedness of two automata. It contributes to the understanding of how the association of two automata changes with time according to the state of interest, with the behavior of the other automata in the system simultaneously considered. In the current study, we first examine the exact form of an infinitesimal generator for G-Networks with positive and negative customers. Then it is applied to a gene regulatory network (GRN) having five genes related to the onset of cancer in order to show the time-dependent dynamics of transition rates and the correlation between two particular genes of interest. 
Throughout this paper, our analysis is based on time-continuous Markov chains, but we note that most results are valid in the time-discrete case provided complications such as periodicity are excluded. Most analyses and applications of SANs focus on finding the steady-state distribution of queueing system models. A transition in one automaton may initiate a simultaneous transition in another automaton. Such transitions are collectively referred to as synchronizing transitions. In one synchronizing transition, two automata are paired: the first is called the master automaton, and it affects the state of another automaton as its own state changes; the second is called the slave automaton, and it is affected by its master automaton. In any given automaton, transitions not classified as synchronizing transitions are local transitions. Such transitions only change the state of one automaton. The SAN consists of a number of automata, which are correlated by synchronizing transitions. Each automaton has states and transitions, and the state space of the system is the Cartesian product of the state spaces of the automata. In the SAN, two assumptions are made: first, that the time to transition is exponentially distributed, and second, that a multidimensional Markov chain represents the SAN, although a particular automaton may not be Markovian itself. Then the infinitesimal generator (the matrix of time derivatives at 0 of the transition probabilities), which reflects the change in transition probability from one state to another, is given by Eq. (1). Q is defined as follows: L_i is the local normalized transition rate matrix of automaton i. For all i\u2208{1,2,\u22ef,N}, M_{r,i}=I_K, where I_K is an identity matrix of size K, except when i is one of the two indices corresponding to the master and slave automaton of synchronization r. Here, msr(r) and sl(r) are the indices of the master automaton and slave automaton, respectively, in the rth synchronization. 
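A minimal sketch of how the tensor-sum part of the SAN descriptor is assembled in practice, assuming two toy local generators of our own; the synchronizing terms, which would add Kronecker products of the M and N matrices, are omitted for brevity.

```python
import numpy as np

def kron_sum(A, B):
    """Tensor sum A (+) B = A x I_B + I_A x B."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# Two toy local generators (rows sum to zero, off-diagonals >= 0).
L1 = np.array([[-1.0, 1.0],
               [2.0, -2.0]])
L2 = np.array([[-0.5, 0.5],
               [0.5, -0.5]])

# Global generator of the two independent automata: a 4 x 4 matrix on the
# Cartesian product of the two local state spaces.
Q = kron_sum(L1, L2)
```

The resulting Q is itself a valid generator: each row sums to zero and the off-diagonal entries are non-negative.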
Here, D_{r,msr(r)} denotes the transition rate matrix due to the rth synchronizing event on the master automaton, and E_{r,sl(r)}, a transition probability matrix, is the effect of the rth synchronizing event on the automaton sl(r). For each synchronizing transition, the normalizing matrix is the (K^N\u00d7K^N)-dimensional diagonal matrix whose lth diagonal element is the negative sum of the lth row of the corresponding synchronization matrix. G-Networks (Gelenbe networks), devised by Erol Gelenbe, are Markovian stochastic models grounded in queueing network theory. They were the first queueing models to introduce the notion of a \u2018negative customer\u2019, which can be biologically interpreted as a repression signal. In contrast to a positive customer, which joins the queue, a negative customer arriving at a non-empty queue removes one positive customer and thereby reduces the queue length. Positive customers, biologically the mRNA expression level, accumulate in an automaton, i.e. a gene, having an infinite state space. Positive and negative customers synchronously move from a master gene to a slave gene within the system with transition probabilities, locally affecting the queue. Both types of customers arrive at the ith gene from outside of the system at given rates, and service times at the ith gene are independent and identically exponentially distributed with rates \u03bc_i for i=1,2,\u22ef,N, as illustrated in the accompanying figure. G-Networks have the advantageous property of being analytically tractable because of the existence of product-form stationary distributions under the typical Markov chain assumptions. Let x={x_1,x_2,\u22ef,x_N} be the vector of non-negative integers representing the state of the network. Then the time-dependent vector {x(t):t\u22650} is a continuous-time Markov chain, which satisfies the system of Chapman-Kolmogorov equations. Element x_i(t), 1\u2264i\u2264N, of the vector x(t)=(x_1(t),x_2(t),\u22ef,x_N(t)) is the number of customers (or the mRNA expression level) of gene i at time t. It has been proved that the joint steady-state distribution \u03c0(x) of x(t) has the form of a product of the stationary distributions of the individual queues,
each satisfying the balance equation of the G-Networks, i.e. q_i = \u03bb_i^+/(\u03bc_i + \u03bb_i^-) with \u03c0_i(x_i) = (1\u2212q_i)q_i^{x_i} for i=1,2,\u22ef,N, where \u03bb_i^+ and \u03bb_i^- denote the total arrival rates of positive and negative customers at gene i. It implies that the stationary probability for each positive recurrent state is expressed in terms of a product of functions depending solely on the state of a single queue. The matrices used to build Q are defined as follows in terms of the customer-related rates: I, the identity matrix; Upp, the matrix with entries 0 except the main upper diagonal, which is 1; Low, the matrix with entries 0 except the main lower diagonal, which is 1; and variants of these with the Kth diagonal element set to 0. This section consists of two parts. The first part shows the mathematical result that derives the correlation between two genes utilizing Kronecker algebra. The second part contains the biological results based on the application of the mathematical result to the gene regulatory network associated with telomere biology. If Q is the infinitesimal generator and P(t) is the transition probability matrix of a finite Markov chain, we can derive a set of differential equations called Kolmogorov\u2019s forward equations, in the form P\u2032(t)=P(t)\u00b7Q. The solution of the forward equation is given in this case by the power series expansion that converges for any square matrix Q, namely P(t)=exp(Qt)=\u2211_{n=0}^{\u221e}(Qt)^n/n!. The resulting (1\u00d7K^N)-dimensional time-dependent state probability vector at time t has the form \u03c0(t)=\u03c0(0)\u00b7exp(Qt), where \u03c0(t) is the state probability vector at time t. In order to obtain the joint distribution \u03c0_{i,j}(t) of two automata, say i and j, we introduce two matrices, C_{i,j} and a companion matrix, which are (K^N\u00d7K^2)- and (K^2\u00d7K^N)-dimensional, respectively; after several steps of calculation, multiplying \u03c0(t) by these matrices yields the joint distribution for all i\u2260j, i,j=1,2,\u22ef,N. This can be easily calculated once the infinitesimal generator, Q, of the entire system is obtained, and it yields a (K\u00d7K)-dimensional table that describes the joint probabilities for each time point t. 
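The forward-equation solution \u03c0(t)=\u03c0(0)\u00b7exp(Qt) can be computed numerically as sketched below; the two-state generator is a toy example of ours, and the truncated-Taylor-plus-squaring routine is one simple way to evaluate the matrix exponential.

```python
import numpy as np

def expm(A, terms=20, squarings=10):
    """Matrix exponential via truncated Taylor series + scaling and squaring."""
    A = A / (2.0 ** squarings)                 # scale down so the series converges fast
    P = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        P = P + term
    for _ in range(squarings):                 # square back up
        P = P @ P
    return P

# Toy 2-state generator: leave state 0 at rate 2, leave state 1 at rate 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
pi0 = np.array([1.0, 0.0])                     # start in state 0

P_t = expm(Q * 5.0)                            # transition matrix at t = 5
pi_t = pi0 @ P_t                               # state distribution at t = 5
```

By t = 5 the chain has essentially reached its stationary distribution (1/3, 2/3), which the global balance equation of this toy generator predicts.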
Based upon it, the association between two genes for all t is evaluated using the Pearson product-moment correlation coefficient. In this research, we use the GRN provided in the figure and the Abnormal Pathway Detection Algorithm (APDA) based on G-Networks. The algorithm produces K^2-tuples of mRNA expression levels at each fixed time t, and thereby illustrates the relationship between any two genes. The state probability vector needs to be transformed into a joint state probability vector \u03c0_{ij}(t). The parameters of Q are estimated and optimized according to the APDA, and we examine \u03c0_{i,j}(t) under the normal condition together with the values reported in the table. The service rates \u03bc_i, for all i, are identical for both normal and ALT cells, while the transition probabilities d_i differ between the cell types; the transition probabilities are the deciding factors in the correlation coefficients. The empirical cumulative distribution function (ECDF) was exploited to confirm that parameter estimation via the APDA is appropriate, aligning with the theory of G-Networks: we compare the ECDF of data simulated using the estimated and/or assumed parameter values listed in the tables with the theoretical stationary distribution determined by the q_i. The ECDF at a point x is the proportion of sample values less than or equal to x, i.e. F_n(x)=(1/n)\u2211_{i=1}^{n}1(x_i\u2264x). First, the figure shows the correlation in the transient regime, i.e., before the system reaches a steady state. Due to the product-form stationary probability distribution of the network state in the theory of G-Networks, the correlation coefficient becomes 0 when the system becomes stationary, here around t=0.8. As shown in the figure, the patterns of correlation between each pair of genes do not noticeably differ among the types of cells, although they are dissimilar from one pair to another within the same type of cells. For example, the correlation between CEBPA and FOXM1 in a transient state is stronger than that of c-Myc and hTERT, and it reaches a steady state more slowly in all cell types. 
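Given the (K\u00d7K) joint probability table \u03c0_{i,j}(t) at a fixed time, the Pearson correlation of the two genes follows directly from the marginals and the cross moment; the sketch below uses a made-up 2\u00d72 table.

```python
import numpy as np

def corr_from_joint(joint):
    """Pearson correlation of two discrete variables from their joint table.

    Entry (a, b) of `joint` is P(X_i = a, X_j = b) for expression levels
    a, b in 0..K-1.
    """
    K = joint.shape[0]
    levels = np.arange(K)
    px = joint.sum(axis=1)                 # marginal of gene i
    py = joint.sum(axis=0)                 # marginal of gene j
    ex, ey = levels @ px, levels @ py      # means
    exy = levels @ joint @ levels          # E[X * Y]
    vx = (levels ** 2) @ px - ex ** 2
    vy = (levels ** 2) @ py - ey ** 2
    return (exy - ex * ey) / np.sqrt(vx * vy)

# Toy joint table with positive association.
joint = np.array([[0.30, 0.05],
                  [0.05, 0.60]])
rho = corr_from_joint(joint)
```

For a product-form (independent) table, the same routine returns a correlation of zero, which matches the stationary behaviour described above.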
We discuss the similarities and differences in the correlation of two genes between normal and malignant cells with either active telomerase or ALT. Note that if we conversely assume that a pair of genes is highly negatively correlated at the initial time point, then the negative correlation prevails over time, as shown in the corresponding figure. G-Networks and Stochastic Automata Networks, both stemming from queueing theory, can contribute to the analysis of similarities and differences between telomerase activation and ALT using genes related to telomere maintenance. However, there are the following two caveats. First, the correlation coefficient, whose sign relies upon the initial joint state probability vector \u03c0ij(0) in our example, evaluates the strength of the evidence for a relationship between two genes but does not determine causal relationships. In other words, even if a positive correlation between CEBPA and hTERT in ALT cells is revealed, it is not known which gene activates the other. Second, the dimension of the infinitesimal generator, Q, which is a function of tensor products, can be enormously large, with the corresponding matrix being sparse depending on the number of states of each queue. The state space used in this research is relatively small; computations in larger dimensions can be made practical by using Compressed Sparse Row storage and a number of methods for the approximation of the matrix exponential. One important advantage of using mathematical modeling to understand the dynamics of a biological system is that the model can be a cost-effective and time-saving supplement to, or even a substitute for, laboratory experiments in which the patterns of the system are delicate and complex. Especially the stochastic approach is useful to quantify the role of fluctuations in the behavior of the system of interest. 
In spite of the caveats discussed in the previous section, we propose that a stochastic paradigm based on the APDA and G-Networks can suggest statistically significant genes in cells with either telomerase or ALT, compared to normal cells, from preexisting GRNs. We further confirmed that the APDA satisfactorily estimates the parameters, which decide the rates of convergence to stationarity, as the ECDF of the data simulated using the estimated parameters approaches the theoretical CDF determined by q_i, i\u2208{1,2,\u22ef,N}, as time progresses. Further, our correlation analysis based on SANs helps to infer links between genes in various conditions that have not yet been discovered through experiments. We show that our mathematical expression allows a comparison of the mechanisms (telomerase vs. ALT) by which telomeres maintain their sufficient lengths, even though the trend differs from one pair of genes to another within the same type of cells. To conclude, this study provides a platform to detect significant genes and to infer previously unknown connections among them by applying modeling techniques borrowed from queueing theory. This is particularly important because telomere dynamics plays a major role in many physiological and disease processes, including hematopoiesis. If Q is the infinitesimal generator and P(t) is the transition probability matrix of a finite Markov chain, we can derive a set of differential equations called Kolmogorov\u2019s forward equations, in the form P\u2032(t)=P(t)\u00b7Q, with initial condition P(0)=I, where I is an identity matrix having the same dimension as Q. The solution of the forward equation for finite Markov chains is given by the power series expansion that always converges for any square matrix Q, so that P(t)=exp(Qt)=I+Qt+H(t), where ||\u00b7|| denotes the l1-norm and H(t) is a linear operator with ||H(t)||=o(t) as t\u21920. When the infinitesimal generator, Q, is irreducible, its transition probability matrix, P(t), has strictly positive entries for all t>0. 
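A sketch of the ECDF check described above, assuming (as in G-Network theory) that the stationary queue-length distribution is geometric with parameter q, i.e. P(X=k)=(1\u2212q)q^k; the value of q and the sample are synthetic rather than produced by the APDA.

```python
import numpy as np

def ecdf(sample, points):
    """Fraction of observations <= each evaluation point."""
    sample = np.asarray(sample)
    return np.array([(sample <= x).mean() for x in points])

rng = np.random.default_rng(1)
q = 0.4
# numpy's geometric counts trials >= 1; subtracting 1 gives the number of
# failures, whose pmf (1-q)^? matches P(X = k) = (1 - q) q^k.
sample = rng.geometric(1.0 - q, size=20000) - 1

ks = np.arange(10)
empirical = ecdf(sample, ks)
theoretical = 1.0 - q ** (ks + 1)       # CDF of the stationary distribution
max_gap = np.abs(empirical - theoretical).max()
```

A small maximum gap between the two curves is the agreement criterion: a large gap would indicate that the estimated parameters are inconsistent with the product-form theory.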
It signifies that, as the trajectory length N approaches the limit, at least one transition between any pair of states will almost surely happen, without regard to the sparsity of Q. Note that \u03c0\u00b7Q=0 is a set of |S| linear equations, which is usually used to detect the stationary distribution \u03c0, if one exists. A formal solution for the time-dependent state probability vector is \u03c0(t)=\u03c0(0)\u00b7exp(Qt), where \u03c0(t) is the state probability vector at time t. In order to obtain the joint distribution \u03c0_{i,j}(t) of two automata, say i and j, we introduce two matrices, C_{i,j} and a companion matrix, which are (K^N\u00d7K^2)- and (K^2\u00d7K^N)-dimensional, respectively. The time-dependent joint state probability vector \u03c0_{i,j}(t) can then be easily calculated, once the infinitesimal generator of the entire system is obtained. Once \u03c0_{i,j}(t) of two genes is obtained, we can form a (K\u00d7K)-dimensional contingency table that describes the joint probability for each time point t. Based upon it, the association between two genes for all t is evaluated using the Pearson product-moment correlation coefficient. Gene expression data (GSE14533) were obtained from the National Center for Biotechnology Information Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/). The data include gene expression levels for 3 types of cells: 18 ALT cell lines, 16 telomerase-positive cell lines and 4 normal (fibroblast) cell lines. Initially, the gene expression levels of 29 genes were extracted from the data, among which the significant genes in either telomerase-positive or ALT cells, compared to the normal cells, were discovered by the APDA based on G-Networks. Expression values were aligned at a fixed percentile to guarantee identical medians across all samples, then normalized again and scaled with mean 3 and variance 1. In such cases, the parameters of the stationary distribution are set as follows:
for i \u2208 {indices of initiating genes}. We adopt the APDA introduced originally in earlier work, but slightly modified, to estimate the parameters for each gene i. Rules and assumptions for the other parameters are shown in the corresponding table, and the time increment \u0394t is decided by one of the 4 cases shown there, in accordance with the global balance equation of G-Networks. Algorithm 1 contains the details of the simulation assessing the estimated parameter values from the APDA."}
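A minimal Gillespie-style simulation of a single G-Network queue, in the spirit of the simulation used to assess the estimated parameters; all rates here are made-up values rather than APDA estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_pos, lam_neg, mu = 3.0, 0.5, 4.0   # positive arrivals, negative arrivals, service

x, t, t_end = 0, 0.0, 5000.0
time_in_state = {}                      # total time spent at each queue length
while t < t_end:
    # Negative customers and service only act on a non-empty queue.
    total = lam_pos + (lam_neg + mu if x > 0 else 0.0)
    dt = rng.exponential(1.0 / total)
    time_in_state[x] = time_in_state.get(x, 0.0) + dt
    t += dt
    u = rng.uniform(0.0, total)
    if u < lam_pos:
        x += 1                          # positive customer arrives
    else:
        x -= 1                          # service completion or negative customer

p0 = time_in_state.get(0, 0.0) / t      # empirical P(queue empty)
q = lam_pos / (mu + lam_neg)            # theoretical utilization; P(empty) = 1 - q
```

The empirical fraction of time the queue is empty should match the product-form prediction 1\u2212q, which is how simulated chains can validate estimated rates.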
+{"text": "Deciphering the relationship between clinical responses and gene expression profiles may shed light on the mechanisms underlying diseases. Most existing literature has focused on exploring such relationship from cross-sectional gene expression data. It is likely that the dynamic nature of time-series gene expression data is more informative in predicting clinical response and revealing the physiological process of disease development. However, it remains challenging to extract useful dynamic information from time-series gene expression data.We propose a statistical framework built on considering co-expression network changes across time from time series gene expression data. It first detects change point for co-expression networks and then employs a Bayesian multiple kernel learning method to predict exposure response. There are two main novelties in our method: the use of change point detection to characterize the co-expression network dynamics, and the use of kernel function to measure the similarity between subjects. Our algorithm allows exposure response prediction using dynamic network information across a collection of informative gene sets. Through parameter estimations, our model has clear biological interpretations. The performance of our method on the simulated data under different scenarios demonstrates that the proposed algorithm has better explanatory power and classification accuracy than commonly used machine learning algorithms. The application of our method to time series gene expression profiles measured in peripheral blood from a group of subjects with respiratory viral exposure shows that our method can predict exposure response at early stage (within 24 h) and the informative gene sets are enriched for pathways related to respiratory and influenza virus infection.The biological hypothesis in this paper is that the dynamic changes of the biological system are related to the clinical response. 
Our results suggest that when the relationship between the clinical response and a single gene or a gene set is not significant, we may benefit from studying the relationships among genes in gene sets, which may lead to novel biological insights. The aim was to develop early predictors of susceptibility and contagiousness based on expression profiles collected prior to and at early time points following viral exposure. In genomics studies, time-series gene expression data are often analysed with common machine learning algorithms; several groups did so in a community challenge on respiratory viral exposure (https://www.synapse.org/#!Synapse:syn5647810/wiki/402364), and some work reported differences in transcriptomics. To study the relationship between viral exposure response and time-series gene expression data, we hypothesize that the changes (i.e. dynamics) of the relationships between genes in gene sets may be informative about viral exposure response, and propose a statistical framework to characterize and integrate dynamic information for response prediction, where the model parameters have clear biological interpretations. The main innovations of the paper are: firstly, we use the spectral norm to extract information about the difference between two networks; secondly, we model the changes of dynamic co-expression networks based on a graph-based change point detection method; thirdly, we measure the similarity between two subjects by the relationships between gene trajectories. The rest of the paper is organized as follows. In the simulation study, we fixed the number of genes at G=80. The sample size N and the total number of time points T took values from the sets {20,50,100} and {40,80,150}, respectively. In the main text, we show the evaluation results for the case {N=100,T=40}; for the other cases, the results are provided in the Supplementary Materials [see Additional file]. The genes were organized into four gene sets, O1, O2, O3 and O4, respectively, with each gene set containing 20 genes. 
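The first innovation, measuring the difference between two co-expression networks by the spectral norm of the difference of their adjacency matrices, can be sketched as follows; the two weight matrices are synthetic.

```python
import numpy as np

def network_distance(A, B):
    """Spectral norm (largest singular value) of the difference of two
    adjacency/weight matrices."""
    return np.linalg.norm(A - B, ord=2)

# Two toy 2-gene networks: a strong edge versus a weak edge.
A = np.array([[0.0, 0.9],
              [0.9, 0.0]])
B = np.array([[0.0, 0.1],
              [0.1, 0.0]])
d = network_distance(A, B)
```

Identical networks have distance zero, so this scalar summary can be tracked across time points to flag structural change.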
To model the time series data, we assume an AR(1) model for the mean expression levels, with noise covariance \u03a3_{0.1}, the diagonal matrix with 0.1 as the diagonal element. In our model, the algorithm is based on the relationship between the response label and the change of the dynamic structure. Under the null hypothesis, the covariance matrix of a gene set is \u03a30 across the time points; under the alternative hypothesis, the covariance matrix is \u03a30 up to some time point, after which it changes to \u03a31. We assume \u03c1 is a constant and 1 is the matrix of ones; for the null hypothesis, \u03c1=0, and we consider different scenarios for the alternative hypothesis with \u03c1 taking values from the set {0.1,0.3,0.5,0.7,0.9}. In this section we assess the performance of the proposed algorithm on simulated data: the time-series gene expression data are simulated for 50 subjects labelled \u2018+1\u2019 through this model, and for 50 subjects labelled \u2018\u22121\u2019 the data are generated analogously for gene sets p\u2208{1,2,3,4}. Under our simulation models, the first and third gene sets have changes in both the positive and negative groups, and the changes happen at time points 15 and 25, respectively. For the second and fourth gene sets, the positive group has no change point and the negative group has changes at the 20th time point. Therefore, the second and fourth gene sets are informative about the response label. We compared the proposed algorithm with commonly used machine learning algorithms, including Logistic Regression (LR), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). Change point detection and parameter inference: the results under different scenarios are shown in the corresponding table, with the contributions of the four gene sets denoted by b1, b2, b3, and b4, respectively. As discussed in the simulation models, the subject label is the result of different change points in gene sets 2 and 4. 
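A simplified sketch of the simulation setup, omitting the AR(1) mean process and using an exchangeable-correlation form for the post-change covariance as an assumption of ours: it generates one gene set whose covariance structure changes at a known time point.

```python
import numpy as np

rng = np.random.default_rng(3)
G, T, tau, rho = 20, 40, 20, 0.5        # genes, time points, change point, rho

sigma0 = np.eye(G)                                       # pre-change covariance
sigma1 = (1 - rho) * np.eye(G) + rho * np.ones((G, G))   # exchangeable correlation

pre = rng.multivariate_normal(np.zeros(G), sigma0, size=tau)
post = rng.multivariate_normal(np.zeros(G), sigma1, size=T - tau)
series = np.vstack([pre, post])         # shape (T, G): one subject's gene set

# Average pairwise sample correlation before / after the change point.
iu = np.triu_indices(G, 1)
mean_corr_pre = np.mean(np.corrcoef(pre, rowvar=False)[iu])
mean_corr_post = np.mean(np.corrcoef(post, rowvar=False)[iu])
```

The jump in average pairwise correlation at the change point is exactly the signal the graph-based change point detection is asked to find.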
For comparability, the absolute value of parameter b is denoted by |b|. When \u03c1 is greater than 0.3, |b2| and |b4| are the largest of the 4 parameters, which is consistent with the model structure. So when the difference between \u03a30 and \u03a31 is large enough, our method can identify the gene sets which contribute more to the response label. In the table, \u2018P-value\u2019 is the average p-value over 100 replications using the graph-based change point detection method, and \u2018CHP(%)\u2019 represents the proportion of times the change point is precisely detected in 100 simulations. When \u03c1 is less than 0.1, the structural difference between \u03a30 and \u03a31 is small, and the detected change point may not be statistically significant. When \u03c1 is greater than 0.5, there is a more than 90% chance of detecting the change point. Prediction accuracy: We average the time series data across time points as the input before they are analyzed by LR, LDA, SVM and KNN. The average ROC curves over 100 simulations of the classification results for each algorithm are shown in the figure for each value of \u03c1, and the average AUC values are summarized in the table. The AUC increases with \u03c1, which is consistent with our model hypothesis, as it is easier to infer the labels with a larger \u03c1. The time-series gene expression data analyzed next come from a challenge (https://www.synapse.org/#!Synapse:syn5647810/wiki/402364; note that only registered users can log into the website). The data were collected from healthy volunteers exposed to a respiratory virus within a controlled experimental setting where some became ill and others did not despite the same exposure. Data were derived from seven viral challenge experiments in which volunteers were exposed to one of four different respiratory viruses in order to find gene expression profiling signatures of susceptibility. 
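AUC values like those reported in the comparison can be computed directly from scores and labels via the Mann-Whitney statistic: the AUC is the probability that a randomly chosen positive subject receives a higher score than a randomly chosen negative one. The scores below are synthetic.

```python
import numpy as np

def auc(scores, labels):
    """AUC via pairwise comparisons; ties count as 1/2."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
a = auc(scores, labels)
```

This rank-based computation is equivalent to the area under the empirical ROC curve, without having to construct the curve explicitly.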
Peripheral blood gene expression profiling was performed at 55 time points ranging between -30 h (pre-exposure) and 672 h (post-exposure). The released data include 125 subjects from seven study centers with time-series gene expression data for 22,277 probes in peripheral blood for each subject, with a total of 2371 samples. Additionally, clinical information was also available, such as age, gender, and the time the samples were measured. To reduce noise, we removed 7 subjects who were injected with interfering viruses, removed probes corresponding to multiple genes, and averaged the multiple probes corresponding to the same gene. We considered a total of 12,532 genes. Therefore, we have N=118, G=12,532, and T=55 for this data set. There are 68 subjects with positive labels and 50 subjects with negative labels. The overall data can be visualized by the heat map shown in the figure. In this section, we evaluate the performance of our proposed method through real data analysis. Some challenge results related to this paper are provided in the Supplementary Materials [see Additional file]. Firstly, we selected the gene sets that may be related to viral exposure responses. We consider \u201cSYMPTOMATIC-SC2\u201d as the response label, which is a binary variable indicating a post-exposure maximum symptom score greater than six, and then screen out differentially expressed genes from each cross-sectional gene expression data set at the 55 time points, even if a gene is not significant. This led to 55 gene sets. Secondly, we represent each gene set as an undirected weighted network in which the weight is given by gene expression similarity, where we used the Pearson correlation coefficient of two genes to define their similarity; this defines the weight function of the network. In terms of the 55 gene sets, we consider those gene sets among the top 12 in which the relationships among genes change at an early stage. 
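Constructing the undirected weighted network for one gene set, with Pearson correlations as edge weights, can be sketched as follows; the expression matrix (subjects by genes) is synthetic, with one deliberately correlated gene pair.

```python
import numpy as np

rng = np.random.default_rng(4)
expr = rng.standard_normal((30, 5))     # 30 subjects, 5 genes
expr[:, 1] = expr[:, 0] + 0.1 * rng.standard_normal(30)  # correlated pair

W = np.corrcoef(expr, rowvar=False)     # 5 x 5 symmetric weight matrix
np.fill_diagonal(W, 0.0)                # no self-loops in the network
```

Computing W separately at each time point (or window) yields the sequence of co-expression networks whose changes are then screened for change points.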
The inferred parameters are summarized in the corresponding table. We randomly selected 70% of the subjects as the training set and the remaining as the test set. The training set contained 83 subjects (35 subjects with negative labels and 48 subjects with positive labels), 12,532 genes, and up to 55 time points. We want to test the biological hypothesis that the dynamic networks with an early change point contribute more to the response label. The corresponding figure plots gene expression against time t, and the result shows that there is a clear difference between the positive and negative groups. More importantly, at the early stage it is very difficult to distinguish the positive and negative groups from the trajectory of a single gene. The results show that the 44th, 2nd, 34th and 35th gene sets contribute more to the response than the other gene sets. By enrichment analysis for these four gene sets, we can identify pathways related to viruses. In this paper, we adopt a screening approach to find potential gene sets which may be related to the response. For this screening step, we do not consider multiple testing when we detect change points of the dynamic networks. We further identify the gene sets related to the response through the proposed Bayesian model. The screening step can be considered as a variable selection step where no response information is used. In addition, when there is no simple relationship between the clinical response and a single gene or a gene set, a model that studies the changes of the relationships among genes in gene sets may offer novel biological insights. We have proposed a novel approach of modeling time-series gene expression data for inferring an individual’s response to viral exposure. The biological hypothesis in this paper is that the dynamic changes of the system are related to the clinical response.
Compared with previous time series analysis methods, we showed that change point detection for dynamic networks may be informative for the relationship between the clinical response and the dynamic nature of the system (gene sets). Joint consideration of multiple kernels based on gene sets with dynamic network structures not only can predict an individual’s clinical response, but can also help elucidate the biological pathways involved. The effectiveness of the proposed method was demonstrated through the analyses of both simulated and real data. A related technique is dynamic time warping (dtw), which has been applied to gene expression data. In this paper, we construct the co-expression networks for the gene sets at each time point separately using Pearson correlation. We note that other methods may be used; for example, we can construct networks incorporating prior knowledge, such as a regulatory network at each time point, to improve network robustness, and model-based methods such as TV-DBN can also be used in practice. The main aim of the paper is to identify gene sets related to viral exposure response and meanwhile predict a person’s response using the dynamic relationships among genes in a gene set at the early exposure stage. We assume that only some of the gene sets are informative about clinical responses. Firstly, the genes need to be organized into different gene sets based on some criteria. Here are some suggested ways to group genes. If there is prior biological knowledge, we can organize genes into different gene sets according to such knowledge. For example, for immune related diseases, the immune-related pathways in the database MSigDB (http://software.broadinstitute.org/gsea/msigdb/index.jsp) can be used. Assume that there are N subjects, G genes, and T time points.
Let i ∈ {1,...,N} index the subjects, g ∈ {1,...,G} index the genes, and t ∈ {1,...,T} index the time points where gene expression data are collected. Let yi ∈ {+1,−1} denote the response label of subject i, and let Ok denote the kth subset of the gene index set, where k is an integer satisfying 1 ≤ k ≤ K. In the data array x, xigt represents the expression value of gene g at time t for subject i. Among the elements of the array, xig· is the time-series expression observation with length T of gene g for subject i; xi·t and x·gt are defined similarly. For the kth gene set, the genes are collected in Ok and Gk denotes the number of genes in this set. At each time point, we can construct a network, such as a co-expression network, for the genes in the set. Therefore, we get T networks across the T time points, represented by T matrices {A1,...,AT}, where the (i,j)th entry of matrix At is derived from h, with i,j ∈ Ok, and h is a function that defines the correlation or similarity between two genes in this set. The change point detection across these networks can be expressed as a test of the null hypothesis that all At follow a common distribution F0 versus the alternative that the distribution changes from F0 to F1 after some time point, where F0 and F1 are different probability measures on a nonzero measure set. Firstly, we define the similarity between two matrices through the spectral norm of their difference. Secondly, we construct a similarity graph on {At : t=1,...,T}, i.e., the minimum spanning tree (MST), with the above definition of matrix similarity. Thirdly, we detect the change point of {At : t=1,...,T} using the graph-based change point detection method. For the sth selected gene set, we can define a kernel matrix Φs, and we denote the kernel matrix set accordingly. After identifying gene sets with early change points, we use a Bayesian model to integrate dynamic information from multiple gene sets; assume that the indices of the selected gene sets are collected in a set. In the model, a is the sample weight vector of length N, b is the kernel weight vector, e is the bias, and Φs,i is the ith column of kernel matrix Φs. The (i,j)th element of Φs is defined by the similarity between subjects i and j.
The kernel matrix satisfies Φs = [ϕs(·,·)], where the kernel function ϕs is computed from the (l,k)th entries of the network matrices for gene set Os for subject i. In the model, δ(·) is the Kronecker delta function that returns 1 if the variable satisfies the restriction and 0 otherwise, and ν is a given margin parameter which is used to distinguish the two categories. Next, we use a variational approximation for inference: we approximate p(y|x) by a lower bound, where q(Θ) is the approximate posterior distribution of Θ. The exact formulas of the lower bound and the variational distribution q(·) of each parameter can be computed, where q(Θ∖·) is the distribution of Θ with the parameter (·) removed. Algorithm 1 summarizes the estimation process of the model parameters {a,b,e,f,L}. After we obtain a trained model, the label for a new subject can be predicted from the model's prediction equation. Additional file 1: Supplementary Materials include six sections: Section 1, Graph-based Change-point Detection; Section 2, Details of Algorithm 1; Section 3, More Simulations; Section 4, Analysis of the Effects of Gene Sets; Section 5, Challenge Results; and Section 6, Figures. Additional file 2: An example of the R code used in the paper."}
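The change-point preprocessing described in the methods above (distance between network snapshots via the spectral norm of their difference, then a minimum spanning tree over the T snapshots) can be sketched as follows. This is a toy illustration, not the paper's implementation: the snapshots are random symmetric matrices, and Prim's algorithm stands in for whatever MST routine the authors used.

```python
import numpy as np

def spectral_dist(A, B):
    """Distance between two network snapshots: spectral norm of A - B
    (largest singular value of the difference matrix)."""
    return np.linalg.norm(A - B, 2)

rng = np.random.default_rng(2)
T, G = 8, 5
nets = [rng.normal(size=(G, G)) for _ in range(T)]
nets = [(M + M.T) / 2 for M in nets]          # symmetric adjacency matrices

# Pairwise distances between all T snapshots
D = np.array([[spectral_dist(a, b) for b in nets] for a in nets])

# Prim's algorithm: MST over the complete graph whose nodes are snapshots
in_tree, edges = {0}, []
while len(in_tree) < T:
    i, j = min(((i, j) for i in in_tree for j in range(T) if j not in in_tree),
               key=lambda e: D[e])
    edges.append((i, j))
    in_tree.add(j)
```

The graph-based change point test then asks whether the MST's edges connect time points in a way that is unlikely under a single common distribution, which is the hypothesis test stated in the methods.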
+{"text": "Currently, most diseases are diagnosed only after significant disease-associated transformations have taken place. Here, we propose an approach able to identify when systemic qualitative changes in biological systems happen, thus opening the possibility for therapeutic interventions before the occurrence of symptoms. The proposed method exploits knowledge from biological networks and longitudinal data using a system impact analysis. The method is validated on eight biological phenomena, three synthetic datasets and five real datasets, for seven organisms. Most importantly, the method accurately detected the transition from the control stage (benign) to the early stage of hepatocellular carcinoma on an eight-stage disease dataset. What if one could identify a departure from the healthy state well before a tumor is present, when changes can perhaps still be reversed? What if one could identify qualitative changes in the states of a biological system without even knowing what the states are? Here, we propose a technique that aims at identifying such qualitative changes without any a priori knowledge about the nature of the changes. The preliminary results herein demonstrate the potential of this approach using several datasets derived from eight biological phenomena and seven organisms.In most, if not all, non-trauma health-care cases, pathological conditions are defined by phenotypic or clinical changes. For example, cancer is usually diagnosed after the patient experiences symptoms caused by significant transformations in their physiology. However, the progression from a healthy state to one of disease is gradual, happening over a period of time. 
This is particularly true in the case of conditions such as cancer or neurodegenerative disorders, for which the onset of the underlying pathology is believed to begin much earlier than the clinical, detectable onset. The goal is to develop an approach that can detect qualitative changes in the system, where a qualitative change is defined as a change that involves observable macroscopic phenotypical or clinical changes. We should emphasize that no known approach is available to tackle this type of problem. There are no clearly defined states or classes available a priori, so no supervised machine learning approaches can be used. We would like to be able to detect changes as they happen, if possible, without massive amounts of partially redundant data collected beforehand, so no unsupervised methods could be used to extract common features and build clusters. Here we are looking at a system without having a reference set of genes, so no enrichment approach will be useful. Finally, there is no predefined phenotype, and therefore no gene set analysis methods can be employed either. What we would like to achieve here is a method capable of (1) monitoring the activity of a system by taking periodic measurements and (2) detecting when a specific system undergoes a qualitative change without prior knowledge about it. To the best of our knowledge, no existing method could approach this task with a reasonable chance of success. In this paper, we propose a qualitative change detection (QCD) approach, an analysis method that uses sequential measurements as described by a time series (or by progressive disease stages), together with all known interactions described by biological networks, and that applies an impact analysis approach to identify the time interval in which the system transitions to a different qualitative state3. Often, time-series data are used to extract gene profiles that can be used to better understand the phenomena or phenotypes7.
The analysis of time-series data can also be used to identify disease biomarkers, either as a single gene, a group of genes, or a network of genes8. In practical terms, the data to be analyzed is a time series of gene expression or any other sequential measurements of systemic states, such as the ones describing disease progression. Time-series data have been used in many ways, e.g. to infer information regarding regulatory mechanisms, the rate of change for a gene, the order in which genes are (de)activated, and the causal effects of gene expression changes9, time intervals with the highest difference in expression for each gene from a predefined set10, dynamic network biomarkers using local network entropy11, or time periods of differential gene expression using Gaussian processes12. However, all of these approaches perform comparisons between disease profiles and a reference profile. In the paradigm proposed here, none of these existing methods can be applied because the goal is to identify a transition to a qualitatively different state without knowing the gene expression profile of the new state, and hence, without the ability to make a comparison between the control and disease phenotypes. In the landscape of analysis methods for high-throughput data (see the corresponding figure), the proposed method occupies a distinct position. A biological system is characterized by a tendency to reach and maintain a state of homeostatic balance, considered to be a stable state. An alteration made by internal or external stimuli can trigger the system to transition from one stable state to another, referred to as a qualitative change. Notably, any of the system components taken in isolation may not vary dramatically; however, the system as a whole may undergo a qualitative change. Conversely, in a resilient system, important variations of one or a group of components may happen without necessarily involving a qualitative systemic change.
Importantly, most systems have built-in tolerance mechanisms such that the response to a stimulus is delayed until the signal is perceived as real, in order to filter noise and to conserve the energy necessary to undergo a systemic change16. We developed and implemented a data analysis method capable of detecting qualitative changes in biological systems despite these challenges; the workflow of the analysis is summarized in the corresponding figure. An expectation maximization algorithm is used to separate large and small system perturbation values, thus identifying important differences between states. Lastly, the analysis finds the disjunct overlaps of the intervals with large system perturbation, which identify one or more time intervals during which the biological system has undergone qualitative changes, referred to henceforth as change intervals. Conventional approaches to the analysis of time-series gene expression data are extremely useful tools to identify genes that are behaving in a similar way. However, these methods are not designed to identify systemic changes. The goal of the proposed approach is to identify transitions from one state to another, rather than study a particular state or a particular time profile. Our goal is to show that the proposed approach is able to identify such meaningful transitions across different organisms and various phenomena. The analysis of eight well-studied phenomena was performed with the proposed method (QCD) for seven model organisms using both synthetic and real data. To assess the ability of QCD to detect qualitative changes, results were compared to prior knowledge of the phenomenon under study. QCD uses system knowledge, as described by a known gene signaling network or a map of neurons and their synaptic connections, as well as sequential measurements of the system components (genes or neurons).
Data were obtained by measuring either the mRNA level of the genes involved in the system, in the case of real data, or generated based on equations describing the model of each organism, in the case of synthetic data. The results of the analyses show that QCD can reliably identify the time interval during which a biological system goes from one qualitative state to another in response to organism development or to a shift in environmental conditions. We evaluate the method using phenomena that involve major physiological changes, as well as phenomena involving more subtle, yet important changes. Major physiological changes analyzed using synthetic data are E. coli flagellum building18 and B. subtilis sporulation19. The subtle change analyzed using synthetic data is the C. elegans avoidance reflex20. Major physiological changes analyzed using real gene expression data are yeast sporulation21 and fruit fly pupariation22. More subtle changes analyzed using real gene expression data involve fruit fly ethanol exposure23. QCD was compared with an existing method developed by Liu and colleagues used to detect network biomarkers and the pre-disease state (herein abbreviated DNBM)11. In addition to the six datasets mentioned above, we also ran QCD on the two datasets from the Liu et al. study. The first dataset is derived from a mouse study of exposure to a toxic gas (carbonyl chloride); using these data, QCD identified one qualitative change before the exposure became lethal, preceding the pre-disease state detected by Liu et al. The second dataset contains data describing the progression of human hepatocellular carcinoma; using these data, QCD identified a qualitative change from a benign stage (control) to a pre-malignant stage (high-grade dysplastic nodules), also preceding the pre-disease state detected by the Liu et al. study. When in an environment lacking nutrients, the E. coli bacterium initiates the process of building a flagellum that will provide the motility necessary for finding an environment rich in nutrients. We analyzed the process of building the E. coli flagellar motor, using synthetic data and the flagellum building network18,25 (see the corresponding figure). The flagella building network is a generalization of the coherent type-1 feed-forward loop (C1-FFL): in essence, it is a multi-output C1-FFL in which the exact timing of the sequence of steps is controlled by the different activation thresholds. The state of the system at each time point is compared to the state at all other time points using a pathway impact analysis, and the intervals with large perturbations are identified. In essence, most of the comparisons between any time point earlier than 180 min and any time point after 300 min show large perturbations (exceptions are marked by the black arcs). This suggests that a qualitative change of the system occurs between 180 and 300 min, which is indeed the case; the real change takes place around 240 min. The identification of a change interval should be followed by an analysis of the states of the system before and after the change interval in order to gain insight into the system transition. Without loss of generality, we will consider the situation in which there is a single change interval, as in this dataset. Furthermore, we also assume that the system is in a stable state before and after the change interval.
Under these circumstances, we can group the states in which the system is stable into meta-states. A meta-state is a group of consecutive states where all comparisons between states within the meta-state have a small perturbation and all comparisons between states from the meta-state to states outside it have a large perturbation. The results shown in panel B of the corresponding figure support this grouping: both groups of states identified by QCD had highly significant p-values. When deprived of food, the B. subtilis bacterium turns into a spore, a robust structure able to survive in an environment lacking nutrients. This is a crucial feature that ensures the bacterium’s survival in an environment scarce in food, in which it cannot survive in its active form. Unlike the E. coli flagellum-building network, which includes only activation signals, the network controlling sporulation also includes repression signals (see the corresponding figure). The network and the detected change interval are consistent with prior knowledge19. Specifically, the sigmaK factor expression was identified as the critical control element in the regulatory mechanisms and the coordination of spore formation between the mother cell and forespore. In particular, sigmaK activates GerE, which in turn triggers the expression of the last set of genes. For this reason, the true time point that can be considered as separating the rod-shaped bacterium from the endospore state is the point at which sigmaK becomes expressed. We considered the groups of states before and after the change interval (240–600 min) as potential meta-states MS1 and MS2, respectively; the p-value for each was highly significant. In the nematode (C. elegans), the avoidance reflex network is composed of two parallel receptor neurons that communicate with two sequential command neuron cells. In yeast, stress-tolerant haploid spores are formed through cell division (meiosis) within the mother cell. This is a qualitative and obvious physiological change in yeast cells adapting to their environment.
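The meta-state definition above can be sketched operationally: given a boolean matrix marking which pairs of states show a large perturbation, candidate meta-states are maximal runs of consecutive states with no large perturbation inside the run and large perturbations to every state outside it. The matrix below is an illustrative toy with two stable regimes separated by a change between states 2 and 3, not data from any of the case studies.

```python
def find_meta_states(large):
    """large[s][t] is True when the perturbation between states s and t
    is large. Returns (start, end) index pairs of valid meta-states."""
    n = len(large)
    metas, start = [], 0
    for end in range(n):
        # break the run as soon as the next state perturbs any member
        if end + 1 < n and any(large[s][end + 1] for s in range(start, end + 1)):
            metas.append((start, end))
            start = end + 1
    metas.append((start, n - 1))

    # keep only runs whose members all show large perturbation outside
    def valid(a, b):
        return all(large[s][t] for s in range(a, b + 1)
                   for t in range(n) if not a <= t <= b)
    return [m for m in metas if valid(*m)]

# toy perturbation pattern: large only across the state-2 / state-3 boundary
large = [[i < 3 <= j or j < 3 <= i for j in range(6)] for i in range(6)]
metas = find_meta_states(large)   # -> [(0, 2), (3, 5)]
```

In the paper the significance of such groupings is then assessed with p-values; this sketch only recovers the grouping itself.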
The sporulation process has been thoroughly studied and is well understood21, which makes it a good candidate on which to validate QCD. Starvation for nitrogen and carbon sources (high stress) induces meiosis and spore formation in diploid yeast. We used the Kyoto Encyclopedia of Genes and Genomes (KEGG) regulation of autophagy pathway, which describes the phenomena involved in sporulation. This pathway consists of mechanisms involved in processing internal and external stresses, including nutrient availability. As a result, regulation of autophagy is essential for survival because it is used to maintain important cellular functions when environmental conditions change. The QCD method was applied on the regulation of autophagy pathway and gene expression data from the yeast sporulation study by Chu et al. (panel A of the corresponding figure)2,21. The commitment to sporulation starts in the middle stage (2–5 h) and spans the mid-late stage21; therefore, the true change interval for this phenomenon is 2 h to 7 h. As observed by Chu et al., the transition phase ends after the mid-late stage. This study also showed that one of the first discernible steps of spore morphogenesis occurs after the meiosis II spindles are formed, which makes the late phase a stable one. Also, the middle-late phase is still part of the change interval, as previous studies reported that the middle-late phase includes the major cytological events of sporulation33. Panel B of the figure shows the QCD results: in this case, QCD identifies a qualitative change in the interval from 0.5 h to 7 h, which includes the real change interval (2 h to 7 h) and starts one time point earlier. The change interval is the transition that separates the two potential meta-states (active state and spore state); the active and spore potential meta-states both have highly significant p-values. Sometimes small gene-level changes (not noticeable by eye) across the system can lead to important systemic changes.
This is exactly the problem that our method was designed to address: the inability to easily identify important qualitative changes when they happen incrementally. The transition from healthy to disease is in many cases similar to the transition from young to old: any two consecutive measurements taken at short intervals are unlikely to show any important changes. However, the transition is happening, and at some point the current state will be significantly different from states long before. Our method is designed precisely for the purpose of detecting such changes and distinguishing them from mere random fluctuations present in any stable state. Three major states (egg, larva and pupa) occur during the development of the fruit fly. The larvae typically pass through three molting stages (instars), during which they shed various body elements and form new ones. Importantly, after the third molting stage the larvae pupate and become adults, which marks the completion of the metamorphosis process. The QCD method was applied on the Hedgehog signaling pathway from KEGG31 (pathway ID: dme04340) and data publicly available for the metamorphosis of D. melanogaster. The Hedgehog signaling pathway, named after the signaling molecule Hedgehog (Hh), has a crucial role in organizing the body plan of the fruit fly during development (panel A of the corresponding figure). A second peak of the steroid hormone 20-hydroxyecdysone occurs roughly at the 10-hour time point and triggers the transformation from prepupa to pupa22. Puparium formation represents the onset of metamorphosis; therefore, the real change interval for this case study is at the start of the time course (panel B of the figure). Notably, in this case study the change takes place at the beginning of the time course, so to determine potential meta-states relative to this change interval, we selected the only state before the change interval. After some time, the fruit flies should recover from the effects of ethanol exposure.
In the GSE18208 dataset, 40 minutes was not one of the sampled time points; therefore, to mark the real change interval, we used the very next time point available in the dataset, the one-hour time point. To apply the QCD method, we used the Hedgehog signaling pathway (KEGG ID: dme04340) and the acute ethanol exposure data available from GEO (GSE18208) and described by Kong et al. The intuitive physiological transitions expected for these data are from no exposure (sober) to exposure to ethanol (drunk) and back to fully recovered (sober). However, the drunken state is temporary, since it is followed by recovery. Because of this transition, we expected two change intervals, from sober to drunk and from drunk to sober. Furthermore, the initial and end states (sober before exposure and sober after recovery) were expected to be very similar from a gene expression point of view. In other words, the sober state is the same in the initial and final state in this case, as opposed to the flagellum building case where the initial and final states, with and without flagellum, are obviously different. The ethanol exposure has a delayed effect at the gene level. According to Kong et al., the expression of immunity genes increased after ethanol exposure in the time range from 0.5 hours to 1.5 hours23. Because of this delayed effect, we did not expect the biggest changes between the control and 0 hours, but rather between the control and some later time point(s). The QCD results on these data have shown that the biological system indeed goes through two qualitative changes, and the change intervals are 0.5 hours to 1 hour and 1 hour to 1.5 hours, matching the expected transitions from a sober state to a drunken state and then back to the sober state. The effects of the ethanol exposure appear to peak at the 1-hour time point. Based on the change intervals and the return of the system to its initial state, there are two groups of states that may form meta-states.
These potential meta-states consist of the following time points: control, 0 hours, 0.5 hours, and 1.5 hours to 3.5 hours for meta-state 1, and the 1-hour time point for meta-state 2. The distribution of the significant and non-significant transitions yielded a highly significant p-value. Hepatocellular carcinoma (HCC) is a common liver cancer that can be the result of an infection with the hepatitis C virus (HCV). The progression from HCV infection spans multiple disease stages before reaching HCC, as reported by Wurmbach et al.35. We used the data from this study to identify qualitative changes for this phenomenon. The dataset contains gene expression collected from 75 samples (48 patients) and covers eight progressive stages of HCV-induced HCC: four no-cancer stages, including no HCV/control, cirrhosis, low-grade dysplastic, and high-grade dysplastic, and four cancer stages, including very early HCC, early HCC, advanced HCC, and very advanced HCC. Normal liver control is used as the initial stage and stages are ordered by disease progression. To apply QCD on these data, we used the viral carcinogenesis pathway from KEGG31 (hsa05203) as the network/map of the biological system. The viral carcinogenesis pathway describes the signaling mechanisms involved in inflammatory responses, such as the one triggered by HCV (panel A of the corresponding figure). From these data, QCD identified one qualitative change from stage zero (control), a benign state, to stage three (high-grade dysplastic), the last of the four benign states and a state in which treatments are effective. The group of states before the change interval was considered as potential meta-state one (MS1) and contains only the control state. The group of states after the change interval was considered as potential meta-state two (MS2) and contains five states: high-grade dysplastic nodules, very early HCC, early HCC, advanced HCC, and very advanced HCC.
In essence, the analysis identified the transition from the benign state (first meta-state) to the cancerous state (second meta-state), and the p-values of these meta-states were highly significant. We compared the results of QCD in this case to the results of an existing method developed to detect network biomarkers and the pre-disease state (DNBM)11. The DNBM takes as input both the high-throughput data and the large network of protein-protein interactions for the organism under study. The output of DNBM is a pre-disease state in the form of a sample or list of samples from the data. The hypothesis is that a subset of the large network, termed the leading network, is the first to change toward the disease state, which makes its components and structure causally related with the disease. The DNBM models the change in gene expression over time as a Markov process. Then, a state-transition-based local network entropy (SNE) is used as a general, early measure of upcoming transitions by estimating the resilience of the network. The SNE is a Shannon-type entropy36, intended to quantify the change in state for the biological network. Notably, the DNBM identifies one single (pre-disease) state prior to the onset of disease, while the proposed QCD identifies a change interval of transition to disease, which can be much more informative regarding the disease evolution, as well as providing an opportunity for therapeutic intervention. In addition, in the case of the QCD, the impact analysis approach may provide a better evaluation of the system’s impact than the network entropy. At the same time, a reinforcement of the impact by comparing every two time points may provide a better approximation of the change onset. Therefore, evaluating the systemic change between every two time points results in the early-detection property. For this case study, the DNBM detected the pre-disease state at the fifth stage, very early HCC, which is the first malignant stage.
The existing DNBM detected the start of the malignant state, while our proposed QCD method detected the transition from benign to malignant. DNBM was also evaluated on a dataset for mouse exposure to carbonyl chloride (phosgene); exposure to carbonyl chloride produces irreversible lung injury and potentially life-threatening pulmonary edema that manifest within a day37. We also evaluated QCD on the same dataset: QCD detected a qualitative change earlier than the pre-disease state identified by DNBM, showing that a systemic qualitative change is happening and can be detected at a very early stage, as soon as the disease process has started. Disease prevention and early detection are two major healthcare objectives that contribute to improving quality of life. Currently, early detection of complex diseases is achieved only after the physiological traits of the phenotype are present, when existing treatments may be ineffective. Chronic disease, a particular case of complex disease, is generally detected in the late stage of a relatively slow, progressive process. Representative examples that affect a large number of people are heart disease, cancer, and neurodegenerative disorders. It is a real challenge for people with these diseases to maintain a good quality of life after diagnosis. Understanding when the transition to disease occurs is a good first step towards interrupting the process and maintaining the healthy state. A cirrhotic liver is characterized by the presence of scar tissue due to long-term damage. In an attempt to replace the damaged cells in the cirrhotic liver, clusters of newly formed cells can occur in the scar tissue; dysplastic nodules found in the liver are typically identified in cirrhotic livers. Low-grade dysplastic nodule (LGDN) cells are larger than the normal liver cells38, while high-grade dysplastic nodule (HGDN) cells are smaller than the normal liver cells and have a greater nucleus-to-cytoplasm size ratio38. The difference between HGDNs and very early HCC is the stromal invasion present in the latter39. A study on the LGDNs and HGDNs in HCC development concluded that LGDNs, together with large regenerative nodules, should be monitored with ultrasound, while HGDNs should be preventively treated due to their high malignant risk40. Taken together, these data support the qualitative change identified by QCD from a low malignant risk stage of the liver disease to a high-risk stage and close precursor to the malignant stage of very early HCC. To maintain the healthy state, one needs to monitor the biological system and measure the gene expression or any parameters the system has, in order to assess how much the system is changing. The moment a qualitative change occurs, either cumulative or sudden, a change interval emerges. For instance, in the case of the eight stages of HCC, a qualitative change occurs from control to high-grade dysplasia. To further investigate the results of our analysis in the case of HCC progression, we identified the differentially expressed (DE) genes (by absolute log fold-change) and compared them with the cancer gene census41 (http://cancer.sanger.ac.uk/census), presented together with the catalogue of somatic mutations in cancer (COSMIC)42 (http://cancer.sanger.ac.uk/cosmic). We used this list of cancer genes to filter the 80 common genes to obtain a cancer gene set. The result consists of two genes: CHEK2 and FAT1. In humans, QCD identified the transition from control to high-grade dysplasia, while the existing method identified the “very early HCC” stage as the pre-disease state, which can be interpreted as the start of the malignant state. Importantly, the change interval detected by QCD immediately precedes this pre-disease state detected by the existing method and marks the transition from benign to malignant.
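The filtering step described above (intersecting the differentially expressed genes with a curated cancer gene list) is a plain set intersection. In this sketch, CHEK2 and FAT1 are the result reported in the text; every other gene symbol is an illustrative placeholder, not the study's actual lists.

```python
# Placeholder stand-ins for the study's 80 common DE genes (only CHEK2
# and FAT1 are taken from the text; the rest are illustrative symbols)
de_genes = {"CHEK2", "FAT1", "ACTB", "GAPDH", "TP53BP2"}
# Placeholder stand-in for the curated Cancer Gene Census list
cancer_census = {"CHEK2", "FAT1", "BRCA1", "KRAS"}

cancer_gene_set = sorted(de_genes & cancer_census)   # set intersection
```

Sorting the intersection gives a deterministic, reportable gene list.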
Intervention during this interval may prevent this transition, and disease progression may be halted. To summarize, we have evaluated the proposed QCD method on both synthetic (noise-free) and real (noisy) data, on a total of eight case studies covering six model organisms and one human dataset, and QCD identified the qualitative changes in each case. We have also used both time course data and disease stages as system states in our analyses, and QCD performed well for both types of data. An immediate application for QCD could be to identify when the transition between different disease stages happens for other diseases. However, QCD is a versatile approach that can be applied to systemic states in different contexts [63]. In future work, we plan to use the QCD method to predict obstetrical disease based on transcriptomics, metabolomics, proteomics, lipidomics, and other data. A system state in the QCD framework can be any of, but is not limited to, the following: a developmental stage, the response to a certain therapeutic dose, the stage of a disease, or patients who share physiological traits or disease outcome. The analysis of time series expression data using QCD could potentially be used to decide the duration of adjuvant chemotherapy or to monitor disease recurrence. However, the most important application of this approach would imply a paradigm shift: one could use a QCD-like approach with the aim of identifying the departure from the healthy state instead of diagnosing the onset of disease. The QCD method can also be applied in the study of drug synergies and synthetic lethality, where it could identify the time interval when one drug sensitizes the cell and the second drug has maximum efficacy in a time-dependent way. In turn, this could maximize the effect of combination therapies for various diseases.
Another important application for the conceptual framework described herein is the prediction of obstetrical disease in early pregnancy, so interventions can mitigate or prevent the "great obstetrical syndromes" that are primarily observed during the third trimester of pregnancy. In this paper, we propose a paradigm shift: instead of detecting the onset of disease, we would like to be able to detect the departure from the healthy state. The qualitative change detection (QCD) analysis presented here is able to detect intervals when a biological system undergoes qualitative changes such as the transition from healthy to disease. The workflow of the analysis (see Fig.) consists of the following steps: (1) compare the status of the system between each pair of time points using an existing statistical method called pathway impact analysis (IA) [16] and assess the levels of perturbation; (2) separate large and small inter-state perturbations using a gamma mixture model fitted to the system perturbation by an expectation-maximization algorithm; (3) calculate the change interval(s) as the narrowest disjoint interval(s) of large changes. In step 1, the perturbation of the system between all pairs of system states is computed using IA [13], which was previously developed to evaluate the pathway impact when comparing two phenotypes; herein, we use it to calculate a system/pathway impact factor for each comparison of two system states (time points). The input of impact analysis includes the changes in expression between the two time points for the measured genes, while the output is a perturbation factor for the pathway. The result of this first step is a list of time intervals (comparisons) with their computed pathway perturbation factors. First, sequential states are assigned to the chronologically ordered time points or disease progression stages at which the data were sampled. We then compare all pairs of system states using IA [16].
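The three workflow steps can be sketched as a small pipeline skeleton. This is a schematic, not the authors' implementation: the helper callables (`perturbation`, `fit_threshold`, `extract_intervals`) are illustrative placeholders for the impact analysis, the gamma mixture fit, and the interval extraction, respectively.

```python
import itertools

def qcd(states, perturbation, fit_threshold, extract_intervals):
    """Skeleton of the QCD workflow on chronologically ordered states."""
    # Step 1: perturbation score (pathway impact analysis in the paper)
    # for every pair of system states.
    pairs = list(itertools.combinations(range(len(states)), 2))
    scores = {(i, j): perturbation(states[i], states[j]) for i, j in pairs}
    # Step 2: separate small from large perturbations; in the paper the
    # threshold comes from a two-component gamma mixture fitted by EM.
    threshold = fit_threshold(list(scores.values()))
    large = {p for p, s in scores.items() if s > threshold}
    # Step 3: the change interval(s) are the narrowest disjoint interval(s)
    # delimited by the large changes.
    return extract_intervals(large, len(states))
```

With a toy perturbation (absolute difference) and trivial helpers, the skeleton already surfaces which state pairs straddle a qualitative change.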
The gene perturbation factors are summed up to the pathway level to account for the observed pathway perturbation. The pathway impact analysis takes as input signaling networks (pathways) and a list of genes with their respective changes between two states of a system (e.g., condition vs. control). In a typical signaling pathway, nodes represent genes or gene products and edges represent signals, such as activation or repression, directed from one node to another. The goal of IA is to identify the pathways significantly impacted in a given phenotype by analyzing all measured expression changes for all genes, as well as all of their interactions, as described by each pathway. This type of analysis incorporates two types of evidence, which taken together estimate the disruption on a pathway when comparing two phenotypes. The first type is evidence given by the perturbation analysis. The magnitude of expression change (log fold-change) and the pathway structure are used to compute a perturbation factor for each gene:

PF(g) = dE(g) + sum over u in US(g) of beta(u,g) * PF(u) / #DS(u)

where dE(g) is the measured expression change of gene g, US(g) is the set of genes directly upstream of g, DS(g) is the set of genes directly downstream of g, beta(u,g) is the weight of the interaction from u to g, and # denotes cardinality. For the perturbation analysis, we sum the absolute values of the gene perturbation factors, so that the pathway perturbation factor is

PF(pathway) = sum over g of |PF(g)|.

We use the all-gene approach, without gene weights; therefore, since we do not select differentially expressed genes, the enrichment part cannot be computed. The pathway perturbation factors are positive values, with 0 marking no perturbation; the higher the value, the larger the pathway perturbation. We work under the assumption that the pathway perturbation factors follow a gamma distribution with mode = 0 when the pathway is not perturbed. In step 2, the distribution of the pathway perturbation factors is modeled using a gamma mixture model (see Fig.). The EM algorithm has a number of parameters that can potentially influence the results.
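Assuming the elided equation is the standard impact-analysis recursion PF(g) = dE(g) + sum over upstream genes u of beta(u,g)*PF(u)/#DS(u) (the DS(g)/cardinality notation in the text suggests this), the gene and pathway perturbation factors can be computed by solving the induced linear system. This is an illustrative sketch, not the ROntoTools implementation used by the authors.

```python
import numpy as np

def perturbation_factors(delta_e, beta):
    """Gene perturbation factors PF for one pathway.

    delta_e : per-gene log fold-changes dE(g).
    beta    : beta[u, g] is the signed interaction weight from gene u to
              gene g (0 when there is no edge).
    Solving (I - A) PF = dE, with A[g, u] = beta[u, g] / #DS(u), handles
    cycles as well as DAGs (assuming the system is invertible)."""
    beta = np.asarray(beta, dtype=float)
    delta_e = np.asarray(delta_e, dtype=float)
    ds_size = np.maximum((beta != 0).sum(axis=1), 1)   # #DS(u), avoid /0
    a = (beta / ds_size[:, None]).T                    # A[g, u]
    pf = np.linalg.solve(np.eye(len(delta_e)) - a, delta_e)
    # Pathway-level perturbation: sum of absolute gene factors.
    return pf, np.abs(pf).sum()
```

For a two-gene chain where gene 0 (dE = 1) activates gene 1 (dE = 0), the perturbation propagates downstream and the pathway factor is the sum of the absolute gene factors.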
Such parameters include the initial shape and scale of the fitted gamma distributions, the convergence criterion (epsilon), the maximum number of iterations, and the maximum number of restarts. The parameters of the two gamma distributions are initialized so that their modes correspond to the minimum and maximum values of the perturbation factors, which puts the model in the correct range from the beginning. Other parameters, such as the maximum number of iterations or epsilon, do not influence the results as long as their values are reasonable. For instance, we use 100 as the maximum number of iterations, but in all cases the algorithm converged in fewer than 100 iterations. Therefore, even though in principle the results can be influenced by the values of these various parameters, in practice the results were stable in all experiments we performed. In the proposed form, the user does not need to choose any parameters. As a potential improvement on the proposed technique, one could use a stochastic version of the EM algorithm. The mixture model fitting provides the two distributions that best fit the data, together with a percentage that estimates how much of the observed data comes from each of these two distributions. If either of the distributions has a percentage of less than 10%, the QCD analysis considers that there is only one distribution and, therefore, that there is no significant change and no change interval. The algorithm for this step is available in the Supplementary Materials. If both fitted distributions contribute more than 10%, the goodness of fit is then evaluated by computing the percentage of overlap between the observed and fitted distributions of system perturbations (see Fig.). We also explored the alternative of using the fit of a single gamma distribution to the perturbation factors in order to decide if there is a systemic change captured by the data.
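A minimal sketch of the two-component gamma mixture EM described above. Two simplifications are assumed and labeled here: the M-step uses weighted method-of-moments updates for shape and scale (full gamma MLE has no closed form), and initialization splits the sorted data in half rather than placing the initial modes at the data minimum and maximum as the paper does.

```python
import numpy as np
from scipy.stats import gamma

def gamma_mixture_em(x, max_iter=100, eps=1e-6):
    """Fit a two-component gamma mixture by EM (method-of-moments M-step)."""
    x = np.asarray(x, dtype=float)
    lo, hi = np.sort(x)[: x.size // 2], np.sort(x)[x.size // 2:]
    m = np.array([lo.mean(), hi.mean()])
    v = np.array([lo.var() + eps, hi.var() + eps])
    shapes, scales = m ** 2 / v, v / m          # moment estimates per half
    weights = np.array([0.5, 0.5])
    for _ in range(max_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([w * gamma.pdf(x, a, scale=s)
                         for w, a, s in zip(weights, shapes, scales)])
        resp = dens / np.maximum(dens.sum(axis=0), 1e-300)
        # M-step: mixture weights and weighted moment estimates.
        new_w = resp.mean(axis=1)
        m = (resp * x).sum(axis=1) / np.maximum(resp.sum(axis=1), eps)
        v = (resp * (x - m[:, None]) ** 2).sum(axis=1) / np.maximum(resp.sum(axis=1), eps)
        converged = np.abs(new_w - weights).max() < eps
        weights = new_w
        shapes = np.maximum(m ** 2 / np.maximum(v, eps), eps)
        scales = np.maximum(v / np.maximum(m, eps), eps)
        if converged:
            break
    return weights, shapes, scales
```

Following the 10% rule above, if either returned weight falls below 0.1 the analysis would assume a single distribution and report no change interval.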
In the case of fitting only one distribution for that purpose, the results are slightly worse or, in most cases, just as good. A change interval is delimited by the points in which large-perturbation comparisons start or end, with no large-perturbation comparison starting or ending strictly between those points. The output is a list of change intervals described by their start and end points. Note that a change interval does not have to correspond to a comparison that shows a large system perturbation by itself. Change interval formal definition: notations and conditions (i)-(iv). In this case, the system is considered to be stable before and after the change interval. In this context, we group the states in which the system is stable into meta-states. We define a meta-state as a group of consecutive states that satisfies the following two conditions: (1) all comparisons between states within the meta-state have a small system perturbation; and (2) all comparisons between states from the meta-state and states outside the meta-state have a large system perturbation. In the above definition, the "small" and "large" perturbations are defined based on the threshold between the two gamma distributions computed in the previous step, shown as the yellow line in Fig. To better understand the phenomenon under study, after the detection of a change interval, the states of the system before and after it should be analyzed to gain insight regarding the state of the system before and after the qualitative change. To describe this analysis, the situation in which there is a single change interval will be considered, as in the E. coli flagellum building dataset. Note that all comparisons between the states within a change interval and the meta-states immediately before and immediately after it may have a small system perturbation.
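One plausible reading of the change-interval description (the formal conditions (i)-(iv) are not recoverable from the text) is: the narrowest interval (a, b) spanned by every large-perturbation comparison, with no comparison endpoint strictly inside it. A hypothetical helper under that assumption:

```python
def change_interval(large_pairs):
    """Narrowest interval (a, b) crossed by every large comparison (i, j),
    with no endpoint of a large comparison strictly inside it.
    `large_pairs` are (i, j) state-index pairs with i < j; returns None
    when the large comparisons do not all cross a common interval."""
    if not large_pairs:
        return None
    a = max(i for i, _ in large_pairs)   # latest start among large changes
    b = min(j for _, j in large_pairs)   # earliest end among large changes
    if a >= b:
        return None                      # no common crossing interval
    inside = any(a < p < b for pair in large_pairs for p in pair)
    return None if inside else (a, b)
```

Applied to the S0-S20 example discussed in the text, large comparisons that all start at or before S6 and end at or after S10 yield the interval (S6, S10).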
This is because, during the change interval, the system is in transition between the two meta-states; therefore, its state during the transition is a mix of the two meta-states that may not be qualitatively different from either of them. Based on the detected change interval, groups of sequential system states can form potential meta-states. Consider an example in which the change interval was detected as (S6-S10). In this case, there are two potential meta-states: MS1, which contains the states before the change interval (states S0 to S6), and MS2, which contains the states after the change interval (states S10 to S20). To investigate the potential meta-states, all comparisons (arcs) between the states involved are considered. A significant p-value lower than a predefined threshold would confirm the identification of a true meta-state. In our case studies, most p-values were significant at a 1% threshold. For the C. elegans case study, synthetic data were generated using a step function. For gene expression from biological experiments, microarray data were downloaded from the GEO database. The CEL files downloaded from GEO were processed using custom R scripts (R version 3.1.2). Data pre-processing was performed using the threestep function from the affyPLM (version 1.42.0) R package. Gene IDs were mapped to gene symbols using the respective annotation packages from R: org.Sc.sgd.db (yeast), org.Dm.eg.db (fruit fly), org.Mm.eg.db and moe430a.db (mouse), org.Hs.eg.db and hgu133plus2.db (human). Gene expression at a specific time point was computed as the average of the replicates for that time point, when replicates were available. The ROntoTools 1.6.1 R package was used for impact analysis. The mixtools 1.0.3 R package was used for the mixture model analysis.

Supplementary information"}
+{"text": "The Environmental Determinants of Diabetes in the Young (TEDDY) is a prospective birth cohort designed to study type 1 diabetes (T1D) by following children with high genetic risk. An integrative multi-omics approach was used to evaluate islet autoimmunity etiology, identify disease biomarkers, and understand progression over time.We identify a multi-omics signature that was predictive of islet autoimmunity (IA) as early as 1\u00a0year before seroconversion. At this time, abnormalities in lipid metabolism, decreased capacity for nutrient absorption, and intracellular ROS accumulation are detected in children progressing towards IA. Additionally, extracellular matrix remodeling, inflammation, cytotoxicity, angiogenesis, and increased activity of antigen-presenting cells are observed, which may contribute to beta cell destruction. Our results indicate that altered molecular homeostasis is present in IA-developing children months before the actual detection of islet autoantibodies, which opens an interesting window of opportunity for therapeutic intervention.The approach employed herein for assessment of the TEDDY cohort showcases the utilization of multi-omics data for the modeling of complex, multifactorial diseases, like T1D.The online version contains supplementary material available at 10.1186/s13059-021-02262-w. Type 1 diabetes (T1D) manifests as a result of inappropriate activation of inflammation and the immune response against self-antigens, resulting in autoimmune destruction of pancreatic islet \u03b2 cells. The onset of T1D is preceded by the appearance of islet autoantibodies against insulin (IAA), glutamic acid decarboxylase (GADA), zinc transporter 8 (ZnT8A), and/or insulinoma associated antigen-2 (IA2A). The Human Leukocyte Antigen (HLA) complex is the primary genetic contributor of T1D susceptibility, with the highest risk conferred by specific allelotypes of the class II genes DQA1, DQB1, and DRB1. 
The global incidence rate of T1D is increasing annually, with many new cases appearing in western, developed countries. High-throughput omics approaches in human and rodent models, including genome-wide association, transcriptomics, metabolomics, proteomics, and microbiome analyses, have led to the discovery of features that are strongly associated with islet autoimmunity [8]. The Environmental Determinants of Diabetes in the Young (TEDDY) study is a prospective study that was designed to identify T1D-associated environmental factors in children carrying high genetic risk for the disease. Each omics dataset was arranged as a three-way array with individuals (first mode), omics features (second mode), and time (third mode), with element x_ijk representing the value of the j-th omics variable in subject i at time point t_k (Fig. S1). Our analysis strategy requires that multi-omics data are available for both members of the matched case-control pair at each binned time point. This resulted in the selection of 136 paired individuals (68 cases and 68 controls) that had complete measurements for at least 3 out of 5 time points, and in the overall integration of 476 blood gene expression, 680 citrate plasma metabolomics, and 680 plasma dietary biomarker datasets. A total of 170 individuals, with multi-omics data at fewer than 3 time points, were used for model validation at individual time points. Modeling the data for the TEDDY cohort resulted in models that explained between 40 and 94% of the autoimmunity phenotype. Some of these models were able to successfully distinguish islet autoimmunity (IA) cases from controls. Energy-producing pathways were significantly upregulated across most time points. This high-energy state was accompanied by the enrichment of ROS detoxification and DNA repair genes, suggesting that an oxidative state resulting in cellular damage is present in IA-developing children.
Additionally, pathways associated with lipid regulation, such as PPARα control of gene expression and RNA metabolism, were mostly downregulated or became downregulated near the time of seroconversion. Interestingly, some pathways were constantly downregulated or only strongly activated at 0-3 MBSC, such as antigen presentation (MHC Class I and cross-presentation), insulin signaling, and activation of NFkB in β cells. This pathway activation-repression pattern suggests a succession of signaling events that involve general cell maintenance, immunity recognition, communication, and response pathways that may contribute to progression towards islet autoimmunity. A more complex regulatory pattern was observed when analyzing signaling pathways that were enriched during the 12 months preceding seroconversion (Fig. c). Differences were significant between the 12to9 and 9to6 networks (p value < 2.2e-16) and between the 9to6 and 6to3 networks, while differences between the 6to3 and 3to0 networks were not significant. These results suggest that a coordinated transcriptional response preceded the metabolic perturbations. These patterns are consistent with our NPLS-DA results, in which the earliest time block (9-12 MBSC) provided information that best predicted the development of autoimmunity. We further analyzed the molecular interaction network at the transition between 12 and 9 MBSC, the period with the highest predictive value in our NPLS-DA model; this transition involves a process that we previously showed to significantly impact T1D-related genes. Highly interconnected genes included EML6 and SIGLEC1, as well as genes involved in the regulation of extracellular metalloprotease activity (TIMP3) and innate immune responses (NFkBIL1). Highly interconnected compounds included vitamins C and D, components of vesicle membranes, and intermediate metabolites (adipate).
This result agrees with the functional classes observed in the enrichment analysis and suggests a strong interconnection among energy synthesis, lipid metabolism, nutrient levels, cell signaling, and immune responses in patients that eventually will progress towards autoimmunity. To further investigate the coordination of molecular changes occurring during autoimmunity progression, partial correlation analysis (PCoA) networks were generated for the transition between any two consecutive time points (Fig.). Together, enrichment and partial correlation analyses reveal a coordinated metabolic and gene expression response that involves sustained changes in energy and lipid components and that is connected to multiple signaling mechanisms leading to immune responses. While the above enrichment and network analyses revealed major molecular events associated with an autoimmunity predictive signature, these results fall short of enabling a mechanistic interpretation of the pathways to disease. We addressed this challenge by projecting our predictive signature over the template of KEGG pathways using the PaintOmics3 tool, which allows a joint display of gene expression and metabolomics data. When necessary, pathways were edited and combined to include novel associations identified by our partial correlation analysis and to improve readability. The hypothetical model resulting from this interpretive analysis is discussed in the following sections. The increase of APOA1 and the decrease of LPC (18:3) in cases were correlated with an increase in adipate. Another gene displayed lower expression in cases and, together with the downregulation of its inactivating enzyme AKT serine/threonine kinase 1 (AKT1), suggests that the glycogen synthesis pathway is slightly repressed in IA cases compared to their healthy controls; a major repressor of glycolysis was also downregulated in our data.
Notably, metabolic processes related to glucose utilization and energy synthesis were significantly enriched and activated in cases at most time points before seroconversion, as were antioxidant enzymes such as peroxiredoxin (PRDX1) and superoxide dismutase (SOD) (Fig.). Collectively, this molecular profile suggests a model in which IA-developing subjects display an imbalance in lipid metabolism that is linked to reduced nutrient uptake, triggers activation of glycolysis, and leads to intermediate metabolite and ROS accumulation as early as 12 MBSC. We detected higher expression levels of several arachidonate-lipoxygenase genes, such as ALOX12, ALOX15, ALOX15B, and prostaglandin synthase 1 (PTGS1), known to be activated upon ROS accumulation [29], as well as the phospholipases PLA2G4C and PLA2G7, in cases at multiple time points before seroconversion. Interestingly, we did not detect a significant change in arachidonate between cases and controls, which may indicate a rapid turnover between biosynthesis and catabolism. However, our data did suggest the participation of the arachidonate metabolism pathway in the inflammatory response observed in IA cases. This involvement is also supported by previous works showing that ALOX12 was implicated in pancreatic inflammation induction and T1D disease progression, and that ALOX5 was involved in leukocyte recruitment and aneurysm formation in arteries. An upregulation of the PTGFβ and interleukin-10 (IL10) genes was observed, as were immune-cell migration processes (Fig. c). In contrast to MMP9, the metalloproteinase inhibitor TIMP3 was downregulated at early time points. Since our blood multi-omic signature revealed T cell activation and immune-cell migration processes in IA-developing children, we searched for markers of β cell destruction in our data. Blood biomarkers, such as the islet autoantibodies, may be indicative of distal organismal pathologies, which liberate molecular signals into the bloodstream.
The upregulation of apoptosis-associated genes, including perforin (PRF1), granzyme B (GZMB), and caspase 3 (Casp3), was identified in cases throughout the 12 MBSC (Fig.). Altogether, our analysis of immune processes revealed by the blood multi-omics signature associated with IA progression lends support to a model in which ROS stimulates increased activity of antigen-presenting cells and decreased activity of Tregs within an environment that favors immune-cell patrol, activation of autoimmunity, and migration of cytotoxic CD8+ T and NK cells towards the pancreatic islets, where they coordinate the degradation of β cells. High-prevalence diseases such as type 1 diabetes are the consequence of the interaction of a variety of genetic and environmental factors that contribute to the establishment of a complex molecular phenotype leading to the onset of the disease. The molecular dysregulation pattern of T1D has been extensively studied via high-throughput assays such as genomics, transcriptomics, metabolomics, and metagenomics. However, few studies measure these different features on the same set of individuals and over an extended period as the TEDDY project does. Such an experimental design allows for an integrative analysis that can reveal the contributions and interactions of different molecular disease factors and their progression to autoimmunity. Realizing this analysis potential requires advanced preprocessing and statistical tools capable of harmonization, modeling, and interpretation of highly heterogeneous data.
Here, we addressed these challenges by deploying an analysis pipeline that (a) makes use of the TEDDY case-control design and a time-rescaling strategy to combine measurements from individuals of different age, sex, and genetic background into a comparable and sufficiently powered dataset; (b) models omics features through 3D tensor structures, both separately and jointly, to recover the contribution of each molecular layer to disease progression while providing a unique predictive model; and (c) combines multiple bioinformatics pathway analysis methods to propose interpretable models of disease. This analysis strategy strongly contrasts with most multi-omics disease studies, in which the omics modalities are poorly integrated. We used a multivariate approach (NPLS-DA) and a combinatorial variable selection strategy based on the VIP statistic to identify a predictive multi-omics signature. The NPLS approach was preferred for this study as it naturally accommodates the three-dimensional structure of our dataset to return information about patients, features, and the dynamics of disease progression. Moreover, the VIP-based variable selection results in a predictive signature that can be further analyzed and interpreted by enrichment methods. We note that PLS may be prone to overfitting when the number of variables largely exceeds the number of observations. Possible model overfitting was addressed here both at variable selection and at prediction performance: in the first case by implementing a permutation test for the final selected variables (Fig. d), and in the second by validating predictions on an independent set of samples. We present an interpretative analysis of the NPLS-DA autoimmunity predictive signature that combines time-resolved enrichment analyses to identify disease-evolving cellular processes, partial correlation analysis to unravel novel molecular associations, and PaintOmics-assisted data representation to incorporate existing pathway maps into one integrated biological model.
The results of the interpretative effort also revealed many interesting patterns. First, we detect a metabolic phenotype in IA-developing children that is sustained during the analyzed 12-month period before seroconversion. This phenotype consists of the upregulation of energy-producing pathways and fatty acids, and the downregulation of structural lipids, triglycerides, and a major transcriptional regulator of lipid metabolism. Different elements of the metabolic signature reported in this work have been previously described in the literature as having links with T1D. For example, the metabolic stress before seroconversion has been reported to impact levels of phospholipids and energy metabolites [46]. This metabolic signature prevails for months before the detection of autoantibodies, while signaling and immune-related mechanisms seem to follow a more complex regulatory pattern in which some pathways are activated at 6-9 months before seroconversion while others become active closer to diagnosis. Here, evidence of ROS-related inflammation, angiogenesis, and immune responses was observed (Figs. and 4). In summary, our analysis proposes a series of events that start as early as 12 months before seroconversion and involve metabolic, inflammatory, and autoimmune processes, inferred by the combination of transcriptomic, metabolomic, and dietary biomarker profiles. While data were obtained from different biological compartments (PBMC and plasma) and their communication has not been modeled, there is overlap in the metabolic processes that are present and active in both. Interestingly, results of recent work indicate a similarity in the overall metabolic profiles of PBMC and plasma of diabetic patients, with differences between cases and controls being larger in PBMC extracts.
Additionally, the results shown here combine most of the TEDDY cohort in one unique analysis, regardless of important disease factors such as type of first-appearing autoantibody, ethnicity, or country of origin. Unfortunately, further stratification of the multi-omic data based on these factors imposes limitations on the effective sample size. Therefore, we acknowledge that the results shown here do not represent a complete disease model, nor capture possible differences in disease subtypes. Still, this study demonstrates the power of the integrative approach to model complex disease processes with temporal resolution and to identify molecular disease phenotypes months before the current diagnosis capacity. Ultimately, the results provide information that will guide the development of strategies for early diagnosis and treatment of T1D. Finally, while our study establishes a metabolically impaired and highly inflammatory state in children with HLA risk genotypes who progress towards autoimmunity, it does not provide any insights on the origin of this physiological condition. Causes could be either genetic or environmental. The genetics of T1D has been extensively studied, with nearly 50 SNPs found to have a significant association with T1D. However, some other environmental factors showed contradictory results. For example, cow milk intake in childhood has been associated with both an increased risk of IA and T1D. The TEDDY study, which also collects lifestyle and exposure data, represents a unique opportunity to analyze the relationship between environmental factors and the molecular phenotypes discovered here.
Additional studies have the potential to further elucidate the type and timing of environmental triggers affecting the onset of islet autoimmunity, as well as the subsequent course of disease in those who ultimately develop T1D. Data are available at https://www.niddkrepository.org/studies/teddy. TEDDY is an international study that enrolled 8676 newborn infants with a high- or moderate-risk class II HLA genotype between 2004 and 2010. Blood sampling from enrolled children began at 3 months of age, with subsequent samples taken at 3-month intervals for 48 months, after which they were taken biannually. Total RNA was extracted from 2.5 mL peripheral blood per sample using a high-throughput (96-well format) extraction protocol that applies magnetic (MagMax) bead technology at the TEDDY RNA Laboratory, Jinfiniti Biosciences in Augusta, GA. Purified RNA (200 ng) was further used for cRNA amplification and labeling with biotin using the TargetAmp cDNA synthesis kit. Approximately 750 ng of labeled cRNA was hybridized to Illumina HumanHT-12 Expression BeadChips per the manufacturer's instructions. The HumanHT-12 Expression BeadChip provides coverage for more than 47,000 transcripts and known splice variants across the human transcriptome. After hybridization, arrays were washed, stained with Cy3-conjugated streptavidin, and scanned. Gene expression data were generated for 306 individuals. The beadarray and lumi Bioconductor packages were used for preprocessing microarray data, including image analysis, quality control, variance stabilization transformation, normalization, and gene annotation. The Median Background method was used for local background correction. Also, the BeadArray subversion of harshlight (BASH) method was used for bead artifact detection, which takes local spatial information into account when determining outliers.
Each probe is replicated a varying number of times on each array; the summarization procedure produces bead summary data in the form of a single signal intensity value for each probe. Illumina's default outlier function and modified mean and standard deviation were used to obtain the bead summary data. Variance-stabilizing transformation (vst) and robust spline normalization (RSN) methods, which combine features of quantile and loess normalization, were applied to correct for batch effects and to obtain between-array data normalization. The pairwise structure of the data permits elimination of biases associated with the characteristics of the pair, such as gender, age, and country of origin, as previously described. The Fiehn laboratory at the NIH West Coast Metabolomics Center quantified metabolomics abundance measures (metabolites and lipids) for all cases and controls for each available study visit from birth until the case event time. Primary metabolites and complex lipids were quantified from citrate plasma using GC-TOF MS and CSH-QTOF MS data acquisition, respectively, at the NIH West Coast Metabolomics Center at the University of California, Davis. Plasma from blood drawn into light-protected tubes (BD Vacutainer® CPT™ Cell Preparation Tubes) was used to determine dietary biomarkers at the Genomics and Biomarkers Unit at the National Institute for Health and Welfare, Helsinki, Finland. 25(OH)D concentrations were measured using the ARCHITECT 25-OH Vitamin D chemiluminescent microparticle immunoassay (CMIA). For our study, we selected TEDDY individuals for which gene expression, metabolomics, and dietary biomarker data were available for 12 months before seroconversion, representing a total of 306 subjects. Seroconversion was defined as the second successive visit with detection of islet autoantibodies.
Hence, cases were those individuals who developed autoimmunity during the study, while controls were defined as those who remained disease-free. Cases and controls were matched by age, ethnicity, and collection center. Time point zero in the analyses was defined as the date of IA confirmation for each case. Multi-omics data were then binned at 3-month intervals for each case and matching control, resulting in time point intervals 0, 3, 6, 9, and 12 months before seroconversion. The bins were defined as follows: the 0, 3, 6, 9, and 12 time points correspond to 1-1.5, 1.5-4.5, 4.5-7.5, 7.5-10.5, and 10.5-13.5 months before seroconversion, respectively. When present, multiple values from the same subject within each bin were averaged. Each omics dataset was arranged as a tensor or 3-way structure X, where each mode accommodates one of the three dimensions involved in the study. Modes 1 to 3 represent individuals, omics variables, and time, respectively. Thus, an element x_ijk of the array corresponds to the value for patient i at omics feature j at time point k. This structure facilitates the assessment of data covariation patterns across the three modes of our datasets simultaneously. Missing values were imputed until a convergence threshold of 10^-7 was reached; this imputation strategy assured that the imputed values had a minimal influence on the model obtained from the true data. In the PLS decomposition, u_h are the hth eigenvectors of the singular value decomposition (SVD) of the X matrix for any PLS dimension h, and v_h are the hth (right) eigenvectors of the SVD of the Y matrix for any PLS dimension h. NPLS-DA was performed to determine the variables that best explained the differences between cases and controls. NPLS-DA models were constructed for individuals with complete or imputed data in all three data modalities. NPLS is an extension of PLS to multiway structures [79].
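The binning step described above can be sketched with pandas. Column names (`subject`, `mbsc`, `value`) are illustrative, and the handedness of the bin edges (left- vs right-inclusive) is assumed, since the text does not specify it.

```python
import pandas as pd

def bin_by_mbsc(samples):
    """Bin samples by months-before-seroconversion (MBSC) into time points
    0, 3, 6, 9, 12, using the bin edges quoted above, and average replicate
    values per subject within each bin."""
    edges = [1.0, 1.5, 4.5, 7.5, 10.5, 13.5]   # 1-1.5, 1.5-4.5, ..., 10.5-13.5
    labels = [0, 3, 6, 9, 12]
    binned = samples.assign(
        timepoint=pd.cut(samples["mbsc"], bins=edges, labels=labels))
    return (binned.dropna(subset=["timepoint"])
                  .groupby(["subject", "timepoint"], observed=True)["value"]
                  .mean()
                  .reset_index())
```

Samples falling outside all bins (e.g., more than 13.5 months before seroconversion) are dropped, and multiple samples landing in one bin are averaged, mirroring the procedure in the text.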
PLS performs well with p >> n data structures and with variables with high multicollinearity, as in transcriptomics datasets, as Barker and Rayens demonstrated. This strategy evaluated the goodness of fit (R2) and the predictive outcome capability (Q2) of each model. Also, fivefold and tenfold cross-validation were performed on the tridimensional (3D) data array per mode of the block X (the matrix of predictor variables). In this way, one matrix per mode, for a total of three matrices, was obtained. Each matrix contained scores or loadings depending on the mode being analyzed. Through the DIFFIT approach, the number of components was selected. The pseudo-code for variable selection and model validation is:

Split the samples into 5 or 10 folds, or take one sample per fold for LOO cross-validation
for i in 1:n:  # where n could be 5, 10, or the number of samples
    testingSet  <- samples in fold i
    trainingSet <- samples not in fold i
    fit PLS-DA using trainingSet
    SelectedVars <- selection of the variables that best describe the model
    Prediction   <- predict the class of the samples in fold i using the fitted model with SelectedVars
    PerformanceEvaluation <- obtain R2 of the model and Q2 based on the prediction
Repeat this process 1000 times
Obtain the average R2 and Q2 across all iterations and folds

The model with the highest goodness of fit (R2) and predictive outcome capability (Q2) values was then chosen as the best model for the omics dataset. A variable selection strategy was designed based on the Variable Importance for Projection (VIP) statistic, which quantifies the influence that each predictor variable has over the response variable in terms of the total sum of squares of the model. The performance of the NPLS-DA model was evaluated by leave-one-out cross-validation. 
Additionally, the model was validated both for the significance of the feature selection and for the prediction on an independent set of samples. To validate the feature selection, a permutation strategy inspired by previous works was implemented. The R2 and Q2 of the VIP-selected features were compared to the values obtained using a 100× random selection of models with the same number of features, to calculate a permutation p value of the VIP selection versus random feature selections. To test the robustness of the method, the outcome label across cases and controls was randomly permuted; NPLS-DA models were computed using either our VIP-selected variable set or random feature selections, and R2 and Q2 were again compared. The permutation test demonstrated the significance of the feature selection related to IA. Additionally, the NPLS-DA multi-omics feature selection was validated on an independent set of samples not used in the model build. These were samples of individuals with complete multi-omics measurements in fewer than three time points. The validation set contained 8, 6, 18, 31, and 26 samples for time points −12, −9, −6, −3, and 0, respectively. As the individuals in this validation set do not have data for 5 time points, NPLS-DA cannot be directly applied. Instead, we used the set of VIP-selected features to predict outcome by linear discriminant analysis, for each time point separately. Partial correlation analysis was performed among the VIP signature to identify associations between features within the TEDDY case-control dataset, where X and Y are the variables whose correlation is required, controlled by all other Z variables. Briefly, a shrinkage analysis from ridge regression to α = 1 (Lasso regression) was performed using all possible intermediate values to determine the best shrinkage method for every particular time period, with 100 iterations for each α value from 0 to 1 in 0.1 increments. 
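The VIP-versus-random permutation comparison can be sketched generically; `score_fn` is a placeholder for the model-quality measure (R2/Q2 in the text), and the add-one correction is a standard convention:

```python
import numpy as np

def permutation_pvalue(score_fn, X, y, selected, n_random=100, seed=0):
    """Empirical p value of a feature selection: the fraction of random
    same-size selections scoring at least as well as the selected set."""
    rng = np.random.default_rng(seed)
    observed = score_fn(X[:, selected], y)
    null_scores = []
    for _ in range(n_random):
        cols = rng.choice(X.shape[1], size=len(selected), replace=False)
        null_scores.append(score_fn(X[:, cols], y))
    # add-one correction keeps the empirical p value strictly positive
    p = (1 + sum(s >= observed for s in null_scores)) / (n_random + 1)
    return observed, p
```

A small p value indicates the selected features outperform random same-size subsets.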
Next, for each period, features were selected by creating 1000 regression models using their winning α value, and features were collected that appeared at least once across repetitions and time. For partial correlation analysis, only normalized case values were used. To capture relationships associated with disease progression, correlations were computed for differences between omics variable values at two consecutive time points. Hence, profiles of 4 time periods were obtained: 12–9, 9–6, 6–3, and 3–0 months before seroconversion. Moreover, an elastic net regression strategy was applied to the VIP signature to select those features with the highest variability at each time window. While partial correlation was computed for the full VIP signature, networks were constructed including only the features selected by the described strategy. The network visualization was made using Cytoscape. Gene set enrichment analysis (GSEA) was computed using the PIANO R-package. P values were calculated for the two available modalities, using either the sum or the mean gene expression value as the gene set statistic, and the most significant p value was retained. Time point-specific enrichment p values were combined using Fisher's method (fisher.method) with Benjamini-Hochberg multiple testing correction. Combined p values were obtained for all possible time point combinations, and pathways were selected if their Fisher's method adjusted p value was ≤ 0.05 in at least one time point combination. Semantically redundant pathways were manually discarded. Values used to draw enrichment pathways were calculated as 1 minus the p value, multiplied by the sign of the average expression value of the significant genes present in the pathway. Enrichment of metabolic classes among significant metabolites was calculated with Fisher's exact test. 
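The elastic net α-grid search and the keep-features-appearing-at-least-once rule might be sketched with scikit-learn as follows; note that scikit-learn calls the paper's α `l1_ratio`, and the fixed penalty strength and bootstrap resampling scheme are our assumptions:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score

def pick_l1_ratio(X, y, grid=None, cv=5):
    """Choose the mixing parameter (the paper's alpha) by cross-validated R2.
    sklearn's own `alpha` is the overall penalty strength, fixed here at an
    assumed 0.1."""
    grid = np.arange(0.1, 1.01, 0.1) if grid is None else grid
    scores = [cross_val_score(ElasticNet(l1_ratio=r, alpha=0.1, max_iter=5000),
                              X, y, cv=cv).mean() for r in grid]
    return float(grid[int(np.argmax(scores))])

def stable_features(X, y, l1_ratio, n_models=100, seed=0):
    """Refit on bootstrap resamples with the winning mixing parameter and
    collect every feature that receives a nonzero coefficient at least once."""
    rng = np.random.default_rng(seed)
    keep = set()
    for _ in range(n_models):
        idx = rng.choice(len(y), size=len(y), replace=True)
        model = ElasticNet(l1_ratio=l1_ratio, alpha=0.1, max_iter=5000).fit(X[idx], y[idx])
        keep.update(int(j) for j in np.nonzero(model.coef_)[0])
    return sorted(keep)
```

The union over resamples mirrors the "appeared at least once across repetitions" rule in the text.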
Enriched classes were selected for an FDR-corrected p value ≤ 0.2. These values were then used to generate a heatmap of metabolite changes across time. Metabolites were manually assigned to a metabolic class (Additional file).

Additional file 1. Dietary biomarkers data processed.
Additional file 2. Dietary biomarkers data raw.
Additional file 3. GCTOF metabolomics data processed.
Additional file 4. GCTOF metabolomics data raw.
Additional file 5. Gene expression data processed.
Additional file 6. Gene expression data raw.
Additional file 7. Negative lipidomics metabolomics data processed.
Additional file 8. Negative lipidomics metabolomics data raw.
Additional file 9. Positive lipidomics metabolomics data processed.
Additional file 10. Positive lipidomics metabolomics data raw.
Additional file 11. Cohort data.
Additional file 12. Selected features.
Additional file 13. Supplementary figures.
Additional file 14. Pathway enrichment.
Additional file 15. Review history."}
+{"text": "Most time series derived from complex systems in real life are non-stationary: the data distribution is influenced by various internal/external factors such that the contexts are persistently changing. Therefore, concept drift detection for time series has practical significance. In this paper, a novel method called online entropy-based time domain feature extraction (ETFE) for concept drift detection is proposed. Firstly, empirical mode decomposition based on extrema symmetric extension is used to decompose the time series, so that features at various time scales can be adaptively extracted; meanwhile, the end point effect caused by traditional empirical mode decomposition is avoided. Secondly, by using entropy calculation, the time-domain features are coarse-grained to quantify the structure and complexity of the time series, among which six kinds of entropy are discussed. Finally, a statistical process control method based on the generalized likelihood ratio (GLR) is used to monitor the change of the entropy, which can effectively track the mean and amplitude of the time series; therefore, an early alarm of concept drift can be given. Synthetic data sets and neonatal electroencephalogram (EEG) recordings with seizure annotations are used to validate the effectiveness and accuracy of the proposed method. The study of time series has strong theoretical significance and application value in real life. Due to its practical importance, works related to the applications of time series are widespread in finance, engineering, medicine, and other fields [1,2,3,4]. 
Generally, concept drift detection methods can be divided into two types. Even though many approaches related to the detection of concept drifts in time series have been proposed in recent years [18,19], several difficulties remain. The contributions of this work are:

- A novel unsupervised algorithm is proposed for online time series concept drift detection, which can effectively detect the occurrence of concept drift in streaming data by capturing the fine structures of data at different time scales.
- Entropy methods are used to capture the changes of intrinsic structures of the original sequence in different time domains, where multiple application scenarios are discussed in detail according to the characteristics of the entropies.
- A statistical process control method based on GLR is designed to monitor the changes of the obtained entropy information, which can determine the concept drift in time and reduce false alarms.

In order to solve the difficulties mentioned above, in this paper, a novel unsupervised algorithm is proposed for online time series concept drift detection. Firstly, an empirical mode decomposition (EMD) method based on extrema symmetric extension is applied. The rest of the paper is organized as follows: the second part presents the literature review; the third part introduces the proposed algorithm, entropy-based time domain feature extraction (ETFE), including its principle and implementation; the fourth part presents the related experiments, including the performance evaluation of the proposed method on synthetic data and real data; the fifth part gives the conclusion and prospects of our work. In recent years, some theoretical results have been proposed to tackle concept drifts in time series. In order to solve the problem that real time series data are difficult to label due to their flow patterns and high frequencies, Cavalcante proposed an approach for this setting. In order to deal with the influence of the time dependence of time series on concept drift detection, Guajardo proposed another method. Costa et al. 
dealt with related problems as well. In this paper, a novel unsupervised algorithm is proposed for online time series concept drift detection. Compared with the existing detection methods, the novelty of this approach is that, based on the IMFs revealing the original signals, entropy methods are used to capture the changes of intrinsic structures of the original sequence in different time domains, where the extracted features have a higher signal-to-noise ratio. Furthermore, the statistical control process can effectively determine the occurrence of concept drift and reduce false alarms. In this section, the ETFE method is introduced in detail; it is an online unsupervised concept drift detection algorithm for time series. Given the noise and abnormal interferences in the original time series, it is difficult to detect concept drift directly from the original data. EMD is a method proposed by Huang et al., which adaptively decomposes a signal into intrinsic mode functions (IMFs). Suppose the sequence has M local maximums and N local minimums, with their indexes denoted accordingly. For the extrema symmetric extension, start from the left side: depending on the relation between the left endpoint and its adjacent extrema, the nearby extrema are mirrored and shifted d units to the left, giving the time indexes and values of the extension sequence; the right endpoint is extended in the same way. The sifting procedure is then: (1) find all local maximum points and local minimum points in the sequence; (2) check whether the candidate component satisfies the two IMF conditions: the number of local extremum points and the number of zero crossing points are equal or differ by at most 1, and the average of the envelopes of the local maxima and the local minima is zero; (3) if the two conditions are satisfied, the candidate is taken as the s-th IMF, where s indicates the number of repeats of the sifting steps. The procedure is repeated f times until the obtained f-th residual is a monotonic function. 
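A minimal EMD sifting sketch in the spirit of the description above; it uses plain cubic-spline envelopes with simple endpoint anchoring rather than the paper's extrema symmetric extension, so the endpoint effect is only crudely handled, and all names are our own:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(x):
    """Average of the upper and lower cubic-spline envelopes, or None when
    the signal has too few extrema to build envelopes."""
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None
    # anchor the splines at both endpoints to limit divergence
    up = CubicSpline(np.r_[0, maxima, len(x) - 1], np.r_[x[0], x[maxima], x[-1]])(t)
    lo = CubicSpline(np.r_[0, minima, len(x) - 1], np.r_[x[0], x[minima], x[-1]])(t)
    return (up + lo) / 2.0

def emd(x, max_imfs=3, max_sift=30, tol=1e-3):
    """Decompose x into IMFs plus a residual by repeated sifting."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        if mean_envelope(residual) is None:      # residual is (near) monotonic
            break
        h = residual.copy()
        for _ in range(max_sift):                # sift until the mean envelope ~ 0
            m = mean_envelope(h)
            if m is None or np.mean(m ** 2) < tol * np.mean(h ** 2):
                break
            h = h - m
        imfs.append(h)
        residual = residual - h
    return imfs, residual
```

On a two-tone signal, the first IMF tracks the high-frequency component, as the text describes.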
In this way, the original time series is decomposed into a set of IMFs plus a residual. Finally, the data of the extension part are deleted and only the data decomposed from the original part are retained. However, the process of EMD is normally affected by the endpoint effect: the divergent results gradually pollute the data inward, resulting in distortion of the results. Owing to the noise and disturbances existing in time series, changes of the time-domain characteristics are difficult to capture by directly extracting information from the raw sequence data. Approximate entropy (ApEn) is a statistical measure of the complexity of a time series, which can be applied to non-linear and non-stationary data with high noise. For a time series, it is computed as follows: (1) reconstruct the original sequence into subsequences of a given length; (2) compute the distance between each pair of subsequences; (3) count the number of vectors within a given tolerance and calculate the ratio between this number and the total number of subsequences (this is called the template matching process); (4) calculate the average similarity rate; (5) according to steps 1–4, calculate the average similarity rate again with the subsequence length increased by one; (6) calculate the approximate entropy from the two average similarity rates. It can be seen from the calculation process of ApEn that, when the difference between two subsequences is large, the number of matches within the tolerance is small. Sample entropy (SampEn) is an improved variant. Different from SampEn, fuzzy entropy (FuzzEn) introduces a fuzzy membership function into the similarity measure. Although SampEn, ApEn, and FuzzEn can be used to measure the complexity of time series, they ignore the time dependence of elements in the time series. Permutation entropy (PeEn) is a measure based on the ordinal patterns of the series. In the definition of PeEn, when extracting ordered patterns from a time series, no information other than the ordered structure is retained, such as the magnitudes of the time series. This may lead to the same PeEn value for time series with different amplitude scales or fluctuation patterns. 
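The template-matching steps above translate directly into code; below is a compact (O(n²)-memory) sketch of ApEn together with SampEn, which excludes self-matches:

```python
import numpy as np

def _phi(x, m, r):
    """Average log proportion of m-length templates within tolerance r."""
    n = len(x) - m + 1
    templates = np.array([x[i:i + m] for i in range(n)])
    # Chebyshev distance between every pair of templates
    dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    ratios = np.mean(dist <= r, axis=1)        # self-match included, as in ApEn
    return np.mean(np.log(ratios))

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x) if r is None else r
    return _phi(x, m, r) - _phi(x, m + 1, r)

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x) if r is None else r
    N = len(x)
    def count(mm):
        # templates of length mm over the first N - m starting points
        t = np.array([x[i:i + mm] for i in range(N - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(t)) / 2.0  # pairs, excluding self-matches
    return -np.log(count(m + 1) / count(m))
```

A regular signal (e.g., a sine) scores lower than white noise under both measures, which is the property the detector exploits.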
Weighted permutation entropy (WPeEn) can better distinguish such series by weighting the ordinal patterns. Increment entropy (IncrEn) is a measure of time series complexity proposed in recent years, whose definition is based on the increments between consecutive elements. Therefore, in order to comprehensively analyze the application of entropy in concept drift detection, various entropy methods, including the six entropies above, have been applied and the comparative results are discussed. From the discussion of IMF-Entropy, it can be seen that, when concept drift occurs, the entropy of the IMFs changes in mean, variance, or both. In order to monitor these changes, a statistical process control (SPC) model based on GLR is used. We consider a process in which a change point is assumed at an unknown location. When the number of consecutive observations reaches a predefined number, the test is evaluated; if there is no prior knowledge to determine the location of the change point, the maximum of the GLR statistic over all candidate breakpoints is used. In the implementation of the GLR algorithm, the space complexity is not high: only two arrays are needed for the calculation, one of which holds the running sum of the whole data, so the GLR test statistic can be easily calculated. The test can be restricted to the H most recent observations, using only these observations in the testing procedure. Whenever a new observation arrives, the oldest of the H most recent observations is dropped and the latest value is added; in this way, the breakpoint is searched within the most recent H data. This method does not ignore all the information outside the window, which not only has statistical significance but also makes the calculation faster. Although the computational speed of the statistics required for the GLR test is fast, the process of finding the appropriate breakpoint can be accelerated by the ky–Jones method. Assuming no change occurs, the average number of observations received before a false positive detection is equal to the average run length (ARL). The above three modules constitute the proposed method. The original series data need to be decomposed based on a segment of the time series, therefore a sliding window is required. 
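A sketch of the windowed GLR test for a Gaussian mean shift follows; the control threshold and minimum segment length are assumed stand-ins for the ARL-derived limit:

```python
import numpy as np

def glr_mean_shift(window, sigma2=1.0, min_seg=5):
    """Return (best statistic, best breakpoint) over all admissible splits of
    the window, for a likelihood-ratio test of a mean change with known
    variance sigma2."""
    w = np.asarray(window, dtype=float)
    n = len(w)
    best, best_k = 0.0, None
    csum = np.cumsum(w)                       # one pass of cumulative sums
    for k in range(min_seg, n - min_seg + 1):
        m1 = csum[k - 1] / k                  # mean before the candidate break
        m2 = (csum[-1] - csum[k - 1]) / (n - k)
        g = k * (n - k) / n * (m1 - m2) ** 2 / (2.0 * sigma2)
        if g > best:
            best, best_k = g, k
    return best, best_k

# usage: flag a drift when the statistic exceeds a control threshold
rng = np.random.default_rng(0)
window = np.r_[rng.normal(0, 1, 60), rng.normal(2, 1, 40)]  # mean shift at k = 60
stat, k_hat = glr_mean_shift(window)
drift = stat > 10.0                            # assumed threshold
```

The returned breakpoint is the maximizer of the statistic, matching the breakpoint search described in the text.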
If the window size is too small, it will contain less information, while a larger window will miss catching some local behaviors. Actually, there is no general way to determine the window size, which is related to the features of the time series; for instance, the window size of data deriving from the medical field may be considerably different from one from the financial field. Therefore, the size of the sliding window can be selected according to prior knowledge of the actual application scene. With the addition of new observations, the time series data in the window are decomposed by the extrema symmetric extension EMD method. When drift occurs, it will inevitably lead to changes in the original time series. Since IMFs are the characteristic expressions of the original time series at various time scales, changes in the internal structures and complexity of the IMFs would correspondingly occur. From the above discussion, we can see that, when drifts occur, although the changes are difficult to observe directly from the original data, the variance and mean of the IMFs' entropy change significantly. Therefore, in order to detect this change in a streaming-data environment, we introduce a GLR-based statistical process control method. Through the GLR statistical test, the breakpoint that maximizes the GLR statistic can be found. Then, one can judge whether the drift condition is reached by comparing the GLR statistic with the predefined control threshold. When a drift is detected, the detector restarts from the observation following the detection point. The overall ETFE algorithm proposed is shown in Algorithm 1. 
The implementation code of this algorithm has been made available.

Algorithm 1 The overall algorithm of ETFE
Input: data stream x1, x2, ...
Initialization: initialize the parameters of the specified entropy and the size n of the sliding window
1  for each observation:
2      if the sliding window is not yet full:
3          append the observation to the sliding window
4          continue
5      else:
6          append the observation to the sliding window
7      imfs <- EMD(sliding window)
8      entropy value <- entropy(imfs)
9      update the interim parameters of GLR with the entropy value
10     calculate the GLR test statistic
11     /* compare with the control threshold ϑ */
12     if the statistic does not exceed the threshold:
13         there is no evidence that drift occurs
14     else:
15         there is evidence that drift occurs
16         record the drift detection position
17         record the drift detection time i
18         restart from the next observation

Here, n is the length of the sliding window, and only the first two IMFs are used in the proposed approach. In the calculation of entropy, it is necessary to compare the relations among the reconstructed subsequences, so the time complexity of the entropy step is quadratic in the window length. From Algorithm 1, one can see that the time complexity mainly lies in the computations of the EMD, the entropy, and the GLR test statistic; EMD is widely used in data stream processing because of its low time complexity. Through the analysis of the space and time complexity of the proposed algorithm, it can be seen that it can be fully applied to big data scenarios with high frequency and high volumes, where the detection of concept drifts in a real-time data flow can be achieved. Therefore, the proposed model can be implemented in applications such as monitoring abnormal price fluctuations caused by manipulation in financial derivatives markets, changes of data distribution caused by machine faults in industrial production, and seizure attacks of patients. In this part, a full evaluation of the proposed method is carried out. 
Firstly, six entropy methods are involved to make a brief comparative study, by which one can intuitively observe the feasibility of the scheme. Secondly, by using synthetic data sets, the effectiveness of the proposed method is validated. Thirdly, the real EEG data sets are used for further verification. Two autoregressive processes are generated, representing two different concepts of time series; the length of each phase is 2000, as shown in the corresponding figure. In this group of experiments, IMF1 and IMF2, i.e., the two highest-frequency IMFs, are used, with a sliding window of size 100. Whenever a new observation enters, the sliding window moves forward one unit. By transforming the original time series, the entropy change of IMF1 and IMF2 can be seen after 2000 points, where the concept drift occurs and the distribution of the data begins to change. As to IMF-FuzzEn, IMF1's entropy fluctuates around 0.2 in the first concept; after 2000 points, it declines significantly and stays around −0.1. IMF2's entropy fluctuates around 0.1 in the first concept; after the first 2000 points, it experiences a significant upward change and stays around 0.25. It can be seen that the occurrence of concept drift leads to changes in the structure and complexity of the time series in different time-domain features. Since the frequency of IMF1 is higher than that of IMF2, IMF1 reveals more complex fluctuation patterns and is sensitive to changes in the time series. Therefore, when the concept of the original time series changes, the entropy of IMF1 can provide a reflection earlier than that of IMF2. The same situation is also reflected in IMF-PeEn and IMF-IncrEn. 
In IMF-SampEn, after 2000 points, although the mean value of IMF1's entropy has not obviously changed, the variance reflects large fluctuations: the variance of IMF1's entropy becomes smaller and that of IMF2 becomes larger. Similarly, the change of the high-frequency IMF1 in ApEn occurs earlier than that of IMF2. From the results of IMF-WPeEn, one can see that after 2000 points, the mean and variance of the entropies of both IMF1 and IMF2 have changed: the mean of the entropy of IMF1 has increased but its variance has decreased, while the mean and variance of the entropy of IMF2 have both increased. Similarly, the change of IMF1 is earlier than that of IMF2. From the above results of IMF-Entropy, it can be concluded that, when concept drift occurs, the entropies of the IMFs change in mean, variance, or both. In addition, from the view of entropy, the change of the higher-frequency IMF is earlier than that of the lower-frequency IMF, which means that the high-frequency IMF is more sensitive to the change while the low-frequency IMF needs a certain time delay to catch it. Such a mechanism can filter the anomalies or noises in the original data. Therefore, the features extracted from the IMFs' entropy can better reflect the concept change of the data and are more robust. Although there are many studies on concept drift, the data used are mostly based on supervised classification algorithms, and data sets aimed at studies of concept drift in time series are still lacking. In order to determine the breakpoints of concept drift and to measure the effectiveness of a detection algorithm, synthetic data is an effective option; due to the particularity of time series, there is a lack of benchmark data sets for concept drift detection of time series in real environments. In this work, artificial data sets from the literature are applied. φ in IncrEn represents the precision of the fluctuation amplitudes. 
The sliding window size is 100 and the ARL is 200, which is equivalent to the significance level. According to the common configuration, the parameters of the six entropies are set as shown in the corresponding table. In order to verify the effectiveness of the proposed algorithm on synthetic time series, four metrics are implemented: detection delay, detection position offset, false alarms, and miss detection numbers. Detection delay represents the number of instances between the occurrence time of a drift and its detection time; detection position offset represents the number of instances between the detected position and the actual drift position; false alarms represents the number of false alarms; and miss detection numbers represents the number of true drifts missed by the detector. An example is shown in the corresponding figure. In the experiments, the proposed method runs on 120 time series, each of which runs 30 times; the statistical results obtained by IMF1 and IMF2 are shown in the corresponding tables. The proposed ETFE combined with the six kinds of entropy methods is evaluated, and the results are compared with the existing detection algorithms proposed in [23]. In actual applications, the appropriate entropy method can be selected according to the intrinsic structure of the data to be tested. If regularity or similarity is present in the time series, approximate entropy or sample entropy may be selected; fuzzy entropy can be selected when the data are stable or insensitive to parameter selection; when one pays attention to the order relations within the data, permutation entropy or increment entropy can be chosen. 
If one needs to consider the fluctuation scale within the data and capture anomalies, weighted permutation entropy is the appropriate one. In addition, from the results of ETFE detection using IMF1 and IMF2, one can see that the detection delay and detection offset of IMF2 are normally higher than those of IMF1, which shows that IMF2, as a low-frequency feature, is less sensitive to time series changes than IMF1. Judging from the number of false alarms, false alarms in IMF2 are fewer than those in IMF1, which shows that IMF2, as a low-frequency feature, is only slightly affected by noise or anomalies. Moreover, the number of miss detections in IMF2 is higher than that in IMF1, which also shows that IMF2 is not sensitive to data changes; when IMF2 is used for detection, some drifts with slight changes may be missed. Even so, the number of missed warnings using IMF2 remains at a very low level. Based on the above results, in practical applications, the high-frequency IMFs can be used for low-delay detection, while the low-frequency IMFs can be used as follow-up drift confirmation, which makes the results more robust and practical. The real data applied is a dataset of neonatal EEG recordings with seizure annotations. In this dataset, not all EEG data are labeled by experts, so data containing the experts' annotations are selected. In addition, since the opinions of the three experts are not uniform for some onset periods, in order to ensure the consistency of the expert labeling, 30 periods of data annotated by all three experts are chosen. The applied data sets are shown in the corresponding table. One can observe that the change of EEG data mainly occurs in the amplitudes of the sequence data. Since weighted permutation entropy and increment entropy are more sensitive to changes in data amplitudes, they are used in this group of experiments. 
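Three of the four synthetic-data metrics defined earlier (detection delay, false alarms, miss detections) can be computed as below; the rule pairing each true drift with the first detection before the next drift is our own simplification, and the position-offset metric is omitted since it needs the estimated breakpoint:

```python
def drift_metrics(true_drifts, detections):
    """true_drifts, detections: sorted lists of instance indexes.
    Returns (mean detection delay, false alarms, missed drifts)."""
    used = set()
    delays = []
    for i, t in enumerate(true_drifts):
        end = true_drifts[i + 1] if i + 1 < len(true_drifts) else float("inf")
        # first unmatched detection between this drift and the next one
        match = next((d for d in detections if t <= d < end and d not in used), None)
        if match is not None:
            used.add(match)
            delays.append(match - t)
    false_alarms = len(detections) - len(used)   # detections matching no drift
    missed = len(true_drifts) - len(delays)      # drifts with no detection
    mean_delay = sum(delays) / len(delays) if delays else None
    return mean_delay, false_alarms, missed
```

For example, with true drifts at instances 100 and 300 and detections at 110, 150, and 320, the delays are 10 and 20, one detection is a false alarm, and nothing is missed.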
The parameters of WPeEn and IncrEn are the same as those of the previous experiments. The size of the sliding window is set to 100, moving forward 5 units at a time. In the setting of the GLR parameters, the startup is 20% of the total data length and the ARL is 200, which is equivalent to the significance level. The data stream of EEG data cannot obtain labels in real time, so it is impossible to directly use supervised detection methods. Therefore, in the comparative experiments, the baselines combine the drift detection method (DDM), ECDD, and the Page-Hinkley test (PHt), yielding ELM_DDM, ELM_ECDD, and ELM_PHt. In order to verify the effect of the proposed method, Cohen's kappa consistency test is used. Generally speaking, the experiments show that, compared with ELM_ECDD, ELM_DDM, and ELM_PHt, ETFE combined with WPeEn and IncrEn has higher accuracy in determining the onset interval by detecting concept drift, triggers fewer false alarms, and also has a lower miss detection rate. In this paper, a novel method called ETFE is proposed for the online detection of concept drifts in time series. Firstly, because real time series data are non-stationary and noisy, the empirical mode decomposition method based on extrema symmetric extension is used to decompose the time series; the time-domain features at different time scales can be effectively extracted with a good signal-to-noise ratio. Secondly, because the concept drift of a time series is accompanied by a change in its structure, entropy information is used to represent the time-domain characteristics in a coarse-grained way. Finally, when concept drift occurs, the changes of contents in the time series lead to variation of the entropy information; therefore, the concept drift can be determined by monitoring the changes of the mean and variance through the GLR-based statistical control process. In the experimental part, synthetic time series data and real data are used to verify the proposed algorithm. 
As to the synthetic time series data, six entropy methods are used to discuss the time-domain characteristics obtained by decomposition at different time scales. The metrics of detection delay, detection position offset, false alarms, and miss detection numbers are used to verify the effectiveness of the proposed method. In the real-data experiments, the neonatal EEG recordings with seizure annotations data set is applied, and three existing methods are compared with the proposed method. The results show that our method detects concept drift better and with higher robustness. In further research, when the complexity of time series is analyzed at different time scales, it would be meaningful to introduce multi-scale entropy into this work. In addition, the statistical process control method can be further enhanced to improve the detection of concept drift."}
+{"text": "The main purpose of application performance monitoring/management (APM) software is to ensure the highest availability, efficiency and security of applications. APM software accomplishes these goals through automation, measurements, analysis and diagnostics. Gartner specifies three crucial capabilities of APM software. The first is end-user experience monitoring, revealing the interactions of users with application and infrastructure components. The second is application discovery, diagnostics and tracing. The third key component is machine learning (ML) and artificial intelligence (AI) powered data analytics for predictions, anomaly detection, event correlations and root cause analysis. Time series metrics, logs and traces are the three pillars of observability and a valuable source of information for IT operations. Accurate, scalable and robust time series forecasting and anomaly detection are requested capabilities of the analytics. Approaches based on neural networks (NN) and deep learning gain increasing popularity due to their flexibility and ability to tackle complex nonlinear problems. However, some of the disadvantages of NN-based models for distributed cloud applications mitigate expectations and require specific approaches. We demonstrate how NN-models, pretrained on a global time series database, can be applied to customer-specific data using transfer learning. In general, NN-models operate adequately only on stationary time series. Application to nonstationary time series requires multilayer data processing including hypothesis testing for data categorization, category-specific transformations into stationary data, forecasting and backward transformations. We present the mathematical background of this approach and discuss experimental results based on an implementation for Wavefront by VMware (an APM software) while monitoring real customer cloud environments. 
One of the main goals of IT infrastructure and application monitoring/management solutions is full visibility into performance, health and security with growing intelligence. The prediction of performance degradations, root cause analysis, and self-remediation of issues before they affect a customer environment are anticipated features of modern cloud management solutions; self-driving data centers require proactive analytics with AI for IT operations (AIOps). Time series collection and analysis is of great importance for various tasks like anomaly detection, anomaly prediction, correlations and capacity planning [13,14,15]. Time-series analysis is a significant branch of mathematics and computing that includes a variety of different types of analytical procedures, computational tools and forecasting methods; it is sufficient to mention the well-known and powerful approaches like Fourier analysis, time series decompositions, and forecasting by SARIMA and Holt-Winters' methods. Here, we focus on NN-based models. The purpose of the paper is the application of NN-based models to time series forecasting in cloud applications. The main idea is to train a generic NN-model and transfer the acquired knowledge to customer-specific time series data never seen before. This is the only feasible way of overcoming the challenges regarding resource utilization (GPU training of the networks), as the application of a pretrained model does not require on-demand network training. This solution is feasible if the problem can be narrowed down to classes of time series data with specific behaviors for which the application of pretrained models is attainable. Moreover, those classes should be large enough to cover a sufficient portion of unseen customer data and specific enough in behavior to deal with moderate network configurations. We found that the class of stationary time series can be properly handled by NN-models. 
Unfortunately, this class is not common in the discussed domain: the majority of time series data contain nonstationary patterns like trend, seasonality or stochastic behavior. However, the class of stationary time series data can be extended to time series categories which can be transformed into the needed class by some simple transformations. This observation outlines the main idea of our approach: utilization of a pretrained NN-model with preliminary time series classification and transformation into stationary data via class-specific rules. We develop the theoretical foundation of the approach and show the results of its realization in a real cloud-computing environment. Implementation and testing are performed in Wavefront by VMware. It is worth noting that our main goal is the performance of the approach for cloud environments, rather than the accuracy of predictions compared to the well-known classical techniques that perform individual training for each specified time series in GPU-accelerated environments. For us, performance is a trade-off between accuracy and resource utilization. We observed that the accuracy is comparable to classical ARMA-related approaches while keeping resource consumption at acceptable levels. In particular, the application of the pretrained network to a specified time series in a cloud environment can be performed without GPU acceleration and with a moderate number of CPU cores. One of the important applications of time-series forecasts is the detection or prediction of infrastructure and application performance degradations or failures. Accurate and fast anomaly/outlier detection leads to proactive problem resolution and remediation before a customer environment is affected. This means that the timeliness and preciseness of anomaly detection are of great importance for distributed systems. 
However, it is worth noting that forecast-based anomaly detection may suffer from slow response times, especially for longer forecast horizons. Moreover, precautionary procedures should be taken to reduce false positive anomalies that can unnecessarily disturb users with alarms. Another important aspect tightly related to the problem is the association of time series outliers with system anomalies, which, roughly speaking, does not always hold. In any case, such problems are unsolvable without the intrusion of domain expertise into the mathematical models or their outcomes. Our solution to anomaly detection utilizes a test window which is smaller than the forecast window, providing adequate response times while still containing enough data points for the reduction of false positives. The fraction of violations of the confidence bounds of the forecasts in the test window generates an anomaly signal. The anomaly monitor generates alarms and warnings or launches preventive procedures whenever the anomaly score rises above a particular threshold. Application of pretrained NN-models is a well-founded approach in many domains like classification, image processing, voice recognition and text mining. In our setting, the weights and configuration of the pretrained network are stored in a file available for on-demand access. The global database should be regularly updated with new time series data across different customers and environments; as a result, the pretrained model would be regularly updated as well. The online engine corresponds to a customer cloud-computing environment. The APM software restores the weights and configuration of the pretrained network from the file: first, the engine retrieves the configuration for data processing; second, it restores the weights and applies the model to the processed time series data for forecasting. The offline mode requires GPU-powered data centers. 
The online mode runs in the customer common computing space without GPU acceleration. The diversity of time series behaviors is a crucial challenge for such a system. One scenario is to train a separate NN-model for each class of behavior. Another scenario, developed in this paper, is the selection of a single class that can be adequately treated by a unique NN-model; the other time series data can be treated properly after some preliminary transformations into the specified class. This scenario is more economical, as only one model should be trained, stored and applied. How does one find the class with the best trainable and transformable characteristics? Our previous discussion indicated that the class of stationary time series should be the first candidate for experiments: it can be properly modeled by NN-models, and the techniques for transforming a nonstationary time series into a stationary one are theoretically well-founded. The set of stabilizing transformations is class-specific. A deterministic trend can be stabilized by linear or nonlinear regression. A stochastic trend can be removed by differencing. Seasonality can be removed via seasonal means or lag-differencing. We apply different well-known hypothesis testing algorithms for time series classification. We use the KPSS-test and the ADF-test for the detection of deterministic and stochastic trends. We test deterministic periodicity via the PDM-test. Stochastic periodicity can be verified via the CH-test. As a result, we perform model training only for stationary time series data. We have two possibilities: either collect only stationary time series for a global database, or collect all available time series and use them after preliminary stabilization. We implemented the second scenario. Application of the pretrained neural network to user-specified time series data similarly passes through the hypothesis testing engine for data categorization. 
The model uses N uniformly sampled history points, where N is a multiple of the size of the input of the pretrained network; in our specific implementation, it should be a multiple of 40. The full grid is split into k different uniformly sampled sparse grids containing 40 data points each by a comb-like procedure. All sparse grids have the same monitoring interval, and together they combine into the full grid. The sparse grids are used independently for network training and predictions, and the corresponding forecast windows are combined in the same comb-like manner. The next challenge connected with the application of NN-models is the limited number of predefined input/output nodes of the networks, which means utilization of a small number of history and forecast data points for training and prediction. In the current implementation, we use networks with 40 inputs and 20 outputs, so that 40 history points yield 20 forecast values. This restricts the model's capability to use a bigger number of history points even when they are available. NN-models require uniformly sampled input points, and the forecast points appear with the same monitoring interval as the history points. It is worth noting that the hypothesis testing should be applied to the entire full grid to have sufficient statistics for revealing the behavior of a time series; then, the same class-specific transformations should be applied to each sparse grid. In this paper, we restrict ourselves to some specific data categories that contain deterministic and stochastic trends, and deterministic and stochastic periodicities. In all those cases, we know how to transform nonstationary data into stationary data with further application of NN-models. The list of categories can be enlarged if the corresponding transformations are known; it would be reasonable to add more domain-specific data categories based on expertise. The categorization engine starts with the periodicity analysis. 
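The comb-like splitting into sparse grids described above can be sketched in a few lines (an illustration with plain Python lists; function and variable names are ours, not from the Wavefront implementation):

```python
def split_into_sparse_grids(full_grid, k):
    """Split a uniformly sampled full grid into k comb-like sparse grids.

    Grid i takes points i, i+k, i+2k, ...; together the k grids
    recombine into the full grid, and each keeps a uniform interval
    that is k times the original monitoring interval.
    """
    return [full_grid[i::k] for i in range(k)]

# Example: 120 history points with a 40-input network -> k = 3 grids
history = list(range(120))
grids = split_into_sparse_grids(history, 3)
```

Because grid i holds points i, i+k, i+2k, ..., forecasting each sparse grid separately and interleaving the outputs yields a forecast on the original monitoring interval.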
It tests for three data categories called stationary-periodic, trend-periodic and stochastic-periodic time series. The PDM-test inspects the first two categories and the CH-test the last one. The PDM-test starts with the stationary-periodic class: it runs across different lags for a time series, measures their significance (see below) and either assigns the original data to the stationary-periodic class with a specific period-lag or rejects it. In the latter case, it tests for the trend-periodic class. The test removes a possible deterministic trend (without checking its existence) via linear regression and verifies periodicity once more: it runs across different lags for the detrended data and either accepts a specific period-lag or rejects it. In the first case, the original data are assigned to the trend-periodic class with the known trend component and periodicity-lag. In the second case, the CH-test inspects the data for a stochastic periodicity. A time series is assumed to be nonperiodic if all of the mentioned periodicity tests fail. Nonperiodic time series data are then scanned for stationarity or trend. The combination of the KPSS-test and the ADF-test classifies the data into one of the following categories: stationary, trend-stationary (the trend is deterministic), stochastic-trendy, or an unknown type if all other tests fail to recognize the time series behavior. We do not utilize unknown types. We restrict ourselves to the stationary-periodic and trend-periodic categories; stochastic-periodic time series will be considered elsewhere, although the idea is identical to the one discussed. Let a time series be periodic with some period-lag; in reality, we can only expect approximate equality due to noise and instability in the time series. We perform the inspection of periodicity by the PDM-test. 
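The core of the PDM idea can be sketched as a phase-dispersion statistic (a minimal pure-Python illustration; the actual test additionally assesses the statistical significance of candidate lags, which we omit, and the paper's "importance" of a lag would be derived from this ratio):

```python
from statistics import pvariance

def pdm_theta(series, lag):
    """Ratio of pooled within-bin variance to overall variance.

    Points sharing a phase (index mod lag) fall into the same bin;
    a ratio near 0 means the lag matches a true period, while a
    ratio near 1 means the candidate lag explains nothing.
    """
    bins = [series[phase::lag] for phase in range(lag)]
    overall = pvariance(series)
    within = sum(pvariance(b) * len(b) for b in bins) / len(series)
    return within / overall

periodic = [1.0, 5.0, 2.0, 8.0] * 10  # exact period-lag of 4
```

For this toy series the ratio is 0 at the true lag 4 and close to 1 at a wrong lag such as 3, so scanning lags and picking significant minima recovers the period-lag.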
The PDM-test considers M distinct samples collected from the time series with the same lag \u2113. More precisely, we define the phase of each data point as its index modulo the lag \u2113 and collect points with the same phase into the same bin. The preliminary goal is the minimization of the dispersion within bins: if the lag matches a true period, points within the same bin are close to each other, so the pooled within-bin variance is small relative to the overall variance of the series, and the corresponding statistic is the ratio of these two variances. We define the \u201cimportance\u201d of each lag from this ratio, and a time series is assumed to be periodic if one of the local maximums of the importance curve is statistically significant. The PDM-test has another interpretation connected with time series decompositions: assuming an additive decomposition of a time series into trend, seasonal and remainder components, the importance of a lag corresponds to the strength of the seasonal component at that lag. For nonperiodic data we restrict ourselves to the stationary, trend-stationary and stochastic-trendy (unit-root process) categories. Data categorization is performed via the KPSS-test and the ADF-test, both of which apply ordinary least squares (OLS) for coefficient determination and produce p-values against their null hypotheses. In the KPSS data model, the series is decomposed into a deterministic component (where c is the level and m is the order of the model), a random walk and a stationary error; the null hypothesis is stationarity around the deterministic component. The ADF-test uses an autoregressive data model and tests the null hypothesis that a unit root is present. The trend-stationary class contains time series data with a deterministic linear trend that can be removed via ordinary least squares (linear regression); in general, a nonlinear trend can also be handled along the same lines. The trend removal procedure is applied to each sparse grid, followed by the calculation of the forecasts and confidence bounds; the original behavior is recovered via trend addition as the backward transformation applied to each sparse grid. A series is assigned to this class when the ADF p-value is smaller than the significance level and the KPSS p-value is larger than the significance level after detrending. The stochastic-trendy class contains time series data that can be transformed into stationary data via differencing; those time series are also known as unit-root processes or unit-root processes with a drift. 
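The overall categorization logic can be summarized as a decision function over precomputed test outcomes (a sketch: in the real engine the PDM, CH, KPSS and ADF tests produce these inputs, and the p-value mapping and the 0.05 default are our assumptions, not the exact production rule):

```python
def categorize(pdm_periodic, pdm_periodic_after_detrend, ch_periodic,
               kpss_p, adf_p, alpha=0.05):
    """Sketch of the categorization order: periodicity tests first,
    then stationarity/trend tests (KPSS null: stationarity;
    ADF null: unit root)."""
    if pdm_periodic:
        return "stationary-periodic"
    if pdm_periodic_after_detrend:
        return "trend-periodic"
    if ch_periodic:
        return "stochastic-periodic"
    if kpss_p > alpha and adf_p < alpha:
        return "stationary"
    if kpss_p < alpha and adf_p < alpha:
        return "trend-stationary"   # stationary after detrending
    if kpss_p < alpha and adf_p > alpha:
        return "stochastic-trendy"  # unit-root process
    return "unknown"
```

The order matters: periodicity is inspected first, and the stationarity/trend tests run only for nonperiodic data.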
The differencing should be applied to each sparse grid, with further calculation of the forecasts and confidence bounds; the original behavior is restored via backward-differencing. A series is assigned to this class based on the KPSS and ADF p-values. We sequentially apply differencing; in the corresponding data model, c is the intercept (responsible for a drift). The process of forecasting for the stochastic-trendy class has some peculiarities compared to the previous examples. Our implementation of the pretrained model requires 40 inputs; however, after the differencing, the number of points in a grid is reduced by one. That is why 41 points should be selected in a sparse grid as the starting point. The forecast is computed on the differenced data, and the original scale is recovered by backward-differencing anchored at the last observed value. The same procedure should be applied to the confidence bounds and the other sparse grids. The stationary-periodic class contains periodic or almost periodic time series. We use k different sparse grids for separate forecasts; the selection of k should be a trade-off between the needed resolution and the complexity of computations. The first approach is connected with the structure of history windows described above: the number of points in a history window is a multiple of the size of the input of the pretrained NN-model (currently 40 inputs), and the history window is tied to the period-lag. This approach may cause problems for several reasons. The first problem is the connection of the history and forecast windows with the period-lag: the latter can be rather large, leading to unreasonably extensive computations. The second problem is an unknown period-lag, which means time series sampling with some preselected k and then resampling according to the detected period, causing time-consuming duplicated data processing. Deseasoning of the time series can instead be performed via seasonal means. The trend-periodic class contains periodic or almost periodic time series with some linear trend. 
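The differencing pipeline for the stochastic-trendy class can be sketched as follows (illustrative only; the naive constant forecast stands in for the pretrained network, which would be applied to the differenced, stationary data):

```python
def difference(series):
    """First-order differencing: turns a unit-root process into
    a (near-)stationary one; output is one point shorter."""
    return [b - a for a, b in zip(series, series[1:])]

def undifference(diff_forecast, last_observed):
    """Backward transformation: cumulative sums anchored at the
    last observed value restore the original scale."""
    out, level = [], last_observed
    for d in diff_forecast:
        level += d
        out.append(level)
    return out

# 41 points are taken so that 40 remain after differencing.
grid = [float(i) for i in range(41)]     # pure linear drift
diffs = difference(grid)                 # 40 stationary points
naive_diff_forecast = [diffs[-1]] * 20   # placeholder for the NN forecast
forecast = undifference(naive_diff_forecast, grid[-1])
```

The same backward transformation, anchored at the last observed value, would be applied to the confidence bounds and to the other sparse grids.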
The process of data categorization was discussed before. We need to remove the detected trend, get the forecast as discussed in the previous subsection, and return the trend back. We take 19 sparse grids with 40 data points in each and directly apply the pretrained NN-model, according to the first approach of the previous subsection, to the detrended time series. Later, we recover the removed trend for the forecast and confidence bounds. We see that almost all observed data points are within the confidence bounds. In this section, we discuss an approach to anomaly signal time series generation based on the confidence bounds of the forecasts. Each data point in the anomaly signal shows the percentage (fraction) of observed data points in a test window that violate the upper and/or lower confidence bounds. The anomaly signal may detect or predict anomalous conditions whenever its values exceed a particular threshold. In such situations, an anomaly monitor generates alarms indicating behavioral changes in the specified time series. We consider details for the NN-based forecasting methods described in the previous sections, although the approach is applicable to any predictive model. One of the principal problems in time series anomaly/outlier detection is setting the proper trade-off between the timeliness and confidence of the detections. On the one hand, anomalies should be detected as fast as possible to allow preventive actions before they impact customers\u2019 environments. On the other hand, a large number of false positive alarms overwhelms system administrators and decreases confidence in the anomaly detection system. The trade-off may be resolved by the proper selection of the underlying parameters for the anomaly signal generation. An initial indication that the state of a monitored system has begun to change in an unexpected fashion is that an observed data point diverges from its forecast. 
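The anomaly signal described above reduces to a fraction of confidence-bound violations per test window; a minimal sketch (the 0.3 threshold is an assumed value, not the production setting):

```python
def anomaly_score(observed, lower, upper):
    """Fraction of observed points in the test window that violate
    the forecast confidence bounds."""
    violations = sum(1 for x, lo, hi in zip(observed, lower, upper)
                     if x < lo or x > hi)
    return violations / len(observed)

def is_alarm(score, threshold=0.3):
    """Raise an alarm when the anomaly score exceeds the threshold."""
    return score > threshold

obs   = [1.0, 2.0, 9.0, 2.5, 8.0]
lower = [0.0] * 5
upper = [5.0] * 5
score = anomaly_score(obs, lower, upper)  # 2 of 5 points violate -> 0.4
```

Sliding the test window along the series produces the anomaly signal time series that the monitor thresholds.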
However, no one expects that this single indication can be used as a detectable signal, due to significant noise in time series data and its nondeterministic nature, which makes very accurate predictions impossible. Another indication can be the violation of a confidence bound by an observed data point. Nevertheless, no one will pay attention to that signal if the subsequent observed time series data are within the bounds or even close to the predicted values: the violation may simply be an outlier due to noise or some sudden instability rather than an indication of a serious malfunctioning of the system. Many false-positive alarms are likely to appear if alarms and warnings are generated based on single-data-point or short-term departures of observed time series values from the forecast ones. However, by waiting until a pattern of detected preliminary behavioral change emerges, the problem may have already cascaded to a point where proactive actions are no longer possible due to catastrophic impacts on the system. The period of time between the initial indication of an anomaly and the onset of serious degradation depends on the nature of the time series and the process that it describes. The test window contains T uniformly sampled data points; to be more precise, the parameter T must be a multiple of the size of the output of the pretrained NN-model. Moreover, the previous sections indicated the strict connection between the sizes of a history window and the corresponding forecast window. The current pretrained network uses 40 historical points to predict 20 future points, which means that the history window is twice as long as the forecast window for the current model. For generality, assume that a history window is r times longer than a forecast window; if the available history is shorter than that, the forecast engine will fail to process the query. Let us also explain some peculiarities regarding time series visualization in Wavefront. 
For smoothing we use a kernel average, where h is the width (window) of the kernel and n is the number of points within the window h. We can set h equal to the test window mentioned above. It is possible to calculate two-sided averages if time allows us to wait for new data points to arrive. Similarly, instead of the anomaly scores, we can smooth the time series data points themselves. The biggest problem that the Wavefront customers encountered while consuming the described system for anomaly detection was the large number of false positive alarms. Our experience shows that customers accept the reduction of false positives even at the expense of a rising number of false negatives. The common approach to the reduction of false positives is through smoothing methods: a smoothed score is estimated from the raw anomaly scores, producing a new anomaly score estimate. In this section, we introduce the NN-model training process in the offline mode. The training was performed in VMware private data centers equipped with powerful GPUs. However, our experimental training database is not big: it includes around 3300 time series taken from real customer cloud environments. The database contains around 1500 \u201ccpu\u201d, 400 \u201cdisk\u201d, 110 \u201cIOps\u201d, 320 \u201cmemory\u201d, 450 \u201cnetwork bandwidth\u201d, 100 \u201cnetwork packets\u201d and 410 \u201cworkload\u201d metrics. Metrics in the database have a 1-min monitoring interval and, on average, 1-month duration. The database does not contain any specific information crucial for the model training, and similar results should be obtainable with other datasets of time series; moreover, it would be interesting to apply synthetic time series datasets. The current network has 40 inputs and 20 outputs. We experimented with different dimensions without any serious difference, although we noticed that a longer input compared to the output resulted in better forecast accuracy. Taking into account the grid structure, we can utilize longer histories via sparse grids. We tried different network architectures. 
The first was an LSTM network with stateless configuration and 2 hidden layers with 256 nodes in each. The next was an MLP network with identical configuration. We did not find significant differences between LSTM and MLP networks for our small dataset. The current model is the MLP network, which is very easy to implement without special libraries. We used the \u201crelu\u201d activation function for the hidden layers and \u201clinear\u201d activation for the output layer. The \u201cAdam\u201d optimizer and mean absolute error (\u201cmae\u201d) as a loss function were used. We applied 5 epochs for each time series and 20 epochs for the entire database. We tried different implementations of the online mode in Java as an enterprise cloud service; the first attempt was utilization of the Deep Learning for Java (DL4J) library. We performed some comparisons of the current model and classical ARIMA. We applied both models to a database of time series data from our internal cloud environments with different history windows sliding across the time axis. We experimented with 120 points (2 h), 1440 points (1 day) and 11,520 points (1 week). It is interesting to compare the average errors across all metrics from the same class; for example, for the class of stationary metrics, the NN-model shows an average error comparable to that of ARIMA. The current NN-model uses 4000 data points (uniformly sampled via interpolation) for 2 weeks of history, 8640 points for 2 months and 12,960 points for 6 months. Those selections are trade-offs between complexity and grid density. We think that those numbers can be reduced without affecting the accuracy, especially for some data categories. We considered the application of NN-based models to time series forecasting and anomaly detection in cloud applications; throughout the paper, we discussed approaches for overcoming some of the challenges. The first and main challenge is the restriction on resource consumption in distributed cloud environments. 
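The MLP configuration described above (40 inputs, two hidden layers of 256 "relu" units, 20 linear outputs) can be sketched as a NumPy forward pass; training details (Adam optimizer, MAE loss, epochs) are omitted, and the random weights below only illustrate the shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Weight shapes follow the 40 -> 256 -> 256 -> 20 architecture.
W1, b1 = rng.normal(0, 0.05, (40, 256)), np.zeros(256)
W2, b2 = rng.normal(0, 0.05, (256, 256)), np.zeros(256)
W3, b3 = rng.normal(0, 0.05, (256, 20)), np.zeros(20)

def forward(history_40):
    """Map 40 history points to 20 forecast points (linear output)."""
    h = relu(history_40 @ W1 + b1)
    h = relu(h @ W2 + b2)
    return h @ W3 + b3

forecast = forward(rng.normal(size=(1, 40)))
```

In production the trained weights would be restored from the stored configuration file rather than generated randomly.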
Neural networks require intensive GPU utilization and sufficient data volume, which make on-demand training and application of NN-based models unrealistic due to additional costs and unacceptable response times. We proposed a solution along the lines of transfer learning: we generate a global database of time series data collected across different cloud environments and customers, train a model in private GPU-accelerated data centers, and apply the acquired knowledge in the form of a pretrained model to user-specified time series data never seen before, without GPU utilization. The weights and configuration of the pretrained network are stored in the cloud, and monitoring tools can easily access the corresponding files and retrieve the required information for on-demand application to forecasting and anomaly detection. The second challenge is the weakness of NN-models in analyzing nonstationary time series data. It is a well-known problem, and many researchers suggest applying stabilizing procedures like detrending and deseasoning before feeding the network. The stabilizing transformations convert a nonstationary time series into a stationary one, and properly trained NN-models can adequately treat those metrics. We utilize this common idea and train models only for stationary time series, detecting the required stabilizing transformations via hypothesis testing. In the offline mode, we apply hypothesis testing to all time series data within the database to find the set of required transformations for all examples; those transformations convert all nonstationary time series data into stationary ones before they are sent to the model for training. In the online mode, we transform a user-specified time series into stationary data, calculate the corresponding forecast and, by applying the backward transformations, return to the original scale and behavior. Throughout the paper, we demonstrated the main capabilities of the approach. 
Moreover, the approach was implemented as a SaaS solution for Wavefront by VMware, and it passed full validation in real cloud environments. Our customers mainly utilize the service for anomaly detection. However, many questions need further investigation. One of the key problems is the improvement of the current approach via different networks and configurations. A second interesting problem is hypothesis testing via NN-based models; we have already received some results with one-dimensional convolutional neural networks for data classification, and it would be natural to combine both networks to automate data categorization and forecasting. Another interesting problem is designing new models for new classes of time series data, which should improve the accuracy. We also need to check whether bigger datasets will improve the accuracy of the current models. It is not fair to compare the proposed approach with methods that train network models on demand for a specific time series: undoubtedly, the latter will be more accurate or at least comparable to our approach. Our main goal is the balance between power and resource utilization: we aimed to develop methods for cloud environments with acceptable accuracy and without consumption of valuable resources. Pang, C. Anomaly detection on time series data. Filed: 22 August 2018. Application No.: 16/109324. Published: 30 January 2020. Publication No.: 2020/0034733A1. Pang, C. Visualization of anomalies in time series data. Filed: 22 August 2018. Application No.: 16/109364. Published: 30 January 2020. Publication No.: 2020/0035001A1. Poghosyan, A.V.; Pang, C.; Harutyunyan, A.N.; Grigoryan, N.M. Processes and systems for forecasting metric data and anomaly detection in a distributed computing system. Filed: 17 January 2019. Application No.: 16/250831. Published: 27 February 2020. 
Publication No.: 2020/0065213A1. Poghosyan, A.V.; Hovhannisyan, N.; Ghazaryan, S.; Oganesyan, G.; Pang, C.; Harutyunyan, A.N.; Grigoryan, N.M. Neural-network-based methods and systems that generate forecasts from time-series data. Filed: 14 January 2020. Application No.: 16/742594. Poghosyan, A.V.; Harutyunyan, A.N.; Grigoryan, N.M.; Pang, C.; Oganesyan, G.; Ghazaryan, S.; Hovhannisyan, N. Neural-network-based methods and systems that generate anomaly signals from forecasts in time-series data. Filed: 19 December 2020. Application No.: 17/128089. This application is a continuation-in-part of US Application No. 16/742594, filed 14 January 2020. Poghosyan, A.V.; Harutyunyan, A.N.; Grigoryan, N.M.; Pang, C.; Oganesyan, G.; Ghazaryan, S.; Hovhannisyan, N. Neural-network-based methods and systems that generate anomaly signals from forecasts in time-series data. Filed: 18 January 2021. Application No.: 17/151610. This application is a continuation-in-part of US Application No. 16/742594, filed 14 January 2020."}
+{"text": "The prefrontal cortex (PFC) constitutes a large part of the human central nervous system and is essential for the normal social affection and executive function of humans and other primates. Despite ongoing research in this region, the development of interactions between PFC genes over the lifespan is still unknown. To investigate the conversion of PFC gene interaction networks and further identify hub genes, we obtained time-series gene expression data of human PFC tissues from the Gene Expression Omnibus (GEO) database. A statistical model, loggle, was used to construct time-varying networks and several common network attributes were used to explore the development of PFC gene networks with age. Network similarity analysis showed that the development of human PFC is divided into three stages, namely, fast development period, deceleration to stationary period, and recession period. We identified some genes related to PFC development at these different stages, including genes involved in neuronal differentiation or synapse formation, genes involved in nerve impulse transmission, and genes involved in the development of myelin around neurons. Some of these genes are consistent with findings in previous reports. At the same time, we explored the development of several known KEGG pathways in PFC and corresponding hub genes. This study clarified the development trajectory of the interaction between PFC genes, and proposed a set of candidate genes related to PFC development, which helps further study of human brain development at the genomic level supplemental to regular anatomical analyses. The analytical process used in this study, involving the loggle model, similarity analysis, and central analysis, provides a comprehensive strategy to gain novel insights into the evolution and development of brain networks in other organisms. 
The prefrontal cortex (PFC), covering the front part of the frontal lobe, receives input from multiple regions of the brain for information processing and is essential for normal social affection and executive function. Since it is obviously impossible to perform biopsies from the same area of an individual\u2019s brain multiple times during growth to generate time-series genetic data, in the past few decades researchers have evaluated the developmental trajectory of the PFC from the perspectives of neuropsychology, neuroimaging, and cell physiology. As a statistical tool, network analysis can help us fully understand complex internal systems, rather than just individual genes functioning alone. In this work, we apply the loggle model to a time-series gene expression data set to construct PFC time-varying gene interaction networks, since the model fits the biological realm of PFC development. We quantify the development trend of the PFC gene network through global network attribute indicators such as the network diameter. We further apply network similarity analysis to describe the development stages of PFC, and identify hub genes at different stages using centrality analysis. We also apply the loggle model to evaluate the development of several KEGG pathways in PFC. The identification of the changes of gene networks in human PFC can provide novel insights into human brain development and function, and the hub genes identified in different development stages provide specific candidate targets for further biological validation. The time-series gene expression data on the human PFC were downloaded from the GEO database with Gene Expression Omnibus accession number GSE30272. This data set records 269 RNA samples spanning stages from fetal development to old age, after removing subjects with severe neurological or psychiatric conditions. These samples were obtained from post-mortem human brain PFC gray matter tissue homogenates and subjected to a series of processes such as RNA extraction and quality control. 
The log2 intensity ratio was normalized after background correction, and the log2 ratio was further adjusted to reduce the impact of systematic noise after performing surrogate variable analysis. Readers are referred to the original paper for details of data generation and preprocessing. Considering data noise and the complexity of the algorithms that would be used to construct a time-varying graph, we first performed feature screening to filter out potential noise and reduce the data dimensionality. In this work, we considered the following two options for feature screening to obtain genes that carry important information. (i) Calculate the variance for each gene and select the top 300 genes to construct the time-varying network graphs. The purpose of this is to explore the development of networks constructed with genes showing high variation in PFC throughout the lifespan, or changes in the interactions between the dominant genes at different developmental stages of PFC. (ii) Select genes based on known KEGG pathways. The purpose of this is to explore the development trends of several known pathways in the PFC throughout the lifespan. In this study, we chose five pathways related to the development of PFC function by searching the literature. We built the time-varying network by dividing the samples into nine age periods based on the age information provided by the original data. This study aims to characterize the developmental pattern of inter-gene interactions over time in the human PFC region and identify hub genes involved in development. Accordingly, we first used the loggle model to build and understand PFC time-varying network graphs. In particular, the loggle model uses the local group-lasso penalty to minimize a locally weighted negative log-likelihood function, reasonably combining the information of adjacent time points to ensure the progressive change of the graph structure. 
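Feature screening option (i) above, keeping the highest-variance genes, can be sketched with NumPy (a toy gene-by-sample matrix; in the study the top 300 genes of the full expression matrix are kept):

```python
import numpy as np

def top_variance_genes(expr, n_top=300):
    """expr: genes x samples matrix; returns row indices of the
    n_top genes with the largest variance across samples."""
    variances = expr.var(axis=1)
    # argsort is ascending, so reverse before truncating.
    return np.argsort(variances)[::-1][:n_top]

# Toy example: 5 genes, 4 samples; gene 2 varies the most.
expr = np.array([[1, 1, 1, 1],
                 [2, 2, 2, 3],
                 [0, 5, 0, 5],
                 [1, 2, 1, 2],
                 [3, 3, 3, 3]], dtype=float)
selected = top_variance_genes(expr, n_top=2)
```

The selected rows then form the input to the time-varying graph construction.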
Then, a blockwise fast algorithm and pseudo-likelihood approximation were used to solve the \u201ccomputational disaster\u201d problem. The PFC time-varying graphs were constructed using the loggle package in R. To make the work self-contained, we here briefly describe how to construct the time-varying network graph via the loggle model; more technical details can be found in the original paper. Suppose X(t) = (X1(t), X2(t), \u2026, Xp(t))^T is a p-dimensional time series random vector at time t \u2208 [0, 1], which obeys a multivariate Gaussian distribution Np(\u03bc(t), \u03a3(t)). We use {x^k} to indicate the observation at time tk (0 = t1 \u2264 \u2026 \u2264 tk \u2264 \u2026 \u2264 tN = 1), where N represents the sample size. For simplicity, we center the observations x^k by subtracting the estimated mean, so that each x^k is drawn independently from Np(0, \u03a3(tk)). The loggle model uses the precision matrix \u03a9(t) = \u03a3^{\u22121}(t) to construct the graph edge set: an edge between variables u and v at time t corresponds to a nonzero element \u03a9uv(t). The loggle model assumes the smoothness of the graphical topology and obtains the estimated precision matrix at the k-th time point by combining the locally weighted negative log-likelihood function with the local group lasso penalty, where {\u03a9(ti)}, i \u2208 Nk,d, is the set of precision matrices in a neighborhood of tk of width d, \u03a9uv(ti) represents the (u, v)-th element of \u03a9(ti), and Kh(\u22c5) = K(\u22c5/h) is a symmetric non-negative kernel function with bandwidth h. The model uses the alternating directions method of multipliers (ADMM) algorithm to solve the resulting optimization problem. Specifically, the p variables are completely separated into multiple non-overlapping blocks by a necessary and sufficient condition after suitable permutation; then, the ADMM algorithm is applied to each block to speed up the computation and reduce the calculation time from O(p^3) to a much lower blockwise cost. When learning the loggle model, there are three tuning parameters involved: the kernel bandwidth h; the neighborhood width d, which controls the smoothness of the graph over time; and the sparsity parameter \u03bb, which controls the degree of graph sparsity. The tuning parameters are learned by cross-validation (CV) at each age period. 
For this purpose, the data are divided into training and validation sets. Each time, we set one of three folds as the validation set and the rest as the training set. The training set is used to estimate the precision matrix of the graph model, and the validation set is used to calculate the cross-validation score: for a given (h, \u03bb, d), we estimate \u03a9 on the training set and compute the CV score on the validation set. The K-fold CV score at time tk is the average of the fold scores, and the best (hk, \u03bbk, dk) are selected accordingly; the \u201cmajority vote\u201d procedure cv.vote is then applied to stabilize the selected edges across folds. The loggle model assumes that data measured at different time points are independent and that observations lie on a temporal grid, which makes this CV setting valid. To quantify network topology, we considered several global network properties. The network diameter is the maximum over all node pairs of the shortest-path distance \u03b4min(i, j) between nodes i and j. A \u201csmall\u201d network diameter indicates that nodes in the network are closely connected and the graph is compact; in particular, comparing network diameters at different time points can track network development over time. The exclusive edge metric counts edges that belong to one network but do not appear in the other networks being compared. We also used the CompNet neighbor similarity index (CNSI) to measure the similarity between two compared networks. CNSI measures the similarity of each pair of nodes by comparing the degree of overlap between the first neighbors of the nodes in the two networks; a cumulative overall similarity score over all nodes then specifies the similarity between the two compared networks. The degree of node i, deg(i), is the number of its first neighbors.
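The neighbor-overlap idea behind CNSI can be illustrated with a short sketch. The exact CompNet formula is not reproduced in the text, so the Jaccard-style overlap below is our own assumption for illustration only.

```python
def neighbor_similarity(net_a, net_b):
    """Neighbor-overlap similarity between two networks given as
    adjacency dicts {node: set(first_neighbors)}.

    For each node present in both networks, compute the Jaccard overlap
    of its first-neighbor sets, then average over the shared nodes.
    (Illustrative stand-in for the CNSI idea; not CompNet's exact formula.)
    """
    shared = set(net_a) & set(net_b)
    if not shared:
        return 0.0
    total = 0.0
    for node in shared:
        na, nb = net_a[node], net_b[node]
        union = na | nb
        total += len(na & nb) / len(union) if union else 1.0
    return total / len(shared)

# two small gene networks sharing most, but not all, edges
g1 = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
g2 = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(round(neighbor_similarity(g1, g2), 3))  # (1.0 + 0.5 + 0.5) / 3 = 0.667
```

Comparing such scores across consecutive age periods is what allows the networks of the nine periods to be grouped into developmental stages.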
A high-degree node is called a \u201chub,\u201d and removing such a node affects the network topology and can further lead to disturbances in biological systems. We first created a gene expression heat map to visualize the age distribution of gene expression. Hereafter, we focus on the time-varying graph fitted by the loggle model to further analyze the development of the human PFC gene expression network over time; the fitted time-varying graph of the gene interaction network is shown in the corresponding figure. To illustrate the changes in the network topology in more detail, we calculated several global network properties. We further analyzed the similarity between the networks of the nine age periods using the CNSI indicator. Based on these results and previous studies, we divided PFC development into stages, with hub genes STK32B, CX3CL1, and BACH2 in the fetus stage; STK32B, PCSK1, and NPPA in the infant stage; and IPCEF1, STK32B, and RGS4 in the child stage. The network in the deceleration-to-stationary period of PFC development has the most nodes and edges, with the fastest change from child to the 30s. Among the five pathways, the Axon guidance pathway is most sensitive to age changes, followed by three pathways: the Dopaminergic synapse pathway (hsa04728), the Platelet activation pathway (hsa04611), and the FoxO signaling pathway (hsa04068). These three pathways not only have similar numbers of nodes and edges in the PFC, but their rates of change in the declining period are similar, and their numbers of edges in the stationary period are also similar. In contrast, the Longevity regulating pathway (hsa04211) has the fewest nodes and edges, the slowest rate of change in the declining period, and the lowest number of edges in the final stationary stage.
The results provide evidence that these pathways are highly active during the fast development stage and become inactive as PFC development slows. We further explored the hub genes of these pathways in the fast development period: the hub genes for the Dopaminergic synapse pathway (hsa04728) are PRKCB and GNG7; for the Longevity regulating pathway (hsa04211), IGF1 and PRKAB2; for the Axon guidance pathway (hsa04360), LRRC4C and PARD6G; for the Platelet activation pathway (hsa04611), TLN2 and RASGRP2; and for the FoxO signaling pathway (hsa04068), CCNB1 and PRKAB2, with one additional gene, SMAD3, appearing in the child stage. We also performed a centrality analysis of the other two developmental stages, the declining stage and the stable stage; owing to limits of space, those results are placed in the supplementary material. In this work, to decipher the dynamic temporal developmental trajectory of the PFC region of the human brain, we conducted a comprehensive network analysis using transcriptomics data. The loggle model was used to reconstruct the developmental trajectory of the time-varying network graph of gene interactions, and global network attribute indices were used to quantify the network changes. At the same time, the development of the PFC was divided into three stages by similarity analysis, and hub genes at different developmental stages were identified. In addition, several known KEGG pathways related to brain function were chosen for analysis to further illustrate the developmental trend of the PFC. Owing to its functional properties, development of the human brain usually continues for a long time, starting in the fetus and continuing through adolescence, and the PFC is one of the last brain regions to mature. Although the macro-level PFC development model has been widely accepted, owing to technological limitations the trend of development at the gene-interaction level has not been clearly described.
To fill this gap, we used time-series gene expression data to construct the developmental pattern of the PFC at the molecular level by estimating time-varying network graphs. We found that, from the fetus to the child stage, the PFC experiences fast development and most genes are active during this period. This reflects the increase in the number of neurons across the entire cortex as the brain grows, together with changes in microstructures such as synapses throughout childhood. The hub genes involved in this early stage of PFC development include STK32B, CX3CL1, BACH2, PCSK1, NPPA, IPCEF1, and RGS4. Several of these hub genes play specific roles in neuronal differentiation or synapse formation. For example, the gene STK32B is upregulated during the fetus period and then enters a suppressed state; it encodes a protein involved in synaptic plasticity, learning, memory, and neurodegeneration, and is a key factor in the transmission of information between cells. The hub genes involved in the late stage of the PFC are involved in a variety of tissue differentiation, developmental, and nerve-conduction functions, for example, regulation of cell development and differentiation (CSRP2), ion channel function (GLRA2), extracellular matrix formation (CCBE1), regulation of neuronal migration (SRGAP1, RSPO3), axon development (C1orf187), myelination (MOG), neuregulin signaling (ERBB3), participation in immune processes (EVI2A), node formation on nerve fibers (GLDN), ion transport (SLC31A2), and synaptic function (DBNDD2). Most of the genes that are pivotal at different stages of PFC development, being related to brain function development and related diseases, have been evaluated in previous studies; but for some of them there is no clear connection to human brain development. This requires further experimental analysis, although that is beyond the scope of the present study.
In summary, our analysis shows that, during the fast development period of the PFC, the hub genes mainly regulate the proliferation and differentiation of neurons and the development of synapses. In the stable period of PFC development, the hub genes mainly maintain the stability of the PFC in the human brain by sustaining nerve impulses and electrophysiological balance. Finally, during the declining stage of the PFC, the hub genes mainly function to counteract the degradation of nerve fibers. This also illustrates the microstructural changes involved in the development of the PFC throughout the lifespan. The Axon guidance pathway is the most sensitive to aging throughout the lifespan. It is well known that axons are an important component of neurons and play an important role in the development of the human cerebral cortex. During the critical period of human cerebral cortex development, the Axon guidance pathway is highly active as the number of neurons increases until the PFC develops completely; with aging, the synapses in the frontal lobe begin to be pruned and decline. The Dopaminergic synapse pathway plays a vital role in the normal cognitive processes of the PFC, and the FoxO transcription factor provides protection for nerve cells during oxidative stress. Overall, our analysis reveals the characteristics of normal human brain developmental patterns and expands our knowledge of spatio-temporal events in human brain development. However, the current study does have some limitations. Due to the computational limitations of the algorithm, we could only select a limited number of genes for analysis; the algorithm needs to be further improved to include more gene data. Owing to the specificity of the tissue site, current gene-level analysis of the human cerebral cortex is almost entirely dependent on the examination of post-mortem tissue, and the data are not from a cohort study.
In addition, we hope to integrate other omics data, such as miRNA, DNA methylation, and proteomics data, into the network analysis to obtain a more comprehensive picture of PFC development. We plan to pursue these issues in future work. Publicly available datasets were analyzed in this study. These data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE30272. HW performed the analysis and drafted the manuscript. YW, RF, JS, and ZL participated in data analysis and interpretation. HC and YC conceptualized the idea and revised the manuscript. All authors read and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The biological processes involved in a drug\u2019s mechanisms of action are oftentimes dynamic, complex and difficult to discern. Time-course gene expression data is a rich source of information that can be used to unravel these complex processes, identify biomarkers of drug sensitivity and predict the response to a drug. However, the majority of previous work has not fully utilized this temporal dimension. In these studies, the gene expression data is either considered at one time-point (before the administration of the drug) or two time-points (before and after the administration of the drug). This is clearly inadequate in modeling dynamic gene\u2013drug interactions, especially for applications such as long-term drug therapy. In this work, we present a novel REcursive Prediction (REP) framework for drug response prediction by taking advantage of time-course gene expression data. Our goal is to predict drug response values at every stage of a long-term treatment, given the expression levels of genes collected in the previous time-points. To this end, REP employs a built-in recursive structure that exploits the intrinsic time-course nature of the data and integrates past values of drug responses for subsequent predictions. It also incorporates tensor completion that can not only alleviate the impact of noise and missing data, but also predict unseen gene expression levels (GEXs). These advantages enable REP to estimate drug response at any stage of a given treatment from some GEXs measured in the beginning of the treatment. Extensive experiments on two datasets corresponding to multiple sclerosis patients treated with interferon are included to showcase the effectiveness of REP. Considerable efforts have been made to identify molecular biomarkers of drug sensitivity and to develop computational models to predict drug response based on these biomarkers. 
Gene expression data is one of the most commonly used molecular data types in these studies, due to its high predictive ability, and numerous methods have been proposed for drug response prediction based on gene expression data9. However, many existing methods only use basal gene expression data and hence can only capture the influence of the steady state of the cells on their response to a drug. For example, Costello et al.2 analyzed 44 drug response prediction methods that employed gene expression profiles of breast cancer cell lines taken before treatment to predict dose-response values, e.g., GI50, the concentration for 50% of maximal inhibition of cell proliferation, from a single time-point. In practice, however, for many diseases (e.g., cancers) the response to a drug changes over time due to various reasons, such as the development of drug resistance or changes in the progress of the disease. To capture such changes at the molecular level, a collection of temporal gene expression profiles of samples over a series of time-points during the course of a biological process can provide more insight than a single (or two) time-point(s)10. Therefore, developing algorithms that can predict drug response over time using time-course gene expression data is of great interest. Prediction of drug response based on patients' clinical and molecular features is a major challenge in personalized medicine. A computational model that can make accurate predictions can be used to identify the best course of treatment for patients, or to identify therapeutic targets that can overcome drug resistance12. Time-course gene expression data has been used not only to identify longitudinal phenotypic markers, but also to assess the association between time-course molecular data and cytokine production in an HIV trial13, and to predict drug response during a treatment14. In15, the authors proposed an integrated Bayesian inference system to select genes for drug response classification from time-course gene expression data.
However, the method only uses the data from the first time-point, and hence does not benefit from the additional temporal information. Lin et al.14 presented a Hidden Markov model (HMM)-based classifier, in which the HMM had fewer states than time points, to align different patient response rates. This discriminative HMM classifier enabled distinguishing between good and bad responders. Nevertheless, choosing the number of states for this HMM is a major practical issue. In addition, this method cannot handle missing data, and it requires full knowledge of the GEXs at all time-points a priori. This implies that the HMM may not be able to predict drug response at multiple stages in future time-points, since the corresponding GEXs are not measurable. With the advancement of gene sequencing technologies, collecting gene expression levels (GEXs) over multiple time-points along with their matched drug response values is now feasible. In parallel with these technological developments, there has been growing interest in applying machine learning methods to analyze time-course gene expression data. For example, one study16 employed tensor decomposition to identify drug target genes using time-course gene expression profiles of human cell lines. Li and Ngom17 proposed a higher-order non-negative matrix factorization (HONMF) tool for classifying good or poor responders from a latent subspace, corresponding to patients, learned by HONMF. One limitation of this work is that the latent subspace may not have discriminative ability in classifying patients, since it is learned without accounting for the class-label information.
Moreover, this method simply discards samples with missing values, causing unnecessary information loss. The time-course gene expression data contains the GEXs of different patients over a series of time-points, and can be indexed as patient-gene-time and represented as a three-dimensional tensor. Motivated by this, several tensor-decomposition-based algorithms have been proposed, e.g., by Taguchi8. Recently, Fukushima et al. developed an algorithm for joint gene selection and drug response prediction for time-course data. The method uses Elastic-Net (EN) to select a set of genes that discriminate patients' drug responses throughout the treatment. The selected genes are then passed to a logistic regression (LR) classifier for drug response prediction. But in real applications, due to the existence of noise and missing values in the data, finding genes that are discriminative for all patients may be difficult. In fact, several studies have shown that it is more viable to find genes that have consistent discrimination in a subset of samples along the time series20. Therefore, relying only on discriminative gene selection without modifying the classification algorithms may not achieve satisfactory performance. In this paper, we take a different approach to time-course drug response prediction. We hypothesize that a patient's drug response at a given time-point can be inferred from the response at a previous time-point. This means that not only the GEXs but also the past response results can be integrated to identify the drug response at a subsequent time-point. We develop a REcursive Prediction (REP) algorithm to predict the drug response of samples using their time-course gene expression data and their drug responses at previous time-points. REP has a built-in recursive structure that exploits the intrinsic time-course nature of the data by integrating past drug responses for subsequent prediction.
In other words, in REP not only the GEXs but also the past drug responses are treated as features for drug response prediction. Furthermore, by taking into consideration the intrinsic tensor structure of the time-course gene expression data and leveraging the identifiability of low-rank tensors, REP can alleviate noise corruption in GEX measurements, complete missing GEXs, and even predict GEXs for subsequent time-points. These features enable REP to evaluate drug response at any stage of a given treatment from GEXs measured at the beginning of the treatment. Experiments on real data are included to demonstrate the effectiveness of the REP algorithm. Common approaches handle missing values via mean imputation21 or nearest neighbor imputation23. Instead, we employ a low-rank tensor model to fit the time-course gene expression dataset so that the missing values can be completed. Our supporting hypothesis is that genes never function in an isolated way; oftentimes groups of genes interact together to maintain complex biological processes, which results in correlation in the GEX data24. Our low-rank tensor model posits three factors that jointly determine the values of the GEXs, corresponding to patient, gene, and time, respectively, such that each GEX is represented as a summation of F triple products of the latent factors. Let xijk denote the jth GEX of patient i recorded at time k, with J genes measured over K time-points. Based on our assumption, we have xijk = \u2211f aif bjf ckf, where aif is the (i, f)-entry of the patient factor matrix, bjf is the (j, f)-entry of the gene factor matrix, and ckf is the (k, f)-entry of the time factor matrix. Towards this goal, we use non-negative tensor factorization to compute the missing GEX values. The effects of drugs are usually cumulative over time27, i.e., drug doses taken in the past will affect the current response. This implies that the drug responses at past time-points may help predict the current response.
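The low-rank model above, where each entry is a sum of F triple products of patient, gene, and time factors, can be sketched as follows. Fitting the non-negative factors (e.g., by alternating least squares) is omitted here, and the toy factors are random stand-ins rather than the paper's estimates.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild the full tensor X with x[i, j, k] = sum_f A[i, f] * B[j, f] * C[k, f].

    A: (patients x F), B: (genes x F), C: (time points x F) latent factor matrices.
    """
    return np.einsum("if,jf,kf->ijk", A, B, C)

# toy rank-2 example with non-negative factors, mimicking the GEX model
rng = np.random.default_rng(1)
A = rng.random((4, 2))   # 4 patients
B = rng.random((5, 2))   # 5 genes
C = rng.random((3, 2))   # 3 time points
X = cp_reconstruct(A, B, C)

# once the factors are known, a "missing" entry can be read off from them alone:
i, j, k = 2, 3, 1
estimate = float(A[i] @ (B[j] * C[k]))
print(np.isclose(estimate, X[i, j, k]))  # True
```

This is also why unseen time-points can be handled: extrapolating the time factor row for a future k yields estimates of all GEXs at that time.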
Based on this hypothesis, we propose a recursive prediction algorithm, henceforth referred to as REP for simplicity, which integrates past drug response records with gene expression values for subsequent drug response predictions. We accumulate the historical responses by concatenating them into a new vector, so that at time t the output of the predictor depends not only on the GEX at that time-point but also on the previously observed drug responses. We always insert the drug response from the most recent time-point into the first element of the feedback vector. For the ith patient at time t, we concatenate the historical responses of patient i up to time t with the GEX vector at time t, and pass the result to the predictor; stacking the K time-points together yields a feedback matrix for patient i. Our main idea is to feed back the historical drug responses and combine them with GEX values to predict the drug response in the future. This is the major difference between our method and state-of-the-art algorithms: prior art ignored the previous drug response outputs. Therefore, the training set for our method is created in a slightly different way. Recall that at each time-point we stack the historical drug responses into a vector; for any patient in the training set, we further concatenate all such prior response vectors. The resulting linear predictor then has an intercept b and a weight vector over both the GEX features and the feedback features. It is important to note that our approach is really a framework that is applicable regardless of the choice of the final classification or regression algorithm. Nevertheless, for the purposes of exemplifying and illustrating the merits of the proposed framework, we are particularly interested in support vector machines (SVMs), which have shown promising performance in this type of task. In our formulation, the drug response feedback plays an important role and can be viewed as a \u201cmust-have\u201d feature.
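The feedback construction described above can be sketched as follows. This is our own minimal illustration of the idea (the variable names are hypothetical, not the paper's code); the actual REP predictor additionally rescales these features, as discussed next.

```python
import numpy as np

def feedback_features(gex_t, past_responses):
    """Build a REP-style feature vector at time t: most recent response first,
    then older responses, concatenated with the current GEX vector.

    gex_t: 1-D array of gene expression values at time t.
    past_responses: list [y(0), y(1), ..., y(t-1)] of earlier drug responses.
    """
    fb = np.array(list(reversed(past_responses)), dtype=float)  # newest first
    return np.concatenate([fb, gex_t])

gex = np.array([2.5, 0.3, 1.1])
x = feedback_features(gex, past_responses=[1.0, -1.0])  # y(0)=1, y(1)=-1
print(x.tolist())  # [-1.0, 1.0, 2.5, 0.3, 1.1]
```

At prediction time the same construction is applied recursively: each predicted response is appended to `past_responses` before forming the features for the next time-point.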
In SVMs, we penalize the two-norm of the linear weights equally, the implicit assumption being that the features have similar scales. In our context, however, the GEX values are much larger than the drug response labels, which are binary, so the feedback features are scaled accordingly. Our method can predict the drug response values for a new patient at any time point: given the GEXs of the new patient and the initial response y(0), the feedback vector is constructed recursively, and the response at each subsequent time point is predicted in turn. Previously, we explained how to predict drug response for patients at a certain time point. In practice, however, we are more interested in knowing the drug response a few time points into the future from the beginning of a treatment. This would require knowing the GEXs of all time points up to the one of interest a priori, which is impossible in practice. In this subsection, we provide an efficient solution that allows prediction of the unseen GEXs. Recall that in our model the GEX of a patient is determined by three latent factors, corresponding to patient, gene, and time; the estimated factors can therefore be used to extrapolate the unseen GEXs. Note that here we substitute predicted drug responses for the unseen drug responses. Clearly, when actual drug responses for past time ticks are available, they should be used; we only perform the substitution for a preliminary assessment of how well a patient is likely to respond over time before the beginning of treatment, which is naturally a more ambitious goal. In this section, we provide numerical experiments to showcase the effectiveness of REP for drug response prediction from time-course gene expression data. We examine two tasks: classification of binary drug response and regression of continuous drug response. The first dataset was collected from 53 Multiple Sclerosis (MS) patients who received IFN-\u03b2 treatment15. Note that EDSS values in the dataset are between 0 and 7, where higher EDSS values reflect more severe disability28.
Except for the EDSS at the initial time point, the others were measured after the IFN-\u03b2 treatment began. The second dataset is from a Gene Expression Omnibus (GEO) record, GSE24427, also corresponding to MS29. In this dataset, there are 16 female patients and 9 male patients who received IFN-\u03b2. We used mygene.info (https://mygene.info/) to map the probes to gene names, which yielded 19,565 gene names. Unlike the first dataset, the second dataset does not include a binary drug response. Therefore, we focus on the prediction of binary drug response for the first dataset (whether or not a patient will have a good or poor response), while the prediction of EDSS for both datasets is treated as a regression task, because EDSS is an ordinal variable. We consider these two datasets to evaluate the performance of our method against two linear models (EN-LR8 and SVM), one nonlinear model (K-nearest neighbors (KNN)30), and a probabilistic graphical model (the discriminative loop hidden Markov model (dl-HMM)14) on real-world time-course data. We did not include SVM with nonlinear kernels (e.g., Gaussian), since its performance was inferior to the linear kernel. Note that EN-LR and dl-HMM were specifically designed for prediction of drug response values from time-course gene expression data, while SVM and KNN are widely used general-purpose methods. We examine the predictive ability on the prediction of binary drug response and ordinal EDSS response. For the regression task, we apply Elastic Net, support vector regression (SVR) with a radial basis function (RBF) kernel, Random Forest, and KNN on the two datasets. All methods are implemented via the Python sklearn package. We use default settings for the Elastic Net and SVR algorithms. For Random Forest, we set the number of trees in the forest to 20. For KNN, we set the number of neighbors to 10.
For each dataset, we first create two versions of the training and testing sets, one with the drug response feedback described above and one without it. For classification, we use prediction accuracy (ACC) and area under the receiver operating characteristic (ROC) curve (AUC) to evaluate the performance of REP, where ACC is the fraction of correctly predicted responses, averaged over the M samples in the testing set and the N Monte-Carlo tests. For regression, we use mean squared error (MSE) and mean absolute error (MAE), i.e., the averages of the squared and absolute errors between the predicted and true responses, respectively. We report performance for all methods using the same training, validation, and testing sets. Specifically, we employ leave-one-out (LOO) cross-validation for testing, where at each fold we split the 27 patients into a training set with 26 patients and a testing set with one patient. We then hold out the testing set and randomly split the training set into two parts, where the first part has 25 patients and the second part has one patient, i.e., the validation set. We train models on the first part and tune hyper-parameters on the second. For each algorithm, we select the hyper-parameters that yield the highest prediction accuracy on the validation set, and finally apply the selected model to the testing set. For a fair comparison, in all experiments we apply the same missing-value imputation method to all algorithms. For dl-HMM (http://www.cs.cmu.edu/~thlin/tram/), we used the authors' MATLAB implementation for our comparison; its hyper-parameter is the number of hidden states. For REP-SVM, the hyper-parameters include the tensor rank F and the regularization parameters; we vary F from 2 to 5. We first study how the hyper-parameters affect the performance, and then compare the five algorithms in terms of prediction accuracy and AUC.
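The patient-level LOO protocol with an inner validation split can be sketched with sklearn on synthetic data. The classifier, parameter grid, and data below are placeholders for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

# toy stand-in for the protocol: 27 "patients", 10 features each;
# the first feature is made informative so the task is learnable
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 10))
y = np.where(np.arange(27) % 2 == 0, 1, -1)
X[:, 0] += y

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # inner split: the last training patient plays the role of the validation set
    inner_train, val = train_idx[:-1], train_idx[-1:]
    best_c, best_hit = None, -1
    for c in (0.1, 1.0, 10.0):  # hypothetical hyper-parameter grid
        clf = SVC(kernel="linear", C=c).fit(X[inner_train], y[inner_train])
        hit = int(clf.predict(X[val])[0] == y[val][0])
        if hit > best_hit:
            best_c, best_hit = c, hit
    # refit on all 26 training patients with the selected C, test on the held-out one
    clf = SVC(kernel="linear", C=best_c).fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

acc = correct / len(X)
```

In the real experiments, the inner validation patient is chosen randomly and the grid covers the REP-SVM hyper-parameters rather than a single SVM cost.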
In the raw data, some of the GEX measurements are missing, and we sought to determine the effect of missing values on the performance of these methods. For this purpose, we randomly sampled the GEX data and hid the selected entries. As the percentage of missing values increases, all methods suffer performance loss, but REP-SVM's ACC and AUC remain the highest in all cases. In summary, we have proposed a method that: (1) has a recursive structure that integrates past drug response records for subsequent predictions; (2) offers higher prediction accuracy than several classical algorithms such as SVM and LR; (3) exploits the tensor structure of the data for missing GEX completion and unseen GEX prediction; and (4) can predict drug response at different stages of a treatment from some initial GEX measurements. The achieved performance improvement in the real-data application suggests that our method serves as a better predictor of drug response using time-course data. Supplementary Information."}
+{"text": "Methods for the analysis of time-series single-cell expression data (scRNA-Seq) either do not utilize information about transcription factors (TFs) and their targets, or only study these as a post-processing step. Using such information can both improve the accuracy of the reconstructed model and cell assignments, and provide information on how and when the process is regulated. We developed the Continuous-State Hidden Markov Models TF (CSHMM-TF) method, which integrates probabilistic modeling of scRNA-Seq data with the ability to assign TFs to specific activation points in the model. TFs are assumed to influence the emission probabilities for cells assigned to later time points, allowing us to identify not just the TFs controlling each path but also their order of activation. We tested CSHMM-TF on several mouse and human datasets. As we show, the method was able to identify known and novel TFs for all processes, the assigned times of activation agree with both expression information and prior knowledge, and the combinatorial predictions are supported by known interactions. We also show that CSHMM-TF improves upon prior methods that do not utilize TF-gene interactions. An important attribute of time-series single-cell RNA-Seq (scRNA-Seq) data is the ability to infer continuous trajectories of genes based on orderings of the cells. While several methods have been developed for ordering cells and inferring such trajectories, to date it was not possible to use these to infer the temporal activity of several key TFs. These TFs are only post-transcriptionally regulated, and so their expression does not provide complete information on their activity. To address this, we developed the Continuous-State Hidden Markov Models TF (CSHMM-TF) method, which assigns continuous activation times to TFs based on both their expression and the expression of their targets.
Applying our method to several time-series scRNA-Seq datasets, we show that it correctly identifies the key regulators of the processes being studied. We analyze the temporal assignments of these TFs and show that they provide new insights into combinatorial regulation and the ordering of TF activation. We used several complementary sources to validate some of these predictions, and discuss a number of other novel suggestions based on the method. As we show, the method is able to scale to large and noisy datasets and so is appropriate for several studies utilizing time-series scRNA-Seq data. Single-cell RNA-Seq (scRNA-Seq) data has been used over the last few years to study several developmental and temporal processes. In scRNA-Seq, cells that are profiled cannot be further traced. Thus, to infer the trajectories underlying these processes, researchers often rely on computational methods that link expression profiles from different cells. Several methods have been developed for such analysis, including methods based on dimensionality reduction followed by the construction of trees or graphs in the resulting reduced-dimension space. In contrast, our approach also uses the expression of the targets of candidate TFs, leading to selection of cells that are more likely regulated by them. This process iterates between model revisions, cell assignments, and TF assignments until convergence. When the model converges, we obtain a specific location for each cell on one of the paths and a time of activation for the identified TFs. To infer dynamic continuous models for both cell ordering and TF activation, we developed the Continuous-State Hidden Markov Models Transcription Factor (CSHMM-TF) method; an overview of the model is presented in the corresponding figure. We applied CSHMM-TF to several time-series scRNA-Seq datasets. The first is a human liver dataset with 765 cells and 19K genes, collected at 4 developmental stages.
The second is a time-series mouse lung development dataset. The reconstructed trajectories for the liver dataset correctly capture its known developmental branching. For the lung dataset, several of the TFs assigned by the model are known to play important roles in lung development; these include SOX9. For both the lung and liver datasets, CSHMM-TF also identified several TFs related to cell proliferation, as expected for developing tissues and organs; examples are shown in the figures. While we observe the expression values for all genes and TFs, when learning the CSHMM-TF model we do not use the expression of the TFs themselves. Instead, following past work, we determine TF activity from the expression of their targets. Several of the TF expression profiles agree with both their time assignment and their relationship to other TFs assigned to the same paths. For example, the transcriptional repressor protein YY1 is known to directly interact with members of the ATF/CREB family of transcription factors. In addition to the support provided by the analysis of expression profiles, we looked at known interactions between TFs to determine whether TFs assigned by CSHMM-TF to the same path (either at the same or different times) are indeed known to interact. For this, we determined the number of protein-protein interactions (PPIs) or regulatory interactions in each path and compared these to random TF sets of the same size. We further divided the analysis to determine the significance of interactions within and between specific time assignments (with early defined as the initial part of a path and late as everything after that). We searched for interactions for all 5 models in the TcoF-DB database, which catalogs transcription co-factors and their interactions. Overall, we see very significant enrichment for interactions between TFs assigned to the same path. For most datasets we also see significant enrichment of interactions among early TFs. These are TFs assigned to the initial part of the path and, as shown above, in many cases they represent proteins involved in complexes that jointly regulate a large number of genes.
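The comparison against random TF sets of the same size amounts to a permutation test, which can be sketched as follows. The interaction pairs and TF universe below are hypothetical placeholders, not actual TcoF-DB contents.

```python
import random

def interaction_enrichment(path_tfs, known_pairs, all_tfs, n_perm=2000, seed=7):
    """Empirical p-value for interaction enrichment among a path's TFs.

    Counts known interacting pairs among path_tfs, then compares the count
    to those obtained for random TF sets of the same size drawn from all_tfs.
    (A generic permutation test illustrating the comparison described above.)
    """
    def n_pairs(tfs):
        s = set(tfs)
        return sum(1 for a, b in known_pairs if a in s and b in s)

    rng = random.Random(seed)
    observed = n_pairs(path_tfs)
    hits = sum(
        1 for _ in range(n_perm)
        if n_pairs(rng.sample(all_tfs, len(path_tfs))) >= observed
    )
    # add-one correction so the p-value is never exactly zero
    return observed, (hits + 1) / (n_perm + 1)

known = [("YY1", "ATF2"), ("ATF2", "CREB1"), ("YY1", "CREB1")]
universe = ["YY1", "ATF2", "CREB1"] + [f"TF{i}" for i in range(40)]
obs, p = interaction_enrichment(["YY1", "ATF2", "CREB1"], known, universe)
print(obs, p < 0.05)
```

The same routine, restricted to early-early, late-late, or early-late pairs, yields the within- and between-time significance estimates described above.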
However, interestingly, we also find for some of the datasets (most notably the mouse lung data) a strong enrichment for early-late interactions. These interactions likely represent a late TF activation or recruitment by an earlier TF. The fact that many of them are known interactions indicates that our model, using scRNA-Seq data, is indeed able to identify the specific timing of the regulation of the different TFs, which would otherwise all be assigned to the same time. We compared CSHMM-TF with several prior methods for trajectory inference that do not utilize TF-gene interaction data. For this we looked at the accuracy of the reconstructed trajectories and cell assignments, as well as at the inference of TFs and their order. As the comparison figure shows, for a number of cell types these methods were unable to fully reconstruct known developmental trajectories. For example, while PCA and t-SNE were able to identify clusters for some cell types in both the lung and neuron data, they were unable to reconstruct the correct trajectories and also mixed a number of different cell types that CSHMM-TF correctly assigned. GPLVM correctly orders cells along a pseudotime; however, it is unable to determine branching models. Monocle 2 is able to reconstruct cell trajectories, however it only found a single split for these datasets and also mixed cell types that CSHMM-TF correctly separated into unique branches. Slingshot is able to order cells along a pseudotime but it did not identify any branch point for the lung data. For the neural data it correctly separates the MEF and neuron cells, but is unable to infer a correct trajectory along the different cell types. As for PAGA, while it correctly clusters cell types, it does not seem to provide any clear trajectory for the cells or clusters.
For both datasets PAGA produces a set of weakly connected cliques, making it hard to infer the branching. To compare the results of CSHMM-TF with CSHMM, which does not utilize TF-gene interactions, we developed a quantitative measure which calculates the accuracy of the ordering inferred by the two methods. As mentioned above, most prior methods do not attempt to model regulation by TFs. However, a few do, and so we next compared CSHMM-TF to two prior methods for TF assignments using the liver dataset. The first is SCDIFF. While some recent scRNA-Seq studies profile thousands of cells, very few large time series scRNA-Seq datasets are currently available. One of the datasets we analyzed, which studied mouse cortical development, is quite large. While several methods have been developed to reconstruct developmental models based on time series scRNA-Seq data, very few of these utilize information about TF-gene interactions. Such complementary information can aid in correctly reconstructing models for development and differentiation and can help explain the regulation of the process being studied. Here we presented CSHMM-TF, a continuous-state HMM model which combines cell assignments to a developmental model with TF assignments as regulators of the process. The method utilizes a probabilistic model which helps account for noise and missing values common to scRNA-Seq data. To learn the model, the method iterates between cell assignments to branches and TF assignments to specific time points. Cells assigned to paths to which TFs are assigned are assumed to have those TFs active. Based on the analysis of the TF targets in these cells, we can both identify the regulators and improve the assignments of cells to paths. We applied the method to several scRNA-Seq datasets from both human and mouse. As we show, the method was able to reconstruct biologically sound models for all datasets, in most cases correctly grouping cells based on known types.
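The quantitative ordering measure mentioned above is not spelled out in this excerpt; one common way to score an inferred cell ordering against known collection stages is rank correlation, sketched here with Kendall's tau (function name and toy data are illustrative, not the paper's exact metric):

```python
import numpy as np
from scipy.stats import kendalltau

def ordering_accuracy(pseudotime, true_stage):
    """Rank agreement between inferred pseudotime and known collection stages.

    Returns Kendall's tau in [-1, 1]; values near 1 mean the inferred
    ordering respects the true temporal stages.
    """
    tau, _ = kendalltau(pseudotime, true_stage)
    return tau

# Toy example: 6 cells sampled at 3 collection stages (0, 1, 2).
true_stage = np.array([0, 0, 1, 1, 2, 2])
good_order = np.array([0.05, 0.10, 0.40, 0.55, 0.80, 0.95])  # respects stages
bad_order = good_order[::-1]                                  # reversed

print(ordering_accuracy(good_order, true_stage))  # near 1
print(ordering_accuracy(bad_order, true_stage))   # near -1
```

Ties within a stage are handled by the tau-b correction, so a perfectly stage-consistent ordering scores just below 1.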
In contrast, several other pseudo-time scRNA-Seq analysis methods were unable to correctly reconstruct models for at least some of these studies, highlighting the advantage of integrating expression and regulation data. Beyond the construction of the models and cell assignments to specific positions, CSHMM-TF identifies several TFs as regulating key aspects of the processes. Analysis of the TFs identified for the different biological systems studied supports these assignments, since many of them are known to play important roles in those processes while others represent novel predictions about the regulation of specific branching events. In addition to the list of TFs, CSHMM-TF provides information about potential combinatorial and causal relationships between TFs assigned to the same path. As we showed, TFs assigned to the beginning of paths are often interacting, and in some cases early and late TFs are interacting as well. In these cases CSHMM-TF provides information on the dynamics of the assembly process of TF complexes which, without the detailed trajectories provided by scRNA-Seq, would have been hard to obtain. CSHMM-TF can also be complementary to current analysis methods that are based on identifying DE TFs. For the liver data, we found that PITX2, a known liver development TF, appears among the regulators assigned by our model. While CSHMM-TF was successful in analyzing several biological systems, there are certainly many places where it can be improved. First, CSHMM-TF relies on a predefined list of TF-gene interactions, and this list is likely incomplete, preventing the method from identifying additional key TFs. In addition, while the method is able to identify interacting TFs, the model for their impact is additive, and so it would be hard for this method to identify more complex relationships. CSHMM-TF is implemented in Python and is available from GitHub (https://github.com/jessica1338/CSHMM-TF-for-time-series-scRNA-Seq.git).
We believe that CSHMM-TF represents a useful first step in utilizing the detailed information provided by scRNA-Seq data to infer the dynamics of TF activation. We tested our method on five publicly available time-series scRNA-Seq datasets in human and mouse. The number of cells in the datasets ranged from 152 (mouse lung data) to \u223c 21K (mouse cortex data). Datasets were processed by removing genes with overall low expression. The model contains a set of split nodes: these are the states in which cells are allowed to split into two or more branches, and they represent important split stages for cell lineages. The edges between split nodes are denoted as paths (p0 \u223c p2), and each contains infinitely many states such that each point on a path corresponds to a state. States are parametrized by their location w.r.t. the two split nodes at the ends of the path they reside on. Each of the split nodes is associated with a branch probability B. For each state (including split nodes), we define an emission probability by determining parameters for a multivariate Gaussian distribution which, following previous work, assumes independence for gene-specific expression levels conditioned on the state. The activation time of a TF (tstart) directly affects the emission probability of cells assigned to locations on the paths that follow the start time of the TF. To formulate the emission probabilities in CSHMM-TF we use sp,t to represent a specific state, where 0 \u2264 t \u2264 1 is a pseudo time on path p(Da \u2192 Db), and a, b are the indices of split nodes. The emission probability for gene j in cell i assigned to state sp,t is modeled as a Gaussian distribution whose mean depends on the state and whose variance is \u03c3j. Continuous State HMMs (CSHMM) differ from standard HMMs in the number of states each can have. While HMMs have a (finite) well-defined set of states, a CSHMM can have infinitely many states (which we use to represent the continuous time of cells).
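A minimal sketch of such a state-dependent Gaussian emission, assuming an exponential interpolation of the per-gene mean between the two split-node means (the exact parametrization in the paper may differ; all names and numbers here are illustrative):

```python
import numpy as np

def emission_log_likelihood(x, mu_a, mu_b, K, t, sigma):
    """Log-likelihood of a cell's expression vector x at pseudotime t on a path.

    The per-gene mean interpolates between the path's start (mu_a) and end
    (mu_b) split-node means at a gene-specific rate K (a hypothetical
    exponential form in the spirit of the CSHMM parametrization); emissions
    are independent Gaussians per gene with standard deviation sigma.
    """
    mean = mu_b + (mu_a - mu_b) * np.exp(-K * t)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * ((x - mean) / sigma) ** 2)

# Assign a cell to the pseudotime that maximizes the emission likelihood.
mu_a = np.array([5.0, 1.0]); mu_b = np.array([1.0, 4.0])   # path endpoints
K = np.array([2.0, 2.0]); sigma = np.array([0.5, 0.5])
cell = np.array([1.5, 3.6])                                # a late-looking cell
grid = np.linspace(0, 1, 101)
best_t = grid[np.argmax([emission_log_likelihood(cell, mu_a, mu_b, K, t, sigma)
                         for t in grid])]
print(best_t)   # a late pseudotime on the path
```

In the full model this likelihood is combined with branch probabilities and iterated with TF assignment; the sketch only shows the per-cell emission step.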
CSHMM-TF extends the formulation of CSHMM for time-series scRNA-Seq data (first presented in prior work) by adding TF activity. The assignment of a TF to a path affects the emission probabilities of cells on that path through the TF\u2019s target genes. \u03b8 is the set of model parameters. Note that this weight is gene specific and depends in part on the TFs predicted to regulate that gene. To allow different genes to change non-linearly at different rates across the path (some at the beginning while others at the end), we use a gene-specific parameter Kp,j to denote the rate of change. For genes regulated by TFs that are activated after the start of the path we use t\u2032 = max(0, t \u2212 tj,start). Here, t is the time assignment of the cell and tj,start is the activation time of the TF regulating gene j, which we discuss in more detail below. For genes not regulated by any TF assigned to this path, or those regulated by TFs that are activated at the start of the path, t\u2032 is equal to t. We also attempted to include dropout probability using a mixture weight model in the emission probability; however, this did not change the performance of CSHMM-TF much and so is omitted here. These notations are enough to define the parameters required to specify a CSHMM-TF. All symbol definitions are presented in the supplementary material; ga,j denotes the mean expression of gene j at split node Da. To predict regulating TFs for each path we extend methods that only allow discrete time assignments of TF activity. We first compute gene expression difference vectors, which represent the change in expression for each gene between the two nodes that define a path (start and end). We use an L1 regularization with parameter \u03bbg, where a larger \u03bbg means more strict regulation. To incorporate TF information into this regularization (given our assumption that genes regulated by path-specific TFs are more likely to change in that path), we instead use a gene-specific penalty in which \u03b1p,j is the probability that the expression of gene j will change along path p (the higher the probability, the lower the regularization for gene j).
\u03b1p,j is estimated by fitting a logistic regression model for all genes regulated by TFs on path p. Such changes in the regularization parameters allow genes that are targets of assigned TFs to change more than other genes for which no explanation for the change in expression is determined by the model. We assume that most genes do not change in a specific path. Based on this, we regularize the gene expression difference vector, whose entries are differences between the mean gene expression at the split points at the two ends of a path. The log-likelihood uses t\u2032 as we have discussed previously."}
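The gene-specific L1 shrinkage described above can be illustrated with a soft-threshold operator whose penalty is scaled down for predicted TF targets (an illustrative stand-in for the paper's regularized fit; names and numbers are hypothetical):

```python
import numpy as np

def regularized_change(delta_raw, lam, alpha):
    """Shrink per-gene expression changes with a gene-specific L1 penalty.

    delta_raw: observed mean-expression change along a path (end - start).
    lam:       base L1 strength; alpha[j] is the model-estimated probability
               that gene j changes on this path, so a higher alpha means
               weaker shrinkage. The soft-threshold form is the closed-form
               solution of a 1-D lasso step, used here for illustration.
    """
    lam_j = lam * (1.0 - alpha)          # TF targets (high alpha) penalized less
    return np.sign(delta_raw) * np.maximum(np.abs(delta_raw) - lam_j, 0.0)

delta = np.array([0.3, -0.3, 2.0])       # two small changes, one large
alpha = np.array([0.9, 0.1, 0.1])        # gene 0 is a predicted TF target
shrunk = regularized_change(delta, lam=0.5, alpha=alpha)
print(shrunk)
```

Gene 0 survives the shrinkage because its penalty is scaled by (1 - 0.9), while the equally small but unexplained change of gene 1 is zeroed out.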
+{"text": "The stem cells located in the hair follicle bulge area are critical for skin regeneration and repair. To date, little is known about the evolution of the transcriptome of these cells across time. Here, we have combined genome-wide expression analyses and a variety of in silico tools to determine the age-dependent evolution of the transcriptome of those cells. Our results reveal that the transcriptome of skin stem cells fluctuates extensively along the lifespan of mice. The use of both unbiased and pathway-centered in silico approaches has also enabled the identification of biological programs specifically regulated at specific time-points. It has also unveiled hubs of highly transcriptionally interconnected genes and transcriptional factors potentially located at the core of those age-specific changes. The skin stem cells (SSCs) located in the bulge area of the hair follicles are important for replacing, restoring, and regenerating epidermal cells that have been lost, damaged, or become pathologically dysfunctional in postnatal periods. The SSCs have been extensively characterized at the transcriptomal level both in toto and in single cell-based experiments in mice. Male C57BL/6J mice were maintained and utilized according to protocols approved by the Bioethics Committee of the University of Salamanca. Mice were kept in ventilated rooms in pathogen-free facilities under controlled temperature (23 \u00b0C), humidity (50%), and illumination (12-h-light/12-h-dark cycle) conditions. Cell isolation was carried out as previously described. Briefly, flow cytometry purified CD34+ Itg\u03b16+ SSCs were lysed in RLT buffer and total RNA was extracted using the QIAGEN RNeasy Micro Kit according to the manufacturer\u2019s instructions. Purified RNA was processed and analyzed exactly as indicated elsewhere. In the statistical analyses, n is the number of time-points, and probesets were filtered by adjusted P value.
Correlations were considered statistically significant when the Pearson coefficient had p values below 0.05. Functional clusters were established when every pairwise correlation within a group of signatures was found significant. For the discovery of transcription factor binding motifs in the promoters of the coregulated genes, the iRegulon software was used. Functional annotation of gene sets was performed using Metascape and ToppGene. In order to identify the transcripts at the core of every gene expression pattern, the WGCNA R package was used, via the blockwiseModules function. The pickSoftThreshold function was used to select the soft thresholding power according to network topology. Consensus module detection within each expression pattern was omitted and kept to one module, as the number of clusters had already been optimized. The heatmap plot depicting the adjacency matrix was created with the TOMplot function. To calculate the intramodular connectivity for each gene, we computed the whole network connectivity for each expression pattern through the intramodularConnectivity function. Hubs were defined as the 10% most connected genes within each expression pattern. The known functional interactions among hubs were obtained through the String tool and visualized with Cytoscape. Microarray data reported in this paper was deposited in the GEO database (https://www.ncbi.nlm.nih.gov/geo/) under the accession number GSE137176. We analyzed CD34+ Itg\u03b16+ SSC pools from 0.6- (\u201cvery early\u201d age time-point), 1- (\u201cearly\u201d time-point), 2.5- and 4- (\u201cmiddle\u201d time-points), and 6- and 12-month-old (\u201clate\u201d time-points) mice (see the figure panels).
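The hub-detection step described above (soft-thresholded correlation network, connectivity ranking, top 10% as hubs) can be sketched in a few lines; this is a simplified stand-in for the WGCNA functions named in the text, with toy data:

```python
import numpy as np

def find_hubs(expr, beta=6, hub_fraction=0.10):
    """WGCNA-style hub detection (simplified sketch).

    expr: genes x samples expression matrix. Builds an unsigned adjacency
    |Pearson r|**beta (beta is the soft-thresholding power), sums it per
    gene to get connectivity, and returns the indices of the top
    `hub_fraction` most connected genes plus the connectivity vector.
    """
    corr = np.corrcoef(expr)
    adj = np.abs(corr) ** beta
    np.fill_diagonal(adj, 0.0)              # exclude self-connections
    connectivity = adj.sum(axis=1)
    n_hubs = max(1, int(round(hub_fraction * expr.shape[0])))
    return np.argsort(connectivity)[::-1][:n_hubs], connectivity

rng = np.random.default_rng(0)
base = rng.normal(size=20)
# Genes 0-9 follow a shared profile (a co-expression module); 10-19 are noise.
expr = np.vstack([base + 0.1 * rng.normal(size=20) for _ in range(10)]
                 + [rng.normal(size=20) for _ in range(10)])
hubs, k = find_hubs(expr, hub_fraction=0.10)
print(hubs)   # the hubs come from the correlated module (indices < 10)
```

Soft thresholding (raising |r| to a power) suppresses weak correlations smoothly instead of hard-cutting them, which is the design choice WGCNA's pickSoftThreshold tunes.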
Together, these data indicate that the CD34+ Itg\u03b16+ SSC pool undergoes significant fluctuations in an age-dependent manner. Further soft-clustering analyses revealed that these dynamic genes can be subdivided into 14 nonredundant gene expression subsets according to their respective age-dependent patterns of expression. We next utilized the Metascape and ToppGene tools to functionally annotate these subsets. The \u201cearly\u201d transcriptional time-point is much less complex, since it is only composed of two subsets of differentially expressed genes. The SSCs from the \u201cmiddle\u201d age time-points (2.5- to 4-month-old mice) are more stable from a transcriptomal point of view than those from the earlier time-points. The transcriptional programs associated with the \u201clate\u201d age time-points regain complexity both in terms of the number of differentially expressed gene subsets and the total count of genes involved. We also used the corrplot algorithm (see Methods) to monitor the age-dependent evolution of hallmark pathways and gene signatures directly associated with SSC function, including G2/M checkpoint elements and Hedgehog signaling. We next implemented a series of in silico approaches to identify genes that, given their high transcriptional interconnectivity with the rest of the dynamic transcripts identified in our study, could be potentially involved in the age-dependent regulation of the SSC transcriptome. This idea stemmed from previous observations indicating that the most interconnected nodes (hubs) within specific signaling and transcriptomal networks usually play relevant roles in the orchestration of such biological programs. Finally, we focused on the identification of transcriptional factors potentially involved in the age-dependent changes observed in the transcriptional programs of SSCs.
To this end, we followed a two-pronged in silico approach: (i) the identification of the transcriptional factors that behave as hubs according to our previous bioinformatics analyses, and (ii) the discovery of transcription factor binding motifs enriched in the promoter regions of the age-regulated gene hubs. The first analysis revealed a high percentage of transcriptional factors belonging to the zinc finger protein (Zfp) family that are present in most hub subsets and age periods. The second in silico approach did find the statistically significant enrichment of DNA binding sites for specific subsets of transcription factors in the promoter regions of the age-regulated gene hubs. In addition to E2F and Zbtb33, the promoter regions of the hubs specific for the \u201cearly\u201d experimental phase harbor DNA binding sites for the polycomb protein E4F1 and GATA family factors. The promoter regions of the \u201clate\u201d period hubs are enriched in DNA binding motifs for E2F, ETS, GATA, the Wnt/\u03b2-catenin effectors (TCF/Myc), and Sox family proteins. In this study we have characterized the age-dependent transcriptome of CD34+ Itg\u03b16+ SSCs. Our results indicate that the transcriptome of these cells is, in fact, highly dynamic along the age points interrogated. As a reference, our analyses have shown that approximately one third of the SSC transcriptome is highly dependent on that parameter. The dynamic gene subset of this transcriptome is also highly complex when considering the expression patterns exhibited by the differentially expressed genes, the number of genes involved in each of those dynamic subsets, and the type of biological functions linked to each of them. In line with this, our soft-clustering analyses have revealed that the age-dependent transcriptome of these cells can be subdivided into 14 different subsets according to their specific pattern of expression during the time periods analyzed. Further adding to this complexity, we found subsets that are age period-specific whereas others span several age periods.
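A motif-enrichment call of the kind described above can be sketched as a one-sided hypergeometric test (an illustrative stand-in for iRegulon's enrichment statistic; all counts are hypothetical):

```python
from scipy.stats import hypergeom

def motif_enrichment_p(total_genes, genes_with_motif, hub_genes, hubs_with_motif):
    """One-sided hypergeometric test for TF-binding-motif enrichment.

    Probability of observing at least `hubs_with_motif` motif-bearing
    promoters in a hub set of size `hub_genes`, given the motif's
    background frequency among `total_genes`.
    """
    return hypergeom.sf(hubs_with_motif - 1, total_genes,
                        genes_with_motif, hub_genes)

# 10,000 genes; motif in 500 promoters genome-wide; 100 hubs, 20 carry it.
p = motif_enrichment_p(10_000, 500, 100, 20)
print(p)   # tiny: only ~5 motif-bearing hubs expected by chance
```

With 5% background frequency, 20 hits among 100 hubs is a fourfold over-representation, hence the very small p value.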
From a biological point of view, our analyses indicate that each of those differentially expressed gene subsets can be correlated with specific biological programs. In this context, we found that the \u201cvery early\u201d and the \u201clate\u201d time-points selected in our experimental approach are the most homogenous from a functional point of view. Thus, the \u201cvery early\u201d time-point gene signatures share many molecular features that correlate with increased cell proliferation rates. By contrast, the \u201clate\u201d time-point exhibits molecular fingerprints associated with reduced stem cellness, increased cell differentiation, and aging. These data are interesting because they suggest that the transcriptomal features typical of \u201caging\u201d SSCs arise much earlier than previously anticipated and, in fact, even earlier than the functional deterioration that is only observed in SSCs six months later in mice. It is worth noting that the age-dependent biological programs found in the foregoing analyses are also detected when the microarray data are processed using independent, pathway-centered coexpression analyses. Furthermore, these latter in silico studies have unveiled additional functional features of those programs. For example, we saw that the proliferative condition found at the \u201cvery early\u201d time-point also correlates with elevated levels of Hedgehog- and E2F-linked gene signatures as well as with reductions in the expression levels of genes associated with TGF\u03b2 and Notch1 signaling. It is likely that the transcriptomal programs characterized using genome-wide expression analyses contain multiple bystanders that play no relevant roles in SSC biology. Separating the wheat from the chaff in these transcriptomal programs would be difficult unless functional screenings are conducted.
However, one avenue to tackle this issue in silico is to carry out weighted correlation network analyses to identify transcriptional hubs in the expression networks characterized. Using this approach, we reduced the initial collection of 11,123 differentially expressed genes to a much smaller subset of 1087 genes that can be considered as hubs in this age-dynamic transcriptomal process. This collection could be further decreased if needed just by increasing the statistical and a priori identification criteria used in these correlation network analyses. Several lines of evidence suggest that this method provides a good picture of the potentially relevant genes involved in each of the dynamic gene subsets. Those include (i) the observation that the gene hub collection recapitulates well the age-dependent fluctuations of the biological processes found upon the functional annotation of the whole dynamic SSC transcriptome, and (ii) that a significant fraction (34.8%) of the identified gene hubs has been previously linked to SSC-related processes. Another issue bound to this type of study is the identification of the master regulators of the transcriptomal changes observed. Prima facie, it could be argued that the upstream transcriptional factors in charge of such a process had to behave as transcriptional hubs themselves. Consistent with this view, we detected a significant number of transcriptional factors in our age-regulated gene hubs. Given that the main focus of this study was the characterization of the age-dependent evolution of the SSC transcriptome under normal homeostatic conditions, we have not characterized the transcriptome of those cells in aging mice. Up to now, this issue has been addressed by a study that utilized SSCs from fully aged (18-month-old) mice. It is important to bear in mind that the age-dependent transcriptomal changes observed in this study do not necessarily have to be mediated by some internal clock of the SSCs.
Indeed, some of these changes can be elicited by either paracrine or cell-to-cell signals from the niche. For example, the upregulation of gene signatures associated with the activation of the Jak-Stat signaling pathway detected in aging SSCs is known to be engaged by cytokines released from the surrounding tissue. In addition to providing a large angular view of the age-dependent evolution of the SSC transcriptome, this work has described a computational pipeline that can be easily implemented to characterize, integrate, and extract functional information from the dynamic changes in transcriptomes linked to multiple biological time-points and/or experimental conditions."}
+{"text": "The complexity of biological systems is encoded in gene regulatory networks. Unravelling this intricate web is a fundamental step in understanding the mechanisms of life and eventually developing efficient therapies to treat and cure diseases. The major obstacle in inferring gene regulatory networks is the lack of data. While time series data are nowadays widely available, they are typically noisy, with low sampling frequency and an overall small number of samples. This paper develops a method called BINGO to specifically deal with these issues. Benchmarked with both real and simulated time-series data covering many different gene regulatory networks, BINGO clearly and consistently outperforms state-of-the-art methods. The novelty of BINGO lies in a nonparametric approach featuring statistical sampling of continuous gene expression profiles. BINGO\u2019s superior performance and ease of use, even by non-specialists, make gene regulatory network inference available to any researcher, helping to decipher the complex mechanisms of life. Gene regulatory network inference is a topical problem in systems biology. Here, the authors present BINGO, a powerful method for network inference from time series data. Knowledge of the regulatory structure enables understanding of biological mechanisms, and can, for example, lead to discoveries of new potential targets for drugs. Interactions between genes are typically represented as a gene regulatory network (GRN) whose nodes correspond to different genes, and a directed edge denotes a direct causal effect of some gene on another gene. The usual research aim is to infer the network topology from given gene expression data. Classically, GRN inference has been based on analysing steady-state data corresponding to gene knockout experiments, where one gene is silenced and changes in the steady-state expressions of other genes are observed.
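A classical knockout analysis of this kind can be sketched as a z-score on steady-state changes (a simplified scheme; the median-corrected variants discussed later refine it, and all data here are toy values):

```python
import numpy as np

def knockout_zscores(wild_type, knockouts, sd):
    """Steady-state knockout analysis (classical GRN inference sketch).

    knockouts[j] holds the steady-state expression of all genes when gene j
    is silenced. A large |z| for gene i under knockout of gene j suggests a
    directed edge j -> i.
    """
    z = (knockouts - wild_type) / sd        # row j: effects of silencing gene j
    np.fill_diagonal(z, 0.0)                # ignore the silenced gene itself
    return np.abs(z)

wt = np.array([1.0, 2.0, 3.0])
ko = np.array([[0.0, 0.4, 3.1],   # knocking out gene 0 collapses gene 1
               [1.1, 0.0, 2.9],
               [0.9, 2.1, 0.0]])
scores = knockout_zscores(wt, ko, sd=0.2)
print(scores[0, 1])   # strong evidence for an edge 0 -> 1
```

The sketch also makes the text's criticism concrete: filling this matrix requires silencing every gene once, which is exactly the experimental cost that time-series methods avoid.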
However, carrying out knockout experiments on a high number of genes is costly and technically infeasible. Moreover, it can be difficult to know if the system really is in a steady state when it is measured. By contrast, methods that infer GRNs from time series data can infer the networks on a more global level using data from few experiments. Therefore, GRN inference from time series data has gained more and more attention recently. Modelling dynamical systems from time series data is a problem with a long history, in particular in the fields of mechanical, electrical, and control engineering. Inference of GRNs, however, adds further modelling challenges, since data collection is expensive and technically demanding. Typically, gene expression data have low sampling rates and a relatively small amount of data. Moreover, GRNs have a high number of genes with complex, nonlinear regulatory mechanisms. Several GRN inference problems, from different types of data, have been posed as competitive challenges by the \u201cDialogue for Reverse Engineering Assessments and Methods\u201d (DREAM) project. In addition, different methods have been compared in review articles. Many different approaches have been taken to solve the problem, such as graphical models, Bayesian network models, information theory methods, neural models, and so on. The main focus of this article is on methods that are based on fitting an ordinary differential equation (ODE) model to the observed gene expression time series data. The derivatives needed for such fits can be estimated by spline fitting, Gaussian process fitting, or regularised differentiation. The regression problem is then solved by linear methods, fitting mechanistic functions, or a user-defined library of nonlinear functions. Also, nonparametric machine learning techniques have been used, such as random forests and Gaussian process regression.
A method based on Gaussian process regression, called \u201cCausal structure identification\u201d (CSI), was the best performer in a comparison study for network inference from time series data. Most ODE-based methods transform the system identification problem into an input\u2013output regression problem where the inputs are the measured gene expression values and the outputs are their derivatives, estimated from the data. Derivative estimation can be based on a simple difference approximation. The winner of the DREAM4 in silico network inference challenge was a method called \u201cPetri nets with fuzzy logic\u201d (PNFL), whose model class consists of fuzzy membership functions. Another ODE-based approach is to introduce a simple enough model class, from which trajectories can be directly simulated and compared to the measured data. Such a strategy does not suffer badly from the low sampling rate, but the model class cannot be too complex, and it might be too restrictive to capture the behaviour of the real system. Linear dynamics have been proposed together with a Kalman filter state estimation scheme. This article presents BINGO. The novelty of BINGO is the introduction of statistical trajectory sampling. This enables the use of a flexible nonparametric approach for modelling the nonlinear ODE on continuous gene expression trajectories. BINGO is based on modelling gene expression with a nonlinear stochastic differential equation where the dynamics function (or drift function) is modelled as a Gaussian process. This defines gene expression as a stochastic process, whose realisations can be sampled using Markov chain Monte Carlo (MCMC) techniques. The key to overcoming low sampling frequency is to sample the trajectory also between measurement times. The method pipeline is illustrated in Fig.\u00a01. While BINGO is not the first GRN inference method to utilise Gaussian processes to model the dynamics function in an ODE model, the existing approaches treat the problem as an input\u2013output regression problem, with gene expression derivatives estimated directly from the time series data.
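The derivative-estimation route can be made concrete on a toy linear system: finite differences turn the trajectory into an input-output regression, which recovers the dynamics well only when sampling is dense (illustrative sketch, not any specific published method):

```python
import numpy as np
from scipy.linalg import expm

def estimate_derivatives(times, X):
    """Central-difference derivative estimates from a sampled trajectory.

    X: samples x genes. This is the simple difference approximation that
    turns ODE identification into input-output regression; it degrades at
    low sampling rates, the failure mode trajectory sampling avoids.
    """
    return np.gradient(X, times, axis=0)

# Toy linear system dx/dt = A x, sampled densely.
A = np.array([[-1.0, 0.8],
              [0.0, -0.5]])

def simulate(ts, x0):
    return np.array([expm(A * t) @ x0 for t in ts])

t_dense = np.linspace(0, 4, 200)
X = simulate(t_dense, np.array([1.0, 2.0]))
dX = estimate_derivatives(t_dense, X)
A_hat = np.linalg.lstsq(X, dX, rcond=None)[0].T   # regress dX on X
print(A_hat)   # close to A when sampling is dense
```

Rerunning with, say, 10 samples instead of 200 visibly degrades A_hat, which is the low-sampling-rate problem the text describes.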
This approach yields a probability distribution for the derivatives, whereas we obtain a probability distribution for the continuous gene expression trajectories. This allows by-passing the derivative estimation, which can be a serious source of errors when the sampling rate is low. BINGO is validated by comparing it with state-of-the-art network inference methods using data from the DREAM4 in silico network inference challenge. Moreover, the effects of sampling frequency and process noise are studied by applying the method to simulated data of the circadian clock of Arabidopsis thaliana. Finally, BINGO\u2019s performance on real experimental data is demonstrated by applying it to a dataset from the synthetic network IRMA, and by using it on drug target identification. BINGO outperforms state-of-the-art network inference methods in inference from time series data. Gene expression time series data Y\u2009=\u2009{y0,\u00a0\u2026,\u00a0ym} are modelled as samples from a continuous trajectory, yj\u2009=\u2009x(tj)\u2009+\u2009vj, where vj represents measurement noise. The continuous trajectory is assumed to satisfy a nonlinear stochastic differential equation dx\u2009=\u2009f(x)dt\u2009+\u2009dw, where w is some driving process noise. Here, x is a vector of gene expression levels and f is vector-valued; if fi depends on xj, in the corresponding GRN there is a link from gene j to gene i. The task in GRN inference is to discover this regulatory interconnection structure between variables. Essentially, the dynamics function f is modelled as a Gaussian process. This defines x as a stochastic process with probability distribution p(x\u2223\u03b8), where \u03b8 denotes hyperparameters of the Gaussian process, including the underlying GRN topology. Trajectory realisations are sampled from the conditional probability distribution p(x\u2223Y,\u00a0\u03b8)\u00a0\u221d\u00a0p(Y\u2223x)p(x\u2223\u03b8), where p(Y\u2223x) is the measurement model arising from the sampling equation above. The discrete-time GPDM (a.k.a.
Gaussian process state space model) is based on GP latent variable models. It is an effective tool for analysing time series data produced by an unknown dynamical system, or for cases where the system is too complicated to be represented using classical modelling techniques. In the original paper, the method was used for human motion tracking from video data. Motion tracking problems remain the primary use of GPDMs, but other types of applications have emerged as well, such as speech analysis, traffic flow prediction, and electric load prediction. BINGO is based on a continuous-time GPDM, which is introduced in this article, and some theory of GPDMs is presented in the Supplementary Notes. A Gaussian process f is a stochastic process, that is, a function whose values at any given set of input points form a random variable. Gaussian processes can be seen as a generalisation of normally distributed random variables. A Gaussian process is defined on some index set \u039e so that for any finite collection \u03be1,\u00a0\u2026,\u00a0\u03beN, the values f(\u03be1),\u00a0\u2026,\u00a0f(\u03beN) are jointly normally distributed, with covariance function k(\u03be1,\u00a0\u03be2)\u2009=\u2009Cov(f(\u03be1),\u00a0f(\u03be2)); the covariance of the collection is then the N\u00a0\u00d7\u00a0N matrix whose (i,\u00a0j) element is k(\u03bei,\u00a0\u03bej). Given noisy evaluations \u03b7j\u2009=\u2009f(\u03bej)\u2009+\u2009vj, for j\u2009=\u20091,\u2009\u2026,\u2009N, the value f(\u03be) at a generic point \u03be can be approximated with the conditional expectation, which can be expressed analytically. The popularity of GP regression is based on its analytical tractability and solid roots in probability theory. For example, it is possible to obtain confidence intervals for predictions of GP regression. In GPDM, thanks to the analytical tractability of the GP framework, it is possible to obtain the probability distribution p(x\u2223\u03b8) for solutions of the stochastic differential equation. The GP framework can handle combinatorial effects, meaning nonlinearities that cannot be decomposed as f(x1,\u00a0x2)\u2009=\u2009f1(x1)\u2009+\u2009f2(x2). This is an important property for modelling a chemical system, such as gene expression, where reactions can happen due to combined effects of reactant species.
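The ARD mechanism described in the surrounding text can be sketched directly: a squared-exponential kernel with per-coordinate inverse length scales, where a zero beta switches a putative regulator off (simplified GP regression with illustrative data, not BINGO's full model):

```python
import numpy as np

def se_ard_kernel(X1, X2, beta, variance=1.0):
    """Squared-exponential covariance with ARD inverse length scales.

    k(x, x') = variance * exp(-0.5 * sum_j beta_j * (x_j - x'_j)^2).
    beta_j = 0 makes the function constant along coordinate j, which is
    how the GP framework 'switches off' a putative regulator.
    """
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * beta).sum(-1)
    return variance * np.exp(-0.5 * d2)

def gp_posterior_mean(X_train, y, X_test, beta, noise=1e-4):
    """Standard GP regression mean: K_* (K + noise I)^-1 y."""
    K = se_ard_kernel(X_train, X_train, beta) + noise * np.eye(len(X_train))
    Ks = se_ard_kernel(X_test, X_train, beta)
    return Ks @ np.linalg.solve(K, y)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(60, 2))
y = np.sin(X[:, 0])                # target depends only on coordinate 0
beta = np.array([1.0, 0.0])        # ARD has pruned coordinate 1
pred = gp_posterior_mean(X, y, X, beta)
print(np.max(np.abs(pred - y)))    # small: the relevant input suffices
```

Even with the second coordinate ignored entirely, the fit is accurate, which is the variable-selection signal that the beta hyperparameters carry.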
For example, the dynamics corresponding to a chemical reaction x1\u2009+\u2009x2\u2009\u2192\u2009x3 cannot be modelled by an additive function. In a classical variable selection problem from input\u2013output data \u03b7j\u2009=\u2009f(\u03bej)\u00a0+\u00a0vj, for j\u2009=\u20091,\u00a0\u2026,\u00a0N, the question is which coordinates of \u03be the function f depends on. In the GP framework, variable selection can be done by a technique known as \u201cautomatic relevance determination\u201d, based on estimating the hyperparameters of the covariance function of f. For BINGO, we have chosen the squared exponential covariance function for each component fi, of the form k(x,\u00a0x\u2032)\u2009=\u2009\u03b3i exp(\u2212(1/2)\u03a3j \u03b2i,j(xj\u2009\u2212\u2009x\u2032j)^2), where the \u03b2i,j\u2009\u2265\u20090 are known as inverse length scales, since the input must change by roughly \u03b2i,j^(\u22121/2) in the jth coordinate for the value of function fi to change considerably. If \u03b2i,j\u00a0=\u00a00, then fi is constant in the corresponding direction. The mean function for fi is mi(x)\u00a0=\u00a0bi\u00a0\u2212\u00a0aixi, where ai and bi are regarded as nonnegative hyperparameters corresponding to mRNA degradation and basal transcription, respectively. Network inference is based on the posterior distribution of the \u03b2i,j: p(\u03b8\u2223Y)\u00a0\u221d\u00a0p(Y,\u00a0\u03b8)\u2009=\u2009\u222b\u00a0p(Y,\u00a0x,\u00a0\u03b8)dx\u2009=\u2009\u222b\u00a0p(Y\u2223x)p(x\u2223\u03b8)p(\u03b8)dx. Again \u03b8 consists of all hyperparameters in the method, including the interesting parameters \u03b2i,j. The integral with respect to the continuous trajectory x is done by sampling trajectories from the distribution p(x\u2223Y,\u00a0\u03b8), as described above. Also, all the hyperparameters are sampled, including the \u03b2i,j, which are given zero-preferring priors by using an indicator matrix S, which is the adjacency matrix of the corresponding directed graph structure. BINGO\u2019s output is the average of the indicator samples. The best performer in that comparison study was a method called \u201cCausal Structure Identification\u201d (CSI), which is also based on Gaussian process regression.
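Averaging indicator samples into an edge confidence can be illustrated on a one-edge toy model (closed-form marginal likelihoods via Sherman-Morrison plus a Metropolis flip proposal; a sketch of the idea, not the paper's sampler, and all names and numbers are hypothetical):

```python
import numpy as np

def edge_posterior(x, y, prior_edge=0.5, sigma=0.1, tau=1.0,
                   n_iter=4000, seed=0):
    """Toy indicator sampling for a single putative regulatory edge x -> y.

    Model: y = s * w * x + noise, with edge indicator s in {0, 1}, slope
    prior w ~ N(0, tau^2) and noise sd sigma. The marginal likelihood of
    each s is available in closed form, and we Metropolis-sample s; the
    returned average of the indicator samples is the edge confidence.
    """
    rng = np.random.default_rng(seed)
    xx, xy, yy, n = x @ x, x @ y, y @ y, len(y)

    def log_marginal(s):
        if s == 0:
            return -0.5 * (yy / sigma**2 + n * np.log(2 * np.pi * sigma**2))
        c = 1.0 + tau**2 * xx / sigma**2           # Sherman-Morrison factor
        quad = yy / sigma**2 - tau**2 * xy**2 / (sigma**4 * c)
        return -0.5 * (quad + np.log(c) + n * np.log(2 * np.pi * sigma**2))

    log_prior = {0: np.log(1 - prior_edge), 1: np.log(prior_edge)}
    s, samples = 0, []
    for _ in range(n_iter):
        s_new = 1 - s                              # propose flipping s
        log_acc = (log_marginal(s_new) + log_prior[s_new]
                   - log_marginal(s) - log_prior[s])
        if np.log(rng.uniform()) < log_acc:
            s = s_new
        samples.append(s)
    return float(np.mean(samples))

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y_linked = 0.8 * x + 0.1 * rng.normal(size=50)   # true edge x -> y
y_free = 0.1 * rng.normal(size=50)               # no dependence on x
print(edge_posterior(x, y_linked))               # high confidence
print(edge_posterior(x, y_free))                 # low confidence
```

Marginalizing the slope is what makes the spurious edge lose: a fitted slope always reduces residuals, but the Occam penalty in the marginal likelihood outweighs that when there is no real dependence.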
Its discrete-time version is included in the comparison, since its performance was better. The 10-gene challenge winner was “Petri Nets with Fuzzy Logic” (PNFL)22. The 100-gene challenge winner40 used only the knockout data. A similar method, the “median-corrected Z-score” (MCZ)41, achieved the second highest score in the 100-gene challenge. Any method inferring networks from time series data can be combined with a method inferring GRNs from steady-state data41, such as the MCZ. Unfortunately, the MCZ requires knockouts or knockdowns of all genes, which can hardly be expected in a real experiment. Nevertheless, the combinations dynGENIE3*MCZ and BINGO*MCZ are included in the full-data comparison. The scores for the combined methods are the products of the individual scores, favouring links that score high in both methods. It should be noted that BINGO (as well as the PNFL) can also utilise partial knockout data together with time series data. The results on DREAM4 data are summarised in Supplementary Table 1. BINGO is first compared with the aforementioned methods using the time series data, and then with the challenge best performers using all available data. The use of steady-state data from knockout/knockdown experiments by BINGO is discussed in the Supplementary Notes39. The poor performance of most methods on network 2 is attributed to low effector gene levels in the wild-type measurement22. In contrast, BINGO’s performance is less satisfactory on network 3, where the PNFL achieves an almost perfect reconstruction. This might be due to the fairly high in-degree (four) of two nodes in the true network; only one out of the eight links involved is assigned a confidence value higher than 0.5 by BINGO. Based on Supplementary Table 1 and Table 1, the impact of knockout data is largest in network 3; PNFL perhaps makes better use of these data.
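The product rule for combining two methods' link confidences can be sketched as follows (toy gene names and scores, purely for illustration):

```python
def combine_scores(scores_a, scores_b):
    """Combined confidence for each candidate link as the product of the two
    methods' scores, so only links ranked high by both methods score high."""
    return {link: scores_a[link] * scores_b[link] for link in scores_a}

# hypothetical link confidences from two methods (regulator, target)
bingo = {("g1", "g2"): 0.9, ("g2", "g3"): 0.8, ("g3", "g1"): 0.1}
mcz   = {("g1", "g2"): 0.7, ("g2", "g3"): 0.2, ("g3", "g1"): 0.9}
combined = combine_scores(bingo, mcz)
```

Here ("g1", "g2") wins because it is highly ranked by both methods, whereas links favoured by only one method are demoted.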
Also the BINGO*MCZ combination scores fairly well on network 3, but on network 2 it loses clearly to BINGO applied to all data directly. BINGO consistently outperforms the other methods by a large margin (with the exception of network 3) in GRN inference from time series data in the 10-gene case. When using all data from the challenge, BINGO scores slightly higher (average AUPR) than the DREAM4 10-gene challenge winner PNFL. The average scores are very close to each other, but on the individual networks there are some rather significant differences. BINGO reaches a fairly high AUPR on network 2, which seemed to be very difficult for all challenge participants: the best AUPR for network 2 among the participants was 0.660, and the PNFL’s 0.547 was the second highest. As in the 10-gene case, BINGO outperforms its competitors by a clear margin in all five networks when inferring the 100-gene networks from time series data alone, and in fact, it scores slightly higher than the DREAM4 challenge winner. iCheMA is excluded due to its poor scalability to high dimension. When using all data, the combination BINGO*MCZ is the best performer, tied with the combination dynGENIE3*MCZ. It seems that with the 100-gene networks, BINGO cannot always combine different types of data in an optimal way. This may be due to the large number of steady-state points where the dynamics function f should vanish. This hypothesis is supported by the deterioration of results when the knockdown data are included together with the knockout and time series data. It should be noted that both the DREAM4 winner and the MCZ use only the knockout and knockdown data, but they require a knockout of every gene, which is hardly realistic in a real experiment. Circadian clock of Arabidopsis thaliana: realistic data were simulated from the so-called Millar 10 model of the Arabidopsis thaliana circadian clock25, using the Gillespie method42 to account for the intrinsic molecular noise.
This model has been widely used to study the plant circadian clock and as a benchmark to assess the accuracy of different network inference strategies15. It simulates gene expression and protein concentration time series with rhythms of about 24 h. The gene regulatory structure consists of a three-loop feedback system of seven genes and their corresponding proteins, whose chemical interactions are described using Michaelis–Menten dynamics. The model was simulated for 600 h in 24-h light/dark cycles to remove transients. Then, the photoperiodic regime was switched to constant light. Ten replicates were simulated, and the first 48 h of the constant light phase were recorded and downsampled to correspond to sampling intervals of 1, 2, or 4 h. The time series therefore consist of 49, 25, or 13 time points, depending on the sampling interval. Two datasets were simulated with different levels of process noise, which is due to the random nature of gene expression. Process noise propagates through the network, and therefore it should be beneficial to network inference algorithms. In vivo dataset IRMA: a synthetic network26 was constructed with the purpose of creating an in vivo dataset with a known ground truth network for benchmarking network inference and modelling approaches. The network is rather small, consisting of only five genes and eight links in the ground truth network. In the experiments, the network was switched either on (five time series) or off (four time series) at the beginning of the experiment. These have been averaged into one switch-on time series and one switch-off time series. Typically only the two average time series have been used, but here BINGO is applied both to the two average time series and to all nine time series. BINGO is also compared with methods5,43 such as TimeDelay-ARACNE, which only report a single network structure, as opposed to a list of links with confidence scores.
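Gillespie's method simulates exact reaction events of a stochastic chemical system; the sketch below applies it to a toy one-gene birth-death process (illustrative rates, not the Millar 10 model):

```python
import random

def gillespie_birth_death(k=5.0, d=0.5, x0=0, t_end=50.0, seed=1):
    """Gillespie simulation of a toy gene expression model: mRNA is produced
    at constant rate k and degraded at rate d * x (x = copy number)."""
    rng = random.Random(seed)
    t, x, traj = 0.0, x0, []
    while t < t_end:
        a_birth, a_death = k, d * x          # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)        # exponential waiting time
        if rng.random() * a_total < a_birth: # choose which reaction fires
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

traj = gillespie_birth_death()
late = [x for t, x in traj if t > 25.0]
mean_late = sum(late) / len(late)
```

The stationary copy number is Poisson with mean k/d = 10, so the late-time average fluctuates around 10.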
Therefore it is not possible to calculate AUROC/AUPR scores for these methods, but their predictions can be presented as points alongside the ROC and precision-recall curves obtained for BINGO and dynGENIE3. The resulting AUROC/AUPR scores are reported in the corresponding figure. In vivo data from Arabidopsis thaliana44: the data consist of two experiments (each with two replicates), wild-type (untreated) and nicotinamide (NAM) treated plants. The study is divided in two parts: a study of 17 known core circadian genes, and a larger study of 994 genes that were periodic in both datasets44. For each study, BINGO used all available data at once. To identify direct NAM targets, an external input was added with constant value zero for untreated plants and one for treated plants. Links from NAM to potential targets are modelled exactly as other links in the network. The inferred network is compared with a previously reported network45 (note that F14 only has 10 of the 17 clock genes considered here), using a threshold of 0.5 for including links. Euler scheme: the time interval is discretised into points τk, with steps δτk ≔ τk − τk−1, for k = 1, …, M. In the Supplementary Notes, the existence and uniqueness of the solutions of the stochastic differential equation are proven. The distribution p(X∣θ) is derived for the discrete trajectory X, where θ denotes collectively all the hyperparameters; the discrete trajectory is sampled from the continuous version x at the points τk. Given f, the trajectory X is a Markov process, and therefore its distribution factorises into a product of one-step Gaussian transition densities, in which Δτ is a diagonal matrix whose elements are δτk. The distribution p(X∣θ) corresponds to the finite-dimensional distribution of the Euler-discretised trajectory x, and it is a proper finite-dimensional approximation of the distribution of x.
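The Euler discretisation of an SDE dx = f(x)dt + dw can be sketched as follows (a scalar toy example with a linear drift of the form b − ax; step size and noise level are illustrative):

```python
import math
import random

def euler_maruyama(f, x0, dt, n_steps, sigma=0.1, seed=0):
    """Euler discretisation of dx = f(x) dt + sigma dw:
    x_{k+1} = x_k + f(x_k) * dt + sigma * sqrt(dt) * N(0, 1)."""
    rng = random.Random(seed)
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x = x + f(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# linear degradation toward b/a = 2 (cf. the mean function m(x) = b - a*x)
traj = euler_maruyama(lambda x: 2.0 - 1.0 * x, x0=0.0, dt=0.01, n_steps=2000)
```

The trajectory relaxes toward the equilibrium b/a = 2 and then fluctuates around it with a small stationary variance.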
The measurement data are Y = (y1, …, yN), where yj is assumed to be a noisy sample from the continuous trajectory x, that is, yj = x(tj) + vj, and R = diag(r1, …, rn) collects the measurement noise variances. The intention is to draw samples from the parameter posterior distribution using an MCMC scheme; therefore, the posterior distribution is needed only up to a multiplicative constant. Denoting the hyperparameters collectively by θ, the hyperparameter posterior distribution satisfies p(θ∣Y) ∝ p(Y, θ) = ∫ p(Y, x, θ)dx = ∫ p(Y∣x, θ)p(x∣θ)p(θ)dx. Here p(Y∣x, θ) is the Gaussian measurement error distribution, p(x∣θ) is approximated by p(X∣θ) derived above, and p(θ) is a prior for the hyperparameters. This prior consists of independent priors for each parameter. The integration with respect to the trajectory x is done by MCMC sampling. The function fi has mean mi(x) = bi − ai xi, where ai and bi are regarded as hyperparameters corresponding to mRNA degradation (ai ≥ 0) and basal transcription (bi ≥ 0), and the squared exponential covariance ki(x, x′) = γi exp(−½ Σj βi,j (xj − x′j)²) with γi > 0 and βi,j ≥ 0. The network inference is based on estimating the parameters βi,j. If βi,j > 0, the function fi can depend on xj; in the context of GRN inference, this indicates that gene j is a regulator of gene i. Consider then the original problem, that is, estimating the hyperparameters from given time series data. To obtain zero-preferring priors for the βi,j, indicator variables are used48: each of them is represented as a product βi,j = Si,j Hi,j, where Si,j ∈ {0, 1} and Hi,j ≥ 0. The state of the sampler consists of the indicator variable S, the hyperparameters Hi,j, γi, ri, qi, ai, bi, and the discrete trajectory X.
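The indicator representation βi,j = Si,j · Hi,j can be sketched as follows; the independent Bernoulli sampling of S is only an illustrative stand-in for the actual MCMC moves on the indicator matrix:

```python
import random

def effective_beta(S, H):
    """beta_ij = S_ij * H_ij: the candidate link j -> i is switched on only
    when the indicator S_ij is 1; H_ij carries its magnitude."""
    return [[s * h for s, h in zip(srow, hrow)] for srow, hrow in zip(S, H)]

def sample_indicators(n, p_link, rng):
    """Draw an n x n indicator matrix with each link present independently
    with probability p_link (illustrative stand-in for the sparsity prior)."""
    return [[1 if rng.random() < p_link else 0 for _ in range(n)] for _ in range(n)]

rng = random.Random(42)
S = sample_indicators(3, 0.3, rng)
H = [[0.8, 0.3, 0.1], [0.5, 1.2, 0.4], [0.2, 0.9, 0.7]]
beta = effective_beta(S, H)
```

Averaging many such indicator samples over an MCMC run yields, for each link, the posterior probability that it is present.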
The trajectory is sampled using a Crank–Nicolson sampler50. The output of the method is, for each pair (i, j), the posterior probability that the element βi,j of the matrix is not zero, given the data Y. The full algorithm is presented in the Supplementary Notes. For qi, ri, and γi we use noninformative inverse Gamma priors. For Hi,j, ai, and bi we use Laplace priors, and for S we use a prior that penalises ∥S∥0, the number of ones in S. This prior means that the existence of a link is a priori independent of the existence of other links, and each link exists with a prior probability controlled by the parameter η ≥ 0, which can be set to obtain a desired sparsity level for the samples. It is the only user-defined parameter in the method, save for discretisation steps and sampler step lengths. Prior distribution specifications are discussed in more detail in the Supplementary Notes. Several time series experiments, including knockout/knockdown experiments, can easily be incorporated by concatenating the time series. Steady-state measurements xss are included by adding them into the dataset as points where f(xss) = 0; these procedures are described in the Supplementary Notes. The data should be properly normalised55: in particular, measurements between time points should be comparable, meaning that normalisation with respect to housekeeping genes should be carried out. By default, the BINGO code normalises the data by scaling the dynamical range of each gene. Through the mean function mi(x) = bi − ai xi, a constant shift of the data has an effect on the priors for ai and bi. A nonlinear transformation has a more intricate effect: say zi = g(xi) with some smooth, invertible function g; if dxi = fi(x)dt + dwi, then by Itô’s formula the drift of zi mixes fi(x) with derivatives of g. Pre-processing steps often include background removal and nonlinear transformations. Moreover, microarray data are based on luminescence measurements, which have a nonlinear dependency on the mRNA concentration. Constant shifts of the data have very little effect on BINGO.
As a general rule, BINGO should work best when the data correspond as closely as possible to the actual expression levels. However, if the data are very spiky, that is, concentrations peak very high at a few measurement times, then a log-transformation might be beneficial, since BINGO assumes smooth dynamics functions. Further information on research design is available in the Supplementary Information."}
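The range scaling and the optional log-transform mentioned above can be sketched as follows (the exact scaling used by the BINGO code may differ, and the pseudo-count is an assumption to avoid log(0)):

```python
import math

def scale_dynamic_range(series):
    """Scale one gene's time series to [0, 1] by its observed dynamic range
    (a minimal sketch of range normalisation)."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def log_transform(series, pseudo=1.0):
    """Optional log-transform for spiky data; the pseudo-count avoids log(0)."""
    return [math.log(v + pseudo) for v in series]

scaled = scale_dynamic_range([2.0, 4.0, 10.0])
damped = log_transform([0.0, 99.0])
```

The log-transform compresses high spikes, which matches the smoothness assumption on the dynamics function.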
+{"text": "Personalized cancer treatments show decreased side-effects and improved treatment success. One aspect of individualized treatment is the timing of medicine intake, which may be optimized based on the biological diurnal rhythm of the patient. The personal biological time can be assessed by a variety of tools not yet commonly included in diagnostics. We review these tools with a focus on their applicability in a clinical context. Using biological samples from the patient, most tools predict individual time using machine learning methodologies, often supported by rhythmicity analysis and mathematical core-clock models. We compare different approaches and discuss possible promising future directions.Tailoring medical interventions to a particular patient and pathology has been termed personalized medicine. The outcome of cancer treatments is improved when the intervention is timed in accordance with the patient\u2019s internal time. Yet, one challenge of personalized medicine is how to consider the biological time of the patient. Prerequisite for this so-called chronotherapy is an accurate characterization of the internal circadian time of the patient. As an alternative to time-consuming measurements in a sleep-laboratory, recent studies in chronobiology predict circadian time by applying machine learning approaches and mathematical modelling to easier accessible observables such as gene expression. Embedding these results into the mathematical dynamics between clock and cancer in mammals, we review the precision of predictions and the potential usage with respect to cancer treatment and discuss whether the patient\u2019s internal time and circadian observables, may provide an additional indication for individualized treatment timing. Besides the health improvement, timing treatment may imply financial advantages, by ameliorating side effects of treatments, thus reducing costs. 
Summarizing the advances of recent years, this review brings together the current clinical standard for measuring biological time, the general assessment of circadian rhythmicity, the usage of rhythmic variables to predict biological time, and models of circadian rhythmicity. Cancer treatment typically aims to eradicate cancer cells, for example by inducing DNA damage, impairing DNA repair mechanisms and triggering cell death pathways, while keeping side-effects at bay. Recent research shows that the efficacy of medication is influenced by the timing of the administration, known as chronotherapy. Circadian rhythms (circa dies, about a day) are generated by an endogenous timing mechanism and differ between subjects. This timing mechanism is sensitive to external signals such as light, temperature, food intake or a cup of coffee, thus aligning (entraining) the organism to the geophysical time. A circadian oscillation is characterized by its phase, that is, the time of the first maximum of expression (peak of expression) within 24 h. In practice, there is no agreed-upon definition of circadian time. In principle, circadian time should denote the absolute phase of the central pacemaker, which is however not directly measurable in humans. In lieu, the phases of several physiological oscillations, as reviewed below, are used to define circadian time; most common are definitions based on a subject’s melatonin profile. In mammals, the circadian timing system is hierarchically regulated with two main levels of control, the central pacemaker clock and the peripheral clocks, located in most peripheral tissues including heart, liver, kidney and skin, controlled by a complex network of interacting genes and proteins. The core transcription/translation feedback loop is complemented by a secondary loop regulating BMAL1 via the nuclear receptors RORA, RORB, RORC and REV-ERBα, REV-ERBβ (REV-ERB nuclear orphan receptors). The interplay between these two transcription/translation feedback loops orchestrates the circadian oscillatory mechanism in cells.
The mechanistic insights on the TTFLs and the discovery of the molecular mechanisms controlling circadian rhythms led, in 2017, to the Nobel Prize in Physiology or Medicine awarded to Jeffrey C. Hall, Michael Rosbash and Michael W. Young. The influence of the circadian clock on health and disease has been extensively reviewed in recent work. The clock regulates processes such as DNA repair, the cell cycle, cell motility, as well as protein and macromolecule integrity, in animal models ranging from mice to the baboon Papio anubis. The circadian clock plays an instrumental role in human health and disease, as described in the Introduction. Due to its function as a regulator of physiology and behavior in synchrony with external environmental cues, the circadian machinery is responsible for generating and maintaining proper organ functionality, via regulating gene and protein activity. It is of paramount importance to develop robust methods to monitor and accurately characterize the circadian time in humans. Multiple approaches can be used to tackle this aspect, each with its particular advantages and limitations. The term “chronotype” is used to denominate the relative phase of entrainment of the individual to the environment, that is, the temporal difference between the internal timing and the external zeitgeber. The chronotype attempts to classify a subject as a “morning” or “evening” person. While questionnaires serve as a quick and simple method to assess chronotypes and determine morningness-eveningness preferences, they lack the robust objective perspective of an experimental method and do not provide insights into the molecular basis leading to the different chronotypes. To overcome these limitations, multiple experimental techniques using biological samples have been developed and provide a more comprehensive and objective perspective.
With current advancements in technology and in particular in wearable devices, more attention is being paid to using wearables to assess circadian and ultradian rhythms in humans. A majority of genes show circadian oscillations in mouse tissues (see CircaDB, http://circadb.hogeneschlab.org/), and 8 genes show circadian oscillations in human tissues, which may vary from tissue to tissue and patient to patient and thus need to be tested individually. The recent advancements in the understanding of the molecular mechanisms underlying the circadian effects and the accumulating evidence on the fundamental role of the biological clock in health and disease have driven researchers to further investigate the advantages of circadian regulation in a clinical setting. A broad range of treatments for various medical conditions has been investigated in recent years, including allergy and arthritis. As chronotherapy capitalizes on circadian rhythmicity, an adaptation of treatment to personalized circadian time instead of to external time should further strengthen its advantages. Data-driven computational predictions are increasingly used in medical care; among the computational methods used are machine-learning algorithms. A personalized prediction of circadian time must be based on some form of patient data. To ensure a sufficient amount of information, two different types of data can be used: low-dimensional data with a high temporal resolution, or high-dimensional data with a lower temporal resolution. High temporal resolution can be gained by wearable technical devices that continuously measure physiological data. Though these approaches result in relatively precise predictions of the circadian time, they come with their own limitations. An alternative approach is to use more complex data with a low temporal resolution, often just a single-time-point measurement. Gene expression data is sufficiently complex and can be extracted from blood, saliva or hair follicle samples.
To illustrate the relationship between the frequency of data collection and the robustness of rhythmicity detection, we used a previously published RNA-seq data set from the human colorectal cancer cell line SW480, with 30 h time-course data collected at a 3 h sampling interval (NTimepoints = 11). We then subsampled the data to a sampling interval of 6 h (NTimepoints = 6); in this setting, BMAL1 was not detected as significantly oscillating (p = 0.14). Additionally, technical and biological replicates can increase the robustness of the results and the accurate assessment of statistical analysis. In this section, we aimed to extract rhythmic profiles in different experimental set-ups and to analyze how rhythmicity detection is affected by the sampling interval and the number of data points. The sampling interval and the number of data points included in the experimental design are directly correlated to the statistical significance of the results: an increase in the number of data points results in lower p-values. Next, we analyzed the differential output between the different methods used for detecting rhythmic gene expression. The choice of the particular computational algorithm for the detection of cycling genes and transcripts is a crucial aspect in the analysis of the circadian transcriptome. There are multiple algorithms available in the literature for the analysis of time-series data, which recognize different sets of genes as oscillating. Gene expression values are provided as input for all the algorithms in order to study changes occurring at the transcriptome level over time. These inputs are then processed based on either frequency-domain or time-domain approaches. JTK applies a ranking to gene expression values.
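Harmonic (cosinor) regression with a fixed 24-h period, as used in rhythmicity analysis, can be sketched as follows on a noiseless toy series (3-h sampling, loosely mirroring the SW480 setting):

```python
import numpy as np

def harmonic_fit(t, y, period=24.0):
    """Cosinor regression y(t) ~ m + a*cos(wt) + b*sin(wt) with a fixed period;
    returns the mesor, amplitude and peak phase (in hours)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    phase = (np.arctan2(b, a) / w) % period   # time of the fitted maximum
    return m, amplitude, phase

t = np.arange(0.0, 30.0, 3.0)                               # 3-h sampling
y = 5.0 + 2.0 * np.cos(2 * np.pi / 24 * (t - 8.0))          # peak at t = 8 h
mesor, amp, phase = harmonic_fit(t, y)
```

On noiseless data the fit recovers mesor 5, amplitude 2 and peak phase 8 h exactly; with real data the residual sum of squares feeds the significance test.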
It uses the Jonckheere–Terpstra trend test, which aims at the detection of a repetitive change between different experimental groups based on a dependent variable, such as mRNA expression values, and integrates time as an independent variable. Kendall’s tau method is used for the assessment of rank correlation between two independent quantities. Based on the pre-defined period length, JTK identifies an ideal combination of phases and periods to characterize the cycling patterns; e-JTK extends this approach with empirical p-value estimation. ARSER uses an autoregressive spectral-domain approach, which detects and outputs oscillation parameters. It considers signal-to-noise (S/N) levels and uses harmonic regression to model the changes over time. In recent years, deep neural networks (DNNs), such as BIO_CYCLE, have been used for the detection of circadian rhythmicity in several studies. Each of the above-mentioned methods has different pros and cons, which should be carefully evaluated before application. One of the relevant aspects for the accurate assessment of circadian rhythms relies on how the algorithms handle noisy data, a problem commonly encountered in biological data. JTK_CYCLE performs well against a low signal-to-noise (S/N) ratio and can be favored when there are outliers in the data set. It is also widely used as a comparison for the assessment of prediction accuracy in machine learning studies. ARSER is less affected by the level of noise and curve-type bias, due to the combination of time-domain and frequency-domain analyses that underlie the algorithm, but cannot deal with uneven sampling or with missing values. RAIN, which applies a ranking approach to the real gene expression values, works well for the detection of rhythmic genes and also considers non-symmetrical waveforms. However, it is not as robust as parametric approaches (such as harmonic regression) in terms of the identification of phases.
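The core idea behind JTK_CYCLE, rank-correlating the series against phase-shifted cosine references via Kendall's tau, can be sketched in miniature (no tie correction, period scan or p-values, unlike the real algorithm):

```python
import math

def kendall_tau(x, y):
    """Kendall's rank correlation between two sequences (toy version, no tie correction)."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (n * (n - 1) / 2)

def jtk_like_score(y, t, period=24.0, n_phases=8):
    """Correlate the series against cosine references over a grid of phases
    and keep the best match (max tau, best-matching phase)."""
    best = (-1.0, None)
    for k in range(n_phases):
        phi = k * period / n_phases
        ref = [math.cos(2 * math.pi * (ti - phi) / period) for ti in t]
        tau = kendall_tau(y, ref)
        if tau > best[0]:
            best = (tau, phi)
    return best

t = [i * 3.0 for i in range(10)]
y = [5 + 2 * math.cos(2 * math.pi * (ti - 6.0) / 24) for ti in t]
tau, phase = jtk_like_score(y, t)
```

The best-matching reference identifies the peak phase (6 h here) with a high rank correlation.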
LS, RAIN, e-JTK and BIO_CYCLE can work on unevenly sampled time-series data and can be favored for data sets with irregular sampling intervals. For the comparison of the different computational algorithms used in rhythmicity analysis, we next plotted the number of rhythmic genes detected in the above-described time-course data. Although LS was tested on the same data set, no significant oscillations were detected; the same had been previously observed in another time-course data set using LS. A single method cannot be easily favored, as rhythmicity detection also depends on the filters applied to the data set while defining the rhythmic genes, for example the p- and q-value thresholds chosen for a particular rhythmicity method. Recent improvements in high-throughput screening and computational efficiency contributed to the application of machine learning in various areas of medical research. The molecular timetable (MTT) method relies on a set of time-indicating genes, including NR1D1, NR1D2, BMAL1 and PER2. The authors evaluated the accuracy of the MTT algorithm by measuring the absolute difference between the experimental sampling time and the estimated circadian time. The accuracy of MTT directly correlates with the number of identified rhythmic genes from genome datasets (>100 genes). Later, the MTT method was adapted to construct molecular timetables of blood metabolites using metabolomics data of mouse plasma. TimeTeller uses the combined expression of 10 clock genes (BMAL1, NPAS2, PER1/2/3, NR1D1, NR1D2, CHRONO, Thyrotrophic Embryonic Factor (TEF) and D-site-Binding Protein (DBP)) to calculate a likelihood curve, which depicts a single phase from the expression of the 10 genes. The clock dysfunction metric is then calculated as the ratio of the phase predicted by TimeTeller to the original time point of sample collection. Lower and higher clock dysfunction metrics represent a functional and a dysfunctional clock, respectively.
Other machine learning based algorithms, like “BIO_CLOCK”, were developed to predict the time at which an experiment took place. In the past years, different machine learning based algorithms have been applied to high-throughput RNA-sequencing and microarray data obtained from different types of in vitro and in vivo mammalian model systems to predict various circadian variables. BIO_CLOCK was trained on the Synth database and on mouse gene-expression data from the CircadiOmics database; it can predict the time (within a 1 h range) at which a particular sample from a transcriptomic experiment was obtained. BIO_CLOCK works best when trained and tested on data generated from the same tissue type; the authors reported a testing error when the model trained on data from one tissue was applied to a different tissue type for predictions. Considering the tissue-specific variation in circadian rhythms, the ZeitZeiger algorithm, which is based on supervised machine learning, was developed using a microarray data set derived from 21 different mouse organs. Often, publicly available human gene expression data sets contain missing values or lack information on the sample collection time. The Cyclic Ordering by Periodic Structure (CYCLOPS) algorithm was built to predict rhythmic patterns from high-dimensional human data containing missing information. To address the heterogeneity of data originating from different animal models and human studies, the partial least squares regression (PLSR) method was developed. The authors have shown that this method predicted the DLMO phase with an error of ≤2 h when compared to the actual DLMO phase and the results obtained with the MTT algorithm.
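The molecular-timetable idea, estimating the circadian phase of a single sample from marker genes with known peak times, can be sketched as follows (toy genes and a brute-force grid search; MTT and ZeitZeiger use many genes and proper statistics):

```python
import math

def predict_phase(expr, reference_phases, period=24.0):
    """Find the candidate time whose cosine templates, peaking at the known
    reference phases, best match the observed expression of the marker genes."""
    w = 2 * math.pi / period
    best_t, best_err = None, float("inf")
    for step in range(240):                 # scan candidate times on a 0.1-h grid
        t = step * period / 240
        err = sum((e - math.cos(w * (t - p))) ** 2
                  for e, p in zip(expr, reference_phases))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

peaks = [0.0, 6.0, 12.0, 18.0]              # hypothetical marker genes, peaks 6 h apart
true_t = 8.0
sample = [math.cos(2 * math.pi * (true_t - p) / 24) for p in peaks]
estimate = predict_phase(sample, peaks)
```

A single multi-gene snapshot suffices because the phase relations between the markers pin down the internal time.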
Besides genomic data, many studies rely on indirect measurements like ambulatory recordings of wrist actigraphy and DLMO to predict circadian time and evaluate misalignments. Actigraphy is another non-invasive method to assess human sleep/wake cycles. A hidden Markov model (HMM), an unsupervised learning approach, was applied to predict sleep/wake times in a multi-ethnic cohort (number of subjects = 43). The information gained regarding predictions of the circadian phase, from established machine learning algorithms using genomic or physiological data, may be enhanced by combining both genomic and physiological data into one prediction model, increasing the prediction power and driving the applicability of circadian information into the clinics. While a machine-learning algorithm typically constructs a relation between a subject’s data and the personalized clock phenotype without detailed consideration of the underlying biological process, biological knowledge on the drug target-specific molecular network can add information and thus improve the predictive power of the model towards its usability in treatment timing. In this section, we review promising mathematical models of circadian biological processes, such as the transcriptional-translational networks underlying circadian rhythmicity, as well as cancer pharmacokinetics and pharmacodynamics, that is, the dynamical action of pharmacological interventions on a molecular level, involving cell metabolism, cell cycle and toxicity.
These types of mathematical models are built using ordinary differential equations (ODEs) and can be used as a pre-processing step for machine-learning methodologies: they can (i) increase the temporal resolution of the measured time series by extrapolating additional data points and thus help to derive peak time and oscillation amplitude, (ii) predict missing data, as a common precondition for machine learning is complete data, or (iii) derive the temporal dynamics of mRNAs or proteins, such as BMAL1 and PER2, that were not measured experimentally but are included in the model. The mammalian clock has been modelled at different levels of detail. An example of a model with few details is the response of the clock network to zeitgebers such as light. Regulatory gene network-based models, or molecular models, for the circadian clock date back to work by Goodwin in 1965. Detailed molecular models of the core-clock are based on interaction networks; fitting such models may yield in silico parameters that involve biologically unrealistic values, and some fitting procedures allow biological constraints to be specified, such as used by Ballesta et al. The step from genes to treatment has been attempted for one cancer medication, irinotecan (CPT11), whose activity is controlled by core-clock-regulated genes. Several therapeutic interventions are likely to be time-of-day-dependent and their efficacy could potentially be optimized by individual tailoring, based on the internal rhythms of the patient. However, multiple bottlenecks need to be overcome in order to advance chronotherapeutic applications. As of 2016, less than 0.2% of all clinical trials registered under https://clinicaltrials.gov included chronobiological aspects.
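The Goodwin 1965 model referenced above is a three-variable negative feedback loop; a minimal sketch with illustrative parameters (Hill coefficient 12 and equal degradation rates, chosen so that the feedback produces sustained oscillations):

```python
def goodwin_rhs(state, n=12, alpha=0.1):
    """Goodwin oscillator: x activates y, y activates z, and z represses x
    through a Hill term (illustrative parameters, not a fitted clock model)."""
    x, y, z = state
    return (1.0 / (1.0 + z ** n) - alpha * x,
            x - alpha * y,
            y - alpha * z)

def rk4_trajectory(rhs, state, dt, n_steps):
    """Classical 4th-order Runge-Kutta integration of an autonomous ODE."""
    traj = [state]
    for _ in range(n_steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        traj.append(state)
    return traj

traj = rk4_trajectory(goodwin_rhs, (0.0, 0.0, 0.0), dt=0.05, n_steps=12000)
late_z = [s[2] for s in traj[8000:]]   # discard the initial transient
```

After the transient, the repressor variable z keeps oscillating instead of settling to the fixed point, which is the qualitative behaviour the clock models above build on.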
The study of the core-clock, clock-controlled genes and their related biological and molecular processes, in human subjects and animal models, has strongly contributed to gaining insights into the circadian dysregulation underlying severe diseases including cancer. At the current state of the art, circadian phase is predicted with an error of 1 to 2 h. To gauge the quality of prediction, this value has to be compared with the error that would result from using the external time directly as a prediction for the circadian time. As the reported range of circadian time is typically less than 6 h, as shown in previous studies, it seems that current predictions improve only moderately on this simple baseline. The endogenous circadian clock regulates numerous processes in our cells. The circadian machinery targets thousands of genes and leads to the oscillatory expression of more than 40% of genes across the different mammalian tissues. At least 50% of these genes are drug targets of FDA (Food and Drug Administration) approved drugs. The potential impact of timing in drug development and treatment cannot be ignored. Yet more research is needed to bring such knowledge into the clinics. Clinical studies with larger cohorts of subjects, stratified according to their circadian profiles, are still lacking. The experimental methods used for sampling biological material and their analysis must be improved to allow for the robust assessment of the biological clock and the subsequent generation of clinically relevant predictions. Another challenge in the field is the lack of one common methodology for the detection of rhythmic patterns. Our analysis using different algorithms for the detection of circadian oscillations in time-course data revealed distinct numbers of rhythmically expressed genes across methods; this may result in irreproducibility of the observed results, which should be addressed in the future.
The usage of machine learning algorithms and predictive mathematical models of regulatory networks in circadian research makes it possible to utilize and extrapolate biological data and to identify predictive circadian parameters. Such knowledge may lead to the identification of biomarkers among circadian expressed genes and can ultimately be used for accurate estimates of drug administration timing. The difference between the core-clock and peripheral clocks poses an open challenge for cancer therapy: while current predictions focus on the central clock, the peripheral clock of the cancerous tissue might be more important for its treatment. As tumor removal is ideally done at a single time point, genomics-based predictions, which only require a single time point, are particularly promising for assessing the relevant tissue-specific internal time. Most studies use machine learning to predict circadian time based on a relatively small training set of a few subjects, with the drawback of low statistical power."}
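The record above notes that ODE models and fitted rhythms are used to derive peak time and oscillation amplitude from sparse circadian time series. As an illustrative sketch only (not the authors' pipeline), a least-squares cosinor fit with a fixed 24-h period recovers mesor, amplitude, and peak time from evenly sampled data; the function name and sample values below are hypothetical.

```python
import math

def cosinor_fit(times, values, period=24.0):
    # Least-squares fit of y = mesor + A*cos(omega*(t - peak_time)),
    # assuming samples evenly cover whole periods so the cosine and sine
    # basis functions are orthogonal (a deliberate simplification).
    w = 2.0 * math.pi / period
    n = len(times)
    mesor = sum(values) / n
    a = 2.0 / n * sum((y - mesor) * math.cos(w * t) for t, y in zip(times, values))
    b = 2.0 / n * sum((y - mesor) * math.sin(w * t) for t, y in zip(times, values))
    amplitude = math.hypot(a, b)
    peak_time = (math.atan2(b, a) / w) % period  # acrophase converted to hours
    return mesor, amplitude, peak_time

# hypothetical transcript sampled every 2 h: mesor 10, amplitude 2, peak at 8 h
times = [2.0 * k for k in range(12)]
values = [10.0 + 2.0 * math.cos(2.0 * math.pi / 24.0 * (t - 8.0)) for t in times]
mesor, amplitude, peak_time = cosinor_fit(times, values)
```

With complete-period sampling the fit recovers the simulated parameters exactly; with real, noisy data the same formulas give least-squares estimates.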
+{"text": "Background: The tumor immune microenvironment (TIME) is an external immune system that regulates tumorigenesis. However, cellular interactions involving the TIME in hepatocellular carcinoma (HCC) are poorly characterized. Methods: In this study, we used multidimensional bioinformatic methods to comprehensively analyze cellular TIME characteristics in 735 HCC patients. Additionally, we explored associations among TIME molecular subtypes, gene types, and clinicopathological features to construct a prognostic signature. Results: Based on their characteristics, we classified TIME and gene signatures into three phenotypes (TIME T1\u20133) and two gene clusters (Gene G1\u20132), respectively. Further analysis revealed that Gene G1 was associated with immune activation and surveillance and included CD8+ T cells, activated natural killer cells, and activated CD4+ memory T cells. In contrast, Gene G2 was characterized by increased M0 macrophage and regulatory T cell levels. Using a principal component algorithm, a TIME score (TS) model including 78 differentially expressed genes was constructed based on TIME phenotypes and gene clusters. Furthermore, we observed that the Gene G2 cluster was characterized by high TS and Gene G1 by low TS, which correlated with poor and favorable prognosis of HCC, respectively. Correlation analysis showed that TS had a positive association with several clinicopathologic signatures and known somatic gene mutations (such as TP53 and CTNNB1). The prognostic value of the TS model was verified using external data sets. Conclusion: We constructed a TS model based on differentially expressed genes and involving immune phenotypes and demonstrated that the TS model is an effective prognostic biomarker and predictor for HCC patients. A tumor is a neoplasm caused by gene mutations and adaptation of resultant mutant cells to the microenvironment. 
HCC is the leading cause of cancer-related morbidity and mortality worldwide, and most cases are associated with cirrhosis related to chronic hepatitis virus infection. In this study, gene expression data were retrieved from public databases and used to analyze 22 TIME immune cell components in 735 HCC patients. Furthermore, three immune phenotypes (TIME T1\u20133) were identified based on the TIME to further evaluate associations among immune phenotypes, genomic characteristics (Gene G1\u20132), prognosis, and clinical features. We developed a TIME score (TS) model with good prognostic potential to be used as an immune biomarker for HCC. We searched public databases for gene expression data and clinical information regarding HCC patients. Six cohort data sets from The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) databases were downloaded. RNA-seq data of 424 HCC patients were downloaded from TCGA using the GDC API programmatic interface. Microarray data set GSE15654 containing data for 216 HCC patients, GSE76427 for 96 HCC patients, GSE14520 for 247 HCC patients and 241 normal controls, GSE36376 for 240 HCC patients and 193 normal controls, and GSE25097 for 269 HCC patients and 243 normal controls were downloaded from the GEO database. All samples from TCGA, GSE14520, GSE36376, and GSE25097 were randomly divided into training and validation sets. The RNA-seq read data from TCGA were preprocessed as follows: (1) HCC samples without clinical data or with overall survival (OS) <30 days were removed. (2) Normal tissue data were eliminated. (3) Gencode V22 annotation was used to convert RNA-seq read data from fragments per kilobase million (FPKM) to transcripts per million (TPM); the distribution of the TPM data was more similar to that of the microarray data than the FPKM data. (4) Genes with a TPM expression value of 0 in more than half of the samples were excluded. 
Microarray data from GEO were preprocessed as follows: (1) Normal tissue data were excluded, and thus only primary tumor data were retained. (2) HCC samples without clinical data or with OS <30 days were excluded. (3) The Bioconductor R package was used to map chip probes to human gene symbols. The distribution of immune cells in the TIME in HCC vs. normal control tissues was estimated using the CIBERSORT algorithm (http://cibersort.stanford.edu/), executed using the LM22 gene signature as a reference and 1,000 permutations; scores of each human immune cell type in the three cohort data sets were calculated accordingly. Unsupervised clustering of TCGA samples and tumor TIME-infiltrating cells was performed using the ConsensusClusterPlus algorithm based on the values obtained from the TIME calculations. Euclidean distance as the similarity measure between clusters and K-means unsupervised clustering were used to estimate the number of TIME clusters. Associations involving genes and TIME-infiltrating cells were explored by first dividing the genes into clusters based on the TIME-infiltrating cells. The DESeq2 tool was used to identify genes that were significantly differentially expressed and related to the TIME clusters in TCGA. Next, differentially expressed genes were selected by excluding genes with an expression value of 0 in >50% of samples. Furthermore, the non-negative matrix factorization (NMF) algorithm was used to perform unsupervised clustering. The PC1*\u03b2 formula was used to define the TIME score model, in which \u03b2 is the multivariate regression coefficient of each group and PC1 is the principal component 1 score of each group. Before construction of the TIME score model, we identified common differentially expressed genes among the TIME clusters by dimensionality reduction. 
These genes were first subjected to univariate Cox analysis, after which a random forest algorithm was used to evaluate the importance of the genes using the R package. Two independent variables were analyzed using Wilcoxon's signed-rank test. Non-parametric testing of three or more sets of data was performed using Kruskal\u2013Wallis tests. The least absolute shrinkage and selection operator (LASSO) and random-forest analyses were used to select suitable immune cell fractions. These immune cell risk scores were used to construct diagnostic models based on the coefficients of each selected marker through a logistic regression algorithm. HCC patients were assigned to high- and low-risk groups using the median value or were adjusted by Z-scores such that >0 and <0 were defined as high- and low-risk groups, respectively. The Kaplan\u2013Meier (KM) method was used to plot survival curves for estimating survival rates of patients, and statistical differences among means were compared using the log-rank test. Immune and stromal scores of each sample were calculated using the ESTIMATE tool employing the R package. Receiver operating characteristic (ROC) curves, generated with the pROC package, were used to determine the sensitivity and specificity of the KM analysis. A diagram showing the association between TIME scores and gene biology was developed using the Corrplot R package. The NetworkD3 R package was used to construct an alluvial diagram of TIME clusters with different gene clusters and survival outcomes. The ComplexHeatmap R package was used to depict the mutational landscape of genes. HCC patients were classified into high- and low-risk groups based on median TIME scores for survival analysis. The limma R package was used to analyze differential expression of TIME cluster genes, and functional enrichment was performed using the clusterProfiler R package. 
All statistical analyses in this study were conducted using either R packages or SPSS software, and P < 0.05 was considered statistically significant. A forest plot was created using the forestplot R package, based on the univariate Cox regression analysis results of each data set. A univariate Cox proportional hazards regression model was used to calculate univariate risk ratios. The statistical significance of normally and non-normally distributed data was calculated using Student's t-test and non-parametric tests, respectively. One immune cell fraction (P = 0.038) and M0 macrophages (P = 0.008) were unfavorable prognostic markers [hazard ratio (HR) >1], whereas CD8+ T cells (P = 0.021) and resting CD4+ memory T cells (P = 0.031) were favorable prognostic markers (HR <1). Correlation analysis further grouped the 22 immune cell categories into four groups. However, it was not clear whether one or several specific immune cells could be used as HCC biomarkers; therefore, we conducted random forest and LASSO analyses. In total, we identified 432 DEGs between TIME T1 and TIME T2 and between TIME T1 and TIME T3 (P < 0.0001). Gene G1 and Gene G2 showed significant differences in the distribution of TIME-infiltrating cells and prognosis; therefore, we further investigated differences in the cellular biological functions involving these genes. We conducted Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis and determined that Gene G1 is associated with immune processes, such as T cell receptor signaling pathways, Th1 and Th2 cell differentiation, immune system function, and complement activation. 
In contrast, most members of Gene G2 are involved in tumorigenesis processes, including the P53 signaling pathway, PI3K-Akt signaling, hepatocellular carcinoma, and apoptosis. To characterize the clusters further, immune checkpoint genes and transforming growth factor/epithelial-mesenchymal transition (TGF/EMT) genes were used; the results reveal differences in gene-expression patterns among gene clusters, TS scores, and TIME phenotypes. We compared known somatic gene alterations exhibiting significant differences in mutation frequency between the high and low TS score groups (P < 0.05); a total of 49 variants were found to be associated with TS scores. The prognostic value of the TS model was verified in external data sets, including GSE76427 (P = 0.04572) and GSE14520 (P = 0.00273). The data supporting this study can be found in the article. The study was approved by the Clinical Research Ethics Committee of the College of Medicine, Zhejiang University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. HD: research design and funding acquisition. WC, XZ, KB, YD, HZ, and JX: acquisition, interpretation, and analyses of data. WC, XZ, KB, and HZ: preparation of figures. WC: writing of the manuscript and language editing. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor declared a shared affiliation, though no other collaboration, with the authors."}
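The record above defines the TIME score as each gene group's PC1 score weighted by its multivariate Cox coefficient (the PC1*\u03b2 formula), with patients then split at the median score for survival analysis. A minimal sketch of that arithmetic follows; the variable names, \u03b2 weights, and patient values are hypothetical, not taken from the study.

```python
def time_score(pc1_scores, betas):
    # TS = sum over gene groups of PC1_g * beta_g (the PC1*beta formula)
    return sum(p * b for p, b in zip(pc1_scores, betas))

def assign_risk(ts_scores):
    # median split into high-/low-risk groups, as used for the KM analysis
    s = sorted(ts_scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    return ["high" if x > median else "low" for x in ts_scores]

# hypothetical patients: two gene-group PC1 scores each, with Cox beta weights
betas = [0.8, -0.5]
patients = [[1.2, 0.3], [-0.4, 1.0], [0.9, -0.2], [-1.1, 0.6]]
scores = [time_score(p, betas) for p in patients]
groups = assign_risk(scores)
```

The sign of each \u03b2 determines whether a gene group pushes a patient toward the high-risk (poor prognosis) or low-risk group.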
+{"text": "The agglutinin-like sequence (ALS) gene family encodes cell-surface adhesins that interact with host and abiotic surfaces, promoting colonization by opportunistic fungal pathogens such as Candida tropicalis. Studies of Als protein contribution to C. tropicalis adhesion would benefit from an accurate catalog of ALS gene sequences as well as insight into relative gene expression levels. Even in the genomics era, this information has been elusive: genome assemblies are often broken within ALS genes because of their extensive regions of highly conserved, repeated DNA sequences and because there are many similar ALS genes at different chromosomal locations. Here, we describe the benefit of long-read DNA sequencing technology to facilitate characterization of C. tropicalis ALS loci. Thirteen ALS loci in C. tropicalis strain MYA-3404 were deduced from a genome assembly constructed from Illumina MiSeq and Oxford Nanopore MinION data. Although the MinION data were valuable, PCR amplification and Sanger sequencing of ALS loci were still required to complete and verify the gene sequences. Each predicted Als protein featured an N-terminal binding domain, a central domain of tandemly repeated sequences, and a C-terminal domain rich in Ser and Thr. The presence of a secretory signal peptide and a consensus sequence for addition of a glycosylphosphatidylinositol (GPI) anchor was consistent with predicted protein localization to the cell surface. TaqMan assays were designed to recognize each ALS gene, as well as both alleles at the divergent CtrALS3882 locus. C. tropicalis cells grown in five different in vitro conditions showed differential expression of various ALS genes. To place the C. tropicalis data into a larger context, TaqMan assays were also designed and validated for analysis of ALS gene expression in Candida albicans and Candida dubliniensis. These comparisons identified the subset of highly expressed C. 
tropicalis ALS genes that were predicted to encode proteins with the most abundant cell-surface presence, prioritizing them for subsequent functional analysis. Data presented here provide a solid foundation for future experimentation to deduce ALS family contributions to C. tropicalis adhesion and pathogenesis. This work highlights how emerging advances in DNA sequencing technology were used to fill key knowledge gaps that have persisted for many years. Adhesion and subsequent colonization provide the opportunity for microbial pathogens to cause disease. Identification of adhesin-encoding genes is facilitated by the availability of genome sequences. An accurate catalog of potential adhesins focuses efforts to characterize protein function and develop therapeutic approaches that could be effective against multiple microbial species. Anti-adhesion therapies have been hailed as superior to traditional antimicrobials because their mechanism of action does not promote antimicrobial resistance. ALS genes encode cell-surface adhesins that recognize a broad variety of peptide ligands. Features such as a secretory signal peptide and a GPI anchor addition site were clues for identifying possible ALS loci. While these techniques could quickly identify potential ALS genes, the large ALS gene size and considerable sequence identity across multiple physical loci in the same species complicated accurate assembly. Computational assemblies broke within the conserved tandemly repeated sequences in the middle of the coding region, or mistakenly placed the 5\u2032 end of one gene onto the 3\u2032 end of another. Even as emerging genome sequencing technologies were applied to pathogenic fungal species, the research community was left without accurate knowledge of ALS gene number or sequence in common fungal pathogens. Like C. albicans, C. 
tropicalis causes superficial mucosal infections, as well as disseminated and deep-seated infections; the rising incidence of azole-resistant C. tropicalis is causing clinical concern. Here, short-read (Illumina MiSeq) and long-read (Oxford Nanopore MinION) datasets were combined to develop a C. tropicalis genome assembly. C. tropicalis strains were authenticated using several methods (data not shown). First, each isolate was streaked on CHROMagar Candida medium (Becton Dickinson) to ensure that it produced the expected blue colony color. Molecular verification of strain identification used PCR primers ITS4 (5\u2032 TCCTCCGCTTATTGATATGC 3\u2032) and ITS5. Karyotypes were resolved on gels and found to be similar across the isolates; methods for the karyotype analysis and examples of the results were published previously. Cultures were grown to saturation (approximately 16 h at 37\u00b0C and 200 r/min shaking) and genomic DNA was isolated. Strain MYA-3404 libraries were constructed and sequenced at the Roy J. Carver Biotechnology Center, University of Illinois at Urbana-Champaign. Data were derived using Illumina (short-read) and Oxford Nanopore (long-read) methods. MiSeq shotgun libraries were prepared with the Hyper Library construction kit (Kapa Biosystems). The library was quantitated by qPCR and sequenced on one MiSeq flowcell for 151 cycles from each end of the fragment using a MiSeq 300-cycle sequencing kit (version 2). FASTQ files were generated and demultiplexed with the bcl2fastq conversion software. MiSeq reads were quality trimmed using Trimmomatic. For Oxford Nanopore long-read sequencing, 1 \u03bcg of genomic DNA was sheared in a gTube for 1 min at 6,000 r/min in a MiniSpin plus microfuge. The sheared DNA was converted to a shotgun library with the LSK-108 kit from Oxford Nanopore, following the manufacturer\u2019s instructions. 
The library was sequenced on a SpotON R9.4 flowcell for 48 h using a MinION MK 1B sequencer. Basecalling and demultiplexing were performed in real time with the Metrichor Agent V2.45.3 using the 1D BaseCalling plus Barcoding for FLO-MIN_106_450bp workflow. Sixty nucleotides were removed from both ends of each Oxford Nanopore read, followed by additional trimming using a Github checkout (commit 92c0b65f) of Porechop to remove adapter sequences, producing data for the C. tropicalis MYA-3404 genome sequence; the sequence was deposited in the NCBI database. Oxford Nanopore reads were then aligned against the assembly using bwa mem to verify the ALS loci. Ambiguities in the genome sequence data were resolved by PCR amplification of the region and Sanger sequencing of the product. Primer design was aided by the PrimerQuest Tool (Integrated DNA Technologies). A C. tropicalis MYA-3404 genome sequence was also noted in the NCBI database. Growth conditions were selected from previous analyses of Als protein production, including serum, which induces C. albicans germ tube formation and its associated differential Als protein production. Filters were flash frozen in dry ice/ethanol and stored at \u221280\u00b0C. Analysis of ALS gene expression in cells from an early-growth-stage culture used YPD (rich) medium. Two identical 250-ml flasks were filled with 100 ml of YPD and inoculated at a cell density of 1 \u00d7 10^6 per ml. Flasks were incubated for 1 h at either 37\u00b0C or 30\u00b0C and 200 r/min shaking. Cells were collected by filtration as detailed above and filters flash frozen and stored at \u221280\u00b0C for RNA extraction. Further analyses of C. tropicalis were intended to examine ALS gene expression during morphological change. Cultures were grown in SD medium to an OD600 of approximately 1.7. Ten milliliters of the SD culture were added to 10 ml of 100% FBS and incubated at 30\u00b0C and 200 r/min shaking for 2 h. The culture was divided in half, cells collected by centrifugation, flash frozen in dry ice/ethanol, and stored at \u221280\u00b0C. 
The remaining 5 ml of the 15 ml SD starter culture was combined with 5 ml of 100% FBS and incubated for 24 h at 30\u00b0C and 200 r/min shaking. Cells were collected and stored as described above. Cultures for all growth conditions were prepared on three different days. Each experimental day had daily replication by creating two unique cultures from two different colonies on the original agar plate. TaqMan assays were designed to specifically detect each of the C. tropicalis ALS genes, as well as to differentiate between alleles of CtrALS3882. To place C. tropicalis gene expression data into a larger context, TaqMan assays were also designed and validated for each ALS gene in C. albicans. TaqMan assays were also designed for C. dubliniensis ALS genes because the literature lacked information about ALS gene expression in this species. Methods for designing and validating TaqMan assays were detailed previously. Assays for the control genes ACT1 and TEF1 used primer/probe sets capable of recognizing all species in the study. The overall strategy for accurate assembly of the ALS loci combined both sequencing datasets: the accurate Illumina data were included to correct the MinION data, which were already recognized as error prone, and the effort was informed by previous publications that mentioned the C. tropicalis ALS family. The C. tropicalis MYA-3404 genome assembly became available in the NCBI database. Subsequent efforts focused on designing PCR primers that anchored CtrALS3882 to an exact physical location, then demonstrating that both alleles occupied the same location, presumably on sister chromosomes. Allele-specific primers were used to specifically amplify CtrALS3882-2. The physical location of each allele was validated by amplifying upstream and downstream fragments for Sanger sequence analysis. 
Examples include the use of primers 12-33F/12-36R and 12-31F/12-58R that amplified the regions upstream and downstream of CtrALS3882-2, respectively, amplification of the region upstream of CtrALS3882-1 with primers 12-33F/12-19R, and amplification of the region upstream of CtrALS3871 with primers 11-26F/11-25R. Sequence polymorphisms unique to each physical location were deduced by comparison among the PCR products and available genome assemblies. Each CtrALS3882 allele was detected in all six C. tropicalis isolates studied, suggesting that heterozygosity at this locus is reasonably common within the species. DNA sequences for each allele in the various isolates are included in the supplementary material. The divergent sequences in the 5\u2032 domain distinguished the two alleles at this locus. In some C. tropicalis MYA-3404 ALS alleles the number of tandem repeat copies was large, and in others it was limited to a single unit (CtrALS1041 and CtrALS2293). The consensus sequence derived from the main, central domain of tandem repeats was similar to those found in previously characterized Als proteins. Comparison of the 5\u2032 domain from each Als protein in C. albicans (Ca), C. dubliniensis (Cd), C. tropicalis (Ctr), Candida parapsilosis (Cp), Candida orthopsilosis (Co), and Candida metapsilosis (Cm) offered additional insight. Real-time RT-PCR analysis of ALS gene expression was pursued to determine if the family was differentially expressed by growth condition and cellular morphology. High gene expression levels might also predict which Als proteins were present in the greatest abundance on the C. tropicalis cell surface, thereby positioned to contribute most to the adhesive function. A unique TaqMan assay was designed for each ALS gene and to differentiate between the alleles of CtrALS3882. Assay primers and probe sequences were placed at a similar location within the 5\u2032 domain for each gene. C. tropicalis MYA-3404 cells were harvested from five different culture conditions, RNA extracted, and cDNA synthesized for TaqMan analysis. Expression was reported as threshold cycle (Ct) values relative to the ACT1 and TEF1 control genes. 
With Ct values that lagged only one to two cycles behind the control gene expression, CtrALS1028 showed high expression levels in RPMI medium and in a 2-h culture of SD + FBS; these expression levels were not significantly different from each other (p = 0.1276). Expression levels were lower in cells from a 16-h YPD culture and the 24-h SD + FBS condition; these values were not significantly different from each other either (p = 0.1245). All other comparisons were significantly different, suggesting differential expression of CtrALS1028 by stage of culture growth, with higher expression levels in cells that were more recently transferred to fresh growth medium. In contrast, CtrALS941 expression was lower at early stages of culture growth and significantly higher as the culture reached saturation. CtrALS3791 showed yet another type of differential expression pattern, with its highest expression in the 1-h YPD culture and significantly lower values in early-stage growth in other media (p < 0.0001). Many genes had an expression pattern like CtrALS1030, expressed at only moderate levels in all five growth conditions; for CtrALS1030, none of the statistical comparisons were significant (p > 0.05), suggesting a lack of differential expression for the growth conditions tested. Color coding highlighted the relationship between Ct values and detectable Als protein on the C. tropicalis cell surface. A potential context was provided by designing TaqMan assays for C. albicans ALS genes. Some genes were differentially expressed at a sufficient level to produce detectable protein in specific growth conditions, while others appeared to produce the proteins that might have the greatest cell-surface presence. For example, CtrALS2293 was transcribed highly in all growth conditions tested. In a 24-h SD + FBS culture, it was likely that CtrAls2293 and CtrAls3882-1 would dominate the cell surface. 
In each growth condition tested, CtrALS3882-1 was more highly expressed than its allele CtrALS3882-2 (p < 0.0001). One larger goal of our work was to understand the cell-surface Als presence on various fungal species that cause candidiasis. In this regard, little attention has been given to the expression of C. dubliniensis ALS genes. C. dubliniensis and C. albicans are closely related species in which ALS genes occupy similar physical loci; some genes are 100% identical within the 5\u2032 domain of the gene. TaqMan assays were designed and used to analyze ALS gene expression in cells from strain CD36 grown using the same conditions applied to C. tropicalis and C. albicans; a single assay detected transcription from Cd64800 and Cd65010. Some C. dubliniensis genes were highly expressed, while others were barely transcribed (Cd86150). Differential expression was observed among the growth conditions tested. The newly validated TaqMan assays were a tool that others may use to explore various hypotheses regarding the C. tropicalis ALS family. One potential experimental question was whether ALS expression patterns varied across C. tropicalis clinical isolates. DNA sequences were derived for the 5\u2032 domain of each ALS gene in the six C. tropicalis isolates in our laboratory collection; sequence mismatches for CtrALS3797 in three additional strains suggested the potential for redesign of that TaqMan assay depending on the strains and biological questions addressed. The availability of Oxford Nanopore MinION technology provided the initial opportunity to incorporate long-read DNA sequence data to resolve the C. tropicalis ALS genes. Despite the insights added by these long-read data, assembling accurate ALS sequences required considerable additional effort. The more recently released PacBio Sequel-based assembly provided an additional point of comparison. ALS nomenclature has not been applied consistently in C. tropicalis, so C. 
tropicalis CTRG_02293 is not \u201cC. tropicalis ALS1\u201d and is definitely not ALST1 as described in the earlier literature on the C. tropicalis ALS family. Previous reports indicated that C. tropicalis ALS gene expression increased in sessile cells; ALST3 had the highest expression across the three experimental conditions, perhaps consistent with the high level of CtrALS2293 expression demonstrated here. Growth conditions were selected deliberately to place C. tropicalis results into a well-characterized context that was developed for studies of ALS gene expression in C. albicans. Analysis of 68 C. tropicalis isolates revealed considerable sequence variation in the coding region, localized primarily to the central tandem repeat domain. The genome assembly described here was completed during 2017, yet the large number of C. tropicalis ALS genes, lengthy tandem repeat regions, and unexpected allelic variation at the CtrALS3882 locus required considerably more effort to resolve. The recent PacBio Sequel/Illumina HiSeq assembly (ASM1317755v1) should further aid resolution of ALS sequences and other gene families that contain conserved, repeated sequences. The genome assembly for Candida tropicalis strain MYA-3404, generated using Illumina MiSeq and Oxford Nanopore MinION data, is available in GenBank under BioProject accession number PRJNA432250, BioSample accession number SAMN08439037, and Genome accession number PQTP00000000. Version 01 of the project has the accession number PQTP01000000 and consists of sequences PQTP01000001\u2013PQTP01000029. All individual ALS gene sequences were deposited in GenBank. Accession numbers are noted throughout the manuscript. LLH conceptualized the study, acquired funding, and was in charge of project administration. VH, CF, and AH conducted formal analysis. S-HO, AI, RR-B, BS, JJ, CF, AH, and LLH performed the investigation. VH, CF, AH, and LLH developed the study methodology. S-HO, AI, RR-B, JJ, VH, CF, AH, and LLH wrote the original draft. 
All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
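The TaqMan comparisons in the record above rest on threshold-cycle (Ct) values reported relative to the ACT1 and TEF1 control genes, where lagging the control by one to two cycles indicates high expression. A common way to turn such values into fold changes is the Livak 2^-\u0394\u0394Ct method, sketched here with made-up Ct values; this is a generic illustration, not the authors' exact calculation.

```python
def fold_change(ct_target, ct_control, ct_target_ref, ct_control_ref):
    # Livak 2^-ddCt: normalize the target gene's Ct to a control gene
    # within each sample, then compare against a reference condition.
    # Lower Ct means more transcript (each cycle is a doubling).
    dct_sample = ct_target - ct_control            # normalize within sample
    dct_reference = ct_target_ref - ct_control_ref  # normalize reference sample
    return 2.0 ** -(dct_sample - dct_reference)

# hypothetical gene: 1 cycle behind the control in condition A,
# 4 cycles behind in condition B -> 2^3 = 8-fold higher in A
fc = fold_change(21.0, 20.0, 24.0, 20.0)
```

Because Ct is logarithmic, a gene only one cycle behind the control is roughly half as abundant as the control transcript, which is why such genes read as highly expressed.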
+{"text": "Co-expression networks tightly coordinate the spatiotemporal patterns of gene expression unfolding during development. Due to the dynamic nature of developmental processes, simply overlaying gene expression patterns onto static representations of co-expression networks may be misleading. Here, we aim to formally quantitate topological changes of co-expression networks during embryonic development using a publicly available Drosophila melanogaster transcriptome data set comprising 14 time points. We deployed a network approach which inferred 10 discrete co-expression networks by smoothly sliding from early to late development using 5 consecutive time points per window. Such an approach allows changing network structure, including the presence of hubs, modules and other topological parameters, to be quantitated. To explore the dynamic aspects of gene expression captured by our approach, we focused on regulator genes with apparent influence over particular aspects of development. Those key regulators were selected using a differential network algorithm to contrast the first 7 (early) with the last 7 (late) developmental time points. This assigns high scores to genes whose connectivity to abundant differentially expressed target genes has changed dramatically between states. We have produced a list of key regulators \u2013 some increasing and some decreasing connectivity during development \u2013 which reflects their role in different stages of embryogenesis. The networks we have constructed can be explored and interpreted within the Cytoscape software and provide a new systems biology approach for the Drosophila research community to better visualize and interpret developmental regulation of gene expression. The increasing accessibility of -omics data drives the development of computational methods to integrate different sources of information and connect the underlying molecular mechanisms to complex phenotypes. 
Among these are co-expression network approaches. Notably, constructing co-expression networks considering all samples under scrutiny has a limited ability to identify condition-specific modules, because such a correlation signal can be diluted by a possible lack of correlation in other conditions. These considerations apply to Drosophila melanogaster embryogenesis, which comprises multiple developmental stages. For example, in the early stages of embryogenesis, the zinc finger class of TFs is predominantly expressed and a large number of TFs are maternally contributed. The Drosophila embryogenesis dataset used in this study was obtained from the NCBI\u2019s Gene Expression Omnibus. Reads were mapped to the BDGP6 fly reference genome and gene expression was estimated as read counts. We considered for the analysis 7,640 genes that presented counts in at least 20 samples and more than 100 counts on average. Data were averaged within each time point and log2 transformed. From the genes that passed quality control, we focused on 3,568 clustered genes. To identify key regulators of Drosophila embryogenesis among the 791 pre-defined regulators mentioned before, we applied the regulatory impact factor (RIF) metrics; regulators with significant scores (t-test P < 0.05) were labeled as \u201ckey.\u201d The dynamic aspects of gene expression during embryogenesis were explored by creating 10 groups of 5 consecutive time points, which were then used to create 10 networks by applying the Partial Correlation and Information Theory (PCIT) algorithm (FDR < 0.05). To further explore the networks, we eliminated connections that were present in six or more networks, as those were considered to be fairly conserved and would not reflect the dynamic aspects of gene expression during embryogenesis. 
On the other side, we kept only connections that appear with the same direction (positive or negative) in at least 3 consecutive networks, to capture meaningful correlation and avoid technical noise. Finally, we focused on correlations involving key regulators and explored the changes between networks over time by creating an animated image in Graphics Interchange Format (GIF). Functional enrichment analysis was performed on the STRINGv10 online platform. By defining 5 consecutive time points as the number of samples used to construct co-expression networks and sliding forward one time point at a time, we generated 10 networks and compared them to recover some of the dynamic aspects of gene expression during Drosophila embryogenesis. The number of consecutive time points and total networks were arbitrary, chosen to generate the highest number of networks while keeping a reasonable number of samples to extract meaningful correlations. Our goal was to demonstrate that changes in gene behavior over time can be captured and add value to the interpretation of the underlying biological processes. However, these parameters can be adjusted according to the biological question and the number of samples under investigation. We compared the characteristics of each network, including the variation in the number of connections per gene. Genes associated with key regulators that increase connectivity over time were enriched for ribosome (FDR = 1.63e-21), spliceosome (FDR = 3.92e-08), and DNA replication. On the other side, key regulators that decrease connectivity over time are more spread among all the genes in network 1. RIF and PCIT are related in the sense that both assess patterns of connectivity via co-expression. By sorting the 59 key regulators according to their connectivity in network 1 (representing the earliest embryogenesis stage), it is possible to notice two blocks of key regulators: those increasing the number of connections over time and those decreasing the number of connections over time.
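The two edge filters described above (discard connections present in six or more of the ten networks; keep only edges whose sign is stable in at least three consecutive networks) can be sketched as below. `dynamic_edges` and its default thresholds are illustrative, not the authors' implementation.

```python
import numpy as np

def dynamic_edges(nets, r_cut=0.9, max_presence=5, min_consecutive=3):
    """Filter edges from a list of correlation matrices (one per window).

    Drops edges present in more than `max_presence` networks (too conserved
    to be dynamic) and keeps only edges whose sign persists for at least
    `min_consecutive` consecutive networks (to avoid technical noise).
    Returns the set of kept (i, j) gene index pairs.
    """
    signs = [np.sign(a) * (np.abs(a) >= r_cut) for a in nets]  # -1 / 0 / +1
    n = nets[0].shape[0]
    keep = set()
    for i in range(n):
        for j in range(i + 1, n):
            s = [int(w[i, j]) for w in signs]
            if sum(v != 0 for v in s) > max_presence:
                continue  # fairly conserved edge: not dynamic
            run = best = 0
            prev = 0
            for v in s:
                run = run + 1 if (v != 0 and v == prev) else (1 if v != 0 else 0)
                prev = v
                best = max(best, run)
            if best >= min_consecutive:
                keep.add((i, j))
    return keep
```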
Tusp (tubby domain superfamily protein) expression was detected in sensory neurons and brain cells during the later stages of Drosophila embryogenesis, which is consistent with our findings showing an increased connectivity of Tusp during late embryogenesis. Other key regulators include slbo (slow border cells) and DCAF12 (DDB1 and CUL4 associated factor 12). The slbo locus is vital for regulated cell migration during Drosophila development, and a null mutation can lead to lethality in late embryonic or early larval life. bcd (bicoid), a gene that has been well studied for its role in anterior-posterior specification during Drosophila embryogenesis, demonstrated the highest connectivity during early embryogenesis, which is consistent with a previous study where bcd mRNA was highly expressed in the early embryo for anterior specification. The Hmx (homeobox) gene has been shown to be expressed in the developing Drosophila brain during early embryonic stages and was suggested to be paramount for the development of the Drosophila central nervous system. The Rfx (regulatory factor X) gene was previously identified as a peripheral neuron marker and can also be found in the brain, although its presence is not restricted to embryogenesis but extends throughout development. We also found the mld gene to decrease connectivity over time. The mld gene is required for the production of ecdysone, a hormone that controls molting during Drosophila larval development. By focusing only on connections involving key regulators and present in fewer than 6 networks, we can observe the changes in the co-expression network over time, with key regulators gaining and losing connections as well as changes in the direction of connections. When a gene is highly connected (significantly correlated with many other genes in the network), it is considered a central regulator of gene expression, since a slight change in its expression will simultaneously influence several other genes.
Such a tightly connected cluster of genes is expected to work coordinately on a specific biological function or pathway relevant to the trait under investigation. The roles of bcd and Tusp at the beginning and at the end of embryogenesis, respectively, are well known and are revealed here by the changes in their number of connections over time. In contrast, combining connections from all 10 networks into a single network hinders the observation of such changes, which consequently complicates the extraction of information regarding the regulatory role of a gene at specific time points. Although considering all time points leads to numerous significant connections that are statistically more robust and can be important for understanding the overall function of a gene, the dynamic aspects expected to be represented in time series data are lost. To compensate for the small number of time points used in each network, our approach focused on connections appearing consistently in at least three consecutive networks. It is important to note that tissue samples collected for any RNA-Seq experiment are prone to bias due to cellular heterogeneity. With a likelihood-based approach, we were able to capture the dynamics of gene networks across different time points in Drosophila embryogenesis, focusing on key regulatory genes. Our approach provides a novel and complementary strategy for understanding the topology of gene networks by sliding smoothly from early to late development. One can focus on specific dynamic aspects, such as genes with increasing or decreasing connectivity over time, or explore conserved mechanisms along the biological process under investigation. Although it is out of the scope of this work to discuss specific biological aspects of Drosophila embryogenesis, we were able to capture known biological signals regarding early and late developmental stages.
Our results recapitulate the known molecular biology of Drosophila embryogenesis and reveal new insights for further studies. Being able to extract such comprehensive information justifies the value of this approach. We anticipate the dynamic investigation proposed here being applied to other time-series -omics data, as a way to further explore regulatory aspects of gene expression changes over time. We argue this approach is preferable to overlaying patterns of differential expression onto static representations of co-expression networks. Publicly available datasets were analyzed in this study. These data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE121160. AR conceived and designed the study. AR, LL, NH, and PA performed formal analysis, investigation and visualization of the presented data. LL and PA wrote the manuscript with contributions from all authors. MF and MN-S reviewed the manuscript and added substantial information and insights. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "IDEA (the Induction Dynamics gene Expression Atlas), a dataset constructed by independently inducing hundreds of transcription factors (TFs) and measuring timecourses of the resulting gene expression responses in budding yeast. Each experiment captures a regulatory cascade connecting a single induced regulator to the genes it causally regulates. We discuss the regulatory cascade of a single TF, Aft1, in detail; however, IDEA contains >\u00a0200 TF induction experiments with 20\u00a0million individual observations and 100,000 signal\u2010containing dynamic responses. As an application of IDEA, we integrate all timecourses into a whole\u2010cell transcriptional model, which is used to predict and validate multiple new and underappreciated transcriptional regulators. We also find that the magnitudes of coefficients in this model are predictive of genetic interaction profile similarities. In addition to being a resource for exploring regulatory connectivity between TFs and their target genes, our modeling approach shows that combining rapid perturbations of individual genes with genome\u2010scale time\u2010series measurements is an effective strategy for elucidating gene regulatory networks.We present A transcriptional induction system is used to conditionally express hundreds of transcription factors in yeast. The resulting time\u2010course transcriptomics data are used to train parametric models and predict regulatory connections between genes. We generated ~\u00a0200 induction experiments in which a single yeast TF was rapidly induced from a \u03b2\u2010estradiol\u2010responsive synthetic promoter, and full transcriptome differential expression was subsequently tracked, typically across eight time points. Such experiments feature the near immediate strong induction of an inducer\u2010driven TF of interest, followed by rapid changes in genes that are directly regulated by these TFs, and later changes of indirectly regulated genes Fig\u00a0A\u2013C. 
Each of 201 genes' native promoters was separately replaced with a β-estradiol-inducible promoter as previously described; a subset of experiments focused on the sulfur regulon and amino acid metabolism was published previously. The inducer-driven signal of interest is relatively sparse and interspersed among ubiquitous noise, governed in part by a mild stress response. The full transcriptional dataset is available (see Data Availability). Across our dataset, most genes' expression does not change in a typical induction experiment, with some notable exceptions; thus, we fit a Bayesian version of the Chechik & Koller (CK) kinetic model to each timecourse. The CK model characterizes a timecourse as a double sigmoid but can be reduced to a simpler sigmoid with fewer parameters. Specifically, the original CK kinetic model contains six parameters, which we reduced to five because the initial amplitude of all timecourses is zero due to normalization. The impulse (double sigmoid) response is ideal for capturing two-transition behavior in biological timecourses: one sigmoid characterizes the onset response and a second sigmoid characterizes the offset response. Sigmoidal responses are summarized with a half-max time constant (trise), an asymptotic expression level (vinter), and a slope parameter (β). Impulses include two additional parameters: tfall, which describes the time when the response returns halfway to its final level, and vfinal, the asymptotic expression level of the impulse. Genes that change early in an experiment are causally responsible for gene expression changes occurring later in the experiment. In a single experiment, however, we would only be able to identify a coexpressed cluster of genes whose expression coincides with a late change, rather than a single candidate regulator.
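The reduced five-parameter impulse (initial amplitude fixed at zero) can be written as a product of an onset and an offset sigmoid, in the spirit of the Chechik & Koller form. The exact parameterization used by the authors may differ, so treat this as a sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def impulse(t, v_inter, v_final, t_rise, t_fall, beta):
    """Five-parameter double-sigmoid impulse with zero initial amplitude.

    The onset sigmoid rises toward the intermediate level v_inter around
    t_rise; the offset sigmoid relaxes toward v_final around t_fall; beta
    controls the steepness of both transitions.
    """
    t = np.asarray(t, dtype=float)
    onset = sigmoid(beta * (t - t_rise))
    offset = v_final + (v_inter - v_final) * sigmoid(-beta * (t - t_fall))
    return onset * offset
```

Setting t_fall far beyond the observed time window recovers the simpler single-sigmoid response.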
To learn direct regulatory relationships from such data, we formulated a set of gene-level regression models that predict the rate of change of each target gene as a sparse linear combination of all genes' expression. Here, yijt is the expression of gene i in experiment j at time t, relative to the control strain at time zero (so yij0 = 1 for all i, j). α represents the linear effect of one transcript on another, and β represents the effect proportional to the target transcript. We allow any transcript to affect any other transcript, and thus we sum over all genes (with index k). Since most genes will not be regulatory, we use L1 regularization (LASSO) to shrink uninformative predictive coefficients to zero. We also enforce a predicted rate of change of zero at time zero, reflecting the pre-induction steady-state assumption. A model that exclusively includes transcriptional regulation is inherently incomplete, and its parameters would be inappropriately contorted to compensate for inexpressible regulation. Regression models that express the measured abundance of a gene of interest based on measured abundances of candidate regulators do not suffer from this problem. As many regression models can be posed, we explored a wide space of model formulations defined by a set of hyperparameters, assessing among other criteria whether model coefficients have predictive power for the genetic interactions of Costanzo et al. We then attributed responses to one or more regulators based on regulators' marginal contributions to the response. Revisiting the Aft1 experiment, this marginal attribution analysis suggests that several regulators operate in a cascade to regulate genes with different kinetics. The synthesis of experiment-level graphs reveals a genome-scale causal expression network that links regulators to their targets. Our modeling results highlight many potential new regulators that we sought to confirm experimentally.
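The L1-shrinkage step can be illustrated with a minimal iterative soft-thresholding (ISTA) solver. In practice one would use an off-the-shelf LASSO implementation; `lasso_ista` here is only a self-contained stand-in for regressing a target gene's rate of change on candidate regulator expression.

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """ISTA for min_w ||Xw - y||^2 / (2n) + lam * ||w||_1.

    X: (n_obs, n_regulators) regulator expression; y: target rate of change.
    The soft-threshold step drives uninformative coefficients exactly to zero,
    mirroring the sparsity assumption that most genes are not regulatory.
    """
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n       # gradient step
        w = w - grad / L
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w
```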
These regulators include both hubs predicted to regulate targets across many experiments and mediators of interesting dynamic phenomena (such as impulses). To validate putative regulators, a separate β-estradiol induction experiment was generated for each of ten candidate regulators. Predicted and measured targets overlapped significantly (P < 10−11 by chi-square and Fisher exact tests), regulator trise values were correlated with target trise values, and induction produced strong changes in the putative regulators' targets, including CYB5 (encodes cytochrome b5) and nearly every ERG gene. To understand regulatory architecture, we require datasets that elicit diverse physiological regulatory responses and possess sufficient information to disambiguate the drivers of each regulatory response. As demonstrated here, synthetic biology holds great promise for creating such datasets and, when combined with new analytical tools, can be utilized to identify new regulators and GRNs. Several results are worth highlighting. First, only a small number of genes annotated as TF-bound based on ChIP respond in a typical experiment, and most responses are previously undescribed, including ~1,700 instances of ephemeral homeostatic impulses. Second, expression variation is associated with a modest number of transcriptional regulators with major effects, while many perturbations elicit minimal transcriptional responses; thirty-eight TFs affected the expression of < 50 genes each. Approaching genome-scale modeling of network regulation from dynamic data yielded insights about how to collect, process, and analyze such data.
Because most genes do not respond in a typical induction experiment, we used hard-thresholding to remove the majority of values in our dataset, leaving ~ 100,000 gene-level timecourses with coherent, biologically feasible patterns of variability. Having identified signal-containing timecourses, we were then able to model these signals using parametric fits and regularized regression. When fitting a genome-scale regression model, we made a number of assumptions about which biological processes to include versus ignore based on our experimental design. In doing so, our model may fail to capture a number of important regulatory phenomena, including complex combinatorial regulation, post-transcriptional regulation, post-translational regulation, localization, and regulation by non-protein species. A synthetic promoter cassette of ~ 2 kilobases (kb) in length was PCR-amplified and introduced into parent strains with homologous recombination using a standard lithium acetate transformation procedure. For the vast majority of strains, clones containing the synthetic promoter were selected on rich medium [YPD containing G418 (200–300 μg/ml)]. Primers were designed using custom software in R such that the cassette was introduced directly between a target gene's first methionine residue and its native promoter, to prevent the removal of any genomic DNA. Synthetic promoters were inserted into the genome without removing native DNA for two reasons. First, we believed that removing a TF's native promoter could disrupt expression of a divergently transcribed gene. Second, binding sites in Saccharomyces cerevisiae need to be within a few hundred base pairs of an ORF to be functional. For each spot, we computed ratio = max(red, C)/max(green, C), where C = 2 in arbitrary units; this minimum value application only affects 0.3% of spots. Typical genes have red and green channel measurements above 200.
These values for each spot serve as the \u201craw\u201d data. For each microarray, we aggregated the data across individual spots. Specifically, for each gene at a given time point within an experiment, we aggregated the spot values and measured the minimum, maximum, median, and standard deviation of the values. In the usual case of two spots, the median is equivalent to the mean and the standard deviation is equivalent to (max − min)/sqrt(2). At this stage, we corrected the most extreme outlier observations. First, for a given sample, we examined the case where, for a given gene, the ratio of the spot values was larger than four. Since the spot values do not agree, we interpolated the value with the geometric mean across bracketing time points (or the neighboring time point in the case of the first or last time point). The second class of outliers is where the median ratio moves by a factor of at least four between two time points and then by a factor of four in the opposite direction at the next time point. In these cases, we again replaced the central point with the geometric mean of the bracketing points. These corrections apply to less than 0.2% of the data. Using the green channel ratio relative to time zero (t = 0), as well as the 30% quantile of the log-ratio, we flagged timecourses where the green ratio quantile exceeded a factor of eight and the log-ratio quantile exceeded a twofold change. These represent cases where the green channel is increasing a great deal when it should be constant. For these rare cases, we repaired the red-to-green ratios by duplicating the time zero green channel across the full timecourse. This affects about two dozen TF–gene timecourses (out of more than a million). In processing gene expression microarrays, crosstalk between red and green channels can occur. When the red channel fluorescence is much larger than the green channel fluorescence, there can be leakage of signal, and the green channel measurement is affected.
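The second outlier rule above (a move of at least fourfold followed by a fourfold reversal, repaired with the geometric mean of the bracketing points) could be implemented as follows; the function name and defaults are illustrative.

```python
import numpy as np

def repair_spikes(ratios, fold=4.0):
    """Replace single-point spikes in a timecourse of positive ratios.

    Where the ratio jumps by >= `fold` between consecutive time points and
    then reverses by >= `fold`, the central point is interpolated with the
    geometric mean of its bracketing neighbors. Repairs are applied in place
    as we scan, so a corrected point is used when testing its successor.
    """
    r = np.array(ratios, dtype=float)
    for i in range(1, len(r) - 1):
        up = r[i] / r[i - 1]
        down = r[i] / r[i + 1]
        spike_up = up >= fold and down >= fold
        spike_down = up <= 1.0 / fold and down <= 1.0 / fold
        if spike_up or spike_down:
            r[i] = np.sqrt(r[i - 1] * r[i + 1])
    return r
```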
To identify instances of this occurring, we first computed the green channel ratio relative to the time zero measurement and then measured the 30% quantile of this value within each timecourse. To determine whether any motif was associated with variation in a kinetic property, each kinetic coefficient was regressed on a motif-by-motif basis using ordinary least squares (OLS) on three summaries of each motif's presence. These three predictors were: the top enriched PWM match for the gene, a binary variable indicating whether one or more motifs were present, and counts of how many PWM matches were found. OLS t-statistic P-values were separately FDR-controlled for each type of motif summary and model type (regulation or impulse). Motif enrichment was assessed in the promoters of regulated genes compared with control promoters (using the same primary vs. control comparisons as for DREME). Briefly, this was done by ordering sequences in descending order of PWM score match and then using a rolling mean estimate of k-mer frequency to estimate the PWM score cutoff where enrichment in the primary sequences decreases to a heuristic 1.5-fold cutoff. Using this approach, each gene responding in a given experiment was summarized based on how many times each motif was detected and what its strongest PWM match was. The regression model combines regulatory potential with variation in regulators through observation-level fitted values (αX) to summarize how regulation unfolded within an experiment (where xij is the abundance of gene i at time j for a given experiment). To interpret regulation, we want to identify the major variable regulators underlying regulatory phenomena of interest, such as the impulse-like dynamics of Aft1 or the variable timing of expression induction or repression.
Since realized regulation is a property of an experiment, the fitted model (αX) informs whether the model collectively predicts regulatory expression changes of interest, and marginal interpretation of components of this model (αixi) can be used to attribute regulation to specific regulators. The whole-cell regression model thus readily provides two important summaries of regulation: the estimated coefficients themselves, and the fitted values used for attribution. To attribute regulation, we first identify instances of regulation that are reasonably predicted by the whole-cell model. Each instance of realized regulation is a change in expression occurring over a period of time. These transitions can be readily understood within the framework of the previously discussed parametric models, since those models indicate which regulatory phenomena to track, and the saturation of the sigmoidal curves implies the period of time over which regulation unfolds: t{sat = x} = tcoef + log(x/(1 − x))/α, where t{sat = x} is the time at which the sigmoid is x saturated and tcoef is the half-max time coefficient of the transition. With this convention, the end-points of each rise and fall phenomenon, tstart and tend, were defined as the times when the rise or fall sigmoid was 5–95% saturated. Model-predicted fold changes over each regulatory interval, fmodel = log2(y[tend]) − log2(y[tstart]), were compared with the observed change in that gene's expression over the regulatory interval, fobserved = log2(x[tend]) − log2(x[tstart]). Since {tstart, tend} will generally occur between time points, linear interpolation of both log2(y) and log2(x) with the two closest time points was used to infer these intermediate expression states.
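Under the stated convention, the 5–95% saturation window of a fitted sigmoid follows directly from the formula above; `t_sat` and `regulatory_interval` are hypothetical helper names.

```python
import math

def t_sat(t_coef, slope, x):
    """Time at which a sigmoid with half-max time t_coef and slope parameter
    `slope` reaches fraction x of saturation: t = t_coef + log(x/(1-x))/slope."""
    return t_coef + math.log(x / (1.0 - x)) / slope

def regulatory_interval(t_coef, slope, lo=0.05, hi=0.95):
    """(t_start, t_end): the interval over which the transition is 5-95% saturated."""
    return t_sat(t_coef, slope, lo), t_sat(t_coef, slope, hi)
```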
To determine when the whole-cell model accounts for an appreciable fraction of observed regulatory changes, we applied a cutoff comparing fmodel with fobserved; for 47,802 responses, the model had some predictive value under this cutoff. Marginal attribution analysis was then used to dissect total model fits into the marginal contributions of each regulator. Here, the marginal attribution ψijkz is the model's predicted proportional control of regulator j over gene i in experiment k for the zth response occurring in that timecourse. ψ values (filtered to ψ > 0.2) were used to interpret the major regulator(s) contributing to observed responses in individual experiments and in the meta-graph of cross-experiment regulation. Datasets and computer code used in this study are publicly available. Microarray data, Gene Expression: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE142864; http://idea.research.calicolabs.com. Computer software, Bayesian Chechik & Koller Model: https://github.com/calico/impulse. Computer software, Dynamical System Model: https://github.com/google-research/google-research/tree/master/yeast_transcription_network. SRH wrote the impulse R package. AB developed the interactive website. RSM conceived and oversaw the study. SRH, EAB, MC, and RSM performed analysis and wrote the paper. MB and MF performed analysis. BJW, GK, DGH, and RSM performed experiments. EAB, MC, and MB developed the dynamical system model. SRH, DGH, BJW, GK, AB, and RSM are employees of Calico Life Sciences LLC. EAB, MC, MF, and MB are employees of Google LLC.
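The ψ filter can be sketched as a share-of-total calculation over per-regulator contributions; `marginal_attribution` and the dict-based interface are illustrative, not the authors' code.

```python
def marginal_attribution(contribs, psi_cut=0.2):
    """Attribute a modeled response to regulators by marginal share.

    contribs: dict mapping regulator name -> signed contribution of that
    regulator's model term over the regulatory interval. Each regulator's
    psi is its absolute share of the total; only shares above `psi_cut`
    are reported as major contributing regulators.
    """
    total = sum(abs(v) for v in contribs.values())
    if total == 0:
        return {}
    psi = {k: abs(v) / total for k, v in contribs.items()}
    return {k: p for k, p in psi.items() if p > psi_cut}
```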
The authors declare that they have no conflict of interest and no competing financial interests from this work. Supplementary materials accompany this article: Appendix, Table EV1, Table EV2, Table EV3, Dataset EV1, Dataset EV2, and the Review Process File."}
+{"text": "Their dynamic changes are termed as RNA life cycle dynamics (RLCD). It is still challenging for the accurate and robust identification of RLCD under unknow the functional form of RLCD. By using the pulse model, we developed an R package named pulseTD to identify RLCD by integrating 4sU-seq and RNA-seq data, and it provides flexible functions to capture continuous changes in RCLD rates. More importantly, it also can predict the trend of RNA transcription and expression changes in future time points. The pulseTD shows better accuracy and robustness than some other methods, and it is available on the GitHub repository ( The response of cell stimulation is mainly manifested in the transcription, processing and degradation levels of RNA . The dynCurrently, experimental techniques based on short pulse labeling, such as 4-thiouridine (4sU), provide the possibility to identify RLCD . IncorpoHere, we developed an R package termed pulseTD that can serve as a powerful tool to identify RNA RLCD based on the pulse model. It can adequately capture the trend of RLCD, which is important to analyze the process of cells from homeostasis to new homeostasis in cell-stimulated responses. More importantly, pulseTD shows better performance in predicting RLCD and gene expression values than other methods.https://www.ncbi.nlm.nih.gov/geo/). The first dataset was an RNA-seq dataset from mouse DC (GSE56977). RNA was sampled from mouse DC every 15 min for the first 3 h of their response to LPS, followed by a short (10 min) metabolic marker pulse(4sU) before the sampling time point to analyze RNA-seq and 4sU-seq bam files, in order to quantify intronic and exonic in Reads Per Kilobase of exon model per Million mapped reads (RPKM), Transcripts Per Kilobase of exon model per Million mapped reads (TPM) or Fragments Per Kilobase of exon model per Million mapped fragments (FPKM) of each gene. Some methods had been added to standardize expression profile data before evaluating RLCD. 
In the cell life cycle, transcription continuously generates new pre-mRNA, which is processed into mature mRNA and eventually degraded. The transcription level of intracellular RNA is in a steady state in the absence of outside stimulation. When some factor (a transient pulse stimulation) disrupts this equilibrium, the rates of genes at different stages change to cushion the stimulus; after a period of time, most cells return to a steady state due to factors such as cell morphology and stress. As mentioned above, the change in gene expression values is mainly the combined result of transcription, processing and degradation, so pulse stimuli directly affect RLCD. We therefore hypothesized that the three processes of gene expression, transcription, processing and degradation, each follow a pulse function f(x) of time. Taking into account the effects of pre-existing RNA, we first estimated global normalization factors in the model: we assumed that the observed total mRNA is proportional to the real mRNA, with three scale factors that are constant within one sample. Random initial values were used for the optimization, and the chi-square test of goodness of fit was used to verify the statistical validity of pulseTD. We assumed that the processing rates are directly proportional to the pre-mRNA expression. Here, the rates of transcription, processing and degradation are defined as the expression level of RNA transcribed, processed or degraded per unit time (unit: RNA/min or RNA/hour). To increase the interpretability of pulseTD and make it easier to compare with other tools, a conversion method was applied to the pulseTD output, expressed in terms of the pre-mRNA expression P(t) and total expression T(t) as functions of time t. This is similar to previous research.
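A common six-parameter pulse shape (baseline, peak plateau, final level, onset time, offset time, steepness) can be written as a product of two sigmoids. Whether pulseTD uses exactly this parameterization is not stated here, so this is a hedged sketch of the general pulse-function idea.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pulse(t, h0, h1, h2, t1, t2, beta):
    """Six-parameter pulse: baseline h0 before t1, plateau h1 between t1 and
    t2, final level h2 after t2, with steepness beta (product-of-sigmoids
    form; h1 must be nonzero)."""
    t = np.asarray(t, dtype=float)
    rise = h0 + (h1 - h0) * sigmoid(beta * (t - t1))
    fall = (h2 + (h1 - h2) * sigmoid(-beta * (t - t2))) / h1
    return rise * fall
```

Applying one such function each to transcription, processing and degradation gives the 18 rate parameters mentioned below.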
To evaluate the effectiveness of the model, we generated simulation data for 1,000 genes by randomly drawing from specific distributions. Transcription, processing, and degradation rates were first generated, as well as randomly generated scale factors. The simulated expression values were then created from the rates using the Runge–Kutta method. Based on previous research, the half-life of RNA is considered to follow a normal distribution. After determining all rates, we used the fourth-order Runge–Kutta method of the R package deSolve to evaluate the expression levels of pre-mRNA, mature mRNA and labeled RNA as a simulation dataset; the initial expression values were randomly selected from the distribution of the observed data. In general, we determined the rate distribution based on the experimental data and generated simulated data parameters based on the mean and variance of the rates. Here, the time range of the experimental analysis was 0–180 min, sampling was performed every 15 min, and the 4sU labeling was performed 10 min before sampling. Finally, we obtained a simulation dataset of 1,000 genes. pulseTD uses the pulse model to predict the steady state of the RLCD. We defined the rates of transcription, processing and degradation each as a pulse function with 6 parameters, for a total of 18 parameters. To standardize RNA expression levels, additional global scale factors needed to be evaluated. For the evaluation of each gene, 100 random initializations were required; we used a multi-threaded approach to reduce runtime. The software supports different ways of evaluating expression levels, such as counts, RPKM, TPM and FPKM. Several published studies have revealed RNA RLCD from different perspectives.
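The simulation step, generating pre-mRNA and mature mRNA trajectories from chosen rates with a fourth-order Runge–Kutta integrator (the role played by deSolve in R), can be sketched in Python. The ODE form below (processing proportional to pre-mRNA, degradation proportional to mature mRNA) is the standard RNA life cycle model, but the exact equations used by pulseTD may differ.

```python
import numpy as np

def rna_ode(t, state, alpha, gamma, delta):
    """dP/dt = transcription - processing; dM/dt = processing - degradation.
    alpha/gamma/delta are callables returning the rates at time t."""
    P, M = state
    return np.array([alpha(t) - gamma(t) * P,
                     gamma(t) * P - delta(t) * M])

def rk4(f, state0, t0, t1, dt, *args):
    """Classic fixed-step fourth-order Runge-Kutta integration."""
    ts = np.arange(t0, t1 + dt / 2, dt)
    out = [np.array(state0, dtype=float)]
    for t in ts[:-1]:
        s = out[-1]
        k1 = f(t, s, *args)
        k2 = f(t + dt / 2, s + dt / 2 * k1, *args)
        k3 = f(t + dt / 2, s + dt / 2 * k2, *args)
        k4 = f(t + dt, s + dt * k3, *args)
        out.append(s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ts, np.vstack(out)
```

With constant rates the trajectories relax to the analytic steady state P = α/γ, M = γP/δ, a quick sanity check for the integrator.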
We compared the following common software tools: INSPEcT, DRiLL, DTA, and pulseTD. The efficacy of pulseTD was first evaluated on the simulation dataset. We calculated the Pearson correlation coefficient (PCC) between the real and pulseTD-estimated transcriptional dynamic rates, and found that the PCC values for transcription, processing and degradation rates were 0.95, 0.95 and 0.77, respectively. To eliminate the bias of our own simulation data, simulation data produced by INSPEcT were also used to evaluate the performance of pulseTD. We used the INSPEcT tool to generate a dataset with two biological replicates containing 10 time nodes; the correlation of the total RNA expression dataset was 0.98. We then used pulseTD to evaluate the RLCD of the biological replicates, where the correlations of transcription, processing and degradation rates were 0.97, 0.90 and 0.92. On a real dataset (GSE59717), the PCC values of transcription, processing, and degradation rates between replicates were 0.95, and the RLCD rate distributions also showed high consistency. We then split the dataset into two parts: we selected the first 5, 7, 9 and 11 time points from the 13 measurement time points as training samples to estimate the rates of RLCD and used the remaining time points (test samples) for prediction. We estimated and predicted the rates of RLCD from the training samples using pulseTD and other methods, then calculated the MSE values between the predicted results and the test samples to evaluate prediction performance. The results showed that pulseTD had lower average MSE values for RLCD rates than the other methods. In the field of bioinformatics, it is important to accurately identify the rates of RLCD and predict transcriptional stability. Few programs can identify the rates of RLCD, and none provides predictions of the dynamic rates and steady state of RLCD. Here we use 4sU-seq and RNA-seq data to analyze and predict RLCD.
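The evaluation metrics used above are straightforward to reproduce; `pcc` and `mse` are generic helpers, not pulseTD functions.

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation coefficient between two rate vectors."""
    return float(np.corrcoef(np.asarray(a, dtype=float),
                             np.asarray(b, dtype=float))[0, 1])

def mse(pred, obs):
    """Mean squared error between predicted and held-out rate values."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return float(np.mean((pred - obs) ** 2))
```

PCC is used to compare estimated rates between replicates (or against ground truth), while MSE scores forecasts on the held-out test time points.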
In summary, based on the pulse model and the biological significance of the RNA life cycle, we developed the R package pulseTD. It only needs the alignment files of 4sU-seq and RNA-seq to calculate the expression values and the pulse model parameters in a simple manner. We recommend min\u2013max normalization when comparing experiments under different conditions, to remove dimensional effects, and logarithmic normalization when analyzing a single experiment, to narrow the range of values. pulseTD can easily evaluate the RLCD at any time point. More importantly, it can predict the trend and the steady state of the transcriptional dynamic rates. It has better accuracy and robustness than other methods. The source code is available on GitHub ("}
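The two normalization options recommended above can be sketched as:

```python
import numpy as np

def min_max_normalize(x):
    # Rescale to [0, 1]; suggested above when comparing different conditions,
    # to remove the dimension of the measurements
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def log_normalize(x, pseudocount=1.0):
    # Compress the dynamic range; suggested above for single experiments
    return np.log2(np.asarray(x, dtype=float) + pseudocount)

vals = min_max_normalize([10.0, 20.0, 30.0])  # -> [0.0, 0.5, 1.0]
```

The pseudocount in the log transform is an assumed detail to avoid log(0); the original text does not specify one.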
+{"text": "Caenorhabditis elegans larvae. We report that oscillations initiate in embryos, arrest transiently after hatching and in response to perturbation, and cease in adults. Experimental observation of the transitions between oscillatory and non\u2010oscillatory states at high temporal resolution reveals an oscillator operating near a Saddle Node on Invariant Cycle (SNIC) bifurcation. These findings constrain the architecture and mathematical models that can represent this oscillator. They also reveal that oscillator arrests occur reproducibly in a specific phase. Since we find oscillations to be coupled to developmental processes, including molting, this characteristic of SNIC bifurcations endows the oscillator with the potential to halt larval development at defined intervals, and thereby execute a developmental checkpoint function. Gene expression oscillators can structure biological events temporally and spatially. Different biological functions benefit from distinct oscillator properties. Thus, finite developmental processes rely on oscillators that start and stop at specific times, a poorly understood behavior. Here, we have characterized a massive gene expression oscillator comprising >\u00a03,700 genes in C.\u00a0elegans. Population\u2010 and single animal\u2010based analyses uncover a gene expression oscillator that may support a developmental checkpoint function. The authors investigate a putative developmental clock in C.\u00a0elegans. For the purpose of this study, and because insufficient information exists on the identities of core oscillator versus output genes, we define the entire system of oscillating genes as \u201cthe oscillator\u201d. We demonstrate that the oscillations are coupled to molting, i.e., the cyclical process of new cuticle synthesis and old cuticle shedding that occurs at the end of each larval stage.
We observe and characterize onset and offset of oscillations both during continuous development and upon perturbation, and find that transitions occur with a sudden change in amplitude. They also occur in a characteristic oscillator phase and thus at specific, recurring intervals. The transitions are a manifestation of a bifurcation, i.e., a qualitative change in behavior, of the underlying oscillator system. Hence, our observations constrain possible oscillator architectures, excluding a simple negative\u2010loop design, and the parametrization of mathematical models. Here, we characterize the recently discovered C.\u00a0elegans oscillator, which functions as a developmental clock whose architecture supports a developmental checkpoint function. Functionally, because of the phase\u2010locking of the oscillator and molting, arrests always occur at the same time during larval stages, around molt exit. This time coincides with the previously reported recurring window of activity of a checkpoint that can halt larval development in response to nutritionally poor conditions. Hence, our results indicate that the oscillator can execute such a checkpoint function. The first time course (TC1) of C.\u00a0elegans larvae covered the first 15\u00a0h of development on food at 25\u00b0C, and the second time course (TC2) covered the span of 5\u00a0h through 48\u00a0h after plating at 25\u00b0C. Strains (this study) include HW1939: EG6699; xeSi437 II; HW2523: EG6699; xeSi440 II; and HW2526: feIs5 V. First, vectors were constructed carrying a destabilized, nuclear GFP reporter and the unc\u201054 3\u2032UTR (amplified from genomic DNA) to yield pYPH0.14 and pMM001, respectively. Second, promoters consisting of either 2\u00a0kb upstream of the ATG or up to the next gene were amplified from C.\u00a0elegans genomic DNA before inserting them into NheI\u2010digested pYPH0.14 or pMM001. PCR primers and resulting plasmids are listed in the supplementary information. Transgenes were integrated at the ttTi5605 locus on chromosome II by following the published protocol for injection with low DNA concentration diluted in S\u2010Basal medium.
Embryos hatched and developed in a 90\u00a0\u03bcl volume. Luminescence data were analyzed using an automated algorithm for molt detection on trend\u2010corrected data as described previously, from the time point at molt exit of the current stage. The duration of the intermolt was quantified as the duration of the molt subtracted from the duration of the larval stage. For statistical analysis, we assumed the durations to be normally distributed and used Welch two\u2010sample, two\u2010sided t\u2010tests in R (version 3.5.1). For RNA sequencing, synchronized L1 worms, obtained by hatching eggs in the absence of food, were cultured at 25\u00b0C and collected hourly from 1\u00a0h until 15\u00a0h of larval development, or 5\u00a0h until 48\u00a0h of larval development, for the L1\u2013L2 time course (TC1) and the L1\u2013YA time course (TC2), respectively. A replicate experiment was performed at room temperature from 1\u00a0h until 24\u00a0h (TC4). RNA was extracted in Tri Reagent and DNase\u2010treated as described previously (Hendriks et\u00a0al). RNA\u2010seq data were mapped to the C.\u00a0elegans genome using the qAlign function (splicedAlignment\u00a0=\u00a0TRUE) from the QuasR package, and reads were counted using htseq\u2010count (version 0.11.2). Counts were log2\u2010transformed. For TC2, lowly expressed genes were excluded (maximum log2\u2010transformed gene expression \u2010 (log2(gene width)\u2010mean(log2(gene width)))\u00a0\u2264\u00a06). This step was omitted in the early time courses because many genes start robust expression only after 5\u20136\u00a0h. Expression data of the dauer exit time course were obtained from published data. To classify genes, we applied cosine fitting to the log2\u2010transformed data of TC2, when the oscillation period is most stable, with fitted coefficients satisfying A\u00a0=\u00a0C\u00a0cos\u00a0(\u03c6) and B\u00a0=\u00a0C\u00a0sin\u00a0(\u03c6) for amplitude C and phase \u03c6. From the linear regression (\u201clm\u201d function of the package \u201cstats\u201d in R), we obtained the coefficients A and B, and their standard errors.
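The cosine fit that yields A and B, and the Taylor-expansion propagation of their errors to the amplitude, can be sketched as follows. This is a minimal NumPy version (the original uses R's lm and the propagate package); the synthetic gene, sampling grid, and error magnitudes are illustrative assumptions:

```python
import numpy as np

def cosine_fit(expr, t, period):
    """Least-squares fit expr ~ A*cos(wt) + B*sin(wt) + const,
    mirroring the lm-based cosine fitting (A = C*cos(phi), B = C*sin(phi))."""
    w = 2 * np.pi / period
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, expr, rcond=None)
    return coef[0], coef[1]

def amplitude_with_se(A, B, se_A, se_B):
    """First-order (Taylor) propagation of the coefficient errors to the
    amplitude C = sqrt(A^2 + B^2), analogous to the 'propagate' package:
    Var(C) ~ (dC/dA)^2 Var(A) + (dC/dB)^2 Var(B)."""
    C = np.hypot(A, B)
    se_C = np.sqrt((A / C) ** 2 * se_A**2 + (B / C) ** 2 * se_B**2)
    return C, se_C

# Synthetic gene: amplitude 1.2, period 7 h, sampled hourly (TP10-TP25)
t = np.arange(10.0, 26.0)
expr = 1.2 * np.cos(2 * np.pi / 7 * t - 1.0) + 5.0
A, B = cosine_fit(expr, t, period=7.0)
C, se_C = amplitude_with_se(A, B, se_A=0.05, se_B=0.05)
# Classify as "oscillating" if C >= 0.5 and the lower CI bound is >= 0
oscillating = (C >= 0.5) and (C - 2.576 * se_C >= 0)  # z for alpha = 0.01
```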
A and B represent the phase and the amplitude of the oscillation; we used amplitude\u00a0\u2265\u00a00.5 as a first classifier. We propagated the standard error of the coefficients A and B to the amplitude using Taylor expansion in the \u201cpropagate\u201d function (expr\u00a0=\u00a0expression(sqrt((A^2)+(B^2))), type\u00a0=\u00a0\u201cstat\u201d, do.sim\u00a0=\u00a0FALSE, alpha\u00a0=\u00a00.01) from the package \u201cpropagate\u201d (version 1.0\u20106), and used the lower boundary (0.5%) of the confidence interval (CI) as a second classifier. Thus, we classified genes with an amplitude\u00a0\u2265\u00a00.5 and a lower CI\u2010boundary\u00a0\u2265\u00a00 as \u201coscillating\u201d, and genes with an amplitude\u00a0<\u00a00.5 or a lower CI\u2010boundary\u00a0<\u00a00 as \u201cnot oscillating\u201d. Here, SStot is the total sum of squares of the peak phase of the L1\u2010YA time course, and SSres (residual sum of squares) is the sum of squares of the phase difference. The density of the genes strongly decreased around an amplitude of 0.5, supporting this threshold. As an alternative approach, we classified oscillating genes using the MetaCycle package (version 1.2.0) in R on the time points of TC2. We applied the meta2d algorithm with a period ranging between 4 and 9\u00a0h, weighted scores based on the P\u2010value of each method to calculate the integrated period length and phase (weightedPerPha\u00a0=\u00a0TRUE), and otherwise default parameters. We classified genes with an amplitude\u00a0\u2265\u00a00.5 and FDR\u00a0<\u00a00.05 as \u201cMeta2D\u201d oscillating genes and compared them to the set of oscillating genes determined by cosine fitting. We then correlated the time points of both time courses against each other using the log2\u2010transformed data with a pseudocount of 8 and Pearson correlation. In general, we saw good correlation between the two time courses, e.g., TP5 correlated well with TP5, etc.
In order to obtain an RNA\u2010seq time course spanning the complete larval development, we fused the L1\u2013L2 time course with the L1\u2013YA time course (TP14\u2013TP48). To decide which time points to choose from the individual time courses, we correlated the gene expression of all genes. Genes were excluded if the absolute value of the difference in mean expression between L2 and L4 normalized by their mean exceeded 0.25, i.e.: abs((L2meanExpr\u2010L4meanExpr)/(0.5*(L2meanExpr+L4meanExpr)))\u00a0>\u00a00.25. Given that oscillating genes were identified based on gene expression in TP10\u2010TP25, when the oscillation period is most stable, some genes showed deviating behavior in the last oscillation cycle, C4. Hence, for quantification of oscillation amplitude, period, and correlation, we excluded those genes. We determined the mean expression levels for each gene over time in oscillation cycles C2 (TP14\u2010TP20), C3 (TP20\u2010TP27), and C4 (TP27\u2010TP36). Gene expression traces were band\u2010pass filtered using a Butterworth filter (\u201cbwfilter\u201d function of the package \u201cseewave\u201d (version 2.1.0)). The bandpass frequency from 0.1 to 0.2 was selected based on the Fourier spectrum obtained after Fourier transform (\u201cfft\u201d function with standard parameters of the package \u201cstats\u201d). As an input for the Hilbert transform, we used the Butterworth\u2010filtered gene expression. The \u201cifreq\u201d function (with standard parameters from the package \u201cseewave\u201d) was used to calculate the instantaneous phase and frequency based on the Hilbert transform. The correlation line plots represent the correlations of selected time points to the fused full developmental time course using a spline analysis from Scipy (v1.2.1). We performed correlation analyses without mean normalization of expression data, and hence, correlation values cannot be negative but remain between 0 and 1.
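The Butterworth-plus-Hilbert phase extraction described above can be sketched in Python with scipy.signal (standing in for seewave's bwfilter and ifreq); the synthetic trace, period, and filter order are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def instantaneous_phase(expr, low=0.1, high=0.2, order=2):
    """Band-pass filter an expression trace (Butterworth) and extract the
    unwrapped instantaneous phase via the Hilbert transform.
    low/high are in cycles per sample; butter() wants them as fractions
    of the Nyquist frequency (0.5 cycles/sample), hence the factor 2."""
    b, a = butter(order, [low * 2, high * 2], btype="bandpass")
    filtered = filtfilt(b, a, expr)  # zero-phase filtering
    return np.unwrap(np.angle(hilbert(filtered)))

# Hourly sampled trace with a 7 h period (~0.143 cycles/sample, inside the band)
t = np.arange(0, 48.0)
expr = np.cos(2 * np.pi * t / 7) + 0.1 * np.random.default_rng(1).normal(size=t.size)
phase = instantaneous_phase(expr)
# The unwrapped phase advances by ~2*pi per ~7 h cycle
```

The instantaneous frequency would follow as the discrete derivative of this unwrapped phase.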
We made this decision because a correlation analysis using mean\u2010centered data, where correlations can vary between \u20101 and +1, requires specific assumptions on which time points to include or exclude for mean normalization, and because it is sensitive to gene expression trends. However, we confirmed, as a proof of principle, the expected negative correlation of time points that are in antiphase when using mean\u2010centered data for oscillating genes. GO\u2010term analysis was performed using the GO biological process complete option from the online tool PANTHER. In order to obtain the enrichment of tissues, we divided the percentage of tissue X among oscillating genes in the tissue\u2010enriched dataset by the percentage of tissue X among all genes in the tissue\u2010enriched dataset and plotted the resulting values. The list of tissue\u2010specific oscillating genes was further used to investigate the peak phases within one tissue by plotting a density plot of the peak phase. To identify the first peak of oscillating genes, we used a spline analysis from Scipy (v1.2.1) together with Pearson correlation (\u201ccor\u201d function, method\u00a0=\u00a0\u201cpearson\u201d, of the package \u201cstats\u201d in R). To call the peaks of the interpolated correlation lines, we applied the \u201cfindpeaks\u201d function of the package \u201cpracma\u201d (version 2.2.5) on the interpolated time points 10\u2013185, which cover the four cycles. To find the embryonic time point at which oscillations initiate, we plotted the larval TP in cycle 2 at which the correlation peak occurred over embryonic time. The dauer exit time course TP1\u201315 was obtained from Hendriks et\u00a0al. Single worms were imaged on a CSU_W1 Yokogawa microscope with a 20\u00d7 air objective (NA\u00a0=\u00a00.8) in combination with a 50\u2010\u03bcm disk unit.
For high throughput, we motorized the stage positioning and the exchange between confocal and brightfield. We used a red LED light to combine brightfield with fluorescence without closing the shutter. Additionally, we used a motorized z\u2010drive with 2\u00a0\u03bcm step size and 23 images per z\u2010stack. The 488\u00a0nm laser power for GFP imaging was set to 70%, and a binning of 2 was used. Hatching (t\u00a0=\u00a00\u00a0h) was identified by visual inspection of the brightfield images as the first time point when the worm exited the egg shell. To facilitate detection of transgene expression and oscillation, we generated reporters using the promoters of genes that exhibited high transcript levels and amplitudes, and where GFP was concentrated in the nucleus and destabilized through fusion to PEST::H2B (see strain list above). We placed embryos into chambers containing food (concentrated bacteria HT115 with L4440 vector) and imaged every worm with a z\u2010stack in time intervals of 10\u00a0min during larval development in a room kept at ~21\u00b0C, using a double camera setting to acquire brightfield images in parallel with the fluorescent images. We exploited the availability of matching fluorescent and brightfield images to identify worms by machine learning. After identification, we flattened the worm at each time point to a single\u2010pixel line and stacked all time points from left to right, resulting in one kymograph image per worm. We then plotted background\u2010subtracted GFP intensity values from the time of hatch. For each time point, we scored whether worms were pumping or not (lethargus/molt). In addition to the pumping behavior, we used two further requirements that needed to be true in order to assign the lethargus time span: first, worms needed to be quiescent, and second, a cuticle needed to be shed at the end of lethargus. Usually worms start pumping one to two time points before they shed the cuticle.
This analysis was done manually with the software ImageJ, and results were recorded in an Excel file, where for every time point, the worms\u2019 behavior was denoted as 1 for pumping and as 0 for non\u2010pumping. To determine a possible connection between oscillations and development, we applied error propagation, assuming normal distribution of the measured phases and larval stage durations. Thereby, we exploited the inherent variation of the oscillation periods and developmental rates among worms, rather than experimental perturbation, to probe for such a connection. We define the phase at either molt entry or molt exit from the measured durations, with TO being the period of oscillation, TIM the intermolt duration, and TL the larval stage duration of the respective larval stages; the durations are represented with the mean (\u03bc) and the standard deviation (\u03c3). These calculations result in a phase with mean \u03bc and a standard deviation \u03c3 at molt entry and molt exit, respectively, for each larval stage indicated. Should the two processes be coupled as in scenario 2, we would expect \u03c3observed\u00a0<\u00a0\u03c3calculated. Using single worm imaging data, we compared the absolute time at which we observed a specific but arbitrarily chosen unwrapped phase from the GFP oscillation with the absolute time at which we observed either molt entry or molt exit. The unwrapped phases we chose were 11\u00a0rad for L2 comparisons and 18\u00a0rad for L3 comparisons. We chose these phases because they occurred late in L2 and L3, respectively. The scatterplot reveals a good correlation, with Pearson correlation coefficients exceeding 0.9, calculated using the pandas (v0.24.1) function df.corr(method\u00a0=\u00a0\u201cpearson\u201d). We used linear models to fit the data with the function \u201cregression.linear_model.OLS\u201d from statsmodels.api (v0.10.1), assuming an intercept of 0. From these models, we obtained the slope with 95% confidence intervals.
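The error-propagation test for coupling can be sketched as follows. The exact phase definition is garbled in the source, so the form phi = 2*pi*T_IM/T_O below is an assumption for illustration, as are the numeric durations:

```python
import numpy as np

def phase_sd_uncoupled(mu_im, sd_im, mu_o, sd_o):
    """Expected SD of the molt-entry phase if oscillation and development
    are uncoupled, via first-order error propagation on the assumed form
    phi = 2*pi * T_IM / T_O (T_IM: intermolt duration, T_O: period):
    Var(phi) ~ (dphi/dT_IM)^2 Var(T_IM) + (dphi/dT_O)^2 Var(T_O)."""
    dphi_dim = 2 * np.pi / mu_o
    dphi_do = -2 * np.pi * mu_im / mu_o**2
    return np.sqrt((dphi_dim * sd_im) ** 2 + (dphi_do * sd_o) ** 2)

# Illustrative numbers: 6 h intermolt, 8 h period, ~10% variability in each
sd_calc = phase_sd_uncoupled(mu_im=6.0, sd_im=0.6, mu_o=8.0, sd_o=0.8)
# Coupling (scenario 2) would predict sd_observed < sd_calc
```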
The predicted values from the linear model are plotted in blue with the shaded area corresponding to the 95% confidence intervals. In the oscillator model, x and y are two variables describing the state of the oscillator, and \u03b2 and \u03bb are the Hopf and SNIC parameters, respectively. Default values for \u03b2 and \u03bb were 1 and 0, respectively. The model was integrated using the ODE solver in the Scipy package (v1.3.1). The model can be better understood when the system is transformed into polar coordinates, where r and \u03b8 are the amplitude and phase of the system. For fixed parameter values, this system shows oscillations for positive values of \u03b2 and |\u03bbr|\u00a0<\u00a01, with maximum amplitude given by the radius of the limit cycle, rLC\u00a0=\u00a0\u03b2. At the limit cycle, for a fixed \u03b2, the period of the oscillator depends on \u03bb and diverges as the SNIC bifurcation is approached. To simulate the effects of a Hopf bifurcation under a slowly changing parameter, Equation 1 was simulated with a fixed value of \u03bb\u00a0=\u00a00 and a slowly increasing \u03b2, where k\u03b2 is the rate of change of \u03b2. For illustrative purposes, the value of \u03b2(t) was defined to be 1 when \u03b2(t)\u00a0>\u00a01. All deterministic simulations for a slowly changing \u03b2 were performed with initial conditions of r\u00a0=\u00a010\u22125 and stochastic simulations with a value of r0\u00a0=\u00a00. The initial phase was defined to be \u03b80\u00a0=\u00a0\u03c0/2. Solutions for the amplitude go through an interval where the solution remains close to the steady state and then jump suddenly to a neighborhood of the limit cycle. However, the amplitude approaches the limit cycle only asymptotically, and thus, the system was determined to have reached the limit cycle if the difference between the radius of the limit cycle and the amplitude was sufficiently small, i.e., below a threshold \u03c6\u00a0=\u00a00.01. For the simulations, \u03b2 was fixed at 1, and k\u03bb is the rate of change of \u03bb.
As the effect for \u03bb is symmetric, values were constrained to the positive real numbers including zero; negative values were set to zero. To simulate the effects of a SNIC bifurcation under a slowly changing parameter, Equation 1 was simulated with initial values \u03b80\u00a0=\u00a0\u03c0/2 and r0\u00a0=\u00a01, respectively. The system was initialized at the SNIC bifurcation point on the limit cycle; i.e., the initial conditions for phase and amplitude were defined accordingly. G\u2010JH and YPH performed RNA sequencing time courses. MWMM and YPH analyzed RNA sequencing data. MWMM performed and analyzed luciferase assays. LJMM performed simulations. GB developed the graphical user interface for the luciferase data. YPH acquired and analyzed single worm imaging data. JE wrote the KNIME workflow for the single worm imaging. CT conceived parts of the analysis. HG, MWMM, and YPH conceived the project and wrote the manuscript. The authors declare that they have no conflict of interest. Additional data files: Appendix; Expanded View Figures PDF; Dataset EV1; Dataset EV2; Review Process File."}
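The SNIC behavior described in this Methods section can be sketched in polar coordinates as follows. The normal form below is one common choice and an assumption (the paper's exact equations, kλ ramp, and stochastic variant are not reproduced); it only illustrates how the phase advances for λ < 1 and stalls for λ ≥ 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed polar normal form near a SNIC bifurcation:
#   dr/dt     = r * (beta - r)          (limit cycle at r = beta)
#   dtheta/dt = 1 - lam * sin(theta)
# For lam < 1 the phase advances forever (oscillation); at lam = 1 a fixed
# point is born on the limit cycle (SNIC) and for lam > 1 the phase stalls.
def snic(t, y, beta, lam):
    r, theta = y
    return [r * (beta - r), 1.0 - lam * np.sin(theta)]

def total_phase(lam, t_end=200.0):
    sol = solve_ivp(snic, (0, t_end), [1.0, np.pi / 2],
                    args=(1.0, lam), max_step=0.1)
    return sol.y[1, -1] - sol.y[1, 0]

phase_osc = total_phase(lam=0.5)     # below the SNIC: many full cycles
phase_arrest = total_phase(lam=1.5)  # above the SNIC: phase locks near a fixed point
```

Note that the arrest happens at a specific phase (where sin(theta) = 1/lam), matching the text's point that SNIC arrests occur reproducibly in a characteristic phase.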
+{"text": "Omics technologies have been widely applied in toxicology studies to investigate the effects of different substances on exposed biological systems. A classical toxicogenomic study consists in testing the effects of a compound at different dose levels and different time points. The main challenge consists in identifying the gene alteration patterns that are correlated to doses and time points. The majority of existing methods for toxicogenomics data analysis allow the study of the molecular alteration after the exposure (or treatment) at each time point individually. However, this kind of analysis cannot identify dynamic (time-dependent) events of dose responsiveness.We propose TinderMIX, an approach that simultaneously models the effects of time and dose on the transcriptome to investigate the course of molecular alterations exerted in response to the exposure. Starting from gene log fold-change, TinderMIX fits different integrated time and dose models to each gene, selects the optimal one, and computes its time and dose effect map; then a user-selected threshold is applied to identify the responsive area on each map and verify whether the gene shows a dynamic (time-dependent) and dose-dependent response; eventually, responsive genes are labelled according to the integrated time and dose point of departure.To showcase the TinderMIX method, we analysed 2 drugs from the Open TG-GATEs dataset, namely, cyclosporin A and thioacetamide. We first identified the dynamic dose-dependent mechanism of action of each drug and compared them. Our analysis highlights that different time- and dose-integrated point of departure recapitulates the toxicity potential of the compounds as well as their dynamic dose-dependent mechanism of action. Toxicogenomic methods are widely used for the assessment of chemical hazards and environmental health . 
Direct effects of chemical insults are generally expected to follow a monotonic dose-response alteration resulting in increasing effect as the dose increases until a plateau is reached. Until now, the no-observed-adverse-effect-level and the benchmark dose have typically served as points of departure. However, assaying the mechanism of action (MOA) of exposure at multiple time points is valuable to highlight useful information on the kinetic molecular responses of a biological system. For example, the experimental set-up for the in vivo studies in the Open TG-GATEs involves animal treatment in which each drug is tested at 3 doses at multiple time points. In the present work, we propose a new computational framework, TinderMIX, for dose- and time-dependent gene expression analysis that aims to combine dimensionality reduction, BMD analysis, and polynomial fitting to find groups of genes that show a dynamic dose-response (DDR) behaviour. The integrated modelling used in TinderMIX allows us to interpolate the continuous joint dose-time space and predict the molecular alteration values for the doses and time points not included in the original experiment. Moreover, our approach is importantly able to inform on POD in both the dimensions of the doses and time points, hence resolving at once 2 analytical tasks that, thus far, have only been carried out subsequently to each other. To illustrate our methodology we analysed the gene expression data for cyclosporine A and thioacetamide from the Open TG-GATEs database. The TinderMIX methodology proposed in this study starts from the sample-wise log fold-change of the genes and is able to identify which of them show a DDR behaviour and to estimate their joint dose-time POD.
The methodology is composed of multiple steps that can be grouped into 2 parts: the gene modelling with POD identification, and the labelling of responsive genes. As for the classical BMD analysis, the first step of the gene modelling analysis consists in model fitting. For every gene, linear regression models including first-, second-, and third-order polynomials of dose and time are fitted. This is done to assess the quality of the fit in the first place. If the P-values corresponding to testing these models are not significant, the gene is discarded; otherwise, the best model is selected based on the P-value. The fitting was performed by using the R lm function from the stats package. For each gene, the selected model is used to predict an activity map with the values of a smooth log fold-change function on a grid of 50 \u00d7 50 points covering the entire range of doses and time points tested. This map is represented as a contour plot and used to identify the dose-response area. A desired activity threshold corresponding to a 10% increase/decrease with respect to controls is set to identify the responsive area of each gene time-dose effect map. If a gene does not show an activity satisfying the threshold, it is removed from the analysis. A gene is also removed from the analysis if the responsive area does not include the highest dose of the experimental setting. The selected threshold of 10% is a default threshold used in BMD analysis of transcriptomics data. In the scoring of candidate responsive regions, pn is the number of points included in the region, ntp-md is the number of time points (rows in the gene map) that are included in the candidate region and that include the highest dose, dm is the minimum dose covered by the candidate region, and n is the number of candidate regions found in the gene maps. The optimal region is selected as the one maximizing the score.
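The gene-modelling step just described can be sketched as follows. This is a NumPy sketch with a fixed second-order polynomial (degree selection and the ANOVA nested test are not shown), and the interpretation of the 10% threshold as |log2 fold-change| ≥ log2(1.1) is an assumption:

```python
import numpy as np

def fit_surface(dose, time, fc):
    """Least-squares fit of fc ~ polynomial(dose, time), degree 2."""
    X = np.column_stack([np.ones_like(dose), dose, time,
                         dose**2, time**2, dose * time])
    coef, *_ = np.linalg.lstsq(X, fc, rcond=None)
    return coef

def activity_map(coef, doses, times, n=50):
    """Evaluate the fitted surface on an n x n grid (rows = time,
    columns = dose) and threshold it to get the responsive area."""
    d, t = np.meshgrid(np.linspace(doses.min(), doses.max(), n),
                       np.linspace(times.min(), times.max(), n))
    fc = (coef[0] + coef[1] * d + coef[2] * t +
          coef[3] * d**2 + coef[4] * t**2 + coef[5] * d * t)
    return fc, np.abs(fc) >= np.log2(1.1)  # ~10% change vs. controls

rng = np.random.default_rng(2)
dose = rng.uniform(0, 1, 108)                        # 108 pair-samples, as in the text
time = rng.uniform(0, 1, 108)
fc = 0.8 * dose * time + rng.normal(0, 0.01, 108)    # synthetic DDR gene
coef = fit_surface(dose, time, fc)
grid_fc, responsive = activity_map(coef, dose, time)
```

A gene like this one would be kept, since the responsive area includes the highest dose.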
The active region is further reduced by removing time points that are still active (log fold-change\u00a0>\u00a01.1) outside the optimal region but with a non-monotonic behaviour compared to the one inside the optimal region. We identify the external borders of the segmented responsive area using a trace-boundary algorithm. The dose-responsive front is identified as the smallest dose present in the responsive area for each time point. Moreover, the IC50 front is also computed as the doses that give 50% of the changes in the log fold-change at every time point. The dose-time space is partitioned into 3 \u00d7 3 regions, where each dose can be labelled as sensitive, intermediate, or resilient and each time point is labelled as early, middle, or late. Different strategies to identify the POD of the DDRGs are implemented in TinderMIX. Once the DDRA is identified, the influence of time on the fold-change is studied by analysing the vector field of gradients in the region. For each point, the magnitude and the angle of the gradient are computed and used to evaluate the time-dose response score, where angi and modi are the angle and magnitude of the gradient at the i-th point in the DDRA and n is the number of points in the same region. This score can be used to categorize the genes on the basis of the dose and time effect on their fold-change. According to the time-dose response score, the active genes are divided into 4 groups, corresponding to the 4 quadrants of Cartesian space. In the first quadrant the fold-change increases with both time and dose (0\u00a0<\u00a0std\u00a0\u2264\u00a090); in the second quadrant the fold-change increases with time but decreases with dose (90\u00a0<\u00a0std\u00a0\u2264\u00a0180); in the third quadrant the fold-change decreases with both time and dose (180\u00a0<\u00a0std\u00a0\u2264\u00a0270); in the fourth quadrant the fold-change increases with dose and decreases with time (270\u00a0<\u00a0std\u00a0\u2264\u00a0360).
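The quadrant assignment from the gradient field can be sketched as follows. This follows the quadrant definitions above, but the averaging of angles and the label strings are assumptions (and the capitalization rule for relative dose/time strength is not implemented):

```python
import numpy as np

def time_dose_label(fc_map):
    """Compute the gradient field of a time-dose effect map (rows = time,
    columns = dose), average the gradient angles, and map the mean angle
    to one of the 4 quadrant labels defined in the text."""
    d_time, d_dose = np.gradient(fc_map)
    ang = np.degrees(np.arctan2(d_time, d_dose)) % 360
    mean_ang = ang.mean()
    labels = ["d+t+", "d-t+", "d-t-", "d+t-"]  # quadrants 1..4
    return labels[int(mean_ang // 90)], mean_ang

# Synthetic map increasing with both dose (columns) and time (rows)
dose = np.linspace(0, 1, 50)
time = np.linspace(0, 1, 50)
fc_map = np.add.outer(time, dose)  # fc = time + dose
label, mean_ang = time_dose_label(fc_map)
```

For this map the gradient points at 45 degrees everywhere, so the gene falls in the first quadrant (fold-change increasing with both time and dose).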
The 4 quadrants can be further dissected into 3 smaller sectors: 1 in which the dose has a stronger effect than the time, 1 in which they have the same effect, and 1 in which the time has a stronger effect than the dose. We assign a label to each gene composed of the letters d and t, standing for dose and time, respectively, and a positive or negative sign. If the influence of 1 of the 2 components is stronger than the other, this is highlighted by using capital letters. For example, the label d + T + stands for fold-change increasing with both dose and time, with a stronger effect from the time. KEGG enrichment analysis is performed by using the methodology implemented in the FunMappOne tool; pathways with P-value < 0.05 were considered enriched. Pathological events registered in rats after drug exposure were downloaded from the Open TG-GATEs database. Raw microarray transcriptomic data for cyclosporine A and thioacetamide exposure in in vivo rat liver tissue were also downloaded from the Open TG-GATEs dataset. We computed the log2 expression values of each pair of treated and control samples; in this way, we obtained 108 pair-samples and 11,721 genes to be used in the TinderMIX analysis. We developed a novel dose and time integrative modelling strategy for transcriptomics data able to identify molecular features with a DDR alteration pattern. We showcased it on the in vivo gene expression data for 2 drugs (cyclosporine A and thioacetamide) available in the Open TG-GATEs dataset. The TinderMIX methodology allows the study of the distribution of the gene log fold-change with respect to both dose and time. For this purpose, TinderMIX implements a strategy similar to the classical BMD analysis but transposed to the joint dose-time space. Genes were retained with P-value < 0.01 and best modelling performance according to the nested model hypothesis test, as performed by ANOVA. Furthermore, for the genes that pass the goodness-of-fit filtering, their dynamic dose-responsiveness is investigated.
Hence, TinderMIX maps the 3D optimal model into its corresponding time-dose effect map by means of contour plots. In both drugs in our case study, the number of DDRGs decreases with increasing threshold. By identifying the first dose and time point present in the DDRA, TinderMIX assigns each gene its POD label. Even though the labels that TinderMIX assigns inform on the gene POD, they do not give insights on the relative impact of the dose and time on the variation of the log fold-change. To dissect these effects and the relative contribution of dose and time to the gene alteration, TinderMIX weighs their effect in the time-dose effect maps. To characterize the biological processes underlying the POD labels assigned to the DDRGs, we performed KEGG enrichment analysis of the genes belonging to the 9 label categories previously identified. Cd4, one of the main markers of T lymphocytes, was labelled by TinderMIX as a sensitive-early DDR gene. On the other hand, thioacetamide exposure has been associated with liver toxic effects and inflammatory cell infiltration, involving the apoptosis-related genes Fadd, Fas, Bad, and Bid. Consistent with a hepatotoxicity-induced effect, the Aldh2 gene, which is known to be altered in patients with chronic hepatitis and non-alcoholic cirrhosis, was also found deregulated. Cd4 is marked as sensitive-early, and it is downregulated at early time points; the log fold-change increases at middle time points, and the gene is eventually upregulated at late time points. The expression pattern of this gene changes over time in concordance with the known MOA of cyclosporine A. The Aldh2 gene is labelled as sensitive-middle because the DDRA begins at the lowest doses and middle time points. Operating system: platform independent. Programming language: R. Other requirements: Java. License: GNU GPL (version 3 or greater). RRID:SCR_018364. Sample data: https://github.com/grecolab/TinderMIX/tree/master/sample_data.
The data used to showcase the TinderMIX methodology are available in the git repository. Further supporting data and snapshots of our code are openly available in the GigaScience repository, GigaDB. Additional File 1: TinderMIX pseudo-code. Additional File 2: List of dynamic dose-responsive genes for cyclosporin A. The file contains the following information: (1) dose_time_comparison: specifies whether the activation is more dependent on the dose or the time; (2) Gene Description: a text description of the gene; (3) Gene Symbol; (4) Joint Label: the POD label; (5) gene_sign: specifies whether the gene activation is increasing or decreasing with respect to the dose; (6) MeanFC: mean log fold-change of the gene in the POD area; (7) adj.pval: adjusted P-value of the fitted polynomial model. Additional File 3: List of dynamic dose-responsive genes for thioacetamide, with the same columns as Additional File 2. Additional File 4: Sensitivity analysis of the activation threshold. Additional File 5: List of pathways for cyclosporine A. Additional File 6: List of pathways for thioacetamide. Abbreviations: ANOVA: analysis of variance; BMD: benchmark dose analysis; DDR: dynamic dose response; DDRA: dynamic dose responsive area; DDRG: dynamic dose responsive gene; IC50: half maximal inhibitory concentration; KEGG: Kyoto Encyclopedia of Genes and Genomes; MOA: mechanism of action; Open TG-GATEs: Open Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System; POD: point of departure; TinderMIX: Time-Dose Integrated Modelling of Omics
Data.The authors declare that they have no competing interests.This study was supported by the Academy of Finland (grant number 322761) and the EU H2020 NanoSolveIT project (grant number 814572).Conceptualization: A.S., D.G., L.A.S., Data curation: A.S., Formal Analysis: A.S., Funding acquisition: D.G., Methodology: A.S., M.F., M.P., Project administration: D.G., A.S., Software: A.S., M.F., M.P., Supervision: D.G., Visualization: A.S., Writing \u2013 original draft: A.S., M.F., G.d.G., A.F., D.G. Writing - review and editing: A.S., M.F., G.d.G., A.F., L.A.S., M.P., D.G. All authors have read and agreed to the published version of the manuscript.giaa055_GIGA-D-20-00057_Original_SubmissionClick here for additional data file.giaa055_GIGA-D-20-00057_Revision_1Click here for additional data file.giaa055_GIGA-D-20-00057_Revision_2Click here for additional data file.giaa055_Response_to_Reviewer_Comments_Original_SubmissionClick here for additional data file.giaa055_Response_to_Reviewer_Comments_Revision_1Click here for additional data file.giaa055_Reviewer_1_Report_Original_SubmissionDiana Hendrickx -- 3/5/2020 ReviewedClick here for additional data file.giaa055_Reviewer_1_Report_Revision_1Diana Hendrickx -- 4/25/2020 ReviewedClick here for additional data file.giaa055_Reviewer_2_Report_Original_SubmissionHaralambos Sarimveis -- 3/30/2020 ReviewedClick here for additional data file.giaa055_Supplemental_FilesClick here for additional data file."}
+{"text": "This study illustrates how ADAPT provides estimates for biomedically important parameters that cannot be measured directly, explaining (side-)effects of pharmacological treatment with LXR agonists.Temporal multi-omics data can provide information about the dynamics of disease development and therapeutic response. However, statistical analysis of high-dimensional time-series data is challenging. Here we develop a novel approach to model temporal metabolomic and transcriptomic data by combining machine learning with metabolic models. ADAPT performs metabolic trajectory modeling by introducing time-dependent parameters in differential equation models of metabolic systems. ADAPT translates structural uncertainty in the model, such as missing information about regulation, into a parameter estimation problem that is solved by iterative learning. We have now extended ADAPT to include both metabolic and transcriptomic time-series data by introducing a regularization function in the learning algorithm. The ADAPT learning algorithm was (re)formulated as a multi-objective optimization problem in which the estimation of trajectories of metabolic parameters is constrained by the metabolite data and refined by gene expression data. ADAPT was applied to a model of hepatic lipid and plasma lipoprotein metabolism to predict metabolic adaptations that are induced upon pharmacological treatment of mice by a Liver X receptor (LXR) agonist. We investigated the excessive accumulation of triglycerides (TG) in the liver resulting in the development of hepatic steatosis. ADAPT predicted that hepatic TG accumulation after LXR activation originates for 80% from an increased influx of free fatty acids. The model also correctly estimated that TG was stored in the cytosol rather than transferred to nascent very-low density lipoproteins. 
Through model-based integration of temporal metabolic and gene expression data we discovered that increased free fatty acid influx, rather than de novo lipogenesis, is the major contributor to early hepatic TG accumulation. In silico dynamic models often lack the multi-level layers of regulation that control metabolism. This impedes their application in disease modeling, because causes of disease can be located at multiple levels, and molecular therapies can likewise be targeted to genes, proteins, and metabolites. To overcome current limitations in statistical analysis and mechanistic modeling, we combine metabolic modeling with machine learning techniques to integrate longitudinal metabolic and transcriptomic data. Previously we developed the computational approach called ADAPT. It is generally believed that LXR-induced hepatic steatosis results from increased de novo lipogenesis. In contrast, ADAPT predicts that the hepatic influx of free fatty acids is the major contributor to hepatic TG accumulation in the early phase of LXR activation. This prediction is tested in vivo by a metabolic tracer experiment. ADAPT has been applied to a model of hepatic lipid and plasma lipoprotein metabolism (HepaLip2) to predict which metabolic adaptations are induced upon pharmacological treatment of mice by the Liver X receptor (LXR) agonist T0901317. LXR agonists exert potent antiatherosclerotic actions but simultaneously induce excessive triglyceride (TG) accumulation in the liver. Using the new version of ADAPT we reveal that both input and output fluxes to hepatic TG content are considerably induced on LXR activation and that, in the early phase of LXR agonism, hepatic steatosis results from only a minor imbalance between the two. We developed a mathematical multi-compartment model (HepaLip2) describing triglyceride and cholesterol metabolism in the pathways of interest. The mathematical model contains three compartments representing the liver cytosol, liver endoplasmic reticulum (ER), and blood plasma.
The model comprises 29 fluxes. The liver X receptor (LXR) plays a central role in the control of cellular lipid and cholesterol metabolism and is considered a potential target to treat or prevent atherosclerosis. However, a serious complication of LXR activation is the excessive accumulation of triglycerides in the liver, which finally results in the development of hepatic steatosis. The underlying molecular mechanisms inducing these adaptations are not fully understood, which complicates the clinical application of LXR agonists. The model was coupled to experimental data. Some model outputs are equal to state variables; other outputs are a combination (summation) of state variables. The data also include fluxes, such as the synthesis rate of triglycerides secreted in VLDL particles, and the size and composition of VLDL particles; the corresponding variables in the model were also selected as outputs. Data were collected at 0, 1, 2, 4, 7, 14, and 21 days of treatment with T0901317. An overview of the quantities that were experimentally observed and their relation to corresponding model components is presented in the paper. First, the HepaLip2 model was used to describe the untreated phenotype. Model parameters at baseline (start of simulation and experiment) are estimated from metabolic data and flux information. ADAPT estimates the model parameters by applying a least squares algorithm that minimizes the sum of squared errors (SSE) between the metabolic data dm,i of the untreated phenotype and corresponding model outputs yi. To account for experimental and biological uncertainties, different random samples of the data were generated assuming a data error model based on Gaussian distributions, with means and standard deviations according to the experimental data. A global scatter search was used to initialize a multi-start, gradient-based, interior point local optimization method, resulting in a collection of parameter sets that describe the untreated phenotype. These parameter sets served as a starting point from which ADAPT iteratively learns and updates the parameters to describe the transition between experimental data obtained during different stages of the treatment, as described next. HepaLip2 and ADAPT have been employed to generate insight into the LXR agonism response. The T0901317-induced perturbation starts at the proteome level and subsequently induces adaptations at the other levels. During the 3-week treatment the metabolic parameters and fluxes are expected to change over time. ADAPT captures adaptations or modulating effects on metabolic pathways by introducing time-dependent descriptions of model parameters. Parameter trajectories are constrained by experimental data. To enable the estimation of dynamic trajectories of metabolic parameters and fluxes, continuous dynamic descriptions of the experimental data are used as input for ADAPT. For this purpose, cubic smoothing splines were calculated that describe the experimental data, taking into account experimental and biological uncertainties. A collection of splines was calculated using a Monte Carlo approach as follows. For all time points in the data the same data model and sampling approach were used as described above for the untreated phenotype (the first time point in the time series). Subsequently, for each generated set of time samples a cubic smoothing spline was fitted, which is used as input for the next step of the ADAPT algorithm. The simulation period is divided into Nt time segments of length Δt. First, the simulation is started using the parameters and model state of the untreated phenotype. Next, for each subsequent segment n, the system is simulated (using a variable-step integration method) for a time period Δt using the parameters and model state of the previous step n−1 as a starting point.
The parameters for segment n are re-estimated by minimizing the difference between the data interpolants and corresponding model outputs for that time segment. This procedure is repeated for all segments and, as a result, parameter trajectories are inferred by minimizing the objective function χ2 over the time segments through numerical optimization. The HepaLip2 model mechanistically describes the kinetics of metabolic pathways; ADAPT infers the parameters of the nth time segment. The objective function χ2 is the weighted sum of squared differences between model outputs and data: χ2(nΔt) = Σi=1..Ny [(yi(nΔt) − dm,i(nΔt)) / σm,i(nΔt)]2, where Ny is the number of measured model variables (outputs), yi(nΔt) are the discrete-time model outputs, and dm,i(nΔt) are the interpolants of the metabolic data with standard deviation σm,i. The optimization procedure is repeated for all data interpolants, starting from the state and parameter set of the untreated phenotype. An ADAPT solution was considered acceptable if model outputs were within the 95% confidence interval of the data. In this study Ny = 15 and Nt = 200 were used. ADAPT simulation of HepaLip2 provides estimates for system variables that were not experimentally observed, such as the synthesis rate and composition of VLDL particles. Until here, ADAPT connected metabolic parameters to the activity of enzymes (protein level). Next, gene expression was added as a third layer of information. ADAPT has been extended to include a potential functional relationship between metabolic parameters and gene expression levels: parameter adaptations are preferred such that resulting parameter trajectories and corresponding gene expression profiles display temporal correlation. This was implemented by including an additional component in the objective function. Variables in the mechanistic (metabolic) part of the model can be directly linked to metabolic data, which is used to fit the model to that experimental data.
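The segment-wise re-estimation can be sketched in Python. This is a toy illustration, not the authors' MATLAB implementation: a one-state model dx/dt = u − k(t)·x is fitted to a data interpolant d(t) by re-estimating k on each time segment so that the squared error for that segment is minimized; all numbers are invented.

```python
import numpy as np

def simulate_segment(x0, k, u, t0, t1, steps=20):
    """Euler integration of dx/dt = u - k*x over one segment."""
    x, dt = x0, (t1 - t0) / steps
    for _ in range(steps):
        x += dt * (u - k * x)
    return x

def adapt_trajectory(d, sigma, u, t_grid, k_candidates):
    """Greedy per-segment chi^2 minimization over a candidate grid."""
    x, ks = d(t_grid[0]), []
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        # pick the k that best matches the interpolant at segment end
        best = min(k_candidates,
                   key=lambda k: ((simulate_segment(x, k, u, t0, t1)
                                   - d(t1)) / sigma) ** 2)
        x = simulate_segment(x, best, u, t0, t1)
        ks.append(best)
    return np.array(ks)

# toy "interpolant": the observed concentration declines linearly,
# so the inferred degradation rate k must rise as x shrinks
d = lambda t: 2.0 - t            # valid for t in [0, 1]
ks = adapt_trajectory(d, sigma=0.05, u=1.0,
                      t_grid=np.linspace(0, 1, 11),
                      k_candidates=np.linspace(0.1, 3.0, 60))
print(ks[0], ks[-1])  # k trajectory across segments
```

The point of the sketch is the control flow: each segment starts from the previous segment's final state, so the sequence of locally optimal parameters forms a time-dependent trajectory rather than a single constant estimate.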
Pathways at the transcriptome level were not modeled mechanistically due to the lack of sufficient quantitative information about these systems. Gene expression data does not have a one-to-one connection with the metabolic variables and, therefore, cannot be included in the error function (Equation 2). Therefore, a different approach was used to integrate gene expression data in the parameter trajectory estimation algorithm. The transcriptomic data is implicitly used to constrain the dynamic behavior of the parameter trajectories, by including a regularization function. Time-course data of relative expression levels of 23 genes was available. λg1 and λg2 are regularization constants that determine the relative importance of the components; further details are provided in section 5. The influence of the regularization constants for gene correlation (λg1) and for damping of unnecessary parameter fluctuations (λg2) on the estimation of the parameter trajectories was investigated using a Monte Carlo approach. ADAPT was performed for 20,000 random combinations of λg1 and λg2, and the values of the three components in the objective function were analyzed. When λg1 is larger than 10−6 and λg2 is smaller than 10−8, parameter-gene couples displayed temporal correlation, while λg2 remained sufficiently large to damp unnecessary parameter fluctuations. In multi-objective optimization and regularized regression approaches, like Equation (3), the weights of the different components in the objective function are important hyper-parameters of the algorithm that are problem dependent and need to be tuned for adequate performance.
We used ADAPT to investigate (1) the compartmentalization of hepatic triglycerides, (2) adaptations in the hepatic lipid loading capacity, and (3) the quantitative contribution of the different metabolic routes to the increased hepatic triglyceride level. We introduce the following notation: a group of trajectory solutions is denoted by Gi, where i (0.05 ≤ i ≤ 1) represents the fraction of all solutions with the highest temporal correlations of parameter trajectories with gene expression over the entire treatment period. The effect of integration of gene expression data on model performance was expressed as a reduction in variance in model estimations (left panel), parameters (middle panel), and fluxes (right panel). The (dark-)gray parts clearly display model predictions that were effectively constrained by the gene expression data. Note that in multiple cases a reduction in variance was also obtained for parameters that were not coupled to genes. Hepatic triglycerides are divided into cytosolic (x4 + x6) and endoplasmic reticulum (x5 + x7) fractions. The cytosolic fraction represents the TG pool stored in lipid droplets and the ER fraction the TG contained in nascent VLDL particles. Experimental data of the total hepatic triglyceride content (y1 = x4 + x5 + x6 + x7) was included in the optimization procedure and all solution groups describe this data adequately; the model provides more detailed information on where these lipids reside inside the hepatocyte. Before the inclusion of gene expression data, it was not possible to accurately predict how the total triglyceride content is distributed between the cytosolic and VLDL fractions (G0.05 vs. G0.1). A reduction in the variance (estimation uncertainty) was observed for many of the model components when gene expression was included. Subsequently, additional independent measurements were performed to validate this model result.
Fractionation experiments were performed on livers from untreated C57BL/6J mice and C57BL/6J mice treated with T0901317 for 14 days, to separate the cytosolic triglyceride fraction from the microsomal fraction, containing VLDL. A description of the experimental materials and procedures is available in section 5. Indeed, the experimental data show that the increased triglycerides are predominantly stored in the cytosolic fraction compared to the microsomal fraction, confirming the model prediction (fluxes f14 and f15). A second factor is the VLDL-TG production flux, which increases progressively during the treatment. The metabolic routes that supply hepatic triglycerides include de novo lipogenesis, hepatic FFA uptake from plasma, and clearance of lipoproteins via lipases and whole-particle uptake; LXR activation also induces lipogenic genes such as Scd1 (stearoyl-CoA desaturase 1). Therefore, we quantified the contribution of all metabolic routes included in the mathematical model that influence the hepatic triglyceride level, and how each contribution changes during treatment with T0901317. The analysis shows that plasma FFA provided a major contribution to the supply of hepatic triglycerides, whereas the clearance of lipoproteins played merely a minor role. Furthermore, the contribution of hepatic FFA uptake peaked at t ≈ 1 day, while the contribution of de novo lipogenesis increased gradually up to one week of treatment. The hepatic influx of FFA, rather than de novo lipogenesis, contributes for roughly 80% to the accumulation of TG in the liver. Pharmacological activation of LXR induces the excessive accumulation of triglycerides in the liver. To test the model prediction in vivo, 13C-palmitate was infused into C57Bl/6J mice that were treated with T0901317 for 1 day, and into untreated controls. Biomedical research is challenged by the complexity of the underlying biological systems and the timescales on which adaptations occur (seconds to years). Physiological parameters with diagnostic value are hidden in complicated, multivariate datasets.
Time-series measurements of the metabolome provide information-rich data about the status of a biological system. Systems biology and systems medicine require credible models that have been scrutinized on verification, validation, and uncertainty quantification. The network topology of metabolic pathways is (relatively) well known. Network structures impose strong constraints on the solution space of mathematical models, a characteristic that is employed in constraint-based simulation and analysis of metabolic network models. ADAPT has previously been applied to study, among others, a receptor that facilitates the hepatic uptake of cholesterol from HDL particles, and non-alcoholic steatohepatitis (NASH) associated with Metabolic Syndrome. These applications involve multiple levels of regulation and different timescales (seconds to years). During disease development metabolic parameters (and consequently metabolic fluxes and concentrations) can be expected to change over time. The concept of time-dependent model parameters is introduced to study these adaptations. ADAPT identifies necessary dynamic changes in the model parameters to describe the transition between experimental data obtained during different stages (time points) of the disease. To estimate dynamic trajectories of model parameters, continuous dynamic descriptions of the experimental data were used as input for ADAPT. Cubic smoothing splines were calculated to describe the dynamics of the experimental data. To account for experimental variance and biological variation a collection of splines was calculated using a Monte Carlo approach. Different random samples of the experimental data were generated assuming Gaussian distributions with means and standard deviations according to the data. Subsequently, for each generated sample a cubic smoothing spline was calculated. In the present study, a distinction between two types of data was made. First, metabolic data was acquired, e.g., concentrations and fluxes of metabolites in plasma and tissue compartments.
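The Monte Carlo spline ensemble can be sketched as follows, using SciPy's UnivariateSpline as a stand-in for the MATLAB smoothing splines the authors used. The time grid matches the reported treatment days, but the means and standard deviations are invented for the sketch.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.array([0., 1., 2., 4., 7., 14., 21.])          # treatment days
mean = np.array([1.0, 1.4, 1.9, 2.6, 3.1, 3.3, 3.4])  # e.g. hepatic TG (toy)
sd = 0.15 * np.ones_like(mean)                        # toy measurement SDs

splines = []
for _ in range(200):
    # one Gaussian draw per time point, per the sampling scheme above
    sample = rng.normal(mean, sd)
    # weights 1/sd and smoothing factor s control the roughness
    splines.append(UnivariateSpline(t, sample, w=1.0 / sd, s=len(t)))

# evaluate the ensemble on a dense grid -> pointwise uncertainty band
tt = np.linspace(0, 21, 50)
curves = np.array([s(tt) for s in splines])
band = curves.std(axis=0)
print(curves.shape, band.max())
```

Each spline in the ensemble then serves as one continuous data interpolant d(t) for a single ADAPT run, so the spread of the resulting parameter trajectories reflects the measurement uncertainty.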
The splines describing this data are denoted by dm,i(t). Mathematical modeling was centered on metabolic pathways. Pathways at the proteome and transcriptome levels that modulate the metabolic processes were not modeled explicitly, as insufficient information on the underlying network structure and interaction mechanisms was available. The metabolic model is defined by a set of (non)linear ordinary differential equations (state-space structure): dx(t)/dt = f(x(t), θ(t), u(t)), y(t) = g(x(t), θ(t), u(t)), with state variables x, (time-dependent) parameters θ, inputs u, outputs y, and functions f and g. The simulation time is divided into Nt steps of time period Δt using the discretization tn = nΔt, with 0 ≤ n ≤ Nt and NtΔt the time period of the entire experiment. Details of the ADAPT methodology have been described in Tiemann et al. The simulation is initiated (n = 0) using the initial values of the model states. For n > 0 the system is simulated for a time period of Δt using the final values of the model states of the previous step n − 1 as initial conditions. Ny denotes the number of model outputs. Parameter trajectories were estimated using 200 time steps (Nt = 200). To initialize the untreated phenotype, 2 × 105 parameter vectors were sampled from a widely dispersed range of initial parameter values (10−6 to 106), and 2 × 104 (10%) of the best performing parameter sets were retained. The optimization problem, Equations (6) and (7), was modified to integrate gene expression data. ADAPT is based on the assumption that changes in metabolic parameters are reflected by changes in corresponding enzymes, which in turn are reflected by changes in corresponding gene expression levels. This implies there is a functional relationship between a metabolic parameter and corresponding gene expression data, which is exploited during each re-optimization of the metabolic parameters from step n−1 to step n. The optimization problem presented in Equation (6) was extended as follows.
For clarity we introduce the following definitions: Np is the number of parameters, and the correlation component Vi(nΔt) for parameter i is given by Vi(nΔt) = (1/Nci) Σj=1..Nci (1 − ρij(nΔt)), where Nci is the number of genes assigned to parameter i and ρij(nΔt) is the Pearson correlation coefficient between the trajectory of parameter i and the expression data of gene j, which is bounded between −1 and 1 (so that maximizing temporal correlation minimizes Vi). Note that multiple genes can be assigned to a parameter, which could be useful, for instance, when a cascade of molecular processes is integrated in a single mathematical reaction equation. The gene expression data was also used to constrain the magnitude of dynamic variations in the parameter trajectories. It was assumed that parameters are less likely to change when corresponding gene expression levels remain unchanged, compared to scenarios in which expression of the genes is induced or repressed. Therefore, in the latter cases parameter adaptations are penalized less than in the former cases. This was effectuated by including an additional objective function Wi(nΔt) = (1/Nci) Σj=1..Nci |Pi(nΔt) / Gij(nΔt)|, where Pi(nΔt) represents the normalized derivative of parameter i at time step n. Relative derivatives were used to assign equal relevance to all parameters and to avoid domination of the optimization by large absolute values. Furthermore, Gij(nΔt) represents the normalized derivative of the spline function that describes the corresponding gene expression data. To avoid division by zero (when Gij(nΔt) = 0), the minimal absolute value of Gij(nΔt) was truncated at 10−6. Note that Pi(nΔt) effectuates that changing a parameter is costly, which will therefore be avoided unless it is required to describe the experimental data.
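These two regularization terms can be sketched numerically. This is a simplified, assumed form (Vi as the average of 1 − ρij over assigned genes, Wi as the average of |Pi/Gij|); the published equations may differ in detail, and the trajectories below are toy examples.

```python
import numpy as np

def correlation_term(theta, gene):
    """1 - Pearson rho: small when trajectory and gene co-vary."""
    rho = np.corrcoef(theta, gene)[0, 1]
    return 1.0 - rho

def damping_term(theta, gene, dt, eps=1e-6):
    """Mean |P/G| with P, G relative derivatives; |G| truncated at eps."""
    p = np.diff(theta) / (dt * theta[:-1])   # relative derivative of parameter
    g = np.diff(gene) / (dt * gene[:-1])     # relative derivative of gene spline
    g = np.where(np.abs(g) < eps, eps, g)    # avoid division by zero
    return np.mean(np.abs(p / g))

t = np.linspace(0, 21, 50)
dt = t[1] - t[0]
gene = 1.0 + 0.5 * (1 - np.exp(-t / 3))       # induced gene expression
par_follow = 1.0 + 0.4 * (1 - np.exp(-t / 3)) # parameter tracking the gene
par_anti = 1.5 - 0.4 * (1 - np.exp(-t / 3))   # parameter moving against it
flat_gene = np.ones_like(t)                    # unchanged expression

# the co-varying trajectory is rewarded by the correlation term,
# and a changing parameter is penalized heavily if its gene stays flat
print(correlation_term(par_follow, gene), correlation_term(par_anti, gene))
print(damping_term(par_follow, gene, dt), damping_term(par_follow, flat_gene, dt))
```

The division of P by G is what makes a parameter change cheap when its gene moves in step with it, and expensive when the gene is silent, which is exactly the behavior described in the text.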
However, when accompanied by a change in gene expression level, the penalty of changing the corresponding parameter is reduced (because P is divided by G). The objective functions are weighted by the regularization constants λg1 and λg2, which determine their contribution strengths. This implies that metabolic parameters and corresponding gene expression levels do not necessarily have to display temporal correlation when this is in contradiction to the metabolic data. This provides the possibility to account for post-transcriptional control. In summary, an optimized parameter set is defined as the minimizer of the combined objective χ2(nΔt) + λg1 Σi Vi(nΔt) + λg2 Σi Wi(nΔt). The determination of the regularization constants is discussed above. The mathematical model and ADAPT were implemented in MATLAB. The code is available on GitHub (https://github.com/nvanriel/ADAPT, https://github.com/rcqsnel/adapt-modeling-framework, and https://github.com/yvonnerozendaal). The ordinary differential equations were solved with compiled MEX files using numerical integrators from the SUNDIALS CVode package. A least-squares solver that uses an interior reflective Newton method was used for the optimization, and cubic smoothing splines were calculated using the default smoothness setting (=1) and a roughness dependent on the variation in the data: (1/std)2 (std: standard deviation). The experimental procedures have been described previously (Tiemann et al.). All datasets generated for this study are included in the article. The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of the University of Groningen. NR and AG conceived and designed the study. NR, AG, and PH supervised the research. CT developed the software and performed the simulations. CT and NR analyzed the results and wrote the paper. NR, PH, and AG read and revised the paper. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Honey bees (Apis mellifera) are an agriculturally important pollinator species that live in easily managed social groups. Unfortunately, annual losses of honey bee colonies in many parts of the world have reached unsustainable levels. Multiple abiotic and biotic stressors, including viruses, are associated with individual honey bee and colony mortality. Honey bees have evolved several antiviral defense mechanisms, including conserved immune pathways and dsRNA-triggered responses such as RNA interference and a non-sequence-specific dsRNA-mediated response. In addition, transcriptome analyses of virus-infected honey bees implicate an antiviral role of stress response pathways, including the heat shock response. Herein, we demonstrate that the heat shock response is antiviral in honey bees. Specifically, heat-shocked honey bees had reduced levels of the model virus, Sindbis-GFP, compared with bees maintained at a constant temperature. Virus infection and/or heat shock resulted in differential expression of six heat shock protein encoding genes and three immune genes, many of which are positively correlated. The heat shock protein encoding and immune gene transcriptional responses observed in virus-infected bees were not completely recapitulated by administration of double-stranded RNA (dsRNA), a virus-associated molecular pattern, indicating that additional virus–host interactions are involved in triggering antiviral stress response pathways. Honey bees (Apis mellifera) are eusocial insects in the order Hymenoptera that live in colonies consisting of one reproductive queen, hundreds of reproductive males (drones), and approximately 30,000 non-reproductive sterile female workers. Honey bees are generalist pollinators of numerous plant species, including fruit, nut, and vegetable crops.
However, hsp90 and hsc70-4 act as chaperones for RISC assembly, thereby facilitating RNAi-mediated antiviral defense. ago2 expression was positively correlated with the expression of three HSP-encoding genes: hsp83-like, hsc70-3, and hsp90. dcr-like expression was positively correlated with hsc70-3, hsc70-4, and dnaj shv-like. Lastly, mf116383 expression positively correlated with the expression of two HSP-encoding genes, hsp83-like and hsp90. To determine whether dsRNA, a virus-associated molecular pattern, is necessary and sufficient to induce heat shock protein gene expression, honey bees were either injected with buffer or with buffer containing 1 μg of dsRNA, with sequence corresponding to the Drosophila C virus genome, and thus not specific to any honey bee gene or honey bee-infecting virus. As a control, we assessed the expression of mf116383, which was induced by dsRNA in a previous study and, indeed, was induced by dsRNA here (rep1, p = 1.36 × 10−5; rep2, 1.20-fold, p = 0.0014). Likewise, dsRNA-treated bees exhibited greater expression of hsc70-3 and hsp83-like in replicate 1, but not in replicate 2 (p = 0.51). In contrast, dsRNA-treated bees exhibited reduced expression of dnaj shv-like and hsc70-4 in one of two replicates. Injection with dsRNA had a variable impact on hsp90 expression, increasing it 1.17-fold in the first replicate (p = 0.00038) and decreasing it 0.83-fold in the second replicate (p = 8.46 × 10−5). Together, these results indicate that the HSP expression profile induced by virus infection is not completely recapitulated by exposure to dsRNA. Instead, there are likely other aspects of the virus-honey bee host interaction that result in differential regulation of genes in the heat shock stress response pathway. Honey bees have evolved a wide range of social and molecular strategies to control pathogens. The mechanisms and genes involved in honey bee antiviral responses require further investigation.
Herein, we present evidence that the heat shock response is involved in honey bee antiviral defense. First, we demonstrated that heat shock reduced the abundance of the model virus, SINV-GFP, compared to bees maintained at a constant temperature. One hypothesis that might explain the 74–90% reduction in virus abundance in heat-shocked honey bees is that there is a general disruption of protein synthesis in heat-stressed bees. However, drosophila cells completely recover normal protein synthesis upon return to physiologically normal temperatures after heat shock (four treatments at 37 °C for 25 min each). Furthermore, the expression of most of the heat shock protein encoding genes examined in this study was induced by virus infection in three biological replicates, except protein lethal(2) essential for life-like and hsc70-4, which were induced in two biological replicates. Though in general the expression of heat shock protein genes was induced by heat-treatment alone or by the combination of both stressors, there was some heterogeneity in expression in these treatment groups. These differences could be explained either by stochasticity or by real differences in independent biological replicates, for which we used three separate outbred honey bee colonies. These colonies include individual half-sisters of different genetic lineages that likely represent several different genetic sub-species prevalent in the U.S. Indeed, honey bee subspecies (e.g., A. mellifera carnica and A. mellifera ligustica) have different thermotolerances and metabolic responses to heat stress. Of the immune genes examined (ago-2, dcr-like, and mf116383), mf116383 was the only gene consistently induced by heat-treatment alone. Therefore, these data reveal a novel aspect of this recently described immune gene and suggest that mf116383 may serve as one point of cross-talk between the generalized antiviral immune response and the heat shock response in honey bees.
Since only ~35% of the honey bee genome has well-annotated orthologues with genes in other species, including Drosophila melanogaster, there are numerous uncharacterized genes, like mf116383. It will be exciting to further understand the biological role(s) of these genes in honey bees and other model and non-model organisms. Virus infection induces the expression of numerous honey bee immune genes. In contrast to the impact of heat shock on mf116383 expression, heat shock had a variable effect on the expression of the RNAi machinery across replicates. In some cases, the expression of dcr-like and ago2 was reduced in heat-shocked honey bees when compared to honey bees that were maintained at 32 °C. This implies that the mitigating effect of heat-treatment on virus infection is not simply explained by greater expression of the RNAi machinery. Instead, the protective effect of HSPs may be in part due to more efficient chaperone-mediated loading of the RISC. The expression of several HSP-encoding genes (hsc70-3, hsc70-4, and hsp90) was positively correlated with dcr-like and ago2, which suggests co-regulation of these genes. Though further studies are needed to determine the mechanisms leading to co-regulation of immune genes and HSP-encoding genes, it may be advantageous to co-regulate HSPs and HSP client proteins. In addition, mf116383 expression was induced by dsRNA in the experiments described herein. Intriguingly, dsRNA-treatment did not fully recapitulate the HSP gene expression pattern that was observed in virus-infected bees. For example, dnaj shv-like and hsc70-4 were both consistently induced by viral infection, but they had reduced expression in bees exposed to dsRNA alone. In contrast, the expression of both hsc70-3 and hsp83-like was increased in virus-infected bees and dsRNA-treated bees.
It is unclear which protein might be mediating transcriptional regulation of heat shock protein encoding genes in response to dsRNA, but it may be a protein like the mammalian dsRNA-dependent Protein Kinase R (PKR), which is essential for the murine heat shock response and the expression of hsp70 and hsp84. The majority of viruses produce dsRNA molecules during their replication cycle (e.g., replication intermediates and tRNA-like structures). Therefore, most host organisms have evolved mechanisms to detect dsRNA and subsequently trigger antiviral defense pathways. As expected based on a previous transcriptome analysis, mf116383 was induced by dsRNA. In summary, the work described herein indicates that stress response proteins, including those in the heat shock response and proteostasis network, are involved in honey bee antiviral defense. Further biochemical analyses are needed to confidently demonstrate their role in antiviral defense and the protective effect of heat shock. Future studies will aim to identify modes of coordination between stress and immune response pathways and proteins, as well as other potential antiviral functions of heat shock proteins, such as direct interaction with viral proteins."}
+{"text": "Triple-negative breast cancer (TNBC) is the most refractory subtype of breast cancer. Immune checkpoint inhibitor (ICI) therapy has made progress in TNBC treatment. PD-L1 expression is a useful biomarker of ICI therapy efficacy. However, tumor-immune microenvironment (TIME) factors, such as immune cell compositions and tumor-infiltrating lymphocyte (TIL) status, also influence tumor immunity. Therefore, it is necessary to seek biomarkers that are associated with multiple aspects of TIME in TNBC. In this study, we developed an immune-related gene prognostic index (IRGPI) with a substantial prognostic value for TNBC. Moreover, the results from multiple cohorts reproducibly demonstrate that IRGPI is significantly associated with immune cell compositions, the exclusion and dysfunction of TILs, as well as PD-1 and PD-L1 expression in TIME. Therefore, IRGPI is a promising biomarker closely related to patient survival and TIME of TNBC and may have a potential effect on the immunotherapy strategy of TNBC. Tumor-immune cell compositions and immune checkpoints comprehensively affect TNBC outcomes. With the significantly improved survival rate of TNBC patients treated with ICI therapies, a biomarker integrating multiple aspects of TIME may have prognostic value for improving the efficacy of ICI therapy. Immune-related hub genes were identified with weighted gene co-expression network analysis and differential gene expression assay using The Cancer Genome Atlas TNBC data set (n = 115). IRGPI was constructed with Cox regression analysis. Immune cell compositions and TIL status were analyzed with CIBERSORT and TIDE. The discovery was validated with the Molecular Taxonomy of Breast Cancer International Consortium data set (n = 196) and a patient cohort from our hospital. Tumor expression or serum concentrations of CCL5, CCL25, or PD-L1 were determined with immunohistochemistry or ELISA. The constructed IRGPI was composed of the CCL5 and CCL25 genes and was negatively associated with the patient\u2019s survival. IRGPI also predicts the compositions of M0 and M2 macrophages, memory B cells, CD8+ T cells, activated memory CD4 T cells, and the exclusion and dysfunction of TILs, as well as PD-1 and PD-L1 expression of TNBC. IRGPI is a promising biomarker for predicting the prognosis and multiple immune characteristics of TNBC. Triple-negative breast cancer (TNBC) is one of the subtypes of breast cancer, named for the negative expression of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). TNBC accounts for about 15\u201320% of breast cancer cases and has been associated with a high risk of mortality over the past few decades owing to its aggressiveness and the lack of effective targeted therapies. The mutations accumulated during cancer development generate neoantigens, which make the tumor immunogenic. Immune cells can recognize neoantigens and eliminate cancer cells. However, the expression of programmed death-ligand 1 (PD-L1) in cancer cells and programmed death 1 (PD-1) in immune cells, two immune checkpoint (IC) molecules, allows cancer cells to escape attack by immune cells, even when immune cells have already infiltrated the tumor tissue. Immune checkpoint inhibitors (ICIs) block this immune checkpoint between cancer cells and immune cells so that the immune cells can recognize and attack cancer cells again. ICI treatments, including therapies targeting PD-1, PD-L1, and cytotoxic T lymphocyte-associated protein 4 (CTLA4), have significantly benefited the survival of patients with many types of tumors. With high levels of tumor-infiltrating lymphocytes (TILs) and PD-L1 expression, TNBC exhibits stronger immunogenicity than other subtypes of breast cancer and may be more likely to benefit from immunotherapy. Some breast cancer patients are not sensitive to PD-1/PD-L1 treatment. 
Chemokines are a family of small, secreted proteins that bind to their G protein-coupled heptahelical receptors on the cell surface. The primary role of chemokines is to stimulate inflammatory cell migration, thus involving immune and inflammatory responses. According to the N-terminus of chemokines, they are divided into four subfamilies: CC, CXC, CX3C, and XC. In this study, we constructed a prognostic signature composed of CC chemokines to predict TNBC prognosis and immune characteristics. We focused on all immune-related genes in the transcriptome data of TNBC and screened immune-related hub genes related to patient prognosis by weighted gene co-expression network analysis (WGCNA). An immune-related gene prognostic index (IRGPI) was constructed, and its prognostic value was confirmed with multiple cohorts. Its relationships with the profiles of tumor-immune cells, the status of TILs, and PD-1/PD-L1 immune checkpoints were further characterized. The results show that IRGPI is a promising marker for predicting the prognosis and TIME status in TNBC. RNA-seq data (Level 3) of breast cancer patients were obtained from the TCGA database. Triple-negative breast cancer (TNBC) data were extracted based on estrogen receptor (ER), progesterone receptor (PR), and proto-oncogene HER-2 status, and only patients with overall survival (OS) > 30 days were selected. The RNA-seq data included 115 cancer samples and 13 adjacent normal tissue samples. Clinical information of patients with TNBC was obtained from UCSC Xena (http://xena.ucsc.edu/, accessed on 11 January 2021). We also downloaded the immune-related genes from the ImmPort database and the InnateDB database. The METABRIC data set was downloaded from the cbioportal website, and 196 TNBC patients with OS > 30 days were selected for the model validation. With a scale-free fit index of R2 = 0.9, a soft threshold of \u03b2 = 4 was picked and used to calculate the signed adjacency matrix from the similarity matrix. 
Weighted gene co-expression network analysis (WGCNA) is a powerful tool for finding clusters (modules) of highly interconnected genes. With the dendrogram cut height for module merging set to 0.25, we identified 7 modules. Each color module represented a collection of genes that are highly correlated to each other among the patients, except the gray module; genes that did not cluster with any module were assigned to the gray module. A topological overlap matrix (TOM) was used to visualize the gene\u2013gene connectivity. The genes in each module (except the gray module) were subjected to Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses with the clusterProfiler package of R to identify significantly enriched pathways. CC chemokines play an important role in inflammatory response and immunity. Based on the RNA-seq data of TCGA TNBC samples, differentially expressed genes (DEGs; p-value < 0.05, |log2(Fold Change)| > 1) were identified using the edgeR package of R. Next, the DEGs in the blue and brown modules were identified as immune-related DEGs for subsequent analysis. Genes associated with survival were then subjected to stepwise regression analysis (R \u201cstats\u201d package). The coefficients of the final optimal regression model were calculated by the stepwise regression analysis. We then calculated prognostic indexes for all the cancer samples by the formula IRGPI = expression level of gene1 * coef1 + expression level of gene2 * coef2 + \u2026 + expression level of geneN * coefN. TCGA patients were divided into an IRGPI-high group and an IRGPI-low group according to the median IRGPI score. KM survival curves were used to evaluate the IRGPI model with the log-rank test in the TCGA cohort, which was further validated with the METABRIC cohort. The prognostic model of survival was further evaluated by calculating the AUC values (the areas under the ROC curves) at 1, 3, 5, and 7 years. 
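The IRGPI calculation described above (a weighted sum of gene expression levels followed by a median split) can be sketched as follows. This is an illustrative sketch only: the coefficients and expression values below are hypothetical placeholders, not the fitted Cox model from the study.

```python
import statistics

def irgpi_score(expression, coefficients):
    """IRGPI = sum over genes of (expression level * regression coefficient)."""
    return sum(expression[gene] * coef for gene, coef in coefficients.items())

# Hypothetical coefficients for the two IRGPI genes (placeholders)
coefficients = {"CCL5": -0.3, "CCL25": -0.5}

# Hypothetical expression values, one dict per patient (placeholders)
patients = [
    {"CCL5": 2.0, "CCL25": 1.0},
    {"CCL5": 0.5, "CCL25": 0.2},
    {"CCL5": 1.5, "CCL25": 2.5},
    {"CCL5": 0.1, "CCL25": 0.4},
]

# Score every patient, then split at the median into high/low groups
scores = [irgpi_score(p, coefficients) for p in patients]
median = statistics.median(scores)
groups = ["IRGPI-high" if s > median else "IRGPI-low" for s in scores]
print(groups)
```

With negative coefficients, higher chemokine expression yields a lower (more favorable) IRGPI score, matching the reported direction of the association.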
First, univariate Cox regression analysis was used to determine associations between immune-related DEGs and overall survival in TNBC patients, and genes significantly associated with survival were selected. We also performed multivariate Cox regression analysis using important clinicopathological features and IRGPI scores in the TNBC patients. Differential expression analysis was first performed for the groups with high (n = 57) and low (n = 58) IRGPI scores. The DEGs in the IRGPI-high and -low groups were subjected to enrichment analysis and gene set enrichment analysis (GSEA) with the clusterProfiler package of R to determine the signaling pathways involved. CIBERSORT is a method using gene expression profiles to characterize cell compositions of complex tissues. We used CIBERSORT to analyze the mRNA expression matrix in combination with LM22 in TNBC samples. The Wilcoxon rank sum test was used to compare the levels of immune cell infiltration among different IRGPI groups, and Spearman correlation was used to calculate the correlation between immune cells. Immune-related DEGs were analyzed using GO and KEGG analyses with the clusterProfiler package of R to characterize the biological processes (BP), cellular components (CC), and molecular functions (MF) of the DEGs involved and to identify significantly enriched pathways (p values < 0.05). TIDE is a computational method to evaluate T-cell dysfunction and exclusion in tumor microenvironments. We used TIDE to assess TIL status. In addition, Spearman correlation was also adopted to investigate the correlation between prognostic markers and PD-L1/PD-1 expression. Formalin-fixed, paraffin-embedded (FFPE) specimens were collected from the Harbin Medical University Cancer Hospital. Tissues were collected from 89 TNBC patients who underwent radical mastectomy of breast cancer between 2012 and 2015 and 40 patients between 2020 and 2021 at Harbin Medical University Cancer Hospital. Serum samples were collected from the 40 surgical patients diagnosed between 2020 and 2021 mentioned above. 
The patients did not have hepatitis, other infectious diseases, or immune diseases. The study was approved by the Ethics Committee of Harbin Medical University Cancer Hospital. Follow-up time ranged from 1 to 107 months, with a median of 72 months. Tissue sections were dewaxed in xylene and hydrated gradually through graded alcohol. EDTA buffer was used for antigen retrieval. The endogenous peroxidase activity was blocked, and then the sections were incubated with the primary anti-CCL25, anti-CCL5, or anti-PD-L1 antibody overnight at 4 \u00b0C. Then the secondary antibody and a DAB kit were applied to the sections. A rabbit-specific HRP/DAB (ABC) IHC Detection Kit was used for CCL25. Normal tonsil tissue was used as a positive control for CCL5 and PD-L1 IHC, and normal thymus tissue was used as a positive control for CCL25 IHC. The tissue sections were evaluated by two pathologists who were unaware of the patients\u2019 clinical information. Positive PD-L1 expression was defined as a Combined Positive Score (CPS) \u2265 10. For CCL5 and CCL25, semi-quantitative IHC scores were used. Serum CCL5 and CCL25 were measured by ELISA according to the manufacturer\u2019s instructions. Optical densities at 450 nm of replicate specimens were determined in a plate reader. Untreated wells were used as the blank control. All statistical analyses were performed with the R language. The \u03c72 test was used to compare the relationships between the IRGPI and clinicopathological factors of TNBC, and Cox regression analysis was used to determine hazard ratios (HRs) and 95% confidence intervals (CIs) for univariable and multivariable analyses. Progression-free survival (PFS) was defined as the time between the initiation of surgical treatment and the date of the first evidence of tumor progression. We used Kaplan\u2013Meier plots and log-rank tests to calculate the differences in PFS among different subgroups. Pearson correlation analysis was used to calculate the correlation of two variables. p < 0.05 was defined as statistically significant. 
To obtain the immune-related hub genes, WGCNA analysis was carried out on the immune genes obtained from the ImmPort and InnateDB databases. The WGCNA analysis identifies gene clusters that are highly correlated with each other among patients; the results are shown as modules of different colors. Seven modules were identified with a soft threshold of 4, and a total of 1957 genes were allocated to these 7 modules. By comparing the expression data of 115 tumor vs. 13 normal samples, a total of 4375 DEGs were obtained, of which 2377 genes were upregulated and 1998 genes were downregulated in the tumor samples. In the TCGA cohort, the IRGPI-high group had significantly worse overall survival than the IRGPI-low group (p = 0.0026, log-rank test). Univariate Cox regression analysis showed that the IRGPI group was significantly associated with overall survival (p = 0.004). Multivariate Cox regression analysis, performed with clinicopathologic factors and IRGPI scores to determine whether IRGPI was an independent prognostic predictor of overall survival, confirmed that IRGPI was an independent prognostic factor after adjusting for other clinicopathologic factors. We also used the Kaplan\u2013Meier plotter (http://kmplot.com/, accessed on 21 February 2021) to evaluate the effect of CCL5 and CCL25 expression on survival. The results show that CCL5 expression was positively associated with the overall survival (OS) of TNBC patients. Similarly, the OS of the high-CCL25-expression group was superior to that of the low-CCL25-expression group. The IRGPI was then validated with the METABRIC cohort (n = 196). Consistent with the result of the TCGA data set, the IRGPI-low subgroup had a significantly better prognosis than the IRGPI-high subgroup (log-rank test). In the cohort from our hospital, expression of CCL5 (p = 0.047) and CCL25 (p = 0.049) was associated with a better PFS. Furthermore, the KM estimator showed that PFS in the IRGPI-low group was significantly longer than that in the IRGPI-high group. 
Because age was significantly associated with PFS in the univariate analysis, it was added to IRGPI for multivariate Cox analysis, and the results also show that IRGPI was an independent biomarker for evaluating the outcomes of patients who underwent radical mastectomy after being diagnosed with breast cancer between 2012 and 2015. To confirm the correlation of IRGPI with the characteristics of TNBC patients, we used the semi-quantitative IHC scores of CCL5 and CCL25 to calculate IRGPI in the TNBC patients from our hospital. As shown in the corresponding figure, similar results were obtained (p = 0.026). Differential expression analysis between the IRGPI groups showed that 1096 genes were upregulated in the IRGPI-high group while 750 genes were downregulated. In the TCGA data set, memory B cells, CD8+ T cells, activated memory CD4 T cells, follicular helper T cells, regulatory T cells (Tregs), gamma delta T cells, and M1 macrophages were more abundant in the IRGPI-low subgroup, whereas M0 and M2 macrophages were more abundant in the IRGPI-high subgroup. Besides, the compositions of plasma cells, naive CD4 T cells, activated NK cells, and activated mast cells were also significantly higher in the IRGPI-low subgroup than in the high subgroup, but neutrophils were markedly low in the IRGPI-low subgroup. Next, we verified this result with the METABRIC data set. Consistent with the TCGA results, M0 macrophages and M2 macrophages were also more abundant in the IRGPI-high subgroup, and there were more memory B cells, CD8+ T cells, and activated memory CD4 T cells in the IRGPI-low subgroup. Even though differences exist between the two data sets, the results for major TIME players are consistent, including memory B cells, CD8+ T cells, activated memory CD4 T cells, and M0, M1, and M2 macrophages. TIDE and MSI scores were not consistently associated with IRGPI: there was no significant difference in the TCGA data set (p = 0.99), and a lower MSI score was observed in the METABRIC analyses. CCL5 and CCL25 expression were correlated with PD-L1 expression (p = 0.020), indicating that PD-L1 expression was higher in the IRGPI-low group. TIME, which is affected by numerous genes, is critical for tumor growth and ICI therapy effects. 
Chemokines are pivotal molecules that regulate the migration of immune cells and may thus shape the TIME in TNBC. To reduce the complexity of the gene co-expression network, we employed WGCNA to cluster immune-related genes and identify immune-related hub biomarkers in chemokine-enriched modules. The 67 immune-related hub genes were further subjected to survival analysis, and an immune-related gene prognostic index (IRGPI) was constructed, composed of two CCL genes, CCL5 and CCL25. IRGPI was further demonstrated to perform well as an independent prognostic factor for TNBC in multiple cohorts, including TCGA, METABRIC, and a cohort of patients from our hospital. IRGPI predicts better survival outcomes for IRGPI-low patients and worse outcomes for IRGPI-high patients. The semi-quantitative IHC scores obtained from the patients of our hospital also confirmed similar results. The consistency among different cohorts indicates the prognostic value of the IRGPI and suggests that the components of IRGPI may be critical for the modulation of TIME in TNBC. The role of CCL5 and CCL25 is not well understood in TNBC. CCL5 is mainly secreted by T lymphocytes, macrophages, platelets, synovial fibroblasts, tubular epithelium, and tumor cells. CCL5 expression (p = 0.0335, HR = 0.94) in metastatic relapse breast cancer had a positive effect on prognosis. CCL5 recruits CD8+ T cells to infiltrate the tumor and enhances CD47-targeted immunotherapy in a murine TNBC model. CCL25, also known as thymus-expressed chemokine (TECK), is the ligand for CCR9. CCL25 expression in breast cancer (p = 2.7 \u00d7 10\u22126, HR = 0.77) was associated with an increase in RFS. Similarly, the neutralization of CCL25 promoted tumor growth in a CCL25-expressing mouse melanoma model. Our study is consistent with these reports. AIM2 expression showed a significant association with TNBC survival in the univariate analysis but was not included in the final IRGPI score. 
AIM2 is a component of the inflammasome, which plays a crucial role in the function of T regulatory cells. Its role in TNBC remains to be explored. Owing to the crucial role of CCL5 and CCL25 in TIME, we analyzed the immune cell profiles in TNBC to explore the link between IRGPI and tumor-immune cell compositions. The composition of immune cells differed between the two IRGPI subgroups. Cytotoxic CD8 T cells, CD4 T cells, and M1 macrophages were more enriched in the IRGPI-low subgroup, and M0 and M2 macrophages were more abundant in the IRGPI-high subgroup. A large number of studies have shown that dense infiltration of T cells, especially cytotoxic CD8 T cells, indicates a favorable prognosis. The function of infiltrating cytotoxic T lymphocytes (CTLs) is not only related to their levels but also to their appropriate priming. A new algorithm, TIDE, was recently developed to model tumor-immune evasion by evaluating the exclusion level of T cells, as well as the priming level of infiltrating CTLs. In our analysis, CD8+ T cells infiltrated the IRGPI-low groups at a significantly higher rate. However, the high CTL dysfunction scores in the IRGPI-low TNBC suggest a compromised cytotoxic response to cancer cells due to immune checkpoints modulated by PD-L1 and PD-1 molecules, and thus a potential benefit from ICI therapy for the IRGPI-low patients. We noticed that TIDE and MSI scores were not always associated with IRGPI. This may be attributed to the inherent data structure differences in gene expression of the two data sets: TCGA gene expression data are RNA-seq results, which are more accurate and have broader detection ranges, whereas the METABRIC gene expression data are from microarrays, which may exhibit false hybridization and hybridization saturation problems. Nevertheless, the inconsistent results suggest more care should be taken when applying TIDE scores in TNBC. And since the frequency of MSI in breast cancer is relatively low, MSI should not be used in evaluating ICI therapy efficacy in TNBC. 
Next, we examined the relationship between PD-1/PD-L1 expression and IRGPI. In both the TCGA and METABRIC data sets, we observed that IRGPI was negatively correlated with both PD-1 and PD-L1 expression, and the expression of PD-1 and PD-L1 was positively correlated with CCL5 and CCL25 expression. These results were reproducible even when we used the semi-quantitative CCL5 and CCL25 IHC scores to calculate IRGPI IHC scores, where we observed a negative correlation between IRGPI IHC scores and PD-L1 expression. More interestingly, IRGPI serum scores, which were calculated with the serum CCL5 and CCL25 concentrations, also showed a close-to-significant negative correlation with tumor PD-L1 IHC scores (n = 40, p = 0.058). Further validation of these results in a cohort with a larger sample size may provide a non-invasive way to test PD-L1 expression in TNBC patients. Although we repeatedly observed a positive correlation between PD-L1 and CCL5 or CCL25 expression, the molecular mechanisms behind these observations remain elusive. In colorectal cancer, tumor-infiltrated macrophages secrete CCL5, which activates the p65/STAT3 pathway and indirectly stabilizes PD-L1 protein rather than increasing the mRNA expression of PD-L1 in cancer cells. Taken together, we discovered that the IRGPI was composed of only two genes. IRGPI performs well in predicting TNBC survival and is a measurement for the infiltration of major tumor-immune microenvironmental cell players and the status of TILs, as well as PD-1 and PD-L1 expression. IRGPI-low patients may benefit more from the activation of CTLs in ICI therapy."}
+{"text": "One of the most important challenges in Wireless Sensor Networks (WSN) is extending the lifetime of the sensors, which are battery-powered devices, through a reduction in energy consumption. Using data prediction to decrease the amount of transmitted data is one of the approaches to solve this problem. This paper provides a comparison of deep learning methods in a dual prediction scheme to reduce transmissions. The structures of the models are presented along with their parameters. A comparison of the models is provided using different performance metrics, together with the percentage of points transmitted per threshold and the errors between the final data received by the Base Station (BS) and the measured values. The results show that the best-performing model on the dataset was the model with Attention, saving a considerable amount of data in transmission while still maintaining a good representation of the measured data. Advances in microelectronics have boosted the development of small, low-cost, and low-power electronic devices capable of sensing the environment and with considerable processing capacity. These developments have been, in part, fueled by the rapid advances of the Internet of Things (IoT) concept and our necessity for connectivity, making possible applications such as Smart Cars, Traffic Management Systems, Smart Houses, Digital Twins, and many others. These applications need to collect data from the monitored process and transport the information to a computational site. This has made WSNs one of the most rapidly evolving technological domains, mainly because of the many advantages they present compared to equivalent wired networks. The most significant advantage of WSNs is the low cost of implementation and faster deployment of wireless devices. 
A WSN consists of a spatially distributed collection of sensor nodes, router nodes, and one or more Base Stations (BS), also called Edge Gateways because they represent the borderline between wired and wireless communication devices, as depicted in the corresponding figure. While wireless sensor networks offer many advantages, they also present some drawbacks. A major problem is the energy consumption of a sensor node, as most devices use batteries as the primary source of energy. Sometimes the sensor node depends on harvesting energy from the monitored process. Therefore, it is very important to extend the battery life as much as possible. Even for a well-behaved sensor node that is allowed to sleep in a deep low-power mode between measurements, most of the energy budget is spent on radio communication rather than processing. This paper proposes the use of end-to-end deep learning strategies to approach the problem of Multivariate Time Series prediction in WSNs in a dual prediction scheme to reduce the amount of transmitted data and therefore mitigate the consumption of energy. The methods are compared using multiple error metrics during forecasts and by measuring the effective number of transmitted points per model. The results show that the Attention model was the most effective when performing long-term forecasts. The model, in a dual prediction scheme, can suppress a considerable fraction of transmissions. The paper extends a previous work. The rest of this paper is organized as follows. A time series is an ordered collection of observations recorded in a specific time span. Depending on the period of observation, a time series may typically be hourly, daily, monthly, and so on. In the case of data collected by sensors, the sampling period is generally much smaller, ranging from seconds to several minutes. Time series can be univariate or multivariate. 
A univariate time series consists of a single observation for each point in time, and a multivariate time series is a set of related univariate time series, each one describing a measured variable. A sensor node that measures temperature, humidity, and voltage and produces a set of observations tagged at the same point in time is an example of a multivariate time series. Multivariate time series provide more information about the process under observation and higher accuracy in prediction tasks, but most state-of-the-art methods can only forecast/predict one variable while taking the collateral information into account. Time series can be decomposed into trend, seasonality, and irregular components or residuals. The trend is the long-term development of the time series in one direction. It can be seen in the slope, and it can be increasing, decreasing, or absent. A time series can present different trends in different periods of time. On the other hand, seasonality is a distinctive pattern that is repeated at regular intervals as the result of seasonal patterns. The unpredictable part of a time series is the irregular component, possibly following a certain statistical noise distribution, also considered the residual time series after all the other components have been removed. A forecast is a prediction derived from time series data. It is an estimation of the future based on historical data, considering the time dimension. A forecast can be classified as one-step-ahead or multiple-step-ahead. In the first case, only the next value is forecast; in the second case, multiple values are forecast in the same iteration. The number of values to be forecast is called the forecasting horizon. Most statistical forecasting methods are designed to function on stationary time series. A time series is said to be stationary when statistical properties, such as mean and variance, are not a function of time. 
This considerably reduces the complexity of the models. Time series with trend and seasonal patterns are usually non-stationary by nature. In such cases, transformations such as differencing can be applied before modeling. There are many methods for time series forecasting. We can divide them into Statistical Methods and Machine Learning Methods. Naive and sNaive: This is the most basic forecasting method. The method assumes the next value will be the same as the last one. sNaive is similar, but it takes the value observed one seasonal period earlier. Moving Average: This method takes the mean of the last n observations, called an average window. Exponential Smoothing (ETS): This method assigns weights to the observations. The weights decay exponentially as we move back in time. The model is a weighted average between the most recent observation and the previous smoothed value. Double Exponential Smoothing (DETS): ETS does not perform well when the time series presents a trend. This method is a solution for such cases. It consists of dividing the time series into intercept l and trend b and applying the same exponential smoothing to the trend. The time series is divided into two functions: the first one describes the intercept, which depends on the current observation, and the second function describes the trend, which depends on the intercept\u2019s latest change and the previous values of the trend. Seasonal Autoregressive Integrated Moving Average (SARIMA): The SARIMA model is a combination of the integrated ARMA model (ARIMA) and a seasonal component. It is very similar to the ARIMA model but is preferable when the time series exhibits seasonality. The (AR) stands for Autoregressive, a regression of the time series onto itself. The (MA) stands for Moving Average. The (I) stands for Integrated, referring to differencing. eXtreme Gradient Boosting (XGBoost): This method is an efficient implementation of the gradient boosting decision tree (GBDT) and can be used for both classification and regression problems. 
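The simpler statistical baselines just described can be sketched in a few lines of Python. This is an illustrative sketch (not the paper's code); each function returns a one-step-ahead forecast from a list of past observations.

```python
def naive(series):
    """Naive forecast: the next value equals the last observed value."""
    return series[-1]

def snaive(series, season):
    """Seasonal naive: the next value equals the value one season back."""
    return series[-season]

def moving_average(series, n):
    """Forecast as the mean of the last n observations (average window)."""
    window = series[-n:]
    return sum(window) / len(window)

def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: the smoothed level is a weighted
    average of the newest observation and the previous level, with
    weights decaying exponentially back in time. Returns the final
    level, used as the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

data = [10.0, 12.0, 11.0, 13.0, 12.5, 13.5]
print(naive(data))                               # 13.5
print(moving_average(data, 3))                   # (13.0 + 12.5 + 13.5) / 3 = 13.0
print(exponential_smoothing(data, alpha=0.5))    # 12.875
```

DETS would add a second smoothed component for the trend in the same fashion; SARIMA and XGBoost are usually taken from libraries rather than written by hand.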
Boosting combines many weak learners (typically decision trees) trained sequentially, each one correcting the errors of the previous ones. Artificial Neural Networks (ANN): An ANN is a computational network designed to emulate the behavior of the human brain. The main unit of these structures is the neuron, which is inspired by actual human neurons. The neuron receives weighted inputs, sums them, and applies an activation function to produce an output. The network is formed by interconnecting many such neurons in layers. Deep learning (DL) is a subset of Machine Learning. It refers to learning models based on deep Artificial Neural Network architectures with a large number of layers. One of the advantages of DL over classical ML algorithms is that, as we input more data, the performance of the network tends to improve, as the network can learn more patterns, in contrast to traditional algorithms. In recent years, the use of ANNs and DL algorithms has rocketed due to the rapid evolution in hardware capabilities, shifting the paradigm of avoiding complex algorithms. Complex algorithms can now be trained and deployed in a compact form consuming fewer resources, such as in mobile phones and other embedded devices. Data from sensors may contain noise, missing values, correlations between different types of magnitudes and/or between data from different sensors, seasonality, and trends. Statistical and classical ML time series forecasting models are limited in their ability to extract information from nonlinear patterns and data in such conditions; therefore, they rely on clean, mostly stationary data and hand-crafted features, which is time-consuming, in some cases requires high processing power, and can introduce human biases. Some architectures are very good at learning from sequential data. One-Dimensional Convolutional Neural Networks (1D-CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Neural Networks (LSTMs), Gated Recurrent Units (GRUs), and combinations of these architectures have been used in many classification and forecasting problems with sequential data. In a vanilla RNN, the output is essentially a copy of the hidden state h. 
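The neuron model described above (weighted sum of inputs followed by an activation function) can be shown in a minimal sketch; the input values and weights below are arbitrary examples, and sigmoid is used as one common choice of activation.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a
    bias, passed through a sigmoid activation to produce the output."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example with arbitrary inputs/weights: three inputs, output in (0, 1)
out = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0)
print(round(out, 4))  # 0.525
```

An ANN layer is simply many such neurons sharing the same inputs, and a deep network stacks several layers so that each layer consumes the previous layer's outputs.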
An RNN processes an input sequence by updating a hidden state at each time step t through a recurrent function applied over the input interval. In this study, Luong\u2019s dot-product attention score was used. The context vector is a weighted sum of the encoder hidden states, with the weights produced by the attention scores. Seq2Seq and Attention models for time series forecasting have been used in many research articles, showing promising results when dealing with long sequences. Predictions can be made in one location of the network, which can be either the sensor, the BS, or the Cluster Heads (CHs). The CH or BS can forecast the data that were received from the sensor node and decide when to forecast more points based on the reliability of the current predictions. In a DPS, predictions are made in both sensor nodes and CHs or gateways. Both devices are able to produce the same results since the prediction model is shared between them, but sensors can check the accuracy of the predictions against the measured data to avoid an unnecessary transmission. The sensor node constantly compares the current observation with the predicted value, and only when the difference falls outside a specified threshold does it transmit the measured value to the CH, which substitutes the prediction with the real value. CHs usually have more computational and energy resources; therefore, they can be used to do the extra work of generating the model. The CH can use the data received from the sensors to create a model for every sensor node and be in charge of updating and transmitting this model. Alternatively, the prediction model can be generated independently in both sensors and CHs. In this case, sensor nodes start transmitting the data to the CHs, and they also have to produce the prediction model. This scheme requires much more computing power from the sensor node since it is the one producing and transmitting the model. 
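The sensor-side transmission rule of a dual prediction scheme can be sketched as follows. This is an illustrative simulation, not the paper's implementation: `predict` stands in for whatever shared model the sensor and CH both run, and the toy data and threshold are arbitrary.

```python
def run_dps(measurements, predict, threshold):
    """Simulate the sensor-side logic of a dual prediction scheme (DPS).

    Sensor and CH run the same `predict` model over the same history, so
    the CH can reproduce every suppressed value. The sensor transmits
    only when the prediction error exceeds `threshold`; the CH then
    substitutes its prediction with the real value.
    Returns the transmitted (index, value) pairs and the series as
    reconstructed on the CH side.
    """
    history = [measurements[0]]           # the first sample is always sent
    transmitted = [(0, measurements[0])]
    for i, actual in enumerate(measurements[1:], start=1):
        predicted = predict(history)
        if abs(actual - predicted) > threshold:
            transmitted.append((i, actual))  # send the real value
            history.append(actual)           # CH stores the real value
        else:
            history.append(predicted)        # CH keeps its own prediction
    return transmitted, history

# Toy example: a naive one-step predictor (last reconstructed value)
data = [20.0, 20.1, 20.2, 21.5, 21.6, 21.7]
tx, reconstructed = run_dps(data, predict=lambda h: h[-1], threshold=0.5)
print(len(tx), "of", len(data), "samples transmitted")  # 2 of 6
```

In the paper's setting, `predict` would be one of the trained deep learning models, and the fraction of transmitted points per threshold is exactly the quantity used to compare them.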
This method allows sensors a certain autonomy because they can decide if a new model is needed based on all the measured data, instead of using only the information that they share with the CH or gateway. Many studies have addressed the problem of reducing transmissions in WSNs to prolong the lifetime of the sensors, several of them based on autoregressive (AR) models. In one such approach, the appropriate order of the AR model is chosen by applying a p-value test to the Partial Autocorrelation Function (PACF); the results show a reduction in transmissions between the sensor nodes and base stations. A prediction model based on Bidirectional LSTMs, in what the authors called the multi-node multi-feature model (MNMF), was also proposed. The objective of this work is to forecast a set of m time series measured by sensors in a WSN, where h is the number of time steps ahead (horizon) of the current time for each time series, and to compare the results of the predictions. Once the neural networks are trained, predictions are made over each time series. The data used in this work were gathered by a WSN deployed at the hydroelectric plant Cachoeira Dourada with 8 routers, 3 of them modified to also function as sensors. All nodes report at least their internal temperature and power supply. Sensor nodes also send periodic readings from external industrial probes. The location of the nodes is presented in the corresponding figure. Three time series from one sensor node were considered for the task of forecasting. Each time series corresponds to a physical parameter measured over 80 days with a 5-min period. In order to apply DL forecasting methods, the data were divided into batches using a rolling window method to create a supervised training dataset.
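The rolling-window construction described above can be sketched as follows; the function name and the example window/horizon values are illustrative assumptions, not taken from the article:

```python
import numpy as np

def rolling_window(series, input_len, horizon):
    """Slide a fixed-size window over the series to build (X, y) pairs:
    each sample uses `input_len` past points to predict the next `horizon` points."""
    X, y = [], []
    for start in range(len(series) - input_len - horizon + 1):
        X.append(series[start:start + input_len])
        y.append(series[start + input_len:start + input_len + horizon])
    return np.array(X), np.array(y)

# Example: 10 points, a window of 4 past values, predicting the next 2
X, y = rolling_window(np.arange(10), input_len=4, horizon=2)
# X has shape (5, 4) and y has shape (5, 2)
```

For multi-step forecasting, `horizon` is simply set to the length of the desired output sequence, matching the description above.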
In the case of multiple-step outputs, the model predicts multiple future time steps; therefore, the rolling window output is of the size of the forecast horizon. The dataset was split into training, validation, and test subsets and scaled according to the normalization equation, where X is the actual value. Since statistical and ML methods allow the forecast of only a single time series (univariate predictions) and their performance is low with noisy data, deep learning models are better suited for this application. An LSTM, a Gated Recurrent Unit Network (GRU), a Deep Neural Network (DNN), a One-Dimensional Convolutional Neural Network (1D CNN), a Seq2Seq, and a Seq2Seq with Attention were trained with the same data to compare the results. All neural networks were created and trained using TensorFlow 2.0. The length of the input sequence and the batch size were set to 290 and 245, respectively. Moreover, the training process was programmed for 200 epochs and monitored to stop when the performance on the validation set started to diminish, considering a tolerance of three epochs. The efficient Adam version of SGD was used as the optimizer for all of the models, with the Mean Squared Error (MSE) as the loss function. LSTM Model: The LSTM model consists of two stacked LSTM layers with 128 units each and a fully connected layer (Dense) with 120 units, dropout of 0.2, and learning rate of 0.01, defined after different trials. GRU Model: The GRU model consists of two stacked GRU layers with 128 units each, also with a fully connected layer of 120 units, dropout of 0.2, and learning rate of 0.01. DNN Model: The DNN model is composed of two Dense layers with 128 units, a dropout layer of 0.2, a Max Pooling layer with pool size of 2, and a Flatten layer to convert the pooled feature map into a vector and pass it to a final Dense layer with the output neurons. CNN Model: The CNN model consists of two 1D CNN layers, followed by a dropout layer serving as regularization, a Max Pooling layer, and two FCLs.
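A minimal tf.keras sketch of the stacked-LSTM configuration described above (two LSTM layers of 128 units, a 120-unit Dense layer, dropout 0.2, Adam with learning rate 0.01, MSE loss); the function name and the layer ordering around the dropout are assumptions, since the article does not show code:

```python
import tensorflow as tf

def build_lstm_model(input_len, n_series, units=128, dense_units=120,
                     dropout=0.2, lr=0.01):
    """Two stacked LSTM layers, a fully connected layer, and one output
    neuron per time series (single-step, multivariate prediction)."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, return_sequences=True,
                             input_shape=(input_len, n_series)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(dense_units, activation="relu"),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(n_series),  # one output per forecast series
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mse")
    return model
```

The GRU variant would only swap `LSTM` for `GRU` layers, mirroring the description in the text.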
It is very common to use CNN layers in groups of two so the model can learn more features from the input data. CNNs are very fast at learning, which is why it is good to use a dropout layer to slow down this process. The learned features are transformed into a vector using a flatten layer to serve as inputs to a Fully Connected Layer. This intermediate FCL serves as an interpreter of the learned features before passing them to the output layer. A standard configuration of 128 feature maps with a kernel size of 3 and dropout of 0.2 was used. Seq2Seq Model: The encoder and decoder of the Seq2Seq model are based on LSTM networks with 128 units. The decoder receives the last hidden state from the encoder as initialization and produces a hidden state for each output time step. The output from the decoder is connected to a fully connected layer with three output neurons for multivariate prediction. The model was trained with a learning rate of 0.01 and a dropout of 0.2. The structure of the model can be seen in the corresponding figure. Seq2Seq+Attention Model: When adding the attention layer, all the hidden states from the encoder are needed to compute the scores and the alignment. The alignment score was created using a Dot layer followed by a Softmax layer, as defined in Luong Attention. The context vector was then created by combining the alignment scores with the encoder states. Finally, the context vector was concatenated with the decoder\u2019s previous hidden states and passed to a fully connected output layer with three output neurons. An important consideration is that, in the cases of the LSTM, GRU, CNN, and DNN models, predictions are made in the form of single-step prediction since the goal is to predict multiple time series at the same time from a sensor node.
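The Luong dot-product attention step described above (score by dot product, softmax alignment, weighted-sum context vector) can be sketched in plain NumPy for a single decoder step; the function name is an illustrative assumption:

```python
import numpy as np

def luong_dot_attention(decoder_state, encoder_states):
    """Luong dot-product attention for one decoder step:
    score(h_t, h_s) = h_t . h_s, alignment = softmax(scores),
    context = sum of alignment-weighted encoder hidden states."""
    scores = encoder_states @ decoder_state        # (src_len,)
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()                       # alignment vector
    context = weights @ encoder_states             # (hidden_dim,)
    return context, weights
```

In the model itself this corresponds to the Dot layer followed by Softmax, with the resulting context vector concatenated to the decoder hidden state before the output layer.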
For the Seq2Seq and the Attention model, predictions can be made in a multiple-steps-ahead fashion, forecasting, in this case, the next 12 points of the time series, which correspond to the next hour of measurements. A model evaluation was provided to better understand and compare the performance of the forecasting models. The comparison of the performances was made by using three different error metrics in a multi-step scenario: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R-Square (R2). Every model was trained 20 times and evaluated on the testing set using the defined metrics. The sensor and the BS train the model using the same historical data. This way both models are the same and start forecasting from the same point in time in a dual prediction scheme. The sensor compares every prediction with the observation and triggers a transmission when the error between them is higher than a defined threshold. The points transmitted by the sensor help further predictions be more accurate; therefore, they are used to update the wrong values and to forecast the next points. A small batch of three days was used to inspect the effect of defining different error thresholds for transmission. The models performed a forecast during the three days, measuring the errors between the observations and the forecast using multiple thresholds. The process starts using the historical data and uses the observations only to update the transmitted points. The percentage of points transmitted can be seen in the corresponding figure; the Attention model performs much better than the rest, and even with a small maximum error threshold a large share of transmissions can be avoided. Data prediction can thus help reduce the energy consumption of transmission in Wireless Sensor Networks by predicting part of the sensed data without transmitting it. This paper presented a comparison of some of the more effective data prediction methods to reduce transmissions in a WSN.
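The dual prediction loop described above (transmit only when the prediction error exceeds a threshold, and let transmitted values correct both copies of the model's history) can be sketched as follows; `predict_next` stands in for any trained forecasting model, and a trivial persistence predictor is used only for illustration:

```python
import numpy as np

def dual_prediction(observations, predict_next, threshold):
    """Dual prediction scheme: the sensor transmits a reading only when the
    shared model's prediction deviates from the observation by more than
    `threshold`; transmitted values replace the wrong predictions on both
    the sensor and the CH/BS side."""
    history = list(observations[:1])   # both sides start from the same history
    reconstructed = [observations[0]]
    transmitted = 0
    for obs in observations[1:]:
        pred = predict_next(history)
        if abs(obs - pred) > threshold:
            transmitted += 1           # send the real value to the CH/BS
            history.append(obs)
            reconstructed.append(obs)
        else:
            history.append(pred)       # both sides keep the prediction
            reconstructed.append(pred)
    return np.array(reconstructed), transmitted

# Illustration with a persistence predictor: only the jump gets transmitted
recon, sent = dual_prediction([0.0, 0.0, 0.0, 5.0, 5.0],
                              lambda h: h[-1], threshold=1.0)
# sent == 1, recon == [0, 0, 0, 5, 5]
```

Lowering `threshold` trades more transmissions for higher accuracy of the reconstructed series, which is exactly the trade-off explored with the three-day batch above.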
The comparison was performed between deep learning models because of the advantage they provide compared to traditional ones and their performance when dealing with relatively noisy data. The results show that the best model is the Seq2Seq with Attention model, which produces more accurate predictions and stable long-term forecasts. By setting different error limits, the percentage of points transmitted by the sensor can be adjusted to improve the accuracy of the final data. Therefore, depending on the accuracy required by the application, setting a small value, such as"}
+{"text": "Moreover, we propose a strategy to evaluate the resulting attractor patterns. Interaction graph-based features and dynamic investigations of our model ensembles provide a new perspective on the heterogeneity and mechanisms related to human HSC aging. Regulatory dependencies in molecular networks are the basis of dynamic behaviors affecting the phenotypical landscape. With the advance of high-throughput technologies, the detail of omics data has arrived at the single-cell level. Nevertheless, new strategies are required to reconstruct regulatory networks based on populations of single-cell data. Here, we present a new approach to generate populations of gene regulatory networks from single-cell RNA-sequencing (scRNA-seq) data. Our approach exploits the heterogeneity of single-cell populations to generate pseudo-timepoints. This allows for the first time to uncouple network reconstruction from a direct dependency on time series measurements. The generated time series are then fed to a combined reconstruction algorithm. The latter allows a fast and efficient reconstruction of ensembles of gene regulatory networks. Since this approach does not require knowledge on time-related trajectories, it allows us to model heterogeneous processes such as aging. We apply the approach to the aging-associated NF-\u03baB pathway. To this end, single-cell reconstruction approaches to study the differentiation of HSCs have recently been proposed. HSCs form all blood cells in a process termed hematopoiesis, in which the NF-\u03baB pathway is known to play a crucial role, but the main players and mechanisms regulating this process are still not well characterized. Upon aging, HSCs are increased in number, but their activity is highly heterogeneous and impaired. This impaired function of aged HSCs might influence the immune system (AAIR) and is likely to be associated with leukemia, motivating the study of the NF-\u03baB pathway involved in human HSC aging.
The network reconstruction is based on data from the recently published work of Ratliff and colleagues. The new reconstruction approach, termed \u201cfiltered best-fit\u201d, combines the detection of a filtered selection of input candidates with a best-fit reconstruction step. In the present work, we focus on developing and analyzing a network reconstruction pipeline taking advantage of the emerging single-cell sequencing techniques. The latter was applied to investigate the NF-\u03baB pathway. 2.1 Boolean networks are dynamic mathematical models applied to describe biological regulatory processes. These models are defined as a set of n compounds, each of which is considered to be either active (1) or inactive (0). The state of a network at a specific point in time t is therefore determined by a binary vector, and the transition to the next state is described by Boolean functions applied to the current state. Boolean network models were simulated using the R-package BoolNet. 2.2 Model reconstruction was performed on the publicly available single-cell RNA sequencing data from Ratliff and colleagues. 2.3 To reconstruct Boolean network ensembles of the NF-\u03baB pathway, the gene symbols from the dataset were mapped to Entrez IDs using the R-package biomaRt and filtered for the NF-\u03baB signaling pathway (hsa04064). All selected genes were then binarized using the BASCA algorithm from the R-package BiTrinA. 2.4 After binarization, time series for the 96 binarizable genes were generated to proceed with the network reconstruction. The state of each single-cell measurement is assumed to be a potential predecessor or successor of the state of each other single-cell measurement coming from the same individual. Consequently, it is possible to form a large number of tuples of predecessor X(t) and successor X(t\u00a0+\u00a01) states by combining random single-cell measurements: given a total amount of single-cell measurements s, the number of couples of predecessor and successor states is s(s\u00a0\u2212\u00a01)/2, each yielding two ordered tuples. This means, e.g.
for individual young A (19\u00a0years old) having 94 measurements, we receive 4371 couples of single-cell measurements, resulting in 8742 possible tuples. Out of them, 1000 tuples are picked to reconstruct the Boolean functions. This procedure was repeated 20 times. 2.5 Monotonicity is a dominant pattern in biological regulatory functions. The best-fit approach tests all possible combinations of inputs for each Boolean function to fit the dynamics of the given time series as correctly as possible. In more detail, the algorithm screens for the subset of input genes, with up to k inputs, that fits the data. Finding Boolean functions which match the measured observations in the time series data corresponds to the consistency problem for partially defined Boolean functions (pdBFs). pdBFs describe the true and false observations in the given time series of binary data, and each tuple of predecessor X(t) and successor time point X(t\u00a0+\u00a01) is added to the pdBF. Following the algorithm by Maucher et al., input candidates are pre-selected based on a correlation threshold \u03b5. In the final step, a Boolean function based on these inputs is created using truth tables. The truth table is filled by iterating through all examples j. Next, the number of inconsistencies in the pdBF is measured by intersecting the sets of true observations T and false observations F. The truth table f0 is initialized as undefined; here ? means undefined and * indicates a conflict. 2.6 Based on the binarized pseudo-time series and the filtered best-fit approach, Boolean functions were reconstructed for each individual independently. For each set of single-cell data obtained from one individual, 1000 tuples of data points were randomly drawn and used as time points (predecessor and successor state) to generate pseudo-time series for reconstruction. Next, Boolean networks were reconstructed from these pseudo-time series. For the eight individuals, eight populations of networks (ensembles), comprising all reconstructed Boolean functions for each gene, were generated.
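The pseudo-time tuple construction above can be sketched as follows; the function name and the fixed seed are illustrative assumptions. Note that the arithmetic matches the example in the text: 94 cells give 94·93/2 = 4371 couples and 94·93 = 8742 ordered tuples.

```python
import random
from itertools import permutations

def pseudo_time_tuples(states, n_samples, seed=0):
    """Treat every ordered pair of single-cell states from one individual as a
    candidate (predecessor, successor) pseudo-time step, then sample
    n_samples of them for reconstruction."""
    pairs = list(permutations(range(len(states)), 2))  # s*(s-1) ordered tuples
    rng = random.Random(seed)
    chosen = rng.sample(pairs, n_samples)
    return [(states[i], states[j]) for i, j in chosen]

# For individual young A: 94 binarized cell states -> 8742 candidate tuples,
# from which 1000 are drawn per reconstruction run.
```

Repeating the draw (20 times in the text) with different seeds yields the repeated reconstruction runs from which the network ensembles are built.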
For dynamic analysis, we sampled 100 Boolean networks from each ensemble by randomly choosing one of the potential Boolean functions suggested by the reconstruction algorithm. This procedure was repeated for 20 random picks of the 1000 tuples of pseudo-time points per individual, resulting in a total of 2000 networks per individual. 2.7 To evaluate the proposed reconstruction pipeline, we compared its computation time and performance to the original Best-Fit Extension approach. We created random networks of different sizes with scale-free topology as ground truth networks using the BoolNet R-package. Next, we created time series of different lengths from each of the random networks. For the first measurements, the number of time steps was fixed to 20 (for each of 100 networks per size 20 to 200). For the second measurements, we created a time series of |V| + 10 time points. To assess structural changes during aging, we compared the reconstructed networks of the young and aged groups as well as the structural changes of each individual on its own. Therefore, we calculated different properties based on the suggested interactions between the genes of the reconstructed Boolean networks by measuring: 1) the number (#) of compounds which are unregulated and, thus, set to a constant value (0/1) in the Boolean networks (fixed genes), 2) the number (#) of compounds that have a connectivity of 0 and are, consequently, disconnected from the rest of the graph (isolated genes), 3) the mean number (#) of incoming edges across the ensemble of networks (mean input), and 4) the mean number (#) of potential regulatory functions which could be found for each compound in the reconstructed process (mean functions). In addition, network motifs were also investigated across all individuals and by age groups (young vs aged).
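The structural properties 1)-4) above can be computed directly from the regulator sets of a reconstructed network; this sketch covers fixed genes, isolated genes, and mean input (the data structure and function name are illustrative assumptions):

```python
def interaction_features(inputs_per_gene):
    """Given, for each gene, the set of its regulators, compute the
    interaction graph-based properties compared between the ensembles."""
    n = len(inputs_per_gene)
    # Genes that act as a regulator of at least one other gene
    regulators = set()
    for regs in inputs_per_gene.values():
        regulators.update(regs)
    # Fixed genes: unregulated, hence set to a constant value (0/1)
    fixed = sum(1 for regs in inputs_per_gene.values() if not regs)
    # Isolated genes: connectivity 0 (no incoming and no outgoing edges)
    isolated = sum(1 for g, regs in inputs_per_gene.items()
                   if not regs and g not in regulators)
    # Mean number of incoming edges per gene
    mean_input = sum(len(regs) for regs in inputs_per_gene.values()) / n
    return {"fixed": fixed, "isolated": isolated, "mean_input": mean_input}

# Toy example: A regulates B; A and C are fixed, C is also isolated
feats = interaction_features({"A": set(), "B": {"A"}, "C": set()})
```

Averaging these values over all networks of an ensemble gives the per-individual summaries used in the comparison between young and aged groups.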
To this end, feed-forward loops and bi-fan motifs were counted. From each ensemble of Boolean networks, a binary interaction matrix was constructed, with an entry of 1 if the compound regulates the respective target gene. Similarly, a binary matrix has been constructed from the data available in STRING-DB. These two binary matrices for the ensembles and the STRING-DB were then compared to obtain a matching matrix. Here, 1 indicates that there is a match between interactions present both in the Boolean network as well as in STRING-DB. If there is no interaction in the Boolean network, the match matrix has a 0 entry. An entry of \u22121 indicates a mismatch, meaning that the interaction in the Boolean network was not found in STRING-DB or only exists via more than one indirect node. 2.9 For further analysis of the reconstructed ensembles of Boolean networks, we investigated their dynamic behavior by screening their attractor landscape. To do so, we performed an exhaustive attractor search on each of the networks in the network population (ensemble). All simulations were performed using the synchronous update strategy, as previously applied in other model simulations describing complex pathway interactions. First, we studied the mean number of attractors which could be found across all networks in the different ensembles and the repeated runs, and their number of states. Second, we investigated the distribution of gene activity through the different attractor patterns. Here, the availability of ensembles of networks allows analyzing probabilities of attractor patterns. We summarized the binary states of the different attractors of each network and normalized them by the total number of attractor states. This procedure yields the probability of each gene to be active within the complete attractor landscape. Next, we summed up these probabilities for all network simulations within one ensemble and the repeated runs and normalized again by ensemble size and the number of repeated runs.
Finally, the results show an average probability of genes to be active in the long-term behavior of the networks in each ensemble over the repeated reconstruction runs. 3.1 The aging process of HSCs is suggested to be characterized by a dynamic and heterogeneous behavior. While common effects of aging-related dysfunctions have been identified, not all HSCs in the elderly are thought to present this loss of function, which results in a likely heterogeneity of individual aged HSCs in single-cell expression data. Likely reflecting this heterogeneity, hierarchical clustering and t-SNE procedures on single HSCs from our selected dataset were not successful. In particular, analyzing the expression data of the genes in a known aging-related pathway such as NF-\u03baB did not separate the cells into distinct groups. For this reason, we reconstructed populations of Boolean models from single-cell RNA sequencing data. By studying the behavior and characteristics of the ensemble of individual networks, we provide novel tools to capture the heterogeneity of the aging process of HSCs. In the following, we will present in detail the rationale of our approach, together with the major insights from the ensembles of Boolean networks. 3.2 All data-based reconstruction algorithms largely depend on the amount of data and especially the time points which are available for reconstruction. Typically, there is only a small number of time points, and the reconstruction potential is limited. In this approach, we take advantage of the emerging single-cell high-throughput data and propose a new strategy to increase the amount of available time series within a population of cells. In single-cell sequencing data, though, each sample is described by a potentially large set of single-cell measurements. We group all measurements which belong to the same individual.
Within these groups, we assume the state of each single-cell measurement is a potential predecessor or successor of the state of each other single-cell measurement in the same domain \u2013 each single-cell sample is treated as a pseudo-time-point, and heterogeneity is converted into pseudo-time-points. This assumption is based on experimental results showing that LT-HSCs can depict interchangeable activity states, implying that a population of LT-HSCs dynamically transits from one activity level to the other. Introducing this new concept for retrieving pseudo-time points from single-cell experiments leads to an increased number of time points compared to bulk sequencing experiments. This increase of information challenges the currently available reconstruction algorithms, impairing the efficient reconstruction of models. Candidate regulatory inputs are therefore determined with the correlation-based pre-processing step before the Boolean functions are reconstructed. Comparing the reconstructed ensembles, the mean number of regulatory inputs was higher (p\u00a0=\u00a02.8\u00b710\u221210) in the young phenotype compared to the aged individuals, while fixed genes (p\u00a0=\u00a01.1\u00b710\u22129) and isolated ones (p\u00a0=\u00a01.6\u00b710\u22125) were lower in the young reconstructed networks. The observed behavior of HSCs (NF-\u03baB pathway), which are considered to be majorly quiescent, also sustains the hypothesis that major dysregulations in hematopoiesis during aging arise from differentially prompted and reactive HSCs. To investigate mechanisms causing a direct activation of the HSC, we deepened our analysis on the retrieved attractor landscape. To do so, we first analyzed the general activity tendencies between young and aged individuals and the inter-individual heterogeneity, grouping genes into activity that 1) decreased, 2) remained stable, and 3) increased in aging; IRAK4 is one example of these genes. Furthermore, a few genes showed either a stable active behavior or even an increase in activity throughout the aging process. Finally, our results show that PIDD1 is only active in one aged individual.
Interestingly, the reconstructed networks of this individual already showed more similarity in interaction graph-based features and network motifs to the youngest individuals (19 and 21 y.o.) compared to the remaining individuals of the aged group. A potential explanation for that might relate to a discrepancy between biological and chronological age for this individual. Notably, literature research revealed that PIDD1 activity is connected to DNA damage response. Following this hypothesis, we further deepened the literature screening looking for an explanatory mechanism on the potential implication of PIDD1 in delaying age-related processes. PIDD1 is suggested to act as a switch between DNA damage response and apoptosis by regulating the activation of first damage response via NF-\u03baB. Altogether, our dynamic analyses indicate a certain level of heterogeneity in the behavior of our ensembles of models. This suggests that the description of aging does not follow the idea of \u201cone fits all\u201d but might come with different mechanisms and activities. This idea is reflected in the distinction between biological and chronological aging. 4 Here, we present a new method to reconstruct data-driven ensembles of regulatory networks from single-cell RNA-sequencing data, potentially applicable to a wide variety of sequencing data. Taking advantage of the intra-cell population heterogeneity, the approach exploits the generation of pseudo-time points of populations of the same cell type. These pseudo-time points enable the reconstruction of Boolean network ensembles. To handle such potentially large (pseudo-)time series, we furthermore developed a new Boolean network reconstruction pipeline by adding a correlation-based screening of potential regulatory dependencies as a pre-processing step to the Best-Fit Extension algorithm.
The pre-processing step not only decreases computational time but also makes the reconstruction more robust to noisy data. Our use-case is the analysis of aging-related changes in NF-\u03baB signaling in human HSCs. Julian D. Schwab: Conceptualization, Software, Methodology, Formal analysis, Data curation, Visualization, Writing - original draft, Writing - review & editing. Nensi Ikonomi: Conceptualization, Formal analysis, Investigation, Visualization, Writing - original draft, Writing - review & editing. Silke D. Werle: Conceptualization, Formal analysis, Investigation, Visualization, Writing - original draft, Writing - review & editing. Felix M. Weidner: Formal analysis, Software, Visualization, Writing - original draft, Writing - review & editing. Harmut Geiger: Formal analysis, Writing - review & editing. Hans A. Kestler: Project administration, Supervision, Funding acquisition, Conceptualization, Formal analysis, Writing - original draft, Writing - review & editing. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."}
+{"text": "In recent times, deep artificial neural networks have achieved many successes in pattern recognition. Part of this success can be attributed to the reliance on big data to increase generalization. However, in the field of time series recognition, many datasets are often very small. One method of addressing this problem is through the use of data augmentation. In this paper, we survey data augmentation techniques for time series and their application to time series classification with neural networks. We propose a taxonomy and outline the four families in time series data augmentation, including transformation-based methods, pattern mixing, generative models, and decomposition methods. Furthermore, we empirically evaluate 12 time series data augmentation methods on 128 time series classification datasets with six different types of neural networks. Through the results, we are able to analyze the characteristics, advantages and disadvantages, and recommendations of each data augmentation method. This survey aims to help in the selection of time series data augmentation for neural network applications. Some applications include the recognition of signals, biometrics, sequences, sound, trajectories, and more. The challenge of using time series is that time series are structural patterns that are dependent on element order. Traditionally, time series classification was tackled using distance-based methods. However, acquiring large amounts of data can be a problem for many time series recognition tasks. For example, many datasets in the 2018 University of California Riverside (UCR) Time Series Archive contain only a small number of training patterns. One method of addressing this problem is data augmentation. Notably, data augmentation is a universal, model-independent, data-side solution. Data augmentation attempts to increase the generalization ability of trained models by reducing overfitting and expanding the decision boundary of the model.
Temporal Convolutional GANs (T-CGAN) are GANs that use temporal convolutional networks for time series generation. Finally, there are hybrid GAN models, such as the BLSTM-CNN GAN proposed by Zhu et al., whose generated time series were evaluated with classifiers such as k-nearest neighbor and SVM; similar results were found in related studies. We also consider the class imbalance as a property. Class imbalance is how imbalanced the sizes of the classes in the training set are. To quantify it, \u03b6 denotes a vector of the true distributions \u03b6c of the classes c in the dataset, where \u03b6c = Nc/N, Nc is the number of patterns in each class c, and N is the total number of training patterns. Vector \u03b2 = (1/C, \u2026, 1/C) is the distribution of a balanced dataset with C classes, and Cm is the number of minority classes, i.e., classes with fewer patterns than in the balanced case. \u03b9 is a vector representing the class distribution with the same Cm that has the maximal distance from \u03b2, i.e., a vector with Cm entries of \u03b9c = 0 and the remaining probability mass concentrated in the other classes. Balanced datasets have ID = 0 and unbalanced datasets have larger ID scores. In the experiment, the Hellinger distance is used as the distance measure d due to it having the highest Pearson correlation coefficient between imbalance and neural network performance among the distance functions tested in Ortigosa-Hernandez et al. Augmentation methods such as SMOTE attempt to counter class imbalance by oversampling the minority classes. Unlike SMOTE, the data augmentation methods in the experiments sample the classes at the same distribution as what already exists in the training dataset. While small classes can benefit from more training samples, the large classes also increase in size. Despite this, there are correlations that can be found from the results. Acc tends to rise for magnitude domain data augmentation methods for MLP, VGG, and LSTM-FCN. Conversely, the time domain data augmentations tend to have negative or very small correlations for the same models. The theoretical computational complexity of many of the methods is similar; the simple transformations are O(T) and the complex transformations and pattern mixing methods are O(T2), where T is the number of time steps.
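The Hellinger-distance component of the imbalance measure above can be sketched as follows; only the distance between the empirical class distribution \u03b6 and the balanced distribution \u03b2 is shown, and the full ID normalization via \u03b9 from Ortigosa-Hernandez et al. is omitted:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def class_distribution(labels):
    """Empirical class distribution: zeta_c = N_c / N."""
    _, counts = np.unique(labels, return_counts=True)
    return counts / counts.sum()

# Hypothetical 2-class dataset with a 3:1 split
labels = [0, 0, 0, 1]
zeta = class_distribution(labels)            # [0.75, 0.25]
beta = np.full(len(zeta), 1 / len(zeta))     # balanced distribution
score = hellinger(zeta, beta)                # 0 iff balanced, larger otherwise
```

The distance is 0 exactly when the dataset is balanced, which matches the ID = 0 property stated above.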
However, as shown by the measured execution times (an implementation of the evaluated methods is available at https://github.com/uchidalab/time_series_augmentation), theoretical complexity does not tell the whole story. The average observed execution times for the transformation-based methods are negligible. They only took a fraction of a second on average to double the dataset size. On the other hand, the pattern mixing methods are much slower, with SPAWNER and RGW taking about a minute on average to execute and wDBA and DGW taking 2,300 and 4,290 seconds, respectively. The reason for the slow run times is primarily the speed of DTW, which each of the methods relies on for element alignment. For most datasets, this is not a problem, but ones with long time series, such as HandOutlines with 2,709 time steps, can take extraordinary amounts of time compared to the transformation-based methods. Accordingly, pattern mixing methods which do not use DTW, such as interpolation, might not face the same issue. DGW and wDBA take especially long because, for each generated time series, multiple DTW calculations must be performed. In addition, the DGW implementation used in the experiment uses shapeDTW, which takes significantly longer to execute than standard DTW. The number of tunable parameters is another aspect of the data augmentation methods that one needs to consider. The hyperparameters, design choices, and variation of the method need to be selected, and the effectiveness of the augmentation method can depend on the parameters. Thus, methods with many parameters may need many adjustments and evaluations to be effectively used. A list of the parameters that need to be defined manually is displayed in the corresponding table. Each data augmentation method has different effects depending on the model and dataset, as shown in the results tables. The datasets can be broken into categories, such as ECG, sensor, etc.; to categorize the datasets, we use the provided labels. Jittering, Magnitude Warping, and Scaling had similar results.
They tend to act similarly since they are similar transformations that only differ in how many directions the magnitude gets scaled. For example, in general, these magnitude domain transformations work well with VGG and with LSTM. As described previously, Rotation (flipping in this implementation) seems to be unsuitable as a general time series data augmentation method, and its overall accuracy reflects this. Similar to Rotation, Permutation had severely detrimental effects on accuracy. However, this is somewhat expected because Permutation breaks the time dependency of the time series. The only time that it would intuitively make sense for Permutation to be used would be for periodic time series or very sparse time series. As a general purpose data augmentation method for time series, the other time domain transformations far outperform Permutation. Specifically, Slicing and Window Warping performed well on most datasets and models. In particular, the CNN-based models, VGG and ResNet, were significantly improved by Slicing and Window Warping. However, Time Warping showed a poor performance. This is likely due to over-transforming the time series, causing significant noise. As mentioned previously, the largest downside to the pattern mixing methods is the slow computation time. While it is possible to achieve better generalization and results using these methods, one has to consider the extra time it would take to generate the data. However, this issue only arises with longer patterns, and the execution time is negligible for short time series. For most datasets, wDBA had disappointing results. The primary reason for this is wDBA not creating diverse enough results. In the evaluation, we used the ASD weighting scheme because Forestier et al. found it to perform well. As for SPAWNER, RGW, and DGW, there were mixed results.
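The transformation-based methods discussed above can be sketched in a few lines of NumPy; the parameter defaults are illustrative assumptions, not the values tuned in the survey:

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Jittering: add Gaussian noise to each element (magnitude domain)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Scaling: multiply the whole series by one random factor."""
    rng = rng or np.random.default_rng(0)
    return x * rng.normal(1.0, sigma)

def window_slice(x, reduce_ratio=0.9, rng=None):
    """Slicing: crop a random window, then stretch it back to the
    original length by linear interpolation (time domain)."""
    rng = rng or np.random.default_rng(0)
    target = int(np.ceil(reduce_ratio * len(x)))
    start = rng.integers(0, len(x) - target + 1)
    window = x[start:start + target]
    return np.interp(np.linspace(0, target - 1, len(x)),
                     np.arange(target), window)
```

Magnitude Warping and Window Warping follow the same pattern but scale or stretch with a smooth random curve instead of a single factor or window.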
In some models and some datasets, they performed better than other augmentation methods, and on others, they performed worse. Notably, DGW had the largest performance increase over no augmentation out of all of the pattern mixing methods and the highest average rank on BLSTM compared to all augmentation methods. In this paper, we performed a comprehensive survey of data augmentation methods for time series. The survey categorizes and outlines the various time series data augmentation methods. We include transformation-based methods across the related time series domains as well as time series pattern mixing, generative models, and decomposition methods. Furthermore, a taxonomy of time series data augmentation methods was proposed. In addition, an empirical comparative evaluation of 12 time series data augmentation methods on six neural network models and 128 discrete finite time series datasets was performed. Namely, we used all of the datasets in the 2018 UCR Time Series Archive for the evaluation. By using all 128 datasets of the 2018 UCR Time Series Archive, we are able to test the data augmentation methods on a wide variety of time series and make recommendations on data augmentation usage. As a general, easy-to-use, and effective method, the Window Warping algorithm proposed by Le Guennec et al. is the most recommended. The results also revealed some key aspects of data augmentation with time series based neural networks. For example, LSTM-FCN does not respond well to data augmentation. Part of this could be because LSTM-FCN uses dropout with a very high rate, or it could be that the design of the architecture is simply not suitable for data augmentation. MLP, to a lesser extent, also did not respond well. Conversely, the accuracy of VGG was often improved by most of the data augmentation methods.
The other models, ResNet, LSTM, and BLSTM, had mixed results, and careful augmentation method selection is required. We also analyzed different properties of time series datasets and the effects data augmentation had on datasets with them. First, the correlations between properties of time series datasets and the change in accuracy from augmentation were found. The findings showed that there generally was a negative correlation between change in accuracy and training set size (and number of patterns per class) and a positive correlation for dataset variance and intra-class variance. We found that time series length generally had a negative correlation for RNN-based models, but positive correlations for CNN-based models. There was a strong positive correlation between change in accuracy and class imbalance for MLP, VGG, and LSTM-FCN for the magnitude domain data augmentation methods. Second, using ranking, we found the top augmentation methods for each model and dataset type combination. This survey serves as a guide for researchers and developers in selecting the appropriate data augmentation method for their applications. Using it, one can refine the selection of data augmentation based on dataset type, property, and model. An easy-to-use implementation of the algorithms evaluated in this survey is provided at https://github.com/uchidalab/time_series_augmentation. Since data augmentation for time series is not as established as data augmentation for images, there is a lot of room for time series data augmentation to grow. For example, like most works, this survey only uses a single data augmentation method for each model. It is possible that multiple data augmentation methods can synergize well and be used serially; with the exception of Um et al., this idea remains largely unexplored. S1 Table (PDF). S1 Fig (PDF): the top row in red is the change in accuracy and the subsequent rows are the correlations."}
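The property-versus-benefit analysis described above amounts to a rank correlation between a dataset property and the change in accuracy. A minimal sketch follows; the arrays are made-up stand-ins, not the survey's measurements, and the Spearman implementation assumes no ties:

```python
import numpy as np

def rank(a):
    # Assign ranks 0..n-1 by sorted order (no tie handling).
    a = np.asarray(a)
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks.
    return np.corrcoef(rank(x), rank(y))[0, 1]

# Made-up example: larger training sets tend to benefit less
# from augmentation, giving a negative correlation.
train_sizes = [16, 30, 60, 390, 1800]
acc_change = [0.05, 0.03, 0.02, 0.00, -0.01]
rho = spearman(train_sizes, acc_change)
```

With these toy numbers the benefit decreases monotonically with training set size, so `rho` is exactly -1.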
+{"text": "Psoriasis is an immune-mediated, inflammatory disorder of the skin with chronic inflammation and hyper-proliferation of the epidermis. Since psoriasis has genetic components and the diseased tissue of psoriasis is very easily accessible, it is natural to use high-throughput technologies to characterize psoriasis and thus seek targeted therapies. Transcriptional profiles change correspondingly after an intervention. Unlike cross-sectional gene expression data, longitudinal gene expression data can capture the dynamic changes and thus facilitate causal inference.Using the iCluster method as a building block, an ensemble method was proposed and applied to a longitudinal gene expression dataset for psoriasis, with the objective of identifying key lncRNAs that can discriminate the responders from the non-responders to two immune treatments of psoriasis.Using support vector machine models, the leave-one-out predictive accuracy of the 20-lncRNA signature identified by this ensemble was estimated as 80%, which outperforms several competing methods. Furthermore, pathway enrichment analysis was performed on the target mRNAs of the identified lncRNAs. Of the enriched GO terms or KEGG pathways, proteasome, and protein deubiquitination is included. The ubiquitination-proteasome system is regarded as a key player in psoriasis, and a proteasome inhibitor to target ubiquitination pathway holds promises for treating psoriasis.An integrative method such as iCluster for multiple data integration can be adopted directly to analyze longitudinal gene expression data, which offers more promising options for longitudinal big data analysis. A comprehensive evaluation and validation of the resulting 20-lncRNA signature is highly desirable.The online version contains supplementary material available at 10.1186/s40246-021-00323-6. Psoriasis is an immune-mediated, inflammatory disorder of the skin with chronic inflammation, and hyper-proliferation of the epidermis . 
Long non-coding RNAs (lncRNAs) are post-transcriptional and epigenetic regulators that have lower expression levels and are more tissue-specific compared with protein-coding genes. One study examined the antisense non-coding RNA in the INK4 locus (ANRIL) in 286 patients with psoriasis and 300 controls, and demonstrated that this lncRNA can be regarded as a risk locus of psoriasis. Another study showed that maternally expressed gene 3 (MEG3), a competing endogenous RNA (ceRNA) of miR-21, was significantly downregulated in lesional skin of psoriasis. Furthermore, by carrying out weighted gene correlation network analysis (WGCNA), a study explored lncRNA modules relevant to psoriasis. Transcriptional profiles not only vary under different conditions or in different tissues but also change correspondingly as a disease initializes and advances, or after an intervention or a stimulus. Unlike cross-sectional gene expression data, longitudinal gene expression data can capture such dynamic changes and help infer the causal relationship between these temporal changes and the phenotype of interest. Consequently, the amount of such data has increased dramatically; for psoriasis alone, several longitudinal gene expression datasets have been generated. In this study, a medium-sized longitudinal dataset was reanalyzed. The iCluster method was proposed for the integrative clustering of multiple genomic data types; in our opinion, longitudinal gene expression data can be regarded as a special case of multiple omics data or multiple-view data integration, with time points playing the role of data types. A microarray dataset from the Gene Expression Omnibus (GEO; https://www.ncbi.nlm.nih.gov/geo/) was used to identify relevant lncRNAs to predict the response status of individual patients to immune treatments. There were 179 arrays in this experiment, which involved the gene expression profiles of 30 patients with moderate to severe psoriasis at the baseline, with both non-lesional and lesional skin, and at weeks 1, 2, 4, and 16. Of these 30 patients, half were treated with adalimumab (ADA) and the other half were treated with methotrexate (MTX).
Of note, one patient who was on the ADA arm had no expression level measured at week 16, given that his/her PASI score had already achieved a reduction of 75% at week 4 (and thus the patient had been discharged). The clinical and demographic characteristics of these patients are summarized in Table S1. The pre-processed data, which were quantile normalized, were downloaded from the GEO database. By matching the gene symbols of lncRNAs in the GENCODE database (version 32; https://www.gencodegenes.org/) to those of genes annotated by the Illumina HumanHT-12 V 4.0 bead chips, 662 unique lncRNAs were identified, upon which the downstream analysis was carried out. The integrative clustering method (iCluster) proposed by Shen et al. uses a joint latent variable model in which the observed expression values Xit of sample i for data type t are related to a set of latent variables Zi in the following way: Xit = WtZi + \u03b5it, where Wt represents the coefficient of gene g for data type t and \u03b5it is the error term. Conditioned on the latent variable Zi, the Xit are independent from one another. The correlations of different genomic data for the same people are modeled with these latent variables. In the iCluster model, an expectation-maximization (EM) algorithm is used for parameter estimation. By using a soft-threshold method to continuously shrink the coefficients of non-informative genes toward 0, the iCluster method simultaneously performs feature selection and clustering. In this study, the iCluster/iClusterPlus method is adopted to analyze longitudinal gene expression data that involve four time points, namely lesional tissues at the baseline, week 1, week 2, and week 4, with the objectives of selecting important lncRNAs that can distinguish responders from non-responders to a specific immune therapy, revealing the underlying therapeutic mechanisms of the treatment, and thus detecting patients who are highly likely to respond and benefit from the treatment as early as possible. Consequently, instead of representing multiple data types, the index t in the above equation corresponds to time points.
The number of clusters in iCluster was set at two, given that the response status to a specific treatment is the outcome of interest. The iCluster method is essentially an unsupervised learning method, whose predictive performance is usually inferior to that of a supervised learning method. To address this issue, following the idea of ensemble learning, we randomly selected a small subset of lncRNAs and performed clustering repeatedly by applying the iCluster method to the resulting subsets 10,000 times. Of note, we disabled feature selection of the iCluster method by setting the tuning parameter \u03bb to zero; this consideration is based on the fact that we only used a small subset of lncRNAs for each replicate. The importance of a specific gene was then judged from its Wt values in those \u201cweaker\u201d iCluster learners: if |Wt| = 0 at all time points, the gene would be ruled out; on the other hand, if the sum of |Wt| is large enough, which may correspond to two extreme cases (either the magnitude of |Wt| is very large at a single time point or two, or the values are subtle at all time points but add up to a large sum), the gene is subject to temporal changes over time. Alternatively, the maximum of |Wt| may be used to represent the importance of a certain gene; however, it would lead to a high probability of missing the latter scenario. We believe that this strategy can help obtain a stronger and more robust learner and identify core lncRNAs associated with the outcome of interest. This procedure is referred to as the iCluster ensemble hereafter, and the R code of the iCluster ensemble has been deposited in the GitHub repository (https://github.com/windytian/icluster_ensemble-). Then, we combined the resulting lncRNA lists of learners whose accuracy is > 75%, and ranked the lncRNAs according to self-customized scores which may be used to evaluate the importance of certain lncRNAs in the overall integrated \u201cstronger\u201d learner.
These scores were calculated by summing up a specific gene\u2019s absolute Wt values across the retained learners. All statistical analyses were carried out in the R language, version 3.6.1 (http://www.r-project.org), with the aid of Bioconductor and CRAN packages; specifically, iClusterPlus for the iCluster models, gee (https://cran.r-project.org/web/packages/gee/gee.pdf) for fitting the GEE models, and e1071 (https://cran.r-project.org/web/packages/e1071/e1071.pdf) for support vector machine modeling. By majority voting over the learners, the prediction of response status for all samples was made: if a sample is predicted as a responder more often than as a non-responder, it is classified as a responder, and vice versa. The iCluster ensemble of 10,000 replicates resulted in an overall accuracy of 83.33%, with 5 responders being misclassified as non-responders (2 were on MTX treatment and 3 on ADA). Notably, if only the learners with accuracy > 80% were considered, the final accuracy increased slightly to 86.67%. Nevertheless, given that there were only 6 learners meeting this stringent cutoff and most lncRNAs within these 6 learners appeared only once (mostly subject to randomness), the less stringent cutoff was chosen. Ranking decreasingly according to the self-defined scores in the \u201cMaterials and Methods\u201d section, we selected the first 20 lncRNAs as core genes to build a classification model and predict the response status of a psoriasis patient to a specific immune therapy. Of these, LINC00936 ranked at the top. In the literature, we cannot find evidence linking this lncRNA to psoriasis, and future experimental validation of its role in psoriasis is highly desirable. However, the lncRNADisease 2.0 knowledgebase suggested that ATP2B1-AS1 was identified as a differentially expressed gene in astrocytoma using a microarray experiment.
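The scoring and ranking step of the ensemble can be sketched as follows. Here `learners` is a made-up stand-in for the 10,000 iCluster replicates, each carrying a leave-one-out accuracy and the per-gene |Wt| coefficients over the four time points; the gene names and numbers are illustrative only:

```python
def ensemble_scores(learners, acc_cutoff=0.75):
    # Keep only learners above the accuracy cutoff, then score each
    # gene by the sum of its |Wt| values over time points and learners.
    scores = {}
    for accuracy, weights in learners:
        if accuracy <= acc_cutoff:
            continue
        for gene, w in weights.items():
            scores[gene] = scores.get(gene, 0.0) + sum(abs(v) for v in w)
    # Rank genes decreasingly by score.
    return sorted(scores, key=scores.get, reverse=True)

learners = [
    (0.80, {"LINC00936": [0.9, 0.2, 0.0, 0.1], "H19": [0.1, 0.1, 0.1, 0.1]}),
    (0.70, {"IGNORED": [5.0, 5.0, 5.0, 5.0]}),  # below the accuracy cutoff
    (0.83, {"H19": [0.3, 0.0, 0.2, 0.0]}),
]
ranking = ensemble_scores(learners)
```

Summing |Wt| over time points (rather than taking the maximum) is what lets a gene with subtle but persistent effects across all four time points still rank highly.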
Additionally, using the GeneCards database, CD27-AS1 and H19 were indicated to be directly related to psoriasis, and seven other lncRNAs on the 20-lncRNA list were indirectly related to psoriasis. The heatmap of average expression values for the 20 lncRNAs across the baseline, week 1, week 2, and week 4 is shown in the corresponding figure. To investigate the predictive capacity of the resulting 20-lncRNA list, we randomly selected a set of 20 lncRNAs 1000 times. Subsequently, SVM models were fit on the LOO data using the randomly selected 20 lncRNAs as predictors, and the predictive accuracies for these 1000 replicates were calculated and averaged. The baseline accuracy of a random 20-gene list is estimated as 53.47%; therefore, the 20-gene list identified by the iCluster ensemble outperforms randomly selected 20-gene lists. Furthermore, a comparison between the iCluster ensemble and several competing methods, namely, GEE-based screening, EDGE, LASSO, and a LASSO ensemble, was made. For the GEE-based screening, GEE models with an unstructured working correlation structure were fit for individual genes and the top 20 (most significant) lncRNAs were selected. Upon the 20-gene list, LOO support vector machine models were fit to calculate the predictive error rate, which is estimated as 33.33% and is inferior to the 20% achieved by the iCluster ensemble method. As a specific feature-selection method to handle longitudinal data, the EDGE method has been widely utilized. When setting the cutoff value of FDR at 0.05, 27 lncRNAs were deemed differentially expressed across time between the responder group and the non-responder group by the EDGE method. Then, LOO SVM models were fit to estimate the predictive accuracy of the 27-gene list, whose value is 56.67%. Specifically, LASSO is an embedded method that identifies relevant features and constructs the final classifier simultaneously.
In order to fit LASSO, the longitudinal expression profiles need to be downgraded to cross-sectional expression profiles by calculating the average of each gene across time points. For this application, most LASSO models selected no lncRNAs at all (which corresponds to the null model), resulting in an error rate of 46.67%, which is very close to a random guess. Furthermore, we replaced iCluster with LASSO to frame a LASSO ensemble, in which a LASSO logistic model was used as the basic learner to identify relevant lncRNAs among 100 randomly selected genes. Based on the sum of estimated coefficients over the 10,000 replicates, the top 20 lncRNAs were selected. Then LOO SVM models were fit to estimate the predictive accuracy of the LASSO ensemble, whose value is 73.33%, a substantial improvement over LASSO. The results of this comparison are presented in a table. Based on the above comparison, we concluded that the superiority of the iCluster ensemble may be due to two aspects: the use of a method capable of analyzing longitudinal data, and the ensemble, which abstracts a stronger learner from weak learners. The contribution of the ensemble may be substantially bigger, and it also addresses the drawback of iCluster being an unsupervised learning method. In addition, since the relevant biomarkers for the two treatments may differ, separate analyses stratified by treatment using the iCluster ensemble were also performed, and the results are given in Table S2. Another application of the iCluster-ensemble procedure, on a longitudinal microarray dataset of patients with multiple sclerosis, was made, and the analysis results are presented in Supplementary File 1. Most of the identified lncRNAs, such as FRMDEAS1, possessed a U-shape temporal pattern. This temporal change pattern in the responder group suggests that the expression levels of these lncRNAs recovered toward their respective normal values; thus, these lncRNAs may indeed have prognostic value for the response status.
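All of the method comparisons above rely on leave-one-out (LOO) evaluation of a classifier on the selected gene list. A generic sketch of that protocol follows; since the exact e1071 SVM settings are not given here, a nearest-centroid classifier stands in for the SVM, and the toy data are invented:

```python
import numpy as np

def loo_accuracy(X, y, fit, predict):
    # Leave-one-out: train on all-but-one sample, predict the held-out one.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        hits += predict(model, X[i]) == y[i]
    return hits / len(y)

# Toy stand-in for the SVM: classify by the nearest class centroid.
def fit_centroid(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroid(model, x):
    return min(model, key=lambda c: float(np.linalg.norm(x - model[c])))

# Two responders ("R") and two non-responders ("NR") in a toy feature space.
X = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
y = ["NR", "NR", "R", "R"]
acc = loo_accuracy(X, y, fit_centroid, predict_centroid)
```

With only 30 patients, LOO makes the most of the data, but as the paper notes it can still overestimate performance relative to validation on a truly independent cohort.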
As an example, for H19, the average expression value for non-lesional skin is 7.405; it reached its minimum for baseline lesional skin, and then, in the responder group, the average expression level turned back up and monotonically increased, even surpassing the non-lesional level and climbing to 8.188 at week 16. In contrast, the U-shape in the non-responder group has much less curvature; actually, it looks more like a horizontal line. Using the loess method, the change trajectories of the identified lncRNAs\u2019 expression profiles, stratified by response status, were made and are presented in the corresponding figure. Lastly, the target mRNAs of these 20 lncRNAs were retrieved from the lncRNADisease 2.0 knowledgebase and fed into the pathway enrichment analysis. H19 was predicted to be associated with psoriasis by some computational methods. Overall, the literature mining and the lncRNA canonical knowledgebase search found limited valuable information on the roles that this 20-lncRNA signature may play in combating psoriasis. As far as psoriasis is concerned, research on its relevant lncRNA markers is really rare, explaining why a search in the lncRNADisease 2.0 knowledgebase returned few records. It is worth pointing out that there are several limitations in this study. First, the sample size is not very large; stratified by the treatment arms, there were only 15 patients in each stratum. Given that these two treatments may differ in terms of underlying therapeutic mechanisms and targeted molecular markers or pathways, separate analyses stratified by treatment arms were conducted and the results are presented in the Additional file. As aforementioned, lncRNA investigations on psoriasis are rare, let alone longitudinal studies such as the one considered here.
To the best of our knowledge, the present study is among the first efforts to explore the association between lncRNAs and psoriasis using longitudinal gene expression data. Second, this study was not carried out on a platform specific for lncRNAs. As a result, some crucial lncRNAs for psoriasis may be absent from this analysis. For example, the psoriasis associated non-protein coding RNA induced by stress (PRINS), which has been shown to exhibit the highest expression levels in non-psoriatic skin lesions and to play an important role in the pathogenesis of psoriasis, does not belong to the 662 lncRNAs under consideration in this study. Lastly, the predictive performance of the identified 20-lncRNA list was not validated on an independent dataset, resulting in a potential overestimation. This is due to the shortage of an independent dataset with the same or similar objectives and study design, in addition to a decent sample size. A large longitudinal lncRNA study is needed to reveal the therapeutic mechanism of an immune treatment for psoriasis and thus predict the response status as early as possible, from the perspective of lncRNAs. In addition to being viewed as a gene set, the identified lncRNAs may also be examined individually in future work. In this study, a well-known integrative clustering method, namely, the iCluster method, was used repeatedly to devise an ensemble for longitudinal microarray data analysis, with the objective of identifying relevant lncRNAs to predict the response status of psoriasis patients to immune therapies. Using the iCluster ensemble and longitudinal lncRNA expression values during the early period of treatment for patients with psoriasis, our analysis highlighted 20 lncRNAs that may hold predictive value for distinguishing between the responders and the non-responders to immune treatment. Further investigation of these 20 lncRNAs, to reveal comprehensively how they function in concert when triggered by immune treatment to fight psoriasis, is warranted. Additional file 1: Supplementary File 1\u2013Another application of the iCluster ensemble procedure on multiple sclerosis data, and separate analyses stratified by treatments for psoriasis data.
Table S1\u2014The clinical and demographic characteristics of psoriasis patients in the longitudinal microarray experiment. Table S2\u2014Relevant lncRNAs identified by separate analyses. Table S3\u2014Comparison between iCluster-ensemble and competing methods for the multiple sclerosis application."}
+{"text": "CLDN1, HCAR3, FNBP1L, and BRCA2, the expression of which was confirmed in ESCC samples. The prognosis of patients in the high-IRGPI group was poor, as verified using publicly available expression data. KMT2D mutations were more common in the high-IRGPI group. Enrichment analysis revealed an active immune response, and immune infiltration assessment showed that the high-IRGPI group had an increased infiltration degree of CD8 T cells, which contributed to the improved response to ICI treatment. Collectively, these data demonstrate that IRGPI is a robust biomarker for predicting the prognosis and response to therapy of patients with ESCC.Immune checkpoint inhibitor (ICI) therapy may benefit patients with advanced esophageal squamous cell carcinoma (ESCC); however, novel biomarkers are needed to help predict the response of patients to treatment. Differentially expressed immune-related genes within The Cancer Genome Atlas ESCC dataset were selected using the weighted gene coexpression network and lasso Cox regression analyses. Based on these data, an immune-related gene prognostic index (IRGPI) was constructed. The molecular characteristics of the different IRGPI subgroups were assessed using mutation information and gene set enrichment analysis. Differences in immune cell infiltration and the response to ICI therapy and other drugs were also analyzed. Additionally, tumor and adjacent control tissues were collected from six patients with ESCC and the expression of these genes was verified using real-time quantitative polymerase chain reaction. IRGPI was designed based on Esophageal carcinoma is one of the leading causes of cancer-related death worldwide . 
An increasing number of researchers have recently focused on immune checkpoint inhibitor (ICI) therapy, which prevents tumor cell immune escape and induces an immune response by inhibiting immune checkpoints, such as the programmed death-1 (PD-1), programmed death-ligand 1 (PD-L1), and CTL-associated protein-4 (CTLA-4) pathways. Phase III clinical trials have demonstrated the efficacy of ICI therapy in advanced ESCC. The response of patients to ICI therapy is mainly affected by tumor cell-intrinsic factors and the tumor microenvironment; hence, novel biomarkers for predicting this response are needed. In this study, we used immune gene signatures to develop prognostic and ICI therapy indicators for patients with ESCC. We also performed weighted gene coexpression network analysis (WGCNA) and lasso regression analysis to construct an immune-related gene prognostic index (IRGPI). The molecular and immune characteristics of the IRGPI subgroups were evaluated, and the potential of IRGPI for assessing immunotherapy efficacy in patients with ESCC was determined. The study design is shown in the flow diagram. ESCC transcriptome data, clinical information, and gene mutation data were downloaded from The Cancer Genome Atlas (TCGA) database, which included 81 tumor and 11 adjacent noncancerous samples. Transcriptome data and clinical information of the validation cohort GSE53625, with 179 ESCC tumor samples and 179 adjacent normal tissues, were downloaded from the NCBI Gene Expression Omnibus database. Immune-related genes were collected from the InnateDB (https://www.innatedb.com), ImmPort, and IRIS databases. Differentially expressed ESCC genes were identified using the R package limma (version 3.44) based on TCGA transcriptome data, with a false discovery rate < 0.05 and fold change > 1.5. For WGCNA, the soft-thresholding power was set to \u03b2 = 7; under this selection, the scale-free topology fitting index R2 > 0.85. Based on the gene expression matrix, the similarity of gene expression was calculated to obtain the adjacency matrix, which was then transformed into a topological overlap matrix.
Genes were grouped by hierarchical clustering and then divided into different expression modules according to the coexpression pattern. We then calculated the correlation between these gene modules and ESCC occurrence; modules with an absolute correlation coefficient > 0.7 were selected for further analysis. The WGCNA (version 1.46) package was used to identify hub genes significantly associated with ESCC. Univariate prognostic analysis was performed for genes in the selected modules with the R package survival (version 3.2). The R package glmnet (version 4.0) was used for lasso analysis of the genes found to be survival-related in the univariate analysis. For the immune-related genes included in the IRGPI, the R package survminer (version 0.4.8) was used to determine the optimal cutoff expression value for prognosis, and the log-rank test and Kaplan\u2013Meier survival curves were used to determine the relationship with overall survival. Based on the median value of IRGPI (0.13), TCGA ESCC patients were divided into high- and low-IRGPI groups. The R package limma (version 3.44) was used to analyze the differentially expressed genes between groups. The R package maftools (version 2.6.05) was used to summarize and visualize mutation information between subgroups. The ssGSEA function of the R package GSVA (version 1.36) was used to calculate the enrichment score of 28 types of immune cells for each sample. The SubMap module in GenePattern (https://cloud.genepattern.org/gp) was applied to predict the response of the IRGPI subgroups to immunotherapy. According to the Genomics of Drug Sensitivity in Cancer (GDSC) database (https://www.cancerrxgene.org/), we predicted the different reactions of the two IRGPI subgroups to chemotherapy.
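The adjacency-to-topological-overlap step described above can be sketched numerically. This is an illustrative unsigned-network version in Python (WGCNA itself is an R package); the soft-thresholding power follows the paper's \u03b2 = 7, while the toy expression matrix is random:

```python
import numpy as np

def adjacency(expr, beta=7):
    # expr: genes x samples. Unsigned adjacency a_ij = |cor(i, j)|^beta.
    return np.abs(np.corrcoef(expr)) ** beta

def tom(adj):
    # Topological overlap matrix:
    # TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij),
    # where l_ij = sum_u a_iu * a_uj and k_i is the connectivity of node i.
    a = adj.copy()
    np.fill_diagonal(a, 0.0)
    l = a @ a
    k = a.sum(axis=1)
    kmin = np.minimum.outer(k, k)
    t = (l + a) / (kmin + 1.0 - a)
    np.fill_diagonal(t, 1.0)
    return t

rng = np.random.default_rng(0)
expr = rng.normal(size=(10, 20))  # toy data: 10 genes, 20 samples
T = tom(adjacency(expr))
```

Hierarchical clustering is then run on the dissimilarity 1 - TOM to cut the gene tree into coexpression modules.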
Based on the gene expression of the samples and the GDSC training set, the half-maximal inhibitory concentration (IC50) of each chemotherapy drug was evaluated by ridge regression using the R package pRRophetic (version 0.5), with tenfold cross-validation used to assess prediction accuracy. After obtaining ethics approval (no. 2019ZDSYLL023-Y01) from Zhongda Hospital, ESCC and adjacent-control tissues were collected from six patients with ESCC. The adjacent tissues were collected from esophageal tissue more than 2\u2009cm and less than 5\u2009cm away from the ESCC tissue. Fresh ESCC and adjacent normal tissues were collected during surgery. After rapid freezing in liquid nitrogen, all tissue samples were stored at \u221280\u00b0C. The obtained tumor tissues were pathologically verified. The FastPure Cell/Tissue Total RNA Isolation Kit was used according to the manufacturer's instructions to extract total RNA from patient tissue samples. cDNA was synthesized using HiScript III RT SuperMix for qPCR (Vazyme), and qPCR was performed using ChamQ SYBR qPCR Master Mix (Vazyme). The gDNA filter columns in the RNA extraction kit and the subsequent gDNA wiper mix before reverse transcription ensured that there was little or no gDNA residue in the qPCR system. Relative gene expression was calculated using the 2^(\u2212\u0394\u0394Ct) method, with GAPDH as an internal reference. All samples were evaluated three times. The corresponding primers used are listed in the Supplementary Table. All statistical analyses were implemented using R (version 3.6.1). The Wilcoxon rank test was used to verify the statistical significance between continuous variables, and the chi-squared test was used to compare classified variables. A P value < 0.05 was considered to indicate statistically significant results. Differentially expressed genes were identified with |log2\u2009fold\u2009change| > 0.585 as a standard.
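The relative-expression calculation is the standard 2^(\u2212\u0394\u0394Ct) method. A small sketch follows; the Ct values are made up for illustration and do not come from the study:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # dCt = Ct(target) - Ct(reference, e.g. GAPDH);
    # ddCt = dCt(sample) - dCt(control); fold change = 2 ** (-ddCt).
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Made-up Ct values: a gene in tumor tissue vs. GAPDH, compared with
# the paired paracancerous control tissue.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
# ddCt = (24 - 18) - (26 - 18) = -2, so fold change = 2 ** 2 = 4
```

Because Ct is a cycle count on a log2 scale, each unit of negative ddCt corresponds to a doubling of relative expression.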
Cross-analysis with immune-related gene databases revealed 1271 differentially expressed immune-related genes in the TCGA ESCC dataset. Overall, 713 and 558 genes were up- and downregulated, respectively. GO enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses further showed that these differentially expressed immune-related genes were significantly correlated with 479 GO terms and 56 KEGG pathways. The gene modules most correlated with ESCC were selected for further analysis, and four genes (CLDN1, HCAR3, FNBP1L, and BRCA2) were selected as the best model. The expression of CLDN1 in the high-IRGPI group was higher than that in the low-IRGPI group, with median expression levels of 8.56 and 7.4, respectively. The expression of HCAR3 in the high-IRGPI group was higher than that in the low-IRGPI group, with median expression levels of 1.89 and 0.51, respectively. The expression of FNBP1L in the high-IRGPI group was lower than that in the low-IRGPI group, with median expression levels of 4.90 and 5.48, respectively. The expression of BRCA2 in the high-IRGPI group was lower than that in the low-IRGPI group, with median expression levels of 4.13 and 4.65, respectively. We used the aforementioned four genes to construct an immune-related gene prognosis model of ESCC by multivariate Cox regression; the IRGPI is a weighted sum of the expression of the four genes, with the Cox regression coefficients as weights, and patients were stratified by the optimal cutoff point. The high-IRGPI group showed significantly worse overall survival (P < 0.0001), and the association remained significant in the validation cohort (P < 0.05). The area under the curve (AUC) values of IRGPI at 1, 3, and 5 years were 0.791, 0.807, and 0.845, respectively. TP53, TTN, CSMD3, DNAH5, MUC16, NFE2L2, and PIK3CA were the most commonly mutated genes, showing mutation rates of over 10% in ESCC samples. KMT2D, MUC17, and TGFBR2 mutations were more common in the high-IRGPI group, whereas FLG, ZNF750, and NOTCH1 mutations were more common in the low-IRGPI group. With respect to the pan-cancer immune subtypes, C1 (wound healing), C2 (interferon-\u03b3 dominant), C3 (inflammatory), C4 (lymphocyte-depleted), C5 (immunologically quiet), and C6 (TGF-\u03b2-dominant), the subtype distribution differed between the IRGPI groups (P = 0.04457, chi-squared test). SubMap analysis suggested a differential response to immunotherapy between the subgroups (P = 0.0169830, Bonferroni corrected P = 0.1358641). We also compared the predicted IC50 of drugs between the IRGPI subgroups.
Drugs such as gefitinib showed a significant difference in IC50 between the IRGPI groups (P < 0.001). We verified the expression of the IRGPI genes in the collected ESCC and adjacent paracancerous tissues. The trend in the qPCR results was the same as that in the TCGA ESCC cohorts, confirming the role of these genes in ESCC. The median relative expression of CLDN1 was increased in tumor tissues (P = 0.042). The median relative expression of HCAR3 was 1.87 in paracancerous tissues and 3.04 in ESCC tissues, increased in tumor tissues. The median relative expression of FNBP1L was 1.31 in paracancerous tissues and 0.57 in ESCC tissues, significantly decreased in tumor tissues. The median relative expression of BRCA2 was 1.04 in paracancerous tissues and 0.46 in ESCC tissues, significantly decreased in tumor tissues. The therapeutic effect of ICI therapy results from the interactions of the tumor cells, tumor microenvironment, and immune system. Previous clinical trials showed that ICI therapy can effectively prolong the survival time and reduce treatment-related adverse reactions in patients with advanced ESCC compared with traditional chemotherapy and radiotherapy. Expression of immune-related genes in tumor samples impacts the tumor immune microenvironment; thus, an immune gene signature can effectively help predict the clinical benefit of patients receiving immunotherapy. In this study, the IRGPI is based on the expression of four genes: CLDN1, HCAR3, FNBP1L, and BRCA2. CLDN1 is a membrane protein involved in the formation of tight junctions between cells and regulates the proliferation and metastasis of various tumors. HCAR3 is a potential target for regulating cellular immunity and metabolism. BRCA2 is a common tumor suppressor gene, and its mutation increases the risk of ESCC. Loss of BRCA2 leads to upregulation of interferon-stimulated genes and activation of the cGAS/STING/STAT pathway, confirming that inactivation of BRCA2 triggers cellular innate immune responses. In the multivariate Cox model, the coefficients of CLDN1 and HCAR3 were positive, whereas those of FNBP1L and BRCA2 were negative.
Therefore, the IRGPI is positively correlated with the expression of CLDN1 and HCAR3 and negatively correlated with that of FNBP1L and BRCA2. To explore the differences in molecular characteristics between the IRGPI subgroups, we analyzed their mutational status. C > T transitions are the most common single-nucleotide variant type in ESCC. A high frequency of C > T substitution may be associated with CpG methylation, and changes in germline methylation can lead to substitution-rate variation at CpG regions. KMT2D mutation was more common in the high-IRGPI group. It has been reported that KMT2D mutation is a main modulator of ICI response in several tumors, as KMT2D mutation enhances the immune infiltration and immunogenicity of tumors, thereby making tumors more sensitive to ICI therapy. In turn, ZNF750 mutation was more common in the low-IRGPI group. Studies have shown that ZNF750 is a commonly mutated gene in ESCC, mainly with nonsense mutations. ZNF750 can inhibit epithelial-mesenchymal transition by directly repressing the SNAI1 promoter. GSEA revealed that the enriched gene sets differed between the IRGPI subgroups. The interferon-\u03b1 response gene set was enriched in the high-IRGPI group; interferon-\u03b1 can effectively activate the immune response and reverse the immunosuppressive effect of mesenchymal stem cells. The interferon-\u03b3 response gene set was likewise enriched in the high-IRGPI group; interferon-\u03b3 maintains immune homeostasis in the tumor microenvironment and limits adaptive and innate immune killing, thereby limiting the response of patients to ICI treatment. NF-\u03baB pathways, which play a role in the tumor immune microenvironment and ICI therapy, were also enriched in the high-IRGPI group. 
The interferon-em cells , 56. Addreatment , 58. Thereatment . FurtherPI group , 61. TheNext, we analyzed the difference in immune cell infiltration between the IRGPI subgroups. In the high-IRGPI group, activated CD8 T cells, monocytes, neutrophils, and type 17 T helper cells showed higher infiltration, whereas in the low-IRGPI group, memory B cell infiltration was more common. Studies have shown that CD8 T cells are closely related to the expression of PD-L1, suggesting their value for predicting the prognosis of patients and response to ICI treatment \u201364. The \u03b3 response and high CD8 T cell markers and lymphocyte infiltration rate, indicating a better immune response but poor prognosis. The C1 type indicates more angiogenic gene expression. In squamous cell carcinoma immune subtypes, IS4 and IS6 were more prevalent in the high-IRGPI group than in the low-IRGPI group [\u03b3 response, with a good immune activation phenotype, whereas IS1 showed an immunosuppressive phenotype. The high-IRGPI group tended to undergo ESCC2 classification, which involves higher leukocyte infiltration [Combined with other immune subtypes, we can determine the immune status of IRGPI subgroups. Compared with pan-cancer immune subtypes, there were more C2-type patients in the high-IRGPI group and more C1-type patients in the low-IRGPI group . The C2 PI group . IS4 shoTraditional chemotherapy and radiotherapy are effective in few patients with ESCC. Hence, for these patients, ICI therapy may be beneficial for prolonging their survival time and reducing the incidence of treatment-related adverse reactions. Taken together, this study may fill the gap in the need for new biomarkers and the proposed IRGPI may be used as a biomarker to evaluate ESCC prognosis and the response to ICI therapy."}
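The record above describes the IRGPI as a weighted sum of four gene expression values fitted by multivariate Cox regression; the printed formula itself was lost in extraction. The sketch below illustrates only the general shape of such a linear risk score. The coefficients are hypothetical placeholders chosen to respect the reported signs (positive for CLDN1/HCAR3, negative for FNBP1L/BRCA2), and `irgpi`/`assign_group` are illustrative names, not the authors' code.

```python
# Hedged sketch of a Cox-style risk score like the IRGPI described above.
# The numeric coefficients are ILLUSTRATIVE PLACEHOLDERS, not the published
# values; only their signs follow the record (CLDN1/HCAR3 positive,
# FNBP1L/BRCA2 negative).

COEFFICIENTS = {
    "CLDN1":  0.21,   # hypothetical positive Cox coefficient
    "HCAR3":  0.35,   # hypothetical positive Cox coefficient
    "FNBP1L": -0.48,  # hypothetical negative Cox coefficient
    "BRCA2":  -0.29,  # hypothetical negative Cox coefficient
}

def irgpi(expression):
    """Linear risk score: sum of coefficient * expression over the four genes."""
    return sum(coef * expression[gene] for gene, coef in COEFFICIENTS.items())

def assign_group(score, cutoff):
    """Patients above the cutoff form the high-IRGPI group."""
    return "high-IRGPI" if score > cutoff else "low-IRGPI"

# Median expression profiles reported for the two groups in the record:
high = {"CLDN1": 8.56, "HCAR3": 1.89, "FNBP1L": 4.90, "BRCA2": 4.13}
low = {"CLDN1": 7.40, "HCAR3": 0.51, "FNBP1L": 5.48, "BRCA2": 4.65}

# Higher CLDN1/HCAR3 plus lower FNBP1L/BRCA2 yields the larger score.
print(irgpi(high) > irgpi(low))  # True
```

Any coefficients with these signs reproduce the reported ordering of the two group profiles; the real cutoff would come from the fitted survival model, not from the expression values themselves.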
+{"text": "Gene co-expression networks (GCNs) provide multiple benefits to molecular research including hypothesis generation and biomarker discovery. Transcriptome profiles serve as input for GCN construction and are derived from increasingly larger studies with samples across multiple experimental conditions, treatments, time points, genotypes, etc. Such experiments with larger numbers of variables confound discovery of true network edges, exclude edges and inhibit discovery of context (or condition) specific network edges. To demonstrate this problem, a 475-sample dataset is used to show that up to 97% of GCN edges can be misleading because correlations are false or incorrect. False and incorrect correlations can occur when tests are applied without ensuring assumptions are met, and pairwise gene expression may not meet test assumptions if the expression of at least one gene in the pairwise comparison is a function of multiple confounding variables. The \u2018one-size-fits-all\u2019 approach to GCN construction is therefore problematic for large, multivariable datasets. Recently, the Knowledge Independent Network Construction toolkit has been used in multiple studies to provide a dynamic approach to GCN construction that ensures statistical tests meet assumptions and confounding variables are addressed. Additionally, it can associate experimental context for each edge of the network resulting in context-specific GCNs (csGCNs). To help researchers recognize such challenges in GCN construction, and the creation of csGCNs, we provide a review of the workflow. The first form of a gene co-expression network (GCN) was reported in 1998. The full similarity matrix contains [n \u00d7 (n - 1)]/2 pairwise comparisons, where n is the number of genes. KINC therefore automatically sets a default correlation threshold of 0.5 and uses a sparse matrix format to keep data files relatively small, although end-users can change the cutoff as needed. We used this 0.5 cutoff for performance testing. 
The amount of storage space required for this experiment was also recorded. KINC uses a binary encoding for storing the similarity matrix and GMM results. These output files were named with the extension \u2018CCM\u2019 (for the cluster correlation matrix) and \u2018CMX\u2019 (for the correlation matrix). The similarity matrix can become quite large {size of [n \u00d7 (n - 1)]/2, where n is the number of genes}. Execution on Kamiak with 16 CPUs required several hours, and execution on just a single GPU provided similar performance. The time was dramatically decreased as the number of CPUs and GPUs increased, but with diminishing speedup. Even the dataset with more genes and similar samples (55\u00a0986 genes \u00d7 141 samples) was completed in less than a day with three GPUs, and dataset sizes ranged up to large (56\u00a0202 genes \u00d7 1671 samples). The workflow described here is provided as a protocol that can be used to address issues of noise in GCN construction. One objective for this manuscript is to alert researchers to such issues and to foster the development of better tools, including those that can improve on this protocol. Here, we describe a few limitations. First, GMMs are not perfect in identifying all clusters. Because it is computationally intractable to explore all possible solutions, GMMs require random start locations, which may settle in different local minima on different runs. We have observed improper identification of clusters when few samples are present or for genes biased by very low expression levels. The power-analysis step should filter clusters with few samples, and context association testing will overlook clusters with no discoverable context. However, a more thorough examination of false edges resulting from the imperfect use of GMMs is needed. GMMs may also suffer when context expression of genes overlaps in the 2D space. 
For example, consider a scatterplot in which the expression clusters of two contexts overlap; the GMM may then fail to separate them. An additional challenge is that running this workflow on large GEMs may be difficult for some users who do not have access to GPUs or large compute clusters. We anticipate that as such resources become more widely available via institutional, national and commercial cloud computing facilities such as XSEDE, researchers will gain access to them. One topic absent from this manuscript is the measurement of the biological performance of csGCNs. Biological performance can be defined in terms of the number of true relationships that are represented in the csGCN and the lack of false associations. This is a challenging question to address for GCNs as well as csGCNs because of the lack of a gold-standard, validated network to which GCNs can be compared, especially in all contexts. One of the most popular methods for measuring the performance of a GCN is comparing the number of conserved functional terms between neighbors in the network. This approach relies on the GBA concept that interconnected genes should share similar function; Extending \u2018Guilt-by-Association\u2019 by Degree is one such tool. Despite the lack of a metric on biological performance, we assume that biological performance is improved in the csGCNs simply by ensuring that statistical biases are handled and noise is accounted for using simple, commonly used statistical practices and methods. By ensuring that statistical assumptions are met, that tests have sufficient power, and that bias from missing values and confounding variables is accounted for, we conclude that the number of false edges should be reduced when exploring deeper into the correlation space that RMT would normally exclude. Finally, the use of GMMs provides thousands of new context-specific edges that need exploration. 
KINC-derived networks can retrieve context-specific, statistically significant edges at correlation values as low as \u00b10.3 provided sufficient data are available. The biological role of correlation at such lowly correlated relationships is unknown. Are these primarily indirect relationships? More research is needed to determine the role of such significant but lowly correlated relationships.GCN construction is a widely applied technique that warrants improvement. As described here, multidimensional transcript profiles create challenges for traditional GCN construction due to multiple sources of noise and bias that are unaccounted for in traditional approaches. As previously noted, approximately 97% of edges in the rice dataset did not meet test assumptions. This result implies that the quality of traditional approaches (including the highly popular WGCNA) is highly dependent on the structure of the data. Thus, when researchers fail to find modules of interest it may be due to deficiencies of current GCN construction methods for multidimensional data rather than a lack of \u2018signal\u2019 in the data.No new data were generated or analyzed in support of this research.Multidimensional gene expression data contain natural and systematic noise that affects GCN results.The \u2018one-size-fits-all\u2019 approach to co-expression network construction cannot correct for noise.Sources of noise are different for each pairwise comparison so correction strategies should be applied at the pairwise level.The KINC toolkit offers an approach for pairwise correction of bias."}
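The review above argues that pooling samples across conditions can produce edges whose correlations are "false or incorrect." A minimal sketch of that failure mode, assuming nothing about KINC itself: two hypothetical genes that are uncorrelated within each condition still clear a 0.5 Pearson threshold (the default cutoff the review attributes to KINC) once the conditions are pooled, because a shared baseline shift masquerades as co-expression.

```python
# Minimal sketch (not KINC itself) of how pooling samples from two
# experimental conditions can manufacture a co-expression edge that exists
# in neither condition alone -- the "false correlation" problem described
# in the review. Expression values below are invented for illustration.

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Within each condition the two genes are uncorrelated, but condition B
# shifts both genes' baseline upward by the same amount.
cond_a_g1 = [1.0, 1.2, 0.9, 1.1]
cond_a_g2 = [2.15, 2.05, 1.95, 1.85]
cond_b_g1 = [5.0, 5.2, 4.9, 5.1]
cond_b_g2 = [6.15, 6.05, 5.95, 5.85]

pooled = pearson(cond_a_g1 + cond_b_g1, cond_a_g2 + cond_b_g2)
within_a = pearson(cond_a_g1, cond_a_g2)

# The pooled correlation clears a 0.5 edge threshold even though the
# within-condition correlation is essentially zero.
print(pooled > 0.5, abs(within_a) > 0.5)  # True False
```

This is exactly why a per-pair treatment (such as the GMM clustering the review describes) matters: the edge should be attributed to the condition shift, not reported as a gene-gene relationship.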
+{"text": "The circadian rhythm drives the oscillatory expression of thousands of genes across all tissues. The recent revolution in high-throughput transcriptomics, coupled with the significant implications of the circadian clock for human health, has sparked an interest in circadian profiling studies to discover genes under circadian control.We present TimeCycle: a topology-based rhythm detection method designed to identify cycling transcripts. For a given time-series, the method reconstructs the state space using time-delay embedding, a data transformation technique from dynamical systems theory. In the embedded space, Takens\u2019 theorem proves that the dynamics of a rhythmic signal will exhibit circular patterns. The degree of circularity of the embedding is calculated as a persistence score using persistent homology, an algebraic method for discerning the topological features of data. By comparing the persistence scores to a bootstrapped null distribution, cycling genes are identified. Results in both synthetic and biological data highlight TimeCycle\u2019s ability to identify cycling genes across a range of sampling schemes, number of replicates and missing data. Comparison to competing methods highlights their relative strengths, providing guidance as to the optimal choice of cycling detection method.https://nesscoder.github.io/TimeCycle/.A fully documented open-source R package implementing TimeCycle is available at: Bioinformatics online. Circadian rhythms\u2014physiological, behavioral and metabolic oscillations with an approximate 24-h period\u2014are controlled by an evolutionarily conserved set of core clock genes operating at the transcriptional and protein level. Entrained by Zeitgebers that modulate time-of-day specific functions, the circadian clock orchestrates a multitude of cellular processes, including nearly half of genes across all tissues . 
Such methods may fail to detect cyclic patterns that do not fall into the predetermined profiles of reference signals, effectively limiting the scope of discovery. Moreover, these methods typically assess statistical significance relative to a null model of a randomized time-series. Since randomized time-series may jump from low to high gene expression faster than biological translation and degradation processes allow, these methods may produce unrealistic null models and misleading significance tests. Nevertheless, limitations remain in current methods. For each method across all 144 synthetic datasets and the missing-data analysis, the receiver operating characteristic (ROC) curve and accompanying area under the curve (AUC) were computed using the pROC R package, scoring correctly identified (non-)cyclers out of the total possible (non-)cyclers. Microarray data were preprocessed as described in the Supplement (SW1PerS produces a score rather than a P-value and thus could not be meaningfully compared to the other methods). Further details may be found in the Supplement. To characterize the effects of sampling schemes using real data, the three datasets were processed to ensure comparability across datasets. The distribution of Spearman rank correlations was computed by comparing genes identified by each method with an FDR < 0.05 and LogFC < 2 in one dataset with those in the other datasets. TimeCycle is a method for classifying and quantifying cyclic patterns of gene expression in transcriptomic time-series data. 
Application to both synthetic and experimental data demonstrates TimeCycle\u2019s ability to efficiently discriminate cycling genes from non-cycling genes across a range of sampling schemes, numbers of replicates and missing data. A fully documented open-source R package implementing TimeCycle is available at: https://nesscoder.github.io/TimeCycle/. TimeCycle\u2019s basic framework consists of a rescaling/normalization step, reconstruction of the state space via time-delay embedding, isolation of non-linear patterns via manifold learning and dimension reduction, quantification of the circularity of the signal using persistent homology, and comparison of that measure to a bootstrapped null distribution to assess statistical significance. The d-dimensional time\u2013delay embedding of a time-series X is defined as a representation of that time\u2013series in a d-dimensional space where each point is given by the coordinates (X_t, X_{t+\u03c4}, \u2026, X_{t+(d-1)\u03c4}). TimeCycle exploits Takens\u2019 theorem, a result from dynamical systems theory that proves that the time\u2013delay embedding of a single variable observed over time will reconstruct (up to diffeomorphism) the state space of a multivariate dynamical system. The embedding depends on two parameters: the dimension d and the delay lag \u03c4. A perfectly sinusoidal signal will form a circle in d\u2009=\u20092 dimensions, and hence a two-dimensional embedding would be sufficient in the ideal case. However, biological signals are often not strictly periodic, but exhibit drifts in the oscillation. A common approach is to detrend the time-series by fitting it to a line and analyzing the residuals. Often, researchers desire estimates of the period, phase and amplitude of the genes detected as cycling. Because the time-delay embedding and PH computation do not provide these estimates, a separate computation in TimeCycle is used to generate these results. 
To estimate the period, amplitude and phase of the oscillations, signals are processed in a separate step. A practical consideration when designing a transcriptomic time-series experiment is the trade-off between replicates, sampling resolution and sampling length in relation to experimental cost. To comprehensively evaluate TimeCycle\u2019s capabilities across different patterns of temporal gene expression, we applied TimeCycle and four other methods\u2014JTK_CYCLE, GeneCycle, RAIN and SW1PerS\u2014to the synthetic data. Examining the P-values across the varying noise levels, we find that TimeCycle and JTK_Cycle are more conservative in comparison to RAIN and GeneCycle, whereas small P-values are overrepresented under the null for RAIN and GeneCycle for non-cycling genes. Across all methods, false-negative error rates increase with noise. We also examined the type I and type II error rates relative to the FDR-adjusted P-values. Outside the 48-h sampling schemes, TimeCycle exhibits good performance for longer time-series sampled every 1 or 2-h, but reduced performance for sparser sampling (every 4-h compared to 1-h and 2-h), even for longer time-series. This is attributable to sparsely sampled manifolds in the reconstructed state space, resulting in an insufficient number of data-points for cycle formation. This suggests that, for a fixed number of samples, denser sampling for a shorter duration may be favorable to longer, sparser sampling. It is also instructive to examine how the classification accuracy for different waveforms changes with the sampling strategy. We find that for time-series lasting 36-h, strong symmetric cyclers are robustly detected, while asymmetric cyclers (such as the sawtooth) do not have sufficient points to close the cycle in the embedded space. 
At 72-h and 96-h, linear trending and damped oscillations are more likely to be classified as non-cycling, due to the fact that as the time-series sampling is extended, the trends and dampening become more pronounced, dominating the underlying oscillation. Because these signals are not in fact strictly periodic, this can be a desirable or undesirable behavior depending on whether the user wishes to classify linear trending and damped oscillations as cycling. Details for an implementation with alternative parameters to detect oscillations with linear trends can be found in TimeCycle\u2019s documentation. We next compared the P-values generated by each method. To simulate results for an ideal dataset, comparisons were made using the recommended sampling scheme\u2014every 2-h for 48-h with one replicate\u2014under low (10%) noise conditions. The results are generally well-correlated, with non-cycling waveforms clustering at low significance and sinusoidal waveforms ranked high; differences between methods are most noticeable for trending, sawtooth and contractile waveforms. Given the algorithmic similarities of RAIN/JTK_CYCLE (via the Jonckheere-Terpstra test) and TimeCycle/SW1PerS (via persistent homology), it is unsurprising that these methods are more highly correlated within method pairings than across methodological approaches. Following previous studies\u2019 choice of significance threshold, we calculated the percent of genes concordantly classified as cycling or non-cycling for each pair of studies as a function of the FDR significance threshold. The concordance was high (r\u2009=\u20090.9). The Hughes and Zhang studies differed in the sampling start phase by 6-h, presenting an opportunity to examine whether genes detected as cycling in both had similar estimated periods and amplitudes (r\u2009=\u20090.9) but differing phases, as would be expected if the cycling detection is accurate. 
(Because SW1PerS does not produce a P-value to classify genes as cycling/non-cycling, it was omitted from this comparison.) We have presented TimeCycle, a new method that leverages results from dynamical systems theory and topology to detect patterns of cyclic expression in time-series experiments. TimeCycle reconstructs the state space for the dynamical system governing each gene using time-delay embedding, and quantifies how cyclic the embedding is using persistent homology. Statistical significance is assessed by comparing the persistence scores to those obtained from a resampled null model. TimeCycle accurately detects rhythmic transcripts in both synthetic and real biological data, and is robust to missingness, noise and non-cyclic components in the dynamics. A few methodological innovations distinguish TimeCycle from other cycling-detection algorithms. In contrast to methods that compare gene expression profiles to templates of expected cycling patterns, TimeCycle reconstructs the underlying dynamical system directly from the observed data. This enables TimeCycle to articulate more complex dynamics than can be easily considered using template-based approaches. Additionally, the method to construct the null distribution is both computationally efficient and biologically representative. SW1PerS did not implement hypothesis testing due to the computational cost, while template-based methods that implement hypothesis testing do so by resampling the time-series in a manner that can generate biologically implausible null models. TimeCycle is thus an improvement in both regards. From a practical standpoint, we identified strengths and weaknesses of TimeCycle\u2019s ability to detect cycling transcripts under varying conditions. We find that TimeCycle is better able to detect sharply peaked waveforms and waveforms where the period appears variable, while JTK_Cycle, RAIN and GeneCycle are more robust with respect to linear trends. 
This is in keeping with prior work. The results also highlight the importance of constructing biologically representative null models. By resampling the finite differences from the gene expression time-series to construct a null distribution of persistence scores, TimeCycle tests whether an observed gene has a stronger cycling behavior than expected by chance, conditioned upon the speed at which the expression is capable of changing. This improves upon SW1PerS, which does not compute a P-value, and also upon methods such as RAIN that randomize the time-series itself. We note that this method for constructing the null distributions could also be adapted for other methods, and emphasize the need for method developers to consider biological constraints when devising null models. A practical consequence of TimeCycle\u2019s methodological features is that the genes detected as cycling by TimeCycle are highly reproducible. Applied to three independent studies of mouse liver gene expression, TimeCycle consistently identified genes as cycling in multiple studies, and those genes were shown to exhibit reproducible dynamics. Finally, we note that, as with other methods, the experimental sampling design remains a crucial factor for the reliability of any cycling detection method. Research reported in this publication was supported by the NSF-Simons Center for Quantitative Biology at Northwestern University, an NSF-Simons MathBioSys Research Center. This work was supported by a grant from the Simons Foundation/SFARI [597491-RWC] and the National Science Foundation [1764421]. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation and Simons Foundation. 
This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.Conflict of Interest: none declared.https://github.com/nesscoder/TimeCycle-data.Data and source code to reproduce the analysis and figures in this article are available at: btab476_Supplementary_DataClick here for additional data file."}
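TimeCycle's first step, as described in the record above, is a 2-D time-delay embedding in which a rhythmic signal traces a circle while a non-rhythmic one does not. A minimal sketch of that idea follows; it is an illustration only (the real method adds detrending, dimension reduction and persistent homology), and `radial_spread` is an ad hoc stand-in for the persistence score, not TimeCycle's API.

```python
# Minimal sketch of 2-D time-delay embedding: each point is (x[t], x[t+tau]).
# For a sinusoid sampled over full periods the embedded points fall on a
# circle, so their distances to the embedding's centroid are nearly constant;
# a linear trend embeds as a line and shows no such circular structure.

import math

def delay_embed(x, tau):
    """Return the 2-D delay embedding [(x[t], x[t + tau]), ...]."""
    return [(x[t], x[t + tau]) for t in range(len(x) - tau)]

def radial_spread(points):
    """Std. dev. of distances to the centroid: small for a circle."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    d = [math.hypot(px - cx, py - cy) for px, py in points]
    mean = sum(d) / len(d)
    return (sum((v - mean) ** 2 for v in d) / len(d)) ** 0.5

# 48 h sampled every 2 h (the recommended scheme), 24-h period.
ts = [i * 2 for i in range(24)]
cycling = [math.sin(2 * math.pi * t / 24) for t in ts]
trending = [0.05 * t for t in ts]

tau = 3  # a 6-h lag (quarter period) opens the embedding into a circle
print(radial_spread(delay_embed(cycling, tau)) <
      radial_spread(delay_embed(trending, tau)))  # True
```

With `tau` at a quarter period the sinusoid embeds as (sin t, cos t), i.e. the unit circle, which is why the spread of centroid distances is far smaller for the cycling signal than for the trend.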
+{"text": "RNA-Seq experiments allow genome-wide estimation of relative gene expression. Estimation of gene expression at different time points generates time expression profiles of phenomena of interest, as for example fruit development. However, such profiles can be complex to analyze and interpret. We developed a methodology that transforms original RNA-Seq data from time course experiments into standardized expression profiles, which can be easily interpreted and analyzed. To exemplify this methodology we used RNA-Seq data obtained from 12 accessions of chili pepper (Capsicum annuum L.) during fruit development. All relevant data, as well as functions to perform analyses and interpretations from this experiment, were gathered into a publicly available R package: \u201cSalsa\u201d. Here we explain the rationale of the methodology and exemplify the use of the package to obtain valuable insights into the multidimensional time expression changes that occur during chili pepper fruit development. We hope that this tool will be of interest for researchers studying fruit development in chili pepper as well as in other angiosperms. Temporal gene expression profiles consist of measurements of gene expression at consecutive times, and from such data it is possible to estimate the transcriptome changes that occur during the progression of organ developmental programs. Phenomena such as seed development4, senescence5 and aging6 have been shown to be conserved in plants. In particular, the development of fleshy fruits\u2014an indispensable part of the human diet\u2014is probably conserved throughout the angiosperms7. Many of the available software tools are designed to perform differential gene expression analysis. For example, the NCBI provides \u201cGEO2R\u201d (https://www.ncbi.nlm.nih.gov/geo/geo2r/), a tool to compare two or more groups of samples in order to identify genes that are differentially expressed across experimental conditions10. 
However, given that time expression profiles are multidimensional\u2014generally with more than 3 consecutive times sampled, traditional statistical methods are of limited relevance, and new approaches are required3. There is a plethora of software tools developed to analyze different aspects of RNA-Seq data11. Assume that gene expression is measured at consecutive times; naive comparisons are then exposed to errors13 which consist of estimating the wrong ordering in a set of means. As mentioned above, the main challenge for the analysis and interpretation of time course experiments is their multidimensionality14,15. One aim of the analysis of time course experiments could be to detect significant periodic modes in time, and examples of tools for this purpose are documented in the literature. Another approach, closer to our interest here, is the identification of expression patterns in time course experiments. In16, the authors present an application combining modeling and a dimension reduction technique based on the ANOVA of simultaneous component analysis, useful for microarray data. Also,17 presents a test for microarray data that makes explicit use of the temporal order by fitting polynomial functions to temporal profiles. Other references for time course microarray experiments are cited in3, which also presents and discusses clustering and inference of networks from time course datasets. Reference18 presents a linear mixed model spline framework. The proposed framework consists basically of three stages: (1) identification and removal of \u201cnon-informative\u201d profiles, (2) modeling, via a serial model fitting approach, to obtain smoothed profiles, and (3) analysis to identify similarities between summarized profiles by clustering, or hypothesis testing to identify differences over time. 
In this approach, filtering \u201cnon-informative\u201d profiles leaves out of the analysis genes that are relatively constant through the time frame explored, while fitting successive models of increasing complexity risks ignoring patterns that are too convoluted to be fit with this spline approach. With regard to the analysis of RNA-Seq time course experiments, a method to identify differential expression profiles in time course microarray experiments\u2014implemented in the package \u201cmaSigPro\u201d19\u2014was updated to handle RNA-Seq time series analysis by introducing Generalized Linear Models (GLM) under the Negative Binomial distribution20. However, as in its original version, the updated version of maSigPro relies on polynomials to fit time expression data through time. While the use of polynomials gives good results for cases when there are one or few critical points, either maxima or minima, it is well known that polynomial fitting fails when the behavior of the target function is too complex21. Because there is no guarantee that gene expression through time will always be simple, it appears better to look for a methodology that can fit even the most complex patterns shown by the data. Chili pepper (Capsicum spp.) is an important crop, as well as a model for fruit development and domestication studies22. We developed a methodology to examine time expression profiles by testing neighboring time intervals and then obtaining \u201cStandardized Expression Profiles\u201d (SEPs), which can be easily interpreted and tested. That procedure was applied to all genes in 12 accessions of chili pepper expressed during 7 temporal stages of fruit development. All curated data and functions to analyze them are included in the R23 package \u201cSalsa\u201d24. 
This data mining tool arose during the course of a previous project, and has proven useful to reveal novel insights about the domestication of chili pepper22; please see https://www.mdpi.com/2223-7747/10/3/585. Here we explain the rationale of the methodology, present a panorama of its possibilities and exemplify the use of this tool. The RNA-Seq experiment from which the Salsa data originated consisted in sampling fruits of the 12 accessions of chili pepper described in the data frame \u201cacc\u201d22. In total we obtained data from 179 RNA-Seq libraries, which comprise the 168 used for time profile estimation plus some extra samples from times larger than 60 DAA, as well as libraries from the plantlet stage for one domesticated and one wild accession. Descriptions of these 179 libraries are in data frame \u201clibrary.desc\u201d, while raw counts of mapped reads for all genes in each one of the libraries are in data frame \u201creadcounts\u201d, both within the Salsa package. Sequencing, filtering and mapping of the raw reads to the reference Capsicum genome are presented in the supplementary methods and results in22. In total, more than 3 billion raw reads were mapped to the reference genome, and these data have been deposited in NCBI\u2019s gene expression omnibus (GEO)25, and are accessible through GEO Series accession number GSE165448 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE165448). Previously26, we presented a methodology to classify time expression profiles into discrete classes, and here we extend that procedure to obtain \u201cstandardized expression profiles\u201d (SEPs) in the general framework of an RNA-Seq time course experiment. We assume expression estimates for all genes at each one of the times, and consider only the contrasts between neighboring times. 
Those contrasts can be tested, for example by using the package \u201cEdgeR\u201d27, to obtain the p-values corresponding to the g genes at each one of the neighboring time intervals. If the null hypothesis is not rejected for gene i between times j and j + 1, then we consider that the gene is in a steady state, \u201cS\u201d, between those times. The p-values need to be corrected for multiple testing, transforming them into q-values to obtain an acceptable \u201cfalse discovery rate\u201d (FDR)28. We assume data from replicated RNA-Seq libraries at each of the t times; the random error (unexplained variation) is represented by differences between the replicates for each time, given by distinct RNA-Seq libraries. The procedure transforms the t-dimensional semi-continuous space of gene expression into a standardized representation over the t times. Finally, we calculate the SEP for a gene i as a standardized vector. For details of this procedure please see the \u201cSupplementary Material\u201d in22 as well as the Additional files. In summary, the procedure described above gives a function that transforms the p-values of the contrasts between the 6 neighboring time intervals for all 35883 genes annotated in the Capsicum reference genome in each one of the 12 accessions. Of the total of 35883 genes annotated, only 29946 are included. Sets of SEPs can be selected by the protein description, the Gene Ontology category, or any other known attribute of the corresponding genes. On the other hand, the researcher could have sets of genes which are of interest from results obtained in previous studies, or in a different species, etc. The sizes of the two sets to compare, k and r, must both be larger than two. 
The rationale for suggesting this test is that the vector of estimated distances between members of the two sets, t-dimensional space, while the distances within the two groups, estimated by within treatments is used as a measure of noise (unexplained error), while the variation between treatments is the one of interest.The biological hypothesis that the two SEP sets have statistically equivalent time profiles can be translated to the null hypothesis k and r.Now, we must select a statistical test to decide between k) and r). For example, for values of t-test to decide between 31. Additionally, the vectors p-values that are highly concordant with the ones obtained by the t-test in all cases assayed (results not shown).In Table\u00a0t-test on the distances between and within SEP elements is useful to decide if two sets of SEPs are equal on average. Salsa function \u201canalyze.2.SEPs\u201d implements this procedure for the chili pepper data.In summary, we propose that the application of the 32. This means that different data aspects are organized into data frames which are connected by common variables. Table\u00a0Salsa is organized as a relational databaseIn Table\u00a0Main functions in Salsa are presented in rows f.1 to f.7 in Table\u00a0https://www.britannica.com/technology/data-mining (\u201cData mining\u201d entry at Britannica).
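The between-versus-within distance comparison can be sketched in Python as follows. This is an illustrative stand-in for the Salsa function "analyze.2.SEPs", not its actual code; note that pairwise distances are not independent observations, so the statistic is a heuristic, as the text itself acknowledges by calling the evidence indirect.

```python
import math
from itertools import combinations, product

def euclid(a, b):
    """Euclidean distance between two SEP vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_t_statistic(set_k, set_r):
    """t statistic comparing between-set vs within-set SEP distances.
    Both sets should have more than two members, as required in the text."""
    between = [euclid(a, b) for a, b in product(set_k, set_r)]
    within = [euclid(a, b)
              for grp in (set_k, set_r) for a, b in combinations(grp, 2)]
    def mean(v): return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    se = math.sqrt(var(between) / len(between) + var(within) / len(within))
    # Large positive values suggest the two SEP sets differ on average;
    # compare against a t distribution to obtain a p-value.
    return (mean(between) - mean(within)) / se
```

Two well-separated toy sets, e.g. three profiles near the origin versus three profiles shifted far away, yield a large positive statistic, whereas overlapping sets give a value near zero.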
In the context of gene expression, data mining has been used for example to obtain34 and visualize networks35, or to find association rules36.Data mining, also called \u201cknowledge discovery\u201d, is the process of uncovering interesting and useful patterns and relationships in large volumes of data \u201d; see Table\u00a0temp1<- get.SEPtemp2<- get.SEPIn both cases the function is called with parameters \u201cdescr=\u201dMYB\u201d\u201d, which will select only genes that contain within their protein description the string \u201cMYB\u201d, and \u201cacc.key=\u201dCM\u201d\u201d, which will select cases that belong to the accession CM, and which also fulfill \u201cisTF=TRUE\u201d\u2014which means that the genes are annotated as transcription factors. The difference between the output objects, \u201ctemp1\u201d and \u201ctemp2\u201d, arises because in the first we ask for cases where the time at which the maximum expression is reached is 0 DAA (\u201cTimeMaxExp=0\u201d), while in the second case we ask for cases where that point is reached at 20 DAA (\u201cTimeMaxExp=20\u201d).The first step to initiate an analysis in Salsa is to select one or more sets of The output of the function get.SEP, obtained with statements (1) and (2) above, are SEP data frames. Thus we obtain information on how many different genes fulfill the parameters , as well as the numerical values of the estimated SEPs and extra information about other attributes of these genes.We can now proceed to the \u201cSummarization\u201d step of the analysis, represented in the second row of rectangles in Fig.\u00a0transcription factor MYB108-like. Figure\u00a0Figure\u00a0In Fig.\u00a0t-test of Euclidean distances between and within the two SEPs, and confirm that, as shown in Fig.\u00a0p-value The last step in the data mining workflow in Salsa corresponds to \u201cAnalyses\u201d in Fig.\u00a039.
Accordingly, the simple example presented above only gives a glimpse of the possibilities Salsa offers to find interesting aspects in the time profiles contained in the package. The next section presents a more detailed example.The simplified workflow schema for Salsa, shown in Fig.\u00a0We begin our analysis by isolating, as separate SEP data.frames, the genes expressed in accessions \u201cAS\u201d (Ancho San Luis), of the domesticated accession set, which produces very large and moderately pungent fruits, and \u201cSR\u201d (Sonora Red), a wild accession with very small and highly pungent fruits while having such maximum at 60 DAA in \u201cSR\u201d (Y-axis).Gene sets that are outside the dashed green diagonal of Fig.\u00a0Salsa\u201d capabilities, we performed an in-depth analysis of the set of 758 genes , Cell Component (CC) and Molecular Function (MF). Results are summarized in Table\u00a0In the first row of Table\u00a0We are not going to extend here the discussion of the biological relevance of the results in Table\u00a040. However, SEP estimation is dependent on the specific data type as well as on the number and separation of the times sampled. In summary, to estimate SEPs we need replicated measures of the target data at times that cover the whole relevant time period. It is also advisable that sampling times be equally separated, forming intervals with the same time length. Constructing ternary models, which implies hypothesis testing of neighboring intervals to decide whether the variable has not significantly changed (\u201cS\u201d), increased (\u201cI\u201d) or decreased (\u201cD\u201d) at each interval, is the core of SEP estimation, and can be performed for any time course experiment, but the particular method for hypothesis testing depends on the nature of the data.
Having such a ternary model for the variable of interest, the necessary standardization is straightforward, and, as seen here, SEP plotting, grouping and analysis present advantages for the interpretation of the results. A disadvantage of this approach is that SEP estimation uses the error variance (differences between replicates), and thus further analyses must rely on indirect evidence, such as the Euclidean distances between and within SEP sets. However, this approach is statistically robust and gives good practical results, even though it is computationally heavy when SEP sets are large.The methodology to estimate and analyze SEPs can be extended to almost any time course experiment, including not only RNA-Seq data, but also other expression profiling methods such as microarrays or metabolomic time profiling experiments, as the ones described inIn conclusion, we presented a methodology to summarize time expression profiles directly applicable to any RNA-Seq time course experiment, and which can be adapted to other types of time course experiments. This methodology was applied to a large set of 179 RNA-Seq libraries that estimate gene expression during fruit development in 12 chili pepper accessions. All relevant data and functions to mine these transcriptomes are collected in the Salsa R package, whose possibilities for data mining have been demonstrated here. We anticipate that the R package will be useful to the research community studying gene expression changes during fruit development.Salsa\u201d originated have been deposited in NCBI\u2019s Gene Expression Omnibus (GEO)25, and are accessible through GEO Series accession number GSE165448.The full set of 179 RNA-Seq libraries from which the R package \u201chttps://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE165448). From that link you can download the package, its manual as well as instructions to install it.Supplementary Information."}
+{"text": "From time course gene expression data, we may identify genes that modulate in a certain pattern across time. Such patterns are useful for investigating the transcriptomic response to a certain condition. In particular, it is of interest to compare two or more conditions to detect gene expression patterns that significantly differ between them. Time course analysis can become difficult using traditional differentially expressed gene (DEG) analysis methods since they are based on pair-wise sample comparison instead of a series of time points. Most importantly, the related tools are mostly available as local software, requiring technical expertise. Here, we present TimesVector-web, an easy-to-use web service for analysing time course gene expression data with multiple conditions. The web service was developed to (1) alleviate the burden of analyzing multi-class time course data and (2) provide downstream analysis of the results for biological interpretation, including TF, miRNA target, gene ontology and pathway analysis. TimesVector-web was validated using three case studies that use both microarray and RNA-seq time course data and showed that the results captured important biological findings from the original studies. Time-course analysis of gene expression data can be advantageous for revealing modulating gene expression patterns of a certain biological mechanism across time. It is common practice to search for significantly differentially expressed genes (DEGs) between two conditions, for example, a cohort of normal versus cancer patients. Such analysis is tailored to observe the transcriptomic difference within a single snapshot of the current gene expression status in a pairwise manner.
Thus, the dynamics or ongoing transition of important gene regulatory functions may be overlooked.Time course data are generated to observe any significant modulation of gene expression or other omics data to explain the biological phenomena of interest with respect to the condition of the experiment. Time course analysis can become a complex procedure and can produce different results depending on the researcher's perspective of the analysis. For example, when adopting the traditional DEG analysis, we opt to compare a combination of time point pairs. Such an approach requires post analysis of the result, since the results of each DEG pair are not by themselves sufficient for interpretation. They need to be integrated afterwards to derive biologically meaningful results. In terms of experimental setting, if time course data is generated for a single condition, identified gene expression patterns are expected to be able to explain the response to the related condition ,2,3. In Collectively, time course data allow us to search for genes that yield significantly different expression patterns across time. Here, the gene expression patterns can be used to comprehend the biological mechanisms in greater detail. For example, we can observe how genes of a certain pathway respond to some external stress across time. More importantly, we can pinpoint the time when such genes started to respond to the stress. However, time course data is expensive and choosing the time points for sampling is a non-trivial task, requiring expert knowledge and careful pre-experimenting on putative or known condition-responsive genes. Without such careful design, important gene expression modulation may not be captured within the selected time course. Until now, many time course analysis methods for identifying differentially expressed genes have been developed.
However, due to the difficulty of interpreting the results or using the tools, many studies base their analysis on DEG methods . For biohttps://cobi.knu.ac.kr/webserv/TimesVector-web (accessed on 26 November 2021). TimesVector is a time course gene expression analysis tool that aims to search for gene clusters that exhibit differential expression patterns across multiple conditions in time. Since there may be multiple conditions and multiple time points, the data forms a three dimensional structure . Since sub-space clustering (or bi-clustering) in two dimensions is NP-hard [To alleviate the burden of performing the rather difficult time course analysis, we developed an easy-to-use web service for analysing time course gene expression data with multiple conditions. The web service implements the algorithm of TimesVector , previou NP-hard ,10, thre NP-hard may be rThe functions of the web-based TimesVector are as below:
- A simple user interface for uploading data and parameter configuration
- Gene filtering function
- Support of microarray and NGS data
- Gene expression normalization
- Support of sample replicates
- A quick test for choosing a sub-optimal K
- Interactive result page for visualizing and selecting the searched gene cluster patterns
- Biological downstream analysis including GO (Gene Ontology) analysis, pathway analysis, identification of Transcription Factors (TF) and putative miRNA targets
The web service mainly focuses on providing (1) an intuitive and easy click-based analysis interface, (2) visualization of the clustering results and (3) continuous downstream analysis for further biological comprehension of the cluster results. Based on the pattern, each cluster is categorized as one of the following: Differentially Expressed Pattern (DEP), at least One Differentially Expressed Pattern (ODEP) and Similarly Expressed Pattern (SEP) clusters.
Here, a DEP cluster is defined as a cluster where the genes have similar expression patterns within each condition but have different expression patterns across all the conditions. ODEP is a cluster where the pattern is different in only one condition. Lastly, a SEP cluster is comprised of genes whose expression patterns are similar across all the conditions. To interpret the detected DEP, ODEP or SEP clusters, candidate TFs and miRNA target genes are identified using external TF and miRNA databases. This is because TFs and miRNAs are well known regulators influencing gene expression. Furthermore, gene set enrichment analysis is performed on the DEP, ODEP and SEP type clusters for biological interpretation.As a guide to using TimesVector-web, we present the analysis results of three case studies that use both microarray and RNA-seq time course data.TimesVector-web is executed as a number of procedures in a sequential manner. It starts from preparing the input data files and finishes with the output results. Each procedure is described in detail in the latter section, followed by two tutorials on the website. The workflow of TimesVector-web is shown in K-test provided by the web service.TimesVector-web is a web service that allows users to analyse microarray and high-throughput time course gene expression data in order to detect differential and similar gene expression patterns across multiple conditions. Thus, the main objective is to detect the differentially modulating gene expression patterns between two or more experimental conditions . Its simple user interface and the downstream analysis function for biological interpretation of the cluster results make it easy to perform time course analysis without the need for expert computing knowledge. Furthermore, TimesVector-web requires only a few parameters as input.
For sub-optimal results, the number of clusters can be automatically estimated by the The user interface mainly consists of two panels: the input panel and the result panel. In the input panel, the time course gene expression data can be uploaded, followed by options for setting the parameters required for preprocessing and clustering the input data as shown in K-means based clustering algorithm, the number of clusters to be found needs to be provided as the input parameter K. Parameter K is an integer that indicates the number of time-course patterns to be identified. However, since there may be patterns that are not significantly different across the conditions, only clusters with a significant p-value are reported. Collectively, the number of reported clusters is usually smaller than K. Thus, it is suggested to start the analysis with a large K . K is the most important parameter, which has the greatest impact on the result. The user may run the analysis with different K\u2019s. In case it is difficult to find a good K, we provide a K-test function to help users select a good K, which will be discussed in section \u201cFinding an optimal K for clustering\u201d . A K of 170 does not mean that the result will output 170 clusters. Instead, from the 170 clusters, each cluster will be tested for statistical significance in pattern difference across the conditions. Thus, a much smaller number of clusters may result from the analysis even with such a high K. However, when choosing a rather too large K, one may get many small clusters that actually have similar patterns, and are thus redundant. To avoid such redundancy, it is recommended to start with a small K and increase it if the biological results are not satisfactory.TimesVector-web provides the K-means algorithm [K-means type clustering methods are affected by the initially set K centroids, since in many cases they are set in a random manner.
To improve the performance, the K-means++ method is often applied . Each cluster is classified as either a DEP, ODEP or SEP. A DEP cluster is a set of genes that have similar gene expressions within conditions, but are different in all other conditions. An ODEP cluster is a set of genes where the gene expression pattern in at least one condition is different from the others. A SEP cluster is a set of genes that share a common expression pattern across all conditions. Only clusters with patterns of statistical significance are selected as the final result. The clustering result showing DEP, ODEP and SEP clusters is shown in The genes in each detected cluster serve as sources for biological interpretation of the conditions or phenotypes of interest. Our web service provides five analysis panels for visualizing gene cluster patterns, performing gene set enrichment analysis and searching for gene regulatory elements, such as TFs and miRNAs. The functional analysis tabs are located at the bottom of Starting from the left, the first tab \u201cCluster pattern\u201d shows the detailed expression pattern of the selected cluster. The left image represents the average expression pattern of each condition and the right depicts the expression pattern of each gene of the cluster . Here, wThe second tab, \u201cGene list\u201d, lists genes belonging to the selected cluster, which can be selected in the \u201cCluster\u201d panel within the result page.
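The DEP/ODEP/SEP categorization can be illustrated with a small Python sketch. This is a schematic guess at the logic, not the TimesVector implementation: each cluster's per-condition mean time profile is compared pairwise, and conditions whose profiles correlate above a threshold are treated as "similar".

```python
# Illustrative categorization of one cluster's per-condition mean profiles
# as DEP (all conditions differ), ODEP (exactly one condition deviates)
# or SEP (all conditions agree). The threshold and correlation measure are
# assumptions for illustration only.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / ((vx * vy) ** 0.5)

def categorize_cluster(profiles, threshold=0.8):
    """profiles: list of per-condition mean time profiles for one cluster."""
    n = len(profiles)
    sim = [[i != j and pearson(profiles[i], profiles[j]) >= threshold
            for j in range(n)] for i in range(n)]
    matches = [sum(row) for row in sim]      # similar conditions per condition
    if all(m == n - 1 for m in matches):
        return "SEP"                         # one shared pattern
    if all(m == 0 for m in matches):
        return "DEP"                         # every condition differs
    outlier = [m == 0 for m in matches]
    if sum(outlier) == 1 and all(m == n - 2
                                 for m, o in zip(matches, outlier) if not o):
        return "ODEP"                        # exactly one condition deviates
    return "mixed"
```

For example, three rising profiles give "SEP", while two rising profiles plus one falling profile give "ODEP".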
An example gene list of a cluster is shown in The third tab, \u201cTF\u201d, lists the TF genes present in the selected cluster as shown in Similar to TFs, putative gene-targeting miRNAs are listed in the \u201cmiRNA\u201d tab C,D, whicx-axis of each barplot represents the At last, the \u201cg:Profiler\u201d tab provides the result of gene set enrichment based analysis to detect significant GO terms, pathways, TF binding sites and miRNA targets using g:We used TimesVector-web to analyze multi-class time course data of three case studies and show that TimesVector-web was able to capture biologically meaningful results that agree with the outcomes reported in the original studies. The dataset of the three case studies includes both microarray and RNA-seq data and have different characteristics, such as time points, conditions and number of replicates, which are shown in K was set to 30 according to the K test result.The dataset of this case study was collected from the research in . Here, tp-value of 0.0125 which was also reported in the original study. Collectively, the clusters were able to partially reproduce the results of the study.TimesVector-web found a cluster whose expression pattern was highest and specific to the DV10 strain as shown K recommended by the K test, which was 170. However, the functional results of the detected clusters were not sufficient for interpretation, nor did they agree with the original study. K was increased to 500 to detect more cluster patterns in order to improve the functional result. As a result, TimesVector-web detected 2 DEP, 9 ODEP and 21 SEP gene clusters, which contained 31, 1543 and 3258 genes respectively, and which captured related biological signals well, in line with the previous study.Here, we collected time course gene expression data from GSE4324 .
Here, tIn most ODEP clusters, genes of gonadectomized samples and the intact female sample have similar expression patterns, but genes of the intact male sample have different patterns from them. Furthermore, in these clusters, the difference in gene expression level was largest at 7 days among the several time points. The related study showed that the levels of parasitemia in the gonadectomized male sample are marginally higher than those of intact males, intact females, and gonadectomized females at 7 days. In the DEP and ODEP clusters, we found that the gene expression of the gonadectomized female sample was higher than that of the other samples.In addition, we obtained biologically interpretable results from certain cluster patterns through the functional enrichment analysis provided by TimesVector-web. Another study that conducted GO (Gene Ontology) analysis with the same data showed GO terms related to immune response in clusters that are upregulated and downregulated in the intact male sample . Also, wK-test on the corresponding data set, the appropriate number of clusters was recommended as 30.The third case study is the rice root RNA-seq data GSE92835. The data come from rice roots, to study the process by which formation of nodule-like structures (NLS) is induced by plant hormones, with three time points: 0, 7 and 14 days. In particular, the conditions are inducing or not inducing NLS by applying the synthetic auxin 2,4-dichlorophenoxyacetic acid to roots of Oryza sativa. Also, each sample in this data set included three biological replicates. Namely, two conditions and three different time points were present. As a result of performing p-value by comparing all the time points of DEP , and datasets with different numbers of time points, conditions and genes. More importantly, TimesVector-web was able to reproduce important results reported in the original studies of the three case studies.TimesVector-web is to be extended to further handle single-cell RNA-seq time course data."}
+{"text": "Influenza is a serious global health threat that shows varying pathogenicity among different virus strains. Understanding similarities and differences among activated functional pathways in the host responses can help elucidate therapeutic targets responsible for pathogenesis. To compare the types and timing of functional modules activated in host cells by four influenza viruses of varying pathogenicity, we developed a new DYNAmic MOdule (DYNAMO) method that addresses the need to compare functional module utilization over time. This integrative approach overlays whole genome time series expression data onto an immune-specific functional network, and extracts conserved modules exhibiting either different temporal patterns or overall transcriptional activity. We identified a common core response to influenza virus infection that is temporally shifted for different viruses. We also identified differentially regulated functional modules that reveal unique elements of responses to different virus strains. Our work highlights the usefulness of combining time series gene expression data with a functional interaction map to capture temporal dynamics of the same cellular pathways under different conditions. Our results help elucidate conservation of the immune response both globally and at a granular level, and provide mechanistic insight into the differences in the host response to infection by influenza strains of varying pathogenicity. The possibility of influenza virus pandemics remains a potent public health threat. While most annual influenza strains are associated with a relatively low global infection rate and mortality, more widely infectious or lethal influenza virus strains arise periodically. The influenza pandemic of 1918 was responsible for more than 50 million deaths and, within one year, reduced the life expectancy in the United States by a dozen years . 
Thus, iIncreasingly, emerging research suggests that temporal dynamics may play an important role in the varying pathogenicity that is observed among different influenza strains . This prIntegration of gene expression data with complementary information about physical or functional associations between molecular entities has been proposed as a powerful approach to improve the interpretation of global transcriptional changes. These integrative approaches analyze gene expression experiments in the context of an independently constructed connectivity map, such as a protein-protein interaction (PPI) network, to identify modules comprised of genes or proteins that participate in common biological pathways or functions . More re\u2018comparative module discovery\u2019, i.e., the identification of temporally-shifted, network-based patterns of expression showing conservation (or divergence) between time course datasets that are generated in the same experimental system by different perturbations. Developing such an analysis method would be valuable in elucidating commonalities and differences in the biological responses to these perturbations. The identification of such comparative modules is critical for addressing the central question of our study - that of understanding the similarities and differences in virus-host interaction effects in response to related influenza virus infections.Time course gene expression datasets capture important features of the temporal trajectories of transcriptional changes. While the majority of integrative gene expression and interaction network analyses have not utilized the temporal dimension of the data, there have been attempts to incorporate temporal information into module discovery \u201324. 
For In order to perform comparative module discovery, we developed a novel integrative DYNAmic MOdule (DYNAMO) method, and applied it to understand the common and unique features of the host immune response to infection by related strains of the influenza virus. Integrating datasets that capture the temporal progression of the global gene expression response post-infection with an interaction network, our method discovers both conserved and differential comparative modules. Conserved comparative module discovery identifies a set of highly functionally connected genes that show a high degree of similarity between their regulation and response patterns for the perturbations being compared. Our approach allows the possibility that the module responses may be shifted in time across different perturbations. Differential comparative module discovery identifies genes that show differences in their pattern of regulation across different perturbations. Differential module discovery is a difficult problem because truly condition-specific regulatory patterns must be distinguished from experimental and biological variability , 18. Ourvia a user-friendly interface at http://dynamo.mssm.edu/.We apply DYNAMO to the problem of studying host-pathogen interactions for multiple H1N1 influenza virus strains. Our study builds upon the availability of identically sampled time series data for H1N1 seasonal and pandemic influenza virus of a human immune cell that lends itself to a systems-wide comparison of the dynamics underlying the modulation of the host response by each virus . DYNAMO searches for groups of genes in two time-series expression experiments that exhibit similar gene-by-gene expression patterns while allowing a temporal shift. DYNAMO is an integrative method that overlays expression data on a functional interaction network and leverages the methodology of the neXus algorithm to reinfThe neXus algorithm was deveg in two aligned time course expression datasets.
DYNAMO evaluates optimal similarity between the two vectors while allowing one vector to be shifted relative to the other by some time shift, \u0394t. To assess similarity in expression at any such \u0394t, we calculate the time-lagged Pearson correlation coefficient of the two vectors. Let T be the set of discrete time points at which gene expression was sampled for each virus infection and T' be the corresponding set of time points shifted by \u0394t. Denoting the expression vectors as X_T(g) for the stationary time course and Y_T'(g) for the time-shifted course, we compute the time-lagged correlation coefficient (TLC) of gene g between the two responses as TLC(g) = cov(X_T(g), Y_T'(g)) / sqrt(cov(X_T(g), X_T(g)) cov(Y_T'(g), Y_T'(g))), where cov is the standard covariance. We use linear interpolation to calculate the values in the stationary time course that correspond to the new time points. Just like the standard correlation, a time-lagged correlation close to 1 means that the expression of gene g is perfectly correlated between the two responses once the time-shift is taken into account. We determined (data not shown) that transforming the correlation distributions via the Fisher Z-transform, z = ln((1 + r)/(1 - r))/2, resulted in better findings, and used these Fisher-transformed scores within the algorithm when assessing expression coherence of growing subnetworks at various time lags.via a depth-first search from the seed gene, as in the original neXus algorithm that brings the group of genes in the two responses into temporal alignment. Note that the optimal time lag for a growing network can change with the addition of new genes, but, in our experience, does not vary widely. We use the average of the per-gene maximal fold-changes during the time course in each response as a third cutoff to be met in order to filter out falsely high subnetwork scores that may be due to a good alignment of flat time courses of genes that do not show significant differential expression.
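The time-lagged correlation with linear interpolation and Fisher Z-transform can be sketched in Python. This is an illustrative rendering of the quantities defined above, not the DYNAMO source code:

```python
import math

def interp(t, times, values):
    """Linear interpolation of a sampled time course at time t (clamped)."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def time_lagged_correlation(times, x, y, lag):
    """Correlate y against x evaluated at the time points shifted by lag."""
    x_at_shifted = [interp(t + lag, times, x) for t in times]
    return pearson(x_at_shifted, y)

def fisher_z(r):
    """Fisher Z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))
```

For a peaked profile x and a copy of it shifted earlier by one time unit, the lag-1 correlation is near 1 while the lag-0 correlation is much lower, which is exactly the signal the time-shifted module search exploits.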
Finally, we merge the discovered subnetworks if there is considerable (0.6) overlap among their constituent genes and their identified time lags are the same.We begin with a list of seed genes, and their expression vectors from a pair of aligned time course experiments. We use fold-change values over a control condition, though other quantitative vectors such as differential expression p-values can be used as well. The matrix of standardized z-scores is computed for all genes at every considered time lag. Putative subnetworks are grown greedily from every seed in turn. First, candidate genes are identified lgorithm . To asseIdentifying genes that behave differently between a pair of responses is a difficult problem because many spurious expression differences can arise for individual genes. We again employ the insight of constraining expression differences by requiring tight clustering of such genes in the underlying functional network. The structure of the algorithm is similar to that of the algorithm for finding conserved temporally-shifted subnetworks. We enforce the network connectivity requirement by maintaining a minimum desired clustering coefficient, and optimize the choice of candidate genes for addition to the growing subnetwork by selecting one that shows the highest divergence in its expression pattern between the responses, provided that the average expression score stays below a selected score threshold. The subnetwork expression score that, in the case of differential modules, needs to identify genes with divergent expression patterns, is modified to reflect that difference. We observe that the correlations and their corresponding Fisher z-score distributions for most time lags have positive means expression pattern coherence that arises from the clustering of genes by random chance. 
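The greedy, clustering-constrained growth described above can be sketched as follows. This is a schematic simplification, not the DYNAMO/neXus implementation: `score` stands for a per-gene expression-coherence value (e.g., a Fisher-transformed time-lagged correlation), and the thresholds are illustrative.

```python
from itertools import combinations

def avg_clustering(nodes, adj):
    """Average local clustering coefficient of the subgraph on `nodes`.
    `adj` maps each gene to a set of neighbor genes."""
    if len(nodes) < 3:
        return 1.0  # singletons and pairs trivially pass the constraint
    total = 0.0
    for v in nodes:
        nbrs = [u for u in adj.get(v, ()) if u in nodes]
        if len(nbrs) >= 2:
            links = sum(1 for a, b in combinations(nbrs, 2)
                        if b in adj.get(a, ()))
            total += 2.0 * links / (len(nbrs) * (len(nbrs) - 1))
    return total / len(nodes)

def grow_subnetwork(seed, adj, score, min_clust=0.5, min_score=0.7):
    """Greedily add the neighbor keeping the highest average score,
    subject to the clustering-coefficient constraint."""
    module = {seed}
    while True:
        frontier = {u for v in module for u in adj.get(v, ())} - module
        best, best_avg = None, -1.0
        for u in sorted(frontier):  # sorted for determinism
            cand = module | {u}
            avg = sum(score[g] for g in cand) / len(cand)
            if (avg >= min_score and avg > best_avg
                    and avg_clustering(cand, adj) >= min_clust):
                best, best_avg = u, avg
        if best is None:
            return module
        module.add(best)
```

On a toy graph where a high-scoring triangle has a low-scoring pendant gene attached, growth from any triangle member recovers the triangle and stops before the pendant drags the average score below the threshold.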
For a given expression score threshold, the subnetworks discovered in the randomized data at that threshold represent false positive findings and enable an estimation of false discovery rate. We calculate the associated subnetwork confidence value asand use it to assess the subnetworks\u2019 statistical significance. Overall, exploring the algorithm\u2019s findings over various parameter ranges for randomized and real data allows a substantiation of our parameter choices and a quantification of the biological significance of the results.log2(expression) >6.6], determined based on visual inspection of the distribution), averaged over the triplicates, and converted to fold-change values over the time-matched allantoic fluid control condition. Each viral time course was analyzed for differential expression using LIMMA (BioConductor implementation) after correction for multiple hypothesis testing (q < 0.05) . For each infection, cells were collected at the following time points post infection: 120, 160, 200, 240, 300, 360, 420, 480\u00a0minutes. Na\u00efve non-infected DCs underwent the same experimental handling as infected DCs in virus-free allantoic fluid to ensure that mechanical manipulations could not be responsible for differences in experimental readouts. These served as a negative control time course. All time points and controls were performed in triplicates. The details of DC maturation, virus preparation and infection as well as RNA extraction for microarray experiments are described elsewhere . The RNA < 0.05) . MaximalHuman monocyte-derived DCs were infected with either NC, Tx or Cal (pandemic) H1N1 IAV. Samples were fixed in 1.6% paraformaldehyde (Sigma) and subsequently stained with fluorophore conjugated antibodies against CD86 and HLADR (both BD) at multiple time points post infection. 
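The paper's exact confidence expression did not survive extraction, but a common FDR-style estimate consistent with the description (a hypothetical reconstruction, not the authors' formula) treats the subnetwork count in randomized data as the expected number of false positives:

```python
def subnetwork_confidence(n_real, n_random):
    """Hypothetical confidence value: 1 minus the estimated FDR, where the
    FDR is approximated as the ratio of subnetworks found in randomized
    (shuffled) expression data to those found in the real data."""
    if n_real == 0:
        return 0.0
    fdr = min(1.0, n_random / n_real)
    return 1.0 - fdr

# e.g., 40 real subnetworks vs 4 in shuffled data -> confidence 0.9
```

Whatever the exact form, the point is that subnetworks recovered from shuffled data calibrate how many real findings could arise by chance at a given score threshold.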
Cells were analyzed with an LSRII flow cytometer (BD) and the data were analyzed with Cytobank and R.
We consider two human functional linkage networks: a general functional network and an immune-specific network. Their edge weights, derived via Bayesian integration, differ between the two networks, with the median edge weights being 0.85 and 0.22 for the general functional network and the immune-specific network, respectively. We retained the one million most highly weighted edges for each network. We explored the algorithm\u2019s performance and its dependence on the clustering coefficient parameter for each network separately (data not shown), and found that in each case there exists a range of this parameter (different for each network because of the differences in the underlying edge weight distributions) with comparably good performance. We use these ranges, and set the average clustering coefficient cutoffs to 0.8 and 0.5 for the general and immune-specific networks, respectively. We chose the value of 1.5 for the subnetwork score threshold.
The algorithm is described in detail in the Materials and Methods section. Each DYNAMO module is grown from a seed gene by adding nearby genes in the interaction network in a way that maximizes the average gene expression activity score of the module, while maintaining a minimum desired clustering coefficient. DYNAMO\u2019s expression activity score (subnetwork score) addresses the challenge of comparing time course datasets and studying response programs that may be temporally shifted with respect to one another. DYNAMO samples time-shifts in the gene expression dynamics, computing the time-lagged Pearson correlation coefficient, and conducts a greedy search for coherent active subnetworks, such that each module member gene in one dataset exhibits a maximally similar expression pattern to the same gene in the other dataset. For each module, the optimal time shift, applied to all genes, is identified. Subnetworks with high overlap in gene membership that exhibit the same time lag are merged.
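The time-lagged Pearson correlation at the core of the subnetwork score can be sketched as follows. This illustrates the idea of scanning candidate lags and keeping the one with the highest correlation; the function name `best_time_lag` and the sign convention (negative lag means the second trace is delayed) are choices made here, not DYNAMO's implementation.

```python
import numpy as np

def best_time_lag(x, y, lags):
    """Return (lag, r): the lag (in sampling steps) that maximizes the
    Pearson correlation between two expression time courses when y is
    shifted relative to x.  Illustrative sketch of the time-lagged
    correlation idea; not the DYNAMO code."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    best = None
    for lag in lags:
        if lag >= 0:
            # pair x[t + lag] with y[t]
            a, b = x[lag:], y[:len(y) - lag]
        else:
            # pair x[t] with y[t - lag]
            a, b = x[:lag], y[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        if best is None or r > best[1]:
            best = (lag, r)
    return best
```

In DYNAMO, lags are sampled in units of the (aligned) experimental sampling interval, e.g., 20-minute steps.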
DYNAMO identifies the set of highly coherent, statistically significant modules by determining the false discovery rate (FDR) via analysis of randomly shuffled expression data. The same methodological approach is applied to the problem of differential comparative module discovery. DYNAMO identifies maximally differentially regulated genes in two datasets that represent a highly functionally related module in the underlying functional interaction network. We built upon the approach of neXus, an algorithm for cross-species discovery of conserved active subnetworks. DYNAMO is available online (http://dynamo.mssm.edu/). In the following sections, we used DYNAMO to identify and compare modules in time course responses to the different influenza viruses. We first performed an in-depth analysis of the Brevig/Cal response comparison, validating our method and offering insight into the biology of their shared and unique response processes. We then compared temporal dynamics and functional pathway activity, computed as GO term enrichment of discovered modules, for all the strains. Detailed analyses of each comparison, including conserved and differential comparative modules, functional pathway activity and performance characteristics, are available at the same website.
We evaluated two important aspects of the DYNAMO algorithm. First, we considered the effect of the choice of the functional network used by DYNAMO to identify functional connectivity. Next, we assessed the effects of allowing a temporal shift of the gene expression dynamics on module discovery. In evaluating the algorithm\u2019s performance, we considered the number of conserved modules that were discovered by the algorithm, and we estimated the false positive rates for the discovered modules via a randomization analysis. Functional networks are constructed from heterogeneous data sources and represent diverse associations between genes or proteins. Their edge weights, derived via Bayesian integration, reflect the entirety of the experimental evidence for connecting the corresponding gene pairs.
We concluded that using a functional connectivity map and selecting a map that is most relevant for the experimental study are essential for identifying significant modules. As such, we used the immune-specific functional network in all further evaluations within this study. We further assessed the importance of enforcing the functional coherence of the modules and considered whether our method can extract high-confidence subnetworks from expression data alone. We used DYNAMO without enforcing the clustering coefficient parameter, while adding putative module member genes in the same order from a pool that is functionally proximal to the seed gene. As shown previously by Deshpande et\u00a0al., and corroborated by our analysis, enforcing the network coherence of modules is important for extracting reliable subnetworks.
We evaluated the advantage gained by the introduction of a time shift in the identification of active subnetworks shared by the Brevig and Cal responses. We considered possible time lags of -80, -60, -40, -20, 0, 20, 40, 60 and 80 minutes, and shifted the Cal time course with respect to the Brevig time course. We compared DYNAMO\u2019s results when optimizing module discovery over the possible time-lags to those found with no time shift allowed, while keeping all other parameters the same; the results of this comparison are shown in the corresponding figure.
DYNAMO\u2019s objective of identifying conserved or divergent temporally shifted modules that are common between two responses is quite unique and, to the best of our knowledge, has not been addressed in the literature. Nonetheless, we evaluated DYNAMO against the two other methods that are most similar and identify conserved subnetworks from gene expression data, ModuleBlast and TDARACNE. ModuleBlast was designed to compare module activation patterns across species. It uses expression data and network topology information to search for conserved and divergent sub-networks. Analysis of the host immune response gene expression data comparing Brevig and Cal infections using ModuleBlast resulted in 38 modules.
These modules were generally not functionally enriched for immune-specific processes, according to functional annotation within ModuleBlast. Analysis by DYNAMO shows the importance of the network context in which gene expression data is analyzed. Biological pathways that are activated in an immune context are best identified using an underlying network that emphasizes immune-specific interactions. Since ModuleBlast employs a generic interaction network, the relative paucity of conserved modules is not surprising. Furthermore, while ModuleBlast makes use of temporal information, it does not optimally align the responses. This is a key difference that enables DYNAMO to capture coherent activation patterns that are temporally shifted. We also applied TDARACNE to our dataset. TDARACNE was designed to address a different problem: it is a subnetwork inference method that is not comparative and operates on each gene expression dataset individually. Therefore, it generally infers dissimilar sets of modules for the Brevig and Cal datasets, making a direct comparison with DYNAMO meaningless.
DYNAMO identified 207 high confidence functionally-coherent subnetworks that are time-shifted between the two pandemic strains, Brevig and Cal. To evaluate the subnetworks for biological significance, we assessed functional enrichment in the set of genes contained in each subnetwork. The enrichment was computed for each subnetwork individually based on the overlap of its constituent genes with the Gene Ontology (GO) biological process terms. The absolute majority of the subnetworks identified (82%) showed optimal similarity when aligned at the 80 minute time lag, with the Cal response activated after the Brevig response. Our findings confirmed earlier observations of this temporal shift between the two responses. We next used DYNAMO to identify conserved temporally shifted modules to compare all pairs of influenza strains.
An overall conservation of the immune response for all the pairwise comparisons was evident in the functional enrichment observed in the subnetworks. Using GO term enrichment by enrichR, we found significantly enriched (p < 0.0001) immune-related GO terms that were common to all the comparisons and collectively were assigned to the absolute majority of the subnetworks. Representative GO terms are shown in the corresponding figure.
Comparative differential subnetworks are a group of highly functionally related genes that show differences in their pattern\u00a0of regulation in response to the two perturbations being compared. Because their identification is a less constrained problem than conserved subnetwork discovery, the reliable selection of comparative differential modules is challenging. When identifying conserved subnetworks, the effects of noise in the data are mitigated by the requirement that common regulatory changes must be observed in different experiments. Methods that rely on pairwise gene interactions alone are more susceptible to this noise. We applied our method for differential module discovery to all pairs of influenza strain responses. Overall, we found many fewer differential modules as compared to conserved modules identified among pairs of responses. Significant enrichment (p < 0.0001) was observed among the subnetworks, annotated by GO terms that are shared across pairs. The GO enrichment analysis of differential modules for the Cal/NC comparison is shown in the corresponding figure. In the four comparisons of differential modules for a pandemic and a seasonal strain, a considerable overlap in the GO terms assigned to these subnetworks was observed. We found 31 enriched processes.
To validate the hypothesis of the DYNAMO algorithm, we experimentally tested if infection with the two seasonal and the one pandemic IAV strain resulted in differences in antigen presentation. Antigen presentation by professional APCs, such as dendritic cells, occurs via a combination of signals.
The corresponding antigen presentation signals are shown in Figure\u00a06. Overall, the application of DYNAMO to the seasonal and pandemic H1N1 influenza infection datasets derived insight into commonalities and differences in the regulation of functional modules and potential mechanisms of immune response modulation by the individual influenza virus strains.
In this study, we applied DYNAMO, a technique for discovery of comparative modules with different temporal dynamics or patterns of activation, to investigate host responses to infection by four different influenza virus strains and gain insight into the\u00a0temporal and functional similarities and differences between\u00a0them. We showed that the ability to search over multiple temporal lags allowed us to discover conserved temporally shifted mechanisms between different immune responses. Overall, we found remarkable temporally coherent conservation of a core group of immune processes that are crucial to infection control, such as cytokine signaling and specifically interferon signaling, in responses to all four viruses.
Our search for differential modules pointed to potential mechanistic differences among the seasonal and pandemic strains, discovering subnetworks that suggest a key role for apoptosis, a finding consonant with previous experimental work implicating apoptosis in the host response to influenza.
Methodologically, the development of DYNAMO represents an important advance, which adds the element of temporal dynamics to the broad systems biology problem of functional subnetwork discovery. Our method is available online (http://dynamo.mssm.edu/). While our algorithm development and successful application to the study of the immune response to multiple strains of the influenza virus is encouraging, a number of promising directions for further improvement of the method remain. The current version is restricted to expression data that is identically sampled and aligned.
Since few datasets in the public domain share the same experimental design, relaxing this restriction, possibly using a time-warping algorithm, would make the method applicable much more broadly. Raw microarray expression data are available from GEO, Series\u00a0GSE55276. Flow cytometry data are included in the supplementary material.
IN and DK developed and implemented the algorithm and conducted all data analysis. GN provided support for the development of the website. RD assisted with algorithm implementation and software development. BH conducted the experimental work. SK, CM, and SS supervised the analyses and edited the manuscript. EZ conceptualized and managed the research project, supervised the algorithm development and data analyses, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
This work was supported by National Institutes of Health contract HHSN272201000054C and Grant 1U19AI117873. IN was supported by the Graduate School of Arts and Science, New York University.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Methods for time series prediction and classification of gene regulatory networks (GRNs) from gene expression data have been treated separately so far. The recent emergence of attention-based recurrent neural network (RNN) models boosted the interpretability of RNN parameters, making them appealing for the understanding of gene interactions. In this work, we generated synthetic time series gene expression data from a range of archetypal GRNs and we relied on a dual attention RNN to predict the gene temporal dynamics. We show that the prediction is extremely accurate for GRNs with different architectures. Next, we focused on the attention mechanism of the RNN and, using tools from graph theory, we found that its graph properties allow one to hierarchically distinguish different architectures of the GRN. We show that the GRNs responded differently to the addition of noise in the prediction by the RNN and we related the noise response to the analysis of the attention mechanism. In conclusion, this work provides a way to understand and exploit the attention mechanism of RNNs and it paves the way to RNN-based methods for time series prediction and inference of GRNs from gene expression data. Recent technological innovations, such as chromatin immunoprecipitation sequencing (ChIP-seq) and RNA sequencing (RNA-seq), allow complex networks formed by interactions among proteins, DNA and RNA to be systematically studied. These advances have renewed interest in the computational reconstruction of such networks. The complex interaction network among genes in a cell forms the gene regulatory network (GRN). In a GRN, there are, typically, transcription factor (TF) proteins and target genes. TFs activate and inhibit the transcription of target genes. In turn, target genes produce other TFs or proteins that regulate cell metabolism. Predicting the behavior of a stochastic network is an extremely challenging task in modern science.
Indeed, the measurable output of many processes, such as the evolution of the stock market, meteorology, transport and biological systems, consists of stochastic time traces. Analytical models give the possibility of inferring the physical parameters of the system, or at least parameters that have a well-defined meaning for the system. This aspect plays a crucial role because it allows one to obtain a physical interpretation of the network due to the interaction between its constituent elements. In this sense, a learning process is not only able to estimate the prediction of temporal evolution data, but it also provides the generative model, which is, in some cases, physically interpretable. Somewhat parallel to the general problem of predicting the dynamical evolution of a time sequence, there is the task of classifying a set of time series. In this context, the estimated parameters obtained from the predictive model can be used in a classifier to classify different time series. Among such approaches, deep neural networks (DNNs) are particularly promising. In this work, a DNN approach was adopted with the aim of inferring information on the behavior and on the structure of GRNs. To evaluate the effectiveness of this approach, we generated a synthetic gene regulatory model and its stochastic evolution in time and we used a deep neural network to predict the future behavior of the time traces of the system. Through a series of statistical analyses of the results obtained, we took into account the DNN parameters in order to infer information on the physical structure of the gene interactions. We show that this approach can be used to categorize protein expression profiles obtained from GRNs with different topological structures, e.g., master regulator genes or oscillating gene networks. At the heart of our approach, there is the consideration that high-throughput experimental techniques, such as microarrays and RNA-seq, can produce time traces of gene expression.
For instance, whole-genome microarrays have been employed to monitor the gene expression dynamics of circadian genes in cyanobacteria, providing the expression levels of approximately 1500 genes over 60 h, measured at intervals of 4 h. In this work, we used the dual attention mechanism (AM) that has been recently introduced. To predict the next value of a target time series, T time steps of the ensemble of variables that are likely responsible for its evolution need to be considered. Moreover, the dual attention mechanism is needed to infer the functional interaction among the variables. The first AM selects the genes out of the pool that are master regulators of the given state. The second AM is applied to the selected genes of the first level and it prioritizes important time steps of their expression. These two attention mechanisms are integrated within an LSTM-based recurrent neural network (RNN) and are jointly trained using standard backpropagation. In this way, the dual attention recurrent neural network (DA-RNN) adaptively selects the most important time steps of the most relevant inputs and captures the long-term temporal dependencies. Since our goal is to predict the dynamics of all the genes that belong to the GRN, we built a scheme parallel to the DA-RNN. We use RNNs because we know that these neural networks can store several patterns and that the network structure affects the dynamics and the number of the stored patterns. Regarding the forecasting task, the DA-RNN has been recently shown to outperform state-of-the-art methods for time series prediction and different types of RNNs without the dual attention mechanism. To evaluate the general usefulness of the DA-RNN, we generated synthetic time series of gene expression from different classes of GRNs, resembling known biological networks. We trained a parallel DA-RNN for each GRN and we showed that it predicted the future behavior of the time traces with high accuracy.
Next, we focused on the input attention mechanism, with the goal of gathering information about the structure of the different GRNs used. Relying on graph theory and network analyses, we studied different properties of the attention layer, finding that, in general, they were able to discriminate different GRN architectures, thus identifying structural information about the GRN in the attention mechanism of the DA-RNN. We observed that the robustness of the prediction by the DA-RNN to noise addition was different for the various GRN architectures and we compared it to the properties of the input attention. Our work represents a first promising application of a DA-RNN to the prediction of time series gene expression data and it provides a step forward in the analysis of the interpretability of the attention mechanism of deep neural networks. Instead of using a classical neural network approach, we propose a new method to predict the concentrations of a set of proteins. The method uses state-of-the-art deep learning methodologies, including long short-term memory (LSTM) and the attention mechanism (AM). We implemented a deep neural network model that relies on the dual-stage attention-based recurrent neural network (DA-RNN) for time series prediction. The encoder attention layer weighs the N input time traces to optimize the prediction of the target, while the decoder attention layer looks for the best way of weighting the past time points in each time trace. The combination of these two layers finds two key characteristics of the prediction for a given target time trace; on the one hand, it finds the most important elements in the system that should be considered for predicting the target and, on the other hand, it quantifies the importance of the different past points. N parallel DA-RNNs were trained, where the i-th network predicts the behavior of the i-th target gene.
We hope to find, encoded in the attention layer, functional information about the interaction among the genes. We are interested in predicting the next time step of the whole network of interacting genes. To this end, we implemented the parallel scheme shown in the corresponding figure. To optimize network performance, a standard scaling of the data was performed, converting all the time traces to zero mean and unit variance. We used mini-batch stochastic gradient descent (SGD) together with the Adam optimizer to train the networks on the time traces of length M. We chose the time window T for training the parallel DA-RNN based on an analysis of the autocorrelation time of the gene expression time traces and an evaluation of the prediction performance of the DA-RNN, and we computed the training time for all the genes of each gene regulatory network. We looked at the distribution of the autocorrelation times for two example gene regulatory networks. We implemented the network in Python using the py-torch library. To extract the weights of the input attention mechanism for each GRN, we trained a DA-RNN for each target gene, as described in the previous section, and we collected the vector of N input attention weights, where N is the number of genes. This resulted in the definition of a matrix whose entry (i, j) is the attention weight of gene j obtained by training the neural network on target gene i. Next, we treated the input attention matrix as a graph and we computed three local network properties, i.e., the clustering coefficient, the betweenness centrality and the hub score. We used the \u201ctransitivity\u201d, \u201cbetweenness\u201d and \u201chubscore\u201d functions of the \u201cigraph\u201d package of R for the three descriptors, respectively. For each case, we considered a weighted graph. The three descriptors are computed from the weighted links of the graph, with the weighted degree of node i defined as the sum of the weights of all links incident to it.
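The autocorrelation-based choice of the training window can be sketched as follows. The 1/e cutoff used here is an assumption for illustration (the text does not state the exact criterion), and `autocorrelation_time` is a hypothetical helper name.

```python
import numpy as np

def autocorrelation_time(trace, threshold=1.0 / np.e):
    """Estimate the autocorrelation time of one expression trace as the
    first lag at which the normalized autocorrelation drops below the
    threshold (1/e by assumption; the paper's exact cutoff is not
    stated).  Returns the trace length if it never drops below."""
    x = np.asarray(trace, float)
    x = x - x.mean()
    var = np.dot(x, x)
    for lag in range(1, len(x)):
        r = np.dot(x[:-lag], x[lag:]) / var
        if r < threshold:
            return lag
    return len(x)
```

A training window T of the order of the largest autocorrelation time among the genes ensures the input covers the informative history of each trace.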
Specifically, the three network descriptors are defined as follows. (i) The clustering coefficient of node i quantifies the fraction of pairs of neighbors of i that are themselves connected, i.e., the fraction of closed triangles around i. (ii) The betweenness centrality of node i is defined in terms of the number of shortest paths that go from node k to node l passing through node i, summed over all node pairs (k, l) and normalized by the total number of shortest paths between k and l. (iii) Kleinberg\u2019s hub centrality score, or hub score, of the vertices is defined as the principal eigenvector of the product of the adjacency matrix with its transpose. The interpretation of the three network properties is provided in the corresponding table.
The clustering analyses of the descriptor matrices were performed using the Python package scikit-learn. Firstly, we scaled the data to zero mean and unit variance using the function \u201cStandardScaler\u201d. Next, we performed agglomerative hierarchical clustering on each matrix, with complete linkage and Euclidean distance, using the function \u201cAgglomerativeClustering\u201d. For each matrix, we chose the optimal number of clusters as that maximizing the average silhouette score. The silhouette score for a given sample is defined as (b \u2212 a)/max(a, b), where a is the mean distance of the sample to the other members of its cluster and b is the mean distance to the members of the nearest other cluster. We also ran a PCA on the same matrix, using the \u201cpca\u201d function. In order to compare the dendrograms obtained from the hierarchical clustering of the different matrices, we first converted them to Newick trees, which is a standard format for phylogenetic trees, then we used the R package \u201cape\u201d to read the dendrograms and the R package \u201cTreeDist\u201d to compute the distance between each pair of dendrograms. In particular, we used the information-based generalized Robinson\u2013Foulds distance. Finally, the comparison between the partitions obtained from the hierarchical clusterings was performed using the variation of information, which measures the amount of information lost and gained in changing from one partition to the other.
Modeling and simulating a whole gene regulatory network is a fundamental challenge for computational biophysics. Indeed, the network of interacting genes includes many chemical reactions, with the respective rates and parameters that define a high-dimensional parameter space.
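The silhouette-driven choice of the number of clusters described above can be sketched with scikit-learn. The helper name `best_partition` and the range of candidate cluster counts are choices made here for illustration; the clustering settings (complete linkage, Euclidean distance, standardized features) follow the text.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def best_partition(X, max_k=6):
    """Scale the feature matrix to zero mean and unit variance, then
    pick the number of clusters for complete-linkage agglomerative
    clustering that maximizes the average silhouette score."""
    Xs = StandardScaler().fit_transform(X)
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(2, max_k + 1):
        labels = AgglomerativeClustering(
            n_clusters=k, linkage="complete").fit_predict(Xs)
        score = silhouette_score(Xs, labels, metric="euclidean")
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```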
Moreover, the activation of a gene in the classic framework happens via a transcription factor (TF) that, in order to bind the DNA, usually goes through a set of crucial chemical post-transcriptional modifications, which can drastically change the nature of the protein. Other physical dynamics, such as tuned delay, repression, the presence of multiple TFs and the cooperativity among them, further complicate the endeavor of modeling gene regulatory networks.
The simplest way of describing a gene regulatory network is to define the nature of the interactions: each gene i expresses a protein whose concentration evolves in time t, and the connectivity matrix element between genes i and j is 1 if the protein expressed by gene j activates gene i, \u22121 if it is a repressor and 0 if the two genes do not interact. The production rate of gene i thus depends on the concentrations of the proteins of its activating and repressing TFs j. An effective production term allows one to account for the translation of mRNAs into proteins and it has been previously included in Gillespie simulations. We stress that the behavior of the gene regulatory network described by this model is rich. Moreover, the connectivity parameters can be interpreted as the mean numbers of repressors and activators for a given gene. Using these parameters, we can tune the connectivity matrix density of the net; analyzing the extreme values of these parameters yields networks ranging from fully connected to essentially disconnected.
Different classes of gene regulatory networks model distinct biological systems. In this section, we describe the gene regulatory network architectures that we chose for the generation of time series data and for training the deep neural network model. Since we aim at understanding the regulatory interactions between genes in a gene regulatory network, we generated synthetic data for which we know the ground truth network. Firstly, we focused the analysis on the degree of connectivity, which is an important network feature. Indeed, some networks are extremely connected, meaning that the connectivity matrix is dense, while others are sparse. Oscillations are another important property of GRNs.
Therefore, we included, in our analysis, oscillatory networks in which there is a central core of genes that evolve in an oscillatory fashion. The simplest way to induce oscillations in a gene regulatory system is to introduce a feed-forward loop with some delay. To this end, we relied on the Goldbeter model. Another class of networks that we studied is that where all genes share incoming connections from a master regulator gene. In other words, we define a GRN in which the first gene shares out-edges with all other genes; in addition to these edges, we added random connections among the other gene pairs. The last class of GRNs that we studied is that connected to an external oscillatory signal. In addition, in this case, the connectivity among the internal genes is random and quantified by the parameters introduced above.
To predict the temporal evolution of a GRN, the neural network is trained on a time window T of the input data, which plays a fundamental role in the training procedure, as described in detail in the Methods. In our approach, we considered that the mutual information between the state of the system at time t and its state at a later time decays with the time difference, going to 0 for \u0394t\u2192\u221e. We used the parallel DA-RNN described in detail in the Methods. We first predicted each time point from a window of T steps of the original data. The RMSE computed with these predicted values is the standard performance measure for the time series prediction task, but it only assesses one-step-ahead prediction from true T values. To also account for the long-range performance of the parallel DA-RNN in time series prediction, we introduce an additional way of predicting the time series after training the DA-RNN. The prediction is propagated by providing, to the network, the first T input data from the time trace and letting the DNN predict the next time step. When the first time point (after T) is predicted, the network has, as the input for the next prediction, the T most recent values, including its own predictions; after T steps the system would rely in its prediction only on previously predicted data, showing a completely autonomous behavior.
Using this procedure to calculate the prediction, we found that the DA-RNN was able to reconstruct well the dynamics of the system, as observed in the corresponding figure. The input attention mechanism of the DA-RNN assigned weights to the genes of the gene regulatory network under study, with the goal of prioritizing those that enabled the prediction of the dynamics of the target gene. Thus, we reasoned that the set of input attention weights for a given gene regulatory network could reflect, to some extent, the structure of the regulatory interactions, allowing us to distinguish different architectures of the gene regulatory networks. For each gene regulatory network, we extracted the input attention matrix. A recent study showed that the input attention mechanism of the DA-RNN does not directly reflect the causal interactions between the variables composing the system; this motivated us to look for structural, rather than causal, information in the attention weights. The first descriptor, the clustering coefficient, is based on triangulations: not only should node i be connected both with node j and node k, but nodes j and k should also be connected. As a second descriptor, we considered the betweenness centrality, which analyzes the centrality of the node with a higher level of complexity since it is not obtained only from the interactions with the first neighbors (such as the clustering coefficient) but is defined in terms of the number of all the shortest paths connecting all node pairs that pass through the given node. Indeed, a node i with a large value of betweenness is a key node of the network, because many shortest paths connecting the pairs of nodes pass through it. The last descriptor, the hub score, is directly related to the intrinsic properties of the adjacency matrix, since it is defined as the principal eigenvector of the adjacency matrix of the graph. This means that it is able to capture mathematical properties of the network given by the optimization of the information contained in the corresponding adjacency matrix.
For the mathematical definition of the three descriptors, see the Methods section. Thus, to investigate the organization of the interactions that make up the network architecture from the input attention matrices, we employed methods from graph theory. We chose three different local parameters, which are defined for each node of the network and provide different levels of description. The first is the clustering coefficient, which describes the topological/structural organization of the interactions involving a node and its first neighbors, meaning that the local relationship structure that each node has with its first neighbors is evaluated. Specifically, the clustering coefficient is defined by the number of triangulations of each node, so that the structure of the neighborhood of node i, and not only the node itself, matters. To analyze the role of each network parameter for each type of network used, we considered the values of the descriptors for each node (gene) of the graph sorted, in this case, in decreasing order. The reason for this choice is that there is no correspondence of a physical and/or biological nature between the genes of two different matrices; therefore, there is no a priori numbering of the genes. We only evaluated two matrices as similar to each other if the two profiles of the analyzed network property were similar. This procedure allowed us to compactly describe each attention matrix with a single vector composed of a specific network property, node by node, in descending order. Next, we studied how the time series prediction by the DA-RNN was affected by noise added to the gene expression, to see if different regulatory architectures reacted in distinct ways. The addition of noise to the time series was meant to reflect the alteration in genes belonging to specific networks coming from different extrinsic sources. Using the trained DA-RNN, we generated T predicted values of the protein concentration, where T is the time window used for training the DA-RNN.
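The descending-profile construction described above can be sketched directly. The helper names (`property_profile`, `profile_matrix`) are hypothetical; the idea, sorting each descriptor in decreasing order so that the arbitrary gene numbering is discarded, follows the text.

```python
import numpy as np

def property_profile(values):
    """Sort one node-level descriptor (one value per gene) in
    descending order, making profiles from different GRNs comparable
    despite the lack of any a priori gene correspondence."""
    return np.sort(np.asarray(values, float))[::-1]

def profile_matrix(per_network_values):
    """Stack the descending profiles of several networks into the
    matrix that is subsequently clustered (one row per network)."""
    return np.vstack([property_profile(v) for v in per_network_values])
```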
To this end, we used the parameters of the DA-RNN, trained as described in the Methods section, to predict the level of the target gene, but we added a Gaussian noise with zero mean and a given variance to the input time traces. We computed the MSE on the prediction and we repeated the procedure for several values of the noise variance. As a first comparison between the matrices obtained for each descriptor from graph theory and from the results of the noise addition, we represented each pair of matrices in scatter plots, shown in the corresponding figure. To better capture information from the attention matrices, we interpreted these as graphs and used local parameters from graph theory to quantify the role of each node (gene) in the complexity of the interaction network with other nodes (genes). To this end, we considered three local descriptors that are able to determine both local topological properties and network properties linked to the definition of the shortest path (see Methods). Using the matrices obtained for the clustering coefficient, the hub score and the betweenness centrality from the input attention matrix of each GRN, we performed hierarchical clustering, as detailed in the Methods section, and inspected the resulting dendrograms. We also ran a PCA on the same matrices and represented the GRNs and the partition in clusters using the first two principal components (PCs), which, together, explained, on average, a substantial fraction of the variance (see the corresponding figure panels for the clustering coefficient, the betweenness centrality and the hub score). In this section, we present the study of the matrices of the MSE obtained from the analysis of noise described in the Methods. Next, we built a matrix containing the means and the variances of the MSE for each gene and each GRN and we performed hierarchical clustering and PCA on it, as described in detail in the Methods section.
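The noise-robustness probe, adding zero-mean Gaussian noise of increasing variance to the input and recording the MSE of the resulting predictions, can be sketched as follows. The callable `predict` stands in for the trained DA-RNN, and `noise_response` is a hypothetical helper name; this is an illustration of the protocol, not the authors' code.

```python
import numpy as np

def noise_response(predict, x, y_true, sigmas, n_rep=20, seed=0):
    """For each noise level sigma, repeatedly corrupt the input trace
    with zero-mean Gaussian noise, recompute the prediction and record
    the mean and variance of the MSE against the clean target."""
    rng = np.random.default_rng(seed)
    out = {}
    for sigma in sigmas:
        errs = []
        for _ in range(n_rep):
            noisy = x + rng.normal(0.0, sigma, size=x.shape)
            errs.append(np.mean((predict(noisy) - y_true) ** 2))
        out[sigma] = (np.mean(errs), np.var(errs))
    return out
```

The per-gene means and variances of the MSE collected this way form the rows of the matrix that is then clustered alongside the graph-theoretical descriptors.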
We then inspected the dendrogram representing this hierarchical clustering. Finally, we compared the dendrograms obtained from the hierarchical clustering for the different network properties and for the analysis of the noise by computing the information-based generalized Robinson-Foulds distance, or tree distance (see the Methods section). In this manuscript, we used a DA-RNN to predict the behavior of stochastic processes. The aim of the work is not only to recover the time traces of the system, but also to infer information on its structure. A central point of this study is that we used classical models of gene regulatory networks to define a stochastic system for the simulation of our case studies. Using the in silico data generation approach, we altered the internal parameters of the system, whose behavior was then reproduced through the DA-RNN. This approach gave us full control over the interpretation of the results, preparing the ground for future applications of the method to real experiments. Indeed, state-of-the-art techniques for the measurement of gene expression, such as RNA-seq, can only provide time traces of gene expression, hiding the relationships among the components of the system. The inference of interactions among genes is a critical goal for biology: understanding the effect of one gene on another allows one to predict the behavior of a gene regulatory network upon perturbations, opening the possibility of designing new therapies to reshape the network. Our work shows that the DA-RNN accurately reconstructed the time traces of genes belonging to different types of gene regulatory networks. In particular, we generated synthetic data from networks with different degrees of connectivity (from fully connected to sparser networks), networks displaying oscillatory behavior or driven by an external signal, and networks with a master regulator gene.
However, looking at the internal parameters of the attention layer of the neural network, we could not fully reconstruct the internal connections among the genes. Using tools from graph theory, we went beyond this lack of interpretability of the neural network parameters: we showed that, by considering the network properties of the input attention matrices, such as the clustering coefficient, the betweenness centrality and the hub score, it was possible to obtain information about the type of gene regulatory network under study. In particular, the clustering analysis showed that these network properties allowed one to distinguish different GRN architectures, with the clustering coefficient reflecting this structure better than the other properties. We also studied the change in prediction accuracy of the neural network when noise was added to the protein concentrations. Performing a similar analysis, we showed that the response of GRNs to noise allowed one to separate different GRN architectures. Moreover, this analysis suggests that the core oscillating genes in oscillatory GRNs were more robust to noise addition than the others, from the point of view of the ability of the neural network to predict their time traces, while, for a network controlled by a master regulator, all the genes responded in a similar way. In this work, to train the DA-RNN, we used all the protein time traces but only one sample for each network.
We already obtained a very accurate prediction of the system dynamics, although the performance might be improved by considering multiple samples, possibly generated with different initial conditions. An interesting application of our method is the analysis of time series produced by high-throughput experimental techniques, such as microarray or RNA-seq data [64,65,66]. In conclusion, we here propose an analysis of gene regulatory networks based on deep neural networks, which exploits the recently introduced DA mechanism and the power of RNNs for time series prediction. We were able to reconstruct the behavior of the system with high accuracy for most of the GRN architectures. Moreover, through a network analysis of the internal parameters of the input attention, we could discriminate the different classes of gene regulatory networks, overcoming the lack of a direct connection between the internal parameters of the DNN and the physical quantities that describe gene interactions. Summing up, our work paves the way to the development of a method for simultaneous time series prediction and analysis of gene interactions from real gene expression data."}
+{"text": "This paper addresses the development of predictive models for distinguishing pre-symptomatic infections from uninfected individuals. Our machine learning experiments are conducted on publicly available challenge studies that collected whole-blood transcriptomics data from individuals infected with HRV, RSV, H1N1, and H3N2. We address the problem of identifying discriminatory biomarkers between controls and eventual shedders in the first 32 h post-infection. Our exploratory analysis shows that the most discriminatory biomarkers exhibit a strong dependence on time over the course of the human response to infection. We visualize the feature sets to provide evidence of the rapid evolution of the gene expression profiles. To quantify this observation, we partition the data in the first 32 h into four equal time windows of 8 h each and identify all discriminatory biomarkers using sparsity-promoting classifiers and Iterated Feature Removal. We then perform a comparative machine learning classification analysis using linear support vector machines, artificial neural networks and Centroid-Encoder. We present a range of experiments on different groupings of the diseases to demonstrate the robustness of the resulting models. Transmission routes of human respiratory virus infections are typically via respiratory droplets that arise as a consequence of speaking, sneezing, and coughing. Such infections include a broad range of pathogens including influenza virus, human rhinovirus (HRV), respiratory syncytial virus (RSV), severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome coronavirus (MERS-CoV) and the novel coronavirus SARS-CoV-2. These transmission mechanisms are exacerbated by the fact that infected subjects may shed a virus even before the onset of symptoms [5]. It is also widely speculated that pre-symptomatic shedding is an important feature of the transmission of COVID-19 [3].
These individuals are capable of infecting large numbers of the population and, therefore, are referred to as \u201csuper-shedders\u201d of a pathogen, or \u201csuper-spreaders\u201d of disease. A number of examples of this phenomenon have been documented, including typhoid, tuberculosis, and measles virus [6]. There is evidence that super-shedders play a pivotal role in the spread of disease, such as in the SARS outbreak of 2002 [7]. There is also emerging evidence of super-shedders in the COVID-19 pandemic [10]. However, little is known about the specific host responses to infection that contribute to shedding. The ability of asymptomatic individuals to shed virus has increased significance given the observation that some of these shedders are responsible for infecting large populations. There is significant evidence that, at least in some cases, the spread of infectious disease can be traced to a small fraction of the population who are typically asymptomatic and shed high volumes of pathogen. The primary aim of this investigation is the development of a predictive model capable of distinguishing pre-symptomatic infected individuals from uninfected controls. This is accomplished via the analysis of host gene expression profiles of blood and the exploration of signatures of shedding before symptom onset. Based on evidence from machine learning analytics, these sample measurements provide significant discriminatory information related to the host immune response soon after infection and significantly before the development of symptoms. Experimental findings suggest that gene expression associated with the immune response changes significantly, even within the first 8 h after infection.
As such, they provide a wealth of quantitative information that can be decoded to reveal discriminative signatures that can be used for predictive models as well as biological discovery. To accomplish our predictive modeling aim, we focus on the identification of signatures that are predictive of shedding within the first 32 h post-exposure. We begin with a visual exploration of data from infected individuals, demonstrating how clearly the host response to disease moves through time and lending visual justification to our premise that the prognosis, i.e., the prediction of the course of a disease, can be determined in the first 32 h. We provide visualizations to support this hypothesis. Next, we implement feature selection algorithms designed to extract discriminatory sets on a variety of dataset groups and time windows. These machine learning algorithms provide evidence that such feature sets are capable of identifying shedders in the first 32 h after exposure. Lastly, we show that these predictive features reflect biologically relevant host responses that may contribute directly to shedding. We analyze microarray data of gene expression profiles of blood samples from individuals at different time points who were infected with HRV, RSV, H1N1 and H3N2 as part of several clinical challenge studies [11]. This data is publicly available on the NCBI Gene Expression Omnibus (GEO) database with identifier GSE73072. The data was normalized using the standard RMA (Robust Multi-array Average) normalization procedure on the entire dataset [12]. We implemented additional strategies for removing batch effects using Limma (LInear Models for MicroArray), including Subject ID and Study ID normalization [13]. The features of the datasets are probe set identifications associated with gene expression. The machine learning experiments were performed using binary classes.
We take the negative class to be the controls and the positive class to be subjects who will at some point in time be symptomatic and shed, i.e., will become clinically positive. A novel tactic employed here is to separate the positive classes by time window. Our analysis begins by using machine learning algorithms to first identify all features capable of discriminating between the control class and the eventual shedders. The net result of the feature extraction described above was Feature Sets 1\u20133, each constructed for the 8 h time intervals over the first 32 h. Feature Set 1 uses only the influenza data in the iterative feature removal (IFR) [14] applied to each time-bin separately; Feature Set 2 is constructed similarly on the combined HRV, RSV, H1N1 and H3N2/DEE5 data, while Feature Set 3 employs H1N1 (both sets), H3N2/DEE5 and HRV (both sets). This seemingly elaborate partitioning was done to explore the ability of different viruses to produce generally discriminatory features. The resulting feature sets are used to perform the machine learning experiments in this paper. Note that the number of control samples is generally a factor of two or more larger than the number of positive samples, so classification performance must be assessed with this imbalance in mind. There is significant experimental evidence that the immune response may be viewed as a sequence of biological manufacturing processes. There is a collection of interacting biological pathways that produce a temporally evolving series of molecular defense mechanisms in an organized cascade. As such, the genes being expressed in these pathways at any given time after the initial infection will be changing as a reflection of this temporal progression of the host response. Understanding the temporal evolution of the gene expression is potentially important for accurate diagnosis and prognosis of patient outcome. A visualization of a selected Reactome pathway [15] illustrates the evolution of the host immune response to H1N1 (Fig. 1).
As an illustration of the temporal evolution of the immune response, we examined the host gene expression of this Reactome pathway. Association of Entrez Gene IDs to microarray probe identifiers was done using the associated microarray platform file, resulting in 94 probes of interest. A standard principal component analysis (PCA) [18] visualization of this gene expression data for all subjects and all time points was generated. A neural network dimension reduction technique [20] was then applied to the same data; in this example, the class labels correspond to the time intervals of the samples. The trajectories for this pathway appear to return towards the nominal state over the course of 24 h. The evolution of the location of the data neighborhoods serves as a geometric characterization of the biological processes. In contrast to PCA, this visualization characterizes the significant temporal variation in the first 24 h after exposure to the H1N1 pathogen. Additional figures offer further views of this early temporal evolution. In summary, this suite of visualizations consistently suggests that there is significant temporal evolution in gene expression that could permit the classification of shedders in the first 32 h. As a group, they provide a compelling body of evidence that the biological response to infection is extremely rapid and, as such, provide visual support for the successful prognosis experiments that follow. In other words, the fact that these low-dimensional visualizations provide clear delineation of the change in gene expression over time serves to validate the machine learning classification results we will present in what follows. Here we present the results of the machine learning algorithms used to determine biological signatures of shedding in the first 32 h. The resulting discriminatory features [14] are used to build classifiers to predict whether a sample is a shedder, or not.
All of the feature selection algorithms in this paper are based on a modification of the iterative feature removal algorithm using sparse support vector machines, as originally described in [14]. The first experiment concerns Feature Set 1, which is obtained by applying time-bin feature selection to the four influenza data sets. We perform a LOSO classification on the same data, the influenza studies, using these features. Since the discriminatory features were identified on the same data sets that are then predicted in this experiment, we view this analysis as retrospective, and tabulate the resulting classification rates. The next task is to see how the feature sets generalize to prospective data sets. With this in mind, we apply Feature Set 1, determined on the influenza data, to the problem of prognosis of the respiratory infections RSV and HRV at different time bins. This classification task is again the discrimination of controls from eventual shedders. Our first test is to use the RSV/DEE1 and HRV/Duke, HRV/UVa data sets individually as prospective datasets, and we tabulate the results of this experiment. We then explore the impact of merging these HRV and RSV studies to create larger data sets. It is also interesting to explore the predictive value of our models; recall that the positive predictive value (PPV) is the probability that subjects classified as positive actually are positive. In our final experiment for predicting controls from pre-symptomatic shedders, we apply Feature Set 2 to the prediction of H3N2/DEE2 influenza. The dataset H3N2/DEE2 was sequestered from the feature selection, i.e., it is prospective. Feature sets were found using IFR on the data without H3N2/DEE2, with Limma batch effect removal using Study ID applied before feature selection. We then compare the performance of the prediction on H3N2/DEE2 using the selected features. In the next section, we explore the effect of pooling features.
In other words, rather than restricting classification to bins of 8 h intervals, we broaden the bins successively until we collect features from all four time bins as a single feature set. In this section, we explore the effectiveness of pooling the feature sets determined as optimal for individual time intervals. In doing this, we gain additional insight into the changes of the host response at different stages of the infection. Combining features over time indicates how useful the union of discriminatory features is to the overall prognostic question. In the first 8 h, the classification is computed exactly as above, using features identified as optimal for the time bin 1\u20138 h. The classification in the second 8 h interval uses the features identified as optimal in the first 8 h combined with those identified as optimal in the second 8 h period, or time bin 9\u201316. We refer to this combined interval using the starting and ending time of the included features; e.g., the union of features from bin 1\u20138 and bin 9\u201316 is referred to as bin 1\u201316. Proceeding in this fashion, the bin labeled 1\u201332 includes features that comprise the union of all the features that were identified as optimal in each of the first four 8-h time intervals. Classification results are provided on data selected from these time-bin intervals, where the features are pooled as described above. In this feature pooling experiment, we classify the H3N2/DEE2 data set alone using features from the combination of the H1N1, H3N2/DEE5, and HRV studies; we have omitted RSV given its known dissimilarities with influenza. We refer to the features drawn from these particular studies as Feature Set 3. In the previous section, we addressed the question of whether biomarkers from different time bins had discriminatory power in other time bins. Here we look at the feature sets themselves and analyze any overlap between features selected for different time bins.
The Jaccard similarity metric of two sets is the number of elements in the intersection of the two sets divided by the number of elements in the union of the two sets [21]; for example, the diagonal of the resulting similarity matrix consists entirely of ones, since each feature set overlaps completely with itself. Analyzing where the areas of overlap occur, several correspond to different feature selection experiments for the same time bin; for example, Feature Set 3 (with Limma using study ID) at time bin 1\u20138 has overlap with Feature Set 2 (with Limma using study ID) at the same time bin 1\u20138. This is unsurprising since the feature sets were drawn from overlapping data sets, but it is worth further consideration whether the differing Limma techniques should have resulted in such low overlap. The other areas of overlap, which have to do with our analysis of time-dependent features, all occur between the first time bin, hours 1\u20138, and the last, hours 25\u201332. These are minimal but suggest a possible circadian rhythm influence or a similar cyclical nature of features selected from pre-symptomatic shedders. Finally, we note the lack of overlap between feature sets from time bins 1\u20138, 9\u201316, and 17\u201324. This indicates that the features being drawn are in fact time dependent. Besides not improving classification on different time bins, as addressed in the previous section, features drawn in a particular time bin to maximize discrimination between pre-symptomatic shedders and controls have little to no overlap with those in other time bins, reinforcing the conclusion that time plays a pivotal role in identifying pre-symptomatic shedders. This validates the feature sets from a modeling perspective. Moving to the next section, we address the question of whether the feature sets themselves are associated with pathways and genes involved in immune response and viral shedding. Now that the validity of the features has been assessed through modeling, we assess their validity as biological features associated with viral immune response.
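The Jaccard computation described above is simple enough to sketch directly; the probe-set identifiers below are hypothetical placeholders, not actual selected features.

```python
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two feature sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Hypothetical probe-set IDs selected in two different time bins.
bin_1_8 = {"p201", "p305", "p442", "p509"}
bin_25_32 = {"p305", "p509", "p777"}

sim = jaccard(bin_1_8, bin_25_32)  # 2 shared / 5 total = 0.4
```

Computing this value for every pair of feature sets yields the similarity matrix whose diagonal is identically one.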
To assess the biological significance of the classification signature, we calculated fold gene expression in shedders relative to controls and assessed functional enrichment using Ingenuity Pathway Analysis (IPA) on Feature Set 2. Viral shedding was associated with an early perturbation in canonical pathways related to cell cycle regulation, as well as suppressed inflammation and stress responses. To further investigate the functional implications of these enriched pathways, we used the IPA Upstream Analysis module to identify the predicted activation state of key pathway regulators [26]. Although virus shedding was not specifically associated with sex in these studies, we used the IPA Molecule Activity Prediction analysis module to investigate the relationship between several significantly enriched functional categories in the classifier signature and estrogen secretion. For classification we used sparse SVMs, artificial neural network (ANN) classifiers and the supervised data reduction algorithm Centroid-Encoder. SVMs have the advantage that they are convex optimization problems and hence are fast and produce solutions that are globally optimal. The sparse penalty for SVM allows one to select features in the challenging setting where the number of variables (approximately 20,000) is significantly larger than the number of data points. Finally, the experiments in this investigation could benefit from more samples.
There have recently been additional sequencing studies on human clinical blood samples related to the host response to infection by respiratory viruses that will serve to enhance and validate the work presented here. In all experiments we employed leave-one-subject-out (LOSO) cross validation and repeated the trials 15 times using both Subject ID and Study ID. The workflow of our machine learning experiments follows these steps: data partitioning; data normalization; feature selection using iterative feature removal, a technique based on sparse support vector machines; classification on retrospective and prospective data using SVM, ANN and CE classifiers; evaluation of classification accuracy using the balanced success rate (BSR); and functional analysis using Ingenuity Pathway Analysis (IPA). Feature Set 1: IFR applied to four influenza data sets including all H1N1 and H3N2 samples. The feature counts were 136, 38, 200 and 60 for the time bins normalized with Subject ID, while the feature counts were 300, 97, 500 and 200 for the time bins normalized by Study ID. Feature Set 2: IFR applied to H1N1 (both sets), H3N2/DEE5, HRV (both sets), and RSV. All the feature counts for Feature Set 2 were of size 200, with the exception of the time bin 25\u201332 h, which had 130 features for Subject ID normalization and 145 features for Study ID normalization. Feature Set 3: IFR applied to H1N1 (both sets), H3N2/DEE5, and HRV (both sets). All the feature counts for Feature Set 3 were of size 200. Each feature set has subsets associated with the time bins. Many of the bin sizes were determined by an arbitrary capping of the number of features when there was no clear cut-off; this ad hoc approach did not seem to significantly impact the results. Here we summarize how the data was partitioned for the various experiments: we computed features on each of the four time bins in the first 32 h using IFR to produce the feature sets above.
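The balanced success rate (BSR) referenced in the workflow above is commonly defined as the mean of the per-class accuracies, which is robust to the control/shedder imbalance noted earlier; a small sketch under that assumption:

```python
def balanced_success_rate(y_true, y_pred):
    """Balanced success rate (BSR): mean of the per-class accuracies."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(per_class)

# Toy example: 6 controls (0), 3 shedders (1).
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 0]
bsr = balanced_success_rate(y_true, y_pred)  # (5/6 + 2/3) / 2 = 0.75
```

Unlike raw accuracy, a classifier that labels everything as the majority control class scores only 0.5 under this metric.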
All microarray data is normalized at the beginning using a typical RMA normalization method [12]. The data utilized is incorporated from multiple studies, and so batch effects are inevitable due to a variety of factors, including study location, the year and time of year of the experiments, and the procedures for processing samples for their expression values, to name a few. We employ a simple linear normalization process using either Study ID or Subject ID and the well-known Limma process [13]. Additional details related to normalization are in the Supplementary Materials. Each sample has approximately 22,000 microarray probe set identification components, or features. Using entire feature sets tends to induce overfitting and poor generalization, so our work emphasizes data reduction through optimal feature selection. Our philosophy is rooted in the idea of extracting all discriminatory features, and so we use the iterative feature removal (IFR) procedure developed in [14]. The first step in the feature selection process is to identify the classes that we are proposing to discriminate. In this paper we limit the scope of the machine learning to the pre-symptomatic shedders versus pre-infection samples, or controls. Further, we partition the data by 8 h time windows, so an example of a machine learning experiment in this paper considers controls versus shedders in time bin 1\u20138 h. Given these two classes, the next step is to identify a minimal set of discriminatory features using a sparse support vector classifier on a training data set, informed by the balanced success rate (BSR) accuracy on a validation set. These features are then removed and a new sparse classifier is trained, again observing the error on the validation set as a stopping criterion.
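The iterative removal loop described above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: it uses an l1-penalized squared-hinge classifier trained by proximal gradient steps in place of their sparse SVM solver, and it omits the validation-set stopping rule in favor of a fixed round count. All data are synthetic.

```python
import numpy as np

def l1_svm(X, y, lam=0.1, lr=0.01, iters=2000):
    """Sparse linear classifier: squared-hinge loss with an l1 penalty,
    trained by proximal (ISTA) gradient steps. y must be in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = 1.0 - y * (X @ w)
        active = margins > 0
        grad = -2.0 * (X[active].T @ (y[active] * margins[active])) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

def iterative_feature_removal(X, y, n_rounds=3, lam=0.1):
    """Repeatedly fit a sparse classifier, record the features it selects,
    then mask them out and refit to uncover further discriminatory sets."""
    remaining = list(range(X.shape[1]))
    feature_sets = []
    for _ in range(n_rounds):
        w = l1_svm(X[:, remaining], y, lam=lam)
        picked = [remaining[j] for j in np.nonzero(np.abs(w) > 1e-6)[0]]
        if not picked:
            break
        feature_sets.append(picked)
        remaining = [j for j in remaining if j not in picked]
    return feature_sets

# Synthetic data: features 0 and 1 are informative, the rest are noise.
rng = np.random.default_rng(1)
y = np.repeat([-1.0, 1.0], 40)
X = rng.normal(size=(80, 10))
X[:, 0] += 1.5 * y
X[:, 1] += 1.0 * y

sets = iterative_feature_removal(X, y)
```

By construction, each round can only select features not already removed, so the informative features appear in the first set and nowhere later.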
This process is repeated until the BSR on the validation set falls below a tolerance. As part of the IFR feature selection process, we use a stratified 4-fold cross validation, with the test balanced success rate cutoff set to 0.5, with 50 repetitions, and we limit the number of iterations of IFR to 70. In any given model we retain the features selected until there is a weight ratio that exceeds 5 or a weight magnitude drops below 1e\u22126. For each data partition we cap the number of features that can be selected at 80% of the size of the training data. Additional details are described in the Supplementary Materials. For classification we use linear [30] and sparse [33] support vector machines (SVMs). In this paper we use linear SVMs for building classification models based on the features determined by sparse SVM (SSVM) Iterative Feature Removal [14]. The classical linear SVM objective is to minimize (1/2)||w||^2 + C sum_i xi_i subject to y_i(w^T x_i + b) >= 1 - xi_i and xi_i >= 0 [34]. Note that SVMs in various forms have a long history related to the analysis of gene expression data [35]. Artificial Neural Networks: we apply a standard feed-forward neural network trained with one-hot encoding to learn the labels of the training data. In all the classification tasks, we used two hidden layers with 200 ReLU activations in each layer. We used the whole training set to calculate the gradient of the error function (cross-entropy) while updating the network parameters using Scaled Conjugate Gradient descent (SCG) [36] during error backpropagation. Centroid-Encoder: this is a variation of an autoencoder which can be used for both visualization and classification purposes. Consider a data set with N samples and M classes. The mapping f is composed of a dimension-reducing mapping g (encoder) followed by a dimension-increasing reconstruction mapping h (decoder). The output of the encoder is used as a supervised visualization tool, and attaching another layer to map to the one-hot encoded labels and further training by fine-tuning provides a classifier.
For further details, see [19]. We use SCG [36] to update the network parameters during error backpropagation. Once a collection of features has been selected using sparse support vector machine based iterative feature removal, we build classifiers using linear support vector machines, Artificial Neural Networks and Centroid-Encoder. We describe each of these methods briefly and direct the reader to the references for further details. Normalized array data was used to calculate the expression of classifier genes from shedders relative to control samples. Log2 expression ratios were uploaded to Ingenuity Pathways Analysis (QIAGEN Bioinformatics) and analyzed using the IPA Core Analysis function. No expression or significance thresholds were applied to the classifier genes. Supplementary Information 1."}
+{"text": "Time series transcriptome data can help define new cellular states at the molecular level, since the analysis of transcriptional changes can provide information on cell states and transitions. However, existing methods for inferring cell states from transcriptome data use additional information, such as prior knowledge on cell types or cell-type-specific markers, to reduce the complexity of the data. In this study, we present a novel time series clustering framework to infer TRAnscriptomic Cellular States (TRACS) only from time series transcriptome data by integrating Gaussian process regression, shape-based distance, and a ranked pairs algorithm in a single computational framework. TRACS determines patterns that correspond to hidden cellular states by clustering gene expression data. TRACS was used to analyse single-cell and bulk RNA sequencing data and successfully generated cluster networks that reflected the characteristics of key stages of biological processes. Thus, TRACS has the potential to help reveal unknown cellular states and transitions at the molecular level using only time series transcriptome data. TRACS is implemented in Python and is publicly available. In particular, transcriptome data contain cell-specific information. In humans and other organisms, nearly every cell contains the same genes, but different cells show different patterns of gene expression. These differences are responsible for the many different properties and behaviours of various cells and tissues, both in health and disease [2]. Since cells make transitions over time, time series transcriptome data can be useful for predicting transitions of cell states as well. Liu et al.
used Global nuclear Run-On sequencing (GRO-seq), RNA sequencing (RNA-seq), and histone-modification Chromatin ImmunoPrecipitation sequencing (ChIP-seq) to reveal the lag between transcription and steady-state RNA expression and to identify dynamic transcriptional signatures across the cell cycle, such as a large amount of active transcription during early mitosis [3]. van Galen et al. used single-cell transcriptome data and genetic mutation information on the relationships among cell types to analyse acute myeloid leukemia (AML) heterogeneity, which resides within a complex microenvironment that complicates efforts to understand the contribution of different cell types to disease progression [4]. Leveraging the power of single-cell data and cell type information, Gr\u00fcn et al. developed VarID, a computational method that identifies locally homogenous neighbourhoods in cell state space and reveals pseudo-temporal dynamics of gene expression variability [5]. Although these studies are successful in inferring cell states using bulk RNA-seq or single-cell RNA sequencing (scRNA-seq) data, information in addition to transcriptome data is required. The emergence of high-throughput technologies has enabled a large number of cellular parameters to define a cell state, including mRNA, histone modifications, DNA modifications and cell surface proteins. In the next section, we discuss how much information transcriptome data can provide for inferring cell states and transitions. Feasibility of defining cell states: Recent studies showed that sophisticated computational analysis can infer the states of cells and their transitions from transcriptome data. Analysis of scRNA-seq data typically arranges cells in \u2018pseudotime\u2019 by their gene expression profiles to add the time dimension to the RNA-seq data for tracking the trajectory of a biological transition, which suggests that it is possible to define transitions of cell states from transcriptomes when the time domain is defined [7].
In general, clustering of cells is incorporated as an initial step to guide trajectory inference for scRNA-seq data [8]. However, prior knowledge is required to predict the order of the clusters and to assign the cell type to each cluster. For example, a study on acute myeloid leukemia (AML) performed trajectory analysis on scRNA-seq data and re-clustered the results to classify the cells into 15 cell types using pre-defined cell-type-specific genes [4]. Some of the algorithms for analysing scRNA-seq data try to decode the directionality of trajectories as well [8]. These algorithms, however, are designed to order cells undergoing differentiation, such as stem cells, which limits the range of applicable data [11]. In addition, clustering cells does not identify the features (genes) that contribute to defining cell states [12]. When time series transcriptome data are used, grouped genes and their activated time points represent key stages of a biological process, such as cell cycle phases [3] or cell types during cell differentiation [4]. One of the widely used methods to detect the gene sets characterizing cell states is clustering gene expression patterns. Chang et al. performed hierarchical clustering on a time series mRNA profile in adenocarcinomic human alveolar basal epithelial cells and detected three stages in the epithelial-mesenchymal transition (EMT) from the clustering result visualized as a heatmap [13].
Clustering genes from gene expression data enables identifying marker genes for each cell state, but the requirement of prior knowledge, such as the number of cell states, can significantly affect the accuracy of clustering. On the other hand, defining cell states and transitions is possible using bulk RNA-seq data, given that the data are measured in specific conditions with uniform cell populations, such as a cyclic process where cells are synchronized, or a developmental process or perturbation response with a common starting point. Research question: We assume that time series transcriptome data itself has sufficient information to predict cell states and their transitions, as described in the previous section. With the availability of a novel framework that uses state-of-the-art computational methods for analysing time series transcriptome data, can we predict both cell states and their transitions without using additional information? In this study, we demonstrate that our proposed computational framework with state-of-the-art clustering algorithms can determine cell states and transitions from bulk-cell or single-cell time series transcriptome data without using any additional information. In the following section, we review clustering algorithms that can be used to analyse time series transcriptome data.
Clustering algorithms on gene expression data can detect distinctive expression patterns of genes, and most of the algorithms aim to improve accuracy by considering time-to-time dependency in expression values. Traditional clustering methods, such as K-means and hierarchical clustering, have been used for many applications and have shown high performance on general clustering problems. However, these methods treat time series observations with N time points as N-dimensional vectors, with an assumption that the time points are evenly sampled and independent. STEM [14] is a tool for clustering short time series gene expression data. STEM predefines model profiles over the N time points, assuming that expression values increase, decrease or remain consistent from the previous time point. Due to the large number of predefined profiles, STEM is limited to short time series with 8 time points or fewer. K-Shape [15] is an adapted clustering algorithm for time series based on the K-means algorithm. It uses a new distance measure called shape-based distance (SBD), clustering similar shapes shifted along the time axis. K-Shape is useful in datasets where similar patterns with different starting points need to be clustered together, but is not appropriate when the patterns of different clusters themselves are similar. Similar to K-means, the number of clusters must be given to K-Shape by the user.
Some clustering algorithms assume that the observed gene expression values are inherently generated by an underlying model, such as a Gaussian process (GP) or a state-space model. A GP is a stochastic process with a collection of random variables indexed by time or space, where any finite collection of the variables follows a multivariate normal distribution. GP regression incorporates time point information, which can be an informative source of time dependency and noise correction, especially for datasets with uneven sampling rates such as developmental processes and perturbation responses. The Bayesian hierarchical clustering (BHC) algorithm [16] uses GP models and their posterior probabilities for agglomerative hierarchical clustering, with an assumption that gene expression values in the same cluster are generated by one GP. BHC automatically detects the optimal number of clusters by Bayesian model selection. However, BHC is suitable for detecting small groups of genes, not a general landscape of cellular states, as it tends to generate a relatively large number of gene clusters due to its bottom-up strategy. GPclust [17] introduces a three-level hierarchical structure (cluster-gene-replicate) of GP models, and each cluster GP is generated by a Dirichlet process (DP). DPGP [18] uses DP and GP models as well, but the number of clusters is automatically set by a model similar to the Chinese restaurant process with a hyperparameter. ClusterNet [19] introduces a state-space model to clustering, assuming the observed expression values are generated by hidden state variables. K-dimensional hidden variables that represent K clusters depend linearly on the previous values of all K variables, which helps to infer regulatory relationships \u2018between\u2019 clusters, unlike the other algorithms. ClusterNet can yield a number of false positive edges due to the limitation of using only gene expression values for network inference. Nevertheless, ClusterNet suggests that the order of and relationships between clusters can provide new knowledge. Although the existing clustering methods try to detect distinctive expression patterns from time series data, they are not designed for inferring cell states. To infer cell states and transitions from bulk transcriptome data, a clustering algorithm should consider long time series data including unobserved time points, and data with temporal dependency. TRACS addresses this as follows. Selecting the number of clusters: TRACS automatically determines the optimal number of clusters with adapted gap statistics that leverage time point information and consider time-to-time dependency (Gaussian process). Clustering time series: by incorporating the adapted gap statistics into clustering, TRACS predicts gene clusters based on temporal patterns of genes adjusted by Gaussian process regression; we will show that the predicted gene clusters correspond to cell states in our experiments. Analysis of the clustering result: TRACS infers the transition of cell states as a cluster network, predicting the order of clusters by their pattern similarity and reducing false positive edges by functional similarity (two-group enrichment test). 
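As a concrete illustration of cluster-level Gaussian process regression of the kind described above, the following sketch uses Scikit-learn. The toy cluster, kernel choice and length-scale are illustrative assumptions, not the configuration of any of the cited methods:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy cluster: 5 genes measured at 6 unevenly spaced time points (hours).
rng = np.random.default_rng(0)
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
true_mean = np.sin(t / 5.0)
cluster = true_mean + rng.normal(scale=0.1, size=(5, t.size))

# Stack all (time, expression) pairs so one GP models the shared
# cluster mean; the WhiteKernel term absorbs observation noise.
X = np.tile(t, 5).reshape(-1, 1)
y = cluster.ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, y)

# Posterior mean and std on a grid, including unobserved time points.
grid = np.linspace(0, 16, 9).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
print(mean.shape, std.shape)  # (9,) (9,)
```

The posterior variance at unobserved times is what lets a GP-based method reason about uneven sampling rates.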
The predicted number of clusters is expected to reflect the number of cell states. In addition, when inferring relationships between clusters, functional analysis such as enrichment tests needs to be incorporated to filter out false positive relationships derived from expression patterns. Thus, we propose a novel time series clustering algorithm that infers TRAnscriptomic Cellular States (TRACS) in two steps. Gaussian gap statistics (Step 1): time series gene expression data are first clustered by a user-selected algorithm such as K-means. Assuming that the gene expressions in k clusters are generated from k Gaussian processes, Gaussian gap statistics calculate a gap between the likelihood of the k Gaussian processes fitted to the observed data and to reference data that are randomly generated with no significant cluster formation. The optimal number of clusters k is selected from these gap values. During Gaussian process regression for each cluster, the time point information of the gene expression data is used to infer Gaussian means and variances at given time points, considering time-to-time dependency. If a pathway of interest is given by the user, TRACS filters out clusters that are not enriched with pathway genes by statistical tests. With the clustering result from Step 1, a network of clusters is generated and visualized to track the dynamic expression patterns of clusters and their relationships (Step 2). For each pair of clusters, the shape-based distance (SBD) is calculated, and a shift between two clusters that minimizes the SBD is identified. The sign (+/-) of the optimal shift determines the order of the two clusters, and all pairwise orders are combined to obtain the overall order of clusters. Functional similarity between neighbouring clusters is tested by a two-group enrichment test to identify a shared function activated over time. An overview of the proposed framework is illustrated in Fig. 4. 
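The pathway-enrichment filtering step described above can be illustrated with a standard one-sided hypergeometric test. The gene names, pathway size and threshold below are made-up assumptions, and `scipy.stats.hypergeom` merely stands in for whatever implementation the authors used:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(cluster_genes, pathway_genes, background):
    # P(X >= k): probability of drawing at least k pathway genes when
    # sampling |cluster| genes from the background without replacement.
    cluster, pathway = set(cluster_genes), set(pathway_genes)
    N = len(background)                  # total genes in the background
    K = len(pathway & set(background))   # pathway genes in the background
    n = len(cluster)                     # cluster size (number of draws)
    k = len(cluster & pathway)           # pathway genes in the cluster
    return float(hypergeom.sf(k - 1, N, K, n))

# Hypothetical background of 100 genes; a 10-gene pathway; a 10-gene
# cluster that happens to contain 5 pathway genes.
background = [f'g{i}' for i in range(100)]
pathway = background[:10]
cluster = background[:5] + background[50:55]
p = enrichment_pvalue(cluster, pathway, background)
print(p < 0.01)  # True: 5 of 10 pathway genes in a 10-gene cluster is enriched
```

A cluster whose p-value exceeds the user's threshold would be filtered out of the downstream network.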
The performance of TRACS was evaluated with three sets of gene expression data from scRNA-seq and bulk RNA-seq. We used scRNA-seq data from a study on AML [20]. ScRNA-seq was carried out using the Seq-Well protocol to acquire transcriptional data from bone marrow (BM) aspirates. 6915 cells from healthy donors characterized the baseline cellular diversity in BM, and the authors distinguished 15 different hematopoietic cell types. The putative differentiation trajectories were inferred from the gene expression similarities of cells, including a continuum of cells from hematopoietic stem cells (HSCs) to monocytes. The authors defined three successive stages of normal hematopoietic development (HSC/Prog, GMP and differentiated myeloid) that correspond to five cell types (hematopoietic stem cell (HSC), progenitor (Prog), granulocyte-macrophage progenitor (GMP), promonocyte (ProMono) and monocyte (Mono)), as they show clearly distinguished expression patterns. To generate time series gene expression data from the scRNA-seq data, 2317 cells from the five cell types were sampled and the average gene expression values of each cell type were calculated, leading to 5 time points in the order HSC-Prog-GMP-ProMono-Mono. Following the procedure in the original paper, the most variably expressed genes were determined and 21 cell-type-specific genes were added. After removing genes with zero expression values in more than four time points, the final gene expression data consisted of 360 genes. The original dataset was retrieved from the Gene Expression Omnibus (GEO) database (GSE116256).
Two sets of bulk RNA-seq time series data represent a cyclic process and a developmental process, respectively. Cho et al. measured genome-wide mRNA transcript levels during the cell cycle of the budding yeast Saccharomyces cerevisiae. Cdc28-13 cells were collected at 17 time points taken at 10 min intervals, covering nearly two cell cycles. Gene expression data of 6149 genes were downloaded from the Saccharomyces Genome Database (SGD) [21], among which 220 genes were characterized for each cell cycle phase by the authors according to their transcript levels and biological functions. The dataset from this study has been used as a benchmark for clustering time series, as there are few time series datasets with cluster labels. Chang et al. observed time series mRNA profiles in A549 cells after TGF-\u03b2 treatment [22]. The study reported a three-state model, including a partial-EMT state between the epithelial and mesenchymal states, and 1,632 genes corresponding to the three states [13]. Gene expression data were downloaded from Gene Expression Omnibus (GEO) (GSE69667). For all datasets, expression values were log- and z-normalized before clustering.
In the AML experiment, the three stages of hematopoietic development detected by TRACS were the main signature that differentiates normal cells of healthy donors from malignant cells of AML patients. 20 out of the 21 cell-type-specific genes clustered correctly to their stages are shown below the network. In particular, CD34 is a predominant marker of HSC and hematopoietic progenitor cells that can represent the HSC/Prog state, and CD14 is a typical blood monocyte marker expressed on cells of the myelomonocyte lineage [24]. To demonstrate the stability of the proposed method, additional gene expression data were generated with an increased number of time points by dividing cells from the three cell types with large populations. In the experiment on the extended data with eight time points, TRACS generated three clusters that represent the three stages of hematopoietic development, with a shared pathway identified between the GMP and differentiated myeloid stages. 
Antigen processing capacity is known to be induced through the differentiation of BM cells starting from GMP to monocytes and monocyte-derived dendritic cells, which is consistent with the TRACS result [25]. In the yeast cell cycle dataset, 31, 81, 44, 31 and 34 labelled genes belong to the early G1, late G1, S, G2 and M phases, respectively. As the cell cycle dataset contains only 220 genes with labels, which is a relatively small number compared to the total number of genes in yeast, two experiments were designed to evaluate the performance of TRACS using DEGs and the 220 labelled genes. The number of genes and the number of clusters used in the two experiments are summarized in a Supplementary Table. An evaluation score [26] is calculated to check whether DEGs with the same predefined label (5 cell cycle phases) are clustered together (Supplementary Table). The pathway of interest was set to cell cycle (sce04111) with a threshold of 0.1, and a cluster network was generated by TRACS. In the second experiment, the 220 genes that are functionally characterized for each cell cycle phase were used to generate a cluster network [27]. TRACS identified cell division cycle 6 (CDC6) and every component of the MCM2-7 replicative helicase complex except MCM6. The MCM2-7 complex is known as a component of the pre-RC that is loaded onto DNA by CDC6 in G1 phase and activated for DNA unwinding [28]. The result implies that TRACS was able to capture the pre-replicative state of cells in G1 phase. The DNA replication checkpoint prevents the accumulation of DNA damage, such as replication blocks or damaged DNA templates, and the checkpoint signal in turn promotes G1-S phase transcription [29]. Checking for DNA damage from replication continues through the intra-S-phase checkpoint [30] and the G2 checkpoint [31], which corresponds to the functional relationship between clusters suggested by the network. Lipid biosynthesis is also known to coordinate with the cell cycle. 
Inhibition of fatty acid synthesis induces a cell cycle delay at early G1 phase, and a commitment point monitoring the synthesis of lipids is expected to be at the late G1 phase [32], which explains why pathways related to fatty acid biosynthesis are enriched in the early G1-late G1 clusters. Finally, sugar is the most important nutrient for yeast, and the storage of carbohydrates is under cell-cycle control. Storage carbohydrates rise to high levels in the early G1 phase and decrease in late G1 by liquidation to glucose, slowly growing again after S phase [33]. TRACS predicted pathways that are activated in two neighbouring cell cycle phases: DNA replication (early G1-late G1), DNA mismatch repair (late G1-S-G2), fatty acid biosynthesis (early G1-late G1) and carbohydrate metabolism (S-G2 and M-early G1). It is known that the formation of the pre-replication complex (pre-RC) occurs during G1 phase and is required for the appropriate initiation of DNA replication in the subsequent S phase. For the EMT dataset, the result is consistent with the GO enrichment test result in the original paper, which showed no common function between the three groups of genes. Specifically, genes actively transcribed in the epithelial state were enriched for cell cycle processes, while the mesenchymal state was characterized by genes related to cell adhesion. Meanwhile, the partial-EMT state was marked by genes associated with cell motility. Given the results of the GO enrichment test, it is understandable that TRACS assigned the ECM-receptor interaction pathway (hsa04512) as the only common function between the partial-EMT and mesenchymal clusters, because the interaction between cells undergoing EMT and extracellular matrix (ECM) proteins is known to regulate EMT processes including cell adhesion and migration [34]. For example, certain ECM proteins, such as type I collagen, are known to facilitate the EMT process through integrin signalling and disrupt cell-cell adhesions. 
Furthermore, mesenchymal-like cells migrate along the type I collagen matrix [35]. In summary, TRACS generated a cluster network for the EMT dataset with three most distinct clusters that represent the epithelial, partial-EMT and mesenchymal states, based on gene expression data with eight time points. Currently, different numbers of partial-EMT states in various cancer cell lines have been characterized; with data covering more time points, such as single-cell expression data from EMT, TRACS is expected to help reveal more concrete dynamics of the EMT process with multiple partial-EMT states. The performance of TRACS is evaluated differently for the scRNA-seq data and the two bulk RNA-seq datasets. As the scRNA-seq dataset does not provide cluster labels for every gene, the accuracy of clustering cannot be measured with evaluation scores such as the F1 score. Therefore, the result was compared with biclustering algorithms to visually inspect the gene expression patterns of clusters. Biclustering is a method for clustering the rows and columns of a data matrix simultaneously. With time series gene expression data, biclustering algorithms can be used to generate a gene set related to a certain cluster of time points that might represent a potential cellular state. The cell cycle and EMT datasets, which provide true cluster labels of genes, are evaluated with the accuracy of clustering measured by the inferred number of clusters and evaluation scores. To see how accurately the Gaussian gap statistics using Gaussian process likelihood can infer the number of clusters, the optimal number of clusters inferred by TRACS is compared with that from the original gap statistics using Euclidean distance. The clustering performance of TRACS is compared with the performance of six algorithms for time series clustering, with PCA applied before clustering. The original gap statistics [45] define the \u2018gap\u2019 as a difference of within-cluster variance calculated from observed data and from a reference null distribution. 
The optimal number of clusters is the smallest k for which the gap is within one standard deviation of the global maximum gap [46]. Assume that the gene expression of one gene is represented as a vector x_i. The within-cluster variance W_k for k clusters is defined as a sum of squared Euclidean distances between all pairs of expression vectors from each cluster C_r, which can equivalently be calculated as a sum of squared distances between all data points in the r-th cluster and the cluster mean mu_r: W_k = sum_{r=1..k} (1 / (2 n_r)) sum_{i,j in C_r} ||x_i - x_j||^2 = sum_{r=1..k} sum_{i in C_r} ||x_i - mu_r||^2. We propose Gaussian gap statistics, which use the likelihood of Gaussian processes (rather than Euclidean distances) as the metric for within-cluster variance to reflect temporal dependency in gene expression. We assume that the gene expressions in a cluster are generated from one Gaussian process, with the cluster mean modelled by Gaussian process regression, and consider k being increased by one when a cluster is divided into two clusters. When the data points are divided into two smooth Gaussian processes and become closer to the new confidence intervals, the likelihood increases. The likelihood decreases when an existing cluster is not properly divided into two clusters as k increases: the Gaussian process of a poorly divided cluster has larger variances from regression, because the scatteredness of the data points generates an improbable fluctuation. A change in the variances of a Gaussian process results in a decrease of the likelihood, which might not be captured with the Euclidean distance measure. Gap statistics are thus used to estimate the optimal number of clusters K. Gaussian process regression for each cluster is performed with the Scikit-learn Python package [47], using a radial-basis function kernel with length-scale l and a noise variance term. If a pathway of interest is given [48], TRACS performs an enrichment test on the cluster genes and filters out clusters that are not enriched with pathway genes based on the given threshold of P-value. The filtering process is helpful when the number of genes used in the analysis is large, such as when using differentially expressed genes (DEGs) detected from gene expression analysis, which are expected to include genes related to other biological pathways activated simultaneously with the pathway of interest. 
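A minimal version of the original (Euclidean) gap statistic that the Gaussian gap statistics build on can be sketched as follows; the synthetic data, number of reference draws and K-means settings are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def within_cluster_variance(X, labels):
    # W_k: sum of squared distances of points to their cluster mean,
    # equivalent to the pairwise form divided by 2 * n_r per cluster.
    return sum(((X[labels == r] - X[labels == r].mean(axis=0)) ** 2).sum()
               for r in np.unique(labels))

def gap_statistic(X, k, n_ref=10, seed=0):
    # gap(k) = mean_ref[log W_k(reference)] - log W_k(observed), with the
    # reference data drawn uniformly over the observed range, i.e. with
    # no cluster structure.
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).labels_
    log_wk = np.log(within_cluster_variance(X, labels))
    ref_logs = []
    for _ in range(n_ref):
        ref = rng.uniform(X.min(axis=0), X.max(axis=0), size=X.shape)
        ref_labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(ref).labels_
        ref_logs.append(np.log(within_cluster_variance(ref, ref_labels)))
    return float(np.mean(ref_logs) - log_wk)

# Two well-separated groups of expression profiles: the gap at k=2
# should clearly exceed the gap at k=1.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (30, 4)), rng.normal(3.0, 0.1, (30, 4))])
gap1, gap2 = gap_statistic(X, 1), gap_statistic(X, 2)
print(gap2 > gap1)  # True
```

The Gaussian variant described in the text would replace `within_cluster_variance` with a Gaussian process likelihood per cluster.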
Shape-based distance (SBD) is a distance metric developed for time series that considers the shifted patterns of two series [15]. Among all possible shifts m, SBD selects an optimal shift where the cross-correlation between the two series is maximized. The sign (+/-) of the optimal shift determines which cluster precedes the other, and the order of each pair of clusters can be used to derive the overall order of clusters by the ranked pairs algorithm [49]. The ranked pairs algorithm was developed for an electoral system to create a sorted list of winners from votes comparing each pair of candidates. Using a sequential order of clusters, we can reduce the number of candidate edges of the cluster network. Once a cluster network is constructed using SBD, the edges between clusters are annotated in terms of the functional similarity between the groups of genes in the two clusters. The statistical test measures the significance of pathways being assigned to clusters. First, we perform a hypergeometric test on every pair consisting of a cluster and a pathway. The hypergeometric distribution is used to model the behaviour of drawing objects (pathway genes) from a bin (cluster); N is the total number of genes, and the random variable X of the hypergeometric distribution represents the number of pathway genes drawn. We define an edge between two clusters i and j if the union of clusters i and j is enriched in a pathway p and the enrichment p-value becomes lower when the two clusters are merged, excluding small pathways with 10 or fewer genes. This indicates that the ratio of genes relevant to a pathway (function) becomes more significant when the two clusters are merged. Cluster genes that meet the criterion are divided into two clusters due to their different expression patterns, but they might have activated a same pathway together through the interaction propagation from previously activated genes to the others over time. To evaluate clustering accuracy, assume that we compare a ground truth class assignment X with the assignment Y from the clustering algorithm. 
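The shape-based distance and the signed optimal shift can be sketched with NumPy cross-correlation. This is a simplified, coefficient-normalized variant for illustration, not the exact K-Shape implementation:

```python
import numpy as np

def sbd(x, y):
    # Shape-based distance: 1 minus the maximum normalized
    # cross-correlation over all lags; also return the optimal lag.
    cc = np.correlate(x, y, mode='full')        # lags -(n-1) .. (n-1)
    ncc = cc / (np.linalg.norm(x) * np.linalg.norm(y))
    shift = int(np.argmax(ncc)) - (len(y) - 1)  # signed optimal lag
    return 1.0 - float(np.max(ncc)), shift

# Two cluster profiles with the same shape offset by two time steps:
# the optimal shift has magnitude 2 and the SBD is close to zero.
t = np.arange(16)
base = np.sin(t / 2.0)
a, b = base[:12], base[2:14]
dist, shift = sbd(a, b)
print(abs(shift) == 2 and dist < 0.1)  # True
```

In the network construction described above, the sign of `shift` would decide which cluster's pattern leads the other's.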
The number of pairs of data points that are in the same cluster in X and in the same cluster in Y is defined as true positives (TP), and the number of pairs of data points that are in different clusters in X but in the same cluster in Y is defined as false positives (FP). In a similar fashion, we can define true negatives (TN) and false negatives (FN) as well. The F1 score [50] is the harmonic mean of precision and recall: F1 = 2 * precision * recall / (precision + recall), where precision = TP / (TP + FP) and recall = TP / (TP + FN). The Rand index (RI) [51] measures the percentage of correct cluster assignments made by the algorithm. However, RI does not guarantee that random assignments have an RI score close to zero. The adjusted Rand index (ARI) is the corrected-for-chance version of RI that establishes a baseline using the expected RI, E[RI], under a random model. The silhouette score [52] is used to evaluate clustering results when ground truth labels are not known. The silhouette coefficient s(i) for a single data point i is defined as s(i) = (b(i) - a(i)) / max(a(i), b(i)), where a(i) is the mean distance between i and all other data points in the same cluster, and b(i) is the smallest mean distance of i to all points in any other cluster of which i is not a member. The silhouette score is the mean of s(i) over all data points, which is higher when clusters are dense and well separated. To evaluate and compare the performance of clustering algorithms, three metrics are used: F1 score, adjusted Rand index and silhouette score; the F1 score and adjusted Rand index compare the ground truth class assignment with the predicted assignment (Supplementary Information)."}
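The three evaluation metrics above can be reproduced in a few lines of Python: the pair-counting F1 is written out explicitly, while ARI and the silhouette score come from scikit-learn. The toy labels and data are illustrative:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

def pairwise_f1(truth, pred):
    # F1 over pairs of points: a pair is a positive when co-clustered.
    n = len(truth)
    tp = fp = fn = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_t, same_p = truth[i] == truth[j], pred[i] == pred[j]
            tp += same_t and same_p
            fp += (not same_t) and same_p
            fn += same_t and not same_p
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
truth = [0, 0, 0, 1, 1, 1]
pred = [1, 1, 1, 0, 0, 0]   # same partition, labels permuted
print(pairwise_f1(truth, pred), adjusted_rand_score(truth, pred))  # 1.0 1.0
print(silhouette_score(X, pred) > 0.9)  # True
```

Note that pair-counting F1 and ARI are label-permutation invariant, which is why the relabelled partition still scores 1.0.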
+{"text": "Plasmodium. The most severe cases of this disease are caused by the Plasmodium species P. falciparum. Once infected, a human host experiences symptoms of recurrent and intermittent fevers occurring over a time-frame of 48 hours, attributed to the synchronized developmental cycle of the parasite during the blood stage. To understand the regulated periodicity of Plasmodium falciparum transcription, this paper forecasts and predicts P. falciparum gene transcription during its blood stage life cycle by implementing a well-tuned recurrent neural network with gated recurrent units. Additionally, we also employ a spiking neural network to predict the expression levels of P. falciparum genes. We provide results of this prediction on multiple genes, including potential genes that express possible drug target enzymes. Our results show a high level of accuracy in predicting and forecasting the expression levels of the different genes. Malaria is a mosquito-borne disease caused by single-celled blood parasites of the genus Plasmodium. The parasite, coupled with its changing metabolism, makes the development of new effective drug treatments a continuous problem. According to the World Health Organization, malaria has established itself as one of the leading causes of death in developing countries concentrated within the tropics and subtropics. In 2017, there were an estimated 219 million malaria cases globally, with the majority of these cases found in Sub-Saharan Africa and Southeast Asia. While there is a projected downward trend in expected malaria cases, the high mutational capacity of the parasite remains a challenge. Plasmodium falciparum parasites do not have a single stable life stage. Instead, they periodically transition between several intermediate stages within a complete life cycle. 
Beginning with the injection of infectious sporozoites from the mosquito gut into the human circulation, the parasite migrates to hepatocytes, where it consumes intracellular content and rapidly proliferates, preparing itself for erythrocytic invasion after cell lysis. Once within the erythrocytes, the parasite adjusts itself to its immediate environment by transitioning between three distinct stages: trophozoite, merozoite and schizont. Over a general time frame of 48 h, it cycles through these stages, causing a series of synchronized mass erythrocyte destruction and invasion, giving rise to the clinical symptoms of intermittent fever associated with malaria [Centers for Disease Control and Prevention (CDC)]. P. falciparum remains an area of high interest for antimalarial development. A central question in Plasmodium biology is how the parasite, an apicomplexan, maintains precise control of gene expression during these stages of the intraerythrocytic developmental cycle (IDC) of the malaria parasite. Amongst the multitude of biological processes occurring within a cell, metabolism is considered to be among the most researched; specifically, antimalarial development focuses on identifying essential metabolic components that can serve as drug targets. Machine learning methods, such as the Recurrent Neural Network (RNN), have seen increased usage in biology and medical applications. Tavanaei et al. proposed a related approach for the P. falciparum parasite during the blood stage life cycle. Various time series methods such as the Auto-regressive Moving Average are also being used in various applications, as demonstrated by Kasabov. An RNN processes its input x(t) at time step t as well as information about its previous hidden state at time step t\u22121 through a recurrent connection: h(t) = \u03b8(W x(t) + U h(t\u22121)). The function \u03b8 is a non-linearity such as tanh or sigmoid. Unlike other types of neural networks, RNNs share the same parameters across all time steps, thus reducing the number of parameters the network needs to learn. 
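The shared-parameter recurrence above can be sketched in a few lines of NumPy; the layer sizes and random weights are illustrative assumptions, not the tuned network from this work:

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wh, b, h0=None):
    # h(t) = tanh(Wx @ x(t) + Wh @ h(t-1) + b): the same parameters
    # Wx, Wh, b are reused at every time step.
    h = np.zeros(Wh.shape[0]) if h0 is None else h0
    states = []
    for x in x_seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
hidden, inp, steps = 4, 3, 6
Wx = rng.normal(scale=0.5, size=(hidden, inp))
Wh = rng.normal(scale=0.5, size=(hidden, hidden))
b = np.zeros(hidden)
x_seq = rng.normal(size=(steps, inp))
H = rnn_forward(x_seq, Wx, Wh, b)
print(H.shape)  # (6, 4): one hidden state per time step
```

Gated recurrent units extend this cell with update and reset gates that mitigate the gradient problems discussed next.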
However, training RNNs can be complicated by vanishing and exploding gradients. One baseline, simple exponential smoothing, computes s_t = \u03b1 x_t + (1 \u2212 \u03b1) s_{t\u22121}, where s_t is the output of the algorithm, x_t is the actual observation at time t, and 0 < \u03b1 < 1 is the smoothing factor. We train each RNN using the Adam optimizer. We use the mean absolute percentage error (MAPE), unbiased MAPE (uMAPE), and volume-weighted MAPE (vMAPE) to measure the prediction accuracy of a forecasting series. During training, for each model, we compute these three error evaluation metrics and choose the model that scores best on at least two of the three validation metrics. If there is a tie, we choose the model with the best vMAPE. Thus, network architectures and parameters are chosen based on validation performance. Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs are the third generation of neural networks (Maass), in which neurons communicate through discrete spikes driven by synaptic input. Training classification models of SNNs has recently been investigated based on both supervised and unsupervised mechanisms. In this work, we use the adapted Widrow-Hoff learning rule proposed in Ponulak and Kasi\u0144ski: \u0394w_i = \u03b7 x_i (y_d \u2212 y_o), where \u0394w_i is the amount of weight change for the i-th synapse, x_i is the i-th synaptic input, and y_d and y_o are the desired and observed outputs, respectively. We use simulated networks of leaky integrate-and-fire (LIF) neurons in the experiments, which is the most popular model for building SNNs. LIF neurons are characterized by an internal state called the membrane potential. The membrane potential integrates the inputs over time and generates an output spike when it reaches the neuronal firing threshold. As mentioned above, there is a positive association between mRNA and protein levels, thus different expression levels should correspond to different mRNA abundance patterns. 
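The smoothing recurrence and the plain MAPE metric can be sketched as follows; uMAPE and vMAPE are variants whose exact formulas are not reproduced here, and the series values below are illustrative:

```python
import numpy as np

def exp_smooth(x, alpha):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}, initialized with s_0 = x_0.
    s = [x[0]]
    for v in x[1:]:
        s.append(alpha * v + (1 - alpha) * s[-1])
    return np.array(s)

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

x = np.array([10.0, 12.0, 11.0, 13.0])
s = exp_smooth(x, alpha=0.5)
print(s)                      # [10. 11. 11. 12.]
print(round(mape(x, s), 2))   # 4.01
```

Lower alpha smooths more aggressively; alpha close to 1 tracks the raw observations.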
Consequently, when these expressions with different patterns are converted into spikes in the SNN, the membrane voltage of each output spiking neuron will differ. Additionally, it has been shown that the SNN can achieve better classification performance if the membrane voltages of the output spiking neurons vary from each other. The membrane potential of a neuron is updated as v(t) = v(t\u22121) + \u03a3_i s_i x_i(t) \u2212 \u03bb, where x_i is the i-th synaptic input, s_i is the synaptic weight of the i-th synapse, \u03bb is the leak, \u03b1 is the spiking threshold, and R is the resting potential. During the integration, for each neuron in a time step, the membrane potential is the sum of the membrane potential in the previous time step and the synaptic input. Following the integration, the model subtracts the leak value from the membrane potential, and finally, when the membrane potential reaches the threshold \u03b1, the neuron spikes and the membrane potential is reset to the resting value R. The mRNA abundance information is presented to the SNN directly without significant transformation. We directly mapped the time series values onto distinct neurons so that the temporal information of the time series is encoded directly into the network without extra processing. We use receiver operator characteristic (ROC) curves and the prediction accuracy to evaluate each model. The area under the ROC curve (AUC) quantifies the ability of the classifier to balance sensitivity (true positives) against specificity. We generate a list of genes corresponding to those potential drug targets from Fatumo et al. and our own analysis. We apply a normalization method so that the training process is more stable and faster, since it has been shown that gradient descent converges much faster with feature normalization than without it. The stages considered include the late trophozoite (LT) and late schizont (LS). We use the whole time series of 48 h periods to predict, instead of its average, minimum, or maximum values, because we want to keep the time aspect of the mRNA abundance. 
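A minimal leaky integrate-and-fire neuron matching the integrate, leak, threshold and reset behaviour described above might look like this; the constant drive, leak and threshold values are illustrative assumptions:

```python
import numpy as np

def lif_run(inputs, weights, leak=0.1, threshold=1.0, rest=0.0):
    # Leaky integrate-and-fire: integrate weighted synaptic input,
    # subtract the leak, then spike and reset when the threshold is hit.
    v = rest
    spikes, trace = [], []
    for x in inputs:                      # x: synaptic inputs, one step
        v = v + float(np.dot(weights, x)) # integration
        v = max(v - leak, rest)           # leak, clamped at rest
        if v >= threshold:
            spikes.append(1)
            v = rest                      # reset after the spike
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# Constant drive of 0.4 per step with a leak of 0.1: the membrane
# climbs by 0.3 net per step, so the neuron fires every fourth step.
inputs = np.full((8, 1), 0.4)
spikes, trace = lif_run(inputs, weights=np.array([1.0]))
print(spikes)  # [0 0 0 1 0 0 0 1]
```

Stronger input drives the neuron to threshold sooner, so firing rate encodes input magnitude.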
Additionally, with little input data, the machine learning model is very likely to under-fit, in which case it is unable to capture the relationship between the input and output variables accurately. To evaluate our method's performance, we compared it to state-of-the-art models used in Read et al., including logistic regression. The learning rate is 10^\u22125. The numbers of epochs and the smoothing factors are 200 and 300, and 0.5 and 0.4, for RNN1 and RNN2, respectively. We use the provided statistics library in Python to implement ARMA, SES, and HWES. Our method is implemented using Python with Pytorch. Overall, our model forecasts the time series accurately, the trend of expression for each gene is closely captured by our model, and the expression levels are correctly classified. Additionally, with our current dataset, it takes only a few minutes to train our models, and a few seconds to forecast the time series and classify the expression level. Genes which are highly expressed during the trophozoite stage produce proteins that increase the metabolic activity of the parasite. Thus, by accurately forecasting the gene expression profiles, we can verify whether the genes are active during the trophozoite stage. Moreover, the genes which are highly expressed during the schizont stage are related to the replication and mitotic division of the parasite. Knowing which genes are expressed during the schizont stage could potentially aid in better knowledge of, and control over, the parasite development cycle. Finally, since the model can forecast the time series and expression level for the trophozoite and schizont stages, it could potentially reduce time and serve as an important template for biologists. Therefore, with our model's high accuracy, we can potentially provide a beneficial tool to help biologists generate a model of the complete gene expression profiles even before performing more involved experiments in the lab. 
In some cases where there is an abundance of data but some data points are missing at certain time points, our work could fill in those gaps by accurately predicting the missing data. In conclusion, to our knowledge, we are the first to propose the usage of RNNs and SNNs to forecast the gene expression profile of the P. falciparum parasite during the blood-stage life cycle. Even with a small dataset for training, our model performs very well in most scenarios. Overall, our method can adequately forecast the magnitude and trend of mRNA abundance, and the expression level, of the P. falciparum parasite during the blood stage life cycle. In the future, with the forecast gene profiles, we would like to predict the relationships among those genes to identify the gene or group of genes which express potential drug target enzymes. Considering the many existing variants of the Plasmodium parasite, we plan to evolve our method to investigate the profiles of these variants. Additionally, we plan to refine our model so that it will take a smaller amount of data (3\u20135 data points) and predict the next immediate data point (instead of the whole next stage), thus making it more practical for biologists. Publicly available datasets were analyzed in this study. This data can be found at: https://www.nature.com/articles/s41467-018-04966-3. TT and CE contributed to the design of the recurrent neural network. BR contributed to the design of the spiking neural network. TT provided the implementation of the study and wrote the first draft. BR and CE revised the draft. 
All authors contributed to manuscript revision, read, and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
+{"text": "Immune responses need to be initiated rapidly, and maintained as needed, to prevent establishment and growth of infections. At the same time, resources need to be balanced with other physiological processes. On the level of transcription, studies have shown that this balancing act is reflected in tight control of the initiation kinetics and shutdown dynamics of specific immune genes. To investigate genome-wide expression dynamics and trade-offs after infection at a high temporal resolution, we performed an RNA-seq time course on D. melanogaster with 20 time points post Imd stimulation. A combination of methods, including spline fitting, cluster analysis, and Granger causality inference, allowed detailed dissection of expression profiles, lead-lag interactions, and functional annotation of genes through guilt-by-association. We identified Imd-responsive genes and co-expressed, less well characterized genes, with an immediate-early response and sustained up-regulation up to 5 days after stimulation. In contrast, stress response and Toll-responsive genes, among which were Bomanins, demonstrated early and transient responses. We further observed a strong trade-off with metabolic genes, which strikingly recovered to pre-infection levels before the immune response was fully resolved. This high-dimensional dataset enabled the comprehensive study of immune response dynamics through the parallel application of multiple temporal data analysis methods. The well annotated data set should also serve as a useful resource for further investigation of the D. melanogaster innate immune response, and for the development of methods for analysis of a post-stress transcriptional response time-series at whole-genome scale. The online version contains supplementary material available at 10.1186/s12864-021-07593-3. Drosophila launch rapid and efficient immune responses that are crucial to survival. 
However, immune responses are energetically costly. Upon microbial infection, resources must be reallocated toward defense, creating trade-offs with other physiological processes. While gene expression has been examined at several time points after infection in Drosophila, the temporal resolution of previous studies has been limited. Statistical analysis of such high-dimensional longitudinal time-course omics data is not straightforward. While the problems of detecting differentially expressed (DE) genes and inferring gene interaction networks from gene expression data are common in genomics, computational methods have focused primarily on cross-sectional rather than time-course data. Most popular methods to analyze static RNA-seq data \u2014 such as edgeR or DESeq \u2014 were designed for comparisons between conditions rather than for densely sampled time series. In this study, we performed a dense time-course RNA-seq analysis of the Drosophila transcriptional response to commercial E. coli-derived crude lipopolysaccharide (LPS), which activates the Imd pathway, to better characterize the dynamics of the immune response. To generate a full transcriptional profile of gene expression dynamics in Drosophila melanogaster after Imd stimulation, we injected adult male flies with commercial E. coli-derived crude LPS. While pure E. coli LPS does not induce an immune response in Drosophila, the peptidoglycan contamination in crude LPS preparations consistently activates the Imd pathway (Methods). In the remaining text, we refer to this treatment as \u201cImd stimulation\u201d. Using commercial LPS instead of living bacteria gives the advantage of avoiding the confounding effects of a growing and changing population of pathogens, and of the mechanisms the bacteria use to circumvent immune responses. Flies were sampled in duplicate for a total of 21 time points throughout the course of 5 days, which includes an uninfected un-injected baseline sample as control at time zero, and 20 time points after injection. 
Since this is a perturbation-response experiment, denser sampling occurred at early time points. Principal components analysis (PCA) on the 500 genes with highest row variance across all time points revealed a horseshoe temporal trend, with the control samples clustering in the middle, and the post-injection time points following a horseshoe-shaped track, consistent with a pattern of many genes displaying a coordinated change over the five-day interval. As expected, this trend was not observed for Drosophila housekeeping genes across time. To identify genes whose expression levels were significantly altered across the time course, we employed two methods. First, we used gene-wise linear models to fit cubic splines with time, on both the first 8\u2009h and first 48\u2009h after commercial LPS exposure. Second, because we noticed that certain expression patterns were not adequately described using cubic splines (as discussed below), we also characterized the temporal patterns of expression by estimating the differential expression of every gene at each time point, from 1 to 48\u2009h, compared to the un-infected un-injected control samples at time zero. We selected genes with an absolute log2 fold change (FC) of at least 1 (i.e.: 2 times higher than baseline) in at least one time point throughout the first 48\u2009h after injections. A core set of 91 genes showed an absolute log2FC of at least 2 (4 times higher than baseline) in at least one time interval after injection and remained elevated up until 48\u2009h after Imd stimulation. Further investigation of the 91 core genes showed that the number of up-regulated genes was much higher than the number of down-regulated genes across all time points. Among the differentially expressed genes were the circadian rhythm genes timeless (tim), takeout (to), and vrille (vri), which when plotted against time exhibit the classic 24\u2009h periodic expression of the circadian rhythm. P-values were corrected for multiple testing using the Benjamini-Hochberg method. 
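The pairwise selection rule described above (absolute log2 fold change of at least 1 relative to the time-zero control in at least one time point) can be sketched as follows. The pseudocount and threshold follow common practice for count data; the function and variable names are illustrative, not the authors' code.

```python
import math

def is_differentially_expressed(baseline, timepoints,
                                min_abs_log2fc=1.0, pseudocount=1.0):
    """Flag a gene as DE if |log2 FC| vs. the baseline sample exceeds the
    threshold at any post-treatment time point. Counts are assumed to be
    library-size normalized already; the pseudocount keeps log2 finite."""
    for count in timepoints:
        log2fc = math.log2((count + pseudocount) / (baseline + pseudocount))
        if abs(log2fc) >= min_abs_log2fc:
            return True
    return False

# A gene that doubles at one time point passes the |log2FC| >= 1 cutoff
print(is_differentially_expressed(baseline=99.0,
                                  timepoints=[99.0, 199.0, 120.0]))  # True
```

The same helper with `min_abs_log2fc=2.0` corresponds to the 4-fold cutoff used for the 91 core genes.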
Of a total of 951 DE genes, 189 genes were identified as differentially expressed using both pairwise methods and spline modeling, but 762 out of 951 genes were identified using only one of these methods, indicating the importance of using complementary methods for the analysis of time course RNA-seq data. GO analysis is a useful tool to illustrate the functions of genes with significant differential expression over time, in this case the 951 DE genes selected using spline fitting and/or pairwise contrasts. However, focusing only on the top-scoring genes can lead to missing biologically relevant signals from genes with modest expression changes. Furthermore, GO analysis does not take into account expression changes over time. Both of these limitations are addressed by Gene Set Analysis, which searches for enriched pathways (Gene Sets) across all 12,657 genes in the dataset, guided by their log2 fold changes. GO analysis of the 951 DE genes using PANTHER identified a significant (FDR\u2009<\u20090.05) overrepresentation of GO terms related to the immune and stress response, carbohydrate, carboxylic acid and lipid metabolism, and proteolysis. Immune response related genes included Attacins (AttA, AttB, AttC), Diptericins, Cecropins, Bomanins, genes encoding Daisho peptides, IMPPP, Drosocin (Dro), Drosomycin and Drosomycin-like genes, Metchnikowin (Mtk), Peptidoglycan Recognition Proteins, Diedel, Relish (Rel) and elevated during infection (edin), among others. DE genes known to respond to stress included Turandots and Heat Shock proteins, as well as circadian regulators (vri, clk, Pdp1) and p53, involved in the response to genotoxic stress. Down-regulated metabolic genes included UGP, which encodes a UTP--glucose-1-phosphate uridylyltransferase, a glycerol-3-phosphate 1-O-acyltransferase, and a rate limiting enzyme for gluconeogenesis. Supporting these results, Gene Set analysis across all 12,657 genes and all time points showed that the top up-regulated pathways were all related to immune response, defense response to bacteria, and peptidoglycan functions, while the top down-regulated pathways were related to metabolism. Clusters 3 and 4 were characterized by an initial decrease in expression followed by an increase after 3\u2009h and 6\u2009h, respectively. Our second clustering analysis, based on autocorrelation, revealed additional differences regarding the initiation and resolution of gene expression after Imd stimulation. First, we identified a cluster of genes with an immediate and sustained up-regulation, characterized by a strong early induction of ~\u20092.5 to 6 log2FC, reaching a maximum of 6 to 8.5 log2FC (64 to 362 times higher than baseline), and maintaining persistent up-regulation of ~\u20092.5 to 6 log2FC throughout 5\u2009days. This cluster included IBIN, whose exact mode of action is unknown, but whose up-regulation stimulates starch catabolism as part of an immune-induced metabolic switch, likely to make free glucose available to circulating immune cells, as well as Mtk-like, CG43920, and CG45045, which are less characterized transcripts known to be up-regulated after bacterial infection. Second, a cluster containing Bomanins (BomS1, BomS2, BomS3, BomBc1), Daisho genes, and two Baramicin gene family members, CG33470 (BaraA1) and IMPPP (BaraA2), which were also shown to be regulated by Toll signaling, was up-regulated 6 to 11 times over baseline within the first two hours, reached a maximum of ~\u20092.5 to 5 log2FC, and returned to a steady state after 3\u20135\u2009days. 
Thus, clustering analysis identified effector immune genes partitioned by Imd vs Toll signaling: Imd-regulated genes showed an immediate-early sustained up-regulation even after 5\u2009days, whereas the Toll-responsive cluster, associated with defense against Gram-positive bacteria and fungi, showed an early but transient response. A third cluster contained stress response genes, among which were members of the Turandot family (TotA, TotB, TotC, and TotX), together with Diedel, Grik, lectin-24A, NimB3, BomT2, CG11459, and CG30287. Diedel encodes an immunomodulatory cytokine known to down-regulate the Imd pathway. Grik encodes a glutamate receptor, and Lectin-24A encodes a pattern recognition receptor that mediates pathogen encapsulation by hemocytes; Lectin-24A has been shown to be down-regulated in the first 2\u2009h following septic injury and then up-regulated 9\u2009h after, consistent with our observations. NimB3 is part of the Nimrod gene family, which is involved in phagocytosis. BomT2 is part of the Bomanin gene family, while CG11459 and CG30287 are less well characterized. Finally, using JTK_cycle, we identified 22 periodic genes with a 24\u2009h\u2009cycle, using a cutoff of Benjamini-Hochberg corrected Q-value <\u20090.05 and amplitude >\u20090.5. Among them were the circadian rhythm genes timeless (tim), takeout (to), vrille (vri), and PAR-domain protein 1 (Pdp1), as well as eight genes which do not have assigned circadian functions but have evidence of cyclic behavior in previous literature. Overall, the combination of clustering methods augmented by GO analysis allowed us to identify strong temporal patterns that correspond to early and late induction of immune processes, as well as both transient and sustained responses to infection, which point to a trade-off between the immune response and metabolism. We found that genes that share functions often have similar temporal expression patterns, suggesting co-regulation. 
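JTK_Cycle scores rhythmicity by rank-correlating a gene's profile against cosine reference curves over a grid of periods and phases. A minimal sketch of that idea (Kendall's tau against a single 24\u2009h cosine template, not the actual JTK_Cycle implementation, with illustrative names) is:

```python
import math

def kendall_tau(x, y):
    """Kendall rank correlation between two equal-length sequences."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def cycle_score(expression, hours, period=24.0, phase=0.0):
    """Rank-correlate an expression profile with a cosine of the given
    period and phase; values near +/-1 suggest rhythmic expression."""
    template = [math.cos(2 * math.pi * (t - phase) / period) for t in hours]
    return kendall_tau(expression, template)

hours = [0, 2, 5, 8, 11]              # sampling times within one half-cycle
falling = [math.cos(2 * math.pi * t / 24) for t in hours]
print(cycle_score(falling, hours))          # 1.0
print(cycle_score([1, 2, 3, 4, 5], hours))  # -1.0
```

The real algorithm additionally scans phases and periods (here 18\u201330\u2009h) and converts the tau statistic into an exact P-value before BH correction.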
This observation further allowed us to assign putative functions to previously uncharacterized genes that cluster together with well-studied genes. We constructed directed GC edges and networks of putative interactions among a subset of 258 genes. These genes had a |log2FC|\u2009>\u20091 (at least 2 times higher or lower than baseline) across the time course and had available functional annotations. We performed Granger causality analysis on sliding windows of 6 time points on the normalized counts of both replicates using bivariate and multivariate methods (see Methods); bivariate GC analysis between two genes A and B does not account for possible confounding effects of other genes C, D, E, which can also influence genes A and B, motivating the multivariate approach. We investigated both positive and negative edges, reflecting positive and negative lagged correlations between genes. The overall unfiltered GC network has a multitude of relationships worth exploring, but limitations in the ability to distinguish different types of causality make widespread conclusions from the network challenging. Here, we discuss several examples of subnetworks which illustrate putative functional relationships among genes whose expression changes in response to Imd stimulation. Based on our interest in identifying trade-offs between biological processes in infected animals, we first constructed a high-quality set of consistently significant GC edges of divergent expression (negative edges). To this end we first filtered the subnetwork by (a) removing all edges with a positive weight, (b) removing all nodes corresponding to cyclic genes identified earlier through the JTK_Cycle method, (c) using only pairs of nodes with significant edges (Benjamini-Hochberg FDR\u2009<\u20090.05) in at least 3 consecutive windows within the first 24\u2009h of the time course. After filtering, the resulting high-quality GC network contained 51 nodes and 35 edges in 16 connected components. 
This network highlighted putative trade-offs between down-regulated metabolic genes and up-regulated genes involved in proliferation and repair. One component is a multifunctional chain of 6 genes, which connects the down-regulation of four metabolic genes with the up-regulation of two genes that are involved in regulating proliferation and repair. Two of the metabolic genes, Sorbitol dehydrogenase 1 (Sodh-1) and UGP, both lead the divergent expression of Claspin. Claspin was identified to be part of the same pathway as Orc1 in our previous Gene Set Analysis, showing similar patterns and windows of up-regulation. Among genes with cyclic expression, we observed GC edges involving cryptochrome, vrille, period (a regulator of the circadian clock), and Smvt. We also found edges connecting Rhodopsin 5, which encodes a G-protein-coupled receptor involved in phototransduction, to DptB and AttC, and connecting BomS1, Dso1, and BomBc1 to AGBE (a predicted hydrolase involved in glycogen synthesis). In this study, we profiled the Drosophila transcriptome response after Imd stimulation through commercial LPS injection, using RNA-seq sampling over 20 time points spanning five days. This profiling provides a high-dimensional dataset, which is available as a resource for the community. We analyzed this dataset using a broad range of statistical methods, including Granger causality, to investigate lead-lag relationships between genes. Because of the high dimensionality, it is not straightforward to analyze such a time series, as illustrated by the partially distinct results of spline fitting and pairwise comparisons. However, using a combination of analytical methods allowed us to identify distinct patterns with high confidence, specifically responses to Imd stimulation with divergent initiation and resolution dynamics, as well as cyclic patterns of gene expression, and patterns of co-regulation and trade-offs. 
Below, we describe and discuss the main insights from these analyses, as well as limitations and future steps. We have produced a dense and high-quality time-course profiling of the Drosophila immune response. Clusters of genes demonstrated distinct activation kinetics after stimulation of the immune response, a phenomenon that has been observed both in flies and mammals. First, a cluster of 13 genes showed the fastest up-regulation within the first 1\u20132\u2009h and remained up-regulated during the entire five-day time course. This cluster included known Imd-responsive genes (Attacins, 2 Diptericins, Mtk, Dro, IBIN, PGRP-SB1, and PGRP-SD), as well as CG43920, CG45045, and Mtk-like. Imd-regulation of these 3 genes has not been experimentally validated, but their co-clustering pattern suggests that they are regulated by the Imd pathway. Second, a cluster containing Toll-regulated Bomanins and Daisho genes reached a lower maximum and resolved earlier, mirroring the classic contrast between Attacin and Diptericin (Imd-responsive) versus Drosomycin (mostly Toll-responsive): Imd-responsive genes reached a higher maximum log2FC, while Toll-responsive genes reached a maximum of 2.5\u20134.5 log2FC. The up-regulation of Toll-responsive genes might be a response to the wounding that occurred during LPS injection, as Irving et al. reported that E. coli induced responses typical of infection by Gram-negative and/or Gram-positive bacteria. Third, stress response genes, among which were members of the Turandot family, reached their highest point of expression at 10\u201312\u2009h, following a pattern of delayed response in line with observations by Ekengren et al. Similarly, previous work reported that after infection with E. coli, none of the surviving flies were completely free of bacteria - suggesting suppression rather than clearance of the infection. These observations raise the question of whether one should expect to see a return to the baseline gene expression levels. 
Rather than returning to a pre-infection state, the response showed prolonged up-regulation of Imd-responsive genes and the transcription factor Rel, with the most up-regulated pathways related to immune and stress responses, and the most down-regulated pathways related to metabolic functions. FASN1 showed the strongest down-regulation in both the glycogen metabolic process and triglyceride biosynthetic process pathways. Main subnetwork components of the Granger causality network analysis showed significant GC directional edges between down-regulated metabolic genes and up-regulated genes with cell proliferation and repair functions. Using clustering analysis, we identified several genes that lack a well-established function, and that clustered tightly with well-studied genes. We can use this co-clustering to suggest shared functions, for example for Mtk-like, CG43920, IBIN, and CG45045, which shared similar expression dynamics with Imd-regulated AMPs, suggesting shared transcriptional control. However, these statistical causal relationships provide only hypotheses that should be tested with direct experimental disruptions of a system to demonstrate biological causality. This study profiles the transcriptional response of Drosophila melanogaster after Imd stimulation at the highest temporal resolution to date, and serves as a proof of concept for high-density time-course RNA-seq analyses in other systems. Further, it motivates innovation in computational and statistical methods for longitudinal omics data, both to account for their inherent high-dimensionality and the complex underlying architecture that contains both causal and spurious coordination. Specifically, the development and application of multivariate Granger causality analysis highlights the potential of time-course data to evaluate coordinated gene expression changes through lags and trade-offs. While the immune response in D. melanogaster has been well studied, our research using dense time-course gene expression data reveals genome-wide dynamic expression patterns at higher temporal detail. 
Specifically, we reveal responses to Imd stimulation with divergent initiation and resolution dynamics, cyclic patterns of gene expression, and patterns of co-regulation and functional trade-offs, while also assigning putative gene functions to uncharacterized genes through a temporal guilt-by-association method. Our combination of analytical methods provides robust profiling of the innate immune response in Drosophila. Male adult flies of about 4\u2009days old were sampled from an F1 cross of two Drosophila melanogaster Genetic Reference Panel (DGRP) lines: line 379, which was shown to have low bacterial resistance, and line 360, which has high bacterial resistance. Flies were kept on a 12:12 dark-light cycle, on standard yeast/glucose food. Flies were injected with 9.2\u2009\u03bcl of commercial lipopolysaccharide (LPS; Escherichia coli 055:B5, Sigma) derived from the outer membrane of Gram-negative bacteria, using a Nanoinjector, which allows high-throughput fly injections with a constant injection volume. Injections were performed in the abdomen, as it has been shown to be less detrimental to the fly compared to thorax injury. This sampling was performed twice, using flies from the same stock, on two consecutive days. Therefore, all time points have two replicates, giving a total of 42 samples. Since flies were sampled from the same stock for both replicates, we consider the second replicate to control for any effects of the injection technique. 
This sampling strategy was informed by experimental data and time series theoretical analysis showing that, under reasonable assumptions, sampling time points at higher resolution is preferred over having more replicates. During collection, a group of ~\u200910 pooled flies corresponding to the sampled time point were flash frozen in dry ice and stored at \u2212\u200980\u2009\u00b0C for later RNA extraction. The Imd inducibility of commercial LPS was confirmed using qPCR. Adult male Drosophila were injected with 9.2\u2009\u03bcl or 40\u2009\u03bcl of 1\u2009mg/mL LPS and flash frozen at 8 and 24\u2009h for RNA extraction. Uninfected un-injected flies were used as control. Each sampled time point consisted of a group of ~\u200910 pooled flies. Each sample had two replicates. Genes AttA and DptB were measured to confirm Imd stimulation. Gene Rp49 was used as a baseline for expression normalization. Results showed a significant up-regulation of AttA and DptB at both volumes (9.2\u2009\u03bcl and 40\u2009\u03bcl) for both time points (8 and 24\u2009h). We decided to use 9.2\u2009\u03bcl so as to cause the least amount of disruption to flies during infections, while still eliciting an immune response. RNA extraction was performed using Trizol (Life Technologies) following the manufacturer\u2019s instructions. cDNA libraries were prepared using the TruSeq RNA Sample Preparation Kit (Illumina). RNA purity was assessed using a Nanodrop instrument. RNA concentration was determined using a Qubit (Life Technologies) instrument. Sequencing was performed on an Illumina Hi-Seq 2500, single-end, with a read length of 75\u2009bp, at the Cornell Biotechnology Resource Center Genomics Facility. Low quality bases at the beginning and end of reads were trimmed using the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/). Reads were mapped to the Drosophila melanogaster genome (r6.17) using STAR (v2.5.2b). BAM files were generated using SAMtools (v1.3.2). 
Samples had an average of 24.8\u2009M raw reads and went through quality control using FastQC (v0.11.5). Truseq adapter sequences were removed from any sample that showed any level of adapter contamination using cutadapt (v1.14). Only one sample out of the original 42 failed to pass the quality thresholds, and all subsequent analysis used the remaining 41 samples. An average of 92.97% of reads per library mapped uniquely to the Drosophila melanogaster genome, yielding an average of 23.4 million uniquely mapped reads per library. Reads mapping to genes were counted in R. Samples were normalized to library size. A \u201c+\u20091\u201d count was added to all genes before performing log2 transformation, to make sure values after transformation are finite, and to stabilize the variance at the low expression end. After normalization and log2 transformation, only genes with more than 5 counts in at least 2 samples were kept (removing 4156 genes). We ended up with 12,657 genes for downstream analysis. A heatmap of the row Z-scores of normalized counts for all 12,657 genes indicated that sample 6A was an outlier for a subset of genes, even though it had passed quality control. Principal components analysis (PCA) was performed using the function plotPCA from the R package DESeq2. In order to identify genes that had differential expression over the time course, we adopted the linear model-based methodology proposed in Law et al., available in the R package limma. We first transformed the normalized RNA-seq read counts (before log2 transformation) using the voom transformation, which estimates the heteroscedastic mean-variance relationship of log-counts and adds a precision weight to each observation to make them amenable to the usual linear modeling pipelines that rely on normality. 
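The count preprocessing described above (library-size scaling, a pseudocount of 1, log2 transformation, and filtering of low-count genes) can be sketched generically. The thresholds follow the text; the CPM-style scaling constant and function names are illustrative assumptions, not the authors' pipeline.

```python
import math

def log2_normalize(counts, library_size, scale=1e6):
    """Scale raw counts to the library size (CPM-like, an assumed scheme),
    add a pseudocount of 1 so values stay finite, then log2-transform to
    stabilize variance at the low-expression end."""
    return [math.log2(c * scale / library_size + 1.0) for c in counts]

def keep_gene(sample_counts, min_count=5, min_samples=2):
    """Keep genes with more than `min_count` counts in at least
    `min_samples` samples, as in the filtering step above."""
    return sum(1 for c in sample_counts if c > min_count) >= min_samples

print(keep_gene([0, 7, 9, 1]))   # True: two samples exceed 5 counts
print(keep_gene([0, 7, 2, 1]))   # False: only one sample exceeds 5 counts
```

Applying such a filter across all genes is what reduces the dataset to the 12,657 genes used downstream.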
We used gene-wise linear models to fit cubic splines (with 3 degrees of freedom) with time, applied the TMM normalization method, and used statistical F-tests to select genes whose expression levels were significantly altered across the time course in both replicates. We also fit 3 degree polynomials across the first 48\u2009h using the R package maSigPro, keeping genes with an R2 value of at least 0.6. Next, we checked for differential expression of every gene between time point 0 (control) and time point t, for t\u2009=\u20091, 2, \u2026, 48\u2009h. For each test, a multiple testing correction at 5% False Discovery Rate (FDR) using the Benjamini-Hochberg method was adopted. Venn diagrams were generated using Venny (http://bioinfogp.cnb.csic.es/tools/venny/). Gene Ontology (GO) enrichment analysis was performed using the PANTHER Statistical Overrepresentation Test (http://pantherdb.org/, v14.1, released 2019/04/29). Gene Set Analysis was performed with the R package GSA, which implements a Gene Set Analysis algorithm; gene sets were downloaded from FlyBase.org in January 2019 (version FB2019_01). Normalized counts for both replicates at each time point from 1 to 120\u2009h were compared against both control replicates (0\u2009h), using a two-class paired vector. We used 100,000 permutations to estimate false discovery rates. Only pathways with P-values below 0.05 and with 5 or more genes from our full dataset were kept. A subset of most relevant pathways was compiled by selecting pathways that had more than one gene from the subset of 551 most predominant time-dependent genes, and had a score of 2.5 or more in at least one time point from 1 to 48\u2009h. This gave us 41 unique pathways. Hierarchical clustering of the 91 core genes was performed using the R function hclust, using Euclidean distance as a distance metric. 
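Correlation-based clustering of expression profiles, as used here, relies on converting Pearson correlation into a distance so that co-regulated genes (same shape, any scale) end up close together. A minimal sketch of that metric (the 1\u2009\u2212\u2009r distance only, not the full hclust/pheatmap pipeline) is:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_distance(x, y):
    """Distance in [0, 2]: ~0 for co-regulated profiles, ~2 for
    anti-correlated (divergently expressed) ones."""
    return 1.0 - pearson(x, y)

up = [1.0, 2.0, 3.0, 4.0]
also_up = [2.0, 4.0, 6.0, 8.0]    # same temporal shape, different scale
down = [4.0, 3.0, 2.0, 1.0]
print(correlation_distance(up, also_up))  # ~0: clusters together
print(correlation_distance(up, down))     # ~2: opposite dynamics
```

This scale-invariance is why correlation distances group genes by temporal shape rather than absolute expression level, which is the property the guilt-by-association reasoning above depends on.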
Hierarchical clustering of the 551 predominant time-dependent genes was done using the default Pearson correlation with the R package pheatmap. Cluster membership assignment and mean patterns of expression across time for genes within each cluster were computed as done in White et al. Temporal clustering based on autocorrelation was performed with the R package TSclust, using a P-value cutoff of 0.05, and only the top 1% of correlation edges were further explored. Cyclic gene patterns were identified using the JTK_Cycle algorithm. Nine regularly distributed time points were subset from both replicates every 6\u2009h. The time point corresponding to 18\u2009h was approximated by averaging normalized gene counts between time points 16 and 20\u2009h. We looked for rhythms between 18 and 30\u2009h (4 to 6 time points) with a cutoff of BH Q-value <\u20090.05 and amplitude >\u20090.5. Granger causality-based methods were used to learn lead-lag relationships among genes: a directed edge from gene A to gene B is added if the time course of gene A Granger-causes the time course of gene B. The notion of \u2018Granger causality\u2019 is popular in learning lead-lag relationships among two or more time series. Formally, if the time series of gene A, given by xt, has some power in predicting the expression of gene B at time t\u2009+\u20091, called yt\u2009+\u20091, over and above yt and conditioned on an information set It, then gene A is said to exert a Granger causal effect on gene B. Bivariate Granger causality uses a small information set It\u2009=\u2009{x1:t, y1:t} and captures the Granger causal relationship from gene A to gene B by testing whether the regression coefficient \u03b2 in the bivariate regression yt+1\u2009=\u2009\u03b1\u2009+\u2009\u03b3yt\u2009+\u2009\u03b2xt\u2009+\u2009\u03b5t+1 is different from zero. 
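The bivariate regression behind the Granger test (does lagged x help predict y beyond y's own past?) can be sketched with ordinary least squares. This is a didactic version of the idea, not the authors' R/lm pipeline: it compares residual sums of squares of the restricted and full models and reports the fractional RSS drop rather than an F-test P-value.

```python
def ols_rss(X, y):
    """Least-squares residual sum of squares for y ~ X (X includes an
    intercept column), solving the normal equations by Gaussian elimination."""
    n, p = len(y), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    v = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for c in range(p):                         # elimination with partial pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        v[c], v[piv] = v[piv], v[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            v[r] -= f * v[c]
    beta = [0.0] * p
    for c in reversed(range(p)):               # back-substitution
        beta[c] = (v[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return sum((y[i] - sum(X[i][k] * beta[k] for k in range(p))) ** 2 for i in range(n))

def granger_rss_drop(x, y):
    """Fractional RSS reduction when lagged x is added to an AR(1) model of y."""
    ylag, xlag, ynext = y[:-1], x[:-1], y[1:]
    restricted = ols_rss([[1.0, a] for a in ylag], ynext)          # y_{t+1} ~ y_t
    full = ols_rss([[1.0, a, b] for a, b in zip(ylag, xlag)], ynext)  # + x_t
    return (restricted - full) / restricted

# x leads y by one step, so lagged x should explain most of y's variance
x = [0.0, 1.0, 0.0, 2.0, 1.0, 3.0, 0.0, 2.0]
y = [0.5] + [0.9 * v + 0.1 for v in x[:-1]]
print(granger_rss_drop(x, y) > 0.9)  # True
```

A large drop corresponds to a small P-value in the F-test version of the bivariate test described above; the multivariate variant adds the other genes' lagged values as extra columns of X, which is why LASSO is needed when p exceeds the window size.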
A master set of 258 genes was constructed from the 551 predominant time-dependent genes by picking those that had available functional annotation and differential expression of at least 2 log2 fold change. Using linear regression (function lm in R), we conducted bivariate (pairwise) Granger causality tests for every pair of genes among this set of 258 genes using data on sliding windows of t\u2009=\u20096 consecutive time points and the two replicates (sample size\u2009=\u200912), and ranked them in order of increasing P-values, keeping the top resulting edges (BH FDR\u2009<\u20095%). A well-known critique of bivariate Granger causality is its use of a small information set that does not contain any other factors except genes A and B, so that an apparent causal relationship from gene A to gene B may be an artefact of gene C, which is causal for one or both genes. To address this, we adopted multivariate (or network) Granger causality, allowing an information set of p genes, where the Granger causal relationship of gene A on gene B is tested by regressing yt+1 on yt, xt and the time courses of the other p\u2009\u2212\u20092 genes z1t, z2t, \u2026, zpt. For small sample size and large p, the above regression is not possible to run using ordinary least squares (OLS), so we use LASSO. To test whether the coefficient \u03b2 in the above regression is different from zero, we used two different variants of de-biased LASSO, each of which tests whether gene A is Granger causal for gene B, even after accounting for the effects of the other p\u2009\u2212\u20092 genes. Using this method on the master set of 258 genes, we reconstructed putative directed networks of multivariate Granger causality and ranked the edges in increasing order of P-values, following the same parameters used in the bivariate (pairwise) Granger causality method. Additional file 1 Figure S1. Plots of normalized counts of housekeeping genes and immune response genes. Figure S2. Venn Diagram showing overlap and differences of DE genes identified using limma-voom spline fitting vs. 
maSigPro fitting of polynomials. Figure S3. Heatmap of 214 genes. Figure S4. Temporal dynamics of gene expression of the most strongly up-regulated genes. Figure S5. Expression profiles of DE genes encoding transcription factors. Figure S6. Gluconeogenesis pathway. Figure S7. GC filtered network of negative edges. Figure S8. Negative GC edges. Figure S9. Pathway corresponding to \u2018mitotic DNA replication checkpoint\u2019. Figure S10. GC edges of circadian rhythm genes plotted against time. Figure S11. Positive GC edges. Figure S12. Outlier explanation. Additional file 2 Table S1. List of differentially expressed genes identified using spline fitting and pairwise comparisons. Additional file 3 Table S2. Genes that encode transcription factors and respond to commercial LPS injection."}
+{"text": "Factor analysis is a widely used method for dimensionality reduction in genome biology, with applications from personalized health to single-cell biology. Existing factor analysis models assume independence of the observed samples, an assumption that fails in spatio-temporal profiling studies. Here we present MEFISTO, a flexible and versatile toolbox for modeling high-dimensional data when spatial or temporal dependencies between the samples are known. MEFISTO maintains the established benefits of factor analysis for multimodal data, but enables the performance of spatio-temporally informed dimensionality reduction, interpolation, and separation of smooth from non-smooth patterns of variation. Moreover, MEFISTO can integrate multiple related datasets by simultaneously identifying and aligning the underlying patterns of variation in a data-driven manner. To illustrate MEFISTO, we apply the model to different datasets with spatial or temporal resolution, including an evolutionary atlas of organ development, a longitudinal microbiome study, a single-cell multi-omics atlas of mouse gastrulation and spatially resolved transcriptomics. MEFISTO models bulk and single-cell multi-omics data with temporal or spatial dependencies for interpretable pattern discovery and integration. Given the popularity and broad applicability of factor analysis, this model class has undergone an evolution, from principal component analysis to sparse generalizations [4], including non-negativity constraints [9]. Most recently, factor analysis has been extended to model structured datasets that consist of multiple data modalities or sample groups [8]. At the same time, the complexity of multi-omics designs is constantly increasing and, in particular, strategies for assaying multiple omics layers across temporal or spatial trajectories are gaining relevance. 
However, existing factor analysis methods do not account for the spatio-temporal dependencies between samples that result from such designs. Prominent domains in which spatio-temporal profiling is used include developmental biology10, longitudinal profiling in personalized medicine11 and spatially resolved omics12. Such designs and datasets pose new analytical challenges and opportunities, including the need to account for spatio-temporal dependencies across samples that are no longer invariant to permutations; to deal with imperfect alignment between samples from different data modalities, and with missing data; to identify inter-individual heterogeneities of the underlying temporal and/or spatial functional modules; and to distinguish spatio-temporal variation from non-smooth patterns of variation. In addition, spatio-temporally informed dimensionality reduction could enable more accurate and interpretable recovery of the underlying patterns by leveraging known spatio-temporal dependencies rather than solely relying on feature correlations. To this end, we propose MEFISTO, a flexible and versatile method for addressing these challenges while maintaining the benefits of previous factor analysis models for multimodal data. Factor analysis is a first-line approach for the analysis of high-throughput sequencing data. MEFISTO takes as input a dataset that contains measurements from one or more feature sets, referred to as \u201cviews\u201d in the following, as well as one or multiple sets of samples, referred to as \u201cgroups\u201d in the following. In addition to these high-dimensional data, each sample is further characterized by a continuous covariate such as a one-dimensional temporal or two-dimensional spatial coordinate. MEFISTO factorizes the input data into latent factors, similar to conventional factor analysis, thereby recovering a joint embedding of the samples in a low-dimensional latent space.
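As an aside for intuition: the factorization step that MEFISTO shares with conventional factor analysis can be sketched with a plain truncated SVD. The snippet below is a toy illustration only (least squares, complete data, no sparsity or Gaussian-process priors; the function name `factorize` is ours, not part of any package):

```python
import numpy as np

def factorize(Y, n_factors):
    """Toy rank-K factorization Y ~ Z @ W.T via truncated SVD.

    Y: (samples x features) data matrix; returns factors Z and weights W.
    A plain least-squares sketch, not MEFISTO's probabilistic inference.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Z = U[:, :n_factors] * s[:n_factors]   # sample embedding (latent factors)
    W = Vt[:n_factors].T                   # feature loadings (weights)
    return Z, W

# Low-rank data are reconstructed almost exactly by a rank-2 factorization.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 200))
Z, W = factorize(Y, 2)
err = np.linalg.norm(Y - Z @ W.T) / np.linalg.norm(Y)
```

With rank-2 data the rank-2 reconstruction is essentially exact; MEFISTO's probabilistic inference additionally handles missing entries and multiple views and groups.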
At the same time, the model yields a sparse linear and therefore interpretable mapping between the latent factors and the observed features in terms of view-specific weights. Formulated within a probabilistic framework, MEFISTO naturally accounts for missing values for arbitrary combinations of views, groups and covariate values. MEFISTO uses Gaussian processes13 to model spatio-temporal dependencies in the latent space, where each factor is governed by a continuous latent process with a variable degree of smoothness. In the evolutionary atlas of organ development, the model recovered factors with distinct temporal patterns, including one specific to opossum (Factor 5). Finally, we considered this dataset to further assess the performance of MEFISTO in settings with pronounced missingness by masking data for random species\u2013time point combinations. MEFISTO yielded accurate imputations, and in particular was able to interpolate time points with completely missing data. In the longitudinal microbiome study, MEFISTO identified distinct temporal trajectories depending on the birth mode. We also applied MEFISTO to a single-cell multi-omics atlas of mouse gastrulation combining epigenetic readouts with transcriptome sequencing. The sparsity and missing data of the epigenetic readouts are a major challenge in this dataset, with only 33% of the cells having measurements from the epigenetic modalities. To identify coordinated variation between the transcriptome and epigenome along development, we characterized developmental transitions using two-dimensional reference coordinates derived from the RNA expression and used these as covariates in MEFISTO, without the need of single-cell reference data. Enrichment analysis of the factor weights was performed based on known marker genes. For each factor, the scale parameter sk determines the relative smooth versus non-smooth variation, and the lengthscale parameter lk determines the distance over which correlation decays along the covariate, for example, in time or space.
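For intuition, the covariance implied by the scale and lengthscale parameters can be written down directly: a squared-exponential kernel weighted by the scale, plus a white-noise part weighted by one minus the scale. This is an illustrative sketch of the covariance structure described above, under our own naming, not MEFISTO's code:

```python
import numpy as np

def factor_covariance(t, scale, lengthscale):
    """Covariance of one latent factor over covariate values t.

    scale (s_k in [0, 1]) mixes smooth (squared-exponential) and
    non-smooth (white-noise) variation; lengthscale (l_k) sets how
    fast correlation decays along the covariate.
    """
    d2 = (t[:, None] - t[None, :]) ** 2
    smooth = np.exp(-d2 / (2.0 * lengthscale ** 2))
    return scale * smooth + (1.0 - scale) * np.eye(len(t))

t = np.array([0.0, 1.0, 2.0])
K = factor_covariance(t, scale=0.6, lengthscale=1.0)
```

A scale of 0 yields an identity covariance (a purely non-smooth factor); a scale of 1 a fully smooth Gaussian-process factor, with correlation decaying over the lengthscale.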
Details on the model specification, illustrations of the resulting covariance structures and a plate diagram are provided in the Supplementary Information, together with details on the hyperparameters. To infer the unobserved model components as well as the hyperparameters of the Gaussian processes, MEFISTO makes use of variational inference combined with optimization of the evidence lower bound in terms of the hyperparameters of the Gaussian processes. Details on the inference are described in the Supplementary Information. If the temporal correspondence between different groups is imperfect, a non-linear alignment between sample groups is learnt based on dynamic time warping19 in the latent space. To reduce noise prior to the alignment, MEFISTO simultaneously decomposes the input data and aligns the covariate. This is implemented by interleaving the updates of the model components with an optimization step, in which a warping curve is found that minimizes the distance of each group to a reference group in the current latent space. The alignment can be partial, that is, it can have different end or start points between groups. Furthermore, instead of learning an alignment between individual groups, the alignment step can also be used at higher levels, such as between distinct classes of groups based on known class annotations or hierarchies of the groups. Details on the alignment step are described in the Supplementary Information. For each view, a different likelihood model can be used in the matrix decomposition, analogously to previous multimodal factor models. MEFISTO infers the degree of smoothness of each factor (between 0 (non-smooth) and 1 (smooth)) as well as the group relationships specific to a latent factor. Related methods do not model such factor-specific group relationships42 or are restricted to the same features in each view43. In addition, sparsity constraints, which enhance interpretability and identifiability, are not used in these models.
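The dynamic time warping underlying the alignment step is a standard dynamic program over monotone alignments of two sequences. Below is a textbook sketch with an absolute-difference cost (MEFISTO applies warping to latent factor values and supports partial alignment, which this minimal version does not):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between sequences a and b.

    The cost of a cell is |a[i] - b[j]| plus the cheapest of the three
    predecessor cells (match, insertion, deletion).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical sequences have cost 0, and a sequence shifted by one unit incurs only a small boundary cost, which is what makes the warping curve a useful distance for the interleaved alignment step described above.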
Besides linear methods, non-linear approaches have made use of continuous side-information, for example, in the context of variational autoencoders45 or recurrent neural networks46. In particular, all of the above methods are incapable of handling non-aligned time courses across datasets (apart from the Duncker and Sahani method43) and cannot capture heterogeneity across sample groups in the latent factors. For a detailed overview of related methods we refer to the Supplementary Information. MEFISTO is also related to Gaussian process models for individual features, for example in spatial settings47 or time course experiments48, as well as models aimed at clustering of time series50. These differ in their objective from that of MEFISTO, which uses Gaussian processes at the level of inferred factors in the latent space; for a more detailed discussion see the Supplementary Information. MEFISTO is furthermore related to previous matrix factorization and tensor decomposition methods, which, however, mostly ignore temporal information. In simulations, performance was assessed in terms of factor recovery, given by the correlation of the inferred and simulated factor values, as well as in terms of the mean squared error between imputed and ground-truth values for the masked values in the high-dimensional input data. The base settings for all non-varied parameters are 20 time points per group, five groups, four views with 500 features each, and a noise variance of 1. A total of 20% of randomly selected time points were masked per group and view, of which 50% were missing in all views. To assess the alignment capabilities of the model, data were simulated with the same set-up for three groups and the covariates were transformed before training by a linear mapping (h(t)\u2009=\u20090.4t\u2009+\u20090.3), a non-linear mapping (h(t)\u2009=\u2009exp(t)), and the identity in each group, respectively. These transformed covariates were passed to the model and the learnt alignment was compared with the ground-truth warping functions.
To test the alignment in the presence of non-temporal patterns of variation, we restricted the simulation to a single smooth factor and either varied the number of non-smooth factors or restricted the smooth factor to a single view with 100 features, and varied the number of features in a second view generated by a non-smooth factor. To assess the scalability in the number of time points using sparse Gaussian processes, data were simulated from one group and with the same base parameters as above. For the comparison with univariate Gaussian processes, we fitted Gaussian process models to all observed time points of each individual feature using the ExactGP model as implemented in GPyTorch v1.4.0 (ref. 51) with a squared exponential covariance function, and the parameters were optimized using the Adam optimizer. Feature values at missing time points were predicted from the resulting posterior. Data were simulated as above with only the two smooth factors, as well as a single group and 100 features per view. Data were simulated from the generative model by varying the number of time points per group in an interval, the noise levels, the number of groups and the fraction of missing values. Ten independent datasets were simulated for each setting from the generative model with three latent processes, having scale parameters of 1, 0.6 and 0 and lengthscales of 0.2, 0.1 and 0. For the first two factors, one was randomly selected to be shared across all groups, while for the other factor a correlation matrix between groups of rank 1 was simulated randomly based on a uniformly distributed vector. On these simulations, MEFISTO was compared with MOFA8. For the evolutionary atlas of organ development, count data were obtained from Cardoso-Moreira et al.10, corrected for library size, normalized using a variance stabilizing transformation provided by DESeq2 v1.26.0 (ref. 52) and the orthologous genes selected as given in the Cardoso-Moreira et al. study10. Following the trajectory analysis of the original publication, we focused on five species, namely human, opossum, mouse, rat and rabbit, and five organs, namely brain, cerebellum, heart, liver and testis. In total this resulted in a dataset of five groups (species) and five views (organs) with 7,696 features each. The number of time points for each species varied between 14 and 23. Given that developmental correspondences were unclear, we used a numeric ordering within each species, ranging from 1 to the maximal number of time points in this species, as input for MEFISTO and let the model infer the correspondences of time points between species. Stability analysis of the latent factors was performed by re-training the model on a down-sampled dataset, in which random selections of 1\u20135 time points were repeatedly masked in each organ\u2013species combination. Gene set enrichment analysis was performed based on the Reactome gene sets53, the Molecular Signatures Database38 and cell type markers downloaded from https://panglaodb.se/markers.html (ref. 27). To assess the imputation performance, gene expression data in 2\u201320 randomly selected species\u2013time combinations were masked in three, four or all organs and the model was retrained on these data as described above. The experiment was repeated ten times and the mean squared error was calculated on all masked values. For the comparison with univariate Gaussian processes we restricted the experiment to 1,000 randomly selected genes of mouse brain and masked a varying fraction of these features at randomly sampled time points (out of 14). For the longitudinal microbiome study, data were obtained from the Code Ocean capsule 10.24433/CO.5938114.v1, which contains the data used in the Bokulich et al. study29. The processed data contained microbial features provided at the level of sub-operational taxonomic units (sOTUs) and a phylogenetic tree, as detailed in the Martino et al. study30. All samples from infants of type Stool_Stabilizer in months 0\u201324 of life were included, and maternal samples were excluded. Data were processed using a robust centered log-ratio following Martino et al.30, which treats zero values as missing, and features that were observed in fewer than five samples were excluded. This resulted in a total of 43 infants (groups) with up to 24 time points (months) and 969 features that were provided as input to MEFISTO using month of life as the covariate. To calculate taxonomic enrichments of the factor weights, we used a one-sided Wilcoxon test, separately comparing positive and negative weights for each genus against the appropriate background. Mean factor weights per genus were visualized on a taxonomic tree using iTOL v6 (ref. 54). For the stability analysis, we randomly masked a varying number of samples (out of 650 observed samples) and trained MOFA8, MEFISTO and CTF (gemelli v0.0.5)30 on the masked data. For each method, factor stability was evaluated using the Pearson correlation of the factors on the masked data to the corresponding factors on the full data. To compare the factor weights of MEFISTO to associations with known covariates, we trained a linear mixed-effect (LME) model for each sOTU with time point and the covariate of interest as fixed effects and infant as the random effect. We subsequently extracted the LME model coefficients as effect size estimates and compared them to the factor weights of MEFISTO. For the single-cell multi-omics atlas of mouse gastrulation, data were obtained from the Argelaguet et al.32 study, in which details on quality control and data preprocessing can be found. In brief, gene expression counts were quantified over protein-coding genes using the Ensembl gene annotation 87 (ref. 55). The read counts were log-transformed and size-factor adjusted, the top ~1,000 most variable genes were selected, and the number of expressed genes per cell was regressed out prior to fitting the model. The UMAP algorithm34 was applied to the RNA expression data to infer the two-dimensional developmental coordinates used as covariates in MEFISTO. DNA methylation and chromatin accessibility data were quantified over transcription factor motifs across the genome. A position-specific weight matrix was extracted for each motif using the JASPAR database56 and motif occurrences in the genome were found using the Bioconductor package motifmatchr v1.12 with default options. For each cell and transcription factor motif, CpG methylation and GpC accessibility counts were aggregated across all motif instances. A CpG methylation or GpC accessibility rate for each transcription factor motif and cell was calculated by maximum likelihood under a binomial model and subsequently transformed to M-values. As input to MEFISTO we selected the top 500 most variable transcription factor motifs for each data modality. Cell cycle states for each cell were inferred using cyclone37 (as implemented in scran v1.18). To evaluate the imputation accuracy, random sets of cells of varying size were selected and their epigenetic data were masked. Methods were trained on the masked data and evaluated in terms of their imputation performance using the mean absolute error to the masked measurements. For the spatially resolved transcriptomics analysis, data were obtained from the SeuratData R package as stxBrain.anterior1, normalized, and the 2,000 most variable features selected using the NormalizeData and FindVariableFeatures functions provided by Seurat36. Normalized expression values at all 2,696 spots were provided to MEFISTO with tissue coordinates as the two-dimensional covariate. For training of MEFISTO, 1,000 inducing points were selected on a regular grid in space. For comparison, a model with 500 inducing points and a model with all spots were trained and compared in terms of their inferred factors as well as in terms of their interpolation accuracy. For the latter, 250 randomly selected spots were masked in ten independent experiments and the mean squared error between predicted and true expression values of these spots was calculated for MEFISTO (trained with different numbers of inducing points) as well as for MOFA8. Cell type markers were downloaded from https://panglaodb.se/markers.html (ref. 27), and markers annotated for mouse brain were used for the enrichment analysis. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information, details of author contributions and competing interests, and statements of data and code availability are available at 10.1038/s41592-021-01343-9. Supplementary Information: Supplementary Methods, Supplementary Figs. 1\u201317, Reporting Summary"}
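The masking experiments used throughout these methods share one evaluation recipe: hide a random subset of entries, impute, and compute the mean squared error on the hidden entries only. A minimal, model-agnostic sketch of that loop, with a hypothetical column-mean baseline standing in for the model under evaluation (both function names are ours):

```python
import numpy as np

def masked_mse(Y, impute, frac=0.2, seed=0):
    """Mask a random fraction of entries, impute, and score MSE on them.

    `impute` is any function mapping a matrix with NaNs to a filled-in
    matrix; the error is computed only on the held-out (masked) entries.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(Y.shape) < frac
    Y_masked = Y.copy()
    Y_masked[mask] = np.nan
    Y_hat = impute(Y_masked)
    return float(np.mean((Y_hat[mask] - Y[mask]) ** 2))

def column_mean_impute(Y):
    """Baseline: replace NaNs with the per-column mean."""
    col_means = np.nanmean(Y, axis=0)
    out = Y.copy()
    nan_idx = np.isnan(out)
    out[nan_idx] = np.take(col_means, np.where(nan_idx)[1])
    return out

rng = np.random.default_rng(1)
Y = np.tile(rng.normal(size=(1, 30)), (40, 1))  # constant columns: means recover them
mse = masked_mse(Y, column_mean_impute)
```

Any model (MEFISTO, MOFA, or a univariate Gaussian process per feature) can be plugged in as `impute`; the papers above additionally repeat the masking several times and vary the masked fraction.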
+{"text": "A key challenge to gaining insight into complex systems is inferring nonlinear causal directional relations from observational time-series data. Specifically, estimating causal relationships between interacting components in large systems, with only short recordings over few temporal observations, remains an important yet unresolved problem. Here, we introduce large-scale nonlinear Granger causality (lsNGC), which facilitates conditional Granger causality between two multivariate time series conditioned on a large number of confounding time series with a small number of observations. By modeling interactions with nonlinear state-space transformations from limited observational data, lsNGC identifies causal relations with no explicit a priori assumptions on functional interdependence between component time series in a computationally efficient manner. Additionally, our method provides a mathematical formulation revealing the statistical significance of inferred causal relations. We extensively study the ability of lsNGC in inferring directed relations from two-node to thirty-four-node chaotic time-series systems. Our results suggest that lsNGC captures meaningful interactions from limited observational data, where it performs favorably when compared to traditionally used methods. Finally, we demonstrate the applicability of lsNGC to estimating causality in large, real-world systems by inferring directional nonlinear, causal relationships among a large number of relatively short time series acquired from functional Magnetic Resonance Imaging (fMRI) data of the human brain. Systems with interacting components are ubiquitous in nature. A few examples of such systems are interactions between individual neurons, regions in the brain, protein interactions, climatological data, and genetic networks.
However, the underlying interactions between the various components of these systems are hidden; therefore, to understand their dynamics and glean more information about how various components interact or influence one another, we must infer causal relations from the available observational data. For instance, analyzing signals recorded from the brain activity of healthy subjects and patients with some form of neurodegeneration can reveal vital information useful for diagnosis and treatment. Identifying nonlinear and directed relations between components of a complex system, especially from simultaneously observed time series, is an actively growing area of research. One of the most widely used approaches for estimating causal relations from time-series data is Granger causality (GC) analysis6. It estimates a causal influence from one time series to another if the prediction quality of the influenced time series improves when the past of an influencer time series is used, as compared to its prediction quality when the past of the influencer is not used. GC was initially formulated for linear models but was later extended to nonlinear systems in7 and has shown promising results. Among the alternative methods for nonlinear causal discovery, transfer entropy (TE) was introduced in8, which was later found to be equivalent to GC for linear Gaussian processes9. Bivariate GC analysis considers pairs of time series in isolation10, while multivariate analysis conditioned on other variables distinguishes direct from indirect influences11. While GC is a multivariate analysis approach with both linear and nonlinear variants, its extension to large-scale systems, where the number of time series is much larger than the number of temporal observations, is challenging3, since the vector autoregressive models may involve solving inverse problems with redundant variables12. Various studies have proposed addressing this ill-posedness by dimensionality reduction or compressed sensing techniques14.
Moreover, most systems in nature exhibit complex dynamics that cannot be captured by linear approaches17. Nonlinear approaches may discover nonlinear relations, at the cost of increased computation time and a possible increase in the number of parameters to be estimated. In systems containing more than two time series, a bivariate analysis, i.e., considering pairs of time series one at a time without considering the effects of confounder variables, may result in spurious causalities as a consequence of indirect connections18. Although nonlinear extensions of GC have been proposed in7, and kernel-based nonlinear GC approaches in21, such approaches require a large number of observations to estimate causal relations effectively. Possible reasons for these restrictions are that, besides the computational expense, the extension to multivariate analysis of high-dimensional dynamical systems based on a low number of temporal observations is non-trivial and involves parameter optimization of complex nonlinear time-series models on limited data. In more recent literature, methods such as multivariate transfer entropy (TE)8 and multivariate mutual information (MI) with nonlinear estimators such as the Kraskov-Stoegbauer-Grassberger estimator22, and PC-momentary conditional independence (PCMCI)23, have been developed to improve the estimation of directed interactions from large-scale data. In this paper, we put forth the large-scale nonlinear Granger causality (lsNGC) approach to estimate the underlying directed relations between time series. By introducing a nonlinear dimension reduction step, lsNGC aims at estimating such interactions in large complex systems, while reducing redundancies and conditioning on the other variables in the system. In addition to being a nonlinear, multivariate method, lsNGC also provides control over the number of parameters to be estimated and derives statistically significant connections in the systems.
As such, lsNGC can effectively estimate interactions in large systems with short time-series data, without being computationally intensive. Besides presenting results of extensive computer simulations on synthetic networks, we also demonstrate the applicability of lsNGC to estimating connectivity from resting-state functional MRI. However, lsNGC may be useful for other domains as well, given that the data are represented as simultaneously acquired signals. In summary, an approach that estimates interactions in multivariate systems while conditioning on all variables in the system, reducing redundancy, and remaining computationally feasible is desired. Consequently, a causality analysis method should (1) be able to estimate causal interactions in multivariate systems, conditioned on all time series in the system, (2) be able to capture nonlinear dependencies, (3) work for systems with a large number of variables, and (4) be data-driven. We evaluate lsNGC against Kernel Granger causality20, mutual nonlinear cross-mapping methods15 using local models (LM), transfer entropy (TE)22, and Peter-Clark momentary conditional independence (PCMCI)23. We test the performance on simulated data with known ground truth of connections. Additionally, we demonstrate applying the proposed lsNGC approach to real time-series data recorded using functional Magnetic Resonance Imaging (fMRI) from subjects presenting with symptoms of HIV-associated neurocognitive disorder (HAND) and healthy controls. If lsNGC measures can characterize brain connectivity well, they should be useful in distinguishing the two subject groups. In the following sections, we discuss the lsNGC algorithm and the various networks under investigation.
Large-scale nonlinear Granger causality adopts theoretical concepts from Granger causality analysis; the supplementary material (section 1) details the theoretical concepts involved. Granger causality (GC) is based on the concept of time-series precedence and predictability: the improvement in the prediction quality of a time series in the presence of the past of another time series is quantified. This reveals whether the predicted time series was influenced by the time series whose past was used in the prediction, uncovering the causal relationship between the two series. Consider a system with N time series, each with T temporal samples, where each series is represented by a time-delay embedding of dimension d. LsNGC estimates causal relationships by first creating a nonlinear transformation of the state-space representation of the time series whose influence on others is to be measured, and another representation of the rest of the time series in the system. To perform a multivariate analysis, it is desirable to have a phase-space reconstruction in which prediction is performed using all the time series apart from the candidate influencer. In brief, two systems of phase states are constructed. Returning to Granger causality, GC works on the principle that if the prediction quality of a time series improves when the past of another series is included, the latter Granger-causes the former. Two prediction models are therefore constructed, where the pair (r,\u00a0s) denotes that the models investigate the influence of time series s on time series r: a restricted model without the candidate influencer and an unrestricted model with it. The nonlinear transformations use cluster centers obtained by k-means clustering, where k can be seen as the number of hidden neurons in a GRBF neural network24, with the activation function as in25. An f-statistic can be obtained by recognizing that the two models are nested: comparing the residual sum of squares (RSS) of the restricted model with that of the unrestricted model, the f-statistic quantifies the improvement in prediction quality attributable to the candidate influencer, and a measure of lsNGC causal influence is obtained using this f-statistic. Analogously, cluster centers for the second transformation are obtained by k-means clustering.
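The restricted-versus-unrestricted comparison can be made concrete with linear models. The sketch below replaces lsNGC's GRBF state-space transformations with plain lag regressors (an intentional simplification; the function name and defaults are ours) and forms the standard F-statistic for nested least-squares models:

```python
import numpy as np

def granger_f(x, y, d=1):
    """F-statistic for 'x Granger-causes y' with linear lag-d models.

    Restricted model:   y[t] ~ past of y
    Unrestricted model: y[t] ~ past of y and past of x
    F = ((RSS_r - RSS_u) / p) / (RSS_u / (n - k)), where p is the number
    of extra parameters and k the total parameters of the unrestricted
    model. (A linear stand-in for lsNGC's nonlinear GRBF models.)
    """
    T = len(y)
    target = y[d:]
    lags_y = np.column_stack([y[d - i - 1:T - i - 1] for i in range(d)])
    lags_x = np.column_stack([x[d - i - 1:T - i - 1] for i in range(d)])
    ones = np.ones((T - d, 1))
    Xr = np.hstack([ones, lags_y])            # restricted design matrix
    Xu = np.hstack([ones, lags_y, lags_x])    # unrestricted design matrix
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    n, k, p = len(target), Xu.shape[1], lags_x.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / (n - k))

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
f_causal = granger_f(x, y)    # large: x drives y
f_reverse = granger_f(y, x)   # near the null: y does not drive x
```

The asymmetry between `f_causal` and `f_reverse` is exactly what the directed connectivity matrix collects, one entry per ordered pair of series.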
The embedding dimension d is chosen using Cao\u2019s method described in26. In this work we adopt the Generalized Radial Basis Functions (GRBF) neural network for the nonlinear transformations. To evaluate the approach, several benchmark simulations are considered and performance is compared to four state-of-the-art approaches: mutual nonlinear cross-mapping methods15 using local models (LM), PC-momentary conditional independence (PCMCI)18, multivariate transfer entropy (TE)8 with the Kraskov-St\u00f6gbauer-Grassberger nonlinear estimator22 using the IDTxL toolbox27, and Kernel Granger Causality (KGC)20. These approaches are discussed briefly in the supplementary material, section 5. Note that various software implementations of TE are currently available in different toolboxes30. We chose IDTxL27, since it is the most recently developed software in this regard, providing automatic parameter selection, controlling false positives, and requiring only minimal user specification. We begin by creating benchmark simulations. Fifty different sets of each type of simulation were created, which is useful for estimating the consistency of the method. All simulations were generated to have 500 time points. Two-species logistic model: Before investigating empirical data or systems with a large number of time series, it is imperative to test performance on a simple network structure with directed interaction. To this end, the two-species logistic model, one of the commonly studied31 chaotic time-series systems, is considered; for the unidirectional case, the coupling constants take the values given in31. Complex system with three nodes: Next, we consider a three-node complex system; see supplementary section 3 for the equations, including the fan-out case. The equations for the system from32 and the linear 5-node network (5-linear) are provided in the supplementary material (section 3).
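The GRBF transformation adopted above is, at its core, a mapping of state vectors to Gaussian activations around cluster centers. A minimal sketch with hard-coded centers (in lsNGC the centers come from k-means and the width would be tuned; the names here are illustrative):

```python
import numpy as np

def grbf_transform(X, centers, sigma=1.0):
    """Map rows of X to Gaussian radial-basis activations.

    X: (n_samples, dim) state vectors; centers: (k, dim) cluster
    centers (e.g. from k-means). Each output column is
    exp(-||x - c||^2 / (2 sigma^2)) for one center c.
    """
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
Phi = grbf_transform(X, centers)
```

Each of the k columns of `Phi` plays the role of one hidden neuron; the downstream restricted and unrestricted predictors regress on these activations rather than on raw lags, which is what makes the resulting Granger test nonlinear.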
Results estimating the direction of connection are provided in the supplementary material (section 4). 34-node Zachary network: Systems in nature involve a number of interacting factors. Hence, it is important to evaluate systems with a considerably large number of interacting time series. To test lsNGC on networks with a large number of nodes, we consider the Zachary dataset34 consisting of 34 nodes. The nodal interactions are adopted from20, with a = 1.8, s = 0.01 and c = 0.05, where the coupling term quantifies the influence that node j has on node i. Following20, directed networks are constructed by assigning each edge, with equal probability, to either direction. Apart from the directed connections, we randomly select 5 edges to be bidirectional. We construct 50 such networks and obtain 50 sets of time-series data from the corresponding networks (34-Zachary2). One of the 50 networks used is shown in the figure. The Receiver Operating Characteristic (ROC) curve plots the true positive rate (TPR) versus the false positive rate (FPR). Ideally, TPR = 1 and FPR = 0 at some threshold applied on the connectivity graph, i.e. the affinity matrix, for the AUC to equal 1. An AUC of 0.5 represents assignment of random connections, analogous to guessing the absence or presence of connections. Since the AUC quantifies both the strength of connections and the direction of information flow, it is used to evaluate performance in estimating the network structure. The AUC derives evaluation measures from the non-binarized connectivity matrix. However, it is also important to evaluate the true links obtained by significance testing of connections in the graph. The lsNGC measures of connectivity, expressed as f-statistic values, can be used to derive p-values for connections.
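The AUC over a non-binarized connectivity matrix reduces to a rank statistic: the probability that a randomly chosen true edge scores higher than a randomly chosen non-edge. A self-contained sketch over off-diagonal entries (ties counted as half; the naming is ours):

```python
import numpy as np

def connectivity_auc(scores, truth):
    """AUC of edge scores against a binary ground-truth adjacency.

    Compares every true edge's score with every non-edge's score
    (diagonal excluded); this equals the area under the ROC curve.
    """
    off = ~np.eye(truth.shape[0], dtype=bool)
    pos = scores[off & (truth == 1)]
    neg = scores[off & (truth == 0)]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

truth = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
perfect = truth * 5.0 + 0.1   # every true edge outscores every non-edge
auc = connectivity_auc(perfect, truth)
```

Because the adjacency matrix is asymmetric, scoring entry (i, j) separately from (j, i) means the AUC rewards getting the direction of information flow right, not just the presence of a link.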
Significant connections are obtained after multiple comparisons correction using the False Discovery Rate (FDR) method, from which sensitivity and specificity are derived. LsNGC derives measures of nonlinear connectivity represented as edges in a network graph. These are non-binary scores, from which we obtain a measure of the Area Under the receiver operating characteristic Curve (AUC). However, before deriving AUC measures, the connectivity matrix is log-transformed to reduce the skew in the f-statistic measures. Here, method refers to the analysis method, i.e., lsNGC, LM, PCMCI, TE or KGC. We now present quantitative results on the recovered graphs for the various simulations. For every network investigated in this study, 50 different sets of time series were simulated, and results are summarized as boxplots for the factors under investigation. It is also essential to test the performance of lsNGC for a lower number of observations; to this end, the time-series length is varied from 500 to 50 time points. LsNGC showed promising results on the simulations. Nevertheless, its performance on real data can give more insight into its usability for various applications involving the estimation of underlying interactions from signals. In this work, we analyze its performance on functional Magnetic Resonance Imaging (fMRI) data. It has been demonstrated that individuals presenting with symptoms of HAND have quantifiable differences in connectivity from controls36. We hypothesize that if lsNGC can capture brain connectivity from fMRI data well, differences in connectivity between subjects with HAND and controls should be observed. Hence, we tested how well a classifier was able to discriminate the two subject groups. The classifier was able to learn relevant differences between the two groups using connectivity derived with lsNGC (AUC = 0.88 and accuracy = 0.77), suggesting that lsNGC was able to characterize the interactions well. More details on the data and analysis approach can be found in the supplementary material, section 7. The Kraskov method is a non-parametric approach for density estimation, which approximates the probability distribution based on the k-nearest neighbors and has been used to obtain TE in27. However, to enhance accuracy in large-scale multivariate time series, one needs to increase the number of nearest neighbors, which enforces a tradeoff between computational cost and accuracy for density estimation. Newer parametric estimators, such as the Mutual Information Neural Estimator (MINE)37, which rely on characterizing the mutual information as a Kullback-Leibler divergence, may improve the performance of TE and possibly lsNGC (here we use the GRBF as a density estimator). However, such an estimator requires training a generative model without any explicit assumptions about the underlying data distribution. While this estimator is promising, we anticipate it to be quite computationally intensive. With lsNGC, we focus on an approach that can estimate density using a closed-form solution in the OLS formulation, unlike MINE, which requires learning a representation via iterative gradient-descent optimization. This paper puts forth a novel approach called large-scale nonlinear Granger causality (lsNGC), which unveils the underlying nonlinear interactions among multivariate, large-scale time series. We demonstrate its applicability on real and synthetic data and its advantages for systems with many nodes and short temporal recordings. A common trend across all investigated methods, i.e. lsNGC, LM, PCMCI, TE and KGC, is the decline in performance with an increasing number of nodes for a given number of temporal observations. However, performance may be improved by increasing the number of observations. We observe that lsNGC outperforms the other approaches in most cases.
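The FDR correction mentioned above is typically the Benjamini-Hochberg step-up procedure: sort the p-values, find the largest rank k with p_(k) <= (k/m)*q, and reject the hypotheses with the k smallest p-values. A sketch assuming that standard procedure (the text does not spell out its exact implementation):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return a reject/accept flag per p-value at FDR level q.

    Standard Benjamini-Hochberg step-up procedure: with m tests,
    reject the hypotheses with the k smallest p-values, where k is
    the largest rank such that p_(k) <= (k / m) * q.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for i in order[:k_max]:
        reject[i] = True
    return reject

flags = benjamini_hochberg([0.01, 0.5, 0.03, 0.02], q=0.05)
```

Applied to the matrix of per-edge p-values derived from the f-statistics, the surviving entries form the binarized connectivity graph from which sensitivity and specificity are computed.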
Furthermore, KGC and TE are markedly more susceptible to poor performance with an increased number of nodes. The connectivity measures used as features in a classifier were highly discriminative. This suggests that lsNGC is able to capture relevant information regarding the interaction between different regions in the human brain. lsNGC estimates significant connections directly from the f-statistic, without creating a null distribution from surrogate time series as is commonly done. Furthermore, using the f-statistic, our results on sensitivity and specificity demonstrate that lsNGC performs very well compared to other methods. Obtaining relevant connections after creating a null distribution with surrogate time series is possible with lsNGC; however, it would significantly increase the computational cost. The flexibility of estimating significant connections with lsNGC is a significant advantage over traditional approaches for detecting causality. Nevertheless, it is understandable to err on the side of caution, since estimating measures of significance with the f-statistic requires conditions of independence between delay embeddings to be met. Granger causality assumes that time series influence each other only d points in the past. Poor estimation of the order d can result in erroneous values of significance. This is especially relevant when time series in the system are themselves dependent, as is commonly the case with fMRI data. Recent work proposes hypothesis tests under autocorrelation39. Experimental evidence in39 demonstrates that commonly used hypothesis tests may result in type I or type II errors if autocorrelations exist among the various components in a system.
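Granger-style significance testing compares a restricted model (the target's own past only) with an unrestricted model (own past plus the candidate source's past) through an f-statistic. A minimal sketch of that statistic, with made-up residual sums of squares rather than a fitted lsNGC model:

```python
def granger_f_statistic(rss_restricted, rss_full, n_extra_params, n_obs, k_full_params):
    """F = ((RSS_r - RSS_u) / p) / (RSS_u / (n - k)).

    rss_restricted: residual sum of squares without the candidate source
    rss_full:       residual sum of squares with it
    n_extra_params: number of added lag coefficients (p)
    n_obs:          number of usable observations (n)
    k_full_params:  number of parameters in the full model (k)
    """
    num = (rss_restricted - rss_full) / n_extra_params
    den = rss_full / (n_obs - k_full_params)
    return num / den

# Hypothetical example: adding 2 lag terms of a source series drops the
# RSS from 10.0 to 5.0 over 20 observations with 4 full-model parameters.
f_val = granger_f_statistic(10.0, 5.0, 2, 20, 4)  # -> 8.0
```

Under the independence conditions discussed above, the statistic is compared against an F(p, n-k) distribution; when those conditions are violated (e.g., strong autocorrelation), this comparison can mislead, which is exactly the caveat raised in the text.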
This is an important problem to consider, and it is worth investigating the effect of autocorrelated time series on the lsNGC-derived f-statistic measure in a subsequent study. lsNGC is formulated as a multiple regression problem with a nonlinear basis transformation using GRBF. Although lsNGC shows promising performance for real-world data and simulations, one of its shortcomings is that its formulation only allows for an additive relationship between the time series whose influence is to be estimated and every other time series; in brief, lsNGC is additive in the basis functions. In summary, lsNGC is robust in inferring causal, nonlinear interactions across different network topologies. Like all investigated methods, underlying network size does affect its performance; however, it significantly outperforms conventionally used methods for practical time-series lengths. In this work, we propose a nonlinear method for large-scale multivariate time series, named large-scale nonlinear Granger causality (lsNGC), to infer underlying directed interactions from time-series data. The number of temporal observations limits most approaches proposed in the literature and imposes a challenge for performing multivariate, nonlinear causality analysis to reveal the underlying interactions of large systems. We investigated some of the existing nonlinear causal inference methods\u2019 advantages and limitations through experimentation and analysis with different network structures. We demonstrated the advantage of lsNGC over current state-of-the-art multivariate and bivariate approaches. The high AUC and good sensitivity and specificity results for various lengths of time-series data demonstrate its potential and applicability to real-world data. Furthermore, lsNGC\u2019s formulation allows obtaining binary interactions without creating a null distribution from surrogate time series, which is computationally expensive, especially for large networks.
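A regression with a nonlinear basis transformation, as used in the lsNGC formulation, admits a closed-form ordinary-least-squares solution once inputs are mapped through radial basis functions. The following self-contained sketch uses Gaussian RBF features on toy 1-D data; the centers, width, and weights are hypothetical and not the paper's actual lsNGC configuration:

```python
import math

def rbf_features(x, centers, width=1.0):
    """Map a scalar input to Gaussian RBF activations at the given centers."""
    return [math.exp(-((x - c) ** 2) / width) for c in centers]

def solve2(a, b):
    """Solve a 2x2 linear system a @ w = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def fit_ols(xs, ys, centers):
    """Closed-form OLS in RBF feature space via the normal equations."""
    phi = [rbf_features(x, centers) for x in xs]
    # Normal equations: (Phi^T Phi) w = Phi^T y (hard-coded for 2 centers)
    ata = [[sum(r[i] * r[j] for r in phi) for j in range(2)] for i in range(2)]
    aty = [sum(r[i] * y for r, y in zip(phi, ys)) for i in range(2)]
    return solve2(ata, aty)

centers = [0.0, 1.0]
xs = [-1.0, 0.0, 0.5, 1.0, 2.0]
true_w = [1.0, -1.0]  # hypothetical ground-truth weights
ys = [sum(w * f for w, f in zip(true_w, rbf_features(x, centers))) for x in xs]
w_hat = fit_ols(xs, ys, centers)  # recovers [1.0, -1.0] up to floating-point error
```

The appeal noted in the text is exactly this: unlike an iteratively trained estimator such as MINE, the OLS step is a single linear solve, so it scales predictably with the number of basis functions.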
Finally, we have demonstrated the applicability of lsNGC in inferring interactions among different regions of the brain from brain activity data obtained using functional magnetic resonance imaging (fMRI). Besides clinical applications for diagnosing neurological disorders, such an approach may reveal valuable insights about directed interactions in the brain. Supplementary Information."}
+{"text": "Listeria (L.) monocytogenes is known to survive heat, cold, high pressure, and other extreme conditions. Although the response of this pathogen to pH, osmotic, temperature, and oxidative stress has been studied extensively, its reaction to the stress produced by high pressure processing (HPP), a preservation method used in the food industry, and the gene regulatory network (GRN) activated in response to this stress are still largely unknown. We used RNA sequencing transcriptome data of the pathogen L. monocytogenes (ScottA) treated at 400 MPa and 8\u2218C for 8 min and combined it with current information in the literature to create a transcriptional regulation database, depicting the relationship between transcription factors (TFs) and their target genes (TGs) in L. monocytogenes. We then applied network component analysis (NCA), a matrix decomposition method, to reconstruct the activities of the TFs over time. According to our findings, L. monocytogenes responded to the stress applied during HPP by three statistically different gene regulation modes: survival mode during the first 10 min post-treatment, repair mode during 1 h post-treatment, and re-growth mode beyond 6 h after HPP. We identified the TFs and their TGs that were responsible for each of the modes. We developed a plausible model that could explain the regulatory mechanism that L. monocytogenes activated through the well-studied CIRCE operon via the regulator HrcA during the survival mode. This coordinated response may be a strategy of L. monocytogenes to survive and recover from extreme HPP conditions. We believe that our results give a better understanding of L. monocytogenes behavior after exposure to high pressure that may lead to the design of a specific knock-out process to target the relevant genes or mechanisms. The results can help the food industry select appropriate HPP conditions to prevent L.
monocytogenes recovery during food storage. Our findings suggest that the timely activation of TFs associated with an immediate stress response, followed by the expression of genes for repair purposes, and then re-growth and metabolism, could be a strategy of L. monocytogenes to survive extreme HPP conditions. The online version contains supplementary material available at (10.1186/s12864-021-07461-0). Extensive studies revealed how bacteria respond to various environmental stresses such as heat/cold shock, hyperosmotic and oxidative stress, nutrient depletion, acid, and antibiotics. L. monocytogenes is one of the target organisms in HPP of food due to its ability to tolerate adverse conditions such as refrigeration temperatures, and it has been reported that L. monocytogenes can survive high pressure levels of up to 400 MPa. Although many studies indicated bacterial growth inhibition after HPP, exposure of Escherichia (E.) coli to relatively low hydrostatic pressures (30 and 50 MPa) revealed regulation by several DNA-binding proteins, and Bowman et al. performed a similar analysis for L. monocytogenes. However, as they only performed a single measurement of gene expression after exposure to high pressure, knowledge about the temporal gene regulatory response of bacteria is still missing. As a bacterial response to many types of stress involves similar mechanisms, findings from other stresses are informative: the stress response of E. coli has been studied extensively, and it is known for L. monocytogenes and some other organisms that the transcription factors (TFs) CtsR, HrcA, and CcpA regulate several genes, including those encoding chaperones and heat shock proteins such as DnaKJ and GroESL. Studies of the response of Bacillus subtilis or L. monocytogenes to acid and antibiotics identified further genes in L. monocytogenes that were shown to be involved in the stress response. Here, we used the pathogenic strain L. monocytogenes ScottA and studied how the GRN in this type of bacteria responded to HPP with time. We exposed the bacteria to a high pressure of 400 MPa at 8\u2218C for 8 min.
We performed RNA sequencing analysis at nine time points following HPP to extract differentially expressed genes, which we have described in detail in a separate work. Here, we focused on reconstructing the activities of the TFs of L. monocytogenes over time after HPP, and then clustered the regulators into three different temporal groups. We found that the transcriptome of L. monocytogenes operated in three distinct time phases in response to high pressure: an early phase (0-10 min), a mid-phase (30-60 min), and a late time phase (6-48 h) after HPP. Most importantly, we found that the regulatory function of the first phase might be related to survival by regulating genes encoding for chaperones, cell wall structure, DNA repair, and SOS response. The second time phase involved GRN with a central role in synthesizing membrane components such as transmembrane proteins. The third phase appeared to regulate functions related to energy metabolism and re-growth. Furthermore, from our analysis, we derived a model of the regulation of chaperone production by HrcA as a TF in the first minutes after pressure treatment. This model is similar to the previously described heat shock model. This temporal GRN division indicated a well-structured and timely response to stress, suggesting that bacteria could have evolved to switch the functionality mode with a strong priority to survive stress, repair, and re-initiate growth. A database that includes the network information between TFs and their TGs in L. monocytogenes was missing. We therefore created a connectivity network between 37 TFs and 1113 TGs in L. monocytogenes. Many of the connections between the TFs and TGs in our initial network were not relevant in response to high pressure stress (TGs with connectivity strength (CS) values less than 0.1 in A).
Removing connections with CS <0.1 resulted in a network between 26 TFs and 533 TGs. The TGs of the reduced network are associated with functions including cell wall (22/533), synthesis of chaperones and heat shock proteins and SOS response (32/533), virulence activity (14/533), ribosomal proteins (39/533), regulation of DNA replication and cell division (18/533), production of other transcription factors (15/533), and energy metabolism (95/533). Next, we studied the temporal activities of the 26 TFs of the reduced network during the recovery period after HPP. We identified a list of differentially expressed genes in pressure-treated samples compared to control samples by RNA sequencing analysis. To investigate whether a TF activity was influenced and regulated (irrespective of whether it was increased or decreased) in response to HPP compared to control, we set a threshold value found by simulations. We found that for the TFs that were activity-regulated during the first time points, the mean value (over 100 simulations) of activity was significantly different at the p<0.05 level (F=7.15, p=2.7\u00d710\u22127) from the remaining time points, and similarly the third group differed (F=5.81, p=2.61\u00d710\u22126) from the other time points. The second group contained several TFs that belonged to the first or third groups as well. By taking the TFs that were activity-regulated only during the second period, we found that the second group was also significantly different at the p<0.05 level (F=331.89, p\u22480) from the first and third groups. Taken together, these results suggest three clusters of TFs, grouped according to their activity profiles: TFs belonging to the early phase (0-10 min), mid-phase (30-60 min), and late phase (6-48 h) after HPP. We found that the activities of 12/26 TFs were regulated during the early phase, i.e., the first 10 min post-treatment; most of the TFs in this phase were regulated positively.
We also analyzed how HPP changed the L. monocytogenes TF-TG network topology during the first 48 h post-treatment. More than half of the TFs (14/26) were involved in the late phase. Several previous studies support the existence of a temporally structured gene expression in bacteria in response to stress. In accordance with our findings, Veen et al. reported that the heat stress response of L. monocytogenes included upregulation of SOS response, heat shock, and cell wall associated genes during the first 3 min after heat exposure, while genes encoding for cell division proteins were upregulated later. Another work reported a temporally structured response of E. coli after antibiotic treatment with TMP (trimethoprim). In Herminiimonas arsenicoxydans, an early (0-2 h) response of arsenic resistance, oxidative stress, chaperone synthesis and sulfur metabolism, and a late (8 h) response of arsenic metabolism, phosphate transport and motility were reported. These temporal regulations are consistent with our observations of the timely ordered response of L. monocytogenes following HPP. LexA is a repressor of the SOS regulon in L. monocytogenes, which consists of genes encoding proteins associated with translesion DNA synthesis and repair. L. monocytogenes appears to respond to HPP by regulating the SOS response, thereby likely contributing to survival. Our NCA results showed a reduced activity for the repressor LexA over the first 10 min after pressure treatment, suggesting the upregulation of LexA-regulated genes, including DNA repair genes of the SOS regulon. RNA sequencing results revealed upregulation of lexA, recA, and several other LexA-regulated genes such as DNA polymerase IV and V of L. monocytogenes after exposure to HPP at 400 MPa for 8 min, arguing for an activated SOS response. The regulation of clpP and clpE by the repressor CtsR of L. monocytogenes has been demonstrated previously. The lower activity of CtsR that we found in the pressure-treated sample compared to the control might allow the expression of stress tolerance genes and contribute to survival of L.
monocytogenes upon exposure to high pressure stress. According to the NCA results, the activity of the CtsR protein, which regulates heat shock genes negatively, was suppressed in response to HPP; Nair et al. demonstrated such CtsR-mediated repression of heat shock genes. Our RNA sequencing results indicated that clpP and clpE genes were upregulated during the first 10 min after HPP. NagR, a TF involved in the N-acetylglucosamine utilization pathway in L. monocytogenes, regulates the expression of the nagA and nagB genes; upregulation of these genes in L. monocytogenes after HPP at 400 MPa for 8 min was reported previously. The CcpA regulon of L. monocytogenes included several genes encoding for PTS systems. NCA results suggested that CcpA activity was higher in pressure-treated bacteria compared to untreated ones, mainly during the late phase. Our observations suggested that the chaperonin group played a critical role in the first line of bacterial response to high pressure. Two operons encoding for molecular chaperones were identified in the previous decades as the CIRCE (controlling inverted repeat of chaperone expression) operons; hrcA (heat shock regulation at CIRCE) is the gene encoding for the repressor protein binding to the CIRCE element. The GroE chaperonin system is responsible for creating an equilibrium between active and inactive forms of the repressor HrcA, where the inactive form is unable to bind to its operator. The expression of hrcA in L. monocytogenes might therefore be essential during the early phase after HPP. Our NCA model indicated the inactivation of the repressor HrcA rather than its degradation, which is consistent with the measured expression levels of the hrcA gene, whereas dnaK was highly over-expressed (compared to control) at 24 and 48 h after treatment.
This suggests that a factor other than active HrcA might regulate the transcription of dnaJ and switch dnaJ (but not dnaK) expression back to its normal level via a second promoter: one promoter precedes the whole operon and is activated under stress conditions, whereas the other (PA2) is located between dnaK and dnaJ, allowing separate regulation of the dnaK and dnaJ genes in L. monocytogenes. Hu et al. reported a possible co-regulation of hrcA, or of other genes in the HrcA regulon, by SigB. Chaturongakul et al. identified additional regulation of dnaJ and groEL of L. monocytogenes, which may again explain the difference we observed between the expression behaviour of dnaJ and dnaK. They also indicated that the expression of groES, in addition to HrcA, might be under control of SigB and SigH, a co-regulation that needs to be considered to improve the model in future works. Analysis of other strains of L. monocytogenes, such as RO15, is essential as well to better understand the role of the GRN in more barotolerant strains. Predictions in this work were based on an optimal model that guarantees a unique solution for the reconstructed TF activities. The regulatory response of L. monocytogenes to HPP is mostly unknown. Here we created a gene regulatory database for L. monocytogenes (strain ScottA), which was then used as input to the NCA algorithm to reconstruct the activity of regulators (TFs) during 48 h after pressure treatment at 400 MPa, 8\u2218C, for 8 min. Our transcriptome analysis following HPP in L.
monocytogenes indicated a timely structured response that corresponds to three distinct time phases: an early phase (the first 10 min after HPP), which was shown to be associated with survival by regulation of genes encoding for chaperones, cell wall components, and SOS response; a mid-phase (30-60 min after HPP), which was related to the regulatory networks with the primary role in the repair of membrane components; and a late phase (6-48 h following HPP), in which the activity of TFs involved in energy metabolism pathways and re-growth was regulated. Based on our observations, the chaperonin group played a central role in the initial response of L. monocytogenes to high pressure. Therefore, we studied the regulation of this group in more detail. We proposed a model that could explain the modulation of HrcA activity after HPP, which facilitated the expression of chaperone genes in response to pressure stress. We believe that our results provide a better understanding of L. monocytogenes behavior after high pressure exposure that may help with the development of a specific knock-out process to target critical genes and increase the efficiency of HPP in the food industry. For the experiments, L. monocytogenes Scott A was statically grown in full BHI broth at 37\u2218C until reaching the early stationary phase (\u2248 1.3 OD600). The culture was then transferred to 2 mL Eppendorf tubes, which were fully filled and carefully sealed by avoiding the formation of air bubbles inside. Prior to HPP, both controls and samples to be treated were cooled down by storing at 4\u2218C for one hour. The samples were treated at 400 MPa, 8\u2218C, for 8 min, in a multi-vessel high pressure equipment, with the compression rate applied during pressure build-up being 100 MPa/min. The pressure-transmitting fluid was a mixture of water and propylene glycol.
An additional minute after the come-up time was considered as the equilibration time necessary for the treatment. The decompression of vessels was carried out automatically, in less than 5 seconds. After decompression, both treated and control samples were stored at 8\u2218C, at atmospheric pressure (0.1 MPa), for certain times considered as recovery time points: 5, 10, 30, 45, and 60 min and 6, 24, and 48 h. At each mentioned time point, both treated samples (5 replicates) and corresponding control samples (4 replicates) were mixed with 4 mL of RNA protect reagent for RNA stabilization, incubated at room temperature for 5 min, pelleted by centrifugation at 5000 rpm, and stored at \u221280\u2218C until the RNA extraction procedure. We measured the number of viable L. monocytogenes cells by using the spread plate count method before exposure to high pressure (untreated) and at days 0, 1, and 2 after HPP. Dilutions of samples were plated on the nonselective medium tryptone soya agar supplemented with 0.6% (w/v) yeast extract and incubated at 37\u2218C for 48 h before counting. RNA sequencing and analysis of the data for obtaining differentially expressed genes were described in a separate work. We built a connectivity network for L. monocytogenes EGD-e connecting 37 TFs and 1113 TGs, mainly using the current information in the RegPrecise database and some additional literature. We extended the regulons of L. monocytogenes EGD-e for three TFs by verifying binding sites (BS) using a comparative genomics approach. We took six complete genomes of different Listeria species/subspecies (including EGD-e) and Bacilli TFBS (transcription factor binding sites) profiles for the three TFs mentioned above. First, we predicted homologs in all the genomes using GET_HOMOLOGUES. Then, using the Bacilli TFBS profiles and the FIMO tool (MEME suite), we searched for BS in the genomes and pre-selected genes whose homologs belong to Bacilli regulons or other species (based on the RegPrecise database and literature mentioned above) or have a relevant function (related to the TF in question).
The upstream regions of the pre-selected genes were used to create a new Listeria-specific TFBS profile, which was then used to search the genomes again, presumably giving more accurate results. Again, only the genes in the EGD-e strain with BS that had homologs with BS in at least two other genomes were selected for the final list of regulons in the EGD-e strain. Predicted regulons for the three mentioned TFs are provided in the supplementary material. We employed Network Component Analysis (NCA) to reconstruct the activities of the TFs of L. monocytogenes following HPP. The NCA solves a matrix decomposition problem presented as E = A\u00b7P, where E is the matrix of differentially expressed gene values, i.e. the log2 FC for each gene, log2(mRNAHPP(t)/mRNActrl(t)), at different recovery time points obtained from RNA sequencing experiments. mRNAHPP(t) and mRNActrl(t) are mRNA counts in the pressure-treated and control sample, respectively. In this matrix, each row corresponds to one TG, and each column corresponds to one time point. We used our curated connectivity network and a random initial guess for the matrix A that preserves the null space of this connectivity matrix as inputs to the NCA algorithm. The algorithm then predicts a number as the CS between each regulatory layer (TF) and its TG (matrix A), as well as the matrix P, the reconstructed activity for TFs over time, log(TFAHPP(t)/TFActrl(t)) (where TFA is TF activity). In the matrix P, each row represents one TF, and each column represents one time point. The dimensions of E, A, and P are N\u00d7M, N\u00d7L, and L\u00d7M, respectively, where N is the number of TGs, M is the number of time points, and L is the number of TFs.
The NCA algorithm finds A and P by minimizing the residual E\u2212AP. The decomposition of E into A and P is unique up to a scaling factor X if A and P satisfy a set of mathematical criteria: (1) the connectivity matrix A must be full-rank in columns; (2) when we remove a TF with all the TGs connected to it, the remaining sub-network must have a connectivity matrix which is still full-rank in columns; and (3) the matrix P must be full-rank in rows. To satisfy the third criterion, the number of time points for each gene must be greater than or equal to the number of TFs regulating that gene. This criterion was not valid in our case, and therefore we used a modified NCA algorithm that allows such a decomposition. Our connectivity network contains the information about 37 TFs and their TGs, from which we extracted the matrix A with L=26 TFs and N=678 TGs such that the three criteria above are satisfied. To initialize the A matrix, we defined a set of constraints such that if TGi is positively (negatively) regulated by TFj, aij=1 (aij=\u22121), and if TGi is not regulated by TFj, aij=0. We used the software Cytoscape to illustrate the network. The gene expression matrix E contains expression values for 678 genes over nine time points. The activity matrix P contains normalized units of 26 TFs at nine time points, all relative to the control. We normalized the activity of each TF (rows of P) at each time point (columns of P) to its maximum level. We defined that the activity of a TFj at any given time point tk in the normalized matrix P was regulated (either activated or suppressed) when the absolute value at that time point in the matrix P exceeded a cut-off value. To determine this cut-off value, we increased threshold values incrementally (at steps of 0.01) and counted, at each time tk, the number of TFs with activity values above this threshold. Then at each time point tk, k={1,...,M}, we chose a threshold that reached a stable number of TFs, and computed the average of these thresholds over time.
By doing so, we set a cut-off value of 0.8 as a stable threshold. We used 100 simulation runs of the NCA algorithm and assessed group differences by analysis of variance (ANOVA); the homoscedasticity and normality conditions were checked. Additional file 1 Table S1 (.xls format): TF-TG network in L. monocytogenes. Additional file 2 Figure S1 (.pdf format): TF activity (TFA) ratio. Error bars show the mean and standard deviation of TFA at each time point over 100 simulations. To make early time points distinguishable, the x-axis represents sample points for 9 time points (1-9) corresponding to 0, 5, 10, 30, 45, 60 min and 6, 24, 48 h, respectively. Additional file 3 Table S2 (.xls format): Content of the matrix A with connectivity strength (CS) values for each TF-TG connection."}
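Given a fixed connectivity matrix A, the NCA reconstruction of TF activities reduces, column by column, to ordinary least squares: each time point's expression vector E[:, t] is regressed on A via the normal equations. A minimal pure-Python sketch of that step with a made-up 3-gene, 2-TF network (not the 678x26 system of the study, and omitting the alternating update of A that a full NCA run performs):

```python
def solve2(m, b):
    """Solve a 2x2 system m @ x = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - m[0][1] * b[1]) / det,
            (m[0][0] * b[1] - b[0] * m[1][0]) / det]

def nca_activity_step(A, E):
    """Least-squares estimate of P in E ~ A @ P for a 2-TF network.

    Solves the normal equations (A^T A) P[:, t] = A^T E[:, t] per time point.
    """
    n, L, M = len(A), 2, len(E[0])
    ata = [[sum(A[g][i] * A[g][j] for g in range(n)) for j in range(L)] for i in range(L)]
    P = [[0.0] * M for _ in range(L)]
    for t in range(M):
        atb = [sum(A[g][i] * E[g][t] for g in range(n)) for i in range(L)]
        col = solve2(ata, atb)
        for i in range(L):
            P[i][t] = col[i]
    return P

# Hypothetical signed connectivity (rows: target genes, cols: TFs)
A = [[2.0, 0.0], [0.0, 1.0], [1.0, 3.0]]
P_true = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]]  # TF activities over 3 time points
E = [[sum(A[g][i] * P_true[i][t] for i in range(2)) for t in range(3)] for g in range(3)]
P_hat = nca_activity_step(A, E)  # recovers P_true exactly in this noiseless case
```

The full-column-rank condition on A cited above is what makes A^T A invertible here; if a TF column were a linear combination of the others, the per-column solve would no longer have a unique answer.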
+{"text": "Skin, as the outermost layer of the human body, is frequently exposed to environmental stressors including pollutants and ultraviolet (UV) radiation, which can lead to skin disorders. Generally, the skin response to ultraviolet B (UVB) irradiation is a nonlinear dynamic process, with an unknown underlying molecular mechanism of critical transition. Here, landscape dynamic network biomarker (l-DNB) analysis of time-series transcriptome data on a 3D skin model was conducted to reveal the complicated process of skin response to UV irradiation at both molecular and network levels. The advanced l-DNB analysis approach showed that: (i) there was a tipping point before the critical transition state during the pigmentation process, validated by the 3D skin model; (ii) 13 core DNB genes were identified to detect the tipping point as a network biomarker, supported by computational assessment; (iii) core DNB genes such as COL7A1 and CTNNB1 can effectively predict skin lightening, validated by independent human skin data. Overall, this study provides new insights into skin response to repetitive UVB irradiation, including dynamic pathway patterns, biphasic response, and DNBs for skin lightening change, and enables us to further understand the skin resilience process after external stress. Skin is vulnerable to long-term exposure to environmental stressors like ultraviolet (UV) irradiation. Numerous studies investigated skin responses to UV irradiation from different viewpoints. However, previous studies mainly focused on one or more time points at the individual gene expression level, ignoring the whole dynamic process. Many studies have suggested that the progression of complex disorders is not always smooth, but occasionally abrupt, indicating the existence of a tipping point. We further validated the identified core DNB genes with independent
Collectively, our study combining wet LSE model with dry l-DNB model investigated the tipping point of transcription reprogramming during UVB irradiation and identified core DNB genes, which not only accurately predicted the skin lightness but also effectively prognosed in independent clinical trial. The results provide new insights into skin response to UVB irradiation and novel network biomarkers for skin protection.Here, to reveal the tipping point of skin response to UVB irradiation, we conducted landscape DNB (l-DNB) analysis of time series transcriptome data on the advanced 3D skin model, pigmented living skin equivalent (LSE) model, under repetitive UVB irradiation . We founTo study the dynamic response of skin to UVB exposure, two parallel experiments on LSE model, with/without UVB irradiation, were conducted simultaneously . TranscrConventional pairwise comparison was conducted to investigate differentially expressed genes (DEGs) induced by UVB at each sampling time point. Overall, there were 1431 DEGs on Day 1, 1386 DEGs on Day 4, 4879 DEGs on Day 6, and 1996 DEGs on Day 8. Obviously, the number of DEGs was not consistently increased with time due to accumulative UVB irradiation but fluctuated. The expression heatmap of DEGs also presented a time tendency of skin samples under UVB exposure . FurtherTo decipher the underlying dynamic process after UVB irradiation, the l-DNB approach was adopted and improved to incorporate network information for interpreting systemic dynamics. First, the single-sample network (SSN) approach was usedP-value as index, the top five transcription regulators were BRCA1, TP53, E2F4, MYC, and TWIST1. BRCA1, TP53 were identified for enrichment analysis. 
TP-DEGs over time were involved in pathways including the senescence pathway, which is a cell response to stress; GADD45 signaling and ATM signaling, which are DNA damage and repair-related pathways; Eumelanin biosynthesis, which can potentially influence lightening; and some other cell cycle-related pathways. Dynamic change of SSNs over time demonstrated the network evolution induced by UVB irradiation: there was a denser network at the tipping point, indicating a critical transition. The first-order neighbors of skin signatures on the network were also examined. Combining all information from the obtained SSNs, DNB genes, TP-DEGs, and TP-DNGs with functional anchors from the collected prior-known skin signatures, 13 core DNB genes were identified. Among core DNB genes, BRCA1, CTNNB1, TGFB1, ITGB1, and NF-\u03baB1 were upstream factors, which were predicted by Ingenuity Pathway Analysis (IPA) using DNB genes. In silico, in vitro, and in vivo validations were conducted to verify the importance of the tipping point for skin protection and consolidate the linkage between core DNB genes and skin lightness. For in silico validation, the DNB score curve calculated using core DNB genes showed a consistent tendency with the one using all genes. USBT2627 (a Unilever skin lightening active) was applied to reverse UVB-induced skin phenotype change; it has been demonstrated to have a skin lightening effect through numerous internal in vitro and in vivo studies. For in vivo validation, an independent human skin dataset with phenotype changes (L*) from a clinical trial was adopted. Using the L* value as a phenotype factor of human skin with UVB irradiation, the modified risk/survival curve corresponding to core DNB genes was calculated, and several core DNB genes had consistently significant P-values. A follow-up in vitro validation using USBT2627 was conducted to mimic skin exposure to UVB in real life. IPA was applied for pathway analysis.
The P-value of pathway analysis was used as an indication of pathway involvement. RNA sequencing was performed using the Illumina NovaSeq6000 platform (150PE). Per-base quality was assessed with FastQC software. HISAT2 was applied to align raw files to the human genome. RSEM was used to quantify reads to read counts and fragments per kilobase of transcript per million mapped reads (FPKM). Statistical analysis was applied to test the DEGs between groups. Ingenuity Pathway Analysis software was applied for pathway analysis. A statistical test on the sample-perturbed correlation distribution was applied to test whether gene x and gene y were significantly correlated at the single-sample level. Construction of DNB landscape: the tipping point identification with DNBs by the l-DNB method is mainly based on the three conditions of the original DNB theory. For each gene x, a perturbed correlation was defined by comparing x in the new sample d with x in the n reference samples, i.e., between x in the SSN constructed from sample d together with the reference samples and x in the reference network alone; this characterizes x for sample d against the n reference samples. The target gene and its first-order neighbors in the SSN were regarded as a local module. According to the three statistical conditions of DNB theory, the local DNB score for each gene in sample d was calculated from the standard deviation of the module genes, the correlation between genes within the module, and the correlation between the inside and outside of the module in the SSN of sample d. The DNB score of each sample was the average local DNB score of the top K genes in the SSN. If there were multiple samples at the same time point, the average DNB score of all samples at the time point was used as the DNB score of the time point. In the present study, according to the number of genes in each SSN and the landscape of each gene\u2019s local DNB score, K\u2009=\u2009600 was used. Actually, the DNB score showed a similar tendency when trying different K values. The tipping point was identified by selecting the largest DNB score. DNB genes were identified from the genes retrieved on the SSN at the identified tipping point.
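The three DNB conditions above combine into a composite index: within-module standard deviation and within-module correlation go up while correlation to the outside goes down. A minimal sketch of one common form of such an index, sd_in * |pcc_in| / |pcc_out|, on made-up expression vectors (the exact weighting used by the l-DNB method may differ):

```python
import math

def sd(v):
    """Sample standard deviation."""
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

def pcc(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def dnb_score(module, outside):
    """Composite DNB index: mean in-module SD times mean |in-module PCC|,
    divided by mean |PCC| between module genes and outside genes."""
    sd_in = sum(sd(g) for g in module) / len(module)
    pairs = [(i, j) for i in range(len(module)) for j in range(i + 1, len(module))]
    pcc_in = sum(abs(pcc(module[i], module[j])) for i, j in pairs) / len(pairs)
    pcc_out = sum(abs(pcc(g, h)) for g in module for h in outside) / (len(module) * len(outside))
    return sd_in * pcc_in / pcc_out

module = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]  # tightly co-varying, high SD
outside = [[5.0, 5.0, 4.0]]                  # weakly related background gene
score = dnb_score(module, outside)           # > 1, flagging a candidate DNB module
```

A rising score of this form near a sampling time point is what the l-DNB landscape uses to flag the tipping point; the larger the score relative to other time points, the stronger the critical-transition signal.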
In detail, to ensure the robustness of the identified DNB genes, we took a gene whose local DNB score ranked in the top K in at least two samples at the tipping point as a DNB gene of the tipping point. The DNB score of each sample was the average local DNB score of its top K genes. Network edges/features present in at least two SSNs at each time point were retained, which not only ensured the robustness of the selected network features but also considered the specific information in different samples. Using the fused SSNs at every time point as the background network, the rewiring of DNB-related networks was reconstructed. For a fusion network at a given time point, the DNB score of a gene appearing in this network was represented by its mean local DNB score across all SSNs at this time point, and the scores of other genes were set to 0. Next, differential network analysis was conducted using the SSNs at the tipping point and its adjacent time points. Edges in differential networks were newly appearing or disappearing edges between the SSNs before the tipping point and at the tipping point, or between the SSNs at the tipping point and after it. Depending on the tipping point-relevant differential networks, we can obtain hubs, i.e., genes/nodes with large node degree in the network, which are considered to play an important functional role due to their central position in the rewiring network. The nodes with high degree in the differential networks were ranked and prioritized for further analysis; e.g., genes with degrees higher than the average value were selected as TP-DNGs. Furthermore, DNB genes were overlaid with skin-related genes extracted from knowledge-based skin signature genes, and the intersection was used. Independent gene expression data of core DNB genes with/without USBT2627 (Unilever skin lightening active) under UV challenge were adopted for estimation.
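The differential-network and hub-selection steps above can be sketched directly on edge sets. This is a simplified illustration (edges as plain tuples, no weights); the function names are ours.

```python
def differential_edges(ssn_before, ssn_at):
    """Edges that newly appear or disappear between two SSN edge sets
    (edges represented as tuples; a simplification for illustration)."""
    return (ssn_at - ssn_before) | (ssn_before - ssn_at)

def hub_genes(diff_edges):
    """TP-DNG-style hubs: nodes whose degree in the differential network
    exceeds the average degree, as described in the text."""
    deg = {}
    for u, v in diff_edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    avg = sum(deg.values()) / len(deg)
    return {g for g, d in deg.items() if d > avg}
```

For example, comparing the SSN edge set just before the tipping point with the one at the tipping point yields a differential network whose above-average-degree nodes are the candidate TP-DNGs.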
The independent data include selected gene expression for the placebo group and the active group at an early time point (8 h post last UV irradiation) and phenotype (L*) at multiple time points. In total, 38 samples were collected for validation. Survival analysis was adopted and modified to validate the predictive potency of the identified genes for L*. For the survival curve of phenotypic changes of skin under UV irradiation, a phenotypic value change of a sample greater than a threshold is defined as the occurrence of an event, and the risk here refers to the proportion of samples with such event occurrence at a time point. For the P-value of survival analysis, the actual number of events occurring in each group over a given time period was compared with the theoretical number of events assuming the same mortality rate in both groups (null hypothesis). Samples were divided into a high-expression group (group A) and a low-expression group (group B) according to gene expression, with their respective sample sizes. The theoretical events in each group at each time point, and their totals over all time points, were compared with the total observed events; if the survival rates are the same for both groups, observed and theoretical events should agree. Since the identified core DNB genes accurately predicted lightness (L*), a follow-up independent in vitro validation using USBT2627 was conducted; here, USBT2627 was applied after the tipping point as the compared group. Codes used in the paper are publicly available on GitHub at https://github.com/ChengmingZhang-CAS/Landscape-DNB-analysis-for-time-series-data-of-LSE-model-irradiated-by-UVB. Please contact the authors if interested in the data. Supplementary material (mjab060_Supplementary_Material) is available at Journal of Molecular Cell Biology online."}
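The record omits the equations for the theoretical and observed event counts. The comparison described, observed events per group versus the events expected under a shared event rate, can be sketched in the spirit of a log-rank test; the function name and the chi-square-style aggregation are our assumptions.

```python
def logrank_style_statistic(events_a, events_b, at_risk_a, at_risk_b):
    """Compare observed events per group with theoretical events under a
    shared event rate (the null hypothesis), summed over time points.
    at_risk_x[t] is the number of samples at risk, supplied by the caller."""
    stat = 0.0
    for oa, ob, na, nb in zip(events_a, events_b, at_risk_a, at_risk_b):
        d, n = oa + ob, na + nb
        ea, eb = d * na / n, d * nb / n   # theoretical events per group
        for o, e in ((oa, ea), (ob, eb)):
            if e > 0:
                stat += (o - e) ** 2 / e  # chi-square-style discrepancy
    return stat
```

When both groups experience events at the same rate the statistic is zero; large values indicate that the gene's expression level separates the phenotype trajectories.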
+{"text": "The gene expression profile, or transcriptome, can represent cellular states, so understanding gene regulation mechanisms helps us understand how cells respond to external stress. Interaction between a transcription factor (TF) and a target gene (TG) is one of the representative regulatory mechanisms in cells. In this paper, we present a novel computational method to construct condition-specific transcriptional networks from transcriptome data. Regulatory interaction between TFs and TGs is very complex, specifically involving multiple-to-multiple relations. Experimental data from TF Chromatin Immunoprecipitation sequencing are useful but produce only one-to-multiple relations between a TF and its TGs. On the other hand, co-expression networks of genes can be useful for constructing condition-specific transcriptional networks, but they contain many false positive relations. In this paper, we propose a novel method to construct a condition-specific and combinatorial transcriptional network, applying kernel canonical correlation analysis (kernel CCA) to identify multiple-to-multiple TF–TG relations in a certain biological condition. Kernel CCA is a well-established statistical method for computing the correlation of one group of features vs. another group of features. We therefore employed kernel CCA to embed TFs and TGs into a new space where the correlation of TFs and TGs is reflected. To demonstrate the usefulness of our network construction method, we used blood transcriptome data for the investigation of the response to a high-fat diet in human, and an Arabidopsis data set for the investigation of the response to cold/heat stress. Our method detected not only important regulatory interactions reported in previous studies but also novel TF–TG relations where a module of TFs regulates a module of TGs upon specific stress.
In a living cell, rewiring of interactions among proteins, genes, and RNA molecules orchestrates how cells respond to external stimuli. One of the most fundamental regulatory relationships arises from transcription factors (TFs) that bind to the promoters of target genes (TGs), changing transcriptional dynamics. Since TF–TG interactions can be represented as a network, the dynamics of gene regulatory mechanisms upon stimuli can be modeled and analyzed as a gene regulatory network (GRN). High-throughput experimental techniques, such as Chromatin Immunoprecipitation sequencing (ChIP-seq), have been widely utilized to construct GRNs by detecting one-to-multiple relationships between a TF and its TGs. Such experimental techniques are powerful but provide only a partial snapshot of a condition-specific GRN. TF ChIP-seq can measure only one TF at a time, and it is not practical to perform ChIP-seq experiments for all TFs under various conditions. More importantly, multiple TFs work together to regulate multiple TGs in a condition-specific way; thus, data from TF ChIP-seq need to be combined to construct networks of multiple TFs and multiple TGs simultaneously. It is therefore necessary to develop computational methods for elucidating multiple-to-multiple relations of TFs and TGs in a specific condition. There have been several studies to identify multiple-to-multiple interactions, including a study by Jolma et al. on cooperating TF pairs. A complementary direction is in silico reverse engineering methods that infer GRNs from gene expression data. Correlation-based network inference methods, the most straightforward approach, detect regulatory relations if two genes are linearly correlated. Mutual information is a generalization of the correlation-based model that can detect non-linear dependencies, and ARACNe builds a mutual-information network while taking into account the effect of third-party genes in addition to the two correlating genes.
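The correlation-based baseline described above is simple enough to sketch directly; this is an illustration of the general technique, not the paper's pipeline, and the function name is ours.

```python
import numpy as np

def correlation_network(expr, gene_names, threshold=0.8):
    """Edges between gene pairs whose absolute Pearson correlation meets
    the threshold (the straightforward correlation-based inference).
    expr is a samples x genes matrix."""
    c = np.corrcoef(expr.T)
    edges = set()
    for i in range(len(gene_names)):
        for j in range(i + 1, len(gene_names)):
            if abs(c[i, j]) >= threshold:
                edges.add((gene_names[i], gene_names[j]))
    return edges
```

Its weakness is exactly the one the text notes: it captures only pairwise linear dependence, so indirect and non-linear relations produce false positives and false negatives.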
For detection of combinatorial relations of TFs and TGs specific to the dataset, i.e., condition-specific combinatorial relations, we utilized kernel canonical correlation analysis (kernel CCA). Kernel CCA is a well-established statistical method for learning coefficients of two groups of features that maximize the correlation of one group of features vs. the other. Our method takes gene expression data as input and produces a network of gene–gene regulatory relations. A public PPI network and GRN were utilized as prior knowledge to guide the detection of correct TF–TG relations. Specifically, our approach consists of three steps; for community detection we used the multilevel.community function in the R igraph package, which implements the Louvain algorithm. Data are mapped into feature spaces through feature maps φx and φy, where φx(x) is the projection of a data point x ∈ X and likewise φy(y) is the projection of a data point y ∈ Y. We represent the datasets projected into feature space as Φx = [φx(x1), φx(x2), ⋯, φx(xn)] and Φy = [φy(y1), φy(y2), ⋯, φy(yn)], respectively. Applying the kernel trick, the similarities of feature vectors can be defined through a positive definite kernel Kx(i, j) = ⟨φx(xi), φx(xj)⟩ for i, j = 1, 2, …, n; specifically, we applied the Gaussian RBF kernel k(x, x′) = exp(−∥x − x′∥²/(2σ²)) (Equation 1). We define the kernel projections of the data, or kernel Gram matrices, as Kx and Ky. The aim of kernel CCA is to find projection vectors fx and fy that maximize the correlation of the canonical components. Since fx and fy lie in the space spanned by the feature-space-mapped objects, we can represent the canonical vectors as linear combinations fx = Φx α and fy = Φy β, so that u = Kx α and v = Ky β are represented with the kernel matrices, where α ∈ ℝⁿ and β ∈ ℝⁿ are expansion coefficients. The problem can be reformulated as a generalized eigenvalue problem with regularization, where I denotes the identity matrix, λ is the regularization parameter, and ρ = max⟨u, v⟩/(∥u∥∥v∥).
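The regularized kernel CCA eigenproblem can be sketched with NumPy. This is a minimal sketch, not the paper's implementation: we assume the Hardoon-style reduction in which α solves (Kx + λI)⁻¹ Ky (Ky + λI)⁻¹ Kx α = ρ² α, and both function names are ours.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian RBF Gram matrix K[i, j] = exp(-||xi - xj||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kernel_cca_first_component(Kx, Ky, lam=0.1):
    """First canonical correlation rho and expansion coefficients alpha from
    the regularized eigenproblem (Kx+lam*I)^-1 Ky (Ky+lam*I)^-1 Kx a = rho^2 a."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # center the Gram matrices
    Kx, Ky = H @ Kx @ H, H @ Ky @ H
    M = (np.linalg.inv(Kx + lam * np.eye(n)) @ Ky
         @ np.linalg.inv(Ky + lam * np.eye(n)) @ Kx)
    w, V = np.linalg.eig(M)
    i = int(np.argmax(w.real))
    return float(np.sqrt(max(w.real[i], 0.0))), V[:, i].real
```

The regularization term λI is what keeps the problem well-posed: without it, full-rank Gram matrices allow trivial perfect correlations in the feature space.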
Once we obtain the solutions of the above equations, which represent the amount of contribution of each sample, we multiply the transposed gene expression matrices Xᵀ ∈ ℝ^(p×n) and Yᵀ ∈ ℝ^(q×n) with the canonical weight vectors α ∈ ℝⁿ and β ∈ ℝⁿ to get the TF and TG embeddings. With k canonical components orthogonal to each other, we obtain TF and TG embedding matrices with k canonical dimensions. Each of the k dimensions is weighted with the corresponding eigenvalue so that the eigenvalue-weighted embedding is dominated by the leading eigenvectors. For every possible TF–TG pair retrieved from the public GRN, we next computed the dot-product similarity of the TF and TG embeddings to define an edge weight for the pair. We then filtered out edges with weights below 0.5. This process is performed iteratively until there are no TFs left in the candidate TG lists. Using kernel CCA, genes that contribute greatly to the correlations of TFs and TGs gain greater weights in the canonical embeddings, and a TF–TG pair in which both TF and TG show high weights should be marked as a valid pair; the eigenvalue weighting was inspired by Seo and Kim. We analyzed public time-series gene expression data from NCBI GEO datasets. GSE127530 is an RNA-seq dataset that measures the human blood transcriptome after a high-fat meal (HFM) at three time points with 15 samples for each time point, where the time points are denoted tp0, tp1, and tp3. Raw counts are normalized with respect to gene length with TPM (transcripts per million). For our method, we applied MinMaxScaler in the Python sklearn library so that correlations are not dominated by highly expressed genes. GSE5621 is a microarray dataset that measures the transcriptome from shoots of Arabidopsis thaliana in response to cold stress at seven time points with two replicates for each time point, where the time points are denoted tp0 through tp6.
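The embedding and edge-filtering steps described above can be sketched as follows; this is an illustration under our naming (`embed`, `edge_weights`), using the eigenvalue weighting and the 0.5 cutoff stated in the text.

```python
import numpy as np

def embed(expr_T, alphas, eigvals):
    """expr_T: genes x samples; alphas: samples x k canonical weight vectors;
    eigvals: the k eigenvalues. Rows are eigenvalue-weighted gene embeddings."""
    return (expr_T @ alphas) * np.asarray(eigvals)

def edge_weights(tf_emb, tg_emb, candidate_pairs, cutoff=0.5):
    """Keep candidate TF-TG pairs whose embedding dot product reaches the
    cutoff; pairs are (tf_row, tg_row) index tuples from the public GRN."""
    kept = {}
    for tf, tg in candidate_pairs:
        w = float(tf_emb[tf] @ tg_emb[tg])
        if w >= cutoff:
            kept[(tf, tg)] = w
    return kept
```

Because the dot product is taken in eigenvalue-weighted canonical space, pairs in which both genes load heavily on the leading components survive the cutoff, which is the "both TF and TG show high weights" criterion above.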
GSE5628 is a microarray dataset responsive to heat stress, which consists of heat-shocked samples at 38 °C and samples recovered after the heat-shock treatment for up to 21 h at 25 °C, measured at five time points with two replicates for each time point, where the time points are denoted tp0 through tp4. We applied MinMaxScaler in the Python sklearn library for normalization. We compared our method with existing methods, including ARACNe-AP, using precision = TP/(TP+FP) and recall = TP/(TP+FN). For the node-level comparison, we measured specificity and recall. True positives (TP) are the set of genes that are both in the ground-truth network and reported by our method. False positives (FP) are the set of genes that are reported by our method but do not exist in the ground-truth network. True negatives (TN) are the set of genes that are neither in the ground-truth network nor reported by our method. False negatives (FN) are the set of genes that are not reported by our method but exist in the ground-truth network. For the edge-level comparison, we measured precision and recall: TP are the set of edges that are both in the ground-truth network and reported by our method, FP are the set of edges reported by our method that do not exist in the ground-truth network, and FN are the set of edges not reported by our method that exist in the ground-truth network. A sub-network (denoted Gall) constructed using all cooperating TFs in a TF module was compared to the sub-networks (each denoted Gi) constructed using the individual TFs in the module. We used two metrics for the evaluation of TF cooperation: the biological significance and the cooperative potential. A sub-network constructed by our method contains multiple TFs that cooperate with each other to regulate the TGs in the sub-network. One way to evaluate the power of TF cooperation is to compare metrics from all TFs in the sub-network vs.
metrics from a set of individual TFs in the sub-network. That is, we constructed sub-networks using individual TFs in TF modules without considering the cooperativeness of TFs. The biological significance (denoted Bp) of TF cooperation in terms of each pathway was calculated using Equation (5). Pathway enrichment with the nodes in Gall and in each Gi was computed with the gseapy library. For each pathway p, the p-value obtained from Gall is compared with the p-values obtained from the Gis; since multiple Gis are constructed, aggregation of the pathway p-values from the Gis was performed by Fisher's combined probability test. We used the betweenness centrality of a node in a given sub-network, which measures the proportion of the shortest paths in the sub-network that pass through the corresponding node. Gene-level network centrality values were calculated on Gall and on the set of Gis, and the cooperative potential (Cp) was calculated by summing up the cooperative potential of the overlapping genes. The cooperative property of TFs was thus measured by comparing network centrality values between Gall and the Gis. One of the advantages of our method is that the whole GRN is divided into small sub-networks. We suggest two approaches to choose sub-networks for detailed inspection. To emphasize the dynamics of the network over time, we chose sub-networks where regulatory relations vary significantly over time; to assess the amount of variance across time, we measured the fraction of time-point-exclusive nodes and edges relative to the size of a sub-network for each time point and then averaged across time. We applied this approach to the human dataset. To investigate how combinations of co-working TFs vary over time, we chose a Differentially Expressed Gene (DEG)-enriched TG module and inspected the DEG-enriched sub-networks connected to the TG module. We applied this approach to the Arabidopsis thaliana datasets; because there were too many genes in the Arabidopsis thaliana network, we used only DEGs to reduce the number of genes.
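The node- and edge-level precision/recall comparison described above is a set computation; a minimal sketch (our function name, sets as inputs):

```python
def node_edge_metrics(pred_nodes, true_nodes, pred_edges, true_edges):
    """Precision and recall at the node level and the edge level against a
    ground-truth network; all arguments are sets."""
    def pr(pred, true):
        tp = len(pred & true)                 # in both prediction and truth
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(true) if true else 0.0
        return precision, recall
    return {"nodes": pr(pred_nodes, true_nodes),
            "edges": pr(pred_edges, true_edges)}
```

Edge-level scores are usually much lower than node-level scores, since an edge is counted as correct only when both endpoints and their connection match the ground truth.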
Given gene expression profiles, our method produces a GRN that consists of multiple sub-networks in which condition-specific interacting TFs regulate a set of TGs through intermediate genes. Utilizing the public GRN and signature genes as a guide, our method selects the edges of the network with kernel CCA to model the cooperative and combinatorial nature of TFs and TGs. Another strength is that our method decomposes the whole GRN into sub-networks to improve interpretability. When an organism is exposed to an environmental stimulus, it orchestrates multiple biological processes as a response, and what our method determines is these intermingled regulatory interactions; decomposition of the whole GRN into sub-networks therefore helps us interpret the result better. We compared our method with existing methods, including ARACNe-AP, with respect to tp0 as a baseline; our method produced a GRN with 7,021 nodes and 99,455 edges in tp1, and with 5,985 nodes and 61,646 edges in tp2. One challenge that arises in the inspection of a GRN is that regulatory relations are too complex to interpret, with multiple biological processes intermingled together. One of the strengths of our method is that we can decompose the giant network into feasibly sized sub-networks, each consisting of the GRN projection from a TF module to a TG module. The resulting GRN from our method consisted of 31 TF modules and 76 TG modules in tp1 and 26 TF modules and 52 TG modules in tp2, which means that 31 × 76 sub-networks and 26 × 52 sub-networks constitute the GRN of each time point. For detailed inspection we chose Gh3 for tp1 and Gh6 for tp2, and we then compared how much biological pathways were enriched in these networks over time. Advanced glycation end products and their receptor, RAGE, are known to deal with the accumulation of metabolite end products in diabetes. We next investigated how much cooperation occurs in sub-networks, using the biological significance (Bp) and the cooperative potential (Cp).
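The aggregation of per-sub-network pathway p-values by Fisher's combined probability test, used in the cooperation analysis above, can be sketched without external dependencies; the closed-form chi-square tail holds because the degrees of freedom (2k) are even. The function name is ours.

```python
import math

def fisher_combined(pvalues):
    """Fisher's combined probability test: X = -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom under the null.
    For even df the survival function has the closed form used below."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # P(chi2_{2k} > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    return math.exp(-x / 2.0) * sum((x / 2.0) ** i / math.factorial(i)
                                    for i in range(k))
```

A single p-value passes through unchanged, while several moderately small p-values combine into a much smaller one, which is why the test suits aggregating evidence across the individual-TF sub-networks Gi.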
The greater the RP value is in a certain pathway p, the more genes exist in that pathway utilizing multiple TFs compared to the simulation with individual TFs. The Bp values of the enriched pathways increased at tp2 compared to the previous time point, although most of the pathways showed subtle enrichment changes against the simulations. Such temporal changes indicate that there are regulatory dynamics in multiple pathways co-regulated by multiple TFs. The Cp value in Equation (6) was developed here to investigate the degree of TF cooperation. Greater Cp showed that tighter TF regulation was achieved by multiple TFs compared to the simulations throughout the enriched pathways. Compared to the subtle changes in the Bp value, the ability of kernel CCA to construct a regulatory sub-network sensitive to temporal dynamics was reflected in greater Cp values across the pathways. Specifically, pathways related to cellular signaling were consistently co-regulated by two TFs (FOXO3 and FOXO4). This was also supported by a previous study suggesting that genes under greater co-regulation by multiple TFs stay invariant to perturbations and play a central role in controlling pivotal dynamics in response to external stimuli. Late elongated hypocotyl (LHY) and its partner responded a short period after high-temperature treatment; these two genes, detected as co-working TFs by our proposed method, are involved in the heat-stress response. Unlike a higher-temperature treatment for several hours, which increases physiological reactions and results in less severe consequences, a lower-temperature treatment over hours is life threatening. This characteristic difference between temperature treatments is why we found a relatively broad range of gene regulatory paths under cold treatment. We found well-defined cold-response genes, such as CBF, DREB, COR, ERF, ZAT, RVE, and ABF1. The microarray data of Arabidopsis thaliana can be found in the GEO.
Code for GRN construction with kernel CCA can be found at: https://github.com/DabinJeong/GRN_construction_with_kernelCCA (Mölder et al.). Publicly available datasets were analyzed in this study; the RNA-seq data of the human blood transcriptome can be found in the GEO. SK designed and directed the whole project. DJ designed and implemented the GRN construction algorithm. SLi, MO, and SLe were involved in the discussion for building the thesis. SLi designed the demonstration strategy and visualized the results. CC conducted the comparison analysis. WJ, SLi, DJ, SLe, MO, and CC biologically interpreted the analysis results. SK, DJ, and SLi wrote and revised the paper. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Many computational methods have been developed to infer causality among genes using cross-sectional gene expression data, such as single-cell RNA sequencing (scRNA-seq) data. However, due to the limitations of scRNA-seq technologies, time-lagged causal relationships may be missed by existing methods. In this work, we propose a method, called causal inference with time-lagged information (CITL), to infer time-lagged causal relationships from scRNA-seq data by assessing the conditional independence between the changing and current expression levels of genes. CITL estimates the changing expression levels of genes by \u201cRNA velocity\u201d. We demonstrate the accuracy and stability of CITL for inferring time-lagged causality on simulation data against other leading approaches. We have applied CITL to real scRNA data and inferred 878 pairs of time-lagged causal relationships. Furthermore, we showed that the number of regulatory relationships identified by CITL was significantly more than that expected by chance. We provide an R package and a command-line tool of CITL for different usage scenarios. Single-cell RNA sequencing (scRNA-seq) is a technology capable of measuring the expression level of RNA at single-cell resolution. With reference to the time factor in causal inference, causal relationships among genes can be categorized into instant relationships and time-lagged relationships; in this study, we focus on the second. In a time-lagged relationship, gene i regulates gene j with a delay, so the expression level of gene i may not be related to the expression level of its target gene j at a specific time point. There are two main challenges to inferring time-lagged causality for scRNA-seq data: the collection of longitudinal data and the presence of latent variables. First, it is difficult to continuously monitor the whole transcriptome within one cell.
Of note, even when cells can be sequenced at different time points, such data are cross-sectional snapshots rather than longitudinal measurements of the same cells. The second challenge is that unmeasured variables, also referred to as latent variables, are common in scRNA-seq experiments. scRNA-seq can capture the expression levels of 2000 to 6000 genes in a cell, and many genes with low expression levels may not be captured. Additionally, the causal path from one gene to another often involves many biological molecules that cannot be detected by scRNA-seq, such as proteins, metal ions, and saccharides. Together with low-expression genes, these latent variables are common in scRNA-seq data. However, many existing methods for causal inference assume the absence of latent variables and, as a result, may have difficulty inferring causality from scRNA-seq data. Here, we propose CITL, a method to infer the time-lagged causal relationships among genes in scRNA-seq data capable of overcoming the two challenges mentioned above. CITL uses RNA velocity information inferred from scRNA-seq data to estimate the changing expression levels of genes. By assessing the conditional independence between the changing and current expression levels, CITL reduces the interference of natural confounders. In simulations, CITL can more accurately infer time-lagged relationships than a commonly used cross-sectional causal inference algorithm, the PC-stable algorithm. Scribe required a much longer running time than single-threaded CITL (less than 10 s); consequently, we only applied Scribe to one replicate of each simulation, and the edges discovered by CITL were compared with the same number of top-RDI edges. We further compared Scribe with CITL through simulations conducted as previously described by Qiu et al. The asynchronous regulations are time-lagged regulations with different extrapolation time steps. Therefore, the changing expression level of each asynchronous gene should be its reaction velocity multiplied by its corresponding extrapolation time step if the velocity is stable.
However, the extrapolation time steps of all regulations are not available. CITL used RNA velocity to estimate the changing expression levels of all genes, assuming that the extrapolation time step used in RNA-velocity estimation is constant across cells and genes. Many causal inference methods rely on the causal sufficiency assumption, which requires that all variables are measured; however, this assumption is violated as described in the second challenge. For the latter, the relationship between the current and changing gene expression levels can be oriented according to the Time-lagged assumption. Since the current gene expression level precedes the changing gene expression level in time, the temporal order of these two types of gene expression levels is defined as the causal direction in the Time-lagged assumption. This enables CITL to orient causal edges without causal sufficiency, which relieves the disturbance derived from the second challenge; the second challenge thus has less impact on causal orientation when adopting the Time-lagged assumption. To evaluate the impact of latent variables on the ability of CITL to infer time-lagged causality, we performed single-trace simulations by randomly removing 0%, 10%, 30%, and 50% of the total genes. For the real data sets, we estimated the changing expression levels and the subsequent expression levels by RNA velocity before applying CITL to data set 1 and data set 2. For computational efficiency, k for both CITL and PC-stable was set to the square root of the number of genes for each data set. A total of 3998 and 4459 pairs were inferred by PC-stable from data set 1 and data set 2, respectively. In data set 1, only four gene pairs were found by both approaches, and there was no overlap for data set 2.
These results suggest that CITL infers different types of causality from previous methods that only use the current expression levels of genes. We applied CITL to data set 1 and data set 2, with 2508 and 878 time-lagged causal pairs (TLPs) inferred, respectively. We also applied PC-stable to the data sets with current-only expression data and compared the gene pairs inferred by PC-stable to the TLPs. Because we do not know the ground truth for time-lagged causality, we investigated the biological relationships of TLPs to evaluate the performance of different methods. Pathway Studio (http://www.pathwaystudio.com/ (accessed on 20 October 2020)) enables searching interactions between molecules, cell processes, and diseases from the literature. Almost any pair of two genes could be related, directly or indirectly, through Pathway Studio. Each interaction is annotated with a sentence from the literature. Not all interactions are regulatory; binding, for example, is not. We reviewed the annotation of every searched interaction to find TLPs with regulatory interactions, and we divided the regulatory interactions into two categories. The \u201cPROT\u201d type refers to interactions that only involve proteins, such as increasing or reducing protein activity, co-activating or antagonizing, and phosphorylating or dephosphorylating. The \u201cTRSC\u201d type refers to interactions in which proteins regulate the transcription of specific genes, including activation and repression. Considering that manually filtering interactions takes considerable time, we only investigated the biological functions of a subset of the pairs. TFs were taken from the TRRUST database, a repository of curated TF\u2013target relationships of human and mouse. All the 68 pairs had indirect relationships, forming paths with one or more intermediates. Most of the interactions along these paths were \u201cnon-regulatory\u201d.
We focused on the regulatory paths that ended with a TRSC interaction, since the causality among genes\u2019 transcripts, rather than proteins, was of interest in scRNA-seq. In total, 14 TLPs with regulatory relationships (rTLPs) and their regulatory paths were found. To evaluate the significance of the accuracy of CITL, we first investigated how likely it is that a random gene will be the target of a TF. We randomly chose 11 TFs from the 37 TFs and investigated their regulatory relationships with randomly selected genes. For each TF, a randomly selected gene was assigned as its effect. Then, the functional connection of the gene pair, referred to as a randomly selected-and-direction-assigned pair (RAP), was searched using Pathway Studio. Like the TLPs inferred by CITL, most RAPs did not have a regulatory relationship. To find a gene that had a regulatory relationship with each TF, we searched 35 RAPs. Among the 11 RAPs with regulatory relationships (rRAPs), only two rRAPs\u2019 assigned directions were consistent with their known causal directions. Therefore, we speculate that, for a TF, there are more upstream genes than downstream ones after excluding non-regulatory genes. We compared the accuracy of CITL to the accuracy of random selection using Fisher\u2019s exact test; the p-value of the test was 0.00024, suggesting the excellent performance of CITL. \u201cMLLT3 \u2192 FLRT3\u201d is a gene pair with a small negative cur_cur correlation. For the simulations and applications in this article, k was set at the square root of n, with a fixed type-one-error level for the CI tests. We provide an R package of CITL at https://github.com/wJDKnight/CITL-Rpackage (accessed on 30 November 2021) and an open-source command-line tool of CITL at https://github.com/wJDKnight/CITL (accessed on 30 December 2021).
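The Fisher's exact test used above compares two proportions from a 2x2 table. A dependency-free sketch via the hypergeometric distribution (our function name; the counts in the usage are hypothetical, not the paper's actual table):

```python
import math

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    hypergeometric probability of observing a count of a or larger."""
    row1, col1 = a + b, a + c
    n = a + b + c + d
    def hyper(x):   # P(X = x) under the hypergeometric null
        return (math.comb(col1, x) * math.comb(n - col1, row1 - x)
                / math.comb(n, row1))
    return sum(hyper(x) for x in range(a, min(row1, col1) + 1))
```

Here the rows would be "CITL-inferred" vs. "random selection" and the columns "direction consistent" vs. "inconsistent"; a small p-value means CITL's hit rate is unlikely under random pairing.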
Tutorials about data preparation and using CITL are also provided in the repositories. To infer time-lagged causal relationships, CITL first constructs an undirected graph (UG) through conditional-independence tests on both the current and changing expression levels. We compared the performance of CITL versus a commonly used causal Bayesian network method, PC-stable (available as an R package in bnlearn), and a baseline, where PC-stable relies on Causal Sufficiency, the Causal Markov Assumption, and Faithfulness. We considered three variants: Approach 1, PC-stable using only the current expression levels; Approaches 2 and 3, PC-stable using different combinations of the current and changing expression levels. We included a baseline that predicts gene i as the cause of gene j; we note that any method which can identify a strong correlation between two genes could make the baseline prediction that i is the cause of j. We also consider a recently published causal inference method for scRNA-seq data, Scribe; the default values of the parameters of Scribe were used in the simulation studies. Some experiments sequence cells at one time point while others sequence cells at multiple time points. We refer to the former as single-trace data and the latter as multi-trace data, and we considered both scenarios in our simulations. For single-trace data, we simulated 3000 cells. For multi-trace data, we simulated three traces, with each trace having 1000 cells. We carried out 500 simulations for each set-up. For each simulation, we randomly generated a causal graph. For each cell, we simulated four values related to each of the 50 genes\u2019 expression levels, including the previous levels. Here, we used linear functions to describe the time-lagged relationships. In our simulations, we also investigated whether CITL can infer non-time-lagged relationships, referred to as instant causal relationships. This assumes that the current expression level of a gene results from its previous expression level and the current expression levels of its causes.
These data were generated in a similar manner as the time-lagged data except for the method used to generate the current expression levels. To benchmark CITL against Scribe on an equal footing, we tested their performance in simulations under both Qiu\u2019s settings and ours. Different gene regulations could have different reaction times in real data sets, resulting in asynchronous changing expression levels of genes. Therefore, we tested the robustness of CITL with simulations where not all genes were synchronous: for each asynchronous gene i, we changed its real changing expression level to evaluate the influence of varying degrees of desynchronization on CITL. We carried out 500 single-trace simulations for each set-up. In all simulations, the k of the CI test was equal to the square root of the number of genes n. We used precision, recall, and F-measure for the inferred node adjacency versus the data-generating model as the primary evaluation measures to compare the performances of different approaches. To compute these metrics, we first calculated three basic statistics related to inferring edges: true positives (TP), false positives (FP), and false negatives (FN), where TP is the number of adjacencies in both the output graph and the data-generating graph; we also examined receiver operating characteristic (ROC) and precision-recall curves. We considered two data sets. Data set 1 was from mouse P0 and P5 dentate gyrus, with RNA velocity estimated by Velocyto (http://velocyto.org/ (accessed on 23 May 2018)). There were more than 18,000 cells and an average of 2160 genes per cell in data set 1 after preprocessing. Data set 2 was the human week-ten fetal forebrain data set in Velocyto, containing 1720 cells and an average of 1488 genes per cell. According to La Manno et al. (2018), the forebrain, as identified by pre-defined markers, can be divided into eight developing stages (0\u20137). The stage information was only exploited in data visualization. In this article, we propose CITL to infer the time-lagged causality of genes using scRNA-seq data.
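The core statistical move, relating a cause's current level to a target's changing level while conditioning on other genes, can be sketched with a toy linear generating model and a regression-based partial correlation. This is an illustration of the idea, not CITL's actual CI test; all names and coefficients are ours.

```python
import numpy as np

def partial_corr(a, b, z):
    """Pearson correlation of a and b after regressing out columns of z."""
    Z = np.column_stack([np.ones(len(a)), z])
    ra = a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    rb = b - Z @ np.linalg.lstsq(Z, b, rcond=None)[0]
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy time-lagged generating model: the target's *changing* level depends
# on the cause's *current* level (coefficients are illustrative).
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                   # current level of candidate cause
dy = 0.8 * x + 0.3 * rng.normal(size=n)  # changing level of the target
z = rng.normal(size=n)                   # an unrelated gene to condition on
r_marginal = float(np.corrcoef(x, dy)[0, 1])
r_conditional = partial_corr(x, dy, z[:, None])
```

A genuine time-lagged edge survives conditioning on unrelated genes (the correlation barely moves), whereas a spurious edge induced by a common driver would shrink once the driver enters the conditioning set.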
Specifically, our approach adopts the changing information of genes estimated by RNA velocity. We further demonstrate the superior performance of CITL against other methods in simulations under different set-ups. The proposed approach achieves promising results on a human fetal forebrain scRNA-seq data set, accurately recovering time-lagged causal gene pairs curated from published articles. We note that most methods for analyzing scRNA-seq data do not consider relationships between genes that could be time-lagged. The results on simulations and real data sets in this paper suggest that such common relationships cannot be ignored. Therefore, we foresee that CITL can provide insights that may help to guide future gene regulatory research."}
+{"text": "The rapid development of high-throughput omics technologies has generated increasing interest in algorithms for cutoff point identification. Existing cutoff methods and tools identify cutoff points based on an association of continuous variables with another variable, such as phenotype, disease state, or treatment group. These approaches are not applicable to descriptive studies in which continuous variables are reported without a known association with any biologically meaningful variable.

The most common shape of the ranked distribution of continuous variables in high-throughput descriptive studies corresponds to a biphasic curve, where the first phase includes a large number of variables with values growing slowly with rank and the second phase includes a smaller number of variables growing rapidly with rank. This study describes a simple algorithm to identify the boundary between these phases, to be used as a cutoff point.

The major assumption of this approach is that a small number of variables with high values dominate the biological system and determine its major processes and functions. The approach was tested on three different datasets: human genes and their expression values in the human cerebral cortex, mammalian genes and their values of sensitivity to chemical exposures, and human proteins and their expression values in the human heart. In every case, the described cutoff identification method produced shortlists of variables highly relevant for the dominant functions/pathways of the analyzed biological systems.

The described method for cutoff identification may be used to prioritize variables in descriptive omics studies for focused functional analysis, in situations where other methods of dichotomization of data are inaccessible.

The online version contains supplementary material available at 10.1186/s12864-022-08427-6.
Descriptive omics represents one particular type of study in which a large number of continuous biological variables is measured in a biological sample to characterize it, rather than to compare it with other samples. Descriptive studies provide background knowledge for future research as they characterize biological systems at the molecular level. As such, descriptive omics is analogous to the effort of eighteenth-century biologists in building a descriptive foundation for organismal-level biology. Today, descriptive omics results in many essential resources for medico-biological research, such as databases providing quantitative information on genes, proteins, sncRNA, metabolites, and other biological variables across many organisms, tissues, cell types, and biological fluids. Extraction of biologically meaningful information from these resources may be challenging.

One approach is based on the assumption that a small number of variables with the highest values of expression/abundance dominate the functions of a biological system. For example, it is reasonable to assume that genes with high expression values are more important for normal tissue physiology than those whose expression is close to zero. This approach requires methods of cut-off point identification to generate shortlists of variables for focused analysis.

Several methods of dichotomization were developed previously by different research domains as a result of the rapid development of high-throughput omics and other technologies in the medico-biological domain. For example, a big group of existing methods identifies cutoff points based on an association of continuous variables with other biologically meaningful variables; a range of algorithms and online tools has been developed to categorize such variables for decision-making about cancer treatments.
For example, a widely used approach for the identification of genes differentially expressed in relation to a health condition or treatment utilizes fold-change and false-discovery-rate-adjusted p-values as cutoff criteria. These approaches, however, require an association with another, biologically meaningful variable.

Another group of methods was developed for image segmentation. For example, Otsu's method was developed to separate pixels in an image into two classes: object and background.

Several methods of dichotomization of droplets with and without cell RNA, based on the content of their unique molecular identifiers (UMI), were developed in the framework of single-cell sequencing technology. Although the distribution of UMI in droplets is continuous, the methods used for dichotomization rely on the presence of two classes of droplets (empty and non-empty), allowing for the calculation of thresholds based on the deviation from the UMI prediction for one class or the other.

The most common shape of the ranked distribution of continuous variables in high-throughput descriptive studies corresponds to a biphasic curve. If we connect the first and the last points of the typical biphasic distribution curve (A) of a descriptive omics dataset by a straight line (B), together these two curves produce a figure resembling a triangle. The line B is a linear function: yB = mB·xB + bB. Lines perpendicular to B all have the generic equation yC = (−1/mB)·xC + bC. Given that the coordinates (xAC, yAC) of the crossing points between curve A and every line C are known, bC can be calculated for each such crossing point:

bC = yAC + xAC/mB.

Thus, for every point of curve A we have the equation of a linear function C that crosses A at that point and is perpendicular to the short-cut line B. Next, we identify the coordinates of the points at which B and C intersect. Since the coordinates of both functions coincide at the intersection, we can equate x for both functions: (yCB − bB)/mB = (yCB − bC)/(−1/mB). From that equation, we can calculate y for the intersection:

yCB = (bB + mB²·bC)/(1 + mB²),

and, using the equation for B, x for the intersection as well:

xCB = (yCB − bB)/mB.

Now, having the coordinates of the intersection of each C with A and the coordinates of the intersection of each C with B, we can calculate the length of the segments D using the Pythagorean theorem:

D = √((xAC − xCB)² + (yAC − yCB)²).

Given that xAC is the rank number of a variable and yAC is the value of that variable, let us substitute xAC with R and yAC with V and insert the equations above into the expression for the D segments. The longest segment crosses curve A at the point where the curve bends.

Example 1. The linear function B connecting the first and the last points of the curve A has the following equation: y = 0.0364x − 0.0364. Thus, mB = 0.0364 and bB = −0.0364. These values, as well as the rank values for every gene (R) and the normalized expression values for every gene (V), were used in the equation for D for every gene. The longest segment corresponds to the gene ranked 15,778. This rank corresponds to the cutoff point that delineates genes with low and high expression in the human cerebral cortex. To test whether highly expressed genes reflect the essential physiology of the cerebral cortex, I submitted the list of the top 575 genes determined by the method, ranking 15,779 through 16,353, to Metascape. To control whether any genes expressed in the human cerebral cortex are enriched for essential functions of the cerebral cortex, I also submitted to Metascape an equivalent-size list of genes with non-zero expression values and the lowest expression ranks. This list was enriched for categories non-relevant to brain and nerve tissue, such as "formation of cornified envelope", "response to bacterium" and "digestion". Similarly to Example 1, I looked at the enrichment of the equivalent-size list of lowest-ranking genes.
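The geometric construction above amounts to finding the point of the ranked curve farthest from the chord joining its endpoints; the perpendicular-segment length D can be computed directly with the point-to-line distance formula, which is algebraically equivalent. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def curve_bend_cutoff(values):
    """Rank-order the values and return the 1-based rank at which the
    ranked curve A bends: the point with the longest perpendicular
    segment D to the straight line B joining the curve's endpoints.
    An illustrative sketch of the described method, not the author's code.
    """
    v = np.sort(np.asarray(values, dtype=float))  # curve A: value vs. rank
    r = np.arange(1, len(v) + 1, dtype=float)     # ranks R
    # Line B through (r[0], v[0]) and (r[-1], v[-1]): y = mB*x + bB
    mB = (v[-1] - v[0]) / (r[-1] - r[0])
    bB = v[0] - mB * r[0]
    # Perpendicular distance from each point of A to B (segment length D)
    d = np.abs(mB * r - v + bB) / np.sqrt(mB ** 2 + 1.0)
    return int(np.argmax(d) + 1)                  # rank of the curve bend
```

On a clean biphasic curve, the returned rank sits exactly at the junction between the slow-growing and fast-growing phases.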
Enriched categories were non-relevant to known mechanisms activated in response to chemical exposures. Protein expression values in the adult human heart were downloaded from The Human Proteome Map portal; biological categories enriched in the shortlist of the lowest-ranking proteins were non-relevant to known adult heart physiology.

In this study, I describe a simple and reproducible approach for cutoff identification in descriptive high-throughput studies, a method entirely based on the analysis of the shape of the curve of the data distribution. The major assumption of this approach is that a small number of variables with high values dominate the biological system and determine its major processes and functions. Thus, the described method for cutoff identification may be used, following a visual inspection of the shape of the curve to confirm its biphasic nature, to prioritize variables for more detailed functional analysis in situations where other methods of dichotomization of data are inaccessible. As such, the method should be used with a complete list of variables, without prior application of other cutoff approaches.

Three different datasets analyzed here as examples demonstrate that the described cutoff identification method produces shortlists of variables highly relevant for the dominant functions/pathways of the analyzed biological systems. The shortlist of highly expressed genes in the human cerebral cortex was highly enriched for categories related to synaptic transmission, nervous system development, and even higher functions, such as learning and memory. The shortlist of genes sensitive to chemical exposures was enriched for biological categories involved in response to stress and damage.
Finally, the shortlist of proteins highly expressed in the human heart was significantly enriched for biological categories relevant to muscle architecture, contractions, and contraction regulation.

I should note here that some applications may require more or less stringent criteria for the cutoff. In these situations, the described approach may still be useful, as it allows identification of the point where the curve of the value distribution changes most rapidly. Using this reproducibly identifiable point, one may further select criteria with a different percentage of stringency relative to it. In other words, the cutoff point identified as described here may provide a meaningful reference value. Similarly, the p-value and fold change used as cutoff points in omics studies are selected arbitrarily by researchers, but they represent meaningful indicators of the data structure and provide reproducibility of the data analysis.

The results of the use of the described dichotomization approach should be interpreted cautiously. For example, the fact that some gene was found in the short-list of highly expressed genes in a tissue does not necessarily mean that this gene is highly tissue-specific. In fact, many "housekeeping" genes are highly expressed in the majority of cell types, as they support basic cellular functions.

Additional file 1: Supplemental Fig. 1. Example of normalized gene expression values distribution based on human cerebral cortex gene expression data (see Example 1 in Examples of the Method Use). Supplemental Fig. 2. Biological categories enriched in shortlists of lowest-ranking variables for the following datasets: genes expressed in human cerebral cortex (A), genes sensitive to chemical exposures (B), and proteins expressed in the adult human heart (C)."}
+{"text": "Longitudinal gene expression analysis and survival modeling have been proved to add valuable biological and clinical knowledge. This study proposes a novel framework to discover gene signatures and patterns in a high-dimensional time series transcriptomics data and to assess their association with hospital length of stay.We investigated a longitudinal and high-dimensional gene expression dataset from 168 blunt-force trauma patients followed during the first 28 days after injury. To model the length of stay, an initial dimensionality reduction step was performed by applying Cox regression with elastic net regularization using gene expression data from the first hospitalization days. Also, a novel methodology to impute missing values to the genes selected previously was proposed. We then applied multivariate time series (MTS) clustering to analyse gene expression over time and to stratify patients with similar trajectories. The validation of the patients\u2019 partitions obtained by MTS clustering was performed using Kaplan-Meier curves and log-rank tests.We were able to unravel 22 genes strongly associated with hospital\u2019s discharge. Their expression values in the first days after trauma showed to be good predictors of the length of stay. The proposed mixed imputation method allowed to achieve a complete dataset of short time series with a minimum loss of information for the 28 days of follow-up. MTS clustering enabled to group patients with similar genes trajectories and, notably, with similar discharge days from the hospital. Patients within each cluster have comparable genes\u2019 trajectories and may have an analogous response to injury.The proposed framework was able to tackle the joint analysis of time-to-event information with longitudinal multivariate high-dimensional data. 
The application to length of stay and transcriptomics data revealed a strong relationship between gene expression trajectory and patients' recovery, which may improve trauma patient management by healthcare systems. The proposed methodology can be easily adapted to other medical data, towards more effective clinical decision support systems for health applications.

Temporal data have frequently been used in medical research to follow disease progression over a few time points or prolonged periods. Hereupon, longitudinal studies help in understanding patterns of change over time, and their analysis is receiving increasing attention. In high-dimensional omics data, the number of variables (genes) measured in these experiments is usually much higher than the sample size (number of subjects included in the study). From a statistical point of view, this can be a nuisance due to the associated identifiability problems of the parameter estimation procedure. Therefore, identifying accurate and relevant biomarkers in a high-dimensional data set has become one of the key challenges today for the advance of precision medicine. Different statistical methods have been proposed to deal with high-throughput data analysis, which is essential for effective and reproducible variable selection, including software available for the storage and retrieval of large data sets, data mining in transcriptomics, and integrative interactomics.

Traumatic diseases are a significant public health concern, being the number-one cause of death in young adults aged 1-44 years, and trauma is expected to rise to the leading cause of death in older age groups. To improve healthcare systems in treating severe systemic inflammation and to understand its key regulatory elements and molecular signatures, the National Institute of General Medical Sciences supported the "Inflammation and the Host Response to Injury" (IHRI) research program.
One of the IHRI large-scale outcomes is the Trauma-Related Database (TRDB), containing a 28-day prospective clinical genomics study involving a cohort of 168 patients, ~800 microarrays, and a set of clinical variables. Thereby, high-dimensional time-series gene expression data analysis is of paramount importance for improving hospital management, facilitating decision-making, and thus advancing personalised medicine.

Our study is based on the data provided by the IHRI program (available at http://www.gluegrant.org upon user registration). Our proposed method aims at predicting the distribution of times until hospital discharge, using a constellation of gene expression variables measured at multiple time points within the first 28 days after injury. Combining bioinformatics and statistical tools, we unveiled relevant gene signatures associated with patients' recovery. Also, we studied the gene expression trajectory within each subject to reveal patient clusters and thus predict the discharge of future trauma patients by using the estimated cohort stratification.

This section presents the data under study and the techniques used to reduce the dimensionality of gene expression data. Since the data under study are approached from the survival perspective, a brief introduction to this subject is also given. Next, the methods used for missing data imputation and for multivariate time series clustering are explained.

The data come from a larger clinical study (https://clinicaltrials.gov/ct2/show/NCT00257231), involving severe blunt-force trauma patients treated in seven U.S. Level I trauma centers from 2003 to 2011. The study was approved by the institutional review board of each center, and written informed consent was obtained from all patients. A standardised patient care protocol was applied in each hospital center to minimise the impact of possible variability.
To ensure confidentiality, patients were de-identified as defined by the Health Insurance Portability and Accountability Act of 1996. The dataset under study consists of leukocyte gene expression from 797 microarrays of 168 blunt-force trauma patients followed during the first 28 days after injury. Each microarray contains information about 54675 genes. The cohort of trauma patients, aged between 16 and 55 y.o., is drawn from a larger epidemiological study. In our case study, the number of patients is denoted by n. All the data are freely available at http://www.gluegrant.org. These data were previously analysed by Desai et al.

Survival analysis studies the time until an event of interest occurs and is applied in various fields of science, especially in medical research. It is a set of statistical approaches to investigate the time it takes for an event of interest to happen, such as death, the development of a disease or, in the present case, the time from injury until patient discharge from the hospital, i.e., the length of stay. However, the event time may not be observed for some subjects within the study period, creating censored observations.

To model this type of data, the Cox regression model is a widely used method due to its flexibility and its capability of handling censored data. Under this model, the hazard h(t) at time t for the i-th patient is given by

hi(t) = h0(t) exp(xi'β),

where β represents the unknown regression coefficients, h0(t) represents the baseline hazard, and xi = (xi1, …, xip)' are the covariates related to individual i (the specific sampling time of each xi is omitted). In our case, this vector represents all the gene expression values for individual i.

The semi-parametric likelihood function, in which h0(t) is not specified and which is related to the Cox regression model, is given by

L(β) = ∏i [ exp(xi'β) / Σj∈R(ti) exp(xj'β) ]^δi,

where δi is an indicator function of the censored observations, and where R(t) = {j : tj ≥ t} denotes the set of all individuals at risk at time t, i.e., with a follow-up time greater than or equal to t. The unknown regression coefficients β are then calculated by maximisation of the partial log-likelihood function:

l(β) = Σi δi [ xi'β − log Σj∈R(ti) exp(xj'β) ].

The baseline hazard h0(ti) can then be estimated, and from it H0(ti), the cumulative baseline hazard function, can be obtained.

To deal with high-dimensional datasets, where the number of variables (genes) is much higher than the number of observations (patients) (p ≫ n), a dimensionality reduction step is fundamental. In the literature, there are several techniques for variable selection to reduce the dimensionality, thus providing a sparse estimate of β. In this context, elastic net has become a classical regularisation method that limits the solution space by imposing sparsity and small coefficients on the parameters, combining the ℓ1-norm (sum of the absolute values of the coefficients) and the ℓ2-norm (sum of the squared errors of the coefficients). The balance is set by a penalisation weight λ and a parameter α, 0 ≤ α ≤ 1, which controls the mixture between the ℓ1 and ℓ2 penalties for a given fixed λ. In particular, if α = 0, ridge regression is obtained, and if α = 1, we are dealing with the Lasso (least absolute shrinkage and selection operator) regression.

The elastic net adapted to Cox's regression can be considered an automatic implementation of best-subset selection in some asymptotic sense. This approach simultaneously selects significant variables and estimates regression coefficients.
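The partial log-likelihood that the regularised fit maximises can be written out directly from the risk-set definition. A didactic sketch in plain NumPy (Breslow-style handling; real analyses would rely on a fitted survival package, and the function name is ours):

```python
import numpy as np

def cox_partial_loglik(beta, X, time, event):
    """Cox partial log-likelihood l(beta).

    beta  : coefficient vector, shape (p,)
    X     : covariate matrix, shape (n, p) -- here, gene expression values
    time  : follow-up times, shape (n,)
    event : 1 if the event (e.g. discharge) was observed, 0 if censored
    """
    eta = X @ beta                              # linear predictors x_i' beta
    ll = 0.0
    for i in np.flatnonzero(event):             # sum over uncensored subjects
        at_risk = time >= time[i]               # risk set R(t_i)
        ll += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return ll
```

With beta = 0 this reduces to minus the sum of the log risk-set sizes over the observed events, a handy sanity check when testing an implementation.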
The asymptotic properties of the Lasso-type estimator, which can be analogously derived for the elastic net adapted to Cox's regression, are described in the literature.

Missing data are ubiquitous in clinical studies, leading to difficulties in subsequent statistical analysis, especially when dealing with longitudinal patient data. Longitudinal clinical data with missing values may occur for multiple reasons, such as failure to attend medical appointments or lack of measurements at a particular visit. Another problem is that the intervals between sample points are sometimes not evenly spaced for a given patient, and often differ between patients, i.e., the observations are not synchronised. To correctly model the longitudinal data for the whole cohort, one must take these issues into account and uniformize the information as much as possible. Particularly in this 28-day prospective transcriptomics study, missing values appear because blood samples were collected at different time points within the 28 days after injury across patients. Blood samples were mainly collected at specific time points, although with some variability due to daily clinical constraints.
Moreover, some patients were discharged before the 28th day, leading to a loss of gene expression follow-up after that time point.

A variety of methods have been developed to complete the information for the missing time points, often with good results, including the following time-series imputation strategies applied in the present study: (i) omitting the incomplete entries; (ii) imputation based on the Last Observation Carried Forward (LOCF); (iii) linear interpolation; and (iv) the present proposal, which combines imputation methodologies for time-series gene expression data.

The first approach is a basic strategy and corresponds to omitting the missing values, leading to a different trajectory evaluation and sampling scheme for each cohort patient. In the second, missing values are replaced with the last known observation of the same patient, i.e., the last observation is carried forward. The third method uses linear interpolation to replace missing values. It assumes a linear relationship between the missing and non-missing values, i.e., it uses the non-missing values from adjacent data points to compute a missing data point. Linear interpolation has already been shown to be superior overall to other, more complex techniques for replacing missing values.

Finally, to achieve the most reliable imputation for time-series gene expression data within a range of days, like the one used in our study, a combination of these three methodologies (i)-(iii) is proposed and presented in Algorithm 1. For the input, in addition to the original data, it is necessary to define the time points T to be processed and maintained in the algorithm output. In our study, we obtain T, with T ⊂ S, calculated such that the percentage of missing data at each time point is less than 50%. The nearest measurement was carried to the selected time points, but only when that difference did not exceed two days. Besides LOCF, we implemented a similar strategy named Next Observation Carried Backwards (NOCB).
NOCB replaces missing values with the next known observation of the same patient. This strategy is particularly useful when the closest observation to the required missing value occurs very near in the future. If neither LOCF nor NOCB can be applied, linear interpolation is performed instead. The approximation takes into account the closest point in the past and the closest one in the future to impute the missing value. A representative example of how the methods are applied to the data is shown in the figure. To make the most of the available original data, the nearest measurement was carried to the selected time points, and the resulting complete series for patients i = 1, …, n were used for the analysis.

Time-series clustering has played an essential role in pattern-mining discovery and has been shown to provide useful information about this type of data, especially for gene expression data. Therefore, to investigate whether there are similar patterns of change among trauma patients over time, we studied time-series similarity, prioritising change patterns over time. The MTS clustering was performed using the R package dtwclust. Our study applied partitional clustering, which by default uses partition around medoids (PAM) centroids. Also, we used Global Alignment Kernels (GAK), proposed by Cuturi et al., for distance calculation rather than the DTW distance.

For MTS clustering, let us consider two patients, with xi the multivariate time series of the i-th patient and xj the multivariate time series of the j-th patient, both of length 7, and let Tq and Tr, q, r ∈ {1, 2, ..., 7}, represent time points from T = {0, 1, 4, 7, 14, 21, 28}. Note that each multivariate series has the same number of variables (genes). The first step involves creating a local cost matrix (LCM), whose entry (q, r) is the distance between observation q of patient i and observation r of patient j.
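The combined strategy, carrying the nearest past (LOCF) or future (NOCB) observation when it is close enough and falling back to linear interpolation otherwise, can be sketched for a single gene as follows. This is an illustrative reconstruction under stated assumptions (the two-day rule from the text is the `max_gap=2` default; function and parameter names are ours, not the authors' Algorithm 1):

```python
import numpy as np

def impute_series(times, values, grid, max_gap=2):
    """Impute one gene's expression onto the fixed time points in `grid`.

    Carries the nearest observation (LOCF/NOCB) when it lies within
    `max_gap` days; otherwise interpolates linearly between the closest
    past and future observations. Points outside both rules stay NaN.
    """
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.full(len(grid), np.nan)
    for k, t in enumerate(grid):
        if t in times:                                  # observed: keep as-is
            out[k] = values[np.flatnonzero(times == t)[0]]
            continue
        past, future = times[times < t], times[times > t]
        d_past = t - past.max() if past.size else np.inf
        d_future = future.min() - t if future.size else np.inf
        if min(d_past, d_future) <= max_gap:            # LOCF / NOCB
            src = past.max() if d_past <= d_future else future.min()
            out[k] = values[np.flatnonzero(times == src)[0]]
        elif past.size and future.size:                 # linear interpolation
            t0, t1 = past.max(), future.min()
            v0 = values[np.flatnonzero(times == t0)[0]]
            v1 = values[np.flatnonzero(times == t1)[0]]
            out[k] = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return out
```

For example, with observations at days 0, 1 and 10, day 4 has no observation within two days, so it is linearly interpolated between days 1 and 10, while day 2 borrows the day-1 value via LOCF.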
In the second step, the cross-distance matrix is calculated using the GAK algorithm for each LCMij, which iteratively steps through these local cost matrices with a kernel to achieve an exponentiated soft-minimum; a local cost matrix is created for each pair of multivariate time series, taken independently. After the cross-distance matrix is calculated, the desired clusters for all observations included in the study are obtained.

Patients' gene expression was first analysed at Days 0, 1, and 4, individually, and via supervised learning. Since the availability of this high number of genes may hamper model interpretability, sparse methods were first applied to the data. Dimensionality reduction was accomplished with elastic net regularisation by fitting the Cox regression model for survival, using all the available information regarding the time until hospital discharge, and considering the gene expression values as features. The λ parameter was estimated using cross-validation. Different α parameters were also tested (0.2, 0.4, 0.6, 0.8 and 1) in order to evaluate the impact of this choice on the results. To select the best α, Harrell's c-index was used; the α value giving the highest c-indexes across the three days was 0.8. So, α = 0.8 is the parameter value chosen for further analysis.

First, a preliminary analysis was carried out to understand whether it is possible to predict recovery during the first days after injury. To do so, 70% of the dataset was randomly split off for training the model, and the remaining 30% formed the test set. The model obtained on the training set (with α = 0.8 and cross-validated λ) was then used to separate the test set into high- and low-risk groups using Kaplan-Meier survival curves. In our study, high risk corresponds to patients with earlier discharge from the hospital, and low risk the inverse case.
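The "exponentiated soft-minimum" over the local cost matrix corresponds to Cuturi's Global Alignment Kernel recursion, which sums a local Gaussian kernel over all alignment paths. A plain sketch (dtwclust's implementation adds normalisation and a triangular band that this minimal version omits; the function name is ours):

```python
import numpy as np

def gak(x, y, sigma=1.0):
    """Global Alignment Kernel between two multivariate time series.

    x, y: arrays of shape (length, n_genes). Builds the local cost matrix
    (squared Euclidean distances between observations), turns it into a
    local Gaussian kernel, and accumulates it over all alignment paths
    with the standard triangular dynamic program.
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    # Local cost matrix: entry (q, r) compares observation q of x with r of y
    lcm = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    k = np.exp(-lcm / (2.0 * sigma ** 2))       # local Gaussian kernel
    K = np.zeros((len(x) + 1, len(y) + 1))
    K[0, 0] = 1.0
    for q in range(1, len(x) + 1):
        for r in range(1, len(y) + 1):
            K[q, r] = k[q - 1, r - 1] * (K[q - 1, r] + K[q, r - 1] + K[q - 1, r - 1])
    return K[-1, -1]
```

Unlike DTW, which keeps only the single cheapest alignment path, this recursion aggregates every path, so similar series yield a much larger kernel value than dissimilar ones, and the result is symmetric in its arguments.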
This procedure was repeated for each dataset day, and the corresponding Kaplan-Meier curves are presented in the figure. As in the previous task, the same parameters and procedure were used for model prediction.

Secondly, leave-one-out cross-validation (LOOCV) was performed to detect genes strongly associated with patients' recovery. For model estimation, LOOCV considers all observations except one, which is left out from the training set; models are thus calculated the same number of times as the number of observations. The intersection of the genes appearing in all the n = 168 models calculated with LOOCV corresponds to the genes that may be most strongly associated with patients' recovery. For Days 0, 1, and 4, the following genes appeared in all the regularised models:

Day 0: AA808444, AI421660, AW474434, BC015390, BG120535;
Day 1: AF279899, AI127598, AK097281, AW665177, BC022029, BG120535, NM_013450, NM_173529;
Day 4: AA873542, AB011151, AW574504, BE906233, BF129339, NM_002600, NM_018368, NM_024669, NM_025151, X65661.

We stress that the Cox proportional hazards assumption was tested for each of the variables using the scaled Schoenfeld residuals. Thus, we can assume that the proportional hazards assumption is fulfilled and that the fitted Cox regression models adequately describe the data.
Interestingly, some of these genes are already known to be related to the inflammation response and to the proper development and functioning of the immune system, key points for the wound-healing process. For instance, AW474434, with the gene symbol TNFSF10, is a protein-coding gene that belongs to the tumor necrosis factor (TNF) ligand family, which induces apoptosis in transformed cells and has a key anti-inflammatory role. Also, BG120535, gene symbol VNN1, plays a significant role in the innate and adaptive immune response and is extensively known for its anti-inflammatory activity. AW574504, which corresponds to the PECAM1 gene symbol, is another protein-coding gene with important functions in cell junctions and in leukocyte trafficking and immune response. NM_002600, also known as the PDE4B2 gene, is a key regulator of many important physiological processes, specifically in the control of inflammation.

Once again, the Kaplan-Meier curves for high- and low-risk groups were calculated for the test set after model estimation with these specific, always-selected genes. To evaluate the performance of the models using these specific genes, the c-index in the test set was again calculated. For Day 0, Day 1, and Day 4, the c-indexes were 0.77, 0.80, and 0.84, respectively, which revealed that good model predictions were achieved.

It is also noteworthy that, in the present study, we focused only on transcriptomic variables. However, the inclusion of other patient-specific variables, such as clinical information, may further improve model performance and should be analysed in future studies. The union of the genes previously selected for each time point yields 22 genes strongly associated with patient recovery based on the first four days after injury.
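Harrell's c-index reported above is simply the fraction of comparable patient pairs that the model's risk scores order correctly. A minimal sketch of that computation (for illustration only; survival packages report the same quantity, and the function name is ours):

```python
def harrell_cindex(time, event, risk):
    """Harrell's concordance index.

    Among comparable pairs (the earlier time is an observed event), counts
    the fraction where a higher risk score goes with the earlier event;
    ties in risk count as 1/2. In this study's convention, higher risk
    corresponds to earlier hospital discharge.
    """
    num = den = 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                   # censored subjects cannot anchor a pair
        for j in range(n):
            if time[j] > time[i]:      # comparable pair: i's event comes first
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A value of 1.0 means perfect concordance, 0.5 is no better than chance, so the reported 0.77-0.84 indicates substantially better-than-random ordering of discharge times.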
These are the genes used for further analysis, including MTS clustering.

The analysis performed previously considers variables as static, or time-independent. Henceforward, the 22 selected genes were analysed as longitudinal variables, considering blood sample collections up to the 28th day. Blood sample collection was performed at fixed time points during the 28 days of follow-up. However, measurements were not always made at the same fixed time points across patients. This led to many missing values per day and consequently made longitudinal data analysis a challenge. For this task, variables were log-transformed. As previously described, during the 28 days of follow-up, the time points with less than 50% of missing values are represented by T = {0, 1, 4, 7, 14, 21, 28}. Then, the imputation procedure described above was applied.

The last day of measurement would be the 28th day, except in the cases where discharge occurred earlier. There were only 33 complete cases from Day 0 to Day 28, i.e., only 33 patients were discharged from the hospital after the 28th day. 141 patients had complete gene expression information from Day 0 to Day 7, i.e., they were discharged after the 7th day in the hospital. Hereupon, the dataset was split into two independent datasets: complete cases from Day 0 to Day 28 with n = 33 (Dataset A) and complete cases from Day 0 to Day 7 with n = 141 (Dataset B). For both datasets, the same 22 selected genes were analysed.

Another interesting approach, left as future work, is to treat these missing values as interval-censored data. Diverse methodologies have been explored for estimation and inference on the regression coefficients in the Cox proportional hazards model with interval-censored data, specifically in medical studies. Although sufficiently general, the data imputation procedure described and performed here should be adapted when analysing novel datasets.
In fact, the optimisation of this pre-processing step is often data-dependent since it must take the particularities of each data set into account. MTS clustering was performed for Datasets A and B separately, by applying partitional clustering with PAM centroids and GAK distance. The optimal number of clusters k for each dataset was chosen based on the p-value of the log-rank test comparing each cluster\u2019s survival curves after the MTS clustering, over a range of possible values of k. We considered the lowest p-value achieved to select the value of k. Based on this criterion, the k chosen was 8 for Dataset A and 13 for Dataset B. Patients in Clusters 1, 5, and 6 remained placed in the same strata, in the worst-prognosis groups of Dataset B. Hereupon, there is a statistically significant relationship between the gene expression trajectory of these patients and their recovery, evaluated as the time until their hospital discharge. Noteworthy, MTS clustering revealed groups of patients with similar gene expression trajectories over time that are also associated with the severity of the disease. Indeed, the analysis of the Kaplan-Meier curves and corresponding log-rank tests for these groups revealed a strong association between the obtained stratification and the time until hospital discharge. With these results, future patients in the healthcare system may be placed in one of the found clusters based on their gene expression over time, and clinicians may therefore predict more accurately when a specific patient will be discharged. Time-series gene expression data are commonly high-dimensional datasets with missing values that cannot be tackled and analysed in a straightforward way. 
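The cluster comparison above rests on Kaplan-Meier curves. A minimal sketch of the Kaplan-Meier estimator (illustrative, not the authors' implementation), where the "event" is hospital discharge and censored patients simply leave the risk set:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    times: follow-up durations (e.g. days until hospital discharge);
    events: 1 if the event was observed, 0 if the patient was censored."""
    s, curve = 1.0, []
    for t in sorted(set(t for t, e in zip(times, events) if e)):
        at_risk = sum(1 for ti in times if ti >= t)           # still in risk set
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)
        s *= 1.0 - d / at_risk                                # step down at events
        curve.append((t, s))
    return curve

# toy cluster: discharges on days 3, 7, 10; the day-5 patient is censored
print(kaplan_meier([3, 5, 7, 10], [1, 0, 1, 1]))
# [(3, 0.75), (7, 0.375), (10, 0.0)]
```

Curves computed this way per cluster can then be compared with a log-rank test, as done above to choose k.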
Furthermore, the availability of time-to-event censored data, like patient follow-up information, adds an extra degree of complexity, thus requiring the longitudinal analysis to be coupled with survival models such as the Cox regression. Pattern discovery in this type of data, although expected to bring promising results, is still a challenge. We proposed a reliable framework to deal with a longitudinal genomic trauma-related dataset, which may provide biological and clinical insight. First, the use of regularisation techniques was able to unravel relevant genes for trauma patients\u2019 recovery, measured as the duration of the hospital stay. A combination of imputation methodologies allowed the acquisition of a final dataset of short time series without any missing values or loss of information. Moreover, it was possible to cluster patients with similar gene expression over time, and a link between patients\u2019 cluster and their discharge day from the hospital was noticeable. These results may be applied to future trauma patients entering the healthcare system, and, more generally, to improve patients\u2019 management and further support clinical decisions."}
+{"text": "The effectiveness of immune responses depends on the precision of stimulus-responsive gene expression programs. Cells specify which genes to express by activating stimulus-specific combinations of stimulus-induced transcription factors (TFs). Their activities are decoded by a gene regulatory strategy (GRS) associated with each response gene. Here, we examined whether the GRSs of target genes may be inferred from stimulus-response (input-output) datasets, which remains an unresolved model-identifiability challenge. We developed a mechanistic modeling framework and computational workflow to determine the identifiability of all possible combinations of synergistic (AND) or non-synergistic (OR) GRSs involving three transcription factors. Considering different sets of perturbations for stimulus-response studies, we found that two thirds of GRSs are easily distinguishable but that substantially more quantitative data is required to distinguish the remaining third. To enhance the accuracy of the inference with timecourse experimental data, we developed an advanced error model that avoids error overestimates by distinguishing between value and temporal error. Incorporating this error model into a Bayesian framework, we show that GRS models can be identified for individual genes by considering multiple datasets. Our analysis rationalizes the allocation of experimental resources by identifying most informative TF stimulation conditions. Applying this computational workflow to experimental data of immune response genes in macrophages, we found that a much greater fraction of genes are combinatorially controlled than previously reported by considering compensation among transcription factors. Specifically, we revealed that a group of known NF\u03baB target genes may also be regulated by IRF3, which is supported by chromatin immuno-precipitation analysis. 
Our study provides a computational workflow for designing and interpreting stimulus-response gene expression studies to identify underlying gene regulatory strategies and further a mechanistic understanding. Cells need to sense environmental cues and respond appropriately. One important notion is that different stimuli activate different combinations of transcription factors and that responsive genes are regulated by distinct subsets of these. However, identifying the regulatory strategies by which genes interpret transcription factor activities remains a largely unsolved challenge. In this work we address the question: to what extent are combinatorial transcription factor regulatory strategies identifiable from stimulus-response (input-output) datasets? We present a computational framework to determine the identifiability of gene regulatory strategies, and examine how the reliability and precision of model inference depend on the quality and quantity of available data. We present an error model that more precisely quantifies uncertainty for perturbation-timecourse data sets by also considering error in the time domain, and achieves an improved performance in identifying and quantifying gene regulatory strategies. With these tools, we generate guidelines for experimental design that optimize limited resources for generating data for model inference. Finally, we apply the workflow to immune response datasets and uncover evidence that many more genes are subject to combinatorial control than previously thought; we offer physical transcription factor binding data to support this finding for one particular group of genes. This demonstrates that the computational workflow may guide studies of the regulatory strategies that govern stimulus-responsive gene expression programs. A primary goal of biology is to understand biological phenomena in terms of the underlying factors, whether these are cells, molecules or genes. 
These factors form dynamic regulatory networks whose emergent properties are responsible for biological phenomena. Hence, the systems biology approach employs mathematical models that represent or abstract these networks to interpret experimental data. For studies of how genes are expressed, the advent of experimental assays that are capable of producing genome-wide measurements of mRNA abundance, chromatin-bound factors and modifications has been revolutionary. A variety of computational approaches have been developed to construct correlation networks based on these large datasets. In order to leverage the wealth of genome-wide gene expression datasets for developing a mechanistic understanding of gene expression, prior studies have employed mathematical models that represent the functional interactions between the gene and the key transcription factor (TF). Mathematical models of such GRSs involving a single TF have been successfully fit to datasets from individual mammalian genes, when the TF activity may be induced by a stimulus to provide a perturbation with a defined starting timepoint. In this case both TF activity and target gene expression were measured in a timecourse to provide the data for GRS model fitting. Indeed, for immune response genes this approach allowed GRS model parameters to be fit. However, many mammalian genes are not regulated by a single TF, but by multiple TFs. TFs collaborate either by compensating for or enhancing each other\u2019s activities. Characterizing what is termed the \u201clogic\u201d of collaborative TFs regulating genes is critical to understanding how the genome is expressed. To quantitatively capture combinatorial gene regulation by multiple TFs, thermodynamic formulations of Boolean AND- and OR-gate-like relationships may be employed to describe molecular interactions between DNA, TFs, and the polymerase-containing transcription machinery that regulates transcription initiation. 
A recent study aimed to identify GRSs with combinatorial TF logics using mammalian immune response datasets. In this study, we have addressed the GRS model identifiability challenge in three steps. First, we systematically delineated GRS model identifiability, thereby identifying GRS models that are easily distinguished from each other, and others that require substantially more data. Second, we developed a Bayesian computational workflow\u2013including a new error model\u2013to use experimental input-output datasets for GRS model selection and parameter fitting. Third, we applied the computational workflow to newly assembled experimental innate immune datasets. Our results demonstrate the utility of the newly developed computational workflow by applying it to an innate immune gene dataset to reveal potentially combinatorially controlled genes, in line with reports in other biological systems. Upon cellular stimulation, the activities of stimulus-induced TFs are activated and\u2013in a combinatorial manner\u2013enhance transcription initiation of specific genes. To explore the identifiability of distinct GRS models involving TF combinations, we first examined, via a systematic model-based analysis, how distinguishable the associated gene expression patterns really are when different TF combinations are activated. In the model, nascent RNA processing (kproc) is first order and RNA synthesis is zeroth order, modeled by multiplying the RNA synthesis rate constant (ksyn) with the fractional promoter activity (f(t)). Fractional promoter activity was modeled by an established thermodynamic formulation with a Hill function that captures the TF regulation strength. Over the past two decades a wealth of cellular transcriptomic and epigenomic data have been generated, but there has only been limited progress in understanding the multi-TF gene regulation networks (GRNs) of mammalian cells. 
Our computational analysis revealed a non-intuitive mapping relationship between model/mechanism and gene expression responses. We showed that not all GRSs are equally identifiable, nor are all GRS regulatory features equally identifiable. For example, GRSs formed by differential regulatory strengths are much harder to distinguish within the triple AND gate than within OR gates. But the triple AND gate logic feature is readily identifiable from all other GRSs with less available data. By displaying the model/mechanism-gene expression mapping relationship as a hierarchical tree, we found core regulation motifs that are hallmarks of specific GRS clusters and therefore play dominant roles in determining the input-output relationship. This finding suggested that some important insights can be gained reliably even when available datasets do not allow for the identification of a single GRS. For example, the triple AND gate and the double AND gate as core regulation motifs can be easily identified even when distinct GRSs that differ in the regulation strengths within each motif cannot be determined easily. Similarly, the emergent dynamical systems properties (e.g. perfect adaptation) may be mediated by diverse networks that nevertheless share common network motifs, as revealed by a function-topology map. In considering input-output datasets that involve perturbations at a given timepoint, we found it useful to precisely decompose value and temporal uncertainty for timecourse data. Our analysis of the \u201ctime-value\u201d error model with synthetic data indicates that accurately accounting for data uncertainty can improve model identifiability by correctly penalizing less reliable data, while avoiding under- and over-estimates of the error associated with value. 
The time-value error model may be useful for timecourse data involving sharp changes, observed in data sets from highly dynamic biological processes like signal transduction or gene activation in response to stresses, immune threats or growth factors, where temporal error can greatly contribute to the data uncertainty. After developing a practical computational workflow, we used simulated datasets to provide some guiding principles for experimental design for generating the most informative stimulus-response datasets. Model-guided experimental design is addressed in a rich literature. Our work also provides guiding principles for mapping other types of observed data to the molecular network. The general approach outlined here\u2014generating a model-gene expression map by comparing all possible hypotheses and fitting the model to integrated data sets\u2014can be extended beyond the question of TF combinatorial regulation of transcription initiation. For example, epigenomic datasets that describe chromatin remodeling (ATAC-Seq), TF-DNA interactions (ChIP-Seq), chromatin conformation, and protein-RNA interactions (CLIP-Seq) may be used to distinguish alternate models of multiple or sequential steps of gene regulation. Furthermore, the abstracted combinatorial models described here may be iteratively refined, for example by addressing the TFs regulating different and sequential kinetic steps leading to transcriptional initiation. Applying the computational workflow to publicly available and newly generated input-output data of macrophages responding to immune stimuli illustrates the power of this quantitatively rigorous analysis workflow. Previous purely experimental studies relied heavily on knockout data to infer the involvement of a transcription factor. However, biological systems are characterized by overlapping functions of their components, a feature that confers functional robustness. 
In sum, as we lack a quantitative understanding of the molecular mechanisms governing many important biological regulatory systems, which would enable detailed regulatory network models, we describe here an approach for distinguishing quantitatively between alternate regulatory strategies using input-output datasets. Such an approach connects biological knowledge to data-driven approaches, and may prompt and guide subsequent mechanistic experimental and mathematical modeling studies. Considering TF combinatorial regulation alone, there are numerous examples that may benefit from the described workflow. In macrophages, the response to pathogens and cytokines is mediated by a handful of inducible TFs that are thought to engage in AND and OR gates. In this work, we quantitatively studied combinatorial GRSs formed by three TFs. Hence, we modeled nascent RNA expression from activated genes by considering RNA synthesis and first-order RNA processing. RNA abundance dynamics can be modeled using a single ODE: d[RNA]/dt = ksyn f(t) \u2212 kproc [RNA], where kproc represents the nascent RNA processing rate and ksyn a constant synthesis rate that is modulated by the fractional promoter activity f(t), which includes both combinatorial regulation from the GRS and some basal activity. Here f(t) describes how TFs regulate promoter activity, composed of a logic gate function GRS that depends on the TF activities and some basal promoter activity denoted k0. Analogous gate functions were formulated for TF2 AND (TF1 OR TF3) and TF3 AND (TF1 OR TF2), and for TF2 OR (TF1 AND TF3) and TF3 OR (TF1 AND TF2). We considered all possible synergistic (AND) and non-synergistic (OR) gene regulation strategies involving three TFs. Our focus here is the synergistic and/or non-synergistic combinatorial regulation by 3 TFs to produce a specific GRS. Hence, for each regulatory logic, we considered that each TF may have one of three regulation strengths: strong (S), medium (M), and weak (W). 
Thus, we considered a total of 27 regulation strength combinations for 3 TFs and 8 logic gates, yielding a total of 216 possible GRSs. Of these 216 GRSs, several may in fact not be efficiently activated. For example, AND gate configurations in which one component has a \u201cWeak\u201d regulatory strength cannot be activated well within the present scheme. In this way, we identified 69 poorly activated GRSs and removed them from further consideration. In addition, several GRSs were found to be logically equivalent. For example, the single-TF strategy TF1 is logically equivalent to either a triple OR gate in which TF2 and TF3 have \u201cWeak\u201d regulatory strength, or a TF1 OR (TF2 AND TF3) gate in which TF2 or TF3 have \u201cWeak\u201d regulatory strength. In this way we removed 54 redundant GRSs. These first-principles considerations led us to a list of 93 potentially identifiable GRSs. We defined perturbations as functions of the TF activities over time; in addition to the simple perturbation, further perturbation functions were defined. Strong regulation corresponds to Kd = 0.1, medium to Kd = 1, and weak to Kd = 10, given that Kd has minimal effect on promoter activity outside this range. We consider genes with a short RNA processing rate. Here, we simulate a gene expression timecourse at 0, 15, 30, 60 min, but our approach can be generalized to any time points of experimental interest. To compare all the GRSs, we simulated the resulting gene expression with ksyn = 1 RPKM/min (where RPKM means reads per kilobase per million of mapped reads), and normalized the gene expression such that its maximal value corresponds to 100 across all the perturbation conditions; this is equivalent to adjusting ksyn. In our work, we defined a GRS as the combination of a logic gate and regulation strength parameters, which together determine gene expression levels. For testing purposes, we assume TF activities range from 0 to 1 and define the high level as 1, the medium level as 0.5, and the low level as 0. 
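The setup above can be made concrete with a toy simulation. The sketch below is illustrative rather than the paper's code: it composes Hill-function occupancies into AND/OR gates (AND as a product, OR as the complement of the product of complements, a simplification of the thermodynamic formulation) and Euler-integrates d[RNA]/dt = ksyn*f(t) - kproc*[RNA]; all parameter values are assumptions:

```python
def hill(tf, Kd, n=2):
    """Fractional occupancy of a TF binding site with affinity Kd."""
    return tf**n / (Kd**n + tf**n)

def gate_and(occs):
    p = 1.0
    for o in occs:
        p *= o                      # all TFs must be bound
    return p

def gate_or(occs):
    p = 1.0
    for o in occs:
        p *= 1.0 - o                # any bound TF suffices
    return 1.0 - p

def simulate(tfs, Kds, gate, ksyn=1.0, kproc=0.2, k0=0.01, dt=1.0, T=60):
    """Euler integration of d[RNA]/dt = ksyn*f(t) - kproc*[RNA],
    with f(t) = gate of Hill occupancies plus basal activity k0 (capped at 1)."""
    rna, traj = 0.0, []
    for step in range(int(T / dt)):
        occs = [hill(tf(step * dt), Kd) for tf, Kd in zip(tfs, Kds)]
        f = min(1.0, gate(occs) + k0)
        rna += dt * (ksyn * f - kproc * rna)
        traj.append(rna)
    return traj

# constant TF activities: TF1 and TF2 on, TF3 off
tfs = [lambda t: 1.0, lambda t: 1.0, lambda t: 0.0]
and_traj = simulate(tfs, [0.1, 1.0, 1.0], gate_and)  # triple AND: TF3 off blocks it
or_traj = simulate(tfs, [0.1, 1.0, 1.0], gate_or)    # triple OR: still induced
print(and_traj[-1] < 1.0 < or_traj[-1])  # True
```

The same two gate primitives, applied to TF subsets, cover mixed strategies like TF1 AND (TF2 OR TF3), and the three Kd levels (0.1, 1, 10) reproduce the strong/medium/weak regulation strengths.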
Based on the Hill function, we define regulation strengths ranging from weak to strong, with strong corresponding to the smallest Kd. We define the distance between two gene expression profiles as the squared Euclidean distance over all timepoints across all perturbation conditions, where gei,p(t) denotes the gene expression of gene i for perturbation p at time t. We used hierarchical clustering with a single linkage approach to cluster GRSs based on their gene expression profiles. With the single linkage approach, the distance between two clusters of GRSs is defined as the minimum distance over inter-cluster pairs; we considered two groups of GRSs not identifiable if one inter-pair distance is below a certain tree height threshold, called in the rest of the manuscript the separation threshold. We followed the same procedures described in the GRS simulation to examine GRSs with randomly sampled parameters. Specifically, we generated 1000 sets of parameters for each logic gate by sampling uniformly from parameter space; since the effect of randomly sampling ksyn would be counteracted by the normalization of gene expression, ksyn was set to a constant (1 RPKM/min). We combine point-specific information, estimated using maximum likelihood estimation (MLE), with a global trend to generate a stabilized estimate (posterior distribution). This empirical Bayes approach has been broadly used in handling small-sample problems in bulk and single-cell RNA-seq data, including by DESeq2 and voom. Specifically, we first estimate the error parameters \u03beprior, which are defined based on the error model described later, from the global trend by pooling all the points together. 
Next, we combine the global trend with the data of each time point i to estimate the posterior distribution \u03bei,posterior of each time point, which is given by Bayes\u2019 rule: P(\u03bei \u2223 data) \u221d \u2112(data \u2223 \u03bei) P(\u03bei), where \u2112(data \u2223 \u03bei) is the likelihood function that represents the information from each time point i, and P(\u03bei) corresponds to the prior knowledge on the distribution of \u03bei, which we assume to be normal with mean \u03beprior. We decompose the overall uncertainty into value uncertainty and temporal uncertainty (gene response time variations and sample preparation timepoint variation), as those two sources of uncertainty are orthogonal and have two different mathematical forms. To model the temporal uncertainty of a point (t, yt) due to potential uncertainty in the exact time the measurement was done, we assume that it follows a Gaussian distribution with mean t and standard deviation \u03c3t. The resulting uncertainty in y due to the uncertainty in t is denoted by \u03c3temporal. For a small neighborhood around t, the curve can be approximated linearly with the slope as the tangent of the curve. Therefore, with the assumption that the uncertainty of t follows a Gaussian distribution, y will also follow a Gaussian distribution, whose standard deviation is given by the error propagation formula \u03c3temporal = |dy/dt| \u00b7 \u03c3t. As multiple uncertainty sources can cause observed value uncertainty, we assume the overall observed value uncertainty follows a Gaussian distribution. To obtain a robust value uncertainty estimate, we modeled the mean-variance relationship by a polynomial regression on the mean \u03bc. This helps stabilize the estimation of the variance by leveraging its dependency on the mean, a robust estimator. A similar approach to capture the mean-variance relationship was introduced by DESeq2. Here, we only include the first two orders, as this already captures well the mean-variance dependency. 
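The time-value decomposition can be sketched as follows. This is illustrative only: combining the two orthogonal error sources in quadrature is our assumption here, and the slope is taken by a simple central difference rather than a Hermite interpolation:

```python
def central_slope(ts, ys, i):
    """Central-difference estimate of dy/dt at index i (one-sided at the ends)."""
    lo, hi = max(i - 1, 0), min(i + 1, len(ts) - 1)
    return (ys[hi] - ys[lo]) / (ts[hi] - ts[lo])

def total_sd(ts, ys, i, sigma_value, sigma_t):
    """Combine value and temporal uncertainty for point i.
    sigma_temporal = |dy/dt| * sigma_t (linearized error propagation);
    the two orthogonal sources are added in quadrature (our assumption)."""
    sigma_temporal = abs(central_slope(ts, ys, i)) * sigma_t
    return (sigma_value**2 + sigma_temporal**2) ** 0.5

ts = [0, 15, 30, 60]
ys = [1.0, 40.0, 90.0, 100.0]   # sharply induced gene
flat = total_sd(ts, ys, 3, sigma_value=2.0, sigma_t=3.0)   # plateau: small slope
steep = total_sd(ts, ys, 1, sigma_value=2.0, sigma_t=3.0)  # rising phase: large slope
print(steep > flat)  # True: temporal error dominates where the curve is steep
```

This captures the key point of the time-value error model: the same timing jitter translates into much larger effective value error on steep parts of the response curve.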
Higher order terms can be added when dealing with more complex datasets. We separately modeled the background and induced gene expression distributions. These two parts are usually caused by different sources and exhibit different levels of uncertainty. In addition, the distribution for background uncertainty is desired to have \u211d\u22650 as its support, as all measurements are positive. Therefore, the distribution of gene expression is modeled by gamma and normal distributions, depending on the expression level. Here, we connect the data mean and variance to the gamma distribution\u2019s mean and variance to parameterize its shape and scale, k and \u03b8. In addition, we set the threshold \u03b7 between the two regimes to be 3 RPKM. We followed first principles to generate data uncertainty from both biological variability and technical uncertainty. We generated biological variability by varying the TF amplitude, gene response time and parameters of the model by sampling from the specified distributions. To reliably estimate the slope of the curve, we first interpolated the curve with a piecewise cubic Hermite function, and then estimated the slope from the interpolated curve. This allowed us to leverage information from all points and the shape of the curve. Specifically, the slope is calculated by the central difference. The choice of \u0394t depends on the temporal uncertainty level, as one should take a bigger \u0394t when the temporal uncertainty is high. We first derived the likelihood function for the inference; the likelihood of the point yi at time point i involves the gamma and normal components described above. Next, we estimated \u03beprior from the global trend of points using the global likelihood \u2112global, which is given by the multiplication of the point likelihoods from all the perturbations p, genes g and time points t. We then estimated \u03bei,posterior by combining the global trend with point-specific information. 
We assume \u03beprior follows a normal distribution, with mean estimated from the global trend. Finally, we use Bayes\u2019 rule to estimate \u03bei,posterior. For computational efficiency during optimization, we convert all target functions to their negative log form, and use the Limited-memory BFGS (L-BFGS) method, as implemented in the optim function in R, for the optimization. More details can be found in the parameter estimation and model selection part. As the MLE of the variance is a biased estimator, we corrected all estimated variance parameters to the unbiased estimator by multiplying by N/(N \u2212 1), where N is the number of replicates (here N = 2). In the conventional error model, only the value uncertainty is considered. We considered gene expression data from multiple perturbation conditions p = 1,\u2026,N, with multiple time points t = 0, 15, 30, 60 min for each perturbation. This yields the probability to observe the experimental data yobs given the model-simulated gene expression and the data uncertainty estimated from the global trend. We took the negative log likelihood (NLL) for the optimization. We define the distance between gene expression i and gene expression j as the average of NLL(i\u2223j) and NLL(j\u2223i), where NLL(i\u2223j) is the negative log likelihood of observing gene expression i given gene expression j as ground truth. We then performed hierarchical clustering with the single linkage approach and the defined distance matrix. We first applied the time-value error model to estimate the data uncertainty level. We took the average of two replicates of TF activities as input. We then incorporated the averaged TF activities as input, the simulated gene expression data as output, and their estimated data uncertainty into the model parameterization. In addition, we constrained the range of Kd during optimization to 10-fold weaker and stronger than the weak and strong regulation strengths (i.e. from 0.01 to 100), as Kd has minimal effect on the GRS output beyond this range. All parameters are optimized on a logarithmic scale, as this enables the algorithm to quickly search a large space. 
To estimate parameters and select logic gates, we first optimized parameters for each logic gate, and picked the best-fitting logic gate and its estimated parameters among all eight logic gates. We used a multi-start local optimization approach, as it has been shown to outperform, or at least perform as well as, some global optimization methods. We tested our error model and pipeline with different levels of data uncertainty. As the uncertainty sources do not contribute equally to the final data uncertainty, we adjusted the altered level of each uncertainty source so that the final error contribution from each source would be approximately the same. After we simulated the data, we examined the overall uncertainty caused by all the uncertainty sources. Primary Bone Marrow Derived Macrophages (BMDMs) were prepared by culturing bone marrow cells from femurs of female 8\u201312-week-old WT mice or different knock-out mice in L929-conditioned medium by standard methods. Western blotting analysis and EMSAs were conducted with standard methods as described previously. To quantify TF activity upon stimulation, we first linearly scaled the band intensity values for each perturbation so that the basal and peak band intensities correspond to 1% and 100%, where TFscaled and TFraw are the values of TF band intensity after and before scaling, and TFpeak and TFbasal are the peak and basal band intensities before scaling. To compare TF activities in different perturbation conditions, we normalized them by setting the peak of wild type LipidA stimulation to 1. 
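The scaling and normalization just described might be sketched as follows. The exact scaling formula is not preserved in this excerpt, so the linear map below (basal to 1%, peak to 100%) is an assumption:

```python
def scale_band(raw, basal, peak):
    """Linearly map raw band intensities so that basal -> 1 (1%) and
    peak -> 100 (100%). The exact formula is an assumption; the text
    only states the target range."""
    return 1.0 + (raw - basal) / (peak - basal) * 99.0

def normalize_to_reference(scaled, ref_peak_scaled):
    """Express a perturbation's TF activity relative to the wild-type
    LipidA peak (set to 1) measured on the same gel."""
    return [v / ref_peak_scaled for v in scaled]

raw = [10.0, 55.0, 100.0]                  # basal, mid, peak intensities
scaled = [scale_band(v, 10.0, 100.0) for v in raw]
print(scaled)                              # [1.0, 50.5, 100.0]
print(normalize_to_reference(scaled, 100.0)[-1])  # 1.0
```

Anchoring every condition to the same-gel wild-type LipidA peak makes TF activities comparable across perturbations despite gel-to-gel intensity differences.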
We then normalized all the other perturbation conditions p by multiplying each by a factor pnorm derived from the wild type LipidA stimulation measured on the same gel. We first defined the MAPK-targeted genes Egr1, Fos, and Dusp4, which are neither NF\u03baB nor IRF targets. For those target genes, we estimated TF activities from their gene expression in RNA-seq. Specifically, for individual target genes, we linearly converted their gene expression (RPKM) so that in the WT LipidA-stimulated condition the basal and peak values are 1% and 100%, using the same formula described in the last section. Finally, we took the averaged gene expression of all the converted target genes as the inferred MAPK-regulated transcription factor activity. We collected all the chromatin-associated RNA-seq data from previously published work. Induced genes were selected by the same criteria described in the paper. For the MAPK-inhibited perturbation, we collected both the MAPK-inhibited LipidA stimulation and a control condition (wild type in solvent with LipidA stimulation). To make the MAPK-inhibited condition comparable to the other conditions, we normalized each time point by multiplying by the factor RNAWT(ti)/RNAcontrol(ti), where RNAWT(ti) and RNAcontrol(ti) are the gene expression (RPKM) at time point ti in wild type stimulated with LipidA and in wild type in solvent stimulated with LipidA, respectively. We collected all the RelA and IRF3 ChIP-Seq data from previously published work. For perturbation conditions in which TF activities were not measured (e.g. knock-out cells with LipidA stimulation, wild type with Pam3csk4 stimulation), we used the values estimated from wild type with LipidA stimulation. We applied the developed error model to the collected chromatin-associated RNA-seq data to estimate data uncertainty. For RNA-seq data with only a single replicate, the experimental basal gene expression was used in the fitting via the formula ysim,p,i = ysim raw,p,i + ybasal,p, where ysim,p,i and ysim raw,p,i are the final and raw model (with k0 = 0) simulated gene expression at time point i and perturbation condition p, and 
ybasal,p is the experimental basal gene expression at perturbation p. We applied the same developed mechanistic model to identify GRSs for immune response genes. Specifically, to better capture basal gene expression, we added the experimentally measured basal expression to the model simulation. Then, we only selected the potential GRSs for downstream analysis by setting a threshold on the fitting score (NLL <100). We applied the same Bayesian framework and likelihood function to identify GRSs for immune response genes. After parameter optimization for all 8 logic gates, we mapped them back to the 17 logic gates by assigning the 8 triple-TF logic gates to single or dual logic gates if one or two TFs have null regulation strength (Kd >100). S2 Fig: Diagram explaining how the 216 possible GRSs were reduced to 93 potentially identifiable GRSs. First, 69 poorly activated GRSs were removed (left side), as well as 54 redundant GRSs, given that they are logically equivalent to a single TF (top right) or to two TFs with an AND gate (bottom right). This results in 93 potentially identifiable GRSs. S4 Fig: Comparison between the directly estimated raw variance of two replicates and the ground truth. S5 Fig: (A) Comparison of the estimated regulation strength between the time-value error model, the conventional error model, and raw variance with different levels of noise. (B) Confusion matrix of all 93 estimated GRSs from the time-value and conventional error models and raw variance. S6 Fig: Comparison of the number of replicates and number of perturbation conditions for the identifiability of the 93 GRSs with a total of 8 datasets. S7 Fig: (A) 
Graphs of measured and inferred TF activities in the perturbation conditions used by the caRNA-seq study. (B) Heatmap of measured nascent mRNA expression data together with fitness for all the candidate logic gates. The left side shows the fitness (negative log likelihood) of all the possible logics. The white boxes are the logics that do not account for the data when mapping the 8 logics back to the 17 logics. Genes are ordered by their expression with a hierarchical clustering approach. (C) Heatmaps of experimentally measured caRNA-seq time-course data and simulation data by the best-fit GRS models using the Time-Value error model or raw variance, as indicated. (D) Comparison of the negative log likelihood (NLL) of the best GRS model that contains any synergistic component to that of the best GRS model that does not contain any synergistic component."}
+{"text": "Gene expression dynamics, such as stochastic oscillations and aperiodic fluctuations, have been associated with cell fate changes in multiple contexts, including development and cancer. Single cell live imaging of protein expression with endogenous reporters is widely used to observe such gene expression dynamics. However, the experimental investigation of regulatory mechanisms underlying the observed dynamics is challenging, since these mechanisms include complex interactions of multiple processes, including transcription, translation and protein degradation. Here, we present a Bayesian method to infer kinetic parameters of oscillatory gene expression regulation using an auto-negative feedback motif with delay. Specifically, we use a delay-adapted nonlinear Kalman filter within a Metropolis-adjusted Langevin algorithm to identify posterior probability distributions. Our method can be applied to time-series data on gene expression from single cells and is able to infer multiple parameters simultaneously. We apply it to published data on murine neural progenitor cells and show that it outperforms alternative methods. We further analyse how parameter uncertainty depends on the duration and time resolution of an imaging experiment, to make experimental design recommendations. This work demonstrates the utility of parameter inference on time course data from single cells and enables new studies on cell fate changes and population heterogeneity. The relationship between the state xt and the observed data yt is given by yt = Fxt + \u03b5t, where F is a 1 \u00d7 2 matrix. Thus, F and \u03b5 represent our measurement model. Throughout, we use F = , since we aim to apply our method to data on protein expression dynamics, where measurements of mRNA levels are not available. The values \u03c1 and P represent the state-space mean and state-space variance, respectively.
We define yt0: as the set of all observations up to the current time, and write Pt = Cov(X(t), X(t)| yt0:). It is possible to show that the likelihood of a set of observations given specific model parameters, \u03c0(y\u2223\u03b8), factorizes over the observation time points, and the Kalman filter calculates the state-space mean \u03c1t and variance Pt for all discretization time points up to t = kz. In the prediction step, we use the model to calculate the predicted probability distribution for protein and mRNA copy numbers at the next observation time point, X((k + 1)z). We use this prediction to evaluate the likelihood of the observed data at the (k + 1)th observation time point. Before the prediction for the next observation is made, the Kalman filter update step is applied, in which the probability distribution of the state space up to observation k + 1 is updated to take the measurement at t = (k + 1)z into account. In the Kalman filter, the distribution \u03c0(xt\u2212\u03c4:t| yt0:) describes the state space from t \u2212 \u03c4 to the current time, t, given all of our current observations. This is necessary in order to accurately predict the state-space distribution at the next observation time point, \u03c0(xt+\u0394t| yt0:), as past states can affect future states due to the presence of delays. We provide detailed derivations of our Kalman filter prediction and update steps in electronic supplementary material, S.1. For our update step, we derive an expression for the mean and variance of the state-space distribution. 2.3 The aim of our inference algorithm is to generate independent samples from the posterior distribution. This requires the gradients of the state-space mean, \u03c1t, and state-space variance, Pt, with respect to each parameter, as detailed in electronic supplementary material, S.5, since drawing proposals using MALA requires the calculation of the gradient of the log-posterior.
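As a minimal sketch of the prediction and update steps described above, consider a linear-Gaussian filter without delay; the transition matrix A, process noise Q and measurement setup below are illustrative assumptions, not the paper's delay-adapted model:

```python
import numpy as np

# One Kalman filter cycle for a 2D state (mRNA, protein) observed through a
# scalar measurement y = F x + noise. A, Q, F, R are illustrative stand-ins.
def kalman_step(rho, P, y, A, Q, F, R):
    # Prediction step: propagate state mean and covariance forward.
    rho_pred = A @ rho
    P_pred = A @ P @ A.T + Q
    # Evaluate the likelihood of the new observation under the prediction.
    S = F @ P_pred @ F.T + R            # innovation variance (scalar here)
    innov = y - F @ rho_pred            # measurement residual
    loglik = -0.5 * (np.log(2 * np.pi * S) + innov**2 / S)
    # Update step: condition the state distribution on the observation.
    K = P_pred @ F.T / S                # Kalman gain
    rho_new = rho_pred + K * innov
    P_new = P_pred - np.outer(K, F @ P_pred)
    return rho_new, P_new, float(loglik)
```

Summing the returned log-likelihood contributions over all observation time points gives the total log-likelihood of the time series, which is the quantity the sampler needs.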
2.4 Before applying our inference method we detrend protein expression time series using Gaussian process regression, in order to identify and exclude data that show significant long-term trends. We use a scaled squared exponential Gaussian process combined with white noise, whose kernel is given by k(x(t), x(t\u2032)) = \u03b3 exp(\u2212||x(t) \u2212 x(t\u2032)||\u00b2/(2l\u00b2)) + \u03b7 \u03b4(t, t\u2032), where ||x(t) \u2212 x(t\u2032)|| is the Euclidean distance between x(t) and x(t\u2032), l is the lengthscale, and \u03b3 and \u03b7 are positive hyperparameters. Here, x(t) and x(t\u2032) represent our protein expression time course data at times t and t\u2032, respectively. In the Gaussian process regression, the hyperparameters \u03b3, l and \u03b7 are found using constrained optimization, with the lengthscale constrained to a fixed range. The lower bound of this range, 1000 min, was chosen to ensure that detrending does not perturb ultradian dynamics in the data. The upper bound, 2000 min, was chosen sufficiently large to ensure that detrending is not affected by it. We identified data without a significant long-term trend manually by visual inspection. 3 We first validate our method by showing the performance of our algorithm on in silico datasets. We then demonstrate the utility of our method by applying it to experimentally measured data and, finally, use our method to analyse how parameter uncertainty may depend on properties of the data, as well as the experimental design. Single cells in a seemingly homogeneous population can change cell fate based on gene expression dynamics. The control of gene expression dynamics can be understood with the help of mathematical models, and by fitting these models to experimentally measured data. Here, we analyse our new method for parameter inference on single-cell time-series data of gene expression using the widely used auto-negative feedback motif. 3.1 We generate in silico data from the forward model of the auto-negative feedback motif (figure 2c). This is done using chemical Langevin equations, as detailed in \u00a72.1.
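The Gaussian-process detrending step of \u00a72.4 can be sketched as follows; here the hyperparameters \u03b3, l and \u03b7 are fixed by hand for illustration, whereas the paper finds them by constrained optimization:

```python
import numpy as np

# GP regression with a squared exponential kernel plus white noise on the
# diagonal; the posterior mean serves as the long-term trend to subtract.
# Hyperparameter values here are illustrative, not the paper's estimates.
def detrend_gp(t, x, gamma=1.0, lengthscale=30.0, eta=0.1):
    d = np.abs(t[:, None] - t[None, :])            # pairwise time distances
    K = gamma * np.exp(-d**2 / (2 * lengthscale**2))
    K_noisy = K + eta * np.eye(len(t))             # white-noise term
    mean = x.mean()
    trend = K @ np.linalg.solve(K_noisy, x - mean) + mean
    return x - trend, trend
```

A series whose trend component is large relative to its oscillation would then be excluded from inference, mirroring the manual selection described above.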
Specifically, we emulate an in silico imaging experiment by selecting simulated data in sparse intervals of \u0394tobs minutes and mimic measurement noise by adding random perturbations to each observation time point (figure 2a). These perturbations are drawn from a Gaussian distribution with fixed variance. Testing on in silico data first is beneficial, since ground truth parameter values are known a priori for the generated in silico datasets, and can be compared to the obtained posterior distributions. We first test our inference method on these in silico datasets. We start by applying our inference method to simple test cases, where the true values of all but one parameter are known, and only the remaining, unknown, parameter value is inferred. This analysis (figure 2b\u2013d and electronic supplementary material, figure S1) validates our implementation of MALA, and the associated computations of the likelihood gradient; additionally, ground truth parameter values lie well within the support of the posterior distribution. In order to further test our implementation of MALA, and the associated computations of the likelihood gradients, we compare our results to posterior distributions sampled using the MH algorithm, which does not require gradient calculations. Despite an expected slower convergence of the MH algorithm, this comparison is feasible for one-dimensional posterior distributions, which typically can be well approximated with a few thousand samples. The sampled means have a relative difference below 0.03%, and the standard deviations fall within 4% of each other, which further validates the implementation of the individual likelihood gradients. Our proposed inference method uses the MALA sampler, which relies on calculating likelihood gradients; the comparison with exact calculations is shown in figure 2. 3.2
Since we cannot measure convergence of the sampled posterior through comparison to the true posterior distribution in the multi-dimensional case, we rely on typical MCMC convergence diagnostics. Having validated the method on one-dimensional posterior distributions, we further test the performance of the method by simultaneously inferring multiple model parameters from a single in silico dataset, comparing the resulting posterior distribution to the ground truth parameter combination (figure 3a,b). We choose a dataset that shares characteristics with typically collected time course data from single cells. Specifically, our in silico dataset is of similar length and observation intervals as the data previously analysed by Manning et al.; the inferred parameters include the transcription and translation rates \u03b1m and \u03b1p. We find that the marginal posterior means, i.e. values of largest probability, all lie within maximally half a standard deviation of the ground truth values. Simultaneous inference of multiple parameters further allows for the investigation of pairwise parameter correlations, using the correlation coefficient \u03bd (figure 3c). Pairwise correlations provide crucial information on how posterior distributions can be constrained further. Specifically, the strong correlation between the repression threshold, P0, and the logarithm of the basal transcription rate, log (\u03b1m) (figure 3e), highlights that the data in figure 3a are consistent with either high repression thresholds and low transcription, or vice versa. Such strong pairwise correlations imply that gaining new information on one of the two parameters would constrain the other. This is not the case when parameters are uncorrelated, such as the transcriptional delay and the translation rate (figure 3d), and experimentally measured values on either of these parameters would not inform the other. 3.3 In a previous approach, inference was performed using population-level summary statistics of the collected time-course data. This resulted in posterior distributions with high parameter uncertainty.
Specifically, the marginal posterior distributions for the Hill coefficient and the transcriptional delay were close to uniform, illustrating that the provided summary statistics did not contain sufficient information to constrain the uniform prior distribution. The remaining parameters had distinct modes. Nonetheless, parameter uncertainty was high, since the spread of the posterior distribution was comparable to that of the prior. Next, we seek to evaluate the performance and utility of our method by applying it to experimentally measured data. Specifically, we investigate data on gene expression oscillations in mouse spinal cord neural progenitor cells, and select traces without strong long-term trends for inference (figure 4b). When applying our method to time-series data from fluorescence microscopy experiments, it is necessary to address that our model of the auto-negative feedback motif cannot describe long-term trends in data. Specifically, the model of the auto-negative feedback loop considered here is designed to describe ultradian oscillations that typically have periods shorter than 10 h. In order to identify a suitable value for the measurement variance, we follow estimates by Manning et al., where Nc is the number of considered traces and the variance is estimated from each ith detrended dataset. For the single-cell gene expression time course in figure 4b, we find that there is still comparatively high uncertainty on the basal transcription rate (\u03b1m in figure 4c), as the support of the marginal posterior distribution reflects that of the uniform prior distribution.
However, for all other model parameters that are inferred from this time course, the marginal posterior distributions are narrower than the prior, and than previously identified marginal posterior distributions from ABC (figure 4c). We find that our method can identify more accurate posterior distributions than the previous ABC-based approach by using single-cell time series of gene expression only. We next apply it to multiple cells which have distinct gene expression dynamics and which also do not have strong long-term trends (figure 4d,e). We find that the posterior distributions inferred from multiple cells share features that are conserved across all cells and both populations. Specifically, the marginal posterior distributions of the translation rate \u03b1p are all larger than exp (2)/min, and biased to larger values. Similarly, the marginal posterior distributions for the delay \u03c4 cover the entire range of considered values, and are biased towards smaller values, with most likely values below 10 min. These observations appear to hold true for both clusters considered here, and they highlight that parameter inferences obtained from our method are biologically reproducible, which is a necessary feature to enable the use of the method in practical applications. By contrast, for the basal transcription rate \u03b1m and the Hill coefficient h, marginal posterior distributions vary between individual cells, suggesting that there is an underlying heterogeneity of these parameters across the cell population. However, the remaining parameter uncertainty is too high to reliably identify differences between cells and clusters of cells, raising the question of how imaging protocols may need to be changed in order to achieve lower uncertainty on typical parameter estimates. 3.4 We introduce the relative uncertainty, RU\u03b8, which quantifies the spread of the posterior distribution.
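The paper's exact definition of RU\u03b8 is given in its supplement; a simple stand-in that likewise quantifies posterior spread is the width of a central credible interval relative to the posterior mean:

```python
import numpy as np

# Illustrative spread measure for a sampled marginal posterior: the width of
# a central credible interval divided by the posterior mean. This is a
# plausible stand-in for RU_theta, not the authors' exact formula.
def relative_uncertainty(samples, mass=0.9):
    lo, hi = np.quantile(samples, [(1 - mass) / 2, 1 - (1 - mass) / 2])
    return (hi - lo) / np.mean(samples)
```

Tracking such a quantity per parameter across datasets is how reductions in uncertainty (e.g. under longer sampling durations) can be reported as percentages.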
We use this metric to quantify the performance of our inference method on a number of synthetic datasets with different lengths and sampling frequencies, and for different locations in parameter space. Typically, longer imaging time series can only be collected at the cost of a lower imaging frequency. When designing experiments, it may be desirable to choose an imaging protocol that optimizes the parameter inference towards high accuracy and low uncertainty. However, parameter uncertainty may not only be influenced by the imaging protocol, but also by the bifurcation structure of the underlying dynamical system. Hence, we choose two locations in parameter space that correspond to two different values of oscillation coherence, thus producing qualitatively different expression dynamics (figure 5a; electronic supplementary material, table S2). The oscillation coherence is a measure of the quality of observed oscillations. Choosing parameter combinations with different coherence thus ensures that these correspond to different positions within the bifurcation structure of the auto-negative feedback loop. We first analyse to what extent collecting data for a longer sampling duration may reduce parameter uncertainty (figure 5b,d). We find that a longer sampling duration can strongly decrease parameter uncertainty. Doubling the length of the time series reduces the uncertainty by 19% on average for the high coherence parameter combination, and 7.1% on average for the low coherence parameter combination. A tripling of the available data leads to reductions in uncertainty by 29.8% and 18.3% for high and low coherence, respectively. By contrast, an increase in sampling frequency (figure 5c,e) leads to a smaller decrease in parameter uncertainty: doubling the amount of data only leads to a decrease by 11.3% in the case of the high coherence parameter combination, and 6.7% in the case of low coherence.
A tripling of the available data through higher sampling frequency leads to reductions in uncertainty of 13.2% and 9.1% for low and high coherence, respectively. By contrast with longer sampling durations, an increase in sampling frequency thus leads to a smaller decrease in parameter uncertainty on average (figure 5c,e). We find that analogous conclusions hold true if inference accuracy (electronic supplementary material, S.7, equation (S36)) is analysed instead of uncertainty. Accuracy increases with longer sampling durations and shorter imaging intervals, and longer sampling durations have a stronger effect than shorter imaging intervals. So far, we have analysed the impact of changes in the imaging protocol on parameter uncertainty overall. Alternatively, it may be desirable to identify interventions that reduce uncertainty for particular parameters of interest. For example, an important quantity of interest may be the average rate of transcription of the investigated gene (figure 6a\u2013c). Intuitively, one may assume that estimates for the rate of transcription are improved if measurements of mRNA copy numbers, in addition to protein expression dynamics, are considered in the inference. Hence, we next assume that, in addition to data on the dynamics of protein expression, measurements of mRNA copy numbers have been conducted on the observed cells. Specifically, we generate in silico data mimicking a single-molecule in situ hybridization (smFISH) experiment. Such smFISH experiments generate distributions of mRNA copy numbers, thus providing a snapshot of mRNA levels across a population at a fixed time point. Without mRNA information, the inferred posterior distributions of the transcription rate can deviate strongly from the truth (figure 6d,e), with the mean of the distribution being 5.4 times larger than the true value in figure 6d, and 2.4 times larger in figure 6e, respectively. Upon inclusion of mRNA information, these posterior distributions are instead concentrated around the true value, with a relative error below 15.3%.
In both examples, the ground truth is contained within the 65% HDI. In figure 6f, a posterior distribution that is already close to the true value gets further constrained by the additional mRNA data. In these examples, the observed reduction in uncertainty on the inferred transcription rate is accompanied by a reduction in uncertainty on estimated mRNA copy numbers for individual cells, as inferred by the Kalman filter. We find that this inclusion of mRNA information collected from a cell population leads to more accurate inference of the average transcription rate for single cells, using our algorithm. Investigating the uncertainty on the average inferred transcription rate across the datasets introduced in figure 5, we observe a reduction in uncertainty for the low coherence parameter combinations (figure 6g), and a reduction in uncertainty of 51.2% for the high coherence parameter combinations (figure 6h). How does this improved estimate of the transcription rate affect overall uncertainty across parameter space? Observing example datasets (figure 6i,j), we find only small changes. For datasets from the low coherence parameter combination, the relative uncertainty increases by 1.1% on average when mRNA information is included (figure 6i). For datasets from the high coherence parameter combination, uncertainty decreases slightly (9.1% on average, figure 6j). Importantly, this reduction of uncertainty is considerably smaller than the reduction of uncertainty observed when longer measurement durations are considered (cf. figure 5d). We make analogous observations if inference accuracy is analysed instead of uncertainty: inference accuracy is not reduced for high coherence datasets when data on mRNA copy numbers are included, and it is only slightly reduced for some of the low coherence datasets, with the effect being much smaller than the effect of considering longer time course data. Intuitively, one might expect that more accurate inference of the average rate of transcription \u03b1T will also reduce uncertainty on model parameters regulating \u03b1T, such as the basal transcription rate, \u03b1m, and the repression threshold, P0.
This intuition is contradicted by the observation that overall uncertainty is not decreased as new data on mRNA copy numbers are included. The effect may be attributed to correlations between these parameters, which we typically observe in our posterior distributions (figure 6f). For the dataset in figure 6a, inference of \u03b1T is improved upon inclusion of the mRNA data. This leads to a tighter coupling between the parameters \u03b1m and P0. However, this constraining of the posterior distribution is not reflected in either of the marginal posterior distributions. Thus, although the inclusion of in silico smFISH data reduces the spread of the posterior distribution overall, uncertainty within the marginal posterior distributions is not reduced, and individual parameter estimates are not improved. An additional factor is that data from smFISH experiments may be considered to reflect the time-averaged mRNA copy number distribution of single cells. Hence, these data might not reduce uncertainty on parameters that are expected to predominantly alter the dynamics rather than the level of expression, such as the transcriptional delay \u03c4 and Hill coefficient h; to better infer these parameters, other strategies are required. We conclude that distributions of mRNA copy numbers from population-level measurements can be used to infer average transcription rates for individual cells, using our inference method, which may facilitate the study of transcriptional dynamics in the context of gene expression oscillations. This also highlights that our method can be naturally extended to use additional data of different types. 4 The aim of this work was to develop a statistical tool that can be used to infer kinetic parameters of the auto-negative feedback motif, based on typically collected protein expression time-series data from single cells.
Importantly, the stochastic nature of the involved processes demanded a method that enables uncertainty quantification. We have achieved our aim by embedding a nonlinear delay-adapted Kalman filter into the MALA sampling algorithm. Our method can generate accurate posterior distributions for the simultaneous inference of multiple parameters of the auto-negative feedback motif. The produced posterior distributions are more informative than those from previous approaches. Since our method can be applied to data from single cells, it enables the investigation of cell-to-cell heterogeneity within cell populations. It can further be used to make experimental design recommendations, which we demonstrated by investigating how parameter uncertainty may depend on the position in parameter space, the sampling frequency, and the length of the collected time-series data. Additionally, our method may be extended to account for the presence of different types of data, for example to improve estimates of the transcription rate for individual cells. Often, new inference algorithms are presented on a single dataset, and due to necessary tuning requirements of the involved sampling methods, further datasets are not considered. However, it is important to understand the behaviour of a method for a range of datasets if we wish to make experimental design recommendations. It is an achievement of this paper that we provide a method that demonstrably and reliably infers parameters, even when the size and structure of the data are changed significantly. The mathematical model underlying our method aims to describe the dynamic expression of a protein which is controlled by auto-negative feedback. The success of our inference relies upon how well this model approximates reality.
Mathematical models for the oscillatory expression of transcription factors are informed by experimental research. Chemical Langevin equations, such as equations (2.1) and (2.2), approximate the underlying discrete stochastic dynamics. The Gaussian approximation within the chemical Langevin equation can break down when molecule concentrations are very low, resulting in an inaccurate simulation of the dynamics. We do not expect this to be a problem for data analysed in this paper, since protein copy numbers throughout our analysis are around 50 000 protein molecules per nucleus. In other applications, the validity of the chemical Langevin equation may be explicitly tested on samples from the posterior distribution by directly comparing simulated expression time series with those obtained from an exact sampling algorithm, such as the Gillespie algorithm. For Bayesian inference problems, it is common to use MCMC samplers, such as MH or MALA. We have found that combining a delay-adapted nonlinear Kalman filter and MALA can allow us to infer parameters of the auto-negative feedback motif. This builds on previous approaches which applied a Kalman filter in the context of a different transcriptional feedback motif with delay. MCMC algorithms such as MALA scale more favourably with the dimension d of the inference problem, with a cost scaling of d1/3, rather than d1 for MH. Note that more efficient MCMC algorithms can eliminate the problem of tuning entirely. In our applications of the algorithm to experimentally measured data, we detrended the data before applying our inference (figure 4b). Such detrending is commonly used when analysing time series of oscillatory signals. When applying our inference method to experimental data, we relied on parameter ranges previously considered by Manning et al. Our algorithm opens up the investigation of research problems, such as cell-to-cell heterogeneity in dynamic gene expression, which would previously not have been accessible.
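For illustration, a minimal MALA sampler on a generic log-density can be sketched as follows (in the paper, the log-density and its gradient come from the delay-adapted Kalman filter; here they are user-supplied functions and the target is arbitrary):

```python
import numpy as np

# Metropolis-adjusted Langevin algorithm: gradient-informed Gaussian proposal
# plus a Metropolis correction for the asymmetric proposal densities.
def mala(logpdf, grad_logpdf, x0, step, n_samples, rng):
    x, samples = np.asarray(x0, float), []
    for _ in range(n_samples):
        # Langevin proposal: drift along the gradient, then add noise.
        mean_fwd = x + step * grad_logpdf(x)
        prop = mean_fwd + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        mean_bwd = prop + step * grad_logpdf(prop)
        # Log proposal densities (Gaussian kernels, constants cancel).
        log_q_fwd = -np.sum((prop - mean_fwd)**2) / (4 * step)
        log_q_bwd = -np.sum((x - mean_bwd)**2) / (4 * step)
        log_alpha = logpdf(prop) - logpdf(x) + log_q_bwd - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```

Because the proposal uses gradient information, a well-tuned step size typically yields faster mixing than a random-walk MH sampler of the same dimension.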
In future applications, our algorithm may provide a non-invasive method to measure the kinetic parameters of the gene of interest, such as the translation and transcription rates, or properties of the gene\u2019s promoter, which are described by the repression threshold and Hill coefficient parameters in our model. It may also be applied to experimental datasets where multiple, qualitatively different dynamics are observed. Throughout, we have assumed that measurements in the form of protein copy numbers per nucleus are available over time. To collect such data, it is necessary to combine endogenous fluorescent reporters with FCS in order to translate reporter intensity values to molecule concentrations. Future versions of our algorithm may be applicable to data where FCS is not available, if one extends our measurement model (F, \u00a72.2) to include an unknown, linear scaling parameter between protein copy numbers and imaged intensity values. We highlight that the impact of this work is not limited to a single gene in a single model system. The conceptual framework and derivations described here are applicable to any system which can be described by delayed stochastic differential equations, although there may be computational limitations as model sizes increase."}
+{"text": "Data on genome organization and output over time, or the 4D Nucleome (4DN), require synthesis for meaningful interpretation. Development of tools for the efficient integration of these data is needed, especially for the time dimension. We present the \u20184DNvestigator\u2019, a user-friendly network-based toolbox for the analysis of time series genome-wide genome structure (Hi-C) and gene expression (RNA-seq) data. Additionally, we provide methods to quantify network entropy, tensor entropy, and statistically significant changes in time series Hi-C data at different genomic scales. An overview of the 4DNvestigator workflow is depicted in the accompanying figure. The \u20184DN feature analyzer\u2019 quantifies and visualizes how much a genomic region changes in structure and function over time. To analyze both structural and functional data, we consider the genome as a network. Nodes within this network are genomic loci, where a locus can be a gene or a genomic region at a particular resolution (i.e. 100 kb or 1 Mb bins). Edges in the genomic network are the relationships or interactions between genomic loci. Structure in the 4DN feature analyzer is derived from Hi-C data. Hi-C determines the edge weights in our genomic network through the frequency of contacts between genomic loci. To analyze genomic networks, we adopt an important concept from network theory called centrality. Network centrality is motivated by the identification of nodes that are the most \u2018central\u2019 or \u2018important\u2019 within a network. The 4DNvestigator computes degree, eigenvector, betweenness, and closeness centrality (step 1 of Algorithm 1), which have been shown to be biologically relevant. Function in the 4DN feature analyzer is derived from gene expression through RNA-seq. Function is defined as the expression level of each genomic locus. Hi-C data are naturally represented as a matrix of contacts between genomic loci.
Network centrality measures are one-dimensional vectors that describe important structural features of the genomic network. We combine network centrality with RNA-seq expression to form a structure-function \u2018feature\u2019 matrix that defines the state of each genomic region at each time point. We include the main linear dimension reduction method, Principal Component Analysis (PCA), and multiple nonlinear dimension reduction methods: Laplacian Eigenmaps (LE) and t-distributed Stochastic Neighbor Embedding (t-SNE). The 4DNvestigator also includes a suite of previously developed Hi-C and RNA-seq analysis methods. For example, euchromatin and heterochromatin compartments can be identified from Hi-C. Entropy measures the amount of uncertainty within a system. We use an entropy computed from the eigenvalues of the genomic network, which is biologically interpretable as a measure of the orderliness of genome structure. The notion of transcription factories supports the existence of simultaneous interactions involving three or more genomic loci. This implies that genomic structure may be better represented by hypergraphs, and we provide a tensor entropy that generalizes network entropy to such higher-order interactions using tensor theory. The 4DNvestigator also includes a statistical test, proposed by Larntz and Perlman (the LP procedure), that compares correlation matrices. Example 1: Cellular Proliferation. Hi-C and RNA-seq data from B-lymphoblastoid cells (NA12878) capture the G1, S, and G2/M phases of the cell cycle for the maternal and paternal genomes. We visualize these data with the 4DN feature analyzer. Example 2: Cellular Differentiation. We constructed a structure-function feature matrix from time series Hi-C and RNA-seq data obtained from differentiating human stem cells. Example 3: Cellular Reprogramming. Time series Hi-C and RNA-seq data were obtained from an experiment that reprogrammed human dermal fibroblasts to the skeletal muscle lineage.
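As a minimal sketch of the structure-function feature matrix construction described above, one can pair a centrality vector computed from a Hi-C contact matrix with RNA-seq expression; this illustrative example uses eigenvector centrality only, whereas the 4DNvestigator also computes degree, betweenness and closeness centrality:

```python
import numpy as np

# Eigenvector centrality of a symmetric contact (adjacency) matrix: the
# leading eigenvector, normalized to sum to one.
def eigenvector_centrality(contacts):
    vals, vecs = np.linalg.eigh(contacts)      # ascending eigenvalues
    v = np.abs(vecs[:, np.argmax(vals)])       # leading eigenvector
    return v / v.sum()

# Structure-function feature matrix: one row per genomic locus, with a
# structural column (centrality) and a functional column (expression).
def feature_matrix(contacts, expression):
    return np.column_stack([eigenvector_centrality(contacts), expression])
```

Stacking such matrices across time points, then applying PCA or a nonlinear embedding, gives the low-dimensional trajectories that the 4DN feature analyzer visualizes.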
We analysed these data with the 4DN feature analyzer. Together, these three examples demonstrate how the 4DN feature analyzer can process time series structure and function data. The 4DNvestigator provides rigorous and automated analysis of Hi-C and RNA-seq time series data by drawing on network theory, information theory, and multivariate statistics. It also introduces a simple statistical method for comparing Hi-C matrices, the LP procedure. The LP procedure is distinct from established Hi-C matrix comparison methods, as it takes a statistical approach to test for matrix equality, and allows for the comparison of many matrices simultaneously. Thus, the 4DNvestigator provides a comprehensive toolbox that can be applied to time series Hi-C and RNA-seq data simultaneously or independently. These methods are important for producing rigorous quantitative results in 4DN research. Click here for additional data file."}
+{"text": "Time-series data generally exist in many application fields, and the classification of time-series data is one of the important research directions in time-series data mining. In this paper, univariate time-series data are taken as the research object, and deep learning and broad learning systems (BLSs) are the basic methods used to explore the classification of multi-modal time-series data features. Long short-term memory (LSTM), gated recurrent unit, and bidirectional LSTM networks are used to learn and test the original time-series data; a Gramian angular field and recurrence plot are used to encode the time-series data as images, and a BLS is employed for image learning and testing. Finally, to obtain the final classification results, Dempster\u2013Shafer evidence theory (D\u2013S evidence theory) is used to fuse the probability outputs of the two channels. Through testing on public datasets, the method proposed in this paper obtains competitive results, compensating for the deficiencies of using only time-series data or only images for different types of datasets. The development of sensor technology has increased storage capacity and the variety of recording equipment, producing a significant amount of time-series data. It is very important to perform time-series data analysis, for instance accurate classification, which is widely used to solve different practical problems such as mobile object tracking. Based on the investigation reported herein, it is found that there are two main time-series classification methods. The first mainly relies on the time series itself, using traditional machine learning or deep learning (DL) for classification. The second kind benefits from the development of image classification networks and encodes time series into images before classification. In this paper, both methods are considered to achieve the use of two modal features.
Specifically, long short-term memory (LSTM), the gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM) are selected as feature extraction methods for the original time series due to their ability to extract features automatically. A broad learning system (BLS) is selected for the time-series images because of its simple structure and fast training. In brief, in this paper, a multi-channel fusion classification model is presented to improve the classification effect for different types of series data.The rest of this article is organized as follows.The processing of classification problems mainly depends on whether the data are similar or not. Time-series classification problems are also analyzed based on this concept. The methods of extracting features can be divided into manual and automatic feature extraction for classification.Manual feature extraction is usually used in conjunction with traditional machine learning methods. Measures based on distance are generally adopted, such as Euclidean distance (ED) and dynamic time warping (DTW), which work with k-nearest-neighbor (KNN) classifiers.Although machine learning methods show superior performance in time-series data classification, many studies have shown that manual feature extraction becomes difficult as the types and number of time series grow, and traditional machine learning is more suitable for learning from samples with lower dimensions. As the superior performance of DL has emerged, its application to time-series analysis is gradually being explored for its ability to extract features automatically.Recurrent neural networks (RNNs) are typical of such methods. In addition to RNN-based methods, convolutional neural networks (CNNs) are also used for time-series classification. For example, Kong et al. 
proposed a CNN-based classification method.As far as the network architecture is concerned, DL networks are characterized by the vertical expansion of network layers, which imposes greater demands on computing resources and, in turn, on hardware. Therefore, in recent years, networks aimed at improving training speed have gradually attracted researchers\u2019 attention. Among them, BLSs provide an alternative to DL networks that can also extract features automatically. Based on the random vector functional link neural network (RVFLNN) and incremental learning, Chen proposed the BLS.The aforementioned methods all work from the perspective of the data series, relying on the memory capacity of the network, or on finding the similarity between data through other measures, to achieve time-series classification. With the development of DL in image classification, several researchers have discovered ways to encode data from the perspective of images and implement classification. The Gramian angular field (GAF) and Markov transition field (MTF) methods proposed by Wang et al. are representative examples.The model framework of time-series data combined with multi-modal features presented in this paper mainly includes three parts: time-series data encoding and its feature extraction, original time-series data feature extraction, and decision-level fusion. The specific structural diagram is shown in the corresponding figure.In this subsection, the time series is first encoded to images by using RP and GAF, and then the BLS is used to extract image features. 
A SoftMax layer is added to obtain the probability result for decision-level fusion.Inspired by the RP, Hatami et al. encoded time series as recurrence images for classification.First, given a time series X = {x_1, x_2, ..., x_n}, the RP can be expressed as R(i, j) = 1 if the distance between x_i and x_j is below a threshold \u03b5, and 0 otherwise.The GAF method transfers the normalized series data to a polar coordinate system and then generates the Gramian angular summation field (GASF) or Gramian angular difference field (GADF) matrix by calculating the cosine and sine of the corresponding angle of each pair of elements, and then displays the series data in the form of images. The specific conversion process is the following.Given a time series X, the data are first rescaled to [\u22121, 1]. For each piece of normalized data, the inverse cosine function is used to map it to the polar coordinate system, treating the time stamp as a radius: \u03c6_i = arccos(x_i), r_i = t_i/N.Then, GASF can be defined as GASF(i, j) = cos(\u03c6_i + \u03c6_j), and GADF as GADF(i, j) = sin(\u03c6_i \u2212 \u03c6_j).The above two matrices are used to obtain the images of sequence X.BLS has a variety of structural forms; in the classical structure, the input data are first subjected to feature mapping to form feature nodes. Second, the feature nodes are enhanced to enhancement nodes by randomly generated weights. The optimal weights between the output layer and the feature and enhancement nodes can be obtained by ridge regression and pseudo-inverse algorithms. The specific process is the following.Assuming that the input data is X, all generated feature nodes are represented by Z^n = [Z_1, ..., Z_n] and the enhancement nodes by H^m = [H_1, ..., H_m]. Therefore, the BLS model can be expressed as Y = [Z^n | H^m] W, where W is the output weight matrix.In the preceding subsection, the images encoded from time-series data are used in classification, while the original time series is also considered in order to avoid losing information contained in the raw sequences.Time series are finite or infinite data streams whose data points depend on each other, and an RNN is usually used to process such data. 
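The two encodings can be sketched in a few lines of pure Python; this is an illustrative version (function names are ours, not the paper's code), assuming a non-constant series that is min-max rescaled to [-1, 1] before the GASF angle mapping:

```python
import math

def rescale(series):
    # Min-max rescale the series into [-1, 1], as required before GAF encoding.
    lo, hi = min(series), max(series)
    return [(2 * x - hi - lo) / (hi - lo) for x in series]

def gasf(series):
    # Gramian angular summation field: map each rescaled value to an angle
    # phi = arccos(x) and fill the matrix with cos(phi_i + phi_j).
    phi = [math.acos(x) for x in rescale(series)]
    return [[math.cos(a + b) for b in phi] for a in phi]

def recurrence_plot(series, eps):
    # Binary recurrence matrix: R[i][j] = 1 when |x_i - x_j| <= eps.
    return [[1 if abs(a - b) <= eps else 0 for b in series] for a in series]

ts = [0.0, 1.0, 2.0, 1.0]
G = gasf(ts)
R = recurrence_plot(ts, eps=0.5)
```

The GADF would replace the cosine of the angle sum with the sine of the angle difference; in practice these matrices are then rendered as (grayscale) images for the BLS.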
In this paper, LSTM, GRU, and BiLSTM are selected as the feature extraction methods for the original series data, applied in parallel, and a SoftMax layer is also added for the later operation of decision-level fusion. The structures of these three methods are introduced in the following subsections.As a special RNN, LSTM is mainly used to solve the problems of gradient disappearance and gradient explosion during long-sequence training. In other words, LSTM can perform better on longer sequences than an ordinary RNN. The main reason is that LSTM adds a structure called a gate for selective control of the passage of information. Specifically, it includes three gates, called the forget, input, and output gates. The internal structure of LSTM is shown in the corresponding figure, with x_t the input at the current moment and h_t the output at the current moment.The forget gate determines how much of the information contained in the previous moment\u2019s state is retained. The input gate selects the new state information that must be added so as to obtain the state of the current moment. The output gate decides the final unit output at the current time. The equations of the entire procedure are f_t = sigmoid(W_f [h_(t\u22121), x_t] + b_f), i_t = sigmoid(W_i [h_(t\u22121), x_t] + b_i), g_t = tanh(W_g [h_(t\u22121), x_t] + b_g), C_t = f_t * C_(t\u22121) + i_t * g_t, o_t = sigmoid(W_o [h_(t\u22121), x_t] + b_o), and h_t = o_t * tanh(C_t).Similar to LSTM, GRU was proposed to solve the problems of long-term memory and gradients in back-propagation, but GRU has a simpler structure. It only contains two gates, a reset gate and an update gate, which reduces the amount of calculation it must do. Its internal structure is shown in the corresponding figure.The reset gate controls the degree to which the state information of the previous moment is ignored. The smaller the value of the reset gate, the more the previous state is ignored and the less state information is retained. The update gate controls the degree to which the previous moment\u2019s state is brought into the current state. Different from LSTM, the output of the GRU unit contains only the hidden state h_t.Although the structure of GRU is simpler than that of LSTM, the performance of the two is comparable on many tasks. 
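A one-unit (scalar) LSTM step makes the gate equations concrete; this is an illustrative sketch with hand-picked weights, not the networks trained in the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # One scalar LSTM step; w holds (input weight, recurrent weight, bias)
    # for the forget (f), input (i), candidate (g), and output (o) parts.
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])   # forget gate
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])   # input gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2]) # candidate state
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])   # output gate
    c = f * c_prev + i * g        # cell state: keep part of the old, add the new
    h = o * math.tanh(c)          # hidden output at the current moment
    return h, c

# Toy weights (illustrative only) and a short input sequence.
w = {'f': (0.5, 0.1, 0.0), 'i': (0.5, 0.1, 0.0),
     'g': (1.0, 0.0, 0.0), 'o': (0.5, 0.1, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, w)
```

A GRU step would merge the forget/input pair into a single update gate; BiLSTM simply runs two such recurrences over the sequence in opposite directions and concatenates their outputs.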
GRU\u2019s fewer parameters make it easier to converge, but when the dataset is large, LSTM may perform better. Therefore, both are considered in this paper.In addition to the above two RNNs, BiLSTM is also selected as one of the methods. Its two-direction structure enables the network to obtain complete past and future context information for each point of the input sequence, and it can obtain better results on prediction problems that require context information. The internal structure of BiLSTM is shown in the corresponding figure.Considering that the abovementioned methods may have different effects on different datasets, to propose a more applicable model, a decision-level fusion strategy was adopted. Specifically, the method of D\u2013S evidence theory is used.D\u2013S evidence theory deals with uncertainty; it was first proposed by Dempster and further developed by G. Shafer. In D\u2013S evidence theory, the required prior data are more intuitive and easier to obtain than in probabilistic reasoning theory. In addition, D\u2013S evidence theory can synthesize knowledge or data from different experts or data sources. It can directly express \u201cuncertain\u201d and \u201cunknown\u201d states, and these pieces of information are represented in the mass function and retained during the evidence synthesis process. These advantages have made D\u2013S evidence theory widely used. 
Letting \u0398 denote the recognition framework, the key notions are the following. (1) Basic probability allocation (BPA): the BPA on the recognition framework \u0398 is a function m from the subsets of \u0398 to [0, 1] such that m assigns zero mass to the empty set and the masses of all subsets sum to 1. (2) Belief function: on the recognition framework \u0398, Bel(A) is the sum of the masses of all subsets of A. (3) Plausibility function: on the recognition framework \u0398, Pl(A) is the sum of the masses of all subsets that intersect A. (4) Belief interval: in evidence theory, for a certain hypothesis A, the interval [Bel(A), Pl(A)] describes the uncertainty about A. (5) Dempster\u2019s combinational rule: two mass functions m_1 and m_2 are combined as m(A) = (1/(1 \u2212 K)) \u03a3_(B\u2229C=A) m_1(B) m_2(C), where K = \u03a3_(B\u2229C=\u2205) m_1(B) m_2(C) measures the conflict between them.In actual fusion, since the predicted label has only one result and there is no overlap, the elements of the recognition framework are equal to the actual categories of the dataset in this paper, and the probability output of each network for each sample is the mass function of that network. The fusion structure is shown in the corresponding figure.The experiments in this paper include two parts. In the first part, the RNN variants are used to classify the original time-series data, the BLS is used to classify the images, and the accuracies are evaluated separately. In the second part, decision-level fusion is used to fuse the method results of the first part, and the accuracy is evaluated and compared.The data used in this article are from the public time-series dataset UCRArchive_2018.In this subsection, GADF, GASF, and RP images are all used. During image generation, the image sizes and pixels were fixed for the three image types. In practice, grayscale images were used for the experiments. The grayscale image reduces the dimension of the input data of the BLS network, as well as the amount of calculation relative to a three-channel color image, while maintaining recognition accuracy. The BLS network parameters use the same settings as Yang et al.A characteristic of the BLS network is that it only needs one epoch of calculation to obtain the result, and once the input and network structure are determined, the result is relatively stable, which is completely different from a DL network. The training result of the latter depends on the setting of network parameters and is prone to fluctuation. 
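Dempster's combinational rule, restricted to singleton hypotheses as used here (each network's per-class probability output serves as its mass function), can be sketched as:

```python
def dempster_combine(m1, m2):
    # Combine two mass functions defined on singleton classes only
    # (no compound hypotheses), as in the fusion scheme described above.
    classes = set(m1) | set(m2)
    # Conflict K: total mass the two sources assign to different classes.
    k = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
            for a in classes for b in classes if a != b)
    if k >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    # Dempster's rule: agreeing mass, renormalized by 1 - K.
    return {a: m1.get(a, 0.0) * m2.get(a, 0.0) / (1.0 - k) for a in classes}

# Hypothetical per-class probabilities from two classifiers for one sample.
m_rnn = {'A': 0.7, 'B': 0.3}
m_bls = {'A': 0.6, 'B': 0.4}
fused = dempster_combine(m_rnn, m_bls)
```

Fusing more than two sources is done by applying the rule iteratively, since Dempster's combination is associative and commutative.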
In addition, another advantage of BLS is that the training time of one epoch is very short. Even for the OSULeaf dataset, with many samples and long sequences, the training time is less than 10 s.In this experiment, because the datasets are not particularly large, to ensure accuracy with as little computation time as possible, all three RNN variant networks in this paper have two hidden layers, a fully connected layer, and a SoftMax layer for classification.The rule of early stopping has been adopted for the three RNN networks. When training DL networks, the best generalization performance is desired; that is, the data must be well fitted. However, because the hyperparameters, especially the number of training epochs, are not easy to set, the problem of overfitting may occur. Although performance on the training set keeps improving and the error rate keeps falling, at some point performance on the testing set begins to deteriorate. One widely used method for solving overfitting problems is to set early stopping rules. The performance of the model is calculated on a validation set during training, and when the performance begins to decline, training is stopped so that overfitting can be avoided. Since there is no additional validation set in the experiment described in this paper, each generation of the model is tested directly on the full testing set, and the test accuracy is selected as the indicator for early stopping. To prevent training from being stopped by unstable fluctuations at the beginning, training continues for another 75 generations after the indicator first satisfies the stopping condition, to obtain more stable results.The proposed network adopts Dropout, which can prevent overfitting and reduce training time. As is well known, as the number of parameters increases, the training speed of the model is obviously affected. 
With the Dropout strategy, the resulting training time is greatly reduced by selectively ignoring some hidden-layer neurons in each epoch. Therefore, Dropout is necessary in our framework for the sake of efficiency.In addition, the SoftMax activation function is used in the multi-classification problem, and the output is converted into probabilities. As a result, categorical cross-entropy is chosen as the loss indicator. For the network optimizer, the Adam optimizer is used in the proposed framework. Compared with the Stochastic Gradient Descent (SGD) optimizer, Adam does not need a manually selected initial learning rate, and the optimal value can be automatically adjusted during the training process. Moreover, Adam is easy to implement, computationally efficient, and suitable for scenarios with large-scale data and parameters.Different methods exhibit different performances on the same dataset; to ensure that the classification results of both images and time series can be reflected in the fusion, a multi-combination fusion method is adopted. At least one result from the time-series data and one from the image data is selected for fusion using D\u2013S evidence theory, so there are a total of 13 combinations. The best combination is selected as the final classification result.In this paper, BLS is used to classify the images of time-series data, and three recurrent neural networks, i.e., LSTM, GRU, and BiLSTM, are used to classify the time-series data. D\u2013S evidence theory is used to combine multiple decision fusion results and select the one with the highest accuracy rate. The results of the experiments prove the effectiveness of the proposed framework.In image classification, the BLS method can quickly and efficiently classify images with lower complexity. Compared with other deep networks, the BLS method can save a significant amount of training time. 
In terms of overall time usage, the time from encoding time-series data to images to using the BLS for learning is similar to, or even less than, that of using time-series data and RNN variant networks for classification. However, to improve the applicability of the model to the data, both kinds of features are indispensable. In the direct learning and classification of time series, the RNN family of models is a very good choice due to its memory of the temporal relationships in the sequence data. LSTM solves the long-term dependence problem of a traditional RNN through the control of information by the forget, input, and output gates, while GRU simplifies the three gates into a reset gate and an update gate. Their performance is similar in most situations. BiLSTM addresses problems that require contextual information. In the method of encoding a time series as an image, the GAF and RP methods can intuitively show the temporal relationships between the sequence data through the image.Finally, in decision-level fusion, D\u2013S evidence theory is adopted as a strategy that can synthesize the results of different decision-making methods; moreover, it does not need to meet probability additivity requirements. To further improve the classification accuracy, the use of at least one result from the original time-series data and one from the image data is guaranteed in this paper, and multi-combination decision-level fusion is carried out to achieve the purpose of fusing the best models.In future research, the framework proposed in this paper will continue to be improved to solve the problem of fast and efficient classification of multivariate time series."}
+{"text": "The DN method enabled us to identify the activation of biological pathways consistent with the mechanisms of action of the PPSV23 vaccine and target pathways of Rituximab. The community detection algorithm on the DN revealed clusters of genes characterized by collective temporal behavior. All saliva and some B cell DN communities showed characteristic time signatures, outlining a chronological order in pathway activation in response to the perturbation. Moreover, we identified early and delayed responses within network modules in the saliva dataset and three temporal patterns in the B cell data.Differential Network (DN) analysis is a method that has long been used to interpret changes in gene expression data and provide biological insights. The method identifies the rewiring of gene networks in response to external perturbations. Our study applies the DN method to the analysis of RNA-sequencing (RNA-seq) time series datasets. We focus on expression changes: (i) in saliva of a human subject after pneumococcal vaccination (PPSV23) and (ii) in primary B cells treated with Rituximab.Network-based analysis, in particular Differential Network (DN) analysis, has been very useful in analyzing the dynamics of gene expression under the effect of an external perturbation. When a DN is built from longitudinal gene expression data, there is also the opportunity to map the DN structure to key time-resolved features of the gene expression response to perturbations, for instance, by identifying clusters of genes with common activation times and their temporal ordering. 
In the present study, we applied such a DN approach to RNA-sequencing (RNA-seq) time series datasets retrieved from two longitudinal RNA-seq experiments: (i) the first dataset (GSE108664) was generated from saliva samples from a healthy individual before and after the administration of the Pneumococcal Polysaccharide Vaccine (PPSV23); (ii) the second dataset (GSE100441) was generated from primary B cells treated with Rituximab.For both datasets, we started with building gene networks, one for each of the control and the treatment sets. We used gene-gene correlations between time series signals, over 24\u00a0h in saliva and 15\u00a0h in B cells, to evaluate pairwise gene connections. Graphically, the time series correlation networks built from the treatment sets summarized a system-wide pathway activation due to the perturbation, whereas the networks from the control sets acted as the baseline. Within the DN analysis framework, we subtracted the baseline network from the one obtained using the treatment data, arriving at the final differential network.The presence of modules, also known as communities, is a topological property of networks.Our investigation extends applications of DN to gene expression time series that include perturbative activation. The two DN applications, and particularly the community-wide investigations, provide further biological insight into gene expression changes under both Rituximab treatment and pneumococcal vaccination. Specifically, each of the three investigations on DN communities provided unique perspectives on the biological response to perturbations: (i) Using our heatmap analyses, we found that each DN community can have its own temporal pattern and be used as a categorization of time-resolved gene activation. (ii) Using community enrichment, we determined the associations between the activated biological pathways and their gene clusters (communities). 
Combined with the community temporal patterns, our results provide a chronological order of pathway activations, and show how these may be obtained through a DN application. (iii) Lastly, our community hub analysis gave further insights into the biological functionality of individual genes in a community. These include, for example, the presence of the hub gene IL4R in one of the saliva DN communities, which suggests that the respective cluster of genes collectively activated T cells in response to the PPSV23 vaccine, and may explain a fever event in the experimental subject. Likewise, the presence of the hub gene PELI1, known to be an oncogene in lymphomagenesis, in one of the B cell DN communities suggests that the entire community participates in the B cell response to Rituximab. Additional findings are summarized in the results below, and illustrate the utility of extending DN analyses to investigate time-resolved gene expression changes induced by drug and vaccine treatments.Data for this investigation were obtained from Gene Expression Omnibus (GEO) for two time series studies using RNA-seq experiments, on saliva (accession GSE108664) and Rituximab (GSE100441). Both sets of data are further described below. The raw RNA-seq data were mapped using Kallisto.In the first 24\u00a0h period, saliva samples were collected hourly from a healthy individual and profiled by RNA-seq; we call these data the untreated data. In the second 24\u00a0h period, the same individual was monitored after receiving the PPSV23 vaccine. Saliva samples were again collected hourly over 24\u00a0h and profiled by RNA-seq. This second step yielded the RNA expression dataset after the PPSV23 vaccination. We call these data the treated dataset. Both treated and untreated datasets have 24 time points of 84,647 possible expression signals using GENCODE annotation. The saliva dataset was obtained from our previous study of immune responses to the PPSV23 vaccine (GSE108664).The perturbation in the primary B cell experiment was Rituximab, a monoclonal antibody drug used in the treatment of different types of lymphomas and leukemias. 
The experimental study (data from GSE100441) began by culturing in parallel primary B cells with and without Rituximab. During the 15\u00a0h of Rituximab treatment, the treated and untreated primary B cells were both sampled at the same six time points and profiled by RNA-seq. The untreated group provided a baseline, which we call the untreated data, whereas the treated experiment produced the treated dataset. Since this study included a replicated experiment, each of the first and second duplicates was processed to generate a separate network.For quality control, we pre-processed the experimental data and filtered signals with multiple missing points right after importing the published data files. We coded all the data analysis in Python. Using Python\u2019s pandas package, we replaced missing signals with zero and also set values less than 1 to 1. Genes with zero variance in their time series were excluded from our analysis. Moreover, we considered a gene signal as sparse and removed it if its time series had missing values for more than 1/8 of the time points. The same quality control procedure was used for both the saliva and primary B cell datasets.We quantified the change of each gene as \u0394TU = (1/N) \u03a3_i (T_i \u2212 U_i)/U_i, where T_i is the expression value at time i in the treated dataset, U_i is the expression value at time i in the untreated dataset, and N is the total number of time points. This calculation yielded a \u0394TU distribution curve, from which we computed the lower and upper quartiles. As our goal was to identify time-resolved changes, genes were selected if their \u0394TUs were within the bottom 25% or top 25% of the \u0394TU distribution. The Python pandas package was used for all the above computations.After quality control, we further processed the data to pre-screen and identify a pool of candidate genes that showed a response to the perturbations. We selected genes that are highly expressed in both untreated and treated cases. 
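Assuming the signed time-averaged relative difference described above, the pre-screening step can be sketched as follows (pure Python; the nearest-rank quartiles and function names are illustrative, not the authors' pandas code):

```python
def delta_tu(treated, untreated):
    # Time-averaged relative difference between treated and untreated
    # expression (signed, so up- and down-regulation fall in opposite tails).
    n = len(treated)
    return sum((t - u) / u for t, u in zip(treated, untreated)) / n

def quartile_screen(deltas):
    # Keep genes strictly below the lower quartile or above the upper
    # quartile of the delta distribution (simple rank-based quartiles).
    ordered = sorted(deltas.values())
    n = len(ordered)
    lo, hi = ordered[n // 4], ordered[(3 * n) // 4 - 1]
    return {g for g, d in deltas.items() if d < lo or d > hi}

# Toy deltas for four hypothetical genes.
deltas = {'g1': -0.8, 'g2': -0.1, 'g3': 0.05, 'g4': 0.9}
selected = quartile_screen(deltas)
avg = delta_tu([2.0, 4.0], [1.0, 2.0])
```

Selecting both tails keeps strongly down-regulated genes (large negative deltas) as well as strongly up-regulated ones.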
Our goal with the differential networks was to identify genes that displayed notable changes. Hence, the cutoffs were selected to exclude constant signals and signals with moderate changes when comparing corresponding paired timepoints. For each of these genes, we calculated the time-averaged relative difference between treated and untreated normalized intensities, \u0394TU.After gene selection, we calculated their pairwise Pearson correlation coefficients and built the co-expression networks. Genes were represented as nodes and were joined by edges if there was a non-zero correlation between them. We used the co-expression coefficient as a weight for each edge. In the layout representation of the networks, the node-node distance reflects the correlation coefficients. Two genes are nearby if they have a high positive correlation. They are far apart when they have a low positive correlation, or remote if negatively correlated. We used Python\u2019s open source NetworkX package.We constructed the network with edges in the 99.5% quantile of the correlation distribution, excluding singletons. The one-sided quantile cutoff essentially selects for positive correlations and is consistent with our modularity-based community analysis discussed further below. For the saliva data, we built one treated and one untreated network. Since we have data from two repeated experiments for B cells, we built two networks for the Rituximab treatment and two networks for the untreated control. Then, we took the intersections between the two networks corresponding to the repeats to obtain a single Rituximab-treated network and a single control network.We defined the DN as the control network subtracted from the treated network, both for the saliva and B cell cases. In the subtraction, we remove an edge if that edge appears in both the treated and untreated networks. Edges appearing only in the treated network and absent in the untreated one are kept in the differential network. 
Edges appearing only in the untreated network are not included. Isolated nodes left after this procedure are discarded. We analyzed the DN\u2019s structure using modularity-based community detection.In order to investigate the time-resolved response present within the communities, we applied clustering heatmaps to each of the DN communities. For genes in the same community, we first retrieved their treated and untreated expressions, then normalized each time series by subtracting the time 0 value from the individual time points, followed by normalization with the Euclidean norm, for both expressions. We then took the difference between the normalized treated and untreated time series. Finally, we dendrogram-clustered these series (rows) with the complete-linkage method (Farthest Point Algorithm).As the heatmaps rendered distinctive time-resolved responses in each community, we identified communities that responded quickly to perturbations and those that responded slowly. In particular, we characterized saliva communities by their peak times and arranged them in temporal order. We did not obtain an order for the B cell communities, as the B cell heatmaps did not show dominant peak times. However, we were still able to characterize B cell communities based on three distinguishable temporal pattern categories.We conducted Reactome enrichment analysis on the DN communities.Hubs are a typical feature of network topology. Visually, hubs represent highly connected nodes in a network. However, global connectivity differs from regional structure. We isolated each community and identified localized hubs, considering only the communities rather than the global DN in our calculations. We adopted the standard Degree Centrality (DC) algorithm (as integrated in the Python NetworkX package) and identified the genes with the top five DC values as the community hub genes. We examined these hubs using functional annotations for both the saliva and B cell data. 
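The construction steps above (correlation network at a cutoff, subtraction of the baseline network, and degree-based hub ranking) can be sketched end to end. This is an illustrative pure-Python version standing in for the authors' NetworkX code, with toy data and a toy cutoff in place of the 99.5% correlation quantile:

```python
import math
from collections import Counter
from itertools import combinations

def pearson(x, y):
    # Pearson correlation between two equal-length time series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_edges(series, cutoff):
    # Keep gene pairs whose correlation exceeds the cutoff; the one-sided
    # threshold selects for positive correlations, as in the text.
    return {(g1, g2) for g1, g2 in combinations(sorted(series), 2)
            if pearson(series[g1], series[g2]) >= cutoff}

def differential_network(treated_edges, untreated_edges):
    # DN: edges present in the treated network but absent from the baseline.
    return treated_edges - untreated_edges

def top_hubs(edges, k):
    # Rank nodes by degree within a (sub)network; the top-k are hub genes.
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return [g for g, _ in deg.most_common(k)]

# Hypothetical expression time series for four genes.
treated = {'g1': [1, 2, 3, 4], 'g2': [2, 4, 6, 8],
           'g3': [1, 2, 3, 5], 'g4': [4, 3, 2, 1]}
untreated = {'g1': [1, 1, 2, 1], 'g2': [1, 2, 3, 4],
             'g3': [2, 4, 6, 9], 'g4': [4, 3, 2, 1]}
dn = differential_network(correlation_edges(treated, 0.95),
                          correlation_edges(untreated, 0.95))
hubs = top_hubs(dn, k=1)
```

In the paper, hub ranking is applied per community after Louvain modularity detection; here the whole toy DN plays the role of a single community.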
Using Mathematica, we generated the network visualizations.Our RNA-seq time series raw data were retrieved from the Gene Expression Omnibus (GEO) database under accessions GSE108664 and GSE100441 for the saliva and B cell experiments, respectively. The study of the immune response to the PPSV23 vaccine in saliva probed the expression of 84,647 potential gene identifiers. The resulting saliva DN contained 13,775 edges. The Louvain algorithm identified 48 communities (modules) in total. 15 of the communities have a size of at least four nodes, while the remaining 33 are pairs or triplets. In the global saliva DN visualization, we excluded the communities with pairs or triplets, as none of them belonged to the three major connected components of the DN network. We also filtered the network to remove connected components with fewer than four genes. The global saliva DN is presented in the corresponding figure.We further visualized each community\u2019s change over time with heatmaps within the DN network. Here we only show heatmaps for C0 and C1 as representative communities; we provide the other communities\u2019 heatmaps with their corresponding Reactome pathway analyses in the ODFs (folder \u201cResults/SLV_results/network_plots\u201d). Our saliva DN has a clear pattern of mostly discrete, punctuated gene expression response times for each community. As these punctuated response times, save for one exception (both C0 and C11 show maximized response at t5), are specific to each community, they reflect the biological signatures of individual groups. Most of our saliva DN communities have only one punctuated activation time, although C5 in the saliva DN has three up-regulation events, at time points 15, 20, and 22, that do not overlap with those of other communities. Between the communities, we observed strong temporally-specific relationships. 
Our heatmaps are suggestive of the presence of directional signaling between early-activation communities and subsequent groups, with a potential sequential activation pattern as follows: C6, C9, C8, C2, C0, and C1, C3, C4, C10, C5, C1, and finally C5. At time points from t6 to t10, t14, and from t16 to t18, no communities activated.In our pathway analysis, we queried individual communities to investigate how their highly co-expressed genes are functionally related. Our analysis is based on the Reactome pathway database, and statistically significant enrichment of pathways was assessed for each community. In the C0 community, the Reactome enrichment analysis identified 15 statistically significant pathways.The B cell DN contained 10,421 edges that we classified into 145 communities using the Louvain algorithm. Similar to the saliva DN, most of these communities are small clusters on small components. Due to its larger size relative to the saliva DN and its larger number of communities, our cutoff for plotting was increased to 8 nodes both for community and component size. The global B cell DN is presented in the corresponding figure.Enriched pathways include the NF-\u03baB activation pathway, the B cell receptor (BCR) signaling pathway, and the Fc epsilon receptor (FCERI) signaling pathway. These pathways and others relevant to Rituximab\u2019s mechanism of action are listed in the corresponding table. NF-\u03baB pathway activation by FCERI leads to the production of cytokines during mast cell activation, making it important in allergic inflammatory diseases.As for the saliva DN, we conducted a community-wise Reactome enrichment analysis for communities with at least 8 genes. 14 communities in the B cell DN were analyzed. This analysis found 9 communities with statistically significant pathway enrichment. According to previous studies, Rituximab down-regulates NF-\u03baB signaling by up-regulating the Raf-1 kinase inhibitor protein (RKIP). 
RKIP has been found to antagonize signal transduction pathways that mediate NF-\u03baB activation. Regarding our primary B cell results, previous work supports these observations. Following NF-\u03baB\u2019s down-regulation due to RKIP\u2019s up-regulation, the Bcl-xl expression is also down-regulated. As a result, tumor cells become more chemosensitive. Rituximab also decreased the activity of NF-\u03baB-inducing kinase, IkB kinase, and IkB-a phosphorylation. Finally, the introduction of Rituximab also decreased the activity of the IKK kinase and NF-\u03baB binding to DNA from 3 to 6\u00a0h after treatment.Among the more general enriched pathways observed are signaling pathways that play a role in the molecular mechanisms of chemosensitization, which are also impacted by Rituximab. In line with those effects, we anticipate impacts on the MAPK signaling pathway, the interleukin cytokine regulatory loop, and the Bcl-2 expression. Concerning the expression of genes involved in the healing process, research has uncovered Rituximab\u2019s role in affecting pathways associated with immunoglobulin production, chemotaxis, immune response, cell development, and wound healing. Rituximab can also increase existing drug-induced apoptosis.In our C4 community, for example, our Reactome analysis found 5 NF-\u03baB related pathways with significant FDR values. Also among the enriched pathways in C4 is the BCR pathway. Our results suggest that the C4 community response is highly relevant because of the activation of both NF-\u03baB and BCR pathways.Our C2 community appears to be involved with the metabolism of proteins and cellular responses to external stimuli. Rituximab targets the CD20 B cell transmembrane protein, which is involved in B cell development, activation, and proliferation. The pathways that emerged in our results are thus consistent with, and highly overlap, established pathways from previous studies.We also observed relevant responses in other communities. 
For example, the C8 community showed activity in the RAF/MAP kinase cascade pathway. In a similar fashion, C10 demonstrated CD22-mediated BCR regulation, classical antibody-mediated complement activation, FCGR activation, antigen activation of the BCR, and initial complement triggering. Hub genes most pertinent to B cells/lymphocytes included PELI1 in community C5, and PRDM2, MALAT1, and SND1 in C2. Other high-centrality genes with similar relevance included MAPK8 in C6 and AFF3 in C1. Among these, PELI1 turned out to be closely associated with antitumor immunity in B cells, which is the therapeutic goal of the Rituximab treatment, in line with a previous study. In summary, our results from the saliva DN revealed pathway activation in immunological and inflammatory responses. In the B cell DN, statistically significant pathways were activated in the regulation of transcription, immune cell survival, activation and differentiation, and inflammatory response. Our goal was to use a DN approach to identify the activation of biological processes caused by a perturbation in saliva and primary B cells. This study applied DN analysis, community identification, and Reactome pathway analysis of the DN communities, and identified communities with highly statistically significant enrichment. In this study we implemented modularity-based community detection, which works with positive correlations. This is a limitation of the modularity approach to the DN that may be addressed using different community detection algorithms and merits follow-up investigation. We analyzed the DNs of two gene expression datasets where a perturbation was applied: (i) a saliva dataset (PPSV23 vaccination as perturbation) and (ii) a primary B-cell dataset (ex-vivo Rituximab drug treatment as perturbation; six time points). In the saliva DN, communities activated at individual timepoints, indicative of sequential immune system responses due to the PPSV23 vaccination. In the primary B cell data, trends were less clear, as fewer time points were monitored and the network was more densely connected. 
The B-cell heatmaps still indicate overall trends associated with Rituximab activation (both up- and down-regulation) within the first 7\u00a0h of the treatment. Our future work will focus on the possibility of establishing a causal chain of signaling responses, and associated pathways, across these communities. Our analysis showed the applicability of a DN approach in evaluating time-course RNA-seq data. Specifically, the DN results on the saliva experiment data were consistent with our previous work on profiling PPSV23 vaccination responses"}
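The community-detection step described in the record above (modularity-based Louvain partitioning of a differential network, keeping communities with at least 8 genes) can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' pipeline; the `networkx` Louvain call (available in networkx >= 2.8), the function name, and the toy edge list are assumptions.

```python
# Illustrative sketch (not the authors' code): Louvain partitioning of a
# differential network (DN) built from thresholded gene-gene correlations,
# keeping communities with at least 8 genes, per the cutoff in the text.
import networkx as nx
from networkx.algorithms.community import louvain_communities

def dn_communities(edges, min_size=8):
    """edges: iterable of (gene_a, gene_b) pairs from a thresholded DN.
    Returns communities with >= min_size genes, each as a sorted list."""
    g = nx.Graph()
    g.add_edges_from(edges)
    # Modularity-based detection handles positive correlations only,
    # matching the limitation the authors note for their DN approach.
    parts = louvain_communities(g, seed=0)
    return [sorted(c) for c in parts if len(c) >= min_size]
```

Negative-correlation edges would require a signed-network community method, which is the kind of follow-up the authors suggest.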
+{"text": "Existing computational methods usually decompose the inference of gene regulatory networks (GRNs) into multiple subproblems, rather than detecting potential causal relationships simultaneously, which limits their application to data with a small number of genes. Here, we propose BiRGRN, a novel computational algorithm for inferring GRNs from time-series single-cell RNA-seq (scRNA-seq) data. BiRGRN utilizes a bidirectional recurrent neural network to infer GRNs. The recurrent neural network is a complex deep neural network that can capture complex, non-linear, and dynamic relationships among variables. It maps neurons to genes, and maps the connections between neural network layers to the regulatory relationships between genes, providing an intuitive solution to model GRNs with biological closeness and mathematical flexibility. Based on the deep network, we transform the inference of GRNs into a regression problem, using the gene expression data at previous time points to predict the gene expression data at the later time point. Furthermore, we adopt two strategies to improve the accuracy and stability of the algorithm. Specifically, we utilize a bidirectional structure to integrate the forward and reverse inference results, and exploit an incomplete set of prior knowledge to filter out some candidate inferences of low confidence. BiRGRN is applied to four simulated datasets and three real scRNA-seq datasets to verify the proposed method. We perform comprehensive comparisons between our proposed method and other state-of-the-art techniques. These experimental results indicate that BiRGRN is capable of inferring GRNs simultaneously from time-series scRNA-seq data. Our method BiRGRN is implemented in Python using the TensorFlow machine-learning library, and it is freely available online. Gene regulatory mechanisms are crucial to understanding diverse dynamic processes such as development, stress response, and disease. 
A plethora of computational or statistical approaches have been developed for inferring networks from observational gene expression data. Although much progress has been made, inferring a network of regulatory interactions between genes is still challenging. On one hand, for time-series scRNA-seq data, methods for reconstructing GRNs on bulk data are not directly applicable. As the biological meaning of a sample changes from the average over several cells in bulk data to the value for a single cell, the form of the gene expression data is also changed. Meanwhile, as the approaches devised for single-cell transcriptomics typically require a large number of time points to infer GRNs, they are usually suitable only for a small number of genes. Adding a few genes to a network inference analysis may require the inference algorithm to consider many additional regulatory interactions between them. As the number of genes grows, the number of edges and the demand for input data might explode. Here, we present BiRGRN, a novel method for inferring GRNs from time-series scRNA-seq data. BiRGRN adopts a bidirectional recurrent neural network to infer GRNs. The recurrent neural network is a deep neural network that can capture complex, non-linear, and dynamic relationships among variables. It maps a neuron to a gene, and maps the connections between neural network layers to the regulatory relationships between genes, providing an intuitive solution to model GRNs with biological closeness and mathematical flexibility. Then we transform the reconstruction of GRNs into a regression problem, using the gene expression data of the previous time points to predict the gene expression data of the later time point. Meanwhile, we adopt a bidirectional structure and incorporate an incomplete set of prior knowledge to improve the accuracy and stability of the algorithm. 
To evaluate the performance of BiRGRN, we apply it to four simulated datasets and three real single-cell transcriptomic datasets. We performed a comparison of our results with those of other state-of-the-art techniques, which shows the better performance of our proposed model. In this work, we propose a new computational method, BiRGRN, to reconstruct gene regulatory networks based on a bidirectional recurrent neural network and multiple prior networks. An overview of BiRGRN is shown in the corresponding figure. Assuming the expression of gene i at time point p+1 is the total regulatory effect of the expression values of all genes at the previous p time points, the regulation process can be described as a regression function, and the prediction error serves as the regression loss function for deep neural network training. The RNN is a fully connected structure, whereas the regulatory network is usually sparsely connected. Thus, we add L1 regularization to the objective function, aiming to control the sparsity of the\u00a0resulting weight matrix. In the objective function, the predicted and the real expression values of all genes at time point t+p+1 are compared, and \u03b1\u2225\u03c9\u22251 is the regularization term. For the training process, when the objective function converges to the minimum, the algorithm extracts the multiple weight matrixes between the RNN layer and each fully connected layer. Then we normalize each basic weight matrix separately. According to the proposed network structure, the weight matrix corresponds to the regulatory relationships among genes, which can be used to reconstruct a candidate gene regulatory network. For each matrix, we take the top m connections as the candidate regulatory edges. As multiple weight matrixes are obtained after the training process, we can infer multiple candidate gene regulatory networks, which are used as the basic voters to determine the final regulatory edges in the following steps. 
During the above training process, the final loss function of the model usually cannot be completely reduced to zero due to the influence of external noise. Meanwhile, in convex optimization problems, there are a large number of approximate solutions near the global optimum. In order to improve the accuracy of the GRN inference, some prior knowledge can be utilized to filter the candidate regulatory edges. Previous methods, such as NetREX and MiPGRN, assume that the prior network and the target GRN have some similarity, and then bias the optimization procedure toward networks that overlap with the prior. Here, \u03c9_k represents the weight of the k-th initial GRN, computed from ContainPre_k, the number of candidate edges in the k-th inferred GRN overlapping with the prior network, and preNumber, the number of prior edges. As usable prior knowledge usually does not exist for given datasets, here we adopt a general strategy to obtain an incomplete prior edge set. We utilize different computational algorithms to predict putative GRNs, apply the method NetREX to optimize the predictions, and then integrate the top 10% of the resulting edges to obtain an incomplete prior edge set. Based on the deep neural network, we obtain K candidate GRNs, and each candidate GRN possesses an adjusted weight matrix. Next, we integrate these K different initial gene regulatory networks. The voting strategy is the addition of weights, and finally a global regulatory edge ranking is obtained according to the weights. For the regulatory edge of gene i to gene j, the weight e_ij is calculated as a weighted sum over the K candidate GRNs, where \u03c9_k represents the weight of the k-th candidate GRN and each summand is the weight of the edge from gene i to gene j in the k-th candidate GRN. For the reverse model, the input is the gene expression data of the p time points p+1, p, p-1, \u2026, 2, and the output is the gene expression data of the first time point. 
After getting the trained reverse model, the algorithm extracts the corresponding weight matrix \u03c9_r, and the subsequent operations are consistent with the forward model. The algorithm thus eventually obtains two regulatory networks, and uses the voting strategy to integrate the forward and reverse results into the final inferred regulatory network; this bidirectional design is inspired by the bidirectional model of the algorithm BiXGBoost. The new weight of each regulatory edge combines the weight e_ij obtained from forward inference and the weight e_ij obtained from reverse inference. Based on the calculated new weights of these edges, we rank the regulatory edges and select the top m regulatory edges to form the inferred GRN. The prior network used here (http://www.regulatorynetworks.org) was constructed from DNaseI footprints and TF-binding motifs. For evaluation across datasets, AUROC_i and AUPRC_i respectively denote the average AUROC and AUPRC of the algorithm on the i-th data set. To evaluate the effectiveness of BiRGRN, we apply the proposed GRN inference method to four simulated datasets, including datasets related to hematopoietic stem cell differentiation (HSC), gonadal sex determination (GSD), ventral spinal cord development (VSC), and mammalian cortical development (mCAD). In detail, each dataset is generated by the Boolean model in a previous study. We next measure the performance of BiRGRN for inferring GRNs on real datasets. Here, BiRGRN is applied to three real time-series scRNA-seq datasets, following the setup of previous studies. We also record the runtime of each method on the three real data sets. As BiRGRN is mainly composed of the bidirectional RNN integrating the forward and reverse training, and the voting model incorporating prior knowledge, we further investigate the impact of the different components on the overall performance. Accordingly, we obtain three variants of BiRGRN, including BiRGRN-Prior (the model removing incorporated prior knowledge), BiRGRN-Forward (the model removing forward training), and BiRGRN-Reverse (the model removing reverse training). 
We carry out the ablation study on each of the four simulated datasets. We first evaluate the contribution of prior information in guiding the voting process of the model. The results show that removing the prior information causes a slight drop in performance: without prior information, the network is still able to reconstruct a relatively coarse network, but without further guidance it might not be able to refine it properly. To further inspect the effectiveness of the bidirectional model, we compare the performance of BiRGRN without forward training and without reverse training, respectively. From the table, we observe that the performance of the two single-directional training models is similar, and both are slightly lower than that of the bidirectional training model. This ablation study indicates that forward training and reverse training might be complementary to each other, and thus the bidirectional RNN structure is capable of capturing more regulatory relationships among genes. On the whole, these results demonstrate that both components contribute to the performance of BiRGRN. Many cellular processes, whether in development or disease progression, are governed by complex gene regulatory mechanisms. GRN reverse-engineering methods attempt to infer GRNs from large-scale transcriptomic data using computational or statistical models. A plethora of GRN inference methods have been proposed. However, with the development of single-cell sequencing technology, traditional GRN inference methods designed for bulk transcriptomic data might be unsuitable for processing large quantities of scRNA-seq data. In this paper, we proposed a novel computational method, BiRGRN, to infer GRNs from time-series scRNA-seq data. BiRGRN utilizes a bidirectional recurrent neural network to infer GRNs. 
The recurrent neural network is a complex neural network that can capture complex, non-linear, and dynamic relationships among variables. It maps a neuron to a gene, and maps the connections between neural network layers to the regulatory relationships between genes, providing an intuitive solution to model GRNs with biological closeness and mathematical flexibility. Then we transform the reconstruction of GRNs into a regression problem that uses the gene expression data of the previous time points to predict the gene expression data of the later time point. In order to improve the accuracy of the algorithm, the method can use an incomplete set of prior knowledge. The developed model has been tested on four simulated datasets and three real datasets. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. The experiments conducted on simulated datasets and real scRNA-seq datasets demonstrate that BiRGRN can infer gene regulatory networks with high performance, which indicates that the\u00a0proposed bidirectional RNN structure is effective in GRN inference. Publicly available datasets were analyzed in this study. The real datasets can be found at https://github.com/hmatsu1226/SCODE; the simulated datasets are all from Beeline and can be found at https://github.com/Murali-group/Beeline. YG and XH are responsible for the main idea, as well as the completion of the manuscript. XH has developed the algorithm and performed data analysis. GZ, CY, and GX have coordinated data preprocessing and supervised the effort. 
All authors have read and approved the final manuscript. This work was sponsored in part by the National Natural Science Foundation of China (62172088), the National Key Research and Development Program of China (2016YFC0901704), and the Shanghai Natural Science Foundation. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
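As a toy illustration of the regression framing described in the record above (expression at the next time point modeled from the previous time point, then keeping the top-m weights as candidate regulatory edges), a purely linear stand-in for the paper's bidirectional RNN might look like the sketch below. The least-squares fit, the lag of 1, and the `m` value are assumptions for illustration; the actual method uses a multi-lag RNN with L1 regularization and bidirectional voting.

```python
# Linear stand-in (not BiRGRN itself): lag-1 regression from expression at
# time t to expression at t+1, followed by top-m selection of candidate
# edges, mirroring the "take the top m connections" step described above.
import numpy as np

def candidate_edges(expr, m=5):
    """expr: (T, n_genes) time-series matrix. Returns the m entries
    (i, j, w) with largest |w|, read as 'gene i regulates gene j'."""
    X, Y = expr[:-1], expr[1:]                 # predictors: t, targets: t+1
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # W[i, j]: effect of gene i on j
    flat = [(i, j, W[i, j])
            for i in range(W.shape[0]) for j in range(W.shape[1])]
    flat.sort(key=lambda e: abs(e[2]), reverse=True)
    return flat[:m]
```

In the paper's setting this weight matrix would come from the trained RNN rather than ordinary least squares, and multiple such matrices would vote on the final edge ranking.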
+{"text": "The heterotetrameric complex is composed of a catalytic FXIIIA2 subunit and a protective/regulatory FXIII-B2 subunit, coded by the F13A1 and F13B genes, respectively. The catalytic FXIIIA2 subunit is encoded by the F13A1 gene, expressed primarily in cells of mesenchymal origin, whereas the FXIIIB subunit encoded by the F13B gene is expressed and secreted from hepatocytes. The plasma FXIIIA2 subunit, which earlier was believed to be secreted from cells of megakaryocytic lineage, is now understood to originate primarily from resident macrophages. The regulation of the FXIII subunits at the genetic level is still poorly understood. The current study adopts a purely bioinformatic approach to analyze the temporal, time-specific expression array data corresponding to both subunits in specific cell lineages, with respect to the gene promoters. We analyze the differentially expressed genes correlated with F13A1 and F13B expression levels in an array of cell types, utilizing publicly available microarray data. We attempt to understand the regulatory mechanism underlying the variable expression of the FXIIIA2 subunit in macrophages. Similarly, the FXIIIB2 subunit expression data from adult and fetal hepatocytes and embryonic-stem-cell-derived hepatoblasts (hESC-hepatoblasts) were analyzed. The results suggest regulatory dependence between the two FXIII subunits at the transcript level. Our analysis also predicts the involvement of the FXIIIA2 subunit in macrophage polarization, plaque stability, and inflammation. Coagulation factor XIII (FXIII) circulates in plasma as a pro-transglutaminase heterotetrameric complex (FXIIIA2B2). While reduced plasma FXIII is readily explained for F13B mutations, because the FXIIIB subunit is the protective partner in the heterotetrameric complex, it is somewhat harder to explain the same observation for defects in the F13A1 gene. 
This compels one to investigate if there is any co-regulatory mechanism shared by the two subunits, which are expressed in different lineages yet define the final available dosage of potentially active FXIII transglutaminase in plasma. Previously, it has been reported that pro-inflammatory M1 macrophages show low to no expression of F13A1 compared to anti-inflammatory M2 macrophages. F13A1 has also been reported to be associated with carcinomas lately. Studies have reported F13A1 in diabetes-prone mice, osteoarthritis, and cancer as one of the significant hits affected by altered cytokine signaling in these pathological states, indicating its role in these inflammatory states. A study performed in 2005 reported F13A1 differential gene expression during macrophage polarization by RT-PCR and single-cell analyses, but whether F13A1 has any direct role in macrophage maturation and polarization remains to be determined. Apart from its role in coagulation, several other roles outside coagulation have been discovered for the catalytic FXIIIA subunit. These roles encompass wound healing, chronic inflammatory bowel diseases, atherosclerosis, rheumatoid arthritis, chronic inflammatory lung diseases, chronic rhinosinusitis, solid tumors, hematological malignancies, and obesity. With respect to F13A1 expression, we have investigated the microarray data derived from macrophages, which are also major players in immunity. Similarly, microarray data derived from hepatocytes are investigated for F13B expression behavior. Recent reports on the reduction in plasma levels of FXIIIA upon acquired deficiency of the FXIIIB subunit also motivate one to investigate the pathways responsible for the common regulation of the two subunits. Based on F13A1 and F13B expression levels in the respective datasets, a common regulatory pathway is traced by characterizing the promoters and transcriptional regulators at each time point and cell type. 
To address the mode of FXIII expression regulation for both of its subunits, here we attempted to adopt a purely bioinformatics approach to analyze the temporal, time-specific expression array data corresponding to both subunits in specific cell lineages with respect to the gene promoters. Differentially expressed genes with respect to the expression of the FXIII subunit genes in different cell lineages were predicted for their potential roles in cellular pathways. The current availability of abundant data submitted to expression databases such as the gene expression omnibus (GEO) provides bioinformaticians the opportunity to investigate trends and patterns within these datasets, which in turn provide valuable insights into several functional and physiological mechanisms. Advanced bioinformatics tools and increased computational power have made it possible to understand the dynamic behavior of these biological datasets. To understand the roles and involvement of the FXIII subunits in dynamic biological processes intra- and extracellularly, considering the overall gene expression as read-outs, such as hepatocyte maturation, the development of atherosclerotic plaque, and macrophage polarization, we have utilized publicly available RNA-microarray data sampled at different time points to capture expression switching with respect to time and the behavior of the transcriptome. The cells of bone-marrow and mesenchymal lineage have so far been reported to be responsible for the stable 
expression of the plasma FXIIIA2 subunit. Expression data were extracted from NCBI GEO for macrophages (time-series), hepatic progenitors, hepatocytes, and human-embryonic-stem-cell-derived hepatoblasts. GSE128303: In the original study corresponding to this dataset, primary human monocyte-derived macrophages isolated from four donors were matured in the presence of recombinant human IL-4, followed by LPS treatment for 90 min. Platform: Illumina HumanHT-12 V4.0 expression beadchip. GSE98324: In the original study corresponding to this dataset, research groups at the National University of Singapore collected array data from 32 distinct human-derived samples. Platform: Illumina HumanHT-12 V4.0 expression beadchip. GSE39157: In the original study corresponding to this dataset, array data derived from total RNA were obtained from cultured primary hepatocytes and hESC-derived hepatic populations. Platform: (HuGene-1_0-st) Affymetrix Human Gene 1.0 ST Array (transcript (gene) version). GSE41571: In the original study corresponding to this dataset, expression data derived from genome-wide expression analyses of isolated macrophage-rich regions of stable and ruptured human atherosclerotic plaques were reported. Platform: (HG-U133_Plus_2) Affymetrix Human Genome U133 Plus 2.0 Array. Illumina HumanHT-12 v4 Expression BeadChip: Raw data from GEO were downloaded as \u201cidat\u201d files for the Illumina HumanHT-12 v4 platform. IDAT files were imported into R using the \u201cread.idat\u201d function in the \u201climma\u201d package, yielding detection p-values for probe expression. Data were also tested for batch effects using the \u201cComBat\u201d function in the \u201csva\u201d package, but no batch correction was applied as the data showed a non-significant batch effect. 
Affymetrix Arrays: The normalized series matrix file was downloaded from the NCBI GEO database for both Affymetrix array datasets, and this file was filtered for the detection p-value, where the probe signal was found to be statistically significant against the background signal. Probes with non-significant signals were removed, and significant probes were further analyzed using Qlucore Omics Explorer 3.5 (www.qlucore.com (accessed on 6 Jun 2020)). Data Import: All expression array data were imported separately into Qlucore Omics Explorer 3.5 using the import wizard, with sample annotation as well as probe annotation for the individual array platforms. Differentially expressed genes (DEG) were identified using Qlucore Omics Explorer 3.5. A Student t-test was used to compare two samples, whereas an ANOVA was used for multiple-sample comparisons. Significance values of p < 0.05 or a 5% false discovery rate were considered statistically significant. Fold changes were also calculated for volcano plots. Promoter Sequence: The F13A1 and F13B promoter sequences were downloaded from the eukaryotic promoter database (EPD). Transcription factor analysis: The transcription factors binding to the promoters of F13A1 and F13B downloaded from EPD were identified using the TRANSFAC professional database from geneXplain. Volcano plots and correlation plots were generated using R programming. Box plots were generated using GraphPad Prism 8. Pathways, gene networks, and tox function plots were created using Ingenuity Pathway Analysis. Gene ontology plots were generated using Cytoscape 3.6. 
Transcription factor binding consensus sequences were downloaded from TRANSFAC. Principal component analysis and heatmaps were generated using Qlucore Omics Explorer 3.5. The macrophage data showed high levels of the F13A1 transcript, which after 24 h go down to \u224850% of the original expression. A set of genes was found to be significantly correlated with F13A1 expression in macrophages at correlation > 0.5 (p < 0.05 and mean difference greater than 1). The expression of F13B (in hepatocytes) as compared to F13A1 (in macrophages) was \u2248four times higher. This might explain the higher levels of the FXIIIB2 subunit in plasma compared to the FXIIIA2 subunit (\u22482 times), owing to which there is a significant amount of free unbound FXIIIB2 subunit in the plasma. For the F13A1 and F13B genes, the transcription factors (TFs) binding to their extended promoters were identified using TRANSFAC Professional. TRANSFAC predicted 100 and 96 TF matrices binding to 636 and 542 binding sites on the F13A1 and F13B promoters, respectively. At a Pearson\u2019s correlation below 0.75 and p-value < 0.01, these 375 transcription factors showed enrichment in congenital heart anomaly, cardiac proliferation, cardiac enlargement, liver necrosis, and liver proliferation. This tox function analysis shows that transcription factors binding to both F13A1 and F13B play a significant role in pathological conditions such as cardiac anomaly. 
In the case of the M0-M1 switch, MAFB (z-score 3.945) and IL10RA (z-score 4.666) are the regulators predicted to be inhibited for the genes correlated to F13A1 expression, whereas in the case of the M1-M2 switch, IL1B (z-score 4.249) and TFRG (z-score 3.097) are the regulators predicted to be activated, and TGFB1 (z-score 3.902), IL10RA (z-score 4.972), IL-13 (z-score 4.446), and IL-4 (z-score 4.078) are the regulators predicted to be inactivated for the genes correlated to F13A1 expression. The Gene Expression Omnibus (GEO) serves as one of the largest publicly available repositories for raw as well as curated array-based expression data, deposited by the scientific fraternity for further analyses, interpretation, and validation. Such large datasets enable and promote meta-analyses, where raw data generated and deposited by one working group can be accessed, analyzed, and validated using several other bioinformatics tools to derive a better understanding of biological data. In the present study, we have utilized the GSE data records for the different array data analyzed here, which define sets of samples and how they are related (macrophages, liver cells (hepatocytes and liver progenitor cells), and plaque-derived resident macrophages (from stable and ruptured plaques)). This work is an attempt to translate bioinformatics data into meaningful interpretive FXIII research that could direct future investigations into the cellular expression profile of the FXIII subunits (coded by F13A1 and F13B, respectively). FXIIIA has long been understood to be a key molecule in the inflammation-coagulation-complement axis. The bench data from several meta-analyses suggest that F13A1 expression is strongly correlated to inflammation-like cellular responses; the temporal array data analyzed for cell-specific expression also reveal that the genes correlated with the F13A1 and F13B differential expression patterns are largely involved in a common function of the immune response. 
The data presented here reveal that F13A1 expression levels in macrophages are strongly positively correlated to genes responsible for cell differentiation and migration, largely in an immune setup; genes responsible for indirect clot stabilization (SERPINB2); and intracellular transporters such as SLC39A8. The association of FXIII with macrophage polarization towards the M2 switch supports this notion (see F13A1 co-expressed genes). Upstream regulators such as growth factors, transcription factors, and interleukins target both F13A1 and F13A1-correlated genes, which may suggest an association between F13A1 and macrophage polarization, i.e., macrophage polarization towards an M2 phenotype and increased expression of F13A1 occur in a temporal manner and are not mutually independent events. There is a common regulation of F13A1 and its correlated genes during the M0/M1 and M1/M2 switches that conjoins these two events as a cause-and-effect. High levels of intracellular FXIIIA (in monocyte-derived macrophages) are reported to perform functions related to cellular remodeling, crosslinking of the cytoskeleton, microtubule assembly, etc. The F13A1 gene acts as a coadjutant in plaque fate, rather than being detrimental towards its progression. The bioinformatics data reveal and support the existence of a strong role of FXIIIA in inflammation, cellular remodeling, and remodeling of atheroma as well. The lack of any significant differential expression in the three stages analyzed here strongly indicates that FXIIIA is directly involved in plaque stability; it plays no role in plaque formation, progression, rupture, and/or healing, but in plaque maintenance. The top upstream regulator of F13A1 and its correlated genes is predicted to be \u201cMafB\u201d (z-score 3.945). 
As a summary and conclusion, analyses and profiling of time-series-based microarray expression data derived from different cell types expressing (and secreting) the FXIIIA and FXIIIB subunits predict the inter-regulation of expression for both genes. FXIIIA has roles beyond coagulation and is very likely to be involved in the anti-inflammatory M2 phenotype switching of monocyte-derived macrophages. Owing to its role as a stabilizer, FXIIIA is likely to be indispensable for plaque maintenance at advanced stages of atherosclerotic plaque; however, it is not needed for plaque initiation. Future studies are warranted for establishing effective, direct, and exclusive roles of the FXIIIA molecule towards (a) macrophage polarization, (b) regulation of F13B gene expression, and (c) progression of early-stage plaque to advanced stages, in thrombus models."}
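The correlation screen used throughout the study above (genes retained when their Pearson correlation with F13A1 or F13B across samples exceeds a cutoff) reduces to a few lines of NumPy. This is a hedged sketch of the idea, not the Qlucore workflow the authors actually used; the function name, the toy gene labels, and the r > 0.5 cutoff follow the thresholds quoted in the text.

```python
# Sketch of the correlation screen described above: find genes whose
# expression correlates with a target gene (e.g., F13A1) across samples.
# Illustrative only; the authors performed this step in Qlucore Omics Explorer.
import numpy as np

def correlated_genes(expr, genes, target="F13A1", r_cutoff=0.5):
    """expr: (n_samples, n_genes) matrix; genes: list of column names.
    Returns genes with Pearson r > r_cutoff against the target gene."""
    t = expr[:, genes.index(target)]
    hits = []
    for j, gene in enumerate(genes):
        if gene == target:
            continue
        r = np.corrcoef(t, expr[:, j])[0, 1]  # Pearson correlation
        if r > r_cutoff:
            hits.append(gene)
    return hits
```

A mean-difference or p-value filter, as in the paper, would be applied on top of this correlation cutoff.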
+{"text": "Medicago truncatula, in response to rhizobial signals that trigger the formation of root nodules.Symbiotic associations between bacteria and leguminous plants lead to the formation of root nodules that fix nitrogen needed for sustainable agricultural systems. Symbiosis triggers extensive genome and transcriptome remodeling in the plant, yet an integrated understanding of the extent of chromatin changes and transcriptional networks that functionally regulate gene expression associated with symbiosis remains poorly understood. In particular, analyses of early temporal events driving this symbiosis have only captured correlative relationships between regulators and targets at mRNA level. Here, we characterize changes in transcriptome and chromatin accessibility in the model legume M. truncatula roots treated with bacterial small molecules called lipo-chitooligosaccharides that trigger host symbiotic pathways of nodule development. Using a novel approach, dynamic regulatory module networks, we integrated ATAC-seq and RNA-seq time courses to predict cis-regulatory elements and transcription factors that most significantly contribute to transcriptomic changes associated with symbiosis. Regulators involved in auxin , ethylene , and abscisic acid (ABI5) hormone response, as well as histone and DNA methylation (IBM1), emerged among those most predictive of transcriptome dynamics. RNAi-based knockdown of EIN3 and ERF1 reduced nodule number in M. truncatula validating the role of these predicted regulators in symbiosis between legumes and rhizobia.We profiled the temporal chromatin accessibility (ATAC-seq) and transcriptome (RNA-seq) dynamics of Our transcriptomic and chromatin accessibility datasets provide a valuable resource to understand the gene regulatory programs controlling the early stages of the dynamic process of symbiosis. 
The regulators identified provide potential targets for future experimental validation and for the engineering of nodulation in species unable to establish that symbiosis naturally. The online version contains supplementary material available at 10.1186/s12915-022-01450-9. Legumes such as Medicago truncatula can establish a well-characterized mutualism with nitrogen-fixing rhizobia. Signal exchanges between the host plant and bacteria initiate intracellular infection of host cells, followed by the development and colonization of root nodules. Symbiosis begins with compatible rhizobia detecting flavonoids and isoflavonoids produced by the legume host. LCO perception in M. truncatula involves components such as the M. truncatula calcium ATPase 8 (MtMCA8) and components of the nuclear pore complex, with CCaMK acting downstream. The coordinated activity of symbiosis-induced transcription factors (TFs) triggers the transcriptional changes essential for nodule organogenesis and infection of the nodule cortex, including the acquisition of organ identity. Chromatin state also matters: in M. truncatula, the expression of nodule-specific cysteine-rich (NCR) genes across root nodule zones is correlated with chromatin accessibility. To characterize the response to Sinorhizobium meliloti LCOs, we profiled genome-wide gene expression and chromatin accessibility (ATAC-seq\u2014assay for transposase-accessible chromatin using sequencing) in M. truncatula roots of the Jemalong A17 genotype treated with LCOs purified from S. meliloti. An LCO concentration of 10\u22128 M was used, as in previous studies, and samples were collected before treatment (t = 0 h) and at seven time-point conditions after treatment. 
Principal component analysis (PCA) showed clustering of biological replicates and time-dependent ordering, with the first component explaining ~36% of the variation. Differential expression analysis revealed 12,839 differentially expressed (DE) genes with a significant change in expression (adjusted-P < 0.05), including 7540 upregulated and 7051 downregulated at one or more time points relative to control. Marker genes for rhizobium-induced nodulation were upregulated (compared to t = 0 h), including NIN, CRE1, ENOD11, RPG, and ERN1. To corroborate these results with previous work on transcriptome dynamics of symbiosis, the identified DE genes were compared to DE genes identified from a published time-course dataset; correlations were higher for the wild-type (WT) strain (0.40) than for the nfp mutant (0.17), with both treatments (purified LCOs versus Sinorhizobium medicae) inducing a strong LCO response. Significant enrichment (q\u00a0<\u2009=\u20090.05) was found for jasmonic acid (JA) biosynthesis and JA response genes, including Coronatine insensitive 1 (COI1), which forms part of the JA co-receptor complex for the perception of the JA signal. To examine more complex transcriptome dynamics beyond pairwise DE analysis, we applied ESCAROLE, a probabilistic clustering algorithm designed for non-stationary time series. Dynamic genes included an Arabidopsis trithorax 3 (ATX3) homolog (MtrunA17Chr4g0005621), which encodes an H3K4 methyltransferase, a lateral organ boundaries domain (LBD) transcription factor (MtrunA17Chr4g0043421), EPP1 and the cytokinin receptor CRE1 (both positive regulators of early nodule symbiosis and development), DMI1, NF-YA1 (cluster 177), and the marker of LCO perception ENOD11 (cluster 296). Together, the DE and ESCAROLE analyses showed that the M. truncatula response to LCOs is characterized by complex expression dynamics recapitulating several known molecular features of this process. To study chromatin accessibility changes in a genome-wide manner in response to LCOs, we performed ATAC-seq on samples at all time points matching our RNA-seq time course. Overall, 54\u2013235 million paired-end reads were obtained for each sample, with 46\u201375% mappable to the M. truncatula (v5) reference genome. We quantified read coverage within each region for each time point: the log-ratio of per-bp read coverage in each promoter was taken relative to the global mean of per-bp coverage and quantile normalized across time points. High consistency was found between promoter signals of technical replicates from each time point based on Pearson\u2019s correlation. We called peaks for each time point using the Model-based Analysis of ChIP-Seq version 2 (MACS2) algorithm. As with promoter accessibility, clustering the accessibility profiles of universal peaks identified distinct patterns of temporal change. Collectively, these results suggest that LCO treatment had a genome-wide impact on chromatin accessibility, prospectively associated with simultaneous changes in gene expression. Correlating the accessibility of universal peaks centered within 10 kbp upstream to 1 kbp downstream of gene TSSs with expression identified 100,722 peak-gene mappings associated with 28,803 expressed genes, and significantly correlated peak-gene pairs (P\u00a0<\u20090.05 relative to random permutation): 4777 with positive correlation and 1652 with negative correlation. To integrate the expression and accessibility data, we applied DRMN; hyper-parameters for DRMN were selected using a grid search and the quality of inferred modules. To assess the extent to which DRMN captures variation in expression, we correlated predicted and measured expression levels, suggesting a significant module reorganization at ~2 h. 
This is consistent with the general reorganization of promoter accessibility ~1\u20132 h after the treatment and the global expression correlation around 2 h; 8398 (77%) of these genes were also identified in ESCAROLE, indicating consistency between the analyses. We applied DRMN with seven expression modules using two types of features and observed a reorganization of expression modules between 0\u20132 and 4\u201324 h. To experimentally test the involvement of DRMN-prioritized transcription factors in root nodule symbiosis, we selected three TFs and knocked them down in transgenic roots. Knockdown of MtrunA17Chr5g0440591 (EIN3) and MtrunA17Chr1g0186741 (ERF1) significantly lowered the number of nodules produced on the RNAi roots, whereas knockdown of MtrunA17Chr1g0166011 (IAA4-5) did not alter nodulation relative to the empty vector (EV) control, supporting roles for EIN3 and ERF1 in rhizobium-legume symbiosis, as predicted by DRMN. In summary, we characterized the response of M. truncatula roots to S. meliloti LCOs by jointly profiling the temporal changes in the transcriptome and chromatin accessibility and integrating these data computationally. Extensive changes in the transcriptome are known to occur in Medicago roots in response to rhizobia signals, and we show these changes are accompanied and facilitated by extensive chromatin remodeling. While the overall percentage of accessible chromatin regions remained similar across our time-course experiment, regions of accessibility underwent a dramatic shift 1\u20132 h after treatment. This remodeling appears to anticipate the development of root nodules, which requires stringent temporal and spatial control of gene expression. Chromatin accessibility of gene promoters notably also emerged as a significant predictor of gene expression. Predictive regulators included those related to ethylene (EIN3, ERF1) and ABA (ABI4-5). EIN3 is a transcription factor mediating ethylene-regulated gene expression and morphological responses in Arabidopsis. 
The role of EIN3 in rhizobium-legume symbiosis or LCO signaling remains uncharacterized, but sickle (skl) mutants for an EIN2 ortholog develop more infection threads and nodules and respond more strongly to LCOs than wild-type plants, and ethylene treatment inhibits LCO signaling and nodule formation. DRMN also highlighted regulators related to auxin (SHY2) and cytokinin (MtRRB15). SHY2, a member of the Aux/IAA family, plays a critical role in cell differentiation at the root apical meristem and is activated by cytokinin. SHY2 was proposed as a candidate for nodule meristem regulation and differentiation after showing a very localized expression pattern in the nodule meristematic region. Also, MtPLT genes (MtPLT1-5) are part of the root developmental program recruited from root formation and control meristem formation and maintenance for root and nodule organogenesis. We knocked down EIN3 and ERF1 using RNAi in M. truncatula and showed a significant effect on nodule formation. Prior work of Asamizu et al. independently identified an ERF1 ortholog as an effector of nodule development in L. japonicus, where the number of nodules was likewise reduced in a similar RNAi experiment. Their findings suggest ERF1 is induced by rhizobium on a 3 to 24 h time scale, echoing the observed time scale of chromatin reorganization in M. truncatula in our work. Recent work of Reid et al. in L. japonicus supports why we observe ethylene-related TFs having a positive impact on nodulation, unlike the ethylene-insensitive skl mutation. Our analysis predicted genome-wide targets for transcription factors, including novel regulators identified by DRMN and previously known regulators of root nodulation such as NIN, NF-YA1/NF-YB1, and CYCLOPS. For example, MTG-LASSO analysis predicted NIN as a direct target of SHY2 and MTF1, and FLOT4, required for infection thread formation, as a target of IBM1. Among known regulators, ARF16a and SPK1 were predicted as targets of NF-Y TFs; ARF16a and SPK1 control infection initiation and nodule formation. In conclusion, we present a novel dataset profiling the concurrent changes in transcriptome and chromatin accessibility in the model legume Medicago truncatula in response to rhizobial signals that trigger nodule formation. We jointly modeled the chromatin and transcriptome time-series data to predict the most critical regulators of the response to these signals that underlie the molecular pathways driving nodule formation. Our transcriptomic and accessibility datasets, and the computational framework to integrate them, provide a valuable resource for identifying key regulators of the establishment of root nodulation symbiosis in M. truncatula and could inform the engineering of nodulation in species unable to establish that symbiosis naturally. The regulatory mechanisms underlying plant-microbe symbiotic relationships otherwise remain poorly characterized. Seeds of the wild-type Medicago truncatula Jemalong A17 strain (available through the USDA Germplasm Resources Information Network (GRIN)) were sterilized and germinated on 1% agar plates including 1 \u03bcM GA3. Plates were stored at 4\u00a0\u00b0C for 3\u00a0days in the dark and placed at room temperature overnight for germination. Seedlings were grown vertically for 5\u00a0days on a modified Fahraeus medium with no nitrogen in a growth chamber. LCOs were purified from S. meliloti strain 2011 as described previously. Roots were treated with purified LCOs (10\u22128 M) or 0.005% ethanol solution (control). Roots were cut and immediately used for nuclei extraction and generation of ATAC-seq libraries (see below) or snap-frozen in liquid nitrogen for later RNA isolation and sequencing. Roots were collected at 0 h (control) and at 15 min, 30 min, 1, 2, 4, 8, and 24 h after LCO treatment. 
Roots from seven plants were pooled for each of three biological replicates used in RNA sequencing, while roots from 15 plants were pooled for the single replicate used in ATAC-seq at each time point of the experiment. For ATAC-seq library preparation, we followed a previously described protocol with modifications. Roots were ground, resuspended in lysis buffer (MgCl2, 5 mM 2-ME, 1 mM EDTA, 0.15% Triton X-100), and centrifuged. The supernatant was removed, and the nuclei were resuspended in 500 \u03bcl of lysis buffer and then filtered through 70 \u03bcm and 40 \u03bcm filters consecutively. The nuclei were then collected by centrifuging the solution at 1000g for 5 min at 4\u00a0\u00b0C. After washing with 950 \u03bcl 1\u00d7 TAPS buffer, the samples were centrifuged again at 1000g for 5 min at 4\u00a0\u00b0C. The supernatant was removed, leaving the nuclei suspended in approximately 10 \u03bcl of solution. Next, 1.5 \u03bcl of Tn5 transposase (Illumina FC-121-1030), 15 \u03bcl of tagmentation buffer, and 13.5 \u03bcl of ddH2O were added. The reaction was incubated at 37\u00a0\u00b0C for 30 min. The product was purified using a QIAGEN MinElute PCR Purification kit and then amplified using Phusion DNA polymerase. One microliter of the product was used in a 10 \u03bcl qPCR cocktail with SYBR Green. Cycle number X was determined as the cycle where \u00bc of the maximum signal was reached; the rest of the product was then amplified in a Phusion (NEB) PCR system with X\u22122 cycles. Amplified libraries were purified with AMPure beads (Beckman Coulter), and library concentrations were determined using a Qubit. Sequencing was carried out on an Illumina HiSeq X (2\u2009\u00d7\u2009150 cycles) at the HudsonAlpha Institute for Biotechnology. For each RNA extraction, roots from 7 plants were pooled and ground while keeping the sample frozen. RNA extraction was performed as described previously. Libraries from a previously published M. truncatula transcriptome experiment were analyzed using the same Kallisto/SLEUTH approach. The 144 samples characterized in that experiment showed alignment rates of 91\u201396%, except four outliers with rates of 73\u201388%. Analysis of this dataset detected 40,988 genes with non-zero expression, of which 36,298 were in common with the 37,536 identified in the present LCO-treatment experiment. DE statistics were computed relative to the wild-type reference for matched time points. ATAC-seq reads were aligned to the reference genome, and properly paired fragments with a quality score of 3 or greater were retained with \u201csamtools view -Sb -q3 -f2\u201d. Peaks called at each time point were merged across time points to generate a set of \u201cuniversal peaks\u201d using custom scripts. ATAC-seq signal was quantified for (1) gene promoter regions (defined as 2 kbp upstream to 2 kbp downstream of a given gene TSS) and (2) the universal peaks described above, using custom scripts. The signal for the universal peaks was quantified by the log-ratio of the mean per-bp coverage of the respective peak region relative to the global average per-bp coverage. For both data sets, this was followed by quantile normalization across time points, providing a continuous measure of the accessibility of gene promoter and peak regions. We computed a permutation-based P-value that estimates the probability of observing a correlation in the permuted data greater in magnitude than the observed correlation, treating positive and negative correlations separately. For the eight time points in this data set, Pearson\u2019s correlations were typically significant (P\u00a0<\u2009=\u20090.05) when > 0.50 or < \u2212\u20090.50. The zero-meaned promoter and universal peak accessibility profiles were clustered with k-means clustering, and the optimal settings for k were determined separately for each data set. 
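The permutation-based P-value described above can be sketched as follows. This is a generic illustration, not the authors' implementation; the function name, permutation count, and seed are our own choices:

```python
import numpy as np

def perm_corr_pvalue(x, y, n_perm=1000, seed=0):
    """Empirical P-value for a Pearson correlation via permutation,
    treating positive and negative correlations separately."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x, y)[0, 1]
    null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                     for _ in range(n_perm)])
    if r_obs >= 0:
        # probability of a permuted correlation at least as large
        p = (np.sum(null >= r_obs) + 1) / (n_perm + 1)
    else:
        p = (np.sum(null <= r_obs) + 1) / (n_perm + 1)
    return r_obs, p

# Example: eight "time points" with strongly correlated profiles
x = np.arange(8, dtype=float)
y = 2.0 * x + np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3])
r, p = perm_corr_pvalue(x, y)
```

With only eight time points, correlation magnitudes above roughly 0.5 are needed to reach P <= 0.05, consistent with the threshold reported above.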
In both cases, the silhouette index (computed with a correlation distance metric) was used to select the optimal k; here, k\u00a0=\u20096 clusters were chosen for the promoter accessibility data. We applied a novel algorithm, dynamic regulatory module networks (DRMN), which infers expression modules for each time point and their associated regulatory programs comprising the cis-regulatory elements that best predict the gene expression of a particular module. To obtain cis-regulatory features for each gene, we used 333 M. truncatula motif position weight matrices from the CisBP v1.02 database, mapped to the genome with the pwmmatch.exact.r script (from the PIQ pipeline). Motif instances were also considered in light of the daphne mutation reported for the NIN (NODULE INCEPTION) gene in Lotus japonicus by Yoro et al., which likely has a counterpart in the NIN gene of M. truncatula. For each gene, the accessibility of multiple instances of the same motif mapped to that gene was summed. Finally, the aggregated motif accessibility feature data were merged across the time course and quantile normalized. The DRMN algorithm takes as input the number of modules, k, and uses a regularized regression model, Fused Lasso, to learn a regulatory program for each module k across all time points jointly. The objective combines a squared-error fit with sparsity and fusion penalties. In this formulation, Xc,k is the nk\u00a0X\u00a01 vector of expression levels for the nk genes in module k at time point c; Yc,k is the nk X p motif-accessibility feature matrix for the same genes; \u03b2c,k is the corresponding coefficient vector, and Bk is the matrix of coefficients across time points. The sum over c, c\u2032 represents the sum over pairs of consecutive time points. Specifically, \u2016.\u20161 is the l1 norm, \u2016.\u20162 is the l2 norm, and \u2016.\u20162,1 is the l1,2 norm, i.e., the sum of the l2 norms of the columns of the given matrix. Furthermore, \u03c11, \u03c12, and \u03c13 are hyper-parameters of the model that need to be tuned for optimal training and inference of DRMNs. 
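The objective itself did not survive extraction. Based on the variable definitions above, a fused-lasso objective of the following form is one plausible reconstruction (ours, not taken verbatim from the paper):

```latex
\min_{B_k}\ \sum_{c}\left(\left\lVert X_{c,k}-Y_{c,k}\,\beta_{c,k}\right\rVert_2^{2}
+\rho_1\left\lVert \beta_{c,k}\right\rVert_1\right)
+\rho_2\sum_{(c,c')}\left\lVert \beta_{c,k}-\beta_{c',k}\right\rVert_2^{2}
+\rho_3\left\lVert B_k\right\rVert_{1,2}
```

Here the \u03c11 term is a sparsity penalty on each time point's coefficients, the \u03c12 term fuses coefficients of consecutive time points, and the \u03c13 term encourages consistent feature selection across all time points.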
These parameters represent (1) a sparsity penalty, (2) a penalty enforcing similarity of features for consecutive time points, and (3) a penalty enforcing an overall similarity of feature selection across all time points. We used several criteria to determine these hyper-parameter settings. The most important is the Pearson correlation of actual and predicted expression in a threefold cross-validation setting, used to assess the predictive power of models inferred for varied settings of the hyper-parameters. Additionally, the quality of the clustering (silhouette index scores), the BIC-corrected likelihood score, and the stability of predictive power in threefold cross-validation were considered. We first scanned \u03c11 and \u03c12 independently and assessed the resulting predictive power for all models inferred. Predictive power generally decreased monotonically with increasing values of either parameter for \u03c11 > 10, while for \u03c12 <\u2009=\u200925 the clustering was unstable. A choice was made for \u03c11\u2009=\u20095 over \u03c11\u2009=\u20091, since the predictive-power correlation was marginally higher for \u03c12\u2009=\u200930\u201360. With the \u03c11 parameter fixed to 5, a second independent scan of \u03c12 and \u03c13 was performed, with (1) \u03c12 varied from 25\u201360 in increments of 5, then 75 and 100, and (2) \u03c13 scanned over 0\u201360 in increments of 5, then 75 and 100. For settings of \u03c13\u2009=\u20095\u201320, the predictive power of the least expressed module tended to be unstable, recovering comparable but not greater performance relative to \u03c13\u2009=\u20090 or \u03c13\u2009>\u200920, indicating no advantage to setting \u03c13\u2009>\u20090. We considered the cross-validation predictive power, the silhouette index of modules, and the similarity to ESCAROLE modules in determining a setting for \u03c12. Significant enrichment was then defined across the 0\u20131 and 2\u201324 h portions of the time course. 
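The cross-validated hyper-parameter scan can be illustrated generically. In this sketch, ridge regression stands in for the fused-lasso model, and the synthetic data, names, and grid are ours; each grid point is scored by the mean Pearson correlation of predicted versus held-out values over three folds:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # closed-form ridge estimate: (X^T X + lam*I)^{-1} X^T y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_predictive_power(X, y, lam, n_folds=3, seed=0):
    # mean Pearson correlation of predicted vs. held-out values
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    cors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.hstack([folds[j] for j in range(n_folds) if j != k])
        beta = ridge_fit(X[train], y[train], lam)
        cors.append(np.corrcoef(X[test] @ beta, y[test])[0, 1])
    return float(np.mean(cors))

# Synthetic stand-in: 90 "genes" x 5 "motif accessibility" features
rng = np.random.default_rng(1)
X = rng.normal(size=(90, 5))
y = X @ np.array([2.0, -1.0, 0.0, 0.5, 0.0]) + 0.1 * rng.normal(size=90)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_predictive_power(X, y, lam) for lam in grid}
best = max(scores, key=scores.get)
```

In the setting described above, the same loop would run over (\u03c11, \u03c12) pairs with the DRMN model in place of the ridge fit.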
The choice to compare across the 1\u2013>\u20092 h time-point transition was motivated by the observation of module reorganization at this time window. For each of the 79 transitioning gene sets, MTG-LASSO was applied (using the SLEP v4.1 package in MATLAB), enforcing smoothness of the regression coefficients across genes according to the l2 norm; \u03bb is the hyper-parameter controlling the group structure. We computed a P-value to assess the significance of the frequency with which a given regulator was selected relative to random. This was achieved by randomizing the data 40 times and estimating a null distribution for the rate at which that regulator was selected across folds; a Z-test P-value was then obtained for the result relative to random. Regulators were called significant (P\u00a0<\u20090.05) relative to random for the frequency of selection across folds. MTG-LASSO\u2019s hyper-parameter, \u03bb, was determined for each transitioning gene set from the range 0.20\u20130.99 based on (1) the mean Pearson\u2019s correlation (predictive power) of the inferred regulatory features and (2) the number of regulators (5\u201315 for most gene sets) identified as significant, such that the ratio of the number of identified regulators to the number of target genes was close to 0.05. The remaining 73 motifs were assigned to 261 M. truncatula genes in the v5 genome assembly that were additionally identified as transcription factors (TFs). The relatively high number of motif-to-gene-name mappings is because TF names were provided in CisBP v1.2 as systematic gene names from the v3/v3.5 M. truncatula genome assemblies rather than v5. We used a 70% BLAST similarity score to define mappings from M. 
truncatula v3/v3.5 genome systematic gene names to v5 genome systematic gene names. Significant regulators were identified for 33 of the gene sets using MTG-LASSO. We used RNAi to validate three predicted regulators from our DRMN analysis: EIN3, ERF1, and IAA4-5. A 104 bp region in the CDS specific to the gene of interest was amplified with a 5\u2032-CACC overhang and inserted into pENTR\u2122/D-TOPO\u00ae using directional TOPO\u00ae cloning, then recombined in vitro with the destination vector pK7GW1WG2(II)-RedRoot using Gateway\u00ae LR Clonase\u00ae II enzyme mix following the manufacturer\u2019s instructions. The HEL and UBC9 genes were used as endogenous controls. Two (EIN3\u2014MtrunA17Chr5g0440591) or three (ERF1\u2014MtrunA17Chr1g0186741) technical replicates were used. A BLAST search was performed for all primers against the M. truncatula v5 genome to ensure specificity; the primers chosen for the validation of RNAi do not overlap with the RNAi regions. First-strand cDNA was synthesized using the RevertAid RT Reverse Transcription Kit (Thermo Scientific\u2122). Quantitative RT-PCR was performed using BIORAD SsoAdvanced Universal SYBR Green Supermix on a BIORAD CFX96\u2122 Real-Time system (C1000 Touch\u2122 Thermal Cycler). The constructs were introduced into Agrobacterium rhizogenes MSU440 by electroporation. Composite M. truncatula plants were generated as previously described. After transformation with A. rhizogenes MSU440, roots were screened for red fluorescence of tdTomato, and composite plants with red roots were transferred to growth pouches containing modified nodulation medium (MNM) and inoculated with S. meliloti 1021 harboring pXLGD4 (lacZ). Two weeks after inoculation, roots were rinsed in distilled water, and nodules were visualized and counted under a Leica fluorescence stereomicroscope. Additional file 1: Table S1. Primers used in the RNAi validation study. Figure S1. Analysis workflow. Figure S2. Detailed DE gene statistics summary. Figure S3. Supplementary ESCAROLE clustering results. Figure S4. 
ATAC-seq data alignment statistics and fragment length distributions. Figure S5. ATAC-seq activity heatmaps and line plots for \u00b11 kb TSS regions in LCO-treatment data. Figure S6. ATAC-seq activity heatmaps and line plots for \u00b11 kb TSS regions in the comparable Maher et al. Medicago root sample data. Figure S7. Correlation of aggregated ATAC-seq activity for \u00b12 kb promoter regions. Figure S8. Supplementary ATAC-seq promoter analysis plots. Figure S9. Supplementary ATAC-seq peak-calling analysis plots. Figure S10. DRMN hyper-parameter tuning summary. Figure S11. DRMN module network edge-weight summary. Figure S12. DRMN module GO enrichment summary. Figure S13. Summary of ESCAROLE and DRMN transitioning gene set statistics and comparison. Figure S14. Summary of MTG-LASSO results and parameter tuning. Figure S15. Supplementary RNAi validation information.Additional file 2: Table S1. DRMN module assignments . Table S2. Inferred module-network edge-weights from DRMN. Table S3. Module motif enrichments. Table S4. MTG-LASSO target predictions. Table S5. RNAi validation results."}
+{"text": "RNA degradation can significantly affect the results of gene expression profiling, with subsequent analysis failing to faithfully represent the initial gene expression level. It is urgent to have an artificial intelligence approach to better utilize the limited data to obtain meaningful and reliable analysis results in the case of data with missing destination time. In this study, we propose a method based on the signal decomposition technique and deep learning, named Multi-LSTM. It is divided into two main modules: One decomposes the collected gene expression data by an empirical mode decomposition (EMD) algorithm to obtain a series of sub-modules with different frequencies to improve data stability and reduce modeling complexity. The other is based on long short-term memory (LSTM) as the core predictor, aiming to deeply explore the temporal nonlinear relationships embedded in the sub-modules. Finally, the prediction results of sub-modules are reconstructed to obtain the final prediction results of time-series transcriptomic gene expression. The results show that EMD can efficiently reduce the nonlinearity of the original data, which provides reliable theoretical support to reduce the complexity and improve the robustness of LSTM models. Overall, the decomposition-combination prediction framework can effectively predict gene expression levels at unknown time points. The fate of RNA transcripts in dead tissues and the decay of isolated RNA are not subject to strict regulation, unlike in vivo normal cells . WhetherTime-series-based gene expression data have become one of the most fundamental methods for studying biological processes. Most biological processes are dynamic, and using time series data can characterize the function of specific genes, discover linkages and regulatory relationships between genes, and find their clinical applications ,7,8. 
It is the first prediction framework constructed with deep learning methods for time-series data of RNA isolated from tissue samples. Time-series-based prediction can capture directionality because information on gene expression flows over time in one direction. Overall, we developed a new LSTM-based deep learning model named Multi-LSTM to predict gene expression at target time points in transcriptome time-series data. Our data are the expression levels of transcripts corresponding to each gene, obtained by extracting RNA from mouse brain tissue followed by transcriptome library construction and sequencing. Usually, the normalized Fragments Per Kilobase of exon model per Million mapped fragments (FPKM) value is the standard value used to judge whether a gene is highly or lowly expressed. The raw gene expression data and the decomposition results are shown in the corresponding figures; the associated statistical tests gave p-values > 0.05. First, an LSTM model was built for each IMF component to obtain the corresponding prediction results; the predictions for the nine components are shown in the corresponding figure. In order to quantify the prediction accuracy of each component more directly, the root-mean-square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) were selected as prediction performance evaluation indexes. Among them, RMSE assesses the degree of inconsistency between the predicted and true values of the model; theoretically, the smaller its value, the better the model performs. The MAE complements the evaluation of the model's performance. Meanwhile, R2 is a statistical indicator of the closeness between the predicted and true values, and the R2 of each component was evaluated accordingly. 
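The three evaluation indexes can be computed in a few lines; this is a generic NumPy sketch with our own helper names, not the authors' code:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root-mean-square error: penalizes large deviations more strongly
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mae(y_true, y_pred):
    # mean absolute error: average magnitude of the prediction errors
    return float(np.mean(np.abs(y_pred - y_true)))

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([2.5, 3.5, 6.5, 7.5])
# rmse -> 0.5, mae -> 0.5, r2 -> 1 - 1.0/20.0 = 0.95
```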
To use this model for disease prediction in Parkinson\u2019s disease, the genes related to RNA degradation can simply be replaced with genes related to Parkinson\u2019s disease. Of course, this model is not limited to the brain; it can also be used to predict retinal degenerative diseases, such as age-related macular degeneration. In conclusion, this model can be used very widely in disease prediction. The data used in this study were obtained from brain tissue samples of normal mice. Both tissues and blood cells should be preserved by proper methods as soon as possible after leaving their optimal cellular life state; otherwise, they will undergo degradation. This study did not involve biological experiments and mainly explored the mechanism of RNA degradation in mouse brain tissue. RNA from mouse brain tissue was stored at room temperature (RT) prior to extraction, with 3 samples at the 0 h time point, 3 samples at 2 h, 3 samples at 4 h, and 3 samples at 6 h. The data obtained from the final sequencing of the samples were used for the analysis in this study. The 12 time points are 12 consecutive points in time; we used the latter 11 time points as the training set and the gene expression data from the very first time point as the prediction set. We then obtained the list of genes associated with RNA degradation from existing studies in NCBI. In total, there are 60 genes, each with expression values from time point 1 to time point 12. The data used in this study were partly obtained from previously published articles. EMD has been recognized as an effective time\u2013frequency analysis method for dealing with nonlinear, nonstationary signals since it was proposed in 1998. The variable instantaneous frequency and instantaneous amplitude can enhance the efficiency of signal decomposition. 
At the same time, the EMD method can better preserve the nonsmooth and nonlinear characteristics of the original signal. The eigenmode functions obtained by EMD also have the characteristic of intra-wave modulation, which allows information of the same component expressed at different Fourier frequencies to be covered by one IMF. The purpose of the EMD algorithm is to decompose the original signal into a series of IMFs and then obtain the time\u2013frequency relationship of the signal by the Hilbert transform. Taking a terahertz time-domain spectral signal as an example, the basic calculation process of the EMD algorithm is as follows. Step 1: Identify all local extrema of the signal x(t) and construct its upper and lower envelopes by interpolation. Step 2: Calculate the mean of the upper and lower envelopes, m(t) = (e_max(t) + e_min(t)) / 2; the candidate component is defined as h(t) = x(t) - m(t). Step 3: Determine whether the component h(t) satisfies the IMF conditions; if not, repeat the sifting with h(t) in place of x(t). Step 4: Subtract the first IMF component from the original signal to obtain the residual signal r(t) = x(t) - IMF1(t). Repeat the above three steps with r(t) as the new input. The above steps are repeated continuously until the stopping criterion of the EMD algorithm is satisfied. The stopping criterion uses the Cauchy convergence criterion, a test that requires the normalized squared difference between two adjacent sifting operations to be sufficiently small, as defined by SD = sum_t |h_{k-1}(t) - h_k(t)|^2 / sum_t h_{k-1}(t)^2, where the sum runs over the signal length T; the decomposition ends when SD falls below a set threshold. The root-mean-square error (RMSE), also known as root-mean-square deviation, is a commonly used measure of the difference between predicted and observed values. Its formula is RMSE = sqrt((1/n) * sum_i (y_hat_i - y_i)^2). The mean absolute error (MAE) is the average of the absolute values of the prediction errors. The mean absolute error avoids the problem of errors canceling each other out and thus accurately reflects the magnitude of the actual prediction error. 
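The sifting procedure above can be sketched in NumPy. This is a simplified illustration of our own: it uses linear interpolation for the envelopes where classical EMD uses cubic splines, and the iteration caps and tolerance are arbitrary. The key invariant, that the IMFs plus the residual reconstruct the original signal, holds by construction:

```python
import numpy as np

def _envelope(t, h, idx):
    # interpolate through the extrema (linear here; classical EMD
    # uses cubic splines)
    return np.interp(t, t[idx], h[idx])

def sift(t, x, max_iter=30, sd_tol=0.2):
    """One EMD sifting loop: returns a candidate IMF."""
    h = x.copy()
    for _ in range(max_iter):
        d = np.diff(h)
        maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        m = (_envelope(t, h, maxima) + _envelope(t, h, minima)) / 2
        h_new = h - m
        # Cauchy-style stopping criterion on the normalized change
        sd = np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12)
        h = h_new
        if sd < sd_tol:
            break
    return h

def emd(t, x, max_imfs=6):
    """Decompose x into IMFs plus a residual; their sum equals x."""
    imfs, r = [], x.copy()
    for _ in range(max_imfs):
        d = np.diff(r)
        n_extrema = (np.sum((d[:-1] > 0) & (d[1:] <= 0))
                     + np.sum((d[:-1] < 0) & (d[1:] >= 0)))
        if n_extrema < 4:
            break
        imf = sift(t, r)
        imfs.append(imf)
        r = r - imf
    return imfs, r

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)
imfs, residual = emd(t, x)
```

A per-IMF predictor (the LSTM in Multi-LSTM) would then be trained on each element of `imfs`, and the component predictions summed to reconstruct the final forecast.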
Its formula is MAE = (1/n) * sum_i |y_hat_i - y_i|. The coefficient of determination (R2) is a statistical measure of how close the regression predictions are to the true data points; its formula is R2 = 1 - sum_i (y_i - y_hat_i)^2 / sum_i (y_i - y_bar)^2. The LSTM was proposed to address the deficiency of RNNs on long-range dependency problems. The three gates in the LSTM gate structure are the input gate, the output gate, and the forget gate. The input gate mainly controls how much new information is stored in the cell state, the forget gate controls how much of the previous cell state is retained in the memory unit update, and the output gate controls how much of the cell state is exposed in the hidden-state update. In this study, we propose a method based on signal decomposition techniques and deep learning, called Multi-LSTM. It is a prediction framework to explore the expression changes of RNA degradation-related genes in the initial hours after sampling, which is important for studying the effect of RNA degradation on gene expression levels. The first module decomposes the collected gene expression data with the empirical mode decomposition algorithm to obtain a series of sub-modules with different frequencies in order to improve data stability and reduce modeling complexity. The second module uses an LSTM neural network as the core predictor, aiming to deeply explore the temporal nonlinear relationships embedded in the sub-modules. Finally, the prediction results of the sub-modules are reconstructed to obtain the final transcriptome time-series gene expression prediction. The results show that EMD can efficiently reduce the nonlinearity of the original data, which provides reliable theoretical support for reducing the complexity and improving the robustness of LSTM models. Meanwhile, the decomposition-combination prediction framework can effectively predict gene expression levels at unknown time points by exploiting the robust temporal nonlinearity analysis capability of LSTM. The combination of LSTM with the effective EMD decomposition algorithm not only improves the accuracy of the prediction results but also keeps prediction errors low. 
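The gate-update formulas referenced above were lost in extraction; in their standard textbook form (not copied from this paper), the LSTM updates are:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{(input gate)}\\
\tilde{C}_t &= \tanh\!\left(W_C\,[h_{t-1}, x_t] + b_C\right) && \text{(candidate memory)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(memory unit update)}\\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right) && \text{(output gate)}\\
h_t &= o_t \odot \tanh\!\left(C_t\right) && \text{(status update)}
\end{aligned}
```

Here \(\sigma\) is the logistic sigmoid and \(\odot\) denotes element-wise multiplication; the "memory unit update" and "status update" named in the text correspond to the \(C_t\) and \(h_t\) lines.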
The results show that both forward and reverse time series can be predicted accurately. The combination of deep learning neural networks with biomedicine will lead to great breakthroughs in gene function prediction, disease-related gene discovery, and transcriptome time series analysis."}
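The LSTM gate updates used by the core predictor can be sketched as a single forward step (a minimal NumPy sketch; the stacked parameter layout and the names W, U, b are our own convention, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4n, d), U: (4n, n), b: (4n,) hold the
    stacked parameters of the input, forget, output and candidate parts."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:n])          # input gate
    f = sigmoid(z[n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])      # output gate
    g = np.tanh(z[3*n:4*n])      # candidate memory
    c = f * c_prev + i * g       # memory unit update
    h = o * np.tanh(c)           # status (hidden state) update
    return h, c
```

Iterating this step over each EMD sub-series, and summing the per-series predictions, mirrors the decomposition-combination framework described above.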
+{"text": "Gene regulatory network (GRN) inference is an effective approach to understand the molecular mechanisms underlying biological events. Generally, GRN inference mainly targets intracellular regulatory relationships such as transcription factors and their associated targets. In multicellular organisms, there are both intracellular and intercellular regulatory mechanisms. Thus, we hypothesize that GRNs inferred from time-course individual (whole embryo) RNA-Seq during development can reveal intercellular regulatory relationships underlying the development. Here, we conducted time-course bulk RNA-Seq of individual mouse embryos during early development, followed by pseudo-time analysis and GRN inference. The results demonstrated that GRN inference from RNA-Seq with pseudo-time can be applied to individual bulk RNA-Seq similarly to scRNA-Seq. Validation using an experimental-source-based database showed that our approach could significantly infer GRNs for all transcription factors in the database. Furthermore, the inferred ligand-related and receptor-related downstream genes were significantly overlapped. Thus, the inferred GRN based on the whole organism could include intercellular regulatory relationships, which cannot be inferred from scRNA-Seq based only on gene expression data. Overall, inferring GRNs from time-course bulk RNA-Seq is an effective approach to understand the regulatory relationships underlying biological events in multicellular organisms. Regulation of gene expression is a fundamental factor that controls cellular events such as proliferation and differentiation. Understanding gene regulatory networks is important to elucidate the molecular mechanisms underlying cellular events. Recently, gene regulatory network (GRN) inference based on time-course data has garnered considerable attention in single-cell RNA-Seq (scRNA-Seq). State-of-the-art scRNA-Seq analysis techniques can generate transcriptome information from thousands of cells. 
Originally, GRN inference was applied to gene expression data from tissues and pooled cells (bulk samples) generated using DNA microarrays and RNA-Seq. Theoretically, GRNs inferred from whole-body and tissue RNA-Seq differ from those inferred from scRNA-Seq. scRNA-Seq provides transcriptomic information at the cellular level that enables inference of intracellular GRNs involved in proliferation and differentiation. In contrast, RNA-Seq of whole individuals covers both intracellular and intercellular regulation. Thus, we hypothesized that GRNs inferred from time-course individual RNA-Seq during embryonic development would include intercellular regulatory relationships between ligand genes and downstream genes of related signaling pathways. To test this hypothesis, we conducted time-course bulk RNA-Seq of individual mouse embryos during early development, followed by pseudo-time analysis and GRN inference. Embryos were obtained from Institute of Cancer Research (ICR) mice at different stages. The number of replicates (embryos) was 10 at E7.5 and 11 at the remaining stages. All sacrificed female mice were housed under a 12-h dark\u2013light cycle, with the light phase starting from 8 am. All animal experiments were performed in accordance with the guidelines of the Animal Care and Use Committee of Osaka University Graduate School of Dentistry, Osaka, Japan. All experimental protocols were approved by the Animal Care and Use Committee of Osaka University Graduate School of Dentistry. All methods are reported in accordance with the ARRIVE guidelines. The total RNA was extracted using the RNeasy kit according to the manufacturer\u2019s protocol. The total RNA concentration was measured using the Qubit\u2122 RNA HS Assay Kit, adjusted to 5 ng/\u03bcL, and stored at \u221280\u00b0C until the subsequent analysis. Non-targeted RNA-Seq was conducted according to the Lasy-Seq ver. 1.1 protocol (https://sites.google.com/view/lasy-seq/). 
Briefly, Read 1 reads were processed with fastp (version 0.21.0). Read counts were normalized using the \u201cNormalizeData\u201d function with the default parameters in Seurat (version 4.0.0); Seurat and slingshot were used for the pseudo-time analysis, together with the \u201csmooth.spline\u201d function in R (version 4.0.1). The matrix A was optimized 20 times with 100 iterations and D = 4. Pearson\u2019s correlation coefficients between the values of each A from the 20 optimizations and the mean A were calculated. In the following analysis, we used the average values of the top 10 A showing higher correlations with the mean A from the 20 optimizations. To define the thresholds for downstream gene selection for each gene, a linear function was regressed using the \u201cnls\u201d function in R on the scatter plot of the absolute values of A for downstream genes in decreasing order; the X and Y axes represented the integers from 1 to 28,117 and the absolute values of A, respectively. Genes with larger absolute values of A than the Y values of the regressed line were defined as the inferred downstream genes of each gene. In the SCODE and dynGENIE3 algorithms, normalized read counts were used as input. AUCs for A were calculated using the \u201cperformance\u201d function in ROCR (version 1.0\u201311). The validation made use of the batch-corrected scRNA-Seq data of approximately 60,000 cells of high quality. We sampled individual embryos (n = 10 or 11) at seven time points: E7.5, E8.5, E9.5, E10.5, E11.5, E12.5, and E13.5, followed by 3\u2032 RNA-Seq using the Lasy-Seq method. SCODE returns a matrix A corresponding to the inferred gene regulatory network, in which the value of Ai,j indicates the regulatory effect on the downstream gene i from the regulator j. Ai,j > 0 indicates that the regulator j positively regulates gene i, whereas Ai,j < 0 indicates the opposite. Because SCODE optimizes A by random sampling, we optimized A 20 times to check for reproducibility. 
Thereafter, PCCs between the values of each A from the 20 optimizations and the mean A were calculated (r = 0.54), although some optimizations were outliers. Using the regressed-line threshold, downstream genes were selected for each regulator; the inferred downstream genes of Sox8, for example, were defined in this way. The inferred A was validated by calculating the AUC against the experimental-source-based database, and the regulators with significant AUCs (p-value < 0.01) were consistent across analyses. For the signaling pathways examined, such as the Wnt and Fzd genes, the inferred ligand-related and receptor-related downstream genes were significantly overlapped (p-value < 0.01), suggesting that the inferred GRN captures intercellular regulation. In conclusion, our approach would allow successful inference of the intercellular regulatory relationships related to the major signaling pathways as well as the intracellular pathways related to TFs. Recently, GRN inference based on the combination of scRNA-Seq and pseudo-time analysis has garnered considerable attention. However, unlike scRNA-Seq, which can elucidate transcriptomic dynamics in a certain cellular event such as proliferation and differentiation, bulk RNA-Seq could provide a mixture of various transcriptomic dynamics regarding the cellular events occurring in an embryo. Assignment of pseudo-time is an important step in GRN inference from time-course data; in the case of scRNA-Seq, the accuracy of pseudo-time assignment is controversial. Herein, we proposed a new strategy for the threshold of significant gene regulatory relationships inferred by SCODE. With our strategy, for E2f3 the number of predicted target genes is only ~10% of the actual targets; shifting the threshold to approximately 17,000 targets for E2f3 may still result in a validated target rate of approximately 80%. GRN indicates the intracellular interconnections of genes in a narrow sense; intercellular regulation of genes via cell\u2013cell communication is also a key factor to understand the regulatory mechanisms underlying multicellular organisms. 
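The regressed-line threshold strategy described above can be sketched as follows (a simplified sketch that uses ordinary least squares in place of R's "nls"; the function name is our own):

```python
import numpy as np

def select_downstream(a_col):
    """Rank the absolute regulatory effects |A| in decreasing order, fit a
    straight line to the rank-vs-|A| scatter, and keep the genes whose
    |A| lies above the regressed line."""
    abs_a = np.abs(np.asarray(a_col, dtype=float))
    order = np.argsort(abs_a)[::-1]          # gene indices, largest first
    sorted_vals = abs_a[order]
    x = np.arange(1, len(sorted_vals) + 1)   # ranks 1..N
    slope, intercept = np.polyfit(x, sorted_vals, 1)
    line = slope * x + intercept
    return order[sorted_vals > line]         # inferred downstream genes
```

Because the few strong regulatory effects pull the regressed line upward, moderate effects just below the head of the ranking fall under the line and are excluded.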
Several studies have attempted to systematically identify cell\u2013cell communications based on single-cell gene expression profiles and information regarding ligand\u2013receptor pairs."}
+{"text": "The problem addressed by dictionary learning (DL) is the representation of data as a sparse linear combination of columns of a matrix called dictionary. Both the dictionary and the sparse representations are learned from the data. We show how DL can be employed in the imputation of multivariate time series. We use a structured dictionary, which is comprised of one block for each time series and a common block for all the time series. The size of each block and the sparsity level of the representation are selected by using information theoretic criteria. The objective function used in learning is designed to minimize either the sum of the squared errors or the sum of the magnitudes of the errors. We propose dimensionality reduction techniques for the case of high-dimensional time series. For demonstrating how the new algorithms can be used in practical applications, we conduct a large set of experiments on five real-life data sets. The missing data (MD) are simulated according to various scenarios where both the percentage of MD and the length of the sequences of MD are considered. This allows us to identify the situations in which the novel DL-based methods are superior to the existing methods. The data are arranged in a matrix with T rows and K columns. Both T and K are very large for the data sets that are collected nowadays. The massive amount of data poses difficulties for both the storage and the processing. Another challenge comes from the fact that some of the entries of the big matrix are missing. It is well known from the classical literature on time series that a multivariate time series data set is obtained by measuring K variables at T time instants. The conventional approach is to estimate the missing data and then to use the resulting complete data set in statistical inference. 
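As a sketch of how a dictionary-based representation can fill gaps, the following simplified version uses a fixed dictionary and Orthogonal Matching Pursuit (the paper also learns the dictionary and selects the sparsity by information criteria; the names here are our own):

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit: greedy sparse code with <= s nonzeros."""
    residual = y.astype(float).copy()
    support, x = [], np.zeros(D.shape[1])
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def impute(D, y, observed, s):
    """Fit the sparse code on the observed rows only, then fill the gaps
    with the dictionary's prediction for the missing rows."""
    x = omp(D[observed], y[observed], s)
    y_hat = D @ x
    out = y.astype(float).copy()
    out[~observed] = y_hat[~observed]
    return out
```

The key point is that the code is estimated from the observed rows of the dictionary only, so the same sparse combination extends naturally to the missing rows.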
The estimation methods span a wide range, from simple ones that perform imputation for each time series individually by considering the mean or the median, or that employ the last value carried forward or the next value carried backward, to more advanced ones that involve the evaluation of the (Gaussian) likelihood. Here we do not discuss the imputation methods that can be easily found in time series textbooks, but we briefly present the newer methods that have been compared in the recent benchmarking literature. Several of them rely on factorizations of the data matrix; one family is based on the singular value decomposition (SVD) of the matrix formed by the K columns. An approximation of SVD, which is called centroid decomposition (CD), is employed for the same purpose. Another imputation method is dubbed Grassmannian Rank-One Update Subspace Estimation (GROUSE). In robust principal component analysis (RPCA), the data matrix (which is supposed to be complete) is represented as a low-rank matrix plus a sparse matrix. Another matrix factorization that can be instrumental in time series imputation is the one generated by dictionary learning (DL). In this article, we extend the DL-based solution for time series imputation which we proposed in our earlier work. Our method, DLM, is a flexible approach that allows the user to choose the norm of the errors that is minimized. The main algorithm alternates between two steps: (i) with the current dictionary fixed, the sparse representations are computed, each having at most s nonzeros; (ii) with the representations fixed, the dictionary is updated. The missing data are simulated by using a Polya urn model: the urn initially contains R red balls and S black balls; at each step, a ball is extracted at random and returned to the urn together with additional balls of the same color, and the step is recorded as a one if the extracted ball is red and as a zero if it is black. 
It was proven that R and S can be chosen such that the expected proportion of ones matches the targeted percentage of missing data. The indexes of the missing data correspond to the positions of ones in the simulated sequence. After the missing values are simulated, each time series is decomposed into trend, seasonal component and remainder; the DLM imputation method is then applied to the deseasonalized data. The decomposition uses the implementation from the R package imputeTS: https://github.com/SteffenMoritz/imputeTS/blob/master/R/na_seadec.R (accessed on 28 February 2022). The implementation returns a specific output when it cannot detect a seasonal pattern for the analyzed time series. From the package imputeTS, we only use the decomposition technique and not the imputation methods, because all the imputation methods are designed for univariate time series; thus they are sub-optimal for the multivariate case. In the empirical comparison of the methods, we have used the code available at https://github.com/eXascaleInfolab/bench-vldb20.git (accessed on 3 October 2021) for the imputation methods that have been assessed in the benchmark, and the code available at https://www.stat.auckland.ac.nz/%7Ecgiu216/PUBLICATIONS.htm (accessed on 17 June 2022). In the next section, we present the results obtained by DLM on five data sets that have also been used in previous comparisons. For the data sets where K is relatively small, we cluster the time series into groups before imputation. The first data set comprises monthly climate measurements that have been recorded at various locations in North America from 1990 to 2002. 
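The urn scheme used above to simulate the positions of the missing data can be sketched as follows (a minimal sketch; the reinforcement count `extra` and the seed are illustrative assumptions):

```python
import random

def polya_missing_mask(length, R, S, extra=1, seed=0):
    """Generate a 0/1 sequence from a Polya urn with R red and S black
    balls: draw a ball, return it together with `extra` balls of the same
    colour; a one (red draw) marks a missing position."""
    rng = random.Random(seed)
    red, black = R, S
    mask = []
    for _ in range(length):
        if rng.random() < red / (red + black):
            mask.append(1)       # red: position is missing
            red += extra         # reinforcement lengthens missing runs
        else:
            mask.append(0)       # black: position is observed
            black += extra
    return mask
```

The reinforcement makes missing values cluster into runs, which is exactly the long-gap scenario the experiments contrast with sampling without replacement.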
We do not transform the time series with the seasonal decomposition method for this data set; the sparsity s is selected from the set \u03a6 = {5, 6, ...}. Since the data are sampled monthly, it is natural to take the signal length to be a multiple of 12. DLM1+BIC and DLM1+EBIC are also very good when sampling without replacement is employed, but their performance diminishes in the case of the Polya model. For the MeteoSwiss time series, the natural choices of m are those for which m corresponds to a time interval (in hours) that is a divisor of 24; for each value of m, s is chosen from a set that depends on m. As the time interval between successive observations of these time series is 10 min, these are the most suitable values. Taking into consideration these results, we further conduct the experiments for all cases of missing data simulations with these settings. The next data set consists of water discharge time series recorded by BAFU on Swiss rivers from 2010 to 2015. We do not remove the seasonality from the simulated incomplete time series. Relying on some of the empirical observations that we made for the MeteoSwiss time series, and taking into consideration the fact that the sampling period for the BAFU time series is equal to 30 min, we set the signal length accordingly. The normalized errors for this data set show that the winners include ROSL. The next data set contains average daily temperatures recorded at various sites in China from 1960 to 2012. After removing the trend and the seasonal components from all 50 time series with simulated missing data, we cluster them by applying a greedy algorithm; s is selected from a predefined set. Mainly based on the lessons learned from the experiments with the data sets that have been analyzed in the previous sections, we have chosen the signal length for this data set as well. The last data set comprises hourly sampled air quality measurements that have been recorded at monitoring stations in China from 2014 to 2015. 
The results are better for the case when the missing data are simulated by sampling without replacement than for the case when the sequences of missing data are long (see the results for simulation by using the Polya model). Small values are selected for the sparsity s, and this supports the idea that sparse models are appropriate for multivariate time series. Both BIC and EBIC are effective in selecting the best structure of the dictionary and the sparsity. Our imputation method can also be applied when the percentage of the missing data is not the same for all the time series in the data set. Based on the experimental results, it is easy to see that the percentage of missing data does not have an important influence on how DLM is ranked in comparison with the other methods; thus we expect the same to be true when the amount of missing data varies across the time series."}
+{"text": "The rapid growth of digital information has produced massive amounts of time series data with rich features, and most time series data are noisy and contain outlier samples, which leads to a decline in the clustering effect. To efficiently discover the hidden statistical information in the data, a fast weighted fuzzy C-medoids clustering algorithm based on P-splines (PS-WFCMdd) is proposed for time series datasets in this study. Specifically, the P-spline method is used to fit the functional data related to the original time series data, and the obtained smooth-fitting data are used as the input of the clustering algorithm to enhance the ability to process the data set during the clustering process. Then, we define a new weighting method to further avoid the influence of outlier sample points in the weighted fuzzy C-medoids clustering process, to improve the robustness of our algorithm. We propose using the third version of Mueen\u2019s algorithm for similarity search (MASS 3) to measure the similarity between time series quickly and accurately, to further improve the clustering efficiency. Our new algorithm is compared with several other time series clustering algorithms, and the performance of the algorithm is evaluated experimentally on different types of time series examples. The experimental results show that our new method can speed up data processing and that its comprehensive performance under each clustering evaluation index is relatively good. With the rapid development of computer information technology, research on time series clustering is gradually rising in various fields such as finance, biology, medicine, meteorology, electricity, industry, and agriculture. The data is often imperfect. 
Due to human errors, machine failures, or unavoidable natural influences, noise points and outlier samples may appear in the data; this phenomenon also exists in time series data. To avoid the impact of abnormal fluctuations in time series data on clustering, some researchers try to smooth the data before clustering; Abraham et al. (2003) proposed fitting the series with splines before clustering. With the extensive research on time series clustering, many targeted methods have been proposed, such as model-based clustering, feature-based clustering, deep learning-based clustering, and fuzzy theory-based clustering. Among them, the Fuzzy C-Means (FCM) algorithm is the classical representative of fuzzy clustering. Due to the high dimension and complexity of time series data, there will inevitably be some noise sample points in the dataset, which affects the updating of fuzzy clustering centers. To solve this problem, a new clustering algorithm was derived from FCM, called Fuzzy C-medoids (FCMdd). Among the edit distances, the TWED algorithm satisfies the triangle inequality, can deal with time series of different sampling rates, and is also an excellent time series similarity measure, but its time complexity is the same as that of DTW. Among these similarity measures, DTW and TWED are popular, but their high time complexity leads to low clustering efficiency. This paper therefore chooses the MASS 3 algorithm. For the Adjusted Rand Index (ARI), negative scores indicate poor results, and a score equal to 1 indicates that the clustering results are completely consistent with the true labels. 
Here TP represents the number of true positive sample pairs (the clustering results group similar samples into the same cluster), TN represents the number of true negative sample pairs (the clustering results group dissimilar samples into different clusters), FP represents the number of false positive sample pairs (the clustering results group dissimilar samples into the same cluster), and FN represents the number of false negative sample pairs (the clustering results group similar samples into different clusters). The ARI is obtained from these pair counts by normalizing the Rand index by its value expected under random labelings. The Fowlkes\u2013Mallows index (FMI) is defined from the same pair counts. The minimum score of the Fowlkes\u2013Mallows index is 0, indicating that the clustering results are unrelated to the actual class labels; the highest score is 1, indicating that all data have been correctly classified. The normalized mutual information (NMI) is computed between the true class labels and the labels Y obtained by a clustering algorithm. The NMI value ranges between 0 and 1, and the optimal value is 1. To confirm that the PS-WFCMdd method is effective on common low-dimensional datasets, in the first part of this subsection we compare the performance of PS-WFCMdd and six other clustering methods on 10 low-dimensional datasets. By analyzing the results, it can be concluded that our algorithm obtains relatively excellent performance under the three evaluation indicators when faced with ordinary low-dimensional time series data sets. The improved algorithm based on K-means also achieved good results on some indicators, indicating that its performance should not be underestimated. Although the PS-WFCMdd-sD algorithm achieves good performance under the FMI evaluation index, its overall performance is still inferior to that of the PS-WFCMdd algorithm. 
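For concreteness, the pair-counting Adjusted Rand Index used as an evaluation measure can be computed from the contingency table of two labelings (a standard formulation, not code from the paper):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """ARI via pair counting on the contingency table of the two labelings."""
    n = len(labels_true)
    cells = Counter(zip(labels_true, labels_pred))
    rows = Counter(labels_true)
    cols = Counter(labels_pred)
    index = sum(comb(c, 2) for c in cells.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:      # degenerate labelings
        return 1.0
    return (index - expected) / (max_index - expected)
```

Because the index is chance-corrected, a clustering no better than random labeling scores around zero, and anti-correlated labelings score negative.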
This shows that the MASS 3 measure we used performs well on low-dimensional data in terms of clustering effect, while the time complexity of the PS-WFCMdd-sD algorithm is high; we analyze this further below. We also confirmed that the PS-WFCMdd method is effective on large-scale datasets. From the above analysis, it can be concluded that our algorithm can achieve excellent performance under the three evaluation indicators when faced with large-scale time series datasets. On some datasets, the FCM algorithm and the K-shape algorithm have also achieved good results, indicating that these two algorithms have certain advantages in processing large-scale time series data. The PS-WFCMdd-sD algorithm has achieved good performance under the three evaluation indicators, but its overall performance is still inferior to that of the PS-WFCMdd algorithm. To explain the robustness of the PS-WFCMdd algorithm more intuitively from the data features, the trends of all experimental data and the cluster center results obtained by the PS-WFCMdd algorithm were examined. We then evaluate the data from the trend in the time series dynamic analysis. From the display of the source data sets alone, it can be seen that BeetleFly, Car, Fish, Herring, and ChlorineConcentration all exhibit periodic changes, among which BeetleFly has a larger time series amplitude. In the experimental results on Fish and Herring, PS-WFCMdd performs moderately on the three evaluation indicators, while K-means-sD obtains better results. In the experimental results on Car and ChlorineConcentration, the advantage of PS-WFCMdd is small. PS-WFCMdd performs well in the experimental results on BeetleFly. This shows that, when faced with data with obvious periodicity and a stable trend, PS-WFCMdd has little advantage, because there are few noise and abnormal samples in such data. 
The remaining 13 datasets contain a large number of irregular dynamic time series, which inevitably contain a lot of noise and edge data. On these data, the performance of PS-WFCMdd is relatively good. However, Housetwenty and Wafer contain abrupt frequency mutations, and this type of data often poses more difficult modeling problems; PS-WFCMdd performs best among all comparison algorithms on them, but its advantage is not outstanding, and it does not provide a practical solution for time series with frequency mutations. Based on the above experimental analysis, using P-splines to fit the time series data and taking the fitted data as the input of clustering can effectively improve the final clustering effect. Under the influence of the new weights, the PS-WFCMdd algorithm can still obtain high clustering-effectiveness index scores in the presence of noise and outlier samples. However, our algorithm also has some limitations. For example, compared with most algorithms, PS-WFCMdd needs more smoothing parameters to be tuned, so the algorithm is not very concise. When faced with time series with obvious periodicity and stable trends, or with irregularly changing data with mutations, our algorithm performs only moderately and needs further optimization and improvement. In the complexity analysis, N represents the number of samples, C represents the number of clusters, and K represents the number of iterations. The time complexity of PS-WFCMdd is the same as that of K-shape and is second only to the FCM and PS-K-means algorithms. The time complexity of the remaining clustering algorithms is higher. Because of the difference in time complexity between soft-DTW and MASS 3, the running times of PS-WFCMdd-sD and PS-WFCMdd differ by an order of magnitude. 
Especially in the face of large time series, the advantages of the PS-WFCMdd algorithm are more obvious. The time complexities of each comparison algorithm and of the PS-WFCMdd algorithm were compared in this experiment. This paper proposes a clustering method called PS-WFCMdd, suitable for time series and characterized by being fast, parsimonious, and noise-resistant. The idea behind our proposal is simple and effective: it mitigates the effect of noisy points and outlier samples on fuzzy clustering. We recommend fitting each series with a P-spline smoother before clustering and using the fitted data as input to the clustering analysis. We also define a new weighting method to avoid the impact of outlier sample points on the medoid update; the new weight allocation makes the selected medoids more appropriate. In the experiments, compared with soft-DTW, MASS 3 has a clear advantage in our new weighted fuzzy C-medoids method: MASS 3 can measure the similarity between time series data and medoids more accurately and quickly. In addition, the performance of our weighted fuzzy C-medoids method based on P-splines has been analyzed through three clustering evaluation indexes. The code for PS-WFCMdd is available at https://github.com/houqinchen/PS-WFCMdd (accessed on 1 February 2022). In future work, we will try to study a better cluster initialization method to reduce the number of iterations, and to further improve the distance measurement criterion in order to reduce the time complexity and improve the comprehensive performance of the algorithm. There are still many open problems to be solved. The characteristics of each time series dataset are different; to obtain higher clustering accuracy, it is necessary to analyze the characteristics of the data more carefully, design a corresponding clustering method, and select a more appropriate similarity measure criterion. 
These considerations can be gradually realized in future research. Finally, the code for PS-WFCMdd is available at https://github.com/houqinchen/PS-WFCMdd."}
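The P-spline pre-smoothing step recommended above can be illustrated with its identity-basis special case, the Whittaker smoother (a minimal sketch under our own parameter choices, not the paper's implementation):

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Penalized least squares: minimize ||y - z||^2 + lam * ||D_d z||^2,
    where D_d is the d-th order difference operator. This is the
    identity-basis special case of P-spline smoothing."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)    # (n - d, n) difference matrix
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
```

With a second-order penalty, straight-line trends pass through unchanged while high-frequency noise is strongly attenuated, which is precisely the property that protects the medoid update from noisy samples.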
+{"text": "To detect changes in biological processes, samples are often studied at several time points. We examined expression data measured at different developmental stages, or more broadly, historical data. Hence, the main assumption of our proposed methodology was the independence between the examined samples over time. In addition, however, the examinations were clustered at each time point by measuring littermates from relatively few mother mice at each developmental stage. As each examination was lethal, we had an independent data structure over the entire history, but a dependent data structure at a particular time point. Over the course of these historical data, we wanted to identify abrupt changes in the parameter of interest - change points. In this study, we demonstrated the application of generalized hypothesis testing using a linear mixed effects model as a possible method to detect change points. The coefficients from the linear mixed model were used in multiple contrast tests, and the effect estimates were visualized with their respective simultaneous confidence intervals. The latter were used to determine the change point(s). In small simulation studies, we modelled different courses with abrupt changes and compared the influence of different contrast matrices. We found two contrasts, each capable of answering a different research question in change point detection: the Sequen contrast to detect individual change points and the McDermott contrast to find change points due to overall progression. We provide R code for direct use, together with worked examples. The applicability of those tests to real experimental data was shown with in-vivo data from a preclinical study. Simultaneous confidence intervals estimated by multiple contrast tests using the model fit from a linear mixed model were capable of determining change points in clustered expression data. 
The confidence intervals directly delivered interpretable effect estimates representing the strength of the potential change point. Hence, scientists can define biologically relevant thresholds of effect strength depending on their research question. We found two rarely used contrasts best suited for the detection of a possible change point: the Sequen and McDermott contrasts. The online version contains supplementary material available at 10.1186/s12864-022-08680-9. Independent observations over time are counterintuitive. Examining samples at different time points, one would assume a dependent data structure between those. An ongoing aim of scientists is a better understanding of the underlying fundamental mechanisms that control organisms\u2019 development. Scientists have investigated many genes, transcripts, proteins, etc. and their corresponding roles, and have introduced models connecting these networks. In our work, the observations between different time points were independent as the examination was lethal. The samples were measured at defined stages during gestation and later life and were hence considered, in a broader sense, historical data. For reasons of reproducibility, more than one sample was measured at each time point, and the measured parameter was gene expression. At each developmental stage, littermates and non-littermates were examined. Hence, we had a data setting with independent developmental stages but a dependent and independent data structure at each stage. The described setting is common for development studies in small mammals. Therefore, we want to present a novel methodology to find abrupt changes - so-called change points - in clustered historical gene expression data. Two methodological approaches could be identified: a change point detection or a dose-response analysis. However, both ignore important aspects of our research question. 
A change point analysis assumes that the same subject is measured repeatedly over time and that the data would therefore be dependent over time. Due to the lethal examination of the mice, repeated measurement over time is not given in our data structure, though. Therefore, change point detection algorithms assuming dependent points in time cannot be applied. Classical change point detection has considered independent observations, but it does not account for the clustered structure of our data. The second methodological approach would be to analyze the developmental expression data with a dose-response analysis. In this setting, different increasing doses would be administered and the goal of the analysis would be to find the dose at which the (gene expression) response changes relevantly. As the measurement at each dose can be lethal, the observations are independent. In the dose-response setting, multiple contrast tests are widely used. Moreover, not only the position of the change point was of interest but, for reasons of the underlying biological research question, also the corresponding effect size. In addition, we did not want to simply report the mean difference or the median difference, but also to adjust the effect of the change point for possible confounders, which is not possible with classical machine learning methods for change point detection. In our view, the significance is not as important as the relevance. Therefore, the parameterization of the model matters: in case of effect parameterization, the \u03b2-coefficients are dependent on and compared to the intercept. In case of mean parameterization, the intercept is set to zero and the calculated \u03b2-coefficients represent the mean of the corresponding variable. As we wanted to calculate the adjusted mean value for possible confounder effects for every time point, we decided to use mean parameterization. A linear mixed effect model with mean parameterization also allowed inclusion of the mix of dependent and independent data, while leaving the focus on the predictor of interest. 
Generalized hypothesis testing offered the possibility to include multiple contrast scenarios. To our knowledge, this combination of methods has not been used to detect change points with an interpretable effect estimate. We tested the applicability of three established types of contrast matrices for our specific biological data setting. With two of those, we were able to obtain confounder-adjusted effect estimates to detect change points. The effect at each potential change point could easily be interpreted by the non-expert user. Finally, we successfully applied the proposed method to in-vivo data presented in Kirschner et al. (2022). Therefore, we propose a novel application workflow to analyze historical data and return estimands for detected change points. In our case, we defined historical data as data consisting of an independent structure between time points and a mixture of dependence and independence at each time point. We applied generalized hypothesis testing using a linear mixed effect model as a possible change point detection method. We selected three potential contrast matrices for generalized hypothesis testing. When using a linear regression model, one can decide between effect parameterization and mean parameterization. In case of effect parameterization, one fits a model where the intercept is determined during the fitting process and all \u03b2-coefficients are estimated relative to it. In the following, we present a combination of model fitting and multiple contrast testing for the detection of change points in data which consisted of independent and dependent data points. However, dependence was not between data points at different but at the same time points, and the observations were nested in each time point. As an example, we used a developmental data set. The respective pups were nested through their mothers. At each time point, there were three new mother animals. Measurement of the expression levels is lethal for both mother mice and their offspring. 
The aim was to find change points in historical gene expression data. In more detail, we wanted to find time points where the expression level of a gene majorly changed compared to the expression levels measured before, incorporating the underlying data characteristics. We tested our method on four biological sets of historical gene expression data and eleven simulated data sets. The simulation settings were designed by basic research scientists to ensure applicability. A flowchart of the main steps of the applied methods can be found in Supplementary Section 6 Fig. 17, Additional file 1. We present a biological data set as a motivational example. In case of gene expression across developmental stages, e.g. in mice, the collection time points must be as few as possible but as many as necessary. We assessed glucose transporter 1 (Glut1) and carbonic anhydrase 9 (Car9) expression by probe-based qPCR against a standard curve. The expression levels are displayed as Glut1 or Car9 molecules per 10^6 \u03b2Actin (Actb) molecules. We used log-transformed expression values for our analysis to meet the normality assumptions of the linear mixed model. We provide more information on the biological data in the Supplementary Section 2.1, Additional file 1. Two expression courses, high-to-low (liver Glut1) and low-to-high (kidney Car9), were used to visualize our approach. The expression data set is an extraction of a so-far unpublished study. We used the biological data as received to illustrate the proposed method. It is on the researcher to decide which developmental stages should be included depending on the research question. In detail, our example data consists of two genes in two mouse organs. We analyzed mouse livers and kidneys from thirteen developmental stages (embryonic to adult). The researchers in the study defined four hypothetical historical gene expression data courses, representing biologically realistic and interesting scenarios. 
We simulated data with respect to the described data structure; the code is available online (https://github.com/msieg08/clustered_data_changepoint_detection) and as code chunks in the Supplementary Section 4, Additional file 1. We would not expect to detect change points in the historical data in scenarios a) and b). Therefore, both scenarios are our control or null models. However, for scenarios c) and d), we would expect detection of at least one change point. In addition, the confidence intervals should also provide more details on our findings. For each of the defined historical data scenarios, gene expression data for 12 distinct time points were simulated. As our biological example data had 13 developmental stages, we removed the adult stage to generate congruent data sets. The number 12 also has good properties for the generation of the time points. For simulation of the expression data, we used the statistical programming language R 3.6 and the R package simstudy. We did not run different simulations with different sample sizes because the properties of the estimates from a linear mixed model in a multiple contrast test are already well known. A general tutorial on linear mixed models using contrasts in R and the theoretical background can be found in Schad et al. (2020) and Bretz et al. To determine change points in our specific time series data, we first fitted a simple linear mixed effects model with mean parametrization. The expression data for one gene was set as the response. The different measurement time points were set as the fixed effects. The random effects part of the model were the mothers of the mouse pups. Therefore, the litter effect was accounted for and possible overdispersion was reduced. 
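The clustered structure described above is easy to mimic. Below is a minimal Python sketch (the paper itself uses R 3.6 with simstudy; the function name, group sizes, and standard deviations here are illustrative assumptions, not the paper's settings) that simulates one change point scenario with independent stages and litter-dependent pups:

```python
import random

def simulate_stage(mean_log_expr, n_mothers=3, pups_per_mother=4,
                   sd_mother=0.3, sd_pup=0.2):
    """Simulate clustered expression data for one developmental stage.

    Each stage has new, independent mothers (the examination is lethal),
    but pups of the same mother share a random litter effect, giving
    dependence within a stage. Values are on the log scale; the standard
    deviations are illustrative choices only.
    """
    data = []
    for m in range(n_mothers):
        litter_effect = random.gauss(0.0, sd_mother)
        for _ in range(pups_per_mother):
            data.append((m, mean_log_expr + litter_effect +
                         random.gauss(0.0, sd_pup)))
    return data

# One change point scenario: constant level, then a jump after stage 6.
random.seed(1)
stage_means = [1.0] * 6 + [2.5] * 6  # 12 stages, change point at stage 7
dataset = [simulate_stage(mu) for mu in stage_means]
```

Each stage contributes 3 mothers with 4 pups, i.e. 12 nested observations, matching the mixture of within-stage dependence and between-stage independence described in the text.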
Equation (1) specifies the model y = X\u03b2 + Zu + \u03b5, where y is the 150\u00d71 vector of normally distributed expression values, X is the 150\u00d75 design matrix for the fixed effects considering five time points, \u03b2 is the 5\u00d71 vector of the fixed effects coefficients (due to mean parametrization, the mean of each of the five time points), Z is the 150\u00d715 design matrix for the random effects of the fifteen mothers with a constant intercept, and u is the 15\u00d71 vector of the random effects coefficients, i.e. the effect of the mother on the expression, with u \u223c N(0, \u03c3u\u00b2). As a result, the \u03b2-coefficients represented the estimated mean values of the respective time points without the random effects variance introduced by the mothers. Using this approach, even more complex models with more confounders would be possible. In this study, we have concentrated on a simple model to illustrate the general framework. The effects of the time points could be adjusted as in any other multiple linear regression analysis. For further clarification, we provide a very short R code chunk as an example with the Changepoint contrast. The R terms can be matched to the formula: the response corresponds to y, the variable timepoint to the fixed effect X\u03b2, and the term (1 | mother) to the random effects Zu. The 1 in (1 | mother) indicates a constant intercept for all mothers. Mean parameterization was achieved by removing the intercept, placing 0 at the beginning of the lmer formula. More complex code chunks are available in the Supplementary Section 4, Additional file 1 (https://github.com/msieg08/clustered_data_changepoint_detection). Therefore, we used the lme4 package in R to fit the model. We constructed the contrast matrices in our study as follows: each row of a contrast matrix consisted of one possible single change point scenario with respect to the selected construction method. Hence, the contrast matrix represents all possible single change point scenarios for the respective time series and selected method. 
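The three contrast families discussed in the text can be written down explicitly. The following Python sketch builds them for balanced groups, following the standard constructions (as implemented, e.g., in R's multcomp::contrMat); the helper names are ours, and every row of each matrix sums to zero, as a contrast must:

```python
def sequen(k):
    """Sequential contrasts: each time point vs. its predecessor."""
    C = []
    for i in range(1, k):
        row = [0.0] * k
        row[i - 1], row[i] = -1.0, 1.0
        C.append(row)
    return C

def changepoint(k):
    """Changepoint contrasts: mean of the first i points vs. mean of the rest."""
    return [[-1.0 / i] * i + [1.0 / (k - i)] * (k - i) for i in range(1, k)]

def mcdermott(k):
    """McDermott contrasts: mean of the first i points vs. point i+1."""
    return [[-1.0 / i] * i + [1.0] + [0.0] * (k - i - 1) for i in range(1, k)]
```

For k time points, each family yields a (k\u22121)\u00d7k matrix, i.e. the "number of time points minus one" comparisons mentioned later in the text; multiplying a row by the vector of estimated time-point means gives the effect estimate for that change point scenario.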
Lastly, the McDermott contrast is a mixture between the Changepoint and the Sequen contrasts. Subplot a) shows the respective data with time points on the x-axis and the measured expression values on the y-axis, which we assume to have at least a log-normal distribution. Each dot in the plot represents one measured value. The colors represent the data dependencies, meaning that dots with the same color belong to the same cluster, e.g. pups from the same mother. Subplots b) to d) show the estimated mean difference including the 95%-confidence interval (x-axis) for each respective change point scenario (y-axis). For Glut1 gene expression in the liver, an overlap with the confidence interval of the preceding contrast occurred. When there was no change point, the 95% confidence intervals for each contrast were either not significant or they overlapped with the confidence interval of the preceding contrast. The respective patterns can be observed in a more or less defined way in all simulated data for the Sequen and McDermott contrasts. The Changepoint contrast cannot be recommended for the detection of a change point in any simulation setting. Overall, we suggest using McDermott\u2019s method to determine if there is a significant change within the time frame, while Sequen could be applied to determine the specific change point(s) and their direction(s). In a classical longitudinal design, each patient is examined at each inter-dependent time point. In this study, we examined a different, counterintuitive setting: the time points are independent as the intervention on the mice is lethal, and the observations (gene expression in the litters\u2019 organs) at each time point are correlated, resulting in a mixture of dependent and independent data structures at one time point. We addressed the research question of finding change points in this experimental setting by using multiple contrast tests and by visualizing the change point with simultaneous confidence intervals. 
We investigated three contrasts which differ in the research questions they can answer: should a single change point be found, or should the overall course rather be pictured? The Sequen contrast answers the first, the McDermott the second. The Changepoint contrast gives a clearly biased visualization and is unable to correctly determine change points in our setting. To summarize, we used generalized hypothesis testing with linear mixed effect models and various contrast matrices to detect change points in historical data of gene expression levels with independent and dependent data points. A connected question is how long such a time line could be while still allowing differences to be detected. As generalized hypothesis testing was applied, it automatically adjusted locally for multiple testing. Therefore, for each model, the respective significance level was met. The number of time points minus one comparisons was evaluated for all chosen contrasting methods. The higher the number of time points, the more contrasts were tested, leading to a stricter change point selection but also higher run times. In our methods section, we only give an approximation of the theoretical maximal length of historical data because the main aim of our work was to identify the most informative contrast test for detecting a given change point pattern. Surprisingly, the Sequen and McDermott contrasts performed best, although neither was intuitively our first choice. In future work, the limits on the maximal number of time points and multiplicity adjustment approaches will be investigated. We have discussed the possible length of historical data in terms of significance. Thus, if a confidence interval is significant, we would assume a change point. However, in the biological example data, we could also define a relevance threshold, ranging from (just barely) significant to biologically relevant, in our decision making. 
The proper choice of estimands, i.e., effect estimators, is embedded in a more general discussion of reproducibility. To date, the discussion of estimands has focused on drug development and clinical trials, as Akacha et al. (2017) note. Many multiple contrast tests are well described in the literature, as is their application in statistical inference. If we used a simple linear model without taking the nested litter/mother effects into account, the linear model would cause some type of overdispersion. In addition, our model would not reflect our true data structure. The results would include a high number of false positive (non-existing) change points, especially if we decided based only on significance. As a drawback, the lme4 package sometimes has convergence or model fitting problems with small sample sizes. In some cases, the lmer function displays an \u201cis singular\u201d warning, meaning that the estimated variance-covariance matrix has some entries of zero. Therefore, the matrix does not have full rank. In these cases, it is possible that some standard errors are underestimated, and results should be considered with care. We presented four in-vivo expression data sets of developmental stages in mice. We decided to present different biological courses to provide evidence for the method's practical application: two of the data sets did not show any abrupt changes; one first showed a steady increase over three time points, stayed at that level for some time and then increased again; the fourth data set showed no changes apart from two time points with a drastic drop in expression. The respective R code can be found in the Supplementary Section 4, Additional file 1. In summary, we showed that multiple contrast tests can be used for change point detection in historical data. Our application is special in the sense that the individual time points are independent of each other. Nevertheless, there is a dependent data structure within the individual developmental stages. 
We showed that generalized hypothesis testing with linear mixed-effect models can be used to detect change points in clustered expression data. We delivered an approximation of the maximal number of usable time points in the historical data and allow the researcher to define relevance thresholds to guide decision making via the effect estimators. Our algorithm is easily applicable in R. We tested three different contrast matrices and found Sequen to be the best for detecting a change point at a concrete time point in the course. Confidence intervals delivered a good visualization of the position of the change point as well as an interpretable estimator of the strength and direction of the change. To determine if there is an overall significant change within the time frame, we suggest using McDermott\u2019s method, as it is good at detecting changes throughout the historical data course. Both methods can also be used in sequence to verify results from historical data: first McDermott for a general overview and then Sequen for a selective examination of (parts of) the course. Additional file 1 Supplementary Material."}
+{"text": "The software corrects for autofluorescence, the optical density\u2019s non-linear dependence on the number of cells, and the effects of the media. We use omniplate to measure the Monod relationship for the growth of budding yeast in raffinose, showing that raffinose is a convenient carbon source for controlling growth rates. Using fluorescent tagging, we study yeast\u2019s glucose transport. Our results are consistent with the regulation of the hexose transporter (HXT) genes being approximately bipartite: the medium and high affinity transporters are predominately regulated by both the high affinity glucose sensor Snf3 and the kinase complex SNF1 via the repressors Mth1, Mig1, and Mig2; the low affinity transporters are predominately regulated by the low affinity sensor Rgt2 via the co-repressor Std1. We thus demonstrate that omniplate is a powerful tool for exploiting the advantages offered by time-series data in revealing biological regulation. Responding to change is a fundamental property of life, making time-series data invaluable in biology. For microbes, plate readers are a popular, convenient means to measure growth and also gene expression using fluorescent reporters. Nevertheless, the difficulties of analysing the resulting data can be a bottleneck, particularly when combining measurements from different wells and plates. Here we present omniplate. The software corrects measurements of optical density to become linear in the number of cells and corrects measurements of fluorescence for autofluorescence. It estimates growth rates and fluorescence per cell as continuous functions of time and enables tens of plate-reader experiments to be analysed together. Data can be exported in text files in a format immediately suitable for public repositories. 
Plate readers are a convenient way to study cells; omniplate provides an equally convenient yet powerful way to analyse the resulting data. Time series of growth and of gene expression via fluorescent reporters are rich ways to characterise the behaviours of cells. With plate readers, it is straightforward to measure 96 independent time series in a single experiment, with readings taken every 10 minutes and each time series lasting tens of hours. Analysing such data can become challenging, particularly if multiple plate-reader experiments are required to characterise a phenomenon, which then should be analysed simultaneously. Taking advantage of existing packages in Python, we have written code that automates this analysis yet still allows users to develop custom routines. Microbes live in dynamic environments, and time-series measurements are therefore central to understanding their behaviour. Although using growth curves to evaluate microbial behaviour is long established, researchers increasingly use fluorescent reporters to follow gene expression as well. Nevertheless, averaging over different wells, estimating growth rates as a function of time, correcting for autofluorescence, determining the fluorescence per cell, and particularly combining data from multiple experiments can all become bottlenecks because of the amount of data generated. For example, with measurements taken every 10 minutes, experiments lasting say 24 hours, and 96 wells, one plate-reader experiment will generate almost 14,000 data points, and multiple experiments are typically required to characterise a biological system. Here we present software that performs a comprehensive analysis of plate-reader data via established methods and data structures using the free programming language Python. We illustrate the power of our approach both by finding the Monod curve for budding yeast growing in the sugar raffinose and by studying glucose transport in yeast, where cells need at least one of seven transporters to grow aerobically on glucose. 
The omniplate Python module takes as input the data file produced by a plate reader, such as those made by Tecan, and a \u2018contents\u2019 file, which is a Microsoft Excel file with an 8 \u00d7 12 table that gives the strains and extracellular media in each well of a 96-well plate. Using the Python module pandas, omniplate creates three data frames, each a two dimensional, labelled data structure similar to a spreadsheet or to a table in SQL. The first data frame contains the raw time-series data, the second contains processed time-series data, and the third contains summary statistics calculated from the processed data, such as maximal growth rates and lag times. omniplate is built on the core packages both of scientific Python\u2014numpy, scipy, and matplotlib\u2014and of Python\u2019s tools for data science\u2014pandas and seaborn. To ensure reproducibility, omniplate creates a log of the methods called by a particular instance, which is exported as plain text. Further, it allows all figures to be saved into a single PDF file. The user has great flexibility to customise analysis because, with omniplate, multiple tasks can be performed automatically. With plotplate, the user generates a plot of either the OD or fluorescence of all the wells in an 8 \u00d7 12 format. For the OD to be proportional to the number of cells\u2014the Beer-Lambert law\u2014light passed through the cells should only be absorbed, but most microbes scatter light, making the measured OD a non-linear function of the number of cells. One way to correct for this failure is to calibrate: using a dilution series of a dense culture and plotting the dilution factor on the x-axis and the expected OD on the y-axis, we generate a calibration curve, which we fit using a Gaussian process to interpolate to ODs that have not been measured. We use this calibration curve to correct all measured ODs, but the units of the corrected OD are determined by the OD of the initial dense culture we used to generate the dilution series. Rather than keep this arbitrary value, we rescale so that the measured OD and corrected OD are the same at an OD of 0.3. 
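The calibration-and-rescaling step can be sketched as follows. omniplate fits the dilution series with a Gaussian process; this hypothetical make_od_corrector (the name is ours, not omniplate's API) uses plain linear interpolation instead, purely to illustrate the logic of mapping measured OD to a quantity proportional to cell number and then rescaling so corrected and measured OD agree at 0.3:

```python
from bisect import bisect_left

def make_od_corrector(measured_od, dilution_factor):
    """Build a function mapping measured OD to an OD linear in cell number.

    measured_od and dilution_factor come from a dilution series of a dense
    culture; the 'expected' OD is proportional to the dilution factor.
    A Gaussian process is used in omniplate; linear interpolation between
    calibration points stands in for it here.
    """
    pairs = sorted(zip(measured_od, dilution_factor))
    xs = [p[0] for p in pairs]          # measured ODs
    ys = [p[1] for p in pairs]          # proportional to cell number

    def interp(x):
        if x <= xs[0]:
            return ys[0] * x / xs[0]    # linear through the origin below range
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_left(xs, x)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    # Rescale so that corrected OD equals measured OD at OD = 0.3.
    scale = 0.3 / interp(0.3)
    return lambda x: scale * interp(x)
```

The rescaling removes the arbitrary units set by the initial dense culture, exactly as described in the text.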
This rescaling means that the corrected OD varies linearly with cell numbers for sufficiently small ODs, as it should. In omniplate, correctOD fits a dilution series and corrects all measured ODs. We include data for a dilution series for haploid budding yeast in glucose, but the user can provide another via a text file comprising two columns: one of the measured ODs and the other of the corresponding dilution factors. With correctmedia, the OD and fluorescence of wells containing medium only are subtracted from the OD and fluorescence of wells containing medium and microbes; we use a Savitzky-Golay filter to smooth these measurements as a function of time. Two methods correct for autofluorescence, one specialised to Green Fluorescent Protein (GFP) and one more general, if potentially less accurate. Both require multiple technical replicates (the same strain in the same conditions in more than one well) and wells with strains not carrying any fluorescent markers. To unambiguously detect any outliers, we use five wells for the tagged strain. For the untagged strain, we use seven wells because data from this strain corrects all fluorescent strains and must be reliable. Both methods are called through correctauto. When measuring GFP, we perform the correction with linear unmixing, which estimates the autofluorescence using measurements from the GFP-tagged strains themselves. We measure ra, the ratio of emissions at the two wavelengths for the untagged strain, as a function of OD. We interpolate the value of this ratio to the OD of tagged strains at each time point and so perform the correction. Here rg is the measured ratio of GFP\u2019s emission at 585 nm to that at 525 nm, f525 is the emission of a tagged strain at the wavelength for GFP, and f585 is its emission at the higher wavelength of 585 nm. 
The error at each time point is estimated as the variance of the corrected fluorescence of the tagged strain over its replicates. In practice, there is some GFP emission at the higher wavelength, but the ratio of GFP emissions at the two wavelengths, rg, can be measured and included in the correction via fcorr = (ra f525 \u2212 f585)/(ra \u2212 rg). We estimate the fluorescence per cell by dividing the corrected fluorescence for each replicate by that replicate\u2019s OD and taking the mean over the replicates. The error is given by the corresponding variance. As an alternative and for other fluorophores, such as mCherry, we estimate autofluorescence as the fluorescence of the untagged strain as a function of its OD. To perform the correction, we subtract this autofluorescence, interpolated to the OD of a tagged strain, from the tagged strain\u2019s fluorescence at each time point. Both methods check consistency by returning the corrected fluorescence of the untagged strain, which should fluctuate around zero. To estimate growth rates using data from multiple replicate wells, we use a Gaussian process, which makes only weak assumptions on the mathematical form of the growth curve and propagates the errors in measuring the OD to the errors in the inferred growth rates. We use a Matern covariance function by default, although other covariance functions can be chosen if preferred. The getstats method not only determines the specific growth rate as a function of time, but also its time derivative, and calculates statistics, such as the maximal growth rate, the local maximal growth rate (at a true maximum and not at the beginning or end of the experiment), and the lag time, as well as their errors, which are estimated using bootstrapping. The getstats method can also be applied to other data, such as the fluorescence per OD, to estimate its time derivative. Either data from multiple experiments can be imported and processed simultaneously or previously processed data can be loaded at once. 
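Solving the two-channel measurement model f525 = g + a and f585 = rg\u00b7g + ra\u00b7a for the GFP signal g gives the correction formula above. A minimal sketch (the function name is ours, not omniplate's API):

```python
def unmix_gfp(f525, f585, ra, rg):
    """Linear unmixing of GFP signal from autofluorescence (sketch).

    f525, f585: measured emissions of a tagged strain at 525 nm and 585 nm.
    ra: autofluorescence ratio (emission at 585 nm over 525 nm), measured
        from an untagged strain and interpolated to a matching OD.
    rg: ratio of GFP's own emission at 585 nm to that at 525 nm.
    Writing f525 = g + a and f585 = rg*g + ra*a and eliminating the
    autofluorescence a yields g = (ra*f525 - f585) / (ra - rg).
    """
    return (ra * f525 - f585) / (ra - rg)
```

Note that for an untagged strain (g = 0) the formula returns zero, which is exactly the consistency check the text describes.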
Data can be exported to text and JSON files or to Microsoft Excel spreadsheets. Even if the plate reader is programmed identically, different experiments with the same strain in the same condition typically have measurements at slightly different time points. Therefore, to average over experiments using addcommonvar, we determine a common time variable and interpolate measurements not made at this common time to the common time. Averaging is then performed for each common time point. To enable plotting, it is often useful to add columns to the data frames specifying the values of variables that the experimenter has systematically changed, such as the concentration of a nutrient or antibiotic. These numerical values can be automatically extracted using addnumericcolumn. All strains were derived from BY4741. Although strains were kept in plates of XY media with 2% glucose, we prepared pre-cultures by inoculating single yeast colonies in SC medium with 200. To illustrate omniplate further, we studied the regulation of glucose transport in budding yeast, where cells need at least one of seven transporters to grow aerobically on glucose, and we used fluorescent reporters to follow the expression of the transporter genes. Although the transporters are known to have different affinities for glucose, two sensing subsystems control their expression. The first is the Snf3-Rgt2 subsystem, comprising a low affinity sensor for glucose\u2014Rgt2, a high affinity sensor\u2014Snf3, and the transcriptional regulator Rgt1, together with its two co-repressors Mth1 and Std1. If Rgt1 is bound by its co-repressors, it represses the HXT genes. The Snf3-Rgt2 subsystem negatively regulates another repressor gene, MIG2, which too represses HXT genes. The second subsystem is SNF1 kinase, a complex of three proteins known as AMP kinase in higher eukaryotes. SNF1 responds both to low intracellular glucose and to other stresses, acting via the repressor Mig1, and we analysed all these data with omniplate. We sought to understand how these four repressors\u2014Std1, Mth1, Mig1, and Mig2\u2014differentially regulate the HXTs, whose expression peaks at different glucose concentrations. 
Although the HXTs respond to extracellular glucose, we cannot straightforwardly measure its concentration in this many experiments, and instead we estimated the concentration from each culture\u2019s OD. Assuming that a fixed amount of glucose must be imported for a cell to replicate, we estimated the glucose concentration as a linearly decreasing function of the OD, g(t) = g0 (ODfinal \u2212 OD(t))/(ODfinal \u2212 OD(0)), so that the estimate equals the initial concentration, g0, at the minimal OD when the experiment begins and is zero when the experiment finishes because the OD has then plateaued. Although the estimate is crude, plotting the fluorescence per cell as a function of the estimated glucose concentration emphasises how the deleted genes affect the transporters\u2019 levels."}
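The linear glucose estimate takes only a few lines; a sketch under the stated assumptions (the function name is ours):

```python
def estimate_glucose(od, g0):
    """Estimate glucose concentration from OD (linear model sketch).

    Assumes a fixed amount of glucose is imported per cell replication, so
    glucose falls linearly with OD from g0 at the initial OD to zero at the
    final, plateaued OD. od is the time-ordered list of corrected ODs.
    """
    od0, odf = od[0], od[-1]
    return [g0 * (odf - x) / (odf - od0) for x in od]
```

By construction the estimate returns g0 at the first time point and 0 at the last, matching the boundary conditions in the text.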
+{"text": "The psychological measurement of college students is a hot issue in the field of educational management research. Based on time series analysis theory, this paper constructs a psychological measurement model for college students. This paper analyzes the psychometric behavior data, uses the time series analysis method for behavior prediction, and deeply mines the relevant component information of the psychometric data, thereby addressing the problems of weak correlation between the functions of the psychometric platform and the low data accuracy of the psychometric model. At the same time, taking the intervention target group and the intervention mode as the basic variables of the intervention classification system and combining these two dimensions, a two-dimensional classification framework for psychometric intervention is proposed, and four types of different psychometric intervention measures are applied. During the simulation process, a psychometric trajectory matrix is constructed, and a two-dimensional data extraction network is used to extract the psychometric pattern data of a certain period of time. The experimental results show that using the student mental state data as a label can obtain a low-coupling training set classification for psychometric effects of college students. With the development of the Internet and information technology, the pace of people's lives has accelerated, new ways of thinking and cognition have been provided, and psychological measurement has gradually been recognized by the public. Models about psychometric outcomes are used to explain related psychometric concepts. The model established by Yan achieves an F1 measure of 0.72 and can identify 75% of students with mental health problems. This paper mainly adopts the literature analysis method and the mathematical statistics research method and focuses on the following contents. 
The first step is to sort out the domestic and foreign literature on the current research status through the literature analysis method and determine the state of development of research in this field, which lays a theoretical foundation for the research problem. Secondly, according to the three-dimensional time series psychometric behavior classification model proposed in a doctoral dissertation, as well as a summary of the related research on psychometric behavior, combined with the characteristics of the psychometric platform in this paper, a psychometric behavior classification model is constructed. Third, based on the psychometric behavior data in the psychometric platform and from the perspective of a time series classification model, this study uses statistical knowledge to predict psychometric behavior. In order to further optimize the recognition effect of the model, this paper proposes a mental health problem recognition algorithm based on the time series analysis model. The online trajectory matrix is constructed, a two-dimensional convolutional neural network (2D-CNN) is used to extract the online mode of one day, a memory network is used to capture the time dependence between days, and the deep psychological measurement model is designed by combining the basic features and the online trajectory mode. Experiments show that precision reaches 0.71 and recall reaches 0.75. In the decomposition of a signal x(t), h1(t) is an eigenmode function component. In fact, the sifting process must be repeated until the conditions on the time series are satisfied, thereby producing the first time series component c1(t) = h1k(t), which represents the highest frequency component of x(t). 
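The sifting loop just described has a simple control flow: repeatedly subtract a local mean until the component criteria hold, store the component, and continue on the residual. The sketch below is schematic only; true empirical mode decomposition uses the mean of cubic-spline envelopes through the extrema, for which a centred moving average stands in here, and the function names are ours:

```python
def sift(x, n_components=2, max_iter=20):
    """Schematic empirical-mode-style decomposition (sketch only).

    Repeatedly subtracts a local mean from the current residual to obtain
    h(t), stores the resulting component c_i(t), and continues on the
    residual r_i(t) = r_{i-1}(t) - c_i(t), so the components plus the final
    residual always reconstruct the input exactly.
    """
    def local_mean(y, w=5):
        half = w // 2
        return [sum(y[max(0, i - half):i + half + 1]) /
                len(y[max(0, i - half):i + half + 1]) for i in range(len(y))]

    components, residual = [], list(x)
    for _ in range(n_components):
        h = list(residual)
        for _ in range(max_iter):          # sifting iterations
            m = local_mean(h)
            h = [hi - mi for hi, mi in zip(h, m)]
        components.append(h)               # c_i(t), highest frequency first
        residual = [ri - ci for ri, ci in zip(residual, h)]
    return components, residual
```

Because each component is subtracted from the running residual, the identity x(t) = \u03a3 c_i(t) + r_n(t) holds by construction, which is the decomposition described in the surrounding text.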
In this process, the removal of the superimposed wave makes the instantaneous frequency meaningful; at the same time, since the adjacent waveforms are also smoothed, meaningful amplitude fluctuations may be removed. The conditions for stopping this process are that the number of extrema and the number of zero crossings differ by at most one and that the upper and lower envelopes are locally symmetrical about the time axis. After obtaining c1(t), we set r1(t) = x(t) \u2212 c1(t) and repeat until the stopping condition is satisfied; finally, the original signal is decomposed into n time series components and a residual rn. The decomposition process is empirical, adaptive, and based on the local features of digital signals. In the process of time series data preprocessing, for consumption data, since consumption records, student information, and store information are stored in three tables, we join them. For the web log data, due to the large amount of noise in the data, it is necessary to remove the noise data according to the request URL; at the same time, because there are too many types of URLs in web logs, we unified them into seven categories. For grade data, there are a large number of missing values, and it is necessary to find a calculation formula according to the law of the existing values and fill in the missing values. Taking the random correlation matrix as the null hypothesis, we compared the statistical characteristics of the correlation matrix of college students' psychometric data against it. The essence of time series processing of college students' psychometric data is that the energy of signals containing noise is generally concentrated in the low-frequency components, and the higher the frequency, the less the energy. 
Therefore, there must be a time-series component such that, in the components after it, the signal energy dominates, while in the components before it, the noise dominates; the purpose is to find this component. The process of decomposing the time series into several components from high frequency to low frequency reflects the adaptivity of the method. In gradient boosting, each later base classifier is built to reduce the loss produced by the former: it takes the loss between the current prediction and the actual value as its target, fitting the residual of the previous classifiers in the direction of the negative gradient; multiple weak classifiers are then accumulated to obtain a strong classifier. According to the research results on psychometric behavior classification, the classification model is suitable for analyzing related psychometric behaviors on online platforms. Behavior within the platform is further segmented using a network behavior time-series classification model. As mentioned above, a three-dimensional psychometric behavior time-series model was constructed, which can classify psychometric behaviors comprehensively and specifically from multiple aspects and dimensions. In the sparse coding illustration, the first two rows represent two dictionaries, and each picture is regarded as the feature base vector representation of one frame; that is, each dictionary is assumed to contain five feature base vectors. The input data treats each image as one frame of feature vector, so three frames of data are intercepted from the input; the coefficients shown are those obtained by a traditional sparse coding algorithm. 
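The boosting scheme described above (each weak learner fits the residual left by its predecessors; the accumulated sum forms the strong learner) can be sketched with regression stumps under squared loss, where the residual is exactly the negative gradient. The stump learner, learning rate, and toy data are illustrative assumptions, not the paper's configuration:

```python
def best_stump(xs, residuals):
    """Fit a depth-1 regression tree (stump) to the current residuals:
    pick the threshold minimizing the squared error of the two side means."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def boost(xs, ys, n_rounds=20, lr=0.5):
    """Each new weak learner fits the residual (negative gradient of the
    squared loss) left by the ensemble so far; the sum is the strong learner."""
    learners = []
    pred = [0.0] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = best_stump(xs, residuals)
        learners.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in learners)

xs = [float(i) for i in range(10)]
ys = [x * x for x in xs]          # toy regression target
model = boost(xs, ys)
mse = sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Each round provably shrinks the training squared error (the stump's scaled group means reduce the residual), which is the "reduce the loss of the former base classifier" behavior the text describes.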
Therefore, according to the network behavior model, the experiment classifies the various psychometric behaviors in the database along the three dimensions of structure (S), function (F), and mode (M) to reveal the psychometric characteristics of the platform. This article uses the time-series psychometric behavior model to organize and summarize the various network behaviors on the platform. However, for the model to better fit the classification of specific psychometric behaviors on the psychometric platform, the time-series model also needs to be adjusted according to the actual situation of the platform in which it is used. As can be seen from the figure, without considering the time order, when the input data are similar to a feature base vector pose in the dictionary, the result shown in the paper is obtained, but such a result causes interference and misjudgment. To solve this problem, a regularization term is needed that makes the coefficient vector reflect the temporal order of the input data vectors and suppresses time-series errors. This is equivalent to guaranteeing normal coefficient vector output when the temporal order of the input data vectors matches that expressed by the base feature vectors of the subdictionary, and suppressing blindly increasing coefficient values in the coefficient vector when the temporal orders do not match. The quality of the model cannot be judged intuitively and must be measured by evaluation indicators; here the values of D and N match the length and number of the time series of emotional physiological signals. 
N uncorrelated normalized time series of length T compose an N \u00d7 T data matrix A, and the correlation matrix of this data matrix is constructed. For the decomposed time series, removing the low-frequency components preserves the high-frequency components, removing the high-frequency components preserves the low-frequency components, removing both preserves the intermediate components, and removing the intermediate components preserves the high- and low-frequency components; these combinations respectively form high-pass, low-pass, band-pass, and band-stop filters. The advantage is that the nonlinearity and nonstationarity of the signal are preserved to the greatest extent, although signal energy may be lost in the process of discarding components, and which components to discard depends entirely on prior knowledge. To calculate the deviation between the college students' psychometric data and the prediction of random matrix theory, one must first assume a set of uncorrelated normalized time series of the same length. After unifying the grade data, we also observed missing values in the grade point column of the students' historical grade table. Since the quality score is the product of the grade point and the course credits, a missing grade point results in a missing quality score. By observing the existing grade points, we identified the calculation rule for grade points. For college students, the biggest academic pressure comes from failing a course: failing both the makeup examination and the retake will affect their graduation, and for the undergraduates of our school, failing a subject may cause them to repeat the grade. 
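The missing-value repair for the grade table can be sketched as below. The specific mapping from raw score to grade point is an assumed illustration (the paper says the rule was inferred from existing values but does not print it); only the relationship quality score = grade point × credits comes from the text:

```python
def fill_grade_point(score, grade_point=None):
    """Fill a missing grade point from the raw score.

    The mapping grade_point = score/10 - 5 for passing scores (>= 60),
    else 0, is an ASSUMED example rule, standing in for the rule the
    authors inferred from the existing values."""
    if grade_point is not None:
        return grade_point
    return round(score / 10 - 5, 1) if score >= 60 else 0.0

def quality_score(score, credits, grade_point=None):
    """Quality score = grade point x course credits; a missing grade point
    therefore also leaves the quality score missing until it is filled."""
    return fill_grade_point(score, grade_point) * credits
```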
Referring to the relevant regulations of the undergraduate student handbook, the college counts the failing grades of the undergraduates of each grade after the end of each semester. If a student's failed credits reach 25 points (including 25), the student enters the probationary stage in the next semester, that is, the status of repeating a grade. Students who complete the required credits during the probationary period may continue psychological measurement with the students of the next grade; those who fail to do so are forcibly withdrawn from the school. Therefore, failed credits are the best feature for reflecting academic pressure. Passing credits are counted by traversing the student's historical score table; if the credit acquisition method (QDFS) value is \u201cmakeup examination,\u201d the corresponding credits are counted as passing credits. The article obtains the students' mental state table; each record contains the student's student number, gender, department, grade, class, attention level, and update time, as shown in the text. The attention level has three grades: \u201clight\u201d indicates mild mental health problems, \u201cmedium\u201d indicates moderate mental health problems, and \u201csevere\u201d indicates severe mental health problems. In this paper, the target is a binary classification problem, distinguishing students with mental health problems from normal students. The subcoefficient matrix Cj should take the form of a positive diagonal matrix, that is, only the coefficients on the diagonal positions have values and all others are zero. If the expression order of the input matrix and the eigenvectors of the subdictionary are inconsistent in time, or the behaviors are mutually inverse, the optimal limit expression is that the coefficient values are all zero. Such coefficient expressions can completely distinguish and identify different classes of behavior. 
Therefore, if the expression order of the input matrix and the eigenvectors of the subdictionary are the same in time, the subcoefficient matrix has an optimal limit expression. For records whose requested URL is an IP address, we directly remove them. For the useless records generated by loaded web pages containing static resources, we exclude static resources based on their file extensions. The URL requested by a user usually contains the name of the resource, composed of a file name and a file extension, and the file type can be identified from the extension: for example, js denotes a JavaScript script, jpg an image, and html a hypertext page. We compiled statistics on common static resources; when traversing each record, we judge whether the extension of the requested resource belongs to the static resources and, if so, remove the record. For the third question, URLs can be divided into different types according to the properties of the website, and the URL type can indicate the purpose of the user's visit. In this paper, we choose n-gram-based URL classification because it offers a good trade-off between computational complexity and accuracy. Among the 4295 selected records, some are operational data of teachers and network teaching administrators; to ensure the accuracy and rigor of the research, the researchers eliminated the behavioral data generated by teachers and teaching managers, and the remaining 3282 records constitute the psychometric behavioral data used in this study. 
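The extension-based static-resource filter and the character n-gram features can be sketched as follows; the exact extension list, the 3-gram choice, and the example URLs are illustrative assumptions (the paper only states that common static resources were tabulated):

```python
# Assumed list of common static-resource extensions (illustrative).
STATIC_EXTENSIONS = {"js", "css", "jpg", "jpeg", "png", "gif", "ico"}

def is_static(url):
    """Judge whether the requested resource is a static file by its extension."""
    name = url.rsplit("/", 1)[-1]
    if "." not in name:
        return False
    return name.rsplit(".", 1)[-1].lower() in STATIC_EXTENSIONS

def char_ngrams(url, n=3):
    """Character n-gram features for n-gram-based URL classification."""
    return [url[i:i + n] for i in range(len(url) - n + 1)]

logs = ["/course/view?id=12", "/static/logo.jpg", "/mod/quiz/attempt"]
kept = [u for u in logs if not is_static(u)]   # static requests dropped
```

The surviving URLs would then be mapped to the seven category labels by a classifier trained on their n-gram features.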
First, to draw the overall sequence diagram of psychometric behavior, the behavior data file is read and the packages required for drawing are loaded. The number of column vectors of the subcoefficient matrix Cj equals the number of frames of the input matrix. Correlation matrices were constructed for each of the 8 channels of collected physiological signals, because the psychological state of college students is reflected in the variation law of these physiological signals. It can be seen that the smaller the entropy value of a behavior, the more concentrated its probability distribution over time intervals and the higher its regularity. However, we found that if one student only occasionally went to the cafeteria during the concentrated time period while another student often did so, the two students' regularity of going to the cafeteria differs, yet they may still have similar entropy values. The first step of normalization arises because the experiment stores the collected network behavior data in Excel. To read the data into R, the readxl package is used for data entry; because time-series analysis of the data is required, the forecast package is used for forecasting; and the tidyverse collection of packages mainly helps with drawing. We therefore use these three packages for analysis, forecasting, and plotting, and used ggplot to plot the aggregated network behavior data, yielding the time-series diagram of the psychometric platform as a whole, as shown. The purpose of this paper is to make use of this temporal information, treat the entire input as one overall multidimensional time series, and strengthen its inherent temporal structure. 
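The entropy-based regularity measure discussed above can be sketched as follows (the analysis itself was done in R; this Python sketch, with illustrative visit data, only shows the computation and its stated limitation):

```python
import math
from collections import Counter

def interval_entropy(visit_hours):
    """Shannon entropy of a behavior's distribution over time slots: the
    more concentrated the distribution, the lower the entropy and the
    higher the behavior's regularity."""
    counts = Counter(visit_hours)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

regular = [12, 12, 12, 12, 11, 12, 12]   # almost always eats at noon
irregular = [8, 10, 12, 14, 17, 19, 21]  # scattered meal times
# Limitation noted in the text: a student with only two visits, both at
# noon, also gets minimal entropy, despite a very different frequency.
rare = [12, 12]
```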
A sparse coding method applied to each frame alone cannot represent the underlying temporal information between frames, so better methods are needed to express it. The method described below enables the resulting sparse coefficients to reflect temporal information and preserves the temporal order of the feature vectors before encoding. According to prior knowledge, the maximum (MAXLEN) and minimum (MINLEN) lengths of the subsequences are set, and the information gain is measured for each subsequence in the dataset. The data feature extraction obtains time-series-adaptive college students' psychometric data: wavelet transform extracts the high-frequency signals, the time-series method extracts the low-frequency signals, and the latter are input into the adaptive filter as noise, so as to process the data and improve its signal-to-noise ratio, laying a solid foundation for better extraction of effective features in the next step. To address the lack of prior knowledge about college students' psychometric data, and combined with its time-series characteristics, a local feature descriptor is constructed recursively based on time-series shapelets; the characteristic sequences of the data are extracted, effectively capturing the information of the different media contained in the psychometric data. On the basis of the existing sample library of 6 discrete emotion classes, and in order to verify model selection and the model's predictive and recognition ability, the sample set is used. The stationarity test is carried out separately on the overall psychometric behavior time-series data. From the processed time-series diagram, it is clear that the fluctuation of the data is very large. 
Therefore, to establish a prediction model, the time series must first be stationarized. Here we choose the logarithmic method and then perform a unit root test on the result to ensure that the processed data form a stationary time series. Sparse representation provides a good encoding method for behavior recognition features: the usual approach decomposes the input sequence into frame-by-frame data, that is, the feature vector at each moment, encodes each frame independently, and finally puts the coefficient vectors corresponding to each frame together. But such methods ignore the strong temporal correlation between frames. The time scale extracted from the signal sequence is t = 8 seconds; at this point the Q value is close to its lower limit of 1, and the time series reflects the slowly changing law of the signal. The analysis of the time-series correlation matrix reveals the time-accumulated physiological response of emotion, that is, the duration or reaction time required for an emotional psychological experience to be reflected in the changing laws of the physiological signals. When the number of data samples is constant, a larger value of Q corresponds to a smaller time scale; when the time scale is small enough to reflect the transient law of the signal, the properties of the signal's correlation matrix constructed from it can reflect the instantaneous emotional state. Since skin conductance and heart rate are ultralow-frequency, slowly varying signals, a time scale of 0.5 seconds is taken for them to show their transient law. For the constructed time series, the P value equals 0.01, so the null hypothesis is rejected; that is, the log-transformed data are stationary. 
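The reason the log transform helps before the unit root test can be sketched numerically: for a series with exponential growth and multiplicative swings, the variance of the raw first differences keeps growing over time, while the log differences have a stable scale. The synthetic series below is illustrative, not the real behavior counts, and the paper's actual stationarity check is the unit root test in R:

```python
import math
from statistics import pstdev

def half_variance_ratio(xs):
    """Ratio of second-half to first-half variance: far from 1 means the
    scale of the series drifts over time (variance non-stationarity)."""
    h = len(xs) // 2
    return pstdev(xs[h:]) ** 2 / pstdev(xs[:h]) ** 2

# Exponential growth with multiplicative swings (illustrative stand-in
# for the strongly fluctuating behavior counts).
series = [100 * (1.05 ** t) * (1.2 if t % 2 else 0.8) for t in range(40)]

diffs = [b - a for a, b in zip(series, series[1:])]
log_diffs = [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

r_raw = half_variance_ratio(diffs)       # variance keeps growing
r_log = half_variance_ratio(log_diffs)   # roughly constant variance
```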
In the same way, the three categories of psychometric behavior data under the F dimension are tested for stationarity, and the P values are all less than 0.05, so all four time series above are stationary. Here we use the tseries package in the R language and the time-series test in this package, whose null hypothesis is that \u201cthe original sequence has a unit root,\u201d that is, that the overall psychometric behavior time series is not stationary. To establish a more accurate prediction model, the key step is to determine the order of the model. There are three main order-determination methods at this stage: the method based on the autocorrelation function and the partial autocorrelation function, order determination based on the F test, and the use of information criteria (the AIC and BIC criteria). Here the model order is determined with the autocorrelation and partial autocorrelation functions. The dictionary is divided into multiple groups of subdictionaries; each group includes multiple feature base vectors (column vectors), and the dimension of the column vectors is the feature dimension. Matching the grouping of the dictionary, the sparse coefficient matrix consists of groups of subcoefficient vectors of the same color, each group corresponding to an input vector and a subdictionary; the number of its column vectors equals the number of column vectors of the input matrix. 
The dimension of each coefficient vector equals the number of base feature vectors of the dictionary. It can be seen that the autocorrelation coefficient decays slowly, so it is tailing, while the partial autocorrelation coefficient cuts off. The P values in the Ljung-Box plot are mostly greater than 0.05, so the residuals at those lags are random; for a small number of lag periods the P values are less than 0.05, rejecting the null hypothesis, which indicates that the series retains some dependence on the data at those lags and that individual values may not be fully independent and random over that span. On the whole, most lag periods conform to the null hypothesis that the sequence is random, which means the residual test of the model is acceptable. When the model is used for prediction and fitting, as shown in the text, the predicted time series is consistent with the trend of the actual series, indicating that the overall psychometric behavior prediction model established here is good. In the example, the generalized variance test and the Ljung-Box test of the residuals (the GVTEST and LBQ values, whose shared null hypothesis is that the sequence is random) can be examined together with the ACF of the residuals; most of the eigenvalue statistics are counted and compared with the eigenvector distribution of the random matrix. 
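The Ljung-Box statistic used for the residual randomness check above is Q = n(n+2) Σ_{k=1..h} ρ_k² / (n−k), where ρ_k is the lag-k autocorrelation; under the null of a random sequence, Q follows approximately a χ²(h) distribution, so a large Q (small P value) rejects randomness. A minimal sketch with synthetic data (the paper's computation was done in R):

```python
import random

def acf(xs, k):
    """Lag-k sample autocorrelation."""
    n = len(xs)
    m = sum(xs) / n
    den = sum((x - m) ** 2 for x in xs)
    num = sum((xs[i] - m) * (xs[i + k] - m) for i in range(n - k))
    return num / den

def ljung_box_q(xs, max_lag=10):
    """Ljung-Box Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n-k)."""
    n = len(xs)
    return n * (n + 2) * sum(acf(xs, k) ** 2 / (n - k)
                             for k in range(1, max_lag + 1))

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(200)]          # random residuals
trended = [0.05 * t + e for t, e in zip(range(200), noise)]  # dependent series

q_noise = ljung_box_q(noise)    # small: consistent with randomness
q_trend = ljung_box_q(trended)  # large: dependence across lags
```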
The comparison is carried out from two aspects: (1) the eigenvector U corresponding to the largest eigenvalue of the correlation matrix of the emotional physiological signal sequence, which lies outside the theoretically predicted maximum eigenvalue, and (2) the eigenvectors corresponding to the eigenvalues of the correlation matrix that lie within the theoretically predicted range. Then the distribution of the eigenvectors of the correlation matrix is examined. In the research on college students' psychometric data, the existing data processing methods were first studied in depth and then combined with time-series data mining; the media information features in the data were extracted and analyzed, the data were classified according to the extracted features, and the processing and analysis results were presented through visual effects. Through the study of existing processing and analysis methods, this paper improves the existing processing technology for college students' psychometric data: time-series analysis is used to improve the signal-to-noise ratio of the data, and the coordinate data are transformed by unifying the center point, the scale, and the angle, giving the coordinate features scale invariance and rotation invariance; the experimental analysis also confirms the better robustness of this approach. According to the psychometric measurement principle, the psychometric data of college students come from a real-time sampling process occurring at equal sampling-pulse intervals and therefore have the characteristics of a time series. 
During the experiments, the sparse representation algorithm with time-series analysis was used as a multidimensional time-series modeling method: the time series were analyzed in multiple dimensions, and a regularization term supervising temporal disorder was added to suppress errors in motion timing, so that traditional sparse coding could be handled with a multidimensional time-series modeling approach carrying motion information. The important influence of time-series information on the sparse coefficients and the resulting improvement in classification accuracy are verified through experiments."}
+{"text": "To explore the mechanism of HSP70 in Brassica rapa, we identified 28 putative HSP70 gene family members using state-of-the-art bioinformatics-based tools and methods. Based on chromosomal mapping, HSP70 genes were most densely distributed on chromosome A03 and least distributed on chromosome A05. Ka/Ks analysis revealed that the HSP70 gene family in B. rapa was subjected to intense purifying selection during evolution. RNA-sequencing data and expression profiling showed that heat and cold stress induced HSP70 genes. The qRT-PCR results verified that the HSP70 genes in Chinese cabbage (Brassica rapa ssp. pekinensis) are stress-inducible under both cold and heat stress. The upregulated expression pattern of these genes indicated the potential of HSP70 to mitigate environmental stress. These findings further explain the molecular mechanism underlying the responses of HSP70 to heat and cold stress. Heat shock proteins protect plants from abiotic stress, such as salt, drought, heat, and cold stress; HSP70 is one of the major members of the heat shock protein family. The heat shock transcription factor family strengthens plants under biotic and abiotic stress conditions, allowing plants to proceed with normal growth and development. In Arabidopsis, HSP70-deficient plants showed stunted growth and an abnormal leaf phenotype, and HSP70 supports Arabidopsis plant development under abiotic stress conditions. Brassica rapa belongs to a vegetable-based family that includes cabbage, Chinese cabbage, mustard, and turnip, which are sources of oil, amino acids, and proteins for humans and animals; these vegetables are highly susceptible to temperature stress. The HSP70 family, which encodes HSP70 proteins, has been identified and studied in many plants, such as Arabidopsis (17 genes), rice (26 genes), spinach (12 genes), and soybean (61 genes). 
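The Ka/Ks interpretation used above follows the standard rule: the ratio of nonsynonymous (Ka) to synonymous (Ks) substitution rates below 1 indicates purifying selection, equal to 1 neutral evolution, and above 1 positive selection. A minimal sketch (the Ka and Ks values are illustrative, not the paper's measurements):

```python
def selection_type(ka, ks):
    """Classify the mode of selection from the Ka/Ks ratio.

    Ka/Ks < 1 -> purifying (deleterious changes removed),
    Ka/Ks = 1 -> neutral,
    Ka/Ks > 1 -> positive (adaptive changes favored)."""
    ratio = ka / ks
    if ratio < 1:
        return "purifying"
    if ratio > 1:
        return "positive"
    return "neutral"

# Illustrative values: a low ratio, as reported for the B. rapa HSP70
# family, signals intense purifying selection.
mode = selection_type(0.1, 0.8)
```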
Brassica rapa has gained remarkable economic and nutritional value due to its heading trait, which also makes it highly susceptible to stress. Under temperature stress, the leaves become pale and weakened due to chloroplast damage, and the plant becomes unable to properly regulate photosynthesis. No comprehensive study of the HSP70 family has been reported in Brassica rapa to date. We identified and explored the members of the HSP70 gene family, constructed their evolutionary tree and synteny correlations, and predicted protein structures with available bioinformatics software. Moreover, we analyzed the expression of these genes by qRT-PCR to determine their responses to heat shock stress in Chinese cabbage (Brassica rapa subsp. pekinensis). These findings will open new gates for understanding the structure, function, and expression of HSP70 genes in B. rapa. The sequence of an A. thaliana HSP70 family gene (AT3G12580.1) was obtained from the TAIR Arabidopsis genome database (TAIR Home Page, arabidopsis.org, accessed on 20 April 2022). This sequence was used as a query to retrieve the HSP70 genes in the Brassica database. Secondly, HMMER 3.1 and BLASTP with the threshold e-value set to 1e\u22125 were run using the Hidden Markov Model (HMM v3.0) profile of the HSP70 gene family (PF00012), downloaded from the Pfam protein database and used as the query; the default cutoff of HMMER 3.1 was set to 0.01. Finally, a total of 28 HSP70 genes were retrieved from the Brassica rapa genome. The B.rapaHSP70 proteins were further characterized by determining the molecular weight, number of amino acids, isoelectric point, and grand average of hydropathicity (GRAVY) through the ProtParam tool. The protein sequences were uploaded to the online database WoLF PSORT for the prediction of subcellular localization. Furthermore, the conserved protein domain was identified using the NCBI conserved domain online server, and motif analysis was performed using MEME Suite. 
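The GRAVY score reported by ProtParam is simply the mean Kyte-Doolittle hydropathy value over all residues of the protein; a sketch of that computation (the example sequences are illustrative, not B.rapaHSP70 sequences):

```python
# Kyte-Doolittle hydropathy scale, as used for ProtParam's GRAVY score.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(sequence):
    """Grand average of hydropathicity: mean hydropathy over all residues.
    Negative values indicate a hydrophilic protein, positive a hydrophobic one."""
    return sum(KYTE_DOOLITTLE[aa] for aa in sequence.upper()) / len(sequence)
```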
The gene structure was predicted by the Gene Structure Display Server (GSDS) using the CDS and genomic sequences of the 28 selected B.rapaHSP70 genes. The subcellular localization pattern of each B.rapaHSP70 was further predicted using the WoLF PSORT server. The protein sequences of HSP70s from Arabidopsis, B. oleracea, and B. napus were retrieved to construct the phylogenetic tree using MEGA X software; the sequences were multiply aligned, and the tree was built with the neighbor-joining (NJ) method with 1000 bootstrap replicates. Synteny analyses of B. rapa with B. oleracea, B. napus, and Arabidopsis were performed using MCScanX to obtain collinearity files, which were used to build dual synteny plots in TBtools. Duplicated genes were retrieved using the Blast, MCScanX, and Advanced Circos features of TBtools, and synteny between B. rapa and its closely related species was examined through TBtools. Putative cis-elements of the 28 B.rapaHSP70 genes were classified: the 2000 bp upstream of the start codon were downloaded from the BRAD database, the PlantCARE web-based tool was used for classification, and the results were presented using TBtools. The graphical construction of the three-dimensional (3D) structures of B.rapaHSP70 proteins was done using the online web-based software PHYRE2 (PHYRE2 Protein Fold Recognition Server, ic.ac.uk), selecting the fold recognition method. The B.rapaHSP70 sequences were used to determine the interactions of the genes with microRNAs using the psRNATarget database, and the interactions were further visualized with Cytoscape software. Gene ontology (GO) annotation was performed to explore functional characteristics; the B.rapaHSP70 gene sequences were uploaded to the online eggNOG database. 
Then, the GO annotation data were processed, and the graphical demonstration was produced using OmicShare. RNA-seq data were used to determine the expression of the CDS sequences of B.rapaHSP70 genes in different tissues, and a heat map was constructed using TBtools; the data were obtained from the NCBI GEO database (http://www.ncbi.nlm.nih.gov/geo/, accessed on 20 April 2022) under accession no. GSE43245. In addition, expression profiles of B.rapaHSP70 under cold and heat stress conditions were obtained by performing qRT-PCR. Seeds of Chinese cabbage were grown in pots containing a soil:vermiculite mixture (3:1) in a controlled chamber until they reached the five-leaf stage, at which point cold and heat stress were applied: seedlings were transferred to 4 \u00b0C for cold stress and 45 \u00b0C for heat stress under the same day/night and humidity conditions, with three biological replicates per condition. Leaf samples were harvested after 0, 1, 3, 6, 12, and 24 h, placed in liquid nitrogen, and stored at \u221280 \u00b0C. Leaf samples were used to extract RNA, and qRT-PCR was performed with three replicates to check B.rapaHSP70 gene expression; actin was used as the internal reference gene in Chinese cabbage. All primers used in this experiment are listed in the text. The 2\u2212\u0394\u0394Ct method was used to calculate the relative expression levels, and the results were graphically represented. The identified genes were named B.rapaHSP70-1\u2013B.rapaHSP70-28. These proteins were further verified for the HSP70 domain (PF00012); an equal number of similar motifs is present within each group except group 4, and the longest protein was B.rapaHSP70-28 (894 aa). The genes encoding HSP70 proteins were distributed on the 10 chromosomes A01\u2013A10; the maximum number of genes was located on chromosome A03 and the minimum on chromosomes A02, A04, and A05, and details of the conserved domains, motifs, and gene structures are given for each group. 
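The 2^−ΔΔCt (Livak) method mentioned above converts Ct values into relative expression: the target gene's Ct is normalized to the reference gene (actin here) in both treated and control samples, and the difference of those ΔCt values gives the fold change. A sketch with illustrative Ct values (not the paper's measurements):

```python
def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative expression.

    dCt  = Ct(target) - Ct(reference) within each sample,
    ddCt = dCt(treated) - dCt(control),
    fold = 2^(-ddCt); fold > 1 means upregulation under treatment."""
    d_ct_treat = ct_target_treat - ct_ref_treat
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** (-dd_ct)

# A gene whose normalized Ct drops by 2 cycles under stress is
# 2^2 = 4-fold upregulated (Ct values are illustrative).
fold = ddct_fold_change(22.0, 18.0, 24.0, 18.0)
```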
The evolutionary relationships between B.rapaHSP70, B.nHSP70, B.olHSP70, and AtHSP70 were analyzed. Based on domains and the phylogenetic tree, the 77 HSP70s clustered into six major groups across B. rapa, B. napus, B. oleracea, and Arabidopsis. We then detected duplicated gene pairs among the 28 B.rapaHSP70 genes identified in the B. rapa genome using collinearity analysis. In addition, a comparative synteny analysis of HSP70 gene pairs among B. rapa, B. napus, B. oleracea, and A. thaliana was conducted; B. rapa (such as Chinese cabbage, AA genome), B. oleracea (cabbage, CC genome), and B. napus (mustard, AC genomes) were included due to the presence of both subgenomes (A and C). B.rapaHSP70 displayed the most collinearity with B. napus, followed by A. thaliana and B. oleracea. The results demonstrated that homologous genes from the A genome of B. rapa are present in both the A and C genomes of B. napus, and similar behavior was observed in the C genome of B. oleracea. These results suggest that, in addition to the whole-genome duplication event, an independent duplication event also occurred during the evolution of these species. The duplicated pairs BraA10g033140.3C/BraA10g033130.3C, BraA09g038510.3C/BraA01g027130.3C, BraA01g038810.3C/BraA03g035360.3C, BraA08g029700.3C/BraA06g012060.3C, BraA03g019440.3C/BraA03g016490.3C, BraA03g051830.3C/BraA08g020460.3C, BraA02g003050.3C/BraA03g003930.3C, BraA08g021750.3C/BraA01g001130.3C, BraA03g047170.3C/BraA01g019900.3C, and BraA06g001280.3C/BraA04g004290.3C were determined. The promoter sequence of the B.rapaHSP70 genes was retrieved and uploaded to the PlantCare database to examine the cis-elements in the promoter region. 
The graphical representation of these cis-elements is provided in the corresponding figure. With the advent of graphical visualization of genomic data, 3D protein structures also explain protein properties and facilitate comparative studies. All 28 B.rapaHSP70 structures contain alpha (\u03b1) helices and beta (\u03b2) sheets, and because proteins with similar structures often have similar functions, they preserve the conserved signature fold of the HSP70 family, with a central mixed \u03b2-sheet sandwiched between \u03b1-helices. For the B.rapaHSP70 genes, we identified the miRNAs associated with the identified genes. The three GO categories, molecular function (MF), cellular component (CC), and biological process (BP), were investigated to further explore the enriched terms. The GO-MF (molecular function) enrichment results detected 19 enriched terms, namely, protein serine/threonine phosphatase activity (GO:0004722 and GO:0004674), manganese ion binding (GO:0030145), pectate lyase activity (GO:0030570), 1-phosphatidylinositol binding (GO:0005545), glycoprotein binding (GO:0005515), DNA-binding transcription factor activity (GO:0003700), calmodulin binding (GO:0005516), cobalt ion binding (GO:0050897), clathrin binding (GO:0030276), ATP binding (GO:0005524), oligopeptide transmembrane transporter activity (GO:0015198), oxidoreductase activity (GO:0016491), glutathione peroxidase activity (GO:0004602), 2,3-bisphosphoglycerate-independent phosphoglycerate mutase activity (GO:0046537), protein serine/threonine kinase activity (GO:0004674), beta-amylase activity (GO:0016161), and maltose biosynthetic process (GO:0000024). 
The GO-BP enrichment results detected 26 enriched terms, namely, floral organ development (GO:0048437), response to oxidative stress (GO:0006979), signal transduction (GO:0007165), oligopeptide transport (GO:0006857), defense response (GO:0006952), pollen development (GO:0009555), male gamete generation (GO:0048235), signal transduction by cis-phosphorylation (GO:0007165), clathrin coat assembly (GO:0048268), regulation of transcription (GO:0006355), metabolic process (GO:0008152), response to salt stress (GO:0009651), transmembrane transport (GO:0055085), plasma membrane organization (GO:0007009), glycolytic process (GO:0006096), response to abscisic acid (GO:0009737), transmembrane receptor protein tyrosine kinase signaling pathway (GO:0007169), protein phosphorylation (GO:0006468), starch catabolic process (GO:0005983), response to cold (GO:0009409), cadmium sensitivity/resistance (GO:0046686), regulation of meristem growth (GO:0010075), response to cadmium ion (GO:0046686), response to temperature stimulus (GO:0009409), and maltose biosynthetic process (GO:0000024). The GO-CC (cellular component) enrichment results detected 29 enriched terms, including mitochondrion (GO:0005739), plasma membrane (GO:0005886), chloroplast envelope (GO:0009941), endomembrane system (GO:0012505), nucleus (GO:0005634), chloroplast (GO:0009507), apoplast (GO:0048046), chloroplast stroma (GO:0009570), cytosol (GO:0005829), clathrin coat (GO:0030118), mitochondrial envelope (GO:0005740), vacuole (GO:0005773), and integral component of membrane (GO:0016021). To illustrate the transcript levels of the B.rapaHSP70 genes, we examined 8 tissues and organs of B. rapa at various growth phases based on RNA-seq data under accession no. GSE43245. The expression patterns of most B.rapaHSP70 genes in the silique, callus, and flower were higher than those in other tissues, while three genes showed no transcript changes in any tissue or organ. The expression levels of the B.rapaHSP70 genes varied across tissues and organs. In silique, B.rapaHSP70-4, B.rapaHSP70-9, and B.rapaHSP70-24 showed higher expression, and B.rapaHSP70-19 was the lowest. 
In callus, B.rapaHSP70-25 was highly expressed, and B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-16, B.rapaHSP70-18, B.rapaHSP70-20, and B.rapaHSP70-4 were the least expressed. In callus, B.rapaHSP70-4, B.rapaHSP70-9, B.rapaHSP70-24, and B.rapaHSP70-25 were highly expressed, and B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-18, B.rapaHSP70-22, and B.rapaHSP70-28 showed no expression change. In stem, B.rapaHSP70-4, B.rapaHSP70-9, B.rapaHSP70-24, and B.rapaHSP70-25 were highly expressed, whereas B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-16, B.rapaHSP70-20, and B.rapaHSP70-26 showed no expression change. In roots, B.rapaHSP70-4, B.rapaHSP70-9, B.rapaHSP70-24, and B.rapaHSP70-25 were highly expressed, whereas B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-16, B.rapaHSP70-20, and B.rapaHSP70-26 showed no expression change. In flower, B.rapaHSP70-4, B.rapaHSP70-9, and B.rapaHSP70-24 were highly expressed, whereas B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-16, and B.rapaHSP70-22 showed no expression change. Lastly, expression was analyzed in leaf; according to the expression pattern, B.rapaHSP70-4, B.rapaHSP70-9, B.rapaHSP70-17, B.rapaHSP70-24, and B.rapaHSP70-25 showed higher expression, whereas B.rapaHSP70-10, B.rapaHSP70-11, B.rapaHSP70-12, B.rapaHSP70-16, B.rapaHSP70-20, and B.rapaHSP70-26 showed no expression change. These findings suggest that these candidate genes may play diverse roles in regulating B. rapa growth processes. Most B.rapaHSP70s were upregulated under high-temperature stress at different time intervals, and most were highly expressed after 24 h of heat stress. The expression levels of 16 B.rapaHSP70s were high at different time intervals under cold stress. We identified 28 gene family members of HSP70 in B. rapa, which is higher than the number found in the model plant Arabidopsis (11 AtHSP70). The gene structures of B.rapaHSP70 were confirmed by the correlation between intron numbers and motif arrangements in combination with phylogeny. A phylogenetic tree was created to show the evolutionary relationships between B. rapa, B. napus, B. oleracea, and Arabidopsis; recent research has examined the evolutionary associations among these species. Notably, genes within the same phylogenetic subgroup had similar motif compositions and exon/intron structures. During the genome evolution process, gene duplications and duplications of chromosomal segments are major forces in the evolution of plant genome structure and content. We examined tandem and segmental duplication among the B.rapaHSP70 family genes and found that all the duplicated B.rapaHSP70s are segmentally duplicated. Previously, a total of 11 HSP70 genes from A. thaliana were identified and subjected to promoter analysis to investigate the presence of temperature-associated (heat shock element; HSE) and dehydration-associated (dehydration-responsive element; DRE) cis-elements. Based on our promoter analysis, these cis-elements may contribute to B. rapa stress tolerance. Furthermore, GO enrichment analysis supports the findings of the HSP70 gene association with stress tolerance, and the identified miRNAs point to a role of B.rapaHSP70 genes in post-transcriptional gene regulation. The expression patterns of genes are associated with their function. Previous studies have examined StHSP20 under heat stress, and in another study the results verified the involvement of B.rapaHSP70 proteins in thermotolerance in B. rapa and other plant species. In this study, we laid a foundation for recognizing signaling controlled by HSP70 proteins under biotic and abiotic stress conditions. In the current study, we performed a genome-wide analysis of the HSP70 gene family in B. rapa, and 28 putative members were identified. In silico analyses were performed, including gene structure, distribution, phylogenetic relationship, and syntenic studies, which helped to explore the evolutionary properties of the HSP70 gene family in B. rapa.
In addition, analyses of targeted miRNAs, promoter cis-acting regulatory elements, and gene ontology (GO) were carried out. The findings of all these analyses elucidate that B.rapaHSP70s are targeted by 34 different families of microRNAs, that these genes are highly responsive to light, temperature, and phytohormones, and that B.rapaHSP70s are involved in the biological, cellular, and molecular functioning of B. rapa, respectively. Three-dimensional protein structural knowledge can help improve crop properties, such as stress resistance and biomass yield. Furthermore, quantitative real-time PCR results indicated that HSP70 genes are strongly involved in heat and cold stress responses in B. rapa plants. To a large extent, all analyses performed on the HSP70 gene family will lay the foundation for further studies of molecular and physiological functions in Brassica crops."}
+{"text": "Gene expression is the result of the balance between transcription and degradation. Recent experimental findings have shown fine and specific regulation of RNA degradation and the presence of various molecular machinery purposely devoted to this task, such as RNA binding proteins, non-coding RNAs, etc. A biological process can be studied by measuring time-courses of RNA abundance in response to internal and/or external stimuli, using recent technologies such as microarrays or Next Generation Sequencing devices. Unfortunately, the picture provided by looking only at transcriptome abundance may not give insight into its dynamic regulation. By contrast, independent simultaneous measurement of RNA expression and half-lives could provide such valuable additional insight. A computational approach to the estimation of RNA half-lives from RNA expression time-profile data can be a low-cost alternative to their experimental measurement, which may also be affected by various artifacts. Here we present a computational methodology, called StaRTrEK (STAbility Rates ThRough Expression Kinetics), able to estimate half-life values based only on genome-wide gene expression time series without transcriptional inhibition. The StaRTrEK algorithm makes use of a simple first-order kinetic model and of a least-squares parameter estimation. We believe that our algorithm can be used as a fast, valuable computational complement to time-course experimental gene expression studies by adding a relevant kinetic property, i.e. the RNA half-life, with a strong biological interpretation, thus providing a dynamic picture of what is going on in a cell during the biological process under study. The online version contains supplementary material available at 10.1186/s12859-022-04730-x. In the past, the focus of research has been on transcriptional regulation and on the resulting regulatory network.
Since stability control is both transcript-specific and process-specific, to understand its role we developed an in silico methodology called StaRTrEK (STAbility Rates ThRough Expression Kinetics), able to reliably estimate half-lives from short time series without transcriptional inhibition, i.e., from as few as 5\u20136 time points. The latter feature of the algorithm is very important in time-course experiments, since in most cases only a few samples are experimentally measured over time. StaRTrEK relies on a computational model of post-transcriptional regulation based on first-order kinetics and a least-squares optimization of its parameters. We also considered a long gene expression time series (48 time points), which provided an excellent, significant agreement between StaRTrEK predictions and measured values. To prove the validity of the proposed methodology, we first tested the algorithm's performance on several types of artificial data, varying the number of samples as well as the type and amplitude of measurement noise. Most importantly, we tested the methodology on real experimental data by comparing StaRTrEK predictions against two recent public datasets composed of simultaneous measurements of short genome-wide gene expression time-courses (6 samples) and the corresponding transcripts' half-lives. We found a highly significant agreement between estimated and measured stability rates. In conclusion, we believe that our algorithm can be used as a fast, valuable computational complement to time-course experimental studies by adding a relevant kinetic property with a strong biological interpretation. Given m gene expression time profiles, the rate of change of RNA concentration for two genes, say gene i and gene j, can be described by the first-order kinetic system x_i'(t) = k_i P(t) - g_i x_i(t) and x_j'(t) = k_j P(t) - g_j x_j(t), where P(t) is the promoter activity, k_i and k_j are the transcription efficiencies, and g_i and g_j the degradation rates of genes i and j, respectively; the half-life of a transcript is ln(2) divided by its degradation rate.
Note that, in usual gene expression experiments, the promoter activity P(t) is not measured, and therefore the model equations cannot be used directly. Since such promoter activities are not measured, we want to select, among all available gene pairs, those having the same promoter activity. This amounts to saying that we are searching for gene pairs (i, j) having the following property: x_i'(t) = k_i u(t) - g_i x_i(t) and x_j'(t) = k_j u(t) - g_j x_j(t), where u(t) is the unknown common promoter activity function. By solving for u(t) from the first equation and substituting it into the second equation of the system, we obtain an equation in which u(t) is absent: x_j'(t) = (k_j/k_i) x_i'(t) + (k_j/k_i) g_i x_i(t) - g_j x_j(t). Again, it is important to realize that only three parameters have to be estimated from each pair of gene expression time series. These equations hold for continuous time t, whilst data measurements are available only at (often few) discrete time points. By integrating over the n time samples the backward (or reverse) equation, for each time sample we obtained a set of linear equations that can be compactly written in matricial form. Analogously, by eliminating u(t) from the second equation of the system, we obtain the forward (or direct) equation. For each gene pair i and j, we have to solve the backward and the forward equations. Since these linear systems cannot, in general, be solved exactly, they are solved in the least-squares sense; when the data matrix A is full row rank, we have to choose the minimum-norm solution. For each gene pair (i, j), we thus obtained two half-life estimates, collected in the matrix H, together with the corresponding estimation errors, collected in the matrix Q. The next step is therefore the appropriate selection of the best estimates and their integration (averaging) into a single value. A regularization term is also needed because the entries of A, which involve integrals of the measured profiles, may show a large variability; with regularization, the contributions to A have similar magnitude, at the cost of choosing the regularization parameter q. The output of the optimization step is therefore the matrix pair H and Q. To summarize the algorithm steps, we can identify four phases for the application of StaRTrEK: pre-processing, optimization, filtering and averaging.
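Before detailing the four phases, the pair-based elimination of the unknown promoter activity and the least-squares step can be illustrated with a small sketch. This is our own illustration with made-up settings, not the authors' code: it uses central finite differences instead of the paper's integral (backward/forward) equations and omits regularization, filtering and averaging.

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system M p = v by Gauss-Jordan elimination
    with partial pivoting."""
    A = [row[:] + [v[k]] for k, row in enumerate(M)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][3] / A[r][r] for r in range(3)]

def estimate_half_lives(x_i, x_j, dt):
    """Eliminating the common promoter activity u(t) from the pair of
    first-order kinetic equations leaves a relation that is linear in
    three parameters (a, b, g):
        x_j'(t) = a * x_i'(t) + b * x_i(t) - g * x_j(t),
    where g is the degradation rate of gene j and b/a that of gene i.
    Central finite differences approximate the derivatives; ordinary
    least squares (3x3 normal equations) recovers the parameters."""
    rows, ys = [], []
    for t in range(1, len(x_i) - 1):
        dxi = (x_i[t + 1] - x_i[t - 1]) / (2 * dt)
        dxj = (x_j[t + 1] - x_j[t - 1]) / (2 * dt)
        rows.append([dxi, x_i[t], -x_j[t]])
        ys.append(dxj)
    M = [[sum(r[p] * r[q] for r in rows) for q in range(3)] for p in range(3)]
    v = [sum(r[p] * y for r, y in zip(rows, ys)) for p in range(3)]
    a, b, g_j = solve3(M, v)
    g_i = b / a
    return math.log(2) / g_i, math.log(2) / g_j  # half-lives of genes i, j
```

On noise-free data sampled from the model, the estimates recover the true half-lives closely; the real algorithm adds regularization, error-based filtering and averaging over many gene pairs.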
The pre-processing phase requires gene expression time-profile normalization, which may be followed by a sampling regularization procedure. In fact, a non-uniform time sampling may lead to an ill-conditioned data matrix. The filtering phase consists of removing from the matrix H those entries corresponding to large MSE values, by selecting a maximal error threshold, together with a threshold on the p value of a Kolmogorov-Smirnov test between the actual distribution of estimated half-lives and the one obtained after a random permutation (shuffling) of time samples for each gene. More precisely, for each pair of time series, we randomly shuffled one of them, thus mimicking the situation in which one of the two time series is purely random. p values were corrected for multiple testing by computing a false discovery rate (FDR) using the Benjamini-Hochberg procedure. Summing up, the StaRTrEK algorithm pipeline is the following. (1) Pre-processing: Z-score normalization of time profiles; non-uniform sampling regularization (if needed). (2) Optimization: selection of the regularization parameter q; computation of the parameter vectors; construction of the matrices H and Q. (3) Filtering: computation of the estimation errors (MSE); selection of the error threshold; removal of half-life estimations in the matrix H with a corresponding estimation error larger than the previously selected threshold. (4) Averaging: computation of the median of the half-life estimations for each row (gene) of the matrix H resulting from the previous step. In this section, we provide an in silico validation of the algorithm by generating simulated data to provide a preliminary assessment of its ability to recover true values in a variety of plausible situations, i.e., by considering its performance sensitivity to changes in (i) the number of available time samples, (ii) the amount of noise affecting the measurements, and (iii) the number of genes involved in the estimation procedure.
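The FDR correction used in the filtering phase can be sketched in a few lines; this is a generic Benjamini-Hochberg implementation, not the authors' code.

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment: sort p values ascending,
    multiply the k-th smallest by m/k, then enforce monotonicity from
    the largest rank downward. Returns adjusted p values in the
    original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj
```

Entries whose adjusted p value falls below the chosen FDR level are then retained by the filter.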
Artificial data were generated according to the following dynamic equation describing the rate of change of a given gene expression x: x'(t) = u(t) - (ln(2)/hl) x(t), where u(t) is the transcription rate and hl is the gene half-life; measurements were then corrupted by an additive noise term v(t). To provide biological plausibility to the simulated data, we estimated the values and ranges of the various parameters from experimental measurements of transcript abundance over a time-course and of transcript turnover. Specifically, we chose u(t) having a sinusoidal shape. For any time t, v(t) was drawn from a normal distribution with zero mean and standard deviation depending on x(t); precisely, the standard deviation was taken as a percentage of the current state. The performance indices we considered were the Pearson correlation between estimated and true half-lives and the number of genes (percentage) having a small estimation error. We also reduced the number of time samples n from 12 to 6, while keeping fixed the number of half-lives to be estimated. It is worth noting that the proposed algorithm works extremely well also on short time series. In this section, we present the results of the algorithm by considering three experimental datasets where both time-courses and half-lives have been measured on a genome-wide scale (Supplementary Data). In particular, to show the performance of the proposed algorithm when dealing with a small number of samples, we focused on genome-wide yeast transcript half-lives and expression time-course data obtained in response to oxidative and DNA damage stress, both collected by Shalem and co-workers. The first dataset includes genome-wide yeast transcript half-lives and an expression time-course following exposure to methyl methanesulfonate (MMS), which induces DNA damage. The second dataset includes genome-wide yeast transcript half-lives and expression time-course data following exposure to hydrogen peroxide. The third, long dataset was obtained during the malaria IDC.
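The artificial-data generation scheme described at the beginning of this section can be sketched as follows. All concrete parameter values (sinusoid period, noise fraction, step size) are illustrative assumptions, not the paper's exact settings.

```python
import math
import random

def simulate_expression(hl, n_samples, dt, noise_frac=0.1, seed=0):
    """Generate one artificial expression time series following
    x'(t) = u(t) - (ln 2 / hl) * x(t), with a sinusoidal transcription
    rate u(t), integrated by fine Euler steps. Each sample is corrupted
    by Gaussian measurement noise whose standard deviation is a fixed
    percentage of the current state."""
    rng = random.Random(seed)
    gamma = math.log(2) / hl          # degradation rate from half-life
    x = 1.0 / gamma                   # start at steady state for u = 1
    step = dt / 100.0                 # fine Euler steps between samples
    series = []
    for k in range(n_samples):
        series.append(x + rng.gauss(0.0, noise_frac * x))  # noisy sample
        for i in range(100):
            t = k * dt + i * step
            u = 1.0 + 0.5 * math.sin(2 * math.pi * t / (10 * dt))
            x += step * (u - gamma * x)
    return series
```

Repeating this for many genes with half-lives drawn from a plausible range yields the simulated datasets on which estimation accuracy can be scored, e.g. by the Pearson correlation between true and estimated half-lives.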
The availability of genome-wide gene expression profiles has revolutionized life sciences at the molecular level. The analysis of the transcriptome goes far beyond DNA sequencing, since it allows genes to be put into action in the highly coordinated cell regulatory network. Recently, the discovery of a specific and extensive post-transcriptional regulation of gene expression levels has attracted many researchers to the study of transcript kinetics, i.e., the behavior over time of a cell response. In fact, the transcript half-life value determines the shape of the time profile during changes, i.e., during transient responses like a switch-on/switch-off transition. In other words, RNA half-life is a very important measure of cell response to an internal or external changing environment. Usually, genome-wide gene expression time-profile experiments are composed of few samples, since the interest of the researcher is focused on the early, middle and late response, so that about 5 or 6 time points are usually collected, considering also the high costs of a genome-wide measurement. Transcript half-lives can be obtained by a variety of methodologies, like transcriptional inhibition or metabolic labeling, but the costs are high and the measurement procedure may impact the physiology of the cell under study, thus leading to possible artifactual results. Here, we showed how to recover half-lives directly from gene expression time courses using a computational model of RNA dynamics. The model proposed here is very simple but effective: it requires only three parameters to be estimated and, in fact, we showed a significant agreement between estimated and measured half-lives using two experimental datasets collecting 6 time samples.
We believe that our algorithm can be used as a fast, valuable computational complement to time-course experimental studies by adding a relevant kinetic property with a strong biological interpretation. Finally, we note that our method tends to underestimate half-life values. This observation needs an explanation or, at least, a suggested one. We did not observe this underestimation using artificial data, so we surmise that it may have a biological reason rather than a computational one. To this aim, we note that the measured half-lives are obtained after transcriptional inhibition, whilst our algorithm makes use of gene expression datasets where both transcription and degradation are present. It is well known that transcriptional inhibition has a large impact on RNA half-life values, since RNA half-life regulation is blocked and the experimental environment is far from a physiological status. By contrast, our computational analysis is based on more physiological data that refer to the specific biological process under study and, as such, it should be more reliable. Obviously, this claim needs experimental validation, but it is certainly reasonable. Finally, this observation also suggests the intriguing possibility that transcriptional inhibition impacts RNA half-lives by increasing their values. Additional file 1. Results of the StaRTrEK algorithm using experimental and artificial (simulated) data."}
+{"text": "CRISPR screens provide large-scale assessment of cellular gene functions. Pooled libraries typically consist of several single guide RNAs (sgRNAs) per gene, for a large number of genes, which are transduced in such a way that every cell receives at most one sgRNA, resulting in the disruption of a single gene in that cell. This approach is often used to investigate effects on cellular fitness, by measuring sgRNA abundance at different time points. Comparing gene knockout effects between different cell populations is challenging due to variable cell-type-specific parameters and between-replicate variation. Failure to take those into account can lead to inflated or false discoveries. We propose a new, flexible approach called ShrinkCRISPR that can take into account multiple sources of variation. Impact on cellular fitness between conditions is inferred by using a mixed-effects model, which allows testing for gene-knockout effects while taking into account sgRNA-specific variation. Estimates are obtained using an empirical Bayesian approach. ShrinkCRISPR can be applied to a variety of experimental designs, including multiple factors. In simulation studies, we compared ShrinkCRISPR results with those of drugZ and MAGeCK, common methods used to detect differential effects on cell fitness. ShrinkCRISPR yielded as many true discoveries as drugZ using a paired screen design, and outperformed both drugZ and MAGeCK for an independent screen design. Although conservative, ShrinkCRISPR was the only approach that kept false discoveries under control at the desired level, for both designs. Using data from several publicly available screens, we showed that ShrinkCRISPR can take data for several time points into account simultaneously, helping to detect early and late differential effects. ShrinkCRISPR is a robust and flexible approach, able to incorporate different sources of variation and to test for differential effects on cell fitness at the gene level.
These features improve power to find effects on cell fitness, while keeping multiple testing under the correct control level and helping to improve reproducibility. ShrinkCRISPR can be applied to different study designs and can incorporate multiple time points, making it a complete and reliable tool to analyze CRISPR screen data. The online version contains supplementary material available at 10.1186/s12859-023-05142-1. The study of effects of genetic perturbation is fundamental to elucidate gene function. In addition, the identification of genes whose knockout leads to cell death, either in combination with another genetic change or in combination with a certain drug, may lead to more efficient cancer treatments. Genome-scale screening methods, in which thousands of genes are individually targeted in a single experiment, are often at the start of such investigations. Major challenges of such approaches have included undesired targeting of specific sites (\u201coff-target\u201d effects) and variable gene inactivation efficiencies. The adaptation of clustered regularly interspaced short palindromic repeats (CRISPR) technology to mammalian cells led to the development of improved gene knockout screens, with higher efficiency and lower off-target effects. Per cell, one gene is knocked out, and the single guide RNAs (sgRNAs) of these cells are sequenced. By comparing sgRNA abundance between different conditions, the effect of specific knockouts on cell fitness can be investigated. This method can be applied to study the impact of gene knockout on cell lines of different origins, isogenic cell lines, or a cell line with and without treatment. However, the comparison of gene knockout effects is challenging due to differences in abundance of specific sgRNAs at the start of the experiment. These may be due, for example, to variations in the library composition, efficiency of transduction of sgRNAs into cells and selection, growth rate of transduced cells, premature or incomplete Cas9 activity, and between-replicate variation. Data analysis methods ideally should take care of all these issues, to ensure reliability and reproducibility of identified effects. CRISPR screen data involve additional aspects that need to be taken into account: (i) a large number of variables (sgRNAs) and a relatively small number of replicates, also typical for other omics data; (ii) the data are generated by DNA sequencing, and thus consist of counts displaying over-dispersion, aspects that classic statistical methods do not account for; (iii) the effect of a gene knockout is evaluated by several sgRNAs per gene, which need to be aggregated to reach a single conclusion about that gene; (iv) the experiment often involves two sequencing runs, one at baseline measuring the starting abundance of each sgRNA, and a paired replicate at a later time point, typically after a chosen number of cell doublings. Furthermore, data related to multiple cell lines exhibit both technical and biological variability, which must be accounted for separately and differently in the data analysis. Indeed, the effect of conditions such as different treatments may be represented as fixed in the model, as conclusions are to be drawn for the chosen conditions, whilst the variation between sgRNAs can better be assumed to be random: the sgRNA effect then represents that of multiple, similar sgRNAs targeting the same gene. Currently available analysis methods only account for some of these issues, two often-used examples being MAGeCK and drugZ. To tackle these challenges, we propose an analysis method, ShrinkCRISPR, that, by means of a mixed-effects regression-based empirical-Bayes framework, can efficiently detect genes with differential impact on cell fitness. It first transforms the count data into fold changes relative to the starting sgRNA abundance, which are then normalized with rscreenorm, enabling comparisons across samples. This manuscript is organized as follows.
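The fold-change transformation relative to initial sgRNA abundance can be sketched as follows. This is a hypothetical illustration: the per-sample depth normalization and the pseudocount value are our assumptions, not the published rscreenorm/ShrinkCRISPR pipeline.

```python
import math

def lethality_scores(counts_t0, counts_t1, pseudo=0.5):
    """Per-sgRNA log2 fold change of later abundance (t1) relative to
    baseline abundance (t0), after normalizing each sample by its total
    read count. The pseudocount guards against zero counts; its value
    here is arbitrary."""
    depth0, depth1 = sum(counts_t0), sum(counts_t1)
    scores = []
    for c0, c1 in zip(counts_t0, counts_t1):
        f0 = (c0 + pseudo) / depth0   # depth-normalized baseline fraction
        f1 = (c1 + pseudo) / depth1   # depth-normalized later fraction
        scores.append(math.log2(f1 / f0))
    return scores
```

An sgRNA whose cells are depleted over the experiment receives a negative score, an enriched one a positive score; these scores are then the input to downstream normalization and modelling.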
The Cas9 endonuclease is used to cut a specific target, resulting in a specific gene knockout. Cisplatin causes DNA crosslinks, the repair of which requires specific DNA damage response genes, most notably components of the Fanconi anemia pathway. ShrinkCRISPR identified 37 significant hits. As expected, these include 12 (out of 22) known Fanconi anemia genes, as well as ERCC5, RAD18, GTF2H5, RAD51B and MUS81. To illustrate the use of ShrinkCRISPR with multiple time points, we use publicly available screen data. The raw data were preprocessed by first calculating fold changes. The top 10 genes selected per time point display little overlap with those for other time points. In addition, OR52H1 is an olfactory receptor gene, unlikely to be functionally different when knocked out in these cells, and known to be an off-target effect. Estimated effect sizes for consecutive time points showed remarkable consistency. Indeed, a comparison between hit lists of genes obtained per time point yielded a substantial number of genes selected at only one time point. We selected the most extreme time points to fit the model considering multiple time points; for this part, the rscreenorm quantile normalization step was removed from the preprocessing, and this model was compared with the analysis of single time points. We present ShrinkCRISPR, a new, flexible and powerful method for the analysis of CRISPR screen data for identification of differential effects on cell fitness between conditions. This method incorporates the initial sgRNA abundance of each cell line in analyses, enabling its use for various types of experimental designs, including drug-sensitizing screens and isogenic-cell screens. Taking all individual sgRNAs per gene at once in the model, ShrinkCRISPR can test for differences between conditions at the gene level.
It makes use of an empirical-Bayes framework, which allows us to represent sgRNA effects as random and condition effects as fixed. The model averages out extreme or conflicting changes, picking out effects that are consistent across most sgRNAs targeting that gene. By adequately accounting for different sources of variability, the method yields as much power as others for most effects, whilst consistently keeping false discoveries under control. Finally, testing at the gene level requires less multiple testing correction than at the sgRNA level, yielding more power. Our method takes into account existing variation between sgRNAs, as well as possible variation of sgRNA effects on cell fitness between conditions via the interaction effect in the model. This yields more robust estimates than those obtained by analysing individual sgRNAs separately: such methods may seemingly produce estimates that display less variability per sgRNA, giving a false impression of more accuracy. In fact, by neglecting inter-sgRNA variability, results largely represent effects on the current experiment, so they tend to be difficult to replicate in new experiments, when different replicates, and sometimes different sgRNAs, are used. In a simulation study, ShrinkCRISPR yielded similar ROC curves to those produced by another method, drugZ, for drug-sensitizing screens using paired designs. However, ShrinkCRISPR yields far fewer false positives in general. It also outperforms both drugZ and another method, MAGeCK, in the context of independent designs, used e.g. for isogenic screens, as it is the only approach to take into account initial sgRNA abundance. While multiple factors may lead to variability in initial sgRNA abundance, in published work we found no results reporting such checks. The publicly available data we used in our examples illustrate this point. The method drugZ was developed to analyse screen data from paired designs.
As such, it is not unexpected that it performs less well in the analysis of screens generated using independent designs. In our simulation study, we used it to analyse data from independent designs to illustrate the impact of ignoring initial sgRNA abundance on results. ShrinkCRISPR is the best approach in terms of controlling the proportion of false positive hits, while it is able to find all hits with a strong differential effect. However, ShrinkCRISPR is conservative: the false discovery rate stays under the desired level, and the method is not able to detect hits with small effect sizes. The low power for detecting small effects could potentially be improved upon by using a spike-and-slab prior for the effect of interest, which would enable the model to better separate a subset of genes with no differential behaviour between groups from those with differential behaviour. Using the current simulation study setup, however, this did not lead to a better performance (data not shown). The choice of a spike-and-slab prior will be available in the R package ShrinkCRISPR. Results of the simulation study must be interpreted with care: each individual simulated dataset used the same effect size. Our method has been designed to analyse CRISPR screen data generated by sequencing, consisting of counts. Our proposed pipeline takes the initial sgRNA abundance into account by computing fold changes. While we suggest using this pipeline, other researchers may choose to use fewer or none of these pre-processing steps. For example, when studying results for isogenic cell lines, relatively smaller effects are expected than when using cell lines from different individuals or different tissue types. In such cases, sharing of the initial sgRNA abundance eliminates one important variability source. In addition, lethality score distributions for library sgRNAs as well as for assay control sgRNAs tend to be stable across cell lines, and normalization with rscreenorm may be unnecessary.
In such cases, the data will involve both over-dispersion and potentially zero inflation. The flexibility of the proposed framework enables ShrinkCRISPR to still be used, by fitting a model appropriate for such counts. Some researchers suggest combining multiple test results for sgRNAs targeting the same gene by summarizing their p values (one example is REF), say using the minimum of them. This can lead to over-optimistic results, as the summary works similarly to a filtering of the features, since only one test is selected from a set of them. As a filter, the selection of the sgRNA test with the smallest p value is, by definition, not independent of the test result, and this yields a bias in the FDR control method. There are methods currently in use which rely on more sophisticated approaches for combining sgRNA-level results (statistics or p values) to yield gene-level statistics or p values, including p value combining methods which allow for non-independent tests, but then only for one-sided significance testing. ShrinkCRISPR relies on enough replicates per combination of group and cell line, ideally 3, to yield reliable results. Indeed, using 2 replicates leads to poorer ShrinkCRISPR performance, as variances within and between cell lines are then poorly estimated. In particular, if a single replicate is available for each combination of group and cell line, ShrinkCRISPR cannot be applied. While this can be seen as a too strong requirement by some researchers, we think this is a reasonable restriction: it follows from the need for estimating variability for all sources of variation, which is precisely what enables ShrinkCRISPR to yield fewer false discoveries. A further challenge when using ShrinkCRISPR is that the effect sizes are not always straightforward to interpret, due to the several normalization steps.
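The anti-conservativeness of the minimum-p summary discussed above can be seen with a tiny simulation; the settings here are arbitrary illustrations. Under the null, the minimum of k independent uniform p values falls below 0.05 with probability 1 - 0.95^k (about 18.5% for k = 4), far above the nominal 5%.

```python
import random

def min_p_false_positive_rate(n_genes=2000, sgrnas_per_gene=4, seed=1):
    """Simulate genes with only null sgRNAs (uniform p values) and
    summarize each gene by its minimum p value. Returns the fraction of
    genes declared significant at 0.05, which greatly exceeds 5%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_genes):
        p_min = min(rng.random() for _ in range(sgrnas_per_gene))
        if p_min < 0.05:
            hits += 1
    return hits / n_genes
```

This is exactly the filtering-like bias described above: the selected test is not independent of its own result, so nominal error control no longer holds.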
Furthermore, as with all approaches using fold changes, ShrinkCRISPR is sensitive to extreme values of initial sgRNA abundance, in particular very low ones. The TKO data analysis showed that our approach can account for multiple effects in CRISPR screens, both at the sgRNA and at the replicate levels. Indeed, estimated effects at different time points showed strong agreement: their correlation was at least 90% on average. Finally, by taking multiple time points into account in the model, ShrinkCRISPR significantly increased the power to detect differential effects on cell fitness, finding more time-independent effects than when individual time points were used. Another important pre-processing step common to all methodologies based on fold changes is the handling of low counts. Indeed, in ShrinkCRISPR we create a fold change to measure the population growth of cells transduced with a specific sgRNA. The presence of low sgRNA counts at the initial time point can lead to a large fold change value and thus to an artificially large lethality score. This can then produce false positive hits. By modelling the variance between sgRNAs within a gene, ShrinkCRISPR is more robust to extreme values for individual sgRNAs. However, this may not be sufficient. One common approach to deal with such problems is to add a small constant to all raw counts. We conclude that ShrinkCRISPR yields at least as much power as other existing methods for most effects, with much better true positive proportions, even if conservative. As downstream validation studies are extremely time-consuming, it represents an important step towards making better use of the data produced, producing more reproducible results, and leading to more efficient studies. Additional file 1."}
+{"text": "(1) Background: because gene interactions change over time, it is biologically meaningful to examine time-varying structures and to capture dynamic, even transient, states and cell\u2013cell relationships. (2) Methods: a transcriptomic atlas of hematopoietic stem and progenitor cells was used for network analysis. After pseudo-time ordering with Monocle 2, LOGGLE was used to infer time-varying networks and to explore changes of differentiation gene networks over time. A range of network analysis tools were used to examine properties and genes in the inferred networks. (3) Results: shared characteristics of attributes during the evolution of differentiation gene networks showed a \u201cU\u201d shape of network density over time for all three branches in both human and mouse. Differentiation appeared as a continuous process, originating from stem cells, passing through a brief transition state marked by fewer gene interactions, before stabilizing in a progenitor state. Human and mouse shared hub genes in the evolutionary networks. (4) Conclusions: the conservation of network dynamics in the hematopoietic systems of mouse and human was reflected by shared hub genes and network topological changes during differentiation. Throughout life, hematopoietic stem cells (HSCs) maintain the mammalian blood system. Here, we apply the LOGGLE model to pseudo-time ordered gene expression datasets of three lineage differentiation branches of human and mouse so as to construct hematopoietic time-varying gene interaction networks, since the model fits the biological realm of differentiation. We quantify an evolutionary trend of the gene network with differentiation (ordered by pseudo-time) by examining global attribute indicators, such as network density. We further apply network similarity analysis to confirm three stages of differentiation and to identify hub genes at different stages using centrality analysis. 
Last, we examine evolutionary changes of gene network topology as they can provide novel insights into hematopoietic differentiation. Hub genes identified in the aggregated networks at different evolutionary stages provide interacting candidate genes for further studies. Our model effectively captures the structural transitions in the dynamic networks. Lineage\u2212CD14\u2212CD19\u2212CD34+ cells were sorted using a LSRII Fortessa Cytometer. Lineage\u2212CD117+ cells from bone marrow of C57BL/6 mice were sorted. scRNA-seq cDNA libraries for human and mouse were prepared with the Chromium Single Cell 3\u2018 platform. scRNA-seq libraries were sequenced on the Illumina HiSeq 3000 System. The cellranger pipeline was used to process raw data, to align reads to the genome, and to produce gene\u2013cell expression matrices. Differentiation trajectory analyses were conducted with Monocle 2. We first filtered genes to reduce noise and data dimensionality. In this work, we considered the following two criteria for feature screening to obtain genes that carry important information (http://www.informatics.jax.org/, accessed on 1 January 2022). (1) Relevance to hematopoiesis. We used an annotated gene list from one report. (2) We also retrieved the lineage-specific genes (progenitors only) from Haemopedia, an atlas of murine gene-expression data of 54 hematopoietic cell types. The LOGGLE package (https://github.com/jlyang1990/LOGGLE, accessed on 31 November 2021) was used to build and understand differentiation time-varying network graphs; model parameters at each time point were selected by cross-validation. Given a graph G = (V, E), in which V and E are the sets of vertices and edges, several common network properties, including the number of edges, network diameter, and network density, were examined to explain the trend of network topology changes. The network diameter represents the shortest distance between the two most distant genes in the network. 
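The two global properties named above can be computed directly from an edge list. The study used igraph in R; the pure-Python sketch below is only meant to make the definitions concrete (density as the fraction of possible edges present, diameter as the longest shortest path, via breadth-first search).

```python
from collections import deque

def network_density(n_nodes, edges):
    """Fraction of possible undirected edges that are present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def network_diameter(n_nodes, edges):
    """Longest shortest path between any two reachable nodes (BFS)."""
    adj = {v: set() for v in range(n_nodes)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    diameter = 0
    for source in range(n_nodes):
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        diameter = max(diameter, max(dist.values()))
    return diameter
```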
After obtaining a series of networks at different time points, the similarity between neighboring networks is examined and similar networks are merged for biological interpretation. The CompNet neighbor similarity index (CNSI) is used to measure the similarity between two compared networks, based on the shared neighbors of the i-th gene in the two compared networks; the similarity between the two networks is represented by the sum of these per-gene similarities. In a gene network, not all genes equally influence the network. Gene networks usually follow a scale-free distribution, in which the majority of genes have one or two connections, and only a few genes have large numbers of connections. Many centrality measures have been proposed to indicate a gene\u2019s importance in the network context, such as connectivity/degree (the number of first neighboring genes), betweenness, clustering coefficient (probability of connections among a gene\u2019s direct neighbors), and PageRank (popularity of a gene based solely on its interactions). These measures were calculated with igraph (https://igraph.org/r/, accessed on 22 January 2022). We also borrowed the concept of the h-index, widely used in publication citation networks, as implemented in the influential package (https://github.com/asalavaty/influential, accessed on 25 November 2021). Graph energy is defined on the basis of the adjacency matrix of a graph G with vertices (V) and edges (E), as the sum of the absolute values of the eigenvalues of that matrix; the Randic energy and the Laplacian energy are defined analogously from the Randic adjacency matrix and the Laplacian matrix, where m and n are the numbers of edges and nodes. CD34+ cells were sorted to enrich for HSPCs. A human dataset contained 15,245 single CD34+ stem/progenitor cells after filtering out cells with small numbers of detected genes; as visualized in UMAP, the cells displayed clear clusters, suggesting distinct cell types at molecular levels, including granulocyte\u2013monocyte progenitors (GMPs), megakaryocyte\u2013erythroid progenitors (MEPs), lymphocyte progenitors (ProBs), and early T lineage progenitors (ETPs). 
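The h-index idea borrowed from citation networks can be stated compactly for genes: a gene's h-index is the largest h such that it has at least h neighbors whose own degree is at least h. The sketch below is an illustrative pure-Python version, not the influential package's implementation.

```python
def h_index_centrality(adj):
    """h-index of each gene in an undirected network: the largest h
    such that the gene has at least h neighbors whose own degree
    (number of neighbors) is at least h."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    scores = {}
    for v, nbrs in adj.items():
        nbr_degs = sorted((deg[w] for w in nbrs), reverse=True)
        h = 0
        for rank, d in enumerate(nbr_degs, start=1):
            if d >= rank:  # rank-th best neighbor still has degree >= rank
                h = rank
        scores[v] = h
    return scores
```

Unlike raw degree, this measure rewards genes whose neighbors are themselves well connected, which matters in scale-free networks where most genes have only one or two connections.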
With the same computational strategy, 17,560 lineage\u2212CD117+ cells from B6 mice were also clustered, unsupervised, based on transcriptome similarity using UMAP. Hematopoietic cell identity was assigned to each cluster of cells by comparing cluster-specific genes with published lineage signature genes. Cells were grouped into 36 clusters and assigned to long-term hematopoietic stem cells (LTHSCs), multipotent progenitors (MPPs), lymphoid multipotent progenitors (LMPPs), common myeloid progenitors (CMPs), MEPs, and GMPs. Approaching the transition stage with differentiation, the complexity of the gene interaction network began to decrease, bottoming in the transition stage, then increased again before cells became progenitor cells (another stable state). This phenomenon was consistent across human and mouse differentiation, demonstrating advantages of single-cell transcriptome profiling, which allowed inspection of cell states and cell-state transitions at fine resolution, and the identification of transition cells. Dynamic modeling provided a method to characterize multi-phase cell-fate transitions. Expression trajectories of Gata1 in mouse and GATA1 in human were sigmoid in shape, implying the existence of transition cells in both human and mouse. The relatively lower correlations in human may be due to the heterogeneity of different individuals and complicated lineage specifications in human. The time-varying networks in all three branches showed good scale-free behaviors, for both human and mouse. Compared with random networks, time-varying networks had lower entropy. Network energy is an invariant that encodes the network structure. It is defined as the sum of the absolute eigenvalues of a matrix, and so it is closely related to the network structure. Generally speaking, topological features of genes in a network associate with their biological importance. 
Genes with high centrality tend to be biologically important. MEIS1 expression is correlated with cell self-renewal in normal hematopoiesis, and its expression level is highest in HSCs, declining with differentiation. In mouse, Meis1 is required to maintain functional LTHSCs. Gata1 is a hub gene in the time-varying network from HSC to MEP, with high betweenness. Gene targeting studies of Gata1 have confirmed its importance in primitive and definitive erythroid cells and megakaryocytes. For example, in chimeric mice, Gata1-null erythroid cells are not able to mature beyond the proerythroblast stage, and a lack of Gata1 in megakaryocytes leads to increased proliferation and deficient maturation of megakaryocytic progenitors. Gata2 is a hub gene with high betweenness in the LMPP subnetwork. Animal models are widely used in biological research on the predicate that fundamental biochemical processes are conserved across species, as between human and mouse. The numbers of shared interacting gene pairs were significant, with p values of 3 \u00d7 10\u22124, 8 \u00d7 10\u22126, and 3 \u00d7 10\u22123 compared to the numbers occurring by chance in HSC to GMP, HSC to MEP, and HSC to LMPP, respectively. For the total number of interactions of shared interacting gene pairs, correlations were high between human and mouse in the three trajectories. These results showed species conservation of gene regulation during hematopoiesis. Networks were converted into hypergraphs with the Dual Hypergraph Transformation (DHT) approach, which transforms the edges of a graph into the nodes of a hypergraph. GATA1 and GATA2 are both hub genes, as expected. BLNK, a hub gene for the module from HSC to LMPP, encodes a cytoplasmic linker or adaptor protein that plays a critical role in B cell development. Considering conservation between human and mouse networks, we applied an R package, bioNet, to three aggregated hypergraphs, using a heuristic approach to identify sub-hypergraphs with higher numbers of edges across the 50 time-varying networks in human and mouse. 
Hematopoiesis is a stepwise process, originating from HSCs and associated functionally with activation of lineage-specific transcription factors for progenitor cells. Transition cells are considered critical in many important biological processes, such as in organ development. We performed time-varying network reconstruction and analysis on pseudo-temporally ordered gene expression data of cells during hematopoietic differentiation. Conservation between human and mouse should help in interpreting disease models for research. Due to the complexity of the LOGGLE algorithm, we could only include about 100 genes, and more efficient algorithms are needed for the analysis of more genes. Another limitation is that the estimated pseudo-time does not accurately represent the biological time, and thus we cannot precisely determine the biologically interesting time points for network reconstruction. Due to the characteristics of single-cell data, high noise levels and dropout rates affect the performance of LOGGLE in estimating networks, and imputation algorithms that perform denoising and dropout imputation in scRNA-seq may be a good direction to improve the results. The lower consistency between the results with Monocle 2 and SlingShot in human indicates that complicated branching structures may make the pseudo-time ordering difficult, and in turn affect the network estimations. The evolutionary trajectory of the time-varying networks, built with the LOGGLE model, characterizes the changes in transcription programs at the gene interaction level, instead of at the individual gene expression level. The differentiation trajectory could be divided into three stages, through similarity analysis of neighboring networks along pseudo-time, and hub genes at different differentiation stages were identified. The evolution of time-varying graphs revealed the differentiation patterns of human and mouse hematopoiesis with three states. 
The existence of transition states may explain why cells of different phenotypes form clusters in UMAP and why cells can be sorted with FACS technology. There was conservation of the overall hematopoietic process between mouse and human. Both showed three branches of differentiation pathways. Their networks shared hub genes and global topological characteristics. All six dynamic networks had transition stages with loose network constraints. The conservation of the networks and the important genes within them helps in understanding hematopoiesis and in developing treatments for blood diseases."}
+{"text": "The detection of Early Warning Signals (EWS) of imminent phase transitions, such as sudden changes in symptom severity, could be an important innovation in the treatment or prevention of disease or psychopathology. Recurrence-based analyses are known for their ability to detect differences in behavioral modes and order transitions in extremely noisy data. As a proof of principle, the present paper provides an example of a recurrence network based analysis strategy which can be implemented in a clinical setting in which data from an individual is continuously monitored for the purpose of making decisions about diagnosis and intervention. Specifically, it is demonstrated that measures based on the geometry of the phase space can serve as Early Warning Signals of imminent phase transitions. A publicly available multivariate time series is analyzed using so-called cumulative Recurrence Networks (cRN), which are recurrence networks with edges weighted by recurrence time and directed towards previously observed data points. The results are compared to previous analyses of the same data set, and benefits, limitations and future directions of the analysis approach are discussed. The detection of Early Warning Signals (EWS) of imminent transitions in states of the body and mind has been a topic of great interest in the sciences that study health and wellbeing. In a clinical setting, such transitions often concern sudden changes in symptom severity associated with the onset of disease or psychopathology. A commonly used data source is Ecological Momentary Assessment (EMA), in which participants are prompted one or more times per day to answer some questions, which are often self-ratings of psychological or physical states. Another data source concerns the measurements of physiological variables recorded by wearable sensors. 
Both types of data are known to be non-stationary and to contain sudden shifts in level, trend, or variance. Many researchers acknowledge that the data generating processes underlying such time series appear to violate the ergodic assumptions of stationarity, homogeneity and memorylessness (the weak complexity assumption). One approach suggested to \u201cdeal with\u201d non-stationarity is to collect data during a short, but very intensive, measurement period, for example, one or 2\u00a0weeks in which a participant is prompted 10 or more times per day to provide self-reports of activities, emotional, psychological or physiological states. These examples also reveal that current methods and models (inference and interpretation) used to detect EWS in time series data of physiological and psychological variables impair their potential for being applicable in a clinical setting. The first, rather obvious point is that clinical practice requires personalized, idiographic methods, that is, EWS should be reliably detectable in data observed in a single individual and ideally make use of the particular facts pertaining to the case. This requirement excludes all methods that have been used in studies to evidence EWS based on samples of many individuals. Second, the present paper adopts the strong complexity assumption, that is, analytic techniques will be used that were developed to quantify the dynamics of complex systems, even in the context of nonstationary and nonhomogeneous time series data. Moreover, the explicit goal is to provide an analysis strategy that meets the requirements for potential use as a tool in a clinical setting for the diagnosis and treatment of individuals. The analysis strategy is based on recurrence networks, to which some constraints are added, resulting in a so-called cumulative Recurrence Network (cRN). 
The cRN is constructed from a recurrence matrix based on measurements of the state variables of the system. The cRN is introduced as a tool for analyzing multivariate time-series data obtained through the real-time monitoring of an individual. Finally the technique is applied to the data and compared to the original analyses. Recurrence Quantification Analysis (RQA) and its derivatives are known for: \u2022 The description of linear as well as nonlinear dynamical phenomena. \u2022 The description of (transitions between) dynamical regimes, even in exceptionally noisy environments. \u2022 The quantification of recurrent dynamics across all available time scales. \u2022 The quantification of attractor geometry across all available time scales. \u2022 The quantification of structural similarities between different time series represented as Multiplex Recurrence Networks. Recurrence analytic approaches to time series analysis are essentially model-free (descriptive) and make only a few assumptions about the data related to the observation of dynamics/variance. It is not the case that results from recurrence-based analyses are immune, or \u201crobust\u201d, to nonstationarity and nonhomogeneity; rather, these methods can be understood as descriptive of such dynamics. The goal is not to infer a true model or process, or to estimate a population model parameter, but to characterize the dynamics displayed during the observation time, either as observed, or after a reconstruction of the phase space. Recurrence analyses are based on a recurrence matrix which represents the states of the system that are re-visited at least once during the time the system was observed. If a sequence of states is recurring, this is referred to as a trajectory in phase space, a relatively stable state, or orbit of the system. In this paper the observed variables in a multivariate time series are considered to be measurements of the state variables of the system, i.e., yi = y(ti). 
Each time series is considered a state vector spanning one dimension of the m-dimensional phase space of the (sub)system. For example, the Mood subsystem consists of 12 time series representing a 12-dimensional state space. The 12 values that were simultaneously observed at each time point represent a specific Mood-state, which can be regarded as a 12-tuple coordinate in the 12-dimensional space. As the observed values change over time, they trace a trajectory through this space representing the mood dynamics of the participant. By evaluating the observed values as coordinates it is possible to quantify whether the system (approximately) returns to the same regions and revisits previously traversed trajectories. In the current context this would refer to recurring emotional states. An observed time series can be interpreted as a finite representation of the trajectory or state-evolution of a stochastic or deterministic dynamic system. The recurrence matrix Ri,j is defined as: Ri,j(\u025b) = \u0398(\u025b \u2212 ||xi \u2212 xj||), where \u025b is a threshold value for the distances between coordinates, and \u0398 is the Heaviside function, which returns 1 if a distance value falls below \u025b and 0 otherwise. The threshold value \u025b directly determines how many recurring values appear in the matrix, a proportion called the Recurrence Rate. Many other measures can be calculated from the matrix, such as the proportion of recurrent points that form line structures which represent larger patterns of recurring values. The recurrence matrix R(\u025b) can be considered an adjacency matrix A(\u025b) of an adjoint complex network. By removing the diagonal this matrix represents an unweighted, undirected, simple graph called an \u025b-Recurrence Network (\u025b-RN). In what follows a description of unweighted, undirected local and global characteristics of the vertices in a Recurrence Network is provided. 
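The recurrence matrix and Recurrence Rate described above can be sketched in a few lines; this is a generic illustration (Euclidean distance, diagonal excluded from the rate), not the specific implementation used in the paper.

```python
import math

def recurrence_matrix(states, eps):
    """R[i][j] = 1 if states i and j are within distance eps, i.e.
    the Heaviside function of eps minus their Euclidean distance."""
    n = len(states)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return [[1 if dist(states[i], states[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(R):
    """Proportion of recurrent points, excluding the main diagonal."""
    n = len(R)
    off = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    return off / (n * (n - 1))
```

Raising the threshold eps monotonically increases the Recurrence Rate, which is why eps is typically tuned to achieve a target rate.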
This description is not exhaustive, but covers the most important vertex characteristics for the present context; the basis is the adjacency matrix A(\u025b). In \u025b-RNs the local degree is normalized to the time series length, rather than to a theoretical maximum, and the degree density of a vertex Vi is calculated as the number of vertices \u025b-close to Vi divided by N \u2212 1. The degree density of a vertex Vi indicates the probability that a randomly chosen state represented by Vj is \u025b-close to the state represented by Vi. It can be interpreted as a localized recurrence rate. Another value is the local clustering coefficient: the proportion of pairs of neighbors of Vi that are themselves \u025b-close. This measure quantifies the geometric alignment of the state vectors. The degree density and clustering coefficient quantify vertex properties at a local scale, a neighborhood, whereas measures based on shortest path lengths lij, such as the closeness centrality, quantify vertex connectivity relative to the scale of the network as a whole. The local efficiency is the inverse geometric mean of lij. In the path length calculation \u2018disconnected\u2019 vertices are assigned a local average path length of N \u2212 1. This occurs quite often in an \u025b-RN: a state occurring at Vi does not recur at every other vertex, by construction of the matrix R(\u025b). Analogous global measures are the edge density, global clustering coefficient, network transitivity, average path length, and global efficiency; the edge density is equal to the Recurrence Rate in RQA. It is possible to compute \u025b-RN measures within each of a set of sliding windows. The minimal window size is of course limited by the time series length, but also cannot be too small. To achieve the same goal, it is also possible to create a directed (weighted) network with recurrent points based on an adjacency matrix with the upper triangle set to zero. The structure of the network is then evaluated based on measures that only consider the vertex out-degree. 
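The upper-triangle trick above, which makes every edge point only to states already observed, can be sketched as follows (an illustrative helper, not the authors' code):

```python
def cumulative_out_degrees(R):
    """Out-degree per vertex after zeroing the diagonal and upper
    triangle of a recurrence matrix: each vertex keeps only edges to
    states observed earlier, so no future information is used."""
    return [sum(row[:i]) for i, row in enumerate(R)]
```

Because vertex i only "sees" columns 0..i-1, the resulting out-degrees are exactly the quantities available to a clinician in real time at observation i.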
Such a network represents vertex properties that are cumulative with respect to the number of edges that indicate whether the current vertex is a recurring state of the past: a cumulative Recurrence Network (cRN). In order to fulfill the goal of serving as a potential tool for assessing an imminent regime change in a clinical, real-time monitoring setting, we should not use any information about states that recur at some point in the future, because these are of course only known ex post facto. It is possible to use windowed analyses on the time series, construct a recurrence matrix based on a right-aligned window, and compute RQA measures. For a Multiplex Recurrence Network (MRN), the M time series of length n in the multivariate data set are turned into M recurrence networks, each with n nodes, to constitute the \u03ba = M layers of the MRN. In the present example the MRN is built from six weighted, directed recurrence networks, which represent cumulative time by only considering the vertex out-degree when computing vertex properties. Multiplex recurrence networks provide a framework for studying the temporal structure in multivariate records of the observables of a complex dynamical system. Multiplex networks have recently been constructed from horizontal visibility graphs (HVG) and recurrence networks. For an M-dimensional time series, the layers are described by M adjacency matrices, and the similarity between two layers \u03b1 and \u03b2 can be quantified by the inter-layer mutual information I\u03b1,\u03b2, computed from the joint distribution of the degrees \u03ba[\u03b1] in layer \u03b1 and \u03ba[\u03b2] in layer \u03b2. A second measure is the Edge Overlap \u03c9. It represents the proportion of edges that are shared between any two vertices across all layer-pairs in the multiplex network. Several nodes in the original study were aggregates of observed time series: mental unrest; negative affect. The other two nodes consisted of a single observed time series, worry and suspicious. 
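A simplified variant of the Edge Overlap idea can be sketched as follows; note this counts edges shared by all layers among edges present in any layer, which is an assumption of the sketch rather than the exact \u03c9 used in the multiplex literature.

```python
def edge_overlap(adj_layers):
    """Fraction of (ordered) vertex pairs that have an edge in every
    layer, among pairs with an edge in at least one layer."""
    n = len(adj_layers[0])
    any_edge = shared = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # ignore self-loops on the diagonal
            present = [layer[i][j] for layer in adj_layers]
            if any(present):
                any_edge += 1
                if all(present):
                    shared += 1
    return shared / any_edge if any_edge else 0.0
```

When the layers share many recurrence edges, as hypothesized before a transition, this quantity moves toward 1.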
It is likely the authors chose these specific variables from the set of about 70 available variables based on theoretical and practical considerations, but they do not explain why and how the time series were aggregated and subsequently analyzed in a moving window. Time series aggregation should be done with care, especially if the series are correlated. Some rating scales were reversed such that higher ratings represent a less positive mood. Any missing values were imputed using multivariate imputation by chained equations based on each subset of variables to be included in the analysis. An exception was made for a set of variables that contained more than 5% missing values due to the fact they were prompted only in the morning and in the evening (about the experience of the day). These variables were included because of their theoretical relevance. The layer similarity measures can be considered global network measures of coherence between the layers of the multiplex. In the present context of EWS, the fact that a peak is observed before transitions is similar to the hypothesized increase in autocorrelation associated with critical slowing down, which can also be evidenced by performing a PCA on multivariate time series data and observing that most variables load on the principal component with the highest eigenvalue. The goal of the present paper was to examine the potential of recurrence based analyses as a method for use in a clinical context in which process monitoring of physical and mental states is used to detect early warning signals of imminent shifts in a state variable, for example, the severity of depressive symptoms. 
The analysis strategy focused on analyzing only those data points that would be available to a clinician in real time, using sliding window analyses with a window size that would be acceptable in a setting in which a patient receives an intervention (7 days). The analyses reveal that layer similarity measures of a Multiplex network whose layers consist of cumulative (directed and weighted) recurrence networks display peaks within a 7\u201314 day window before a major shift occurred in the state variable. Local network measures were examined and revealed two subsystems, \u201cDay\u201d and \u201cSelf Esteem,\u201d which played a central role in the Multiplex network. The peaks in the layer similarity measures are most clearly visible in the Edge Overlap and Inter-layer Mutual Information. However, not every transition is preceded by a peak, and not every peak indicates a transition is imminent within 7\u201314 days. An advantage of the current analysis strategy over the original study is that there is no aggregation of the multivariate time series for the purpose of dimension reduction. That is, it is possible to go back to the individual dimensions to further investigate the profiles of recurring phases (coordinates in phase space) and construct a transition matrix. Another limitation of the method as presented in this paper is the lack of a reference for determining whether a peak is a potential EWS or merely a random fluctuation. There are several ways to at least check whether the observed peaks are \u201creal\u201d, for example, by repeating the analysis several times on surrogate data. A related issue concerns the fact that EWS are naive to the direction of change, that is, to be of clinical use it would be helpful to know whether an EWS implies the current state is deteriorating (higher symptom severity) or not. 
There have been theoretical advances towards finding the direction of least resilience in multivariate time series data. To summarize, recurrence network analysis of multivariate time series data has been shown to have great potential for use in clinical practice, specifically in the context of real-time process monitoring of individual patients. The current example has been a proof of principle; more thorough study and development is required into the relationship between transitions and the behavior of the (multiplex) recurrence measures."}
+{"text": "Improvements in single-cell RNA-seq technologies mean that studies measuring multiple experimental conditions, such as time series, have become more common. At present, few computational methods exist to infer time series-specific transcriptome changes, and such studies have therefore typically used unsupervised pseudotime methods. While these methods identify cell subpopulations and the transitions between them, they are not appropriate for identifying the genes that vary coherently along the time series. In addition, the orderings they estimate are based only on the major sources of variation in the data, which may not correspond to the processes related to the time labels. We introduce psupertime, a supervised pseudotime approach based on a regression model, which explicitly uses time-series labels as input. It identifies genes that vary coherently along a time series, in addition to pseudotime values for individual cells, and a classifier that can be used to estimate labels for new data with unknown or differing labels. We show that psupertime outperforms benchmark classifiers in terms of identifying time-varying genes and provides better individual cell orderings than popular unsupervised pseudotime techniques. psupertime is applicable to any single-cell RNA-seq dataset with sequential labels derived from the experimental design, and provides a fast, interpretable tool for targeted identification of genes varying along with specific biological processes. R package available at github.com/wmacnair/psupertime and code for results reproduction at github.com/wmacnair/psupplementary. Supplementary data are available at Bioinformatics online. Single-cell RNA-sequencing studies have been used to define the transcriptional changes in biological time series, including embryonic development. We demonstrate psupertime on a dataset comprising 411 acinar cells from the pancreas, from eight human donors with ages from 1 to 54 years. psupertime takes as input a set of sequential labels for the cells (e.g. day1, day3, day1, day2, day3). 
psupertime then identifies a set of ordering coefficients, \u03b2i, one for each gene. The vector of coefficients is sparse, in the sense that many of the values are zero; these genes therefore have no influence on the ordering of the cells. Genes with non-zero coefficients are therefore identified by psupertime as relevant to the process which generated the sequential labels. psupertime requires two inputs: (i) a matrix of log read counts from single-cell RNA-seq, where columns correspond to genes and rows correspond to cells; and (ii) a set of labels for the cells, with a defined sequence for the labels. Its outputs are: (i) a small and therefore interpretable set of genes with non-zero coefficients; (ii) a pseudotime value for each individual cell, obtained by multiplying the log gene expression values by the vector of coefficients; and (iii) a set of values along the pseudotime axis indicating the thresholds between successive sequential labels (these can then be used for classification of new samples). Where the data do not have condition labels, psupertime can be combined with unsupervised clustering to identify relevant processes. To restrict the analysis to relevant genes and denoise the data, psupertime first applies pre-processing to the log transcripts per million values. Specifically, psupertime restricts to highly variable genes, as defined in the scran package in R, i.e. genes that show higher than expected variance relative to genes with similar mean expression. psupertime then applies cross-validated regularized ordinal logistic regression to the processed data, using the labels as the sequence. Ordinal logistic regression is an extension of binary logistic regression to an outcome variable with more than two labels, where the labels have a known or hypothesized sequence. 
The likelihood for ordinal logistic regression is defined by multiple simultaneous logistic regressions, where each one models the probability of a given observation having an earlier or later label, with the definition of \u2018early\u2019/\u2018late\u2019 differing across the simultaneous regressions. To obtain a sparse set of coefficients, psupertime uses L1 regularization. Our approach is based on that in the R package glmnetcr. Here, xi and yi are the vector and integer corresponding to the ith observation and label respectively, j indicates one of the possible condition labels, \u03b2 is the vector of coefficients, and \u03b8 the vector of label thresholds; the unpenalized likelihood is the product over observations of the probability assigned to the label yi of observation i. Including the L1 penalty, for a given value of \u03bb, we obtain the optimal values of \u03b2 and \u03b8 by maximizing the penalized objective function, namely the log likelihood minus \u03bb times the sum of the absolute values of the coefficients. The selected \u03bb is the value with the highest mean score over all held-out folds. To increase sparsity, we use the highest value of \u03bb with mean training score within one standard error of the optimal \u03bb, rather than the optimal \u03bb itself, to obtain the best-fitting model. Where psupertime is used to classify completely new data (e.g. from a different experiment), to make the predictions more robust, the cross-validation should take data structure into account. The psupertime procedure results in a set of coefficients for all input genes (many of which will be zero) that can be used to project each cell onto a pseudotime axis, and a set of cut-offs indicating the thresholds between successive sequential labels. 
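A hedged reconstruction of the penalized objective described above, using the symbols defined in the text (the exact form in the original paper may differ in notation):

```latex
\ell_{\mathrm{pen}}(\beta, \theta)
  \;=\;
  \sum_{i=1}^{n} \log P\!\left(y_i \mid x_i;\, \beta, \theta\right)
  \;-\;
  \lambda \sum_{g} \lvert \beta_g \rvert
```

Maximizing this for a given \lambda trades goodness of fit against the L1 penalty, which drives many of the gene coefficients \beta_g exactly to zero.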
The small, interpretable set of genes reported to have non-zero coefficients permits both validation that the procedure has been successful (by observation of genes known to be relevant to the process) and discovery of new relevant genes. The magnitude of a coefficient is a measure of the contribution of this gene to the cell ordering. More precisely, for a gene with coefficient \u03b2i, each unit increase in log transcript abundance multiplies the odds ratio between earlier and later labels by exp(\u03b2i); where \u03b2i is small, a Taylor expansion indicates this is approximately equal to a linear increase by a factor of \u03b2i. The thresholds indicate the points along the psupertime axis at which the probability of label membership is equal for labels before the cut-off, and after the cut-off. The distances between thresholds, namely the size of transcriptional difference between successive labels, are not assumed to be constant and are learned by psupertime. Distances between thresholds therefore indicate dissimilarity between adjacent labels, and thresholds which are close together suggest labels which are transcriptionally difficult to distinguish. The learned geneset can also be used as input to dimensionality reduction algorithms such as t-SNE or UMAP. Rather than learning a pseudotime for one fixed set of input points, psupertime learns a function from transcript abundances to the pseudotime. It can therefore be trained on one set of labels and applied to new data with unknown or different labels: any data with overlapping gene measurements can be assessed with regard to the learned process. Furthermore, psupertime can be learned on two different datasets, with different labels, and then each applied to the other dataset: the sequential labels from one dataset allow coefficients relevant to that sequence to be learned, which can then be used to predict these labels for the second dataset. 
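The projection and classification of new cells described above amounts to a dot product followed by a threshold lookup. A minimal sketch (all numbers below are made up for illustration; the coefficient and threshold values would come from a fitted psupertime model):

```python
import numpy as np

beta = np.array([0.0, 1.2, 0.0, -0.4])        # sparse gene coefficients (made up)
thresholds = np.array([-0.5, 0.8])            # cut-offs between 3 labels (made up)

def pseudotime(log_expr):
    """Project cells (rows of a log-expression matrix) onto the pseudotime axis."""
    return log_expr @ beta

def classify(log_expr):
    """Assign each cell the label of the inter-threshold interval it falls in."""
    return np.searchsorted(thresholds, pseudotime(log_expr))

cells = np.array([[1.0, 0.1, 2.0, 3.0],
                  [0.0, 2.0, 0.5, 0.2]])
print(pseudotime(cells))   # [-1.08  2.32]
print(classify(cells))     # [0 2]
```

Because the projection only needs the genes with non-zero coefficients, any new dataset measuring those genes can be placed on the learned axis.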
psupertime is principally useful because it can identify genes which vary over the course of time-series labels. To test this capability, we simulated single-cell RNA-seq data to include three types of gene profiles, defined in terms of their mean expression: mean varying as a time series; sample-specific variation in the mean; and constant mean expression. All genes have biological and technical noise around this mean. This mimics the likely experimental setup, in which the expression at each timepoint is composed of both processes related to the time series, and unrelated variability particular to that sample, e.g. where the samples are derived from different mice. Our simulation procedure was as follows: (i) calculate relevant statistics from a selected reference dataset, composed of multiple labels; (ii) randomly sample latent time values for each cell, around a common mean for the cell\u2019s label; (iii) randomly assign one of the three gene profile types to each gene and randomly sample some parameters for each gene; and (iv) sample counts for each cell and gene based on the combination of cell- and gene-level parameters. We discuss each of these steps in turn. The reference dataset comprised cells labelled with seven distinct time labels. The statistics used were the library size for each cell and the mean \u03bcg and dispersion \u03c1g for each gene g. To sample the latent time values for a label, we drew values around a common mean for that label, with mean 0 and standard deviation 1 across cells. The three possible types of gene expression profile that we defined were: time series; label-specific; and non-specific. Each gene follows one of these profiles. Each gene has dispersion and base mean expression defined by the reference dataset. For time-series genes, the logistic curve is defined by three values: t0, the curve\u2019s midpoint; k, half the derivative of the curve at that midpoint; and L, the asymptotic maximum value of the curve. 
Time-series genes have expression which changes with respect to the latent time values for each cell, where the log fold change relative to the mean follows a logistic curve defined by the midpoint t0, the slope parameter k and the asymptotic maximum L. The log mean expression of this gene in a cell with latent time value tc is therefore the base log mean plus this logistic log fold change. We sample t0 from a uniform distribution; k from a log10-normal distribution with mean 1 and standard deviation 1; and L from a gamma distribution with shape 4 and rate 2. Label-specific gene profiles are defined by two parameters: the sample in which they show differential expression, and the log fold change in that sample relative to the mean. For each gene, we uniformly at random select a label, and sample the log fold change from a gamma distribution with shape 4 and rate 2. Genes with non-specific expression are defined by the dispersion and base mean identified from the reference dataset, and have no difference in distribution across labels. Each simulation has a defined set of proportions for each type of gene profile. We now have all the parameters required to sample counts for each combination of cell and gene. The gene-level parameters define, via the combination of base mean expression and possibly also a log fold change relative to the base mean, the mean expression for a given gene, plus its dispersion. The cell-level parameters define the library size for each cell, which is used to scale the base mean. For each cell and gene combination, we sample from the defined negative binomial distribution. To simulate time-series data comprising multiple cell types, we used fluorescence-activated cell-sorted stem cells at different stages of differentiation, assuming that mean gene expression followed one of three possible profiles: time-series, label-specific or a constant mean across labels. 
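The sampling step for a single time-series gene can be sketched as follows. This is an illustration of the procedure described in the text, not the authors' code; the parameter values and the exact logistic parameterization are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_lfc(t, t0, k, L):
    """Log fold change following a logistic curve in latent time t
    (midpoint t0, slope parameter k, asymptotic maximum L)."""
    return L / (1.0 + np.exp(-2.0 * k * (t - t0)))

def sample_counts(latent_t, base_mean, dispersion, lib_size, t0, k, L):
    """Sample negative-binomial counts for one time-series gene across cells."""
    log_mean = np.log(base_mean * lib_size) + logistic_lfc(latent_t, t0, k, L)
    mu = np.exp(log_mean)
    # numpy's NB sampler is parameterized by (n, p); convert from (mean, dispersion)
    n = 1.0 / dispersion
    p = n / (n + mu)
    return rng.negative_binomial(n, p)

# latent times for 3 labels x 50 cells, normal around each label's mean
t = rng.normal(loc=np.repeat([0.0, 1.0, 2.0], 50), scale=1.0)
counts = sample_counts(t, base_mean=5.0, dispersion=0.2,
                       lib_size=1.0, t0=1.0, k=0.5, L=2.0)
```

Label-specific genes would replace the logistic term with a constant log fold change in one randomly chosen label, and non-specific genes would omit it entirely.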
We varied the proportions of these types of gene, so that the proportion of genes following time-series profiles differed across simulations. We note that due to different combinations of randomly selected parameters, some time-series genes in the simulations will be easier to detect and some more difficult. For example, a gene with a low absolute log fold change value L and high dispersion \u03c1 will have a poor signal-to-noise ratio for the detection of time-series trends. This puts biologically realistic limits on the best performance possible for any algorithm, as for some genes any time-series trends will be obscured by transcriptional variability. For this reason, and also because psupertime is intended to identify a small set of genes, we have restricted our analysis to values of recall between 0% and 10%. Both psupertime and unsupervised pseudotime techniques produce a cell ordering, which may or may not correlate with the label ordering. We compared psupertime against unsupervised pseudotime methods, on five datasets with time-series labels. To identify highly variable genes, we followed the procedure described by Lun et al., using a false discovery rate (FDR) cut-off of 10% and a biological variability cut-off of 0.5. For principal component analysis (PCA), we calculated the first principal component of the log counts and used this as the pseudotime. Calculation of Monocle2 uses the following default settings: genes selected by mean expression; dimensionality reduction via DDRTree; root state selected as the state with the highest number of cells from the first label; function orderCells used to extract the ordering. Note: for cells very distant from the selected path, slingshot does not give a pseudotime value. For these cells, we assigned the mean pseudotime value over those that slingshot did calculate. 
Calculation of psupertime used default settings, as described in Section 2. Calculation of slingshot uses the following default settings: first 10 PCA components used as dimensionality reduction; clustering via Gaussian mixture model clustering using the R package mclust, with the number of clusters selected by Bayesian information criterion; root and leaf clusters selected as the clusters with the highest number of cells from the earliest and latest labels, respectively; the lineage selected for pseudotime is the path from root to leaf cluster. We tested the extent to which each pseudotime method could correctly order the cells by calculating measures of correlation between the learned pseudotime and the sequential labels. Kendall\u2019s \u03c4 considers pairs of points and calculates the proportion of pairs in which the rank ordering within the pair is the same across both possible rankings. To identify genes with high correlation with the sequential condition labels, we treated the labels as ordinal values. To identify biological processes associated with the condition labels, psupertime first clusters all genes selected for training (e.g. the default highly variable genes), using the R package fastcluster, with five clusters by default. These are ordered by correlation of the mean expression values with the learned pseudotime, i.e. approximately into genes that are up- or down-regulated along the course of the labelled process. psupertime then uses topGO to identify biological processes enriched in each cluster, relative to the remaining clusters; enriched GO terms are calculated using algorithm = \u2018weight\u2019 and statistic = \u2018fisher\u2019. Clusterin (CLU) plays an essential role in pancreas regeneration and is expressed in chronic pancreatitis; \u03b1-amylase (AMY2B) is a characteristic gene for mature acinar cells, encoding a digestive enzyme. A non-zero ordering coefficient indicates that a gene was relevant to the label sequence. 
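Kendall's \u03c4, as described above, can be computed directly from its pair-counting definition. A minimal O(n\u00b2) sketch without tie correction:

```python
def kendall_tau(a, b):
    """Kendall's tau: (concordant - discordant) pairs over all pairs.
    Simple quadratic version, no correction for ties."""
    n = len(a)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 (identical orderings)
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (reversed orderings)
```

In practice one would use a library implementation (e.g. scipy.stats.kendalltau), which handles ties and runs in O(n log n).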
This balances the requirement for predictive accuracy against that for a small and therefore interpretable set of genes. For example, applied to the acinar cells, psupertime used 82 of the 827 highly variable genes to attain a test accuracy of 83% over the eight possible labels. Many of the selected genes are known to be relevant to acinar cells. GO term enrichment analysis provides further support for the validity of the cell ordering identified by psupertime. We clustered the expression profiles of the highly variable genes and identified GO terms characteristic of each cluster (see Section 2). This procedure identified genes related to digestion as being up-regulated in early ages (\u2018proteolysis\u2019 and \u2018digestion\u2019 enriched in cluster 1), and terms related to ageing later in the process. We compared psupertime\u2019s performance against two benchmark classification methods, which also identify relevant features: multinomial regression, as a simple baseline approach to classification, and a second benchmark classifier. psupertime attained comparable accuracy (Kendall\u2019s \u03c4 correlation coefficient 0.86, which quantifies the concordance between two orderings), while providing a sparse interpretable gene signature related to age. Unsupervised projection techniques are commonly applied to analyse time-series single-cell RNA-seq data. We therefore compared the cell-level orderings identified by psupertime with those from three alternative, unsupervised pseudotime techniques: projection onto the first PCA component, as a simple, interpretable baseline; Monocle 2; and slingshot. The unsupervised methods attained \u03c4 values of 0.12 or below for the human germline dataset. We found that psupertime is best able to identify globally varying genes when applied to all cell types together, and best able to identify cell-type-specific genes via application to each cell type individually. Typical workflows for single-cell RNA-seq data first restrict to highly variable genes. 
If the data are instead first restricted to genes that correlate strongly with the sequential labels, the relative performance of the benchmark methods might improve. Despite the selection of genes that correlate with the labels, psupertime consistently outperforms unsupervised methods in terms of identifying individual cell orderings. The time taken for psupertime to run varies over the five test datasets, from 4\u2009s for the smallest dataset. The number of studies using single-cell RNA-seq is increasing exponentially. Financial Support: none declared. Conflict of Interest: none declared."}
+{"text": "P2P lending is an important part of Internet finance, which is popular among users because of its efficiency, low cost, wide range, and ease of operation. The problem of predicting loan defaults is affected by many factors, such as the linear and nonlinear nature of the data itself, time dependence, and multiple external factors, which have not been well captured in previous work. In this paper, we propose a multiattention mechanism to capture the different effects of various time slices and various external factors on the results, introduce ARIMA and LSTM to capture the linear and nonlinear characteristics of the lending data respectively, and establish a Time Series Multiattention Prediction Model (MAT-ALSTM) based on LSTM and ARIMA. This paper uses the Lending Club dataset from the United States to prove that our model is superior to ANN, SVM, LSTM, GRU, and ARIMA models in terms of MAE, RMSE, and DA. Internet finance refers to financial services delivered via Internet technologies. In 2017, China surpassed the United States to become the largest online lending market in the world. The comprehensive transaction volume of online lending platforms reached 1163.98 billion yuan. According to the World Bank's forecast, the global crowdfunding market will reach $300 billion in 2025. Behind the rapid development of Internet finance lie many risks. As a basic part of the P2P market, P2P online lending platforms accept social loan requests and provide people with investment opportunities. Compared to traditional bank loans, P2P loan origination relies heavily on the borrower's credit as collateral, and P2P loans are funded by thousands of active lenders on the platform. 
These characteristics mean that P2P loans are unsecured, and the potential for property loss is high. Therefore, tracking and predicting the lending market dynamics of Internet finance makes it possible to grasp the systemic risk of a platform promptly, and developing appropriate default risk management methods is significant to the system, the platform, and users. However, the dynamic tracking and prediction of lending data is difficult due to the high liquidity, uncertainty, and volatility of the lending market. In addition, the number of influential variables in the online environment has increased, and the relationships among the various time series have become more complex. The prediction of P2P lending data is a typical multifactor time series prediction problem. Most traditional time series forecasting algorithms model financial P2P lending data as time variable series. Among them, the ARIMA model is a classic time series forecasting method, which can better reflect the linear characteristics of time series data. However, financial time series data in general consist of two components, linear and nonlinear, so a single linear model is insufficient. We instead model loan data as multifactor time series data, fully considering the time correlation of P2P loan markets and the linear and nonlinear dependencies between time series, and model the linear and nonlinear properties of the lending data with ARIMA and LSTM, respectively. This paper proposes a multiattention mechanism to integrate input variables, where according to the different influences of historical data in different periods on the prediction period, a time series attention mechanism is proposed. 
Aiming at the different influences of the varying time sequence features of input variables on the target value, a sequence feature attention mechanism is proposed. Given the characteristics of P2P loan market data, this paper uses long short-term memory neural networks to capture the time correlation of loan data and proposes a time series P2P market loan default prediction model, MAT-ALSTM, based on a multiattention mechanism. This paper conducts a large number of experiments on a real dataset, which proves the superiority of the MAT-ALSTM model proposed in this paper in terms of RMSE, MAE, and DA compared with LSTM, GRU, and other models. Based on the above analysis of the P2P data prediction problem, this paper proposes a new MAT-ALSTM (Time Series Multiattention Prediction Model Based on LSTM and ARIMA) model, which can be effectively applied to multifactor financial time series forecasting; the contributions of this paper are summarized above. The rest of this paper is organized as follows. In this section, we review the previous work on modeling and prediction of financial lending data in terms of (1) traditional machine learning-based prediction work and (2) deep learning network-based prediction work. Some research works are based on traditional statistical methods and machine learning. Traditional statistical models analyze and predict default risk by finding the optimal linear combination of input variables, mainly including logistic regression, discriminant analysis, risk index models, and probabilistic models. For example, Serrano-Cinca and Guti\u00e9rrez-Nieto demonstrated the use of such linear models to explain default in P2P lending. There are different kinds of deep learning models: deep multilayer perceptron, RNN, LSTM, CNN, restricted Boltzmann machines, and autoencoders. Deep learning models have also been applied to financial prediction tasks. Traditional statistical methods and machine learning methods only consider several features of a single lending record without considering the temporal coupling relationship existing in the data itself. 
Besides, many machine learning methods cannot capture the nonlinear relationships among time series data well, resulting in poor model coupling. Some time series deep learning models can handle time series data better; however, many models do not consider the interactions between adjacent one-step or multistep time slices and fail to capture the time dependence well. Few works consider the importance of different sequence features for the prediction results. Machine learning and deep learning models are able to handle multifeature data; however, different features have different impacts on lending default rates, and many models fail to capture this well. These are the conclusions we draw from the analysis of existing work. Based on the above analysis, this paper proposes a time series prediction model based on a multiattention mechanism, using the ARIMA and LSTM algorithms to deal with both the linear and nonlinear features of financial loan data and to fully consider the time series structure and the different importance of sequence features to the results. Time series forecasting refers to using historical data from a past period to predict data in the future, including continuous forecasting and discrete forecasting (event forecasting and data classification). A time series is a sequence of observations recorded in chronological order over a fixed time interval. Let X={X1, X2,\u2026, XT} be the historical data of a time series and H be the desired prediction range; the task is to predict the next values of the series, {XT+1, XT+2,\u2026, XT+H}. The prediction problem is to fit a model to predict the future values of the series, taking into account the past values. 
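The forecasting task above is usually set up by slicing the series into fixed-length input windows and the target value some horizon ahead. A minimal sketch (function and parameter names are ours):

```python
import numpy as np

def make_windows(series, window, horizon):
    """Turn a 1-D series into (X, y) training pairs: each row of X holds
    `window` past values and y holds the value `horizon` steps ahead."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_windows(np.arange(10), window=3, horizon=1)
# X[0] = [0, 1, 2], y[0] = 3
```

Multifactor forecasting generalizes this by stacking F feature columns per time step, so each input window becomes a (window, F) matrix.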
The financial lending time series forecasting problem can be summarized as follows. Given a period \u03c4, each financial time series includes F eigenvalues; all the financial time series features in period \u03c4 can be written as X\u03c4 \u2208 RN\u00d7F, where Xi\u03c4 \u2208 RF represents all eigenvalues of the i-th time series, N is the total number of time series, and yi\u03c4 \u2208 R represents the characteristics of the financial variable in the future \u03c4 time period. The financial prediction model uses the past financial time series features X\u03c4 to predict the financial variable features yi\u03c4 \u2208 R in the future \u03c4 time period. LSTM (long short-term memory) is an improved recurrent neural network. In the following, xt represents the input vector at time t, ht\u22121 represents the output at time t\u22121, Wf, Wi, WC, Wo and Uf, Ui, UC, Uo represent the weight matrices, and bf, bi, bC, bo represent the bias vectors; the process by which the memory module updates its state and outputs information is as follows. First, the forget gate forgets useless historical information: ft = \u03c3(Wfxt + Ufht\u22121 + bf). Then, the input gate performs state updates based on the input data and historical information: it = \u03c3(Wixt + Uiht\u22121 + bi), Ct\u2032 = tanh(WCxt + UCht\u22121 + bC), Ct = ft \u2217 Ct\u22121 + it \u2217 Ct\u2032. Here \u03c3 is the logistic sigmoid function; ft, it, and ot, respectively, represent the output state of the forget gate, the input gate, and the output gate at time t; and Ct represents the state of the memory cell at time t. Finally, the output gate outputs the information at the current moment: ot = \u03c3(Woxt + Uoht\u22121 + bo), ht = ot \u2217 tanh(Ct). The attention mechanism in deep learning is a resource-allocation mechanism. In the time series problem it divides the learning model into two parts. First, an encoder composed of a single-layer or multilayer RNN encodes the input sequence according to the time relationship. Secondly, a decoder composed of neural network units of similar structure converts the encoding vector E into time series information with a prediction length of \u03c4. 
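The gate updates described above can be written as a single numpy step. This is an illustrative sketch only; the dictionary layout and shapes are our convention, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, U, b):
    """One LSTM memory-module update following the gate equations above.

    W, U, b are dicts keyed by 'f', 'i', 'C', 'o' holding the weight
    matrices and bias vectors for each gate
    (shapes: W[k] is (H, D), U[k] is (H, H), b[k] is (H,)).
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])    # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])    # input gate
    C_hat = np.tanh(W['C'] @ x_t + U['C'] @ h_prev + b['C'])  # candidate state
    C_t = f_t * C_prev + i_t * C_hat                           # state update
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])    # output gate
    h_t = o_t * np.tanh(C_t)                                   # module output
    return h_t, C_t
```

Iterating this step over t, carrying (h, C) forward, processes a full sequence; in practice a framework implementation (e.g. a stock LSTM layer) would be used.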
The encoder is used to learn the pre- and post-dependency relationships and the current state representation of the known sequence, and generates the state representation of the current time. During the process of cyclic encoding, we obtain the state of the last moment and keep it; this state retains the dynamic information of the input sequence and the current sequence state, and is denoted as the vector E. The output value of the decoder at time j is the predicted value at the corresponding time. In the traditional model, the context vector E used at each moment of decoding is fixed. This structure does not incorporate into the model the principle that different times carry different information. Therefore, with attention, at each decoding time j the encoder side provides a dynamic context vector Ej with different attention information, so that the decoding process can pay more attention to the prediction content of the current time and to important historical information. Here F represents the process of combining the attention mechanism with the encoder part, hj\u22121\u2032 is the hidden state of the previous step at time j of the decoder, and h is the hidden state set of the encoder. In this section, on the basis of Section III, a multiattention-mechanism-based time series financial market dynamic prediction model, MAT-ALSTM, is proposed. The model structure is shown in the corresponding figure. The MAT-ALSTM model consists of the following modules: a data preprocessing module, a time series modeling module, and a prediction module. Among them, the data preprocessing module processes the raw financial loan time series data for missing values and outliers and normalizes the data to meet the needs of model training. 
The time series modeling module consists of two parts: ARIMA modeling and LSTM modeling. ARIMA modeling mainly extracts the linear part of the loan time series data, and LSTM modeling mainly extracts the nonlinear part. In the LSTM module, we also introduce a multiattention mechanism to distinguish the different effects of different time series and sequence features on the results. Finally, the prediction module combines the modeling results of ARIMA and LSTM to obtain the final prediction. This work uses lending data disclosed by the US P2P lending platform Lending Club as the data source. The data range is from June 2008 to December 2015, with a total of 2,260,701 records in the raw data. For missing values and outliers, first, outlier detection is performed on the data, outliers are regarded as missing values, and the missing values are filled using Lagrangian interpolation. Secondly, we calculate the average default rate of new loans per month as the dependent variable. In order to make the data more suitable for the model training process, we standardized the data by max-min normalization on columns: x\u2032 = (x \u2212 xmin)/(xmax \u2212 xmin). Financial loan data have both linear and nonlinear features. Denote the loan data as Dt, which can be decomposed into a linear part lt and a nonlinear part nt, i.e. Dt = lt + nt. In the preprocessing process, the data are first processed for missing values. Of the 144 feature columns in the original dataset, 60 columns contain missing values and 43 columns have a percentage of true values above 30%, so we first remove the features with missing values above 30%. The digitized features are filled using the mean or median of other samples, and the nondigitized features are predicted using Sklearn. 
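The column-wise max-min step can be sketched in a few lines (names are ours):

```python
import numpy as np

def max_min_normalize(X):
    """Column-wise max-min scaling of a feature matrix to [0, 1]."""
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    return (X - lo) / (hi - lo)

X = np.array([[1.0, 10.0],
              [3.0, 30.0]])
print(max_min_normalize(X))   # each column scaled to [0, 1]
```

Note that the per-column minima and maxima computed on the training split should be reused when scaling validation and test data, to avoid leakage.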
For outliers, we take the approach of using the mean rather than the outlier samples. The ARIMA model combines an autoregressive process of order p (AR(p)) and a moving average process of order q (MA(q)). Here p is the lag order of the autoregressive process, q is the lag order of the moving average process, \u03bc is the constant term coefficient, and \u03f5t is the random disturbance term sequence, assumed to be white noise. The ARIMA model requires that the series be stationary during modeling. For nonstationary time series data, a d-order difference should be performed before modeling, which yields the complete differenced autoregressive moving average process, ARIMA. Introducing the lag factor L, defined as Lnyt = yt\u2212n, the d-order difference operation can be written as (1 \u2212 L)d. First, we use the ARIMA model to model the data, fit the training data, and make predictions; the output prediction result is recorded as Xt\u2032, and the residual error can be obtained as Rt = Dt \u2212 Xt\u2032. In view of the different influences of historical data of different time periods on the prediction period and the different influences of different sequence features of the input variables on the target value, we propose a multiattention mechanism. The hidden state at different times receives different degrees of attention. In the time dimension, an attention mechanism based on the historical state of the lending market is constructed as follows: hj\u22121\u2032 is the hidden state of the model training in the previous stage; ht is the loan market state at time t; VaT, Wa, and Ua are all learnable parameter matrices; etj represents the degree of influence of the state of the loan market at time t in the encoder on the state output at the current predicted time j. Finally, the softmax function is used to normalize etj to obtain the weight factor of the current prediction for the state of the loan market at each historical time. 
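ARIMA fitting is normally done with a dedicated library; as a self-contained illustration of the decomposition into a linear prediction Xt\u2032 and a residual Rt, the sketch below substitutes a plain least-squares AR(p) fit for the full ARIMA model (all names and data are ours):

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model with intercept by least squares
    (a simplified stand-in for full ARIMA estimation)."""
    rows = [series[t - p:t] for t in range(p, len(series))]
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(A, series[p:], rcond=None)
    return coef

def ar_predict(series, coef, p):
    """One-step-ahead in-sample predictions for t = p .. T-1."""
    rows = np.array([series[t - p:t] for t in range(p, len(series))])
    return np.column_stack([np.ones(len(rows)), rows]) @ coef

# toy series: smooth signal plus noise
series = np.sin(np.linspace(0, 8, 100)) \
         + 0.1 * np.random.default_rng(1).normal(size=100)
coef = fit_ar(series, p=2)
linear_part = ar_predict(series, coef, p=2)   # analogue of X_t'
residual = series[2:] - linear_part           # analogue of R_t, fed to the LSTM
```

The residual series carries whatever structure the linear model could not explain, which is exactly the component the LSTM branch is asked to learn.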
etj\u2032 is the attention value in the time dimension. Different sequence features receive different degrees of attention. In the sequence feature dimension, an attention mechanism based on the time series features of the lending market is constructed as follows: hj\u22121\u2032 is the hidden state of the model training in the previous stage; VpT, Wp, and bp are all learnable parameters; ptj represents the degree of influence of the sequence feature of the lending market at time t in the encoder on the state output at the current prediction time j; finally, ptj is normalized through the softmax function to obtain the weight factor ptj\u2032 of different sequence features for the current prediction, that is, the attention value in the sequence feature dimension. The nonlinear components in the loan data are hidden in the residual sequence Rt. The LSTM model is used to train on and predict the residual sequence, obtaining the predicted value Et\u2032 of the residual sequence. Xt\u2032 is used as the output of the linear feature capture module; the original loan data Dt are then used to compute the residual with respect to Xt\u2032, denoted Rt, which is the nonlinear part of the lending data. A temporal attention mechanism is then added over the time dimension of the data and a multifactor attention mechanism over the feature dimension, after which the LSTM captures this part of the features. 
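Both attention branches share the same score-then-softmax shape. A minimal numpy sketch of the temporal branch (shapes and names are our assumptions about the formulas referenced above):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def temporal_attention(h_enc, h_dec_prev, W_a, U_a, v_a):
    """Score each encoder state h_t against the previous decoder state,
    then normalize with softmax to obtain the weight factors e'_tj."""
    scores = np.array([v_a @ np.tanh(W_a @ h_t + U_a @ h_dec_prev)
                       for h_t in h_enc])
    return softmax(scores)   # non-negative weights summing to 1
```

The feature-dimension branch would apply the same pattern across the F feature columns instead of across the T encoder time steps.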
Finally, the results are linearly summed to obtain the final prediction. After preprocessing, the data are first modeled by the ARIMA model. Combining the linear prediction value Xt\u2032 of the ARIMA time series model and the nonlinear prediction value Et\u2032 of the LSTM model, the final prediction value is Resultt\u2032 = Xt\u2032 + Et\u2032. In the experiment, the desensitized data published on the official website of Lending Club were selected. The time range is from 2008 to 2015. The data include personal information, loan information, and credit information. The loan status variable is the main focus of this paper. It has seven states: fully paid, charged off, default, current, in grace period, late (16\u201330 days), and late (31\u2013120 days). In this paper, we argue that all statuses should be considered as default except for the status of full payment. Here yi is the true value of the original sample data, and m is the number of samples. Models built on time series data must make accurate forecasts in two dimensions: prediction accuracy and prediction trend accuracy. Therefore, we choose root mean square error (RMSE), mean absolute error (MAE), and directional accuracy (DA) as the evaluation metrics in this paper. RMSE and MAE are used to evaluate the prediction accuracy of model predictions, and DA is used to evaluate the forecast trend accuracy of the models' predictions: RMSE = sqrt((1/m)\u03a3(yi \u2212 \u0177i)\u00b2) and MAE = (1/m)\u03a3|yi \u2212 \u0177i|. In the following, we explain the experimental set-up for the models involved in this paper, as shown in the tables. We use ARIMA, SVM, ANN, LSTM, and GRU as baseline models. Here, we will briefly introduce the baseline models. 
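The three metrics can be computed directly; the sketch below uses matching step directions for DA, one plausible reading of directional accuracy (names ours):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    """Mean absolute error."""
    return np.mean(np.abs(y - y_hat))

def directional_accuracy(y, y_hat):
    """Fraction of steps where the prediction moves in the same
    direction (up/down) as the true series."""
    return np.mean(np.sign(np.diff(y)) == np.sign(np.diff(y_hat)))

y_true = np.array([1.0, 2.0, 1.5, 2.5])
y_pred = np.array([1.1, 1.9, 1.7, 2.4])
print(rmse(y_true, y_pred))                  # ~0.132
print(mae(y_true, y_pred))                   # ~0.125
print(directional_accuracy(y_true, y_pred))  # 1.0
```

Lower RMSE and MAE and higher DA indicate a better forecaster, which is how the model comparison in the next section is read.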
ARIMA is an autoregressive integrated moving average model; the remaining baselines are standard classification and sequence models. From the results, the MAT-ALSTM model achieves the best overall performance. In order to compare the prediction accuracy of each model more conveniently, we draw a model comparison chart, shown in the corresponding figure. Next, we show the prediction results of the six models at some time points together with the real data. The MAT-ALSTM model proposed in this paper has a better prediction effect than the LSTM and GRU models. The SVM model and ANN model do not take into account the time series characteristics of loan data, so their prediction effect is poor. Compared with the ARIMA model and LSTM model alone, the MAT-ALSTM model has significantly better prediction results because it captures the linear and nonlinear characteristics of the loan data, respectively. Although the LSTM model and GRU model consider the time series characteristics of loan data, they do not consider the differing importance of time slices and sequence features, while the MAT-ALSTM model fully considers these characteristics and introduces multiple attention mechanisms, so its prediction effect is better. P2P lending is one of the important components of Internet finance. Accurate lending data prediction is of great significance for applications such as platform construction, user evaluation, and system upgrades. This paper takes the loan default rate of users as the research object, fully considers the linear and nonlinear characteristics of the loan data, and proposes a multiattention model, MAT-ALSTM, based on LSTM and ARIMA, which can handle the time series in loan data well and fully capture the importance of different time series and sequence features to the prediction results. 
The experimental results show that the MAT-ALSTM model in this paper has a better prediction effect than the GRU, LSTM, and other models. Theoretically, our model provides a new perspective for the processing and prediction of time series data, i.e., starting from the linear and nonlinear characteristics of the time series data itself, using the corresponding mechanisms to capture each, and merging the results to complete the prediction. In practice, our model achieves better prediction results than the LSTM and GRU models while its complexity is not too high, and it can be used as a base model to compose components of deeper models, which are more accurate than a simple RNN and its variants. In the future, we will try to combine the model with other domains, such as transportation and retail, to verify its usability there. Besides, in this paper we use a self-attentive mechanism to describe the importance of different time slices and features, and we will also try other attention mechanism designs. Finally, the interpretability problem of the model has been a bottleneck that hinders the application of deep learning, so we will try to improve the interpretability of the model."}
+{"text": "Time series appear in many scientific fields and are an important type of data. The use of time series analysis techniques is an essential means of discovering the knowledge hidden in this type of data. In recent years, many scholars have achieved fruitful results in the study of time series. A statistical analysis of 120,000 publications from 2017 to 2021 reveals that topical research about time series is mostly focused on classification and prediction. Therefore, in this study, we focus on analyzing the technical development routes of time series classification and prediction algorithms. Eighty-seven articles with high relevance and high citation counts are selected for analysis, aiming to provide a more comprehensive reference base for interested researchers. Time series classification is divided into supervised methods, semi-supervised methods, and early classification of time series, which is a key extension of the time series classification task. For time series prediction, from classical statistical methods, to neural network methods, and then to fuzzy modeling and transfer learning methods, the performance and applications of these different methods are discussed. We hope this article can aid the understanding of the current development status and help discover possible future research directions, such as exploring the interpretability of time series analysis and online learning modeling. To gain a comprehensive understanding of the current status of time series applications, we use time series as a keyword to search the Web of Science Core Collection and collect 120,000 references published between 2017 and 2021. 
Then, we use VOSViewer to visualize the analysis results. The subject category co-occurrence map of first-level disciplines, shown in the corresponding figure, indicates that two subjects have the highest number of matches, i.e., Engineering and Computer Science, both of which have a high total link strength; this can be attributed to the fact that these two subjects often serve as analysis tools for research in other domains. The 120,000 publications contain 161 unique level-1 subjects in total. To gain a clearer understanding of the application fields of time series, we remove the two subjects with the highest number of matches. Time series is a ubiquitous data type in our daily lives, and its analysis holds great value. Time series has been widely used in many fields; its applications are present in every aspect of our lives, and computational statistics and data analysis give us a new perspective and help us gain a deeper understanding of the world. Time series is an important data object, used in an extensive range of research, including classification, prediction, clustering, similarity retrieval, anomaly detection, and noise elimination. To identify the trending topics in current time series research, we further analyze the chosen studies. After removing generic terms like \u201ctime series\u201d, \u201ctime\u201d, and \u201canalysis of time series\u201d, we obtain a co-occurrence map of literature keywords, shown in the corresponding figure. The font size in the figure is related to the frequency of occurrence of a keyword: the larger the font, the higher the frequency. There are approximately seven clusters in the figure, representing algorithms and different application domains. Two main research topics are identified, namely, classification and prediction. 
Because this article focuses on the analysis of time series algorithms, we present our analysis and conclusions along the technical development routes of classification and prediction algorithms and discuss relevant areas for subsequent research. The main contributions of this article can be summarized as follows: \u2022a comprehensive analysis of prevalent topics in the field of time series; \u2022an investigation into the progress of time series classification and prediction problems in recent years, highlighting several widely studied technical development routes and discussing the improvement and optimization of these algorithms; \u2022a comparison of the performance of the different algorithms on multiple datasets, concluding with their advantages and disadvantages; \u2022and finally, an analysis of the challenges and future development tendencies of time series classification and prediction problems. Our study proceeds as follows. First, a literature analysis tool is used to identify currently popular research topics: we use VOSViewer to analyze the time series literature through keywords to examine the areas of greatest interest. These topics are classified into 478 categories, and the two research directions with the highest frequency are \u201cclassification\u201d and \u201cprediction\u201d. Then, the relevant scientific literature for the identified categories is located: we review related papers on time series classification and prediction and select 87 articles with high relevance and high citation counts for analysis. The scientific databases used in the search include Web of Science Core Collection, IEEE Xplore, ACM Digital Library, Springer Link, and ScienceDirect. Finally, based on these articles, important technical development routes are extracted, and a detailed analysis and summary are carried out. The remainder of this article is organized as follows. 
\u2018Related Work\u2019 introduces related work on time series investigation and compares our survey with other traditional surveys and review articles. \u2018Preliminaries\u2019 describes the fundamentals of time series classification and prediction tasks. \u2018Time Series Classification\u2019 elaborates on the development routes of time series classification and prediction algorithms, comparing their performance, analyzing the challenges being faced, and discussing future development trends. Finally, the \u2018Conclusion\u2019 concludes the article. Knowledge discovery in time series is an important direction for dynamic data analysis and processing. The urgent need to predict future data trends based on historical information has attracted widespread attention in many research fields. In the past few decades, many studies have summarized time series research methods from different perspectives. In contrast to those works, we focus on the development direction of time series technical routes, try to track the most primitive methods of each technical route, study the improvement ideas and strategies of subsequent methods, and compare the advantages and disadvantages of the various technical routes and methods. Finally, we provide new ideas for future work. Using data characteristics, time series can be classified into five categories: 1. Variables: According to the number of variables, time series can be divided into univariate and multivariate time series. A univariate time series contains only a single variable, while a multivariate time series contains multiple variables. 2. Continuity: Time series can be classified as discrete or continuous; for example, a gene sequence can be regarded as a discrete time series. Definition 2. Multivariate time series: X = \u27e8x1, \u2026, xT\u27e9 is a sequence vector, where each element xi is a univariate time series (the lengths of the xi may differ); X has T variables, with the ith variable being xi. Definition 3. 
Subsequence: Given a time series s with length L, a subsequence ssub of s with length n < L starts at position m in s and ends at position m + n \u2212 1, represented as ssub = \u27e8tm, \u2026, tm+n\u22121\u27e9, where 1 \u2264 m \u2264 L \u2212 n + 1. Definition 4. Similarity degree: For two time series, b and s (assuming |b| \u2264 |s|), the similarity degree can be computed by Sim(b, s) = min{dist(b, si)}, where si is an arbitrary subsequence of s that satisfies the condition |b| = |si|. Definition 5. Shapelet: A shapelet is the subsequence of a time series s with the strongest discriminative ability. Specifically, a shapelet can be represented by p = \u27e8b, \u03b4, c\u27e9, where b, \u03b4, c are the subsequence, threshold, and class label, respectively. If an unknown time series satisfies the condition Sim \u2264 \u03b4, then it can be categorized into class c. Definition 6. Euclidean distance: Euclidean distance is a frequently used distance measurement to determine the degree of similarity of two time series. For sequences b and c, both with length L, the Euclidean distance can be calculated as distED(b, c) = \u221a(\u2211i=1..L (bi \u2212 ci)\u00b2). Definition 7. Dynamic time warping (DTW): DTW is another widely used distance measurement method. Compared with the Euclidean distance, it can compute the minimum distance between two sequences of different lengths. 
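Definitions 3 through 6 can be made concrete with a short sketch: sliding a candidate subsequence along a longer series, taking the minimum Euclidean distance, and applying a shapelet's threshold rule. Plain Python; the series and the shapelet triple are made-up toy values, not from any benchmark.

```python
import math

def euclid(b, c):
    """Euclidean distance between two equal-length sequences (Definition 6)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(b, c)))

def similarity(b, s):
    """Sim(b, s): minimum distance of b to any |b|-length subsequence of s (Definition 4)."""
    n = len(b)
    return min(euclid(b, s[m:m + n]) for m in range(len(s) - n + 1))

def shapelet_classify(s, shapelet):
    """Assign class c if the series matches the shapelet within threshold delta (Definition 5)."""
    b, delta, c = shapelet
    return c if similarity(b, s) <= delta else None

# toy shapelet p = (subsequence b, threshold delta, class label c)
p = ([1.0, 2.0, 1.0], 0.5, "spike")
series = [0.0, 0.1, 1.1, 2.0, 1.0, 0.2]
label = shapelet_classify(series, p)   # the spike pattern occurs near the middle
```

Real shapelet methods additionally search for the most discriminative `b` and learn `delta` from class separation; the sketch only shows the matching rule.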
Because of its wide application, the principle is not explained here; the distance is simply written as distDTW(b, c) = DTW(b, c). In time series classification and prediction tasks, the most basic and widely used algorithms are 1NN-DTW (1-nearest-neighbor dynamic time warping) and the autoregressive (AR) and moving average (MA) models. The 1NN-DTW model uses DTW as the distance measurement and the simple but effective 1NN algorithm to find the nearest training sample of the current instance, assigning that sample's class label to the instance. This model requires no parameter training and has high accuracy. The following pseudocode describes the procedure of 1NN-DTW.
_______________________ Algorithm 1 1NN-DTW _______________________
Require: T: labeled time series dataset with N samples
Ensure: acc: average 1NN classification accuracy
1: Num = 0
2: for each instance si of T do
3:   compute the DTW distance from si to every other instance of T;
4:   assign the label ypred of the closest instance to si;
5:   if ypred == ysi then
6:     Num = Num + 1;
7:   end if
8: end for
9: acc = Num / N;
___________________________________________________________________
\u2022 AR model: AR(p), in which a time series value Xt can be expressed as a linear function of its previous values plus an impact value \u025bt. 
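A direct Python rendering of Algorithm 1, using leave-one-out 1NN with a textbook O(nm) dynamic-programming DTW. The toy dataset is invented for illustration.

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two sequences (possibly of unequal length)."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def one_nn_dtw_accuracy(dataset):
    """Leave-one-out 1NN-DTW accuracy, following Algorithm 1.
    dataset: list of (sequence, label) pairs."""
    num = 0
    for i, (s, y) in enumerate(dataset):
        others = [(seq, lab) for j, (seq, lab) in enumerate(dataset) if j != i]
        y_pred = min(others, key=lambda t: dtw(s, t[0]))[1]
        if y_pred == y:
            num += 1
    return num / len(dataset)

data = [([0, 1, 2, 3], "up"), ([0, 1, 2, 4], "up"),
        ([3, 2, 1, 0], "down"), ([4, 2, 1, 0], "down")]
acc = one_nn_dtw_accuracy(data)
```

Production implementations add a warping-window constraint (as discussed later in the text) to cut the quadratic cost; this sketch keeps the unconstrained recurrence for clarity.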
This model is a dynamic model, different from the static multiple regression model. The model is represented as Xt = c + \u03c61Xt\u22121 + \u2026 + \u03c6pXt\u2212p + \u025bt. \u2022 MA model: MA(q). The time series value Xt is a linear combination of the present and past error (shock) values: Xt = \u03bc + \u025bt + \u03b81\u025bt\u22121 + \u2026 + \u03b8q\u025bt\u2212q. Unlike traditional classification tasks, the order of the time series variable matters to the input object, which makes time series classification a more challenging problem. Based on data label availability, current time series classification research mainly focuses on supervised and semi-supervised learning. Usually, supervised learning methods with labeled information show better performance. However, in real life there is a tremendous amount of unlabeled data. Therefore, semi-supervised methods have been proposed to address this situation by constructing models from limited labeled data and a large amount of unlabeled data. In addition, some specific application scenarios impose new requirements on time series classification tasks, for example, the early diagnosis of a disease, which results in a better prognosis. Early classification is used in these situations, and its goal is to classify data as soon as possible while maintaining a certain accuracy rate. This is an important extension of time series classification. This section introduces the development routes of time series classification technology, analyzes the current difficulties and challenges, and mentions some expected future trends. Based on the literature reviewed, we identify three development routes: supervised time series classification, semi-supervised time series classification, and early classification, a critical extension of the time series classification task. The earliest time series classification methods mainly focused on distance-based algorithms. 
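As a sketch, an AR(p) model can be fitted by ordinary least squares on lagged values and then used for one-step forecasting. NumPy only; the synthetic AR(2) series, its coefficients, and p = 2 are illustrative choices, not from the surveyed papers.

```python
import numpy as np

def fit_ar(x, p):
    """Fit AR(p) by least squares: X_t ~ c + phi_1*X_{t-1} + ... + phi_p*X_{t-p}."""
    rows = [np.r_[1.0, x[t - p:t][::-1]] for t in range(p, len(x))]
    A, y = np.array(rows), x[p:]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                        # [c, phi_1, ..., phi_p]

def forecast_ar(x, coef):
    """One-step-ahead forecast from the last p observations."""
    p = len(coef) - 1
    return coef[0] + coef[1:] @ x[-p:][::-1]

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):                # simulate AR(2): X_t = 0.6*X_{t-1} - 0.2*X_{t-2} + noise
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.1)
coef = fit_ar(x, 2)
pred = forecast_ar(x, coef)
```

With enough samples the least-squares estimates recover the simulated coefficients closely; dedicated packages use Yule-Walker or maximum likelihood, but the least-squares view matches the linear-function definition in the text.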
The most studied directions are the following. \u2022 Speed up: The idea of this type of algorithm is that effectiveness can be improved by reducing the dataset size and accelerating the computation of DTW, for example through numerosity reduction and dynamic adjustment of the DTW warping window size applied to 1NN-DTW. \u2022 Shapelets: The advantages of the shapelet-based method are its strong interpretability, robustness, and low classification time complexity. Although it can be accelerated through early abandoning and entropy pruning, the search space and time complexity of shapelet discovery are still not negligible. Therefore, further acceleration strategies have been proposed, such as precomputing reusable distances, admissible pruning, and discretization. Since the advent of the shapelet transform, subsequent research has shifted its focus to identifying more effective ways of finding shapelets. \u2022 Construction of a neural network: This type of algorithm is a feature-based method, and its main idea is to train the classifier in advance. Using the results from previous studies, we compare the accuracy of the various methods. In the semi-supervised setting, a classifier trained from limited labeled data is prone to producing many false negatives; the accuracy of different semi-supervised methods is therefore compared in the corresponding table. The main goal of early classification is to assign class labels as early as possible while guaranteeing a certain level of accuracy. It is of great importance in time-sensitive applications, such as the diagnosis of heart disease, where early diagnosis improves prognosis. In practical applications, due to an unclear description of the issues to be solved, the early classification of time series may cause false positives, and the cost of false positives is very high. \u2022 Univariable: The methods above lack interpretability, which is useful in determining the factors affecting an object. 
EDSC introduces an interpretable, shapelet-based approach to early classification. \u2022 Multivariable: MSD extends early classification to multivariate time series. While most of the univariable classification algorithms achieve good results (above 85%), the accuracy of the multivariable algorithms does not reach that level (except EPIMTS). This can be attributed to the difficulty of considering multiple variables simultaneously while correctly extracting the interconnections between them. EPIMTS uses an ensemble method to combine these two important factors, allowing it to achieve the best performance. This section discussed the different technology development routes in time series classification. The research mainly covers traditional supervised learning methods and semi-supervised learning methods; in particular, an important extension, early classification, has been proposed for specific application situations. Although existing work has achieved good results in time series classification tasks, some problems remain. In real life, the amount of unlabeled data exceeds that of labeled data, and its sources are more abundant. Although supervised learning yields better classification results, labeling data is expensive and time consuming. In some fields, such as medical and satellite data, experts are required to label the data, making the acquisition of labeled data even more difficult. Therefore, research on semi-supervised or unsupervised methods has great value. However, according to the research reviewed for this article, very few recent studies focus on semi-supervised or unsupervised learning methods for time series classification. Although time series prediction methods have experienced a long period of development, the rapid increase in data scale has brought severe challenges to traditional time series prediction methods and has seriously affected their efficiency. 
Time series prediction methods have gradually developed from simple linear and nonlinear regression models based on traditional statistics to machine learning methods represented by neural networks and support vector machines. At the same time, researchers have proposed other prediction methods for time series with different characteristics, based on different theoretical foundations. The fuzzy cognitive map can deal with data uncertainty and maintain a high level of interpretability. To address insufficient labeled data in some practical applications, transfer learning methods can be used. Two future research avenues are clear: first, dealing with the rapid increase in the scale of time series data; second, choosing the most suitable model for a specific problem. According to the reviewed literature, we define four technical development routes, namely, classic algorithms, neural networks, fuzzy cognitive maps, and transfer learning. Traditional time series prediction methods mainly solve for the parameters of a chosen time series model and use the fitted model to make predictions, approaching the problem from the perspective of stationary series, non-stationary series, or multivariate time series. \u2022 Stationary series: The Russian statistician Slutzky proposed the moving average (MA) model for stationary data. \u2022 Non-stationary series: Non-stationary time series comprise four trends: long-term, cyclic, seasonal, and irregular. Box and Jenkins proposed the autoregressive integrated moving average (ARIMA) model for non-stationary, short-memory data with obvious trends; ARIMA first differences the series to remove the trend and then fits an ARMA model. The exponential smoothing (ES) model is another classical approach; in its simplest form, St = \u03b1Xt + (1 \u2212 \u03b1)St\u22121, where 0 < \u03b1 < 1 is the smoothing factor. \u2022 Multivariate time series: The vector autoregressive (VAR) model extends the AR model to several interrelated variables, Xt = c + A1Xt\u22121 + \u2026 + ApXt\u2212p + \u025bt, where the Ai are coefficient matrices. Traditional research methods mostly use statistical models to study the evolution of temporal data. For decades, linear statistical methods have dominated prediction. 
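The simple exponential smoothing recurrence mentioned above takes only a few lines. Pure Python; the series and the smoothing factor are toy values.

```python
def exp_smooth(x, alpha):
    """Simple exponential smoothing: S_t = alpha*X_t + (1 - alpha)*S_{t-1}."""
    s = [x[0]]                         # initialize with the first observation
    for value in x[1:]:
        s.append(alpha * value + (1 - alpha) * s[-1])
    return s

series = [10.0, 12.0, 11.0, 15.0, 14.0]
smoothed = exp_smooth(series, alpha=0.5)   # -> [10.0, 11.0, 11.0, 13.0, 13.5]
```

A larger alpha tracks recent observations more closely; a smaller alpha averages over a longer history. Holt and Holt-Winters variants extend the same recurrence with trend and seasonal components.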
Although linear models have many advantages in implementation and interpretation, they have serious limitations in capturing the nonlinear relationships in data, which are common in many complex real-world problems. An artificial neural network (ANN) is a flexible computing framework and general approximator that can be applied to various time series prediction problems with high accuracy. The main advantage of a neural network is its flexible nonlinear modeling ability, without the need to specify a specific model form. The popularity of the ANN stems from it being a generalized nonlinear prediction model. Since the advent of the simplest ANN, the ideas of recursion, nonlinear regression, and convolution have continued to develop. According to the characteristics of real data, linear and nonlinear models can be combined into a hybrid model to achieve better performance. \u2022 Recursion: Recurrent neural networks (RNNs) use a strictly sequential learning process. \u2022 Convolution: The convolutional neural network (CNN) differs from the RNN: the latter processes one data point at a time to generate data representations, while the former applies nonlinear filters to local subsets of the data. In each step, a filter extracts features from a subset of local data, so that the representation is a set of extracted features. \u2022 Hybrid model: Modeling real-world time series is particularly difficult because they usually consist of a combination of both linear and nonlinear patterns. In view of the limitations of linear and nonlinear models, hybrid models, for example combining the ARIMA model with an ANN, have been proposed in some studies to improve the quality of prediction. We compare the prediction accuracy (e.g., the coefficient of determination, R2) of multiple methods. 
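The hybrid idea of fitting a linear model first and then correcting its residuals with a nonlinear component can be sketched compactly. Here a nearest-neighbour lookup on the residuals stands in for the ANN part; the perfectly linear toy series and p = 2 are invented for illustration.

```python
import numpy as np

def hybrid_forecast(x, p=2):
    """Hybrid sketch: linear AR(p) part plus a nearest-neighbour correction
    fitted on the linear model's residuals (a toy stand-in for the ANN part)."""
    A = np.array([x[t - p:t] for t in range(p, len(x))])   # lag patterns
    design = np.c_[np.ones(len(A)), A]
    w, *_ = np.linalg.lstsq(design, x[p:], rcond=None)     # linear component
    resid = x[p:] - design @ w                             # what the line misses
    query = x[-p:]                                         # latest lag pattern
    i = int(np.argmin(np.linalg.norm(A - query, axis=1)))  # most similar history
    return w[0] + w[1:] @ query + resid[i]                 # linear + nonlinear parts

x = np.arange(10, dtype=float) * 2.0   # perfectly linear toy series 0, 2, ..., 18
pred = hybrid_forecast(x, p=2)         # residuals vanish, so pred is about 20
```

On a purely linear series the residual correction contributes nothing, which is the design intent: the nonlinear component only activates where the linear model systematically misses.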
We summarize the advantages and disadvantages of the different methods, and the results are presented in the corresponding table. The fuzzy cognitive map (FCM) is a quantitative modeling and simulation method for dynamic systems. The existing algorithms for training FCMs belong to two main groups, population-based and Hebbian-based methods; population-based algorithms include particle swarm optimization (PSO) and genetic algorithms. In the time series prediction domain, an FCM mostly consists of two parts: establishing the structure and learning the weight matrix. To facilitate efficient extraction of concepts, the FCM framework can be constructed using the fuzzy c-means algorithm. Further learning strategies include \u2022 pseudo-inverse learning and \u2022 the wavelet transform. Although fuzzy cluster analysis has strong time series modeling capabilities, prediction methods based on it cannot handle non-stationary time series, and evolutionary learning methods are not suitable for large-scale time series; the above strategies were proposed to overcome these two limitations. FCM has been successfully used to model and predict stationary time series; however, dealing with large-scale non-stationary time series with time trends and rapid changes remains challenging. The main advantage of the FCM-based model is its human-centred knowledge representation interface. Therefore, in terms of accuracy, FCM-based time series modeling may not exceed the well-studied classical methods, but FCM provides superior practical characteristics. Time series data usually change over time; hence, samples collected over a long period are often significantly different from each other, and it is generally not recommended to apply old data directly in the prediction process. For time series prediction problems, we hope to train an effective model with only a small number of fresh samples and relatively rich old data. 
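The FCM inference step itself is simple: each concept's next activation is a squashed weighted sum of the current activations. A minimal NumPy sketch of the standard update rule; the 3-concept weight matrix and initial state are invented.

```python
import numpy as np

def sigmoid(z, lam=1.0):
    """Squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * z))

def fcm_step(a, W, lam=1.0):
    """One FCM update: a_i(t+1) = f( sum_j w_ji * a_j(t) )."""
    return sigmoid(W.T @ a, lam)

# toy 3-concept map: W[j, i] is the causal weight from concept j to concept i
W = np.array([[ 0.0, 0.6, -0.3],
              [ 0.4, 0.0,  0.5],
              [-0.2, 0.1,  0.0]])
a = np.array([0.8, 0.2, 0.5])    # initial activations in (0, 1)
for _ in range(10):              # iterate the map forward
    a = fcm_step(a, W)
```

Training an FCM for prediction means learning `W` (e.g., with PSO or Hebbian rules, as the text notes) so that iterating this update reproduces the observed series of concept activations.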
Therefore, to solve the problem of insufficient labeled data in some practical applications, transfer learning methods can be used. Transfer learning is the reuse and transfer of knowledge from one field to other different but related fields. Its basic idea is to utilize the data or information of related source tasks to assist in modeling the target task. Traditional machine learning techniques try to learn each task from scratch, while transfer learning techniques try to transfer knowledge from previous tasks to a target task when the latter has less high-quality training data. At present, there are relatively few studies applying transfer learning to time series prediction; existing research mainly focuses on pattern classification. In many practical situations, the lack of labeled data may become an obstacle to time series prediction. Unlike traditional machine learning algorithms, transfer learning breaks the assumption that training data and test data must follow the same distribution. For relevant datasets with sufficient labeled samples, the use of a transfer learning framework has become a new trend, and using knowledge from relevant source datasets on the target dataset effectively solves the problem of insufficient labeled data. This section discussed methods of time series prediction. Time series data essentially reflect the changing trend of some random variables over time. The core of the time series prediction problem is to identify trends from the data and use them to estimate future data and predict occurrences in the next period of time. There is no single best model for all actual data; only the most suitable model from a reasonable range of models can be chosen to provide better prediction. 
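The basic idea, reusing knowledge from a data-rich source task to initialize a model for a data-poor target task, can be sketched with a warm-started linear model. NumPy only; both datasets are synthetic, and the learning rate and step count are arbitrary illustrative choices.

```python
import numpy as np

def fit_linear(X, y):
    """Closed-form least-squares fit (the 'source' model)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fine_tune(w, X, y, lr=0.05, steps=200):
    """Gradient-descent fine-tuning on a small target set, starting from w."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient on the target data
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X_src = rng.normal(size=(500, 3))               # abundant source-task data
y_src = X_src @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)
X_tgt = rng.normal(size=(10, 3))                # only a few target samples
y_tgt = X_tgt @ np.array([1.2, -1.8, 0.5]) + rng.normal(scale=0.1, size=10)

w_src = fit_linear(X_src, y_src)        # knowledge learned on the source task
w_tgt = fine_tune(w_src, X_tgt, y_tgt)  # transferred and adapted to the target
```

The same warm-start pattern applies to neural predictors (pretrain on the source series, fine-tune some or all layers on the target series); the linear case just makes the mechanics explicit.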
The establishment of new time series models is still a problem that scholars will continue to study, giving direction for further research in the field of time series prediction. Time series is an important data type and is generated in almost every application domain at an unprecedented speed and scale. The analysis of time series can help us understand the essence of various phenomena. We investigate current research on time series and find that there are few reviews of time series algorithms. In this article, we analyze the prevalent topics of time series and divide them into two categories: classification and prediction. Further, we extract the important technology development routes of time-series-related algorithms and introduce each original method and its subsequent improvements. In addition, we compare the performance of different algorithms, analyze their advantages and disadvantages, and discuss the problems and challenges they face. Through our investigation, we find that the technological development spans three areas: traditional methods, machine learning methods, and deep learning methods. In time series classification, the mainstream methods have shifted from distance-based methods (1NN-DTW) to feature-based methods (shapelets), and finally to formulations as mathematical optimization problems that not only improve accuracy but also reduce time complexity. In time series prediction, owing to the limitations of AR, MA, ARIMA, and other traditional methods that cannot cope well with nonlinear problems, neural network methods have become a popular topic, and the learning ability of models is expected to be enhanced through fuzzy cognitive maps and transfer learning. Despite the achievements of current research, we identify some important problems during our investigation: \u2022For time series classification, research on semi-supervised and unsupervised learning algorithms is insufficient. 
While unlabeled data is ubiquitous and available in large amounts in real life, labeling it is labor intensive and sometimes requires expert knowledge. \u2022For time series prediction, constructing targeted time series models to solve real-world problems remains an open problem for future researchers. In view of the current development status of time series research, we believe there are still many possible development directions for time series analysis. For example, the neural network is a very popular method for time series analysis, but in most cases its solution process is a black box that lacks interpretability, so the results cannot be intuitively understood and a clear, targeted optimization scheme cannot be obtained. Exploring more interpretable symbolic representations of time series is a possible future direction. At present, most time series analysis collects data offline for offline analysis. When a model built in the offline phase is used in the online phase, new samples are continuously obtained as working time increases. However, most methods do not use the newly obtained data, and the model cannot be updated in time. Therefore, how to update the model with real-time data is a future task of time series modeling research. Time series has attracted much attention because of its important applications in many fields, such as disease diagnosis and traffic flow prediction. We believe that this study of time series will provide a valuable reference for related research and inspire interested researchers and practitioners to invest more in this promising field."}
+{"text": "A proper reference gene (RG) is required to reliably measure mRNA levels in biological samples via quantitative reverse transcription PCR (RT\u2010qPCR). This method is highly specific, sensitive and reliable; however, each experimental setup requires proper RGs. In studies using rodent models of brain ischaemia, a variety of genes, such as \u03b2\u2010actin (Actb), hypoxanthine phosphoribosyltransferase 1 (Hprt1), peptidyl\u2010propyl isomerase A (Ppia) and glyceraldehyde\u20103\u2010phosphate dehydrogenase (Gapdh), are used as RGs. However, most of these genes have not been validated in specific experimental settings. The accuracy of RGs might be determined by dedicated statistical approaches: comparative \u0394Ct, BestKeeper, NormFinder, GeNorm or coefficient of variation analysis. The aim of this study was to evaluate the time\u2010 and brain region\u2010dependent expression of RG candidates in a rat model of transient middle cerebral artery occlusion (tMCAO). The following genes were selected: Actb, Hprt1, Ppia, Gapdh, tyrosine 3\u2010monooxygenase/tryptophan 5\u2010monooxygenase activation protein, zeta (Ywhaz) and beta\u20102\u00a0microglobulin (B2m). Focal cerebral ischaemia was induced by 90\u00a0min of tMCAO in male Sprague\u2010Dawley rats. Expression was investigated at four time points and in three brain areas within the ischaemic brain hemisphere. The RT\u2010qPCR results were analysed using variance analysis and the \u0394Ct, GeNorm, NormFinder and BestKeeper methods. Data from these algorithms were ranked using the geometric mean of the ranks from each analysis. Ppia, Hprt1 and Ywhaz were the most stable genes across the analysed brain areas and time points. B2m and Actb exhibited the greatest fluctuations, and the results for Gapdh were ambiguous. 
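The overall ranking step described here, taking the geometric mean of each gene's ranks from the \u0394Ct, GeNorm, NormFinder and BestKeeper analyses, can be sketched as follows. Pure Python; the per-method ranks below are invented for illustration, not the study's actual data.

```python
import math

def aggregate_ranks(ranks_by_method):
    """Order genes by the geometric mean of their per-method stability ranks."""
    genes = list(next(iter(ranks_by_method.values())).keys())
    geo = {}
    for g in genes:
        rs = [ranks_by_method[m][g] for m in ranks_by_method]
        geo[g] = math.prod(rs) ** (1.0 / len(rs))   # geometric mean of the ranks
    return sorted(genes, key=lambda g: geo[g]), geo

# hypothetical ranks (1 = most stable) from the four algorithms
ranks = {
    "deltaCt":    {"Ppia": 1, "Hprt1": 2, "Ywhaz": 3, "Gapdh": 4, "Actb": 5, "B2m": 6},
    "GeNorm":     {"Ppia": 2, "Hprt1": 1, "Ywhaz": 3, "Gapdh": 5, "Actb": 4, "B2m": 6},
    "NormFinder": {"Ppia": 1, "Hprt1": 3, "Ywhaz": 2, "Gapdh": 4, "Actb": 6, "B2m": 5},
    "BestKeeper": {"Ppia": 2, "Hprt1": 1, "Ywhaz": 4, "Gapdh": 3, "Actb": 5, "B2m": 6},
}
order, scores = aggregate_ranks(ranks)   # lowest geometric mean = most stable overall
```

The geometric mean penalizes a single bad rank more than an arithmetic mean would, which is why it is commonly used to consolidate disagreeing stability algorithms.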
Previous reports on RGs in brain ischaemia models focused on the analysis of a single brain structure or a single time point post\u2010ischaemia/reperfusion. However, considering the complex pathomechanism of brain ischaemia, cellular diversity and the spatiotemporal variability of processes induced by blood flow cessation, previous analyses do not provide enough reliable data for other experimental conditions. Here, the following genes were analysed: tyrosine 3\u2010monooxygenase/tryptophan 5\u2010monooxygenase activation protein, zeta (Ywhaz), \u03b2\u2010actin (Actb), beta\u20102\u2010microglobulin (B2m), glyceraldehyde\u20103\u2010phosphate dehydrogenase (Gapdh), hypoxanthine phosphoribosyltransferase 1 (Hprt1) and peptidyl\u2010propyl isomerase A (Ppia). Since the detrimental and repair processes that occur after ischaemia and subsequent reperfusion are dynamic, expression was investigated at four time points and in three brain areas: the frontal cortex (CX), hippocampus (HIP) and dorsal striatum (DS). In the tMCAO model, the CX and HIP represent the peri\u2010infarct area, whereas the DS represents the core of ischaemia. The aim of this study was to evaluate the time\u2010 and brain region\u2010dependent expression of RG candidates in a rat model of transient middle cerebral artery occlusion (tMCAO), one of the most widely used in vivo models of brain ischaemia (n\u00a0=\u00a08). Time point refers to the time lapse between the onset of reperfusion and animal decapitation and tissue collection. Using a TaqMan\u2010based RT\u2010qPCR technique, the stability of the most frequently used RGs was analysed in the CX, HIP and DS. tMCAO was elicited as previously described. The presence of cerebral infarction was confirmed using TTC staining of coronal brain sections. Ppia mRNA expression was stable across all time points. The analyses revealed that two to three RGs were an optimal number in this animal model to perform reliable normalization. 
However, in the DS, which is the region most severely affected by ischaemia, more RGs were needed to normalize target gene expression. B2m encodes a component of the major histocompatibility class I (MHC I) complex. The role of the B2m protein in cerebral ischaemia is unclear; however, its serum levels are associated with an increased risk of ischaemic stroke in humans, and altered expression of B2m was reported in a global brain ischaemia model, suggesting that B2m might be regulated by ischaemia. In the present study, at 3 and 7\u00a0days after tMCAO, the expression of B2m was increased 5\u201310\u2010fold in the most affected brain structures. Gapdh was reported to be an unstable gene in stroke models. In some of our analyses, Gapdh was ranked as the most stable gene, but in the DS at 24\u00a0h this gene was excluded from analysis due to its high SD value. Apoptosis occurs with some delay after ischaemic injury, which corresponds to the increased expression of Gapdh at later time points. Author contributions: Conceptualization; Data curation; Formal analysis; Funding acquisition (lead); Investigation; Methodology; Project administration (lead); Visualization (lead); Writing \u2013 original draft (lead). Weronika Krzy\u017canowska: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Validation; Writing \u2013 original draft. Jakub Jurczyk: Data curation; Investigation; Methodology; Validation; Writing \u2013 original draft. Beata Strach: Data curation; Investigation; Methodology; Writing \u2013 original draft. Alicja Sk\u00f3rkowska: Data curation; Investigation; Methodology; Writing \u2013 original draft. Innesa Leonovich: Data curation; Investigation; Methodology. Bogus\u0142awa Budziszewska: Conceptualization; Investigation; Writing \u2013 original draft. 
Joanna Pera: Conceptualization (lead); Funding acquisition (lead); Investigation; Methodology; Project administration (lead); Supervision (lead); Writing \u2013 original draft (lead). All authors have given final approval of this version and agreed to the publication of this study. Supporting information (Figures S1\u2013S3 and Supplementary Material) is available as additional data files."}
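The two-to-three-reference-gene normalization recommended above can be sketched as follows. This is a minimal illustration of the standard delta-Ct approach with a geometric-mean normalization factor (the practice usually recommended when several stable RGs are combined), not the authors' exact pipeline; the Ct values and the assumed amplification efficiency of 2.0 are hypothetical.

```python
import math

def geometric_mean(values):
    """Geometric mean, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def normalized_expression(target_ct, reference_cts, efficiency=2.0):
    """Relative target-gene quantity normalized to the geometric mean of
    several reference genes (delta-Ct method).

    target_ct     -- Ct of the gene of interest
    reference_cts -- Ct values of the chosen stable reference genes
    efficiency    -- assumed PCR efficiency (2.0 = perfect doubling)
    """
    target_quantity = efficiency ** (-target_ct)
    reference_quantity = geometric_mean(
        [efficiency ** (-ct) for ct in reference_cts]
    )
    return target_quantity / reference_quantity
```

For example, a target Ct one cycle lower than in a comparison sample, with unchanged reference genes, yields a two-fold relative expression.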
+{"text": "However, nearly all such whole-embryo atlases of embryogenesis remain limited by sampling density\u2014i.e., the number of discrete time points at which individual embryos are harvested and cells or nuclei are collected. Given the rapidity with which molecular and cellular programs unfold, this limits the resolution at which regulatory transitions can be characterized. For example, in the mouse, there are typically 6 to 24 hours between sampled time points. This contrasts with Drosophila melanogaster, where collections of timed and yet somewhat asynchronous embryos are easy to obtain, such that, at least in principle, one can achieve arbitrarily high temporal resolution. Drosophila could therefore serve as a test case to develop a framework for the inference of continuous regulatory and cellular trajectories of in vivo embryogenesis. Because Drosophila is a preeminent model organism that has yielded many advances in the biological and biomedical sciences, obtaining a single-cell atlas of Drosophila embryogenesis is also an important goal in itself. This is especially true of its embryonic development, where the use of this model in conjunction with powerful genetic tools has transformed our understanding of the mechanisms by which developmental complexity is achieved, in addition to uncovering many general principles of both genetic and epigenetic gene regulation. To construct an ungapped representation of embryogenesis in vivo, we would ideally sample embryos continuously. Although this is not practical for most model organisms, it is potentially possible in Drosophila. We profiled chromatin accessibility in almost 1 million nuclei and gene expression in half a million nuclei from eleven overlapping windows spanning the entirety of embryogenesis (0 to 20 hours). 
To exploit the developmental asynchronicity of embryos from each collection window, we applied deep neural network-based predictive modeling to more-precisely predict the developmental age of each nucleus within the dataset, resulting in continuous, multimodal views of molecular and cellular transitions in absolute time. With these data, the dynamics of enhancer usage and gene expression can be explored within and across lineages at the scale of minutes, including for precise transitions like zygotic genome activation.Drosophila embryonic atlas broadly informs the orchestration of cellular states during the most dynamic stages in the life cycle of metazoan organisms. The inclusion of predicted nuclear ages will facilitate the exploration of the precise time points at which genes become active in distinct tissues as well as how chromatin is remodeled across time.This Drosophila melanogaster is a powerful, long-standing model for metazoan development and gene regulation. We profiled chromatin accessibility in almost 1 million and gene expression in half a million nuclei from overlapping windows spanning the entirety of embryogenesis. Leveraging developmental asynchronicity within embryo collections, we applied deep neural networks to infer the age of each nucleus, resulting in continuous, multimodal views of molecular and cellular transitions in absolute time. We identify cell lineages; infer their developmental relationships; and link dynamic changes in enhancer usage, transcription factor (TF) expression, and the accessibility of TFs\u2019 cognate motifs. With these data, the dynamics of enhancer usage and gene expression can be explored within and across lineages at the scale of minutes, including for precise transitions like zygotic genome activation. Drosophila embryogenesis.Characterizing the continuum of We collected staged Drosophila embryos from overlapping time windows across the first 20 hours of embryogenesis. 
Then we extracted nuclei and performed single-cell RNA sequencing (RNA-seq) and assay for transposase-accessible chromatin using sequencing (ATAC-seq) profiling using combinatorial indexing (sci-RNA-seq and sci-ATAC-seq) to comprehensively map expressed genes and putatively active regulatory elements. We applied machine learning to infer a continuum of nuclear ages that is synchronized across unfolding lineages in absolute time. The continuous nuclear age predictions were used to annotate and then link cellular states at nonoverlapping 2-hour intervals, as well as to explore transcriptional regulatory dynamics across major cell lineages of embryonic development at fine-scale temporal resolution. Single-cell technologies are a powerful means of studying metazoan development, shedding light on the emergence of cellular diversity and the dynamics of gene regulation. However, nearly all such atlases of embryogenesis are limited in terms of the number of discrete time points and cells sampled per time point. Given the rapidity with which molecular and cellular programs unfold, this limits the resolution at which regulatory transitions can be characterized. This is not the case in Drosophila, where collections of timed and yet somewhat asynchronous embryos are easy to obtain, such that, in principle, one can achieve arbitrarily high temporal resolution. This sharply contrasts with mice, for which there are typically 6 to 24 hours between sampled time points, gaps within which massive molecular and morphological changes take place. The identified accessible regions include 85% of known embryonic enhancers, based on overlap with nearly 5000 curated enhancers confirmed in transgenic embryos. Here, we use cell state to mean an annotated cluster at a given time window. 
Altogether, we identified 171 cell states in sci-ATAC-seq data and 268 in sci-RNA-seq data across the nine time windows, each of which received one of 38 cell type annotations for ATAC or one of 54 cell type annotations for RNA , A and BDrosophila embryogenesis, represented by our 0- to 2-hour time window, include 13 rapid nuclear divisions within a syncytium that generates 6000 nuclei, regulated by maternal genes. At ~2 hours and 20 min after fertilization, cellularization occurs and the zygotic genome is activated and Delta (Dl), and neuroblast temporal TFs, such as miranda (mira) and castor (cas). Two additional neural progenitor clusters correspond to sensory progenitors, whereas immature neurons express low levels of both neural progenitor and pan-synaptic genes, including cacophony (cac) and synaptotagmin 1 (syt1). Mature neurons are marked by higher levels of pan- and subtype-specific synaptic genes coupled with low or no expression of earlier developmental genes. Finally, midline cells, consisting of both neurons and glia cluster together, become evident at 6 to 8 hours; using the midline TF single minded (sim) and glial immunoglobulin family member wrapper as markers, we can follow them forward in time as they mature (shaven (sv), from 6 to 16 hours neurons and internal chordotonal (Ch) neurons, and type II multidendritic (MD) neurons. We can clearly distinguish MD neurons on the basis of expression of genes, such as dendritic arbor reduction 1 (dar1), which promotes their characteristic branching dendrites, and the pseudouridine synthase RluA-1, which was recently identified as a marker of MD neurons (pickpocket (ppk) and ppk26. 
Mechanosensory ES neurons are specified by the TF hamlet (ham), which is specifically expressed in the middle sensory cluster as well as fate-determinant Rfx and a number of as-yet uncharacterized genes specific to this cluster and nompA, which promote the development and function of Ch support cells, respectively (ion transport peptide (ITP)], enzymes involved in their synthesis [amontillado (amon)], and receptors [myosuppressin receptor 1 (MsR1)] . Th. ThDroso cluster , 26 ] .complexin (cpx) and CG4328, identified in our analysis as enriched in the monoaminergic cluster, which includes midline neurons confirmed this unexpected finding CR31451 as enriched in mature neurons as well as two genes, neurons . This ne finding and raisThis deeper exploration of the neuroectoderm, validating and extending years of research from many groups, illustrates the depth of information that can be obtained from these data. We additionally performed a more detailed annotation of nonmyogenic mesoderm . A full eve locus, the stripe 1 enhancer has a much stronger skew for anterior accessibility compared with stripe 2, as has also been previously reported axis of the blastoderm embryo . Using lreported . Our sinelopment . Using telopment .To further leverage continuous views of unfolding trajectories, we next explored the gene regulatory modules active in germ layer\u2013specific development. We focused on the mesoderm and its derivatives as a complex, well-characterized system that we and others have studied previously , 29, 30.n = 571) are highly expressed from the beginning of mesoderm development ; are enriched for TFs (P = 1.4 \u00d7 10\u22126); and likely represent a mixture of genes involved in progenitor cells, mesoderm development, and transcriptional activation (n = 433) peak at ~9 to 11 hours, during the subdivision of the mesoderm into different muscle primordia and their subsequent specification. 
This cluster is enriched for genes involved in mesoderm development, including myoblast fusion and myotube differentiation, while losing enrichment for stem cell and self-renewal terms (n = 365) initiate expression at ~10 hours and steadily increase to the end of embryogenesis, whereas cluster 4 genes (n = 631) only switch on at ~15 hours, during muscle terminal differentiation. The last cluster lacks enrichment for TFs and rather includes genes involved in myofibril assembly and muscle assembly and maintenance as well as essential contractile proteins for differentiated muscle , are present in clusters 3 and 4. Mhc protein plays a critical role in providing muscle-contractile force. Our scRNA data show increasing Mhc expression along the muscle lineages in cells with later embryonic ages , a TF associated with muscle development, which has peak expression at 10 hours of embryogenesis data data , which spression .q < 1 \u00d7 10\u22123 and presence in >1% of target peaks; hb, en, Ubx, and pb), and concordantly expressed in the first temporal cluster. These factors have many functions, including setting up the segmentation of the mesoderm, regulating the expression of somatic muscle identity genes, establishing midgut constrictions in the visceral mesoderm, and heart patterning. Other examples from the second and third temporal clusters are genes required for cell fate specification of somatic muscle founder cells and heart development .To extend this analysis more globally, we searched for TF motifs enriched in putative enhancers (mesoderm-specific scATAC peaks 1 to 10 kb upstream of the TSS) of genes belonging to each of the four scRNA mesoderm expression clusters. This identified 458 TF motif-to-cluster enrichments . 
This caveat notwithstanding, these analyses highlight the potential for further discovery of coregulated gene modules related to distinct germ layers or cell types.We next investigated whether we could leverage the diversity of cell states across embryogenesis to infer which TFs drive specific programs of cell type differentiation. For this, we used all scATAC clusters at all time points and searched for differential enrichment of TF position weight matrices (PWMs) within each cluster\u2019s open chromatin regions.We first characterized enrichments across clusters from the 10- to 12-hour time window based on predicted time . EncouraBecause members of the same family of TFs typically recognize similar motif sequences , it is often difficult from motif analysis alone to pinpoint the responsible TF. To address this, we leveraged our scRNA data to identify the most likely active TF on the basis of its expression within the clusters among all factors that share the same motif binding pattern. First, we used a regression-based framework to integrate the scATAC and scRNA datasets and identify links between the different cell clusters , 6. Specsage transcript and high accessibility of the Sage-associated PWM TF-to-tissue relationships having both associated expression and chromatin activity at one or more of the nine time windows assessed. We note that in time windows with fewer clusters, the association effect estimates are susceptible to outliers and should be interpreted with caution. Notwithstanding this caveat, these putative assignments represent an extensive resource for future studies ; studies .CG5953 and CG11617) or neuroectodermal tissues (Ets65A and CG12605) was poorly characterized. We confirmed that these factors are in fact expressed in the tissue and time window predicted by our data neo38, Lim3, lola, fkh, and fru besides being positively regulated by the pan-muscle TF Mef2 and repressed by Run and Opa to developmental age inference. 
There are cases where technical features of the data can lead to increased uncertainty of model predictions. For example, we found that cells annotated as germ cells, from the first collection time window, or with low read count were associated with greater prediction error . Moving The extensive scATAC data, with deep coverage across almost a million cells, likely captured most regulatory elements active during embryonic development and provides a comprehensive resource of potential enhancers for almost any cell type in the embryo. By contrast, our scRNA data had relatively low unique reads per cell and will likely miss some differentially expressed genes in specific cell types. As a result, some delicate analyses remain challenging. For example, we found transcriptional velocity estimates to be unstable with sparse scRNA data, although this issue was mitigated by constructing metacells before velocity analysis , which mDrosophila embryonic atlas provides broad insights into the orchestration of cellular states during the most dynamic stages in the life cycle of the organism. Our results represent a rich resource for understanding precise time points at which genes become active in distinct tissues as well as how chromatin is remodeled across time. The annotation of cell types within these data is an ongoing process and one that is much more challenging at early and mid-stages of embryogenesis as compared with late time points or in adults with differentiated tissues. A comprehensive annotation of embryonic cell states will require a collective effort from the Drosophila community. To support these ongoing efforts, we provide information on expression and peaks from all clusters filters for low read depth or high proportions of reads mapping to the mitochondria or ribosomal genes and extensive doublet removal. 
Between the two data modalities, we obtained profiles for ~1.5 million nuclei, although unique read depth per nucleus was considerably lower for scRNA than scATAC data.A detailed version of the materials and methods is provided in the Using the center hour of the collection window, we used several machine learning approaches to fit a model that could infer the age of a nucleus with either gene expression or chromatin accessibility information. Both LL regression and neural networks were fitted to the same training data, with a held-out subset used for model validation and comparison. Given its consistently superior performance, we then relied on specific parameterizations of NN model\u2013inferred ages to reposition nuclei in time. To zoom into fine-scale time points, we binned data by small increments to explore the regulatory dynamics of ZGA. Then, using 2-hour adjacent windows of cells, we computed clusters of similar cells and performed extensive manual review to annotate each cluster\u2019s likely germ layer and cell type. We then used an iterative approach for constructing an acyclic tree of differentiation by identifying the likely precursor cluster for each cluster in a given time window.Neuroectoderm was iteratively analyzed for deeper annotation of neuronal subtypes, whereas mesoderm was picked for analyses focused on identifying coregulated genes and accessible regions, which were then subjected to ontology and TF motif enrichment analysis. To connect scATAC cell clusters with scRNA cell clusters, we used a regression-based approach (NNLS). Such connections between ATAC and RNA clusters enabled a series of analyses, such as correlating expression with motif accessibility, applying GRN analysis pipelines, etc.Several additional analyses were performed. We used probabilistic label transfer to map likely cluster annotations from these data to spatial information from patterned DNA nanoballs. 
We also found it is possible to infer the sex of cells from the proportion of chrX-mapped scATAC reads, using a Gaussian mixture model to classify cells. Although RNA velocity was challenging to apply to sparse scRNA data, it yielded more sensible results when subsets of cells were first aggregated to metacells. The expression of several genes was verified by fluorescent in situ hybridization: specific neuronal genes active in identified clusters, the unexpected coactivity of elav with biniou, genes active at specific mesoderm time points, and putative active TFs with less-characterized roles in tissue development. Raw data are available through the Gene Expression Omnibus (GEO). Additional scripts and intermediate files, including bigwigs and a custom web app to visualize UMAPs, are available through our data-sharing website. Supplementary materials and supplementary tables are provided."}
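The sex-inference step described above, classifying cells by their fraction of chrX-mapped reads with a Gaussian mixture model, can be sketched with a small two-component EM fit. This is an illustrative reimplementation under the assumption that the chrX read fractions are bimodal, not the authors' actual code; the posterior-probability cutoff of 0.5 and any example fractions are assumptions.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM.

    Returns component means, standard deviations, mixture weights, and
    the posterior probability of the second component for each point."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, [0.25, 0.75])        # spread-out initial means
    sigma = np.full(2, x.std() + 1e-9)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.stack([
            w[k] / (sigma[k] * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=1)
        w = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-9
    return mu, sigma, w, resp[1]
```

With chrX read fractions as input, cells assigned to the higher-mean component would be called female and the rest male; in practice one would also inspect the fitted means to confirm a clean bimodal separation.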
+{"text": "Accurate inference and prediction of gene regulatory networks are very important for understanding dynamic cellular processes. Large-scale time-series genomics data help to reveal the molecular dynamics and dynamic biological processes of complex biological systems. First, we collected time-series data from rat pineal gland tissue in the natural state at a fixed sampling rate and performed whole-genome sequencing. The resulting large-scale time-series sequencing data set of the rat pineal gland includes 480 time points, with a 3\u00a0min interval between adjacent time points and a sampling period of 24\u00a0h. We then propose a new method for constructing gene expression regulatory networks, named gene regulatory network based on time-series data and transfer entropy (GRNTSTE). The method uses transfer entropy and large-scale time-series gene expression data to infer causal regulatory relationships between genes in a data-driven manner. Comparative experiments show that GRNTSTE performs better than dynamical gene network inference with ensemble of trees (dynGENIE3) and SCRIBE, and performs similarly to TENET. Meanwhile, on the BEELINE data set, the performance of GRNTSTE is slightly lower than that of the SINCERITIES method and better than the other gene regulatory network construction methods in the BEELINE framework. Finally, we constructed the rat pineal rhythm gene expression regulatory network based on the GRNTSTE method, which provides an important reference for the study of the pineal rhythm mechanism and is of great significance for that research. To date, deciphering the mechanism of transcriptional regulation in complex systems remains difficult, mainly because experiments to verify protein-DNA interactions and their role in regulation are expensive and difficult to replicate7. 
Therefore, methods based on predictive models rather than biological experiments have become an effective alternative; one example is the inference of gene regulatory networks (GRNs). A GRN can vividly describe the dynamics and physiological state of transcriptional changes, and it plays an important role in understanding the genetic basis of phenotypic traits11. With the development of sequencing technology, the cost of gene sequencing keeps falling, and it is no longer difficult to obtain large amounts of sequencing data according to an experimental design. Moreover, large-scale time-series genomic data make it possible to better understand and study the principles of biological and molecular dynamics12. In research on gene interactions, cluster analysis of whole gene expression profiles is one of the important methods for studying expression relationships between genes. First, genes with similar transcriptional responses are grouped by a clustering algorithm, which makes it possible to explore the interactions of genes involved in similar cellular processes15. The co-expression clusters obtained by this method can provide a rough network representation and the co-expression relationships between genes. However, these are only correlations: the causal regulatory relationships between genes cannot be identified and therefore cannot be constructed from clustering alone. In a gene regulatory network, a direct interaction between genes represents a causal regulatory relationship, and the definition of a network edge depends on the selected method16. For example, a linear correlation model based on estimated mRNA abundance can determine relationships between genes, but this approach not only leads to false-positive edges but also loses non-linear interaction relationships. 
Therefore, these models cannot provide reliable biological conclusions based on gene expression data. In the past few years, the main approach to constructing a gene regulatory network has been to capture gene interaction relationships as a network model, in which the nodes are genes and the edges represent the interaction relationships between genes19. For instance, BLARS infers the relationships between genes based on penalized linear regression21, and gene network inference with ensemble of trees (GENIE3)22 infers network relationships with a machine learning method. In recent years, new ideas for constructing gene regulatory networks have been proposed, which infer direct gene interaction relationships from time-series data. For example, as an upgraded version of GENIE3, dynamical GENIE3 (dynGENIE3)23 provides functions for processing short time series, and the SWING method, proposed within the Granger causality framework, can infer gene regulatory networks from short time-series data24. To reliably reveal dynamic biological processes, methods for constructing gene regulatory networks continue to emerge; for example, the ARACNE and MRNET methods are based on mutual information to capture the nonlinear dynamics of gene regulation. In addition, transfer entropy (TE)25 is a method for simultaneously estimating linear and nonlinear interactions, and it can construct causal relationships between variables without any assumptions. Because of its advantages and effectiveness in analyzing nonlinear complex systems, it has been widely used in different fields, such as science and engineering29, industry30, finance39, brain science45 and climate47. 
In addition, transfer entropy has been applied to the inference of gene relationships49. For example, transfer entropy was used by Juan Camilo Castro et al.50 to construct the gene regulatory network of the eukaryote Saccharomyces cerevisiae, and Junil Kim et al.51 reconstructed single-cell gene regulatory networks based on transfer entropy and revealed key regulatory factors from single-cell transcriptome data, which also verified the effectiveness of transfer entropy in constructing gene regulatory networks. In summary, the construction of gene regulatory networks from large-scale time-series genetic data has become one of the reliable methods for studying dynamic biological processes. Therefore, we propose a new gene regulatory network construction method based on time series and transfer entropy, named GRNTSTE, which uses transfer entropy and a large amount of time-series gene expression data. We then construct a gene regulatory network for rat pineal rhythm genes based on GRNTSTE. This network reveals the interactions between rat pineal rhythm genes under natural light conditions and provides hypotheses for biological experimental verification. The core of the GRNTSTE method is transfer entropy, an index proposed on the basis of information theory to measure the asymmetry between process variables. Information entropy and transfer entropy are described as follows. Shannon52 borrowed the concept of entropy from thermodynamics and called the average amount of information, excluding redundancy, \u201cinformation entropy\u201d. For a discrete random variable X with probability distribution p(x), information entropy is defined as H(X) = \u2212\u2211_x p(x) log p(x). Transfer entropy25, proposed on the basis of information entropy, is often used to describe the transfer of information between process variables, and it can be used to calculate how much this information transfer reduces the uncertainty of the observed system. 
In 2000, Thomas Schreiber proposed the concept of transfer entropy. For example, when the transfer entropy from variable Y to variable X exceeds that from X to Y, Y is regarded as the driver and X as the responder. Since the application of transfer entropy requires relatively long time series, it could only be used for the analysis of neural signals and electroencephalogram data in an era when data volumes were generally small. However, with the development of the big-data era, data have gradually become an asset: various fields have realized the importance of data and have collected and accumulated large amounts of time-series data under reasonable designs. We believe that transfer entropy will become one of the important methods for analyzing causal driving relationships in time-series data. Transfer entropy measures the asymmetry between time-series variables on the basis of conditional distributions, which yields a causal relationship between drive and response. In addition, the equivalence between transfer entropy and the Granger causality test has been proved. Transfer entropy can handle nonlinear time-series data well and is very sensitive to Granger causality. Since transfer entropy considers the transfer of information between time-series variables without assuming a specific relationship between them, it has better applicability than Wiener\u2013Granger causality, especially for nonlinear systems. With history lengths k and l, the transfer entropy from Y to X is defined as T(Y\u2192X) = \u2211 p(x_{t+1}, x_t^{(k)}, y_t^{(l)}) log[ p(x_{t+1} | x_t^{(k)}, y_t^{(l)}) / p(x_{t+1} | x_t^{(k)}) ]. The prerequisite for the application of transfer entropy is that the variables in the time series satisfy the Markov property: when a random process is given its current state and all past states, the conditional probability distribution of its future state depends only on the current state. The transfer of information entropy in the network can therefore be defined in both directions, and the two-way information flow of entropy is asymmetric. 
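The transfer entropy described above can be estimated from data with a simple plug-in estimator. The sketch below assumes history lengths k = l = 1 and quantile binning, which is much cruder than a production implementation such as GRNTSTE presumably uses; the coupled toy series in the usage note are synthetic.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Plug-in estimate of T(source -> target) in bits, with history
    length k = l = 1, after quantile discretization of both series."""
    def discretize(v):
        v = np.asarray(v, dtype=float)
        edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(v, edges)

    y = discretize(source)   # candidate driver
    x = discretize(target)   # candidate responder
    # Joint and marginal counts over (x_{t+1}, x_t, y_t)
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]          # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te
```

On a toy system where the target copies the source with a one-step lag plus noise, the estimate in the forward direction should clearly exceed the reverse one, reflecting the asymmetry the text relies on.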
According to the asymmetry of the information flow, the driving and response factors of the variables can be determined, so that the causal driving relationship can be constructed. The core advantage of transfer entropy is its directionality: we can infer the direction of the causal driving relationship between variables from time-series data. In order to evaluate the effectiveness and accuracy of the GRNTSTE method, we used the E. coli simulation data sets from the DREAM3 challenge for experimental verification. In addition, to avoid randomness in the experimental results, we randomly selected three data sets containing 10 genes and three data sets containing 50 genes as the experimental data sets. These data sets are time-series gene expression data composed of 21 time points. We then constructed the sub-network topology interaction relationships for these six data sets and compared the performance of the algorithms against the gold-standard data16. In the sub-network, we regard the transfer entropy value as directed-edge information flow. We set different thresholds to calculate the true positive rate and false positive rate at each threshold, and then compute the receiver operating characteristic (ROC) curve and the area under it. In this way, we can easily evaluate the specificity of the algorithm through the ROC curve. However, it has been noted that even small deviations from an area under the ROC curve of 1 can correspond to a large number of false positives53. Therefore, the precision\u2013recall (PR) curve and its corresponding area under the curve are also selected, and in our experiments we use both ROC and PR curves as metrics to evaluate algorithm performance. To evaluate the effectiveness of GRNTSTE, we compared it with the SCRIBE53, TENET50 and dynGENIE3 algorithms, which are effective methods for inferring gene regulatory networks, on the six data sets of the DREAM3 challenge (part of the DREAM challenges). 
The TENET is also a gene regulatory network inference method based on transfer entropy. We reconstructed the gene regulatory network with the SCRIBE, TENET, dynGENIE3 and GRNTSTE methods, respectively. We used the standard convention of calculating the area under the precision\u2013recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC)54. The AUPRC determines, at varying thresholds, the proportion of true positives among all positive predictions (precision) versus the fraction of true positives retrieved among all actual positives (recall). Conversely, the AUROC estimates the average true positive rate versus the false positive rate. In order to evaluate the effectiveness of our GRNTSTE method, we compared it with the SCRIBE, TENET and dynGENIE3 algorithms; the results for the three 10-gene data sets are shown in the Table. At the same time, we used the dynGENIE3, TENET, SCRIBE and GRNTSTE algorithms to analyze the three DREAM challenge data sets containing 50 genes; the results are shown in the Table. In summary, we conducted experimental verification based on the open-source DREAM3 challenge data sets, and the experimental results show that the performance of the GRNTSTE method is significantly higher than that of the dynGENIE3 and SCRIBE algorithms. In addition, as the number of genes increases, the advantages of GRNTSTE become more obvious. However, the GRNTSTE and TENET methods have similar performance; both are transfer-entropy-based inference methods, which shows the effectiveness and superiority of transfer entropy in the inference of gene regulatory networks. We also compared GRNTSTE with the methods in the BEELINE framework55 and other existing effective gene regulation methods. The BEELINE simulation data sets are single-cell gene expression data. The data sets from synthetic networks were created with five data sets per parameter set, one each with 100, 200, 500, 2000 and 5000 cells, by sampling one cell per simulation. These data sets include six different data types, namely LI (linear), CY (cycle), LL (linear long), BF (bifurcating), BFC (bifurcating converging) and TF (trifurcating). 
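The two evaluation metrics used above can be computed directly from a scored edge list and a gold standard. The sketch below gives a rank-statistic form of the AUROC and an average-precision form of the AUPRC; it assumes no tied scores and is a generic illustration, not the paper's evaluation code, and the tiny edge list in the test is fabricated.

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) statistic.
    Assumes no tied scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auprc(scores, labels):
    """Area under the precision-recall curve (average-precision form)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)[::-1]               # highest score first
    labels = labels[order]
    tp = np.cumsum(labels)                         # true positives so far
    precision = tp / np.arange(1, len(labels) + 1)
    return precision[labels].sum() / labels.sum()
```

Here `scores` would hold the transfer entropy value of each candidate edge and `labels` whether that edge appears in the gold-standard network; a perfect ranking gives 1.0 for both metrics.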
To further verify the performance of the GRNTSTE method, we conducted a comparative analysis of GRNTSTE and the gene regulatory network construction methods in the BEELINE framework. We ran the different algorithms on the BEELINE data sets and evaluated their performance using AUPRC. Since the GRNTSTE method infers gene regulatory networks from time-series data sets, we first constructed a pseudo-time-series gene expression data set based on the simulated data set and the time-lapse information, as shown in the Table. Furthermore, to compare with existing gene regulatory network construction methods, we calculated AUPRC for the data sets of 2000 and 5000 cells, respectively; the detailed results are shown in the Table. We further validated the effectiveness of our proposed gene regulatory network inference method GRNTSTE on public data sets, namely the IRMA OFF/ON data sets from Cantone et al.56, which include five genes: SWI5, GAL80, GAL4, CBF1 and ASH1. Since our method infers positive regulatory relationships from time-series data, the IRMA ON data set was selected. Using the GRNTSTE method, we constructed the gene regulatory network shown in the Figure. All procedures on rats presented in this manuscript were approved by the Institutional Experimental Animal Welfare and Ethics Committee of Inner Mongolia Agricultural University, and the study was carried out in compliance with the ARRIVE guidelines. Rats were euthanized according to the approved method for rat experiments; the skull was then opened and the brain tissue taken out. Next, the pineal gland in the rhythm center was isolated and its microstructure identified. 
Finally, the rat pineal gland was removed and put into a 2 ml Corning freezing tube. At the same time, the details of the sample were marked and the tube was immediately put into liquid nitrogen. All rat experiments in our work comply with the national "Experimental Animal Environment and Facilities" standard (GB14925-2010), and follow the "Experimental Animal Management Regulations" (No. 2 Order of the State Science and Technology Commission) and the Ministry of Science and Technology "Experimental Animal License Management Measures" [2001 No. 545]. We confirm that all our methods were performed in accordance with the above guidelines and regulations. A total of 480 male rats, aged 8 weeks, with an average body mass of 180 g, were selected from the rat breeding farm in Qingdao, Shandong Province. All experimental rats were kept in a 100 square meter independent rat room for two weeks. Over a complete circadian rhythm cycle, rat pineal glands were sampled every three minutes, from 7:00 a.m. on November 15, 2020 to 7:00 a.m. on November 16, 2020; sampling was carried out continuously for 24 h until the end of the experiment. Each sampled rat pineal gland was placed in a 2 ml Corning freezing tube and immediately cryopreserved in liquid nitrogen. Total RNA was extracted using the Biomend RNApure Rapid RNA Kit (RA103-02). The total RNA extraction results were checked on an Agilent 2100, with RNA integrity number (RIN) values above 9.0. A total amount of 3 µg RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext Ultra RNA Library Prep Kit following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Briefly, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads. 
Fragmentation was carried out using divalent cations under elevated temperature in NEBNext First Strand Synthesis Reaction Buffer (5×). First strand cDNA was synthesized using random hexamer primers and M-MuLV Reverse Transcriptase (RNase H-). Second strand cDNA synthesis was subsequently performed using DNA polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease/polymerase activities. After adenylation of the 3′ ends of the DNA fragments, NEBNext Adaptor with a hairpin loop structure was ligated to prepare for hybridization. In order to select cDNA fragments of preferentially 250–300 bp in length, the library fragments were purified with the AMPure XP system. Then 3 µl USER Enzyme was used with the size-selected, adaptor-ligated cDNA at 37 °C for 15 min followed by 5 min at 95 °C before PCR. PCR was then performed with Phusion High-Fidelity DNA polymerase, Universal PCR primers and Index (X) Primer. Finally, PCR products were purified (AMPure XP system) and library quality was assessed on the Agilent Bioanalyzer 2100 system. The pineal gland is the regulatory center of the biological clock. It alternately secretes melatonin and serotonin with a distinct circadian rhythm: serotonin during the day and melatonin at night. Since the secretion of melatonin is regulated by light and dark, the light-dark alternation of the circadian cycle periodically drives changes in melatonin secretion. Studies have shown that plasma melatonin concentrations decrease during the day and increase at night. Therefore, the pineal gland sends time signals to the central nervous system according to the circadian secretion of melatonin, which in turn triggers time- or age-related biological clock phenomena, for example, sleep and wakefulness in humans, ovulation in the menstrual cycle, and the onset of puberty. 
Therefore, rhythm genes in the pineal gland play an important role in regulating the rhythmic cycle of organisms. In our work, we constructed the rhythm gene regulatory network of the rat pineal gland based on the GRNTSTE method, and a directed graph is used to describe the regulatory relationships between genes. In this paper, the construction process of the gene expression regulatory network consists of 6 stages, as shown in Fig. In the stage of sample collection based on time series, time series gene expression data are used to study the state of biological processes at different time points, so as to discover the changing laws of the dynamic biological processes of an organism. Currently, there are two main methods for obtaining time series data in the field of bioinformatics: the first is single-cell sequencing technology, and the other is sampling at fixed time intervals. Single-cell sequencing technology constructs a time-series dataset by sampling target tissues and isolating single cells at different growth stages for sequencing. However, there are usually errors in cell separation, the resulting data cannot form an equally spaced time series, and the cost of single-cell sequencing technology is very high. By contrast, sampling by time point at a fixed sampling rate observes the target at equal time intervals. 
Compared with single-cell sequencing technology, this method has a lower cost and can obtain sample data at the time interval specified by the experimental design, yielding a more accurate and richer sample dataset. For our experimental data collection, we sampled the pineal tissue of rats with the same growth environment, age and sex every 3 min, over the period from 7:00 in the morning to 7:00 the next morning. The samples were then frozen in liquid nitrogen and sequenced. We collected samples for 24 h, obtaining 480 rat pineal tissue samples. An n × t gene expression profile matrix (where n is the number of genes and t is the number of time points) was obtained by gene quantitative analysis. In the preprocessing stage of gene time series data, owing to the influence of the environment, equipment and human factors during data acquisition, there are usually outliers and random values in the data, as shown in Fig. Outliers and random values not only affect the accuracy of the calculation results, but also cause the results to deviate from the essential trend of the time series; preprocessing is therefore one of the important steps in the data mining process. For the outliers in the time series data, we use the moving average smoothing method, which replaces each point by the mean of the values in a sliding window around it, to preprocess the collected time series data and reduce the influence of outliers on the analysis results. The moving average formula is shown in Eq. (6). In the stage of target gene selection, choosing the target gene is an important step in the construction of gene regulatory networks. The target gene we took is the rhythm gene that regulates the secretion of melatonin in the rat pineal gland. 
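The smoothing step can be sketched as a simple sliding-window mean (the window size here is illustrative; the paper's Eq. (6) fixes the exact form):

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth a 1-D expression time series with a sliding mean.

    Edge points are averaged over the part of the window that exists,
    so the output has the same length as the input.
    """
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window)
    # 'same'-length convolution divided by the actual window coverage
    return (np.convolve(x, kernel, mode="same")
            / np.convolve(np.ones_like(x), kernel, mode="same"))

# Toy series with one outlier at t = 3
series = [1.0, 1.1, 0.9, 9.0, 1.0, 1.05, 0.95]
smoothed = moving_average(series, window=3)
```

The outlier at t = 3 is pulled toward its neighbors rather than removed, so the overall trend of the series is preserved.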
In previous research, the aromatic arylalkylamine N-acetyltransferase (AANAT) gene has been proven to encode an important rate-limiting enzyme in the melatonin biosynthesis pathway. Melatonin is an important hormone secreted by the pineal gland, and its secretion has an obvious periodic rhythm. In addition, the rhythm of melatonin synthesis in the pineal gland is mainly controlled by light, and the change of light is an important sign of the change of day and night. The change trend of AANAT gene expression is shown in Fig. We selected genes with the same expression pattern as the AANAT gene based on pattern clustering. Popular clustering methods for time series data include the fuzzy c-means method, the cosine similarity method and so on. For the gene expression profile matrix of the rat pineal gland, we first filtered out genes whose expression levels did not change, had no significant changes, or had a standard deviation of 0 over time. Then, the remaining genes were analyzed by pattern clustering based on the fuzzy c-means method. We divided the genes into 12 categories, and the pattern clustering results are shown in Fig. In order to obtain better clustering results, we also performed cluster analysis based on cosine similarity clustering. We then selected the category that contains the AANAT gene, which comprises 643 genes. Taking the results of the two clustering methods together, we selected the genes shared by both, obtaining 350 target rhythm genes in this process. We then manually screened these 350 genes and removed those that did not match the expression trend of the AANAT gene on the time axis. The change trend of target gene expression is shown in Fig. In the stage of calculating transfer entropy, we screened out 124 genes with expression patterns similar to the AANAT gene, each containing gene expression information at 480 time points. 
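The pattern-matching selection can be sketched with a cosine-similarity filter against the AANAT profile (gene names and the 0.9 threshold are hypothetical; fuzzy c-means clustering would be run analogously with a clustering library, and the two resulting gene sets intersected):

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_pattern_matches(expr, reference, threshold=0.9):
    """Return gene names whose time-series profile is cosine-similar to
    the reference (e.g. AANAT) profile.  `expr` maps gene -> profile.
    Flat profiles (standard deviation 0) are filtered out first."""
    return [g for g, prof in expr.items()
            if np.std(prof) > 0
            and cosine_similarity(prof, reference) >= threshold]

# Toy profiles (hypothetical genes); AANAT-like = low by day, high at night
aanat = [0.1, 0.2, 0.9, 1.0, 0.8, 0.2]
expr = {
    "geneA": [0.2, 0.3, 1.0, 1.1, 0.9, 0.3],  # same shape as AANAT
    "geneB": [1.0, 0.9, 0.2, 0.1, 0.2, 0.9],  # inverted pattern
    "geneC": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],  # flat -> filtered out
}
matches = select_pattern_matches(expr, aanat)
```

Only `geneA` survives: the inverted profile scores a low cosine similarity, and the flat profile is removed by the standard-deviation filter, mirroring the pre-filtering described above.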
These genes can be represented by an expression profile matrix in n × t format, where n represents the number of genes and t represents the length of the time series. The gene expression data we collected based on time series are large-scale, which matches the application advantage of transfer entropy in handling large-scale time series, and the causal relationship between gene pairs can be established by transfer entropy. Due to the asymmetry of transfer entropy, we need to calculate the bidirectional transfer entropy between each pair of genes, so the total number of transfer entropy calculations is n(n − 1), as shown in Eq. (7). Therefore, we obtain 15,252 candidate gene regulatory relationships from the analysis of the 124 genes (124 × 123). In this process, we obtain the transfer entropy value and p-value between all paired genes. In the stage of regulatory relationship screening, the screening of regulatory relationships between genes is one of the key steps in constructing gene regulatory networks. We screened gene regulatory relationships based on the transfer entropy and p-value between paired genes. We first screened the one-way regulatory relationship between paired genes, as shown in Fig. Then, we screened according to the p-value of the transfer entropy between paired genes: only regulatory relationships with extremely significant information transfer are retained, i.e., those with p-value < 0.001. In this process, we obtained 7243 statistically significant gene regulation relationships. 
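A minimal discrete transfer-entropy estimator for one ordered gene pair might look as follows; this is a sketch with equal-width binning and history length 1, not necessarily the estimator used by GRNTSTE:

```python
import numpy as np
from collections import Counter
from math import log2

def discretize(x, bins=2):
    """Equal-width binning of a continuous series into integer symbols."""
    x = np.asarray(x, float)
    edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]
    return np.digitize(x, edges)

def transfer_entropy(x, y, bins=2):
    """TE(X -> Y) with history length 1, in bits:
    sum over states of p(y+, y, x) * log2[ p(y+|y,x) / p(y+|y) ]."""
    xs, ys = discretize(x, bins), discretize(y, bins)
    triples = Counter(zip(ys[1:], ys[:-1], xs[:-1]))
    pairs_yx = Counter(zip(ys[:-1], xs[:-1]))
    pairs_yy = Counter(zip(ys[1:], ys[:-1]))
    singles_y = Counter(ys[:-1])
    n = len(ys) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]       # p(y+ | y, x)
        p_cond_marg = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y+ | y)
        te += p_joint * log2(p_cond_full / p_cond_marg)
    return te

# Toy pair: X drives Y with a one-step lag, so TE(X->Y) >> TE(Y->X)
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.1 * rng.normal(size=500)
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

Because the estimate is asymmetric, both orderings must be computed for every gene pair, which is where the n(n − 1) count above comes from; in practice a permutation test on shuffled series would supply the p-value for each pair.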
Finally, we further screened the gene regulation relationships with TE ≥ 0.5 and obtained 743 gene regulation relationships. In the stage of gene regulation network construction, we need to screen the major gene regulation relationships, as shown in Fig. Based on the above screening methods, we finally obtained 117 gene regulatory relationships, and the Cytoscape software was used to construct the gene regulatory network, shown in Fig. In addition, as shown in Fig., we also obtained a set of 80 gene regulatory relationships based on the above screening methods, and the corresponding gene regulatory network constructed with Cytoscape is shown in Fig. Binkley61 found that the daily synthesis and secretion of melatonin in the rat pineal gland is highly correlated with N-acetyltransferase activity, which also shows the effectiveness of our method. In addition, many other rhythm genes are included in the gene regulatory network we constructed, which have been verified by researchers through experiments such as gene knockout. For example, Wang et al.62 proved that the fcer1a gene is an important rhythm gene, and the expression of the fcer1a gene and FceRIa protein displayed a circadian pattern following serum shock, with mean periods of 18.9 and 28.6 h, respectively. Pan et al.63 showed that in mouse liver, transcriptional regulation significantly contributes to the establishment of 12-h rhythms of mRNA expression in a manner dependent on the Spliced Form of X-box Binding Protein 1 (XBP1s); mechanistically, the motif stringency of XBP1s promoter binding sites dictates XBP1s's ability to drive 12-h rhythms of nascent mRNA transcription at dawn and dusk. Terrelonge et al.64 showed that KIBRA, MTNR1B, and FKBP5 play important roles in the complex relationship between delirium, cognition, and sleep, and warrant further study in larger, more diverse populations. 
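The two screening stages described above (p-value < 0.001, then TE ≥ 0.5) and the hand-off to Cytoscape can be sketched as plain filtering plus an export in Cytoscape's simple-interaction (SIF) format; gene names and values here are hypothetical:

```python
# Hypothetical output of the transfer-entropy stage:
# (regulator, target, TE value, p-value) for each ordered gene pair.
candidates = [
    ("Aanat", "GeneX", 0.82, 0.0002),
    ("GeneX", "Aanat", 0.31, 0.0004),  # weaker reverse direction
    ("GeneY", "GeneZ", 0.64, 0.0300),  # not significant
    ("GeneZ", "GeneY", 0.55, 0.0001),
]

# Stage 1: keep only statistically significant transfers (p < 0.001)
significant = [e for e in candidates if e[3] < 0.001]

# Stage 2: keep only strong transfers (TE >= 0.5)
edges = [e for e in significant if e[2] >= 0.5]

# Export as SIF ("source <tab> interaction <tab> target"), one edge per
# line, which Cytoscape can load to draw the directed network.
sif_lines = [f"{src}\tregulates\t{dst}" for src, dst, te, p in edges]
```

Note that the directionality from the asymmetric transfer entropy is preserved: the weak reverse edge GeneX → Aanat is dropped while the forward edge survives.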
Secretion of the stress hormone cortisol follows a circadian rhythm and is stimulated following stress exposure. Yurtsever et al.65 studied the temporal association between unstimulated, diurnal cortisol secretion and the expression of selected GR-target genes in vivo to determine the timing of the most pronounced coupling between cortisol and mRNA expression. Adi et al.66 implicated one rhythmically expressed gene, camk1gb, in connecting the clock with the downstream physiology of the pineal gland; remarkably, knockdown of camk1gb disrupts locomotor activity in the whole larva, even though it is predominantly expressed within the pineal gland, so camk1gb appears to play a role in linking the pineal master clock with the periphery. Wong et al.67 demonstrated, through gene knockout experiments, that both loss and aberrant gain of RCAN1 precipitate anomalous light-entrained diurnal and circadian activity patterns emblematic of DS, AD, and aging. In conclusion, the above studies not only fully prove the effectiveness of the GRNTSTE method, but also show that the gene regulatory network we constructed has important reference value. In summary, our experimental analysis shows that the AANAT gene is the ultimate receptor gene and is highly related to the secretion of melatonin, which is consistent with the conclusion of Binkley61. In our work, in order to infer gene regulation relationships from massive time series gene expression data, we propose the GRNTSTE method, which uses transfer entropy to infer the regulatory relationships between genes. We compared GRNTSTE with the existing algorithms SCRIBE, TENET and dynGENIE3, and the results show that GRNTSTE has better performance than dynGENIE3 and SCRIBE, while GRNTSTE and TENET have similar performance. At the same time, the performance of GRNTSTE is slightly lower than that of the SINCERITIES method, and it outperforms the other gene regulatory network construction methods in BEELINE. 
This shows the superiority of GRNTSTE in reconstructing gene regulatory networks from single-cell gene expression data. We then applied the GRNTSTE method to the construction of the rhythm gene regulatory network in rat pineal gland tissue. The gene regulatory network constructed from large-scale time series gene expression data is helpful for studying the interactions between rhythm genes. It is of great significance for exploring the interactions between the genes involved in melatonin secretion in the pineal gland and for comprehensively explaining the molecular mechanism of melatonin secretion. In addition, it can guide the treatment of diseases related to the pineal gland, such as insomnia. Aromatic alkylamine N-acetyltransferase in the pineal gland is an important rate-limiting enzyme in the melatonin biosynthesis pathway. It may be involved in regulating the synthesis rhythm of melatonin, and it may play an important role in mediating the regulation by the photoperiod of the nocturnal peak of melatonin. In the pineal gland of normal rats, AANAT is a soluble cytoplasmic protein. The enzyme activity of AANAT is high at night and low during the day. In addition, light can quickly reduce AANAT enzyme activity, and compared with the activities of other enzymes in the melatonin synthesis process, AANAT activity is extremely low during the day. This shows that AANAT is the main rate-limiting enzyme in the process of melatonin synthesis. The periodic changes of AANAT activity in the pineal gland of most mammals can drive the circadian secretion of melatonin; therefore, AANAT is called the melatonin rhythm-forming enzyme. In order to study the regulatory relationships between rhythm genes in rat pineal tissue, we adopted the controlled variable method. The sampling interval was 3 min, and the sampling time was 24 h. We obtained 480 rat pineal tissue samples, forming a time series gene sample dataset. 
Large-scale time series data serve as our basic dataset for constructing gene regulatory networks. This design replaces the traditional two-state comparison, or data with only a small number of time points, with large-scale time series data. We break through the traditional genetic data analysis model and propose a new analysis method, GRNTSTE, for the study of dynamic biological processes. We then chose the rate-limiting enzyme of melatonin synthesis, AANAT, as the starting point of the research. We obtained the rhythm target genes similar in expression pattern to the AANAT gene on the time axis based on clustering methods, and we constructed a gene regulatory network of rhythm genes in rat pineal tissue based on large-scale gene expression time series data and transfer entropy, in which the transfer entropy is used to infer the gene regulatory relationships. Our experimental results are highly consistent with existing research, which provides a very valuable reference basis for further biological experimental verification. The GRNTSTE method breaks through the traditional way of gene regulatory network construction, and it is the first time the regulatory network relationships between genes are explored based on a data-driven model. The construction of the gene regulatory network by GRNTSTE is based on large-scale, data-driven analysis of genomics data, which effectively avoids the misleading conclusions caused by the randomness of gene expression data. In addition, large-scale time series data can effectively reflect the dynamic biological process information of gene expression levels. Therefore, the GRNTSTE method can not only effectively construct a gene expression regulatory network and provide a valuable basis for the in-depth exploration of biological experiments, but also effectively avoid the huge costs caused by blind biological experiments. 
The method proposed in this paper provides a new analysis idea for the study of gene regulatory networks, which has both theoretical and practical value. The systems biology approach of constructing gene regulatory networks based on large-scale time series data can provide a reference basis and hypotheses for biological experimental verification. However, there are few methods that construct gene expression regulatory networks from large-scale time-series gene expression data, and existing methods cannot capture continuous cell dynamics and dynamic biological processes well. In this paper, we first collected time series data of rat pineal gland tissue in the natural state at a fixed sampling rate and performed whole-genome sequencing. The resulting large-scale time-series sequencing dataset of the rat pineal gland comprises 480 time points, with a 3 min interval between adjacent time points and a sampling period of 24 h. Then, we propose a method named GRNTSTE for constructing gene regulatory networks based on large-scale time series data. We show that the GRNTSTE algorithm has better performance than SCRIBE and dynGENIE3 on the DREAM3 challenge datasets, while GRNTSTE and TENET have similar performance. At the same time, we compared GRNTSTE with the gene regulatory network methods in the BEELINE framework on the BEELINE single-cell datasets; the performance of GRNTSTE is slightly lower than that of the SINCERITIES method and better than the other gene regulatory network construction methods in the BEELINE framework. This shows the effectiveness and superiority of GRNTSTE in reconstructing gene regulatory networks from single-cell gene expression data. In addition, we further verified the effectiveness of our proposed gene regulatory network inference method GRNTSTE on the public datasets named IRMA OFF/ON from Cantone et al. 
Compared with ODE-based methods, GRNTSTE has higher sensitivity when the PPV is similar. Finally, taking the rhythm genes in the rat pineal gland as an example, transfer entropy was used to evaluate the regulatory relationships between gene pairs, and the rat pineal rhythm gene regulatory network was constructed based on the GRNTSTE algorithm. In the gene regulatory network we constructed, many genes are consistent with existing research results. This provides a valuable reference for the study of the regulation mechanism of the pineal rhythm, and it is of great significance to the study of dynamic biological processes."}
+{"text": "Leishmania amazonensis and Leishmania major are the causative agents of cutaneous and mucocutaneous diseases. The infection outcome depends on host-parasite interactions and the Th1/Th2 response, and in the cutaneous form, regulation of Th17 cytokines has been reported to maintain inflammation in lesions. Despite that, the Th17 regulatory scenario remains unclear. With the aim of gaining a better understanding of the transcription factors (TFs) and genes involved in Th17 induction, in this study, the role of inducing factors of the Th17 pathway in Leishmania-macrophage infection was addressed through computational modeling of gene regulatory networks (GRNs). The Th17 GRN modeling integrated experimentally validated data available in the literature and gene expression data from a time-series RNA-seq experiment. The generated model comprises a total of 10 TFs, 22 coding genes, and 16 cytokines related to Th17 immune modulation. Addressing Th17 induction in infected and uninfected macrophages, an increase of 2- to 3-fold at 4-24 h was observed in the former, whereas there was a decrease to basal levels at 48-72 h for both groups. The models were also used to evaluate the possible outcomes triggered by GRN component modulation in the Th17 pathway. The generated GRN models promoted an integrative and dynamic view of the Leishmania-macrophage interaction over time that extends beyond the analysis of single-gene expression. 
Leishmaniasis is caused by parasites of the genus Leishmania (Kinetoplastida, Trypanosomatidae) and represents one of the major public health problems in developing countries, with 12 million cases and an incidence of 0.7-1.0 million new cases annually, according to the WHO. Several TFs participate in the host response to infection, such as STAT1, IRF, NF-κB, and STAT6. Although many reviews have discussed the cellular immune responses against the different forms of leishmaniasis, and some studies have suggested candidate regulators, the Th17 regulatory scenario remains unclear. An alternative to better understanding the role of these elements of the immune response in the immunopathogenesis of leishmaniasis is the use of gene regulatory networks (GRNs). GRNs can be defined as mathematical and computational models capable of describing the logic underlying the regulatory occurrences among interacting genes while a specific cell program is operating. The fundamental challenge that GRNs address is the translation of the diverse information patterns presented by different types of biological macromolecules into biologically significant knowledge, knowledge directly associated with the time-dynamic processes underlying any kind of observable biological behavior. A key example is the Petri net, an in silico mathematical approach that provides a visual-aided, network-oriented modeling process based on nodes and edges representing places and transitions, with visual feedback affording interpretation by a broad audience; it can be applied to describing distributed systems, including biological ones. The samples analyzed comprised human macrophages infected with L. amazonensis, human macrophages infected with L. major, uninfected human macrophages, and macrophages containing latex beads. A total of 66 samples from the study SRP062278 were quality-checked with FastQC (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) and visualized using the MultiQC reporting tool. Human reference transcript files, release 99, were used. FASTA and GFF files with transcript sequences and annotations from L. amazonensis MHOM/BR/71973/M2269 and L. 
major Friedlin strain were downloaded from TriTrypDB (https://tritrypdb.org/tritrypdb/), release 46. Differential expression analyses were performed with the Limma and edgeR packages. The network analysis was performed using Graphia Pro, available in BioLayout Express3D. A network graph of the data was generated with a Pearson's correlation coefficient of r ≥ 0.85 using TMM normalization. The network graph was then clustered into groups of genes sharing similar expression profiles using the MCL algorithm. The pathways modulated during Leishmania infection were evaluated, and the Th17 pathway was selected. For each interaction found in the literature, the following information was recorded: a) partner 1 (a gene that acts by somehow regulating a second gene); b) partner 2 (a gene that undergoes some type of induction/inhibition by partner 1); c) the TFs that interact with partners 1 and 2; d) the cell type in which the interaction was observed; e) the type of interaction; and f) the induced biological process and outcome of the interaction between partners 1 and 2. To be included in the network diagram, all the identified interactions had to be confirmed by at least two different publications. This information was categorized as the network components: input node, transmission node, output node, and mode of interaction. Each infection time in the experimental design represents a time-block that was used for the simulation. Thus, the time-blocks were defined as a) 4 hpi (time-block 0-25), b) 24 hpi (time-block 26-50), c) 48 hpi (time-block 51-75), and d) 72 hpi (time-block 76-100). The network diagram was constructed using the yEd Graph Editor following the mEPN notation. Prior to performing a simulation, a model was parameterized through the places representing the TFs at the beginning of the network. For this parametrization, the normalized read count values obtained from the RNA-seq data were used to parameterize the input tokens that correspond to TFs identified in the literature review that interact with one or more genes. This resulted in the modeling of three networks: the first considering the uninfected macrophages, the second considering macrophages infected with L. amazonensis, and the third macrophages infected with L. major. GraphML files were loaded into BioLayout Express 3D, and a parser translated the diagrams drawn using the mEPN scheme. Once the files were imported, the conditions for stochastic PN (SPN) simulation were set. Each time point in the experimental design represents an interval of the time-blocks used for the simulation. All the measured read count averages for the identified TFs, at 4 hpi (0-25 time-block), 24 hpi (26-50 time-block), 48 hpi (51-75 time-block), and 72 hpi (76-100 time-block), were converted into parameterization values for each entry node in the GRN. Simulations were conducted in Graphia Pro using 100 time-blocks, 500 runs, uniform distribution in SPNs, and distribution and consumptive transition as the SPN transition type. During the simulation, some inhibitions and blocking edges were also introduced into the constructed models in order to observe the impact of these alterations on the outcome (Th17 response). In order to compare the TFs at each time point for each type of infected macrophage, statistical analysis was performed using mixed ANOVA with a general linear model (α = 0.05). Sphericity was evaluated using Mauchly's test (α = 0.05), and in cases where it was not verified, the Greenhouse-Geisser correction was applied to verify the effect of the independent variables and their interactions on the expression value of each factor individually. To locate the observed differences, a post-hoc analysis with Bonferroni correction was conducted, also with a significance level of 0.05. 
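The thresholded co-expression graph (before MCL clustering) can be sketched as pairwise Pearson correlations kept at r ≥ 0.85; the profiles below are toy values, not the SRP062278 counts:

```python
import numpy as np
from itertools import combinations

def correlation_edges(expr, r_min=0.85):
    """Build co-expression edges between genes whose expression profiles
    have Pearson's r >= r_min.  `expr` maps gene -> profile."""
    edges = []
    for (g1, p1), (g2, p2) in combinations(expr.items(), 2):
        r = np.corrcoef(p1, p2)[0, 1]
        if r >= r_min:
            edges.append((g1, g2, round(float(r), 3)))
    return edges

# Toy normalized counts over 4 time points (hypothetical gene profiles)
expr = {
    "STAT1": [1.0, 3.0, 2.0, 0.5],
    "IRF1":  [1.1, 3.2, 2.1, 0.4],  # tracks STAT1 closely
    "IL10":  [2.0, 0.5, 1.0, 3.0],  # roughly inverse profile
}
edges = correlation_edges(expr)
```

Only the tightly co-varying pair is connected; anti-correlated profiles fall below the threshold, and a clustering algorithm such as MCL would then group the connected genes into expression modules.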
Approximately 90% of reads were mapped against the reference transcriptomes. The samples SRR2163291 (macrophage containing beads at 48 hpi) and SRR2155160 (macrophage infected with L. major at 24 hpi) were identified as outliers and removed from the analyses due to the observation of distinct profiles in the principal component analysis. Considering the applied cutoff to define significance, for the dataset comprising the macrophages infected with L. major, a total of 3,082, 1,389, 1,193, and 642 DE genes were identified for the respective time-points. For the macrophages infected with L. amazonensis, when compared to uninfected macrophages, a total of 2,452, 1,332, 301, and 327 DE genes were found for the respective time-points. The most expressed genes in infected macrophages were functionally annotated. For the biological process category, the main ontologies identified in both datasets (macrophages infected with L. major and macrophages infected with L. amazonensis) were cellular response to the stimulus caused by DNA damage (GO:0006974), DNA repair (GO:0006281), DNA metabolic process (GO:0006259), DNA replication (GO:0006260), and modifications in lysines (GO:0018205). Considering the category of molecular function, the main ontologies identified were related to insertion or deletion of DNA bases (GO:0032135), damaged DNA binding (GO:0003684), and DNA-dependent ATPase activity (GO:0008094). For the cellular component category, ontologies related to chromatin (GO:0000785) and centrosome (GO:0005813) were identified. The genes comprising the selected clusters were functionally annotated, which allowed the identification of many immunological pathways, such as the tumor necrosis factor (TNF) signaling pathway, Leishmania infection, MAPK signaling pathway, toll-like receptor signaling pathway, IL-10 signaling pathway, and TH17/IL-17 signaling pathway. 
Considering the pathways found in the annotation, a literature review was conducted in order to identify the most relevant pathways for Leishmania infection, using the search terms depicted in the corresponding table. All the information categorized in the literature review was used to model the IL-17 network diagram. The components (genes) in the IL-17 GRN are represented by rectangles connected by arrows, which indicate the molecular interactions found during the literature review step. System start nodes (TFs) are represented by rectangles colored in black. The GRN output signal (Th17 response) is represented by the rectangle on the right side of the diagram (Figure 1). As a result of model parameterization, we identified the TFs involved in the IL-17 pathway and calculated the average expression counts for each evaluated time-block (Table 2). After the establishment of the parameterization, we performed the network simulations for uninfected macrophages and for macrophages infected with L. major and L. amazonensis; the behavior under normal conditions can be seen in the corresponding figure. After the network parameterization and simulation, inhibition points were inserted on the edges between certain interactions in order to observe how the signal intensity changed based on the modifications in the model (Figure 3). For the transcription factor STAT1, an edge change was inserted that reduces the intensity with which phosphorylation is performed on the target gene for its activation, while for the STAT3 factor, a blocking edge was inserted, which completely inhibits the signal generated by this TF; i.e., it does not phosphorylate the target gene. In addition to these two modifications in the TFs STAT1 and STAT3, partial inhibitory modifications were also inserted in CEBPβ (CEBPB, CCAAT/enhancer-binding protein beta) and NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells). These modifications caused differences in the intensity of the generated signal (Figure 4). 
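The token-flow idea behind the SPN simulation can be illustrated with a toy stochastic Petri net: tokens at a TF input place (parameterized from read counts) are consumed by a transition and accumulate at an output place standing in for the Th17 signal. This is a deliberately minimal sketch, not the mEPN/Graphia implementation:

```python
import random

def simulate_spn(tf_tokens, time_blocks=100, runs=500, p_fire=0.5, seed=1):
    """Average token count at the output place over repeated stochastic runs.

    One place ('TF') feeds one transition that fires with probability
    p_fire per time-block, moving a token to the output place ('signal').
    """
    rng = random.Random(seed)
    totals = 0
    for _ in range(runs):
        tf, signal = tf_tokens, 0
        for _ in range(time_blocks):
            if tf > 0 and rng.random() < p_fire:
                tf -= 1       # token consumed from the TF place
                signal += 1   # token produced at the output place
        totals += signal
    return totals / runs

# Higher TF parameterization -> stronger downstream signal on average
low, high = simulate_spn(10), simulate_spn(40)
```

A blocking edge corresponds to setting `p_fire` to 0 for a transition, and a partial inhibition to lowering it, which is the qualitative effect of the STAT1/STAT3 modifications described above.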
amazonensis, a decrease in CxCL8 and CXCL9 expression levels was also observed. The addition of inhibitory points in NF-\u03baB also causes a decrease in the potential to induce IL-12 and IL-23 levels. However, even with the inhibition of TFs, the gene expression related to the IL-17 pathway in infected macrophages did not equalize with non-infected macrophages. For the analysis considering the TF comparisons between the time points and across the macrophage types (macrophages infected with L. major and macrophages infected with L. amazonensis), differences in the expression of only the STAT1, STAT4, TRAF6, and NFKB1 factors were observed for the different types of macrophages, at different times of infection. The expression of STAT1 and STAT4 was more associated with the early points of the infection (4\u201324 hpi) in infected macrophages. Leishmania parasites are capable of modulating the immunological response and many signaling pathways in host cells in order to promote survival and infection. Similarly, studies with L. amazonensis showed that amastigotes induce a reduction of STAT-2 phosphorylation and an increase of degradation through parasite proteases. It is known that IFN-\u03b1 and IFN-\u03b2 are related to the expression of nitric oxide synthase type 2 (NOS2) and the production of NO by macrophages, and other studies have shown the importance of this pathway against intracellular infections. In this context, after the inhibition of selected TFs, we noticed a decrease in the expression of CxCL2 and CCL7 in macrophages infected with L. major. For L. amazonensis, a decrease in CxCL8 and CXCL9 expression levels was also observed. The addition of inhibitory points in NF-\u03baB also causes a decrease in the potential to induce IL-12 and IL-23 levels. 
Despite this, it is well known that these ILs are associated with a protective role against leishmaniasis. This study provided: a) a set of TFs that can cause changes in the gene expression profile; b) a set of genes ranked as crucial for the immune modulation of the pathway; and c) the IL-17 pathway model for uninfected and infected macrophages, considering the time course of the infection. These promising results encourage us to keep looking for key factors present in other crucial pathways that we believe are involved in the resistance and susceptibility of macrophage infection by leishmaniasis. The host\u2013parasite interaction of leishmaniasis has enormous relevance in the disease outcome, with many factors involved, from the gene expression triggered by this interaction to the cell recruitment and activation driven by cytokine and chemokine production and signaling. In this context, a GRN is a suitable tool for managing this ocean of factors, evaluating specific points, and attributing values in a foreseen scenario of dynamic interactions. The GRN computational modeling built in this study was able to predict how the signal flows through the network. Furthermore, our study was able to follow, across determined kinetics, the gene expression profile of macrophages infected with two different Leishmania species. Publicly available datasets were analyzed in this study. These data can be found here: SRA accession number: SRP062278. LG acquired, analyzed, and interpreted the data and wrote the paper. AP acquired, analyzed, and interpreted the data. FM interpreted the data and wrote the paper. AE interpreted the data. MC interpreted the data, wrote the paper, and revised it critically for intellectual content. DR analyzed and interpreted the data. MP interpreted the data, wrote the paper, and revised it critically for intellectual content. JR interpreted the data, wrote the paper, and revised it critically for intellectual content. 
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.JR was funded by CNPq project number 310104/2018- and, INOVA-Fiocruz, Grants VPPIS-001-FIO-18-12 and VPPCB-005-FIO-20-2-42. MC was funded by INOVA-Fiocruz, Grant VPPIS-001-FIO-18-8.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
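The edge-inhibition experiments described for the IL-17 GRN can be illustrated with a small signal-propagation sketch. The topology, node names (STAT3, RORC, IL17), and weights below are hypothetical placeholders, not the parameterized model from the study; a weight below 1 mimics a partial inhibitory edge and a weight of 0 mimics a blocking edge.

```python
# Toy signal propagation through a weighted directed gene-regulatory network.
# Weight 0 models a blocking edge; 0 < weight < 1 models partial inhibition.

def propagate(edges, start, n_rounds=10):
    """edges: dict mapping (src, dst) -> weight in [0, 1].
    start: dict mapping start node (TF) -> initial signal intensity.
    Returns dict of node -> signal intensity after n_rounds of propagation."""
    signal = dict(start)
    for _ in range(n_rounds):
        nxt = dict(start)  # start nodes keep their input signal
        for (src, dst), w in edges.items():
            nxt[dst] = nxt.get(dst, 0.0) + w * signal.get(src, 0.0)
        signal = nxt
    return signal

# Hypothetical chain: STAT3 activates RORC, which drives the IL17 output.
edges_normal = {("STAT3", "RORC"): 1.0, ("RORC", "IL17"): 1.0}
edges_blocked = {("STAT3", "RORC"): 0.0, ("RORC", "IL17"): 1.0}

normal = propagate(edges_normal, {"STAT3": 1.0})
blocked = propagate(edges_blocked, {"STAT3": 1.0})
```

With the blocking edge on STAT3, no signal reaches the IL17 output, mirroring the complete-inhibition scenario described for STAT3 in the text.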
+{"text": "The recent breakthrough of single-cell RNA velocity methods brings attractive promises to reveal directed trajectories of cell differentiation, state transitions and responses to perturbations. However, the existing RNA velocity methods are often found to return erroneous results, partly due to model violation or lack of temporal regularization. Here, we present UniTVelo, a statistical framework of RNA velocity that models the dynamics of spliced and unspliced RNAs via flexible transcription activities. Uniquely, it also supports the inference of a unified latent time across the transcriptome. With ten datasets, we demonstrate that UniTVelo returns the expected trajectory in different biological systems, including hematopoietic differentiation and even those with weak kinetics or complex branches. RNA velocity can detect the differentiation directionality by modelling sparse unspliced RNAs, but suffers from high estimation errors. Here, the authors develop a computational method called UniTVelo to reinforce the velocity estimation by introducing a unified time and a top-down model design. Ordering cells along such developmental progressions is often referred to as trajectory inference, a set of computational algorithms to infer the order and pseudotime of individual cells along differentiation trajectories3. Plenty of methods have been developed for this purpose to model the progression of cells from transcriptome-derived manifolds, with either continuous pseudotime or topologies ranging from linear and bifurcating to graph-structured8. However, since scRNA-seq only captures a static snapshot of the transcriptome of a cell population, most conventional trajectory inference methods lack the ability to automatically identify the direction of the returned trajectory. 
Hence, these methods often require additional inputs or prior knowledge to specify progenitor cells and differentiated cells9, which consequently limits their applicability for biological processes with unknown cell fates or in abnormal conditions. Single-cell RNA sequencing (scRNA-seq) has already transformed how dynamic biological processes are studied at the cellular level. It has enabled the tracking of developmental stages of distinct cell lineages10. By leveraging the balance between unspliced and spliced mRNA reads during transcription, RNA velocity11 has further extended the descriptive trajectory model into a predictive one, where a positive velocity represents a gene being up-regulated whilst a negative velocity stands for down-regulation. Assuming transcription phases last sufficiently long to reach a new equilibrium, La Manno et al. formed the fundamental \u2019steady-state\u2019 model of the RNA velocity method and consequently projected cells\u2019 new states by aggregating velocities across all genes11. Recently, Bergen and colleagues further extended the RNA velocity quantification and introduced the scVelo package, which contains the covariance-based \u2019stochastic\u2019 mode and the likelihood-based \u2019dynamical\u2019 mode12. On the other hand, the short-term change of expression levels (often referring to the spliced mature RNAs) can be indicated by the commonly captured nascent RNAs, which also reflect the regulatory activity of transcription14. Nevertheless, accurate velocity estimation remains difficult, partly because of the severely low signal-to-noise ratio in unspliced mRNAs. More importantly, current models either rely on linear assumptions to form a steady-state regression line or presume a time-invariant transcription rate for a certain state, which are often violated and may result in distorted or even reversed velocity matrix estimation, e.g., in ref. 
15. Various strategies have been proposed to improve the estimation, e.g., identifying differential momentum genes16, projecting high-dimensional transcriptomics onto effective embeddings17, or enriching the nascent RNAs via metabolic labeling techniques18. However, velocity estimations are still found to be inaccurate or inconsistent when recovering cellular transitions20 that contain MURK genes, retina development with a clear cell cycle phase21, and multi-branching scenarios in bone marrow differentiation22. Here, to circumvent the limitation of the linear assumption that a gene ought to exhibit dynamical traits, and instead of focusing on the refinement of pre- or post-processing modules, we focused on the core velocity estimation step and developed UniTVelo, a statistical method that models the full dynamics of gene expression with a radial basis function (RBF) and quantifies RNA velocity in a top-down manner. Uniquely, we also introduce a unified latent time across the whole transcriptome, which can resolve the discrepancy of directionality between genes. The capabilities and generalization abilities of UniTVelo are demonstrated on various developmental trajectories across 10 datasets. In our notation, u(t) and s(t) respectively represent normalized unspliced and spliced mRNA reads, with the full transcription dynamics being described in a temporal relationship by the transcription rate \u03b1(t), splicing rate \u03b2 and degradation rate \u03b3. In conventional bottom-up frameworks, the transcription rate \u03b1(t) is first defined, commonly with a step function, followed by deriving the profiles of unspliced and spliced RNAs from the dynamical equations. UniTVelo instead first defines the profile of spliced RNAs, s(t)\u2009:=\u2009f(t; \u03b8), where \u03b8 is a set of gene-specific parameters controlling the shape of phase portraits, and then derives the dynamics of unspliced RNAs and transcription rates from the same dynamical equations. The RBF design has been validated for its usefulness in modeling transcriptome dynamics23 and has the ability to capture induction, repression, and transient shapes with a single function family, which therefore may mitigate model violation in complex transcription regulation. The velocity is then defined directly via the derivative of f, rather than the deviation to steady-state 
equilibrium (Methods). This benefits the RNA velocity quantification based on more reliable spliced RNAs and allows cells that are either above or under the steady-state equilibrium to both be assigned to the steady state, instead of being forcibly divided into induction and repression stages as done by previous frameworks. Overall, this top-down framework enjoys the computational convenience of both modelling the spliced RNAs with diverse distribution families, including deep neural networks as demonstrated in ScTour25, and aggregating latent time across genes (see below), while preserving the same level of accuracy in a vanilla setting. Thanks to the spliced-RNA-oriented design, the velocity of each gene can be obtained directly from the derivative of the spliced mRNA function. Our second major innovation is the introduction of a unified latent time across the whole transcriptome when inferring gene expression dynamics and RNA velocity (Fig.). A maximum likelihood estimation of the UniTVelo model is achieved by a principled Expectation-Maximization (EM) algorithm (Methods). Briefly, the predicted time for each cell along the differentiation path is updated concurrently while optimizing the parameters of the dynamical system (Fig.). Furthermore, certain genes were reported to show unusual transcription kinetics during erythroid maturation15, hence violating the conventional model assumption and limiting the applicability of current RNA velocity methods14. By re-analyzing with scVelo, we also evidenced the distorted lineage inference. In contrast, by using a unified time, UniTVelo corrects the trajectory directions to the expected erythroid maturation on both datasets, including for MURK genes such as Abcg2 and Smim1. We next examined a hematopoiesis dataset differentiating to three distinct branches: erythroids, monocytes, and common lymphoid progenitors (CLP). 
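For intuition, the conventional splicing dynamics that the steady-state and dynamical models build on can be sketched as a two-equation ODE system; the rate constants and the step-function transcription profile below are illustrative placeholders, not fitted values.

```python
import numpy as np

# Sketch of the classical bottom-up RNA velocity model:
#   du/dt = alpha(t) - beta * u   (transcription minus splicing)
#   ds/dt = beta * u - gamma * s  (splicing minus degradation)
# The velocity of a gene is ds/dt: positive -> up-regulation.

def simulate(alpha, beta=0.3, gamma=0.2, dt=0.01, steps=2000):
    """Euler-integrate the unspliced (u) and spliced (s) dynamics."""
    u = s = 0.0
    us, ss = [], []
    for k in range(steps):
        t = k * dt
        u += dt * (alpha(t) - beta * u)
        s += dt * (beta * u - gamma * s)
        us.append(u)
        ss.append(s)
    return np.array(us), np.array(ss)

# Step-function transcription: induction for t < 10, repression afterwards.
u_t, s_t = simulate(lambda t: 1.0 if t < 10 else 0.0)
velocity = 0.3 * u_t - 0.2 * s_t  # ds/dt with the same beta, gamma
```

During induction the velocity is positive, and after the transcription switch-off it turns negative, which is the directional signal the methods above aggregate across genes.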
When re-analyzing it with scVelo, we found it again returns a reversed direction on the erythroid branch, similar to reports from the original authors14, and also a distorted trajectory on the monocyte branch (quantified via a scaled error, namely the mean squared error divided by the variance of spliced mRNA counts). UniTVelo also offers an independent mode, similar to scVelo, where each gene is analyzed independently and has its own latent time. This flexible setting is useful for datasets with a high signal-to-noise ratio, especially for complex differentiation scenarios containing cell cycles or sparse cell types, which hamper the performance of the unified-time mode. By and large, both scVelo and UniTVelo (independent mode) successfully identified the major differentiation trajectory in this experiment, in which neuroblast cells gradually become granule cells; see quantitative metrics in Table\u00a028. Lastly, on the two pancreas datasets, both UniTVelo (the independent mode) and scVelo identify the major trajectories, with minor differences on the cycling progenitors part and the behavior of terminal areas. For pre-processing, high-quality genes were selected with the threshold that at least 20 cells have both unspliced and spliced mRNA counts expressed. Based on principal component analysis (30 components by default), Euclidean distances were used to construct the K-nearest-neighbor graph (30 neighbors by default) on logarithmized spliced mRNA counts. Due to high noise in scRNA-seq protocols, raw counts need to be smoothed before velocity estimation for variance stabilization; first-order moments were computed for each cell based on the KNN graph, namely both spliced and unspliced RNA values of each cell were replaced by the average over all of its neighbors. These pre-processing steps were done with scVelo (imported as scv)32. 
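The first-order moment smoothing step described above can be sketched in plain numpy; this is a simplified stand-in for scVelo's KNN-graph-based moments, and the embedding, data, and k below are illustrative.

```python
import numpy as np

# KNN smoothing sketch: each cell's expression vector is replaced by the
# average over its k nearest neighbors (by Euclidean distance in a
# low-dimensional embedding). scVelo builds the KNN graph on PCA space.

def knn_smooth(values, embedding, k=3):
    """values: (n_cells, n_genes); embedding: (n_cells, n_dims)."""
    d = np.linalg.norm(embedding[:, None, :] - embedding[None, :, :], axis=2)
    smoothed = np.empty_like(values, dtype=float)
    for i in range(len(values)):
        nbrs = np.argsort(d[i])[:k]  # includes the cell itself (distance 0)
        smoothed[i] = values[nbrs].mean(axis=0)
    return smoothed

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 2))                    # toy 2-D embedding
raw = rng.poisson(5, size=(50, 10)).astype(float) # toy counts
smooth = knn_smooth(raw, emb, k=5)
```

Averaging over neighbors stabilizes the variance of the noisy counts before velocity estimation, at the cost of some local blurring.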
scRNA-seq can measure thousands of genes simultaneously, but limitations arise, including bias in transcript coverage, high technical noise, and low capture efficiency. Therefore, using all expressed genes for downstream analysis is not recommended, and highly variable genes (HVGs), which contribute to cell-cell variation, were selected. We further selected informative velocity genes with settings similar to scVelo (unless otherwise specified): genes with a positive coefficient (\u03b3\u2009>\u20090.01) between spliced and unspliced counts and a positive coefficient of determination (R2\u2009>\u20090.01). Given the low capture rate of unspliced mRNA counts, certain genes might exhibit irregularly high discrepancy between unspliced and spliced expression profiles, which needed to be further examined. Therefore, velocity genes with an extreme ratio of standard deviations between unspliced and spliced RNAs were filtered out (\u03c3ratio\u2009<\u20090.03 or \u03c3ratio\u2009>\u20093). However, such a stringent gene filtering process may remove some genes of interest and thus limit the downstream analysis. We therefore introduced an optional way to expand the velocity genes during the optimization process. Specifically, we fitted a regression between the interim inferred cell time and the spliced mRNA reads of each gene, and genes with an R2 higher than the user-defined threshold (config.AGENES_R2) will be added to the subsequent model calculations. This allows post-analysis on more genes, and the RNA velocity of those genes can be inferred as well. UniTVelo directly models the profile of spliced RNAs as sg(t)\u2009=\u2009f(t; \u03b8g); a linear dynamical system is then adopted to derive the expectation of unspliced RNAs. The main requirement on f is that it be second-order differentiable; in general it can be broadly chosen, hence sigmoid, bell-shaped functions or flexible neural networks are all well suited to describe the gene expression behavior in time. 
Here, by using a radial basis function (RBF) as f(t) by default, we can write the model explicitly as sg(t)\u2009=\u2009hg exp(\u2212ag(t\u2009\u2212\u2009\u03c4g)2)\u2009+\u2009og, with gene-specific parameters (hg,\u2009ag,\u2009\u03c4g,\u2009og) forming the RBF-based expression model, linking parameters to bond unspliced/spliced data and cell-specific time points tng. Here, ug(t) and sg(t) are denoted as the mean functions of unspliced/spliced counts along time, which indicate how the expression levels of genes change along differentiation time, whilst ug(t) is a simple transformation of sg(t) under the linear dynamical system. The four parameters respectively describe the expression strength, the scaling factor of the kernel to control the activation time, the peak time of that particular gene, and any offset it may contain. Once we obtain the mean function, the RNA velocity can be directly calculated via the first derivative of the spliced mRNA function. Although a gene\u2019s activity and its accompanying transcription regulation in a biological system are sophisticated, theoretically they have to go through an induction phase followed by a repression phase. The vital difference between individual genes within a biological process is the activation time and peak time. To utilize this trait, we propose the Gaussian-like mean kernel function stated above. Let the normalized observations of unspliced/spliced counts for a particular gene be given, and let ei denote the residuals under the assumption that they are normally distributed, ei\u2009~\u2009N(0,\u2009\u03c3g2). We assume the gene-specific variances \u03c3g2 across cells are distinct between the unspliced and spliced mean functions to account for stochasticity. The combined likelihood function of both unspliced and spliced mRNA counts for a particular gene can then be derived from these normal assumptions. Cell time points are initialized at tg\u2009=\u20090.5, which assumes all genes have experienced induction and repression during differentiation. Given tng for each cell, the optimizer computes gradients of the objective function and updates the gene-related parameters \u03b8g iteratively. 
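A minimal sketch of the RBF-shaped spliced profile and its analytic derivative, which plays the role of the gene's velocity; the parameter values are illustrative placeholders, and the functional form follows the reconstruction above.

```python
import numpy as np

# RBF mean function for spliced counts and its closed-form derivative:
#   s(t) = h * exp(-a * (t - tau)**2) + o
#   v(t) = ds/dt = -2 * a * (t - tau) * h * exp(-a * (t - tau)**2)

def s_mean(t, h=2.0, a=10.0, tau=0.5, o=0.1):
    return h * np.exp(-a * (t - tau) ** 2) + o

def velocity(t, h=2.0, a=10.0, tau=0.5, o=0.1):
    return -2.0 * a * (t - tau) * h * np.exp(-a * (t - tau) ** 2)

t = np.linspace(0, 1, 101)
v = velocity(t)  # positive before the peak time tau, negative after
```

The single function family captures induction (before tau), repression (after tau), and the transient peak in between, which is exactly the trait the model exploits.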
This procedure occurs for the majority of the total iterations. Periodically, the algorithm holds \u03b8g fixed and re-assigns tng by minimizing the Euclidean distance between the observed counts and the fitted curves. The objective function is optimized by a gradient descent algorithm after meaningful parameters are initialized. The algorithm is applied with the Adam optimizer and basically alternates between these two steps. The algorithm terminates if all parameters reach the predefined convergence criteria or the maximum number of iterations. The reference running time and memory usage for each dataset compared with the scVelo dynamical mode can be found via the Supplementary Table. Cell time is scaled from 0 (progenitor cells) to 1 (differentiated cells) along the trajectory. Though both modes share the same model structure, they differ in whether the gene-specific time is aggregated into a single gene-shared time point. The unified-time mode was firstly introduced with gene-shared time points and initially designed to be applied to datasets in which genes have less kinetic information. We discovered that in most cell lineages, a large proportion of genes rarely show the dynamical characteristics described in ref. To address the above issues, the unified-time mode adopts the full dynamic parameters \u03b8g during optimization, which places fewer constraints on the phase portraits of unspliced and spliced counts. Consequently, the parameter \u03c4g vividly reflects a gene\u2019s behavior during differentiation. In detail, \u03c4g yields three configurations of a gene\u2019s activity: repressed (\u03c4g\u2009\u2264\u20090), induction (\u03c4g\u2009\u2265\u20091), or a combination of induction and repression (0\u2009<\u2009\u03c4g\u2009<\u20091). The gene-specific time points tng are then inferred for each gene g. 
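The periodic time re-assignment step can be illustrated with a toy monotone curve; the function g(t) = t**2 and the grid below are assumptions for illustration, whereas the real model re-assigns time against the fitted RBF profiles of all genes jointly.

```python
import numpy as np

# E-like step sketch: holding the fitted curve fixed, re-assign each
# cell's time to the grid point whose curve value is closest to the
# cell's observed expression (Euclidean distance in expression space).

def assign_time(obs, curve_fn, grid):
    curve = curve_fn(grid)
    idx = np.abs(np.asarray(obs)[:, None] - curve[None, :]).argmin(axis=1)
    return grid[idx]

rng = np.random.default_rng(1)
true_t = rng.uniform(0, 1, 500)
obs = true_t ** 2                 # noiseless observations on a monotone curve
grid = np.linspace(0, 1, 1001)
t_hat = assign_time(obs, lambda g: g ** 2, grid)
```

On a monotone curve the assignment is well-posed and recovers each cell's time up to the grid resolution; for the symmetric RBF, using many genes jointly resolves the before/after-peak ambiguity.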
Additionally, this procedure also supports denoising by projecting the gene space to a lower-dimensional space via singular value decomposition, before averaging across all dimensions. For each gene, the time assignment of each cell was not directly based on projection to phase portraits; instead, cells were re-ordered by their relative positions after projection, to make a correct alignment between genes. After cell re-ordering by their relative positions, a gene-shared time point for each individual cell was calculated by averaging across genes. Whilst the unified-time mode has proven its ability in multiple scenarios, its performance was impaired in some other biological systems, such as sparse cell types and cell proliferation cycles. We hypothesize that this is because the unified-time setting emphasizes genes with stable and monotonic changes by aggregating cell time, and might neglect the fact that some genes exhibit strong or complex dynamical patterns. For the independent mode, the gene-specific parameters (\u03c4g,\u2009og,\u2009ig) were fixed, giving more constraints on the phase portraits; this indirectly fixes the starting point of the phase portraits, similar to scVelo. Compared with the flexible portraits in the unified-time mode, which are more likely to capture non-kinetic genes, this assumes genes exhibit a clear induction and repression process during model fitting. We also provide a utility for identifying complicated datasets and suggesting the mode to use. Complicated datasets are defined by the following criteria: datasets with a cell cycle phase, identified via genes that are highly variable among the cell cycle gene lists defined in ref. 33. For rare cell populations, we also provide a utility script to down-sample the original dataset, run the model on the down-sampled data, and predict RNA velocity and cellular time for the rest of the cells. 
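The aggregation of gene-specific times into one gene-shared time can be sketched by averaging per-gene cell ranks; this omits the SVD-based denoising mentioned above and runs on synthetic data.

```python
import numpy as np

# Gene-shared time sketch: each gene orders the cells by its own fitted
# time; the relative positions (ranks) are averaged across genes into a
# single unified per-cell time, scaled to [0, 1].

def unified_time(per_gene_time):
    """per_gene_time: (n_cells, n_genes) array of gene-specific cell times."""
    order = per_gene_time.argsort(axis=0)      # cell ordering per gene
    ranks = np.empty_like(order)
    n_cells = per_gene_time.shape[0]
    for g in range(per_gene_time.shape[1]):
        ranks[order[:, g], g] = np.arange(n_cells)
    return ranks.mean(axis=1) / (n_cells - 1)  # gene-shared time in [0, 1]

rng = np.random.default_rng(2)
base = rng.uniform(0, 1, 100)                       # true progression
pgt = base[:, None] + rng.normal(0, 0.01, (100, 3)) # 3 genes, small jitter
t_shared = unified_time(pgt)
```

Using ranks rather than raw times makes the aggregation robust to per-gene scale differences, at the cost of discarding spacing information.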
This down-sampling strategy provided by UniTVelo considers the problem of rare cell populations in scRNA-seq datasets, by providing a user-defined threshold parameter specifying the minimal number of cells to keep within each cluster, and a parameter representing the percentage of sampling. After generating the relevant gene-specific parameters using the down-sampled data, the model then predicts RNA velocity and cell time by the same projection process used in the normal training process (the second part of the parameter inference step); an illustration is given in the Supplementary Figures. To identify informative genes which explain the inferred cell trajectory, we employed the coefficient of determination R2, commonly used in the regression realm, to indicate the goodness-of-fit. Here, it only focuses on the spliced RNAs, so R2 denotes the proportion of variance explained by the time function f(t). Generally speaking, a high R2 means the time function captures the dynamical pattern of a certain gene well, hence reflecting the estimated biological progression. Furthermore, to assess the model performance against expected trajectory directions, two quantitative evaluation metrics are used in this paper to compare between different algorithms, as proposed in ref.; in their definitions, CA is the set of cells in target cluster A, and N(c) stands for the neighboring cells of a specified cell c. Further information on research design is available in the\u00a0Supplementary Information. Peer Review File. Reporting Summary."}
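The goodness-of-fit measure described above (the proportion of spliced-count variance explained by the time function, i.e., one minus MSE over variance) can be sketched as follows; the toy gene and noise level are illustrative.

```python
import numpy as np

# R^2 goodness-of-fit on spliced counts:
#   R^2 = 1 - sum((s_obs - s_fit)^2) / sum((s_obs - mean(s_obs))^2)

def r_squared(s_obs, s_fit):
    ss_res = np.sum((s_obs - s_fit) ** 2)
    ss_tot = np.sum((s_obs - s_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
s_true = np.exp(-6 * (t - 0.5) ** 2)          # bell-shaped dynamic gene
s_obs = s_true + rng.normal(0, 0.05, 200)

r2_good = r_squared(s_obs, s_true)                       # high: dynamics captured
r2_flat = r_squared(s_obs, np.full(200, s_obs.mean()))   # 0: no dynamics explained
```

A flat fit at the mean yields R^2 of exactly zero, so genes with high R^2 are the ones whose expression genuinely tracks the inferred biological progression.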
+{"text": "After ischemic stroke (IS), peripheral leukocytes infiltrate the damaged region and modulate the response to injury. Peripheral blood cells display distinctive gene expression signatures post-IS and these transcriptional programs reflect changes in immune responses to IS. Dissecting the temporal dynamics of gene expression after IS improves our understanding of immune and clotting responses at the molecular and cellular level that are involved in acute brain injury and may assist with time-targeted, cell-specific therapy.The transcriptomic profiles from peripheral monocytes, neutrophils, and whole blood from 38 ischemic stroke patients and 18 controls were analyzed with RNA-seq as a function of time and etiology after stroke. Differential expression analyses were performed at 0\u201324 h, 24\u201348 h, and >48 h following stroke.Unique patterns of temporal gene expression and pathways were distinguished for monocytes, neutrophils, and whole blood with enrichment of interleukin signaling pathways for different time points and stroke etiologies. Compared to control subjects, gene expression was generally upregulated in neutrophils and generally downregulated in monocytes over all times for cardioembolic, large vessel, and small vessel strokes. Self-organizing maps identified gene clusters with similar trajectories of gene expression over time for different stroke causes and sample types. Weighted Gene Co-expression Network Analyses identified modules of co-expressed genes that significantly varied with time after stroke and included hub genes of immunoglobulin genes in whole blood.Altogether, the identified genes and pathways are critical for understanding how the immune and clotting systems change over time after stroke. This study identifies potential time- and cell-specific biomarkers and treatment targets.The online version contains supplementary material available at 10.1186/s12916-023-02766-1. 
Ischemic stroke (IS) is one of the leading causes of death and disability in the world. Brain injury follows arterial occlusions in large or small cerebral vessels. These may arise due to several different causes that ultimately deprive the tissue of necessary oxygen and glucose. The immune and clotting systems play critical roles in the injury and recovery from stroke. After IS, peripheral leukocytes, including monocytes and neutrophils, infiltrate the injured area, mediating the immune response that causes inflammation and subsequent resolution and repair. Monocytes comprise classical (CD14++ CD16\u2212), intermediate (CD14++ CD16+), and non-classical (anti-inflammatory, CD14+ CD16++) subsets. In models of experimental IS, neutrophils increase in the brain after 3 h and reach peak levels in the first 24 h. Transcriptional changes are detected promptly after IS in peripheral blood cells, showing how dynamic changes in gene expression can be revealed even in the acute phase of stroke. This results in distinct signatures depending on the cell type and stroke etiology. Thirty-eight ischemic stroke (IS) patients and 18 vascular risk factor control (VRFC) subjects were recruited at the University of California at Davis Medical Center under a study protocol reviewed and approved by the Institutional Review Board (IRB ID 248994-41). The study adheres to federal and state regulations for protection of human research subjects, The Common Rule, Belmont Report, and Institutional policies and procedures. Written informed consent was obtained from all participants or a legally authorized representative. The criteria for recruitment are detailed in our previous study. Briefly, whole blood for RNA analysis was drawn directly into PAXgene RNA stabilizing tubes for subsequent batch isolation. Blood for immune cell populations was collected in citrate tubes for immunomagnetic isolation by RoboSep. 
Cell isolation was performed as described in Carmona-Mora et al. RNA isolation and cDNA library preparation were performed as previously described. In summary, reads were aligned to the reference transcriptome (http://www.gencodegenes.org/) using the STAR v.2.5.2b aligner, and raw counts were normalized. Differential expression was modeled as Yijklmn = \u03bc + Diabetesi + Diagnosisj + Hypercholesterolemiak + Hypertensionl + Time (h) + Diagnosis*Time Point (TP)jm + \u03b5ijklmn, where Yijklmn represents the nth observation on the ith Diabetes, jth Diagnosis, kth Hypercholesterolemia, lth Hypertension, and mth Time Point (TP), \u03bc is the common effect for the whole experiment, and \u03b5ijklmn represents the random error component. To identify differentially expressed genes, subjects were split into time points (TPs) from stroke onset, and thresholds of p < 0.02 and fold-change > |1.2| were applied to create lists of genes. Fisher's least significant difference (LSD) was used for individual contrasts. Subject numbers in every time point are available in Table\u00a01. Self-organizing mosaics were generated with the tool available at https://github.com/midas-wyss/gedi. Two phases of training iteration were used (40 and 100) with linear initialization. Grid sizes were chosen depending on the total number of differentially expressed genes to analyze per sample type, to keep a similar number of genes per tile in all mosaics. Tiles corresponding to gene clusters of like-behaving genes were formed based on Pearson\u2019s correlation. 
Tiles are composed of the same genes across time points, and mosaics for monocytes, neutrophils, and whole blood have different tile compositions. Gene Expression Dynamics Inspector (GEDI) v2.1 was used, and self-organizing maps (SOM) were implemented within it. The presence or enrichment (Fisher\u2019s exact test, p-value <\u20090.05 for significant enrichment) of gene lists with blood cell type-specific genes was assessed by comparing to previously described blood cell type-specific genes (phyper). Pathway enrichment analyses were performed using Ingenuity Pathway Analysis (IPA); for input, differentially expressed genes and their fold-changes from every time point and sample type were used. Pathways with Fisher\u2019s exact test p\u2009<\u20090.05 were considered statistically over-represented, and those that also pass a Benjamini\u2013Hochberg False Discovery Rate (FDR) correction for multiple comparisons are indicated in the figures. IPA also computes significant pathway activation or suppression by using the z-score, which is based on comparisons between input data and pathway patterns, causal relationships, and curated literature. Gene ontology (GO) enrichment was explored as implemented in Partek Genomics Suite in the Gene Set Analysis, using Fisher\u2019s exact test and FDR correction for multiple comparisons, with significance set at p\u2009<\u20090.05. Separate weighted gene co-expression networks were generated for isolated monocyte (MON network) and neutrophil (NEU network) data, as well as for whole blood (WB network). VRFC samples were excluded, and genes below a minimum of 40 counts in every sample were filtered out for MON and NEU, and below a total of 80 counts for WB. The MON network was generated using the 14,955 detected genes after filtering across 35 IS samples; the NEU network was generated using 13,921 genes across 31 IS samples; the WB network was generated using 15,360 genes across 37 IS samples. 
Data were imported into R and checked for missing or zero-variance counts using the function goodSamplesGenes. Networks were generated with the Weighted Gene Co-Expression Network Analysis (WGCNA) package using a Pearson correlation to measure co-expression. A signed network type was used, and soft-thresholding powers (\u03b2) of 14, 8, and 16 were selected for the MON, NEU, and WB networks, respectively, to maximize strong correlations between genes while minimizing weak correlations. The cutreeDynamic function was used to form modules due to its adaptability to complex dendrograms and ability to identify nested modules. Hub genes were defined as highly interconnected genes within each module38. Module-parameter associations were determined using Kruskal-Wallis and Spearman Ranked Correlation tests in Partek Genomics Suite for categorical and continuous variables, respectively; a p-value < 0.05 was considered significant. Ranked statistical tests were utilized to minimize the impact of outliers. Parameters were associated with the module eigengene, or first principal component of expression of genes within a module. Continuous time since event and all clinical parameters were examined on the complete datasets. Functional modules were detected with HumanBase in a tissue-specific manner, with a P-value < 0.05 (one-sided Fisher\u2019s exact tests with a Benjamini\u2013Hochberg FDR correction for multiple comparison). Most clinical variables did not differ significantly (p < 0.05) between TPs and controls in any of the sample types. NIHSS at admission was significantly higher in subjects for samples at TP3 when comparing to VRFC in monocytes. Hyperlipidemia was significantly different for TP1 in all sample types when comparing to VRFC, while sex was significantly different in whole blood at TP3. In this cohort, 5 out of 38 IS patients received thrombolytic therapy within 4.5 h of their stroke. 
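The network construction described above can be sketched with a signed WGCNA-style adjacency; the transform ((1 + cor)/2)**beta is the standard signed soft-thresholding form, while the toy data and the beta used here are illustrative (the study selected 14, 8, and 16 for its three networks).

```python
import numpy as np

# Signed WGCNA-style adjacency sketch: soft-threshold pairwise Pearson
# correlations with a power beta so strong positive correlations are
# emphasized and weak/negative ones are suppressed.

def signed_adjacency(expr, beta=8):
    """expr: (n_samples, n_genes); returns (n_genes, n_genes) adjacency."""
    cor = np.corrcoef(expr, rowvar=False)
    adj = ((1.0 + cor) / 2.0) ** beta
    np.fill_diagonal(adj, 0.0)  # ignore self-connections
    return adj

rng = np.random.default_rng(4)
driver = rng.normal(size=30)                 # shared expression program
expr = np.column_stack([
    driver + rng.normal(0, 0.1, 30),         # gene 0: follows the driver
    driver + rng.normal(0, 0.1, 30),         # gene 1: co-expressed with gene 0
    rng.normal(size=30),                     # gene 2: unrelated gene
])
adj = signed_adjacency(expr, beta=8)
connectivity = adj.sum(axis=0)               # hub genes have high connectivity
```

Summing each gene's adjacencies gives its connectivity, which is the quantity behind the "hub gene" definition used in the module analyses.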
None of the IS cases developed hemorrhagic transformation. Individuals were binned into time points (TPs) from stroke onset, and vascular risk factor controls (VRFC) were assigned TP0, in order to assure enough IS cases per etiology and time point. The cohort characteristics and analyses for the clinical variables after this re-stratification can be found in the Additional files. More DEGs were identified in all IS causes vs. VRFC in the >24 h period than in the first 24 h for monocytes and neutrophils (Fig.). Enriched pathways included chemokine signaling, IL-8 signaling, and production of NO and ROS in monocytes/macrophages post-stroke. Key DEG clusters per IS cause were identified using SOM for gene expression trajectories. The gene expression counts for monocytes, neutrophils, and whole blood samples from IS patients were used to generate three separate networks, and modules of co-expressed genes were identified within each cell-type network. Sixteen monocyte modules were found to be significant for time: MON-DarkRed, MON-MidnightBlue, MON-GreenYellow, MON-Sienna3, MON-Red, MON-Pink, MON-LightCyan, MON-Violet, MON-Yellow, MON-DarkMagenta, MON-SteelBlue, MON-Blue, MON-Grey60, MON-Tan, MON-PaleTurquoise, and MON-Magenta (Fig. A). Within the whole blood network, 51 modules of co-expressed genes and a module (WB-Grey) of unassigned genes were identified. Ten of these whole blood modules were significantly related to time: WB-Brown, WB-Grey60, WB-Tan, WB-Turquoise, WB-MidnightBlue, WB-SteelBlue, WB-DarkRed, WB-SaddleBrown, WB-Pink, and WB-MediumPurple3. Of these, WB-Tan and WB-DarkRed were also related to sex, while WB-SteelBlue and WB-MediumPurple3 were also related to age. The gene lists for each of these 10 whole blood modules are provided in the Additional files. Examples of hub genes from the time-associated modules include SCAF11, CREBZF, SETX, JAK1, EIF3F, HNRNPK, UBC, DICER1, CAPN2, RAB10, SF3B1, DAZAP2, UTY, SPARC, PPP4C, and RAB11FIP1. 
Examples of hub genes from the different modules in whole blood include the following: TNFRSF18, PCSK9, IGSF9, MTHFSD, DCAF12, BCL2L1, MAU2, TLL2, FCRL6, PARVG, and VASP. Genes that are highly interconnected in each time-associated module were also identified for all sample types. These represent \u201chub\u201d genes with high potential for functional or regulatory significance. Hub gene functional modules in monocytes included RNA splicing (HG-M2), RNA metabolism (HG-M3), DNA repair (HG-M4), cell morphogenesis (HG-M5), regulation of cell cycle (HG-M3 and HG-M6), cell adhesion and secretion (HG-M7), and protein transport (HG-M8). Hub gene functional modules in whole blood included leukocyte activation (HG-WB2), filopodium assembly (HG-WB3), regulation of cellular response to transforming growth factor beta receptor (HG-WB4), gene silencing (HG-WB4), and cytoskeleton organization (HG-WB5). DEGs versus controls were assessed by splitting subject samples into different time windows. Monocytes displayed downregulation of most of their DEGs, while neutrophils were generally upregulated. This is in accordance with the up- and downregulation patterns seen in our previous study. The z-score is calculated with Ingenuity software, where z\u2009\u2265\u20092 is activated (denoted ahead with subscript act) and z\u2009\u2264\u2009\u22122 is inhibited (denoted ahead with subscript inh). Canonical pathways associated with monocytes over time were mostly suppressed, similar to our previous findings. One inhibited pathway was also enriched across all time points in monocytes. 
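The z-score labeling convention described above (z \u2265 2 activated, z \u2264 \u22122 inhibited, otherwise no direction) can be sketched as a small helper. The regulator names and z-scores below are hypothetical, and this is only an illustration of the convention, not Ingenuity software itself.

```python
# Label predicted upstream regulators by activation z-score, following the
# convention in the text: z >= 2 -> "_act", z <= -2 -> "_inh", otherwise no
# confident direction. Values below are hypothetical.

def label_regulator(name, z):
    if z >= 2:
        return name + "_act"
    if z <= -2:
        return name + "_inh"
    return name  # no direction can be predicted

regulators = {"IL-2": -3.1, "TNF": -2.4, "IL-3": 1.2, "IL-13": 0.4}
labels = [label_regulator(n, z) for n, z in regulators.items()]
# e.g. IL-2 and TNF are labeled "_inh"; IL-3 and IL-13 stay unlabeled
```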
Pro-inflammatory cytokine IL-8 has a role in monocytic recruitment by promoting monocyte adhesion to the vascular endothelium. Cytokine expression changes, and specifically those of interleukins, were prominent, and some changed with time in monocytes; changes of many interleukins have been directly quantified in blood of IS patients over time. IL-2inh and TNFinh were among the predicted upstream regulators of gene expression at 0\u201324 h; IL-1Binh, IL-2inh, IL-3, IL-4inh, IL-5inh, IL-10inh, IL-13, IL-15, IL-33inh, and TNFinh are predicted upstream regulators at 24\u201348 h; and IL-2inh, IL-3, IL-4inh, IL-13, IL-33, and TNFinh are upstream regulators at >48 h. Monocyte polarization markers are present in four time-associated modules, including the MON-LightCyan module, where TGF-\u03b2 and STAT3 (M2 marker) genes are present. In this module, hub genes are also significantly enriched for monocyte-specific genes, suggesting that the expression dynamics of these genes could be critical for the monocyte response after stroke. Furthermore, the vast majority of the MON-LightCyan module hub genes display higher expression levels in classical monocytes (Human Protein Atlas). Other markers present include CSF1R (M2 polarization) and CCL2 (M2 marker). Together, these results may indicate transformation of monocytes to the restorative M2 type over time, even within the ~72 h time period of this study. Nonetheless, the correlation of CCR2 with time is consistent with active recruitment of monocytes from the bone marrow, which is in line with experimental data where monocytes begin to accumulate around day 3 or later after ischemic stroke. Polarization markers, including M2 markers, were also present in whole blood modules associated with time. As expected, different polarization types are present in whole blood; the same is true for cell type markers, which were significantly enriched in several time-significant modules. 
In neutrophils, the genes in the only time-significant module did not overlap with cell type or polarization markers, in contrast with gene markers such as TNF-\u03b1 (M1 and N1). CCR2 expression in neutrophils is not expected in a resting state, but has been linked to altered neutrophil programming in inflammatory states. The functional and biological roles of the highly interconnected time-associated hub genes identified through WGCNA point to unique processes and drivers of gene expression in monocytes and whole blood. Hub genes in monocytes are enriched significantly for RNA splicing and RNA metabolism functions. This may reflect active formation of specifically spliced gene transcripts in the response to injury and the timing of polarization, which likely changes as monocytes move from inflammatory to restorative subtypes. Hub gene functional modules in whole blood represent a composite of the gene expression changes in individual immune cell types. Enrichment of a wide host of functions, including leukocyte activation, filopodium assembly, cytoskeleton organization, regulation of cellular response to transforming growth factor beta receptor, and gene silencing, points to the breadth of cell types in blood undergoing cell proliferation, activation, and migration in the first 72 h after ischemic stroke. Through different approaches, Li et al. identified TSPAN14, a gene involved in the regulation of the Notch signaling pathway. Another hub gene is TIA1 (T-cell-restricted intracellular antigen-1, aka cytotoxic granule associated RNA binding protein), an anti-inflammatory protein in peripheral tissues and a repressor of TNF-\u03b1 expression. 
It is an M1 marker and a key regulator of the innate immune response of the CNS during stress. Several time-associated hub genes were positively correlated with NIHSS at admission. In monocytes, a positive significant correlation with NIHSS was found for MIER1, a transcriptional repressor; SELPLG, a high-affinity receptor for cell adhesion molecules in leukocytes; GNAI3, associated with the response to intracerebral hemorrhage; and FBW5, an E3 ubiquitin ligase and negative regulator of MAP 3K7/TAK1 signaling in the interleukin-1B (IL1B) signaling pathway. In neutrophils, time-associated hub genes did not show a significant correlation with NIHSS at admission. Only one gene in the time-significant module, MCTS1, negatively correlated with stroke severity. MCTS1 is a translation enhancer and has been linked to let7i, which is involved in leukocyte attachment and recruitment to the endothelium in the brain. In whole blood, immunoglobulin constant region and variable region genes are hub genes in time modules. IGLV1-40, IGLV3-27, IGKV1-12, IGHV3-30, and IGLC3 showed positive correlations with NIHSS at admission. This highlights changes in the humoral immune response across early times after stroke and could relate to stroke outcomes. To generate DEG clusters based on self-organizing maps, tiled mosaics were constructed for each cell type over the three time windows. The GEDI maps highlight the monocyte marker CACNA2D4. This gene displays higher expression in classical monocytes (http://www.proteinatlas.org), and CACNA2D4 gene expression dynamics, as visible in its GEDI tile, could indicate a switch to non-classical monocytes. 
Closer examination of the same tile shows other genes, including SURF1 (Cytochrome C Oxidase Assembly Factor) and TSPAN14, which are associated with monocyte counts in genetic studies, as well as DNPH1 (2'-deoxynucleoside 5'-phosphate N-hydrolase 1), ZBTB5 (Zinc finger and BTB domain containing 5), MTBP (MDM2 binding protein), and six other uncharacterized transcripts, which may share similar roles with the other genes in the tile. These expression correlations still do not fully define shifts towards a cell subtype, like classical, non-classical, and intermediate monocytes. For example, TSPAN14 is more highly expressed in non-classical monocytes than in other subtypes, while DNPH1 is predominant in intermediate monocytes. Altogether, looking at specific tiles can implicate key genes and cellular processes that change with time after stroke. In the monocyte GEDI maps, the lower left tile is one case of a cell marker-containing tile that is unrelated to other tiles in the mosaic with other cell-specific markers. This tile/group of DEGs has sustained low expression and contains the monocyte marker CACNA2D4. Other genes in the maps include TLR5, an activator of the innate immune response; PDK1, key for cell division in hypoxic conditions; KREMEN1, a negative regulator of the Wnt/\u03b2-catenin pathway; and PPP4R2, a modulator of neuronal differentiation and survival. In monocytes, \u201cneighborhoods\u201d of tiles with monocyte markers can be seen in the upper and lower parts of the mosaic. These tiles showed different trends across time, from decreasing expression to unchanged gene expression. The grouping of cell-specific genes is also seen, and is even more accentuated, in neutrophils, where granulocyte markers cluster in four opposite corners of the mosaic, displaying increasing or consistently high expression. Also in these neutrophil mosaics, two upper tiles in opposite corners rapidly change expression across TPs 1 and 2, and then decrease to reflect levels more like TP0/controls. 
These tiles also contain interleukin receptor genes. These clusters of DEGs peak in the first 24 h and decrease thereafter. Several immunoglobulin genes are expressed as a function of time across samples; B cell activation and increased immunoglobulin production have been demonstrated after stroke. Self-organizing map (SOM) profiles enable grouping of the DEGs into clusters based on the trajectory/directionality of expression changes and their functional associations across the time following stroke. After analyzing GEDI tiles, this is also useful because ontology analyses are more precise on larger gene groups, and SOM profiles draw from larger and/or more balanced clusters. These profiles are crucial for understanding which molecular pathways and cell types show progressive activation or suppression over time, or whether they have \u201cpeaks\u201d or \u201cvalleys\u201d of expression over time. In monocytes and neutrophils, the profiles of DEGs that peak only in the first 24 h after stroke are enriched for myeloid leukocyte activation and leukocyte migration, respectively. Interleukin-1 beta secretion in monocytes, and positive regulation of reactive oxygen species metabolism in neutrophils, are over-represented in profiles that decrease expression over time. Genes that are suppressed between 0 and 24 h and then recover to levels seen in VRFCs are enriched for terms related to JNK and MAPK cascades, platelet activation, and macrophage chemotaxis in monocytes. This pattern also has over-representation of chemokine production and leukocyte aggregation genes in neutrophils. Opposite trends can also be seen between monocytes and neutrophils: Cdc42-associated signaling is enriched in profiles with opposite trends in the two cell types. Furthermore, the trajectory of the SOM profiles detected in the DEGs at different time points could be used to prioritize diagnostic biomarker candidates. 
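The SOM-style profile grouping described above (each DEG trajectory assigned to the closest model profile by Euclidean distance, as in Table S5) can be sketched as follows. The profile shapes, gene names, and values are hypothetical; this is an illustration of the assignment step only, not the actual SOM implementation.

```python
import math

# Assign each gene's expression trajectory (one value per time window)
# to the nearest model profile by Euclidean distance -- a minimal sketch
# of SOM/STEM-style profile assignment. Profiles and genes are hypothetical.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_profiles(trajectories, profiles):
    assignment = {}
    for gene, traj in trajectories.items():
        assignment[gene] = min(profiles, key=lambda p: euclidean(traj, profiles[p]))
    return assignment

profiles = {
    "rising":  [0.0, 1.0, 2.0],    # steadily increasing across time windows
    "falling": [0.0, -1.0, -2.0],  # steadily decreasing
    "peak":    [0.0, 2.0, 0.0],    # peaks in the middle window
}
trajectories = {"GENE_A": [0.1, 0.9, 1.8], "GENE_B": [0.0, 1.8, 0.2]}
assigned = assign_profiles(trajectories, profiles)
# GENE_A -> "rising", GENE_B -> "peak"
```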
A diagnostic panel that could be used to diagnose stroke in the first 3 days should primarily include DEGs that either consistently increase or consistently decrease over the 3 days after IS. Such panels could indicate not only that an ischemic stroke had occurred, but also roughly how much time had elapsed following the stroke, something that cannot be estimated with current methodology. There were large numbers of DEGs expressed in monocytes and neutrophils for each cause of stroke, with somewhat more DEGs 24 h after stroke. In contrast, there were many more DEGs expressed in whole blood at 0\u201324 h in CE, LV, and SV stroke. This suggests that other cells in whole blood (in addition to neutrophils and monocytes) were contributing to the whole blood responses to stroke at 0\u201324 h. The trajectories of SOM profiles in strokes of all causes combined were similar to profiles for each stroke cause, including CE, LV, and SV causes of strokes. Though the temporal profiles were similar, the DEGs and enriched GO terms were generally different for CE, LV, and SV causes of stroke in monocytes, neutrophils, and whole blood. There were exceptions, such as the \u201cRegulation of Golgi to plasma membrane protein transport\u201d pathway, which progressively decreased in expression in monocytes in CE, LV, and SV stroke, a pathway shown to affect outcomes in experimental stroke. Another treatment approach is being evaluated in a clinical trial (clinicaltrials.gov ID# NCT04734548). TLR signaling impacts downstream pro- or anti-inflammatory molecules, including TNF-\u03b1, interleukins, interferons, and TGF-\u03b2. SOM profiles identified important attributes of DEGs from opposite expression trajectories and between IS etiologies. In neutrophils, the toll-like receptor signaling pathway is associated with a profile that decreases at less than 24 h and increases after 24 h in CE stroke. In contrast, for LV stroke, the toll-like receptor pathway is enriched in a profile that decreases at all times compared to controls. 
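The biomarker-panel criterion described above, keeping only DEGs whose expression moves consistently in one direction across the three time windows, can be sketched directly. The gene names and expression values are illustrative only.

```python
# Sketch: prioritize diagnostic-panel candidates as DEGs whose mean
# expression is monotone (all increasing or all decreasing) across
# TP1-TP3. Gene names and values are hypothetical.

def is_monotone(values):
    ups = all(b > a for a, b in zip(values, values[1:]))
    downs = all(b < a for a, b in zip(values, values[1:]))
    return ups or downs

deg_trajectories = {
    "GENE_UP":   [1.0, 1.6, 2.3],  # consistently increases
    "GENE_DOWN": [2.0, 1.1, 0.4],  # consistently decreases
    "GENE_PEAK": [1.0, 2.5, 0.9],  # peaks then falls -> excluded
}
panel = sorted(g for g, v in deg_trajectories.items() if is_monotone(v))
# panel == ['GENE_DOWN', 'GENE_UP']
```

Because each retained gene keeps a consistent direction, its level at presentation could also hint at roughly how much time has elapsed since the stroke, which is the property the text highlights.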
Toll-like receptor signaling modulates critical immunomodulatory NF\u03baB signaling and is a promising target for treating cardiovascular disease. However, it was not possible to define a unique shift towards a specific subtype, since we expect that both M1/M2 and N1/N2 phenotypes are present in the samples of pooled cells in the ~3 day time window studied here. It is possible that studying longer times after stroke might allow detection of the shifts in gene expression responsible for the evolution into polarized M2 or N2 phenotypes. Moreover, future single-cell RNA-seq studies should enable a better identification of the different cell subpopulations present at each time point. This study employed single samples from single patients at different times. Thus, the changes of gene expression represent the average of multiple patients over the times stipulated. We have shown previously that individual genetic variation plays a key role in the specific expression response after stroke. A strength of the study was the multiple analytical approaches used, including differential expression, GEDI, SOM, and WGCNA, with each providing insight into the complexity and dynamics of gene expression changes after stroke. SOM in particular was able to demonstrate how pathways changed over time for each cell type and cause of stroke. WGCNA identified modules of co-expressed genes and associated hub genes that changed over time. However, even WGCNA had some weaknesses, in that few hub genes and only one time-associated module were identified in neutrophils. Both of these results likely arose from differences in the size of the input gene list or the specific parameters used in the models, since we indeed observed slightly more time-regulated DEGs in neutrophils compared to monocytes when studied by time point. Since WGCNA identified modules that correlated with the continuous time parameter, it may have missed modules with complex dynamic profiles (like the identified SOM profiles). 
Pathway and functional analyses helped to interpret aggregate shifts of groups of genes, while reducing the impact of potential false positives among individual genes. Though these results are likely to be more reliable, they are hampered in the current study by the small sample size for many of the time points, particularly those considering different causes of stroke in the three sample types examined. For example, subgrouping of TP3 resulted in a group of female-only subjects, which may impact the genes discovered for that time point in particular, as sex influences gene expression and immune response after stroke. We identified key genes associated with time in the response from leukocytes after ischemic stroke. Some of these correlated with stroke severity. Differentially expressed genes have distinctive trajectories for all IS etiologies analyzed, which allows refinement of critical genes and functions at specific time points after stroke. Altogether, the identified changes in gene expression and pathways over time are critical for understanding how the immune and clotting systems change dynamically after stroke, and point to the complexities of identifying biomarkers and treatment targets for stroke. Additional file 1: Table S1. Differentially expressed genes (DEGs) significant to time point (TP), p-value < 0.02, on the contrasts IS TP vs. VRFC (TP0). Table S2. Enriched canonical pathways overrepresented in all time points (TP). Table S3. Predicted upstream regulators in all time points (TP). Table S4. Differentially expressed genes (DEGs) distributed per GEDI map tile based on Pearson's correlation of gene expression values. Table S5. Differentially expressed genes (DEGs) distributed per SOM profile based on Euclidean distance. Table S6. GO terms for biological processes enriched in the SOM profiles from Fig. Table S7. Subject demographics and relevant clinical characteristics in the time bins analyzed after regrouping per IS etiology. Table S8. 
Differentially expressed genes (DEGs) significant to time point (TP), p-value < 0.02, on the contrasts IS TP-IS etiology vs. VRFC (TP0). Table S9. DEGs distributed per SOM profile in all IS etiologies, based on Euclidean distance. Table S10. GO terms for biological processes enriched in the SOM profiles from Fig. Table S11. Genes in WGCNA modules associated with time (h). Table S12. Hub genes in the modules significant to time (h). Table S13. HumanBase functional clustering of hub genes from time-associated modules and their corresponding gene ontology terms. Table S14. Correlation between NIHSS at admission and expression of time-associated hub genes. Additional file 2: Supplemental Figure 1. Percentage of DEGs classified per biotype. For every time point and comparison between time points, the percentage of biotype categories for DEGs are shown in the bar plots and corresponding table for (A) monocytes, (B) neutrophils, and (C) whole blood. Ig: immunoglobulin; TcR: T-cell receptor; TEC: To be Experimentally Confirmed. Supplemental Figure 2. Predicted upstream regulators for monocytes. (A) Venn diagram represents all upstream regulators predicted at 0-24 h, 24-48 h, and >48 h. (B) Top over-represented regulators common to all time points. (C) Top over-represented regulators that are specific for each time point. Up arrows indicate predicted significant activation (z\u2009\u2265\u20092) and down arrows significant inhibition (z\u2009\u2264\u2009\u22122). White cells indicate no direction can be predicted. R: receptor; N.R.: nuclear receptor. Supplemental Figure 3. Predicted upstream regulators for neutrophils. (A) Venn diagram represents all upstream regulators predicted at 0-24 h, 24-48 h, and >48 h. (B) Top over-represented regulators shared at all time points. (C) Top over-represented regulators that are specific for each time point. 
Up arrows indicate predicted significant activation (z\u2009\u2265\u20092) and down arrows significant inhibition (z\u2009\u2264\u2009\u22122). White cells indicate no direction can be predicted. R: receptor; endo: endogenous; reg: regulator. Supplemental Figure 4. Predicted upstream regulators for whole blood. (A) Venn diagram represents all upstream regulators predicted at 0-24 h, 24-48 h, and >48 h. (B) Only 2 predicted regulators are shared for the 3 time points. (C) Top over-represented regulators that are specific for each time point. Up arrows indicate predicted significant activation (z\u2009\u2265\u20092) and down arrows significant inhibition (z\u2009\u2264\u2009\u22122). White cells indicate no direction can be predicted. R: receptor; reg: regulator; endo: endogenous. Supplemental Figure 5. tPA administration is not associated with differential expression. (A) Characteristics of the five cases where tPA was administered. Principal Component Analyses (PCA) using the DEGs from the time points 0-24 and >24 h in LV and SV strokes in (B) monocytes, (C) neutrophils, and (D) whole blood samples. Cases where tPA was administered are colored in orange. Supplemental Figure 6. Co-expression network construction. WGCNA soft-thresholding power scale-free topology fit (left panel) and mean connectivity (right panel) plots for all genes analyzed in (A) monocytes, (B) neutrophils, and (C) whole blood. Reference lines are at 0.8 (red) and 0.9 (green) on the left panels and at 100 (red) and 200 (green) on the right panels. Supplemental Figure 7. Workflow of the study. Monocytes and neutrophils were isolated by flow cytometry (scatter plots adapted in part from Carmona-Mora et al.). RNA-seq"}
+{"text": "Application of scSTEM to several scRNA-seq datasets demonstrates its usefulness and ability to improve downstream analysis of biological processes. scSTEM is available at https://github.com/alexQiSong/scSTEM. We develop scSTEM, single-cell STEM, a method for clustering dynamic profiles of genes in trajectories inferred from pseudotime ordering of single-cell RNA-seq (scRNA-seq) data. scSTEM uses one of several metrics to summarize the expression of genes and assigns a significance value to each cluster. The online version contains supplementary material available at 10.1186/s13059-022-02716-9. Much attention in the analysis of single-cell data has focused on grouping cells into cell types or on modeling trajectories of cell development and differentiation. A unique aspect of scRNA-seq data is the ability to extract detailed dynamics even when using a small number of time points, or a single one. Such analysis, often termed pseudotime ordering of cells, results in a reconstructed trajectory of cells along a number of branches and paths. An important question for the resulting trajectory is: what are the sets of genes that are activated or repressed along a specific branch in the model, and how do different branches vary in such sets? This information can be very useful in determining the function or type of cells along a certain path. While clustering of genes along paths is of interest, it is also challenging. As mentioned above, the number of branching points in an inferred trajectory is usually quite small. To provide a method that can be used to cluster pseudotime ordered data, we extended the Short Time-series Expression Miner (STEM) and developed scSTEM, which can use trajectory information from single-cell data to cluster genes. To perform such clustering, scSTEM first uses one of several pseudotime inference methods to construct a trajectory for a given scRNA-seq dataset. 
Next, for every gene in every connected component in the analysis, scSTEM generates summary time series data using several different approaches for each of the paths. These data are then used as input for STEM, and clusters are determined for each path in the trajectory. Users can also compare STEM clusters between two trajectories in the same component to determine the differences in genes and biological processes that led to the divergence of these trajectories. We compared scSTEM to several other methods and show that scSTEM produced the most functionally relevant clusters and scales well to large single-cell datasets. We have tested scSTEM on a number of datasets with different trajectory inference methods and gene summarization methods. As we show, scSTEM can correctly identify the key functional processes expected to be active along different paths. In addition, comparisons using scSTEM provide biological insights about the activity of different cell types, including clusters distinguishing between very similar cell types such as T cells and NK cells. The overall idea for scSTEM is to cluster genes based on their temporal expression patterns along a given trajectory path. The clustering process starts with building a trajectory tree using an expression count matrix, gene metadata, and cell metadata as input files. scSTEM can work with several popular trajectory inference methods. The datasets analyzed include (1) human fetal immune cells and (2) mouse embryo organogenesis. In the first study, researchers looked at fetal development and profiled cells from several different tissues over time. To focus on a specific biological process, we analyzed the set of blood cells from this dataset. Cell type annotations provided in the original publication were used. The second data set investigated gene expression during early organogenesis of the mouse embryo. Finally, we applied scSTEM with Monocle 3 and mean expression to neural crest cells in the mouse embryo organogenesis data set. 
scSTEM identified 12 paths with 1\u20136 significant clusters. We performed a comparative analysis using the human fetal immune cell data set to compare the clustering results for two very similar cell populations, T cells and NK cells. Using Monocle 3 with mean expressions, we identified clusters with similar genes that showed discrepant expression patterns in different paths. These markers include SAMD3, a signal transducer protein that has been reported to have reduced expression in T cells, and GZMM, a granzyme expressed in NK and T cells. To further validate the role of these genes, we compared all three clusters to 80 available human NK cell expression markers (https://www.panglaodb.se/markers.html?cell_type=%27NK%20cells%27). We performed a hypergeometric test and used all protein coding genes in the GRCh38 genome as background. C0P2 was significantly enriched for NK cell markers (p-value\u2009<\u20090.001), C2P7 has 4 NK cell markers, and no NK cell markers were found in C2P6. The comparison between C0P2 and C2P7 is consistent with the expectation that NK cell marker genes showed increased expression early in the common developmental path between NK cells and T cells, though the enrichment for C2P7 was not significant when using this reduced background. Rate change may be a useful metric for cases where cells are expected to change along a path in a predictable way. Finally, entropy can be used for cases where little is known about the underlying molecular activity in each path. We used p-values (BH corrections) from an F test to evaluate the fit of a linear model. Once the user selects a trajectory inference and gene summarization method, scSTEM clusters genes along each path. These clusters are easy to interpret and are represented by cluster plots in which the trend of expression corresponds to the progression of cell fates along each specific trajectory path. 
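The marker-enrichment check described above (is a cluster unexpectedly rich in NK cell markers given the background gene set?) is a one-sided hypergeometric test, which can be sketched with stdlib combinatorics. The counts below are illustrative, not the paper's actual numbers.

```python
from math import comb

# One-sided hypergeometric enrichment test, as used to ask whether a
# cluster contains more NK-cell marker genes than expected by chance.
# N: background genes, K: markers in the background, n: cluster size,
# k: markers observed in the cluster. Counts below are illustrative.

def hypergeom_pval(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

p = hypergeom_pval(N=20000, K=80, n=50, k=4)
# With only ~0.2 markers expected in a 50-gene cluster, 4 hits is a
# strong enrichment (p well below 0.001).
```

Note how the choice of background matters: shrinking N (e.g., restricting to genes expressed along the path) raises the expected overlap and can render the same 4 hits non-significant, which matches the C2P7 observation in the text.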
Gene clustering of each path is fast, making it possible to cluster thousands of genes from many cells without the need to limit clustering to only a few hundred genes as in prior methods. While useful, there are several limitations that should be noted. First, scSTEM relies on trajectory inference methods, which may not accurately reconstruct cell trajectories. While the package provides access to several such methods, and so it is likely that at least a few will work well, there is no guarantee that the resulting trajectories indeed capture the underlying process. Second, most trajectory inference methods do not reflect directionality of progression along the path. scSTEM uses real time point information to pinpoint the origin of the trajectory tree, but if such time information does not agree with the real direction, the analysis would be inaccurate. Third, in some cases, trajectory methods provide too many branches, which artificially increases the path length and may be detrimental to scSTEM analysis. To further improve scSTEM in the future, time point information may be explicitly used during the construction of the trajectory and the clustering of genes. Methods that provide directionality of single-cell progression, such as RNA velocity, may also help. We have provided scSTEM as an R Shiny GUI-based tool which does not require any coding experience (https://github.com/alexQiSong/scSTEM). While not limited to bioinformaticians and computational biologists, we aim to provide scSTEM to the broader audience of all researchers interested in analyzing scRNA-seq data. We have tested scSTEM on three time series single-cell data sets, including (1) human fetal immune cells and (2) mouse embryo organogenesis. For initial visualization of the data, scSTEM performs normalization of UMI count data using the log normalization provided by the Monocle3 package. 
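The normalization step above can be sketched in the spirit of size-factor plus log normalization; this is an assumption-laden illustration, not Monocle3's actual code, and the counts are hypothetical.

```python
import math

# Minimal sketch of size-factor + log normalization in the spirit of
# log normalization for UMI counts (not Monocle3's actual code): divide
# each cell's counts by a size factor proportional to its total UMI
# count, then take log1p. Input counts are hypothetical.

def log_normalize(counts):
    """counts: list of per-cell UMI count vectors (genes as columns)."""
    totals = [sum(cell) for cell in counts]
    mean_total = sum(totals) / len(totals)
    normed = []
    for cell, total in zip(counts, totals):
        size_factor = total / mean_total  # depth relative to the average cell
        normed.append([math.log1p(c / size_factor) for c in cell])
    return normed

cells = [[10, 0, 5], [20, 2, 8]]
norm = log_normalize(cells)
# After scaling, gene 0 has the same normalized value in both cells,
# since its share of each cell's depth is identical.
```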
Instead of using all cells to perform trajectory inference, it is important to infer the trajectory tree for a subset of cells relevant to the biological question of interest. Therefore, scSTEM allows users to select a subset of cells after the dimensionality reduction step and before the trajectory inference step. scSTEM uses the Leiden algorithm to cluster cells. For each connected component in the Leiden graph, the software assigns a cluster ID, allowing the users to analyze each of these separately. Users can then select the cell partition by selecting the corresponding cluster IDs in the scSTEM GUI and perform trajectory inference for the selected cells. To perform pseudotime inference, scSTEM allows users to select one of several popular methods. To enable this, we use a general trajectory inference framework provided by the dynverse package. Trajectory inference will produce a trajectory tree, with nodes representing possible cell states and edges representing the transitions among these cell states. Following the terminology used by dynverse, we term these nodes \u201cmilestone nodes.\u201d A path is the shortest path connecting the root node and a leaf node. The goal for scSTEM is to perform gene clustering on each path. scSTEM assigns an ID number to each path. The user can then select a single path or multiple paths by checking the corresponding path ID numbers in the GUI. While each path may contain hundreds or even thousands of cells, the change in a gene's expression between milestone nodes is usually close to linear. Thus, the trajectory of a specific gene along a specific path can be represented with a small set of values, one for each of the milestone nodes along the path. These values are used as inputs to scSTEM to cluster the genes along the path and to assign a significance value to the clusters. 
However, there are several different ways in which one can summarize the expression of genes along the path, and different summarizations may be more appropriate for different studies. scSTEM provides three methods for aggregating expression levels from cells assigned to a specific segment. A segment is defined as the set of milestone nodes and edges connecting two consecutive milestone nodes in the trajectory. We define three types of milestone nodes: (1) the root node, (2) branching nodes, and (3) leaf nodes. For directed trees, a branching node is a node having two or more outgoing edges. For undirected trees, a branching node is a node having three or more edges connected to it. Once the trajectory tree has been constructed, scSTEM will iterate over all possible paths going from the root node to a leaf node and aggregate gene expression in each key segment of each path. We term these segments \u201csampled time points,\u201d and each path may include several such sampled time points. scSTEM then clusters genes for each path based on these time point values. When the number of sampled time points is less than three, the STEM algorithm will not be able to produce meaningful clusters. To address this issue, in such cases scSTEM further breaks paths into more segments using the ordering of pseudotime values. For paths containing only one key segment, scSTEM partitions the segment into three segments at the 1/3 and 2/3 quantile points. For paths containing only two key segments, scSTEM partitions each segment into two segments at the median point. We have observed that this strategy is useful for trajectory inference methods that do not create complex branch structures. 
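The fallback segmentation above (splitting a single segment at the 1/3 and 2/3 pseudotime quantiles so STEM has at least three sampled time points) can be sketched as follows; the pseudotime values are hypothetical and the quantile rule is simplified to sorted-index cut points.

```python
# Sketch of the fallback segmentation applied when a path has fewer than
# three sampled time points: split one segment at the 1/3 and 2/3
# quantiles of the cells' pseudotime values. Values are hypothetical.

def split_segment(pseudotimes):
    """Partition the cells of one segment into three pseudotime bins."""
    ordered = sorted(pseudotimes)
    q1 = ordered[len(ordered) // 3]       # ~1/3 quantile cut point
    q2 = ordered[2 * len(ordered) // 3]   # ~2/3 quantile cut point
    return ([t for t in pseudotimes if t < q1],
            [t for t in pseudotimes if q1 <= t < q2],
            [t for t in pseudotimes if t >= q2])

cells = [0.05, 0.1, 0.2, 0.3, 0.45, 0.5, 0.6, 0.8, 0.9]
b1, b2, b3 = split_segment(cells)
# Each bin now acts as one "sampled time point" for downstream clustering.
```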
The metrics used to aggregate gene expression for each sampled time point are described below. Mean expression: this metric simply computes the mean of normalized expression values for each gene in all cells belonging to a sampled time point. Entropy: ROGUE is a statistic initially proposed to measure the impurity of a cell population, based on the expression entropy of each gene i in a population of n cells; we use it here to summarize how variable a gene's expression is within each sampled time point. Rate change: this metric measures the rate of expression change for each gene along the segment. To compute the change rate, we first apply a filtering strategy to remove zero and low expression values, which may represent dropouts. We use a simple Z-score based strategy here: for each gene, scSTEM computes Z-scores for all expression values in the segment and then removes cells with Z-score < \u22121.96 or Z-score > 1.96 for that time point. After this filtering step, scSTEM fits a linear model using the remaining cells, y = \u03b20 + \u03b21t + \u03b5, where y is the normalized expression of gene i, t is the pseudotime value of cells within each sampled time point, and \u03b5 represents the noise in the expression values. scSTEM then uses the fitted slope \u03b21 as the rate change. STEM is an algorithm for clustering genes in short time series data. STEM can compare the clustering results from two different bulk RNA-seq time series data sets. Similarly, we have enabled scSTEM to perform comparison of clustering results from different trajectory paths to reveal how functional gene clusters change along different cell lineages. scSTEM performs a hypergeometric test to identify clusters that have a significant number of overlapping genes in different paths, and then visualizes the expression patterns of these similar clusters. Users can inspect these side by side. We used the built-in GO enrichment analysis in the STEM software to perform GO analysis. If input gene IDs are Ensembl IDs, scSTEM will first map them to gene symbols with the biomaRt R package and then pass the converted IDs to STEM for GO analysis. For DE gene analysis with tradeSeq, we followed its Monocle vignette (http://www.bioconductor.org/packages/release/bioc/vignettes/tradeSeq/inst/doc/Monocle.html). 
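The rate-change summarization described above (Z-score filtering of likely outliers/dropouts followed by a least-squares fit of expression on pseudotime) can be sketched without any statistics library. This is an illustrative re-implementation under the stated 1.96 cutoff, not scSTEM's actual code; the inputs are hypothetical.

```python
# Sketch of the rate-change metric: filter cells whose expression has
# |Z-score| > 1.96, then use the ordinary least-squares slope of
# expression vs. pseudotime as the segment's summary value.

def rate_change(expr, pseudotime, z_cut=1.96):
    n = len(expr)
    mean = sum(expr) / n
    sd = (sum((e - mean) ** 2 for e in expr) / n) ** 0.5 or 1.0
    kept = [(t, e) for t, e in zip(pseudotime, expr)
            if abs((e - mean) / sd) <= z_cut]
    # Ordinary least-squares slope on the remaining cells
    ts = [t for t, _ in kept]
    es = [e for _, e in kept]
    tbar, ebar = sum(ts) / len(ts), sum(es) / len(es)
    num = sum((t - tbar) * (e - ebar) for t, e in zip(ts, es))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

slope = rate_change([1.0, 2.0, 3.0, 4.0], [0.1, 0.2, 0.3, 0.4])
# A perfectly linear increase gives a slope of 10 per pseudotime unit.
```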
GO enrichment was then performed using the ranking of p-values. For DE gene analysis, we first performed trajectory inference with Monocle 3 and extracted cells mapped to path 1 (Fig.\u00a0B). For clustering analysis, we also ran two comparison methods, scHOT and tradeSeq, on the same path. scHOT clusters genes by computing pairwise Spearman correlations followed by hierarchical clustering. Such operations are very computationally expensive for large datasets, so we applied scHOT only to the top 300 marker genes in the NK cell path. Additional file 1: Gene cluster sizes, profile sizes, and p-values for each combination of trajectory inference method and gene summarization metric; number of NK cell markers for the clusters presented in the corresponding figure. Additional file 2: All supplementary notes, supplementary figures, and supplementary tables. Additional file 3: Gene IDs for the clusters presented in the corresponding figure. Additional file 4: GO enrichment analysis results for the clusters presented in the corresponding figure. Additional file 5: Review history."}
+{"text": "Gene regulatory networks (GRNs) provide abundant information on gene interactions, which contributes to demonstrating pathology, predicting clinical outcomes, and identifying drug targets. Existing high-throughput experiments provide rich time-series gene expression data for reconstructing the GRN to further gain insight into the mechanisms by which organisms respond to external stimuli. Numerous machine-learning methods have been proposed to infer gene regulatory networks. Nevertheless, machine learning, especially deep learning, is generally a \u201cblack box\u201d that lacks interpretability, and causality has not been well recognized in GRN inference procedures. In this article, we introduce grey theory integrated with an adaptive sliding window technique to flexibly capture instant gene\u2013gene interactions in the uncertain regulatory system. Then, we incorporate generalized multivariate Granger causality regression methods to transform the dynamic grey association into causation and generate directional regulatory links. We achieved competitive results on DREAM4 compared with other state-of-the-art algorithms and obtained a meaningful GRN structure on HCC data. From studies of Homo sapiens and the yeast Saccharomyces cerevisiae, we can conclude that the complexity of life results not from the number of genes but from the essence and dynamics of the interactions between genes; time-series gene expression data can effectively disclose these dynamic interactions over time. For regression, we consider ensemble methods such as bagging (e.g., GENIE3) and gradient boosting, as well as L1-norm LASSO and L2-norm Ridge regularized regression. Performance is evaluated with the area under the ROC curve (AUROC), built from the true positive rate (TPR) and false positive rate (FPR) across different thresholds, and the area under the precision-recall (PR) curve (AUPRC), where the x-axis is Recall and the y-axis is Precision. Other measurement metrics, such as Precision, the Matthews correlation coefficient (MCC), and Accuracy, are also reported. 
For each regression method, we ran it 100 times to reduce randomness. To further validate the capacity of the grey technique, we compared the four different regression methods with and without the dynamic grey association. Bagging and gradient boosting both improved significantly in AUROC [P(RF)\u2009=\u20093.93e\u221218; P(Xgboost)\u2009=\u20096.57e\u221218, Wilcoxon test] and AUPRC. In the 100 trials, the two regularized regression methods showed little change in AUROC and AUPRC, although the improvements remained significant [P(LASSO)\u2009=\u20091.86e\u221218; P(Ridge)\u2009=\u20091.55e\u221223]. Therefore, the dynamic grey association is effective and efficient in improving GRN structure inference. In the DREAM4 challenge, we implemented four different regression strategies on the DREAM4 dataset combined with the dynamic grey association: one typical model from each of the bagging and gradient boosting families, i.e., random forest (RF) and Xgboost, and, analogously, two different regularized regression methods. We further compared our model with seven other state-of-the-art methods, including GENIE3-lag, Jump3, SWING, BTNET, and BiXGBoost. Hepatocellular carcinoma (HCC) accounts for a large share of cancer-related deaths. In this article, we apply GreyNet to the HCC time-course expression data. We select the top 500 regulatory links in the HCC GRN by inference score; the inferred HCC GRN is shown in the corresponding figure. Aberrant activation of Wnt signaling in HCC is brought about by \u03b2-catenin (a key component of the Wnt pathway), which binds LEF (lymphoid enhancer-binding factor) and activates the transcription of target genes that participate in CSC maintenance and EMT, via p21 and c-Myc (a negative regulator of p21). 
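The AUROC scoring used in these comparisons can be reproduced with a minimal rank-based computation. This is an illustrative sketch, not the authors' evaluation code; tie handling is simplified.

```python
# Minimal AUROC via pairwise ranking (Mann-Whitney statistic): the fraction
# of (positive, negative) link pairs where the positive link is scored
# higher; ties count as half a win.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos) * len(neg))
```

An AUROC of 1.0 means every true regulatory link outranks every non-link; 0.5 is chance level.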
It is found that the growth of xenografted human HCC cells can be reduced by pharmacologic inhibition of JNK. Transforming growth factor \u03b2 (TGF-\u03b2) is involved in multiple stages of HCC development, from liver injury toward fibrosis, cirrhosis, and cancer. In hepatocarcinogenesis, TGF-\u03b2 performs as a suppressor factor in the early stages; however, TGF-\u03b2 contributes to tumor progression later. We then enriched the HCC GRN by NOA. Limited by current technology and knowledge, the underlying gene regulatory mechanisms in cells are not very clear to us, and it is reasonable to assume that gene interactions behave as a grey system. Moreover, the similarity and association of two components are variational, evolving over time in biological systems, so it is of limited use to assign a single static score to two variables over an entire time-series. We therefore turned the static coefficient into the dynamic grey association by incorporating the adaptive sliding window technique to capture the dynamic evolution, which is much more aligned with real and known gene regulations. The dynamic grey association alone is not enough to mine the causality in a GRN. Thus, we further introduced both linear and non-linear regression methods to search for causal links using temporal information. Causal relationships between variables can disclose the origin of an outcome and contribute to decision making, and the Granger causal regression model is easy to understand and explain. Reconstructing the causal gene regulatory network is a preliminary step toward finding the internal mechanisms of biological procedures and facilitating our understanding of the basic pathology of tumors and other diseases. On the in silico datasets, our model achieved competitive performance in AUROC and AUPRC compared with other state-of-the-art models. 
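The Granger-style reasoning can be sketched with a pairwise lagged-regression test. This is an illustrative simplification under stated assumptions, not GreyNet's generalized multivariate procedure: x is said to Granger-cause y if lagged values of x reduce the residual error of predicting y beyond what lagged values of y alone achieve.

```python
import numpy as np

def _lagged(v, lag):
    # columns v(t-1), ..., v(t-lag), aligned with targets v(t) for t >= lag
    return np.column_stack([v[lag - j:len(v) - j] for j in range(1, lag + 1)])

def _rss(X, t):
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    r = t - X @ beta
    return float(r @ r)

def granger_gain(x, y, lag=2):
    # restricted model: y(t) ~ lags of y; full model: add lags of x
    x, y = np.asarray(x, float), np.asarray(y, float)
    t = y[lag:]
    ones = np.ones((len(t), 1))
    rss_restricted = _rss(np.hstack([_lagged(y, lag), ones]), t)
    rss_full = _rss(np.hstack([_lagged(y, lag), _lagged(x, lag), ones]), t)
    return rss_restricted - rss_full  # > 0 means lagged x adds information

# demo: y is a one-step-delayed copy of x, so x should Granger-"cause" y
x_demo = np.array([(7 * i) % 11 for i in range(60)], float)
y_demo = np.concatenate(([0.0], x_demo[:-1]))
```

In practice the gain is converted into a test statistic (e.g., an F-test) before declaring a directed edge.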
In the real HCC dataset, GreyNet finds meaningful pathways in HCC development from the functional enrichment results of the HCC GRN. In the future, we will provide an update to make our model applicable to single-cell RNA-seq data. In summary, we proposed an interpretable machine-learning framework to infer the gene regulatory network from time-series expression data. We applied grey theory with the adaptive sliding window technique to model internal interactions in real regulatory procedures under conditions of limited information and knowledge, and we further incorporated the Granger causality framework to search for causal regulations between genes."}
+{"text": "Babesia is a genus of apicomplexan parasites that infect red blood cells in vertebrate hosts. Pathology occurs during rapid replication cycles in the asexual blood stage of infection. Current knowledge of Babesia replication cycle progression and regulation is limited and relies mostly on comparative studies with related parasites. Due to limitations in synchronizing Babesia parasites, fine-scale time-course transcriptomic resources are not readily available. Single-cell transcriptomics provides a powerful unbiased alternative for profiling asynchronous cell populations. Here, we applied single-cell RNA sequencing to 3 Babesia species. We used analytical approaches and algorithms to map the replication cycle and construct pseudo-synchronized time-course gene expression profiles. We identify clusters of co-expressed genes showing \u201cjust-in-time\u201d expression profiles, with gradually cascading peaks throughout asexual development. Moreover, clustering analysis of reconstructed gene curves reveals coordinated timing of peak expression in epigenetic markers and transcription factors. Using a regularized Gaussian graphical model, we reconstructed co-expression networks and identified conserved and species-specific nodes. Motif analysis of a co-expression interactome of AP2 transcription factors identified specific motifs previously reported to play a role in DNA replication in Plasmodium species. Finally, we present an interactive web application to visualize and interactively explore the datasets. Babesia is a genus of apicomplexan parasites that infect vertebrate red blood cells, but difficulties in synchronizing the parasites have limited knowledge of the Babesia replication cycle. This study uses single-cell RNA sequencing of three Babesia species to map the replication cycle and construct pseudo-synchronized time-course gene expression profiles. 
Babesia are some of the most widespread blood parasites of vertebrates, second only to the trypanosomes. Taken together, these observations suggest a role for Bdiv_028580c in pre-sexual development and suggest this process may be initiated in blood-stage Babesia parasites, given the lack of an obvious sexual stage in the presented single-cell data and the further observation that canonically sexual-stage genes are expressed in the replicative cycle; orthologs of ama1 have been shown to be important for invasion in multiple Plasmodium life cycle stages. The hypothetical protein Bdiv_028580c shares the majority of its connections with genes in the SM phase (20 of 36 connections). Interestingly, it is connected to an aspartyl protease, suggesting a possible role in invasion. Bdiv_024700 has several connections to genes important to cytoskeletal arrangement, daughter cell formation, and egress, suggesting a role for the gene in cytokinesis and egress; most of its connections occur between the SM and MC phases. One such connection is to mapk-2 (Bdiv_027570c), which is essential for the initiation of mitosis and daughter cell budding in T. gondii. The hypothetical protein Bdiv_024700 also shares some homology with a protein in P. falciparum that plays a role in the organization of the inner membrane complex, and it is connected to an aspartyl protease (asp2, Bdiv_023140c) similar to plasmepsin IX and X in P. falciparum, which have a known role in invasion. Finally, looking at the interactome of Bdiv_011410c, most gene connections occur in the inferred C phase. This hypothetical protein is connected to many genes involved in signal transduction, most notably 2 calcium-dependent protein kinases (cdpk4, Bdiv_024410; protein kinase domain containing protein, Bdiv_033990c) and the aspartyl protease asp3 (Bdiv_006490c), suggesting a possible role in signaling processes that control egress. The conserved hubs also include putative epigenetic factors similarly identified by orthology with P. 
falciparum, and (4) variable erythrocyte surface antigen (VESA) genes subunit C (Bdiv_013610), each of these occurring in cluster 1. The other conserved hubs comprise (1) putative transcription factors (TFs) identified by the presence of a DNA binding domain in the sequence and (2) putative AP2 TFs identified by orthology with P. falciparum; ASF-1 similarly occurs in cluster 1. The same analysis was performed for TFs with cyclical expression profiles, including AP2 domain containing proteins, identified through reciprocal BLAST with identified TFs in P. falciparum. For one of these, implicated in replication cycle progression into S phase, the timing is more in line with the observed expression across Babesia species. To examine whether the interactome of the TF and AP2 family of regulators is transcriptionally co-regulated, we performed a motif search analysis using the MEME suite, identifying motifs associated with PF3D7_1239200 (an unstudied AP2) and PF3D7_0802100 (AP2-I), respectively. Interestingly, the motif \u201cACACA\u201d has previously been reported to play a role in DNA replication in Plasmodium spp., and work in P. falciparum has identified and associated the same motif with AP2-I (PF3D7_0802100) (Fig 9B). To facilitate usage, we developed a user-friendly interactive web dashboard. The app provides functionality to explore and visualize gene expression during the IDC across thousands of asynchronously dividing single cells projected on PCA or UMAP coordinates, across the Babesia species. Moreover, users can generate co-expression networks and interactively visualize and explore the inferred interactome of genes. The app is available at https://umbibio.math.umb.edu/babesiasc/, and the source code is available on GitHub at https://github.com/umbibio/babesia_dash_app. 
Users can examine the timing of expression using pseudo-time analysis, perform comparative transcriptomic analysis across the Babesia parasites, and characterize the progression of the replication cycle using newly developed computational approaches. The replication cycle in Babesia spp. likely relies heavily on various signaling pathways. Taking advantage of the unique geometry of replicating parasites, we developed a pseudo-time analysis and used the synchronous bulk RNA-seq data in B. divergens to calibrate the progression of time in the single-cell data. This technique allows us to generate pseudo-synchronized time-course data at a fine resolution and reconstruct the expression waves of genes during the replication cycle. Analysis of the data shows a high degree of agreement between the bulk and single-cell data, demonstrating the ability of single-cell measurements to match (and overcome some of the limitations of) synchronized time-course measurements. Nevertheless, in the Babesia spp. datasets we can clearly observe divergence between bulk RNA-seq and scRNA-seq in the pattern of expression over time for genes with low expression; for example, protein kinase G was recently identified as an essential gene in egress using synchronized bulk RNA-seq and reverse genetics. Given the small Babesia genome size and the decreased number of predicted coding sequences in relation to mammalian cells, we estimated that an average of approximately 6,000 reads per cell should sufficiently capture the expression profiles. We also note that the read depths do vary between samples (correlating with genome size) and may cause some limitations in the downstream data analysis of genes with low expression. 
These limitations are important to consider when attempting to characterize cell populations, especially those that may represent rare cell types. The limitations, challenges, and advantages of scRNA-seq have been extensively reviewed, including comparisons of the most robust tools, understanding of dropouts, and discussion of the ability to assess genes with low expression. However, scRNA-seq also provides major advantages over bulk sequencing techniques. In most synchronized bulk RNA-seq time-course studies, measurements are limited to a few discrete time points, whereas in sequencing heterogeneous single-cell populations, the entire trajectory of expression can be captured, representing a continuum of time. This is advantageous in capturing subtle variations in curvature, which allow more precise mapping of peak expression as well as clustering of genes by expression similarities. Because of the ability to capture time using asynchronous populations, the difficulties of tightly synchronizing parasites are completely avoided in single-cell studies. Indeed, heterogeneity is an advantage in scRNA-seq that allows capturing and grouping cells in similar states at single-cell resolution, and scRNA-seq can capture and characterize small cell populations not distinguishable at the bulk level. In this study, we highlight the benefit of combining the 2 techniques to leverage the detection power of bulk RNA-seq and the fine resolution of single-cell sequencing. The annotations of Babesia genomes are not extensive and do not include the untranslated regions (UTRs); moreover, the gaps between genes are shorter compared to T. gondii and Plasmodium spp. One such protein was identified in the proteome of the infected RBC membrane, in addition to several other multigene families, showing this protein is expressed and exported by the parasite. Protein export in B. bovis could account for an increased use of the endomembrane system. 
However, this cannot fully explain the observed enrichment across the bovine-adapted parasites, as B. bigemina and B. divergens do not sequester. It may still be the case that parasites in bovine RBCs more dramatically alter the host cell. Indeed, when splenectomized calves were infected with stabilates of bovine-derived strains of B. divergens, an increase in the mean corpuscular volume of the host cell was observed in relation to infection by human-derived patient isolates (stabilates), suggesting changes in the membrane architecture. This suggests there is a parasite-specific effect on bovine host cells depending on which host (human versus cow) the parasite was originally isolated from. Interestingly, the major unifying difference in bovine parasites is the emphasis on vesicular transport and the endomembrane system. One hypothesis from this observation is that parasites grown in bovine RBCs require increased trafficking to and from the membrane, both to export proteins and to scavenge nutrients. The export patterns observed in B. bovis and B. divergens, combined with the pattern observed in this study, present an intriguing possibility that protein export is affected by the resident host cell. Our work provides, to our knowledge, the first comparative single-cell transcriptomic study across 3 Babesia species. To date, comparative genomic studies have focused on gene annotation, understanding variant gene expression, and elucidation of virulence determinants. B. bovis and B. divergens share similarity, while B. bigemina contains extensive duplications of certain gene families, leading to an increased genome size. In contrast to P. falciparum, in the Babesia species the expression peaks around 3.75 h, which is at the beginning of S phase. This difference is likely indicative of the different modes of division of the parasites (schizogony versus binary fission). 
We were also able to provide strong evidence for the utility of the interactome networks by showing the connection of asf-1 to histone expression. Parasites were cultured at defined hematocrit in RPMI-1640 medium supplemented with 25 mM HEPES, 11.50 mg/l hypoxanthine, 2.42 mM sodium bicarbonate, and 4.31 mg/ml AlbuMAX II (Invitrogen). Before the addition of AlbuMAX II and sodium bicarbonate, we adjusted the pH of the medium to 6.75. B. divergens strain Rouen 1987, kindly provided by Kirk Deitsch and Laura Kirkman, was maintained under the same conditions in purified white male O+ human RBCs (Research Blood Components). All cultures were maintained at 37\u00b0C in a hypoxic environment. Clonal lines of parasites were used for all selections and were derived from the provided strains via limiting dilution; these will be referenced as BdC9 (B. divergens), BigE1 (B. bigemina), and BOV2C (B. bovis). B. divergens, B. bovis, and B. bigemina were grown to >15% parasitemia. The health of the parasites was assessed by thin blood smear. Parasites were collected, pelleted, and washed with warm 1\u00d7 PBS, followed by a final wash with warm 0.4% BSA in 1\u00d7 PBS. To ensure loading of the correct number of parasites, parasitemia was counted on stained thin blood smears, and RBCs were counted using a hemocytometer. These values were used to calculate the number of infected cells to be loaded onto the Chromium Chip B. We aimed for recovery of 10,000 infected cells and thus loaded 16,500 infected cells from bulk culture. Cell suspensions were loaded into individual wells on the Chromium Chip B. After gel bead-in-emulsion (GEM) generation, single-cell libraries were processed according to the 10\u00d7 Genomics Chromium 3\u2032 v2 User Guide protocol, using 13 amplification cycles for cDNA amplification and 14 cycles in library construction. 
Libraries were subsequently sequenced on the Illumina NextSeq platform following the 10\u00d7 Genomics specifications, aiming for a minimum of 6,000 reads per cell. The reference genomes and annotation files of Babesia spp. (release 54) were downloaded from PiroplasmaDB (https://piroplasmadb.org/). Custom references were generated using the 10\u00d7 Genomics Cell Ranger (version 6.0.0) pipeline (cellranger mkref), and raw fastq files were aligned to the genome with cellranger count with default parameters. All data analysis was performed in R (version 4.1.1); the R Seurat package version 3 was utilized. Cells were clustered with the k-nearest neighbors algorithm using the Seurat function FindClusters with parameter res = 0.2, and datasets were down-sampled to include 800 cells per cluster. Orthologous genes in all 3 species were used to construct Seurat objects with the same genes, using B. divergens gene IDs. Datasets were then integrated using Seurat\u2019s merge and IntegrateData functions, with the B. divergens sample in human host RBCs used as the \u201creference\u201d dataset for integration (Seurat FindIntegrationAnchors function). The elliptic fit (MyEllipsefit R package, https://github.com/MarkusLoew/MyEllipsefit) was used as a prior to fit a principal curve to each dataset, tracing the Babesia spp. replication cycle. The pseudo-time interval was partitioned into 20-min bins, and cells that fell within the same bin were treated as synchronized replicates. Next, we calculated the correlation of gene expression with pseudo-time using a GAM and filtered out genes that did not correlate with the pseudo-time; the R function gam from the package gam was used for this analysis. This resulted in a time-course expression matrix with dimensions n representing the total number of genes, N representing the total number of time bins, and nk representing the total number of cells mapping to time bin k. Pseudo-time analysis was performed in 3 steps. 
First, an ellipsoid was fitted to the first 2 PCA coordinates in each dataset using the Ellipsefit function from the MyEllipsefit R package. Cross-correlation was then computed between gbulk(t) and gsc(t), the gene curves in the bulk and single-cell data, respectively, where N is the total number of sample points in the gene curves. The lag time maximizing the cross-correlation was calculated for each gene, and the distribution of lag times across all genes was examined to identify a single optimal lag time for all genes. Single-cell gene curves in all single-cell datasets were shifted by the optimal lag time to adjust the start time. Time-course bulk RNA-seq data from synchronized parasites were obtained as previously described. T. gondii scRNA-seq data were obtained from the single-cell atlas of T. gondii, retaining markers with adjusted p-value < 0.01. The top 20 markers of each phase were then used and mapped to their Babesia spp. orthologs. The timing of peak expression of each marker was calculated by examining the local maxima of the fitted pseudo-time gene expression curves in each species. Transition time points between phases were then determined by examining the quantiles of the peak time distributions, adjusted by visual inspection of the overlap of distributions. Species-specific contrasts were set in cell identities (e.g., B. bigemina G phase versus B. bovis, B. divergens (bovine), and B. divergens (human) G phase); for these analyses, FC > 2 and adjusted p-value < 0.01 were used to determine significance. For host-cell-specific differences, differential expression analysis was performed between B. divergens in human host and B. divergens in bovine host, as well as between B. divergens in human host and merged B. bigemina, B. bovis, and B. divergens in bovine host, in a phase-matched manner. For these analyses, FC > 1.5 and adjusted p-value < 0.01 were used to determine significance. Genes differentially expressed between B. divergens and B. bigemina as well as between B. divergens and B. 
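The lag-alignment step can be illustrated with a small sketch (not the authors' exact code): pick the shift that maximizes the circular cross-correlation between a bulk gene curve and its single-cell counterpart sampled on the same pseudo-time bins. Circular shifts are appropriate because the replication cycle is periodic.

```python
import numpy as np

def best_lag(bulk, sc):
    # standardize, then score every circular shift of the single-cell curve
    bulk = (bulk - bulk.mean()) / bulk.std()
    sc = (sc - sc.mean()) / sc.std()
    corrs = [float(np.dot(bulk, np.roll(sc, k))) for k in range(len(bulk))]
    return int(np.argmax(corrs))  # shift sc by this many bins to align

# demo curves: a sine over one cycle and a copy shifted left by 5 bins
t_demo = np.linspace(0, 2 * np.pi, 24, endpoint=False)
bulk_demo = np.sin(t_demo)
sc_demo = np.roll(bulk_demo, -5)
```

In the paper, the lag is computed per gene and the mode of the lag distribution is applied globally.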
bovis in bovine RBCs were excluded from this analysis to minimize the confounding effect caused by differences in parasites. DEGs of the inferred replication cycle phases were identified as follows. Conserved DEGs of each phase across species were identified using the Seurat function FindConservedMarkers with default parameters. DEGs of each phase were also calculated in each species independently using the FindAllMarkers function from the Seurat R package with parameter only.pos = TRUE. DEGs of each phase unique to a specific species were determined using the same function and by setting an appropriate contrast in cell identities, and significant GO terms (Benjamini < 0.1) were determined. The log fold enrichment and log p-value were used to visualize the significant GO terms; for this, DEGs were mapped to their orthologs. Mean gene expression curves of inferred replication cycle genes were used to construct an n\u00d7N time-course matrix, with rows representing genes and columns representing pseudo-time bins. Data were scaled to z-scores, and a DTW metric was used to measure the similarity between curves and to perform hierarchical clustering. For this analysis we used the tsclust function from the R package dtwclust (https://github.com/asardaes/dtwclust) with parameters control = hierarchical_control(method = \u201ccomplete\u201d), args = tsclust_args(dist = list(window.size = 4L)). The total number of clusters was set empirically by trial and error. Genes were ordered according to peak expression time, and cells according to their inferred replication cycle phase; a heatmap was used to visualize the expression profile of the genes. We used a Gaussian graphical model (GGM) to assemble a gene\u2013gene interaction network using the scRNA-seq expression data, in which S and \u0398 are the empirical covariance and precision matrices and \u03bb is the sparsity penalty. 
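The DTW similarity used for curve clustering can be written compactly in plain Python. This sketch is illustrative; the dtwclust package additionally applies the window-size constraint mentioned above, which is omitted here.

```python
# Minimal dynamic time warping (DTW) distance between two expression
# curves: the cost of the best monotone alignment of the two series.

def dtw(a, b):
    inf = float('inf')
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW allows local stretching, two curves with the same shape but slightly shifted peaks still score as similar, which suits clustering genes by the timing of their expression waves.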
The GGM can be used to calculate the partial correlation between gene pairs conditioned on the rest of the genes, and thus it captures pairwise relationships between the nodes in the interaction graph. Partial correlation is then used to assemble a gene\u2013gene interaction network, where genes represent nodes and edges represent a direct interaction between them after accounting for tertiary effects. The objective of the GGM is to minimize tr(S\u0398) \u2212 log det(\u0398) + \u03bb\u2016\u0398\u2016\u2081 over positive-definite precision matrices \u0398. To fit this model, we estimated the empirical covariance matrix S using the pseudo-time-course gene expression data as follows. First, the mean trend \u03bci(tj) of each gene i at time point tj was estimated using the expression of the replicate cells that mapped to time partition j. This mean trend was removed from the expression of each gene to de-trend the data, where the superscript \u2113 represents the (replicate) cell at time bin j. The de-trended data were used to estimate the empirical covariance matrix S. As gene expression is periodic during the replication cycle, and assuming a non-time-varying covariance matrix, the GGM model can be directly applied to the de-trended data to capture direct covariations in gene expression. The objective function of the GGM was then fitted for a grid of \u03bb values ranging from 0.01 to 1.0 with step size 0.01. The R package glassoFast was used for fitting the model (https://github.com/JClavel/glassoFast). The fitted precision matrices were converted to partial correlation matrices P, which in turn were converted to network adjacency matrices. The scale-free network property for each network was calculated, and the penalty value that maximized the scale-free property was used to identify the \u201coptimal\u201d network. Annotated P. falciparum TFs and AP2s were mapped to their Babesia spp. orthologs when available, and motif search analysis was carried out on promoter sequences using the MEME suite. 
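The conversion from a fitted precision matrix to the partial correlation matrix used for the adjacency can be sketched directly from the standard identity; this is an illustration, not the glassoFast code.

```python
import numpy as np

# P_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj), with unit diagonal:
# the partial correlation of genes i and j given all remaining genes.

def partial_corr(theta):
    d = np.sqrt(np.diag(theta))
    P = -theta / np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

# demo: for two variables the partial correlation equals the correlation
theta_demo = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))
```

Thresholding |P| then yields the adjacency matrix whose scale-free fit is used to pick the penalty.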
A web dashboard was built to provide easy access to the single-cell expression data and analysis results. Data were preprocessed and loaded as tables into a SQL database. The web interface was implemented using the \u201cdash\u201d python framework, which allows building of dynamic, interactive, data-driven apps. The current instance of the app runs on a single server using docker containers; however, the app design and the stateless approach of the framework allow easy scaling to support increasing traffic as needed. The app for the dashboard is organized as a python module and separated into a sub-module for each of the available pages. URL requests are processed using an index python script, which loads the appropriate UI layout and backend logic. The python app is served by a Gunicorn WSGI HTTP server, which allows it to spawn multiple workers for the app. MariaDB is used as the SQL server for the app, with connection pools of size 32 for each python worker, allowing multiple persistent connections to the database. A series of bash and python scripts were written to automate the process of database initialization from the TSV data files. As the data are expected to remain static through the running time of the app, SQL tables were created with tight data size allocations to help with performance. SQL indices were created such that queries remain fast, and unique indices are used where possible as a way of ensuring the integrity of the input data. The resulting relational database contains 9 tables for each of the species, holding data and metadata for genes, orthologous genes, single-cell expression experiments, spline fits for pseudo-time analysis, and nodes and edges for each of the available interaction networks. Finally, a docker-compose configuration script was written, which contains all relevant configuration parameters in one place to easily deploy the app+sql server services. 
The app can be accessed at https://umbibio.math.umb.edu/babesiasc/. The source code for the app is available on GitHub at https://github.com/umbibio/babesia_dash_app. S1 Table: Tab 1 contains the alignment metrics; Tab 2 summarizes the total number of genes and cells that pass the Seurat filters; Tab 3 summarizes the total number of significant genes in the fitted generalized additive model (p-value < 0.01). (XLSX) S2 Table: Tabs 1 and 2 contain information and pie charts on the total length and percentages of exons, introns, and intergenic regions in the Babesia species; T. gondii and P. falciparum are also included for comparison. Tab 3 contains strand-specific as well as overall percentages of overlapping genes. Tab 4 contains summary statistics (mean and median) of gaps between consecutive genes. (XLSX) S3 Table: Tabs 1\u20135 contain the list of differentially expressed genes (DEGs) of the inferred cell cycle phases (p-value < 0.01). Tabs 5 and 6 contain the list of conserved and species-specific DEGs. Tabs 7 and 8 contain the list of B. divergens and species-independent DEGs in the human versus bovine comparison. (XLSX) S4 Table: Tabs 1\u20133 contain the list of enriched GO terms for conserved and species-specific DEGs and human versus bovine DEGs. (XLSX) S5 Table: Tabs 1\u20133 contain the list of orthologous Babesia transcription factors (TFs), AP2s, and epigenetic markers. Tabs 4\u20136 contain the list of VESA genes in B. bovis, B. bigemina, and B. divergens. (XLSX) S6 Table: Tabs 1 and 2 contain the list of genes in each cluster for epigenetic markers and TFs. (XLSX) S1 Text. (PDF)"}
+{"text": "For decades, time series forecasting has had many applications in various industries such as weather, financial, healthcare, business, retail, and energy consumption forecasting. An accurate prediction in these applications is a very important and also difficult task because of high sampling rates leading to monthly, daily, or even hourly data. This high-frequency property of time series data results in complexity and seasonality. Moreover, time series data can have irregular fluctuations caused by various factors. Thus, using a single model does not yield good accuracy. In this study, we propose an efficient forecasting framework that hybridizes a recurrent neural network model with Facebook\u2019s Prophet to improve forecasting performance. Seasonal-trend decomposition based on Loess (STL) is applied to the original time series, and the decomposed components are used to train our recurrent neural network, reducing the impact of irregular patterns on the final predictions. Moreover, to preserve seasonality, the original time series data is modeled with Prophet, and the outputs of both sub-models are merged as the final prediction values. In experiments, we compared our model with state-of-the-art methods on real-world energy consumption data of seven countries, and the proposed hybrid method demonstrates results competitive with these state-of-the-art methods. Time series forecasting is a popular research area and has a wide range of application fields such as energy, business, economy, health, and environment. Time series forecasting is the process of using a dependent variable\u2019s past values to predict its future values. In detail, time series forecasting models try to capture the classical patterns found in these series, which may include seasonality, trend, and noise. The energy field is one of the most studied areas for time series forecasting. 
As the population of the world grows, energy consumption and demand are always increasing. Moreover, the supply must meet the demand for energy since it cannot be stored. Therefore, the prediction of this consumption and demand has been regarded as a crucial and complex task. There are many studies on forecasting energy consumption using various statistical methods or neural networks, and most of these studies are related to specific countries. At present, many single statistical methods are also used to forecast energy demand and consumption. Prophet is an open-source time-series forecasting methodology developed by Facebook. Prophet can be seen as a relatively new approach and has become popular among time-series forecasting methodologies because of its ease of use and power. Moreover, Prophet has already been used in several areas, and results show that it outperforms classical statistical and machine learning approaches, especially in the environmental field. Neural networks are also widely used for energy forecasting, and these studies can be found in literature surveys. Many studies show that seasonal decomposition or adjustment yields better performance for energy forecasting. Both statistical and neural network based approaches have some drawbacks. The statistical approaches are limited because they do not effectively handle the non-linear part of time series, while neural network models suffer from overfitting and also have parameter selection/tuning problems. In order to solve these problems, hybrid models are widely used for time series forecasting. Moreover, the prediction accuracy of a single model is limited and dependent on the data set; hence, the hybridization of models can result in improved prediction accuracy. The energy demand/consumption data can have irregular fluctuations caused by various factors. 
The decomposition of these data can help us reduce the influence of these factors and can also improve the accuracy of demand/consumption prediction by reflecting real-world scenarios. Moreover, after decomposition, each component can be modelled independently, so that the model complexity is lower than when forecasting the original time series as a whole. While statistical methods use different approaches to deal with all components of time series, including seasonality, there is no general approach or methodology to handle seasonality in neural network models. Because these models are nonlinear, they should cope with seasonality directly without transforming the time series. However, for artificial neural networks, removal of seasonal properties is recommended to improve forecasting accuracy. This article is organized as follows. In \u201cForecasting Methods\u201d, forecasting methods are introduced. The proposed hybrid model is discussed in \u201cProposed Hybrid Model\u201d. \u201cResults\u201d presents experiment results and discussion in which the performance of our model is evaluated. Finally, \u201cConclusion\u201d draws some conclusions. Prophet is an open-source time-series forecasting library developed by Facebook. It uses several distinct methods for time series forecasting and also supports seasonality and holiday/weekday effects. Prophet consists of three main components. The first component is called trend and is used to describe the trend in the time series data. The second component is seasonality and the third component is holidays. These three components can be described using the following equation: y(t) = g(t) + s(t) + h(t) + e(t), where g(t) is the trend function, s(t) represents periodic seasonality, h(t) represents holiday effects, and e(t) is the error term. LSTMs are a special form of recurrent neural network (RNN) and, because of their memory structure, are widely used in speech recognition, sentiment analysis, text analysis, and time series forecasting. In the LSTM model, long sequences of data are remembered or stored by incorporating a gating mechanism. 
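The three-component additive structure y(t) = g(t) + s(t) + h(t) + e(t) described above can be sketched in a few lines. The concrete slope, period, and holiday effect below are illustrative stand-ins, not Prophet's actual piecewise-linear trend or Fourier seasonality.

```python
import math

# Toy additive model in the spirit of Prophet's y(t) = g(t) + s(t) + h(t) + e(t).
# All parameter values are illustrative assumptions.
def trend(t, k=0.5, m=10.0):                 # g(t): linear growth
    return k * t + m

def seasonality(t, period=12.0, amp=3.0):    # s(t): one sinusoidal term
    return amp * math.sin(2 * math.pi * t / period)

def holidays(t, holiday_times=(6,), effect=5.0):  # h(t): additive holiday bump
    return effect if t in holiday_times else 0.0

def y(t):
    # noise term e(t) omitted for determinism
    return trend(t) + seasonality(t) + holidays(t)

series = [y(t) for t in range(24)]
```

Because the components add, each one can be inspected or tuned in isolation, which is exactly what makes the decomposition convenient for forecasting.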
This gating mechanism uses some information from previous steps to produce output by evaluating a function. This output is used to modify the current LSTM cell state. There are three gate structures in an LSTM cell: input gates, output gates, and forget gates; the structure of an LSTM cell is shown in the corresponding figure. The forget gate determines which information will be kept or discarded using the following formula: f_t = \u03c3(W_f [h_(t-1), x_t] + b_f). The input gate decides which values to update, i_t = \u03c3(W_i [h_(t-1), x_t] + b_i), and a tanh layer produces a candidate cell state, C\u0303_t = tanh(W_C [h_(t-1), x_t] + b_C). The cell\u2019s old state is then updated as C_t = f_t * C_(t-1) + i_t * C\u0303_t. Finally, we decide the output of the network, and this output is based on the cell state: a sigmoid layer decides what parts of the cell state will be used, o_t = \u03c3(W_o [h_(t-1), x_t] + b_o), and the hidden state is h_t = o_t * tanh(C_t). Stacked LSTMs (SLSTMs) stack multiple LSTM layers on top of each other. Traditional LSTMs use only previous information in order to determine the next states; bidirectional LSTMs (BDLSTMs) were developed to process information in both directions. Time series data have linear and non-linear relationships. While statistical methods are efficient at handling linear relationships in the time series, they cannot handle non-linear relationships. A hybrid model is implemented to cope with these weaknesses so that the sub-models work together for better performance. The proposed model tries to maintain the seasonality feature and, at the same time, reduce the impact of this feature. In order to preserve seasonality, Prophet is adopted along with trend and noise. The Prophet model is mainly composed of three components: trend, seasonality, and holidays. In addition, the Prophet model has linear and non-linear growth trend functions to handle different types of time series; this helps the model fit data sets having different periodic change rules effectively. On the other hand, we have used a stacked bidirectional LSTM model which exploits the decomposition of time series data by removing the repeating temporal patterns. 
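The gate equations above can be made concrete with a toy single-unit cell step; the scalar weights are hypothetical placeholders for the weight matrices a real LSTM learns.

```python
import math

# Toy single-unit LSTM cell step illustrating the forget, input, and output
# gates described above. Scalar weights w, u, b are illustrative assumptions;
# real cells use separate learned weight matrices per gate.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w=0.5, u=0.3, b=0.1):
    f_t = sigmoid(w * x_t + u * h_prev + b)        # forget gate
    i_t = sigmoid(w * x_t + u * h_prev + b)        # input gate
    c_tilde = math.tanh(w * x_t + u * h_prev + b)  # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde             # update old cell state
    o_t = sigmoid(w * x_t + u * h_prev + b)        # output gate
    h_t = o_t * math.tanh(c_t)                     # hidden state / output
    return h_t, c_t

# run a short sequence through the cell
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.2]:
    h, c = lstm_step(x, h, c)
```

The sigmoid gates stay in (0, 1), so they scale how much of the old cell state is forgotten and how much of the candidate state is written, which is what lets the cell carry information across long sequences.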
This decomposition and removal of the seasonality feature of the time series data allow the system to handle irregular patterns in these data. For the decomposition, the seasonal-trend decomposition based on Loess (STL) method is applied to the original time series data. The overall architecture of the proposed model is shown in the corresponding figure. First of all, the original data is used to feed the Prophet model. In order to use this model, the input data is first processed and modified, since the Prophet model expects only a formatted date and a time series variable. After this preprocessing stage, some trend and seasonality parameters of the model are tuned to obtain good results. The output of this model is simply the forecast values for the given time window. On the other hand, in order to use the same time series data as input for the LSTM, seasonality information is extracted and removed from the original data. Time series data can exhibit a variety of patterns, and it is often helpful to split a time series into several components, each representing an underlying pattern category. An example of decomposition is demonstrated in the corresponding figure. The decomposition algorithm simply extracts these three patterns from the original data. The combination of the components in a time series can be of two types: additive or multiplicative. In an additive time series, the components are added together to make the original time series; in a multiplicative time series, the components are multiplied together. In this study, in order to decompose the time series data, we have used the seasonal_decompose method of the statsmodels Python library for additive decomposition, shown in the following equation: y(t) = T(t) + S(t) + R(t), where T(t) is the trend component, S(t) is the seasonal component, and R(t) is the residual. After the decomposition process, the LSTM model is fitted to the trend and residual components for training. Since this model ignores the seasonal pattern, a reseasonalization process is applied after the prediction phase. 
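The additive decomposition y(t) = T(t) + S(t) + R(t) can be sketched without statsmodels: a centered moving-average trend plus period-wise means of the detrended series. Edge handling here is simplified compared with the library (the trend is undefined near the boundaries).

```python
# Naive additive decomposition sketch in the spirit of statsmodels'
# seasonal_decompose; a simplification, not the library's exact algorithm.
def decompose_additive(y, period):
    n, half = len(y), period // 2
    trend = [None] * n
    for t in range(half, n - half):
        window = y[t - half:t + half + 1]      # centered moving average
        trend[t] = sum(window) / len(window)
    # seasonal component: mean detrended value at each position in the period
    buckets = {p: [] for p in range(period)}
    for t in range(n):
        if trend[t] is not None:
            buckets[t % period].append(y[t] - trend[t])
    seasonal = [sum(buckets[t % period]) / len(buckets[t % period])
                for t in range(n)]
    resid = [y[t] - trend[t] - seasonal[t] if trend[t] is not None else None
             for t in range(n)]
    return trend, seasonal, resid

# toy series: linear trend plus a period-4 seasonal pattern
season = [2.0, -1.0, 0.0, -1.0]
y = [0.5 * t + season[t % 4] for t in range(16)]
trend, seasonal, resid = decompose_additive(y, period=4)
```

By construction the three components add back to the original series wherever the trend is defined, which is the property the reseasonalization step above relies on.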
In this process, the relevant seasonal components which were extracted in the decomposition stage are added to the forecasts generated by the LSTM model. This is evaluated by a simple add function. The overall training phase is demonstrated in the corresponding figure. The performance of a model can be tested by comparing the actual values with the predicted values. In this study, three performance metrics, namely the mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), are used to evaluate the performance of each model. These metrics are calculated using the following equations: MSE = (1/n) \u03a3 (y_i \u2212 \u0177_i)^2, RMSE = \u221a(MSE), and MAE = (1/n) \u03a3 |y_i \u2212 \u0177_i|, where y_i are the actual values, \u0177_i the predicted values, and n the number of observations. Within the scope of this study, the data set from an earlier study is used. In order to show the performance improvements of our study, we compared our proposed hybrid system (Hybrid) with a single stacked bidirectional LSTM model (BiLSTM) and also with a deseasonalized LSTM model (deBiLSTM). Moreover, a Prophet model (Prophet) was used and compared with our hybrid model. In the training phase of the neural network part of the hybrid model, and also of both the BiLSTM and deBiLSTM models, the Adam optimizer was used. Mean squared error (MSE) was used as the loss function. We performed a series of tests for parameter selection since there is no global method to estimate these parameters. The selection of model parameters is completely data-dependent; thus, the parameters of the test with the best accuracy result are used throughout the experiments. To be able to use time series data with these LSTM models, we transform these data into a structure of samples with input and output components. The Keras deep learning library provides the TimeseriesGenerator to automatically transform time series data into samples. TimeseriesGenerator uses the length parameter to define the sample length used to train the model. This parameter is used for predicting the next value after a sample input which has (length) number of elements. 
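The sample construction that TimeseriesGenerator performs can be sketched without Keras: each input is `length` consecutive values and its target is the value that follows.

```python
# Dependency-free sketch of the sliding-window sample construction that
# Keras's TimeseriesGenerator performs with its `length` parameter.
def make_samples(series, length):
    X, y = [], []
    for i in range(len(series) - length):
        X.append(series[i:i + length])   # `length` consecutive input values
        y.append(series[i + length])     # the next value is the target
    return X, y

X, y = make_samples([10, 20, 30, 40, 50], length=3)
# X == [[10, 20, 30], [20, 30, 40]], y == [40, 50]
```

A series of n points thus yields n - length supervised samples, which is why the choice of sample length directly trades off the amount of history per sample against the number of training examples.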
For both models, the sample lengths that give the best accuracy results were used, and these values are shown in the \u2018Sample Length\u2019 column of the corresponding table. We also compared the hybrid model with state-of-the-art techniques. ARIMA is a state-of-the-art statistical technique that is widely used in the time series domain. Support vector regression (SVR) uses the basic idea of support vector machines for regression problems and is widely used for learning decision boundaries in non-linear space from training data. The Holt-Winters exponential smoothing method is another widely used statistical technique. Empirical Mode Decomposition (EMD) is a signal processing method used for the decomposition of signals. In this study, we proposed a hybrid forecasting model for time series data to improve prediction accuracy. Our model is based on two different sub-models: a Prophet model for preserving the seasonality information of the time series and a neural network model for the deseasonalized version of this data. Thus, our model is capable of handling seasonality patterns and, at the same time, tries to reduce the impact of this seasonality in the prediction phase. By decomposing the time series data into its patterns, seasonality patterns can be extracted and removed from the original data, and the deseasonalized data is used in the training phase of the neural network model to reduce training time. The extracted seasonality information is added back to the prediction output of the neural network model, and the final prediction results are produced by merging the prediction results generated by both sub-models. We have evaluated the proposed model on a real-world data set containing monthly energy consumption values of seven countries for ten and a half years. The model is compared with a single Prophet model, a single stacked bidirectional LSTM model for the original data, and also a single stacked bidirectional LSTM model for deseasonalized data. 
The study showed that the decomposition of time series influences the final prediction accuracy because of the removal of irregular fluctuations. In addition, the results indicate that the hybrid model achieves better prediction results than the single models. Moreover, our model is compared with state-of-the-art techniques for time series data, including ARIMA, support vector regression (SVR), the Holt-Winters exponential smoothing method, EMD-LSTM, and EMD-GRU. The accuracy results show that our method outperforms these methods for some countries in our dataset, while for the other countries it has competitive performance. In the future, we will try to use different data sources with seasonality patterns. In addition, various types of RNNs will be adapted to explore the generalization and optimization of the hybrid mechanism. Supplemental Information 1: 10.7717/peerj-cs.1001/supp-1."}
+{"text": "During disease progression or organism development, alternative splicing may lead to isoform switches that demonstrate similar temporal patterns and reflect the alternative splicing co-regulation of such genes. Tools for dynamic process analysis usually neglect alternative splicing. Here, we propose Spycone, a splicing-aware framework for time course data analysis. Spycone exploits a novel IS detection algorithm and offers downstream analysis such as network and gene set enrichment. We demonstrate the performance of Spycone using simulated and real-world data of SARS-CoV-2 infection. The Spycone package is available as a PyPI package. The source code of Spycone is available under the GPLv3 license at https://github.com/yollct/spycone and the documentation at https://spycone.readthedocs.io/en/latest/. Supplementary data are available at Bioinformatics online. A large number (N\u2009=\u20092352) of genes with multiple isoforms undergo isoform switches in cancer, most of them leading to a protein domain loss. In cardiovascular disease, the IS of Titin causes clinical symptoms of dilated cardiomyopathy. Alternative splicing can lead to a differential abundance of gene isoforms between experimental conditions or time points. If the relative abundance of two isoforms of a gene changes between two conditions or time points, this behavior is called isoform switching (IS). While differential isoform expression focuses on the change in the expression value of one isoform, IS detects switches of predominantly expressed isoforms between conditions. A change of the predominant isoform appears as an intersection in time course data. However, existing methods for time course change-point detection are designed to detect abrupt changes between states, while IS events are usually slow and gradual changes of isoform expression. However, the above examples refer to molecular snapshots of dynamic processes. In order to study such dynamic processes, like disease progression, we need time course data. 
By identifying groups of genes with similar temporal expression or AS/IS patterns, we can dissect the disease progression into mechanistic details. A study of mouse retinal development has shown that genes with similar temporal exon usage patterns shared similar biological functions and cell type specificity. We introduce Spycone, a splicing-aware framework for systematic time course transcriptomics data analysis. It employs a novel IS detection method that prioritizes isoform switches between highly expressed isoforms over those with minor expression levels, thus focusing on biologically relevant changes rather than transcriptional noise. Spycone operates on both gene and isoform levels. For isoform-level data, the total isoform usage is quantified across time points. We have incorporated clustering methods for grouping genes and isoforms with similar time course patterns, as well as network and gene set enrichment methods for the functional interpretation of clusters. The IS genes within the same clusters are expected to interact cooperatively with other functionally related genes. Thus, we hypothesize that disease mechanisms or developmental changes can be identified with network and functional enrichment methods. We compare the performance of Spycone and TSIS on a simulated and a real-world dataset. 
On the latter, we demonstrate how Spycone identifies network modules that are potentially affected by alternatively spliced genes during SARS-CoV-2 infection. We demonstrated the performance of Spycone on RNA-seq data from SARS-CoV-2 infected human lung cells with eight time points and four replicates for each time point. The protein\u2013protein interaction network is obtained from BioGRID (v.4.4.208). We used the SARS-CoV-2 dataset described above as a reference for setting the parameters of a negative binomial distribution of gene expression counts, as well as the parameters of the Poisson distribution of the number of isoforms for each gene. A first-order Markov chain is used for the simulation of the gene states at each time point. In the simplest form, we defined two gene states: switched or unswitched. Change of the states along the time course depends on the transition probabilities. We used a Dirichlet distribution to simulate the relative abundance of each isoform of a gene. The relative abundance of an isoform is the ratio of the isoform expression to the total gene expression. The Dirichlet distribution produces k-dimensional vectors x with real numbers between 0 and 1 such that the sum of the elements in x is 1, which makes it suitable for simulating a probability distribution over k categories. It is parameterized by a k-dimensional concentration vector governing the distribution of the probabilities. In Model 1, where we assumed that switching isoforms are highly expressed, the concentration vector assigns higher weight to the s switching isoforms, where s is the number of switching isoforms. In Model 2, where we assumed that isoforms with abundance higher than 0.3 have equal chances to switch, the weight is spread over all k isoforms, where k is the number of all isoforms. To introduce switching events, the probabilities of two random isoforms are swapped. 
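The abundance simulation above can be sketched with stdlib-only Dirichlet sampling (normalized Gamma variates); the concentration values below are illustrative, not the paper's exact alpha vectors.

```python
import random

# Sketch of the isoform-abundance simulation: a Dirichlet sample is a vector
# of normalized Gamma draws. Concentration values here are assumptions.
def sample_dirichlet(alpha, seed=0):
    rng = random.Random(seed)
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

# a gene with 4 isoforms; the first two get higher weight, playing the role
# of the highly expressed (potentially switching) isoforms of Model 1
abundances = sample_dirichlet([10.0, 10.0, 1.0, 1.0])

def swap_random_pair(abund, seed=0):
    # introduce a switching event by swapping the abundances of two isoforms
    rng = random.Random(seed)
    i, j = rng.sample(range(len(abund)), 2)
    out = list(abund)
    out[i], out[j] = out[j], out[i]
    return out

switched = swap_random_pair(abundances)
```

Since a Dirichlet draw always sums to 1, the sampled vector can be multiplied directly by a gene-level expression mean to obtain per-isoform expression values.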
The gene expression means are randomly picked among the genes with the same number of isoforms from the real-world dataset. After we simulated abundances for each isoform, we multiplied them by a gene expression mean selected from the real-world dataset to obtain the transcript expression mean. Spycone filters out isoforms with expression <1 over all time points and then detects IS events based on the relative abundance of the isoforms. The IS events are defined with the following metrics. Switch points refer to the points where two time courses intersect in at least 60% of the replicates. For every pair of isoforms in a gene, Spycone detects all possible switch points for further analysis. For a dataset that has only one replicate, Spycone checks the intersection between isoform pairs in that one replicate. To detect IS in TSIS, we used the following parameters: (i) the switching probability > 0.5; (ii) difference before and after switch > 10; (iii) an interval lasting at minimum one time point before and after the switch; and (iv) Pearson correlation < 0. More detailed descriptions of the parameters are found in the supplementary material. Isoform usage measures the relative abundance of an isoform. The isoform usages of all isoforms from one gene are summed up to obtain the total isoform usage. We defined the change of total isoform usage as the difference between two consecutive time points. The clustering algorithms are implemented using the scikit-learn machine learning package in Python (v0.23.2). RNA-binding protein motifs are obtained from the mCross database (downloaded in 2022). A Mann\u2013Whitney U-test is performed on the sets of scores. Each motif threshold is selected using the distribution of the PSSM score over the frequency of nucleotides (background). The threshold is set at a false positive rate <0.01, meaning the probability of finding the motif in the background is <0.01. Finally, we performed motif enrichment analysis using the motifs module from the Biopython library. 
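The switch-point definition above (an intersection of two isoform time courses in at least 60% of replicates) can be sketched as a sign-flip check on the abundance difference; the data layout (replicate x time point) is an assumption for illustration.

```python
# Sketch of switch-point detection: a time point where the relative-abundance
# curves of two isoforms intersect in >= 60% of replicates. Data layout
# (lists of replicates, each a list over time points) is an assumption.
def switch_points(iso1, iso2, min_frac=0.6):
    n_rep = len(iso1)
    n_time = len(iso1[0])
    points = []
    for t in range(n_time - 1):
        crossings = 0
        for r in range(n_rep):
            before = iso1[r][t] - iso2[r][t]
            after = iso1[r][t + 1] - iso2[r][t + 1]
            if before * after < 0:          # sign flip => curves intersect
                crossings += 1
        if crossings / n_rep >= min_frac:
            points.append(t)
    return points

# two replicates, four time points: both replicates cross between t=1 and t=2
iso1 = [[0.8, 0.7, 0.3, 0.2], [0.9, 0.6, 0.4, 0.1]]
iso2 = [[0.2, 0.3, 0.7, 0.8], [0.1, 0.4, 0.6, 0.9]]
found = switch_points(iso1, iso2)
```

Requiring agreement across replicates is what separates a reproducible switch from a crossing caused by noise in a single replicate.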
Spycone is available as a Python package that provides systematic analysis of time course transcriptomics data. IS detection: we propose novel metrics for the detection and selection of significant IS across time. IS events are described as a change of the isoform distribution between two conditions (time points). To detect an IS, our algorithm first searches for switch points, i.e. specific time points where two isoform expression time courses intersect. The main challenges in detecting time course IS are: (i) most genes have multiple isoforms, and changes of the relative abundance can be due to factors other than AS, e.g. RNA degradation; (ii) most IS have multiple switch points with different magnitudes of change in abundance, so we need to consider how prominent the changes in abundance are for them to be recognized as an IS event; (iii) most genes have multiple lowly expressed isoforms that constitute noise and might not be biologically relevant. An ideal IS detection tool, therefore, should prioritize IS events according to their expression level. Spycone overcomes these challenges by using a novel approach to detect IS events based on two metrics: a P-value and an event importance score. The P-value is calculated by performing a two-sided Mann\u2013Whitney U-test between the relative abundance before and after the switch point among the replicates. Event importance is the average of the ratio of the relative abundance of the two switching isoforms to the relative abundance of the isoform with the highest expression between the switching time points (see Section 2). Examples of high and low event importance are illustrated in the supplementary figures. Clustering analysis identifies co-spliced genes: similar to how transcription factors co-regulate sets of genes, in the context of AS, the splicing events of a subset of genes are co-regulated by splicing factors. We compared Spycone to TSIS using simulated data. 
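The two-sided Mann-Whitney U-test used above can be sketched in pure Python with the normal approximation and no tie correction, a simplification of what scipy.stats.mannwhitneyu computes.

```python
import math

# Pure-python sketch of a two-sided Mann-Whitney U-test comparing relative
# abundances before vs. after a switch point. Uses the normal approximation
# without tie correction -- a simplification, not scipy's exact method.
def mann_whitney_u(a, b):
    n1, n2 = len(a), len(b)
    # U = number of pairs (x in a, y in b) with x > y, counting ties as 0.5
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal CDF (via math.erf)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# relative abundance of one isoform before vs. after a putative switch point
u, p = mann_whitney_u([0.8, 0.7, 0.9], [0.2, 0.1, 0.3])
```

With only a handful of replicates the normal approximation is rough, which is one reason a proper implementation (exact distribution, tie correction, multiple testing correction) matters in practice.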
TSIS provides an option to filter for IS events that involve only the highest abundance isoform\u2014we refer to the result after filtering as major_TSIS. We aimed to investigate whether the performance of TSIS improves when applying this option. We use a hidden Markov model to simulate the switching state of the genes at each time point (see Section 2). We simulated two models: Model 1 and Model 2. For both tools, we varied their parameters (difference of relative abundance) to investigate how this affects their precision and recall. We also considered different levels of variance of gene expression, namely 1, 5 and 10, across replicates to mimic noise. TSIS has a higher computational complexity of O(n\u00b7log(n)) than Spycone, which has a complexity of O(n), leading to a drastically lower runtime for Spycone, in the range of a few minutes rather than hours. In Model 1, Spycone achieved high precision and acceptable recall (0.75). In Model 2, Spycone achieved higher precision and recall than TSIS; however, they dropped as the model allowed more IS events. We applied spline regression to detect switch points and calculated precision and recall as above. For the real-world dataset, we filtered Spycone IS events with adjusted P-value < 0.05. For TSIS, we used (i) switching probability > 0.5; (ii) difference of expression before and after switch > 10; (iii) correlation coefficient < 0; and (iv) adjusted P-value < 0.05. The dissimilarity coefficient from Spycone and the correlation coefficient from TSIS are used to filter for IS events with negatively correlated isoforms. The values are chosen according to the performance on Model 2 simulated data with noise level 10, which showed the best precision. Spycone reported 915 IS events, of which 418 affected at least one protein domain. TSIS reported 985 events, of which 417 affected at least one protein domain. On the gene level, Spycone reported 745 genes with IS events and TSIS reported 858 genes, where 225 genes were found by both Spycone and TSIS. TGF-beta signaling is commonly found in both tools. The MAPK pathway and DNA damage checkpoint are enriched uniquely in Spycone. 
TSIS\u2019s Cluster 2 and Spycone\u2019s Cluster 3 have lower changes of total isoform usage overall. Spycone\u2019s clusters showed more unique and relevant terms: 70 enriched Reactome terms in Spycone\u2019s clusters and only 7 terms in TSIS\u2019s clusters. TSIS\u2019s Cluster 3 and Spycone\u2019s Cluster 2 show an increase of change of total isoform usage after 12\u2009h post-infection. Spycone\u2019s cluster is uniquely enriched in the protein folding chaperonin complex TriC/CCT and the NOTCH signaling pathway. Finally, TSIS\u2019s Cluster 4 and Spycone\u2019s Cluster 4 have increasing changes of total isoform usage overall. TSIS\u2019s cluster is enriched in mitosis-related pathways, cell cycle and tubulin folding, whereas Spycone\u2019s Cluster 4 is enriched in signaling by PTK6, interferon, metabolism of proteins, the pentose phosphate pathway, etc. Next, we detected active modules that show over-representation of IS genes from the same cluster based on DOMINO, using a PPI network from BioGRID. Assuming that multiple IS events occurring between the same time points are co-regulated by the same splicing factor, we performed co-expression and motif analysis. The co-expression analysis yields thirteen significant RNA-binding proteins that are positively or negatively correlated with at least two isoforms of the same gene: in Cluster 1, FUBP3, HLTF, IGF2BP3, ILF3, RBFOX2, RBM22, SF3B1 and TAF15; in Cluster 3, IGF2BP3, RBM22, RPS6, SRSF7 and SUGP2 (Table S5). To investigate whether the regulated exons, i.e. the exons lost or gained after IS events, show higher PSSM scores for a certain RNA-binding protein motif than the unregulated exons in a cluster, we applied motif enrichment analysis. We calculated PSSM scores along the flanking regions of the exons\u2019 5\u2019 and 3\u2019 boundaries and excluded the first and last exons in an isoform, since these are often regulated by 5\u2019-cap binding proteins and polyadenylation regulating proteins. 
AS regulates dynamic processes such as development and disease progression. However, AS analysis tools typically compare only two conditions and neglect how AS changes dynamically over time. Currently, the only existing tool for time course data analysis that accounts for splicing is TSIS. TSIS detects temporal IS events but is biased towards IS events between lowly expressed isoforms and does not offer features for downstream analysis, which is important for interpreting the functional consequences of IS events. Spycone, a framework for the analysis of time course transcriptomics data, features a new approach for detecting temporal IS events and a new event importance metric to filter out lowly expressed isoforms. We demonstrate that Spycone\u2019s IS detection method outperforms TSIS in terms of precision and recall based on simulated data. A key advantage of Spycone is that it explicitly considers how well IS events agree across replicates, while TSIS considers averaged expression values among replicates and/or natural spline-curve fitting. More specifically, Spycone uses a non-parametric Mann\u2013Whitney U-test to test for significant IS and performs multiple testing correction to reduce type I error. We have demonstrated the usability of Spycone by analyzing time course transcriptomics data for SARS-CoV-2 infection, where we found affected signaling cascades. We performed NEASE enrichment on the clusters and compared the results from Spycone and TSIS. Spycone results are enriched in relevant terms such as the mitogen-activated protein kinase (MAPK) pathway (Cluster 1), NOTCH signaling (Cluster 2), fibroblast growth factor receptor (FGFRs) and toll-like receptor (TLR) pathways (Cluster 3) and the pentose phosphate pathway (Cluster 4). NOTCH signaling pathways are found up-regulated in the lungs of infected macaques. DCLK2 is differentially expressed in SARS-CoV-2 patients. The affected IS genes also include growth factor receptors (such as vascular endothelial growth factor (VEGF) receptors) and their downstream kinases. 
They are essential for viral infection since they modulate cellular processes like migration, adhesion, differentiation and survival. One example is that activation of EGFR in SARS-CoV-2 can suppress the IFN response and aid viral replication. Another key finding is that E3 ubiquitin ligases are affected by IS. They are known to mediate the host immune response by removing virus particles, and various virus species hijack the host E3 ubiquitin ligases in favor of viral protein production. In the splicing factor analysis, ILF3 and SRSF7 are identified as splicing factors affecting the splicing of exons. ILF3 plays a role in the antiviral response by inducing the expression of interferon-stimulated genes. Lastly, in order to obtain confident time course analysis results, one needs high-resolution data in terms of the number of time points and sample replicates. Consequently, at least three time points and three replicates are recommended for Spycone analysis. However, this criterion is rarely met due to technical and economical restraints. Thus, Spycone also provides an option for a permutation test with only one replicate for the dataset under investigation. We demonstrated this usage in a tumor development dataset with one replicate. Limitations: Spycone achieves high precision and considerably higher recall than the only competing tool, TSIS. Nevertheless, the moderate recall we observe, in particular in the presence of noise, shows that there is further room for method improvement. In our simulation Model 2, where we allowed for isoform switches between minor isoforms, we observed a reduction in both precision and recall. Spycone identifies only two isoforms that switch per event, but in reality, an event could involve more than two isoforms. In the future, we should consider multiple-isoform switches to handle more complex scenarios. In addition, the usage of a weighted PPI network might introduce selection bias. 
However, a higher weight gives higher confidence to an interaction, meaning that more domains between the proteins are interacting. Therefore, using a weighted PPI network helps prioritize interactions with higher confidence. We believe this advantage outweighs the potential bias. Nevertheless, the usage of a weighted PPI network is optional. Spycone uniquely offers features for detailed downstream analysis and allows for detecting the rewiring of network modules in a time course as a result of coordinated domain gain/loss. This type of analysis is limited by the availability of structural annotation. However, current developments in computational structural biology, e.g. AlphaFold2, could expand the information about domains and domain\u2013domain interactions. Spycone was thus far applied exclusively to bulk RNA-seq data. When considering tissue samples, IS switches between time points could also be attributed to changes in cellular composition. An attractive future prospect is thus to apply Spycone to studying IS in single-cell RNA-seq data, where dynamic IS events could be traced across cellular differentiation using the concept of pseudotime. However, the current single-cell RNA-seq technologies are limited in their ability to discern isoforms. With declining costs in next-generation sequencing, time course RNA-seq experiments are growing in popularity. Although AS is an important and dynamic mechanism, it is currently rarely studied in a time course manner due to the lack of suitable tools. Spycone closes this gap by offering robust and comprehensive analysis of time course IS. Going beyond individual IS events, Spycone clusters genes with similar IS behavior in time course data and offers insights into the functional interpretation as well as putative mechanisms and co-regulation. The latter is achieved by RNA-binding protein motif analysis and highlights splice factors that could serve as potential drug targets for diseases. 
Using simulated and real data, we showed that Spycone has better precision and recall than its only competitor, TSIS, and that Spycone is able to identify disease-related pathways in real-world data, as we demonstrated for SARS-CoV-2 infection. In summary, Spycone brings mechanistic insights about the role of temporal changes in AS and thus perfectly complements RNA-seq time course analysis."}
+{"text": "Anomaly detection in multivariate time series is an important problem with applications in several domains. However, the key limitation of the approaches proposed so far lies in the lack of a highly parallel model that can fuse temporal and spatial features. In this paper, we propose TDRT, a three-dimensional ResNet and transformer-based anomaly detection method. TDRT can automatically learn the multi-dimensional features of temporal\u2013spatial data to improve the accuracy of anomaly detection. Using the TDRT method, we were able to obtain temporal\u2013spatial correlations from multi-dimensional industrial control temporal\u2013spatial data and quickly mine long-term dependencies. We compared the performance of five state-of-the-art algorithms on three datasets (SWaT, WADI, and BATADAL). TDRT achieves an average anomaly detection F1 score higher than 0.98 and a recall of 0.98, significantly outperforming five state-of-the-art anomaly detection methods. Anomaly detection is the core technology that enables a wide variety of applications, such as video surveillance, industrial anomaly detection, fraud detection, and medical anomaly detection. Traditional approaches use clustering algorithms and probabilistic techniques. The key limitation of deep learning-based anomaly detection methods is the lack of highly parallel models that can fuse the temporal and spatial features. As such, most of these approaches rely on the time correlation of time series data for detecting anomalies. The lack of such a model limits the further development of deep learning-based anomaly detection technology. Without such a model, it is difficult to achieve an anomaly detection method with high accuracy, a low false alarm rate, and a fast detection speed. In this paper, we propose TDRT, a three-dimensional ResNet and transformer-based anomaly detection method. TDRT can automatically learn the multi-dimensional features of temporal\u2013spatial data to improve the accuracy of anomaly detection. 
TDRT is composed of three parts. The first part is three-dimensional mapping of multivariate time series data, the second part is time series embedding, and the third part is attention learning. TDRT combines the representation learning power of a three-dimensional convolution network with the temporal modeling ability of a transformer model. Via the three-dimensional convolution network, our model aims to capture the temporal\u2013spatial regularities of the temporal\u2013spatial data, while the transformer module attempts to model the longer-term trend. Our TDRT model advances the state of the art in deep learning-based anomaly detection on two fronts. First, it provides a method to capture the temporal\u2013spatial features of industrial control temporal\u2013spatial data. Our model shows that anomaly detection methods that consider temporal\u2013spatial features have higher accuracy than methods that only consider temporal features. Second, our model has a faster detection rate than the approach that uses LSTM and one-dimensional convolution separately and then fuses the features, because it has better parallelism. The first challenge is to obtain the temporal\u2013spatial correlation from multi-dimensional industrial control temporal\u2013spatial data. This is challenging because the data in an industrial system are affected by multiple factors. The value of a sensor or controller may change over time and with other values. For example, SWaT consists of many sensors and actuators whose readings are interrelated. The second challenge is to build a model for mining long-term dependency relationships quickly. During a period of operation, the industrial control system operates in accordance with certain regular patterns. To address this challenge, we use the transformer to obtain long-term dependencies. The advantage of the transformer lies in two aspects. On the one hand, its self-attention mechanism can produce a more interpretable model, and the attention distribution can be checked from the model. 
On the other hand, it has less computational complexity and can reduce the running time. The key technical novelty of this paper is twofold. First, we propose an approach that simultaneously focuses on the order information of time series and the relationship between multiple dimensions of time series, which can extract temporal and spatial features at once instead of separately. Second, we propose an approach to apply an attention mechanism to a three-dimensional convolutional neural network. We evaluated TDRT on three data sets (SWaT, WADI, and BATADAL). Our results show that TDRT achieves an anomaly recognition precision rate of over 98% on the three data sets. With the rapid development of the Industrial Internet, the Industrial Control Network has increasingly integrated network processes with physical components. The physical process is controlled by the computer and interacts with users through the computer. The local fieldbus communication between sensors, actuators, and programmable logic controllers (PLCs) in the Industrial Control Network can be realized through wired and wireless channels. Commands are sent between the PLC, sensors, and actuators through network protocols, such as industrial EtherNet/IP, the common industrial protocol (CIP), or Modbus. The process control layer network is the core of the Industrial Control Network, including human\u2013machine interfaces (HMIs), the historian, and a supervisory control and data acquisition (SCADA) workstation. The HMI is used to monitor the control process and can display the historical status information of the control process through the historical data server. The historian is used to collect and store data from the PLC. The role of the SCADA workstation is to monitor and control the PLC. Intruders can attack sensors, actuators, and controllers. 
For example, attackers modify the settings or configurations of sensors, actuators, and controllers, causing them to send incorrect information. Intruders can attack the HMI. For example, attackers exploit vulnerabilities in its software to affect the physical machines with which it interacts. Intruders can attack the network. For example, attackers can affect the transmitted data by injecting false data, replaying old data, or discarding a portion of the data. Intruders can physically attack the Industrial Control Network components. For example, attackers can maliciously modify the location of devices, physically change device settings, install malware, or directly manipulate the sensors. The Industrial Control Network plays a key role in infrastructure, smart manufacturing, smart cities, and military manufacturing, making the Industrial Control Network an important target for attackers. This paper considers a powerful adversary who can maliciously destroy the system through the above attacks. Their ultimate goal is to manipulate the normal operations of the plant. Attacks can exist anywhere in the system, and the adversary is able to eavesdrop on all exchanged sensor and command data, rewrite sensor or command values, and display false status information to the operators. Attackers attack the system in different ways, and all of them can eventually manifest as physical attacks. Therefore, we can detect anomalies by exploiting the deviation of the system caused by changes in the sensors and instructions. Anomaly detection is a challenging task that has been largely studied. Anomalies can be identified as outliers and time series anomalies, of which outlier detection has been the most widely studied. Clustering-based anomaly detection methods leverage similarity measures to identify critical and normal states. The key to this approach lies in how to choose the similarity measure, such as the Euclidean distance or shape distance. 
Clustering methods initially used the Euclidean distance as a similarity measure to divide data into different clusters; Almalawi proposed one such method. Anomaly detection has also been studied using probabilistic techniques. Recently, deep networks have been applied to time series anomaly detection because of their powerful representation learning capabilities. We consider a multivariate time series X of length l and m dimensions, where l is the length of the time series and m is the number of measuring devices. A subsequence of length l' in the sequence X starts at timestamp t, and we define the set of all overlapping subsequences in a given time series X. The problem is then: given the measurements X from an industrial control system measurement device set, return A, a set of abnormal subsequences. In this section, we first introduce the overall architecture of our newly proposed TDRT method. The time window slides d steps each time to obtain a subsequence, finally yielding a group of subsequences within the bsize time window. After completing the three-dimensional mapping, a low-dimensional time series embedding is learned in the convolutional unit. Specifically, we apply four stacked three-dimensional convolutional layers to model the relationships between the sequential information of a time series and the time series dimensions. Furthermore, we propose a method to dynamically choose the temporal window size. Specifically, the dynamic window selection method utilizes similarity to group multivariate time series, and a batch of time series with high similarity is divided into a group. To describe the subsequences, we define a subsequence window of length l'. 
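The sliding-window extraction described above (a window of fixed length slid d steps at a time over an l-by-m series) can be sketched as follows; the names `win` and `stride` are illustrative, not TDRT's actual parameter names:

```python
import numpy as np

def sliding_subsequences(x, win, stride):
    """Extract all overlapping subsequences of length `win` from a
    multivariate series `x` of shape (l, m), sliding `stride` steps."""
    l = x.shape[0]
    starts = range(0, l - win + 1, stride)
    return np.stack([x[s:s + win] for s in starts])

# Toy series: 10 time steps, 3 measuring devices.
x = np.arange(30, dtype=float).reshape(10, 3)
subs = sliding_subsequences(x, win=4, stride=2)
print(subs.shape)  # (4, 4, 3): 4 windows, each 4 steps x 3 dimensions
```

Stacking the windows in this way yields the kind of three-dimensional tensor (window index, time step, sensor dimension) that a 3D convolution can then process in one pass.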
The subsequence window starts at timestamp t, which represents the start time of the time window in X. We first describe the method for projecting a data sequence into a three-dimensional space. Our TDRT method aims to learn relationships between sensors from two perspectives: on the one hand, learning the sequential information of the time series and, on the other hand, learning the relationships between the time series dimensions. This facilitates the consideration of both temporal and spatial relationships. In the specific case of a data series, the length of the data series changes over time. To facilitate the analysis of a time series, we define a time window of fixed length. To better understand the process of three-dimensional mapping, we have visualized the process, where f is the filter size of the last convolutional layer and c is the output dimension of the convolution operation. In this work, we focus on subsequence anomalies of multivariate time series. The key is to extract the sequential information and the information between the time series dimensions. The multivariate time series embedding is for learning the embedding information of multivariate time series through convolutional units. The convolution unit is composed of four cascaded three-dimensional residual blocks. The reason we chose a three-dimensional convolutional neural network is that its convolution kernel is a cube, which can perform convolution operations in three dimensions at the same time; w and b are learnable parameters. At the core of attention learning is a transformer encoder. The reason for this design choice is to avoid overfitting on datasets with small data sizes. During implementation, the number of encoder layers L is set to 6. The transformer encoder is composed of two sub-layers, a multi-head attention layer and a feed-forward neural network layer. 
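As a rough illustration of what the attention layer computes, here is a minimal single-head scaled dot-product attention in NumPy; the projection matrices, sizes, and seed are made up for the example and do not reflect TDRT's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n) pairwise weights
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))          # 5 feature vectors of dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (5, 8)
```

Because every position attends to every other position in one matrix product, this computation is far more parallel than a recurrent model, which is the parallelism advantage the paper attributes to the transformer.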
The multi-layer attention mechanism does not encode local information but calculates different weights on the input data to grasp the global information. Given n input vectors, the query vector sequence Q, the key vector sequence K, and the value vector sequence V are obtained through linear projections of the input. Since there is a positional dependency between the groups of the feature tensor, in order to make the position information of the feature tensor clearer, we add an index vector to each m-dimensional vector; for instance, six sensors collecting six pieces of data at time i form one such vector. The input values are normalized in order to provide a unified standard. This section describes the three publicly available datasets and metrics for evaluation. We evaluate the performance of TDRT and compare it with other state-of-the-art methods. Three publicly available datasets are used in our experiments: two real-world datasets, SWaT (Secure Water Treatment) and WADI (Water Distribution), and a simulated dataset, BATADAL (Battle of Attack Detection Algorithms). The characteristics of the three datasets are summarized in the accompanying table. SWaT Dataset: SWaT is a testbed for the production of filtered water, which is a scaled-down version of a real water treatment plant. The SWaT testbed is under normal operation for 7 days and under the attack scenario for 4 days. The SWaT dataset is collected over 11 days. WADI Dataset: WADI is an extension of SWaT, and it forms a complete and realistic water treatment, storage, and distribution network. The WADI testbed is under normal operation for 14 days and under the attack scenario for 2 days. The WADI dataset is collected over 16 days. BATADAL Dataset: BATADAL is a competition to detect cyber attacks on water distribution systems. 
The BATADAL dataset collects one year of normal data and six months of attack data and is generated by simulation. We consider the whole time window to be abnormal once it contains an abnormal point. In addition, we use the precision, recall, and F1 score as evaluation metrics. We study the performance of TDRT by comparing it to other state-of-the-art methods and analyzing the effect of its components. For a comparison of the anomaly detection performance of TDRT, we select several state-of-the-art methods for multivariate time series anomaly detection as baselines: UAE Frequency, a lightweight baseline; MAD-GAN, a GAN-based method; OmniAnomaly, a stochastic recurrent neural network model; USAD, an adversarially trained autoencoder method; and NSIBF, a time series anomaly detection method based on neural system identification and Bayesian filtering. On average, TDRT is the best-performing method on all datasets. Different time windows have different effects on the performance of TDRT. In this section, we study the effect of the window-size parameter. The average F1 score for the TDRT variant is over 95%. Recall that we studied the effect of different time windows on the performance of TDRT. Choosing an appropriate time window is computationally intensive, so we propose a variant of TDRT that provides a unified approach that does not require much computation. In this experiment, we investigate the effectiveness of the TDRT variant. The average F1 score improved by 5.6% relative to methods that did not use attentional learning. Using the SWaT, WADI, and BATADAL datasets, we investigate the effect of attentional learning. In this paper, we make the following two key contributions: First, we propose TDRT, an anomaly detection method for multivariate time series, which simultaneously models the order information of multivariate time series and the relationships between the time series dimensions. By extracting spatiotemporal dependencies in multivariate time series of Industrial Control Networks, TDRT can accurately detect anomalies from multivariate time series. 
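The window-level evaluation rule above (a window counts as anomalous once it contains at least one abnormal point) and the precision/recall/F1 metrics can be sketched in a few lines of plain Python; the helper names are hypothetical, not from the paper:

```python
def window_labels(point_labels, win, stride):
    """Label each sliding window anomalous (1) if it contains any abnormal point."""
    n = len(point_labels)
    return [int(any(point_labels[s:s + win]))
            for s in range(0, n - win + 1, stride)]

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 over binary window labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = window_labels([0, 0, 1, 1, 0, 0, 0, 0], win=4, stride=2)  # [1, 1, 0]
print(precision_recall_f1(truth, [1, 0, 0]))  # (1.0, 0.5, 0.6666666666666666)
```

Detecting one of the two anomalous windows and raising no false alarms gives precision 1.0, recall 0.5, and F1 of about 0.67, which illustrates why the paper reports F1 alongside recall.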
In comprehensive experiments on three high-dimensional datasets, TDRT outperforms state-of-the-art multivariate time series anomaly detection methods. Our results show that the average F1 score of TDRT is over 98%. Second, we propose a method to automatically select the temporal window size called the TDRT variant. In comprehensive experiments on three high-dimensional datasets, the TDRT variant provides significant performance advantages over state-of-the-art multivariate time series anomaly detection methods. Our results show that the average F1 score of the TDRT variant is over 95%.A limitation of this study is that the application scenarios of the multivariate time series used in the experiments are relatively homogeneous. In the future, we will conduct further research using datasets from various domains, such as natural gas transportation and the smart grid."}
+{"text": "Studying temporal gene expression shifts during disease progression provides important insights into the biological mechanisms that distinguish adaptive and maladaptive responses. Existing tools for the analysis of time course transcriptomic data are not designed to optimally identify distinct temporal patterns when analyzing dynamic differentially expressed genes (DDEGs). Moreover, there are not enough methods to assess and visualize the temporal progression of biological pathways mapped from time course transcriptomic data sets. In this study, we developed an open-source R package, TrendCatcher, which applies the smoothing spline ANOVA model and a break point searching strategy to identify and visualize distinct dynamic transcriptional gene signatures and biological processes from longitudinal data sets. We used TrendCatcher to perform a systematic temporal analysis of COVID-19 peripheral blood transcriptomes, including bulk and single-cell RNA-Seq time course data. TrendCatcher uncovered the early and persistent activation of neutrophils and coagulation pathways, as well as impaired type I IFN (IFN-I) signaling in circulating cells, as a hallmark of patients who progressed to severe COVID-19, whereas no such patterns were identified in individuals receiving SARS-CoV-2 vaccinations or patients with mild COVID-19. These results underscore the importance of systematic temporal analysis to identify early biomarkers and possible pathogenic therapeutic targets. Time course transcriptomic profiling has been widely used to study and model dynamic biological processes in cells by capturing gene expression at sequential time points. Currently, there are 2 predominant strategies for the analysis of sequential transcriptomic data sets. One strategy treats the sampling time points as categorical variables and is based on generalized linear models (GLMs). 
GLM-based packages include DESeq2, edgeR, and limma. We found that this is challenging in a complex disease, such as COVID-19, caused by SARS-CoV-2 infection. In this study, we developed TrendCatcher, an open-source R package (https://github.com/jaleesr/TrendCatcher) tailored for longitudinal bulk RNA-Seq and scRNA-Seq analysis. TrendCatcher uses a framework that combines the smooth spline ANOVA model and a break point searching strategy, which identifies inflection points when gene expression trends reverse. We show that TrendCatcher outperformed commonly used methods for longitudinal RNA-Seq analysis when using simulated time course data for benchmarking. We also analyzed bulk RNA-Seq and scRNA-Seq gene expression profiles of peripheral blood cells in COVID-19 patients at various disease time points. TrendCatcher allowed us to identify and visualize dynamic gene expression signature profiles in peripheral blood that were associated with poor disease outcomes during the early phases of disease and could, thus, serve as potentially novel mechanistic targets, as well as early biomarkers for patient prognostication. In the benchmarking, although the other 3 methods achieved slightly higher accuracy than TrendCatcher in monotonic trajectories, their AUCs dropped markedly when more complicated trajectories were embedded. DESeq2 only achieved AUC values of 0.49 to 0.62 for both biphasic and multimodal trajectories. The DESeq2Spline approach using a spline curve fitting model also dropped to an AUC of approximately 0.7 once multiphasic trajectories were introduced. 
These results suggest that existing approaches for longitudinal or time course analyses are well suited for monotonic trajectories (continuously up or continuously down) but that TrendCatcher may be more broadly applicable because it identifies monotonic, biphasic, and multiphasic shifts in gene expression, which are especially important in complex pathological settings where initial biological responses are followed by counterregulatory adaptive or maladaptive responses. We next evaluated the prediction performance for each type of temporal trajectory. To define the key dynamic gene signatures associated with SARS-CoV-2 infection in peripheral blood, we first analyzed the global transcriptomics profiles from a nonhuman primate data set in which animals were infected with SARS-CoV-2. To systematically assess and visualize the dynamic programming of the top biological pathways associated with SARS-CoV-2 infection, TrendCatcher generated a TimeHeatmap. The TimeHeatmap function of TrendCatcher visualizes shifts in pathways by displaying the mean fold change of individual DDEGs in a given pathway at defined time points, while also depicting the number of dynamic genes and the percentage of dynamic genes within that pathway. During initial infection (days 0\u20132), pathways related to innate immune response and IFN pathways were highly upregulated. Examples are defense response to virus and regulation of innate immune response, which increased with average log2 fold changes (log2FC) of 2.57 and 1.76 within the first day. Mucosal immune response, antimicrobial humoral response, and killing of cells of other organisms were activated during the later stage of infection (days 4\u20137), with an average log2FC around 2. Mitochondrial ATP synthesis\u2013coupled electron transport and protein targeting to ER, on the other hand, were gradually downregulated until day 14. Dynamic gene signatures from the IFN-I signaling pathway and mitochondrial ATP synthesis\u2013coupled electron transport were shown using traditional heatmaps. Next, we analyzed the longitudinal gene expression profiles of peripheral blood mononuclear cells (PBMCs) obtained from patients diagnosed with a SARS-CoV-2 infection who were admitted to the hospital but predominantly had uncomplicated disease progression, with 4 of 5 patients showing only mild symptoms. TrendCatcher identified DDEGs in CD4+ T cells, 645 DDEGs in CD8+ T cells, 423 DDEGs in MAIT cells, 1161 DDEGs in naive T cells, and 667 DDEGs in NK cells. The TimeHeatmap of plasma B cells visualized the most dynamic biological pathways. Importantly, this temporal analysis of scRNA-Seq data shows how rapidly plasma B cells ramp up the upregulation of Fc-\u03b3 receptor signaling and immunoglobulin synthesis as early as stage 1 (which corresponds to day 1 of symptom onset). We set the dynamic P value threshold to 0.05 to compare the DDEGs identified by different methods, as shown in the Venn diagram. We then assessed whether a systematic temporal analysis of gene expression trends could be used to distinguish disease severity and prognosis of COVID-19 patients. We thus applied TrendCatcher to a time course human whole-blood bulk RNA-Seq data set covering patients of varying severity. Neutrophil activation gene expression increased by more than 1 log2 unit (approximately a 2.5-fold increase in mean gene expression) within week 1, increased continuously until week 4, and only very gradually decreased in surviving patients by week 7. However, the summation of the averaged log2FC (Avg_log2FC) from the TimeHeatmap was larger than 0, which indicates that the neutrophil activation may not have returned fully to baseline levels by 7 weeks. We next applied LOESS smooth curve fitting to all the neutrophil activation DDEGs identified from the 3 severity groups, and we used a permutation test approach to quantify when and how the gene signatures differed significantly between groups. 
LOESS fitting confirmed that severe COVID-19 patients showed markedly high neutrophil activation at the early stage of infection (weeks 1 and 2) and that this activation remained high even after 7 weeks. DDEGs from moderate COVID-19 were highly enriched in IFN-I signaling, negative regulation of viral processes, and defense to viruses, whereas these pathways were not significantly enriched in severe COVID-19. Severe COVID-19 patients were highly dynamic in MAPK cascade, NF-\u03baB signaling, T cell receptor signaling, and positive regulation of cytokine production for both NK cells and CD4+ T cells. For monocytes and DC, no uniquely enriched dynamic biological processes were observed in severe COVID-19 patients versus moderate COVID-19 patients. Next, to define the cell type\u2013specific dynamic gene signatures and biological processes, we used TrendCatcher to analyze a human PBMC scRNA-Seq time course data set in which patients were categorized as having either moderate or severe COVID-19. Importantly, IFN-I signaling was activated in CD4+ T cells in moderate COVID-19 patients but not in severe COVID-19 patients. Patients with moderate COVID-19 showed activation of IFN-I signaling pathways, whereas patients with severe COVID-19 had a blunted activation of IFN-I signaling. This is also shown in the cell type\u2013specific TimeHeatmap, where the moderate group reached a markedly higher average log2FC compared with 0.87 log2FC in the severe group. In CD8+ T cells, only the moderate COVID-19 group was observed to have a strong IFN-I response within the first week. On the other hand, CD8+ T cells in patients who would go on to develop severe COVID-19 showed upregulation of cell proliferation and cell differentiation genes instead. TrendCatcher offers advantages compared with other commonly used methods for the analysis of temporal gene expression data sets when analyzing 3 or more time points. 
When comparing our results with other methods using the whole-blood RNA-Seq data, we found that DESeq2 identified the fewest DDEGs in all patient groups, likely caused by a relative loss of statistical testing power when treating time as a factorial variable. ImpulseDE2 identified the largest number of DDEGs, but around 50% of these DDEGs showed only an approximately 0.5 log2FC change. Despite the extraordinary success of rapidly developed and deployed mRNA vaccines against SARS-CoV-2, the ongoing COVID-19 pandemic remains a major global health problem, in part due to the emergence of newer highly contagious SARS-CoV-2 variants of concern, as well as vaccine hesitancy. This requires the identification of novel mechanistic targets, especially in vulnerable patients who have a high risk of developing severe COVID-19. One of the key pathogenic factors driving COVID-19 severity is the profound immune dysregulation observed in patients with severe COVID-19 that can result in respiratory failure. In this study, we applied TrendCatcher to systematically analyze sequential blood samples from either nonhuman primates infected with SARS-CoV-2, human patients with COVID-19 of varying severity, or human subjects who received a SARS-CoV-2 mRNA vaccine. TrendCatcher identified dynamic gene expression and biological pathway signatures for (a) SARS-CoV-2 infection progression over time, (b) severe COVID-19 versus moderate or mild COVID-19, and (c) COVID-19 vaccine recipients versus control. TrendCatcher established response to virus, humoral immune response, and IFN-I signaling pathway activation across peripheral blood cell types in mild, moderate, and severe COVID-19. However, we found temporal patterns of gene expression shifts that were unique to the severe COVID-19 patients. Severe COVID-19 was associated with marked activation of neutrophils and upregulation of coagulation pathways, as well as with blunted IFN-I signaling as early as week 1 in the peripheral blood of patients. 
Importantly, severe COVID-19 was associated with persistent activation of neutrophils and genes regulating coagulation for as long as 6 weeks, underscoring the importance of the temporal analysis by TrendCatcher, which identified hallmarks of severe COVID-19 in peripheral blood samples. It is important to note that, while peripheral blood samples are convenient to obtain for longitudinal studies because they only involve minimally invasive blood draws, they give a limited picture of the gene expression and activation states because gene expression patterns of immune cells in a tissue may be different from their counterparts in the circulating blood. Our findings complement recent studies that implicate neutrophils in the excessive inflammation, coagulopathy, immunothrombosis, and organ damage that characterize severe COVID-19. Our observation of early upregulation of coagulation genes in the whole-blood transcriptomes of severe COVID-19 patients also points to another feature that is associated with severe COVID-19: thrombotic and embolic complications such as strokes. Severe COVID-19 was also marked by impaired IFN-I signaling, which is essential for antiviral immunity, compared with the moderate COVID-19 group. This impaired IFN-I signaling was identified in innate cells, such as NK cells and monocytes, and in adaptive cells, such as B cells and CD8+ T cells. In our COVID-19 vaccination single-cell PBMC temporal data analysis, we also identified prominent metabolic shifts in NK cells. Cellular metabolism is recognized as an important factor that can determine the fate and function of lymphocytes. In conclusion, we have developed the potentially novel TrendCatcher R package platform designed for time course RNA-Seq data analysis, which identifies and visualizes distinct dynamic transcriptional programs. 
When applied to real whole-blood bulk RNA-Seq time course data sets from COVID-19 patients, we observed that patients with severe COVID-19 showed gene expression profiles consistent with profound neutrophil activation and coagulopathy early during the progression of the disease (starting from the first week of symptom onset). Even though the application of TrendCatcher in this manuscript focused on COVID-19 data sets, it has been designed to be used for the analysis of a broad range of dynamic biological processes and diseases. TrendCatcher takes as input a raw count matrix of dimension m \u00d7 n, where m denotes the number of genes and n denotes the number of samples, and a user-defined baseline time variable T, such as \u201c0 hour\u201d. Since samples may have different sequencing depths and batch effects, TrendCatcher integrates with limma to account for these effects, followed by gene-wise dynamic P value calculation and break point screening with gene-wise dynamic pattern assignment. Mathematical details will be expanded in the following sections. For the output of TrendCatcher, there are mainly 2 components: a master table and a set of functions for versatile visualization purposes. The master table contains all the dynamic details of each single gene, including its dynamic P value, its break point location time, and its dynamic trajectory pattern. In addition to the master table, TrendCatcher produces 5 main types of visualizations: (a) a figure showing the observed counts and fitted splines of each gene, (b) gene trajectories from each trajectory pattern group, (c) a hierarchical pie chart that represents trajectory pattern composition, (d) a TimeHeatmap to infer trajectory dynamics of top dynamic biological pathways, and (e) a 2-sided bar plot to show the top most positively and negatively changed (averaged accumulative log2FC) biological pathways. The main components of the TrendCatcher framework are shown in the accompanying figure. 
We assumed that the observed RNA-Seq read count of gene i at the baseline time point follows a negative binomial (NB) distribution:
X_{i,t_baseline} ~ NB(\u03bc_{i,t_baseline}, \u03c6_i)\u2003(Equation 1)
where \u03bc_{i,t_baseline} is the mean count of gene i (i refers to the index of the gene) at the baseline time point, and \u03c6_i is the dispersion factor. First, the dispersion factor \u03c6_i was preestimated as a constant hyperparameter for each gene with DESeq2. The variance at time t is
\u03c3_i(t)^2 = \u03bc_i(t) + \u03c6_i \u00d7 \u03bc_i(t)^2\u2003(Equation 2)
Then, \u03bc_{i,t_baseline} was estimated using maximum likelihood (Equation 3). Based on the NB distribution and the estimated mean count for the baseline time, we constructed the 90% CI (X_{i,t_baseline,0.05}, X_{i,t_baseline,0.95}) as the baseline count fluctuation interval.
To model the time-dependent gene expression value, we applied a smoothing spline ANOVA model. The nonbaseline count X_{i,t} (t \u2260 t_baseline) is assumed to follow the NB distribution
X_{i,t} (t \u2260 t_baseline) ~ NB(\u03b1, p(t))\u2003(Equation 5)
where the positive integer \u03b1 represents the number of failures before the \u03b1th success in a sequence of Bernoulli trials, and p(t) \u2208 (0, 1) represents the success probability. The log-likelihood given the time course observed counts x = {x_{i,t} (t \u2260 t_baseline)}, i = 1, ..., n; t = 1, ..., T, is calculated as in Equation 6. Taking the logit link to model the time effect, we define the logit link \u03b7 (Equation 7). To allow for flexibility in the estimation of \u03b7, and to find the best trade-off between goodness of fit and the smoothness of the spline curve, a soft roughness constraint J(\u03b7) is added to the minus log-likelihood, with the smoothing parameter \u03bb > 0:
\u2013L + \u03bb J(\u03b7)\u2003(Equation 8)
The solution to the optimization of Equation 8 leads to a smooth fit to the reads from samples across different nonbaseline time points. 
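As an illustration of the baseline fluctuation interval described above, the sketch below computes NB quantiles by accumulating the probability mass function in pure Python. The parameter values are invented, the dispersion is rounded to an integer-valued r = 1/\u03c6 for simplicity, and this is not TrendCatcher's actual code (which relies on DESeq2's dispersion estimates):

```python
import math

def nb_quantile_interval(mu, phi, lo=0.05, hi=0.95):
    """Baseline fluctuation interval of NB(mu, phi) with variance
    mu + phi*mu^2, assuming integer r = 1/phi for simplicity."""
    r = round(1 / phi)          # number-of-failures parameter
    p = r / (r + mu)            # success probability
    cdf, k = 0.0, 0
    q_lo = q_hi = None
    while q_hi is None:
        pmf = math.comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)
        cdf += pmf
        if q_lo is None and cdf >= lo:
            q_lo = k
        if cdf >= hi:
            q_hi = k
        k += 1
    return q_lo, q_hi

# Hypothetical gene: baseline mean 100 counts, dispersion 0.5.
lo_q, hi_q = nb_quantile_interval(mu=100.0, phi=0.5)
print(lo_q, hi_q)
```

A nonbaseline fitted mean falling outside this (5th, 95th percentile) interval would then receive a small dynamic time P value in the test of Equations 10A/10B.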
The estimated mean \u03bc_{i,t} (t \u2260 t_baseline) can be obtained from Equation 9. To calculate a gene\u2019s nonbaseline dynamic signal significance, each gene\u2019s nonbaseline estimated mean count \u03bc_{i,t} (t \u2260 t_baseline) was tested against the baseline fluctuation interval. Based on Equation 10A and Equation 10B, for each gene at each single nonbaseline time point, a dynamic time P value was calculated: if \u03bc_{i,t} \u2265 \u03bc_{i,t_baseline}, P = P(X \u2265 \u03bc_{i,t} | \u03bc_{i,t_baseline}, \u03c6_i) (Equation 10A); if \u03bc_{i,t} \u2264 \u03bc_{i,t_baseline}, P = P(X \u2264 \u03bc_{i,t} | \u03bc_{i,t_baseline}, \u03c6_i) (Equation 10B). Then, we applied Fisher\u2019s combined probability test to calculate a gene-wise dynamic P value (Equation 11). Next, we connected all the significant dynamic signal time points, using a P value threshold of less than 0.05, and applied a break point searching strategy to capture the gene expression change trend. Break points are defined by Equation 12A and Equation 12B: if \u03bc_{i,t} > \u03bc_{i,t_next} AND \u03bc_{i,t} > \u03bc_{i,t_previous}, then t is a break point of type I (Equation 12A); if \u03bc_{i,t} < \u03bc_{i,t_next} AND \u03bc_{i,t} < \u03bc_{i,t_previous}, then t is a break point of type II (Equation 12B). There are 2 types of break points: type I means a gene\u2019s expression level is upregulated and then downregulated, and type II means a gene\u2019s expression level is downregulated and then upregulated. By screening along the break points, a master pattern and a subpattern were assigned to each gene. With time points t_j \u2208 {1, \u2026, T}, each time interval is denoted as (t_j, t_{j+1}). We found N_up upregulated genes and N_down downregulated genes within each time window; then, Fisher\u2019s exact test was performed to obtain the GO term enrichment within the corresponding time interval for upregulated genes and downregulated genes separately. Users can select the top most enriched biological pathways for each time interval (the default is the top 10 most enriched pathways).
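The break point classification of Equations 12A and 12B amounts to finding local maxima (type I) and local minima (type II) of the fitted mean trajectory. A minimal sketch, with the function name `break_points` chosen for illustration:

```python
def break_points(mu):
    """Classify interior time points of a fitted mean trajectory `mu`
    (a list of estimated means ordered in time).
    Type "I": up then down (local maximum, Equation 12A).
    Type "II": down then up (local minimum, Equation 12B)."""
    out = []
    for t in range(1, len(mu) - 1):
        if mu[t] > mu[t - 1] and mu[t] > mu[t + 1]:
            out.append((t, "I"))
        elif mu[t] < mu[t - 1] and mu[t] < mu[t + 1]:
            out.append((t, "II"))
    return out
```

Scanning the resulting list of (time index, type) pairs is what allows a trajectory pattern such as "up-down-up" to be assigned to each gene.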
Then, for each selected GO term within the corresponding time window, we calculated the average log2FC (Avg_log2FC) of all the DDEGs from this GO term. A series of Avg_log2FC values over time characterizes the trajectory dynamics of the corresponding biological pathway; this is defined as biological pathway trajectory inference in this study. The summation of the series of Avg_log2FC values estimates the averaged accumulative log2FC (GO_mean_logFC) for the corresponding GO term. TrendCatcher ranks biological pathways based on their dynamic magnitude inferred from the GO_mean_logFC value. Users are free to choose the top most positively and negatively changed (averaged accumulative log2FC) biological pathways. To mimic a real biological RNA-Seq data set, we only allowed approximately 10% of the genes to be dynamic responsive genes. In this study, we embedded 5 different types of trajectories into the temporal RNA-Seq simulated data sets: nondynamic trajectories (~90%), monotonic trajectories (~2.5%), biphasic trajectories (~2.5%), and multimodal trajectories (~5%), the last including 2\u2013break point and 3\u2013break point trajectories. Each type of trajectory was constructed by adding NB-distributed noise to the embedded trajectory counts. For example, for the monotonic trajectory, we defined the RNA expression level change between the first and last time points to be 0.5\u20132 log2 fold changes. To construct \u201cpseudo-bulk\u201d RNA libraries from scRNA-Seq data sets, all cells for each cell type in a given sample were computationally \u201cpooled\u201d by summing all counts for a given gene. Since pseudobulk libraries composed of few cells are not likely to be modeled properly, we removed cell types containing less than 1000 cells in this study. Lowly expressed genes were removed for each cell type as well, using the filterByExpr function from the edgeR R package.
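The Avg_log2FC series and its summation (GO_mean_logFC) can be sketched in a few lines of standard-library Python; `go_mean_logfc` is an illustrative name, not the package's API:

```python
def go_mean_logfc(logfc_by_window):
    """logfc_by_window: for each time window, the list of log2FC values of
    the DDEGs belonging to one GO term in that window.
    Returns (per-window Avg_log2FC series, their sum = GO_mean_logFC)."""
    avg_series = [sum(w) / len(w) for w in logfc_by_window]
    return avg_series, sum(avg_series)
```

Ranking GO terms by the returned sum reproduces, in spirit, how pathways are ordered by dynamic magnitude before plotting the 2-sided bar plot.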
Gene set enrichment analysis (GSEA) in this study was performed using the clusterProfiler R package. The permutation test module in TrendCatcher allows users to assess between-group differences of dynamic gene expression pathways in a time interval\u2013dependent manner. After fitting gene expression longitudinal profiles from each severity group with a LOESS smoothing spline, we binned the time variable into 100 time intervals and calculated the observed area ratio between the 2 curves within each time interval. Next, we shuffled the severity group labels on the gene expression longitudinal profiles and repeated the previous step to calculate the shuffled area ratio for each time interval. We iterated the shuffling step 1000 times. In this way, for each time interval, we calculated a P value using the empirical distribution from the permutation test. The R package source code of TrendCatcher is available on GitHub (https://github.com/jaleesr/TrendCatcher). The statistical tests for each computational approach are described in depth above. Generally, P values less than 0.05 were considered significant, and when multiple comparisons were performed, we applied the Holm-Bonferroni method for P value adjustment. All the data sets used for the analyses are publicly available. XW and JR designed the study, XW performed the analyses and wrote the first draft of the manuscript, JR revised the initial draft of the manuscript, and MAS and YD provided critical input for the analyses, visualizations, and revisions of the manuscript."}
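The label-shuffling scheme above can be sketched with the standard library. This is a simplified stand-in: instead of the area ratio between LOESS curves, the statistic here is the absolute difference of per-bin group means, and `perm_pvalues` is a hypothetical name.

```python
import random

def perm_pvalues(profiles_a, profiles_b, n_perm=1000, seed=0):
    """Per-bin empirical P values for the difference between two groups of
    smoothed profiles (each profile: one value per time bin). Statistic:
    absolute difference of group means per bin (a simplified stand-in for
    the per-interval area ratio between two fitted curves)."""
    rng = random.Random(seed)
    n_bins = len(profiles_a[0])

    def stat(a, b):
        return [abs(sum(p[i] for p in a) / len(a) -
                    sum(p[i] for p in b) / len(b)) for i in range(n_bins)]

    observed = stat(profiles_a, profiles_b)
    pooled = profiles_a + profiles_b
    exceed = [0] * n_bins
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # shuffle the group labels
        sa, sb = pooled[:len(profiles_a)], pooled[len(profiles_a):]
        s = stat(sa, sb)
        for i in range(n_bins):
            if s[i] >= observed[i]:
                exceed[i] += 1
    # add-one correction so the empirical P value is never exactly zero
    return [(e + 1) / (n_perm + 1) for e in exceed]
```

With well-separated groups the per-bin P values become small; with identical groups the observed statistic is zero and every shuffle matches it, giving P near 1.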
+{"text": "Biological data suffers from noise that is inherent in the measurements. This is particularly true for time-series gene expression measurements. Nevertheless, in order to explore cellular dynamics, scientists employ such noisy measurements in predictive and clustering tools. However, noise can not only obscure the genes\u2019 temporal patterns; applying predictive and clustering tools to noisy data may also yield inconsistent, and potentially incorrect, results. To reduce the noise of short-term (< 48 h) time-series expression data, we relied on the three basic temporal patterns of gene expression: waves, impulses and sustained responses. We constrained the estimation of the true signals to these patterns by estimating the parameters of first and second-order Fourier functions and using the nonlinear least-squares trust-region optimization technique. Our approach lowered the noise in at least 85% of synthetic time-series expression data, significantly more than the spline method. Our constrained Fourier de-noising method helps to cluster noisy gene expression and interpret dynamic gene networks more accurately. The benefit of noise reduction is large and can constitute the difference between a successful application and a failing one. Any biological data we collect is corrupted to some extent by noise. Most scientists address this by using a variety of methods, all of which aim to reduce the noise in the signal and to increase the useful information stored in it. In molecular biology, reducing the noise of gene expression data requires the removal of some undesired elements that degrade the useful information stored in the measurements. Time-series expression data has become important to the study of cellular network responses because the data contains both the gene expression levels and their timings. Several authors in the past decade proposed solutions to reduce the noise of time-series data.
Early references first addressed the problem of noise in time-series expression data. More recently, different methods to reconstruct the original shape of gene trajectories were presented. For instance, one group developed lmms, which can be used for both microarray and RNA-seq gene expression data. Another bioinformatic group developed ImpulseDE, later extended as ImpulseDE2. A comparison of lmms and ImpulseDE2 using synthetic and real RNA-seq data found that ImpulseDE2 was overall the best performing tool. The short-term temporal pattern of gene expression over a time scale of several hours appears to follow a few basic shapes, which we can exploit to reduce its noise: waves, impulses and sustained responses. It was shown in several previous works that there are clear patterns of gene expression, in response-to-stimulus experiments, developmental studies and cell cycle experiments alike; Bar-Joseph et al. demonstrate such patterns as well. We assume that the noise of time-series gene expression data arises from measurement error. It was shown that the variance of noisy RNA-seq data increases with the gene expression level in a negative binomial manner. Let the observed expression values of a gene at the measured time points form the data vector y, and let x be the vector of parameters of the Fourier function. To fit the data points of each gene in an optimal manner, we use nonlinear least squares to estimate the parameters: we seek the x that best fits the data in the least squares sense, where r(x) are the residuals and f(x) is the sum of their squares. The trust-region algorithm iteratively finds a step such that f(x) decreases at each iteration k. In reality, our time-series expression data contains measurement error, and we account for that by redefining the objective accordingly. To decide whether a candidate step is accepted, we compare the actual and the predicted reduction of f(x). For every gene in the dataset, this conceptual algorithm is repeated. Several other improvements and strategies of the trust region method are discussed in the literature.
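For a fixed fundamental frequency, the constrained Fourier model is linear in its coefficients, so ordinary least squares already recovers them; the paper's trust-region method additionally optimises the frequency itself. A standard-library sketch under that fixed-frequency assumption (`solve` and `fit_fourier` are illustrative names):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_fourier(t, y, omega, order=2):
    """Least-squares fit of y ~ a0 + sum_j (a_j cos(j*omega*t) + b_j sin(j*omega*t))
    for a fixed fundamental frequency omega, via the normal equations."""
    def basis(ti):
        row = [1.0]
        for j in range(1, order + 1):
            row += [math.cos(j * omega * ti), math.sin(j * omega * ti)]
        return row
    X = [basis(ti) for ti in t]
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    coef = solve(XtX, Xty)
    fitted = [sum(c * v for c, v in zip(coef, basis(ti))) for ti in t]
    return coef, fitted
```

A one-harmonic fit (order=1) covers waves and sustained responses; the second harmonic adds the flexibility needed for impulse-like shapes.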
Unlike our approach, ImpulseDE first groups genes into a limited number of clusters. Afterwards, an impulse model is fit to the mean expression profile of each cluster, which is then used as a starting point for fitting the impulse model to each single gene separately. To evaluate our algorithm, we first compared the performance of the constrained Fourier estimation to the performance of the common spline smoothing method on synthetic microarray data. For that, we analyzed the performance on 100 signals of increasing frequency using the sum of squared error (SSE) and the root mean squared error (RMSE). Our constrained Fourier estimation does not rely on SSE; it restricts the signals to certain frequencies and exits by the stopping condition of the optimization. For the comparison, we calculated the discrepancy between the estimated and the true curves. For the evaluation of post-processing, we used (1) the correlation coefficient, to evaluate whether genes can be grouped together, and (2) the angle between the true and reconstructed signal subspaces. Each experimental replicate provided a measurement matrix. We created synthetic data to evaluate the performance of post-processing such as clustering and network component analysis. In order to evaluate clustering performance, we first generated 6 non-correlated signals. For network component analysis (NCA), the reconstruction of the regulators (a matrix P with m regulators and l time samples) estimated from the three synthetic replicates should be identical up to a scaling factor. To compare the performance of de-noising (smoothing) methods, we used (1) SSE (see below) and (2) the mean correlation of the approximated signal to the true signal. Because k-means selects k random initial cluster centroid positions from all the signals, we ran Monte Carlo simulations. Real time-series expression data, measured at 10 time points over a period of 0\u201324 h, was downloaded from the GEO database with accession number GSE6085.
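The two discrepancy measures named above are standard; a minimal stdlib definition, with function names chosen for illustration:

```python
import math

def sse(truth, estimate):
    """Sum of squared errors between a true and an estimated signal."""
    return sum((a - b) ** 2 for a, b in zip(truth, estimate))

def rmse(truth, estimate):
    """Root mean squared error: sqrt of the mean squared discrepancy."""
    return math.sqrt(sse(truth, estimate) / len(truth))
```

Lower values of either measure mean the de-noised curve sits closer to the underlying true signal.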
To compare between the Fourier and the spline approximations of the true signal, we generated a sequence of 100 noisy signals of increasing frequency. Fourier approximations with one or two harmonics accurately reconstructed the signals at low frequencies. We then used the synthetic data to compare the performance of clustering and network component analysis. We tested and analyzed the accuracy of k-means clustering of raw data against de-noised data. The first analysis consisted of six selected, non-correlated signals; we compared (by SSE and correlation) k-means clustering of raw data with clustering of Fourier de-noised genes across noise variance levels. We tested two common NCA algorithms, including ROBNCA. We found that the NCA algorithms consistently predicted similar TF signals from noisy replicates of the data when the data was first treated by constrained Fourier de-noising. As a first real-data case, we examined time-series data of Listeria monocytogenes exposed to high pressure stress; de-noising decreased the maximum standard deviation of the counts at the early time points from 90 counts to no more than 15 counts, both for the genes lexA and recA. The removal of noise, especially in early time points, was confirmed when comparing the variance of the untreated and the de-noised data. Secondly, we evaluated the de-noising effect on 3 replicates of real mice T-cell time-series microarray data, and estimated the true signal with a two-harmonics Fourier function. We evaluated the k-means clustering performance of the de-noised real data by testing increasing cluster numbers from 10 to 50.
The k-means clustering of de-noised data produced more accurate clusters. Constrained Fourier accurately estimates cellular responses to stimuli of the three temporal shapes we examined, and not only periodic (cyclic) signals as was suggested previously. Most importantly, our results imply that analysis by network component analysis (NCA) and k-means clustering of untreated, noisy data does not produce reliable predictions. Similar results were shown previously for PCA, and our Monte Carlo simulations support this conclusion. We showed that our Fourier approximation is sensitive to the sampling frequency. Because gene expression measurements demand resources, there is often a trade-off between exploring temporal behavior (many time points) and improving the accuracy at each time point. The limitations of real data analysis stem mostly from the unknown noise model, which is often difficult to predict. Unlike synthetic data, real measurements often contain colored noise that emerges, among other things, from correlations between sample acquisitions, biases introduced during sample preparation, and, most importantly, the effect of time on the samples and the transcriptome. Stochastic fluctuations in gene expression are often assumed to be Gaussian white noise in nature, but the zero correlation time for white noise assumes an infinite relaxation time. Lastly, an extension of the algorithm (under development) clusters the genes using functional PCA. The algorithm and results presented here can provide a robust technique to de-noise time-series gene expression data and have the potential to improve gene expression post-processing methods such as PCA and clustering. This increases our chance to discover important network features from the large time-series data generated in the last decade. Additional file 1: Supplementary Tables and Figures."}
+{"text": "Time series data from environmental monitoring stations are often analysed with machine learning methods on an individual basis, however recent advances in the machine learning field point to the advantages of incorporating multiple related time series from the same monitoring network within a \u2018global\u2019 model. This approach provides the opportunity for larger training data sets, allows information to be shared across the network, leading to greater generalisability, and can overcome issues encountered in the individual time series, such as small datasets or missing data. We present a case study involving the analysis of 165 time series from groundwater monitoring wells in the Namoi region of Australia. Analyses of the multiple time series using a variety of different aggregations are compared and contrasted, using variations of the multilayer perceptron (MLP), self-organizing map (SOM), long short-term memory (LSTM), and a recently developed LSTM extension (DeepAR) that incorporates autoregressive terms and handles multiple time series. The benefits, in terms of prediction performance, of these various approaches are investigated, and challenges such as differing measurement frequencies and variations in temporal patterns between the time series are discussed. We conclude with some discussion regarding recommendations and opportunities associated with using networks of environmental data to help inform future resource-related decision making. Groundwater is an essential source of freshwater for much of the world\u2019s population. In many places, however, available groundwater resources are under stress due to increasing anthropogenic influences and demand, as well as a changing climate.
The appropriate monitoring and modelling of these groundwater systems are critical to enabling management decisions that will lead to a sustainable future. Classical groundwater modelling approaches use mathematical models consisting of complex systems of differential equations to represent the physical processes known to contribute to groundwater levels. However, these models require substantial assumptions and are typically subject to considerable uncertainty. In particular, accurately characterizing the hydrogeological properties of an area with a physically based model requires extensive expert hydrogeological knowledge and many assumptions about the nature of underground structures and the mechanisms involved in groundwater recharge. Groundwater systems are often complex, with water levels depending on many static and time-varying influences, including long- and short-term climate conditions, vegetation, land use, soil permeability, hydraulic conductivities, subsurface geological structures, aquifer size and connectivity, extraction patterns, recharge from local rivers and lakes, overland flooding, and irrigation activities. Gathering relevant information on each of these variables is usually difficult, time-consuming, and expensive. Building models for areas without access to this information requires the incorporation of many assumptions. Even in areas where subsurface information is available, the climate, soil, and vegetation characteristics continually evolve over time, contributing temporal changes to groundwater recharge mechanisms and rendering these systems even more difficult to understand and represent through explicitly defined mathematical relationships. In general, the more realistic a physically based model is, the more data and assumptions will be required.
To preclude this need for extensive knowledge about the subsurface systems, groundwater modellers are increasingly turning to data-driven approaches that use statistical modelling and machine learning to make predictions. In recent years, these data-driven approaches have gravitated towards the use of neural networks and deep learning algorithms. In a recent project sponsored by the NSW Department of Planning and Environment (DPIE) in New South Wales (NSW), Australia, we compared machine learning approaches based on neural network models with classic time series methods to model the level of groundwater in an aquifer in the Richmond River catchment, in the northeast part of the state. We found that these approaches performed well in that setting. In a follow-up project for DPIE, we set out to apply the same analyses to data from a different catchment area, the Namoi River in the north-west part of the state. However, the results were not nearly as good. It quickly became apparent that, in contrast to the Richmond catchment, the Namoi analysis was significantly more complex due to being an area of relatively low rainfall combined with high groundwater extractions to meet the demands of intensive agriculture and mining industries. It was clear that effective modelling needed to draw on a more extensive range of potential predictors, including data related to extractions, as well as river flow rates. Additionally, the data sets from each monitoring bore were relatively small in terms of the number of observations, and exhibited large proportions of missing data, impeding the application of individual neural network time series models to each well. We decided that it would make sense to work with a larger, richer dataset that combined data from multiple aquifers in the catchment, to share information across the region and not simply analyse on a bore-by-bore basis, as had been successful for the Richmond River catchment.
The analysis was complicated by high levels of spatiotemporal variability between the individual groundwater time series measured in the region. The present paper is a detailed case study describing our efforts to undertake this analysis based on multiple, inter-related time series corresponding to 165 groundwater monitoring bores in the Namoi River catchment. There is currently growing interest from the machine learning community around the use of global, rather than local, models for time series analysis. It has been shown that global models can outperform collections of local models on related time series, and related work has described similar multi-series approaches in a hydrological context. The main purpose of this paper is to present a case study, comparing and evaluating the use of local, global and partitioned time series models on the time series from the 165 wells from the Namoi River catchment. Located in the same region, the time series have been created by similar, though not exactly the same, data-generating processes, indicating that it may be useful to share some information across the system. Climatic conditions of rainfall and evapotranspiration are closely related for all wells, but subsurface conditions affecting recharge rates, such as soil permeability, aquifer depth, and hydraulic connectivity, will affect each time series differently. We explore whether these system differences indicate that a separate model should be made for each time series, or if combining the time series is beneficial for prediction performance. If so, should they be subsetted in a meaningful way or simply all combined together? The benefits of modelling this set of related time series with the possible approaches are investigated and quantified here. To the best of our knowledge, this study represents the first application of the DeepAR technology in the context of hydrogeology. We also provide a general discussion of the benefits and limitations of the application of various contemporary machine learning multiple-time series methods to real-world environmental monitoring data.
A key insight is that, while machine learning strategies such as DeepAR can do well in terms of short-term predictions, long-term predictions require models where the key drivers have been identified, measured well, and incorporated into the modelling process. In this paper, we focus on the analysis of 165 groundwater level time series from 70 different monitoring locations across the Namoi River catchment in northern NSW, located just to the west of the Great Dividing Range. Multiple monitoring bores, or wells, are often established at the same location, in order to allow access to aquifers at different depths, hence the greater number of time series than sites. The locations of the monitoring sites and environmental monitoring locations are shown in the study-area map. Groundwater level monitoring data have been provided by the water division of the NSW Department of Planning, Industry and Environment (DPIE). These data are also publicly available for download from a website maintained by WaterNSW, a state-owned water corporation. Rainfall and evapotranspiration measurements were obtained at a daily resolution from the SILO database. River discharge data were downloaded directly from the WaterNSW website at daily resolution for the following stations: Goangra, Mollee, downstream of Keepit Dam (station 419007), the Peel River at Carroll Gap (station 419006), and the Mooki River at Breeza (station 419027). Patterns of measured streamflow are complicated in this region, influenced by a combination of natural phenomena and human interventions. Dam outflow rates may be increased during periods of low rainfall due to intentional dam releases, leading to elevated downstream flow. Of course, streamflow also fluctuates naturally during periods of high and low rainfall. Extraction (groundwater pumping) data were provided by DPIE in the form of annual extracted volumes (ML/year) at locations specified by latitude and longitude. These records begin in 1967 for some of the wells, and in 1985 for others.
Due to a lack of recorded data, in this study we assume that no pumping occurred at wells with no records before 1985. In actuality, the limited reliability of the recorded extraction data may be a source of uncertainty that affects the fitted models, and we discuss this issue further in our discussion section. To integrate this annual lump-sum data with the daily environmental measurements, extractions have been set at a constant value throughout the year. As discussed presently, the inclusion of a day- or month-within-year variable in our models allows the flexibility for the neural networks to create interactions that help explain annual fluctuations. The study region forms part of the Murray\u2013Darling basin and is a highly productive agricultural area sustained by large volumes of groundwater extractions. As the subsurface characteristics of this area are complicated and not yet fully defined, classical, process-based hydrogeological modelling is difficult to apply. It is known that the groundwater system consists of multiple layers of aquifers, with unmeasured lateral through-flow and vertical leakage occurring between the shallow and deep aquifers. The surface water and groundwater systems are closely connected, meaning that groundwater depletions due to extractions may be masked by incoming surface water. Large amounts of water that are extracted from deep aquifers end up finding their way into shallow aquifers via irrigation. Surface waterways are substantially regulated by dams and weirs, providing a disconnection between rainfall events and groundwater recharge. This relationship is complicated further in that periods of low rainfall can lead to high extractions that in turn deplete groundwater, and yet low rainfall can also lead to dam releases that lead to recharge through the streambeds.
The amount of rainfall varies greatly between the east and west of the region, and the response of groundwater levels to precipitation can vary between areas even within the same aquifer, due to differences in soil permeability. Geological fault lines disconnect subsurface hydraulic characteristics. The complex hydrogeology of the region, along with the extensive monitoring network in place, makes it very natural to consider empirically based approaches. We begin by attempting to model the time series with \u2018local\u2019, or individual, neural network models using the basic multi-layer perceptron (MLP) architecture. These local models are produced by fitting a separate model to each time series, resulting in a number of models equal to the number of monitoring wells. This method is based on the assumption that each time series is the result of a different data-generating process, and separate models are warranted to represent the individual processes. Because large datasets are needed for training neural networks, this local modelling step is restricted to those wells with more than 2000 observations, which is the small subset of telemetered wells in the region. Next, for a potentially more efficient use of the data than could be obtained from the individual models, and to incorporate data from non-telemetered wells as well, we turn to an exploration of multiple time series models. Multiple time series models have the benefits of larger datasets for model training, and the sharing of pattern information between the time series from some or all of the wells. These \u2018multi-well\u2019 models can be either fit on the entire set of time series or on subsets of the time series (\u2018partitioned\u2019 models). The input data for the multi-well models consist of one large, stacked set of data from multiple wells, along with a new predictor variable that indicates from which well each measurement in the dataset was taken.
Due to the process of pooling the data, there is the potential for greater generalisation and a lower prospect of over-fitting with these multiple time series models than with local models. We investigate three versions of multiple time series models with these groundwater monitoring data: (1) the time series are first clustered, and then a recurrent neural network model specifically designed for time series is applied to each cluster; (2) a global, standard neural network (MLP) is created using all the time series; and (3) a global, recurrent neural network (LSTM-based) is created using all the time series with the DeepAR algorithm. The first approach is a partitioned model, where a model is produced to represent a subset of similar time series from the overall group, as determined by a clustering measure. The latter two approaches are global models, in which a single model is created to represent the data from all of the time series. These algorithms will each be described in detail below. For a time series {Y1, Y2, \u2026, YT}, where Yt represents the well water level measured at time t, we have a set of predictors (or features), denoted Xt, that can be used in constructing a prediction model. For our project, we consider dynamic regression models, which take the following form: Yt = B\u00b7Xt + et, where B is a vector of regression coefficients to be estimated and et is an error term that can incorporate additional autocorrelation structure, if needed. In practice, the general philosophy is that the autocorrelation observable in a set of time series data can often be explained by measuring the right features. The best and most reliable predictions, particularly for the longer term, will result from models that incorporate a rich and relevant set of additional features. All groundwater levels and predictors are standardised before use in the models. This allows for comparisons to be made between time series with groundwater level changes of differing magnitudes.
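The standardise-then-stack step for the multi-well input data can be sketched with the standard library; `scale_series` is an illustrative name, and the well identifier column is what lets a single global model distinguish measurements from different bores:

```python
def scale_series(series_by_well):
    """Z-score each well's groundwater levels independently, then stack
    the results into one long table of (well_id, time_index, scaled_level)
    rows, as used as input to a multi-well (global) model."""
    rows = []
    for well, values in series_by_well.items():
        m = sum(values) / len(values)
        sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
        rows += [(well, t, (v - m) / sd) for t, v in enumerate(values)]
    return rows
```

Per-well scaling removes the level and magnitude differences tied to each aquifer's unknown extent, so the model learns shared temporal patterns rather than site-specific offsets.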
As these differences in scale are related to the unknown horizontal extent of each aquifer, they are not relevant to this study of time series patterns. Scaling the predictors as well as the groundwater levels is an important step to ensure a relatively similar order of magnitude of the neural network weights within a model. The accuracy of the various methods is characterised in terms of prediction performance on data that were not seen during the training process. Evaluations of the models are given using the root mean square error (RMSE) metric. RMSE measures the square root of the average of the squared differences between the observed and predicted values, or more simply, the standard deviation of the prediction errors. All analyses in this study are performed with open-source software. The MLPs and LSTMs use the \u2018keras\u2019 package in R. Individual MLP models were created for all time series with more than 2000 daily observations, which are those corresponding to the telemetered wells. This is a subset of 11 time series, each having between 2589 and 4113 daily measurements. When converted to monthly time series, these 11 telemetered wells have between 230 and 389 data points each. Two MLPs are run for each: one using daily data and another using monthly data. Measured regional rainfall, evapotranspiration, surface flows, and extraction data are used to predict groundwater level at each time step. In order to incorporate time into these static models, two strategies are used. First, we include lagged predictor data, and secondly, day-of-study and month-of-year variables are added. The number of lagged inputs for the monthly models was determined by first running some exploratory generalized additive models (GAM) to identify the lag numbers that could explain the high degree of observed variability. For the daily models, the number of lags is capped at 30 to limit the number of predictor variables.
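The two strategies for incorporating time into the static MLP inputs (lagged predictors, plus day-of-study and month-of-year variables) can be sketched as follows. This is a simplified illustration with a single lagged predictor (rainfall) and a hypothetical function name:

```python
def lagged_rows(months, levels, rain, n_lags=30):
    """Build supervised-learning rows for a static model: each row pairs
    the groundwater level at step i with the previous n_lags rainfall
    values, a day-of-study index, and the calendar month (1-12).
    `months`, `levels` and `rain` are aligned, equally spaced sequences."""
    rows = []
    for i in range(n_lags, len(levels)):
        features = {
            "day_of_study": i,
            "month_of_year": months[i],
            **{f"rain_lag_{k}": rain[i - k] for k in range(1, n_lags + 1)},
        }
        rows.append((features, levels[i]))
    return rows
```

Because each row carries its own lags, uneven gaps between water-level observation dates are not a problem, matching the point made about the MLPs below.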
Uneven gaps between the observation dates for the water levels are not an issue with the MLPs, since each observation contains all the lagged predictors within it. Two-layer networks are used for the individual MLPs, with 32 and 16 nodes on the first and second layers, respectively. Overall, 70% of the data are used for training, with 15% each reserved for validation and prediction. The batch size is 32, meaning that the network weights are updated after each batch of 32 data points is seen by the model. The models are run for up to 200 epochs (an epoch is one run-through of the entire set of input data points) with early stopping if the prediction of the validation portion fails to improve over 20 consecutive epochs. Dropout is set at 15%, so that a random 15% of nodes are deactivated during each training epoch in order to prevent the overfitting of the network. For more information on the use of early stopping and dropout to prevent overfitting during neural network training, please see the neural network literature. The MLP prediction of daily water levels at one individual well is shown in the accompanying figure. The MLP results offer useful insight into the impact of climate and extractions on water levels. For example, the effects of extractions can be investigated by setting the extraction volumes predictor variable to zero in the prediction phase. The MLPs are run with daily data sets and monthly aggregated data sets, and a comparison of the daily and monthly results for one well is also shown. A significant drawback to using an MLP model with time series is that information about the order of sequential observations is not naturally incorporated. As recent information is often more valuable to predictions, this method is not ideal for time series analyses. Furthermore, the MLPs developed in our setting cannot be used to investigate the effects of predictors into the future, since they include an additional time variable, needed to explain the overall temporal trend.
By including these time variables, the fitted models are operating somewhat similarly to how a GAM operates when fitting a flexible function of time to the dataset. The temporal trend is fit specifically to the available data and may not generalise well for future forecasting. Lastly, the use of individual models means that no sharing of information can take place between the sites. The self-organising map algorithm is used to explore the relationships among the multiple time series and to group them into sets of smaller clusters for use in the partitioned multi-well analysis. Then, LSTMs are fit to a representative time series from each subset (or cluster) obtained from the SOMs. Each LSTM can then be used for the prediction of groundwater levels for any of the wells within its cluster. This work was published previously and is summarised here. For an initial exploration of the relationships between the multiple time series, the self-organising map algorithm is used to obtain: the common groundwater patterns in the region, clusters of time series with similar patterns, and visualisations of the historic groundwater patterns with information on the geographic relationships between them. When applied to the time series from wells in the Namoi basin, the SOM identified 16 clusters of time series that share similar temporal patterns over the timeline of the study. This indicates the 16 most prevalent water level patterns among the monitoring data. These 16 \u2018representative\u2019 patterns are shown in grey on the upper panel of the corresponding figure. Each well time series is matched to the pattern (and therefore becomes a member of the cluster) that it is most similar to. It is important to note that the SOM analysis is not useful for prediction, since it is purely empirical and exploratory. 
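The SOM clustering step can be illustrated with a minimal 1-D self-organising map in numpy (the toy "well hydrographs" below are invented, and the real analysis uses 16 units rather than 2): each unit's weight vector becomes a cluster's representative pattern, and each series is assigned to its best-matching unit.

```python
import numpy as np

def train_som(data, n_units, epochs=50, lr=0.5, seed=0):
    """Minimal 1-D self-organising map. Each unit's weight vector becomes a
    'representative' pattern; each series maps to its best-matching unit."""
    rng = np.random.default_rng(seed)
    units = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        # neighbourhood radius shrinks over training
        radius = max(1.0 * (1 - epoch / epochs), 0.01)
        for x in data:
            bmu = int(np.argmin(((units - x) ** 2).sum(axis=1)))
            for j in range(n_units):
                h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                units[j] += lr * h * (x - units[j])
    clusters = [int(np.argmin(((units - x) ** 2).sum(axis=1))) for x in data]
    return units, clusters

# toy 'standardised well hydrographs': two obvious pattern families
rising = np.cumsum(np.ones((5, 12)) * 0.1, axis=1)
falling = -rising
data = np.vstack([rising + 0.01, falling - 0.01])
units, clusters = train_som(data, n_units=2)
```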
It focuses only on the groundwater level data and does not incorporate any information from the predictor variables\u2014climate, surface flows, and extractions are not considered when determining the clusters. For a further analysis of the SOM results, please see the associated publication. The clusters found by the SOM can now be used in the LSTM portion of the partitioned multi-well analysis. The SOM has produced complete monthly data sets (with no missing data over the study period) in the form of the clusters\u2019 representative time series, and these can be easily input into LSTMs in the next step. The deep learning algorithm for time series, LSTM, is applied to the clusters\u2019 representative time series to investigate the relationship between the groundwater levels and external predictors for each of the SOM clusters. As there are 16 SOM clusters, 16 LSTMs are produced at this stage. Predictor variables are now added into the analysis, including rainfall and evapotranspiration, river flows and extraction data, and a month-of-year variable. All input data are monthly averages, and the representative time series from each SOM cluster is the target variable for each LSTM. LSTMs with two layers are used, with 50 nodes on the first and 25 nodes on the second layer. The first 80% of data are used for training, with 10% each for validation and testing. The LSTMs are run for up to 500 epochs with early stopping after 20 epochs without improvement, a batch size of 32, and a weight decay parameter (lambda = 0.00001) to regularise weight estimates and reduce overfitting. During training, the LSTM algorithm \u2018looks back\u2019 over the data for a specified number of time steps, cycling sequentially through the observations and retaining important information through time. 
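The "look-back" mechanism corresponds to the sliding-window shape of the LSTM's training inputs; a numpy sketch with an invented series:

```python
import numpy as np

def make_windows(series, look_back):
    """Turn a 1-D series into (samples, look_back) input windows and
    next-step targets: the window is what an LSTM 'looks back' over."""
    X = np.stack([series[i:i + look_back]
                  for i in range(len(series) - look_back)])
    y = series[look_back:]
    return X, y

series = np.arange(10.0)
X, y = make_windows(series, look_back=3)
# X[0] is [0, 1, 2] and its target is y[0] == 3
```

In practice, models are fit over several candidate look-back lengths and the one whose output best matches the cluster's representative series is kept, as the text describes.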
The LSTMs are run with a variety of possible look-back lengths, and the model providing the best overall match of model output with each cluster\u2019s representative time series is chosen. A sample of LSTM results is shown in the corresponding figure. The partitioned models created with the SOM and LSTM combination are able to provide predictions for all of the groundwater time series, even those with very few observations. This is attributed to the SOM algorithm being able to match the wells with low numbers of observations into clusters that share similar temporal patterns, and then representing these wells in the LSTMs with the clusters\u2019 representative time series. Neural networks in general need large datasets for training, and the LSTMs here are trained on the full representative time series, whilst allowing predictions to be made for the individual wells, regardless of the number of measurements at each well. Though this method has benefits, a drawback is that the same prediction is used for all wells within each cluster. A global MLP model is produced that incorporates information from all the wells. This allows for model training with a much larger dataset and aims to provide insight into what is happening at a catchment-wide level. Data are aggregated to the monthly level for use in the global MLP in order to improve computational efficiency over the use of daily data. This aggregation applies not only to the groundwater measurements, but also to all the predictor variables and their lags. The size of this data set is 36,142 observations with 368 predictor variables, which includes 12 lags on all rain and streamflow gauge measurements (at monthly aggregations). The global MLP is fit using a two-layer network with 64 and 32 nodes on the first and second layers. All other hyperparameters are the same as for the individual MLPs, as described above. 
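Pooling the wells into one training table is mostly a bookkeeping step; a pandas sketch (well IDs and values are invented) shows the well identifier being kept as a predictor so that a single global model can share information across sites:

```python
import pandas as pd

def build_global_table(per_well_frames):
    """Stack per-well monthly tables into one training set, keeping the
    well identifier as a predictor for a single 'global' model."""
    frames = []
    for well_id, df in per_well_frames.items():
        df = df.copy()
        df["well_id"] = well_id
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# two hypothetical wells with monthly rain and groundwater level
wells = {
    "GW001": pd.DataFrame({"rain": [40, 10], "level": [10.2, 10.0]}),
    "GW002": pd.DataFrame({"rain": [35, 12], "level": [7.6, 7.4]}),
}
table = build_global_table(wells)
```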
Predictions produced by the global and individual MLPs are compared in the corresponding figure. The use of the global MLP indicates some improvement for predictions on a telemetered well. However, as the global model used here is based on the basic MLP algorithm, it also does not fully exploit the time series nature of the data, as was the case with the local MLPs. Below, we apply another global model. This one is based on the LSTM which, as discussed above, specifically incorporates the sequential nature of the data into the algorithm. A DeepAR model is another form of global model, and is therefore trained on all of the time series. This model uses the LSTM algorithm to analyse sequential data and is able to provide probabilistic predictions. The structure of this model differs from the global MLP in that rather than a single target variable of \u2018measured water level\u2019, all of the groundwater monitoring wells\u2019 time series become separate target variables, giving 165 target variables. We trained a two-layer DeepAR model with 32 nodes on each layer for 200 epochs, with a dropout rate of 10%. The context length is five years, and the prediction length is five years. The input variables are separated into categorical predictors (which are static for each time series) and dynamical predictors (which change over time for each time series). One categorical predictor, the well ID, is used in our model. The dynamical predictors are rain, evapotranspiration, surface flow, and extractions, as in the MLPs. Twelve months of rain lags are included in the input data set. In addition, the year-of-study is included as a dynamical predictor. The DeepAR results are shown in the corresponding figure. Whilst the DeepAR predictions follow the observations relatively well over the 5-year testing period, the results were less promising when trying to predict further into the future: when 10-year predictions were attempted, the results deteriorated substantially. 
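The categorical/dynamical split described above corresponds to the per-series record layout used by common DeepAR implementations. The field names below follow GluonTS conventions, which is an assumption on our part (the text does not name the software), and all values are invented:

```python
# One record per monitoring well: the well's own series is the target, the
# well ID is a static categorical feature, and the climate/extraction series
# (plus year-of-study) are dynamical real-valued features.
def make_deepar_record(well_idx, levels, rain, evap, flow, extraction, year):
    record = {
        "target": levels,                # groundwater levels for this well
        "feat_static_cat": [well_idx],   # well ID: categorical, static
        "feat_dynamic_real": [rain, evap, flow, extraction, year],
    }
    # every dynamical feature must align with the target in length
    assert all(len(f) == len(levels) for f in record["feat_dynamic_real"])
    return record

dataset = [
    make_deepar_record(0, [10.2, 10.0, 9.9], [40, 12, 5], [3, 4, 5],
                       [1.2, 0.8, 0.5], [0, 0, 10], [1, 1, 1]),
    make_deepar_record(1, [7.6, 7.4, 7.5], [38, 10, 6], [3, 4, 5],
                       [1.0, 0.7, 0.6], [5, 5, 5], [1, 1, 1]),
]
```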
This phenomenon is quite typical for time series prediction, which works by exploiting the autocorrelation structure inherent in the data to predict the future. For a reliable long-term prediction, the best strategy is unquestionably to make sure that the analysis has access to the right predictors or features that can explain the observed patterns. Although there are limitations in terms of how reliably the data can be projected into the far future, the results demonstrate the usefulness of exploring \u201cwhat if\u201d scenarios such as setting extractions to zero. These explorations could be expanded to see how the predictions might look under scenarios, such as multiple successive years of very low or very high rainfall. In terms of identifying a best modelling strategy for this set of environmental monitoring time series, no single approach was found to be uniformly best across all time series. The choice of best strategy is complicated by the fact that differences between the various methods can be subtle, and there are many trade-offs in terms of prediction outcome and ease of working with the data at hand. The choice of the best strategy depends strongly on the context; the best strategy for short-term prediction is likely to be quite different from the best strategy for long-term prediction. There are also differences in how successfully the various approaches can be adapted to handle limitations in the available data. Whilst it is straightforward to adapt the MLP-type models to handle time series with missing data points or time series measured at sporadic timepoints, it is difficult to do this for the LSTM model. There are also some technical differences between the various methods in terms of the software available for implementation. This particular analysis was challenged by the variability in the frequency of groundwater level data collection. 
Although data collection goes back to the early 1970s for many wells, the earlier measurements were based on manual collection every 6 weeks or so. Starting in the early 2000s, some of the wells were fitted with automatic telemeters that provided continuous monitoring. As a result of this variability in measurement frequency and timing, it was virtually impossible to fit LSTM-type models to the daily data. This can be circumvented by aggregating to monthly data; however, this aggregation potentially results in some loss of predictive power. Based on our local MLP analyses on individual wells, we found that RMSE was indeed better when daily data were used. Another strategy may be to employ approaches to infill the missing data, but we have not done this, as this study was aimed at determining what strategies could be used with the raw, sporadic data, as measured. Strong temporal patterns were found for many of the wells, as seen with the MLP modelling on the individual telemetered wells and with the SOM clustering analysis. As discussed earlier, we were able to develop models that fit the observed data by including appropriate time terms in our models, along with the observed climate and extraction predictors. However, while such models may explain observed data very well, they cannot be reliably used to predict far outside the range of observable data. Similarly, we found that models incorporating appropriate autoregressive terms could do a good job in terms of short-term predictions. For long-term prediction, however, it is critical to develop models that have a rich enough set of reliable predictors to explain the time trends. The importance of exogenous variables for improving time series forecasting accuracy was also listed as one of the main takeaways from the M5 competition. To further explore the importance, for long-term prediction, of ensuring that the right predictors are captured, a small computer simulation was conducted. 
Specifically, we generated a time series containing 2000 datapoints. The first 200 points were used for model fitting, and then the models were evaluated for how well they could predict (a) the next 200 points and (b) the final 200 points. Scenario (a) could be considered an example of short-term prediction, whilst scenario (b) could represent a long-term prediction. The data-generating model included a predictor that mimicked a rain variable, as well as some seasonal patterns and a long-term trend. The models that were fit included a linear regression model (ordinary least squares or OLS) that included the correct predictors, a linear model that did not include all the correct predictors, but included a lagged outcome variable, and finally, a classic time series model (ARIMA) which could exploit the autocorrelation structure in the data. While it is beyond the scope of this paper to discuss in more detail, it is interesting to note that some time series analysis programs provide error bands around their predictions, which show the uncertainty of prediction of the future. Local, partitioned, and global algorithms were investigated here for making predictions based on monitoring time series data for groundwater levels, and each was found to have benefits and drawbacks depending on the context and characteristics of the monitoring data. The local MLP models were able to incorporate the sporadic datasets and provide quick and easy predictions, though it was necessary to add temporal features manually. These individual models were restricted to the few groundwater wells that had automatic telemeters installed, as single wells with manual measurements do not have enough data points for the MLPs. The results of the telemetered wells showed relatively good predictive accuracy with the local MLPs. 
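The simulation's main point, that a correctly specified regression extrapolates far better than a purely autoregressive model, can be reproduced with a small numpy sketch. The data-generating model here is an invented stand-in for the one in the study, and a simple AR(1) regression stands in for the ARIMA fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n, dtype=float)
rain = rng.gamma(2.0, 1.0, n)              # mimicked rain predictor
season = np.sin(2 * np.pi * t / 365)       # annual seasonal pattern
y = 5 + 0.5 * rain + season + 0.002 * t + 0.1 * rng.normal(size=n)

def ols_fit_predict(X_tr, y_tr, X_te):
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

# model A: the correct predictors (rain, season, trend), fit on first 200
XA = np.column_stack([np.ones(n), rain, season, t])
pred_long_A = ols_fit_predict(XA[:200], y[:200], XA[-200:])

# model B: no exogenous predictors, just the lagged outcome (AR(1)),
# iterated forward from the end of the training window
Xar = np.column_stack([np.ones(199), y[:199]])
(a, b), *_ = np.linalg.lstsq(Xar, y[1:200], rcond=None)
yhat, preds = y[199], []
for _ in range(n - 200):
    yhat = a + b * yhat
    preds.append(yhat)
pred_long_B = np.array(preds[-200:])

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
rmse_A = rmse(y[-200:] - pred_long_A)   # correct model tracks the trend
rmse_B = rmse(y[-200:] - pred_long_B)   # AR model drifts to a constant
```

Short-term prediction (the next 200 points) is much more forgiving of the autoregressive model, which matches the study's observation that AR terms help in the short term but not the long term.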
The partitioned models, using the SOM algorithm to partition the data sets into clusters that could be modelled with LSTMs, were also able to provide acceptable predictions. The benefit of these partitioned models is that they are able to provide predictions for the entire set of time series, even those with very few measurements. A drawback of this method is that the same LSTM prediction is given for a number of individual wells, and therefore may result in a loss of accuracy in the prediction of groundwater levels at telemetered wells for which the individual models worked well. The global models were a good solution for increasing the size of the data set, for sharing information catchment-wide, and again for providing predictions at wells with few data points. The DeepAR algorithm offers a powerful modelling framework to fit a global LSTM model to the time series from all the wells. The framework has a number of advantages, including the ability to create needed interactions and nonlinearities; however, a primary emphasis of DeepAR is the incorporation of autoregressive terms; this is the \u2018AR\u2019 part of DeepAR. We have discussed how these are very helpful for predictions in the short term, but not for long-term predictions. Overall, it was determined that, whilst these methods are able to do a satisfactory job of modelling groundwater levels with the use of appropriate covariates, it is nonetheless not straightforward to use them for the prediction of the future. Our analyses of the Namoi catchment data suggest that, even though data related to climate, extractions, and streamflow/dam releases can explain a relatively high proportion of the observed trends in the regional groundwater levels, it appears that some residual temporal effects remain, even after these variables have been accounted for. 
Model fit can be improved through the inclusion of time effects into the modelling process, and the inclusion of autoregressive terms can also help to \u201csoak up\u201d temporal effects. However, for the purpose of projecting further into the future, it is important to have models that can explain temporal effects through accurately measured features. Additionally, it follows that the predictions of these features into the future must also be accurate if they are to be relied on as predictors in the groundwater level models. An interesting outcome of this study is the potential for the use of these methods for a relatively easy analysis of \u2018what-if\u2019 scenarios, as shown in the corresponding figure. There are a number of possible directions for future work. As discussed above, improved strategies for reliable measurement of extractions would likely improve model fits. It is intriguing to consider the possibility of using other data sources such as satellite data to capture land use and changes in water storage. We also believe that there are further analytical directions that could be explored. In particular, we recommend exploring the use of state\u2013space modelling combined with a deep learning framework. This would require the development of new statistical methods, as well as software engineering. It would also be useful to develop a modelling framework that incorporated a global LSTM structure like DeepAR, but which allowed the option to turn off the autoregressive component, or allowed for the number of AR terms to be delinked from the \u201clookback\u201d component of DeepAR. Such a modified version of DeepAR would allow for analyses that focus on explaining the observed trends in the data, without simply \u201csoaking up\u201d the effects with AR terms."}
+{"text": "Roux-en-Y gastric bypass (RYGB) surgery potently improves obesity and a myriad of obesity-associated co-morbidities including type 2 diabetes and non-alcoholic fatty liver disease (NAFLD). Time-series omics data are increasingly being utilized to provide insight into the mechanistic underpinnings that correspond to metabolic adaptations in RYGB. However, the conventional computational biology methods used to interpret these temporal multi-dimensional datasets have been generally limited to pathway enrichment analysis (PEA) of isolated pair-wise comparisons based on either experimental condition or time point, neither of which adequately captures responses to perturbations that span multiple time scales. To address this, we have developed a novel graph network-based analysis workflow designed to identify modules enriched with biomolecules that share common dynamic profiles, where the network is constructed from all known biological interactions available through the Kyoto Encyclopedia of Genes and Genomes (KEGG) resource. This methodology was applied to time-series RNAseq transcriptomics data collected on rodent liver samples following RYGB, and those of sham-operated and weight-matched control groups, to elucidate the molecular pathways involved in the improvement of NAFLD. We report several network modules exhibiting a statistically significant enrichment of genes whose expression trends capture acute-phase as well as long-term physiological responses to RYGB in a single analysis. Of note, we found the HIF1 and P53 signaling cascades to be associated with the immediate and the long-term response to RYGB, respectively. The discovery of less intuitive network modules that may have gone overlooked with conventional PEA techniques provides a framework for identifying novel drug targets for NAFLD and other metabolic syndrome co-morbidities. 
The discovery of novel druggable targets for metabolic diseases, such as type 2 diabetes, obesity, and non-alcoholic fatty liver disease (NAFLD), has recently been fueled by the application of multi-omics technology to elucidate their mechanistic underpinnings. Time-series omics data are particularly useful for studying the dynamic behavior of biological systems and provide novel insight into disease progression and amelioration at the molecular level. However, conventional -omics analysis workflows often fall short for such data because they implicitly rely on pair-wise statistical comparisons, either between two experimental conditions at a single time-point, or between two time-points under one experimental condition, neither of which effectively captures the underlying molecular mechanisms spanning across multiple timescales. Several methods have been previously developed to address this, based on mathematical and statistical models as varied as spline regression and Bayesian approaches. We herein present a novel graph network-based analysis workflow designed to identify modules enriched with biomolecules that share common dynamic profiles, where the network is constructed from all known biological interactions available through the KEGG resource. Focusing on time-series RNA-seq data, we studied the impact of rodent RYGB on liver metabolic and signaling pathways over the course of three months, with the intent of unraveling a dynamic liver remodeling that correlates with the beneficial metabolic effects of the surgery. 
Our results show that genes with similar Log2 fold change (L2FC) vector trajectories between RYGB and weight-matched (WM) control groups tend to significantly cluster in nodal proximity on a large-scale network comprising human biological pathways, suggesting that functionally related genes share common expression dynamics. Our method reveals less-intuitive pathway modules activated by RYGB across different time points, thus facilitating the discovery of novel hepatic drug targets for metabolic syndrome indications. Genes below a p-value threshold of 0.05 were considered differentially expressed between RYGB and either of the two control groups, and the complete list can be found in the supplementary materials. RYGB or Sham surgery was performed on diet-induced obese (DIO) male Sprague Dawley rats. Immediately after the operations, the Sham-operated animals were further split into two cohorts, wherein one group had ad libitum access to food and the second group was calorie restricted to provide the WM cohort (please see Methods). Animals were sacrificed post-operatively at week 1, month 1, and month 3. Five animals were used for each experimental group and each time point. These specific timepoints were selected to provide a comprehensive view of the temporal dynamics of the metabolic and physiological adaptations to the surgery. More specifically, while the 1 week timepoint corresponds to the immediate post-operative effects, at around 1 month after surgery, many of the physiological changes start stabilizing. By 3 months postoperatively, the impacts of surgery are well established and stabilized, and therefore this timepoint was used as a surrogate for mechanisms involved in the sustained and long-term effects of the surgery. We first defined gene expression trajectories, in which \u03b81 and \u03b82 represent directions between the corresponding vector and the horizontal axis. The distribution of NCTS values for genes in each trend is shown in the corresponding figure. We recognize that animal-to-animal variation in normalized gene expression counts at any given time point leads to variation in L2FC values between RYGB and control groups. Therefore, each gene is further assigned a discrete probability distribution (DPD) of possible CTs (Methods). As a motivating example, consider the protein MDM2 and its assigned CT. 
We then used a bipartite graph network encompassing 337 metabolic and signaling pathways provided by the KEGG resource. One of the resulting modules contains the genes SRC, SYK, PIK3AP1, PIK3CA, SLC2A4 (or GLUT4), TRIP10, ADIPOQ, PTPN11, and GAB1, where the L2FC between RYGB and WM either starts as positive at week 1 and continues to increase with time, or starts as negative and continues to decrease. Another module contains glucose and fructose metabolism genes (MPI, TKT, HK3, HKDC1, PFKM, TIGAR, PFKFB3, HK2, PFKP, PFKL, HK1, HIF1A) whose L2FC between RYGB and WM starts high, decreases at 1 month, and subsequently increases again by 3 months. This phenomenon might be a consequence of glucose homeostasis being regulated by several triggers across multiple time scales, such that an escalation of fructose metabolism genes can occur both acutely after RYGB, as well as after 1 month to achieve a long-term steady state. Lastly, CT-4 gene expression is similar to CT-3, except that the linear trend between 1 month and 3 months matches that between 1 week and 1 month, and we report a corresponding module. Diet-induced obese (DIO) male Sprague Dawley rats were used for all studies. Obesity was induced by feeding the animals ad libitum with a high-fat diet (HFD), which provides 60% of total energy as fat, 20% as carbohydrate, and 20% as protein. At the time of surgery, DIO rats weighed 675 \u00b1 25 g. Animals were individually housed and were maintained on a 12-h light, 12-h dark cycle (lights on at 0700 h) in facilities with an ambient temperature of 19\u201322 \u00b0C and 40\u201360% humidity. For each experimental group and each time point, five animals were used. The RYGB procedure was performed according to the method previously described. Animals were fasted overnight. The following morning, animals were weighed, and a glucose solution (1 g glucose/kg body weight) was administered intraperitoneally. 
Blood glucose levels were measured before glucose administration (time 0) and 10, 20, 30, 45, 60, and 120 min after intraperitoneal glucose administration using a blood glucose meter via tail vein. RNA-seq was performed for each sample individually. The sequencing was used to measure transcript counts corresponding to 23,113 genes in crushed liver samples, from which differentially expressed genes were determined for RYGB vs. control groups at 1 week, 1 month, and 3 months post-surgery. RNA was isolated from approximately 60 mg homogenized tissue using TRIzol reagent, and RNA quality was subsequently determined with Tapestation 4200 RNA ScreenTape analysis. A total of 500 ng of RNA was used for sequencing after ribosomal RNA depletion using the Illumina RiboZero rRNA Removal Kit. AMPure XP paramagnetic beads were utilized for RNA purification and library clean-up of contaminants throughout the protocol. The Illumina TruSeq Stranded mRNA Library Prep kit was used for library preparation, comprising fragmentation and priming, first-strand cDNA synthesis, and second-strand DNA synthesis, followed by clean-up with AMPure XP beads, according to the manufacturer\u2019s protocol. A-tailing and adaptor ligation were performed with TruSeq RNA Single Indexes, followed by two additional clean-ups with AMPure XP beads. PCR enrichment was followed by an additional clean-up with AMPure XP beads. Libraries were normalized and pooled. The Tapestation 4200 was used to analyze library quality, which showed an average peak size of 323 bp. The libraries were sequenced on the Illumina NextSeq 550 using the v2.0 kit. 
The DESeq2 [version 1.15] package in R was used to compute the log2 fold change and Wald test p-value for each transcript at each time point. Genes whose corresponding transcripts were denoted as statistically significant based on the Benjamini\u2013Hochberg-corrected p-value threshold of 0.05 were considered differentially expressed in RYGB compared to control animals. The complete list of DEGs can be found in the supplementary materials. Combined trends (CTs) are defined by first identifying all theoretical gene expression trajectories in the L2FC vs. time coordinate space based on the RYGB vs. WM comparison at each time point. A gene expression trajectory is comprised of two vectors, with directions \u03b81 and \u03b82, that capture the mean expression L2FC dynamics between 1 week and 1 month post-surgery, and between 1 month and 3 months post-surgery, respectively. While the mathematical combinatorics of assigning a positive or negative sign to each of the three L2FC values would suggest 32 theoretical trends, only 18 of them are feasible after recognizing that vectors crossing the x-axis necessitate specific \u03b8 values. For example, if the signs of the L2FC vector for a particular gene were determined to be , \u03b81 and \u03b82 need to be negative and positive, respectively. These 18 possible trends were further reduced to 9 CTs by combining trends with L2FC trajectories that were reflections of each other over the x-axis. Animal-to-animal variation in normalized gene expression counts implies that the L2FC between RYGB and WM groups is a random variable, whose underlying distribution results in a corresponding distribution of CTs that describe the gene expression trajectory. To discriminate between higher and lower confidence gene CT assignments, we determine the probability of observing each CT based on the discrete probability distribution (DPD) of possible CTs, rather than relying on a single trend derived from the mean L2FC values. To generate a gene\u2019s DPD, we first model the normalized expression count distribution for each sample group (condition and time-point) by a log-normal probability density function (PDF) using the mean and standard deviation of the observed normalized counts as parameters. 
We then applied Monte-Carlo sampling from each of the 6 PDFs generated for each gene 1000 times, computing the L2FC between RYGB and WM at each time point, along with the corresponding expression trajectory-based CT. For each CT, the gene CT score is defined as the fraction of sampling cases where the CT was observed. In order to compare scores across CTs, we normalized the gene CT scores to the median CT score across all genes, resulting in the normalized gene CT score. This normalization was necessary because genes are more likely to be assigned to certain CTs, such as CT-9, over others in the scenario where all expression values are derived from the same underlying distribution, which causes an inflation in gene scores for CTs that have a higher likelihood of being observed. In this manner, each gene is assigned nine normalized gene scores, one for each CT, with the largest score informing the CT that is most likely the true CT for a gene\u2019s expression dynamics. In this study, our aim is to identify functionally related genes in metabolic and signaling pathways that share a similar dynamic differential gene expression profile when comparing the RYGB vs. WM groups. We avoided traditional PEA to liberate the analysis from canonical pathway boundaries, and instead pursued module discovery on graph-based networks to account for interactions between genes that span multiple functions. Specifically, we developed a set of metrics to assess whether genes with the same high-scoring CTs cluster within sub-networks of a comprehensive large-scale network describing human metabolic and signaling pathways, constructed using the KEGG REST API and Biopython. 
To identify groups of genes enriched with a high normalized CT score for each CT, we applied Monte Carlo sampling of subnetworks of KEGG-HN. Computationally, a sub-network is selected using the following procedure. First, a sub-network, S, is defined as an empty array of nodes. Next, a random node from KEGG-HN is chosen (N1) and added to subnetwork S. Then, a random edge is selected from all edges connected to N1, and the second node connected to the edge is added to sub-network S, where S is now {N1, N2}. Again, a random node is selected from S and a random edge is selected from all edges that are connected to that node. This node is added to S if it does not belong to S. This procedure is repeated until the subnetwork S has M protein nodes. If it takes more than 1000 iterations to add a new node, the procedure is terminated, and a new subnetwork is built starting from N1. The KEGG-HN was sampled for 100,000 sub-networks of size M, inclusively ranging from 10 to 20. For each sub-network sampled, a module CT score is computed for each CT by taking the mean of the normalized CT scores across all genes in the sub-network. In this regard, modules with high module CT score values are more likely to be enriched in genes that exhibit the CT\u2019s dynamic expression trajectory. Subnetworks are ranked by module CT scores. To determine a significance threshold for the module CT scores, a null distribution of module CT scores is generated by shuffling the gene labels of our gene CT score dataset and recalculating module CT scores. This randomized dataset is designed to control for clustering of genes with high normalized CT scores for a given CT in sub-networks of KEGG-HN by random chance. The experimental and null cumulative distributions of module scores for each CT are compared using probability plots, showing that the module scores observed for CTs 1\u20134 are significantly greater than those observed in the null model. Each module is then assigned a \u03c1 value for each CT, calculated as the probability of observing a module CT score at least that high in the null distribution. 
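The sub-network sampling and module-scoring procedure lends itself to a direct sketch in Python; the adjacency structure, gene names, and CT scores below are invented toys, not KEGG-HN:

```python
import random

def sample_subnetwork(adj, size_m, max_tries=1000, seed=0):
    """Grow a connected sub-network S: pick a random node already in S,
    follow a random incident edge, and add the neighbour if it is new."""
    rng = random.Random(seed)
    n1 = rng.choice(sorted(adj))
    S, tries = [n1], 0
    while len(S) < size_m:
        node = rng.choice(S)
        neighbour = rng.choice(sorted(adj[node]))
        if neighbour not in S:
            S.append(neighbour)
            tries = 0
        else:
            tries += 1
            if tries > max_tries:
                S, tries = [n1], 0   # restart from N1
    return S

# toy interaction network as adjacency sets
adj = {
    "G1": {"G2", "G3"}, "G2": {"G1", "G4"}, "G3": {"G1", "G4"},
    "G4": {"G2", "G3", "G5"}, "G5": {"G4"},
}
S = sample_subnetwork(adj, size_m=3)

# module CT score: mean of normalized gene CT scores over the sub-network
norm_ct_scores = {g: 1.0 for g in adj}   # invented normalized CT scores
module_ct_score = sum(norm_ct_scores[g] for g in S) / len(S)
```

A null distribution of such module scores would then be built by shuffling `norm_ct_scores` over the gene labels and resampling.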
For CTs 1\u20134, the group of modules with \u03c1 equal to the minimum \u03c1 observed are considered significant, visualized in Cytoscape, and further assessed for biological relevance. We propose our computational platform as a universally applicable time-series omics analysis workflow that facilitates the discovery of network modules with common dynamic profiles. Focusing on RYGB surgery, which triggers a complex, time-dependent, multi-organ metabolic and signaling rewiring, we report network modules whose expression dynamics capture both the acute-phase and the long-term responses to the surgery."}
+{"text": "Anomaly detection has been widely used in grid operation and maintenance, machine fault detection, and so on. In these applications, the multivariate time-series data from multiple sensors with latent relationships are always high-dimensional, which makes multivariate time-series anomaly detection particularly challenging. In existing unsupervised anomaly detection methods for multivariate time series, it is difficult to capture the complex associations among multiple sensors. Graph neural networks (GNNs) can model complex relations in the form of a graph, but the observed time-series data from multiple sensors lack explicit graph structures. GNNs cannot automatically learn the complex correlations in the multivariate time-series data or make good use of the latent relationships among time-series data. In this paper, we propose a new method\u2014masked graph neural networks for unsupervised anomaly detection (MGUAD). MGUAD can learn the structure of the unobserved causality among sensors to detect anomalies. To robustly learn the temporal context from adjacent time points of time-series data from the same sensor, MGUAD randomly masks some points of the time-series data from the sensor and reconstructs the masked time points. Similarly, to robustly learn the graph-level context from adjacent nodes or edges in the relation graph of multivariate time series, MGUAD masks some nodes or edges in the graph under the framework of a GNN. Comprehensive experiments are conducted on three public datasets. According to the experimental findings, MGUAD outperforms state-of-the-art anomaly detection methods. To guarantee that network systems operate normally, large amounts of industrial data are monitored at all times. The data come from numerous interrelated monitoring sensors, which are continuously generated through the operation of the system and constitute multivariate time-series data. 
Time-series anomaly detection is a core task in these monitoring scenarios. Unsupervised anomaly detection is more widely used since the prevailing scenarios often lack stable anomaly signs and the anomalies change irregularly. Intuitively, the task can be accomplished by setting a threshold and identifying data that exceed the threshold as anomalous. However, this approach cannot cope with the complexity and diversity of exceptions and data types. For example, anomalies may occur where the absolute value of the deviation is not very large but the trend of the data is different. Traditional unsupervised anomaly detection utilizes statistical learning methods, such as principal component analysis and distance-based methods. Deep learning techniques have become popular in the area of anomaly detection by allowing neural networks to learn characteristics, because of their strong learning ability and high adaptability. Recently, graph neural networks (GNNs) have been applied to anomaly detection. To address the problems with the above methods, we propose a masked graph neural network for unsupervised anomaly detection (MGUAD), a novel method that uses a GNN with masking strategies to robustly learn the temporal context from time-series data and the graph-level context from multiple time-series data for anomaly detection. The observed time-series data from multiple sensors often lack explicit graph structures. MGUAD can model the relations among sensors as a graph and dynamically update the graph structure based on time-series data over time. A generative adversarial network (GAN) framework is used to train MGUAD. The GNNs are employed as a generator, and a discriminator is present to distinguish between the original and generated time-series sequences. To ensure that the proposed model is robust and can make the most of the learned correlations among sensors, multi-masking strategies are adopted to model the time series and graph structure. 
In terms of masking on time-series data, MGUAD randomly masks time points from the sequence and then reconstructs the time points through the neural network, which can adequately learn the temporal correlation of the contexts in the sequence to recover the masked part. Meanwhile, by masking the nodes or edges, MGUAD can learn the graph-level context from multiple sensors robustly within the framework of the GNN. To summarize, our main contributions are as follows: For the purpose of unsupervised multivariate anomaly detection, we propose a novel network design that can exploit the temporal correlations of time series and the complex relationships among different sensors; MGUAD is the latest example of a GNN applied to multivariate time-series anomaly detection. We are the first to introduce the masking operation into time series and graph structures, and we use two masking strategies to enhance the learning capabilities of the model. Extensive experiments on three publicly available datasets demonstrate that our model outperforms all current state-of-the-art methods. The goal of anomaly detection is to identify samples that are aberrant and deviate from the typical data trend. In this section, we review anomaly detection on time-series data in the existing literature, especially unsupervised methods on multivariate time series. Our model learns the distribution of temporal data through masking and GNNs, so we also provide a summary of the related works on these two topics. Numerous anomaly detection techniques have been developed as a result of the diversity of anomalous patterns, data formats, and application contexts. Taken together, the three types of efficient anomaly detection techniques are as follows: clustering-based methods, reconstruction-based methods, and prediction-based methods. The clustering-based method is primarily predicated on the idea that normal data samples are located closer to the local clustering centroid while anomaly samples are located further away. 
The distance between each sample and the closest clustering centroid serves as the anomaly score. Clustering-based methods mainly include the Gaussian mixture model and K-nearest neighbors. The potential distribution of time-series data can be learned using a reconstruction-based method. These methods are based on the idea that anomalies lose information when mapped to a lower-dimensional space, making it impossible for them to be successfully reconstructed. Because of this, the anomaly score is estimated using the reconstruction loss, as seen in Principal Component Analysis (PCA) and autoencoder-based models. Prediction-based methods learn to fit the time series and then predict the values at the next moment. If there is a significant disparity between the predicted data and the initial sample, the sample is considered anomalous. The masking operation, a common method used to improve a model\u2019s learning capabilities, has received widespread application across several deep learning tasks. In general, the masking operation removes or replaces a portion of the input to the neural network, which can improve the model\u2019s capacity by reconstructing the masked data. In the natural language processing field, BERT employs masked tokens that the model learns to predict from their context. Large amounts of data in the real world, such as social networks, knowledge graphs, complex file systems, etc., are unstructured. The emergence of GNNs addresses the limitation of traditional neural networks, which are only effective in processing structured data such as sequences and grids. GNNs are good at modeling intricate patterns in data using a graph structure. In general, the key design element of GNNs is that graph nodes exchange information with their neighbors to update their representations. Graph convolution networks (GCNs) use convolution operations over graph neighborhoods to aggregate node information. We consider multivariate time series with n variables (sensors), where p denotes an anomaly\u2019s starting time point. Our research attempts to address the issue of multivariate time-series anomaly detection. 
A multivariate time series is modeled with a graph G = (V, E), where V is a finite set of nodes and E is a finite set of edges. In this section, we describe how to capture the relationships between multivariate time series and build a graph to utilize the correlations between sensors. The graph\u2019s nodes correspond to individual sensors. The first m samples of each time series are selected to represent the initial behavior of these sensors, and we calculate the Pearson correlation coefficient between them to obtain the correlation between two time series. The values of the similarity are normalized and are used as the weights of the edges between nodes. To control the scale of the graph, which can eliminate unnecessary edges and improve the efficiency of the operation, we use two strategies to control the number of edges: filtering out edges between nodes with a similarity lower than s and constraining the maximum number of neighbors per node to n. To build a graph structure, we need to obtain the complex relationships between the sensors. The similarity of the data in two time series can be used to calculate the correlation between the corresponding sensors: we use the first m time points of each time series and compute the Pearson correlation coefficient between each pair of series to obtain their initial similarity. For each node i, we select the n most similar nodes as candidate neighbors. Then, we eliminate the edges with weights lower than s to streamline the graph structure. This yields the initial composition of the graph structure. Relationships between sensors can change over time, and MGUAD should update the graph structure as the state changes. For graph-structure learning, we map each sequence to a node embedding. We use the node embedding to represent each sensor\u2019s features. 
The node embedding vectors are initialized randomly and are updated with the model training. By calculating the cosine similarity between node embeddings, we can obtain the correlations between node features. Graph-structure learning also uses m samples of the time series from the current time period; similar to how we proceed during the initial graph creation, these samples represent the behaviors of the sensors during this time period. MGUAD uses both node embeddings and sampling in tandem for graph-structure learning: we combine the embedding similarity with the similarity computed from the m samples, normalize, and weight the sum to obtain the new similarity. We then update the graph structure in the same way as during graph building. Regarding reconstructive learning, by randomly masking multivariate sequences, MGUAD can be forced to reconstruct the sequences by leveraging the context of each sequence and the correlations between different sequences, thereby enhancing the model\u2019s ability to handle complex data. To learn the time correlations in time series, we initially remove a portion of the input time series and then instruct the network to recreate the original input time series. We remove a random portion of the sequence determined by the sliding window. For example, when using a 20% masking ratio for a sliding window with a length of 20, we randomly mask four time points. Our goal is to learn contextual information within time series, so to avoid breaking the continuity of the time series on a large scale, we specify a maximum sequence length for the masked subsequences. Generally, the maximum continuous masking length will not exceed half of the number of masked points; thus, the longest run of consecutive masked time points for a 20% masking ratio on 20-length inputs would be 2. By reconstructing the masked time series, the model learns the temporal dependencies between different time points. 
Specifically, MGUAD generates values at the masked time points and calculates the reconstruction loss by comparing these values to the original data before masking. In the attention mechanism used for reconstruction, a is a vector of learned coefficients and \u2295 denotes concatenation. This section also describes how graph structures are used to formulate predictions. To enhance the robustness of the model, we mask the nodes or edges of the graph. During prediction, the model can only use the incomplete data remaining after applying a random mask as input, which enhances its capability to extract complex information from within and between sequences, ultimately improving its ability to leverage unbalanced or ambiguous data. The idea of using graph structures to calculate predicted values through an attention mechanism comes from GAT. We obtain an aggregated representation of each node from its neighbors and then input the aggregated representation into the prediction layer. Masking graph nodes is similar in spirit to the dropout operation in deep learning. The main goal of prediction-based anomaly detection methods is to develop a model that can accurately anticipate future values. Anomaly scores are computed from the difference between the true value and the predicted value. The deviation between the expected and actual behavior of the time series is the key to anomaly detection. MGUAD should model the behavior of time series, so MGUAD needs to learn to predict future values close to the true values. We apply an adversarial learning approach to learn the \u201cnormal\u201d behavior of time series. In such a structure, the two players in a two-player min-max game are the generator and the discriminator. The generator attempts to produce samples that may deceive the discriminator, whereas the discriminator attempts to discern between real samples and generated ones. Specifically, when training in an adversarial manner, we need to optimize the generator to fool the discriminator. 
The adversarial loss for the input obtained through the time window is defined in Equation (12). We use a Transformer as the discriminator, as Transformers have been shown to model time-series data well. Ultimately, we sum three loss functions: the reconstruction loss, the prediction loss, and the adversarial loss. In the inference phase, we calculate an anomaly score for each time series. A higher score means the time series is more likely to be anomalous. The anomaly scores consist of two components: deviation scores from the deviation between the predicted value and the true value, and critic scores from the discriminator. Intuitively, the deviation is a measurement that indicates the abnormal behavior of the time series. To reduce the effect of the relative sizes of the different series, we use smoothed scores when calculating the anomaly scores, i.e., by subtracting the mean of each series and dividing by the standard deviation. The largest normalized anomaly score across all the sequences is selected as the anomaly score at time t. We consider moments where the anomaly score exceeds a threshold as anomalous time points, and nodes with large anomaly scores as anomalous sensors. Algorithm 1: Multivariate Time-Series Anomaly Detection. Training. Input: multivariate time-series data. Output: model parameters (generator and discriminator). For each epoch within the number of training iterations: initialize the node embeddings; apply node masking, dropping graph nodes by a certain percentage; for each node, calculate the embedding similarity with the other nodes and fill in the adjacency matrix with the top-k most similar nodes as neighbors; apply time-series masking; calculate the attention scores from the adjacency matrix; reconstruct the masked data through the attention mechanism; 
predict the value at the next moment through the attention mechanism; run the discriminator; update the discriminator; update the generator; and record the parameters of the current iteration. Inference. Input: multivariate time-series data. Output: the time points at which anomalies occurred. Predict the values; for each node, calculate the loss between the predicted value and the true value; compute the anomaly detection score at each time t; and report the anomalous time points. The overall algorithm is shown in Algorithm 1. We first describe the utilized datasets and experimental settings in this section. Then, the results of the experiments are shown, and we subsequently analyze them. Three public datasets are utilized in our experiments to evaluate our model, as summarized in the corresponding table: SWaT (https://mlad.kaspersky.com/swat-testbed/, accessed on 10 August 2022), WADI (https://itrust.sutd.edu.sg/itrust-labs_datasets/dataset_info/, accessed on 10 August 2022), and KDDCUP99. They are all multivariate time-series datasets and contain sufficient data for training and testing. The baselines include a traditional statistical method (PCA), a machine learning method (isolation forest), and a traditional deep learning method (LSTM). The remaining baselines were complicated combined models, which have proven to achieve outstanding performance in related tasks in recent years. PCA: Principal component analysis is a linear dimensionality-reduction method. Isolation Forest: Isolation forest is an efficient tree-based machine learning method. LSTM: LSTM is a classical recurrent neural network for sequence modeling. LSTM_VAE: VAE is a common deep learning framework that learns the representation of data after dimensionality reduction by fitting the distribution of the data. This self-encoding architecture mainly performs anomaly detection through reconstruction; that is, it treats anomalous data as noise, considering that the data are compressed to retain only normal information while losing anomalous information. 
It discriminates anomalies by comparing the reconstructed data with the original data. LSTM_VAE combines LSTM with a VAE. DAGMM: DAGMM combines a deep autoencoder with a Gaussian mixture model. MadGAN: MadGAN is a GAN-based anomaly detection method for time series. USAD: USAD is an unsupervised method built on adversarially trained autoencoders. GDN: GDN is a prediction-based graph neural network method. CAE: CAE is an autoencoder-based method. Empirically, we set the size of the sliding window to 5 and the embedding dimension of the nodes to 64 (128 for WADI). We trained our model using the Adam optimizer. The learning rates of the generator and discriminator were set to 0.001 and 0.0001, respectively. The maximum number of neighbors of a graph node, n, was set to 15 (20 for KDDCUP99). The time-series masking ratio was set to 20%. The weights of the edges when building the graph were the normalized similarity values. To evaluate the anomaly detection performance, three metrics were used in our work. The correlations among nodes support the construction of the graph. To analyze the effectiveness of the generated graph structure, we first analyzed the variation trend among the sensors and the similarity of the embeddings and then verified whether the connection relationships of the graph were reasonable. We selected two typical sensors: 1_MV_001, which represents the state of an electronically controlled valve, and 1_FIT_001_PV, which measures the flow rate controlled by this valve. Part of the graph structure learned by the model, including the connection between these two sensors, confirms that the learned relationships are reasonable. To assess the plausibility of the model architecture and the necessity of the masking operation, we carried out ablation experiments. In our model, we aimed to verify whether the masking operation leads to performance improvement and whether multi-masking yields better results than individual masking. For this purpose, four configurations were compared: Without masking: retaining the original model without any masking operations, meaning that the model does not reconstruct masked data and can utilize all nodes in the generated graph for predictions. With graph masking: stochastically adding masks to the graph nodes of the base model, so that the model can only use some of the neighbors of each node for predictions; the graph structure is not fully accessible, enhancing the model\u2019s ability to utilize correlations among different time series. With time-series masking: masking a portion of the input of the model, obtained through sliding windows, and then having the model reconstruct the masked samples using the unmasked generated graph structure. With both edge masking and time-series masking: the model\u2019s predictions are obtained by combining the two masking strategies with a fine-tuned masking ratio; this operation is called multi-masking. To fix the other hyperparameters during the experiments, we set the masking ratio at each step to 20%. Since fixed hyperparameters were used to maintain the consistency of the experiments, the results may be slightly lower than those with optimal hyperparameter settings. 
However, they are sufficient for determining the performance trend of the model under different masking strategies. We analyzed the specific characteristics of the different masking strategies. The use of graph masking led to non-negligible improvements in the evaluation metrics. As shown in the results table, the combined performance of multi-masking outperformed that of individual masking, demonstrating that multi-masking can combine the advantages of the two individual masking operations and can be adapted to different datasets, making full use of the respective advantages of the different masking strategies. We also found it interesting that, in the absence of graph masking operations, the initial graph structure input can significantly affect the training and convergence speed, despite our strategy to assist in the initial graph creation. However, after we performed the graph node masking operation, we were able to significantly reduce the convergence time of the training. That is, graph node masking enhances the model\u2019s ability to construct graph structures and reduces the instability caused by node embedding when building the graph. To evaluate the effect of different masking ratios on the model\u2019s performance, we kept the other hyperparameters fixed and adjusted the masking ratio of the time series and graph. We changed the input window of the model to 10 to achieve a masking ratio granularity of 0.1 for the temporal sequences, meaning that MGUAD randomly sets the value at one moment of each sequence to 0. Note that the model may not score optimally on the metrics due to the change in the timing window, but we can determine the trend through experiments using different sizes of the timing window. 
Given the proven necessity of the masking operation, we conducted experiments using masking ratios ranging from 0.1 to 0.6 and report the results. Unsupervised multivariate time-series anomaly detection is particularly important in real-world applications. In this paper, we propose a novel multi-masking model, MGUAD, which can effectively capture the temporal and spatial correlations present in the input data. MGUAD can automatically build the graph structure of dependency relationships between sensors and update this structure as the relationships change. An ablation study was conducted to verify the contribution of the multi-masking strategy in the proposed model, which enhances the robustness and learning abilities of the model and makes it easier to deal with difficult and unbalanced data. We have also conducted comprehensive experiments on three public datasets. The experimental results demonstrate that our model outperforms state-of-the-art anomaly detection methods. For future work, we intend to extend our experiments to optimize MGUAD in terms of building initialization graphs, the hyperparameters of the model, masking strategies, etc. We also aim to conduct further research on the interpretability of this GNN-based model."}
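The graph-building step described in the MGUAD text (Pearson correlation over the first m samples, normalized weights, a similarity threshold s, and at most n neighbors per node) can be sketched in plain Python. This is a minimal illustration under my own assumptions, not the paper's code: the names `pearson` and `build_graph` are mine, and taking the absolute correlation as the normalized weight in [0, 1] is an assumption.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    k = len(x)
    mx, my = sum(x) / k, sum(y) / k
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def build_graph(series, m, s, n):
    """Build weighted edges between sensors from the Pearson correlation
    of their first m samples; drop edges with weight below s and keep at
    most the n strongest neighbors per node."""
    sims = {}
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            w = abs(pearson(series[i][:m], series[j][:m]))  # weight in [0, 1]
            if w >= s:  # threshold s filters weak edges
                sims.setdefault(i, []).append((w, j))
                sims.setdefault(j, []).append((w, i))
    edges = {}
    for node, nbrs in sims.items():
        nbrs.sort(reverse=True)
        edges[node] = [(j, w) for w, j in nbrs[:n]]  # cap neighbors at n
    return edges
```

In MGUAD this initial graph would then be refined during training by blending these sample-based similarities with cosine similarities between learned node embeddings.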
+{"text": "The publicly available dataset GSE73072, containing samples exposed to four respiratory viruses, including respiratory syncytial virus (RSV), was used as input data. Various preprocessing methods and machine learning algorithms were implemented and compared to achieve the best prediction performance. The experimental results showed that the proposed approaches obtained a prediction performance of 0.9746 area under the precision-recall curve (AUPRC) for infection prediction (SC-1), 0.9182 AUPRC for symptom class prediction (SC-2), and 0.6733 Pearson correlation for symptom score prediction (SC-3), outperforming the best leaderboard scores of the Respiratory Viral DREAM Challenge. Additionally, over-representation analysis (ORA), which is a statistical method for objectively determining whether certain genes are more prevalent in pre-defined sets such as pathways, was applied using the most significant genes selected by feature selection methods. The results show that pathways associated with the \u2018adaptive immune system\u2019 and \u2018immune disease\u2019 are strongly linked to pre-infection and symptom development. These findings contribute to our knowledge about predicting respiratory infections and are expected to facilitate the development of future studies that concentrate on predicting not only infections but also the associated symptoms. Respiratory diseases are among the major health problems causing a burden on hospitals. Diagnosis of infection and rapid prediction of severity without time-consuming clinical tests could be beneficial in preventing the spread and progression of the disease, especially in countries where health systems remain incapable. Personalized medicine studies involving statistics and computer technologies could help to address this need. 
In addition to individual studies, competitions are also held, such as the Dialogue for Reverse Engineering Assessment and Methods (DREAM) challenges, run by a community-driven organization with a mission to advance research in biology, bioinformatics, and biomedicine. One of these competitions was the Respiratory Viral DREAM Challenge, which aimed to develop early predictive biomarkers for respiratory virus infections. These efforts are promising; however, the prediction performance of the computational methods developed for detecting respiratory diseases still has room for improvement. In this study, we focused on improving the performance of predicting the infection and symptom severity of individuals infected with various respiratory viruses using gene expression data collected before and after exposure. The publicly available gene expression dataset in the Gene Expression Omnibus, named GSE73072, was used. Respiratory infections are the leading cause of acute illnesses globally in both adults and children, from past to present. Although bacteria, fungi, etc. can cause infection, a large proportion of respiratory infections is caused by viruses. HRV has been identified as the virus most commonly associated with respiratory diseases, accounting for about 40% of infections. Influenza viruses, RSV, and Coronavirus follow HRV in terms of frequency. DREAM is a community-driven organisation with the mission of advancing biomedical and systems biology research through crowdsourcing competitions. Competitions usually focus on tackling a specific biomedical research question, narrowed down to a specific disease. As the competitions are open to researchers around the world, a wide range of ideas and solutions can be presented. This allows for the most effective solution to the problem being sought. 
The Respiratory Viral DREAM Challenge was one of these competitions; it aimed to develop early predictors of susceptibility to and contagiousness of respiratory infections based on gene expression profiles collected before and after exposure. In our approach, we identify the gene subsets (i.e., features) that yield the highest predictive performance. Thanks to this approach, we were able to identify common optimal gene subsets using ORA. This may provide greater insight into the relationship between infection and symptom severity prediction. Furthermore, the pre- and post-exposure analyses also yielded valuable results that may be useful to other researchers for further studies on respiratory viruses. After conducting a literature review, we were unable to find any study that investigates the common aspects of infection and symptom severity. Overall, our study is expected to provide significant benefits for future research in the field, especially for the development and improvement of predictive performance and the statistical identification of biomarker genes. In this study, we aimed to outperform the leaderboard scores of the DREAM challenge for all categories and phases by employing different pre-processing techniques and machine learning methods. Additionally, we have utilized a two-step feature selection method that leverages both wrapper and filtering algorithms to enable the identification of the most effective genes for both infection and symptom predictions. The implementation of a two-step approach allowed us to select the least number of genes. The dataset with accession number GSE73072 includes samples collected before inoculation (i.e., at T.-24 or T.-30 h). Each volunteer was inoculated at time T.0 in a controlled environment with only one of the four different live respiratory viruses. Sampling began 1 day (24 or 30 h) before inoculation and continued at various intervals up to 7 days later. 
However, in this study, we only took into account samples up to 1 day after inoculation because one of the objectives of our study was to determine the early predictors immediately after exposure, which is the same as the objective of the DREAM challenge. To extract gene expression profiles from blood samples, an Affymetrix Human Genome U133A 2.0 microarray with 22,277 probes was used. To understand susceptibility to respiratory infections in humans, samples were collected both before and after infection. Therefore, peripheral blood samples were collected from healthy volunteers starting the day before inoculation. - Sub-Challenge 1 (SC-1): Prediction of viral shedding, i.e., whether the individual is infected or not. A binary outcome to evaluate the infection prediction rate. Aims to find predictors that cause infection. - Sub-Challenge 2 (SC-2): Prediction of symptomatic response to exposure; in other words, predicting whether or not the subject will become symptomatic after exposure. Aims to find predictors that cause severe symptoms. - Sub-Challenge 3 (SC-3): Continuous-valued prediction of the symptom score. Since the discrete-valued symptom score is calculated using the Jackson score, this task includes the direct prediction of the log-transformed version of the Jackson score. Aims to find predictors that cause severe symptoms. Respiratory viral infections are still one of the most common diseases imposing an economic burden on hospitals. Diagnosing as early as possible reduces mortality rates and contributes economically. At this stage, artificial intelligence-based systems are one of the solutions. However, since the viruses that cause respiratory diseases are spread through airborne transmission, it is difficult to determine the exact time of infection and the onset of symptoms from a genetic perspective. This makes it difficult to identify early markers of infection. 
To address these needs, the Respiratory Viral DREAM Challenge was held in 2016; it stands out from many studies in the literature in terms of its use of data sets containing various types of viruses, its evaluation of symptom severity, and its inclusion of the exact time at which each subject was exposed to the virus. Participants of the challenge were expected to make predictions for two phases, pre-exposure (phase 1) and post-exposure (phase 3), on data sets generated by inoculating volunteer subjects with different respiratory viruses. Participants were expected to make predictions in three different tasks (SC-1, SC-2, and SC-3, described above). To form the datasets, gene expression profiles of each sample had been obtained using a microarray with 22,277 probes, each representing one or more genes. These gene expression values were obtained by collecting blood samples at different time points and constitute the input features for the prediction models. For each of the three sub-challenges, participants first made their submissions for the test set of the leaderboard phase. Then, in the second phase, an independent test set was used to evaluate the performance of the submissions. In addition to developing models with high prediction performance, the goal of this challenge was to identify predictors (i.e., genes) that are important for infection as well as symptom development, for both the pre- and post-exposure periods. The main motivation of this study is to improve on the prediction performance of the challenge results using pre-processing and machine learning methods. There are multiple reasons for focusing on the results of the DREAM challenge in this study. First of all, the DREAM challenge included multiple prediction tasks with varying objectives, all of which utilized the same gene expression data. The challenge dataset therefore allowed us to perform a comprehensive analysis of different prediction tasks. 
Second, there is no other publicly available dataset published after the DREAM challenge that contains the four different respiratory viruses along with actual class label information. Third, sampling of gene expression had started before the exposure, which gave us the opportunity to perform pre-infection analysis. This way, we were able to propose models specifically for pre- and post-exposure data as well as various prediction problems. Fourth, the possibility of identifying related common genes that have a role in both infection and symptom development was another motivation for this study. Finally, the results obtained in the DREAM challenge are still open for further improvement, which shows that the prediction problems are not solved yet. It should be noted that there are limited studies in the literature that perform a comprehensive analysis similar to this work: using data for multiple viruses and experiments, using data for multiple time points, computing predictions for the pre-exposure and post-exposure periods, and finding predictors. For SC-1 and SC-2, which are binary classification problems, Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), k Nearest Neighbors (kNN) and XGBoost were used as classifiers. On the other hand, the goal of SC-3 is to estimate the severity of symptoms for a given subject, which is represented as a continuous-valued score; therefore, SC-3 is a regression problem, for which the Lasso, Elastic Net, Ridge and Linear SVR regressors were used. One of the factors that lead to high predictive performance for machine learning methods is the proper tuning of hyper-parameters. When the hyper-parameters of an algorithm are tuned properly, the prediction accuracy can be increased. In our study, each model training and testing experiment was conducted with both optimized and non-optimized models, depending on whether hyper-parameters were optimized or not. In the experiments, we used the open-source library named Optuna to optimize the hyper-parameters. The optimization was combined with the pre-processing approaches (e.g., feature selection and virus merging), and the results for optimized and non-optimized versions of the models were reported. 
Details of the parameter spaces of the algorithms are given in a supplementary file. The leave-one-out cross-validation (LOOCV) technique was preferred during parameter optimization. In each iteration, one sample is marked as validation data and the rest is used to train the model with the specified parameter set. In the end, a prediction is obtained for each sample and the final accuracy is computed over the predictions obtained for all samples. This accuracy indicates the performance of the parameter set. The LOOCV is repeated for all hyper-parameter combinations sampled using random search, and the particular hyper-parameter set that gives the best LOOCV performance is selected as the optimum. To find the best values of the hyper-parameters, the overall accuracy is optimized for SC-1 and SC-2 and the Pearson correlation for SC-3. The hyper-parameter optimization steps explained above were performed for all pre-processing methods. Our main goal is to predict the subject\u2019s infection, symptom presence, and symptom severity as accurately as possible. In this article, we propose machine learning-based models that take gene expression profiles of subjects as input and make predictions about infection and symptoms. All methods and the pre- and post-processing codes were implemented using the Python programming language. To implement the classification and regression algorithms, we used the open-source machine learning library of Python called scikit-learn. The sampling process was not performed in all time spans for each subject, which causes a missing value problem. For example, blood samples of subjects with IDs \u201c3013\u201d and \u201c3015\u201d were not collected at the T.-24 time point, nor were the samples of subject \u201c3014\u201d at T.0. 
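The tuning loop described above, random search over hyper-parameter combinations with each candidate scored by LOOCV accuracy, can be sketched as follows. This is a minimal illustration assuming scikit-learn; `tune_with_loocv`, the logistic-regression stand-in model, and the toy parameter grid are hypothetical, not the study's actual code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def tune_with_loocv(X, y, param_grid, n_trials=10, seed=0):
    """Random search: sample hyper-parameter sets, score each by LOOCV accuracy."""
    rng = np.random.default_rng(seed)
    best_acc, best_params = -1.0, None
    for _ in range(n_trials):
        # Draw one candidate value per hyper-parameter (here only "C").
        params = {name: rng.choice(values) for name, values in param_grid.items()}
        model = LogisticRegression(C=float(params["C"]), max_iter=1000)
        # LOOCV: each sample is held out once; accuracy is computed over
        # the held-out predictions of all samples.
        preds = cross_val_predict(model, X, y, cv=LeaveOneOut())
        acc = accuracy_score(y, preds)
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_params, best_acc
```

For SC-3 the score would instead be the Pearson correlation of the held-out regression predictions.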
Consequently, for a given experiment, the number of samples at different time points may not be equal, and such unbalanced sample numbers must be considered in the experimental analysis so that machine learning models can be trained and tested systematically in a way that combines information from multiple time points. To address this issue, those time points that do not include data for all subjects of a given phase and experiment, or those subjects who do not have data at all time points of a given phase and experiment, could have been excluded from the analysis. However, to allow a fair comparison between the challenge results and our proposed models, neither subjects nor time points were discarded. Instead, we propose the single time point and experiment (STPE) and average of features (AF) approaches to process data from all subjects and time points. Our experiments include two main stages. In the first one, machine learning models are applied only to preprocessed datasets obtained using the STPE, AF, and/or virus merge (VM) approaches, which are explained below in more detail. This stage shows the prediction performance of the full use of gene expression profiles. The second stage consists of applying feature selection for the prediction of respiratory infection and the determination of significant genes that have an impact on the prediction of infection and symptoms. In the STPE approach, predictions are computed for each phase (i.e., phase 1 or phase 3) because our goal is to make a prediction for a phase that spans multiple time points, rather than for a specific time point. The final class distributions are obtained by averaging the probabilities obtained for the different time points. If gene expression data are not available for a subject at particular time points, these time points are excluded and the distributions obtained for the remaining time points are used to calculate the average. 
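The per-phase averaging just described, one class-probability distribution per time point with missing time points skipped, reduces to a few lines. A minimal numpy sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def phase_probabilities(per_timepoint_probs):
    """Average class-probability distributions over the time points of a phase.

    per_timepoint_probs: one entry per time point; each entry is either a
    probability vector (np.ndarray) or None when the subject has no sample
    at that time point. Missing time points are simply skipped.
    """
    available = [p for p in per_timepoint_probs if p is not None]
    if not available:
        raise ValueError("no time points with data for this subject")
    return np.mean(available, axis=0)
```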
For SC-3, instead of a class probability distribution, symptom severity is predicted and averaged to calculate the phase prediction for each subject. As described above, time points at which the subject has no gene expression samples are ignored. The STPE approach allows us to use data for all subjects and time points available for a given phase and experiment. Samples related to each time point of the experiment are treated as a separate dataset in the STPE approach. Machine learning models are trained separately for each dataset belonging to a particular time point and experiment. After training, for each experiment in the test set, the class probability distributions of the subjects are predicted for each time point in each phase by the machine learning models. The AF approach simply uses the average of the gene expression profiles, and hence time point information is ignored. In this way, despite the fact that gene expression signals of the subjects can be weak or strong at different time points, distinctive signals can be captured for all subjects using the AF approach, which also facilitates the identification of key gene expressions that impact disease prediction. Although ignoring time point information may be considered a downside of this approach, from another perspective it may also be an advantage, since the timing of symptoms varies from person to person and even from virus to virus. For example, phase 3 contains eight different time points in the HRV DUKE experiment. While some subjects may become symptomatic between time points T.4 and T.12, others may show symptoms after T.12. As machine learning models cannot be trained/tested for each subject individually depending on the time point, symptom signals from all subjects should be acquired in a generalized model. 
This is because we assume that the changes in gene expression also begin with the onset of symptoms, making it easier to capture the changed signals. Feature selection (FS) is defined as the process of eliminating redundant and irrelevant features from a dataset to improve the performance of a learning algorithm. Our proposed two-step FS method includes both filtering and wrapper approaches. The process starts with applying a filtering method to the training set. During the filtering step, the correlation between the output label and each feature (i.e., gene expression) is calculated; F-statistics were used to measure the degree of correlation between a given feature and the output label. Then, the features are sorted in descending order by their correlation score, and the training set is re-arranged according to the new order of the features. The second step aims to find the best subset of features. For this reason, starting from the most highly correlated feature, a subset is formed by adding the next feature at each iteration. The performance of each subset is evaluated using a wrapper algorithm and a LOOCV experiment on the training set. Similar to hyper-parameter optimization, the overall accuracy for SC-1 and SC-2 and the Pearson correlation coefficient for SC-3 are optimized as the performance metrics to find the best feature subset. Since the main objective of feature selection is to reduce the number of dimensions, the smallest number of features that achieves the highest predictive performance is marked as the optimal subset of features. For example, if the first three and the first 20 features achieve the same highest accuracy of 75%, the first three features are selected as the optimal set. Once the best feature subset is found using the training set, the test set is re-ordered using these features. Like the filtering step, the wrapper algorithms also differ between the sub-challenges. 
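A compact sketch of the two-step procedure above, assuming scikit-learn: features are ranked by F-statistic (filter step), then nested subsets are scored by LOOCV with a wrapper classifier, keeping the smallest subset that reaches the best score. The function name and the logistic-regression wrapper are illustrative stand-ins:

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def two_step_feature_selection(X, y, max_features=10):
    """Filter: rank features by F-statistic. Wrapper: grow the subset one
    feature at a time and keep the smallest subset with the best LOOCV
    accuracy."""
    order = np.argsort(f_classif(X, y)[0])[::-1]      # descending F-score
    best_acc, best_k = -1.0, 1
    for k in range(1, min(max_features, X.shape[1]) + 1):
        cols = order[:k]
        preds = cross_val_predict(LogisticRegression(max_iter=1000),
                                  X[:, cols], y, cv=LeaveOneOut())
        acc = accuracy_score(y, preds)
        if acc > best_acc:                            # strict '>' keeps k minimal
            best_acc, best_k = acc, k
    return order[:best_k], best_acc
```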
While XGBoost, LR, and kNN were employed for SC-1 and SC-2, Lasso, ElasticNet and Gradient Boosting Regressor were used for SC-3 as the prediction algorithms for the wrapper method. As explained in the introduction and problem definition sections, the main goal of this study is to propose machine learning models that achieve better prediction performance than the best performing methods of the Respiratory Viral DREAM Challenge. Detailed results are reported for the data pre-processing approaches (i.e., STPE, AF, and VM); the corresponding table only includes our results that are better than the best-performing leaderboard results of the challenge. It can be concluded that the data pre-processing approaches were able to produce better prediction scores than the leaderboard results in all sub-challenges and phases. In particular, AF-based models obtained the highest values, especially in post-exposure prediction. Moreover, although the highest AUPRC was 0.75 for the SC-2 phase 3 category of the challenge leaderboard, our AF models achieved a much higher score with an AUPRC of 0.92. Because SC-3 was associated with continuous symptom severity, the predictive performance of the models was evaluated using the Pearson correlation coefficient, which describes the strength of the linear relationship between two variables. While a correlation of one indicates an exact linear relationship, a correlation of zero represents no linear relationship between the variables. In addition, we also calculated the mean square error (MSE) for each model, because the MSE is one of the most widely used evaluation measures, especially in regression problems, and could also be informative. Among our proposed models for SC-3, LinearSVR with the STPE approach achieved a Pearson correlation of 0.5897 for pre-exposure prediction. For SC-3 phase 3, our models also achieved higher values than the best-performing models of the DREAM challenge (a Pearson correlation of 0.5963). 
When the results of SC-3 are evaluated by MSE, the kNN regressor based on AF VM scored the lowest with an MSE of 0.1822. Even though the MSE is expected to take low values when the Pearson coefficient takes high values, this is not always observed in our results. This is because, while the Pearson coefficient measures the strength of the relationship between the two variables, the MSE expresses the overall error of the model. Comparing the data pre-processing approaches (i.e., STPE, AF, and VM) and prediction methods, there is no single winner that performs best in all prediction tasks and phases. Furthermore, hyper-parameter optimization did not always improve the prediction performance of the models, possibly because the number of samples in the training set is small. Consequently, our proposed approaches achieved improvements of 1\u20133%, 7\u201316%, and 5\u20139% for SC-1, SC-2, and SC-3, respectively. For the ReliefF models, the number of selected features is 1 for DEE5 H3N2 at time T.-24, whereas this number is 1 for DEE4X H1N1 and 47 for HRV Duke at time T.0. Because phase 1 includes time indices prior to and including T.0, the total number of unique features used in the ReliefF experiment of SC-1 phase 1 became 55 after removing duplicates (some features may be selected repeatedly in multiple time points and/or experiments). The model that used the AF approach for data pre-processing, the chi-square method for FS and LR as the classifier achieved an AUPRC of 0.8187 in the SC-2 phase 1 category, although only 8 gene expression features were used. Considering that the total number of features at each time point is 22,277, it can be interpreted that this model achieved a reasonably high score despite the small number of features. In addition, the Fisher score-based models achieved the best performance among all models using only 60 gene expression features in the SC-1 phase 3 category. 
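The divergence between the two metrics noted above is easy to reproduce: a prediction shifted by a constant keeps a perfect Pearson correlation while its MSE grows large. A minimal numpy illustration with hypothetical values:

```python
import numpy as np

y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_off  = y_true + 5.0                                 # perfectly correlated, but biased
y_near = y_true + np.array([0.1, -0.1, 0.1, -0.1])    # small errors, slight decorrelation

mse = lambda a, b: float(np.mean((a - b) ** 2))

r_off  = np.corrcoef(y_true, y_off)[0, 1]    # correlation is shift-invariant: r = 1
r_near = np.corrcoef(y_true, y_near)[0, 1]   # slightly below 1
# y_off has the higher Pearson r yet the far larger MSE (25 vs 0.01).
```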
Similarly, it can be concluded that the FS approaches highly improved the prediction of symptom severity scores, with a Pearson correlation of 0.67. The selected genes and their p-values are shared as a supplementary file. The number of genes common to SC-1 and SC-2 in phase 1 is 6, and this number is 2 for the genes that are common to SC-1 and SC-2 in phase 3. When evaluated according to sub-challenge, 8 genes were selected as common to SC-1 phase 1 and SC-1 phase 3, and 28 genes were at the intersection of SC-2 phase 1 and SC-2 phase 3. In addition, only 1 gene was selected for all sub-challenges and phases, namely \u201cATP7A\u201d. The union of the genes selected by the FS methods is also reported. Despite the paucity of overlap among the common genes, over-representation analysis (ORA) was performed on the union of intersecting genes to gain better insight into the underlying association of the genes with specific biological pathways. ORA is a simple statistical approach that determines which biological functions or processes (or pathways) are significantly enriched among the genes in a given list. The degree of enrichment is expressed as a p-value calculated using a hyper-geometric test (or Fisher\u2019s exact test), indicating whether terms are found in the gene lists more frequently than expected by chance. A p-value less than 0.05 is typically considered to be statistically significant. To perform ORA, we used the WebGestalt platform, a web-based toolkit that takes a gene list as input and performs a functional enrichment analysis to interpret the given list. Because we want to extract the underlying biological factors before and after exposure and the reasons for the symptoms, we need to analyze each sub-challenge and phase separately. Therefore, the intersecting genes for SC-1, SC-2, phase 1, and phase 3, whose numbers are listed above, were analyzed separately. As a result of ORA, the ratio and false discovery rate (FDR) of the enriched pathways for SC-2 and phase 1 are reported separately; the enriched pathways include Type I Diabetes Mellitus, Intestinal Immune Network for IgA Production, Viral Myocarditis and Phagosome. 
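For reference, the hyper-geometric upper-tail p-value that ORA tools such as WebGestalt report can be computed directly. This stand-alone function is an illustration of the test, not WebGestalt's implementation:

```python
from math import comb

def ora_pvalue(N, K, n, k):
    """P(at least k pathway genes in a list of n genes) when the universe has
    N genes, K of which belong to the pathway: the hyper-geometric upper tail."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

For example, finding 3 or more genes of a 50-gene pathway in a 100-gene list drawn from a 20,000-gene universe (expected overlap 0.25) is significant at the 0.05 level.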
ORA found that HLA-DQA1, HLA-DQA2, and HLA-DRB4 for phase 1 and HLA-DQB1, HLA-DRB4, and HLA-DQA1 for SC-2 were the genes that had the maximum overlap with the enriched pathways. In addition to these analyses, genes that are commonly selected among different experiments were also explored. Although the viruses in our experiments are different, they are all associated with a respiratory disease. Therefore, common genes affected by different viruses may also be useful for understanding the disease mechanism. For this purpose, the major genes commonly selected in different experiments were also obtained and are provided as a supplementary file. The number of correctly and incorrectly classified samples for each respiratory virus on the test set is also reported. In our last experiment, we compared our method with DeepFlu, which is based on deep neural networks and was published recently in the literature. DeepFlu was specifically developed to predict symptom severity, which corresponds to SC-2, and was applied to the datasets for DEE2 H3N2 and DEE3 H1N1. It should be noted that DeepFlu used gene-annotated versions of both datasets, while we used probe-annotated versions. This resulted in different numbers of input features for the prediction models. Additionally, DeepFlu utilized a Leave-One-Person-Out (L1PO) cross-validation experiment on combined samples from the T0 and T24 time points to evaluate performance separately on the DEE2 H3N2 and DEE3 H1N1 datasets. In our model, we used the AF approach for preprocessing and the XGBoost algorithm with default hyper-parameter settings, without feature selection, for computing predictions. Since the DeepFlu results were obtained with L1PO, our model was evaluated with LOOCV using only the samples belonging to the DEE2 H3N2 and DEE3 H1N1 experiments, to make a fair comparison. The results show that our method achieved an AUPRC of up to 0.956, outperforming DeepFlu\u2019s AUPRC of 0.76 in predicting the SC-2 label of DEE3 H1N1. 
On the other hand, our method obtained an AUPRC of 0.946, while DeepFlu obtained 0.901, for predicting the SC-2 label of DEE2 H3N2. The results of both models are provided as a supplementary file. In this study, we aimed to improve the accuracy of predicting infection and symptom development in individuals exposed to respiratory viruses by using different machine learning models and approaches. Our results were compared with the Respiratory Viral DREAM Challenge, which is considered one of the most important competitions in the field. Among the proposed approaches, STPE, which treats each time point separately, and AF, which combines gene expression at different time points, performed better than the Challenge leaderboard in all categories in terms of predicting infection and symptom severity (up to 0.93 AUPRC) compared to the methods submitted to the challenge. Although merging samples from the same virus to enlarge the training dataset (i.e., the VM approach) improved the prediction performance for some of the tasks, this was not observed in all settings. Furthermore, analysis of the mutual genes selected by the feature selection methods showed that the \u201cimmune system\u201d has a strong association with symptom development. These findings also showed congruity with the biological studies in the literature. In future studies, the proposed approaches and methods will be applied to other gene expression datasets collected with a different microarray chipset, e.g., from Illumina. The predominant genes will be investigated during symptomatic peak periods, considering gene expression up to 120 h. Furthermore, the Gene Set Enrichment Analysis (GSEA) approach will also be explored. Supplemental Information 1: 10.7717/peerj.15552/supp-1. Supplemental Information 2: 10.7717/peerj.15552/supp-2. Supplemental Information 3: 10.7717/peerj.15552/supp-3."}
+{"text": "Although diabetes mellitus is a complex and pervasive disease, most studies to date have focused on individual features, rather than considering the complexities of multivariate, multi-instance, and time-series data. In this study, we developed a novel diabetes prediction model that incorporates these complex data types. We applied advanced techniques of data imputation and feature selection. Additionally, we utilized self-supervised algorithms and transfer learning to address the common issues with medical datasets, such as irregular data collection and sparsity. We also proposed a novel approach for discrete time-series data preprocessing, utilizing both shifting and rolling time windows and modifying the time resolution. Our study evaluated the performance of a progressive self-transfer network for predicting diabetes, which demonstrated a significant improvement in metrics compared to non-progressive and single self-transfer prediction tasks, particularly in AUC, recall, and F1 score. These findings suggest that the proposed approach can mitigate accumulated errors and reflect temporal information, making it an effective tool for accurate diagnosis and disease management. In summary, our study highlights the importance of considering the complexities of multivariate, multi-instance, and time-series data in diabetes prediction. Diabetes-associated mortality rates have increased by 3% across all age groups from 2000 to 2019, as reported by the World Health Organization (WHO)2. Diabetes and its complications are leading causes of disability and mortality. Early diagnosis is crucial for effective disease management and for improving the quality of life of patients3. Accordingly, several trials have been conducted to predict the development of diabetes accurately6. Diabetes mellitus is a significant global health issue: approximately one in 11 adults is affected worldwide, and 90% of the cases are type 2 diabetes mellitus (T2DM)7. 
Statistical methods such as time series regression and dimension reduction have traditionally been used for disease prediction8. However, the recent development of deep learning algorithms has enabled the application of deep neural architectures in diverse research tasks, including medical data with high correlations and dimensions9. For instance, the first investigation of multivariate time series prediction of diabetes using deep learning models7 introduced the possibility of applying long short-term memory (LSTM) and gated recurrent unit (GRU) networks to clinical data. The PIMA Indian dataset (PID), provided by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), is one of the most widely used datasets for diabetes prediction, because the dataset has a high prevalence of diabetes onset and includes several important features. Various approaches, including artificial neural networks (ANN), naive Bayes (NB), decision trees (DT), and deep learning (DL), have been explored to provide an effective prognostic tool for healthcare professionals5. Recent approaches have successfully incorporated CNN, CNN-LSTM, and CNN-BiLSTM, significantly enhancing the metrics on a large scale11. Most studies on the Korean Genome and Epidemiology Study (KoGES) dataset have focused on identifying correlations between certain factors and diabetes development using statistical methods15. These studies have demonstrated correlations between diabetes development and factors such as waist circumference12, prehypertension, hypertension, and glycated hemoglobin levels13. However, the only application of time series prediction of diabetes using deep learning models based on the KoGES dataset thus far has been limited to a vanilla LSTM model that does not reflect the characteristics of this data in its structure16. 
Additionally, there is a need for a novel method that can enhance disease prediction in more imbalanced datasets with a significantly larger number of features and instances. Therefore, we have developed a sophisticated deep learning framework that can detect dynamic temporal patterns in feature combinations and label properties, enhancing diabetes development prediction performance. This framework includes adequate data preprocessing methods, such as feature selection and data imputation. Using the KoGES Ansan and Ansung dataset, we aimed to improve diabetes prediction using multivariate, multi-instance, and time series data. Despite the multifactorial nature of diabetes development, few studies have focused on predicting diabetes using multivariate, multi-instance, and time series data. The irregular visit patterns of patients (varying frequency and stay length) and the diversity of individual pathologies make collecting and organizing time series data challenging18. Unlike conventional statistical methods relying on variance distribution, such as correlation analysis8, deep learning algorithms have been developed and applied successfully to time series analysis after the recent advances in artificial intelligence (AI)9. Disease prediction is a domain where time series analysis has been applied, targeting specific features or the overall condition of the patient. Active research related to chronic diseases, including diabetes and hypertension, has been conducted due to the growing size of the affected populations and the social interest in these conditions. For example, recent studies have addressed the application of deep neural networks (DNNs) in hypertension19, as well as the use of LSTM and multi-layer perceptrons (MLPs) in heart disease20. Furthermore, the recent Coronavirus disease 2019 (COVID-19) pandemic has led to various time series prediction tasks, including the use of deep learning algorithms such as LSTM, GRU and bi-LSTM22. 
Since many researchers continue to focus on data engineering and the optimization of existing models, there should be attempts to develop more sophisticated deep learning frameworks for novel insights. Meanwhile, recent research on time series prediction in the medical domain has focused on self-supervised algorithms to overcome the problems associated with inadequately labeled and incompletely collected data. These algorithms aim to capture temporal dynamics and enable early intervention for patients23. However, inherent difficulties in detecting the progression of associated features still pose problems, including multiple covariates, progression heterogeneity, and data storage issues24. Despite these challenges, time series prediction research continues to advance in the medical field, with potential for significant improvements in disease diagnosis and management. Time series analysis is a commonly used technique in various fields where the collected data has temporal dimensions. In such cases, time series classification frequently benefits from the enhancement of convolutional neural networks (CNNs)25. Therefore, careful consideration of the details of transfer learning implementation is necessary. One of the methods of transfer learning involves model weight initialization, where the knowledge acquired from the source domain is transferred to the target domain by initializing model weights, improving the performance of the target task. Recent research in transfer learning investigates diverse forms of datasets, including time series data, image data, and text data28. In time series prediction, studies have focused on multimodal data, multitask learning, and self-supervised approaches for the informative fusion of available datasets, providing appropriate task results29. 
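Weight-initialization transfer, as mentioned above, amounts to copying the source model's parameters into the target model before training on the next task, optionally freezing some layers. A toy sketch in which models are plain name-to-array dicts (a deliberate simplification of a real LSTM; the function name is ours):

```python
import numpy as np

def transfer_weights(source, target, freeze=()):
    """Initialize the target model's parameters from a source model trained
    on an earlier task; layers listed in `freeze` are marked non-trainable.

    Models are represented here simply as {layer_name: weight_array} dicts.
    """
    trainable = {}
    for name, w in source.items():
        target[name] = w.copy()                 # weight initialization
        trainable[name] = name not in freeze    # optional layer freezing
    return target, trainable
```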
Additionally, researchers have explored the selection of appropriate source domains among diverse datasets to overcome problems frequently encountered in time series datasets, such as missing labels30. However, the shortage of large general datasets remains a challenge for future studies. Humans perceive most time series data not only sequentially but also as a whole. Motivated by this idea, the application of transfer learning and self-supervised learning in this study focuses on ongoing temporal self-data addition and the implementation of this idea in the model structure, where the data is in the same feature space. This approach is expected to improve the performance of time series prediction models in various applications. Transfer learning is a methodology used to convey information across data in neural networks. The three major considerations when developing the model algorithm scheme are what, how, and when to transfer, with the selection of information boundaries for the target task, the knowledge transfer, and the tuning methods, such as pruning and layer freezing, determining transfer learning performance. Our contributions include: the introduction of a novel progressive self-transfer framework for time series disease prediction, which effectively teaches dynamic temporal patterns via downstream classifiers; the introduction of efficacious methods to process discrete time series data, such as shifting and rolling windows and modifying the time resolution, which increase the total number of model trainings used to learn important representative features; and extensive training and evaluation of our method using a large dataset that is multivariate, multi-instance, and in time series. 
Given its ability to handle diverse datasets beyond our current study, our approach has potential extensibility. Here, we present a novel approach to time series disease prediction that adjusts time series data through the modification of time windows and time resolutions, in a manner similar to data masking in self-supervised learning. Our proposed model framework transfers information, including the unseen patterns of variables and the temporal properties of labels, to predict diabetes development in each individual. We also apply ensemble techniques to calibrate multiple learners, demonstrating the potential applications of AI tools in the early prediction of diabetes. Overall, our proposed approach has significant potential to improve the accuracy of diabetes prediction and may have broader implications for other time series prediction tasks in the medical domain. Results are presented in the order of experimental complexity and in the order in which they were conducted. We begin by presenting the results of the baseline experiments, which serve as a reference for comparison with the proposed downstream classifiers. Following this, we present the results of the single progressive self-transfer networks, which function as downstream classifiers. Finally, we introduce the ensemble results, which combine multiple classifiers. Non-progressive self-transfer networks are used as a baseline model for comparison before estimating the performance of each progressive self-transfer network. In this study, we introduce four downstream classifiers, called submodels, and the four non-progressive self-transfer networks are used to predict whether each patient, distinguished by their own code, experiences diabetes at the last time step of the dataset. Despite having data for every time step, we preprocessed the data dimensions to match the final prediction task of the four single progressive self-transfer networks. 
The descriptions of the four non-progressive baseline predictions, which are the outputs of the last time step of the dataset and serve as the baseline outputs of the four progressive self-transfer networks, are shown in the corresponding table. In this stage of our study, we applied progressive self-transfer learning to our model for each task in sequence. Each network had its own first task, which was designed based on the time series data preprocessing methods. These first tasks served as a foundation for the sequential learning tasks that followed. In this sequential stage, the knowledge from the previous learning step was transferred to the next learning step through weight initialization. This approach allowed the transfer of information in a progressive manner over time, resulting in what we call progressive self-transfer networks. The final stage of our study involved the application of ensemble techniques to integrate the results of the single progressive self-transfer networks. As we had four different networks (submodels 1\u20134), we experimented with every possible combination of ensemble networks to identify the trends in the results. Submodel 1 and submodel 2 were downstream classifiers that processed time series data using the shifting and rolling window method, while submodel 3 and submodel 4 were downstream classifiers that processed time series data by doubling and tripling the time resolution. We found that the combination of submodel 1, submodel 2, and submodel 4 produced the best performance in terms of AUC and other metrics overall. Thus, we conducted additional experiments to thoroughly inspect the validity of the combinations. Furthermore, the experimental results were compared to the baseline models: LSTM, GRU, and RNN. The performance of the proposed model is presented in the corresponding figure. Given the complexity of the disease and the unknown interactions between related factors, the accurate prediction of diabetes development is crucial. 
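The two preprocessing families behind the submodels, shifting/rolling a fixed window along the time axis (submodels 1 and 2) and coarsening the time resolution by merging consecutive steps (submodels 3 and 4), can be sketched with numpy; the function names are ours, not the paper's:

```python
import numpy as np

def rolling_windows(series, size, step=1):
    """Shift a fixed-size window along the time axis, one training example
    per window position."""
    T = series.shape[0]
    return np.stack([series[s:s + size] for s in range(0, T - size + 1, step)])

def coarsen_resolution(series, factor):
    """Double or triple the time resolution (factor=2 or 3) by averaging
    consecutive time steps; trailing steps that do not fill a group are
    dropped."""
    T = (series.shape[0] // factor) * factor
    return series[:T].reshape(-1, factor, *series.shape[1:]).mean(axis=1)
```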
In this study, we proposed a progressive self-transfer network that incorporates time series data preprocessing methods, shifting and rolling window and modifying time resolution, to reflect feature representations in multivariate and multi-instance time series analysis. The proposed method also accounts for dynamic temporal patterns, including temporal imbalance of the label, which is common in medical data. The gradual improvement of the metric performances is shown in Fig.\u00a0. Furthermore, our study contributes to the field by utilizing deep learning methods for time series prediction tasks on the KoGES dataset. To the best of our knowledge, there have been limited studies on this dataset using deep learning techniques. Previous studies have used conventional algorithms and primarily focused on identifying the association of a single or a few factors with the development of diabetes. In contrast, our approach considers multiple relevant features to predict diabetes development, providing a more comprehensive understanding of the disease. As such, our study offers a significant contribution to the field of diabetes prediction using time series analysis and deep learning methods. In order to optimize the LSTM model used in this research, we conducted a series of experiments adjusting various parameters. These experiments were performed across all stages, including the non-progressive self-transfer networks, single progressive self-transfer networks, and their ensemble applications. We tested LSTM models with different numbers of layers, and found that the model with five layers performed slightly better than the four-layer model in some metrics, but with a significant increase in standard deviations. We also manually adjusted the dropout rate and input unit size. Additionally, we optimized the methods used to modify the time series data, including the initial window size and the expansion of time intervals. 
Since we had a limited number of discrete time steps, we chose to double or triple the time resolution, which allowed us to finalize the structure of the input and output data. It is important to note that our study may have potential bias or human error issues, which can arise in discriminative supervised models. One potential source of human error is in the labeling process, which can be influenced by misreported survey responses or biases introduced during clinical measurements. Moreover, our study did not discriminate between type 1 diabetes mellitus (T1DM) and T2DM, which limits our understanding of the participants\u2019 diabetes mellitus development. To address these limitations, future studies could focus on disease-specific data collection to improve the reliability of the labels and allow the discrimination between T1DM and T2DM. Additionally, incorporating genetic information such as single nucleotide polymorphisms (SNPs) could enhance the model\u2019s background knowledge and enable personalized patient care. SNPs, in combination with lifestyle habits, could serve as key factors in diabetes development and support more effective patient interventions. In conclusion, our study presents a novel progressive self-transfer learning network that integrates information over time to predict diabetes development at a target time-step with remarkable performance. Our method has several advantages, including mitigating the problem of accumulated error in recursive neural networks, predicting multiple timepoints and utilizing each result in sequence, and improving the learning of important representative features through the modification of time series data. Our findings have implications for the field of digital healthcare for chronic diseases, particularly in the potential of AI to improve clinical efficiency and aid in early diagnosis and intervention for diseases like diabetes mellitus. 
Indeed, as the proposed model helps to detect more patients with improved metrics, our findings can significantly enhance the opportunity for early detection. With these promising results, our method can contribute to the development of digital healthcare and preventative medicine, enhancing the expertise of healthcare providers and improving the health outcomes of patients. The Korean Genome and Epidemiology Study (KoGES) Ansan and Ansung dataset was used in this research. The KoGES consortium aims to investigate genetic\u2013environmental factors and their interactions in common and complex diseases in Koreans. The study is an ongoing community-based cohort study conducted by the Korea Disease Control and Prevention Agency (KDCA) and the Ministry of Health and Welfare. The dataset contains biannual medical checkup and survey data of participants (40\u201369\u00a0years old) from 2001 to 2018, residing in either urban (Ansan) or rural (Ansung) areas. The cohort baseline with 10,030 participants was established in 2001\u20132002, and 6157 participants attended the last time-step in 2017\u20132018. The objective of the study was to predict whether each participant would develop diabetes at the last time-step. The American Diabetes Association (ADA) guidelines were followed, and a participant was considered to have diabetes if they fulfilled at least one of the criteria listed in Table\u00a0. Out of the 3995 participants who participated in every time-step, 3379 were included in the proposed network analysis after excluding those who already had diabetes at the first time-step. We used t(n\u22121) data to generate a label for t(n) data for early prediction of the next time-step, and used the latest six time-step sets of the data as an input for the proposed framework. In this study, we employed the least absolute shrinkage and selection operator (LASSO) feature selection to identify the most relevant features for the proposed classification task. 
First, we sorted all variables and participants in every time-step, resulting in a dataset of 3379 participants and 850 features. We then selected only continuous features with less than 80% missing values in each time-step, resulting in a final dataset of 3379 participants and 56 features. The missing values in the dataset were imputed using bidirectional recurrent imputation for time series (BRITS). LASSO feature selection was then applied to the final dataset to select the most relevant features for the network training process. Feature selection is a crucial step in network training to ensure optimal model performance and efficiency. Grid search was applied to find the most suitable penalty coefficient, resulting in the selection of 48 features with a positive or negative correlation to the label data. The coefficients of the 48 selected features are displayed in Fig.\u00a03. Creatinine in blood serum, hemoglobin in whole blood, body fat, body mass index (BMI), and subscapular measurement (3rd) were identified as the top five features with the largest coefficients in magnitude. The larger the coefficient magnitude, the more influential the explanatory variable. Additionally, the sign of the coefficient indicates a positive or a negative correlation with the label data. The demographic information of the selected 48 features is described as descriptive statistics in Table\u00a0. After performing basic preprocessing steps, LASSO was adopted for feature selection. LASSO is a type of regularized linear regression that controls the penalty strength to shrink insignificant coefficients to zero, reducing dimensionality and minimizing the number of features relevant to the labels simultaneously. Notably, BRITS can perform imputation and classification or regression tasks concurrently, acting as a versatile multi-task learning algorithm. In our study, we utilized BRITS imputation twice. 
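The grid-searched LASSO selection described above can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' code; the 56-feature shape mirrors the paper, but the alpha grid and data-generating process are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 56))            # 56 candidate features, as in the paper
beta = np.zeros(56)
beta[:5] = [2.0, -1.5, 1.0, -0.8, 0.5]    # only 5 truly informative features
y = X @ beta + 0.1 * rng.normal(size=200)

# Grid search over the penalty coefficient (alpha); stronger penalties
# shrink more coefficients exactly to zero.
search = GridSearchCV(Lasso(max_iter=10_000),
                      {"alpha": np.logspace(-3, 0, 20)}, cv=5)
search.fit(X, y)
coef = search.best_estimator_.coef_
selected = np.flatnonzero(np.abs(coef) > 1e-8)  # features surviving the penalty
```

As in the text above, the sign of each surviving coefficient indicates the direction of its correlation with the label, and its magnitude indicates how influential the variable is.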
First, we applied BRITS imputation before the LASSO feature selection step to mitigate any potential biases and to ensure proper selection of appropriate features for the model. Second, we applied BRITS imputation before training the proposed network, but only for the selected 48 features instead of the entire feature set. The BRITS algorithm treats missing values as variables within the bidirectional RNN graph, performing missing value imputation and classification/regression applications simultaneously. Consequently, the combination of variables can impact the accuracy of imputation. Thus, the second application of BRITS imputation helps to exclude less important features from the process, allowing only relevant features to be incorporated into the progressive self-transfer architecture. BRITS is a powerful algorithm that leverages a bidirectional recurrent neural network (BRNN) to impute missing values in multivariate time series data, taking advantage of feature means, standard deviations, and trends. The proposed progressive self-transfer framework was implemented using PyTorch libraries and trained on a Linux Intel(R) Core (TM) i7-9700 CPU with a Nvidia GeForce RTX 2070 SUPER GPU environment. The network was trained and tested with a batch size of 32, a dropout rate of 0.2, and binary cross entropy loss with a sigmoid function as the loss function. The experiments were repeated 10 times to obtain the means and standard deviations of the results. The first task is to predict diabetes development at time step t4 using the data from time steps t1 to t3. The second task is to predict the same at time step t5 using the data from time steps t1 to t4. The sequential progress is achieved by gradually adding time steps, and the best epoch's weight is transferred to the subsequent tasks via weight initialization. The preprocessing methods, shifting and rolling window and modifying time resolution, will be discussed in the next section. 
A pretext task is a task designed to improve the performance of the target prediction. In this study, the pretext task was the binary prediction of diabetes, formulated according to two time series data processing methods. Self-supervised sequences were carried out using transfer learning, which efficiently improved the performance of the pretext tasks over time. Only some fine-tuning was necessary, without changes to the model architecture. Furthermore, the proposed framework increased the total number of model training iterations compared to single whole-data training, enabling the model to better perceive the self-data. In other words, the progressive self-transfer network transferred knowledge in sequence to capture high-level information of the self-data. Self-supervised learning and transfer learning are the key concepts behind the proposed network. Self-supervised learning is a technique where the labels are acquired from the data itself, and part of the data is used to predict the other parts of the data. Following the feature-level data preprocessing, we performed data preprocessing at the time-step level to enable sequential inputs for the proposed model framework. As the KoGES Ansan and Ansung study provides discrete time series data, we had explicit and repeated time intervals between the inspections. Based on a time window interval of two years, we adopted a modification technique that involves changing the window size or rolling the time window gradually. The first method we used is the shifting and rolling window approach. We applied this method in submodels 1 and 2, as shown in schematic diagrams A and B in Fig.\u00a0. The time window interval, I, is defined in the corresponding equation, and the window settings were adjusted to find the best evaluation performance. In this study, a five-fold cross-validation was used to validate the model and to prevent overfitting. The dataset of 3379 participants was divided into five folds based on the binary labels of the last time step. 
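The time-step-level preprocessing described above, an expanding/rolling window over time steps plus a coarser time resolution, can be sketched in plain NumPy. The array shapes and the helper name are illustrative, not the paper's implementation.

```python
import numpy as np

def expanding_tasks(data, labels, start=3):
    """Yield (inputs, target) pairs for progressive tasks: task k uses
    time steps [0, start + k) to predict the label at step start + k."""
    n, T, f = data.shape
    for t in range(start, T):
        yield data[:, :t, :], labels[:, t]

data = np.zeros((10, 6, 4))       # 10 participants, 6 time steps, 4 features
labels = np.zeros((10, 6), dtype=int)

tasks = list(expanding_tasks(data, labels))
# 3 tasks: predict t4 from t1-t3, then t5 from t1-t4, then t6 from t1-t5

coarse = data[:, ::2, :]          # doubled time resolution: keep every 2nd step
```

Each task's prediction target is one step ahead of its input window, matching the sequential tasks described for submodels 1 and 2; slicing with a stride sketches the resolution doubling used by submodels 3 and 4.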
Each fold was used as a validation set in consecutive order, and the mean metrics over the validation sets were used to evaluate the model performance. Accuracy, AUC, recall, precision, and F1 score were considered for the overall binary classification evaluation. All models, including the non-progressive single self-transfer networks and ensemble results, were compared based on these five metrics. In the submodel performance evaluation, AUC was considered the most important metric to determine the best epoch and continue training in sequence, as it reflects the classification performance for all classes. The weight of the best epoch was then transferred to the following similar tasks based on AUC."}
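The five-fold, label-stratified cross-validation loop described above might look like this; the logistic-regression stand-in and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)  # binary label

aucs = []
# Folds are stratified on the binary label, as in the study design.
for tr, va in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression().fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[va], clf.predict_proba(X[va])[:, 1]))

mean_auc = float(np.mean(aucs))   # mean validation AUC across the 5 folds
```

Tracking the fold-wise AUC like this is what allows the best epoch (here, the best fold model) to be chosen on a class-balanced criterion rather than raw accuracy.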
+{"text": "Gene regulatory networks (GRNs) are a way of describing the interactions between genes, which contribute to revealing the different biological mechanisms in the cell. Reconstructing GRNs based on gene expression data has been a central computational problem in systems biology. However, due to the high dimensionality and non-linearity of large-scale GRNs, accurately and efficiently inferring GRNs is still a challenging task. In this article, we propose a new approach, iLSGRN, to reconstruct large-scale GRNs from steady-state and time-series gene expression data based on non-linear ordinary differential equations. Firstly, the regulatory gene recognition algorithm calculates the Maximal Information Coefficient (MIC) between genes and excludes redundant regulatory relationships to achieve dimensionality reduction. Then, the feature fusion algorithm constructs a model leveraging the feature importance derived from XGBoost (eXtreme Gradient Boosting) and RF (Random Forest) models, which can effectively train the non-linear ordinary differential equations model of GRNs and improve the accuracy and stability of the inference algorithm. Extensive experiments on datasets of different scales show that our method makes a sensible improvement compared with the state-of-the-art methods. Furthermore, we perform cross-validation experiments on real gene datasets to validate the robustness and effectiveness of the proposed method. The proposed method is written in the Python language, and is available at: https://github.com/lab319/iLSGRN. The interactions between genes form the dynamic biochemical network known as the gene regulatory network (GRN). Clarifying the biological mechanisms of the cell cycle, damage repair, and apoptosis depends on understanding the information transmission in the biological GRN from a systematic perspective. In addition, a method named MICRAT, based on MIC, has been proposed to infer GRNs. 
The GNW is widely used for the inference of GRNs, and generated the benchmark data for three public GRN inference competitions, DREAM5, DREAM4, and DREAM3. The real E.coli dataset records the expression levels of about 4400 genes under five different environmental perturbations. Using the regulatory gene recognition algorithm, we obtained M regulatory genes (M\u226aG). In the feature fusion algorithm, we applied a non-linear ODEs model to describe gene regulatory relationship behaviors. The non-linear ODEs are more suitable for simulating the dynamic characteristics of genes and processing the time-series data and steady-state series data. Then, XGBoost and RF are used to train the non-linear ODEs model, respectively, and the importance scores derived from the above two models are fused to obtain the final gene regulatory relationships. The overall flow of our method is shown in Fig.\u00a0. We employ non-linear ODEs to construct gene networks. We define the non-linear ODEs model for time-series data (Formula 1) so that the derivative of the expression of target gene j at time t is a non-linear function of the M regulatory genes with respect to target gene j. Furthermore, for discrete time-series data, we utilize the difference equation to approximate the differential equation and simplify Formula 1 to a difference form, where b is the time step and the default value of b is 1. For steady-state series data, we define an analogous formula, where e denotes the different experimental conditions. According to the sparsity of large-scale GRNs, we propose a regulatory gene identification algorithm. The approach calculates the correlation between the target gene and all candidate regulatory genes using MIC. The prediction accuracy of the proposed approach can be improved by identifying the most important regulatory genes. Moreover, our algorithm can effectively shrink the set of potential candidate genes and avoid massive computational costs in the following training step of the machine-learning model. We define the MI between gene x and gene y as I(X; Y), where X and Y represent the expression sequences for genes x and y, respectively. 
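A simplified MIC-style statistic can illustrate the idea of gridding the (x, y) plane and normalizing mutual information. This sketch uses only equal-frequency grids at a few resolutions; it is not the full MIC estimator the authors use, and the function names are hypothetical.

```python
import numpy as np

def normalized_mi(x, y, bx, by):
    """MI on an equal-frequency bx-by-by grid, normalized by log2(min(bx, by))."""
    cx = np.quantile(x, np.linspace(0, 1, bx + 1)); cx[0] -= 1e-9
    cy = np.quantile(y, np.linspace(0, 1, by + 1)); cy[0] -= 1e-9
    ix = np.clip(np.searchsorted(cx, x, side="right") - 1, 0, bx - 1)
    iy = np.clip(np.searchsorted(cy, y, side="right") - 1, 0, by - 1)
    p = np.zeros((bx, by))
    np.add.at(p, (ix, iy), 1.0)          # joint histogram of grid cells
    p /= p.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    mi = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))
    return mi / np.log2(min(bx, by))

def mic_like(x, y, grids=((2, 2), (3, 3), (4, 4), (5, 5))):
    """Maximum normalized MI over several grid resolutions (MIC idea)."""
    return max(normalized_mi(x, y, bx, by) for bx, by in grids)

rng = np.random.default_rng(0)
x, z = rng.normal(size=1000), rng.normal(size=1000)
dependent, independent = mic_like(x, x), mic_like(x, z)  # high vs. near-zero
```

Candidate regulators whose score with the target gene falls below the chosen threshold would then be excluded, as in the regulatory gene recognition step.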
Generally, to calculate MIC, the plane including x and y is divided into grids, and the maximal normalized MI over the grid partitions is taken. For G genes in the whole gene expression dataset, gene j is selected as the target gene, and the remaining genes are chosen as candidate regulatory genes. We calculate the MIC between a given target gene j and each candidate regulatory gene, exclude redundant regulatory genes according to the threshold of MIC, and attain the regulatory gene set for gene j. The above steps are repeated to obtain the regulatory gene sets for all target genes. For each target gene j, we adopt machine-learning algorithms to learn the non-linear function f_j. The regulatory gene importance list for target gene j is calculated based on the geometric mean method: the feature fusion method employs XGBoost and RF models to learn the function f_j, producing one importance vector derived from the XGBoost model and one derived from the RF model. We multiply these two vectors to generate the final regulatory gene importance list for target gene j. After repeating the above steps to obtain the importance lists for all target genes, we finally integrate the importance lists into the importance matrix, which shows the regulatory relationship between any two genes. Usually, a confusion matrix is applied to evaluate the accuracy of the inference algorithm, including the number of true positive (TP) samples, false negative (FN) samples, false positive (FP) samples, and true negative (TN) samples. Based on the confusion matrix, four evaluation indicators, including recall, precision, true positive rate (TPR), and false positive rate (FPR), can be further calculated. This study applies two evaluation metrics, the area under the precision\u2013recall curve (AUPR) and the area under the receiver operating characteristic curve (AUROC), to assess the performance of the inference methods for GRNs. The AUPR and AUROC scores can better reflect the comprehensive performance of the classification algorithms. We also define an overall score to evaluate the performance of the different methods. 
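The geometric-mean fusion of two tree-ensemble importance vectors can be sketched as follows. GradientBoostingRegressor stands in for XGBoost so the example needs only scikit-learn, and the synthetic target gene is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 6))                            # 6 candidate regulators
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=150)   # target driven by 0 and 1

gb = GradientBoostingRegressor(random_state=0).fit(X, y)  # stand-in for XGBoost
rf = RandomForestRegressor(random_state=0).fit(X, y)

# Geometric mean of the two importance vectors: a regulator must rank
# highly in BOTH models to retain a high fused score.
fused = np.sqrt(gb.feature_importances_ * rf.feature_importances_)
ranking = np.argsort(fused)[::-1]          # regulators, most important first
```

Repeating this per target gene and stacking the fused vectors yields the importance matrix described above.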
We also consider the early precision ratio (EPR) to evaluate the accuracy of the top k edges of the inferred network, where k denotes the number of edges with the label \u201c1\u201d in the gold standard network. Early precision (EP) refers to the fraction of TPs in the top k edges, and the EPR represents the ratio of the EP of the proposed model to that of a random predictor, where a random predictor\u2019s expected EP is the edge density. The edge density is defined as the ratio of edges with the label \u201c1\u201d to all potential edges in the gold standard network, and a network of n genes has n(n\u22121) potential directed edges. Algorithm 1: The proposed algorithm for solving GRN inference. Input: time-series data of G genes and T time sampling points, steady-state series data of G genes and S experimental conditions, and the MIC threshold. Output: the inferred gene regulatory network. (1) Initialize, and combine time-series samples with steady-state series samples to obtain the input and output pairs of training samples. (2) For each target gene j (j = 1, \u2026, G) and each candidate gene i \u2260 j, calculate the MIC coefficient; if the MIC is no less than the threshold, retain gene i in the regulatory gene list of gene j, otherwise skip it. (3) Obtain all regulatory genes of gene j from the list. (4) Build a non-linear ODEs model for gene j. (5) Fuse the model importance scores. (6) After ranking all importance scores, return the inferred network. In this article, we make a comparison experiment of iLSGRN with other mainstream methods, including GENIE3, dynGENIE3, BiXGBoost, Non-linear ODEs, and MMFGRN. 
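The early precision ratio defined above reduces to a few lines; the toy score and gold-standard vectors below are illustrative.

```python
import numpy as np

def epr(scores, gold):
    """Early precision ratio: precision among the top-k predicted edges
    (k = number of true edges) divided by the gold standard's edge density,
    i.e., by a random predictor's expected early precision."""
    k = int(gold.sum())
    top = np.argsort(scores)[::-1][:k]
    ep = gold[top].mean()         # early precision
    density = gold.mean()         # fraction of true edges among all edges
    return ep / density

gold = np.array([1, 0, 0, 1, 0, 0, 0, 0])       # 2 true edges out of 8
scores = np.array([0.9, 0.1, 0.2, 0.8, 0.3, 0.05, 0.02, 0.4])
ratio = epr(scores, gold)   # both top-2 predicted edges are true: EP 1.0 / density 0.25
```

An EPR of 1 means no better than random ranking; larger values mean the true edges are concentrated at the top of the predicted list.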
GENIE3 and dynGENIE3 utilize importance scores from RF to identify regulators for each target gene, and dynGENIE3 is an adapted version of GENIE3 that deals with time-series and steady-state series data. BiXGBoost is based on a bidirectional model; it considers both candidate regulatory genes and target genes for a specific gene, and uses XGBoost to calculate feature importance as regulatory gene relationships. The Non-linear ODEs method combines non-linear ODE models and XGBoost to infer GRNs. MMFGRN fuses three models to infer GRNs through a weighted fusion strategy: a time-series data model based on LightGBM, a steady-state data model based on LightGBM, and a joint time-series and steady-state data model based on XGBoost. When setting the MIC thresholds in our method, we referred to the results in a previous study, which also used MIC to infer GRNs; in that study, a threshold for the in silico size100 dataset was recommended. We set several values within a small neighboring range (0.1\u20130.2) of this threshold to infer the GRNs and chose the one that corresponded to the optimal result. The MIC distributions on the E.coli dataset are significantly wider than those on the DREAM4 dataset, and a previous study has reported that the GRN of E.coli is basically very sparse, so the threshold was chosen accordingly for the E.coli dataset. Since this parameter setting method may cause overfitting, we further conducted cross-validation experiments to validate the robustness of the proposed method. In the cross-validation experiment for the iLSGRN, we divided the genes into 2 folds, a training set and a test set. Here, we mainly discuss the effect of the MIC threshold and the learning rate in XGBoost, and use the grid search method to adjust the parameters. Specifically, we set 10 different learning rates of XGBoost and 30 different thresholds of MIC on one training set, conducted a total of 300 experiments, and recorded the overall score for each experiment. Next, the proposed model with the optimal parameters was applied on the test set. 
Finally, we switched the training set and test set and repeated the above-mentioned steps. The performance of the prediction algorithm is estimated by averaging the accuracy results of the two test sets. In addition, we used the grid search method to find the optimal parameters for the other inference methods in the comparison experiments. For GENIE3 and dynGENIE3, the parameters \u201cmax_depth\u201d and \u201cn_estimators\u201d were selected for optimization. For Non-linear ODEs and BiXGBoost, the optimized parameters include \u201clearning_rate\u201d, \u201cmax_depth\u201d, and \u201cn_estimators\u201d. In MMFGRN, we optimized the parameter \u201clearning_rate\u201d in XGBoost and \u201clearning_rate\u201d in LightGBM. We extensively compared our method with the five state-of-the-art methods on the DREAM4 in silico size100 and the E.coli gene expression datasets. Then, we performed cross-validation and ablation experiments to verify the robustness and stability of the method. There is no denying that the decay rate is still an important factor. However, it is difficult to set an exact decay rate value for all genes, and we consider the decay rate as a constant. In this part, we performed experiments with the above-mentioned approaches on the five sub-datasets of the DREAM4 dataset containing time-series data and steady-state series data; the main parameters of the iLSGRN on DREAM4 and on the E.coli dataset, as well as the distribution of the MIC on the in silico size100 dataset, are shown in the corresponding figures and tables. On the E.coli dataset, the iLSGRN obtains the highest EPR values on Cold stress and Oxidative stress, and the same EPR value as MMFGRN on Lactose. Here, we apply the iLSGRN and other inference methods on the E.coli dataset. 
The main parameters of iLSGRN in the cross-validation experiment are shown in the corresponding table. Although our method does not achieve the best scores on certain sub-datasets, it still outperforms the other methods overall, indicating the great potential of iLSGRN for inferring large-scale GRNs. To evaluate the robustness of the proposed method, we conducted 2-fold cross-validation experiments on the in silico size100 and E.coli datasets. We also performed ablation experiments on the DREAM4 and E.coli datasets to validate the effectiveness of each module in the proposed method; in the ablation experiments, the effects of our regulatory gene recognition algorithm and feature fusion algorithm can be evaluated systematically. The iLSGRN achieves the highest overall scores on all sub-datasets compared to the single-model approaches. The iLSGRN algorithm is implemented based on XGBoost and RF, and the running times of the different methods on the DREAM4 in silico size100 and E.coli datasets were also compared. The experimental results on the DREAM4 and E.coli datasets show that our method effectively eliminates largely redundant gene expression information and improves the accuracy of inferring regulatory relationships. Currently, many computational methods for the inference of GRNs have been proposed. In this article, we propose a scalable method based on model fusion to infer GRNs from time-series and steady-state series gene expression data. Our method first calculates the MIC between genes and sets a suitable threshold to exclude the redundant and sufficiently weak gene regulatory relationships. Then, we combine XGBoost and RF to learn the non-linear ODEs model and fuse the importance scores. Compared with other model fusion methods, such as MMFGRN, our method utilizes MIC in the regulatory gene recognition algorithm to remove massive redundant regulatory links in large-scale GRNs. 
It fundamentally reduces the influence of redundant and sufficiently weak genes on identifying GRNs. For the coexistence of time-series data and steady-state series data, the non-linear ODEs model can more precisely simulate the dynamic characteristics between genes and retain the information in the steady-state series data. Moreover, the non-linear ODEs model can comprehensively deal with the non-linear relationships in GRNs. Meanwhile, we integrate the XGBoost and RF machine-learning models, i.e., boosting and bagging, to obtain a more accurate GRN. Although our method has improved the prediction accuracy compared to other state-of-the-art methods, there are still some limitations. We only tested the validity of the proposed method on the simulated gene expression dataset of DREAM4 and the real gene expression dataset of E.coli. The inference methods obtained low AUPR results on the E.coli dataset, which may be due to the sparsity of the GRNs. The real biological datasets only contain steady-state series gene expression data and lack time-series data, which may affect the performance of the developed method. We will further extend the proposed approach to more comprehensive real gene expression datasets in the future. Moreover, this study did not consider the batch effects of gene expression data, which may affect the calculation of the MIC or the feature importance assessment. The feature fusion algorithm will increase the computational complexity and time cost of the proposed method due to the training process of the XGBoost and RF models. Therefore, developing a faster and more robust algorithm is one of the future directions."}
+{"text": "Development is a complex process involving precise regulation. Developmental regulation may vary across tissues and individuals, and is often altered in disorders. Currently, the regulation of developmental timing across neocortical areas and the developmental changes in Down syndrome (DS) brains remain unclear. Changes in regulation are often accompanied by changes in gene expression trajectories, which can be divided into two scenarios: (1) changes in the shape of gene expression trajectories, which reflect changes in cell type composition or altered molecular machinery; (2) temporal shifts of gene expression trajectories, which indicate different regulation of developmental timing. Therefore, we developed an R package, TempShift, to separate these two scenarios, and demonstrated that TempShift can distinguish temporal shift from different shape (DiffShape) of expression trajectories and can accurately estimate the time difference between multiple trajectories. We applied TempShift to identify sequential gene expression across 11 neocortical areas, which suggested the sequential occurrence of synapse formation and axon guidance, and reconstructed interneuron migration pathways within the neocortex. Comparison between healthy and DS brains revealed increased microglia, a shortened neuronal migration process, and delayed synaptogenesis and myelination in DS. These applications also demonstrate the potential of TempShift for understanding gene expression temporal dynamics during different biological processes. Gene expression has been observed to be dynamically regulated across development in many organisms. The temporal dynamics are often associated with the occurrence of developmental processes, such as generating new cells, responding to internal or external signals, and so on. While some of the developmental processes are universal, others are distinct to species, tissues or disorders, and are accompanied by different temporal gene expression patterns. 
The differences during development can be categorized into two scenarios: (1) a different shape (DiffShape) of gene expression trajectories may reflect an increase or decrease in certain cell types, or disrupted or altered molecular machinery of developmental processes; (2) a temporal shift of gene expression trajectories reflects different regulation of developmental timing. Development of the cerebral neocortex is an example. The neocortex is organized into structurally similar but functionally distinct areas. Development of neocortical areas involves common processes, such as neurogenesis, neuron differentiation, synaptogenesis, myelination, and so on, but the maturation rate of distinct neocortical areas has been found to differ. Furthermore, these developmental processes have been observed to be impaired in neurodevelopmental disorders, such as Down syndrome (DS). DS, also known as trisomy 21, is a genetic disorder caused by the presence of all or part of a third copy of chromosome 21. As the most common neurodevelopmental disorder, the estimated prevalence of DS is as high as 13.65 per 10,000 live births. The time-series design of transcriptome analyses provides a unique opportunity to detect the two scenarios of developmental changes in a high-throughput way. To date, multiple approaches have been developed for differential expression analysis, clustering and alignment of time-series data. Here, we programmed TempShift into an R package. The performance of TempShift was tested using different types of simulated data first. Then, we applied TempShift to human brain transcriptome data including 11 neocortical areas, revealing the gradual development of the human neocortex. 
Further application to a DS dataset demonstrated the ability of TempShift to handle data with unmatched ages and revealed increased microglia and skewed developmental timing of neuronal and oligodendrocyte development in DS. TempShift is a statistical model designed for two goals: (1) distinguishing DiffShape, shift and no-shift genes, and (2) for shift genes, estimating the time shift between groups. To achieve the above goals, it builds three temporal expression models based on Gaussian process regression: an independence model, a shift model and a no-shift model. Genes are classified by comparing how well the three models fit their expression trajectories. In the shift model, the first group (i = 1) is considered as the reference group and is modeled by the same function as the no-shift model; for each other group i, the time vectors are shifted by the estimated time difference between group i and the reference group. Gene Ontology analysis was performed with PANTHER (http://pantherdb.org/, accessed on 18 April 2016). Normalized human brain microarray data were downloaded from GSE25219. Microarray data of DS samples were downloaded from GSE59630, and DFC samples were analyzed. Cell type-enriched genes were defined by p < 0.01 and Tukey\u2019s HSD test p < 0.01 for comparison between cell type A and at least 6 other cell types. The enrichment of cell type-enriched genes in DiffShape genes and shift genes was tested by Fisher\u2019s exact test. Single cell RNA-seq data of the human neocortex were downloaded from GSE67835. To test the performance of TempShift (https://github.com/YingZhuLab/TempShift, accessed on 31 May 2023), implemented in R, we applied it to different types of simulated data with known time shifts, including Gaussian process data, periodic data, and polynomial data. The non-periodic data representing gene expression changes across development were simulated with a residual variance accounting for approximately 10%. 
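TempShift's shift estimation is likelihood-based under Gaussian process models; a least-squares grid-search analogue conveys the core idea of sliding a reference trajectory along the time axis until it matches another group. The sigmoidal trajectories and function name below are illustrative, not the package's implementation.

```python
import numpy as np

def estimate_shift(t, y_ref, y_grp, shifts):
    """Pick the time shift d that best aligns the reference trajectory,
    evaluated at t - d, with the group trajectory (least squares)."""
    best, best_err = None, np.inf
    for d in shifts:
        pred = np.interp(t, t + d, y_ref)   # reference curve delayed by d
        err = float(np.mean((y_grp - pred) ** 2))
        if err < best_err:
            best, best_err = d, err
    return best

t = np.linspace(0, 10, 101)
y_ref = np.tanh(t - 5)                      # sigmoidal developmental trajectory
y_grp = np.tanh(t - 6)                      # same shape, delayed by 1 time unit
delta = estimate_shift(t, y_ref, y_grp, np.linspace(-2, 2, 81))
```

A DiffShape gene, by contrast, would leave a large residual at every candidate shift, which is the intuition behind comparing the shift model against the independence model.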
Then, TempShift was applied to the simulated non-periodic data with two groups and three groups, respectively. Again, TempShift successfully distinguished the shift genes from the no-shift genes and accurately estimated the time shifts. In summary, the simulation results demonstrated that TempShift is able to identify temporally shifted genes and accurately estimate the time shift between groups even with high noise. We then applied TempShift to investigate the developmental sequences in the human neocortex. The developmental sequences represent the order in which changes in structure or function occur during the development of an organism. The cerebral neocortex consists of functionally distinct sensory, motor and association areas. Previous studies have suggested differential gene expression and maturation rates of different neocortical areas. To explore the developmental sequences in the human neocortex, we applied TempShift to a previously published microarray data set including samples from 11 neocortical areas. In the synaptic transmission group, we found multiple glutamate receptor genes and GABA-related genes. In this analysis, we also identified GAD1, a synthetic enzyme of the interneuron neurotransmitter GABA, as a temporally shifted gene. To further explore the changes in neurodevelopmental processes and related genes in DS, we applied TempShift to a published data set including complementary DS samples. We first selected DiffShape genes (p < 0.05) based on the mean expression trajectories. Of these, only MBP was identified by gene co-expression network analysis, while no statistics were available for quantification; none of the genes were detected as differentially expressed based on a paired t-test combining all samples. We identified 66 shift genes. Another cell type found to be enriched with shift genes is interneurons.
Expression of VAMP genes and of a Ca2+ sensor for fast neurotransmitter release is delayed in DS, suggesting delayed synaptogenesis in DS. In addition, ASTN1 encodes astrotactin, a neuron-glial adhesion molecule that mediates glial-guided neuronal migration. Furthermore, additional shift genes playing critical roles in cortical development, such as CNTN6, further implied delayed development in oligodendrocyte generation, neuronal migration, neurite growth and synaptogenesis. We implemented TempShift as an R package (https://github.com/YingZhuLab/TempShift, accessed on 31 May 2023) in this study. At first, we validated through simulation that TempShift works well for both periodic and non-periodic data and is robust to noise. We adopted the Gaussian kernel to fit the gene expression trajectories in this study, but it can be replaced with other kernels according to the properties of the data. As we maximize the likelihood using a quasi-Newton method, which finds local maxima, we suggest using a larger initial value of the length scale l of the Gaussian kernel to avoid overfitting; otherwise, multiple initial values can be tried to get the best fit. In summary, TempShift provides a framework that can be applied broadly to study temporal differences across different conditions, such as different tissue types, disease status and species. It can be applied not only to gene expression data, but also to other time-series measurements. TempShift is a framework that provides flexible modeling of global temporal shifts of time-series data; it can deal with an arbitrary number of replicates, does not require matched time points across conditions, and can be used for two or more conditions.
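As a rough illustration of the shift-model idea described above (not the actual TempShift implementation, which is an R package built on Gaussian process regression), the following Python sketch estimates the time shift between two groups by grid-searching the shift that maximizes the marginal likelihood of a single shared Gaussian process; all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(t1, t2, length_scale=2.0, variance=1.0):
    """Squared-exponential (Gaussian) kernel between two time vectors."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_log_marginal_likelihood(t, y, noise_var=0.05, length_scale=2.0):
    """Log marginal likelihood of observations y at times t under a zero-mean GP."""
    K = rbf_kernel(t, t, length_scale) + noise_var * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

def estimate_shift(t_ref, y_ref, t_shifted, y_shifted, grid):
    """Grid-search the time shift that best aligns the two groups under
    one shared GP (the 'shift model' idea)."""
    best_shift, best_ll = None, -np.inf
    for s in grid:
        t_all = np.concatenate([t_ref, t_shifted - s])
        y_all = np.concatenate([y_ref, y_shifted])
        ll = gp_log_marginal_likelihood(t_all, y_all)
        if ll > best_ll:
            best_shift, best_ll = s, ll
    return best_shift

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)
f = lambda x: np.sin(0.6 * x)
true_shift = 1.5
y1 = f(t) + 0.05 * rng.standard_normal(t.size)
y2 = f(t - true_shift) + 0.05 * rng.standard_normal(t.size)
grid = np.arange(-3, 3.01, 0.1)
print(estimate_shift(t, y1, t, y2, grid))  # recovers a value close to 1.5
```

The estimated shift maximizes consistency of the pooled data with a single smooth trajectory, which is the intuition behind distinguishing shift genes from no-shift genes.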
For ease of use, we implemented TempShift as an open-source R package. The application of TempShift to human brain transcriptome data demonstrated its capability in identifying shift genes and estimating temporal shifts between as many as 11 neocortical areas at the same time. In addition to comparing multiple groups, TempShift is able to detect developmental sequences of multiple biological processes in a high-throughput way. In the above application, we found that shift genes are enriched in synaptic transmission and neuron differentiation, and reconstructed the migratory streams of interneurons in the human neocortex. Using the DS data, we demonstrated the application of TempShift to analyze groups of time-series data with unmatched ages. Using TempShift, we selected DiffShape genes and shift genes, which respectively suggested increased microglia and altered regulation of developmental timing in DS, including delayed development of oligodendrocytes, neurite outgrowth and synaptogenesis, and a shortened period of neuronal migration. Anatomical changes observed in DS include reduced brain size, altered cortical lamination, reduced dendritic ramifications, diminished synaptic formation, and delayed myelination. Currently, TempShift detects shape differences and global shifts of gene expression trajectories. In further studies, other models can be developed to identify more transformations of trajectories, or to refine the time interval during which the trajectories differ in shape or are temporally shifted. In summary, we believe that not only can TempShift be used for transcriptome data analysis of the human brain, but that it also has great potential for understanding the temporal dynamics of gene expression in other biological processes."}
+{"text": "To identify causation, model-free inference methods, such as Granger Causality, have been widely used due to their flexibility. However, they have difficulty distinguishing synchrony and indirect effects from direct causation, leading to false predictions. To overcome this, model-based inference methods that test the reproducibility of data with a specific mechanistic model to infer causality were developed. However, they can only be applied to systems described by a specific model, greatly limiting their applicability. Here, we address this limitation by deriving an easily testable condition for a general monotonic ODE model to reproduce time-series data. We built a user-friendly computational package, General ODE-Based Inference (GOBI), which is applicable to nearly any monotonic system with positive and negative regulations described by ODE. GOBI successfully inferred positive and negative regulations in various networks at both the molecular and population levels, unlike existing model-free methods. Thus, this accurate and broadly applicable inference method is a powerful tool for understanding complex dynamical systems. Traditional causal inference methods struggle to distinguish direct causation from synchrony and indirect effects. Here, the authors present GOBI, which overcomes this by testing a general model's ability to reproduce data, providing accurate and broadly applicable causality inference for complex systems. Various model-free methods, such as Granger causality (GC) [2] and convergent cross mapping (CCM) [3], have been widely used to infer causation from time-series data. Although they are easy to implement and broadly applicable [10], they usually struggle to differentiate generalized synchrony from causality [15] and to distinguish between direct and indirect causation [20]. For instance, when oscillatory time-series data are given, nearly all-to-all connected networks are inferred [12].
To prevent such false positive predictions, model-free methods have been improved [20], but further investigation is needed to show their universal validity. Identifying causal interactions is crucial to understand the underlying mechanisms of systems in nature. A recent surge in time-series data collection with advanced technology offers opportunities to computationally uncover causation. Alternatively, model-based methods infer causality by testing the reproducibility of time-series data with mechanistic models using various methods such as simulated annealing [21] and the Kalman Filter [23]. Although testing the reproducibility is computationally expensive, as long as the underlying model is accurate, the model-based inference method is accurate even in the presence of generalized synchrony in time series and indirect effects [29]. However, the inference results strongly depend on the choice of model, and inaccurate model imposition can result in false positive predictions, limiting their applicability. To overcome this limit, inference methods using flexible models were developed [39]. In particular, the most recent method, ION [12], infers causation from X to Y described by a general monotonic ODE model between two components, where f can be any smooth, monotonically increasing or decreasing function of Xi, and XN is Y in the presence of self-regulation. Thus, our approach considerably resolves the fundamental limit of model-based inference: strong dependence on a chosen model. Furthermore, we derive a simple condition for the reproducibility of time series with this general model.
We first illustrate the common properties of time series generated by either positive or negative regulation with simple examples. Define the difference Xd(t, t*) := X(t) − X(t*). When X positively regulates Y (X → Y, σ = +) or negatively regulates Y (X ⊣ Y, σ = −), the corresponding regulation-detection function is always positive on the region where σXd > 0. This idea can be extended to a case with multiple causes: for instance, X1 and X2 may positively regulate Y together, or X1 and X2 may positively and negatively regulate Y, respectively. In the presence of monotonic regulation, the regulation-detection function is positive on an existing regulation, yielding a normalized regulation-detection score; to distinguish direct from indirect regulation, we further develop a regulation-delta function Δ. Based on this, we construct a framework for inferring a regulatory network from time-series data. First, 1D regulations are inferred (Step 1): because only A ⊣ B satisfies the criteria, A ⊣ B is inferred as a 1D regulation. Then, 2D regulations are inferred using the regulation-detection score (Step 2). The regulation-detection function and score can also be modified to include negative self-regulation. We apply the framework to infer regulatory networks from simulated time-series data of various biological models. In these models, the degradation rates of molecules increase as their concentrations increase, as in most biological systems. Such prior information, including the types of self-regulation, can be incorporated into our framework.
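To make the idea concrete, here is a minimal numerical sketch of a 1D regulation-detection score (not the GOBI code itself, which is distributed as a MATLAB package): for a monotonic regulation Ẏ = f(X), the pairwise differences of X and of Ẏ share the same sign, so a normalized score reaches its extreme value exactly when the hypothesized regulation type is correct. The function name and test signals are illustrative assumptions.

```python
import numpy as np

def regulation_score(x, dy, sign=+1):
    """Simplified 1D regulation-detection score: restrict to pairs (t, t*)
    with sign * (X(t) - X(t*)) > 0 and normalize the signed product of the
    X-difference and the dY/dt-difference. A true monotonic regulation of
    the hypothesized sign gives exactly 1; the opposite sign gives -1."""
    xd = x[:, None] - x[None, :]           # X(t) - X(t*) over all pairs
    dyd = dy[:, None] - dy[None, :]        # Y'(t) - Y'(t*) over all pairs
    mask = sign * xd > 0                   # detection region for this sign
    vals = sign * xd[mask] * dyd[mask]
    return vals.sum() / np.abs(vals).sum()

t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t)                              # candidate cause
dy = x**3 + 2 * x                          # dY/dt = f(X), f monotone increasing
print(regulation_score(x, dy, +1))         # 1.0  (positive regulation detected)
print(regulation_score(x, dy, -1))         # -1.0 (negative regulation rejected)
```

Scores strictly below one indicate that the hypothesized regulation cannot by itself reproduce the data, which is the basis for rejecting spurious links.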
For example, to incorporate negative self-regulation when inferring 1D positive regulation from X to Y, the criterion is modified accordingly. In the oscillator model, two positive 1D regulations (M → PC and PC → P) and one negative 1D regulation (P ⊣ M) are inferred. This simplification is important for accurate inference when data is limited. Next, we investigate whether the Δ test can distinguish direct and indirect regulations using examples of feed-forward loops (CFL and SFL). In CFL, a direct regulation from A to C exists; on the other hand, in SFL, only an indirect negative regulation from A to C, induced from the regulatory chain A ⊣ B → C, exists. If A does not directly regulate C, then the regulation-detection score falls below one. We compute p-values for each data set and integrate them using Fisher's method. This criterion successfully distinguishes between direct and indirect regulation even when the noise level varies: the Δ test identifies A ⊣ C as a direct regulation for CFL, but not for SFL. Then, merging the 1D and 2D results successfully recovers the network structures of IFL, CFL, and SFL even from noisy time series. Building on this, we develop a user-friendly computational package, GOBI, which can be used to infer regulations for systems described by the general monotonic ODE model. With the default hyperparameters, GOBI successfully infers the regulatory networks from in silico time series. Here, we use GOBI with these default hyperparameters to infer regulatory networks from experimentally measured time series.
From the population data of two unicellular ciliates, Paramecium aurelia (P) and Didinium nasutum (D) [46], the prey (P)-predator (D) interaction is successfully inferred. GOBI is also applied to data from a synthetic genetic oscillator involving an RNA polymerase sigma factor (σ28) [47], and the regulations from air pollutants to cardiovascular disease are identified as 1D regulations; on the air pollutants and cardiovascular disease data, PCM infers the same structure as GOBI [20]. We develop an inference method that considerably resolves the weaknesses of model-free and model-based inference methods. We derive the conditions for interactions satisfying the general monotonic ODE model. Under this model, if X causes Y, X causes Y either positively or negatively. Thus, GOBI cannot capture the regulation when X causes Y both positively and negatively, or when the type of regulation changes over time. However, GOBI can potentially be extended to detect temporally structured models, including non-monotonic regulation. The 2D regulation test allows us to successfully infer the network structure even with a small amount of experimental data. In this study, we determine the hyperparameter values by using noisy simulated data of various examples [45]. For the experimental time-series data X(t) = (X1(t), X2(t), ⋯, XN(t)), X(t) can be interpolated with either the 'spline' or 'fourier' method, chosen by the user. For the spline interpolation, we use the MATLAB function 'interp1' with the option 'spline', and for the Fourier interpolation, we use the MATLAB function 'fit' with the options 'fourier1' through 'fourier8'. After the interpolation, the derivative of X(t) is computed using the MATLAB function 'gradient' to compute the regulation-detection score. Here, we describe the key steps of our computational package, GOBI. The regulation-detection function is evaluated on the domain of the time series for each component Xi, i = 1, 2, ⋯, N.
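The interpolation-and-differentiation preprocessing just described (done in GOBI with MATLAB's 'interp1' and 'gradient') could be mirrored in Python roughly as follows; the helper name is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_and_differentiate(t, x, n_points=200):
    """Python analogue of the preprocessing step: spline-interpolate a
    sampled time series onto a dense grid and estimate its derivative
    by finite differences."""
    spline = CubicSpline(t, x)
    t_dense = np.linspace(t[0], t[-1], n_points)
    x_dense = spline(t_dense)
    dx_dense = np.gradient(x_dense, t_dense)  # numerical dX/dt
    return t_dense, x_dense, dx_dense

t = np.linspace(0, 2 * np.pi, 25)
x = np.sin(t)
td, xd, dxd = interpolate_and_differentiate(t, x)
# dxd approximates cos(t) on the dense grid
```

The derivative estimate is what enters the regulation-detection score, so smoothing quality directly affects the inference.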
For example, for the positive 1D regulation X → Y (σ = +), the regulation-detection function is evaluated on pairs (t, t*) where Xd > 0; for a 2D regulation, on pairs (t, t*) satisfying both detection-region conditions. When a regulation of type σ from X to Y exists, the corresponding regulation-detection function is positive. When a new component Xnew is positively (negatively) added to the existing regulation type σ but has no regulatory role, the newly added regulation is inferred only when the criteria are met; when we add any regulation to an existing true regulation, the regulation-detection score is always one. Indirect regulation is induced by a chain of direct regulations. For example, in SFL, the chain A ⊣ B → C induces the indirect negative regulation A ⊣ C. In the presence of noise, the Δ test sometimes fails to distinguish between direct and indirect regulations. Thus, we combine the p-values into one test statistic (χ2) using Fisher's method, with pi = 0.001 for all the data by default, though it can also be chosen by the user. For CCM [3] and PCM [20], we choose an appropriate embedding dimension using the false nearest neighbor algorithm. Also, we select a time lag producing the first minimum of the delayed mutual information. To select the threshold value 'T' in PCM, we use k-means clustering as suggested in [20]. We run CCM using 'skccm' and PCM using the code provided in [20]. For GC [2], we run the code provided in [55], specifying the order of the AR processes as the first minimum of the delayed mutual information, as we choose a max delay for CCM and PCM. Also, we reject the null hypothesis that Y does not Granger-cause X, and thereby infer direct regulations, by using the F statistic with a significance level of 95% [2].
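The p-value aggregation step mentioned above (Fisher's method) has a simple closed form; a sketch using scipy for the chi-square survival function:

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Combine independent p-values with Fisher's method: under the null,
    -2 * sum(log p_i) follows a chi-square distribution with 2k degrees
    of freedom, giving one combined p-value."""
    pvals = np.asarray(pvals, dtype=float)
    chi2_stat = -2.0 * np.log(pvals).sum()
    return stats.chi2.sf(chi2_stat, df=2 * pvals.size)

print(fisher_combine([0.04, 0.03, 0.20]))  # combined evidence across data sets
```

With a single p-value, the combined result reduces to that p-value, which is a convenient sanity check.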
Specifically, we use embedding dimension 2 for the prey-predator, genetic oscillator, and estradiol data sets, and 3 for the repressilator and the air pollutants and cardiovascular disease data sets. Also, we used time lag 2 for the prey-predator data; 3~10 for the genetic oscillator (there are eight different time-series data sets); 10 for the repressilator; 15 for the estradiol data set; and 3 for the air pollutants and cardiovascular disease data set. To generate noisy data, we add multiplicative noise X(ti) ⋅ ε to X(ti), where ε ~ N(0, σ2). Before applying our method, all the simulated noisy time series are fitted using the MATLAB function 'fourier4'. However, if the noise level is too high, 'fourier4' tends to overfit and capture the noise; thus, in the presence of a high level of noise, 'fourier2' is recommended for smoothing. With the ODE describing the system, we simulate the time-series data using the MATLAB function 'ode45'. The sampling rate is 100 points per period for all the examples. For the experimental data, we first calculate the period of the data by using the first peak of the auto-correlation; then, we cut the time series into periods. Further information on research design is available in the Supplementary Information. Additional files: Peer Review File; Description of Additional Supplementary Files; Supplementary Data 1; Reporting Summary."}
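The noisy-data generation and smoothing described above (multiplicative noise followed by a Fourier-series fit, done in MATLAB with 'fourier4') can be sketched in Python with a least-squares stand-in, under the assumption that the time window spans one fundamental period; all names and parameter values are illustrative.

```python
import numpy as np

def add_multiplicative_noise(x, level, rng):
    """Add multiplicative Gaussian noise x * eps with eps ~ N(0, level^2),
    mirroring the noisy-data generation described in the text."""
    return x + x * rng.normal(0.0, level, size=x.shape)

def fourier_fit(t, y, n_harmonics=4):
    """Least-squares Fourier-series smoothing (a numpy stand-in for a
    MATLAB 'fourier4'-style fit)."""
    w = 2 * np.pi / (t[-1] - t[0])          # assume one fundamental period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
clean = np.sin(2 * np.pi * t / 10)
noisy = add_multiplicative_noise(clean, 0.2, rng)
smooth = fourier_fit(t, noisy)
```

Using fewer harmonics (the analogue of switching from 'fourier4' to 'fourier2') trades fidelity for robustness when the noise level is high.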
+{"text": "Model-based data analysis of whole-brain dynamics links the observed data to model parameters in a network of neural masses. Recently, studies have focused on the role of regional variance of model parameters. Such analyses, however, necessarily depend on the properties of a preselected neural mass model. We introduce a method to infer from the functional data both the neural mass model representing the regional dynamics and the region- and subject-specific parameters, while respecting the known network structure. We apply the method to human resting-state fMRI. We find that the underlying dynamics can be described as noisy fluctuations around a single fixed point. The method reliably discovers three regional parameters with clear and distinct roles in the dynamics, one of which is strongly correlated with the first principal component of the gene expression spatial map. The present approach opens a novel way to the analysis of resting-state fMRI with possible applications for understanding the brain dynamics during aging or neurodegeneration. Machine learning for dynamical system discovery reveals heterogeneity across brain regions following structural variations. One avenue for the analysis of resting-state functional magnetic resonance imaging (fMRI) is the use of computational models of large-scale brain network dynamics. Eventually, such an approach might lead to a new generation of more expressive, data-driven network models of large-scale brain dynamics. Several requirements shape the method. 1) We wish to operate within the framework of network-based models of large-scale brain dynamics, meaning that we wish to incorporate the known structural connectivity in the whole-brain model. The method thus has to allow for this prespecified network connectivity. 2) We are interested in noise-driven systems, that is, systems described by stochastic differential equations rather than deterministic ODEs.
Some of the existing methods focus only on deterministic ODEs and incorporate only observation noise, but not the noise in the underlying system. 3) The third requirement is the desired parameterization of the neural masses with regional and subject parameters. When faced with a problem of system identification for multiple related systems, many of the existing methods do not provide a way to share the learned dynamics; rather, the systems are inferred independently. Here, we wish to infer only one dynamical system, with the intersubject and interregional variations represented via the regional and subject parameters only. 4) The fourth issue is the problem of partial observations. It is sometimes assumed that all state variables are observed, simplifying the inference problem greatly. We assume that we have only a one-dimensional (1D) observation of the regional dynamics available, meaning that multiple system states are hidden and need to be inferred as well. To tackle this problem, we use the framework of amortized variational inference, or variational autoencoders. We follow the general framework of large-scale brain network modeling and assume that for a specific subject, the observations of a brain region j are generated by a dynamical system, where xj(t) ∈ ℝsn is the state at time t, θr ∈ ℝrm and θs ∈ ℝsm are the region-specific and subject-specific parameters, uext(t) is the external input, shared by all regions of a single subject, and the network input to region j is computed from the structural connectome with n nodes. The functions f, g, and cg are initially unknown, and ηj(t) and νj(t) are the system and observation noise, respectively (fig. S1A). From the observed time series of multiple subjects, we wish to infer both the evolution function f and the observation function g, which are the same for all subjects, as well as the region- and subject-specific parameters and the time-dependent external input uext. To do so, we adopt the general framework of amortized variational inference.
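The generative model outlined above can be illustrated with a toy Euler-Maruyama simulation: each region follows a noisy Hopf-normal-form drift (a hypothetical stand-in for the unknown f, chosen only for illustration), and regions are coupled through the observed first state variable, anticipating the g ≡ cg simplification discussed in the text. All parameter values are arbitrary.

```python
import numpy as np

def hopf_f(xj, uj, a, omega):
    """Per-region drift: Hopf normal form with additive network input.
    For a < 0 the node fluctuates around a fixed point; for a > 0 it
    oscillates on a limit cycle."""
    x, y = xj
    r2 = x * x + y * y
    return np.array([(a - r2) * x - omega * y + uj,
                     (a - r2) * y + omega * x])

def simulate_network(W, a, omega, T=50.0, dt=0.01, sigma=0.05, seed=0):
    """Euler-Maruyama simulation of the assumed generative structure:
    regions coupled through the observed first state variable."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = 0.1 * rng.standard_normal((n, 2))
    steps = int(T / dt)
    ys = np.empty((steps, n))
    for k in range(steps):
        y = x[:, 0]                    # observed variable of each region
        u = W @ y                      # network input from observations
        for j in range(n):
            drift = hopf_f(x[j], u[j], a[j], omega[j])
            x[j] = x[j] + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(2)
        ys[k] = x[:, 0]
    return ys

W = 0.1 * (np.ones((3, 3)) - np.eye(3))   # toy connectome, no self-loops
a = np.array([-1.0, -1.0, 1.0])           # two fixed-point nodes, one oscillator
omega = 2 * np.pi * np.array([0.04, 0.04, 0.04])
ys = simulate_network(W, a, omega)
```

Only the observed variable `ys` would be available to the inference; the second state dimension plays the role of the hidden states discussed above.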
For reasons of computational tractability, we take the strong assumption that the observation and coupling functions are identical, g ≡ cg. This allows us to effectively decouple the network problem into uncoupled regions with known network input, so we can consider the time series of one region of one subject as a single data point. We return to this choice and its possible implications in light of the results in Discussion. We represent the nonlinear function f with a generic artificial neural network and the function g as a linear transformation. The inference problem is ultimately transformed into the optimization of the cost function, ELBO, which is to be maximized over the weights of f, g, h1, h2, h3, and h4 and over the variances of the system and observation noise. After the optimization, we obtain a description of the dynamical system in terms of the functions f and g, a probabilistic representation of the regional and subject parameters θr and θs and of the external input uext, and projections of the observations into the state space xj. The inferred parameters will not have a mechanistic meaning; however, they can provide a measure of (dis)similarity of the regions and subjects and can be interpreted via the inferred dynamical system f. We test the method on two synthetic datasets. The first is the Hopf model: depending on the bifurcation parameter ai, the dynamics of a region is either noise-driven fluctuation around a stable fixed point (for ai < 0) or oscillation with frequency fi (for ai > 0). In the synthetic dataset, these two parameters are randomly varied across regions. The second model is the parametric mean field model (pMFM), with a parameter that differs for every subject. Both models are used to generate synthetic data for eight subjects, each with an individual structural connectome containing 68 cortical regions of the Desikan-Killiany parcellation. We evaluate the quality of the trained model on the basis of the following criteria. First, we establish whether the inferred parameters are related to the original parameters of the model.
Second, we evaluate how well the trained model reproduces the features of the observed data. The inferred parameters map onto θr for the Hopf model: the bifurcation parameter ai maps to one inferred parameter, and the frequency fi maps to another, although only for ai > 0. That is not a deficiency of the proposed method: in the fixed-point regime the activity is mainly noise driven, and the value of the frequency parameter has a small to negligible influence. In most measures, the trained model performs comparably or slightly worse than the original model and markedly better than both the reshuffled surrogate and the noise surrogate. A standard evaluation feature is the static functional connectivity (FC), computed as the correlation of the time series, which we use here as well. This static FC captures the spatial structure of statistical similarities; however, it has its limitations. Notably, it ignores the temporal changes in FC structure. We have set the subject parameter dimension sm to zero because of the observed difficulties with the convergence of the subject parameters; these are illustrated in fig. S10. We applied the developed method to human resting-state fMRI data obtained from the HCP. To analyze the inferred dynamics, we considered nine combinations of the network input u and the external input uext. For each of these configurations, we located the fixed point(s) numerically by applying a nonlinear root-finding method to 60 initial points chosen randomly from an interval in each dimension (see Methods). Furthermore, from each of these initial points, we simulated the system in the absence of noise to assess whether it exhibits unsteady dynamics, whether periodic or chaotic. In no configuration was multistability observed, and when simulated without the system noise, the system always converged to the fixed point, with no unsteady (periodic or chaotic) behavior. In some runs, the approximate posterior would collapse to the prior distribution N(0, 1) along some or all dimensions; an example of this behavior can be seen in fig. S10. Next, we turn our attention to the region-specific parameters. As the parameter dimension rm is increased, the necessary dimensionality of the parameter space will be reached, and further dimensions of the parameter space will go unused.
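The numerical fixed-point search described above (nonlinear root finding launched from many random initial conditions, followed by deduplication of the roots) might look like this; the toy drift function is a placeholder for the learned f.

```python
import numpy as np
from scipy.optimize import root

def find_fixed_points(f, n_dim, n_starts=60, box=(-2.0, 2.0), seed=0, tol=1e-8):
    """Run a root finder from many random initial points and collect the
    distinct roots of the (noise-free) drift f."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_starts):
        x0 = rng.uniform(box[0], box[1], size=n_dim)
        sol = root(f, x0)
        if sol.success and np.linalg.norm(f(sol.x)) < tol:
            # keep only roots not already in the list
            if not any(np.linalg.norm(sol.x - y) < 1e-5 for y in found):
                found.append(sol.x)
    return found

# toy 2D drift with a single fixed point at the origin
f = lambda x: np.array([-x[0] + 0.5 * x[1], -x[1] - 0.3 * x[0]])
fps = find_fixed_points(f, n_dim=2)   # expect a single root at the origin
```

Simulating the noise-free system from the same initial points, as described in the text, additionally rules out periodic or chaotic attractors.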
This is what we see when we train several models with increasing parameter space dimensionality rm, up to five dimensions. For the simulated evaluation, the parameters θr were randomly chosen from the prior distribution N(0, 1). The system and observation noise were generated randomly and uniquely for each simulation. The external input uext was also generated randomly using the learned parameters. Each region was simulated independently from the others, and the network input was set to be the same for all simulations: the network time series u were precomputed using the empirical fMRI data, and we chose to use the time series whose power was at the 80th percentile of all data to simulate the nodal dynamics with non-negligible and realistic network input. This analysis shows a clear relation between the region-specific parameters and the response to the external input uext; the correlation of the generated time series with the network input depends on the parameters as well. These claims are supported by the statistical analysis of the generated time series. Further insight can be obtained by analyzing the inferred dynamical system f directly (fig. S14): analysis of the eigenvalues at the fixed point, together with the dependence of ∂f/∂uext and ∂f/∂u on the parameters θr, explains the modulation of the response of the system to the external and network input. Another way of analyzing the roles of the region-specific parameters is to look at their relations with various features of the structural and functional data. We divide the features into two categories: first, those derived from the individual data that were used for the model training and, second, those obtained from external sources and not specific to the subject in question. Taking the features and using the inferred parameters for all subjects, we performed a multivariate linear regression for the different features. With the individual data, we evaluated the link to features of the structural connectome and the regional fMRI time series.
The results correspond well with the effects of the parameters as established above. With the external data, we compared our inferred parameters against multiple feature maps used in previous studies on regional heterogeneity in whole-brain networks. Any feature preserved in a new observation, such as FC, time-independent features of FCD, or the energy and frequency spectra of individual time series, should be preserved in the simulated signals too, to the same extent as with a repeated observation. The assumption of identical observation and coupling functions was taken for two reasons: (i) it leads to a problem that can be efficiently solved, and (ii) the designed method is shown to perform reasonably well on the synthetic test cases, even if the assumption is clearly invalid. For instance, in the original Hopf model, the regions are coupled via both variables, yet the trained model produces dynamics with a similar correlation structure. At the same time, we note that the network coupling was reduced in both synthetic test cases, and this assumption is a plausible explanation of this reduction. Similarly, it might have been part of the reason why we have not been able to reproduce the FC with the human fMRI data to the same degree as similar computational studies. The design of a computationally efficient method for system identification of fully coupled systems is thus an important direction for future studies. The diversity of the results on the dynamical nature of large-scale fMRI signals highlights the importance of the modeling choices, particularly that of the underlying dynamical model and the optimization cost function; Nozari and colleagues, for instance, reported that macroscopic resting-state dynamics are well captured by linear models. How do our results compare to those of previous studies in terms of the regional heterogeneity of the parameters? Demirtaş et al. reported a related mapping (R2 = 0.47), as did Deco et al. Owing to the difficulties with the convergence of the subject-specific parameters, we have used only the region-specific parameters in the main analysis.
The precise reason for this failure is not fully clear; however, there are several options that might improve the behavior in future work. In the framework of variational autoencoders (VAE), one option would be increasing the Kullback-Leibler divergence penalty between the prior and the approximate posterior in the cost function, following the so-called β-VAE approach. Proposed for the unsupervised discovery of disentangled representations, β-VAEs reweight the divergence term with a coefficient β. Relaxing the factorized form of the approximate posterior, q = ∏i qi, could also lead to increased flexibility of the model. Modification of our inference model along similar lines could rectify the observed issues. Another option arises from the conjecture that the failure is caused by imposing a particular structure of the latent space in the generative model, which is not appropriately reflected in the inference model; here, inspiration can be taken from the literature on hierarchical VAEs. Linking the dynamically relevant parameters to measurable quantities may provide insights into the origin of neurodegenerative diseases if the dynamically relevant parameters differ between the disease stages. The link between dynamically relevant parameters and the measurable quantities can be estimated from a preexisting patient cohort and then applied to a single subject. That is advantageous if the measurement is difficult, costly, or impossible to perform in a clinical setting (such as for cell type composition estimated from post mortem studies); in such cases, the dynamically relevant parameters may instead be estimated from easy-to-obtain resting-state fMRI and then mapped using the known link. This approach thus opens new possibilities for the exploitation of large-scale neuroimaging databases such as HCP.
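The β-VAE modification mentioned above amounts to reweighting the Kullback-Leibler term of the ELBO; a minimal numpy sketch with a diagonal-Gaussian posterior and a standard-normal prior (the closed-form KL is standard; the function names are illustrative):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def beta_elbo(recon_loglik, mu, logvar, beta=4.0):
    """beta-VAE objective (to be maximized): reconstruction log-likelihood
    minus a beta-weighted KL penalty; beta = 1 recovers the usual ELBO."""
    return recon_loglik - beta * gaussian_kl(mu, logvar)

mu = np.array([0.5, -0.2])
logvar = np.array([-0.1, 0.3])
print(beta_elbo(-10.0, mu, logvar, beta=4.0))
```

Setting β > 1 penalizes posteriors that stray from the prior more strongly, which is the mechanism the disentanglement literature exploits.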
These were further processed by the DiCER method. As outlined above, we assume that the observed activity yj(t) of a brain region j is generated by a dynamical system, where xj(t) ∈ ℝsn is the state at time t, θr ∈ ℝrm and θs ∈ ℝsm are the region-specific and subject-specific parameters, and uext(t) is the external input, shared by all regions of a single subject; the network input to region j is computed over the structural connectome with n nodes. To make the inference problem more tractable, we simplify the problem and assume that the nodes are coupled through the observed variable yj. More precisely, we assume that g ≡ cg and that the observation noise term νj is small enough that it can be included in the coupling. Then, the network input has a form that is independent of any hidden variables and can be computed directly from the known observations. This effectively decouples the time series in different nodes so that they can be processed separately, as described below. For the purpose of the inference, we use the time-discretized form of the model at steps k. As usual in variational inference, we aim to maximize the ELBO and, by doing so, at the same time minimize the Kullback-Leibler divergence between the true posterior and the approximate posterior q. In the following text, we consider only a single data point from one subject and one region and omit the region indexing for brevity. A single data point {y, u, c} representing the data from one region is composed of the observed time series y ∈ ℝtn, the network input time series u ∈ ℝtn, and a one-hot vector c ∈ ℝnsub, that is, a vector with zeros everywhere except the i-th position with value one, encoding the identity of subject i. For this data point, the ELBO can be expressed as follows.
Here, the first line represents the decoder loss; the second line represents the priors for the states x, the region- and subject-specific parameters \u03b8r and \u03b8s, and the external input uext; and the third line represents the approximate posteriors, again for the states, the region- and subject-specific parameters, and the external input. Alternatively, the second and third lines above can be rewritten using the Kullback-Leibler divergences of the posterior and prior distributions. We assume that the observation model can be described as a linear transformation of the system state with Gaussian noise, y = g(x) + \u03bd = a \u00b7 x + b + \u03bd. This forward projection essentially represents the decoder part of the encoder-decoder system; in the likelihood, N stands for the normal distribution with mean \u03bc and variance \u03c3^2. The parameters of the observation model, which are to be optimized, are the coefficients of the linear projection a and b, together with the observation noise variance. The second term is the prior for the states x given the network input u, the external input uext, and the parameters \u03b8r and \u03b8s. It is here where the dynamical system f appears in the ELBO. This term can be expanded over time: the first term in the expansion is the prior for the initial state x0, and the system then evolves over time according to the function f. We represent the function f as a two-layer neural network, with a rectified linear unit (ReLU) activation function in the hidden layer. That is, with n_in = n_s + m_r + m_s + 2, the function f : \u211d^(n_in) \u2192 \u211d^(n_s) is given as f(v) = W2 \u03d5(W1 v + b1) + b2, with W1 \u2208 \u211d^(n_h\u00d7n_in), W2 \u2208 \u211d^(n_s\u00d7n_h), b1 \u2208 \u211d^(n_h), and b2 \u2208 \u211d^(n_s) being the weights and biases, n_h being the number of hidden units, and \u03d5(x) = max(0, x) being the ReLU rectifier. The weights and biases of the network are to be optimized, together with the system noise SD \u03c3s. 
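The two-layer ReLU parameterization of f can be sketched as follows (the initialization scale is an illustrative choice; in the paper these weights are optimized rather than fixed):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def make_f(n_in, n_hidden, n_state, rng):
    """Two-layer ReLU network representing the dynamics function f,
    f(v) = W2 @ relu(W1 @ v + b1) + b2, mapping R^n_in -> R^n_state."""
    W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # illustrative scale
    b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_state, n_hidden)) * 0.1
    b2 = np.zeros(n_state)
    def f(v):
        return W2 @ relu(W1 @ v + b1) + b2
    return f
```

The input v would concatenate the current state, the regional and subject parameters, and the network and external inputs, matching n_in = n_s + m_r + m_s + 2.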
The number of hidden units is given in table S1. Here, we use the standard normal distribution as a prior for the initial state. For the region- and subject-specific parameters, we use the standard normal distribution as a prior, as is often used in variational autoencoders: p(\u03b8r) = N(0, 1) and p(\u03b8s) = N(0, 1). These priors enter the second and the third term above. We set the prior for the external input to an autoregressive process with time scale \u03c4 and variance \u03c3^2; the autoregressive coefficient then reads e^(\u22121/\u03c4). The variance is fixed to \u03c3 = 1 because any scaling can be done inside the function f, and the time scale is optimized together with the other parameters and neural network weights in the optimization process. For the approximate posteriors of the states x and region-specific parameters \u03b8r, we use the idea of amortized variational inference, and instead of representing the parameters directly, we train a recurrent neural network to extract the means and the variances from the time series of the observations y, the time series of the network input u, and the one-hot vector c encoding the subject identity. We follow the standard approach and use multivariate normal distributions for the approximate posteriors. Specifically, we use long short-term memory (LSTM) networks for both functions h1 and h2. The input to the networks at step k is the concatenated observation y_k and the network input u_k, to which the time-independent one-hot vector c is also appended. In other words, the trained LSTM networks map the time series of observations y, the time series of the network input u, and the identity vector c to the parameters of multivariate normal distributions with diagonal covariance, which define the approximate posterior distributions of the state time series x and regional parameters \u03b8r. The approximate posteriors of the subject-specific parameters \u03b8s and the external input uext depend only on the subject identity encoded in the one-hot vector c. 
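The autoregressive prior on the external input can be illustrated by sampling an AR(1) process with coefficient e^(\u22121/\u03c4) and unit stationary variance, as described above (the sampling routine is a sketch, not the paper's code):

```python
import numpy as np

def sample_ar1_prior(n_steps, tau, rng):
    """Sample from an AR(1) prior with autoregression coefficient
    a = exp(-1/tau) and stationary variance fixed to 1 (any scaling is
    absorbed by the dynamics function f, as in the text)."""
    a = np.exp(-1.0 / tau)
    noise_sd = np.sqrt(1.0 - a ** 2)  # keeps the stationary variance at 1
    u = np.empty(n_steps)
    u[0] = rng.standard_normal()      # draw the first sample from N(0, 1)
    for k in range(1, n_steps):
        u[k] = a * u[k - 1] + noise_sd * rng.standard_normal()
    return u
```

A long sample should have empirical variance close to the fixed stationary value of 1.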
Their means and variances are stored directly in the matrices of means (M1 \u2208 \u211d^(m_s\u00d7n_sub) for \u03b8s and M2 \u2208 \u211d^(n_t\u00d7n_sub) for uext) and matrices of log-variances (V1 \u2208 \u211d^(m_s\u00d7n_sub) and V2 \u2208 \u211d^(n_t\u00d7n_sub)). For a specific subject, the relevant values are extracted through the product with the one-hot vector c. The optimization target is the negative dataset ELBO, where L_ij is the ELBO associated with a subject i and region j. It is defined by the parameters of the encoding networks h1 and h2; the weights of the neural network f; the means and variances of the subject-specific parameters and of the external input time series M1, M2, V1, and V2; the external input time scale \u03c4; the system and observation noise variances; and the projection coefficients a and b. The initial conditions for the optimization are set as follows. The log variances of the system noise are set to \u22122, and the log variances of the observation noise are set to 0. The projection vector a is initialized randomly, and the projection bias b is set to zero. Matrices for subject-specific parameters and for external input M1, M2, V1, and V2 are initialized by randomly drawing from a normal distribution. All layers of the used neural networks use the default initialization provided by Keras. To assess possible overfitting in the inference and generative networks, we adopt the following procedure when applying the method to resting-state fMRI data. We divide the time series from all regions and all subjects into a training set (80% of regions) and a test set (20% of regions), and the optimization is then performed using the ELBO calculated from the training set only (fig. S11). The test set still shares the directly optimized external input uext and subject parameters \u03b8s through the encoding of the identity of the subject in the one-hot vector c. Therefore, this procedure should not be understood as testing any predictive power for a new dataset (we are making no claims in that regard), but rather as evaluating whether the generative model f is memorizing specific time series. 
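The extraction of subject-specific values through the product with the one-hot vector c amounts to selecting one column of the parameter matrix; a tiny example with made-up numbers:

```python
import numpy as np

# Means of subject-specific parameters stored column-wise in M
# (shape m_s x n_sub); multiplying by the one-hot subject vector c
# extracts the column belonging to that subject.
M = np.array([[0.1, 0.5, -0.3],
              [1.0, 2.0,  3.0]])   # m_s = 2 parameters, n_sub = 3 subjects
c = np.array([0.0, 1.0, 0.0])      # one-hot encoding of the second subject
mu_subject = M @ c                 # selects the second column of M
```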
We note that the split is performed between the regional time series and not between subjects; that is, the regional time series in the test set come from the same subjects as the regional time series used for the training. This is a limitation of the present architecture, as it cannot be readily applied to new subjects because of the direct optimization of the external input. In the Hopf model of large-scale brain dynamics, each node i is described by two parameters: a bifurcation parameter a_i and an intrinsic frequency f_i. For a_i < 0, the uncoupled neural mass has one stable fixed point, and for a_i > 0, the neural mass has a stable limit cycle indicating sustained oscillations with frequency f_i. The bifurcation exists at the critical value a_i = 0. The dynamics of each node in the network are given by a set of two coupled nonlinear stochastic differential equations, where \u03c9_i = 2\u03c0 f_i, G > 0 is the scaling of the coupling, and w_ij is the weight of the connection from node j to node i. Additive Gaussian noise \u03b7 is included in the equations, with SD \u03b2. To generate the synthetic dataset, we use the structural connectome matrices of the first eight subjects from the group described above. We simulate eight subjects, with increasing coupling coefficient G, and the system is then simulated for 205 s. The first 25 s are then discarded to avoid the influence of the initial conditions, leaving 180 s of data. The system is simulated with the Euler method with time step \u0394t = 0.02 s. As the observed variable, we take the first of the two variables in each node, downsampled to 1 Hz; therefore, every time series contains 180 time points. The data are normalized to zero mean and variance equal to one. In the parametric mean field model (pMFM) considered next, x_i is the total input current, H(x_i) is the population firing rate, and S_i is the average synaptic gating variable. 
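A single uncoupled node of the Hopf model can be simulated with the Euler scheme described above; this sketch uses the standard normal-form equations and omits the network coupling term, so it is an illustration rather than the full model:

```python
import numpy as np

def simulate_hopf_node(a, f, beta, dt=0.02, t_total=205.0, seed=0):
    """Euler simulation of one uncoupled Hopf neural mass:
    dx = ((a - x^2 - y^2) x - w y) dt + beta dW
    dy = ((a - x^2 - y^2) y + w x) dt + beta dW,  with w = 2 pi f.
    Returns the time series of the first variable x."""
    rng = np.random.default_rng(seed)
    w = 2.0 * np.pi * f
    n = int(t_total / dt)
    x, y = 0.1, 0.0
    xs = np.empty(n)
    sqdt = np.sqrt(dt)
    for k in range(n):
        r2 = x * x + y * y
        dx = (a - r2) * x - w * y
        dy = (a - r2) * y + w * x
        x += dt * dx + beta * sqdt * rng.standard_normal()
        y += dt * dy + beta * sqdt * rng.standard_normal()
        xs[k] = x
    return xs
```

With beta = 0 and a > 0 the trajectory settles onto a limit cycle of radius roughly sqrt(a); with a < 0 it decays to the fixed point at the origin.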
The total input current depends on the recurrent connection strength r_i, the synaptic coupling strength J = 0.2609 nA, the excitatory subcortical input I_o = 0.295 nA, and the regional coupling G. The strength of the coupling between regions j and i is proportional to the structural connection strength w_ij. The kinetic parameters of the model are the decay time constant \u03c4_s = 100 ms and \u03b3 = 0.641/1000. Values for the input-output function H(x_i) are a = 270 nC\u22121, b = 108 Hz, and d = 0.154 s. Depending on the parameter values and the strength of the network coupling, the system can be either in a monostable downstate regime at low firing-rate values, a bistable regime with two stable fixed points, or a monostable upstate regime at high firing-rate values. The stochastic transitions between states are driven by the additive Gaussian noise \u03b7_i with SD \u03c3. The pMFM was derived as a reduction from a spiking neural model. The coupling G was selected by simulating the system over a range of values and picking the value where the mean of FC from the last 2 min was the highest. With this value of G, the activity of each subject was simulated for 16.4 min, the first two minutes of which were discarded to avoid the influence of the initial conditions. The Euler method with time step \u0394t = 10 ms was used for the simulation. The resulting time series of S_i were temporally averaged over windows of size 0.72 s, leaving 1200 time points in every time series. The data are normalized to zero mean and variance equal to one. To analyze the role of the regional parameters inferred from human resting-state fMRI data, we compare the inferred parameters to several regional features obtained on the individual level or on a population level from previous literature. All features are represented by a vector of 68 elements, corresponding to the cortical regions of the Desikan-Killiany parcellation. The individual-level features are derived from the data used in the model fitting: structural connectivity and parcellated resting-state fMRI. 
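The input-output function H with the quoted constants is commonly written as H(x) = (a x \u2212 b) / (1 \u2212 exp(\u2212d (a x \u2212 b))); assuming that standard form, a small helper:

```python
import math

def firing_rate(x, a=270.0, b=108.0, d=0.154):
    """Population input-output function H(x) of the pMFM in its standard
    form, H(x) = (a*x - b) / (1 - exp(-d*(a*x - b))), with x the total
    input current in nA and the constants quoted in the text."""
    u = a * x - b
    if abs(u) < 1e-9:
        return 1.0 / d  # limit of u / (1 - exp(-d*u)) as u -> 0
    return u / -math.expm1(-d * u)
```

The function is smooth and monotonically increasing, approaching a*x \u2212 b for large inputs and 1/d at the removable singularity.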
For structural connectivity, these are the node in-strength and eigenvector centrality. For fMRI data, these are the first and second spatial eigenvectors obtained from PCA, the vector of correlation coefficients of regional time series with the mean signal of resting-state fMRI, and the vector of correlation coefficients of regional time series with the network input time series. We further consider several regional features derived from other sources unrelated to the modeling data. First, these are the neuronal density and neuronal size derived from the pioneering work of Von Economo and Koskinas. Fixed points were searched for from random initial conditions, sampled uniformly in each dimension. The root-finding method used the system Jacobian calculated by TensorFlow\u2019s automatic differentiation. Stopping tolerance was set to 1\u00a0\u00d7\u00a010\u22123 (in L2 distance). If the system has not converged, then we simulate the system further to 1200 s and evaluate again with the same criteria, and if the system has still not converged, then it is marked as unsteady. To assess whether the system supports unsteady dynamics (limit cycles or chaotic dynamics), we simulate the systems from the same random initializations for 288 s in the absence of system noise. After that, we evaluate whether the system has converged to a fixed point by the criterion of the last 72 s being in the vicinity of the last final state (threshold 1\u00a0\u00d7\u00a010"}
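The fixed-point convergence criterion described above (the tail of the noise-free trajectory staying within a threshold of the final state) can be sketched as:

```python
import numpy as np

def converged_to_fixed_point(traj, tail, tol=1e-3):
    """Check whether the last `tail` samples of a trajectory (array of
    shape n_steps x n_dims) all stay within `tol` L2 distance of the
    final state, mirroring the criterion described in the text."""
    final = traj[-1]
    dists = np.linalg.norm(traj[-tail:] - final, axis=1)
    return bool(np.all(dists <= tol))
```

A decaying trajectory passes the check, while an ongoing oscillation fails it and would be marked as unsteady.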
+{"text": "The individuals under study can be subdivided into two groups, denoted as groups A and B. Using the two time points, we compute a contrast of gene expression reads per individual and gene. The age of each individual is known and it is used to compute, for each gene separately, a linear regression of the gene expression contrasts on the individual\u2019s age. Looking at the intercept of the linear regression to detect a departure from the baseline, we aim to reliably single out those genes for which there is a difference in the intercept among those individuals in group A and not in group B. In this work, we develop testing methodology for this setting based on two hypothesis tests\u2014one under the null and one under an appropriately formulated alternative. We demonstrate the validity of our approach using a dataset created by bootstrapping from a real data application in the context of multiple organ dysfunction syndrome (MODS). We are interested in detecting a departure from the baseline in a longitudinal analysis in the context of MODS. In particular, we are given gene expression reads at two time points for a fixed number of genes and individuals. The individuals can be subdivided into two groups, denoted as groups A and B. For instance, the two groups can be defined as those individuals who recovered from MODS (group A) and those who suffer from a condition called \u201cprolonged MODS\u201d (group B). In this article, we present novel methodology whose development was motivated by an application in the context of MODS. The data that motivated this research are structured as follows. We are given gene expression reads at two time points for a fixed number of genes and individuals, and we aim to single out those genes for which there is a difference in the intercept in group A and not in group B. For each gene separately, we want to perform a linear regression of the gene expression contrasts on the individual\u2019s age. As we are interested in a departure from the baseline, we look at the intercept of the linear regression. 
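The per-gene regression and intercept test can be sketched as an ordinary least squares fit that returns the t-statistic for a zero intercept (a generic OLS sketch, not the authors' implementation):

```python
import numpy as np

def intercept_tstat(age, contrast):
    """OLS fit of gene-expression contrasts on age; returns the
    t-statistic for the null hypothesis that the intercept is zero."""
    n = len(age)
    X = np.column_stack([np.ones(n), age])          # intercept + age covariate
    beta, _, _, _ = np.linalg.lstsq(X, contrast, rcond=None)
    resid = contrast - X @ beta
    sigma2 = resid @ resid / (n - 2)                # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # covariance of estimates
    return beta[0] / np.sqrt(cov[0, 0])             # t-stat for the intercept
```

A gene with a genuine baseline shift yields a large |t|, while a gene whose contrasts are explained by age alone yields a small one.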
The aim of this work is to reliably single out those genes for which there is a difference in the intercept in group A and not in group B. For group A, we test the hypothesis under an appropriately formulated alternative, and for group B under the null, given some level. To approach this, we develop testing methodology based on two hypothesis tests, carried out once under the null and once under an appropriately formulated alternative. In an abstract setting, we are faced with two linear regressions. The aforementioned tests are carried out on both groups for each gene contrast under consideration. Since m genes are tested, this results in m p-values per test. We demonstrate the validity of our approach using a simulated dataset in the context of multiple organ dysfunction syndrome (MODS); the simulation, the resulting p-values, and the hypothesis tests being carried out are described below. The article is structured as follows. We start with a brief literature review. Throughout the entire article, we denote with Y_j the j\u2019th column of a matrix Y. While, initially, studies often compared gene expression data between distinct groups at fixed time points, there is a growing literature which considers time-dependent expression data, meaning studies which extract insights from mRNA (or similar) samples collected at successive time points. One line of work computes p-values where both group and time are the main factors; the aim there is to quantify the group effects and the time effects separately. The proposed approach to handling this problem is a two-stage model which first removes the time effect and then looks at the group effect. The given model is linear and the effects are determined via hypothesis testing with multiplicity correction. The identification of differentially expressed genes in a time course study is an active area of research. 
Starting with , the authors address this setting. In another work , the authors take a related approach, and another contribution in the literature considers a similar problem. In a similar fashion, in , the authors proceed along these lines. In contrast to the methodology presented in our article, which aims to identify gene contrasts showing a departure from the baseline in one group and not another, the aforementioned publications differ from ours in that they either aim to identify different group and time effects, consider two null hypotheses, or are based on functional principal component analysis. A second line of research available in the literature aims to process expression profiles with graphical methods as opposed to hypothesis testing. For instance, an algorithm to increase the temporal resolution of expression measurements and an application to skeletal muscle differentiation can be found in . In the exploratory study of , the authors also take a graphical approach. Finally, there is literature on software tutorials for gene expression analysis with different time points, which, however, does not present new methodology. A workflow for the statistics computer software R, for the purpose of analyzing data from a micro-array time-course experiment, is presented in . Critical illness and severe infection are a leading cause of hospitalization and preventable death in children worldwide . Multiple organ dysfunction syndrome (MODS) is one such condition. We consider a dataset originally created by the Pediatric Intensive Care Influenza (PICFLU) investigators group ,3. A custom gene panel was designed ,20,21. The Pediatric Sequential Organ Failure Assessment (pSOFA) score, ranging from 0 to 24, was used to identify MODS; it can also quantify MODS over time. The dataset under investigation contains two groups, A and B. For each individual, we consider the fixed panel of genes. Since the aim of this contribution is to showcase the new methodology, a simulated dataset is generated from the above dataset of the Pediatric Intensive Care Influenza (PICFLU) investigators group. 
As the analysis of the real dataset requires an extensive discussion of the biological implications, it is deferred to a separate publication. Instead, the simulated dataset we use was generated by bootstrapping (sampling with replacement) from the real data described above. The bootstrapping is conducted as follows. While keeping fixed the original group sizes of A and B, respectively, we pool all contrast measurements for the group \u201cProlonged MODS\u201d and draw bootstrapped measurements from this pool to create a new panel of m genes. The same is carried out using the contrast data for the group \u201cMODS Recovery\u201d. A vector of new age measurements is created by sampling with replacement from the original age measurements, which are all contained in a fixed interval. Since we are only interested in the contrasts, the p-values for both tests are reported. The p-values in both subfigures are sorted, meaning that it is not possible to immediately compare the p-values in the left and right subfigures. We observe that, for the test under the null, the p-values are quite conservative, with only very few significances. For the test under the alternative, we observe a step-function behavior, in the sense that the p-values are essentially either zero or one. We evaluate all discovered genes; however, since the dataset we analyze here is created by bootstrapping from the PICFLU dataset, the discovered genes should not be interpreted biologically. As a summary of the setup: although formulated as a problem with two endpoints, as a preprocessing step, the input data, consisting of gene expression data collected at two different time points for the same set of genes and individuals in two different groups (denoted groups A and B), are converted to gene contrasts (differences in gene expression) per group. 
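The bootstrap construction can be sketched as follows, assuming pooled contrast measurements per group and resampled ages (the function name and array shapes are illustrative):

```python
import numpy as np

def bootstrap_panel(contrasts, ages, m, rng):
    """Create one bootstrapped dataset for a group: draw m gene contrasts
    per individual by sampling with replacement from the group's pooled
    contrast measurements, and resample the ages with replacement."""
    pooled = np.asarray(contrasts, dtype=float).ravel()
    n = len(ages)
    new_contrasts = rng.choice(pooled, size=(m, n), replace=True)
    new_ages = rng.choice(np.asarray(ages, dtype=float), size=n, replace=True)
    return new_contrasts, new_ages
```

The same routine would be applied separately to each group, keeping the original group sizes fixed as described above.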
This effectively reduces the multiple endpoint problem to a single input. This article considered the problem of testing contrasts for a gene expression application in the context of multiple organ dysfunction syndrome (MODS). The statistical challenge of the problem under consideration consists in the fact that we are interested in genes showing significances with respect to one group but not the other. Precisely, we conduct two linear regressions, where each linear regression allows us to determine if the contrasts can be explained by a single covariate alone (in our application, this is the age covariate), and we focus on the intercept to detect a departure from the baseline in gene expression. We consider two sets of hypotheses, where m denotes the number of genes: m hypotheses under the null and m hypotheses under the alternative (to model the condition that we are interested in genes showing significances in group A but not in group B). Special attention is paid to the formulation and calibration of an appropriate level under the alternative. The level we choose is essentially arbitrary, but motivated in the literature . The dataset originally created for MODS stems from the Pediatric Intensive Care Influenza (PICFLU) investigators group ,3."}
+{"text": "Disruption of pancreatic islet function and glucose homeostasis can lead to the development of sustained hyperglycemia, beta cell glucotoxicity, and ultimately type 2 diabetes (T2D). In this study, we sought to explore the effects of hyperglycemia on human pancreatic islet (HPI) gene expression by exposing HPIs from two donors to low (2.8mM) and high (15.0mM) glucose concentrations over 24 hours, assaying the transcriptome at seven time points using single-cell RNA sequencing (scRNA-seq). We modeled time as both a discrete and continuous variable to determine momentary and longitudinal changes in transcription associated with islet time in culture or glucose exposure. Across all cell types, we identified 1,528 genes associated with time, 1,185 genes associated with glucose exposure, and 845 genes associated with interaction effects between time and glucose. We clustered differentially expressed genes across cell types and found 347 modules of genes with similar expression patterns across time and glucose conditions, including two beta cell modules enriched in genes associated with T2D. Finally, by integrating genomic features from this study and genetic summary statistics for T2D and related traits, we nominate 363 candidate effector genes that may underlie genetic associations for T2D and related traits. Type 2 diabetes and related complications are among the leading causes of death globally . One approach to help identify and understand the genes that contribute to type 2 diabetes pathogenesis and progression is to explore the effects of physiologically relevant conditions, such as hyperglycemia, on human islet gene expression. To date, human islet glucose-stimulus studies have shown the effects of hyperglycemia on genes related to insulin secretion and oxidative stress. In this study, we expose human pancreatic islets to low and high glucose concentrations in vitro and sample cells at seven time points over 24 hours. 
We identify genes with changes in expression associated with time in culture or glucose exposure using both discrete models, which test for changes in expression at a given time point, and continuous models, which analyze all of the data longitudinally across time points. Using these models, we test for three types of gene expression effects: (i) time in culture, (ii) glucose exposure, and (iii) interaction effects between time and glucose exposure . Through this time-series experimental design, we demonstrate the importance of controlling for \u201ctime in culture\u201d to isolate expression effects associated with glucose exposure. We show that the transcriptomic response associated with high glucose exposure plateaus at 8 hours in beta cells and is variable across other cell types. For beta cells specifically, we find multifactorial effects on gene expression whereby many genes have patterns associated with all three of the considered variables: time, glucose exposure, and time:glucose\u2014highlighting the sensitivity of beta cells to both time in culture and glucose exposure. By clustering the expression profiles of genes in euglycemic and hyperglycemic conditions through time, we identify cell-type specific modules of co-expressed genes. For two of the beta cell expression modules, we find enrichment in genes related to type 2 diabetes, one of which shows strong enrichment in pathways related to insulin secretion. Finally, by modeling genetic association signals from type 2 diabetes and type 2 diabetes-related traits using the data from this study, we identify 363 candidate effector genes that may drive genetic association signals for type 2 diabetes, blood glucose, and glycated hemoglobin (HbA1c)\u2014seven of which feature in the aforementioned type 2 diabetes-associated expression modules. 
Collectively, the results from this study provide a high-resolution view of the effects of euglycemic and hyperglycemic conditions on islet cell types through time and should help guide future experiments to understand the molecular mechanisms that lead to islet dysfunction in disease states like type 2 diabetes. In this study, we perform single-cell RNA sequencing (scRNA-seq) to measure the transcriptional changes associated with sustained glucose exposure across islet cell types. We expose human pancreatic islets from two donors to euglycemic conditions (2.8mM) and hyperglycemic conditions (15.0mM). We obtained human pancreatic islets from two donors and acclimated the islets to a basal, euglycemic media of 2.8mM glucose (50mg/dL) over one hour. We fit three discrete models to characterize the transcriptomic response of islet cell types to glucose stimulation at each time point. Comparing signed \u2212log10 p-values across models, we found that the BvH and BvL values were strongly correlated, but not the LvH values. To further characterize the transcriptional response to time in culture and glucose exposure, we calculated the overlap of associated genes (FDR<5%) across time points within each model and cell type. While discrete models can effectively identify time-specific effects, to model time longitudinally we considered two metrics: (i) sampled time and (ii) interpolated time. Across the three continuous models, we identified 1,431 genes with expression patterns associated with time, glucose, and time:glucose. As anticipated, genes such as ABCC8 feature among them, and we examined enriched terms within each module. We modeled signed \u2212log10 p-values for each gene across phenotypes and found them to be slightly correlated (minimum Pearson\u2019s r=0.3), indicating the presence of both common and disease/trait-specific effects. We identified candidate effector genes for each phenotype: 14 for type 2 diabetes, 358 for blood glucose, and 3 for HbA1c. Among these are ERO1B, HNRNPA2B1, HOPX, and RHOBTB3. 
We sought to add further evidence for a type 2 diabetes-relevant role of the genes within the type 2 diabetes-enriched beta cell expression modules and intersected the genes in modules 84 and 103 with the 363 candidate effector genes. Eight of the 15 genes in these modules (53%) were identified as candidate effector genes, including four genes not in the Type 2 Diabetes Knowledge Portal effector gene list (ERO1B, HNRNPA2B1, HOPX, and RHOBTB3). Combined, these results strongly suggest a type 2 diabetes-relevant role of these genes in beta cells. In this study, we present an in-depth characterization of the 24 hour transcriptomic response of human pancreatic islets to glucose exposure across cell types and time. In vitro experimental conditions are not perfect representations of in vivo conditions; factors such as time in culture can induce expression changes that must be controlled for. As an example, we found that many of the genes induced by glucose in beta cells also show time-in-culture effects. These findings are an important reminder that islet cell types, especially beta cells, are sensitive to intentional and unintentional experimental perturbations, as we have shown previously. We find that the number of genes whose expression is associated with glucose exposure plateaus at 8 hours in beta cells, while other cell types showed less consistent trends. Interestingly, for beta cells specifically, we discover that ~85% of genes differentially expressed at \u22654 hours in LvH models already show expression differences at 1\u20132 hours, indicating a rapid transcriptomic response to glucose exposure\u2014a finding that would have been missed if we did not serially sample cells over a 24 hour period. 
In addition, by jointly modeling time and glucose concentration effects on gene expression of cells across all time points, we identify widespread multifactorial effects on gene expression in beta cells, whereby genes exhibit expression patterns associated with time in culture, glucose exposure, and time:glucose interaction effects, underscoring the sensitivity of beta cells to glucose exposure and other experimental conditions. For example, we observed glucose-associated expression of PLIN3 in alpha cells and of EIF5A in other cell types. The availability of cell type expression profiles over 24 hours allowed us to identify modules of genes with similar expression profiles through time in response to glucose stimulation. Interestingly, two beta cell modules were enriched in known type 2 diabetes effector genes. Genes in both modules showed increased expression in high glucose. For module 84, the differences in expression changed over time, particularly after 8 hours. For module 103, the differences in expression remained relatively constant after 4 hours. Finally, by modeling genetic association summary statistics using genomic features derived from the single-cell data of this study, we nominate additional candidate effector genes for type 2 diabetes, blood glucose, and HbA1c. We identify many genes with a well-defined role in relationship to type 2 diabetes (e.g., ABCC8, SLC30A8), including 18 of the 132 type 2 diabetes effector genes from the Type 2 Diabetes Knowledge Portal. Eight of these genes occur in beta cell modules 84 and 103, including four not previously annotated as an effector gene in the Type 2 Diabetes Knowledge Portal. Islets were cultured for 72 hours at 37\u00b0C, then packaged and transported to our laboratory at 4\u00b0C over a period of 24 hours. 
Upon receipt, we equilibrated the islets to 37\u00b0C for 1 hour in 2.8 mM glucose media prior to downstream processing. We obtained purified human pancreatic islets from two individuals through Prodo Laboratories. We exposed the islets to low (2.8 mM) or high (15 mM) glucose for 24 hours and performed single-cell RNA sequencing (scRNA-seq) at 1, 2, 4, 8, 12, and 24 hour time points. In addition, we performed scRNA-seq on islets at baseline 2.8 mM glucose prior to starting the stimulation experiment. For each sampled time point, we dissociated the islet aliquot and performed scRNA-seq. To dissociate the islets, we incubated the 2,000 IEQ aliquot in 1mL Accutase solution at 37\u00b0C for 10 minutes, washed them with 2 mL PIM(S)TM , incubated the islets for 10 min at 37\u00b0C in 2 mL PBS with 4U Dispase I (Roche Diagnostics) / 2U DNase I (ThermoFisher Scientific), washed them once, and resuspended them in PIM(S). We then passed the final cell suspension through a BD 40 \u03bcm cell strainer to remove aggregates and assessed the cells for viability and abundance via staining with acridine orange and DAPI (Chemometec Nucleocounter NC-3000). With the filtered suspension, we generated a single-cell mRNA library using either a 10X Genomics SC3\u2019v2 or SC3\u2019v3 chemistry kit according to the manufacturer\u2019s instructions, and we quantified the resulting libraries prior to sequencing. We used CellRanger v3.1.0 with default parameters to process and align reads to GRCh38.p13 with Ensembl version 98 transcript definitions (reference file distributed by 10X Genomics), identify cell-containing droplets, and generate cell \u2715 gene count matrices. Next, we used DecontX to remove ambient RNA contamination, and we compared the expression of XIST to the mean expression of all genes on the Y chromosome. We identified and removed multiplets using scrublet v0.2.1, which simulates doublets from the observed data. For subsequent analyses, we further processed the data using scanpy v1.6.0, removing low-quality cells. As a final quality control step, we calculated the gene expression correlation between replicates for each donor. 
Prior to calculating cell type clusters, we reduced the cell \u2715 gene expression matrix to a core set of independent variables using principal component analysis (PCA). First, we selected the 2,000 most variable genes across samples by (i) calculating the most variable genes per sample and (ii) selecting the 2,000 genes that occurred most often across samples (sc.pp.highly_variable_genes with flavor=\u2018seurat\u2019 and batch_key=sample). After mean centering and scaling the ln(CP10K+1) expression matrix to unit variance, we performed principal component analysis using the 2,000 most variable genes. To select the number of PCs for subsequent analyses, we used a scree plot. Using the harmony-adjusted PCs, we calculated clusters using the Leiden graph-based clustering algorithm v0.8.3 (sc.tl.leiden). To determine the cell type identity of clusters, we used well-established marker genes for the cell types common to islets: GCG (alpha cells), INS (beta cells), PPY (gamma cells), SST (delta cells), PRSS1 (acinar cells), KRT19 (ductal cells), COL1A1 (stellate cells), and CD68 (macrophages). We computed nearest neighbors using scvelo v0.2.4 (scv.pp.neighbors) and derived interpolated time, t\u0302, for each cell using the following formula, where t is the target cell\u2019s sampled time point, w is a weight coefficient for t, d_i is the distance of the target cell to neighboring cell i, and t_i is the sampled time point of neighboring cell i. To select values for n and w, we performed an exhaustive grid search to evaluate the effect of all combinations of n and w on the stability of t\u0302, where n ranged from 0 to 150 in intervals of 5 and w ranged from 0 to 1 in intervals of 0.25. 
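The interpolated-time formula itself did not survive extraction; one plausible reconstruction consistent with the listed variables (the cell's own time t weighted by w, neighbors weighted by inverse distance) is sketched below and should be read as an assumption rather than the authors' exact definition:

```python
import numpy as np

def interpolated_time(t, w, neighbor_times, neighbor_dists, eps=1e-12):
    """Assumed reconstruction of interpolated time t_hat: a weight w on
    the cell's own sampled time t, combined with an inverse-distance-
    weighted mean of its n nearest neighbors' sampled times."""
    inv_d = 1.0 / (np.asarray(neighbor_dists, dtype=float) + eps)
    neighbor_avg = np.sum(inv_d * np.asarray(neighbor_times, dtype=float)) / np.sum(inv_d)
    return w * t + (1.0 - w) * neighbor_avg
```

With w = 1 the interpolated time reduces to the sampled time, matching the w = 0 to 1 grid-search range described above.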
For each value of w, we considered all values of n and compared t\u0302 to the t\u0302 derived from the previous n value using mean squared error (MSE) as a stability metric. We used t\u0302 rather than t to better model the cellular state of each cell at each time point, assuming that cells captured at each time point are not at a uniform cellular state in their response to time in culture and/or glucose exposure but rather are distributed across cellular response phases. Prior to calculating interpolated time, we removed the 0 hour time point as there was no high glucose treatment at this time point. For each cell type and glucose condition, we calculated t\u0302 across all cells. For gene i and cell k, let Zki indicate whether gene i is expressed in cell k and Yki denote the ln(CP10K+1) normalized gene expression. We tested for association using the two-part regression model logit(Pr(Zki = 1)) = Xk\u03b2iD and Yki | (Zki = 1) ~ N(Xk\u03b2iC, \u03c3i2), where Xk are the predictor variables for cell k and \u03b2i is the vector of fixed effect regression coefficients. For each cell type, we performed differential gene expression (DGE) analysis (both discrete time point DGE models and continuous time DGE models) using MAST v1.20.0, a two-part, generalized linear model with a logistic regression component for the discrete process and a linear regression component for the continuous process. Across all DGE models, we included as fixed effect covariates cell complexity, to control for unobserved nuisance variation, and participant identifiers. For the discrete time DGE models, we fit separate models for each cell type and time point. We considered three models. First, in the \u201cbasal-versus-high\u201d (BvH) model, we compared basal cells to cells exposed to high glucose (15 mM) across time by including a fixed effect \u201cglucose status\u201d variable.
Second, in the \u201cbasal-versus-low\u201d (BvL) model, we compared basal cells to cells maintained in low glucose (2.8 mM) across time by including a fixed effect \u201ctime point\u201d variable. Third, in the \u201clow-versus-high glucose\u201d (LvH) model, we compared cells exposed to low glucose (2.8 mM) to cells exposed to high glucose (15 mM) at each time point by including a fixed effect \u201cglucose status\u201d variable. For the continuous time DGE models, we fit separate models for each cell type, jointly analyzing cells across all glucose exposures and time points, excluding the 0 hour time point. We considered three different models: (i) a \u201ccontinuous time\u201d model to test for time effects, (ii) a \u201cglucose\u201d model to test for glucose effects, and (iii) a \u201ctime:glucose interaction\u201d model to test for an interaction effect between time and glucose concentration. For all three models, we included glucose concentration and continuous time as fixed effect variables (in addition to cell complexity and participant identifiers). For the \u201ctime:glucose interaction\u201d model, we used the same model but added an additional time:glucose concentration interaction term. P-values were obtained from the hurdle model, derived from the summed \u03c72 null distributions of the discrete (Zi) and continuous (Yi) components, as described in Finak et al. We clustered the trajectories of all genes associated (FDR<5%) with any of the DGE models apart from the BvH model, and clustered the resulting consensus matrix using k-means (kmeans function in R). For the DPGP parameter configurations, we considered 45 different parameter configurations, varying (i) the alpha parameter, which tunes the number of clusters, from 0.001 to 10.0 and (ii) the num_empty_clusters parameter, which governs the number of empty clusters introduced at each iteration of the clustering algorithm, from 2 to 16.
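As an illustration of the two-part hurdle idea (not the authors' MAST implementation, which fits regularized GLMs with the covariates above), a toy statistic for one gene and a binary condition can combine a detection-rate chi-square with a squared Wald statistic on expressing cells, summing the two components as in Finak et al.:

```python
import numpy as np

def chi2_2x2(z, g):
    # Pearson chi-square for a 2x2 table: detection (z) by binary group (g).
    n11 = np.sum((z == 1) & (g == 1)); n10 = np.sum((z == 1) & (g == 0))
    n01 = np.sum((z == 0) & (g == 1)); n00 = np.sum((z == 0) & (g == 0))
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den

def wald_chi2(y, x):
    # Squared Wald statistic for the slope of y ~ x (asymptotically chi2_1).
    x = x - x.mean()
    b = (x @ y) / (x @ x)
    resid = y - y.mean() - b * x
    se2 = (resid @ resid) / (len(y) - 2) / (x @ x)
    return b * b / se2

def hurdle_statistic(y, g):
    # Discrete component on detection plus continuous component on
    # expressing cells only; the two chi-square statistics are summed.
    z = (y > 0).astype(int)
    expr = y > 0
    return chi2_2x2(z, g) + wald_chi2(y[expr], g[expr].astype(float))
```

The helper names are illustrative; a real analysis would also include the fixed-effect covariates (cell complexity, participant identifiers) in both components.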
For each parameter configuration, we merged the DPGP clusters across low and high glucose by (i) averaging the similarity matrices, (ii) concatenating the cluster assignments and the corresponding log-likelihood values, and (iii) re-running DPGP using the merged information from steps i-ii. For the k-means clustering of the consensus matrix, we started at k=2 and iteratively increased k by 1 until \u22651 cluster had a minimum co-clustering frequency \u22650.75. We removed all genes in the cluster(s) with a minimum co-clustering frequency \u22650.75 from the consensus matrix, subtracted the number of removed clusters from k, and repeated this process\u2014iteratively increasing k and removing clusters with a co-clustering frequency \u22650.75 until no genes remained in the consensus matrix. For final module assignments, we removed clusters with <5 genes. We clustered the gene expression \u2715 interpolated time matrices (one for each glucose condition) using the Dirichlet process Gaussian process (DPGP) package v0.1. To determine T2D-relevant expression modules, we tested the modules for enrichment of T2D effector genes among the genes constituting each module. For T2D effector gene definitions, we used the Mahajan and McCarthy effector gene predictions from the T2D knowledge portal (https://t2d.hugeamp.org/method.html?trait=t2d&dataset=mccarthy). We controlled for the number of tests performed for each cell type using the Benjamini-Hochberg procedure. We nominated candidate effector genes using the Polygenic Priority Score (PoPS) method v0.2. In addition, we also included (i) cell type gene expression specificity values calculated using CELLEX v1.2.2 and (ii) average expression within each cell type. We computed permutation p-values as (r + 1) / (n + 1), where r is the number of permuted PoP scores greater than the observed score and n is the total number of permutations.
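The consensus-peeling loop described above can be sketched as follows. This is a simplified reconstruction: `cluster_fn` stands in for the R kmeans call, and the minimum module-size filter is folded into `min_size` rather than applied as a separate final step.

```python
import numpy as np

def min_coclustering(consensus, members):
    # Minimum pairwise co-clustering frequency among a cluster's genes.
    idx = np.asarray(members)
    sub = consensus[np.ix_(idx, idx)]
    iu = np.triu_indices(len(idx), k=1)
    return sub[iu].min() if iu[0].size else 1.0

def extract_modules(consensus, cluster_fn, thresh=0.75, min_size=5):
    # Iteratively cluster the consensus matrix, peel off clusters whose
    # minimum co-clustering frequency reaches `thresh`, decrease k by the
    # number of peeled clusters, increase k by 1, and repeat until no
    # genes remain. Clusters smaller than `min_size` are discarded.
    active = np.arange(len(consensus))
    modules, k = [], 2
    while active.size:
        C = consensus[np.ix_(active, active)]
        labels = cluster_fn(C, min(k, active.size))
        keep = np.ones(active.size, dtype=bool)
        removed = 0
        for lab in np.unique(labels):
            members = np.where(labels == lab)[0]
            if min_coclustering(C, members) >= thresh:
                if members.size >= min_size:
                    modules.append(active[members].tolist())
                keep[members] = False
                removed += 1
        active = active[keep]
        k = max(k - removed + 1, 2)
    return modules
```

Any clustering callable with the signature `cluster_fn(consensus_submatrix, k) -> labels` can be plugged in; the loop terminates because once k reaches the number of remaining genes, singleton clusters trivially satisfy the co-clustering threshold and are removed.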
Previous bulk islet transcriptomic studies report genes associated with glucose stimulation using a study design similar to the LvH model of this study."}
+{"text": "Pan-genomics is an emerging approach for studying the genetic diversity within plant populations. In contrast to common resequencing studies that compare whole genome sequencing data with a single reference genome, the construction of a pan-genome (PG) involves the direct comparison of multiple genomes to one another, thereby enabling the detection of genomic sequences and genes not present in the reference, as well as the analysis of gene content diversity. Although multiple studies describing PGs of various plant species have been published in recent years, a better understanding regarding the effect of the computational procedures used for PG construction could guide researchers in making more informed methodological decisions. Here, we examine the effect of several key methodological factors on the obtained gene pool and on gene presence\u2013absence detections by constructing and comparing multiple PGs of Arabidopsis thaliana and cultivated soybean, as well as conducting a meta-analysis on published PGs. These factors include the construction method, the sequencing depth, and the extent of input data used for gene annotation. We observe substantial differences between PGs constructed using three common procedures and find that results are dependent on the extent of the input data. Specifically, we report low agreement between the gene content inferred using different procedures and input data. Our results should increase the awareness of the community to the consequences of methodological decisions made during the process of PG construction and emphasize the need for further investigation of commonly applied methodologies. Pan-genomics is an emerging approach for studying plant intraspecific genomic diversity by comparing multiple genomes rather than relying on a single reference. Despite the common use of pan-genomics, the methodology involved in PG construction is underexplored and still poorly understood.
In this study, we assess the effect of several key methodological factors on pan-genome (PG) construction and report substantial impact of the applied procedure and input data. We observe considerable differences in gene content inferences between PGs constructed from the same data using different methods, and vice versa. These findings highlight the importance of making informed decisions during the PG construction process and demonstrate the need for extensive investigation of widely used computational methodologies. The continuous improvement and cost reduction of whole genome sequencing (WGS) lead to a rapid increase in the number of available plant genomes. These data provide new opportunities to study genetic diversity within and across plant species. It is now possible to examine the genetic variation present in populations consisting of hundreds or even thousands of accessions of the same plant species. Such comprehensive data sets may be utilized to assist crop breeding and improvement efforts. The PG is defined as the nonredundant pool of genetic material present in the population of a given species (or a broader taxonomic group). When focusing on protein-coding sequences (\u201cgene-based\u201d PGs), PGs are composed of a set of pan-genes, each representing an orthologous set of genes present in one or more of the studied accessions. Pan-genes vary in terms of the number of accessions in which they are present. Pan-genes found in all examined accessions are termed core genes, whereas those present in some accessions only are referred to as shell or dispensable genes. Genes that are found in a single accession are called singletons.
The relative ratios of core, shell, and singleton pan-genes, sometimes referred to as the PG composition, were previously hypothesized to be characteristic of particular plant species, potentially reflecting their evolution, ecology, and life history. In recent years, a different approach for pan-genomics has been developed, in which the term PG is used to represent a graph data structure, constructed based on inferred structural variants (SVs). These SVs are inferred from comparisons of whole genome sequences or long-read sequencing. Notably, the construction of a plant gene-based PG is a laborious, time-consuming, and often expensive task, requiring considerable amounts of sequencing data as well as computational resources. Several gene-based PG construction methods have been devised, most of which can be classified as variants of three general methodologies: the de novo (DN) assembly and annotation approach, the map-to-pan (MTP) approach, and the iterative assembly (IA) approach. The general workflows of these approaches are depicted in fig. 1. In all construction approaches, gene annotation may be based on RNA-seq data produced for a subset of the analyzed accessions, on protein sequence evidence, or on both. Several past publications have discussed the expected differences between the various PG construction approaches and the effects of other relevant methodological factors. PG projects also differ considerably in the input data used for their construction, from the amount (depth) and type of sequencing data used for genome assembly to the quality and richness of the data used in genome annotation. The amount and quality of input data often reflect the purpose and the budget of a project, specifically the number of analyzed accessions. Still, a better understanding of the effects of such factors may assist researchers in making more informed decisions and better use of their budgets. In this study, we explore the effect of different methodological considerations on the resulting PG.
Specifically, we examine the effects of the construction approach, the depth of sequencing, the quality of assembly, and the quality of annotation evidence on the two main aspects of the resulting PG: nonreference gene detection and gene presence\u2013absence inference. This was achieved by constructing multiple PGs using different procedures and empirical data sets and comparing them with one another. In addition, we explore the effects of several technical parameters specific to each approach. Our analysis reveals that certain factors affect the constructed PG to different extents and sometimes in ways that would be difficult to predict. In the following sections, we investigate the impact of different methodological factors on the construction of gene-based PGs. We employed a range of procedures and data sets to construct 20 Arabidopsis thaliana and 7 cultivated soybean (Glycine max) PGs. A fraction of the DN nonreference proteins and 5.7% (38) of the MTP nonreference proteins were highly similar to reference proteins (sequence identity \u2265 95% and 0.8 < length ratio < 1.2). An additional 7.5% (341) and 3.5% (23) of the DN and MTP nonreference proteins, respectively, were found to be truncated versions of reference proteins (sequence identity \u2265 95% and length ratio \u2264 0.8). In both cases, the identification of these genes as nonreference is likely due to the existence of a very close paralog or an artifact of the PG construction and annotation procedure. Next, we searched for additional functional support for the existence of nonreference genes. Potential homologs of the nonreference proteins were identified by searching through a database of plant proteins from Ensembl Plants, excluding A. thaliana. Homologs were found (sequence identity > 50%) for 14% (639) and 32.9% (218) of the DN and MTP proteins, respectively. Additionally, using an extensive RNA-seq data set, evidence of nonreference gene expression was found for 12.9% (588) of the DN nonreference genes and 27.3% (181) of the MTP nonreference genes. We defined HQ candidates as nonreference genes that do not highly resemble reference proteins but do show homology to other known plant proteins or evidence of being transcribed. We found that 18.7% (856) and 39.4% (261) of the DN and MTP nonreference genes are HQ candidates. These results suggest that the higher number of gene models predicted by the DN approach allows it to detect more genuine novel genes, compared with the MTP approach. Note, however, that other definitions for HQ candidates are possible and that results may be dependent on the quality of the sequence data available in the target databases. One of the central aims of constructing PGs is the discovery of nonreference genes, because these genes cannot be detected using a single reference genome. We thus focused the next comparison on the sets of nonreference genes. Due to the general similarity between the MTP and IA procedures, we focused on the MTP and DN nonreference gene pools. Out of the 4,565 and 663 nonreference pan-genes found in the DN and MTP PGs, respectively, 89 could be reliably matched between approaches based on protein sequence similarity. Thus, the great majority of nonreference genes were only detected by one of the pipelines but not the other. Hereafter, pan-genes that are only present in the nonreference pool of the DN pipeline are termed DN+|MTP\u2212, whereas pan-genes only detected by the MTP pipeline are termed DN\u2212|MTP+. By applying a series of transcript and protein sequence mapping analyses, we identified multiple methodological causes for the existence of DN+|MTP\u2212 and DN\u2212|MTP+ pan-genes.
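The classification thresholds quoted above (sequence identity and length-ratio cutoffs, plus the HQ-candidate criteria) can be expressed as a small helper. The function name and argument names are illustrative, not taken from the original pipeline:

```python
def classify_nonreference(identity, length_ratio, homolog_identity=0.0,
                          expressed=False):
    # identity / length_ratio: best hit against the reference proteome.
    if identity >= 0.95 and 0.8 < length_ratio < 1.2:
        return "redundant"   # likely a near-identical copy of a reference protein
    if identity >= 0.95 and length_ratio <= 0.8:
        return "truncated"   # truncated version of a reference protein
    # HQ candidate: homology to other known plant proteins (>50% identity)
    # or evidence of being transcribed.
    if homolog_identity > 0.5 or expressed:
        return "hq_candidate"
    return "candidate"
```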
For instance, using the MTP approach, it is more challenging to detect nonreference genes in regions that are highly similar to reference regions annotated as noncoding, which leads to DN+|MTP\u2212 nonreference genes (supplementary material online). The analyses described above concentrated on the overall gene content of the compared PGs. Next, we examined discrepancies in gene presence\u2013absence within each ecotype by comparing the gene PAV matrices of the two PGs. In this analysis, we omitted genes that could not be matched across the two PGs, leaving 89 matched nonreference genes as well as 27,295 reference genes. Out of these, 23,106 (84.4%) were detected as core genes in both PGs. However, because these, by definition, display no variation and are present in all ecotypes, they are of lesser interest in the context of a PG analysis. We therefore proceeded only with the 4,278 pan-genes detected as noncore by either of the PG construction approaches. A total of 29,946 gene presence\u2013absence calls were performed across these pan-genes. Notably, we observed complete agreement in gene presence\u2013absence assignments between the two approaches in only 13% of these pan-genes. Moreover, 39% of the presence\u2013absence calls performed for these noncore genes were different in the two PGs. Most of these differences (88%) were classified as present in the MTP PG and absent in the DN PG, with various methodological causes identified (supplementary material online). Regardless of the construction approach applied, it is expected that the depth of the input sequencing data will affect the quality of the obtained PG. We assessed this effect by subsampling the raw reads of each accession to produce data sets with sequencing depths of 10\u00d7, 20\u00d7, 30\u00d7, and 50\u00d7 as well as an additional data set that consists of the full sequence data, with an average depth of 78\u00d7. Each data set was used as the input for the DN and MTP construction pipelines.
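The PAV-matrix comparison described earlier in this section reduces to a few lines of array code. A sketch, assuming the two matrices share row and column order for matched pan-genes and accessions:

```python
import numpy as np

def pav_agreement(pav_a, pav_b):
    # Compare two 0/1 PAV matrices (matched pan-genes x accessions):
    # drop pan-genes that are core in both PGs, then report (i) the
    # fraction of remaining pan-genes with identical calls in every
    # accession and (ii) the overall fraction of discordant calls.
    a, b = np.asarray(pav_a), np.asarray(pav_b)
    noncore = ~(a.all(axis=1) & b.all(axis=1))
    a, b = a[noncore], b[noncore]
    full_agreement = (a == b).all(axis=1).mean()
    discordant_calls = (a != b).mean()
    return full_agreement, discordant_calls
```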
Two additional PGs, termed \u201cHQ-assembly\u201d PGs, were constructed by applying the DN and MTP pipelines to a previously published data set of HQ, chromosome-level genome assemblies of the seven ecotypes. Both the DN and MTP pipelines begin the construction process by creating whole genome assemblies of all input accessions. As expected, assembly contiguity (contig N50) and completeness (assembly size and % complete BUSCOs) improved with sequencing depth (fig. 3B). This effect is also observed, although to a lesser degree, when comparing MTP results, with 81.9% and 91.7% core genes in the MTP 10\u00d7 and HQ-assembly PGs, respectively. The number of genes detected as present in each ecotype increased with the sequencing depth. We next focused on the effect of assembly quality on the ability to detect nonreference pan-genes. We also compared PGs annotated using different levels of annotation evidence, including standard and HQ evidence. In all cases, the same genome assemblies (based on 50\u00d7 sequencing data) and annotation procedures were used. Genome annotation is a crucial step in the process of PG construction because it facilitates the detection of novel genes and highly affects the composition of the constructed PG. However, automatic genome annotation is often a nontrivial task requiring various inputs and involving multiple computational steps. The quality of the result is determined by many factors, including the specific tools used and the parameters set for each of them. In the context of PG construction, automatic genome annotations usually comprise three procedures: (1) reference genes lift-over (projection), in which the annotation of the reference genome is mapped to other genomes from the same species; (2) ab initio gene prediction; and (3) evidence-based prediction using transcript and protein alignments. We examined the number of reference and nonreference genes detected in each PG. Although the number of reference genes was highly similar across data sets, the number of nonreference genes appeared to be strongly affected by the annotation evidence (supplementary material online).
PG construction is a multistep procedure involving the application of various computational tools, each with its specific parameters and thresholds which need to be set by the user. However, it is often difficult to determine the optimal values for these parameters or even to predict their effect on the constructed PG. We examined the effects of several such parameters which we expected to be particularly influential. First, we evaluated the effect of two parameters related to gene presence\u2013absence detection in the MTP and IA approaches, termed the depth threshold and the coverage threshold. We found that within the commonly used range of values, gene presence\u2013absence inference as well as the PG composition is not highly affected. Choosing parameter values outside of this range, however, could highly impact the inferences (supplementary material online). Another factor that may affect novel gene detection, as well as gene presence\u2013absence calling in the IA and MTP pipelines, is the alignment software used for short-read mapping. Read mapping is applied at two different steps: (1) in the IA procedure, reads of each accession are iteratively mapped to the reference genome to detect nonreference (unmapped) reads, which are subsequently assembled into nonreference contigs; and (2) in both IA and MTP, once a nonreference sequence is detected and annotated, reads from each accession are mapped to the PG to determine the presence and absence of pan-genes in the input accessions, thus constructing the final gene PAV matrix. We therefore constructed A. thaliana PGs from the same 50\u00d7 sequencing data using two commonly used alignment tools, BWA and Bowtie2, and compared the results. First, we compared PGs constructed using the DN, MTP, and IA approaches from 50\u00d7 short-read sequencing data and observed considerable differences in size, composition, and content (supplementary material online). We examined the effect of assembly quality on soybean (G. max) PGs by constructing additional PGs from sequencing data subsampled to 20\u00d7 depth as well as from previously published HQ genome assemblies. As expected, assemblies produced using 50\u00d7 sequencing depth were superior to 20\u00d7 assemblies in terms of contiguity and completeness. These assemblies are of lower quality compared with those obtained for A. thaliana, which can be explained by the higher complexity and repetitiveness of the soybean genome. Still, the high percentage of complete BUSCOs suggests that gene prediction may be successfully performed on assemblies produced from as low as 20\u00d7 sequencing data. For DN PGs, the total number of pan-genes increased as assembly quality improved, which can be explained by the large difference in assembly sizes between 20\u00d7, 50\u00d7, and HQ-assembly data sets. In addition, the percentage of core genes was affected by assembly quality. In contrast, and in line with the results observed for A. thaliana, the MTP PGs were highly similar across different assembly qualities. To further assess the generality of our results across additional plant PGs, we conducted a meta-analysis of previously published data sets. To this end, we performed a literature search and obtained a total of 15 plant PG data sets for which a gene PAV matrix was available (supplementary material online). The number of analyzed accessions varied considerably across data sets, from the smallest (Brassica napus) to 586 (tomato). Notably, the DN approach was never chosen for PGs including more than 70 accessions, whereas the MTP/IA approach was chosen for PGs of varying size. This tendency of choosing different approaches for constructing PGs with respect to the number of sequenced accessions reflects the technical and computational challenges of assembling and annotating hundreds of plant genomes, as required by the DN approach. We compared the PG compositions across data sets by computing two metrics for each PAV matrix: (1) the percentage of core pan-genes, defined as those present in at least 95% of the accessions, and (2) overall occupancy, defined as the total percentage of gene presence across all pan-genes and accessions. Although the percentage of core pan-genes is a common and biologically meaningful metric for assessing PG composition, it requires the arbitrary choice of a presence cutoff (in this case 95%), whereas the occupancy measure is a more global metric. Nevertheless, the two metrics show high linear correlation (R2 = 0.88, supplementary fig. S4). Across data sets, MTP/IA PGs displayed higher percentages of core pan-genes than DN PGs (P = 0.002; fig. 4B), as well as higher overall occupancy. We note that the association between the number of accessions and the two PG composition metrics was very low. Together, the findings of the meta-analysis suggest that the higher numbers of nonreference genes detected by the DN approach is a general phenomenon, not restricted to the PGs of A. thaliana and soybean examined above. It should be noted, however, that some of the observed differences may derive from biological phenomena related to the choice of the accessions, life history traits, and genomic features of the various species, such as recent polyploidy events. This study also introduces measures for characterizing PG composition, as well as methods for direct comparison of the gene content of two PGs. Such measures can be applied in future studies for the evaluation of PG quality, as they provide means for examining the two main benefits of a PG analysis: detection of nonreference genes and insight on intraspecific gene content diversity. Several past publications predicted that various methodological factors may affect the results of PG construction procedures. Our comparisons of the most common PG construction approaches revealed considerable differences in the resulting PGs. We found that the sets of nonreference genes show very partial overlaps between PGs constructed from the same data but using different procedures, with the great majority of genes detected by one of the approaches only.
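The two composition metrics used in the meta-analysis, percentage of core pan-genes and overall occupancy, can be computed directly from a PAV matrix (a sketch; the function name is illustrative):

```python
import numpy as np

def pg_composition(pav, core_cutoff=0.95):
    # pav: 0/1 matrix of pan-genes x accessions.
    pav = np.asarray(pav, dtype=float)
    presence = pav.mean(axis=1)                     # per-pan-gene occupancy
    pct_core = 100.0 * np.mean(presence >= core_cutoff)
    occupancy = 100.0 * pav.mean()                  # global presence fraction
    return pct_core, occupancy
```

Note that `pct_core` depends on the arbitrary `core_cutoff` (95% in the text), whereas occupancy does not, which is the trade-off the text describes.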
Moreover, even for pan-genes included in both PGs, numerous discrepancies exist regarding presence\u2013absence inferences in specific accessions. We generally observed that the DN approach leads to larger pools of nonreference genes, whereas the MTP and IA approaches more readily determine genes to be present in specific accessions. This leads to marked differences in the reported PG composition. For example, PGs constructed using the DN approach typically display lower ratios of core genes compared with MTP PGs. Gene-based PGs also proved relatively robust to sequencing depth. This could be explained by the relative ease by which coding regions can be assembled compared with repetitive and otherwise low-complexity intergenic regions. Moreover, in case the MTP construction approach is used, it is sufficient that a genomic region containing a gene is adequately covered to call that gene present, without the need that the relevant genomic region is assembled into contigs. Thus, although the exact sequencing depth required for PG construction may be species specific and depend on multiple genomic characteristics, our observations suggest that data sets with low sequencing depth produce gene-based PGs with very similar compositions compared with those with higher depths, at least if only short-read data are used. This observation is especially important due to the abundance of sequencing data from past genome resequencing projects which might be \u201crecycled\u201d for the purpose of PG construction. Another methodological factor that was shown to impact PG construction is the quality of sequences used as annotation evidence. Our results indicate that obtaining a HQ set of transcript and protein evidence is highly beneficial, especially for reducing the amount of false gene detection. More generally, we find that the annotation step presents one of the main challenges in PG construction procedures, regardless of the applied approach.
One possible way to overcome this challenge is to obtain several HQ annotations based on chromosome-level genome assemblies and then project and consolidate gene models onto the PG assemblies. Each construction procedure also depends on specific software tools and specific parameter values, which may introduce discrepancies. Rigorous research of these factors may not always be feasible. One example is the choice of the genome assembler, which is known to have a major effect on the completeness, contiguity, and accuracy of assemblies. A major challenge encountered when evaluating the performance of different PG construction procedures is the lack of \u201cground-truth\u201d data to which results can be compared. This limits our ability to assess performance using measures such as specificity, sensitivity, and accuracy. Various indirect measures based on comparisons to known sequences in public databases could provide general performance estimates. However, a more meticulous analysis will require the development of benchmarking data sets of carefully curated PGs or devising dedicated PG simulation software tools, which could reliably emulate the underlying processes that generate intraspecific diversity. Despite being unable to directly assess the quality of the results obtained using different PG construction procedures, the observations of this study allowed us to formulate several general guidelines that could aid future researchers in making methodological decisions. First, with regard to the choice of the construction approach, the DN approach should be favored in case the main goal of the study is to obtain a large pool of candidate nonreference genes. This could be useful when phenotypes of interest are known to occur in the population of a species but not in the reference accession. The DN approach is also recommended when several HQ genome assemblies and annotations are available or when the study also includes analyses based on a PG graph.
Unfortunately, due to the high sequencing and computational demands imposed by the DN approach, it might not always be feasible. In such cases, or in studies where the main focus is on the variation of gene presence\u2013absence within large populations , the MTP and IA approaches may be advised. We suggest that the MTP approach should be favored over the IA approach because it allows the analysis of nonreference sequences in their genomic context. However, this choice would depend on the genome size of the studied organism and the number of sequenced accessions. When the computational resources available for the study are limited, the IA approach could prove to be a valuable alternative.A common decision that researchers have to make when planning a PG project is whether to generate deep sequencing data for a small number of accessions or to sequence many accessions to lower depths, thereby balancing between the quality of specific genomes and the degree of genetic diversity covered in the data set. Based on our results, it appears that when constructing a gene-based PG, especially with the MTP and IA approaches, sequencing depth may be kept to the minimum required for successfully assembling the gene space. A simple rule of thumb may be that one should use the sequencing depth required to achieve an assembly with at least 95% complete BUSCOs. Even when genome assemblies of satisfying quality are available, the process of gene annotation still poses a considerable challenge. Our results emphasize the need for obtaining HQ annotation evidence, and thus, we recommend that substantial resources are allocated to producing relevant data sets, preferably based on transcriptomic sequences obtained from the same accessions included in the PG.Finally, a general recommendation for researchers constructing gene-based PG is to test the robustness of their results to methodological factors, for example, by applying multiple construction approaches. 
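The rule of thumb above, use the smallest sequencing depth whose assembly reaches at least 95% complete BUSCOs, can be encoded trivially (an illustrative helper, not part of any published pipeline):

```python
def minimal_depth(busco_by_depth, target=95.0):
    # busco_by_depth: {sequencing depth: % complete BUSCOs of the assembly}.
    # Return the smallest depth meeting the target, or None if none does.
    for depth in sorted(busco_by_depth):
        if busco_by_depth[depth] >= target:
            return depth
    return None
```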
In addition, the development of hybrid approaches, which could incorporate advantages from the DN, MTP, and IA procedures, may be highly beneficial, although it requires further research. Pan-genomics is gradually becoming a standard approach for studying genomic diversity within species. For plant genomes, such analyses are still in their infancy and new methods are actively being developed and applied. In general, we expect that the increasing availability of sequencing data, especially long reads, will allow researchers to abandon older techniques in favor of those that depend on multiple HQ genome assemblies. One such approach is based on the concept of PG graphs. Regardless of the specific methodology, obtaining a solid understanding of the various factors affecting the constructed PGs is of high importance. The consistency of our observations across data sets, species, and studies indicates that methodological effects are not limited to specific input data or implementations of the construction procedures. Rather, they represent a general phenomenon that needs to be addressed, and researchers drawing biological conclusions from PG studies should be made aware of these effects. Short Illumina and long PacBio reads in FastQ format were downloaded from the European Nucleotide Archive (ENA) for A. thaliana and from the Genome Sequence Archive (GSA) for soybean, based on accession numbers as detailed in the supplementary material. The genome assembly and annotation of the A. thaliana Col-0 ecotype, version TAIR10.45, were downloaded from Ensembl Plants. Additional HQ genome assemblies and transcriptomes for seven A. thaliana ecotypes were obtained from the \u201c1001 genomes\u201d website and from the MPIPZ project. The soybean reference genome, as well as data for two Glycine soja accessions (W05 and PI483463), were downloaded from SoyBase. Genome assemblies for an additional seven G.
max accessions were downloaded from GSA , as well as transcriptomes of two GSA see online.PGs were constructed using three computational pipelines implemented in the software package Panoramic v1.2.1 . These pReads preprocessing : short sequencing reads were filtered and trimmed using Trimmomatic v0.39 (ic v0.39 with paric v0.39 with parRead mapping (MTP and IA): in the IA approach, short reads were mapped to the reference genome and to the PG using Bowtie2 v2.5.1 : using samtools v1.15.1 : preprocessed whole genome data (DN and MTP) or unmapped reads (IA) were assembled into contigs using SPAdes v3.13.2 or Minia : The procedure begins with the PG equivalent to the reference sequence. At each step, the genome assembly from a single accession is mapped to the PG, using Minimap2 v2.17 : whole genome assemblies (DN) or the nonreference section of the PG (MTP and IA) were annotated by integrating multiple steps to produce candidate gene models. First, repetitive sequences were masked using EDTA . For A. thaliana, the relevant pretrained models provided with the software tools were used, whereas for soybean, Augustus and SNAP were first trained based on single-copy BUSCOs detected in the reference assembly. We used PASA v2.4.1 or the GSA set (soybean), whereas for \u201chigh-quality evidence\u201d PGs, we used transcripts from the MPIPZ set . To reduce evidence redundancy and run times, transcripts were first clustered using MMseq2 v13.45111 : protein products derived from candidate gene models were clustered into orthology groups using OrthoFinder2 v2.5.1 (2 v2.5.1 . 
The res2 v2.5.1 to ensurGene presence\u2013absence detection based on reads coverage (MTP and IA): the coverage of exons for each gene model was computed based on read mapping, using the command \u201cbedtools coverage -hist -sorted\u201d from bedtools v2.30.0 divided by the query length.Transcript sequences were mapped to genomic sequences using Minimap2 with parTo profile the expression of nonreference genes, RNA-seq reads derived from multiple studies online wGene PAV data for each study were downloaded or obtained through personal communication, as detailed in evad121_Supplementary_DataClick here for additional data file."}
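The coverage-based presence\u2013absence call described above can be sketched as follows. This is a minimal illustration: the per-exon coverage tuples and the 0.5 presence cutoff are assumptions for demonstration (the study derives exon coverage from bedtools output, and its actual cutoff is not restated here).

```python
def gene_present(exon_cov, min_covered_frac=0.5):
    """Call a gene 'present' in an accession when the fraction of its exonic bases
    covered by that accession's reads reaches min_covered_frac (assumed cutoff).
    exon_cov: list of (covered_bases, exon_length) tuples, one per exon."""
    covered = sum(c for c, _ in exon_cov)
    total = sum(length for _, length in exon_cov)
    return covered / total >= min_covered_frac

# Per-exon coverage for one gene model in two accessions (illustrative numbers,
# of the kind summarized from 'bedtools coverage -hist' output)
pav_row = {
    "acc1": gene_present([(180, 200), (90, 100)]),  # 270/300 exonic bases covered
    "acc2": gene_present([(20, 200), (10, 100)]),   # 30/300 exonic bases covered
}
```

Repeating this per gene and per accession yields the PAV matrix analyzed in the study.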
+{"text": "In a first study, this paper argues and demonstrates that spiking neural networks (SNN) can be successfully used for predictive and explainable modelling of multimodal streaming data. The paper proposes a new method, where both time series and on-line news are integrated as numerical streaming data in the same time domain and then used to train incrementally a SNN model. The connectivity and the spiking activity of the SNN are then analyzed through clustering and dynamic graph extraction to reveal on-line interaction between all input variables in regard to the predicted one. The paper answers the main research question of how to understand the dynamic interaction of time series and on-line news through their integrative modelling. It offers a new method to evaluate the efficiency of using on-line news on the predictive modelling of time series. Results on financial stock time series and online news are presented. In contrast to traditional machine learning techniques, the method reveals the dynamic interaction between stock variables and news and their dynamic impact on model accuracy when compared to models that do not use news information. Beyond the financial data used here, the method is applicable to a wide range of other multimodal time series and news data, such as economic, medical, environmental and social. The proposed method, being based on SNN, promotes the use of massively parallel and low-energy neuromorphic hardware for multivariate on-line data modelling. Methods for integrating multimodal data into one machine learning model have been developed across domain areas, such as fault detection and diagnostics in the context of sparse multimodal data and expert knowledge assistance8. In9, the application of an integrated approach is demonstrated for hydro generators. 
The published models mainly deal with vector-based data rather than time series and usually incorporate static rule-based knowledge into a computational model, such as a neural network, before the model is trained on data, rather than integrating dynamically both time series and on-line news/text information6. Integrated modelling of multimodal streaming data is an open problem10, with only a few studies suggesting effective methods. For example, a deep learning approach used text data to reduce errors in the prediction of taxi demand11. In12, text is used for financial data analysis using classical statistical machine learning. In13, Hong Kong market data was used to find a correlation between stock values and financial news. In14, a combination of text information with a statistical feature in Financial LDA is proposed. In15, stock market data are associated with financial data for classification using time series methods like DTW and SAX. In16, 5-year stock and news data were used to find the association between them with statistical analysis. In17, it was demonstrated that sentimental polarities matter in stock market forecasting. In18, news polarity and stocks were interlinked, and sentiment analysis helped to find some associations between stock data and financial news. In19, it is demonstrated that volatility movements are more predictable than asset price movements when using financial news. In20, it was demonstrated that there is improvement in forecasting by acquiring the right contextual information for financial data. A strong correlation between stock data and acquired financial text was found in21. Prediction of stock market direction using financial news was studied in23. It was found that there is an impact of news data on financial data to predict if the market will rise or fall. Other machine learning methods, such as SVM and PSO, were also tried for sentiment analysis of financial data, showing improvement compared with the use of deep learning mechanisms. The problem of integrating time series with on-line news and text information has been acknowledged. The above studies confirmed that, on the one hand, the integration of time series and news information may improve the quality of time series prediction, while on the other hand they raise the issue of lacking efficient methods for the task, as none of the above methods can learn integrated time series data in an on-line, incremental and adaptive mode and, most importantly, extract meaningful dynamic patterns of interaction of the used variables for a better understanding of the modelled processes. News information can be obtained from various business websites that analyze financial data, such as stock markets, commodity prices, etc.25 A challenge is what multimodal streaming data and information to use and how to encode and combine all this information into one model for better online predictive and explainable learning. News information can include textual data reflecting industry performance, economic conditions, and political events. For example, news has a significant impact on stock market sentiment and prices27. 
Despite the work done, there are limitations in these methods and there are still open questions that are addressed in this paper: How to encode online news for a particular type of time series, and how to evaluate their impact? How to integrate time series and online news incrementally in a predictive model, and how to evaluate the impact of the news on the model outcome? How to reveal inherent and interpretable patterns of dynamic interaction between time series variables and online news, patterns which may change over time? Traditional machine learning techniques have already been used to extract and classify news from trusted sources30, and some SNN models allow for dynamic spatio-temporal patterns to be interpreted and explained33. To address the above research questions, we base our methods on the third generation of neural networks, spiking neural networks (SNN), because SNN have the ability to capture dynamic spatio-temporal patterns from multi-input streaming data. Learning in SNN is inspired by the human brain, as the brain is capable of on-line integration of different sources of temporal and textual information over time. Integrating information from different sources is crucial for better decision making and event prediction by humans, crucial for their survival. The challenge addressed in this paper is how to apply SNN to develop models that integrate multiple time series and other types of related online information, for better predictive modelling and for a better understanding of the dynamic relationship between the data and other relevant information sources. Several types of SNN have been developed34, etc., along with learning algorithms for unsupervised and supervised learning, one of them being the spike-timing-dependent plasticity (STDP) algorithm35. 
Various architectures of SNN have been proposed, including: deep SNN36; evolving SNN (eSNN)37; dynamic evolving SNN (deSNN)38; spike-pattern association neurons and networks (SPAN)39; and NeuCube32. Various spiking neuron models have been proposed in the literature, including the popular Leaky Integrate and Fire (LIF) neuron, the spike response neuron, and the probabilistic spiking neuron42. Different types of SNN offer different specific characteristics that make them more or less suitable for the task at hand. So which method to use? Our proposed model uses NeuCube as a suitable, brain-inspired SNN architecture, as explained in the paper. SNN have also been used as a computational paradigm for the development of neuromorphic hardware, massively parallel computational architectures that consume thousands of times less energy. The main contributions of the paper are: (1) a method for feature extraction and encoding of online news and integrating them with time series data; (2) a method, based on SNN, for an integrated predictive modelling of multimodal time series; (3) a method for the discovery of explainable dynamic patterns of interaction between multimodal time series variables; (4) predictive and explainable modelling of stock price time series, integrated with online news. This section presents results on integrated modelling of multiple time series and on-line news using the methodology described in the method section. While the proposed method can be applied to various time series data and related on-line news, here it is applied to stock indexes and on-line news. The results demonstrate the benefit of using news when predicting one of the time series. Here we show results on a selected stock market dataset that contains 7K+\u2009stock records for daily prices with multiple variables (https://www.kaggle.com/itoeiji/deep-reinforcement-learning-on-stock-data). The data records daily closing adjusted prices of eight stock indices. The observations are daily time series that span from January 3rd, 2017 to June 30th, 2017, with 129 observations total for each stock index. Weekends are removed, and missing values due to holidays are replaced with the prices immediately preceding them. The dataset comprises the following stock indices: Wipro Limited [WIT], Microsoft Corporation [MSFT], Lennar Corporation [LEN], International Business Machines [IBM], Adamis Pharmaceuticals Corporation [ADMP], Alphabet Inc. [GOOG], Reliance Industries Ltd. [RIL], and Tata Consultancy Services [TCS]. The raw data sample consists of ordered daily sequences of adjusted closing prices. Descriptive statistics for the input variables, along with their evolution in time, are shown in Supplementary Table . Using a sliding window approach, the original dataset is segmented into 70 window samples, each of size 60 with a sliding step of one, resulting in 4200 data points used to feed, learn, and test the model. The sliding window collects historical time series data in order to train the model and forecast the next day's closing price of the stock. When the actual output results are available, they can be incrementally added to the SNN model for further training. The news variable, which is encoded as a day-based value between 1 (strong positive news related to the output variable) and \u22121 (strong negative news), is associated with the stocks. The real input data is encoded from continuous values to discrete sequences of spikes that represent changes in the data over time. The Threshold-based Representation (TR) encoding method, with a spike threshold of e.g., 0.5, is used to discretize the continuous streams of data into spikes. The spike information representation simplifies the input data by focusing on the changes in the data over time rather than using their real values in the subsequent modelling procedure, which aims at learning dynamic patterns of these changes that relate to and trigger certain outcomes. 
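The windowing and TR encoding just described can be sketched as follows. The window size (60), step (1), and threshold (0.5) follow the text; the data values are illustrative. With 129 daily observations, a size-60 window sliding by one step yields exactly the 70 samples mentioned above.

```python
def sliding_windows(series, size=60, step=1):
    """Segment a series into overlapping windows (size 60, step 1, as in the text)."""
    return [series[i:i + size] for i in range(0, len(series) - size + 1, step)]

def tr_encode(window, threshold=0.5):
    """Threshold-based Representation: emit a +1 spike when the value rises by more
    than `threshold` between consecutive time points, -1 when it falls by more,
    and 0 (no spike) otherwise."""
    spikes = []
    for prev, cur in zip(window, window[1:]):
        diff = cur - prev
        spikes.append(1 if diff > threshold else (-1 if diff < -threshold else 0))
    return spikes

# 129 daily observations, as in the dataset, give 129 - 60 + 1 = 70 windows
windows = sliding_windows(list(range(129)))
```

Each encoded window then feeds the SNN as a positive/negative spike train per input variable.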
This is a principle adopted from how the brain processes continuous-value sensory information. All input variables, including the news variable, are mapped into the 3D SNN of a NeuCube model using a graph matching algorithm43. This algorithm allocates the spatial coordinates of the input neurons representing the input variables based on the temporal similarity in the training data, measured as time series similarity, so that similar input features are allocated to closer spiking neurons (input neurons) in the 3D space of the SNN, automatically generated at a scalable size, e.g., 1000 (10\u2009\u00d7\u200910\u2009\u00d7\u200910) neurons in a 3D reservoir. The connections between the spiking neurons in the SNN are initialized using the small-world connectivity rule, which creates probabilistically positive or negative connections of small values based on the distance between the neurons: the closer the neurons in the 3D space, the higher the probability of connection. After training, neurons in the SNNcube are clustered according to their strongest connection weights with an input variable (similar to a membership function in fuzzy clustering) Fig.\u00a0a. Neurons in the SNNcube are also clustered according to the maximum number of spikes received from an input neuron, which shows the dynamics of the interaction between the neurons Fig.\u00a0b. After a SNNcube model is trained, an output regression module is trained using the SNN regressor deSNN38, a computationally efficient model that prioritizes the first spike arriving at the output neurons. The supervised training of the output regressor is performed using predefined output values of the dependent variable under study that are associated with the training samples. Based on the rank-order (RO) learning rule, the deSNN Mod parameter is set to 0.8; this modulation factor is used to calculate the initial weight of every new connection formed from the SNNcube to an output neuron representing the desired output value. To reflect subsequent spikes on the same synapse, these connection weights are further adjusted using a drift parameter, set here to 0.005. After an integrated SNN model is trained, it is used to predict the outcome for any new input time series and on-line news samples. The regression plots are shown in Fig.\u00a0. Using the proposed SNN-based method results in a lower predictive error when compared with evolving vector-based regression techniques, such as EFuNN or DENFIS50, as shown in a comparative Supplementary Table . This section presents positive results of integrated modelling of several temporal variables extracted from a single time series and on-line news on the predictive accuracy of one of them, along with both numerical and visual analysis of the effect of news on the output variable. The dataset corresponds to the Wipro Limited [WIT] daily open-, high-, low-, close-, and adjusted closing prices. The observations are daily time series from January 3rd, 2017 to June 30th, 2017, with a total of 129 observations for each feature. Weekends are removed, and missing values due to holidays are replaced with the prices that came before them. The Text News Indicator [TNI], as previously described, is used in this section to assess its impact on predicted prices and the model's usefulness. TNI is measured in the range [\u22121, 1], with the sign indicating whether the trend is upward or downward, reflecting market sentiment. Descriptive statistics and the evolution of WIT OHLC prices are included in Suppl. Table . The original dataset of the Indian Wipro Limited multinational corporation [WIT] stock is used here as single-stock data. 
The input variables are the temporal window sequences of five WIT stock prices: \u2018open\u2019, \u2018high\u2019, \u2018low\u2019, \u2018close\u2019, and \u2018adjusted close\u2019, plus the news indicator; the output variable is the adjusted closing price for the following day. For comparability, this method employs the same sliding window approach as in the previous section. The original dataset is divided into 70 window samples with 5 features (no news used) and 70 samples with six features that include the news. Each sample has 60 time units (days) and one output value. The same SNN model configuration and settings are used here as in the previous section. Modelling and analysis results with and without the news indicator are shown in Fig.\u00a0. This section presents results on integrated modelling of several temporal variables extracted from a single time series, across several individual time series, and on-line news, along with both numerical and visual analysis of the effect of news on the output variable, being positive (decreased prediction error) or negative (increased error). The section answers the question: Will predictive modelling of every single price from OHLC prices of a single stock and across multiple stocks benefit the same way from the inclusion of news information? In order to answer this question, we have run 64 experimental models (4\u2009\u00d7\u20098\u2009\u00d7\u20092) on the predictive modelling of each of the 4 OHLC features of each of the 8 stock indexes, with and without using the same TNI news indicator. In order to detect the dynamic interactions between different features of a single-stock predictive model with and without the TNI news index, the trained models are analyzed (Fig.\u00a0). The numerical values of the impact on the RMSE error of the predictive SNN models on different features of individual stocks, with and without the TNI online news index, across several stocks are shown in supplementary Table . The results on multiple time series predictive modelling combined with on-line news show significantly improved predictive accuracy with the inclusion of on-line news versus no use of news. They also reveal how the time series variables interact with the on-line news variable. The overall analysis of the SNN model above (Fig.\u00a0b) indicates the dynamic interaction between the variables. The experimental results of single time series modelling using its 4 variables, with or without on-line news, are presented in Table . Through this analysis we can understand which of the time series variables, if used as predictive targets through the proposed methodology, will be positively affected by on-line news and which cannot benefit from the inclusion of news information, along with the explanation for that. This can be used for the creation of better predictive systems, such as trading systems, and for forecasting macro-economic trends, individual health states, or environmental events. The methodology proposed here answers the main research question, namely how to understand the dynamic interaction of time series and on-line news through their integrative modelling. 
It offers a new method to evaluate the efficiency of using on-line news in the predictive modelling of time series. The SNN approach proposed here for the integration of time series and on-line news in predictive models is very much inspired by the ability of the human brain to incorporate, in an incremental way, different temporal sources of information, encoding their changes into spike sequences and then incrementally learning patterns of interactions between these sources in order to better predict future events and to gain a better understanding of them. The SNN models capture the direction of influence between input features, so that causal associations can be identified. Along with a numerical estimation of the impact of news on the predictive accuracy of time series variables, the method offers an interpretation and explanation. Many questions still remain for further studies, such as: the choice of news sources; how their relevance to the predicted outputs changes over time; parameter optimization of the SNN models for better performance; and a better integration of human knowledge and machine intelligence. With the fast development of massively parallel and low-energy neuromorphic and quantum hardware platforms that are complementing and substituting the traditional von Neumann machines31, the challenge for machine intelligence is now to develop computational models that are suitable for this hardware and also efficient and explainable, such as the proposed SNN approach42. The proposed methodology, being a generic one, can be extended to specific methods for different domain areas48, for example: cognitive studies, where EEG and fMRI data, which are of different spatial and temporal resolutions, can be integrated on the same time scale for a SNN model to be trained; seismic studies for earthquake prediction, where seismic time series data can be integrated with GPS and satellite image time series data; personalized wearable devices for personalized health, where ECG data can be integrated with other streaming personal information, such as physical activity, nutrition, sleep, etc.; and global warming prediction on different time scales, where relevant multiple and multimodal streaming information is integrated, including temperature, air pollution, volcanic activities, solar eruptions, human population, and green and fossil energy. The team is working on the implementation of the proposed model on the latest neuromorphic platforms to make this method available across application areas. The methodology proposed here consists of three groups of methods, as illustrated in Fig.\u00a0: (1) feature extraction and encoding of all streaming data and information as uniformly integrated numerical multimodal time series on the same time scale; (2) predictive modelling of the integrated multimodal time series in a SNN; (3) discovery of explainable dynamic temporal patterns of association between multimodal time series variables, including news time series. The main idea here is that all streaming data and information is first uniformly represented as numerical time series on the same time scale, forming a set of multimodal and uniformly represented time series using the same time unit of measurement, e.g., seconds, days, years, etc. For financial time series and online news, the time unit would be a day. Here we propose how online news can be encoded into a numerical time series. A Turning News Index (TNI) is proposed in this paper to mitigate data-driven bias in news selection. The TNI index is linked in our case study to stock prices and is calculated using the \u2018Bag of Words\u2019 (BoW) method. A \u2018Bag of Words\u2019 is a traditional feature extraction approach in which each word is represented as a feature. BoW is a simple approach that measures the importance of words based on their frequency in the document while ignoring word order. As a result, news is classified as \u2018direct\u2019 or \u2018indirect\u2019 based on the occurrence of the word and the topic of the news. 
For instance, any news referring to a specific topic, such as Wipro stock news, will have a direct impact on the prediction and will thus be classified as \u2018direct\u2019. In the case of indirect news, its impact is on the overall trend of the stock market rather than on a specific stock, and it will be labelled as \u2018indirect\u2019. In this paper, the classified news is quantified using the Bag of Words (BoW) approach, and the n-gram model with TF-IDF is used to calculate the TNI (Eqs.\u00a01\u20134). Here, tf is the term frequency, which determines the significance of a term in a document; it computes the frequency of occurrence of a term 't' in one document 'd' with respect to the total number of terms in the same document 'd'. idf is the Inverse Document Frequency, which measures the rarity of a term: the idf of a term 't' is the log of the ratio of the total number of documents 'N' in the domain set to the number of documents 'D' in which that term appears. Thus, tfidf assigns a numerical weight to words based on how important a word is to a corpus, or collection of documents. The upward and downward trends are analyzed using numerical values from the input text rather than just sentiment values. The news can be quantified in the range [\u22121, 1], where the sign indicates whether the trend is upward or downward and the magnitude indicates the probability value of the news impact on the stock (Supplementary Table ). Given n continuous-value time series TS\u2009=\u2009{TS1, TS2, \u2026, TSn}, one of them, denoted here as TSo, is the target for predicting its next time value. For the case study, a TNI index series is also created as explained above. 
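The tf-idf weighting described above can be sketched as follows. This is a minimal illustration: the logarithm base, tokenization, and unigram (rather than n-gram) handling are assumptions, and the toy headlines are not the paper's data.

```python
import math
from collections import Counter

def tf(term, doc):
    """Term frequency: occurrences of `term` in `doc` over the doc's token count."""
    return Counter(doc)[term] / len(doc)

def idf(term, corpus):
    """Inverse document frequency: log of (total documents N / documents containing the term)."""
    df = sum(1 for d in corpus if term in d)
    return math.log(len(corpus) / df)  # assumes the term occurs in at least one document

def tfidf(term, doc, corpus):
    """Weight of a word in a document relative to the whole corpus."""
    return tf(term, doc) * idf(term, corpus)

# Toy corpus of tokenized headlines (illustrative, not the paper's news data)
corpus = [["wipro", "stock", "rises"], ["market", "falls"], ["wipro", "profit"]]
```

In the TNI, such weights for direct and indirect news terms are then combined into a signed daily value.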
At any time ti, an input vector V(ti) is created; the sequence of all vectors V(t1), \u2026, V(tl) over a moving window of time T forms a training multivariate temporal sample S(t1\u2212tl) (Eq.\u00a06). First, the real-value input data is transformed from continuous values to discrete sequences of spikes, each spike encoding a change in the data over consecutive time moments, as exemplified in Fig.\u00a032. This method is based on thresholding the difference between two consecutive values of the same input variable over time. As a result, it generates a positive spike that encodes an increased value at the next time point, and a negative spike that encodes a decreased value at the next time point. The spike information representation simplifies the input data by focusing on the changes in the data over time rather than using their real values in the subsequent modelling procedure, which aims at learning dynamic patterns of these changes that relate to and trigger certain outcomes. The SNNcube is trained incrementally in an unsupervised mode using the STDP rule on the samples S of the integrated time series and online news data. The STDP learning rule considers the time of spiking between two connected neurons, so that consecutive changes in data from one time to another across all input variables are learned. STDP is expressed in terms of an STDP learning window dW (Eq.\u00a0), where A+ and A\u2212 refer to the maximum fraction of synaptic adjustment for potentiation and depression, respectively. 
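The exponential STDP window commonly used with this rule can be sketched as follows. The A+/A- amplitudes and time constants here are illustrative assumptions, not the paper's values.

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Standard exponential STDP window: the weight change dW as a function of
    delta_t = t_post - t_pre. a_plus (A+) and a_minus (A-) bound the maximum
    fraction of synaptic adjustment; all constants are illustrative assumptions."""
    if delta_t > 0:      # presynaptic spike before postsynaptic: potentiation
        return a_plus * math.exp(-delta_t / tau_plus)
    if delta_t < 0:      # postsynaptic spike before presynaptic: depression
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0
```

The adjustment decays exponentially with the spike-time difference, so only near-coincident spikes change a connection appreciably.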
After training on input samples S(t1\u2212tl), an output deSNN regressor is trained on the same data to associate the spiking activity of the SNNcube for every input sample S(t1\u2212tl)\u2009=\u2009{V(t1), V(t2), \u2026, V(tl)} with its corresponding desired output value TSo(tl+1) (Eqs.\u00a08, 9). Here, mod is the modulation factor used to establish the initial value of the weight wj,i; order(aj) is the order of arrival of a first spike from neuron j to neuron i, among all inputs to neuron i; ej(t)\u2009=\u20091 if there is a consecutive spike at neuron j at time t during the presentation of the input sample S(t1\u2212tl); ej(t)\u2009=\u2009\u22121 otherwise, indicating that there is no spike at time t at neuron j to be transferred to neuron i; D is a drift parameter, which can be different for \u2018spike\u2019 or \u2018no spike\u2019 drifts49. In the experiments above, the following parameter values of the NeuCube model are used. A threshold-based encoding method with a spike threshold of 0.5 is adopted. The SNNcube unsupervised learning parameters leaking rate, STDP rate, firing threshold, training rounds, refractory time, and long-distance probability are set to 0.0002, 0.1, 0.5, 1, 6, and 3, respectively. The deSNN regressor\u2019s supervised learning parameters modulation factor (mod), drift adjustment (D), number of nearest neurons, and sigma are set to 0.5, 0.005, 3, and 1, respectively. To reveal the dynamic interaction of the integrated time series, neurons in the SNNcube are clustered based on their connections to input neurons. In addition, dynamic information exchange is captured in a dynamic variable interaction graph, as illustrated in Figs.\u00a0. More details of the SNN-based method proposed in this section for predictive modelling and understanding of the dynamic interaction between all input variables are explained and illustrated in the Supplementary material. Supplementary Information."}
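The rank-order initialization and drift adjustment of the deSNN readout can be sketched as follows. This is a simplified, illustrative implementation: binary spike trains are assumed, the mod and drift values follow the text, and a single drift magnitude is used for both the 'spike' and 'no spike' cases.

```python
def desnn_weights(spike_trains, mod=0.5, drift=0.005):
    """Simplified deSNN readout: rank-order initialization plus drift.
    spike_trains: dict neuron -> list of 0/1 spike indicators over one sample.
    The first spike from neuron j sets w_j = mod ** order(j), where order(j) is
    j's rank of first-spike arrival; afterwards w_j drifts up by `drift` at time
    steps with a spike and down otherwise."""
    first = {j: s.index(1) for j, s in spike_trains.items() if 1 in s}
    order = {j: rank for rank, j in enumerate(sorted(first, key=first.get))}
    weights = {}
    for j, s in spike_trains.items():
        if j not in order:            # neuron never spiked: no connection formed
            weights[j] = 0.0
            continue
        w = mod ** order[j]
        for t in range(first[j] + 1, len(s)):
            w += drift if s[t] == 1 else -drift
        weights[j] = w
    return weights

w = desnn_weights({"a": [1, 0, 1, 0], "b": [0, 1, 1, 1], "c": [0, 0, 0, 0]})
```

Earlier-spiking neurons thus receive exponentially larger initial weights, which is what makes the readout prioritize the first arriving spikes.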
+{"text": "In systems biology, the accurate reconstruction of Gene Regulatory Networks (GRNs) is crucial since these networks can facilitate the solving of complex biological problems. Amongst the plethora of methods available for GRN reconstruction, information theory and fuzzy concepts-based methods have abiding popularity. However, most of these methods are not only complex, incurring a high computational burden, but they may also produce a high number of false positives, leading to inaccurate inferred networks. In this paper, we propose a novel hybrid fuzzy GRN inference model called MICFuzzy which involves the aggregation of the effects of Maximal Information Coefficient (MIC). This model has an information theory-based pre-processing stage, the output of which is applied as an input to the novel fuzzy model. In this preprocessing stage, the MIC component filters relevant genes for each target gene to significantly reduce the computational burden of the fuzzy model when selecting the regulatory genes from these filtered gene lists. The novel fuzzy model uses the regulatory effect of the identified activator-repressor gene pairs to determine target gene expression levels. This approach facilitates accurate network inference by generating a high number of true regulatory interactions while significantly reducing false regulatory predictions. The performance of MICFuzzy was evaluated using DREAM3 and DREAM4 challenge data, and the SOS real gene expression dataset. MICFuzzy outperformed the other state-of-the-art methods in terms of F-score, Matthews Correlation Coefficient, Structural Accuracy, and SS_mean, and outperformed most of them in terms of efficiency. MICFuzzy also had improved efficiency compared with the classical fuzzy model since the design of MICFuzzy leads to a reduction in combinatorial computation. 
A major challenge in systems biology is the accurate inference of gene regulatory networks (GRNs), which are used for investigating a range of complex biological processes. The development of microarray and sequencing technologies has produced large amounts of gene expression data for the reconstruction and analysis of GRNs. During gene expression, target genes are controlled by their regulators, and these regulatory interactions constitute the GRN. Bayesian network modelling is more sophisticated than Boolean modelling and is used to implement high-performing reverse engineering approaches. Differential equations (DE) are a deterministic modelling approach. Given the pros and cons of the above-mentioned approaches, information theory-based methods have gained increasing attention in recent years [15, 16]. Some recently developed methods combine information-theoretic concepts with other techniques to eliminate model-specific issues. KFGRNI is such a method, using a Conditional Mutual Information (CMI) based approach to fine-tune a list of genes selected by an ensemble Kalman filter and regression approach. This method, however, inherits the known limitations of mutual information (MI). To overcome these limitations of MI, the Maximal Information Coefficient (MIC), which has been designated \u201ccorrelation for the 21st century\u201d [26], was introduced. Due to their ability to deal with uncertain and imprecise information, fuzzy concept-based models for GRN inference have enjoyed abiding popularity. Variations of classical fuzzy models, incorporating changes in gene expression levels, activator-repressor interactions, or pre-processing methods, have also been reported [36, 39]. In this research, we developed a novel hybrid fuzzy model, which combines a novel fuzzy model with the key feature of the information theory-based approach. This hybrid fuzzy model, called MICFuzzy, exploits the effectiveness of the MIC for pre-processing. The rest of the paper is organized as follows. Section 2 deals with the methodology and the steps used in building our proposed hybrid model. 
In Section 3, we discuss the results obtained from the experiment and the comparison of the performance of our model with other state-of-the-art methods. Finally, we conclude the paper in Section 4. The GRN inference task is decomposed into n sub-problems, where n is the number of genes in the network. The proposed model is applied to each sub-problem to identify the potential regulators for each target gene, using time series gene expression data. The MICFuzzy model has two main stages. In the pre-processing stage, an information theory-based approach identifies the list of the most promising regulators for each target gene from the high-dimensional gene space. Then, the novel fuzzy model selects the best regulator gene list for each target gene, reducing false predictions while accurately identifying activator and repressor regulatory genes. The pseudo-code for the MICFuzzy algorithm is given in the supplementary document, S1. This section presents a concise overview of our proposed model, MICFuzzy, a novel hybrid fuzzy model. Given a dataset D of (X, Y) data pairs, where D consists of pairs of values (x, y), x ∈ X, y ∈ Y, in a sample of size n, MIC can be defined as MIC(D) = max I*(D, x, y) / log2(min{x, y}), the maximum being taken over all x-by-y grids whose size satisfies xy < B(n). The function of sample size, B(n) = n^0.6, is the default setting to achieve high performance. The term I* is the maximum mutual information taken over an x-by-y grid, and log2(min{x, y}) normalizes this maximum mutual information.
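The MIC definition above can be exercised with a deliberately naive estimator: enumerate grids whose size stays within B(n) = n^0.6, bin rank-transformed data into roughly equal-frequency cells, and keep the best normalized mutual information. This sketch is only illustrative and its names are ours; the authors use the minepy library, which implements the far faster dynamic-programming MINE search and additionally optimizes the grid cell boundaries, which this sketch does not.

```python
import numpy as np

def naive_mic(x, y, alpha=0.6):
    """Crude MIC approximation: try every grid with nx * ny <= n**alpha,
    bin rank-transformed data into equal-width rank bins (roughly
    equal-frequency), and keep the largest MI / log2(min(nx, ny))."""
    n = len(x)
    budget = int(n ** alpha)           # B(n) = n^0.6 by default
    rx = np.argsort(np.argsort(x))     # rank transform of x
    ry = np.argsort(np.argsort(y))
    best = 0.0
    for nx in range(2, budget + 1):
        for ny in range(2, budget + 1):
            if nx * ny > budget:
                continue
            joint, _, _ = np.histogram2d(rx, ry, bins=[nx, ny])
            p = joint / n
            px, py = p.sum(axis=1), p.sum(axis=0)
            nz = p > 0
            mi = float((p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])).sum())
            best = max(best, mi / np.log2(min(nx, ny)))
    return best
```

On a monotone noise-free relationship this scores 1.0 (a 2x2 grid already captures it), while independent noise scores near 0, which is the equitability property the MIC definition is after.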
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. Please note that Supporting Information files do not need this step. 15 May 2023. We are grateful to the editor and the reviewers for their insightful comments and suggestions. We have incorporated all of the changes suggested by the editor and the reviewers, which have greatly improved the revised manuscript.--------------------------------------------------------------------------------------------------------------------------------------Editor's additional comments: Both the reviewers have raised substantial concerns that need to be addressed carefully. The authors must perform the comparative analysis of the developed approach with the existing methods thoroughly. Please mention the advantages and disadvantages of the proposed model with respect to existing models. The authors should also provide the source code with a step-by-step description for reproducibility of the proposed study. Response: Concurring with the editor, we have now revised the "Introduction" Section to include a detailed comparative analysis of the pros and cons of the state-of-the-art methods and our proposed method. The new content will also make it clear how the proposed method addresses key limitations of the earlier work (Pages 2 - 6). These are further demonstrated in the "Results and Discussion" Section based on the results obtained in our experiments (Pages 15 - 21). The editor has requested that the source code be included. However, instead of source code, we feel it will be more appropriate to include the pseudo-code, which has a language-independent generic form. The readers can easily code in any of their preferred programming languages using the pseudo-code and the step-by-step explanation. This pseudo-code and associated explanation are provided in the supplementary document, S1.
However, if source code is needed, it can also be easily made available. --------------------------------------------------------------------------------------------------------------------------------------- Reviewers' comments: Reviewer #1: The paper addresses the interesting topic of inferring gene regulatory network models from gene expression data. In particular, the authors present a method that combines the maximal information coefficient (MIC) to identify regulatory relations with a fuzzy model. I have some concerns about the implementation of this method, as well as its real contribution concerning existing methods. Here are some comments. Query 1: The paper lacks a description of how the proposed method was implemented. What programming language, libraries, etc., were used? More details about this are needed. Response to Query 1: Thank you for raising this important point. We have included the implementation details of the proposed method, the programming language (Python), and other associated libraries in the sub-section, "Experimental setup and parameter settings" under the "Results and Discussion" Section (Pages 11-12). This additional content is reproduced below for your easy reference. Further, the complete details of the steps to implement the proposed model are explained in the supplementary document, S1. "MICFuzzy, implemented in Python, uses the minepy Python library [50] to evaluate the MIC between each gene pair in the pre-processing stage. The minepy library has identical performance on biological datasets to its original Java implementation (mine.jar) [51] using the default settings published in [25]. We also applied the Scikit-fuzzy library [52] to build the fuzzy control system. MICFuzzy also utilised NumPy, SciPy, pandas, statistics, and Matplotlib scientific libraries.
The steps of the implementation of MICFuzzy and its libraries are explained in S1 Fig." Query 2: The estimation of mutual information or MIC is quite challenging when you don't have enough data. A major issue when computing information theory measures like mutual information, and therefore MIC, is the difficulty of estimating correct values for these measures when the number of time points is limited, a typical problem in gene regulatory network inference problems. What approach did the authors follow to implement MIC? How did they validate that their estimation method was reliable? A better description of this issue is needed. Response to Query 2: We agree with the reviewer's point. As the reviewer has pointed out, the low sample size of gene expression data is a typical problem for genetic network inference, and it invariably impacts various reconstruction methods including the information theory-based approaches. This is a well-known problem and is widely discussed as the "Curse of Dimensionality". However, since a low sample size invariably affects all known reconstruction methods, in our opinion it should not be held against the information theory-based approaches in particular. In fact, the effectiveness of information theory-based reverse engineering methods over other measures for GRN reconstruction has been clearly demonstrated, even with small sample sizes. For example, Meyer et al. [1] proposed a mutual information (MI) based approach, minet, and highlighted the suitability of MI due to its ability to deal with several thousands of variables with limited sample size. ARACNE, an information-theoretic approach, showed that applying MI ranking is more robust and outperformed other Bayesian and Relevance network approaches [2].
Another method, MIBNI [3], also uses MI to select the initial set of regulatory genes from a small sample size and outperformed several well-known state-of-the-art methods such as CLR [4] and REVEAL [5]. Recent studies, such as CMI2NI [6] and NSCGRN [7], using Conditional Mutual Information (CMI), similar to MI, for quantification of gene associations have demonstrated its effectiveness in GRN inference, in spite of small sample sizes. As an enhanced version of MI, apart from its above-mentioned merits, the Maximal Information Coefficient (MIC) also shows impressive performance as a pre-processing approach in GRN inference problems, especially by providing good noise resistance [9]. In the MICRAT model [10], MIC has been effectively used to identify the initial regulatory network structure. Here, we would like to emphasize that MI or MIC, as a pre-processing method, tends to identify a large number of true regulators as top-ranked genes. With additional fine-tuning, the approach can further reduce false positives [11]. Due to this reported success of MIC [10], we decided to implement the MIC method in its original form in our proposed work. We used the minepy Python library [12], based on the original MIC estimator technique [13], to evaluate the MIC between genes. We may add that this library has also been successfully used in a recently developed method, MICTools [14], which is a combination of the Total Information Coefficient (TIC) and MIC to assess the strengths of relationships among variables. The potential of MICTools to identify non-functional associations effectively with a relatively low number of samples has been demonstrated. We agree that the inclusion of key aspects of the above discussion will help to improve the readability of the manuscript, and accordingly, these have now been included in the manuscript. Query 3: The MIC threshold (delta) is computed as the average value. Did the authors try other values?
for example, the median, harmonic mean, etc.? Response to Query 3: Thank you for this question. As we know, the MIC threshold is a user-defined value, and researchers also have the choice of the median or harmonic mean in their experiments. In our work, after trialling different options, we observed the 'average value' of MIC to be the most suitable and applied it as our preferred option. Query 4: When considering the real application (SOS DNA repair network), it is unclear if the proposed approach is a significant contribution compared to the existing method MICRAT, which has the same preprocessing. Overall, I think more experiments between these two methods are needed to better identify the advantages of MICFuzzy over MICRAT (if they exist). Response to Query 4: In order to address this important suggestion from the reviewer, we have now included additional experimental results of our proposed model on the DREAM4 100-gene dataset. For comparison, we have also included results, reported in [10], of the MICRAT method for this same dataset. Comparison between MICRAT and MICFuzzy shows that MICFuzzy outperforms MICRAT in terms of average Structural Accuracy, and significant improvements can be seen in the average F-score and average Matthews correlation coefficient (MCC) over MICRAT. For real-world datasets, the authors of MICRAT [10] only considered the SOS DNA repair network. Therefore, we executed our method only on the SOS DNA repair network, which allowed us to perform a direct comparison with MICRAT.
Since the source code of the MICRAT approach is not publicly available, further comparisons between MICRAT and MICFuzzy using other publicly available real-world datasets were not possible. This additional analysis is now included in the revised manuscript (Pages 15-16) as follows. "We evaluated our model based on Average F-score, Average MCC, and Average Structural Accuracy for all five networks in the DREAM4 10-gene and 100-gene datasets, since MICRAT has used these measures for performance evaluation and thus direct comparison is possible. As shown in Figs 5 and 6, for Average Structural Accuracy, while MICFuzzy showed a slight improvement over MICRAT, it showed considerable improvement over NARROMI in 10-gene and 100-gene datasets. Our proposed model outperformed MICRAT and NARROMI substantially in both 10-gene and 100-gene datasets with respect to Average F-score and Average MCC because of its ability to identify a high number of true regulations and reduce the number of false predictions." ---------------------------------------------------------------------------------------------------------------------------------------Reviewer #2: In this study, the authors proposed an improved hybrid method named MICFuzzy which involves two stages. First, as a pre-processing step, in order to determine the similarity between the target genes and the others, an information theory-based method is provided which uses the maximal information coefficient (MIC) to compute their correlations. By applying this step, the candidate genes with high regulatory relationships are determined, which reduces the time complexity of considering all possible genes. In the next step, by applying a fuzzy model, the regulatory genes are nominated from the candidate genes for each target gene by inferring the activator-repressor gene pairs. I think the writing of the manuscript is suitable and the method has been evaluated carefully.
I have some suggestions which can improve the quality of the manuscript.Query 1: In the introduction, there are several valuable works which should be reviewed. For example:a. Turki T, Taguchi YH. SCGRNs: Novel supervised inference of single-cell gene regulatory networks of complex diseases. Computers in biology and medicine. 2020 Mar 1;118:103656.b. Zhang Y, Chang X, Liu X. Inference of gene regulatory networks using pseudo-time series data. Bioinformatics. 2021 Aug 25;37(16):2423-31.c. Pirgazi, J., Olyaee, M. H., & Khanteymoori, A. (2021). KFGRNI: A robust method to inference gene regulatory network from time-course gene data based on ensemble Kalman filter. Journal of Bioinformatics and Computational Biology, 19(02), 2150002.d. Pirgazi J, Khanteymoori AR. A robust gene regulatory network inference method base on Kalman filter and linear regression. PloS one. 2018 Jul 12;13(7):e0200094.e. Segura-Ortiz A, Garc\u00eda-Nieto J, Aldana-Montes JF, Navas-Delgado I. GENECI: A novel evolutionary machine learning consensus-based approach for the inference of gene regulatory networks. Computers in Biology and Medicine. 2023 Mar 1;155:106653.Response to Query 1: Thank you for this suggested list of valuable work. In the \u201cIntroduction\u201d Section, we have now reviewed the work reported in the above references, thereby further improving the quality of the overall manuscript (Pages 3 - 4). The additional review included in the \u201cIntroduction\u201d Section is reproduced here for easy reference (Pages 3 - 4),\u201cGene networks inference using projection and lagged regression (GNIPLR) [14] uses both projection and lagged regression strategies to accurately infer GRNs from time series and non-time series data. LassoCV+RidgeCV [12] is another regression-based model which uses an improved version of both regression methods, incorporating cross-validation to increase model stability and produce accurate results. 
Both GNIPLR and LassoCV+RidgeCV outperform other existing high-performing regression-based inference methods. However, these regression methods are limited to capturing linear dependencies." "KFGRNI is such a method, using a Conditional Mutual Information (CMI) based approach to fine-tune a list of genes selected by the ensemble Kalman filter and regression approach [23]. This method further improves model accuracy by removing false regulations. KFLR [24] uses MI and CMI in the preprocessing stage to eliminate noisy regulations, followed by a Kalman filter-based model averaging approach (a hybrid framework of Bayesian model averaging and linear regression methods) to infer possible regulators. However, both MI and CMI-based inferencing methods cannot discover important non-linear correlations such as sinusoids. MI is well suited for use on discrete or categorical data [18], but has known limitations when applied to continuous data, as is found in gene expression datasets." Query 2: The quality of the figures is not suitable and they should be improved. Response to Query 2: Thank you for pointing this out. All the figures have been recreated according to the guidelines. The .tif files were checked and generated using the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool as per the recommendation of PLOS ONE.
Response to Query 3: The pseudo-code and the step-by-step explanation of the pseudo-code are included in the supplementary document, S1. Query 4: Moreover, please discuss the time complexity. Response to Query 4: The time complexity of this method is discussed in detail in the revised manuscript and compared with other state-of-the-art methods as necessary under the sub-section "Impact of pre-processing on computational time" in the "Results and Discussion" Section as follows (Pages 17-19), "The main drawback of the classical model is that it requires considerable computational time to select relevant activator-repressor pairs from all possible gene pairs in the search space [31]. Following the classical model, several methods were developed [36] to reduce computational time by utilizing clustering methods to reduce the number of gene pairs. Most of these techniques have improved the performance of the classical model [31] by reducing the required computational time by up to 50%. In addition, fuzzy logic-based tools for GRN modelling, such as FCM, require the use of model learning algorithms such as evolutionary algorithms, which are highly computationally expensive [49]. The time complexity of CS-FCM is O(n^3·M·I) as reported in [34], where n is the number of nodes (genes) in the FCM, M is the number of iterations for data sequences, and I is the number of iterations required for CS-FCM to reach optimal results. Based on the design of LASSO-FCM and KFCSFCM, each method requires approximately the same computational time as CS-FCM, as demonstrated in experiments. Thus, the required time complexity of these considered FCM-based methods can be simplified as O(n^3·M·I). In [31], the time complexity of the classical model is given as O(n^3), which is less than O(n^3·M·I). Thus, the classical model is computationally faster than the above-considered FCM-based methods, LASSO-FCM, CF-FCM, and KFCSFCM.
The time complexities of the CMI2NI, NARROMI, and MICRAT methods are difficult to compare. However, CMI2NI and NARROMI are computationally intensive, as demonstrated in [53] and [21] respectively, and the time complexity of these methods can vary depending on the problem. In MICFuzzy, at the pre-processing stage, the information theory-based approach uses MIC, which has been demonstrated to be less computationally intensive [25]. Hence the time complexity of the information theory-based approach is O(n^2), which is the maximum required time for MIC with default settings [54]. The time complexity of the proposed fuzzy model is O(na·nr·n), where na is the number of activators selected and nr is the number of repressors selected. The upper bounds are na = n-1 and nr = n-2; therefore the worst-case time complexity of the proposed fuzzy model is O(n^3), and the overall time complexity of MICFuzzy is O(n^2 + n^3), i.e., O(n^3). In practice, na << n and nr << n, so the actual cost na·nr·n is far below n^3, which indicates that MICFuzzy is more efficient than the classical model. This is one of the contributions of our work: reducing na and nr, using a preprocessing technique, to improve the efficiency of GRN inference. In this section, we further demonstrate the above-mentioned efficiency improvements of MICFuzzy over the classical model with respect to computational time based on our experiments. Using the MIC-based information theory approach as a pre-processing method in MICFuzzy, the required combinatorial computation was significantly reduced compared with that for the classical model in all DREAM3 and DREAM4 inference problems. This reduction led to an improvement in the efficiency of the proposed model by reducing the required computational time. Based on the experiments, this improvement was more than 55% in all five networks in each sub-challenge, except for the DREAM3 10-gene Net4, where it was nearly 40%.
In 50-gene network inference, the reduction of combinatorial computation was more than 60% in all five networks; for Net2 it was 90.4%, while also outperforming other methods in accuracy in terms of SS_mean (Table 2) by obtaining a high number of true positives. In 100-gene networks, the improvement in efficiency was significant, exceeding 84% in all five networks while maintaining high model accuracy (Table 1). Based on these results, it is clear that MICFuzzy is applicable to large-scale network (> 100-gene) inference problems, which will be considered in our future work." Query 5: The comparing methods like MICRAT, NARROMI, and ... should be cited and explained in brief. Response to Query 5: We agree that the inclusion of these methods would improve readability. We have now included a brief description of each of these methods in the "Introduction" Section (Pages 2 - 5). Further, in the revised manuscript, a brief description of each of the Bayesian models used for the performance comparison in inferring the E. coli SOS DNA repair network is also included as follows (Page 3), "Bayesian network modelling is more sophisticated than Boolean modelling and is used to implement high-performing reverse engineering approaches. Perrin's method, based on dynamic Bayesian networks, is well-suited to the inference of gene regulatory interactions from gene expression data and their derivatives. However, this approach is limited to the inference of small-scale GRNs [4]. Bayesian Network inference with Java Objects (Banjo) is a software package that has been used for both Bayesian and Dynamic Bayesian network inference. Similar to Perrin's approach, Banjo requires a high computational time, since it uses heuristic search strategies in model learning [5]. Morshed et al.
[6] implemented a Bayesian network model to capture both instantaneous and time-delayed interactions that occur concurrently, but the evolutionary search employed requires high computational time [6]. Unlike other Bayesian models, GlobalMIT uses the mutual information test (MIT), an information-theoretic scoring metric, as a model learning technique rather than time-consuming local search strategies. However, this approach is poorly scalable, since the complex nature of Bayesian modelling makes it computationally intensive when inferring large-scale networks [7]." As suggested by the reviewer, brief descriptions of NARROMI, CMI2NI, and MICRAT are also included and cited in the "Introduction" Section in the revised manuscript as follows (Pages 4 - 5), "NARROMI [21] and CMI2NI [22] are other novel MI-based methods for GRN inference. NARROMI, a combination of the ordinary differential equation-based recursive optimization (RO) method and the information theory-based mutual information (MI) method, is an effective method which outperforms most existing methods in terms of accuracy and false positive rates. In this approach, the least relevant regulators for each target are first removed using MI by evaluating pairwise correlations. Then indirect regulators are identified using recursive optimization, which further improves the overall model accuracy [21]. CMI2NI uses a novel association measure, conditional mutual inclusive information (CMI2), which helps to identify direct regulations while eliminating indirect regulations. The main drawback of CMI2 is that its efficiency is negatively impacted by the use of a random method to identify conditional genes [22]." "Recently, MICRAT has been developed, using MIC to infer GRNs as an undirected graph that represents interactions between genes from time series gene expression data.
Then the direction of these interactions is determined using a combination of conditional relative average entropy and time course mutual information of pairs of genes [15]. However, these models produce a high number of false regulatory predictions while inferring a high number of true regulations, and have no ability to identify the activating or inhibiting effect of regulatory genes." References: 1. Meyer PE, Lafitte F, Bontempi G. minet: A R/Bioconductor Package for Inferring Large Transcriptional Networks Using Mutual Information. BMC Bioinformatics. 2008 October; 9(461). 2. Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, Califano A. ARACNE: An Algorithm for the Reconstruction of Gene Regulatory Networks in a Mammalian Cellular Context. BMC Bioinformatics. 2006 March; 7(S7). 3. Barman S, Kwon YK. A Novel Mutual Information-based Boolean Network Inference Method from Time-series Gene Expression Data. PLoS ONE. 2017 February; 12(2). 4. Faith JJ, Hayete B, Thaden JT, Mogno I, Wierzbowski J, Cottarel G, et al. Large-Scale Mapping and Validation of Escherichia coli Transcriptional Regulation from a Compendium of Expression Profiles. PLOS Biology. 2007; 5(1). 5. Liang S, Fuhrman S, Somogyi R. Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures. Pacific Symposium on Biocomputing. 1998: 18-29. 6. Zhang X, Zhao J, Hao JK, Zhao XM, Chen L. Conditional Mutual Inclusive Information Enables Accurate Quantification of Associations in Gene Regulatory Networks. Nucleic Acids Research. 2015 March; 43(5). 7. Liu W, Sun X, Yang L, Li K, Yang Y, Fu X. NSCGRN: A Network Structure Control Method for Gene Regulatory Network Inference. Briefings in Bioinformatics. 2022 September; 23(5): bbac156. 8. Yang D, Liu H. Maximal Information Coefficient Applied to Differentially Expressed Genes Identification: A Feasibility Study. Technology and Health Care. 2019; 27(1): 249-262. 9. Liu HM, Yang D, Liu ZF, Hu SZ, Yan SH, He XW.
Density Distribution of Gene Expression Profiles and Evaluation of Using Maximal Information Coefficient to Identify Differentially Expressed Genes. PLoS ONE. 2019; 14(7). 10. Yang B, Xu Y, Maxwell A, Koh W, Gong P, Zhang C. MICRAT: A Novel Algorithm for Inferring Gene Regulatory Networks Using Time Series Gene Expression Data. BMC Systems Biology. 2018; 12(115). 11. Akhand MAH, Nandi RN, Amran SM, Murase K. Gene Regulatory Network Inference Using Maximal Information Coefficient. International Journal of Bioscience, Biochemistry and Bioinformatics. 2015; 5(5): 296-310. 12. Albanese D. minepy - Maximal Information-based Nonparametric Exploration. https://minepy.readthedocs.io/en/latest/; 2013. 13. Albanese D, Filosi M, Visintainer R, Riccadonna S, Jurman G, Furlanello C. minerva and minepy: A C Engine for the MINE Suite and Its R, Python and MATLAB Wrappers. Bioinformatics. 2013; 29(3): 407-408. 14. Albanese D, Riccadonna S, Donati C, Franceschi P. A Practical Tool for Maximal Information Coefficient Analysis. Gigascience. 2018 April; 7(4): 1-8. Attachment. Submitted filename: Response to Reviewers.docx. 22 Jun 2023. MICFuzzy: A maximal information content based fuzzy approach for reconstructing genetic networks. PONE-D-22-31082R1. Dear Dr. Gamage, We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. Please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date.
An invoice for payment will follow shortly after the formal acceptance. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Prabina Kumar Meher, Ph.D., Academic Editor, PLOS ONE. Reviewers' comments: Reviewer's Responses to Questions. Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed. Reviewer #2: All comments have been addressed. ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: (No Response). Reviewer #2: Yes. ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: (No Response). Reviewer #2: Yes. ********** 4.
Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data -- e.g. participant privacy or use of data from a third party -- those must be specified. Reviewer #1: (No Response). Reviewer #2: Yes. ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: (No Response). Reviewer #2: Yes. ********** 6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: (No Response). Reviewer #2: The manuscript addresses all of the concerns and suggestions raised during the review process. The authors have made appropriate revisions, and the manuscript is now in an acceptable and publishable state. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Gonzalo A. Ruz. Reviewer #2: No. ********** 26 Jun 2023. PONE-D-22-31082R1. MICFuzzy: A maximal information content based fuzzy approach for reconstructing genetic networks. Dear Dr. Nakulugamuwa Gamage: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team at onepress@plos.org within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, or if we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff, on behalf of Dr. Prabina Kumar Meher, Academic Editor, PLOS ONE"}
+{"text": "From a systematic perspective, it is crucial to infer and analyze gene regulatory networks (GRNs) from high-throughput single-cell RNA sequencing data. However, most existing GRN inference methods mainly focus on the network topology; only a few of them consider how to explicitly describe the logical update rules of regulation in GRNs to obtain their dynamics. Moreover, some inference methods also fail to deal with the over-fitting problem caused by the noise in time series data. In this article, we propose a novel embedded Boolean threshold network method called LogBTF, which effectively infers GRNs by integrating regularized logistic regression and a Boolean threshold function. First, the continuous gene expression values are converted into Boolean values and the elastic net regression model is adopted to fit the binarized time series data. Then, the estimated regression coefficients are applied to represent the unknown Boolean threshold function of the candidate Boolean threshold network as the dynamical equations. To overcome the multi-collinearity and over-fitting problems, a new and effective approach is designed to optimize the network topology by adding a perturbation design matrix to the input data and thereafter setting sufficiently small elements of the output coefficient vector to zero. In addition, a cross-validation procedure is incorporated into the Boolean threshold network model framework to strengthen the inference capability. 
Finally, extensive experiments on one simulated Boolean value dataset, dozens of simulation datasets, and three real single-cell RNA sequencing datasets demonstrate that the LogBTF method can infer GRNs from time series data more accurately than some other alternative methods for GRN inference.https://github.com/zpliulab/LogBTF.The source data and code are available at With the tremendous progress of advanced technology and the improvement of sensitivity of cell analysis, single-cell RNA sequencing (scRNA-seq) data have brought unprecedented challenges and opportunities for deciphering the regulatory relationship among genes . One chaFrom the systems biology point of view, inferring GRN plays an extremely crucial role in revealing underlying regulatory mechanisms to uncover potential genes . A vast Generally speaking, there are four levels of inferring GRNs . (i) Is Several data-driven methods have been applied to infer GRNs . For exaThe BN was first introduced by Logistic regression estimation-based Boolean Threshold Function to infer GRN by synchronous evolution. First, LogBTF embeds the coefficients estimated by regularized logistic regression into the Boolean threshold network model, which perfectly controls the in-degree of nodes and addresses the problem of over-fitting. Second, LogBTF employs the knowledge of the perturbation design to optimize the inferred network topology, which successfully handles the multi-collinearity problem caused by binarized gene expression data. Third, LogBTF is a kind of interpretable network inference method that could comprehensively output more detailed information about regulatory relationships, such as regulator or target, activation or inhibition, and relative regulatory strength. Moreover, numerous experiments conducted on artificial Boolean datasets, simulated single-cell datasets, and real scRNA-seq datasets demonstrate the effectiveness and efficiency of the LogBTF method. 
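The synchronous Boolean threshold update at the heart of this model can be sketched in a few lines. The toy weights, zero thresholds, and strict-inequality switching rule below are illustrative assumptions for exposition, not LogBTF's exact formulation:

```python
import numpy as np

def btn_step(x, W, theta):
    """One synchronous update of a Boolean threshold network.

    x     : current 0/1 state vector of the N nodes
    W     : N x N weight matrix; W[i, j] > 0 means node j activates
            node i, W[i, j] < 0 means node j inhibits node i
    theta : length-N threshold vector
    Node i switches on when its weighted input exceeds theta[i].
    """
    return (W @ x > theta).astype(int)

# Toy 3-gene network: gene 2 activates gene 0, gene 0 activates gene 1,
# and gene 1 inhibits gene 2 (weights and thresholds are illustrative).
W = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, -1, 0]])
theta = np.zeros(3)
x = np.array([1, 0, 1])
print(btn_step(x, W, theta))  # next state under synchronous update
```

Iterating `btn_step` from an initial state traces out the network dynamics used to generate and fit the time series.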
Lastly, the comparison study with eight existing GRN inference methods shows that the LogBTF method unearths potential regulatory relationships and obtains better inference performances simultaneously.In this article, we demonstrate an application of the Boolean threshold network model conceived to integrate time series single-cell data into a N is the total number of genes, the updating scheme of if is a Boolean function, ix at the time point t\u2009+\u20091, and t. Here, ik is the in-degree of the node ix, denoting the number of regulatory nodes of the target node ix.A BN model works on a directed graph network. It consists of a set of nodes representing the elements of a system, and the state of each node is quantified as 0 or 1 (true/expressed). At each discrete time, the state of each node is updated by the state of its neighbor (the node pointing to it) at the previous moment through a rule called Boolean function. Therefore, the edges in the BN represent the regulatory relationships between elements. Generally speaking, the Boolean function is expressed as a statement that acts on the inputs through a logical function using logical operators NOT, AND, OR, etc.; clearly, this statement also returns a False/True state. Hence, suppose Definition 2.1.Let N Boolean inputs be\u2002. A threshold function f on\u2002has the form:i is either xi orwhere l\u2002for\u2002is the weight-vector, and\u2002is the threshold.BNs with threshold functions are called Boolean threshold networks, which is a special subset of BNs . Here thCompared with BNs, Boolean threshold networks can easily be implemented and are very suitable for representing regulatory networks . Here, Bjl = jx, the regulatory node jx promotes the target node ix; if jx inhibits the target node ix.For each T be the total number of time points. For each node T\u22121 observations j-th observation (or at the time point j), ix at the time point j\u2009+\u20091, and the value is either 0 or 1, i.e. 
The logistic regression model is applied to estimate parameters of the above Boolean threshold network model using time series gene expression data. Let y for any input of class labels (0 or 1) (The logistic regression is considered as follows:where (0 or 1) . By applIt can be seen that the logistic regression model N variables or 1 (true/expressed). For the simulated or real single-cell expression data, no imputation is required, all dropouts are set to 0 and all non-zero counts are set to 1 regardless of the expression level . Moreove\u03bc\u2009=\u20090 and variance X. Then, a new perturbation input matrix i-th target gene as follows:To overcome the multi-collinearity problem and detect reliable regulatory relationships, we propose an optimization strategy based on knowledge of the perturbation design matrix as follo\u03c3 value to 0. Then the remaining no-zero elements in the coefficient Using the above strategy, the multi-collinearity problem is solved but the estimated value F-measure, and the area under the ROC curve (AUC) to evaluve (AUC) and the For ease of exposition, let i\u03b8 in kw \u2009=\u2009 0 if According to Theorem 2.1, we find that First, we generated a set of synthetic expression data. Here, we first construct a BN with nine nodes as follows:9 initial states. For each initial state, according to the Boolean threshold function in ix with Clearly, there are 2t, and set this matrix as the input data X. The state of the node at time t\u2009+\u20091 is set as output y. In this way, the input state matrix X is a full-rank matrix, which avoids the multi-collinearity problem between variables for coefficient estimation. Considering that the node size is only nine, it does not need to penalize the coefficients, so we employ the generalized linear regression model by setting \u03bb\u2009=\u20090 in In the following, we use all possible initial states to form a state matrix at time \u03b4 for F-measure\u2009=\u20090.864. 
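The per-gene fitting step described above can be illustrated on synthetic data. The planted regulatory rule, the elastic-net hyper-parameters, and the 0.1 pruning cutoff are all illustrative assumptions, with scikit-learn's elastic-net logistic regression standing in for the paper's regularized fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T, N = 60, 5
# Binarized time series: rows are time points, columns are genes.
states = rng.integers(0, 2, size=(T, N))
states[1:, 0] = states[:-1, 1]   # planted rule: gene 1 switches on gene 0

X = states[:-1]                  # all gene states at time t
y = states[1:, 0]                # target gene 0 at time t + 1

# One elastic-net-regularized logistic regression per target gene,
# mirroring the fitting idea; hyper-parameters are illustrative.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X, y)

# Prune near-zero coefficients, as in the sparsification step.
coef = model.coef_.ravel()
coef[np.abs(coef) < 0.1] = 0.0
print(coef)  # surviving entries mark candidate regulators of gene 0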
In conclusion, it is robust and stable for LogBTF to adopt the Boolean threshold network model to improve the inference performance. Especially, we also explore the inference accuracy of the LogBTF method by randomly sampling a certain percentage of data from 512 observations, and the related results with discussions are given in In order to test the robustness of the LogBTF method, we investigate the tolerance of the model to data changes. Namely, we introduce noise into the simulated time series data by randomly flipping the state of each node with the probability of In this experiment, we further simulated dropout datasets to investigate the GRN inference applicability of LogBTF to zero-inflated single-cell gene expression data . Namely,y of 50% . The expConsidering that the SINCERITIES method is also functional in evaluating the inferred network from the aspect of activating and inhibitive regulations between regulator and target genes, we compare the AUROC and AUPR indexes of LogBTF with SINCERITIES under the situation of SIGN\u2009=\u20091. When omitting the activation or inhibition functions, we only study the regulatory relationships and the regulator/target roles among genes, i.e. SIGN\u2009=\u20090. In this case, the signature of regression or correlation coefficient does not work, which means that all coefficients can be taken as the measure like weights. Thus these nine GRN inference methods have a standard benchmark for comparisons. For more reliable performance validation, we further compare LogBTF with eight methods, including SINCERITIES that we just discussed, GRISLI, SCODE, GENIE3, TIGRESS, ARACNE, CLR, and GNIPLR. The comparing results of AUROC, AUPR, and Pre indexes on all available datasets are shown in The problem of GRN inference is a sparse prediction problem, which has a relatively low positive rate. In this case, Pre is a more valuable index because it measures the proportion of correctly inferred edges . 
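The evaluation protocol (AUROC, AUPR, and precision of inferred edges against a gold-standard network) can be sketched as follows; the toy network, noisy score model, and top-k precision convention are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(3)
n = 10
# Gold-standard adjacency (1 = true edge) and noisy-but-informative
# inferred edge scores for a toy 10-gene network; self-loops excluded.
truth = (rng.random((n, n)) < 0.15).astype(int)
scores = truth * 0.7 + rng.random((n, n))

mask = ~np.eye(n, dtype=bool)
y_true, y_score = truth[mask], scores[mask]
auroc = roc_auc_score(y_true, y_score)
aupr = average_precision_score(y_true, y_score)

# Precision over the top-k predicted edges, k = number of true edges.
k = y_true.sum()
top = np.argsort(y_score)[::-1][:k]
precision = y_true[top].mean()
print(auroc, aupr, precision)
```

Because true edges are sparse, AUPR and precision are more informative than AUROC here, which is the point made in the text.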
Figure\u00a0F-measure), are available in All experimental results on datasets from GWN, including other estimation criteria for inferring GRN from single-cell gene expression data, To evaluate the performance of the LogBTF method, we apply it to three retrieved real single-cell gene expression datasets. Considering that the known gold standard networks of Matsumoto and hHEP datasets do not underlie the activation or inhibition information, so we only investigate the performances in the case of SIGN\u2009=\u20090. In addition, Finally, we apply the LogBTF method to the LMPP scRNA-seq dataset with 31 genes and 531 pseudo-time points. As a result, LogBTF method infers 306 regulatory relationships with DyAcc\u2009=\u20090.676, taking 1.156\u2009min. The inferred GRN shown in in silico and real and compared the performance of our method with those of eight well-known existing inference methods. In particular, our method significantly outperformed them in terms of AUROC, AUPR, Precision, and other evaluation criteria. Apparently, these results indicate that the proposed approach is a promising tool for accurate regulatory networks from time series single-cell gene expression data. Although the LogBTF method increases computation cost by utilizing cross-validation to choose the optimal parameters, it is possible to reduce the running time by parallel implementation, which will be included in our future study. Another future direction is to infer GRNs by constructing an asynchronously updated Boolean threshold network model and then comparing its results on the network dynamics with that of the synchronous update model.In this study, we proposed the Boolean threshold network framework, named LogBTF, by aggregating regularized logistic regression with Boolean threshold function. LogBTF is a btad256_Supplementary_DataClick here for additional data file."}
+{"text": "The negative binomial distribution has been shown to be a good model for counts data from both bulk and single-cell RNA-sequencing (RNA-seq). Gaussian process (GP) regression provides a useful non-parametric approach for modelling temporal or spatial changes in gene expression. However, currently available GP regression methods that implement negative binomial likelihood models do not scale to the increasingly large datasets being produced by single-cell and spatial transcriptomics.The GPcounts package implements GP regression methods for modelling counts data using a negative binomial likelihood function. Computational efficiency is achieved through the use of variational Bayesian inference. The GP function models changes in the mean of the negative binomial likelihood through a logarithmic link function and the dispersion parameter is fitted by maximum likelihood. We validate the method on simulated time course data, showing better performance to identify changes in over-dispersed counts data than methods based on Gaussian or Poisson likelihoods. To demonstrate temporal inference, we apply GPcounts to single-cell RNA-seq datasets after pseudotime and branching inference. To demonstrate spatial inference, we apply GPcounts to data from the mouse olfactory bulb to identify spatially variable genes and compare to two published GP methods. We also provide the option of modelling additional dropout using a zero-inflated negative binomial. Our results show that GPcounts can be used to model temporal and spatial counts data in cases where simpler Gaussian and Poisson likelihoods are unrealistic.https://github.com/ManchesterBioinference/GPcounts along with the data, code and notebooks required to reproduce the results presented here. The version used for this paper is archived at https://doi.org/10.5281/zenodo.5027066.GPcounts is implemented using the GPflow library in Python and is available at Bioinformatics online. 
For datasets that are collected with temporal or spatial resolution, it can be useful to model changes in time or space using a Gaussian process (GP). GPs provide a flexible framework for non-parametric Bayesian modelling, allowing for non-linear regression while quantifying the uncertainty associated with the estimated latent function and data measurement process to model temporal or spatial counts data with a negative binomial (NB) likelihood. GPcounts can be used for a variety of tasks, e.g. to infer temporal trajectories, identify differentially expressed genes using one-sample or two-sample tests, infer branching genes from scRNA-seq after pseudotime inference, or to infer spatially varying genes. We use a GP with a logarithmic link function to model variation in the mean of the counts data distribution across time or space. As an example, in Our package is developed using the GPflow library which weN temporal or spatial locations ny is modelled as a noisy observation of the function evaluated at nx, f is a latent function sampled from a GP prior and x = nx. The simplest and most popular choice of likelihood function is i.i.d. Gaussian noise centred at f, in which case A Gaussian Process (GP) is a stochastic process over real valued functions and defines a probability distribution over function space . GPs prof is drawn from a GP we write,To indicate that k is a positive semidefinite function f at any two locations x and l, controlling spatial or temporal variability, and the amplitude Here, n kernel . Below wThe kernel hyper-parameters and parameters of the likelihood function can be learned from the data by optimizing the log marginal likelihood function of the GP. The marginal likelihood is given by the probability of the data p is the period. 
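For the simpler Gaussian-likelihood case the GP marginal likelihood is available in closed form, so the quantities above can be computed directly; the NB likelihood used by GPcounts instead requires approximate (variational) inference. A minimal numpy sketch with an RBF kernel, where the data and hyper-parameter values are illustrative:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, amplitude=1.0):
    """Squared-exponential kernel a^2 * exp(-(x - x')^2 / (2 l^2))."""
    d = x1[:, None] - x2[None, :]
    return amplitude**2 * np.exp(-0.5 * (d / lengthscale) ** 2)

def log_marginal_likelihood(x, y, lengthscale, amplitude, noise_var):
    """Closed-form log evidence of a zero-mean GP with Gaussian noise."""
    N = len(x)
    K = rbf_kernel(x, x, lengthscale, amplitude) + noise_var * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()       # -0.5 * log|K|
            - 0.5 * N * np.log(2 * np.pi))

x = np.linspace(0, 10, 30)
y = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=30)
print(log_marginal_likelihood(x, y, lengthscale=1.5, amplitude=1.0,
                              noise_var=0.01))
```

Maximizing this quantity over the kernel hyper-parameters is the fitting procedure described in the text.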
When comparing different covariance functions the models are assessed based on the Bayesian Information Criterion (BIC), defined as:N corresponds to the number of observations and d to the number of optimized hyper-parameters of a given model.GPcounts enables model comparison between models with alternative kernels, such as linear and periodic. These kernels are defined as:r and mean \u03bc:The Negative Binomial (NB) distribution is a popular model for bulk RNA-seq counts data and has In some cases it has been found useful to model additional zero counts through a zero-inflated negative binomial (ZINB) distribution . HoweverWe provide one-sample and two-sample likelihood-ratio statistics in GPcounts to identify differentially expressed genes. In the one-sample case the null hypothesis assumes there is no difference in gene expression across time or space and draw counts at each time. We use the mean and percentiles to plot the predictive distribution with the associated credible regions. To smooth the mean of the samples, we use the Savitzky\u2013Golay filter with cubic polynomial to fit the samples . To smooik is calculated as \u03b2 is the slope calculated by fitting for each gene a NB regression model with intercept zero and iT corresponds to the total counts at the ith spatial location.In some cases, there may be confounding variation that will dominate the temporal or spatial trends in the data. For example, bx:In the two-sample time course setting it can also be useful to identify the time at which individual genes begin to follow different trajectories. 
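The NB likelihood in its mean-dispersion parametrization and the BIC-based model comparison can be sketched as follows, assuming the standard definition BIC = -2 log L + d log N; the toy counts and parameter values are illustrative:

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mu, r):
    """NB log-likelihood with mean mu and dispersion r.
    scipy parametrizes NB by (n, p); with n = r and p = r / (r + mu)
    the mean is mu and the variance is mu + mu**2 / r."""
    p = r / (r + mu)
    return nbinom.logpmf(counts, r, p).sum()

def bic(loglik, d, N):
    """Bayesian Information Criterion: -2 log L + d log N."""
    return -2.0 * loglik + d * np.log(N)

counts = np.array([0, 3, 7, 2, 12, 5, 0, 8])
ll = nb_loglik(counts, mu=counts.mean(), r=2.0)
print(bic(ll, d=2, N=len(counts)))  # lower BIC = preferred model
```

Models with different kernels (e.g. linear vs. periodic) are compared by evaluating this criterion with their respective maximized likelihoods and hyper-parameter counts.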
This can be useful in bulk time course data in a two-sample testing scenario (bx the data are distributed around the trunk function f(x) according to some likelihood function Now consider data from two lineages bx, the mean function of the trunk continues to follow f while the mean function of the diverging branch trajectory changes to follow g,After the branching point bx is a hyper-parameter of the joint covariance function of this model along with the hyper-parameters of the GP functions , sine functions and cubic splines, to simulate data from time-varying genes. The sine functions are of the form a and c distributions are chosen to alter the signal amplitude and mean ranges respectively. The cubic spline function has the form x, y) where x and y are drawn from uniform distributions. For non-differentially expressed genes we choose constant values set to the median value of each dynamic function in the dataset. The low and high dispersion values are drawn from uniform distributions We simulated four counts time course datasets with two levels of mean expression (high/low) and two levels of dispersion (high/low) to assess the performance of GPcounts in identifying differentially expressed genes using a one-sample test. Each dataset has 600 genes with time-series measurements at 11 equally spaced time points library to samplIn ormation as impleThe tradeSeq method uses spline-based generalized additive models (GAMs) with a NB likelihood to identify genes significantly associated with changes in pseudotime (H1672 from dataset1 fitted using GPcounts with an NB likelihood where in (a) the pseudotime is estimated using slingshot and in (b) the true time is shown. We compare the performance of GPcounts running a one-sample test with NB likelihood and Gaussian likelihood against the performance of tradeSeq using an association test on ten cyclic single-cell simulation datasets from the tradeSeq benchmark. 
The datasets were simulated using the dynverse toolbox from mouse likelihood in GP inference. This provides a useful tool for RNA-seq data from time-series, single-cell and spatial transcriptomics experiments. Our results show that the NB likelihood can provide a substantial improvement over a Gaussian likelihood when modelling counts data. Our simulations suggest that gains are largest when data are highly over-dispersed. For lower dispersion data the performance of the Gaussian and NB likelihood is similar. We find that the Poisson distribution likelihood performs very poorly for highly expressed genes even for relatively low dispersion. RNA-Seq data can exhibit substantial over-dispersion, especially in the case of single-cell and spatial transcriptomics, and therefore the NB likelihood can be expected to provide a substantial benefit over the Poisson and Gaussian likelihood.Regarding our different application examples, the analysis of spatial transcriptomics data shows promising results. We found a substantial difference using the NB likelihood compared to the SpatialDE method that is based on a Gaussian likelihood GP. Using a similar normalization and testing set-up, we found a much larger set of spatially variable (SV) genes than SpatialDE. Similarly, we found more SV genes than the over-dispersed Poisson method, SPARK, which also uses GP inference but with differences in both the modelling and inference set-up. When modelling the scRNA-Seq data from To improve the practical performance of GPcounts, we implement a heuristic to detect locally optimal solutions and to detect numerical instability. Since the naive GP scales cubically with number of time points we improve the computational requirements through a sparse inference algorithm from the GPflow library using thN.B. was supported by King Saud University funded by Saudi Government Scholarship [KSU1546]. M.R. was supported by a Wellcome Trust Investigator Award [204832/B/16/Z]. M.R. and A.B. 
were supported by the MRC [MR/M008908/1]. Conflict of Interest: none declared."}
+{"text": "Inference of gene regulatory networks has been an active area of research for around 20 years, leading to the development of sophisticated inference algorithms based on a variety of assumptions and approaches. With the ever increasing demand for more accurate and powerful models, the inference problem remains of broad scientific interest. The abstract representation of biological systems through gene regulatory networks represents a powerful method to study such systems, encoding different amounts and types of information. In this review, we summarize the different types of inference algorithms specifically based on time-series transcriptomics, giving an overview of the main applications of gene regulatory networks in computational biology. This review is intended to give an updated reference of regulatory network inference tools to biologists and researchers new to the topic and guide them in selecting the appropriate inference method that best fits their questions, aims, and experimental data. GRNs can be studied through (i) static network analysis and (ii) dynamical modeling, each of which offers different amounts and types of information regarding the network organization, topology, and behavior. Particularly in Boolean models, similar to using time series of expression data for GRN inference, the temporal information in the changes of the genes\u2019 expression is also used to infer the Boolean rules that govern these changes. In this review, we give a basic introduction to GRNs, their topological characteristics and, most importantly, we describe the main GRN inference methods. Our aim is to give life scientists an overview of how to use the abstract concept of GRNs to investigate complex cellular molecular interactions more thoroughly and identify how specific interactions determine cells\u2019 behavior and response to the environment. 
With the increasing abundance of transcriptomics data, data-derived GRNs have the potential to capture novel gene interactions and help to expand our knowledge of important molecular pathways. Despite the increasing interest in developing highly performing GRN inference methods, and this field of research having been active for more than 20 years, often the application of even the best performing methods in real-world studies raises questions about their reliability . Purely Additional limitations in the performance of GRN inference methods can come from other\u2014often ignored\u2014sources. For example, gathering experimental samples from various patients can suppress the variability, since patients have individual histories, immune system, or genetic profiles. Experimental protocols can present restrictions as well, such as limited measurements of expression , heterogeneity in bulk datasets, or incompleteness in single-cell RNAseq transcriptomics.Another challenge in data-driven GRN inference is result interpretability and the difficulty in dealing with the high complexity of the resulting networks increasing with their size. When considering cell state transitions (like differentiation or polarization) in a multicellular system, we can imagine that the interactions between TFs and their target genes can be cell type specific, potentially requiring different GRNs. However, all of the inference methods produce a single GRN, thus failing to provide the dynamics of the underlying mechanisms during cell state transition captured by the time-series. One possibility in dealing with this challenge is to infer stage-specific GRNs, thus having a time-evolving GRN that can help understanding how the involvement and interactions of specific genes/TFs lead to cell state transition. However, studying and understanding time-evolving GRNs remains an unexplored field of high complexity. 
Additionally, tracking specific regulatory pathways or interactions rapidly becomes a real challenge when dealing with large networks consisting of thousands of genes. We assume that topological network analysis might help in reducing the network size by selecting the most important nodes , although there is still a gap in our understanding between the structural and dynamical properties of the network ,168\u2013171.Using advanced computational methods, recent efforts have helped tackle the challenge of filling the gap between static and dynamic properties of GRNs. For example, in their work, Marazzi and colleagues developeOne of the most difficult challenges in performing data-driven GRN inference is validating the results and estimating the method performance in real case studies. Benchmarking the inference methods with simulated datasets from a prior knowledge network (PKN) and a small list of genes can be an efficient way to estimate the methods\u2019 performance. However, this is far from real biological systems, in which the actual regulatory mechanisms are mostly unknown and\u2014in some cases\u2014experimentally unexplored. Unfortunately, using public databases for validating the inferred GRNs can be limited to small-size GRNs and narrowed around the highly studied regulatory pathways. Consequently, many approaches can lead to biased conclusions and a real difficulty in identifying novel regulatory pathways that might play an important role in the system under study.We believe that many of these challenges will be addressed in integrative inference methods to be developed in the future. Despite the considerable improvements and the rapid growth in numbers of GRN inference methods, this remains a relatively new and highly complex field. 
Feeding the methods with different types of \u2018omics data and prior knowledge, GRN inference can help discover the importance of\u2014yet uncharacterized\u2014pathways, biological components, and interactions, thus further increasing our knowledge that will serve as a starting point for even more complete models, in a positive feedback loop."}
+{"text": "This study develops a new clustering method for high-dimensional zero-inflated time series data. The proposed method is based on thick-pen transform (TPT), in which the basic idea is to draw along the data with a pen of a given thickness. Since TPT is a multi-scale visualization technique, it provides some information on the temporal tendency of neighborhood values. We introduce a modified TPT, termed \u2018ensemble TPT (e-TPT)\u2019, to enhance the temporal resolution of zero-inflated time series data that is crucial for clustering them efficiently. Furthermore, this study defines a modified similarity measure for zero-inflated time series data considering e-TPT and proposes an efficient iterative clustering algorithm suitable for the proposed measure. Finally, the effectiveness of the proposed method is demonstrated by simulation experiments and two real datasets: step count data and newly confirmed COVID-19 case data. Clustering is a popular unsupervised machine learning technique for identifying patterns and groupings in data, which has been widely used in many domains, including biology, finance, and image processing. However, many real-world datasets, especially those in healthcare, finance, and environmental monitoring, often exhibit zero inflation, which refers to excessive zeros in the data. This characteristic poses significant challenges to traditional clustering algorithms, which assume that the data points follow a specific distribution or pattern. In particular, zero-inflated time series data are prevalent in many domains, such as disease surveillance and financial transaction analyses. For instance, in epidemiology, the counts of infectious diseases are often zero-inflated due to under-reporting, misclassification, and other factors. Various methods have been proposed to address the challenges of clustering zero-inflated time series data, including a zero-inflated Gaussian mixture model measure in time series data clustering is essential. 
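A zero-inflated count series of the kind discussed here can be simulated with a two-part generator (a structural zero with probability pi, otherwise a Poisson draw); the parameter values below are illustrative:

```python
import numpy as np

def sample_zip(pi, lam, size, rng):
    """Draw from a zero-inflated Poisson: with probability pi emit a
    structural zero, otherwise draw from Poisson(lam)."""
    structural_zero = rng.random(size) < pi
    counts = rng.poisson(lam, size)
    counts[structural_zero] = 0
    return counts

rng = np.random.default_rng(42)
x = sample_zip(pi=0.6, lam=5.0, size=1000, rng=rng)
print((x == 0).mean())  # zero fraction far above a plain Poisson's exp(-5)
```

The excess of zeros produced by such a process is exactly what breaks distance measures that treat every zero as an informative low value.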
Thus, we propose a similarity measure suitable for zero-inflated time series data inspired by the thick-pen transform (TPT) by Fryzlewicz & Oh . The TPTThe primary rationale of the proposed method is that e-TPT can effectively manage the issue of excessive zeros in zero-inflated time series data. To demonstrate this, we present two zero-inflated time series in Fig.\u00a0This study is motivated by two real-world time series. The first comprises data on the number of steps recorded from wearable devices. Figure We consider newly confirmed coronavirus disease 2019 (COVID-19) cases per day in Seoul, Korea, as the second time series dataset. South Korea had its first confirmed COVID-19 case in January 2020. As of February 2022, the cumulative number of confirmed cases was more than 2,665,000. Figure The remainder of this paper is organized as follows. Section t, respectively. As for the pen shape, Fryzlewicz & Oh and Y(t) proposed by Fryzlewicz & Oh (X(t) and Y(t) are on approximately the same scale. The TPMA is then defined asThis section proposes a similarity measure employed as the input variable for clustering zero-inflated time-series data. For this purpose, we consider the thick-pen measure of association (TPMA) between the two time series icz & Oh . Supposet. It is noticeable that the e-TPT transformation can affect the ratio due to the pen thickness. For example, the ratio is less affected when the pen is relatively thin, but the ratio can vary significantly when the pen is relatively thick compared to the data values.To reflect the characteristics of zero-inflated time series data, we propose a new similarity measure based on e-TPT and TPMA. From now on, we assume that the given time series data are nonnegative and zero-inflated. Then, the lower boundary of e-TPT for zero-inflated time series data rarely fluctuates. 
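The pen boundaries and a TPMA-style overlap ratio can be sketched as follows, under one simplified convention (running min/max over a forward window, with the pen's thickness added to the upper edge); Fryzlewicz & Oh's exact definition, and the e-TPT modification proposed here, differ in their details:

```python
import numpy as np

def thick_pen(x, tau):
    """Simplified thick-pen boundaries for thickness tau: running min/max
    of x over a forward window of tau + 1 points, with the pen's own
    width added to the upper edge (one convention among several)."""
    n = len(x)
    windows = [x[t:t + tau + 1] for t in range(n - tau)]
    lower = np.array([w.min() for w in windows])
    upper = np.array([w.max() + tau for w in windows])
    return lower, upper

def tpma(x, y, tau):
    """Thick-pen measure of association: overlap of the two pens divided
    by their joint vertical extent, averaged over time. Values near 1
    indicate the series move together at scale tau."""
    lx, ux = thick_pen(x, tau)
    ly, uy = thick_pen(y, tau)
    overlap = np.minimum(ux, uy) - np.maximum(lx, ly)
    extent = np.maximum(ux, uy) - np.minimum(lx, ly)
    return (overlap / extent).mean()

t = np.arange(200)
a = np.sin(t / 10.0)
print(tpma(a, a, tau=5))   # identical series: association of exactly 1
print(tpma(a, -a, tau=5))  # anti-phase series: visibly lower value
```

Varying `tau` gives the multi-scale view described in the text: a thick pen compares global trends, a thin pen compares local features.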
Therefore, it is natural to modify the TPMA measure of to set tFigure entclass1pt{minimaentclass1pt{minimaK optimal partitions of a set of observations E. We let K partitions of the data that satisfies The goal is to determine d, we define the clustering problem as minimizing the following cost function,UpdateP: Given a set of cluster prototypes M, update P with UpdateM: Given a partition P, update M with The cost function decreases for each iteration step. A well-known K-means algorithm . The time-varying parameters This model was first considered by Fryzlewicz & Ombao for a clmentclass2pt{minimModel 2 : Nonstationary AR model with slowly changing parametersN We generated two cases of data from a nonstationary AR model with slowly changing parameters. Thus, we used with difCase (a)Case (b)Model 3 : Block data with different patternsWe considered a noisy block time series with four different patterns. To generate the time series, we reused with theModel 4 : ZIP model with different meanith data in group g are generated from a zero-inflated Poisson model,Unif. The average zero ratio from the generated data set is 0.583. Figure We considered a time series FunFEM \u2013 Functional clustering based on discriminative functional mixture modeling by Bouveyron et\u00a0al. . We use FunHDDC \u2013 Functional clustering based on the functional latent mixture modeling by Schmutz et\u00a0al. . We use DTW \u2013 Time-series clustering based on the dynamic time warping (DTW) distance by Wang et\u00a0al. , which iFor comparison, we considered three existing functional and time series clustering methods:For the evaluation measure, we used the correct classification rate and the adjusted Rand index (aRand) by Hubert & Arabie . The aRaT increases. The reduction in accuracy for large T is observed for all methods, but the proposed method with In Model 1, the proposed TPT clustering with As described in Section Finally, Table\u00a0Fitbit, a wearable device. 
The step data from 79 participants were measured every minute, and the number of recorded days varies from 32 to 364 per person. The total number of days in the dataset is 21,394. We applied the proposed clustering algorithm to this step count data, first clustering the days based on patterns without considering inter- or intra-subject variability. From the mean curves of the resulting clusters, distinct activity levels emerge. For comparison, the funFEM and funHDDC methods were applied to the step data. The main difference between the clustering results using the proposed method and the functional clustering methods is that the average number of steps in the least active group under the functional clustering methods is close to zero at all times, whereas the average time series of the least active group under the proposed method is far from zero. The proposed method uses the upper bound of the e-TPT; thus, time series with mostly zero values and those with all values less than five are likely to be classified together by the proposed method. Depending on the purpose of the study, it may be essential to classify less-active days into one group; therefore, the proposed clustering method can be chosen according to the purpose of the study. We computed two clustering validation measures for the numerical validation of the clustering results: the Dunn index (Dunn) and a variance-based criterion. The proposed method can also be applied to cluster step data for a particular individual. For this purpose, we selected the 67th individual, who has 364 recorded days, and summarized their steps to observe their activity patterns. We also considered the number of new COVID-19 cases per day in Seoul, Korea, from February 5, 2020, to June 18, 2021, as a time series of length 500. There are 25 districts in Seoul.
We applied the FunFEM, FunHDDC, and DTW methods to the COVID-19 data for comparison; the clustering results are provided in the corresponding figures. In this study, we proposed a novel clustering method that can be applied to high-dimensional zero-inflated time series data. By modifying the TPT, we developed the e-TPT to improve the temporal resolution of zero-inflated time series data and introduced a similarity measure for zero-inflated time series data as an input variable for the clustering algorithm. Furthermore, an efficient iterative clustering algorithm was proposed. Finally, the effectiveness of the proposed method was demonstrated using simulation experiments and real-data analyses with step count data and newly confirmed COVID-19 case data. As the e-TPT solves the problem of excessive zeros in zero-inflated data, the proposed method can cluster zero-inflated time series, which are commonly observed in various fields. In addition, the proposed method provides a multiscale view of the data by considering various thicknesses of the e-TPT: a thick pen clusters time series based on the global trend, whereas a thin pen yields cluster groups divided based on the local features of the data. Furthermore, the proposed method addresses missing-data issues by utilizing the TPT, which can accommodate missing data through the use of a large thickness; similarly, the e-TPT tackles missing data by transforming the raw series into smoothed time series. However, the time series must all have the same length for the current algorithm to be applied. Future studies could explore how to handle time series with varying lengths. Another open issue in the proposed method is finding the pen\u2019s optimal thickness. Although the CV technique has been used for thickness selection in the current study, a data-adaptive selection of the optimal thickness may improve the clustering performance of the proposed method. This is reserved for future research."}
+{"text": "The advent of the Big Data era and the rapid development of the Internet of Things have led to a dramatic increase in the amount of data from various time series. The classification, correlation rule mining and prediction of these large-sample time series data play a crucial role. However, due to the characteristics of high dimensionality, large data volume and transmission lag of sensor data, large-sample time series data are affected by multiple factors and have complex characteristics such as multi-scale behavior, non-linearity and burstiness. Traditional time series prediction methods are no longer applicable to the study of large-sample time series data. Granular computing has unique advantages in dealing with continuous and complex data, and can compensate for the limitations of traditional support vector machines in dealing with large-sample data. Therefore, this paper proposes to combine granular computing theory with support vector machines to achieve large-sample time series data prediction. Firstly, the definition of time series is analyzed, and the basic principles of traditional time series forecasting methods and granular computing are investigated. Secondly, to predict the trend of data changes, the fuzzy granulation algorithm is applied to convert the sample data into coarser granules, which are then combined with a support vector machine to predict the range of change of continuous time series data over a period of time. The results of the simulation experiments show that the proposed model is able to make accurate predictions of the range of data changes in future time periods. Compared with other prediction models, the proposed model reduces the complexity of the samples and improves the prediction accuracy. With the rapid development of the Internet of Things and wearable devices, more and more health data can be obtained from electronic medical records or wearable devices. 
Data such as heartbeat, pulse and body position changes are continuously monitored by smart wearable devices. In healthcare, much of the data is in the form of time series, such as continuous monitoring data on blood glucose, blood pressure and lipids associated with chronic diseases. In the medical field, there are a wide variety of sources of healthcare data, and carrying out predictive model research on these healthcare data is a current research hotspot in the field of medical information data mining. However, due to the high dimensionality, large data volume and transmission lag of wearable sensor data, large-sample time series data are affected by multiple factors and have complex characteristics such as multi-scale behavior, non-linearity and burstiness. Traditional time series prediction methods are no longer suitable for the study of large-sample time series data. Granular computing has distinct advantages when dealing with multi-source, heterogeneous and massive amounts of data: it can granulate complex problems into a number of simple problems using the idea of \u201cgranularity\u201d. Commonly used forecasting methods for time series data currently include statistical-based forecasting methods, knowledge-discovery-based forecasting methods and combinatorial-model-based forecasting methods. Comparing the advantages and disadvantages of common forecasting models shows that no typical model can achieve a completely idealized prediction result. Statistical models are simpler and easier to implement, but it is difficult for them to achieve high predictive accuracy; in addition, the generalizability of statistical models is not high. 
Neural network models can have good prediction accuracy, but they require a large number of samples to support them and their convergence rate is not ideal, while the Auto-Regressive Moving Average (ARMA) model has limitations of its own. Granular computing can be divided into non-fuzzy granulation and fuzzy granulation. In practice, non-fuzzy granulation cannot fully reflect the characteristics of things, so for most studies lacking prior information, fuzzy granulation is closer to reality than non-fuzzy granulation. Fuzzy granulation has a unique ability to compress data into intervals, and it often forms a combined prediction model with the SVM algorithm; such models are widely used in wind speed forecasting, load forecasting, stock price forecasting and urban traffic flow forecasting. In order to obtain higher prediction accuracy and at the same time avoid a large number of calculations when processing large-sample time series data, this paper proposes to combine the fuzzy set model and SVM. The main innovations and contributions of this paper include: (1) The current research status of the mainstream models used for time series data forecasting is analyzed, along with the advantages and disadvantages of these models, and the innovative idea of combining granular computing with SVM for forecasting large-sample time series data is proposed. (2) A prediction model based on fuzzy granulation and SVM is developed: by fuzzy granulation of the sample data, the data of a window are granulated into a fuzzy interval, and SVM is applied to predict the trend of data change in future time. Forecasting as a science was born in relation to weather forecasting. With the advent of techniques such as meteorology, mathematical statistics and machine learning, the research directions of weather forecasting have broadened, enabling forecast accuracy to be stepped up to a new level. A large number of real-life forecasting problems are similar to weather forecasting. 
The data all contain a time component, making such time-series forecasting problems even more complex. It has become a major research direction in the field of data mining to explore the potential patterns in time series data more precisely. Time series data are a series of data values indexed in time order. For some observed variable, the data recorded at different time points t form a series y(t); we call this set of data a discrete time series. Before the raw time series data are analyzed, the stationarity of the time series data is checked. In general, we consider a sampled series of a variable to be stationary if the system parameters and external conditions do not change. However, this is only a qualitative analysis, and some statistical characteristics of the time series need to be tested. The joint distribution of a strictly stationary time series is invariant under different time shifts. For a strictly stationary time series, there is no change in trend, and the relationships between the mean, variance and serially consecutive terms of {Y} are invariant. Due to the stringent conditions for strict stationarity, the majority of time series in practical scenarios are not strictly stationary; in practice we commonly use weakly stationary time series. The expectation, variance and covariance of a weakly stationary time series {Y} do not change over time. If the time series {Y} satisfies these conditions, it is said to be a weakly stationary time series (a broadly stationary time series). If a time series passes the stationarity check, it can be modeled and predicted by a classical fitting model. However, for non-stationary time series, the original series needs to be pre-processed to convert it into a stationary time series before modeling. A stationary time series can be considered as a form of statistical equilibrium: statistical properties such as the mean and variance of a stationary time series are not time dependent. 
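The conversion of a non-stationary series into a stationary one can be illustrated with the textbook example of a random walk, whose first difference is stationary white noise (a generic illustration, not data from this paper):

```python
import numpy as np

# A random walk is a classic non-stationary series; its first difference
# recovers the underlying white noise, which is stationary. This is the
# idea behind differencing a non-stationary series before modeling.
rng = np.random.default_rng(0)
eps = rng.standard_normal(500)   # stationary white noise
walk = np.cumsum(eps)            # non-stationary random walk
diffed = np.diff(walk)           # first difference equals eps[1:]
```

A formal check would use one of the stationarity tests discussed below (e.g. a unit root test) on `walk` and on `diffed`.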
Stationarity is also a prerequisite for the construction of time series forecasting models. In addition, the use of stationary time series can reduce the complexity of fitting models. The models commonly used in time series forecasting are Moving Average (MA) models, Auto-Regressive (AR) models, Auto-Regressive Moving Average (ARMA) models, and Auto-Regressive Integrated Moving Average (ARIMA) models. The predicted value in an AR model is a linear combination of the previous p observations, random errors, and a constant, where yt and \u03b5t are the predicted value and random error at time t, respectively, \u03c6i is the autoregressive coefficient and c is a constant term. However, the constant term is usually omitted in practice for the sake of simplicity. To estimate the parameters of an AR model for a given time series, the Yule-Walker equations are usually used. The MA model uses a set of recent observations to predict the value at a subsequent point in time. The efficient integration of AR and MA models results in a general class of efficient time series forecasting models, called ARMA. A non-stationary time series can be converted into a stationary time series by performing multiple difference operations on it; the flow of the resulting ARIMA model is shown in the corresponding figure. The data set needs to be pre-processed with the necessary filtering and cleaning before the experiment begins. At the same time, in order to improve computational efficiency, the time series is zero-mean processed. After zero-mean processing, the range of values of the data is reduced, but the original pattern of variation of the data is not changed. For non-stationary time series, it is necessary to consider periodic variation or introduce the difference operation during pre-processing in order to reduce the impact of non-stationarity; a stationary time series is then obtained by differencing operations. 
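For the AR(1) case, the Yule-Walker equations mentioned above reduce to a single equation: the coefficient equals the lag-1 autocorrelation of the zero-mean series. A small sketch with simulated data (the simulation setup is illustrative, not from the paper):

```python
import numpy as np

# Yule-Walker estimation for an AR(1) model y_t = phi * y_{t-1} + eps_t:
# for AR(1) the equations reduce to phi = r(1) / r(0), the lag-1
# autocorrelation of the zero-mean series.
rng = np.random.default_rng(42)
phi_true = 0.7
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

y = y - y.mean()                      # zero-mean processing
r0 = np.dot(y, y) / len(y)            # lag-0 autocovariance
r1 = np.dot(y[1:], y[:-1]) / len(y)   # lag-1 autocovariance
phi_hat = r1 / r0                     # Yule-Walker estimate of phi
```

For AR(p) with p > 1, the same sample autocovariances fill a p-by-p Toeplitz system that is solved for the coefficient vector.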
Commonly used tests for stationarity are the time series plot test, the characteristic root test and the unit root test. At present, in addition to statistical methods such as ARIMA, the exponential moving average model and the Bayesian nonparametric model, other prediction methods based on machine learning have been put forward and tested in various fields, such as neural networks, deep learning, LSTM, reinforcement learning and fuzzy systems. Currently, granular computing is a hot topic of research in the field of artificial intelligence and is widely used in the solution of complex problems. Starting from a practical problem, granular computing can decompose a complex problem into simple sub-problems and replace the optimal solution with a satisfactory one. Granular computing is a new way to simulate human thinking and a powerful tool to deal with massive, fuzzy, uncertain and incomplete data. The main models in the theoretical system of granular computing are rough sets, fuzzy sets, quotient spaces, etc. Granular computing consists of three main components: the granule, the granule layer and the granule structure. The granule is the most basic concept in the granular computing model and the most fundamental unit in the solution of complex problems; there are coarse and fine distinctions between granules. A granule layer is an abstract way of describing a complex problem space, but a granule layer alone is still not a structurally uniform whole. Therefore, the concept of granule structure is derived from the concept of granule layer. A granule structure is a relational structure consisting of interconnections between granule layers; it describes the structural relationships between the layers. The more complex the granule structure, the more complex the problem-solving process will be. 
Therefore, it is important to maintain a high degree of independence and low coupling between granules at the same level in order to simplify the solution process. The first problem to be solved in granular computing is granulation. In simple terms, the granulation problem focuses on the selection of a suitable granulation criterion for the construction of information granules. The most basic requirement for a granulation criterion is that the granulated object must be able to fully characterize the original data. Granular computation is generally based on a bottom-up approach. Generally, the solving process is to divide a specific problem into several granules and solve each granule; the solution for the corresponding granule layer is then synthesized according to the corresponding criterion. Finally, the final solution of the entire solution space is synthesized from the solutions of the individual granule layers. Thus granular computing can be used to solve complex problems scientifically. Currently, there are a number of widely used granulation methods: relational granules based on equivalence relations, neighborhood granules based on neighborhood systems and fuzzy information granules based on fuzzy sets. In the natural sciences and in the study of practical problems, the phenomenon of \u201cfuzziness\u201d can be found everywhere. Fuzziness refers to the fact that the degree of difference between objects of study cannot be described by the exact mathematical theory of classical sets. In order to deal with these \u2018fuzzy\u2019 phenomena, the theory of fuzzy sets has been developed, which can be used in many areas of machine learning to make rational decisions under imprecise circumstances. A fuzzy set is a collection of fuzzy concepts that can be used to express fuzziness. Suppose X is a finite non-empty domain, A is a fuzzy set on X, and \u03bcA(x) is the membership grade of x in A. If \u03bcA(x) = 1, then x is considered to belong to A completely. 
If \u03bcA(x) = 0, then x is considered not to belong to A at all. If 0 < \u03bcA(x) < 1, then x is considered to belong to A to some extent, with degree \u03bcA(x). From the definition of a fuzzy set, it can be seen that the difference between an ordinary set and a fuzzy set is the range of values of the characteristic function: the former takes values in the set {0,1} and the latter in the closed interval [0,1]. The fuzzy set A has different representations in different contexts, the three most commonly used being: (1) the Zadeh representation, where A(xi) is the membership function of the fuzzy set A evaluated at xi; (2) the sequential couple (ordered pair) representation; and (3) the vector representation. Support vector machines (SVMs) are generalized linear classifiers suitable for binary classification. The basic principle of SVM is to find an optimal classification hyperplane that satisfies the classification requirements: the hyperplane maximizes the margin on both sides while maintaining classification accuracy. The optimal hyperplane is the most tolerant to local perturbations of the training samples and produces the most robust classification results. Binary classification based on SVM is shown in the corresponding figure. Support vector regression (SVR), developed from SVM, has good fitting power and is well suited for regression estimation, time series prediction problems, etc. The basic principle of SVR is to map the data to a higher-dimensional feature space through a non-linear mapping; the goal of SVR is to find the optimal function from a set of function spaces, where X denotes the training set, w denotes the weight vector, b denotes the bias, f denotes the optimal function, C denotes the equilibrium factor and L denotes the loss function. The kernel function in SVM maps the originally linearly inseparable sample features into a high-dimensional space, so that they become linearly separable there. 
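The membership-grade idea above can be made concrete with the triangular membership function used for the fuzzy granules later in the paper (a minimal sketch; the function name and the linear shape convention are assumptions here):

```python
def triangular_membership(x, a, m, b):
    """Membership grade for a triangular fuzzy set with support [a, b]
    and core m (a < m < b): 1 at x = m, falling linearly to 0 at a and b,
    so grades lie in the closed interval [0, 1]."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

# e.g. with (a, m, b) = (0, 2, 4): full membership at 2, half at 1 and 3
```

An ordinary (crisp) set would instead return only 0 or 1, which is exactly the distinction drawn in the text.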
A kernel function expresses the similarity between two samples: it computes the inner product of the samples in the high-dimensional space, thus avoiding the direct computation in that space and greatly improving the efficiency of the algorithm. The use of kernel functions allows the SVR to map non-linear relations to higher-dimensional spaces, where the \u03b1i are coefficients and k denotes the kernel function. Since radial basis kernel functions have been widely used in time series prediction, radial basis kernel functions are also used in this paper, where \u03c3 denotes the radial basis radius. The SVR produces a unique global minimum solution. This study mainly focuses on the large-sample time series data from medical monitoring cases, and the corresponding constrained optimization objective function consists of two terms: the first term is a regularization term that penalizes large values of the weight vector w, and the second term is a loss term that penalizes misclassifications. The parameter C controls the trade-off between the two terms. The constraints in the optimization problem ensure that the hyperplane separates the data points correctly, and the non-negative parameters \u03bei are the slack variables. Support vector machines can easily find optimal solutions on small-sample training sets and have excellent generalization capabilities. However, SVM\u2019s advantages are not as obvious when dealing with large-scale data, where its performance may be outperformed by other models; in other words, the plain SVM time series forecasting model is no longer suitable for large-sample time series data. Granular computing has a unique advantage in dealing with continuous, complex classes of data, and can compensate for the limitations of traditional SVM in dealing with large-sample data. 
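The radial basis kernel the text settles on can be written directly (a sketch; the function name and the sigma parameterization follow the common Gaussian form, matching the \u03c3 "radial basis radius" in the text):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Radial basis (Gaussian) kernel k(x, y) = exp(-||x - y||^2 / (2*sigma^2)):
    an inner product in a high-dimensional feature space, computed without
    ever constructing that space explicitly."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

# identical inputs give similarity 1; similarity decays with distance
```

In practice one would pass such a kernel (or the equivalent built-in RBF option) to an SVR implementation rather than compute it by hand.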
Therefore, this paper proposes to combine granular computing theory with SVM to achieve large-sample time series data forecasting. Fuzzy granulation is an information processing method derived from fuzzy set theory; a method for describing information granulation was proposed by Lotfi A. Zadeh in 1979, in which x is a variable, P is a fuzzy subset, and \u03bb is the probability that an event may occur. In most cases, the range of values of the variable is a set of real numbers; P is a convex fuzzy subset within the range of values, and \u03bb is a fuzzy subset within the unit interval. There are three main ways of using fuzzy granulation algorithms to describe information granules: time-axis-based algorithms, numerical-axis-based algorithms and combined-axis-based algorithms. The time-axis-based fuzzy granulation algorithm is used in this study. First, the time series of the initial sample is divided into a number of small, consecutive time intervals as required by the actual problem; each interval is a window in the conventional sense of the fuzzy granulation algorithm. The fuzzy granulation algorithm is then used to granulate the data from each window to produce a number of fuzzy information granules. The key to the fuzzy granulation process is the fuzzification of the sample data: the essence of fuzzification is to find the fuzzy granule G on the time series X, where A denotes the membership function and G is a fuzzy information subset in the domain U. The main fuzzy granule shapes widely used at present are triangular, trapezoidal and asymmetric Gaussian; in this paper, triangular fuzzy granules are employed. The steps of the fuzzy information granulation algorithm are as follows. Input: data based on a time series. Output: fuzzy information granules. First, determine the size of the fuzzy granules and rearrange the input time series in ascending time order. 
Assume that the rearranged series is X. Step 1: solve for the granule parameter a. Step 2: solve for the granule parameter b. Step 3: solve for the granule parameter m. Step 4: obtain the fuzzy granule G. Step 5: end of the algorithm. Granular computing is a new direction emerging in the field of artificial intelligence, and its theory proposes new concepts and computational paradigms. In this paper, we combine the idea of granular computing with SVM to reduce the complexity of problem solving, thus effectively improving the training efficiency of SVM as well as the prediction accuracy. In the FGSVM hybrid model, in order to improve the prediction efficiency, the large time series is refined into sub-series; however, this does not improve the prediction accuracy. Different from the FGSVM hybrid model, the fuzzy theory is improved here in order to obtain a better granulation effect. This process can be divided into two parts: the first part divides the original data according to certain rules and determines the best time window size; the second part determines the information granulation rules suitable for the original data, where the best membership function ensures the quality of the data granulation. Building a fuzzy information granulation model based on time series data can thus be divided into two steps: determining the time window partition and building the membership function. Each fuzzy granule G is characterized by three parameter values (a, m, b). In the second step, the SVM model is applied to regression prediction of the parameters associated with the fuzzy information granules, and the parameter values obtained from the regression prediction are used to represent the interval of change in the time series. The flow of the proposed GC-SVM combination prediction model is shown in the corresponding figure. For large-sample time series data, the process of the SVM prediction model based on information granulation is divided into two steps. 
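The windowing-plus-granulation procedure above can be sketched as follows, using the later description of the parameters (a = window minimum, m = window mean, b = window maximum). This is an illustrative sketch under those assumptions, not the paper's exact procedure:

```python
import numpy as np

def fuzzy_granulate(series, window):
    """Granulate a time series into one triangular fuzzy granule per
    non-overlapping window, summarized by (a, m, b) = (window minimum,
    window mean, window maximum)."""
    granules = []
    for start in range(0, len(series) - window + 1, window):
        w = np.asarray(series[start:start + window], dtype=float)
        granules.append((float(w.min()), float(w.mean()), float(w.max())))
    return granules

g = fuzzy_granulate([1, 2, 3, 4, 6, 8], window=3)
# two granules: (1, 2, 3) for the first window, (4, 6, 8) for the second
```

Each (a, m, b) triple then becomes a training target for the SVR stage, which forecasts the interval of change for the next window.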
The first step is to apply the fuzzy-set-based information granulation algorithm to the initial sample data: the original sample data are converted into a number of fuzzy granules, each summarizing the data information of its window. Because time series data often have streaming characteristics and the collection time interval is not uniform, the time-axis-based fuzzy granulation algorithm has broad application prospects. Compared with the combined-axis-based algorithm, the time-axis-based fuzzy granulation algorithm can process large amounts of data more efficiently: because the time axis has a certain regularity, the algorithm can be faster while preserving processing accuracy. In addition, the time-axis-based fuzzy granulation algorithm has low requirements for data preprocessing. In order to verify the effectiveness of the proposed GC-SVM combination prediction model on large-sample time series data, two typical medical monitoring cases were selected for experimental analysis: Experiment I is fetal weight prediction and Experiment II is blood pressure prediction. In obstetrics, accurate prediction of neonatal weight is of great importance. The change in fetal weight is one of the most important indicators of fetal development during pregnancy, and accurate prediction of fetal weight can reduce the risk of labor and improve the quality of the birth. The dataset for Experiment I was screened from a sample of 3,000 electronic medical records from the obstetrics department of a hospital between January 1, 2016 and December 31, 2016. The sample was screened for singleton pregnancy, absence of pregnancy syndrome, and exclusion of malformed fetuses. The age distribution of the pregnant women ranged from 22 to 43 years, and all had undergone an ultrasound examination within 72 h prior to delivery. Some of the experimental data of Experiment I are shown in the corresponding table. The Experiment II dataset was drawn from 158 hypertensive patients from the same hospital. 
The data time frame ranged from January 1, 2017 to July 31, 2017, and blood pressure data were measured at least twice a day during this period. The blood pressure grading used is shown in the corresponding table. In this paper, two algorithms are used to complete the missing values and thus complete the sample data. The first is the most commonly used mean-completion method, where x\u2032 denotes the set of samples that do not contain missing values, xij denotes the j-th feature of sample i, and m denotes the number of samples. The second is the nearest-neighbor completion method, where dik represents the Euclidean distance between samples xi and xk. In order to eliminate the influence of units and data magnitude on the model prediction results, data normalization is performed before the parameters are entered into the prediction model, where \u03bc indicates the mean of the current feature values and \u03c3 indicates the standard deviation of the current feature. The penalty coefficient c and the parameter \u03b3 for the SVM classification in the prediction model were selected by cross-validation; the chosen values of c and \u03b3 are 0.25 and 0.1758, respectively, and a schematic diagram of the selection of the penalty coefficient is provided in the corresponding figure. An IBM server with an Intel i7 6700k CPU, 8 GB RAM and a 300 GB hard disk was used for this experiment; the server ran the Ubuntu 14.04 version of the Linux operating system. The data cleaning during the experiment was written in the Python scripting language, version 3.5.2, and the Matlab function used for the cross-validation was crossval. The benchmarking issues in assessing the proposed GC-SVM combination prediction model are mainly related to accuracy and robustness, which are measured using the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) as metrics. The proposed GC-SVM combination prediction model was compared with ARMA, SVM and an artificial neural network (ANN). The lag of ARMA is 2, the value of the AR coefficient is , and the value of the MA coefficient is . 
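The three evaluation metrics named above have standard definitions, which can be written out directly (MAPE is given here in percent, a common but not universal convention):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (requires nonzero y_true)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))))

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# e.g. true [100, 200] vs predicted [110, 190]:
# mape ~ 7.5 (percent), mae = 10.0, rmse = 10.0
```

RMSE penalizes large individual errors more heavily than MAE, while MAPE is scale-free, which is why the three are reported together.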
The ANN is a simple three-layer BP neural network model: the number of nodes in the input layer is 3, the number of nodes in the output layer is 1 and the number of nodes in the hidden layer is 1. The performance of all prediction models was averaged over 10 experiments, and the intervals of variation across the 10 experiments were also recorded. The comparison of the fetal weight prediction models shows that the GC-SVM prediction model reduces the prediction error and improves the stability of the model compared to the other prediction models. In order to further compare the robustness of the GC-SVM model with that of the ANN model, the distribution of the two types of models in each error range is analyzed in this paper. Taking the historical samples of a patient for 100 consecutive days as an example, the original blood pressure time series data were granulated, and the characteristics of each information granule are described by three parameters a, m and b. The parameter a represents the minimum value of the change in value over the specified time interval; the parameter m represents the average value of the change over that interval; and the parameter b represents the maximum value. It can be seen that the trend in blood pressure after granulation is consistent with the actual trend in blood pressure, indicating that the information has maintained the pattern of the original sample after granulation. The experimental data after granulation can completely characterize the original experimental data, which indicates that the granulation algorithm is feasible. 
The final prediction results of the four models are shown in the corresponding figure. It can be found that the proposed GC-SVM combination model makes full use of the advantages of both component models to capture the trend of the blood pressure time series more accurately than a traditional single model. Compared with the other prediction models, the combination model performs better in terms of both prediction precision and stability. This paper combines the emerging theory of granular computing with the well-established SVM regression forecasting model and applies it to the forecasting of large-sample time series data, in order to uncover potential trends in continuous data. Because of the unique advantages of granular computing in dealing with continuous, complex data, it can compensate for the limitations of traditional SVM in dealing with large-sample data. Therefore, this paper proposes to combine granular computing theory with SVM to achieve large-sample time series data prediction. First, the initial sample time series is divided into a number of small, continuous time intervals according to the requirements of the actual problem; each interval is a window in the conventional sense of the fuzzy granulation algorithm. The fuzzy granulation algorithm is then used to granulate the data from each window to produce a number of fuzzy information granules. The key to the fuzzy granulation process is the fuzzification of the sample data, and this paper employs triangular fuzzy granules. The results of Experiment I on fetal weight prediction show that the GC-SVM combination model can reduce the error rate of prediction in the range of large errors. 
The results of Experiment I on the blood pressure time series show that the proposed GC-SVM combination model performs better in prediction accuracy and stability than the other prediction models. The raw data supporting the conclusions of this article will be made available by the author, without undue reservation. YY conceived and designed the study, collected samples and performed the experiments, analyzed the data, and wrote the manuscript."}
+{"text": "Multivariate synchrony goes beyond the bivariate case and can be useful for quantifying how groups, teams, and families coordinate their behaviors, or estimating the degree to which multiple modalities from an individual become synchronized. Our package includes state-of-the-art multivariate methods including symbolic entropy, multidimensional recurrence quantification analysis, coherence, the cluster-phase ‘Rho’ metric, and a statistical test based on the Kuramoto order parameter. We also include functions for two surrogation techniques to compare the observed coordination dynamics with chance levels, and a windowing function to examine time-varying coordination for most of the measures. Taken together, our collation and presentation of these methods make the study of interpersonal synchronization and coordination dynamics applicable to larger, more complex, and often more ecologically valid study designs. In this work, we summarize the relevant theoretical background and present illustrative practical examples, lessons learned, as well as guidance for the usage of our package, using synthetic as well as empirical data. Furthermore, we provide a discussion of our work and software and outline interesting further directions and perspectives. multiSyncPy is freely available under the LGPL license. Across physical, biological, and social systems, interacting components of complex systems coordinate and, at times, synchronize their behavior. When two or more system components are aligned temporally and spatially, their behavior is thought to be synchronized. 
Synchronization is a well-known natural phenomenon, and a seemingly universal property exhibited by complex systems, ranging from physiological signals of the autonomic nervous system to the behavior of groups. Such phenomena are studied within coordination dynamics, an area of inquiry that “describes, explains and predicts how patterns of coordination form, adapt, persist and change in living things”. Note that other methods do exist for generating symbolic states from continuous time series. Multidimensional recurrence quantification analysis (mdRQA) uses the temporal regularity and recurrence of system states and sequences of states as a proxy for synchronization. The proportion of recurrence (%REC) is simply the number of recurrent cells divided by the total number of cells. The proportion of determinism (%DET) only considers sequences of diagonally recurrent cells that are longer than a specified length: it is the number of recurrent cells left after applying this criterion, divided by the number of recurrent cells. The average length of a diagonal sequence of recurrent cells (ADL) provides a third metric, and the length of the longest diagonal sequence provides the final metric (maxL). One of the benefits of recurrence-based analysis is that it is considered to handle nonlinearity and nonstationarity well, since it does not explicitly model the variables or their interactions as some particular set of functions. Averaging coherence across frequencies, in contrast, means that information about the relative amplitude at different frequencies is ignored, which may be undesirable for some types of signals. For example, this issue often becomes noticeable when a recording includes Gaussian noise, since Gaussian noise impacts the spectral content at all frequencies, whilst the meaningful content of the ‘true’ signal may only be contained in a limited range of frequencies. 
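The four mdRQA quantities described above can be computed directly from a recurrence matrix. The following is a minimal illustrative implementation under stated assumptions (the main diagonal is kept for simplicity, and the thresholding uses plain Euclidean distance); it is not multiSyncPy's own code.

```python
import numpy as np

def recurrence_matrix(ts, radius):
    """ts: array of shape (n_timesteps, n_vars); two time points recur
    when their Euclidean distance is below `radius` (illustrative sketch)."""
    d = np.linalg.norm(ts[:, None, :] - ts[None, :, :], axis=-1)
    return (d < radius).astype(int)

def rqa_metrics(rm, min_len=2):
    """Compute %REC, %DET, ADL, and maxL from a recurrence matrix."""
    n = rm.shape[0]
    rec = rm.sum() / rm.size                      # %REC: recurrent / total cells
    det_cells, diag_lens = 0, []
    for k in range(-n + 1, n):                    # scan every diagonal
        run = 0
        for v in list(np.diagonal(rm, k)) + [0]:  # trailing 0 flushes last run
            if v:
                run += 1
            else:
                if run >= min_len:                # count only runs >= min_len
                    det_cells += run
                    diag_lens.append(run)
                run = 0
    det = det_cells / rm.sum() if rm.sum() else 0.0
    adl = float(np.mean(diag_lens)) if diag_lens else 0.0
    maxl = max(diag_lens) if diag_lens else 0
    return rec, det, adl, maxl
```

For a constant multivariate series every cell recurs, so %REC is 1 and the longest diagonal spans the whole matrix; real data sit well below these extremes.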
If this is the case, then the Gaussian noise may dominate over the meaningful content when averaging across frequencies, especially when using a high sampling rate, which allows many frequency components to be computed. If possible, the reliability of the metric could be improved by applying filters to remove noise, such as a bandpass filter that removes noise occurring at irrelevant frequencies. If filtering is not possible, noise may have a serious impact on the results. To mitigate this issue, we propose an additional metric that is closely related to the averaged spectral coherence. Observing that the coherence value at each frequency component is a normalized version of the cross-spectral density (CSD) at that component, we propose to use the cross-spectral density to define an additional metric, but to postpone normalization until after aggregating the values across frequencies. As a consequence, information contained in the cross-spectral density regarding amplitude can be used to moderate the impact that each frequency component has on the final output. Our proposal is to use the sum across frequencies of the squared cross-spectral density, and to normalize by the sum across frequencies of the auto-spectral density of the first signal multiplied by the auto-spectral density of the second signal. This produces a value between 0 and 1 for a pair of variables. For two variables x and y, the calculation is: snCSD(x, y) = Σ_f |P_xy(f)|² / Σ_f P_xx(f)·P_yy(f), where P_xy is the cross-spectral density, P_xx and P_yy are the auto-spectral densities, and each sum runs over the n frequency components. Repeating the process across all pairs of variables and then averaging leads to a final multivariate metric. Hereafter, we shall refer to this additional metric as “sum-normalized CSD”. 
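The formula above can be sketched with standard spectral estimators. The following is an illustrative re-implementation using Welch-style estimates from SciPy; the function name and parameter choices are our own, not multiSyncPy's API.

```python
import numpy as np
from scipy import signal

def sum_normalized_csd(x, y, fs=1.0, nperseg=256):
    """Sum the squared cross-spectral density across frequencies, then
    normalize by the summed product of the two auto-spectra, as described
    in the text. Returns a value in [0, 1]; identical signals give 1."""
    _, pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
    _, pyy = signal.welch(y, fs=fs, nperseg=nperseg)
    return float(np.sum(np.abs(pxy) ** 2) / np.sum(pxx * pyy))
```

Because normalization happens only after summing, frequency components with larger amplitude contribute more to the final value, which is exactly the property that distinguishes this metric from averaged coherence.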
This metric may be preferable when there is substantial noise that cannot be filtered out, for example because it lies in a frequency range of interest. A final point to note about both the aggregated coherence and the sum-normalized CSD is that they are based on estimating the spectral density at different frequencies. As with any time-frequency analysis, the number of frequencies that can be analyzed varies according to the length of the input time series, making it difficult to compare results from multivariate time series of different lengths. The remaining two synchrony metrics are based on the concept of the phase of a periodic signal, which describes how far the signal is along its cycle of behavior at any given moment in time. Two common options for estimating phase are (1) to perform the Hilbert transform and then calculate angles from the resulting complex numbers, and (2) to perform wavelet analysis. In summary, the measures provided by our package are symbolic entropy, mdRQA, coherence, the cluster-phase ‘rho’ metric, and a statistical test based on the Kuramoto order parameter (based on Frank and Richardson’s weak null hypothesis), plus our proposed ‘sum-normalized CSD’. In addition to the different multivariate synchronization methods mentioned above, multiSyncPy also offers functions to generate surrogate datasets from samples containing several multivariate time series. Surrogate data is often used in synchronization-focused research as a way to determine whether or not the observed dynamics differ from chance levels. Our package also provides functions for generating synthetic data; for example, multivariate time series can be generated from a Kuramoto model. The parameter K is the coupling strength. The initial phases for the oscillators are provided as a numpy array, and so too are the natural frequencies in the parameter ‘omegas’. The alpha value modulates the contribution of Gaussian noise to the signals. The standard deviation of the Gaussian noise is the square root of the parameter ‘d_t’, which is the length in seconds of the period between time steps, and the noise is multiplied by alpha before being added. 
The length parameter is the number of time steps to generate. The function returns a numpy array of shape (number of variables, number of time steps); note that this numpy array is the data structure used across our package to represent multivariate time series. If we had chosen to use autoregressive synthetic data instead of data from a Kuramoto model to showcase our code, the length would be specified along with ‘phi_1’ and ‘phi_2’ (respectively, the weighting of the values one and two time steps ago in the autoregressive process), ‘epsilon’ (the standard deviation of Gaussian noise added at each time step), and an optional bias term ‘c’. These parameters correspond to the parameters of the autoregressive process described in more detail in the next section. This returns a univariate time series of the desired length; to construct a multivariate time series, multiple univariate time series would be generated and stacked together. With some data on which to compute the metrics, it is now possible to calculate the symbolic entropy, which requires only a simple function call and returns the symbolic entropy across the entire time series as a single number. For mdRQA, our package includes a function to create the recurrence matrix for a multivariate time series. The user must specify a ‘radius’, which is used as a threshold to decide when two time points are sufficiently similar to be considered recurrent. If an appropriate value is not known a priori, then typically the radius is established by iteratively running the mdRQA analysis and adjusting the radius until the percentage of recurrence is between 1 and 5%. From the recurrence matrix, the proportion of recurrence (%REC), proportion of determinism (%DET), mean diagonal length (ADL), and max diagonal length (maxL) are computed; each of these is a single number. The next metric is the aggregated coherence score, as used in Reinero et al. 
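To make the Kuramoto generation parameters described above concrete, here is a minimal, self-contained simulation using simple Euler integration. This is our own sketch under the parameterization given in the text (K, omegas, alpha, d_t), not the package's generator.

```python
import numpy as np

def kuramoto_data(phases, omegas, K, alpha, d_t, length, rng=None):
    """Simulate N coupled Kuramoto oscillators with additive Gaussian
    noise (std sqrt(d_t), scaled by alpha, as described in the text).
    Returns sin(theta) observations of shape (n_oscillators, length)."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.array(phases, dtype=float)
    n = len(theta)
    out = np.empty((n, length))
    for t in range(length):
        # mean-field coupling: (K/N) * sum_j sin(theta_j - theta_i)
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        noise = alpha * rng.normal(0.0, np.sqrt(d_t), size=n)
        theta = theta + d_t * (omegas + coupling) + noise
        out[:, t] = np.sin(theta)
    return out

data = kuramoto_data(np.zeros(5), np.ones(5), K=0.5, alpha=0.1,
                     d_t=0.01, length=1000)
```

Raising K pulls the oscillators' phases together, which is what the later experiments exploit when treating coupling strength as a ground-truth proxy for synchrony.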
It is worth noting that the aggregated coherence score can be affected by the presence of Gaussian noise; if this is likely, then it may be valuable to also compute the ‘sum-normalized CSD’ metric proposed in this paper. The remaining metrics are slightly different because they are based upon analyzing the phase time series of the different variables. This means that it is necessary to convert the raw amplitude values in the time series into phase values. Because there are multiple valid ways to do so, such as extracting them from the Hilbert transform or using wavelet analysis, it is left open to the user to decide which method to use, using library functions (e.g., from numpy and scipy) to obtain phase estimates. Our ‘rho’ function returns two outputs: (1) a numpy array of length equal to the length of the input time series, which is a continuous estimate of synchrony at each moment, and (2) the overall score as a single value. By default, our ‘rho’ function thus returns a continuous, time-varying estimate of synchronization. In addition, we provide a windowing function that allows users to do the same with the other metrics, in order to examine the development of synchrony over time. In other words, by default a majority of these metrics return values that summarize the entire time series, but our windowing function allows one to conveniently estimate the change in coordination over time. The user simply provides the time series data, the function used to compute a specific metric, the number of time steps to use as a window, and the number of time steps to use as a step size between successive windows. The outputs are provided in a numpy array with the first dimension representing windows, and the other dimensions determined by the synchrony metric in question. 
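The phase-extraction and cluster-phase steps can be illustrated as follows. This is a sketch of the general cluster-phase approach with our own function names; multiSyncPy's implementation may differ in detail.

```python
import numpy as np
from scipy.signal import hilbert

def phases_from_amplitude(ts):
    """ts: (n_vars, n_timesteps). Instantaneous phase via the Hilbert
    transform -- one of the two options mentioned in the text."""
    return np.angle(hilbert(ts, axis=1))

def cluster_phase_rho(theta):
    """Sketch of the cluster-phase 'rho': each member's phase is compared
    to the group's cluster phase, corrected by that member's mean offset.
    Returns a per-timestep rho and its mean as the overall score."""
    q = np.angle(np.mean(np.exp(1j * theta), axis=0))        # cluster phase
    rel = theta - q                                          # relative phases
    mean_rel = np.angle(np.mean(np.exp(1j * rel), axis=1))   # per-member offset
    rho_t = np.abs(np.mean(np.exp(1j * (rel - mean_rel[:, None])), axis=0))
    return rho_t, float(np.mean(rho_t))

# Identical signals should give rho close to 1 at every time step.
ts = np.tile(np.sin(np.linspace(0, 30, 600)), (3, 1))
rho_t_demo, overall_demo = cluster_phase_rho(phases_from_amplitude(ts))
```

The per-timestep output corresponds to the continuous estimate described above, and its mean to the single overall score.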
For example, the windowing function can be used to calculate the symbolic entropy in windows of size 100 with an offset of 100 (and thus, no overlap). For the phase-based metrics, the inputs should be the phase time series rather than the raw amplitude values; to demonstrate this, we generate a sample of multivariate time series from Kuramoto models and convert them into phase time series. The final synchrony metric included in our package is a statistical test on the Kuramoto order parameter, to determine how likely a sample of data is according to the weak null hypothesis of Frank and Richardson. Once the synthetic data are available, performing the test can be done using the ‘kuramoto_weak_null’ function, which returns the p value, t-statistic, and degrees of freedom for the sample provided. Lastly, multiSyncPy provides two functions for generating surrogate data from a sample of time series, which, for example, can then be used to calculate baseline results. The first cuts each variable in each time series of the sample into windows and reorders the windows; it requires a list of numpy arrays (one per time series) and the desired length of a window. The second method for constructing surrogate data works by swapping variables between time series, leaving the variables the same individually, but combining them into new time series (with other randomly selected variables); it takes the sample as a numpy array and returns a surrogate sample with the same shape. Now that we have demonstrated how to compute the various metrics and take advantage of the functions of multiSyncPy, we next showcase the use of our package using the aforementioned two types of synthetic data as well as two existing empirical datasets. 
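The second surrogation scheme, swapping variables between time series, can be sketched as follows (our own helper, assuming a sample array of shape (n_series, n_vars, n_timesteps)):

```python
import numpy as np

def swap_variables_surrogate(sample, rng=None):
    """Each variable keeps its own dynamics but is reassigned to a
    randomly chosen time series in the sample, breaking within-series
    coordination while preserving each variable's structure."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_series, n_vars, n_t = sample.shape
    surrogate = np.empty_like(sample)
    for v in range(n_vars):
        order = rng.permutation(n_series)   # shuffle the series assignment
        surrogate[:, v, :] = sample[order, v, :]
    return surrogate
```

Computing the same metrics on the surrogate sample then yields the chance-level baseline used in the experiments below.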
Doing so allows us to compare the methods for quantifying multivariate synchrony, and to provide lessons learned from their application to different types of data. First, explorations of two types of synthetic data are presented, using a stochastic autoregressive function following the example of Gouhier and Guichard. In addition to synthetic data, we use two datasets from real experiments, including the ELEA corpus of recordings. So far, it is not clear what the relative merits of the different methods are, because most studies examining multivariate coordination employ only a single technique. Using synthetic data, we can compare the performance of these multivariate synchrony metrics by systematically varying simple parameters used to generate the data. Examples of our two types of synthetic time series (stochastic autoregressive process and Kuramoto oscillators) are visualized in the corresponding figure. The first type of synthetic data we examine comes from a stochastic autoregressive function. The data generated by this process have temporal structure, as each value in the time series is derived from the previous two values (plus Gaussian noise); however, the way that the values develop over time is unpredictable and a consequence of the Gaussian noise added at each time step. The process is described by the equation X_t = β_0 + β_1·X_{t−1} + β_2·X_{t−2} + ε_t, where X_t is the value of the process X at time t, β_0, β_1, β_2 are fixed parameters, and ε_t is Gaussian noise added at time t. The advantage of this is that two different time series generated in this manner should have temporal structure, but no above-chance-level synchrony with one another. 
We generate five time series of length 1000 from the autoregressive function, and stack them alongside one another to act as the variables of a multivariate time series. Following the example of Gouhier and Guichard, we then introduce correlated noise: we select a number from a Gaussian distribution at each time step, and add this same number to each variable in the multivariate data. The relative contribution of the correlated noise to the final data is a parameter that we vary through the course of our experiments, i.e., we treat it as an independent variable. The variables in the synthetic data are each normalized to have mean 0 and variance 1, and the Gaussian distribution used to generate correlated noise has a mean of 0, but a variable standard deviation. To perform our investigations, we gradually increase the standard deviation of the correlated noise: it begins at zero, and increases in steps of 0.1 up to 1.0. At each value for the standard deviation of the correlated noise, we use the aforementioned procedure to create a sample of 500 time series, each containing five variables and 1000 time steps, and then compute the synchrony metrics (see the corresponding figure). We report Cohen's d and the percentage change to determine how robust the metrics are to changes in parameters of the synthetic data. We attempted both strategies mentioned in the previous section for constructing surrogate data, namely (1) shuffling windows of data, and (2) swapping variables between time series so that the variables have the same structure but appear in different time series. For the approach of shuffling windows, each variable was cut into windows with length 1/10 of the total time steps and reordered. 
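The data-generation procedure just described (independent AR(2) variables, z-scored, plus a single shared Gaussian draw added to every variable at each time step) can be sketched as follows; parameter defaults and seeds are illustrative:

```python
import numpy as np

def ar_series(length, phi_1=0.5, phi_2=0.2, epsilon=1.0, c=0.0, rng=None):
    """Stochastic AR(2) process: X_t = c + phi_1*X_{t-1} + phi_2*X_{t-2}
    + Gaussian noise, matching the equation in the text."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.zeros(length)
    for t in range(2, length):
        x[t] = c + phi_1 * x[t - 1] + phi_2 * x[t - 2] + rng.normal(0, epsilon)
    return x

def correlated_noise_sample(n_vars=5, length=1000, noise_sd=0.5, rng=None):
    """Stack independent AR variables, z-score each, then add the SAME
    Gaussian draw to every variable at each time step."""
    if rng is None:
        rng = np.random.default_rng(0)
    ts = np.stack([ar_series(length, rng=np.random.default_rng(100 + i))
                   for i in range(n_vars)])
    ts = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    shared = rng.normal(0.0, noise_sd, size=length)   # the correlated noise
    return ts + shared[None, :]
```

With noise_sd = 0 the variables are unrelated; as noise_sd grows, the shared component drives up apparent coordination, which is exactly the manipulation the experiments sweep over.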
However, this produced unexpectedly large discrepancies between the main results and the surrogate results even when there was no correlated noise; in this circumstance no synchrony is expected between variables in either the original time series or the surrogate baseline (since the variables come from unconnected autoregressive processes), and so the outputs should not differ noticeably. For this reason, we report the results using the second type of surrogate data, in which variables are swapped; for this type, the results were as expected when no correlated noise had been added. The results are presented in the corresponding figures. To help us evaluate the observed values for each metric, we create surrogate data for each sample after adding correlated noise and re-calculate the metrics on these, providing ‘surrogate baseline’ results. The observed values without surrogation are compared to the surrogate baseline, and we report Cohen's d and the average percentage increase in each metric, compared to the surrogate baseline where variables had been swapped between time series after adding correlated noise. Using the outputs of this investigation, we are able to compare how the synchrony metrics perform as the standard deviation of the correlated noise is increased, and highlight some important differences. All of the metrics are evidently affected once the standard deviation of the correlated noise becomes sufficiently high. As shown in the figures, relative to the other metrics, rho appears to stay close to zero for the longest, and has one of the lowest increases over the baselines when the correlated noise has its highest standard deviation. Unlike coherence, the rho metric did not reach values close to its maximum of 1.0. 
One reason rho is less likely to be impacted by correlated noise is that this metric is based upon phase, which quantifies progression through some structured pattern; since the added noise does not have temporal structure, it may have a limited impact on the extracted phase, and therefore affect rho less than it affects the other metrics.When interpreting the\u00a0percentage change\u00a0results, it is important to note that, while the\u00a0surrogate baseline values for coherence and rho remained largely consistent while changing the correlated noise\u00a0and then shuffling variables to create surrogate data, the opposite was true for recurrence. The recurrence of the first baseline was on average 13.6% without correlated noise, but fell to 1.1% when the standard deviation of the noise was 1.0. This may help to explain the larger over-the-baseline increases observed in the recurrence. Symbolic entropy appears to have remained close to zero for longer than recurrence and coherence while increasing the standard deviation of the noise, but not for as long as rho. At no point was the change over the baselines particularly large for symbolic entropy. When interpreting the symbolic entropy, it is worth noting that there is a theoretical minimum of roughly 1.1 and a theoretical maximum value of 5.5 when using five variables. This provides some limitation on how much the observed entropy can increase over a baseline, which is not the case for the other metrics.Before moving on, a note of caution is that these results give a limited impression of how the metrics will perform in real scenarios. The autoregressive data, especially when we use a high parameter value for noise, adds correlated random numbers at each time step, which may be quite different from what would be expected with real sources of noise. Real sources might occur infrequently rather than having a consistent impact over all time points or have a more distinctive spectrum. 
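To make the symbolic-entropy bounds mentioned above concrete: with five variables and three symbols per variable, a collective-state entropy computed with the natural log is bounded by ln 3 ≈ 1.1 (all variables always share a symbol) and 5·ln 3 ≈ 5.5 (independent, uniform symbols). The sketch below uses our own quantile-based symbolization; multiSyncPy's scheme differs in detail, so treat this as an illustration of the idea only.

```python
import numpy as np
from collections import Counter

def symbolic_entropy_sketch(ts, n_symbols=3):
    """ts: (n_vars, n_timesteps). Map each variable to one of `n_symbols`
    by quantile binning, treat the per-timestep combination of symbols as
    a collective state, and return the Shannon entropy (natural log) of
    the state distribution. Lower entropy = fewer collective states."""
    cuts = np.quantile(ts, np.linspace(0, 1, n_symbols + 1)[1:-1], axis=1).T
    symbols = np.stack([np.searchsorted(cuts[i], ts[i]) for i in range(len(ts))])
    counts = Counter(tuple(col) for col in symbols.T)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())
```

Perfectly synchronized variables visit only the n_symbols all-identical states, pinning the entropy near its lower bound; independent variables spread mass over up to n_symbols^n_vars states.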
Nevertheless, these investigations with unstructured but correlated noise can help to give an initial characterization of the different metrics in multiSyncPy. Next, we quantify the impact of increasing the coupling strength of oscillators in a Kuramoto model on the synchrony metrics. We assume that data generated from models where the coupling is higher will have higher levels of synchrony on average, and the following results show how well our metrics reflect this assumed trend. Each oscillator i in the model evolves according to dθ_i/dt = ω_i + (K/N)·Σ_j sin(θ_j − θ_i) + ζ_i(t), where ζ_i(t) denotes the random noise added at time t. We systematically increase the coupling strength parameter K from 0.0 to 2.0 in steps of 0.2 and investigate the impact on the synchrony metrics. For each setting of K, we create a sample of 500 Kuramoto models and generate a multivariate time series with five variables and 1000 time steps. For each model in the sample, we choose a different set of natural frequencies for the oscillators, sampled from an exponential distribution. This is important for the sake of our surrogate baseline where variables are swapped across time series (remaining the same but ending up in different time series), since high levels of synchrony could be observed in the surrogate data simply because the same natural frequencies are repeated across variables in the sample. The corresponding figures present these results. Using them to compare the metrics, we find that all metrics increase as the coupling in the Kuramoto models is strengthened. Rho seemingly exhibits an increase over the baseline more quickly than some of the other metrics when coupling strength becomes larger, suggesting that it does detect synchrony well in this context. The percentage of recurrence also seems to be responsive to coupling strength when looking at the percentage increase over the baseline; however, the estimated effect size increases slightly more slowly than with the other metrics. 
Symbolic entropy showed lower increases over the surrogate baseline for smaller coupling strengths, but still demonstrated increases when the coupling strength was higher. Overall, the metrics fit the assumption that increased synchrony should be detected when the coupling between oscillators is increased. Using this type of synthetic data from Kuramoto models, it is possible to investigate various types of validity for our metrics. First, we can gain an insight into convergent validity by calculating the correlation between the metrics, since they are all expected to measure some form of multivariate synchrony. We do this across all Kuramoto time series generated according to the above procedure. With the same data, we also consider whether the multivariate metrics have concurrent validity with a metric that reflects the related concept of dyadic synchrony; we chose cross-correlation as a standard dyadic metric. Additionally, it is possible to examine how well each multivariate metric is suited to predicting the known coupling parameter of the Kuramoto models, providing an insight into criterion validity. All metrics showed a strong correlation (|r| > .95) between the coupling parameter and the average synchrony score of the time series generated using that coupling parameter. This provides strong evidence of criterion validity, in that all metrics increase when synchrony in the form of coupling is increased. It is worth noting that this synthetic data may not resemble all of the important aspects of empirical signals, as the Kuramoto data we generated assumes that the signals all come from periodic oscillators with equal and constant coupling. To complement the investigation of synthetic data, which allows us to vary key parameters of the data and examine the consequences, we also examine data from empirical studies, which have more complex and realistic properties, but at the cost that we do not know what parameters are driving them. 
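The Kuramoto order parameter that underlies both the coupling analyses above and the statistical test used below can be computed directly from a phase array:

```python
import numpy as np

def kuramoto_order_parameter(theta):
    """R(t) = |(1/N) * sum_j exp(i*theta_j(t))|, where theta has shape
    (n_oscillators, n_timesteps). R is 1 when all phases coincide and
    near 0 when the phases are spread evenly around the circle."""
    return np.abs(np.mean(np.exp(1j * theta), axis=0))

# Fully aligned phases give R = 1; four evenly spread phases give R = 0.
aligned = np.zeros((4, 10))
spread = np.tile(np.linspace(0, 2 * np.pi, 4, endpoint=False), (10, 1)).T
```

Frank and Richardson's weak-null test then asks whether the observed distribution of R values in a sample differs from what uncoupled oscillators would produce.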
We present two case studies using openly available data from the ELEA corpus. Missing data points were identified and replaced using linear interpolation; this affected only 0.3% of the data. Because the recordings are of different lengths, we take 14,000 frames from the middle of each recording, to produce a dataset with greater consistency and with which it is easier to swap variables across recordings to produce a surrogate baseline. Because we are interested in the videos, we use the ELEA-AV sub-corpus, which includes all the sessions that were video-recorded, amounting to 26 different teams. From these, the 20 recordings with four members in a team were selected for use, so that each multivariate time series would contain the same number of variables. Windows containing 300 time steps of data (equating to 10 s of recording) with high and low levels of synchronization are displayed in the corresponding figure. Our first goal in investigating the data is to identify whether synchrony occurs above the level expected by chance. Our first option in this regard is to apply the statistical test of Frank and Richardson on the Kuramoto order parameter. This method compares the observed data to a null hypothesis stating how a sample of values is expected to be distributed. This works for the Kuramoto order parameter; for the other metrics included in the package, the expected distribution of values in a sample is not known, and so we use surrogate data to gain an understanding of whether observed synchrony is above chance level. In general, the test of the Kuramoto order parameter offers one way to examine levels of synchrony within a sample of recordings. However, this is predicated on the data being suitable for such analyses: the test is based upon the Kuramoto model of oscillating components, in which the components follow sinusoidal progressions and have a uniform distribution of phase values. As can be seen from the corresponding figure, these assumptions are questionable for the movement data; performing the test nevertheless yields t(19) = 51.7, p < .001, Cohen’s d = 10.4, demonstrating a strong effect. 
However, we also find that when shuffling variables between time series to create a surrogate baseline, in which synchronization is not expected, we achieve very similar results: t(19) = 46.5, p < .001, Cohen’s d = 10.4. This acts as a word of caution and a ‘lesson learned’ from our empirical case study, confirming that the test of the Kuramoto order parameter must be used on data from which appropriate phase information can be extracted; otherwise the results may be highly inflated. The distribution of phase values can be inspected to see whether it is (roughly) uniform before performing the test. These findings also suggest that there is value in having access to multiple methods for analyzing synchrony, which is what multiSyncPy offers, since one method may overcome the limitations of another and provide a way to identify inconsistent results. Further investigation of how both the type of data and the extracted phase information impact the test of the Kuramoto order parameter, using synthetic data and alternative methods for extracting phase, is presented in our Appendix I. Our next focus is the performance of the remaining synchrony metrics compared to a surrogate baseline. We take the same data and compute the percentage of recurrence (using a radius of 0.4 to determine recurrent points), cluster-phase rho, coherence, sum-normalized CSD, and symbolic entropy. Then, the variables are shuffled across recordings to produce a surrogate baseline and the metrics are re-computed for comparison, reporting mean values, t tests, and estimated effect sizes. All of the metrics show some differences consistent with the hypothesis that synchrony is greater in the real than in the surrogate data, and t tests suggest significance across all metrics except recurrence (as shown in the corresponding table). 
The mean values of the different metrics are presented in the corresponding table, and the relative distribution of the synchrony metrics across ELEA meetings is shown in the accompanying figure. We also analyzed the data by dividing each meeting into smaller, non-overlapping windows, following prior examples. Another contribution of our work is to present an initial investigation of how different metrics perform on the same tasks. The methods collated in multiSyncPy have previously only been introduced in isolation from one another, and this initial comparison of multivariate metrics is novel. By examining two types of synthetic data, we observed that some metrics are more responsive than others to the addition of correlated noise to a multivariate signal, and some metrics appear more sensitive to the coupling strength in situations that can be modeled as coupled Kuramoto oscillators. The ‘rho’ metric, for example, seemed the least influenced by increased correlated noise, while being one of the metrics that increased most quickly with the Kuramoto coupling strength parameter. The synthetic data used in this paper are of course based on simplified mathematical models. They are useful, though, because they give the opportunity to change parameters of the data-generation process and then observe corresponding trends in the values of synchrony metrics; however, there are some limitations worth noting. In particular, some of the interesting and complex properties of real-life signals may not be present in the data. One property that might be worthwhile to investigate in future work is quasi-periodicity, which is not reflected in our Kuramoto model since it is composed of oscillators following simple sinusoidal patterns. Methods for generating synthetic quasi-periodic data would make it possible to compare how different metrics perform under a wider range of conditions. 
Moreover, it is likely that a wide variety of nonlinear relationships between variables in a system are possible, and these can be modeled in synthetic data. Future work could examine whether and how various nonlinear relationships impact synchrony metrics in different ways. Overall, on data from real-life experiments, our metrics showed limited increases over a surrogate baseline when considering windows of data, although a significant effect was observed for four out of five metrics on the full ELEA recordings. The fact that the increases were frequently quite small or not statistically significant might indicate that it is hard to detect group-level synchrony in team tasks, that it may be less common than the more frequently investigated phenomenon of dyadic synchrony in teams, or that this form of surrogation is quite conservative compared to simple randomization. Applying these methods also involves practical decisions, such as choosing the correct pre-processing techniques, analyzing multivariate systems with the same number of variables, and determining which segments of the data to analyze. As the field shifts from primarily bivariate to multivariate coordination dynamics, careful thinking, experimentation, and systematic comparison and validation of the multiple possible methods (including those presented in this paper and others, such as that of Zhang et al.) will be needed. In conclusion, multiSyncPy provides a range of synchrony metrics that can be computed easily through simple function calls in Python. These metrics come from a range of theoretical backgrounds, and the context may make some metrics more appropriate or informative than others. All of the metrics apply to multivariate time series, and so can be used to investigate system-level constructs of synchrony. System-level synchrony is under-researched, even in contexts where synchrony has been studied previously, such as small group interactions. 
Our methods are also appropriate in situations where there are numerous variables and it would be difficult to make sense of a large number of pairwise synchronizations. In other words, we aim to contribute tools that may advance the field's understanding of how coordination functions across scales (Kelso). For the"}
+{"text": "Plectin, a highly versatile cytolinker protein, is crucial for myofiber integrity and function. Accordingly, mutations in the human gene (PLEC) cause several rare diseases, denoted as plectinopathies, most of them associated with progressive muscle weakness. Of several plectin isoforms expressed in skeletal muscle and the heart, P1d is the only isoform expressed exclusively in these tissues. Using high-resolution stimulated emission depletion (STED) microscopy, here we show that plectin is located within the gaps between individual \u03b1-actinin-positive Z-disks, recruiting and bridging them to desmin intermediate filaments (IFs). Loss of plectin in myofibril bundles led to a complete loss of desmin IFs. Loss of the Z-disk-associated plectin isoform P1d led to disorganization of muscle fibers and slower relaxation of myofibrils upon mechanical strain, in line with an observed inhomogeneity of muscle ultrastructure. In addition to binding to \u03b1-actinin and thereby providing structural support, P1d forms a scaffolding platform for the chaperone-assisted selective autophagy (CASA) machinery by directly interacting with HSC70 and synpo2. In isoform-specific knockout (P1d-KO) mouse muscle and in mechanically stretched plectin-deficient myoblasts, we found high levels of undigested filamin C, a bona fide substrate of CASA. Similarly, subjecting P1d-KO mice to forced swim tests led to accumulation of filamin C aggregates in myofibers, highlighting a specific role of P1d in tension-induced proteolysis activated upon high loads of physical exercise and muscle contraction. Striated muscles are elaborately organized machines designed for contraction, and their highly textured structure is a prerequisite for directed force development. 
Sarcomeres, the smallest functional units of muscle contraction, comprise precisely organized filament systems including thin (actin) and thick (myosin) filaments, titin, and nebulin. Mutations in the plectin gene (PLEC) on chromosome 8q24 lead to epidermolysis bullosa simplex with muscular dystrophy, an autosomal recessive skin blistering disorder associated with progressive muscle weakness. For expression of recombinant plectin fragments in bacteria, cDNAs encoding P1d-24(2\u03b1)- or P1f-24-specific sequences were used; the fusion proteins were expressed in the E. coli strain BL21 (DE3)RIL. GST fusion proteins were purified on glutathione sepharose 4B beads, as described in the manufacturer\u2019s instructions. His-tagged fusion proteins were purified using nickel affinity chromatography on His\u2022Bind Resin according to the manufacturer\u2019s instructions. For pull-down experiments, GST fusion proteins were immobilized on glutathione sepharose 4B beads for 30 min at 4 \u00b0C, washed with PBS to remove unbound protein, and incubated with 10\u201320 \u00b5g His-tagged fusion proteins in PBS at 4 \u00b0C for 1\u20134 h with gentle agitation. After centrifugation (3 min, 4 \u00b0C), beads were washed three times with ice-cold wash buffer before being processed for SDS-PAGE and immunoblotting. For cell lysates, samples were homogenized by pressing them through a 27-gauge needle and centrifuged for 10 min at 10,000\u00d7 g. Tissues were homogenized using a Dounce tissue grinder; the homogenate was centrifuged for 10 min at 10,000\u00d7 g, and the supernatant was mixed with 6\u00d7 SDS sample buffer. Cells were directly scraped off in 6\u00d7 SDS sample buffer and DNA was sheared by pressing the samples through a 27-gauge needle. Samples were boiled at 95 \u00b0C for 5 min before being subjected to SDS polyacrylamide gel electrophoresis under standard conditions. Proteins were transferred to nitrocellulose membranes using a Mini PROTEAN Tetra Cell blot apparatus. 
Membranes were scanned, and the amounts of protein contained in individual bands were quantified using ImageJ software. Cell and tissue lysates were prepared from dissected muscle as described above. Immortalized skeletal myoblasts were isolated from Plec\u2212/\u2212 and wild-type littermates, both crossed into a p53\u2212/\u2212 background, as previously described, and cultured. Stretch experiments were carried out on flexible polydimethylsiloxane substrates coated with laminin-1 (Sigma-Aldrich) that were molded into the shape of a cell culture well with a 2.5 cm\u00b2 internal surface, as previously described. Data analyses and statistical evaluations were performed using Excel 2010 and SPSS Statistics v.19. Data are given as mean \u00b1 SEM. The number of experiments (N) and data points (n) is indicated in the figure legends. Comparisons between the values of two groups were made using an unpaired, two-tailed Student\u2019s t-test. Comparisons among the values of multiple groups were performed using one-way analysis of variance (ANOVA). The significance between values of individual groups and controls was subsequently determined using Tukey\u2019s post-hoc test. The p-values are * < 0.05, ** < 0.01, and *** < 0.001; a p-value < 0.05 was considered statistically significant. To explore the targeting mechanism and specific functions of P1d, we screened for P1d-specific binding partners by yeast two-hybrid screening with a mouse skeletal muscle cDNA library and plectin isoform-specific baits. To identify binding partners interacting with the isoform-specific N-terminal domain of P1d, two different bait proteins were used. One of them, encoded by exons 1d\u20138, comprised the N-terminal, five amino acid residue-long, isoform P1d-specific sequence and the ensuing actin-binding domain (ABD), while the other (encoded by exons 1d\u201324) included most of plectin\u2019s plakin domain A. 
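The statistical workflow described above (unpaired two-tailed t-test for two groups; one-way ANOVA followed by Tukey's post-hoc test for multiple groups) was run in Excel and SPSS. For illustration only, the same tests can be sketched in Python with SciPy on simulated data; the group names and all values below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical band intensities for three groups (illustrative values):
wt = rng.normal(1.0, 0.15, size=10)       # wild-type
p1d_ko = rng.normal(1.6, 0.15, size=10)   # P1d-KO
plec_ko = rng.normal(1.7, 0.15, size=10)  # Plec-/-

# Two groups: unpaired, two-tailed Student's t-test.
t_stat, t_p = stats.ttest_ind(wt, p1d_ko)

# Multiple groups: one-way ANOVA ...
f_stat, anova_p = stats.f_oneway(wt, p1d_ko, plec_ko)

# ... followed by Tukey's post-hoc test (available in recent SciPy);
# pvalue is a pairwise matrix, here wt vs. P1d-KO.
tukey = stats.tukey_hsd(wt, p1d_ko, plec_ko)
p_wt_vs_p1d = tukey.pvalue[0, 1]
```

Running the ANOVA first and Tukey's test only afterwards mirrors the procedure stated in the text, where the omnibus test precedes pairwise comparisons against controls.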
Corresponding results were obtained with a myoblast cell line and from mice lacking plectin isoform P1b (P1b-KO). It has been shown that the CASA machinery senses mechanical stress of the muscle fiber and leads to degradation of large cytoskeletal components damaged during contraction [34,35,36]. Plec\u2212/\u2212 myoblasts can be differentiated ex vivo into multinucleated myotubes which closely mirror the pathology of plectin-deficient muscle fibers, including desmin-positive protein aggregates and Z-disk aberrations, thereby representing a reliable tool for investigations at the cellular level. Plec\u2212/\u2212 myotubes were more susceptible to mechanical strain, as shown by their increased detachment from a flexible membrane compared to Plec+/+ cells when exposed to cyclic stretch using a cell stretcher. CASA components were compared between Plec+/+ and Plec\u2212/\u2212 myotubes using antibodies to BAG3, SQSTM1, and filamin C. Imaging was performed on single myofibrils rather than bundles of multiple myofibrils, whereby the encasing of individual \u03b1-actinin-positive Z-disks by plectin molecules became clearly evident by image deconvolution of 3D oblique projections. Finally, P1d-KO myofibril bundles showed normal Ca2+-dependent contraction abilities and unaltered biphasic force relaxation compared to wild-type samples. However, when we tested the vulnerability to mechanical stress by eccentric contractions, despite displaying a force reduction similar to wild-type myofibrils, myofibril bundles from P1d-KO mice revealed a significantly prolonged duration of the slow phase and a reduced rate constant of the fast phase after the eccentric contraction. This finding can be interpreted in the context of biomechanical studies that indicate organized inter-sarcomere dynamics as the cause of rapid relaxation. Plectin\u2019s versatility is largely based on the differential cellular targeting of its individual isoforms. 
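Rate constants like the one reported for the fast relaxation phase are typically obtained by curve fitting. The following is a minimal sketch on simulated data; the single-exponential model, parameter names, and numerical values are assumptions for illustration, not the study's actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def fast_phase(t, f0, k):
    """Assumed model for the fast relaxation phase: f0 * exp(-k * t)."""
    return f0 * np.exp(-k * t)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 0.2, 200)  # seconds after the end of activation
true_k = 40.0                   # assumed rate constant, 1/s

# Simulated noisy force trace following the assumed model.
force = fast_phase(t, 1.0, true_k) + 0.01 * rng.standard_normal(t.size)

# Least-squares fit recovers the amplitude and the rate constant;
# a smaller fitted k would indicate slower relaxation, as seen in P1d-KO.
popt, pcov = curve_fit(fast_phase, t, force, p0=(1.0, 10.0))
fitted_k = popt[1]
```

In practice the slow phase would be modeled separately (e.g., its duration measured before the exponential fit is applied), but the fitted rate constant above is the quantity that the text reports as reduced in P1d-KO bundles.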
As we show here, microscopic and ultrastructural analyses revealed that the loss of Z-disk-associated P1d leads to highly disorganized myofibrillar structures and phase displacement of sarcomeric structures, pointing towards increased inhomogeneity of the muscle structure. Detachment and loss of regularly anchored desmin IFs, as observed in teased muscle fibers from P1d-KO mice, led to the formation of desmin aggregates in interior areas of the fiber. Moreover, peripheral and perinuclear desmin-positive signals indicated that other plectin isoforms expressed in muscle fibers, such as P1f and P1, were still intact. On the other hand, in plectin-stained myofibril bundles from P1d-KO mice, no remaining plectin signals were observed, reassuring us that P1d is the sole Z-disk-associated plectin isoform. These findings are in line with previous observations [13,45,46]. Previous conventional confocal immunofluorescence microscopy studies on myofibers revealed that of the four major plectin isoforms expressed in mature cells, i.e., P1, P1b, P1d, and P1f, only P1d co-localized with desmin at the level of the Z-disks, while P1f was localized at the periphery of cells along the sarcolemma, P1 in perinuclear areas, and P1b in association with mitochondria. In previous studies, we have identified isoform-specific binding partners that show co-localization with isoforms at their targeting sites, in the cases of P1b, P1f, and P1 [14,29]. Whether only the five amino acid residue-long isoform-specific protein sequence preceding P1d\u2019s ABD suffices for specific HSC70-binding, or whether the first exon-encoded protein domain acts in combination with the succeeding ABD, remains to be determined. Targeting signals or site-specific interaction domains residing in isoform-specific head domains have been described for a number of other isoforms. Yeast two-hybrid screening revealed that P1d interacts with a wide spectrum of structural and signaling proteins. 
While its interaction with \u03b1-actinin appears to be crucial for the stabilization and lateral arrangement of Z-disks, P1d\u2019s ability to bind to HSC70 and synpo2 suggests that its role goes beyond that of mere mechanical support. Since HSC70 was identified in most of the positive transformants showing interaction with the short N-terminal region of P1d, one can assume that it is directly involved in targeting of plectin to the Z-disk. In contrast, \u03b1-actinin and synpo2 displayed binding to the longer fragment of P1d, suggesting that the binding site for these proteins is located in the region downstream of plectin\u2019s ABD. This would be consistent with the previous observation that a fragment of the plakin domain binds to \u03b1-actinin (see above). Although \u03b1-actinin was clearly identified as a P1d interaction partner using the yeast two-hybrid screening assay, additional in vitro binding studies showed that it can also bind to P1f. These observations imply that both plectin isoforms contain a binding site for \u03b1-actinin. While P1d interacts with \u03b1-actinin at the Z-disks and in this way stabilizes the contractile apparatus, P1f might be connected with cytoplasmic \u03b1-actinin residing underneath the sarcolemma, where it supports membrane-associated protein complexes [12]. Our data further indicate that (i) in Plec\u2212/\u2212 myotubes the efficient degradation of filamin C via CASA requires the presence of P1d, and (ii) P1d is crucial for the proper function of the protein degradation machinery that plays an important role in muscle maintenance, especially during intensive contraction. Filamin C is a mechanosensing protein which maintains the structural integrity of the myofibers and preserves the contractile apparatus during muscle contraction; misfolded filamin C is efficiently degraded by CASA and replaced by newly synthesized protein. 
Our analysis shows that the proper alignment of Z-disks seems to be especially important for the efficient removal of unfolded filamin C, as highly upregulated filamin C levels were observed in P1d-KO and MCK-Cre/cKO myofibers. P1d\u2019s interaction with synpo2 suggests that P1d, functioning as a Z-disk-associated scaffolding platform, links the CASA machinery and the autophagic degradation pathway, while P1d-deficiency causes a misalignment of Z-disks and a shift and misplacement of CASA components. Measuring the mechanical strain exerted on cultured myotubes on a stretching device convincingly demonstrated that CASA is a tension-induced protein degradation machinery. As the cell stretcher used for these experiments could indeed mimic naturally occurring skeletal muscle contraction, it could be used as a convenient tool to analyze multiscale mechanical responses governed by cytoskeletal prestress in ex vivo cell culture systems. Due to the disrupted HSC70-synpo2-Z-disk linkage and the ensuing dysfunction of CASA in plectin (P1d) deficiency, filamin C levels drastically increased upon mechanical strain in stretched myotubes. In conclusion, our study shows that the Z-disk-associated plectin isoform P1d is indispensable for proper skeletal muscle function and maintenance, especially upon elevated tension and contractile activity. By interacting with and encircling individual Z-disks, P1d bridges the contractile apparatus to the extrasarcomeric desmin IF cytoskeleton, thereby achieving the precise lateral alignment of sarcomeres required for the homogeneous spatial arrangement of myofibrils within myofibers. Aside from its structural role, P1d serves as a scaffolding platform for CASA, the tension-induced proteolytic machinery which becomes activated upon high loads of mechanical exercise and muscle contraction. 
Loss of P1d leads to the disorganization of myofibrils, combined with distortion of CASA and filamin C homeostasis, eventually manifesting as muscular lesions and protein aggregate formation. Our study offers a possible mechanistic explanation for the symptomatic muscle weakness associated with the majority of plectinopathies."
\ No newline at end of file