Columns:
- before_sent: string, lengths 0 to 50.4k
- before_sent_with_intent: string, lengths 10 to 50.5k
- after_sent: string, lengths 0 to 50.6k
- labels: string, 5 classes
- confidence: string, lengths 5 to 10
- doc_id: string, lengths 2 to 10
- revision_depth: string, 5 classes
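Each record in the preview below lists these seven fields on consecutive lines, in the order given above. As a rough sketch (not part of the original dump) of how such a preview could be regrouped into records, the snippet below groups lines seven at a time; the helper name parse_records, the FIELDS list, and the truncated sample strings are illustrative placeholders, and the conversion of confidence to float is a convenience on top of the string type declared in the schema.

```python
# Sketch: regroup a flat field-per-line preview into records, assuming the
# fields appear in the same order as the schema listed above.
from typing import Iterator

FIELDS = [
    "before_sent",
    "before_sent_with_intent",
    "after_sent",
    "labels",
    "confidence",
    "doc_id",
    "revision_depth",
]

def parse_records(lines: list[str]) -> Iterator[dict]:
    """Group consecutive lines into records following the column order above."""
    for start in range(0, len(lines) - len(FIELDS) + 1, len(FIELDS)):
        chunk = lines[start:start + len(FIELDS)]
        record = dict(zip(FIELDS, (value.strip() for value in chunk)))
        # The schema stores confidence as a string; convert it for convenience.
        record["confidence"] = float(record["confidence"])
        yield record

# Example with placeholder strings shaped like the first record of the preview.
sample = [
    "We obtain four such theorems by considering a financial market ...",
    "<meaning-changed> We obtain four such theorems ...",
    "We obtain two such theorems by considering a financial market ...",
    "meaning-changed",
    "0.9990441",
    "0705.0053",
    "1",
]
for rec in parse_records(sample):
    print(rec["labels"], rec["doc_id"], rec["confidence"])
```

Note that a trailing incomplete group, such as the final record of this preview that stops after before_sent_with_intent, is simply skipped by this grouping rather than parsed as a partial record.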
We obtain four such theorems by considering a financial market both with and without a riskless asset and by considering both constant and random consumption.
<meaning-changed> We obtain four such theorems by considering a financial market both with and without a riskless asset and by considering both constant and random consumption.
We obtain two such theorems by considering a financial market both with and without a riskless asset and by considering both constant and random consumption.
meaning-changed
0.9990441
0705.0053
1
We obtain four such theorems by considering a financial market both with and without a riskless asset and by considering both constant and random consumption.
<meaning-changed> We obtain four such theorems by considering a financial market both with and without a riskless asset and by considering both constant and random consumption.
We obtain four such theorems by considering a financial market both with and without a riskless asset for random consumption. The striking result is that we obtain two-fund theorems despite the additional source of randomness from consumption.
meaning-changed
0.9993998
0705.0053
1
The efficiencies of the mechanisms and the nature of the induced, time-dependent flow fields are found to differ widely among swimmers .
<clarity> The efficiencies of the mechanisms and the nature of the induced, time-dependent flow fields are found to differ widely among swimmers .
The swimming efficiency and the nature of the induced, time-dependent flow fields are found to differ widely among swimmers .
clarity
0.9982516
0705.1606
1
The efficiencies of the mechanisms and the nature of the induced, time-dependent flow fields are found to differ widely among swimmers .
<clarity> The efficiencies of the mechanisms and the nature of the induced, time-dependent flow fields are found to differ widely among swimmers .
The efficiencies of the mechanisms and the nature of the induced, time-dependent flow fields are found to differ widely among body designs and propulsion mechanisms .
clarity
0.7353138
0705.1606
1
We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
<clarity> We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
We employ perturbation analysis technique to study multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
clarity
0.99912363
0705.1949
1
We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
<meaning-changed> We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
We employ perturbation analysis technique to study trading strategies for multi-asset portfolio optimisation with transaction cost. We allow for correlations in risky assets and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
meaning-changed
0.99944264
0705.1949
1
We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
<meaning-changed> We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for wealth maximization under arbitrary utility functions .
We employ perturbation analysis technique to study trading strategies for multi-asset portfolio and obtain optimal trading methods for general utility functions. Our analytical results are supported by numerical simulations in the context of the Long Term Growth Model .
meaning-changed
0.9995639
0705.1949
1
Simple sufficient conditions are given for the problem to be well-posed, in the sense that optimal wealths and marginal utility-based prices are continuous functionals of the inputs .
<fluency> Simple sufficient conditions are given for the problem to be well-posed, in the sense that optimal wealths and marginal utility-based prices are continuous functionals of the inputs .
Simple sufficient conditions are given for the problem to be well-posed, in the sense the optimal wealth and the marginal utility-based prices are continuous functionals of the inputs .
fluency
0.99925226
0706.0482
1
Simple sufficient conditions are given for the problem to be well-posed, in the sense that optimal wealths and marginal utility-based prices are continuous functionals of the inputs .
<meaning-changed> Simple sufficient conditions are given for the problem to be well-posed, in the sense that optimal wealths and marginal utility-based prices are continuous functionals of the inputs .
Simple sufficient conditions are given for the problem to be well-posed, in the sense that optimal wealths and marginal utility-based prices are continuous functionals of preferences and probabilistic views .
meaning-changed
0.99913776
0706.0482
1
The utilization of multiple post translational modifications (PTMs) in regulating a biological response is ubiquitous in cell signaling. If each PTM contributes an additional, equivalent binding site, then one consequence of an increase in the number of PTMs may be to increase the probability that, upon disassociation, a ligand immediately rebinds to its receptor. How such effects may influence cell signaling systems has been less studied. Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses. Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predicted and studied. Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity. It is our hope that these calculations motivate experiments that can further unravel the many functional consequences of multi-site phosphorylation.
<coherence> The utilization of multiple post translational modifications (PTMs) in regulating a biological response is ubiquitous in cell signaling. If each PTM contributes an additional, equivalent binding site, then one consequence of an increase in the number of PTMs may be to increase the probability that, upon disassociation, a ligand immediately rebinds to its receptor. How such effects may influence cell signaling systems has been less studied. Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses. Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predicted and studied. Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity. It is our hope that these calculations motivate experiments that can further unravel the many functional consequences of multi-site phosphorylation.
the paper is withdrawn for the moment
coherence
0.5547608
0706.2383
1
the paper is withdrawn for the moment
<meaning-changed> the paper is withdrawn for the moment
The utilization of multiple phosphorylation sites in regulating a biological response is ubiquitous in cell signaling. If each site contributes an additional, equivalent binding site, then one consequence of an increase in the number of phosphorylations may be to increase the probability that, upon disassociation, a ligand immediately rebinds to its receptor. How such effects may influence cell signaling systems has been less studied. Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses. Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predicted and studied. Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity. It is our hope that these calculations motivate experiments that can further unravel the many functional consequences of multi-site phosphorylation.
meaning-changed
0.93100464
0706.2383
2
Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
<fluency> Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
Here, a self-consistent integral equation formalism for ligand rebinding , in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
fluency
0.999308
0706.2383
3
Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
<fluency> Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations are employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
Here, a self-consistent integral equation formalism for ligand rebinding in conjunction with Monte Carlo simulations , is employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses.
fluency
0.9989448
0706.2383
3
Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predictedand studied . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity.
<clarity> Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predictedand studied . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity.
Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects are predicted . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity.
clarity
0.99727255
0706.2383
3
Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predictedand studied . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity. It is our hope that these calculations motivate experiments that can further unravel the many functional consequences of multi-site phosphorylation .
<clarity> Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predictedand studied . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly but other mechanistic ingredients are required to achieve high degrees of biochemical cooperativity. It is our hope that these calculations motivate experiments that can further unravel the many functional consequences of multi-site phosphorylation .
Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects and their relation to systems-level properties are predictedand studied . Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly . The model also predicts that ligand rebinding in itself appears insufficient to generative a highly cooperative biological response .
clarity
0.9707431
0706.2383
3
The morphisms of Probabilistic Sequential Networks are defined using two algebraic conditions , whose imply that the distribution of probabilities in the systems are close .
<clarity> The morphisms of Probabilistic Sequential Networks are defined using two algebraic conditions , whose imply that the distribution of probabilities in the systems are close .
The morphisms of Probabilistic Sequential Networks are defined using two algebraic conditions .
clarity
0.99887305
0707.0026
1
It is proved here that two homomorphic Probabilistic Sequential Networks have the same equilibrium or steady state probabilities .
<meaning-changed> It is proved here that two homomorphic Probabilistic Sequential Networks have the same equilibrium or steady state probabilities .
It is proved here that two homomorphic Probabilistic Sequential Networks have the same equilibrium or steady state probabilities if the morphism is either an epimorphism or a monomorphism .
meaning-changed
0.9993247
0707.0026
1
The twinness of a pair of nodes is the number of connected, labelled subgraphs of size n in which the two nodes possess identical neighbours.
<fluency> The twinness of a pair of nodes is the number of connected, labelled subgraphs of size n in which the two nodes possess identical neighbours.
The twinness of a pair of nodes is the number of connected, labeled subgraphs of size n in which the two nodes possess identical neighbours.
fluency
0.992855
0707.2076
1
These include an Escherichia coli PIN and three Saccharomyces cerevisiae PINs - each obtained using state-of-the-art high throughput methods.
<fluency> These include an Escherichia coli PIN and three Saccharomyces cerevisiae PINs - each obtained using state-of-the-art high throughput methods.
These include an Escherichia coli PIN and three Saccharomyces cerevisiae PINs -- each obtained using state-of-the-art high throughput methods.
fluency
0.99833775
0707.2076
1
For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. Coli from the eukaryote S. Cerevisiae .
<fluency> For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. Coli from the eukaryote S. Cerevisiae .
For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. coli from the eukaryote S. Cerevisiae .
fluency
0.99912304
0707.2076
1
For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. Coli from the eukaryote S. Cerevisiae .
<fluency> For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. Coli from the eukaryote S. Cerevisiae .
For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. Coli from the eukaryote S. cerevisiae .
fluency
0.99922085
0707.2076
1
Interaction similarity is expected due to gene duplication, and whole genome duplication paralogues in S. Cerevisiae have been reported to co-cluster into the same complexes.
<fluency> Interaction similarity is expected due to gene duplication, and whole genome duplication paralogues in S. Cerevisiae have been reported to co-cluster into the same complexes.
Interaction similarity is expected due to gene duplication, and whole genome duplication paralogues in S. cerevisiae have been reported to co-cluster into the same complexes.
fluency
0.99928004
0707.2076
1
Indeed, we find that these paralogues are over-represented as twins compared to pairs chosen at random.
<clarity> Indeed, we find that these paralogues are over-represented as twins compared to pairs chosen at random.
Indeed, we find that these paralogous proteins are over-represented as twins compared to pairs chosen at random.
clarity
0.99378306
0707.2076
1
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
<fluency> We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
fluency
0.9255573
0707.3621
1
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
<meaning-changed> We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } h^2/( 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
meaning-changed
0.99857473
0707.3621
1
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
<meaning-changed> We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 \ln{|h| for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
meaning-changed
0.99833065
0707.3621
1
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
<meaning-changed> We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than |h|^{\alpha .
We find a super-linear increase of K_c with the (average) absolute threshold |h|, which approaches K_c(|h|) \sim |h|^{\alpha with \alpha \approx } 2 for large |h|, and show that this asymptotic scaling is universal for RTN with Poissonian distributed connectivity and threshold distributions with a variance that grows slower than h^2 .
meaning-changed
0.9981394
0707.3621
1
Interestingly, we find that inhomogeneous distribution of thresholds leads to increased propagation of perturbations for sparsely connected networks, while for densely connected networks damage is reduced . Further, damage propagation in RTN with in-degree distributions that exhibit a scale-free tail k_{in^{\gamma} is studied;
<meaning-changed> Interestingly, we find that inhomogeneous distribution of thresholds leads to increased propagation of perturbations for sparsely connected networks, while for densely connected networks damage is reduced . Further, damage propagation in RTN with in-degree distributions that exhibit a scale-free tail k_{in^{\gamma} is studied;
Interestingly, we find that inhomogeneous distribution of thresholds leads to increased propagation of perturbations for sparsely connected networks, while for densely connected networks damage is reduced ^{\gamma} is studied;
meaning-changed
0.84911907
0707.3621
1
we find that a decrease of \gamma can lead to a transition from supercritical (chaotic) to subcritical (ordered) dynamics} . Last, local correlations between node thresholds and in-degree are introduced.
<meaning-changed> we find that a decrease of \gamma can lead to a transition from supercritical (chaotic) to subcritical (ordered) dynamics} . Last, local correlations between node thresholds and in-degree are introduced.
we find that a decrease of \gamma can lead to a transition from supercritical (chaotic) to subcritical (ordered) dynamics} ; the cross-over point yields a novel, characteristic connectivity K_d, that has no counterpart in Boolean networks . Last, local correlations between node thresholds and in-degree are introduced.
meaning-changed
0.9995436
0707.3621
1
Interestingly, in this case the annealed approximation fails to predict the dynamical behavior for sparse connectivities , suggesting that even weak topological correlations can strongly limit its applicability for finite N .
<coherence> Interestingly, in this case the annealed approximation fails to predict the dynamical behavior for sparse connectivities , suggesting that even weak topological correlations can strongly limit its applicability for finite N .
, suggesting that even weak topological correlations can strongly limit its applicability for finite N .
coherence
0.99747914
0707.3621
1
Interestingly, in this case the annealed approximation fails to predict the dynamical behavior for sparse connectivities , suggesting that even weak topological correlations can strongly limit its applicability for finite N .
<meaning-changed> Interestingly, in this case the annealed approximation fails to predict the dynamical behavior for sparse connectivities , suggesting that even weak topological correlations can strongly limit its applicability for finite N .
Interestingly, in this case the annealed approximation fails to predict the dynamical behavior for sparse connectivities It is shown that the naive mean-field assumption typical for the annealed approximation leads to false predictions in this case, since correlations between thresholds and out-degree that emerge as a side-effect strongly modify damage propagation behavior .
meaning-changed
0.99940455
0707.3621
1
The Markowitz mean-variance optimizing framework has served as the basis for modern portfolio theory for more than 50 years.
<meaning-changed> The Markowitz mean-variance optimizing framework has served as the basis for modern portfolio theory for more than 50 years.
We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework has served as the basis for modern portfolio theory for more than 50 years.
meaning-changed
0.9995327
0708.0046
1
The Markowitz mean-variance optimizing framework has served as the basis for modern portfolio theory for more than 50 years.
<coherence> The Markowitz mean-variance optimizing framework has served as the basis for modern portfolio theory for more than 50 years.
The Markowitz mean-variance optimizing framework , which has served as the basis for modern portfolio theory for more than 50 years.
coherence
0.99751353
0708.0046
1
However, efforts to translate this theoretical foundation into a viable portfolio construction algorithm have been plagued by technical difficulties stemming from the instability of the original optimization problemwith respect to the available data. In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
<clarity> However, efforts to translate this theoretical foundation into a viable portfolio construction algorithm have been plagued by technical difficulties stemming from the instability of the original optimization problemwith respect to the available data. In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
To stabilize the problem, we propose to add to the Markowitz objective function through the addition of an \ell_1 penalty .
clarity
0.9987999
0708.0046
1
In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
<meaning-changed> In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
In this paper we address these issues of estimation error by regularizing the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights ( \ell_1 penalty .
meaning-changed
0.9995448
0708.0046
1
In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
<fluency> In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty .
In this paper we address these issues of estimation error by regularizing the Markowitz objective function through the addition of an \ell_1 penalty ) .
fluency
0.99918514
0708.0046
1
This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
<meaning-changed> This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
This penalty stabilizes the optimization problem, automatically encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
meaning-changed
0.9949207
0708.0046
1
This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
<meaning-changed> This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates an effective treatment of transaction costs in a transparent way .
meaning-changed
0.7013421
0708.0046
1
This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
<clarity> This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs in a transparent way .
This penalty stabilizes the optimization problem, encourages sparse portfolios, and facilitates treatment of transaction costs .
clarity
0.998667
0708.0046
1
We implement this methodology using the Fama and French 48 industry portfolios as our securities . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the na\"{i .
<coherence> We implement this methodology using the Fama and French 48 industry portfolios as our securities . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the na\"{i .
We implement our methodology using as our securities . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the na\"{i .
coherence
0.9015827
0708.0046
1
We implement this methodology using the Fama and French 48 industry portfolios as our securities . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the na\"{i .
<meaning-changed> We implement this methodology using the Fama and French 48 industry portfolios as our securities . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the na\"{i .
We implement this methodology using the Fama and French 48 industry portfolios as our securities two sets of portfolios constructed by Fama and French: the 48 industry portfolios and 100 portfolios formed on size and book-to-market .
meaning-changed
0.9993529
0708.0046
1
In addition to their excellent performance, these portfolios have only a small number of active positions, a highly desirable attribute for real life applications. We conclude by discussing a collection of portfolio construction problems which can be naturally translated into optimizations involving \ell_1 penalties and which can thus be tackled by algorithms similar to those discussed here .
<clarity> In addition to their excellent performance, these portfolios have only a small number of active positions, a highly desirable attribute for real life applications. We conclude by discussing a collection of portfolio construction problems which can be naturally translated into optimizations involving \ell_1 penalties and which can thus be tackled by algorithms similar to those discussed here .
In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible .
clarity
0.5249993
0708.0046
1
We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework, which has served as the basis for modern portfolio theory for more than 50 years. To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
<meaning-changed> We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework, which has served as the basis for modern portfolio theory for more than 50 years. To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
meaning-changed
0.999161
0708.0046
2
To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
<clarity> To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
To stabilize the problem, we propose to add to the objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
clarity
0.7947925
0708.0046
2
To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
<clarity> To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
To stabilize the problem, we propose to add to the Markowitz objective function a penalty proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
clarity
0.99356353
0708.0046
2
To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
<coherence> To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights (\ell_1 penalty) .
To stabilize the problem, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights .
coherence
0.93989676
0708.0046
2
This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
<clarity> This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
This penalty regularizes (stabilizes) the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
clarity
0.98686767
0708.0046
2
This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
<clarity> This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
This penalty stabilizes the optimization problem, encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
clarity
0.99856865
0708.0046
2
This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
<clarity> This penalty stabilizes the optimization problem, automatically encourages sparse portfolios , and facilitates an effective treatment of transaction costs.
This penalty stabilizes the optimization problem, automatically encourages sparse portfolios (i.e. portfolios with only few active positions), and allows to account for transaction costs.
clarity
0.9077289
0708.0046
2
We implement our methodology using as our securities two sets of portfolios constructed by Fama and French : the 48 industry portfolios and 100 portfolios formed on size and book-to-market.
<meaning-changed> We implement our methodology using as our securities two sets of portfolios constructed by Fama and French : the 48 industry portfolios and 100 portfolios formed on size and book-to-market.
Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French : the 48 industry portfolios and 100 portfolios formed on size and book-to-market.
meaning-changed
0.9994443
0708.0046
2
We implement our methodology using as our securities two sets of portfolios constructed by Fama and French : the 48 industry portfolios and 100 portfolios formed on size and book-to-market. In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible .
<meaning-changed> We implement our methodology using as our securities two sets of portfolios constructed by Fama and French : the 48 industry portfolios and 100 portfolios formed on size and book-to-market. In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible .
We implement our methodology using as our securities two sets of portfolios constructed by Fama and French . Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naive evenly-weighted portfolio which constitutes, as shown in recent literature, a very tough benchmark .
meaning-changed
0.9981558
0708.0046
2
Biochemical reaction systems with a low to moderate number of molecules are typically modeled as discrete jump Markov processes.
<clarity> Biochemical reaction systems with a low to moderate number of molecules are typically modeled as discrete jump Markov processes.
Chemical reaction systems with a low to moderate number of molecules are typically modeled as discrete jump Markov processes.
clarity
0.9977615
0708.0370
1
These systems are oftentimes simulated using the Gillespie Algorithm or the Next Reaction Method , which are exact simulation methods .
<meaning-changed> These systems are oftentimes simulated using the Gillespie Algorithm or the Next Reaction Method , which are exact simulation methods .
These systems are oftentimes simulated with methods that produce statistically exact sample paths such as the Gillespie Algorithm or the Next Reaction Method , which are exact simulation methods .
meaning-changed
0.9991736
0708.0370
1
These systems are oftentimes simulated using the Gillespie Algorithm or the Next Reaction Method , which are exact simulation methods .
<coherence> These systems are oftentimes simulated using the Gillespie Algorithm or the Next Reaction Method , which are exact simulation methods .
These systems are oftentimes simulated using the Gillespie Algorithm or the Next Reaction Method .
coherence
0.82090074
0708.0370
1
In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function.
<clarity> In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function.
In this paper we make explicit use of the fact that the initiation times of the reactions can be represented as the firing times of an independent, unit Poisson process with integrated propensity function.
clarity
0.9978376
0708.0370
1
In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function.
<fluency> In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function.
In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of independent, unit Poisson process with integrated propensity function.
fluency
0.99911433
0708.0370
1
In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function. We use this representation to develop a modified version of the Next Reaction Method .
<clarity> In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit Poisson process with integrated propensity function. We use this representation to develop a modified version of the Next Reaction Method .
In this paper we make explicit use of the fact that the initiation times of each reaction are given by the firing times of an independent, unit rate Poisson processes with internal times given by integrated propensity functions. Using this representation we derive a modified Next Reaction Method .
clarity
0.9846532
0708.0370
1
We use this representation to develop a modified version of the Next Reaction Method . We then demonstrate how this modified Next Reaction Method is the appropriate method to use to simulate systems in which the rate constants are functions of time (that is, systems that are time dependent Markov Processes). Finally, we extend our modified Next Reaction Method to systems with delays and compare its efficiency with those of the recent algorithms designed for systems with delays.
<clarity> We use this representation to develop a modified version of the Next Reaction Method . We then demonstrate how this modified Next Reaction Method is the appropriate method to use to simulate systems in which the rate constants are functions of time (that is, systems that are time dependent Markov Processes). Finally, we extend our modified Next Reaction Method to systems with delays and compare its efficiency with those of the recent algorithms designed for systems with delays.
We use this representation to develop a modified version of the Next Reaction Method and, in a way that achieves efficiency over existing approaches for exact simulation, extend it to systems with delays and compare its efficiency with those of the recent algorithms designed for systems with delays.
clarity
0.6936788
0708.0370
1
Finally, we extend our modified Next Reaction Method to systems with delays and compare its efficiency with those of the recent algorithms designed for systems with delays.
<meaning-changed> Finally, we extend our modified Next Reaction Method to systems with delays and compare its efficiency with those of the recent algorithms designed for systems with delays.
Finally, we extend our modified Next Reaction Method to systems with time dependent propensities as well as to systems with delays.
meaning-changed
0.99912304
0708.0370
1
By explicitly representing well-stirred chemical reaction systems with independent, unit Poisson processes we develop a new adaptive tau-leaping procedure.
<clarity> By explicitly representing well-stirred chemical reaction systems with independent, unit Poisson processes we develop a new adaptive tau-leaping procedure.
By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit Poisson processes we develop a new adaptive tau-leaping procedure.
clarity
0.9927834
0708.0377
1
By explicitly representing well-stirred chemical reaction systems with independent, unit Poisson processes we develop a new adaptive tau-leaping procedure.
<clarity> By explicitly representing well-stirred chemical reaction systems with independent, unit Poisson processes we develop a new adaptive tau-leaping procedure.
By explicitly representing well-stirred chemical reaction systems with independent, unit rate Poisson processes we develop a new adaptive tau-leaping procedure.
clarity
0.9032738
0708.0377
1
The procedure developed is novel in that we enforce any leap condition via a post-leap check as opposed to performing a pre-leap tau selection.
<clarity> The procedure developed is novel in that we enforce any leap condition via a post-leap check as opposed to performing a pre-leap tau selection.
The procedure developed is novel in that accuracy is guaranteed by performing post-leap check as opposed to performing a pre-leap tau selection.
clarity
0.99627244
0708.0377
1
The procedure developed is novel in that we enforce any leap condition via a post-leap check as opposed to performing a pre-leap tau selection. Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
<meaning-changed> The procedure developed is novel in that we enforce any leap condition via a post-leap check as opposed to performing a pre-leap tau selection. Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
The procedure developed is novel in that we enforce any leap condition via a post-leap checks. Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
meaning-changed
0.99937576
0708.0377
1
Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
<fluency> Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
Further, we perform the post-leap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap.
fluency
0.9979691
0708.0377
1
Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap. By performing a post-leap check to ensure accuracy, the method developed in this paper is guaranteed to never produce negative population values .
<meaning-changed> Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejection of a leap. By performing a post-leap check to ensure accuracy, the method developed in this paper is guaranteed to never produce negative population values .
Further, we perform the post-leap check in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps. Further, since any leap condition is ensured with a probability of one, the simulation method naturally avoids negative population values .
meaning-changed
0.99485505
0708.0377
1
By performing a post-leap check to ensure accuracy, the method developed in this paper is guaranteed to never produce negative population values . The efficiency of the method developed here is demonstrated on a model of a decaying dimer .
<coherence> By performing a post-leap check to ensure accuracy, the method developed in this paper is guaranteed to never produce negative population values . The efficiency of the method developed here is demonstrated on a model of a decaying dimer .
By performing a post-leap check to ensure accuracy, the method developed in this paper is guaranteed to never produce negative population values .
coherence
0.8561879
0708.0377
1
By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit rate Poisson processes we develop a new adaptive tau-leaping procedure.
<fluency> By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit rate Poisson processes we develop a new adaptive tau-leaping procedure.
By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit rate Poisson processes , we develop a new adaptive tau-leaping procedure.
fluency
0.9992712
0708.0377
2
The procedure developed is novel in that accuracy is guaranteed by performing post-leap checks.
<fluency> The procedure developed is novel in that accuracy is guaranteed by performing post-leap checks.
The procedure developed is novel in that accuracy is guaranteed by performing postleap checks.
fluency
0.9989724
0708.0377
2
Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps.
<fluency> Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps.
Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the postleap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps.
fluency
0.99940014
0708.0377
2
Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps.
<clarity> Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap checks in such a way that the statistics of the sample paths generated will not be skewed by the rejections of leaps.
Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the post-leap checks in such a way that the statistics of the sample paths generated will not be biased by the rejections of leaps.
clarity
0.47292382
0708.0377
2
Further, since any leap condition is ensured with a probability of one, the simulation method naturally avoids negative population values .
<fluency> Further, since any leap condition is ensured with a probability of one, the simulation method naturally avoids negative population values .
Further, since any leap condition is ensured with a probability of one, the simulation method naturally avoids negative population values
fluency
0.999153
0708.0377
2
Continuous time random walks are a well suited tool for the description of market behaviour at the smallest scale: the tick-to-tick evolution.
<fluency> Continuous time random walks are a well suited tool for the description of market behaviour at the smallest scale: the tick-to-tick evolution.
Continuous-time random walks are a well suited tool for the description of market behaviour at the smallest scale: the tick-to-tick evolution.
fluency
0.9992735
0708.0544
1
Our approach leads to option prices that fulfil the financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for considering more exotic market conditions.
<fluency> Our approach leads to option prices that fulfil the financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for considering more exotic market conditions.
Our approach leads to option prices that fulfil financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for considering more exotic market conditions.
fluency
0.99707353
0708.0544
1
Our approach leads to option prices that fulfil the financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for considering more exotic market conditions.
<clarity> Our approach leads to option prices that fulfil the financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for considering more exotic market conditions.
Our approach leads to option prices that fulfil the financial formulas when canonical assumptions on the dynamics governing the process are made, but it is still suitable for more exotic market conditions.
clarity
0.9979929
0708.0544
1
Neurons in brains are subject to various kinds of noises.
<clarity> Neurons in brains are subject to various kinds of noises.
Neurons are subject to various kinds of noises.
clarity
0.99795365
0708.0703
1
Neurons in brains are subject to various kinds of noises. Beside of the synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal processing properties of the neuron.
<coherence> Neurons in brains are subject to various kinds of noises. Beside of the synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal processing properties of the neuron.
Neurons in brains are subject to various kinds of noise. In addition to synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal processing properties of the neuron.
coherence
0.9542032
0708.0703
1
Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses.
<clarity> Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses.
In this paper, we studied the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses.
clarity
0.99845695
0708.0703
1
Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses.
<clarity> Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses.
Here we investigated the response of a stochastic Hodgkin-Huxley neuron to transient input pulses.
clarity
0.9982152
0708.0703
1
Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses. We found that the response (firing or no firing), as well as the response time, is dependent on the state of the neuron at the moment when input pulse is applied.
<meaning-changed> Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input pulses. We found that the response (firing or no firing), as well as the response time, is dependent on the state of the neuron at the moment when input pulse is applied.
Here we investigated the response of a stochastic Hodgkin-Huxley neuron model to transient input subthreshold pulses. It was found that the response (firing or no firing), as well as the response time, is dependent on the state of the neuron at the moment when input pulse is applied.
meaning-changed
0.9994844
0708.0703
1
We found that the response (firing or no firing), as well as the response time, is dependent on the state of the neuron at the moment when input pulse is applied. The state-dependent properties of the response is studied with phase plane analysis method. Using a simple pulse detection scenario, we demonstrated channel noise enable the neuron to detect subthreshold signals.
<meaning-changed> We found that the response (firing or no firing), as well as the response time, is dependent on the state of the neuron at the moment when input pulse is applied. The state-dependent properties of the response is studied with phase plane analysis method. Using a simple pulse detection scenario, we demonstrated channel noise enable the neuron to detect subthreshold signals.
We found that the average response time decreases but variance increases as the amplitude of channel noise increases. In the case of single pulse detection, we show that channel noise enables one neuron to detect subthreshold signals.
meaning-changed
0.77658147
0708.0703
1
Using a simple pulse detection scenario, we demonstrated channel noise enable the neuron to detect subthreshold signals. A simple neuronal network which can marvelously enhance the pulses detecting ability was also proposed.
<meaning-changed> Using a simple pulse detection scenario, we demonstrated channel noise enable the neuron to detect subthreshold signals. A simple neuronal network which can marvelously enhance the pulses detecting ability was also proposed.
Using a simple pulse detection scenario, we demonstrated channel noise enable the neuron to detect the subthreshold signals and an optimal membrane area (or channel noise intensity) exists for a single neuron to achieve optimal performance. However, the detection ability of a single neuron is limited by large errors. Here, we test a simple neuronal network which can marvelously enhance the pulses detecting ability was also proposed.
meaning-changed
0.9994375
0708.0703
1
A simple neuronal network which can marvelously enhance the pulses detecting ability was also proposed. The phenomena of intrinsic stochastic resonance is found both in single neuron level and network level.
<meaning-changed> A simple neuronal network which can marvelously enhance the pulses detecting ability was also proposed. The phenomena of intrinsic stochastic resonance is found both in single neuron level and network level.
A simple neuronal network that can enhance the pulse detecting abilities of neurons and find dozens of neurons can perfectly detect subthreshold pulses. The phenomenon of intrinsic stochastic resonance is found both in single neuron level and network level.
meaning-changed
0.9994578
0708.0703
1
The phenomena of intrinsic stochastic resonance is found both in single neuron level and network level. In single neuron level, the detection ability of the neuron was optimized versus the ion channel patch size (i.e., channel noise intensity). Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
<clarity> The phenomena of intrinsic stochastic resonance is found both in single neuron level and network level. In single neuron level, the detection ability of the neuron was optimized versus the ion channel patch size (i.e., channel noise intensity). Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
The phenomena of intrinsic stochastic resonance is also found both at the level of single neurons and at the level of networks. At the network level, the detection ability of the network was optimized versus the number of neurons involved in .
clarity
0.9990439
0708.0703
1
Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
<clarity> Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
Whereas in network level, the detection ability of networks can be optimized for the number of neurons involved in .
clarity
0.9982845
0708.0703
1
Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
<clarity> Whereas in network level, the detection ability of the network was optimized versus the number of neurons involved in .
Whereas in network level, the detection ability of the network was optimized versus the number of neurons comprising the network .
clarity
0.9978516
0708.0703
1
Random Threshold Networks (RTN) are widely used in modeling biological systems, such as neural and gene regulatory networks. RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
<coherence> Random Threshold Networks (RTN) are widely used in modeling biological systems, such as neural and gene regulatory networks. RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
coherence
0.9958334
0708.2244
1
RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
<coherence> RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system.
coherence
0.8304238
0708.2244
1
RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system. For unbiased update rules, there is no phase transition for RTNs of nonzero connectivity. We also show that RTNs with constant number of inputs, K, which have been claimed to be ordered for K=1 Phys.
<meaning-changed> RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) . Here we show that this phase transition is the result of a seemingly minor detail in the update rule, which acts like a biasing external field for a paramagnetic system. For unbiased update rules, there is no phase transition for RTNs of nonzero connectivity. We also show that RTNs with constant number of inputs, K, which have been claimed to be ordered for K=1 Phys.
RTNs with binary elements and random topology have been found to display a phase transition to an ordered phase at low connectivities Physica A 310, 245 (2002) Phys.
meaning-changed
0.998958
0708.2244
1
We also show that RTNs with constant number of inputs, K, which have been claimed to be ordered for K=1 Phys. Lett. A 129, 157 (1988) , are actually critical .
<coherence> We also show that RTNs with constant number of inputs, K, which have been claimed to be ordered for K=1 Phys. Lett. A 129, 157 (1988) , are actually critical .
We also show that RTNs with constant number of inputs, K, which have been claimed to be ordered for K=1 , are actually critical .
coherence
0.9878199
0708.2244
1
Lett. A 129, 157 (1988) , are actually critical .
<meaning-changed> Lett. A 129, 157 (1988) , are actually critical .
Lett. A 129, 157 (1988) This paper has been withdrawn .
meaning-changed
0.9988116
0708.2244
1
Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
<meaning-changed> Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
Here we show that cooperation remains rather stable by applying the reinforcement learning strategy adoption rule, Q-learning on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
meaning-changed
0.9995701
0708.2707
1
Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
<clarity> Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular network models in repeated, multi-agent games.
clarity
0.99853325
0708.2707
1
Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
<meaning-changed> Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent games.
Here we show that cooperation remains rather stable by applying long-term learning + innovative strategy adoption rules on a variety of random, regular, small-word, scale-free and modular networks in repeated, multi-agent Prisoners Dilemma and Hawk-Dove games.
meaning-changed
0.9994728
0708.2707
1
Furthermore, we found that while long-term learning promotes cooperation, innovation makes the level of cooperation less dependent on the actual network topology.
<meaning-changed> Furthermore, we found that while long-term learning promotes cooperation, innovation makes the level of cooperation less dependent on the actual network topology.
Furthermore, we found that using the above model systems other long-term learning promotes cooperation, innovation makes the level of cooperation less dependent on the actual network topology.
meaning-changed
0.9993944
0708.2707
1
Furthermore, we found that while long-term learning promotes cooperation, innovation makes the level of cooperation less dependent on the actual network topology.
<meaning-changed> Furthermore, we found that while long-term learning promotes cooperation, innovation makes the level of cooperation less dependent on the actual network topology.
Furthermore, we found that while long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology.
meaning-changed
0.99945515
0708.2707
1
Our results demonstrate that long-term learning and innovation , when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations.
<meaning-changed> Our results demonstrate that long-term learning and innovation , when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations.
Our results demonstrate that long-term learning and random elements in the strategy adoption rules , when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations.
meaning-changed
0.99942255
0708.2707
1
Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
<clarity> Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
clarity
0.9926287
0708.2707
1
Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
<clarity> Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
Learning and innovation help to preserve cooperation during the organization of real-world networks , and may be key mechanisms promoting the evolution of organizing, complex systems.
clarity
0.9668114
0708.2707
1
Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
<clarity> Learning and innovation help to preserve cooperation during network organization , and may be key mechanisms promoting the evolution of organizing, complex systems.
Learning and innovation help to preserve cooperation during network organization , and may play a prominent role in the evolution of organizing, complex systems.
clarity
0.9983876
0708.2707
1
The numbers of each sort of molecule present in an equilibrium chemical reaction mixture are Poisson distributed .
<clarity> The numbers of each sort of molecule present in an equilibrium chemical reaction mixture are Poisson distributed .
In an equilibrium chemical reaction mixture are Poisson distributed .
clarity
0.998539
0708.2953
1
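The records above repeat a fixed seven-line layout: the sentence before revision, the same sentence prefixed with an intent tag, the revised sentence, the intent label, a confidence score, the arXiv identifier of the source abstract, and the revision depth. The sketch below shows one possible way to group such a flat listing back into structured records; the field names, the parse_records helper, and the example filename are illustrative assumptions for this sketch, not part of any published loader or official schema for this data.

```python
from typing import Dict, List

# Placeholder field names chosen for this sketch (assumptions, not an official schema).
FIELDS = [
    "source_sentence",      # sentence before revision
    "source_with_intent",   # same sentence prefixed with an <intent> tag
    "revised_sentence",     # sentence after revision
    "intent_label",         # e.g. "clarity", "fluency", "coherence", "meaning-changed"
    "confidence",           # confidence score attached to the label
    "doc_id",               # arXiv identifier of the source abstract
    "revision_depth",       # which revision round the sentence pair comes from
]


def parse_records(lines: List[str]) -> List[Dict[str, str]]:
    """Group a flat list of lines into seven-field records, skipping blank lines."""
    values = [line.strip() for line in lines if line.strip()]
    records = []
    for start in range(0, len(values) - len(FIELDS) + 1, len(FIELDS)):
        records.append(dict(zip(FIELDS, values[start:start + len(FIELDS)])))
    return records


# Hypothetical usage with an assumed local filename:
# with open("edits.txt", encoding="utf-8") as fh:
#     records = parse_records(fh.readlines())
```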